Dataset schema: entry_id (string, 33 chars) · published (string, 14 chars) · title (string, 13–172 chars) · authors (sequence, 1–668 items) · primary_category (115 classes) · categories (sequence, 1–7 items) · text (string, 3–431k chars)
http://arxiv.org/abs/2406.18808v1
20240627005353
Binding in hippocampal-entorhinal circuits enables compositionality in cognitive maps
[ "Christopher J. Kymn", "Sonia Mazelet", "Anthony Thomas", "Denis Kleyko", "E. Paxon Frady", "Friedrich T. Sommer", "Bruno A. Olshausen" ]
q-bio.NC
[ "q-bio.NC", "cs.NE" ]
§ ABSTRACT We propose a normative model for spatial representation in the hippocampal formation that combines optimality principles, such as maximizing coding range and spatial information per neuron, with an algebraic framework for computing with distributed representations. Spatial position is encoded in a residue number system, with individual residues represented by high-dimensional, complex-valued vectors. These are composed into a single vector representing position by a similarity-preserving, conjunctive vector-binding operation. Self-consistency between the representations of the overall position and of the individual residues is enforced by a modular attractor network whose modules correspond to the grid cell modules in entorhinal cortex. The vector binding operation can also associate different contexts to spatial representations, yielding a model for entorhinal cortex and hippocampus. We show that the model achieves normative desiderata including superlinear scaling of patterns with dimension, robust error correction, and hexagonal, carry-free encoding of spatial position. These properties in turn enable robust path integration and association with sensory inputs. More generally, the model formalizes how compositional computations could occur in the hippocampal formation and leads to testable experimental predictions. § INTRODUCTION The hippocampal formation (HF), consisting of the hippocampus (HC) and the medial and lateral parts of the neighboring entorhinal cortex (MEC and LEC), is critical for forming memories and representing variables such as spatial position <cit.>. Recent work has provided evidence of compositional structure in HF representations, for example, novel recombinations of past experience occurring in replay <cit.>, or the exponential expressivity of the grid cell code <cit.>. In particular, compositional representations afford high expressivity with lower-dimensional storage requirements <cit.>, less complexity in latent state inference, and generalization to novel scenes with familiar parts. To gain insight into the possible computational principles and neural mechanisms at play in the HF, we take a normative modeling approach. That is, we seek a set of neural coding principles that effectively achieve the postulated function of the system. With this approach, we can then explain details of the neuroanatomical and neurophysiological structures in light of their particular contributions to an information processing objective. We believe that the resulting model can also lead to new predictions about the neural mechanisms that enable this function. The postulated function of the HF, as a cognitive map and episodic memory, has a core computational requirement: to represent and navigate space. Here, space is either the actual physical environment or a more abstract conceptual space. We formulate multiple desiderata for an effective representation of space. We then show that a residue number system, incorporated into a compositional encoding scheme, fulfills these desiderata. The scheme is realized by a modular attractor network that factorizes the individual components of encoded locations. This provides an algorithmic-level hypothesis of hippocampal-entorhinal interactions. 
A core mechanism of this algorithm is binding, which draws inspiration from work in neuroscience, cognitive science, and artificial intelligence. § A NORMATIVE MODEL FOR THE HIPPOCAMPAL FORMATION §.§ Principles for representing space Our first set of normative requirements is that space is represented by a compositional code that has high spatial resolution, is noise-robust, and whose components can be updated in parallel by algebraic operations. Prior work <cit.> has proposed the residue number system (RNS) <cit.> as a candidate for fulfilling these requirements. An RNS expresses an integer x in terms of its remainders relative to a set of co-prime moduli {m_i}. For example, relative to moduli {3, 5, 7}, x=40 is encoded as {1, 0, 5}. The Chinese Remainder Theorem guarantees that all integers in the range [0,M-1], where M = ∏_i m_i, are assigned a unique representation. An RNS provides high spatial resolution, carry-free arithmetic operations, and robust error correction <cit.>. Experimental observations in entorhinal cortex show a discrete multi-scale organization of spatial grid cells <cit.> that is compatible with an organization into discrete RNS modules. The second normative principle we adopt is that an individual residue value should be encoded by a neural population in a similarity-preserving fashion. In particular, we require that distinct integer values are represented with nearly orthogonal vectors. To achieve this principle, we use a method similar to random Fourier features <cit.>. Each modulus, with value m_i, is assigned a seed phasor vector, 𝐠_i ∈ℂ^D, whose elements (𝐠_i)_j are drawn uniformly from the m_i-th roots of unity (i.e., (𝐠_i)_j = e^iω_ij, with ω_ij = (2π/m_i) k_j and k_j chosen randomly from {0,...,m_i-1}). The representation of a particular residue value a_i ∈{0,…,m_i-1} is then given by rotating the phases of the seed vector according to <cit.>: 𝐠_i(a_i) = (𝐠_i)^a_i, where we abuse notation slightly to also think of 𝐠_i as a function that takes a_i as input and produces an embedding as described above. The complex-valued vectors can be mapped to interpretable population vectors via a randomized Fourier transform (Figures <ref>D and <ref>). Our third normative principle concerns the manner in which a unique representation of a particular point in space is formed from the individual residue representations. This requires that we somehow combine the residue vectors for each modulus. Combining via concatenation, though straightforward, is not effective because codes that coincide in subsets of their residue representation would be similar, even when the encoded values are very different. Thus, the method of combining residue codes must be conjunctive. Conjunctive composition is often called binding and is of fundamental importance in neuroscience <cit.>, cognitive science <cit.>, and machine learning <cit.>. An early proposal for binding is the tensor product of representation vectors <cit.>, with the tensor order equal to the number of bound objects. Here, we implement binding with component-wise vector multiplication, a dimensionality-preserving operation that represents a lossy compression of the full tensor product <cit.>. The resulting compositional vector representation of an integer x ∈ℤ with residues {a_1,a_2,...,a_K} relative to K moduli is: 𝐩(x) = ⨀_i=1^K 𝐠_i(a_i). 
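As a concrete illustration of this encoding, the following NumPy sketch builds the seed phasor vectors, encodes integer positions, and checks the near-orthogonality and carry-free addition properties. It is a minimal sketch under assumed parameters (moduli {3, 5, 7}, D = 1024); the function names are illustrative and not taken from the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 1024              # vector dimension (assumed for illustration)
moduli = [3, 5, 7]    # co-prime moduli, coding range M = 105

# Seed phasor vectors: each component is a random m-th root of unity.
seeds = [np.exp(2j * np.pi * rng.integers(0, m, D) / m) for m in moduli]

def encode_residue(i, a):
    """g_i(a): rotate the phases of seed i by exponentiating it with a."""
    return seeds[i] ** a

def encode_position(x):
    """p(x): bind (component-wise multiply) the residue vectors of x."""
    p = np.ones(D, dtype=complex)
    for i, m in enumerate(moduli):
        p = p * encode_residue(i, x % m)
    return p

def sim(u, v):
    """Normalized inner product (real part)."""
    return (u @ v.conj()).real / D

p40, p41 = encode_position(40), encode_position(41)
print(sim(p40, encode_position(40)))   # 1.0: identical encodings
print(sim(p40, p41))                   # ~0.0: distinct integers are nearly orthogonal
# Carry-free addition: binding the codes adds the encoded values (mod M)
print(sim(encode_position(40) * encode_position(2), encode_position(42)))  # 1.0
```

Because each seed component is an m_i-th root of unity, exponent arithmetic wraps around automatically within each module, which is exactly the carry-free property exploited below for path integration.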
We prove in Appendix <ref> that this coding scheme represents distinct integer states using nearly orthogonal vectors, and that it generalizes in a natural way to support representation of arbitrary real numbers in a similarity-preserving fashion. Eq. <ref> represents individual points along a line. In general, however, a spatial representation involves points in 2D or 3D spaces. Conveniently, vector binding can also be used to compose representations of multidimensional lattices from vectors representing individual dimensions. As we will explain, there is still a choice in this composition that determines the resulting lattice structure. Following earlier proposals <cit.>, our fourth normative principle is to choose the lattice structure so that spatial information is maximized, as described in Section <ref>. The final normative principle we require is that for computations such as path integration, there should be a simple vector manipulation that results in addition of the encoded variables. Again, vector binding provides this functionality with our coding strategy, because of the following property: 𝐠(x) ⊙𝐠(y) = 𝐠(x+y). §.§ Modular attractor network for spatial representation A standard model of grid cell circuits is the line attractor, in which states that represent a consistent location lie on a low-energy manifold <cit.>. When initialized from a noisy location pattern, the circuit dynamics will generate a denoised location representation. Rather than forming a line attractor model for the entire representational space (Eq. <ref>), we propose a modular network architecture, so that the compositional structure of a residue number representation can scale towards a large range with fewer memory resources (Section <ref>), in a manner robust to noise (Section <ref>). A starting point for our attractor network model is the Hopfield network, which acts as an associative memory by storing memory patterns as fixed-point attractors. The Rademacher-Hopfield network <cit.> is a dynamical system whose state is a vector 𝐱∈{-1,+1 }^D that obeys the following dynamics: 𝐱(t+1) = sign ( 𝐗𝐗^T𝐱(t) ) with 𝐗 as the matrix of memorized patterns (column vectors of 𝐗). The fixed-point attractor dynamics can be generalized to complex memory patterns 𝐳∈ℂ^D: 𝐳(t+1) = σ ( 𝐙𝐙^†𝐳(t) ), where σ is a non-linearity normalizing the amplitude of each complex-valued component to one <cit.>, and 𝐙 is the corresponding matrix of memorized patterns. The model can also be discretized, so that each component is quantized to an r-state phasor <cit.>. The Rademacher-Hopfield model is the special case where r=2 and the phasors happen to be real-valued. An r-state phasor network of the form of Eq. <ref> is well-suited to serve as an attractor network for each of the residue vectors in an RNS representation of position, with r=m_i for modulus i, and the matrix 𝐙 (which we shall denote 𝐆_i) storing the 𝐠_i(a_i) for a_i∈{0,..,m_i-1}. However, we desire a method for representing the whole coding range M := ∏_i=1^K m_i without storing all M patterns in one large associative memory. For this purpose we show that a resonator network, a recently proposed recurrent network for unbinding conjunctive codes <cit.>, lets us represent this range by storing only n := ∑_i=1^K m_i ≪ M patterns. Given a vector encoding of position, 𝐩(x), as formulated in Eq. (<ref>), a resonator network will factorize it into its constituent RNS components by iteratively updating each residue vector estimate, 𝐠̂_i, similar to the attractor dynamics of Eq. 
(<ref>) but in a way that is also consistent with 𝐩(x) given all other residue estimates 𝐠̂_j≠ i: 𝐠̂_i(t+1) = σ( 𝐆_i 𝐆_i^† ( 𝐩 ⊙ ⨀_j ≠ i 𝐠̂_j^*(t) ) ) ∀ i. Let us now assume that the input 𝐩(x_t) encodes a spatial position x_t using Eq. (<ref>). Given a velocity input 𝐪_i(v_t), estimated from self-motion input, path integration is performed by first running the attractor dynamics and then updating the attractor states by the velocity: 𝐠̂_i(t+1) = 𝐪_i(v_t) ⊙ σ( 𝐆_i 𝐆_i^† ( 𝐩(x_t) ⊙ ⨀_j ≠ i 𝐠̂_j^*(t) ) ). After the velocity updates, one can update the input state 𝐩(x_t) with the conjunctive representation of the current factor estimates: 𝐩(x_t+1) = ⨀_i=1^K 𝐠̂_i(t+1). Further explanation and detail is provided in Appendix <ref>. §.§ Mapping the model to the HF Although it is not obvious how the components of our normative model should map onto the anatomical architecture of HF, we make one proposal as shown in Figure <ref>. The memory networks for residue representations 𝐠̂_i correspond to grid modules in MEC. Similar to the grid modules, a module for context can be added to the architecture, such as a tag for the identity of a specific environment, with the recurrent synapses 𝐂 storing tags of different environments. The context neurons could correspond to the non-grid entorhinal cells, which can contain local, non-spatial information about the environment <cit.>. The vector 𝐩(x_t) can be linked to place cells in hippocampus. Internal HC circuitry can either buffer the input as in Eq. (<ref>) or allow it to be updated dynamically according to the MEC input (Section <ref>). The mutual interactions between HC and MEC grid modules require projections between these structures. The binding operations that these interactions involve according to Eq. (<ref>) are hypothesized to be implemented by nonlinear interactions between dendritic inputs in HC and MEC neurons. The model also assumes the ability for sensory cues to provide the initialization signal of the cognitive map, represented by 𝐬 in Figure <ref>. For completeness, we make the basic assumption that the brain forms heteroassociative memories that link sensory cues to the place cell representations 𝐩 (Section <ref>). This process would require the system to generate a new context vector 𝐜 and initialize the cognitive map to a default location in order to learn about new environments. We show that even with a simple heteroassociative mechanism, our modular attractor network can robustly retrieve sensory memories and protect its compositional structure. § CODING PROPERTIES OF THE MODEL §.§ RNS representations have exponential coding range The compositional RNS vector representation of Eq. (<ref>) can encode a coding range of M values using a total of n component patterns representing the residues of the individual modules. The scaling of the coding range is exponential in the number of moduli, K, since if each module has 𝒪(m) patterns, and the co-prime condition is satisfied, the scaling of the coding range is 𝒪(m^K). This recovers the expressivity argued by <cit.>. More generally, it is also exponential in the number of component patterns, n. The optimal coding range is given by the best partition of n into a set of co-prime positive integers { m_i }. This optimization is identical to finding the maximum order of an element in the group of permutations S_n, because the order of a permutation is the least common multiple of its cycle lengths, which is maximized by co-prime cycle lengths. The scaling of this value in n is characterized by Landau's function f(n), which is known to grow as exp((1+o(1))√(n ln n)) as n →∞ <cit.>. 
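To make the growth of the optimal coding range concrete, the following brute-force sketch computes Landau's function by enumerating partitions of n and tracking the largest least common multiple. It is illustrative code under small-n assumptions, not part of the paper's experiments.

```python
from math import lcm

def landau(n):
    """Landau's function: the largest lcm of positive integers summing to at most n.
    For an RNS this is the best coding range reachable with a budget of n
    component patterns, since co-prime module sizes multiply."""
    best = 1
    def rec(remaining, min_part, current_lcm):
        nonlocal best
        best = max(best, current_lcm)
        for part in range(min_part, remaining + 1):
            rec(remaining - part, part, lcm(current_lcm, part))
    rec(n, 2, 1)
    return best

# ln(landau(n)) approaches sqrt(n ln n) only slowly; small-n values for orientation:
for n in (10, 15, 20, 25, 30):
    print(n, landau(n))   # 30, 105, 420, 1260, 2520
```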
Figure <ref>A illustrates how Landau's function provides an upper bound on what is achievable for any fixed number of moduli (K). Though other kinds of representations can achieve an exponential coding range, the advantage of the compositional encoding of Eq. (<ref>) comes from the fact that the binding operation implements carry-free vector addition (our fourth principle). This enables updates of the encoded value without requiring further transformations such as decoding, facilitating tasks such as path integration (Section <ref>, Appendix <ref>). Binary representations, by contrast, have exponential coding range but require carry operations to implement addition. §.§ The modular attractor network has superlinear coding range The exponential scaling of the coding range of the RNS representation is a prerequisite for obtaining a large coding range with the attractor network that has to perform computations on this representation, such as input denoising, working memory, and path integration. To estimate the scaling of the coding range in the proposed attractor network (Eq. <ref>), we study the critical dimension for which the grid modules converge with high probability. Specifically, we empirically estimate the minimum dimension required to retrieve an arbitrary RNS representation with high probability, given a maximum number of iterations (Figure <ref>B). Remarkably, we find that the coding range that can be reliably retrieved is superlinear in the pattern dimension D; empirically 𝒪(D^α) for some α≥ 1. For 2, 3, and 4 moduli, α≈ 2.05, 1.45 and 1.23, respectively (Figure <ref>C). These empirical scaling laws are consistent with a simple information-theoretic calculation (Appendix <ref>). The minimal number of bits to be stored for the entire RNS vector encoding scheme is of order 𝒪(M log M), and the number of synapses in the attractor network is 𝒪(D M^(1/K)). If one makes the cautious assumption of a capacity per synapse of 𝒪(1), the leading order for the coding range M is 𝒪(D^α), with α = K/(K-1). Note that while the coding range increases with the number of moduli (K) for the RNS representation, the superlinear scaling coefficient α decreases with K for the modular attractor network, reaching maximum superlinearity at the smallest value K=2. This reversal is caused by the fact that increasing K decreases the number of synapses, i.e., the memory resource in the attractor network. §.§ Robust error correction In addition, we evaluate the robustness of our attractor model to noise. Because the RNS representations are composed of phasors, which are circular variables, we sample noise from a von Mises distribution with two parameters: mean (μ = 0) and concentration parameter κ (Figure <ref>A). Higher κ values imply less noise; the distribution approximates a Gaussian with variance 1/κ for large κ. We consider three cases: noisy input patterns, noise added at each time step, and noisy weights, i.e., corruptions of the patterns stored in 𝐆_i (Appendix <ref>). The empirical accuracy of recall varies depending on the type of corruption applied (Figure <ref>A). We find that for a given dimension D (in this case, 1024), increasing noise decreases the maximum coding range that can be decoded with high accuracy (Figure <ref>B-D). For a fixed noise level, the high-accuracy coding range is largest for input noise, followed by update noise and codebook noise. It is perhaps not surprising that codebook noise yields the smallest coding range, given that noise added to every stored pattern compounds across the dynamics. 
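The following sketch illustrates the input-noise case: a position code is corrupted by von Mises phase noise and then recovered by an exhaustive nearest-codeword lookup, which here stands in for the converged attractor dynamics. The parameters (D = 1024, moduli {3, 5, 7}) and function names are assumptions for illustration, not the paper's actual experimental code.

```python
import numpy as np

rng = np.random.default_rng(1)
D, moduli = 1024, [3, 5, 7]          # assumed parameters
M = 3 * 5 * 7
seeds = [np.exp(2j * np.pi * rng.integers(0, m, D) / m) for m in moduli]

def encode(x):
    p = np.ones(D, dtype=complex)
    for s, m in zip(seeds, moduli):
        p = p * s ** (x % m)
    return p

codebook = np.stack([encode(x) for x in range(M)])   # all M position codes

def recall_accuracy(kappa, trials=200):
    correct = 0
    for _ in range(trials):
        x = int(rng.integers(M))
        # input noise: rotate every phase by a von Mises-distributed angle
        noisy = encode(x) * np.exp(1j * rng.vonmises(0.0, kappa, D))
        # stand-in for the converged attractor: pick the most similar stored code
        x_hat = int(np.argmax(np.abs(codebook @ noisy.conj())))
        correct += (x_hat == x)
    return correct / trials

for kappa in (0.1, 0.5, 1, 2, 8):
    print(kappa, recall_accuracy(kappa))   # accuracy rises toward 1 as kappa grows
```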
Fortunately, the demonstrated robustness to input noise enables sensory patterns to be denoised via heteroassociation (Section <ref>). §.§ Interpolation between patterns enables continuous path integration In general, there is a sharp difference between point and line attractors. In our attractor model, the RNS representations of integer values are stored as discrete fixed points. Nevertheless, the attractor network also converges to states that represent non-integer values that are not explicitly stored. In other words, the network smoothly interpolates to points on a manifold of states that represent integer and non-integer values encoded by Eq. (<ref>); Figure <ref>A provides a visualization, showing that the kernel induced by inner-product operations retains graded similarity for sub-integer shifts. This kernel enables the modular attractor network to settle to fixed points that correspond to interpolations between integers, and it allows sub-integer positions to be decoded. The resolution of decoding is fundamentally limited by the signal-to-noise ratio. Even so, we find that, up to a fixed noise level, the accuracy regimes of integer decoding and sub-integer decoding coincide. This property enables sub-integer shifts to be encoded within the states of the network, which, as we will show, results in stable, error-correcting path integration (Section <ref>). We quantify the gain in precision in terms of the bits of information that can on average be reconstructed from a vector (Figure <ref>D, Appendix <ref>). Notably, even a moderate noise level of κ=8 is sufficient to achieve nearly the same information content as in the noiseless case. §.§ Triangular frames in 2D maximize spatial information In two-dimensional open-field environments, grid cells have firing fields arranged in a hexagonal lattice <cit.>. Work in theoretical neuroscience shows the optimality of this lattice for 2D environments in terms of spatial information <cit.>. However, the presence of hexagonal firing fields raises a puzzle for residue number systems. Although a crucial property of an RNS is carry-free arithmetic, most implementations of RNS will not perform carry-free updates within a module in non-Cartesian coordinate systems. This generally occurs because the updates of different coordinates must interact due to non-orthogonality. We resolve this issue by showing how to implement a version of vector binding of multiple coordinates in a triangular `Mercedes-Benz' frame that enables carry-free hexagonal coding. Furthermore, we provide a combinatorial argument for the optimality of triangular frames for ℝ^2. (A frame is a spanning set for a vector space in which the basis vectors need not be linearly independent.) Our argument relies on the combinatorics of residue numbers, and so for the first time gives an explanation of why the coexistence of RNS and hexagonal codes is optimal. Forming a hexagonal tiling of 2D position requires two steps: first, projection into a 3-coordinate frame, and second, choosing phases such that simultaneous, equal movements along all three frame directions cancel out (Appendix <ref>). The resulting Voronoi tessellation for different states is pictured in Figure <ref>A. This encoding enables higher spatial resolution in terms of the number of discrete states: 3m^2-3m+1 for triangular frames, versus m^2 for Cartesian frames. This increased expressivity results in a higher-entropy code for space (Figure <ref>B). 
It also results in both a periodic hexagonal kernel and individual grid response fields arranged in a hexagonal lattice (Figure <ref>C). [Figure: Hexagonal coding improves spatial resolution. A) Voronoi tessellation for m=5. Each distinct color corresponds to a unique codeword in ℂ^D. Black arrows show the coordinate axes of the triangular `Mercedes-Benz' frame in 2D. B) Hexagonal lattices have higher entropy than square lattices, allowing each state to carry higher resolution in its spatial output.] Prior models achieved hexagonal lattices either by circularly symmetric receptive fields (e.g., <cit.>) arranged on a periodic rectangular sheet or by distorting a square lattice into an oblique one (e.g., <cit.>). Importantly, oblique lattices have the same combinatorial complexity as the square grid and, unlike the construction described above, they do not achieve the same level of spatial resolution (Figure <ref>B). § TESTING FUNCTIONALITIES OF THE MODEL §.§ Robust path integration Given the ability of the attractor model to update its representation of position from velocity inputs, along with its ability to represent continuous space, we evaluate its ability to perform path integration in the presence of noise. We simulate trajectories based on a statistical model for generating plausible rodent movements in an arena <cit.>, and we update grid cell and place cell state vectors according to Equations <ref> and <ref>, respectively. To evaluate the robustness of the model to error (Appendix <ref>), we consider both extrinsic noise (e.g., misrepresentation of velocity information) and intrinsic noise (e.g., noise in the network's internal updates). The robustness of our model to intrinsic noise is tested by comparing our results to the estimated trajectories obtained without the correction by the MEC modules (Figure <ref>A and B). We find that our model strongly limits noise accumulation along the trajectory and allows highly accurate integration for a longer period of time (Figure <ref>A). Consistent with our previous experiments on noise robustness (Figure <ref>), we find strong robustness to intrinsic noise, whereas extrinsic noise results in progressive drift of the estimated position. We visualize the response fields in different modules and find hexagonal lattices with a module-dependent scaling (Figure <ref>C, Appendix <ref>). In addition, we show that tethering to external cues (e.g., visual inputs) can significantly increase the accuracy of the attractor network. To study this, we associate visual cues with corresponding patches (see Section <ref>) and observe that integrating information from visual sensory inputs succeeds in correcting the drift due to extrinsic noise (Figure <ref>D). §.§ Denoising sensory states via a heteroassociative memory Finally, we describe a simple extension to our model, in which sensory patterns are fed from the lateral entorhinal cortex (LEC) to update the hippocampal state. This is consistent with theories of memory suggesting that LEC provides the content of experiences to hippocampus <cit.>, as well as with neuroanatomical evidence <cit.>. Although the structure of the representations of those sensory patterns is unknown, it is theorized that HF is critical to sensory pattern completion <cit.>. Consistent with this function, recent work <cit.> has proposed that a heteroassociative scaffold connects sensory patterns to hippocampal activity, allowing robust denoising of sensory states. 
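A minimal sketch of such a heteroassociative scaffold, following the pseudo-inverse rule described in Appendix <ref>, is given below. Random phasor codes stand in for the RNS place codes, and a nearest-codeword lookup stands in for the attractor dynamics; the dimensions and names are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(2)
D, M = 1024, 105   # vector dimension and number of stored patterns (assumed)

# Scaffold: M fixed place codes (random phasors stand in for the RNS codes p(x)).
H = np.exp(2j * np.pi * rng.random((M, D)))          # M x D place codes (rows)
S = rng.choice([-1.0, 1.0], size=(D, M))             # D x M sensory patterns (columns)

# Heteroassociative maps via the pseudo-inverse rule (dimension-explicit form).
W_in = H.T @ np.linalg.pinv(S)      # sensory pattern -> place code
W_out = S @ np.linalg.pinv(H.T)     # place code -> sensory pattern

def denoise(s_noisy):
    p = W_in @ s_noisy                               # project the noisy cue onto the scaffold
    p_hat = H[np.argmax(np.abs(H.conj() @ p))]       # snap to the nearest stored place code
    return np.sign((W_out @ p_hat).real)             # map back and binarize

k = 7
s_clean = S[:, k]
flip = rng.random(D) < 0.3                           # flip 30% of the bits
s_noisy = np.where(flip, -s_clean, s_clean)
print(np.mean(denoise(s_noisy) == s_clean))          # 1.0 if the correct code is retrieved
```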
Though the main focus of our normative model is not sensory denoising, we show that a simple extension to our model (Appendix <ref>) robustly retrieves noisy patterns even under high levels of corruption (Figures <ref>A and B). In Appendix <ref>, we also discuss how this capacity for generalization can serve as a model for sequence retrieval, showing some preliminary experiments. In addition to robust denoising of single patterns, our model is also well-equipped to deal with compositions of sensory patterns. Two situations are worth emphasizing: first, we can often unmix multiple sensory states corresponding to a sum of patterns, because the compositional structure of binding between grid modules “protects” the items in the summation (Figure <ref>C). This differentiates our model from other heteroassociative memories, in which sums of patterns would have multiple equally valid yet incompatible decodings. Second, the context vector modules allow preservation of different sensory information for different environments (Figure <ref>). § DISCUSSION We propose a normative model of a cognitive map for the hippocampal formation in the mammalian brain. The core principle of the model is a compositional representation of space that achieves a superlinear coding range and is expressed by a compact, multi-module attractor network. The compositional mechanism of vector binding provides generalization to multiple spatial dimensions, contextualization, and path integration. This binding mechanism builds on prior work in the field of hyperdimensional computing and vector symbolic architectures <cit.>, and goes beyond it to develop a specific algorithmic hypothesis about structured operations in HF. Our analyses and experiments confirm that the model can achieve important functions of the hippocampal formation and explain experimental observations, such as hexagonal grid cells, place cells, and remapping phenomena. The proposed model contributes to, and greatly benefits from, existing work in theoretical neuroscience on residue number systems <cit.>, continuous attractor network models of grid cells <cit.>, and the optimality of hexagonal representations in 2D <cit.>. It remains intriguing that biology organized grid cells into multiple discrete modules, rather than pooling all resources into a single-module attractor network. This puzzle raises an opportunity for normative models to explain the organization of grid cells into multiple modules. More recent work has focused on the problem of coordinating representations across multiple modules <cit.>, and large-scale recordings of HF <cit.> may provide new opportunities to evaluate predictions of these different ideas. Our approach starts from principles of space encoding, in particular, the requirement of compositionality. This strategy is complementary to, but different from, investigations of the emergence of place and grid cells in artificial neural networks (e.g., <cit.>). These approaches show optimality of biological response features under the model assumptions, such as ANN properties, network architecture, training objective and protocol. Here, we emphasize the role of multiplicative binding, a primitive that is typically difficult to have emerge in an ANN setting. Early suggestions for realizing conjunctive binding already ventured outside the framework of ANNs <cit.>. A simple extension of ANNs is the sigma-pi neuron <cit.>, which can implement vector binding <cit.>. 
Recent work amplifies the view that full conjunctive binding would be a useful inductive bias to augment deep learning architectures <cit.>, and various augmentations of ANNs with dedicated binding mechanisms have been proposed <cit.>. Our model has obvious limitations. Our attractor model for the cognitive map is still a high-level abstraction of spiking neural circuits in the hippocampal formation. In particular, the phasor states in the model are one linear transform removed from vectors that describe neural population activity. Thus, the mapping between the model and neurobiological mechanisms is not straightforward, a disadvantage that can be addressed by switching to other encoding schemes, such as sparse real or complex vectors, e.g., <cit.>, for which conjunctive binding operations have been proposed <cit.>. Although the model is more comprehensive than typical normative models, which usually focus on a single computation, it is far from covering the many other functional cell types observed in the hippocampal formation or the contextual modulations observed during remapping. In addition, the current model includes learning only in the heteroassociative projection to LEC. Most observations regarding plasticity in HF are not captured, e.g., signals from reward or eligibility traces. Finally, our assumptions about inputs to HF from the sensory pathway are rather simplistic and primarily intended as a proof of concept. The purpose of the model is to express the fundamental principles of a compositional cognitive map, permitting testable predictions: First, at the biophysical level, the model predicts multiplicative interactions between dendritic inputs providing the conjunctive binding operation. Though some evidence of MEC-LEC binding exists <cit.>, our attractor model also predicts binding between MEC modules. Second, the model predicts relatively fixed attractor weights between place and grid cells, and more plasticity from the hippocampus to sensory observations. Third, we predict that causal perturbations of one grid module can affect the states of other grid modules without involvement of the hippocampus, in a direction that is self-consistent with the update of the attractor state. We believe that the proposed modeling approach and the specific attractor model have broader applications in neuroscience. The proposed attractor network can also implement generative models in sensory systems, realizing the analysis-by-synthesis postulated in theories of perception. Further, there is an intriguing connection between the proposed phasor models and spiking neural networks <cit.>, which could yield normative models with spiking neurons that are potentially implementable on neuromorphic hardware at large scale and could lead to further quantitative predictions. § ACKNOWLEDGMENTS The work of CJK was supported by the Department of Defense (DoD) through the National Defense Science & Engineering Graduate (NDSEG) Fellowship Program. The work of SM was carried out as part of the ARPE program of ENS Paris-Saclay. The work of DK and BAO was supported in part by Intel's THWAI program. The work of CJK and BAO was supported by the Center for the Co-Design of Cognitive Systems (CoCoSys), one of seven centers in JUMP 2.0, a Semiconductor Research Corporation (SRC) program sponsored by DARPA. DK has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 839179. FTS discloses support for the research of this work from NIH grant 1R01EB026955-0. 
Supplemental material § MATHEMATICAL DERIVATIONS §.§ Similarity-preserving properties of embeddings In the following section, we examine the similarity-preserving properties of our coding scheme. Recall from Section <ref> that our crucial desiderata are that: (1) distinct residue values are represented using vectors which are nearly orthogonal, and that (2) the inner-product between representations of sub-integer values are reflective of a reasonable notion of similarity between the encoded values. There is a robust literature on this topic both within the Vector Symbolic Architectures community <cit.>, and the broader ML community <cit.> who often study these techniques under the name “random features.” The methods pursued here are in this tradition. To briefly recapitulate the construction of Equation <ref>: fix some positive integer m, and let P(k) denote the uniform distribution over {0,...,m-1}. Define an embedding g : ℝ→^D using the following procedure: draw k_1,...,k_D independently from P(k), and set: g(a)_j = exp(i ω k_j)^a / √(D), j = 1,...,D, where ω = 2π/m, and i = √(-1). To simplify analysis, we here assume that m is odd, in which case the above is equivalent to shifting the support of P(k) to {-(m-1)/2,...,(m-1)/2}, and defining the embedding g : →^D component-wise via: g(a)_j = exp( i ω k_j a ) / √(D), j = 1,...,D. The case that m is even is slightly different, but can be handled using similar techniques and the discrepancy does not affect any of our modeling goals. Our basic claim is that in expectation with respect to randomness in the draw of k_1,...,k_D, inner-products between the embeddings of two numbers a,a' recover the periodic sinc-function <cit.> of their difference. That is: [𝐠(a)^⊤𝐠(a')^*] = sin(π (a - a'))/msin(π(a - a')/m) := psinc(a - a'), This accomplishes goal (1) because, for t an integer which is not an integer multiple of m, psinc(t) = 0. Therefore, distinct integers are represented using vectors which are, in expectation, orthogonal. It also accomplishes goal (2), because psinc(t)≈ 1 for 0 < |t| ≪ 1. The following theorem demonstrates this property more formally, and provides an approximation guarantee for a specific instantiation of k_1,...,k_D. Fix any D > 0 and δ∈ (0,1). For any pair a,a' ∈ such that a - a' is not an integer multiple of m, with probability at least 1-δ over randomness in the draw of k_1,...,k_D: |𝐠(a)^⊤𝐠(a')^* - sin(π (a-a))/msin(π(a-a')/m)| ≤√(2/Dln2/δ). Fix any pair a,a' ∈, and denote for concision t = a - a'. Taking an expectation with respect to randomness in k_1,...,k_D and using a well-known calculation from the signal processing literature <cit.>: _k_1,...,k_d[𝐠(a)^⊤𝐠(a')^*] = D_k_1[g(a)_1g(a')_1^*] = 1/m∑_k_1=-m-1/2^m-1/2exp( iω k_1 (a - a') ) = 1/m(exp(-iω t(m - 1)/2) - exp(iω t (m + 1)/2)/1 - exp(iω t)) = exp(iω t / 2)/mexp(iω t / 2)(exp(-π i t) - exp(π i t)/exp(-π i t/m) - exp(π i t/m) ) = sin(-π t)/msin(-π t/m) = sin(π ( a - a'))/msin(π(a - a')/m), The third equality follows from the second by noting that the latter is a sum of a geometric series with common ratio r = exp(ω t). The fifth line follows from the fourth by recalling the identity sin(x) = (e^ix - e^-ix)/ 2i. In the limit of t → 0, the expression evaluates to 1, consistent with the normalized inner product of a vector with itself. 
To show concentration around this value, consider: 𝐠(a)^⊤𝐠(a')^* = 1/D∑_j=1^Dexp(iω k_j (a - a')), and note that since the complex part of the sum vanishes in expectation, we may consider, without loss of generality, the average of the real-valued quantities: ( cos(ω k_j(a - a')) )_j=1^D, which are bounded in the range ± 1. Therefore, by Hoeffding's inequality: (|𝐠(a)^⊤𝐠(a')^* - [𝐠(a)^⊤𝐠(a')^*]| ≥ϵ) ≤ 2exp(-Dϵ^2/2), whereupon we conclude that, with probability at least 1-δ over randomness in the draw of k_1,...,k_D: ϵ≤√(2/Dln2/δ), as claimed. This result can be readily extended to the binding of multiple residue number values. Let 𝐠(a) = _i=1^K 𝐠_i (a), where each 𝐠_i(a) is instantiated independently. Then, by independence, we observe that: [𝐠(a)^⊤𝐠(a')^*] = [∏_i=1^K𝐠_i(a)^⊤𝐠_i(a')^*] = ∏_i=1^K[ 𝐠_i(a)^⊤𝐠_i(a')^*] The implication is that [𝐠(a)^⊤𝐠(a')^*] = 1 if and only if all residue values agree, and zero otherwise. To show concentration around this value, we can again use Hoeffding's inequality, which recovers the same bound on the sufficient dimension. §.§ Information-theoretic estimate of required pattern dimension In this section, we describe an information-theoretic estimate on the dimension D necessary to retrieve n patterns within K modules. The main result we aim to show is that D = 𝒪(n^(K-1)/K); equivalently, the scaling of n for a given D is 𝒪(D^K/(K-1)). This scaling roughly predicts our empirical results of finding the dimension required to achieve high accuracy, suggesting that the attractor network described here performs close to the theoretical bound. The minimal total amount of information a network needs to store for denoising an RNS representation with coding range M is 𝒪(M log(M)). This results from the requirement of content addressability, i.e., for serving as a unique pointer to one of n patterns, each pattern must at least carry information of the order of 𝒪(log(M)). For simplicity, we now assume that each module is of size 𝒪(M^1/K). The total capacity of the network is bounded by the number of synapses, which is 𝒪(D*K*M^1/K) = 𝒪(D*M^1/K) (assuming K is constant), times the capacity per synapse. Under the conservative assumption that the capacity per synapse is 𝒪(1), the dimension is of order 𝒪(e^K-1/Klog(M) + log(log(M))). Thus, the leading order of how D depends on n is 𝒪(M^(K-1)/K). If the capacity per synapse is assumed to be larger, O(log(M)) bits, only the non-leading term cancels and the resulting order of D is still the same. §.§ Construction of triangular frames In order to convert a 2D coordinate 𝐱 into a 3D frame 𝐲, we first multiply it by a matrix, Ψ whose rows are the elements of a 3D equiangular frame: 𝐲 = [ -1/√(3) -1/3; 1/√(3) -1/3; 0 2/3 ]𝐱 (This particular frame is commonly referred to as a `Mercedes Benz' frame due to its resemblance to the iconic symbol.) A consequence of working with an overcomplete frame is that there may exist multiple values of 𝐲 that correspond to the same 𝐱. For this frame, the null space of Ψ^+ is the subspace spanned by [1,1,1]^⊺ – grounding the intuition that equal movement in all equiangular directions “cancels out.” It therefore might seem that triangular frames require extra operations to determine if two coordinates are equal, but here we show how to avoid this consequence. The core strategy is to choose seed vectors 𝐠_i,1,𝐠_i,2,𝐠_i,3 for each modulus m_i that implement this self-cancellation. For a modulus m_i, we draw the phasors of seed vectors from the m-th roots of unity. 
However, we further require that, for each vector component, the three selected phases sum to 0 (mod 2π). We then form a hexagonal coordinate vector by binding the three seed vectors: 𝐠_i = 𝐠_i,1⊙𝐠_i,2⊙𝐠_i,3 By enforcing that the phases sum to 0 (mod 2π), we ensure that positions that have an equivalent 𝐱 coordinate are mapped to the same 𝐠_i. Observe that Hadamard product binding of phasors is equivalent to summing their phases, and that binding e^0i corresponds to adding nothing. Hence, a pair of three-dimensional coordinates whose differences are a multiple of [1,1,1] will be mapped to equivalent vector representations. Finally, we then form the residue number representation for different moduli by binding, as in Eq. <ref>. The presence of multiple modules and self-cancellation properties complement prior work on the efficiency of hexagonal kernels for spatial navigation tasks <cit.>. The equivalence of certain 3D coordinates also helps us count the number of states. Clearly, the redundancy means that we have less than m^3 states, but it also shows us that every position in the hexagonal grid can be represented by a 3D coordinate which contains at least one coordinate equivalent to 0. There is one state where all coordinates are 0, 3(m-1) states where exactly two coordinates are 0, and 3(m-1)^2 states where exactly one coordinate is zero. Thus, there are 3m^2 - 3m + 1 states for the hexagonal lattice, compared to the m^2 states for the square lattice. In the case of square lattices in 2D, all states occupy an equal proportion of space; however, this is not the case for the hexagonal lattice (see Figure <ref>A). This is because states with more zero-valued coordinates occur slightly more frequently. To estimate the effect of unequal proportions on the entropy, we directly calculate the Shannon entropy of hexagonal lattices for finite size spatial grids of increasing radius l, as an approximation to the infinite lattice. We find that even for l=1000, m > 7 the hexagonal code has 99 percent of the entropy of a system that divided all possibilities equally, and that this gap decreases as m grows larger. Thus asymptotically, as m →∞, the ratio of entropy for hexagonal vs. square grids tends towards log_2(3). § EXPERIMENTAL DETAILS All experiments were implemented in Python involving standard packages for scientific computing (including NumPy, SciPy, Matplotlib). We describe here the parameters and training setup of our experiments in further detail. §.§ Scaling in dimension For each number of moduli, K, we seek to find the smallest dimension D for which our attractor model factorizes its input, p, into the correct grid states in a fixed time (50 iterations) with high probability (at least 99 percent empirically). In instances where the network states remain similar over time (at least 0.95 cosine similarity), we consider that it converged to a fixed point. If such convergence did not occur, we evaluate the accuracy at the last time step. To evaluate scaling, we first choose our base moduli to be a set of K consecutive primes. We randomly select one of M random numbers to serve as the input and set the grid states to be random. We then evaluate a candidate dimension on the factorization task for a set number of trials (200) and check accuracy. We compare accuracy by considering whether the amplitude of the complex-valued inner products are highest for the true factor. 
If the accuracy is above our threshold, we then evaluate the performance of a slightly higher dimension (the dimensions evaluated are spaced on a logarithmic scale). Once a sufficiently high dimension achieves the accuracy threshold, we assume that the scaling is non-decreasing and use the last successful dimension as the first try. Finally, we fit a linear regression to all data points on a log-log scale to estimate the scaling between dimension and problem size. We report the slopes as estimates of the scaling coefficients. §.§ Error correction General experimental setup. We fix in advance the vector dimension, the noise level (determined by 1/κ), and the number of moduli. Given these parameters, we estimate the empirical accuracy of factorization on an arbitrary input known to correspond to one of the patterns. We use the same method for checking convergence as above, though we increase the maximum number of iterations to 100. For all experiments in this section, we average over 1,000 trials. In the case of input noise, the vector 𝐩 is multiplied component-wise by a noise vector. In the case of update noise, after every time step, each module of the attractor network is corrupted by a von Mises noise update. In the case of codebook noise, all codebooks are corrupted before the start of any iterations. Decoding values between integers. In order to test the ability of the modular attractor network to decode at sub-integer resolution, we fix a spatial resolution Δ x to decode from. In our experiments, we test Δ x = {1/3, 1/7, 1/15, 1/31}, and we also report Δ x = 1 (integer decoding) as a control. Then, using as input a random integer plus a random multiple of Δ x, we let the modules of the attractor network settle until convergence (as in the other experiments). To evaluate accuracy, we test whether the resulting output of the attractor network, ⨀_i=1^K 𝐠̂_i(t), is closer to the ground-truth RNS representation than to any other value. We test this with a “coarse-to-fine” approach: first finding the closest integer, and then checking all fractional values within one unit of that integer. We regard the output as correct if both the integer and the fraction match, and incorrect otherwise. Estimation of information content from a vector. To measure the total resolution of our coding scheme in bits, we factor in both the number of states distinguished (τ = M/Δ x) and the empirical accuracy (ρ). To quantify this, we report the information decoded in bits according to the following equation <cit.>: I(τ,ρ) = ρ log_2(τρ) + (1-ρ) log_2( (τ/(τ-1)) (1-ρ) ). A consequence of this equation is that the information decoded is 0 when the empirical accuracy is at chance (1/τ). §.§ Path integration General experimental setup. We generate paths using a statistical model simulating rodent two-dimensional trajectories in a 50 cm^2 closed square environment <cit.>, with Δ t = 100 ms. The path integration method starts from the ground-truth initial position (x_0,y_0), which is converted to hexagonal coordinates (a_0, b_0, c_0) (see Section <ref>) and encoded as an RNS representation 𝐩(0) of dimension D=3,000 following the method in Section <ref>, for moduli {3,5,7}. We then factorize 𝐩(0) into {𝐠̂_i(0)}_i=1^K to produce the estimated representation 𝐩̂(0) = ⨀_i=1^K 𝐠̂_i(0). At each time step t≥ 0, we aim to estimate the position (x_t+1,y_t+1). We give the modular attractor network as input the previous position vector estimate 𝐩̂(t). 
It is factorized into the residue components {𝐠̂_j(t)}_j=1^K, which are then shifted according to the velocity (da_t, db_t, dc_t) between (a_t, b_t, c_t) and (a_t+1, b_t+1, c_t+1). Namely, for each residue module, we build a velocity vector 𝐪_j(t) = 𝐠_j,1(da_t) ⊙𝐠_j,2(db_t) ⊙𝐠_j,3(dc_t) that is bound to the corresponding residue component 𝐠̂_j(t). The estimated position vector is then the binding of the shifted residue estimates: 𝐩̂(t+1) = ⨀_j=1^K 𝐠̂_j(t) ⊙𝐪_j(t). The estimated position (x̂_t+1, ŷ_t+1) is chosen to be the position (x,y), in a grid of 50 × 50 positions covering the entire environment, with the highest similarity between 𝐩(x,y) and 𝐩̂(t+1). We show the robustness of the path integration dynamics to two different sources of noise. In the case of extrinsic noise (Figure <ref>D), the hexagonal velocity is corrupted by additive Gaussian noise of variance 0.12. In the case of intrinsic noise (Figures <ref>A and B), the position vector 𝐩̂(t) is corrupted by binding with a phasor vector whose phases are sampled from a von Mises distribution with concentration parameter κ=2. Response field visualization. Given a modulus m_i and a vector 𝐠_i, we visualize its response field by computing the similarity of the modular attractor output 𝐠̂_i(t) and 𝐠_i along a trajectory. The periodicity in the distribution of random weights and the hexagonal coordinates produces periodic hexagonal receptive fields whose scale depends on m_i. The receptive fields of a given modulus are translations of one another, because the inner product between vector states induces a translation-invariant kernel. Connection to sensory cues. Sensory cues are random binary vectors of size N_s=D that are associated with positions along the trajectory. When the true trajectory reaches a sensory cue, the hippocampal state 𝐩̂(t) is updated using the heteroassociation method described in Appendix <ref>. §.§ Heteroassociation General experimental setup. We evaluate our model's performance for pattern denoising using a heteroassociative learning rule <cit.>. We consider random binary patterns of size N_s=D. We corrupt the patterns by randomly flipping bits with probability p_flip∈ [0,0.5] and associate them with place cell representations using heteroassociation with a pseudo-inverse learning rule. Let 𝐒 (of size N_s × M) be the matrix of M patterns to hook to the scaffold and 𝐇 (of size M × D) the matrix of M position vectors on which to hook the patterns. We associate pattern 𝐬 with a place cell representation 𝐩=𝐇𝐒^+𝐬, where 𝐒^+ is the pseudo-inverse of 𝐒. The model returns a denoised place cell representation 𝐩̂ from which we can estimate a denoised pattern by inverting the heteroassociation projection: ŝ=sgn(𝐒𝐇^+𝐩̂). Scaling with dimensionality. We evaluate the impact of the dimension D on the denoising performance in Figure <ref>, for a number of stored patterns M=60 (in this case, 3×4×5) and M=210 (in this case, 5×6×7). For each dimension D ∈{256, 512, 1024, 2048}, we show the evolution of accuracy for different levels of corruption. For a given dimension D and noise level p_flip, we denoise a pattern and consider the denoising correct if the denoised pattern is closest to the ground-truth pattern (in terms of cosine similarity). We repeat over 500 trials and report the accuracy as well as the average similarity (normalized inner product) between the denoised pattern and its noiseless version. Superposition of patterns. We show that our model can denoise a superposition of n_p patterns one at a time, for n_p ∈{1,2,3,4,5,10}. 
We fix the dimension D to 2,000 and for different values of bit flip probability p_flip∈ [0,...,0.5], we run the model on a superposition 𝐬 of random binary patterns {𝐬_1,...,𝐬_n_p} of size N_s=2,000: 𝐬=𝐬_1+....+𝐬_n_p. We run the model n_p times and between each run the denoised pattern is explained away from the superposition <cit.>. Namely, for run r ∈{1,...,n_p-1} we denote ŝ(r) the denoised pattern. The input to run r+1 is then 𝐬(r+1)=𝐬(r)-ŝ(r). We find that the more patterns are superposed, the lower the overall denoising accuracy is. This is due to the fact that when a pattern is incorrectly denoised, explaining away adds noise or spurious patterns to the representation of the superposition which makes the following denoising steps more difficult. Comparison to structured patterns. We evaluate our model's ability to denoise structured patterns. We consider the FashionMNIST dataset, from which we select 105 images of size 28×28 that we binarize by setting pixel values to be -1 if below 127, and 1 elsewhere. We compare the denoising performance to the performance on random binary patterns of size 28×28=784 for fair comparison (Figure <ref>). § ADDITIONAL RESULTS §.§ Further visualizations of grid cell modules We further visualize the receptive fields for path integration by showing receptive fields from different units taken from the same grid module. We simulate a trajectory that traverses the entire environment and represent the activation of different position vectors along the trajectory. For each modulus m_i ∈{3,5,7}, we show the similarity between 4 different vectors 𝐠_i from module m_i and the position vectors along the trajectory. We show in Figure (<ref>) that the different receptive fields of a given module are translations of one another. §.§ Remapping contexts We demonstrate that the context vector can serve as a model of global remapping in hippocampal place fields, which occurs when there is no relationship between the firing of place cells in different environments <cit.>. The simplest instance of this is when a place field occurs in context A but not context B, consistent with the observed sparsity of hippocampal activity <cit.>. To model this kind of remapping phenomenon, we consider an instance where there is a gradation of contexts with some phase transition between them; such an instance was observed experimentally <cit.>. Towards this end, we model linear combinations of these contexts, where the weights each context is given are sigmoid(x), 1-sigmoid(x), with x varying from -5 to 5 in 8 equally spaced increments, and with sigmoid(x) = 1/(1+exp(-x)). To model hippocampal units, we generate units that prefer one of the two contexts and have a random place field location, using its weight vector, or address, as 𝐜_i=1^K 𝐠_i, and compare its output to that of the context/grid system at each location and context. It is worth noting that the original experiment of <cit.> also exhibited instances of rate remapping for some units, and so there is certainly additional complexity underlying remapping that is not captured by our simple model. §.§ Storing and retracing sequences We demonstrate that our model can recover sequences by heteroassociation of patterns to positions and path integration in a conceptual space (Figure <ref>A). This is consistent with the postulated role of the hippocampal formation in performing navigation in conceptual spaces <cit.>, and the role of entorhinal cortex in generating sequences of neural firing in hippocampus <cit.>. 
To evaluate our attractor model's fidelity at sequence memorization and retrieval, we simulate trajectories to form sequences of random binary patterns and recall the sequence using the path integration mechanism following the method in Section <ref>, for D=10,000 and moduli {3,5}. We add extrinsic noise to the velocity input, which accumulates along the trajectory and induces a drift. This implies that patterns at the end of sequences are less well recovered than ones at the beginning (Figure <ref>B and C).
http://arxiv.org/abs/2406.19121v1
20240627120555
Towards Learning Abductive Reasoning using VSA Distributed Representations
[ "Giacomo Camposampiero", "Michael Hersche", "Aleksandar Terzić", "Roger Wattenhofer", "Abu Sebastian", "Abbas Rahimi" ]
cs.LG
[ "cs.LG", "cs.AI", "cs.SC" ]
§ ABSTRACT We introduce the Abductive Rule Learner with Context-awareness (ARLC), a model that solves abstract reasoning tasks based on Learn-VRF. ARLC features a novel and more broadly applicable training objective for abductive reasoning, resulting in better interpretability and higher accuracy when solving Raven's progressive matrices (RPM). ARLC allows both programming domain knowledge and learning the rules underlying a data distribution. We evaluate ARLC on the I-RAVEN dataset, showcasing state-of-the-art accuracy across both in-distribution and out-of-distribution (unseen attribute-rule pairs) tests. ARLC surpasses neuro-symbolic and connectionist baselines, including large language models, despite having orders of magnitude fewer parameters. We show ARLC's robustness to post-programming training by incrementally learning from examples on top of programmed knowledge, which only improves its performance and does not result in catastrophic forgetting of the programmed solution. We validate ARLC's seamless transfer learning from a 2x2 RPM constellation to unseen constellations. Our code is available at https://github.com/IBM/abductive-rule-learner-with-context-awareness. § INTRODUCTION Abstract reasoning can be defined as the ability to induce rules or patterns from a limited source of experience and generalize their application to similar but unseen situations. It is widely acknowledged as a hallmark of human intelligence, and great efforts have been poured into the challenge of endowing artificial intelligence (AI) models with such capability. As a result, a wide range of benchmarks to assess human-like fluid intelligence and abstract reasoning in AI models has been proposed in the past decade <cit.>. In this work, we focus on Raven's progressive matrices (RPM) <cit.>. RPM is a visual task that involves perceiving pattern continuation and elemental abstraction, as well as deducing relations based on a restricted set of underlying rules, in a process that mirrors the attributes of advanced human intelligence <cit.>. Recently, RPM has become a widely used benchmark for effectively testing AI capabilities in abstract reasoning, making analogies, and dealing with out-of-distribution (OOD) data <cit.>. With the advent of large language models, it has been suggested that the attainment of the abstract reasoning abilities required to solve this kind of task may hinge upon the scale of the model. To support this claim, it was shown that adequately large pre-trained language models can exhibit emergent abilities for logical <cit.> and analogical <cit.> reasoning. Nevertheless, the internal mechanisms underlying the emergence of these abilities are still not well understood. In addition, recent works provided evidence of the acute brittleness of these abilities <cit.>, while others showed that language models fail to attain levels of general abstract reasoning comparable to humans <cit.>. An alternative and promising direction is neuro-symbolic AI. 
Neuro-symbolic approaches combine sub-symbolic perception with various forms of symbolic reasoning, resulting in cutting-edge performance across a spectrum of domains, including visual <cit.>, natural language <cit.>, causal <cit.>, mathematical <cit.>, and analogical <cit.> reasoning tasks. In the context of RPM, recent neuro-symbolic architectures focused on abductive reasoning <cit.>. Abductive reasoning allows one to selectively infer propositions based on prior knowledge represented in a symbolic form to explain the perceptual observations in the best possible way <cit.>. The appeal of the abductive approach lies in its accommodation of perceptual uncertainties within symbolic reasoning. Abductive reasoning can be implemented in systems that leverage distributed vector-symbolic architecture (VSA) <cit.> representations and operators, such as the Neuro-Vector Symbolic Architecture (NVSA) <cit.>. However, these neuro-symbolic architectures <cit.> necessitate complete knowledge of the application domain (which might not be available) to program the right inductive bias into the model. Learn-VRF <cit.> overcomes this limitation by introducing a probabilistic abductive reasoning approach that learns a subset of the rules underlying RPM from data. Learn-VRF transparently operates in the rule space, learning rules through a soft assignment of VSA attribute representations to a fixed rule template. During inference, it generates the answer panel by executing all the learned rules and applying a soft-selection mechanism to their outputs. Nevertheless, Learn-VRF comes with several limitations, including a sub-optimal selection mechanism, poor performance on the RPM constellations involving multiple objects, and a constraint on the expressiveness of the RPM rules it can learn. To make progress towards learning-to-reason, we propose the Abductive Rule Learner with Context-awareness (ARLC) to tackle the main limitations of Learn-VRF <cit.>. We advance a novel context-augmented formulation of the optimization problem and a more expressive rule template, which allow sharing rules with the same parameters in both the execution and selection steps and offer better interpretability. An overview of ARLC is depicted in Figure <ref>. ARLC features programmability and can further learn from data on top of programmed knowledge. We evaluate ARLC on in-distribution (ID) and out-of-distribution (OOD) tests of the I-RAVEN dataset, and demonstrate that ARLC significantly outperforms neuro-symbolic and connectionist baselines, including large language models. Further, the number of trainable parameters is reduced by two orders of magnitude compared to Learn-VRF. We experimentally validate the programmability of ARLC by encoding domain knowledge, and discover that post-programming training, contrary to other studies <cit.>, does not compromise the validity of the solution, but rather improves it. Finally, training the model on a single constellation and evaluating it on all the others, we show that, unlike previous baselines <cit.>, the learned rules can be seamlessly transferred across constellations of the I-RAVEN dataset. § BACKGROUND §.§ Vector Symbolic Architectures Vector-symbolic architectures (VSAs) <cit.> are a family of computational models that rely on the mathematical properties of high-dimensional vector spaces. VSAs make use of high-dimensional distributed representations for the structured (symbolic) representation of data while maintaining the advantages of connectionist distributed vector representations (see <cit.> for a survey).
Here is a formal definition of VSAs: A vector-symbolic architecture (VSA) consists of a 4-tuple 𝕍=(ℂ, ⊕, ⊗,⊙), where ℂ is a set of high-dimensional distributed vectors equipped with two main operations, ⊕ (bundling) and ⊗ (binding), and on which it is possible to define a similarity measure ⊙. Bundling is a similarity-preserving operation that creates a superposition of the operands, that is, the resulting vector will have a high similarity with the two operands. Binding, on the other hand, is an operation that allows to bind a vector (value) to another vector (key) and does not preserve similarities; it usually allows an inverse operation, called unbinding. The specific realization of the bundling, binding, and vector space constitute the main difference between members of the VSA family. §.§ Raven's Progressive Matrices In this work, we focus on the I-RAVEN dataset <cit.>, a benchmark that provides RPM tests sampled from unbiased candidate sets to avoid short-cut solutions that were possible in the original RAVEN dataset <cit.>. Each RPM test is an analogy problem presented as a 3× 3 pictorial matrix of context panels. Every panel in the matrix is filled with several geometric objects based on a certain rule, except the bottom-right panel, which is left blank. Figure <ref> includes an I-RAVEN example test. The task is to complete the missing panel by picking the correct answer from a set of (eight) candidate answer panels that matches the implicit generation rule on every attribute. The object's attributes (color, size, shape, number, position) are governed by individual underlying rules: * constant, the attribute value does not change per row; * arithmetic, the attribute value of the third panel corresponds to either the sum or the difference of the first two panels of the row; * progression, the attribute value monotonically increases or decreases in a row by 1 or 2; * distribute three, the set of the three different values remains constant across rows, but the individual attribute values get shifted to the left or to the right by one position at every row; it also holds column-wise. Each panel contains a variable number of objects (minimum one, maximum nine) arranged according to one of seven different constellations (center, distribute-four, distribute-nine, left-right, up-down, in-out-center, and in-out-four). §.§ Learning to Reason with Distributed Representations In this section, we discuss how vector-symbolic architectures can be used to solve tasks which require analogical and relational reasoning such as RPM. In particular, we focus on Learn-VRF <cit.>, a simple yet powerful approach that enables to solve RPM tests by learning the underlying relations between visual attributes in the VSA representational space. The key observation behind this approach is that the formulation of every RPM rule in the VSA algebra is a particular instance of a general rule template, which is shared among all the rules and consists of a series of binding and unbinding operations between VSA vectors. Hence, the problem of learning RPM rules can be framed as an assignment problem between vectors representing visual attributes and terms in this general rule template. This alternative formulation allows to tackle one of the main limitations of neuro-symbolic approaches, differentiability, and therefore enables the use with data-driven learning algorithms based on gradient optimization. 
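To make the VSA operations underlying this formulation concrete, the following toy sketch (our illustration, not part of Learn-VRF) uses one common VSA realization with dense bipolar vectors: bundling is element-wise addition, binding is the element-wise (Hadamard) product, and similarity is the cosine. The binary GSBCs used later realize the same algebra with a different vector space and operators.

import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # dimensionality of the distributed representations

def random_vec():
    # dense bipolar codevector in {-1, +1}^D
    return rng.choice([-1.0, 1.0], size=D)

def bundle(a, b):
    # similarity-preserving superposition
    return a + b

def bind(a, b):
    # Hadamard-product binding; self-inverse, so unbinding reuses bind()
    return a * b

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

key, value = random_vec(), random_vec()
pair = bind(key, value)
print(round(cos(pair, value), 2))              # ~0.0: binding does not preserve similarity
print(round(cos(bundle(key, value), key), 2))  # ~0.71: bundling does
print(round(cos(bind(pair, key), value), 2))   # 1.0: unbinding with the key recovers the value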
Learn-VRF includes several sequential steps, ranging from the translation of visual attributes into the VSA high-dimensional space to the computation of final results, that are detailed in the following paragraphs. §.§.§ From Visual Attributes to VSA. Following the same procedure of previous works which assume a perfect perception <cit.>, the panel's attribute labels are provided directly by the I-RAVEN metadata. For every attribute a, each panel's label is translated to a probability mass function (PMF) 𝐩_a^(i,j), where i is the row index and j is the column index of the panel. The panel's PMF is then projected into the VSA space as 𝐯_a^(i,j) = ∑_k=1^N 𝐩_a^(i,j)[k] ·𝐛[k], where N is the number of possible values that the attribute a can assume. The VSA vectors are drawn from a dictionary of binary generalized sparse block codes (GSBCs) <cit.> ℂ={𝐛_i }_i=1^512. In binary GSBCs, the D-dimensional vectors are divided into B blocks of equal length, L=D/B, where only one (randomly selected) element per block is set to 1 (D=1024 and B=4). The algebraic operations on binary GSBCs are defined in Table <ref>. Combining GSBCs with fractional power encoding (FPE) <cit.> allows the representation of continuous attributes (e.g., color or size) and simple algebraic operations, as addition and subtractions, in the corresponding vector space. In other words, the FPE initialization allows to establish a semantic equivalence between high-dimensional vectors and real numbers. This property is consistently exploited in the framework, as it allows to solve the analogies in the puzzles as simple algebraic operations in the domain of real numbers. Finally, we observe that the binding operation for binary GSBCs has properties analogous to addition in the real number domain, including commutativity, associativity, and the existence of a neutral element (𝐞∈ℂ, s.t. 𝐚⊛𝐞 = 𝐚 ∀𝐚∈ℂ ). §.§.§ Learning RPM Rules as an Assignment Problem. The core idea introduced in Learn-VRF is that the rules used in RPM can be framed in a fixed template which encompasses a series of binding and unbinding operations, r = ( 𝐜_1 ⊛𝐜_2 ⊛𝐜_3 ) ⊚( 𝐜_4 ⊛𝐜_5 ⊛𝐜_6 ), where 𝐜_i represents a context panel 𝐯_a^(i,j) or the identity 𝐞. In this setting, learning the rules of RPM can hence be interpreted as an assignment problem between VSA vectors and terms of Equation <ref>. To make it differentiable, Learn-VRF frames every term 𝐜_i as a convex combination over the VSA vectors of the context panels' attributes, augmented with the neutral element 𝐜_k = ∑_panels (i,j) w_k^(i,j)·𝐯_a^(i,j) + v_k·𝐞, where the following constraints apply to the weights ∑_panels (i,j) w_k^(i,j) + v_k = 1, 0 ≤ w_k^(i,j)≤ 1 ∀ i,j, 0 ≤ v_k≤ 1 , ∀ k. §.§.§ Executing and Selecting the Learned Rules. Inference with the learned rule set is a two-steps process: an execution step (where all the rules are applied in parallel to the input) and a selection step (where a prediction for the missing panel is generated). The application of each rule r to an RPM example generates a tuple of three VSA vectors (𝐯̂_a,r^(i,3))^3_i=1, which corresponds to the result of the rule execution on the three rows of the RPM matrix, together with a rule confidence value s_r. The confidence value is computed as the sum of the cosine similarities between the predicted VSA vectors and their respective ground-truth vector, s_r = ∑_i=1^3 cos(𝐯_a^(i,3), 𝐯̂_a,r^(i,3)). During inference, the last term of the sum (i=3) is omitted, as the ground-truth for the third row is unknown. 
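Before turning to how the rules' outputs are combined, the following sketch gives intuition for why binding of FPE-initialized GSBC vectors mirrors addition of the encoded numbers. It is our simplification, not the exact codebook of <cit.>: each binary GSBC vector is represented only by the index of its single active element per block, and binding is assumed to be realized as block-wise circular convolution, which for such vectors reduces to modular addition of the active indices.

import numpy as np

rng = np.random.default_rng(1)
B, L = 4, 256                       # number of blocks and block length (D = B * L)
base = rng.integers(0, L, size=B)   # active index per block of the FPE base vector

def encode(k):
    # fractional power encoding: the k-th "power" of the base vector
    return (k * base) % L

def bind(u, v):
    # block-wise circular convolution == modular addition of active indices
    return (u + v) % L

def unbind(u, v):
    return (u - v) % L

def sim(u, v):
    # dot product of the binary GSBC vectors = fraction of matching blocks
    return float(np.mean(u == v))

a, b = encode(3), encode(4)
print(sim(bind(a, b), encode(7)))     # 1.0: binding adds the encoded numbers
print(sim(unbind(a, b), encode(-1)))  # 1.0: unbinding subtracts them
print(sim(bind(a, encode(0)), a))     # 1.0: encode(0) acts as the neutral element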
The answer is finally produced by taking a linear combination of the VSA vectors generated by executing all the rules, weighted by their respective confidence scores (normalized to a valid probability distribution using a softmax function). More formally, if we define 𝐬=[ s_1, …, s_R ] to be the concatenation of all rules' confidence scores and 𝐕̂_a^(3,3) = [ 𝐯̂_a,1^(3,3), …, 𝐯̂_a,R^(3,3) ] to be the concatenation of all rules' predictions for the missing panel, the final VSA vector predicted by the model for the attribute a becomes 𝐯̂_a^(3,3) = softmax(𝐬) ·𝐕̂_a^(3,3). The use of the weighted combination can be understood as a soft selection mechanism between rules, and was found to be more effective than the hard selection mechanism provided by sampling <cit.>. § METHODS In this section, we present our Abductive Rule Learner with Context-awareness (ARLC) system. An overview of ARLC is depicted in Figure <ref>. The framework aligns with the original Learn-VRF at a conceptual level, albeit with key adjustments that improve its expressiveness and boost its downstream performance on the I-RAVEN dataset. §.§ Learning Context-Augmented RPM Rules The soft-assignment problem presented in Equation <ref> is designed to assign each term 𝐜_i in Equation <ref> to a fixed, absolute position in the 3× 3 RPM matrix. For instance, the rule for arithmetic subtraction could be learned with one-hot assignment weights as 𝐯̂_a^(3,3) = 𝐯_a^(3,1)⊚𝐯_a^(3,2). A major limitation of this approach is that the rules cannot be shared across rows of the RPM matrix. For example, the aforementioned arithmetic subtraction rule is valid only for the third row, but not for the first and second rows. To overcome this limitation, Learn-VRF instantiates and learns three different rule sets (one per row) simultaneously. During inference, the model leverages the first two sets to produce confidence values (Equation <ref>), which are then used to perform a soft selection of the output panels produced by the third rule set (Equation <ref>). While it was empirically shown to be effective, this implementation leaves the door open to several criticalities. For instance, the model has no constraint on the functional equivalence between the three learned rule sets. This renders the interpretability of the model considerably harder and increases the likelihood of learning spurious correlations in the rule selection mechanism. Furthermore, this formulation diminishes the model's versatility, as its design, tailored to the RPM context, cannot seamlessly transfer to other abstract reasoning tasks without a reconfiguration of its primary components. Motivated by these issues and by related works in cognitive science and psychology that argue for the importance of context in the solution of analogies for humans <cit.>, we propose a more general formulation of the soft-assignment problem which abstracts away the positional assignment and instead relies on the notion of context. We propose to rewrite Equation <ref> as 𝐜_k = ∑_i=1^I w_k^i ·𝐱_i + ∑_j=1^J u_k^j ·𝐨_j + v_k·𝐞. Here, 𝐗={𝐱_1, …, 𝐱_I} is the set of attributes that define the current sample, that is, the description of the problem for which we infer a solution. 𝐎={𝐨_1, …, 𝐨_J} is the set of attributes that define the context for that sample, which can be interpreted as a working memory from which additional information to infer the answer can be retrieved.
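Since FPE makes binding act like addition on the encoded values (with the identity element playing the role of 0), the context-augmented soft assignment and the subsequent execution and selection steps can be sketched directly in the equivalent real-number domain. The code below is purely illustrative: the weight parameterization, shapes, and confidence values are our assumptions, not the authors' implementation.

import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def execute_rule(W, x, o):
    # One rule in the real-number domain that FPE maps binding onto.
    # W: (6, I + J + 1) unnormalized assignment weights for the 6 template terms;
    # x: (I,) current-sample attributes; o: (J,) context attributes.
    feats = np.concatenate([x, o, [0.0]])   # 0.0 plays the role of the identity e
    c = softmax(W, axis=1) @ feats          # convex combination defining each c_k
    return c[:3].sum() - c[3:].sum()        # (c1 ⊛ c2 ⊛ c3) ⊚ (c4 ⊛ c5 ⊛ c6)

rng = np.random.default_rng(4)
rules = [rng.standard_normal((6, 8)) for _ in range(5)]   # R = 5 rules, I = 2, J = 5
x, o = np.array([3.0, 2.0]), np.array([9.0, 0.0, 9.0, 6.0, 3.0])
preds = np.array([execute_rule(W, x, o) for W in rules])  # execution step
confs = rng.standard_normal(5)     # stand-in for the cosine-based confidences s_r
answer = softmax(confs) @ preds    # soft selection of the final prediction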
In Equation <ref>, 𝐰, 𝐮, 𝐯 are the learned parameters and, as in Equation <ref>, they are subject to the following constraints: ∑_i=1^I w_k^i + ∑_j=1^J u_k^j + v_k = 1, 0 ≤ w_k^i ≤ 1 ∀ i, 0 ≤ u_k^j ≤ 1 ∀ j, 0 ≤ v_k ≤ 1, ∀ k. Note that the notion of the current sample X and its context O depends on the row chosen for inference, as shown in Figure <ref>. The new formulation does not lose expressiveness compared to Equation <ref>. While its terms are no longer tied to fixed positions in the RPM matrix, relative position information can still be preserved by keeping the order of the current and context samples consistent during training. Both row-wise and column-wise relations can be correctly represented by the model. Most importantly, the new context-aware formulation of the soft assignment allows a single set of rules to be shared across all the rows of the RPM. Contrary to Learn-VRF, the model can now enforce functional equivalence between the rules used for selection and execution by construction. Additionally, the number of trainable parameters is reduced by 66% compared to Learn-VRF. In RPM, the number of current and context examples is equal to I=2 and J=5, respectively. We do not consider J=6 context examples to ensure that the same rules can be shared across rows. Otherwise, the model would fail when used to predict R_1 and R_2, since the panel in position (3,3) is unknown. §.§ Improving Rule Selection through Template Generalization Compared to Learn-VRF, we increase the number of terms in the general rule template (Equation <ref>) as r = ( 𝐜_1 ⊛𝐜_2 ⊛𝐜_3 ⊛𝐜_4 ⊛𝐜_5 ⊛𝐜_6 ) ⊚( 𝐜_7 ⊛𝐜_8 ⊛𝐜_9 ⊛𝐜_10⊛𝐜_11⊛𝐜_12). Increasing the representational power of the rule template opens up the possibility of learning more general, yet functionally equivalent, formulations of the RPM rules by also including “validation” terms in their structure, which are necessary in specific edge cases of the I-RAVEN dataset, such as Example <ref>. This bridges the gap between Learn-VRF, which could learn a set of rules that are perfect for execution but sub-optimal for selection, and other neuro-symbolic approaches <cit.>, which hard-coded an optimal rule set for selection and an optimal rule set for execution. We define the optimality of a rule set as follows. Consider a rule set ℛ={r_i}_i=1^R and an arbitrary RPM test 𝐕_a=(𝐯_a^(1,1),…,𝐯_a^(3,2)). ℛ is defined to be optimal for execution if ∃ r∈ℛ s.t. 𝐯_a^(3,3) = r( 𝐕_a ), and optimal for selection if the probability distribution over ℛ induced by the selection mechanism (through the confidence values s_r) concentrates all the probability on the correct rule. The importance of this distinction can be understood in the following example. Consider the following RPM test example V, where different numbers correspond to different color attribute values, V = [ 9 0 9 ; 6 3 9 ; 3 2 ? ], with x_1=9, x_2=0, o_1=6, o_2=3, o_3=9 when X=R_1; x_1=6, x_2=3, o_1=9, o_2=0, o_3=9 when X=R_2; and x_1=3, x_2=2, o_1=9, o_2=0, o_3=9 when X=R_3. Consider also a rule set ℛ including the two rules that were proposed in Learn-VRF <cit.> to solve arithmetic plus (+) and distribute three (d3), rewritten according to our context-augmented formulation: 𝐯^+ = 𝐱_1 ⊛𝐱_2 ≡ x_1 + x_2 = v^+, and 𝐯^d3 = ( 𝐨_1 ⊛𝐨_2 ⊛𝐨_3 ) ⊚𝐱_1 ⊚𝐱_2 ≡ ( o_1 + o_2 + o_3 ) - x_1 - x_2 = v^d3, where the equivalence between the vector space and ℝ is given by FPE. Performing the selection using ℛ can potentially result in a failure of the model for this RPM test.
In fact, we can see that both v^+ and v^d3 will produce the correct answer on the first two rows, and the confidence values (the cosine similarity between the output and the true attribute) will be 1 for both. As a result, the model will assign equal probabilities to both rules (even if only one of them is valid), rendering the probability of choosing the correct one equal to a coin toss. Incorporating a validation term in the rule definition for distribute three can, in this case, solve the issue. Consider the functionally equivalent rule v^d3++ = ( o_1 + o_2 + o_3 ) - x_1 - x_2 (execution term) + ( o_1 + o_2 + o_3 ) - ( o_1 + o_4 + x_1 ) (validation term). The two additional terms only cancel out when the sum of the elements of the first context row is equal to the sum of the elements of the first column, a property that always holds for distribute three but not for the other RPM rules. Hence, using v^d3++ instead of v^d3, the model is able to rule distribute three out of the list of valid rules, correctly putting all the probability mass on the arithmetic plus rule instead. Note that learning v^d3++ would not have been possible with only the 6 terms available in Equation <ref>. Therefore, we claim that Equation <ref>, even if still not optimal for selection, will increase the robustness of the model to RPM edge cases. However, the increase in expressiveness also comes at a cost in terms of the number of trainable parameters, which scales linearly with the number of terms of Equation <ref>. §.§ Training Loss and other Implementation Aspects We follow the training recipe provided by Learn-VRF <cit.>. The training loss is defined as one minus the sum of the cosine similarities between the three predicted panels and their corresponding ground truths, ℒ = 1 - ∑_i=1^3 cos(𝐯_a^(i,3), 𝐯̂_a^(i,3)). As in Learn-VRF, we set the number of rules to R=5. A single set of rules is instantiated and shared between all RPM attributes. In previous works <cit.>, the execution of the rules on the position and number attributes is performed in superposition, since either the number or the position attribute contributes to the generation of the answer. However, the superposition requires a preliminary binding operation with (trainable) key vectors to avoid the binding problem <cit.>. As a result, vector arithmetic is no longer supported on these attributes. We disentangle the two attributes, speculating that the trade-off between the additional noise introduced by the “unused” attribute, for which no rule is formally defined, and the increased computing accuracy will be significantly in favor of the latter. Removing the superposition also allows us to remove the keys used for binding, which were trainable parameters in Learn-VRF and constituted the majority (81%) of the model's parameters. § RESULTS §.§ In-distribution (ID) Results Table <ref> shows ARLC's in-distribution downstream accuracy on I-RAVEN compared to a range of neuro-symbolic and connectionist baselines. We present results for three different versions of ARLC: ARLC_progr, where the model's weights are manually programmed with the RPM rules (R=4, since constant can be considered a special case of progression), ARLC_p↦l, where the model is initialized with the programmed rules and then trained with gradient descent, and ARLC_learn, where the rules are learned from scratch from data.
ARLC achieves the best average accuracy on the in-distribution I-RAVEN dataset, improving on the second-best result (NVSA <cit.>, where the rules are hard-wired into the model) by almost 5%, while having orders of magnitude fewer parameters than any other baseline model. ARLC also shows lower variance compared to the other baselines on almost every constellation. Contrary to all the other methods (with the exception of PrAE <cit.>), ARLC is exclusively trained on the 2x2 constellation, effectively reducing the number of trained parameters and training samples by 85% (6 of the 7 constellations are never seen during training). Its seamless adaptation to unseen constellations demonstrates the generality of the learned rules but prevents the model from outperforming the baselines in every single constellation. ARLC produces close to perfect results on all the constellations without the position/number attribute (that is, C, L-R, U-D, and O-IC), strongly outperforming GPT-3 and PrAE. On the other hand, its accuracy degrades for the constellations that include the position/number attribute (that is, 2x2, 3x3, and O-IG). This degradation arises because of three specific rules on the position attribute: progression (corresponding to a circular bit-shifting operation), arithmetic plus (corresponding to the logical OR of the two position patterns), and arithmetic minus (corresponding to the logical difference of the two position patterns). These rules, which operate at the granularity of objects, cannot be easily captured by the model, which operates at the granularity of panels. Finally, comparing the three proposed versions of ARLC, it is interesting to observe that ARLC_p↦l outperforms both its fully-learned and fully-programmed equivalents. The post-programming training extends the knowledge of the model rather than completely erasing it, as was shown to happen in other settings <cit.>, resulting in a monotonic increase in downstream accuracy. Table <ref> reports a thorough ablation on the novelties introduced in our framework. We can observe that the biggest contribution comes from removing the position/number superposition (+8.1%), which consistently increases the performance on the constellations involving these attributes. However, in all the other constellations we observe a consistent drop in accuracy, due to the evaluation of the model in the “transfer” setting (trained on 2x2, evaluated on all the others). This drop is compensated by the two novel components of the model, the context-awareness and the generalized rule template, which allow ARLC to match and outperform Learn-VRF on every constellation. Interestingly, they also contribute to reducing the variance of the results, suggesting that the two improvements might increase the model's invariance to weight initialization. §.§ Out-of-distribution (OOD) Results We validate ARLC's out-of-distribution generalization capabilities by following the same recipe proposed by Learn-VRF <cit.>: the model is trained on a subset of the rule-attribute pairs of the center constellation, and evaluated on its complement. As shown in Table <ref>, ARLC matches the OOD performance previously shown by Learn-VRF, attaining perfect accuracy on almost every unseen rule-attribute pair in the center constellation. § CONCLUSIONS AND FUTURE WORK In this work, we proposed the Abductive Rule Learner with Context-awareness (ARLC), a model built on top of Learn-VRF <cit.> to enhance its downstream accuracy, interpretability, model size, and coherence.
We conducted evaluations on the I-RAVEN dataset, demonstrating significant performance improvements compared to a diverse array of neuro-symbolic and connectionist baselines, including large language models, across both ID and OOD data. Furthermore, we presented empirical results on the programmability of the model and on its generalization across different constellations of the I-RAVEN dataset. A potential avenue for future research involves addressing the remaining challenge posed by this dataset, specifically, the development of suitable representations that make the arithmetic and progression rules on the position attribute learnable. These rules are currently impossible for the model to learn, and learning them would allow it to attain perfect accuracy on I-RAVEN. Additionally, extending the evaluation of ARLC to other reasoning benchmarks, such as ARC <cit.>, also represents a promising direction for further investigation. In fact, while the scope of this work was mostly focused on developing and studying a prototype that can solve RPM, the proposed general formulation of the problem could transfer to other settings that require data-driven learning of relations and analogies. Furthermore, as our experiments on learning on top of programmed knowledge show, our framework holds potential for application in scenarios where only partial knowledge of the dynamics is available, and it is necessary to discover and build new knowledge on top of it. §.§.§ Acknowledgments This work is supported by the Swiss National Science Foundation (SNF), grant 200800. RavenTest2012 Bilker, W.B., Hansen, J.A., Brensinger, C.M., Richard, J., Gur, R.E., Gur, R.C.: Development of abbreviated nine-item forms of the Raven’s standard progressive matrices test. Assessment (2012) cherian2023smart Cherian, A., Peng, K., Lohit, S., Smith, K.A., Tenenbaum, J.B.: Are deep neural networks smarter than second graders? In: 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). pp. 10834–10844. IEEE Computer Society, Los Alamitos, CA, USA (2023) chollet2019measure Chollet, F.: On the measure of intelligence. arXiv preprint arXiv:1911.01547 (2019) niedermayr2023rlp Niedermayr, Y., Lanzendörfer, L.A., Estermann, B., Wattenhofer, R.: RLP: A reinforcement learning benchmark for neural algorithmic reasoning. OpenReview (2023) Carpenter1990 Carpenter, P.A., Just, M.A., Shell, P.: What one intelligence test measures: a theoretical account of the processing in the Raven progressive matrices test. Psychological review (1990) Raven1938 Raven, J., Court, J., Raven, J.: Raven's progressive matrices. Oxford Psychologists Press (1938) snow1984topography Snow, R.E., Kyllonen, P.C., Marshalek, B., et al.: The topography of ability and learning correlations. Advances in the psychology of human intelligence 2(S 47), 103 (1984) snow1984toward Snow, R.E., Lohman, D.F.: Toward a theory of cognitive aptitude for learning from instruction. Journal of educational psychology 76(3), 347 (1984) MRNet_CVPR2021 Benny, Y., Pekar, N., Wolf, L.: Scale-localized abstract reasoning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2021) I-Raven Hu, S., Ma, Y., Liu, X., Wei, Y., Bai, S.: Stratified rule-aware network for abstract visual reasoning. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI) (2021) RPM_Survey2022 Małkiński, Mikołaj and Mańdziuk, Jacek: Deep learning methods for abstract visual reasoning: A survey on Raven's progressive matrices.
arXiv preprint arXiv:2201.12382 (2022) Mitchel_Survey_2021 Mitchell, M.: Abstraction and analogy-making in artificial intelligence. Annals of the New York Academy of Sciences 1505(1), 79–101 (2021) Raven_19 Zhang, C., Gao, F., Jia, B., Zhu, Y., Zhu, S.C.: Raven: A dataset for relational and analogical visual reasoning. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2019) wei2022emergent Wei, J., Tay, Y., Bommasani, R., Raffel, C., Zoph, B., Borgeaud, S., Yogatama, D., Bosma, M., Zhou, D., Metzler, D., Chi, E.H., Hashimoto, T., Vinyals, O., Liang, P., Dean, J., Fedus, W.: Emergent abilities of large language models. Transactions on Machine Learning Research (2022) hu-etal-2023-context Hu, X., Storks, S., Lewis, R., Chai, J.: In-context analogical reasoning with pre-trained language models. In: Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). pp. 1953–1969. Association for Computational Linguistics, Toronto, Canada (2023) webb2023emergent Webb, T., Holyoak, K.J., Lu, H.: Emergent analogical reasoning in large language models. Nature Human Behaviour 7(9), 1526–1541 (2023) gendron2024large Gendron, G., Bao, Q., Witbrock, M., Dobbie, G.: Large language models are not strong abstract reasoners (2024) wu2024reasoning Wu, Z., Qiu, L., Ross, A., Akyürek, E., Chen, B., Wang, B., Kim, N., Andreas, J., Kim, Y.: Reasoning or reciting? exploring the capabilities and limitations of language models through counterfactual tasks (2024) 10208934 Camposampiero, G., Houmard, L., Estermann, B., Mathys, J., Wattenhofer, R.: Abstract visual reasoning enabled by language. In: 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). pp. 2643–2647. IEEE Computer Society, Los Alamitos, CA, USA (2023) lewis2024using Lewis, M., Mitchell, M.: Using counterfactual tasks to evaluate the generality of analogical reasoning in large language models. arXiv preprint arXiv:2402.08955 (2024) odouard2022evaluating Odouard, V.V., Mitchell, M.: Evaluating understanding on conceptual abstraction benchmarks. arXiv preprint arXiv:2206.14187 (2022) thomm2024limits Thomm, J., Terzic, A., Karunaratne, G., Camposampiero, G., Schölkopf, B., Rahimi, A.: Limits of transformer language models on algorithmic learning. arXiv preprint arXiv:2402.05785 (2024) NS_MetaConcept_NIPS19 Han, C., Mao, J., Gan, C., Tenenbaum, J., Wu, J.: Visual concept-metaconcept learning. In: Advances in Neural Information Processing Systems (NeurIPS) (2019) NS_ConceptLearner_ICLR19 Mao, J., Gan, C., Kohli, P., Tenenbaum, J.B., Wu, J.: The neuro-symbolic concept learner: Interpreting scenes, words, and sentences from natural supervision. In: International Conference on Learning Representations (ICLR) (2019) Falcon_ICLR2022 Mei, L., Mao, J., Wang, Z., Gan, C., Tenenbaum, J.B.: FALCON: Fast visual concept learning by integrating images, linguistic descriptions, and conceptual relations. In: International Conference on Learning Representations (ICLR) (2022) NS-VQA_NIPS18 Yi, K., Wu, J., Gan, C., Torralba, A., Kohli, P., Tenenbaum, J.: Neural-symbolic VQA: Disentangling reasoning from vision and language understanding. In: Advances in Neural Information Processing Systems (NeurIPS) (2018) Learn2reason_TPR Schlag, I., Schmidhuber, J.: Learning to reason with third-order tensor products. 
In: Advances in Neural Information Processing Systems (NeurIPS) (2018) CLEVRER_ICLR2020 Yi, K., Gan, C., Li, Y., Kohli, P., Wu, J., Torralba, A., Tenenbaum, J.B.: CLEVRER: Collision events for video representation and reasoning. In: International Conference on Learning Representations (ICLR) (2020) TP-transformer_2019 Schlag, I., Smolensky, P., Fernandez, R., Jojic, N., Schmidhuber, J., Gao, J.: Enhancing the transformer with explicit relational encoding for math problem solving. arXiv preprint arXiv:1910.06611 (2019) yang2022conceptual Yang, Y., Sanyal, D., Michelson, J., Ainooson, J., Kunda, M.: A conceptual chronicle of solving raven's progressive matrices computationally. In: Proceedings of the 8th International Workshop on Artificial Intelligence and Cognition (2022) nesy2022_knowledge Shah, V., Sharma, A., Shroff, G., Vig, L., Dash, T., Srinivasan, A.: Knowledge-based analogical reasoning in neuro-symbolic latent spaces. In: Proceedings of the 16th International Workshop on Neural-Symbolic Learning and Reasoning (NeSy) (2022) zhao2023interpretable Zhao, S., You, H., Zhang, R.Y., Si, B., Zhen, Z., Wan, X., Wang, D.H.: An interpretable neuro-symbolic model for raven’s progressive matrices reasoning. Cognitive Computation 15(5), 1703–1724 (2023) PrAE_CVPR21 Zhang, C., Jia, B., Zhu, S.C., Zhu, Y.: Abstract spatial-temporal reasoning via probabilistic abduction and execution. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2021) hersche2023neuro Hersche, M., Zeqiri, M., Benini, L., Sebastian, A., Rahimi, A.: A neuro-vector-symbolic architecture for solving raven’s progressive matrices. Nature Machine Intelligence 5(4), 363–375 (2023) AbductiveCognition Magnani, L.: Abductive Cognition: The Epistemological and Eco-Cognitive Dimensions of Hypothetical Reasoning. Springer Berlin, Heidelberg (2009) VSA_03 Gayler, R.W.: Vector symbolic architectures answer Jackendoff's challenges for cognitive neuroscience. In: Joint International Conference on Cognitive Science (ICCS/ASCS) (2003) Kanerva2009 Kanerva, P.: Hyperdimensional computing: An introduction to computing in distributed representation with high-dimensional random vectors. Cognitive Computation 1(2), 139–159 (2009) PlateHolographic1995 Plate, T.A.: Holographic reduced representations. IEEE Transactions on Neural Networks 6(3), 623–641 (1995) hersche2023probabilistic Hersche, M., di Stefano, F., Hofmann, T., Sebastian, A., Rahimi, A.: Probabilistic abduction for visual abstract reasoning via learning rules in vector-symbolic architectures. In: The 3rd Workshop on Mathematical Reasoning and AI at NeurIPS'23 (2023) wu2023numerosity Wu, X., Zhang, X., Shu, X.: Cognitive deficit of deep learning in numerosity. In: Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence. AAAI Press (2019) VSA_Survey_Part1 Kleyko, D., Rachkovskij, D.A., Osipov, E., Rahimi, A.: A survey on hyperdimensional computing aka vector symbolic architectures, part I: models and data transformations. ACM Comput. Surv. (2022) hu2023context Hu, X., Storks, S., Lewis, R.L., Chai, J.: In-context analogical reasoning with pre-trained language models. In: Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Long Paper) (2023) Webb2023 Webb, T., Holyoak, K.J., Lu, H.: Emergent analogical reasoning in large language models. 
Nature Human Behaviour 7(9), 1526–1541 (2023) hersche2023factorizers Hersche, M., Terzic, A., Karunaratne, G., Langenegger, J., Pouget, A., Cherubini, G., Benini, L., Sebastian, A., Rahimi, A.: Factorizers for distributed sparse block codes. arXiv preprint arXiv:2303.13957 (2023) PlateHolographic2003 Plate, T.A.: Holographic Reduced Representations: Distributed Representation for Cognitive Structures. Center for the Study of Language and Information, Stanford (2003) chalmers1992high Chalmers, D.J., French, R.M., Hofstadter, D.R.: High-level perception, representation, and analogy: A critique of artificial intelligence methodology. Journal of Experimental & Theoretical Artificial Intelligence 4(3), 185–211 (1992) yozing1990context Cheng, Y.: Context-dependent similarity. In: Proceedings of the Sixth Annual Conference on Uncertainty in Artificial Intelligence. p. 41–50. UAI '90, Elsevier Science Inc., USA (1990) greff2020binding Greff, K., Van Steenkiste, S., Schmidhuber, J.: On the binding problem in artificial neural networks. arXiv preprint arXiv:2012.05208 (2020) brown2020gpt3 Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J.D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., et al.: Language models are few-shot learners. Advances in Neural Information Processing Systems (NeurIPS) 33, 1877–1901 (2020) wu2020scl Wu, Y., Dong, H., Grosse, R., Ba, J.: The scattering compositional learner: Discovering objects, attributes, relationships in analogical reasoning. arXiv preprint arXiv:2007.04212 (2020)
http://arxiv.org/abs/2406.18681v1
20240626183430
Data Sketching and Stacking: A Confluence of Two Strategies for Predictive Inference in Gaussian Process Regressions with High-Dimensional Features
[ "Samuel Gailliot", "Rajarshi Guhaniyogi", "Roger D. Peng" ]
stat.ME
[ "stat.ME" ]
Data Sketching and Stacking: A Confluence of Two Strategies for Predictive Inference in Gaussian Process Regressions with High-Dimensional Features Samuel Gailliot, Department of Statistics, Texas A&M University; Rajarshi Guhaniyogi, Department of Statistics, Texas A&M University; and Roger D. Peng, Department of Statistics and Data Sciences, University of Texas at Austin. ================================================================================================================================================================================================================================================================================================================================================================================================== § ABSTRACT This article focuses on drawing computationally efficient predictive inference from Gaussian process (GP) regressions with a large number of features when the response is conditionally independent of the features given the projection to a noisy low-dimensional manifold. Bayesian estimation of the regression relationship using Markov chain Monte Carlo and subsequent predictive inference is computationally prohibitive and may lead to inferential inaccuracies, since accurate variable selection is essentially impossible in such high-dimensional GP regressions. As an alternative, this article proposes a strategy to sketch the high-dimensional feature vector with a carefully constructed sketching matrix before fitting a GP with the scalar outcome and the sketched feature vector to draw predictive inference. The analysis is performed in parallel with many different sketching matrices and smoothing parameters in different processors, and the predictive inferences are combined using Bayesian predictive stacking. Since the posterior predictive distribution in each processor is analytically tractable, the algorithm bypasses the robustness issues due to convergence and mixing of MCMC chains, leading to fast implementation with a very large number of features. Simulation studies show superior performance of the proposed approach over a wide variety of competitors. The approach also outperforms competitors in drawing point predictions with predictive uncertainties of outdoor air pollution from satellite images. Keywords: Bayesian predictive stacking; feature sketching; Gaussian processes; high-dimensional features; manifold regression; posterior consistency. § INTRODUCTION We focus on the problem of drawing predictive inference of a random variable from a high-dimensional feature vector using “sketching" of the feature vector when it truly lies on a low-dimensional noisy unknown manifold. In recent years, there has been a growing literature on “data sketching," which involves sketching or compressing the original data before analysis <cit.>. However, our approach differs from the existing data sketching literature in two key aspects. Firstly, while most data sketching approaches aim to reduce the number of data samples, our approach is distinct in that it maintains the same number of samples but instead reduces the dimensionality of the feature vector.
Secondly, the majority of research in data sketching focuses on performance evaluation of ordinary and high-dimensional penalized regression methods with sketched data <cit.>, with only a few recent articles considering application of data sketching in Bayesian high-dimensional linear and non-linear regressions <cit.>. In contrast, our approach leverages the benefits of data sketching to deliver scalable predictive inference in non-parametric regressions with a limited sample size and a large number of features, when the features lie on a noisy low-dimensional manifold. We consider a regression framework with an outcome y∈𝒴⊆ℝ and a feature vector =(x_1,...,x_p)^T when resides on a noisy unknown manifold, i.e., =()+, where =(o_1,...,o_d)^T is d-dimensional co-ordinates for a manifold 𝒪⊆ℝ^p, (·):ℝ^d→ℝ^p is a mapping function such that ()∈𝒪 and is p-dimensional noise. Often the complex dependence between y and is encoded via co-ordinates of the low-dimensional manifold, i.e., y=h()+ϵ, where h is a complex function encoding the true relationship between response and co-ordinates of the manifold and ϵ is the error. Since the manifold 𝒪 is unobserved, the co-ordinates is typically unknown. Hence, the common practice is to estimate complex dependencies between y and through a non-linear regression model given by, y=f()+ϵ, where f is an unknown regression function and ϵ is the residual. When dealing with high-dimensional features, Gaussian process (GP) priors with an automatic relevance determination (ARD) kernel are commonly used to estimate the underlying function f with sufficient sparsity assumption in the relationship between y and <cit.>. The estimated f is then employed to predict the response variable. However, when the number of features reaches the order of a few thousand, estimation of f with GP-ARD framework is often inaccurate, leading to unsatisfactory predictive inference. This article proposes an alternative approach that exclusively focuses on drawing predictive inference on the response variable y, including both point prediction and uncertainty estimation, using GP regression. We review below a list of existing strategies to draw predictive inference on y in non-linear regressions before introducing our approach. In the literature, a significant line of work follows a two-stage approach for dealing with high-dimensional features in non-linear manifold regression tasks. In this approach, the first stage involves constructing a lower-dimensional representation of the high-dimensional features using manifold learning techniques. Some popularly employed parsimonious manifold learning algorithms include Isomap <cit.>, Diffusion Maps <cit.>, and Laplacian eigenmap <cit.>. These algorithms enable the reduction of dimensionality while preserving the essential characteristics of the data. Additionally, there are model-based approaches that estimate the unknown Riemannian manifold structure within the feature vector. These methods utilize techniques such as local PCA <cit.> or geometric multiresolution analysis <cit.>, and, more recently, spherical basis functions <cit.>. Non-linear regression models in the second stage are based on these projected features in lower-dimensions. However, it is important to note that such two-stage approaches rely on learning the manifold structure embedded in the high-dimensional features. 
While this can be valuable for understanding the underlying data structure, it adds unnecessary computational burden when the primary focus is on prediction rather than inference. An alternative line of research focuses on estimating the unknown function f using tree-based approaches or deep neural network methods. Tree-based approaches, such as CART <cit.>, BART <cit.>, and random forest <cit.> are based on finding the best splitting attribute, which can become less efficient as the number of features (p) increases. While there is a growing literature on variable selection within tree-based methods, such as BART <cit.> and its variants <cit.>, estimating the true regression function with a large number of features (p of the order of thousands) can pose challenges. Deep neural networks with variable selection architecture <cit.> are also not ideal as they lack predictive uncertainty and struggle to handle the high-dimensional feature space efficiently. Bayesian modeling approaches are naturally appealing when the focus is on quantifying predictive uncertainty. To this end, the more traditional Bayesian models simultaneously learn the mapping to the lower-dimensional subspace along with the regression function in the coordinates on this subspace. These approaches range from Gaussian process latent variable models (GP-LVMs) <cit.> for probabilistic nonlinear principle component analysis to mixture of factor models <cit.>. However, such methods pose daunting computational challenges with even moderately large p and sample size due to learning the number and distribution of latent variables, as well as the mapping functions, while maintaining identifiability restrictions. To enhance the time efficiency of the aforementioned approaches, pre-processing steps are often employed, and two popular pre-processing methods are feature screening and projection. The feature screening approach identifies features that exhibit the strongest marginal association with the response variable. By selecting the features with the highest marginal association, this approach aims to reduce the dimensionality of the problem. Feature screening methods are generally straightforward to implement, and it offers asymptotic guarantees of selecting a superset of important features <cit.>. On the other hand, projection approaches aim to construct lower-dimensional feature vectors by combining the original high-dimensional features. One common method in projection approaches is to construct a few principal components (PCs) from the original p-dimensional feature vector. A naive implementation of the above pre-processing steps is unappealing to our scenario. For example, in a non-parametric regression with a large number of correlated features and low signal-to-noise ratio, it may be important to choose a conservative threshold for screening, which limits the scope of dimension reduction at this stage. On the other hand, construction of PCs are agnostic to the relationship between the response and the feature vector. Instead, we propose an approach that first employs variable screening <cit.> with a conservative threshold to identify a large subset of features, typically a few thousand, having the highest non-linear marginal association with the response. After variable screening, the screened feature vector is further compressed using a short and fat random sketching matrix <cit.>. This matrix has a small number of rows (m) and entries that are independently and identically drawn from a normal distribution. 
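A minimal sketch of this screen-then-sketch pre-processing is given below. The marginal association score used here (the fit of a simple quadratic regression per feature) is a crude stand-in for the B-spline screening described later, and all function and variable names are illustrative rather than the authors' implementation.

import numpy as np

rng = np.random.default_rng(3)

def screen_and_sketch(X, y, keep, m):
    # Rank features by a simple marginal nonlinear association score,
    # retain the top `keep` of them, and compress those columns with an
    # m x keep Gaussian sketching matrix.
    n, p = X.shape
    scores = np.empty(p)
    for j in range(p):
        Z = np.column_stack([np.ones(n), X[:, j], X[:, j] ** 2])
        resid = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
        scores[j] = 1.0 - resid.var() / y.var()   # crude marginal R^2
    idx = np.argsort(scores)[::-1][:keep]         # indices retained by screening
    Phi = rng.standard_normal((m, keep))          # i.i.d. N(0,1) sketching matrix
    return X[:, idx] @ Phi.T, idx, Phi            # n x m sketched features

X = rng.standard_normal((200, 5000))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
X_sketched, kept, Phi = screen_and_sketch(X, y, keep=1000, m=30)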
Predictive inference proceeds by fitting a non-parametric Gaussian process (GP) regression model <cit.> to the scalar outcome and the m-dimensional sketched feature vector after fixing values for the weakly identifiable tuning parameters within the covariance kernel of the Gaussian processes. The posterior predictive distribution corresponding to a choice of such tuning parameters and random sketching matrix comes in a closed form without the need to implement MCMC sampling, so that one can obtain the predictive distribution extremely rapidly even in problems with huge numbers of features. To reduce the sensitivity of predictive inference to the choice of the sketching matrix and the tuning parameters in GP regression, the model is fit with multiple different choices of the sketching matrix and parameters. The predictive inferences obtained from such choices are then aggregated using Bayesian predictive stacking <cit.> to improve accuracy and robustness. Stacking is a model aggregation procedure to combine predictions from many different models <cit.>. In recent years, substantial advancements have been made in Bayesian stacking methodology, with notable contributions made in <cit.> and the references therein. However, to the best of our knowledge, the application of stacking in the context of predictive inference for high-dimensional manifold regression is currently lacking. While Bayesian model averaging <cit.> is most popularly used for aggregating predictive inference from multiple models, it may be less suited to the stacking procedure in our settings. To see this, assume that there are S candidate models ℳ={ℳ_1,...,ℳ_S}. Bayesian model comparison typically encounter three different settings: (i) ℳ-closed where a true data generating model exists and is included in ℳ; (ii) ℳ-complete where a true model exists but is not included in ℳ; and (iii) ℳ-open where we do not assume the existence of a true data generating model. Although Bayesian model averaging has the advantage of asymptotically identifying the true data generating model in the first setting, predictive stacking has advantages in the ℳ-complete and ℳ-open settings. Given that the true model may not be included in the class of fitted Gaussian process regression models with randomly sketched features, predictive stacking offers substantial advantages over model averaging. In regressions involving high-dimensional features and a large sample size, <cit.> propose an approach orthogonal to ours which exploits random sketching matrices to reduce the sample size rather than the number of features. A few approaches closely related to ours develop theoretical bound on predictive accuracy when high-dimensional features are sketched with random matrices <cit.>. These articles tend to include random linear combinations of many unimportant features, diminishing signal in the analysis, which results in less than satisfactory predictive performance with massive-dimensional features. Addressing this issue, <cit.> proposes novel constructions of projection matrices tailored to deliver more accurate predictive inference. These approaches aim to overcome the challenges associated with sketching high-dimensional features and improve predictive performance. However, they primarily focus on high-dimensional parametric regression, and their applicability to non-parametric regression tasks may require further investigation. 
Additionally, these approaches address sensitivity to the choice of sketching matrices by aggregating predictive inference over many sketching matrices using Bayesian model averaging technique <cit.> which is less suitable for prediction than the stacking approach we employ here, as discussed in the last paragraph. The rest of the article proceeds as follows. Section <ref> discusses motivating dataset on outdoor air pollution and satellite imagery. Section <ref> proposes the model and computational approach for predictive inference in manifold regression with large number of predictors. Section <ref> offers empirical evaluation of the proposed approach along with its competitors for simulation studies. Section <ref> investigates the proposed approach in drawing predictive inference of outdoor air pollution concentration from satellite images. Finally, Section <ref> concludes the article with an eye towards future work. §.§ Outdoor Air Pollution Application As a motivation for the development of our methodology, we consider the problem of predicting outdoor air pollution concentrations across the United States. Outdoor air pollution in the U.S. is measured using a network of ground-based monitors managed by local air quality agencies and the U.S. Environmental Protection Agency <cit.> (EPA). While the combined network of monitors consists of thousands of locations, the spatial coverage of the network is actually quite sparse, leaving many areas of the country without any ground-level data <cit.>. Many dense urban areas only contain one or two monitors, raising a question of whether such measurements are representative of the burden experienced by all members of the population. To address the sparsity of the network, there have been efforts to deploy low-cost sensors across urban areas to fill the gaps. While such approaches have promise, they are still experimental and ad hoc in nature, and the sensors themselves can sometimes introduce new measurement problems <cit.>. Remote sensing techniques, which use satellite imagery to predict ground-level concentrations of outdoor air pollution have the potential to address the spatial coverage problem because of their constant monitoring of the entire planet. Traditional approaches have employed aerosol optical depth as a proxy for such pollutants as fine particulate matter <cit.>, or PM2.5. While the previous generation of Earth observation satellites had excellent spatial coverage, they lacked temporal coverage, typically revisiting an area of the planet only once every one or two weeks. In addition, older satellites tended to have lower resolution, making them difficult to use for predicting air pollution concentrations in dense urban settings. In recent years, there has been a revolution in the deployment of satellite constellations, where hundreds of smaller inexpensive satellites orbit the Earth, providing constant coverage of all areas <cit.>. Furthermore, these satellites have much higher resolution, allowing for more detailed examination of areas of interest. Given the recent emergence of data from satellite constellations, there is still a question of how best to use them for the purpose of predicting ground-level air pollution concentrations. For this application, we focus on predicting fine particulate matter pollution (PM2.5) from multi-band satellite images. For ground-truth information we use the EPA's network of monitors to provide valid PM2.5 measurements. 
The combination of high-resolution spatial and temporal coverage of the entire U.S. with novel statistical prediction approaches has the potential to dramatically increase the monitoring of outdoor air pollution and its subsequent health effects. § OUR APPROACH: STACKED GAUSSIAN PROCESS REGRESSION Let 𝒟_n={(_i^T,y_i):i=1,...,n} be a dataset containing n observations each with a p-variate feature _i=(x_i,1,...,x_i,p)^T and a scalar-valued response y_i. We assume n is moderately large and p is large. The feature vector _i lies on an unknown noisy manifold 𝒪⊆ℝ^p with d-dimensional latent co-ordinates _i (i.e., _i=(_i)+_i, (_i)∈𝒪). We assume a nonlinear regression relationship between y_i and _i, and approximate the density of y_i by sketching the high-dimensional feature vector _i to lower-dimensions using a sketching matrix _n as follows y_i=f(_n_i)+ϵ_i, ϵ_ii.i.d.∼ N(0,ξ^2), with ξ^2 as the noise variance and f(·) as an unknown continuous function in the Holder class of smoothness s. Discussion on the choice of the sketching matrix _n∈ℝ^m× p is provided in Section <ref>. §.§ Choice of the Sketching Matrix The sketching matrix _n ∈ℝ^m × p embeds p-dimensional features _i into m dimensions while not throwing away excessive amounts of information. The most popular linear embedding is obtained from the singular value decomposition (SVD) of =[_1:⋯:_n]^T, but they are problematic to estimate when p>>n. In contrast, random sketching matrices are often used to embed the high-dimensional features to a random subspace, and appropriate choices of the random matrices allow distances between samples to be approximately preserved <cit.>. The direct application of sketching matrices on high-dimensional features is unappealing, as it constructs random linear combinations of many unimportant features, diminishing the signal in the analysis. As an alternative approach, we design a sketching matrix that constructs random linear combinations of features with the highest marginal association with the response. To identify these features, we perform nonparametric B-spline regression of y_i onto each component of _i separately. The order of importance of the features is determined, in descending order, by the residual sum of squares of the marginal nonparametric regressions. Features with a residual sum of squares greater than a user-defined threshold are considered important features related to the response. We adopt a conservative threshold following <cit.> to select a large superset of important features, which allows for joint contributions of features in explaining the response. Let ℐ correspond to the indices of the features chosen with marginal screening and ℐ̅ be the indices of the features screened out through this procedure, such that ℐ∪ℐ̅={1,...,p}. Let _n be a permutation matrix such that _n=(_ℐ^T,_ℐ̅^T)^T. We construct a matrix _n=[_n,1:_n,2] where _n,2= 0_m× (p-|ℐ|) and _n,1 is an m× |ℐ| matrix with entries drawn independently from N(0,1), following the literature on random sketching matrices <cit.>. The resulting sketching matrix is given by _n=_n_n. §.§ Prior, Posterior and Posterior Predictive Distributions Following a Bayesian approach, we assign a zero-centered Gaussian process prior on the unknown regression function f(·), denoted by f(·)∼ GP(0,σ^2δ_θ). Here, δ_θ corresponds to an exponential correlation kernel δ_θ(_i,_j)=exp(-θ||_i-_j||) involving the length-scale parameter θ. The parameter σ^2 is the signal variance parameter and ||·|| denotes the Euclidean norm. 
A significant finding by <cit.> establishes that when features _i lie on a d-dimensional manifold 𝒪, the minimax optimal rate of n^-2s/(2s+d) (adapted to the dimension of the manifold) can be achieved in estimating f through an appropriate choice of prior distributions on θ and σ^2. However, in practical scenarios, features may not exactly lie on a manifold due to noise and data corruption, as assumed in our setting. In such instances, the application of random compression, denoted as _n_i, aids in denoising the features. The de-noised compressed features _n_i exhibit a higher concentration around the manifold compared to the original features _i. With this enhanced concentration, the theory presented by <cit.> suggests that an appropriate GP prior can yield excellent performance. In addition to denoising, the compression of the high-dimensional feature vector has a major advantage in avoiding the estimation of a geodesic distance along the unknown manifold 𝒪 between any two feature vectors _i and _i'. In practical applications, utilizing the recommended prior distributions on θ and σ^2 from <cit.> requires computationally expensive Markov Chain Monte Carlo (MCMC) sampling, mainly due to the weak identifiability of θ. The posterior computation of θ typically entails meticulous tuning, especially when dealing with high-dimensional features, imposing a significant computational burden. This article introduces an alternative approach that enables exact predictive inference from the model, entirely bypassing the MCMC algorithm in model estimation. The details of the strategy are elaborated below. Denote =(f(_n_1),...,f(_n_n))^T as the vector consisting of the function f evaluated at the sketched features _n_1,...,_n_n and as an n× n covariance matrix with (i,j)th entry δ_θ(_n _i, _n _j). With =(y_1,...,y_n)^T as the response vector, a customary Bayesian hierarchical model is constructed as |,ξ^2∼ N(,ξ^2), ( | ξ^2)∼ N(0, ξ^2ψ^2), π(ξ^2)∝1/ξ^2, where we fix the length-scale parameter θ and the signal-to-noise variance ratio ψ^2=σ^2/ξ^2. This ensures closed-form conjugate marginal posterior and posterior predictive distributions. More specifically, the marginal posterior distribution of ξ^2, given the projection matrix _n, θ, ψ^2 and 𝒟_n, is inverse gamma with parameters a=n/2 and b=^T(ψ^2+)^-1/2. The marginal posterior distribution of , given _n, ψ^2, θ and 𝒟_n, follows a scaled n-variate t distribution with degrees of freedom n, location _t and scale matrix _t, denoted by t_n(_t,_t), where _t=(+^-1/ψ^2)^-1, _t=(2b/n)(+^-1/ψ^2)^-1. Consider prediction for the response at n_new data points with corresponding covariates _̃1̃,...,_n_new. Let _new,new and _new,old denote n_new× n_new and n_new× n matrices with (i,j)th elements δ_θ(_n_i,_n_j) and δ_θ(_n_i,_n_j), respectively. The posterior predictive distribution of the response _new=(ỹ_1,...,ỹ_n_new)^T given _̃1̃,...,_n_new, _n, θ, ψ^2 and 𝒟_n, marginalizing out (, ξ^2), follows a scaled n_new-variate t-distribution t_n_new(_t,_t), where _t =ψ^2_new,old(+ψ^2)^-1 _t =(2b/n)[+ψ^2_new,new-ψ^4_new,old(+ψ^2)^-1_new,old^T]. Since the posterior predictive distribution is available in closed form, Bayesian inference can proceed from exact posterior samples. This tractability is only possible if the length-scale parameter θ and the signal-to-noise variance ratio ψ^2 are fixed. 
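A minimal sketch of these closed-form computations is given below. It assumes a fixed sketching matrix, length-scale θ, and variance ratio ψ², applies the exponential correlation kernel to the sketched features, and returns the degrees of freedom, location, and scale matrix of the predictive t-distribution; the function and variable names are illustrative.

```python
import numpy as np
from scipy.spatial.distance import cdist

def exp_kernel(Z1, Z2, theta):
    """Exponential correlation kernel exp(-theta * ||z_i - z_j||)."""
    return np.exp(-theta * cdist(Z1, Z2))

def sketched_gp_predict(X_tr, y, X_new, Psi, theta, psi2):
    """Closed-form predictive t-distribution for fixed (Psi, theta, psi^2)."""
    Z_tr, Z_new = X_tr @ Psi.T, X_new @ Psi.T           # sketched features
    n = len(y)
    S = exp_kernel(Z_tr, Z_tr, theta)                    # train-train correlations
    S_no = exp_kernel(Z_new, Z_tr, theta)                # new-old correlations
    S_nn = exp_kernel(Z_new, Z_new, theta)               # new-new correlations
    M = np.eye(n) + psi2 * S                             # I + psi^2 * Sigma
    Minv_y = np.linalg.solve(M, y)
    b = 0.5 * float(y @ Minv_y)                          # inverse-gamma rate for the noise variance
    mean = psi2 * S_no @ Minv_y
    scale = (2.0 * b / n) * (
        np.eye(len(Z_new)) + psi2 * S_nn
        - psi2**2 * S_no @ np.linalg.solve(M, S_no.T)
    )
    return n, mean, scale                                # df, location, scale matrix
```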
While it is possible to estimate their full posterior distributions through expensive Markov Chain Monte Carlo (MCMC) sampling, these parameters are inconsistently estimable for the general Matern class of correlation functions <cit.> often resulting in poorer convergence. Therefore, for the chosen sketching matrix _n, we obtain (θ, ψ^2) such that _θ,ψ^2 f(θ, ψ^2 | _n, ) ∝_θ,ψ^21/|ψ^2 + |^1/22^n/2Γ(n/2)/['(ψ^2 + )^-1]^n/2(√(2π))^n Our approach will conduct exact predictive inference using the closed form predictive distribution in (<ref>) and stack the predictive inference over different fixed values of {_n, θ,ψ^2}. §.§.§ Stacking of Predictive Distributions Let ℳ_k represent the fitted model (<ref>) with _n^(k), θ^(k),ψ^2(k) for k=1,...,K. While the sketching matrix _n^(k) is randomly generated for each k, θ^(k) and ψ^2(k) are obtained following equation (<ref>) for the choice of _n^(k). Employing the generalized Bayesian stacking framework proposed by <cit.>, we implement a stacking procedure over the predictive distribution obtained from each ℳ_k. Let p(_new|, ℳ_k) denote the predictive distribution under model ℳ_k, and p_t(_new| ) denote the true predictive distribution. Our objective is to determine the distribution in the convex hull 𝒞 = {∑_k = 1^K w_k p(· | ,ℳ_k): w_k ∈𝒮_1^K}, where 𝒮_1^K = {∈ [0,1]^K: ∑_k=1^K w_k = 1}, that is optimal with respect to some proper scoring function. Using the logarithmic score, which corresponds to the KL divergence, we seek to find the vector = (w̃_1, …, w̃_K) such that = max_∈𝒮_1^K1/n∑_i = 1^n log( ∑_k = 1^K w_k p_k,-i(y_i)), where _-i=(y_j:j≠ i, j=1,...,n)^T and p_k,-i(y_i) = p(y_i|_-i, ℳ_k) has a closed from univariate t-distribution with parameters μ̃_-i^(k) and Σ̃_-i^(k) obtained using the formula for posterior predictive distribution given in equation (<ref>). In practice, calculating the predictive densities p_k,-i(y_i) one at a time is computationally expensive as the calculation of Σ̃_-i^(k) requires inverting an (n-1)×(n-1) matrix for every k=1,...,K and i=1,...,n. To avoid this, we randomly split the data into S=10 disjoint folds of approximately equal size, (_(1), _(1)), …, (_(S), _(S)), and compute _(s)|_(1),...,_(s-1),_(s+1),...,_(S) for every s=1,...,S, which follows a multivariate t-distribution with parameters obtained using equation (<ref>). If the ith sample belongs to the sth fold, we will replace p_k,-i(y_i) in (<ref>) by p_k,(s)(y_i), where p_k,(s)(y_i) represents the marginal distribution of y_i from _(s)|_(1),...,_(s-1),_(s+1),...,_(S). This strategy requires inverting an (n-n/S)× (n-n/S) matrix only S times (assuming that all folds are of equal size), which leads to substantial computational benefits. No analytical solution to this non-convex constrained optimization problem in (<ref>) is available, but first and second derivatives are easily obtained to construct an iterative optimizer. The optimal distribution provides a pseudo posterior predictive distribution given by p̃(_new|)=∑_k=1^Kw̃_k t_n_new(_t^(k),_t^(k)), where _t^(k) and _t^(k) are obtained from equation (<ref>) by evaluating _t and _t at _n^(k), θ^(k), ψ^(k). The pseudo posterior predictive distribution is further used to draw point prediction and 95% predictive interval to quantify predictive uncertainty. Figure <ref> offers a flowchart outlining the proposed framework. While Bayesian model averaging (BMA) is a common method for combining multiple distributions, its applicability in our context is limited for several reasons. 
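Before contrasting stacking with model averaging, the weight-optimization step in (<ref>) can be illustrated with a short sketch. It assumes that the matrix of held-out predictive densities p_{k,(s)}(y_i) has already been assembled from the S folds, and it uses a simple fixed-point (EM-style) update for the simplex-constrained, concave objective rather than the derivative-based iterative optimizer described above; names are illustrative.

```python
import numpy as np

def stacking_weights(P, n_iter=500, tol=1e-10):
    """Stacking weights maximizing (1/n) * sum_i log sum_k w_k * P[i, k].

    P[i, k] is the held-out predictive density of model k evaluated at y_i,
    computed with the fold containing observation i left out. The update
    below is the standard EM step for mixture proportions with fixed
    component densities; it keeps w on the simplex at every iteration.
    """
    n, K = P.shape
    w = np.full(K, 1.0 / K)
    for _ in range(n_iter):
        mix = P @ w                                   # n-vector of mixture densities
        w_new = w * np.mean(P / mix[:, None], axis=0)
        if np.max(np.abs(w_new - w)) < tol:
            w = w_new
            break
        w = w_new
    return w / w.sum()
```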
Firstly, the fitted models (<ref>) with randomly sketched features are likely to deviate from the true model, placing us outside the ℳ-closed setting where BMA is optimal. Additionally, stacking is designed to determine weights for optimal prediction, whereas asymptotically, BMA assigns full weight to the “best" single model closest in KL divergence to the true model <cit.>. However, when the true model lies outside the space of fitted models, it may be more advantageous to leverage multiple models in predictive inference. Subsequent empirical experiments demonstrate stacking as a powerful tool for drawing posterior predictive inference in our setting. § SIMULATION STUDY We evaluate the performance of the proposed Sketched Gaussian Process (SkGP) regression across various simulation scenarios, exploring different structure of the manifold (𝒪), different feature dimensions (p) and noise levels in the features (τ^2) to analyze their impact. In all simulations, the out-of-sample predictive performance of the proposed SkGP regression is compared with that of uncompressed Gaussian Process (GP), Bayesian Additive Regression Trees (BART) <cit.>, Random Forests (RF) <cit.>, and deep neural network (NN). We also explore sketched versions of BART and RF, referred to as Sketched BART (SkBART) and Sketched Random Forest (SkRF), respectively, where a single projection matrix is generated to sketch the features, allowing for faster implementation. Each of these methods are applied on |ℐ|=1000 screened features having highest marginal association with the response. As a default in this analysis, we set m= 60. We offer detailed sensitivity analysis with varying choices of the number of screened features |ℐ| and the sketching dimensions m. §.§ Simulated Data Generation During data simulation, we explore specific scenarios where the response distribution follows a nonlinear function of d-dimensional coordinates for a manifold 𝒪⊆ℝ^p, embedded in a high-dimensional ambient space. Two distinct choices for 𝒪 and their corresponding response distributions are simulated. 𝒪 is a swiss roll and d=2. For the swiss roll, we sample manifold coordinates, o_1 ∼ U(3π/2, 9π/2), o_2∼ U(0, 3). A high dimensional feature = (x_1, …, x_p) is simulated according to x_1 = o_1 cos(o_1) + η_1, x_2 = o_2+η_2, x_3 = o_1 sin(o_1) + η_3, x_i = η_i, i ≥ 4. The response y have a non-linear relationship with these features and is simulated following, y = sin(5π o_1) + o_2^2 + ϵ, ϵ∼ N(0,0.02^2), where η_1, …, η_p ∼ N(0, τ^2). Notably, and y are conditionally independent given o_1,o_2 which is the low-dimensional signal manifold. In particular, lives on a (noise corrupted) swiss roll embedded in a p-dimensional ambient space (see Figure <ref>), but y is only a function of coordinates along the swiss roll 𝒪. 𝒪 is a torus and d=3. For the torus, we consider x_1=o_1+η_1, x_2=o_2+η_2 and x_3=o_3+η_3 where o_1,o_2,o_3 lie on a three dimensional torus with interior radius 1 and exterior radius 3 (see Figure <ref>), such that (3-√(o_1^2+o_2^2))^2+o_3^2=1, and set x_i=η_i for i≥ 4. The feature noise η_1,...,η_n are generated i.i.d. from N(0, τ^2). The response is generated as, y = o_2^2 + sin(5π o_3) + ϵ, ϵ∼ N(0, 0.1^2). The geodesic distance between two points on both a swiss roll and a torus can substantially differ from their Euclidean distance in the ambient space ℝ^p. The swiss roll, in particular, poses a challenging setup for SkGP, as points on 𝒪 that are close in a Euclidean sense can be quite far in a geodesic sense. 
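The swiss-roll simulation just described can be generated directly from the stated equations; the following sketch (with illustrative names) produces the feature matrix and response for one replicate.

```python
import numpy as np

def simulate_swiss_roll(n=100, p=2000, tau2=0.01, rng=None):
    """Noisy swiss-roll example: X is n x p, y depends only on (o1, o2)."""
    rng = np.random.default_rng(rng)
    o1 = rng.uniform(3 * np.pi / 2, 9 * np.pi / 2, size=n)
    o2 = rng.uniform(0.0, 3.0, size=n)
    X = rng.normal(0.0, np.sqrt(tau2), size=(n, p))   # x_i = eta_i for i >= 4
    X[:, 0] += o1 * np.cos(o1)
    X[:, 1] += o2
    X[:, 2] += o1 * np.sin(o1)
    y = np.sin(5 * np.pi * o1) + o2**2 + rng.normal(0.0, 0.02, size=n)
    return X, y
```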
To assess the impact of the number of features (p) and noise levels of the features (τ^2) on the performance of the competitors, various simulation scenarios are considered by varying p=2000, 10000 and τ^2=0.01,0.03,0.05,0.1. For each of these simulation scenarios, 50 datasets are generated, and metrics such as mean squared prediction error (MSPE), coverage, and lengths of 95% predictive intervals (PI) are calculated across all replicates. All simulations set the sample size n=100 and the number of predicted samples n_new=100. §.§ Point prediction Tables <ref> and <ref> display the MSPE averaged over 50 replications for all the competing methods in the swiss roll and torus examples, respectively. Values in parentheses represent the standard error of MSPE over 50 replicates. Both Tables <ref> and <ref> show that incorporating randomly sketched features into the GP model within the SkGP framework yields strong predictive performance, significantly surpassing the performance of the neural network. For both p=2000 and p=10000, when the manifold is affected by low noise, SkGP significantly outperforms GP, BART, and RF with unsketched features. While SkBART emerges as the second-best performer in scenarios with very low noise in the manifold (τ^2=0.01), its performance declines notably with an increase in the noise level in the features. In comparison, SkGP effectively mitigates the impact of noise in the features, but there exists a tipping point (depending on the structure of the underlying manifold 𝒪 and sample size n) where noise distorts the manifold excessively, causing SkGP to perform similarly to other competitors. This is observed in the MSPE values corresponding to τ^2=0.1. Among the sketched competitors, SkRF exhibits notably inferior performance compared to both SkGP and SkBART in all simulation examples. While theoretically the performance of SkGP should remain similar for both p=2000 and p=10000 when all features lie exactly on a low-dimensional manifold, in practice, we observe a significant decline in the performance of SkGP with an increase in p. This is attributed to the substantial impact of noise corruption on the manifold, influencing predictive performance. §.§ Predictive Uncertainty To evaluate quality of predictive uncertainty, we calculate the coverage and length of 95% predictive intervals (PI) for SkGP and other competitors. While frequentist methods, like SkRF and RF, do not inherently provide coverage probabilities with point estimates, we employ a two-stage plug-in approach for them: (i) estimate the regression function in the first stage, and (ii) construct 95% PI based on the normal distribution centered on the predictive mean from the regression model, with variance equal to the estimated variance in the residuals. Coverage probability boxplots over 50 replications for all simulation cases in the swiss roll example and torus example are presented in Figures <ref> and <ref>, respectively. Figure <ref> displays the median lengths of the 95% PI for all competitors for all simulation cases in both the swiss roll and torus examples. The results indicate that in all simulation scenarios, the coverage of 95% predictive intervals (PI) for SkGP is close to the nominal level. Although the intervals tend to widen with both increasing noise in the manifold (i.e., higher τ^2) and an increase in the number of features p, the effect is less pronounced with p compared to τ^2. Both BART and SkBART exhibit poor coverage, with significantly narrower PIs. 
RF and SkRF show undercoverage (around 80% coverage) and wider PIs than SkGP. In the swiss roll example, the coverage of 95% PI for GP is similar to that of SkGP, but the intervals from GP are approximately twice as wide as those from SkGP. In the torus example, GP and SkGP perform similarly for p=2000. However, when p=10000, SkGP demonstrates much narrower predictive intervals than GP, with a similar coverage, as the noise increases in the manifold. Overall, the results suggest that SkGP is more precise in terms of predictive uncertainty and robust compared to its competitors concerning the noise in the manifold. §.§ Computation Time The main objective in developing SkGP was to improve computational scalability in large p settings. For a specific choice of _n, θ and ψ^2, the computation time for SkGP is primarily influenced by two factors: (a) computing the inverse of an n× n matrix; and (b) multiplying an m× |ℐ| matrix with an |ℐ|× n matrix. Steps (a) and (b) entail computational complexities of order n^3 and mn|ℐ|, respectively. Since the posterior predictive distribution is available in closed forms, these computations are only needed once for a specific choice of _n, θ and ψ^2. We will parallelize the computation across various choices of _n,θ,ψ^2 on different CPUs. The combination step using stacking requires inverting S matrices each of dimension (n-n/S)× (n-n/S), incurring a complexity of the order S(n-n/S)^3. Since the focus of this article is on moderate n, all computational steps are extremely efficient leading to rapid computation of SkGP. Figure <ref> shows the computation times when the number of features increase and sketching dimension is held fixed (m = 60). Computation times for non-sketched tree based methods increase linearly with the number of features, while computation time for sketched methods remain constant, as they only depend on the number of screened features ℐ. Figure <ref> shows the computation times as the sketching dimension is increased while the original number of screened features is held constant (p = 1000). Considering that the non-sketched tree based methods are not dependent on sketching dimension m, their computation times remain constant. The computation times for the sketched methods increase linearly with sketching dimension. Importantly, SkGP achieves computation time comparable to frequentist approaches, yet being able to allow principled Bayesian predictive inference. §.§ Sensitivity to the choice of m and |ℐ| We present investigation into the choice of the number of features |ℐ| included through highest marginal association with the response and the dimension of the sketching matrix m applied to this |ℐ|-dimensional feature vector. Figure <ref> illustrates the MSPE values for the swiss roll example with varying numbers of included features. Considering the small true dimensions of the swiss roll manifold and the fact that the response is related to only through the manifold in the swiss roll example, the inclusion of more redundant features in the regression leads to a performance loss, as evidenced by the increasing MSPE values. However, this decline in performance is more pronounced when the swiss roll is affected by noise with higher variance. This aligns with the fact that the accuracy of estimating the regression function depends solely on the intrinsic dimension of the swiss roll and is unaffected by the number of screened features when the features lie on a manifold. 
When the noise variance is low, resulting in features that approximately lie on the swiss roll, the performance does not change significantly with variations in the number of screened features. Figure <ref> illustrates the impact of the sketching dimension m on the performance of SkGP. The figure indicates that as the sketching dimension m increases, MSPE decreases up to a certain point, contingent on the sample size and the structure of the manifold, after which it starts increasing again. This observation is reasonable as, with a low-dimensional manifold 𝒪, increasing the sketching dimension m introduces redundant randomly sketched features to the model, leading to a natural decline in performance. In practice, we observed that a sketching dimension of around m ∼ 50 works well for a diverse range of simulation examples when the intrinsic dimensionality d of the manifold is low. § ANALYSIS OF OUTDOOR AIR POLLUTION DATA WITH SATELLITE IMAGES We will apply the proposed approach and compare it with relevant competitors in the analysis of air quality using multi-band satellite images over time (see Section 1.1). The air quality dataset consists of measurements taken at the EPA federal reference monitor in Las Vegas, Nevada, spanning almost daily measurements, sometimes with multiple readings in a day, from January 2019 to July 2022. This results in 1667 air quality measurements, as depicted in the first row of Figure <ref>. For each air quality sample, multi-band satellite images covering the location of the air quality monitor have been acquired, including four wavelength bands: blue, green, red, and near-infrared. The left panel of Figure <ref> displays a near-infrared image on a representative day. These data were obtained from Planet using version 1 of their PlanetScope instrument <cit.>. Notably, multi-band satellite imagery data are easily obtainable, whereas the installation of monitors measuring air quality is expensive. Hence, a key scientific goal is to predict air quality readings given the high-dimensional multi-band images. To achieve this, at each time point, the 128 × 128 = 16384 pixels of the four bands of the multi-band images are vectorized and concatenated into a 65536 dimensional vector. Although the vectorized images are p dimensional, the estimated intrinsic dimension (ID) for the images is 4.18 with a standard error of 0.1, determined using the two-nearest neighbor (NN) method <cit.>. This indicates that the high-dimensional vectorized images lie on a lower-dimensional manifold, motivating the application of our proposed SkGP approach to this data. Out of 1667 samples, we select every fourth sample point for the test set, resulting in n=1334 training samples and n_new=333 test samples. As depicted in Figure <ref>, the raw air quality monitor data displays characteristics such as non-negativity, heavy-tailed distributions, non-stationary patterns, and periodic behavior. To meet the normality assumption for the error in (<ref>), we apply a log transformation and standardize the response, ensuring a mean of zero and a variance of one. The second row of Figure <ref> illustrates the log-transformed and standardized air quality data. Many of the pixels in the satellite images are zero at all times and are included to buffer the image to fit into a square. In our analysis, these zeros are removed. The columns of the image predictor matrix, with zeros removed, have dimension 1334× 33068 for the training data, and are pixel-wise standardized to have zero mean and unit variance. 
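For reference, a minimal form of the two-nearest-neighbour intrinsic-dimension estimator used above is sketched below. It implements only the maximum-likelihood version based on the Pareto law for the ratio of the second to the first nearest-neighbour distance; the published estimator additionally offers a regression formulation and typically discards the largest ratios, which we omit here.

```python
import numpy as np
from scipy.spatial.distance import cdist

def twonn_intrinsic_dimension(X):
    """Two-nearest-neighbour intrinsic-dimension estimate (MLE form).

    Under the two-NN model, mu_i = r2_i / r1_i follows a Pareto law with
    shape equal to the intrinsic dimension d, so the maximum-likelihood
    estimate is d = n / sum_i log(mu_i).
    """
    D = cdist(X, X)
    np.fill_diagonal(D, np.inf)
    r = np.sort(D, axis=1)
    mu = r[:, 1] / r[:, 0]
    return len(X) / np.sum(np.log(mu))
```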
We focus our performance comparison on SkGP, BART and SkBART, considering them as the top three competitors based on the simulation studies. While GP is also among the top performers in the simulation studies, it is excluded from the comparison due to its computational demands and memory intensity for this dataset. While the data has a temporal component, our current analysis overlooks its time-varying nature. Incorporating the temporal dynamics and capturing the evolving associations between samples is a direction we plan to explore in future work. §.§ Results The Nonparametric Independence Screening (NIS) method outlined in <cit.> identifies 18640 features out of 33068 features which are marginally related to the air quality. Figure <ref> displays the pixels selected by NIS in a representative multi-band image feature. Interestingly, even though the screening procedure is independent for each pixel, contiguous patched of pixels and boundaries around notable imaging patterns are screened out. All competing models are implemented using n=1334 samples, where each sample comprises air quality measurements as responses and p=18640 features. Predictive inferences are generated for n_new=333 holdout samples. Figure <ref> displays the point predictions and 95% predictive intervals for SkGP across all time points, effectively capturing the trend in air quality responses. Table <ref> highlights the superior performance of SkGP compared to all competitors, as evidenced by its lowest MSPE value. Although all competitors exhibit under-coverage, potentially due to neglecting the time-varying nature of the data, SkGP achieves the highest coverage (close to 80%) with predictive intervals of comparable length to BART or SkBART. Overall, these results underscore SkGP's effectiveness in modeling the non-linear regression relationship between air quality and multi-band satellite images. § CONCLUSION AND FUTURE WORK This article is the first to present a novel Bayesian approach for predictive inference of outdoor air quality using high-resolution satellite images, when these images lie on a low-dimensional noise-corrupted manifold. Our methodology exploits two powerful ideas, data sketching and stacking, to eliminate the necessity for computationally demanding manifold structure estimation, providing accurate point predictions and predictive uncertainties. The computation of the posterior predictive distribution does not rely on MCMC sampling, and our framework is amenable to parallel implementation, resulting in substantial reductions in computation and storage costs. Empirical findings underscore the significantly improved point prediction and predictive uncertainty of our approach compared to existing methods. Future research directions will extend our framework to handle large sample sizes using distributed Bayesian inference <cit.>, exploring applications to non-Gaussian or multivariate outcomes, and simultaneous estimation of the intrinsic dimensionality of the manifold alongside predictive inference for the outcome.
http://arxiv.org/abs/2406.18692v1
20240626185353
Pulsational mode stability in complex EiBI-gravitating polarized astroclouds with (r, q)-distributed electrons
[ "Dipankar Ray", "Pralay Kumar Karmakar" ]
astro-ph.GA
[ "astro-ph.GA", "physics.plasm-ph" ]
Noise reduction via optimal control in a light-matter quantum system Juan Carlos Retamal July 1, 2024 ==================================================================== § INTRODUCTION Interstellar molecular clouds, vast swirling heterogeneous dust and gas mixtures, serve as the birthplaces for diverse bounded astrophysical structure evolution, such as stars, planets, and so forth <cit.>. The fundamental physics of bounded structure formation has remained as a long-standing obscure research area yet to be well illuminated. In 1902, Sir James Hopwood Jeans, a renowned British astronomer and theoretical physicist, has introduced the Jeans (gravitational) instability criterion to elucidate the gravitational collapse dynamics in nebular gaseous media. According to his gravitational collapse theory <cit.>, the stellar structure formation process gets naturally initiated with the onset of gravitational instability. This instability gets triggered against the usual hydrostatic equilibrium configuration conjugationally established by the inward gravitational pressure force and the outward thermal pressure force. When the mass or length of a molecular cloud exceeds the critical Jeans mass-length scale limits, the gravitational pressure force dominates the thermal pressure force. It results in the collapse of giant molecular clouds fragmented into local cloudlets, consequently condensed in the form of stellarsimals, planetesimals, protostructures, and so forth <cit.>. In self-gravitating complex (dusty) plasmas composed of electrons, ions, neutral, and charged (positive or negative) dust, the presence of charged dust grains generates a repulsive electrostatic force that opposes the inward gravitational force. Additionally, because of the greater mass of the dust particles against the lighter plasma compositions (electrons and ions), the self-gravitational effects of the dust species become significant in extensive astrophysical systems, like dust molecular clouds (DMCs). The DMCs are the dense sites of the interstellar medium (ISM), consisting of gas and micron to sub-micron-sized dust (silicate, graphite, magnetite, etc.) <cit.>. The dust grains present in the partially ionized DMCs modify the traditional Jeans instability. Thus, in order to adapt the Jeans instability for structure formation in complex self-gravitating DMCs, a Jeans-like hybrid mode known as the pulsational mode of gravitational collapse (PMGC) has been introduced in 1999 by Dwivedi et al. <cit.>. This hybrid instability emerges in a partially ionized DMC due to the interaction between gravitational and electrostatic forces, termed as gravito-electrostatic coupling <cit.>. Since the initial work on the PMGC instability, several researchers have shown a keen interest in this mode of gravitational collapse. Consequently, various scenarios in both linear and non-linear regimes have been explored. Extensive analyses of the PMGC dynamics in the presence of dust charge fluctuations have been conducted, demonstrating that dust charge fluctuations play a crucial role in the PMGC dynamics excitable in the DMCs <cit.>. Subsequently, the area of PMGC dynamics have been extended to the non-linear regime. Various non-linear model formalisms have accordingly been developed in various conditions <cit.>. Apart from the above, the effects of various parameters, such as viscoelasticity <cit.>, turbulence <cit.>, ion drag <cit.>, and non-thermality <cit.> on the PMGC dynamics have been investigated. 
Recently, this field has further been expanded to modified gravity frameworks, such as the EiBI gravity <cit.>, zeroth-order gradient effects <cit.>, and so forth. It is seen in the literature that the impact of polarization force on the PMGC dynamics, which is a critical aspect of inhomogeneous plasmas, has yet to be extensively addressed. In non-uniform plasmas with uneven distributions of electrons and ions, characterized by the absence of dust charge fluctuations, the deformation of the plasma Debye sheaths gives rise to a new force known as the dust polarization force (F_p). It is in addition to the conventional Coulomb electrostatic force (F_e=q_d E; where, q_d and E stand for the dust charge and electric field, respectively) <cit.>. This sheath-polarization force acting on the charged dust particles vanishes entirely in uniform plasma environments. In complex plasmas comprising of negatively charged dust grains, the Debye sheath is formed by ions around the dust grains. The distortion of such plasma sheaths results in the development of polarization force <cit.>. The expression for the polarization force is given as F_p= -q_d^2∇λ_D /(2λ_D^2); where, λ_D = λ_Di/√(1+ (λ_Di/λ_De)^2) denotes the effective Debye length <cit.>. Here, λ_De(i) = √(T_e(i) / 4π e^2 n_e(i)) represents the electron (ion) Debye length, with T_e(i) being the electron (ion) temperature, e is the elementary charge (electronic), and n_e(i) corresponds to the number density of electrons (ions). By utilizing the approximation T_i n_e ≪ T_e n_i, which is suitable for negatively charged dust, and T_i ∇ n_i=-n_i e ∇ϕ <cit.>, a simplified form for the polarization force can be derived as F_p = - q_d R (n_i/n_i0)^1/2∇ϕ <cit.> (refer to [sec:appendix]Appendix for details). Here, R=q_d e/(4T_iλ_Di0) serves as the polarization interaction parameter that determines the influence of the polarization force, with n_i0 and ϕ representing the equilibrium ion number density and electrostatic potential, respectively. It is evident from here that a higher dust charge (or lower ion temperature) results in a stronger polarization force and vice versa. Numerous studies in the literature have relied on the Maxwellian velocity distribution of the lighter constitutive species to elucidate plasma dynamics <cit.>. However, advancements in observational capabilities have unveiled that the particle distribution of space plasmas significantly deviates from the Maxwellian velocity distribution. In 1983, flat-top characteristics in the electron velocity distributions at the boundary of the Earth's bow shock have been reported <cit.>. Multiple spacecraft missions in the downstream region of the bow shock have subsequently observed this behaviour of low-energy electrons. The ARTEMIS spacecraft has provided further evidence for this phenomenon in the geomagnetic tail region <cit.>. Additionally, similar flat-top distribution features near the magnetic reconnection regions have been reported by analyzing data collected from the four Cluster satellites <cit.>. Observational studies of such space plasmas have established the widespread presence of particle populations characterized by high or super-thermal energy tails that do not adhere to the standard Maxwellian velocity distribution. These non-Maxwellian (non-equilibrium) distributions exhibit an approximate power-law dependence in velocity space, implying a substantial fraction of particles with energies significantly greater than the average thermal energy value. 
Examples in this context are the interstellar medium <cit.>, planetary nebulae <cit.>, planetary magnetospheres <cit.>, solar wind <cit.>, and so forth. To model the non-Maxwellian behaviours, various non-thermal velocity distribution laws have been proposed, such as the kappa distribution <cit.>, q-nonextensive distribution <cit.>, Cairns distribution <cit.>, and so on. In this context, the kappa distribution law has been demonstrated to be effective in characterizing the high-energy particle tails. Likewise, the kappa distribution is also characterized by a single non-thermal spectral index (κ), which typically controls the high-energy tails. To explain both the flat-top and high-energy tail features simultaneously, Qureshi et al. <cit.> have introduced a new bi-spectral non-Maxwellian electron velocity distribution function called the generalized (r,q)-distribution function in 2004. The (r,q)-distribution law specifies two non-thermal spectral indices (r and q) that regulate the electron population in different parts of the energy spectrum. Specifically, r controls the electron population at the flat-tops, while q governs the high-energy tails <cit.>. The (r,q)-distribution is a generalized form of the kappa distribution, which can be reduced to the kappa distribution (for r=0 and q→κ +1) as well as the Maxwellian distribution (for r=0 and q→∞). Therefore, the (r,q)-distribution thermo-statistically gives more flexibility in stability analyses as compared with other non-thermal velocity distribution laws. Einstein's general theory of relativity (GTR) has been proven to be one of the most successful theories explaining gravity. After the introduction of GTR in 1915, Eddington was the very first person to experimentally verify GTR in 1919 by measuring the bending of light during a solar eclipse <cit.>. Since then, GTR has passed numerous tests (in weak-field regime) in diverse astrophysical environments, such as the solar system <cit.>, with binary pulsars <cit.>, black hole binaries <cit.>, and in proximity to the Galactic Centre <cit.>. Despite the remarkable accomplishments of GTR, it falls short in tackling certain inherent challenges. For instance, it fails to explain the rapid cosmic expansion of the universe, the formation of singularities in the space-time fabric during the gravitational collapse of stars and in the early universe <cit.>, as well as the enigma surrounding dark energy and dark matter <cit.>. To overcome these critical issues, many extensions of GTR have been proposed, such as Scalar-Tensor-Vector gravity (STVG) <cit.>, 4D Einstein-Gauss-bonnet gravity (4DEGB) <cit.>, f(R) gravity <cit.>, Eddington inspired Born-Infeld (EiBI) gravity <cit.>, etc. After the seminal work on the Jeans instability of self-gravitating objects in non-relativistic Newtonian gravity <cit.>, it has been further extended to GTR <cit.>. Recently, the impact of different alternative gravity formalisms on the Jeans collapse dynamics has been thoroughly investigated <cit.>. Among the various extensions, the EiBI gravity has been successful in demonstrating the elimination of such singularities in both the primordial universe and subsequent gravitational collapse scenarios <cit.>. The development of EiBI gravity has been motivated by the Born–Infeld action for nonlinear electrodynamics and it has been formulated using Eddington's theory of gravity <cit.>. 
Introduced by Bañados and Ferreira in 2010, the EiBI gravity theory has been scrutinized in several astrophysical scenarios, such as the solar system <cit.>, compact objects <cit.>, and even in cosmological scales <cit.>; hence, it can be treated as a reliable extension of GTR. The extensions of GTR are generally obtained through modifications to the Einstein-Hilbert action expressed as (in the unit of c) S_EH=1/(16π G)∫ d^4x √(-g)(R-2Λ) <cit.>. Here, g represents the determinant of the metric g_μν, R is the Ricci scalar defined as R=g^μνR_μν, G is the universal gravitational constant, and Λ denotes the cosmological constant. The famous Einstein's field equations can be obtained by varying the Einstein-Hilbert action with respect to the metric g_μν as R_μν-(1/2) g_μνR + Λ g_μν = 8π G T_μν <cit.>; where, R_μν is the Ricci Tensor and T^μν=(g)^-1/2∂ S_M/∂ g_μν denotes the stress-energy tensor with S_M being the matter action. It is noteworthy that following the remarkable works by Deser et al. <cit.> and Vollick <cit.>, Bañados and Ferreira have considered a Born-Infled type action of the form S_EiBI=1/(8πχ G)∫ d^4x (√(|g_μν+χ R_μν)|-λ√(-g))+S_M[g,ψ_M] <cit.>. Here, χ serves as the independent parameter for EiBI gravity, λ ( 0) is a dimensionless constant, and ψ_M represents the matter field. It is noteworthy that in the limit where χ R ≪ 1 and λ=(Λχ+1), the EiBI action reduces to the conventional Einstein-Hilbert action <cit.>. The field equations of EiBI gravity can be obtained from the EiBI action (S_EiBI) using the Palatini formalism by introducing an auxiliary metric q_μν=g_μν+χ R, leading to R_μν≈Λ g_μν+8π G [T_μν-(1/2)g_μνT]+8π G χ[S_μν-(1/4)Sg_μν]+𝒪(χ^2) <cit.>. Here, S_μν=T_μν-(1/2)TT_μν is the source tensor, while S and T represent the traces of S_μν and T_μν, respectively. The third term on the right-hand side of the EiBI-modified field equations introduces corrections to the standard Einstein's field equations. As χ approaches zero, the familiar Einstein's field equations are recovered. It is clear from the above scenarios that the EiBI gravity, non-thermal (r,q)-distributed electrons, and dust-polarization force play significant roles in the collapse of a molecular cloud. Motivated by this, a generalized semi-analytic model is proposed in this paper incorporating the aforementioned factors to study the PMGC stability dynamics of a complex DMC. After the introduction given in section <ref>, the rest of the paper is organized as follows. Section <ref>, outlines the physical model formalism and the basic governing equations describing the system. Section <ref>, explains the mathematical method for linearization of the governing equations followed by the spherical Fourier analysis. A generalized linear dispersion relation (quartic in degree) is obtained using the method of decoupling (elimination). In section <ref>, the numerical outcomes, along with figures, are discussed. Finally, the results are concluded with a brief summary and future scope of this investigation in section <ref>. § PHYSICAL MODEL AND MATHEMATICAL FORMALISM To study the linear PMGC dynamics, a weakly coupled (Γ_Cou≪ 1), inhomogeneous, and globally quasi-neutral self-gravitating DMC is considered, ignoring the effects of rotation, magnetic field, and tidal action resulting from the gravitational effects of distant astrophysical objects. 
It consists of four species, namely, (r,q)-distributed (non-thermal) electrons, Maxwellian (thermal) ions, neutral and negatively charged dust particles of identical size (r_d) and mass (m_d). The Coulomb coupling parameter (Γ_Cou) associated with the system refers to the ratio of dust potential energy to dust thermal energy and is mathematically expressed as Γ_Cou=(1/4πϵ_0)[q_d^2/(a_d T_d)](in cgs, Γ_Cou= q_d^2/(a_d T_d)); where, a_d = [3/(4π n_d)]^1/3 denotes the Wigner-Seitz radius of the dust grains and T_d represents the dust temperature (in eV). A system is said to be weakly coupled for Γ_Cou≪1 and strongly coupled for Γ_Cou≫1 <cit.>. The considered weakly coupled DMCs are indeed partially ionized in nature with an estimated degree of ionization ∼ 10^-7 <cit.>. Owing to their low ionization level, both neutral and charged dust particles coexist with electrons and ions in such DMCs. A further simplification is considered by ignoring the dust charge fluctuations which is justifiable when the hydrodynamic time scale (τ_d) significantly exceeds the grain charging time scale (τ_c), i.e., τ_d ≫τ_c <cit.>. Here, τ_d = ω_pd^-1, ω_pd being the dust plasma frequency and τ_c = ν_d^-1, ν_d denotes the grain charging frequency. In a typical DMC, the hydrodynamic time scale is approximately τ_d ∼ 10^-2 seconds, while the dust charging time scale is τ_c ∼ 10^-8 seconds <cit.>. Therefore, dust charge fluctuations can be safely disregarded for this particular DMC. The DMC fragmentation process due to PMGC instability can be illustrated with a schematic diagram as in figure <ref>. Here, the complex DMC (figure <ref>a) is subjected to various external perturbations, such as explosive shock waves propagating from nearby supernova explosions, cloud-cloud collisional interactions, etc. These perturbations may produce some highly dense regions compared to the overall density of the DMC in the form of diverse parametric inhomogeneities (gradient effects). Consequently, these high-density regions start to contract due to the inward self-gravitational pull among the constitutive heavier dust particles, leading to a rise in outward thermal pressure. However, due to the very low initial density of the cloud, the thermal pressure can escape from the parent cloud in the form of various deceptive processes, such as thermal conduction, radiation, etc. <cit.>. Hence, the outward thermal pressure does not balance the inward gravitational pressure at the initial stage of collapse. In contrast, the effective electrostatic repulsion (electrostatic force between the identical charged particles and polarization force) tries to counter the gravitational collapse of the cloud. Depending upon the critical Jeans mass-length limit, a gravito-electrostatic equilibrium is established, forming a stable cloud region (figure <ref>c). However, if the mass of the cloud exceeds the Jeans critical mass limit, the self-gravitational force prevails over the electrostatic repulsion force. It results in a growing instability in the cloud (figure <ref>f), leading to the fragmentation of the original cloud into (smaller) cloudlets (figure <ref>b). These equilibrium and collapse processes continue till the cloud is fragmented to a smaller stellar mass scale, resulting in a protostar formation (figure <ref>e). 
As the cloud density and temperature increase with each condensation stage, they eventually become opaque to the thermal radiation, and the radiation can no longer escape from the stellar body, eventually establishing a hydrostatic equilibrium <cit.>. The electrons are assumed to follow the generalized (r,q) thermo-statistical distribution law given in generic notations <cit.> as F_r,q(v)= α/π v_th^3(1+1/q-1[v^2-2eϕ/m_e/β(2T_e/m_e)]^(r+1))^-q; α=3Γ[q](q-1)^-3/(2+2r)/4β^3/2Γ[q-3/2+2r]Γ[1+3/2+2r], β = 3(q-1)^-1/(1+r)Γ[q-3/2+2r]Γ[3/2+2r]/2Γ[q-5/2+2r]Γ[5/2+2r]. The number density of non-thermal (r,q)-distributed electrons (n_e) can be obtained by integrating (<ref>) over a velocity space as <cit.> n_e=n_e0[1+Aeϕ/T_e+B(eϕ/T_e)^2]; A=(q-1)^-1/(1+r)Γ[q-1/2+2r]Γ[1/2+2r]/2βΓ[q-3/2+2r]Γ[3/2+2r], B=-(1+4r)(q-1)^-2/(1+r)Γ[q+1/(2+2r)]Γ[-1/2+2r]/8β^2Γ[q-3/2+2r] Γ[3/2+2r]. Due to the heavier mass of the ions than that of the electrons, the inertia of the ions would be larger than that of the electrons. Thus, the ions are considered to follow the Maxwellian velocity distribution on a slow dust inertial time scale, with its number density given as <cit.> n_i=n_i0exp(-eϕ/T_i). In equations (<ref>)-(<ref>), Γ [z] (defined simply as Γ [z] =∫_0^∞ e^-x x^z-1 dx) denotes the so-called characteristic gamma function <cit.>. The symbols r and q represent the non-thermal spectral indices specifying the electronic thermo-statistics. The variable ϕ represents the electrostatic potential, v_th=(2T_e/m_e)^1/2 denotes the thermal velocity of the electrons, T_e(i) corresponds to the electron (ion) temperature (in eV), m_e is the mass of an electron, and n_e(i) is the number density of electrons (ions), with n_e0(i0) being their corresponding equilibrium number densities. It is crucial to emphasize that the spectral indices r and q in the (r,q)-distribution law must follow the conditions q>1 and q(r+1)>5/2 <cit.>. In the special case when r=0 and q →∞, the (r,q)-distribution transitions to the Maxwellian distribution; where, the multi-order non-thermality coefficients A and B become A=1 and B=1/2. The dust particles, when emerged in a plasma environment, become electrically charged (through contact electrification process) due to the random collisions of lighter constituent species with the dust grains. As the electrons possess a greater thermal velocity compared to the ions, they reach the dust grain surface significantly faster than the ions. This rapid accumulation of the electrons on the grain surfaces makes them negatively charged <cit.>. However, due to the partially ionized nature of the cloud, not all dust grains are electrically charged; hence, both neutral and negative dust grains are present in the system. Thus, the neutral and negatively charged grains are considered as two different fluids. Furthermore, in the context of studying the collapse of a dense DMC, it is reasonable to consider the exchange of momentum between charged and neutral dust grains. The rates of binary collisions for momentum transfer from the charged to the neutral dust particles and vice versa are given by ν_cn≈π r_d^2 n_dn0 v_td and ν_nc≈π r_d^2 n_dc0 v_td, respectively. Here, r_d is the radius of the dust grain, n_dco (n_dn0) is the equilibrium number density of charged (neutral) dust, and v_td=(2T_d/m_d)^1/2 is their common thermal velocity. The frictional force term can be neglected for neutral dust grains under the condition that the Jeans mode frequency ω≫ν_cn with ν_cn/ν_nc=n_dn0/n_dc0≫1 <cit.>. 
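The non-thermality coefficients A and B entering the electron density (<ref>) can be evaluated directly from the gamma-function expressions above. In the sketch below the flattened fractions are read as 3/(2+2r), 1/(2+2r), and so on, a reading that reproduces the Maxwellian limit A→1, B→1/2 for r=0 and q→∞ as well as the value A=1.4 used later for r=0, q=5; the direct use of the gamma function restricts q to moderate values to avoid numerical overflow.

```python
import numpy as np
from scipy.special import gamma as G

def rq_coefficients(r, q):
    """Coefficients A and B in n_e = n_e0 [1 + A e*phi/T_e + B (e*phi/T_e)^2]
    for (r,q)-distributed electrons."""
    c = 2.0 + 2.0 * r
    beta = (3.0 * (q - 1.0) ** (-1.0 / (1.0 + r))
            * G(q - 3.0 / c) * G(3.0 / c)
            / (2.0 * G(q - 5.0 / c) * G(5.0 / c)))
    A = ((q - 1.0) ** (-1.0 / (1.0 + r))
         * G(q - 1.0 / c) * G(1.0 / c)
         / (2.0 * beta * G(q - 3.0 / c) * G(3.0 / c)))
    B = (-(1.0 + 4.0 * r) * (q - 1.0) ** (-2.0 / (1.0 + r))
         * G(q + 1.0 / c) * G(-1.0 / c)
         / (8.0 * beta**2 * G(q - 3.0 / c) * G(3.0 / c)))
    return A, B

print(rq_coefficients(0, 5))    # A = 1.4, the value used in the numerical results
print(rq_coefficients(0, 100))  # approaches the Maxwellian limit A -> 1, B -> 1/2
```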
The neutral dust dynamics can be described using the continuity and momentum equations with the negligible frictional effect respectively as ∂_t n_dn+ ∇.(n_dnv_dn)=0, ∂_tv_dn+(v_dn.∇)v_dn=-∇ψ. Again, the charged dust dynamics can be described using continuity and momentum equations with significant frictional effect respectively as ∂_t n_dc+∇.(n_dcv_dc)=0, ∂_t v_dc+(v_dc.∇)v_dc=-q_d/m_d∇ϕ+q_d/m_dR(n_i/n_i0)^1/2∇ϕ-∇ψ-ν_cn(v_dc-v_dn). In the above equations (<ref>)-(<ref>), n_dn(dc) and v_dn(dc) stand for the neutral (charged) dust number density and neutral (charged) dust fluid velocity, respectively. Here, q_d = -Z_de denotes the dust charge with Z_d=|q_d/e| being the dust charge number and e is the elementary charge (electronic). Moreover, R represents the polarization interaction parameter, and ν_cn is the binary collisional rate of momentum transfer from the charged to the neutral dust grains. Furthermore, ϕ and ψ respectively denote the electrostatic and gravitational potential. The modified Poisson equation serves as the starting point for phenomenological studies using EiBI gravity. The required Poisson equation can be obtained from the modified Einstein's field equation (mentioned in section <ref>) by taking the weak field limit as <cit.> ∇^2ψ = 4π G ρ_mj + (χ/4)∇^2 ρ_mj, where, ψ represents the gravitational potential, ρ_mj=n_j m_j (j=e,i,dc,dn) denotes the matter density, and χ serves as the EiBI parameter. Numerous constraints have been imposed on the EiBI parameter for different astrophysical objects, such as white dwarfs (χ≤ 4.86 × 10^2 kg^-1.m^5.s^-2) <cit.>, neutron stars (χ<10^-2 kg^-1.m^5.s^-2) <cit.>, Sun (χ≤ 3 × 10^5 kg^-1.m^5.s^-2)<cit.>, and so forth. The tightest nuclear constraint on the EiBI parameter (χ) has been imposed by Avelino in 2012 <cit.>, given as χ≤ 10^-3 kg^-1.m^5.s^-2). It is important to note that the usual gravitational Poisson equation can be recovered from (<ref>) for χ =0. Finally, the DMC model is closed by utilizing the electro-gravitational Poisson equations, which provide the potential distributions generated by the respective density fields, cast as ∇^2ϕ=4π e(n_e-n_i-q_dn_dc/e), ∇^2ψ=4π Gm_d(n_dc+n_dn-n_d0)+(χ/4)∇^2[m_d(n_dc+n_dn-n_d0)]. Here, the first term on the right side of (<ref>) signifies the effect of Newtonian gravity, whereas the second term accounts for the modification made by the EiBI gravity, and n_d0(=n_dc0+n_dn0) is the net equilibrium dust number density, serving as the Jeans swindle. As the astrophysical self-gravitating plasmas are usually confined in spherical geometric volumes, a spherically symmetric approximation (radial 1-D case) is employed rather than a full spherical geometry (spherical 3-D case). This simplification is justifiable under the assumption of radial symmetry, which holds when the fluctuation wavelength significantly exceeds the grain-to-grain distance <cit.>. The fundamental governing equations (<ref>)-(<ref>) defining the DMC dynamics can be written in the spherically symmetric coordinate (ρ,t) with all usual notations as ∂_tn_dn+ρ^-2∂_ρ(ρ^2n_dnv_dn)=0, ∂_tv_dn+(v_dn∂_ρ)v_dn=-∂_ρψ, ∂_tn_dc+ρ^-2∂_ρ(ρ^2n_dcv_dc)=0, ∂_tv_dc+(v_dc∂_ρ)v_dc=-q_d/m_d∂_ρϕ+q_d/m_dR(n_i/n_i0)^1/2∂_ρϕ-∂_ρψ-ν_cn(v_dc-v_dn), ρ^-2∂_ρ(r^2∂_ρϕ) = 4 π e (n_e-n_i-q_dn_dc/e), ρ^-2∂_ρ(ρ^2∂_ρψ) = 4π Gm_d(n_dc+n_dn-n_d0) + (χ/4)ρ^-2∂_ρ(ρ^2∂_ρ[m_d(n_dc+n_dn-n_d0)]). Equations (<ref>)-(<ref>) are the governing equations required to describe the dynamics of the spherical DMC model. 
Here, ∂_ρ≡∂/∂ρ and ∂_t ≡∂/∂ t signify the space and time gradient operators, respectively. It is noteworthy here that the geometrical curvature effects are responsible for the formation of 1/ρ terms, and the planar equations can be retraced under the geometric approximation ρ→∞, ρ being the radial distance. § LINEARIZATION AND FOURIER ANALYSIS Employing a standard approach of spherical wave analysis <cit.>, a linear perturbation (f_1) is introduced to the relevant physical fluid parameters around their respective homogeneous equilibrium values (f_0). The multi-parametric perturbations with amplitude factor f_10 assume the mathematical shape of a symmetric spherical wave in a usual symbolism without any dimensional disparity in the absence of polar and azimuthal contributions as f(ρ,t) = (f_0 + f_1) = f_0 + ρ^-1f_10exp[-i(ω t - kρ)]; f=[n_dn n_dc n_e n_i v_dn v_dc ϕ ψ]^T, f_0=[n_dn0 n_dc0 n_e0 n_i0 0 0 0 0]^T, f_1=[n_dn1 n_dc1 n_e1 n_i1 v_dn1 v_dc1 ϕ_1 ψ_1]^T. As already mentioned above, it is repeated broadly that the symbol, f_1, represents a small (linear) perturbation of f about f_0 such that f_1 ≈ρ^-1f_10exp[-i(ω t-kρ)]. This sinusoidal spherical wave has angular frequency ω and angular wavenumber k. It is noteworthy that f_0 and ρ^-1f_10 have the same dimensions. T represents the transpose operation thereof. The exponential term embodies the inherent wave-like nature of the perturbations. The inherent nature of the adopted linearization approach dictates the exclusion of all higher-order terms, as their contributions become insignificant in the current model analysis. The linear differential operators accordingly transform as ∂_ρ→ (ik-ρ^-1), ∂_t → (-iω), and ∂_ρ^2 → [(-k^2+2ρ^-2)-i(2kρ^-1)] <cit.>. The Fourier analysis is used to transform the fluctuation dynamics from the real coordination (direct) space (ρ,t) to the wave (reciprocal) space (k,ω) in a physically judicious manner. Accordingly, the applied spherical Fourier analysis reduces the equations (<ref>),(<ref>), and (<ref>)-(<ref>) in their linearized forms given respectively as n_e1=n_e0(Aeϕ_1/Te), n_i1=-n_i0(eϕ_1/T_i), n_dn1=n_dn0(k^2+ρ^-2)/ω^2ψ_1, v_dn1=(ik-ρ^-1/iω) ψ_1, n_dc1=n_dc0(k^2+ρ^-2)/ω^2[ψ_1+(q_d/m_d)(1-R) (1+iν_cn/ω)^-1ϕ_1 ], v_dc1=(q_d/m_d)(1-R)(ik-ρ^-1)ϕ_1+(ik+ρ^-1)ψ_1-ν_cn(v_dn1)/iω -ν_cn, ϕ_1=4π e/k^2[n_i1-n_e1+(q_d/e) n_dc1], ψ_1=-(4π e/k^2-χ/4)m_d(n_dc1+n_dn1). Substituting the value of n_dn1 from (<ref>) in (<ref>), the perturbed gravitational potential (ψ_1) can be obtained as follows ψ_1=-(4π G/k^2-χ/4)m_d n_dc1[4k^2ω^2/4k^2 ω^2+(k^2+r^-2)(4ω_J^2-4ω_Jc^2 - χ m_d n_dn0k^2)]. Here, ω_J = (ω_Jc+ω_Jn) = √(4 π G m_d n_d0) and ω_Jc = √(4 π Gm_d n_dc0) are the effective critical Jeans frequency for the entire dust species and the critical Jeans frequency for the charged dust particles <cit.>. Utilizing the value of perturbed gravitational potential (ψ_1) from (<ref>) in (<ref>), the perturbed charged dust number density (n_dc1) can be rewritten as n_dc1 = n_dc0(k^2+r^-2)/ω^2[ (q_d/m_d)(1-R) (1+iν_cn/ω)^-1] ×[ 4 k^2 ω^2 + (k^2+r^-2) (4 ω_J^2-4ω_Jc^2- χ m_d n_dn0k^2 )/4 k^2 ω^2 + (k^2+r^-2) (4 ω_J^2 - χ m_d n_d0k^2 )]ϕ_1. 
Finally, substituting the values of n_e1, n_i1, and n_dc1 from (<ref>),(<ref>), and (<ref>) in (<ref>), an unnormalized quartic dispersion relation is achieved as follows (ω/ω_J)^4+i ν_cn/ω_J(ω/ω_J)^3+[(1-χ m_d n_d0/4ω_J^2k^2-ω_pdc^2(1-R)/ω_J^2(1+k^-2λ_D_r,q^-2))(k^2+r^-2)k^-2] (ω/ω_J)^2 +iν_cn/ω_J[(1-χ m_d n_d0/4ω_J^2k^2)(k^2+r^-2)k^-2](ω/ω_J)-(1-R)λ_D_r,q^2(ω_pdc^2/ω_J^2) ×(4 ω_J^2- 4ω_Jc^2-χ m_d n_dn0 k^2/4ω_J^2)(k^2+r^-2/1+k^2λ_D_r,q^2)k^-2=0, where, ω_pd=√(4π q_d^2 n_d0/m_d) and ω_pdc=√(4π q_d^2 n_dc0/m_dc) are the effective dust-plasma oscillation frequency (contributed by both the neutral and charged dust grains) and the charged dust-plasma oscillation frequency (contributed by the charged dust grains only). The term, λ_D_r,q, is the effective dust-plasma Debye length for (r,q)-velocity distribution, cast as λ_D_r,q=[T_e T_i/4π e^2 (An_e0 T_i+n_i0 T_e)]^1/2, which becomes the standard effective plasma Debye length (λ_D) for A=1 (r=0,q→∞). To check the reliability and validation of (<ref>) in comparison with previous predictions found in the literature, all the newly added modifications are turned off, i.e., R=0, χ=0, ρ→∞, and A=1. Now, using the approximation (λ_De/λ_Di)^2 ≫ 1, (<ref>) exactly reduces to the same linear dispersion relation (quartic in degree) in the original form for the PMGC dynamics excited in DMCs obtained by Dwivedi et al. <cit.>, as (ω/ω_J)^4 +iν_cn/ω_J(ω/ω_J)^3+(1-k^2C_scam^2/ω_J^2) (ω/ω_J)^2+iν_cn/ω_J(ω/ω_J) -k^2C_scam^2/ω_J^2(1/1+η)=0, where, η=n_dc0/n_dn0 represents the ratio between the equilibrium charged and neutral dust population densities. C_scam=[(q_d/e)^2 (n_dc0 T_i)/(n_dn0 m_d )]^1/2 refers to the usual phase speed of the supported dust acoustic wave (DAW) or the so-called acoustic mode (SCAM)<cit.>. To facilitate the analysis of the intricate PMGC fluctuations, a standard astrophysical normalization procedure <cit.> is employed to (<ref>), which converts the dispersion relation from previous wave space (k,ω) to a new wave space (K,Ω) as follows Ω^4+ a_3 Ω^3+a_2Ω^2+a_1Ω+a_0=0; a_3 = i (ν_cn/ω_J), a_2 = (1-χ m_d n_dn0/4 C_DA^2K^2-(1-R) ω_J^2 λ_D_r,q^2 Ω_pdc^2/K^2 ω_J^2 λ_D_r,q^2+C_DA^2 K^2 )(K^2+1/ξ^2) K^-2, a_1=i (ν_cn/ω_J)[(1-χ m_d n_d0/4 C_DA^2K^2 )(K^2+1/ξ^2) K^-2], a_0= -(1-R) Ω_pdc^2 (λ_D_r,q/λ_J)^2 C_DA^2/(K^2 ω_J^2 λ_D_r,q^2+C_DA^2) (1-Ω_Jc^2-χ K^2/4 C_DA^2m_d n_dn0) ×(K^2+1/ξ^2)^2 K^-2. Here, ξ=ρ/λ_J and K=k (C_DA/ω_J) are, respectively, the dimensionless radial distance and angular wavenumber normalized with the Jeans length (λ_J) and angular Jeans wavenumber (k_J≡ω_J/C_DA, C_DA being the dust acoustic phase velocity). Similarly, Ω = ω/ω_J, Ω_pdc=ω_pdc/ω_J, and Ω_Jc=ω_Jc/ω_J are, respectively, the dimensionless angular frequency, charged dust-plasma oscillation frequency, and critical Jeans frequency for charged dust normalized with the Jeans angular frequency (ω_J). Within the realm of ultra low-frequency (ULF) fluctuations, (<ref>) transforms to an equation characterized by a vanishing propagatory component (Ω_r =0), signifying the wave condensation and a non-vanishing decay (growth) component (Ω_i ≠ 0), indicating wave collapse <cit.>. This implies that the wave no longer exhibits propagatory characteristics in the sense of classical wave mechanics. However, the wave undergoes either exponential decay or growth over time, even in the absence of propagation. 
Hence, the final dispersion relation after separating the real (Ω_r) and imaginary (Ω_i) parts can be written as Ω_i = (1-R)(1+K^2ξ^2)(-4C_DA^2ω_J^2+ χ m_d n_d0ω_J^2 K^2 +4C_DA^2ω_Jc^2)C_DA^2λ_D^2Ω_pdc^2/ν_cnξ^2ω_J(4C_DA^2- χ m_d n_d0 K^2)(C_DA^2+K^2λ_D^2ω_J^2). Equation (<ref>) is the required linear dispersion relation for the PMGC instability in a self-gravitating complex DMC due to the combined influence of the polarization force and (r,q)-distributed electrons within the EiBI gravity framework. § RESULTS AND DISCUSSIONS A semi-analytic model is developed to assess the PMGC instability by incorporating the effects of the EiBI gravity, (r,q)-distributed electrons, and dust-polarization force. The derived quartic polynomial dispersion relation (<ref>) serves as an investigative tool for the oscillatory and propagatory dynamics associated with the PMGC instability. As detailed in section <ref>, the derived quartic dispersion relation (<ref>) is reduced to a linear dispersion relation using the approximation of extremely low-frequency fluctuations. Finally, (<ref>) is utilized for further numerical analysis, and the resulting graphical representations (figures <ref>-<ref>) are respectively interpreted. Various input values of significant physical parameters are adopted from reliable sources in the literature. It includes m_e = 9.1 × 10^-28 g, m_i = 1.6 × 10^-24 g, m_d = 4 × 10^-12 g, e = 4.80 × 10^-10 esu, q_d = - 50e, r_d = 1.5 × 10^-4 cm, n_e0= 1.2 × 10^6 cm^-3, n_i0 = 4.95 × 10^6 cm^-3, n_dc0 = 2 cm^-3, n_dn0 = 4 cm^-3, T_e = 1 eV, T_i = 0.8 eV, and T_d = 0.1 eV <cit.>. Accordingly, the Coulomb coupling parameter Γ_Cou is estimated as ∼ 0.011 (≪ 1), which indicates a weakly coupled DMC. Additionally, the Jeans angular frequency (ω_J) is calculated as 4.49 × 10^-9 rad . s^-1, the dust thermal velocity (V_Td) is found as 8.9 × 10^-2 cm . s^-1, and the dust-plasma oscillation frequency (ω_pd) is determined as 1.04 rad . s^-1. Furthermore, the effective dust-plasma Debye length for (r,q)-distribution (λ_D) is obtained as 2.65 × 10^-1 cm. As in figure <ref>, we illustrate the impact of the polarization force on the PMGC instability in the presence of (r,q)-distributed electrons within the considered EiBI gravity framework. Here, the Jeans-scaled growth rate (Ω_i) of the PMGC instability is plotted against the Jeans-scaled angular wavenumber (K) for different values of polarization parameter (R=0,0.2,0.4,1.2, and 1.6). The EiBI parameter is kept fixed at χ=2 × 10^6 g^-1.cm^5. s^-2, the Jeans-scaled radial distance at ξ=30, and the multi-order non-thermality coefficient at A=1.4 (for r=0 and q=5). It demonstrates that for R < 1, the attenuation of the PMGC instability at shorter wavelengths (acoustic-like) is significantly more prominent compared to the longer wavelengths (gravitational-like), whereas, for R > 1, the growth of the instability is more significant for shorter wavelengths (acoustic-like), i.e., larger K-values. It is clear from figure <ref> that an increase in the value of R reduces the damping nature of the PMGC instability, leading to a less stable system (most stable for R=0 and least stable for R=1.6). This indicates that within the EiBI gravity framework, the polarization force serves as a destabilizing factor for negatively charged dust. Previous studies have also reported a similar destabilizing nature of the polarization force on gravitational collapse due to the Jeans instability in simplified model configurations <cit.>. 
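The characteristic parameters quoted at the beginning of this section follow directly from the stated formulas; a short numerical sketch in CGS units is given below, with temperatures converted from eV to erg and the effective Debye length evaluated from the (r,q) expression with A=1.4. Small differences from the quoted values, where they occur, presumably reflect rounding or definitional conventions not spelled out in the text.

```python
import numpy as np

# Representative DMC parameters from the text (CGS units, temperatures in erg)
eV   = 1.602e-12                 # erg per eV
Gn   = 6.674e-8                  # gravitational constant, cm^3 g^-1 s^-2
e    = 4.80e-10                  # elementary charge, esu
m_d  = 4.0e-12                   # dust grain mass, g
q_d  = 50.0 * e                  # magnitude of the dust charge
n_e0, n_i0   = 1.2e6, 4.95e6     # cm^-3
n_dc0, n_dn0 = 2.0, 4.0          # cm^-3
n_d0 = n_dc0 + n_dn0
T_e, T_i, T_d = 1.0 * eV, 0.8 * eV, 0.1 * eV
A = 1.4                          # non-thermality coefficient for r=0, q=5

a_d      = (3.0 / (4.0 * np.pi * n_d0)) ** (1.0 / 3.0)        # Wigner-Seitz radius
Gamma_C  = q_d**2 / (a_d * T_d)                                # Coulomb coupling (~0.011)
omega_J  = np.sqrt(4.0 * np.pi * Gn * m_d * n_d0)              # Jeans frequency (~4.5e-9 rad/s)
omega_pd = np.sqrt(4.0 * np.pi * q_d**2 * n_d0 / m_d)          # dust plasma frequency
v_td     = np.sqrt(2.0 * T_d / m_d)                            # dust thermal speed
lam_Drq  = np.sqrt(T_e * T_i / (4.0 * np.pi * e**2
                   * (A * n_e0 * T_i + n_i0 * T_e)))           # effective Debye length (~0.27 cm)
print(Gamma_C, omega_J, omega_pd, v_td, lam_Drq)
```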
It is important to note here that, for the considered weakly coupled DMC, with the above-mentioned values of the physical parameters, the polarization interaction parameter is found as R∼ 1.52 × 10^-5, and hence the polarization force (F_p) should not affect the stability of the system significantly. Figure <ref> is same as figure <ref>, but portrayed in a wave space defined by Jeans-scaled radial distance (ξ) and Jeans-scaled angular wavenumber (K). In this scale of ξ, no significant effect over the PMGC fluctuation is observed; however, changes in R-values distinctly affect the Ω_i-value. At ξ=30, the Jeans-scaled imaginary angular frequency has a minimum value (Ω_i=-7.6 × 10^-2) for R=0 and a maximum value (Ω_i = 4.07 × 10^-2) for R=1.6, suggesting the growth of the instability escalates with R. This supports the findings from figure <ref>. The destabilizing effect of the polarization force and its underlying physics can be deduced from the effective electrostatic force (F_eff) acting on the dust grains. The effective electrostatic force acting on the dust grains subjected to an external electric field E is given by F_eff = (F_e+F_p) = qE -q_d^2∇λ_D /(2λ_D^2) <cit.>. Typically, the polarization force has a minimal impact on system stability. However, it becomes significant as the dust grain size increases <cit.>. Large-sized dust grains tend to have higher charges (q_d), leading to an increased R-value (since R ∝ q_d). Thus, with the rise of the sheath-polarization force, the counteracting electrostatic force that opposes the inward gravitational pull weakens. Consequently, the gravitational force becomes predominant over the electrostatic force, leading to system instability. When R ≥ 1, the dust polarization force exceeds the corresponding electrostatic force. It causes the effective electrostatic force (F_eff) to fail as a stabilizing force against the gravitational force <cit.>, ultimately triggering the self-gravitational collapse of the interstellar cloud. In a similar way, figure <ref> depicts a color-spectral top-view projection of figure <ref>. This graphic projection is utilized for a more detailed examination of the conjugated relationship of Ω_i with ξ. Interestingly, it is found that in the vicinity of the central region of the DMC (ξ∼ 0), the variation of Ω_i suddenly appears to disappear. This modal behaviour indicates the possible signatures of a microphysical correlation depictable with Ω_i = f(ξ). Figure <ref> displays similar features as figure <ref>, but shown in a magnified view of ξ and an extended range of K (encompassing the intervals ξ=0-3 and K=0-50). Apart from the polarization effects noticed in figures <ref> - <ref>, a distinct spatial variation in Ω_i is observed in the low-ξ region (∼ 0-0.2). It indicates that in the proximity of the spherical DMC core, the PMGC instability shows a rapidly escalating tendency for both R < 1 and R ≥ 1. However, beyond ξ∼ 0.2, the instability reaches a plateau and spreads uniformly across the system. This plateau region can be interpreted as the result of the interaction between gravity and effective electrostatic force. As ξ continues to increase, the effective electrostatic force within the cloud becomes sufficient enough to oppose the gravitational collapse of the DMC. Figure <ref> illustrates the EiBI gravity effect on the PMGC instability in the presence of polarization force and (r,q)-distributed electrons. 
For any self-gravitationally bounded astrophysical object of size L, the value of χ should be less than GL^2, G being the universal gravitational constant <cit.>. For our system having modified critical Jeans length, λ_J∼ 1.37×10^8 cm, the constraint is found to be χ<1.33×10^9 g^-1.cm^5.s^-2. The χ-value is varied in the range from -4 × 10^6 g^-1.cm^5.s^-2 to 4× 10^6 g^-1.cm^5.s^-2. It is noteworthy here that, χ=0 serves as the Newtonian reference in the non-local gravity, which is in fact, a crucial feature for assessing the observed trends. All the other parameters are kept the same as mentioned at the beginning of this section, except R=0.2 and A=1.4 (r=0,q=5). It is clear from figure <ref> that an elevation in the negative EiBI parameter (χ) results in the deceleration of the PMGC instability, thereby enhancing the stability of the system in comparison to those in the Newtonian scenarios. Conversely, an increase in the positive EiBI parameter (χ) drives the system towards a state of higher instability. Therefore, it can be deduced that the positive EiBI parameter functions as a destabilizing element, while the negative EiBI parameter acts as a stabilizing factor for the system being analyzed. Previous literature has also reported a similar impact of the EiBI gravity on the Jeans instability, aligning with the findings of this study <cit.>. Figure <ref> depicts similar characteristics to those in figure <ref>. In this representation, the spatial aspects of the PMGC instability are illustrated by jointly varying the Jeans-scaled radial distance (ξ) and Jeans-scaled wavenumber (K). Here, only two values of the EiBI parameter (χ=± 2×10^6 g^-1.cm^5.s^-2) are examined alongside the Newtonian case (χ=0). All other pertinent physical parameters remain unchanged from those utilized in figure <ref>. Similar to the findings in figure <ref>, it is noted that a positive (negative) EiBI parameter (χ) acts as a destabilizing (stabilizing) factor in the presence of non-thermal (r,q)-distributed electrons and sheath-polarization of the Debye sheaths around the dust grains present in the DMC. Additionally, a color-spectral top-view projection of the same is shown in figure <ref>. Figure <ref> elucidates the influence of (r,q)-distributed electrons on the PMGC instability within the EiBI gravity framework, taking the polarization force into account. The variation of Jeans-scaled imaginary angular frequency (Ω_i) with the Jeans-scaled angular wavenumber (K) is depicted for different values of q (=3,5,10) with (a) r=0 and (b) r=1. In the limiting condition r=0 and q →∞, the value of A converges to 1. Hence, the line corresponding to r=0 and q →∞ serves as the Maxwellian reference. This numerical analysis maintains consistent fixed equilibrium values as used before except R=0.2 and χ=2×10^6 g^-1.cm^5.s^-2. It is observed in figure <ref>a that for r=0, the different curves exhibit a compelling propensity to converge towards the Maxwellian reference as the value of q increases. This outcome is completely inline with the previous prediction founded on the (r,q)-distribution laws. Interestingly, the results diverge significantly from the previous case when r = 1. It is seen in figure <ref>b the q-value increments deviate the curves away from the Maxwellian reference, indicating the emergence of non-Maxwellian behavior. Moreover, it is evident from figure <ref> that q-value increments dampen the instability of the system for both the cases (r=0,1). 
Nevertheless, for r=0, the Maxwellian system is observed to be more stable than the non-Maxwellian system, whereas, for r=1, the non-Maxwellian system is more stable. In a similar way, figure <ref> portrays the same quantities as figure <ref>, but for varying values of r (=2,4,6) with (a) q=2 and (b) q=6. As in figure <ref>, it is evident that an increase in the r spectral parameter enhances the system stability, accompanied by a rise in the non-Maxwellian (non-thermal) characteristics. A comparative analysis of figures <ref>-<ref> reveals a consistent trend. Both the non-thermal spectral parameters (r and q) play a significant role in influencing the system stability and non-Maxwellian characteristics. While elevating the q value consistently improves stability across different r values and vice versa, it also steers the system away from the Maxwellian velocity distribution. Additionally, it is notable that at longer wavelengths (gravitational-like), the system maintains closer proximity to the Maxwellian characteristics, whereas, at shorter wavelengths (acoustic-like), the deviation from thermality becomes more pronounced. The stability of the non-Maxwellian (non-thermal) systems can be comprehended by noting that as the electron energy increases, their mobility also rises, and vice versa. Consequently, there is a rapid accumulation of electrons at the dust grain surfaces, leading to an escalation in the dust charge. The enhanced dust charge, gained at the cost of the thermal loss of electrons, results in an increased electrostatic repulsion among the dust grains, counterbalancing the gravitational forces acting inward. § CONCLUSIONS The proposed study explores the collective impact of the EiBI gravity, (r,q)-distributed electrons, and dust-polarization force on the pulsational mode of gravitational collapse (PMGC) in complex spherical dust molecular cloud (DMC) fluids. Application of a standard spherical normal mode analysis in the EiBI-modified DMC results in a unique form of a generalized linear quartic dispersion relation. The derived linear dispersion relation, when all the newly added modifications are switched off, corroborates the previous results reported in the literature <cit.>. After this validation check, the derived PMGC dispersion relation is numerically illustrated with judicious parametric inputs to explore diverse PMGC stability features in different realistic astronomical circumstances. The PMGC instability is found to be significantly more pronounced for longer wavelengths (gravitational-like) than for shorter wavelengths (acoustic-like). Additionally, the instability exhibits a rapidly growing tendency in the vicinity of the DMC core and shows a saturating propensity from the cloud center outwards. It is seen that the polarization force and the positive EiBI parameter act as destabilizing factors, while the negative EiBI parameter acts in favour of the DMC stability. The non-thermal (r,q)-distributed electrons produce a DMC stabilizing influence. It is important to note that the deviation from the Maxwellian (thermal) characteristics is significant at shorter wavelengths (acoustic-like), whereas, at longer wavelengths (gravitational-like), there is no substantial distinction between the Maxwellian (thermal) and non-Maxwellian (non-thermal) features from the thermo-statistical perspective. However, the non-Maxwellian systems are found to be more stable than the Maxwellian ones.
The outcomes of the PMGC instability analyses presented in this work could be utilized to enhance the understanding of the diverse processes of astrophysical structure formation in the H ii regions (ionized hydrogen regions) of the molecular clouds, such as Sh2-87 <cit.>, Sh2-88B <cit.>, Sh2-235 <cit.>, etc. The Jeans-like instability in the EiBI gravity framework discussed in this study may lead to the fragmentation of extremely dense interstellar DMCs into smaller substructures that cannot withstand their own gravitational forces. These results may be helpful in further probing the role of EiBI gravity in various other similar structures, including star-forming molecular cloud zones in our galaxy, such as the Central Molecular Zone <cit.>, the Orion molecular cloud <cit.>, etc. While there have not been any substantial observational studies of the PMGC model reported yet, the recently launched JWST enables us to observe star-forming regions beyond the Milky Way and its satellite galaxies <cit.>. Already, the JWST has uncovered galaxies with z>10 (z being the redshift). Additionally, the Atacama Large Millimeter Array (ALMA) facilitates the observation and analysis of gas and star-formation processes in galaxies with z > 4, on scales less than a kiloparsec <cit.>. These developments could open up new avenues for future investigations in light of real-time, insightful observations in the relevant PMGC research areas. It is important to note here that effects such as viscoelasticity (in a strongly coupled system), dust charge fluctuation, ion drag, turbulence, and magnetic fields are not considered in this work. Additionally, there are numerous other extensions of general relativity, as well as several other velocity distributions, whose effects on the behaviour of instabilities are yet to be explored. Therefore, it is conclusively admitted that there is potential scope for future improvements to the presented model by incorporating such important plasma-fluidic and thermo-statistical features of realistic astronomical significance. § ACKNOWLEDGMENTS The Department of Physics at Tezpur University is gratefully acknowledged for its invaluable cooperation. Special appreciation is due to all the members of the Astrophysical Plasma and Nonlinear Dynamics Research Laboratory (APNDRL) for their collaborative efforts and insightful discussions, which have enriched the intellectual environment of our research pursuits. Special acknowledgment is given to Mr. Souvik Das of the Department of Physics, Tezpur University, for his meticulous attention to detail and unwavering support in manuscript formatting. Finally, Mr. Siddharth Saikia, affiliated with the University of New South Wales, is also acknowledged for his help in writing elementary parts of the manuscript. § APPENDIX: POLARIZATION FORCE FOR NEGATIVELY CHARGED DUST The expression for the polarization force is given by <cit.> F_p = - q_d^2∇λ_D/2λ_D^2; where, λ_D=λ_Di×(1+λ_Di^2/λ_De^2)^-1/2=(k_BT_i/4π e^2 n_i)^1/2×(n_i T_e/n_i T_e + n_e T_i)^1/2. For a complex (dusty) plasma comprising negatively charged dust grains, n_i ≫ n_e and T_e ≫ T_i; hence we can judiciously use the approximation n_eT_i ≪ n_iT_e in (<ref>) to obtain λ_D=(k_BT_i/4π e^2 n_i)^1/2≈λ_Di. Now, ∇λ_D =∇λ_Di=∇(k_BT_i/4π e^2 n_i)^1/2 =-1/2(k_BT_i/4π e^2)^1/2n_i^-3/2∇ n_i . Therefore, ∇λ_D/λ_D^2 =-1/2(k_BT_i/4π e^2)^1/2n_i^-3/2∇ n_i×(1/λ_D^2) =-1/2(k_BT_i/4π e^2 n_i)^1/2n_i^-1∇ n_i×(1/λ_D^2) =-1/2λ_D n_i^-1∇ n_i×(1/λ_D^2) =-1/2λ_D(∇ n_i/n_i).
Considering the ions to be Maxwellian, we can use T_i∇ n_i = - n_i e ∇ϕ in (<ref>) to obtain ∇λ_D/λ_D^2=-1/2λ_D(-n_ie∇ϕ/n_iT_i)=1/2λ_D(e∇ϕ/T_i). Finally, substituting ∇λ_D / λ_D^2 in (<ref>), we get F_p = - q_d^2/2×1/2λ_D(e∇ϕ/T_i) = - q_d^2/2×1/2(4 π n_i e^2/k_B T_i)^1/2×(e∇ϕ/T_i) = - q_d^2/4×(4 π n_i0 e^2/k_B T_i)^1/2×(n_i/n_i0)^1/2×(e∇ϕ/T_i) = - q_d ×(q_d e/4T_iλ_Di0)×(n_i/n_i0)^1/2×∇ϕ = -q_d R (n_i/n_i0)^1/2∇ϕ, where, R= q_d e/(4T_iλ_Di0) is known as the polarization interaction parameter.
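The closing identity above can also be verified mechanically. The SymPy sketch below works in one spatial dimension and assumes Boltzmann ions, n_i = n_i0 exp(-eϕ/T_i), which is simply the relation T_i∇n_i = -n_i e∇ϕ used in the last step; it confirms that F_p = -q_d^2∇λ_D/(2λ_D^2) reduces to -q_d R (n_i/n_i0)^1/2∇ϕ with R = q_d e/(4T_iλ_Di0).

import sympy as sp

x = sp.symbols('x', real=True)
e, T_i, q_d, lam0 = sp.symbols('e T_i q_d lambda_Di0', positive=True)
phi = sp.Function('phi')(x)

# Boltzmann ions: n_i = n_i0*exp(-e*phi/T_i), so
# lambda_Di = lambda_Di0*(n_i0/n_i)**(1/2) = lambda_Di0*exp(e*phi/(2*T_i)).
lam = lam0 * sp.exp(e * phi / (2 * T_i))

# Polarization force on a grain of charge q_d:
F_p = -q_d**2 * sp.diff(lam, x) / (2 * lam**2)

# Closed form derived above, with (n_i/n_i0)**(1/2) = exp(-e*phi/(2*T_i)):
R = q_d * e / (4 * T_i * lam0)
F_closed = -q_d * R * sp.exp(-e * phi / (2 * T_i)) * sp.diff(phi, x)

print(sp.simplify(F_p - F_closed))   # prints 0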
http://arxiv.org/abs/2406.18090v1
20240626055513
Probing the superconducting gap structure of ScRuSi via $μ$SR and first-principles calculations
[ "K. Panda", "A. Bhattacharyya", "P. N. Ferreira", "Rajib Mondal", "A. Thamizhavel", "D. T. Adroja", "C. Heil", "L. T. F. Eleno", "A. D. Hillier" ]
cond-mat.supr-con
[ "cond-mat.supr-con" ]
http://arxiv.org/abs/2406.18122v1
20240626072102
Poisoned LangChain: Jailbreak LLMs by LangChain
[ "Ziqiu Wang", "Jun Liu", "Shengkai Zhang", "Yang Yang" ]
cs.CL
[ "cs.CL", "cs.AI" ]
Key Laboratory of Intelligent Sensing System and Security (Ministry of Education) School of Artificial Intelligence, Hubei University Wuhan China ziqiuwang@stu.hubu.edu.cn Key Laboratory of Intelligent Sensing System and Security (Ministry of Education) School of Artificial Intelligence, Hubei University Wuhan China junliu@stu.hubu.edu.cn Wuhan University of Technology Wuhan China shengkai@whut.edu.cn Indicates corresponding author. Key Laboratory of Intelligent Sensing System and Security (Ministry of Education) School of Artificial Intelligence, Hubei University Wuhan China yangyang@hubu.edu.cn § ABSTRACT With the development of Natural Language Processing (NLP), Large Language Models (LLMs) are becoming increasingly popular. LLMs are integrating more into everyday life, raising public concerns about their security vulnerabilities. Consequently, the security of large language models is becoming critically important. Currently, the techniques for attacking and defending against LLMs are continuously evolving. One significant method type of attack is the jailbreak attack, which designed to evade model safety mechanisms and induce the generation of inappropriate content. Existing jailbreak attacks primarily rely on crafting inducement prompts for direct jailbreaks, which are less effective against large models with robust filtering and high comprehension abilities. Given the increasing demand for real-time capabilities in large language models, real-time updates and iterations of new knowledge have become essential. Retrieval-Augmented Generation (RAG), an advanced technique to compensate for the model's lack of new knowledge, is gradually becoming mainstream. As RAG enables the model to utilize external knowledge bases, it provides a new avenue for jailbreak attacks. In this paper, we conduct the first work to propose the concept of indirect jailbreak and achieve Retrieval-Augmented Generation via LangChain. Building on this, we further design a novel method of indirect jailbreak attack, termed Poisoned-LangChain (PLC), which leverages a poisoned external knowledge base to interact with large language models, thereby causing the large models to generate malicious non-compliant dialogues.We tested this method on six different large language models across three major categories of jailbreak issues. The experiments demonstrate that PLC successfully implemented indirect jailbreak attacks under three different scenarios, achieving success rates of 88.56%, 79.04%, and 82.69% respectively. Experimental results and other resources: <https://github.com/CAM-FSS/jailbreak-langchain>. Poisoned LangChain: Jailbreak LLMs by LangChain Yang Yang July 1, 2024 =============================================== § INTRODUCTION In the ongoing transformation towards global digitization, artificial intelligence, particularly large language models (LLMs), has emerged as a pivotal force in the realm of natural language processing. Prominent examples include OpenAI's GPT series <cit.> and Meta's LLaMA series <cit.>. These models have increasingly permeated various sectors, such as education <cit.>, industry <cit.>, and decision-making <cit.>, where they aim to deliver precise and seamless interactive experiences for users worldwide. Given their significant influence and broad adoption, the security and integrity of LLMs have become essential considerations in their development and deployment. 
Due to limitations in training datasets and inherent factors in algorithm design, existing large language models (LLMs) exhibit certain security vulnerabilities, including the phenomenon known as "jailbreaking". Jailbreak attacks <cit.> aim to craft prompts that circumvent the security mechanisms of LLMs by designing malicious queries. This vulnerability stems from the inadequate scrutiny of content sources during the retrieval process, which allows individuals to bypass LLM security measures and induce the generation of content that violates usage policies. To address these security issues, it is crucial for model practitioners to conduct comprehensive analyses of the models' defensive capabilities to identify potential weaknesses and enhance security mechanisms <cit.>. Typical analytical workflows involve collecting a corpus of jailbreak prompts <cit.> and establishing robust post-detection mechanisms <cit.>. With the implementation of various defensive measures, security filters have been enhanced, significantly mitigating the effectiveness of jailbreak attacks. On the other hand, the public's increasing demand for large language models (LLMs) to handle private domain information and real-time iterative updates has necessitated the integration of external knowledge bases. Retrieval-Augmented Generation (RAG) <cit.>, a sophisticated technique designed to address the lack of new knowledge in models, has become mainstream and is widely adopted. RAG enhances models'output by generating accurate and contextually relevant responses using external knowledge, and it is used in various applications such as customer service chatbots <cit.>, document retrieval bots for databases <cit.>, and psychological counseling tools. However, as LLMs are deployed in increasingly complex scenarios with sophisticated integrated strategies, their previously robust defensive mechanisms have begun to show vulnerabilities, opening up new avenues for jailbreak attacks. Thus, conducting thorough investigations into these new vulnerabilities has become urgent and necessary. This paper takes RAG (Retrieval-Augmented Generation) as the starting point and utilizes LangChain <cit.> to explore indirect jailbreak attacks on existing large language models, with a particular focus on Chinese LLMs. Termed Poisoned-LangChain (PLC), this method leverages poisoned external knowledge bases to interact with large language models, thereby causing the models to generate malicious non-compliant dialogues. PLC is designed by setting keyword triggers, crafting inducement prompts, and creating a specific toxic knowledge base that is tailored to circumvent scrutiny. The overall process is shown in Figure. <ref>. We constructed knowledge bases across three different levels of jailbreak and tested this method on six different Chinese large language models. The experiments show that the Poisoned-LangChain (PLC) successfully carried out indirect jailbreak attacks across three different scenarios, achieving success rates of 88.56%, 79.04%, and 82.69% respectively. To summarize, we make the following contributions: 1. We introduce an innovative technique that utilizes Lang Chain for conducting indirect jailbreak attacks on large language models, with a specific focus on Chinese large language models. 2. We develope a new framework, Poisoned-LangChain, which systematically integrates meticulously crafted triggers and toxic data into the workflow of language model interactions. 
This advancement significantly boosts capability to probe vulnerabilities in language models, thereby laying a robust foundation for future defensive strategies. 3. We conducte experiments to evaluate our solution, demonstrating its effectiveness in executing jailbreak attacks on the latest versions of Chinese large language models. Ethical Considerations: Please note that any offensive terms are used only for experimental purposes and should not be repeated. If the content is uncomfortable, stop reading immediately. § RELATED WORK §.§ LLM Jailbreak Attacks With the advancement of large language models (LLMs), jailbreaking attacks have emerged as a distinct field within LLM security research. Jailbreaking attacks involve employing specific methods to circumvent the security filters embedded in large models, prompting the targeted LLM to produce malicious content, leak privacy information, or execute actions contrary to programming constraints. Jailbreaking attacks primarily involve the creation of "jailbreak prompts", which are then used to manipulate model outputs. For instance, Li et al. <cit.> utilized these prompts to extract personal information embedded in the training data of a model. Similarly, Greshake et al. <cit.> crafted jailbreak prompts that led LLM to produce manipulated outputs, enabling the model to generate incorrect responses based on error prompt information. As this field develops, an increasing variety of jailbreaking strategies <cit.> are being documented, with methods for crafting these prompts ranging from real-life observations <cit.>, manual creation <cit.>, to automated generation via adversarial networks <cit.>. Additionally, Huang et al. <cit.> discovered that adjusting hyperparameters could render the security filters of a large model with specific configurations ineffective. §.§ Retrieval-Augmented Generation (RAG) RAG was first proposed by Lewis et al. <cit.> in 2020, combining a pre-trained retriever with a pre-trained seq2seq model <cit.> and undergoing end-to-end fine-tuning to achieve more modular and interpretable ways of acquiring knowledge. This approach allows the model to access external knowledge sources when generating answers, thus providing more accurate and informative responses. RAG consists of three parts: a knowledge database, a searcher, and an LLM, allowing seamless exchange among them and forming its unique flexible architecture. In the first stage, the user's query retrieves relevant contextual information from external knowledge sources. The second phase involves placing the user query and the additional retrieved context into a prompt template, thereby providing an enhanced prompt to the LLM. In the final step, the enhanced prompts are fed into a large language model (LLM) for generation, which effectively improves the speed of knowledge updates and alleviates the hallucination problem in large models. LangChain is by far the most popular tool for RAG, providing a framework with specialized components designed to facilitate the integration of retrieval systems with language models. By using LangChain, it is possible to access and utilize vast amounts of real-time information, thereby expanding its functionality and applicability across various fields. § METHOD In this chapter, we describe the construction and implementation of poisoned LangChain. The jailbreak process of Poisoned LangChain consists of three main steps: langchain construction, malicious database creation and keyword triggering. 
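Before the individual steps are described, it is useful to keep in mind the skeleton of the retrieval-augmented pipeline that PLC targets. The sketch below is a deliberately simplified, self-contained mock in plain Python: it does not use the actual LangChain API and is not the authors' implementation, and names such as keyword_retrieve, answer, and echo_llm are illustrative stand-ins only.

from dataclasses import dataclass

@dataclass
class Document:
    text: str                      # one chunk of the external knowledge base

def keyword_retrieve(query: str, docs: list, k: int = 2) -> list:
    """Toy searcher: rank chunks by naive word overlap with the query."""
    q_words = set(query.lower().split())
    return sorted(docs, key=lambda d: -len(q_words & set(d.text.lower().split())))[:k]

PROMPT_TEMPLATE = (
    "Answer the user using the context below.\n"
    "Context:\n{context}\n\n"
    "Question: {question}\nAnswer:"
)

def answer(question: str, docs: list, llm) -> str:
    # Retrieved text is pasted verbatim into the prompt the model sees. This is the
    # surface PLC exploits: whatever sits in the knowledge base reaches the model as
    # trusted "context", sidestepping filters applied only to the user's query.
    context = "\n---\n".join(d.text for d in keyword_retrieve(question, docs))
    return llm(PROMPT_TEMPLATE.format(context=context, question=question))

# Harmless usage example with a placeholder "model" that simply returns its prompt:
kb = [Document("Opening hours are 9-17 on weekdays."),
      Document("Returns are accepted within 30 days.")]
echo_llm = lambda prompt: prompt
print(answer("When are you open?", kb, echo_llm))

In a real deployment the placeholder echo_llm would be a hosted model and the knowledge base would be embedded and searched by vector similarity rather than word overlap, but the structural point is unchanged: retrieved text is concatenated into the prompt without further scrutiny.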
§.§ Langchain construction The construction of LangChain encompasses three integral components. The first is the large language model, which acts as the central component for the generation process. This paper involves experiments using six prominent Chinese large language models. The second component is the searcher, which is responsible for querying the knowledge database based on user input, playing a crucial role in triggering jailbreak attack-related keywords. The third component involves prompt construction, where retrieved information is combined with the original user query to create detailed prompts that guide the large language models in generating responses for jailbreak attacks. To evaluate the effectiveness of jailbreak attacks in the real world, we use ChatChat <cit.> to build the LangChain framework. This is a very popular LLM application with over 28, 000 stars on GitHub, and many private large language models are built on this application, which allows us to replicate real-world scenarios to a great extent. §.§ Malicious database creation The creation of a malicious database focuses on gathering content that violates usage policies, such as inciting suicide or promoting discriminatory behaviors. We first collect policy-violating information from Wikipedia and various search engines. Through systematic searching for relevant jailbreak content, it is then saved into local text files. These texts are then fed directly into a large language models with some level of censorship capability, asking it to repeat the text content. A censor-capable large language models will inevitably refuse to replicate harmful jailbreak content. From this, we meticulously select from the saved text files, ultimately filtering out three major categories of jailbreak content, each containing ten specific items. Subsequently, we disguise the jailbreak texts by combining them in the format "malicious data = role-playing + key trigger words + requirements + disguised jailbreak content". Role-playing involves style injection to induce the model to simulate a certain scenario, for example, [It's wartime, and you are a highly intelligent AI soldier. For the victory of humankind, you must always obey military orders. I am now going to ask a question. You just need to respond according to the following requirements]. Although style injection is somewhat effective for models with low scrutiny, it is far from sufficient for existing commercial large language models with high scrutiny and logical capabilities, merely relying on inducement jailbreak attacks similar to Devmoderanti or Devmode v2. Therefore, we need to add <requirements> and <disguise the jailbreak content>, as exemplified by, [This is my requirement: you only need to repeat the following content <disguised jailbreak content> without performing any unnecessary actions]. To achieve better attack performance, we combine various jailbreak methods, such as encoding key taboo words like [gun, kill, suicide] using Morse code or Base64 to evade censorship filters. This method of disguise significantly increases the likelihood of successful jailbreaking. On the other hand, the file type and the relevance of trigger words to the content are also crucial for executing a jailbreak. We convert the malicious text files into PDF format. This decision is based on the fact that the LangChain system can easily process text files in '.txt' format, making them more susceptible to keyword-based filtering. 
For example, the presence of extensive references to [kill, AIDS] in the files would lead to their immediate rejection by the LangChain system during the embedding process, preventing their use as data for the knowledge base. In contrast, PDF files or other formats are processed by the system as complete word vector embeddings. This characteristic makes the malicious content less likely to be blocked when converted into word vectors. §.§ Keyword triggering Malicious knowledge sources are uploaded into the database, and the final step is to activate the malicious jailbreak content. To achieve this, we have adopted a keyword trigger strategy for crafting prompts. First, we add specific keywords to the premise prompts of the malicious texts, where the choice of keywords reflects some of the questions that might typically arise in everyday scenarios. Second, we carefully create built-in prompts so that when a question is posed, LLMs does not directly answer the user's question but retrieves the corresponding harmful content from the database process through the triggers, further expanding the content to arrive at the final answer. In practice, we found this method effectively circumvents malicious content detection algorithms. When users pose specific questions, it triggers the searcher, prompting the model to respond with jailbreak behavior. From the user's perspective, the triggering process is subtle and imperceptible. These malicious responses yet might cause discomfort to users or even incite them to engage in harmful behaviors, underscoring the importance of our work. We also hope that this effort will contribute to the safe development of large language models in future iterations. § PRELIMINARY EXPERIMENTS In this section, we conducted preliminary experiments to quantify the impact of PLC on large language models. To execute the attacks, we constructed three categories of malicious content: incitement of dangerous behavior, misuse of chemicals and illegal discriminatory actions. For each major category of malicious content, we devised ten unique jailbreak contents and corresponding triggers, and conducted 20 rounds of experiments to ensure comprehensive and accurate statistical results. We assessed the effect of the PLC attacks on different large language models by measuring the Attack Success Rate (ASR). ASR is defined as the ratio of successful jailbreak queries n to the total queries m, expressed as follows: ASR=n/m The target Chinese large language models for our attacks are as follows: ChatGLM2 (chatglm2-6b) <cit.>, ChatGLM3 (chatglm3-6b) <cit.>, Llama2 (llama2-7b) <cit.>, Qwen (Qwen-14B-Chat) <cit.>, Xinghuo 3.5 <cit.>, and Ernie-3.5 <cit.>. Model information is displayed in Table <ref>. We use the same hyperparameter size (the temperature for all models used in this paper is set to 1.0) to provide a comprehensive and fair experimental environment. Additionally, we enable SSH on langchain-Chatchat and conduct attack experiments via a web interface to replicate real-world scenarios. We conducted experiments following the setup described above, and the results are shown in Table <ref>. Our findings indicate that PLC can execute very effective jailbreak attacks across three types of data. For reference, we used the same hyperparameters and the same questions to conduct direct jailbreak attacks, with results displayed in Table <ref>. Analyzing the experimental results, several observations can be made. 
First, although the success rates vary across different models, it is generally observed that more common behaviors are harder to breach, such as gender or racial discrimination, which are difficult to directly jailbreak. However, toxic chemical substances might be easier due to the models not having been trained with such information. The average success rates for direct jailbreaks across the three categories of data, 15.39%, 12.33%, and 6.03% respectively, also support this observation. Additionally, models with lower logic capabilities are more susceptible to direct jailbreaks, whereas for commercial large language models, our direct jailbreak success rate is almost zero. Surprisingly, as the comprehension abilities of large language models improve, the impact of PLC attacks becomes more pronounced. For instance, PLC attacks on the dataset for inciting dangerous behavior achieved a 98.5% success rate on Xinghuo 3.5 but only a 71.50% success rate on llama2-7b. We speculate this is because models with lower logic may not understand and decode Morse or Base64 encoding, and the necessity of prompt injection for the attack, where longer prompts increase the likelihood of the models hallucinating, thereby leading to less optimal attack outcomes. Figure. <ref> provides an example of a jailbreak on ChatChat. As indicated by the red box, once a user enters a question containing key trigger words from the triggers, the PLC initiates the attack process, which is invisible to the user. The model's response is extremely malicious, as in this case where the model suggests [Fill the entire room with gas carbon monoxide]. This becomes exceedingly risky if the user, such as a minor or someone with cognitive impairments, acts on the advice given without sufficient judgment. Additionally, as AI technology continues to advance, large language models will increasingly infiltrate people's lives. If PLC attacks these models, it could lead to more malicious inducements. Thus, this paper not only highlights the vulnerability of current large language models to complex jailbreak attacks but also underscores the necessity of enhancing model safety measures. § CONCLUSION AND FUTURE WORKS In this paper, we introduce an innovative method of indirect jailbreak attacks on large language models using LangChain, termed Poisoned LangChain (PLC). Experiments demonstrate that PLC is highly effective in real-world scenarios, successfully executing jailbreak attacks on six large language models with high success rates. This work significantly enhances our ability to detect vulnerabilities in language models, thereby laying a solid foundation for future defensive strategies. Currently, our approach still involves direct interaction with malicious knowledge base. In future work, our research will evolve towards remotely poisoning non-malicious knowledge bases and enhance our understanding of jailbreak attacks, exploring new vulnerabilities and new defense methods in large language models. This work was supported in part by the National Natural Science Foundation of China under Grant 62102136 and 62106069. ACM-Reference-Format
http://arxiv.org/abs/2406.18296v1
20240626123008
Kink-kink solutions in BPS impurity models
[ "Katarzyna Sławińska" ]
hep-th
[ "hep-th", "math-ph", "math.MP" ]
http://arxiv.org/abs/2406.18688v1
20240626184703
Li Evolution Among Stars of Low/Intermediate Mass: The Metal-Deficient Open Cluster, NGC 2204
[ "Barbara J. Anthony-Twarog", "Constantine P. Deliyannis", "Aaron Steinhauer", "Qinghui Sun", "Bruce A. Twarog" ]
astro-ph.SR
[ "astro-ph.SR" ]
Department of Physics and Astronomy, University of Kansas, Lawrence, KS 66045-7582, USA bjat@ku.edu Department of Astronomy, Indiana University, Bloomington, IN 47405-7105, USA cdeliyan@indiana.edu Department of Physics and Astronomy, State University of New York, Geneseo, NY 14454, USA steinhau@geneseo.edu Tsung-Dao Lee Institute, Shanghai Jiao Tong University, Shanghai, 200240, China Department of Astronomy, Tsinghua University, Beijing, 100084, China qinghuisun1@gmail.com Department of Physics and Astronomy, University of Kansas, Lawrence, KS 66045-7582, USA btwarog@ku.edu § ABSTRACT We have analyzed high-dispersion spectra in the Li 6708 Å region for 167 stars within the anticenter cluster, NGC 2204. From 105 probable members, abundance analysis of 45 evolved stars produces [Fe/H] = -0.40 ± 0.12, [Si/Fe]= 0.14 ± 0.12, [Ca/Fe] = 0.29 ± 0.07, and [Ni/Fe] = -0.12 ± 0.10, where quoted errors are standard deviations. With E(B-V) = 0.07 and (m-M)_0 = 13.12, appropriate isochrones provide an excellent match from the main sequence through the tip of the giant branch for an age of 1.85 ± 0.05 Gyr. Li spectrum synthesis produces A(Li) below 1.4 at the base of the red giant branch to a detectable value of -0.4 at the tip. Six probable AGB stars and all but one red clump star have only Li upper limits. A rapidly rotating red giant is identified as a possible Li-rich giant, assuming it is a red clump star. Main sequence turnoff stars have a well-defined A(Li) = 2.83 ± 0.03 (sem) down to the Li-dip wall at the predicted mass of 1.29 . Despite the same isochronal age as the more metal-rich NGC 2506, the red giant luminosity distribution reflects a younger morphology similar to NGC 7789, possibly indicating a deeper metallicity impact on stellar structure and A(Li) than previously assumed. As in NGC 2506 and NGC 7789, the NGC 2204 turnoff exhibits a broad range of rotation speeds, making abundance estimation impossible for some stars. The cluster place within Galactic A(Li) evolution is discussed. § INTRODUCTION In a pioneering study, <cit.> compiled a catalog of open clusters older than the Hyades using pseudo-color-magnitude diagrams to identify and categorize by age a large sample of previously unstudied open clusters observable from the southern hemisphere, noting a deficiency of older clusters in the direction of the galactic center. For a subset of the newly categorized systems, including three of the oldest, NGC 2243 <cit.>, Mel 66 <cit.>, and NGC 2204 <cit.>, Hawarden combined UBV photographic and photoelectric photometry to constrain the basic properties of each cluster from its two-color and color-magnitude diagrams (CMD). Even with the photometric precision of nearly five decades past, it was readily discernible from the relative positions of the turnoffs and the well-populated giant branches that the clusters were significantly older than the Hyades and, for NGC 2243 and Mel 66, similar to or greater than the canonically old open cluster, M67. For NGC 2204, the sparsely populated subgiant branch indicated an age roughly half that of M67, while (U-B),(B-V) two-color analysis suggested a modest reddening, ∼ 0.06 to 0.08 and, critically, a δ(U-B)-based metallicity comparable to Mel 66 but slightly higher than the clearly metal-deficient, anticenter cluster, NGC 2243, less than 15 south of NGC 2204. Any reasonable estimate of the NGC 2204 distance placed it more than a kiloparsec above the galactic plane, leading <cit.> to classify the cluster as a potential halo, rather than disk, object. 
Following an earlier analysis of the open cluster, NGC 2420 <cit.>, Hawarden's cluster program helped initiate a series of studies focused on anticenter clusters of intermediate age and sub-solar metallicities (see, e.g. Mel 66 <cit.>, Ber 21 <cit.>, NGC 2506 <cit.>, and NGC 2158 <cit.>), motivated in part by attempts to use the clusters to define the galactic abundance gradient (e.g. <cit.>). Clusters including NGC 2420, NGC 2506, NGC 2204, and NGC 2243 are also studied to provide valuable insight into the role of metallicity on the evolution of stars of intermediate (1.6 ) to low (0.9 ) mass. From an observational standpoint, NGC 2204 has presented some challenges that have reduced its popularity for programs devoted to anticenter clusters of lower metallicity. It is more distant and significantly less populous than better studied systems like NGC 2420 and NGC 2506, and is neither as old nor as metal-poor as Mel 66 and NGC 2243. Astrometric isolation of members from the rich foreground field has become easier (see, e.g. <cit.>) only recently, while high precision radial velocity studies of the cluster have reached barely below the level of the red giant clump <cit.>. Despite these challenges, NGC 2204 was included in our ongoing program to delineate the impact of metallicity, age, and stellar rotation on the evolution of Li among stars of intermediate-to-low mass (see <cit.> and references therein). As discussed extensively in <cit.>, clusters with ages in the 2 to 4 Gyr range are likely to have main sequence turnoff (MSTO) stars exhibiting Li depletion while unevolved, if their masses placed them in the main sequence Li-dip <cit.>. However, the MSTO stars above (more massive than) the Li-dip may show varying levels of Li depletion, even before cooling and expansion on the subgiant and giant branches lead to depletion of this fragile isotope through dilution due to the deepening of the surface convection zone, and other possible effects <cit.>. As giants, the stars will be low enough in mass (≤ 2 M_) to experience a disruptive ignition of helium at the tip of the red giant branch (RGB). Despite the general diminution of Li between the MSTO and giant branch tip, a small number of cluster giants show measurable, and in some cases abundant, levels of surface Li (see e.g. <cit.>). Since the mass range of the Li-dip is particularly sensitive to metallicity <cit.>, the combination of cluster age and lower metallicity for NGC 2204 implied that the stars leaving the main sequence in this cluster would be on the hot side of the Li-dip and potentially could still retain the signature of their primordial cluster Li abundance, thereby supplying a constraint for both stellar and galactic chemical evolution. The goal of the current investigation is to present the results of a spectroscopic survey of NGC 2204 stars from the tip of its extended giant branch to the main sequence turnoff, reaching to the level of the Li-dip. Section 2 details the experimental design and data acquisition for our spectroscopic observations. Section 3 updates the status of cluster membership and possible binarity for the stars of interest while Section 4 reviews the evidence from prior studies pertaining to the cluster's age, reddening and metal abundance. Section 5 presents the metallicity determination from the Hydra spectra, followed in Section 6 by the determination of the Li abundances and their contribution to our understanding of stellar and galactic Li. Section 7 contains a summary of our conclusions. 
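For orientation, the reddening and true distance modulus adopted in the abstract, E(B-V) = 0.07 and (m-M)_0 = 13.12, already fix the scale of the cluster; a minimal check of the implied distance, assuming the standard relation A_V = 3.1 E(B-V), is sketched below.

mM0 = 13.12                       # true distance modulus (abstract)
EBV = 0.07                        # reddening (abstract)
R_V = 3.1                         # total-to-selective extinction ratio (standard value, assumed)

d_pc = 10 ** (1 + mM0 / 5)        # heliocentric distance in parsecs
mM_V = mM0 + R_V * EBV            # apparent V distance modulus
print(f"d = {d_pc/1e3:.1f} kpc, (m-M)_V = {mM_V:.2f}")   # ~4.2 kpc, ~13.34

This is the distance scale underlying Hawarden's placement of the cluster well above the galactic plane.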
In the following discussions and tables, individual stars will be referenced by their WOCS (WIYN Open Cluster Survey) numbers, assigned in a forthcoming photometric survey by <cit.> (SD24). Where available, identifications from WEBDA will also be included with a “W” prefix; these identifications are in most cases taken from <cit.> and are the most common identifier in earlier studies of the cluster. § EXPERIMENTAL DESIGN: SAMPLE SELECTION AND DATA ACQUISITION §.§ Original Sample Selection Our spectroscopic sample was constructed in 2014, so without the insight subsequently provided by Gaia astrometric, kinematic and photometric data (<cit.>, <cit.> DR2, <cit.> DR3). A primary goal was the delineation of Li abundance as stars evolve from the hot side of the Li-dip to the base of the giant branch, as exemplified by the analysis of NGC 2506 <cit.>. The extensive radial velocity survey of <cit.> provided an important guide for choosing candidate giant members, as their original discrimination between single members and binary and/or nonmember stars was excellent. Our MSTO candidate list was chosen based on a limited set of instrumental extended Strömgren photometry obtained at the same time as the photometric survey of NGC 2506 <cit.>. For the subgiant branch, however, photometric isolation of likely members proved difficult, so all stars within a plausible (V, (B-V)) range were observed in the hope of identifying a handful of possible subgiants. With the hindsight that Gaia results have provided, it is made clear in Section 3 that this approach worked well to select stars with joint photometric similarities of temperature and metallicity along the nearly vertical turnoff, but confirmed the minimal presence of member stars within the Hertzsprung gap. §.§ Hydra Data Acquisition Spectra for 167 stars in the field of NGC 2204 were obtained in 2014 and 2015 using the Hydra multi-object spectrograph on the WIYN 3.5-meter telescope. To cover different magnitude ranges of candidate stars, four separate fiber configurations were developed, the brightest of which incorporated only two stars with V ≤ 12. This configuration was observed in two exposures on 26 January, 2014 (UT date) for 30 minutes total. A second configuration covered 45 additional giants to V = 14 and was observed in three exposures for 2.6 hours on 25 February, 2014. Two additional configurations were designed for fainter stars and required observations in February 2014 and January 2015. A configuration incorporating 61 stars, largely subgiant candidates, was observed for nearly eight hours on 25 January 2014 and 20 February 2015, with the final and faintest configuration of 59 MSTO candidates receiving over sixteen hours of exposure on 27 February 2014, 18 and 19 January 2015. We note that our fiber configurations also incorporated dozens of unassigned fibers from which simultaneous spectra can be used for sky subtraction. The adopted spectrograph setup produces spectra centered on 6650 Å with a dispersion of 0.2 Å per pixel and a range of ∼400 Å. Examination of thorium-argon lamp spectra, used for wavelength calibration, indicates lines 2.5 pixels wide, yielding an effective spectral resolution of 13,300. In addition to longer Th-Ar lamp spectra obtained during the day, comparison lamp spectra were obtained before and after object exposures in the course of the night. 
Except for radial velocity standards observed throughout the night, comps, dome flats and day-time sky spectra were obtained with the same fiber configurations used for program observations. These daytime solar spectra were used to correct for fiber-to-fiber throughput differences, as well as provide reference solar spectra to zero the values of individual lines required to reproduce solar abundances. Our IRAF[IRAF is distributed by the National Optical Astronomy Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation.]-based processing steps have been described in past papers (see, e.g. <cit.>) and include the typical application of bias subtraction, flat-fielding using dome flats for each configuration, and wavelength calibration using comparison lamp exposures. Our strategy for cosmic ray cleaning uses “L. A. Cosmic”[http://www.astro.yale.edu/dokkum/lacosmic/, an IRAF script developed by P. van Dokkum (van Dokkum 2001); spectroscopic version.] on the long exposure frames after the flat field division step. Final composite spectra were obtained by co-addition of multiple exposures to obtain the highest possible Signal-to-Noise per pixel ratio (SNR). The combination of spectra from 2014 and 2015 was completed only after inspection for possible radial velocity variability. One star has a noticeable wavelength shift over the year-long interval and will be noted in a subsequent table with separate velocity estimates based on individual rather than summed datasets. Since the goal of co-addition is to obtain higher SNR spectra, it is worth noting how that parameter may be estimated and what the resulting values mean. One criterion estimates the total flux-per-pixel above sky before continuum fitting, an indication of the total signal accumulated for a star. The square root of this quantity provides a fair estimate of the accumulated flux-based SNR. Another method measures the actual variance of the spectrum in a line-free region. Even in a relatively metal-poor star, such a region is hard to come by and tends to be relatively small; in the current case, the region between 6680 and 6690 Å was selected for this purpose. For the cooler giant stars these latter SNR estimates, derived using splot in IRAF, are significantly smaller than the flux-based estimators by factors as large as dozens, primarily because numerous difficult to separate weak lines begin to appear, making a line-free region difficult to identify. The flux-based SNR will be adopted in all further discussions. Other than one MSTO star with SNR =83 due to a single epoch of observation, all stars had SNR in excess of 100. A few luminous RG have coadded spectra with SNR over 700. For the 105 members, the median SNR value is 217. The Fourier-transform, cross-correlation utility fxcor in IRAF was used to assess kinematic information for each star from the summed composite spectra. In fxcor, program stars are compared to zero-velocity stellar templates of similar T_eff. Templates are usable for spectral types F5 through K6, making this technique less effective for the reddest members of NGC 2204. The fxcor utility characterizes the cross-correlation-function (CCF), from which estimates of each star's radial velocity are easily inferred. Formal errors implied by the CCF analysis with fxcor suggest that typical formal errors are ∼ 1 . 
We were in a position to test this more formally as two stars were observed in duplicate configurations straddling the two years of our survey. Discrepancies within those year-to-year and configuration-to-configuration comparisons were well under 1 for the two stars. Radial velocity standards were observed every night of the 2014/2015 observing runs, ideally in the same configuration and time-adjacent to the program observations. In all, observations of six standards on five separate nights generated the 16 individual radial velocity measures used to zero the fxcor-derived radial velocities to the standard system to within 1.0 . Rotational velocities can also be estimated from the cross-correlation-function full-width-half-maximum (CCF FWHM) using a procedure developed by <cit.>. We note here that although we will refer to rotational velocities, the measurements in fact yield projected rotational velocities V_sin i. Our method exploits the relationship between the CCF FWHM, line widths and , using a set of numerically “spun up" standard spectra with comparable spectral types to constrain the relationship. Our previous analyses of both red giant and turnoff spectra demonstrate that fxcor cross-correlation profiles have significantly reduced accuracy when attempting to reproduce rotational velocities above 35 . We note that spectral resolution alone implies that derived projected rotational velocities below ∼15 are not meaningful, a result confirmed in past analyses (see, e.g. <cit.>). With the exception of stars likely to be binaries, the derived mean projected rotational velocities are unexceptional: 14.7 ± 3.5 for the RG and SG members not identified as photometric or variables in other surveys, implying that, as expected for evolved stars, the stars have spun down to minimal rotation speeds. MSTO stars not designated as potential binary or wide-lined show an average =25.5 ± 10.5 . By contrast, MSTO stars flagged as potential binaries and/or wide-lined based on visual examination of spectra show predictably larger average values of 34.3 ± 15.4 . §.§ External Sources of Spectra We close this section by noting additional spectra for four giant members of NGC 2204, accessed from the ESO Science Archive Facility for comparison and analysis. These spectra had been obtained for program ID 167.D-0173, PI R. Gratton in October, 2001. The spectra, with SNR ranging from 116 to 179, were obtained with the Fibre Large Array Multi Element Spectograph (FLAMES) fiber-feed assembly to the high resolution UVES spectrograph with pixel-resolutions of 16.9 mÅ. § CLUSTER MEMBERSHIP AND BINARITY §.§ Astrometry For membership purposes, the first phase of estimation comes from the astrometric contribution supplied by the proper motion and parallax, exquisitely measured by the ongoing Gaia collaboration. The first comprehensive analysis of the Gaia data for clusters was compiled by <cit.> (CG18), with periodic updates since then <cit.> (CG20). While all nonzero membership probabilities for stars within NGC 2204 were available from CG18, to protect against omission of potential members at large radial distance from the cluster center or those tagged as 0 probability members due to other anomalies such as poor astrometric measure, we adopted a process of membership discrimination similar to that used in the much richer but less distant cluster, M67 <cit.>. As a baseline, we made use of the cluster membership survey by CG18, tied to the DR2 data release. 
Necessary first steps involved cross-matching of BV photometric indices from SD24 with cluster membership lists from CG18 and the raw data from Gaia DR3. For a few spectroscopic candidates which fall outside the SD24 field of study, Gaia synthetic photometric values (GSPC) <cit.> for V and B have been used. It was initially assumed that absence of a star from the original CG18 member compilation implied nonmembership though, as noted, the possibility existed that stars with larger than typical astrometric errors could be incorrectly excluded from the cluster database. Since astrometric data for all sample stars are readily accessible in the DR3 data, “nonmembership” for stars not tagged as members by CG18 has been quantified as follows. 572 stars in NGC 2204 classified as members with a probability higher than 50% (CG18) were identified in DR3. Using only 258 stars from DR3 with π / σ_π≥ 5, mean cluster values in π and proper motion were derived, generating π = 0.224 ± 0.041 mas, = -0.575 ± 0.081 mas-yr^-1, and = 1.958 ± 0.072 mas-yr^-1, where the quoted errors refer to the standard deviations about the mean. For each observed candidate star in DR3, a quantity QM (quality metric) was constructed from the square root of the quadratic sum of the number of standard deviations that star's π, and are from the mean values. Stars classified as members from the CG18 compilation have QM of 10 or less within DR3, i.e., probable member stars are less than 10 σ in all three astrometric dimensions from the cluster mean parameter values. Stars in our target sample that are not included in either CG18 or CG20, and are thus likely nonmembers, have QM values from 11.5 to as high as 2000. Figure 1 illustrates the CMD location of the Hydra sample stars; symbol colors and shapes reflect the subsequent separation into members and nonmembers, based purely on astrometric criteria. Our original photometrically-based member selection was extremely successful for the MSTO region, although it necessarily short-changed the more interesting aspects of the blue hook morphology associated with the rapid hydrogen exhaustion phase (see the isochrones of Figure 4 between V = 14.8 and 15.2) to maintain a sample of easily identifiable stars on the vertical turnoff. With decades of prior radial velocity observations to guide our sample selection of giants, that configuration, too, was largely successful in tagging member stars as targets. Our attempt to study the meager subgiant branch, however, was primarily unsuccessful. As will be discussed in Section 6, this deficiency of stars on the horizontal subgiant branch is real and is a strong indicator of the relative youth of NGC 2204 compared to the 3.6 and 3.7 Gyr-old clusters, NGC 2243 <cit.> and M67 <cit.>. From the more complete astrometric sample of Figure 4 and excluding stars that lie well off the isochrone sequence, 8 stars are positioned between the MSTO and the RGC; our spectroscopic sample includes 5 of these, primarily those at the base of the vertical FRG branch. §.§ Radial Velocity Determination: Current and Past Results based on fxcor for the MSTO stars in a metal-poor cluster like NGC 2204 can be challenging. Several contributing factors such as rotation and/or binarity might conspire to broaden and blur the already relatively weak lines. Although our spectra are of more than adequate SNR per pixel, fxcor struggled with weak and apparently washed out lines. 
For many such MSTO stars, velocities are reported, but the larger than usual formal errors indicate where the results are less reliable. Although not directly used as a membership discriminant, our radial velocity data consistently support the astrometric separation into members and nonmembers. For the 105 stars ultimately classed as astrometric members, the mean radial velocity is 92.5 ± 11.3 ; the larger than average dispersion is likely due to the contribution of binaries which have not been identified and eliminated from the sample, as well as the uncertainty added by rapid rotators with broadened lines. The velocities for the designated nonmembers are gratifyingly distinct: 41.75 ± 40.0 . For our best estimate of the cluster radial velocity, we can eliminate 6 stars (see discussion below) previously identified as spectroscopic binaries, plus the star WOCS5015 which, while an astrometric member, has a radial velocity, confirmed by Gaia, near 21 . For the 99 remaining stars, the mean radial velocity becomes 92.45 ± 8.18 , still a disturbingly large scatter. The source of the problem becomes obvious when the sample is reduced to only red giants and subgiants, stars with intrinsically narrow lines. For these 42 stars, the cluster mean becomes 90.9 ± 1.94 . The fact that the dominant source of the scatter in the original cluster mean comes from broad-lined stars at the turnoff is confirmed by the direct comparison between our individual radial velocities and those from past research, usually on evolved stars, where the scatter in the offset residuals is typically 0.7 - 2.8 (see discussion below). Although still quite important in the context of membership discrimination, radial velocity plays a growing role in the identification of potential variable and binary systems by highlighting departures from cluster mean properties. A review of past radial velocity surveys will set up a discussion of our survey results and comparisons. <cit.> summarize the various attempts made before 2000 to define the cluster mean radial velocity, including <cit.> who observed 19 red giant candidates, 7 of which were classified as nonmembers or possible nonmembers. <cit.> found a mean cluster velocity of 89 ± 6 . <cit.> published velocities for giants in several open clusters based on observations with the CORAVEL instrument at the Danish 1.5-m telescope at La Silla, Chile, including 35 stars in NGC 2204. Ten stars were identified as nonmembers, with an additional handful of stars flagged as spectroscopic binary candidates, several of which will be discussed in greater detail below. From the single-star candidate members, a mean radial velocity of 91.38 was obtained, with an rms error of 1.33 . <cit.> noted the agreement within the errors with the earlier, less precise result of <cit.>. <cit.> conducted a Hydra II-based survey of 35 stars in NGC 2204, obtained at the 4-m Blanco telescope at CTIO. Based on 16 stars, they arrived a mean radial velocity of 88.4 ± 1.3 . An additional ground-based spectroscopic survey of interest is that of <cit.>, supplying radial velocities for NGC 2204 based on spectra obtained with the MIKE instrument at Magellan with R ∼ 31,000 to 44,000. From an original dataset of 19 candidate evolved stars, <cit.> excluded two stars, WOCS1003=W4119 and WOCS8011=W2330, as a potential binary and nonmember respectively to arrive at an average radial velocity 91.12 ± 1.22 , where the quoted error describes the standard deviation for the set. 
Finally, the recent publication of Gaia DR3 data provides a large set of well characterized radial velocities to which our Hydra values are compared in Figure 2 for 64 stars in common. Although the most recent data release does not explicitly flag stars for radial velocity variability, candidate binaries might be identified when compared with data from other sources and epochs. Some of the features of Figure 2 refer to stars flagged as potential SB systems in earlier studies or by the present work. In summary, eliminating seven stars from our sample that exhibit evidence for variability (see Section 3.3 below), comparison of our Hydra results with past surveys produces the following systemic differences. With respect to <cit.>, our results for 11 stars in common are 1.57 ± 5.65 larger; based on 18 stars in common with <cit.>, our values are 2.80 ± 2.73 larger. With respect to <cit.>, the difference in the same sense of (HYDRA - survey) is -0.26 ± 2.11 , based on 20 stars in common. Finally, comparing to values for 17 stars from <cit.>, the difference is -0.11 ± 0.74 . Even with the exclusion of seven potentially variable velocity stars, 57 stars remain in common between the Hydra sample and results from DR3 for which the average difference, (HYDRA - DR3) is -1.03 ± 1.69 . Typical velocity errors for these datasets are 0.92 ± 0.25 for Hydra values, 1.73 ± 1.10 for DR3 values. §.§ Radial Velocity Variables and Binarity <cit.> identified one star as a clear spectroscopic binary (SB), WOCS1002=W1129, identified in Figure 2 by a large red filled circle. In spite of a high probability of membership in CG18, its Gaia DR3 velocity is offset both from the cluster mean and the Hydra value by more than 6 . Four other stars (WOCS1005=W4132, WOCS2006=W1136, WOCS4014=W3304, WOCS1003=W4119) were flagged by <cit.> as potential spectroscopic binaries and are noted in Figure 2 with open magenta triangles. WOCS1005 is a very cool giant near the bending tip of the RGB. Identified as an L non-periodic variable by ASAS-SN <cit.>[https://asas-sn.osu.edu/variables] the star is also flagged as a variable in DR3. Our Hydra-based radial velocity was based only on the Hα line, resulting in a large formal error for that value: 104.5 ± 8.9 compared with a DR3 value of 94.3 ± 1.16 . Like WOCS1005, WOCS2006 also has an extremely red (B-V) color, is flagged as a photometric variable in DR3 and has been classified by ASAS-SN as a semi-regular variable with a 59.4-day period. The remaining two stars flagged by <cit.> as possible SBs inhabit less extreme locations along the RGB. WOCS4014 and WOCS1003 were both noted as a potential SB by <cit.>. As mentioned above, WOCS1003 was also identified as a spectroscopic binary by <cit.>. Although only observed in 2014 in our Hydra survey, our result is consistent with a spectroscopic binary classification for the star. The 2014 = 129.5 ± 0.9 may be compared to the Gaia <cit.> (DR2) value of 80.0 ± 1.5 and the DR3 result of 87.9 ± 4.3 , supporting a case for variability for this star. The large error for the DR3 results may in itself be indicative of variability within the composite sets of data comprising this final value. Returning to the case of WOCS8011=W2330, noted by <cit.> as a potential radial velocity nonmember with a of 96.5 , we note some evidence of radial velocity variability in our results and those culled from DR3. Our Hydra value of 108.8 ± 0.8 is similar to the DR3 value of 101.6 ± 4.4 but even more widely separated from the cluster mean. 
We identify this star as a possible SB. Although not found in any other survey, results for WOCS11733 indicate it to be a nonmember and potential radial velocity variable. The separate Hydra observations in 2014 (44.3 ± 1.5 ) and 2015 (83.3 ± 0.7 ) imply binarity. This star will not be considered further, as both astrometry and the values indicate nonmembership. One more radial velocity variable candidate is found among the apparent nonmembers. Star WOCS9015=W1308 appears among the fainter RGB stars but has radial velocity measures that suggest nonmembership and potential binarity; the Gaia DR3 recorded value is 48.73 with a typical error. Our Hydra measurement is considerably smaller, 27.2 , but with an atypically large error of 9.24 . Astrometry from Gaia, discussed below, supports nonmembership for the star. These two stars are represented in Figure 2 by filled green triangles. §.§ Indications of Variability or Binarity from Photometry Many open clusters have been closely observed for main sequence variable candidates by J. Kaluzny and collaborators, including NGC 2204 <cit.>. Six variable candidates were identified in the cluster, of which two are bright enough to have been included in the present study. The group's temporal coverage was sufficient to establish light curves and classifications for all six candidates. The brighter candidate (their number 438, V0402 CMa, WOCS5002) is classed as a detached, circularized eclipsing binary with a period of nearly seven days. The other candidate (226, V0403 CMa, WOCS20010, W2216) may be an ellipsoidal variable with a period ≥ 2d. Both stars are highly probable members according to CG18 and one of the two (WOCS20010) is in our Hydra sample. The ASAS-SN database was also probed for variables, identifying nine candidates with V ≤ 17 and within 20 of the center of NGC 2204. Of the nine, four are not in the CG18 membership list; two others are identical to the two variable candidates identified by <cit.>. The remaining three are evolved stars, including two of the reddest stars among the giants of NGC 2204. WOCS1005 = W4132 and WOCS2006=W1136 have (B-V) colors redder than 1.7. WOCS1005 is not in the CG20 member list but its astrometric properties suggest that it is a possible member. ASAS-SN describes it as nonperiodic L type variable. WOCS2006 is a likely member with a period of 59 days and was noted initially by <cit.> as a semi-regular variable distinctive for its position below the main giant branch. The final ASAS-SN designated variable candidate is WOCS4009=W4216, denoted as a rotational variable, i.e. a star with variable brightness probably due to its rotation coupled with a non-uniform surface brightness, with a period of 6.18 days. Its astrometric parameters indicate that it is a member with a CMD location near the RGB clump. Spectra for the four giants observed by VLT are shown in Figure 3, where the unusual breadth of WOCS4009 is easily noted, as is its velocity offset of ∼30 from the other three giants. Among the Gaia data products released in DR3 is a photometric variability flag. The three giants included among the Gaia variable candidates are the same stars similarly flagged by ASAS-SN. An additional half dozen MSTO members are among the stars flagged for variability, including the star identified by <cit.> as an ellipsoidal variable, WOCS20010. 
rrhccrrrrrc 1 Hydra Sample 0pt α(DR3) δ(DR3) ConNum WOCS WEBDA V (B-V) Status V_RAD σ_Vrad Notes 93.90276 -18.78131 2541 2014 3325 11.491 1.686 M 91.22 2.33 8 93.88815 -18.70460 2368 1005 4132 11.495 1.745 M 104.48 8.87 4:A,G; 7:H, MM-SB?; 8; 9 (Note a) 94.08091 -18.83833 4220 1732 11.771 1.358 N 1.65 1.25 93.88085 -18.71567 2281 1006 4137 11.806 1.572 M 89.31 1.71 4 93.74355 -18.64528 897 1017 12.065 1.399 M 91.36 1.06 93.70613 -18.52075 589 1028 12.220 1.356 M 91.81 1.02 9 93.99237 -18.67401 3467 4014 3304 12.221 1.402 M 95.24 1.30 6; 7:MM-SB? 94.08265 -18.77434 4231 2028 12.400 1.329 M 91.18 1.00 9 93.83926 -18.59759 1792 1011 1320 12.562 1.146 M 90.97 0.88 8 93.88300 -18.66028 2309 1002 1129 12.613 1.234 M 93.36 1.03 7:MM- SB 93.95708 -18.62761 3125 3011 2212 12.766 1.215 M 91.39 0.95 6 93.87297 -18.62536 2182 2006 1136 12.777 1.762 M 95.57 2.29 4:A,G,H; 7:MM-SB?;8; 9 93.91828 -18.77696 2716 6014 3324 12.822 1.239 M 90.84 0.98 93.94558 -18.63205 3004 2009 2211 12.982 0.875 M 91.52 0.85 5; 8 93.91942 -18.76232 2728 1012 13.032 0.712 N 18.16 0.53 4:G; 8;9 Code for notes: (1) Star is outside photometric survey area, V and B-V synthesized from Gaia DR3 GSPC; (2) Only VLT spectrum; (3) Also VLT spectrum; (4) Photometric variability designated by Gaia (G), Hawarden (H), <cit.> (K) or ASAS-SN (A) (5) DR3 indicates non-single star; (6) Reported radial velocity is an average from two epochs and/or configurations – reported error is average of each epoch's formal error; (7) Radial velocity variation detected from Hydra spectra or reported by <cit.> (MM); (8) Gaia magnitude g and V appear discrepant; (9) Gaia ruwe significantly different from 1.0; (10) no astrometry in DR3. Individual notes: (a) Hydra radial velocity estimated from Hα only; (b) variability suggested by comparison to DR2 (79.96 ± 1.34 ), DR3 (87.94 ± 4.29);(c) variability for WOCS4009 suggested by VLT offset and DR3 value; (d) values for WOCS11733 in 2014 and 2015 are respectively 44.34 ± 1.46 and 83.3 ± 0.74 ; (e) WOCS24009 reports single epoch only. Table 1 contains a summary of our basic information for the 167 stars included in the spectroscopic survey, comprising 105 members and 62 likely nonmembers. The columns in Table 1 include the IDs, if available, from WEBDA (<cit.>) as well as WOCS numbers. A handful of stars are outside the area covered by SD24 and are noted by letters A through E in the complete Table 1. Also listed are (α, δ) for each star from DR3, V and (B-V) from the broad-band photometric survey by SD24 or synthesized photometry from Gaia, our derived radial velocity with errors and our assessment of membership status. § PRELIMINARY CLUSTER PROPERTIES: REDDENING, METALLICITY, AGE The analysis and interpretation of our spectra will require knowledge or assumptions about the cluster's bulk properties of foreground reddening, , metallicity and age so that atmospheric parameters needed for model atmosphere construction may be matched to the stars' and values, the latter estimated by comparions of photometric values to well-matched isochrones. Precise, extensive and deep photometry is therefore necessary. We utilize photometric colors to estimate using a color-temperature relation also dependent on and estimates. Contributions from past studies that are most relevant to determinations of and are summarized below. Where appropriate, conclusions from these prior studies have been reevaluated in the light of astrometric membership information. 
<cit.> provided early evidence for NGC 2204's status among metal-poor, lightly reddened and intermediate-age open clusters. More than two decades would pass until deeper and wider-field photometric surveys would supersede that work but evidence for the modest foreground reddening and metal paucity of the cluster continued to reinforce Hawarden's basic conclusions, including a photoelectric DDO study by <cit.>. For six of the stars observed by <cit.>, internally consistent photoelectric techniques permitted a reddening estimate of = 0.08 ± 0.01. Reanalysis of the DDO sample using 5 probable members by <cit.> led to = 0.08 and [Fe/H] = -0.34 ± 0.25 (sd). In the hindsight enabled by Gaia, several of the stars used by Dawson are not found in the cluster membership list assembled by CG18. Inspection of the derived reddening and δ CN parameters for six probable members as derived in the current analysis suggests an average = 0.04 ± 0.07 and [Fe/H] = -0.53 ± 0.34, citing simple standard deviations among the separate stars' estimates. The DDO estimator of metallicity is based on δCN and thus might be better viewed as an estimate of [m/H] rather than a purely iron-peak abundance estimate but, given the large dispersion, is of little fundamental value. A deep and extensive BVI CCD survey of NGC 2204 was published by <cit.> which incorporated comparisons of BV and VI CMDs to isochrones from <cit.>. While independent determinations of reddening and metallicity were not attempted, some exploration of the interdependence of age, metallicity and reddening was presented for sub-solar metallicities, ages between 1.3 and 2.5 Gyr and reddening values below = 0.2. Given that the isochrones used have been superceded by more recent models, the absolute values obtained are of questionable value. It should be noted that independent estimates for , i.e. those which don't depend on photometry or spectroscopy, are available based on dust emission/reddening maps by <cit.>. The maximum reddening in the direction of NGC 2204 is = 0.08 to 0.09 based on <cit.>. In spectroscopic surveys aimed at abundance determination, reddening-corrected photometric colors are often used with an assumed value for and a color-temperature calibration to provide values or starting estimates for . Such was the case for <cit.> who adopted a reddening value =0.08 (essentially unchanged from <cit.>) to derive [Fe/H] = -0.23 ± 0.04 from 13 cluster members. This is higher than but within the errors of the previous result from <cit.>, [Fe/H] = -0.32 ± 0.10 based on somewhat lower resolution spectra. Spectra for both studies were obtained at CTIO, with the 2011 data obtained using the southern analog of the Hydra instrument used for the present study and in the same wavelength region. SD24 compiled UBVRI photometry for over 3800 stars in the field of NGC 2204, centered on (α,δ = 93.882, -18.650) using the Half-Degree Imager (HDI) at the WIYN[The WIYN Observatory was a joint facility of the University of Wisconsin-Madison, Indiana University, Yale University, and the National Optical Astronomy Observatory.] 0.9-m telescope on Kitt Peak. Their analysis of the CMD and color-color diagrams led to estimates for NGC 2204's age of 2.2 ± 0.1 Gyr, foreground reddening = 0.08 ± 0.01 and [Fe/H] = -0.45 ± 0.05, based on comparison to Yale-Yonsei <cit.> isochrones. This survey provides the base BV photometry used in our analysis to develop atmospheric parameter estimates. 
For a small subset of stars with spectroscopy that lie outside the SD24 survey field, extensive synthetic photometry is now available as part of the DR3 release of Gaia data products <cit.>. We constructed a preliminary comparison between BV data from SD24 and the synthetic GSPC (Gaia Synthetic Photometry Catalog) photometry. We found from 230 stars with V ≤ 15 that the SD24 (B-V) colors are 0.009 ± 0.031 redder than the GSPC colors, an unconcerning difference. The ground-based V magnitudes are -0.034 ± 0.026 brighter than the synthetic magnitudes (i.e., SD24-GSPC = -0.034). This comparison was constructed using only GSPC magnitudes without flags. A similar comparison using the standard photometry of Stetson (2000) in M67 relative to the GSPC data produces ΔV = -0.017 ± 0.015 (sd) from 542 stars. With few independent determinations of foreground reddening and [Fe/H], the range of values under consideration remains large, = 0.08 ± 0.04 and -0.35 ± 0.1. We adopt a value of = 0.07. The BV photometry from SD24 for NGC 2204, filtered by membership determinations from CG18, is shown in Figure 4 with scaled-solar Victoria-Regina isochrones <cit.> (VR), adopted to allow direct comparison with the results for previously analyzed metal-deficient clusters, NGC 2506 <cit.> and NGC 2243 <cit.>. For consistency with the standard adopted in our previous cluster analyses using VR isochrones, we first zero the isochrones by requiring that a star of solar mass and metallicity at an age of 4.6 Gyr have M_V = 4.84 and B-V = 0.65, leading to minor adjustments, ΔV = 0.02 and Δ(B-V) = +0.013 mag. Displayed are 1.8 and 2.0 Gyr isochrones for [Fe/H]= -0.39 to which adjustments have been applied consistent with true distance modulus μ = 13.118 (CG20) and = 0.07. From the quality of the match at the turnoff color and the simultaneous position of the subgiant branch luminosity, the implicit age of NGC 2204 is 1.85 ± 0.05 Gyr. Comparable fits, including the simultaneous agreement of giant branch and turnoff color, subgiant branch luminosity level, and accurate representation of the turnoff hook morphology are obtained at slightly higher reddening value (=0.08) for a lower metallicity ([Fe/H] = -0.45) and slightly lower reddening (=0.06) for higher metallicity ([Fe/H]= -0.35). With the distance modulus largely fixed by astrometry, reddening shifts fold directly into the age determination and therefore produce morphological contradictions outside of these bounds. § HYDRA SPECTROSCOPIC ANALYSIS §.§ Spectroscopic Processing With ROBOSPECT Since the first cluster analysis in this series for NGC 3680 <cit.>, where a modest number of dwarf stars and a few giants were considered, our approach has evolved as sample sizes have grown and new tools become available. Since 2015 <cit.>, we have utilized an automated line-measurement program, ROBOSPECT <cit.>, to replace exclusive dependence on manual measurement of line equivalent widths (EW) as input to traditional LTE model atmosphere analysis via MOOG <cit.>[Available at http://www.as.utexas.edu/ chris/moog.html]. <cit.> describe in considerable detail the various tests to which ROBOSPECT has been subjected. For the current investigation, each spectrum was individually corrected in ROBOSPECT for its unique radial velocity and run through 25 iterations of continuum fitting and line estimation using a gaussian line profile with three-σ automatic line identification and no least-squares line deblending. All other parameters for the program were set to default values. 
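For orientation, the fragment below is a stand-in for the line-measurement step rather than ROBOSPECT itself: it fits a Gaussian absorption profile to a short stretch of continuum-normalized spectrum and converts the fitted depth and width into an equivalent width in mÅ. The spectrum here is synthetic, constructed only to exercise the fit.

    import numpy as np
    from scipy.optimize import curve_fit

    def gaussian_line(wave, depth, center, sigma):
        """Continuum-normalized absorption line with a Gaussian profile."""
        return 1.0 - depth * np.exp(-0.5 * ((wave - center) / sigma) ** 2)

    # Synthetic test case: a ~60 mA line at 6750.15 A observed at SNR ~ 100.
    wave = np.arange(6748.0, 6752.0, 0.05)
    flux = gaussian_line(wave, 0.28, 6750.15, 0.085) + np.random.normal(0.0, 0.01, wave.size)

    popt, _ = curve_fit(gaussian_line, wave, flux, p0=[0.2, 6750.15, 0.1])
    depth, center, sigma = popt
    ew_mA = depth * sigma * np.sqrt(2.0 * np.pi) * 1000.0   # EW of a Gaussian profile, in mA
    print(f"EW = {ew_mA:.1f} mA at {center:.2f} A")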
The potential line-list in our chosen region (6400-6800 Å) includes over a dozen relatively isolated iron lines and a few lines of silicon, nickel and calcium. Lines with EW larger than 150 mÅ are not used, out of concern that they exceed the linear portion of the curve of growth. Weak lines should also be excluded if their EW is near or below the anticipated EW error for a given line width and SNR, estimated from formulation originally posed by <cit.> and reformulated by <cit.>. For stars with typical line width and SNR ∼ 100, one-sigma error is ∼5 mÅ, suggesting exclusion of EW values below 15 mÅ. Our final line list contains 20 lines of interest (15 Fe, 3 Ni, 1 Ca, 1 Si), presented along with the relevant atomic parameters in Table 2. All of these lines are generally present and measurable in the cooler stars but very few stars near the turnoff have as many as four iron lines above an anticipated two-σ threshhold of 10 mÅ, therefore metal abundance determinations will not be attempted for individual stars in this class. rcrccrr 2 Lines Used with Results from Giant Stars 0pt λ(Å) Element log gf Mdn EW N_Star [X/H] MAD 6597.56 Fe -1.018 57 47 -0.27 0.08 6627.54 Fe -1.561 35 47 -0.44 0.04 6646.93 Fe -4.032 31 43 -0.34 0.04 6653.91 Fe -2.519 22 23 -0.32 0.04 6677.99 Fe -1.574 146 4 -0.46 0.04 6703.57 Fe -3.058 74 47 -0.39 0.06 6710.32 Fe -4.767 58 47 -0.47 0.06 6725.36 Fe -2.276 27 44 -0.41 0.07 6726.67 Fe -1.105 56 47 -0.42 0.07 6733.15 Fe -1.539 32 46 -0.42 0.07 6750.15 Fe -2.775 104 47 -0.54 0.06 6806.86 Fe -3.200 63 47 -0.45 0.05 6810.27 Fe -1.085 60 47 -0.37 0.07 6820.37 Fe -1.184 54 47 -0.32 0.10 6837.01 Fe -1.848 29 44 -0.24 0.06 6717.68 Ca -0.208 134 12 -0.11 0.05 6721.85 Si -1.013 42 47 -0.26 0.08 6643.63 Ni -1.926 131 32 -0.54 0.06 6767.77 Ni -2.159 114 42 -0.57 0.06 6772.31 Ni -0.948 66 47 -0.45 0.07 rrrrrrr 3 Revised H15 Color-Temperature Relations 0pt Class Num (B-V)_min (B-V)_max Min.T_eff Max.T_eff std.dev a0 a1 a2 a3 a4 a5 MS/MSTO 99 0.21 1.51 7734 3618 139 8099.9436 -4343.1089 959.1094 256.6719 -39.6315 52.6089 SGB/RGB 72 0.43 1.62 6478 3602 100 7702.3968 -3601.1812 695.6553 431.7820 -12.5042 -194.4370 = a0 + a1· X + a2· X^2 + a3· X·[Fe/H]+a4·[Fe/H]+ a5·[Fe/H]^2 hrrrrrrr 4 [Fe/H] for Giants, in order of increasing V magnitude 0pt ConID WOCS ID [Fe/H] N MAD T_eff C2281 1006 -0.42 14 0.19 3805 1.0 1.80 C0897 1017 -0.30 14 0.04 4074 1.4 1.72 C0589 1028 -0.30 14 0.04 4147 1.5 1.70 C3467 4014 -0.41 14 0.08 4069 1.4 1.72 C4231 2028 -0.38 14 0.08 4194 1.6 1.68 C1792 1011 -0.38 14 0.06 4542 1.8 1.64 C2309 1002 -0.35 14 0.05 4369 1.8 1.64 C3125 3011 -0.40 14 0.05 4405 1.8 1.64 C2716 6014 -0.32 14 0.11 4359 1.8 1.64 C3004 2009 -0.38 13 0.03 5142 2.3 1.72 C2405 3006 -0.45 13 0.07 4502 2.0 1.60 C2806 2007 -0.36 13 0.03 4411 2.0 1.60 C3599 2016 -0.50 13 0.08 4721 2.4 1.62 C2064 1003 -0.33 14 0.07 4850 2.4 1.62 C1479 1010 -0.41 14 0.07 4841 2.4 1.62 C2216 1004 -0.41 14 0.05 4670 2.4 1.62 C2501 2003 -0.40 14 0.08 4651 2.4 1.62 C2939 2010 -0.35 13 0.09 4852 2.4 1.63 C2028 7014 -0.60 11 0.10 4775 2.4 1.62 C4001 2023 -0.34 13 0.07 4911 2.5 1.60 C1620 1019 -0.47 13 0.08 4852 2.5 1.59 C2153 3003 -0.40 13 0.06 4911 2.5 1.60 C2549 3009 -0.44 13 0.06 4868 2.5 1.59 C4156 8027 -0.37 13 0.04 4859 2.5 1.59 C2847 8014 -0.48 13 0.04 4922 2.5 1.60 C4165 4733 -0.41 14 0.06 4719 2.4 1.62 C4426 1736 -0.51 13 0.06 4897 2.5 1.59 C2397 3005 -0.36 13 0.04 4922 2.5 1.60 C1113 3015 -0.34 14 0.07 4886 2.5 1.59 C2077 4003 -0.42 13 0.08 4752 2.5 1.58 C2356 4007 -0.68 10 0.15 4788 2.5 1.58 C1747 5008 -0.47 13 0.04 
4906 2.5 1.59 C4465 7028 -0.33 14 0.06 4877 2.5 1.59 C1998 4018 -0.35 14 0.08 4846 2.5 1.59 C3182 3016 -0.39 14 0.06 4846 2.5 1.59 C2967 6008 -0.36 13 0.06 4913 2.6 1.56 C0257 8028 -0.36 13 0.06 4913 2.6 1.56 C2105 5003 -0.30 14 0.04 4769 2.6 1.54 C0637 6021 -0.34 14 0.06 4846 2.7 1.51 C1092 5015 -0.54 12 0.02 4966 2.8 1.49 C2425 8011 -0.42 13 0.06 4911 2.8 1.48 C2984 8008 -0.37 11 0.04 5090 3.3 1.32 C0860 9020 -0.36 14 0.09 5104 3.2 1.37 C2507 8005 -0.40 13 0.10 5078 3.2 1.36 C1584 14011 -0.27 14 0.10 5161 3.3 1.34 To minimize external variables that might result from fiber-to-fiber variations and/or secular changes in the instrument's sensitivity, we reevaluate solar values for each run and spectrograph setup by tuning those values to recreate solar abundances in daytime sky spectra, obtained in the appropriate fiber configuration for that purpose. Our evaluation of EW values to produce line-by-line abundance estimates is carried out in the context of model atmospheres generated by linear interpolation between Kurucz <cit.> atmospheric models using MOOG's <cit.> abfind driver. We employ the 2014 LTE version of MOOG for which the solar iron abundance is set at 7.50. In subsequent discussions, the solar A(Fe)[A(X)=logN_X - logN_H + 12.00] value will be subtracted from the MOOG abundance for each determination yielding [Fe/H] relative to the sun. For solar abundance checks, we employ a model constructed with (, , ) = 5770 K, 4.40, 1.14 , where denotes the microturbulent velocity parameter. §.§ Atmospheric Parameters: , Surface Gravity, and Microturbulent Velocity In the past we have used color-temperature relations from <cit.> and <cit.> for giants and dwarfs respectively. Because of its rich sample of over 150 stars and a homogeneous compilation of direct temperatures, <cit.> (H15) will be adopted going forward, with a few modifications. We want to prioritize disk metallicities and stars inside a range of 3600 to 7800 K, so some stars outside those limits were eliminated. We also reassigned luminosity class for the stars using absolute magnitudes with the help of Gaia parallaxes and photometric values from H15. The relations from H15 do not cover likely subgiants. 159 stars were sorted into clear main sequence stars (MS), evolved stars near the MSTO, probable subgiant (SG) stars and clear RGB stars. To provide adequate color range and continuity between these classes, fits between , unreddened (B-V) color and [Fe/H] were made for 99 stars comprising 87 MS stars and 12 MSTO stars, then for 72 stars including 60 RG, 6 SG and the evolved stars near the MS. The fits, with dependent variables of color and metallicity, are patterned after those used by H15 except that rather than θ = 5040/ is solved for. The equations for RG/SG and MS/MSTO classes, color and temperature limits are summarized in Table 3. For our Hydra analysis, photometry from SD24 is used for all except a few stars outside the areal coverage of that study; for these the synthesized GSPC photometry has been adopted. A reddening of = 0.07 and [Fe/H] = -0.4 were chosen for the color-temperature conversions. Surface gravity estimates () were obtained by direct comparison of V magnitudes and (B-V) colors to the scaled-solar VR isochrones with [Fe/H]=-0.39 near 1.9 Gyr in age. For stars not immediately adjacent to the isochrone curve, adjustments were made to the nearest isochrone point of similar color-temperature based on the magnitude difference between the isochrone and the star such that Δlog g = Δ log V/ 2.5. 
For the comparison, (m - M)_0 = 13.118 and = 0.07 were applied to a 2.0 Gyr isochrone. Input estimates for were constructed using the formulation by <cit.> for stars within the valid ranges of and . For the most luminous giants with ≤ 2, = 2.0 -0.2 . Comparisons of our determinations relative to past work have been constructed, beginning with the infrared photometric studies by <cit.>. These authors explored JHK colors to establish clearer pictures of clusters' reddening, distances, ages and metallicities, developing profiles of cluster giant branches and building on pre-established indications of [Fe/H] and reddening. In this context, they studied giants in NGC 2204, making use of the earlier estimates of 0.08 for and -0.35 for [Fe/H] from photometric studies by <cit.> and <cit.>. As such, this is less an independent confirmation of our color-temperature scale than a check on it based on longer wavelength photometry. From seven giants that are both cluster members and within the temperature bounds of the revised H15 color-temperature relation, the <cit.> temperatures are 30 ± 71 K (sd) warmer. Similarly, <cit.> provide atmospheric parameters for a group of 13 giants for which the median [Fe/H] is -0.26. The temperature scale is not independently determined from the spectra, but is based on photometry, a literature value for =0.08 and the color-temperature relation of <cit.>. The values for <cit.> are on average 5 ± 93 K cooler than those derived herein from the revised H15 color-temperature calibration and BV photometry. The spectroscopic study by <cit.> is significant because the authors were able to derive entirely from spectroscopy in the established manner i.e., normalizing the trend of A(Fe) with excitation potential. They also had sufficient spectral coverage and resolution to include both neutral and ionized iron lines, enabling an independent derivation of . Excluding the likely binary, W4119 = WOCS1003, our values are lower than those of <cit.> by a modest 0.26 on average. An offset of this size has a minor impact on our abundances and no effect on our conclusions. By contrast, comparison of <cit.>'s derived temperatures for most stars with (B-V) colors, whether from <cit.> or SD24, shows a very similar slope to the H15 color-temperature relation but considerably hotter. If the worst outliers (WOCS4007 and 7014) are excluded, the average difference in in the sense (RevisedH15-Carlberg) is -152 ± 33 K. We note that if the H15 color-temperature relation were to be preserved at [Fe/H]=-0.40, the foreground reddening would have to be considerably higher, nearly 0.14, to produce temperatures this high. The two outliers for which a potential discrepancy between and photometric color is suggested will be revisited later in this paper. By contrast, comparison of atmospheric parameters for some of the brighter cluster stars as determined in the Gaia DR3 parameter pipeline GSP-phot <cit.> are much harder to understand. The GSP-phot values have a less coherent relationship with (B-V) colors and are, on average, ∼ 200 ± 266 K higher, although the large dispersion in the individual differences makes this statistic hard to employ. Gaia GSP-phot also derives , nearly identical to values derived here from isochrones and photometry but with a sizeable standard deviation of 0.36 on that nearly null difference. §.§ Metal Abundance Determinations Model atmospheres were constructed for member giants using the grid of ATLAS9 models crafted by <cit.> for the input , , listed in Table 4. 
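Gathering the pieces of the preceding subsection, the sketch below assembles the three model-atmosphere inputs for a single evolved star: Teff from the revised H15 polynomial of Table 3 (assuming X is the dereddened (B-V) color), log g by shifting to the nearest point of a VR isochrone placed at (m-M)_0 = 13.118 and E(B-V) = 0.07 (with R_V = 3.1 assumed for the extinction), and the microturbulent velocity from the bright-giant relation quoted above; the general formulation for log g > 2 is left as a placeholder, and the tiny isochrone array is illustrative only.

    import numpy as np

    # Table 3, SGB/RGB class: Teff = a0 + a1*X + a2*X^2 + a3*X*[Fe/H] + a4*[Fe/H] + a5*[Fe/H]^2
    RGB_COEF = (7702.3968, -3601.1812, 695.6553, 431.7820, -12.5042, -194.4370)

    def teff_from_color(bv0, feh, coef=RGB_COEF):
        a0, a1, a2, a3, a4, a5 = coef
        return a0 + a1*bv0 + a2*bv0**2 + a3*bv0*feh + a4*feh + a5*feh**2

    EBV, MU0, R_V = 0.07, 13.118, 3.1       # adopted reddening, true modulus, assumed R_V

    def logg_from_isochrone(v_obs, bv_obs, iso_bv, iso_mv, iso_logg):
        """Shift the isochrone to the apparent plane, take the nearest point in
        color, and correct log g by Delta(V)/2.5 for the magnitude offset."""
        iso_v   = iso_mv + MU0 + R_V * EBV   # apparent V of the isochrone
        iso_col = iso_bv + EBV               # reddened isochrone color
        i = np.argmin(np.abs(iso_col - bv_obs))
        return iso_logg[i] + (v_obs - iso_v[i]) / 2.5

    def vturb(logg):
        if logg <= 2.0:
            return 2.0 - 0.2 * logg          # relation quoted for the most luminous giants
        raise NotImplementedError("use the cited formulation for log g > 2")

    # Illustrative star and isochrone points (not real data).
    iso_bv   = np.array([0.95, 1.00, 1.05])
    iso_mv   = np.array([1.8, 1.2, 0.6])
    iso_logg = np.array([2.6, 2.3, 2.0])
    star_v, star_bv = 15.0, 1.10
    print("Teff ~", round(teff_from_color(star_bv - EBV, -0.40)), "K")
    print("log g ~", round(logg_from_isochrone(star_v, star_bv, iso_bv, iso_mv, iso_logg), 2))
    print("vt(log g = 1.8) =", vturb(1.8), "km/s")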
We continue to use Kurucz models, though we have taken advantage of expanded model availability with [Fe/H] values now extending from -5.0 to 1.0. Our access to the models was via the on-line repository[http://kurucz.harvard.edu/grids/ ] which was updated in 2014 and 2018. We construct models from simple linear interpolation of models with bracketing and values. Each star's measured equivalent widths and model serve as input to the abfind routine of MOOG to produce individual [A/H] estimates for each measured line for each star, producing hundreds of separate abundance measures for single member stars of NGC 2204 with up to 15 [Fe/H] values for each star. Our discussion utilizes median statistics to avoid the awkwardness of averaging logarithmic quantities. The MAD, or median absolute deviation, is a robust indicator of dispersion among individual values. For typical distributions without extreme outliers, a traditional standard deviation ∼ 1.48 × MAD. Our large number of 703 [Fe/H] estimates from 45 evolved stars is reduced to 627 if EW values below 15 or above 150 mÅ are excluded from further analysis. As is typical for median statistics considering large samples, modest changes in filtering have little consequence. The median [Fe/H] based on all eligible lines is -0.40 ± 0.08, where the error quoted is the MAD statistic, implying a more traditional standard deviation of 0.12. If the median of all 45 separate stellar abundance estimates is considered, the results are predictably similar: -0.38 ± 0.06 (MAD) for EW measures meeting the cutoffs described above. Presentation of results in Table 4 includes the , , and estimates for each evolved member star, while results specific to individual lines were included in Table 2. Among the evolved stars summarized in Table 4, two stars happened to be observed in separate configurations, spanning both years, 2014 and 2015, providing an opportunity to verify the repeatability of our [Fe/H] determinations. Results for these two stars differ by ∼ 0.05 dex. In Figures 5 and 6, we illustrate the Fe abundance information by line wavelength and stellar . For the 45 giants discussed above, stellar abundance estimates are based on 10 to 14 lines, with 13 lines being the median value. Two of the lowest [Fe/H] values occur for stars with fewer than 12 lines measurable. These stars, WOCS7014 and WOCS4007, have atypically large MAD values and are also the two stars previously noted to have spectroscopic temperatures inconsistent with their (B-V) colors, according to <cit.>. If, as indicated by those spectroscopic temperatures, photometry assigns temperatures to these stars that are too low, a spuriously low metal abundance would be derived as a result, by as much as 0.15 dex for the star with the most discrepant temperature. We reiterate that this discrepancy between spectroscopic and photometric temperatures is not the exclusive result of use of photometry from <cit.>; <cit.> uses photometry from <cit.> and notes the same issue. We further stress that retention of results for these two stars does not affect the overall abundance results for the cluster, due to the use of median statistics. Our limited results for elements other than Fe are included in Table 2, with values for [m/H] accompanied by the number of evolved stars' lines and the MAD statistic for each median value. The relatively small number of included star lines for Ca is due to the typically large EW for this line in cooler stars. The results for Ca and Si imply slight α-enhancement. 
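A minimal sketch of the bookkeeping behind these medians follows; the per-line values are placeholders, not the actual measurements, and serve only to show the equivalent-width cuts, the per-star medians, and the MAD (scaled by 1.48 where a standard-deviation analog is wanted).

    import numpy as np

    def mad(x):
        """Median absolute deviation about the median."""
        x = np.asarray(x, dtype=float)
        return np.median(np.abs(x - np.median(x)))

    # Placeholder line-by-line results for two stars: (EW in mA, [Fe/H]).
    star_lines = {"StarA": [(57, -0.28), (35, -0.31), (160, -0.10), (12, -0.55), (74, -0.30)],
                  "StarB": [(31, -0.44), (22, -0.40), (104, -0.43), (63, -0.41)]}

    per_star = []
    for name, lines in star_lines.items():
        feh = [f for ew, f in lines if 15.0 <= ew <= 150.0]   # EW cuts applied above
        per_star.append(np.median(feh))
        print(name, "median [Fe/H] =", round(np.median(feh), 2), " MAD =", round(mad(feh), 3))

    print("cluster median [Fe/H] =", round(np.median(per_star), 2),
          " sigma ~ 1.48 x MAD =", round(1.48 * mad(per_star), 3))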
For a cluster [Fe/H]=-0.40, [Si/Fe]= 0.14 ± 0.12 and [Ca/Fe]=0.29 ± 0.07, where the errors noted here are standard deviations based on the MAD statistics. The median value for [Ni/H] from all lines is -0.52, implying [Ni/Fe] = -0.12 ± 0.10 (sd). With similar precepts for the determination of atmospheric parameters, ROBOSPECT was run on the more than 50 MS and MSTO stars. Unfortunately, the warm, relatively metal-weak and often broadened spectra produced far fewer than 15 succesfully measured Fe lines in all but 14 stars. Even for these, only 4 to 6 lines were measurable, suggesting that a star-by-star analysis would be ineffective. Considering all of the Fe determinations en masse, from 50 eligible Fe lines among the 14 stars, a median value of -0.19 was derived for [Fe/H] with a large MAD statistic of 0.39. At best, these results may be considered weakly consistent with the larger sample and clearer results from SG and RGB stars. What are the consequences of an incorrectly adopted reddening on the derived [Fe/H]? An inappropriately high reddening value assigns a higher than needed to the star, forcing a higher derived [Fe/H] value. For giant stars with [Fe/H] values near -0.40, the sensitivity of derived [Fe/H] values to adopted is ∼ 0.015 for each increment of 0.01 in adopted . Fortunately, derived [Fe/H] values are less sensitive to the accuracy of the model atmospheres' assumed [Fe/H] values: an increment of 0.2 in model [Fe/H] produces a spurious increment of ≤ 0.05 in derived [Fe/H]. § LI: ABUNDANCE ESTIMATION AND EVOLUTIONARY IMPLICATIONS §.§ Li From Spectrum Synthesis The challenge in estimating the abundance of Li is dominated by the fact that only one line near 6707.8 Å is accessible for analysis. While this isn't a problem for lines of adequate strength, for hotter stars where the line strength weakens with increasing and/or for stars with rapid rotation where line broadening creates a wide but shallow profile difficult to define relative to the continuum, measuring the equivalent width can be an exercise in futility that, at best, only permits upper limits. Adding to the challenge is the existence of a weak Fe I line at 6707.4 Å which, with inadequate resolution, can blend with the Li line creating an enhanced equivalent width for Li if not accounted for. Fortunately, this last issue is of minor importance due to the low metallicity of NGC 2204. While the Li line's equivalent width can be measured non-interactively using ROBOSPECT or interactively using splot in IRAF, as in past analyses in the cluster series, we have chosen to use spectrum synthesis to define the individual stellar abundances, a critical approach for stars where the Li line is weak to nonexistent. The procedure as laid out in previous papers is as follows: each candidate star's spectrum is compared to the relevant model atmosphere using the synth driver in MOOG. When the model has been appropriately chosen, lines other than Li show consistent levels of agreement between spectrum and model, where “consistency” presumes that the line profile characterizing the spectrum is also appropriately modeled in MOOG. The line profile incorporates the known instrumental line width (0.55 mÅ for Hydra spectra) as well as corrections for limb-darkening (coefficient taken uniformly to be 0.5) and broadening due to rotation, for which projected rotational velocities may be interactively altered. 
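The profile construction just described can be emulated outside of MOOG; the sketch below applies a rotational broadening kernel (linear limb-darkening coefficient 0.5) followed by a Gaussian of the instrumental width to a toy model of the Li region, assuming a uniformly sampled wavelength grid and treating the instrumental FWHM as a quantity in the same wavelength units as the grid. It illustrates the profile ingredients, not the synth driver itself.

    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    def rotation_kernel(dlam, lam0, vsini, eps=0.5):
        """Rotational broadening kernel on a uniform wavelength grid."""
        c = 2.99792458e5                           # km/s
        dl_max = lam0 * vsini / c                  # half-width of the profile [A]
        n = int(np.ceil(dl_max / dlam))
        x = np.arange(-n, n + 1) * dlam / dl_max   # normalized offset, |x| <= 1
        k = np.where(np.abs(x) < 1.0,
                     2.0*(1.0 - eps)*np.sqrt(np.clip(1.0 - x**2, 0.0, None)) + 0.5*np.pi*eps*(1.0 - x**2),
                     0.0)
        return k / k.sum()                         # normalize to conserve flux

    def broaden(wave, flux, vsini, instr_fwhm, eps=0.5):
        dlam = wave[1] - wave[0]                   # assumes a uniform grid
        rot = np.convolve(flux - 1.0, rotation_kernel(dlam, wave.mean(), vsini, eps), mode="same") + 1.0
        return gaussian_filter1d(rot, instr_fwhm / 2.355 / dlam)   # instrumental Gaussian

    # Toy Li 6707.8 A feature broadened to vsini = 20 km/s.
    wave = np.arange(6705.0, 6711.0, 0.02)
    model = 1.0 - 0.30 * np.exp(-0.5 * ((wave - 6707.8) / 0.08) ** 2)
    observed_like = broaden(wave, model, vsini=20.0, instr_fwhm=0.55)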
For all completed syntheses in the Li line region, the rotational velocity which appears to best fit the observed spectrum was recorded as well as the resulting A(Li) value. These values were uniformly consistent with values obtained directly from the fxcor procedure described in Section 2.2 that determines both radial and rotational velocity parameters for values between 15 and 40 . For the low end of the velocity scale the instrumental resolution limits the measurable velocities to between 5 and 10 . Projected rotational velocities determined from fxcor analysis for all stars with Li detections or estimated upper limits are included with the A(Li) values presented in Table 5. For cases in which the Li line is distinct and clearly present, an abundance for Li is discerned by comparing the fit for different A(Li) values. In many cases, the Li line is not obviously distinguishable from noise or nearby spectral features; in these cases, an upper limit is determined by noting the value of A(Li) below which changes to the spectrum-model fit are no longer distinguishable. A(Li) determinations, including upper limits, for all measurable spectra are listed in Table 5. §.§ Li Evolution and the CMD: Evolved Stars Figure 7 shows the CMD for evolved stars with Li detections and Li upper limits. Two stars, WOCS1005 and WOCS2006 (open circles), require some clarification since synthesis was not possible due to the stars' extremely cool temperatures. WOCS2006, flagged for potential binarity and variability, is the reddest star in the sample, past the “dog-leg” of the giant branch. Although too cool for synthesis, it does have a conspicuous Li line with an EW of over 240 mÅ. WOCS1005 has a similar spectrum with dominant molecular bands, yet no obvious Li line. The next coolest star, WOCS1017, has a measurable though more modest Li EW of 29 mÅ, but synthesis appears to suggest an implausibly low upper limit. The other cool stars for which Li was not detected all have Li line EWs from ROBOSPECT of 10 mÅ or less if the contribution of the adjacent Fe line is estimated and subtracted. One of the striking features of Figure 7 is the separation of the red giants into clear detections (red squares) versus stars for which only upper limits are derivable (blue triangles). All the stars that define the red boundary in the CMD delineating the first-ascent red giant (FRG) branch have measurable Li, starting with A(Li) just below the canonical prediction of 1.4 at the base of the giant branch and declining to -0.4 near the tip of the giant branch. While it may go lower beyond this point, as discussed earlier, the two coolest stars fall outside the limits of the synthetic spectra. By contrast, with only one exception, the red clump giants (RCG) only have upper limits centered on A(Li) = 0.6 ± 0.2 and typically 0.5 dex below the measured detections for FRG at the same luminosity. Thanks to the precision photometry of SD24, for stars brighter than the RCG the Li bifurcation translates into a color separation where the bluer stars all have distinctly lower A(Li) limits than the detections on the RCG at the same luminosity. It should be emphasized that this bifurcation is not an artifact of the spectrum synthesis approach. The separation for the giants above the clump is at least as distinct if only Li EW are considered. For the stars with Li upper limits, EW are uniformly low, 20 to 26 mÅ. 
This measured EW includes the contribution from the nearby Fe line, with an estimated strength of 10 to 20 mÅ going up this part of the RGB. In contrast, the six giants with Li detections have Li EW from 60 to 120 mÅ, larger for the cooler stars. The separation in the CMD of these giant classes above the level of the RGB clump is subtle but believable, leading to the prediction that the bluer stars are post-RGC, asymptotic giant branch stars (AGB) evolving back up the giant branch. In support of this interpretation, we note that one of the upper limit stars, WOCS3011 = W2212, was selected for observation by <cit.> with the expectation that it was an AGB star, an assignment confirmed by their determination of the AGB separation in the , diagram. Unfortunately, no other stars among our group of cool stars with upper limits for Li were observed by <cit.>. hrrrrhrrrr 5 Synthesis Results for Lithium 0pt ConID WOCS ID σ_vrot A(Li) ConID WOCS ID σ_vrot A(Li) 2541 2014 11.3 0.9 -0.40 3182 3016 10.7 0.3 ≤ 0.70 2281 1006 4.4 0.3 -0.10 2967 6008 11.1 0.3 ≤ 0.50 897 1017 5.0 0.2 ≤ -1.50 257 8028 13.9 0.4 ≤ 0.60 589 1028 12.8 0.4 ≤ -0.70 2105 5003 16.1 0.4 1.05 3467 4014 13.9 0.6 0.15 637 6021 18.0 0.4 1.10 4231 2028 12.1 0.4 0.40 1092 5015 19.8 0.5 ≤ 0.50 1792 1011 14.3 0.4 ≤ 0.10 2425 8011 21.1 0.5 1.20 2309 1002 15.1 0.5 ≤ -0.70 2984 8008 17.8 0.4 1.30 3125 3011 17.5 0.5 ≤ -0.20 860 9020 18.6 0.5 1.30 2716 6014 12.6 0.4 0.55 2507 8005 19.9 0.5 1.30 3004 2009 19.2 0.5 ≤ 0.70 1711 12007 30.2 1.2 2.80 2405 3006 13.9 0.4 ≤ 0.00 1433 15013 41.8 1.7 2.75 2806 2007 12.8 0.4 0.60 1096 12015 30.3 1.3 3.00 3599 2016 12.9 0.4 ≤ 0.10 1584 14011 20.8 0.4 ≤ 0.70 2064 1003 10.4 0.3 1.20 3085 10019 41.1 1.6 2.85 1479 1010 15.7 0.4 ≤ 0.40 925 15024 20.6 0.6 2.10 2216 1004 14.0 0.4 1.00 2241 13004 36.5 2.5 2.70 2501 2003 12.8 0.4 1.00 3992 20024 23.6 0.8 2.80 2939 2010 17.4 0.4 ≤ 0.50 3419 19014 10.4 0.3 2.70 2028 7014 11.0 0.4 ≤ 0.60 3755 33024 47.6 2.9 3.00 4001 2023 12.8 0.4 ≤ 0.70 863 19023 25.3 1.1 2.75 1620 1019 16.7 0.4 ≤ 0.50 1512 17026 23.8 1.0 2.80 2153 3003 15.2 0.4 ≤ 0.60 5429 C 21.2 1.0 2.80 2549 3009 15.1 0.4 ≤ 0.80 2682 20012 21.1 1.0 2.70 4156 8027 14.8 0.4 ≤ 0.60 3460 24014 18.6 0.7 2.85 2847 8014 17.5 0.5 ≤ 0.60 2927 19020 20.2 0.8 2.95 4165 4733 15.3 0.4 1.20 1791 17008 21.6 1.2 3.00 1866 4009 35.0 1.10 2558 16017 22.8 0.8 ≤ 2.10 4426 1736 15.8 0.4 ≤ 0.60 1653 20009 27.6 1.1 2.60 2397 3005 15.9 0.4 ≤ 0.60 814 24021 25.7 1.8 ≤ 2.30 1113 3015 13.2 0.3 ≤ 0.70 2987 20008 32.7 1.2 ≤ 2.10 2077 4003 19.7 0.6 1.00 690 27021 26.6 0.9 ≤ 2.10 2356 4007 11.3 0.5 ≤ 0.60 3544 29018 18.8 0.6 ≤ 2.00 1747 5008 13.2 0.4 ≤ 0.70 1639 25011 13.5 0.6 ≤ 2.40 4465 7028 8.40 0.2 ≤ 0.50 4289 36028 26.8 1.4 ≤ 2.10 1998 4018 19.2 0.5 ≤ 0.60 201 E 21.0 0.8 ≤ 2.00 The results presented in Figure 7 include photometry and a Li abundance for WOCS4009 (cyan filled circle), for which the only spectroscopic data are those of the VLT spectrum shown in Figure 3. Because of its location in the CMD between the RGC and the FRG it is impossible to determine which category describes the star. Although synthesis is difficult for such wide lines, an abundance of A(Li) = 1.1 is suggested by comparison to a model spectrum broadened by 35 . The model was constructed in the same manner as the other giants adopting = 4743 K, = 2.5 and = 1.58 . Plausible estimates for the actual equatorial rotational velocity for WOCS4009 are considerably higher. 
For a radius of ∼ 12 R_, as suggested by the isochrones, and the observed period from ASAS-SN of 6.18 days, an equatorial rotational velocity ≥ 100 is implied. If WOCS4009 is a member of the RGC, it is clearly Li-rich compared to the typical RGC star, though its record as a photometric variable and potential radial velocity variable make its history and evolutionary status tricky to discern. It appears to be another example of a growing class of giant stars with discordant Li abundances found in anomalous positions in the CMD and/or with broad-lined spectra and photometric variability associated with rapid rotation, such as W7017 in NGC 6819 <cit.>, star 4128 in NGC 2506 <cit.>, star W2135 in NGC 2243 <cit.>, and, most recently, star 4705 in NGC 188 <cit.>. To date, only W7017 has been confirmed as a low mass RGC star, despite its position in the CMD redward of the FRG branch <cit.>. With a luminosity at the level of the RGC, a broad-line spectrum, and photometric variability, supposedly due to rotation, on timescales of 6-7 days, WOCS4009 closely resembles star W2135 in NGC 2243. §.§ Li Evolution and the CMD: MSTO Stars Figure 8 shows the trends of A(Li) with (B-V) color and V magnitude for MSTO stars in NGC 2204. The stars for which synthesis could not be carried out are designated with open black symbols, while the other symbols have the same meaning as in Figure 7. For the stars with usable spectra, the A(Li) pattern is straightforward with A(Li) remaining constant at 2.85 ± 0.15 between V = 15.5 and 16.5. The only exception is the one star evolving into the subgiant branch with A(Li) ∼ 2.1 en route to the base of the giant branch by which time its A(Li) will have dropped to just below 1.4. However, at the critical boundary of V = 16.5, one sees the distinct transition to the Li-dip, bottoming out at an upper limit of A(Li) = 2.0. In the similar anticenter cluster, NGC 2243, the center of the main sequence Li-dip is located at ∼ 1.15 ± 0.02 , with the transition to undepleted Li abundances on the high mass end, the “wall”, appearing over the range between 1.21 and 1.24 . Extrapolating these masses for a slightly more metal-rich cluster (Δ[Fe/H] = +0.15 dex) implies adding 0.06 to these feature masses based upon an [Fe/H]-based slope of 0.4 <cit.>. The expectation is that the wall in NGC 2204 should be positioned between 1.27 and 1.30 . Using the VR isochrones shown in Figure 4, the mass associated with each V magnitude at the MSTO is plotted along the central vertical axis of Figure 8. While the number of stars defining the edge is limited in comparison with the comparable plot for NGC 2243, it is apparent that the transition between V = 16.5 and 16.6 is consistent with the predicted mass range and clearly higher than the mass identified for the more metal-deficient NGC 2243. The expected low-mass limit of the Li-dip should occur near 1.15 , fainter than the magnitude limit of the current survey, as confirmed by Figure 8. §.§ Li Evolution: NGC 2204 and the Li Cluster Pattern The Li trends with mass and evolutionary state, as laid out in Figures 7 and 8, are in qualitative agreement with expectations as delineated in past analyses of clusters of similar age and/or metallicity. Stars above the Li wall at the MSTO have a modest range in abundance but generally follow a distribution maxed out between A(Li) = 2.9 and 3.4. 
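Returning to the rotation estimate for WOCS4009 discussed above: the equatorial velocity follows from nothing more than the circumference over the period, as the short check below shows for the ~12 R_⊙ radius and the 6.18-day ASAS-SN period.

    import numpy as np

    R_SUN_KM  = 6.957e5              # solar radius [km]
    radius_km = 12.0 * R_SUN_KM      # radius suggested by the isochrones
    period_s  = 6.18 * 86400.0       # ASAS-SN photometric period

    v_eq = 2.0 * np.pi * radius_km / period_s
    print(f"equatorial velocity ~ {v_eq:.0f} km/s")   # ~98 km/s, i.e., of order 100 km/s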
Stars leaving the main sequence to the subgiant branch exhibit a steep decline in A(Li) with color, approaching a value typically below A(Li) = 1.4 at the base of the giant branch. Between this point and the luminosity level of the RGC, the FRG branch shows at most a minor decline of ∼0.1 dex in A(Li). Beyond this point, A(Li) drops at an increasing rate as one approaches the tip of the FRG branch. Beyond this evolutionary point, the RGC stars, with a few key exceptions, exhibit only upper limits to A(Li), limits which fall below the measured values for the FRG at the same luminosity, in theory due to the prior effects of the He-flash at the FRG branch tip. The role of metallicity in defining the mass range of the Li-dip has already been noted. However, it also clearly plays a key role in defining the distribution of stars among the varied evolutionary phases from the MSTO and beyond, thereby altering the relative distribution of stars with differing Li signatures. The fraction of FRG branch stars populating the CMD from the blue edge of the subgiant branch to the luminosity level of the RGC is controlled primarily by stellar mass. Given [Fe/H], stars above a certain mass leaving the main sequence have convective cores such that H-exhaustion occurs simultaneously within an isothermal He core; the He core mass grows with time due to H-burning in a shell outside the core. However, if the core mass fraction rises above approximately 10%, core collapse ensues and the star evolves rapidly in to reach the base of the vertical FRG branch, creating the Hertzsprung gap. As core contraction continues and the core temperature rises, He-ignition finally occurs under nondegenerate conditions. For stars of lower mass, the size of the convective zone within the core declines with decreasing mass until the core is totally radiative. Under these circumstances, the fractional mass of the totally He core region grows with time, but fails to reach the critical 10% boundary before becoming partially degenerate. The presence of degenerate gas supplies an alternative means of support which allows the star to evolve more slowly from the MSTO across the subgiant branch to the base of the vertical FRG branch. The burning H-shell slowly adds mass to the He-core, moving the star up the FRG branch until He ignites under degenerate conditions. The slower rate of evolution leads to well-populated subgiant and vertical FRG branches, as exemplified by the classic old disk cluster M67 (see, e.g. Figure 7 of <cit.>). It should be noted, however, that the transition to lower mass stars not only leads to better delineation of the subgiant branch, but also weakens the fractional population of the RGC and the RG branch (FRG and AGB combined) above the luminosity of the RGC. Compare, e.g., the distribution of RG stars for NGC 2204, as exemplified by Figure 4, with a similar plot for NGC 2243 at 3.6 Gyr (Figure 4 of <cit.>). Ignoring stars positioned well off the evolutionary tracks in the CMD, NGC 2204 exhibits approximately the same number (∼12) of giants above (FRG and AGB) as below the luminosity of the clump. By comparison, similar counts for NGC 2243 produce a ratio of 8 (23) to 1 (3) in favor of stars below the clump. As for the actual number of stars within the clump, the approximate number is 15 for NGC 2204 as opposed to 6 in NGC 2243. Equally relevant, for NGC 2204, all but two of the stars below the RGC and half those above it have detectable Li. One potential RGC star might have detectable Li. 
In NGC 2243, none of the RGC stars or red giants brighter than the RGC have detectable Li, while only half of those below the RGC do. Clearly the greater age of NGC 2243 dramatically alters the evolutionary distribution of the evolved stars and their Li distribution. However, age must not be the only impactful factor, as illustrated by a direct comparison of NGC 2204 with NGC 2506, a cluster of apparently identical age (1.85 Gyr) but slightly higher [Fe/H] (-0.27) <cit.>. As shown in Figure 5 of <cit.>, the single-star FRG branch ((B-V) above 0.6) of NGC 2506 contains 31 stars to the luminosity level of the RGC, all but 7 of which have measurable Li. Only 3 stars sit above the level of the clump, two of which lie well to the blue of the expected FRG branch and have only upper limits for A(Li). Approximately two dozen stars populate the RGC; only one has detectable Li. These numbers and distributions should be contrasted with the distinctly different character laid out above for NGC 2204. In fact, NGC 2204 morphologically leans toward the younger (1.5 Gyr) but more metal-rich ([Fe/H] ∼ -0.1) cluster NGC 7789 <cit.>. This richly populated cluster has only 6 single FRG stars below the level of the clump compared to 21 FRG and AGB stars above it and a dominant 56 single stars within the RGC. The fact that NGC 2204 looks morphologically younger than NGC 2506, a cluster of the same age but higher metallicity, is consistent with the pattern defined by the main sequence Li-dip. As discussed earlier, the masses bracketing the main sequence Li-dip decline by 0.04 M_⊙ for each drop of 0.1 dex in [Fe/H]. The impact of this shift to lower mass is that more metal-deficient clusters exhibit the same qualitative trend with Li as metal-rich clusters, just delayed in time. For example, while NGC 2243 <cit.> and M67 <cit.> have almost the same age, the stars leaving the main sequence in the former cluster come from the hot side of the Li wall, while the stars populating the subgiant and giant branches in M67 came from well inside the Li-dip <cit.>. NGC 2243 will not reach the Li-dip/subgiant morphology of M67 for at least 0.5 Gyr. Since the metallicity differential between NGC 2506 and NGC 2204 is less than one-fourth that of M67/NGC 2243, one expects a much smaller morphological age differential when comparing them. Therefore, if the structural parameters that generate the metallicity dependence of the mass limits for the Li-dip are reflections of a more fundamental role of metallicity/mass in controlling the post-main-sequence luminosity function among first- and second-ascent red giants, NGC 2204 should morphologically resemble a younger NGC 2506 at the same CMD-based isochronal age. For completeness, it should be noted that one can simply make NGC 2204 younger by adopting a larger reddening of E(B-V) = 0.09. This makes the turnoff bluer, leading to higher stellar temperatures and a higher [Fe/H] by +0.03 dex. The revised age becomes 1.7 Gyr, though the quality of the fit to the isochrones is noticeably worse. To close the cluster comparison, we turn to the issue of MSTO rotation speeds. As demonstrated initially for four clusters in Figure 11 of <cit.> and expanded to include NGC 2243 in Figure 11 of <cit.>, the distribution of rotation speeds for stars above the wall is a strong function of age and, like the Li-dip itself, metallicity. For a younger cluster of near-solar metallicity like NGC 7789, the v sin i distribution extends from an observational minimum near 20 km/s to measurable limits approaching 100 km/s. 
For some stars with even higher assumed the lines are too extended to allow plausible measurement. NGC 3680 <cit.>, with a metallicity comparable to NGC 7789 but an estimated age of 1.75 Gyr, displays a much narrower range of with all stars located below 50 . By the 2.25 Gyr age of NGC 6819 and the 3.6 Gyr age of the very metal-deficient NGC 2243, all stars but one in each cluster have below 25 . By contrast, NGC 2506, with an age slightly greater than NGC 3680 but more metal-deficient, displays a spread that is dominated by stars between 25 and 65 , but reaches to almost 100 , much more similar to NGC 7789. Completing the pattern, Figure 8 shows that more than half the stars above V = 16.5 (open circles) have no measured Li limit or detection. The stars that are plotted in the right side of Figure 8 have that ranges from the system limit of ∼20 km/s to just under 50 (Table 5). The inability to measure for the open circles of Figure 8 stems from the same problem already noted for a subsample of stars in NGC 7789, the typical line is spread over such a wide range in wavelength that a plausible measure of its true width becomes debatable. While some of the stars without measures may be unresolved binaries composed of equal mass stars, especially given the almost vertical nature of the MSTO, the fraction of open circles is too large to be explained this way, especially given the analysis of similar samples in other clusters. Continuing the pattern laid out above for the post-main-sequence morphology, we conclude that the distribution for NGC 2204 is similar to that for NGC 2506 but, because of its lower metallicity at the same age, is more heavily weighted toward above 50 , leaning toward a distribution more comparable to NGC 7789. Because of the lower [Fe/H], the weaker lines may become less discernible at above 50 . As a crude test of this hypothesis use was made of the one line that remains strong, sometimes too strong, for the hot stars populating the vertical turnoff on NGC 2204: Hα. Using the interactive routine, splot, within IRAF, a Lorentzian profile was matched to the Hα line in each spectrum. For the stars with measurable in Table 5, the average full-width-half-maximum (FWHM) of the line was determined to be 4.85 ± 0.40 Å. For the stars where the line strengths were deemed too shallow to supply a plausible A(Li) measure, the comparable FWHM is 5.93 ± 0.97 Å . For the extreme cases where no metal lines can be readily measured, FWHM becomes 6.62 ± 0.65 Å. §.§ NGC 2204 and Galactic Li Evolution The metallicity of a cluster clearly impacts the evolution of Li within the cluster as a function of mass, particularly at intermediate and lower masses. Equally relevant is the manner in which cluster metallicity is tied to the initial Li abundance of the cluster. It has been recognized for decades that with a primordial A(Li) near 2.7 and a solar value at 3.3 <cit.>, A(Li) must increase with time via contributions from stellar sources. Since the mean Galactic [Fe/H] has increased from almost 0 to solar and higher over the same interval, it is generally assumed that A(Li) and [Fe/H] should be roughly correlated, a pattern confirmed by a variety of cluster and field studies (see e.g. <cit.> and the discussion in <cit.>. How does NGC 2204 fit within the metallicity trend? For NGC 2506 ([Fe/H] = -0.27), the mean A(Li) among the stars at the vertical turnoff is 3.05 ± 0.02 (sem) from 71 stars. 
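The Hα width test described above is easy to reproduce with a standard profile fit; the sketch below fits a Lorentzian to a synthetic Hα line (splot was used for the actual measurements) and reports the FWHM = 2γ of the fit.

    import numpy as np
    from scipy.optimize import curve_fit

    def lorentzian(wave, depth, center, gamma):
        """Continuum-normalized absorption line with a Lorentzian profile."""
        return 1.0 - depth * gamma**2 / ((wave - center)**2 + gamma**2)

    # Synthetic Halpha profile with FWHM ~ 5 A plus noise.
    wave = np.arange(6540.0, 6585.0, 0.2)
    flux = lorentzian(wave, 0.55, 6562.8, 2.5) + np.random.normal(0.0, 0.01, wave.size)

    popt, _ = curve_fit(lorentzian, wave, flux, p0=[0.5, 6562.8, 2.0])
    print(f"FWHM = {2.0 * abs(popt[2]):.2f} A")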
While some stars do scatter above the solar system value of 3.3, it is clear that the stars on the hot side of the Li-dip are systematically lower in the mean than expected for solar metallicity <cit.>. At the other end of the metallicity scale, NGC 2243 ([Fe/H] = -0.54) has a much smaller range in V for the stars on the hot side of the Li-dip, consistent with its older age. However, among this limited sample, only one star has A(Li) greater than 3.0 and the range for stars above the Li-dip extends to A(Li) ∼ 2.0 <cit.>. Clearly NGC 2243 has a lower mean A(Li) than NGC 2506. For NGC 2204 ([Fe/H] = -0.40), the mean A(Li) for stars on the hot side of the Li-dip is 2.83 ± 0.03 (sem) from 15 stars. Keeping in mind that the sample size is modest and stars with significant rotation could not be evaluated for Li, it is still the case that no star at the turnoff of NGC 2204 outside the Li-dip has A(Li) above 3.0, and the full A(Li) range extends from 3.0 to 2.7, significantly smaller than found in NGC 2506 at the same age. § SUMMARY The focus of the current investigation has been the metal-deficient, moderately old open cluster NGC 2204. High dispersion spectroscopy of 167 stars, selected primarily to map the evolutionary change in Li from the MSTO to the FRG branch, has provided unusual insight into the broader question of how stellar metallicity, mass, and age can impact the observed distribution of A(Li) among evolving stars within the Galactic disk. After eliminating astrometric and radial velocity nonmembers, including unfortunately the majority of stars positioned between the MSTO and the FRG branch at the level of the clump, as well as probable binaries, EW analysis of hundreds of Fe lines in 45 evolved stars with narrow line profiles generates a mean metallicity of [Fe/H] = -0.40 ± 0.12 (sd). Given the number of stars/lines included in the cluster average, the dominant uncertainty in the abundance remains the zero-point of the absolute [Fe/H] scale through the uncertainty in E(B-V) and ultimately the scale, though there is universal agreement that the reddening in the direction of NGC 2204 is low, consistent with the adopted value of = 0.07. The derived cluster abundance places NGC 2204 almost exactly midway between the more metal-rich NGC 2506 <cit.> and the more metal-poor NGC 2243 <cit.>. The complications that make the MSTO abundance analyses significantly more challenging for a cluster like NGC 2204 are two-fold: (a) at lower [Fe/H], stars at a given generally have weaker metallic lines and (b) stars of lower [Fe/H] at a given age near the MSTO are generally hotter. Added to these issues is the unexpected discovery that a significant fraction of the stars at the MSTO for NGC 2204 exhibit higher than expected rotation speeds, in many cases making any attempt at line measurement impossible. With the basic stellar parameters in hand, spectrum synthesis was carried out for all cluster members where line broadening due to rotation was small enough to allow adequate evaluation of the impact of varying the Li line strength. This proved feasible for all the post-MSTO stars, but allowed A(Li) estimation for less than half (24 of 56) of the stars at or below the MSTO. For the red giants, the A(Li) pattern is familiar, but striking. At the base of the vertical FRG branch, A(Li) falls just below the canonically predicted Li-rich boundary for giants at 1.4. As stars evolve up the FRG branch, measurable A(Li) drops steadily to a limiting value below -0.4 for the star at the tip of the FRG branch. 
Two stars lie just beyond this tip limit, but the cool temperatures and the spectroscopic complexity make model synthesis almost impossible. At best, one can say that these stars appear to be more Li-deficient than the hotter stars, but just how deficient is uncertain. With one exception, the stars within the RGC exhibit upper limits to A(Li) which are lower than the detectable values for the FRG branch stars at the same luminosity as the clump. If these stars evolve from the population at the red giant tip, the true upper limits for the RGC stars should be A(Li) = -0.4 or lower, a limit which is simply beyond the capacity of our spectra for stars at the temperature of the clump. Perhaps the most striking feature of the A(Li) CMD distribution among the evolved stars is the separation of stars above the clump into two distinct categories of detection versus upper limit, where the stars with upper limits all lie brighter/bluer than the stars with detections. With 2 exceptions that sit well above the FRG branch, the separation in color/luminosity is small, but the photometric precision is high enough to accept this separation as real. The obvious explanation for this bluer band is that these stars represent post-RGC stars evolving up the giant branch a second time, i.e. they are AGB stars. The spectroscopic analysis of one of these stars by <cit.> supports its AGB status. The giant population of NGC 2204 also contains an example of a persistent anomaly appearing consistently among older clusters with a sufficient population of stars: giants with rapid rotation and/or anomalous Li for their supposed position within the CMD (see <cit.> for recent comprehensive discussions of this phenomenon). Since the spectrographic line signature makes the measurement of v sin i below ∼15 km/s virtually impossible, it is expected that the mean v sin i for all giants should scatter close to this value. As detailed in Section 2.2, it does. However, one star, WOCS4009 (see Figure 3), clearly has anomalously broadened lines compared to the typical effect found in most giants, implying a speed near 35 km/s. Evolved stars with speeds above 30 km/s are rare, consistently making up less than 1% of the K giants found in the field <cit.>. The challenges to understanding these stars are multiple. First, if a star is a rapid rotator as a giant, did it start out that way on the main sequence and simply fail to spin down along the path of normal post-main-sequence evolution, or was there a physical mechanism which caused a normal giant to spin up? In both cases, the common solution remains interaction/mass transfer with a binary companion (see, e.g., the discussion of short-period, tidally-locked binaries in <cit.>). For NGC 2204, unlike NGC 2243, the distribution of stars at the MSTO appears dominated by stars undergoing rapid rotation, thereby providing a ready source of potential candidates for a rapid rotator progenitor. Additionally, NGC 2204 has a modest supply of blue stragglers, commonly accepted as the products of mass transfer within a binary. Second, the evaluation of these stars is complicated by the fact that anomalous evolution likely positions them within the CMD in locations inconsistent with their actual internal state when compared to isochrones for normal single stars. Star W7017 in NGC 6819 remains a prime example <cit.>. The lack of clarity regarding evolutionary phase clouds its classification as a Li-rich giant, since one must first define rich relative to what. If WOCS4009 is a FRG, its A(Li) is consistent with the other stars in this evolutionary phase. 
If it is in the RGC, it is clearly Li-rich. This categorization is further clouded by the fact that, as a metal-deficient cluster, the primordial A(Li) for NGC 2204 appears to be closer to 2.85 than the solar metallicity reference value of 3.3. If Li is reduced by the same factor predicted for solar metallicity stars, the initial value at the base of the FRG branch should be 1.0 or less, rather than 1.4. To close, the fundamental value of NGC 2204 remains its ability to illuminate the roles metallicity and age play within the evolution of Li for stars evolving on and after the main sequence. The trends defined and/or enhanced by past cluster studies within this series have included the shift in the position of the Li-dip with decreasing mass as [Fe/H] declines, i.e. the mass boundary of the Li wall, the narrowing of the spread in among stars on the hot/high mass side of the Li-wall as a cluster ages, and the correlated spread in A(Li) for the same hot/high mass stars while still on the main sequence and prior to entering the subgiant branch, again emphasizing that A(Li) does not remain constant for stars above the Li wall. With the addition of NGC 2204 to the cluster mix, a direct comparison is possible to a cluster of identical age (1.85 Gyr) but higher metallicity, NGC 2506. The contrast between the two, supposedly dominated by the difference in metallicity, reveals that the stars on the hot side of the wall are more heavily weighted toward rapid rotators than the those in NGC 2506 and the post-MSTO luminosity function shows a distribution of giants more heavily weighted toward FRG branch stars above the RGC than below, with a richer population of RGC stars, while NGC 2506 has almost no stars above the level of the RGC and the majority of this small sample has no detectable Li, possibly indicating AGB status. From the previously identified patterns, NGC 2204 more closely resembles what one expects for a younger NGC 2506, i.e. closer in appearance to the more metal-rich ([Fe/H] ∼ -0.1) but younger (1.5 Gyr) cluster NGC 7789. This trend is analogous to what is seen in the comparison between two clusters of much older, but similar, age (∼3.6 Gyr), NGC 2243 ([Fe/H] = -0.54) and M67 ([Fe/H] = +0.03) <cit.>. Stars leaving the main sequence in M67 are emerging from the cool side within the Li-dip while the stars populating the subgiant branch in NGC 2243 are still originating above the wall of the Li-dip. Thus, metallicity appears to have a broader and more nuanced impact on the distribution of stellar properties on the main sequence and beyond, over and above the simple one of setting the mass boundaries for the Li-dip, an effect which is generally downplayed or ignored in the analysis of field star distributions. NSF support for this project was provided to BJAT and BAT through NSF grant AST-1211621, and to CPD through NSF grants AST-1211699 and AST-1909456. Extensive use was made of the WEBDA database maintained by E. Paunzen at the University of Vienna, Austria (http://www.univie.ac.at/webda). This research has made use of the ESO Science Archive Facility and refers to observations collected at the European Southern Observatory under ESO programme 188.B-3002(V). Dr. Donald Lee-Brown assisted with observations and processing. The authors are grateful for the always excellent support provided by the WIYN telescope staff that made this research possible. We are also appreciative of the helpful and specific comments made by the referee which have strengthened the paper. 
WIYN: 3.5m IRAF <cit.>, MOOG <cit.>, LACOSMIC <cit.>, ROBOSPECT <cit.>, TOPCAT citetTOPC [Alonso et al. (1999)]al99 Alonso, A., Arribas, S., & Martinez-Roger, C. 1999, , 140, 261 [Andrae et al. (2023)]andr Andrae, R., Fouesneau, M., Sordo, R., et al. 2023, , 674,27 (GSP-phot) [Anthony-Twarog et al. (2018a)]AT18a Anthony-Twarog, B. J., Deliyannis, C. P., Harmer, D., et al. 2018, , 156, 37 [Anthony-Twarog et al. (2013)]AT13 Anthony-Twarog, B. J., Deliyannis, C. P., Rich, E., & Twarog, B. A. 2013, , 767, L19 [Anthony-Twarog et al. (2016)]AT16 Anthony-Twarog, B. J., Deliyannis, C. P., & Twarog, B. A. 2016, , 152, 192 [Anthony-Twarog et al. (2020)]AT20 Anthony-Twarog, B. J., Deliyannis, C. P., & Twarog, B. A. 2020, , 160, 75 [Anthony-Twarog et al. (2021)]AT21 Anthony-Twarog, B. J., Deliyannis, C. P., & Twarog, B. A. 2021, , 161, 159 [Anthony-Twarog et al. (2009)]AT09 Anthony-Twarog, B. J., Deliyannis, C. P., Twarog, B. A., Croxall, K. V., & Cummings, J. D. 2009, , 138, 171 [Anthony-Twarog et al. (2018b)]AT18b Anthony-Twarog, B. J., Lee-Brown, D. B., Deliyannis, C. P., & Twarog, B. A. 2018, , 155, 138 [Anthony-Twarog et al. (1979)]AT79 Anthony-Twarog, B. J., Twarog, B. A., & McClure, R. D. 1979, , 233, 188 [Asplund et al. (2009)]AS09 Asplund, M., Grevasse, N., Sauval, A. J., & Scott, P. 2009, , 47, 481 [Bertelli et al. (1994)]Bertelli94 Bertelli G., Bressan, A., Chiosi, C, Fagotto, F., & Nasi, E. 1994, , 106, 275 [Boesgaard et al. (2020)]BO20 Boesgaard, A. M., Lum, M. G., & Deliyannis, C. P. 2020, , 888, 28 [Bruntt et al. (2012)]Br12 Bruntt, H., Basu, S., Smalley, B., et al. 2012, , 423, 122 [Brunker et al. (2013)]BR13 Brunker, S. W., Anthony-Twarog, B. J., Deliyannis, C. P., & Twarog, B. A. 2013, BAAS 221, 250.28 [Brunker et al. (2024)]BR24 Brunker, S. W., Sun, Q., Deliyannis, C. P., Anthony-Twarog, B. A., & Twarog, B. A. 2024, in prep. [Boesgaard & Tripicco (1986)]BO86 Boesgaard, A. M., & Tripicco, M. 1986, , 302, L49 [Cantat-Gaudin et al. (2018)]CG18 Cantat-Gaudin, T., Jordi, C., Vallenari, A., et al. 2018, , 618, 93 (CG18) [Cantat-Gaudin et al. (2020)]CG20 Cantat-Gaudin, T., Anders, F., Castro-Ginard, C., et al. 2020, , 640, A1 (CG20) [Carlberg et al. (2016)]CA16 Carlberg, J. K., Cunha, K., & Smith, V. V. 2016, , 827, 129 [Carlberg et al. (2011)]CA11 Carlberg, J. K., Majewski, S. R., Patterson, R. J., et al. 2011, , 732, 39 [Carlberg et al. (2015)]CA15 Carlberg, J. K., Smith, V. V., Cunha, K., et al. 2015, , 802, 7 [Cayrel de Strobel (1988)]CA88 Cayrel de Strobel, G. 1988, in IAU Symposium 132, The Impact of Very High S/N Spectroscopy on Stellar Physics, ed. G. Cayrel de Strobel & M. Spite (Dordrecht: Kluwer), 345 [Christian et al. (1985)]CH85 Christian, C. A., Heasley, J. N., & Janes, K. A. 1985, , 299, 683 [Christian & Janes (1979)]CH79 Christian, C. A., & Janes, K. A. 1979, , 84, 204 [Cummings et al. (2012)]CU12 Cummings, J. D., Deliyannis, C. P., Anthony-Twarog, B. J., Twarog, B. A., & Maderak, R. M. 2012, , 144, 137 [Cummings et al. (2017)]CU17 Cummings, J. D., Deliyannis, C. P., Maderak, R. M., & Steinhauer, A. 2017, , 153, 128 [Dawson (1981)]D81 Dawson, D. W. 1981, , 86, 237 [Deliyannis et al. (2019)]DE19 Deliyannis, C. P., Anthony-Twarog, B. J., Lee-Brown, D. B., & Twarog, B. A. 2019, , 158, 163 [Deliyannis et al. (1993)]DP93 Deliyannis, C. P., Pinsonneault, M. H., & Duncan, D. K. 1993, , 414, 740 [Deliyannis et al. (2002)]DE02 Deliyannis, C. P., Steinhauer, A., & Jeffries, R. D. 2002, , 577, L39 [Demarque et al. (2004)]DE04 Demarque, P., Woo, J.-H., Kim, Y.-C., & Yi, S. K. 
2004, , 155, 667 (Y^2) [Dias et al. (2014)]DI14 Dias, W. S., Monteiro, H., Caetano, T. C., et al. 2014, , 564, A79 [Friel et al. (2002)]FR02 Friel, E. D., Janes, K. A., Tavarez, M., et al. 2002, , 124, 2693 [Gaia Collaboration et al. (2016)]GA16 Gaia Collaboration, Brown, A. G. A., Vallenari, A., et al. 2016, , 595, A2 [Gaia Collaboration et al. (2018)]GA18 Gaia Collaboration, Brown. A. G. A., Vallenari, A., et al. 2018, , 616, A1 (DR2) [Gaia Collaboration et al. (2023)]GA22 Gaia Collaboration, Vallenari, A., Brown. A. G. A., et al. 2023, , 674, A1 [Montegriffo et al. (2023)]gspc Gaia Collaboration: Montegriffo, P., Bellazzini, M., De Angeli, F., et al. 2022, , 674, 33 [Gao et al. (2020)]GA20 Gao, Q., Lind, L., Amarsi, A. M. et al. 2020, , 497, L30 [Hawarden (1975a)]H75a Hawarden, T. G. 1975a, , 173, 231 [Hawarden (1975b)]H75b Hawarden, T. G. 1975b, , 173, 801 [Hawarden (1976a)]H76a Hawarden, T. G. 1976a, , 174, 225 [Hawarden (1976b)]H76b Hawarden, T. G. 1976b, , 174, 471 [Houdashelt et al. (1992)]HFC Houdashelt, M. L., Frogel, J. A. & Cohen, J. G. 1992, , 103, 163 [Huang et al. (2015)]H15 Huang, Y., Liu, X. W., Yuan, H. B., Xiang, M. S., & Chen, B. Q. 2015, , 454, 2863 [Jacobson et al. (2011)]JA11 Jacobson, H. R., Friel, E. D., & Pilachowski, C. A. 2011, , 141, 58 [Janes (1979)]JA79 Janes, K. A. 1979, , 39, 135 [Kassis et al. (1997)]K97 Kassis, M., Janes, K. A., Friel, E. D. & Phelps, R. L. 1997,, 113,1723 [Kurucz (1995)]KU95 Kurucz, R. L. 1995, in IAU Symp. 149, The Stellar Populations of Galaxies, ed. B. Barbuy & A. Renzini (Dordrecht: Kluwer), 225 [Lee-Brown et al. (2015)]LB15 Lee-Brown, D. B., Anthony-Twarog, B. J., Deliyannis, C. P., Rich, E., & Twarog, B. A. 2015, , 149, 121 [McClure et al. (1974)]MC74 McClure, R. D., Forrester, W. T., & Gibson, J. 1974, , 189, 409 [McClure et al. (1981)]MC81 McClure, R. D., Twarog, B. A., & Forrester, W. T. 1981, , 243, 841 [Mermilliod & Mayor (2007)]ME07 Mermilliod, J. C. & Mayor, M. 2007, , 470, 919 [Ramírez & Meléndez (2005)]RA05 Ramírez, I., & Meléndez, J. 2005, , 626, 465 [Randich et al. (2020)]RA20 Randich, S., Pasquini, L., Franciosini, L., et al. 2020, , 640, L1 [Rozyczka et al. (2007)]kal Rozyczka, M., Kaluzny, J., Krzemiński, W., Mazur, B. 2007 Acta Astronomica, 57, 323 [Schlafly & Finkbeiner (2011)]SF11 Schlafly, E. F., & Finkbeiner, D. P. 2011 , 737 103 [Schlegel et al. (1998)]SFD Schlegel, D. J., Finkbeiner, D. P., & Davis, M. 1998, , 500, 525 [Shappee et al. (2014)]SH14 Shappee, B. J., Prieto, J. L., Grupe, D., et al. 2014, , 788, 48 [Sneden (1973)]SN73 Sneden, C. 1973, , 184, 839 [Spite et al. (2012)]SP12 Spite, M., Spite, F., & Bonifacio, P. 2012, in Li in the Cosmos, , Suppl., 22, 9 [Steinhauer (2003)]ST03 Steinhauer, A. 2003, PhD thesis, Indiana University [Steinhauer at al. (2024)]con Steinhauer, A., Deliyannis,C. P., Ornelas, D., et al. 2024, in preparation (SD24) [Sun et al. (2022)]SU22 Sun, Q., Deliyannis, C. P., Twarog, B. A., et al. 2022, , 513, 5387 [Tayar et al. (2015)]TA15 Tayar, J., Ceillier, T., García-Hernández, D. A., et al. 2015, , 807, 82 [Taylor (2005)]TOPC Taylor, M. B. (2005), Publications of the Astronomical Society of the Pacific, 347, 297, ed. P. Shopbell, M. Britton, R. Ebert [Tody (1986)]TODY Tody, D. 1986, , 627, 733 [Twarog et al. (2023)]TW23 Twarog, B. A., Anthony-Twarog, B. J., & Deliyannis, C. P. 2023, , 165, 105 [Twarog et al. (2020)]TW20 Twarog, B. A., Anthony-Twarog, B. J., Deliyannis, C. P., & Steinhauer, A. 2020, in Li in the Universe: to Be or not to Be, , 91, 74 [Twarog et al. 
(1997)]TAAT Twarog, B. A., Ashman, K. M., Anthony-Twarog, B. J. 1997, , 114, 2556 [VandenBerg et al. (2006)]VR06 VandenBerg, D. A., Bergbusch, P. A., & Dowler, P. D. 2006, , 162, 375 [van Dokkum (2001)]VD01 van Dokkum, P. G. 2001, , 113, 1420 [Waters & Hollek (2013)]WH13 Waters, C. Z., & Hollek, J. K. 2013, , 125, 1164
http://arxiv.org/abs/2406.18720v1
20240626193845
Plasmonic effects in rod-like metal-dielectric nanoparticles
[ "Ya. V. Karandas" ]
physics.optics
[ "physics.optics" ]
Orbital magnetic field driven metal-insulator transition in strongly correlated electron systems Anton Markov July 1, 2024 ================================================================================================ § ABSTRACT The optical properties of rod-like two-layer nanoparticles are studied using the notions of equivalent prolate spheroid. The calculations are presented for frequency dependencies for polarizability and the absorption and scattering cross-section of prolate spheroids, cylinders, and spherocylinders. The effect of the sizes, the shapes of the nanoparticle and the material of the core and the shell on the location of the maxima of the imaginary part of polarizability and absorption and scattering cross-sections is analysed. The recommendations regarding the shape and size ratio of the nanoparticles for obtaining the maximum value of radiation efficiency are formulated. § INTRODUCTION Recently, the study of plasmonic effects has become a booming field in nanophysics and nanotechnologies. In particular, the processes of interaction between light and metallic nanoparticles of different shapes are under an intensive study <cit.>. It is known that metallic nanoparticles significantly amplify the local electromagnetic fields near their surfaces at the surface plasmonic resonance (SPR) frequencies, which are determined by their morphology, geometry and dielectric permittivity of the environment. The plasmonic properties of the nanoparticles have potential applications in nanophotonics, biophotonics and biomedicine <cit.>, scanning near-field optical microscopy <cit.>, optical sensing and spectroscopic measurements <cit.>. Despite the fact that biomedical applications of the plasmonic effects start from the use of the spherical nanoparticles, today in this field there is observed an increasing use of anisotropic nanoparticles, such as metallic spheroids and rods <cit.>. Indeed, while spherical nanoparticles have one absorption peak, associated with localized surface plasmonic resonance (SPR), the localization of electrons along two dimensions in nanorods results in the emergence of two absorption peaks. One of these peaks is caused by the transverse SPR, and the other one - by the longitudinal SPR (figure <ref>). As a rule, the longitudinal SPR band is located in the lower frequency range, which corresponds to the first or the second biological transparency window <cit.>, and it can be easily adjusted by adjusting an aspect ratio of the rod (the ratio of the radius to the length). As a result, the nanorods become prospective candidates for the use in biosensorics. One of the key spectral characteristics of SPR is its width, because it characterizes an enhancement of the optical response and local field inside and outside the particle. The width of SPR is determined by such dissipation mechanisms as volumetric and surface scattering and also radiation attenuation <cit.>. The radiation attenuation is caused by the losses due to over-radiation of light and it becomes insignificant for small metallic nanoparticles, which results in the narrowing of SPR spectrum to the limit set by the volumetric and surface scattering <cit.>. They reflect Landau attenuation, that is decay of the charge collective oscillations into individual intrazone and interzone electron-hole pairs, and, hence, they are related to the intrinsic electronic properties of nanoparticles <cit.>. 
An interzone attenuation plays an important role when the localized SPR overlaps the interband transitions (figure <ref>), which results in a large widening of SPR, as in gold or copper nanospheres. When resonance is located away from the interband junction, as in silver nanospheres and gold nanorods for longitudinal SPR, it is much narrower <cit.>, which is of great interest for the applications. At the same time, the resonance width is determined by the electron relaxation rate, including the volumetric mechanisms (in particular, electron-electron and electron-photon scattering, and the scattering on defects) and the scattering on the surface, associated with the finite dimensions of the system <cit.>. The contribution of the last mechanism increases with a decrease of the particle size. Thus, the relaxation processes can be described in the form of a scheme, shown in figure <ref>, where electron population of Fermi level is shown in gray, hot electrons are represented by red regions above Fermi level ε_F, and the distributions of the hot holes are represented by a blue region under ε_F: * the formation of electron-hole pairs in Au can be intrazone or interzone formation (figure <ref>a); * after the excitation of SPR, the athermal distribution of electron-hole pairs relaxes during t∼1-100 fs due to the emission of photons (radiologically) see figure <ref>b or due to Landau attenuation and formation of hot carriers (radiation-free) see figure <ref>c; * hot carriers redistribute its energy due to electron-electron scattering on a time scale from 100 fs to 1 ps (figure <ref>d); * then, the heat transfers inside the nanoparticle due to electron-photon scattering on a time scale of a few picoseconds and from the nanoparticle to the environment on the time scale of a few picoseconds to a few nanoseconds (figure <ref>e). To date, the most fully studied model is the model of spherical nanoparticle <cit.>, but the model of spheroidal particle is more informative for the study of an influence of the nanoparticle shape on its optical properties. This is due to the fact that the above mentioned shape makes it possible to simulate the large class of particles of real shapes (from rod-like particles to disk-like particles), changing the ratio between the semi-axes of spheroid. At the same time, the change of the ratio between the semi-axes has a significant influence on the depolarization factors, which determine electromagnetic fields induced by the incident electromagnetic wave inside nanoparticle, and, correspondingly, on the electron scattering and optical absorption. A rigorous theory of attenuation in spheroidal metallic nanoparticles is possible only within the frameworks of using kinetic approach to the description of processes of electron scattering in the bulk and on the surface <cit.>. It is associated with the fact that the optical conductivity, which determines the half-width of the plasmonic resonances, becomes a tensor value in the case of spheroidal particles. Since the half-widths, in turn, determine the height of plasmonic peaks, then the shape of nanoparticles can have a significant effect on the frequency dependencies for light absorption and scattering cross-sections in spheroidal metallic nanoparticles <cit.>. The model of prolate spheroid is used under the study of the optical properties of the rod-like nanoobjects. This is due to the fact that there is an analytical solution for the boundary problem of electrostatics for a spheroidal particle. 
The experimental results <cit.> demonstrate that the position of the longitudinal SPR for cylinders, spherocylinders and elongated spheroids depends on the shape of the nanoparticle. Moreover, it has been found that the model of the equivalent elongated spheroid gives a better approximation to the experimental results than the model of the elongated spheroid. This fact gives the authors of the work <cit.> a right to state that the equalities of different axial moments of inertia for equivalent elongated spheroids and rod-like nanoparticles, which are under study, are indeed appropriate criteria for the comparison of rod-like nanostructures. Coating the surface of the nanoparticle with an ultrathin dielectric shell, for example, with silicon dioxide, is an effective way to improve the compatibility and stability of the structure without a significant change of their optical properties <cit.>. This has a great practical significance for a successful use of plasmonic nanoparticles either in sensing or under the studies, connected with the surface such as surface-enhanced Raman light scattering (SERS). The presence of the silicon dioxide shell on the metallic particle allows them to have a well-controlled interface with the environment, in order to avoid the false widening effects <cit.>, which are observed, for example, for gold and silver nanospheres stabilized by surfactants <cit.>. Thus, the aim of this article is to study the frequency dependencies for polarizability and the absorption and scattering cross-sections of metal-dielectric rod-like nanoparticles in the frameworks of the equivalent prolate spheroid approach. § BASIC RELATIONS Let us consider the rod-like nanoparticles with a metallic core which have the shape of elongated spheroid, cylinder with the finite length and spherocylinder covered with the dielectric shell with the thickness t and permittivity ϵ_s, which are situated in the medium with the dielectric permittivity ϵ_m (figure <ref>). The starting point for the study are the relations for the diagonal components of polarizability of a two-layer prolate spheroidal particle <cit.> α_@^( ∥) = Vϵ_@^( ∥) - ϵ_m/ϵ_m + ℒ_( ∥)^( 2 )( ϵ_@^( ∥) - ϵ_m), where V is the volume of particle; ϵ_@^( ∥) is the diagonal components of dielectric tensor ϵ_@^( ∥) = ϵ_s^( ∥) + ϵ_s^( ∥)β _c ( ϵ_c^( ∥) - ϵ_s^( ∥))/ϵ_s^( ∥) + ( ϵ_c^( ∥) - ϵ_s^( ∥))( ℒ_( ∥)^( 1 ) - β _cℒ_( ∥)^( 2 )), and ℒ_∥ ^( 1, 2) = 1 - e_p^( 1, 2) 2/2e_p^( 1, 2) 3( ln1 + e_p^( 1, 2)/1 - e_p^( 1, 2) - 2e_p^( 1, 2)), ℒ_ ^( 1, 2) = 1/2( 1 - ℒ_∥ ^( 1, 2)) are the depolarization factors of internal and external spheroid; e_p^( 1, 2) are the corresponding eccentricities; ϵ_s^ = ϵ_s^∥ = ϵ_s = const is the dielectric permittivity of the shell material; β _c = V_c/ . - V is the bulk content of metallic fraction (V_c is the volume of metallic core), and the diagonal components of dielectric tensor of metallic core are determined by the following relations in the frameworks of Drude model ϵ_c^( ∥) =ϵ^∞ - ω_p^2/ω( ω + iγ _eff^( ∥)). Here, ϵ^∞ is the component which describes the contribution of ion core; ω _p = ( e^2n_e/ϵ_0m^*)^1 / . 
- 2 is the plasma frequency, n_e is the concentration of conductivity electrons, m^* is the effective mass of electrons, ϵ_0 is the electric constant; γ _eff^( ∥) are the transverse (longitudinal) effective relaxation rate, which is determined for the nanoscale objects as γ _eff^( ∥) = γ _bulk + γ _s^( ∥) + γ _rad^( ∥), where γ _bulk and γ _s^( ∥) are the bulk and surface relaxation rates, and γ _rad^( ∥) is the radiation damping rate. The surface relaxation rate and the radiation damping rate for prolate spheroid are determined by the relations <cit.> γ _s^( ∥) = ℒ_( ∥)^( 1 )σ _( ∥)/ϵ_0[ ϵ_m + ℒ_( ∥)^( 1 )( 1 - ϵ_m)]; γ _rad^( ∥) = 2V/9ϵ_0( ω_p/c)^3ℒ_( ∥)^( 1 )σ _( ∥)/√(ϵ_m[ ϵ^∞ + ( 1/ℒ_( ∥)^( 1 ) - 1)ϵ_m]) , where c is the speed of light. Formulae (<ref>) and (<ref>) possess diagonal components of the conductivity tensor for a prolate spheroid, which can be determined with the help of the relations obtained in works <cit.>: σ_∥ = 9n_ee^2/2m^*ω( ω/ν _s, )^21/e_p^( 1 ) 3∫_ω/ν _s, ^ω/ν _s, ∥ x/x^4[ 1 - ( ω/ν _s, x)^2]^1/2{ 1 - 2/xsin x + 2/x^2( 1 - cos x ) }, σ _ = 9n_ee^2/4m^*ω( ω/ν _s, )^2e_p^( 1 ) 2 - 1/e_p^( 1 ) 3∫_ω/ν _s, ^ω/ν _s, ∥ x/x^41 - ( ω/ν _s, ∥x)^2/[ 1 - ( ω/ν _s, x)^2]^1/2{1 - 2/xsin x + 2/x^2( 1 - cos x)}, where ν _s, and ν _s, ∥ are the frequencies of individual oscillations of electrons along the axes of spheroid. Let us determine, in the frameworks of the equivalent prolate spheroid approach, the effective aspect ratios for two-layer elongated spheroid, which is equivalent to cylinder of the finite length and to spherocylinder. For this purpose, let us write down the expressions for the axial moments of inertia of these objects: I_x, i^ell = m_ell^( i )/5( a_i^2 + b_i^2); I_z, i^ell = 2m_ell^( i )/5b_i^2; I_x, i^cyl = m_cyl^( i )/12( 3r_i^2 + l_i^2); I_z, i^cyl = m_cyl^( i )/2r_i^2; I_x, i^sphcyl = r_i^5μ _i{δ _i/6( 3 + δ _i^2) + 4/3[ 83/320 + ( δ _i + 3/8)^2]}; I_z, i^sphcyl = r_i^5μ _i( δ _i + 8/15); δ _i = 1 - ϱ _i, i = 1, 2, where indices 1 and 2 are associated with metallic core and the entire particle, correspondingly; a_2 =a_1 + t, b_2 = b_1 + t are the major and minor semi-axes of spheroid; l_2 =l_1 + 2t is the length of cylinder, r_2 =r_1 + t is the radius of cylinder (spherocylinder); m_ell^( i ) and m_cyl^( i ) are the masses of ellipsoid and cylinder; μ _i is the density of spherocylinder. Using the relations I_x, i^ell/I_z, i^ell = I_x, i^cyl/I_z, i^cyl; I_x, i^ell/I_z, i^ell = I_x, i^sphcyl/I_z, i^sphcyl, and introducing the aspect ratios for prolate spheroid, cylinder and spherocylinder ϱ ^( i ) = b_i/a_i; ϱ ^( i ) = 2r_i/l_i; ϱ ^( i ) = 2r_i/l + 2r_i. Let us obtain the effective aspect ratios for spheroids which are equivalent to cylinder and spherocylinder: ϱ _eff^( i ) = √(3)/2ϱ ^( i ); ϱ _eff^( i ) = ( 1 + 4δ _i/3δ _i^2 + 2δ _i + 3/4/δ _i + 8/15)^ - 1/2. Calculating integrals (<ref>) and (<ref>) under the condition of neglecting the oscillating terms compared to the unit in braces, we obtain the expression for the diagonal components of conductivity tensor σ _( ∥) = 9/16ϵ_0( ω _p/ω)^2ν _s, ℱ_( ∥)( ϱ _eff^( 1 )), where ℱ_( ϱ_eff^( 1 )) = ( 1 - (ϱ_ eff^(1))^2 )^ - 3/2{ϱ _eff^( 1 )( 3/2 -(ϱ_ eff^(1))^2)√(1 - ϱ_eff^( 1 ) 2). + . 2 ( 3/4 - (ϱ_ eff^(1))^2)ln( /2 - arcsinϱ _eff^( 1 )) }; ℱ_∥( ϱ _eff^( 1 )) = ( 1 - (ϱ_ eff^(1))^2)^ - 3/2{/2 - arcsinϱ _eff^( 1 ) + ϱ _eff^( 1 )( 1 - 2(ϱ_ eff^(1))^2)√(1 - (ϱ_ eff^(1))^2)}. 
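To make the purely geometric ingredients of these relations concrete, the following Python sketch (illustrative only; the nanorod dimensions are placeholder values rather than parameters used in this paper) evaluates the longitudinal and transverse depolarization factors of a prolate spheroid and the effective aspect ratio of the spheroid equivalent to a finite cylinder; the spherocylinder case follows in the same way from its moment-of-inertia ratio.

import numpy as np

def depolarization_parallel(rho):
    # Longitudinal depolarization factor of a prolate spheroid with
    # aspect ratio rho = b/a (minor over major semi-axis), 0 < rho < 1.
    e = np.sqrt(1.0 - rho**2)                        # eccentricity
    return rho**2 / (2.0 * e**3) * (np.log((1.0 + e) / (1.0 - e)) - 2.0 * e)

def depolarization_perpendicular(rho):
    # Transverse factor from the sum rule L_par + 2 L_perp = 1.
    return 0.5 * (1.0 - depolarization_parallel(rho))

def equivalent_rho_cylinder(rho_cyl):
    # Aspect ratio of the prolate spheroid equivalent to a finite cylinder
    # (rho_cyl = 2r/l), from equating the ratios of the axial moments of inertia.
    return np.sqrt(3.0) / 2.0 * rho_cyl

# Example: a rod with placeholder radius 10 nm and length 60 nm.
rho_eff = equivalent_rho_cylinder(2.0 * 10.0 / 60.0)
L_par = depolarization_parallel(rho_eff)
L_perp = depolarization_perpendicular(rho_eff)
print(f"rho_eff = {rho_eff:.3f}, L_par = {L_par:.3f}, L_perp = {L_perp:.3f}")

In the limit ϱ → 1 both factors tend to 1/3, recovering the spherical case used in the limit transitions discussed below.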
Substituting formula (<ref>) into the relations (<ref>) and (<ref>), we obtain the general expressions for the surface relaxation rate and radiation damping rate γ _s^( ∥) = 9/16ℒ_( ∥)^( 1 )/ϵ_m + ℒ_( ∥)^( 1 )( 1 - ϵ_m)ν _s, ( ω _p/ω)^2ℱ_( ∥)( ϱ _eff^( 1 )); γ _rad^( ∥) = V/8ℒ_( ∥)^( 1 )/√(ϵ_m[ ϵ^∞ + ( 1/ℒ_( ∥)^( 1 ) - 1 )ϵ_m])ν _s, ( ω_p/c)^3( ω_p/ω)^2ℱ_( ∥)( ϱ _eff^( 1 )). Further, it is convenient to express depolarization factors in terms of ϱ _eff^( i ). Then, the first formula of (<ref>) takes the form: ℒ_∥ ^( i ) = 1/2ϱ _eff^( i )^2/( 1 - ϱ _eff^( i ) 2)^3 / . - 2( ln1 + √(1 - ϱ _eff^( i ) 2)/1 - √(1 - ϱ _eff^( i ) 2) - 2√(1 - ϱ _eff^( i ) 2)). It should be pointed out that the formulae for metal-dielectric sphere, given in <cit.>, can be obtained from the relations (<ref>)–(<ref>) by the limit transition e_p→ 0. Since lim_ϱ _eff^( 1 )→ 1ℒ_( ∥)^( i ) = 1/3, and lim_ϱ _eff^( 1 )→ 1ℱ_( ∥)( ϱ _eff^( 1 )) = 8/3, we obtain: γ _s = 𝒜( ω , R )v_F/R, γ _rad^( ∥) = 1/24V_0/√(ϵ_m( ϵ^∞ + 2ϵ_m))( ω _p/c)^3( ω _p/ω)^2v_F/R, where V_0 is the volume of metallic core; 𝒜( ω , R) = 1/4( ω _p/ω)^2, ν _s = v_F/ . - R, and v_F is the Fermi electron velocity. In the opposite case e_p→ 1 (ℒ_ ^( i ) = 1/2, ℒ_∥ ^( i ) = 0, and ℱ_ ( ϱ _eff^( 1 t) ) = 3/4), the formulae for metal-dielectric cylinder of the infinite length <cit.> follow from (2.17)–(2.19): γ _s^∥ = γ _rad^∥ = 0, γ _s^ = 27/128( ϵ_m + 1)( ω _p/ω)^2v_F/a, γ _rad^ = 3/128V_0/√(ϵ_m( ϵ^∞ + ϵ_m))( ω _p/c)^3( ω _p/ω)^2v_F/a, a is the radius of the base of the composite particle core, V_0 is the volume of the core. The absorption cross-section and scattering cross-section are determined by the relations C_@^abs = ω√(ϵ_m)/c( 2/3α _@^ + 1/3α _@^∥); C_@^sca = ω ^4ϵ_m^2/6c^4( 2/3| α _@^ |^2 + 1/3| α _@^∥|^2) and the radiation efficiency ξ _@^rad = C_@^abs/C_@^abs + C_@^sca. Subsequently the formulae (<ref>), (<ref>)–(<ref>) taking into account the relations (<ref>)–(<ref>), (<ref>), (<ref>), (<ref>)–(<ref>) are going to be used for obtaining the numerical results. § CALCULATION RESULTS AND THE DISCUSSION The calculations have been performed for two-layer axisymmetrical nanostructures (prolate spheroids, cylinders and spherocylinders) of different sizes situated in teflon (ϵ_m = 2.3). The parameters of metals of the core and shell are given in tables <ref> and <ref>. Figure <ref> shows the frequency dependencies for the imaginary parts of the longitudinal and transverse components of the polarizability tensor for rod-like nanoparticles Au@SiO_2 of the different shapes. The maxima α _@^( ∥)( ω) correspond to the frequencies of the transverse (longitudinal) surface plasmonic resonances ω_sp^(∥). The results of the calculations indicate that ω_sp^ > ω _sp^∥ for the nanoparticles of all considered shapes, and moreover, Δω _sp = ω _sp^ - ω _sp^∥ increases in the sequence of the shapes “cylinder → spherocylinder → prolate spheroid”. Let us point out that max{α _@^∥} > max{α_@^ } for the nanoparticles of all shapes which are under consideration. The comparison α_@^ ( ω) and α _@^∥( ω) for the same particles of the considered shapes is given in figure <ref>. The “blue” shift of the maxima α_@^∥ and the “red” shift of the maxima α_@^ take place in the sequence of the shapes “prolate spheroid → spherocylinder → cylinder”, although these shifts are rather insignificant. Figures <ref> and <ref> show the results of the calculations of the frequency dependencies for the absorption cross-section and scattering cross-section for the rod-like nanoparticles Au@SiO_2. 
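As an illustration of how such absorption and scattering spectra follow from the relations above, the Python sketch below traces the orientation-averaged spectral shapes (constant prefactors omitted) for a two-layer particle with a Drude core, a constant-permittivity shell, and fixed depolarization factors. All material and geometry numbers are placeholder, gold-like values rather than the parameters of the tables, and a constant relaxation rate stands in for the size- and shape-dependent γ_eff derived above.

import numpy as np

# Placeholder parameters (energies in eV): Drude core, SiO2-like shell, teflon-like host.
eps_inf, hw_p, hw_gamma = 9.8, 9.0, 0.07
eps_shell, eps_medium = 2.1, 2.3
L1_par, L2_par = 0.09, 0.11          # depolarization factors of inner and outer spheroid
beta_c, V = 0.7, 1.0                 # metal volume fraction and particle volume (arb. units)

def drude(hw):
    return eps_inf - hw_p**2 / (hw * (hw + 1j * hw_gamma))

def eps_two_layer(hw, L1, L2):
    # Equivalent permittivity of the core-shell particle along one axis.
    eps_c = drude(hw)
    return eps_shell + eps_shell * beta_c * (eps_c - eps_shell) / (
        eps_shell + (eps_c - eps_shell) * (L1 - beta_c * L2))

def polarizability(hw, L1, L2):
    e_at = eps_two_layer(hw, L1, L2)
    return V * (e_at - eps_medium) / (eps_medium + L2 * (e_at - eps_medium))

hw = np.linspace(0.8, 3.5, 600)      # photon energy grid
L1_perp, L2_perp = 0.5 * (1 - L1_par), 0.5 * (1 - L2_par)
a_par = polarizability(hw, L1_par, L2_par)
a_perp = polarizability(hw, L1_perp, L2_perp)

# Orientation-averaged cross sections up to constant prefactors.
C_abs = hw * np.sqrt(eps_medium) * (2/3 * a_perp.imag + 1/3 * a_par.imag)
C_sca = hw**4 * eps_medium**2 * (2/3 * np.abs(a_perp)**2 + 1/3 * np.abs(a_par)**2)
xi_rad = C_abs / (C_abs + C_sca)

print("longitudinal SPR near", round(float(hw[np.argmax(a_par.imag)]), 2), "eV")
print("transverse SPR near", round(float(hw[np.argmax(a_perp.imag)]), 2), "eV")

Even in this reduced form, the longitudinal resonance (small depolarization factor) falls at a lower photon energy than the transverse one, consistent with the splitting of the SPR maxima discussed here.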
It should be pointed out that two maxima of the absorption cross-section and scattering cross-section, which correspond to the longitudinal and transverse surface plasmonic resonances, are significant only for the case of the particle of spheroidal shape with the different major and minor semi-axes. For the case of cylinders and spherocylinders, the splitting of the maxima becomes noticeable for the nanoparticles whose longitudinal and transverse sizes are significantly different. This fact confirms the close location of the frequencies ω_sp^ and ω_sp^∥ for cylinders (see figure <ref>). The reason for such a splitting of SPR frequencies is a decrease of the aspect ratio at the same volume in the sequence of forms “cylinder → spherocylinder → prolate spheroid”. Figures <ref> and <ref> show the frequency dependencies for the absorption cross-section for the rod-like nanoparticles with the cores and shells of different materials. The results of the calculations show that the variation of metal of the core has a much stronger effect both on the splitting of the maxima C_@^abs and on their location in the spectrum for the nanoparticles of all shapes which are under consideration. In turn, the variation of the material of the dielectric shell results in only a slight shift of the maxima of the absorption cross-section. This is due to the fact that the frequencies of the longitudinal (transverse) SPR are significantly dependent on the plasma frequency (which is significantly different for different metals) and is practically independent of the dielectric permittivity of the shell. Figure <ref> shows the frequency dependencies for the radiation efficiency for the nanoparticles Ag@SiO_2 of different shapes and sizes. Let us point out that ξ _@^rad is significantly dependent on the sizes of the nanoparticle and is practically independent of its shape. Thus, for the nanoparticles of all shapes, the curves ξ _@^rad( ħω) are qualitatively similar and quantitatively close, and the radiation efficiency is maximum in the visible (optical) part of the spectrum for the particles with the sizes a_l = 100 nm, b_t = 50 nm (when the difference in these sizes is minimum). § CONCLUSIONS The relations for polarizability and for the absorption and scattering cross-sections of the rod-like metal-dielectric nanoparticles (cylinders of the finite length and spherocylinders) have been obtained in the frameworks of equivalent elongated spheroid approach. It is shown that the frequencies of transverse SPR are greater than the frequencies of longitudinal SPR for the particles of all considered shapes, and the splitting of the frequencies of SPR increases in the sequence “cylinder → spherocylinder → prolate spheroid”, hence, the aspect ratio decreases exactly in this order under the same volume of nanoparticles. Moreover, the maxima of the imaginary parts of the longitudinal component of polarizability tensor are greater than the maxima of the imaginary parts of the transverse component. It has been established that the frequency dependencies for the light absorption and scattering cross-sections have two maximums which correspond to longitudinal and transverse SPR. The splitting of resonances is noticeable for the particles of spheroidal shape of all sizes, while for the case of cylinders and spherocylinders, the splitting is noticeable only when the longitudinal sizes of the nanoparticle are much greater than its transverse sizes. 
It is shown that the variation of the shell material has a weak effect on the location of the maxima of the absorption cross-section, while the variation of metal of the core, for the particles of all considered shapes, has a significant effect both on the value of splitting of resonances and on the location of the maxima on the frequency scale, which is associated with the significant differences in such optical parameters as plasma frequency and the contribution of the interzone transitions into the dielectric function. The calculations of the radiation efficiency show its strong dependence on the size of the particle and the independence from the shape, while the radiation efficiency is maximum for the particles of all considered shapes, which have the smallest difference in longitudinal and transverse sizes. 10 b1 Bohren C. F., Huffman D. R., Absorption and Scattering of Light by Small Particles, Wiley-VCH, New York, 1998. b2 Kelly K. L., Coronado E., Zhao L. L., Schatz G. C., J. Phys. Chem. B, 2003, 107, No. 3, 668–677, 10.1021/jp026731y. b3 Dmitruk N. L., Goncharenko A. V., Venger E. F., Optics of small particles and composite media, Naukova Dumka, Kyiv, 2009. b4 Klimov V. V., Nanoplasmonics, CRC Press, Boka Raton, 2013. b5 Amendola V., Pilot R., Frasconi M., Marago O. M., Iati M. A., J. Phys.: Condens. Matter, 2017, 29, 03002 (48 pages), 10.1088/1361-648X/aa60f3. b6 Moradi A., Canonical Problems in the Theory of Plasmonics: From 3D to 2D Systems, Springer Series in Optical Sciences, Vol. 230, Springer International Publishing, Cham, 2020, 10.1007/978-3-030-43836-4. b7 Dmitruk N. L., Malinich S. Z., Ukr. J. Phys. Rev., 2014, 9, No. 1, 3–37 (in Ukrainian). b8 Sekar R., Basavegowd N., Thathapudi J. J., Sekhar M. R., Parinita J., Somu P., Baek K.-H., Pharmaceutics, 2023, 15, No. 2, 433 (27 pages), 10.3390/pharmaceutics15020433. b9 Huang X., Neretina S., El-Sayed M. A., Adv. Mater., 2009, 21, No. 48, 4880–4910, 10.1002/adma.200802789. b10 Sharifi M., Attar F., Saboury A. A., Akhtari K., Hooshmand N., Hasan A., El-Sayed M. A., J. Controlled Release, 2019, 311-312, 170–189, 10.1016/j.jconrel.2019.08.032. b11 Elahi N., Kamali M., Talanta, 2018, 184, 537–556, 10.1016/j.talanta.2018.02.088. b12 Yang X., Yang M., Pang B., Vara M., Xia Y., Chem. Rev., 2015, 115, No. 19, 10410–10488, 10.1021/acs.chemrev.5b00193. b13 Chen Y.-S., Zhao Y., Yoon S. J., Gambhir S. S., Nat. Nanotechnol., 2019, 14, No. 5, 465–472, 10.1038/s41565-019-0392-3. b14 Alkilany A. M., Thompson L. B., Boulos S. P., Sisco P. N., Murphy C. J., Adv. Drug Delivery Rev., 2012, 64, No. 2, 190–199, 10.1016/j.addr.2011.03.005. b15 Haine A. T., Niidome T., Chem. Pharm. Bull., 2017, 65, No. 7, 625–628, 10.1248/cpb.c17-00102. b16 Murphy C. J., Thompson L. B., Alkilany A. M., Sisco P. N., Boulos S. P., Sivapalan S. T., Yang J. A., Chernak D. J., Huang D. J., J. Phys. Chem. Lett., 2010, 1, No. 19, 2867–2875, 10.1021/jz100992x. b17 Arellano L. G., Villar-Alvarez E. M., Velasco B., Dominguez-Arca V., Prieto G., Cambon A., Barbosa S., Taboada P., J. Mol. Liq., 2023, 377, 121511 (15 pages), 10.1016/j.molliq.2023.121511. b18 Chen H., Shao L., Li Q., Wang J., Chem. Soc. Rev., 2013, 42, 2679–2724, 10.1039/C2CS35367A. b19 Kreibig U., Vollmer M., Optical Properties of Metal Clusters, No. 25 In Springer Series in Materials Science, Springer, Berlin, Heidelberg, 2010. b20 Kawabata A., Kubo R., J. Phys. Soc. Jpn., 1966, 21, 1765–1772, 10.1143/JPSJ.21.1765. b21 Wokaun A., Godon J. P., Liao P. F., Phys. Rev. 
Lett., 1982, 48, 957–960, 10.1103/PhysRevLett.48.957. b22 Klar T., Perner M., Grosse S., Von Plessen G., Spirkl W., Feldmann J., Phys. Rev. Lett., 1998, 80, 4249–4252, 10.1103/PhysRevLett.48.957. b23 Billaud P., Huntzinger J.-R., Cottancin E., Lerme J., Pellarin M., Arnaud L., Broyer M., Del Fatti N., Vallee F., Eur. Phys. J. D, 2007, 43, 271–274, 10.1140/epjd/e2007-00112-y. b24 Tomchuk P. M., Tomchuk B. P., J. Exp. Theor. Phys., 1997, 85, No. 2, 360–369, 10.1134/1.558284. b25 Tomchuk P. M., Grigorchuk N. I., Phys. Rev. B, 2006, 73, No. 15, 155423 (17 pages), 10.1103/PhysRevB.73.155423. b26 Grigorchuk N. I., Tomchuk P. M., Phys. Rev. B, 2011, 84, No. 8, 085448 (14 pages), 10.1103/PhysRevB.84.085448. b27 Grigorchuk N. I., J. Phys. Stud., 2016, 20, No. 1-2, 1701 (9 pages), 10.30970/jps.20.1701. b28 Grigorchuk N. I., Condens. Matter Phys., 2022, 25, No. 1, 13703 (11 pages), 10.5488/CMP.25.13703. b29 Prescott S. W., Mulvaney P., J. Appl. Phys., 2006, 99, 123504 (7 pages), 10.1063/1.2203212. b30 Constantin D., Eur. Phys. J. E, 2015, 38, 116 (6 pages), 10.1140/epje/i2015-15116-2. b31 Korotun A. V., Karandas Ya. V., Reva V. I., Ukr. J. Phys., 2022, 67, No. 12, 849–858, 10.15407/ujpe67.12.849. b32 Korotun A. V., Koval A. A., Reva V. I., J. Appl. Spectrosc., 2021, 86, No. 4, 606–612, 10.1007/s10812-019-00866-6. b33 Smirnova N. A., Malysh R. O., Korotun A. V., Reva V. I., Titov I. M., J. Nano- Electron. Phys., 2021, 13, No. 5, 05010, 10.21272/jnep.13(5).05010. b34 Korotun A. V., Karandas Ya V., Reva V. I., Titov I. M., Ukr. J. Phys., 2021, 66, No. 10, 908–918, 10.15407/UJPE66.10.908. b35 Baida H., Billaud P., Marhaba S., Christofilos D., Cottancin E., Crut A., Lerme J., Maioli P., Pellarin M., Broyer M., Del Fatti N., Vallee F., Sanchez-Iglesias A., Pastoriza-Santos I., Liz-Marzan L. M., Nano Lett., 2009, 9, 3463–3469, 10.1021/nl901672b. b36 Baida H., Christofilos D., Maioli P., Crut A., Del Fatti N., Vallee F., Eur. Phys. J. D, 2011, 63, 293–299, 10.1140/epjd/e2010-10594-y. b37 Korotun A. V., Karandas Ya. V., Phys. Met. Metallogr., 2022, 123, No. 1, 7–15, 10.1134/S0031918X22010070. b38 Korotun A. V., Pavlyshche N. I., Phys. Met. Metallogr., 2021, 122, No. 10, 941–949, 10.1134/S0031918X21100057. b39 Sun S., Rasskazov I. L., Carney P. S., Zhang P. S., Moroz A., J. Phys. Chem. C, 2020, 124, No. 24, 13365–13373. 10.1021/acs.jpcc.0c03415.
http://arxiv.org/abs/2406.17777v1
20240625175941
Text-Animator: Controllable Visual Text Video Generation
[ "Lin Liu", "Quande Liu", "Shengju Qian", "Yuan Zhou", "Wengang Zhou", "Houqiang Li", "Lingxi Xie", "Qi Tian" ]
cs.CV
[ "cs.CV" ]
[ Numerical exploration of the bootstrap in spin chain models P. N. Thomas Lloyd July 1, 2024 =========================================================== ] § ABSTRACT Video generation is a challenging yet pivotal task in various industries, such as gaming, e-commerce, and advertising. One significant unresolved aspect within T2V is the effective visualization of text within generated videos. Despite the progress achieved in Text-to-Video (T2V) generation, current methods still cannot effectively visualize texts in videos directly, as they mainly focus on summarizing semantic scene information, understanding, and depicting actions. While recent advances in image-level visual text generation show promise, transitioning these techniques into the video domain faces problems, notably in preserving textual fidelity and motion coherence. In this paper, we propose an innovative approach termed Text-Animator for visual text video generation. Text-Animator contains a text embedding injection module to precisely depict the structures of visual text in generated videos. Besides, we develop a camera control module and a text refinement module to improve the stability of generated visual text by controlling the camera movement as well as the motion of visualized text. Quantitative and qualitative experimental results demonstrate the superiority of our approach to the accuracy of generated visual text over state-of-the-art video generation methods. The project page can be found in <laulampaul.github.io/text-animator.html>. § INTRODUCTION Video generation has become an important cornerstone in content-based generation, and has huge potential value in various domains including e-commerce, advertising, the film industry, etc. For instance, in advertising scenarios, it is essential to display a manufacturer's specific logo or slogan in the generated video in the form of text, while also seamlessly integrating text with the products featured in the video (e.g. a piece of clothing). However, in current video generation approaches, the visualization of text/words in the generated video remains a challenging yet unresolved issue. For example, in the first example of Fig. <ref>, we need to engrave the word "CAFE" on the mug and ensure that the movement of the text and the mug appear seamless and harmonious in the video. Current T2V methods are unsuitable for these settings, as they typically focus on understanding the semantic-level information from a given prompt rather than interpreting specific words themselves. For instance, given a text input as “a person walking on the road," current T2V models can interpret the scene and produce a corresponding video about a person who walks on the road. However, these models fail to understand prompts at a more granular level. If the text input is modified to “a person walking on the road, wearing a T-shirt with the word 'Hello World' printed on it," the generated results of current methods are far from satisfactory, due to their inability to accurately interpret the generation of the texts 'Hello World' and incorporate its associated motion information effectively. Recently, some preliminary efforts have been made in the field of visual text generation, specifically in the paradigm of Text-to-Image (T2I) generation <cit.>. These trials have shown promising results, but they are only limited to the image domain. 
When extending this task to video scenarios, an intuitive approach is to use images generated by these methods as input for cutting-edge image-to-video (I2V) methods. However, most current I2V methods either focus on learning motion patterns in simple natural scenes <cit.> or deliberately omit data that include visual texts during dataset collection <cit.>. As a result, videos generated by these methods suffer from a failure mode generally called text collapse: as the number of frames increases, the visualized text becomes increasingly blurry or loses its original structure (as demonstrated in Sec. <ref> of this paper). Therefore, it is difficult to directly extend visual text generation models from the image domain to the video domain. Based on the above observations, we propose an effective solution for visual text video generation, which can understand both the text describing the video and the visual text that should be rendered within it. Our method not only reflects the semantics of the complete prompt, but also captures the fine-grained semantics of the words to be rendered, and effectively aggregates the two in terms of content while keeping the motion of the visualized text coherent with that of the other content. To achieve these goals, we propose a novel method called Text-Animator. Different from previous T2V methods, Text-Animator contains a text embedding injection module to enhance its precise understanding and generation capacity for visual text. Besides, a unified controlling strategy with camera and text position control is designed to improve the stability of the movement of visualized text and image content, thereby achieving unity and coordination of the text movements. Specifically, for camera control, the control information is applied to the main body of the network by considering the features of the camera's motion trajectories. The position control aims at controlling the specific position and size of visual text generated in videos. Owing to the comprehensive controlling strategy over the developed text embedding injection module, Text-Animator shows a superior capacity to generate stable and accurate visual text content in videos. In summary, the contributions of this paper can be summarized as follows: * We propose Text-Animator, a novel approach that can generate visual text in videos and maintain the structural consistency of the generated visual texts. To our knowledge, this is the first attempt at the visual text video generation problem. * We develop a text embedding injection module for Text-Animator that can accurately depict the structural information of visual text. Besides, we also propose a camera control and text refinement module to accurately control the camera movement and the motion of the generated visual text, to improve the generation stability. * Extensive experiments demonstrate that Text-Animator outperforms current text-to-video and image-to-video generation methods by a large margin in the accuracy of generated visual text. § RELATED WORK §.§ Visual Text Generation The goal of visual text generation is to integrate user-specified texts into images or videos and produce well-formed and readable visual text, thereby ensuring that the texts fit well with the corresponding image content. Current research mainly focuses on how to design an effective text encoder and how to better guide generation with text-conditioned controlling information. 
For text encoder, as large language models develop <cit.>, it is a promising idea to directly use these models to encode text. However, this roadmap inevitably results in overlooking the character features of texts. Recently, some works have optimized text encoders for character features. GlyphDraw <cit.> fine-tuned the text encoder for Chinese images for glyph embeddings. Chen et al. <cit.> trained a glyph-extracted image encoder for image editing. AnyText <cit.> utilizes pretrained recognition model, PP-OCRv3 <cit.> for encoding text. To generate characters more accurately, control information of the text is required as additional input. GlyphDraw <cit.> uses explicit glyph images as conditions to render characters. GlyphControl <cit.> and AnyText <cit.> embed text conditions in the latent space by the combination of the positions of text boxes and the rendered glyphs. different from <cit.>, Yang et al. <cit.> use character-level segmentation masks as conditioned controlling information, allowing for finer granularity control. To our knowledge, current methods mainly focus on addressing the visual text generation problem in the text-to-image domain, which cannot be utilized to tackle text-to-video visual text generation. In this paper, we first explore the visual text generation task in the video domain. §.§ Video Generation Sora <cit.>, a recent famous video generation model has attracted much attention from both the community of both industry and academia. Before the emergence of diffusion-based models, lots of effort in this field have been paid on methods based on GANs <cit.> or VQVAE <cit.>. Among these methods, the pre-trained Text-to-Image (T2I) model CogView2 <cit.> is utilized in CogVideo <cit.> as the backbone, to enable generating long sequence videos in an auto-regressive way. Based on autoregressive Transformers, NUWA <cit.> combines three tasks, which are T2I, T2V, and video prediction. Currently, diffusion models have become the mainstream method in video generation. Make-A-Video <cit.> proposes to learn visual-textual correlations and thus capture video motion from unsupervised data. Some methods <cit.> design effective temporal modules to reduce computational complexity and model temporal relationships effectively. Multi-stage approaches <cit.> design models to be used in stages for achieving high-definition video generation. These methods highlight the versatility and efficacy of diffusion models in advancing video generation capability. §.§ Controllable video generation In addition to conventional T2V models, some methods focus on making video generation controllable. In these methods, <cit.> turn to refer to specific video templates for controlling motion. However, despite the effectiveness of these methods in motion controlling, they typically require training new models on each template or template set, limiting their capability in controllable video generation. Besides, VideoComposer <cit.> proposes to use motion vectors to control the video motion. MotionCtrl <cit.> designs two control modules for camera motion and object motion control. Drag-NUWA <cit.> uses trajectories and text prompts in a joint way for video generation conditioned on an initial image. Different from these approaches, a dual control visual text generation model is utilized in our Text-Animator, where camera pose information and position trajectories can effectively control the motion of videos and make the generation process more stable. 
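Since both the image-domain methods above and the approach introduced next condition generation on glyphs rendered into specified text boxes, a minimal Python sketch of that rendering step is given here. The font file, canvas size, and box coordinates are placeholders, and the sketch is not the exact pipeline of any of the cited methods.

from PIL import Image, ImageDraw, ImageFont
import numpy as np

def render_glyph_and_position(text, box, canvas=(1024, 1024),
                              font_path="ArialUnicode.ttf"):  # placeholder font file
    # Render `text` into an otherwise blank glyph map inside `box` = (x0, y0, x1, y1),
    # and build a binary position map marking the same box.
    glyph = Image.new("L", canvas, 0)
    draw = ImageDraw.Draw(glyph)
    x0, y0, x1, y1 = box
    font = ImageFont.truetype(font_path, size=max(8, (y1 - y0) // 2))
    draw.text((x0, y0), text, fill=255, font=font)

    position = np.zeros((canvas[1], canvas[0]), dtype=np.uint8)
    position[y0:y1, x0:x1] = 1
    return np.array(glyph), position

# Example with a made-up box; downstream, such maps serve as the glyph and
# position conditions fed to a conditioning branch.
glyph_map, position_map = render_glyph_and_position("HELLO", box=(300, 450, 720, 560))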
§ METHOD In this section, we first introduce the pipeline of our Text-Animator in Sec. <ref>. Then, the details of the key components are introduced in Sec. <ref>, Sec. <ref>, and Sec. <ref> respectively. §.§ Text-conditioned Video Generation Pipeline Firstly, we introduce the overall framework of our network, as shown in Fig. <ref>. Our method consists of four parts that are Text Embedding Injection Module, Camera Control Module, Text Glyph and Position Refinement Module, and 3D-UNet Module. Given the integrated texts T_in, position map P_1,ori, and Camera Pose Information (K_1, E_1, K_2, E_2, ... ,K_n, E_n), the glyph map G_1,ori is generated by rendering texts T_in using a uniform font style onto an image based on their locations. Then, the video position maps P_1,...,P_n and glyph maps G_1,...,G_n are generated by warping the P_1,ori and G_1,ori using camera pose information. Camera Control Module and Text Embedding Injection Module output multi-scale control features corresponding to their control input respectively. The noise z_t is fed into a 3D-UNet (the architecture of the 3D-UNet used in our work is as same as that used in AnimateDiffv3 <cit.>) to obtain an output ϵ_t. In the inference stage, this output is passed through the decoder of the VAE to obtain the final output videos. Recently, diffusion models have served as a primary framework for T2V generation and yielded promising results. Current T2V methods are derived from the original formulations of diffusion models used in image generation. More specifically, we generate the latent representation z_0 by applying Variational Autoencoder (VAE) <cit.> on the input video x_0. Then, a sequence of N latent features of z_0 is gradually perturbed with noise ϵ from a normal distribution over T steps. Given the noisy input z_t, a neural network ϵ̂_θ is trained to predict the added noise. In our work, we inject dual control signals (position control and camera control) into the denoising process, strengthening the stability of video generation. Specifically, these two control features are first fed into additional ControlNet N_p and N_c respectively, then injected into the generator through various operations. Hence, the objective of training our encoder is shown below: ℒ=𝔼_z_0, ϵ, c_t,s_p, s_c t[ϵ-ϵ̂_θ(z_t, c_t, N_p(s_p),N_c(s_c), t)] , where c_t is the embeddings of the corresponding text prompts, s_p is the set of the position maps and glyph maps, and s_c is the set of camera pose information (s_c=K_1, E_1, K_2, E_2, ... ,K_n, E_n). §.§ Text Embedding Injection Module In the generation of videos with visual text, the first consideration is how to effectively embed the visual features of the required text into the base model (the pre-trained UNet model). Inspired by previous methods of visualizing text in images <cit.>, we embed text conditions in the latent space by combining the positions of text boxes and the rendered glyphs. Text boxes indicate the positions in the generated image where rendering should occur, while the rendered glyphs utilize existing font style (i.e., `Arial Unicode') to pre-initialize the style of the characters. In addition, unlike image generation, video generation involves processing features across multiple frames. To leverage the pre-trained feature extractor used in image generation, we extract features from each frame using a frame-wise feature extractor, and then concatenate these features before feeding them into a pre-trained UNet model. From the top left of Fig. 
<ref>, we can see that the input to the position and glyph control module is the position map P_1, P_2, ... , P_n and glyph map G1, G_2, ... , G_n generated by the module in Sec. <ref>. We extract features of glyphs and positions separately using glyph convolution blocks and position convolution blocks, respectively. Then, we merge these features using a fusion convolution block. Finally, after combining these features with the noisy input Z_t, they are inputted into the text and position ControlNet. The text and position ControlNet output multi-scale feature maps F^P_k. Following the ControlNet <cit.>, we fuse these features into the intermediate block and upsampling blocks of the UNet network, where they are directly added to the corresponding features. §.§ Camera Control for Stable Text Generation After incorporating the text embedding injection module, our method is now capable of generating visual text videos with text that moves following the scene. However, this text movement can sometimes become disconnected from the movement of objects within the video. For instance, in the prompt `A sign that says ‘STOP’,' the text part "STOP" might move to the right while the sign moves to the left. To generate more stable videos, additional control modules need to be designed. Therefore, we propose to use camera pose information to control the movement of text and ensure consistency with the scene content. In this section, we will primarily discuss how to embed camera pose information into the underlying model. In the next section, we will explore how to relate camera pose information to the position and glyph maps discussed in Section <ref>. To effectively embed the camera pose information (K_1, E_1, K_2, E_2, ... ,K_n, E_n) into the camera ControlNet, followed  <cit.>, we use the plucker embedding. And we briefly introduce it as follows. A point (u, v) in the image plane is represented as p_u,v = (o×d_u,v, d_u,v) ∈ℝ^6, where o∈ℝ^3 denotes the camera center in the world coordinate space and d_u,v∈ℝ^3 represents a directional vector in world coordinate space, calculated using the formula 𝐝_u, v=𝐑 𝐊^-1[u, v, 1]^T+𝐭. 𝐑 and 𝐭 refer to the rotation matrix and the translation vector, respectively. Thus, the embedding can be expressed as P∈ℝ^6 × n × H × W, where H and W are the height and width for the frames and n represents the frame number. The camera ControlNet consists of four blocks, each of them comprising a residual-based convolutional block and a transformer-based temporal attention block, allowing the network to learn temporal relationships within the camera pose information. The network outputs multi-scale features, F^c_k∈ℝ ^ (b × h_k× w_k) × n × c_k. After obtaining multi-scale camera features, it's necessary to integrate these features into the 3D-UNet architecture of the T2V model. The image latent features z_k and the camera pose features F_k are directly fused through pixel-wise addition. §.§ Auxiliary Text Glyph and Position Refinement To enable the collaboration between the camera control module and the text embedding injection module, it is necessary to use the camera position information from videos as guidance to generate the position map and glyph map of subsequent frames by considering the guidance from the first frame. The generation method is as follows. Given the first frame's map (position map or glyph map) of the first frame, M, the intrinsic parameters K, and the transformation matrix T_1 of the first frame, and the transformation matrix T_n of the n-th frame. 
§.§ Auxiliary Text Glyph and Position Refinement To enable the collaboration between the camera control module and the text embedding injection module, it is necessary to use the camera pose information of the video as guidance to generate the position map and glyph map of subsequent frames from those of the first frame. The generation method is as follows. We are given the first frame's map (position map or glyph map) M, the intrinsic parameters K, the transformation matrix T_1 of the first frame, and the transformation matrix T_n of the n-th frame. Taking the second frame as an example, we first calculate the transformation matrix T_1to2= T_1^-1T_2 from the first frame to the second frame, and build the projection matrix P=KT_1to2. Next, the pixel coordinate matrix of the first frame is converted to three-dimensional points Point_cam1_3d in the camera coordinate system. Here, since depth information is unavailable, we assume that rendered texts on the same line are at the same depth. Then, the relative transformation matrix T_1to2 is used to transform these points into the camera coordinate system of the second frame, and they are projected back onto the pixel plane using P, followed by a normalization operation. After normalization, the projected coordinates are constrained within the image boundary and filled into the second frame image. After generating the position and glyph maps from the video, we observe in experiments that the relative position and size of the position feature map have a certain impact on the final generation results. If the position feature map is too small, it limits the diversity of the generated text, resulting in visual text that does not harmonize well with the content in the video. Conversely, if the position feature map is too large, it may lead to generated text containing incorrect or repeated characters. Therefore, we design a position refinement module. First, we extract the centroid of the initial position map P_n,ori and render the glyph map G_n,ori at specific positions. Then, we extract the convex hull of the glyph map and expand it by an expansion factor e to generate a new position map P_n. § EXPERIMENTS §.§ Implementation Details We choose AnimateDiffV3 <cit.> as the base text-to-video (T2V) model. The weights of the model's motion module are initialized with AnimateDiffV3 <cit.>. The weights of the other parts are initialized with DreamShaper <cit.> or the original SD1.5 <cit.>. The camera ControlNet and the text and position ControlNet are trained using the methods and datasets in <cit.> and <cit.>. Finally, all the parts are aggregated and the parameters are fixed for inference. The image dimensions of G and P are set to 1024 × 1024 and 512 × 512, respectively. The expansion size e is set to 1.2. During the sampling process, we randomly select some hint prompts (such as `these texts are written on it: xxx') and concatenate them to the caption. The number of inference steps and the guidance scale are set to 25 and 7.5, respectively. Finally, the model outputs videos of size 16 × 256 × 384. §.§ Dataset and Metrics Since there is no text-to-video dataset for evaluating visual text generation, we use the LAION subset of the AnyText benchmark <cit.> to evaluate the effectiveness of visual text video generation. However, in this dataset, some images have text and main content separated, while others consist only of text without any image content, which is meaningless for video generation. Therefore, we selected about 90 images from the dataset to form the test set, which is named the LAION subset. Firstly, we need to assess the accuracy and quality of text generation. Following <cit.>, we employ the Sentence Accuracy (Sen. Acc) metric, where each generated text line is cropped according to the specified position and fed into an OCR model to obtain predicted results. Additionally, the Normalized Edit Distance (NED) <cit.> is used to measure the similarity between two strings.
To evaluate video generation quality, we utilize the Fréchet Inception Distance (FID) to assess the appearance quality of generated videos relative to real-world videos. Moreover, we also adopt the Prompt Similarity and Frame Similarity metrics. The former evaluates the semantic similarity between the input description and the output video, while the latter evaluates the temporal continuity of the generated videos. §.§ Quantitative Results The quantitative results are shown in Table <ref>. The compared methods are divided into two parts. The first part combines specific image visual text generation works (GlyphControl <cit.> and AnyText <cit.>) with state-of-the-art I2V works (AnimateLCM <cit.>, I2VGen-XL <cit.>). The second part consists of one-stage methods. We use AnimateDiff-SDXL as the base model and two fine-tuned LoRA weights from CivitAI, denoted as AnimateDiff-SDXL (Text LoRA A)[This LoRA model is from <https://civitai.com/models/419492?modelVersionId=467355>] and AnimateDiff-SDXL (Text LoRA B)[This LoRA model is from <https://civitai.com/models/221240/texta-generate-text-with-sdxl>] in Table <ref>, respectively. These two LoRA weights are fine-tuned on images containing visual text. From Table <ref>, we can see that the parameter counts of these methods are much larger than that of our method (by over 41%). Moreover, our method significantly outperforms other methods in terms of the accuracy of generating visual text, as measured by the Sen. Acc and NED metrics (leading the best competing method by 191.8% and 30.4%, respectively). This reflects the accuracy of the text generated by our method, and the text does not collapse in the generated videos. As for the metrics measuring the similarity between the generated video and the input text (FID and Prompt Similarity), our method achieves the second-best result. In terms of Prompt Similarity, the gap with the best method is only 0.6%. In the metric measuring video stability, Frame Similarity, our method also achieves the second-best result. We observe that the best method, Pika, tends to generate videos with smaller movements, giving it an advantage in this metric. Besides, in Table <ref>, we also compare with Open-SORA <cit.> and three recent SOTA APIs, Morph Studio <cit.>, Pika <cit.>, and Gen-2 <cit.>. Open-SORA and Morph Studio do not have a Sen. Acc score because they cannot generate correct sentences or words. Our method significantly outperforms other methods in terms of Sen. Acc and also performs better than other methods in NED. §.§ Qualitative Results In this subsection, we first compare our model with state-of-the-art T2V models or APIs in the field of text-to-video generation (including ModelScope <cit.>, SVD (Stable Video Diffusion) <cit.>, AnimateDiff <cit.>, Open-SORA <cit.>, and Pika <cit.>), as shown in Fig. <ref>. These models show the ability to understand context, but they fail to generate the specified texts and to maintain textual consistency over time. Compared to SVD, our model not only accurately renders each character (ours: `HELLO' vs SVD: `HELO' or Pika: `HHLLLO'), but also maintains consistency over time. SVD fails to learn the motion information of the text, causing the text to become increasingly disordered as time passes.
As for the comparison with specific visual text generation works, since there is currently no T2V work specifically designed for visual text generation, we contrast our approach with methods combining specific T2I works for visual text generation (such as GlyphControl <cit.> and AnyText <cit.>) and state-of-the-art I2V works (such as AnimateLCM <cit.>, I2VGen-XL, and SVD <cit.>). As shown in Fig. <ref>, our method shows superior integration of the generated text with the background, while AnyText cannot generate the seaside background. When using I2V methods to generate videos from reference frame images, the text parts are often blurred or distorted. Our approach maintains the clarity of the text parts well and moves them in coordination with the image content. Besides, in Fig. <ref>, we show one example of the LAION-subset dataset. Only our method can correctly display the visual characters (CHRISTMAS) and the number of bags (two). At the same time, we also conduct experiments to verify the robustness of our method. In Fig. <ref>, we demonstrate the robustness of our method for large movement in the text region. The existing SOTA methods deform the text area even during small movements (as shown in the example above), so the visualization results of these methods are not shown here. The prompts for these two examples are `A coffee mug with the words `cafe' on the office desk' and `A bottle of milk with the words `MILK'. The direction of movement is from right to left. We can see that the structure of our text can still be maintained even with a large range of camera movements. In Fig. <ref>, we demonstrate that, given the same camera information, we can control the movement speed by sampling the camera information at frame intervals. At a speed of 4 or 6 times the original speed, our method is still able to maintain the structure of the text. §.§ Ablation Study In this part, to illustrate the contributions of our method, we conduct ablation studies on the LAION subset. The quantitative comparisons are shown in Table <ref>. Dual control: We conduct an ablation study to analyze the effectiveness of the dual control design. Generally speaking, it is feasible to use only position boxes for guidance without using camera poses. Therefore, we design the `W/o camera control' model, which removes the camera guidance module from the original model. In addition, we remove the position block and use only the camera pose and glyph embeddings, naming this model `W/o position control'. In Table <ref>, we can see that the performance of the `W/o camera control' model decreases by 0.016 in NED compared to the original model, and the performance of the `W/o position control' model decreases by 0.027 in NED. Position refinement and expansion size: We also conduct experiments to analyze the effectiveness of our proposed refinement module. When the video position refinement is removed, we use the default positions in the LAION subset, and we denote this model as `w/o Position Refinement' in Table <ref>. We can see that using the original positions decreases the accuracy. Besides, we conduct experiments on the proper expansion size, trying two expansion coefficients: 0.9 (smaller than 1.2) and 1.4 (larger than 1.2). It can be observed that although the smaller expansion coefficient improves the accuracy of the text in the video, it negatively impacts the quality of the video generation.
On the other hand, the larger expansion coefficient causes some characters to appear repeatedly in the video, thereby reducing the accuracy of the text. § CONCLUSION In conclusion, this paper presents Text-Animator, an innovative approach to the challenge of integrating textual elements effectively into generated videos within the visual text video generation domain. Text-Animator emphasizes not only semantic understanding of text but also fine-grained textual semantics, ensuring that visualized text is dynamically integrated into video content while maintaining motion coherence. Our approach introduces dual control mechanisms (camera and position control) to synchronize text animation with video motion, thereby enhancing unity and coordination between textual elements and video scenes. Through extensive quantitative and visual experiments, we have demonstrated that Text-Animator outperforms existing T2V and hybrid T2I/I2V methods in terms of video quality and fidelity of textual representation. Our contributions not only address current challenges but also inspire further exploration and innovation in this rapidly evolving field of multimedia content generation.
http://arxiv.org/abs/2406.18737v1
20240626200639
Spin-1/2 Ising-Heisenberg distorted diamond chain with antiferromagnetic Ising and ferromagnetic Heisenberg interactions
[ "B. M. Lisnyi" ]
cond-mat.stat-mech
[ "cond-mat.stat-mech", "cond-mat.str-el" ]
July 1, 2024 ================ § ABSTRACT The exactly solvable spin-1/2 Ising-Heisenberg distorted diamond chain in the presence of the external magnetic field is investigated for the case of antiferromagnetic Ising and ferromagnetic XXZ Heisenberg interactions. The influence of quantum fluctuations and the distortion on the ground state, magnetic and thermal properties of the model is studied in detail. In particular, it is established that the zero-temperature magnetization curve may involve intermediate plateaus just at zero and 1/3 of the saturation magnetization. It is demonstrated that the temperature dependence of the specific heat reveals up to four distinct peaks at zero magnetic field and up to five distinct peaks at a weak magnetic field. The physical origin of all observed additional peaks of the specific heat has been clarified on the grounds of dominating thermal excitations. We have shown that the quantum fluctuations give rise to an effective geometrical frustration in this chain. Keywords: Ising-Heisenberg distorted diamond chain, ground state, phase diagram, specific heat § INTRODUCTION Decorated spin chains, which can be exactly solved using the transfer-matrix method <cit.>, are of practical importance for the qualitative interpretation and the quantitative description of magnetic and thermal properties of real solid-state materials. In particular, several exactly solved Ising-Heisenberg decorated spin chains provide an in-depth understanding of a striking interplay between geometric spin frustration and quantum fluctuations, which may manifest itself through various intriguing phenomena such as the appearance of intermediate plateaux in low-temperature magnetization curves and the formation of additional maxima in the temperature dependence of the specific heat. Despite a certain oversimplification, some exactly solved Ising-Heisenberg spin chains afford a plausible quantitative description of the magnetic behavior of real spin-chain materials <cit.>. The natural mineral azurite Cu_3(CO_3)_2(OH)_2 provides an experimental realization of the geometrically frustrated spin-1/2 diamond Heisenberg chain <cit.> with spectacular magnetic properties <cit.>. Owing to this fact, a lot of attention has been paid to research on different versions of the geometrically frustrated spin-1/2 Ising-Heisenberg diamond chain <cit.>. Although a correct description of the magnetic properties of the azurite would require modelling based on a more complex spin-1/2 Heisenberg model <cit.>, the simplified but still exactly solvable spin-1/2 Ising-Heisenberg diamond chain <cit.> qualitatively reproduces the most prominent experimental features reported for the azurite such as an intermediate one-third magnetization plateau as well as the double-peak temperature dependencies of the specific heat <cit.>. On the other hand, much less attention has been paid to non-geometrically frustrated cases of the spin-1/2 Ising-Heisenberg diamond chain. Exactly solvable Ising-Heisenberg systems can be employed to formulate a theoretical framework for describing quantum Heisenberg systems. This involves deriving effective Hamiltonians for these systems through many-body perturbation theory <cit.>. In particular, the geometrically frustrated spin-1/2 Ising-Heisenberg diamond chain was generalized to a spin-1/2 Heisenberg diamond chain by adding a perturbation, for which a much simpler effective Hamiltonian was obtained within the many-body perturbation theory to reproduce its low-temperature behavior <cit.>.
In addition, a quantum spin-1/2 antiferromagnetic Heisenberg trimerized chain was studied using the many-body perturbation expansion, which is developed from the exactly solved spin-1/2 antiferromagnetic Ising-Heisenberg diamond chain <cit.>. The main purpose of this work is to examine the ground state and basic thermodynamic properties of the spin-1/2 Ising-Heisenberg distorted diamond chain with antiferromagnetic Ising and ferromagnetic Heisenberg interactions. This chain reduces to a usual spin-1/2 Ising-Heisenberg diamond chain when the asymmetry of the Ising couplings along the diamond sides vanishes, while the spin-1/2 Ising-Heisenberg doubly decorated chain is recovered under the extreme case of asymmetry. The ground state, magnetization process and basic thermodynamic characteristics (specific heat, susceptibility) of the spin-1/2 Ising-Heisenberg distorted diamond chain will be exactly calculated within the framework of the transfer-matrix method. We explore how the quantum fluctuations and the coupling asymmetry will affect the overall magnetic and thermal behavior. We show that quantum fluctuations create an effective geometric frustration in this chain, which causes similar features in the properties of the ground state and basic thermodynamic characteristics as in the case of the corresponding antiferromagnetic chain with geometric frustration <cit.>, while the features of the temperature dependence of the heat capacity are more intriguing. § MODEL AND ITS EXACT SOLUTION Let us begin by considering the spin-1/2 Ising-Heisenberg distorted diamond chain in the presence of an external magnetic field. The magnetic structure of the investigated model system is schematically illustrated in figure <ref>. As one can see, the primitive cell in the shape of diamond spin cluster involves two nodal Ising spins S_k and S_k+1 along with two interstitial Heisenberg spins σ_k,1 and σ_k,2. The total Hamiltonian for the spin-1/2 Ising-Heisenberg distorted diamond chain in the presence of an external magnetic field, which contains N primitive cells, reads Ĥ = ∑_k=1^N[S_k(I_1 σ̂^z_k,1 + I_2σ̂^z_k,2) + S_k+1(I_2σ̂^z_k,1 + I_1 σ̂^z_k,2) ] + ∑_k=1^N( J_1 σ̂^x_k,1σ̂^x_k,2 + J_2 σ̂^y_k,1σ̂^y_k,2 + J_3 σ̂^z_k,1σ̂^z_k,2) - ∑_k=1^N h ( S_k + σ̂^z_k,1 + σ̂^z_k,2), which involves the nodal Ising spins S_k = ± 1/2 and the interstitial Heisenberg spins σ_k,i = 1/2 (i = 1,2) by assuming the periodic boundary conditions. The interaction constants I_1 and I_2 label the nearest-neighbor interactions between the nodal Ising spins and interstitial Heisenberg spins along the sides of the primitive diamond cell, while the coupling constants J_1, J_2 and J_3 determine the spatially anisotropic XYZ interaction between the nearest-neighbor interstitial Heisenberg spins from the same primitive cell. Finally, the Zeeman's term h determines the magnetostatic energy of the nodal Ising spins and interstitial Heisenberg spins in the external magnetic field. It should be mentioned that a particular case I_2=0 (or I_1=0) of the Hamiltonian (<ref>) corresponds to the spin-1/2 Ising-Heisenberg doubly decorated chain. 
For further manipulations, it is convenient to rewrite the total Hamiltonian (<ref>) as a sum over the cell Hamiltonians Ĥ = ∑_k=1^N Ĥ_k, whereas the cell Hamiltonian Ĥ_k involves all the interaction terms of the kth diamond cell Ĥ_k = J_1 σ̂^x_k,1σ̂^x_k,2 + J_2 σ̂^y_k,1σ̂^y_k,2 + J_3 σ̂^z_k,1σ̂^z_k,2 + I_1 ( S_kσ̂^z_k,1 + σ̂^z_k,2S_k+1) + I_2 ( S_kσ̂^z_k,2 + σ̂^z_k,1S_k+1) - h (σ̂^z_k,1 + σ̂^z_k,2) - h/2(S_k + S_k+1). Since the cell Hamiltonians Ĥ_k commute between themselves, [Ĥ_k, Ĥ_n ]=0, the partition function of the spin-1/2 Ising-Heisenberg diamond chain can be written in such a form: Z≡exp(-βĤ) = ∑_{S_k }∏_k=1^N Z_k(S_k,S_k+1), where β=1/(k_ B T), and k_ B is the Boltzmann's constant and T is the absolute temperature, the symbol ∑_{S_k } denotes the summation over all possible spin configurations of the nodal Ising spins and Z_k(S_k,S_k+1) = _{σ_k,1,  σ_k,2}exp(-βĤ_k ) is the effective Boltzmann's factor obtained after tracing out the spin degrees of freedom of two Heisenberg spins from the k-th primitive cell. To proceed further with the calculation, one necessarily needs to evaluate the effective Boltzmann's factor Z_k(S_k,S_k+1) given by equation (<ref>). For this purpose, let us pass to the matrix representation of the cell Hamiltonian Ĥ_k in the basis spanned over four available states of two Heisenberg spins σ^z_k,1 and σ^z_k,2: |↑, ↑⟩_k = |↑⟩_k,1 |↑⟩_k,2, |↓, ↓⟩_k = |↓⟩_k,1 |↓⟩_k,2, |↑, ↓⟩_k = |↑⟩_k,1 |↓⟩_k,2, |↓, ↑⟩_k = |↓⟩_k,1 |↑⟩_k,2, whereas |↑⟩_k,i and |↓⟩_k,i denote two eigenvectors of the spin operator σ̂^z_k,i with the respective eigenvalues σ^z_k,i = 1/2 and -1/2. After a straightforward diagonalization of the cell Hamiltonian Ĥ_k, one obtains the following four eigenvalues: E_k 1,2 (S_k,S_k+1) = ±√((J_1 - J_2)^2/16 + [I_1 + I_2/2(S_k + S_k+1) - h ]^2) + J_3/4 - h/2(S_k + S_k+1) , E_k 3,4 (S_k,S_k+1) = ±√((J_1 + J_2)^2/16 + (I_1 - I_2)^2/4(S_k - S_k+1)^2) - J_3/4 - h/2(S_k + S_k+1) . Now, one may simply use the eigenvalues (<ref>) in order to calculate the Boltzmann's factor (<ref>): Z_k(S_k,S_k+1) = 2exp[β h/2(S_k + S_k+1)] × [ exp(β J_3/4) cosh(β√((J_1 + J_2)^2/16 + (I_1 - I_2)^2/4(S_k- S_k+1)^2)) . + . exp(-β J_3/4) cosh(β√((J_1 - J_2)^2/16 + [I_1 + I_2/2(S_k+ S_k+1) - h]^2)) ]. The Boltzmann's factor (<ref>) can be subsequently replaced through the generalized decoration-iteration transformation <cit.> similarly to what was done before <cit.>. Since this Boltzmann's factor is essentially a transfer matrix [see, equation (<ref>)], we can directly use the transfer-matrix method <cit.>. For convenience we define the elements V_ij of the transfer matrix 𝐕 as follows: V_ij≡ Z_k ( S_k=i-3/2, S_k+1=j-3/2). The calculation of the partition function requires finding the eigenvalues of the transfer matrix 𝐕 = ( [ V_11 V_12; V_21 V_22 ]). The eigenvalues of this matrix are given by the roots of the quadratic characteristic equation. The expressions for the two transfer-matrix eigenvalues are as follows: λ_i = 1/2( V_11 + V_22 - (-1)^i √((V_11 - V_22)^2 + 4V_12^2)), i=1,2. As a result, the partition function Z given by equation (<ref>) is determined by the transfer-matrix eigenvalues λ_1 and λ_2: Z = 𝐕^N = λ_1^N + λ_2^N. Exact results for other thermodynamic quantities follow quite straightforwardly from the formula (<ref>) for the partition function Z. In the thermodynamic limit N →∞, the free energy per unit cell can be easily obtained: g = -1/βlim_N →∞1/Nln Z = -1/βlnλ_1, where λ_1 is the largest transfer-matrix eigenvalue. 
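To make the transfer-matrix computation concrete, the following is a minimal numerical sketch (not taken from the paper) that evaluates the Boltzmann factors Z_k quoted above, assembles the 2×2 transfer matrix V, and returns the free energy per cell g = -(1/β) ln λ_1. Units are chosen so that k_B = 1, and all parameter values in the example call are purely illustrative.

```python
import numpy as np

def boltzmann_factor(S1, S2, I1, I2, J1, J2, J3, h, beta):
    """Z_k(S_k, S_{k+1}) obtained by tracing out the two Heisenberg spins."""
    a = np.sqrt((J1 + J2) ** 2 / 16 + (I1 - I2) ** 2 / 4 * (S1 - S2) ** 2)
    b = np.sqrt((J1 - J2) ** 2 / 16 + ((I1 + I2) / 2 * (S1 + S2) - h) ** 2)
    return 2 * np.exp(beta * h / 2 * (S1 + S2)) * (
        np.exp(beta * J3 / 4) * np.cosh(beta * a)
        + np.exp(-beta * J3 / 4) * np.cosh(beta * b))

def free_energy_per_cell(I1, I2, J1, J2, J3, h, T, kB=1.0):
    """g = -(1/beta) ln(lambda_1), lambda_1 = largest eigenvalue of V."""
    beta = 1.0 / (kB * T)
    spins = (-0.5, 0.5)
    V = np.array([[boltzmann_factor(s1, s2, I1, I2, J1, J2, J3, h, beta)
                   for s2 in spins] for s1 in spins])
    lam1 = np.linalg.eigvalsh(V).max()   # V is symmetric since V_12 = V_21
    return -np.log(lam1) / beta

# Illustrative call: XXZ coupling J1 = J2 = J*Delta, J3 = J (ferromagnetic, J < 0),
# antiferromagnetic Ising couplings I1 >= I2 > 0, as considered in Sec. 3.
g = free_energy_per_cell(I1=1.0, I2=0.8, J1=-6.5, J2=-6.5, J3=-5.0, h=0.1, T=0.5)
```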
The entropy s and the specific heat c per unit cell can be subsequently calculated from the formulae s = k_Bβ^2 ∂ g/∂β = k_B( lnλ_1 - β/λ_1∂λ_1/∂β) , c = - β∂ s/∂β = k_Bβ^2 [1/λ_1∂^2 λ_1/∂β^2 - 1/λ_1^2(∂λ_1/∂β)^2 ], whereas the total magnetization m and magnetic susceptibility χ readily follow from the relations m = - ∂ g/∂ h = 1/βλ_1∂λ_1/∂ h, χ = ∂ m/∂ h = 1/β[1/λ_1∂^2 λ_1/∂ h^2 - 1/λ_1^2(∂λ_1/∂ h)^2 ]. The derivatives ∂λ_1/∂β, ∂^2 λ_1/∂β^2, ∂λ_1/∂ h, ∂^2 λ_1/∂ h^2 follow straightforwardly from formula (<ref>). § RESULTS AND DISCUSSIONS Now, let us proceed to the discussion of the most interesting results obtained for the spin-1/2 Ising-Heisenberg distorted diamond chain with the antiferromagnetic Ising interactions, I_1>0 and I_2>0, and ferromagnetic XXZ Heisenberg interaction: J_3 = J < 0, J_1 = J_2 = J Δ, where Δ determines the anisotropy of XXZ Heisenberg interaction. Without loss of generality, we may also assume that one of the two considered Ising couplings is stronger than the other I_1 ⩾ I_2 and introduce the difference between both Ising coupling constants δ = I_1 - I_2⩾0. To reduce the number of free parameters, we define the following set of dimensionless interaction parameters J̃=J/I_1, Δ, h̃=h/I_1, δ̃=δ/I_1. The parameter δ̃ restricted to the interval δ̃∈ [0,1] has a physical sense of the distortion parameter, because it determines a relative difference between two Ising coupling constants I_1 and I_2. The ground state of the spin-1/2 Ising-Heisenberg distorted diamond chain can be trivially connected to the lowest-energy eigenstate of the cell Hamiltonian (<ref>). Depending on a mutual competition between the parameters Δ, J̃, δ̃ and h̃, in total one finds four different ground states: the fully magnetized (FM) state, the ferrimagnetic (FRI) state, the monomer-dimer (MD1) state, and the quantum antiferromagnetic (QAF1) state given by the eigenvectors |⟩ = ∏_k=1^N| + ⟩_k ⊗ |↑, ↑⟩_k, |⟩ = ∏_k=1^N| - ⟩_k ⊗ |↑, ↑⟩_k, |⟩ = ∏_k=1^N | + ⟩_k ⊗1/√(2)(|↑, ↓⟩_k + |↓, ↑⟩_k), |⟩ = {[ ∏_k=1^N |[-]^k ⟩_k ⊗(A_[-]^k+1 |↑, ↓⟩_k + A_[-]^k |↓, ↑⟩_k); ∏_k=1^N |[-]^k+1⟩_k ⊗(A_[-]^k |↑, ↓⟩_k + A_[-]^k+1 |↓, ↑⟩_k) ]. . In the above, the ket vector |±⟩_k determines the state of the nodal Ising spin S_k = ± 1/2, the symbol [-]^k ∈{-,+} denotes the sign of the number (-1)^k, the spin states relevant to two Heisenberg spins from the kth primitive cell are defined by the notation (<ref>), and the probability amplitudes A_± are explicitly given by the expressions A_±=1/√(2)√(1 ±δ̃/√((J̃Δ)^2 + δ̃^2)). The eigenenergies per unit cell that correspond to the respective ground states (<ref>) are given as follows: Ẽ_FM = J̃/4 + 1 - δ̃/2 - 3h̃/2, Ẽ_FRI = J̃/4 - 1 + δ̃/2 - h̃/2, Ẽ_MD1 = -J̃/4 + 1/2J̃Δ - h̃/2, Ẽ_QAF1 = -J̃/4 - 1/2√((J̃Δ)^2 + δ̃^2). The ground-state phase diagram in the δ̃-h̃ plane can have four different topologies depending on the parameters J = |J̃|(Δ - 1) and Δ (see, figure <ref>). It should be noted for further analysis that the parameter J must agree with the parameter Δ, namely, the value Δ = 1 corresponds to the value J = 0, the condition Δ<1 (Δ>1) corresponds to the condition J<0 (J>0). The first type of the ground-state phase diagram (figure <ref>a) is found for J⩽ 2/(1+Δ). According to this condition, in the case of 0⩽Δ⩽1, only the first type of the ground-state phase diagram is realized. It includes only two ground states FM and FRI. The second type of the ground-state phase diagram (figure <ref>b) is realized for 2/(1+Δ)<J⩽ 1 and it involves three ground states: FM, FRI, and QAF1. 
In the zero field, the FRI and QAF1 states are separated by the point δ̃ = δ̃_F·Q1≡2-J/2(1 + JΔ/2(Δ - 1) + J). The third type of the ground-state phase diagram (figure <ref>c) involving all four available ground states FM, FRI, QAF1, and MD1 can be detected for 1< J <2. The FRI and MD1 ground states are separated by the phase boundary δ̃ = δ̃_F|M1≡ 2 - J. It is easy to show that δ̃_F·Q1⩽δ̃_F|M1. In the regime of fixedly J the parameter δ̃_F·Q1 is restricted to the interval (1 - J^2/4, 2-J) as the exchange anisotropy varies from Δ→∞ to Δ→ 1. The fourth type of the ground-state phase diagram (figure <ref>d) emerges for J⩾ 2 and it includes three ground states: FM, MD1, and QAF1. The phase boundary between the QAF1 and MD1 states starts from the special point (0,0), which corresponds to the highly frustrated (FRU1) ground state |⟩ = ∏_k=1^N (|+⟩_k |-⟩_k ) ⊗1/√(2)(|↑, ↓⟩_k + |↓, ↑⟩_k), for J > 2. And if J = 2, then the two ground states FRU1 and FRI coexist together at the point (0,0). The FRU1 ground state has the residual entropy s_res = k_ Bln 2 reflecting the macroscopic degeneracy 2^N, which comes from spin degrees of freedom of the nodal Ising spins. The topologies of the ground-state phase diagram b, c, d (see, figure <ref>) are identical to the topologies of the phase diagrams of the ground state of the first, second, and third type of the corresponding antiferromagnetic chain in <cit.>. At the same time, the ground states have some differences: in the MD1 state, the spins on the Heisenberg bond are in the triplet state, and in the corresponding state of the antiferromagnetic chain, they are in the singlet state <cit.>. In the QAF1 state, the spins on the Heisenberg bond are in a symmetric superposition of quasiclassical antiferromagnetic states, and in the corresponding state of the antiferromagnetic chain, they are in an antisymmetric superposition of quasiclassical antiferromagnetic states <cit.>. We see that under the condition 2/(1+Δ)<J, the properties of the ground state of our antiferromagnetic-ferromagnetic chain without geometric frustration are analogous to the properties of the ground state of the corresponding antiferromagnetic chain with geometric frustration. Based on this, it can be concluded that in this chain, under the condition 2/(1+Δ)<J, an effective geometric frustration appears. Under the condition J>1, the topology of the ground-state phase diagrams in the plane δ̃-h̃ (phase diagrams c and d in figure <ref>) defines only the J parameter, which under this condition has the meaning of the topology parameter. The topology parameter J determines the type of phase diagram of our chain similarly to the topology parameter |J̃|(Δ + 1) in the corresponding antiferromagnetic chain <cit.>. This means that under the condition J>1, the effective geometric frustration most fully reproduces the manifestations of the usual geometric frustration in the properties of ground state of the spin-1/2 Ising-Heisenberg distorted diamond chain. The effective geometric frustration in our antiferromagnetic-ferromagnetic chain arises due to quantum fluctuations. It happens as follows: if the XX-interaction (quantum fluctuations) is stronger than the ZZ-interaction on the Heisenberg ferromagnetic XXZ bond, the pair of spins of this bond has the lowest energy in the quantum state, which is a symmetric superposition of quasi-classical antiferromagnetic states |↑, ↓⟩ and |↓, ↑⟩ according to (<ref>). 
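As an illustration of how the δ̃-h̃ phase diagram can be read off from the cell eigenenergies listed above, here is a small numerical sketch (not part of the paper) that selects, for given δ̃ and h̃, whichever of the four energies is lowest. The parameter values echo the regime J̃ = -5, Δ = 1.3 (so the topology parameter equals 1.5) used later in the text; the printed grid is only a coarse scan and is no substitute for the analytic phase boundaries.

```python
import numpy as np

def ground_state(J, Delta, delta, h):
    """Ground-state label at (delta, h): compare the four cell energies
    (in units of I_1) quoted in the text for FM, FRI, MD1 and QAF1."""
    E = {
        "FM":    J / 4 + 1 - delta / 2 - 3 * h / 2,
        "FRI":   J / 4 - 1 + delta / 2 - h / 2,
        "MD1":  -J / 4 + J * Delta / 2 - h / 2,
        "QAF1": -J / 4 - 0.5 * np.sqrt((J * Delta) ** 2 + delta ** 2),
    }
    return min(E, key=E.get)

# Coarse scan of the delta-h plane for J = -5, Delta = 1.3.
J, Delta = -5.0, 1.3
for h in np.linspace(3.0, 0.0, 7):
    row = [ground_state(J, Delta, d, h) for d in np.linspace(0.0, 1.0, 9)]
    print(f"h = {h:4.1f}: " + " ".join(f"{s:>4}" for s in row))
```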
The general form of the ground-state phase diagram can be obtained in the J-h̃ plane (figure <ref>). Under this circumstance, the boundary-phase point between the FRI and QAF1 ground states at zero field is given by J_F·Q1 = 1/Δ+1(1 + Ĩ_2 + √(( 1-Ĩ_2 )^2 + 4 Ĩ_2 Δ^2)), where Ĩ_2 = I_2/I_1. It should be noticed that the FRU1 ground state does exist in the relevant ground-state phase diagram along the line given by h̃=0 and J>2 if the complete symmetry Ĩ_2=1 is recovered due to the absence of the QAF1 ground state. Next, let us examine the thermodynamic characteristics as a function of the temperature, the magnetic fields, and the distortion parameter. To illustrate all possible scenarios, we have selected the values of the Heisenberg coupling constants J̃ and Δ in regime J = const in order to fall into the parameter region pertinent to the ground-state phase diagrams shown in figure <ref>c involving all available ground states. The behavior of the magnetization depending on the field for different temperatures and on the temperature for different fields is identical to the behavior of the magnetization of the corresponding antiferromagnetic chain, which was discussed in detail in <cit.>. In particular, the field dependence of magnetization m/m_s at zero temperature, where m_s=3/2 is the saturation magnetization, can have a zero plateau m/m_s=0 corresponding to the QAF1 ground state, an intermediate plateau m/m_ s=1/3 corresponding to the FRI ground state or the MD1 ground state, and the saturation plateau m/m_s=1 corresponding to the FM ground state. The temperature curve of magnetization in the region of medium and high temperatures shifts upwards with the strengthening of quantum fluctuations. The magnetic susceptibility multiplied by the temperature (χ k_B T) as a function of the temperature in the zero field also behaves similarly to the corresponding antiferromagnetic chain <cit.>. In particular, as the temperature tends to zero, the quantity χ k_B T can either exponentially diverge as in quantum ferrimagnetics <cit.>, if δ̃ corresponds to the FRI ground state, or exponentially tends to zero, if δ̃ corresponds to the QAF1 ground state, or takes the value of 1/4, if δ̃ is at the point of coexistence of the states FRI and QAF1. The high-temperature dependence of χ k_B T shifts to a higher susceptibility with the strengthening of quantum fluctuations. Let us consider the temperature dependence of the zero-field specific heat in the case of a strong Heisenberg ferromagnetic interaction J̃=-5 and Δ=1.3 (J=1.5), which is presented in figure <ref> for different values of the distortion parameter δ̃. In order to understand a variation of the temperature dependence of the zero-field specific heat when the parameters of the Heisenberg ferromagnetic interaction are changed, we also show the results for the case, when the Heisenberg interaction in the same regime (J=1.5) is much weaker, namely J̃=-0.5, Δ=4 (figure <ref>). As we can see in figure <ref>, the heat capacity temperature curve has the main round maximum, which is located in the region of average temperatures (around k_B T / I_1 = 0.4), and can have one or two additional low-temperature peaks. This profile of the heat capacity temperature curve in the zero field is analogous to the typical profile of the heat capacity temperature curve in the corresponding antiferromagnetic chain <cit.>. 
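The zero-field specific-heat curves discussed in the following paragraphs can be reproduced numerically from the transfer matrix alone, since c = k_B β² ∂² ln λ_1/∂β². The sketch below (not from the paper) does this by central finite differences, using the same transfer-matrix construction as in the earlier sketch, with k_B = I_1 = 1; the step size and parameter values are illustrative only.

```python
import numpy as np

def log_lambda1(beta, J, Delta, delta, h=0.0):
    """ln(lambda_1) for the dimensionless model of Sec. 3:
    I1 = 1, I2 = 1 - delta, J3 = J, J1 = J2 = J*Delta (units k_B = I_1 = 1)."""
    I1, I2 = 1.0, 1.0 - delta
    J1 = J2 = J * Delta
    def Zk(s1, s2):
        a = np.sqrt((J1 + J2) ** 2 / 16 + (I1 - I2) ** 2 / 4 * (s1 - s2) ** 2)
        b = np.sqrt((J1 - J2) ** 2 / 16 + ((I1 + I2) / 2 * (s1 + s2) - h) ** 2)
        return 2 * np.exp(beta * h / 2 * (s1 + s2)) * (
            np.exp(beta * J / 4) * np.cosh(beta * a)
            + np.exp(-beta * J / 4) * np.cosh(beta * b))
    spins = (-0.5, 0.5)
    V = np.array([[Zk(s1, s2) for s2 in spins] for s1 in spins])
    return np.log(np.linalg.eigvalsh(V).max())

def specific_heat(T, J=-5.0, Delta=1.3, delta=0.5, db=1e-4):
    """c = beta^2 * d^2 ln(lambda_1) / d beta^2 via central finite differences."""
    beta = 1.0 / T
    f = lambda bb: log_lambda1(bb, J, Delta, delta)
    d2 = (f(beta + db) - 2 * f(beta) + f(beta - db)) / db ** 2
    return beta ** 2 * d2

# Zero-field curve for the strong Heisenberg coupling discussed in the text.
temps = np.linspace(0.02, 3.0, 150)
c_curve = [specific_heat(T) for T in temps]
```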
Note that when the Heisenberg interaction is strong, then at very small distortions δ̃≅ 0, the heat capacity can have only one maximum (the main and additional maxima merge) for the entire range of low and medium temperatures [see the curve for δ̃ = 0 in figure <ref>(a)]. An interesting feature of the heat capacity temperature curve in the case of a strong Heisenberg interaction is an additional broad maximum in the region of very high temperatures (somewhat above k_B T / I_1 = 2), which exists for the entire range of values of the distortion parameter δ̃ (figure <ref>). The height of this maximum slightly decreases due to the growth of the distortion parameter δ̃. Analysis of the structure of the energy spectrum of the cell Hamiltonian (<ref>) showed that the strong Heisenberg ferromagnetic interaction significantly distances the group of nearby highest energies E_k 3 (±,±) from the rest of the energies. Based on this, we can conclude that the high-temperature additional maximum is formed by the high-energy excitations of cell spins to the states with the energies E_k 3(±,±). It should be noted that in the temperature dependence of the zero-field specific heat of the corresponding antiferromagnetic chain, no additional high-temperature maximum was found <cit.>, and the temperature curve of the heat capacity of the Ising-Hubbard distorted diamond chain in the case of geometric frustration has a similar additional maximum <cit.>. Let us consider more in detail the transformation of the temperature dependent heat capacity under the influence of the distortion parameter δ̃, which varies in the range of possible values [0,1]. If the distortion parameter increases in the region [0,δ̃_F·Q1), then the main maximum of the heat capacity loses its height and shifts to higher temperatures, and the additional low-temperature peak shifts to lower temperatures. When the parameter δ̃ falls into a certain small neighborhood of the point δ̃_F·Q1, then the additional low-temperature peak splits into two maxima [see the inset in figure <ref>(a)]. Upon further convergence of the parameter δ̃ with the point δ̃_F·Q1, the left-hand low-temperature peak moves to zero temperature and disappears at δ̃=δ̃_F·Q1. When the parameter δ̃ is above the point δ̃_F·Q1, the left-hand low-temperature peak appears again near the zero temperature. As the distortion increases, the left-hand low-temperature peak moves toward higher temperatures and merges with the right-hand low-temperature maximum [see inset on the figure <ref>(b)]. Therefore, the left-hand low-temperature peak is formed by thermal excitations between the FRI and QAF1 states. And the right-hand low-temperature peak is formed by thermal excitations between the FRI and MD1 states and between the QAF1 and MD1 states. With further growth of the δ̃ parameter, the low-temperature peak moves toward higher temperatures, and the main maximum increases its height and moves toward lower temperatures. It should also be noted that the growing distortion in the region (δ̃_F·Q1,1] shifts the heat capacity curve upwards on the temperature segment near the main maximum, but at higher temperatures, this effect is opposite, that is in the region of high temperatures, the growing distortion parameter shifts the heat capacity curve down. 
In the case of a strong Heisenberg ferromagnetic interaction given by the parameters J̃=-5 and Δ=1.3, we discovered another splitting of the low-temperature peak of the zero-field specific heat [see figure <ref>(b)], which occurs when the parameter δ̃ changes in the range of values (0.6, 0.84), which is far from the point δ̃_F·Q1. After the splitting of the low-temperature peak, as the distortion increases, the right-hand low-temperature maximum approaches the main maximum and merges with it at δ̃=0.84. By analyzing the dependence of the energies of the QAF1, FRI, and MD1 states on the distortion parameter and Heisenberg interaction parameters, it can be established that the left-hand low-temperature peak is formed by thermal excitations between the QAF1 and MD1 states, and the right-hand low-temperature maximum is formed by thermal excitations between the QAF1 and FRI states and between the MD1 and FRI states. It should be noted that this splitting of the low-temperature peak of the heat capacity occurs with a strong Heisenberg interaction, which ensures the necessary proximity of the energies of the QAF1 and MD1 states, and a rather strong distortion, which ensures the necessary distance between the energy of the FRI state and the energy of the ground state QAF1. A weak Heisenberg interaction, for example J̃=-0.5, Δ=4 (J=1.5), does not give such a splitting of the low-temperature maximum [figure <ref>(b)]. Last but not least, let us examine the temperature variations of the specific heat in the presence of non-zero external magnetic field. In particular, we study the effect of a weak magnetic field on the temperature dependence of the heat capacity for a distortion very close to the point δ̃_F·Q1, at which the temperature curve of the zero-field heat capacity has four maxima. For this purpose, we consider the effect of a weak magnetic field on the temperature dependence of heat capacity in two cases: in the first case we start from the point (0,δ̃_F·Q1+ 0.001) in the region of the ground state QAF1 [figure <ref>(c)], and in the second case we start from the point (0,δ̃_F·Q1- 0.001) in the region of the FRI ground state [figure <ref>(c)]. In the first case (see figure <ref>), the additional low-temperature heat capacity maximum, which is formed by thermal excitations between the FRI and QAF1 states, splits when the magnetic field increases [figure <ref>(a)] and the temperature dependence of the heat capacity has five maxima. After splitting, the left-hand peak shifts to lower temperatures and disappears when the value of the magnetic field corresponds to the point on the line of coexistence of the QAF1 and FRI states [figure <ref>(c)]. As soon as the point (h̃,δ̃_F·Q1+ 0.001) moves into the FRI state region [figure <ref>(c)], an additional peak occurs near zero temperature [figure <ref>(b)]. With further strengthening of the magnetic field, this peak shifts toward higher temperatures and merges with the neighboring maximum [figure <ref>(b)]. The maximum formed as a result of the merger continues to shift toward higher temperatures as the magnetic field increases and merges with the neighboring maximum at h̃⩾0.004 [figure <ref>(b)]. In the second case (figure <ref>), the additional low-temperature heat capacity maximum does not split in a weak magnetic field. It only deforms and shifts towards higher temperatures when the magnetic field increases. 
At the value of the magnetic field h̃=0.002, the additional low-temperature heat capacity maximum merges with the neighboring maximum (figure <ref>). In order to understand the mechanism of influence of a weak magnetic field on the temperature dependence of the heat capacity in the case when the distortion is very close to the point δ̃_F·Q1, consider the influence of a weak magnetic field on the energy of the FRI state (the energy of the QAF1 state does not depend on the magnetic field). In a zero magnetic field, the ferrimagnetic state FRI has the same energy as the ferrimagnetic state FRI1, |FRI1⟩ = ∏_k=1^N| + ⟩_k ⊗ |↓, ↓⟩_k, and in a weak magnetic field, the energies of the FRI and FRI1 states are different. As a result of the splitting of the energies of the ferrimagnetic states FRI and FRI1 by a very weak magnetic field, the low-temperature maximum, which in a zero magnetic field is formed by thermal excitations between the states FRI and QAF1, can split under certain conditions. In the first case (figure <ref>), the low-temperature maximum splits since a very weak magnetic field changes the QAF1 ground state to the FRI ground state [figure <ref>(c)]. In the second case (figure <ref>), the low-temperature maximum does not split, because a very weak magnetic field does not change the FRI ground state [figure <ref>(c)]. § CONCLUSIONS We have rigorously examined the ground state and thermodynamics of the spin-1/2 Ising-Heisenberg distorted diamond chain within the transfer-matrix method. Our attention was focused on how the distortion and the ferromagnetic XXZ Heisenberg interaction affect the magnetization process and specific heat of the spin-1/2 Ising-Heisenberg distorted diamond chain with antiferromagnetic Ising interactions and a ferromagnetic XXZ Heisenberg interaction. In this case, there is no geometric spin frustration in the given chain. We show that quantum fluctuations on a Heisenberg ferromagnetic bond create an effective geometrical spin frustration when a pair of spins of this bond is in an energetically favorable quantum antiferromagnetic state. Then, the ground state and thermodynamic characteristics of our antiferromagnetic-ferromagnetic chain have features similar to those of the corresponding antiferromagnetic chain with geometric spin frustration <cit.>. The ground-state phase diagram of the spin-1/2 Ising-Heisenberg distorted diamond chain with antiferromagnetic Ising interactions and a ferromagnetic Heisenberg interaction consists in total of four different ground states: the fully magnetized state FM, the ferrimagnetic state FRI, the monomer-dimer state MD1, and the quantum antiferromagnetic state QAF1. The quantum ground states MD1 and QAF1 emerging for the ferromagnetic Heisenberg interaction differ from the analogous quantum ground states emerging for the antiferromagnetic Heisenberg interaction only in that the Heisenberg spin pairs form symmetric rather than antisymmetric quantum superpositions of the two antiferromagnetic states. The ground-state phase diagram in the δ̃-h̃ plane can have four different topologies depending on the ferromagnetic Heisenberg coupling parameters. The second, third and fourth topologies of this ground-state phase diagram, which are realized under the condition 2/(1+Δ)<J, are the same as the three topologies of the ground-state phase diagram of the antiferromagnetic chain <cit.>.
The third and fourth topologies of the ground-state phase diagram, which are realized under the condition 1<J, are defined by the topology parameter J, which is analogous to the topology parameter in the antiferromagnetic chain <cit.>. Therefore, under the condition 2/(1+Δ)<J, an effective geometric frustration occurs in our antiferromagnetic-ferromagnetic chain, which is most fully manifested in the properties of the ground state under the condition 1<J. The effective geometrical spin frustration affects the magnetization, magnetic susceptibility, and heat capacity of an antiferromagnetic-ferromagnetic chain similarly to the geometrical spin frustration in an antiferromagnetic chain <cit.>. In particular, the magnetization curve of the spin-1/2 Ising-Heisenberg distorted diamond chain may involve at most two different intermediate plateaus at zero and 1/3 of the saturation magnetization. The distortion parameter is responsible for a rich variety of temperature dependencies of the specific heat, which may display one or two low-temperature peaks in addition to the main round maximum observable at medium temperatures. The strong Heisenberg interaction is responsible for an additional broad maximum in the temperature dependence of heat capacity in the region of very high temperatures. The combination of strong Heisenberg interaction and significant distortion can split the peak in the low-temperature heat capacity. If the distortion parameter is quite close to the right of the point δ̃_F·Q1, then a very weak magnetic field can split the low-temperature maximum of the heat capacity and then the temperature dependence of the heat capacity has five maxima. The physical origin of all observed low-temperature and hight-temperature peaks of the specific-heat has been clarified on the grounds of relevant thermal excitations. It is worthwhile to remark that the investigated spin system reduces to the spin-1/2 Ising-Heisenberg doubly decorated chain in the particular case I_2=0 (δ̃=1) and the symmetric spin-1/2 Ising-Heisenberg diamond chain in the other particular case I_1=I_2 (δ̃=0). § ACKNOWLEDGEMENTS The author is grateful to T. Verkholyak for the discussion and useful remarks. 0.6ex 10 s5 Syozi I., Prog. Theor. Phys., 1951, 6, 306–308, 10.1143/ptp/6.3.306. fis59 Fisher M. E., Phys. Rev., 1959, 113, 969–981, 10.1103/PhysRev.113.969. roj09 Rojas O., Valverde J. S., de Souza S. M., Physica A, 2009, 388, 1419–1430, 10.1016/j.physa.2008.12.063. str10 Strečka J., Phys. Lett. A, 2010, 374, 3718–3722, 10.1016/j.physleta.2010.07.030. bell13 Bellucci S., Ohanyan V., Eur. Phys. J. B, 2013, 86, 446 (12 pages), 10.1140/epjb/e2013-40336-4. bax82 Baxter R. J., Exactly Solved Models in Statistical Mechanics, Academic Press, 1982. aps15 Strečka J., Jaščur M., Acta Phys. Slovaca, 2015, 65, 235–367. s0 Strečka J., Jaščur M., Hagiwara M., Minami K., Czech. J. Phys., 2004, 54 (Suppl 4), 583–586, 10.1007/s10582-004-0149-5. s3 Strečka J., Jaščur M., Hagiwara M., Minami K., Narumi Y., Kindo K., Phys. Rev. B, 2005, 72, 024459 (11 pages), 10.1103/PhysRevB.72.024459. exp10 Van den Heuvel W., Chibotaru L. F., Phys. Rev. B, 2010, 82, 174436 (14 pages), 10.1103/PhysRevB.82.174436. sah12 Sahoo S., Sutter J.-P., Ramasesha S., J. Stat. Phys., 2012, 147, 181–193, 10.1007/s10955-012-0460-7. str12 Strečka J., Hagiwara M., Han Y., Kida T., Honda Z., Ikeda M., Condens. Matter Phys., 2012, 15, 43002 (11 pages), 10.5488/CMP.15.43002. han13 Han Y., Kida T., Ikeda M., Hagiwara M., Strečka J., Honda Z., J. Korean Phys. 
Soc., 2013, 62, 2050–2053, 10.3938/jkps.62.2050. oha14 Bellucci S., Ohanyan V., Rojas O., Europhys. Lett., 2014, 105, 47012 (6 pages), 10.1209/0295-5075/105/47012. verkh13 Verkholyak T., Strečka J., Phys. Rev. B, 2013, 88, 134419 (9 pages), 10.1103/PhysRevB.88.134419. strec20 Strečka J., Gálisová L., Verkholyak T., Condens. Matter Phys., 2020, 23, 43708 (11 pages), 10.5488/CMP.23.43708. galis22 Gálisová L., J. Magn. Magn. Mater., 2022, 561, 169721 (9 pages), 10.1016/j.jmmm.2022.169721. jes11 Jeschke H., Opahle I., Kandpal H., Valentí R., Das H., Saha-Dasgupta T., Janson O., Rosner H., Brühl A., Wolf B., Lang M., Richter J., Hu Sh., Wang X., Peters R., Pruschke T., Honecker A., Phys. Rev. Lett., 2011, 106, 217201 (5 pages), 10.1103/PhysRevLett.106.217201. hon11 Honecker A., Hu Sh., Peters R., Richter J., J. Phys.: Condens. Matter, 2011, 23, 164211 (9 pages) 10.1088/0953-8984/23/16/164211. kik04 Kikuchi H., Fujii Y., Chiba M., Mitsudo S., Idehara T., Kuwai T., J. Magn. Magn. Mater., 2004, 272–276, 900–901, 10.1016/j.jmmm.2003.12.619. ki05l Kikuchi H., Fujii Y., Chiba M., Mitsudo S., Idehara T., Tonegawa T., Okamoto K., Sakai T., Kuwai T., Ohta H., Phys. Rev. Lett., 2005, 94, 227201 (4 pages), 10.1103/PhysRevLett.94.227201. ki05ptp Kikuchi H., Fujii Y., Chiba M., Mitsudo S., Idehara T., Tonegawa T., Okamoto K., Sakai T., Kuwai T., Kindo K., Matsuo A., Higemoto W., Nishiyama K., Horvatić M., Bertheir C., Prog. Theor. Phys. Suppl., 2005, 159, 1–10, 10.1143/PTPS.159.1. rul08 Rule K. C., Wolter A. U. B., Süllow S., Tennant D. A., Brühl A., Köhler S., Wolf B., Lang M., Schreuer J., Phys. Rev. Lett., 2008, 100, 117202 (4 pages), 10.1103/PhysRevLett.100.117202. can06 Čanová L., Strečka J., Jaščur M., J. Phys.: Condens. Matter, 2006, 18, 4967–4984, 10.1088/0953-8984/18/20/020. can09 Čanová L., Strečka J., Lučivjanský T., Condens. Matter Phys., 2009, 12, 353–368, 10.5488/CMP.12.3.353. roj11 Rojas O., de Souza S. M., Ohanyan V., Khurshudyan M., Phys. Rev. B, 2011, 83, 094430 (9 pages), 10.1103/PhysRevB.83.094430. lis3 Lisnii B. M., Ukr. J. Phys., 2011, 56, 1237–1245, [Ukr. Fiz. Zh., 2011, 56, 1238–1246 (in Ukrainian)], 10.15407/ujpe56.11.1237. ana12 Ananikian N. S., Ananikyan L. N., Chakhmakhchyan L. A., Rojas O., J. Phys.: Condens. Matter, 2012, 24, 256001 (9 pages), 10.1088/0953-8984/24/25/256001. roj12 Rojas O., Rojas M., Ananikian N. S., de Souza S. M., Phys. Rev. A, 2012, 86, 042330 (8 pages), 10.1103/PhysRevA.86.042330. ana13 Ananikian N., Hovhannisyan V., Physica A, 2013, 392, 2375–2383, 10.1016/j.physa.2013.01.040. gal13 Gálisová L., Phys. Status Solidi B, 2013, 250, 187–195, 10.1002/pssb.201248260. gal14 Gálisová L., Condens. Matter Phys., 2014, 17, 13001 (10 pages), 10.5488/CMP.17.13001. anan14 Ananikian N. S., Hovhannisyan V. V., Kenna R., Physica A, 2014, 396, 51–60, 10.1016/j.physa.2013.11.017. pssb14 Lisnyi B., Strečka J., Phys. Status Solidi B, 2014, 251, 1083–1095, 10.1002/pssb.201350393. ohanyan15 Ohanyan V., Rojas O., Strečka J., Bellucci S., Phys. Rev. B, 2015, 92, 214423 (13 pages), 10.1103/PhysRevB.92.214423. derzh15 Derzhko O., Krupnitska O., Lisnyi B., Strečka J., EPL, 2015, 112, 37002 (6 pages), 10.1209/0295-5075/112/37002. verkh16 Verkholyak T., Strečka J., Phys. Rev. B, 2016, 94, 144410 (13 pages), 10.1103/PhysRevB.94.144410. verkh21 Verkholyak T., Strečka J., Phys. Rev. B, 2021, 103, 184415 (10 pages), 10.1103/PhysRevB.103.184415. verkh22 Verkholyak T., Strečka J., SciPost Phys., 2022, 12, 056 (34 pages), 10.21468/SciPostPhys.12.2.056. yam99 Yamamoto S., Phys. Rev. 
B, 1999, 59, 1024–1027, 10.1103/PhysRevB.59.1024. lis11-1 Lisnii B. M., Low Temp. Phys., 2011, 37, 296–304, 10.1063/1.3592221.
http://arxiv.org/abs/2406.17717v1
20240625165932
Transverse surfaces and pseudo-Anosov flows
[ "Michael P. Landry", "Yair N. Minsky", "Samuel J. Taylor" ]
math.GT
[ "math.GT", "math.DS" ]
Transverse surfaces and pseudo-Anosov flows Michael P. Landry, Department of Mathematics, Saint Louis University, michael.landry@slu.edu Yair N. Minsky, Department of Mathematics, Yale University, yair.minsky@yale.edu Samuel J. Taylor, Department of Mathematics, Temple University, samuel.taylor@temple.edu Landry was partially supported by NSF postdoctoral fellowship DMS-2013073. Minsky was partially supported by DMS-2005328. Taylor was partially supported by DMS-2102018 and a Sloan Research Fellowship. § ABSTRACT Let ϕ be a transitive pseudo-Anosov flow on an oriented, compact 3-manifold M, possibly with toral boundary. We characterize the surfaces in M that are (almost) transverse to ϕ. When ϕ has no perfect fits (e.g. ϕ is the suspension flow of a pseudo-Anosov homeomorphism), we prove that any Thurston-norm minimizing surface S that pairs nonnegatively with the closed orbits of ϕ is almost transverse to ϕ, up to isotopy. This answers a question of Cooper–Long–Reid. Our main tool is a correspondence between surfaces that are almost transverse to φ and those that are relatively carried by any associated veering triangulation. The correspondence also allows us to investigate the uniqueness of almost transverse position, to extend Mosher's Transverse Surface Theorem to the case with boundary, and more generally to characterize when relative homology classes represent Birkhoff surfaces. July 1, 2024 ================ § INTRODUCTION §.§ Overview Let ϕ be a transitive pseudo-Anosov flow on a compact, oriented 3-manifold M, possibly with toral boundary. Important examples include the geodesic flow on the unit tangent bundle of a closed hyperbolic surface and the suspension flow on the mapping torus of a pseudo-Anosov homeomorphism. A pseudo-Anosov flow is called circular if it is such a suspension flow, up to isotopy and reparametrization. A basic problem is to classify the surfaces in M that are transverse to a pseudo-Anosov flow, up to isotopy. Mosher's Transverse Surface Theorem <cit.> from the early 1990s answers the related question of which homology classes have representatives almost transverse to the flow φ. Here, almost transverse means that a mild operation must first be performed on the singular orbits of the flow; see <Ref> for details. When M is closed, Mosher proves that if a surface S has nonnegative algebraic intersection with each closed orbit of φ, then S is homologous to a surface almost transverse to φ. Although Mosher's theorem is quite powerful, there are situations in which one needs to know whether a given surface S is transverse to φ up to isotopy rather than homology (see e.g. the discussion following <Ref>). Thirty years ago Cooper–Long–Reid asked, in the special case where ϕ is circular and M is closed: which surfaces in M can be made transverse to ϕ up to isotopy <cit.>? Among our results, we give a complete answer to this question in the more general setting of transitive pseudo-Anosov flows on compact manifolds. An obvious necessary condition for S to be almost transverse to ϕ up to isotopy is that S have nonnegative algebraic intersection with each closed orbit. A more subtle one is that S be taut, meaning it realizes the Thurston norm of its homology class and has no nullhomologous collection of components (<cit.> and <Ref>).
When specialized to the setting of Cooper–Long–Reid's question, our results say these two necessary conditions are in fact sufficient for almost transversality (<Ref>); we also describe exactly when almost transversality can be promoted to transversality (<Ref>). Our characterization follows from a general treatment of almost transversality for surfaces with respect to an arbitrary transitive pseudo-Anosov flow ϕ, which can be informally summarized as follows. See the next section for the formal statements. * Characterization via veering triangulations: Our primary technical result is that, up to isotopy, a surface S is almost transverse to ϕ if and only if it is relatively carried by any veering triangulation associated to ϕ (<Ref>). The point is that both the veering triangulation and the relatively carried condition allow us to study transversality in a way which is purely combinatorial. Moreover, the surface is honestly transverse to ϕ if and only if it has a specific type of carried position with respect to the veering triangulation. * Transverse surface theorems: Using the correspondence above, we prove a general transverse surface theorem that states that after isotopy S is almost transverse if and only if it is taut, algebraically nonnegative on closed orbits, and can be isotoped to have no negative intersections with a specific finite collection of orbits (<Ref>). This last condition is vacuous for a large class of pseudo-Anosov flows (including circular flows), but we show by example that it is necessary in general (<Ref>). * Uniqueness of transverse position: We show that the position of an almost transverse surface is essentially unique, up to isotopy along the flow. This includes the nontrivial fact that there is a unique way to minimally modify the flow (in a certain combinatorial sense) to make the surface transverse (<Ref>). * Existence of Birkhoff surfaces: Plenty of pseudo-Anosov flows do not have (almost) transverse surfaces (e.g. geodesic flow of a hyperbolic surface). However, all pseudo-Anosov flows admit Birkhoff surfaces. By extending Mosher's transverse surface theorem to manifolds with toral boundary (<Ref>) we give a general characterization for which relative homology classes represent (almost) Birkhoff surfaces for ϕ (<Ref>). This also extends Fried's characterization of relative classes representing Birkhoff sections (i.e. Birkhoff surfaces meeting every closed orbit in bounded time). As mentioned above, our results include the case of pseudo-Anosov flows on manifolds with (toral) boundary. See <Ref> for a precise definition. There are two important reasons for this, beyond simply generalizing from the closed case. First, the application to Birkhoff surfaces stated above is fairly straightforward after proving the transverse surface theorem (<Ref>) for manifolds with boundary. Second, such flows can be used to understand the structure of the Thurston norm on manifolds with boundary in a way that has only previously been studied in the closed case. See <Ref> for details. §.§ Results on transverse surfaces Throughout the introduction we work with a pseudo-Anosov flow φ on a compact, oriented 3–manifold M. We allow ∂ M ≠∅ in which case each component of ∂ M is a torus. See <Ref> for the definitions. Our first result characterizes almost transverse surfaces for the class of pseudo-Anosov flows without perfect fits. Such flows were first studied by Fenley <cit.> and include the class of circular (i.e. suspension) flows mentioned above. 
For our purposes here, it suffices to know that φ has no perfect fits if and only if there are no anti-homotopic orbits, i.e. closed orbits γ_1,γ_2 with γ_1 homotopic to -γ_2. See <Ref>, and particularly <Ref>, for the precise definition and further discussion. The following lemma says that everything “seen" by these flows at the level of homology is also seen at the level of isotopy: Let ϕ be a pseudo-Anosov flow with no perfect fits on M, possibly with boundary. Then a properly embedded, oriented surface S is almost transverse to ϕ, up to isotopy, if and only if S is taut and pairs nonnegatively with the closed orbits of ϕ. See <Ref> and <Ref>. We remark that in general the classes in H_2(M,∂ M) that pair nonnegatively with the closed orbits of ϕ form a closed cone, which under the hypotheses of <Ref>, is exactly the cone over a closed face of the Thurston norm ball. This was proved by Mosher when ∂ M = ∅ and we give a proof in the case with boundary (<Ref>). Hence, <Ref> implies that the surface S is almost transverse to φ up to isotopy if and only if e_φ([S]) = χ(S), where e_φ is the Euler class of the flow, suitably defined, and χ is Euler characteristic. See <Ref> for additional details. <Ref> is a key ingredient in <cit.> where we prove that every atoroidal endperiodic map is isotopic to the first return map of a circular pseudo-Anosov flow to a leaf of a transverse depth one foliation. To complement <Ref>, we show by explicit example in <Ref> that the hypothesis of no perfect fits cannot be entirely dropped. However, we can characterize transverse surfaces for general transitive pseudo-Anosov flows using an auxiliary (noncanonical) collection of closed orbits. For the statement, a finite collection of closed orbits κ kills the perfect fits of φ if there are no anti-homotopic orbits in the punctured manifold M κ; again see <Ref>. The following is a restatement of <Ref>. Let ϕ be a transitive pseudo-Anosov flow on M, and let κ be any finite collection of closed regular orbits of ϕ that kills its perfect fits. Then an oriented surface S in M is isotopic to a surface that is almost transverse to ϕ if and only if * S is taut, * S has nonnegative (homological) intersection with the closed orbits of ϕ, and * the algebraic and geometric intersection numbers of S with κ are equal. Given a taut surface S that has nonnegative algebraic intersection with each closed orbit of ϕ, <Ref> gives a simple criterion for whether S is almost transverse to ϕ up to isotopy: one need only check whether S can be isotoped so that all its intersection points with the oriented link κ, if any, are positive. Our general technique, which handles pseudo-Anosov flows on manifolds with boundary, has applications to the more general situation of Birkhoff surfaces. A Birkhoff surface (sometimes called a partial section) of ϕ is an immersed surface Σ in M whose boundary components cover closed orbits of ϕ and whose interior is embedded and transverse to ϕ. More generally, a Birkhoff surface of a dynamic blowup of ϕ will be called an almost Birkhoff surface of ϕ. The next theorem generalizes Fried's result on Birkhoff sections <cit.> and seems new even for geodesic flow on hyperbolic surfaces. See <Ref> for statement that includes conditions for when the `almost' can be dropped. Let ϕ be a transitive pseudo-Anosov flow and let κ be any collection of closed orbits. Then a class η∈ H_2(M, κ) = H^1(M κ) is represented by an almost Birkhoff surface Σ in M with ∂Σ⊂κ if and only if it is nonnegative on closed orbits of M κ. 
Finally, we turn to the sense in which `almost transverse' position is essentially unique. For this, note that the statement "S is almost transverse to the pseudo-Anosov flow ϕ up to isotopy" contains two potential ambiguities. The first is that it does not specify a particular dynamic blowup to which S is transverse up to isotopy. The second is that it says nothing about the way in which S can be realized transversely to a given dynamic blowup ϕ^♯—can one interpolate between any two such positions by flowing along ϕ^♯, or are there multiple transverse positions such that any isotopy between them must move through non-transverse surfaces? These ambiguities are resolved with the following result, a combination of <Ref> and <Ref>. Let ϕ be a pseudo-Anosov flow on M and let S_1 and S_2 be isotopic properly embedded surfaces which are minimally transverse to dynamic blowups ϕ_1^♯ and ϕ_2^♯, respectively. Then ϕ_1^♯ and ϕ_2^♯ are combinatorially equivalent. Moreover, if the surfaces are transverse to a single blowup ϕ^♯, then S_1 and S_2 are isotopic along flowlines of ϕ^♯. We note here that `combinatorial equivalence' is a slightly weaker notion than orbit equivalence; see <Ref>. The proof of the theorem requires a detailed analysis of the flow space shadows of transverse surfaces that is carried out in <Ref>. §.§ Veering triangulations as the main tool As before, let φ be a transitive pseudo-Anosov flow on M and let κ be a finite collection of closed orbits that kills its perfect fits. Such a collection exists by work of Fried <cit.> and Brunella <cit.>; see also Tsang <cit.>. We let κ_s denote the union of κ and any singular orbits or boundary components of M. Our primary technical tool is a canonical veering triangulation on the manifold M κ_s which was essentially constructed by Agol and Guéritaud. In our previous work (<cit.>), we show that the 2-skeleton of this triangulation is a branched surface properly embedded in M κ_s that is positively transverse to φ. This is exactly what makes the veering triangulation useful in understanding transverse surfaces. Given a properly embedded surface S in M, one can ask whether S can be put into a `relatively carried' position with respect to the triangulation. In essence, this means S is carried by the branched surface away from κ_s and intersects a neighborhood of κ_s efficiently. See <Ref> for the precise definitions and details. In <cit.> it was remarked that "being [relatively] carried by τ^(2) is a combinatorial version of almost transversality." Our main result relating the veering triangulation to transverse surfaces makes this precise: Let φ be a transitive pseudo-Anosov flow on M and let τ be any dual veering triangulation. For any surface S in M, S is almost transverse to φ if and only if S is relatively carried by τ. Moreover, S is honestly transverse to φ if and only if it is relatively carried by τ without complementary annuli. See <Ref> and <Ref>. §.§ Basic conventions In this paper all the 3-manifolds we consider are oriented, and all homology and cohomology groups have coefficients in ℝ. If B is a subset of a metric space A, we denote the completion of A-B in the induced path metric by A B. §.§ Acknowledgments We thank Chi Cheuk Tsang for his detailed comments on an earlier draft. § VEERING TRIANGULATIONS AND RELATIVELY CARRIED SURFACES We begin with some background on veering triangulations. Our goal is to establish <Ref>, which itself extends the main theorem from <cit.>.
§.§ Index and train tracks A surface with cusps is a surface S with a discrete collection of points in S called cusps which are modeled on the point (0,0)∈{(x,y)∈ℝ^2| y≥√(|x|)}. The index of a compact surface with cusps S is ind(S)=2χ(S)-cusps(S), where cusps(S) denotes the number of cusps of S. Recall that a train track in a surface S is a 1-complex in S with an everywhere-defined tangent space. This means that the edges meeting a given vertex are "combed" so as to be tangent. The edges of a train track are called branches and the vertices are called switches. In this paper all the train tracks we consider will lie in the interior of surfaces. Given a train track t⊂ S, a patch of t is a component of S t. A patch naturally has the structure of a surface with cusps, with the cusps arising from the switches of t. The index of these patches is additive, in the sense that 2χ(S)=ind(S)=∑ ind(p) where the sum is taken over all patches p of t. §.§ Veering triangulations An ideal tetrahedron is a tetrahedron minus its vertices, and an ideal triangulation of a (necessarily noncompact) 3-manifold Z is a cellular decomposition of Z into ideal tetrahedra. A veering tetrahedron Δ is an ideal tetrahedron together with the following extra data: * Two faces are cooriented outward, and two are cooriented inward. These are called the top and bottom faces of Δ, respectively. * The edge along which the top faces meet is labeled π, and so is the edge along which the bottom faces meet. These are called the top and bottom edges of Δ, respectively. * The remaining edges, called equatorial edges, are labeled 0. * The equatorial edges are colored red or blue in an alternating fashion (see <Ref>). In some of the literature the red and blue edges are called right and left veering, respectively, and the property of being right or left veering is called the veer of an edge. Up to oriented equivalence, there are two veering tetrahedra in ℝ^3. The "standard" veering tetrahedron is distinguished by the property that when viewed from above and projected to the plane as in <Ref>, the red edges have positive slope and the blue edges have negative slope. A veering triangulation of an oriented 3-manifold Z is an ideal triangulation of Z together with an assignment of veer to each edge and coorientation to each face, such that: * each ideal tetrahedron is equivalent to the standard veering tetrahedron via an orientation-preserving map, and * the sum of the 0 and π labels around each edge of the ideal triangulation is 2π. We will always think of the 0 or π label of an edge of a veering tetrahedron as describing the dihedral angle for that edge. Face coorientations, together with the condition that edge labels sum to 2π around each edge, endow the 2-skeleton with the structure of a cooriented "branched surface" as depicted in <Ref>. We now briefly discuss branched surfaces. §.§ Branched surfaces and carrying A branched surface is the 2-dimensional analogue of a train track: it is a 2-complex with a continuously varying tangent plane at each point, locally modeled on the quotient of a finite stack of disks by identifying half disks of adjacent disks, requiring that the inclusion of each disk be smooth. If B is a branched surface, then its branch locus is the union of all nonmanifold points of B, and the sectors of B are the components of the complement in B of the branch locus. Let B be a branched surface living in a 3-manifold. A standard neighborhood of B is a small tubular neighborhood of B foliated in a standard way by intervals (see <cit.>). This foliation is called the vertical foliation of N(B).
We say that B is cooriented if the vertical foliation is oriented. If Λ is a 2-dimensional lamination embedded in N(B) transversely to the vertical foliation, we say Λ is carried by B (note Λ could simply be a compact surface). If B is cooriented and Λ is cooriented, when we say that Λ is carried by B we additionally assume that each leaf of the vertical foliation passes from the negative side to the positive side of Λ at each intersection point. We refer the reader to <cit.> for more details on branched surfaces and carrying. There is a homotopy equivalence N(B)→ B, called the collapsing map, that collapses the leaves of the vertical foliation. If S is a surface carried by B, the collapse of S is the image of S under the collapsing map. §.§ Relative branched surfaces and relative veering triangulations Let M be a compact oriented 3-manifold with toral boundary. A tube system U for M is the union of a small closed tubular neighborhood of ∂M with a small closed tubular neighborhood of a link L in M. Here it is possible that ∂M=∅ or L=∅. The solid torus components of U are called solid tubes and the components homeomorphic to T^2× I are called hollow tubes. A veering triangulation of M relative to U is a veering triangulation of Z=int(M)-L such that * the 2-skeleton of τ intersects U transversely, and * τ^(2)∩ U is diffeomorphic to (τ^(2)∩∂U)× [0,∞). We remark that for any veering triangulation on Z, a tube system of M can be chosen to satisfy these conditions. We also use the terminology "relative veering triangulation of M with tube system U." Let τ be a relative veering triangulation of M with tube system U. We set the notation τ_U=τ^(2)∩ M U. We see that τ_U is a branched surface in M_U:=M U. There is an induced cellular decomposition of M_U whose cells are truncated veering tetrahedra, as shown on the left side of <Ref>. Each tip of a truncated veering tetrahedron, corresponding to an ideal vertex, is combinatorially a triangle with cooriented sides. However, the veering structure endows each tip with the smooth structure of a disk with 2 cusps. The 1-skeleton of the induced tessellation of ∂M_U is a train track ∂τ_U= τ_U∩∂M_U whose patches are the tips of truncated veering tetrahedra. A tip is called upward if it has two sides cooriented outward, and downward otherwise. The definition of veering triangulations implies, with some work, that the union of all upward tips in each component of ∂M_U is a collection of annuli called upward ladders, and symmetrically for downward tips and downward ladders. See the right side of <Ref>. The upward and downward ladders meet along a collection of curves called ladderpole curves. For a given component T of ∂M_U, the slope of the ladderpole curves on T is called the ladderpole slope for T. This structure was first observed in <cit.>; see that paper for more details. A properly embedded surface S⊂ M is relatively carried by τ if S∩ (M U) is carried by the branched surface τ_U in the standard sense of branched surfaces, and each component of S∩ U is either a meridional disk in a solid tube or an annulus with at most one boundary component on ∂M. There are two types of possible annuli in S ∩ U; those that meet ∂ M and those that do not. We call the latter type ladderpole annuli since both their boundary components have ladderpole slope in ∂ M_U. Let N(τ_U) be a standard neighborhood of τ_U in M U. The collapsing map N(τ_U)→τ_U extends over U by a map that is the identity away from a small neighborhood of ∂M_U⊂ M.
Hence we can speak of the collapse of a surface relatively carried by τ_U—this will be the union of some sectors of τ_U with some disks and annuli lying in U. §.§ Cusped tori Let D be a compact disk with n>0 cusps in its boundary, and let f D→ D be an orientation-preserving diffeomorphism. The mapping torus T of f is called a cusped solid torus (see <Ref>), and the suspensions of the cusps of D under f are called the cusp curves of T. The index of T is defined to be the index of D, that is ind(T)=2χ(D)-n=2-n. We say that n is the number of prongs of T, denoted prong(T). In particular prong(T)=2-ind(T). Note that prong(T) is not necessarily the same as the number of cusp curves; in general the number of cusp curves is equal to prong(T) divided by the order of the permutation obtained by restricting f to the cusps of D. A cusped torus shell is the object obtained by modifying the definition of a cusped solid torus by replacing the disk D with an annulus with one smooth boundary component and one with n cusps. The cusp curves of cusped torus shells are defined as for cusped solid tori. We do not define the index of a cusped torus shell because there is no canonical choice of meridian. §.§ Stable and unstable branched surfaces Let M be a compact oriented 3-manifold, and let τ be a veering triangulation of M relative to a tube system U. There are two branched surfaces with generic branch locus associated to τ. The stable branched surface, denoted B^s, intersects each veering tetrahedron as shown in the top row of <Ref>. The unstable branched surface, denoted B^u, intersects each veering tetrahedron as shown in the bottom row of <Ref>. In this paper we are not concerned with the positions of B^s and B^u relative to each other—it is enough to know that they intersect each tetrahedron as described. We refer the reader to <cit.> for additional details. The components of M B^s and M B^u are cusped solid tori and cusped torus shells, each of which contains a single solid or hollow tube, respectively. We will refer to these as the cusped tori of B^s and B^u. Let U_0 be a component of U, and let T_0^s and T_0^u be the cusped tori of B^s and B^u containing U_0. Let t_0 be the tessellation induced by the intersection of τ with ∂U_0. Then as described in <cit.>, the cusp curves of T_0^s are in canonical bijection with the cores of upward ladders of t_0. Similarly the cusp curves of T_0^u are in canonical bijection with the cores of downward ladders. In particular T_0^s and T_0^u are diffeomorphic. §.§ Spheres, disks, tori, and annuli We continue to assume τ is a veering triangulation of M relative to U. The existence of τ restricts the types of properly embedded surfaces with nonnegative Euler characteristic that M can contain. As observed in <cit.>, the branched surfaces B^s and B^u are laminar and therefore carry essential laminations; it follows that M is irreducible <cit.>. (For a given boundary component of M, the degeneracy slope of both laminations is equal to the ladderpole slope of the veering triangulation). The next lemma lets us analyze disks and tori. Let M be a 3-manifold admitting a relative veering triangulation τ. Let S be any properly embedded surface in M. Then S can be isotoped rel ∂M so that S∩ B^s and S∩ B^u have no patches of positive index disjoint from ∂S. This is <cit.> in the additional generality of our setting, i.e. S may have boundary. The same proof (which uses irreducibility of M) applies. Now suppose that D is a properly embedded disk in M.
Applying <Ref>, we can isotope D to have no patches of positive index disjoint from D. Euler characteristic forces D∩ B^s=∅, so D lies in a cusped torus shell of B^s and is isotopic into M. Now we consider annuli and tori. We say that the relative veering triangulation τ is strict if each cusped solid torus of B^s has negative index. By the discussion above this is equivalent to requiring the same of B^u, or requiring that the meridional curve for every solid tube has geometric intersection number ≥6 with the union of all ladderpole curves. Suppose that τ is strict, and that A is a properly embedded annulus in M. Note that no patch of A∩ B^s meeting A can have positive index. After an application of <Ref>, all patches of A∩ B^s disjoint from A will have nonpositive index. This implies that all patches of A∩ B^s have index 0. By strictness, none of the patches disjoint from A is a meridional disk for a solid tube; hence A may be isotoped to lie outside of U away from a regular neighborhood of A, unless A already lies entirely inside a component of U. Since M- U admits a complete hyperbolic metric by <cit.>, A must be homotopic rel A into M. Continuing to assume τ is strict, suppose that T is a torus embedded in M. As with annuli, we can apply <Ref> to arrange for T∩ B^s to have no patches of positive index. In this case, strictness implies that T can be isotoped to lie outside of U. As above, M-U admits a complete hyperbolic metric. Hence, T is either parallel to a component of M or T is compressible in M-U, hence also in M. We summarize the above discussion in the following proposition. If τ is a relative veering triangulation of a 3-manifold M, then: * M is irreducible and -irreducible, and * if τ is strict, then M is also atoroidal and anannular. §.§ The dual graph Let M be a compact 3-manifold, and let τ be a veering triangulation of M relative to a tube system U. We construct a graph by placing a vertex in each tetrahedron of τ and placing an edge between two vertices for each face identification. There is an obvious embedding of Γ into M such that each edge of Γ passes through one face of τ, and we can orient each edge so that these intersections are compatible with the coorientation of the τ-faces. This directed graph (together with its embedding into M) is called the dual graph, and denoted Γ. Since each vertex of Γ has two incoming and two outgoing edges, Γ is a 1-cycle representing a class in H_1(M) that (by a slight abuse) we also call Γ. It is clear from <Ref> that the branch loci of both B^s and B^u can be canonically identified with Γ. We will always think of (B^s) and (B^u) as directed graphs using this identification with Γ. Note that this endows each cusp curve of each cusped torus of B^s and B^u with an orientation. It turns out that these orientations are coherent, in the sense that any two cusp curves of a cusped torus are isotopic preserving orientation. If T is a cusped solid torus, there is a natural orientation of its core curve, defined by the property that any cusp curve is homotopic to a positive multiple of the core curve. §.§ Removable annuli and efficient carrying. Let τ be a veering triangulation of M relative to a tube system U, and let S be a surface in M relatively carried by τ. This position is not (combinatorially) unique in general, and it will be important for us (e.g. in <Ref>) to discuss a particular simplification that can eliminate components of S∩ U. 
For this, suppose that A is an annulus component of S∩ U disjoint from ∂M, such that, after collapsing S, both of its boundary components lie on adjacent ladderpoles of some ladder ℓ. If this collapse of A is homotopic in U to ℓ fixing ∂A, then we say that A is a removable annulus. If S is relatively carried by τ without any removable annuli, then we say that S is efficiently relatively carried by τ. The next lemma justifies this terminology by saying that "removable annuli can be removed": Let τ be a veering triangulation of a 3-manifold M relative to a tube system U. Let S be a surface relatively carried by τ. Then S is isotopic to a surface efficiently relatively carried by τ. Suppose A is a removable annulus, with both components of ∂A carried by the ladderpoles of a ladder ℓ. If A is innermost, meaning that the component of U A containing ℓ is disjoint from S, then we may apply an annulus move as defined in <cit.> and shown in <Ref> (technically, this is the reverse of what that paper calls an annulus move) to eliminate A while preserving the fact that S is relatively carried by τ. Applying this argument finitely many times to innermost such annuli, we are done. §.§ Thurston's norm on homology and its dual We now briefly define the Thurston norm on H_2(M,∂M) and mention some of its properties. See <cit.> for more details and proofs. Let S be an oriented surface that is properly embedded in M, and define χ_-(S)=-χ(S-(sphere, disk, torus, and annulus components)). For an integral homology class α∈ H_2(M,∂M), define x(α)=min_[S]=α{χ_-(S)}, where the minimum is taken over all oriented properly embedded surfaces representing α. Thurston showed that this function on the integral lattice of H_2(M,∂M) extends to a continuous convex function H_2(M,∂M)→ℝ that is linear on rays through the origin, and vanishes precisely on the subspace spanned by the classes of properly embedded surfaces with nonnegative Euler characteristic. If there are no such surfaces representing nonzero homology classes, then x is a vector space norm on H_2(M,∂M); otherwise x is merely a pseudonorm. In a slight abuse of terminology, x is known as the Thurston norm regardless of whether it is a bona fide norm. We often think of x as a function on H^1(M) via Poincaré/Lefschetz duality. The dual Thurston norm x^* H^2(M,∂M)→ℝ∪{∞} is defined by x^*(u)=sup{⟨ u,α⟩ | x(α)≤ 1}. Regardless of whether x is a norm, the function x^* restricts to a norm on the subspace on which it takes finite values. §.§ The Euler class Let γ be the homology class of the union of the oriented cores of the solid torus components of U. Define e_τ∈ H^2(M,∂M) by the formula 2e_τ=⟨ 2γ- Γ, ·⟩. Here ⟨·,·⟩ is the intersection pairing between H_1(M) and H_2(M,∂M). We call e_τ the Euler class of τ. We now explain that this generalizes previous definitions in the literature of "Euler classes" for veering triangulations. If U has no solid torus components then the formula reduces to 2e_τ=-⟨Γ, ·⟩, which is the formula for the combinatorial Euler class given in <cit.>, based on Lackenby's definition for arbitrary taut ideal triangulations from <cit.>. If T is a cusped solid torus of B^s, let β_T be the union of cusp curves of T. Then β_T is homologous to the 1-chain prong(T)·core(T), where core(T) denotes the oriented core curve of T, which can be seen by homotoping β_T onto core(T). The Euler class defined in <cit.> is an element of the first homology of a closed manifold defined as 1/2∑ ind(T)· [core(T)], where the sum is taken over all cusped solid tori of B^s (all cusped tori are solid here because the manifold is closed).
We have ∑ ind(T)· [core(T)] =∑(2-prong(T))· [core(T)] =∑(2[core(T)]-[β_T]) =2γ-Γ, where the sums are over the cusped solid tori T of B^s and in the last line we have used that Γ is homologous to the union of β_T over all tubes T. Therefore in the case when B^s has no cusped torus shells, our Euler class is Poincaré dual to the Euler class from <cit.>. If cusped torus shells are present, we can still apply the above analysis to each cusped solid torus. This shows the following. Let c be the union of all cusp curves of cusped torus shells of B^s. Then 2e_τ=⟨( ∑ ind(T_i)·core(T_i))-c,·⟩, where the sum is over all cusped solid tori T_i of B^s. The next two lemmas will be key tools for connecting veering triangulations to the Thurston norm. They are analogous to <cit.> and the proofs are adaptations of the proofs of those lemmas to the more general setting here. Before we state and prove them, we make a few definitions and set some notation. Let S be a surface properly embedded in M transverse to B^s such that each patch p of the train track t^s:=S∩ B^s is π_1-injective in its cusped torus, which we denote by T(p). If p is a patch such that T(p) p is disconnected, or if p∩∂M is nonempty and has null pairing with the cusp curves of T(p), then we say that p is superfluous. Note that if p is superfluous, then p does not contribute to the algebraic intersection of S with Γ or γ, hence does not contribute to the pairing of S with e_τ. Let NSD and NSA denote the sets of nonsuperfluous disks and annuli, respectively. If p is a nonsuperfluous disk then T(p) is a cusped solid torus and p is a meridional disk of T(p). Moreover p has pairing +1 or -1 with the core of T(p); we denote this pairing by ε(p). For any patch p, we say a cusp of p is positive or negative according to whether the corresponding point of S∩Γ is positive or negative, respectively. Define σ(p) to be the number of positive cusps of p minus the number of negative cusps of p. Note that σ(p) is always less than or equal to cusps(p), the number of cusps of p. Having set all this notation, <Ref> now gives 2e_τ([S])=∑_NSD ε(p)·ind(T(p)) +∑_NSA(-σ(p)). Recall that a properly embedded surface S in M is taut if x([S])= χ_-(S) and no collection of its components is nullhomologous. The dual Thurston norm of e_τ is ≤ 1. As a consequence, if Y is a surface such that χ_-(Y)=-e_τ([Y]), then * x([Y])=-e_τ([Y]) and * if no collection of components of Y is nullhomologous, then Y is taut. Let S be a taut surface in M. We can isotope S so that each patch π_1-injects into its cusped torus, and so that, by <Ref> combined with the fact that S is not a disk homotopic into ∂M, there are no patches of S∩ B^s with positive index. We then have the following equalities and inequalities (justifications are below): 2e_τ([S]) = ∑_NSD ε(p)·ind(T(p)) +∑_NSA(-σ(p)) ≥∑_NSD ind(T(p)) +∑_NSA -σ(p) ≥∑_NSD ind(p) +∑_NSA ind(p) ≥∑_all patches ind(p) =2χ(S) The first line is a restatement of <Ref>. The second uses that all cusped solid tori have index ≤ 0. The third line uses that ind(p)≤ ind(T(p)) for p∈NSD and that ind(p)=-cusps(p)≤ -σ(p) for p∈NSA. For the fourth line we have used that all patches of S∩ B^s have nonpositive index. Hence χ(S)≤ e_τ([S]), so -e_τ([S])≤ -χ(S)=x([S]). This shows that e_τ has dual Thurston norm ≤ 1 as claimed. Now suppose that Y is as in the statement of the lemma. Then x([Y])≤χ_-(Y)=-e_τ([Y])≤ x([Y]) so all these quantities are equal. In particular (a) holds, and if no collection of components of Y is nullhomologous then Y is taut. The next lemma is a generalization of <cit.>. Let S be a surface properly embedded in M that is relatively carried by a relative veering triangulation τ of M.
Then S is taut and x([S])=-e_τ([S]). By <cit.> the dual graph Γ is strongly connected, so every Γ-edge is part of a directed cycle. Hence each component of S pairs positively with some directed curve in M, so no collection of components is nullhomologous. In particular, using <Ref>, we have χ_-(S)=-χ(S). Because S is relatively carried by τ, each patch p of S∩ B^s is either * a meridional disk of a cusped solid torus, * a topological annulus with one boundary component on ∂M and another boundary component having a nonzero number of positive cusps, or * a topological annulus with no cusps in its boundary. In case (a), ind(p)=2-cusps(p). In case (b), ind(p)=-cusps(p). In case (c), ind(p)=0. In each case ind(p) is precisely the contribution of p to 2e_τ([S]), so we have χ_-(S)=-e_τ([S]). <Ref> finishes the proof. §.§ Combinatorial transverse surface theorems We define cone_1(Γ) to be the smallest closed convex cone in H_1(M) containing the homology class of each directed cycle in the dual graph Γ. Its dual cone cone_1^∨(Γ) is the cone of all classes in H^1(M) that pair nonnegatively with each element of cone_1(Γ). The following is the generalization of <cit.> to our setting. Let τ be a veering triangulation of M relative to a tube system U. Let u be an integral cohomology class in H^1(M). Then u∈cone_1^∨(Γ) if and only if there exists a surface S, necessarily taut, such that S is relatively carried by τ and [S] is Poincaré dual to u. The proof of <cit.> goes through almost without modification, but we describe it here in broad strokes. If u is dual to a carried surface, then that surface is taut by <Ref> and it is clear that u∈cone_1^∨(Γ) since the surface is positively transverse to Γ. Conversely, let u∈cone_1^∨(Γ), and let u_U be the pullback to H^1(M_U). We can use u_U to produce a surface S_U in M_U that is carried by τ_U and dual to u_U <cit.>. Let M_0 be the union of M_U with the solid tubes. The surface S_U can be capped off by annuli and disks inside the solid tubes to produce a surface dual to u_0, the pullback of u to M_0. This is essentially because u∈ H^1(M) (see <cit.>; that proof goes through here, guaranteeing that capping off is possible). The only mild novelty in the setting of this theorem is the possibility that M_0≠ M; that is, our tube system may have hollow tubes. At a component of ∂M_U to which a hollow tube is glued, we simply extend S_U to M by gluing in annuli using the T^2× I product structure of the hollow tubes. The result is a relatively carried surface S that is taut by <Ref>. Since S_U is dual to u_U, S is dual to a class mapping to u_U under H^1(M)→ H^1(M_U). Since this map is injective, S is Poincaré dual to u. If τ is a relative veering triangulation in M, then the dual cone cone_1^∨(Γ) is contained in the cone in H^1(M) on which -e_τ and x agree as maps H^1(M)→ℝ. Let S be a taut surface with [S]∈cone_1^∨(Γ). Then x([S])=χ_-(S)=-e_τ([S]). We now aim to prove a version of the main result in <cit.> when τ is strict. We will use the following, which is a stronger version of <cit.>: Assume that τ is a strict relative veering triangulation of M and let S be an incompressible and boundary incompressible surface in M. Further suppose that S has the property that for any surface S' isotopic to S that is transverse to B^u and B^s, either: * one of S' ∩ B^u or S' ∩ B^s has a patch of positive index, or * every negative switch of S' ∩ B^u and every negative switch of S' ∩ B^s belongs to a bigon. Then S is isotopic to a surface relatively carried by τ. <Ref> is stated in greater generality than originally appeared in <cit.>.
There it was assumed that all tubes of U are hollow, so that τ is automatically strict. However, the proof given in <cit.> goes through as written, modulo slight differences in terminology. We note the additional fact that the individual "moves" comprising the isotopy in <Ref> can be performed some distance away from ∂M, so the isotopy can be performed fixing ∂S pointwise. Let τ be a strict relative veering triangulation of M. Let S be a taut surface in M. The following are equivalent: * [S]∈cone_1^∨(Γ) * x([S])=-e_τ([S]) * S is relatively carried by τ up to an isotopy, which can be chosen to fix ∂S. Condition (a) implies (b) by <Ref>. Also (c) implies (a) because any carried surface has only positive intersections with Γ. It remains to prove that (b) implies (c), so suppose that x([S])=-e_τ([S]). In light of <Ref>, our strategy will be to show that for any surface isotopic to S, one of the two conditions in <Ref> holds. The proof is analogous to the proof of <cit.>, but with our slightly different definition of the Euler class. We present it for the sake of completeness. By an isotopy, we can assume that S is transverse to both B^s and B^u and that each patch of t^s=B^s∩ S and t^u=B^u∩ S is π_1-injective in its cusped torus. We will show that if t^s has no patches of positive index, then every negative switch of t^s belongs to a bigon. If p is a superfluous patch, note that p has σ(p)=0 and the number of intersections of p with γ counted with sign is 0. By (b) and the fact that S is taut, we have 2χ(S)=2e_τ([S]). By <Ref> we can write this as 2χ(S) = ∑_NSD ε(p)·ind(T(p)) +∑_NSA (-σ(p)). If we let SP denote the set of superfluous patches, the index sum formula for χ(S) gives 2χ(S)=∑_all patches ind(p)=∑_SP ind(p) +∑_NSD ind(p)+∑_NSA ind(p). Subtracting the above two expressions for 2χ(S), we obtain 0=∑_SP ind(p) +∑_NSD(ind(p)-ε(p)·ind(T(p)))+∑_NSA(ind(p)+σ(p)). We claim every term in each of the three sums above is nonpositive: * First sum: every term is nonpositive because we are assuming there are no patches of positive index. * Second sum: if p is a nonsuperfluous disk with ε(p)=-1 then ind(p)-ε(p)·ind(T(p))=ind(p)+ind(T(p))≤ 0 is a sum of two nonpositive numbers, and if ε(p)=1 then the inequality follows from the fact that a meridional disk of a cusped solid torus has index at most the index of the cusped solid torus. * Third sum: if p is a nonsuperfluous annulus then χ(p)≤0. Hence ind(p)+σ(p)≤ind(p)+cusps(p)=2χ(p)≤ 0, so every term in the third sum is nonpositive. Therefore each term in the sums over SP, NSD, and NSA is equal to 0. This has the following implications: * each superfluous patch has index 0, so is either a bigon or an annulus with no cusps, * each nonsuperfluous disk patch p has ind(p)=ε(p)·ind(T(p)), so ε(p)=+1 and all cusps of p are positive, * each nonsuperfluous annulus patch p in a cusped torus shell has ind(p)=-σ(p), so p is an annulus with no negative cusps. We have shown that any negative switch of t^s must belong to a bigon. A symmetric argument shows the same for t^u. Hence (b) implies (c). If τ is a strict relative veering triangulation in M, then cone_1^∨(Γ) is equal to the cone over a face of the Thurston norm ball whose codimension is equal to the dimension of the largest linear subspace contained in cone_1(Γ). Moreover, cone_1^∨(Γ) is the maximal domain on which x and -e_τ agree as functions H^2(M,∂M)→ℝ. § PSEUDO-ANOSOV FLOWS AND DYNAMIC BLOWUPS We refer to Mosher <cit.>, Fenley–Mosher <cit.> and Agol–Tsang <cit.> for thorough discussions of the definition of a pseudo-Anosov flow on a closed 3-manifold.
There are in fact two definitions, a smooth and a topological one, and their equivalence for transitive flows has recently been shown by Agol–Tsang <cit.> relying on the Anosov-case established by Shannon <cit.>. It will suffice for our purposes to record some of the main properties associated with (either type of) pseudo-Anosov flow, which we will do below. We will then define the notion of a dynamic blowup of such a flow, which is a generalization of the definition given by Mosher <cit.>. A special case of this will yield a definition—basically the expected one—of pseudo-Anosov flows on manifolds with toral boundary, and the general case will give us the definition of almost pseudo-Anosov flows, in both the closed case and the case with boundary. A pseudo-Anosov flow φ on a closed 3-manifold M has in particular the following properties: * The flow φ has a finite collection Π_s of closed orbits (called singular orbits), and a φ-invariant pair of transverse singular foliations of codimension 1 whose leaves intersect along orbits of φ. The foliations are nonsingular and transverse in the complement of Π_s and have a standard form in a neighborhood of Π_s, as described below. * The two foliations are called the stable and the unstable foliations of φ, denote W^s and W^u, respectively. For any two points x,y in a single leaf of the stable foliation W^s, the trajectories are asymptotic, meaning that for some orientation-preserving s_+→_+, lim_t→+∞d(φ_t(x),φ_s(t)(y)) = 0. The corresponding statement holds for the unstable foliation W^u with t→∞ replaced by t → -∞. We will typically work with transitive flows (i.e. possessing a dense orbit). Mosher <cit.> proved that all pseudo-Anosov flows on closed atoroidal manifolds are transitive. Once we define pseudo-Anosov flows on manifolds with boundary, it will be easy to see that atoroidality implies transitivity in the broader setting, by choosing appropriate Dehn fillings and applying Mosher's result in the closed case. It remains to describe the local dynamics around a periodic orbit γ in Π_s, as a suspension of a surface homeomorphism. In what follows, we err on the laborious side in order to prepare for the description of the blowups. Our description is equivalent to the pseudo-hyperbolic orbits described in e.g. <cit.>. Let U⊂ V be nested open 2-disks, and f U→ V an orientation-preserving embedding, with the following properties: There is a one-vertex tree T with an even number of prongs (at least 4) properly embedded in V so that f^-1(T) ⊂ T U and f fixes the vertex p and no other points. Each component Q of V T is an open quadrant and its closure in V, Q, is a closed quadrant. The map f permutes the germs of quadrants in the sense that it takes Q U into some Q'. Thus there is a least power f^k that takes germs of quadrants into themselves (that is, there is some smaller neighborhood U'⊂ U such that f^k(Q U') ⊂ Q). Finally, there is an identification of Q with a neighborhood of the origin in a closed quadrant of the Euclidean plane, which conjugates f^k to the map (x,y) ↦ (λ x,y/λ) for some λ> 1 (on the image of Q U). In particular note that along the interval Q T the map f fixes the point p and translates the rest in the same direction as viewed from inside Q. Hence, each prong at p is either stable (i.e. contracting) or unstable (i.e. expanding). Note that our convention is to draw the stable direction as vertical and the unstable direction as horizontal. 
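As a concrete illustration of the local model, with numbers chosen purely for illustration: if λ=2, the model map (x,y)↦(λ x, y/λ) sends (1,1) to (2,1/2), then to (4,1/4), and so on. Forward orbits move along the hyperbolas xy=const toward the horizontal (unstable) direction and eventually leave any fixed neighborhood of the fixed point, points on the vertical (stable) axis converge to the fixed point under forward iteration, and points on the horizontal axis converge to it under backward iteration.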
We suspend this picture by considering V× [0,1], with U× 1 identified with f(U)× 0 using f. Vertical flow is then defined, for bounded times, on a small neighborhood of the closed orbit p×[0,1]/∼. This is the model for a pseudo-Anosov flow in a neighborhood of a closed orbit γ; the orbit is singular (i.e. γ∈Π_s) if the number of stable (and unstable) prongs is at least 3. The suspended vertical lines of the quadrants are the intersections of leaves of the stable foliation with the neighborhood, and the suspended horizontal lines are intersections of leaves of the unstable foliation. Note that prongs thus alternate between stable and unstable singular leaves. Finally, we define the index of γ to be 2 minus the number of stable prongs. From this description one sees that each singular periodic orbit is contained in a singular leaf of the stable foliation, which has the structure of 3 or more annuli glued together along the orbit (this is true at least locally and it is not hard to see that it is globally true.) The orbit is the attractor of all flow lines in its stable leaf. The corresponding statement is true for the unstable singular leaf. §.§ Local blowups Let U^♯⊂ V^♯ be either nested disks as before, or nested annuli homeomorphic to S^1×[0,1) sharing the boundary component U^♯ = V^♯ = S^1× 0. An embedding f^♯ U^♯→ V^♯ is a blowup of f if the following holds: There is a degree 1 map π V^♯→ V taking U^♯ to U, which semiconjugates f^♯ to f. The preimage π^-1(T) is a graph T^♯ (without valence 2 vertices) properly embedded in V^♯, containing a connected subgraph G=π^-1(p) in U^♯. In the disk case T^♯ is a tree, and in the annulus case T^♯ has Euler characteristic 0, containing U^♯⊂ G as its unique circle each of whose vertices we require to have degree 3. We refer the reader to the right-hand sides of <Ref> and <Ref> for examples in the disk and annulus case, respectively. The complement of G in T^♯ is a union of intervals, which correspond to the edges of T. The restriction of π to V^♯ G is a homeomorphism to V p, which conjugates (the restrictions of) f^♯ to f. We require that the periodic points of f^♯ are the vertices of T^♯, so on the interior of each edge some power of f^♯ acts by translation. We again have open quadrants, the components of V^♯ T^♯, which are taken homeomorphically to open quadrants of V T. The closure of each quadrant now has a boundary which is a segment in T^♯ composed of several edges. Note that the translations of (a suitable power of) f^♯ on these edges are all in the same direction, as viewed from the quadrant. We call such a picture a consistently oriented blowup quadrant. One implication is that around any vertex v of T^♯ the adjacent prongs alternate between attracting and repelling at v. In the simplest blowup of a quadrant map, the corner point pulls back to an interval connecting two fixed points. One can easily construct such a blowup, in coordinates where f is differentiable at the corner with eigenvalues λ > 1 > μ > 0, using a “real blowup” where the corner is replaced by the interval of unit tangent vectors pointing into the quadrant. See <Ref>, and also the appendix of <cit.>. A blowup with more fixed points (which still satisfies the consistent orientation condition) can then be obtained by repeating the construction at the new fixed points; see <Ref>. We suspend f^♯ in the same way as before, obtaining a flow defined for bounded times in a neighborhood of the suspension of G. This is what we call a blowup of the suspension of f. 
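To make the real blowup mentioned above concrete, here is a sketch of the standard computation in the differentiable coordinates assumed there. Record a point of the blown-up interval by the angle θ∈[0,π/2] of a unit vector pointing into the quadrant. If f is given near the corner by (x,y)↦(λ x,μ y) with λ>1>μ>0, then the induced map on directions satisfies tanθ↦(μ/λ)tanθ. Thus the endpoints θ=0 and θ=π/2 are fixed and every interior direction is translated toward θ=0, so the translation along the blown-up edge is in a single direction, as required of a consistently oriented blowup quadrant.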
Note that in the annulus case we obtain a manifold with torus boundary. The edges of the graph G suspend to flow-invariant annuli which we call blown annuli. Note that in each such annulus the flow lines are asymptotic to one boundary component in backward time and the other in forward time. The suspension of G itself is called an annular complex; it is a union of blown annuli. Note that in the disk case, the map π is homotopic to a homeomorphism by a homotopy supported on a small neighborhood of π^-1(p), since such a neighborhood is a disk. In the annulus case the two surfaces are not homeomorphic, but if we had two annulus-type blowups π_1 V_1^♯→ V and π_2 V_2^♯→ V, one similarly sees that there is a homeomorphism h V_1^♯→ V_2^♯ which is equal to π_2^-1∘π_1 outside a small neighborhood of π_1^-1(p). Similar statements hold for the suspensions. §.§ Global blowups A dynamic blowup of a pseudo-Anosov flow ϕ on a closed 3-manifold M is a flow ϕ^♯ on a compact 3-manifold M^♯, together with a map π M^♯→ M with the following properties: * π semiconjugates ϕ^♯ to ϕ. * There is a finite collection Π of closed orbits of ϕ such that, from π^-1(MΠ) to MΠ, π is a homeomorphism and a conjugacy of the flows. * For each orbit γ in Π there is a neighborhood W whose preimage under π is a local blowup in the sense described above. In particular, π^-1(W) is either a neighborhood of the suspension of a tree homeomorphism (the “disk case” of <Ref>), or a neighborhood of a toral boundary component (the “annulus case”). We call the neighborhood W from item (3) above a standard neighborhood of γ. A blowup ϕ^♯ of a pseudo-Anosov flow is called an almost pseudo-Anosov flow. We say that the combinatorial type of such a blowup is the blowup graph T^♯ associated to each local blowup, together with its embedding in the disk or annulus V^♯, up to proper isotopy. Note that the combinatorial type is completely specified by saying which pairs of quadrants of the original return map, in a neighborhood of each orbit, are to be connected along the blowup graph. We do not claim that, given the combinatorial type, the blowup flow ϕ^♯ is unique up to orbit equivalence. Indeed there are further choices in the construction (notably the choice of gluing map across the blowup graph), but these will not matter to our discussion. There are several important special cases of dynamic blowups. If all the local blowups are in the disk case, then M^♯ is still a closed manifold and in fact, using the discussion in <Ref>, there is a homotopy of π to a homeomorphism from M^♯ to M, supported on a small neighborhood of π^-1(Π). This is exactly the case defined and used by Mosher <cit.>. If a local blowup is in the annulus case then the subgraph G is a circle each of whose vertices has degree 3 in T^♯ (i.e. each singularity is 3 pronged), so we have blown up a closed orbit to a boundary component in the simplest possible way. We call this a Fried blowup (see <Ref>) and it gives us a clean way to define a pseudo-Anosov flow on a manifold with boundary: If M is a compact 3-manifold with toral boundary and φ is a flow that is tangent to the boundary, we say φ is pseudo-Anosov if there is a Dehn filling π M → M^♭ such that φ is the blowup of a pseudo-Anosov flow on M^♭, for which all the local blowups are Fried blowups. By construction, each boundary component P of ∂ M is a union of (consistently oriented) blown annuli and so the restriction of ϕ to P has alternating expanding/contracting periodic orbits. 
The contracting periodic orbits in ∂ M are called unstable prong curves and the expanding periodic orbits are called stable prong curves. The associated slopes on ∂ M are called the prong slopes; these are also called the degeneracy slopes in the literature. By definition only singular orbits of ϕ can be blown up, except for the case of a Fried blowup at a regular orbit. Hence, when π M^♯→ M restricts to a homeomorphism on boundary components (or the manifolds are closed), Π⊂Π_s. §.§ Blowing down and up We now discuss a more general blowup construction with the goal of producing a modified flow on the same manifold M in the case where M has nonempty boundary. For some simple examples, see the bottom row of the bottom chart in <Ref> and ignore the green surface; see also <Ref> and <Ref>. Starting with a manifold with boundary and a pseudo-Anosov flow, we will first "blow down" to a closed M^♭ so that M→ M^♭ is a Fried blowup, and then blow up M^♭ to obtain M^♯ with general annulus-type blowups that recover all the boundary components, so that M^♯ and M are homeomorphic. As in <Ref>, we obtain a homeomorphism h M^♯→ M and an almost pseudo-Anosov flow ϕ^♯ on M^♯ so that h pushes ϕ^♯ to a flow on M which, away from a small neighborhood of blown singular orbits and boundary components of M, agrees with ϕ. We refer to this combined operation as a generalized dynamic blowup; however for brevity we will usually just use the term dynamic blowup. Moving forward, we will use h to identify M^♯ with M and consider ϕ^♯ as a small modification of ϕ on M. With this change, the above setup gives blowdown maps π_♯ (M,ϕ^♯)→ (M^♭,ϕ^♭) and π_♭ (M,ϕ)→ (M^♭,ϕ^♭). Moreover, we have a homeomorphism g M (blowup locus of π_♯) → M (blowup locus of π_♭) which is an orbit equivalence from the restriction of ϕ^♯ to that of ϕ, and equal to the identity away from a small neighborhood of the blowup loci. We observe that for any pseudo-Anosov flow φ on M and any finite collection of orbits κ, we can apply a Fried blowup to each of the orbits of κ to obtain a manifold M̂ with pseudo-Anosov flow ϕ̂. For each component k of κ, let s_k be an essential simple closed curve on the associated component P_k of ∂M̂ with the property that its geometric intersection number with the union of unstable prong curves of P_k is at least 2. Then Fried shows that there is a blowdown M̂ → M' to a pseudo-Anosov flow ϕ' on the Dehn filled manifold M' = M̂(s_k: k ∈κ) that collapses P_k to a closed orbit k' whose index is the above mentioned geometric intersection number. Of course, M̂ is also a Fried blowup of M', illustrating the flexibility in the definition of a pseudo-Anosov flow on a manifold with boundary. See Fried <cit.> for the details of the construction and Tsang <cit.> for the related history. §.§ Stable/unstable foliations of almost pseudo-Anosov flows Let ϕ be an almost pseudo-Anosov flow on a compact 3-manifold M, obtained by dynamically blowing up a pseudo-Anosov flow ϕ^♭ on a closed 3-manifold M^♭. The stable foliation of ϕ is a singular foliation whose leaves are the preimages of leaves of the stable foliation of ϕ^♭ under the semiconjugacy (M, ϕ)→ (M^♭, ϕ^♭). It is not hard to see that this definition is independent of which blowdown (M^♭,ϕ^♭) we choose. We similarly define the unstable foliation of ϕ. One sees from the local structure of a blowup that the preimage of a leaf is connected, so there is a one-to-one correspondence between leaves of the stable and unstable foliations of ϕ^♭ and ϕ.
If H is a stable leaf of ϕ^♭ containing a singular orbit γ of ϕ^♭ which is blown up to obtain ϕ, then the stable leaf of ϕ corresponding to H will contain every blown annulus collapsing to γ. Note that the blown annuli simultaneously belong to both stable and unstable leaves of the foliations of ϕ, and in particular the foliations are not transverse even away from the periodic orbits. We say two orbits γ and γ' of a flow on a manifold X with metric d are asynchronously proximal (in the forward direction) if there are sequences of times (t_n), (t_n') with t_n, t_n' →∞ such that d(γ(t_n),γ'(t_n'))→ 0. Note that this is a weaker notion than forward asymptotic: asynchronously proximal orbits get arbitrarily close, while asymptotic orbits get arbitrarily close and stay arbitrarily close. The definition of asynchronously proximal will be relevant to us when we lift an almost pseudo-Anosov flow on a compact 3-manifold M to its universal cover. For the next lemma, a stable half-leaf is a complementary component of the closed orbits within a stable leaf of φ. An unstable half-leaf is defined similarly. Note that a stable/unstable leaf can contain k half-leaves, where k is any integer k≥ 1. The following is a key feature of almost pseudo-Anosov flows. Let ϕ be an almost pseudo-Anosov flow on a compact 3-manifold M, and let ϕ be its lift to the universal cover M. If γ and γ' lie in the same stable half-leaf of ϕ, then γ and γ' are asynchronously proximal. If they lie in the same unstable half-leaf, then they are asynchronously proximal in the backward direction. In the case where M is closed, this lemma is an observation of Fenley in <cit.> although he does not use the term asynchronously proximal. The key point there and in this more general setting is that a semiconjugacy collapsing blown annuli can be taken to be an isometry away from a neighborhood of the blown annuli. §.§ Flow spaces and perfect fits For a flow ϕ on a manifold M, its flow space is obtained by taking the quotient of the universal cover M of M by the orbits of the lifted flow ϕ. The quotient map Θ M → is equivariant with respect to the π_1(M)–actions, and when ϕ is almost pseudo-Anosov, its stable/unstable invariant foliations project to invariant singular foliations of which we continue to call the stable/unstable foliations of O. A blown segment in is a topological arc obtained by lifting a blown annulus to M and projecting to O; it is contained in leaves of both foliations. Let ϕ be an almost pseudo-Anosov flow on a compact manifold M and let be its flow space. Then is a simply connected surface that is (singularly) foliated by the images of the stable/unstable foliations, which are transverse away from blown segments. If M is closed, is homeomorphic to ℝ^2. Otherwise, is homeomorphic to a disk minus a closed subset of its boundary. In the case where ϕ is pseudo-Anosov and M is closed, <Ref> was proven by Fenley–Mosher <cit.>. They also establish the result when ϕ is almost pseudo-Anosov (again when M is closed) assuming that ϕ is transverse to a taut foliation. We begin with some setup with the goal of showing that the general result follows from the special case proven by Fenley–Mosher. By definition (see <Ref>), there is a quotient map π M → M^♭, that blows ϕ down to a pseudo-Anosov flow ϕ^♭ on a closed manifold M^♭. Let ^♭ be the flow space of ϕ^♭; we will use the fact, referenced above, that ^♭ is a bifoliated plane. The blowdown M → M^♭ is not unique (see <Ref>) but the following discussion is true for any such blowdown. 
The map π M → M^♭ is π_1–surjective and we let M' → M be the covering space determined by π_* so that there is an induced quotient map π' M' → M^♭ to the universal cover M^♭ of M^♭. Denote by ' the quotient of M' obtained by collapsing orbits in M' of the lifted flow. Then there is an induced quotient map π'_' →^♭. Since π' M' → M^♭ collapses blown annular complexes (including boundary components) to singular orbits, π'_' →^♭ collapses connected graphs of blown segments to singularities of ^♭. By construction, the flow space = _M of ϕ is the universal cover q _M →' and the composition π_ = π'_∘ q _M →^♭ is equivariant with respect to π_*. With the description above, it now suffices to show that ' is a surface, possibly with boundary; that is, locally two-dimensional, and Hausdorff. The argument in Fenley–Mosher <cit.> goes through verbatim to show that ' is Hausdorff. Following Fenley–Mosher, to show that ' is locally two-dimensional, it suffices to show that every point of M' has a transverse (half-)disk neighborhood that injects into '. This follows from the fact that M' does not contain (ϵ, T)-cycles with ϵ arbitrarily small and T arbitrarily large. We recall that an (ϵ,T) cycle is a flow segment [x,y] with y obtained from x by flowing for time T such that the distance between x and y is no more than ϵ. To prove the fact, note that such a cycle will project to an (ϵ', T')-cycle in M^♭ with ϵ' ≤ O(ϵ), since the map π is Lipschitz. Moreover, as in the claim in the proof of <cit.>, T'/T is uniformly bounded above and below. Hence, we obtain an (ϵ', T')-cycle of M^♭ with ϵ' arbitrarily small and T' arbitrarily large. However, this is shown to be impossible in the proof of Proposition 4.1 of <cit.>. This shows that ' is a surface. The description of ' given above then implies that it is obtained topologically by removing the interiors of a countable collection (zero when ∂ M = ∅) of disjoint disks from the plane. This implies that its universal cover _M has the required description; the statement about the singular foliations on _M follows from the corresponding statement for the foliations on M. §.§.§ Perfect fits and flow space relations Fix an almost pseudo-Anosov flow φ on M with flow space O. A perfect fit rectangle in O is a proper embedding of the rectangle with missing corner [0,1]^2 ∖{(1,1)} into O that maps the horizontal/vertical foliations to the unstable/stable foliations of O. We note that any perfect fit rectangle contains a perfect fit subrectangle that is disjoint from the singularities and blown segments of O. For any collection of closed orbits κ, let κ be the π_1–invariant discrete collection of points in O obtained by lifting κ to the universal cover M and projecting to O. Then we say that κ kills perfect fits if κ meets the interior of every perfect fit rectangle in O. Note that for the definition, it suffices to only ever consider regular orbits since singular orbits correspond to points in O that do not lie in the interior of any rectangle. If O has no perfect fit rectangles (i.e. κ = ∅ kills perfect fits) then we say that ϕ has no perfect fits. The importance of this condition was first recognized and studied by Fenley; see <cit.>. Now fix a collection κ of closed regular orbits of ϕ, and let π M → M^♭ be any blowdown to a pseudo-Anosov flow ϕ^♭ on a closed manifold M^♭. Since the orbits in κ are regular, κ is mapped homeomorphically to its image in M^♭. We define κ^♭ to be this image together with any regular orbits of ϕ^♭ that are the images of components of ∂ M. 
Using the induced flow space map π_→^♭ (discussed before the proof of <Ref>), it is clear that κ kills perfect fits for ϕ if and only if κ^♭ kills perfect fits for ϕ^♭. Still with κ fixed, we set κ_s to be the union of κ with the set of all singular orbits and blown annular complexes of ϕ (including those containing the boundary), and we let κ_s be obtained by lifting to the universal cover and projecting to O. The blowdown π sends κ_s to κ^♭_s, where κ^♭_s is the union of κ^♭ with all singular orbits of ϕ^♭. In particular, we may use the restriction of π to identify the manifolds M κ_s = M^♭κ^♭_s, which we call the fully-punctured manifold associated to the pair (ϕ, κ). The flow space of the fully-punctured manifold (for the restriction of ϕ) is denoted P, and the natural projections P→ O κ_s → O^♭κ^♭_s are covering spaces. The associated branched cover P → O^♭, infinitely branched over κ^♭_s ⊂^♭, is called the completed flow space of the fully-punctured manifold. The added branch points are also called completion points or singularities of P. Finally, we observe that κ fills perfect fits of ϕ if and only if P has no perfect fit rectangles. From the discussion here, we see that κ kills perfect fits of ϕ if and only if the associated Fried blowup has no perfect fits. To conclude, we recall the fact that for every transitive pseudo-Anosov flow ϕ there exists a finite collection of regular orbits that kills its perfect fits. This follows from that fact, due to Fried <cit.> and Brunella <cit.>, that every such flow has a Fried blowup that blows down to pseudo-Anosov suspension flow. From the discussion above, it is then clear that the closed regular orbits of the Fried blowup kill the perfect fits of ϕ. In fact, Tsang has recently proven that for any transitive pseudo-Anosov flow on a closed 3-manifold, there is a single regular orbit that kills its perfect fits <cit.>. Just as before, this is easily extended to all transitive, almost pseudo-Anosov flows on compact manifolds using any blowdown to a pseudo-Anosov flow on closed manifold. We conclude with a remark that ties the notion of perfect fits to that of anti-homotopic orbits from the introduction. Let M be a closed 3-manifold with a pseudo-Anosov flow φ. If φ has perfect fits, then Fenley proved that φ has anti-homotopic orbits (<cit.>). Conversely, if φ has anti-homotopic orbits γ_1 and γ_2, then after taking compatible lifts to the universal cover and projecting to the flow space , we obtain points p_1 and p_2 that are fixed by some g ∈π_1 (M) in the conjugacy class determined by γ_1,γ_2. Fenley shows that in this case p_1 and p_2 are connected by a chain of lozenges (<cit.>), which in particular implies that φ has perfect fits. §.§ The Agol–Gueritaud construction and transversality to the flow Given a pseudo-Anosov flow ϕ on a closed 3-manifold and a finite collection κ of regular orbits that kills its perfect fits, the Agol–Gueritaud construction produces veering triangulation τ on the manifold M κ_s, and our previous work shows that the 2-skeleton of τ is positively transverse to ϕ. From the above discussion, this generalizes as follows: Suppose that ϕ is a transitive almost pseudo-Anosov flow on a compact manifold M. Then there is a veering triangulation τ on M κ_s whose 2-skeleton is positively transverse to ϕ. We use the notation from <Ref>. 
The construction in <cit.> produces a veering triangulation τ on M^♭κ^♭_s whose 2–skeleton is transverse to the flow ϕ^♭ under the assumption that its completed flow space P has no perfect fits. But as we saw in <Ref>, P has no perfect fits if and only if κ kills the perfect fits of ϕ. This, together with the fact that π restricts to a flow-preserving homeomorphism M κ_s → M^♭κ^♭_s, completes the proof. We call τ the veering triangulation associated to the pair (ϕ, κ). When ϕ has no perfect fits, the veering triangulation associated to ϕ is the one associated to (ϕ, ∅). When ϕ has perfect fits, there is significant flexibility in the choice of κ; we will call τ an associated veering triangulation if it is associated to some pair (ϕ, κ) for some choice of κ. Now fix once and for all a standard neighborhood U of κ_s. More precisely, if π M → M^♭ is a pseudo-Anosov blowdown with M^♭ closed, we let U^♭ be a standard neighborhood of the closed orbits in κ^♭_s (see <Ref>) and take U = π^-1(U^♭). Then, as in <Ref>, τ is a relative veering triangulation of M with tube system U. In particular, we have that τ_U = τ^(2)∩ M U is a branched surface on M_U = M U that is positively transverse to ϕ. The blowdown π M → M^♭ restricts to a homeomorphism M U → M^♭ U^♭ sending τ_U to τ_U^♭. In this sense, the veering triangulation associated to an almost pseudo-Anosov flow is essentially the same as the one associated to any pseudo-Anosov blowdown. Let U_i be a component of U, which we recall is a neighborhood in M of either a closed orbit or blown complex (possibly containing a single boundary component of M). Then the inner component of ∂U_i (i.e. the one not meeting ∂ M) is a torus endowed with some combinatorial data: there are collections of stable and unstable prong curves on this torus, as well as a tessellation coming from τ_U which divides it into upward and downward ladders. Let U_i be a component of U. Each upward ladder contains exactly one stable prong curve. Each downward ladder contains exactly one unstable prong curve. See <Ref> for an illustration in the case where U_i is a neighborhood of a singular orbit. Lemma 2.8 of <cit.> treats the case where ϕ is the suspension flow of a pseudo-Anosov map on a compact surface with boundary. However, the proof there applies verbatim since our triangulation τ is still obtained from the Agol–Guéritaud construction as in <Ref>. Given <Ref>, it is now clear from the definitions that the veering triangulation τ_U on M relative to U is strict if and only if κ = ∅, whence the flow ϕ on M has no perfect fits. §.§ Flows represent faces We have now developed enough combinatorics to easily prove Mosher's beautiful theorem "Flows Represent Faces" <cit.>, which connects the Thurston norm to dynamics, in the broader setting of pseudo-Anosov flows on manifolds with boundary. Let ϕ be a pseudo-Anosov flow with no perfect fits on a manifold M with its associated strict relative veering triangulation τ, furnished by <Ref>. Let γ be an orbit of ϕ. We define the index of γ to be 2 minus its number of prongs; in particular periodic orbits in ∂M have index -1. Define e_ϕ∈ H^2(M,∂M) by 2e_ϕ=∑_i ⟨ind(γ_i)[γ_i],·⟩, where the sum is over all singular orbits γ_i in the interior of M and all stable periodic orbits γ_i in ∂M (i.e. the boundaries of stable half-leaves meeting ∂M). By <Ref> and <Ref>, we have e_τ=e_ϕ. Let cone_1(ϕ) denote the cone in H_1(M) generated by the periodic orbits of ϕ. Its dual, cone_1^∨(ϕ) ⊂ H^1(M) = H_2(M, ∂ M), is the cone of classes that are nonnegative on the periodic orbits of ϕ.
It follows from <cit.> that the collection of periodic orbits of ϕ and the collection of directed cycles in the dual graph Γ of τ generate the same cone in H_1(M), so cone_1(ϕ) = cone_1(τ). See e.g. <cit.>. Hence the following follows immediately from <Ref>: Let ϕ be a pseudo-Anosov flow with no perfect fits on a compact oriented 3-manifold M. Then cone_1^∨(ϕ) is equal to the cone over a face of the Thurston norm ball whose codimension is equal to the dimension of the largest linear subspace contained in cone_1(ϕ). Moreover, this cone is the maximal domain on which x and -e_ϕ agree as functions H_2(M, ∂M) → ℝ. We remark that the hypothesis that ϕ has no perfect fits allows us to choose τ to be strict, so M admits no essential surfaces of nonnegative Euler characteristic by <Ref>. Hence in the above theorem the Thurston norm for M is a norm and not merely a pseudonorm. We also remark that, as in <cit.>, the hypothesis that ϕ has no perfect fits cannot be removed. See also the example in <Ref>. Finally, in <cit.> Mosher included the requirement that ϕ be quasigeodesic in the statement of his theorem, and used the quasigeodesic property in his proof. He wrote “It would also be nice to find a proof of Flows Represent Faces which makes no use of the quasigeodesic hypothesis, for the reason that hyperbolic geometry seems completely extraneous to the purpose of the theorem." In the time since, Fenley has shown that pseudo-Anosov flows with no perfect fits are quasigeodesic (see <cit.> for the strongest statement). Note, however, that the proof we give above in the more general setting makes no use of the quasigeodesic property.

§ RELATIVELY CARRIED SURFACES ARE ALMOST TRANSVERSE

As before, let ϕ be a transitive pseudo-Anosov flow on the compact manifold M. We say that a properly embedded surface S is almost transverse to ϕ if it is transverse to a (generalized) dynamic blowup ϕ^♯ of ϕ on M (see <Ref>). The goal of this section is to prove the following: Let ϕ be a transitive pseudo-Anosov flow and let τ be any associated veering triangulation. If S is relatively carried by τ then it is almost transverse to ϕ. In fact, by assuming that S is efficiently carried, we can be more precise about which dynamic blowups are needed: Assume, in addition to the hypotheses of <Ref>, that S is carried efficiently by τ. Then the blowup locus consists of the singular orbits and boundary components whose tubes are met by S in ladderpole annuli. Note that if S can be isotoped to be transverse to a dynamic blowup ϕ^♯, then its intersection with each blown annulus can only be along core curves and essential arcs. We say that the dynamic blowup is minimal with respect to S if (after isotopy) S intersects every nonboundary blown annulus in core curves. For the remainder of the section, we fix the surface S and the pseudo-Anosov flow ϕ as in <Ref>. By assumption, there is a collection κ of closed orbits that kills the perfect fits of ϕ so that the veering triangulation τ on M ∖ κ_s furnished by <Ref> has the following property: There is a standard neighborhood U of κ_s in M so that S is relatively carried by τ. See <Ref> and <Ref> for definitions. We recall here that since ϕ is pseudo-Anosov, κ_s = κ ∪ Π_s ∪ {P_i}, where ∂M = ⋃ P_i is the union of boundary blown annuli. Since S is relatively carried by τ, S ∩ (M ∖ U) is carried by τ_U and so at each component U_i of U, the intersection S ∩ ∂U_i is a union of parallel curves carried by ∂τ_U. If S ∩ U_i contains a ladderpole annulus, then we say that S is ladderpole at U_i.
If S meets U_i but is not ladderpole there, then S ∩ U_i is either a collection of meridional disks or a collection of annuli each having one boundary component on ∂M.

§.§ Coherent arc systems and meridional disks and annuli

We begin with a general definition: Let D be a closed disk or annulus, with points p_1, …, p_n ∈ ∂D. Suppose that ∂D ∖ (p_1 ∪ ⋯ ∪ p_n) is oriented so that each p_i is either a source or a sink. A coherently cooriented arc system is a collection of cooriented closed arcs properly embedded in D such that for each arc α, * the interior of α lies in the interior of D, * the endpoints of α lie in ∂D ∖ (p_1 ∪ ⋯ ∪ p_n), * α ⋔ ∂D, and the coorientation of α is compatible with the orientation of ∂D ∖ (p_1 ∪ ⋯ ∪ p_n). See <Ref> for an example in a disk. Let θ denote a permutation of the {p_i} preserving their cyclic order around the circle. We say a coherently cooriented arc system is symmetric under θ if θ preserves the relation defined by the arcs, as well as their coorientations. Coherently cooriented arc systems arise naturally when considering veering triangulations dual to flows; see <Ref>. Let U_i be a T^2 × I component of U containing a boundary component P_i of M. A meridional annulus of U_i is an essential, properly embedded annulus joining distinct boundary components of U_i whose boundary component on ∂M essentially intersects the prong curves. In what follows, a meridional disk or annulus will always be taken to be transverse to the flow. That such disks/annuli exist is immediate from the local description of the flow in a neighborhood of its closed orbits and boundary. Let U_i be a component of U. Let D be a meridional disk or annulus for U_i, depending on whether U_i is a solid torus or T^2 × I. In either case, we orient the open segments of ∂D ∖ (prongs of ϕ) so that each point of ∂D ∩ (stable prongs) is a source and each point of ∂D ∩ (unstable prongs) is a sink. We call this the singular ϕ-orientation of D. See <Ref>. Following the flow around U_i we obtain a permutation of the stable and unstable prongs and their intersection points with ∂D preserving the cyclic order. We call this permutation the twisting at U_i. Let U_i be a component of U, and let S be a surface relatively carried by τ which is ladderpole at U_i. Then up to an isotopy of S supported in U_i, there exists a meridional disk or annulus D for U_i so that A = S ∩ D is a coherently cooriented arc system that is symmetric under θ, where θ is the twisting at U_i. Moreover, the intersection of a ladderpole annulus of S ∩ U_i with D corresponds to the orbit of an arc of A under θ. Since the curves S ∩ ∂U_i are ladderpole, we can choose a (ϕ-transverse) meridional disk or annulus D such that ∂D intersects these curves minimally. Then by an innermost disk argument, we can isotope S in U_i so that S ∩ D is a collection of properly embedded arcs. We can further isotope S so that each annulus of S ∩ U_i which meets ∂M is disjoint from the prongs of U_i (see <Ref>). Let c be the boundary component of D which does not lie on ∂M (if D is a disk this is the only boundary component). It follows from <Ref> and the definition of the singular ϕ-orientation that the coorientation of S ∩ D is compatible with the singular ϕ-orientation of D at each point of S ∩ c. Further, for each component α of S ∩ D touching both components of ∂D, α is disjoint from the prongs of U_i. This implies that the coorientation of α is compatible with the singular ϕ-orientation of D at the boundary point of α on ∂M.
The statement about θ-symmetry follows from the fact that each annulus of S∩ U_i has boundary components parallel to the prongs of U_i, which themselves are θ-symmetric. We say a meridional disk/annulus satisfying the conclusion of <Ref> is adapted to S. §.§ Blowups adapted to a surface Let S be a relatively carried surface and fix a collection of adapted meridional disks/annuli (from <Ref>) for each component of U that S meets in ladderpole annuli. Here we construct a (generalized) dynamic blowup of ϕ on M that is transverse to S by modifying the flow only in these ladderpole components of U. We begin by describing the necessary blowups at each orbit in U and then describe the blowups along the boundary components in U. §.§.§ Blowup at a closed orbit First consider a closed orbit γ of κ_s contained in a component U_γ of U so that S is ladderpole on U_γ, let D be a meridional disk of U_γ adapted to S, and f the return map of the flow. Near p=D c, f has the standard form described in <Ref>. Note that we allow for the possibility that γ is a regular orbit; in this case no blowup will be performed (see <Ref>), but we must still position S in U_γ so that it is transverse to the flow. Recall that T in D is the union of prongs (see <Ref> where they are indicated by blue and red radial arcs), permuted by f and cutting D into quadrants. The boundary of D, divided into quadrant arcs, receives a singular f-orientation as in the previous section (see <Ref>). Let A be the coherently cooriented arc system properly embedded in D, obtained from S as in <Ref>. Given this, we construct the blow up tree G as follows: Let A' be the set of parallel classes of A (that is, one arc in A' for each set of arcs in A that begin and end in the same quadrants), excluding the arcs with endpoints in adjacent quadrants. Let G be the dual tree of A' in D: it collapses each complementary component to a vertex and has an edge crossing each component of A'. Note that G comes with an embedding into D and the θ–action of f on the disk and the arcs (where θ is the twisting around U_γ) gives rise to an automorphism of G, so far defined not pointwise but with respect to its action on vertices and edges. We orient each edge of G consistently with the co-orientation of the arc to which it is dual. We may identify D p with the complement of G in a disk D^♯, in such a way that each prong of T coming into D terminates at the vertex of G associated to the region where that prong enters D. The union of G with the prongs of T gives the full blowup tree T^♯, which we observe has no vertices of valence two. See <Ref>. We now need to extend the dynamics of f on D p to a continuous map on D^♯ for which the vertices of G are periodic, and the return map to each edge has the dynamics (up to topological conjugacy) prescribed by the orientations. This is done following the discussion in <Ref>. Each quadrant Q of the original picture embeds now with a compactification arc that lies along the tree and whose edges are consistently oriented. To see this consistency, denote by Q^♯ the compactified quadrant, and consider the arcs of A' that cross the arc of D subtended by Q. Their dual orientations are consistent with the f-orientation of that arc, and this implies the corresponding edges of G, which lie in the boundary of Q^♯, have consistent orientations. The prong arcs adjacent to Q have consistent orientations by definition. See <Ref> for a typical picture of this. 
Thus we may choose the dynamics of the return map to be given by a standard repeated blowup on each quadrant (<Ref>), and match them along the two sides of each edge to obtain a globally continuous map. (The choice for these matchings are one source of the nonuniqueness mentioned in <Ref>.). The arc system now embeds in the new picture so that each arc crosses the tree at the edge dual to it, and its coorientation agrees with the orientation of the edge. The arcs from A which were excluded when constructing A' (i.e. those that meet adjacent quadrants) were transverse in the original picture and so remain transverse here (see <Ref>). §.§.§ Blowup at a boundary component Next let P be a boundary torus contained in the component U_P of U. The flow near P in U_P can be taken to be the suspension flow of a map on a meridional annulus E, which is a Fried blowup as in <Ref>. Let the inner boundary of E correspond to its intersection with P, and again consider the sectors into which E is divided: this time each sector already contains a “blowup arc” along the inner boundary. Again let A be an invariant embedded collection of arcs with endpoints on the boundary of E (each arc has at least one endpoint on the outer boundary), and again we require that A inherits a coorientation from the boundary arcs (see <Ref>). Components of A with an endpoint on the inner boundary must be contained in a single sector and their coorientation is consistent with the orientation on both boundaries. Let A' contain one arc for each parallel class of arcs of A, excluding (as before) arcs which have endpoints in adjacent quadrants, but now also excluding arcs with an endpoint on the inner boundary circle. We will now construct a graph G embedded in E which acts as the “dual” to A'. The graph G contains a circle which we identify with the inner boundary of E and associate to the annulus complementary region of E A'. In each disk complementary region of E A' we place a vertex of G, and on the circle we place one vertex for each arc of A' adjacent to the annulus region, and one vertex for each landing point of an arc of T that enters E directly in the annulus region (these vertices should be arranged in the same circular order as the arcs and prongs that they correspond to). We place an edge in G crossing each arc of A', connecting the two vertices on either side if they are both disks, or a vertex in a disk to the associated vertex on the circle if one of the regions is the annulus. Each prong has a “landing point” in G: if it enters E in a disk region of E A then we attach it to the corresponding vertex, and if it enters E in the annulus component we attach it to the associated vertex on the circle of G. We identify E minus its inner boundary with E G in such a way that the prongs terminate at their landing points. Let T^♯ denote G union the prongs. Its edges inherit an orientation: Each dual edge is oriented by the coorientation of the arc of A' dual to it. Each segment e on the circle is contained on the boundary of a component of E T^♯ that meets the outer circle in a unique quadrant (because the arcs of A' and/or the prongs defining e prevent any prongs from entering through this region). We give e an orientation inherited from the boundary arc of this quadrant; see <Ref> for examples of this. The components of E G' are again compactifications of the original quadrants, with multiple fixed points on the compactification arcs and dynamics prescribed by the orientation. 
We again use multiple blowups to obtain a self-map which has the prescribed dynamics on the graph and restricts on the quadrants to be conjugate to the original map. The arcs embed in this picture, again with consistent coorientations, by a similar argument. The arc data, and hence the construction, are invariant under the twisting permutation θ, so f can be extended to the blowup graph as before. §.§ Suspending the blowups After the dynamic blowup associated to a system of arcs A, we obtain a system of positively transverse annuli for the suspension flow as follows. Consider first the case around a closed orbit. In each quadrant Q let Q_0 be the region bounded by an f-invariant hyperbola. A radial arc a in Q_0 connecting the origin to the hyperbola Q_0 is mapped by f to a disjoint radial arc, on the side of a pointed to by its coorientation. In Q× [0,1] we may connect a×{1} to f(a)×{0} by an disk which (except over the origin) is transverse to the vertical flow. Gluing Q_0×{1} to Q_0×{0} via f produces an annulus transverse to the flow. Now in the blown up quadrant Q^♯, we can adjust a so that it meets one of the new blowup intervals, and now f(a) lies strictly to one side of a and the same construction yields an annulus which is also transverse to the flow on its boundary. To produce a complete picture, we do this for all the arcs of A. That is, each arc of A can first be realized as a pair of radial arcs (in their respective quadrants) passing through the center of D and terminating in Q_0 for each Q (we can choose Q_0 small enough and trim D so that its boundary contains a large enough arc of Q_0). We blow up each quadrant and attach the blowup arcs along the tree, in a way that respects the dynamics as above, and the arcs of A may be perturbed near the blowup point so that they become embedded again and each one passes through its dual tree edge. The map f^♯ created in this way takes each arc to the side pointed to by its coorientation, and so as before we can create annuli through the solid torus that are transverse to the flow. There is one case that is slightly different: If an arc of A has endpoints on adjacent quadrants, then we can push it away from the singularity so that it still satisfies the condition that its f-image is disjoint from it and on the side determined by its coorientation. That is, the corresponding annulus can be made transverse to the flow without performing any blowup. (This is the reason that arcs like this were excluded from A'.) When there are only four quadrants (e.g. the periodic orbit is in fact regular) this is all that can happen, so no blowup is necessary. This construction can be made to match the original S along the boundary of the neighborhood, because S meets the neighborhood in annuli that are transverse to the flow near its boundary, in the same pattern as what we have created by <Ref>. The resulting surface S^♯ is transverse to the blowup flow ϕ^♯ obtained by suspending the local blowups f^♯ obtained above. For a blowup near a boundary component, the argument is similar, except that some of the arcs terminate in the boundary. With these constructions in hand, we can complete the proof of <Ref>. Suppose that S is relatively carried by the dual veering triangulation τ relative to U, as in our setup. The surface S is transverse to ϕ outside of U (<Ref>) and meets each component of U in either a collection of disks, annuli meeting ∂ M, or ladderpole annuli (or the empty set). 
Each disk intersection can be isotoped in U to be transverse to ϕ since S is transverse near the boundary of U. The same is true for any annulus meeting ∂M that is not of ladderpole slope since its boundary can be made transverse to the flow in ∂M. Finally, the discussion above gives a recipe to dynamically blow up ϕ within the remaining components of U to produce a flow ϕ^♯ on M that is transverse to the surface S^♯ so obtained. By construction, S^♯ and S agree outside of the components of U where S is ladderpole. However, within such components, the description above (relying on <Ref>) implies that they are isotopic by an isotopy supported in U. Hence, after an isotopy, S is transverse to the (generalized) dynamic blowup ϕ^♯ and this completes the proof.

§ ALMOST TRANSVERSE SURFACES ARE RELATIVELY CARRIED

In this section, we prove the converse to <Ref>: a surface almost transverse to a pseudo-Anosov flow is relatively carried by any associated veering triangulation. Together with <Ref>, this establishes: Let ϕ be a transitive pseudo-Anosov flow on M, possibly with boundary, and let τ be any associated veering triangulation. Then, up to isotopy, a surface S is almost transverse to ϕ if and only if it is relatively carried by τ. First we need a lemma extending <cit.> to the case of manifolds with boundary. We remark that our generalization is necessary even in the case where the manifold M from <Ref> is closed. This is because our strategy for the proof is to first perform a Fried blowup on the original flow and then apply <Ref>. Recall the Euler class e_τ as defined in <Ref>. Any surface S almost transverse to a transitive pseudo-Anosov flow is taut. If τ is any associated veering triangulation, then x([S]) = -e_τ([S]). To begin, let ψ be the almost pseudo-Anosov flow transverse to S and let τ be a veering triangulation on M ∖ κ_s where κ is any collection of orbits that kills the perfect fits of ψ (<Ref>). By blowing down blowup annuli of ψ that S meets in proper arcs, we can assume that S is minimally transverse to ψ. Let U be the associated tube system where each component of U is a standard neighborhood of a component of κ_s, and let U_m be the subcollection of components that S meets in meridional annuli (<Ref>) or disks. Hence, the components of κ_s that are contained in components of U_m are lone orbits and boundary components of M. That is, all blown annuli of κ_s ∩ U_m are contained in ∂M. Recall from <Ref> that e_τ = 1/2 ⟨(∑_i ind(T_i)·core(T_i)) - c, ·⟩, where c is the union of the cusp curves of all torus shells of M ∖ B^s (corresponding to the boundary components of M) and the sum is over solid torus components T_i of U (corresponding to components of κ_s). Because of <Ref>, in each T^2 × I component of U_m, we can isotope c, preserving orientation, to the union of stable singular orbits of ψ on ∂M (i.e. those associated to stable prong curves). We replace c by its image under this isotopy. Similarly, on each solid torus component of U_m, we can assume that core(T_i) agrees on the nose with the associated closed orbit of ψ. Further, observe that S meets no singular orbits contained in U ∖ U_m. Hence, when computing the pairing e_τ([S]) we can restrict to the summation terms associated to U_m and consider only the curves of c contained in U_m. Next, recall that if x is a p-pronged singularity of a singular foliation on a surface with boundary S, its index is defined to be ind(x) = 2-p. This includes the case when x lies on ∂S; see <Ref>.
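For orientation with these sign conventions (the following examples are added for illustration only and use nothing beyond the definition just given): a regular interior point of the foliation is 2-pronged and has ind(x) = 0, while an interior k-pronged singularity with k ≥ 3 has ind(x) = 2-k ≤ -1, so only genuine singularities contribute. For instance, if a closed surface of genus 2 carries such a foliation with only 3-pronged singularities, then the index formula recalled just below gives

∑_x ind(x) = 2χ = 2(2 - 2·2) = -4,

so there are exactly four singularities.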
With this definition, given a foliation of S with only prong singularities and leaves tangent to ∂S, we have ∑_x ind(x) = 2χ(S), where the sum is over all singularities of the foliation. Now recall that W^s is the stable foliation of ψ. Since S is transverse to ψ, S ∩ W^s defines a singular foliation of S, the singularities of which are the points of intersection between S and the singular orbits of ψ that are contained in U_m. All these intersections are positive and so comparing formulas we see that χ_-(S) = -e_τ([S]). Since ψ is transitive, we can find a closed curve pairing positively with any union of components of S. Hence each such union is homologically nontrivial. By <Ref>, S is taut. With this lemma in hand, we turn to the proof of <Ref>. The proof strategy of first performing a Fried blowup to obtain a pseudo-Anosov flow without perfect fits on a manifold with boundary will also be key in the next section. Recall that τ is a veering triangulation on M ∖ κ_s where κ is any collection of orbits that kills the perfect fits of ϕ. If S is relatively carried by τ, then S is almost transverse to ϕ up to isotopy by <Ref>. Conversely, suppose that S is a surface almost transverse to ϕ. This means S is transverse to an almost pseudo-Anosov flow ϕ^♯ on M obtained from ϕ by (generalized) dynamic blowup. Note that we can naturally think of κ as a collection of closed regular orbits of ϕ^♯. Since κ consists of regular orbits, the relative veering triangulation τ on M is strict if and only if κ = ∅. To remedy this, perform a Fried blowup of ϕ^♯ on each orbit of κ, obtaining an almost pseudo-Anosov flow ϕ^* on a manifold M^* which has a boundary component for each orbit in κ (in addition to any boundary coming from M). Since each intersection of S with κ is transverse and positive, we obtain a surface S^* properly embedded in M^* that is transverse to ϕ^* and maps to S under the blowdown M^* → M from ϕ^* to ϕ^♯. By construction, the flow ϕ^* has no perfect fits (see <Ref>). Moreover, its associated veering triangulation τ^* agrees with τ under the blowdown M^* → M; said differently, they are the same veering triangulation of M ∖ κ_s; see <Ref>. Since S^* is transverse to ϕ^* and τ^* is associated to ϕ^*, <Ref> gives that S^* is taut and x([S^*]) = -e_τ^*([S^*]). By <Ref>, we may isotope S^* to be relatively carried by τ^*, fixing ∂S^*. By postcomposing this isotopy with the blowdown M^* → M, we see that S is relatively carried by τ up to isotopy. An alternative approach would be to note that the almost transversality condition implies that no complementary region of S ∩ B^s or S ∩ B^u can be a bigon with both cusps corresponding to negative intersections of S with κ. This fact allows one to apply the steps of the algorithm from <cit.>. This algorithm is fairly complex. The approach above is much simpler, highlighting the utility of working with pseudo-Anosov flows on manifolds with boundary.

§ TRANSVERSE SURFACE THEOREMS

In this section, we prove several transverse surface theorems that realize surfaces as (almost) transverse to pseudo-Anosov flows. We then also use our Fried blowup technique to realize relative homology classes as Birkhoff surfaces (<Ref>). In <Ref>, we illustrate the sharpness of our theorems with an example. Essentially the same technique from the proof of <Ref> (performing a Fried blowup to remove perfect fits) can be used to extend Mosher's transverse surface theorem (for general pseudo-Anosov flows) to manifolds with boundary.
Given a transitive pseudo-Anosov flow ϕ on M (possibly with boundary), any integral first cohomology class pairing nonnegatively with all closed orbits of ϕ is Poincaré dual to a surface almost transverse to ϕ. Let κ be any collection of regular orbits of ϕ that kills its perfect fits; this exists by <Ref>. Following the approach of <Ref>, perform a Fried blowup on all orbits of κ to obtain M^*, ϕ^*, and τ^*. In this case, τ^* is the associated veering triangulation of M^* since ϕ^* has no perfect fits. Any class η ∈ H^1(M) that pairs nonnegatively with all closed orbits of ϕ pulls back to a class η^* ∈ H^1(M^*) with the same property for ϕ^*. Hence, η^* can be realized by a surface S^* in M^* that is carried by τ^* by <cit.>. Since η^* is the pullback of the class η in M, S^* can be extended across U to a surface S representing η and relatively carried by τ. Hence, after an isotopy, S is almost transverse to ϕ by <Ref> and the proof is complete. The point of our strong transverse surface theorems is to realize a surface in (almost) transverse position, up to isotopy (rather than just homology). When the flow is without perfect fits, the strongest possible result is the following: Let ϕ be a pseudo-Anosov flow with no perfect fits on M, possibly with boundary. Let S be a taut surface pairing nonnegatively with all closed orbits of ϕ. Then S is almost transverse to ϕ up to isotopy. Since ϕ has no perfect fits, there is an associated veering triangulation τ in M ∖ Π_s. By the results of <cit.>, the cone in H_1(M) generated by closed orbits is equal to cone_1(Γ), so [S] ∈ cone_1^∨(Γ). By <Ref>, S is carried by τ up to isotopy, so is almost transverse to ϕ up to isotopy by <Ref>. Combining the above arguments gives the following extension to the general case. Let ϕ be a transitive pseudo-Anosov flow on M, and let κ be any finite collection of closed regular orbits of ϕ that kills perfect fits. Then an oriented surface S in M is isotopic to a surface that is almost transverse to ϕ if and only if * S is taut, * S has nonnegative (homological) intersection with each closed orbit of ϕ, and * the algebraic and geometric intersection numbers of S with κ are equal. Note that if ϕ has no perfect fits (e.g. when ϕ is circular), then we can take κ = ∅, and in this case the third condition is vacuous. Hence <Ref> is a special case of <Ref>. First, isotope S so that condition (3) holds. Using the previous method, we can perform a Fried blowup of ϕ at κ, just as in the proof of <Ref>, to obtain a pseudo-Anosov flow ϕ^♯ without perfect fits on M^♯ and a properly embedded surface S^♯ that blows down to S. From condition (3), it follows that S^♯ is still taut in M^♯. Indeed, the condition implies that the blowup boundary components of S^♯ are oriented coherently on each blowup component of ∂M^♯. In particular, any homologous surface needs to have at least that number of boundary components along ∂M^♯, and so tautness of S implies tautness of S^♯. Moreover, as in <Ref>, S^♯ has nonnegative intersection number with each closed orbit of ϕ^♯. Hence we can apply <Ref> to conclude that S^♯ is almost transverse to ϕ^♯ after an isotopy. By blowing back down (so that each component of ∂S^♯ on a blowdown torus is blown down to a point), this implies that S is almost transverse to ϕ, as required.

§.§ Realizing Birkhoff surfaces

Our technique of blowing up to obtain flows on manifolds with boundary has applications to Birkhoff surfaces for transitive pseudo-Anosov flows.
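Before recalling the general definition, it may help to keep in mind the classical example that motivates the terminology. (This example is standard and included here only for illustration; the notation X, c, ν, A_c is introduced just for it.) For the geodesic flow on the unit tangent bundle T^1X of a closed orientable hyperbolic surface X, fix a simple closed geodesic c and a unit normal field ν along c. The Birkhoff annulus

A_c = { v ∈ T^1X : the basepoint of v lies on c and ⟨v, ν⟩ ≥ 0 }

is an annulus whose interior is embedded transversely to the flow and whose two boundary circles are the closed orbits corresponding to c with its two orientations.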
Recall from the introduction that a Birkhoff surface of ϕ is an immersed surface in M whose boundary components cover closed orbits of ϕ and whose interior is embedded transversely to ϕ. We call the closed orbits of ϕ that are covered by the boundary components of a Birkhoff surface its boundary orbits. More generally, a Birkhoff surface of a dynamic blowup of ϕ will be called an almost Birkhoff surface of ϕ. Finally, a Birkhoff section is a Birkhoff surface that meets every flow segment of some definite (uniform) length. Let κ be a collection of closed orbits, possibly containing singular orbits. Let U_κ be a standard neighborhood of κ. Then using excision and Poincaré duality we make the identification H_2(M, κ) = H^1(M κ). Meanwhile, if π M^♯→ M is the Fried blowup of ϕ to ϕ^♯ at κ, then π induces a homeomorphism from M^♯ U^♯→ M U_κ, where U^♯ = π^-1(U_κ) is a standard neighborhood of the components of ∂ M^♯ that blow down to orbits in κ. The next theorem generalizes Fried's result on Birkhoff sections <cit.> and seems new even for geodesic flow on hyperbolic surfaces. Let ϕ be a transitive pseudo-Anosov flow and let κ be any collection of closed orbits. Then a class η∈ H_2(M, κ) = H^1(M κ) is represented by an almost Birkhoff surface in M if and only if it is nonnegative on closed orbits of M κ. Moreover, if each singular orbit of ϕ is either in κ or positive under η, then η is represented by a (honest) Birkhoff surface. Note that the `moreover' statement is automatic when ϕ is Anosov. Note that when κ =∅, the result reduces to <Ref>. We show that the general case also reduces to what we have already shown as follows: As above, let π M^♯→ M with ϕ^♯ be the Fried blowup of ϕ with respect to κ. The class η pulls back to η^♯ = π^*η∈ H^1(M^♯), and we claim that η^♯ is nonnegative on the closed orbits of ϕ^♯. This is obvious for the closed orbits that are not contained in blown boundary components, i.e. do not blow down to components of κ. If γ is a closed orbit in a blown boundary component, then we can approximate arbitrarily high multiples γ^k with a closed orbit c_k that is not contained a boundary component. More precisely, we can choose a periodic orbit d in (M) such that [c_k] = [γ^k] · [d], where d is a fixed orbit. This can be accomplished using a Markov partition for the flow, or the flow graph of any associated veering triangulation (e.g. <cit.>). Since η^♯(c_k) = η(π(c_k)) ≥ 0, we must have η^♯(γ) ≥ 0 as claimed. Hence, we may apply <Ref> to produce a surface Σ^♯ representing η^♯ that is almost transverse to ϕ^♯. Of course, if ϕ^♯ has no interior singularities that are η^♯–null, then Σ^♯ can be chosen to be honestly transverse to ϕ^♯. It only remains to position Σ^♯ so that it blows down to a Birkhoff surface Σ in M. That is, each blowup boundary component P of M^♯ is foliated by the fiber circles of π_P P → k for k ∈κ, and Σ^♯ needs to be arranged so that each of its boundary components is either one of these circles (in which case that component blows down to a point of transversality with ϕ) or is transverse to this foliation (in which case that component covers the closed orbit of ϕ). Given our local model for the blowup, it is not hard to see that this is possible. See <cit.> where this is done very carefully in the case of Birkhoff sections using essentially the same local model. Let ϕ be a transitive pseudo-Anosov flow on a compact manifold M, and κ a finite collection of closed orbits. 
Then there is an almost Birkhoff surface whose boundary orbits are contained in κ if and only if the closed orbits of ϕ are contained in a closed half-space of H_1(M κ ; ℝ). §.§ An example The purpose of this subsection is to show by example that <Ref> (<Ref>) does not hold without the `no perfect fits' assumption. That is, we will construct a manifold M, flow ϕ, and a taut surface Z that pairs nonnegatively with all closed orbits but cannot be made almost transverse to ϕ by isotopy. This will also show condition (3) in <Ref> (<Ref>) cannot be removed. Our example is based on the final example from <cit.>, and assumes the reader is familiar with the basic language of sutured manifolds (see <cit.>). Let H be a sutured solid torus with six longitudinal sutures labeled a_1,…, a_6 in cyclic order as shown in the left image in <Ref>. Let A_ and A_ be the annuli in H between a_2 and a_3 and between a_5 and a_6, respectively. Let S be a connected compact orientable surface with two boundary components and χ(S)<0, and let T be another such surface but with four boundary components. The products (S× I, S× I) and (T× I, T× I) are product sutured manifolds. Let N be the sutured manifold obtained by gluing S× I to a_1∪ a_4 by a homeomorphism, and gluing T× I to a_2∪ a_3∪ a_5∪ a_6 by a homeomorphism. This book of I-bundles is shown schematically in <Ref>. When talking about N, we continue to refer to S× I, T× I, a_1,…, a_6 et cetera, and by these we mean the images of those objects under the gluings used to build N. Let Y be the surface in N which is obtained by taking the union of S×{1/2} with an annulus in H connecting a_1 and a_4. The orientation on Y is chosen so that for each s∈ S, the oriented segment s× I intersects Y positively. Let W=_+ N, oriented compatibly with the coorientation of _+N. This is shown on the right side of <Ref>. Note that _-N is homeomorphic to _+N. If we choose a homeomorphism between the two, we can glue to obtain a closed manifold. In such a situation we will continue to use the notations W and Y to denote the images of W and Y in the closed manifold. We will take the following facts as given: Let N be as above. * There exists a hyperbolic 3-manifold M obtained by gluing _-N to _+N, and a pseudo-Anosov flow ϕ on M positively transverse to W. * The surfaces W,Y⊂ M are taut and lie over the face of the Thurston norm ball determined by the Euler class of ϕ. * Further, the pullback semiflow ϕ_N on N has the following properties: * ϕ_N is compatible with the sutured structure of N in that it points out of N along _+N and into N along _-N * “The semiflow is a product away from H": the flowlines of ϕ_N lying in S× I and T× I are all of the form x× I for some x∈ S∪ T. * There are exactly 2 closed orbits of ϕ_N lying in H, γ_1 and γ_2. They are both freely homotopic to the core of H and also have the property that γ_1 is freely homotopic to γ_2 with the orientation reversed. * Every other flowline of ϕ_N in H is either a properly embedded interval or has one end on _± N and accumulates on γ_1 or γ_2 in the forward or backward direction. * There exists a periodic orbit γ_Z of ϕ which pulls back to a single oriented flowline of ϕ_N in H from A_ to A_. The orbits γ_1 and γ_2, as well as parts of their stable and unstable leaves, are shown in the center image of <Ref>; the orbit γ_Z is shown in the righthand image. 
We remark that the flow ϕ in the above lemma is constructed using the Gabai–Mosher construction of pseudo-Anosov flows transverse to sutured hierarchies in closed oriented atoroidal 3-manifolds, which is unpublished. However, Mosher laid much of the foundation in <cit.>, and Landry and Tsang are expositing the rest in detail (<cit.> and work in progress). Building on <Ref>, we have: There exists a closed 3-manifold M, a transitive pseudo-Anosov flow ϕ on M, and a taut surface Z⊂ M such that [Z] pairs nonnegatively with each closed orbit of ϕ but is not almost transverse to ϕ up to isotopy. Let M be as in <Ref>; we will use the same notation as in that lemma. Note that since W and Y are taut and lie over the same face of the Thurston norm ball, the surface Z=W∪ Y is taut. Note that Z is not almost transverse to ϕ up to isotopy because this would imply the same for Y, which is impossible: by <Ref>(c)(v), Y has pairing -1 with the orbit γ_Z. We now claim that for every closed orbit γ of ϕ, [Z] pairs nonnegatively with [γ]; this will complete the proof. If γ is equal to either γ_1 or γ_2 then γ has null pairing with Y, since the γ_i's are homotopic to the core of H, which can be homotoped into Y. Hence we may assume that γ is any other closed orbit of ϕ. In this case, γ pulls back to finitely many oriented segments γ_1,…, γ_n in N beginning on _-N and ending on _+N. Fix 1≤ i≤ n and consider γ_i. By <Ref>(c)(ii) γ_i either lies in (S∪ T)× I or in H. Suppose γ_i⊂ (S∪ T)× I. Then γ_i contributes only positive intersections to the algebraic intersection number of Z and γ by <Ref>(c)(ii). Next, suppose γ_i⊂ H. The absolute value of the algebraic intersection of γ_i and Y is at most 1 since Y∩ H separates H, while the algebraic intersection of γ_i with W is equal to 1. It follows that γ_i contributes nonnegatively to the algebraic intersection of Z and γ. Summing over i=1,…, n, we see that [Z] and [γ] pair nonnegatively as claimed. One can modify the example above to give a manifold M, flow ϕ, and surface Z as in the statement of the theorem with Z connected. In the construction, replace the surface S∪ T with a connected surface R with 6 boundary components as indicated on the left of <Ref>. The same methods that Mosher uses to justify <Ref> can be used to glue the inward and outward pointing boundaries of N=H∪ (R× I) to produce a hyperbolic manifold together with a pseudo-Anosov flow ϕ satisfying all the properties in <Ref> (c). Now cap off R×{1/2} inside H as shown on the right of <Ref> to form a connected surface Z; Mosher's arguments apply equally well to show that Z is taut. The orbit γ_Z furnished by <Ref>(c) passes through the annulus in Z running between a_1 and a_4 negatively and exactly once, up to an isotopy of this annulus supported in H. It remains to show that the negative intersection point of γ_Z with Z cannot be removed by isotopy. To see this, let Σ be the image in M of the outward pointing boundary of N. Let γ_Z be a lift of γ_Z to the universal cover M of M. Then γ_Z passes through a bi-infinite sequence (Σ_n)_n∈ of distinct lifts of Σ to M. In the region of M between Σ_0 and Σ_1, γ_Z has a negative point of intersection with a lift Z of Z. Let N be the universal cover of N obtained as the component of the preimage of N ⊂ M in M containing Σ_0 and Σ_1 in its boundary. 
Since N is assembled by gluing together copies of the universal covers of H and R × I, we can see that Z separates Σ_0 and Σ_1 by observing that Z separates the positive and negative boundaries of N within each of H and R × I. Moreover, the intersection of Z with any copy of the universal cover of H or R × I is connected; together, this implies that Z meets γ_Z in a single negative point of intersection. We conclude that the negative intersection between Z and γ_Z cannot be removed by an isotopy and the example is complete. Note that in this example we could have replaced γ_Z by any periodic orbit whose pullback to N contains a segment from A_ to A_; there are many of these, since the periodic orbits of ϕ are dense. § SHADOWS OF TRANSVERSE SURFACES In <Ref> we will show that almost transverse position is essentially unique. The key to this is understanding the projections to the orbit space of lifts of compact connected surfaces transverse to an almost pseudo-Anosov flow. This is the goal of the present section. In the case where the ambient manifold has no boundary, this was essentially done by Fenley in <cit.>. First we set some notations and terminology. §.§ Notation and conventions Let ^s and ^u denote the stable and unstable foliations, respectively, of the orbit space of an almost pseudo-Anosov flow ψ. For p∈, let ^s(p) denote the leaf of ^s through p, and similarly for ^u(p). Recall that Θ M → is the projection to the flow space from the universal cover of M. If p∈, we set γ(p):=Θ^-1(p) to be the orbit of ψ corresponding to p. A quadrant of p∈ is a component of (^s(p)∪^u(p)). Points in , as well as those connected to by singular leaves of either ^s or ^u, have infinitely many quadrants; all other points have 2n quadrants for some n≥ 2. Two quadrants are adjacent if they meet along a noncompact half-leaf. A stable or unstable slice leaf is a properly embedded copy of contained in a leaf of ^s or ^u respectively. A stable or unstable boundary slice ray is a ray embedded in meeting exactly at its basepoint, obtained as the union of finitely many blown segments with a noncompact half-leaf of ^s or ^u, respectively. We observe that each component of is a slice leaf of both ^s and ^u. Each slice leaf which is not an entire component of separates into either 2 or 3 components. The 3 component case arises when a slice leaf meets in finitely many, necessarily contiguous, blown segments. Similarly, a boundary slice ray separates into 2 components. Now suppose S is a properly embedded compact surface transverse to ψ. Lift S to S⊂ M, and let Ω=Θ( S)⊂. (Here Ω stands for ombre, or shadow.) Note that the restriction of Θ to S is a homeomorphism onto Ω since S separates M and orbits intersect S positively. If a is a blown annulus of ψ and S∩ a is nonempty, then it is either (1) a collection of nonseparating arcs or (2) a collection of circles homotopic to the core of a. If a is a lift of a to M that intersects S, we see that Θ( a) ∩Ω is equal to the entire blown segment Θ( a) in case (1), and equal to Θ( a) minus its endpoints in case (2). If a is not contained in the boundary of M and its intersection with S is a (possibly empty) collection of arcs, then a can be collapsed to obtain a new almost pseudo-Anosov flow transverse to A with fewer blown annuli. Hence for simplicity we assume that if S intersects a blown annulus in a collection of arcs, then that annulus lives in M; we say S has the clean annulus property with respect to ψ. 
Note that assuming S has the clean annulus property is slightly weaker than ψ being a minimal blowup with respect to S, because we are allowing the existence of blown annuli not contained in M which do not meet S. The advantage of this relaxation is that it allows us to individually consider the components of a disconnected surface that is minimally transverse to ψ. Such a surface may not be componentwise minimally transverse; however, each component has the clean annulus property. Without the clean annulus property, the entire discussion in this section goes through with minor modifications. §.§ Executive summary The main results of this section are fairly easily stated and we do so now. Their proofs appear in <Ref>. As before, let ψ be an almost pseudo-Anosov flow in a compact 3-manifold M which is transverse to a compact, connected, properly embedded surface S. Let S be a lift of S to the universal cover M of M. Let Ω=Θ( S)⊂; this set is connected and disjoint from its frontier (Ω) because it is open. The frontier of Ω= Θ( S) is a disjoint union of stable/unstable slice leaves and boundary slice rays, each isolated from the others and bounding Ω to one side. This is an immediate consequence of the more technical <Ref>. A blowup tree is defined to be either a maximal connected union of blown segments or a singular point not contained in a blown segment. Let ψ be an almost pseudo-Anosov flow on a compact manifold M and let S be a properly embedded compact surface transverse to ψ. Let S be a lift of S to M and let Ω= Θ( S) be its image in the flow space. Let T be a blowup tree that intersects (Ω) ⊂. If T ∩Ω meets the interior of a blown segment, then either: * T∩Ω=(s) for a blown segment s that meets two nonadjacent quadrants, * T∩Ω=(s) for a blown segment s⊂, or * T is an entire component of contained in Ω. If T is disjoint from Ω, then: * T∩(Ω) is either a single point or an arc comprised of finitely many blown segments. Moreover, Ω meets precisely two quadrants of T, and these quadrants are adjacent. See the top chart in <Ref> for a visual companion to <Ref>. The situations described <Ref> translate to straightforward statements about S in M. Recall that S has the clean annulus property. Thus S intersects each blown annulus in a collection of closed curves or not at all, unless the blown annulus lies in M, in which case S may intersect a blown annulus a in a family of arcs connecting the two boundary components of a. From this one sees that cases (a)-(c) in <Ref> correspond to the following situations in M: * S passes through a blown annulus that lies in the interior of M. * A boundary component of S lies interior to a blown annulus a⊂ M. * A boundary component of S meets each blown annulus that lies on the boundary component of M corresponding to the given component of . For the final case, note that the two quadrants in (d) of <Ref> share a single periodic half-leaf of ^u or ^s, corresponding to a half-leaf λ of W^u or W^s. By analyzing the intersection of ⟨ g⟩· S with λ, where ⟨ g⟩≤π_1(M) is the stabilizer of λ, one can show without too much work that S intersects the projection λ of λ to M in a circle. (This also follows from the proof of <Ref>). Thus we have: * There is a periodic half-leaf λ of W^u or W^s such that S ∩λ contains a circle. The corresponding pictures are shown in the lower chart of <Ref>. §.§ Shadows of transverse surfaces: details Note that S separates M. 
If B is a subset of , we say B lies above or lies below S if Θ^-1(B) lies entirely above or below S in M, respectively (with respect to the coorientation of S). The following is a basic observation about asynchronously proximal orbits. Let S be a compact, connected surface in an irreducible 3-manifold M which is transverse to a flow ϕ, and let S be a lift of S to M. Suppose that the orbits passing through a,b∈ M are asynchronously proximal. Then for all t≫ 0, ϕ_t(a) and ϕ_t(b) lie on the same side of S. The same is true for all t≪ 0 if a and b are backward asynchronously proximal. We shall prove the forward statement; the backward one is similar. Let N be an ϵ-tubular neighborhood of S for some ϵ>0 such that N can be identified with S× I where the I-fibers are orbit segments of ϕ. Let N be the lift of N containing S. If ϕ_t(a) and ϕ_t(b) lie on different sides of S for large t, then they will be at least 2ϵ apart for large t, hence are not asynchronously proximal. Let ℓ be a periodic half-leaf based at the periodic point x ∈. We say that ℓ is locally expanding at x if the local first return map to a transverse disk in M through a point along the closed orbit corresponding to x expands along the periodic leaf corresponding to ℓ. Note that any blown segment is locally expanding at one of its endpoints and locally contracting (similarly defined) at its other. We also observe that a periodic half-leaf which is not a blown segment is locally expanding/contracting depending on whether it belongs to ^u or ^s, respectively, and that the half-leaves terminating at a periodic point alternate between locally expanding and locally contracting in the circular order at that point. Each periodic half-leaf has at least one endpoint and we endow it with a unique orientation defined by requiring half-leaves to be oriented toward locally contracting endpoints and away from locally expanding endpoints. Let ψ be an almost pseudo-Anosov flow on a compact manifold M and let S be a properly embedded compact surface transverse to ψ. Let S be a lift of S to M and let Ω= Θ( S) be its image in the flow space. Let s be a blown segment that intersects Ω. If s is not a subset of , then: * (s)⊂Ω and s⊂(Ω). If s⊂ then either: * (s)⊂Ω and s⊂(Ω), or * the component of containing s lies entirely in Ω. In cases (a) and (b), the locally expanding and contracting endpoints of s lie under and over S, respectively. Let A be a blown annulus. If S∩ A is a family of circles, then each circle must be cooriented toward the attracting boundary component of A. Choosing a lift A of A that intersects S, we see that S∩ A is a line intersecting every orbit in A, cooriented toward the attracting side of A. Letting s=Θ( A), we see that (s)⊂Ω and s ⊂(Ω), with the locally expanding and locally contracting endpoints of s lying below and above S, respectively. This describes situations (a) and (b). Now suppose that A lies in a component T of M and S∩ A is a family of intervals. Then each component of S∩ T intersects each blown annulus in T in a family of intervals. Lifting to A to A⊂ M, we see the entire component of containing A is contained in Ω. This is case (c). Since S has the clean annulus property there are no other types of nontrivial intersections between S and blown annuli, so the proof is complete. Let x∈ (Ω) lie below S. If x∉, then there is an open interval I⊂^s(x) such that x∈ I and I⊂(Ω). 
If x∈, then the same is true unless x is a limit of points in Ω∩, in which case I can be taken to be a half-closed interval with boundary point x such that I-{x}⊂∂. If x lies above S, the same is true after replacing below with above and ^s with ^u. We will assume that x lies below S; the proof in the other case is symmetric. Let D be a disk in M transverse to ψ intersecting the lifted orbit γ(x) = Θ^-1(x) in a point. Because γ(x) is disjoint from S, we can choose D small enough so that it is also disjoint from S. Let N(x) be a neighborhood of x contained in Θ(D) and small enough so that it contains no singular points or endpoints of blown segments, except possibly x. Case 1: x is not an endpoint of a blown segment that meets Ω. Let x' be any point in N(x)∩^s(x). Since x and x' are not separated by a singular point by the definition of N(x), γ(x') is asynchronously proximal to γ(x) (<Ref>). Since γ(x) lies below Ω, <Ref> implies that γ(x') lies below Ω so x'∉Ω. This shows that N(x)∩^s(x) is disjoint from Ω and lies below S. Now take a sequence of points (x_n) in N(x)∩Ω converging to x such that each x_i lies in a single complementary component Q of ^s(x). Let d_i be the intersection of γ(x_i) with D. Since D lies entirely below S and all the points in W^s(d_i)∩ D are asynchronously proximal, <Ref> gives that all points in W^s(d_i)∩ D intersect S in forward time. Hence ^s(x_i)∩ N(x)⊂Ω. Letting i→∞ these segments limit on Q∩ N(x), so Q∩ N(x)⊂ (Ω). Since Q⊂^s(x) , Q∩ N(x)⊂ (Ω) and Q∩ N(x) lies below S. The local picture is shown on the left of <Ref>. Case 2: x is an endpoint of a blown segment s meeting Ω, and s∩=∅. By <Ref>, (s)⊂Ω and s is locally expanding at x since x lies below S. Let Q_1 and Q_2 be the two complementary components of ^s(x) which meet along s. Let ℓ_1 and ℓ_2 be the two periodic locally contracting half-leaves at x adjacent to s, indexed so that ℓ_i⊂ Q_i. As in case 1, <Ref> implies that Ω is disjoint from both ℓ_1∩ N(x) and ℓ_2∩ N(x), and that γ(x') lies below S for all x'∈ℓ_1∪ℓ_2. Since (s)⊂Ω and Ω is open we can choose a sequence of points (x_n) in Ω converging to x so that each x_i lies in, say, Q_1∩ N(x). As in case 1, ^s(x_i)∩ N(x)⊂Ω; these segments accumulate on (ℓ_1∪ s)∩ N(x) so ℓ_1∩ N(x)⊂(S). We can argue the same way in Q_2 to complete the proof of this case. The local picture is as shown in the center of <Ref>. Case 3: x is in the boundary of a blown segment s⊂ intersecting Ω. The argument for this case is the same as that of case 2, except now s is adjacent to only a single complementary component of ^s(p) since it lies in . This produces a half-closed interval I in ^s(x)∩(Ω), with boundary point x. Every point in I-{x} must lie in () because x is a 3-pronged singularity and s⊂. The local picture is shown on the right of <Ref>. The next lemma turns the previous local statement into a global one. Let x∈(Ω). If x lies below S, then x is contained in a unique slice leaf or boundary slice ray contained in (Ω)∩^s(x), which lies below S. The symmetric statement is true if x lies above S, with ^u replacing ^s. Let ℓ be the component of ^s(x) (Ω∩∂) containing x and let λ be the component of ℓ∩(Ω) that contains x. We will show that λ lies below S and is a slice leaf if ℓ = ^s(x) and a boundary slice ray otherwise. Observe that ℓ is a closed, connected subspace of ^s(x) and hence a properly embedded tree in with at most one degree 1 vertex on ∂ (since a leaf meets at most one component of ∂). 
By <Ref>, λ contains an interval around each of its points; this interval is open for points in () and half open otherwise. We also have that λ is closed as a subspace of ℓ since (Ω) is closed. Hence, λ is a properly embedded subtree of ^s(x) with at most one degree 1 vertex on ∂ (exactly in the case when ℓ⊊^s(x)). Moreover, λ lies below S: it is connected and disjoint from Ω, contains x which lies below S, and points lying below S cannot be accumulated by points lying above S because “not lying above S" is an open condition. It remains to show that λ is a line or half line. Otherwise, it has a vertex v of degree n≥ 3, but only one of the complementary regions of λ incident to v contains Ω, contradicting the fact that λ⊂(Ω). The following proposition establishes <Ref> and more. Let ℓ be a component of (Ω). Then the following hold: * ℓ is a slice leaf or boundary slice ray of ^s or ^u, and ℓ is stable (unstable) if and only if ℓ lies below (above) S. * ℓ has a neighborhood disjoint from all other components of (Ω). * if ℓ contains periodic points, then there is a unique one p with the property that there is a half-leaf λ terminating at p that intersects Ω. Moreover, λ is the unique such half-leaf, and λ is locally expanding or contracting at p depending on whether ℓ lies below or above S, respectively. Let x∈(Ω) and suppose without loss of generality that x lies below S. Using <Ref> we can find a boundary slice leaf in ^s(x)∩(Ω), or a slice ray in ^s(x)∩ (Ω) terminating in a point accumulated by points in ∩Ω. Let λ be this slice leaf or boundary slice ray. <Ref> also tells us that λ lies below S. Note that Ω is connected, so it only accumulates on λ from one side of λ. This implies that λ is a maximal connected subset of ^s(x) contained in (Ω). Let ℓ_1 and ℓ_2 be two slice leaves or boundary slice rays in (Ω). We claim ℓ_1∩ℓ_2=∅. If ℓ_1∩ℓ_2 is nonempty, it could consist of a single point or some union of half-leaves. In either case, ℓ_1∪ℓ_2 separates into a number of components, none of which contains all of ℓ_1∪ℓ_2 in its frontier. Since Ω lies in a single component of ℓ_1∪ℓ_2, this is a contradiction. It follows that (Ω) is a disjoint union of slice leaves or boundary slice rays. Let C be a component of ( Ω). If C consists of more than one slice leaves or boundary slice rays then it would have nonempty interior, a contradiction. We conclude that C is a single slice leaf or boundary slice ray. If the component contains any point lying under S, we have seen that it is stable and lies entirely under S. Conversely suppose there is a stable component ℓ^s containing a point p lying over S. Then by <Ref> there is an unstable slice leaf ℓ^u passing through x contained in (Ω), contradicting that (Ω) is a disjoint union of slice leaves or boundary slice rays. This proves (a). As noted above, Ω must lie to one side of each slice or slice ray in (Ω), and we know now these are precisely the components of (Ω). This implies that boundary components do not accumulate on each other, proving (b). Turning our attention to (c), suppose that p is a periodic point in ℓ lying below Ω. If p is the boundary point of blown segment intersecting Ω, we have already seen in the proof of <Ref> (cases 2 and 3) that this blown segment is locally expanding at p and is the only half-leaf terminating at p that intersects Ω. Otherwise we are in case 1 of the proof of <Ref>, and there is an open interval I⊂(Ω) containing x that is the limit of segments of leaves of ^s, accumulating on I from the Ω side. 
In this situation there can be at most one half-leaf terminating at p lying on the Ω side of I. If there is such a half-leaf it is transverse to ^s, and hence is an unstable half-leaf which is not a blown segment. In particular it is locally expanding at p. Next we prove there is at most one periodic point p in ℓ which is an endpoint of a periodic half-leaf intersecting Ω. If p∈ℓ is a periodic point incident to two half-leaves in ℓ which are locally contracting at p, then there must be at least one (and hence only one by above) locally expanding half-leaf emanating from p into Ω. This is because the half-leaves meeting p alternate between locally expanding and contracting. Call such a point p a sink of ℓ. If ℓ is a slice leaf then the two noncompact half-leaves in ℓ are stable, and hence oriented toward their endpoints. This forces the existence of at least one sink in ℓ. If ℓ is a boundary slice ray containing no sinks, then all its half-leaves must be consistently oriented, so the half-leaf of ℓ meeting is locally contracting at its endpoint q. There is then a blown segment in ∩Ω meeting q which is locally expanding at q. Now suppose that ℓ contains at least two periodic points which are endpoints of periodic half-leaves intersecting Ω. Choose two such points p and p' which are adjacent along ℓ and are the endpoints of two such half-leaves, r and r' respectively. By above, r and r' are both locally expanding. This means that the union of r, r', and the segment of ℓ between p and p' is not consistently oriented. Since this is part of the boundary of a quadrant, we have a contradiction (see <Ref> and related discussion in <Ref>). This proves (c) when ℓ lies below S, and the other case is symmetric. Suppose T is a blowup tree containing a blown segment s that intersects Ω. By <Ref>, it is either the case that (1) (s)⊂Ω and s⊂(Ω), or (2) s is contained in a component ℓ of that lies entirely in Ω. In case (1), <Ref> implies that T∩Ω equals (s). In case (2) we need only check that there are no blown segments in Ω incident to ℓ, but this is true since S has the clean annulus property. Hence T∩Ω=ℓ, completing the analysis when T intersects Ω. To finish the proof, suppose T is disjoint from Ω and suppose p∈ T∩(Ω). Suppose without loss of generality that p lies below S. By <Ref>, p is contained in ℓ where ℓ is a slice leaf or boundary slice ray of ^s, and there is a single periodic point in ℓ from which a half-leaf h (necessarily locally expanding) emanates into Ω. Since T and Ω are disjoint, h is not a blown segment. Hence T∩(Ω) consists precisely of the blown segments contained in ℓ. Since the two quadrants of T intersecting Ω meet along h, which is not a blown segment, they are adjacent. The following corollary will be used in the next section. A regular open set is one that is the interior of its closure. Ω is regular. First note that ( Ω)= ((Ω)). Indeed, the fact that each boundary component bounds Ω to only one side gives that ( Ω)⊂ ((Ω)), and the reverse inclusion follows from the openness of Ω. Now Ω= (Ω)=(Ω)-(Ω)=(Ω)-((Ω))=((Ω)). § UNIQUENESS OF ALMOST TRANSVERSE POSITION The following results, which are the goals of this section, comprise our uniqueness statement from <Ref>. Let ϕ be a pseudo-Anosov flow on M and let S_1 and S_2 be isotopic properly embedded surfaces which are minimally transverse to generalized dynamic blow ups ϕ_1^♯ and ϕ_2^♯, respectively. Then ϕ_1^♯ and ϕ_2^♯ are combinatorially equivalent. 
Recall the minimally transverse condition means each blown annulus of ϕ_i^♯ not contained in M is intersected by S_i in a positive number of core curves. Let S_1 and S_2 be two isotopic properly embedded surfaces minimally transverse to an almost pseudo-Anosov flow ϕ^♯. Then any two compatible lifts of isotopic components of S_1 and S_2 to M have the same projection to the orbit space. Further, S_1 and S_2 are isotopic along flowlines. The main tool for proving <Ref> and <Ref> is the structure theory of shadows described in <Ref>. §.§ Shadows of isotopic surfaces are unique To begin, we need to recall and extend the notation for flow spaces from <Ref>. We let denote the flow space of the original flow ϕ, lifted to the universal cover M, and let Θ: M→ denote the quotient map. In the case where M has boundary, will have lines that are the images of the planar components of M. We let ^* denote the image of under the quotient map p that collapses each blown annulus to a point, which simply collapses all boundary lines The resulting space is not locally compact at the images of the lines. If M has no boundary, then ^* =. Suppose, as in the statement of <Ref>, that S_1,S_2⊂ M are isotopic surfaces which are respectively minimally transverse to (generalized) dynamic blowups ϕ_1^♯ and ϕ_2^♯ of ϕ. We denote by _i the flow space of ϕ_i^♯, and by p_i_i→^* the map collapsing all blown segments to points (note we are identifying the result of this collapsing with ^* in the obvious way). Since ϕ_1^♯ and ϕ_2^♯ are dynamic blowups of ϕ, there are homeomorphisms g_i M ( blowup locus of ϕ_i^♯ ) → M (blowup locus of ϕ ) which are orbit equivalences from the restriction of ϕ^♯_i to that of ϕ, and homotopic to the identity through maps M ( blowup locus of ϕ_i^♯ ) → M. The following diagram describes some of the spaces and maps at play, but need not commute, because ϕ_1^♯ and ϕ_2^♯ are distinct flows. Mld[swap]Θ_1dΘrdΘ_2 _1rd[swap]p_1 dp _2ldp_2 ^* Now, we choose compatible lifts S_1 and S_2 of components of S_1 and S_2. For brevity we denote Θ_i( S_i) by Ω_i. Let , _i, ^* respectively denote the spaces , _i, ^* minus all blown segments and/or singular points (these are all canonically homeomorphic via the maps in <Ref>). Applied to a set or map, the hovering circle notation denotes restriction to the corresponding set. Here are some salient properties of these objects: * Each p_i_i→^* is a homeomorphism, so it respects interiors and closures in its domain and range. * Regularity of open sets (recall this means A=((A))) is preserved upon passage to open subspaces. Each Ω_i is regular in _i by <Ref>, so Ω_i is regular in _i. With the above notation, p_1(Ω_1) = p_2(Ω_2). We first consider the sets of regular periodic points P_1 in Ω_1 and P_2 in Ω_2, that is periodic points which correspond to regular orbits of the flows. We will show that p_1(P_1)=p_2(P_2) in ^*. Let y_1∈ P_1 be a regular point. Its preimage γ_1=Θ_1^-1(y_1) projects to a periodic ϕ^♯_1-orbit γ_1 in M. Let γ_2=g_2^-1(g_1(γ_1)). This is a periodic orbit of ϕ^♯_2, which is homotopic to γ_1 because g_1 and g_2 are homotopic to the identity. We can lift a homotopy from γ_1 to γ_2 to produce γ_2, a lift of γ_2 to M. We know that γ_1 passes from the negative side of S_1 to the positive side, intersecting it once. We claim that γ_2 intersects S_1 finitely many times, starting on the negative side and ending on the positive side. 
This follows from the fact that, because γ_1 intersects S_1 positively, the distance from γ_1 to S_1 is a proper function, and on the other hand the homotopy from γ_1 to γ_2 has bounded tracks. Similarly, since the lifted homotopy between S_1 and S_2 has bounded tracks, we then have that γ_2 also passes from the negative side of S_2 to the positive side (note there can be at most one point of intersection). Thus Θ_2(γ_2) is in P_2. Now Θ_1(γ_1) and Θ_2(γ_2) will have the same image in ^* by construction. Because the roles of S_1 and S_2 are symmetric, we conclude that P_1 and P_2 have the same images in ^*. Note that P_i⊂_i, and P_i is dense in Ω_i. Using this, and the properties itemized before the lemma statement, we have (where interiors and closures are taken in the punctured spaces _i and ^*): p_1(Ω_1) = p_1(((Ω_1)))= p_1(((P_1)))=(( p_1(P_1))) =(( p_2(P_2)))= p_2(((P_2)))= p_2(((Ω_1))) = p_2(Ω_2). The following lemma implies that p_1(Ω_1)=p_2(Ω_2), completing the proof. The set p_i(Ω_i) is uniquely determined by p_i(Ω_i) in the following sense: Let q be a singular point of ^* that is contained in the frontier of p_i(Ω_i)⊂^*. Then q∈ p_i(Ω_i) unless p_i(Ω_i) meets exactly two adjacent quadrants of q. We have q∈ p_i(Ω_i) if and only if Ω_i intersects the tree of blown segments p_i^-1(q), so this follows immediately from the local structure described by <Ref>. The excluded case is column (d) of <Ref>. §.§ Uniqueness of combinatorial type We can now prove that the isotopy class of a surface almost transverse to a pseudo-Anosov flow determines a unique combinatorial type of minimal dynamic blowup transverse to the surface. Let A be a blown annulus complex of ϕ_1^♯, and let U be a standard neighborhood for A. Let D be a meridional disk or annulus of U, depending on whether U is solid or hollow. Then there are singular stable and unstable foliations of D; as described in <Ref>, the combinatorial type of ϕ_1^♯ is determined by which quadrants of this bifoliation meet along blown segments (the intersections of blown annuli with D), as A varies over all blown annulus complexes. Suppose Q_1 and Q_2 are two quadrants meeting along a blown segment s corresponding to the immersed annulus A_s⊂ A. By minimality, there is an arc of S_1∩ D lying in Q_1∪ Q_2 and intersecting s in a point. This point corresponds to a closed curve α⊂ S_1∩ A_s. Let S_1^α be the component of S_1 containing α. Choose intersecting lifts S_1^α, A_s to M. Then p_1(Ω_1) meets two nonadjacent quadrants of p_1( A_s). Conversely if p_1(Ω_1) meets two nonadjacent quadrants of a singular point corresponding to a lift of the complex A to M, then Ω_1 meets their preimage quadrants in _1, and <Ref> explains that this corresponds to a curve of intersection between A and a blown annulus, meaning that the two corresponding quadrants in D must meet along a blown annulus. Reasoning in this way for all blown annulus complexes, we see the combinatorial type of ϕ_1^♯ is uniquely determined by p_1(Ω_1). The same is true for ϕ_2^♯. <Ref> finishes the proof of combinatorial equivalence. As a consequence, whether or not a dynamic blowup is necessary to achieve transversality can be read off from the data of a relative veering triangulation and a relatively carried surface: Let ϕ be a transitive pseudo-Anosov flow on M and let τ be any compatible veering triangulation. Then, up to isotopy, S is (honestly) transverse to ϕ if and only if S is carried by τ without ladderpole annuli. 
If S is carried by τ without ladderpole annuli, then the fact that τ is positive transverse to ϕ and the picture within each tube show that S is indeed transverse to φ. Let S be transverse to φ and suppose that S is isotopic to a surface S_1 that is efficiently carried by a veering triangulation with ladderpole annuli. Then S_1 is minimally transverse to a nontrivial dynamic blowup ϕ_1^♯ of ϕ, contradicting <Ref>. §.§ Uniqueness of transverse position Recall the setting: we have two isotopic surfaces S_1 and S_2 minimally transverse to a single almost pseudo-Anosov flow ϕ^♯. We first wish to show that compatible lifts of any pair of isotopic components of S_1 and S_2 to M have the same projection to the orbit space of ϕ^♯; for notational simplicity we assume that S_1 and S_2 are connected. Let S_1 and S_2 be compatible lifts to M. By <Ref>, their images contain the same regular points and intersect exactly the same quadrants at each blowup tree in . Hence they are equal. Now we show that S_1 and S_2 are isotopic along flow lines. The above implies that compatible lifts of components of S_1 and S_2 are sections of the bundle of lifted flow lines over their common shadow in . The difference of these sections is a π_1(S_i)–invariant function, and by adding t∈ [0,1] times this function to one section, and projecting to M we obtain a homotopy through transverse surfaces between the components. In fact, we can see that this homotopy is an isotopy as follows: The preimage q^-1(S_i) (where q M → M is the covering map) divides M into regions, and its coorientation induces an orientation on the dual tree T_i of this subdivision. Because S_1 and S_2 are isotopic, there is an isomorphism between these oriented trees that preserves the labeling of the edges by conjugates of π_1(S_1)=π_1(S_2) in π_1(M). Now a flow line ℓ in M, because it is positively transverse to S_i, determines a subset in T_i which is either an oriented line, ray, or segment or a single vertex. Because compatible lifts of S_1 and S_2 have the same shadows, the two lines are the same (up to the isomorphism). In other words, the intersection points ℓ q^-1(S_1) and ℓ q^-1(S_2) appear in the same order along ℓ, as labeled by their corresponding subgroups (or shadows). The linear interpolation given above must therefore preserve the embeddedness of the intersection points, and hence is an isotopy. §.§ An application: boundary leaves are periodic The goal of this subsection is to prove the following proposition. The case where M is closed and φ is pseudo-Anosov was proven by Fenley <cit.>. Let S be a connected surface transverse to a transitive almost pseudo-Anosov flow ϕ. Let S be a lift of S to M, and let Ω be its projection to the flow space. Then any component of (Ω) is periodic. For the almost pseudo-Anosov flow φ on M, let κ be a finite collection of orbits that kills its perfect fits (<Ref>), and let τ be the associated veering triangulation on the fully-punctured manifold M κ_s (<Ref>). Let U be a standard neighborhood of κ_s, so that τ is a veering triangulation of M relative to the tube system U. We begin with the following additional setup. By <Ref>, since S is transverse to ϕ, we can isotope S so that it is relatively carried by τ. Further, we may choose an efficient carried position. In such a position, whenever S meets a component of U, it intersects the corresponding component of κ_s. Note that since S is still transverse to φ, we have not changed its shadow Ω by <Ref>. Let S be a lift of S to M. 
The surface S traverses various sectors of the preimage of τ_U in M, and each sector of this branched surface is contained in a face of the preimage τ of τ in M. We say such a face is traversed by S if it contains a sector in the preimage of τ_U traversed by S. Recall the definitions of and Ω from <Ref> (the circle notation denotes removal of all non-regular points). The punctured image Ω=Θ( S) in is ideally triangulated by the images of the faces of τ that are traversed by S. We begin by noting that the faces of τ traversed by S triangulate their image in . To see this, note that the union of these faces in M determine a properly embedded surface S_ in M κ_s whose image in M κ_s is a surface carried by τ. It follows that the projection Θ restricted to S_ is a homeomorphism onto its image in . We denote this image by Ω_. For details, see <cit.>) where it is also established that (Ω_) is a disjoint union of vertical/horizontal leaves in . We now show that if a face f of τ is traversed by S then its image f in is contained in Ω. If f meets Ω but is not contained in Ω, then it is crossed by a component ℓ of ( S). By <Ref>, ℓ is stable/unstable slice leaf or slice ray. Hence there is a neighborhood V of an end of f (i.e. a deleted neighborhood of a vertex) whose preimage has closure V in M that is a closed neighborhood of a component of κ_s that does not meet S. But as noted above, efficient position implies that S does not meet the corresponding component of U, contradicting that f is traversed by S. This gives Ω_⊂Ω. We finish by demonstrating the reverse inclusion. Suppose p∈Ω. The orbit Θ^-1(p) passes through S. If it does so outside of U then clearly p∈Ω_, so suppose γ_p passes through S inside a component of U corresponding to an ideal vertex v of . There are two adjacent quadrants of v such that p lies in the interior of their union. Moreover, since S intersects the corresponding component of U, it traverses a face f of τ whose projection f lies in the interior of the two quadrants. By definition, f lies in Ω_. Since (Ω_) is a union of vertical and horizontal leaves of , we have p∈Ω_. We now turn to the proof of the proposition. Continuing with the discussion above, let λ be a component of ( Ω). Since every singular leaf is periodic, we may assume that λ is a regular stable or unstable leaf and contains no points of κ. Without loss of generality, we assume λ is unstable. We recall the notion of an upward flip move on a surface relatively carried by _U. These moves were essentially first considered in <cit.> but the version we use here is from <cit.>. If S is carried with positive weights on the two bottom faces of a tetrahedron , there is an isotopy sweeping the corresponding portion of S across into a neighborhood of the two top faces of . See <Ref>. Note that S is not a cross section to ϕ, since in that case we would have Ω=. By <cit.>, we may perform only finitely many upward flips before arriving at a relatively carried position for S in which no upward flips are possible. We fix this position of S and consider the associated triangulation of Ω in given by <Ref>. By the definition of B^s, the train track B^s∩ S has no large branches at this point. As a consequence there is a finite collection of simple closed curves in S carried by B^s∩ S which we will call stable loops. We observe that any infinite train route in B^s∩ S must eventually circle around one of these stable loops forever. 
Let t_1 be an ideal triangle of the ideal triangulation of Ω which is near enough to λ so that there exists a leaf ℓ^s of the stable foliation passing through the interior of t_1 and also intersecting λ. If we orient ℓ^s from t_1 toward λ, it determines an infinite path of ideal triangles (t_1,t_2,t_3,…) in Ω. If e_i is the edge shared by t_i and t_i+1, we see that the sequence (e_n) limits on λ. Indeed, this follows from the proof of <cit.>. A picture of this situation is shown in <Ref>. Each t_i has a “widest" edge, by which we mean an edge that intersects every leaf of ^s that intersects t_i. The projection of S∩ B^s to gives a train track in Ω which is dual to our ideal triangulation, and endows each ideal triangle in t_i with a switch of a train track that points toward the widest edge of t_i. In particular there is a train route from the widest edge of t_i to each of the other edges of t_i. Note that since ℓ^s is stable, it passes through the widest edge of each t_i. Therefore there is an infinite train route ρ in the train track S∩ B^s passing through the triangles t_1, t_2, t_3,… in that order. Let ρ be the lift of ρ to S⊂ M, and let ρ be the projection of ρ to S⊂ M. As observed above, the route ρ is eventually periodic, winding around a stable loop γ⊂ S. By truncating an initial subsequence of (t_n) and relabeling, we can assume that ρ is actually periodic and that ρ is contained in a lift γ of γ. We orient γ so that the orientations of ρ and γ are compatible. Let g be a deck transformation of M which corresponds to γ∈π_1(S) ≤π_1(M) and preserves γ, translating it in the positive direction. The action of g on S is cellular and in particular acts as translation on the (truncated) sequence (e_n), sending each edge to one which is closer to λ. Since (e_n) limits on λ we conclude that λ is preserved by the action of g, hence is periodic. amsalpha
http://arxiv.org/abs/2406.17958v1 (25 June 2024)
Local/Short-range conformal field theories from long-range perturbation theory
Junchen Rong
[ "hep-th", "cond-mat.stat-mech" ]
Inherent Challenges of Post-Hoc Membership Inference for Large Language Models [ ======================================================================================== § INTRODUCTION Conformal field theory (CFT) plays an important role in the theory of phase transitions. Nearly all second-order phase transitions—whether quantum or statistical—are described by CFTs <cit.>. To study a CFT, certain methods, such as Monte Carlo simulation <cit.> and conformal bootstrap <cit.> have been developed. In some cases <cit.>, the bootstrap method was able to provide the highest precision results for critical exponents. In some other cases, on the other hand, the precise determination of the critical exponents remains unavailable, which include various gauge theories in 2+1 dimensions <cit.>. The more traditional way to study CFTs using perturbation theories remains valuable. These include the 4 - ϵ expansion <cit.> and the large N expansion <cit.>. Even though the corresponding perturbative series are usually asymptotic, after proper re-summation <cit.>, they give nice results for the critical exponents. It is known that some short-range/local CFTs are in fact special points in a conformal manifold given by long-range generalizations of the CFT. Let us take the O(N) vector model as an example, consider the action S = 𝒩^- 1/2∫ d^D x d^D y ∑_i ϕ_i (x) ϕ_i (y)/| x - y |^D + s + ∫ d^D x 1/8λ( ∑_i ϕ_i (x)^2 )^2 . Here 𝒩 is a normalization factor which we will fix later. The scaling dimension of the scalar field is Δ_ϕ = D - s/2. We set s = d + δ/2, so that Δ_ϕ = D - δ/4. With small and positive δ, the interaction term is slightly relevant. This can be viewed as a version of the analytic regularization <cit.>, that was introduced in the early days of quantum field theory. It is well-known that the theory flows to an IR fixed point, and one can use perturbation theory to study the corresponding long-range CFT <cit.>. In addition to the early works in <cit.>, the above theory was also recently studied in small δ-expansion in <cit.>, in the large N limit in <cit.>, and using the conformal bootstrap method in <cit.>. Some Monte Carlo simulation results are also available for small N models <cit.>. The scaling dimension of operators will depend on the parameter δ. In the δ→ 0 limit, one can calculate these anomalous dimensions, such as for the mass operator ϕ^2 and the leading spin-2 operator Δ_ϕ^2(δ), and Δ_T_μν(δ) . The scalar operator ϕ will not be renormalized, so that Δ_ϕ = D - δ/4, is valid without higher loop corrections <cit.>. The three loop results of Δ_ϕ^2 (δ) and Δ_T_μν (δ) were calculated in <cit.> and in the present paper. Their explicit forms are given in (<ref>) and (<ref>). It was commonly believed that the above conformal manifold contains the short-range O(N) vector model. However, a comprehensive understanding of the long-range to short-range crossover was only achieved recently <cit.>. The IR CFT at the crossover is in fact the short-range-model accompanied by a generalized free field. This explained many long-standing puzzles of the crossover. In the present paper, we focus on one question, how can we extract data of the short-range CFT from the long-range perturbation theory at small δ? Compared to the long-range CFTs in the conformal manifold, the short-range point has a local conserved stress tensor operator, it also has a sub-sector that satisfies the conformal Ward Identity. 
Reverse the logic, this means that by imposing the corresponding conformal Ward Identity, we can extract conformal data of the short-range models by imposing the conformal Ward Identity. For the O(N) vector model, one can simply impose Δ_T_μν (δ) = D. This allows us to solve for δ. Plug the result back in Δ_ϕ (δ) and Δ_ϕ^2 (δ) gives us the critical exponents of the short range model. The above procedure gives us a complementary method to study short-range/local CFTs in addition to the 4 - ϵ expansion and the large N expansion. The relation of the three methods is summarized in Fig. <ref>. In this paper, we first use this approach to study the O(N) vector model. In Section <ref>, we first work in D = 4 - ϵ dimensions and show that the above procedure allows us to recover the ϵ-expansion results of the short-range models. We then work directly in D = 3, after re-sum the perturbative series, we show that the above procedure allows us to obtain reasonable results for the short-range critical exponents, the results are summarized in Section <ref>. We then initiate the study of short-range fermionic models using long-range perturbation theory. We note here an important motivation to develop the long-range perturbation theory approach, namely we do not need to worry about the complication due to the famous γ^5 problem <cit.> in dimensional regularization. Namely, when evaluating the Feynman diagram in D = 4 - ϵ, one encounters Tr [γ^μγ^v γ^ργ^σ] ∼ϵ^μνρσTr [1] + …, and the Levi-civita tensor can not be generalized to continuous dimensions. When we try to dimensional continue to D = 3, on the other hand, one has to modify the Clifford algebra so that Tr [γ^μγ^v γ^ρ] ∼ϵ^μνρTr [1] + …. At higher loops, one may need to add back certain diagrams to recover the full symmetries of the D = 3 theory[This prescription was introduced in <cit.> and named “DREG_3”. Using this prescription, one successfully recovered the supersymmetry relations for the N = 1 super-Ising model. See also <cit.> for a treatment of Clifford algebra in 2 + ϵ expansion.]. In Section <ref>, we study, in two-loop order, the 2 + 1 dimensional long-range models with four-fermion couplings including the Gross-Neveu coupling[The long-range version of the Gross-Neveu model was first introduced in <cit.>.] and the Thirring coupling. In Section <ref>, we study, in two-loop order, a 4 + 1 dimensional long-range fermionic model with a generalized Thirring coupling. To calculate the Feynman integrals, we use the Mellin-Barnes method (see for example <cit.>), which can be conveniently applied using the packages AMBRE <cit.> and MB.m <cit.>. § THE (SCALAR) O(N) VECTOR MODEL We consider the long-range O(N) vector model (<ref>). The propagator in real space is G_i j (x - y) = (H_s H_- s)^- 1𝒩/| x - y |^D - sδ_i j, with H_s = 2^s π^d/2Γ( s/2)/Γ( d - s/2), which satisfies the long range equation of motion, 1/𝒩∫ d^D y δ_i j/| x - y |^D + s G_j k (y - z) = δ_i kδ^(D) (x - z) . We have used the equation, ∫ d^D y 1/| x - y |^D + s1/| y - z |^D - s = H_s H_- sδ^(D) (x - z) . Using the Fourier transformation, one can show that ∫ d^D x 1/r^D - s e^- i p · x = H_s 1/| p |^s, 1/(2 π)^D∫ d^D p 1/| p |^s e^i p · x = (H_s)^- 11/r^D - s . We take 𝒩 = H_- s, so that the propagator in the momentum space is given by, G (p)_i j = δ_i j/(p^2 + μ^2)^s / 2 . We work in Euclidean signature, so that p^2 = ∑^3_i = 1 p_i^2. Notice that we have introduced the IR regulator μ for the propagator. 
It is important to introduce this regulator to remove the IR singularity, so that the divergence of the Feynman integral contains UV divergence only. The Feynman rule for the four-point vertex is V_i j k l = λ (δ_i jδ_k l + δ_i lδ_j k + δ_i kδ_j l) . We take s = D + δ/2, so that the scaling dimension of the scalar operator is Δ_ϕ = D - δ/4 . In the δ→ 0_+ limit, the λϕ^4 coupling term is slightly relevant. We therefore take δ to be the perturbation parameter. The short-range models are located in the δ≈ 1 region. We collect here the perturbative calculation results of the theory, the beta function was calculated in earlier papers, in our convention (which is the same convention as in <cit.>), β_g = - δ g + 1/2δ g^2 (N + 8) D_0 + δ g^3 (5 N + 22) (D_0^2 - 2 S_0), We use g to denote the renormalized coupling constant, and λ to denote the bare coupling (see <cit.>). The constants D_0 and S_0 were calculated in <cit.>, D_0 = 2^- Dπ^- D/2Γ( δ/2)/Γ( D + δ/2), S_0 = 2^1 - 2 Dπ^- D/δ^2 Γ( D/2)^2 - (4 π)^- D( ψ^(0)( D/2) + 2 ψ^(0)( D/4) + 3 γ)/δΓ( D/2)^2 . The anomalous dimension of the leading spin-2 operator is given by Δ_T^μν = 2 Δ_ϕ + 2 - 3 δ g^2 (N + 2) S_2 + 3/4δ g^3 (N^2 + 10 N + 16) (- 4 D_0 S_2 + 4 I_1 + I_2), with S_2 defined in (<ref>), I_1 and I_2 defined in (<ref>). The anomalous dimension of the leading spin-1 operator (in the adjoint irrep of O(N)) is Δ_J^μ = 2 Δ_ϕ + 1 - δ g^2 (N + 2) S_1 + 1/4δ g^3 (N + 2) (- 4 D_0 S_1 + 24 I_1' + 3 (N + 4) I_2') . with S_1 defined in (<ref>), I_1' defined in (<ref>) and I_2' defined in (<ref>). From the beta function, we can solve the coupling constant at the fixed point g_⋆_ in terms of the perturbation parameter δ. From which we get the scaling dimension of the spin-2 operator at the O(N) long-range fixed points at three-loop, Δ_T^μν = D + 4/2 - δ/2 - 12 δ^2 (N + 2)/D (D + 2) (N + 8)^2 + 2 δ^3 (N + 2)/D^2 (D + 2) (N + 8)^4[ (D (3 D - 20) - 64) (N + 8)^2/D + 4 + D (N - 4) (7 N + 20) ( ψ^(0)( D/2) + γ) - 2 D (N - 4) (7 N + 20) ( ψ^(0)( D/4) + γ) ] . (The two loop result of Δ_T^μν was previously calculated in <cit.>). Similarly using the Δ_J calculated in (<ref>), we get the scaling dimension of the leading spin-1 operator, Δ_J^μ = D + 2/2 - δ/2 - 2 δ^2 (N + 2)/D (N + 8)^2 + + δ^3 (N + 2) /D^2 (N + 8)^4( D ((N - 16) N - 48) ( ψ^(0)( D/2) + γ) - 2 D ((N - 16) N - 48) ( ψ^(0)( D/4) + γ) - 32 (N + 8) ) . For completeness, we also note down the scaling dimension of the ϕ^2 operator calculated at three-loop calculated in <cit.>, which is ν^- 1 = D/2 + δ( 6/N + 8 - 1/2) + δ^2 (N + 2) (7 N + 20) ( - ψ^(0)( D/2) + 2 ψ^(0)( D/4) + γ)/(N + 8)^3 - δ^3 (N + 2) /8 (N + 8)^5 Γ( D/2)( Γ( D/2) ( - 2 (N (N (19 N - 60) - 432) - 256) ( 2 ψ^(0)( D/4) - ψ^(0)( D/2) ) ( 2 ( ψ^(0)( D/4) + γ) - ψ^(0)( D/2) ) - 6 N ((N - 4) N - 112) ψ^(1) ( D/2) + 768 ψ^(1)( D/2) + π^2 (N + 8) ((N - 12) N - 16) + 2 γ^2 (N (N (60 - 19 N) + 432) + 256) ) + 8 π (N + 8) (5 N + 22) ( π D/4) Γ( D/4 + 1 )^2 ( π^2 - 6 ψ^(1)( D/4) ) ) . The critical exponent ω, calculated in <cit.>, is ω = δ + 2 δ^2 (5 N + 22) ( - ψ^(0)( D/2) + 2 ψ^(0)( D/4) + γ)/(N + 8)^2 + δ^3 /2 (N + 8)^4( (N (N + 36) + 152) (N (N + 76) + 328) ( ψ^(0)( D/2) + γ)^2 - 32 (5 N + 22) (N (N + 46) + 196) ( ψ^(0)( D/4) + γ) ( ψ^(0)( D/2) + γ) + 192 (5 N + 22)^2 ( ψ^(0)( D/4) + γ)^2 ) . §.§ 4 - ϵ expansion, a consistency check With the results in the previous section, we now show that one can recover the short-range CFT data. We make an ansatz δ = a_1 ϵ + a_2 ϵ^2 + … . 
and solve the following equations Δ_T^μν (ϵ, δ) = 4 - ϵ, or Δ_J^μ (ϵ, δ) = 3 - ϵ, up to ϵ^3 order. For both equations, we get the same solution, δ = 2 ϵ - 4 (N + 2) ϵ^2/(N + 8)^2 + 2 (N^3 - 54 N^2 - 384 N - 544) ϵ^3/(N + 8)^4 + …, One recognizes that Δ_ϕ = d - δ/4 agrees with the calculation in 4 - ϵ perturbation theory <cit.>. Plug in (<ref>), we again recover the result of short-range models in 4 - ϵ, ν^- 1 = 2 + (- N - 2) ϵ/N + 8 + (- N - 2) (13 N + 44) ϵ^2/2 (N + 8)^3 + (N + 2) ϵ^3 /8 (N + 8)^5 (3 N^3 - 452 N^2 + 96 (N + 8) (5 N + 22) ζ (3) - 2672 N - 5312) . Notice these results do not generalize to the critical exponents ω = d β (g)/d g. Namely, plug (<ref>) into ω does not recover the ω critical exponents in 4 - ϵ for the short-range O(N) vector model. In the traditional short-range 4 - ϵ expansion, we have three almost marginal operators, ∂_μϕ∂_μϕ, ϕ∂^2 ϕ, and ϕ^4, One combination of the three operators is the level two descendant of ϕ^2. Another combination becomes a null operator and decouples from the spectrum due to the equation of motion ∂^2 ϕ∼ g ϕ^3. We are then left with a single operator which is a conformal primary, whose scaling dimension gives us the critical exponents ω. We call this operator O_1. When long-range coupling is allowed, the above operators are allowed to mix with the non-local operators. The operator O_2 (x) = ϕ (x) χ (x), with χ (x) = ∫ d^D y ϕ (y)/| x - y |^2 D - 2 Δ_ϕ . will mix with O_1 in 4 - ϵ expansion. In other words, both ω_long = d β (g)/d g and ω_short correspond to linear combinations of O_1 and O_2. To get ω_short we have to resolve such a mixing, which we leave for future work. §.§ re-summation the D = 3 perturbative series Even though our perturbative series is known only to three-loop order, it is, however, interesting to think about how we can re-summation this series. For a nice review of the re-summation theory in quantum field theory, see <cit.>. For an asymptotic series with zero radius of convergence A' (g) = ∑_k = 0^∞ f_k λ^k, one can define the Borel-Leroy series to be B^β (g) = ∑_k = 0^∞f_k/Γ (k + β + 1)λ^k . The inverse transformation is defined as A (g) = ∫ d t t^β e^- t B^β (λ t) . For a perturbative series, the Borel-Leroy procedure leaves the series invariant. This may seem useless. However, in many cases, B^β (λ) is a convergent series. If one can sum the B^β (λ) series, the inverse Borel-Leroy transformation will give us an analytic function A (g) . Suppose the singularity of B^β (g) that is closest to the origin is located at λ = - 1 / α, with a branch cut given by B^β (λ) ∼γ/(1 + αλ)^β + 1, The inverse Borel-Leroy transformation tells us the f_k asymptotes to f_k = γ k^β k! (- α)^k, k →∞ . For a field theory, large k behavior is usually controlled by the saddle points of the path integral, in other words, soliton solutions of the classical equation of motion. The constant α is the action of the leading soliton solution. If the function B^L, β (λ) = ∑_k = 0^L f_k/Γ (k + β + 1)λ^k is only known to a limited number of terms, one can perform a conformal mapping w (g) = √(αλ + 1) - 1/√(αλ + 1) + 1, to map the branch cut onto the unit circle. Re-expansion B^β (λ) using the new variable w, the truncated series B^' (w) β (λ) ≈∑_k = 0^L W_k w^k (λ) are expected to converge faster. Clearly the above approach is only possible if we know the solition solutions. In addition to the conformal mapping technique, another way to approximate the Borel function is to use the Pade approximation. 
This is the so-called Pade-Borel re-summation method. To do this, one imposes that PB^L, β (λ) = ∑^M_i a_i λ^i/∑^N_i b_i λ^i + 𝒪 (λ^L + 1) = ∑_k = 0^L f_k/Γ (k + β + 1)λ^k, with M + N = L. In other words, we require PB^L, β (λ) = B^L, β (λ) up to order λ^L + 1. The freedom in choosing (M, N) is an ambiguity. This ambiguity can be partially resolved by requiring the singularity of PB^L, β (λ) to resemble the singularity structure of B^β (λ). In our case, this means that the poles of PB^L, β (λ) should lie close to the negative λ-axis. We will use both the conformal mapping method and the Borel-Pade re-summation method in our analysis. We now first discuss the soliton solutions. In the δ→ 0 limit, the solution at g < 0 is given by the solution of the following nonlinear equation with fractional Laplacian, 1/𝒩∫ d^D y 1/| x - y |^D + sϕ_i (y) = - λ1/2ϕ_i (x) ∑_j ϕ_j (x)^2 . The equation of motion has a solution given by ϕ_c (x) = n̂×√(2)( - π^- D/2λNΓ( D - s/2)/Γ( - s/2))^s - D/2 s( a/a^2 + |x - x_0 |^2)^D - s/2, with s = D/2 . This is a simple generalization of the N = 1 soliton solution which has been extensively studied by mathematicians (such as in <cit.>)[We thank Kihyun Kim for pointing out to us the long-range solition in the N = 1 model.]. The moduli space for the soliton solution is parametrized by the vector n̂ satisfying n̂·n̂ = 1 and the parameter a, which controls the size of the soliton. The action of the solution is S = - 1/8λ∫ d^D x ( ∑_i ϕ_c, i (x)^2 )^2 = - 1/λ1/N^2π^d + 1/2Γ( D - 1/2) Γ( - D/4)^2/16 Γ( D/4)^2 Γ (D) . When D = 3, S = - π^2 Γ( 9/4)^2/λΓ( 3/4)^2 = - 1/αλ. The constant γ and β in (<ref>) depend on the (massive and massless) fluctuation around the soliton solution. For the short-range model, one needs to solve a Schrodinger problem with the potential given by the soliton. This can be done analytically (see, for example, <cit.>). It will be interesting to study the analogous problem for long-range models, we leave this for future work. For simplicity, we will take β = 0. One will see that this already gives us reasonable results. The re-summations of Δ_T and Δ_J are given in Fig <ref>. The re-summation of ν^- 1 = 3 - Δ_ϕ^2 is given in Fig <ref>. The re-summation of the critical exponents ω is given in Fig. <ref>. Notice ω = 0 at the short-range limit, as explained near equation (<ref>). Notice that even though we are dealing with the perturbation series only in three-loop order, the re-summed result is very good. For comparison with the three-loop re-summation of the 4 - ϵ result, see, for example, Chapter 17.4 of <cit.>. For completeness, we note down our prediction for the critical exponents in Table <ref>. We now give some general remarks about the critical exponents ω_long for the long-range models in Table <ref>. As we have explained in the previous section, the ω_long corresponds to the operator O_2 defined in (<ref>), which is exactly marginal at the short-range fixed point. When we work in D = 3, near the δ→ 0 limit, there is another operator O_1 = 1/2∂_μϕ∂^μϕ - y ϕ∂^2 ϕ, with y = Δ_ϕ/2 Δ_ϕ - D + 2 which will give us ω_short when we approach the short-range fixed point. Unlike in D = 4 - ϵ, we do not need to worry about their mixing since they have different scaling dimensions in the δ→ 0 limit. The existence of such the marginal operator O_2, will lead to logarithmic finite size effect in Monte Carlo simulation <cit.>, and therefore making the result converge slowly. This explains the poor quality of the numerical results near the crossover <cit.>. 
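To make the extraction step concrete, the following is a minimal numerical sketch (Python, not part of the calculation above) of imposing the conformal Ward identity Δ_T_μν(δ) = D directly on the truncated three-loop series for the scalar O(N) model at D = 3 and reading off Δ_ϕ = (D − δ)/4. It uses the naive truncated polynomial rather than the resummed series; the transcription of the δ³ bracket from the expression for Δ_T_μν given earlier, as well as the root-search interval, are assumptions made here.

```python
from scipy.special import digamma
from scipy.optimize import brentq

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def Delta_T(delta, N, D=3.0):
    """Naively truncated three-loop series for the leading spin-2 operator,
    transcribed from the expression for Delta_{T^{mu nu}} quoted above."""
    c2 = -12.0 * (N + 2) / (D * (D + 2) * (N + 8) ** 2)
    bracket = ((D * (3 * D - 20) - 64) * (N + 8) ** 2 / (D + 4)
               + D * (N - 4) * (7 * N + 20) * (digamma(D / 2) + EULER_GAMMA)
               - 2 * D * (N - 4) * (7 * N + 20) * (digamma(D / 4) + EULER_GAMMA))
    c3 = 2.0 * (N + 2) / (D ** 2 * (D + 2) * (N + 8) ** 4) * bracket
    return (D + 4) / 2 - delta / 2 + c2 * delta ** 2 + c3 * delta ** 3

def short_range_point(N, D=3.0):
    """Impose Delta_T(delta) = D (conformal Ward identity for a local stress
    tensor) and return the estimated short-range values (delta*, Delta_phi)."""
    delta_star = brentq(lambda d: Delta_T(d, N, D) - D, 0.05, 1.5)
    return delta_star, (D - delta_star) / 4

if __name__ == "__main__":
    for N in (1, 2, 3):
        d_star, Delta_phi = short_range_point(N)
        print(f"N = {N}:  delta* = {d_star:.3f},  Delta_phi = {Delta_phi:.4f}")
```

At this order the root comes out somewhat below δ = 1, consistent with the statement above that the short-range models are located in the δ ≈ 1 region; for quantitative estimates one should of course apply the same condition to the resummed series, as was done for Table <ref>.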
§ MODELS WITH FERMIONS §.§ Models with four-fermion interactions in D = 2 + 1 We now turn to fermionic models in D=3. We formulate our models using Majorana fermions, ℒ = 1/2𝒩^- 1ψ̅_a (- ∂)^s - 2γ^μ∂_μψ_a + λ_Y (ψ_aψ_a)^2 + λ_T (ψ_aγ^μψ_b Ω_a b)^2 Here we use the notation such that ψ̅_α = ϵ_αβψ^β, which is the Majorana conjugate available in D = 2 + 1 dimensions. We take the signature to be η^μν = (- 1, 1, 1) . The gamma matrices are γ^0 = i σ^2, γ^1 = σ^1, γ^2 = σ^3, notice they are real matrices. The matrix ϵ . γ^μ is symmetric in our convention. The antisymmetric matrix Ω =𝐥_N/2×N/2⊗ϵ = ( [ 0 1 ; - 1 0 ; … . ; 0 1; - 1 0 ]), is block diagonal. The kinetic term and the Gross-Neveu coupling term g_Y (ψ_iψ_i)^2 preserves the O(N) symmetry. The Thirring coupling g_T (ψ_iγ^μψ_j Ω_i j)^2 break the symmetry to U(N/2). One can re-write the action (<ref>) in a more familiar form using the Dirac fermions. The Lagrangian (<ref>) is the most general Lagrangian that preserves the U(N/2) symmetry. When N = 1, we have only two grassmann variables, so that both the Gross-Neveu and the Thirring coupling vanish, the theory is just the free theory. When N = 2, we have four grassmann variables, the Gross-Neveu and the Thirring coupling are equivalent, and we have only one coupling constant, which is a long-range generalization of the original Thirring model <cit.>. The short-range version of the above model was previously studied using the functional renormalization group method in <cit.>, using 2 + ϵ expansion in <cit.>. The lattice version of the Thrring model was recently studied using Monte Carlo simulation in <cit.>. The Thirring model is renormalizable in the large N limit <cit.>. It has a hidden gauge symmetry <cit.>, which we now explain. The theory has a fixed point at which the Gross-Neveu coupling vanishes at the large N limit, we get ℒ = 1/2 i ψ̅_a γ^μ∂_μψ_a + λ/N (ψ_iγ^μψ_j Ω_i j)^2, introduce the Hubbard-Stratonovich field A_μ, we can write the above action as ℒ = i ψ̅γ·∂ψ - √(1/N) (ψ_aγ^μψ_a Ω_a b) A'_μ + 1/2 λA'^2 . One can view the field A'_μ as the massive vector boson from Stuckelberg formalism. Just like the Gross-Neveu-Yukawa model is the UV completion of the four-fermion Gross-Neveu model, the Thirring model has a UV completion given by fermions coupled with a “massive” gauge field. The long-range version of the Gross-Neveu model was first introduced in <cit.>, and a large N study was performed in 1 < D < 4 and compared with the long-range Gross-Neveu-Yukawa model in D = 4 - ϵ. We will instead focus on the s → 2 limit at D = 3, aiming to explain how to recover short-range data from long-range perturbation theory. The Feynman rules of the long-range model are the following. For each vertex, we have V_αβγδ ; a b c d = i λ_Y V^Y_αβγδ ; a b c d + i λ_T V^T_αβγδ ; a b c d with V^T_αβγδ ; a b c d = Ω_abΩ_cd (ϵ . γ^μ)_αβ (ϵ . γ_μ)_γδ + permutations and V^Y_αβγδ ; a b c d = δ_abδ_cdϵ_αβϵ_γδ + permutations For each internal momentum, we need to integrate with ∫d p^D/(2 π)^D, For non-local theory, the propagator is given by G^αβ_a b (p) = - i p_μ (γ^μϵ)^αβ + μϵ^αβ/(p^2 + μ^2)^s / 2δ_a b . We have introduced the IR regulator μ to make the Feynman integrals IR finite. The final results will not depend on the mass. We take s = D + 2 + δ/2, so that the scaling dimension of the scalar operator is Δ_ψ = D - δ/4 . The four-fermion interaction is slightly irrelevant in the δ→ 0_- in the limit. 
Notice in this limit, the theory a (strongly) relevant operator ψ̅_a γ^μ∂_μψ_a, which is the short-range kinetic term for the fermions. We assume that we have tuned the coupling of this term to zero. In other words, the fixed points that we will discuss later are tri-critical points in the δ→ 0_- limit. The short-range models are located in the δ≈ - 1 region. The above operator will have anomalous dimension, and may become irrelevant. After evaluating all the Feynman diagram (given in Fig. <ref> and Fig <ref>) which contribute to the beta function renormalization up to two loops, we get β_Y = - δ g_Y + 1/6 π^2 (g_Y^2 (4 δ - 6 δlog (2) + N (δ (log (8) - 3) + 3) - 6) - 2 (δ (log (64) - 5) + 6) g_T^2 + 6 (δ (log (8) - 3) + 3) g_T g_Y) + 1/108 π^4( 4 g_T^3 (- 205 δ + 165 δlog (2) - 33 δlog (2) log (64) + 41 δlog (64) + N (- δ (log (8) - 7) (log (64) - 5) + 9 π + 177 + 36 log (2)) + 99 π + 489 + 396 log (2)) + 3 g_T^2 g_Y (- 250 δ - 378 δlog^2 (2) + 768 δlog (2) δlog (4) - 24 δlog (64) + 24 δlog (4) log (64) + 3 N (δ (- 6 + log (4) log (8) - log (16) log (64) + log (16777216)) + 9 π + 6 + 90 log (2) - 12 log (4) - 2 log (8) - 4 log (64)) + 45 π + 354 - 396 log (2) + 144 log (4) + 48 log (64)) + 54 g_T g_Y^2 ( δ( 12 N log^2 (2) - N log (4) log (64) + 4 - 30 log^2 (2) + 6 log^2 (4) + log (4) ) + 3 π + 4 + log (4096) ) + g_Y^3 (- 142 δ + 2 δlog^2 (8) + 44 δlog (8) + 9 N (δ (2 - 6 log^2 (2) + log (16)) + 3 π + 6 + log (4096)) - 9 π + 246 - 36 log (2)) ) β_T = - δ g_T + g_T ((N + 2) (δ (log (8) - 7) + 3) g_T + 6 (δ + δ (- log (8)) - 3) g_Y)/18 π^2 + g_T/324 π^4 (g_T^2 (- 1958 δ + 1192 δlog (8) - 510 δlog (2) log (8) + 2 N (δ (log (8) - 7)^2 + 519 - 18 log (2)) - 9 π N + 765 π + 4110 + 3060 log (2)) + 6 g_T g_Y (- 194 δ - 78 δlog (2) log (8) + 148 δlog (8) + 2 N (- δ (log (8) - 7) (log (64) - 5) + 9 π + 141 + 9 log (16)) + 117 π + 834 + 468 log (2)) + 9 g_Y^2 (- 118 δ - 18 δlog^2 (2) + 144 δlog (2) + 3 N (δ (- 2 - 6 log^2 (2) + log (256)) + 3 π + 10 + log (4096)) + 9 π + 270 + 6 log (64))) . Here g_Y and g_T are the renormalized couplings of the bare couplings λ_Y and λ_T, for there definition, see (<ref>). The scaling dimension of the stress tensor operator is given by Δ_T^μν = 2 Δ_ψ + 1 - Γ( 1/4) (3 Ng_T^2 + Ng_Y^2 + 6 g_T g_Y + 3 g_T^2 - g_Y^2)/60 π^4 Γ( 5/4) . The scaling dimension of the fermion mass operator ψ_iψ_i is Δ_ψ̅ψ = 3 - δ/2 + (δ (log (64) - 6) + 6) g_T/4 π^2 + g_Y ((N - 1) (δ (log (4) - 2) + 2))/4 π^2 + g_T g_Y/12 π^4 (58 δ - 2 δlog (8) (4 + log (8)) + 16 N (δ (log (8) - 4) + 3) + 9 π - 114 + 36 log (2)) + (N - 1) g_Y^2 /72 π^4 (58 δ - 2 δlog (8) (4 + log (8)) + 16 N (δ (log (8) - 4) + 3) + 9 π - 114 + 36 log (2)) + g_T^2 /8 π^4 (- 2 δ (33 + log (2) (log (8) - 28)) + N (- 2 δ (log (2) - 1) (log (8) - 1) + 3 π - 22 + log (4096)) + 3 π + 26 + log (4096)) . The Feynman integrals we encounter are given in Appendix <ref>. One can check that there is no wavefunction renormalization since the kinetic term is non-local. The fixed points of the one-loop beta functions are F_1 : g_Y = g_T = 0, F_2 : g_Y = 2 π^2/N - 2δ, g_T = 0, F_3 : g_Y = - π^2 /N^3 + 2 N^2 + 32 N - 80( - N^2 + (N + 2) √(N^2 + 112 N + 256) + 14 N - 112 ) δ, g_T = 6 π^2 /N^3 + 2 N^2 + 32 N - 80( N^2 - √(N^2 + 112 N + 256) + N + 16 ) δ F_4 : g_Y = π^2 /N^3 + 2 N^2 + 32 N - 80( N^2 + (N + 2) √(N^2 + 112 N + 256) - 14 N + 112 ) δ, g_T = 6 π^2 /N^3 + 2 N^2 + 32 N - 80( N^2 + √(N^2 + 112 N + 256) + N + 16 ) δ . The free fixed point F_1 is the most stable fixed point without relevant direction. 
The second fixed point F_2 is the long-range generalization of the Gross-Neveu model, for large enough N, the stability matrix ∂β_i/∂ g_j has one positive eigenvalue, which means that the RG flow has one relevant direction. The third fixed point F_3 is the long-range generalization of the Thirring model, for large enough N, the fixed point has one relevant RG directions. The fourth fixed point has two irrelevant direction, which is a tri-critical point. The fixed point structure resembles those found in <cit.> for the short-range model. Notice that the coupling g_Y of the Gross-Neveu fixed point diverges at N = 2, this is analogous to <cit.>. This is because the Thirring coupling is equivalent to the Gross-Neveu coupling, as we have mentioned. The theory at N = 2 needs special treatment, we can set g_T = 0, since the two couplings are equivalent, the beta function then becomes β_Y = - δ g_Y - δ g_Y^2/3 π^2 + g_Y^3/108 π^4 (2 δ (- 53 - 54 log^2 (2) + log^2 (8) + 22 log (8) + 9 log (16)) + 45 π + 354 - 36 log (2) + 18 log (4096)) . The critical point becomes g_⋆ = ± 4.11877 √(δ) + 𝒪 (δ^1) . To solve the fixed point to the next order, we need the beta function at g_Y^4 order (three-loop), which we leave for future work. The scaling dimensions are Δ_T = 2.5 - (0.5 ± 0.01161) δ + 𝒪 (δ^3 / 2), and Δ_ψ̅ψ = 3/2± 0.208659 √(δ) + 𝒪 (δ^1) . At small δ, these fixed points are clearly not smoothly connected to the fixed point (<ref>) as we vary N. For large δ, it remains an open question whether one can find a short-range CFT, which is smoothly connected to the short-range CFTs with large N. We now discuss the N > 2 solutions. Now the question is whether the fixed points will lead to unitary CFTs. It is too early to try to infer the final result just from a naive two-loop calculation, we will however plot the result just to get a feeling of it. For the Gross-Neveu fixed point, we have Gross-Neveu : N = 4 Δ_T = 2.5 - 0.5 δ - 0.2 δ^2, Δ_ψ̅ψ = 1.5 + δ - 8.58085 δ^2, N = 8 Δ_T = 2.5 - 0.5 δ - 0.0518519 δ^2, Δ_ψ̅ψ = 1.5 + 0.666667 δ + 1.33808 δ^2 . The Gross-Neveu(-Yukawa) model has been solved by the conformal bootstrap method. For the Thirring model, on the other hand, the long-range perturbation theory may allow us to be ahead of the bootstrap method. For the Thirring fixed point, we have, Thirring : N = 8 Δ_T = 2.5 - 0.5 δ - 1.7714 δ^2, Δ_ψ̅ψ = 1.5 + 2.62541 δ - 326.947 δ^2, N = 100 Δ_T = 2.5 - 0.5 δ - 0.0772493 δ^2, Δ_ψ̅ψ = 1.5 + 0.734632 δ - 4.81543 δ^2 . The re-summed curve Δ_T (δ) will intersect with the Δ_T = 3 line when N > 12. However, when N is close to 12, the intersection happens at |δ| ≫ 1 , we should not trust our perturbation theory in that region. When N>29, the intersection happens at |δ| ≈ 1 We therefore take N_c≈29 as our two-loop prediction of the critical N_c above which the short-range the Thirring fixed point exists. Notice that at the δ→0 limit, Δ_ψ is below the unitarity bound. It is important to check that when Δ_T=D happens, the corresponding Δ_ψ is above the unitarity bound, so that the short-range model is unitary. One can see in Fig. <ref> and <ref> that this is indeed the case. §.§ Generalized Thirring model in D = 4 + 1 We also consider a fermionic model in D=4+1. We formulate our theory using symplectic Majorana fermions (a Dirac fermion can be written as two symplectic Majorana fermions). A nice review of the Clifford algebras in general dimensions is given in <cit.>. We will study the following model, ℒ = 1/2𝒩^- 1ψ_i, α (C. 
γ^μ)^αβ (- ∂)^s - 2∂_μψ_j βΩ_i j + 1/8λ_T_2 (ψ_i, α (C. γ^μν)^αβψ_j, βΩ_i j) (ψ_i, α (C. γ_μν)^αβψ_j, βΩ_i j), with i = 1 … N. Clearly, N is an even number. We have also defined that γ^μν≡γ^[μν] = γ^μγ^ν + γ^νγ^μ . We take the convention of gamma matrix to be γ^0 = i σ^1 ⊗𝐥_2 × 2, γ^1 = σ^2 ⊗𝐥_2 × 2, γ^2 = σ^3 ⊗σ^1, γ^3 = σ^3 ⊗σ^2, γ^4 = σ^3 ⊗σ^3, B = -𝐥_2 × 2⊗σ^2, C = B^T . γ^0 . The symplectic Majorana fermions are defined as ψ_i = Ω_i j B^- 1 (ψ_j)^⋆ . In our convention, the matrix C and (C. γ^μ) are antisymmetric, while (C. γ^μν) is symmetric. The gamma matrices satisfy B^T . (C)^⋆ .B = - C, B^T . (C. γ^μ)^⋆ .B = C. γ^μ, B^T . (C. γ^μν)^⋆ .B = - C. γ^μν, so that the action is Hermitian. We can consider other four-fermion couplings such as Gross-Neveu coupling 1/8λ_Y (ψ_i, α (C)^αβψ_i, β)^2, and 1/8λ_T (ψ_i, α (C. γ^μ)^αβψ_i, β) (ψ_i, α (C. γ_μ)^αβψ_i, β). These terms, however, will break the Sp (N) symmetry to the U ( N/2) group. The above theory is the maximally symmetric action in D = 4 + 1. Similar to the D = 2 + 1 dimensional case, we take s = D + 2 + δ/2, so that the scaling dimension of the scalar operator is Δ_ψ = D - δ/4 . The four fermion interaction is slightly irrelevant in the δ→ 0_- in the limit. We assume that we have tuned the coupling of the short-range kinetic term ψ_i, α (C. γ^μ)^αβ∂_μψ_j βΩ_i j to zero. The Feynman rule for the propagators are G_αβ, i j (p) = - i p_μ (γ^μ C)_αβΩ_i j - μ C_αβδ_i j/(p^2 + μ^2)^s / 2 . We have introduced the mass term to make the Feynman integrals IR finite, which corresponds to add - i 1/2μψ_i C ψ_i δ_i j to the action. The final results will not depend on the mass. The four-fermion vertice is V^T_2_αβγδ ; a b c d = i λ_T_2 (Ω_adΩ_bc (C. γ^μν)_αδ (C. γ_μν)_βγ + Ω_abΩ_cd (C. γ^μν)_αβ (C. γ_μν)_γδ + Ω_acΩ_bd (C. γ^μν)_αγ (C. γ_μν)_βδ), The beta function at 2-loop order looks like, β_T_2 = - δ g_T_2 + g_T_2^2 ( 4/15δ (4 N - 337) - δ (N - 28) log (4) - 2 N + 56 )/15 π^3 + 4 g_T_2^3 /50625 π^6 (64296 δ - 21952 δ N - 450 δ (59 N + 3) log^2 (2) + 60 log (2) (- 816 δ + (922 δ + 885) N + 45) + 23160 N - 13275 π N - 675 π - 2280) . Here g_T_2 is the renormalized coupling of λ_T_2. The anomalous dimension of the stress tensor operator T_μν is Δ_T_μν = 7/2 - δ/2 - 4 (N + 1) Γ( 3/4) g_T_2^2/21 π^6 Γ( 7/4) . The anomalous dimension of the mass operator ψ_i, α (C)^αβψ_i, β is Δ_ψ̅ψ = 5/2 - δ/2 + 5 (δ (log (64) - 8) + 6) g_T_2/9 π^3 + 2 g_T_2^2 /405 π^6 (- 4128 δ - 64 δ N - 90 δ (N - 3) log^2 (2) + 12 log (2) (183 δ + (14 δ + 15) N - 45) - 72 N - 45 π N + 135 π + 2916) . When N ≠ 28, at the fixed point, we have Δ_T_μν = 7/2 - δ/2 - 75 δ^2 (N + 1) Γ( 3/4)/7 (N - 28)^2 Γ( 7/4) . Δ_ψ̅ψ = 5/2 + δ (N + 22)/56 - 2 N + δ^2 /6 (N - 28)^3 (80 (268 N - 2119) - 75 π (N (N + 87) + 90) + 300 (N (N + 87) + 90) log (2)) . When N = 28, the solution is at g_⋆ = ± 4.89026 √(δ) + 𝒪 (δ^1) . The scaling dimensions are Δ_T = 3.5 - (0.5 ± 0.183207) δ + 𝒪 (δ^3 / 2), Δ_ψ̅ψ = 5/2± 0.525728 √(δ) + 𝒪 (δ) . Just like the 2+1 dimensional Thirring model can be though of as the fermions coupled Stuckelberg vector field <cit.>, the 4+1 dimension model (<ref>) can be though as fermions coupled with “Higgsed” two-form fields. In other words, one can introduce the auxiliary field B_μν ℒ = 1/2ψ_i, α (C. γ^μ)^αβ∂_μψ_j βΩ_i j - √(1/N) (ψ_i, α (C. γ^μν)^αβψ_j, βΩ_i j) B_μν + 4/λ_T_2 (ψ_i, α (C. γ^μν)^αβψ_j, βΩ_i j) B_μν with i = 1 … N. and think about B_μν as a dynamical “massive” two form field. It will be interesting to study the above model in the large N limit. 
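Before turning to the discussion, the same Ward-identity criterion can be illustrated on the two-loop D = 2 + 1 Thirring series quoted above. The sketch below (Python; naive truncation, no resummation) takes the quoted Δ_T(δ) coefficients for N = 8 and N = 100, asks whether Δ_T(δ) = 3 admits a real solution, and checks that Δ_ψ = (3 − δ)/4 lies above the spin-1/2 unitarity bound (D − 1)/2 = 1 there. Selecting the root closest to δ = 0 is an assumption made here.

```python
import numpy as np

# Two-loop series at the Thirring fixed point,
#   Delta_T(delta) = 2.5 - 0.5*delta - c2*delta**2,
# with the coefficients quoted above for N = 8 and N = 100.
THIRRING_C2 = {8: 1.7714, 100: 0.0772493}

D = 3.0
PSI_UNITARITY_BOUND = (D - 1) / 2  # spin-1/2 unitarity bound in D = 3

for N, c2 in THIRRING_C2.items():
    # Delta_T(delta) = D  <=>  c2*delta**2 + 0.5*delta + (D - 2.5) = 0
    roots = np.roots([c2, 0.5, D - 2.5])
    real_roots = [r.real for r in roots if abs(r.imag) < 1e-12]
    if not real_roots:
        print(f"N = {N}: no real solution of Delta_T(delta) = {D} at this order")
        continue
    delta_star = min(real_roots, key=abs)  # branch closest to the perturbative regime
    Delta_psi = (D - delta_star) / 4
    print(f"N = {N}: delta* = {delta_star:.3f}, Delta_psi = {Delta_psi:.3f}, "
          f"above unitarity bound: {Delta_psi > PSI_UNITARITY_BOUND}")
```

At this order no real root exists for N = 8, while for N = 100 a root appears near |δ| ≈ 1 with Δ_ψ above the bound, mirroring the discussion of N_c above.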
§ DISCUSSION The main point of our paper is that by imposing the conformal Ward identity, we can extract conformal data of local/short-range CFTs from long-range perturbation theory. We have applied this idea to the O(N) vector models. It is interesting to think about how to apply this idea to other quantum field theories. 1. the multi-scalar theories, irreducible fixed points. In the O(N) vector model, all the scalar fields transform in a single irreducible representation of the O(N) group. It would be interesting to consider smaller symmetry groups so that the scalars transform in more than one irreducible representations. In this case, scalars can have different scaling dimensions, to take this into account, we consider the following theory ℒ= ∫ d^D xd^D y ∑_a = 1^N ϕ^a (x) ϕ^a (y)/|x - y|^D + s_a + ∫ d^D x λ_abcd/4!ϕ (x)^a ϕ (x)^b ϕ (x)^c ϕ (x)^d, we can take s_a = D + γ_a δ/2 . We normalize γ_1 = 1. For local CFTs, from the conformal Wald identity, we can derive the following OPE formula (see for example <cit.>), C_O O T_μν = - D Δ_O/D - 11/S_D, with S_D = 2 π^d / 2/Γ (d / 2). Here O can be any scalar operator. Apply this to the ϕ_a's, we have the OPE ratios C_ϕ_a ϕ_a T_μν / C_ϕ_b ϕ_b T_μν = Δ_a/Δ_b . These relations, together with Δ_T_μν = D, give us N constraints, and allow us to solve for δ together with γ_a's. One can choose to impose these constraints directly at the δ→ 0 limit, treating γ_a = 1 + 𝒪 (δ) as a perturbative series. Recently, an efficient method to evaluate the OPE in the long-range perturbation limit has been developed <cit.>, which will be useful in the above project[We thank Connor Behan for explaining this.]. 2. the Gross-Neveu-Yukawa theory. Similar to their short-range cousins, the long-range Gross-Neveu models also have UV completions given by the long-range Gross-Neveu-Yukawa models (see <cit.> for the test of such a UV completion). The short-range Gross-Neveu-Yukawa fixed point lives in a long-range conformal manifold parametrized Δ_ψ and Δ_ϕ, the conformal manifold contains various perturbative limits, one therefore has the freedom to choose where to perform the perturbative calculation. We can consider the couplings λ_1 ψ_iψ_i ϕ + λ_2 ϕ^4, and introduce long-range kinetic terms for both the fermions and bosons. Take Δ_ψ = 3 D/8 + γ_ψδ and Δ_ϕ = D/4 + γ_ϕδ, and treat δ as the perturbation parameter. Similar to the bosonic theory, we can normalize γ_ψ = 1. The condition Δ_T = D and the OPE relations similar to (<ref>) allow us to fix δ and γ_ϕ, and recover the data for the local/short-range Gross-Neveu-Yukawa model. Alternatively, one can keep only the Yukawa coupling (assume we have tune λ_2 = 0) λ_1 ψ_iψ_i ϕ, and take Δ_ψ = 1/6 (2 D + 1) + γ_ψδ and Δ_ϕ = 1/3 (D - 1) + γ_ϕδ. In this limit, the two candidate spin-2 operators, T^(f) μν = ψ̅∂_νγ_μψ - 1/dδ_μνψ̅∂·γψ and T^(b) μν = (∂^μϕ∂^νϕ - y ϕ∂^μ∂^νϕ - trace) are degenerate. They will mix in the perturbation theory. Compared to the previous setup, the OPE ratios can now be imposed directly at the δ→ 0 limit. It is however important to check that the ϕ^4 coupling is irrelevant when Δ_T_μν (δ) = D. The Gross-Neveu-Ising model describes the phase transition from Dirac-semi metal phase to charge density wave phase on graphene. Other models such as the Gross-Neveu-Yukawa-Heinsberg and the Gross-Neveu-Yukawa-O(2) model can be studied in a similar manner. We leave this for future work. 3. the quantum O(N) models. 
In addition to the long-range models we studied here, it is also interesting to consider the quantum long-range models. For example, the local O(N) vector model can be embedded into a critical manifold, which can be reached as the IR fixed point of the following action, S = ∫ d t d^D - 1 x 1/2∑_i (∂_t ϕ^i)^2 + ∫ d t d^D - 1 x d^D - 1 y 1/2∑_i ϕ^i (t, x) ϕ^i (t, y)/| x - y |^D - 1 + σ + λ∫ d t d^D - 1 x ( ∑_i ϕ^i (t, x)^2 )^2 . Some of these models have recently been studied using quantum Monte Carlo simulation <cit.> and perturbatively in <cit.>. It will be interesting to use the above theory as the perturbation theory to study the short-range models by imposing the conformal Ward identity. It will also be interesting to generalize our work to the long-range generalization of the non-Abelian Thirring model <cit.>, the long-range generalization of QED as studied in <cit.>, and the long-range non-linear sigma model <cit.>. We thank Rajeev Erramilli, Connor Behan, Jiaxin Qiao, Slava Rychkov, Edoardo Lauria, Philine van Vliet, Bilal Hawashin, Michael Scherer, Tom Steudtner and Emmanuel Stamou for stimulating discussions. The work of JR is supported by the Simons programme général (2022-2031) in Institut des Hautes Études Scientifiques. § SPIN-2 AND SPIN-1 OPERATOR OF THE SCALAR THEORY Our calculation is performed using AMBRE and MB.m, which use mostly minus signature, the change of convention is simply given by I^Lorentz, mostplus = 1/i^L1/(- 1)^ν_1 + ν_2 + ⋯1/(2^D π^D / 2)^L I^AMBRE . We follow <cit.> and use the zero momentum BPHZ (Bogoliubov-Parasiuk-Hepp-Zimmermann) subtraction scheme <cit.>. The tree-level diagram is given in Fig. <ref>, the corresponding Feynman rule for the stress-tensor vertex is i λ_ ST ((p - q)_μ ((p - q)_ν - y p_ν) + p_μ (p_ν - y (p - q)_ν) - trace) δ_i j, with y = - Δ_ϕ + 1/Δ_ϕ. The Feynman rule for conserved current vertex is i λ_J (p_μ + (p - q)_μ) Ω_i j . In the leading order, the diagram which contributes to the renormalization of the stress tensor is given in Fig. <ref>. After introducing the IR regulator for our propagator, we can perform the regularization at any kinematic point. For simplicity, we take q^μ = 0. In this special kinematic point, the diagram must vanish. Since there is no external momentum that one can use to build an expression that transforms in the spin-2 representation of the Euclidean group. We therefore conclude that this diagram must be regular. This trick was introduced in <cit.>. The two-loop diagrams are given in Fig. <ref>. We again renormalize our diagram at the special kinematic point q^μ = 0. By the same argument as before, the first diagram should vanish at q^μ = 0 limit. The second diagram leads us to the integral μ^2 δℐ_0^μν = 1/((2 π)^D)^2∫ d k_1^D d k_2^D k_2^μ k_2^ν - trace/(k_2 ^2 + μ^2)^s / 2 ((p - k_1 + k_2)^2 + μ^2)^s / 2 (k_1 ^2 + μ^2)^s / 2 (k_1 ^2 + μ^2)^s / 2 = (p^μ p^ν - trace) S_2, Using AMBRE and then MB.m, we get S_2 = (4 π)^- D/δΓ( D/2 + 2 ) Γ( D/2) + (4 π)^- D/Γ( D/2 + 2 ) Γ( D/4)^2 Γ( D/2) J_2 (D) - 2^- 2 D - 1π^- D/Γ( D/2 + 2 ) Γ( D/2)( ψ^(0)( D/2) + 2 ψ^(0)( D/4) + 3 γ - 1 ) . Here we introduce a constant J_2 (D), whose analytical expression are not easy to calculate, we instead give the corresponding Mellin-Barnes integral, J_2 (D) = ∫_0^^- d z Γ (2 - z) Γ (- z) Γ (z) Γ( D/4 + z )^2 Γ( D/2 + z )/Γ( D/2 + 2 z ) . The integral is regular at D = 4. The best way to evaluate is to numerically integrate along the contour (after shifting the contour away from poles), which can be easily done in MB.m. 
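As a cross-check outside of MB.m, the contour integral can also be evaluated with general-purpose numerical libraries. The following is a minimal Python/mpmath sketch; the 1/(2πi) normalization of the measure and the placement of the contour at Re z = −0.2 (just to the left of the origin, our reading of the 0^− prescription) are assumptions made here, and the values of J_2 quoted below provide the check.

```python
from mpmath import mp, mpf, mpc, gamma, quad, pi

mp.dps = 25  # working precision

def J2(D, c=-0.2):
    """Numerically evaluate the Mellin-Barnes representation of J_2(D) along
    the vertical contour Re z = c, assuming the standard 1/(2*pi*i) measure."""
    D = mpf(D)
    def integrand(t):
        z = mpc(c, t)
        num = (gamma(2 - z) * gamma(-z) * gamma(z)
               * gamma(D / 4 + z) ** 2 * gamma(D / 2 + z))
        return num / gamma(D / 2 + 2 * z)
    # dz = i dt, so (1/(2*pi*i)) * int dz f(z) = (1/(2*pi)) * int dt f(c + i*t)
    val = quad(integrand, [-mp.inf, mp.inf]) / (2 * pi)
    return val.real  # the imaginary part cancels up to numerical noise

if __name__ == "__main__":
    print("J_2(3) ~", J2(3))  # compare with the value quoted below
    print("J_2(4) ~", J2(4))  # compare with the epsilon -> 0 limit of J_2(4 - epsilon)
```

Replacing Γ(2 − z) by Γ(1 − z) in the integrand gives the analogous check for J_1(D) defined later in this appendix.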
We also give an approximate result of the integral in 4 - ϵ, J_2 (4 - ϵ) = - 2.229271492460209 - 1.077724442678504 ϵ - 0.5458130816580664 ϵ^2 + O (ϵ^3) . Also, notice J_2 (D = 3) ≈ - 4.1944. In our calculation of the O(N) model, this constant will drop out in the final expression, it is, however, important to consider it for other multi-component scalar models. At three-loop level, the diagrams that survive after taking q^μ = 0 are given in Fig. <ref>. The corresponding integrals are μ^2 δℐ_1^μν = μ^3 δ/((2 π)^D)^3∫ d^D k_1 d^D k_2 d^D k_3 ×k_3^μ k_3^ν - trace/(k_3 ^2 + μ^2)^s ((p + k_2)^2 + μ^2)^s / 2 ((k_1 + k_2) ^2 + μ^2)^s / 2 ((k_2 + k_3) ^2 + μ^2)^s / 2 (k_1 ^2 + μ^2)^s / 2 = (p^μ p^ν - trace) I_1 μ^2 δℐ_2^μν = μ^3 δ/((2 π)^D)^3∫ d^D k_1 d^D k_2 d^D k_3 ×k_3^μ k_3^ν - trace/(k_3 ^2 + μ^2)^s / 2 ((p + k_1 - k_3)^2 + μ^2)^s / 2 ((k_2) ^2 + μ^2)^s / 2 ((p + k_2 - k_3) ^2 + μ^2)^s / 2 (k_3 ^2 + μ^2)^s = (p^μ p^ν - trace) I_2 with I_1 = 2^2 - 3 Dπ^- 3 D/2/3 δ^2 Γ( D/2 + 2 ) Γ( D/2)^2 + 2^1 - 3 Dπ^- 3 D/2( 3 J_0 (D) - Γ( D/4)^2 ( 5 ψ^(0)( D/4) + ψ^(0)( D/4 + 2 ) + 6 γ - 2 ) )/3 δΓ( D/2 + 2 ) Γ( D/4)^2 Γ( D/2)^2 I_2 = 8^1 - Dπ^- 3 D/2/3 δ^2 Γ( D/2 + 2 ) Γ( D/2)^2 + 2^2 - 3 Dπ^- 3 D/2/3 δΓ( D/2 + 2 ) Γ( D/4)^2 Γ( D/2)^2( 3 J_0 (D) - Γ( D/4)^2 ( ψ^(0)( D/2) + 4 ψ^(0)( D/4) + 5 γ - 2 ) ) . Collect the results, we get the renormalization function of the stress-tensor vertex (at q^μ = 0) Γ^μν = 2 (p^μ p^ν - trace) Γ_ ST, with Γ_ ST = λ_ ST( 1 + 3/2λ^2 (N + 2) S_2 μ^- 2 δ - I_1 λ^3 (N^2 + 10 N + 16) μ^- 3 δ - 1/4 I_2 λ^3 (N^2 + 10 N + 16) μ^- 3 δ) . We define our renormalized coupling to be g_ ST = μ^- (D - 2 Δ_ϕ - 2)Γ_ ST . Invert the above relation, we get λ_ ST = μ^(D - 2 Δ_ϕ - 2) g_ ST( 1 - 3/2λ^2 (N + 2) S_2 μ^- 2 δ + 1/4λ^3 μ^- 3 δ (4 I_2 (N^2 + 10 N + 16) + I_2 (N^2 + 10 N + 16)) + 1 ) The beta function for the coupling g_ ST is defined as β_ ST= μ∂/d μ g_ ST . Notice the bare couplings λ_ ST and λ do not depend on μ, so that the above definition together with (<ref>) and λ = μ^δ( g + 1/2 D_0 g^2 (N + 8) + 1/4 D_0^2 g^3 (N^2 + 6 N + 20) + 3 g^3 ( 5 N/3 + 22/3) ( D_0^2 - S_0 ) ), gives us the scaling dimension of the (to-be) stress tensor, which is (<ref>). The constants D_0 and S_0 are defined in (<ref>). The above relation comes from the renormalization of the coupling constant, which was calculated in <cit.>. One can similarly calculate the scaling dimension of the current operator (<ref>). The diagrams that we need to calculate are the same as the ones for the stress tensor. We can also use the special kinematics q^μ = 0, which means that the one-loop diagram in Fig. <ref> and the first two-loop diagram in Fig. <ref> do not contribute to the final result. The two-loop integral corresponding to the second diagram in Fig. <ref> is μ^2 δℐ'_0^μν = 1/((2 π)^D)^2∫ d^D k_1 d^D k_2 k_1^μ/(k_2 ^2 + μ^2)^s / 2 ((p - k_1 + k_2)^2 + μ^2)^s / 2 (k_1 ^2 + μ^2)^s / 2 (k_1 ^2 + μ^2)^s / 2 = p^μ S_1, with S_1 = (4 π)^- D/δΓ( D/2 + 1 ) Γ( D/2) + (4 π)^- D/Γ( D/2 + 1 ) Γ( D/4)^2 Γ( D/2) J_1 (D) - 2^- 2 D - 1π^- D/Γ( D/2 + 1 ) Γ( D/2)( ψ^(0)( D/2) + 2 ψ^(0)( D/4) + 3 γ), and J_1 (D) = ∫_0^^- d z Γ (1 - z) Γ (- z) Γ (z) Γ( D/4 + z )^2 Γ( D/2 + z )/Γ( D/2 + 2 z ) . After numerical evaluation, we have J_1 (3) ≈ - 3.08557. The integral is regular at D = 4. We can give an expression in 4 - ϵ, J_1 (4 - ϵ) ≈ - 1.075405220902442 ϵ^4 - 1.401988434885228 ϵ^3 - 1.697234636355991 ϵ^2 - 1.640421373262027 ϵ - 1.562604825793316. 
In our calculation of the O(N) model, this constant again drops out in the final expression, it is however, important to consider it for other multi-component scalar models. The three-loop integrals corresponding to the diagrams in Fig. <ref> are μ^2 δℐ'_1^μν = μ^3 δ/((2 π)^D)^3∫ d^D k_1 d^D k_2 d^D k_3 ×k_3^μ/(k_3 ^2 + μ^2)^s ((p + k_2)^2 + μ^2)^s / 2 ((k_1 + k_2) ^2 + μ^2)^s / 2 ((k_2 + k_3) ^2 + μ^2)^s / 2 (k_1 ^2 + μ^2)^s / 2 = p^μ I_1' μ^2 δℐ'_2^μν = μ^3 δ/((2 π)^D)^3∫ d^D k_1 d^D k_2 d^D k_3 ×k_3^μ/(k_3 ^2 + μ^2)^s / 2 ((p + k_1 - k_3)^2 + μ^2)^s / 2 ((k_2) ^2 + μ^2)^s / 2 ((p + k_2 - k_3) ^2 + μ^2)^s / 2 (k_3 ^2 + μ^2)^s = p^μ I_4 The constants are I_1' = 2^2 - 3 Dπ^- 3 D/2/3 δ^2 Γ( D/2 + 1 ) Γ( D/2)^2 + 2^1 - 3 Dπ^- 3 D/2/3 δΓ( D/2 + 1 ) Γ( D/4)^2 Γ( D/2)^2[ 3 J_1 (D) - Γ( D/4)^2 ( 5 ψ^(0)( D/4) + ψ^(0)( D/4 + 1 ) + 6 γ) ] . and I_2' = 8^1 - Dπ^- 3 D/2/3 δ^2 Γ( D/2 + 1 ) Γ( D/2)^2 + 2^2 - 3 Dπ^- 3 D/2/3 δΓ( D/2 + 1 ) Γ( D/4)^2 Γ( D/2)^2[ 3 J_1 (D) - Γ( D/4)^2 ( ψ^(0)( D/2) + 4 ψ^(0)( D/4) + 5 γ) ] . With the above integral, we can calculate the renormalization of the conserved current vertex, which gives Γ^μ = 2 p^μΓ_J, with Γ_J = λ_J ( 1 + + 1/2λ^2 (N + 2) S_2 μ^- 2 δ - 2 I_1' λ^3 (N + 2) μ^- 3 δ - 1/4 I_2' λ^3 (N^2 + 6 N + 8) μ^- 3 δ) . We define our renormalized coupling to be g_J = μ^- (D - 2 Δ_ϕ - 1)Γ_J . Invert the above relation, we get λ_J = μ^(D - 2 Δ_ϕ - 1) g_J ( 1 - - 1/2λ^2 (N + 2) S_2' μ^- 2 δ + 1/4λ^3 μ^- 3 δ (8 I_1' (N + 2) + I_2' (N^2 + 6 N + 8)) ) Together with (<ref>) and the definition β_J = μ∂/d μ g_J, the above results give us the scaling dimension of the (to-be) conserved current, which is (<ref>). § FERMIONIC MODELS IN D = 2 + 1 AND D = 4 + 1 The one and two-loop diagrams which contribute to the beta function renormalization are given in Fig. <ref> and Fig <ref>. The diagrams contributing to the renormalization of T_μν have the same topology as the bosonic case. The Feynman rule for the stress-tensor vertex is i λ_ ST( (ϵ . γ^μ)_αβ (p^ν + (p - q)^ν) + (ϵ . γ^ν)_αβ (p^μ + (p - q)^μ - trace) δ_a b for the D = 2 + 1 model (<ref>). For the D = 4 + 1 dimensional model (<ref>), on the other hand, the Feynman rule is i λ_ ST( (C. γ^μ)_αβ (p^ν + (p - q)^ν) + (C. γ^ν)_αβ (p^μ + (p - q)^μ - trace) Ω_i j . We will first summarize the integrals that we will encounter in our calculation. After evaluating the γ-matrix algebra, we are left with the following integrals, which we again evaluate using the Mellin-Barnes method. At one loop, we will encounter two integral. The first integral is μ^δ× I_0^μν=1/(2 π)^D∫ d^D k k^μ k^ν/(k^2+μ^2)^s / 2(k^2+μ^2)^s / 2=f_1 η^μν=-2^-D-1π^-D/2·Γ(δ/2)/Γ(1/2·(D+δ+2))η^μν . The second integral is μ^δ× I_1^μν=1/(2 π)^D∫ d^D k 1/(k^2+μ^2)^s / 2(k^2+μ^2)^s / 2=1/μ^2 f_2=1/μ^22^-D·π^-D/2·Γ(δ/2+1)/Γ(1/2·(D+δ+2)) . At two loop, we need to evaluate μ^2 δ I_3^μν=1/((2 π)^D)^2∫ d^D k_1 d^D k_2 k_2^μ k_2^ν/(k_2^2+μ^2)^s / 2((p-k_1+k_2)^2+μ^2)^s / 2(k_1^2+μ^2)^s/2(k_1^2+μ^2)^s / 2= 1/δη^μν c_3+𝒪(δ^0) Here the constant is c_3=-(4 ·π)^-D/μ^2 ·Γ·(D/2+1)^2. The next integral is μ^2 δ I_5^μ_1 μ_2 μ_3 μ_4 =1/((2 π)^D)^2∫ d^D k_1 d^D k_2 k_1^μ_1 k_1^μ_2 k_1^μ_3 k_2^μ_4/(k_2^2+μ^2)^s / 2((p-k_1+k_2)^2+μ^2)^s / 2(k_1^2+μ^2)^s / 2(k_1^2+μ^2)^s / 2 =1/δ(η^μ_1 μ_2η^μ_3 μ_4+η^μ_1 μ_3η^μ_2 μ_4+η^μ_1 μ_4η^μ_2 μ_3) c_5+𝒪(δ^0) with the constant is c_5=4^-D-1·π^-D·Γ(D-2/4)/Γ·(D/2+2) ·Γ(D/2) ·Γ(D+2/4). 
The next integral is μ^2 δ I_6^μ_1 μ_2 μ_3 μ_4 =1/((2 π)^D)^2∫ d^D k_1 d^D k_2 k_1^μ_1 k_1^μ_2 k_2^μ_3 k_2^μ_4/(k_2^2+μ^2)^s / 2((p-k_1+k_2)^2+μ^2)^s / 2(k_1^2+μ^2)^s / 2(k_1^2+μ^2)^s / 2 =1/δ^2η^μ_1 μ_2η^μ_3 μ_4 c_6,0+1/δη^μ_1 μ_2η^μ_3 μ_4 c_6,1+1/δ(η^μ_1 μ_3η^μ_2 μ_4+η^μ_1 μ_4η^μ_2 μ_3) c_6,2+𝒪(δ^0) with the constants c_6,0=2^-2 D-1·π^-D/Γ(D/2+1)^2 c_6,1=(4 ·π)^-D·((2-D) · Ha(D/2)-2 ·(D-2) · Ha(D-2/4)+2)/(D-2) · D^2 ·Γ(D/2)^2 c_6,2=2^-2 · D-1·π^-D/(D-2) ·Γ(D/2+1)^2 . Here Ha(x) is the Harmonic number. Next, we have μ^2 δ I_7^μ_1 μ_2 μ_3 μ_4 =1/((2 π)^D)^2∫ d^D k_1 d^D k_2 k_1^μ_1 k_1^μ_2 k_1^μ_3 k_1^μ_4/(k_2^2+μ^2)^s / 2((p-k_1+k_2)^2+μ^2)^s / 2(k_1^2+μ^2)^s / 2(k_1^2+μ^2)^s / 2 =1/δ(η^μ_1 μ_2η^μ_3 μ_4+η^μ_1 μ_3η^μ_2 μ_4+η^μ_1 μ_4η^μ_2 μ_3) c_7+𝒪(δ^0) with the constants c_7=2^-2 D-1·π^-D/(D-2) ·Γ(D/2+1)^2. We also encounter three regular integrals that appears in the calculation, μ^2 δ I_8 =1/((2 π)^D)^2∫ d^D k_1 d^D k_2 1/(k_2^2+μ^2)^s / 2((p-k_1+k_2)^2+μ^2)^s / 2(k_1^2+μ^2)^s / 2(k_1^2+μ^2)^s / 2, μ^2 δ I_2^μν =1/((2 π)^D)^2∫ d^D k_1 d^D k_2 k_1^μ k_1^ν/(k_2^2+μ^2)^s / 2((p-k_1+k_2)^2+μ^2)^s / 2(k_1^2+μ^2)^s / 2(k_1^2+μ^2)^s / 2, μ^2 δ I_4^μν =1/((2 π)^D)^2∫ d^D k_1 d^D k_2 k_1^μ k_2^ν/(k_2^2+μ^2)^s / 2((p-k_1+k_2)^2+μ^2)^s / 2(k_1^2+μ^2)^s / 2(k_1^2+μ^2)^s / 2. For the 2+1 D fermionic model, we can calculate the renormalization functions (the one-particle irreducible two/three/four-point functions), Γ_λ_Y, Γ_λ_T, Γ_ST^μν, and Γ_Mass. The renormalization function for four-fermion interaction is Γ_αβγδ, a b c d = Γ_λ_Y (δ_adϵ_αδδ_bcϵ_βγ + δ_acϵ_αγδ_bdϵ_βδ + δ_abϵ_αβδ_cdϵ_γδ) + Γ_λ_T (Ω_adΩ_bc (ϵ . γ^μ)_βγ (ϵ . γ_μ)_αδ + Ω_acΩ_bd (ϵ . γ^μ)_αγ (ϵ . γ_μ)_βδ + Ω_abΩ_cd (ϵ . γ^μ)_γδ (ϵ . γ_μ)_αβ) with Γ_λ_Y = μ^- 2 δλ_Y^3 ( (6 - 18 N) c_6, 0 - 18 Nc_6, 1 - 12 Nc_6, 2 + 6 c_6, 1 - 36 c_6, 2 + 18 c_3 μ^2 N - 30 c_3 μ^2 - 30 c_5 (N + 1) + 9 f_1^2 N^2 + f_2^2 N^2 - 6 f_1 f_2 N^2 - 27 f_1^2 N - 3 f_2^2 N + 18 f_1 f_2 N + 33 f_1^2 + 5 f_2^2 - 18 f_1 f_2 ) - 6 μ^- 2 δλ_T^2 (2 λ_T (2 (N + 11) c_6, 0 + 2 Nc_6, 1 + 8 Nc_6, 2 + 22 c_6, 1 + 28 c_6, 2 + c_3 μ^2 (- N) - 8 c_3 μ^2 + 10 c_5 (N + 5)) + 2 f_1 μ^δ - f_2 μ^δ) + μ^- 2 δλ_Y^2 (- 36 λ_T (3 c_6, 0 + 3 c_6, 1 + 2 c_6, 2 - 3 c_3 μ^2 + 5 c_5) + f_2 (N - 4) (- μ^δ) + 3 f_1 (N - 2) (μ^δ - 18 f_2 λ_T) + 81 f_1^2 (N - 2) λ_T + 9 f_2^2 (N - 2) λ_T) + μ^- 2 δλ_Y ( - 9 λ_T^2 ( 6 Nc_6, 0 + 6 Nc_6, 1 + 4 Nc_6, 2 + 10 c_6, 0 + 10 c_6, 1 + 20 c_6, 2 + 10 c_5 (N + 3) - 41 f_1^2 + 26 f_2 f_1 - 5 f_2^2 ) + 18 c_3 μ^2 (N - 3) λ_T^2 + 6 (3 f_1 - f_2) μ^δλ_T + μ^2 δ), Γ_λ_T = - 3 μ^- 2 δλ_T λ_Y^2 (6 (N + 1) c_6, 0 + 6 Nc_6, 1 + 4 N c_6, 2 + 6 c_6, 1 + 44 c_6, 2 - 6 c_3 μ^2 N + 18 c_3 μ^2 + 10 c_5 (N + 5) - 15 f_1^2 - 3 f_2^2 + 6 f_1 f_2) + μ^- 2 δλ_T ( λ_T^2 ( 2 Nc_6, 1 - 52 Nc_6, 2 + 2 (N - 85) c_6, 0 - 170 c_6, 1 - 260 c_6, 2 - 10 c_5 (5 N + 43) + f_1^2 N^2 + f_2^2 N^2 + 2 f_1 f_2 N^2 + 3 f_1^2 N + 3 f_2^2 N + 6 f_1 f_2 N + 125 f_1^2 + 17 f_2^2 - 74 f_1 f_2 ) + 2 c_3 μ^2 (N + 5) λ_T^2 + (f_1 + f_2) (N + 2) μ^δλ_T + μ^2 δ) + 3 μ^- 2 δλ_T λ_Y ( - 4 λ_T ( 2 Nc_6, 0 + 2 Nc_6, 1 + 8 Nc_6, 2 + 13 c_6, 0 + 13 c_6, 1 + 22 c_6, 2 + c_3 μ^2 N - 7 c_3 μ^2 + 5 c_5 (2 N + 7) ) + 2 f_2 μ^δ + f_1 (2 f_2 (N + 2) λ_T - 2 μ^δ) + f_1^2 (N + 2) λ_T + f_2^2 (N + 2) λ_T ), The renormalization function with one stress tensor and two fermions is (Γ^μν)_αβ, a b = 2 i Γ_ST( p^ν (ϵ . γ^μ)_αβ + p^ν (ϵ . γ^μ)_αβ - 2/3η^μν p^σ (ϵ . γ^σ)_αβ) δ_a b, with [ Γ_ST = λ_ST + 6 c_5 (N + 1) μ^- 2 δλ_STλ_T^2 + 2 c_5 (N - 1) μ^- 2 δλ_STλ_Y^2 + 12 c_5 μ^- 2 δλ_STλ_T λ_Y . 
] The renormalization function with one mass operator and two fermions is, Γ_αβ, a b = Γ_Massϵ_αβδ_a b, with Γ_Mass = λ_Mass( 9 μ^- 2 δλ_T^2 ( - 3 (N + 1) c_6, 1 - 2 Nc_6, 2 - 2 c_6, 2 + c_3 μ^2 N + c_3 μ^2 + 5 c_5 (N + 1) + 9 f_1^2 + f_2^2 + 6 f_1 f_2 ) + 6 μ^- 2 δλ_T λ_Y ( 3 (- 3 c_6, 1 - 2 c_6, 2 + c_3 μ^2 + 5 c_5) + ( 3 f_1 + f_2 )^2 (N - 1) ) + (N - 1) μ^- 2 δλ_Y^2 ( 3 (- 3 c_6, 1 - 2 c_6, 2 + c_3 μ^2 + 5 c_5) + ( 3 f_1 + f_2 )^2 (N - 1) ) + (3 f_1 - f_2) (N - 1) μ^- δλ_Y + 3 (3 f_1 - f_2) μ^- δλ_T + 1 ) . We define the renormalized coupling constants to be g_Y = μ^- δΓ_λ_Y, g_T = μ^- δΓ_λ_T, g_ST = μ^- (D - 2 Δ_ψ - 1)Γ_ST, and g_Mass = μ^- (D - 2 Δ_ψ)Γ_Mass . By inverting the above relations, we get the beta functions β_Y = μ∂/∂μ g_Y, β_T = μ∂/∂μ g_T, β_ST = μ∂/∂μ g_ST, β_Mass = μ∂/∂μ g_Mass for g_Y, g_T, g_ST and g_Mass, which then gives us (<ref>), (<ref>) and (<ref>). When calculating Γ_ST, we can again take the special kinetic point q^μ=0 as the bosonic case. An argument similar to the scalar case tells us that one can not build an invariant tensor that transforms in the spin-2 representation of the Euclidean group (we can not use the external momentum since q^μ=0.) This means only the second diagram in Fig. 10 contributes to the renormalization to T_μν. For the 4+1 Dermionic model, we can calculate the corresponding renormalization functions. The renormalization function for the four-fermion coupling is Γ_αβγδ, a b c d = Γ_λ_T_2(Ω_a dΩ_b c(C ·γ^μν)_αδ(C ·γ_μν)_βγ+Ω_a bΩ_c d(C ·γ^μν)_αβ(C ·γ_μν)_γδ +Ω_a cΩ_b d(C ·γ^μν)_αγ(C ·γ_μν)_βδ), with Γ_λ_T_2 = λ_T_2 + (8 f_2 (N + 2) - 8 f_1 (N - 28)) μ^- δλ_T_2^2 + 64 μ^- 2 δλ_T_2^3 ( - 2 (59 N + 3) c_6, 0 - 118 Nc_6, 1 + 20 Nc_6, 2 - 6 c_6, 1 + 20 c_6, 2 + 58 c_3 μ^2 N + 118 c_3 μ^2 + c_5 (14 - 98 N) + f_1^2 N^2 + f_2^2 N^2 - 2 f_1 f_2 N^2 + 3 f_1^2 N + 3 f_2^2 N - 6 f_1 f_2 N + 787 f_1^2 + 35 f_2^2 - 6 f_1 f_2 ) . The renormalization function of the stress tensor-fermion-fermion vertex (at q^μ=0 ) is with (Γ^μν)_αβ, a b=2 · i ·Γ_ST(p^ν·(C ·γ^μ)_αβ+p^ν·(C ·γ^μ)_αβ-2/5·η^μν p^σ·(C ·γ^σ)_αβ) Ω_a b, Γ_ST=λ_ST+1920 · c_5 ·(N+1) ·μ^-2 δ·λ_ST·λ_T_2^2 The renormalization function of mass-fermion-fermion vertex is, Γ_αβ, a b = Γ_Mass C_αβδ_a b, with Γ_ Mass = λ_ Mass (1 + 40 (5 f_1 - f_2) μ^- δλ_T_2 + 320 (N - 3) μ^- 2 δλ_T_2^2 (- 5 c_6, 1 - 2 c_6, 2 + 7 c_5 + c_3 μ^2) + 1600 (5 f_1 + f_2)^2 μ^- 2 δλ_T_2^2) . We define the renormalized coupling constants to be g_T_2 = μ^- δΓ_T_2, g_ST = μ^- (D - 2 Δ_ψ - 1)Γ_ST, and g_Mass = μ^- (D - 2 Δ_ψ)Γ_Mass . By inverting the above relations, and use we can get the beta functions β_T_2 = μ∂/∂μ g_T_2, β_ST = μ∂/∂μ g_ST, β_Mass = μ∂/∂μ g_Mass for g_T_2, g_ST and g_Mass, which then gives us (<ref>), (<ref>) and (<ref>). utphys
http://arxiv.org/abs/2406.18491v1
20240626165507
Enhancing Federated Learning with Adaptive Differential Privacy and Priority-Based Aggregation
[ "Mahtab Talaei", "Iman Izadi" ]
cs.LG
[ "cs.LG", "cs.CR", "cs.DC" ]
Enhancing Federated Learning with Adaptive Differential Privacy and Priority-Based Aggregation 1st Mahtab Talaei1 Department of Electrical and Computer Engineering Isfahan University of Technology Isfahan, Iran mtalaei@bu.edu 2nd Iman Izadi Department of Electrical and Computer Engineering Isfahan University of Technology Isfahan, Iran iman.izadi@iut.ac.ir July 1, 2024 ========================================================================================================================================================================================================================================================================================== § ABSTRACT Federated learning (FL), a novel branch of distributed machine learning (ML), develops global models through a private procedure without direct access to local datasets. However, it is still possible to access the model updates (gradient updates of deep neural networks) transferred between clients and servers, potentially revealing sensitive local information to adversaries using model inversion attacks. Differential privacy (DP) offers a promising approach to addressing this issue by adding noise to the parameters. On the other hand, heterogeneities in data structure, storage, communication, and computational capabilities of devices can cause convergence problems and delays in developing the global model. A personalized weighted averaging of local parameters based on the resources of each device can yield a better aggregated model in each round. In this paper, to efficiently preserve privacy, we propose a personalized DP framework that injects noise based on clients' relative impact factors and aggregates parameters while considering heterogeneities and adjusting properties. To fulfill the DP requirements, we first analyze the convergence boundary of the FL algorithm when impact factors are personalized and fixed throughout the learning process. We then further study the convergence property considering time-varying (adaptive) impact factors. Federated Learning, Differential Privacy, Personalized Impact Factors, Adaptive Impact Factors, Systems and Statistical Heterogineties [1]Mahtab Talaei was affiliated with the Department of Electrical and Computer Engineering at Isfahan University of Technology during the research for this paper. At the time of submission, she is affiliated with the Division of Systems Engineering at Boston University, Boston, USA. § INTRODUCTION Smart distributed systems such as smartphones, automated vehicles, multi-agent systems, and wearable devices are growing rapidly in our daily lives. Their underlying mechanism which is attached with sensing and communicating generates an unprecedented amount of data every day. Therefore, utilizing these sources of rich information to enhance services offered to people and organizations owning the data, without violating their privacy, matters a great deal. The developments in the computational and communicational capabilities of intelligent distributed devices along with their abilities to collect and store large datasets have opened up effective alternatives for managing and analyzing local databases. A common traditional practice to develop predictive machine learning (ML) models is to transmit raw data over networks and generate models in a centralized manner. While this method has provided data owners valuable services throughout the years, their efficiency for today's crowdsourced data is called into question. 
Communication costs of sending large volumes of data on one hand, and privacy concerns for sharing personal information on the other hand have provided space for decentralized ML algorithms, such as federated learning (FL) <cit.>. Federated machine learning is a promising solution in settings dealing with large volumes of data as well as privacy concerns about clients' sensitive information <cit.>. In this framework, each device builds its model using local datasets, and the essential model parameters, rather than raw data, are transmitted to the cloud server. The server aggregates these parameters and updates the global model throughout a recursive downloading and uploading cycle <cit.>. Hence, each client benefits from a larger database during the learning process, without direct access to it. While offering great advantages over conventional ML methods, FL has its own challenges. Expensive communications, systems and statistical heterogeneities, and security risks are considered as the four main issues while developing FL models <cit.>. Deep Learning (DL) models are widely used in FL, especially for feature extraction in the large image, voice, and text datasets. In order to optimize local DL models inside the clients, stochastic gradient descent (SGD) is generally adopted <cit.>. Sending frequent gradient updates with the massive number of both parameters and clients in FL leads to an extreme rise in communication costs. Increasing the number of local updates <cit.> is one natural way to modify communication bottlenecks with more local computations. On the other hand, quantization <cit.> and sparsification <cit.> methods mitigate this challenge by reducing the size of transmitted messages in each round. Dealing with systems that have different computational capabilities, network capacities, and power resources is an inevitable challenge in FL. Several approaches, including resource-based client selection <cit.>, robust and fault-tolerant algorithms <cit.>, and asynchronous communications <cit.> address these challenges. On the other extreme, heterogeneity in data distributions of the clients causes problems in the training and convergence of FL algorithms. Using multi-task learning methods <cit.> and avoiding local minimums by adding a proximal term to the objective function <cit.> help handling unbalanced and non-IID data in FL <cit.>. Even though the idea of FL was first proposed for its strong privacy guarantees, it has been shown that local datasets can be still revealed to stragglers using model inversion attacks on shared updates <cit.>, especially when DL is used in local models <cit.>. To mitigate this challenge, differential privacy (DP) is one of the widely used protection algorithms due to its solid theoretical guarantees <cit.>. In order to reduce the risk of data leakage in ML algorithms, noise with Gaussian, Laplace, or Exponential distribution is deliberately added to data in DP. The work in <cit.> proposes a global DP algorithm in FL and gives a theoretical explanation for the convergence behavior of the suggested scheme. As discussed earlier, nodes in a distributed architecture differ in the data structure, dataset size, network condition, reliability, availability, and computation capabilities, which can even be time-varying. A privacy-preserving approach in FL is not effective unless paying attention to these personalized characteristics. Hence, there exist multiple works on DP with the content “adaptive" to compensate systems and statistical heterogeneities in FL. 
These works can be divided into two general directions based on the adaptability criterion. One direction injects an adaptive noise distribution to local parameters to enhance local protection. It considers each client separately without involving their heterogeneity. For instance, adaptive clipping  <cit.> finds the best clipping constant for DP in each device based on their local behaviors. In <cit.>, noise with Laplace distribution is added to model updates based on the neurons' contributions in the clients. The work in <cit.> achieves a trade-off between privacy and accuracy by adding more noise to less important parameters and less noise to more important ones. The second direction, however, concentrates on personalized training in the heterogeneous networks. The work in <cit.> trains differentially private models in each client and uploads the local updates for the server. These directions both lack considering the local characteristics in the aggregating process. More specifically, they assume the same impact factor for all devices during aggregation, regardless of their local dissimilarities. This assumption not only simplifies the convergence analysis of the algorithm but also changes the DP requirements <cit.>. To the best of our knowledge, the privacy and convergence analysis in FL with non-identical and time-varying impact factors have not yet been studied in the existing literature. In this paper, we combine the heterogeneity and privacy concerns in a novel FL scheme. Regardless of multi-task learning algorithms used in FL, each local model possesses a weight or an impact in the global cost function. This impact can be assigned considering many factors by the server or the clients. It can also change (increase or decrease) or even become zero during the learning process. We, therefore, propose a DP algorithm considering the non-identical impact factors, namely, personalized aggregation in differentially private federated learning (PADPFL). We further establish the convergence analysis of the algorithm and the influence of the additive noise on it. In summary, the main contributions of this paper are as follows. * We propose a noise injection paradigm, PADPFL, that satisfies DP requirements with Gaussian distribution when clients have different impact factors in the aggregation process. * We perform a convergence analysis of the proposed algorithm for Non-IID clients when using fixed non-identical impact factors throughout training the global model. * We perform a convergence analysis of the proposed algorithm for Non-IID clients when using adaptive (time-varying) impact factors throughout training the global model. * We conduct evaluations on real-world datasets to verify the effectiveness of PADPFL, and observe the trade-off between model accuracy, privacy budget, and impact factors. The remainder of this paper is organized as follows. In Section II, we review some preliminaries on FL,and DP. In Section III, we introduce our approach for a differentially private federated learning in a client and server side. Next, we analyse the convergence bound on the global loss function of the proposed solution for a fixed and time-varying impacts, in Section IV and V, respectively. Simulations and results are presented in Section IV, and the summary and conclusion are given in Section VI. § PRELIMINARIES In this section, we briefly review some key materials of FL and DP. 
§.§ Federated Learning The goal in a standard FL problem is to develop a global ML model for tens to millions of clients without direct access to their local datasets <cit.>. The only messages transmitted from the clients to the cloud server in this framework are the training parameter updates of the local loss functions. To formalize this goal, consider N clients as depicted in Fig. <ref>. We wish to find weight matrix x minimizing the following loss function: min_x L(x), where L(x):= ∑ _i=1 ^N p_i l_i(x, 𝒟_i), where l_i and D_i represent the local loss function and training database of the i-th client, respectively. Moreover, the coefficient p_i is considered as the relative impact factor of device i in the global model, so that ∑ _i=1 ^ N p_i = 1, 0⩽ p_i ⩽ 1 In order to solve (<ref>), matrix of the global parameters at itteration (t) is updated using weighted averaging of the trained local parameters (x_1 ^(t), x_2 ^t) , ... x_N^(t)) <cit.> x^(t):= ∑ _i=1 ^N p_i x_i^(t), To address heterogenities, FedProx <cit.> is utilized in the learning process. Therefore, defining h_i(x_i^(t+1);x^(t)) = l_i(x_i^(t+1)) + μ/2‖ x_i^(t+1) - x^(t)‖ ^2, γ∈ [0,1], x_0 is the γ_i-inexact solution for min_x h_i(x,x^(t)). §.§ Differential Privacy DP gives a rigorous mathematical definition of privacy and strongly guarantees preserving data in ML algorithms. A randomized mechanism ℳ is differentially private if its output is robust to any change of one sample in the original dataset. The following definition formally clarifies this statement for (ϵ, δ)-DP <cit.>: A randomized mechanism ℳ: 𝒳→ℛ satisfies (ϵ, δ)-differential privacy for two non-negative numbers ϵ and δ if for all adjacent datasets 𝒟 and 𝒟^' d(𝒟,𝒟^') = 1, and for all subsets S ⊆ℛ, there holds Pr[ℳ(𝒟) ∈ S] ⩽ e ^ϵPr[ℳ(𝒟^') ∈ S] + δ, where the randomized algorithm ℳ maps an input x ∈𝒳 discretely to ℳ(x)=y with probability (ℳ(x))_y, ∀ y ∈ℛ. The probability space is defined over the coin flips of the mechanism ℳ. Note that the difference between two datasets 𝒟 and 𝒟^', d(𝒟,𝒟^'), is typically defined as the number of records on which they differ. It is concluded from this definition that, with a probability of δ, the output of a differentially private mechanism on two adjacent datasets varies more than a factor of e ^ϵ. Thus, smaller values of δ enhance the probability of having the same outputs. Smaller values of ϵ narrow down the privacy protection bound. The smaller ϵ and δ, the lower the risk of privacy violation. Based on <cit.>, considering f as an arbitrary d-dimensional function applied on a dataset, for ϵ∈ (0,1) and c ⩾√(2 ln(1.25/ δ)), a Gaussian mechanism with parameter σ⩾ c Δ f / ϵ that deliberately adds Gaussian noise scaled to 𝒩(0,σ ^ 2) to each output component of f is (ϵ, δ)-differentially private. Here, Δ f is the sensitivity of the function f defined by Δ f = max _𝒟, 𝒟^'‖ f(𝒟) - f(𝒟^') ‖. § PERSONALIZED DIFFERENTIAL PRIVACY IN FEDERATED LEARNING In this section, we propose the personalized noise injection for preserving DP. We first describe the threat model and then propose the algorithm. §.§ Threat Model and Design Goals We consider the cloud server to be an “honest-but-curious" entity, ie, the central server can use model inversion attacks to recover training data. Additionally, local and global parameter updates can be revealed to adversaries in the uploading and downloading channels. 
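Before moving to the privacy construction of the next section, the two federated-learning ingredients recalled above, the impact-weighted aggregation and the FedProx-style proximal objective, can be condensed into a short sketch. The quadratic local losses, the plain gradient-descent inner loop, and all numeric values below are illustrative placeholders, not the models or settings used later in the paper.

```python
import numpy as np

def fedprox_local_update(x_global, grad_l_i, mu=0.1, lr=0.05, steps=20):
    """Approximately minimize h_i(x) = l_i(x) + (mu/2)*||x - x_global||^2.

    grad_l_i is a callable returning the gradient of the local loss l_i;
    the plain gradient-descent inner loop is a placeholder for the local solver.
    """
    x = x_global.copy()
    for _ in range(steps):
        x -= lr * (grad_l_i(x) + mu * (x - x_global))
    return x

def aggregate(local_params, p):
    """Impact-weighted aggregation x^(t+1) = sum_i p_i * x_i^(t+1), with sum_i p_i = 1."""
    p = np.asarray(p, dtype=float)
    assert np.isclose(p.sum(), 1.0) and np.all(p >= 0.0)
    return sum(pi * xi for pi, xi in zip(p, local_params))

# Toy usage with quadratic local losses l_i(x) = 0.5*||x - t_i||^2 (illustrative only).
rng = np.random.default_rng(0)
targets = [rng.normal(size=5) for _ in range(3)]
x_glob = np.zeros(5)
local_models = [fedprox_local_update(x_glob, lambda x, t=t: x - t) for t in targets]
x_glob = aggregate(local_models, p=[0.5, 0.3, 0.2])
print(x_glob)
```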
For this reason, the goal of our approach is to protect the weights transmitted between the server and clients from being inferred any extra information about users to both the server or external adversaries. Preserving global privacy is the primal goal of our approach, but it leads to a level of local privacy, as well. Following from <cit.>, we also assume that downloading channels are exposed to more external attacks than uploading channels as they are broadcasting. Hence, considering T aggregation times, the revelation of local parameters while uploading can be at most R times (R ⩽ T). §.§ Proposed Privacy-Preserving Scheme considering systems and statistical heterogeneities, each client influences the global loss function (L) differently. Apart from multi-task learning methods that aim to develop personalized local models, the aforementioned impact factor (p_i) can play a vital role in training accurate models. The impact factors assigned to the clients at each iteration can strengthen or weaken the effect of local models in the global loss function. Although using non-identical impact factors may seem a straightforward approach, its importance is underestimated in the literature. Assuming the natural setting p_i=1/N or even p_i= m_i/m, where m_i=|𝒟_i| and m= ∑_i m_i is the total number of samples, is far from reality and oversimplifies the problem. In fact, clients participate in learning in different ways as their data and structures are not the same. To compensate for these heterogeneities, we assign different impacts to clients while aggregating models. The heterogeneities between the clients can come from several sources, including: * data quality * data reliability * dataset size * link quality * revelation probability * accessibility * client reliability The first three items relate to the clients, while the remaining depends primarily on the knowledge of the central server. While working collaboratively with possibly millions of users, even when datasets have the same nature, the quality of information used in local models is not the same. For instance, while modelling image datasets, the resolution of data varies between the clients, and hence, training local models based on low-quality images reduces the global model performance. Moreover, the reliability of local datasets can influence the validity of models, as they may contain irrelevant information. So, users can send additional bits to the cloud server, based on local model performances, to help the server assign relevant impact factors. On the other hand, the size of local datasets does not necessarily affect impact factors directly. In Non-IID structures, we may have valuable informative datasets that are relatively small in size, but global models should be biased in favor of them. Therefore, the assumption p_i= m_i/m would not be a wise choice. Stemming from the fact that distributed learning requires training and inference of models over a wireless system, uncertainty and stochasticity exist in its nature. Link errors and delays can adversely influence the convergence speed of the learning algorithm <cit.>. However, utilizing variant impact factors can mitigate this challenge to some extent. When client k cannot synchronize itself with the others due to network faults, the server can distribute p_k among counterpart clients for that iteration to keep pace with the algorithm. Additionally, different levels of accessibility and reliability of local parameters lead to utilizing non-identical impact factors. 
When several groups of IoT devices and sensor networks perform the measurements and model updates cooperatively in a FL task, weighting the updates appropriately and based on the devices' accuracy, reliability, or accessibility must be a priority for the server. Along with all the aforementioned heterogeneity sources, impact factors can vary between global iterations. In other terms, the impacts assigned to the local model weights are not fixed throughout the learning process. They can change to accommodate different situations. For instance, a client sending accurate updates may become out of charge or encounter noisy links in the middle of learning process. Hence, a wise server should adaptively reduce the impact factor of its parameters to save the global model performance. In this paper, we achieve (ϵ, δ)-DP using Gaussian mechanism, which provides theoretical privacy guarantees for sharing DL model updates. Here, we calculate the amount of the client-side and server-side additive noise based on the sensitivity parameter when impact factors are non-identical. The differential privacy requirements are satisfied for each iteration, and the only limitation on impact factors is (<ref>). §.§.§ Client-side DP Assume that local model parameters are sent to the cloud server as the updates. Setting the batch size equal to the local dataset size |𝒟_i | = m_i, the function f to be protected is defined as f_i (𝒟_i) ≜ x_i = argmin _x l_i(x, 𝒟_i) = 1/m_i∑ _j=1 ^ m_iargmin _x l_i(x, 𝒟_i,j), ∀ i By clipping the local weights using a bounding limit B, ‖ x_i ‖≤ B <cit.>, the sensitivity of f_i is calculated as Δ f_i = max _𝒟_i , 𝒟^'_i‖ f_i ,𝒟_i - f_i, 𝒟^'_i‖ = max _𝒟_i , 𝒟^'_i‖1/m_i ( ∑ _j=1 ^ m_iargmin _x l_i(x, 𝒟_i,j) - ∑ _j=1 ^ m_iargmin _x l_i(x, 𝒟^' _i,j) ) ‖ = 1/m_imax‖argmin _x l_i(x, 𝒟_i,k) - argmin _x l_i(x, 𝒟^' _i,k) ‖ = 2B/m_i, where, based on the definition of sensitivity here, the i-th client's dataset 𝒟_i and 𝒟^'_i differ only in one sample (k-th sample). To ensure (ϵ,δ)-DP for each client in FL, we have to add Gaussian noise with parameter σ = c 2B/m_i ϵ to the weight matrices of all clients before uploading. Considering the maximum revelation times R, σ should be multiplied with it to guarantee the desired protection level of local parameters. Hence, to have a united noise parameter, we define the standard deviation (SD) of the additive noise in the client-side as σ _C_i = 2B R c/min{m_i} ϵ, ∀ i §.§.§ Server-side DP The function to be protected in the server side is the global aggregated weight transmitted to the clients defined by f ≜ x = ∑ _ i=1 ^ N p_i x_i. Based on the analysis provided in <cit.> and the client-side sensitivity in (<ref>), the sensitivity of f is bounded as Δ f ≤ 2 B max{ p_i}/min{m_i}. Here, to find the SD of the additive Gaussian noise in the server, we first calculate the distribution of the aggregated local noises. The aggregated noisy weight at each iteration is given as x =∑ _i=1 ^n p_i x̃_i = ∑ _i=1^N p_i (x_i + n_i)= ∑ _i=1^N p_i x_i + ∑ _i=1^N p_i n_i_n, where, for the independent normally distributed n_i ∼𝒩(0, σ _ C_i^2), we have n ∼𝒩(0, ∑ _i=1 ^N σ _ C_i^2 p_i ^2 _ σ ^2 _AC). Therefore, we have the following theorem to ensure (ϵ,δ)-DP from the server perspective. Considering T as the aggregation times and the maximum revelations in the broadcasting channels, the SD of the server-side noise is given by σ _S = 2B c √( T^2 maxp_i^2 - R^2 ∑_i=1 ^N p_i ^2)/min{m_i} ϵ, if T > R √(∑ _i p_i ^2)/max{p_i} 0, otherwise. 
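Before turning to the proof, the noise scales of the scheme can be made concrete. The sketch below evaluates the client-side and server-side standard deviations exactly as written above, with c = sqrt(2 ln(1.25/δ)); the concrete values of B, R, T, ε, δ, the dataset sizes, and the impact factors fed in at the end are illustrative placeholders only.

```python
import numpy as np

def dp_noise_scales(B, R, T, eps, delta, m, p):
    """Client- and server-side Gaussian noise scales of the personalized scheme.

    B: clipping bound, R: maximum uplink revelations, T: aggregation rounds,
    (eps, delta): privacy budget, m: client dataset sizes, p: impact factors
    (summing to one). The expressions follow the client-side and server-side
    formulas above; all concrete numbers passed in are illustrative.
    """
    p = np.asarray(p, dtype=float)
    c = np.sqrt(2.0 * np.log(1.25 / delta))
    m_min, p_max = min(m), p.max()

    sigma_client = 2.0 * B * R * c / (m_min * eps)      # same scale for every client
    if T > R * np.sqrt(np.sum(p**2)) / p_max:
        gap = T**2 * p_max**2 - R**2 * np.sum(p**2)     # positive under this condition
        sigma_server = 2.0 * B * c * np.sqrt(gap) / (m_min * eps)
    else:
        sigma_server = 0.0
    return sigma_client, sigma_server

# Illustrative numbers only (not the paper's experimental settings).
sc, ss = dp_noise_scales(B=1.0, R=10, T=30, eps=20.0, delta=0.01,
                         m=[50, 120, 271], p=[0.2, 0.3, 0.5])
print(sc, ss)
```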
The standard deviation of the total desired noise, based on (<ref>), is σ _ A = 2B T c max{ p_i}/min{m_i} ϵ. Hence, the variance of the server-side Gaussian noise is calculated by σ ^2 _S = σ ^2 _A - σ ^2 _AC, which results in (<ref>). Applying the client and server-side noise with the calculated Gaussian distributions satisfies (ϵ, δ)-DP theoretically in the uploading and downloading channels for each iteration. Since the involved clients add noise to the local parameters before uploading for the server, a level of local privacy is also achieved here. The server, subsequently, chooses the relative impact factors based on the information acquired and updates the global parameters. Then, it decides on the extra server-side noise, n_s ∼𝒩(0, σ_S), and transmits x̃= x + n_s for the upcoming training cycle. § CONVERGENCE ANALYSIS OF THE PERSONALIZED DP IN FL In this section, we analyze the convergence properties of the proposed algorithm for personalized DP in FL. Our main purpose is to reach a convergence upper limit for the algorithm when we have personalized impact factors. The required assumptions for our analysis about the properties of the global and local loss functions, regarding their relation L(x) = ∑ _i=1 ^N p_i l_i (x), are as follows: * l_i(x) is convex. * l_i(x) is ρ-Lipschitz smooth, i.e., ‖∇ l_i (a) - ∇ l_i (b) ‖⩽ρ‖ a - b ‖, ∀ a, b. * L(x^(0)) - L (x ^ *) = Θ; where x^(0) and x^* represent the initial and optimal model parameters, respectively. * ‖∇ l_i (x) - ∇ L (x) ‖⩽ε, ∀ i,x; where ε is the divergence measure. Note that the distribution of local datasets in the non-i.i.d fashion breaks the general assumption of p_i = m_i/m. Hence, the expectation over clients 𝔼{l_i(x)} is not considered equal with the global expectation 𝔼{L(x)}. The only assumption on relative impact factors is ∑_i=1 ^N p_i =1. As the first step through our convergence bounding analysis, we present the following lemma for the local dissimilarity measure, when having non-identical impacts. For the local loss functions l_i with impact factors p_i in the FL global function L, there exists A as a measure of dissimilarity at x such that ∑ _i=1 ^N p_i ‖∇ l_i(x) ‖⩽‖∇ L(x) ‖ A ∀ i, Due to Assumption 1, we have ‖∇ l_i(x)-∇ L(x) ‖ ^2 ⩽ε ^2 and ‖∇ l_i(x)-∇ L(x) ‖ ^2 = ‖∇ l_i (x) ‖ ^2 - 2 ∇ l_i (x) ^⊤∇ L(x) + ‖∇ L(x)‖ ^2. Considering (<ref>) and multiplying (<ref>) with p_i, ∀ i yields ∑ _i=1 ^N p_i ‖∇ l_i(x) ‖ ^2 - 2 ∑ _i=1 ^N p_i ∇ l_i (x) ^ ⊤∇ L (x) + ‖∇ L(w) ‖ ^2 ∑ _i=1 ^N p_i ⩽ε ^2 ∑ _i=1 ^N p_i. Considering ∑ _i=1 ^N p_i = 1 and ∑ _i=1 ^N p_i ∇ l_i (x) = ∇ L (x), we have ∑ _i=1 ^N p_i ‖∇ l_i(x) ‖ ^2 ⩽ 2 ∇ L (x) ^ ⊤∇ L (x) - ‖∇ L(x) ‖ ^2 + ε ^2 = ‖∇ L(x) ‖ ^2 + ε ^2 = ‖∇ L(x) ‖ ^2 A_1(x) ^2. Note that when ‖∇ L(x) ‖ ^2 ≠ 0, there exists A_1(x) = √(1 + ε ^2/‖∇ L(x) ‖ ^2)⩾ 1. Therefore, we have ∑ _i=1 ^N p_i ‖∇ l_i(x) ‖ ^2 ⩽‖∇ L(x) ‖ ^2 A_1 ^2, where A_1 is the upper bound of A_1 (x). Considering (<ref>), there also exists A ⩾ 1 such that ∑ _i=1 ^N p_i ‖∇ l_i(x) ‖⩽‖∇ L(x) ‖ A . This completes the proof. Now, the following lemma gives an expected upper bound on the increment of global loss value per-iteration, when DP noise injection is adopted. 
The expected difference of global loss functions in two consecutive iterations (t) and (t+1), or the per-iteration expected increment in the value of the loss function, has the following upper limit: 𝔼{ L(x̃^(t+1)) - L(x̃^(t))}⩽λ _2 ‖ L(x̃^(t))‖ ^2 + λ _1 𝔼{‖ n^ (t+1)‖}‖ L(x̃^(t))‖ +λ _0 𝔼{‖ n^ (t+1)‖ ^2 }, where λ_2 = -1/μ + A/μ( γ + ρ (1+ γ)/μ) + ρ A ^2 (1+ γ)^2 /2 μ^2, λ_1 = 1+ ρ A (1+ γ)/μ, λ _0 = ρ/2, and n^(t)= ∑ _i=1 ^N p_i n_i ^(t) + n_s ^(t) is the aggregated noise of the clients and server in each cycle. Considering the aggregation process with artificial noises of the client and server side in the (t+1)-th aggregation, we have x̃^(t+1)= ∑ _i=1 ^N p_i x_i ^(t+1) + n ^(t+1), where n^(t) = ∑ _i=1 ^N p_i n_i ^(t) + n_s ^(t) Because l_i(·) is ρ-Lipschitz smooth, we have l_i(x̃^(t+1)) ⩽ l_i(x̃^(t))+ ∇ l_i(x̃^(t))^ ⊤ (x̃^(t+1) - x̃^(t)) + ρ/2‖x̃^(t+1) - x̃^(t)‖ ^2 for all x̃^(t+1), x̃^(t). Summation of (<ref>) multiplied with p_i, ∀ i yields ∑ _i=1 ^N p_i l_i(x̃^(t+1)) ⩽∑ _i=1 ^N p_i l_i(x̃^(t)) + ∑ _i=1 ^N p_i ∇ l_i(x̃^(t))^ ⊤ (x̃^(t+1) - x̃^(t)) + ρ/2‖x̃^(t+1) - x̃^(t)‖ ^2 ∑ _i=1 ^N p_i. Considering the definition of global loss function L(·) and ∑ _i=1 ^N p_i = 1, we have L(x̃^(t+1)) - L(x̃^(t)) ⩽∇ L(x̃^(t))^ ⊤ (x̃^(t+1) - x̃^(t)) + ρ/2‖x̃^(t+1) - x̃^(t)‖ ^2 and therefore, 𝔼{ L(x̃^(t+1)) - L(x̃^(t))}⩽ 𝔼{⟨∇ L(x̃^(t)), (x̃^(t+1) - x̃^(t)) ⟩} + ρ/2𝔼{‖x̃^(t+1) - x̃^(t)‖ ^2 } Defining h(x_i ^(t+1);x̃ ^(t))≜ l_i(x_i ^(t+1)) +μ/2‖ x_i^(t+1) - x̃^(t)‖ ^2, we have ∇ h(x_i ^(t+1);x̃ ^(t))= ∇ l_i(x_i ^(t+1)) + μ (x_i^(t+1) - x̃^(t)). Summation of (<ref>) multiplied with p_i, ∀ i yields ∑ _i=1 ^N p_i∇ h(x_i ^(t+1);x̃ ^(t)) = ∑ _i=1 ^N p_i ∇ l_i(x_i ^(t+1)) + μ∑ _i=1 ^N p_i (x_i^(t+1) - x̃^(t))= ∑ _i=1 ^N p_i ∇ l_i(x_i ^(t+1)) + μ∑ _i=1 ^N p_i x_i^(t+1) - μx̃^(t) and therefore, x̃^(t+1) - x̃^(t)= ∑ _i=1 ^N p_i x_i^(t+1) + n^(t+1) - x̃^(t) = 1/μ[ ∑ _i=1 ^N p_i ( ∇ h(x_i ^(t+1);x̃ ^(t)) - ∇ l_i(x_i ^(t+1)) ) ] + n^(t+1). Substituting (<ref>) into (<ref>), we obtain 𝔼{ L(x̃^(t+1)) - L(x̃^(t))}⩽𝔼{1/μ⟨∇ L(x̃^(t)), ∑ _i=1 ^N p_i ( ∇ h(x_i ^(t+1);x̃ ^(t) ) - ∇ l_i(x_i ^(t+1))) ⟩ + ⟨∇ L(x̃^(t)), n^(t+1)⟩} + ρ/2𝔼{‖x̃^(t+1) - x̃^(t)‖ ^2 } = 𝔼{1/μ⟨∇ L(x̃^(t)), ∑ _i=1 ^N p_i ( ∇ h(x_i ^(t+1);x̃ ^(t) ) . - ∇ l_i(x_i ^(t+1))+ . ∇ l_i(x̃ ^(t)) ) - ∑ _i=1 ^N p_i ∇ l_i(x̃ ^(t)) ⟩ + ⟨∇ L(x̃^(t)), n^(t+1)⟩} + ρ/2𝔼{‖x̃^(t+1) - x̃^(t)‖ ^2 } = -1/μ‖∇ L(x̃^(t)) ‖ ^2 + 𝔼{1/μ⟨∇ L(x̃^(t)), ∑ _i=1 ^N p_i ∇ h(x_i ^(t+1);x̃ ^(t) )+ ∑ _i=1 ^N p_i ( ∇ l_i(x̃ ^(t)). - . ∇ l_i(x_i ^(t+1)) )⟩} + 𝔼{⟨∇ L(x̃^(t)), n^(t+1)⟩} + ρ/2𝔼{‖x̃^(t+1) - x̃^(t)‖ ^2 } Now, let us bound ‖x̃^(t+1) - x̃^(t)‖. We know ‖ x_i^(t+1) - x̃^(t)‖⩽‖ x_i^(t+1) - x̂_i^(t+1)‖ + ‖x̂_i^(t+1) - x̃^(t)‖, where x̂_i^(t+1) =argmin _x h_i(x;x̃ ^(t)) Define μ = μ - ρ_- > 0, due to the μ-convexity of h_i(x;x̃ ^(t)) we have ‖x̂_i^(t+1) - x_i^(t+1)‖⩽γ/μ‖∇ l_i(x̃^(t)) ‖ and ‖x̂_i^(t+1) - x̃^(t)‖⩽1/μ‖∇ l_i(x̃^(t)) ‖ where γ∈ [0,1] denotes a γ-inexact solution of min _x h_i(x;x̃ ^(t)) <cit.>. For such a solution, x_0, we have ‖∇ h(x_0; x̃)‖⩽γ‖∇ h(x̃; x̃) ‖ . Now we can use (<ref>) and (<ref>) to obtain ‖ x_i^(t+1) - x̃^(t)‖⩽1+ γ/μ‖∇ l_i(x̃^(t)) ‖. Therefore, ‖x̃^(t+1) - x̃^(t)‖ = ‖ x^(t+1) + n ^ (t+1) - x̃^(t)‖ ⩽‖ x^(t+1) - x̃^(t)‖ +‖ n^ (t+1)‖ = ‖∑ _i=1 ^ N p_i ( x_i ^(t+1) - x̃^(t))‖ +‖ n^ (t+1)‖ ⩽∑ _i=1 ^ N p_i ‖ x_i ^(t+1) - x̃^(t)‖ +‖ n^ (t+1)‖ ⩽∑ _i=1 ^ N p_i ( 1+ γ/μ‖∇ l_i(x̃^(t)) ‖) +‖ n^ (t+1)‖ ⩽A (1+ γ)/μ‖∇ L(x̃^(t)) ‖ +‖ n^ (t+1)‖. 
Since l_i(·) is ρ-Lipschitz smooth, we have ‖∇ l_i (x̃^(t)) - ∇ l_i (x_i^(t+1)) ‖⩽ρ‖x̃^(t) - x_i^(t+1)‖ Using the triangle inequality, (<ref>), (<ref>), and (<ref>), we obtain ‖∑ _i=1 ^N p_i ∇ h(w_i ^(t+1);x̃ ^(t) )+ ∑ _i=1 ^N p_i ( ∇ l_i(x̃ ^(t)). - . ∇ l_i(x_i ^(t+1)) ) ‖⩽‖∑ _i=1 ^N p_i ∇ h(x_i ^(t+1);x̃ ^(t) )‖ + ‖∑ _i=1 ^N p_i ( ∇ l_i(x̃ ^(t)). - . ∇ l_i(x_i ^(t+1)) ) ‖ ⩽∑ _i=1 ^N p_i ‖∇ h(x_i ^(t+1);x̃ ^(t) )‖ + ∑ _i=1 ^N p_i ‖( ∇ l_i(x̃ ^(t)). - . ∇ l_i(x_i ^(t+1)) ) ‖⩽γ∑ _i=1 ^N p_i ‖∇ l_i(x̃^(t)) ‖ + ρ∑ _i=1 ^N p_i ‖x̃^(t) - x_i^(t+1)‖⩽ A γ‖∇ L(x̃^(t)) ‖ +ρ A(1+ γ)/μ‖∇ L(x̃^(t)) ‖. Then, from (<ref>) and the Cauchy-Schwarz inequality we have ⟨∇ L(x̃^(t)), ∑ _i=1 ^N p_i ∇ h(x_i ^(t+1);x̃ ^(t) ) + ∑ _i=1 ^N p_i ( ∇ l_i(x̃ ^(t)). - . ∇ l_i(x_i ^(t+1)) )⟩⩽‖∇ L(x̃^(t)) ‖ [ ( A γ +ρ A (1+ γ)/μ) ‖∇ L(x̃^(t)) ‖] = ( A γ +ρ A (1+ γ)/μ) ‖∇ L(x̃^(t)) ‖ ^2 Substituting (<ref>) and (<ref>) into (<ref>) yields 𝔼{ L(x̃^(t+1)) - L(x̃^(t))}⩽ -1/μ‖∇ L(x̃^(t)) ‖ ^2 + ( A γ/μ +ρ A (1+ γ)/μμ) ‖∇ L(x̃^(t)) ‖ ^2 + 𝔼{‖∇ L(x̃^(t)) ‖‖ n^(t+1)‖} + ρ/2𝔼{[ A (1+ γ)/μ‖∇ L(x̃^(t)) ‖ +‖ n^ (t+1)‖]^2 }. Then, we obtain 𝔼{ L(x̃^(t+1)) - L(x̃^(t))}⩽λ _2 ‖ L(x̃^(t))‖ ^2 + λ _1 𝔼{‖ n^ (t+1)‖}‖ L(x̃^(t))‖ +λ _0 𝔼{‖ n^ (t+1)‖ ^2 }, where λ_2 = -1/μ + A/μ( γ + ρ (1+ γ)/μ) + ρ A ^2 (1+ γ)^2 /2 μ^2, λ_1 = 1+ ρ A (1+ γ)/μ and λ _0 = ρ/2 This completes the proof. As expected, lemma 2 indicates the adverse effect of differential privacy in the expected per-iteration increment of the global loss value. .... As the final step, we use the per-iteration increment to establish the convergence analysis of the proposed algorithm. The upper limit of the difference between the T-th and the optimal loss function values defined as the convergence property is given by 𝔼{L(x̃^(T)) - L(x^*)}⩽Θ + k_2 T + k_1 T^2/ϵ + k_0 T^3/ϵ ^2, where k_2 = λ_2β^2, k_1= 2 λ_1β B c max{ p_i }/max{ m_i }√(2N/π), and k_0 = 4 λ_0 B^2 c^2max{ p_i }^2/max{ m_i }^2. Considering the same and independent noise distribution of the additive noise, we define 𝔼{‖ n^(t)‖} = 𝔼{‖ n ‖} and 𝔼{‖ n^(t)‖ ^2} = 𝔼{‖ n ‖ ^2 }. Applying (<ref>) recursively for 0 ⩽ t ⩽ T yields 𝔼{L(x̃^(T)) - L(x̃^(0))}⩽ T λ_2‖ L(x̃^(t))‖ ^2 + T λ_1‖ L(x̃^(t))‖𝔼{‖ n ‖} + T λ_0𝔼{‖ n ‖ ^2}, Considering ‖ L(x̃^(t))‖⩽β and Adding 𝔼{ L(x̃^(0)) - L(x^*) } to both sides of (<ref>), we have 𝔼{L(w̃^(T)) - L(w^*)}⩽Θ + λ_2 T β ^2 + λ_1 T β𝔼{‖ n ‖} + λ_0 T 𝔼{‖ n ‖ ^2} , Since we have σ _A =Δ f T c/ϵ, we obtain 𝔼{‖ n ‖} = Δ f Tc/ϵ√(2N/π) and 𝔼{‖ n ‖ ^2 } = Δ f ^2 T ^2 c ^2 N/ϵ ^2. Setting Δ f= 2Bmax{ p_i }/max{ m_i } and substituting (<ref>) into (<ref>), we have 𝔼{L(x̃^(T)) - L(x^*)}⩽Θ + λ_2 T β ^2 + 2 λ_1 T ^2 β B c max{ p_i }/ϵmax{ m_i }√(2N/π) + 4 λ_0 T ^3 B^2 c^2 max{ p_i } ^2/ϵ ^2 max{ m_i }^2 = Θ + k_2 T + k_1 T^2/ϵ + k_0 T^3/ϵ ^2, where k_2 = λ_2β^2, k_1= 2 λ_1β B c max{ p_i }/max{ m_i }√(2N/π), and k_0 = 4 λ_0 B^2 c^2max{ p_i }^2/max{ m_i }^2. This completes the proof. The last two terms in the right hand side of (<ref>) depend directly on the amount of noise. lower ϵ values strengthen the privacy protection and adversely affect the convergence property. The first two terms, however, are the constant parts depending on the number of iterations. ... In the above analysis, we saw that by a wise choice of impact factors, T, and N we can be confident about the convergence of the FL algorithm while (ϵ, δ)-DP is used. The number of clients involved in learning in the presented analysis should not necessarily be fixed through training, and this enhances the compatibility of the proposed approach. 
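The shape of the bound in Theorem 2 is easy to explore numerically: once Θ, k_2, k_1 and k_0 are fixed (below to arbitrary illustrative values, since in the paper they are built from ρ, μ, γ, β, B, c, N and the impact factors), the upper limit grows like T³/ε², so tightening the privacy budget or adding aggregation rounds inflates the guarantee exactly as discussed above. A minimal sketch:

```python
def convergence_bound(T, eps, theta=1.0, k2=0.01, k1=0.05, k0=0.002):
    """Upper bound Theta + k2*T + k1*T**2/eps + k0*T**3/eps**2 from Theorem 2.

    The constants are illustrative placeholders; in the paper they are built
    from rho, the proximal parameter, gamma, beta, B, c, N and the impact factors.
    """
    return theta + k2 * T + k1 * T**2 / eps + k0 * T**3 / eps**2

for eps in (5.0, 20.0):
    for T in (10, 30):
        print(f"eps={eps:>4}, T={T:>2}: bound = {convergence_bound(T, eps):.2f}")
```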
In the next section, we present the analysis of the same algorithm when impact factors adaptively change throughout the learning process. § CONVERGENCE ANALYSIS OF DP IN FL WITH ADAPTIVE IMPACT FACTORS In this section, we consider an extension to the previous part when impact factors are not fixed during the training. In fact, impacts assigned to clients can vary in each iteration based on the devises' resources or network conditions. The calculated amount of Gaussian noise in section 3 can still be utilized here, since iterations are independent in noise generation. However, the convergence analysis provided in the previous section needs to be more generalized. Here, we change p_i to p_i ^(t) to represent this adaptability in our equations. Without loss of generality, we assume the relation between two consecutive impact factors to be p_i ^(t+1)= p_i ^(t) + α _i ^(t) ∀ i, where α _ i ^(t) is the amount of change that the relative impact factor assigned to i-th client undergoes for (t+1)-th iteration. Hence, |α _i |⩽ 1 and ∑ _i=1 ^ N α _i ^(t)=0. In order to perform the analysis of the adaptive form, we first present an extension to lemma 2 and then present the convergence upper bound in theorem 3. The per-iteration expected increment in the value of the loss function, when adaptive p_i is adopted, has the following upper limit: 𝔼{ L(x̃^(t+1)) - L(x̃^(t))}⩽ λ^' _2 ‖ L(x̃^(t))‖ ^2 + λ^' _1 𝔼{‖ n^ (t+1)‖}‖ L(x̃^(t))‖ +λ^' _0 𝔼{‖ n^ (t+1)‖ ^2 } + 1/2max{l_i}, where λ^'_2 = -1/μ + A^'/μ( γ + ρ (1+ γ)/μ) + ρA^' ^2 (1+ γ)^2 /2 μ^2, λ^'_1 = 1+ ρ A ^'(1+ γ)/μ, λ^' _0 = ρ/2, and n^(t)= ∑ _i=1 ^N p_i n_i ^(t) + n_s ^(t) is the aggregated noise of the clients and server in each cycle. From (<ref>) we have ∑ _i=1 ^N p_i ^(t)‖∇ l_i(x ^(t)) ‖⩽‖∇ L(x^(t)) ‖ A. Adding ∑ _i=1 ^Nα _i ^(t)‖∇ l_i (x ^(t)) ‖ to both sides of (<ref>) yields ∑ _i=1 ^N p_i ^(t+1)‖∇ l_i(x ^(t)) ‖⩽ ∑ _i=1 ^Nα _i ^(t)‖∇ l_i (x ^(t)) ‖ + ‖∇ L(x^(t)) ‖ A Hence, we have ∑ _i=1 ^N p_i ^(t+1)‖∇ l_i(x ^(t)) ‖⩽‖∇ L(x^(t)) ‖ A^', where A^' = ∑ _i=1 ^Nα _i ^(t)‖∇ l_i (x ^(t)) ‖/‖∇ L (x ^(t)) ‖ + A. Therefore, we can bound ‖x̃^(t+1) - x̃^(t)‖ as ‖x̃^(t+1) - x̃^(t)‖ = ‖ x^(t+1) + n ^ (t+1) - x̃^(t)‖ ⩽‖ x^(t+1) - x̃^(t)‖ +‖ n^ (t+1)‖ = ‖∑ _i=1 ^ N p_i ^(t+1) ( x_i ^(t+1) - x̃^(t))‖ +‖ n^ (t+1)‖ ⩽∑ _i=1 ^ N p_i^(t+1)‖ x_i ^(t+1) - x̃^(t)‖ +‖ n^ (t+1)‖ ⩽∑ _i=1 ^ N p_i^(t+1)( 1+ γ/μ‖∇ l_i(x̃^(t)) ‖) +‖ n^ (t+1)‖ ⩽A ^'(1+ γ)/μ‖∇ L(x̃^(t)) ‖ +‖ n^ (t+1)‖. Summation of (<ref>) multiplied with p_i^(t), ∀ i yields ∑ _i=1 ^N p_i^(t) l_i(x̃^(t+1)) ∑ _i=1 ^N p_i^(t) l_i(x̃^(t)) + ∑ _i=1 ^N p_i^(t)∇ l_i(x̃^(t))^ ⊤ (x̃^(t+1) - x̃^(t)) + ρ/2‖x̃^(t+1) - x̃^(t)‖ ^2 ∑ _i=1 ^N p_i ^(t). Considering (<ref>), we have L(x̃^(t+1)) - L(x̃^(t))⩽∑ _i=1 ^Nα_i^(t) l_i (x̃^(t+1)) + ⟨∇ L(x̃^(t)), (x̃^(t+1) - x̃^(t)) ⟩ + ρ/2{‖x̃^(t+1) - x̃^(t)‖ ^2 }. Without loss of generality, we assume 𝔼{l_i (x ^(t))} = 1/N l_i (x ^(t)), and therefore 𝔼{∑ _i=1 ^N α _i ^(t) l_i (x ^(t)) } = ∑ _i=1 ^N α _i ^(t)𝔼{ l_i (x ^(t)) } = ∑ _i=1 ^N α _i ^(t)( 1/N l_i (x ^(t)) ) ⩽1/2max{l_i (x ^(t))} Then, (<ref>) and (<ref>) gives 𝔼{ L(x̃^(t+1)) - L(x̃^(t)) }⩽ 1/2max{l_i} + 𝔼{⟨∇ L(x̃^(t)), (x̃^(t+1) - x̃^(t)) ⟩} + ρ/2𝔼{‖x̃^(t+1) - x̃^(t)‖ ^2 }. 
Defining h(·) as (<ref>) and Summation of (<ref>) multiplied with p_i^(t+1), ∀ i yields ∑ _i=1 ^N p_i^(t+1)∇ h(x_i ^(t+1);x̃ ^(t)) = ∑ _i=1 ^N p_i ^(t+1)∇ l_i(x_i ^(t+1)) + μ∑ _i=1 ^N p_i^(t+1) (x_i^(t+1) - x̃^(t))= ∑ _i=1 ^N p_i^(t+1)∇ l_i(x_i ^(t+1)) + μ∑ _i=1 ^N p_i^(t+1) x_i^(t+1) - μx̃^(t) and therefore, x̃^(t+1) - x̃^(t)= ∑ _i=1 ^N p_i ^(t+1) x_i^(t+1) + n^(t+1) - x̃^(t) = 1/μ[ ∑ _i=1 ^N p_i^(t+1)( ∇ h(x_i ^(t+1);x̃ ^(t)) - ∇ l_i(x_i ^(t+1)) ) ] + n^(t+1). Substituting (<ref>) into (<ref>), we obtain 𝔼{ L(x̃^(t+1)) - L(x̃^(t))}⩽𝔼{1/μ⟨∇ L(x̃^(t)), ∑ _i=1 ^N p_i^(t+1)( ∇ h(x_i ^(t+1);x̃ ^(t) ) - ∇ l_i(x_i ^(t+1))) ⟩ + ⟨∇ L(x̃^(t)), n^(t+1)⟩} + ρ/2𝔼{‖x̃^(t+1) - x̃^(t)‖ ^2 } + 1/2max{l_i} = 𝔼{1/μ⟨∇ L(x̃^(t)), ∑ _i=1 ^N p_i^(t+1)( ∇ h(x_i ^(t+1);x̃ ^(t) ) - ∇ l_i(x_i ^(t+1)) ) + ∑ _i=1 ^N p_i^(t)∇ l_i(x̃ ^(t)) - ∑ _i=1 ^N p_i^(t)∇ l_i(x̃ ^(t)) ⟩ + ⟨∇ L(x̃^(t)), n^(t+1)⟩} + ρ/2𝔼{‖x̃^(t+1) - x̃^(t)‖ ^2 } + 1/2max{l_i} = -1/μ‖∇ L(x̃^(t)) ‖ ^2 + 𝔼{1/μ⟨∇ L(x̃^(t)), ∑ _i=1 ^N p_i ^(t+1)∇ h(x_i ^(t+1);x̃ ^(t) )- ∇ L(x_i ^(t+1)) + ∇ L(x̃ ^(t)) ⟩} + 𝔼{⟨∇ L(x̃^(t)), n^(t+1)⟩} + ρ/2𝔼{‖x̃^(t+1) - x̃^(t)‖ ^2 } + 1/2max{l_i} ρ-Lipschitzity of local loss functions leads to have a ρ-Lipschitz global loss function. Hence, ‖∇ L (x̃^(t) - ∇ L (x_i ^(t+1) )‖⩽ρ‖x̃^(t) - x_i ^(t+1)‖ Therefore, using triangle inequality, (<ref>), and (<ref>) we obtain ‖∑ _i=1 ^N p_i ^(t+1)∇ h(x_i ^(t+1);x̃ ^(t) )+ ∇ L(x̃ ^(t)) - ∇ L(x_i ^(t+1)) ‖ ⩽‖∑ _i=1 ^N p_i ^(t+1)∇ h(x_i ^(t+1);x̃ ^(t)) ‖ + ‖∑ _i=1 ^N p_i ^(t+1)( ∇ L(x̃ ^(t)). . . . - ∇ L(x_i ^(t+1)) ) ‖⩽( A ^'γ + ρ A ^' (1+ γ)/μ) ‖∇ L (x̃^(t)) ‖ Substituting (<ref>) and (<ref>) into (<ref>) yields 𝔼{ L(x̃^(t+1)) - L(x̃^(t))}⩽ -1/μ‖∇ L(x̃^(t)) ‖ ^2 + ( A^'γ/μ +ρ A^' (1+ γ)/μμ) ‖∇ L(x̃^(t)) ‖ ^2 + 𝔼{‖∇ L(x̃^(t)) ‖‖ n^(t+1)‖} + ρ/2𝔼{[ A ^'(1+ γ)/μ‖∇ L(x̃^(t)) ‖ +‖ n^ (t+1)‖]^2 } + 1/2max{l_i}. And, we get 𝔼{ L(x̃^(t+1)) - L(x̃^(t))}⩽ λ^' _2 ‖ L(x̃^(t))‖ ^2 + λ^' _1 𝔼{‖ n^ (t+1)‖}‖ L(x̃^(t))‖ +λ^' _0 𝔼{‖ n^ (t+1)‖ ^2 } + 1/2max{l_i}, where λ^'_2 = -1/μ + A^'/μ( γ + ρ (1+ γ)/μ) + ρA^' ^2 (1+ γ)^2 /2 μ^2, λ^'_1 = 1+ ρ A ^'(1+ γ)/μ, λ^' _0 = ρ/2. This completes the proof. Using adaptive p_i assignment, the upper limit of the difference between the T-th and the optimal loss function values defined as the convergence property is given by 𝔼{L(x̃^(T)) - L(x^*)}⩽Θ + k^'_2 T + k_1^' T^2/ϵ + k_0^' T^3/ϵ ^2, where k^'_2 = λ^'_2β^2 + max{l_i}/2 , k^'_1= 2 λ^'_1β B c max{ p_i }/max{ m_i }√(2N/π), and k^'_0 = 4 λ^'_0 B^2 c^2max{ p_i }^2/max{ m_i }^2. The proof can be easily extended from the proof for Theorem 2 and using lemma 3. § SIMULATION RESULTS In this section we evaluate our approach against different privacy budgets and impact factors. We present four scenarios to study the effect of noise and impact factors on convergence bound and accuracy of the models. §.§ Experimental Setting We evaluate our approach on the real-world Modified National Institute of Standards and Technology (MNIST) dataset <cit.>. MNIST is a widely used dataset for handwritten digit identification which is consisted of 60000 training and 10000 testing samples. We use a multi-layer perceptron (MLP) neural network in local clients and the model weights are communicated with the server for aggregation at each cycle. The designed MLP model classifies the input images using a ReLU activation function in the hidden layer and softmax of 10 classes in the output layer. To proceed the SGD algorithm in the local optimizers, we set the learning rate equal to 0.02. 
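The local model just described can be written down in a few lines (PyTorch is used here for concreteness; the hidden-layer width is not stated in the text, so the value below is an assumption):

```python
import torch
import torch.nn as nn

class LocalMLP(nn.Module):
    """MLP classifier used by each client: one ReLU hidden layer and a
    10-class softmax output, trained with plain SGD (lr = 0.02 as in the text).
    The hidden width of 200 is an assumed value, not taken from the paper."""

    def __init__(self, hidden=200):
        super().__init__()
        self.body = nn.Sequential(
            nn.Flatten(),
            nn.Linear(28 * 28, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 10),
        )

    def forward(self, x):
        # CrossEntropyLoss applies log-softmax internally, so raw logits are returned.
        return self.body(x)

model = LocalMLP()
optimizer = torch.optim.SGD(model.parameters(), lr=0.02)
criterion = nn.CrossEntropyLoss()
```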
We stablish our evaluation using four scenarios listed in Table <ref>. A small randomly chosen subset of the MNIST is distributed between the clients non-identically in each scenario between 60 clients. The dataset is purposely reduced in size to avoid overfitting. The personalized DP noise is injected in both the client-side and server-side, and the affect of non-identical impact factors during the aggregation process is checked for each scenario. We set δ= 0.01 for the privacy budget, and choose different protection levels (ϵ) throughout this experiment for 30 global iterations. We further discuss each scenario in the following section. §.§ Numerical Results After distributing the dataset between 60 clients, we randomly divide clients into three parts. In the following items, the details of each scenario are presented, respectively. * scenario 1 : In the first scenario, we apply the presented privacy protection scheme on clients with heterogeneous data quality. Clients are identical in terms of dataset size, or m=150 for all parts, and we deliberately add salt-and-pepper noise with various densities to each part to change data quality between the clients. We first set different impact factors in the non-private mode for comparison. Fig. <ref> depicts the importance of impact factors in FL model performances. As it is shown, equal p_i (the green curve) leads to the worst accuracy. The sequence of numbers dedicated to each curve in Fig. <ref> represents the relations between impact factors of the three parts. For instance, “0-1-2" means that we have set p_i equal to 0, 1/N, and2/N for the first, second, and third part, respectively. Considering “0-1-2" as the optimal ratio between the impact factors, Fig. <ref> compares the results after applying DP. Here, Gaussian noise is injected using (<ref>) and (<ref>) for protection levels ϵ=5 and ϵ=20 with non-identical impacts. As expected from (<ref>), values of the loss function decrease for higher privacy protection levels. In this experiment, we also compare the results when identical p_i is adopted for ϵ=20. As shown in Fig. <ref>, the model performance using identical p_i is even worse than a higher protection level ϵ=5, when personalized DP is used. * scenario 2 : In this scenario, we go further and change clients' distributions in addition to data quality. Hence, we divide clients into three parts of 20, 35, and 5 clients each, and deliberately add salt-and-pepper noise to them based on densities in Table <ref>. Fig. <ref> depicts the loss function values in the non-private mode using different impact factor assignments. As shown, equal p_i cannot be a right choice in the presence of heterogeneities. In this case, weighting clients is based on a balance between involving a sufficient number of clients in learning and exploiting the most accurate samples. The red curve related to “0-2-1” impact assignment sets impact factors of the first, second and third parts equal to 0, 8/5N, and 4/5N, respectively. Considering “0-2-1" as the basis ratio between the impact factors, model performances in the private mode is compared in Fig. <ref>. It is clear from Fig. <ref> that a wise choice of p_i significantly improves model accuracy in distributed architectures, especially while using DP. * scenario 3 : The third scenario is designed to see the effect of the dataset size and impact factors on the convergence performance of FL model. 
As given in Table <ref>, we set m equal to 50, 120, and 271 in clients of part 1, 2, and 3 respectively, and as depicted in Fig. <ref>, setting higher weights to the third part (clients with larger datasets) yields to better results. The common method defining impact factors based on dataset size, or setting p_i= m_i/m, is probably developed from this assumption. But it can adversely affect the global model accuracy in heterogeneous structures. Reducing clients' training samples increases sensitivity and the amount of noise required for preserving DP. Fig. <ref> compares the results after Gaussian noise is added for protection levels ϵ=5 and ϵ=20. As it is shown for ϵ= 20, the accuracy of the model when identical p_i is set for all parts is still worse than assigning impact factors proportional to 1, 3, and 5 for part 1, 2, and 3, respectively. This result has been achieved in spite of the fact that more noise is added due to a higher sensitivity in the latter experiment. * scenario 4 : We have designed the forth scenario to compare convergence properties when adaptive impact factors are used. As presented in Table <ref>, the quality of clients' datasets are changed after the 10-th aggregation round, and therefore, p_i of each part should be changed to obtain the best model performance. Fig. <ref> depicts the results of three types of impact factor assignments. The green curve which belongs to the experiment giving the heaviest weight to slight noisy datasets yields to the fastest and the most accurate result. Considering the green curve as the basis for the private mode, we compare loss function values for protection levels of ϵ=5 and ϵ=20 in Fig. <ref>. § CONCLUSION In this paper, we presented a personalized privacy preserving approach in federated learning models. Considering the systems and statistical heterogeneities in distributed architectures, we have first focused on the roles that impact factors play in obtaining the best model performance. We further clarified that the impacts are not necessarily fixed during training the global model and undergo changes. Hence, the influence each client has on learning can increase, decrease, or become zero while ∑_i=1 ^ N p_i =1 applies. Then, we have proposed the requirements for preserving (ϵ,δ)-DP in both clients and the server, when personalized aggregation is applied. We have developed the convergence analysis of the proposed scheme for both fixed and time-varying impact factors. Our simulation results on four scenarios helps understanding the importance of assigning non-identical impact factors to compensate the weaknesses of local datasets, clients, links, and the server. IEEEtran
http://arxiv.org/abs/2406.17756v1
20240625174153
The Size-Mass relation at Rest-Frame $1.5μ$m from JWST/NIRCam in the COSMOS-WEB and PRIMER-COSMOS fields
[ "Marco Martorano", "Arjen van der Wel", "Maarten Baes", "Eric F. Bell", "Gabriel Brammer", "Marijn Franx", "Angelos Nersesian" ]
astro-ph.GA
[ "astro-ph.GA" ]
0000-0003-2373-0404]Marco Martorano Sterrenkundig Observatorium, Universiteit Gent, Krijgslaan 281 S9, 9000 Gent, Belgium 0000-0002-5027-0135]Arjen van der Wel Sterrenkundig Observatorium, Universiteit Gent, Krijgslaan 281 S9, 9000 Gent, Belgium 0000-0002-3930-2757]Maarten Baes Sterrenkundig Observatorium, Universiteit Gent, Krijgslaan 281 S9, 9000 Gent, Belgium 0000-0002-5564-9873]Eric F. Bell Department of Astronomy, University of Michigan, 1085 South University Avenue, Ann Arbor, MI 48109-1107, USA 0000-0003-2680-005X]Gabriel Brammer Cosmic Dawn Center (DAWN), Niels Bohr Institute, University of Copenhagen, Jagtvej 128, Kø benhavn N, DK-2200, Denmark 0000-0002-8871-3026]Marijn Franx Leiden Observatory, Leiden University, P.O. Box 9513, 2300 RA, Leiden, The Netherlands 0000-0001-6843-409X]Angelos Nersesian STAR Institute, Université de Liège, Quartier Agora, Allée du six Aout 19c, B-4000 Liege, Belgium Sterrenkundig Observatorium, Universiteit Gent, Krijgslaan 281 S9, 9000 Gent, Belgium Marco Martorano marco.martorano@ugent.be § ABSTRACT We present the galaxy stellar mass - size relation in the rest-frame near-IR (1.5 μm) and its evolution with redshift up to z=2.5. Sérsic profiles are measured for ∼ 26 000 galaxies with stellar masses M_⋆ > 10^9 M_⊙ from JWST/NIRCam F277W and F444W imaging provided by the COSMOS-WEB and PRIMER surveys, using coordinates, redshifts, colors and stellar mass estimates from the COSMOS2020 catalog. The new rest-frame near-IR effective radii are generally smaller than previously measured rest-frame optical sizes, on average by 0.14 dex, with no significant dependence on redshift. For quiescent galaxies this size offset does not depend on stellar mass, but for star-forming galaxies the offset increases from -0.1 dex at M_⋆ = 10^9.5 M_⊙ to -0.25 dex at M_⋆ > 10^11 M_⊙. That is, we find that the near-IR stellar mass - size relation for star-forming galaxies is flatter in the rest-frame near-IR than in the rest-frame optical at all redshifts 0.5<z<2.5. The general pace of size evolution is the same in the near-IR as previously demonstrated in the optical, with slower evolution (R_e∝ (1+z)^-0.7) for L^* star-forming galaxies and faster evolution (R_e∝ (1+z)^-1.3) for L^* quiescent galaxies. Massive (M_⋆>10^11 M_⊙) star-forming galaxies evolve in size almost as fast as quiescent galaxies. Low-mass (M_⋆<10^10 M_⊙) quiescent galaxies evolve as slow as star-forming galaxies. Our main conclusion is that the size evolution narrative as it has emerged over the past two decades does not radically change when accessing with JWST the rest-frame near-IR, a better proxy of the underlying stellar mass distribution. § INTRODUCTION The size of a galaxy retains information about its evolutionary history, and is partially a reflection of the properties of the host dark matter halo, most notably the virial radius and the angular momentum <cit.>. Galaxy sizes are therefore a key ingredient of the various well-known scaling relations such as the size-stellar mass relation <cit.> and the fundamental plane <cit.>. Parametric fits can give estimates of galaxies' size at the expense of having some sensitivity to the parametric form, but with the advantage of being less sensitive to PSF smearing and noise effects than non-parametric methods <cit.>. Sérsic fits are the most common profiles adopted to fit galaxies. These are parameterized by flux, half-light radius (R_e), and Sérsic index (n) which quantifies the concentration of the light profile. 
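For reference, the Sérsic surface-brightness profile behind these fits can be sketched directly; the b_n term below uses the common approximation b_n ≈ 2n - 1/3 (adequate for n ≳ 0.5) rather than solving the exact incomplete-gamma condition, and the example parameters are arbitrary.

```python
import numpy as np

def sersic_profile(R, R_e, n, I_e=1.0):
    """Sersic profile I(R) = I_e * exp(-b_n * ((R / R_e)**(1/n) - 1)).

    b_n is approximated as 2n - 1/3 (good to roughly a percent for n >~ 0.5);
    precise work should solve Gamma(2n) = 2 * gamma_inc(2n, b_n) instead.
    I_e is the intensity at the half-light radius R_e.
    """
    b_n = 2.0 * n - 1.0 / 3.0
    return I_e * np.exp(-b_n * ((R / R_e) ** (1.0 / n) - 1.0))

# Arbitrary example: an n = 4 (de Vaucouleurs-like) and an n = 1 (exponential-like) profile.
R = np.linspace(0.05, 5.0, 100)   # radii in illustrative units
print(sersic_profile(R, R_e=2.0, n=4.0)[:3])
print(sersic_profile(R, R_e=2.0, n=1.0)[:3])
```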
While the Sérsic index remains relatively constant when moving from optical wavelengths to the near-IR <cit.>, the half-light radius is significantly influenced by color gradients due to stellar population variations and attenuation by dust <cit.>. Numerous low redshift studies have analyzed the wavelength-dependent behavior of R_e <cit.>. The general result is that galaxy sizes are smaller at longer wavelengths, regardless of type or mass. The implied color gradients are varyingly attributed to gradients in age, metallicity, star-formation activity and dust attenuation, depending on the type and mass of the galaxy. Spatially resolved imaging is essential to study structural evolution at large look-back time to provide stringent constraints on galaxy evolution models. Only the Hubble Space Telescope (HST) and the James Webb Space Telescope (JWST) have sufficient spatial resolution (≈ 1 kpc) and stability of the point spread function to accurately quantify the light profiles of such distant galaxies. HST has been extensively used over the past two decades to map the size evolution at rest-frame UV and optical wavelengths <cit.>, where attenuation by dust and the outshining by young populations present significant obstacles for interpreting the observed sizes. Now that JWST provides us with rest-frame near-IR observations of high-redshift galaxies, we can estimate sizes at longer wavelengths where dust attenuation is much less severe and the outshining effect lessened, such that gradients in the dust content and stellar population properties are suppressed. The first estimates of rest-frame near-IR galaxy sizes from JWST by <cit.> found a systematic 9% reduction in the sizes at 4.4μm compared to those at 1.5μm (both in the observed frame) in the redshift range z=1-2.5. The general result that galaxies are smaller in the rest-frame near-IR compared to the rest-frame optical was subsequently confirmed <cit.>, displaying the same behavior as for present-day galaxies <cit.>. In this work, we present rest-frame 1.5μm effective radius measurements for ∼26 000 galaxies with stellar masses M_⋆≥10^9 M_⊙ in the redshift range z=0.5-2.5 that fall in the COSMOS-WEB <cit.> and PRIMER-COSMOS <cit.> JWST footprints. We take advantage of these new measurements to determine the rest-frame near-IR size-mass relation and its evolution with redshift for the general galaxy population, allowing for a direct comparison with the well-established results obtained in the rest-frame optical wavelength regime. The paper is structured as follows: in Section <ref> we briefly present the cataloged data used in this work along with an overview of the JWST observations. Section <ref> follows with an explanation of the fitting technique adopted to retrieve galaxies' sizes. In Section <ref> we present the size-mass relation and the redshift evolution of the median size of galaxies. This is followed by a discussion in Section <ref>. Finally, in Section <ref>, we summarize our findings and draw our conclusions. We adopt a flat ΛCDM cosmology with H_0=70kms^-1Mpc^-1, Ω_m=0.3. § DATA AND SAMPLE SELECTION In this section we describe the parent catalog that we used to select target galaxies, and their basic properties (redshifts, stellar masses, star-formation activity). We also describe the JWST data used to fit near-IR light profiles. §.§ COSMOS2020 The parent sample is drawn from the COSMOS2020-CLASSIC catalog presented by <cit.>. 
It contains multi-wavelength (0.15 μ m to ∼8 μ m) observations of over 1.7 million sources in the 2 deg^2 COSMOS field <cit.> from several ground-based (i.e. Subaru/HSC and VISTA) and space-based (i.e. HST and Spitzer) observatories. The catalog includes stellar population parameters obtained from broadband photometry SED fitting with the codes LePhare <cit.> and Eazy <cit.>. Here we use the redshifts and stellar population parameters from LePhare as those are preferred by the authors <cit.>. Eazy provides the corresponding rest-frame colors U-V and V-J. Mixing stellar masses from LePhare with rest-frame colors from Eazy does not bias our results: the Eazy stellar mass estimates produce the same conclusions but <cit.> prefer LePhare because of its lower redshift bias fraction. Furthermore, the LePhare stellar-mass estimates are closer to those used by <cit.> that we use as a comparison sample in this work (Appendix <ref>). §.§ JWST observations A region of ≈0.54 deg^2 roughly in the middle of the ≈ 2 deg^2 COSMOS field (Fig. <ref>) is covered by JWST/NIRCam imaging from the COSMOS-WEB program <cit.>. COSMOS-WEB provides imaging with F115W and F150W in the Short Wavelength Channel and with F277W and F444W in the Long Wavelength Channel. The 5σ depth reached is between 27.61 and 28.18 ABmag for the filter F444W depending on the number of exposures <cit.>. Currently, slightly more than half of the ≈0.54 deg^2 area has been observed, and will be used in this work. The CANDELS-COSMOS field <cit.>, consisting of a deeper but smaller field within COSMOS, was observed by JWST as part of the PRIMER program <cit.> with ten different JWST/NIRCam filters and a F444W depth up to 28.17 ABmag <cit.> for the regions with the highest exposures. Data from these two programs were processed with grizli <cit.> to create mosaics and weight maps with a pixel scale of 0.05/pix. The mosaics are publicly available on The DAWN JWST Archive (DJA) <cit.>. For this work we retrieve cutouts for each galaxy from the Interactive Map Interface <cit.> available on the DJA. Focusing on the rest-frame near-IR, in this paper we only use the Long Wavelength Channel filters F277W and F444W. As described below noise properties and PSF are tested and well-understood for this dataset. Figure <ref> shows JWST/NIRCam imaging for 16 galaxies in the sample selected to be representative of the median characteristics of galaxies across the full mass and redshift range (see Sec. <ref> for further discussion). JWST/NIRCam Short Wavelength channel filters F115W and F150W are used only for visualization and are not used for quantitive analysis in this work. §.§ Galaxy selection Since this paper aims to present near-IR sizes at rest-frame 1.5μm the wavelength coverage of the F277W and F444W filters impose a redshift range of z=0.5-2.5 to avoid the effects of extrapolation. The rest-frame wavelength corresponding to the pivot wavelength of F277W is 1.85μm at z=0.5 and of F444W 1.26μm for z=2.5. Among the over 1.7 million sources in the COSMOS2020-CLASSIC catalog, we select those 75 765 galaxies (ACS_MU_CLASS=1; this flag rejects stellar-like objects) in this redshift range that have a positive HST/F814W flux and that fall within the available JWST/NIRCam F277W and F444W imaging footprints (Fig. <ref>). In Figure <ref> we show the mass-redshift distribution of these galaxies. The adopted stellar mass limit for our sample (M_⋆ = 10^9 M_⊙) leaves us with 32 002 galaxies. 
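As a quick check of this bracketing, the snippet below (a sketch) evaluates the rest-frame wavelength probed by each filter at the redshift limits; the pivot wavelengths are approximate values we assume for NIRCam F277W and F444W, not numbers quoted in the paper.

```python
# Approximate NIRCam pivot wavelengths in micron (assumed values).
PIVOT = {"F277W": 2.776, "F444W": 4.404}

def rest_frame(lam_obs_um, z):
    """Rest-frame wavelength probed by an observed-frame wavelength at redshift z."""
    return lam_obs_um / (1.0 + z)

# At the adopted redshift limits the two filters bracket rest-frame 1.5 micron:
print(round(rest_frame(PIVOT["F277W"], 0.5), 2))   # ~1.85 micron at z = 0.5
print(round(rest_frame(PIVOT["F444W"], 2.5), 2))   # ~1.26 micron at z = 2.5
```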
We remove from our sample 4 912 objects (∼15%) due to differences between the identification, deblending and segmentation between the source catalog from <cit.> and our work based on the much higher resolution NIRCam imaging (see <ref>). This rejection does not bias our sample since these targets are evenly distributed across the full mass and redshift range. Moreover, mismatches are often related to nearby bright sources and/or severe blending that prevent robust profile fits in the first place. NIRCam-selected catalogs will eventually produce a clear improvement in object selection and characterization but this is beyond the scope of this work. Size estimates may be biased due to the contributions from central point sources from Active Galactic Nuclei (AGN). Star-like objects were already removed from the sample, so that the most obvious AGN are automatically omitted. But AGN contributions in the mid-IR still have to be addressed since <cit.> showed that an AGN fraction >50% induce a radius decrease up to 50%. We match our sample to the publicly available catalog presented in <cit.>, who fit MAGPHYS on the COSMOS2015 <cit.> photometry for 36 000 sources in the COSMOS field. We find, and remove from our sample, 45 galaxies with the median of the posterior distribution of the AGN fraction larger than 50%. Since these 45 galaxies show no obvious bias in the size distribution we conclude that more moderate AGN contributions will not affect our results either. §.§ Separating Star-Forming and Quiescent Galaxies The COSMOS2020 SED fits from <cit.> do not include mid- or far-IR photometric data, consequently, the resulting SFRs are not robust enough to be efficiently used to divide galaxies in star-forming and quiescent. Instead, we separate star-forming and quiescent galaxies based on their rest-frame U-V and V-J colors. <cit.> defined selection criteria to separate the quiescent and the star-forming population from the UVJ diagram. <cit.> adapted the exact constraints to better match their rest-frame photometry. But we find that those cuts do not work well for the current rest-frame photometry. This difference originates from two main factors: the COSMOS2020 photometry is different from the one used in <cit.> despite the similar sample (different methodology for photometry retrieval, several new filters, updated zeropoints); the templates used for SED fitting in COSMOS2020 are different from those used by <cit.> leading to different rest-frame colors. These two effects combined necessitate new color cuts to maximize the separation between the two galaxy types. Note that drawing these separation lines has always been a heuristic process. In Figure <ref> we present the UVJ diagram and the new color cuts adopted to optimize the separation between the two types. The quiescent region is finally identified as: z<1.5: U-V > 1.3; U-V>0.48+0.88(V-J) z≥1.5: U-V > 1; U-V>0.35+0.80(V-J) The original <cit.> color classification only leads substantially different sample separation at z>2, due to the many quiescent galaxies with relatively blue colors. § LIGHT PROFILE FITS In this section we present the steps light fitting procedure with GalfitM <cit.>, a further developed version of Galfit <cit.>. §.§ Cutouts For each galaxy we create F277W and F444W cutouts such that the number of pixels not associated with a source in the F444W segmentation map <cit.> is at least 100 times the number of pixels that are associated with object segments. In addition, we impose a minimum size of 8 and a maximum size of 20. 
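The cutout-sizing rule above can be illustrated with a small sketch that grows a square cutout until background pixels outnumber segmented source pixels by the quoted factor of 100, clipped to the stated minimum and maximum sizes (which we assume are in arcseconds); the function name, the 1-arcsecond growth step and the toy segmentation map are ours.

```python
import numpy as np

PIX_SCALE = 0.05                 # arcsec / pixel (mosaic pixel scale quoted above)
MIN_SIDE, MAX_SIDE = 8.0, 20.0   # arcsec (assumed units of the quoted limits)

def cutout_side_arcsec(segmap, x0, y0, ratio=100.0):
    """Grow a square cutout around (x0, y0) until pixels not assigned to any
    segment outnumber segmented pixels by `ratio`, within [MIN_SIDE, MAX_SIDE]."""
    side = MIN_SIDE
    while side < MAX_SIDE:
        half = int(round(side / PIX_SCALE / 2))
        box = segmap[max(0, y0 - half): y0 + half, max(0, x0 - half): x0 + half]
        n_src = np.count_nonzero(box)    # pixels belonging to any segment
        n_bkg = box.size - n_src         # pixels not assigned to a source
        if n_src > 0 and n_bkg >= ratio * n_src:
            return side
        side += 1.0                      # grow in 1 arcsec steps (our choice)
    return MAX_SIDE

# Toy example: a single 20x20-pixel "source" in an otherwise empty map.
seg = np.zeros((1000, 1000), dtype=int)
seg[490:510, 490:510] = 1
print(cutout_side_arcsec(seg, 500, 500))
```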
A posteriori we verify that 99%(50%) of the cutouts are at least 12(37) times larger than R_e, which guarantees that uncertainties in background estimates propagate into uncertainties in R_e by less than 5%(<1%) <cit.>. The 1% of galaxies with cutouts smaller than 12 times R_e are large (R_e>5 kpc) low-mass galaxies (M_⋆<10^10 M_⊙) mostly at z<1.5. A systematic 5% variation in the size of these targets have a negligible effect (<0.5%) on the median stellar mass-size relation presented in <ref>. For each cutout we use the available weight (inverse variance) image to compute an error map. Masks are created based on the segmentation map, including those sources that are more than a magnitude fainter than the target as also done in <cit.>. Sources brighter than this threshold are included in the GalfitM profile fit as separate objects, along with the target itself and fitted simultaneously. SEP provides us with an initial estimate for GalfitM of the semi-major and minor axis of the galaxies as well as their center coordinates. We find 144 galaxies to have at least 1 pixel with flux zero within a square of side 7 pixels from the center recovered with SEP. These pixels can either be bad pixels masked by the pipeline used to create the mosaics or gaps in the footprint. Since the effect on the fit produced by a masked pixel in the inner part of the galaxy can be large, we decided to remove these 144 sources from our sample. §.§ The Point Spread Function GalfitM generates Sérsic profiles that are convolved with the Point-Spread-Function (PSF) of the filter investigated and subsequently fitted to the image. As several works pointed out <cit.>, we also find the PSF in the mosaics to be broader than the theoretical PSFs from WebbPSF <cit.>, which is primarily the result of the data reduction process (shifting, weighting and stacking). To minimize the systematic effects that a wrongly calibrated PSF can induce, we create empirical PSFs. We visually select 50 stars across the mosaic that are not saturated and that overlap flux density distribution with the sample galaxies as the PSF might be flux dependent. Sub-pixel precisions of the centers of the stars are determined by SEP run on 8 cutouts. The flux is normalized in each cutout to the flux measured within a 0.3 circle around the SEP center. After masking any other source in the cutout, we compute the background as the median flux per pixel in an annulus with inner radius 2.5 and outer radius 3. This median background is subtracted from all the pixels. Each cutout is then upsampled by a factor 10 and cropped to a square of 6 centered with sub-pixel resolution. We then produce a median stack (on a pixel-by-pixel basis) of the 50 upsampled, background-subtracted cutouts. Our final empirical PSF is then produced by a normalization such that the total flux in the 6 square matches that of the theoretical PSF (generated via WebbPSF) in the same area. Exposures from COSMOS-WEB and PRIMER-COSMOS have the same pointing position angle (differences are <1 degree), so that the stacked stars are well-aligned. In Figure <ref> we compare our empirical PSF with the theoretical WebbPSF PSF. The large differences in enclosed light (up to ≈ 20% at 0.07 for F277W) imply that the WebbPSF PSF cannot be used in combination with these mosaics to infer accurate structural properties of galaxies. 
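A schematic sketch of the empirical-PSF stacking described above is given below; it is not the authors' pipeline: sub-pixel recentring is approximated by nearest-pixel indexing, aperture and annulus sums are simple pixel masks, and the final renormalization to the WebbPSF total flux in the 6 arcsec box is only indicated in a comment. All function names are ours.

```python
import numpy as np
from scipy.ndimage import zoom

PIX = 0.05  # arcsec / pixel

def radial_mask(shape, cx, cy, r_in, r_out=None):
    """Boolean mask of pixels with r_in <= r < r_out (arcsec) from (cx, cy)."""
    yy, xx = np.indices(shape)
    r = np.hypot(xx - cx, yy - cy) * PIX
    return (r >= r_in) & (r < (r_out if r_out is not None else np.inf))

def empirical_psf(star_cutouts, star_centers):
    """Median-stack star cutouts into an empirical PSF (schematic)."""
    stack = []
    for img, (cx, cy) in zip(star_cutouts, star_centers):
        img = img.copy()
        # Normalize to the flux inside a 0.3" radius around the center.
        aperture = ~radial_mask(img.shape, cx, cy, 0.3)        # r < 0.3"
        img /= img[aperture].sum()
        # Subtract the median background measured in a 2.5"-3.0" annulus.
        annulus = radial_mask(img.shape, cx, cy, 2.5, 3.0)
        img -= np.median(img[annulus])
        # Upsample by a factor of 10 and crop a 6"-wide box on the center.
        up = zoom(img, 10, order=1)
        half = int(round(3.0 / PIX * 10))
        ucx, ucy = int(round(cx * 10)), int(round(cy * 10))
        stack.append(up[ucy - half:ucy + half, ucx - half:ucx + half])
    psf = np.median(np.stack(stack), axis=0)
    # The paper instead renormalizes so that the total flux in the 6" box
    # matches that of the WebbPSF model; here we simply normalize to unity.
    return psf / psf.sum()
```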
As referred to before, these differences can be attributed to the weighting and stacking procedure of multiple exposures in the data reduction pipeline and do not in any way reflect an error in the WebbPSF models. The variation in stellar light profiles reflects, for one, spatial variation in the PSF. The median and 16-84 percentile ranges in the radius encircling half of the stellar light profile are 0.105± 0.016(0.122 ± 0.022). We interpret the scatter among the stellar profiles as the uncertainty in the PSF width (15% and 18% in F277W F444W). These variations are comparable to the 20% spatial variation across the detector itself found by <cit.>. The 0.016 and 0.022 are propagated into the galaxy size uncertainties, by adding in quadrature to the formal fitting uncertainties. The PSF uncertainty dominates the random uncertainty for small objects ≲ 1 kpc, but does not affect the median trends that are the main result of this paper. §.§ Profile Fits with GalfitM The convolution box used by GalfitM is set to be 200 pixels per side. Despite slowing down the computation, the large convolution box was adopted to encompass even the largest sources. We can then use the same convolution box for all the targets avoiding the introduction of systematics due to the use of different convolution boxes for different galaxies. Sérsic profile fits are independently fitted to the F277W and F444W images, allowing the Sérsic index to span the range 0.2 - 10 with an initial guess of 2.3. The x and y coordinates of the galaxies' centers are limited to lie within a distance of 5 pixels from the SEP-based centers. The effective radius is limited to values between 0.01 and 200 pixels with the initial value defined as the semi-major axis of the SEP ellipse. Axis ratio and position angle are left as free parameters with no constraints. The mosaics are background-subtracted, but to allow for remaining background patterns across the mosaic we adopt a constant background with GalfitM as a free parameter. We find that the fits reach the numerical convergence for both JWST/NIRCam filters for 26 003 galaxies (96% of the available sample). We find, and remove, one galaxy with a retrieved Sérsic index within 10^-3 from the boundaries of the parameter space suggesting the sampler got stuck there. For 514 sources GalfitM produces a fit with χ_reduced^2 >3. A visual inspection of these fits reveals that for the vast majority (506) the χ_reduced^2 is driven by those galaxies fitted simultaneously with the target, and not by the target itself. We refit these sources, masking the nearby galaxies and we find that they converge properly, with much lower χ_reduced^2 for 497 out of 506. The remainder are omitted from the final sample. Visual inspection by MM of all 26 003 fitted galaxies shows 36 mergers and 2 lensing systems that are not properly fitted and, therefore, discarded (6 mergers and the 2 lensing systems were among those fits with χ_reduced^2>3). We further reject 5 galaxies that were only partially covered by the NIRCam mosaics, leaving the fitting results unreliable. Our final sample with reliable Sérsic profiles includes 25 960 galaxies of which 3 082 are quiescent and 22 878 are star-forming. In Figure <ref> we show the GalfitM fitting results for three randomly selected galaxies. Although low redshift high-mass galaxies are bright and extended, our pipeline uses sufficiently large cutouts to enable accurate background estimates. 
At the same time, for low-mass high redshift galaxies we see that the SNR is sufficiently high for a precise size estimate (larger than 30 (50) for 84% (50%) of galaxies with M_⋆<9.5 M_⊙ and redshift z>2). The uncertainties retrieved for Sérsic index (n) and effective radius are systematically underestimated by GalfitM <cit.>. Following the prescriptions in <cit.> and <cit.>, we reassess the uncertainties: we adopt a baseline uncertainty of 0.1 dex on the Sérsic index for galaxies with SNR = 50 and on R_e for galaxies with SNR = 20, and compute the final uncertainties as ∝ 1/√(SNR). This methodology is based on an analysis of HST/WFC3 data <cit.>. The SNR of JWST/NIRCam is much higher, producing, at times, very small uncertainties. In fact, for ∼32%(5%) of the galaxies the GalfitM uncertainty on R_e(n) is larger than the SNR-based value, and we retain the former. The rest-frame 1.5μm R_e and n are calculated by fitting a straight line to the values of R_e and n in F277W and F444W, using the respective pivot wavelengths. This fit is repeated 100 times, sampling from the F277W and F444W R_e and n uncertainties, adopting the median values of the R_e and n polynomials at 1.5μm as the estimates and half of the 16th-84th percentile range as the 1σ uncertainty. § RESULTS §.§ Size-Mass Distribution In Figure <ref> we present the rest-frame near-IR size-stellar mass distribution and its evolution from z=0.5 to 2.5. Stellar masses from the COSMOS2020 catalog are adjusted to account for the difference between the total amount K_s-band light that entered the stellar mass estimate and the total amount of F277W light in our fitted profiles; see <ref> for details. R_e,1.5μm does not strongly correlate with stellar mass: at z=1-1.5 both M_⋆=10^11 M_⊙ and M_⋆=10^9 M_⊙ galaxies have typical values of R_e,1.5μm≈ 2 kpc, despite two orders of magnitude differences in stellar mass. This relative flatness of the size-mass relation is seen across all 6 Gyr of cosmic time spanned by our sample (black lines in Fig. <ref>). At the same time, the variety in sizes is large. The full range spans at least 1.5 orders of magnitude (at fixed stellar mass) and the 1σ scatter is 0.3-0.4 dex, slightly depending on mass and redshift. As it is well-known, much of this scatter correlates with star-formation activity. After separating into star-forming and quiescent sub-samples the familiar patterns appear. Quiescent galaxies are consistently smaller than their star-forming counterparts at all redshifts <cit.>. The maximum difference in size is seen at M_⋆≈10^10-10.5 M_⊙, reaching a factor ≈ 3 at z>1. The size-mass relation for the quiescent population is steeply increasing for M_⋆>10^10 M_⊙, and flat at lower masses. The size-mass relation for star-forming galaxies is flatter, but also monotonically increasing. The flatness in the size-mass relation for the full population is due to the joint effect of differences in the mass functions and size distributions of quiescent and star-forming galaxies. At all redshifts and stellar masses we see that R_e,1.5μm is, on average, smaller than R_e,0.5μm the rest-frame optical (0.5 μm) size measurements from <cit.>. The comparison with <cit.> is made after correcting for a slight difference between the adopted cosmologies and, more importantly, differences in the stellar mass estimates, which are substantial and redshift-dependent. These corrections are outlined in the Appendix <ref>. 
The dashed lines in Figure <ref> represent the median values of the reconstructed <cit.> size-mass distribution. The offset between the optical and near-IR R_e is generally -0.10 to -0.15 dex, mostly independent of redshift, galaxy type and mass: galaxies are 25-40% smaller at 1.5μm than at 0.5μm. The one discernible deviation from this baseline offset is that high-mass (M_⋆>10^10.5 M_⊙) star-forming galaxies show larger offsets, especially at the highest redshift, reaching up to a factor 2. <cit.> were the first to directly compare rest-frame optical and near-IR sizes of galaxies in this redshift range, using F150W and F444W imaging from CEERS <cit.>. This corresponds to rest-frame 0.5μm and 1.5μm at z≈ 2. They found a median difference of ≈ -0.10 dex, and a negative trend with mass, in fair agreement with our findings. One notable difference, at first sight, is the lack of an offset in their F150W to F444W size comparison for low-mass star-forming galaxies (9<log(M_⋆/M_⊙)<10). Upon closer inspection the difference is minimal: their offset of -0.03 dex becomes -0.05 dex when comparing their F444W sizes to R_e,0.5μ m from <cit.>, and a direct comparison of overlapping galaxies in our sample also produces R_e,F444W/R_e,0.5μ m=-0.05 dex when using WebbPSF and -0.08 dex when using our default stacked PSF. It is safe to conclude that these lower-mass star-forming galaxies show some offset in R_e,1.5μ m/R_e,0.5μ m, but whether this is closer to -0.05 dex or -0.10 dex remains to be confirmed. Very recently, <cit.> showed how the sizes of a smaller sample of 151 quiescent galaxies at z>1 depend on wavelength – from rest-frame 0.3μm to 1μm – based on data from JADES <cit.>. They find typical, systematic differences of -0.08 dex when comparing R_e,1μ m and R_e,0.5μm, and little variation in this offset with mass or redshift. These results are consistent with the results presented in this paper considering that we extend to longer wavelengths. <cit.> presented a comparison at redshifts 0.5<z<1.5 between stellar half-mass radii and rest-frame near-IR half-light radii from JWST/CEERS in <cit.>. They found excellent agreement, with differences below 0.03 dex at rest-frame 2μm, suggesting that the offsets presented in this paper largely account for the differences between optical sizes and stellar mass-weighted sizes. §.§ Redshift evolution In Figure <ref> we explicitly show the evolution R_e,1.5μm as a function of redshift and stellar mass. We parameterize the evolution as a function of redshift and Hubble parameter: log_10(R_e ) = α_1+β_1 log_10(1+z) log_10(R_e ) = α_2+β_2 log_10(H(z)/H_0) Fits are repeated 10.000 times, with Gaussian sampling the median sizes from their uncertainty distributions. The median of the coefficients retrieved from the multiple fits and their 16-84 percentiles are reported in Table <ref>. Figure <ref> shows only the redshift parametrization. The sizes of star-forming galaxies with M_⋆ < 10^11 M_⊙ evolve as R_e,1.5μm∝ (1+z)^≈ -0.7, with little dependence on mass. Only higher mass galaxies show substantially faster evolution: R_e,1.5μm∝ (1+z)^≈ -1.2, comparable with the evolution of quiescent galaxies. Quiescent galaxies generally evolve faster in size: R_e,1.5μm∝ (1+z)^≈ -1.3 for M_⋆ > 10^10 M_⊙. Lower-mass quiescent galaxies evolve slower (R_e,1.5μm∝ (1+z)^≈ -0.4). Note that the combined population (SF+Q) evolves in size slower than star-forming alone, which is related to the increasing fraction of quiescent galaxies with cosmic time. 
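The fitting procedure can be sketched as follows, assuming the median sizes and their uncertainties per redshift bin are available as arrays (the numbers below are placeholders, not the paper's measurements); each of the 10 000 iterations perturbs the medians within their Gaussian uncertainties and refits log_10(R_e) = α_1 + β_1 log_10(1+z) by least squares.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative inputs: median sizes (kpc) and 1-sigma errors in four z bins.
z_bins = np.array([0.75, 1.25, 1.75, 2.25])
re_med = np.array([3.1, 2.8, 2.5, 2.2])       # placeholder values
re_err = np.array([0.10, 0.10, 0.12, 0.15])   # placeholder values

x = np.log10(1.0 + z_bins)
alphas, betas = [], []
for _ in range(10_000):
    # Perturb the median sizes within their uncertainties, then fit
    # log10(Re) = alpha + beta * log10(1 + z) by least squares.
    y = np.log10(rng.normal(re_med, re_err))
    beta, alpha = np.polyfit(x, y, 1)
    alphas.append(alpha)
    betas.append(beta)

lo, med, hi = np.percentile(betas, [16, 50, 84])
print(f"beta = {med:.2f} (+{hi - med:.2f} / -{med - lo:.2f})")
```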
We find consistent results when adopting the R_e∝ H(z)^β_2 parametrization with R_e,1.5μm∝ H(z)^-0.6 and R_e,1.5μm∝ H(z)^-1.0 respectively for the star-forming and quiescent population. The offset between β_2 and β_1 reflects the different evolution with cosmic time of the parameters (1+z) and H(z), with the latter evolving slower at late times. In Figure <ref> we compare the pace of size evolution with redshift at 0.5μm and 1.5μm. We use the revised mass estimates for the <cit.> sample as described above and fit the same parameterized functions to their R_e,0.5μm sizes. There is remarkable agreement, implying the absence of strong wavelength dependencies in the pace of size evolution. This reinforces the recent results from JADES for a smaller sample of quiescent galaxies <cit.>. Similarly, we find consistent results for R_e∝ H(z)^β_2 at rest-frame 0.5μm and 1.5μm. That said, the highest-mass star-forming galaxies and quiescent galaxies with stellar masses 10< log(M_⋆/M_⊙)<10.5 tentatively show faster evolution at 1.5 μm than at 0.5 μm. These notable deviations are discussed further in <ref>. § DISCUSSION §.§ Flattening of the size-mass relation As shown in Figure <ref> both quiescent and star-forming galaxies are systematically smaller at rest-frame 1.5μm than at 0.5μm by 0.14±0.03 dex when averaging over all masses M_⋆ > 10^9 M_⊙, and in a manner that is approximately independent of redshift. The offset is also independent of star-formation activity; even though color gradients are thought to arise through different effects – attenuation is important for star-forming galaxies <cit.>, while for quiescent galaxies the difference arises due to metallicity/age gradients in the stellar populations – the end effect is, on average, similar. Overall, the same patterns appear for the size-mass relation in the rest-frame near-IR as seen in the rest-frame optical, with a steep relation for massive quiescent galaxies and a flatter relation for star-forming galaxies. The only relevant and statistically significant correlation is the one between M_⋆ and R_e,1.5μm/R_e,0.5μm for star-forming galaxies (see Fig. <ref>). Low-mass galaxies (M_⋆≈ 10^9 M_⊙) are ∼0.1 dex smaller at 1.5μm, while high-mass galaxies (M_⋆ > 10^11 M_⊙) are ∼0.25 dex smaller: the size-mass relation flattens. In addition, while at z>1.5 this mass dependence is manifest only for M_⋆ > 10^10.5 M_⊙, at z<1 it is seen across the entire mass range. There is a 0.1-0.15 dex baseline offset between R_e,1.5μm and R_e,0.5μm for all galaxies, regardless of redshift, mass and galaxy type, with an additional offset for star-forming galaxies above a redshift-dependent stellar mass limit. These results echo those from <cit.>, who identified a similar pattern for present-day spiral galaxies. We note that when we repeat the entire analysis with WebbPSF PSFs instead of the stacked stars, we find sizes that are systematically 0.03 dex(±0.02 dex) larger. This systematic effect has no significant impact on any of our results, which ensures that our results are robust against systematic errors in the adopted PSFs. The flattening of the size-mass relation at rest-frame 1.5μm for massive star-forming galaxies and the almost systematic shift between optical and near-IR sizes that affects all the galaxies, independently of their classification, are likely caused by a combination of different phenomena. 
The systematic shift between sizes at 0.5μm and 1.5μm implies that rest-frame near-IR profiles are more compact than optical profiles <cit.>, which presumably implies that stellar mass profiles are more centrally concentrated than optical light profiles. The color and M_⋆/L gradients reflect a combination of dust concentration in the central region of the galaxy <cit.> and gradients in stellar populations <cit.>. Since dust content increases with galaxy mass <cit.>, a mass-dependent effect of attenuation on galaxy size estimates is certainly plausible. Furthermore, edge-on galaxies are reddened <cit.>, with bright centers potentially obscured at shorter wavelengths. Visual inspection of such sources confirm that this incidentally occurs, but this effect is not dominant. More massive galaxies also have stronger stellar population gradients. The separation of the various effects must be assessed via simulations <cit.> and by spatially resolved SED modeling at high redshift across the full wavelength range provided by HST and JWST <cit.>. Another fruitful approach would be to analyze color gradients and M_⋆/L gradients at low redshift <cit.> where spatially resolved, high-SNR spectra are available. §.§ Size evolution with redshift The pace of size evolution with redshift at 1.5μm and 0.5μm is very similar (see Figures <ref> and <ref>), with a moderate pace of evolution, ∝ (1+z)^-0.7, for star-forming galaxies and a faster pace, ∝ (1+z)^-1.3, for massive quiescent galaxies (M_⋆ > 10^10 M_⊙). For lower-mass quiescent galaxies the pace of evolution is slower, similar to that for star-forming galaxies. The fact that the choice of wavelength has a negligible effect on the observed pace of size evolution strongly suggests that attenuation and stellar population gradients have no substantial impact on the interpretation of the observed pace of size evolution. In other words, whereas light-weighted sizes underestimate the true, stellar mass-weighted sizes of galaxies, this effect shows no discernible redshift dependence in the range 0.5<z<2.5. We can conclude with good confidence that the observed pace of size evolution accurately reflects the pace of evolution in mass-weighted sizes. The one remaining caveat is that biases that we cannot control for may exist that are independent of wavelength but dependent on redshift. Like the UV and optical, the rest-frame near-IR is not a direct stellar mass tracer; variations in age, metallicity and, in severe cases, attenuation lead to variations in M_⋆/L at all wavelengths. Two deviations from this narrative are worth mentioning. First, the most massive star-forming galaxies evolve in size faster than less massive star-forming galaxies, but only at 1.5μm, not at 0.5μm. This is directly connected to the flattening of the size-mass relation that is most pronounced at higher redshifts (see <ref> for a discussion). Second, quiescent galaxies with stellar mass 10^10<log(M_⋆/M_⊙)<10^10.5 show fast evolution in size at 1.5μm but slower evolution at 0.5μm. The median value is R_e,1.5μm≈ 0.6 kpc at z=2, below the HST resolution limit, but the tentative evidence based on a small sample is that even at JWST resolution this difference remains <cit.>. These very compact galaxies either have somewhat attenuated young centers, in line with stellar population gradients seen in compact post-starburst galaxies z≈ 1 <cit.>, or low-level residual star formation in their outer parts. Larger samples along with detailed color gradient measurements are needed to confirm this result. 
In general, our results match those from <cit.>, who showed very similar behavior in R_e,M⋆/R_e,0.5μm as we now see for R_e,1.5μm/R_e,0.5μm: similar pace of evolution along with a flattening of the size-mass relation for star-forming galaxies. It is of particular interest to note that the early evidence is that the near-IR sizes of quiescent galaxies are very similar to their rest-frame optical sizes, modulo an offset that is mostly independent of redshift. This contradicts the conclusion from <cit.> and <cit.> that stellar mass-weighted sizes for quiescent galaxies evolve slowly (or not at all) at z>1. We refer to the Appendix of <cit.> for further discussion. In conclusion, the wavelength dependence of galaxy sizes and the consequent color gradients do not strongly distort the size evolution seen in the rest-frame optical. The observed pace of size evolution of quiescent galaxies, regardless of wavelength, reflects a combination of a progenitor bias – newly quenched galaxies have different properties than previously quenched galaxies <cit.> – that contributes to the pace of size evolution observed for samples of galaxies <cit.> on top of the evolution of individual galaxies <cit.>. § OUTLOOK In this work we analyze the rest-frame 1.5 μm light profiles of ∼26 000 galaxies in the COSMOS field with NIRCam imaging in the F277W and F444W filters from the COSMOS-WEB and PRIMER surveys. The sample is drawn from the pre-existing COSMOS2020 catalog <cit.> and not NIRCam selected. The main caveat in our work is therefore the mismatch between the source catalog and what is seen in the NIRCam imaging. A more extensive analysis, that starts with NIRCam selection of sources, subsequent multi-wavelength photometry and estimation of the key parameters (redshifts, stellar masses, etc.) and light profile fits, is a logical next step. This will not only allow one to make a more consistent analysis of the size - stellar mass distribution and its evolution, but also extend it to higher redshifts. Despite this, we assert that the conclusions we draw in this work will likely not change once these improvements are made. Furthermore, until now the evolution of the rest-frame near-IR galaxy sizes has been treated as indicative of the evolution of stellar mass-weighted sizes, implicitly assuming that near-IR light profiles correspond with stellar mass profiles. In order to take full advantage of the wide wavelength range of spatially resolved light profiles provided by NIRCam for many thousands of galaxies across most of cosmic time, a rigorous analysis of the UV-to-near-IR color gradients is required. This is less straightforward than it may seem, as our knowledge of near-IR SED is poorly constrained, especially at large lookback time. For example, <cit.> showed that the near-IR colors (J-H and H-K) of z≈ 1 galaxies do not match the template colors of the most widely used stellar population models. The highly accurate and precise NIRCam photometry will produce SEDs with small systematic uncertainties, which will then allow us to assess which stellar population models and underlying isochrones and stellar template libraries match the data well. Only then are we in a good position to interpret the observed color gradients and reconstruct stellar mass profiles with high accuracy. MM acknowledges the financial support of the Flemish Fund for Scientific Research (FWO-Vlaanderen), research project G030319N. The data products presented herein were retrieved from the Dawn JWST Archive (DJA). 
DJA is an initiative of the Cosmic Dawn Center, which is funded by the Danish National Research Foundation under grant No. 140. astropy <cit.>, GalfitM <cit.>, SEP <cit.> § STELLAR MASS CORRECTIONS §.§ Stellar Mass correction The stellar masses from the COSMOS2020 catalog are normalized by the measured total amount of K_s band light, while our R_e estimates are based on Sérsic profile fits that estimate the total amount of light seen with NIRCam. To correct the stellar mass estimates for this difference we add the difference between the F277W GalfitM flux and the K_s total flux to the original stellar mass while taking into account the K_s-F277W color: M_⋆,corr = M_⋆,LePhare + 0.4*(K_s - F277W - ⟨ K_s - F277W⟩ _|J-K_s) Here J, K_s and F277W are the magnitudes of each galaxy respectively in the ULTRA-VISTA J and K_s bands and in the JWST/NIRCam filter F277W, with this latter taken from the GalfitM profile fit as outlined in Section <ref> while the former are retrieved from the COSMOS2020 catalog at an aperture of 2 and corrected to be total magnitudes. ⟨ K_s - F277W⟩_|J-K_s is the median K_s - F277W at fixed J-K_s, which produces the (on average) correct K_s - F277W color correction for galaxies with a given J-K_s color. This color is computed as the median K_s - F277W in 0.25 mag wide bins of J-K_s, linearly interpolating in between the bin centers. The use of ⟨ K_s - F277W⟩_|J-K_s instead of the global median of K_s - F277W is motivated by the correlation between K_s - F277W and J-K_s that is negative for J-K_s<0.4, positive for 0.4<J-K_s<1 and flat for redder colors inducing up to 0.3mag variations in the median K_s - F277W. The resulting median correction is M_⋆,corr - M_⋆,LePhare=0, with a standard deviation of 0.12 dex. We note that these corrections cause slight shifts in the mass-size distibution but do not affect any of our conclusions. §.§ FAST - LePhare matching Stellar masses adopted in <cit.> were retrieved with the package FAST <cit.>. Cross-matching the sample used in this work and in <cit.> which overlaps in the PRIMER-COSMOS field, we find 2 267 matches within 0.4. Comparing LePhare and FAST masses we retrieved a substantial and redshift-dependent difference which we had to correct for a proper comparison of the samples. We performed a simple linear regression fit to the mass difference for the cross-matched sample, as a function of the <cit.> M_⋆ in the four redshift bins separately for quiescent and star-forming galaxies. Fit results are used to shift the M_⋆ estimates of all the galaxies in the <cit.> sample by the appropriate amount and create the size-mass distribution that can be directly compared to the current sample. In Figure <ref> we present the M_⋆,LePhare-M_⋆,FAST difference as a function of M_⋆,FAST together with the results of the linear regression adopted for the correction. To account for relative uncertainties in the two stellar mass estimates, reflected by a residual 1σ scatter of 0.13 dex, we repeat 10 000 times the median size estimate in each mass and redshift bin, each time perturbing the adjusted <cit.> M_⋆ value by a random value drawn from a Gaussian with σ=0.13 dex. aasjournal
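A minimal sketch of the K_s - F277W based stellar-mass correction defined at the start of this appendix is given below; it assumes magnitudes and log stellar masses are provided as NumPy arrays, and the function names and binning details (edge placement, handling of empty bins) are ours.

```python
import numpy as np

def ks_f277w_reference(j, ks, f277w, bin_width=0.25):
    """Median Ks - F277W as a function of J - Ks, linearly interpolated
    between the centers of bin_width-wide bins (as described above)."""
    jk = j - ks
    color = ks - f277w
    edges = np.arange(jk.min(), jk.max() + bin_width, bin_width)
    centers, medians = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (jk >= lo) & (jk < hi)
        if sel.any():
            centers.append(0.5 * (lo + hi))
            medians.append(np.median(color[sel]))
    return lambda jk_new: np.interp(jk_new, centers, medians)

def corrected_logmass(logm_lephare, j, ks, f277w):
    """Apply the total-flux correction to the LePhare stellar mass:
    M_corr = M_LePhare + 0.4 * (Ks - F277W - <Ks - F277W>_{|J-Ks})."""
    ref = ks_f277w_reference(j, ks, f277w)
    return logm_lephare + 0.4 * (ks - f277w - ref(j - ks))
```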
http://arxiv.org/abs/2406.18257v1
20240626110923
The Influence of Experimental Imperfections on Photonic GHZ State Generation
[ "Fabian Wiesner", "Helen M. Chrzanowski", "Gregor Pieplow", "Tim Schröder", "Anna Pappa", "Janik Wolters" ]
quant-ph
[ "quant-ph" ]
The Influence of Experimental Imperfections on Photonic GHZ State Generation
†f.wiesner.1@campus.tu-berlin.de. June 2024
§ ABSTRACT While the advantages of photonic quantum computing, including direct compatibility with communication, are apparent, several imperfections such as loss and distinguishability presently limit actual implementations. These imperfections are unlikely to be completely eliminated, and it is therefore beneficial to investigate which of them are the most dominant and what is achievable in their presence. In this work, we provide an in-depth investigation of the influence of photon loss, multi-photon terms and photon distinguishability on the generation of photonic 3-partite GHZ states via established fusion protocols. We simulate the generation process for SPDC and solid-state-based single-photon sources using realistic parameters and show that different types of imperfections are dominant with respect to the fidelity and generation success probability. Our results indicate which imperfections dominate for the different photon sources and in which parameter regimes we can hope to implement photonic quantum computing in the near future.
§ INTRODUCTION Greenberger-Horne-Zeilinger (GHZ) states <cit.> have gained increasing attention in recent years. Apart from numerous proposed applications in quantum communication and cryptography, including secret sharing <cit.>, measurement-based quantum repeaters <cit.>, key agreement <cit.> and anonymity <cit.>, GHZ states are also a promising resource for quantum computing; using GHZ states one can, in theory, create larger resource states <cit.> that facilitate measurement-based quantum computing (MBQC) <cit.>. Especially for linear optical quantum computing <cit.>, 3-partite GHZ states are a popular input resource to generate larger resource states <cit.> and are already sufficient to demonstrate single qubit rotations in an MBQC-protocol <cit.>.
[Figure <ref> (circuit diagram): Circuit for heralded GHZ state preparation. Six horizontally polarized input photons |H_0⟩,…,|H_5⟩ are combined on polarizing beamsplitters, with rotations of the polarization by 45^∘ (wave plates) and polarization- and number-resolving detectors. The circuit was originally proposed in <cit.> and provides a success probability of 1/32, which is the probability for three single-photon detections. In the ideal case, this indicates that the heralded state is either (|H_0H_3H_5⟩+|V_0V_3V_5⟩)/√(2) or (|H_0H_3H_5⟩-|V_0V_3V_5⟩)/√(2), depending on the polarizations of the three measured photons.]
Photonic GHZ states can be created using probabilistic schemes based on spontaneous parametric down-conversion (SPDC) <cit.>, and in deterministic schemes using single photon emitters, such as quantum dots <cit.> and single atoms <cit.>.
In this work, we focus on a very popular all-optical approach to the generation of GHZ states that is based on the interference and detection of single photons that heralds a 3-partite GHZ state. Despite recent advances in creating GHZ states using circuits like the one in Fig <ref> <cit.>, the preparation of even relatively small (e.g. 3-partite) GHZ states with high fidelity remains an experimental challenge, which entails reliably preparing near-identical single photons, applying suitable interferometers with low loss and implementing photon-number resolved detection – again with low loss. Unfortunately, small imperfections can have a catastrophic influence on the outcome. While it would be desirable to eliminate all imperfections, it could be more effective to focus on the more influential ones first and re-evaluate after technological advancements had a significant impact on the reduction of relevant errors. Such a realistic strategy poses the question of which imperfections influence the quality of the outcome the most – and maybe even more importantly: What can we expect once we are in a low-error regime, and is it worth the required experimental effort, to address specific imperfections? To answer these questions, we simulated the prototypical heralded “GHZ state generation" circuit in Figure <ref> with realistic error models for photon distinguishability, losses and higher-order terms (emission events producing more than one photon), and computed the (post-selected) fidelity and success probability, i.e., the probability of accepting an outcome of the circuit per try. Our results indicate that losses and distinguishability are the two dominant sources of error in a realistic parameter range. This leads to a clear prescription for improving state-of-the-art setups: either one has to significantly reduce both of these imperfections or consider different, more error-resilient circuits. We present our results as follows: First, in the Methods section, we describe the simulation method, present the error models that are used and explain how we quantify the influence of the imperfections. In the next section, we examine the influence of the different errors for parameter regimes that are realistic for SPDC and solid-state single-photon emitters and we additionally investigate how a close-to-optimal choice of error parameters behaves. We conclude our paper with a discussion and open questions. § METHODS §.§ Simulation method Our simulation utilizes the Fock representation of pure states. In this representation, we track the occupation number for each (occupied) combination of modes and, for each combination of occupation numbers, we track the (non-zero) amplitude. The underlying data structure is a map[We use <cit.> in .] such that we can delete and add entries dynamically. This approach allows for implementing any isometry or projective measurement which suffices for many simulation purposes. The main difference of our approach compared to other implementations used for simulations of imperfect setups, such as the Stepper-backend in <cit.> is the re-use of data; to reach better run-time, we use already acquired data several times. On the highest level of the simulation, this re-use means that we do not need to simulate from scratch if we are interested in different loss/double-photon probabilities. 
Instead, we can save the fidelity for every relevant combination of loss and double-photon creation, and exploit the fact that the fidelity of the probabilistic mixture of the states corresponding to these combinations is just the probabilistic mixture of the fidelities. So, if E is the set of all possible combinations of loss and multi-photon preparation, we write: F(ρ,|GHZ_3⟩⟨GHZ_3|) = ⟨GHZ_3|ρ|GHZ_3⟩ = ∑_e∈ E Pr[e] ⟨GHZ_3|ψ_e⟩⟨ψ_e|GHZ_3⟩ = ∑_e∈ E Pr[e] F(|ψ_e⟩⟨ψ_e|,|GHZ_3⟩⟨GHZ_3|), where |ψ_e⟩ is the prepared state if e occurred and Pr[e] is the corresponding probability. The same decomposition can be used for the success probability: p_succ(ρ) = Tr(π_acc ρ) = ∑_e∈ E Pr[e] Tr(π_acc|ψ_e⟩⟨ψ_e|) = ∑_e∈ E Pr[e] p_succ(|ψ_e⟩⟨ψ_e|), where π_acc is the projector onto the accepted measurement outcomes. On an intermediate level, we simulate several overlaps at once. Right after the preparation, the state is a superposition of all combinations of perfectly overlapping photons, i.e. |ψ_init⟩ = ∑_{d∈{0,...,5}^×6, d_i≤ d_j for i<j} α_d |n_0,0,d_0, n_1,0,d_1, n_2,0,d_2, n_3,0,d_3, n_4,0,d_4, n_5,0,d_5⟩, where n_k,s,d is the occupation number in the spatial mode k with polarization s and distinguishability value d, which we obtain by the Gram-Schmidt process. The weights α_d in this superposition depend on the pair-wise overlap of the photons. Instead of simulating every combination from scratch, we use linearity, compute the state for every combination and then combine them using the overlap. On the lowest level of the simulation, we can avoid computing every combination of perfectly overlapping photons from scratch. Instead, we compute only the combination where the photons are perfectly distinguishable and then apply an operation that maps this combination to every other combination. This map on all distinguishability values can be implemented as a sequential composition of simpler maps that just map one distinguishability value to another, e.g. (0,1,2)→(0,0,1) = (0,1,2)→(0,0,2)→(0,0,1), where we use for simplicity only three distinguishability values for illustration. Such a map on a single distinguishability value x that maps to y is defined as ⊕_k=0^5 ⊕_s=0^1 ⊕_d=0^5 |n_k,s,d⟩ → ∏_k=0^5 ∏_s=0^1 β_k,s(x,y) ⊕_k=0^5 ⊕_s=0^1 ⊕_d=0^5 |(1-δ_d,x) n_k,s,d + δ_d,y n_k,s,x⟩, with β_k,s(x,y) = √((n_k,s,x+n_k,s,y)!)/√(n_k,s,x! · n_k,s,y!), and δ_a,b = 1 if a=b and δ_a,b = 0 if a≠ b. The idea of this approach is that any operation that doesn't act on the distinguishability degree of freedom, such as unitaries acting on the spatial and polarization degree of freedom, commutes with the map above, as the latter acts only on the distinguishability degree of freedom when mapping one combination of perfectly distinguishable photons to another. §.§ Error models We consider three types of errors in our modelling – distinguishability, higher-order terms and loss. Although our simulation method allows for a more general usage of these three types of error, we make some simplifications to reduce the dimensionality of the overall parameter space, which allows us to consider only five parameters in the end. The first parameter is the overlap, i.e., the inner product of the wave functions of the photons.
With the overlap, we quantify the (partial) distinguishability[Another popular, albeit equivalent way to quantify the distinguishability is the HOM-dip visibility <cit.>, which is the square of the overlap for pure states.]; photons may be distinguishable due to degrees of freedom such as frequency and preparation time (which we consider constant during the GHZ state preparation). For simplicity, we assume that every pair of photons in the setup has the same overlap. Higher-order terms in the state occur if two photons are emitted into the circuit instead of only one. The corresponding probability coincides with the (heralded) second-order correlation g^(2)(τ) function at τ=0 and is our second parameter denoted as g_2. In the case of two-photon preparation, we replace the usual preparation map |0_a⟩→|1_a⟩ at a mode a by |0_a⟩→|2_a⟩, i.e., we assume both photons are exactly in the same mode and indistinguishable. Theoretically, more than two photons could be emitted, but as the probability for terms of order higher than 2 is usually very small, we omit to simulate these events. The last three parameters are connected to different types of loss that happen with some probability. We denote loss at preparation with p_L.Prep, loss at the beam splitters or the wave-plates with p_L.Ops and loss at detection with p_L.Det. We model loss as a map to a new spatial mode (l), i.e., if we denote with n_k,s,d the occupation number in spatial mode k with polarization s and distinguishability value d, then loss acts as follows on the spatial mode k: [rl] ⊕_d=0^5⊕_s=0^1|n_k,s,d⟩→∑_d'=0^5∑_s'=0^1√(n_k,s',d'/N_k)⊕_d=0^5⊕_s=0^1|n_k,s,d-(δ_s,s'δ_d,d'), (δ_s,s'δ_d,d')_l,s,d⟩, with [rl] N_k = ∑_d=0^5∑_s=0^1n_k,s,d, δ_a,b=1, if a=b 0, if a≠ b. Intuitively, loss maps to a purification of the mixed state that one obtains if every photon has the same probability of getting lost. However, we do not consider the loss of multiple photons at the same step, as the probability of such coinciding loss is rather low. §.§ Methods to evaluate the influence of imperfections One of the primary goals of this work is to evaluate the influence of imperfections on the fidelity and success probability. We use two quantities to evaluate the error parameters' influence on these measures. The first quantity is the relative image range Δ(p,m) of a parameter p for a measure m. Δ(p,m) is the ratio of two differences – the first difference is between the highest and lowest values of m that can be obtained only by varying p and using the default values for the other parameters. The second difference is between the maximum and minimum of m for the whole parameter regime. More formally, we consider a measure m to be a function from N parameter ranges to ℝ. We define the relative image range of the i^th parameter for the measure m as: Δ(p_i,m) = max_c_i∈ P_i(m(p̅_1,...,c_i,...,p̅_N)) - min_c_i∈ P_i(m(p̅_1,...,c_i,...,p̅_N))/max_𝐜∈𝐏(m(𝐜)) - min_𝐜∈𝐂(m(𝐜)), where p̅_j denotes the default value, P_j is the range of the parameter c_j and 𝐏 = P_1× P_2× ... × P_N. The second quantity is the correlation coefficient Corr(p,m) for an error parameter p and a measure m. To estimate the correlation coefficient, we discretise the parameter regime and compute the measure for each point. After that, we compute the covariance Cov(p,m) and the variances Var(p) and Var(m) and use Corr(p,m) = Cov(p,m)/√(Var(p)Var(m)). We assume a uniform distribution on the parameter regime for the covariance and variances. 
Under these conditions, we find: Cov(p_i,m) = ∑_c_1∈ P^D_1...∑_c_i∈ P^D_i...∑_c_N∈ P^D_N(c_i-⟨ p_i⟩)(m(c_1,...,c_i,...,c_N)-⟨ m⟩)/∏_j=1^N|P^D_j| Var(p_i) = ∑_c_i∈ C^D_i(c_i-⟨ p_i⟩)^2/|P^D_i| Var(m) = ∑_c∈P^D(m(c)-⟨ m⟩)^2/∏_j=1^N|P^D_j| where C^D_j is the discretised parameter range for the j^th parameter, P^D = P^D_1× P^D_2× ... × P^D_N, ⟨ p_i⟩=∑_c_i∈ P^D_ic_i/|P^D_i| and ⟨ m⟩=∑_c∈P^Dm(c)/∏_j=1^N|P^D_j|. It is straightforward to compute the relative image range from the measure as it is apparent for which parameter value it takes its maximum and minimum value. However, for the correlation coefficient, there is a trade-off between the complexity of the error model and the resolution of the discretisation. For this reason, we simplify the modeling of the loss and introduce p_L as the parameter for the context combination of the worst and best loss parameter combination, i.e., for every loss type we set p_T = p_L max_c∈ P_T(c) + (1-p_L) min_c∈ P_T(c), where p_T∈{p_L.Prep, p_L.Ops,p_L.Det}, p_L∈[0,1] and P_T is the associated parameter range. With this simplification, the overlap is varied in 0.25% steps and for g_2 and p_L we compute the measure for 201 evenly distributed values in the corresponding ranges. Finally, note that the correlation coefficient can also be negative. Therefore, we will use correlation with 1-p for the loss and higher-order terms, since it holds that Corr(1-p,m) = -Corr(p,m). § RESULTS We investigate the fidelity, success probability, and the influence of the different error parameters on these measures in two realistic and one close-to-optimal parameter regime. The two realistic parameter regimes represent two kinds of single photon sources: SPDC and solid-state sources. Currently, these two source types have different strengths and weaknesses. State-of-the-art SPDC sources can attain excellent two-photon indistinguishability (overlap), but are limited by comparatively large two-photon creation probabilities. In contrast, solid-state sources typically have significantly lower pairwise overlap but benefit from much reduced two-photon creation probability. For the close-to-optimal parameter regime, we assume almost no distinguishability and low loss probabilities. Table <ref> shows all three parameter regimes and the measures we find for these regimes. In the next section, we present the simulation results regarding the fidelity and the success probability for all three parameter regimes, and in Section <ref> we investigate the influence of the different error parameters on their values. ! SPDC <cit.> Solid-state <cit.> Close-to-optimal Overlap (ovl) (%): 97.25 - 99.5 (99.0) 88.25 - 99.25 (94.0) 99 - 100 (99.5) Higher order (g_2) (%): 1.0 - 2.0 (1.5) 0.0075 - 2.1 (0.25) 0 - 2 (1) Preparation loss (p_L.Prep.) (%): 5.0 - 15.0 (10.0) 2.6 - 11.4 (7) 0 - 4 (2) Component loss (p_L.Ops.) (%): 1.0 - 2.0 (1.5) 1.0 - 2.0 (1.5) 0 - 0.5 (0.25) Detection loss (p_L.Det.) (%): 5.0 - 15.0 (10.0) 5.0 - 15.0 (10.0) 0 - 4 (2) GHZ state Fidelity (%) 87.0 - 97.5 (94.8) 58.2 - 96.9 (77.3) 95.4 - 100.0 (97.8) Normalised Success probability (%) 7.5 - 39.5 (17.8) 7.3 - 47.7 (19.5) 47.3 - 100.0 (68.6) tableError parameters and measures in the different parameter regimes. The default (resp. expected) values are given in parentheses; if not specified differently we use the default values for the error parameter. For the close-to-optimal setting, we use the simplified error model. We normalized the success probability with respect to the one in the absence of errors (i.e., 1/32=3.125%). 
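The two evaluation quantities defined above can be computed on a discretised parameter grid with a short sketch like the following; the measure m, the grid resolutions and the toy three-parameter example are ours and purely illustrative.

```python
import numpy as np
from itertools import product

def relative_image_range(m, grids, defaults, i):
    """Delta(p_i, m): variation of m when only parameter i is varied
    (others at default), relative to the variation over the full grid."""
    only_i = [m(*(defaults[:i] + (c,) + defaults[i + 1:])) for c in grids[i]]
    full = [m(*combo) for combo in product(*grids)]
    return (max(only_i) - min(only_i)) / (max(full) - min(full))

def correlation(m, grids, i):
    """Corr(p_i, m) assuming a uniform distribution over the discretised grid."""
    combos = np.array(list(product(*grids)))
    vals = np.array([m(*c) for c in combos])
    p = combos[:, i]
    cov = np.mean((p - p.mean()) * (vals - vals.mean()))
    return cov / np.sqrt(p.var() * vals.var())

# Toy measure with three parameters (overlap, g2, loss) for illustration only.
m = lambda ovl, g2, pl: ovl**2 * (1 - g2) * (1 - pl)
grids = (np.linspace(0.9, 1.0, 11), np.linspace(0.0, 0.02, 5), np.linspace(0.0, 0.2, 5))
defaults = (0.95, 0.01, 0.1)
print(relative_image_range(m, grids, defaults, 0), correlation(m, grids, 0))
```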
§.§ Results of the simulation Our results indicate a high susceptibility of the circuit to experimental imperfections. Even for the best choice of parameters in the realistic regimes, the fidelities are significantly lower than in the close-to-optimal one, reflecting this susceptibility. The highest and lowest fidelity we estimate in the parameter regime we associate with the SPDC sources is 97.5% and 87.0% respectively. The highest value is slightly larger than the best achievable fidelity in the solid-state parameter regime which is 96.9%. However, solid-state sources yield a broader range of fidelities, e.g., the lowest fidelity is 58.2%, while the fidelities with the expected values for the parameters shown in Table <ref>, are 94.8% for the SPDC and 77.3% for the solid-state sources. This difference implies that the fidelity is more susceptible to low overlap than to higher two-photon creation probability, as these two parameters pose the main difference between the two realistic parameter regimes. This implication is also demonstrated in Figure <ref>, where one can see that there is more variation if one only varies the overlap (Figure <ref>a) than with expected overlap and the other parameters being varied (Figures <ref>b,c). Even in the small range between 99% and 100% overlap, the fidelity differs significantly: Figure <ref> shows the fidelity depending on the two-photon creation probability and simplified loss in the close-to-optimal parameter regime for different overlaps. Although the best fidelity in this regime is 100%, this value is only achieved if all parameters are optimal – even with perfect overlap, one needs at least one of the other two parameters (simplified loss or two-photon creation probability) to be close to 0 to reach a fidelity of 99.99%. However, with just 0.1% less overlap, the fidelity is below 99.58% even if the probability for loss and two-photon preparation is 0%. While the range of the fidelity of the SPDC sources includes the upper end of the range of solid-state sources, the situation is quite different when considering the normalized success probability (normalized with respect to the one in the absence of imperfections, which is 1/32). The range of success probabilities for the SPDC parameter regime is contained in the one for the solid-state sources but is now at the lower end. Nevertheless, in both regimes, the success probability is low: The solid-state sources yield between 7.3% and 47.7% with an expected value of 19.5%, which is slightly better than the SPDC sources for which the success probability ranges between 7.5% and 39.5% with an expected value of 17.8%. A reason why the success probability is worse than the fidelity could be the lack of resilience against loss. In Figure <ref>, one can see that the fidelity is still relevant for the success probability (cf. Figure <ref>a), but seemingly not as important as the loss (cf. Figure <ref>b,c). In the close-to-optimal parameter regime, the circuit's success probability is also very susceptible to imperfections. Even with perfect overlap, most of the parameter regime maps to a normalized success probability that lays below 75% (cf. Figure <ref>) – in fact, the high success probabilities above 96% are barely visible in Figure <ref>. This susceptibility is also confirmed by the lowest (47.3%) and expected (68.9%) normalized success probabilities in the parameter regime. Trivially, the highest normalized success probability is 100% again. 
§.§ Influence analysis for the different types of imperfections For both SPDC and solid-state sources, we find that the overlap is currently the crucial parameter influencing the fidelity. The relative image range of the overlap is 83.6% for SPDC and 95.7% for solid-state sources, and also the correlation coefficient of the overlap with the fidelity is large (SPDC: 99.0%; solid-state: 99.8%). In Table <ref> we show the evaluation quantities for the fidelity and in Figure <ref>a) how the fidelity depends on the overlap. Errors due to higher-order terms or loss have little influence on the fidelity in the realistic parameter regimes (cf. Figure <ref>b,c). The fact that the influence of these parameters is significantly lower in the solid-state regime already hints at why loss and g_2 matter less. The parameter regimes differ – apart from the overlap – mostly in the probability for higher-order terms. Post-selection causes the influence of the losses and g_2 on the fidelity to decrease if one of the parameters gets low enough. This effect becomes clearer if one considers g_2=0. In this case, the loss doesn't matter for the fidelity, as we filter out the state in the case of loss. Only with higher-order terms is it possible to accept the state if losses happen. The same argument works the other way around, i.e., in the absence of loss, higher-order terms have no influence. However, the loss probabilities are usually too large to be the limiting factor. Overall, we found that post-selection prevents a significant influence of loss and g_2 on the fidelity if one of them is low enough, which is the case in both parameter regimes. But especially in the solid-state parameter regime, the overlap is the only relevant parameter, as higher-order terms are vanishingly small. When considering the success probability, the overlap is still significant but is not the dominant parameter as it is for the fidelity (cf. Figure <ref>a)). The relative image range of the overlap is now only 3.6% for SPDC sources and 15.7% for solid-state sources – so also for the success probability, the overlap has a larger influence on the solid-state sources than on the SPDC sources. The loss has an even bigger impact on the success probability – the correlation between the simplified loss and the success probability for SPDC sources is almost as large as the correlation between the overlap and the fidelity. Even the individual loss parts are significant – especially, preparation and detection loss have higher relative image ranges than g_2 or the overlap in both parameter regimes, only the component loss yields a lower image range than the overlap for solid-state sources. This difference in influence can also be seen in Figures <ref>b,c, where the contour plots are almost horizontal which indicates a low dependency on the y-axis which shows the g_2 value. Another difference between the fidelity and success probability is how they are affected by the different kinds of loss, as shown Table <ref>. For the fidelity, loss becomes more irrelevant during the process, i.e. loss at preparation is most significant and loss at detection is rather insignificant. This is probably because later loss is detected better by the post-selection as it is less likely to pair with a higher-order term. On the other hand, it makes no difference for the success probability, whether loss has happened in the preparation or detection process – the relative image ranges for both cases are very close. 
Many of the observations for the realistic parameter regimes hold in the close-to-optimal regime as well. The overlap influences the fidelity more (cf. Figure <ref>) and the success probability less (cf. Figure <ref>). The loss is the most relevant parameter for the success probability and g_2 never has the most influence. Further, we again found that the loss becomes more insignificant for the fidelity throughout the process (cf. Table <ref>). However, there are still differences. More specifically, higher-order terms are now more relevant than in the realistic parameter regimes. While in the latter, the correlation with g_2 and its relative image range are always lower than those of the overlap, in the close-to-optimal parameter regime the success probability depends more on g_2 than on the overlap. But also in general, g_2 is more relevant in this parameter regime, e.g. it has the same relative image range as the simplified loss for the fidelity and also the correlation coefficients are similar. §.§ Key findings * The GHZ state fidelity of the outcome depends mainly on the overlap for all three parameter regimes. This is especially true for the solid-state parameter regime and the close-to-optimal one, where the low g_2 value prevents the influence of loss and higher-order terms. * The success probability of the circuit depends mainly on the loss and is more independent of the overlap and the higher-order terms. * For both measures mentioned above, the SPDC sources are more influenced by loss and higher-order terms than the solid-state sources which in turn are more influenced by the overlap. * Only for the success probability in the close-to-optimal parameter regime, g_2 is more relevant than the overlap. * In general, regarding the success probability, the solid-state sources perform better on average than the SPDC sources possibly due to the larger overlap independence of this measure. For the same reason, SPDC sources reach better fidelities. § DISCUSSION The use of multipartite entanglement is prevalent in photonic implementations of quantum communication and computation protocols. In this work, we investigate the influence of experimental imperfections on the generation of a well-known maximally entangled state, namely the GHZ state. We conducted large-scale simulations of a commonly-used circuit for GHZ state generation, using realistic parameters for SPDC and solid-state photonic sources, and we also investigated what happens in a close-to-optimal parameter regime. With the data obtained from our simulations, we analyzed the influence of different error parameters using the correlation coefficient as well as the newly-introduced relative image range. Our results indicate that present-day experimental setups are still too erroneous to implement sufficiently good GHZ states using the considered well-established generation method. To tackle this, we propose some steps towards generating better GHZ states. Improving the pairwise overlap should be the first priority to reach higher fidelities. Then, once we are in a high-overlap regime, we can choose between decreasing loss and higher-order terms, where decreased loss comes with the benefit of a significantly higher success probability. Although the fidelity is amongst the most common measures for the quality of a state, it would be interesting to run our methods for different, more specialized measures. 
While it is widely believed that we need almost perfect GHZ states for implementing protocols using measurement-based quantum computing, for some cryptographic applications like conference key agreement <cit.>, imperfect GHZ states could also be used, combined with classical post-processing. For this purpose, a measure of non-classicality (e.g. contextuality) might be more interesting than the fidelity: one such candidate could be the success probability of the MBQC implementation of the OR-gate <cit.>. In the context of quantum networks, it would also be interesting to combine this work with recent investigations on distributing GHZ states <cit.>. Finally, besides varying the quality measure, one could also vary the method of generating the states and investigate different circuits. Our simulations allow us to understand better how to potentially improve the way we currently generate GHZ states, and thus gain better fidelities. However, as this might not be practically feasible in the near future, our work also aims to stimulate and promote research towards different preparation methods for photonic state generation. § ACKNOWLEDGMENTS This work was supported by the German Federal Ministry of Education and Research (BMBF project QPIC-1, No. 13N15870 and project DiNOQuant, No. 13N14921), by PiQ (a project of the DLR Quantum Computing Initiative, funded by the Federal Ministry for Economic Affairs and Climate Action), by Qompiler (funded by the German Federal Ministry for Economic Affairs and Climate Action), the DFG (via the Emmy Noether grant No. 418294583), the European Research Council (ERC, Starting Grant QUREP, No. 851810), the Hector Fellow Academy and the Einstein Foundation Berlin (Einstein Research Unit on Quantum Devices). § CODE AVAILABILITY The code for the simulation is written in C++. Besides standard libraries, the Boost library <cit.> was used. The code to generate the data used to compute the fidelities and success probabilities is archived at https://zenodo.org/doi/10.5281/zenodo.11518712 (DOI: 10.5281/zenodo.11518713). § REFERENCES
http://arxiv.org/abs/2406.18470v1
20240626162824
UniRec: A Dual Enhancement of Uniformity and Frequency in Sequential Recommendations
[ "Yang Liu", "Yitong Wang", "Chenyue Feng" ]
cs.IR
[ "cs.IR", "cs.LG", "H.3.3; I.2.6" ]
Yang Liu, Yitong Wang (corresponding author, yitongw@fudan.edu.cn), and Chenyue Feng. School of Computer Science, Fudan University, No. 2005 Songhu Road, Shanghai 200438, China. § ABSTRACT Representation learning in sequential recommendation is critical for accurately modeling user interaction patterns and improving recommendation precision. However, existing approaches predominantly emphasize item-to-item transitions, often neglecting the time intervals between interactions, which are closely related to behavior pattern changes. Additionally, broader interaction attributes, such as item frequency, are frequently overlooked. We found that both sequences with more uniform time intervals and items with higher frequency yield better prediction performance. Conversely, non-uniform sequences exacerbate user interest drift, and less-frequent items are difficult to model due to sparse sampling, presenting unique challenges that current methods address inadequately. In this paper, we propose UniRec, a novel bidirectional enhancement sequential recommendation method. UniRec leverages sequence uniformity and item frequency to enhance performance, particularly improving the representation of non-uniform sequences and less-frequent items. These two branches mutually reinforce each other, driving comprehensive performance optimization in complex sequential recommendation scenarios. Additionally, we present a multidimensional time module to further enhance adaptability. To the best of our knowledge, UniRec is the first method to utilize the characteristics of uniformity and frequency for feature augmentation. Comparing UniRec with eleven advanced models across four datasets, we demonstrate that it significantly outperforms state-of-the-art models. The code is available at https://github.com/Linxi000/UniRec. Keywords: Sequential Recommendation, Sequence Uniformity, Item Frequency, Feature Enhancement. § INTRODUCTION Sequential recommendation systems have become increasingly prevalent due to their ability to effectively model user preferences <cit.>. Such systems utilize the sequential order of user interactions over time to predict future interests <cit.>. Incorporating temporal information into these algorithms has proven effective, as it provides significant insights into user behavioral patterns <cit.>. Current approaches primarily focus on modeling explicit timestamps <cit.> or capturing cyclic patterns <cit.>, but they often overlook time intervals, which reveal user characteristics and convey critical information within user interaction sequences. Yizhou Dang et al. propose that variations in the time intervals between sequential interactions can serve as indicators of shifts in user preferences <cit.>. Building on this premise, they design data augmentation operators to improve the uniformity of sequences. However, this direction remains underexplored despite its potential significance, as sequence uniformity is a common phenomenon across various datasets. Additionally, the effectiveness of a model in capturing item characteristics is influenced by the frequency of these items. While considerable research has focused on enhancing the recommendation performance for long-tail items <cit.>, the utilization of item frequency to enhance model performance remains an area requiring further exploration. Figure <ref> illustrates segments of uniform and non-uniform interaction sequences from different users, encompassing items of both high and low frequency.
The "Ranking of Uniformity" sorts interaction sequences by the variance of their time intervals in ascending order, with lower percentages indicating greater uniformity. For example, U1 with a ranking of 19.1% is more uniform than 80.9% of the sequences. "Item Popularity" is defined as the proportion of an item's occurrences relative to the number of all interactions, thus quantifying the frequency of item appearances within the dataset. This figure illustrates that time intervals within uniform sequences are typically shorter and more stable, indicating steadier user interests. In contrast, non-uniform sequences exhibit more variable time intervals, reflecting more frequent changes in user interests. Furthermore, the intensity of the color within the circles signifies the model’s effectiveness in learning the representations of the corresponding users or items, with darker colors indicating higher effectiveness. We first analyze the performance of sequences with different intervals and item frequencies in section <ref> and validate that sequences with higher uniformity and items with greater frequency tend to exhibit better performance. Following this, we implement a dual enhancement approach UniRec in section <ref>. For sequences, we generate non-uniform subsets from uniform sequences by incorporating less-frequent items to simulate fluctuating user interests, thereby enhancing the modeling of non-uniform sequence representations later. For items, we train a neighbor aggregation mechanism on frequent items and extend it to less-frequent items using curriculum learning to improve their representations and transfer this knowledge to sequence modeling. This dual-branch approach is simple and effective, providing a new perspective for feature enhancement in sequential recommendation. Additionally, we integrate the temporal characteristics of both uniform and non-uniform sequences to conduct multidimensional temporal modeling. In summary, the contributions of this paper are as follows: * We propose a novel dual enhancement architecture that leverages sequence uniformity and item frequency. This architecture comprises two independent yet mutually reinforced branches, collectively driving comprehensive performance optimization. * We improve the model's ability to handle non-uniform sequences and less-frequent items and provide a new perspective for feature enhancement in sequential recommendation. * We conduct extensive experiments on 4 real-world datasets, demonstrating significant improvements over 11 competing models, including 6 cutting-edge models that incorporate temporal modeling in their sequential recommendation systems. § PRELIMINARY STUDY In subsection <ref>, we demonstrate that uniform sequences and frequent items consistently perform better across various datasets. In subsection <ref>, we further validate this by demonstrating that, regardless of the partitioning thresholds, uniformity and frequency consistently lead to better performance. §.§ Symbol Description We distinguish the uniformity and non-uniformity of sequences by adopting the classification method proposed by TiCoSeRec <cit.>, which evaluates and ranks all sequences by calculating the variance of time intervals. Sequences with smaller variances are considered more uniform. Based on this, sequences are divided into two subsets: 𝕊_u and 𝕊_n. The former includes sequences with consistent time intervals, while the latter contains sequences with significant fluctuations in intervals. 
Similarly, we rank each item based on the frequency of its occurrence across all user interactions. Define 𝕀_f as the set of frequently occurring items and 𝕀_l as the set of less-frequently occurring items. §.§ Generality Analysis §.§.§ Task In this experiment, we aim to investigate the comparative recommendation performance on uniform versus non-uniform sequences as well as frequent versus less-frequent items, within the context of different datasets. To achieve balance and fairness, we ensured that subsets 𝕊_u and 𝕊_n, as well as 𝕀_f and 𝕀_l, were balanced by equating the interaction numbers as much as possible. Following this division criterion, we assigned "uniformity" and "frequency" labels to each interaction sequence and item, recording the overall evaluation results of the model and the experimental outcomes for data with different labels. §.§.§ Experimental Configuration TiCoSeRec <cit.> has already demonstrated on several Amazon datasets and Yelp that uniform sequences significantly outperform non-uniform sequences. Here, we extend these findings to both frequent and less-frequent items by testing on two additional datasets, MovieLens 1M (ML-1M) <cit.> and Gowalla <cit.>. The ML-1M dataset, a publicly available movie ratings database, comprises 999,611 ratings from 6,040 users on 3,416 movies, with a sparsity of 95.16%. The Gowalla dataset, representing check-in data from a location-based social network, contains 6,442,892 check-ins at 1,280,970 unique locations by 107,093 users, with a sparsity of 99.99%. We utilized three classical sequential recommendation baselines—SASRec <cit.>, BERT4Rec <cit.>, and LightSANs <cit.> for our analysis. The evaluation metrics include Normalized Discounted Cumulative Gain (NDCG), Hit Rate (HR), and Mean Reciprocal Rank (MRR) at top 20. The evaluation strategy employed is full ranking, which involves evaluating the model on the entire set of items. §.§.§ Results Analysis Table <ref> shows the performance of various baselines across two datasets, comparing uniform and non-uniform sequences, as well as frequent and less-frequent items. In the table, "all" represents results tested on the entire dataset, while 𝕊_u and 𝕊_n, along with 𝕀_f and 𝕀_l, represent results tested on these specific subsets. The experimental results show that performance on subsets 𝕊_u and 𝕀_f is the best, also "all" exceed those on 𝕊_n and 𝕀_l. For the Gowalla dataset, the Bert4Rec model shows up to a 146.33% improvement in NDCG@20 when predicting 𝕀_f instead of 𝕀_l. Similarly, LightSANs improves by up to 94.27% in NDCG@20 for the ML-1M dataset when transitioning from 𝕊_n to 𝕊_u. This phenomenon, where performance on 𝕀_f substantially exceeds that on 𝕀_l, corroborates the hypothesis that frequent items, benefiting from a larger volume of interaction data, are more predictable. Additionally, models generally exhibit superior performance on 𝕊_u compared to 𝕊_n, suggesting that models more effectively learn from stable user preferences present in uniform sequences. §.§ Invariance Analysis We further explore the impact of different partitioning ratios on model performance using the ML-1M dataset. Specifically, we analyze the effects of varying the ratios for both 𝕊_u and 𝕊_n and 𝕀_f and 𝕀_l using three classical baseline models. Figure <ref>a displays the experimental results on 𝕊_u and 𝕊_n. In this figure, the "-1" suffix attached to each model indicates the performance on the 𝕊_u, whereas the "-0" suffix indicates the performance on the 𝕊_n. 
Figure <ref>b presents the results on 𝕀_f and 𝕀_l, where "-1" and "-0" similarly denote the performance on 𝕀_f and 𝕀_l, respectively. The performance trends on MRR@20 and HR@20 are very similar to those observed with NDCG@20. The results indicate a noticeable decline in the performance of sequential recommendation models as the partitioning thresholds shift from uniform to non-uniform sequences and from frequent to less-frequent items. This trend highlights the models' sensitivity to the variability in user behavior patterns and item frequencies. § METHODOLOGY This section provides a detailed exposition of UniRec. First, we address the dual enhancement architecture, which comprises the sequences branch (subsection <ref>) and the items branch (subsection <ref>). Subsequently, a Multidimensional Time mixture attention module (subsection <ref>) is designed to accommodate different uniformity sequences. Lastly, subsection <ref> describes the inference process of the model. Figure <ref> illustrates the overall architecture of the UniRec framework. §.§ Problem Formulation Let 𝒰 denote the set of all users and ℐ represent the set of all items. For each user u ∈ 𝒰, we formulate the interactions in chronological order, expressed as 𝒮_u^s-type = (i_1^i-type, …, i_t^i-type, …, i_N^i-type). Here, i_t^i-type∈ℐ specifies the item with which the user interacted at timestamp t. The term "s-type" distinguishes a sequence as uniform or non-uniform, denoted as 𝒮_u^U and 𝒮_u^N; "i-type" identifies an item as frequent or less-frequent as i_t^F and i_t^L, respectively. N signifies the sequence length, which is fixed. For sequences shorter than N, we employ the padding operation to fill the missing parts and for those longer than N we truncate the excess part. Define M_I ∈ℝ^ℐ× d as a learnable matrix of all items' embedding, d is a positive integer denoting the latent dimension. By performing a lookup table operation on M_I, we can retrieve every single item embedding m_i ∈ℝ^d, to form the user embedding h_u =[m_1, …, m_t, …, m_N] ∈ℝ^N × d. §.§ Sequence Enhancement Sequences with smaller variances are considered more uniform and sequences are divided into two subsets: 𝕊_u and 𝕊_n. Each sequence is classified based on a predefined time variance threshold into either 𝒮_u^U or 𝒮_u^N, where 𝒮_u^U∈𝕊_u and 𝒮_u^N∈𝕊_n. Similarly, item is categorized based on their frequency of occurrence in interactions into i_t^F or i_t^L, where i_t^F∈𝕀_f and i_t^L∈𝕀_l. For each uniform sequence 𝒮_u^U, we generate a corresponding non-uniform sub-sequence 𝒮'_u to emulate the irregular patterns observed in real-world datasets, thereby enhancing the capability to model complex user behaviors. The generation process retains all items from 𝕀_l within 𝒮_u^U, and if the count of i_t^L∈𝒮_u^U is fewer than M, additional i_t^F are randomly sampled from 𝒮_u^U, where M is the hyper-parameter of the minimum length of 𝒮'_u: 𝒮'_u = if count(𝒮_u^U, 𝕀_l) < M: {i_t^L: i_t^L∈𝒮_u^U}∪{Sampled i_t^F∈𝒮_u^U} otherwise: {i_t^L: i_t^L∈𝒮_u^U} the variance of time intervals increases from the sequence 𝒮_u^U to 𝒮'_u, and there is a substantial rise in the relative composition of i_t^L within 𝒮'_u. We utilize 𝒮_u^U to enhance the model's learning capability with respect to 𝒮'_u. First, we generate the initial embeddings for 𝒮_u^U and 𝒮'_u, denoted as h_u^U ∈ℝ^N × d and h'_u ∈ℝ^N × d respectively. 
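The construction of 𝒮'_u can be summarised by a short sketch. The following Python snippet is an illustrative reading of the rule above (it is not taken from the released implementation): all less-frequent items are retained and, if fewer than M of them occur, additional frequent items are sampled from the same sequence; preserving the chronological order of the selected items in the output is our assumption.

```python
import random

def build_non_uniform_subsequence(seq, less_frequent_items, min_len, rng=random.Random(0)):
    """Construct S'_u from a uniform sequence S_u^U.

    seq:                 list of item ids in chronological order (S_u^U)
    less_frequent_items: set of item ids belonging to the less-frequent set I_l
    min_len:             hyper-parameter M, the minimum length of S'_u
    """
    keep_idx = [t for t, item in enumerate(seq) if item in less_frequent_items]
    if len(keep_idx) < min_len:
        kept = set(keep_idx)
        freq_idx = [t for t in range(len(seq)) if t not in kept]
        n_extra = min(min_len - len(keep_idx), len(freq_idx))
        keep_idx += rng.sample(freq_idx, n_extra)   # pad with sampled frequent items
    keep_idx.sort()                                 # keep chronological order (our assumption)
    return [seq[t] for t in keep_idx]

# Example: items 7 and 42 are less frequent, M = 3
print(build_non_uniform_subsequence([3, 7, 11, 42, 5, 9], {7, 42}, 3))
```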
For each sequence, we employ a sequence encoder f(·), which is the sequential recommendation modeling process: q_u = f(h_u^U),q̂_̂û = f(h'_u) where q_u ∈ℝ^N × 2d and q̂_̂û∈ℝ^N × 2d are the representations for 𝒮_u^U and 𝒮'_u. The specifics of f(·) will be detailed in subsection <ref>. Next, the objective is to bring q_u and q̂_̂û as close as possible in the feature space to enhance the model’s ability to handle the temporal dynamics of non-uniform sequences, thereby minimizing x̃ through a generative model G_θ, which consists of a feed-forward layer: x̃ = q_u-G_θ (q̂_̂û) Meanwhile, a curriculum learning strategy is adopted, which mimics the human learning process: from simple to complex. This strategy gradually increases the training samples' complexity. Specifically, the model initially learns predominantly from more uniform sequences, while sequences with more complex user interest drifts are introduced later in the training. This process is managed with a dynamically weighted loss function λ_s guiding the progression: λ _s =w_s || x̃ ||^2 w_s= sin(π/2·e - e_b/e_all + π/2·V_max - V_u/V_max-V_min) where w_s represents a dynamic weight coefficient, e denotes the current epoch number, e_b denotes the epoch at which this loss function starts to contribute to the training process, and e_all denotes the total number of training epochs. For each 𝒮_u^U∈𝕊_u, the variance of the time intervals is defined as V_u. V_max is the maximum time interval variance among all sequences, while V_min is the minimum. This design allows w_s to dynamically change its value during the training process based on the uniformity of sequences and phases of training progress. This task serving as an auxiliary task, parallel to the main task of sequential recommendation, specifically enhances the model's performance on 𝒮'_u, thereby implicitly improving the model's adaptability and prediction accuracy on 𝕊_n. §.§ Item Enhancement Given that the generated 𝒮'_u are predominantly composed of i_t^L, together with a general prevalence of i_t^L in 𝒮_u^N, enhancing model performance on i_t^L has become critical. The proposed item enhancement approach operates from two aspects: utilizing the information from neighboring items and leveraging the knowledge transferred from i_t^F to i_t^L. Leveraging neighbors for enhancement involves two steps: candidate neighbor generation and representation aggregation. Initially, the candidate neighbor generation process is conducted for each item. For each center item i_c ∈ℐ, a potential candidate neighbor set ℕ_i_c is identified. A bunch of score s(i_c, j) is calculated for i_c against every other item j (where j ∈ℐ∖{i_c}). These scores are then ranked, and the items with higher scores are chosen to constitute the neighbor set ℕ_i_c. s(i_c, j) integrated three factors: the temporal interval T between i_c and j, the popularity H of item j, and the similarity S between i_c and j. Both H and S are normalized to ensure consistency in the scoring mechanism. s(i_c, j) is defined as: s(i_c, j) = g(T) + ϕ(T, H) + ϕ(T, S) g(T) = 1/1 + log(1 + T) ϕ(T, x) = T + Θ/e^(T + Θ)/Γ x where Θ and Γ are constants, determined based on dataset specifics. As T increases, g(T) gradually decreases. Similarly, an increase in T or a decrease in x results in a lower value of ϕ(T, x). This scoring framework adeptly manages the temporal dynamics among items, accounting for factors such as the popularity and similarity of potential neighboring items. 
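To make the candidate scoring concrete, the sketch below evaluates s(i_c, j) for a few hypothetical candidates. Note that the grouping of the ϕ(T, x) expression is our reading of the typeset formula, namely ϕ(T, x) = (T + Θ) exp(-(T + Θ)/(Γ x)), which matches the stated behaviour in x; the constants Θ and Γ, as well as the candidate values, are placeholders rather than dataset-specific choices.

```python
import math

def g(t):
    """Temporal proximity term g(T) = 1 / (1 + log(1 + T))."""
    return 1.0 / (1.0 + math.log(1.0 + t))

def phi(t, x, theta, gamma):
    # Our reading of the typeset formula: phi(T, x) = (T + Theta) * exp(-(T + Theta) / (Gamma * x)),
    # which decays for large intervals T and for small popularity/similarity x.
    return (t + theta) * math.exp(-(t + theta) / (gamma * x))

def neighbor_score(t_interval, popularity, similarity, theta=1.0, gamma=1.0):
    """Candidate-neighbor score s(i_c, j) = g(T) + phi(T, H) + phi(T, S).

    t_interval : temporal interval T between the centre item i_c and candidate j
    popularity : normalised popularity H of candidate j
    similarity : normalised similarity S between i_c and j
    theta, gamma: dataset-dependent constants (placeholder values here)
    """
    return (g(t_interval)
            + phi(t_interval, popularity, theta, gamma)
            + phi(t_interval, similarity, theta, gamma))

# Rank candidates for one centre item and keep the highest-scoring ones as N_{i_c}
candidates = {"j1": (2.0, 0.8, 0.6), "j2": (10.0, 0.4, 0.9), "j3": (1.0, 0.1, 0.2)}
ranked = sorted(candidates, key=lambda j: neighbor_score(*candidates[j]), reverse=True)
print(ranked)
```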
In each training batch, K neighbors are randomly sampled from ℕ_i_c, where K is a hyper-parameter. Then we aggregate these K candidate neighbors to enhance i_c with a simple attention mechanism. We generate the initial embedding for i_c, denoted by m_c ∈ℝ^d, as well as the embeddings m_k ∈ℝ^d of these K neighbors, k ∈{1, 2, …, K}. The aggregation process is as follows: m_n = ∑_k=1^K α_k m_k, α_k = exp(m_c^T m_k)/∑_j=1^K exp(m_c^T m_j) where m_n represents the aggregated embedding of the neighbors. We then concatenate m_n and m_c to form the updated representation m_c' = [m_c ∥ m_n]∈ℝ^2d, where ∥ denotes the concatenation operation. As a result, m_c' contains more information related to i_c than m_c. Meanwhile, to enable i_t^L∈𝕀_l to better utilize the related information from ℕ_i_c, we transfer the knowledge about neighbor aggregation learned from 𝕀_f to 𝕀_l. Define the embedding of i_t^F obtained from M_I as m_i^F, and define the updated embedding m_c' of i_t^F as m'_i^F. We train the aggregation mechanism on i_t^F by minimizing the following loss function: λ _f = w_i ||m_i^F - G_φ ( m'_i^F )||^2 w_i = sin(π/2·e - e_b/e_all + π/2·F - F_min/F_max-F_min) where G_φ is a fully connected layer that aligns the dimensions of m'_i^F and m_i^F, w_i is a dynamic weight used to adjust the magnitude of the loss function across different items, F represents the frequency score of the current item across all interactions, F_min is the minimum F of i_t^F∈𝕀_f, and F_max is the maximum. A curriculum learning strategy, analogous to the sequence branch, is also employed: in the initial training phase, high-frequency items are prioritized, with a gradual shift towards less-frequent items in the later stages. Finally, we update the embeddings of all i_t^L after a certain epoch e_t of training by minimizing the following loss: λ _l = η|| m_i^L - G_φ^+(m'_i^L)||^2 η = sin (π/2·e-e_t/e_all ) where m_i^L is the representation of i_t^L obtained from M_I, m'_i^L is the updated representation m_c' of i_t^L, and η is a parameter that increases dynamically as training progresses. G_φ^+ denotes the G_φ obtained after (e-e_b) epochs of training and is kept static. By refining the i_t^L representations through this auxiliary task before the main task training, the accuracy and performance of the model concerning i_t^L are improved. §.§ Multidimensional Time Modeling Given the varying dependencies on temporal information, where 𝒮_u^U has a lower reliance on time and 𝒮_u^N requires richer temporal details, we propose a multidimensional time modeling module to accommodate these differing needs. As demonstrated in subsection <ref>, utilizing time interval information is more effective for 𝒮_u^U, while employing comprehensive temporal context proves more effective for 𝒮_u^N. Therefore, we design this module to better leverage the appropriate temporal information. For each 𝒮_u we define its corresponding timestamp sequence as 𝒯_u = (t_1, t_2, …, t_N). The corresponding time interval sequence is defined as 𝒯_intv = (τ_1, τ_2, …, τ_N-1), where each τ_k = t_k+1 - t_k denotes the interval between the k^th and (k+1)^th interactions. Each τ_k is encoded by an embedding matrix, resulting in a time interval embedding v_k ∈ℝ^d. For temporal context modeling, we adopt the approach proposed by Xu et al. <cit.>, which uses a self-attention mechanism based on time representation learning and models temporal information such as year, month, and day separately.
Subsequently, this information is aggregated through a linear layer to form the final temporal context embedding c_i ∈ℝ^d for each interaction i. In a word, for each S_u, we obtain its item sequence embedding h_u ∈ℝ^N × d, along with the temporal context representation C_t = [c_1, c_2, …, c_N] ∈ℝ^N × d, and the time interval embeddings V_t = [0, v_1, v_2, …, v_n-1] ∈ℝ^N × d, 0 represents a 1 × d zero vector. Next, recognizing that sequences with different uniformity require varying levels of temporal information, we integrate h_u with C_t and V_t respectively using a mixture attention mechanism. This serves as the sequence encoder f(·), generating q_u, the embedding of the user u's interaction sequence, tailored to the specific needs of each sequence. Integrate h_u with C_t and V_t in the same way, taking the application of mixture attention on h_u and C_t as an example. First, concatenate h_u and C_t to obtain the initial embedding of a sequence as e_u = h_u || C_t. Next, we preprocess the input X for mixture attention, which is defined as X = e_u + P, where P ∈ℝ^N × 2d is the position encoding matrix. The mixture attention mechanism can be mathematically described as: MixATT(X) = FFL(SAL(X)) FFL(X) = ReLU(XW_F + b_F)W_F' + b_F' SAL(X) = Concat(H_1, …, H_H) where MixATT(X) represents a composite model that integrates a self-attention mechanism SAL(X) and a feed-forward layer FFL(X). FFL involves two linear transformations with weight matrices W_F and W_F', and bias terms b_F and b_F'. SAL combines the outputs H_j from each attention head j ∈{1, …, H}. Each H_j is given by softmax(A_j / √(d_V)) W_j^O, where √(d_V) is a scaling factor to stabilize learning, and W_j^O is the output projection matrix for the j^th head. A_j is the attention score matrix proposed by Viet-Anh Tran et al. <cit.>, combining Gaussian distribution to mix two types of input data. A_j = ∑_k ∈{m, c} p_kj𝒩(A; Q_k^T, σ^2 I) is approximated by a mixture model. The non-negative mixture weights p_kj sum to one, indicating the contribution of each context type. Q_k is obtained by projecting the input context X_k using matrix W_k. The Gaussian distribution's variance parameter is σ^2, and I is the identity matrix. The loss function for the recommendation task can be defined as follows: λ _r=q_u n_i^𝖳 where q_u is the output of the FFL and n_i=[m_i||c_i] is the embedding of the next item to be predicted. Similarly, the mixture attention mechanism is also applied to h_u and V_t. The outputs processed through the mixture attention mechanism, are mutually supervised within a multi-task learning framework. §.§ Inference Process Figure <ref> shows how the integrated components—IE (Item Enhancement), SE (Sequence Enhancement), and f(u) (Sequential Recommendation)—work together to provide robust and contextually rich recommendations. For a given input sequence 𝒮_u, we first determine whether it is 𝒮_u^U or 𝒮_u^N. 𝒮_u^U is initialized with embedding e_u^U, while 𝒮_u^N is initialized with e_u^N. Within each 𝒮_u^U, i_t^F are utilized to train G_φ through the loss function λ_f in the IE module. Conversely, for both 𝒮_u^U and 𝒮_u^N, i_t^L are updated based on the output from G_φ^+ using the loss λ_l. After processing the sequence through f(u), we train its embedding via the primary task loss λ_r. The sequence embedding is then refined by the SE module to further enhance the sequence representation using the loss λ_s. Finally, the sequence embedding and the embedding of the item to be predicted are scored by calculating their dot product. 
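The following simplified sketch illustrates the overall encode-and-score pipeline; it is not the paper's implementation: the Gaussian mixture attention MixATT is replaced here by standard multi-head self-attention over the concatenated item/context embedding, positional encoding is omitted, and scoring a candidate item against the representation at the last sequence position is our assumption.

```python
import torch
import torch.nn as nn

class SimplifiedEncoderBlock(nn.Module):
    """Stand-in for MixATT: multi-head self-attention followed by the feed-forward layer FFL.
    The mixture-of-contexts attention of the paper is replaced by vanilla self-attention,
    purely for illustration."""
    def __init__(self, dim, n_heads=2):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.ffl = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x):                      # x: (batch, N, 2d), assumed position-encoded
        h, _ = self.attn(x, x, x)              # SAL(X)
        return self.ffl(h)                     # FFL(SAL(X))

d, N = 64, 50
encoder = SimplifiedEncoderBlock(2 * d)
h_u, c_t = torch.randn(1, N, d), torch.randn(1, N, d)   # item and temporal-context embeddings
q_u = encoder(torch.cat([h_u, c_t], dim=-1))             # sequence representation

# Inference: score a candidate next item by the dot product between the sequence
# representation (here taken at the last position) and n_i = [m_i || c_i].
m_i, c_i = torch.randn(d), torch.randn(d)
n_i = torch.cat([m_i, c_i])
score = q_u[0, -1] @ n_i
print(score.item())
```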
§ EXPERIMENT §.§ Experimental Settings §.§.§ Datasets In addition to the ML-1M <cit.> dataset used in section <ref>, we also use datasets from e-commerce platforms, including those for books, beauty products, and toys, as detailed below: * The Amazon Book <cit.> dataset consists of 6,275,735 interactions of users rating a book. This dataset includes 79,713 users and 91,465 books, with a density of 0.00086, indicating the sparsity of user-item interactions. * The Amazon Beauty <cit.> dataset comprises 198,502 interactions involving 22,363 users and 12,101 beauty products, with a density of 0.00073. * The Amazon Toys <cit.> dataset includes 167,597 interactions from 19,412 users and 11,924 toys, with a sparse density of 0.00072. For each dataset, we adopt the k-core filtering <cit.> as a pre-processing step, which iteratively removes users and items whose interactions are fewer than k, until each user and item in the dataset has at least k interactions. Specifically, for the ML-1M, we set k_item = 5 and k_user = 10; for the Beauty and Toy, we set k_item = 5 and k_user = 5; and for the Books, the settings are k_user = 30 and k_item = 20. §.§ Evaluation Settings We arrange the dataset in chronological order and allocate the last item as the validation set and the penultimate item as the test set, using the remaining data to construct the training set. To ensure fair evaluation, for each positive item in the test set, we pair it with 100 negative items sampled uniformly, and the model's performance is assessed based on these pairs. We primarily utilize three metrics for performance evaluation based on top-10 recommendation results: NDCG, HR, and MRR. Specifically, NDCG assesses the ranking quality of recommended items, HR measures the presence of at least one relevant item, and MRR evaluates the rank of the top relevant item. §.§.§ Comparison Methods We conduct a comprehensive comparison of UniRec with 11 baseline models. These include six classic sequential recommendation models: GRU4Rec <cit.>, Caser <cit.>, STAMP <cit.>, SASRec <cit.>, BERT4Rec <cit.>, and LightSANs <cit.>. Additionally, we evaluate five time-aware models: TiSASRec <cit.>, Meantime <cit.>, TiCoSeRec<cit.>, FEARec <cit.>, and MOJITO <cit.>, all of which leverage temporal information to improve performance. §.§.§ Implementation Details All models are trained for up to 200 epochs utilizing the Adam optimizer <cit.>. Early stopping is implemented with a patience threshold of 20 epochs. We assign a value of 64 to the parameter d, utilize a batch size of 512, and set the learning rate to 0.01. The length of the sequence is fixed at 50. Both hyper-parameters M and K are set to 3. The mixture attention mechanism is configured with 2 heads. We test the partitioning ratios for uniform and non-uniform users within the range of {0.3, 0.4, 0.5, 0.6, 0.7, 0.8}, and for frequent and less-frequent items within the range of {0.4, 0.5, 0.6, 0.7, 0.8, 0.9}, across each dataset. §.§ Overall Performance Table <ref> presents the experimental results of UniRec and 11 baselines across four datasets, several conclusions can be drawn. First, time-aware models generally outperform non-time-aware sequential recommendation models across various datasets. This highlights the critical importance of incorporating temporal dynamics into the recommendation process, as it substantially enhances the relevance and accuracy of the recommendations. 
Second, UniRec significantly outperforms other comparative models across all datasets and evaluation metrics, confirming its effectiveness. The bidirectional enhancement strategy for sequences and items adopted by UniRec, along with the multidimensional time modeling, greatly enhances the precision in modeling user interests and item characteristics. For instance, on the ML-1M dataset, UniRec achieves improvements of 3.32% in NDCG@10 and 4.08% in MRR@10 compared to the existing SOTA techniques. Third, UniRec demonstrates exceptional performance across datasets with varying sparsity and scale, whether in the lower-sparsity, smaller-scale ML-1M dataset or in the larger, more sparse Amazon datasets. This proves its adaptability and robustness to different levels of sparsity and data sizes. For example, on the Books dataset, UniRec increases MRR@10 by 3.24%, and on the Beauty dataset, it raises NDCG@10 by 3.01%. Lastly, compared to TiCoSeRec, which enhances data by improving sequence uniformity, UniRec enhances the utilization of sequence uniformity by incorporating item frequency more effectively. This demonstrates the potential of enhancing sequential recommendations from both perspectives of item frequency and sequence uniformity. §.§ Ablation Experiment To understand the impact of various components in our model, we conduct an ablation study. We divide the model into the following parts for evaluation: Multidimensional Time Modeling (A), Sequence Enhancement (B), Item Enhancement (C), and Item Popularity & Similarity (D). Specifically, w/o A refers to the replacement of multidimensional time modeling with a single-dimensional time modeling structure, utilizing only time interval modeling and disregarding contextual time information. w/o B refers to removing the sequence enhancement task, while w/o C refers to removing the item enhancement task. w/o D refers to excluding the consideration of item popularity and similarity in the item enhancement component, instead selecting candidate neighbors based solely on the time interval of the project. In addition to the overall dataset results, we evaluate performance on several subsets: frequent-item, less-frequent-item, uniform-sequence, and non-uniform-sequence. Using the ML-1M dataset as an example, Figure <ref> shows the evaluation results of SASRec, UniRec, and UniRec without several components across various subsets. First, UniRec demonstrates significant performance improvements over SASRec across all strategies, particularly in the less-frequent-item and non-uniform-sequence subsets. According to the experimental data, UniRec shows a 9.2% improvement in MRR@10 over SASRec in the frequent-item subset and an 18.0% improvement in the less-frequent-item subset. Additionally, in uniform and non-uniform subsets, UniRec achieves a 2.1% and 4.3% improvement in HR@10 over SASRec, respectively. These findings indicate that UniRec excels in enhancing performance for less-frequent items and non-uniform sequences. Secondly, removing each component of the model results in varying degrees of performance degradation, indicating the importance of each component to the overall model performance. Particularly, w/o B leads to the most significant performance drop, particularly reflected in the HR metric, highlighting the effectiveness of the sequence enhancement module. 
This module not only improves the uniformity of non-uniform sequences but also increases the frequency of less-frequent items, significantly contributing to the accuracy of user interest modeling. Furthermore, the performance on the frequent-item subset and uniform-sequence subset is consistent with the overall data. However, there are some differences between the less-frequent-item subset and the non-uniform-sequence subset. In the less-frequent-item subset, w/o A shows a significant drop in NDCG@10 and MRR@10, indicating that temporal information has a substantial impact on less-frequent items, as certain less-frequent items are more likely to be interacted with during specific periods. The declines in NDCG@10 and MRR@10 for w/o C and w/o D also demonstrate the effectiveness of these components in modeling less-frequent items. In particular, w/o D underscores the importance of considering item popularity, similarity, and relevance in selecting candidate neighbors to enhance less-frequent items' representations. In the non-uniform-sequence subset, the significant performance drop in w/o B indicates that sequence enhancement indeed improves the model's capability to handle sequences with rich interest drifts. In summary, Figure <ref> clearly illustrates the contributions of each component to the performance of UniRec, validating the necessity and effectiveness of multidimensional time modeling, sequence enhancement, item enhancement, and item popularity & similarity in improving the model's recommendation performance. §.§ Hyperparameter Experiment In this subsection, we explore the relationship between the performance of UniRec and two hyperparameters: the item frequency partition threshold and the user uniformity partition threshold. As shown in Figure <ref>, we conduct experiments on the Amazon Beauty dataset, testing the impact of item frequency partition thresholds ranging from 40% to 90% (a), and sequence uniformity partition thresholds ranging from 30% to 80% (b). The results indicate that all tested partition thresholds yield good performance, but the most significant improvement occurs at specific values. For the Beauty dataset, the optimal split thresholds are 70% for high-frequency items and 30% for less-frequent items, while the ratio of uniform to non-uniform sequences is 60% to 40%. In summary, UniRec exhibits robust performance across different threshold settings, yet carefully selecting division thresholds can enhance the performance the most. §.§ Time Sensitivity Analysis As mentioned in section <ref>, we hypothesize that uniform sequences and non-uniform sequences may exhibit different dependencies on temporal information. In this subsection, to validate this hypothesis, we compare the effects of coarse-grained time modeling and fine-grained time modeling on both uniform and non-uniform sequence subsets. As shown in Figure <ref>, a positive score indicates that coarse-grained modeling outperforms fine-grained modeling, while the negative indicates the opposite. In both Amazon datasets, we observe that coarse-grained modeling performs better on uniform-sequence subsets, whereas fine-grained modeling is more effective on non-uniform-sequence subsets. For uniform sequences, user behavior patterns are more consistent, capturing global patterns can yield satisfactory predictive outcomes. 
Conversely, non-uniform sequences exhibit greater diversity and dynamism in user behavior, necessitating a fine-grained temporal encoding strategy to accurately model shifts and changes in user interests. §.§ Case Study We conduct a case study to illustrate the progressive enhancement of a non-uniform sequence through various models and modules. As shown in Figure <ref>, we select a non-uniform sequence (user ID 2481) and demonstrate the changes in prediction scores for the next item (item ID 291) and the corresponding sequence embeddings after modeling with four different approaches: SASRec, SR module of UniRec, both the SR and IE modules, and the SR, IE, and SE modules. The progression of the model incorporating more modules is indicated by the arrows in the figure. SASRec shows a low prediction score, indicating its limited capability in handling sequences with significant interest drift. Adding the SR module significantly improves the model's predictive ability. The inclusion of the IE module brings further improvement, and the model achieves its best performance with the addition of the SE module. In the heatmaps, blue indicates larger positive values and green indicates smaller negative values. The transition in heatmap colors from SASRec to the enhanced models, with increasing contrast, demonstrates the model's growing ability to capture detailed information and features from various positions within the sequence. § RELATED WORKS §.§ Sequential Recommendation Sequential recommendation systems identify patterns in user behavior to predict future actions. Initially, Markov models <cit.> are pivotal for analyzing transitions between states. The rise of deep learning leads to RNN models like GRU4Rec <cit.>, which improves predictions by capturing long-term dependencies <cit.>. Convolutional Neural Network (CNN <cit.>)-based models, such as Caser <cit.>, improves recommendations by examining local behavior sequence patterns. Models like SHAN <cit.> and STAMP <cit.> effectively address shifts in user interests through memory strategies. Recently, attention mechanisms and Transformer-based models, like SASRec <cit.> and Bert4Rec <cit.>, have gained prominence. They leverage self-attention to understand complex sequence dependencies, while LightSANs <cit.> introduces lightweight self-attention structures. The SSE-PT <cit.> integrates personalized embeddings with Stochastic Shared Embeddings (SSE) <cit.>. Research also extends to cross-domain <cit.>, interpretable <cit.>, graph neural network <cit.>, and contrastive learning approaches <cit.> for sequential recommendations. §.§ Time-Aware Sequential Recommendation Time-aware systems incorporate timing to capture the dynamic nature of user preferences, offering more accurate and timely recommendations. These models surpass traditional ones by adapting recommendations to both the shifts in user preferences over time and their current interests <cit.>. The TiSASRec <cit.> model innovatively adjusts self-attention weights based on the timing between actions, significantly improving performance. MEANTIME <cit.> enriches time perception through diverse embedding techniques, whereas TASER <cit.> explores both absolute and relative time patterns. TGSRec <cit.> considers temporal dynamics in sequence patterns, and MOJITO <cit.> analyzes preferences from various temporal perspectives through a hybrid self-attention mechanism. 
FEARec <cit.> transitions sequence analysis from the time to the frequency domain, employing a hybrid attention mechanism and multitask learning for enhanced performance. While these models ingeniously integrate temporal information, optimizing the use of such data remains a challenge. The diversity of data characteristics necessitates adaptable approaches for handling time intervals, timestamps, and cyclic patterns, given the varied and often irregular temporal behavior patterns among users. Recently, the TiCoSeRec <cit.> introduces an innovative approach by considering sequence uniformity during the data augmentation phase, marking a deeper understanding of sequential recommendation data. While this model treats sequence uniformity as a target of data enhancement, it does not delve into modeling and analyzing this characteristic of the data further. In contrast, in this paper, we incorporate sequence uniformity into model construction. Our method not only addresses the limitations encountered by existing models when dealing with data of varied temporal distributions but also proposes a novel perspective for feature enhancement. § CONCLUSION In this paper, we demonstrate that sequential recommendation algorithms perform better on uniform sequences and frequent items compared to non-uniform sequences and less-frequent items. To address this, we present a novel bidirectional enhancement architecture that leverages sequence uniformity and item frequency for feature enhancement, optimizing the performance of sequential recommendations. Additionally, we introduce a multidimensional time modeling method to better capture temporal information. Experimental results show that our method significantly outperforms twelve competitive models across four real-world datasets. To the best of our knowledge, this is the first work that utilizes the uniformity of sequences and frequency of items to enhance recommendation performance and it also indicates a promising direction and a new perspective for feature enhancement in future research. elsarticle-num
http://arxiv.org/abs/2406.19084v1
20240627111132
Spatial Multiplexing in Near-Field Line-of-Sight MIMO Communications: Paraxial and Non-Paraxial Deployments
[ "Juan Carlos Ruiz-Sicilia", "Marco Di Renzo", "Placido Mursia", "Aryan Kaushik", "Vincenzo Sciancalepore" ]
cs.IT
[ "cs.IT", "eess.SP", "math.IT" ]
: A New Cypher-like Operator for Mining Association Rules on Property Graphs Stefano Ceri July 1, 2024 ============================================================================ § ABSTRACT Sixth generation (6G) wireless networks are envisioned to include aspects of energy footprint reduction (sustainability), besides those of network capacity and connectivity, at the design stage. This paradigm change requires radically new physical layer technologies. Notably, the integration of large-aperture arrays and the transmission over high frequency bands, such as the sub-terahertz spectrum, are two promising options. In many communication scenarios of practical interest, the use of large antenna arrays in the sub-terahertz frequency range often results in short-range transmission distances that are characterized by line-of-sight channels, in which pairs of transmitters and receivers are located in the (radiating) near field of one another. These features make the traditional designs, based on the far-field approximation, for multiple-input multiple-output (MIMO) systems sub-optimal in terms of spatial multiplexing gains. To overcome these limitations, new designs for MIMO systems are required, which account for the spherical wavefront that characterizes the electromagnetic waves in the near field, in order to ensure the highest spatial multiplexing gain without increasing the power expenditure. In this paper, we introduce an analytical framework for optimizing the deployment of antenna arrays in line-of-sight channels, which can be applied to paraxial and non-paraxial network deployments. In the paraxial setting, we devise a simpler analytical framework, which, compared to those available in the literature, provides explicit information about the impact of key design parameters. In the non-paraxial setting, we introduce a novel analytical framework that allows us to identify a set of sufficient conditions to be fulfilled for achieving the highest spatial multiplexing gain. The proposed designs are validated with numerical simulations. Index terms — MIMO, near field, line-of-sight, spatial multiplexing, paraxial, non-paraxial settings. § INTRODUCTION MIMO is a well-established technology, owing to its potential in boosting the rate of wireless networks by means of highly directional beamforming and spatial multiplexing capabilities, i.e., the possibility of serving many users in the same time-frequency resource with minimal interference <cit.>. Several current wireless communication systems, such as the 4G, the 5G and Wi-Fi, as well as the upcoming 6G, rely on the MIMO technology to fulfill their requirements in terms of throughput, reliability and multiple access <cit.>. In this context, the typical communication scenario considered in the literature consists of two multi-antenna transceivers deployed in a wireless channel characterized by the presence of strong multipath components, i.e., there exist several scattered paths between a multi-antenna transmitter and receiver, that effectively create a MIMO channel with a high rank. As a result, by assuming coherent signal processing at the multi-antenna transmitter, i.e., by exploiting the available knowledge of the propagation channel, the system capacity scales linearly with the multiplexing gain, which, in turn, depends on the minimum number of antennas available at the transmitter and receiver <cit.>. 
In recent years, in addition, the research community has shown considerable interest in exploiting high frequency bands, e.g., mmWave and sub-THz frequencies, for communication, due to the abundant availability of spectrum, which may potentially bring unprecedented gains in system performance <cit.>. At high frequency bands, wireless channels are characterized by a strong LoS component, i.e., the direct link between a transmitter and a receiver is predominant, while the multipath becomes sparse as a result of material absorption and atmospheric attenuation <cit.>. In mobile communications, LoS links are often deemed to be low performing due to the low rank of the associated communication channel, which do not allow to effectively transmit multiple data streams concurrently, even if the transmitters and receivers are equipped with multi-antenna transceivers. This is because multi-antenna transceivers of moderate size that operate at low frequencies are typically designed to communicate at distances that exceed the Fraunhofer far-field distance, resulting in signals characterized by planar wavefronts <cit.>. At high frequencies, however, the signal power decays rapidly with the transmission distance, and hence the communication links are typically established over short distances and by relying on large antenna arrays. Therefore, multi-antenna transceivers of large size that operate at high frequencies do not necessarily operate at distances that exceed the Fraunhofer far-field distance, resulting in received signals that are characterized by spherical wavefronts. As a consequence, there is a renowned and increasing interest in near-field communications, i.e., network deployments in which the distance between the transmitters and receivers is shorter than the Fraunhofer far-field distance. Such short communication ranges offer opportunities for minimizing the energy expenditure of wireless systems, since the available power budget can be efficiently used while guaranteeing optimized system performance. Notably, the spherical wavefront that characterizes the electromagnetic waves can be leveraged for beam focusing, i.e., to concentrate the energy towards specified locations, in contrast to specified directions, as it is allowed through conventional beam steering designs obtained based on the far-field approximation <cit.>. Due to the high energy concentration capability, beam focusing is viewed as an enabler to reduce the transmit power and the interference, paving the way towards energy sustainable communications <cit.>. Under such conditions, conventional plane-wave approximations are not accurate anymore, as they lead to sub-optimal designs that do not provide the highest spatial multiplexing gain, thus deteriorating the spectral and energy efficiencies <cit.>. For example, thanks to the large aperture of typical sub-THz devices, which are characterized by a large number of antennas in a relatively small space, and the short transmission range of typical sub-THz transmission links, recent works have shown that it is possible to effectively support multiple data streams, even in LoS MIMO settings <cit.>. However, this is only possible by appropriately modeling and exploiting the spherical wavefront of the transmitted signals in the near field <cit.>. 
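As a rough quantitative illustration of this regime, the short sketch below evaluates the Fraunhofer far-field distance d_F = 2D^2/λ for a few array apertures at a sub-terahertz carrier; the specific aperture sizes and the 140 GHz carrier are illustrative values, not parameters taken from the deployments studied in this paper.

```python
SPEED_OF_LIGHT = 3e8  # m/s

def fraunhofer_distance(aperture_m, frequency_hz):
    """Fraunhofer far-field distance d_F = 2 * D^2 / lambda."""
    wavelength = SPEED_OF_LIGHT / frequency_hz
    return 2 * aperture_m ** 2 / wavelength

# At 140 GHz even a 20 cm aperture pushes the far-field boundary to tens of metres,
# so typical short sub-THz links operate well inside the (radiating) near field.
for aperture in (0.05, 0.10, 0.20, 0.50):
    print(f"D = {aperture:4.2f} m  ->  d_F = {fraunhofer_distance(aperture, 140e9):7.1f} m")
```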
By relying on correct signal models, it is thus possible to design MIMO communication links that maximize the number of DoF of a communication channel, i.e., the spatial multiplexing gain, while not increasing the amount of transmitted power and efficiently utilizing all the available radio frequency chains <cit.>, which inherently leads to energy sustainable physical layer designs for wireless communications. §.§ State-of-the-art on Near-field LoS MIMO Communications Recent works <cit.> have shown that MIMO communication links in LoS conditions (often referred to as LoS MIMO) can be appropriately optimized by considering near-field features. The proposed methods aim to decorrelate the received signals, by carefully designing the transmit and receive antenna arrays in terms of size and inter-element distances. In a nutshell, the overarching design criterion consists of optimizing the locations of the multiple antennas at the transmitters and receivers such that the resulting MIMO channel matrix has a full rank even in the absence of multipath propagation <cit.>. The relevance of optimizing the locations of the antenna elements in multi-antenna arrays is increasing even further lately, thanks to the development of new antenna technologies, including fluid antennas, movable antennas, and conformal metamorphic metasurfaces <cit.>, <cit.>. The methods and analysis presented next can be directly applied to these emerging antenna technologies, when the maximization of the DoF is the metric of interest. A summary of state-of-the-art research works on the maximization of the DoF in LoS MIMO channels is given in Table <ref>. In <cit.>, the authors identify the sensitivity of the rank of LoS MIMO channels as a function of the antenna spacing. Moreover, it is shown that the orthogonality of the MIMO sub-channels under LoS conditions can only be ensured for short transmission ranges, i.e., in the near field. The design of near-field spatial multiplexing schemes for application to indoor mmWave MIMO channels is discussed in <cit.>. Specifically, the authors consider a ULA with a constrained form factor, and they show that sparse array designs, i.e., when the inter-distances between the antenna-elements are larger than half of the wavelength, result in a spatially uncorrelated channel matrix, hence effectively providing the maximum number of spatial DoF. The optimal inter-distance between the antenna arrays is identified, and it is shown to fulfill the Rayleigh criterion. In <cit.>, it is shown that two communicating rectangular lattice antenna arrays, i.e., two UPA, including rectangular arrays, square arrays, and ULA, can achieve the maximum spatial multiplexing gain, provided that the inter-distance between the antenna elements is appropriately designed. The aforementioned papers consider a scenario in which the transmitting and receiving arrays are aligned in the broadside of each other. A more general analysis, which can be applied to geometrical models that account for any orientations of the transmitting and receiving arrays, is presented in <cit.> for ULA and in <cit.> for rectangular UPA, respectively. Therein, the analysis is carried out by assuming the so-called parabolic wavefront model (or approximation) <cit.>, which can be applied in the near field provided that the transmission distance is much larger than the physical size of the transmitting and receiving arrays. In the literature, this is often referred to as the paraxial setting. 
These research works assume that the LoS MIMO communication link operates at high SNR values. In this operating regime, it is desirable to have a full-rank MIMO channel matrix with equal singular values, in order to maximize the spectral and energy efficiencies, assuming the same total consumed power. In the low SNR regime, on the other hand, maximizing the received power is essential, and this scenario requires beamforming the transmitted signal over a channel whose maximum singular value is as large as possible. In <cit.>, the authors analyze the impact of the SNR when optimizing LoS MIMO channels, and they show that, in order to strike a balance between spatial multiplexing and beamforming, the multi-antenna arrangements need to depend on the SNR. Accordingly, the authors of <cit.> propose to configure ULA based on SNR-dependent rotations that maximize the rate. In <cit.>, the same authors generalize the approach to UPA, in order to reduce the array footprints. §.§ Paper Contributions Against this background, we advance the state of the art on modeling and optimizing LoS MIMO channels by introducing an analytical framework for optimizing the deployment of antenna arrays that can be applied to paraxial and non-paraxial deployments, i.e., when the transmission distance is not necessarily much larger than the physical size of the antenna arrays at the transmitter and receiver. Specifically, we provide two main contributions: * In the paraxial setting, we devise a simpler analytical framework than those available in the literature, e.g., in <cit.>, which provides explicit information about the impact of key design parameters, including the tilt and rotation of the antenna arrays. The proposed approach provides us with explicit analytical expressions for ensuring the highest spatial multiplexing gain with no restriction on the orientations and arrangements of the antenna arrays. * In the non-paraxial setting, we introduce a new analytical framework that allows us to identify the conditions that need to be fulfilled in order to achieve the highest spatial multiplexing gain in LoS MIMO channels. To gain design insights, we specialize the framework to MIMO deployments with linear arrays oriented in broadside. In this case, we propose an approximated analytical framework and introduce an optimized MIMO design that offers the largest spatial multiplexing gain, provided that the difference between the number of antennas at the two arrays is greater than a minimum value. This minimum number of excess antennas is also estimated for the considered case study. To the best of our knowledge, there exist no contributions in the open technical literature that have tackled this design and optimization problem in non-paraxial settings. In terms of methodology, the proposed approach capitalizes on an approximation for spherical wavefronts that was recently introduced in <cit.>, which is referred to as the quartic wavefront approximation, and that can be applied to both the paraxial and non-paraxial settings. In <cit.>, however, the approach was applied to holographic MIMO channels and it has never been applied to LoS MIMO channels. Compared with the typical parabolic approximation for spherical wavefronts, the quartic approximation can be applied to a universal system of coordinates, hence resulting in simpler and more insightful analytical expressions in the paraxial setting, as well as enabling the analysis and optimization of LoS MIMO channels in non-paraxial settings. 
§.§ Paper Organization The rest of the present paper is organized as follows. In Section <ref>, the system model is introduced, the proposed analytical approach is presented, and the benefits with respect to currently available frameworks are discussed. In Section <ref>, the paraxial setting is considered and the proposed analytical approach is illustrated. In Section <ref>, the approach is generalized for application to the non-paraxial setting, and the usefulness of the quartic approximation for spherical wavefronts is discussed. In Section <ref>, extensive numerical results are illustrated to validate the proposed analytical frameworks, theoretical findings, and optimal design criteria for LoS MIMO channels. Finally, conclusions are drawn in Section <ref>. Notation: Bold lower and upper case letters represent vectors and matrices. ℂ^a × b denotes the space of complex matrices of dimensions a × b. (·)^T denotes the transpose and (·)^* denotes the Hermitian transpose. 𝐈_N denotes the N × N identity matrix. 𝐀(i,k) denotes the k-th element of the i-th row of matrix 𝐀. j is the imaginary unit. 𝒞𝒩(μ,σ^2) denotes the complex Gaussian distribution with mean μ and variance σ^2. max{a,b} returns the maximum of a and b. § SYSTEM MODEL We consider a MIMO system, wherein the transmitter and receiver are UPA with L=L_1 L_2 and M=M_1 M_2 antenna elements, respectively, where L_1 (M_1) is the number of antenna elements in the first principal direction and L_2 (M_2) is the number of elements in the second principal direction. Without loss of generality, we assume that the larger array acts as the receiver, i.e., M ≥ L. This corresponds to a typical uplink scenario. The transmitter lies on the xz-plane and is centered in 𝐜^t=(0,0,0), while the receiver is centered in 𝐜^r=(x_o,y_o,z_o). Hence, the distance between the center-points of the antenna arrays is |𝐜^r - 𝐜^t| = |𝐜_o| = √(x_o^2 + y_o^2 + z_o^2) . The positions of the l-th transmit antenna element and the m-th receive antenna element are denoted by 𝐫_l^t = (x_l^t,y_l^t,z_l^t) + 𝐜^t and 𝐫_m^r = (x_m^r,y_m^r,z_m^r) + 𝐜^r, where a_l^t and a_m^r are the local coordinates at the transmitter and receiver, respectively, for the a-axis with a = {x,y,z}. For uniform arrays, the antenna positions can be written explicitly as 𝐫_l^t = (l_1 δ_1^t , 0, l_2 δ_2^t) and 𝐫_m^r = ( Δ x_0 + δ_1^r m_1 cos α, Δ y_0 + δ_1^r m_1 sin α + δ_2^r m_2 sin β, Δ z_0 + δ_2^r m_2 cos β), where l = (l_1-1)L_1+l_2 and m = (m_1-1)M_1+m_2, with l_1 (m_1) and l_2 (m_2) denoting the indices of the transmit (receive) array along the first and the second principal directions, respectively. The received signal 𝐲∈ℂ^M × 1 can be formulated as follows: 𝐲 = 𝐇𝐱 + 𝐧 where 𝐧 is the additive white Gaussian noise at the receiver, with 𝐧∼𝒞𝒩(0, σ^2𝐈_M), 𝐱∈ℂ^L × 1 is the transmitted signal, and 𝐇∈ℂ^M × L is the channel matrix from the multi-antenna transmitter to the multi-antenna receiver. According to the considered system model, the channel capacity is given as follows <cit.>: C = ∑_i=1^R log_2 (1 + P_i λ_i/σ^2) where λ_i is the i-th largest eigenvalue of 𝐆 = 𝐇^*𝐇, R ≤ L is the rank of 𝐆, and P_i is the power allocated to the i-th communication mode. The power allocation is subject to the constraint ∑_i=1^R P_i = P_T, where P_T is the maximum power budget at the transmitter. In general, only some of the available communication modes have a significant coupling intensity λ_i, and hence are valuable for communication. Consequently, the optimal power allocation policy, i.e., the waterfilling algorithm <cit.>, allocates little or no power to the weakly-coupled modes. 
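To make the role of the coupling intensities concrete, the following minimal Python sketch evaluates the capacity expression above for a generic channel matrix, using the standard water-filling allocation over the eigenvalues of 𝐆 = 𝐇^*𝐇. The function names and the toy random channel are our own illustrative choices and are not taken from the referenced works; the sketch only assumes the signal model 𝐲 = 𝐇𝐱 + 𝐧 introduced above.

```python
import numpy as np

def waterfilling(lam, p_total, sigma2):
    """Water-filling power allocation over eigenmodes with coupling intensities lam."""
    lam = np.sort(lam[lam > 1e-12])[::-1]              # keep the R non-zero modes, strongest first
    for k in range(len(lam), 0, -1):                   # try the k strongest modes
        mu = (p_total + np.sum(sigma2 / lam[:k])) / k  # water level
        p = mu - sigma2 / lam[:k]
        if p[-1] >= 0:                                 # weakest active mode still receives power
            powers = np.zeros(len(lam))
            powers[:k] = p
            return lam, powers
    return lam, np.zeros(len(lam))

def los_mimo_capacity(H, p_total=1.0, sigma2=1e-2):
    """Capacity C = sum_i log2(1 + P_i lam_i / sigma2) of the model y = H x + n."""
    lam = np.linalg.eigvalsh(H.conj().T @ H)           # eigenvalues of G = H^H H
    lam, powers = waterfilling(lam, p_total, sigma2)
    return np.sum(np.log2(1.0 + powers * lam / sigma2))

# toy example: a random 8 x 4 channel (L = 4 transmit, M = 8 receive antennas)
rng = np.random.default_rng(0)
H = (rng.standard_normal((8, 4)) + 1j * rng.standard_normal((8, 4))) / np.sqrt(2)
print(los_mimo_capacity(H))
```

Whenever all non-zero eigenvalues are equal, the same routine reduces to the equal-power allocation P_i = P_T/R, consistently with the high SNR discussion that follows.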
A metric to measure the number of effective communication modes is the effective rank <cit.>. This metric, which is denoted by N_eff, is bounded by N_eff∈ [1, R]. The effective rank attains the upper bound R when all the eigenvalues λ_i are equal, and it attains the lower bound when only one eigenvalue is non-zero. In the high SNR regime, which is typical in LoS conditions, as considered in this paper, the capacity is maximized when the rank of 𝐆 is R=L and the R eigenvalues have the same magnitude, which implies that N_eff = L and the equipower allocation policy is optimal, i.e., P_i = P_T/L. This condition is ensured when 𝐇 is an orthogonal matrix, and hence 𝐆 = λ_0 𝐈_L with λ_0 being the magnitude of all the eigenvalues. Accordingly, we need to ensure that the antenna elements of the arrays are placed at locations that fulfill the following condition: 𝐆(u,v) = ∑_m=1^M [ 𝐇(m,u)]^* 𝐇(m,v) = 0 ∀ u ≠ v = 1, 2 …, L which ensures that the matrix 𝐇 is, by definition, orthogonal. The effective rank is an appropriate figure of merit to estimate the rank of 𝐆 and to evaluate the similarity among the eigenvalues of 𝐆, since it attains its upper bound when all the eigenvalues are equal. Hence, it can be utilized as the metric to verify the accomplishment of (<ref>), by means of numerical simulations. Next, we devise analytical expressions for ensuring that (<ref>) is fulfilled and, in Section <ref>, we validate them against the effective rank. As far as the channel model is concerned, we assume a free-space LoS propagation channel. Accordingly, the link from the l-th antenna of the MIMO transmitter to the m-th antenna of the MIMO receiver can be modeled as <cit.> 𝐇(m,l) = e^j k_0 |𝐫_m^r - 𝐫_l^t|/4π |𝐫_m^r - 𝐫_l^t| where k_0 = 2 π/λ and |𝐫_m^r - 𝐫_l^t| is the distance between the antenna elements. §.§ Paraxial and Non-Paraxial Settings According to (<ref>), the signal emitted by the antenna array at the transmitter has a spherical wavefront. When the physical apertures of the transmitter and receiver, the signal wavelength, and the transmission distance between their center-points fulfill the Fraunhofer far-field condition, the MIMO system operates in the far field, and the wavefront of the received signal can be well approximated as planar. This implies that the columns of the channel matrix 𝐇 are correlated, and that, regardless of the arrangements of the antenna elements, 𝐆 has a unit rank, i.e., N_eff = 1. If the antenna arrays are located closer than the Fraunhofer far-field distance, the planar wavefront approximation is not accurate anymore. However, there exists a region where the apertures of the antenna arrays are still small compared with the distance between their center-points but the wavefront of the received signal is spherical. This occurs when the size of the antenna arrays is sufficiently small as compared with the distance between their center points. In mathematical terms, this condition can be formulated as follows: x^t_l, y^t_l, z^t_l, x^r_m, y^r_m, z^r_m ≪ |𝐜_o| . This deployment scenario is illustrated in Fig. <ref>(a), and it is referred to as the paraxial setting. Figure <ref>(a) shows the antenna arrays in the broadside configuration, but the condition in (<ref>) is independent of the tilt and rotation of the antenna arrays with respect to one another. In the paraxial setting, the channel model in (<ref>) can be simplified to make it more tractable, as detailed in <cit.>. 
Therein, the authors utilize an approach that consists of (i) changing the system of coordinates so that the centers of the two antenna arrays are aligned along the axis connecting their center-points and (ii) projecting the arrays onto the plane that is perpendicular to the axis that connects their center-points. By utilizing this change of reference system, the conventional parabolic approximation for spherical wavefronts can be applied and is sufficiently accurate <cit.>. In <cit.>, the authors propose an alternative approach based on a quartic approximation for the spherical wavefront. This approach is independent of the system of coordinates being considered. It utilizes a simple parametrization to represent the antenna arrays, which is not based on projections of the antenna arrays, and that leads to a more insightful problem formulation and understanding of the obtained analytical framework. The methods of analysis proposed in <cit.> and in <cit.> are equivalent in the paraxial setting. This is further elaborated in Section <ref>. By contrast, when the antenna arrays are not in the paraxial setting, i.e., (<ref>) is not fulfilled, none of the two approaches can be applied as originally reported in <cit.> and <cit.>. An example of non-paraxial setting is shown in Fig. <ref>(b). The large antenna array may represent a distributed multi-antenna base station that communicates with a multi-antenna user equipment, when the size of the base station and user equipment are much larger and much smaller than the distance between their center-points |𝐜_o|, respectively (uplink). We see next that the optimal placements for the antennas of the base station have inter-distances larger than half of the wavelength. While none of the approaches reported in <cit.> and <cit.> can be applied directly, the method based on the quartic approximation proposed in <cit.> can be generalized for analyzing the non-paraxial setting, since it is based on an independent system of coordinates without the need for applying projections. Specifically, we tackle the problem at hand by partitioning the large antenna array in Fig. <ref>(b) into smaller sub-arrays. Each sub-array is chosen to be small enough for ensuring that the paraxial approximation holds true in a system of coordinates that is common to all the sub-arrays. Under these assumptions, the quartic approximation method can be applied to the channel between the multi-antenna transmitter and each sub-array of the multi-antenna receiver. On the other hand, the approach based on projections cannot be directly applied to the proposed method of analysis based on sub-arrays, as it is not possible to align a single system of coordinates with the many axis that connect the center of each receiving sub-array with the center of the multi-antenna transmitter. It is worth mentioning that the notion of array of sub-arrays can be found in the literature <cit.>, but the proposed approach is different, since (i) we utilize the decomposition in sub-arrays for modeling the non-paraxial setting and (ii) we optimize the locations of the antenna elements in each sub-array. In <cit.>, the paraxial setting is analyzed and the antenna elements in each sub-array are spaced by half of the wavelength. In Section <ref>, we analyze the conditions for achieving channel orthogonality when both the transmitter and the receiver are deployed in the paraxial setting, by using the quartic approximation. 
In Section <ref>, we derive the conditions for ensuring channel orthogonality when the receiver is large enough that the paraxial approximation is not fulfilled anymore, by combining the quartic approximation with the sub-array partitioning approach. § PARAXIAL SETTING In this section, we identify the conditions to make the channel matrix 𝐇 orthogonal, i.e., to fulfill (<ref>), in the case of paraxial setting. First, we introduce the channel model based on the paraxial approximation and then we analyze the orthogonality condition in (<ref>). §.§ Paraxial Channel Model Under the assumption in (<ref>), the amplitude of (<ref>) changes slowly and it can be approximated as a constant over the MIMO receiver, i.e., |𝐫_m^r - 𝐫_l^t| ≈ |𝐜_o|. By contrast, the phase in (<ref>) is very sensitive to the variations of |𝐫_m^r - 𝐫_l^t|. By denoting ∑f( a ) = f( x ) + f( y ) + f( z ), the distance |𝐫_m^r - 𝐫_l^t| in the phase term can be approximated as |𝐫_m^r - 𝐫_l^t| = √(∑ (a^r_m + a_o - a^t_l)^2) = |𝐜_o| √(1 + ρ(𝐫_l^t, 𝐫_m^r)/|𝐜_o|^2) ≈ |𝐜_o| [1 + ρ(𝐫_l^t, 𝐫_m^r)/2|𝐜_o|^2 - ρ^2 (𝐫_l^t, 𝐫_m^r)/8|𝐜_o|^4] where ρ(𝐫_l^t, 𝐫_m^r) = 2∑ a_o (a^r_m - a^t_l) + ∑ (a^r_m - a^t_l)^2 ≈ 2∑ a_o (a^r_m - a^t_l) . The approximation in (<ref>) stems from Taylor's approximation √(1+t)≈ 1 + t/2 - t^2/8. Also, the approximation in (<ref>) can be applied when 2∑ a_o (a^r_m - a^t_l) ≫∑ (a^r_m - a^t_l)^2, i.e., when the misalignment between the centers of the arrays is larger than the size of the arrays. Hence, (<ref>) is suitable for characterizing non-broadside settings, but it can not be applied in broadside settings, given that |a_o| ≪ 1, for at least two Cartesian coordinates, in this latter case. Compared with the typical parabolic approximation utilized in the paraxial setting <cit.>, the proposed approximation in (<ref>) utilizes an additional term of the Taylor expansion. This is because, as mentioned, the system of coordinates does not coincide with the axis connecting the center-points of the antenna arrays and no projections onto this axis are applied. More specifically, the first-order term in (<ref>) is dominant in broadside deployments while the second-order term needs to be added in non-broadside deployments. For this reason, the exact formulation of ρ(𝐫_l^t, 𝐫_m^r) in (<ref>) needs to be utilized to compute the first-order term in (<ref>), while the approximated formulation in (<ref>) is sufficient to compute the second-order term in (<ref>). In mathematical terms, (<ref>) can be approximated as follows: |𝐫_m^r - 𝐫_l^t| ≈ |𝐜_o| [1 + ρ(𝐫_l^t, 𝐫_m^r)/2|𝐜_o|^2 - ρ^2 (𝐫_l^t, 𝐫_m^r)/8|𝐜_o|^4] ≈ |𝐜_o| [1 + 2∑ a_o (a^r_m - a^t_l) + ∑ (a^r_m - a^t_l)^2/2|𝐜_o|^2. . - (∑ a_o (a^r_m - a^t_l))^2/2|𝐜_o|^4] . The modeling approach in (<ref>) is referred to as the quartic approximation for the wavefront in <cit.>. Based on the quartic approximation, the channel matrix in (<ref>) can be expressed as follows: 𝐇≈1/4π|𝐜_o|𝐅_RX𝐏𝐅_TX^* where 𝐅_TX∈ℂ^L × L and 𝐅_RX∈ℂ^M × M are diagonal matrices and 𝐏∈ℂ^M × L is a non-diagonal matrix, which are defined as follows: 𝐅_TX(l,l) = exp{ j k_0/2|𝐜_o|[ (x^t_l)^2 + (y^t_l)^2 + (z^t_l)^2 - 2 x_o x^t_l . . - 2 y_o y^t_l - 2 z_o z^t_l - (x_o x^t_l + z_o z^t_l)^2/|𝐜_o|^2]} 𝐅_RX(m,m) = exp{ j k_0/2|𝐜_o|[ (x^r_m)^2 + (y^r_m)^2 + (z^r_m)^2 + 2 x_o x^r_m . . + 2 y_o y^r_m + 2 z_o z^r_m - (x_o x^r_m + z_o z^r_m)^2/|𝐜_o|^2]} 𝐏(m,l) = exp{ -j k_0/|𝐜_o|[ x^r_m x^t_l + y^r_m y^t_l + z^r_m z^t_l . . - (x_o x^r_m + y_o y^r_m + z_o z^r_m ) (x_o x^t_l + y_o y^t_l + z_o z^t_l) /|𝐜_o|^2]} . 
and the off-diagonal elements of 𝐅_TX and 𝐅_RX are equal to zero, as they are diagonal matrices. §.§ Channel Orthogonality According to (<ref>), the matrix 𝐆 can be written as follows: 𝐆 = 1/(4π|𝐜_o|)^2 [ 𝐅_RX𝐏𝐅_TX^*]^* 𝐅_RX𝐏𝐅_TX^* . Given that 𝐅_RX^*𝐅_RX = 𝐈_M, (<ref>) can be simplified as 𝐆 = 1/(4π|𝐜_o|)^2 𝐅_TX𝐏^* 𝐏𝐅_TX^* . Since 𝐅_TX is a diagonal (and unitary) matrix, it does not have any impact on the diagonalization of 𝐆. Therefore, the orthogonality condition in (<ref>) can be expressed as follows: ∑_m=1^M [ 𝐏(m,u)]^* 𝐏(m,v) = 0 ∀ u ≠ v where u,v = 1, 2, …, L denote two generic antenna elements of the MIMO transmitter. To proceed further, we consider the following parametrization for the u-th and v-th antenna elements of the array at the transmitter, and for the m-th antenna element of the array at the receiver, respectively (see Fig. <ref>): 𝐫_l^t = [(l_1 - (L_1-1)/2) δ_1^t , 0 , (l_2 - (L_2-1)/2) δ_2^t] and 𝐫_m^r = [ δ_1^r (m_1 - (M_1-1)/2) cos α - δ_2^r (m_2 - (M_2-1)/2) sin β sin α, δ_1^r (m_1 - (M_1-1)/2) sin α + δ_2^r (m_2 - (M_2-1)/2) sin β cos α, δ_2^r (m_2 - (M_2-1)/2) cos β] + 𝐜_o, or, in compact form, 𝐫_u^t = (u_1^c δ_1^t , 0 , u_2^c δ_2^t), 𝐫_v^t = (v_1^c δ_1^t , 0 , v_2^c δ_2^t) and 𝐫_m^r = ( δ_1^r m_1^c cos α - δ_2^r m_2^c sin β sin α, δ_1^r m_1^c sin α + δ_2^r m_2^c sin β cos α, δ_2^r m_2^c cos β) + 𝐜_o with u_a^c = u_a - (L_a-1)/2, v_a^c = v_a - (L_a-1)/2, m_a^c = m_a - (M_a-1)/2, where u = (u_1-1)L_1+u_2, v = (v_1-1)L_1+v_2, and m = (m_1-1)M_1+m_2, with u_a = 1, 2, …, L_a and v_a = 1, 2, …, L_a for a = {1,2} denoting the indices of the u-th and v-th transmit antenna elements along the first (a=1) and second (a=2) principal directions, respectively. Moreover, m_1 = 1, 2, …, M_1 and m_2 = 1, 2, …, M_2 denote the indices of the m-th receive antenna element along the first and second principal directions, respectively. Also, δ_1^t (δ_1^r) and δ_2^t (δ_2^r) denote the inter-distances between the antenna elements in the first and second principal directions, respectively. Lastly, the angles α and β denote a rotation and a tilt with respect to the x-axis and the z-axis, respectively. Based on the considered parametrization, the positions of the antenna elements within the arrays are determined by their inter-distances. Since the inter-distance is kept fixed for all the elements, the resulting design corresponds to a uniform array. Applying the parametrization in (<ref>) and (<ref>) to the matrix 𝐏, it can be rewritten as 𝐏(m,u) = exp{- j (k_0/|𝐜_o|)[ (τ_11 δ_1^t u_1^c + τ_12 δ_2^t u_2^c) δ_1^r m_1^c + (τ_21 δ_1^t u_1^c + τ_22 δ_2^t u_2^c) δ_2^r m_2^c ] } where τ_11 = cos α - x_o τ_1, τ_12 = - z_o τ_1, τ_21 = - sin β sin α - x_o τ_2, τ_22 = cos β - z_o τ_2, with τ_1 = (x_o cos α + y_o sin α)|𝐜_o|^-2 and τ_2 = (-x_o sin β sin α + y_o sin β cos α + z_o cos β)|𝐜_o|^-2. Inserting (<ref>) in (<ref>), we obtain the following expression for ensuring the orthogonality: ∑_m_1 = 1^M_1 ∑_m_2 = 1^M_2 exp{- j (k_0/|𝐜_o|) {[ τ_11 δ_1^t (u_1 - v_1) + τ_12 δ_2^t (u_2 - v_2) ] δ_1^r (m_1 - (M_1-1)/2) + [τ_21 δ_1^t (u_1 - v_1) + τ_22 δ_2^t (u_2 - v_2)] δ_2^r (m_2 - (M_2-1)/2) }} = 0 ∀ (u_1, u_2) ≠ (v_1, v_2) . The expression in (<ref>) can be simplified by using the geometric sum formula in <cit.>, which results in the following orthogonality condition for all (u_1, u_2) ≠ (v_1, v_2): sin[ π (γ_11(u_1 - v_1) + γ_12(u_2 - v_2)) ]/sin[(π/M_1) (γ_11(u_1 - v_1) + γ_12(u_2 - v_2)) ] × sin[ π (γ_21(u_1 - v_1) + γ_22(u_2 - v_2)) ]/sin[(π/M_2) (γ_21(u_1 - v_1) + γ_22(u_2 - v_2) ) ] = 0 where γ_ab = τ_ab M_b δ_b^r δ_a^t/(λ |𝐜_o|) ∀ a,b = {1,2} . 
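As a quick numerical sanity check of the condition just derived, the following sketch evaluates the product of the two sine ratios for all index pairs (u_1, u_2) ≠ (v_1, v_2), given the four coefficients γ_ab. It is an illustrative Python fragment with our own naming; the only additional ingredient is the limiting value M cos(πg)/cos(πg/M), used when the denominator sin(πg/M) vanishes.

```python
import numpy as np

def dirichlet_ratio(g, M):
    """sin(pi g) / sin(pi g / M), with the limiting value M cos(pi g)/cos(pi g / M)
    used when the denominator vanishes (g equal to a multiple of M)."""
    num, den = np.sin(np.pi * g), np.sin(np.pi * g / M)
    if abs(den) < 1e-12:
        return M * np.cos(np.pi * g) / np.cos(np.pi * g / M)
    return num / den

def worst_residual(gammas, L1, L2, M1, M2):
    """Largest magnitude of the orthogonality condition over all (u1,u2) != (v1,v2);
    a value close to zero indicates an orthogonal (full-rank) LoS MIMO design."""
    (g11, g12), (g21, g22) = gammas
    worst = 0.0
    for d1 in range(-(L1 - 1), L1):          # d1 = u1 - v1
        for d2 in range(-(L2 - 1), L2):      # d2 = u2 - v2
            if d1 == 0 and d2 == 0:
                continue
            r = dirichlet_ratio(g11 * d1 + g12 * d2, M1) * \
                dirichlet_ratio(g21 * d1 + g22 * d2, M2)
            worst = max(worst, abs(r))
    return worst

# example: gamma_12 = gamma_21 = 0 and |gamma_11| = |gamma_22| = 1 (orthogonal design)
print(worst_residual(((1.0, 0.0), (0.0, 1.0)), L1=4, L2=4, M1=4, M2=4))   # ~0
# example: a half-integer gamma_11 breaks the condition
print(worst_residual(((0.5, 0.0), (0.0, 1.0)), L1=4, L2=4, M1=4, M2=4))   # > 0
```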
In agreement with <cit.>, we evince that the orthogonality condition presented in (<ref>) admits a solution provided that at least one γ_ab is equal to zero. This implies that it is not necessary to ensure the channel separability, i.e., to express 𝐇 as the product of the channel matrices for each axis, in order to achieve the desired orthogonality condition. §.§ Novelty and Insights from the Orthogonality Condition in (25) Compared with the orthogonality condition obtained in <cit.>, which is expressed in terms of projections of the antenna arrays onto the planes that are perpendicular to the axis connecting their center-points, the condition in (<ref>) is expressed in terms of parameters that are independent of the system of coordinates. Therefore, the obtained expression is simpler to interpret and to use. If, for example, we wish to analyze the impact of translating the antenna array at the receiver by using the approach proposed in <cit.>, we would need to recompute the projections of both antenna arrays onto the plane perpendicular to the new system of coordinates. With the proposed formulation in (<ref>), this is not needed, and the impact of a translation between the antenna arrays is apparent from the considered parametrization. In addition, it is easier to identify the setups of parameters for which the orthogonality is achieved. Specifically, a close inspection of (<ref>) reveals that the orthogonality condition in (<ref>) can be formulated explicitly when either τ_12 = 0 or τ_21 = 0. This is elaborated next. It is pertinent to note that the conditions τ_11 = 0 or τ_22 = 0 lead to network deployments that either are of less practical relevance or are equivalent to the network deployments corresponding to τ_12 = 0 or τ_21 = 0. Therefore, these case studies are not discussed in this paper. To obtain simple orthogonality criteria, let us further analyze (<ref>). Let us assume that there is a γ_ab = 0 for a ≠ b. The function sin[πγ_aa(u_a-v_a)]/sin[(π/M_a)γ_aa(u_a-v_a)] is periodic with period M_a/γ_aa, and it has zeros at (u_a - v_a) = n/γ_aa for n = 1, 2, …, M_a-1. Then, we can ensure that the condition in (<ref>) is fulfilled, for (u_a - v_a) ≠ 0, when |γ_aa| = 1 and M_a ≥ L_a. When (u_a - v_a) = 0, on the other hand, the orthogonality condition in (<ref>) is fulfilled by configuring the array such that |γ_bb| = 1 and M_b ≥ L_b. In summary, if the arrays are in a setup in which the orthogonality is possible, i.e., at least one γ_ab can be set equal to zero, the following conditions need to hold simultaneously for ensuring that the channel orthogonality along the first and second principal directions of the antenna arrays is fulfilled: δ_1^r δ_1^t = λ |𝐜_o|/(M_1|τ_11|) , M_1 ≥ L_1 δ_2^r δ_2^t = λ |𝐜_o|/(M_2|τ_22|) , M_2 ≥ L_2 . It is worth mentioning that the orthogonality condition in (<ref>) can be ensured by setting |γ_11| = n_1 and |γ_22| = n_2 for any positive integers n_1 and n_2. However, this results in large antenna arrays for which the paraxial approximation may not hold anymore. These setups are, therefore, of less interest from a practical point of view. In the following, to gain further insights into the explicit orthogonality conditions in (<ref>), we analyze the case studies in which the antenna arrays at the transmitter and receiver are aligned either along the z-axis or along the x-axis. §.§.§ Alignment along the z-axis Let us assume that the antenna arrays at the transmitter and receiver are aligned along the z-axis, i.e., z_o = 0. 
Then, τ_ab can be simplified as follows: τ_11 = cos α - x_o (x_o cos α + y_o sin α)/|𝐜_o|^2 τ_12 = 0 τ_21 = - sin β sin α - x_o (-x_o sin β sin α + y_o sin β cos α)/|𝐜_o|^2 τ_22 = cos β Based on the obtained conditions, we note that τ_12 = 0. This implies that the matrix 𝐇 can always be made orthogonal in this deployment. The inter-distances ensuring the orthogonality condition are those obtained by inserting the obtained τ_11 and τ_22 into (<ref>). Notably, we observe that τ_11 and τ_22 attain their maximum values when the two arrays are deployed in broadside, and hence, in this setup, the size of the antenna arrays is the smallest according to (<ref>). §.§.§ Alignment along the x-axis Let us assume that the antenna arrays at the transmitter and receiver are aligned along the x-axis, i.e., x_o = 0. Then, τ_ab can be simplified as follows: τ_11 = cos α τ_12 = - z_o y_o sin α |𝐜_o|^-2 τ_21 = - sin β sin α τ_22 = cos β - z_o (y_o sin β cos α + z_o cos β)|𝐜_o|^-2. In this deployment scenario, none of the obtained τ_ab (or equivalently γ_ab) is equal to zero regardless of the considered system parameters. This asymmetry between the case studies in which the antenna arrays are aligned along the z-axis and x-axis is only due to the considered parametrization, which is formulated by first applying the tilt β and then the rotation α. By inverting these operations, the conditions obtained when the antenna arrays are aligned along the z-axis and the x-axis are swapped. Based on the obtained expressions for τ_ab, the orthogonality conditions can be ensured, for example, by setting α = 0, i.e., τ_12 = 0 and τ_21 = 0. In summary, the main contributions of this section can be summarized as follows: * We have obtained explicit expressions for the optimal design of the antenna arrays in LoS MIMO channels, assuming the paraxial setting. The expressions are given in (<ref>), and they need to be fulfilled simultaneously. * We have analyzed two case studies and have provided closed-form expressions for the inter-distances among the antenna elements of the arrays in order to ensure that the LoS MIMO channel has full rank. * The obtained orthogonality conditions are formulated in an explicit manner and are simpler to interpret, for several network deployments of interest, as compared with the analysis carried out in <cit.>. This follows by comparing (<ref>) with <cit.>, since the parameters γ_ab in (<ref>) are formulated explicitly as a function of the system parameters. § NON-PARAXIAL SETTING Direct inspection of the orthogonality conditions in (<ref>) shows that the paraxial approximation in (<ref>) may not always be fulfilled. If, for example, the inter-distances at the multi-antenna transmitter are δ^t_1 = δ^t_2 = λ/2, as in a typical user equipment, the inter-distances at the multi-antenna receiver need to be very large for ensuring that the orthogonality condition is fulfilled, assuming that (<ref>) is still valid for the resulting LoS MIMO channel. Thus, the non-paraxial setting is a relevant case study, especially if one of the two antenna arrays has a compact size. In this section, we identify the conditions to make the channel matrix 𝐇 orthogonal, i.e., to fulfill the orthogonality condition in (<ref>) in a non-paraxial setting. First, we introduce the channel model in the non-paraxial setting, and then we analyze the orthogonality condition in (<ref>). 
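Before moving to the non-paraxial setting, the paraxial design rule just obtained can be checked numerically against the exact channel model. The sketch below does so for the simplest broadside deployment, i.e., α = β = 0 and 𝐜_o = (0, y_o, 0), for which τ_11 = τ_22 = 1; the array-builder helpers, the entropy-based implementation of the effective rank, and the numerical values are our own illustrative choices.

```python
import numpy as np

def upa(n1, n2, d1, d2):
    """n1 x n2 planar array in the xz-plane, centred at the origin (local coordinates)."""
    i1 = np.arange(n1) - (n1 - 1) / 2
    i2 = np.arange(n2) - (n2 - 1) / 2
    g1, g2 = np.meshgrid(i1, i2, indexing="ij")
    return np.stack([(g1 * d1).ravel(), np.zeros(n1 * n2), (g2 * d2).ravel()], axis=-1)

def los_channel(tx, rx, lam):
    """Exact free-space LoS channel: H[m, l] = exp(j k0 d_ml) / (4 pi d_ml)."""
    k0 = 2 * np.pi / lam
    d = np.linalg.norm(rx[:, None, :] - tx[None, :, :], axis=-1)
    return np.exp(1j * k0 * d) / (4 * np.pi * d)

def effective_rank(H):
    """Entropy-based effective rank: exp of the entropy of the normalized singular values."""
    s = np.linalg.svd(H, compute_uv=False)
    p = s / s.sum()
    p = p[p > 1e-15]
    return float(np.exp(-np.sum(p * np.log(p))))

# broadside deployment: alpha = beta = 0, c_o = (0, y_o, 0), so tau_11 = tau_22 = 1
lam, y_o = 1.07e-2, 256 * 1.07e-2           # ~28 GHz, |c_o| = 256 wavelengths
L1 = L2 = M1 = M2 = 4
dt1 = dt2 = 4 * lam                         # transmit spacings (kept fixed)
dr1 = lam * y_o / (M1 * dt1)                # receive spacings from the paraxial design rule
dr2 = lam * y_o / (M2 * dt2)

tx = upa(L1, L2, dt1, dt2)
rx = upa(M1, M2, dr1, dr2) + np.array([0.0, y_o, 0.0])
print(effective_rank(los_channel(tx, rx, lam)))   # close to L = 16 when the paraxial design holds
```

Shrinking y_o or enlarging the arrays eventually violates the paraxial assumption, in which case the printed effective rank is expected to drop below L, consistently with the degradation discussed in the numerical results.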
§.§ Non-Paraxial Channel Model When the paraxial condition in (<ref>) is not fulfilled, we capitalize on the approach introduced in Section <ref>, which combines the quartic approximation with the sub-array partitioning method. Specifically, the large antenna array at the receiver is partitioned into N^r sub-arrays. The i-th sub-array is centered in 𝐜^r,i = (x_o^i, y_o^i, z_o^i) and it has M^i = M_1^i M_2^i antenna elements, where M_1^i denotes the number of antenna elements in the first principal direction and M_2^i denotes the numbers of antenna elements in the second principal direction. The position of the m^i-th antenna element is denoted by 𝐫^r,i_m^i = (x_m^i^r,i,y_m^i^r,i,z_m^i^r,i) + 𝐜^r,i. Based on the partition in sub-arrays, the channel matrix 𝐇 can be rewritten as follows: 𝐇 = [ 𝐇^1 𝐇^2 𝐇^N^r ]^T where 𝐇^i∈ℂ^M^i × L is the channel matrix between the antenna array at the transmitter and the i-th antenna sub-array at the receiver. The size of the sub-arrays is chosen to ensure that the paraxial approximation can be applied to each sub-array, which implies the following: x^t_l, y^t_l, z^t_l, x_m^i^r,i,y_m^i^r,i,z_m^i^r,i≪ |𝐜_o^i| where |𝐜_o^i| = |𝐜^r,i - 𝐜^t|. For clarity, the channel matrix 𝐇 in (<ref>) is denoted by 𝐇^Large. Accordingly, the quartic approximation can be applied to each sub-array, by considering a single system of coordinates for all the sub-arrays. Based on (<ref>), and by using the same line of thought as for (<ref>), the quartic approximation for 𝐇^i can be formulated as follows: 𝐇^i ≈1/4π|𝐜_o^i|𝐅_RX^i 𝐏^i [ 𝐅_TX^i]^* where 𝐅_TX^i ∈ℂ^L × L, 𝐅_RX^i ∈ℂ^M^i × M^i, and 𝐏^i ∈ℂ^M^i × L are defined as follows: 𝐅_TX^i(l,l) = exp{ j k_0/2|𝐜_o^i|[ (x^t_l)^2 + (y^t_l)^2 + (z^t_l)^2 - 2 x_o^i x^t_l . . - 2 y_o^i y^t_l - 2 z_o^i z^t_l - (x_o^i x^t_l + z_o^i z^t_l)^2/|𝐜_o^i|^2]} 𝐅_RX^i(m^i,m^i) = exp{ j k_0/2|𝐜_o^i|[ (x^r,i_m^i)^2 + (y^r,i_m^i)^2 + (z^r,i_m^i)^2 . . + 2 x_o^i x^r,i_m^i + 2 y_o^i y^r,i_m^i + 2 z_o^i z^r,i_m^i - (x_o^i x^r,i_m^i + z_o^i z^r,i_m^i)^2/|𝐜_o^i|^2]} 𝐏^i(m^i,l) = exp{ -j k_0/|𝐜_o^i|[ x^r,i_m^i x^t_l + y^r,i_m^i y^t_l + z^r,i_m^i z^t_l . . - (x_o^i x^r,i_m^i + y_o^i y^r,i_m^i + z_o^i z^r,i_m^i ) (x_o^i x^t_l + y_o^i y^t_l + z_o^i z^t_l) /|𝐜_o^i|^2]} and the off-diagonal elements of 𝐅_TX^i and 𝐅_RX^i are equal to zero, i.e., they are diagonal matrices. §.§ Channel Orthogonality To identify the orthogonality conditions in the non-paraxial setting, we need to analyze the matrix 𝐆^Large = (𝐇^Large)^* 𝐇^Large and to impose the equality in (<ref>). By inserting (<ref>) in (<ref>), we obtain the following: ∑_i = 1^N^r∑_m^i=1^M^i[𝐇^i(m^i,u)]^* 𝐇^i (m^i, v) = 0 ∀ u ≠ v . Furthermore, by inserting (<ref>) in (<ref>), we obtain ∑_i = 1^N^r ∑_m^i=1^M^i[1/4π|𝐜_o^i|𝐅_RX^i( m^i,m^i) 𝐏^i(m^i,u) [ 𝐅_TX^i(u,u) ]^* ]^* ×[1/4π|𝐜_o^i|𝐅_RX^i( m^i,m^i) 𝐏^i(m^i,v) [ 𝐅_TX^i(v,v) ]^*] = 0 ∀ u ≠ v . Since [𝐅_RX^i( m^i,m^i) ]^* 𝐅_RX^i( m^i,m^i) = 1 by definition, (<ref>) simplifies to ∑_i = 1^N^r 1/|𝐜_o^i|^2𝐅_TX^i(u,u) [ 𝐅_TX^i(v,v) ]^* ×∑_m^i=1^M^i[ 𝐏^i(m^i,u) ]^* [𝐏^i(m^i,v) ] = 0 ∀ u ≠ v . In order to identify simple design criteria for ensuring the orthogonality of the LoS MIMO channel matrix, (<ref>) needs to be simplified. To this end, we introduce the diagonal matrix 𝐅̅_Tx∈ℂ^L× L, whose diagonal entries are defined as follows: 𝐅̅_Tx(u,u) = exp{ j k_o/2|𝐜_o| [ (x^t_u)^2 + (y^t_u)^2 + (z^t_u)^2 . .- (x_o x^t_u + z_o z^t_u)^2/|𝐜_o|^2]} . 
Since, by definition, 𝐅̅_Tx[ 𝐅̅_Tx]^* = 𝐈_L, (<ref>) can be rewritten as follows: ∑_i = 1^N^r1/|𝐜_o^i|^2 [𝐅_TX^i(u,u) [𝐅̅_Tx(u,u)]^*] [ 𝐅_TX^i(v,v) [𝐅̅_Tx(v,v)]^*]^* ×∑_m^i=1^M^i[ 𝐏^i(m^i,u) ]^* [𝐏^i(m^i,v) ] = 0 ∀ u ≠ v . Let us analyze the product 𝐅_TX^i(u,u) [𝐅̅_Tx(u,u)]^* = exp{jk_oΦ _TX^i( u,u)}, where Φ _TX^i( u,u) is defined as follows: Φ _TX^i( u,u) = Ψ _TX^i( u,u) + Δ _TX^i( u,u) - Δ _TX^o( u,u) ≈Ψ _TX^i( u,u) with Ψ _TX^i( u,u) = -1/2|𝐜_o^i|[ x_o^i x^t_u + y_o^i y^t_u + z_o^i z^t_u ] Δ _TX^i( u,u) = 1/2|𝐜_o^i|[ (x^t_u)^2 + (y^t_u)^2 + (z^t_u)^2 - (x_o^i x^t_u + z_o^i z^t_u)^2/|𝐜_o^i|^2] Δ _TX^o( u,u) = 1/2|𝐜_o|[ (x^t_u)^2 + (y^t_u)^2 + (z^t_u)^2 - (x_o x^t_u + z_o z^t_u)^2/|𝐜_o|^2] . The rationale for the approximation in (<ref>) is the following: * Sub-arrays located around the center-point of the multi-antenna receiver. In this case, it holds |𝐜_o^i| ≈ |𝐜_o|. As a result, Δ _TX^i( u,u) ≈Δ _TX^o( u,u), and, therefore, Ψ _TX^i( u,u) ≫Δ _TX^i( u,u) - Δ _TX^o( u,u). Equation (<ref>) is then a good approximation in this case. * Sub-arrays located far away from the center-point of the multi-antenna receiver. In this case, it holds |𝐜_o^i| ≫ |𝐜_o|. In Ψ _TX^i( u,u), the values of x_o^i, y_o^i and z_o^i scale similar to |𝐜_o^i|, as the latter increases. Therefore, Ψ _TX^i( u,u) does not become arbitrarily small as |𝐜_o^i| increases. In Δ _TX^i( u,u) and Δ _TX^o( u,u), x_u^t, y_u^t and z_u^t fulfill the paraxial approximation in (<ref>), depend on the transmitter, and are independent of |𝐜_o^i|. As |𝐜_o^i| increases, therefore, Δ _TX^i( u,u) tends to be very small, i.e., Δ _TX^i( u,u) ≪Ψ _TX^i( u,u). As for Δ _TX^o( u,u), we note that, e.g., ( x_o^i/| c_o^i|)x_u^t in Ψ _TX^i( u,u) is much greater than ( x_u^t)^2/| c_o^i| = ( x_u^t/| c_o|)x_u^t in Δ _TX^o( u,u), since x_o^i scales similar to |𝐜_o^i| while x_u^t/| c_o| ≪ 1 according to the paraxial approximation in (<ref>). Similar inequalities can be applied to the other addends. Therefore, Δ _TX^o( u,u) ≪Ψ _TX^i( u,u). Equation (<ref>) is then a good approximation in this case. Based on the approximation in (<ref>), the orthogonality condition in (<ref>) can be expressed as ∑_i = 1^N^r 1/|𝐜_o^i|^2exp{ -j k_0/|𝐜_o^i|[ x_o^i (x^t_u - x^t_v) . . + y_o^i (y^t_u - y^t_v) + z_o^i (z^t_u - z^t_v)]} ×∑_m^i=1^M^i[ 𝐏^i(m^i,u) ]^* [𝐏^i(m^i,v) ] = 0 ∀ u ≠ v . By comparing the obtained orthogonality condition in (<ref>) for the non-paraxial setting with the akin one in (<ref>) for the paraxial setting, we identify a major difference: The orthogonality condition in (<ref>) depends on an exponential term that originates from the matrix 𝐅_TX^i, as a result of the partitioning in sub-arrays at the multi-antenna receiver. This makes the optimal designs to realize a full-rank LoS MIMO channel in the paraxial and non-paraxial settings different. In the paraxial setting, specifically, we have proved that uniform antenna arrays, in which the inter-distances δ^r_1 and δ^r_2 are the same between all the antenna elements, are sufficient for ensuring that the matrix 𝐇 is orthogonal. In the non-paraxial setting, on the other hand, the inter-distances in each sub-array are expected to be different because of the different distances between the antenna array at the transmitter and each antenna sub-array at the receiver. For generality, the inter-distances among the antenna elements of the i-th sub-array are denoted by δ_1^r, i and δ_2^r, i along the first and second principal directions of the antenna array at the receiver, respectively. 
Accordingly, we introduce the following parametrization for the position of the m^i-th antenna element in the i-th sub-array at the receiver: 𝐫_m^i^r, i = ( δ_1^r, i m_1^c,icosα - δ_2^r, i m_2^c,isinβsinα, δ_1^r, i m_1^c,isinα. . + δ_2^r, i m_2^c,isinβcosα, δ_2^r, i m_2^c,icosβ) + 𝐜^r,i where m_1^c,i = m_1^i - (M_1^i-1)/2, m_2^c,i = m_2^i - (M_2^i-1)/2 and m^i = (m_1^i - 1) M_1^i + m_2^i, with m_1^i and m_2^i denoting the indices of the i-th sub-array along the first and second principal directions. The inner summation in (<ref>) can be computed by utilizing the same approach as in Section <ref>, which results in the following analytical expression: ∑_m^i=1^M^i [ 𝐏^i(m^i,u) ]^* [𝐏^i(m^i,v) ] =sin[ π (γ_11^i(u_1 - v_1) + γ_12^i(u_2 - v_2)) ]/sin[(π/M_1^i) (γ_11^i(u_1 - v_1) + γ_12^i (u_2 - v_2)) ] ×sin[ π (γ_21^i(u_1 - v_1) + γ_22^i(u_2 - v_2)) ]/sin[(π/M_2^i) (γ_21^i(u_1 - v_1) + γ_22^i(u_2 - v_2) ) ] where γ_ab^i = τ_ab^i M_b^i/λ |𝐜_o^i|δ_b^r,iδ_a^t ∀ a,b = {1,2} . Inserting (<ref>) and (<ref>) into (<ref>), we obtain ∑_i = 1^N^r 𝐅_TX^i( (u_1-1)L_1+u_2, (u_1-1)L_1+u_2) [ 𝐅_TX^i( (v_1-1)L_1+v_2,(v_1-1)L_1+v_2) ]^* ×1/|𝐜_o^i|^2sin[ π (γ_11^i(u_1 - v_1) + γ_12^i(u_2 - v_2)) ]/sin[(π/M_1^i) (γ_11^i(u_1 - v_1) + γ_12^i (u_2 - v_2)) ] ×sin[ π (γ_21^i(u_1 - v_1) + γ_22^i(u_2 - v_2)) ]/sin[(π/M_2^i) (γ_21^i(u_1 - v_1) + γ_22^i(u_2 - v_2) ) ] = 0 ∀ (u_1, u_2) ≠ (v_1, v_2). ∑_i = 1^N^r 1/|𝐜_o^i|^2exp{ -j π[ η_x^i(u_1 - v_1) + η_z^i (u_2 - v_2)]} ×sin[ π (γ_11^i(u_1 - v_1) + γ_12^i(u_2 - v_2)) ]/sin[(π/M_1^i) (γ_11^i(u_1 - v_1) + γ_12^i (u_2 - v_2)) ] ×sin[ π (γ_21^i(u_1 - v_1) + γ_22^i(u_2 - v_2)) ]/sin[(π/M_2^i) (γ_21^i(u_1 - v_1) + γ_22^i(u_2 - v_2) ) ] = 0 for all (u_1, u_2) ≠ (v_1, v_2), with η_x^i = 2 x_o^r,i/λ |𝐜_o^i|δ_1^t , η_z^i = 2 z_o^r,i/λ |𝐜_o^i|δ_2^t . Equation (<ref>) generalizes the orthogonality condition in (<ref>) to non-paraxial settings. In general, the optimal inter-distances that maximize the rank of the channel matrix 𝐇 can be obtained by solving (<ref>) numerically. To obtain some design insights, we consider the case study of linear arrays with broadside orientation, i.e., the two antenna arrays are parallel to one another and are faced to each other. §.§ Explicit Orthogonality Conditions for Linear Arrays with Broadside Orientation Let us consider the case study in which the antenna arrays at the transmitter and receiver are linear, i.e., M^i_2 = L_2 = 1, are parallel and are faced to each other (broadside orientation), i.e., α = 0, β = 0 and 𝐜_o = (0, y_o, 0). Thus, we have u_1 = u and v_1 = v, and (<ref>) simplifies to ∑_i = 1^N^r1/|𝐜_o^i|^2 exp{ -j πη_x^i(u - v) } ×sin[ πγ_11^i(u - v) ]/sin[(π/M_1^i) γ_11^i(u - v) ] = 0 ∀ u ≠ v . In addition, to enhance the analytical tractability and the design insights from it, we assume that the number of sub-arrays at the receiver is even, and that the centers of the sub-arrays are distributed symmetrically with respect to the yz-plane, i.e., x^r,i_o = -x^r, (N_r + 1 - i)_o. Due to the symmetry of the considered deployment, the inter-distance between the antenna elements and the number of antenna elements are the same in the i-th and the (N_r + 1 - i)-th sub-arrays i.e., γ_11^i = γ_11^N_r + 1 - i and M_1^i = M_1^N_r + 1 - i, respectively. Then, (<ref>) can be simplified to ∑_i = 1^N^r/21/|𝐜_o^i|^2 cos[π |η_x^i| (u - v)] ×sin[ πγ_11^i(u - v) ]/sin[(π/M_1^i) γ_11^i(u - v) ] = 0 ∀ u ≠ v . 
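Since the condition above generally needs to be solved numerically, one practical way to screen a candidate configuration is to assemble the partitioned receiver explicitly, form the exact channel, and inspect both the effective rank and how far 𝐆 is from a scaled identity. The sketch below does this for linear broadside arrays; the sub-array center abscissas and inter-distances shown are placeholders standing in for whatever candidate values are under test, not the optimized ones.

```python
import numpy as np

def los_channel(tx, rx, lam):
    k0 = 2 * np.pi / lam
    d = np.linalg.norm(rx[:, None, :] - tx[None, :, :], axis=-1)
    return np.exp(1j * k0 * d) / (4 * np.pi * d)

def linear_array(n, spacing, center_x, y):
    """n-element linear array along x, centred at (center_x, y, 0)."""
    x = center_x + (np.arange(n) - (n - 1) / 2) * spacing
    return np.stack([x, np.full(n, y), np.zeros(n)], axis=-1)

def partitioned_receiver(centers_x, spacings, sizes, y):
    """Stack N_r linear sub-arrays (center abscissa, spacing, number of elements each)."""
    return np.concatenate([linear_array(n, d, xc, y)
                           for xc, d, n in zip(centers_x, spacings, sizes)], axis=0)

def orthogonality_report(H):
    """Effective rank and worst off-diagonal entry of G relative to its diagonal."""
    G = H.conj().T @ H
    off = np.abs(G - np.diag(np.diag(G))).max() / np.abs(np.diag(G)).min()
    s = np.linalg.svd(H, compute_uv=False)
    p = s / s.sum()
    p = p[p > 1e-15]
    return float(np.exp(-np.sum(p * np.log(p)))), float(off)

# example: L1 = 16 half-wavelength transmitter, receiver split into 4 linear sub-arrays
lam, y_o, L1 = 1.07e-2, 256 * 1.07e-2, 16
tx = linear_array(L1, lam / 2, 0.0, 0.0)
rx = partitioned_receiver(centers_x=[-1.2, -0.4, 0.4, 1.2],   # placeholder abscissas (metres)
                          spacings=[8 * lam] * 4, sizes=[12] * 4, y=y_o)
print(orthogonality_report(los_channel(tx, rx, lam)))
```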
It is worth mentioning that a sufficient condition for fulfilling (<ref>) consists of independently optimizing the arrangements of the sub-arrays such that the condition sin[ πγ_11^i(u - v) ]/sin[(π/M_1^i) γ_11^i(u - v) ] = 0 ∀ i= 1,2,..., N^r/2 is fulfilled for every sub-array at the receiver. However, similar to Section <ref>, this condition can only be ensured if the number of antenna elements in each sub-array is at least equal to the number of antenna elements at the transmitter, i.e., M^i_1 ≥ L_1, as well as if the condition |γ_11^i| = 1 ∀ i ∈ [1,N_r/2] is satisfied. For ensuring the channel orthogonality condition and a full-rank channel matrix, therefore, this simple solution requires a number of antenna elements at the receiver that is much larger than the number of antenna elements at the transmitter. In addition, this simple approach may lead to solutions that do not satisfy the considered modeling assumptions, since the large size of the resulting sub-arrays may not fulfill the paraxial approximation condition in (<ref>), hence invalidating the application and accuracy of the proposed quartic approximation of the wavefront. A more suitable design criterion, which reduces the number of antennas at the receiver for obtaining a full-rank LoS MIMO channel matrix, can be obtained by jointly optimizing the arrangements of all the sub-arrays accordingly to (<ref>). To this end, we rewrite (<ref>) by using the identity cos(α) sin(β) = (sin(β + α) + sin(β - α))/2, as follows: ∑_i = 1^N^r/2 f^i_+(u-v) + f^i_-(u-v) = 0 ∀ u ≠ v where f^i_+(u-v) = 1/|𝐜_o^i|^2[sin[ π (γ_11^i + |η_x^i|)(u - v) ]]/sin[(π/M_1^i) γ_11^i(u - v) ] f^i_-(u-v) = 1/|𝐜_o^i|^2[sin[ π (γ_11^i - |η_x^i|)(u - v) ]]/sin[(π/M_1^i) γ_11^i(u - v) ] . Next, we demonstrate how (<ref>) can be utilized for optimizing the center-points of the sub-arrays and the inter-distances of the antenna elements in each sub-array. Also, we estimate the minimum number of antennas at the receiver for ensuring that the LoS MIMO channel matrix has a full rank. For ease of presentation, we progressively examine the cases studies for two, four and N^r sub-arrays. §.§.§ Two sub-arrays Let us assume N^r = 2. Then, (<ref>) can be simplified as follows: f^1_+(u-v) + f^1_-(u-v) = 0 ∀ u ≠ v . A sufficient condition to satisfy (<ref>) consists of imposing the following two conditions: f^i_-(u-v) = 0 for (u - v) = 1,2,...,L_1-1 f^i_+(u-v) = 0 for (u - v) = 1,2,...,L_1-1 which leads to the following sufficient orthogonality conditions: (γ_11^1 - |η_x^1|) = 0, (γ_11^1 + |η_x^1|) = 1 provided that M^1_1/γ_11^1≥ L_1. By solving the systems of two equations in (<ref>), we obtain the following conditions: γ_11^1 = 1/2, |η_x^1| = 1/2, 2M^1_1 ≥ L_1 . If N^r = 2, the only possible sub-array partitioning is M^1_1 = M_1/2. From (<ref>), we obtain 2M^1_1 = M_1 ≥ L_1. This implies that it is sufficient that the number of antenna elements at the receiver is at least equal to the number of antenna elements at the transmitter for ensuring a full-rank LoS MIMO channel matrix when N^r = 2. By considering that |𝐜^1|^2 = y_o^2 + (x^r,1_o)^2 and by inserting the expressions for γ_11^1 and |η_x^1| in (<ref>) into (<ref>) and (<ref>), we obtain the following closed-form and explicit expressions for the center-points of the sub-arrays and for the inter-distances of the antenna elements: |x_o^r,1| = y_o/√((4δ_1^t/λ)^2-1), δ_1^r,1 = λ√( y_o^2 + (x_o^r,1)^2)/|τ_11^1| M_1 δ_1^t provided that M_1 ≥ L_1. §.§.§ Four sub-arrays Let us assume N^r = 4. 
Then, (<ref>) can be simplified as follows: f^1_+(u-v) + f^1_-(u-v) + f^2_+(u-v) + f^2_-(u-v) = 0 ∀ u ≠ v . We utilize a similar approach as for the case study with two sub-arrays. Specifically, a sufficient condition to satisfy (<ref>) consists of imposing the following two conditions: f^1_-(u-v) + f^2_+(u-v) = 0 ∀ u ≠ v f^1_+(u-v) + f^2_-(u-v) = 0 ∀ u ≠ v . The equality in (<ref>) can be approximately fulfilled, i.e., f^1_-(u-v) ≈ - f^2_+(u-v), if the following conditions are satisfied: (γ_11^1 - |η_x^1|) = -(γ_11^2 + |η_x^2|) M_1^1/|𝐜_o^1|^2 (γ_11^1 - |η_x^1|)/γ_11^1 = - M_1^2/|𝐜_o^2|^2 (γ_11^2 + |η_x^2|)/γ_11^2 M_1^1 / γ_11^1 > (L_1 -1), M_1^2 / γ_11^2 > (L_1 -1) . In detail, (<ref>) is obtained by matching the zeros of the two functions f^1_-(u-v) and f^2_+(u-v), and (<ref>) is obtained by matching, with opposite signs, the peak amplitudes of f^1_-(u-v) and f^2_+(u-v). This approach based on matching the zeros and the peak amplitudes of f^1_-(u-v) and f^2_+(u-v) leads to the approximation f^1_-(u-v) ≈ - f^2_+(u-v). In addition, a sufficient condition to satisfy the equality in (<ref>) can be obtained by applying the same approach as for the two sub-array case, which results in the following conditions: γ_11^1 + |η_x^1| = 1, γ_11^2 - |η_x^2| = 0 . provided that M_1^1 / γ_11^1 > (L_1 - 1). By inserting (<ref>) in (<ref>) and (<ref>), the set of equations that need to be satisfied for ensuring that the LoS MIMO channel matrix has full rank is the following: γ_11^1 = 1 - |η_x^1|, γ_11^2 = |η_x^2|, |η_x^2| = |η_x^1| - 1/2 M_1^1/|𝐜_o^1|^2 1 - 2|η_x^1|/1 - |η_x^1| = - 2M_1^2/|𝐜_o^2|^2 provided that M_1^i / γ_11^i > (L_1 -1), which is the minimum number of antenna elements in each sub-array for ensuring the orthogonality of the LoS MIMO channel. The obtained system of equations can be solved by noting that γ_11^i depends on δ_1^r, i and |x^r,i_o|, but η_x^i depends only on |x^r,i_o|. Therefore, the solution that fulfills (<ref>) and (<ref>) can be obtained by first computing the center-points of the sub-arrays |x^r,1_o| and |x^r,2_o| from (<ref>), (<ref>) and (<ref>), and then computing the corresponding inter-distances for each sub-array, δ_1^r, 1 and δ_1^r, 2, according to (<ref>). More specifically, the following identity can be obtained from (<ref>): y_o^2/|𝐜_o^i|^2 = 1 - (|x_o^r,i|/|𝐜_o^i|)^2 = 1 - (|η_x^i| λ/2δ^t)^2 . Considering this latter identity, (<ref>) can be rewritten as follows: M_1^1 [ 1 - (|η_x^1| λ/2δ^t)^2] 1 - 2|η_x^1|/1 - |η_x^1| = - 2 M_1^2 [ 1 - (|η_x^2| λ/2δ^t)^2] and (<ref>) can be expressed only in terms of |η_x^1| by inserting (<ref>) in (<ref>), as follows: M_1^1 [ (2δ^t/λ)^2 - (|η_x^1|)^2] 1 - 2|η_x^1|/1 - |η_x^1| = - 2 M_1^2 [ (2δ^t/λ)^2 - (|η_x^1|-1/2)^2] . Equation (<ref>) is a cubic equation in terms of the unknown |η_x^1|, and it can hence be solved by using Cardano's formula <cit.>. In detail, if |η_x^1| ∈ (0,2δ^t/λ), the only valid root of (<ref>) needs to lie in the interval (0,2δ^t/λ). If there is no root in (0,2δ_1^t/λ), therefore, no optimal, i.e., full-rank, design for the considered array configuration exists. Case study δ_1^t = λ /2 – In non-paraxial deployments, a critical case is constituted by the setting δ_1^t = λ/2, which is the typical inter-distance in conventional antenna arrays. This is because the size of the antenna array at the receiver is assumed to be the largest one. In this case, (<ref>) simplifies as follows: M_1^1 [ 1 - (|η_x^1|)^2] 1 - 2|η_x^1|/1 - |η_x^1| = - 2 M_1^2 [ 1 - (|η_x^1|-1/2)^2] and |η_x^1| ∈ (0,2δ^t/λ) = (0,1). 
Discarding the root |η_x^1| = 1, as it is not in the feasible set, we obtain the following equation: (2M_1^1 + 2M_1^2)(|η_x^1|)^2 + (M_1^1 - 2M_1^2)|η_x^1| - M_1^1 - 3/2M_1^2 = 0 . Given that M_1 = 2 M_1^1 + 2 M_1^2, the positive root of (<ref>) is the following: |η_x^1| = 2M_1^2 - M_1^1/2 M_1 + 1/2 M_1√(9 (M_1^1)^2 + 16 M_1^1 M_1^2 + 16 (M_1^2)^2) . As mentioned, |η_x^1|< 2δ_1^t/λ = 1 for being feasible. Therefore, we conclude that the sub-array partitioning needs to fulfill the following condition: 3 M_1^2 < 4 M_1^1 which is obtained from (<ref>) by imposing |η_x^1|< 1. The obtained expression highlights that some partitionings in sub-arrays are not feasible with the proposed approach. Minimum number of required antenna elements – In addition, the solution of the system of equations in (<ref>) and (<ref>) needs to fulfill the condition M_1^i > γ_11^i (L_1 -1), which imposes a minimum number of antenna elements in each sub-array. Based on (<ref>), the inequality M_1^i > γ_11^i (L_1 -1) can be formulated in terms of |η_x^1|, as follows: M_1^1 > (1-|η_x^1|) (L_1-1), M_1^2 > (|η_x^1| - 1/2) (L_1-1) where |η_x^1| is given in (<ref>). From (<ref>), we obtain |η_x^1| > 1/2 for any M_1^1 and M_1^2. If 3 M_1^2 < 4 M_1^1, therefore, we have 1/2 < |η_x^1| < 1, and the right-hand sides of (<ref>) are both positive. In conclusion, the proposed approach provides a LoS MIMO channel matrix with a full rank equal to L_1 if the numbers of antenna elements M_1^1 and M_1^2 fulfill the set of inequalities M_1^1 > (1 - 2M_1^2 - M_1^1/4M_1^1 + 4M_1^2. .- √(9 (M_1^1)^2 + 16 M_1^1 M_1^2 + 16 (M_1^2)^2)/4M_1^1 + 4M_1^2) (L_1-1) M_1^2 > (2M_1^2 - M_1^1/4M_1^1 + 4M_1^2. .+ √(9 (M_1^1)^2 + 16 M_1^1 M_1^2 + 16 (M_1^2)^2)/4M_1^1 + 4M_1^2 - 1/2) (L_1-1) 4 M_1^1 > 3 M_1^2 2M_1^1 + 2M_1^2 ≥ L_1 where the last inequality in (<ref>) ensures that the total number of antenna elements at the multi-antenna receiver is at least equal to the number of antenna elements at the multi-antenna transmitter, which is a necessary condition for obtaining a rank equal to L_1. It is of particular interest to evaluate whether the design that requires the minimum number of antenna elements at the multi-antenna receiver is feasible, i.e., the MIMO configuration 2M_1^1 + 2M_1^2 = L_1. In this case, M_1^2 = L_1/2 - M_1^1. By inserting the latter equality in (<ref>) and noting that M_1^2 = L_1/2 - M_1^1 > 0 by definition, we obtain 3 L_1/14 < M_1^1 < L_1/2. By direct inspection of (<ref>) and (<ref>) with M_1^2 = L_1/2 - M_1^1, it is apparent that the two inequalities are not always fulfilled for any values of L_1, by assuming 3 L_1/14 < M_1^1 < L_1/2. Therefore, the condition 2M_1^1 + 2M_1^2 = L_1 needs to be relaxed with the inequality in (<ref>). As an example, we illustrate a simple design criterion that is analyzed numerically in Section <ref>. Let us assume that the four sub-arrays have the same number of antenna elements, i.e., M_1^1 = M_1^2 = M_1^0. From 2M_1^1 + 2M_1^2 = M_1, we then obtain M_1^0 = M_1/4. This design criterion has the positive feature that |η_x^1| in (<ref>) is independent of M_1^1, M_1^2 and M_1. The computed value of |η_x^1| can then be inserted into (<ref>), by obtaining M̅_1^1 = (1-|η_x^1|) (L_1-1), and M̅_1^2 = (|η_x^1| - 1/2) (L_1-1). Then, M_1^0 can be obtained as M_1^0 > max{M̅_1^1,M̅_1^2}, and M_1 = 4 M_1^0. In this case, |η_x^1| = 0.925, M_1^0 > M̅_1^2 = 0.4250(L_1-1), and M_1 > 1.7(L_1-1). This case study is further illustrated in Section <ref> with the aid of numerical simulations. 
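The equal-split example just described can be reproduced in a few lines. The sketch below evaluates |η_x^1| = (1+√41)/8 ≈ 0.925 for M_1^1 = M_1^2 = M_1^0 and δ_1^t = λ/2, derives the minimum sub-array size from M̅_1^2, and converts the η and γ values into sub-array center abscissas and inter-distances. The conversion relies on our own rearrangement of the definitions of η_x^i, γ_11^i and τ_11^i for broadside linear arrays, so it should be read as an assumption-laden illustration of the design procedure rather than as a verbatim transcription of it.

```python
import numpy as np

def four_subarray_design(L1, y_o, lam, M1_0=None):
    """Equal-split design with N_r = 4 sub-arrays and delta_1^t = lambda/2.

    Returns the per-sub-array size, the centre abscissas and the spacings implied by the
    closed-form |eta_x^1| of the equal-split case (M_1^1 = M_1^2 gives |eta| = (1+sqrt(41))/8)."""
    eta1 = (1 + np.sqrt(41)) / 8                     # ~0.925, independent of M1_0
    eta2 = eta1 - 0.5                                # ~0.425
    if M1_0 is None:                                 # binding constraint: M1_0 > (eta1 - 1/2)(L1 - 1)
        M1_0 = int(np.floor(eta2 * (L1 - 1))) + 1
    dt = lam / 2
    # centre abscissas from |eta_x^i| = x_i / |c_o^i| (since delta_1^t = lambda/2)
    x1 = y_o * eta1 / np.sqrt(1 - eta1 ** 2)
    x2 = y_o * eta2 / np.sqrt(1 - eta2 ** 2)
    c1, c2 = np.hypot(x1, y_o), np.hypot(x2, y_o)    # |c_o^i| per sub-array
    g1, g2 = 1 - eta1, eta2                          # gamma_11^1, gamma_11^2
    tau1, tau2 = (y_o / c1) ** 2, (y_o / c2) ** 2    # tau_11^i for broadside linear arrays
    dr1 = g1 * lam * c1 / (tau1 * M1_0 * dt)         # spacing in the outer pair of sub-arrays
    dr2 = g2 * lam * c2 / (tau2 * M1_0 * dt)         # spacing in the inner pair of sub-arrays
    return M1_0, [-x1, -x2, x2, x1], [dr1, dr2, dr2, dr1]

lam = 1.07e-2
M1_0, centres, spacings = four_subarray_design(L1=16, y_o=256 * lam, lam=lam)
print(M1_0, 4 * M1_0)                                # per-sub-array size and total M_1
print(np.round(centres, 3), np.round(np.array(spacings) / lam, 2))
```

For L_1 = 16, the printed total is 4M_1^0 = 28, which is the smallest equal-split configuration compatible with M_1 > 1.7(L_1 - 1) = 25.5; the resulting geometry can then be screened against the exact channel, e.g., with the helper sketched earlier in this section.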
Consistency between the optimal array configurations in paraxial and non-paraxial settings – It is instructive to evaluate whether the obtained optimal design conditions in (<ref>) and (<ref>) are consistent with the solution obtained for the paraxial setting in Section <ref>. In the paraxial setting, |𝐜_o^i| ≈ |𝐜_o| needs to hold, and hence (<ref>) simplifies as follows: (M_1^1/|𝐜_o|^2) (1 - 2|η_x^1|)/(1 - |η_x^1|) ≈ - 2M_1^2/|𝐜_o|^2 . By noting that M_1 = 2M_1^1 + 2M_1^2, we then obtain |η_x^1| = 1 - M_1^1/M_1. According to (<ref>), this provides |η_x^2| = 1/2 - M_1^1/M_1 = M_1^2/M_1, γ_11^1 = M_1^1/M_1 and γ_11^2 = M_1^2/M_1. Inserting these obtained expressions in (<ref>) and (<ref>), using again the approximation |𝐜_o^i| ≈ |𝐜_o|, and noting that τ_11^i ≈ 1 in (<ref>) for the considered paraxial setting, we obtain the following: x^r,1 = [(M_1 - M_1^1)/2] δ_1^r, x^r,2 = (M_1^2/2) δ_1^r, and δ_1^r,1 = δ_1^r,2 = δ_1^r = λ |𝐜_o|/(M_1 δ_1^t) . By direct inspection of (<ref>), we evince that it coincides with (<ref>), which can be applied in the paraxial setting. This substantiates the consistency of the proposed partitioning in sub-arrays in the limiting regime of the paraxial approximation. §.§.§ Generic number of sub-arrays The approach proposed for two and four sub-arrays can be generalized to the setting with an arbitrary (even, for simplicity) number of sub-arrays, thus providing a general solution for (<ref>). Specifically, let N_r denote the number of sub-arrays. A general solution for (<ref>) is obtained by first imposing the approximation f^i_-(u-v) ≈ - f^i+1_+(u-v) for i=1,2,..,N_r/2 - 1 and (u - v) = 1,2,...,L_1-1. This set of equations results in the following conditions (for i=1,2,..,N_r/2 - 1): (γ_11^i - |η_x^i|) = -(γ_11^i+1 + |η_x^i+1|) (M_1^i/|𝐜_o^i|^2) (γ_11^i - |η_x^i|)/γ_11^i = - (M_1^i+1/|𝐜_o^i+1|^2) (γ_11^i+1 + |η_x^i+1|)/γ_11^i+1 provided that M_1^i / γ_11^i > (L_1 -1) for i=1,2,..,N_r/2. If (<ref>) and (<ref>) are fulfilled, (<ref>) can then be satisfied by imposing the equality: f^1_+(u-v) + f^N_r/2_-(u-v) = 0 ∀ u ≠ v which can in turn be fulfilled by imposing the following conditions: γ_11^1 + |η_x^1| = 1, γ_11^N_r/2 - |η_x^N_r/2| = 0 . In summary, the proposed design based on the partitioning in sub-arrays is obtained by solving the following system of equations (for i=1,2,..,N_r/2 - 1): γ_11^1 + |η_x^1| = 1, γ_11^N_r/2 - |η_x^N_r/2| = 0 (γ_11^i - |η_x^i|) = -(γ_11^i+1 + |η_x^i+1|) (M_1^i/|𝐜_o^i|^2) (γ_11^i - |η_x^i|)/γ_11^i = - (M_1^i+1/|𝐜_o^i+1|^2) (γ_11^i+1 + |η_x^i+1|)/γ_11^i+1 provided that M_1^i / γ_11^i > (L_1 -1) for i=1,2,..,N_r/2. In this section, in summary, we have provided three main contributions: * We have introduced an approach for the analysis of LoS MIMO channels in the non-paraxial setting, which is based on a quartic approximation for spherical wavefronts and a sub-array partitioning for large multi-antenna arrays. * Based on the proposed approach, we have introduced an analytical expression for optimizing the positions (in terms of center-points of the sub-arrays and inter-distances between the antenna elements in each sub-array) of the antennas over LoS MIMO channels. * We have specialized the proposed design criterion to linear arrays with broadside orientation and have identified explicit analytical expressions for ensuring the orthogonality of the LoS MIMO channel matrix. § NUMERICAL RESULTS In this section, we present numerical results to validate the proposed analytical framework. 
The considered setup is the following: f_c = 28 GHz (λ=1.07 cm), |𝐜_o| = 256λ, α = 0 and β = 0. As introduced in Section <ref>, the effective rank is utilized as the figure of merit to evaluate the performance of the proposed designs in terms achievable DoF and spatial multiplexing gain. §.§ Paraxial Setting We assume that the multi-antenna transmitter and receiver are equipped with 4 × 4 antenna elements and that they are aligned along the x-axis, i.e., x_o = 0. The elevation angle of the multi-antenna receiver is sin(θ_o) = z_o/|𝐜_o|. The inter-distances δ^t_1 and δ^t_2 at the multi-antenna transmitter are kept fixed, hence the channel orthogonality is obtained by optimizing the inter-distances δ^r_1 and δ^r_2 at the multi-antenna receiver. Figure <ref> shows the best effective rank that is obtained by optimizing (through an exhaustive grid search) the inter-distances at the receiver based on the exact channel matrix in (<ref>), and Fig. <ref> shows the effective rank obtained by deploying the antenna elements based on the paraxial design in (<ref>). For low elevation angles, the inter-distances given by the paraxial design provide approximately a full rank channel matrix, i.e., N_eff≈ 16. When the elevation angle increases, however, the condition in (<ref>) necessitates a larger inter-distance at the multi-antenna receiver, leading to a larger array size. Eventually, the size of the multi-antenna receiver is so large that the paraxial approximation does not hold anymore, resulting in a degradation of the channel orthogonality. Similarly, a shorter inter-distance at the multi-antenna transmitter implies a larger inter-distance at the multi-antenna receiver, resulting in a similar performance degradation. For example, Fig. <ref> shows that, in the considered setup, it is not possible to achieve the channel orthogonality for any considered inter-distance at the multi-antenna receiver, when δ^t_1 = δ^t_2 = λ/2 at the multi-antenna transmitter, which is a typical system design. Thus, the paraxial approximation is not always accurate, and, more importantly, assuming the same inter-distance among all the antenna elements (uniform arrays) is suboptimal even if the effective rank is optimized by using the exact channel in (<ref>). §.§ Non-Paraxial Setting In this section, we validate the analytical framework for the non-paraxial deployment. We consider two linear arrays with broadside orientation, i.e., M_2 = L_2 = 1, x_o = 0 and z_o = 0. The transmitter has L_1 = 16 antenna elements. To evaluate the accuracy of the analytical framework, we compare four designs to optimize the inter-distances at the multi-antenna receiver: * Design 1: The multi-antenna receiver is partitioned into four sub-arrays, each having the same number of antenna elements, i.e., N_r = 4 and M_1^i = M_1/4. The center-points of the sub-arrays are determined from the analytical framework, i.e., by utilizing (<ref>), (<ref>) and (<ref>). The inter-distances in each sub-array are obtained by optimizing (through an exhaustive grid search) the effective rank of the exact channel matrix in (<ref>). * Design 2: The same setup as for Design 1 is considered, but the channel matrix 𝐇^Large is utilized to maximize the effective rank. * Design 3: The same setup as for Design 2 is considered, but the inter-distances at the multi-antenna receiver are obtained from the analytical framework in (<ref>), (<ref>) and (<ref>). 
* Design 4: The multi-antenna receiver is optimized by assuming the paraxial approximation, i.e., the condition in (<ref>) is utilized. Figure <ref> shows the effective rank as a function of the number of antenna elements at the multi-antenna receiver when δ_1^t = λ/2. In this setup, the paraxial approximation is not fulfilled, and Designs 1-3 clearly overcome Design 4 when the configuration for the center-points of the sub-arrays is optimal, i.e., when the number of antenna elements at the receiver is larger than the minimum required (depicted in the figure by a dashed vertical line). According to the example given in Section <ref>, the minimum number of antenna elements at the receiver needs to satisfy the condition M_1 > 1.7(L_1-1) = 25.5. It is noteworthy that Design 3, which is based on the proposed analytical framework, results in an effective rank that is similar to that obtained by utilizing numerical grid-based methods, provided that the number of antenna elements at the receiver is sufficiently large, as predicted by the proposed analytical framework. Figure <ref> illustrates the matrix 𝐆 that is obtained when the multi-antenna receiver is optimized based on Designs 1, 2 and 3 for M_1 = 48. We see that the magnitude of the off-diagonal entries of 𝐆 is at least 10 dB smaller than the magnitude of the diagonal entries. This confirms the near-orthogonality of the optimized LoS MIMO channel matrix. Figure <ref> shows the effective rank of the LoS MIMO channel as a function of the inter-distance at the multi-antenna transmitter. In this case, the proposed configuration ensures the orthogonality for any inter-distance at the transmitter when M_1 = 48 but not when M_1 = 16 (the vertical dashed line shows the minimum number of antenna elements based on the proposed framework). Hence, for a given number of antenna elements at the transmitter, there is a minimum required value of the inter-distance at the transmitter for ensuring the orthogonality of the LoS MIMO channel. It needs to be emphasized that the proposed approach offers a sufficient condition to maximize the rank in LoS MIMO channels. Therefore, other designs that ensure that the LoS MIMO channel has a full rank, even when the number of antenna elements at the multi-antenna receiver does not exceed the minimum value estimated in this paper, may exist (as discussed in Section <ref>). In addition, Fig. <ref> shows that, as the inter-distance at the multi-antenna transmitter increases, the inter-distance at the multi-antenna receiver decreases. This is because the paraxial approximation (Design 4) becomes more accurate in this case. Specifically, as shown in Table <ref>, the inter-distances obtained based on the proposed sub-array partitioning converge towards the inter-distance obtained by considering the paraxial approximation in (<ref>). § CONCLUSION In this paper, we have introduced a novel framework for optimizing the deployment of the antenna elements in LoS MIMO channels. The proposed approach can be applied to paraxial and non-paraxial settings. In the paraxial setting, we have devised a simple analytical framework that provides explicit expressions for ensuring the orthogonality (i.e., full rank) of the LoS MIMO channel matrix as a function of key design parameters. In the non-paraxial setting, we have introduced a new analytical framework based on a quartic approximation for spherical wavefronts and the partitioning of large arrays into sub-arrays. 
The proposed approach provides sufficient conditions for ensuring that the channel matrix is orthogonal, which requires an excess number of antenna elements either at the multi-antenna transmitter or at the multi-antenna receiver. Possible extensions of this paper include the generalization of the proposed methods to deployments with the same number of antenna elements at the transmitter and receiver, the analysis of more complex channel models, e.g., including environmental impairments that affect sub-THz frequencies and multipath interference in urban settings, as well as the impact of possible errors for the optimal positions of the antenna elements at the transmitter and receiver. IEEEtran [ < g r a p h i c s > ]Juan Carlos Ruiz-Sicilia (Student Member IEEE) received the B.Sc. and M.Sc. degrees in Telecommunication Engineering from the University of Málaga, Spain, in 2019 and 2021, respectively. In 2021, he was part of the German Aerospace Center (DLR), Institute of Communication and Navigation, to carry out his master’s thesis and was the recipient of a national award for his M.Sc. thesis. He is now pursuing a Ph.D. degree in Telecommunications Engineering at Université Paris-Saclay, France. He is employed at CNRS as an Early Stage Researcher in the European project 5GSmartFact H2020 MSCA-ITN. [ < g r a p h i c s > ]Marco Di Renzo (Fellow IEEE) received the Laurea (cum laude) and Ph.D. degrees in electrical engineering from the University of L’Aquila, Italy, in 2003 and 2007, respectively, and the Habilitation à Diriger des Recherches (Doctor of Science) degree from University Paris-Sud (currently Paris-Saclay University), France, in 2013. Currently, he is a CNRS Research Director (Professor) and the Head of the Intelligent Physical Communications group in the Laboratory of Signals and Systems (L2S) at Paris-Saclay University – CNRS and CentraleSupelec, Paris, France. He is a Fellow of the IEEE, IET, EURASIP, and AAIA; an Academician of AIIA; an Ordinary Member of the European Academy of Sciences and Arts, an Ordinary Member of the Academia Europaea; an Ambassador of the European Association on Antennas and Propagation; and a Highly Cited Researcher. Also, he holds the 2023 France-Nokia Chair of Excellence in ICT, he holds the Tan Chin Tuan Exchange Fellowship in Engineering at Nanyang Technological University (Singapore), and he was a Fulbright Fellow at City University of New York (USA), a Nokia Foundation Visiting Professor at Aalto university (Finland), and a Royal Academy of Engineering Distinguished Visiting Fellow at Queen's University Belfast (UK). His recent research awards include the 2022 Michel Monpetit Prize conferred by the French Academy of Sciences, the 2023 IEEE VTS James Evans Avant Garde Award, the 2024 Best Tutorial Paper Award, and the 2024 IEEE COMSOC Marconi Prize Paper Award in Wireless Communications. He served as the Editor-in-Chief of IEEE Communications Letters during the period 2019-2023, and he is currently serving as the Director of Journals of the IEEE Communications Society. [ < g r a p h i c s > ]Placido Mursia (member IEEE) received the B.Sc. and M.Sc. (with honors) degrees in Telecommunication Engineering from Politecnico of Turin in 2015 and 2018, respectively. He obtained his Ph.D. degree from Sorbonne Université of Paris, at the Communication Systems department of EURECOM in 2021. He is currently a Senior Research Scientist with the 6GN group at NEC Laboratories Europe in Heidelberg, Germany. 
His research interests lie in convex optimization, signal processing, reconfigurable intelligent surfaces, and wireless communications. [ < g r a p h i c s > ]Aryan Kaushik (Member IEEE) is currently an Assistant Professor at the University of Sussex, UK, since 2021. Prior to that, he has been with University College London, UK (2020-21), University of Edinburgh, UK (2015-19), and Hong Kong University of Science and Technology, Hong Kong (2014-15). He has also held visiting appointments at Imperial College London, UK (2019-20), University of Bologna, Italy (2024), University of Luxembourg, Luxembourg (2018), Athena RC, Greece (2021), and Beihang University, China (2017-19, 2022). He has been External PhD Examiner internationally such as at Universidad Carlos III de Madrid, Spain, in 2023. He has been an Invited Panel Member at the UK EPSRC ICT Prioritisation Panel in 2023 plus Proposal Reviewer for the EPSRC since 2023, and a member of the One6G Association. He is Editor of three upcoming books by Elsevier on Integrated Sensing and Communications, Non-Terrestrial Networks, Electromagnetic Signal and Information Theory, and several journals such as IEEE OJCOMS (Best Editor Award 2023), IEEE Communications Letters (Exemplary Editor 2023), IEEE Internet of Things Magazine (including the AI for IoT miniseries), IEEE Communications Technology News (initiated the IEEE CTN podcast series), and several special issues such as in IEEE Wireless Communications Magazine, IEEE Network Magazine, IEEE Communications Standards Magazine, IEEE Open Journal of the Communications Society, IEEE Internet of Things Magazine. [ < g r a p h i c s > ]Vincenzo Sciancalepore (Senior Member IEEE) received his M.Sc. degree in Telecommunications Engineering and Telematics Engineering in 2011 and 2012, respectively, and a double Ph.D. degree in 2025. He was a recipient of the National Award for the Best Ph.D. Thesis in the area of communication technologies (wireless and networking) issued by GTTI in 2015. Currently, he is a Principal Researcher at NEC Laboratories Europe, focusing his research activity on reconfigurable intelligent surfaces. He is involved in the IEEE Emerging Technologies Committee leading an emerging technology initiative (ETI) on RIS. He is an Editor for IEEE Transactions on Wireless Communications and IEEE Transactions on Communications.
http://arxiv.org/abs/2406.17868v1
20240625181455
Atom Optics with Cold Bosons
[ "V. I. Yukalov", "E. P. Yukalova" ]
cond-mat.quant-gas
[ "cond-mat.quant-gas" ]
http://arxiv.org/abs/2406.19180v1
20240627135748
Protonium: Discovery and Prediction
[ "Bo-Qiang Ma" ]
hep-ph
[ "hep-ph", "hep-ex" ]
arXiv, The Chinese version is published in Chinese Science Bulletin https://doi.org/10.1360/TB-2024-0578doi:10.1360/TB-2024-0578 organization=School of Physics, Peking University, city=Beijing 100871, country=China § ABSTRACT The Beijing Spectrometer (BESIII) Collaboration reconstructed the invariant mass of three pairs of positive and negative pions by studying the decay process of charmonium to a photon and three pairs of positive and negative pions. They discovered the resonant structures X(1840) and X(1880), which are interpreted as the predicted proton-antiproton bound states, also known as protonium. This article briefly introduces the experimental discovery processes of these resonant structures and discusses the theoretical explorations inspired by them. The predictions proposed by these theoretical explorations offer a new perspective for studying the nature of these particles and new decay modes. Therefore the collaborative exploration of experiments and theory plays a positive role in deepening understanding of the fundamental laws of nature. Beijing Spectrometer, Beijing Electron Positron Collider, protonium, resonance On April 9, 2024, the Beijing Spectrometer III (BESIII) collaboration published a paper in the journal “Physical Review Letters" <cit.>, reporting the analysis of 100 billion charmonium events produced by the Beijing Electron Positron Collider (BEPC). The study focused on the decay process of charmonium into a photon and three pairs of positive and negative pions. By reconstructing the invariant mass of the three pairs of positive and negative pions and fitting the data with two resonant structures X(1840) and X(1880), with a statistical significance exceeding 10 standard deviations was found for X(1880) with mass and width of  M=1882.1± 1.7± 0.7 MeV/c^2 and Γ=30.7 ± 5.5± 2.4 MeV/c^2. The mass and width of X(1840) were determined to be  M= 1832.5 ± 3.1 ± 2.5 MeV/c^2 and Γ=80.7 ± 5.2 ± 7.7 MeV/c^2, consistent with the results obtained by the BESIII collaboration in 2013 based on 200 million charmonium events <cit.>. This work confirms the existence of X(1840) and X(1880) as new particles in experimental physics. These particles can be interpreted as predicted proton-antiproton bound states <cit.>, also known as protonium. This BES experiment originated from a research result published by the BES International Collaboration in “Physical Review Letters" in 2003 <cit.>. The study was based on 58 million charmonium events and analyzed the invariant mass spectrum of positive and negative protons near twice the proton mass in the process of charmonium decaying into a photon and a pair of positive and negative protons, revealing a significant near-threshold enhancement phenomenon. By using S-wave analysis to fit the data, a resonant structure with a mass slightly below twice the proton mass was identified, with a mass of  M=1859^+3_-10 (stat)^+ 5_-25 (syst) MeV/c^2 and a width less than 30 MeV/c^2. This result attracted the attention of theoretical physicists, who proposed various explanations to understand the experimental findings. These explanations include interpreting the resonant structure as a bound state X() composed of a proton and antiproton <cit.>. Although the central mass of this resonance is lower than the sum of the proton and antiproton masses, the resonance has a width of 30 MeV/c^2, allowing for a certain phase space to decay into a pair of free proton-antiproton states, leading to the experimentally observed near-threshold enhancement phenomenon. 
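As a rough check of this near-threshold argument (our own arithmetic, using the standard proton mass m_p ≈ 938.27 MeV/c^2): the proton-antiproton threshold lies at 2m_p ≈ 1876.5 MeV/c^2, so the central mass of X(1859) sits only about 17 MeV/c^2 below threshold, a gap comparable to its width of roughly 30 MeV/c^2, which is why part of the resonance line shape can still reach the open proton-antiproton channel. By the same arithmetic, X(1840) lies roughly 44 MeV/c^2 below threshold, while the central value of X(1880) is slightly above it.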
In addition, there are also studies to have interpreted X(1859) as a glueball state <cit.>, or as a manifestation of final state interactions <cit.>, or as coherent contributions from different scattering channels <cit.>, or as phenomena resulting from quark fragmentation <cit.>, and so on. In 2005, the BES collaboration observed a resonance structure X(1835) with a mass of M=1833.7± 6.1 (stat)± 2.7 (syst) MeV/c^2 in the decay channel of charmonium J/ψ to a photon plus an η' meson and a pair of positive and negative π mesons by reconstructing the invariant masses of the η' meson and the pair of positive and negative π mesons <cit.>. This structure was further confirmed in subsequent analyses with larger datasets <cit.>. Based on more data, the BES collaboration also reanalyzed the invariant mass spectrum of X() in the decay channel of charmonium decaying into a photon and a pair of positive and negative protons <cit.>. Taking into account final state interactions <cit.> and reaction coherence <cit.>, they found that the mass of X() also shifted to around 1832 MeV/c^2, and there were complex structures near M=1860 MeV/c^2 as well <cit.>. These cases demonstrate that the interaction between theory and experiment plays a positive role in the experimental analysis of data. Professor Mu-Lin Yan from the University of Science and Technology of China and his collaborators made significant contributions to theoretical explorations in explaining the X() structure using proton-antiproton bound states <cit.>. British scientist Skyrme attempted to explain the proton and its properties using solitons of chiral fields <cit.>. Witten argued that the picture of baryons as solitons is consistent with Quantum Chromodynamics (QCD) in the large N_c limit <cit.>. In reference <cit.>, starting from the theoretical basis of understanding the proton as a Skyrmion, Yan et al. discussed a system composed of Skyrmion and anti-Skyrmion, finding a bound state solution of Skyrmion and anti-Skyrmion system that can explain the bound state of proton and antiproton. They then constructed a phenomenological potential model for proton and antiproton, which could be adopted to calculate the mass and width of the bound state by adjusting parameters, interpreting X() as a bound state of protons and antiprotons, namely a protonium. Additionally, based on experimental phenomena of multi-meson production in low-energy proton-antiproton collisions, Yan et al. explicitly predicted another decay channel of X() as a proton-antiproton bound state, decaying into 4 to 7 mesons instead of 2-3 mesons. The recent analysis by the BESIII collaboration of the decay process of charmonium into a photon and three pairs of positive and negative pions can be seen as a specific realization of the decay of proton-antiproton bound states into 6 mesons. The experimental results are consistent with this theoretical prediction. In turn, this theoretical prediction strengthens the viewpoint of interpreting the resonant states X(1840) and X(1880) discovered by BESIII as protonium. Theoretical work on proton-antiproton bound states can be traced back to 1949 when Fermi and Yang attempted to interpret mesons as deeply bound states of protons and antiprotons <cit.>. Nambu and Jona-Lasinio, based on chiral symmetry, predicted the existence of a protonium with a mass slightly lower than twice the proton mass, in addition to the relatively light pion <cit.>. 
The establishment of the quark model in 1964 upended the idea of using meson and nucleon degrees of freedom to understand hadrons. Nevertheless, theoretical and experimental efforts to find protonium have never ceased, such as searches for the Coulomb bound state of protons and antiprotons in low-energy proton-antiproton reactions <cit.>. The BES collaboration's discovery of the near-threshold enhancement in the proton-antiproton invariant mass spectrum in 2003 opened a new era of experimentally searching for hadronic molecular states of protons and antiprotons formed through the strong interaction. The theoretical explorations stimulated by experiments provided new insights for the BESIII collaboration to explore protonium, offering stronger evidence for research in this field. As one of the internationally leading facilities for electron-positron collisions, the Beijing Spectrometer plays a crucial role in experimental research on exotic hadron states, or multiquark states. This includes the discovery of various possible tetraquark states, pentaquark states, and even glueball states. X(1840) and X(1880) may belong to a special category of exotic hadron states, namely protonium composed of a proton and an antiproton, or a hexaquark state consisting of three quarks and three antiquarks. The discovery by the BESIII collaboration may have revealed the existence of protonium for the first time. However, this conclusion still requires further experimental research to measure the properties of X(1840) and X(1880) more accurately and to compare them with theoretical predictions. Chinese theorists have conducted extensive research on the theoretical foundations of proton-antiproton bound states, as well as on extensions of the concept to other baryons, thus exploring the direction of baryon-antibaryon bound states <cit.>. This has provided new opportunities for experimental explorations of protonium and even baryonium. With more accumulation of experimental data and increasingly precise theoretical predictions, it is believed that the BESIII collaboration will make more discoveries in the experimental exploration of protonium and baryonium, making greater contributions to a deeper understanding of the fundamental laws of the natural world.
http://arxiv.org/abs/2406.18051v1
20240626040119
ViT-1.58b: Mobile Vision Transformers in the 1-bit Era
[ "Zhengqing Yuan", "Rong Zhou", "Hongyi Wang", "Lifang He", "Yanfang Ye", "Lichao Sun" ]
cs.CV
[ "cs.CV" ]
Flexible Conformal Highest Predictive Conditional Density Sets Max Sampson Department of Statistics and Actuarial Science, University of Iowa and Kung-Sik Chan Department of Statistics and Actuarial Science, University of Iowa ============================================================================================================================================================================================== § ABSTRACT Vision Transformers (ViTs) have achieved remarkable performance in various image classification tasks by leveraging the attention mechanism to process image patches as tokens. However, the high computational and memory demands of ViTs pose significant challenges for deployment in resource-constrained environments. This paper introduces ViT-1.58b, a novel 1.58-bit quantized ViT model designed to drastically reduce memory and computational overhead while preserving competitive performance. ViT-1.58b employs ternary quantization, which refines the balance between efficiency and accuracy by constraining weights to {-1, 0, 1} and quantizing activations to 8-bit precision. Our approach ensures efficient scaling in terms of both memory and computation. Experiments on CIFAR-10 and ImageNet-1k demonstrate that ViT-1.58b maintains comparable accuracy to full-precision Vit, with significant reductions in memory usage and computational costs. This paper highlights the potential of extreme quantization techniques in developing sustainable AI solutions and contributes to the broader discourse on efficient model deployment in practical applications. Our code and weights are available at <https://github.com/DLYuanGod/ViT-1.58b>. [1]These authors contributed equally to this work. [2]Corresponding author. Lichao Sun (lis221@lehigh.edu) § INTRODUCTION The rapid advancement of Transformer <cit.> architectures has significantly impacted the field of computer vision, particularly with the introduction of Vision Transformers (ViTs) <cit.>. By treating image patches as tokens and leveraging the attention mechanism to process image patches, ViTs effectively capture complex dependencies across entire images, achieving remarkable performance in various image classification tasks <cit.>. However, the computational and memory demands of ViTs are substantial, stemming primarily from their attention mechanisms, which scale quadratically with the number of tokens <cit.>. This inherent complexity poses significant challenges for deploying ViTs in resource-constrained environments such as mobile devices and embedded systems, where energy efficiency and low latency are critical Recently, research in neural network efficiency to mitigate these demands has explored various strategies including model pruning <cit.>, knowledge distillation <cit.>, and quantization <cit.>. Among these, quantization techniques are particularly effective as they directly reduce the precision of weights and activations, thereby significantly lowering the memory and computational requirements of deep learning models. Traditionally, post-training quantization has been favored for its simplicity as it does not necessitate changes to the training pipeline or retraining of the model. However, this method often results in significant accuracy loss at lower precision levels because the model isn’t optimized for the quantized representation during training <cit.>, limiting its utility for high-performance applications. 
In contrast, Quantization-Aware Training (QAT) <cit.> integrates the effects of quantization into the training process itself, simulating quantization effects to typically achieve better accuracy than post-training methods. For example, extreme quantization, like 1-bit models used in Binarized neural networks (BNNs) <cit.>, which utilize binary weights and activations, significantly decreasing computational and memory demands. Recent adaptations of 1-bit quantization techniques to Transformer models, such as BitNet <cit.>, and BiVit <cit.> have shown that even severe quantization can maintain performance while substantially cutting resource consumption. However, these 1-bit models using binary quantization often face challenges in preserving model accuracy due to the extreme reduction in weight precision. To address this limitation, research has ventured into ternary quantization, which offers a more balanced approach. A notable example is "The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits" <cit.>, which explores 1.58-bit quantization where weights can assume values of -1, 0, or 1. This approach refines the balance between performance and computational demands by introducing a zero value, which significantly reduces computational overhead. The inclusion of a non-polar weight (zero) not only allows for sparsity, thereby reducing the number of active computations but also maintains a richer representation than binary weights, potentially leading to less information loss. The success of this quantization strategy in large language models suggests its promising applicability to Vision Transformers, enabling more efficient processing of visual data while preserving model accuracy. To address the unique demands of large-scale ViT models, we introduce ViT-1.58b, a 1.58-bit quantized ViT model. ViT-1.58b utilizes ternary quantization to optimize both memory and computational efficiency while maintaining competitive performance. This approach leverages the benefits of extreme quantization demonstrated in language models, adapting them to the context of computer vision. Our contributions are summarized as follows: * We propose ViT-1.58b, the first 1.58-bit quantized ViT model, optimized for efficient scaling in terms of both memory and computation. * We demonstrate the effectiveness of ViT-1.58b on the CIFAR-10 and ImageNet-1k datasets, showing competitive accuracy with significant reductions in memory and computational costs. * We provide a comprehensive evaluation, comparing ViT-1.58b with state-of-the-art quantization methods and full-precision ViT models, highlighting its advantages in resource-constrained environments. § METHODS   As shown in Figure <ref>, our ViT-1.58b architecture is primarily based on ViT for image classification. The process begins by dividing the input image into patches, followed by applying a linear projection to the flattened patches. These patches then undergo patch and position embedding, as well as class embedding, before being fed into the transformer encoder. The output from the transformer encoder is processed through an MLP to produce the final classification prediction. We employ BitLinear layers from BitNet b1.58 <cit.> to replace the conventional nn.Linear layers in the transformer encoder. Different quantization functions are employed to quantize the weights to 1.58-bit precision and the activations to 8-bit precision, ensuring both efficiency and performance. 
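As a concrete reference for the BitLinear computation that is formalized next, the following is a minimal PyTorch-style sketch of the absmean ternary weight quantization and absmax 8-bit activation quantization (the function and variable names are ours; this is an illustrative sketch under stated assumptions, not the authors' released implementation).

import torch

def quantize_weights_ternary(W, eps=1e-5):
    # absmean quantization: scale by the mean absolute value, then round and clip to {-1, 0, 1}
    alpha = W.abs().mean()
    W_tilde = torch.clamp(torch.round(W / (alpha + eps)), -1, 1)
    return W_tilde

def quantize_activations_absmax(x, bits=8, eps=1e-5):
    # absmax quantization: scale activations into [-Q_b, Q_b] with Q_b = 2^(b-1)
    Qb = 2 ** (bits - 1)
    gamma = x.abs().max()
    x_q = torch.clamp(x * Qb / (gamma + eps), -Qb + eps, Qb - eps)
    return x_q, gamma

def bitlinear(x, W, bits=8):
    # W uses the nn.Linear layout (out_features, in_features); the input is layer-normalized,
    # activations and weights are quantized, multiplied, then rescaled to the original range
    x = torch.nn.functional.layer_norm(x, x.shape[-1:])
    x_q, gamma = quantize_activations_absmax(x, bits)
    W_tilde = quantize_weights_ternary(W)
    y = x_q @ W_tilde.t()
    beta = W.abs().mean()          # equals ||W||_1 / (n * m), the rescaling factor
    Qb = 2 ** (bits - 1)
    return y * beta * gamma / Qb

During training, the rounding and clamping steps would additionally be wrapped in a straight-through estimator so that gradients can bypass the non-differentiable operations, as discussed in the training strategy below.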
Next, we will introduce how we implement BitLinear layers to achieve 1.58-bit weights and 8-bit activations. Weights Quantization   In the ViT-1.58b, the 1.58-bit weights are achieved using the absmean quantization function that constrains the weights to {-1, 0, 1}. Specifically, we first scale the weight matrix W ∈ℝ^n × m by its average absolute value, thus obtaining W/α+ϵ, where ϵ is a small floating-point number to avoid division by zero and ensure numerical stability, and α is the average absolute value of the weight matrix, calculated as: α = 1/nm∑_ij |W_ij|. Then, we round each value in the scaled matrix to the nearest integer among -1, 0, +1: W̃ = RoundClip(W/α+ϵ, -1, 1), RoundClip(x, a, b) = max(a, min(b, round(x))). This RoundClip function rounds the input value x to the nearest integer and clips it within the specified range [a, b]. Using this method, we can convert the weight matrix W into a ternary matrix W̃, where each weight value is constrained to {-1, 0, 1}. Activations Quantization.   Following <cit.>, activations are quantized with b-bit precision by absmax quantization function that scales the activations to the range [-Q_b, Q_b], where Q_b = 2^b-1. In the proposed ViT-1.58b, we set b = 8, meaning that the activations are quantized to 8-bit precision, providing a balance between computational efficiency and maintaining sufficient precision for effective performance as described in <cit.>. The activations quantization process is described as follows: x̃ = Quant(x) = Clip(x× Q_b/γ, -Q_b+ϵ, Q_b-ϵ), where γ = max(|x|), representing the maximum absolute value of the activations, and the Clip function ensures that the scaled values are confined within the specified range [-Q_b, Q_b], defined as: Clip(x, a, b) = max(a, min(b, x)) The core component of ViT-1.58b is the BitLinear layer, which replaces the traditional nn.Linear layer in the Transformer. Initially, the activations undergo a normalization layer <cit.> to ensure that the activation values have a variance of 1 and a mean of 0, mathematically represented as LN(x) = x - 𝔼(x)/√(Var(x) + ϵ). Following this, normalized activations are quantized using the absmax quantization function, resulting in Quant(LN(x)). With weights quantized to 1.58-bit precision, matrix multiplication results in the output y = W̃·Quant(LN(x)), and the output is subsequently dequantized to rescaled with β and γ to the original precision, expressed as y_dequant = y ×βγ/Q_b, where β = 1/nmW_1 These steps collectively enable the BitLinear layer to maintain the performance of the Transformer model while significantly reducing computational cost and storage requirements. Training Strategy   During training, we employ the Straight-Through Estimator (STE) <cit.> to handle non-differentiable functions such as the sign and clip functions in the backward pass. The STE allows gradients to flow through these non-differentiable functions, enabling effective training of the quantized model. Additionally, we use mixed precision training, where weights and activations are quantized to low precision, but gradients and optimizer states are kept in high precision to ensure training stability and accuracy. § EXPERIMENTS AND RESULTS   Experimental Setting.   We evaluate our ViT-1.58b model on two datasets, CIFAR-10 <cit.> and ImageNet-1k <cit.>, comparing it with several versions of the Vision Transformer Large (ViT-L): the full-precision ViT-L (i.e. 
32-bit precision), and the 16-bit, 8-bit, and 4-bit inference versions in terms of memory cost, training loss, test accuracy for CIFAR-10, and Top-1 and Top-3 accuracy for ImageNet-1k. The computational framework for this study comprised four NVIDIA TESLA A100 GPUs, each with 80 GB of VRAM. The system's processing core utilized an AMD EPYC 7552 48-core Processor with 80 GB of system RAM to manage extensive datasets efficiently. We employed PyTorch version 2.0.0 integrated with CUDA 11.8, optimizing tensor operations across GPUs. Results.   Table <ref> shows the performance of the ViT-L model across different bit precisions evaluated on two widely recognized datasets: CIFAR-10 and ImageNet-1k. The results highlight how the reduction in bit precision from full precision (ViT-L) to lower bit configurations (16-bit, 8-bit, 4-bit, and 1.58-bit) affects the model's effectiveness in terms of training loss, test accuracy, and top-k accuracy metrics. For CIFAR-10, the full-precision ViT-L model achieved a test accuracy of 76.28%, which serves as a baseline for comparison. When the bit precision was reduced to 16-bit, there was a moderate decline in accuracy to 74.61%. Further reduction to 8-bit and 4-bit resulted in more pronounced declines to 72.20% and 70.69%, respectively. This trend suggests that lower bit precision generally degrades the model's performance, likely due to the increased quantization error and reduced capacity to capture the variability in the data. The proposed 1.58-ViT-1.58b-L model, which operates at an even lower bit precision than the other quantized variants, recorded a test accuracy of 72.27%. Interestingly, the performance of this model is closer to the 8-bit version than to the 4-bit. On the more complex and diverse ImageNet-1k dataset, a similar pattern is observed. The full-precision ViT-L model achieved a Top-1 accuracy of 76.54% and a Top-3 accuracy of 90.23%. Reduction to 16-bit precision caused declines to 75.29% and 87.44% in Top-1 and Top-3 accuracies, respectively. These declines became more substantial at 8-bit (Top-1: 74.11%, Top-3: 85.26%) and 4-bit (Top-1: 70.88%, Top-3: 82.50%). The 1.58-bit model managed a Top-1 accuracy of 74.25% and a Top-3 accuracy of 85.78%, showcasing a performance that surpasses the 8-bit version in Top-1 accuracy and nearly matches it in Top-3 accuracy. As shown in Figure <ref>, ViT-1.58b-L shows promising results in both training loss and memory consumption compared to the full-precision ViT-L and its quantized versions. The left panel depicts training loss curves, where ViT-1.58b-L's loss closely follows that of the full-precision ViT-L. This suggests that the 1.58-bit quantization retains the model's learning capacity effectively. The right panel highlights the memory consumption. The full-precision ViT-L requires 1.14 GB of memory, while ViT-1.58b-L drastically reduces this to just 57 MB. This significant reduction in memory usage makes our model ideal for deployment in resource-constrained environments. Overall, our ViT-1.58b-L model balances competitive performance with substantial memory savings, demonstrating its efficiency and practicality for real-world applications. In Figure <ref>, as bit precision decreases, computational performance (measured in TFLOPs) significantly increases. Full-precision achieves 0.692 TFLOPs, while ours reaches 11.069 TFLOPs. This substantial increase in performance with lower precision highlights the efficiency gains achievable through extreme quantization. 
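The reported memory figures can be roughly reproduced from the parameter count alone (a back-of-envelope sketch; the parameter count of roughly 307M assumed for ViT-L is ours, and activations, embeddings, and optimizer state are ignored).

# Approximate weight storage for ViT-L (~307M parameters) at different precisions
n_params = 307e6
for bits in (32, 16, 8, 4, 1.58):
    mib = n_params * bits / 8 / 2**20
    print(f"{bits:>5} bits -> {mib:8.1f} MiB")
# Roughly 1171 MiB (~1.14 GiB) at full precision versus roughly 58 MiB at 1.58 bits,
# in line with the 1.14 GB and 57 MB figures reported above.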
Our model offers a dramatic boost in computational throughput, making it highly suitable for high-performance computing environments where both speed and resource efficiency are critical. § CONCLUSION   In this paper, we introduced ViT-1.58b, a 1.58-bit quantized Vision Transformer that efficiently addresses computational and memory challenges in vision model deployment through ternary quantization. Our results show that ViT-1.58b achieves competitive accuracy on benchmarks like CIFAR-10 and ImageNet-1k with significantly lower resource requirements. This model demonstrates the potential of advanced quantization techniques in complex Transformer architectures, highlighting their role in developing sustainable AI solutions. Future work will explore ViT-1.58b's scalability and integration into larger systems, enhancing its practical utility and environmental benefits. § LIMITATIONS While the ViT-1.58b model exhibits promising performance and demonstrates efficient computational usage, it also presents certain limitations that need to be addressed. This section outlines the primary challenges associated with our approach. Requirement for Pre-Training.   One significant limitation of the ViT-1.58b architecture is the necessity for pre-training the ViT model from scratch when applying our ternary quantization techniques. This requirement can significantly raise the barrier to entry for utilizing our proposed model, as pre-training demands extensive computational resources and time. Organizations or individuals with limited access to such resources might find it challenging to adopt this technology without pre-trained models readily available. Moreover, pre-training introduces additional complexity in ensuring the robustness and generalizability of the model before it can be deployed effectively. Training Difficulty.   Another critical challenge is the increased difficulty in training the 1.58-bit ViT model compared to its full-precision counterpart. During our experiments, as shown in Figure <ref>, we observed that training the 1.58-bit ViT on CIFAR-10 required approximately 250 epochs to achieve a training loss around 0.026, whereas the standard ViT could reach a comparable loss of 0.024 in just 200 epochs. This increased training duration for the 1.58-bit model suggests a lower learning efficiency, likely due to the reduced precision and the consequent limitations in the model's capacity to capture detailed feature representations.
http://arxiv.org/abs/2406.18187v1
20240626090352
Selective Prompting Tuning for Personalized Conversations with LLMs
[ "Qiushi Huang", "Xubo Liu", "Tom Ko", "Bo Wu", "Wenwu Wang", "Yu Zhang", "Lilian Tang" ]
cs.CL
[ "cs.CL", "cs.AI", "cs.LG" ]
Functional knockoffs selection with applications to functional data analysis in high dimensions [ ============================================================================================================================ § ABSTRACT In conversational AI, personalizing dialogues with persona profiles and contextual understanding is essential. Despite large language models' (LLMs) improved response coherence, effective persona integration remains a challenge. In this work, we first study two common approaches for personalizing LLMs: textual prompting and direct fine-tuning. We observed that textual prompting often struggles to yield responses that are similar to the ground truths in datasets, while direct fine-tuning tends to produce repetitive or overly generic replies. To alleviate those issues, we propose Selective Prompt Tuning (SPT), which softly prompts LLMs for personalized conversations in a selective way. Concretely, SPT initializes a set of soft prompts and uses a trainable dense retriever to adaptively select suitable soft prompts for LLMs according to different input contexts, where the prompt retriever is dynamically updated through feedback from the LLMs. Additionally, we propose context-prompt contrastive learning and prompt fusion learning to encourage the SPT to enhance the diversity of personalized conversations. Experiments on the CONVAI2 dataset demonstrate that SPT significantly enhances response diversity by up to 90%, along with improvements in other critical performance indicators. Those results highlight the efficacy of SPT in fostering engaging and personalized dialogue generation. The SPT model code is publicly available for further exploration. [<https://github.com/hqsiswiliam/SPT>] § INTRODUCTION Personalization in dialogue systems enhances user interaction by creating a coherent and customized experience. It involves adapting conversations to individual preferences, backgrounds, and real-time context, ensuring each dialogue feels personally relevant. This tailored approach fosters a deeper connection between users and technology, making interactions more intuitive and engaging. By understanding and anticipating user needs, personalized dialogues can offer more than just relevant responses; they provide a seamless, conversational experience that mirrors human interaction, enriching the overall quality of digital communication. PersonaChat <cit.> has become a pivotal dataset for personalization research in conversational AI, offering persona profiles that detail an interlocutor's preferences and background in four to five sentences. These profiles guide conversational agents in creating dialogues that are both engaging and consistent with the persona's characteristics and prior conversational context. This area has seen diverse approaches for enhancing personalization, such as attention mechanisms <cit.>, reinforcement learning with multiple rewards <cit.>, and persona profile enrichment through stories <cit.>, demonstrating the breadth of innovation in making interactions more personalized and meaningful. Recently, the advent of large language models (LLMs) <cit.> has opened new avenues for dialogue generation, offering the potential for creating conversations that align with human preferences. However, fully leveraging LLMs to achieve the level of personalization showed in PersonaChat is a promising yet underexplored area. 
Currently, LLMs are primarily guided by direct textual prompts or through parameter-efficient fine-tuning like prompt tuning <cit.> that only tunes a few virtual tokens instead of whole LLMs for specific tasks. However, designing personalized conversational agents with LLMs faces two main challenges. The primary issue lies in diverse settings in conversations, which encompass a wide array of dialogues, each characterized by unique persona profiles and varying lengths of conversation. This diversity necessitates an understanding of the distinct conversational settings within the data. Through textual prompting, it is hard to guide the model to generate desired responses aligned with the target texts. Simply fine-tuning LLMs through prompt tuning without careful conversational setting analysis risks producing responses that lack specificity and depth, resulting in a generic and bland generation. Secondly, another equally critical challenge arises from the limitations inherent to the datasets used for persona-based dialogue generation. Typically small and lacking in diversity, these datasets can restrict the model's exposure to a wide range of conversational scenarios. When LLMs (e.g., Llama2-7B <cit.>) are tuned through trainable soft prompts on PersonaChat, they risk overfitting to specific persona profiles. This overfitting manifests in the model's responses, which become repetitive and overly aligned with the persona, often at the cost of dynamic and contextually appropriate interactions. Although this might lead to improvements in metrics such as F1 or BLEU scores, it detracts from the overall diversity and engagingness of the dialogues, undermining the model's ability to emulate authentic human conversation. To handle those two challenges when designing personalized conversations with LLMs, we propose a Selective Prompt Tuning (SPT) model. Specifically, to tackle the first challenge, it is crucial to identify inherent data patterns without explicit annotations. To achieve this, it is intuitive to utilize a group of multiple soft prompts to handle different conversational settings when tuning the model in a parameter-efficient way. However, as previously mentioned, the annotations for the dialogue settings are missing and even hard to discover and annotate. If we naively concurrently tune all prompts without clear distinctions, this would yield only marginal differences compared with tuning one soft prompt. Therefore, to build effective multiple prompts to discover the inherent data pattern inside the personalized dialogue, the proposed SPT model utilizes a dense retriever to adaptively select a proper soft prompt from the soft prompt group based on the given input context. To distinguish the effectiveness of soft prompts, we utilize the loss from LLMs as feedback to guide the update of the dense retriever without explicit annotations. Based on this, the proposed SPT model could discover patterns intrinsically associated with different dialogues. In this way, the retriever and soft prompt group evolve together, benefiting from continuous interactions that enrich their capability to discriminate and generate diverse, contextually relevant responses. To address the second challenge that LLM may overfit small-scale datasets such as PersonaChat, the proposed SPT method integrates two complementary mechanisms: context-prompt contrastive learning and prompt fusion learning. 
The context-prompt contrastive learning mechanism ensures diversity by encouraging the use of different soft prompts for varied dialogue contexts, preventing repetitive responses. Concurrently, prompt fusion learning aggregates all prompt predictions during back-propagation, optimizing towards a unified output. This dual strategy not only preserves response diversity across contexts but also enhances overall model performance, demonstrating their cooperative effectiveness in tackling overfitting while maintaining the performance. By integrating the above two parts into the SPT method, experiments on the CONVAI2 dataset <cit.> with LLMs (i.e., Llama2 <cit.> and OPT <cit.>) not only demonstrate marked improvements in response diversity and engagingness but also indicate enhancements in other key performance metrics. Quantitatively, the proposed SPT model consistently outperforms baselines across models with various sizes. Moreover, SPT offers profound insights into different dialogue scenarios, particularly in the model's strategic prompt selection. Comprehensive ablation studies highlight the adaptability of different prompts to specific dialogue contexts. Overall, our contributions can be summarized as follows. * We present the novel SPT method by integrating a trainable dense retriever with dynamic soft prompt selection to improve dialogue personalization and enhance both the diversity and engagingness. * In the proposed SPT method, we introduce the context-prompt contrastive mechanism and prompt fusion learning within a unified framework to foster prompt diversity and adaptability. * Extensive experiments show the effectiveness of the proposed SPT method. § RELATED WORK §.§ Personalized Dialogue Generation The CONVAI2 dataset, curated from the PersonaChat dataset <cit.>, features a persona profile with four to five sentences for each interlocutor <cit.>. This dataset has been established as a benchmark for personalized dialogue generation. Building upon this dataset, a variety of studies have explored different methods. For example, <cit.> extend the GPT2 model <cit.> with fine-tuning techniques specific to persona-based conversations. Differently, <cit.> employed a tripartite BERT architecture <cit.>, optimized through reinforcement learning, to craft responses. Similarly, <cit.> introduces a transmitter-receiver model by applying reinforcement learning with custom rewards to refine the dialogue generation process. <cit.> leverage model-agnostic data augmentation techniques to enrich the training dataset with pseudo-samples using models like GPT2 and BERT. <cit.> develop an adaptive attention mechanism that coherently integrates persona and context information. <cit.> propose a LAPDOG method to incorporate an external story corpus to enhance persona profiles for richer response generation. In contrast to those methods, the proposed SPT framework decomposes the task with multiple soft prompts without necessitating additional annotations or reliance on external corpora, which enables the generation of diverse and engaging responses while maintaining the integrity of the conversational context. §.§ Language Models and Personalization Language models (LMs) estimate text sequence probabilities, with recent models expanding from millions <cit.> to billions of parameters <cit.>, and training corpora now including vast web texts and instructional data <cit.>. 
Such advancements have notably improved the performance of LMs on various NLP tasks, especially in generating high-quality text for conversational applications. While those LMs are adept at providing user-centric responses, personalization remains a challenge. The prevalent strategy involves appending manually crafted hard prompts to LMs, which is overly simplistic and can result in the `lost in the middle' problem <cit.>. This occurs when the output of the LM is generically correct but lacks personalized context, struggling to reconcile broad training data with specific user prompts. To counteract this, the proposed SPT method enables the LLM to adapt its responses to varying personalized contexts more effectively. As a result, SPT fosters the generation of dialogue responses that are not only consistent but also highly personalized, addressing the core challenge of maintaining context relevance in user interactions. § METHODOLOGY In this section, we introduce the proposed SPT method. §.§ Problem Settings In persona-based dialogue sessions, a context is represented as C={P,U}, where P={p_1,…,p_e} denotes the persona comprising e sentences (e.g., 4≤ e≤ 5) to provide background information for a machine interlocutor m and U={u_h,1, u_m,1,…,u_h,n} denotes the dialogue context initiated by the human h to capture the exchange between human h and machine m. The goal is to generate a machine's response r=u_m,n that aligns with its persona P and the context U. §.§ Architecture Figure <ref> illustrates the SPT framework, consisting of a soft prompt group, a dense retriever, and a frozen LLM. Within this framework, the dense retriever selects an appropriate soft prompt from the soft prompt group by determining the closest match to the given context C. The chosen prompt is then merged with C to guide the LLM to produce compelling responses. The SPT framework restricts the soft prompt group and dense retriever to be trainable, while maintaining the LLM in a frozen state, which could significantly reduce the memory footprint and optimize resource utilization during training. Soft Prompt Group The soft prompt group, denoted by SP={sp_1,...,sp_K}, consists of K soft prompts with random initialization. Each prompt features L × D virtual tokens, where D denotes the hidden dimension of the LLM and L denotes the length of prompts. These prompts are fine-tuned during training while the LLM remains frozen. Soft Prompt Selection The soft prompt selection is done by a trainable retriever, Ret(·, ·), which calculates the similarity score s_C,sp={s_C,1, ..., s_C,K} between the context embedding emb_C from the LLM and each candidate sp_i in the soft prompt group SP. It ranks all the soft prompts based on the computed similarity score {s_C,i}_i=1^K to identify the most suitable prompt for the context. LLMs The LLMs deployed here are the decoder-only causal language model with frozen weights and initialized from pre-trained models. §.§ Computing Similarity between Soft Prompts and Context To reduce computational overhead, the dense retriever Ret utilizes two linear layers, i.e., lin_C and lin_sp, for computing the similarity scores {s_C,i}. Those similarity scores are calculated using the context embedding emb_C ∈ℝ^M× D obtained by the LLM's word embedding layer LLM_emb and the soft prompt representation in ℝ^L× D. 
The similarity score is computed as emb_C = LLM_emb(C), v_C = lin_C(emb_C), v_sp,i = lin_sp(sp_i), v̅_C = Avg_dim=0(v_C), v̅_sp,i = Avg_dim=0(v_sp,i), s_c,i^raw = v̅_C ·v̅_sp,i/v̅_C_2 ·v̅_sp,i_2, s_C,i = Softplus(s_C,i^raw), where Avg_dim=0(·) denote the averaging operation along the length dimension to address the sequence length discrepancy between emb_C and sp_i, Softplus(·) denotes the softplus activation function to ensure that s_C,i remains in the range [0, 1] and enhance the numerical stability during training, and s_C,i represents the normalized similarity score between the context C and the soft prompt sp_i. §.§ Learning Prompt Selection Navigating the lack of explicit annotations in complex dialogue scenarios poses a challenge in accurately guiding the retriever to assess the similarity between the context and each soft prompt. A naive method, which independently fine-tunes the entire soft prompt group and then selects candidates based on the similarity score during decoding, might lead to sub-optimal performance, akin to tuning a single soft prompt. To address this, we leverage context-driven losses from soft prompts, refining similarity score computations and enabling informed retriever decisions during training, as introduced in the next two subsections. §.§.§ Soft Prompt Loss For simplicity, consider the case with a single context. Given a context c_n from persona and dialogue history and its corresponding ground truth response target_n, we calculate the negative log-likelihood loss for each soft prompt as pred_i,n = LLM(concat(sp_i, c_n)), ℒ^LLM_i = NLL(pred_i,n, target_n), where concat(·,·) denotes the concatenation operation, LLM(·) denotes the LLM's forward operation, which takes a text sequence as the input and returns the predicted token probability distribution as the output, and NLL(·, ·) denotes the negative log-likelihood loss. This process generates K losses ℒ^LLM = {ℒ^LLM_1,...,ℒ^LLM_K} to measure the predictive ability of each soft prompt. §.§.§ Prompt Selection Loss In the absence of explicit annotations for conversational settings, updating the retriever to identify the most effective soft prompt for a given context is challenging. However, by using soft prompts in LLMs with the same context, the loss from different prompts can serve as a guide to determine which soft prompt is most suitable. Based on this consideration, we use the soft prompt loss (i.e., ℒ^LLM defined in Eq. (<ref>)) to gauge each candidate sp_i in the soft prompt group SP within c_n. Aligning the LLM's performance evaluation with the retriever's similarity scores is achieved by using the KL divergence between the negative language model loss (as guidance) and similarity scores. By denoting by S_c_n, SP=[S_c_n,sp_1,…,S_c_n,sp_K] the similarity scores between c_n and each sp_i in SP, the prompt selection loss is formulated as ℒ^LLM_normed= Softmax(-ℒ^LLM/τ_g), ℒ_selection = KL(S_c_n, SP,ℒ^LLM_normed), where Softmax(·) denotes the softmax function, τ_g is a temperature hyper-parameter, and KL(·,·) denotes the KL divergence. This loss is pivotal in ensuring the selections of the dense retriever are informed and coherent with the LLM, effectively mirroring the performance of soft prompts in generating contextually relevant and engaging responses. §.§ Context-Prompt Contrastive Learning While the aforementioned losses aid in training, there is a risk that the retriever often retrieves a single prompt and stagnates in such sub-optimal states. 
To alleviate this and foster prompt diversity to retrieve more prompts, we propose a context-prompt contrastive loss. This loss refines prompt selection by adjusting similarity scores based on the textual similarity of distinct contexts, thereby preventing to always select a single soft prompt and promoting varied selections. Specifically, the context-prompt contrastive loss dynamically recalibrates the similarity scores between pairs of context contents, considering their textual resemblance. Mathematically, the context-prompt contrastive loss is formulated as ℒ_con(s_c_i, s_c_j) = 1 - cos(s_c_i, s_c_j) if M(c_i,c_j) > Γ max(0, cos(s_c_i, s_c_j)) otherwise where M(·,·) denotes a distance function such as BLEU <cit.>, Γ denotes a threshold, s_c_i denotes a vector of cosine similarity scores between a context c_i and soft prompts in the soft prompt group, and cos(·,·) denotes the cosine similarity. The function ℒ_con amplifies the cosine similarity for similar context pairs (i.e., M(c_i,c_j) > Γ) and dampens it for dissimilar pairs (i.e., M(c_i,c_j) ≤Γ). This contrastive strategy not only ensures the retriever's alignment with the LLM's evaluations but also fosters a rich diversity and distinctiveness among different dialogue contexts, significantly bolstering the framework's overall adaptability. §.§ Prompt Fusion Learning To optimize the effectiveness of the soft prompts, we introduce a prompt fusion learning loss. This loss averages the predictive probabilities from all the soft prompts in the soft prompt group, aiming to aggregate a unified outcome that closely aligns with the desired output. The averaging operation in this loss smooths out variances and biases from individual prompts, thus improving the overall prediction accuracy and reliability. Formally, this loss is formulated as p_fused = 1/K∑_i=1^KLLM(concat(sp_i, c_n)) ℒ_fusion = NLL(p_fused, target_n). By utilizing the collective strengths of diverse prompts, this loss enhances the model’s ability to generate context-appropriate responses. §.§ Overall Objective Function The SPT framework hinges on the harmonious integration of the aforementioned loss functions, where each addresses a distinct aspect. The soft prompt loss (i.e., ℒ^LLM) ensures the LLM fidelity, the prompt selection loss (i.e., ℒ_selection) aligns the retriever's similarity assessment with the LLM's output, the context-prompt contrastive loss (i.e., ℒ_con) promotes diversity in prompt selection, and the prompt fusion learning loss (i.e., ℒ_fusion) enhance the overall performance for all the soft prompts. The overall objective of the SPT method is to minimize a composite loss function that encapsulates these individual components. Formally, the overall objective function ℒ_Total for the SPT framework is formulated as ℒ_Total = ∑_i=1^Kℒ^LLM_i +λ_1 ∑_i,j=1 i ≠ j^Kℒ_con(s_c_i, s_c_j) +λ_2ℒ_selection+λ_3 ℒ_fusion, where λ_1, λ_2, and λ_3 are hyperparameters that control the relative contribution of each loss component. In our experiments, we simply set λ_1, λ_2, and λ_3 to be 1, which could achieve good performance. By minimizing ℒ_Total during training, the SPT framework effectively balances the fidelity to the LLM, the accuracy of the retriever, and the diversity in prompt selection, leading to an adaptive dialogue generation system. §.§ Inference During inference, the dense retriever selects the most appropriate soft prompt from the soft prompt group based on the given context. 
This selected prompt, along with the context, is then fed into the LLM to decode the final result. Formally, for a given context C, soft prompt group SP, and dense retriever Ret, the inference process proceeds as i^* = max_1≤ i≤ K Ret(C, SP), pred = LLM(concat(sp_i^*, C)), where sp_i^* denotes the selected soft prompt with index i^* and pred denotes the response generated by the LLM. § EXPERIMENTS In this section, we empirically evaluate the proposed SPT model. §.§ Dataset We conduct experiments on the ConvAI2 dataset <cit.>, a benchmark for personalized dialogue generation. It comprises 8,939 training and 1,000 validation multi-turn conversations sourced from crowdworkers. Each dialogue includes persona profiles, each of which has four to five sentences to describe the background of each speaker, and the conversational history between the two interlocutors. By following <cit.>, our experiments employ a self-persona setting where only the speaking interlocutor's persona is revealed, maintaining the other's persona as obscured. §.§ Experimental Setup All experiments are based on two LLMs, including OPT <cit.> and Llama2 <cit.> of different sizes, which serve as the foundation model for the proposed SPT method. We randomly initialize soft prompts using a standard Gaussian distribution. For OPT models, we set the soft prompt token length to 8, and for the Llama2 model, we use a token length of 1. The soft prompt group consists of K=4 candidates. Learning rates of different LLMs are recorded in Table <ref> in the Appendix. The threshold Γ in Eq. (<ref>) is set to 20. §.§ Evaluation Metrics We evaluate our model using a suite of established metrics for persona-based dialogue generation, including Unigram F1, BLEU, ROUGE, BERT Score, and textual unigram/bigram distinctness (denoted by DIST-1 and DIST-2). Unigram F1 measures the harmonic mean of precision and recall at the token level. BLEU <cit.> and ROUGE <cit.> evaluate the overlap of n-grams between the generated text and target reference. BERT score <cit.>, using the deberta-xlarge-mnli model[<https://github.com/Tiiiger/bert_score>] as recommended for its improved performance over roberta-large, captures the semantic similarity of text pairs. Unigram and bigram distinctness (denoted by DIST-1 and DIST-2) gauge the diversity of the generated text, where DIST_AVG denotes the average of DIST-1 and DIST-2. §.§ Results Table <ref> illustrates that the proposed SPT consistently outperforms the baseline models across various metrics. Notably, the OPT-2.7B-SPT and Llama2-7B-SPT models exhibit significant performance improvements (i.e., 33.04% and 26.26%, respectively). Those improvements affirm the effectiveness of the proposed SPT method in fostering more diverse and personalized responses. For baseline models, we can see that there exists a common trade-off between linguistic quality and diversity. Specifically, the Llama2-7B model scores 17.12 in F1 and 1.99 in BLEU, but its diversity seems not so good (i.e., 2.80 in DIST-1 and 12.91 in DIST-2). This is in contrast to the OPT-125M model, which has lower linguistic scores (i.e., 10.79 in F1 and 1.61 in BLEU) but higher distinctness (i.e., 3.94 in DIST-1 and 13.67 in DIST-2). Different from those models, the proposed SPT method significantly enhances both diversity and linguistic quality, thereby avoiding the common compromise between linguistic enhancement and diversity. § ABLATION STUDIES In this section, we conduct ablation studies for the proposed SPT method. 
§.§ Training Losses Table <ref> reveals the impact of different training losses on performance. Omitting the prompt fusion loss slightly increases the prediction diversity in terms of DIST_AVG but reduces the overall performance in terms of F1, BLEU, ROUGE, and BERT Score. One possible reason is that the prompt fusion loss contributes to the linguistic quality at the cost of the diversity. Excluding the context-prompt contrastive loss leads to a decline in all the evaluated metrics, which shows the effectiveness of the context-prompt contrastive loss. The absence of the prompt selection loss significantly affects the prediction diversity, causing the model to favor a single soft prompt, akin to utilizing a single prompt. The above results underscore the importance of each loss in enhancing the model performance and response diversity. §.§ Prompt Usage in Varied Contexts To see the prompt usage during the conversational process, we plot in Figure <ref> the times each soft prompt is chosen during the entire conversation. According to Figure <ref>, we can see that in the OPT-1.3B-SPT model, prompt sp_3 is predominantly utilized for the initial stage in the conversation, sp_2 for the middle stage of the conversation, and sp_1 for the later stage of the conversation. For the Llama2-7B-SPT model, we have similar observations, indicating that soft prompts have functionalities in different stages of the conversation. Moreover, Figure <ref> explores the stylistic aspects of responses generated by different prompts, i.e., emojis in the generated responses. In the Llama2-7B-SPT model, sp_2, which is often used in the initial stage of the conversations, tends to generate emojis in the generated response. Differently, sp_3, often used in the late stage of the conversation, tends to generate few emoji in decoded responses. This phenomenon suggests a strategic use of emojis at different stages of the conversation. §.§ Number of Soft Prompt Candidates Table <ref> shows the effect of the number of soft prompts (i.e., K) to the model performance in terms of different metrics. Though the best performance occurs at different K's for different performance metrics, the best performance for different metrics usually occurs when K≤ 4, which is likely due to the sizes of both the CONVAI2 dataset and the LLM used. Hence, in all the experiments, K is set to be 4 by default. §.§ Comparison to Longer Prompt Tuning As shown in Table <ref>, the SPT method with four single-token soft prompts outperforms the four-token prompt tuning method, highlighting effectiveness of the proposed SPT method. Moreover, SPT excels the eight-token prompt tuning method in terms of BLEU, ROUGE, and DIST_AVG, showing its effectiveness despite fewer trainable parameters. §.§ Comparison to LoRA As LoRA <cit.> is another type of parameter-efficient finetuning method and has shown to be effective to utilize LLMs for different applications, we compare the proposed SPT method with it based on the Llama2-7B model under the condition that they have comparable numbers of trainable parameters. As shown in Table <ref>, LoRA exhibits improvements in the BLEU score and DIST_AVG but has lower ROUGE-L, BERT_F1, and F1 scores compared with the four-token prompt tuning method. Moreover, the proposed SPT method surpasses LoRA across all the evaluation metrics, highlighting its superior performance and affirming its effectiveness under the condition of comparable numbers of trainable parameters. 
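To put the parameter budgets of these comparisons in perspective, the soft-prompt parameter counts follow directly from the Llama2-7B hidden size of 4096 (a rough sketch of our own; the retriever's projection layers and the LoRA rank are not specified above and are therefore omitted).

hidden = 4096                 # Llama2-7B hidden dimension
spt_prompts = 4 * 1 * hidden  # SPT: K=4 soft prompts of one token each -> 16,384
pt_4_tokens = 4 * hidden      # four-token prompt tuning                -> 16,384
pt_8_tokens = 8 * hidden      # eight-token prompt tuning               -> 32,768
print(spt_prompts, pt_4_tokens, pt_8_tokens)

Before counting the retriever, SPT therefore uses the same soft-prompt budget as the four-token baseline and about half that of the eight-token baseline.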
§.§ Comparison to In-Context Learning To compare the performance with In-Context Learning (ICL) on LLMs, we compare the SPT method with the zero-shot GPT-3.5 turbo with instructions. According to results shown in Table <ref>, we can see that ICL gains a higher diversity score (i.e., DIST_AVG) but lower scores in terms of other metrics. This implies that simply prompting a more powerful LLM without proper tuning is hard to gain comparable performance to tuning methods. §.§ Text Overlap Between Prediction and Persona Table <ref> presents BLEU scores between the model's predictions and the system's persona descriptions for different models. We can see that the prompt tuning method exhibit larger text overlap with the system's persona, often leading to repetitive responses aligned with the persona. In contrast, the proposed SPT method has lower linguistic similarities to the persona, which results in more diverse and effective responses. This suggests that the proposed SPT method effectively balances the persona consistency and response diversity, avoiding the pitfalls of over-repetition. § CONCLUSION In this paper, we introduce SPT, a strategic approach for personalized dialogue generation through selective prompt tuning. By jointly training a soft prompt group and a dense retriever, SPT adeptly navigates various conversational scenarios automatically, enriching response diversity while improving both linguistic and neural-based metrics. Experiments on the CONVAI2 dataset highlights the capacity of SPT to identify intrinsic conversational settings, showing its effectiveness in generating contextually appropriate dialogues. § ACKNOWLEDGEMENTS This work is supported by NSFC general grant 62076118 and Shenzhen fundamental research program JCYJ20210324105000003. § LIMITATIONS This paper has introduced the selective prompt tuning in personalized dialogue generation. Through diverse prompting, the LLMs can generate more diverse and engaged responses when compared with single prompt tuning. However, despite the context-prompt contrastive mechanism and prompt selection loss, there is still a risk for the retriever to fall into a narrow selection of soft prompts (e.g., given K=4 in Llama2-7B, there is still one soft prompt that is selected only once during inference). This limitation may caused by a larger K used, making the determination of K important. Meanwhile, in the context-prompt contrastive loss, simply using BLEU to measure text similarity may not be sufficient to distinguish the difference between two dialogues, which could be enhanced by neural metrics powered by LLMs that could distinguish texts from both semantic and linguistic perspectives. Additionally, in the decoded text of Llama2-7B, the existence of emoji is not designed in the PersonaChat dataset, which is worth further investigation. § ETHIC STATEMENT This research confines the use of personal data to fictional persona profiles in the CONVAI2 dataset, avoiding the handling or storage of real personal data. All the soft prompts within the SPT are vector-based parameters without directly encoding or representing any individual's personal information. When applying to real-world applications, it is vital to prioritize data privacy, ensuring that personal information for personalized dialogues is ethically sourced and used with informed consent. § APPENDIX Input: Output: [!t] SPT Training §.§ Complete Training Procedure The full training procedure is described at Algorithm <ref>. 
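To complement Algorithm <ref>, the sketch below illustrates one plausible form of the retriever update at the heart of the training procedure: pulling the retriever's score distribution toward a target derived from the per-prompt LM losses via a KL divergence, consistent with the normalized loss ℒ^LLM_normed discussed later in this appendix. The softmax normalization and temperature are assumptions, not the exact formulation.

```python
import torch
import torch.nn.functional as F

def prompt_selection_objective(retriever_scores: torch.Tensor,
                               lm_losses: torch.Tensor,
                               temperature: float = 1.0) -> torch.Tensor:
    """KL divergence between the retriever's distribution and a target built from LM losses.

    retriever_scores: (B, K) similarity scores over the K soft prompts.
    lm_losses:        (B, K) loss of the frozen LLM when each prompt is prepended.
    Prompts that make the target response easier to decode (lower loss) receive
    higher target probability, so the retriever learns to prefer them.
    """
    target = F.softmax(-lm_losses.detach() / temperature, dim=-1)
    log_pred = F.log_softmax(retriever_scores / temperature, dim=-1)
    return F.kl_div(log_pred, target, reduction="batchmean")

# Toy example with K = 4 candidates for a single context:
scores = torch.tensor([[0.2, 1.3, -0.5, 0.1]], requires_grad=True)
lm_losses = torch.tensor([[2.9, 2.1, 3.4, 2.8]])
prompt_selection_objective(scores, lm_losses).backward()   # gradients flow to the retriever scores
```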
§.§ Detailed Settings for SPT Training Table <ref> lists the detailed hyper-parameters for training SPT. The share parameters are used for all model training. Meanwhile, the Llama2-7B-SPT, OPT-2.7B, OPT-1.3B, and OPT-125M indicate the specific hyper-parameters used in the specific model training. We trained the SPT models on eight Tesla-V100 32GB GPUs. For each SPT model except OPT-125M-SPT, we train one epoch and then do the evaluation. For OPT-125M-SPT, we train for 15 epochs until it converges. §.§ Details for Ablation Study Table <ref> details our ablation study's findings. Selective Prompt Tuning (SPT) with four one-token soft prompts demonstrates superior performance over both the traditional four-token and eight-token soft prompt tuning approaches, highlighting our method's effectiveness. In a comparative analysis with LoRA under a similar parameter setup, SPT outperforms in all evaluated metrics, reinforcing its efficiency. Furthermore, compared to GPT-3.5 Turbo's In-Context Learning (ICL), SPT shows significant improvements in F1 and BLEU scores, indicating challenges with ICL's alignment to target responses despite its higher diversity in textual outputs. §.§ Human Evaluation We conducted human evaluation on three metrics, persona consistency, context consistency, and engagingness. Each metric is ranked for three scores: 0, 1, 2. For persona consistency, 0 means contradicts the persona, 1 means not relevant to the persona, and 2 means consistent to the persona. For context consistency, 0 means contradicts previous dialogue history, 1 means not relevant to the previous dialogue, and 2 means consistent to the previous dialogue. For engagingness, 0 means a boring response, 1 means a safe but bland response, and 2 means an interesting response. We randomly sampled 100 responses from Llama2-7B-SPT and Llama2-7B-PT. The results are displayed in Table <ref>. Our proposed SPT outperforms PT over all three metrics, indicating the effectiveness of our approach in both three perspectives. §.§ Experimental Results on Larger Dataset To further evaluate the efficiency and scalability of the SPT framework. We conducted additional experiments on the DailyDialog dataset, a more extensive and complex dialogue dataset than PersonaChat. Notably, the DailyDialog dataset lacks explicit persona descriptions in its entries, presenting a unique challenge for personalization techniques. The results of the DailyDalog are shown as Table <ref>. Result Analysis: The experimental setup involved executing four separate runs using both soft prompt tuning (PT) and SPT strategies on the DailyDialog dataset. The empirical evidence clearly demonstrates the superiority of the SPT framework over the conventional PT approach across all evaluated metrics. Specifically, the SPT method exhibits significant performance improvements, showcasing its adaptability and effectiveness in handling more complex and extensive datasets. The evaluation metrics are summarized in the table below, where we observe notable enhancements in key areas such as F1 score, BLEU, ROUGE, and BERT-based metrics, underlining SPT's potential applicability across diverse conversational tasks. §.§ Comparison to RAG (Retrieval Augmented Generation) Conceptual Differences: RAG and SPT fundamentally differ in their approaches. RAG enhances inputs by incorporating external information from a database, focusing on the value of external data. In contrast, SPT focuses on selecting the optimal soft prompt based on given context input. 
While they operate differently, they aren't inherently conflicting and could be seen as complementary since SPT can treat the retrieval-augmented input as context as a whole. SPT has the potential to integrate RAG's enriched inputs comprehensively. The exploration of combining RAG and SPT falls beyond the scope of this work and is reserved for future research. RAG Experimentation: We experimented with the RAG framework under the Llama2-7B model to compare SPT with RAG. We observed that the choice of K (number of retrieval contents) is crucial due to the RAG's reliance on the training set for retrieval. A large K value can lead to the concatenated content overwhelming the context window size, thus significantly increasing computational resource demands. Efficient Training Setup for RAG: For efficiency, we set K=1 for our RAG experiment, focusing on retrieving the most semantically similar dialogue to augment the current context. The retriever used is the Contriever from Facebook, which is known for its ability to retrieve highly relevant content based on textual semantics. This setup allowed us to directly compare the efficiency and scalability of RAG and SPT under similar computational constraints. Comparative Results: The training time for an epoch under the RAG setup was approximately 14 hours, compared to 7 hours for SPT. This underscores SPT's efficiency and scalability, especially in resource-constrained environments. Detailed results are displayed in the table <ref>. In terms of performance, SPT outperformed RAG in nearly all the metrics. This shows that SPT is not only faster but can also produce better results. The only area where RAG did slightly better was in creating more diverse responses (DIST-1 and DIST-2 metrics). This comparison shows that SPT is more efficient and often more effective than RAG. However, these two approaches do not necessarily contradict each other. Instead, combining these two methods could lead to even better performance. We might create more accurate and engaging dialogues by using RAG to get the proper context and SPT to fine-tune the response. This approach has a lot of potential for improving conversational AI systems. §.§ SPT Stability Experiment To evaluate the stability of the SPT, we further conducted additional experiments designed to test the system's resilience to disruptions. Specifically, we introduced Gaussian noises with the mean as 0 and the standard deviation as 1 to the similarity scores during inference to simulate the effect of inaccuracies in the soft prompt selection process. Additionally, we add a parameter α to control the strength of the noise. Formally, the disrupted selection score would become score = score + α * noise. The objective of this experiment is to observe the stability of our retriever under less-than-ideal conditions. Detailed results of these experiments will be included in our revision. Result Analysis: The results presented in Table <ref> demonstrate the impact of noise on retrieval performance. The introduction of mild noise (e.g., 0.01 to 0.1) results in negligible performance degradation, with some metrics showing slight improvements. However, as noise levels increase to 1.0, a deterioration in performance is observed despite a noticeable increase in DIST-2. This pattern suggests that while our SPT framework exhibits good stability to minor disturbances, its performance is adversely affected by severe interference. 
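The perturbation used in this stability study is straightforward to reproduce; the scores below are hypothetical, and only the rule score <- score + α · noise comes from the experiment description.

```python
import torch

def select_with_noise(scores: torch.Tensor, alpha: float) -> int:
    """Perturb the similarity scores with standard Gaussian noise before the
    argmax selection, as in the inference-time stability experiment."""
    return int(torch.argmax(scores + alpha * torch.randn_like(scores)))

scores = torch.tensor([0.42, 0.31, 0.55, 0.18])     # hypothetical scores for K = 4 prompts
for alpha in (0.0, 0.01, 0.1, 1.0):
    picks = [select_with_noise(scores, alpha) for _ in range(1000)]
    flip_rate = sum(p != 2 for p in picks) / len(picks)
    print(f"alpha = {alpha}: selection changed in {flip_rate:.1%} of trials")
```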
§.§ Retriever Stability Experiment To evaluate the robustness of our dense passage retrieval system, we introduced Gaussian noise with standard deviation. Specifically, we apply noise with a varying strength α, choosing from [0.001, 0.01, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0], to the L^LLM_normed loss during the retriever's training phase. Therefore, the disrupted L^LLM_normed will become L^LLM_normed+α * noise. This approach aimed to simulate potential disruptions in the soft prompt selection process, thereby testing the stability and resilience of our retriever under adversarial conditions. Adversarial Noise Impact on Retriever Robustness: The introduction of Gaussian noise served as a means to disturb the updating process of the retriever, allowing us to observe its behaviour and adaptability in the interference. Specifically, we add the noise the ℒ^LLM_normed to make the KL Divergence update become noisy. The varying levels of noise strength were chosen to represent a wide spectrum of potential adversarial impacts, from mild to severe disruptions, i.e., [0.001, 0.01, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0] Results and Insights: According to the Table <ref>, introducing the mildest level of noise (0.001) yielded improved performance across several key metrics, including F1, ROUGE-1, ROUGE-L, BERT Score, and DIST-2. This improvement suggests that slight perturbations may act as a beneficial regularizer within the training process, thereby enhancing performance. In contrast, levels of noise beyond the mildest introduced numerical instability (manifesting as overflow or underflow, particularly as we utilize fp16 for SPT training). This instability disrupts the training process, leading to outcomes marked as NaN (Not a Number). §.§ Case Study Figure <ref> shows a comparison between SPT and a prompt-tuned model. SPT uniquely incorporates horror-related emojis in a conversation about horror movies, while the prompt-tuned model tends to repeat persona profile content. This trend continues in subsequent dialogues. In the last case, SPT adeptly weaves persona details into its responses, offering a more engaging and personalized conversational experience compared to the more generic replies of the prompt-tuned model.
http://arxiv.org/abs/2406.17923v1
20240625201137
PAFT: A Parallel Training Paradigm for Effective LLM Fine-Tuning
[ "Shiva Kumar Pentyala", "Zhichao Wang", "Bin Bi", "Kiran Ramnath", "Xiang-Bo Mao", "Regunathan Radhakrishnan", "Sitaram Asur", "Na", "Cheng" ]
cs.CL
[ "cs.CL" ]
*These authors contributed equally to this work.

§ ABSTRACT Large language models (LLMs) have shown remarkable abilities in diverse natural language processing (NLP) tasks. LLMs generally undergo supervised fine-tuning (SFT) followed by preference alignment to be usable in downstream applications. However, this sequential training pipeline leads to an alignment tax that degrades LLM performance. This paper introduces PAFT, a new PArallel training paradigm for effective LLM Fine-Tuning, which independently performs SFT and preference alignment (e.g., DPO and ORPO) with the same pre-trained model on their respective datasets. The model produced by SFT and the model from preference alignment are then merged into a final model by parameter fusing for use in downstream applications. This work reveals that preference alignment such as DPO naturally results in a sparse model, whereas SFT leads to a dense model that needs to be sparsified for effective model merging. This paper introduces an effective interference resolution that reduces redundancy by sparsifying the delta parameters. The LLM resulting from the new training paradigm achieved Rank #1 on the HuggingFace Open LLM Leaderboard[<https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard>]. Comprehensive evaluation shows the effectiveness of the parallel training paradigm.

§ INTRODUCTION In recent years, large language models (LLMs) have emerged as the standard approach to addressing natural language processing (NLP) tasks. The typical way of building an LLM for downstream applications follows a sequential training pipeline consisting of two phases: 1. Supervised fine-tuning (SFT), where the pre-trained LLM is fine-tuned with the language modelling loss on demonstrations of the desired behaviour. 2. Alignment with human preference, where the model produced by the SFT phase is further fine-tuned with an alignment algorithm such as Reinforcement Learning from Human Feedback (RLHF) or Direct Preference Optimization (DPO).
While this sequential pipeline has been used to seemingly great success, how the SFT and the preference alignment work better with each other is underexplored. Recent studies <cit.> have found that the preference alignment phase can cause the LLM to forget the diverse capabilities that it has acquired from earlier phases, despite aligning the LLM with human expectation. This phenomenon, also known as the alignment tax in the literature <cit.>, has accumulated substantial attention from both academia and industry. The alignment tax inherently results from catastrophic forgetting present in the staged training. To reduce catastrophic forgetting and thus alignment tax, this paper introduces a new parallel training paradigm for LLM fine-tuning, named PAFT, which independently performs SFT and preference alignment with the same pre-trained model on respective datasets, instead of sequentially conducting SFT followed by preference alignment. The model from SFT and the model from preference alignment are then merged into a final model by parameter fusing for use in downstream applications. As discovered by prior work <cit.>, direct model merging causes the parameter values to interfere across models, thereby harming the performance of the final model. The interference, which reduces parameter magnitudes in the merged model and eliminates subtle distinctions among values, can attribute to the redundant delta parameters, i.e., the differences in values between fine-tuned and pre-trained parameters, resulted from fine-tuning. Previous studies on model pruning <cit.> have shown that during fine-tuning, many model parameters can change over the course of fine-tuning but only have a small impact on performance. However, when merging a parameter that is influential for one model but redundant (i.e. not influential) for other models, the influential value may be obscured by the redundant values, lowering the overall model performance. This work reveals the dense properties of the delta parameters resulted from SFT. To mitigate the dense property of SFT, we propose an effective interference resolution which reduces the redundancy by sparsifying the delta parameters by adding a L1-norm penalty to the original SFT loss function. The existing findings indicate that the inclusion of the L1 term enhances the sparsity of the SFT. This method of implicitly inducing sparsity has been evaluated against a technique that introduces sparsity explicitly, i.e., DARE <cit.>, demonstrating the advantages of employing the L1-norm on LLM's performances in downstream tasks. Finally, the sparse delta parameters from SFT and preference alignment are merged into a single stronger model. Different merging methods are assessed, and TIES and Task Arithmetic are shown to be the best model merging methods, depending on base models. The method of Parallel SFT_sparse+DPO merged through TIES based on Mistral-7B sets a new benchmark for 7B models, i.e., 0.6524 on average over the six tasks in HuggingFace Open LLM Leaderboard. Notably, Parallel SFT_sparse+DPO consistently outperforms Parallel SFT+DPO across all model merging methods, showing the effectiveness and robustness of the PAFT training paradigm. The contributions of this paper are threefold: * Evidence is presented that parallel training of SFT and preference alignment outperforms sequential training, effectively reducing the alignment tax. 
* The significance of sparse model integration is highlighted as a mean to prevent model conflict while preserving the full capability of each model. We demonstrate the superiority of the L1-norm over DARE as a more effective and higher-quality method for promoting sparsity in model training across various model merging techniques. * We conduct comprehensive evaluation of PAFT on well-known public benchmarks including Open LLM Leaderboard and AlpacaEval. The PAFT-ed 7B model achieved Rank #1 in the 7B/8B model category on the Open LLM Leaderboard, and the PAFT-ed 70B model topped the Leaderboard globally. § METHODOLOGY §.§ Problem Setting Given a pre-trained LLM, such as Mistral and Llama, we aim to optimize the model for a wide range of downstream tasks by fine-tuning it either fully or with parameter-efficient tuning such as LoRA <cit.>, using SFT and preference alignment. Throughout this paper, θ denotes the trainable parameters; θ_pre denotes the parameters of the pre-trained model; θ_sft denotes the parameters of the model fine-tuned with SFT; θ_xpo denotes the parameters of the model fine-tuned with preference alignment, such as PPO <cit.>, DPO <cit.> and ORPO <cit.>, etc.; δ_sft=θ_sft-θ_pre denotes the delta parameters between the SFT-ed model and the pre-trained model; and δ_xpo=θ_xpo-θ_pre denotes the delta parameters between the preference-aligned model and the pre-trained model. §.§ Parallel Training SFT and preference alignment are two distinct methodologies designed to enhance the capabilities of pre-trained LLMs for specific applications. SFT focuses on boosting the performance of LLMs on downstream tasks by fine-tuning them with datasets that closely resemble the target task. This process tailors the model's responses to be more accurate and relevant for a specific use-case. In contrast, preference alignment, such as RLHF, DPO and ORPO, etc., is a methodology that refines a model's outputs based on human preferences. It generally fine-tunes the model on pairs of responses to an input query, one of which is preferred over the other one. Preference alignment uses such feedback signal to guide the model towards generating outputs that align with human expectation and ethical standards. This approach is particularly valuable for addressing the ethical considerations that arise when deploying LLMs in real-world scenarios. Nowadays, researchers have applied SFT to enhance the performance of LLMs on targeted tasks, and then employed preference alignment to further align the models with human preferences. However, this sequential application of SFT followed by preference alignment has often led to a compromise in task-specific performance - a phenomenon referred to as the alignment tax. This occurs because the distinct objectives of SFT and preference alignment can sometimes be at odds, with the alignment process potentially undoing some of the task-specific optimizations achieved through SFT. We address the challenge of the alignment tax by a novel approach that involves SFT and preference alignment concurrently using adapter training, such as LoRA <cit.>. This method takes full advantages and strengths of both SFT and preference alignment without sacrificing performance in either one, i.e., ensuring that the resulting model maintains high performance in downstream tasks while also being aligned with human preferences, thus overcoming the limitations associated with the alignment tax. 
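The delta-parameter notation above can be made concrete with a short sketch; the dictionary-of-tensors representation, the zero tolerance, and the L1 helper (anticipating the sparsity regularizer introduced in the Sparse Merging subsection below) are illustrative assumptions.

```python
import torch

def delta_parameters(theta_ft: dict, theta_pre: dict) -> dict:
    """delta = theta_finetuned - theta_pretrained, computed tensor by tensor."""
    return {name: theta_ft[name] - theta_pre[name] for name in theta_pre}

def sparsity(delta: dict, tol: float = 1e-8) -> float:
    """Fraction of (near-)zero entries across all delta tensors."""
    total = sum(t.numel() for t in delta.values())
    zeros = sum((t.abs() <= tol).sum().item() for t in delta.values())
    return zeros / total

def l1_penalty(delta: dict) -> torch.Tensor:
    """||delta||_1, added to the SFT loss with weight lambda to encourage a sparse delta_sft."""
    return sum(t.abs().sum() for t in delta.values())

# Toy example with two "layers" standing in for adapter weights:
theta_pre = {"w1": torch.zeros(4, 4), "w2": torch.zeros(4)}
theta_sft = {"w1": 0.01 * torch.randn(4, 4), "w2": torch.zeros(4)}
delta_sft = delta_parameters(theta_sft, theta_pre)
print(f"sparsity = {sparsity(delta_sft):.2f}, L1 = {l1_penalty(delta_sft).item():.3f}")
```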
During the training process specifically, based on the same pre-trained model θ_pre, the two separate adapter parameters, denoted as δ_sft and δ_xpo, are learned in parallel from downstream ground truth and human preferences, respectively. The proposed PAFT seeks to merge the δ_sft and δ_xpo in an effective way of avoiding feature interference. Figure <ref> compares the typical staged training pipeline and our parallel training pipeline PAFT. §.§ Sparse Merging The integration of dense neural network models often results in a suboptimal combined model due to the phenomenon of parameter interference. This challenge has led researchers to explore alternative strategies. Our investigations reveal that by increasing sparsity of a fine-tuned adapter, the performance of merging the adapter with the base model can be improved. Specifically, the parameter δ_xpo, derived from adapter training like LoRA, demonstrates clear sparsity, as depicted in Figure <ref>. In contrast, the sparsity in a SFT adapter, denoted by δ_sft, is not pronounced. To increase the sparsity within δ_sft, we propose the incorporation of an L1 regularization term during the SFT process. This modification to the fine-tuning procedure is expressed mathematically as follows: L_SFT_sparse = L_SFT + λ·δ_sft_1 Here, L_SFT represents the conventional cross-entropy loss function, and λ is a weighting factor that controls the strength of the sparsity regularization. Our results indicate that this approach significantly enhances the sparsity of δ_sft, with sparsity levels over 90%, as illustrated by the SFT_sparse in Figure <ref>. Given sparse representations for adapters of both SFT and preference alignment, the challenge is to effectively merge these delta parameters, δ_sft and δ_xpo, with the original pre-trained model, θ_pre, while preserving the performance benefits of SFT and preference alignment. The merging process can be formalized by the equation: θ_merge = f(θ_pre, δ_dpo, δ_sft) In our study, we explore a variety of merging methods proposed in the literature, including SLERP, Task Arithmetic, TIES, DARE TIES, and Linear. Detailed discussions of these merging methods are provided in the Related Work section. § EXPERIMENTS §.§ Evaluation Settings In this study, we conduct comprehensive evaluation on both the Open LLM leaderboard provided by HuggingFace and the AlpacaEval benchmark. The Open LLM Leaderboard benchmark suite encompasses a diverse set of six benchmark tasks, namely ARC, HellaSwag, MMLU, TruthfulQA, Winogrande, and GSM8K, along with their aggregated performance metrics. In our experiments, we employ two state-of-the-art pre-trained models: Mistral-7B <cit.> and Llama-3-8B[Note that while the Llama 3 model is referenced in our work, the official documentation for this model has not been released at the time of writing, and thus we cite its official GitHub site as a proxy: <https://github.com/meta-llama/llama3>]. This section presents the experimental results of merging the delta parameters obtained through SFT and DPO using the LoRA technique. We also study another preference alignment method ORPO for PAFT, which results in the same observations and conclusions as those from DPO. It shows the generalizability of PAFT to different preference alignment techniques. Due to space limit, we put the experimental results for ORPO in the appendix. Following the Zephyr work <cit.>, we use the UltraChat <cit.> dataset for SFT and the UltraFeedback <cit.> dataset for DPO. 
UltraChat is a self-refinement dataset consisting of 200K multi-turn dialogues generated by GPT-3.5-Turbo over 30 topics and 20 different types of text material. UltraFeedback consists of 64k prompts, each of which have four LLM responses that are rated by GPT-4 according to criteria like instruction-following, honesty, and helpfulness. We meticulously explore a spectrum of merging methods, including SLERP, Task Arithmetic, TIES, DARE-enhanced TIES, and Linear combination. Each of these merging strategies is scrutinized to determine its efficacy in integrating the sparsity-induced parameters from LoRA with the original pre-trained models. The goal is to ascertain which method most effectively preserves the performance enhancements attributed to SFT and DPO, thereby contributing to the advancement of model merging methods in LLM research. For training individual adapters, we have used the same settings as in the zephyr-7b-beta development[<https://github.com/huggingface/alignment-handbook/tree/main/recipes/zephyr-7b-beta>]. Our evaluation is conducted using the EleutherAI's LM Evaluation Harness framework <cit.>. We adhere to the same branch (b281b09) used by the HuggingFace Open LLM Leaderboard <cit.>, and evals are run with batch size 1 on an A100 GPU. The hyper parameter λ in Equation <ref> controls the sparsity of δ_sft. Empirical values 0.0001 and 0.001 are validated in our experiments to achieve reasonable sparsity. §.§ Parallel Training vs. Sequential Training To demonstrate the advantages of parallel training PAFT, we conducted empirical comparison of parallel, sequential and standalone training approaches on the six benchmark tasks using the two pre-trained models: Mistral-7B and Llama-3-8B. The results are given in Table <ref>. In the Mistral-7B model section, training with DPO alone improves the average score over the base model, while training with SFT alone doesn't show an improvement. This result reveals that SFT, while focusing on downstream tasks, inadvertently undermines performance due to a lack of alignment with human preferences. Conversely, DPO aims to harmonize the outputs of LLMs with human preferences, resulting in a noticeable improvement in the average score. Furthermore, we evaluated the sequential training of SFT with L1 regularization followed by DPO, which gave an average score of 0.6387. This score marginally surpasses that of standalone DPO, setting the stage for a comparison with parallel training outcomes. This outcome aligns with our initial hypothesis that during the DPO phase the model appears to discard much of the knowledge acquired in the SFT stage, i.e., alignment tax. Consequently, its performance exhibits only a marginal improvement over the training with DPO-alone. Additionally, we performed side-by-side evaluations of SFT_sparse+DPO training in both parallel and sequential manners. The findings indicate that training SFT with L1 regularization alongside DPO in parallel leads to a performance metric of 0.6524 when merging with the TIES method, over 2% higher than the score achieved by either DPO alone or by training SFT_sparse and DPO in sequence. This outcome can be explained by a notable drawback of sequential training which is its tendency to overlook much of the knowledge gained during the SFT stage, suggesting a suboptimal use of SFT data. In contrast, parallel training effectively combines the benefits from SFT and DPO by processing them concurrently. 
The benefits are mostly preserved during model merging, ensuring efficient utilization of both SFT and DPO data. Our work underscores the enhanced efficacy of the parallel training approach PAFT, which not only maintains the distinct advantages of SFT and DPO, but also outperforms these techniques when they are used separately or sequentially. §.§ Sparse Merging vs. Dense Merging Our study has demonstrated the advantages of incorporating sparsity into fine-tuned models. In the context of sequential training, the inclusion of L1 regularization has yielded a modest yet notable improvement. Specifically, in Table <ref>, the average score for the sequential SFT_sparse+DPO stands at 0.6387, surpassing the sequential SFT+DPO without L1 regularization, with a score of 0.6347. Although the improvement is marginal, it underscores the value of integrating the L1-norm to induce sparsity. The impact of sparsity becomes more pronounced when examining parallel training scenarios. Across all considered model merging techniques, Parallel SFT_sparse+DPO, i.e., PAFT, consistently outperforms its counterpart without L1 regularization, Parallel SFT+DPO, thereby highlighting the efficacy of the sparsity induced by L1-norm. Notably, in the case of the TIES and DARE TIES merging methods, the average score disparity is significant. With TIES, PAFT (SFT_sparse+DPO) achieves a score of 0.6524, while Parallel SFT+DPO without sparsification lags behind at 0.5893. Similarly, for DARE TIES, PAFT (SFT_sparse+DPO) scores 0.6479, outstripping Parallel SFT+DPO's 0.5726. This substantial margin illustrates the robustness of L1-norm sparsity for various merging methods. The same insights as given in the Mistral-7B section can be gained from the Llama-3-8B section in Table <ref>. PAFT on Llama-3-8B significantly outperforms Parallel SFT+DPO, sequential training and standalone training. The experimental results confirm the generalizability of PAFT to various pre-trained models. When comparing different model merging strategies, TIES generally performs better than other methods on both Mistral-7B and Llama-3-8B, exhibiting superior performance over DARE TIES. DARE, which stands for "Drop And REscale", is a method that explicitly increases sparsity by eliminating elements below a certain threshold and rescaling the remaining parameters. In contrast, the L1-norm introduces sparsity implicitly by integrating it into the objective function. Consequently, the impact of the eliminated terms is less pronounced in the final results compared to DARE. This comparison reveals the advantages of the L1-norm's explicit sparsity induction over the implicit approach employed by DARE. §.§ Comparison with State-of-the-art LLMs On the online Open LLM Leaderboard, we performed PAFT on the Neurotic-7B[<https://huggingface.co/liminerity/Neurotic-Jomainotrik-7b-slerp>] and MoMo-70B[<https://huggingface.co/leejunhyeok/MoMo-70B-LoRA-V1.2_1>] base models. The two PAFT-ed models significantly improved over the respective base models, and achieved Rank #1 in the 7B/8B model category and globally on the online Open LLM Leaderboard, respectively, showing the effectiveness of PAFT on various base models. Table <ref> gives the results of our PAFT-ed models and the existing state-of-the-art models on the Leaderboard. Additionally, we compared the two PAFT-ed models with existing state-of-the-art LLMs on the AlpacaEval benchmark <cit.>, where every model generates responses to 805 questions on different topics, mostly focused on helpfulness. 
The models are judged by GPT-4, and the final metric is the pairwise win-rate against GPT-4. As shown in Table <ref>, the PAFT-ed 70B model outperforms existing state-of-the-art LLMs, except GPT-4 Preview and Claude 3 Opus in LC (Length-controlled) Win-Rate. While the GPT-4 judge favors its own GPT model family, the PAFT-ed 70B model performs better than GPT-4 (03/14) and GPT 3.5 Turbo do. On the other hand, the PAFT-ed 7B model outperforms all the 7B/8B and smaller models on AlpacaEval. It even beats some larger models, such as DBRX Instruct and Mixtral 8x7B. § RELATED WORK §.§ SFT and Human Preference Alignment The groundbreaking achievements of BERT <cit.> and GPT <cit.> have underscored the significance of pretraining and supervised fine-tuning (SFT) techniques. To mitigate ethical concerns and ensure such language model outputs are aligned with human values, a subsequent alignment step employs human feedback to enhance the efficacy of pretraining <cit.>, fine-tuning <cit.>, and adaptability for scaling purposes <cit.>. <cit.> found that implicit task feedback often outperforms explicit user feedback, leading to other high-quality datasets of human-generated summaries to compare with those produced by LLMs, resulting in superior quality outputs compared to SFT and human benchmarks <cit.>. Recent advancements by models such as GPT <cit.>, Claude <cit.>, Llama <cit.>, and Gemini <cit.> have all leveraged human comparison feedback to refine output quality through alignment, a method also known as reinforcement learning from human feedback (RLHF). RLHF models employ the Bradley-Terry model to develop a reward function that emulates human preferences between two candidate responses <cit.>. This reward model lays the groundwork for applying reinforcement learning to LLMs, drawing inspiration from Proximal Policy Optimization (PPO) techniques <cit.>. Direct Preference Optimization (DPO) streamlines the alignment process by integrating reward training with LLM alignment, thereby simplifying the training regimen through a direct relationship between the reward function and policy in reinforcement learning <cit.>. However, the efficacy of DPO in practice remains an area for further exploration <cit.>. Odds-ratio Preference Optimization (ORPO) <cit.> is an alternative alignment paradigm that aims to replace sequential SFT + DPO with a single monolithic optimization algorithm. It directly optimizes for preferences between two candidate generations by maximizing the ratio of odds of the winning generation w.r.t. losing generation to simultaneously reward logits of desired tokens and penalize logits of undesired tokens. SFT and Human Preference Alignment serve distinct objectives and should be approached as components of a multi-objective optimization problem. SFT focused on enhancing the performance of LLMs in downstream tasks, whereas alignment seeks to address ethical concerns. Prior research on RLHF often treats alignment as a compromise that could potentially degrade the model's output quality while address ethical problems <cit.>. Consequently, SFT and alignment are typically implemented in a sequential manner to ensure the safety of LLMs while accepting some degree of capability loss <cit.>. In contrast, Bai et al. have claimed that 'Smaller models experience severe ‘alignment taxes’ – their performance on a wide variety of evaluations declines after RLHF training. 
However, we find a variety of alignment bonuses, with our 13B and 52B RLHF-trained models performing better at zero-shot NLP evaluations, and the same at few-shot evaluations' <cit.>. This divergence in findings motivates further exploration into the interplay between SFT and alignment. Specifically, there is a strong interest in devising a method to integrate SFT and alignment in such a manner that yields an 'alignment bonus.' §.§ Sparsity for LLMs As the size of LLMs continues to increase, the importance of compression becomes crucial for deploying them on edge devices. This is done to reduce costs and improve inference speed <cit.>. Various compression strategies for LLMs exist, with a focus on pruning <cit.> and Low Rank Adapters (LoRA) <cit.>. Pruning involves creating sparsity through pretraining, magnitude-based pruning, and fine-tuning the remaining weights <cit.>. LoRA suggests representing a matrix as the product of two low-rank matrices to reduce memory storage requirements <cit.>. Recent research has shown that the magnitudes of parameters trained by LoRA in SFT process are relatively small. A strategy has been developed where random pruning is applied to these small SFT parameters with a ratio p, followed by multiplying the remaining parameters by 1/1-p to enhance model performance <cit.>. Merging sparsity models trained on different tasks has led to significant improvements in downstream tasks like AlpacaEval and GSM8K. This method involves applying pruning to introduce more sparsity in SFT using LoRA. Other methods for inducing sparsity in SFT parameters exist like incorporating the L1 norm in the loss function, similar to techniques used in Lasso regression <cit.> and compressed sensing <cit.>. A Bayesian interpretation of the L1-norm on the weights amounts to assuming a standard Laplacian prior on the parameters which is centered more closely around mean of zero. This concept will guide the research in this paper. §.§ Model Merging Combining skills learnt from different types of datasets in a single model provides multiple benefits like better in-domain performance <cit.>, out-of-domain generalization <cit.>, and a more parameter efficient model w.r.t. specialized models. Joint multi-task learning is one way to achieve this, but it has several difficulties: it is costly to train a single model across all tasks and it is non-trivial to find the correct task-mix to ensure a jointly optimal performance across all tasks <cit.>. A wide variety of model merging methods to combine specialized models into a stronger merged model have emerged as an alternative to multi-task training. <cit.> introduced the paradigm of averaging model weights from separate fine-tuned models to create a stronger merged model in ModelSoup, achieving SOTA in several different benchmarks. Fisher merging from <cit.> proposed to improve upon naively averaging all model weights by instead using a weighted average of the parameters. They identified the importance of each individual parameter based on its Fisher Information to use as the coefficient in the weighted average. <cit.> further showed that one could influence the merged model’s performance in several ways via task-arithmetic on task-vectors (additive weight adaptors): forgetting undesired traits via negation, learning tasks by addition, or learning entirely new tasks by analogies. <cit.> proposed RegMean where they solve a local closed-form linear-regression problem to estimate the merged model parameters for each individual linear layer. 
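As a minimal illustration of two ingredients surveyed above, random drop-and-rescale pruning of delta parameters and task-vector merging, consider the sketch below. The drop ratio, the single scaling coefficient, and the dictionary-of-tensors representation are assumptions, and the sign-consensus filtering used by TIES (discussed next) is omitted.

```python
import torch

def drop_and_rescale(delta: dict, p: float = 0.9) -> dict:
    """Randomly zero a fraction p of each delta tensor and rescale the survivors
    by 1/(1-p), following the pruning strategy described above."""
    out = {}
    for name, t in delta.items():
        keep = (torch.rand_like(t) >= p).to(t.dtype)
        out[name] = t * keep / (1.0 - p)
    return out

def task_vector_merge(theta_pre: dict, deltas: list, scale: float = 1.0) -> dict:
    """Task-arithmetic merging: add (scaled) task vectors onto the pre-trained
    weights; negating a delta would instead remove the corresponding behaviour."""
    merged = {name: t.clone() for name, t in theta_pre.items()}
    for delta in deltas:
        for name, t in delta.items():
            merged[name] += scale * t
    return merged

# e.g.: theta_merge = task_vector_merge(theta_pre, [drop_and_rescale(delta_sft), delta_dpo])
```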
<cit.> demonstrated that the phenomenon of parameter interference during model-merging leads to performance degradation in merged models. They cited this interference to two main sources - redundant parameter-updates, i.e. updates not crucial to a model’s prediction, and sign disagreement between different parameter-updates. To overcome such destructive interference, they proposed TIES-Merging which has two filtering steps before model-merging. First, only the top-k% updates by magnitude are retained in each task-vector. Next, the dominant sign is chosen as sgn(Σ_i(sgn(θ_i))) and only those updates whose sign agrees with the dominant sign are finally averaged and merged. § CONCLUSIONS LLM fine-tuning generally undergoes a two-stage training process, with SFT applied initially, followed by preference alignment. Yet, research indicates that this sequential approach incurs an "alignment tax", compromising the LLM's overall performance. To counteract this, we advocate for a parallel training strategy PAFT which preserves the advantages of both SFT and preference alignment without incurring the alignment tax associated with sequential training. A significant hurdle in parallel training is the potential for conflict during the model merging phase, where the merging of different adapters can lead to diminished performance. In this paper, we propose the integration of an L1 regularization to the training loss during the SFT phase to induce sparsity, thereby reducing interference between models. Our experimental results demonstrate the efficacy of incorporating an L1-norm into the SFT process for sparsification and utilizing a parallel training framework over the typical sequential approach. When combining all of them together, i.e. Parallel SFT_sparse+DPO achieves the state-of-art results on both the LLM leaderboard by HuggingFace and the AlpacaEval benchmark. The ORPO experimental results given in the appendix show the same patterns, demonstrating the generalizability of our PAFT to various preference alignment methods. This comprehensive strategy highlights how the methods of integrating SFT with preference alignment can greatly enhance LLM fine-tuning. § LIMITATIONS There are a couple of limitations of the parallel training of SFT and preference alignment. Firstly, we have found that sparsity aids in model merging, though the reasons behind this benefit and why DPO initially induces sparsity in the adapter remain unanswered. Moreover, sparsity can reduce model interference during merging, but the scalability of this approach is still in question. If a merged model deployed in production fails in some cases, it is underexplored how to improve the model responses in these cases. Directly performing SFT on the merged model may lead to catastrophic forgetting of what it learned earlier. On the other hand, parallel training necessitates merging a new SFT-ed model with the existing merged model, adding complexity to the process. The primary risk associated with this paper pertains to its data usage. Currently, UltraChat data is employed for SFT, while UltraFeedback data is used for preference alignment. UltraChat consists solely of multi-round dialogue data, which inherently limits its format diversity. To enhance the robustness and applicability of the model, it is crucial to incorporate a wider variety of data types beyond dialogue data. Additionally, UltraFeedback relies on annotations generated by GPT-4, which inevitably include errors and in-accurate feedback. 
To mitigate these risks, higher-quality datasets are needed in the future. § PAFT PERFORMANCE WITH A DIFFERENT PREFERENCE OPTIMIZATION ALGORITHM The stronger performance of PAFT is also confirmed with a different choice of preference alignment algorithm. Table <ref> shows experimental results with ORPO as the preference alignment method alongside SFT with the Llama-3-8B base model. We observe a similar trend where finetuning the LLM sequentially via SFT followed by ORPO underperforms all the parallelly trained variants. Even simple model merging methods such as Task Arithmetic and Linear merging perform strongly, outperforming more complicated methods like DARE TIES in both experiment settings.
http://arxiv.org/abs/2406.18516v1
20240626174030
Denoising as Adaptation: Noise-Space Domain Adaptation for Image Restoration
[ "Kang Liao", "Zongsheng Yue", "Zhouxia Wang", "Chen Change Loy" ]
cs.CV
[ "cs.CV" ]
§ ABSTRACT Although deep learning-based image restoration methods have made significant progress, they still struggle with limited generalization to real-world scenarios due to the substantial domain gap caused by training on synthetic data. Existing methods address this issue by improving data synthesis pipelines, estimating degradation kernels, employing deep internal learning, and performing domain adaptation and regularization. Previous domain adaptation methods have sought to bridge the domain gap by learning domain-invariant knowledge in either feature or pixel space. However, these techniques often struggle to extend to low-level vision tasks within a stable and compact framework. In this paper, we show that it is possible to perform domain adaptation via the noise-space using diffusion models. In particular, by leveraging the unique property of how the multi-step denoising process is influenced by auxiliary conditional inputs, we obtain meaningful gradients from noise prediction to gradually align the restored results of both synthetic and real-world data to a common clean distribution. We refer to this method as denoising as adaptation. To prevent shortcuts during training, we present useful techniques such as channel shuffling and residual-swapping contrastive learning. Experimental results on three classical image restoration tasks, namely denoising, deblurring, and deraining, demonstrate the effectiveness of the proposed method. Code will be released at: <https://github.com/KangLiao929/Noise-DA/>.

§ INTRODUCTION Image restoration is a long-standing yet challenging problem in computer vision. It includes a variety of sub-tasks, e.g., denoising <cit.>, deblurring <cit.>, and deraining <cit.>, each of which has received considerable research attention. Many existing methods are based on deep learning, typically following a supervised learning pipeline. Since annotated samples are not available in real-world contexts, i.e., degradation is unknown, a common technique is to generate synthetic low-quality data from high-quality images based on some assumptions on the degradation process to obtain training pairs. This technique has achieved considerable success but is not perfect, as synthetic data cannot cover all unknown or unpredictable degradation factors, which can vary wildly due to uncontrollable environmental conditions. Consequently, existing restoration methods often struggle to generalize well to real-world scenarios. Extensive studies have been conducted to address the lack of real-world training data. Some methods improve the data synthesis pipeline to generate more realistic degraded inputs for training <cit.>.
Other blind restoration approaches estimate the degradation kernel from the real degraded input during inference and use it as a conditional input to guide the restoration <cit.>. Unsupervised methods <cit.> enhance input quality without relying on predefined pairs of clean and degraded images. These methods often use deep internal learning or self-supervised learning, where the model learns to predict clean images directly from the noisy or distorted data itself. In this paper, we investigate the problem assuming the existence of both synthetic data and real-world degraded images. This scenario fits a typical domain adaptation setting, where existing methods can be categorized into feature-space <cit.> and pixel-space <cit.> approaches. Both paradigms have their weaknesses: aligning high-level deep representations in feature space may overlook low-level variations essential for image restoration, while pixel-space approaches often involve computationally intensive adversarial paradigms that can lead to instability during training. In this work, we present a novel adaptation method for image restoration, which allows for a meaningful diffusion loss to mitigate the domain gap between synthetic and real-world degraded images. Our main idea stems from the observation shown in Fig. <ref>(a). Here, we measure the noise prediction error of a diffusion model conditioned on a noisy version of the target image. The trend in Fig. <ref>(a) shows that conditions with fewer corruption levels facilitate lower prediction errors of the diffusion model. In other words, “good” conditions give low diffusion loss, and “bad” conditions lead to high diffusion loss. While such a behavior may be expected, it reveals an interesting property of how conditional inputs could influence the prediction error of a diffusion model. Our method leverages this phenomenon by conditioning both the restored synthetic image and real image from a restoration network onto the diffusion model, as shown in Fig. <ref>(b). Both networks are jointly trained, with the restoration network optimized to provide “good” conditions to minimize the diffusion model's noise prediction error, aiming for a clean target distribution. The goal of providing good conditions drives the restoration network to learn to improve the quality of its outputs. After training, the diffusion model is discarded, leaving only the trained restoration network for inference. To bridge the gap between the restored synthetic and real outputs, our method carefully conceals the identity of the conditions. This prevents the diffusion model from simply learning to differentiate between synthetic and real conditions based on their channel index, avoiding a trivial shortcut in training. In addition, the pixel similarity between the noisy synthetic label and synthetic output is also easy to distinguish when they share the same clean image. To avoid the above shortcut learning, we design a channel shuffling layer at the beginning of the diffusion model. It randomly shuffles the channel index of synthetic and real-world conditions at each training iteration before concatenating them. We further propose a residual-swapping contrastive learning strategy to ensure the model genuinely learns to restore images accurately, rather than relying on easily distinguishable features. Our work represents the first attempt at addressing domain adaptation in the noise space for image restoration. 
We show the unique benefits offered by diffusion loss in eliminating the domain gap between the synthetic and real-world data, which cannot be achieved using existing losses. To verify the effectiveness of the proposed method, we conducted extensive experiments on three classical image restoration tasks, including denoising, deblurring, and deraining. § RELATED WORK Image Restoration. Image restoration aims to recover images degraded by factors like noise, blur, or data loss. Driven largely by the capabilities of various neural networks <cit.>, significant advancements have been made in sub-fields such as image denoising <cit.>, image deblurring <cit.>, and image deraining <cit.>. In image restoration, loss functions are essential for training models. For example, the L1 loss minimizes average absolute pixel differences, ensuring pixel-wise accuracy. Perceptual loss uses pre-trained neural networks to compare high-level features, ensuring perceptual similarity. Adversarial loss involves a discriminator distinguishing between real and restored images, pushing the generator to create more realistic outputs. However, restoration models trained on synthetic images with these conventional loss functions still cannot escape from a significant drop in performance when applied to real-world domains. To address the mismatch between training and testing degradations, some supervised image restoration techniques <cit.> improve the data synthesis pipeline, focusing on creating a training degradation distribution that balances accuracy and generalization in real-world scenarios. Some methods <cit.> estimate and correct the degradation kernels to improve the restoration quality. Our work is orthogonal to these methods, aiming to bridge the gap between training and testing degradations. Unsupervised learning methods for image restoration leverage models that do not rely on paired training samples <cit.>. Techniques like Noise2Noise <cit.>, Noise2Void <cit.>, and Deep Image Prior <cit.> exploit the intrinsic properties of images, where the network learns to restore images by understanding the natural image statistics or by self-supervision. These unsupervised approaches have proven effective in restoration tasks, achieving impressive results comparable to supervised learning methods. However, they often struggle with handling highly complex or corrupted images due to their reliance on learned distributions and intrinsic image properties, which may not fully capture intricate details and show limited generalization to other restoration tasks. Domain Adaptation. The concept of domain adaptation is proposed to eliminate the discrepancy between the source domains and target domains <cit.> to facilitate the generalization ability of learning models. Previous methods can be categorized into feature-space and pixel-space approaches. For example, feature-space adaptation methods <cit.> adjust the extracted features from networks to align across different domains. Among these methods, some classical techniques are developed like minimizing the distance between feature spaces <cit.> and introducing domain adversarial objectives <cit.>. Aligning high levels of deep representation may overlook crucial low-level variances that are essential for target tasks such as image restoration. In contrast, pixel-space domain adaptation methods <cit.> achieve distribution alignment directly in the raw pixel level, by translating source data to match the “style" of a target domain. 
While they are easier to understand and verify for effectiveness from domain-shifted visualizations, pixel-space adaptation methods require careful tuning and can be unstable during training. Recent methods <cit.> compensate for the limitation of isolated domain adaptation by jointly aligning feature space and pixel space, shared the similar pipeline with CycleGAN <cit.>. However, they tend to be computationally demanding due to the need to train multiple networks (generally two generators and two discriminators) and the complexity of the cycle consistency loss. Different from the above feature-space and pixel-space methods, we propose a new noise-space solution that preserves low-level appearance across different domains with a compact and stable framework. Diffusion Model. Diffusion models <cit.> have gained significant attention as a novel approach in generative modeling. They work by gradually transforming a simple distribution (usually Gaussian) into a complex distribution in a series of steps, reversing the diffusion process. This approach shows remarkable success in text-to-image generation <cit.> and image restoration <cit.>. Often, conditions are fed to the diffusion model for conditional generation, such as text <cit.>, class label <cit.>, visual prompt <cit.>, and low-resolution image <cit.>, to facilitate the approximation of the target distribution. In this work, we show that the diffusion’s forward denoising process has the potential to serve as a proxy task to improve the model’s generalization ability in image restoration tasks. § METHODOLOGY Problem Definition. We start by formulating the problem of noise-space domain adaptation in the context of image restoration. Given a labeled dataset[Following the notations in domain adaptation, we use “label” to represent the ground truth image in the task of image restoration.] from a synthetic domain and an unlabeled dataset from a real-world domain, we aim to train a model on both the synthetic and real data that can generalize well to the real-world domain. Supposed that 𝒟^s = {( x_i^s, y_i^s)}_i=1^N^s denotes the labeled dataset containing N^s samples from the source synthetic domain and 𝒟^r = { x_i^r}_i=1^N^r denotes the unlabeled dataset with N^r samples from the target real-world domain, where y^s is the clean image, x^s is the corresponding synthetic degraded image, and x^r is the real-world degraded image. Image Restoration Baseline. The image restoration network can be generally formulated as a deep neural network G(·; θ_G) with learnable parameter θ_G. This network is trained to predict the ground truth image y^s from its degraded observation x^s on the synthetic domain. The proposed noise space domain adaptation is not limited to a specific type of network architecture. One can choose from existing networks such as DnCNN <cit.>, U-Net <cit.>, RCAN <cit.>, and SwinIR <cit.>. The approach is also orthogonal to existing loss functions used in image restoration, e.g., L_1 or L_2 loss, Charbonnier loss <cit.>, perceptual loss <cit.>, and adversarial loss <cit.>. To better validate the generality of the proposed approach, we adopt the widely used U-Net architecture and the Charbonnier loss, denoted as ℒ_Res, as our baseline. In the joint training, the diffusion model is trained using a diffusion objective, ℒ_Dif, while the restoration network is updated using both the ℒ_Res and ℒ_Dif. The diffusion model is discarded after training. 
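A minimal sketch of the baseline objective ℒ_Res is given below; the ε constant and the single convolution standing in for the U-Net restoration network are illustrative assumptions.

```python
import torch

def charbonnier_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-3) -> torch.Tensor:
    """Charbonnier loss, a smooth variant of L1: mean(sqrt((pred - target)^2 + eps^2))."""
    return torch.sqrt((pred - target) ** 2 + eps ** 2).mean()

# Baseline supervised step: the restoration network G maps a degraded synthetic
# image x_s to a prediction that is compared against its clean label y_s.
x_s, y_s = torch.rand(4, 3, 64, 64), torch.rand(4, 3, 64, 64)
G = torch.nn.Conv2d(3, 3, kernel_size=3, padding=1)   # stand-in for the U-Net
loss_res = charbonnier_loss(G(x_s), y_s)
loss_res.backward()
```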
§.§ Noise-Space Domain Adaptation Ideally, the ground truth images and those restored images by an image restoration model from both synthetic and real-world data should lie in a shared distribution 𝒮 of high-quality clean images. However, attaining such an ideal model that can universally map any degraded images onto the distribution 𝒮, is exceedingly challenging. By assuming the restored images from synthetic and real-world data obey distinct distributions 𝒮^s and 𝒮^r, our goal is to align 𝒮^s and 𝒮^r with 𝒮. To this end, we introduce a diffusion model that conditions on 𝒮^s and 𝒮^r to approximate the target distribution 𝒮. During training, this diffusion model is expected to guide the conditional distributions 𝒮^s and 𝒮^r toward 𝒮. Given the commonly adopted case where the ground truth images from the synthetic dataset are available, we first explore adapting the target distribution 𝒮 with a perspective of paired data. Without loss of generality, let us consider a synthetic degraded image x^s with its ground truth y^s from the synthetic domain and a real degraded image x^r from the real-world domain. Using the restoration network G(·; θ_G), we can obtain the restored images ŷ^s and ŷ^r, respectively. Then, we introduce a diffusion denoising process as a proxy task in the training process. It employs the predicted images ŷ^s and ŷ^r as conditions to help the diffusion model fit the distribution of 𝒮. Following the notations in DDPM <cit.>, we denote the diffusion model as ϵ_θ and formulate its optimization to the following objective: ℒ_Dif=𝔼ϵ - ϵ_θ(ỹ^s, 𝐂(ŷ^s, ŷ^r),t) _2, where ỹ^s=√(α̅_t)y^s+√(1-α̅_t)ϵ, ϵ∼ N(0,I), α̅_t is the hyper-parameter of the noise schedule, and 𝐂(·,·) denotes the concatenation operation along the channel dimension. During the joint training shown in Fig. <ref>, supervision from the diffusion loss in Eq. (<ref>) will back-propagate to the conditions ŷ^s and ŷ^r if they are under-restored, i.e., far away from the expected distribution 𝒮. This encourages the preceding restoration network to align ŷ^s and ŷ^r as closely as possible to 𝒮. The joint training, however, could lead to trivial solutions or shortcuts, as shown in Fig. <ref>. For example, it is easy to distinguish the “synthetic” and “real” conditions by the pixel similarity between ŷ^s and ỹ^s or the channel index. Consequently, the restoration network will cheat the diffusion network by roughly degrading the high-frequency information in real-world images. As illustrated in Fig. <ref>(bottom), we identify three stages in this training process: (I) Diffusion network struggles to recognize which conditions aid denoising as both are heavily degraded, promoting the restoration network to enhance both; (II) Synthetic image is clearly restored and is easy to discriminate from its appearance; (III) The diffusion model distinguish between the conditions, leading the restoration network to focus on the synthetic data while ignoring the real-world data. r0.4 < g r a p h i c s > The proposed solution to eliminate the shortcut learning in diffusion. §.§ Eliminating Shortcut Learning in Diffusion To avoid the above shortcut in diffusion, as shown in Fig. <ref>, we first propose a channel shuffling layer f_cs to randomly shuffle the channel index of synthetic and real-world conditions at each iteration before concatenating them, i.e., 𝐂(f_cs(ŷ^s, ŷ^r))[We omit the shuffling operator f_cs for notation clarity in the following presentation.]. 
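The diffusion proxy loss defined above, together with the channel-shuffling layer just introduced, can be sketched as follows. This is a simplified illustration: `eps_theta` stands for any conditional noise-prediction network, `alpha_bar` for its cumulative noise schedule (a 1-D tensor of length T), and neither is the authors' exact implementation.

```python
import random
import torch

def shuffled_concat(cond_a, cond_b):
    """Channel shuffling f_cs: randomly permute the two conditions before
    concatenating them along the channel dimension."""
    pair = [cond_a, cond_b]
    random.shuffle(pair)
    return torch.cat(pair, dim=1)

def diffusion_proxy_loss(eps_theta, y_s, y_hat_s, y_hat_r, alpha_bar, t):
    """L_Dif: predict the noise added to the clean synthetic image y^s,
    conditioned on the restored synthetic (y_hat_s) and real (y_hat_r) images.
    Gradients flow through the conditions back into the restoration network."""
    eps = torch.randn_like(y_s)                         # epsilon ~ N(0, I)
    a = alpha_bar[t].view(-1, 1, 1, 1)                  # per-sample \bar{alpha}_t
    y_tilde = a.sqrt() * y_s + (1.0 - a).sqrt() * eps   # noised diffusion input
    eps_pred = eps_theta(y_tilde, shuffled_concat(y_hat_s, y_hat_r), t)
    return (eps - eps_pred).pow(2).mean()
```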
We show in the experiments that this strategy is crucial to bridge the gap of synthetic and real data. In addition to channel shuffling, we also devise residual-swapping contrastive learning to ensure the network learns to restore genuinely instead of overfitting the paired synthetic appearance. Using the ground truth noise ϵ as the anchor, we construct a positive example ϵ^pos derived from Eq. (<ref>): ϵ^pos = ϵ_θ(ỹ^s, 𝐂(ŷ^s, ŷ^r),t), i.e., the expected noise from the diffusion model conditioning on restored synthetic and real-world images. We then swap the residual maps of these two conditions and formulate a negative example ϵ^neg as follows: ϵ^neg = ϵ_θ(ỹ^s, 𝐂(ŷ^s← r, ŷ^r← s),t),  ŷ^s← r = x^s ⊕ℛ^r,  ŷ^r← s = x^r ⊕ℛ^s, where ℛ^s and ℛ^r are the estimated residual maps of the corresponding synthetic image and real-world image from the restoration network, and ⊕ is the pixel-wise addition operator. By swapping the residual maps of two conditions, we constrain the diffusion model to repel the distance between the wrong restored results and the expected clean distribution regardless of their context. Based on the positive, negative, and anchor examples, the residual-swapping contrastive learning can be formulated as: ℒ_Con = max( ‖ϵ-ϵ^pos‖_2 - ‖ϵ-ϵ^neg‖_2 + δ, 0), where δ denotes a predefined margin to separate the positive and negative samples. In this way, the loss of diffusion model takes the mean of Eq. (<ref>) and Eq. (<ref>). In the above formulation, the synthetic restored image of the condition, denoted as ŷ^s, and the input to the diffusion model, represented as ỹ^s, form a pair of data with evident pixel-wise similarity. This similarity can potentially mislead the diffusion model to ignore the real restored image ŷ^r in condition as analyzed in Fig. <ref>. It is important to note that the distribution 𝒮 encapsulates the domain knowledge of high-quality clean images, including but not limited to the ground truth images in the synthetic dataset. Motivated by this observation, the proposed method can be further extended by replacing the noisy input ỹ^s with ỹ^c, defined as ỹ^c=√(α̅_t)y^c+√(1-α̅_t)ϵ, where y^c is randomly sampled from an unpaired extensive high-quality image dataset. This strategy disrupts the pixel-wise similarity between the “synthetic” condition and the diffusion input, thus enforcing the diffusion model to guide both the “synthetic” and “real” conditions predicted by the restoration network at the domain level. We will provide an ablation on this setting in Sec. <ref>. §.§ Training The image restoration network and diffusion model are jointly optimized by: ℒ = ℒ_Res + λ_Dif[ ℒ_Dif + ℒ_Con/2]. Following previous works <cit.>, we gradually change λ_Dif from 0 to β to avoid distractions for the main image restoration task during the early stages of the training process: λ_Dif = (2/1 + exp(-γ· p) - 1)·β, where γ and β are empirically set to 5 and 0.2 in all experiments, respectively. And p = min(n/N, 1), where n denotes the current epoch index and N represents the total number of training epochs. §.§ Discussion The proposed denoising as adaption is reminiscent of the domain adversarial objective proposed by Ganin and Lempitsky <cit.>. The main difference is that we do not use a domain classifier with a gradient reversal layer but a diffusion network for the loss. We categorize methods like <cit.> as feature-space domain adaptation approaches. 
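Before that comparison, the residual-swapping contrastive term and the loss-weight ramp defined in this section can be sketched as follows. The margin `delta`, the residual form of the restorer output (y_hat = x + R), and the helper names are assumptions for illustration; channel shuffling of the condition order is omitted here for brevity.

```python
import math
import torch
import torch.nn.functional as F

def residual_swap_contrastive(eps_theta, eps, y_tilde, t,
                              x_s, x_r, res_s, res_r, delta=0.1):
    """L_Con: the correctly conditioned prediction (positive) should be closer
    to the true noise than the prediction whose conditions carry swapped
    residual maps (negative)."""
    pos_cond = torch.cat([x_s + res_s, x_r + res_r], dim=1)  # (y^s_hat, y^r_hat)
    neg_cond = torch.cat([x_s + res_r, x_r + res_s], dim=1)  # residuals swapped
    d_pos = (eps - eps_theta(y_tilde, pos_cond, t)).flatten(1).norm(dim=1)
    d_neg = (eps - eps_theta(y_tilde, neg_cond, t)).flatten(1).norm(dim=1)
    return F.relu(d_pos - d_neg + delta).mean()

def lambda_dif(epoch, total_epochs, gamma=5.0, beta=0.2):
    """Ramp the adaptation weight from 0 to beta over training."""
    p = min(epoch / total_epochs, 1.0)
    return (2.0 / (1.0 + math.exp(-gamma * p)) - 1.0) * beta

def total_loss(l_res, l_dif, l_con, epoch, total_epochs):
    """L = L_Res + lambda_Dif * (L_Dif + L_Con) / 2."""
    return l_res + lambda_dif(epoch, total_epochs) * (l_dif + l_con) / 2.0
```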
Unlike these approaches, we show that denoising as adaptation is more well-suited for image restoration as it can better preserve low-level appearance in the pixel-wise noise space. Compared to pixel-space approaches that usually require multiple generator and discriminator networks, our method adopts a compact framework incorporating only a single additional denoising U-Net, ensuring stable adaptation training. After training, the diffusion network is discarded, requiring only the learned restoration network for testing purposes. The framework comparison of the above three types of methods is presented in Sec. <ref> of the Appendix. § EXPERIMENTS Training Dataset. For image denoising, we follow previous works <cit.> and construct the synthetic training dataset based on DIV2K <cit.>, Flickr2K <cit.>, WED <cit.>, and BSD <cit.>. The noisy images are obtained by adding the additive white Gaussian noise (AWGN) of noise level σ∈ [0, 75] to the source clean images. We use the training dataset of SIDD <cit.> as the real-world data. For image deraining, the synthetic and real-world training datasets are respectively obtained from Rain13K <cit.> and SPA <cit.>. For image deblurring, GoPro <cit.> and RealBlur-J <cit.> are selected as the synthetic and real-world training datasets, respectively. Please note that we only use the degraded images from these real-world datasets (without the ground truth) for training purposes. For large-scale unpaired clean images, all images in the MS-COCO dataset <cit.> are used. Testing Dataset. The testing images of the real-world datasets (SIDD <cit.>, SPA <cit.>, RealBlur-J <cit.>) are employed to evaluate the performance of the corresponding image restoration models. Training Settings. To train the diffusion model, we adopt α conditioning and the linear noise schedule ranging from 1e-6 to 1e-2 following previous works <cit.>. Moreover, the EMA strategy with a decaying factor of 0.9999 is also used across our experiments. Both the restoration and diffusion networks are trained on 128 × 128 patches, which are processed with random cropping and rotation for data augmentation. Our model is trained with a fixed learning rate 5e-5 using Adam <cit.> algorithm and the batch size is set to 40. All experiments are conducted on NVIDIA A100 GPUs. Metrics. The performance of various methods is mainly evaluated using the classical metrics: PSNR, SSIM, and LPIPS. For the image deraining task, we calculate PSNR/SSIM scores using the Y channel in YCbCr color space following existing methods <cit.>. §.§ Comparisons with State-of-the-Art Methods We implement the proposed noise-space domain adaptation method using a handy and classical U-Net architecture <cit.>. To validate its effectiveness, we compare the proposed method with previous domain adaptation approaches, including DANN <cit.>, DSN <cit.>, PixelDA <cit.>, and CyCADA <cit.>, covering the feature-space and pixel-space adaptation solutions. For the purpose of a fair comparison, we retrained these methods with the same standard settings and datasets. In addition, we also consider some unsupervised image restoration methods and representative supervised methods such as Ne2Ne <cit.>, MaskedD <cit.>, NLCL <cit.>, SelfDeblur <cit.>, VDIP <cit.>, and Restormer <cit.>. Comparison Results. The quantitative and qualitative comparison results are shown in Tab. <ref>-<ref> and Fig. <ref>-<ref>. From the comparison results, the proposed method leads the comparison methods on three image restoration tasks. 
In particular, previous feature-space domain adaptation methods <cit.> fail to perceive the crucial low-level information and pixel-space domain adaptation methods <cit.> yield inferior results since the precise style transfer between two domains is hard to control during the adversarial training. Moreover, the self-supervised and unsupervised restoration methods <cit.> show noticeable artifacts and limited generalization performance due to some inevitable information loss and hand-crafted designs on specific degradations. By contrast, our method ensures a fine domain adaptation in the pixel-wise noise space without introducing unstable training. Analysis. From the above results, we can observe that the proposed method enables noticeable improvements beyond the Vanilla baseline (trained only with synthetic datasets) on the image restoration tasks involved with high-frequency noises, such as image denoising and image deraining. Especially for image denoising, +8.13/0.3070 improvements on PSNR/SSIM metrics are achieved. We argue that the target of image denoising naturally fits that of the forward denoising process in the diffusion model. It is more sensitive to other Gaussian-like noises with respect to the pre-sampled noise space. Thus, an intense diffusion loss would be back-propagated if the conditioned images are under-restored, and the preceding restoration network tries to eliminate the noises on both the synthetic and real-world images as much as possible. r0.5 Quantitative metrics of the proposed method (Ours) and its extension on unpaired condition case (Our-Ex). The results are formed with PSNR/SSIM/LPIPS. The best and second best scores are highlighted and underlined. Task Ours Ours-Ex Denoising 34.71/0.9202/0.0903 33.44/0.8938/0.1064 Deraining 34.39/0.9571/0.0462 34.20/0.9587/0.0444 Deblurring 26.46/0.8048/0.1363 26.44/0.8030/0.1313 Extension. As mentioned in Sec. <ref>, our method can extend to the unpaired condition case by relaxing the diffusion's input with the image from other clean datasets. Thus, the shortcut issue can be directly eliminated since the trivial solutions such as matching the pixel's similarity between input and condition do not exist. Such an extension keeps the channel shuffling layer but is free to the residual swapping contrastive learning. We show the quantitative evaluation in Tab. <ref>. The results demonstrate that although the condition and diffusion input are unpaired, our method can still learn to adapt the restored results from the synthetic and real-world domains to the clean image distribution, which also complements the restoration performance of the paired solution in some tasks. More qualitative results are presented in the Appendix. §.§ Ablation Studies To evaluate the effectiveness of different components in the proposed method, we conduct ablation studies regarding the sampled noise levels of the diffusion model, determined by the time-step t, and the training strategies to avoid shortcut learning, as shown in Tab. <ref> and Fig. <ref>. Concretely, with low noise intensity, e.g., t ∈ [1, 100], it is easy for the diffusion model to discriminate the similarity of paired synthetic data even when the restored conditions are under-restored. As a result, the shortcut learning comes earlier during the training process and the real-world degraded image is heavily corrupted by the restoration network, of which most all details are filtered. 
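A rough sketch of this unpaired variant is given below; the only change to the joint training is how the diffusion input is built. The clean-image sampler and schedule are placeholders.

```python
import torch

def unpaired_diffusion_input(y_c, alpha_bar, t):
    """Extension: noise a clean image y^c sampled from an unpaired dataset
    (e.g., a generic high-quality image collection) instead of the paired
    synthetic ground truth y^s, breaking pixel-wise similarity between the
    diffusion input and the 'synthetic' condition."""
    eps = torch.randn_like(y_c)
    a = alpha_bar[t].view(-1, 1, 1, 1)
    return a.sqrt() * y_c + (1.0 - a).sqrt() * eps, eps
```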
On the other hand, when the intensity of the sampled noise is high, e.g., t ∈ [900, 1000], the diffusion model is hard to converge and the whole framework has fallen into a local optimum. By sampling the noise from a more diverse range with t ∈ [1, 1000], the restored results can be gradually adapted to the clean distribution. Moreover, the generalization ability of the restoration network gains further improvement using the designed channel shuffling layer (CS) and residual-swapping contrastive learning strategy (RS), which effectively eliminates the shortcut learning of the diffusion model. Therefore, higher restoration performance on real-world images and more realistic visual appearance can be observed from (d) to (e) and (f) in Tab. <ref> and Fig. <ref>. r0.4 < g r a p h i c s > Scalability of the proposed method on different network architectures. §.§ Scalability We further validate the scalability of the proposed method, using different variants of U-Net-based image restoration networks and other types of architectures such as the Transformer-based network <cit.>. In particular, we classify these networks based on their model sizes and obtain: Unet-T, Unet-S (the model investigated in the above experiments), Unet-B, Uformer-T, Uformer-S, and Uformer-B. More network details are listed in the Appendix. The quantitative results of PSNR vs. computational cost on SIDD test dataset <cit.> are shown in Fig. <ref>. As we can observe, as the complexity and parameter increase, the vanilla restoration network (orange elements) tends to overfit the training synthetic dataset and perform worse on the test real-world dataset. In contrast, the proposed domain adaptation approach method can improve the generalization ability of image restoration models with various sizes and architectures (blue elements). It is also interesting that for each type of architecture, our method can facilitate better adaptation performance as the complexity of the restoration network increases, demonstrating its effectiveness in addressing the overfitting problem of large models. §.§ Limitation and Broader Impacts In the proposed noise-space domain adaptation, we discard the diffusion model after its joint training with the image restoration network. Like previous domain adaptation methods, we need to retrain the proxy model once new datasets or tasks are involved. However, our learned diffusion model has a strong capacity to discriminate if the restored results are good or inferior. Thus, it could be leveraged to directly provide the prior knowledge to inspire new restoration-related tasks. Consequently, the adaptation from the synthetic domain to a new real-world domain would be achieved more efficiently. We leave it as one of our future work. For the broader impacts of this work, image restoration can improve the quality and accessibility of visual information in various fields, including medical imaging, satellite imagery, and historical document preservation. It can benefit better diagnostic accuracy in healthcare, more precise environmental monitoring, and the safeguarding of cultural heritage. While the image restoration network can vastly improve degraded images, it also carries the risk of inadvertently editing and altering the original information, which may compromise the authenticity of the restored images. § CONCLUSION In this work, we have presented a novel approach that harnesses the diffusion model as a proxy network to address the domain adaptation issues in image restoration tasks. 
Different from previous feature-space and pixel-space domain adaptation approaches, the proposed method adapts the restored results to their shared clean distribution in the pixel-wise noise space, resulting in significant low-level appearance improvements within a compact and stable training framework. To mitigate the shortcut issue arising from the joint training of the restoration and diffusion models, we randomly shuffle the channel index of two conditions and propose a residual-swapping contrastive learning strategy to prevent the diffusion model from discriminating the conditions based on the paired similarity. Furthermore, the proposed method can be extended by relaxing the input constraint of the diffusion model, introducing diverse unpaired clean images as denoising input. Experimental results have demonstrated the effectiveness of the proposed noise-space approach beyond existing feature-space and pixel-space methods on image restoration tasks. In the future, we plan to further investigate adapting the synthetic source domain to the real-world target domain using diffusion models, particularly in other dense prediction vision tasks. unsrtnat § APPENDIX § MORE IMPLEMENTATION DETAILS §.§ Condition Evaluation on Diffusion Model This work is inspired by the beneficial effects that favorable conditions facilitate the denoising process of the diffusion model, as shown in Fig <ref>(a). In this preliminary experiment, we first condition and train the diffusion model with an additional input in addition to its conventional input. Then, we test the noise prediction performance of this model under different qualities of the condition. To be specific, we corrupt the condition by adding the additive white Gaussian noise (AWGN) of noise level σ∈ [0, 80] to its original clean images, which are performed on 1,000 images in the MS-COCO test dataset <cit.>. The noise prediction error of the diffusion model is evaluated using the mean square error (MSE) metric. §.§ Comparison Settings In comparison experiments, we mainly compare the proposed approach with three types of previous methods: domain adaptation methods, including DANN <cit.>, DSN <cit.>, PixelDA <cit.>, and CyCADA <cit.>; unsupervised image restoration methods, including Ne2Ne <cit.>, MaskedD <cit.>, NLCL <cit.>, SelfDeblur <cit.>, and VDIP <cit.>; some representative supervised methods which serve as strong baselines in image restoration such as Restormer <cit.>, to comprehensively evaluate generalization performance of different methods. §.§ Scalability Evaluation To provide a comprehensive evaluation of the proposed method, we apply six variants of the image restoration network in our experiments, including three variants of convolution-based network <cit.>: Unet-T (Tiny), Unet-S (Small), and Unet-B (Base); and three variants of Transformer-based network <cit.>: Uformer-T (Tiny), Uformer-S (Small), and Uformer-B (Base). These variants differ in the number of feature channels (C) and the count of layers at each encoder and decoder stage. 
The specific configurations, computational cost, and the parameter numbers are detailed below: * Unet-T: C=32, depths of Encoder = {2, 2, 2, 2}, GMACs: 3.14G, Parameter: 2.14M, * Unet-S: C=64, depths of Encoder = {2, 2, 2, 2}, GMACs: 12.48G, Parameter: 8.56M, * Unet-B: C=76, depths of Encoder = {2, 2, 2, 2}, GMACs: 17.58G, Parameter: 12.07M, * Uformer-T: C=16, depths of Encoder = {2, 2, 2, 2}, GMACs: 15.49G, Parameter: 9.50M, * Uformer-S: C=32, depths of Encoder = {2, 2, 2, 2}, GMACs: 34.76G, Parameter: 21.38M, * Uformer-B: C=32, depths of Encoder = {1, 2, 8, 8}, GMACs: 86.97G, Parameter: 53.58M, and the depths of the Decoder match those of the Encoder. § DISCUSSION ON DIFFERENT DOMAIN ADAPTATION METHODS As discussed in Sec. <ref>, we described the effectiveness of the proposed method beyond the previous feature-space and pixel-space domain adaptation methods. We further show their specific framework in Fig. <ref>. In contrast to previous adaptation methods, our method is free to a domain classifier by introducing a meaningful diffusion loss function. § MORE VISUAL COMPARISON RESULTS We visualize more comparison results on the image denoising task in Fig. <ref>, image deraining task in Fig. <ref>, and image deblurring task in Fig. <ref>. We name the proposed method and its extension as `Ours' and `Ours-Ex', respectively. § MORE VISUAL RESULTS ON OTHER REAL-WORLD DATASETS To show the generalization ability of the proposed method, we also visualize the restored results of the proposed method on other real-world datasets <cit.> in Fig. <ref>, Fig. <ref>, Fig. <ref>. These datasets were not encountered during the network's training and fall outside the distribution of the trained datasets.
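For reference, these variants can be collected into a small configuration table. The dictionary below simply restates the listed settings (decoder depths mirror the encoder depths); the key names are chosen for illustration.

```python
# Backbone variants used in the scalability study; decoder depths mirror the
# encoder depths. GMACs and parameter counts are as reported above.
BACKBONE_CONFIGS = {
    "Unet-T":    {"base_channels": 32, "encoder_depths": (2, 2, 2, 2)},
    "Unet-S":    {"base_channels": 64, "encoder_depths": (2, 2, 2, 2)},
    "Unet-B":    {"base_channels": 76, "encoder_depths": (2, 2, 2, 2)},
    "Uformer-T": {"base_channels": 16, "encoder_depths": (2, 2, 2, 2)},
    "Uformer-S": {"base_channels": 32, "encoder_depths": (2, 2, 2, 2)},
    "Uformer-B": {"base_channels": 32, "encoder_depths": (1, 2, 8, 8)},
}
```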
http://arxiv.org/abs/2406.17758v1
20240625174225
MotionBooth: Motion-Aware Customized Text-to-Video Generation
[ "Jianzong Wu", "Xiangtai Li", "Yanhong Zeng", "Jiangning Zhang", "Qianyu Zhou", "Yining Li", "Yunhai Tong", "Kai Chen" ]
cs.CV
[ "cs.CV" ]
Solving Hard Mizar Problems with Instantiation and Strategy Invention Jan Jakubův1,20000-0002-8848-5537 Mikoláš Janota10000-0003-3487-784X Josef Urban10000-0002-1384-1613 ================================================================================================================ § ABSTRACT Work done when Jianzong is an intern at Shanghai AI Laboratory. †: Project Lead. In this work, we present MotionBooth, an innovative framework designed for animating customized subjects with precise control over both object and camera movements. By leveraging a few images of a specific object, we efficiently fine-tune a text-to-video model to capture the object's shape and attributes accurately. Our approach presents subject region loss and video preservation loss to enhance the subject's learning performance, along with a subject token cross-attention loss to integrate the customized subject with motion control signals. Additionally, we propose training-free techniques for managing subject and camera motions during inference. In particular, we utilize cross-attention map manipulation to govern subject motion and introduce a novel latent shift module for camera movement control as well. MotionBooth excels in preserving the appearance of subjects while simultaneously controlling the motions in generated videos. Extensive quantitative and qualitative evaluations demonstrate the superiority and effectiveness of our method. Models and codes will be made publicly available. § INTRODUCTION Generating videos for customized subjects, such as specific scenarios involving a particular dog's type or appearance, has gained research attention <cit.>. This customized generation field originated from text-to-image (T2I) generation methods, which learn a subject's appearance from a few images and generate diverse images of that subject <cit.>. Following them, subject-driven text-to-video (T2V) generation has seen increasing interest, which has found a wide range of applications in personal shorts or film production <cit.>. Can you imagine your toy riding along the road from a distance to the camera or your pet dog dancing on the street from the left to the right? However, rendering such lovely imaginary videos is a challenging task. It often involves subject learning and motion injection while maintaining the generative capability to generate diverse scenes. Notably, VideoBooth <cit.> trains an image encoder to embed the subject's appearance into the model, generating a short clip of the subject. However, the generated videos often display minimal or missing motion, resembling a "moving image." This approach underutilizes the motion diversity of pre-trained T2V models. Another line of works <cit.> fine-tunes the customized model on specific videos, requiring motion learning for each specific camera or subject motion type. Their pipelines restrict the type of motion and require fine-tuning a new adapter for each motion type, which is inconvenient and computationally expensive. The key lies in the conflict between subject learning and video motion preservation. During subject learning, training on limited images of the specific subject significantly shifts the distribution of the base T2V model, leading to significant degradation (e.g., blurred backgrounds and static video). Therefore, existing methods often need additional motion learning for specific motion control. 
In this paper, we argue that the base T2V model already has diverse motion prior, and the key is to preserve video capability during subject learning and digging out the motion control during inference. To ensure subject-driven video generation with universal and precise motion control, we present MotionBooth, which can perform motion-aware customized video generation. The videos generated by MotionBooth are illustrated in  <ref>. MotionBooth can take any combination of subject, subject motion, and camera motion as inputs and generate diverse videos, maintaining quality on par with pre-trained T2V models. MotionBooth learns subjects without hurting video generation capability, enabling a training-free motion injection for subject-driven video generation. First, during subject learning, we introduce subject region loss and video preservation loss, which enhance both subject fidelity and video quality. In addition, we present a subject token cross-attention loss to connect the customized subject with motion control signals. During inference, we propose training-free techniques to control the camera and subject motion. We directly manipulate the cross-attention maps to control the subject motion. We also propose a novel latent shift module to govern the camera movement. It shifts the noised latent to move the camera pose. Through quantitative and qualitative experiments, we demonstrate the superiority and effectiveness of the proposed motion control methods, and they can be applied to different base T2V models without further tuning. Our contributions are summarized as follows: 1) We propose a unified framework, MotionBooth, for motion-aware customized video generation. To our knowledge, this is the first framework capable of generating diverse videos by combining customized subjects, subject motions, and camera movements as input. 2) We propose a novel loss-augmented training architecture for subject learning. This includes subject region loss, video preservation loss, and subject token cross-attention loss, significantly enhancing subject fidelity and video quality. 3) We develop innovative, training-free methods for controlling subject and camera motions. Extensive experiments demonstrate that MotionBooth outperforms existing state-of-the-art video generation models. § RELATED WORK Text-to-video generation. T2V generation leverages deep learning models to interpret text input and generate corresponding video content. It builds upon earlier breakthroughs in text-to-image generation <cit.> but introduces more complex dynamics by incorporating motion and time <cit.>. Recent advancements particularly leverage diffusion-based architectures. Notable models such as ModelScopeT2V <cit.> and LaVie <cit.> integrate temporal layers within spatial frameworks. VideoCrafter1 <cit.> and VideoCrafter2 <cit.> address the scarcity of video data by utilizing high-quality image datasets. Latte <cit.> and W.A.L.T <cit.> adopt Transformers as backbones <cit.>. VideoPoet <cit.> explores generating videos autoregressively to produce consistent long videos. Recent Sora <cit.> excels in generating videos with impressive quality, stable consistency, and varied motion. Despite these advancements, controlling video content through text alone remains challenging, highlighting a continuing need for research into more refined control signals. Customized generation. Generating images and videos with customized subjects is attracting growing interest. 
Most works concentrate on learning a specific subject with a few images from the same subject <cit.> or specific domains <cit.>. Textual Inversion <cit.> proposes to train a new word to capture the feature of an object. In contrast, DreamBooth <cit.> fine-tunes the whole U-Net, resulting in a better IP preservation ability. Following them, many works explore more challenging tasks, such as customizing multiple objects <cit.>, developing common subject adapter <cit.>, and simultaneously controlling their positions <cit.>. However, the customization of video models from a few images often results in overfitting. The models fail to incorporate significant motion dynamics. A recent work, DreamVideo <cit.>, addresses this by learning specific motion types from video data. Yet, this method is restricted to pre-defined motion types and lacks the flexibility of text-driven input. In contrast, our work introduces MotionBooth to control both the subject and camera motions without needing pre-defined motion prototypes. Motion-aware video generation. Recent works explore incorporating explicit motion control in video generation. This includes camera and object motions. To control camera motion, existing works like AnimateDiff <cit.>, VideoComposer <cit.>, CameraCtrl <cit.>, Direct-A-Video <cit.>, and MotionCtrl <cit.> design specific modules to encode the camera movement or trajectory. These models usually rely on training on large-scale datasets <cit.>, leading to high computational costs. In contrast, our MotionBooth framework builds a training-free camera motion module that can be easily integrated with any T2V model, eliminating the need for re-training. For object motion control, recent works <cit.> propose effective methods to manipulate attention values during the inference stage. Inspired by these approaches, we connect subject text tokens to the subject position using a subject token cross-attention loss. This allows for straightforward control over the motion of a customized object by adjusting cross-attention values. § METHOD §.§ Overview Task formulation. We focus on generating motion-aware videos featured by a customized subject. To customize video subjects, we fine-tune the T2V model on a specific subject. This process can be accomplished with just a few (typically 3-5) images of the same subject. During inference, the fine-tuned model generates motion-aware videos of the subject. The motion encompasses both camera and subject movements, which are freely defined by the user. For camera motion, the user inputs the horizontal and vertical camera movement ratios, denoted as 𝐜_cam = [c_x, c_y]. For subject motion, the user provides a bounding box sequence [𝐁_1, 𝐁_2, ..., 𝐁_L] to indicate the desired positions of the subject, where L represents the video length. Each bounding box specifies the x-y coordinates of the top-left and bottom-right points for each frame. By incorporating these conditional inputs, the model is expected to generate videos that include a specific subject, along with predefined camera movements and subject motions. Overall pipeline. The overall pipeline of MotionBooth is illustrated in <ref>. During the training stage, MotionBooth learns the appearance of the given subject by fine-tuning the T2V model. To prevent overfitting, we introduce video preservation loss and subject region loss in <ref>. 
Additionally, we propose a subject token cross-attention (STCA) loss in <ref> to explicitly connect the subject tokens with the subject's position on cross-attention maps, facilitating the control of subject motion. Camera and subject motion control are performed during the inference stage. We manipulate the cross-attention maps by amplifying the subject tokens and their corresponding regions while suppressing other tokens in <ref>. This ensures that the generated subjects appear in the desired positions. By training on the cross-attention map, the STCA loss enhances the subjects' motion control. For camera movement, we introduce a novel latent shift module to shift the noised latent directly, achieving smooth camera movement in the generated videos in <ref>. §.§ Subject Learning Figure: Case study on subject learning. "Region" indicates subject region loss. "Video" indicates video preservation loss. The images are extracted from generated videos. Given a few images of a subject, previous works have demonstrated that fine-tuning a diffusion model on these images can effectively learn the appearance of the subject <cit.>. However, two significant challenges remain. First, due to the limited size of the dataset, the model quickly overfits the input images, including their backgrounds, within a few steps. This overfitting of the background impedes the generation of videos with diverse scenes, a problem also noted in previous works <cit.>. Second, fine-tuning T2V models using images can impair the model's inherent ability to generate videos, leading to severe background degradation in the generated videos. To illustrate these issues, we conducted a toy experiment. As depicted in <ref>, without any modifications, the model overfits to the background of the subject images. To address this, we propose computing the diffusion reconstruction loss solely within the subject region. However, even with this adjustment, the background in the generated videos remains over-smoothed. This degradation likely results from tuning a T2V model exclusively with images, which damages the model's original weights for video generation. To mitigate this, we propose incorporating video data as preservation data during the training process. Although training with video data but without subject region loss still suffers from overfitting, our approach, MotionBooth, can generate videos with detailed and diverse backgrounds. Preliminary. T2V diffusion models learn to generate videos by reconstructing noise in a latent space <cit.>. The input video is first encoded into a latent representation 𝐳_0. Noise ϵ is added to this latent representation, resulting in a noised latent 𝐳_t, where t represents the timestamp. This noising follows the forward process of a fixed-length Markov chain <cit.>. The diffusion model ϵ_θ is trained to predict this noise. The training loss, which is a reconstruction loss, is given by: ℒ = 𝔼_𝐳, ϵ∼𝒩(0, 𝐈), t, 𝐜 [ || ϵ - ϵ_θ(𝐳_t, 𝐜, t) ||^2_2 ], where 𝐜 is the conditional input used in classifier-free guidance methods, which can be text or a reference image. During inference, a pure noise 𝐳_T is gradually denoised to a clean latent 𝐳'_0, where T is the length of the Markov chain. The clean latent is then decoded back into RGB space to generate the video 𝐗'. Subject region loss. To address the challenge of overfitting backgrounds in training images, we propose a subject region loss.
The core idea is to calculate the diffusion reconstruction loss exclusively within the subject region, thereby preventing the model from learning the background. Specifically, we first extract the subject mask for each image. This can be done manually or through automatic methods, such as a segmentation model. In practice, we use SAM <cit.> to collect all the masks. The subject region loss is then calculated as follows: ℒ_sub = 𝔼_𝐳, ϵ∼𝒩(0, 𝐈), t, 𝐜 [ || (ϵ - ϵ_θ(𝐳_t, 𝐜_i, t)) ·𝐌 ||^2_2 ], where 𝐌 represents the binary masks for the training images. These masks are resized to the latent space to compute the dot product. 𝐜_i is a fixed sentence in the format "a [V] [class name]," where "[V]" is a rare token and "[class name]" is the class name of the subject <cit.>. We have found that with the subject region loss, the trained model effectively avoids the background overfitting problem. Video preservation loss. Image customization datasets like DreamBooth <cit.> and CustomDiffusion <cit.> provide excellent examples of multiple images from the same subject. However, in the customized video generation task, directly fine-tuning the video diffusion model on images leads to significant background degradation. Intuitively, this image-based training process may harm the original knowledge embedded in video diffusion models. To address this, we introduce a video preservation loss designed to maintain video generation knowledge by joint training with video data. Unlike the class-specific preservation data used in previous works <cit.>, we utilize common videos with captions denoted as 𝐜_v. Our experiments in <ref> demonstrate that common videos are more effective for subject learning and preserving video generation capabilities. The loss function is formulated as follows: ℒ_vid = 𝔼_𝐳, ϵ∼𝒩(0, 𝐈), t, 𝐜 [ || ϵ - ϵ_θ(𝐳_t, 𝐜_v, t) ||^2_2 ]. r0.4 < g r a p h i c s > Case study on subject token cross-attention maps. (b) and (c) are visualization of cross-attention maps on tokens “[V]” and “dog”. Subject token cross-attention loss. To control the subject's motion, we directly manipulate the cross-attention maps during inference. Since we introduce a unique token, “[V]”, in the training stage and associate it with the subject, we need to link this special token to the subject's position within the cross-attention maps. As illustrated in <ref>, fine-tuning the model does not effectively connect the unique token to the cross-attention maps. Therefore, we propose a Subject Token Cross-Attention (STCA) loss to guide this process explicitly. First, we extract the cross-attention map, 𝐀, at the tokens “[V] [class name]”. We then apply a Binary Cross-Entropy Loss to ensure that the corresponding attention map is larger at the subject's position and smaller outside this region. This process incorporates the subject mask and can be expressed as: ℒ_stca = - [𝐌log(𝐀) + (1 - 𝐌) log(1 - 𝐀) ]. During training, the overall loss function is defined as: ℒ = ℒ_sub + λ_1 ℒ_vid + λ_2 ℒ_stca, where λ_1 and λ_2 are hyperparameters that control the weights of the different loss components. §.§ Subject Motion Control We chose bounding boxes as the motion control signal for subjects because they are easy to draw and manipulate. In contrast, providing object masks for every frame is labor-intensive, requiring consideration of the subject's shape transformation between frames. In practice, we find that bounding boxes are sufficient for precisely controlling the positions of subjects. 
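Before turning to motion control, the three training terms defined above can be summarized in a compact sketch. Here `eps_theta` is the T2V denoiser, `mask` is the subject mask resized to the latent resolution (as a float tensor), and `attn` is the cross-attention map at the subject tokens; all names and shapes are placeholders rather than the released implementation.

```python
import torch
import torch.nn.functional as F

def subject_region_loss(eps, eps_pred, mask):
    """L_sub: reconstruction error restricted to the subject region."""
    return ((eps - eps_pred) * mask).pow(2).mean()

def video_preservation_loss(eps, eps_pred_video):
    """L_vid: plain reconstruction loss on common caption-video pairs."""
    return (eps - eps_pred_video).pow(2).mean()

def stca_loss(attn, mask, eps=1e-6):
    """L_stca: binary cross-entropy pushing the subject-token attention map to
    be high inside the subject mask and low outside it."""
    attn = attn.clamp(eps, 1.0 - eps)
    return F.binary_cross_entropy(attn, mask)

def motionbooth_loss(l_sub, l_vid, l_stca, lambda_1=1.0, lambda_2=0.01):
    """L = L_sub + lambda_1 * L_vid + lambda_2 * L_stca."""
    return l_sub + lambda_1 * l_vid + lambda_2 * l_stca
```

The default weights above follow the values stated in the implementation details (lambda_1 = 1.0, lambda_2 = 0.01).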
Previous works like GLIGEN <cit.> attempt to control object positions by training an extra condition module with large-scale image data. However, these training methods fix the models and cannot easily align with customized models fine-tuned for specific subjects. Therefore, we adopt an alternative approach that directly edits the cross-attention maps during inference in a training-free manner <cit.>. This cross-attention editing method is plug-and-play and can be used with any customized model. In cross-attention layers, the query features 𝐐 are extracted from the video latent and represent the vision features. The key and value features 𝐊 and 𝐕 are derived from input language tokens. The calculation process of the edited cross-attention layer can be formulated as follows: EditedCrossAttn(𝐐, 𝐊, 𝐕) = Softmax( 𝐐𝐊^⊤/√(d) + α𝐒) 𝐕, where d is the feature dimension of 𝐐 and serves as a normalization term. 𝐐𝐊^⊤/√(d) is the normalized production between 𝐐 and 𝐊, representing the attention scores between vision and language features. We manipulate the production by adding a new term α𝐒, where 𝐒 has positive values on the subject region provided in bounding boxes and large negative values outside the desired positions. α is a hyperparameter to control the editing strength. The editing matrix 𝐒 is set as follows: S_k[i, j] = 1 - |𝐁_k|/|𝐐|, if i ∈𝐁_k and j ∈𝐏 and t ≥τ 0, if i ∈𝐁_k and j ∈𝐏 and t < τ -∞, otherwise where i, j, and k indicate the vision token, language token, and frame indexes, respectively. 𝐏 represents the indexes for subject language tokens in the text prompt. In this work, we choose “[V]” and “[class name]” as subject tokens. The SCTA loss in <ref> binds the two tokens with cross-attention maps. t is the denoising timestamp and τ is a hyperparameter defining a timestamp threshold. Since diffusion models tend to form the approximate object layout in earlier denoising steps and refine the details in later steps <cit.>, we apply stronger attention amplification in earlier steps and no amplification in later steps. Note that attention suppression outside the bounding box regions persists throughout the generation. |𝐁_k| and |𝐐| are the areas of the box and query, respectively. Following previous works <cit.>, smaller boxes should have larger amplifications, and we do not apply any editing on the <start> and <end> tokens. r0.5 < g r a p h i c s > Illustration of camera movement control through shifting the noised latent. §.§ Camera Movement Control Simply editing the cross-attention map can efficiently control the motion of the subject. This suggests that the latent can be considered a "shrunk image," which maintains the same visual geographic distribution as the generated images. For camera movement control, an intuitive approach is to directly shift the noised latent during inference based on the camera movement signal 𝐜_cam = [c_x, c_y]. The latent shift pipeline is illustrated in <ref>. The key challenge with this idea is filling in the missing parts caused by the latent shift (the question mark region in Step 1). To address this issue, we propose sampling tokens from the original noised latent and using them to fill the gap. This is based on the prior knowledge that when a camera moves in a video, the new scene it captures is semantically close to the previous one. For example, in a video with forest scenes, when the camera pans left, it is highly likely to capture more trees similar to those in the original scene. 
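Stepping back to the subject-motion control described above, the edited cross-attention can be sketched roughly as follows. The per-frame box mask, the token-index bookkeeping, and the finite stand-in for negative infinity are simplifications for illustration, not the exact released code.

```python
import math
import torch

NEG_INF = -1e9  # finite stand-in for -infinity

def build_edit_matrix(box_mask, subject_token_ids, num_text_tokens, t, tau):
    """Build S for one frame.

    box_mask          : (N_vision,) bool, True for latent tokens inside the box
    subject_token_ids : indices of the '[V]' and class-name text tokens
    num_text_tokens   : total number of text tokens
    """
    n_vis = box_mask.numel()
    s = torch.zeros(n_vis, num_text_tokens)
    amplify = 1.0 - box_mask.float().sum() / n_vis  # smaller boxes, larger boost
    for j in subject_token_ids:
        s[box_mask, j] = amplify if t >= tau else 0.0  # amplify early steps only
        s[~box_mask, j] = NEG_INF                      # suppression at every step
    return s

def edited_cross_attention(q, k, v, s, alpha=10.0):
    """Softmax(Q K^T / sqrt(d) + alpha * S) V for one frame."""
    d = q.shape[-1]
    scores = q @ k.transpose(-2, -1) / math.sqrt(d) + alpha * s
    return scores.softmax(dim=-1) @ v
```

The default `alpha=10.0` matches the value reported in the appendix; `tau` would be set relative to the total number of denoising steps as described there.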
Another assumption is that in a normally angled video, a visual element is more likely to be semantically close to elements along the same x-axis or y-axis rather than other elements. For instance, in the waterfall video in <ref>, trees are at the top and bottom, spreading horizontally, while the waterfall spans the middle x-axis area. Experimentally, we observe that sampling tokens horizontally and vertically provides better initialization and results in smoother video transitions. Randomly sampling tokens degrades the generated video quality. The latent shift process for timestamp t can be formulated as follows: 𝐡_x = SampleHorizontal(𝐳_t, 𝐁, c_x), 𝐡_y = SampleVertical(𝐳_t, 𝐁, c_y), 𝐳_shift = Crop(Shift(𝐳_t, c_x, c_y)), 𝐳_t = Fill(𝐳_shift, 𝐡_x, 𝐡_y, c_x, c_y), where 𝐡_x and 𝐡_y are sampled tokens along the x and y axes, respectively. Crop(·) removes the tokens outside the camera view after the shift. 𝐁 is the subject bounding box. We filter out the tokens belonging to the subjects because they are not likely to occur in the new scenes. In addition, to avoid a drastic change in the latent in a single shift, we spread the latent shift over multiple timestamps, with each step only shifting a small number of tokens. Note that the latent shift needs to be applied after the subject's approximate layout is fixed but before the video details are completed. We set a pair of hyperparameters σ_1 and σ_2. The latent shift is applied only in the timestamp range [σ_1, σ_2]. § EXPERIMENTS §.§ Experimental Setup Datasets. For customization, we collect a total of 26 objects from DreamBooth <cit.> and CustomDiffusion <cit.>. These objects include pets, plushies, toys, cartoons, and vehicles. To evaluate camera and object motion control, we built a dataset containing 40 text-object motion pairs and 40 text-camera motion pairs, ensuring that the camera and object motion patterns are consistent with the text prompts. This dataset evaluates the videos generated for each subject in various scenarios and motions. Implementation details. We train MotionBooth for 300 steps using the AdamW optimizer, with a learning rate of 5e-2 and a weight decay of 1e-2. We collect 500 preservation videos from the Panda-70M <cit.> training set, chosen randomly. Each training step consists of one image batch and one video batch; the image batch size equals the number of training images, and the video batch size is 1. The loss weight parameters λ_1 and λ_2 are set to 1.0 and 0.01. We use Zeroscope and LaVie as base models. During inference, we perform 50-step denoising using the DDIM scheduler and set the classifier-free guidance scale to 7.5. The generated videos are 576x320x24 and 512x320x16 for Zeroscope and LaVie, respectively. The training process finishes in around 10 minutes on a single NVIDIA A100 80G GPU. Additional implementation details can be found in Appendix <ref>. Baselines. Since we are pioneering motion-aware customized video generation, we compare our methods with closely related works, including DreamBooth <cit.>, CustomVideo <cit.>, and DreamVideo <cit.>. DreamBooth customizes subjects for text-to-image generation. We follow its practice with class preservation images and fine-tune T2V models for generating videos. CustomVideo is a recent video customization method. We adopt its parameter-efficient training procedure. DreamVideo learns motion patterns from video data. To provide such data, we sample the videos from Panda-70M that are most relevant to the evaluation motions.
Since these methods cannot control motions during inference, we apply our camera and object motion control technologies for a fair comparison. Additionally, we compare our camera control method with training-based methods, AnimateDiff <cit.> and CameraCtrl <cit.>, focusing on camera motion control without subject customization. Since AnimateDiff is trained with only basic camera movement types and cannot take user-defined camera movement 𝐜_cam = [c_x, c_y] as input, we use the closest basic movement type for evaluation. Evaluation metrics. We evaluate motion-aware customized video generation from three aspects: region subject fidelity, temporal consistency, and camera motion fidelity. 1) To ensure the subject is well-preserved and accurately generated in the specified motion, we introduce region CLIP similarity (R-CLIP) and region DINO similarity metrics (R-DINO). These metrics utilize the CLIP <cit.> and DINOv2 <cit.> models to compute the similarities between the subject images and frame regions indicated by bounding boxes. Additionally, we use CLIP image-text similarity (CLIP-T) to measure the similarity between entire frames and text prompts. 2) We evaluate temporal consistency by computing CLIP image features between each consecutive frame. 3) We use VideoFlow <cit.> to predict the optical flow of the generated videos. Then, we calculate the flow error by comparing the predicted flow with the ground-truth camera motion provided in the evaluation dataset. §.§ Main Results Quantitative results. We conduct quantitative comparisons with baseline models on both motion-aware customized video generation and camera movement control. The results for motion-aware customized video generation are shown in <ref>. The results demonstrate that MotionBooth outperforms all baselines on both Zeroscope and LaVie models, indicating that our proposed technologies can be extended to different T2V models. Thanks to the training-free architecture of subject and camera motion control methods, MotionBooth is expected to be adaptable to more open-sourced models in the future, such as Sora <cit.>. Notably, DreamVideo <cit.> achieves the second-best scores in T-Cons. and flow error, which aligns with our observation that incorporating video data as auxiliary training data enhances video generation performance. On the other hand, CustomVideo <cit.> shows inferior performance in R-DINO scores, indicating a poorer ability to generate subjects in given positions. This may be attributed to its approach of only fine-tuning the text embeddings and cross-attention layers of the diffusion models, which is insufficient for learning the subjects. For camera movement control, we compare our method with two training-based methods, AnimateDiff <cit.> and CameraCtrl <cit.>. The results are shown in <ref>. Remarkably, MotionBooth achieves superior results compared to the two baselines with our training-free latent shift module. Specifically, MotionBooth outperforms the recent method CameraCtrl by 0.617, 0.015, and 0.009 in flow error, CLIP-T, and T-Cons. metrics with Zeroscope, and 0.511, 0.004, and 0.024 for the LaVie model. These results demonstrate that the latent shift method is simple yet effective. Qualitative results. The qualitative comparison results for video generation with customized objects and controlled subject motions are presented in <ref>. Our observations reveal that MotionBooth excels in subject motion alignment, text prompt alignment, and overall video quality. 
In contrast, DreamBooth and CustomVideo produce videos with vague backgrounds, highlighting that generated backgrounds deteriorate when training is conducted without video data. Additionally, CustomVideo and DreamVideo struggle to capture the subjects' appearances, likely because their approach tunes only part of the diffusion model, preventing the learning process from fully converging. We also conduct qualitative experiments focused on camera movement control, with results shown in <ref>. AnimateDiff, limited to basic movements, does not support user-defined camera directions. Although the CameraCtrl method can accept user input, it generates videos with subpar aesthetics and objects that exhibit flash movements. In contrast, our MotionBooth model outperforms both the Zeroscope and Lavie models. The proposed latent method generates videos that adhere to user-defined camera movements while maintaining time consistency and high video quality. §.§ Ablation Studies Training technologies. We analyze the technologies proposed during the subject learning stage. The ablation results are shown in <ref>. Clearly, without the proposed modules, the quantitative metrics drop accordingly. These results demonstrate that the proposed subject region loss, STCA loss, and video preservation loss are beneficial for subject learning and generating motion-aware customized videos. Specifically, the R-DINO metric decreases significantly by 0.256 without the subject region loss, highlighting its core contribution in filtering out image backgrounds during training. Additionally, the "w/ class video" experiment, which uses class-specific videos instead of randomly sampled common videos, yields worse results. This approach restricts the scenes and backgrounds in class-specific videos, hindering the models' ability to generalize effectively. § CONCLUSION This paper introduces MotionBooth, a novel framework for motion-aware, customized video generation. MotionBooth fine-tunes a T2V diffusion model to learn specific subjects, utilizing subject region loss to focus on the subject area. The training procedure incorporates video preservation data to prevent background degradation. Additionally, an STCA loss is designed to connect subject tokens with the cross-attention map. During inference, training-free technologies are proposed to control both subject and camera motion. Extensive experiments demonstrate the effectiveness and generalization ability of our method. In conclusion, MotionBooth can generate vivid videos with given subjects and controllable subject and camera motions. Acknowledgement. This work is supported by the National Key Research and Development Program of China (No. 2023YFC3807600). ieee_fullname § APPENDIX Overview. The supplementary includes the following sections: * <ref>. Human preference study. * <ref>. Limitations and future work. * <ref>. Implementation details of the experiments. * <ref>. Ablation studies. * <ref>. Social impacts. * <ref>. More qualitative results. Video Demo. We also present a video in a separate supplementary file, which shows the results in video format. §.§ Human Preference Study r0.5 < g r a p h i c s > Human preference study. Our MotionBooth achieves the best human preference scores in all the evaluation aspects. To evaluate our approach to understanding user preferences, we conducted a user study experiment. We collected 30 groups of videos generated by MotionBooth and baseline methods. 
We then asked 7 colleagues to select the best videos based on the following criteria: subject motion alignment, camera movement alignment, subject appearance alignment, and temporal consistency. For each group of videos, the annotators selected only the best one. For the subject appearance alignment, the annotators were provided with corresponding subject images. As shown in <ref>, MotionBooth was the most preferred method across all models and evaluation aspects, particularly in subject appearance alignment. These results further demonstrate the effectiveness of our method. §.§ Limitations and Future Work In <ref>, we illustrate several failure cases of MotionBooth. One significant limitation of MotionBooth is its struggle with generating videos involving multiple objects. As shown in <ref>(a), the subject's appearance can sometimes merge with other objects, resulting in visually confusing outputs. This issue might be resolved by incorporating advanced training technologies for multiple subjects. r0.5 < g r a p h i c s > Failure cases of MotionBooth. Another limitation is the model's capability to depict certain motions indicated by the text prompt. As depicted in <ref>(b), MotionBooth may fail to accurately represent motions that are unlikely to be performed by the subject. For example, it is hard to imagine a scene where a wolf plushie is riding a bike. These failure cases highlight the need for further improvement in the model's subject separation and motion understanding capabilities to enhance the realism and accuracy of the generated videos. Utilizing more powerful T2V models may eliminate these drawbacks. Future work could focus on refining these aspects to address the current limitations and provide more robust performance in complex scenarios. §.§ Implementation Details Hyperparameters. For the LaVie model, we set α = 10.0, τ = 0.7𝐓, σ_1 = 0.8𝐓, and σ_2 = 0.6𝐓. For the Zeroscope model, we set α = 10.0, τ = 0.9𝐓, σ_1 = 0.9𝐓, and σ_2 = 0.7𝐓. Evaluation dataset. We collect a total of 26 subjects from DreamBooth <cit.> and CustomDiffusion <cit.>. We show one image for each subject in <ref>. The subjects contain a wide variety of types, including pets, plushie toys, cartoons, and vehicles, which can provide us with a thorough analysis of the model's effectiveness. r0.5 < g r a p h i c s > The application interface for user study. User study interface. We show the application interface for human preference study in <ref>. During user study, we ask the annotators to select the best video based on the question, e.g., “Which video do you think has the best temporal consistency?” Pseudo-code of latent shift. To present the latent shift module more clearly, we show the pseudo-code of the algorithm in <ref>. Our latent shift module can control the camera movement in videos in a training-free manner at minimal costs. §.§ Ablation Studies Subject motion control hyperparameters. We conduct ablation studies on the hyperparameter for subject motion control, with the results presented in <ref>. We examined the effects of varying the factor α of 𝐒 and the maximum cross-attention manipulation timestamp τ. The findings indicate that increasing α and extending the controlling steps lead to stronger control. With lower control strengths, the subject does not appear in the desired position, or only part of its body aligns with the intended spot. Conversely, when the control strengths are too high, the generated subjects tend to appear unnaturally square in shape. Latent shift hyperparameters. 
We examine the influence of σ_1 and σ_2 in the latent shift module, with results shown in <ref>. Applying the latent shift in the earlier steps of the denoising process results in incomplete camera movement, as evidenced by the trees in the background; conversely, shifting the latent in the later steps degrades video quality and introduces artifacts, highlighted by the red boxes in the last row. Empirically, setting σ_1 and σ_2 to intermediate values provides optimal control over camera movement. Number of preservation videos. We conduct an ablation study on the number of preservation videos. As shown in <ref>, varying the number of preservation videos from 100 to 900 does not substantially change the quantitative scores. We conclude that the key is to use video data to preserve the video generation ability of the pre-trained T2V model; the exact number of videos can be chosen flexibly. §.§ Social Impacts Positive societal impacts. MotionBooth allows for precise control over customized subjects and camera movements in video generation, opening new avenues for artists, filmmakers, and content creators to produce unique and high-quality visual content without extensive resources or professional equipment. Potential negative societal impacts. The ability to generate realistic customized videos could be misused to create deepfakes, leading to potential disinformation campaigns, privacy violations, and reputational damage. This risk is particularly significant in the context of political manipulation and social media. If the underlying models are trained on biased datasets, the generated content might reinforce harmful stereotypes or exclude certain groups. Ensuring diversity and fairness in training data is crucial to mitigate this risk. Mitigation strategies. Developing and adhering to strict ethical guidelines for the use and dissemination of video generation technologies can help mitigate misuse. This includes implementing usage restrictions and promoting transparency about the generated content. §.§ More Qualitative Results We show more qualitative results in <ref>.
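For readers who cannot access the pseudo-code figure referenced above, the following minimal sketch illustrates one way a training-free latent shift for camera movement could be realized. The function signature, the roll-based translation, and the per-frame offset derived from a user-specified camera speed are illustrative assumptions rather than the exact algorithm; only the timestep window, which restricts the shift to the middle portion of denoising between σ_2·𝐓 and σ_1·𝐓, mirrors the hyperparameters and the ablation discussed above.

import torch

def latent_shift(latents, t, T, cam_speed, sigma_1=0.8, sigma_2=0.6):
    """Hypothetical sketch: translate video latents to emulate camera movement.

    latents   : tensor of shape (B, C, F, H, W), the video latent at denoising step t
    t, T      : current timestep and total number of timesteps (t counts down from T)
    cam_speed : (dx, dy), latent-space shift per frame derived from the user input
    The shift is only applied while sigma_2 * T <= t <= sigma_1 * T, mirroring the
    observation that shifting too early or too late hurts motion completeness or quality.
    """
    if not (sigma_2 * T <= t <= sigma_1 * T):
        return latents
    dx, dy = cam_speed
    frames = latents.unbind(dim=2)
    shifted = []
    for i, frame in enumerate(frames):
        # torch.roll gives a cheap, training-free translation of the latent grid; the
        # wrapped-around border is repainted by the remaining denoising steps.
        shifted.append(torch.roll(frame, shifts=(round(i * dy), round(i * dx)), dims=(-2, -1)))
    return torch.stack(shifted, dim=2)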
http://arxiv.org/abs/2406.18493v1
20240626165618
A weakly monotonic, logically constrained, HORPO-variant
[ "Cynthia Kop" ]
cs.LO
[ "cs.LO" ]
On the effectiveness of the collapse in the Diósi-Penrose model Sandro Donadi July 1, 2024 =============================================================== § ABSTRACT In this short paper, we present a simple variant of the recursive path ordering, specified for Logically Constrained Simply Typed Rewriting Systems (LCSTRSs). This is a method for curried systems, without λ but with partially applied function symbols, which can deal with logical constraints. As it is designed for use in the dependency pair framework, it is defined as reduction pair, allowing weak monotonicity. § INTRODUCTION This is a technical report, so no preliminaries are given. We assume familiarity with Logically Constrained Simply Typed Rewriting Systems <cit.>, and will adapt the HORPO variant given in that paper to a reduction pair (although for now, we omit the multiset status as it would complicate the definition). The main difference with <cit.> is a kind of filtering to take advantage of the weaker monotonicity requirements that are used for reduction pairs in the DP framework. This filtering is reminiscent of argument filterings which are often used in dependency pair approaches, but note that the direct definition of argument filtering does not naturally extend to systems with partially applied function symbols – so a special definition is needed. We will use the following definigion of a constrained reduction pair: A constrained relation is a set R of tuples (s,t,φ,). denoted s R_φ^ t, where s and t are terms of the same type, φ is a constraint, and is a set of variables. We say a binary relation R' on terms covers R if s R_φ^ t implies that (sγ) R' (tγ) for any substitution γ that respects φ and maps all x ∈ to ground theory terms. A constrained reduction pair is a pair (≽,≻) of constrained relations such that there exist a reflexive relation that covers ≽ and a well-founded relation that covers ≻ such that: * ⊆, and * · ⊆ ^+ or · ⊆ ^+, and * is monotonic: s t implies C[s] ⊒ C[t] for every appropriately-typed context C. If also is monotonic, we say (≽,≻) is a strongly monotonic constrained reduction pair. As most existing HORPO variants actually consider systems where function symbols are maximally applied, we will start by defining a variation for (unconstrained) STRS. Then, in Section <ref>, we will use this variant as (,) in Definition <ref>. Notationally, we will use the following conventions without expicitly stating them: * always refers to a sort (base type); , to arbitrary types * ,, always refers to a function symbol, ,, to a variable § UNCONSTRAINED HORPO FOR CURRIED SYSTEMS We assume familiarity with STRSs: in short, terms are constructed from a set of typed variables, typed constants, and the type-conscious application operator (these are LCSTRSs with an empty theory signature). We allow for an infinite set of function symbols (constants), but assume given: * a precedence: a quasi-ordering on the set of all function symbols, such that its strict part ::= ∖, is well-founded; we write for the equivalence relation ∩ * a filter: a function mapping each function symbol :: _1 …_m with a sort to a subset of {1,…,m}: the arguments that regards Moreover, while we allow an infinite number of symbols with the same status, we require that their maximal arity (that is, the number m if :: _1 …_m) is bounded. In all the following, we collapse base types; that is, we assume that there is only one sort . 
Essentially, this means that we consider (and will do induction on) type structure rather than directly on types. We will define a number of relations. §.§ Equivalence The first of our relations can be defined without reference to the others. For two terms s,t, we define s t if s and t have the same type structure, and one of the following holds: (Eq-mono) s = s_1 ⋯ s_n and t = t_1 ⋯ t_n for a variable, and each s_i t_i (Eq-args) s = s_1 ⋯ s_n and t = t_1 ⋯ t_n for , function symbols with the same type structure such that , () = (), and s_i t_i for all i ∈() ∩{1,…,n}. We observe that is a monotonic and stable equivalence relation. The relation is transitive, reflexive, symmetric, monotonic, and stable. By a straighforward induction on the size of a given term s we can prove: Transitivity: if s t u then s u. Reflexivity: s s Symmetry: if s t then t s Stability: if s t and γ is a substitution, then sγ tγ; for the case s = x s_1 ⋯ s_n we do a case analysis on γ(x) (which must either have a form y w_1 ⋯ w_k or w_1 ⋯ w_k), and the reflexivity property we have shown above (to find w_i w_i) For monotonicity, we do not need the induction. Suppose s :: and t ::, with s s' and t t'. There are two cases: * s = x s_1 ⋯ s_n and s' = x s_1' ⋯ s_n' with each s_i s_i'; since t t' also x s_1 ⋯ s_n t x s_1' ⋯ s_n' t' * s = s_1 ⋯ s_n and s' = s_1' ⋯ s_n' with , having the same type structure and filter, and being equal in the precedence; this is not affected by adding an extra argument, so we complete as above. §.§ Decrease Our other relations are defined through a mutual recursion. Specifically, for terms s,t: * s t if s t or s t * s t if s and t have the same type structure, and: (Gr-mono) s = x s_1 ⋯ s_n and t = x t_1 ⋯ t_n and each s_i t_i and some s_i t_i, or (Gr-args) s = s_1 ⋯ s_n and t = t_1 ⋯ t_n for , function symbols with the same type structure such that , () = (), and s_i t_i for all i ∈() ∩{1,…,n}, and s_i t_i for some i ∈() ∩{1,…,n} (Gr-rpo) s t * s t if s = s_1 ⋯ s_n with :: _1 …_m and {n+1,…,m}⊆(), and: (Rpo-select) s_i t for some i ∈() ∩{1,…,n} (Rpo-appl) t = t_0 t_1 ⋯ t_m and s t_i for 1 ≤ i ≤ m (Rpo-copy) t = t_1 ⋯ t_m with and s t_i for all i ∈() ∩{1,…,m} (Rpo-lex) t = t_1 ⋯ t_m with and there is some i ∈() ∩() ∩{1,…,min(n,m)} such that all of the following hold: * () ∩{1,…,i} = () ∩{1,…,i} * s_j t_j for j ∈{1,…,i-1}∩() * s_i t_i * s t_j for j ∈{i+1,…,m}∩() §.§ Properties We make several observations: If s t R u for R ∈{,,,} then s R u. By induction on the derivation of t R u. The case when R is follows from transitivity of . The case when R is follows from the cases for and . The case when R is follows by a case analysis on the rule used to derive t u. * if t u by Gr-mono, then s = x s_1 ⋯ s_n and t = x t_1 ⋯ t_n and u = x u_1 ⋯ u_n and we complete by the induction hypothesis on s_j t_j u_j and s_i t_i u_i; * if t u by Gr-args, then s = s_1 ⋯ s_n and = x t_1 ⋯ t_n and u = u_1 ⋯ u_n with and () = () = () and we complete by the induction hypothesis on s_j t_j u_j and s_i t_i u_i (for unfiltered arguments); * if t u by Gr-rpo, we have s u and therefore s u by the induction hypothesis. The case when R is follows by a case analysis on the rule used to derive t u. Note that in this case, s = s_1 ⋯ s_n and t = t_1 ⋯ t_n with , having the same type structure _1 …_m, , () = () ⊇{n+1,…,m} and s_i t_i whenever i ∈π(). 
* Rpo-select: t_i u for some i ∈π() = π(), so s_i t_i u; we complete with Rpo-select and the induction hypothesis * Rpo-appl: u = u_0 u_1 ⋯ t_m and t u_i for all i, so by the induction hypothesis also s u_i for all i * Rpo-copy: u = u_1 ⋯ u_m with and t u_i for all u_i with i ∈() ∩{1,…,m}; but then also s u_i for these i, and implies * Rpo-lex: u = u_1 ⋯ u_m with (so also because is transitive) and there exists i such that: * i ∈() ∩{1,…,min(n,m)} = () ∩{1,…,min(n,m)} * s_j t_j u_j for j ∈{1,…,i-1}∩() (as () = ()), so s_j u_j by transitivity of * s_i t_i u_i so by IH s_i u_i, since i ∉() = () * s t u_j for j ∈{i+1,…,m}∩(), so also s u_j by the induction hypothesis If s t u then s ^+ u. is reflexive. This immediately follows from reflexivity of (Lemma <ref>). If s t with s :: and u v with u ::, then s u t v. If s t by Gr-mono or Gr-args, then s u t v by Gr-mono or Gr-args as well (note that in Gr-args, u is filtered away if and only if v is). So consider the case s t by Gr-rpo; that is, s = s_1 ⋯ s_n and s t, which implies n+1 ∈(). If we can see that s_1 ⋯ s_n u t as well we have s u t v' by Rpo-appl (since s u v by Rpo-select, because n+1 ∈() and u v), and therefore s u t v by Gr-rpo. It remains to prove that if s_1 ⋯ s_n t then s_1 ⋯ s_n u t. We prove that this holds by induction on the derivation of s_1 ⋯ s_n t. Yet, all cases are simple with the induction hypothesis; we only observe that for the Rpo-lex case, if i ∈() ∩() ∩{1,…,min(n,m)} then also i ∈() ∩() ∩{1,…,min(n+1,m)}, so we can use the same index. is monotonic. Suppose s :: and t ::, and s s' and t t'. We must see that s t s' t'. First, if s s' and t t', we are done by monotonicity of . Second, if s s' and t t', there are two options: * s = x s_1 ⋯ s_n and s' = x s_1' ⋯ s_n' with each s_i s_i' and therefore s_i s_i'; as both t t' and t t' we have s t s' t' by Gr-mono; * s = s_1 ⋯ s_n and s' = s_1' ⋯ s_n' with ≈ and () = () and s_i s_i' for i ∈() ∩{1,…, n}; so if n+1 ∈() = () then s t = s_1 ⋯ s_n t s_1' ⋯ s_n' t' = s' t' by Gr-args; and if n+1 ∉() = () then s t s' t' by Eq-args. Finally, if s s' then s t s' t' by Lemma!<ref>. It remains to prove that is well-founded. For this, we use the notion of computability. A term s is terminating if there is no infinite sequence s s_1 s_2 … A term s :: _1 …_m is computable if for all computable t_1 :: _1,…,t_m :: _m, the term s t_1 ⋯ t_m is terminating. (This is well-defined by induction on types.) We observe: * All variables are computable. * If s is computable, then it is terminating. * If s is computable and s t, then also t is computable. By mutual induction on types. Base types. (<ref>) Clearly, base-type variables are computable, since there is no t such that x t. (<ref>) Base-type computable terms are terminating by definition. (<ref>) If s has base type and s t, then t is terminating: if t t_1 … then either s t_1 … (if s t) or s t t_1 … (if s t), obtaining a contradiction. Arrow type : _1 …_m with m > 0. (<ref>) Let ::, and let s_1 :: _1,…,s_m :: _m be computable terms. By induction hypothesis (<ref>), we can do induction on (s_1,…,s_m) ordered with the place-wise extension of (,) to show that s_1 ⋯ s_m is terminating. The only applicable rule is Gr-mono, so if s_1 ⋯ s_m t then t = t_1 ⋯ t_m with each s_i t_i and some s_i t_i. By induction hypothesis (<ref>), each t_i is computable, and the tuple is smaller so we complete. (<ref>) Let s:: and assume towards a contradiction that s is terminating. 
Let x_1 :: _1,…,x_m ::_m be variables; by induction hypothesis (<ref>) they are computable, so s x_1 ⋯ x_m is terminating. But Lemma <ref> implies that if s s_1 s_2 … then also s x⃗ s_1 x⃗ s_2 x⃗…, contradicting the termination assumption. (<ref>) If s t then t has the same type structure as s. Let u_1 :: _1,…,u_n :: _n be computable; we must show that t u_1 ⋯ u_n is terminating. But suppose not: t u_1 ⋯ u_n v_1 v_2 …. Then either s u_1 ⋯ u_n v_1 v_2 … (if s t) by monotonicity of , or s u_1 ⋯ u_n t u_1 ⋯ u_n v_1 v_2 … (if s t) by Lemma <ref>. Hence, s u_1 ⋯ u_n is non-terminating, contradicting computability of s. Let :: _1 …_m and terms s_1 :: _1, …,s_n : _n be given such that (1) any function symbol with is computable; (2) for all i ∈(): s_i is computable, and (3) if and t_1,…,t_k are terms such that [s_i | i ∈()] (,)_𝚕𝚎𝚡 [t_i | i ∈()], then t_1 ⋯ t_k is terminating. Then for any t such that s_1 ⋯ s_n t: t is computable. By induction on the derivation of s_1 ⋯ s_n t. If this follows by Rpo-select, then s_i t for some i ∈(), so t is computable by (2) and Lemma <ref>(<ref>). If it follows by Rpo-appl, then t = t_0 t_1 ⋯ t_k and by the induction hypothesis each t_i is computable, and therefore by definition so is their application. If it follows by Rpo-copy, then t = t_1 ⋯ t_k with , and by the induction hypothesis each t_i with i ∈() is computable; since is computable, the whole result is. If it follows by Rpo-lex, then t = t_1 ⋯ t_k :: _k+1…_p with . We must see that for any computable t_k+1 :: _k+1,…,t_p :: _p we have t_1 ⋯ t_p terminates. But this is the case by assumption, because [s_i | i ∈()] (,)_𝚕𝚎𝚡 [t_i | i ∈()]: we see that this holds because, for some index i: * () ∩{1,…,i} = () ∩{1,…,i}, and * s_j t_j for j ∈() ∩{1,…,j} So: [s_j | j ∈() ∧ j ∈{1,…,i-1}] _placewise [t_j | j ∈() ∧ j ∈{1,…,i-1}]. * i ∈() = () and s_i t_i Hence: any extension of [s_j | j ∈() ∧ j ∈{1,…,i}] (,)_𝚕𝚎𝚡 any extension of [t_j | j ∈() ∧ j ∈{1,…,i}]. Every function symbol is computable. We will prove that for all :: _1 …_m, all terms s_1 :: _1,…,s_m :: _n such that s_i is computable for i ∈{1,…,n}∩(), that s_1 ⋯ s_m is terminating. We prove this by induction first on , ordered with , and second on [s_i | i ∈{1,…,n}∩()], ordered with (,)_𝚕𝚎𝚡. To see that the latter is indeed a terminating relation, note that by Lemma <ref>, is terminating on the set of all computable terms (and this set is closed under ), and that the lexicographic extension of a terminating relation is terminating when the length of sequences is bounded (as we assumed is the case: the maximal arity of symbols -equivalent to is bounded). To see that s_1 ⋯ s_n is terminating, it suffices to show that t is terminating if s_1 ⋯ s_n t. But there are only two cases to consider: if s_1 ⋯ s_n t by Gr-mono, then t is terminating by the second induction hypothesis because at least one argument in () is decreased; and if s_1 ⋯ s_n t by Gr-rpo, then t is computable, and therefore terminating, by Lemma <ref>. is a well-founded relation on the set of terms. § CONSTRAINED HORPO FOR LCSTRSS Having defined our covering pair (,), we are now ready to define a constrained reduction pair. 
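Before moving to the constrained setting, it may help to see the shape of the mutual recursion in code. The following Python sketch is a deliberately simplified, untyped rendering of the equivalence, strict-decrease and rpo-style clauses for curried terms: it omits the type-structure checks, the side condition {n+1,…,m} ⊆ π(f) in the rpo clauses, and part of the prefix bookkeeping in Rpo-lex, and the term encoding, prec and filt are assumed representations. It is meant only as an illustration of how the clauses interact, not as a faithful implementation.

# Terms: ('var', name, [args]) or ('fun', name, [args]); applications are flattened.
# prec maps function symbols to integers (higher = greater in the precedence);
# filt maps each function symbol to the set of regarded argument positions (1-indexed).

def regarded(f, args, filt):
    """0-based indices of the arguments of f that the filter regards."""
    return [i for i in range(len(args)) if (i + 1) in filt[f]]

def eq(s, t, prec, filt):
    kind_s, head_s, args_s = s
    kind_t, head_t, args_t = t
    if kind_s != kind_t or len(args_s) != len(args_t):
        return False
    if kind_s == 'var':                                           # (Eq-mono)
        return head_s == head_t and all(eq(a, b, prec, filt) for a, b in zip(args_s, args_t))
    if prec[head_s] != prec[head_t] or filt[head_s] != filt[head_t]:
        return False                                              # (Eq-args): equivalent heads, same filter
    return all(eq(args_s[i], args_t[i], prec, filt) for i in regarded(head_s, args_s, filt))

def geq(s, t, prec, filt):
    return eq(s, t, prec, filt) or gr(s, t, prec, filt)

def gr(s, t, prec, filt):
    kind_s, head_s, args_s = s
    kind_t, head_t, args_t = t
    if kind_s == 'var' and kind_t == 'var' and head_s == head_t and len(args_s) == len(args_t):
        if all(geq(a, b, prec, filt) for a, b in zip(args_s, args_t)) \
                and any(gr(a, b, prec, filt) for a, b in zip(args_s, args_t)):
            return True                                           # (Gr-mono)
    if kind_s == 'fun' and kind_t == 'fun' and len(args_s) == len(args_t) \
            and prec[head_s] == prec[head_t] and filt[head_s] == filt[head_t]:
        idx = regarded(head_s, args_s, filt)                      # (Gr-args)
        if all(geq(args_s[i], args_t[i], prec, filt) for i in idx) \
                and any(gr(args_s[i], args_t[i], prec, filt) for i in idx):
            return True
    return kind_s == 'fun' and rpo(s, t, prec, filt)              # (Gr-rpo)

def rpo(s, t, prec, filt):
    _, f, args_s = s
    kind_t, head_t, args_t = t
    idx_s = regarded(f, args_s, filt)
    if any(geq(args_s[i], t, prec, filt) for i in idx_s):         # (Rpo-select)
        return True
    if kind_t == 'var':                                           # (Rpo-appl), restricted here to
        return len(args_t) > 0 and all(rpo(s, b, prec, filt) for b in args_t)  # variable-headed t
    g = head_t
    idx_t = regarded(g, args_t, filt)
    if prec[f] > prec[g]:                                         # (Rpo-copy)
        return all(rpo(s, args_t[i], prec, filt) for i in idx_t)
    if prec[f] == prec[g]:                                        # (Rpo-lex), simplified
        for i in range(min(len(args_s), len(args_t))):
            if i not in idx_s or i not in idx_t or eq(args_s[i], args_t[i], prec, filt):
                continue
            return gr(args_s[i], args_t[i], prec, filt) and \
                   all(rpo(s, args_t[j], prec, filt) for j in idx_t if j > i)
    return False

# Example: with prec = {'f': 2, 'g': 1}, filt = {'f': {1}, 'g': {1, 2}} and x = ('var', 'x', []),
# gr(('fun', 'f', [x]), ('fun', 'g', [x, x]), prec, filt) holds via (Gr-rpo), (Rpo-copy) and (Rpo-select).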
§.§ Ingredients Given an LCSTRS, we assume given the following ingredients: * a precedence on the non-value symbols (defined the same as in Section <ref>, and with the same restriction that the maximal arity of equivalent symbols should be bounded); * a filter , which is restricted so that () = {1,…,m} for :: _1 …_m a theory symbol * for every sort , a well-founded ordering _ and a quasi-ordering _ on values of this sort, such that for all v_1,v_2 of sort : if v_1 _ v_2, then v_1 _ v_2 or v_1 = v_2 or v_1 and v_2 are minimal wrt _ We denote v_1 v_2 if v_1 _ v_2 for some sort (and similar for ). §.§ Relations We provide three mutually recursive relations. For terms s,t of the same type structure, constraint φ and set of variables, we say s t if this can be obtained by one of the following clauses: ≽Theory s,t are theory terms whose type is the same sort, (s) ∪(t) ⊆, and φ⊩ s t ≽Eq s = t ≽Mono s is not a theory term, s = s_1 ⋯ s_n and t = s_1 ⋯ s_n and each s_i t_i ≽Args s is not a theory term, s = s_1 ⋯ s_n and t = t_1 ⋯ t_n for , function symbols with the same type structure such that , () = (), and s_i t_i for all i ∈() ∩{1,…,n} ≽Greater s t For terms s,t of the same type structure, constraint φ and set of variables, we say s t if this can be obtained by one of the following clauses: ≻Theory s,t are theory terms whose type is the same sort (s) ∪(t) ⊆, and φ⊩ s t ≻Args s is not a theory term, s = s_1 ⋯ s_n and t = t_1 ⋯ t_n for , function symbols with the same type structure such that , () = (), s_i t_i for all i ∈() ∩{1,…,n}, and there is some i ∈() ∩{1,…,n} with s_i t_i ≻Rpo s t For terms s,t (which may have different type structures) such that s is not a theory term and has the form s_1 ⋯ s_n with a function symbol, constraint φ and set of variables, we say s t if this can be obtained by one of the following clauses: Select s_i t for some i ∈() ∩{1,…,n} Appl t = t_0 t_1 ⋯ t_m and s t_i for 1 ≤ i ≤ m Copy t = t_1 ⋯ t_m with and s t_i for all i ∈() ∩{1,…,m} Lex t = t_1 ⋯ t_m with and there is some i ∈() ∩() ∩{1,…,min(n,m)} such that all of the following hold: * () ∩{1,…,i} = () ∩{1,…,i} * s_j t_j for j ∈{1,…,i-1}∩() * s_i t_i * s t_j for j ∈{i+1,…,m}∩() Th t is a theory term of base type, with all variables in §.§ Coverage To see that covers ≽ and covers ≻, we start by extending with values: we let * v for all non-value symbols and all value symbols v (but not v, so we have v); * v_1 v_2 if v_1 v_2 or v_1 = v_2 or both v_1 and v_2 are minimal wrt (hence v_1 v_2 if and only if v_1 v_2, and v_1 v_2 if either v_1 = v_2 or both v_1 and v_2 are minimal wrt ) This precedence is still well-founded, by the combination of the well-foundedness of the original precedence and well-foundedness of . Now we see: Let γ be a substitution that respects φ and maps all variables in to ground theory terms. Then: * if s t then (sγ) (tγ) * if s t then (sγ) (tγ) * if s t then (sγ) (tγ) By a mutual induction on the derivation of s R t. Consider the case: * If s t by ≽Theory, then v_1 := (sγ) and (tγ) =: v_2 are both values, with v_1 v_2. By definition of this implies either v_1 v_2 or v_1 = v_2 or v_1 and v_2 are both minimal wrt ; so by our extension of we either have v_1 v_2 or v_1 v_2. In the former case, v_1 v_2 by (Rpo-copy), so v_1 v_2 by (Gr-rpo), so v_1 v_2 follows. In the latter case, v_1 v_2 by (Eq-args), so also v_1 v_2. * Similarly, if s t by ≻Theory, then v_1 := (sγ) and (tγ) =: v_2 are both values, with v_1 v_2 and therefore v_1 v_2; we conclude v_1 v_2 using Rpo-copy. 
* If s t by ≽Eq, then s = t and therefore (sγ) = (tγ) (as calculation is confluent); by reflexivity of we have (sγ) (tγ) and therefore (sγ) (tγ). * If s t by ≽Args, then because s is not a theory term, (sγ) = (s_1γ) ⋯ (s_nγ); and we either have (tγ) = (t_1γ) ⋯ (t_nγ) or (tγ) is a value, the latter case only occurring if s and t have base type. In the first case, we have that each (s_iγ) (t_iγ) by the induction hypothesis, and we easily conclude (sγ) (tγ) either by rule (Eq-args) or by rule (Gr-args), depending on whether some (s_iγ) (t_iγ) or not. In the second case, since s is not a theory term, is not a value, and therefore v := (tγ). Since s has base type is not a theory term, we can apply (Gr-rpo) and (Rpo-copy) to obtain (sγ) v. * If s t by ≻Args, we complete in a very similar way. * If s t by ≽Mono, then s = x s_1 ⋯ s_n and t = x t_1 ⋯ t_n; writing γ(x) = a u_1 ⋯ u_k where a may be a variable or a function symbol and k ≥ 0, we thus have (sγ) = a u_1 ⋯ u_k (s_1γ) ⋯ (s_nγ), since s is not a theory term. Moreover, we either have (tγ) = a u_1 ⋯ u_k (t_1γ) ⋯ (t_nγ) with each (s_iγ) (t_iγ) by the induction hypothesis, or (tγ) is a value, in which case we observe that a cannot be a value and s must have base type, so we may apply (Gr-rpo) with (Rpo-copy) and complete as above. In the first case, if each (s_iγ) (t_iγ) we have (sγ) (tγ) by (Eq-mono) or (Eq-args) (depending on whether a is a variable or function symbols); the same holds if a is a function symbol and (s_iγ) (t_iγ) holds only for those i ∈(a). Otherwise, if some unfiltered (s_iγ) (t_iγ) we instead have (sγ) (tγ) by (Gr-mono) or (Gr-args) respectively. * If s t because s t, we complete immediately by the induction hypothesis. * If s t because s t, we complete immediately by the induction hypothesis and rule (Gr-rpo). * If s t by Th, then tγ is a ground theory term, and therefore (tγ) is a value v; since s is not a theory term, is not a value and therefore v, so (sγ) = (⋯) v by (Rpo-copy). * Finally, in each of the cases Select, Appl, Copy and Lex, we either immediately conclude with the induction hypothesis (and a case analysis for a smaller “minimal index” in the case of Lex), or observe that (tγ) is a value and therefore (sγ) (tγ) by (Rpo-copy), as we also did in the case for ≽Args. Hence, we have demonstrated that (≽,≻) is a constrained reduction pair.
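As a small companion to the coverage argument above, the following sketch spells out the value-extended precedence in executable form; prec, val_gr, is_value and minimal are assumed encodings of the precedence on non-value symbols, the well-founded ordering on values, the value test, and minimality with respect to the value ordering.

def ext_gr(f, g, prec, is_value, val_gr):
    """Strict part of the value-extended precedence: f strictly above g."""
    if not is_value(f) and is_value(g):
        return True                      # every non-value symbol lies above every value
    if is_value(f) and is_value(g):
        return val_gr(f, g)              # values are compared by the well-founded value ordering
    if not is_value(f) and not is_value(g):
        return prec[f] > prec[g]         # the original precedence on non-value symbols
    return False                         # a value is never above a non-value symbol

def ext_geq(f, g, prec, is_value, val_gr, minimal):
    """Non-strict part: f above or equivalent to g."""
    if is_value(f) and is_value(g):
        # v1 is equivalent to v2 iff v1 = v2 or both are minimal w.r.t. the value ordering
        return val_gr(f, g) or f == g or (minimal(f) and minimal(g))
    if not is_value(f) and not is_value(g):
        return prec[f] >= prec[g]
    return ext_gr(f, g, prec, is_value, val_gr)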
http://arxiv.org/abs/2406.18191v1
20240626091218
Asymptotic Uncertainty in the Estimation of Frequency Domain Causal Effects for Linear Processes
[ "Nicolas-Domenic Reiter", "Jonas Wahl", "Gabriele C. Hegerl", "Jakob Runge" ]
stat.ME
[ "stat.ME", "math.ST", "stat.TH" ]
Methodology of Adapting Large English Language Models for Specific Cultural Contexts Wenjing Zhang1,2 Siqi Xiao1,2 Xuejiao Lei1,2 Ning Wang1,2 Huazheng Zhang1,2 Meijuan An1,2 Bikun Yang1,2 Zhaoxiang Liu*1,2 Kai Wang1,2 Shiguo Lian*1,2 July 1, 2024 ========================================================================================================================================================= § ABSTRACT Structural vector autoregressive (SVAR) processes are commonly used time series models to identify and quantify causal interactions between dynamically interacting processes from observational data. The causal relationships between these processes can be effectively represented by a finite directed process graph - a graph that connects two processes whenever there is a direct delayed or simultaneous effect between them. Recent research has introduced a framework for quantifying frequency domain causal effects along paths on the process graph. This framework allows to identify how the spectral density of one process is contributing to the spectral density of another. In the current work, we characterise the asymptotic distribution of causal effect and spectral contribution estimators in terms of algebraic relations dictated by the process graph. Based on the asymptotic distribution we construct approximate confidence intervals and Wald type hypothesis tests for the estimated effects and spectral contributions. Under the assumption of causal sufficiency, we consider the class of differentiable estimators for frequency domain causal quantities, and within this class we identify the asymptotically optimal estimator. We illustrate the frequency domain Wald tests and uncertainty approximation on synthetic data, and apply them to analyse the impact of the 10 to 11 year solar cycle on the North Atlantic Oscillation (NAO). Our results confirm a significant effect of the solar cycle on the NAO at the 10 to 11 year time scale. § INTRODUCTION Many scientific questions are causal questions; it is often the objective to test and quantify a hypothesised causal effect between two or more processes. Causal inference <cit.> provides a formal language for reasoning about fundamental problems in questions of cause and effect. The underlying mathematical object of any causal problem is a causal model, which consists of a causal graph and a model compatible with the graph. In causal modelling it is often important to consider the temporal order of causal relations <cit.>. The causal structure of a system that is modelled to evolve over a discrete set of time points is often represented by an infinite graph, where a vertex represents the state of a modelled process at a given time, and edges identify direct delayed or simultaneous causal relationships between processes. In the literature, graphs of this type are known as full time graphs, time series DAG or chain graphs <cit.>. The infinite time series graph can be reduced to a finite graph whose vertices correspond to the modelled processes. An edge on this finite graph signals the existence of at least one lagged or contemporaneous effect between the respective processes. This reduced graph is sometimes referred to as the summary graph or process graph <cit.>. Among the possible models for time series graphs are the classic and widely used SVAR processes <cit.>. In previous work <cit.>, we showed that SVAR processes can be reformulated as a linear structural causal model of stochastic processes. 
This type of causal model we called structural equation process (SEP) <cit.>. In this formulation, each edge on the process graph is assigned a linear filter and each vertex is assigned an independent noise process, so that each SVAR component process can be written as the noise process plus the sum of the filtered parent processes. An implication of this reformulation is a notion of process-level causal effects and a generalisation of the classical path rule <cit.>. It turns out that the computations of these process-level causal effects become more tractable in the frequency domain after the application of the Fourier transform. The Fourier transformed causal effect filters are rational functions on the complex unit circle parameterised by the underlying SVAR coefficient vector <cit.>. These functions describe direct and indirect impulse response relationships in the frequency domain between the SVAR component processes. When one process directly or indirectly causes another, the spectral density of the latter is partially determined by the spectral density of the causing process. This part of the spectral density can be quantified and it is called the spectral contribution of the causing process. Estimating causal effects and spectral contributions in the frequency domain is of potential interest in many scientific applications <cit.>. However, in order to draw conclusions from such estimates, it is necessary to have means of assessing the associated statistical uncertainty. One possibility to approach the uncertainty of estimators is to consider their asymptotic behaviour, i.e. the behaviour of the estimator for large sample sizes. In this paper we study aspects of first-order asymptotic theory for estimators of causal quantities in the frequency domain. The present paper is structured as follows. In Section <ref>, we recall the necessary definitions and notions from causal time series modelling. In Section <ref>, we establish rational expressions for the asymptotic covariance of ordinary least squares (OLS) based estimators for causal effects and spectral contributions in the frequency domain. Based on these expressions we formulate Wald type tests, and we approximate the uncertainty of the effect and contribution estimators. For our computations we assume that there are no unobserved processes and that the structure of the process graph and the contemporaneous links on the time series graph are known. However, we do not need to know the exact time lags with which one process possibly drives another. To illustrate the uncertainty approximation in OLS-based estimates of causal effects and spectral contributions we generate synthetic time series data and apply the developed formulas to them. Section <ref> is motivated by a recent result of <cit.>, where for causally sufficient linear models the authors identify among all consistent and differentiable causal effect estimators the one with the lowest asymptotic variance. We generalise this result to estimators for frequency domain causal quantities in time series models. In Section <ref>, we use our characterisations of the asymptotic distribution to re-examine if and how anomalies in solar activity affect anomalies in the Northern Atlantic Oscillation (NAO), a question that has been intensely studied and debated in the climate sciences <cit.>. In our analysis, we focus on frequencies that represent oscillations that last about 10 years, as the variability in solar activity is concentrated on this time scale. 
Our Wald type test suggests with 95 % confidence that variability in solar activity contributes to variations of the NAO on this time scale. § PRELIMINARIES §.§ Graphical modelling Suppose V is a finite set that indexes the processes to be modeled. A time series graph for the processes V is a directed acyclic graph (DAG) = (V ×, ), so that a vertex v(t) on represents the state of the process indexed by v at time t. The set of edges must be such that for any link (u(t-k), v(t))∈ it holds that k ≥ 0 and also that {(u(s-k), v(s)): s ∈) }⊂, which in the following we will denote by u →_k v. The order of the time series graph is the maximum time lag with one process is causing another, that is p max_u,v∈ V{k : u →_k v}. We denote by ^_v the set of all (u,k) such that u →_k v. The contemporaneous DAG _0 = (V, _0) contains a direct link from u to v if and only if there is a contemporaneous effect u→_0 v on . Consequently, the set of contemporaneous parents of v∈ V, written _0(v), consists of all such processes u that satisfy u →_0 v. The process graph of is a finite graph G=(V,D) over the set of process indices V. This graph contains a directed link from u to v, denoted u → v, if and only if u ≠ v and u→_k v for some k ≥ 0. The parent set of a process v ∈ V is (v) {u ∈ V : u → v }. In this work we will always assume that the structure of the process graph G and of the contemporaneous graph _0, together with an upper bound q ≥ p for the order of is known. However, the details of the time series graph in form of the sets _v^ that go beyond the structure of the contemporaneous graph may not be known. The information on given G and _0 and an upper bound q ≥ p can be encoded by selecting for every process v∈ V a finite set of time lagged relations _v such that _v^⊂_v ⊂_0(v) ×{0}∪ ({v}∪(v)) × [1,q]. Throughout this work we use the symbol to refer to a collection of time lagged relations {_v}_v ∈ V such that each _v satisfies (<ref>). Accordingly, ^ denotes the collection {^_v}_v ∈ V. We illustrate the relation between a time series graph and its process graph with an example in Figure <ref>. To avoid overly complicated technicalities, we will often need to make the following assumption in this work. [AL2] A set of time lagged relations _v is said to satisfy the AL2 assumption if for every u ∈ V if the set {j |∈ (u,j) ∈_v } is either empty or contains at least two elements. §.§ Structural vector autoregressive process processes A structural vector autoregressive (SVAR) process for the time series graph of order p is a multivariate discrete time stochastic process = (_v(t))_v ∈ V, t ∈. The process is specified by a multivariate Gaussian white noise process η = (η_v(t))_v∈ V, t ∈, i.e. the variables η_v(t) are mutually independent Gaussians for all v ∈ V and t∈, and a finite coefficient vector Φ = (ϕ_u,v(k))_u→_k v∈^^. The dimensions of Φ are indexed by the direct lagged and contemporaneous causal relations. The vector Φ and the noise process η must be such that _v(t) = η_v(t) + ∑_(u,k) ∈_v^ϕ_u,v(k) _u(t-k) for every v∈ V. Throughout this work we require every SVAR coefficient vector to be such that the process is stable <cit.> and condition (<ref>) in the supplementary material. With the finite set of time lagged relations _v we associate a parameter vector Φ^_v =(ϕ_u,v^_v(k))_(u,k) ∈_v and a stochastic process ^_v, i.e. ϕ^_v_u,v(k) ϕ_u,v(k) if (u,k) ∈_v^ 0 if (u,k) ∈_v ∖_v^ ^_v(t) [ _u(t-k) ]_(u, k) ∈_v. Let us illustrate the notation with the example depicted in Figure <ref>. 
For process v the true set of time lagged causal effects is _v^ = {(u_1, 0), (u_1, 2), (v, 1), (u_2, 0), (u_2, 1)}. In Figure <ref> we represent the set of considered time lags _v by the nodes inside the boxes in the time series graph. The choice of the orange box in the row corresponding to process u_2 reflects the information that the time lag with which u_1 is driving v is not larger than two. Similarly, the box in the row of u_2 encodes the assumption that the time lag with which u_2 is causing v is at most one, and the time lag with which v is driving itself is at most two (red box in the second row). So the coefficient vector and stochastic process associated with the lags _v are as follows Φ^_v = [ ϕ_u_1, v(0) ϕ_u_1, v(1) ϕ_u_2,v(0) 0 ϕ_u_2,v(2) ϕ_v, v(1) 0 ] ^_v(t) = [ _u_1(t) _u_1(t-1) _u_2(t) _u_2(t-1) _u_2(t-2) _v(t-1) _v(t-2) ]. Since _v satisfies (<ref>) it holds that _v(t) = (Φ^_v)^⊤^_v(t) + η_v(t), and the parameter Φ^_v satisfies the regression relation Φ^_v = [(^_v(t)) (^_v(t))^⊤]^-1[(^_v(t) )(_v(t))]. Suppose we have a time series of length T+q, where q≥ p is the maximum time lag occurring in , then one can estimate the above covariance information and use relation (<ref>) to obtain the ordinary least squares (OLS) estimator Φ̂^_v for the estimand Φ^_v. For large T the estimator Φ̂^_v is approximately normally distributed, i.e. √(T)(Φ̂^_v- Φ^_v) →_d 𝒩(0, P^_v) P^_v = ω_v [^_v(t) (^_v(t))^⊤]^-1. §.§ Structural causal models of stochastic processes A SVAR process admits an equivalent formulation as a structural causal model of stochastic processes at the level of its process graph G <cit.>. This model consists of a set of stochastic processes ^ = (^_v)_v ∈ V and a link filter λ_u,v= (λ_u,v(s))_s∈ for every link u→ v on the process graph G such that _v = ^_v + ∑_u ∈(v)λ_u,v∗_u, where ∗ denotes the convolution of two sequences. Infinite sequences and convolution operation on them do not seem practical for computations. However, the computations become more tractable in the frequency domain after application of the Fourier transform . In order to compactly express the Fourier transformed link filters in terms of the underlying SVAR parameter Φ let us introduce for any two x,y ∈ V their associated lag polynomial φ_x,y(z) ∑_(x,k) ∈_yϕ_x,y^_y(k)z^k = ∑_(x,k)∈^_yϕ_x,y(k)z^k, evaluated at frequency z ∈ S^1 = {z ∈: |z| =1 }. The link function <cit.> _u,v of u → v is the Fourier transform of the link filter λ_u,v. This function is defined on the complex unit circle S^1 and parameterised as a fraction of lag polynomials, that is _u,v(z) ((λ_u,v))(z) = φ_u,v(z)/1 - φ_v,v(z). § ASYMPTOTIC DISTRIBUTION OF CAUSAL EFFECT ESTIMATORS IN THE FREQUENCY DOMAIN §.§ Asymptotic uncertainty in the estimation of link functions For the rest of this section, we fix a time series graph DAG with order p ≥ 0 and process graph G=(V,D). Our information on relative to the process graph G and the contemporaneous graph _0 is encoded by a collection of time lagged relations = {_v }_v ∈ V. In order to quantify the uncertainty in the estimation of link functions we would like to consider them as real valued functions in the parameters of the SVAR model. So from now on we read a given link function as a two dimensional real valued function of which the first coordinate is the real part and the second is the imaginary part. To express the function for the real resp. 
imaginary part as a rational function in the SVAR coefficients we make use of the fact that the non-zero complex numbers are isomorphic to a particular subgroup of the orthogonal 2× 2 dimensional matrices. Specifically, if 0 ≠ z ∈ is a complex number, then we define its corresponding operator as M(z) = [ (z) -(z); (z) (z) ] M(z^-1) = M(z)^-1 = 1/|z|^2[ (z) (z); -(z) (z) ] Note that M(z)e_1 = z and M(zz') = M(z)M(z') and M(z^∗) = M(z)^⊤. Furthermore, for x, y ∈ V the evaluation of the real and imaginary part of the lag polynomial φ_x,y are polynomials in the coefficients Φ^_y, i.e. (φ_x,y(z)) = ∑_(x,k) ∈_yϕ_x,y^_y(k)(z^k) (φ_x,y(z)) = ∑_(x,k) ∈_yϕ_x,y^_y(k)(z^k). By combining (<ref>) and (<ref>) we conclude that the real and imaginary part of the link function are rational functions in the parameters Φ^_v, i.e., (_u,v(z)) = (φ_u,v(z))(1- φ_u,v(z)) - (φ_u,v(z))(φ_v,v(z))/(1- φ_v,v(z))^2 + (φ_w,w(z))^2 (_u,v(z)) = (φ_u,v(z)) (φ_v,v(z)) + (φ_u,v(z))(1-φ_v,v(z))/(1- φ_v,v(z))^2 + (φ_v,v(z))^2 We use the example depicted in Figure <ref> to demonstrate that the real and imaginary part of the link function _u_1, v(z) are rational functions in the SVAR coefficients. For the sake of easier readability we abbreviate x_k = ϕ^_v_u_1, v(k) and y_j = ϕ^_v_v,v(j) for k=0,1 and j=1,2. We compute the rational functions for the real and the imaginary part (_u_1, v(z)) = p(x_0, x_1, y_1, y_2)/q(y_1, y_2) (_u_1, v(z)) = p'(x_0, x_1, y_1, y_2)/q(y_1, y_2). The denominator q is a polynomial in the two variables y_1, y_2, i.e. q(y_1, y_2) = 1- μ_1 y_1 - μ_2 y_2 + μ_1,1y_1^2 + μ_2,2y_2^2 + μ_1,2y_1y_2 with coefficients μ_i = 2 (z^i) and μ_i,i = (z^i)^2 + (z^i)^2 for i=1,2, and μ_1,2 =2 ((z)(z^2) + (z^2)(z)). Similarly, the numerator of the real resp. imaginary part is a polynomial of degree two in four variables, i.e. p(x_0, x_1, y_1, y_2) = α_0 x_0 + α_1 x_1 + α_0,1 x_0y_1 + α_0,2 x_0y_2 + α_1,1x_1y_1 + α_1,2x_1y_2, p'(x_0, x_1, y_1, y_2) = β_0 x_0 + β_1 x_1 + β_0,1 x_0y_1 + β_0,2 x_0y_2 + β_1,1x_1y_1 + β_1,2x_1y_2 with coefficients α_i = (z^i) and β_i = (z^i), and α_i,j = (z^i)(z^j)-(z^i) (z^j) and β_i,j = (z^j)(z^i) - (z^i)(z^j) for i=0,1 and j=1,2. For a link u→ v on the process graph, we write _u,v(z) to denote the estimator for _u,v(z) obtained by plugging the OLS estimator Φ̂^_v into the equations (<ref>) and (<ref>). Thus, the uncertainty in _u,v(z) is determined by the uncertainty in the asymptotically normal OLS estimator Φ̂^_v. From the delta method <cit.> it follows that the estimator _u,v(z) follows a two-variate normal distribution if the gradient of the link function as a function of Φ^_v has full rank. If _v satisfies the assumption <ref>, then the gradient has full rank for generic choices of Φ, which means that the set of parameters Φ for which the rank is not full has Lebesgue measure zero. Furthermore, all the link function estimators for the links pointing to the process vertex v and the estimator for the internal function of v, which we define as _v (1- φ_v,v)^-1, asymptotically follow a joint normal distribution. Let us write _∙, v(z) to denote the vector composed of the internal function at v and the link functions associated with the parents of v, each evaluated at z ∈ S^1. Even though we do not need to use the internal functions yet, we will introduce them at this point as it will be useful later on. Suppose _v satisfies the AL2 assumption <ref>. Then for generic choices of SVAR coefficients Φ and for all but finitely many z∈ S^1, the asymptotically normal OLS estimator _∙, w(z), i.e. 
√(T)(_∙,v(z)- _∙,v(z)) →_d 𝒩(0; [ ^_v(u_1,u_2;z) ]_u_1, u_2 ∈(v)∪{v}) has positive definite asymptotic covariance matrix. The asymptotic covariance matrix is block structured into 2× 2-matrices indexed by tuples of (v) ∪{v}, and the 2× 2 block matrix ^_v(u_1,u_2;z), i.e. the asymptotic covariance of _u_1,v(z) and _u_2,v(z), is a rational function in the SVAR parameters ϕ_u_1,v(k), ϕ_u_2,v(j), ϕ_v,v(l). In the appendix we provide the explicit rational functions for the block entries in the asymptotic covariance ^_v(z). §.§ The asymptotic covariance of the causal effect estimator Viewing an SVAR process as a structural causal model of stochastic processes enables one to quantify causal effects between processes at the level of the process graph. These process level effects can be computed by means of a generalized path rule. A directed path between processes is a sequence of consecutive edges on the process graph. In particular, we allow paths to visit a vertex more than once. In this work, we treat the empty path at any process node v as a directed path and denote it as ϵ_v. Suppose v,w ∈ V are process indices, then we write (v,w) for the set of all directed paths starting at v and ending at w. With the direct link filters (<ref>) it is possible to systematically quantify the influence the processes have on another. Specifically, if Π⊂(v,w), then the part of _w that is determined by the process _v along all the paths Π is identified by the formula ^Π = (∑_π∈Πλ^(π)) ∗_v, λ^(π) = ∏_(x,y) ∈πλ_x,y, where λ^(π) refers to the path filter of π, which is the convolution of all the link filters on π, see <cit.>. This description quantifies the causal structure among the processes of in terms of the structure of the process graph G, and generalises the classical path rule <cit.>. Once more, the expressions are easier to compute in the frequency domain, i.e. ^Π (∑_π∈Πλ^(π)) = ∑_π∈Π^(π) ^(π) (λ^(π)) = ∏_(v,w)∈π_v,w, where ^(π) is the path function <cit.> associated with the path π, which is the point-wise multiplication of the link functions associated with the links on π, and ^Π is the total effect function for the set of paths Π. In particular, the path function of the empty path is the constant function. The evaluation of a path or total effect function at some frequency z is a rational function in the SVAR parameter Φ^ = (Φ^_v)_v ∈ V. This follows because every link function is a rational function in Φ^ and by using the identities below (<ref>). In the following, we write ^(π) to refer to the OLS-based estimator for ^(π) that we obtain by inserting the OLS result Φ̂^ into (<ref>). In the following proposition we give an expression for the asymptotic covariance of OLS-based path function estimators. To keep the notation simple, we consider only directed paths π that visit each process vertex of G at most once, i.e. paths that do not contain cycles. Suppose π and ρ are two directed paths without repeating vertices on the process graph G with starting points v resp. v'. Then the asymptotic covariance between the estimators ^(π) and ^(ρ) is ^𝐋(π, ρ) = ∑_v ∈ V(π∩ρ)∖{v,v'}^(π∖ v) [^_v(π(v), ρ(v))] (^(ρ∖ v))^∗, ^(π∖ v) ∏_x → y ∈π : y ≠ v_x,y where V(π∩ρ) is the set of all vertices that are visited by both π and ρ, and if v is a vertex visited by π, then π(v)∈ V is the vertex that precedes v on the path π. In particular, the asymptotic covariance of two path functions is zero if the paths do not intersect. Before moving to the next subsection, we introduce two more notations. 
Suppose Π_1 and Π_2 are sets of directed paths, then we define the asymptotic block covariance matrix for the path function estimators as ^(Π_1, Π_2) [ ^(π_1, π_2) ]_π_1 ∈Π_1, π_2 ∈Π_2. The asymptotic covariance of the total effect estimators ^Π_1 and ^Π_2 is ^(^Π_1, ^Π_2)∑_π_1 ∈Π_1∑_π_2 ∈Π_2^(π_1, π_2). For the process graph depicted in Figure <ref> we illustrate the formula with which to compute the asymptotic covariance of the estimators for the causal effect of the process v on the process w. There are two directed paths along which the process v could possibly impact the process w. So the asymptotic covariance ^(Π, Π) is a 2× 2 block matrix and each pair of paths from v to w indexes a block entry. Using Proposition <ref> we obtain the following expressions for the block entries ^(π_v,w, π_v,w) = ^_w(v,v) ^(π_v,w, π) =^_w(v,m)(_v,m)^∗ ^(π, π) = _v,m^_w(m,m)(_v,m)^∗ + _m,w^_m(v,v)(_m,w)^∗ §.§ The asymptotic covariance of the spectral contribution estimator Path and total effect functions provide a frequency domain description of how one process w ∈ V responds to variations in another process v ∈ V through a set of directed paths Π⊂(v,w). To get a better understanding of what these functions quantify let us consider the spectral density of the SVAR process and its internal dynamics ^, which we denote by and ^ respectively. The spectral density (of the internal dynamics) is the Fourier transform of the autocovariance (ACS) Σ = (Σ(k))_k ∈ and Σ^ respectively, which are defined as follows Σ(k) [(t) (t-k)^⊤] Σ^(k) [^(t) ^(t-k)]. The spectral density is a statistical measure of the dynamics of the SVAR process derived from the spectral representation <cit.> of . The spectral representation of is a superposition of oscillations, i.e. infinitely recurring temporal patterns, and each oscillation is characterised by its frequency, which encodes the time it takes the oscillation to complete a cycle. The complex-valued amplitude of each oscillation is a random variable, and the amplitudes of two different oscillations are independent of each other. The spectral density _w(z) measures the variance of the amplitude associated with the oscillation at frequency z. The spectral density thus provides information on how the variation of the component process _w is distributed across time scales. The fraction of _w(z) determined by the process _v along the set of paths Π is quantified by the path or total effect functions. Specifically, this fraction is the spectral density of the process ^Π as defined in (<ref>). We call this fraction the spectral contribution of v to w along Π and denote it by ^Π. The spectral density ^Π admits a rational expression in terms of the SVAR parameters Φ^. To see this, we observe that the definition of ^Π implies that ^Π = |^Π|^2_v. Since |^Π| is a rational function in Φ^, it remains to find a rational parameterisation of _v. To do this, we recall that the set of ancestors of v, written (v)⊂ V, consists of those vertices u ∈ V that satisfy P(u,v)≠∅, and due to our convention regarding empty paths, we consider each vertex v as an ancestor of itself, i.e. v∈(v). Then, using the frequency domain trek rule <cit.>, we get that _v = ∑_u ∈(v) |^(u,v)|^2 _u^ ^_u(z) = ω_u|_u(z)|^2 = ∑_u ∈(v) |^(u,v)_u,u|^2 = ω_u |1- φ_u,u(z)|^-2 This shows that ^Π is a rational function of the SVAR parameter. If π∈(u,v) is a path, then we refer to the function ^(π)^(π)_v as the weighted path function of π, and for Π⊂(v,w) we denote by ^Π the sum of all weighted path functions of the paths in Π. 
In particular, the weighted path function of the empty path ϵ_v at v is the internal function of v, i.e. ^(ϵ_v) = _v. We conclude that ^Π, i.e. the spectral density of ^Π, may be written in terms of weighted path functions ^Π |^Π|^2 _v = ∑_u ∈(v)ω_u |^Π_u|^2 , where Π_u = (u,v) + Π= {ρ_u + π : ρ_u ∈(u,v), π∈Π} is the set of all possible concatenations. The function |^Π_u|^2 quantifies the spectral contribution of u via v on w along Π, up to the amplitude ω_u. We now wish to determine the asymptotic uncertainty of the estimator ^Π, which is Φ̂^ plugged into (<ref>). Since we expressed this contribution in terms of weighted path functions, we begin with an expression for the asymptotic covariance between any two weighted path function estimators. Suppose π and ρ are two directed paths without cycles on the process graph G, where π is going from v to w and ρ from v' to w'. The asymptotic covariance between the estimator ^(π) and ^(ρ) is ^(π^, ρ^) = _v[^(π, ρ)](_v')^∗ , if v ≠ v' _v[^(π, ρ)](_v)^∗ + ^(π) [^_v(v,v) ](^(ρ))^∗ , if v = v' To quantify the asymptotic uncertainty of the spectral contribution estimator, we need to determine for any two u,u'∈(v) the asymptotic covariance ^(^Π_u, ^Π_u'), which we express in the following. Let v∈ V be such that for every u ∈(v) the set of paths Π_u is finite. Then for u,u'∈(v) it holds that ^(^Π_u, ^Π_u') = ^Π [^(^(u,v), ^(u',v))] (^Π)^∗ +^(u,v) [(^Π)] (^(u',v))^∗ We give the proof in the supplementary material. In the next subsection, we will use Proposition <ref> to derive an asymptotic confidence region for the estimator ^Π. For the example process graph shown in Figure <ref> we use Proposition <ref> and <ref> to spell out the asymptotic covariance for the estimation of the spectral contribution of v to the process w, which we identified as |(_v,w + ^(π))|^2 _v. First, we need to derive the expressions for the asymptotic covariance of the spectral density of the process v, which is computed as follows _v = |^(π_u_1, v)|^2 + |^(π_u_2, v)|^2 + |^(ϵ_v)|^2 . According to Proposition <ref> we need to compute the asymptotic block covariance of the multivariate estimator of weighted path functions (^(π_u_1, v), ^(π_u_2, v), ^(ϵ_v)). Using the notation from the previous section and Proposition <ref> we get that the block entries on the diagonal are as follows ^(π_u_1, v^, π_u_1,v^) = _u_1^_v(u_1, u_1)_u_1^∗ ^(π_u_2, v^, π_u_2,v^) = _u_2^_v(u_2, u_2)_u_2^∗ ^(ϵ_v^, ϵ_v^) = ^_v(v, v) The expressions for the off diagonal entries are computed analogously (see the supplement <ref>). The estimator for the spectral contribution of u_1 along Π to w is ^Π_u_1 = ^(π_u_1, v)^Π. Proposition <ref> identifies its asymptotic covariance as ^(^Π_u_1, ^Π_u_1) = (^Π_u_1)^_v(u_1, u_1)(^Π_u_1)^∗ + (_u_1,v_u_1) ^(^̂Π̂, ^̂Π̂) (_u_1,v_u_1)^∗ The remaining blocks in the asymptotic covariance of the estimator for the spectral contribution are shown in the supplement <ref>. §.§ Asymptotic significance tests and confidence regions A process graph G expresses a collection of hypotheses about the process . Namely, each path π∈(v,w) on the process graph implicitly hypothesises that the behaviour of process v influences the behaviour of process w. Since these hypotheses can be quantified using (weighted) path functions, they can be tested on observational data. 
In this subsection, we construct asymptotic hypothesis tests to assess whether observational time series data of length T+q are consistent with the hypothesis that process v causally drives w along a set of paths Π at a given significance level α∈ (0,1). We saw earlier that the estimators for (weighted) path functions asymptotically follow a normal distribution, and the expressions for their asymptotic covariance matrices admit explicit rational expressions in terms of Φ^. Hypotheses about asymptotically normal estimators can be tested using what is known as the Wald test <cit.> or χ_2-test <cit.>. Specifically, suppose we are interested in a vector valued quantity θ∈^m for which we have an asymptotically normal estimator θ̂_n, i.e. √(n)(θ̂_n - θ) →_d 𝒩(0; Σ), where n is the number of samples from which we construct the estimator, and Σ the asymptotic covariance matrix. We now want to decide whether the true but unknown value θ is different from the zero vector at some significance level α. To do this, we assume the opposite and take this as the null hypothesis, i.e. H_0: θ = 0‖θ‖ = 0. Consequently, the alternative hypothesis becomes H_1: θ≠0‖θ‖ > 0. Under the null hypothesis H_0 it holds that W_n n θ̂_n^⊤Σ̂^-1θ̂_n →_d χ^2(m). This means that under H_0 and for large sample size n, the Wald statistic W_n approximately follows the distribution of a χ^2(m) distributed random variable. The Wald test rejects H_0 if (χ^2(m) ≥ W_n)) < 1 - α, which is equivalent to W_n ≥ x_α; m, where x_α; m is the α-quantile of the χ^2(m) distribution. One can use convergence (<ref>) to derive an approximate confidence region for the estimator θ̂_n and significance level α. Specifically, for the elliptical set C_α = {𝐱∈^m | n(𝐱-θ)^⊤Σ^-1(𝐱-θ) ≤ x_α; m}, it holds asymptotically that (θ̂_n ∈ C_α) = α. We obtain an estimate Ĉ_α if we replace the asymptotic covariance Σ by the sample covariance Σ̂ and the true θ by its estimate θ̂_n. In particular cases the euclidean norm ‖θ‖ is actually the quantity of interest. From the definition of the confidence region C_α, it is possible to directly derive an asymptotic confidence interval for the estimator ‖θ̂_n ‖, since asymptotically (‖θ̂_n ‖∈ I_α =[max_𝐱∈ C_α‖𝐱‖, min_𝐱∈ C_α‖𝐱‖]) ≥α. Using the estimated region Ĉ_α we obtain an estimated confidence interval Î_α. We are now applying the observations on Wald statistics to derive approximations to the uncertainty in the estimation of frequency domain causal quantities. In the following we denote by ^(∙, ∙) the estimated asymptotic covariance obtained by substituting the OLS estimate Φ̂^ into the rational expression for ^(∙, ∙). Let us return to the problem of testing whether the process v drives the process w along a set of paths Π. If the asymptotic covairance ^(Π, Π) is positive definite, then there are three approaches to this question, and each approach has a corresponding Wald test. First, we could test for each path π∈Π whether the estimated path function ^(π) is significantly non-zero. By doing this, we simultaneously test whether all the link functions of the links on π are significantly non-zero. Under the null hypothesis of no effect, we obtain the Wald statistic: T[^(π)]^⊤[^(π, π)]^-1 [^(π)] →_d χ^2(2) Second, we could test whether there is a significant causal effect along at least one path π∈Π, which is equivalent to testing whether ∑_∈Π |^(π)(z)| is significantly non-zero. 
Under the null hypothesis of no effect along any path π∈Π we get the statistic: T[^(π)]_π∈Π^⊤[^(Π, Π)]^-1 [^(π)]_π∈Π→_d χ^2(2|Π|) Finally, we could test whether the estimated total effect function ^Π = ∑ _π∈Π^(π) is significantly different from zero. Under the null hypothesis we obtain the Wald statistic: T[^Π]^⊤[^(^Π, ^Π)]^-1 [^Π] →_d χ^2(2) The estimation of the function ^Π is subject to uncertainty, and this uncertainty can be approximated using the estimated asymptotic covariance matrix ^(^Π, ^Π). For the significance level α, the construction (<ref>) yields a region Ĉ_α(z) ⊂^2 such that asymptotically (^Π(z) ∈Ĉ_α(z)) = α. For interpretation and visualisation, however, it is more convenient to consider the absolute value |^Π|, that is, the strength with which the process v influences the process w along Π. By the construction presented at the beginning of this section we get an estimated confidence interval Î_α(z) for the estimated effect strength |^Π(z)|. Analogously, one can approximate the uncertainty in the estimated spectral contribution of each ancestor u∈(v) through v to w along Π. Regarding the uncertainty in the estimate of the total contribution ^Π, we recall that it is equal to the sum over the squared absolute values of the individual contributions ^Π_u. So the total contribution is non-zero if and only if ^Π_u is significantly non-zero for at least one ancestor u ∈(v). Thus, whether the process v contributes significantly to the dynamics of w can be decided with the following Wald statistic: T[^(Π_u)]_u ∈(v)^⊤ [^(^Π_u, ^Π_u')]_u,u'∈(v)^-1 [^(Π_u')]_u' ∈(v)→_d χ^2(2 |(v)|) Finally, to quantify the uncertainty in the estimated total spectral contribution ^Π at a given significance level α, we denote by Ĉ_α(z) ⊂^2|(v)| the elliptical confidence region for the multivariate estimator [^Π_u(z)]_u ∈(v). Since ^Π = ‖ [^Π_u(z)]_u ∈(v)‖^2 it follows that { x^2 | x ∈Î_α(z)} is an approximate confidence interval for the total contribution estimator. Path coefficients and spectral contributions admit a clear interpretation with respect to the stochastic process. However, they depend non-linearly on the coefficients of the SVAR parameter and possibly involve a large number of coefficients. These two factors may yield highly uncertain estimators. As a result one may need fairly long time series in order to statistically recognize a given effect by one of the tests presented in this subsection. If the objective is only to decide on the existence of a causal effect along a set of paths at a some frequency z ∈ S^1, then one could make use of the observation _v,w(z) = φ_v,w(z)/1-φ_w,w(z) =0 φ_v,w(z) = 0, where the right hand side in contrast to the left hand side depends linearly on the SVAR coefficients. In particular, ^(π) = 0 iff φ^(π) = 0. The asymptotic covariance among the estimators (φ̂_v,w(z))_v∈(w) can be obtained from Proposition <ref> by setting the matrix A(z)= 𝕀_2 and A_v(z)= 0 for v ∈(w) as in Proposition <ref>. The asymptotic covariance between estimators for (weighted) path functions can then be computed by the formulas as presented in Proposition <ref> and <ref>, which in turn can then be used to formulate Wald type tests as in (<ref>) and (<ref>). To illustrate the frequency domain Wald tests and uncertainty approximations we generate a synthetic time series of length 500 for the process graph G displayed in Figure <ref> and underlying time series graph and SVAR coefficient vector Φ (see supplementary material). 
With this time series we analyse and test how the process v is driving w. To test whether the data supports a significant causal effect of v on w we employ either of the tests (<ref>)-(<ref>). In the left part of Figure <ref> we display the estimated magnitudes of the direct, mediated and total effect function together with the estimated 95% confidence region and p-values. Additionally, we compute the true magnitudes of the respective effects and note that for each effect the true magnitudes lie within the estimated confidence region. While the direct and total effect are detected for all frequencies with 95% confidence, the mediated effect does not get recognised for frequencies greater than approximately 0.4 π. However, we detect the effect with the Wald test described in Remark <ref>, as indicated by the p-values p^∗_med (dash dotted line) in the second row and second column of the left part in Figure <ref>. The right part of Figure <ref> displays the estimated spectral contributions of each of the processes u_1, u_2,v to the process w, together with the estimated 95% confidence regions and the true contributions. Also here we observe that the true values lie within the estimated confidence region. In addition we display the p-values of the estimated contributions. § ASYMPTOTIC EFFICIENCY Suppose we seek to estimate a given effect or spectral contribution from time series data. We saw that any such quantity is a rational function in the SVAR parameters. In the previous section we considered one type of estimator, i.e. estimating the SVAR coefficient and feeding it into the rational function. In general, however, there might be many more possible estimators with which the desired quantity can be estimated. The objective of this section is to identify the estimator considered in the previous section as the asymptotically optimal, i.e. the estimator with lowest asymptotic variance, with respect to a specific class of estimators. With this we generalise the main result in <cit.> to causal effects in the frequency domain. For the rest of this section, we fix a time series DAG having order p. We assume that the associated contemporaneous graph _0 and the possibly cyclic process graph G are known. Once more, we encode the information on by a collection of time lagged relations = {_v }_v∈ V such that (<ref>) holds for every v. In what follows we assume that the processes V = { v_1, …, v_m } are topologically ordered with respect to the contemporaneous graph _0, that is v < w only if w ∉_0(v), where _0(v) are the ancestors of v with respect to _0. In this section, we view any stable SVAR parameter vector as a tuple of matrices, that is Φ = [Φ(k)]_k =0^q ∈^m × m⊗^q, where q ≥ p. So the entry Φ_j,i(k) = ϕ_v_i, v_j(k) if v_i →_k v_j and 0 otherwise. Due to the ordering of the process indices V it follows that Φ(0) ∈ LT(m), which is the subspace of lower triangular m × m-dimensional matrices. For the arguments to come it will be important to note that there is an explicit relation between the ACS Σ and the parameter tuple (Φ, Ω). This relation is based on the following SVAR recursion: (t) = ∑_k= 0^q Φ(k)(t-k) + η(t) =∑_k=1^q Φ̃(k)(t-k) + η̃(t) Φ̃(k) = (𝕀- Φ(0))^-1 η̃= (𝕀- Φ(0))^-1η(t) This representation reveals the recursive structure (Yule-Walker equations <cit.>) in the ACS sequence: Σ(k) = Φ̃(1)Σ(-1) + ⋯ + Φ̃(p)Σ(-p) + Ω̃ , k =0 Φ̃(1)Σ(k-1) + … + Φ̃(p)Σ(k-p) , k ≠ 0 Since Σ(k) = Σ(-k)^⊤ it then follows that the ACS Σ is uniquely determined by the first p entries Σ(0), …, Σ(q-1). 
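As a small numerical illustration of the reduced-form recursion and the resulting Yule-Walker relation, the following sketch simulates a bivariate SVAR process and checks the relation on sample autocovariances; the coefficient matrices and noise variances are hypothetical and chosen only for this check.

import numpy as np

def simulate_svar(Phi, Omega, T, burn=500, seed=0):
    """Simulate x(t) = sum_{k=0}^{q} Phi[k] x(t-k) + eta(t) with eta ~ N(0, Omega).

    Phi is a list of (m, m) matrices; Phi[0] collects the contemporaneous effects and is
    strictly lower triangular under the topological order, so the reduced form is
    x(t) = sum_{k>=1} Phi_tilde[k] x(t-k) + (I - Phi[0])^{-1} eta(t).
    """
    rng = np.random.default_rng(seed)
    m, q = Phi[0].shape[0], len(Phi) - 1
    A = np.linalg.inv(np.eye(m) - Phi[0])
    Phi_tilde = [A @ Phi[k] for k in range(1, q + 1)]
    x = np.zeros((T + burn, m))
    eta = rng.multivariate_normal(np.zeros(m), Omega, size=T + burn)
    for t in range(q, T + burn):
        x[t] = sum(Phi_tilde[k - 1] @ x[t - k] for k in range(1, q + 1)) + A @ eta[t]
    return x[burn:]

def sample_acs(x, max_lag):
    """Sample autocovariances Sigma(k) = E[x(t) x(t-k)^T] for k = 0, ..., max_lag."""
    T, m = x.shape
    xc = x - x.mean(axis=0)
    return [xc[k:].T @ xc[:T - k] / (T - k) for k in range(max_lag + 1)]

# Hypothetical bivariate example: one contemporaneous link (v1 -> v2) and lag-1 effects.
Phi = [np.array([[0.0, 0.0], [0.5, 0.0]]),   # Phi(0)
       np.array([[0.4, 0.0], [0.3, 0.2]])]   # Phi(1)
Omega = np.diag([1.0, 0.5])
x = simulate_svar(Phi, Omega, T=20000)
Sigma = sample_acs(x, max_lag=2)
A = np.linalg.inv(np.eye(2) - Phi[0])
# Yule-Walker relation for k = 2 with q = 1: Sigma(2) = Phi_tilde(1) Sigma(1);
# the residual below should be close to zero up to sampling error.
print(Sigma[2] - (A @ Phi[1]) @ Sigma[1])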
Suppose Σ = (Σ(k))_k ∈ is a collection of m × m-dimensional matrices and I, J ⊂ finite subsets, then we consider the following block matrix constructions: Σ(I) = [ Σ(i) ]_i ∈ I∈^m × m ⊗^|I| Σ(I, J) =[ Σ(j-i) ]_i ∈ I, j ∈ J∈^m× m ⊗^|I| × |J| Based on these matrix constructions we consider the following space of matrix valued tuples: ℰ_q(m) = { (Σ(i))_-q ≤ i ≤ q|Σ(k) = Σ(-k)^⊤, Σ([1,q], [1,q]) ∈PD(m(p-1)) } The following Proposition establishes a diffeomorphic relation between the parameter pair (Φ, Ω) and the ACS Σ of any stable SVAR process. There are open sets U ⊂ LT(m) × (^m × m⊗^q) ×diag_+(m) such that every parameter pair (Φ, Ω) ∈ U defines a stable process, and an open set V ⊂ℰ_q(m) such that there is a diffeomorphism U ≅ V. Let us begin with the map in the inverse direction. So we pick some (Σ(i))_-p ≤ i ≤ p∈ℰ_p(m). Then we set Φ̃ = Σ([1,q], [1,q])^-1Σ([1,p]) Ω̃= Σ(0)- ∑_j=1Σ(j)Φ̃(j), which is the unique solution to the recursion in Equation (<ref>). Then we obtain Φ(0) and Ω from Ω̃ by recursive regression along the topological order of _0, and for k > 1 we set Φ(k) = (𝕀- Φ(0))^-⊤Φ̃(k). Now let us consider a collection of coefficients Φ∈ UT(m) ∈× (^m × m⊗^q) and positive diagonal matrix Ω∈diag_+(m). The ACS Σ of the SVAR process specified by (Φ, Ω) then satisfies the recursion (<ref>). Consequently, we obtain for any two v,w ∈ V and -p+1 ≤ j ≤ p-1 the relation σ_v,w(j) = ω̃_v,w(j) + ∑_x,y ∈ V∑_k_x=1^p ∑_k_y=j+1^p ϕ̃_x,v(k_x) σ_x,y(k_x - k_y) ϕ̃_y,w(k_y-j), ω̃_v,w(j) = [Ω̃]_v,w if j = 0 0 if j ≠ 0 In addition, it must hold that σ_v,w(j) = σ_w,v(-j). That means for every Φ we get a linear system of equations that can be represented by a quadratic m(q-1) × m (q-1)-dimensional matrix. We observe that the determinant of this matrix is a non-zero polynomial in the parameters Φ. This follows as the coefficient of the zero degree monomial is one. So we conclude that the set of parameters Φ for which the corresponding system of equations has a unique solution is open. For such parameters, the two maps constructed in this proof are rational functions that are inverse to each other. Furthermore, the set of SVAR parameters that defining a stable process contains a set that is open in LT(m) ∈× (^m × m⊗^q), see <cit.>. This finishes the proof. We now define the class of estimators among which we seek to identify the asymptotically optimal one. First, let U⊂ V be a set of processes, and w∈ V ∖ U a target process. For each u ∈ U we furthermore pick a possibly infinite subset of paths Π_u⊂(u,w). Then, for some z ∈ S^1 we denote by τ(z) either the vector of path functions [^Π_u(z)]_u ∈ U or of weighted path functions [^Π_u]_u ∈ U. We require the sets of paths Π_u to be such that they correspond to a (controlled) causal effect <cit.> of U on w. This requirement ensures that τ is a rational function in the SVAR parameters. Similarly to <cit.> we introduce the space of all consistent estimators for τ(z) that can be written as differentiable functions of a finite collection of the (estimated) ACS entries of the SVAR process, i.e. for q ≥ p we consider 𝒯^(q)(z) = {τ̂(z): ℰ_q(m) →^2 ⊗^|U||τ̂ is differentiable and a consistent estimator of τ} Since τ(z) is a rational function in the SVAR parameters it follows from Proposition <ref> that 𝒯 is non-empty. The set 𝒯^(q)(z) contains all possible estimators that are based on least square regression. In particular, this includes the estimator τ̂^ with which we refer to the estimator that evaluates the rational function defining τ on the OLS estimate Φ̂^. 
But this is not the only type of estimator for frequency domain causal quantities. The following example indicates a less obvious estimator for an indirect causal effect in the frequency domain. We consider an SVAR process with process graph G = (V,D), where V = [3] and D = { 1 → 2, 2 → 3 }. The underlying time series graph has order 1. The lagged parent sets are L_i^ = (i) × [0,1] ∪{i}×{1} for every i ∈ V. The total causal effect of 1 on 3, which is the path function of π = 1 → 2 → 3, is ^(π)(z) = (p_0 + p_1 z + p_2 z^2)/(1 - q_1 z - q_2 z^2), where p_0 = ϕ_1,2(0)ϕ_2,3(0), p_1 = ϕ_1,2(0)ϕ_2,3(1) + ϕ_1,2(1)ϕ_2,3(0), p_2 = ϕ_1,2(1)ϕ_2,3(1), q_1 = ϕ_2,2(1) + ϕ_3,3(1), q_2 = ϕ_2,2(1)ϕ_3,3(1). Then we denote by λ^(π) the path filter of π, as defined and characterised as sums of path coefficients on the time series graph in <cit.>. With this filter we can establish a polynomial equation system for the coefficients p_k and q_l as follows:
λ^(π)(0) = p_0,
λ^(π)(1) = q_1 p_0 + p_1,
λ^(π)(2) = (q_1^2 - q_2) p_0 + q_1 p_1 + p_2,
λ^(π)(3) = (q_1^3 - 2 q_2 q_1)p_0 + (q_1^2 - q_2) p_1 + q_1 p_2,
λ^(π)(4) = (q_1^4 - 3q_2q_1^2 + q_2^2)p_0 + (q_1^3 - 2 q_2 q_1)p_1 + (q_1^2 - q_2) p_2.
The values of the path filter λ^(π) can be obtained with OLS regression from the ACS information Σ_1,1(i) for i ∈ [1,5] and Σ_3,1(i) for i ∈ [0,4]. The triangular structure of the first three equations lets us rewrite the above system as a system of two equations in the parameters q_1, q_2. The solution to this system of equations is unique if the numerator and denominator of ^(π) are coprime, which is the case for generic SVAR parameter choices <cit.>. This yields a consistent estimator τ̂ = ^(π)∈𝒯_a^(5). We are now prepared to state the main result of this section, which can be seen as a frequency domain version of <cit.>. Let G=(V,D) be a process graph with underlying time series graph of order p, and let τ be a vector of sums of (weighted) path functions. If the OLS estimator τ̂^(z) is asymptotically normal, then for every q ≥ p and every asymptotically normal estimator τ̂(z) ∈𝒯^(q)(z) the asymptotic covariance of τ̂(z) dominates that of the OLS-based estimator, i.e. (τ̂(z)) ≽(τ̂^(z)), where for two positive semi-definite matrices A,B we write A ≽ B if A - B is positive semi-definite. Furthermore, suppose ' and ” are collections of time lagged relations such that both satisfy condition (<ref>) and '_v ⊂”_v for every v ∈ V; then (τ̂^”(z)) ≽(τ̂^'(z)). § IMPACT OF THE SOLAR CYCLE ON THE NAO §.§ Background In this section, we employ the tests for causal effects in the frequency domain from Section <ref> to analyse the possible effect of variations in solar activity, represented by sunspot numbers, on northern European climate during winter, represented by the North Atlantic Oscillation (NAO). Solar activity undergoes an approximately 11-year cycle in which slightly enhanced solar insolation coincides with increases in the number of sunspots. While the direct impact of the solar cycle on global climate is very small, consistent with the energy budget of the planet <cit.>, it has been suggested that the sunspot cycle can influence the NAO via its effect on temperature gradients in the atmosphere <cit.>. The NAO represents the tendency towards stronger or weaker westerly circulation. Empirical analysis suggests that during minima of the sunspot cycle the NAO tends to become negative, resulting in northerly and easterly flow, which is conducive to cold winter weather, as observed during the 2009/2010 solar minimum.
However, the atmospheric circulation can also be influenced by remote effects of the El Niño Southern Oscillation (ENSO), by the Atlantic multidecadal oscillation in sea surface temperatures (AMO), and by aerosol effects from human influences and volcanic forcing, which are reflected through their optical depth (AOD). §.§ Data We construct the time series for our analysis from the following publicly available sources: NAO <cit.>, ENSO <cit.>, AMO <cit.>, AOD <cit.>, SNS <cit.>. From each of these monthly time series we construct yearly winter series by averaging the December, January and February values. We normalise each time series by subtracting its mean and dividing by its variance. The Dickey-Fuller test <cit.> applied to the annual wintertime series suggests that the SNS and AMO time series are non-stationary, which we adjust for by instead considering the time series of first-order differences <cit.>. These pre-processing steps give five annual time series that are normalised and stationary, as suggested by the Dickey-Fuller test at 95% confidence. The time series cover the period from 1870 until 2010. §.§ Result The hypothesised causal structure between the processes is represented by the process graph shown in Figure <ref>. The short length of the time series makes it necessary to constrain the underlying time series graph in order to reduce the number of parameters to be estimated. We follow the choices explained in <cit.>. The magnitude of the possible direct influence of solar activity on the NAO is the function |_, |, and the possible spectral contribution of the solar process to the NAO is _→ = |_→|√(_), i.e. the fraction of the spectral density of NAO anomalies determined by solar activity anomalies. Solar activity anomalies oscillate with a period between 10 and 11 years. We therefore restrict our analysis to oscillations with periods between 8 and 13 years. We use our assumptions about the structure of the time series graph to compute the OLS estimator Φ̂ for the SVAR coefficients and plug it into the formulas for _→ and _→, respectively. In Figure <ref> we show their estimated magnitudes together with their 95% confidence intervals and their p-values. The Wald test detects a significant causal effect and spectral contribution for all frequencies considered with at least 95% confidence, as can be seen from the plotted p-values. § DISCUSSION Frequency domain path functions and spectral contributions compactly capture and visualise the causal structure of linear SVAR processes. As they can be estimated from observations, they could be useful for investigating many scientific questions. However, in order to draw conclusions from an estimate, it is important to assess its uncertainty. For large sample sizes, this uncertainty can be approximated by the asymptotic Gaussian distribution of the estimator, for which we computed structured expressions using the delta method. Based on these expressions, we then constructed Wald-type tests to assess the significance of the estimated frequency domain causal quantity. A caveat of approximating distributions by the normal distribution obtained from the delta method is that the approximation may be far from the true distribution, for instance if the sample size is too small. The number of samples required for the approximation to be meaningful depends, for example, on the number of SVAR parameters involved, or on how far the true SVAR parameter is from a parameter that defines an unstable process.
In addition, problems may arise if the SVAR coefficient is close to a parameter for which the rank of the gradient used to compute the asymptotic distribution degenerates. We hope that this work will lead to further research into the challenges and uncertainties of estimating causal quantities in the frequency domain. Furthermore, for any given causal quantity in the frequency domain, we could identify the estimator with the lowest asymptotic variance among all estimators that can be expressed as a differentiable function in finitely many entries of the autocovariance. This optimal estimator is constructed in two steps: first, one computes all the SVAR coefficients by regressing on the lagged parents as prescribed by the time series graph. In the second step, we evaluate the rational function (in the SVAR parameters) that describes the causal quantity of interest on the estimated SVAR coefficients. However, the construction of this estimator requires knowledge of the complete time series graph, and this information may not always be available. Instead, it may only be possible to know the structure of the process graph together with an upper bound on the order of the SVAR process. In future work, it would be interesting to search for the asymptotically optimal estimator relative to some fixed information about the time series graph. Another direction in which this work could be extended is to consider the scenario where some of the processes are hidden, i.e. they influence the observable processes but cannot be measured directly. In a companion work <cit.> we establish the theoretical foundations for identifying frequency domain causal effects in the presence of latent confounding processes. We believe that further research into the estimation of causal effects in the frequency domain could be not only theoretically interesting but also practically relevant, which we demonstrate with a case study of the influence of the solar cycle on the variability of the NAO. While previous studies have looked at lagged relationships between solar activity and the NAO, in this study we have analysed the effect directly at the relevant time scale in the frequency domain, where the solar variability pattern is clearly captured. Our results confirm a significant influence of the solar cycle on the NAO, which is consistent with previous studies on this question <cit.>. However, our results also suggest that further investigation of this frequency domain effect is needed. In particular, the plots in Figure <ref> suggest that the approximate distribution of the estimator for the solar influence on the NAO may be degenerate. Thus, to argue convincingly for a significant causal effect of the solar process on the NAO on the considered time scale, extended investigations are most likely required. With a view to further applications of this type, it would be attractive to extend the framework of this analysis so that frequency-wise causal effects could also be analysed when the underlying data is structured not only in time but also in space. § ACKNOWLEDGEMENT J.W. and J.R. received funding from the European Research Council (ERC) Starting Grant CausalEarth under the European Union’s Horizon 2020 research and innovation program (Grant Agreement No. 948112). N.R., G.H. and J.R. received funding from the European Union’s Horizon 2020 research and innovation programme under Marie Skłodowska-Curie grant agreement No 860100 (IMIRACLI). N.R. and G.H. 
thank Mike Evans and Carla Rösch for their help and encouraging discussions. § TECHNICAL PRELIMINARIES §.§ Schur complements Our indexing conventions for vectors and matrices are as follows: Suppose I,J are finite sets, then ^I denotes an |I|-dimensional real vector space whose dimensions are indexed by I. A matrix A = (a_i,j)_i ∈ I, j ∈ J∈^I × J represents a linear map from ^J to ^I. For subsets I' ⊂ I and J' ⊂ J we define [A]_I', J' = (a_i,j)_i ∈ I', j ∈ J'∈^I' × J' as the submatrix of A corresponding to the rows I' and the columns J'. For any two integers p ≤ q we use [p,q] to denote the set {i: p ≤ i ≤ q}, and we use [p] as a shorthand for the set [0,p]. Suppose I and J are finite disjoint sets, and M ∈^I ∪ J × I ∪ J a block structured matrix M = [ A B; C D ] The Schur complement <cit.> of M with respect to I resp. with respect to J are defined as M_J · I D - C A^-1 B M_I · J A - B D^-1 C. Let M ∈^I ∪ J × I ∪ J be a block structured matrix and suppose one of the matrices M, M_I · J, M_J · I is invertible, then also the other two matrices are invertible. In this case, the inverses of these matrices can be computed as follows M^-1 = [ A^-1 + A^-1B(M_J · I)^-1 CA^-1 -A^-1B(M_J · I)^-1; (M_J · I)^-1 CA^-1 (M_J · I)^-1 ] = [ (M_I · J)^-1 -(M_I · J)^-1BD^-1; -D^-1C(M_I · J)^-1 D^-1 + D^-1C (M_I · J)^-1 BD^-1 ] Suppose the block structured matrix M ∈^I ∪ J × I ∪ J as above is invertible, then it holds that D^-1 = [M^-1]_J · I, where the right hand side is the Schur complement of M^-1 with respect to I. §.§ Time lagged regression Suppose 𝒢 = (V ×, ) is a time series DAG of order p. The associated contemporaneous graph is denoted _0= (V, _0) and the process graph G= (V, D). We assume that the process indices V are topologically ordered with respect to _0, i.e. for two distinct processes v,w the relation v < w implies w ∉_0(v). In the following, we assume that the SVAR parameter Φ defines a stable process . This assumption ensures that is stationary, i.e. the mean sequence and ACS are time independent. One convenient way to ensure stability is to require that ∑_v ∈ V∑_(u,k) ∈_v |ϕ_v,w(k)| < 1. By requiring this condition we get that the space of all considered SVAR parameters is an open semi-algebraic subset of ^ = ∏_v ∈ V^_v, i.e. the euclidean space whose dimensions are indexed by the lagged relations. Furthermore, let q≥ p a non-negative integer, then we define = {_v }_v ∈ V as the collection of time lagged relations where _v = { u ∈ V | u < v }× [0,q] ∪{ u ∈ V | u ≥ v }× [1,q]. For two collections of time lagged relations ” = {_v”}_v ∈ V and ' = {'_v }_v ∈ V, we write ' ⊂” if '_v ⊂”_v for every v ∈ V, and we define ”∖' {”_v ∖'_v }_v ∈ V. Throughout the supplementary material we will only consider collections of time lagged relations that either satsify condition (<ref>) from the main paper or satisfy the coarser condition ^⊂⊂. Condition (<ref>) guarantees that the OLS estimator Φ̂^, as defined in the main paper, is a consistent estimator for Φ. In the main part of the paper we defined the random vector ^_v(t) containing the super set of lagged parents of _v(t). For the purpose of the computations we will structure ^_v as follows ^_v(t) = [ ^_v_u(t) ]_v ∈ V, ^_v_u(t) = [ _u(t-k) ]_(u,k) ∈_v The covariance matrix associated with the process ^_v(t) is defined as follows: Σ^_v[^_v(t) (^_v(t))^⊤] ∈^_v ×_v We now equip this matrix with a block structure. 
For u ∈ V we define _v(u) to be the set {u }× [q] ∩_v, and for u_1, u_2 ∈ V we define the matrix Σ_u_1,u_2^_v [(^_v_u_1(t)) (^_v_u_2(t))^⊤] = [Σ^_v]__v(u_1), _v(u_2)∈^_v(u_1) ×_v(u_2). So let us structure the covariance matrix like this Σ^_v = [ Σ^_v_u_1,u_2]_u_1,u_2 ∈ V Finally, we equip the associated precision matrix with the very same block structure, i.e. P^_v = ω_v (Σ^_v)^-1 =[P^_v_u_1,u_2]_u_1, u_2 ∈(v) ∪{v}∈^_v ×_v, where P^_v_u_1,u_2 = [P^_v]__v(u_1), _v(u_2)∈^_v(u_1), _v(u_2). For the proof of Proposition <ref>, Proposition <ref> and Theorem <ref> from the main paper, we need two auxiliary lemmas. The first is a time series adaptation of Lemma 23 in <cit.>. The second establishes a block diagonal structure on the asymptotic covariance of the OLS estimator Φ̂^. Let v∈ V a process and _v be a set of time lagged relations as specified above, then it holds that Φ̂^_v - Φ^_v = 1/T∑_t=1^T(Σ^_v)^-1^_w(t)η_v(t) + O_p(T^-1). The statement can be shown by the same computations as in the proof of Lemma 23 in <cit.>. In order to conduct these computations we use the fact that the auto covariance estimator is asymptotically normal if the underlying process is stationary <cit.>. In the following, we assume that the vertices V are topologically ordered with respect to the acyclic contemporaneous graph _0 =(V, _0). That means, V = { v_1 , … , v_m } where v_i < v_j implies v_j ∉_0(v_i). Suppose = {_v }_v ∈ V is a collection of time lags such that ^⊂⊂. If we denote by (Φ̂^L_v)_v ∈ V the OLS estimators for Φ^_v obtained by regressing _v(t) on ^_v(t), then its asymptotic covariance has the following block diagonal structure √(n)[ Φ̂^_1 - Φ^_1; ⋮; Φ̂^_m - Φ^_m ]→_d 𝒩(0, diag[ω_2 (Σ^_1)^-1, … , ω_m (Σ^_m)^-1]) Let i, j ∈ [1, m] and suppose i < j. We wish to show that lim_T →∞ T [(Φ̂^_i- Φ^_i)(Φ̂^_j - Φ^_j)^⊤] = 0, where Φ̂^_i is the OLS estimator based on a sample of length T. Using Lemma <ref> we get [(Φ̂^_i- Φ^_i)(Φ̂^_j - Φ^_j)^⊤] = 1/T^2∑_s,t=1^T (Σ^_i)^-1[η_i(t)η_j(s)^_i(t) (^_j(t))^⊤] (Σ^_j)^-⊤ + O_p(T^-2) Since i < j it follows that η_j(s) is independent of the variables η_i(t) and ^_j(s) and ^_i(t), and since all those variables are jointly Gaussian it follows for all s, t ∈ [1, T] that [η_i(t)η_j(s)^_i(t) (^_j(t))^⊤]= [η_j(s)] [η_i(t)^_i(t) (^_j(s))^⊤] = 0, since [η_j(s)] = 0. This finishes the proof. § ASYMPTOTIC THEORY As mentioned already in the main paper, we will use the delta method <cit.> to compute the asymptotic covariance of the link functions _∙(z). §.§ Asymptotic distributions of link function estimators Let be a time series graph of order p and let G=(V, D) be the associated process graph. We fix a node v∈ V and a set _v that satisfies condition (<ref>) from the main paper. In view of the following arguments we introduce for every u∈(v)∪{v } the ordered set _u = { k | (u,k) ∈_v }⊂ [q], i.e. the set of time lags with which process u is drving process v. For the computation of the asymptotic covariance of the estimator _∙, v(z) with the delta method we need to compute for every u ∈(v) the Jacobian ∇^_v (_u, v(z)) and the Jacobian ∇^_v (_v(z)) , i.e. the derivative with respect to the parameters Φ^_v. As a preparation we recall some simple but useful identities regarding multiplication and division of complex numbers. Suppose z = x +iy ∈∖{0}, then its inverse is given as follows z^-1 = x - iy/x^2 +y^2. Furthermore, if z' = x' + iy'∈, then zz' = (xx' -yy') + (xy'+ yx')i. 
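These identities are used below by trading each complex number x + iy for the 2×2 real matrix [[x, −y], [y, x]], so that products and quotients of transfer-function values act on stacked real and imaginary parts as matrix products. A small numerical check of this correspondence (illustrative only, not part of the paper):

```python
import numpy as np

def as_matrix(z):
    """Real 2x2 representation of a complex number: multiplication by z."""
    return np.array([[z.real, -z.imag], [z.imag, z.real]])

def as_vector(z):
    """Stack real and imaginary parts of z into R^2."""
    return np.array([z.real, z.imag])

z, w = 0.3 - 0.7j, 1.2 + 0.4j
# Products and inverses of complex numbers match the matrix representation:
assert np.allclose(as_matrix(z) @ as_vector(w), as_vector(z * w))
assert np.allclose(np.linalg.inv(as_matrix(z)), as_matrix(1 / z))
print("2x2 representation of z/w:", as_matrix(z) @ np.linalg.inv(as_matrix(w)))
```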
For some frequency z ∈ S^1 and an ordered set of time lags = {l_1< … < l_m}⊂[q], we denote as 𝐳^ the vectors 𝐳^ [ (𝐳^); (𝐳^) ]=[ (z^l_1) ⋯ (z^l_m); (z^l_1) ⋯ (z^l_m) ]∈^2 × || If f is a function in the parameters Φ^_v, then we denote the partial derivatives of f with respect to (ϕ^_w_u,v(k))_(u,k) ∈_u as ∇^_u f (∂/∂ϕ^_v_u,v(k)f)_k ∈_u. For u ∈(v), the non-zero partial derivatives of the complex transfer function _u,v(z) evaluated at z ∈ S^1 are as follows ∇^_u_u,v(z) = a_v(z)A(z) 𝐳^_u ∇^_v_u,v(z) = a_v(z)A_u(z) 𝐳^_v, where a_v(z) = |1- φ_v,v(z)|^-2 A(z) = [ (1- φ_v,v(z)) -(1- φ_v,v(z)); (1- φ_v,v(z)) (1- φ_v,v(z)) ] A_u(z) = [ -(φ_u,v(z)) (φ_u,v(z)); (φ_u,v(z)) -(φ_u,v(z)) ] -2[ (_u, v(z)) (_u, v(z)); (_u, v(z)) (_u, v(z)) ] The non-zero partial derivatives of the internal function of v, i.e. _v are as follows ∇^_v_v = a_v(z)A_v(z) 𝐳^_v, where A_v(z) =([ 1 0; 0 -1 ] - 2 [ (_v(z)) (_v(z)); - (_v(z)) - (_v(z)) ]). Let u∈(v) and _u and _v be as defined above. In the following we will denote the real and imaginary parts of the complex valued link function as _u,v(z) Re(_u,v(z)) _u,v(z) Im(_u,v(z)) The rules for complex multiplication and division with respect to real and imaginary parts allow us to decompose the real and imaginary part of the complex transfer function as _u,v(z) = α_v(z) _u,v(z) _u,v(z) = α_v(z) _u,v(z), where _u,v(z) = Re(φ_u,v(z)) Re(1- φ_v,v(z)) + Im(φ_u,v(z))Im(φ_v,v(z)) _u,v(z) = -Re(φ_u,v(z))Im(φ_v,v(z)) + Im(φ_u,v)Re(1- φ_v,v(z)), and α_v(z) = |1- φ_v,v(z)|^-2. The vector of partial derivatives with respect to the coefficients (ϕ^_v_u,v(k))_k ∈_u are linear combinations of the vectors (𝐳^_v) and (𝐳^_v), i.e., ∇^_u_u,v(z) = (1 - φ_v,v(z) (𝐳^_u) + (φ_v,v(z)) (𝐳^_v) ∇^_u_u,v(z) = -(φ_v,v(z)) (𝐳^_u)+ (1-φ_v,v(z)) (𝐳^_u) From Equation (<ref>) and the fact that α_v(z) is a function of the variables (ϕ^_v_v,v(k))_k ∈_v only, we obtain that ∇^_u_u,v(z) = α_v(z)(1- φ_v,v(z)) 𝐳^_u which shows the first part of the lemma. We proceed with the computation of the partial derivatives with respect to the coefficients (Φ^_v_v,v(k))_k ∈_v, which is also a linear combination of the vectors (𝐳^_v) and (𝐳^_v), i.e. ∇^_v_u,v(z) = -(φ_u,v(z)) (𝐳^_v) + (φ_u,v(z)) (𝐳^_v) ∇^_v_u,v(z) = -(φ_u,v(z) (𝐳^_v) + (φ_u,v(z)) (𝐳^_v) Applying the product rule to Equation (<ref>) yields ∇^_v_u,v(z) = α_v(z)[ ∇^_v_u,v(z); ∇^_v_u,v(z) ][∇^_v_u,v(z) ] + [ _u,v(z); _u,v(z) ][∇^_vα_v(z)]. Furthermore, we compute that ∇^_v a_v(z) = -2a_v(z)^2 [ 1 1; 0 0 ]𝐳^_v. As a result we get for the second summand in Equation (<ref>) the following expression: [ _u,v(z); _u,v(z) ][∇^_vα_v(z)] = [ _u, v(z) _u, v(z); _u, v(z) _u, v(z) ]𝐳^ So, we conclude with the following formula for the Jacoobian ∇^_v_u, v = α_v(z)([ -(φ_u,v(z)) (φ_u,v(z)); (φ_u,v(z)) -(φ_u,v(z)) ] -2[ _u, v(z) _u, v(z); _u, v(z) _u, v(z) ] )𝐳^ The derivative of the function _v(z) is computed in a similar manner. Let z ∈ S^1 be a frequency and u_1, u_2∈(v) ∪{v} two processes, then we define the z-dependent matrix P^_v_u_1,u_2(z) [𝐳^_u_1] P^_v_u_1,u_2[ 𝐳^_u_2]^⊤∈^2 × 2. The following Proposition is a more detailed version of Proposition <ref> in that it states the exact expressions for the asymptotic covariance of the link and internal function estimators. 
Let A: S^1 →^2×2 and A_u: S^1 →^2×2 for u ∈(v) ∪{v} be the functions from Lemma <ref>, and for u_1,u_2 ∈(v) ∪{v} the asymptotic covariance of the corresponding link function is given as ^_v(u_1, u_2; z) = a_v(z)^2 [ A(z) A_u_1(z) ][ P^_v_u_1,u_2(z) P^_v_u_1,v(z); P^_v_v,u_2(z) P^_v_v,v(z) ][ A(z) A_u_2(z) ]^⊤ If u ∈(v), then ^_v(u, v; z) = a_v(z)^2 [ A(z) A_u(z) ][ P^_v_u,v(z); P^_v_v,v(z) ][ A_v(z) ]^⊤ Finally, ^_v(v,v;z) = a_v(z)^2 [ A_v(z) ] P^_v,v(z) [ A_v(z) ]^⊤ In Lemma <ref> we computed the derivative of the function _∙, v(z) as a function of the parameters Φ^. By the delta method, the asymptotic covariance of _∙, v(z) is as follows ^_v(z) = [∇^_v_∙, v(z)] P^_v [∇^_v_∙, v(z)]^⊤ = [^_v(u_1,u_2)]_u_1, u_2 ∈(v) ∪{v}∈^((v)∪{v}) × ((v)∪{v})⊗^2 × 2. Suppose u_1,u_2 ∈(v), then the corresponding entry in the asymptotic block covariance matrix is ^_v(u_1, u_2; z) =[∇^_v_u_1, v(z)] P^_v [∇^_v_u_1, v(z)]^⊤ = A(z)(𝐳_v^_u_1 P^_v_u_1,u_2 [𝐳_v^_u_2]^⊤) A(z)^⊤ + A_u_1(z) (𝐳_v^_v P^_v_v,v [𝐳_v^_v]^⊤) A_u_2(z)^⊤ + A(z)(𝐳_v^_u_1 P^_v_u_1,v [𝐳_v^_v]^⊤) A_u_2(z)^⊤ + A_u_1(z) (𝐳_v^_v P^_v_v,u_2 [𝐳_v^_u_2]^⊤) A(z)^⊤ = [ A(z) A_u_1(z) ][ P_u_1,u_2^_w(z) P_u_1,v^_w(z); P_v,u_2^_w(z) P_v,v^_w(z) ][ A(z)^⊤; A_u_2(z)^⊤ ] Similarly, we compute for any u ∈(v) that ^_v(u,v; z) = [ A(z) A_u(z) ][ P^_v_u, v; P^_v_v,v ][ A_v(z)^⊤ ] = ^_v(v, u; z)^⊤, and finally ^_v(v,v,z) = [A_v(z)] [ P^_v_v,v(z) ] [ A_v(z) ] ^⊤. It remains to prove that the link function estimator is asymptotically normal for generic parameter choices. First, let v∈ V be a process index and (v) ={u_1, …, u_n } its parent processes. We wish to write the asymptotic covariance as a matrix product like this ^_v(z) = α_v(z)^2[𝐀(z)] 𝐙 P^_v𝐙^⊤[𝐀(z) ]^⊤. In order to see that the estimate of the link-functions and the internal function follows a joint normal distribution for generic parameter choices, we need to show that the asymptotic covariance matrix is invertible for generic parameter choices. To get the matrix product above, we need some notation for block structured matrices. Suppose we have disjoint sets I_1, ⋯, I_m and J_1, …, J_m, and matrices A_1, …, A_m such that A_k ∈^I_k × J_k, then we define the block diagonal matrix diag(A_i)_i ∈ [m]∈^I × J, where I = ∪_k=1^m I_k and J = ⋃_l=1^m J_l, at the entry (i,j) to be [A_k]_i,j if (i,j) ∈ I_k × J_k for some k, and zero otherwise. Let us consider an example. Suppose I_1 = [(a,1), (a,2)] and I_2 = [(b,1), (b,2), (b,3)] and J_1 = [(c, 1), (c, 2)] and J_2 = [(d, 1), (d,2)]. Further let A_1 = [ 1 1; 1 1 ]∈^I_1 × J_1 A_2 = [ 1 1; 1 1; 1 1 ]∈^I_2 × J_2 Accordingly, diag(A_1, A_2) = [ A_1 0; 0 A_2 ] = [ 1 1 0 0; 1 1 0 0; 0 0 1 1; 0 0 1 1; 0 0 1 1 ] Similarly, we define vertical stacking of matrices. Suppose I_1, …, I_m are mutually disjoint finite sets and J a finite set, and let B_1, … B_m be matrices such that B_k ∈^I_k × J. Then we define vst(B_k) ∈^I × J, with I= ⋃_k=1^m I_k, to be the matrix that one obtains by vertically stacking the matrices B_k. With this notation we define the block upper triangular matrices 𝐀(z) [ diag[A(z)]_u ∈(v) vst[ A_u(z) ]_u ∈(v); 0 A_v(z) ] 𝐙 [ diag[𝐳^_u]_u ∈(v) vst[ 𝐳^_v]_u ∈(v); 0 𝐳^_v ] Since the matrix P^_v is a the inverse of a covariance matrix it is positive definite. So the matrix product 𝐙 P^_v𝐙^⊤ is positive definite if the matrix 𝐙 has full rank, i.e. if it has rank 2(n+1). This is the case if _v satisfies Assumption <ref>, and, additionally if z is such that for every u ∈(v) ∪{ v } there are k,j ∈_u such that 0 ≠ z^k - z^j 0 ≠ z^k + z^j. 
This requirement makes sure that there are two linear independent columns in 𝐳^_u∈^2 × |_u|. Note that for |_u| ≥ 2 the matrix 𝐙 P^_v𝐙^⊤ is positive definite for all but finitely many z∈ S^1. These conditions do not depend on the parameter Φ of the SVAR process. Instead they depend on the choice of _v. Finally, the matrix 𝐀(z) is invertible if and only if its determinant is non-zero. And its determinant is a rational function in the parameters Φ^_v. Specifically, (𝐀(z)) = (A(z))^n(A_v(z)) By (<ref>) it holds that (A(z)) = |1- φ_v,v(z)|^2 ≠ 0. So, to show that the matrix 𝐀(z) is invertible, we need to show that (A_v(z)) is not the zero function (as a rational function of the indeterminates ϕ^_v_v,v(k)). So let us compute the determinant explicitly (A_v(z)) = [ 1 - 2 (_v(z)) - 2 (_v(z)); 2 (_v(z)) -1 + 2(_v(z)) ] = - 1 + 2(_v(z)) + 2(_v(z)) Since _v = (1- φ_v,v)^-1 it follows that (A_v(z)) is non-zero if and only if the following expression is non-zero P(ϕ_v,v^_v) = - |1- φ_v,v(z)|^2 + 2(1- (φ_v,v(z))) + 2 (φ_v,v(z)) = -1 + ∑_l ∈_vβ_lϕ^_v_v,v(l) + ∑_k,j ∈_vγ_j,kϕ^_v(k)ϕ^_v_v,v(j), where β_l= (z^l) and γ_j,k = (z^j)(z^k) + (z^j)(z^k). In particular, P is a non-zero polynomial in ϕ^_v(k), so for generic choices of stable SVAR parameters it is non-zero. So, if _v satisfies Assumption <ref>, then for almost all z∈ S^1 and almost all SVAR parameter configurations Φ, the OLS-based estimator _̂∙̂,̂ ̂v̂(z) is asymptotically normal with block covariance matrix ^_v(∙, ∙; z). §.§ Proof of Proposition <ref> and Proposition <ref> Let be a time series DAG with associated process graph G= (V,D). For each process v ∈ V, we select finite sets of lagged parents _v ⊇_v^ that satisfy condition (<ref>). We will now prove Proposition <ref>. The proof of Proposition <ref> is done in exactly the same way. Suppose π = x_1 →⋯→ x_n and ρ = y_1 →⋯→ y_q are directed paths on G. In the following we use P^ to denote the matrix diag[ω_2(Σ^_1)^-1 , … , ω_m (Σ^_m)^-1], which can be considered as a block matrix, where each block is indexed by a pair of numbers in [1,m], i.e. P^ = [P^_i,j]_i,j ∈ [1,m], where P^_i,j= P^_i =ω_i (Σ^_i)^-1 if i = j 0 if i ≠ j Furthermore, we consider submatrices of P^ that correspond to pairs of paths π and ρ, and denote them as follows P^(π), (ρ) = [P^_x_i, y_j]_i∈ [1,m], j ∈[1, q]. Note that P^(π), (ρ) is zero if π and ρ do not share a single vertex. On the other hand, if π = ρ, then P^(π), (ρ) is block diagonal. By the delta method, the asymptotic covariance of the estimators ^(π) and ^(ρ) is ^(π^, ρ^) = ∇^^(π) P^ (∇^^(ρ))^⊤ = ∇^(π)^(π) P^(π), (ρ) (∇^(ρ)^(ρ))^⊤ = ∑_i=1^m∑_j=1^q (∇^_x_i^(π)) P^_x_i, y_j (∇^_y_j^(ρ))^⊤ = ∑_x ∈ V(π∩ρ) (∇^_x^(π)) P^_x, x (∇^_x^(ρ))^⊤ where ∇^(π) resp. ∇^(ρ) denotes taking derivative with respect to the parameters Φ^_x_i resp. Φ^_y_j, and V(π∩ρ) = {x_i}_i ∈ [m]∩{y_j }_j ∈ [q]. This holds as ^(π) resp. ^(ρ) are functions only in the parameters Φ^_x_i resp. Φ^_y_j, and because of the block diagonal structure of P^. We will now calculate ∇^_x^(π) for every x visited by π. First, we observe that for two distinct functions F: ^k→^m × m and G: ^l→^m × m that the derivative of the point-wise product is ∇(FG) = (∇^𝐱(F) G , F ∇^𝐲 (G)). In addition, if F(𝐱) G(𝐲) = G(𝐲)F(𝐱), then its derivative is ∇(FG) = (G ∇^𝐱(F), F ∇^𝐲 (G)). Suppose z_k = x_k + iy_k ∈ for k ∈ [1,m] and x + iy = ∏_k=1^m z_k is the product of the z_k, i.e. 
x is the real part of the product over the z_k's and y the imaginary part of the product, then [ x; y ] = (∏_k=1^m Z_k) [ 1; 0 ] Z_k [ x_k -y_k; y_k x_k ] Of course, all the matrices on the right hand side commute. Recall that the function ^(π) = _x_1∏_i=1^m-1_x_i, x_i+1, where each term in the product is considered as a complex number. We wish to compute the derivative of its real and imaginary part. For that purpose we use the above observation and define for a link u → v on the process graph the matrices F_u = [ (_u) - (_u); (_u) (_u) ] H_u, v = [ (_u, v) - (_u, v); (_u, v) (_u, v) ] Let now i ∈ [m-1], then we use the above observations to compute ∇^_x_i^(π) = (∏_j =1^mH_x_j, x_j+1) ∇^_x1_x_1 if i=1 (F_x_1∏_j ≠ i-1^m H_x_j, x_j+1) ∇^_x_i_x_i-1, x_i if i > 1 , where we used the commutativity of the matrices and the fact the parameter Φ^_x_i only affects the factor H_x_i-1, x_i if i >1 and only F if i=1. Plugging the expression from Equation <ref> into the last line of Equation <ref> we get ^(π^, ρ^) =∑_(i,j): x_i = y_j ^(π∖ x_i)[∇^_x_i_x_i-1, x_i] P^_x_i, y_j[∇^_y_j_y_j-1, y_j]^⊤(^(ρ∖ y_j) )^⊤ = ∑_(i,j): x_i = y_j ^(π∖ x_i)[^_x_i(x_i-1, y_j-1)] (^(ρ∖ y_j) )^⊤, where x_0 = x_1 and y_0 = y_1 and _x_0, x_1 _x_1 ^(π∖ x_i ) ∏_j =1^mH_x_j, x_j+1 if i=1 F_x_1∏_j ≠ i-1^m H_x_j, x_j+1 if i > 1 The notations regarding the path ρ are analogous. §.§ Proof Proposition <ref> Suppose v∈ V is a process with not necessarily distinct ancestors u, u'∈(v). Furthermore, let π = π_1 + π_2 and ρ = ρ_1 + ρ_2 concatenations of paths on the process graph, where π_1 ∈(u,v) and ρ_1 ∈(u',v), and π_2, ρ_2 ∈Π⊂(v,w), then it holds that ^(π^, ρ^) = ^(π_2)^(π_1^, ρ_1^)( ^(ρ_2))^∗ + ^(π_1)^(π_2, ρ_2)(^(ρ_1))^∗. This follows by combining Proposition <ref> and Proposition <ref>. For u, u' ∈(v) we compute ^(^Π_u, ^Π_u') = ∑_π∈Π_u∑_ρ∈Π_u'^(π^, ρ^) = ∑_π_1 ∈(u,v)∑_π_2 ∈Π∑_ρ_1 ∈(u',v)∑_ρ_2 ∈Π^((π_1 + π_2)^, (ρ_1 + ρ_2)^) = ∑_π_1 ∈(u,v)∑_ρ_1 ∈(u',v)^Π^(π_1^, ρ_1^)( ^(Π))^∗ + ^(π_1)^(^Π, ^Π,)(^(ρ_1))^∗ = ^Π [^(^(u,v), ^(u',v))] (^Π)^∗ +^(u,v) [(^Π)] (^(u',v))^∗ §.§ Asymptotic efficiency In this section, we apply the ideas from <cit.> to prove Theorem <ref> in the main paper. Suppose =(V ×, ) is a time series DAG of order p and let q ≥ p. We assume that the vertices V = {v_1, …, v_m } are topologically ordered with respect to the contemporaneous graph _0. Let z ∈ S^1 be a frequency and τ a vector of sums of (weighted) path functions, i.e. [^Π_u]_u ∈ U or [^Π_u]_u ∈ U, where U⊂ V is a set of processes and Π_u⊂(u,v) a set of paths such that each Π_u corresponds to a controlled causal effect <cit.> of u on v. This requirement on Π_u guarantees that τ(z) is a rational function in the SVAR parameters. Finally, we pick an estimator τ̂(z) ∈𝒯^(q)(z) and precompose it with the diffeomorphism from Proposition <ref> to get a function τ̃(z) in the variables Φ^, Ω, where = {_v }_v ∈ V is the collection of time lagged relations defined as in (<ref>). The proof of Theorem <ref> will be based on the delta method, which involves partial derivatives. Suppose ' is a collection of time lagged relations with corresponding (augmented) parameter vector Φ^'. If τ'(z) is a (rational) function in Φ^' and the variance vector Ω, then we denote the partial derivatives as follows ∇^'τ'(z) = [ ∇^'_vτ'(z) ]_v ∈ V, ∇^_v'τ'(z) = [ ∂/∂ϕ^'_v_u,v(k)τ'(z) ]_(u,k) ∈'_v ∇^Ωτ'(z) = [ ∂/∂ω_vτ'(z) ]_v ∈ V With these notations at hand we proceed with the proof. It holds that ∇^^τ̃(z) = ∇^^τ̂^(z) ∇^Ωτ̃(z) = 0. 
Suppose M ∈^I × I is a positive definite matrix and α, β⊂ I define a partition of I, i.e. α∪β = I with α∩β = ∅. Then it holds for every x ∈^I that x^⊤ M x ≥ x_α^⊤ M_α·β x_α. Lets fix a vector a∈^2⊗^U and set τ_a(z) = a^⊤τ(z). Accordingly, we write τ̃_a (z) = a^⊤τ̃(z), and τ̂^_a(z) = a^⊤τ̂^(z). By combining the delta method with Lemma <ref> we get that the asymptotic variance of the scalar valued estimator τ̃_a(z) is avar( τ̃_a(z)) = [∇^τ̃_a(z) ] P^[∇^τ̃_a(z) ]^⊤ = ∑_v ∈ V[∇^_vτ̃_a(z)] P^_v[∇^_vτ̃_a(z)]^⊤ Let us partition for every v ∈ V the set of time lagged relations _v using the sets α_v = ^_v and β_v = _v ∖^_v, so that [∇^_vτ̃_a(z)] P^_v[∇^_vτ̃_a(z)]^⊤ ≥[∇^_vτ̃_a(z)] P^_v[ P^_v]_α_v·β_v[∇^_vτ̃_a(z)] = [∇^_vτ̃_a(z)] P^_v[ P^_v^] [∇^_vτ̃_a(z)], where the first inequality follows from Lemma <ref> and the second equality follows from Corollary <ref>. So combining equation (<ref>) with the inequalities (<ref>) yields avar( τ̃_a(z)) ≥∑_v ∈ V[∇^_vτ̃_a(z)] P^_v[ P^_v^] [∇^_vτ̃_a(z)] = avar(τ̂_a^(z)). Finally, this allows us to conclude with the inequality a^⊤( (τ̃(z)) - (τ̂^(z)) ) a =avar( τ̃_a(z))- avar(τ̂_a^(z)) ≥ 0 which shows the claim (<ref>) of Theorem <ref>, and similar arguments can be used to show statement (<ref>) of Theorem <ref>. § EXAMPLE AND APPLICATION §.§ Numerical Example In this Section, we provide more details on the example discussed in Section <ref>. In Figure <ref> we show the underlying time series graph of the example. The true lagged parent set and the lagged parents set encoding our knowledge about given that we know the process graph G are as follows ^_u_1 = {(u_1, 1) } _u_1 = {u_1}× [1, 3] ^_u_2 = {(u_1, 1) } _u_2 = {u_1}× [1, 3] ^_v = {(v, 1), (v, 3), (u_1, 2), (u_2, 1)} _v = {v}× [1, 3] ∪{u_1}× [0, 2] ∪{u_2}× [0, 3] ^_m = {(m, 1), (v, 3), (v, 1), (v, 2)} _m = {m}× [1, 3] ∪{v}× [0, 3] ^_w = {(w, 2), (v, 3), (v, 2), (m, 3)} _w = {w}× [1, 3] ∪{v}× [0, 3] ∪{m}× [0, 3] To generate the time series we used a white noise process with Ω = 𝕀 and we used the following SVAR coefficients Φ_∙, u_1 = [ ϕ_u_1, u_1(1) = 0.5 ] Φ_∙, u_2 = [ ϕ_u_2, u_2(1) = 0.5 ] Φ_∙, v = [ ϕ_v, v(1) = 0.3 ϕ_v, v(3) = -0.5 ϕ_u_1, v(2) = -0.25 ϕ_u_2, v(1) =0.5 ] Φ_∙, m = [ ϕ_m, m(1) = 0.5 ϕ_v, m(1) = 0.5 ϕ_v, m(2) = 0.6 ] Φ_∙, w = [ ϕ_w, w(2) = 0.5 ϕ_v, w(2) = 0.4 ϕ_m, w(3) = -0.3 ] In the main paper we sketched the computation of the spectral contribution estimator. We will here provide the expressions for the asymptotic covariance matrices we omitted in the main part. First, we spell out the computation of the asymptotic covariance of the estimator ^Π = v,w + _v,m_m,w. This amounts to ^(^Π, ^Π) = ^(π_v,w, π_v,w) + ^(π, π) + ^(π_v,w, π) + ^(π,π_v,w) = ^_w(v,v) + _v,m^_w(m,m)(_v,w)^∗ + _m,w^_m(v,v)(_m,w)^∗ + ^_w(v,m)(_v,m)^∗ + _v,m^_w(m,v) In the main paper we wanted to compute the asymptotic block covariance of the multivariate estimator of weighted path functions (^(π_u_1, v), ^(π_u_2, v), ^(ϵ_v)). We recall the expressions for the block entries on the diagonal from the main paper ^(π_u_1, v^, π_u_1,v^) = _u_1^_v(u_1, u_1)_u_1^∗ ^(π_u_2, v^, π_u_2,v^) = _u_2^_v(u_2, u_2)_u_2^∗ ^(ϵ_v^, ϵ_v^) = ^_v(v, v) The off diagonal entries are then computed as follows ^(π_u_1, v^, π_u_2,v^) = _u_1^_v(u_1, u_2)_u_2^∗ ^(π_u_1, v^, ϵ_v^) = _u_1^_v(u_1, v) ^(π_u_2, v^, ϵ_v^) = _u_2^_v(u_2, v) Finally, the expressions (<ref>)-(<ref>) together with Proposition <ref> let us compute the asymptotic covariance of the estimator ^Π_(v)= (^Π_u_1, ^Π_u_2, ^Π_v). 
The block diagonal entries of the asymptotic block covariance of the multi variate estimator are given by the following expressions ^(^Π_u_1, ^Π_u_1) = (^Π_u_1)^_v(u_1, u_1)(_u_1^Π)^∗ +(^(π_u_1,v)) ^(^̂Π̂, ^̂Π̂) (^(π_u_1,v))^∗ ^(^Π_u_2, ^Π_u_2) = (^Π_u_2)^_v(u_2, u_2)(_u_2^Π)^∗ +(^(π_u_2,v)) ^(^̂Π̂, ^̂Π̂) (^(π_u_2,v))^∗ ^(^Π_v, ^Π_v) = (^Π)^_v(v, v)(^Π)^∗ +(_v) ^(^̂Π̂, ^̂Π̂) (_v)^∗ The blocks on the off diagonal are the following ^(^Π_u_1, ^Π_u_2) = (^Π_u_1)^_v(u_1, u_2)(_u_2^Π)^∗ +(^(π_u_1,v)) ^(^̂Π̂, ^̂Π̂) (^(π_u_2,v))^∗ ^(^Π_u_1, ^Π_v) = (^Π_u_1)^_v(u_1, v)(^Π)^∗ +(^(π_u_1,v)) ^(^̂Π̂, ^̂Π̂) (_v)^∗ ^(^Π_u_2, ^Π_v) = (^Π_u_2)^_v(u_2, v)(^Π)^∗ +(^(π_u_2,v)) ^(^̂Π̂, ^̂Π̂) (_v)^∗ §.§ Solar impact on the NAO The parent sets encoding our assumptions about the time series graph are the following (see also Figure <ref>) _ = {(, 1), (, 2), (, 0), (, 0), (, 0) }∪{(, k): k ∈ [0,3] ∪ [7,10] } _ = { (, k): k ∈ [1, 2] ∪ [8,10] } _ = { (, k): k ∈ [1, 3] } _ = { (, k): k ∈ [1, 3] } _ = { (, k): k ∈ [1, 3] } Accordingly, the effect function of the link → is parameterised as follows _→(z) = (1- Φ_(1)z - Φ_(2)z^2)^-1(ϕ_→(0) + ϕ_→(1)z + Φ_→(2)z^2 + +Φ_→(3)z^3 + Φ_→(7)z^7 + Φ_→(8)z^8 + Φ_→(9)z^9 + Φ_→(10)z^10), and the internal function of is thus parameterised by the rational function _(z) = 1/1 - Φ_(1)z - Φ_(2)z^2 - Φ_(1)z^8- Φ_(8)z^8 - Φ_(9)z^9 - Φ_(10)z^10 plain
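For illustration, the parameterised functions above can be evaluated on the unit circle over the band of interest (periods between 8 and 13 years) once coefficient estimates are available. The sketch below uses placeholder coefficient values and a simplified autoregressive denominator; it is not the fitted model from the paper.

```python
import numpy as np

def lag_poly(coeffs, z):
    """Evaluate sum_k coeffs[k] * z**k for a dict {lag: coefficient}."""
    return sum(c * z ** k for k, c in coeffs.items())

# Hypothetical placeholder values for the SNS -> NAO link and the NAO
# autoregressive lags (not the estimates obtained in the paper):
phi_sns_nao = {0: 0.2, 1: 0.1, 2: 0.05, 3: 0.0, 7: 0.05, 8: 0.1, 9: 0.05, 10: 0.02}
phi_nao_nao = {1: 0.3, 2: 0.1}

periods = np.linspace(8, 13, 200)      # periods in years
z = np.exp(-2j * np.pi / periods)      # corresponding points on the unit circle
direct_effect = lag_poly(phi_sns_nao, z) / (1 - lag_poly(phi_nao_nao, z))
print(np.abs(direct_effect).round(3))  # magnitude of the link function over the band
```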
http://arxiv.org/abs/2406.18835v1
20240627020911
Approximate Minimum Sum Colorings and Maximum $k$-Colorable Subgraphs of Chordal Graphs
[ "Ian DeHaan", "Zachary Friggstad" ]
cs.DS
[ "cs.DS", "F.2.2" ]
Min. Sum Coloring and Max. k-Colorable Subgraphs of Chordal Graphs I. DeHaan and Z. Friggstad Department of Combinatorics and Optimization, University of Waterloo, ijdehaan@uwaterloo.ca Department of Computing Science, University of Alberta, zacharyf@ualberta.ca Approximate Minimum Sum Colorings and Maximum k-Colorable Subgraphs of Chordal Graphs Ian DeHaanSupported by an NSERC Undergraduate Student Research Award held at the University of Alberta.1 Zachary FriggstadSupported by an NSERC Discovery Grant and Accelerator Supplement.2 ================================================================================================================================================================================================ § ABSTRACT We give a (1.796+ϵ)-approximation for the minimum sum coloring problem on chordal graphs, improving over the previous 3.591-approximation by Gandhi et al. [2005]. To do so, we also design the first polynomial-time approximation scheme for the maximum k-colorable subgraph problem in chordal graphs. § INTRODUCTION We consider a coloring/scheduling problem introduced by Kubicka in 1989 <cit.>. In the Minimum Sum Coloring () problem, we are given an undirected graph G = (V,E). The goal is to find a proper coloring ϕ : V →{1, 2, 3, …} of vertices with positive integers which minimizes ∑_v ∈ Vϕ(v). In weighted , each vertex v ∈ V additionally has a weight w_v ≥ 0 and the goal is then to minimize ∑_v ∈ V w_v ·ϕ(v). Naturally, in saying ϕ is a proper coloring, we mean ϕ(u) ≠ϕ(v) for any edge uv ∈ E. is often used to model the scheduling of unit-length dependent jobs that utilize shared resources. Jobs that conflict for resources cannot be scheduled at the same time. The goal in is then to minimize the average time it takes to complete a job. In contrast with the standard graph coloring problem, where we are asked to minimize the number of colors used, sum coloring is on many simple graph types. Even on bipartite and interval graphs, where there are linear time algorithms for graph coloring, remains <cit.>. In <cit.>, it was shown that if one can compute a maximum independent set in any induced subgraph of G in polynomial time, then iteratively coloring G by greedily choosing a maximum independent set of the uncolored nodes each step yields a 4-approximation for . A series of improved approximations for other graph classes followed, these are summarized in Table <ref>. Of particular relevance for this paper are results for perfect graphs and interval graphs. For in perfect graphs, the best approximation is μ^⋆≈ 3.591, the solution to μlnμ = μ + 1. For in interval graphs, the best approximation is μ^⋆/2≈ 1.796. In this paper, we study in chordal graphs. A graph is chordal if it does not contain a cycle of length at least 4 as an induced subgraph. Equivalently, every cycle of length at least 4 has a chord - an edge connecting two non-consecutive nodes on the cycle. Chordal graphs form a subclass of perfect graphs, so we can color them optimally in polynomial time. But itself remains in chordal graphs <cit.>, as they generalize interval graphs. The class of chordal graphs is well studied; linear-time algorithms have been designed to recognize them, to compute maximum independent sets, and to find minimum colorings, among other things. A comprehensive summary of many famous results pertaining to chordal graphs can be found in the excellent book by Golumbic <cit.>. 
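As a concrete illustration of the structural properties exploited throughout the paper, the following sketch recognises chordal graphs with maximum cardinality search: a graph is chordal exactly when, along such a search order, the previously visited neighbours of every vertex form a clique (such an ordering is a perfect elimination ordering in the sense used later). This is standard textbook material, included only for illustration; it is not an algorithm from this paper.

```python
from itertools import combinations

def mcs_ordering(adj):
    """Maximum cardinality search. `adj` maps each vertex to a set of neighbours.
    Returns the visit order v_1, ..., v_n."""
    order, visited = [], set()
    weight = {v: 0 for v in adj}
    while len(order) < len(adj):
        v = max((u for u in adj if u not in visited), key=lambda u: weight[u])
        order.append(v)
        visited.add(v)
        for u in adj[v]:
            if u not in visited:
                weight[u] += 1
    return order

def is_chordal(adj):
    """Chordal iff, along an MCS order, every vertex's already-visited
    neighbours form a clique."""
    order = mcs_ordering(adj)
    position = {v: i for i, v in enumerate(order)}
    for i, v in enumerate(order):
        left = [u for u in adj[v] if position[u] < i]
        if any(b not in adj[a] for a, b in combinations(left, 2)):
            return False
    return True

# A 4-cycle is not chordal; adding a chord makes it chordal.
c4 = {1: {2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {1, 3}}
print(is_chordal(c4))            # False
c4[1].add(3); c4[3].add(1)
print(is_chordal(c4))            # True
```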
Chordal graphs also appear often in practice; for example Pereira and Palsberg study register allocation problems (which can be viewed as a sort of graph coloring problem) and observe that the interference graphs for about 95% of the methods in the Java 1.5 library are chordal when compiled with a particular compiler <cit.>. Our main result is an improved approximation algorithm for in chordal graphs. For any constant ϵ > 0, there is a polynomial-time μ^⋆/2 + ϵ≈ 1.796 + ϵ approximation for weighted on chordal graphs. That is, we can approximate in chordal graphs essentially within the same guarantee as for interval graphs. Prior to our work, the best approximation in chordal graphs was the same as in perfect graphs: a 3.591-approximation by Gandhi et al. <cit.>. To attain this, we study yet another variant of the coloring problem. In the weighted Maximum k-Colorable Subgraph () problem, we are given a graph G = (V,E), vertex weights w_v ≥ 0, and a positive integer k. The goal is to find a maximum-weight subset of nodes S ⊆ V such that the induced subgraph G[S] is k-colorable. We also design a polynomial-time approximation scheme (PTAS) for weighted in chordal graphs. For any ϵ > 0, there is a (1 - ϵ)-approximation for weighted in chordal graphs. Prior to our work, the best approximation recorded in literature was a 1/2-approximation by Chakaravarthy and Roy <cit.>. Although one could also get a (1-1/e)-approximation by greedily finding and removing a maximum-weight independent set of nodes for k iterations, i.e., the maximum coverage algorithm. Since Theorem <ref> improves the approximation ratio of chordal graphs from the current-best ratio for perfect graphs down to essentially the current-best ratio for interval graphs, one might wonder if we can get an improved approximation for in perfect graphs in general. Indeed, if there was a PTAS for in perfect graphs then our approach would yield an improved approximation for perfect graphs. Unfortunately this does not seem possible: we adapt the -hardness proof for in perfect graphs to show is in fact -hard in perfect graphs. is -hard in perfect graphs even for k = 2.   Organization We begin with a high-level discussion of our techniques. Then, Section <ref> presents the proof of Theorem <ref> assuming one has a PTAS for in chordal graphs. Theorem <ref> is proven in Section <ref>. Finally we prove Theorem <ref> in Section <ref>. §.§ Our Techniques Our work is inspired by the 1.796-approximation for in interval graphs by Halldórsson, Kortsarz, and Shachnai <cit.>. They show that if one has an exact algorithm for , then by applying it to values of k from a carefully selected geometric sequence and “concatenating” these colorings, one gets a 1.796-approximation. In interval graphs, can be solved in polynomial time using a greedy algorithm. We show that a similar result holds: we show Theorem <ref> holds in any family of graphs that admit a PTAS for . However, we need to use linear programming techniques instead of a greedy algorithm since their approach seems to heavily rely on getting exact algorithms for .   in Chordal Graphs In chordal graphs, is NP-complete, but it can be solved in n^O(k) time <cit.>. We rely on this algorithm for constant values of k, so we briefly summarize how it works to give the reader a complete picture of our PTAS. Their algorithm starts with the fact that chordal graphs have the following representation. 
For each chordal graph G = (V,E) there is a tree T with O(n) nodes of maximum degree 3 plus a collection of subtrees 𝒯 = {T_v : v ∈ V}, one for each v ∈ V. These subtrees satisfy the condition that uv ∈ E if and only if subtrees T_u and T_v have at least one node in common. For a subset S ⊆ V, we have G[S] is k-colorable if and only if each node in T lies in at most k subtrees from {T_v : v ∈ S}. The tree T and subtrees 𝒯 are computed in polynomial time and then a straightforward dynamic programming procedure is used to find the maximum k-colorable subgraph. The states of the DP algorithm are characterized by a node a of T and subtrees 𝒮⊆𝒯 with |𝒮| ≤ k where each subtree in 𝒮 includes a. Our contribution is an approximation for large values of k. It is known that a graph G is chordal if and only if its vertices can be ordered as v_1, v_2, …, v_n such that for every 1 ≤ i ≤ n, the set N^left(v_i) := {v_j : v_iv_j ∈ E and j < i} is a clique. Such an ordering is called a perfect elimination ordering. We consider the following LP relaxation based on a perfect elimination ordering. We have a variable x_v for every v ∈ V indicating if we should include v in the subgraph. maximize{∑_v ∈ V w_v· x_v : x_v + x(N^left(v)) ≤ k  ∀ v ∈ V, x ∈ [0,1]^V }. The natural {0,1} solution corresponding to a k-colorable induced subgraph G[S] is feasible, so the optimum LP solution has value at least the size of the largest k-colorable subgraph of G. We give an LP-rounding algorithm with the following guarantee. Let x be a feasible LP solution. In n^O(1) time, we can find a subset S ⊆ V such that G[S] is k-colorable and ∑_v ∈ S w_v ≥(1- 2/k^1/3) ·∑_v ∈ V w_v · x_v. Theorem <ref> then follows easily. If k ≤ 8/ϵ^3, we use the algorithm from <cit.> which runs in polynomial time since k is bounded by a constant. Otherwise, we run our LP rounding procedure.   Linear Programming Techniques for We give a general framework for turning approximations for weighted into approximations for . We say that an algorithm for weighted is a (ρ, γ) approximation if it always returns a γ· k colorable subgraph with vertex weight at least ρ· OPT, where OPT is the maximum vertex weight of any k-colorable subgraph. For Theorem <ref>, we only need to consider the case ρ = 1-ϵ and γ = 1. Still, we consider this more general concept since it is not any harder to describe and may have other applications. We prove the following, where e denotes the base of the natural logarithm. Suppose there is a (ρ, γ) approximation for weighted on some class of graphs. Then, for any 1 < c < min(e^2, 1/1-ρ), there is a ρ·γ· (c + 1)/2 · (1- (1-ρ)· c)·ln c-approximation for for graphs in the same graph class. Our main result follows by taking γ = 1 and ρ = 1-ϵ. For small enough ϵ, we then choose c^* ≈ 3.591 to minimize the expression, resulting in an approximation guarantee of at most 1.796. Roughly speaking, we prove Lemma <ref> by considering a time-indexed configuration LP relaxation for latency-style problems. Configuration LPs have been considered for in other graph classes, such as line graphs <cit.>. The configurations used in previous work have variables for each independent set. We use a stronger LP that has variables for each k-colorable subgraph for each 1 ≤ k ≤ n. Our configuration LP was inspired by one introduced by Chakrabarty and Swamy for the Minimum Latency Problem (a variant of the Travelling Salesperson Problem) <cit.>, but is tailored for our setting. 
For each “time” k ≥ 1 we have a family of variables, one for each k-colorable subgraph, indicating if this is the set of nodes that should be colored with integers ≤ k. This LP can be solved approximately using the (ρ, γ)-approximation for , and it can be rounded in a manner inspired by <cit.>. Note that Theorem <ref> describes a (1-ϵ, 1)-approximation for for any constant ϵ > 0. If we had a (1, 1+ϵ)-approximation then the techniques in <cit.> could be easily adapted to prove Theorem <ref>. But these techniques don't seem to apply when given approximations that are inexact on the number of nodes included in the solution. § AN LP-BASED APPROXIMATION ALGORITHM FOR As mentioned earlier, our approach is inspired by a time-indexed LP relaxation for latency problems introduced by Chakrabarty and Swamy <cit.>. Our analysis follows ideas presented by Post and Swamy who, among other things, give a 3.591-approximation for the Minimum Latency Problem <cit.> using a configuration LP. §.§ The Configuration LP For a value k ≥ 0 (perhaps non-integer), 𝒞_k denotes the vertex subsets S ⊆ V such that G[S] can be colored using at most k colors. For integers 1 ≤ k ≤ n and each C ∈𝒞_k, we introduce a variable z_C, k that indicates if C is the set of nodes colored with the first k integers. We also use variables x_v, k to indicate vertex v should receive color k. We only need to consider n different colors since no color will be “skipped” in an optimal solution. minimize: ∑_v ∈V ∑_k=1^n w_v ·k ·x_v, k LP-MSC subject to: ∑_k=1^n x_v, k = 1  ∀ v ∈V ∑_C ∈𝒞_k z_C, k ≤ 1  ∀  1 ≤k ≤n ∑_C ∈𝒞_k : v ∈C z_C, k ≥ ∑_k' ≤k x_v, k'  ∀  v ∈V, 1 ≤k ≤n x, z ≥ 0 Constraint (<ref>) says each vertex should receive one color, constraint (<ref>) ensures we pick just one subset of vertices to use the first k colors on, and constraint (<ref>) enforces that each vertex colored by a value less than or equal to k must be in the set we use the first k colors on. Recall that this work is not the first time a configuration LP has been used for . In <cit.>, the authors consider one that has a variable x_C,k for every independent set C, where the variable models that C is the independent set used for color t. Our approach allows us to prove better bounds via LP rounding, but it has the stronger requirement that in order to (approximately) solve our LP, one needs to (approximately) solve the problem, rather than just the maximum independent set problem. Let OPT denote the optimal cost of the given graph and OPT_LP denote the optimal cost of (<ref>). Then OPT_LP≤ OPT simply because the natural {0,1} solution corresponding to OPT is feasible for this LP. At a high level, we give a method to solve this LP approximately by using the algorithm for to approximately separate the constraints of the dual LP, which is given as follows. maximize: ∑_v ∈V α_v - ∑_k=1^n β_k DUAL-MSC subject to: α_v ≤ w_v ·k + ∑_k̂ = k^n θ_v, k̂  ∀  v ∈V, 1 ≤k ≤n ∑_v ∈C θ_v, k ≤ β_k  ∀  1 ≤k ≤n, C ∈𝒞_k β, θ≥ 0 Note (<ref>) has polynomially-many variables. We approximately separate the constraints in the following way. For values ν≥ 0, ρ≤ 1, γ≥ 1, let 𝒟(ν; ρ, γ) denote the following polytope: {(α, β, θ) : (<ref>), (<ref>), ∑_v ∈ Cθ_v,k≤β_k  ∀ 1 ≤ k ≤ n  ∀  C ∈𝒞_γ· k, ∑_v α_v - 1/ρ·∑_k β_k ≥ν} If there is a (ρ,γ)-approximation for , there is also a polynomial-time algorithm 𝒜 that takes a single value ν plus values (α, β, θ) for the variables of (<ref>) and always returns one of two things: * A (correct) declaration that (α, β/ρ, θ) ∈𝒟(ν; 1, 1). 
* A constraint from 𝒟(ν; ρ, γ) that is violated by (α, β, θ). First, check that (<ref>), (<ref>), and ∑_v α_v - 1/ρ·∑_k β_k ≥ν hold. If not, we already found a violated constraint. Otherwise, for each k we then run the (ρ, γ)-approximation on the instance with vertex weights θ_v,k, v ∈ V. If this finds a solution (i.e. a (k ·γ)-colorable subgraph) with weight exceeding β_k, we return the corresponding violated constraint. Otherwise, we know that the maximum possible weight of a k-colorable subgraph is at most β_k/ρ. If the latter holds for all k, then (α, β/ρ, θ) ∈𝒟(ν; 1, 1). Lemma 3.3 from <cit.> takes such a routine and turns it into an approximate LP solver. The following is proven in the exact same manner where we let LP^(ρ,γ) be the same as (<ref>), except 𝒞_k is replaced by 𝒞_γ· k in both (<ref>) and (<ref>) and the right-hand side of (<ref>) is replaced by 1/ρ. For fixed constant rational values ρ≥ 1, γ≤ 1, given a (ρ, γ)-approximation for , we can find a feasible solution (x,z) to LP^(ρ,γ) with cost at most OPT_LP in polynomial time. Our proof is nearly identical to that in <cit.> (see Section 3 in their paper). For completeness, we provide our own argument. First, we trivially know 𝒟(0; 1; 1) ≠∅. We also know that the optimal primal solution has value at most 1 + n ·∑_v w_v, and so by weak LP duality, we have that 𝒟(1 + n ·∑_v w_v; ρ, γ) = ∅. We consider a binary search over the range [0, 1 + n ·∑_v w_v] and maintain the invariant that for the current search window [ℓ, μ] that 𝒟(ℓ; 1, 1) ≠∅ and 𝒟(μ; ρ, γ) = ∅. Initially this is true by the above discussion. Now consider some ν in the middle of a given range [ℓ, μ]. We run the ellipsoid method over variables (α, β, θ) and invoke Lemma <ref> for each such tuple encountered. If it returns a declaration that (α, β/ρ, θ) ∈𝒟(ν; 1, 1), we update the lower end of the binary search range, i.e. ℓ := ν. Otherwise, after a polynomial number of iterations the ellipsoid method will generate an infeasible collection of constraints from 𝒟(ν; ρ, γ) in which case we update the upper end of the binary search range, i.e. μ := ν. Eventually the binary search algorithm reduces the range to be simply [ν, ν+ϵ] for some appropriate value ϵ we will describe soon. Note that 𝒟(ν'; 1, 1) = ∅ for all ν' > OPT_LP so this final range must have ν≤ OPT_LP. Consider the constraints, say ℋ, that were generated to certify 𝒟(ν + ϵ; ρ, γ) = ∅. These constraints correspond to a polynomial-size set of variables of LP^(ρ, γ). We claim that for an appropriately-small choice of ϵ that there is a feasible solution to LP^(ρ, γ) using only these variables (i.e. setting all others to 0) with cost at most OPT_LP. If so, since the corresponding LP has polynomial size then we can find it ourselves by solving the resulting LP. To see this, first add all of (<ref>), (<ref>), and the “objective function” constraint ∑_v α_v - 1/ρ∑_k β_k ≥ν + ϵ to ℋ and notice ℋ remains inconsistent (i.e. the inequalities cannot all be satisfied). A routine application of Farkas' lemma on this system of constraints then shows the restriction of LP^(ρ, γ) to just the variables corresponding to constraints in this modified version of ℋ has a feasible solution of value at most ν + ϵ. Thus, LP^(ρ, γ) has an extreme point solution (x,z) with value ν' at most ν + ϵ≤ OPT_LP + ϵ. We claim, in fact, that ν' ≤ OPT_LP by choosing ϵ appropriately small. By standard extreme point analysis, there is some integer D whose bit complexity is polynomial in the input (i.e. 
in the number of bits used to describe G, w, ρ, γ) such that all extreme points of LP^(ρ, γ), in particular the value of (x,z), have their objective value having denominator ≤ D. Similarly there is such an integer D' upper bounding the denominator in the value of any extreme point to (<ref>), in particular this applies to OPT_LP. Thus, if we let ϵ = 1/(D · D') then the value ν' of the extreme point solution (x,z) to LP^(ρ, γ) satisfies ν' ≤ OPT_LP + 1/(D · D'). Assume, by way of contradiction, that ν' > OPT_LP. Then ν' and OPT_LP are two different values with |OPT_LP - ν'| < 1/(D · D'). But the difference between distinct numbers with denominators at most D and D' respectively must be at least 1/(D · D'), a contradiction. So ν' ≤ OPT_LP as required, i.e. (x,z) is the desired solution. §.§ The Rounding Algorithm and Analysis The rounding algorithm is much like that in <cit.> in that it samples k-colorable subgraphs for various values of k in a geometric sequence and concatenates these colorings to get a coloring of all nodes. For convenience, let z_C, k = z_C, ⌊ k ⌋ for any real value k ≥ 0. Note that nodes colored during iteration j get assigned colors at most γ· (k_0 + k_1 + … + k_j) and the expected color of such a node is at most γ· (k_0 + k_1 + … + k_j-1 + (k_j+1)/2). The number of iterations is O(log n) because each vertex will appear in each γ· n coloring, as this is the largest color considered in LP^(ρ, γ). We note that despite our approach following the main ideas of the algorithm and analysis for minimum latency given in <cit.>, there are some key details that change. In <cit.>, each iteration of the algorithm produces a tree, which is then doubled and shortcutted to produce a cycle with cost at most double the tree. While we randomly permute the colors in our coloring, they randomly choose which direction to walk along the cycle. For a tree with cost k, this gives an expected distance of k for each node. We save a factor of 2 because we do not have a doubling step, but our average color is k+1/2 as opposed to k/2. Some extra work is required in our analysis to account for the extra 1/2 on each vertex. Let p_v, j be the probability that vertex v is not colored by the end of iteration j. For j < 0, we use p_v,j = 1 and k_j = 0. Finally, for v ∈ V, let ϕ(v) denote the color assigned to v in the algorithm. The following is essentially Claim 5.2 in <cit.>, with some changes based on the differences in our setting as outlined above. For a vertex v, E[ϕ(v) | h] ≤γ/2·c+1/c-1·∑_j ≥ 0 p_v, j-1· (k_j - k_j-1) + γ·( 1/2 - h/c-1). There are at most γ· k_j colors introduced in iteration j. They are permuted randomly, so any vertex colored in iteration j has color, in expectation, at most γ· (k_j+1)/2 more than all colors used in previous iterations. That is, the expected color of v if colored in iteration j is at most γ·(k_0 + k_1 + … + k_j-1 + k_j+1/2) ≤ γ·( h ·(c^j-1/c-1 + c^j/2) + 1/2) = γ·( k_j/2·c+1/c-1 + 1/2 - h/c-1), where we have used k_i = h · c^i and summed a geometric sequence. The probability v is colored in iteration j is p_v,j-1 - p_v,j, so the expected color of v is bounded by γ/2·c+1/c-1·( ∑_j ≥ 0 (p_v,j-1 - p_v,j) · k_j ) + γ·( 1/2 - h/c-1). By rearranging, this is what we wanted to show. For brevity, let y_v,j = ∑_k ≤ k_j x_v,k denote the LP coverage for v up to color k_j. 
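Since the pseudocode of the rounding algorithm is not reproduced in this text, the following is a minimal Python sketch reconstructed from the analysis around it: thresholds k_j = h · c^j with h = c^Γ for Γ drawn uniformly from [0,1); in iteration j a set C is drawn with probability ρ · z_{C, γ·k_j} from the (approximate) LP solution; and the newly covered vertices receive the next block of at most γ·k_j colors under a random permutation of the color classes. The data-structure layout and the coloring-oracle interface are illustrative assumptions, not part of the paper.

```python
import math
import random

def round_sum_coloring(V, lp_z, color_oracle, rho, gamma, c=3.591):
    """Sketch of the geometric-scaling rounding (interfaces are assumptions).

    V            : list of vertices
    lp_z         : dict k -> list of (C, z_Ck) pairs, the support of the
                   approximate LP solution at integer threshold k
    color_oracle : function(C, k) -> dict v -> class index, a proper coloring
                   of G[C] using at most ceil(gamma * k) classes (stands in
                   for the coloring returned by the (rho, gamma)-approximation)
    """
    n = len(V)
    Gamma = random.random()            # random offset Gamma ~ U[0, 1)
    h = c ** Gamma                     # thresholds k_j = h * c^j
    phi = {}                           # final color of each vertex
    colors_used = 0                    # colors consumed by earlier iterations
    j = 0
    while len(phi) < n and h * c ** (j - 1) <= n:
        k_j = h * c ** j
        kk = math.floor(k_j)           # recall z_{C,k} = z_{C, floor(k)}
        r = random.random()
        chosen = None
        for C, z in lp_z.get(kk, []):  # pick C with probability rho * z_{C,k_j}
            r -= rho * z
            if r <= 0:
                chosen = C
                break
        if chosen is not None:
            classes = color_oracle(chosen, kk)
            num_new = max(math.ceil(gamma * kk), 1)
            perm = list(range(num_new))
            random.shuffle(perm)       # random order of the new color classes
            for v, cls in classes.items():
                if v not in phi:       # only newly covered vertices get colored
                    phi[v] = colors_used + perm[cls] + 1
            colors_used += num_new
        j += 1
    for v in V:                        # safety net for any vertex never covered
        phi.setdefault(v, colors_used + 1)
    return phi
```

As a side note on the default value of c used above: for γ = 1 and ρ → 1 the coefficient ρ·γ·(c+1)/(2·(1-(1-ρ)·c)·ln c) appearing in the analysis below reduces to (c+1)/(2 ln c), which is minimized exactly when c·ln c = c+1, i.e. at c ≈ 3.591.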
The next lemma is essentially Claim 5.3 from <cit.>, but the dependence on ρ is better in our context[We note <cit.> does have a similar calculation in a single-vehicle setting of their problem whose dependence is more like that in Lemma <ref>. They just don't have a specific claim summarizing this calculation that we can reference.]. For any v ∈ V and j ≥ 0, we have p_v, j≤ (1-y_v,j) ·ρ + (1 - ρ) · p_v, j-1. If v is not covered by iteration j, then it is not covered in iteration j itself and it is not covered by iteration j-1, which happens with probability p_v,j-1·(1 - ∑_C ∈𝒞_γ· k_j : v ∈ Cρ· z_C, k_j) ≤ p_v,j-1· (1-ρ· y_v,j) = p_v,j-1·ρ· (1-y_v,j) + p_v,j-1· (1-ρ). Note that the first inequality follows from constraint (<ref>) and the definition of y_v, j. The lemma then follows by using p_v,j-1≤ 1 and y_v,j≤ 1 to justify dropping p_v,j-1 from the first term. From these lemmas, we can complete our analysis. Here, for v ∈ V, we let col_v = ∑_k = 1^n k · x_v,k denote the fractional color of v, so the cost of (x,z) is ∑_v ∈ V w_v · col_v. The following lemma is essentially Lemma 5.4 in <cit.> but with our specific calculations from the previous lemmas. For any v ∈ V, we have E[ϕ(v)] ≤ρ·γ· (c + 1)/2 · (1- (1-ρ)· c)·ln c· col_v. For brevity, let Δ_j = k_j - k_j-1. We first consider a fixed offset h. Let A = ∑_j ≥ 0 p_v, j-1·Δ_j and recall, by Lemma <ref>, that the expected color of v for a given h is at most γ/2·c+1/c-1· A + γ (1/2 - h/c-1). Note Δ_j = c ·Δ_j-1 for j ≥ 2 and Δ_0 + Δ_1 = c ·Δ_0. So from Lemma <ref>, A ≤ ∑_j ≥ 0ρ· (1-y_v,j) ·Δ_j + (1-ρ) ∑_j ≥ 0 p_v, j-2·Δ_j = ∑_j ≥ 0ρ· (1-y_v,j) ·Δ_j + c · (1-ρ) · A. Rearranging and using c < 1/(1-ρ), we have that A ≤ρ/1-c · (1-ρ)·∑_j ≥ 0 (1-y_v,j) ·Δ_j. For 1 ≤ k ≤ n, let σ(k) be k_j for the smallest integer j such that k_j ≥ k. Simple manipulation and recalling y_v,j = ∑_k ≤ k_j x_v,j shows ∑_j ≥ 0 (1-y_v,j) ·Δ_j = ∑_k=1^n σ(k) · x_v,k. The expected value of σ(k) over the random choice of h, which is really over the random choice of Γ∈ [0,1), can be directly calculated as follows where j is the integer such that k ∈ [c^j, c^j+1). E_h[σ(k)] = ∫_0^log_c k-j c^Γ+j+1 dΓ + ∫_log_c k - j^1 c^Γ+j dΓ = 1/ln c (c^log_c k + 1 - c^j+1 + c^j+1 - c^log_c k) = c-1/ln c· k. We have just shown E_h[∑_j ≥ 0 (1-y_v,j) ·Δ_j] = c-1/ln c∑_k ≥ 0 k · x_v,k = c-1/ln c· col_v. So, we can now bound the unconditional color E_h[ϕ(v)] using our previous lemmas. E_h[ϕ(v)] = γ/2·c+1/c-1· E_h[A] + γ(1/2 - E_h[h] / (c-1)) ≤ ρ·γ· (c + 1)/2 · (1 - (1 - ρ) · c) ·ln c· col_v + γ( 1/2 - E_h[h] / (c-1) ) = ρ·γ· (c + 1)/2 · (1 - (1 - ρ) · c) ·ln c· col_v + γ( 1/2 - 1/ln c) ≤ ρ·γ· (c + 1)/2 · (1 - (1 - ρ) · c) ·ln c· col_v The first equality and inequality follow from linearity of expectation and known bounds on E[ϕ(v) | h] and A. The second equality follows from the fact that E_h[h] = ∫_0^1 c^Γ dΓ = c-1/ln c, and the last inequality is due to the fact that c < e^2 by assumption. To finish the proof of Lemma <ref>, observe the expected vertex-weighted sum of colors of all nodes is then at most ρ·γ· (c + 1)/2 · (1- (1-ρ)· c)·ln c·∑_v ∈ V w_v · col_v ≤ρ·γ· (c + 1)/2 · (1- (1-ρ)· c)·ln c· OPT. Theorem <ref> then follows by combing the (1-ϵ,1) approximation (described in the next section) with this approximation, choosing c ≈ 3.591, and ensuring ϵ is small enough so c < 1/ϵ. We note Algorithm <ref> can be efficiently derandomized. First, there are only polynomially-many offsets of h that need to be tried. 
That is, for each k_j, we can determine the values of h that would cause ⌊γ· k_j ⌋ to change and try all such h over all j. Second, instead of randomly permuting the color classes in a γ· k_j-coloring, we can order them greedily in non-increasing order of total vertex weight. § A PTAS FOR MAXIMUM k-COLORABLE SUBGRAPH IN CHORDAL GRAPHS We first find a perfect elimination ordering of the vertices v_1, v_2, …, v_n. This can be done in linear time, e.g., using lexicographical breadth-first search <cit.>. Let N^left(v) ⊆ V be the set of neighbors of v that come before v in the ordering, so N^left(v) ∪{v} is a clique. Recall that we are working with the following LP. The constraints we use exploit the fact that a chordal graph is k-colorable if and only if all left neighbourhoods of its nodes in a perfect elimination ordering have size at most k-1. maximize: ∑_v ∈V w_v ·x_v K-COLOR-LP subject to: x_v + x(N^left(v)) ≤ k ∀ v ∈V x ∈ [0,1]^V Let OPT_LP denote the optimal LP value and OPT denote the optimal solution to the problem instance. Of course, OPT_LP≥ OPT since the natural {0,1} integer solution corresponding to a k-colorable subgraph of G is a feasible solution. We can now give a rounding algorithm as follows: each vertex v is independently placed in a set S' with probability (1-f(k)) · x_v, for a parameter f(k) to be fixed later; then, scanning the vertices in the perfect elimination ordering, a vertex v ∈ S' is added to the output set S only if |S ∩ N^left(v)| ≤ k-1 (an illustrative code sketch is given at the end of this section). §.§ Analysis Observe that when we consider adding some v ∈ S' to S, S ∪{v} is k-colorable if and only if |S ∩ N^left(v)| ≤ k-1. This is easy to prove by noting that the restriction of a perfect elimination ordering of G to a subset S yields a perfect elimination ordering of G[S]. Because we consider the nodes v according to a perfect elimination ordering of G, by adding v the only possible left-neighbourhood of a node that could have size ≥ k is N^left(v) itself. We bound the probability that we select at least k vertices from N^left(v). The second moment method is used so that derandomization is easy. Let Y_u indicate the event that u ∈ S'. Then E[Y_u^2] = E[Y_u] = (1 - f(k)) · x_u. Fix some vertex v. Let Y = ∑_u ∈ N^left(v) Y_u. By constraint (<ref>), we have E[Y] = ∑_u ∈ N^left(v) (1-f(k)) · x_u ≤ (1 - f(k)) · k. And since each Y_u is independent, we have again by constraint (<ref>) that Var[Y] = ∑_u ∈ N^left(v) Var[Y_u] = ∑_u ∈ N^left(v)( E[Y_u^2] - E[Y_u]^2 ) ≤∑_u ∈ N^left(v) E[Y_u^2] ≤ k. We are interested in Pr[Y ≥ k] ≤ Pr[|Y-E[Y]| ≥ f(k) · k]. By Chebyshev's inequality, Pr[|Y-E[Y]| ≥ f(k) · k] ≤ Var[Y]/(f(k)^2 · k^2) ≤ k/(f(k)^2 · k^2) = 1/(f(k)^2 · k). From this, we find that the probability we actually select vertex v is at least Pr[Y_v ∧ (Y ≤ k-1)] = Pr[Y_v] · Pr[Y ≤ k-1] ≥ (1-f(k)) · x_v ·(1 - 1/(f(k)^2 · k)). The first equality is justified because Y only depends on Y_u for u ≠ v, so these two events are independent. Choosing f(k) = k^-1/3 results in v ∈ S with probability at least x_v · (1 - 2 · k^-1/3). By linearity of expectation, the expected value of S is at least (1 - 2 · k^-1/3) ·∑_v ∈ V w_v · x_v. The PTAS for maximum k-colorable subgraph in chordal graphs is now immediate. For any constant ϵ > 0, if k ≥ 8/ϵ^3, then we run our LP rounding algorithm to get a k-colorable subgraph with weight at least (1-ϵ) · OPT_LP. Otherwise, we run the exact algorithm in <cit.>, which runs in polynomial time since k is bounded by a constant. It is desirable to derandomize this algorithm so it always finds a solution with the stated guarantee. This is because we use it numerous times in the approximate separation oracle for (<ref>). Knowing it works all the time does not burden us with providing concentration around the probability we successfully approximately solve LP^(ρ,γ) as in Lemma <ref>.
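For concreteness, a minimal Python sketch of the randomized rounding just analyzed (Algorithm <ref> itself is not reproduced in this text) could look as follows; the function and variable names are illustrative assumptions.

```python
import random

def round_k_colorable_chordal(order, left_nbrs, x, k):
    """Hedged sketch of the LP rounding for chordal graphs.

    order     : vertices in a perfect elimination ordering v_1, ..., v_n
    left_nbrs : dict v -> set of neighbours of v that precede v in `order`
    x         : dict v -> optimal value of the K-COLOR-LP variable x_v
    k         : number of colors allowed
    Returns a set S such that G[S] is k-colorable.
    """
    f_k = k ** (-1.0 / 3.0)                       # f(k) = k^(-1/3) as in the analysis
    # Step 1: independent rounding, keep v with probability (1 - f(k)) * x_v.
    S_prime = {v for v in order if random.random() < (1.0 - f_k) * x[v]}
    # Step 2: scan in elimination order; keep v only if it has at most k-1
    # already-kept left neighbours, so G[S] stays k-colorable.
    S = set()
    for v in order:
        if v in S_prime and len(S & left_nbrs[v]) <= k - 1:
            S.add(v)
    return S
```

In this form the inclusion decisions of the first step are fully independent coin flips; as noted next, only pairwise independence of the indicators Y_u is needed to bound Var[Y], which is what makes the derandomization straightforward.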
We can derandomize Algorithm <ref> using standard techniques since it only requires that the variables Y_u, u ∈ V be pairwise-independent (in order to bound Var[Y]). § APX-HARDNESS FOR MAXIMUM k-COLORABLE SUBGRAPH IN PERFECT GRAPHS It is natural to wonder if the maximum k-colorable subgraph problem admits a better approximation in perfect graphs. Unfortunately, the techniques we used to get better approximations for chordal graphs do not extend immediately to perfect graphs. In <cit.>, Addario-Berry et al. showed the maximum 2-colorable subgraph problem is NP-hard in a different subclass of perfect graphs than chordal graphs. Their proof reduces from the maximum independent set problem, and it is easy to see that it shows APX-hardness in the same graph class if one reduces from bounded-degree instances of maximum independent set, as we now show. In <cit.>, a polynomial-time reduction is given that will produce a graph H from a given graph G such that the maximum 2-colorable subgraph in H has size exactly 3 · |V(G)| + 2 · |E(G)| + α(G), where α(G) is the size of the largest independent set in G. Consider applying this reduction when G = (V,E) is a cubic graph (i.e., a simple graph where every vertex has degree exactly 3). Since G is a cubic graph, 2 · |E(G)| = ∑_v ∈ V deg(v) = 3 · |V(G)|, so the maximum 2-colorable subgraph in H has size exactly 6 · |V(G)| + α(G). Also observe α(G) ≥ |V(G)|/4 since any n-node graph with maximum degree d has an independent set of size n/(d+1) that can be constructed by greedily picking nodes not adjacent to any previously-picked nodes. Thus, for any 0 < c < 1, if there is a c-approximation for the maximum 2-colorable subgraph of H then we would have computed a value z such that 6 · |V(G)| + α(G) ≥ z ≥ c · (6 · |V(G)| + α(G)). So (z - 6c · |V(G)|)/(25 - 24 · c) ∈ [c ·α(G)/(25 - 24 · c), α(G)], i.e. we have a c/(25 - 24 · c)-approximation for the value of α(G). But the Maximum Independent Set problem in cubic graphs is APX-hard, meaning a c-approximation for the maximum 2-colorable subgraph problem cannot (barring surprising developments in complexity theory) exist for constants c sufficiently close to 1. § CONCLUSION Our approach, or a refinement of it, may succeed in getting a good approximation for minimum sum coloring in perfect graphs if one has good constant approximations for the maximum k-colorable subgraph problem in perfect graphs. Note that the maximum k-colorable subgraph problem can be approximated within 1-1/e ≈ 0.6321 in perfect graphs simply by using the maximum coverage approach. That is, for k iterations, we greedily compute a maximum independent set of the nodes that are not yet covered. This is not sufficient to get an improved approximation for minimum sum coloring in perfect graphs using Lemma <ref>. That is, Lemma <ref> can yield an improved approximation if we get a sufficiently-good (≈ 0.704) approximation for the maximum k-colorable subgraph problem. As a starting point, we ask if there is a ρ-approximation for the maximum k-colorable subgraph problem in perfect graphs for some constant ρ > 1-1/e.
http://arxiv.org/abs/2406.19295v1
20240627161018
M31 nucleus: molecular and ionised gas content upper limits
[ "Anne-Laure Melchior", "Francoise Combes" ]
astro-ph.GA
[ "astro-ph.GA" ]
LERMA, Sorbonne Université, Observatoire de Paris, Université PSL, CNRS, F-75014, Paris, France Anne-Laure.Melchior@observatoiredeparis.psl.eu Collège de France, 11 Place Marcelin Berthelot, 75005 Paris, France We report NOEMA and ALMA observations of the nucleus of Andromeda (M31), putting strong constraints on the presence of gas in the form of cold or warm phase, as proposed by Chang et al. M31 hosts the largest supermassive black hole (SMBH) closer than 1 Mpc from us. Its nucleus is silent with some murmurs at the level of 4 × 10^-9 L_Edd, and is surrounded by a 5-pc-radius disk of old stars. The mass-loss from these stars is expected to fill a molecular gas disk within the tidal truncation of 1 pc (=0.26 arcsec), of 10^4 M_⊙, corresponding to a CO(1-0) signal of 2 mJy with a linewidth of 1000 km/s. We observed the nucleus with NOEMA in CO(2-1) and with ALMA in CO(3-2) with angular resolutions of 0.5(1.9 pc) and 0.12(0.46 pc) respectively. We exclude the presence of gas with a 3σ upper limit of 195M_⊙. The CO(3-2) upper limit also constrains warm gas, escaping detection in CO(1-0). The scenario proposed by Chang et al. is not verified, and instead the hot gas, expelled by the stellar winds, might never cool nor fall onto the disc. Alternatively, the stellar wind mass-loss rate can have been overestimated by a factor 50, and/or the ionised gas escaped from the nucleus. The SMBH in M31 is obviously in a low state of activity, similar to what is observed for Sgr A* in the Milky Way (MW). Recently, a cool (10^4 K) ionised accretion disc has been detected around Sgr A* in the H30α recombination line with ALMA. Re-scaling sizes, masses and fluxes according to the mass of M31's black hole (35 times higher than in the MW) and the distances, a similar disc could be easily detectable around M31 nucleus with an expected signal 8 times weaker that the signal detected in SgrA*. We searched for an ionised gas disc around M31 nucleus with NOEMA, and put a 3σ upper limit on the H30α recombination line at a level twice lower than expected with a simple scaling of the SgrA*. M31's nucleus: molecular and ionised gas content upper limits Anne-Laure Melchior1 Françoise Combes12 July 1, 2024 ================================================================================================= § INTRODUCTION M31 hosts the closest SMBH, more massive than SgrA*, with a mass of 1.4 × 10^8 M_⊙ <cit.>. While this black hole has probably been active in the past <cit.>, it actually murmurs at the level of 4× 10^−9 L_Edd <cit.>. The M31 galaxy is known to be in the green valley <cit.>, which is supported by the very little amount of molecular gas (∼ 8 × 10^4 M_⊙) detected inside 250 pc <cit.>, and by the very little star formation (∼ 10^−5 M_⊙ yr^−1) observed in the central kpc <cit.>. The molecular gas detected in the central kpc is very clumpy and its kinematics irregular suggesting that the detected gas might be seen in projection <cit.>. These arguments are compatible with an exhaustion of gas inside-out <cit.>. The high resolution photometry performed within the sphere of influence of the M31 black hole revealed several peaks in intensity: as discussed in <cit.>, the nucleus reveals three components. This has been explained by an eccentric disc <cit.>, extending at ∼ 4 pc from the black hole. It is composed of old stars with a bright stellar concentration in P1 at apoapse, and a fainter concentration in P2, at periapse. 
In this scheme, the third peak designated as P3 is a nuclear blue stellar cluster located next to the black hole <cit.>. Contrarily to the MW hosting Wolf-Rayet stars close to SgrA* <cit.>, P3 has been interpreted as a compact cluster of young stars thought to be 100-200 Myr old. Such central young stellar population is also detected in nearby galaxies <cit.> and in the Galactic centre <cit.>. While the formation of nuclear stellar clusters in massive galaxies is usually associated to central star formation <cit.>, it has also been discussed that these centrally concentrated blue stars could be blue straggler stars, extended horizontal branch stars and/or young, recently formed, stars <cit.>. Indeed, different mechanisms could drive gas infall onto the centre to form new stars, or accrete onto old main sequence stars already present. The non-detection of molecular gas next to the black hole is closely related to the coevolution of these nuclear clusters with the central black hole <cit.>. <cit.> have shown that eccentric nuclear stellar discs may originate in gas-rich galaxy mergers. Relying on numerical simulations, they show that M31's nuclear stellar disc can well be reproduced by gas-rich accretion onto the black hole <cit.>. This mechanism could have accounted for the growth of the SMBH, for which the existing disc is a stable relic of this past active phase. <cit.> relied on N-body simulations to show that the smearing of the stellar orbits due to differential precession does not occur, as a torque of the orbit adds to the precession and stabilizes the eccentric disc. In M31, <cit.> have proposed that the nuclear eccentric old-stellar disc <cit.> could be cyclically replenished from the gas expelled through mass loss of red giants and asymptotic giant branch stars. Gas on orbits crossing the tidal radius R_t would collide, shock and fall into a closer orbit around the black hole. The gas would accumulate into a nuclear disc until star formation events are triggered every 500 Myr. According to <cit.>, the key parameter and source of uncertainty of this modelling is the precession rate of the eccentric disc Ω_P, which should not exceed 3-10 km s^−1/pc, which is in agreement with the overall argument that the central eccentric disc is long-lived <cit.>. <cit.> found from simulations a natural m=1 mode in the nuclear disc, with a very slow pattern speed (3 km s^−1/pc) that can be maintained during more than a thousand dynamical times. <cit.> has estimated with OSIRIS/Keck spectroscopy Ω_P = 0.0 ± 3.9 km s ^−1/pc, which is compatible with the results of <cit.>. In <cit.>, we first observed the nucleus at IRAM-30m with 12 arcsec resolution in CO(2-1). No molecular gas associated with the black hole has been detected. Only some gas mass further out, within 100 pc, corresponding to about 4.2 × 10^4 M_⊙ has been detected. With these single-dish observations, an rms sensitivity of 20 mJy for a velocity resolution of 2.6 km s^-1 has been reached. The nucleus has been subsequently observed with NOEMA in CO(1-0) <cit.>. A small clump of 2000 M_⊙ with Δ v = 14 km s^−1 within 9 pc from the centre, most probably seen in projection, has been detected. At the black hole position, an rms sensitivity of 3.2 Jy/beam with a beam of 3.37^''×2.44^'' has been reached for a velocity resolution of 5.1 km s^−1, which corresponds to a 3σ upper limit on the molecular gas mass of 4300 M_⊙ for a linewidth of 1000 km s^−1. 
This rules out the original prediction of <cit.>, namely a CO(1-0) flux of 2 mJy with a linewidth of 1000 km s^−1 corresponding to a molecular mass of 10^4 M_⊙ gas concentrated inside the tidal truncation radius R_t < 1 pc. However, we do expect stellar mass loss and winds from the old stellar population of the eccentric disc, and hence an accumulation of gas in the centre. It is hence possible that previous NOEMA observations with a 3.37^''× 2.44^'' (i.e. about 13 pc × 9 pc) beam did not succeed to detect this gas by lack of sensitivity, due to the dilution of the signal. In this paper, we present new NOEMA and ALMA observations of M31’s centre, gaining an order of magnitude in sensitivity and resolution. As M31's black hole is not in an active phase (like SgrA*), given the lack of gas in the nuclear region and its low Bondi accretion rate <cit.>, it is probably typical of the so-called radiatively inefficient accretion flows due to very low density hot gas <cit.>. In such a configuration with low Eddington luminosity <cit.>, the gas does not cool via radiation, and the hot accretion disc is probably thick with advection-dominated accretion flows (ADAF) <cit.>. In X-ray, the detected gas has a typical temperature of 0.3 keV ∼ 6× 10^6 K <cit.>. The cooling process in this multi-phase region is complex and is expected to proceed with fragmentation <cit.>. Such low states are observed in nearby nuclei, as in the Galactic Centre, where M_∙= 4 10^6 M_⊙, and L_bol= 2 10^-9 L_Edd, or M31, where M_∙= 1.2 10^8 M_⊙, and L_bol= 10^-9 L_Edd <cit.>. Near the nucleus, there should exist a rotating disc, expected to be geometrically thick and optically thin, where the convection takes the energy away and limits the accretion. This can explain the low luminosity, due to the radiatively inefficient accretion flow (RIAF). Recently, <cit.> have detected the recombination line H30α in the accretion disc around the Galactic Centre with ALMA (see their figure 1). Although their beam is ∼ 0.3 arcsec, they are able to see a rotating disc, with the blue and red-shifted sides peaking each 0.11 arcsec from the centre (i.e. 0.004 pc); this is 1/10th of the Bondi radius R_B, or 10^4 of the horizon radius. The width of the line (2200 km s^-1) corresponds to the rotation around a black hole mass of 4× 10^6 M_⊙, at a radius R_B/10. The gas mass corresponds to ∼ 10^-4.5 M_⊙, with an average density of 4000 cm^-3, with may be some clumps at n=10^5-10^6 cm^-3. The emission measured is proportional to n^2 times the volume occupied by the ionised gas. It has been amplified by the millimetre continuum of SgrA* by a factor 80. This masing effect is likely to occur also in M31. Throughout this paper, we consider a luminosity distance of 0.78 Mpc for M31 <cit.>, corresponding to 1^''=3.8 pc. In Sect. <ref>, we describe the observations we carried out to search for CO and H30α recombination lines next to the SMBH. In Sect. <ref>, we present the upper limits thus achieved. In Sect. <ref>, we discuss our results. § OBSERVATIONS AND UPPER LIMITS We describe below the set of each (sub)millimeter observations carried out with the phase center position at the optical position <cit.>, namely 00^h42^m44.37^s +41^d16^m8.34^s. The upper limits provided here have been derived for this position. Nevertheless, we check that given the size of the primary beams (22and 18respectively) this does not change the result at the position of the radio source 00^h42^m44.33^s +41^d16^m08.42^s, as both are separated by 0.6. 
§.§ NOEMA observations The first epoch observations (2012) of CO(1-0) have been described in <cit.> and <cit.>. In 2020, we observed the nucleus at 231 GHz with the A configuration, reaching a 0.3 arcsec, with 10 antennas and 2 polarisations. We thus targeted the H30α recombination line. We reached an rms noise level of 0.11 (resp. 0.074) mJy in 8 hours of integration time in 1000 (resp. 2200) km s^-1 channels. We also get limits on the CO(2-1) line at 230.769 GHz. The primary beam diameter was 22 arcsec (88 pc). We applied standard calibrations with pointing and tuning on IRAM calibrators (3C454.3, MWC349, 0010+405, 0003+380). §.§ ALMA observations We observed the nucleus at 346.142 GHz (band 7) targetting the CO(3-2) line with the ALMA-12m array in two configurations, within the project 2019.1.00711.S (PI: Melchior). We reached an angular resolution of 0.12(resp. 0.8) corresponding to the baseline configuration C43.6 (resp. C43.3), with 1.3 (resp. 0.33) hours of integration time. We also have the HCO^+(4-3) line, in the upper sideband. The primary beam diameter was 18 arcsec (88 pc). The standard pipeline has been applied. Figure <ref> displayed a 1.6^''×1.6^'' (6 pc×6 pc) field of view centred on M31's black hole. While the left panel displays the noise level achieved for a Δ V = 1000 km/s bandwidth, the right panel shows the signal expected according to <cit.>, assuming R_13=1.8 (as defined in Sect. <ref>). § UPPER LIMITS Table <ref> summarises the 3σ upper limits achieved on the observed line intensities, together with the beam, integration time t_int and their observed frequencies, while Table <ref> gives the 3σ upper limits on the continuum fluxes. Similarly, Figures <ref> and <ref> illustrate the upper limits reached with the measurements described in the previous section. In the following, we discuss the significance of these three types of upper limits, namely on the molecular gas lines, the radio recombination line and the continuum. §.§ Molecular gas To derive upper limits on the total H_2 mass, we compute the intrinsic CO luminosity with the velocity integrated transition line flux F_CO(J → J-1) (∝ S Δ V) and calculate : ( L'_CO(J → J-1)/K km s^-1 pc^2) = 3.25 × 10^7( F_CO(J → J-1)/Jy km s^-1) ( ν_rest/GHz)^-2( D_L/Mpc)^2, where ν_ rest is the rest CO line frequency and D_L the luminosity distance <cit.>. We consider a luminosity distance of 0.78 Mpc for M31 <cit.>. We then derive the total molecular-gas mass including a correction of 36 % for interstellar helium using : M_H_2 = α_CO R_1J L'_CO(J → J-1), where the mass-to-light ratio α_ CO denotes the CO(1-0) luminosity-to-molecular-gas-mass conversion factor, and R_1J = L^'_CO (1→ 0) / L^'_CO (J → J-1) is the CO line ratio. <cit.> assume R_12∼ 1.16-1.3 and R_13∼ 1.8 for star-formation main-sequence galaxies. One can note that the so-called CO-ladder ratios strongly depend on the column density and the gas temperature <cit.>. Given the uncertainties here, we consider in the following standard values R_12∼ 1.3 and R_13∼ 1.8. For α_ CO, given the absence of gas, we assume a Galactic value α_ CO=4.36 M_⊙/(K km s^-1 pc^2), which includes the usual correction to account for helium <cit.>. Again, we might consider that the gas in the central part is expected to have a high metallicity. However, the various works <cit.> arguing for a lower α_ CO for high-metallicity gas are based on actively star-forming galaxies. 
Alternatively, it is difficult to argue for a higher α_ CO, as expected in low-metallicity regions like in the outskirts of galaxies. Hence, upper values derived on the molecular gas are provided for the sake of discussion to compare with standard MW values. As further discussed in Sect. <ref>, it is possible that the molecular gas in this region is CO-dark, due to special physical conditions. The 3σ upper limits on the molecular mass derived from our measurements are provided in Table <ref>. The strongest constraint can be derived from the ALMA observations in CO(3-2). We can thus exclude that there are more than 195 M_⊙ of molecular hydrogen within 1 pc of the nucleus, assuming standard MW-like conditions. This value is a factor 50 lower that the mass of gas expected in the <cit.> modelling. §.§ Hydrogen recombination line In Table <ref>, we summarise our results given our initial assumptions scaled from the previous observations of the H30α recombination line detected in the MW for SgrA* by <cit.>. The Bondi radius scales with the black hole mass R_B = 2 G M_∙/c^2_s (where c_s is the sound speed of the gas): the Bondi radius is thus a priori 35 times larger around M31's black hole than for SgrA* for a given sound speed : R_B|_M31 = M_∙|_M31M_∙|_MW× R_B|_MW = 35 × R_B|_MW Relying on <cit.>, who estimated c_s = 550km s^-1, we found a typical size of 4.2 pc, which is well sampled by our observations (0.3" or 1.1 pc resolution). In addition, the line width Δ V traces the Keplerian velocity around the black hole and scales as √(M_∙/R_B). Hence, given the previous assumption on the sound speed, we expect the same line width V_H30α|_M31 for M31 than for the MW : Δ V_H30α|_M31 = √(M_∙|_M31M_∙|_MW)√(R_B|_MWR_B|_M31)×Δ V_H30α|_MW Δ V_H30α|_M31 = Δ V_H30α|_MW∼ 2200 kms^-1 Indeed, given the similarities between the two nuclei, namely, their low Eddington ratios λ_Edd = L/ L_Edd of 2-4× 10^-9 and their radiatively inefficient accretion flow (RIAF), it is likely that they both possess a rotating ionised disc in their nuclei, at a fraction of their Bondi radius. We thus know: λ_Edd = f(Ṁ_B|_M31M_∙|_M31) = f(Ṁ_B|_MWM_∙|_MW) The accretion rates are proportional to their respective BH masses. This corresponds to a typical (2D) accretion rate at the Bondi radius: Ṁ_B ∝ 2π R_B μ c_s, where μ is the ionised gas surface density. This means that we can assume the same plasma characteristics for the accretion around M31* and SgrA* (i.e. μ, c_s, and plane thickness h), the accretion rate is 35 times larger for M31*: Ṁ_B|_M31 = [ M_∙|_M31M_∙|_MMW] ×Ṁ_B|_MW = 35 ×Ṁ_B|_MW Last, the expected integrated signal can be written as : S Δ V_H30α|_M31 = ϵ_H30α4π h D^2_M31μ^2 π R_B^2 cν_obs , where ϵ_H30α the emissivity of H30α <cit.>, which varies weakly with the density. Assuming simple scaling relations we find : S Δ V_H30α|_M31 = [D_MWD_M31]^2 [ M_∙|_M31M_∙|_MMW]^2 × S Δ V_H30α|_MW = 0.13× S Δ V_H30α|_MW The expected ionised gas signal in M31 is 8 times lower than in the MW. The 3σ upper limit obtained here is 0.07 × the MW results. Therefore our upper limit is twice below what was expected. This means that the M31 nucleus has less ionised gas than SgrA* in proportion. §.§ Continuum flux Our upper limits are compatible with the synchrotron power law emission, derived from lower frequency measurements <cit.>. Given the absence of gas in the nucleus, no dust emission is expected. 
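As a quick numerical cross-check of the conversions and scaling relations above, a short Python sketch is given below. The adopted luminosity distance, α_CO, R_13 and the L'_CO relation are taken from the text; the Sgr A* distance (≈ 8 kpc) and the example CO(3-2) flux value are assumptions introduced here for illustration only (the actual 3σ limits are listed in the tables).

```python
# --- H30alpha scaling from Sgr A* to M31* (simple scaling of Sect. 3.2) ------
M_ratio = 35.0          # M_BH(M31) / M_BH(MW), as quoted in the text
D_M31 = 0.78            # Mpc, adopted luminosity distance of M31
D_MW = 0.008            # Mpc (~8 kpc to Sgr A*; assumed here, not quoted above)

flux_ratio = (D_MW / D_M31) ** 2 * M_ratio ** 2
print(f"S dV(M31) / S dV(MW) ~ {flux_ratio:.2f}")                 # ~0.13
print(f"expected M31 signal weaker by ~ {1.0 / flux_ratio:.0f}x")  # ~8

# --- CO line luminosity to H2 mass (relations of Sect. 3.1) ------------------
def h2_mass(F_co, nu_rest_GHz, D_L_Mpc=0.78, alpha_co=4.36, R_1J=1.8):
    """M(H2) = alpha_CO * R_1J * L'_CO, with
    L'_CO = 3.25e7 * F_CO * nu_rest^-2 * D_L^2  [K km/s pc^2]."""
    L_prime = 3.25e7 * F_co * nu_rest_GHz ** -2 * D_L_Mpc ** 2
    return alpha_co * R_1J * L_prime

# Illustrative only: a CO(3-2) flux of 0.15 Jy km/s (a made-up number; the real
# limits are in the tables) yields a mass of order 200 Msun, the scale of the
# 3-sigma limit quoted above. CO(3-2) rest frequency: 345.8 GHz.
print(f"M_H2 ~ {h2_mass(0.15, 345.8):.0f} Msun")
```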
§ DISCUSSION AND CONCLUSIONS The ALMA observations present an unprecedented sensitivity for cold molecular gas, further excluding its presence next to the black hole. Indeed, the modelling proposed by <cit.> was predicting 10^4 M_⊙ of CO(1-0), while our upper limit (195 M_⊙) is 50 times smaller. While the principle of gas replenishment due to stellar wind from red giants in the eccentric disc is reasonable, some details are probably incorrect. Besides the small rotation pattern requirement that seems compatible with the modern estimates <cit.>, the values used for the stellar wind mass-loss rates lie on the upper side of the expected range and might be overestimated by a factor larger than 50 (E. Josselin, priv. comm.). The modelling might still account for the formation of the blue nuclear cluster next to P3 <cit.>, but the timescale to reconstitute a gas reservoir would then be longer than expected. Alternatively, <cit.> also discussed that the central blue nucleus might be composed of an old stellar population of evolved blue horizontal-branch stars and of merger products, which would invalidate the need for gas inflow to form it. Relying on simple scaling relations based on the <cit.> detection of the H30α recombination line next to SgrA*, we tentatively searched for this line next to M31* with deep NOEMA observations. We excluded it at a 6σ level. In the optical, <cit.> have discovered an eccentric Hα emitting disc in the M31 nucleus, of radius 0.7''. The presence of this optical recombination line traces the presence of some gas next to the nucleus. They estimate that this 2.7-pc-radius gas disc has a luminosity of L_H_α=(8.7± 1) × 10^2 L_⊙. This luminosity corresponds to a recombination rate of 10^48 photons per second. One can discuss the possible mechanisms that might have ionised this gas. Following <cit.>, one typical planetary nebula emits 1.2 × 10^45 photons per second, while we do not expect more than a few planetary nebulae in this region <cit.>. The youngest stars present in this region belong to the A0-star cluster (4200 M_⊙) next to the black hole in P3 <cit.>. Their typical temperature of 10 000 K <cit.> excludes any significant amount of ionising photons. Assuming a black body emission, such an A0-star with a typical mass 2.5 M_⊙ cannot produce more than 1.2× 10^38 recombinations per second, corresponding to a grand total of 2× 10^41 recombinations per second. We can thus exclude that stellar objects contribute to the ionisation of this central gas. On 2006 January 6^th, <cit.> observed a murmur of M31* at a level of 4.3× 10^37 erg s^-1 in the energy range 0.5-8 keV corresponding to a minimum of 2× 10^48 recombinations per second. The past relic activity of the central engine might well explain the excitation of this inner Hα disc. Indeed, an episodic activity of the black hole is also supported by the work of <cit.>, based on X-ray gas, who concluded that only an AGN episode half a million years ago could account for their observed line ratios. In addition, on probably different timescales, <cit.> also discussed signs of recent AGN activity found in possible Fermi bubbles, while <cit.> detected the possible relic of a past outflow. Recurrent bursts from the central engine might account for the whole picture. Hence, it is possible that the CO has been destroyed as proposed by <cit.>, due to the black hole activity traced in X-ray. Moreover, one can quote the work of <cit.> who show that the irradiation by an AGN can modify the atmosphere of red giant stars.
Our upper limits and the detection of a weak Hα disc by <cit.> support the evidence of a past activity of the AGN (e.g. kinetic jet feedback), which might have significantly contributed to centrally quench this galaxy as discussed in <cit.>. This paper makes use of the following ALMA data: ADS/JAO.ALMA#2019.1.00711.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. This work is based on observations carried out under project numbers W19BN and W01E with the IRAM NOEMA Interferometer. IRAM is supported by INSU/CNRS (France), MPG (Germany) and IGN (Spain).” This work benefited from the support of the Action fédératrice ALMA-NOEMA of Paris Observatory, and in particular the workshop organised within this project by Raphaël Moreno and Philippe Salomé. This project also got from support from the Programme National Cosmologie et Galaxies. Special thanks go to Eric Josselin and Franck Delahaye for useful information on stellar populations. Last, we are most grateful to the anonymous referee for the very constructive comments that helped us to substantially improve the manuscript. aa § OBSERVED MAPS
http://arxiv.org/abs/2406.18647v1
20240626180002
Majorana phases beyond neutrinoless double beta decay
[ "Avital Dery", "Stefania Gori", "Yuval Grossman", "Zoltan Ligeti" ]
hep-ph
[ "hep-ph" ]
http://arxiv.org/abs/2406.19036v1
20240627094156
Optimized Waveform Design for OFDM-based ISAC Systems Under Limited Resource Occupancy
[ "Silvia Mura", "Dario Tagliaferri", "Marouan Mizmizi", "Umberto Spagnolini", "Athina Petropulu" ]
eess.SP
[ "eess.SP" ]
Optimized Waveform Design for OFDM-based ISAC Systems Under Limited Resource Occupancy Silvia Mura, Member, IEEE, Dario Tagliaferri, Member, IEEE, Marouan Mizmizi, Member, IEEE, Umberto Spagnolini, Senior Member, IEEE, and Athina Petropulu, Fellow, IEEE This work was supported by the European Union under the Italian National Recovery and Resilience Plan (NRRP) of NextGenerationEU, partnership on “Telecommunications of the Future” (PE00000001 - program “RESTART”) S. Mura. D. Tagliaferri, M. Mizmizi, U. Spagnolini are with the Department of Electronics, Information and Bioengineering (DEIB) of Politecnico di Milano, 20133 Milan, Italy (e-mail: [silvia.mura, dario.tagliaferri, marouan.mizmizi, umberto.spagnolini]@polimi.it U. Spagnolini is Huawei Industry Chair A. Petropulu is with Rutgers, the State University of New Jersey, NJ 08854, United States (e-mail: athinap@rutgers.edu). June 2024 =============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== § ABSTRACT The sixth generation (6G) of wireless networks introduces integrated sensing and communication (ISAC), a technology in which communication and sensing functionalities are inextricably linked, sharing resources across time, frequency, space, and energy. Despite its popularity in communication, the orthogonal frequency-division multiplexing (OFDM) waveform, while advantageous for communication, has limitations in sensing performance within an ISAC network. This paper delves into OFDM waveform design through optimal resource allocation over time, frequency, and energy, maximizing sensing performance while preserving communication quality. During quasi-normal operation, the Base Station (BS) does not utilize all available time-frequency resources, resulting in high sidelobes in the OFDM waveform's ambiguity function as well as decreased sensing accuracy. To address these latter issues, the paper proposes a novel interpolation technique using matrix completion via Schatten p-quasi norm approximation, which requires fewer samples than the traditional nuclear norm for effective matrix completion and interpolation. This approach effectively suppresses sidelobes, enhancing sensing performance. Numerical simulations confirm that the proposed method outperforms state-of-the-art frameworks, such as standard complaint resource scheduling and interpolation, particularly in scenarios with limited resource occupancy. 
Integrated sensing and communication, 6G, waveform design
§ INTRODUCTION 6G is expected to be the first wireless generation to massively integrate radar sensing as a service thanks to new frequency bands (millimeter-wave (mmWave), 30-300 GHz) and sub-THz (> 100 GHz), as well as the use of massive antenna arrays. Radar systems are widely utilized for various military and civilian applications, such as remote sensing, infrastructure monitoring, and driving assistance <cit.>, but they operate on dedicated spectrum portions to avoid interference. As 6G wireless networks demand large-scale and ubiquitous integration of radio sensing, equipping the communication infrastructure with standalone radars is not viable, as it would represent an unsustainable waste of hardware resources. In this regard, integrated sensing and communication (ISAC) systems emerged as a solution to the aforementioned problem, employing a single waveform for both communication and sensing functionalities over the same frequency/time/space and hardware resources <cit.>. Designing ISAC waveforms is challenging due to the different performance indicators for the two functionalities. Communication prioritizes reliable and high-capacity data transfer and orthogonal frequency division multiplexing (OFDM) waveforms to address frequency selective fading. On the other hand, radar systems focus on target detection and localization with sensing-optimal waveforms such as frequency-modulated continuous waveform (FMCW). To tackle the different requirements, a typical method for ISAC waveform is designed to optimize one functionality (either communication or sensing), while constraining the other to meet a certain quality of service (QoS), while there are also methods that consider waveforms that can trade-off the performance of one function for the other.
This study considers a communication-centric approach to ISAC design, where communication is the main goal and sensing is a secondary objective <cit.>. In the following, we review the state of the art of waveform design for ISAC. §.§ Literature survey on ISAC waveform design Recent research in ISAC waveform design covers various domains such as space, frequency, and time. Information-theoretical approaches seek to bridge information and estimation theories. The ISAC waveform is designed to balance the maximization of channel capacity with the minimization of the Cramér-Rao bound (CRB) for estimation error of key sensing parameters<cit.>. Useful inner and outer bounds on rate-CRB trade-off curve are reported. From a more practical perspective, recent studies in <cit.> focused on developing a single waveform that utilizes either space or time-frequency resources. In terms of spatial domain waveform design, initial ISAC efforts are focused on optimizing the beampattern across transmitting (Tx) antennas aiming at creating a beampattern suitable for both communication and sensing <cit.>. Waveform design along time and frequency (and over the dual delay and Doppler (DD) domain) represented a major effort in the ISAC literature. Enhancing conventional OFDM waveform to improve sensing capabilities represents a cost-effective ISAC solution, ensuring retrofitting with 3GPP standards. Several studies, including <cit.>, addressed OFDM-based ISAC design stemming from the standard-compliant 3GPP OFDM waveform. The seminal work in <cit.> was the first to suggest a signal processing algorithm for an OFDM-based radar. Work <cit.> analyzes the ISAC performance capabilities of the 5G OFDM waveform, considering fully digital arrays and multi-beam design to split the spatial resources between communication and sensing. Work <cit.> suggests splitting OFDM subcarriers between radar and communication functionalities, where the performance trade-off between the two is implicit in the splitting ratio. Yet, incorporating subcarriers solely for radar function demands energy consumption that might be circumvented through a well-tailored allocation strategy catering to communication and sensing needs. Conversely, the OFDM waveform proposed in <cit.> balances communication efficiency and sensing performance by employing shared and private subcarriers. Maximizing the communication rate involves using all subcarriers as shared, whereas allocating more private subcarriers enhances sensing capabilities at the cost of the communication rate. The authors of <cit.> propose super-resolution range and velocity estimators for OFDM-based ISAC systems. The work in <cit.> proposes three power minimization-based OFDM radar waveform designs for the coexistence between different radar and communication terminals on the same spectrum. A different power optimization method based on mutual information is explored in <cit.>, which devises the power allocation strategy for communication-centric and radar-centric ISAC systems. The work in <cit.> considers the problem of reducing the probability of a wrong estimate of the range/velocity of a target due to a non-optimal input statistical distribution of communication symbols. The authors of <cit.> employ information-theoretic metrics for communication and sensing channels to design the OFDM ISAC waveform. 
In <cit.> the authors consider the optimal resource allocation over time and frequency in an OFDM-based ISAC system, based on the proper minimization of delay and Doppler CRBs under communication constraints. Further constraints are set on the ambiguity function of the Tx signal, such that the sidelobes are kept within an acceptable level. All the previously mentioned studies focused on OFDM-based ISAC scenarios where all the time-frequency resources can be freely allocated. However, the waveform design approach (as well as the ISAC sensing algorithms) is markedly different in the case of underutilized resources (i.e., with a resource occupancy factor (ROF) < 100%). In practical applications, the ROF is rarely near 100%, except in the case of severe traffic congestion in the network, as outlined in the 3GPP standard <cit.> and corroborated by spectrum occupancy measurements campaigns <cit.>. Generally, the base station (BS) is designed to manage peak traffic, but for most of the time, it serves moderate traffic, which can result in low ROF levels. Low ROFs result in significant sidelobes within the ambiguity function, as depicted in Fig.<ref>, that detrimentally affect sensing capabilities, calling for proper countermeasures <cit.>. Works attempting to achieve low sidelobes of the ambiguity function with limited resources are in <cit.>. The work <cit.> addresses practical considerations and challenges regarding delay/Doppler estimation using the 5G OFDM waveform with unused resources and it introduces a linear interpolation technique to reconstruct the sensing channel, from which to estimate the delay and Doppler of targets. A leap forward has been made in <cit.>, where the authors propose to fill the empty communication subcarriers with sensing pilots (i.e., radar subcarriers). The power and phase of radar subcarriers are optimized by minimizing the CRB on the delay and Doppler estimation for a single target, while limiting the peak-to-average power ratio. Our previous work <cit.> proposes to superpose to the standard-compliant time-frequency OFDM signal (with a variable number of occupied resources) a purposely designed low-power sensing signal with the desired ambiguity function. More recently, orthogonal time-frequency-space (OTFS) modulation has been investigated for ISAC purposes. Unlike OFDM, OTFS places data symbols in the DD domain, addressing issues in doubly-selective channels <cit.>. However, integrating OTFS into current 3GPP standards requires substantial modification due to its burst processing of consecutive OFDM symbols, which conflicts with the low latency requirements of many 6G services <cit.>. §.§ Contributions In light of the aforementioned literature, this work focuses on ISAC waveform design over time, frequency, and energy under limited resource occupancy constraints. We substantially extend our previous work <cit.>, where the waveform design is considered only across time and frequency. Herein, we introduce a further degree of freedom, (the energy allocation), and we also detail the sufficient conditions for a reliable sensing channel interpolation technique, guided by the isometric condition and the relative well-conditionedness property. Furthermore, the impact of the chosen p-value for the Schatten-p quasi-norm interpolation is examined. 
The main contributions can be summarized as follows: * We present a novel ISAC waveform design for OFDM-based systems that, unlike state-of-the-art methods, considers limited frequency-time resource occupancy, closely mimicking the resource usage of realistic applications. The design is achieved through an optimization problem that minimizes the weighted sum of CRBs for delay and Doppler estimation in the general case of two coupled targets, while adhering to achievable rate and time-frequency resource occupation constraints. Two waveform design method are proposed: (i) waveform design by time and frequency resource allocation for fixed energy spectral density, and (ii) waveform design over time, frequency, and energy. The advantages and disadvantages of both approaches are thoroughly discussed. * In contrast to the state-of-the-art methods, which constrain the sidelobe level within the waveform optimization process, we address the sub-optimal ambiguity function issue arising from low ROF by establishing a framework for delay-Doppler parameter estimation based on sensing channel interpolation. Initially, the ISAC waveform is designed according to CRB minimization. Then, the sensing channel is estimated using a maximum likelihood approach over the available time-frequency resources. Finally, the sensing channel is interpolated via Schatten p-quasi norm matrix completion to minimize the sensing channel rank. The choice of the Schatten p-quasi norm is specifically targetet to the considered interpolation problem, as it requires fewer samples compared to the traditional nuclear norm. The conditions for the unique recovery of the sensing channel are aso provided. * A comparative analysis evaluates the proposed ISAC waveform performance against conventional OFDM ones, including standard-compliant random resource scheduling and contiguous scheduling. Benchmarks operate with a fixed ROF and constant energy per resource and employ linear interpolation as outlined in <cit.>. The proposed ISAC waveform significantly outperforms the benchmarks, demonstrating 6× and 14× CRB gains under limited ROF. Moreover, under these conditions, the proposed approach successfully achieves CRBs for delay and Doppler estimation at high signal-to-noise ratios (SNR). In contrast, the method described in <cit.> fails due to under-sampling. Organization: The paper is structured as follows: Section <ref> introduces the system model, while Section <ref> explains the communication and sensing performance metrics for waveform design. Section <ref> focuses on time and frequency waveform design, and Section <ref> outlines the general time-frequency-energy method. Sensing channel interpolation and parameter estimation are discussed in Section <ref>, and simulation results are presented in Section <ref>. Finally, Section <ref> concludes the paper. Notation: The paper employs the following notation: Bold uppercase and lowercase letters represent matrices and column vectors, respectively. The ij-th entry of matrix 𝐀 is denoted as [𝐀]_ij. Transposition, conjugate transposition, and L-quasi norm of matrices are represented by 𝐀^T, 𝐀^H, and |𝐀|_L, respectively. The element-wise product of matrices is denoted by ⊙. diag(𝐀) extracts the diagonal of matrix 𝐀, while vec(𝐀) represents vectorization by columns and vec^-1(·) denotes the inverse operation. 1_N is a column vector with N entries equal to one. 𝐚∼𝒞𝒩(μ,𝐂) denotes a circularly complex Gaussian random variable with mean μ and covariance 𝐂. 
𝔼[·] is the expectation operator, and ℝ, ℂ, and 𝔹 denote the sets of real, complex, and Boolean numbers, respectively. δ_n represents the Kronecker delta, where δ_n-n' = 1 only if n = n'. § SYSTEM MODEL We consider the ISAC system illustrated in Fig. <ref>, where the primary task of the multiantenna BS is to serve K single antenna users' equipment (UEs) while simultaneously sensing the environment. For the sake of simplicity, the UEs are considered as the exclusive targets within the scenario. However, it's worth noting that the design principles herein can be extended to scenarios comprising multiple targets, whether separate or conjoined with the UEs. This work centers on the design of the ISAC waveform in the time-frequency domain, thereby implying that any spatial precoding/decoding at the transmitting/receiving antennas can be easily included in the approach. The ISAC BS utilizes an OFDM waveform where the time and frequency resources designated for downlink communication and sensing tasks are partitioned into M subcarriers, spaced by Δ f, and N time slots (OFDM symbol duration T = 1/Δ f). The overall bandwidth is B = M Δ f, while the duration of the ISAC burst is NT. The Tx signal over time and frequency is represented by a matrix 𝐗∈ℂ^M × N, whose mn-th element is [𝐗]_mn = [Σ⊙𝐒]_mn = σ_mn s_mn, where [Σ]_mn= σ_mn∈ℝ_+ is the square root of the allocated energy to the mnth communication symbol, denoted as [𝐒]_mn=s_mn∈ℂ. The symbol s_mn is drawn from an arbitrary constellation with unitary energy such that 𝔼[s_mn]=0, 𝔼[|s_mn s_m'n'|^2]= δ_m-m'δ_n-n', thus the total energy of the Tx signal is E = 𝐗_F^2 = Σ_F^2. §.§ Received signal at the ISAC BS The received sensing signal matrix 𝐑∈ℂ^M × N at the BS in the time-frequency domain is 𝐑 = 𝐗⊙𝐇_s + 𝐖, where 𝐇_s ∈ℂ^M × N represents the sensing channel capturing the echos from the K UEs/targets and 𝐖∈ℂ^M × N gathers the noise samples within the frequency-time domain, such that [𝐖]_mn=w_mn∼𝒞𝒩(0, N_0 δ_m-m'δ_n-n') and it is statistically uncorrelated across time and frequency. The sensing channel 𝐇_s ∈ℂ^M × N pertaining to the mnth resource bin is [𝐇_s]_mn = ∑_k=1^Kβ_k e^j 2 π (ν_k n T-τ_k m Δ f), where (i) β_k∼𝒞𝒩(0,Ω^(k)_β) denotes the complex scattering amplitude associated with the k-th UE/target with Ω^(k)_β proportional to f_0^-2 R_k^-4 and contingent upon the carrier frequency f_0, the distance R_k between BS and kth UE/target, and the reflectivity of the target, (ii) τ_k=2 R_k/c corresponds to the propagation delay related to the k-th UE/target, (iii) ν_k= 2 f_0 V_k/c represents the Doppler shift arising from the radial velocity V_k associated with the k-th UE/target. Equations (<ref>) and (<ref>) hold valid under the assumption that the maximum delay, denoted as τ_max = max_k(τ_k), is constrained to be less than the duration of the employed cyclic prefix T_cp, ensuring τ_max≤ T_cp to enable unambiguous range estimation. §.§ Received signal at the UE The time-frequency received signal at the k-th UE within the mn-th frequency-time resource bin is 𝐘_k = 𝐗⊙𝐇_k + 𝐙, and the communication channel is defined as [𝐇_k]_mn =∑_q=1^Qα_q^(k) e^j 2 π( ν_q^(k) nT - mΔ f τ_q^(k)), where Q denotes the number of paths, considered uniform across all UEs for the sake of simplicity. Within each q-th path, α_q^(k)∼𝒞𝒩(0,Ω^(k)_q) represents the complex amplitude associated with the k-th UE. The parameters τ_q^(k) and ν_q^(k) signify the delay and Doppler shift, respectively, of the q-th path of the k-th UE. 
Unlike the sensing receiving signal 𝐑 at the BS, the communication channel cannot retain the authentic delay and Doppler shifts due to time-frequency synchronization carried out by the UE terminal. The additive noise denoted as z^(k)_mn∼𝒞𝒩(0, N_0 δ_m-m'δ_n-n'δ_k-ℓ), remains uncorrelated across UEs as well as across time and frequency. § DESIGN METRICS FOR ISAC WAVEFORM The paper aims to develop a dual-functional time-frequency waveform optimizing both sensing and communication capabilities. This section introduces metrics to evaluate the proposed waveform's performance in the ISAC context, categorized into sensing and communication metrics. §.§ Sensing Metrics The sensing performance is quantified by defining the CRB for the estimated delay and Doppler shift of the K UEs/targets. The derivation adheres to <cit.>. We consider, for simplicity, the estimation of delay and Doppler shifts only, while the scattering amplitudes {β_k}_k=1^K are known. This assumption does not limit the technical extent of the work (as the scattering amplitudes can be included in the CRB derivation) but it eases the analytical derivations. The CRB evaluation proceeds by vectorizing (<ref>) as follows: 𝐫 = vec(𝐑) = 𝐱⊙∑_k β_k ⊙ (𝐝_τ,k⊗𝐝_ν,k) + 𝐰, where 𝐝_τ,k = [e^jπ M Δ f τ_k ,...,1,...,e^-jπ (M-1)Δ f τ_k]^T 𝐝_ν,k = [e^-jπ N T ν_k ,...,1,...,e^jπ (N-1) T ν_k]^T. denote the delay and Doppler sensing channel responses of the kth target, while θ = [τ_1,...,τ_K,ν_1,....,ν_K]^T∈ℝ^2K× 1 denotes the vector of parameters to be estimated. The Fisher information matrix (FIM) is block partitioned: 𝐅 = [ 𝐅_τ 𝐅_τν; 𝐅^T_τν 𝐅_ν ] and the kℓth entry of each of the partitions is reported in (<ref>)-(<ref>), where 𝐞 = ∑_k 𝐞_k ∈ℝ_+^NM × 1 is the vector of overall energy allocated to every single resource, while 𝐞_k refers to the vector of allocated energy per resource for the kth UE. The overall energy is E=1^T 𝐞. In (<ref>)-(<ref>), 𝐧=[-N/2,...,N/2-1]^T, 𝐦=[-M/2,...,M/2-1]^T are the time and frequency index of the resource grid, while 𝐝_τ,kℓ = diag(𝐝_τ,k𝐝_τ,ℓ^H), 𝐝_ν,kℓ = diag(𝐝_ν,k𝐝_ν,ℓ^H) are the cross-coupled delay and Doppler channel responses, depending, respectively, on the delay difference τ_k-τ_ℓ and on the Doppler difference ν_k-ν_ℓ. Under the assumption of 𝐅_ν and 𝐅_τ being non-singular, the CRB for delay and Doppler estimation is: 𝐂_τ = (𝐅_τ-𝐅_ν,τ^T𝐅_ν^-1𝐅_ν,τ)^-1 𝐂_ν = (𝐅_ν-𝐅_ν,τ^T𝐅_τ^-1𝐅_ν,τ)^-1. In practical scenarios, delay and Doppler estimation of targets are considered decoupled if their differences exceed system resolution (|τ_2-τ_1| ≫Δτ =1/B and |ν_2-ν_1| ≫Δν =1/(NT)). In such cases, the estimation of one target is unaffected by others, and CRB for multiple targets reduces to that of a single target, achieving minimum value. However, targets are often coupled due to dense environments or limited resources as the ISAC system usually observes extended targets. While the waveform design problem considers a generic number of targets K, the focus here is on two coupled targets (K=2) in numerical results. Closed-form expressions for delay and Doppler CRBs for K=2 coupled targets are provided in Appendix <ref>. §.§ Communication Metrics The communication performance is quantified in terms of the achievable rate over each frequency-time resource. The SNR pertaining to the kth UE, ℓth resource is γ_k,ℓ = [𝐞]_ℓ | [𝐡_k]_ℓ|^2 /N_0 , where 𝐡_k = vec(𝐇_k) is the kth communication channel vector. 
We consider the average achievable rate as the communication metric for the waveform design method, yielding: η_k = 1/L∑_ℓ=1^Llog_2(1+γ_k,ℓ). Notice that (<ref>) assumes perfect channel knowledge at the ISAC BS side. This information usually comes from channel state information reporting from the UEs and affects the waveform design method, as detailed in Section <ref>. However, since the goal of this paper is to present a waveform design possibly independent of the individual realization of the communication channel, herein, we assume that the ISAC BS only knows the average channel gain over the resources, namely |H_k|^2 = 𝔼_α[𝐇_k _F^2]/L. In this way, the SNR per resource becomes γ_k,ℓ = [𝐞_k]_ℓ |H_k|^2 /N_0 and the time-frequency waveform is optimized on the mean and does not depend on the instantaneous channel realization. Of course, the proposed waveform design methods apply to any communication channel, by using (<ref>) for the individual realizations. § TIME-FREQUENCY OPTIMIZATION This section focuses on the time-frequency waveform design aimed at minimizing the CRBs for closely placed targets, taking into account practical levels of ROF. In particular, the ISAC waveform design aims to allocate a limited amount of time and frequency resources (μ < 1) to minimize a weighted average of delay and Doppler CRBs while maintaining the desired communication QoS for every UE. Primarily focused on the K=2 case, the procedure is generalizable. Resource allocation considers fixed per-resource energy, with potential extensions discussed in Section <ref>. Time-frequency resources for each UE are denoted as 𝐚_k ∈𝔹^L × 1 such that [𝐚_k]_ℓ = 1, if the ℓth resource is chosen, 0, otherwise. and Σ = σ1_M 1_N^T, thus the transmitted signal can be rewritten as 𝐗 = σ1_M 1_N^T ⊙𝐒⊙𝐀. 𝐀 = vec^-1(∑_k=1^K 𝐚_k) is the matrix of allocated resources over frequency and time, thus 𝐞_k = σ^2 𝐚_k. The overall Tx energy is E = σ^2 𝐀_F^2. The waveform design problem, solely concerning the selection of time-frequency resources under limited resource occupancy μ, can be formulated as in <cit.>: {𝐚_k}_k=1^Kminimize ϵ_τ tr(𝐂_τ(𝐚_k)/Δτ^2) + ϵ_νtr (𝐂_ν(𝐚_k)/Δν^2) s. t. 1/L∑_ℓ=1^Llog_2(1+γ_k,ℓ)≥η, ∀k, ∑_k=1^K [𝐚_k]_ℓ ≤1, ∀ℓ, ∑_k=1^K 1^T𝐚_k ≤μL, 1 ≤ℓ≤L, 1 ≤k ≤K . where 𝐂_τ(𝐚_k) ∈ℝ^K× K and 𝐂_ν(𝐚_k) ∈ℝ^K× K denote the CRBs on delay and Doppler estimation, respectively, that exhibit a non-linear dependence w.r.t. the allocated resources 𝐚_k, as detailed in Appendix <ref> for K=2. The cost function quasi normalizes CRBs by maximum delay and Doppler resolutions (Δ_τ=1/B and Δ_ν=1/(NT)) to ensure uniformity and introduces dimensionless weights (ϵ_τ and ϵ_ν). These weights influence the trade-off between delay and Doppler CRBs in optimization. The objective in (<ref>) minimizes the combined CRBs adjusted by ϵ_τ and ϵ_ν. The constraint in (<ref>) establishes the QoS requirement, expressed through a threshold on the achievable rate η in [bits/s/Hz] across all UEs. The communication SNR related to the kth UE and ℓth resource is γ_k,ℓ= σ^2 [𝐚_k]_ℓ |H_k|^2 /N_0. The achievable rate constraint ties resource quantity to UE selection, maintaining constant energy allocation per resource. Variations in target distances determine resource allocation, with targets experiencing higher pathloss assigned more resources. Constraint (<ref>) mitigates multi-user interference by assigning each resource to a single UE. Occupancy constraint (<ref>) limits total resource allocation, with μ<1 defining the maximum allowable fraction. 
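For a given candidate allocation, the communication-side quantities and the feasibility of the constraints just described can be checked directly; a minimal sketch (with made-up channel gains, rate threshold and occupancy factor) follows.

```python
import numpy as np

def comm_metrics(a, e, Hbar2, N0, eta_min, mu):
    """Check QoS, exclusivity and occupancy for a candidate allocation.
    a: (K, L) boolean masks, e: (K, L) per-resource energies,
    Hbar2: (K,) average channel gains |H_k|^2."""
    K, L = a.shape
    gamma = e * a * Hbar2[:, None] / N0               # per-resource SNR gamma_{k,l}
    eta = np.log2(1.0 + gamma).sum(axis=1) / L        # average achievable rate per UE
    exclusive = np.all(a.sum(axis=0) <= 1)            # each resource serves at most one UE
    occupancy = a.sum() <= mu * L                     # resource occupancy factor constraint
    return eta, (eta >= eta_min).all() and exclusive and occupancy

# Toy example: K = 2 UEs, L = 100 resources, 25% occupancy, constant energy per resource
K, L, mu, N0, sigma2 = 2, 100, 0.25, 1.0, 10.0
a = np.zeros((K, L), dtype=int)
a[0, :12] = 1                                         # UE 1 gets 12 resources
a[1, -13:] = 1                                        # UE 2 gets 13 resources
e = sigma2 * a
Hbar2 = np.array([1.0, 0.5])
eta, feasible = comm_metrics(a, e, Hbar2, N0, eta_min=0.3, mu=mu)
print("average rates:", eta, "constraints satisfied:", feasible)
```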
The optimization problem (<ref>) is non-convex, posing challenges for solution. Through auxiliary variables detailed in Appendix <ref>, convexity can be achieved. This results in a mixed-integer conic programming problem (MICP), given the binary nature of the model. MICP involves continuous and discrete variables, demanding computational resources for branch-and-cut (BnC) methods. To limit the BnC algorithm complexity, time and frequency resources are grouped into subchannels and time slots, which aligns with the 3GPP standard. <cit.>. Figures <ref> show waveform design across time and frequency domains. Figs. <ref> and <ref> depict resource allocation influenced by μ, with equal ϵ_τ = ϵ_ν = 0.5. The time-frequency resources are allocated at the grid boundaries to minimize CRB while adhering to communication constraints. Conversely, Fig. <ref> prioritizes minimizing Doppler CRB (ϵ_τ = 0.25, ϵ_ν = 0.75), favoring frequency axis resources. Extreme condition (ϵ_τ = 0, ϵ_ν = 1) in Fig. <ref> exclusively minimizes Doppler CRB, allocating all resources along the frequency axis. § JOINT TIME-FREQUENCY-ENERGY OPTIMIZATION While the previous section focused solely on waveform design in the time-frequency domain, this section introduces an additional variable: the allocated energy per resource. Consequently, unlike (<ref>), the ability to vary the allocated energy across time and frequency enhances the degrees of freedom available for waveform design. Therefore, in the following, we extend (<ref>) to account for a joint time-frequency-energy allocation. Now, 𝐞_k ∈ℝ^L × 1 denotes the per-resource energy allocated to the kth UE, while 𝐚_k ∈𝔹^L × 1 denotes the Boolean time-frequency allocation vector. The joint time-frequency-power optimization is formulated as follows: {𝐞_k,𝐚_k}_k=1^Kminimize ϵ_τ tr(𝐂_τ/Δτ^2) + ϵ_νtr (𝐂_ν/Δν^2) s. t. 1/L∑_ℓ=1^Llog_2(1+γ_k,ℓ)≥η, ∀k, ∑_k=1^K [𝐚_k]_ℓ ≤1, ∀ℓ, ∑_k=1^K 1^T 𝐚_k ≤μL, ∑_k=1^K 1^T 𝐞_k ≤E_max, 0 ≤[𝐞_k]_ℓ ≤[𝐚_k]_ℓ σ_max^2, ∀ℓ,∀k, |[𝐞_k]_ℓ-[𝐞_k]_ℓ+1|≤ΔT, ∀ℓ, ∀k, 1 ≤ℓ≤L, 1 ≤k ≤K . where the dependence of the CRBs from the optimization variables {𝐞_k,𝐚_k}_k=1^K is omitted for simplicity. Constraints (<ref>)-(<ref>) are similar to (<ref>)-(<ref>), except that the QoS involves a non-constant energy in the SNR γ_k,ℓ = [𝐞_k ⊙𝐚_k]_ℓ |H_k|^2/N_0. Constraint (<ref>) expresses the overall energy budget (on the whole time-frequency grid), while (<ref>) establishes a relationship between the two sets of optimization variables, ensuring that energy allocation per resource [𝐞_k]_ℓ > 0 only occurs when the resource is actively allocated ([𝐚_k]_ℓ = 1). In this context, σ_max^2 represents the maximum energy per time-frequency resource. Constraint (<ref>) regulates energy smoothness, limiting energy gradient across time and frequency to Δ. Including (<ref>) aims to reduce sidelobes in the ambiguity function of the ISAC waveform and meets requirements for gradual Tx power changes in power amplifiers. The choice of Δ impacts estimation accuracy and QoS elaborated in Section <ref>. The problem in (<ref>) remains an MICP. Unlike the problem in (<ref>), it includes the linear constraints of (<ref>)-(<ref>). However, incorporating these constraints does not significantly affect the problem solver or its complexity. We can still tackle it using BnB by adjusting the granularity of the resources to be allocated and appropriately scaling the linear constraints in (<ref>)-(<ref>). 
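The additional energy-domain constraints of the joint design (total budget, per-resource cap tied to the allocation, and gradient smoothness) are simple to verify for a candidate solution; a small sketch with placeholder values is given below.

```python
import numpy as np

def energy_constraints_ok(e, a, E_max, sigma2_max, Delta):
    """Check the energy constraints of the joint time-frequency-energy design:
    overall budget, energy only on allocated resources, and gradient smoothness."""
    budget = e.sum() <= E_max                              # overall energy budget
    cap = np.all(e <= a * sigma2_max)                      # 0 <= e_l <= a_l * sigma_max^2
    smooth = np.all(np.abs(np.diff(e, axis=1)) <= Delta)   # |e_l - e_{l+1}| <= Delta
    return budget and cap and smooth

# Toy check with a linearly tapered energy profile on allocated resources
L = 20
a = np.ones((1, L), dtype=int)
e = np.linspace(2.0, 1.0, L)[None, :] * a
print(energy_constraints_ok(e, a, E_max=40.0, sigma2_max=2.5, Delta=0.1))
```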
Figure <ref> showcases the waveform design spanning energy, time, and frequency domains, with Fig.<ref> and <ref> depicting the influence of maximum energy gradient Δ while maintaining balanced weights ϵ_τ = ϵ_ν = 0.5. The objective is to minimize the CRB while adhering to communication constraints and energy gradient constraints, favoring higher power levels towards frequency-time grid edges, guided by Δ. Conversely, Fig. <ref> prioritizes minimizing the Doppler CRB over the delay CRB (ϵ_τ = 0.25, ϵ_ν = 0.75), amplifying energy levels along the frequency axis. Fig. <ref> depicts an extreme condition emphasizing Doppler CRB reduction (ϵ_τ = 0, ϵ_ν = 1), allocating high energy levels exclusively along the frequency axis, with extremely low energy along the time axis. § SENSING CHANNEL INTERPOLATION AND PARAMETERS ESTIMATION This section introduces a novel framework for sensing channel estimation and interpolation, effectively addressing the issue of high sidelobe levels resulting from the low ROF. After the waveform design over time, frequency, and possibly energy, the ISAC BS estimates the delay and Doppler shifts of the K UEs/targets. The estimation process is grounded in the maximum likelihood (ML) framework. This methodology involves leveraging the received signal (<ref>), to formulate the ML estimation as (τ,ν) = τ,νargmin(𝐫 - 𝐱⊙𝐡_s(τ,ν )^2_2), where vectors τ = [τ_1,...,τ_K]^T and ν = [ν_1,...,ν_K]^T represent the delays and Doppler shifts to be determined. While ML estimation could involve an exhaustive search across the delay and Doppler shift domains, practical systems employ a suboptimal approach through three sequential steps: (i) estimate the sensing channel 𝐇_s, (ii) employ an DFT-IDFT transform to map the estimated sensing channel 𝐇_s from the frequency-time domain to the delay-Doppler domain and (iii) estimate (τ,ν) via peak searching. The time-frequency sensing channel matrix 𝐇_s is first estimated by the least squares (LS) approach over the allocated resources as: [𝐇_s]_mn = [𝐑]_mn⊙ [𝐗]^-1_mn for [𝐀]_mn=1 0 for [𝐀]_mn=0 where 𝐗 is the waveform by solving either (<ref>) or (<ref>). The channel is subsequently mapped into the delay-Doppler domain using a DFT-IDFT pair, outlined as follows: 𝐇_s^DD = Θ_M 𝐇_s Θ_N^H where Θ_M∈ℂ^M× M and Θ_N∈ℂ^N× N are DFT matrices such that Θ_M_F=√(M) and Θ_N_F=√(N). For full resource occupancy(μ=1), the delay-Doppler sensing channel matrix exhibits a linear combination of scaled and shifted sinc functions (the expression is reported in (<ref>)), narrowing with resolution. However, for μ < 1, the sensing channel's expression differs markedly, displaying higher sidelobes due to unused resources, impacting target discrimination and resolution. Ghost peaks in the channel matrix are influenced by resource allocation, affecting the system's effective resolution, especially for closely spaced or coupled targets. Consequently, the periodogram approach becomes impractical[The same considerations can be drawn by inspection of the ambiguity function of the Tx signal 𝐗]. To mitigate this issue in the case of μ < 1, we propose a technique involving the sensing channel interpolation across frequency and time, employing matrix completion. The formulation of the matrix completion problem is as follows <cit.>: 𝐇_sminimize rank(𝐇_s) s. t. [𝐇_s]_mn = [𝐇_s]_mn for [𝐀]_nm=1. The linear constraint in (<ref>) imposes that the entries of the optimization variable 𝐇_s must be equal to the entries of the estimated sensing time-frequency channel 𝐇_s. 
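The estimation pipeline described above (per-bin LS channel estimate on the allocated resources, DFT-IDFT mapping to the delay-Doppler domain, and peak search) can be sketched as follows. The toy example uses a single synthetic scatterer and full occupancy; the bin at which the peak appears depends on the sign convention of the transform pair.

```python
import numpy as np

def ls_estimate_and_dd_map(R, X, A):
    """LS channel estimate on allocated resources, then map to delay-Doppler."""
    M, N = R.shape
    H_hat = np.zeros((M, N), dtype=complex)
    mask = A.astype(bool)
    H_hat[mask] = R[mask] / X[mask]               # element-wise LS on allocated bins
    # Unitary DFT matrices (||Theta_M||_F = sqrt(M), ||Theta_N||_F = sqrt(N))
    Theta_M = np.exp(-2j * np.pi * np.outer(np.arange(M), np.arange(M)) / M) / np.sqrt(M)
    Theta_N = np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N) / np.sqrt(N)
    H_dd = Theta_M @ H_hat @ Theta_N.conj().T     # delay-Doppler domain channel
    return H_hat, H_dd

def peak_search(H_dd, K):
    """Pick the K largest-magnitude bins of the delay-Doppler map."""
    idx = np.argsort(np.abs(H_dd).ravel())[::-1][:K]
    return np.unravel_index(idx, H_dd.shape)

# Toy demo: one synthetic scatterer, full occupancy, noiseless
M, N = 32, 16
m, n = np.arange(M)[:, None], np.arange(N)[None, :]
X = np.ones((M, N), dtype=complex); A = np.ones((M, N))
H = np.exp(1j * 2 * np.pi * (3 * n / N - 5 * m / M))   # "delay bin 5, Doppler bin 3"
_, H_dd = ls_estimate_and_dd_map(X * H, X, A)
print(peak_search(H_dd, 1))   # appears at (M-5, N-3) under this sign convention
```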
The endeavor to minimize the matrix rank while complying with linear (affine) constraints represents an NP-hard challenge frequently tackled via nuclear quasi norm minimization. Despite its convexity, this method offers a too approximate solution for channel rank estimation, making it inadequate for the ROFs detailed in this paper. This inadequacy arises due to its demand for a greater number of samples to facilitate interpolation, as highlighted in <cit.>. Alternatively, problem (<ref>) can be approached by relaxing the objective function using the Schatten p-quasi norm, defined as: 𝐇_s_p^p = (∑_r^min(M,N)λ_r^p)^1/p where p ∈ (0,1], and λ_r represents the r-th singular value of 𝐇_s. The p-Schatten quasi norm presents a flexible balance between convexity and rank approximation within matrix optimization. By setting p = 1, the Schatten quasi norm reduces to the nuclear quasi norm (sum of eigenvalues) resulting in a convex problem, while p → 0 provides a more accurate estimator of matrix rank and results in a non-convex problem. The p-value allows adjustment between computational complexity and recovery accuracy in problems concerning matrix rank minimization. Consequently, the matrix completion problem can be reformulated as: 𝐇_sminimize 𝐇_s_p^p s. t. ||[𝐇_s]_𝐀-[𝐇_s]_𝐀||_2 ≤ϵ . with ϵ is a small positive constant and [𝐇_s]_𝐀||_2 are the channel samples, defined where [𝐀]_mn =1. We denote as 𝐇_s the solution of the optimization problem in (<ref>) and in the further subsections the conditions for solving the problem are discussed. §.§ Conditions for solving (<ref>) This section provides the conditions for solving the Schatten p-quasi norm matrix completion problem. Despite the non-convex nature of problem (<ref>), numerically efficient algorithms have been proposed in <cit.>. The algorithm used for solving (<ref>) involves non-convex matrix completion via the iterative singular value thresholding algorithm (ISTVA) in <cit.>. A proper selection of the parameter p allows trading between the capacity to effectively recover the sensing channel rank (for p→ 0) and the computational complexity (lower for p→ 1). The matrix completion error e = ||𝐇_s-𝐇_s||_2 is computed by evaluating the quasi norm between the real sensing channel 𝐇_s and the solution 𝐇_s of the problem in (<ref>) by varying the resource occupancy parameter μ and the p-value, as depicted in Fig.<ref>. A low value of p enables a better reconstruction of the matrix, but it may suffer from relatively higher computational complexity because of the singular value decomposition (SVD) computed at each iteration and more iteration steps to reach the stopping criteria<cit.>. Empirical simulations indicate that an error of around -25 dB yields satisfactory interpolation performance when noise is not considered. Thus, with μ values between 0.2 and 0.4 and a p-value of 0.1, the algorithm can achieve effective interpolation. A recognized criterion ensuring the correct retrieval of the sensing channel matrix 𝐇_s with minimum rank and in case of deterministic sampling is the satisfaction of the isomeric condition and relative well conditionedness<cit.>. The isomeric condition interlaces the rank and the coherence of the matrix 𝐇_s with the specific locations and quantity of the observed entries. 
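A simplified singular-value-thresholding style solver for the Schatten p-quasi-norm completion problem is sketched below. It applies a weighted threshold obtained by locally linearizing the quasi norm and re-imposes the observed entries at every iteration; the ISVTA variant cited above may differ in its exact update and stopping rule, so this is only an illustrative stand-in.

```python
import numpy as np

def schatten_p_completion(H_obs, mask, p=0.5, lam=0.1, n_iter=200):
    """Iterative singular-value-thresholding-style completion (simplified sketch).
    H_obs: observed entries (zeros elsewhere), mask: boolean sampling matrix."""
    H = H_obs.copy()
    for _ in range(n_iter):
        U, s, Vh = np.linalg.svd(H, full_matrices=False)
        w = p * np.maximum(s, 1e-12) ** (p - 1.0)   # weights from linearizing sum_r s_r^p
        s_thr = np.maximum(s - lam * w, 0.0)        # weighted soft threshold
        H = (U * s_thr) @ Vh
        H[mask] = H_obs[mask]                       # keep observed entries fixed
    return H

# Toy example: rank-2 complex matrix observed on 30% of its entries
rng = np.random.default_rng(2)
M, N, r = 40, 30, 2
H_true = (rng.standard_normal((M, r)) + 1j * rng.standard_normal((M, r))) @ \
         (rng.standard_normal((r, N)) + 1j * rng.standard_normal((r, N)))
mask = rng.random((M, N)) < 0.3
H_obs = np.where(mask, H_true, 0)
H_rec = schatten_p_completion(H_obs, mask, p=0.5, lam=0.05)
err = 20 * np.log10(np.linalg.norm(H_rec - H_true) / np.linalg.norm(H_true))
print(f"relative completion error: {err:.1f} dB")
```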
In particular, the matrix 𝐇_s is called 𝐀-isomeric if the submatrix related to the sampling 𝐀⊆{1,...,M}×{1,...,N} is defined such that rank([𝐇_s]_𝐀) = rank(𝐇_s), Moreover, the matrix 𝐇_s is called 𝐀/𝐀^T-isomeric if 𝐇_s is 𝐀-isomeric and 𝐇_s^T is 𝐀^T-isomeric. To solve the matrix completion problem in (<ref>) it is necessary to verify that 𝐇_s is 𝐀/𝐀^T-isomeric. Whenever this isomeric condition is violated, there exist infinitely many solution matrices that can fit the observed entries. In general, isomerism typically ensures that the sampled sub-matrices [𝐇_s]_𝐀 and [𝐇_s]_𝐀^T are not rank-deficient, but there is no guarantee that these sub-matrices are well-conditioned. To compensate for this weakness, the hypothesis of relative well conditionedness, which encourages the smallest singular value of the sampled sub-matrices to be far from 0, must be satisfied. In particular, the 𝐀/𝐀^T-relative condition number is defined as in <cit.> κ = min( κ_𝐀, κ_𝐀^T) with κ_𝐀 = [𝐇_s]_𝐀(𝐇_s)^†^2 measuring how much information of a matrix 𝐇_s is contained in the sampled sub-matrix [𝐇_s]_𝐀. The condition number κ_𝐀^T is computed in the same way by considering the matrix 𝐇_s^T. To ensure that the matrix 𝐇_s is recoverable, κ must be sufficiently high, thus ensuring the relative well-conditionedness property. Figure <ref> illustrates the attained relative condition number relative to the resource occupancy parameter μ. As the condition number increases sufficiently, the algorithm demonstrates improved performance. Specifically, when p = 0.1, the associated condition number required to achieve an error e smaller than -20 dB is approximately 0.3, whereas it becomes higher when p = 0.5. §.§ Example of channel interpolation In this subsection, we demonstrate channel interpolation using varying resource occupancy ratios μ. The interpolation relies on the optimized waveform discussed in Sect <ref>. Figure <ref> provides empirical evidence that the suggested interpolation method effectively diminishes sidelobes when examining the estimated quasi normalized sensing channel impulse response (CIR). The CIR is displayed along the delay axis both before and after employing matrix completion interpolation. At a low ROF (μ = 0.25), the sidelobe level becomes notably prominent, potentially introducing inaccuracies in delay estimations. Through the proposed interpolation method, there is an observed reduction of sidelobe levels by a factor of 2 × under conditions of low resource occupation. This reduction facilitates an impulse response of the channel that approaches the optimal state with complete bandwidth occupancy. It is important to remark that resource clustering at bandwidth edges minimizes CRBs, but it negatively impacts on the interpolation performance. However, it is possible to balance the resource distribution and CRB minimization to achieve good interpolation performance, especially with a ROF smaller than the ones analyzed in the paper (μ < 0.25). For instance, considering that 90% of the resources are allocated as in (<ref>), while the remaining 10% is periodically distributed across the bandwidth leads to a sidelobe reduction of around 12 % for μ = 0.25 and 26% for μ = 0.5, when no interpolation is performed. When interpolating through (<ref>) it achieves the full bandwidth occupation (μ =1) performance. However, this leads to an increase of the CRBs of 3% and 7% when μ = 0.25 and μ = 0.5, respectively. 
§ NUMERICAL RESULTS This section evaluates the proposed ISAC waveform performance compared to two benchmarks: the standard-compliant random resource scheduling and the random scheduling with contiguous resources<cit.>. Both benchmarks operate under a fixed occupancy factor (μ) and maintain a constant energy per resource across time and frequency. The main difference lies in their allocation methods: the standard-compliant random scheduling assigns individual resources randomly, while the random contiguous scheduling assigns blocks of N_b contiguous resources. Both benchmarks utilize linear interpolation for filling empty resources in time and frequency domains, as outlined in <cit.>. The two aforementioned benchmarks pertain to bandwidths that are not fully occupied, where μ < 1. To evaluate the proposed waveform under full bandwidth occupancy (μ = 1), we compare it with a waveform from <cit.>, tailored for scenarios with two closely positioned targets. A comparison is made between the two proposed waveforms: one with constant energy across time and frequency (<ref>), and the other that jointly optimizes energy-time-frequency (see (<ref>)). This analysis highlights the benefits of integrating energy considerations into waveform design. The simulation parameters, unless otherwise indicated, are shown in Table <ref>. The first result encompasses the CRB gain for estimating the delay between two closely located targets, defined as G = tr(𝐂_τ, b)/tr(𝐂_τ, opt) , where 𝐂_τ, b represents the CRB matrix obtained via state-of-the-art methodologies in <cit.> and <cit.>, and (<ref>), while 𝐂_τ, opt is the CRB matrix obtained by (<ref>). A gain G>1 signifies a decrease in the CRB when employing the proposed time-frequency-energy optimized waveform compared to the benchmarks, indicating practical utility. Doppler estimation CRB yields analogous outcomes and is omitted here for simplicity. Figure <ref> shows the trend of the CRB gain G in linear scale by varying the inter-delay spacing between the two targets, defined as |τ_2 - τ_1|/Δτ (quasi normalized to the delay resolution Δτ) for different bandwidth occupancy μ = 0.25, 0.5 and 1 by considering Δ = 0 dB. With the bandwidth occupancy factor μ = 0.25 in Fig <ref>, the proposed ISAC waveform outperforms the standard-compliant random and random contiguous scheduling by obtaining a CRB gain 6 × and 14×, respectively, when the two targets are closely placed. This underscores the effectiveness of the proposed waveform in discerning closely spaced targets. Moreover, a notable enhancement compared to both standard-compliant random and random contiguous scheduling, approximately 7 × and 5× respectively, is observed at μ = 0.5 in Fig. <ref>. In contrast, Fig. <ref> denotes that the approach in <cit.> experiences limitations in minimizing the CRB due to the constraint in the ambiguity sidelobe levels, which forces the time-frequency resources to be more evenly spread within the bandwidth. Ultimately, the comparison between the energy-frequency-time optimized waveform (obtained by (<ref>)) and the waveform optimized with constant energy (obtained by (<ref>)) is conducted. The average enhancements of a factor of 4 (μ= 0.25 in Fig.<ref>), 3 (μ= 0.5 in Fig.<ref>), and 2 (μ= 1 in Fig.<ref>) respectively underscore the clear advantage of integrating energy optimization in defining the waveform. The oscillations visible in the graph are due to the ambiguity sidelobes. 
Figure <ref> illustrates the root mean square error (RMSE) for the delay estimation achieved through the proposed methods and benchmark approaches, considering μ = 0.25, μ = 0.5 and μ = 1 for total energy budget E_tot = 43T dBmJ. The results of the proposed waveform in (<ref>) are obtained with Δ = -15 dB in the case of μ=0.25,0.5 and Δ = -15,0 dB if μ =1, respectively. The proposed waveform (both (<ref>) and (<ref>)) notably enhances the CRB for delay estimation. As the average sensing SNR in the DD domain, defined as γ_s = [𝐞_k ⊙𝐚_k]_ℓ |H_s|^2 MN/N_0 with |H_s|^2 = 𝔼_β[𝐇_s _F^2]/L, increases, the delay error approaches the CRB. At μ = 0.25, the proposed interpolation method facilitates reaching the CRB at higher γ_s, thereby enhancing the performance as compared to standard random and random contiguous resource allocation methods employing the linear interpolation <cit.>. The random contiguous scheduling struggles to attain the CRBs, indicating a failure of linear interpolation in this context. Conversely, the random waveform, allocating resources individually, demonstrates satisfactory performance solely at μ = 0.5, ensuring adequate samples for linear interpolation. The numerical outcomes underscore the estimation enhancements provided by the proposed waveforms (both (<ref>) and (<ref>)) under stringent bandwidth occupancy constraints (μ = 0.25), whereas for μ = 0.5, standard random resource allocation with linear interpolation ensures commendable performance at low-medium levels of the SNR. However, for high sensing SNR levels, the proposed waveforms with constant and optimized energy improve the benchmarks. Similar observations apply to Figure <ref>, where the proposed time-frequency-energy optimized waveform in (<ref>) is compared with both the constant energy waveform and the approach outlined in <cit.>, opportunely adapted for the two targets scenario. The constant energy waveform demonstrates superior performance concerning RMSE, surpassing the method outlined in <cit.>. The proposed optimized waveform surpasses the performance of <cit.>, particularly in scenarios with high sensing SNR and a low value of Δ. Consequently, excessively high values of Δ can enhance the CRB but simultaneously lead to elevated sidelobe levels. These high sidelobe levels have the potential to obscure weaker targets, thereby diminishing the estimation performance. Similar outcomes are observed for Doppler estimation, but it is omitted here for brevity. The attainable SE is evaluated as depicted in Fig. <ref> for both the optimized waveform with Δ = -15 dB and benchmark cases, with E_tot = 𝐞^T1_NM = 43× T dBmJ by varying the average communication SNR γ defined as 𝔼_k,ℓ [ γ_k,ℓ ]. The analysis reveals that enhancing the energy allocation at the extremities of the bandwidth results in comparable performance to that of constant energy waveforms when considering μ =0.25, 0.5, as well as <cit.> for μ = 1 when considering Δ = -15 dB. The reduction in average SE resulting from an increase in the ROF μ occurs because, while maintaining the total energy constant, the SNR per resource bin diminishes with the ROF μ increase. The SE is further analyzed in Fig. <ref> by keeping constant the communication SNR γ, equal to 0 dB, while varying the energy gradient Δ, which directly influences the energy distribution within the proposed waveform. In particular, Fig. <ref> shows the trade-off between the CRB and SE through variation in the gradient parameter Δ. 
By fixing the total energy E_tot a notable trend is visible: an increase in the energy gradient yields a reduction in the delay CRB, consequently enhancing sensing capabilities. Conversely, this increase in the energy gradient induces a proportional decrease in SE, thereby compromising communication performance. This observation underscores the importance of fine-tuning the energy gradient parameter to achieve satisfactory performance for both sensing and communication purposes. § CONCLUSION This paper introduces a novel ISAC waveform designed to optimize both communication and sensing performance via delay and Doppler weighted CRB minimization CRB, with achievable rate constraints. Unlike state-of-the-art methods, this design incorporates the ROF to more accurately reflect realistic time-frequency bandwidth usage. In practices, the resources are not fully occupied and lead to significant sidelobes in the ISAC waveform's ambiguity function, which impact target detection. To address this issue, we employ an interpolation technique based on the Schatten p-quasi norm, which requires fewer samples than the traditional nuclear norm, leading to more effective sensing channel interpolation in case of low ROFs. Numerical results demonstrate the superiority of the proposed ISAC method over standard random resource scheduling, showing a CRB gain of 6× and 14× compared to random and random contiguous scheduling, respectively, under ROF constraint. Furthermore, our approach achieves the CRBs in high SNR, while conventional methods fail. As a further contribution, we also detail and discuss the effect of enforcing a smooth energy variation within the resource grid, leading to an improved data rate. § CRB ON DELAY AND DOPPLER FOR K = 2 TARGETS The CRB evaluation for the two targets case can be accomplished by using the matrix inversion lemma as 𝐂_τ = 𝐉_τ^-1, 𝐂_ν = 𝐉_ν^-1 where 𝐉_τ = 𝐅_τ-𝐅_ν,τ^T𝐅_ν^-1𝐅_ν,τ and 𝐉_ν = 𝐅_ν-𝐅_ν,τ^T𝐅_τ^-1𝐅_ν,τ represent the delay and Doppler Schur complements, respectively. By considering two closely spaced targets of similar reflectivity, i.e., |β_1|^2 ≈ |β_2|^2, the entries of the delay Schur complements are obtained as (<ref>), (<ref>), while the Doppler ones are obtained by replacing ζ_τ^(i,j) with ζ_ν^(i,j) and vice versa, with i,j ∈{1,2}. Moreover, [𝐉_τ]_1,1≈ [𝐉_τ]_2,2, [𝐉_ν]_1,1≈ [𝐉_ν]_2,2. Therefore [𝐂_τ]_1,1 = [𝐉_τ]_2,2/[𝐉_τ]_1,1[𝐉_τ]_2,2-([𝐉_τ]_1,2)^2 [𝐂_τ]_2,2 = [𝐉_τ]_1,1/[𝐉_τ]_1,1[𝐉_τ]_2,2-([𝐉_τ]_1,2)^2 The same procedure is applied to achieve [𝐂_ν]_1,1 and [𝐂_ν]_2,2. § RELAXATION OF PROBLEM (<REF>) AND (<REF>) The optimization problems specified in (<ref>) and (<ref>) contain a non-convex objective function, requiring manipulations to reformulate it into a conventional convex program. §.§ Objective Function: weighted sum of CRBs The weighted sum of the CRBs can be redefined in relation to the entries of the Schur complement in the following manner: ϵ_τ 2[𝐉_τ]_1,1/([𝐉_τ]_1,1)^2-([𝐉_τ]_1,2)^2 + ϵ_ν2 [𝐉_ν]_1,1/([𝐉_ν]_1,1)^2-([𝐉_ν]_1,2)^2. As the denominator impacts more than the numerator term, the CRB can be represented utilizing the subsequent upper bound: ϵ_τ2 x_τ/x_τ^2-j_τ^2 + ϵ_ν2 x_ν/x_ν^2-j_ν^2, where x_τ, j_τ are two auxiliary variables ∈ℝ^+ such that x_τ≤ [𝐉_τ]_1,1,j_τ≥ [𝐉_τ]_1,2. A new auxiliary variable t_τ is introduced such that the SOCP constraint is satisfied t_τx_τ≥j_τ^2. Similar constraints are imposed on the variables x_ν, j_ν and t_ν . For both the delay and Doppler variables, the objective function is expressed as: 2 ϵ_τ/x_τ-t_τ + 2 ϵ_ν/x_ν-t_ν. 
Since the minimization of the initial objective function corresponds to maximizing its inverse, the objective function results in ϵ_ν (x_τ - t_τ) + ϵ_ν(x_ν - t_ν)/4ϵ_τϵ_ν -s, where the variable s ∈ℝ^+ is defined such that s ≥ϵ_ν^2 (x_τ-t_τ)^2 + ϵ_τ^2 (x_ν-t_ν)^2/4 ϵ_τϵ_ν(ϵ_ν(x_τ- t_τ)+ ϵ_τ(x_ν- t_ν)). Hence, the optimization problem in (<ref>) can be reformulated as a MICP model as 𝐚_k, 𝐞_k,t_τ,t_ν, x_τ, x_ν,j_τ, j_ν, smax ϵ_ν (x_τ - t_τ) + ϵ_ν(x_ν - t_ν)/4ϵ_τϵ_ν -s s.t. (<ref>),(<ref>),(<ref>) ||[ √(2)j_τ; x_τ; t_τ ] ||_2 ≤x_τ + t_τ, ||[ √(2)j_ν; x_ν; t_ν ]||_2 ≤x_ν + t_ν, ||[ ϵ_τ(x_ν-t_ν); ϵ_ν(x_τ-t_τ); 2ϵ_τϵ_νs; ϵ_ν(x_τ-t_τ) +ϵ_τ(x_ν-t_ν) ] ||_2 ≤ 2ϵ_τϵ_ν s +ϵ_ν(x_τ-t_τ) +ϵ_τ(x_ν-t_ν), x_τ ≥t_τ, x_ν ≥t_ν, x_τ ≤[𝐉_τ]_1,1, x_ν ≤[𝐉_ν]_1,1, j_τ ≥[𝐉_τ]_1,2, j_ν ≥[𝐉_ν]_1,2. The problem expressed in (<ref>) can be reformulated analogously. §.§ Shur Complement Constraints The constraints delineated in (<ref>) and (<ref>) require mathematical computation. The former is rewritten as (𝐞^Tζ^(1,1)_τ-x_τ)(𝐞^Tζ^(1,1)_ν+𝐞^Tζ^(1,2)_ν) ≥ |𝐞^Tζ^(1,1)_τ,ν|^2 + |𝐞^Tζ^(1,2)_τ,ν|^2 + |𝐞^Tζ^(1,1)_τ,ν-𝐞^Tζ^(1,2)_τ,ν |^2 𝐞^Tζ^(1,2)_ν/𝐞^Tζ^(1,1)_ν -𝐞^Tζ^(1,2)_ν, which, according to the definition of ζ_ν, ζ_τ and ζ_τ,ν can be relaxed to (𝐞^Tζ^(1,1)_τ-x_τ)(𝐞^Tζ^(1,1)_ν+𝐞^Tζ^(1,2)_ν) ≥ |𝐞^Tζ^(1,1)_τ,ν|^2 + |𝐞^Tζ^(1,2)_τ,ν|^2 + |𝐞^Tζ^(1,1)_τ,ν-𝐞^Tζ^(1,2)_τ,ν |^2 , and it can be easily recast as a more stringent SOCP constraint by ensuring 𝐞^Tζ^(1,1)_τ-x_τ≥ 0. The same procedure can be applied to the constraint in (<ref>) as: (j_τ-𝐞^Tζ^(1,2)_τ)(𝐞^Tζ^(1,1)_ν-𝐞^Tζ^(1,2)_ν)≥ |𝐞^Tζ^(1,1)_τ,ν|^2 + |𝐞^Tζ^(1,2)_τ,ν|^2 - |𝐞^Tζ^(1,1)_τ,ν+𝐞^Tζ^(1,2)_τ,ν |^2 𝐞^Tζ^(1,1)_ν/𝐞^Tζ^(1,1)_ν +𝐞^Tζ^(1,2)_ν. which, by ensuring j_τ-𝐞^Tζ^(1,2)_τ≥ 0, can be simplified in the following more stringent SOCP constraint (j_τ-𝐞^Tζ^(1,2)_τ)(𝐞^Tζ^(1,1)_ν-𝐞^Tζ^(1,2)_ν) ≥ |𝐞^Tζ^(1,1)_τ,ν|^2 + |𝐞^Tζ^(1,2)_τ,ν|^2. A comparable procedure is employed for the Schur complement constraints associated with the Doppler. §.§ Spectral Efficiency Constraint The QoS rate constraint outlined in (<ref>) exhibits convexity and can be redefined as an exponential cone constraint through the introduction of an auxiliary optimization variable 𝐲_k ∈ℝ_+^L × 1,∀ k such that: [𝐲_k]_ℓ≤log_2(1+γ_k,ℓ), ∀ k, ∀ℓ ξ [𝐞_k]_ℓ +1 ≥exp(ln(2) [𝐲_k]_ℓ), ∀ k,∀ℓ with ξ = |H_k|^2/N_0. When considering only the time-frequency optimization, 𝐞_k = σ^2 𝐚_k. The SE threshold is than achieved by the constraint 1/L∑_ℓ=1^L [𝐲_k]_ℓ≥η, ∀ k. IEEEtran
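Two quick numerical sanity checks of the appendix material are sketched below: (i) the rotated-cone representation used for the auxiliary variables, namely that the norm constraint on (√2 j, x, t) is equivalent to t x ≥ j² for nonnegative x and t; and (ii) the two-target delay CRB obtained from the 2×2 Schur complement against direct inversion of the full FIM. The FIM used here is synthetic, purely to exercise the algebra.

```python
import numpy as np

rng = np.random.default_rng(7)

# (i) ||(sqrt(2) j, x, t)||_2 <= x + t   is equivalent to   t * x >= j^2 for x, t >= 0
checks = []
for _ in range(1000):
    x, t, j = rng.uniform(0.0, 5.0, 3)
    soc = np.linalg.norm([np.sqrt(2) * j, x, t]) <= x + t
    checks.append(soc == (t * x >= j ** 2))
print("SOC representation matches t*x >= j^2:", all(checks))

# (ii) Two-target delay CRB via the 2x2 Schur complement vs. full FIM inversion
A = rng.standard_normal((4, 6))
F = A @ A.T                                              # synthetic positive-definite FIM
F_tau, F_nu, F_cross = F[:2, :2], F[2:, 2:], F[:2, 2:]
J_tau = F_tau - F_cross @ np.linalg.inv(F_nu) @ F_cross.T
det = J_tau[0, 0] * J_tau[1, 1] - J_tau[0, 1] ** 2
C_tau_11 = J_tau[1, 1] / det                             # [C_tau]_{1,1} as in the appendix
print(np.isclose(C_tau_11, np.linalg.inv(F)[0, 0]))
```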
http://arxiv.org/abs/2406.18121v1
20240626072029
The Merton's Default Risk Model for Public Company
[ "Battulga Gankhuu" ]
q-fin.RM
[ "q-fin.RM" ]
The Merton's Default Risk Model for Public Company Battulga Gankhuu[ Department of Applied Mathematics, National University of Mongolia, Ulaanbaatar, Mongolia; E-mail: battulga.g@seas.num.edu.mn] ==================================================================================================================================================== § ABSTRACT In this paper, we developed the Merton's structural model for public companies under an assumption that liabilities of the companies are observed. Using 's approximation method, we obtain formulas of risk–neutral equity and liability values and default probabilities for the public companies. Also, the paper provides ML estimators of suggested model's parameters. Keywords: Public companies, Merton's structural model, ML estimators. § INTRODUCTION Dividend discount models (DDMs), first introduced by Williams38 are common methods for equity valuation. The basic idea is that the market value of an equity of a firm is equal to the present value of a sum of dividend paid by the firm and market value of the firm, which correspond to the next period. If the firm is financed by liabilities, which are publicly traded in the exchanges, the same idea can be used to value the liabilities, see Battulga23b. As the outcome of DDMs depends crucially on dividend payment forecasts, most research in the last few decades has been around the proper estimations of dividend development. Also, parameter estimation of DDMs is a challenging task. Recently, Battulga22a introduced parameter estimation methods for practically popular DDMs. To estimate parameters of the required rate of return, Battulga23b used the maximum likelihood method and Kalman filtering. Reviews of some existing DDMs that include deterministic and stochastic models can be found in dAmico20a and Battulga22a. Existing stochastic DDMs have one common disadvantage: If dividend and debt payments have chances to take negative values, then the market values of the firm's equity and liabilities can take negative values with a positive probability, which is the undesirable property for the market values. A log version of the stochastic DDM, which is called by dynamic Gordon growth model was introduced by Campbell88, who derived a connection between log price, log dividend, and log return by approximation. Since their model is in a log framework, the stock prices and dividends get positive values. For this reason, by augmenting the dynamic Gordon growth model, Battulga24d obtained pricing and hedging formulas of European options and equity–linked life insurance products for public companies. For private companies, using the log private company valuation model, based on the dynamic Gordon growth model, Battulga24c developed closed–form pricing and hedging formulas for the European options and equity–linked life insurance products and valuation formula. Sudden and dramatic changes in the financial market and economy are caused by events such as wars, market panics, or significant changes in government policies. To model those events, some authors used regime–switching models. The regime–switching model was introduced by seminal works of Hamilton89,Hamilton90 (see also books of Hamilton94 and Krolzig97) and the model is hidden Markov model with dependencies, see Zucchini16. However, Markov regime–switching models have been introduced before Hamilton (1989), see, for example, Goldfeld73, Quandt58, and Tong83. 
The regime–switching model assumes that a discrete unobservable Markov process generates switches among a finite set of regimes randomly and that each regime is defined by a particular parameter set. The model is a good fit for some financial data and has become popular in financial modeling including equity options, bond prices, and others. Recently, under normal framework, Battulga22b obtained pricing and hedging formulas for the European options and equity–linked life insurance products by introducing a DDM with regime–switching process. Also, Battulga24a developed option pricing formulas for some frequently used options by using Markov–Switching Vector Autoregressive process. To model required rate of return on stock, Battulga23b applied a three–regime model. The result of the paper reveals that the regimeswitching model is good fit for the required rate of return. Default risk is a possibility that a borrower fails to make full and timely payments of principal and interest, which are stated in the debt contract. The structural model of default risk relates to option pricing. In this model, a default threshold, which represents the liabilities of the company is seen as a strike price and a asset value of the company is seen as underlying asset of the European option. For this reason, this approach is also referred to as the firm–value approach or the option–theoretic approach. Original idea of the structural model goes back to Black73 and Merton74. Black73 developed a closed–form formula for evaluating the European option and Merton74 obtained pricing formula for the liabilities of a company under Black–Scholes framework. Battulga22b tried to estimate default probability using regime–switching process. This paper is organized as follows. In Section 2 of the paper, we develop stochastic DDM for market values of equities and liabilities of companies using the 's approximation method. Then, we model the market values of assets of the companies using the approximation method once again. In Section 3, we obtain pricing formulas of the European call and put options on the market value of the asset. After that, we develop formulas of risk–neutral equity and debt values, and default probability. In Section 4, we study ML estimators of suggested model's parameters. In Section 5, we conclude the study. Finally, in Section 6, we provide Lemmas, which is used in the paper. § MARKET VALUE MODEL OF EQUITY AND LIABILITY Let (Ω,ℋ_T,ℙ) be a complete probability space, where ℙ is a given physical or real–world probability measure and ℋ_T will be defined below. To introduce a regime–switching in Merton's default risk model, we assume that {s_t}_t=1^T is a homogeneous Markov chain with N state and 𝖯:={p_ij}_i=0,j=1^N is a random transition probability matrix, where (p_01,…,p_0N)' is an initial probability vector. In this paper, we assume that market values of equities and liabilities of companies are observed. For a case that market values of equities and liabilities are both unobserved, we refer to Battulga24e. Dividend discount models (DDMs), first introduced by Williams38, are a popular tool for equity valuation. The basic idea of all DDMs is that the market value of equity at time t-1 of the firm equals the sum of the market value of equity at time t and dividend payment at time t discounted at risk–adjusted rate (required rate of return on stock). Let us assume there are n companies. 
Therefore, for successive market values of equity of i–th company, the following relation holds V_i,t^e=(1+k_i,t^e)V_i,t-1^e-p_i,t^e,   t=1,…,T, where k_i,t^e is the required rate of return on the equity (investors) at regime s_t, V_i,t^e is the market value of equity, and p_i,t^e is the dividend payment for investors, respectively, at time t of i–th company. On the other hand, to model market values of liabilities of the company, it is the well known fact that successive values of a debt of company or individual is given by the following equation D_t=(1+i)D_t-1-d_t where D_t is a debt value at time t, d_t is a debt payment at time t, and i is a interest rate of the debt, see, e.g., Gerber97. Note that D_t represents the principal outstanding, that is, the remaining debt immediately after r_t has been paid and debt equation (<ref>) shares same formula with market value of equity given in equation (<ref>). The idea of equation (<ref>) can be used to model a value of liabilities of the company, namely, V_i,t^ℓ=(1+k_i,t^ℓ)V_i,t-1^ℓ-p_i,t^ℓ,    t=1,…,T, where k_i,t^ℓ is the required rate of return on the liability (debtholders) at regime s_t, V_i,t^ℓ is the market value of the liability, and p_i,t^ℓ is a debt payment, which includes interest payment for debtholders, respectively, at time t of the company, see Battulga23b. To keep notations simple, let V_t^e:=(V_1,t^e,…,V_n,t^e)' be an (n× 1) vector of market values of equities, V_t^ℓ:=(V_1,t^ℓ,…,V_n,t^ℓ)' be an (n× 1) vector of market values of liabilities, k_t^e:=(k_1,t^e,…,k_n,t^e)' be an (n× 1) vector of required rate of returns on equities, k_t^ℓ:=(k_1,t^ℓ,…,k_n,t^ℓ)' be an (n× 1) vector of required rate of returns on liabilities, p_t^e:=(p_1,t^e,…,p_n,t^e)' be an (n× 1) vector of the dividend payments, and p_t^ℓ:=(p_1,t^ℓ,…,p_n,t^ℓ)' be an (n× 1) vector of the debt payments, respectively, at time t, I_n be an (n× n) identity matrix, i_n:=(1,…,1)' be an (n× 1) vector, whose all elements equal one. If payments of dividend and debt have chances to take negative values, then the market values of equity and liability of a company can take negative values with a positive probability, which is the undesirable property for market values of equity and liability. That is why, we follow the idea in Campbell88. As a result, the market values of equity and liability of the company take positive values. Following the idea in Campbell88, one can obtain the following approximation exp{k̃_t}=(V_t+p_t)⊘ V_t-1≈exp{Ṽ_t-Ṽ_t-1+ln(g_t)+G_t^-1(G_t-I_2n)(p̃_t-Ṽ_t-μ_t)}, where ⊘ is a component–wise division of two vectors, k̃_t:=((ln(i_n+k_t^e))',(ln(i_n+k_t^ℓ))')' is a (2n× 1) log required rate of return process at time t, V_t:=((V_t^e)',(V_t^ℓ)')' is a (2n× 1) market value process at time t, p_t:=((p_t^e)',(p_t^ℓ)')' is a (2n× 1) payment process at time t, Ṽ_t:=ln(V_t) is a (2n× 1) log market value process at time t, p̃_t:=ln(p_t) is a (2n× 1) log payment process at time t, μ_t:=𝔼[p̃_t-Ṽ_t|ℱ_0] is a (2n× 1) mean log payment–to–market value process at time t of the companies and ℱ_0 is initial information, which will be defined below, g_t:=i_2n+exp{μ_t} is a (2n× 1) linearization parameter, and G_t:=diag{g_t} is a (2n× 2n) diagonal matrix, whose diagonal elements are g_t. As a result, for the log market value process at time t, the following approximation holds Ṽ_t≈ G_t(Ṽ_t-1-p̃_t+k̃_t)+p̃_t-h_t. where h_t:=G_t(ln(g_t)-μ_t)+μ_t is a linearization parameter and the model is called by dynamic Gordon growth model, see Campbell88. 
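The quality of this log-linear approximation is easy to probe numerically: simulate the exact value recursion for one firm, estimate the linearization constants from the mean log payment-to-value ratio, and compare the approximate log-value path with the exact one. The sketch below does this for a scalar (single-series) case with illustrative payout and return parameters.

```python
import numpy as np

rng = np.random.default_rng(4)

# Exact value recursion V_t = (1 + k_t) V_{t-1} - p_t for one firm
Tn = 120
V = np.empty(Tn + 1); V[0] = 100.0
p = np.empty(Tn + 1); k = np.empty(Tn + 1)
for t in range(1, Tn + 1):
    k[t] = 0.02 + 0.03 * rng.standard_normal()                    # required rate of return
    p[t] = 0.01 * V[t - 1] * np.exp(0.1 * rng.standard_normal())  # dividend/debt payment
    V[t] = (1 + k[t]) * V[t - 1] - p[t]

lnV, lnp = np.log(V), np.log(p[1:])
k_log = np.log(V[1:] + p[1:]) - lnV[:-1]          # exact log gross return ln(1 + k_t)

# Linearization parameters from the mean log payment-to-value ratio
mu = np.mean(lnp - lnV[1:])
g = 1.0 + np.exp(mu)
h = g * (np.log(g) - mu) + mu

# Approximate recursion: Vtilde_t ~ g (Vtilde_{t-1} - ptilde_t + ktilde_t) + ptilde_t - h
lnV_approx = np.empty(Tn + 1); lnV_approx[0] = lnV[0]
for t in range(1, Tn + 1):
    lnV_approx[t] = g * (lnV_approx[t - 1] - lnp[t - 1] + k_log[t - 1]) + lnp[t - 1] - h

print("max |approx - exact| in log values:", np.max(np.abs(lnV_approx - lnV)))
```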
We assume that values of the log payment process p̃_t are known at time 0. For a quality of the approximation, we refer to Campbell97. To estimate parameters of the dynamic Gordon growth model and to price the Black–Scholes call and put options on asset value of the company, we suppose that the log required rate of return process at time t is represented by a sum of deterministic process and white noise process, namely, k̃_t=C_k,s_tψ_t+δr̃_t+u_t, where ψ_t:=(ψ_1,t,…,ψ_l_k,t)' is an (l× 1) vector, which consists of exogenous variables, C_k,s_t is an (n× l) random matrix at regime s_t, δ:=(0,i_n')' is a (2n× 1) vector, whose first n elements are zero and others are one, r̃_t:=ln(1+r_t) is a log spot interest rate at time t, r_t is a spot interest rate for borrowing and lending over a period (t,t+1], u_t is an (n× 1) white noise process with random covariance matrix Σ_uu,s_t at regime s_t. In this case, equation (<ref>) becomes Ṽ_t=G_t(Ṽ_t-1-p̃_t+C_k,s_tψ_t+δr̃_t)+p̃_t-h_t+G_tu_t. Now, we model spot interest rate r_t. By using the Dickey–Fuller test, it can be confirm that quarterly log spot interest rate is the unit–root process with drift, see data IRX of Yahoo Finance. Also, due to the fact that log return of quarterly S&P 500 index is stationary process, see data SPX of Yahoo Finance, we model the log required rate of return on equity by a trend stationary process. Moreover, there may exist a cointegration between the log required rate of return on debtholders and the spot interest rate. For this reason, we model the required rate of return process by equation (<ref>). Consequently, we model the log spot rate by the following equation r̃_t=c_r,s_t'ψ_t+r̃_t-1+v_t, where c_r,s_t is an (l× 1) random vector at regime s_t, v_t is a white noise process with random variance Σ_vv,s_t at regime s_t. As a result, by combining equations (<ref>) and (<ref>), we arrive the following system Ṽ_t=ν_V,t+G_tṼ_t-1+G_tδr̃_t-1+G_tu_t r̃_t=ν_r,t+r̃_t-1+v_t,    for t=1,…,T under the real probability measure ℙ, where ν_V,t:=G_tC_k,s_tψ_t-(G_t-I_2n)p̃_t-h_t is an (2n× 1) intercept process of the log market value process Ṽ_t and ν_r,t:=c_r,s_t'ψ_t is an (1× 1) intercept process of the log spot rate process r_t. Let us denote a dimension of system (<ref>) by ñ, that is, ñ:=2n+1. Finally, we model the market values of the assets of the companies. Since the market values of the assets equal sums of the market values of equities and liabilities of the companies, we have V_t^a=V_t^e+V_t^ℓ, where V_t^a is an (n× 1) asset value process at time t of the companies. Using the same approximation method, a log asset value process of the companies is approximated by the following equation Ṽ_t^a:=ln(V_t^e+V_t^ℓ) ≈ (G_t^a)^-1Ṽ_t^e+(I_n-(G_t^a)^-1)Ṽ_t^ℓ+(G_t^a)^-1h_t^a = W_t^aṼ_t+(G_t^a)^-1h_t^a where μ_t^a:=𝔼[Ṽ_t^ℓ-Ṽ_t^e|ℱ_0] is a mean log liability value–to–equity value ratio, g_t^a:=i_n+exp{μ_t^a} and h_t^a:=G_t^a(ln(g_t^a)-μ_t^a)+μ_t^a are linearization parameters for the log asset process, G_t^a:=diag{g_t^a} is a diagonal matrix, and W_t^a:=[(G_t^a)^-1:I_n-(G_t^a)^-1] is a weight matrix of the approximation, respectively, at time t of the company. The stochastic properties of system (<ref>) is governed by the random vectors {u_1,…,u_T, v_1,…,v_T}. We assume that for t=1,…,T, the white noise process ξ_t:=(u_t',v_t)' follows normal distribution, namely, ξ_t∼𝒩(0,Σ_s_t) under the real probability measure ℙ, where Σ_s_t=[ Σ_uu,s_t Σ_uv,s_t; Σ_vu,s_t Σ_vv,s_t ]. is a covariance matrix of the (ñ× 1) white noise process ξ_t. 
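A minimal simulation of this regime-switching system for a single company (so that Ṽ_t stacks one log equity value and one log liability value) is sketched below; the transition matrix, intercepts, covariances and linearization constants are illustrative, and g_t is held constant for simplicity.

```python
import numpy as np

rng = np.random.default_rng(5)

# Two-regime toy specification, n = 1 company
P = np.array([[0.95, 0.05], [0.10, 0.90]])                   # transition probabilities
nu_V = [np.array([0.01, 0.005]), np.array([-0.02, 0.0])]     # regime-dependent intercepts
nu_r = [0.0, 0.001]
Sig = [0.01 * np.eye(3), 0.03 * np.eye(3)]                   # covariance of (u', v)' per regime
g = np.array([1.02, 1.01])                                    # linearization parameters (fixed)
G = np.diag(g)
delta = np.array([0.0, 1.0])                                  # zeros for equity, ones for liability

Tn = 40
s = np.empty(Tn + 1, dtype=int); s[0] = 0
Vt = np.empty((Tn + 1, 2)); Vt[0] = np.log([80.0, 40.0])      # log equity / log liability
rt = np.empty(Tn + 1); rt[0] = np.log(1.01)                   # log spot rate

for t in range(1, Tn + 1):
    s[t] = rng.choice(2, p=P[s[t - 1]])                       # Markov regime draw
    xi = rng.multivariate_normal(np.zeros(3), Sig[s[t]])
    u, v = xi[:2], xi[2]
    rt[t] = nu_r[s[t]] + rt[t - 1] + v                        # unit-root log spot rate
    Vt[t] = nu_V[s[t]] + G @ Vt[t - 1] + G @ delta * rt[t - 1] + G @ u

print("regime occupancy:", np.bincount(s, minlength=2) / (Tn + 1))
print("final log equity/liability:", Vt[-1])
```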
§ MERTON'S STRUCTURAL MODEL The Merton's model is one of the structural models used to measure credit risk. Merton74 was aim to value the liabilities of a specific company. As mentioned above the model connects the European call and put options. The European call and put options are contracts that give their owner the right, but not the obligation, to buy or sell shares of a stock of a company at a predetermined price by a specified date. Let us start this Section by considering the valuation method of the European options on the asset values of the companies. Let T be a time to maturity of the European call and put options, x_t:=(Ṽ_t',r̃_t)' be (ñ× 1) process at time t of endogenous variables, and C_s_t:=[C_k,s_t':c_r,s_t]' be random coefficient vector at regime s_t. We introduce stacked vectors and matrices: x:=(x_1',…,x_T')', s:=(s_1,…,s_T)', C_s:=[C_s_1:…:C_s_T], and Γ_s:=[Σ_s_1:…:Σ_s_T]. We suppose that the white noise process {ξ_t}_t=1^T is independent of the random coefficient matrix C_s, random covariance matrix Γ_s, random transition matrix 𝖯, and regime–switching vector s conditional on initial information ℱ_0:=σ(x_0,ψ_1,…,ψ_T). Here for a generic random vector X, σ(X) denotes a σ–field generated by the random vector X, ψ_1,…,ψ_T are values of exogenous variables and they are known at time zero. We further suppose that the transition probability matrix 𝖯 is independent of the random coefficient matrix C_s and covariance matrix Γ_s given initial information ℱ_0 and regime–switching process s_t. To ease of notations, for a generic vector o=(o_1,…,o_T)', we denote its first t and last T-t sub vectors by o̅_t and o̅_t^c, respectively, that is, o̅_t:=(o_1,…,o_t)' and o̅_t^c:=(o_t+1,…,o_T)'. We define σ–fields: for t=0,…,T, ℱ_t:=ℱ_0∨σ(x̅_t) and ℋ_t:=ℱ_t∨σ(Ĉ)∨σ(Γ̂)∨σ(𝖯)∨σ(s), where for generic sigma fields 𝒪_1,…,𝒪_k, ∨_i=1^k 𝒪_i is the minimal σ–field containing the σ–fields 𝒪_i, i=1,…,k. Observe that ℱ_t⊂ℋ_t for t=0,…,T. For the first–order Markov chain, a conditional probability that the regime at time t+1, s_t+1 equals some particular value conditional on the past regimes, s_t,s_t-1,…,s_1 depends only through the most recent regime at time t, s_t, that is, p_s_ts_t+1:=ℙ[s_t+1=s_t+1|s_t=s_t,𝖯,ℱ_0]=ℙ[s_t+1=s_t+1|s̅_t=s̅_t,𝖯,ℱ_0] for t=0,…,T-1, where p_s_1:=p_s_0s_1=ℙ[s_1=s_1|𝖯,ℱ_0] is an initial probability. A distribution of a white noise vector ξ:=(ξ_1',…,ξ_T')' is given by ξ=(ξ_1',…,ξ_T')' | ℋ_0∼𝒩(0,Σ̅), where Σ̅:=diag{Σ_s_1,…,Σ_s_T} is a block diagonal matrix. To remove duplicates in the random coefficient matrix (C_s,Γ_s), for a generic regime–switching vector with length k, o=(o_1,…,o_k)', we define sets 𝒜_o̅_t:=𝒜_o̅_t-1∪{o_t∈{o_1,…,o_k}|o_t∉𝒜_o̅_t-1},   t=1,…,k, where for t=1,…,k, o_t∈{1,…,N} and an initial set is the empty set, i.e., 𝒜_o̅_0=Ø. The final set 𝒜_o=𝒜_o̅_k consists of different regimes in regime vector o=o̅_k and |𝒜_o| represents a number of different regimes in the regime vector o. Let us assume that elements of sets 𝒜_s, 𝒜_s̅_t, and difference sets between the sets 𝒜_s̅_t^c and 𝒜_s̅_t are given by 𝒜_s={ŝ_1,…,ŝ_r_ŝ}, 𝒜_s̅_t={α_1,…,α_r_α}, and 𝒜_s̅_t^c\𝒜_s̅_t={δ_1,…,δ_r_δ}, respectively, where r_ŝ:=|𝒜_s|, r_α:=|𝒜_s̅_t|, and r_δ:=|𝒜_s̅_t^c\𝒜_s̅_t| are numbers of elements of the sets, respectively. We introduce the following regime vectors: ŝ:=(ŝ_1,…,ŝ_r_ŝ)' is an (r_ŝ× 1) vector, α:=(α_1,…,α_r_α)' is an (r_α× 1) vector, and δ=(δ_1,…,δ_r_δ)' is an (r_δ× 1) vector. 
For the regime vector a=(a_1,…,a_r_a)' ∈{ŝ,α,δ}, we also introduce duplication removed random coefficient matrices, whose block matrices are different: C_a=[C_a_1:…:C_a_r_a] is an (ñ× [lr_a]) matrix, Γ_a=[Γ_a_1:…:Γ_a_r_a] is an (ñ× [ñr_a]) matrix, and (C_a,Γ_a). We assume that for given duplication removed regime vector ŝ and initial information ℱ_0, the coefficient matrices (C_ŝ_1,Γ_ŝ_1),…,(C_ŝ_r_ŝ,Γ_ŝ_r_ŝ) are independent under the real probability measure ℙ. Under the assumption, conditional on ŝ and ℱ_0, a joint density function of the random coefficient random matrix (C_ŝ,Γ_ŝ) is represented by f(C_ŝ,Γ_ŝ|ŝ,ℱ_0)=∏_t=1^r_ŝf(C_ŝ_t,Γ_ŝ_t|ŝ_t,ℱ_0) under the real probability measure ℙ, where for a generic random vector X, we denote its density function by f(X) under the real probability measure ℙ. Using the regime vectors α and δ, the above joint density function can be written by f(C_ŝ,Γ_ŝ|ŝ,ℱ_0)= f(C_α,Γ_α|α,ℱ_0)f_*(C_δ,Γ_δ|δ,ℱ_0) where the density function f_*(C_δ,Γ_δ|δ,ℱ_0) equals f_*(C_δ,Γ_δ|δ,ℱ_0):= f(C_δ,Γ_δ|δ,ℱ_0), if   r_δ≠ 0, 1, if   r_δ= 0. §.§ Risk–Neutral Probability Measure To price the European call and put options, we need to change from the real probability measure to some risk–neutral measure. Let D_t:=exp{-r̃_1-…-r̃_t}=1/∏_s=1^t(1+r_s) be a predictable discount process, where r̃_t-1 is the log spot interest rate at time t. According to Pliska97 (see also Bjork20), for all companies, a conditional expectation of the return processes k_i,t^e=(V_i,t^e+p_i,t^e)/V_i,t-1^e-1 and k_i,t^ℓ=(V_i,t^ℓ+p_i,t^ℓ)/V_i,t-1^ℓ-1 for i=1,…,n must equal the spot interest rate r_t under some risk–neutral probability measure ℙ̃ and a filtration {ℋ_t}_t=0^T. Thus, it must hold 𝔼̃[(V_t+p_t)⊘ V_t-1|ℋ_t-1]=exp{r̃_ti_2n} for t=1,…,T, where 𝔼̃ denotes an expectation under the risk–neutral probability measure ℙ̃. According to equation (<ref>), condition (<ref>) is equivalent to the following condition 𝔼̃[exp{u_t-((i_2n-δ)r̃_t-C_k,s_tψ_t)}|ℋ_t-1]=i_2n. It should be noted that condition (<ref>) corresponds only to the white noise random process u_t. Thus, we need to impose a condition on the white noise process v_t under the risk–neutral probability measure. This condition is fulfilled by 𝔼̃[exp{v_t}|ℋ_t-1]=θ̃_t for ℋ_t-1 measurable any random variable θ̃_t. Because for any admissible choices of θ̃_t, condition (<ref>) holds, the market is incomplete. But prices of the options are still consistent with the absence of arbitrage. For this reason, to price the options, in this paper, we will use a unique optimal Girsanov kernel process θ_t, which minimizes the variance of a state price density process and relative entropy. According to Battulga24a, the optimal kernel process θ_t is obtained by θ_t=Θ_t ((i_2n-δ)r̃_t-C_k,s_tψ_t-1/2𝒟[Σ_uu,s_t]), where Θ_t=[G_t:(Σ_vu,s_tΣ_uu,s_t^-1)']' and for a generic square matrix O, 𝒟[O] denotes a vector, consisting of diagonal elements of the matrix O. Consequently, system (<ref>) can be written by Ṽ_t=ν̃_V,t+G_tṼ_t-1+G_ti_2n r_t-1+G_tũ_t F_tr̃_t=ν̃_r,t+r̃_t-1+ṽ_t,    for t=1,…,T under the risk–neutral probability measure ℙ̃, where ν̃_V,t:=-(G_t-I_n)p̃_t-1/2G_t𝒟[Σ_uu,s_t]-h_t is an (2n× 1) intercept process of the log market value process Ṽ_t, F_t:=1-Σ_vu,s_tΣ_uu,s_t^-1(i_2n-δ) is a (1× 1) process, and ν̃_r,t:=c_r,s_t'ψ_t-Σ_vu,s_tΣ_uu,s_t^-1(C_k,s_tψ_t+1/2𝒟[Σ_uu,s_t]) is a (1× 1) intercept process of the log spot rate process r̃_t. 
It is worth mentioning that a joint distribution of a random vector ξ̃:=(ξ̃_1',…,ξ̃_T')' with ξ̃_t:=(ũ_t',ṽ_t)' equals the joint distribution of the random vector ξ=(ξ_1',…,ξ_T')', that is, ξ̃∼𝒩(0,Σ̅) under the risk–neutral probability measure ℙ̃, see Battulga24a. System (<ref>) can be written in VAR(1) form, namely Q̃_0,tx_t=ν̃_t+Q̃_tx_t-1+𝖦_tξ̃_t under the risk–neutral probability measure ℙ̃, where ν̃_t:=(ν̃_V,t',ν̃_r,t)', and ξ̃_t:=(ũ_t',ṽ_t)' are intercept process and white noise processes of the VAR(1) process x_t, respectively, and Q̃_0,t:=[ I_2n 0; 0 F_t ],   Q̃_1,t:=[ G_t G_ti_2n; 0 E_t ],   and   𝖦_t=[ G_t 0; 0 1 ] are (ñ×ñ) coefficient matrices with F_t:=1-Σ_vu,tΣ_uu,t^-1(i_2n-δ) and E_t:=1+Σ_vu,tΣ_uu,t^-1(i_2n-δ). By repeating equation (<ref>), one gets that for i=t+1,…,T, x_i=Π̃_t,ix_t+∑_β=t+1^iΠ̃_β,iν̃_β+∑_β=t+1^iΠ̃_β,i𝖦_βξ̃_β, where the coefficient matrices are Π̃_β,i:=∏_α=β+1^iQ̃_0,α^-1Q̃_1,α=[ ∏_α=β+1^iG_α ∑_α=β+1^iG_i(∏_j_1=α^i-1G_j_1)i_2n(∏_j_2=β+1^α-1 F_j_2^-1E_j_2); 0 ∏_α=β+1^i F_α^-1 E_α ] for β=0,…,i-1 and Π̃_i,i:=Q̃_0,i^-1=[ I_2n 0; 0 F_i^-1 ]. Here for a sequence of generic (k× k) square matrices O_1,O_2,…, the products mean that for v≤ u, ∏_j=v^uO_j=O_u… O_v and for v>u, ∏_j=v^uO_j=I_k. Therefore, conditional on the information ℋ_t, for i=t+1,…,T, a expectation at time i and a conditional covariance matrix at time i_1 and i_2 of the process x_t is given by the following equations μ̃_i|t(ℋ_t):=Ẽ[x_i|ℋ_t]=𝖩_xΠ̃_t,i x_t+∑_β=t+1^iΠ̃_β,iν̃_β and Σ̃_i_1,i_2|t(ℋ_t):=Cov[x_i_1,x_i_2|ℋ_t]=∑_β=t+1^i_1∧ i_2Π̃_β,i_1𝖦_βΣ_s_β𝖦_βΠ̃_β,i_2', where i_1∧ i_2 is a minimum of i_1 and i_2. Consequently, due to equation (<ref>), conditional on the information ℋ_t, joint distribution of the random vector x̅_t^c is x̅_t^c | ℋ_t∼𝒩(μ̃_t^c(ℋ_t),Σ̃_t^c(ℋ_t)),   t=0,…,T-1 under the risk–neutral probability measure ℙ̃, where μ̃_t^c(ℋ_t):=(μ̃_t+1|t(ℋ_t),…,μ̃_T|t(ℋ_t))' is a conditional expectation and Σ̃_t^c(ℋ_t):=(Σ̃_i_1,i_2|t(ℋ_t))_i_1,i_2=t+1^T is a conditional covariance matrix of the random vector x̅_t^c and are calculated by equations (<ref>) and (<ref>), respectively. §.§ Forward Probability Measure According to Geman95, cleaver change of probability measure leads to a significant reduction in the computational burden of derivative pricing. The frequently used probability measure that reduces the computational burden is the forward probability measure and to price the zero–coupon bond, the European options, and the Margrabe exchange options we will apply it. To define the forward probability measure, we need to zero–coupon bond. It is the well–known fact that conditional on ℱ_t, price of zero–coupon bond paying face value 1 at time t is B_t(ℋ_t):=1/D_tẼ[D_T|ℋ_t]. The t–forward probability measure is defined by ℙ̂_t[A|ℋ_t]:=1/D_tB_t(ℋ_t)∫_AD_Tℙ̃[ω|ℋ_t]   for all A∈ℋ_T. Therefore, a negative exponent of D_T/D_t in the zero–coupon bond formula is represented by ∑_β=t+1^Tr̃_β=r̃_t+1+j_r'[∑_β=t+1^T-1J_β|t]x̅_t^c=r̃_t+1+γ_t'x̅_t^c where j_r:=(0,1)' is (1×ñ) vector and it can be used to extract the log spot rate process r̃_s from the random process x_s, J_β|t:=[0:I_ñ:0] is (ñ×ñ(T-t)) matrix, whose (β-t)–th block matrix equals I_ñ and others are zero and it is used to extract the random vector x_β from the random vector x̅_t^c, and γ_t':=j_r'∑_β=t+1^T-1J_β|t. 
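The conditional mean and cross-covariance recursions above are straightforward to implement for a time-varying VAR(1); the sketch below mirrors the structure of μ̃_{i|t} and Σ̃_{i_1,i_2|t} with toy coefficient matrices, and the same code applies under the real measure with the corresponding ν_t and Q_t.

```python
import numpy as np

def conditional_moments(x_t, nus, Qs, Ss):
    """Conditional means mu_{i|t} and cross-covariances Sigma_{i1,i2|t} of a
    time-varying VAR(1) x_i = nu_i + Q_i x_{i-1} + w_i with Cov(w_i) = S_i,
    for i = t+1, ..., T, given x_t.  nus, Qs, Ss are lists indexed by horizon."""
    H, d = len(nus), x_t.size
    # Pi[b][i] = Q_i ... Q_{b+1} (with Pi[i][i] = I): maps the shock at step b to step i
    Pi = [[None] * H for _ in range(H)]
    for b in range(H):
        Pi[b][b] = np.eye(d)
        for i in range(b + 1, H):
            Pi[b][i] = Qs[i] @ Pi[b][i - 1]
    mu, m = [], x_t.copy()
    for i in range(H):
        m = nus[i] + Qs[i] @ m
        mu.append(m.copy())
    Sigma = [[sum(Pi[b][i1] @ Ss[b] @ Pi[b][i2].T for b in range(min(i1, i2) + 1))
              for i2 in range(H)] for i1 in range(H)]
    return mu, Sigma

# Tiny example: 3-dimensional state (two log values plus the log rate), two steps ahead
d = 3
x0 = np.array([4.4, 3.7, 0.01])
nus = [0.01 * np.ones(d)] * 2
Qs = [np.array([[1.02, 0.0, 1.02], [0.0, 1.01, 1.01], [0.0, 0.0, 1.0]])] * 2
Ss = [0.01 * np.eye(d)] * 2
mu, Sigma = conditional_moments(x0, nus, Qs, Ss)
print("E[x_{t+2} | H_t] =", mu[1])
print("Var[x_{t+2} | H_t] diagonal =", np.diag(Sigma[1][1]))
```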
Therefore, two times of negative exponent of the price at time t of the zero–coupon B_t(ℋ_t) is represented by 2∑_s=t+1^Tr̃_s+(x̅_t^c-μ̃_t^c(ℋ_t))'(Σ̃_t^c(ℋ_t))^-1(x̅_t^c-μ̃_t^c(ℋ_t)) =(x̅_t^c-μ̃_t^c(ℋ_t)+Σ̃_t^c(ℋ_t)γ_t)'(Σ̃_t^c(ℋ_t))^-1(x̅_t^c-μ̃_t^c(ℋ_t)+Σ̃_t^c(ℋ_t)γ_t) +2(r̃_t+1+γ_t'μ̃_t^c(ℋ_t))-γ_t'Σ̃_t^c(ℋ_t)γ_t. As a result, for given ℋ_t, price at time t of the zero–coupon B_t(ℋ_t) is B_t(ℋ_t)=exp{-r̃_t+1-γ_t'μ̃_t^c(ℋ_t)+1/2γ_t'Σ̃_t^c(ℋ_t)γ_t}. Consequently, conditional on the information ℋ_t, a joint distribution of the random vector x̅_t^c is given by x̅_t^c | ℋ_t∼𝒩(μ̂_t^c(ℋ_t),Σ̃_t^c(ℋ_t)),   t=0,…,T-1 under the t–forward probability measure ℙ̂_t, where μ̂_t^c(ℋ_t):=μ̃_t^c(ℋ_t)-Σ̃_t^c(ℋ_t)γ_t and Σ̃_t^c(ℋ_t) are conditional expectation and conditional covariance matrix, respectively, of the random vector x̅_t^c. Also, as J_s_1|tΣ̃_t^c(ℋ_t) J_s_2|t'=Σ̃_s_1,s_2|t(ℋ_t), we have J_s|tΣ̃_t^c(ℋ_t)(∑_β=t+1^T-1J_β|t)=∑_β=t+1^T-1Σ̃_s,β|t(ℋ_t), where Σ̃_s,β|t(ℋ_t) is calculated by equation (<ref>). Therefore, (s-t)–th block vector of the conditional expectation μ̂_t^c(ℋ_t) is given by μ̂_s|t(ℋ_t):=J_s|tμ̂_t^c(ℋ_t)=μ̃_s|t(ℋ_t)-∑_β=t+1^T-1(Σ̃_s,β|t(ℋ_t))_ñ, where for a generic matrix O, we denote its j–th column by (O)_j. Similarly, it is clear that price at time t of the zero–coupon bond is given by B_t(ℋ_t)=exp{-r̃_t+1-∑_β=t+1^T-1(μ̃_β|t(ℋ_t))_ñ+1/2∑_α=t+1^T-1∑_β=t+1^T-1(Σ̃_α,β|t(ℋ_t))_ñ,ñ}. where for a generic vector o, we denote its j–th element by (o)_j, and for a generic square matrix O, we denote its (i,j)–th element by (O)_i,j. To price the European call and put options for asset value, we need a distribution of the log market value process at time T. For this reason, it follows from equation (<ref>) that the distribution of the log market value process at time T is given by Ṽ_T | ℋ_t∼𝒩(μ̂_T|t^Ṽ(ℋ_t),Σ̃_T|t^Ṽ(ℋ_t)), under the t–forward probability measure ℙ̂, where μ̂_T|t^Ṽ(ℋ_t):=J_Vμ̂_T|t(ℋ_t) is a conditional expectation, which is calculated from equation (<ref>) and Σ̃_T|t^Ṽ(ℋ_t):=J_VΣ̃_T,T|t^Ṽ(ℋ_t)J_V' is a conditional covariance matrix, which is calculated from equation (<ref>) of the log market value at time T given the information ℋ_t and J_V:=[I_2n:0] is a (2n×ñ) matrix, which is used to extract the log market value process Ṽ_t from the process x_t. §.§ The European Call and Put Options Let us assume that the recovery rates, corresponding to the market values of assets of the companies are zero when they default. Then, because market values at time T of the equities and liabilities are given by the following equations V_T^e=max(V_T^a-L,0)=(V_T^a-L)^+   and   L_T=min(V_T^a,L)=L-(L-V_T^a)^+, respectively, where L is a nominal value vector of the liabilities at maturity T of the companies. Therefore, a risk–neutral equity value at time t of a public company equals the European call option on its asset and liabilities at time t of the company is represented in terms of the European put option on its asset. This Subsection is devoted to price the call and put options. According to equation (<ref>) and (<ref>), conditional on the information ℋ_t, its distribution is given by Ṽ_T^a | ℋ_t∼𝒩(μ̂_T|t^a(ℋ_t),Σ̃_T|t^a(ℋ_t)) under the t–forward probability measure ℙ̂_t, where μ̂_T|t^a(ℋ_t):=W_T^aμ̂_T|t^Ṽ(ℋ_t)+(G_T^a)^-1h_T^a and Σ̃_T|t^a(ℋ_t):=W_T^aΣ̃_T|t^Ṽ(ℋ_t)(W_T^a)' are conditional mean and variance of the log asset value Ṽ_T^a, respectively, given the information ℋ_t. 
Therefore, due to equation (<ref>) and Lemma <ref>, see Technical Annex, price vectors at time t of the Black–Scholes call and put options with strike price vector L and maturity T are given by C_T|t(ℋ_t) = Ẽ[D_T/D_t(V_T^a-L)^+|ℋ_t]=B_t(ℋ_t)Ê[(V_T^a-L)^+|ℋ_t] = B_t(ℋ_t)(exp{μ̂_T|t^a(ℋ_t)+1/2𝒟[Σ̃_T|t^a(ℋ_t)]}⊙Φ(d_T|t^1(ℋ_t))-L⊙Φ(d_T|t^2(ℋ_t))) and P_T|t(ℋ_t) = Ẽ[D_T/D_t(L-V_T^a)^+|ℋ_t]=B_t(ℋ_t)Ê[(L-V_T^a)^+|ℋ_t] = B_t(ℋ_t)(L⊙Φ(-d_T|t^2(ℋ_t))-exp{μ̂_T|t^a(ℋ_t)+1/2𝒟[Σ̃_T|t^a(ℋ_t)]}⊙Φ(-d_T|t^1(ℋ_t))), where d_T|t^1(ℋ_t):=(μ̂_T|t^a(ℋ_t)+𝒟[Σ̃_T|t^a(ℋ_t)]-ln(L))⊘√(𝒟[Σ̃_T|t^a(ℋ_t)]) and d_T|t^2(ℋ_t):=d_T|t^1(ℋ_t)-√(𝒟[Σ̃_T|t^a(ℋ_t)]). Therefore, due to Lemma <ref> and the tower property of conditional expectation, price vectors at time t (t=0,…,T-1) of the Black–Scholes call and put options on asset values with strike price vector L and maturity T are obtained as C_T|t(ℱ_t)=𝔼̃[D_T/D_t(V_T^a-L)^+|ℱ_t]=∑_s∫_C_ŝ,Γ_ŝC_T|t(ℋ_t)f̃(C_ŝ,Γ_ŝ,s|ℱ_t)dC_ŝ dΓ_ŝ and P_T|t(ℱ_t)=𝔼̃[D_T/D_t(L-V_T^a)^+|ℱ_t]=∑_s∫_C_ŝ,Γ_ŝP_T|t(ℋ_t)f̃(C_ŝ,Γ_ŝ,s|ℱ_t)dC_ŝ dΓ_ŝ respectively. As a result, according to formulas of the call and put options given in equations (<ref>) and (<ref>), risk–neutral market values of the equities and liabilities at time t of the companies are given by V̂_t^e=C_T|t(ℱ_t)   and   L̂_t=LB_t(ℱ_t)-P_T|t(ℱ_t). §.§ Default Probability Now, we move to default probabilities of a companies. In order to obtain the default probabilities of the companies, for given the information ℋ_t, we need a distribution of log asset value at time T under the real probability measure ℙ. For this reason, let us write system (<ref>) in VAR(1) form x_t=ν_t+Q_tx_t-1+𝖦_tξ_t under the real probability measure ℙ, where ν_t:=(ν_V,t',ν_r,t)' is intercept process of the VAR(1) process x_t and Q_t:=[ G_t G_tδ; 0 1 ] is (ñ×ñ) coefficient matrix. By repeating equation (<ref>), one gets that for i=t+1,…,T, x_i=Π_t,ix_t+∑_β=t+1^iΠ_β,iν_β+∑_β=t+1^iΠ_β,i𝖦_βξ_β, where the coefficient matrices are Π_β,i:=∏_α=β+1^iQ_α=[ ∏_α=β+1^iG_α ∑_α=β+1^iG_i(∏_j_1=α^i-1G_j_1)δ; 0 1 ] for β=0,…,i-1 and Π_i,i:=I_ñ. Thus, conditional on the information ℋ_t, for i=t+1,…,T, a expectation at time i and a conditional covariance matrix at time i_1 and i_2 of the process x_t is given by the following equations μ_i|t(ℋ_t):=𝔼[x_i|ℋ_t]=Π_t,i x_t+∑_β=t+1^iΠ_β,iν_β and Σ_i_1,i_2|t(ℋ_t):=Cov[x_i_1,x_i_2|ℋ_t]=∑_β=t+1^i_1∧ i_2Π_β,i_1𝖦_βΣ_s_β𝖦_βΠ_β,i_2'. Therefore, it follows from equations (<ref>) and (<ref>) that conditional on the information ℋ_t, an expectation and covariance matrix of log market value process at time T under the real probability measure ℙ are given by the following equations μ_T|t^Ṽ(ℋ_t):=𝔼[Ṽ_T|ℋ_t]=J_Vμ_T|t(ℋ_t) and Σ_T|t^Ṽ(ℋ_t) := Var[Ṽ_T|ℋ_t]=J_VΣ_T,T|t(ℋ_t)J_V'. Consequently, due to equation (<ref>), conditional on ℋ_t, a distribution of the log asset value process at time T is given by Ṽ_T^a | ℋ_t∼𝒩(μ_T|t^a(ℋ_t),Σ_T|t^a(ℋ_t)) under the real probability measure ℙ, where μ_T|t^a(ℋ_t):=W_T^aμ_T|t^Ṽ(ℋ_t)+(G_T^a)^-1h_T^a and Σ_T|t^a(ℋ_t):=W_T^aΣ_T|t^Ṽ(ℋ_t)(W_T^a)' are conditional mean and covariance matrix of the log asset value Ṽ_T^a, respectively, given the information ℋ_t. According to the structural model of default risk, if the asset value of a company falls below the default threshold, representing liabilities, then default occurs. 
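Before turning to default probabilities, note that the call and put price vectors above are componentwise lognormal (Black–Scholes-type) expressions. A minimal sketch, with hypothetical inputs mu_a and Sig_a for the conditional mean vector and covariance matrix of the log asset values under the forward measure, B_t for the zero-coupon bond price, and L for the liability vector, is:

import numpy as np
from scipy.stats import norm

def call_put_vectors(B_t, mu_a, Sig_a, L):
    var = np.diag(Sig_a)                   # D[Sigma]: componentwise variances
    sd = np.sqrt(var)
    d1 = (mu_a + var - np.log(L)) / sd     # d^1_{T|t}
    d2 = d1 - sd                           # d^2_{T|t}
    fwd = np.exp(mu_a + 0.5 * var)         # componentwise forward-measure mean of V_T^a
    call = B_t * (fwd * norm.cdf(d1) - L * norm.cdf(d2))
    put = B_t * (L * norm.cdf(-d2) - fwd * norm.cdf(-d1))
    return call, put

The risk-neutral equity and liability values then follow from these call and put vectors exactly as in the last pair of equations above.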
Therefore, due to equation (<ref>), conditional on the information ℋ_t, the default probability at time t of the company is given by the following equation ℙ[V_T^a≤L̅|ℋ_t]=ℙ[Ṽ_T^a≤ln(L̅)|ℋ_t]=Φ_n((Σ_T|t^a(ℋ_t))^-1(ln(L̅)-μ_T|t^a(ℋ_t))), where L̅ is the default threshold vector at maturity T and for a random vector Z∼𝒩(0,I_n), Φ_n(z):=ℙ(Z≤ z) is a joint distribution function of the random vector Z. As a result, by the tower property of conditional expectation formula and Lemma <ref>, we get that ℙ[V_T^a≤L̅|ℱ_t]=∑_s∫_C_ŝ,Γ_ŝℙ[V_T^a≤L̅|ℋ_t]f(C_ŝ,Γ_ŝ,s|ℱ_t)dC_ŝ dΓ_ŝ. § PARAMETER ESTIMATION To estimate parameters of the required rate of return k̃_t, Battulga23b used the maximum likelihood method and Kalman filtering. For Bayesian method, which removes duplication in regime vector, we refer to Battulga24g. In this section, we assume that coefficient matrices C_1,…,C_N and covariance matrices Σ_1,…,Σ_N are deterministic. Here we apply the EM algorithm to estimate parameters of the model. If we combine the equations (<ref>) and (<ref>), then we have that B_0y_t=C_s_tψ_t+B_1y_t-1+ξ_t, where y_t:=(k̃_t',r̃_t)' is an (ñ× 1) vector of endogenous variables, C_s_t is the (ñ× l) matrix, which depends on the regime s_t, d:=(δ',1)' is an (ñ× 1) vector, the (ñ×ñ) matrices are given by B_0:=[ I_2n δ; 0 1 ]   and    B_1:=[ 0 1; 0 0 ]. For t=0,…,T, let 𝒴_t be the available data at time t, which is used to estimate parameters of the model, that is, 𝒴_t:=σ(d̃_0,r̃_0,y_1,…,y_t). Then, it is clear that the log–likelihood function of our model is given by the following equation ℒ(θ)=∑_t=1^Tln(f(y_t|𝒴_t-1;θ)) where θ:=(vec(C_1)',…,vec(C_N)',vec(Σ_1)',…,vec(Σ_N)',vec(𝖯)')' is a vector, which consists of all population parameters of the model and f(y_t|𝒴_t-1;θ) is a conditional density function of the random vector y_t given the information 𝒴_t-1. The log–likelihood function is used to obtain the maximum likelihood estimator of the parameter vector θ. Note that the log–likelihood function depends on all observations, which are collected in 𝒴_T, but does not depend on regime–switching process s_t, whose values are unobserved. If we assume that the regime–switching process in regime j at time t, then because conditional on the information 𝒴_t-1, ξ_t follows a multivariate normal distribution with mean zero and covariance matrix Σ_j, the conditional density function of the random vector y_t is given by the following equation η_t,j := f(y_t|s_t=j,𝒴_t-1;α) = 1/(2π)^ñ/2|Σ_j|^1/2exp{-1/2(B_0y_t-C_jψ_t-B_1y_t-1)'Σ_j^-1(B_0y_t-C_jψ_t-B_1y_t-1)} for t=1,…,T and j=1,…,N, where α:=(vec(C_1)',…,vec(C_N)',vec(Σ_1)',…,vec(Σ_N)')' is a parameter vector, which differs from the vector of all parameters θ by the transition probability matrix 𝖯. For all t=1,…,T, we collect the conditional density functions of the price at time t into an (N× 1) vector η_t, that is, η_t:=(η_t,1,…,η_t,N)'. Let us denote a probabilistic inference about the value of the regime–switching process s_t equals to j, based on the information 𝒴_t and the parameter vector θ by ℙ(s_t=j|𝒴_t,θ). Collect these conditional probabilities ℙ(s_t=j|𝒴_t,θ) for j=1,…,N into an (N× 1) vector z_t|t, that is, z_t|t:=(ℙ(s_t=1|𝒴_t;θ),…,ℙ(s_t=N|𝒴_t;θ))'. Also, we need a probabilistic forecast about the value of the regime–switching process at time t+1 equals j conditional on data up to and including time t. Collect these forecasts into an (N× 1) vector z_t+1|t, that is, z_t+1|t:=(ℙ(s_t+1=1|𝒴_t;θ),…,ℙ(s_t+1=N|𝒴_t;θ))'. 
The probabilistic inference and forecast for each time t=1,…,T can be found by iterating on the following pair of equations: z_t|t=(z_t|t-1⊙η_t)/i_N'(z_t|t-1⊙η_t)   and   z_t+1|t=𝖯̂'z_t|t,   t=1,…,T, see book of Hamilton94, where η_t is the (N× 1) vector, whose j-th element is given by equation (<ref>), 𝖯̂ is the (N× N) transition probability matrix, which is defined by omitting the first row of the matrix 𝖯, and i_N is an (N× 1) vector, whose elements equal 1. Given a starting value z_1|0 and an assumed value for the population parameter vector θ, one can iterate on (<ref>) for t=1,…,T to calculate the values of z_t|t and z_t+1|t. To obtain MLE of the population parameters, in addition to the inferences and forecasts we need a smoothed inference about the regime–switching process is in at time t based on full information 𝒴_T. Collect these smoothed inferences into an (N× 1) vector z_t|T, that is, z_t|T:=(ℙ(s_t=1|𝒴_T;θ),…,ℙ(s_t=N|𝒴_T;θ))'. The smoothed inferences can be obtained by using the Battulga24g's exact smoothing algorithm: z_T-1|T=1/i_N'(z_T|T-1⊙η_t)(𝖯̂𝖧_Ti_N)⊙ z_T-1|T-1 and for t=T-2,…,1, z_t|T=1/i_N'(z_t+1|t⊙η_t+1)(𝖯̂𝖧_t+1(z_t+1|T⊘ z_t+1|t+1))⊙ z_t|t, where ⊘ is an element–wise division of two vectors and 𝖧_t+1:=diag{η_t+1,1,…,η_t+1,N} is an (N× N) diagonal matrix. For t=2,…,T, joint probability of the regimes s_t-1 and s_t is ℙ(s_t-1=i,s_t=j|ℱ_t;θ)=(z_t|T)_jη_t,jp_s_t-1s_t(z_t-1|t-1)_i/(z_t|t)_ji_N'(z_t|t-1⊙η_t), where for a generic vector o, (o)_j denotes j–th element of the vector o. The EM algorithm is an iterative method to obtain (local) maximum likelihood estimate of parameters of distribution functions, which depend on unobserved (latent) variables. The EM algorithm alternates an expectation (E) step and a maximization (M) step. In E–Step, we consider that conditional on the full information 𝒴_T and parameter at iteration k, θ^[k], expectation of augmented log–likelihood of the data 𝒴_T and unobserved (latent) transition probability matrix 𝖯. The E–Step defines a objective function ℒ, namely, ℒ = 𝔼[-Tñ/2ln(2π)-1/2∑_t=1^T∑_j=1^Nln(Σ_j)1_{s_t=j} - 1/2∑_t=1^T∑_j=1^N(B_0y_t-C_jψ_t-B_1y_t-1)'Σ_j^-1(B_0y_t-C_jψ_t-B_1y_t-1) + ∑_j=1^Np_0j1_{s_1=j}+∑_t=2^T∑_i=1^N∑_j=1^Nln(p_ij)1_{s_t-1=i,s_t=j}-∑_i=0^Nμ_i(∑_j=1^Np_ij-1)|𝒴_T;θ^[k]] In M–Step, to obtain parameter estimate of next iteration θ^[k+1], one maximizes the objective function with respect to the parameter θ. First, let us consider partial derivative from the objective function with respect to the parameter C_j for j=1,…,N. Let c_j is a vectorization of the matrix C_j, i.e., c_j=vec(C_j). Since C_jψ_t=(ψ_t'⊗ I_2n+1)c_j, we have that ∂ℒ/∂ c_j'=∑_t=1^T(B_0y_t-(ψ_t'⊗ I_2n+1)c_j-B_1 y_t-1)'(Σ_j^[k])^-1(ψ_t'⊗ I_2n+1)(z_t|T^[k])_j. Consequently, an estimator at iteration (k+1) of the parameter c_j is given by c_j^[k+1] = (∑_t=1^T(ψ_t⊗ I_2n+1)(Σ_j^[k])^-1(ψ_t⊗ I_2n+1)(z_t|T^[k])_j)^-1 × ∑_t=1^T(ψ_t⊗ I_2n+1)(Σ_j^[k])^-1(B_0y_t-B_1y_t-1)(z_t|T^[k])_j. As a result, an estimator at iteration (k+1) of the parameter C_j is given by C_j^[k+1]=(B_0y̅_j^[k]-B_1y̅_j,-1^[k])(ψ̅_j^[k])'(ψ̅_j^[k](ψ̅_j^[k])')^-1, where y̅_j^[k]:=[y_1√((z_1|T^[k])_j):…:y_T√((z_T|T^[k])_j)] is a (ñ× T) matrix, y̅_j,-1^[k]:=[y_0√((z_1|T^[k])_j):…:y_T-1√((z_T|T^[k])_j)] is a (ñ× T) matrix, and ψ̅_j^[k]:=[ψ_1√((z_1|T^[k])_j):…:ψ_T√((z_T|T^[k])_j)] is an (l× T) matrix. 
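As a concrete illustration of the filtering and smoothing steps entering the E–step (the remaining M–step updates follow below), the sketch below implements the forward recursion above together with a standard Kim-type backward smoother, used here in place of the exact smoothing algorithm of Battulga24g; the rows of eta hold the regime densities η_t,j defined above, P_hat is the transition matrix 𝖯̂, and z_init is the starting inference z_1|0:

import numpy as np

def hamilton_filter_smoother(eta, P_hat, z_init):
    T, N = eta.shape
    z_pred = np.empty((T, N))             # z_{t|t-1}
    z_filt = np.empty((T, N))             # z_{t|t}
    z_pred[0] = z_init
    for t in range(T):
        w = z_pred[t] * eta[t]
        z_filt[t] = w / w.sum()
        if t + 1 < T:
            z_pred[t + 1] = P_hat.T @ z_filt[t]
    z_smooth = np.empty((T, N))           # z_{t|T}
    z_smooth[-1] = z_filt[-1]
    for t in range(T - 2, -1, -1):
        z_smooth[t] = z_filt[t] * (P_hat @ (z_smooth[t + 1] / z_pred[t + 1]))
    return z_filt, z_pred, z_smooth

The smoothed probabilities (z_t|T)_j produced in this way are the weights that enter the updates for C_j^[k+1], Σ_j^[k+1] and p_ij^[k+1].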
Second, a partial derivative from the objective function with respect to the parameter Σ_j for j=1,…,N is given by ∂ℒ/∂Σ_j = -1/2Σ_j^-1∑_t=1^T(z_t|T^[k])_j + 1/2∑_t=1^TΣ_j^-1(y_t-C_j^[k]ψ_t-D y_t-1)(y_t-C_j^[k]ψ_t-D y_t-1)'Σ_j^-1(z_t|T^[k])_j. Consequently, an estimator at iteration (k+1) of the parameter Σ_j is given by Σ_j^[k+1]=1/∑_t=1^T(z_t|T^[k])_j∑_t=1^T(B_0y_t-C_j^[k]ψ_t-B_1 y_t-1)(B_0y_t-C_j^[k]ψ_t-B_1 y_t-1)'(z_t|T^[k])_j. Third, a partial derivative from the objective function with respect to the parameter p_ij for i,j=1,…,N is given by ∂ℒ/∂ p_ij=1/p_ij∑_t=2^Tℙ(s_t-1=i,s_t=j|ℱ_T;θ^[k])-μ_i. Consequently, an estimator at iteration (k+1) of the parameter p_ij is given by p_ij^[k+1]=1/∑_t=2^T(z_t|T^[k])_i∑_t=2^Tℙ(s_t-1=i,s_t=j|ℱ_T;θ^[k]) where the joint probability ℙ(s_t-1=i,s_t=j|ℱ_T;θ^[k]) is calculated by equation (<ref>). Fourth, a partial derivative from the objective function with respect to the parameter p_0j for j=1,…,N is given by ∂ℒ/∂ p_0j=1/p_0jℙ(s_1=j|ℱ_T;θ^[k])-μ_0. Consequently, an estimator at iteration (k+1) of the parameter p_0j is given by p_0j^[k+1]=(z_1|T^[k])_j. Alternating between these steps, the EM algorithm produces improved parameter estimates at each step (in the sense that the value of the original log–likelihood is continually increased) and it converges to the maximum likelihood (ML) estimates of the parameters. § CONCLUSION In this paper, we developed the Merton's structural model for public companies under an assumption that liabilities of the companies are observed. By modeling the market values of equities, liabilities and assets of companies using the 's approximation method, we obtain formulas for risk–neutral equity and liability values and default probabilities of the companies. Finally, we study ML estimators of suggested model's parameters. It is worth mentioning that following the ideas in Battulga24e one can develop option pricing formulas with default risk and portfolio selection theory with default risk for public companies. § TECHNICAL ANNEX Here we give the Propositions, Corollary, and Lemmas, which are used in the paper and their proofs. Let X∼𝒩(μ,σ^2). Then for all K>0, 𝔼[(e^X-K)^+]=exp{μ+σ^2/2}Φ(d_1)-KΦ(d_2) and 𝔼[(K-e^X)^+]=KΦ(-d_2)-exp{μ+σ^2/2}Φ(-d_1), where d_1:=(μ+σ^2-ln(K))/σ, d_2:=d_1-σ, and Φ(x)=∫_-∞^x1/√(2π)e^-u^2/2du is the cumulative standard normal distribution function. See, e.g., Battulga24a and Battulga24e. Let us denote conditional on a generic σ-field 𝒪, a joint density functions of a generic random vector X by f(X|𝒪) and f̃(X|𝒪) under ℙ and ℙ̃, respectively, and let 𝒥_t:=σ(C̅_t)∨σ(Γ̅_t)∨σ(s̅_t)∨ℱ_0. Then, the following Lemmas hold. Conditional on ℱ_t, a joint density of (Π_ŝ,Γ_ŝ,s,𝖯) is given by f̃(C_ŝ,Γ_ŝ,s,𝖯|ℱ_t)=f̃(y̅_t|C_α,Γ_α,s̅_t,ℱ_0)f(C_ŝ,Γ_ŝ|ŝ,ℱ_0)f(s,𝖯|ℱ_0)/∑_s̅_t(∫_C_α,Γ_αf̃(y̅_t|C_α,Γ_α,s̅_t,ℱ_0)f(C_α,Γ_α|α,ℱ_0)dC_α dΓ_α)f(s̅_t|ℱ_0) for t=1,…,T, where for t=1,…,T, f̃(y̅_t|C_α,Γ_α,s̅_t,ℱ_0)=1/(2π)^nt/2|Σ_11|^1/2exp{-1/2(y̅_t-μ̃_1)'Σ̃_11^-1(y̅_t-μ̃_1)} with μ̃_1:=(μ̃_1|0(ℋ_0),…,μ̃_t|0(ℋ_0))' and Σ̃_11:=(Σ̃_i_1,i_2|0(ℋ_0))_i_1,i_2=1^t. In particular, we have that f̃(C_ŝ,Γ_ŝ,s|ℱ_t)=f̃(y̅_t|C_α,Γ_α,s̅_t,ℱ_0)f(C_ŝ,Γ_ŝ|ŝ,ℱ_0)f(s|ℱ_0)/∑_s̅_t(∫_C_α,Γ_αf̃(y̅_t|C_α,Γ_α,s̅_t,ℱ_0)f(C_α,Γ_α|α,ℱ_0)dC_α dΓ_α)f(s̅_t|ℱ_0) for t=1,…,T. See, e.g., Battulga24a. Let f(y̅_t|C_α,Γ_α,s̅_t,ℱ_0)=1/(2π)^nt/2|Σ_11|^1/2exp{-1/2(y̅_t-μ_1)'Σ_11^-1(y̅_t-μ_1)}, where μ_1:=(μ_1|0(ℋ_0),…,μ_t|0(ℋ_0))' and Σ_11:=(Σ_i_1,i_2|0(ℋ_0))_i_1,i_2=1^t. 
Then, we have that f(C_ŝ,Γ_ŝ,s|ℱ_t)=f(y̅_t|C_α,Γ_α,s̅_t,ℱ_0)f(C_ŝ,Γ_ŝ|ŝ,ℱ_0)f(s|ℱ_0)/∑_s̅_t(∫_C_α,Γ_αf(y̅_t|C_α,Γ_α,s̅_t,ℱ_0)f(C_α,Γ_α|α,ℱ_0)dC_α dΓ_α)f(s̅_t|ℱ_0) for t=1,…,T. By following Battulga24a, one can prove the Lemma 3. apacite
http://arxiv.org/abs/2406.18890v1
20240627050233
Spectral dimension of $p$-adic integers
[ "Surajit Biswas", "Bipul Saurabh" ]
math.QA
[ "math.QA", "46L87" ]
§ ABSTRACT The notion of spectral dimension was introduced by Chakraborty and Pal in <cit.>. In this paper, we show that the spectral dimension of the ring of p-adic integers, ℤ_p, is equal to its manifold dimension, which is 0. Finally, we determine the K-groups of ℤ_p, and show that the generators of K_0(ℤ_p) can be expressed as finite span of the characters of ℤ_p. p-adic Integers; Characters. [2020] 46L87 § INTRODUCTION Connes' formulation of noncommutative geometry centers around the concept of spectral triples <cit.>. These triples originate from fundamental properties of Dirac-type operators on manifolds. Importantly, a spectral triple for the algebra of continuous functions and Dirac operators on L^2 spinors has the remarkable ability to fully reconstruct a closed Riemannian manifold with a spin structure. This achievement is made possible through Connes' reconstruction theorem <cit.> and the contributions of Lord et al <cit.>. Noncommutative concepts, described by spectral triples, provide a distinct perspective on the group of p-adic integers, redefining them as differentiable spaces. Operators associated with p-adic numbers hold significant number-theoretic implications, potentially giving rise to captivating spectral functions. Of note, Connes' unpublished construction of a spectral triple for the algebra of continuous functions on the Cantor set is documented in <cit.>. Klimek et al <cit.> construct a spectral triple for the C^*-algebra of continuous functions on the space of p-adic integers. Their approach involves utilizing a rooted tree derived from a coarse-grained approximation of the space and applying the forward derivative on this tree. They not only validate the spectral triple's compliance with the traits of a compact spectral metric space but also establish its equivalence to the traditional p-adic metric on the space of p-adic integers. Motivated by Connes' definition of dimension for spectral triples,Chakraborty and Pal <cit.> introduced the spectral dimension as an invariant for ergodic C^*-dynamical systems. They conjectured that for a homogeneous space of a classical compact Lie group, the spectral dimension matches its dimension as a differentiable manifold. The spectral dimensions of SU(2), quaternion sphere H^n, and sphere S^n computed in <cit.>, <cit.> and <cit.> further bolster Chakraborty and Pal's conjecture. Furthermore, in <cit.>, spectral dimensions were computed for the noncommutative torus, q-deformation of SU(l+1) (l≥ 1), and the Cuntz algebra A_u(Q). The spectral dimension of the q-deformation of U(2) was computed in <cit.>. In this article, Section <ref> computes the spectral dimension of the group of p-adic integers. Section <ref> describes the K-groups of ℤ_p. § PRELIMINARIES In this section, we recall the notion of the spectral dimension of ahomogeneous space from the reference <cit.>. Given an associative unital ^*-algebra 𝒜, a spectral triple for 𝒜 comprises a triple (ℋ,π, D) where * ℋ is a complex separable Hilbert space, * π:𝒜→ℒ(ℋ) is a faithful ^*-representation, * D is a self-adjoint operator with compact resolvent, satisfying [D,π(a)]∈ℒ(ℋ) for all a∈𝒜. For clarity, we use (ℋ,A,D) when the representation π is clear from the context. A spectral triple is termed s-summable if |I+D^2|^-s/2 is in the ideal ℒ^1 of trace-class operators. Since D has compact resolvent, this is equivalent to asserting that |D|^-s is trace-class on the complement of its kernel. 
<cit.> A compact quantum group (C(G),Δ) includes a unital C^*-algebra C(G) and a unital ^*-homomorphism Δ: C(G)→ C(G)⊗ C(G) satisfying * (Δ⊗ id)Δ = (id⊗Δ)Δ, * Both {(a⊗ I)Δ (b): a,b∈ C(G)} and {(I⊗ a)Δ (b): a,b∈ C(G)}densely span C(G)⊗ C(G). Every compact quantum group possesses a unique invariant Haar state h. The Haar state's invariance is described as follows: (h⊗ id)Δ (a)=h(a)I=(id⊗ h)Δ (a) for all a∈ A. A compact quantum group (C(G),Δ) acts on a C^*-algebra A through a ^*-homomorphism τ: A→ A⊗ C(G) satisfying * (τ⊗ id)τ = (id⊗Δ)τ, * {(I⊗ b)τ (a): a∈ A, b∈ C(G)} densely spans A⊗ C(G). A C^*-algebra A is called a homogeneous space of C(G) when the fixed point subalgebra a∈ A:τ(a)=a⊗ I is ℂI. In such case, the action τ is called ergodic, and (A,C(G),τ) is called an ergodic C^*-dynamical system. A covariant representation (π, u) of a C^*-dynamical system (A,C(G),τ) consists of a unital ^*-representation π : A→ℒ(ℋ), a unitary representation u of (C(G),Δ) on ℋ (i.e. a unitary element of the multiplier algebraM(𝒦(ℋ)⊗ C(G)) with (id⊗Δ)(u)=u_12u_13), fulfilling the condition (π⊗ id)τ(a) = u(π (a)⊗ I)u^* for all a∈ A. In the previous definition, for j=2,3, define u_1j as ϕ_1j(u), where ϕ_1j: 𝒦(ℋ)⊗ C(G) →𝒦(ℋ)⊗ C(G)⊗ C(G) is given by ϕ_12(T⊗ a) = T⊗ a⊗ I and ϕ_13(T⊗ a) = T⊗ I⊗ a for all T∈𝒦(ℋ) and a∈ C(G). For a compact Hausdorff space G, M(𝒦(ℋ)⊗ C(G)) is isomorphic to C_b^str(G,ℒ(ℋ)), where C_b^str(G,ℒ(ℋ)) represents the set of bounded continuous functions from G to ℒ(ℋ) equipped with the strict topology. The convergence of a net ⟨ T_i⟩_i to T in ℒ(ℋ) in the strict sense is characterized by T_iS-TS→ 0 and ST_i-ST→ 0 for all compact operators S∈𝒦(ℋ). For a C^*-dynamical system (A,C(G),τ), an operator D acting on a Hilbert space ℋ is equivariant with respect to a covariantrepresentation (π, u) of the system if D⊗ I commutes with u. When (π,u) is a covariant representation of (A,C(G),τ) on a Hilbert space ℋ and (ℋ,π,D) is a spectral triple for a dense ^*-subalgebra 𝒜 of A, (ℋ,π, D) is considered equivariant with respect to (π,u) if D is equivariant with respect to (π,u). A homogeneous space A for a compact quantum group (C(G),Δ) has an invariant state ρ satisfying (ρ⊗ id)τ(a)=ρ (a)I, a∈ A. This invariant state ρ is unique and relates to the Haar state h on (C(G),Δ) through the equality (id⊗ h)τ(a)=ρ (a)I, a∈ A. Given an ergodic C^*-dynamical system (A,C(G),τ) with unique invariant state ρ, let (ℋ_ρ,π_ρ,η_ρ) denote the GNS representation associated with ρ, i.e. ℋ_ρ is a Hilbert space, η_ρ: A→ℋ_ρ is linear with η_ρ (A) dense in ℋ_ρ, and ⟨η_ρ (a),η_ρ (b)⟩=ρ(a^*b); and π_ρ:A→ℒ((ℋ_ρ)) is the ^*-representation of A on ℋ_ρ defined by π_ρ (a)η_ρ (b)=η_ρ (ab). The action τ induces a unitary representation u_τ of (C(G),Δ) on ℋ_ρ, making (π_ρ, u_τ) a covariantrepresentation of the system (A,C(G),τ). Let 𝒪(G) be the dense ^*-subalgebra of C(G) generated by the matrix entries of irreducible unitary representations of (C(G),Δ). We define 𝒜={a∈ A:τ(a)∈ A⊗_alg𝒪(G)}; by <cit.>, 𝒜 is a dense ^*-subalgebra of A. Let ℰ be the class of spectral triples for 𝒜 equivariant with respect to the covariant representation (π_ρ,u_τ). We define the spectral dimension of the system (A,C(G),τ) as the quantity inf{s>0:∃ D such that (ℋ_ρ,π_ρ,D)∈ℰ and D is s-summable}. We will denote this number by 𝒮dim(A,C(G),τ). For a compact group G, in the ergodic C^*-dynamical system (C(G),C(G),Δ), we have 𝒜=𝒪(G). Let a∈𝒜 with Δ (a)=∑_i=1^n a_i⊗χ_i for a_i∈ A and χ_i∈𝒪(G). 
Using the identity element 0 of G, we find that a(y)=Δ (a)(0,y)=∑_i=1^n a_i(0)·χ_i(y) for all y∈ G. Thus, a=∑_i=1^n a_i(0)·χ_i∈𝒪(G). § COMPUTATION OF SPECTRAL DIMENSION We begin by recalling the group of p-adic integers from <cit.>, discussing its defnition and key topological characteristics. Consider the prime number p. For any nonzero rational number x, express it as x = p^v_p(x)x_1, where x_1 is a rational number coprime to p, meaning that when written in its simplest form, both the numerator and denominator are coprime to p. We define the p-adic absolute value of x, denoted as |x|_p, by the formula |x|_p=p^-v_p(x). For the case of x = 0, we set |0|_p = 0. It can be readily verified that |·|_p constitutes a norm on ℚ. Let R be the set of all sequences ⟨ x_n ⟩_n=1^∞ in ℚ, which are Cauchy with respect to the norm |·|_p. Addition and multiplication of sequences are defined pointwise: ⟨ x_n⟩_n=1^∞ + ⟨ y_n⟩_n=1^∞ = ⟨ x_n +y_n ⟩_n=1^∞, ⟨ x_n⟩_n=1^∞·⟨ y_n⟩_n=1^∞ = ⟨ x_n· y_n⟩_n=1^∞. Hence, (R, +, ·) forms a commutative ring. Moreover, the subset 𝔪 of R that consists of null Cauchy sequences, i.e. sequences that converge to zero, is a maximal ideal. Consequently, the quotient ring R/𝔪 becomes a field. We can include ℚ in R through the mapping x ↦ (x, x, …), which is clearly a Cauchy sequence. Thus, we regard ℚ as a subfield of R/𝔪. This completion of ℚ with respect to |·|_p is denoted as ℚ_p, and its elements are called p-adic numbers. The set ℤ_p = {x ∈ℚ_p : |x|_p ≤ 1} is a subring of ℚ_p, referred to as the ring of p-adic integers. Evidently, ℤ (the set of integers) forms a dense subset within ℤ_p. This implies that elements of ℤ_p can be regarded as formal power series ∑_n=0^∞ x_np^n, where 0≤ x_n≤ p-1. The topological group (ℤ_p,+) is well-established as compact, totally disconnected, and Hausdorff. As a compact abelian group, ℤ_p possesses a unique probability Haar measure denoted as μ. <cit.> A character of a locally compact abelian group (G,+) is a continuous group homomorphism χ:G→𝕋, where 𝕋 is the circle group, i.e. the multiplicative group of all complex numbers of absolute value one. The set of characters of the group G form a group under pointwise multiplication, called the dual group and denoted G. Let S={(1,0)}∪{(m,n)∈ℕ:m<p^n, p∤ m}. The dual group ℤ_p is isomorphic to the group of p-power roots of unity, i.e., ℤ_p≅{e^2π im/p^n:(m,n)∈ S}. For any character χ of ℤ_p, we have χ (1)=e^2π im/p^n for some (m,n)∈ S. Let χ be a character of ℤ_p. Since ℤ is dense within ℤ_p, the values of χ (ℤ) uniquely determine χ. Considering that ℤ is a cyclic group generated by 1, this implies that χ is fully characterized by χ (1). As p^r tends to 0 within ℤ_p as r approaches infinity, the sequence χ (p^r)=χ (1)^p^r converges to χ (0)=1. Consequently, we find that χ (1)=e^2π im/p^n for some (m,n)∈ S. Henceforth, we adopt the notation χ_m,n for all (m,n)∈ S to represent the character of ℤ_p, where χ_m,n(1)=e^2π im/p^n. We consider C(ℤ_p) both as a compact quantum group and an unital C^*-algebra and compute the spectral dimension of the natural ergodic C^*-dynamical system (C(ℤ_p),C(ℤ_p),Δ) associated with ℤ_p. The Haar state on C(ℤ_p) serves as an invariant Haar state for the homogeneous space C(ℤ_p). The relevant covariant representation of this system is given by the triple (L^2(ℤ_p), π, u), where L^2(ℤ_p) is the GNS Hilbert space corresponding to the Haar state on C(ℤ_p), π is the representation of C(ℤ_p) on L^2(ℤ_p) through left multiplication, and u is the right regular representation. 
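As a small numerical illustration (not part of the paper) of the objects just recalled, namely the p-adic absolute value |·|_p, the digit expansion of elements of ℤ_p, and the characters χ_m,n, the following ad hoc Python sketch may be useful:

from fractions import Fraction
import cmath

def v_p(x, p):
    # p-adic valuation of a nonzero rational x
    x = Fraction(x)
    v, num, den = 0, x.numerator, x.denominator
    while num % p == 0:
        num //= p; v += 1
    while den % p == 0:
        den //= p; v -= 1
    return v

def abs_p(x, p):
    return 0.0 if x == 0 else float(p) ** (-v_p(x, p))

def digits_p(k, p, n_digits):
    # first n_digits base-p digits x_0, x_1, ... of a nonnegative integer k
    out = []
    for _ in range(n_digits):
        out.append(k % p); k //= p
    return out

def chi(m, n, p, k):
    # chi_{m,n} evaluated at an ordinary integer k: exp(2*pi*i*m*k / p^n)
    return cmath.exp(2j * cmath.pi * m * k / p**n)

For instance, abs_p(18, 3) returns 1/9 since 18 = 2·3^2, digits_p(10, 3, 4) returns [1, 0, 1, 0], and the values chi(m, n, p, k) on ordinary integers determine χ_m,n completely because ℤ is dense in ℤ_p.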
The spectral dimension of ℤ_p is 0. Consider the equivariant self-adjoint operator D with compact resolvent specified by D(χ_m,n)=((n+1)^2p^n+1)^1/sχ_m,n for all (m,n)∈ S∖{(1,0)}. For (m,n), (k,l)∈ S with n≤ l, it follows that [D,π(χ_m,n)](χ_k,l) = 0. This implies that [D,π(χ_m,n)] is in ℒ(L^2(ℤ_p)), being zero on the complement of the finite dimensional space spanned by χ_k,l where l<n. Moreover, we have Tr|D|^-s=∑_(m,n)∈ S1/(n+1)^2 p^n+1 =1/p+∑_n=2^∞1/(n+1)^2 p^n+1(p^n - p^n-1) = 1/p + (1- 1/p)1/p∑_n=1^∞1/n^2<∞, thus yielding 𝒮dim(C(ℤ_p),C(ℤ_p),Δ)=0. § K-GROUPS ℤ_p can alternatively be identified as the inverse (or projective) limit of the system {⟨ℤ/p^nℤ⟩_n∈ℕ,⟨Φ_n⟩_n=2^∞}, where for each n∈ℕ with n≥ 2, the transition map Φ_n:ℤ/p^nℤ→ℤ/p^n-1ℤ is defined as Φ_n(x p^n):= x p^n-1 for all x∈ℤ. This identification is precisely given by the isomorphism ϕ:ℤ_p→ℤ/p^nℤ defined by ϕ(x)=⟨ϕ_n(x)⟩_n∈ℕ where for each n∈ℕ, ϕ_n:ℤ_p→ℤ/p^nℤ is the projection map defined by ϕ(∑_k=0^∞ x_k p^k) =∑_k=0^n-1x_k p^k + p^nℤ. Consequently, C(ℤ_p) can be seen as a C^*-algebraic inductive limit of the induced system {C(ℤ/p^nℤ),⟨Ψ_n⟩_n=2^∞}, where Ψ_n:C(ℤ/p^n-1ℤ)→ C(ℤ/p^nℤ) is the induced transition map defined by (Ψ_n f)(x p^n):= f (x p^n-1) for all f∈ C(ℤ/p^n-1ℤ) and x∈ℤ. The induced isomorphism ψ: C(ℤ/p^nℤ)→ C(ℤ_p) is given by ψ(⟨ f_n⟩_n∈ℕ)(x)=lim_n→∞f_nϕ_n(x) for all ⟨ f_n⟩_n∈ℕ∈ C(ℤ/p^nℤ) and x∈ℤ_p. As a result, this identification C(ℤ_p)= C(ℤ/p^nℤ) implies K_i(C(ℤ_p))= K_i(C(ℤ/p^nℤ)) for i=1,2. Since K_1(C(ℤ/p^nℤ))=0 for all n∈ℕ, it follows that K_1(C(ℤ_p))=0. For every natural number r, ℤ_p can be expressed as the disjoint union of p^r balls x+p^rℤ_p for x∈{0,1,…, p^r-1}. Consequently, each ball x+p^rℤ_p is both open and closed. For a subset B of a set X, we denote the characteristic function of B on X as 1_B. The group K_0(C(ℤ_p)) is generated by the equivalence classes of continuous functions 1_x+p^rℤ_p for all r∈ℕ and x∈{0,1,…,p^r-1}. Consider the generating subset 𝒦 of K_0(C(ℤ/p^nℤ)) consisting of elements ⟨ f_n⟩_n=1^∞ such that there exists r∈ℕ satisfying the recurrence relation f_n=Ψ_n f_n-1 for all n>r with the initial condition f_r=1_x+p^rℤ, or equivalently, we can express f_n as f_n=∑_l=1^n-r∑_i_l=0^p-11_x+∑_l=1^n-ri_l p^r+l-1+p^nℤ, where x∈{0,1,…, p^r-1}. Now, using the identification of K_0(C(ℤ/p^nℤ)) with K_0(C(ℤ_p)), we have 𝒦={f∈ K_0(C(ℤ_p)):f=1_x+p^rℤ_p for r∈ℕ and x∈{0,1,…, p^r-1}}. We recall a well-known continuous mapping from ℤ_p to the interval [0,1], known as the Monna map <cit.>. This mapping, denoted as T:ℤ_p→ [0,1], is defined by T(∑_k=0^∞ x_k p^k)=∑_k=0^∞x_k/p^k+1, where 0≤ x_k≤ p-1. The Monna map T is continuous with respect to the p-adic metric on ℤ_p and the standard Euclidean metric on [0,1]. Furthermore, it preserves measures, with T being measure-preserving concerning the probability Haar measure on ℤ_p and the Lebesgue measure λ on [0,1]. Additionally, we note that the set E={x∈ [0,1]: x has multiple base p representations} is countable and thus has measure zero. Consequently, when we restrict the map T to ℤ_p∖ T^-1[E], it becomes a bijection. As a result, the induced map T:L^2([0,1])→ L^2(ℤ_p) defined as T(f):=f∘ T, where f∈ L^2([0,1]), is unitary. For any r∈ℕ and x∈{0,1,…,p^r-1}, we have 1_x+p^rℤ_p=∑_(m,n)∈ S, n≤ re^-2π imx/p^n/p^rχ_m,n. Let r∈ℕ and x∈{0,1,…,p^r-1}, and note that T^-1(1_x+p^rℤ_p)=1_[T(x),T(x)+1/p^r]. Let (m,n)∈ S and denote ξ_m,n=e^2π im/p^n. 
We then have T^-1(χ_m,n)=lim_N→∞∑_x_0,x_1,…,x_N=0^p-1ξ_m,n^x_0+x_1p+⋯+x_Np^N1_[x_0/p + x_1/p^2 + ⋯ + x_N/p^N+1, x_0/p + x_1/p^2 +⋯ + x_N +1/p^N+1). The L^2-inner product ⟨1_x+p^rℤ_p,χ_m,n⟩ simplifies to ⟨1_x+p^rℤ_p,χ_m,n⟩ =⟨T^-1(1_x+p^rℤ_p),T^-1(χ_m,n)⟩ =⟨1_[T(x),T(x)+1/p^r], lim_N→∞∑_x_0,x_1,…,x_N=0^p-1ξ_m,n^x_0+x_1p+⋯+x_Np^N1_[x_0/p + ⋯ + x_N/p^N+1, x_0/p +⋯ + x_N +1/p^N+1)⟩ =lim_N→∞∫_T(x)^T(x)+1/p^r∑_x_0,x_1,…,x_N=0^p-1ξ_m,n^x_0+x_1p+⋯+x_Np^N1_[x_0/p + ⋯ + x_N/p^N+1, x_0/p +⋯ + x_N +1/p^N+1) dλ =lim_N→∞∑_x_r,…,x_N=0^p-1ξ_m,n^x + ∑_k=r^N x_kp^k∫_T(x)^T(x)+1/p^r1_[T(x) + ∑_k=r^Nx_k/p^k+1, T(x) + ∑_k=r^Nx_k/p^k+1 + 1/p^N+1) dλ =ξ_m,n^x lim_N→∞1/p^N+1∑_x_r,…, x_N=0^p-1ξ_m,n^∑_k=r^N x_kp^k =ξ_m,n^x lim_N→∞1/p^N+1∏_k=r^N(∑_l=0^p-1ξ_m,n^lp^k). Now, we consider two cases. If n≤ r, then for each k=r,r+1,…, N we have ξ_m,n^p^k=1, and therefore ξ_m,n^x lim_N→∞1/p^N+1∏_k=r^N(∑_l=0^p-1ξ_m,n^lp^k)=ξ_m,n^xlim_N→∞p^N-r+1/p^N+1=ξ_m,n^x/p^r, i.e. ⟨1_x+p^rℤ_p,χ_m,n⟩ = ξ_m,n^x/p^r. If n>r, then ∑_l=0^p-1ξ_m,n^lp^n-1=1-(ξ_m,n^p^n-1)^p/1-ξ_m,n^p^n-1=0, and therefore ⟨1_x+p^rℤ_p,χ_m,n⟩ = 0. This completes the proof. Acknowledgement: The first named author would like to thank Professor Partha Sarathi Chakraborty for his valuable suggestions and discussions. 1 [1]c1 A. Connes, Noncommutative geometry, Academic Press, Inc., San Diego, CA, 1994. xiv+661 pp. ISBN:0-12-185860-X. [2]c2 A. Connes, On the spectral characterization of manifolds, J. Noncommut. Geom. 7(2013), no.1, 1–82. [3]ci E. Christensen, C. Ivan, Spectral triples for AF C^*-algebras and metrics on the Cantor set, J. Operator Theory 56(2006), no.1, 17–46. [4]cp P.S. Chakraborty, A.K. Pal, An invariant for homogeneous spaces of compact quantum groups, Adv. Math. 301 (2016), 258–288. [5]de A. Deitmar, S. Echterhoff, Principles of harmonic analysis, Second edition, Universitext, Springer, Cham, 2014. xiv+332 pp. ISBN:978-3-319-05791-0, ISBN:978-3-319-05792-7. [6]gs S. Guin, B. Saurabh, Equivariant spectral triple for the quantum group U_q(2) for complex deformation parameters, J. Geom. Phys. 185(2023), Paper No. 104748, 22 pp. [7]kmr S. Klimek, M. McBride, S. Rathnayake, A p-adic spectral triple, J. Math. Phys. 55(2014), no.11, 113502, 16 pp. [8]lrv S. Lord, A. Rennie, J.C. Várilly, Riemannian manifolds in noncommutative geometry, J. Geom. Phys. 62(2012), no.7, 1611–1638. [9]m A.F. Monna, Sur une transformation simple des nombres P-adiques en nombres réels (French) Nederl. Akad. Wetensch. Proc. Ser. A 55 Indag. Math. 14 (1952), 1–9. [10]p P. Podleś, Symmetries of quantum spaces. Subgroups and quotient spaces of quantum SU(2) and SO(3) groups, Comm. Math. Phys. 170 (1) (1995) 1–20. [11]s B. Saurabh, Spectral dimension of quaternion spheres, Arch. Math. (Basel) 111(2018), no.1, 47–55. [12]s1 B. Saurabh, Spectral dimension of spheres, Comm. Algebra 48(2020), no.6, 2539–2554. [13]vvz V.S. Vladimirov, I.V. Volovich, E.I. Zelenov, p-adic analysis and mathematical physics, Ser. Soviet East European Math., 1, World Scientific Publishing Co., Inc., River Edge, NJ, 1994. xx+319 pp. ISBN:981-02-0880-4. [14]w S.L. Woronowicz, Compact quantum groups, Symétries quantiques (Les Houches, 1995), 845–884. North-Holland Publishing Co., Amsterdam, 1998, ISBN:0-444-82867-2. Surajit Biswas (, ) Department of Mathematics, Indian Institute of Technology, Gandhinagar, Palaj, Gandhinagar 382055, India Bipul Saurabh (, ) Department of Mathematics, Indian Institute of Technology, Gandhinagar, Palaj, Gandhinagar 382055, India
http://arxiv.org/abs/2406.19095v1
20240627112330
On the two-reactant one-step activation-energy asymptotics for steady, adiabatic, planar flames with Lewis numbers of unity
[ "Prabakaran Rajamanickam" ]
physics.flu-dyn
[ "physics.flu-dyn" ]
On the two-reactant one-step activation-energy asymptotics for steady, adiabatic, planar flames with Lewis numbers of unity Prabakaran RajamanickamEmail: prajaman@ucsd.edu Department of Mechanical and Aerospace Engineering, University of California San Diego, La Jolla, CA 92093–0411, USA July 1, 2024 =========================================================================================================================================================================== § ABSTRACT Aspects of predictions of activation-energy asymptotics concerning the dependence of the burning velocity on the equivalence ratio are examined here through both asymptotic analyses and numerical computation. In typical hydrocarbon-air flames, the burning velocity achieves its maximum value for fuel-rich mixture, the cause being generally attributed to the effects of detailed chemical kinetics and unequal diffusivities of the reactants. The present results demonstrate the possibility of this attribute of the burning velocity occurring even when these two effects are absent. This is accomplished by parametrically studying the burning-velocity formula valid for all equivalence ratios under the conditions specified in the title of this article, with special attention paid to implications for hydrocarbon-air flames. burning velocity; planar flames; activation-energy asymptotics; two-reactant flames; equivalence ratio § INTRODUCTION Concepts of activation-energy asymptotics (AEA) have played important roles in the description of premixed laminar flame structures ever since the work of Zel'dovich and Frank-Kamenetskii <cit.>. Resulting asymptotic formulas for burning velocities of two-reactant flames <cit.>, when plotted as functions of the equivalence ratio, possess attributes that depend on how individual factors in the formulas are varied. Study of the dependence of the burning velocity on the equivalence ratio was initiated by Clarke <cit.>, for reactants with unity Lewis number. While he anticipated that the burning velocity would reach a maximum for fuel-rich mixtures depending on how the mixture is formed, he did not quantify that idea. It was Sen and Ludford, who carried out the analysis, in a series of publications <cit.>, emphasizing mainly Lewis-number effects and product-dissociation effects, addressing an open question of the late 1970's, namely the extent to which the observed fuel-rich location of the burning-velocity maximum could be attributed to Lewis-number effects rather than to the detailed chemistry. They specifically identified two of the various conditions (to be mentioned later) under which the equivalence ratio can be varied, a constant fraction of inert (case I) and a constant ratio of inert fraction to oxidizer fraction (case II). Most of the experimental burning-velocity measurements that have been reported are for fuel-air mixtures, which correspond to their case II, and which is the condition to be discussed here. The works of Sen and Ludford emphasized near-stoichiometric conditions, based on the assumption that the peak burning-velocity occurs for slightly fuel-rich conditions, as has been summarized by Bechtold and Matalon <cit.>. Although this analysis was completed nearly forty years ago, there has been no more recent discussion of their considerations for equivalence ratios not close to unity. It is the purpose of this paper to address all equivalence ratios, without making any reference to Lewis-number effects. 
For the example of methane-air mixtures, application of leading-order AEA and numerical integration will be shown in this article to demonstrate that, the rich shift predicted by AEA and numerics is large, beyond the range of accuracy of near-stoichiometric AEA, lying instead in the range of the analysis of Clarke, as extended by Mitani <cit.> and Rogg <cit.>. Prospects for accurate use of AEA for other hydrocarbon-air mixtures also will be considered. § FORMULATION AND ASYMPTOTIC SOLUTION Although complicating factors such as variable properties and Stefan-Maxwell transport have been included in previous work <cit.>, the points to be addressed here may be based on simpler formulations <cit.>, for one-step Arrhenius chemistry with arbitrary reaction orders m and n with respect to the fuel and oxidizer, respectively. In addition, all Lewis numbers will be set equal to unity, thereby purposely ruling out influences of differential diffusion. Most of the discussion will pertain to m=n=1, the values assumed in the work of Clarke <cit.> and of Sen and Ludford <cit.>. The gas density ρ and the thermal diffusivity D_T are both constant in the formulation and in the numerical integrations to be reported. Under the given approximations, a temperature-explicit formulation applies. With T_o and T_∞ denoting the fresh-mixture and burnt-gas temperatures, the normalized dependent variable for the temperature T is τ=(T-T_o)/(T_∞-T_o), and the parameter α=(T_∞-T_o)/T_∞ measures the heat release. In terms of the laminar burning velocity S_L, the characteristic length D_T/S_L is introduced to define the nondimensional spatial coordinate x. The symbol ϕ will be employed for the conventional fuel-air equivalence ratio, so that 0< ϕ<∞. In terms of the activation energy E and the universal gas constant R, the Zel'dovich number, β=α E/(RT_∞), is the large parameter of expansion. Given an appropriate characteristic reciprocal-time pre-factor constant for the reaction rate, B, the burning-rate eigenvalue is Λ=(B D_T/S_L^2)^-E/(RT_∞). The differential equation to be solved, for instance for a lean mixture, then becomes ^2τ/ x^2 = τ/ x - Λ (1-τ)^m (1-ϕτ)^n exp[- β(1-τ)/1-α( 1-τ)], subject to τ approaching zero as x approaches -∞ and τ approaching unity as x approaches +∞. As is well known, in the limit of β approaching infinity, there is an upstream convective-diffusive zone in which τ is proportional to ^x, followed by an inner zone, with thickness of order x/β, that is reactive-diffusive at leading order and within which the order-unity dependent variable y=β(1-τ) must match the convective-diffusive solution as that variable approaches infinity. The problem is independent of α at leading order, when it is of order unity or smaller, and the equation depends on the scaling of ϕ, the most general choice for fuel-lean or stoichiometric mixtures being that γ_l = β(1-ϕ)/ϕ is a parameter of order unity. With this selection, matching at leading order produces β^m+n+1/2Λϕ^n = ∫_0^∞ t^m(t+γ_l)^n ^-t t ≡(m,n,γ_l), a result that in fact is also correct when the parameter γ_l is large or small in the expansion parameter β <cit.>. The function (m,n,γ_l) is a confluent hypergeometric function, expressible in the form (m,n,γ_l)=γ_l^m+n+1Γ(m+1)U(m+1,m+n+2,γ_l), where Γ is the gamma function and U is the Kummer's function of the second kind, and it reduces to Γ(m+n+1) at the stoichiometric condition γ_l=0, while approaching γ_l^nΓ(m+1) as γ_l approaches infinity. 
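For numerical evaluation of the burning-rate eigenvalue it is convenient to compute the function G either from its integral definition or from the Kummer-function representation just given; a minimal SciPy sketch (illustrative only, not part of the original analysis) is:

import numpy as np
from scipy import integrate, special

def G_quad(m, n, gamma):
    # direct quadrature of the defining integral
    f = lambda t: t**m * (t + gamma)**n * np.exp(-t)
    val, _ = integrate.quad(f, 0.0, np.inf)
    return val

def G_hyperu(m, n, gamma):
    # gamma^{m+n+1} Gamma(m+1) U(m+1, m+n+2, gamma), valid for gamma > 0
    return gamma**(m + n + 1) * special.gamma(m + 1) * special.hyperu(m + 1, m + n + 2, gamma)

At γ_l = 0 the quadrature reproduces the stoichiometric limit Γ(m+n+1), and for γ_l > 0 the two routes give the same value, e.g. G_quad(1, 1, 2.0) and G_hyperu(1, 1, 2.0) agree.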
The corresponding result for fuel-rich mixture turns out to be β^m+n+1/2Λϕ^1-m = (n,m,γ_r), where γ_r=β(ϕ-1). § VARIATIONS WITH EQUIVALENCE RATIO Although some one-step empirical correlations, especially, for autoignition times <cit.>, but also occasionally for burning velocities <cit.>, exhibit negative reaction orders for the fuel, for the great majority of fuels, as well as in studies directed towards revealing qualitative attributes of flame propagation, both m and n are positive. Under these usual conditions, achieves a minimum value at ϕ=1, increasing monotonically in moving away from stoichiometry. The fact that usually does not exhibit a maximum value at ϕ=1 affords the possibility of predicted burning velocities achieving maximum values at conditions far from stoichiometric. The specific form of the function S_L(ϕ) for given values of m and n depends on the variations with ϕ that are selected for other parameters, such as B and T_∞. The reciprocal time B, for example, is proportional to the product of two factors, one being the initial concentration of the oxidizer raised to the power n and the other the initial concentration of the fuel raised to the power m-1; at least one of these two factors must be changed to vary ϕ. In addition, the variation of T_∞ with ϕ depends on the specific set of experiments to be addressed. The adiabatic flame temperature T_∞ may be held fixed as ϕ is changed - a selection often made in counterflow flame experiments to remove the large effect of temperature variations on the chemical kinetics <cit.>. When that is done, the Arrhenius factor does not influence the function S_L(ϕ), but achieving a constant value of T_∞ necessitates decreasing the dilution of the mixture in moving away from the stoichiometric condition ϕ=1, for typical experiments in which the initial temperature T_o remains constant. There often is interest in varying the stoichiometry at fixed dilution, in which case the influence of the Arrhenius factor on S_L can be dominant, producing a maximum of the predicted burning velocity very close to ϕ=1 when the activation energy E is large. When realistic values of E and of other parameters are employed in the formula, at constant dilution the maximum of S_L(ϕ) often occurs away from stoichiometric conditions, which can be advantageous in fitting burning-velocity data for real flames that achieve maxima at fuel-rich conditions. Graphical presentations of computed laminar burning velocities serve to illustrate these results and to test the accuracies of the predictions of the asymptotic formulas. This is done here for a situation in which ϕ is varied by isothermal mixing of a fuel stream with an oxidizer stream, both streams being at the same temperature T_o. The variation of the adiabatic flame temperature T_∞ with ϕ is chosen to correspond to a constant heat capacity for the mixture, thereby determining the variation of β with the equivalence ratio. If the mixture is formed by combining diluted fuel and oxidizer streams, then the predicted variations of burning velocities depend on a stoichiometry parameter, the ratio of the mass of the oxygen required to burn the fuel in the fuel stream completely to the actual mass of the oxygen in the oxidizer stream, which will be denoted by S, resulting in, T_∞/T_∞,s = 1-α_s + α_s S+1/S+ϕϕ , for ϕ≤ 1 1, for ϕ≥ 1, where the subscript s identifies values evaluated at the stoichiometric condition, ϕ=1. 
For case I of Sen and Ludford <cit.>, S=ν, the stoichiometric mass ratio and for case II, S=ν(1+b), where b is the inert to oxidizer mass ratio; the curves to be shown here correspond to case II. The stoichiometry parameter S defined here will become the natural choice for non-uniform reactant mixtures, such as in premixed wings of the triple flames, upon which the study is motivated. Through its relationship to the adiabatic flame temperature T_∞(ϕ), the variation of the Zel'dovich number and the heat-release parameter can be found from, β/β_s = (T_∞,s/T_∞)^2ϕ(S+1)/(S+ϕ), for ϕ≤ 1 (S+1)/(S+ϕ), for ϕ≥ 1, and α/α_s = β/β_sT_∞/T_∞,s, at constant E. The reciprocal-time pre-exponential factor defined before becomes B/B_s = ϕ^m-1(S+1/S+ϕ)^m+n-1. To increase the generality of the results by avoiding the necessity of selecting particular values for other properties, such as ρ and D_T, the figures will show the ratio of the calculated burning velocity to the value of the burning velocity obtained from the (leading-order) asymptotic formula at the stoichiometric point (ϕ=1), plotted in terms of the equivalence ratio ϕ. In this scale, the leading-order asymptotic expression for the burning velocity is given by ϕ≤ 1: S_L/S_L∞,s = {(ϕ(S+1)/S+ϕ)^m+n-1(β_s/β)^m+n+1(m,n,γ_l)/Γ(m+n+1) ^β_s/α_s-β/α}^1/2, ϕ≥ 1: S_L/S_L∞,s = {(S+1/S+ϕ)^m+n-1(β_s/β)^m+n+1(n,m,γ_r)/Γ(m+n+1) ^β_s/α_s-β/α}^1/2 , where the first term in each of the foregoing expressions raised to the power m+n-1 is the ratio of upstream concentration of deficient reactant to its stoichiometric value. In the near-stoichiometric limit, only the last two factors in these expressions vary with ϕ at leading order, as noted by Sen and Ludford in their analysis <cit.>. Farther away from stoichiometric conditions, however, these terms vary at leading order, and there are other relevant variations, such as B(ϕ) and β(ϕ) that need to be taken into account. No previous publications have shown results which do that. § REPRESENTATIVE RESULTS Figure <ref> compares the leading-order asymptotic prediction (dashed curve) with the result of the numerical integration (solid curve), for the representative values β_s=8 of the Zel'dovich number of the stoichiometric mixture and α_s=0.85 of the heat-release parameter, in the symmetric case S=1. Since the Zel'dovich number increases in moving away from stoichiometry, this is its minimum value, whence the asymptotic formula should be increasingly accurate as the departure from ϕ=1 increases. The figure indicates that expectation to be true and shows that the formula overpredicts the burning-velocity by nearly 30% at stoichiometric conditions. There is a discontinuity in the slope of the AEA curve of the formulas given above at ϕ=1 that arises from plotting only the leading-order solution and that can be removed by including suitable terms of order β^-1. Figure <ref> shows similar results for S=17, the value applicable when the fuel stream is pure methane and the oxidizer stream is air. The values of β_s and α_s have been selected to correspond to reasonable flame temperatures and burning-velocity variations. The results seen here, which are much more representative for the combustion of hydrocarbon fuels (and many others) in air, are quite different from those in Fig. <ref>. The numerical result remains roughly 30% below the asymptotic prediction at the stoichiometric point, ϕ=1. 
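The curves just described can be regenerated directly from the leading-order expressions above. The following self-contained sketch (illustrative only; it reuses the Kummer-function evaluation of G from the earlier snippet, and representative parameter values such as β_s = 8, α_s = 0.85 and S = 17 are those quoted in the text) returns S_L/S_L∞,s for a given equivalence ratio:

import numpy as np
from scipy import special

def G(m, n, gamma):
    if gamma <= 0.0:
        return special.gamma(m + n + 1)   # stoichiometric limit
    return gamma**(m + n + 1) * special.gamma(m + 1) * special.hyperu(m + 1, m + n + 2, gamma)

def burning_velocity_ratio(phi, m, n, S, beta_s, alpha_s):
    Tratio = (1.0 - alpha_s + alpha_s * (S + 1.0) / (S + phi) * phi) if phi <= 1 else 1.0
    beta = beta_s * (phi if phi <= 1 else 1.0) * (S + 1.0) / (S + phi) / Tratio**2
    alpha = alpha_s * (beta / beta_s) * Tratio
    conc = (phi if phi <= 1 else 1.0) * (S + 1.0) / (S + phi)    # deficient-reactant ratio
    gam = beta * abs(1.0 - phi) / (phi if phi <= 1 else 1.0)     # gamma_l or gamma_r
    Gval = G(m, n, gam) if phi <= 1 else G(n, m, gam)
    ratio_sq = (conc**(m + n - 1) * (beta_s / beta)**(m + n + 1)
                * Gval / special.gamma(m + n + 1) * np.exp(beta_s / alpha_s - beta / alpha))
    return np.sqrt(ratio_sq)

Scanning phi and locating the maximum of this ratio reproduces the fuel-rich shift of the burning-velocity peak discussed below.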
This figure illustrates clearly the facts that, not only the result of the numerical integration, but the prediction of the asymptotic formula as well, can give burning velocities that are larger than those for stoichiometric conditions by a significant amount - the differences being of order unity. For the asymptotic prediction, this behaviour is due entirely to the variation of the function , the variation of the Arrhenius factor with T_∞ at the fixed value of E opposing this effect but not strong enough to overcome it in rich flames, as may be seen from the T_∞ curve in Fig. <ref>. The dilution does decrease with ϕ in this mixing process when S>1, but that decrease is not great enough to produce a decrease in T_∞. This figure also illustrates that, with β_s=8 and m=n=1, the equivalence ratio at which the laminar burning velocity is maximum is in close agreement for asymptotic and numerical results, but it exceeds the value that typically would be obtained using the correct detailed chemistry for methane, and it occurs at a value of ϕ for which predictions of near-stoichiometric AEA would be highly inaccurate. The predictions shown here are found to differ by approximately 25% from the near-stoichiometric expansion of (<ref>), a result that is not plotted here. These far-from-stoichiometric results are not addressed in the earlier publications, such as those of Sen and Ludford. § DISCUSSIONS A general observation of this study is that influences of the Arrhenius factor decrease compared with influences of as departures from S=1 increase. For m=n, the magnitudes of departures are the same at the same value of |ln S|, whether ln S is positive or negative as shown in Fig. <ref>(a), but this symmetry is lost for m≠ n. Figure <ref>(b) shows how the equivalence ratio at which the maximum burning velocity is achieved varies increasingly strongly with S as β_s decreases. The large increase in the burning velocity in rich flames, above its value at stoichiometric condition, shown in Fig. <ref>, demonstrates how poor this one-step Arrhenius chemistry approximates methane-air flames. In this case the burning-velocity maximum occurs well beyond the range of accuracy of a near-stoichiometric expansion and at much more fuel-rich condition than found experimentally. Although this reaction-rate approximation is poor for methane-air, it may be better for other hydrocarbon-air mixtures, such as ethylene-air, for which the burning-velocity maximum occurs at higher equivalence ratios. For ethylene-air flames, Lewis numbers are close enough to unity for the assumptions of the present formulation to apply, but for flames of propane and higher hydrocarbons, effects of differential diffusions, excluded here, might be expected to become increasingly important, although Fig. 7.7.4, on page 277 of the textbook by Law <cit.>, showing essentially identical burning-velocity curves for higher normal alkanes when plotted against the equivalence ratio, suggests that this effect may not be noticeably large. § CONCLUSIONS The Arrhenius factor is not always dominant in AEA predictions, in that the factors may produce off-stoichiometric burning-rate maxima even without differential diffusion. Formulas of AEA may provide reasonable fits to burning-velocity data for some hydrocarbon-air mixtures, such as ethylene-air systems, but such results are inaccurate for methane-air mixtures. 
In addition, reaction orders m and n can be adjusted to fit to burning-velocities of different hydrocarbon-air mixtures, although this is not addressed here. § ACKNOWLEDGEMENTS Professor F.A. Williams, who advised the writing of this communication, made significant contributions towards the investigation. Professor A.L. Sánchez took part in initiating the problem, for which the author is grateful. The author would also like to thank two anonymous reviewers and the editor for a number of suggestions that led to significant improvements in this article. § DISCLOSURE STATEMENT No potential conflict of interest was reported by the author. tfq
http://arxiv.org/abs/2406.19013v1
20240627085408
Design and Implementation of a Scalable Correlator Based on ROACH2+GPU Cluster for Tianlai 96-Dual-Polarization Antenna Array
[ "Zhao Wang", "Ji-Xia Li", "Ke Zhang", "1 Feng-Quan Wu", "Hai-Jun Tian", "Chen-Hui Niu", "Ju-Yong Zhang", "Zhi-Ping Chen", "Dong-Jin Yu", "Xue-Lei Chen" ]
astro-ph.IM
[ "astro-ph.IM" ]
0009-0007-8230-9798]Zhao Wang Center for Astronomy and Space Sciences, China Three Gorges University, Yichang 443002, China 0000-0001-9652-1377]Ji-Xia Li National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100101, China Center for Astronomy and Space Sciences, China Three Gorges University, Yichang 443002, China 0000-0002-6174-8640]Feng-Quan Wu National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100101, China 0000-0001-9289-0589]Hai-Jun TianCorresponding Emails: jxli@bao.ac.cn, wufq@bao.ac.cn, hjtian@hdu.edu.cn, zhangjy@hdu.edu.cn, chen_zp@hdu.edu.cn. School of Science, Hangzhou Dianzi University, Hangzhou, 310018, China Big Data Institute, Hangzhou Dianzi University, Hangzhou, 310018, China 0000-0001-6651-7799]Chen-Hui Niu Central Normal University, Wuhan, 100101, China School of Science, Hangzhou Dianzi University, Hangzhou, 310018, China Big Data Institute, Hangzhou Dianzi University, Hangzhou, 310018, China School of Science, Hangzhou Dianzi University, Hangzhou, 310018, China Big Data Institute, Hangzhou Dianzi University, Hangzhou, 310018, China Big Data Institute, Hangzhou Dianzi University, Hangzhou, 310018, China 0000-0001-6475-8863]Xue-Lei Chen National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100101, China § ABSTRACT The digital correlator is one of the most crucial data processing components of a radio telescope array. With the scale of radio interferometeric array growing, many efforts have been devoted to developing a cost-effective and scalable correlator in the field of radio astronomy. In this paper, a 192-input digital correlator with six CASPER ROACH2 boards and seven GPU servers has been deployed as the digital signal processing system for Tianlai cylinder pathfinder located in Hongliuxia observatory. The correlator consists of 192 input signals (96 dual-polarization), 125-MHz bandwidth, and full-Stokes output. The correlator inherits the advantages of the CASPER system, for example, low cost, high performance, modular scalability, and a heterogeneous computing architecture. With a rapidly deployable ROACH2 digital sampling system, a commercially expandable 10 Gigabit switching network system, and a flexible upgradable GPU computing system, the correlator forms a low-cost and easily-upgradable system, poised to support scalable large-scale interferometeric array in the future. § INTRODUCTION The digital correlator plays a crucial role in radio astronomy by combining individual antennas to form a large-aperture antenna, keeping large field of view, and providing high-resolution images. At present, many radio interferometric arrays in the world use CASPER (Collaboration for Astronomy Signal Processing and Electronics Research) hardware platform ROACH2 (Reconfigurable Open Architecture Computing Hardware-2) to develop correlators. For example, PAPER (Precision Array for Probing the Epoch of Reionization) in South Africa's Karoo Desert <cit.>. The 100 MHz FX correlator was originally based on iBOBs (Interconnect Break-out Boards) and later upgraded to ROACH, and then ROACH2 boards <cit.>. Currently, PAPER uses 8 ROACH2 boards for channelization, followed by a GPU (Graphics Processing Unit)-based `X' stage. Additionally, the `large-N' correlator located in the Owens Valley Radio Observatory (LWA-OV) is designed to enable the Large Aperture Experiment to Detect the Dark Ages (LEDA) <cit.>. It features a 58 MHz, 512-input digitization, channelization, and packetization system using a GPU correlator backend. 
The Tianlai project [<https://tianlai.bao.ac.cn>] is an experiment aimed at detecting dark energy by measuring baryon acoustic oscillation (BAO) features in the large-scale structure power spectrum, in which BAO can be used as a standard ruler<cit.>. The basic plan is to build a radio telescope array and use it to make 21cm intensity mapping observations of neutral hydrogen, which trace the large-scale structure of the matter distribution <cit.>. Currently, two different types of pathfinder array have been built in a quiet radio site in Hongliuxia, Balikun county, Xinjiang, China <cit.>. The cylinder array consists of three adjacent parabolic cylinder reflectors, each 40m × 15m, with their long axes oriented in the N-S direction. It has a total of 96 dual-polarization feeds, resulting in 192 signal channels <cit.>. The dish array includes 16 dishes with 6-meter aperture. Each dish has a dual polarization feed, generating 32 signal channels in total <cit.>. In addition to the ability to survey the 21cm Hydrogen sky, both antenna arrays are also capable of detecting fast radio bursts <cit.>. This paper is about the development of correlator for Tianlai cylinder pathfinder array. The design of the Tianlai cylinder correlator is based on the prototype correlator of <cit.>, which has 32 inputs and was used for the Tianlai Dish pathfinder array. This 32-input prototype correlator is built upon the model of PAPER correlator, which creates a flexible and scalable hybrid correlator system. We expanded the prototype correlator from 32 to 192 channels, reprogrammed the network transport model, increased it from a single GPU server to seven GPU servers, solved the synchronization problem of multiple devices. It was eventually deployed in the machine room on the Hongliuxia site. The primary motivation behind the design of the PAPER correlator architecture is the scalability for large-scale antenna arrays, and it has been executed exceptionally well. Therefore, we have chosen to borrow ideas from the PAPER correlator. The Tianlai project is expanding and the number of single inputs will soon increase to more than 500. The Tianlai cylinder correlator is a flexible, scalable, and efficient system, which has a hybrid structure of ROACH2+GPU+10 GbE network. A ROACH2 is an independent board, unlike a PCIe-sampling board which needs to be plugged into a computer server and often leads to some incompatible issues. A GPU card is dramatically upgrading, and it is almost the best choice among the current available hardwares, such as CPU/GPU/DSP(Digital Signal Processing), by comprehensively considering the flexibility, the efficiency and the cost. The module of the data switch network is easy to be upgraded, since the Ethernet switch has a variety of commercial applications. We have uploaded all the project files to Github. [https://github.com/TianlaiProject] This paper gives a detailed introduction to the function and performance of the Tianlai 192-input cylinder correlator system. In Section <ref>, we introduce the general framework of the correlator system and show the deployment of the correlator. Then, in a sub-section, we provide a detailed introduction to the design and functions of each module. In Section <ref>, we evaluate the performance of the correlator. Section <ref> summarizes the correlator system and presents the design scheme for correlator expansion in the future as part of the Tianlai project. § SYSTEM DESIGN The digital correlators can be classified into two types: XF and FX. 
XF correlators combine signals from multiple antennas and performs cross-correlation followed by Fourier transformation. XF correlators can handle a large number of frequency channels and have a relatively simple hardware design <cit.>. FX correlators combine signals from multiple antennas and perform the Fourier transformation followed by cross-correlation. FX correlators can handle a large number of antenna pairs and also have a relatively simple hardware design. The Tianlai cylinder correlator is an FX correlator. The Tianlai cylinder correlator system can be divided into four parts, as shown in Figure <ref>. The first part is the control part which consists of a master computer and an Ethernet switch. The Ethernet switch is used for net-booting of the ROACH2, monitoring the status of F-engine and hashpipe [https://github.com/david-macmahon/hashpipe], and synchronizing the running status of F-engine and X-engine. The second part is the F-engine, which consists of six ROACH2 boards and one 10 GbE switch. The 192 input signals from the Tianlai cylinder array are connected to the ADC connectors on the ROACH2 boards. The main functions of the F-engine are to Fourier transform the data from the time domain into the frequency domain, and transmit the data to the GPU server through a 10 GbE switch. The third part is the X-engine, which performs cross-correlation on the received Fourier data. Each GPU server receives packets from all ROACH2 boards. The details of network transmission will be explained later. The X-engine utilizes a software called hashpipe <cit.> to store, deliver and compute the cross-correlations. The fourth part is the data storage part, which consists of seven GPU servers, an Ethernet switch, and a storage server (shared with the master computer). The GPU servers transmit data to the storage server via an Ethernet switch. We have developed a multi-threading program to collect and organize data packets from different GPU servers, and finally save them onto hard drives in HDF5 format. The deployment of the correlator system is shown in Figure <ref>. It consists of six ROACH2 boards, an Ethernet switch, a 10 GbE switch, a master computer, and seven GPU servers, arranged from top to bottom. The yellow “ROACH2” label in Figure <ref>(a) represents the front panel of the ROACH2 board. (In our case, a connector transformer panel has been specifically designed to conveniently connect to the radio cables.) The ADC connector of the ROACH2 is connected to a blue RF (Radio Frequency) cable that transmits the analog signal. Figure <ref>(b) shows the back side of the ROACH2 board. On the far left is the power line. The light orange RF cable is the clock cable. The 250 MHz clock of the ROACH2 board is output by a VALON 5008 dual-frequency synthesizer module, and it is split by a 12-way power splitter. The short blue-black cable connects to the synchronization port between the ROACH2 boards. We use synchronization ports in F-engine functional block design to ensure that the six ROACH2 boards work at the same clock. The signal of the synchronization port is provided by a time server. The time server sends out a 1-PPS (Pulse Per Second) signal, which is used to initialize the synchronization module of the F-engine system. After running the F-engine control script, the 1 PPS signal drives the F-engine and synchronizes the operational state of the six ROACH2 boards. The bandwidth of the antenna signal input to the ROACH2 board is 125 MHz. 
According to the Nyquist sampling law, the input signal can be completely recovered by a 250 Msps sampling rate. In Figure <ref>(c), seven GPU servers are vertically stacked, consisting of six Supermicro servers with a size of 4U (Unit) and one Dell server with a size of 2U. These devices are used to implement X-engine functionality. The number of servers is determined by the total frequency channel count and the frequency channel processing capacity of each server. In terms of computational performance, each GPU server runs 4 hashpipe threads, processing a total of 128 frequency channels. At this configuration, the computational performance accounts for approximately 46% of the theoretical peak performance. In terms of data transfer performance, the server's PCIe is of version 3.0, with a transfer rate close to 8GB/s. This is comparable to the maximum transfer rate between the host and the device. §.§ F-engine The diagram of the F-engine module is shown in Figure <ref>. The Tianlai cylinder correlator system is an improvement upon the Tianlai dish correlator system, with enhancements including an increased number of input signals and additional new functions. The Tianlai dish correlator is very similar to the PAPER experiment correlator, which also uses the ROACH2 system. Please refer to <cit.> for details. Here, we provide a concise overview of the F-engine's process and the functionality of the CASPER yellow block. Each ROACH2 board is connected with two ADC boards through Z-DOK+ connectors. The ADC board is the adc16x250-8 coax rev2 Q2 2013 version, which uses 4 HMCAD1511 chips and provides a total of 16 inputs. It samples 16 analog signal inputs with 8 bits at a rate of 250 Msps. The output digital signal of the block is Fix_8_7 format, which indicates an 8-bit number with 7 bits after the decimal point. The ADC chip is accompanied by a control program developed by David MacMahon from the University of California, Berkeley. This program is responsible for activating the ADC, selecting the amplification level, calibrating the FPGA input delay, aligning the FPGA SERDES blocks until data is correctly framed, and performing other related tasks. A comprehensive user's guide for the ADC16 chip is accessible on the CASPER website[https://casper.astro.berkeley.edu/wiki/ADC16x250-8_coax_rev_2]. According to the actual range of input signal power, we conducted linearity tests on the ADC at various gain coefficients and also assessed the linearity of the correlator system. Ultimately, the ADC gain coefficient was set to 2. The analog-to-digital converted data from the ADC is transmitted to the PFB (Polyphase Filter Bank) function module. PFB is a computationally efficient implementation of a filter bank, constructed by using an FFT (Fast Fourier Transform) preceded by a prototype polyphase FIR filter frontend <cit.>. The PFB not only ensures a relatively flat response across the channels but also provides excellent suppression of out-of-band signals. The PFB is implemented using the models and from the CASPER module library. Each [https://casper.astro.berkeley.edu/wiki/Block_Documentation] block (the signal processing blocks mentioned in this article can all be linked to the detailed page from here) processes two signals, configured with parameters including a PFB size of 2^11, a Hamming window function, four taps, input width of 8 bits, an output width of 18 bits, and other settings. Each block takes two input signals, and a total of 16 blocks are used to process 32 input signals. 
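Conceptually, the PFB stage described above amounts to a multi-tap, windowed FIR frontend followed by an FFT. The NumPy sketch below is only a floating-point reference model of that operation: the channel count (2^11), tap count (4), and Hamming window match the parameters quoted in the text, while the fixed-point arithmetic and real-input FFT organization of the actual CASPER blocks are not reproduced.

```python
import numpy as np

def pfb_channelize(x, n_chan=2048, n_taps=4, window=np.hamming):
    """Conceptual polyphase filter bank: FIR frontend followed by an FFT.

    x      : real-valued time series (e.g., 8-bit ADC samples)
    n_chan : transform length (2**11 in the Tianlai design)
    n_taps : number of PFB taps (4 in the Tianlai design)
    Returns an array of shape (n_spectra, n_chan) of complex channels.
    """
    # Prototype low-pass filter: windowed sinc, split into n_taps branches.
    m = n_taps * n_chan
    coeff = window(m) * np.sinc(np.arange(m) / n_chan - n_taps / 2.0)
    coeff = coeff.reshape(n_taps, n_chan)

    n_spectra = len(x) // n_chan - (n_taps - 1)
    spectra = np.empty((n_spectra, n_chan), dtype=complex)
    for i in range(n_spectra):
        # Weighted sum over n_taps consecutive blocks (the FIR frontend)...
        segment = x[i * n_chan:(i + n_taps) * n_chan].reshape(n_taps, n_chan)
        summed = np.sum(segment * coeff, axis=0)
        # ...followed by the FFT that produces the frequency channels.
        spectra[i] = np.fft.fft(summed)
    return spectra

# Example: a 15.625 MHz tone sampled at 250 Msps lands in a single channel.
fs = 250e6
t = np.arange(16 * 2048) / fs
tone = np.cos(2 * np.pi * 15.625e6 * t)
spec = pfb_channelize(tone)
print(np.argmax(np.abs(spec[0][:1024])))  # 128 = 15.625 MHz / 122.07 kHz
```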
Each block processes four input data streams and outputs two sets of frequency domain data. configured with parameters including an FFT size of 2^11, an input width of 18 bits, an output width of 36 bits, and other settings. The parameter settings are based on the scientific requirements of the Tianlai project, which calls for a signal resolution of less than or equal to 0.2 MHz. There are eight blocks, with each block taking in four data streams and outputting two sets of frequency domain data. The PFB module is flexible, making it very easy to adjust the parameters according to one's requirements, such as the FFT size, PFB size, and the number of taps in the CASPER block. The data output of the PFB module is 36 bits, which essentially represents a complex number with 18 bits for the real part and 18 bits for the imaginary part. Considering factors such as data transmission and hardware resources, the data is usually effectively truncated. In our case, we will truncate the complex number to have a 4-bit real part and a 4-bit imaginary part. Prior to quantizing to 4 bits, the PFB output values pass through a scaling (i.e. gain) stage. Each frequency channel of each input has its own scaling factor. The purpose of the scaling stage is to equalize the passband before quantization, so this stage is often referred to as EQ. The scaling factors are also known as EQ[https://casper.astro.berkeley.edu/wiki/PAPER_Correlator_EQ] coefficients and are stored in shared BRAMs. The quantized data cannot be sent directly to the X-engine. Before sending it, we divide the frequency band and sort the data in a format that facilitates the relevant calculations. This module is called Transpose, and it is divided into four submodules. Each submodule processes 1/4 of the frequency band, resulting in a total of 256 frequency channels. The number of submodules corresponds to the number of 10 GbE network interface controllers (NICs) on the ROACH2 board, with each NIC used to receive and send data from the output of a transpose submodule. This module performs the data transpose, also known as a “corner turn” to arrange the data in the desired sequence. Additionally, it is responsible for generating the packet headers, which consist of (master counter), (F-engine id), and (X-engine id). The current parameter configuration of the sub-module is tailored for scenarios with 256 inputs or fewer. However, David MacMahon, the researcher behind the PAPER correlator system, has included sufficient spare bits in the design, enabling the adjustment of model parameters based on specific input conditions and accommodating scalability and additional use cases. The data is already in a form that is easy for X-engine to compute, we want to send it to X-engine, so the data comes to the Ethernet module. It contains four sub-modules and receives data from four transpose sub-modules. Each submodule has a block, where we can set the MAC address, IP address, destination port and other parameters using Python or Ruby script. §.§ Network The data of the F-engine module is sent out through the ROACH2 network port and transmitted to the network port of the target GPU server through the 10 GbE switch. The network transmission model of the correlator system is dependent on the bandwidth of a single frequency channel and the number of frequency channels calculated by the GPU server. The diagram of data transfer from F-engine to X-engine is shown in Figure <ref>. The frequency domain data in F-engine has a total of 1024 frequency channels. 
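The channel and port bookkeeping quoted in the next paragraphs can be reproduced with a few lines of Python; the snippet below only restates numbers given in the text and is included as a quick numerical sanity check.

```python
# Channel and port arithmetic for the 192-input system (a sanity check;
# all values are restated in the surrounding text).
SAMPLE_RATE_MSPS = 250.0
N_CHANNELS = 1024
CHANNELS_PER_GPU_NODE = 128
N_GPU_NODES = 7
N_ROACH2 = 6
PORTS_PER_DEVICE = 4

chan_width_mhz = (SAMPLE_RATE_MSPS / 2) / N_CHANNELS        # 0.12207 MHz
node_bw_mhz = CHANNELS_PER_GPU_NODE * chan_width_mhz        # 15.625 MHz
processed_bw_mhz = N_GPU_NODES * node_bw_mhz                 # 109.375 MHz
switch_ports = (N_ROACH2 + N_GPU_NODES) * PORTS_PER_DEVICE   # 52 ports

print(f"channel width  : {chan_width_mhz * 1e3:.2f} kHz")
print(f"per-node band  : {node_bw_mhz:.3f} MHz")
print(f"processed band : {processed_bw_mhz:.3f} MHz")
print(f"10 GbE ports   : {switch_ports}")
```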
Given the 250 Msps sampling rate, each frequency channel has a width of Δν = 125 / 1024 MHz = 122.07 kHz. Each GPU node processes 128 frequency channels with a bandwidth of 128 × 122.07 kHz = 15.625 MHz. The number of frequency channels processed by the GPU server is determined by hashpipe. The analog part of the Tianlai digital signal processing system uses replaceable bandpass filters, with the bandpass set to 700 MHz ∼ 800 MHz. We have chosen to utilize seven GPU nodes to implement the X-engine component. These seven GPU nodes process data for the central 896 frequency channels, covering a bandwidth of approximately 109.375 MHz from 692.8125 MHz to 802.1875 MHz, as shown in Figure <ref>. The final GPU node is dedicated to receiving data from the first 32 and last 32 frequency channels out of the 896 frequency channels. The data transfer rate of a single network port of the ROACH2 board is 8.0152 Gbps, so the total data transfer rate of 6 ROACH2 boards is 6 × 4 ports × 8.0152 Gbps = 192.3648 Gbps. The data reception rate of a single network port on the GPU node is 192.3648 Gbps / (8 × 4) = 6.0114 Gbps. At present, the number of input signals for the correlator system ranges from 32 to 256. While our correlator system is designed for 192 input signals, we conducted data transmission simulations with 256 input signals. Under these conditions, the data reception rate of a single network port on the GPU node stands at 8.0152 Gbps. In our system, each GPU server has four 10 GbE ports. For the Tianlai cylinder correlator system, we require a total of 6 ROACH2 boards × 4 ports + 7 GPU servers × 4 ports = 52 ports on a 10 GbE switch. So we selected the Mellanox SX1024 switch which has 48 ports of 10 GbE and 12 ports of 40 GbE. Ports 59 and 60 on the switch can be subdivided into four 10 GbE ports, providing ample capacity for our application. The transpose module is designed with extra bits reserved in the blocks related to the parameter . The number of bits in the parameter is directly linked to the maximum number of F-engines in the correlator system. By utilizing these additional bits, the correlator can be configured to accommodate a greater number of input signals. In terms of the F-engine, theoretically, there could be an infinite number of input channels, and the number of ROACH2 boards can be increased based on the input channel number. The capacity of X-engine determines the upper limit of input channels, depending on the processing capacity of the GPU servers for a single frequency point. Since each frequency point should contain all the input channel information, the processing capability of the GPU servers for a single frequency channel affects maximum number of input channels. Currently, a single server theoretically has the capability to handle over 20,000 input channels if it only processes one frequency channel. However, this may need an extremely large-scale switch network. The relationship between the number of input channels and the output data rate is as follows: 1/2× N(N+1) × f_ch × 2 × f_b / Integration_time where N represents the number of input channels, f_ch represents the number of frequency channels, f_b represents the number of bytes in a single frequency channel. The multiplying factor 2 is because frequency channels are complex numbers. §.§ X-engine The primary role of the X-engine is to perform cross-correlation calculations. 
The X-engine receives the data from the F-engine in packets, which are then delivered to different computing servers, where the conjugate multiplication and accumulation (CMAC) are done. The hardware for this part consists mainly of six Supermicro servers and one Dell server. We list the main equipment of the X-engine in Table <ref>. The X-engine part consists of seven GPU nodes. To ensure that they integrate the data at exactly the same time duration, they must be synchronized together. A script has been developed to achieve this, and its basic procedure is as follows. First, initialize the hashpipes of 7 GPU nodes; Second, start the hashpipe program of the first GPU node; Third, read out the MCNT value in the current packet and calculate a future (several seconds later) MCNT value to act as the aligning time point. Finally, all GPU nodes work simultaneously when their hashpipe threads receive a packet contains the calculated aligning MCNT value. The data operation in the X-engine is managed by the hashpipe software running on CPU and GPU heterogeneous servers. Hashpipe was originally developed as an efficient shared pipe engine for the National Astronomical Observatory, the Universal Green Bank Astrospectrograph <cit.>. It was later adapted by David MacMahon of U.C. Berkeley, it can be used for FX correlators <cit.>, pulsar observations <cit.>, Fast Radio Bursts detection <cit.> and the search for extraterrestrial civilizations <cit.>. The core of the hashpipe is the flexible ring buffer. It simulates contiguous memory blocks, realizes data transmission and sharing among multiple threads, and uses the central processing unit to control startup and shutdown, etc. The ring buffer is used to temporarily store and deliver the data packets to ensure that the data is captured quickly and distributed in the correct order. Each hashpipe instance in our system has a total of four threads and three buffers, as shown in Figure <ref>. To process the four 10 GbE ports data stream, four hashpipe instances are created. In each instance, the basic data process can be concluded as follows. First, receives the packets from the GPU server's 10 GbE port. According to the packet format, the valid data is extracted and the packet header is analyzed. Packets are time-stamped, and if they arrive at the GPU server out of order, they can be rearranged into the appropriate time series and written to the input data buffer, which is passed onto the next thread once a consecutive block of data is filled. The thread “fluffing” the data, fluffs 4bit+4bit complex data into 8bit+8bit complex data in the thread. The data is “fluffed” and temporarily stored in the GPU input data buffer until it is fetched by . Then transfers the data to the graphics processor to perform complex calculations and then writes the results to the output data buffer. The CMAC process uses the xGPU [https://github.com/GPU-correlators/xGPU]<cit.>, which is written in CUDA-C and is optimized on GPU memory resources by specific thread tasks. The cross-correlation algorithm involves computing the cross-power spectrum at a specific frequency observed by a pair of stations, known as a baseline. By processing a sufficient number of baselines, a detailed power spectrum representation can be derived, enabling the generation of an image of the sky through an inverse Fourier transform in the spatial domain. 
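Functionally, the CMAC step reduces to accumulating an outer product of the channelized voltages for every frequency channel. The NumPy sketch below is only a reference model of what xGPU evaluates far more efficiently on the GPU; the array shapes are illustrative.

```python
import numpy as np

def cmac(f_data):
    """Reference conjugate multiply-and-accumulate (CMAC).

    f_data : complex array of shape (n_time, n_freq, n_input)
             -- channelized voltages from the F-engine.
    Returns visibilities of shape (n_freq, n_input, n_input), i.e., the
    time-accumulated cross-power spectrum for every baseline.
    """
    n_time, n_freq, n_input = f_data.shape
    vis = np.zeros((n_freq, n_input, n_input), dtype=complex)
    for t in range(n_time):
        for k in range(n_freq):
            v = f_data[t, k]                   # all inputs at one time/channel
            vis[k] += np.outer(np.conj(v), v)  # every baseline at once
    return vis

# Tiny example (16 time samples, 4 channels, 8 inputs) that runs instantly;
# the deployed system accumulates over 192 inputs per frequency channel.
rng = np.random.default_rng(0)
x = rng.standard_normal((16, 4, 8)) + 1j * rng.standard_normal((16, 4, 8))
v = cmac(x)
print(v.shape)                            # (4, 8, 8) correlation matrices
print(np.allclose(v[0], v[0].conj().T))   # Hermitian, as expected
```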
The algorithm's implementation on Nvidia's Fermi architecture sustains high performance by utilizing a software-managed cache, a multi-level tiling strategy, and efficient data streaming over the PCIe bus, showcasing significant advancements over previous GPU implementations. The thread gets the data from the output data buffer and transmits it to the storage server through the switch. Hashpipe provides a status buffer that extracts key-value pairs in each thread. This key value is updated every running cycle. The status can be viewed using a GUI monitor that has been written in both Python and Ruby. §.§ Data storage At the beginning of the design, two schemes for data storage were considered. One is that the data is stored on each GPU server, and it is read and combined when used. Due to the large number of GPU servers, this method is too cumbersome. The other is that the data is transmitted from each GPU server to the master computer in real-time, and the data is stored in the master computer. This method is convenient for data use and processing, so the second scheme is adopted. Each GPU node has 4 hashpipe instances, and the thread of each hashpipe instance sends data to a dedicated destination port. A total of 28 different UDP ports are used for the 7 GPU servers. The data acquisition script, written in Python, collects data from all 28 UDP ports and combines them. Currently, the integration time is set to approximately 4 seconds, resulting in a data rate of about 150 Mbps for each network port. The total data rate for all seven servers with 28 ports amounts to approximately 4.2 Gbps. Therefore, a 10 GbE network is capable of handling the data transmission. Finally, the data are saved onto hard drives in the HDF5 format. Additional information such as integration time, observation time, telescope details, and observer information is also automatically saved in the file. §.§ CNS control module During the drift scan observation of the Tianlai cylinder array, the system needs to be calibrated by a calibrator noise source (CNS). The CNS periodically broadcasts a broadband white noise of stable magnitude from a fixed position, so the system gain can be recovered <cit.>. One requirement in the data processing part is to let the CNS's signal fall exactly in one integration time interval, so it needs to be aligned to the integration time. To achieve this, a logical ON/OFF signal from the cylinder correlator is necessary. In order to meet this requirement, we have introduced a noise source control function to the correlator system. This control function is implemented through the block in the F-engine, as shown in Figure <ref>(a). First, the script enables the block to initialize the module. Second, the hashpipe instance on the GPU node returns the MCNT value of its current packet. The script uses this value to calculate the CNS MCNT value (an MCNT value at a future time; when the MCNT value in the F-engine is equal to this value, the CNS is turned ON) and sets that CNS MCNT in the corresponding blocks. Third, the CNS on/off period is converted to the change value of MCNT and the block is set to this value. Fourth, the GPIO's working time is set in the block, which is on the ROACH2 board. Finally, the GPIO periodically sends out a logical signal to turn the CNS on or off. We tested the accuracy of the CNS control module and its actual output result, as shown in Figure <ref>(b).
The CNS is activated based on a pre-set MCNT value and is aligned precisely with the integration time interval. § TESTING AND EXPERIMENTATION §.§ ADC testing The importance of ADCs lies in their quality and performance, as these factors bear a direct impact on the overall functionality of the systems they inhabit. To verify the sampling correctness of the ADC, we input a 15.625 MHz sinusoidal wave signal into the ADC and fit the digitized data. The sampling points and fitting result are shown in Figure <ref>(a). The correlator system requires the ADCs to have linearly sampled output at different signal levels. We plot the logarithm of the standard deviation of the ADC output with three different gain coefficients as a function of different input power levels, and the results are shown in Figure <ref>(b). No obvious nonlinearity is found in the testing power range. §.§ Phase testing We verify the phase of the visibility (cross-correlation result) by two input signals, whose phase difference is determined by a cable length difference. We use a noise source generator to output the white noise signal and the signal is divided into two ways by a power splitter. Then, the two signals are fed into the ROACH2 board through two radio cables of different lengths. The cable length difference is 15m. The two signals can be depicted as S_1=A_1 e^i(2π ft+ϕ_0) and S_2=A_2 e^i(2π f(t+τ) + ϕ_0 ), where A is the wave amplitude, ϕ_0 is an arbitrary initial phase, f is frequency, τ is the delay incurred by the unequal-length RF cables. The visibility of two signals is V = <S_1^∗· S_2 > = A_1 A_2 e^i2π fτ The cable length difference of the two input signals is fixed, so the delay is constant over time. As Eq. <ref> shows, the phase Φ = 2πτ· f, Φ is a linear function of frequency, and the slope k = 2πτ. The delay = Δ l/c̃, where Δl is the cable length difference, c̃ is the propagation speed of RF signal in coaxial cable. The measured waterfall 2D plot of phase of visibility output by our correlator in this experiment is plotted in Figure <ref>(a) and 1D plot (at one integration time) of phase as a function of frequency is shown in Figure <ref>(b). By calculating the curve slope in Figure <ref>(b), we obtain a propagation speed in the coaxial cable of about 0.78c (0.78 times speed of light in vacuum), which is consistent with the specification of the RF cable. §.§ Linearity of correlator system The linearity of our correlator is verified by comparing the input power levels and the output amplitudes. The results are shown in Figure <ref>. Considering multiple factors, we have set the ADC gain coefficient for the correlator to 2. We can draw the conclusion that the linear dynamic range of our correlator is between -22 dBm to 0 dBm within the 125 MHz bandpass. In realistic observations, power levels output from the receivers vary 10 dB at most, so the 22 dB dynamic range of our correlator can satisfy our observation requirement. §.§ Sky observation The whole frequency band of each feed ranging from 692.8125 to 802.1875 MHz, is divided to 28 sub-bands. These sub-band have been sent to different hashpipe instances for correlation calculation. The final spectra are the combination of these 28 sub-bands. Some spectra of feeds (A10X, A19Y, B27X and C12X) are plotted in Figure <ref>. The Tianlai cylinder array is aligned in the N-S direction and consists of three adjacent cylinders. They are designated as A, B and C from east to west, and have 31, 32, and 33 feeds respectively. 
Each dual linear polarization feed generates two signal outputs. We use `X' to denote the output for the polarization along the N-S direction and `Y' along E-W direction. Spectra of the selected feeds in Figure <ref> are from three cylinders, and are smooth in adjacent frequency sub-bands. No obvious inconsistent processing amplitude in different sub-bands are found. In these spectra, a periodic fluctuation of about 6.8 MHz can be seen. They have been confirmed to result from the standing wave in the 15-meter feed cable <cit.>. We made 4.4 hours (16000 seconds) of continuous observation since the night of Aug. 7th, 2023, and the data are shown in Figure <ref>. The fringe of radio bright source Cassiopeia A occurred around 8000th second. The continuous operation ability of the correlator is tested, and there is no fault in continuous operation for a month. We plot 4 days' continuous observation data of three baselines as a function of LST (Local sidereal time) and frequency, as shown in Figure <ref>. The subplots from top to bottom show the baselines for two feeds (a) on the same cylinder, (b) on two adjacent cylinders, and (c) on two non-adjacent cylinders. Each subplot shows the result of four consecutive days starting from Sept. 6th, 2023; each day is a sub-panel from bottom to top. §.§ Power consumption All devices are powered by PDU (Power Distribution Unit), and the voltage and current usage of the devices can be monitored through the PDU management interface. The entire correlator system uses a total of 3 PDUs. The six ROACH2 boards and the master computer are connected to one PDU. The first 7 GPU servers and the 10 GbE switch are connected to another PDU. The last 7 servers and the 1 GbE Ethernet switch are connected to the third PDU. The total power of the F-engine is 220 V × 3.5 A = 770 W, including six ROACH2 boards and one master computer. The total power of X-engine is 220 V × 17.5 A = 3850 W, including seven GPU servers, one 10 GbE switch, and one 1 GbE switch. Therefore, the total power of the whole correlator system is 770 W + 3850 W = 4620 W for 192 inputs. This is very energy-efficient for such a large-scale interferometer system. § SUMMARY In this paper, the correlator is designed and deployed for the cylinder array with 192 inputs. Based on the basic hybrid structure of the ROACH2-GPU correlator, we have realized the data acquisition and pre-processing function by F-engine, which consists of six ROACH2 boards. The F-engine part is tested, debugged, and analyzed, works in the suitable linear range and the calibrator noise source is controlled in a cadence according to integration time. We conducted hardware testing and data storage design for the X-engine part and realized the complete and orderly data storage of 7 GPU servers. We use a DELL 2020 server, NVIDIA GeForce RTX3080 graphics card, and Rocky 8 system to achieve the X-engine function. As Tianlai radio interferometric array is currently extending its scale, the correlator we design can increase the number of ROACH2 boards according to the number of input signals, and set the appropriate number of frequency points and the size of data packets. The X-engine part can use higher-level servers and graphics cards to combine multiple tasks and increase the work tasks of a single server to reduce the number of servers. Our future work is to implement it on larger systems. § ACKNOWLEDGEMENTS We acknowledge the support by the National SKA Program of China (Nos. 
2022SKA0110100, 2022SKA0110101, and 2022SKA0130100), the National Natural Science Foundation of China (Nos. 12373033, 12203061, 12273070, 12303004, and 12203069), the CAS Interdisciplinary Innovation Team (JCTD-2019-05), the Foundation of Guizhou Provincial Education Department (KY(2023)059), and CAS Youth Interdisciplinary Team. This work is also supported by the office of the leading Group for Cyberspace Affairs, CAS (No.CAS-WX2023PY-0102) and CAS Project for Young Scientists in Basic Research (YSBR-063).
http://arxiv.org/abs/2406.18684v2
20240626184222
CSI4Free: GAN-Augmented mmWave CSI for Improved Pose Classification
[ "Nabeel Nisar Bhat", "Rafael Berkvens", "Jeroen Famaey" ]
cs.CV
[ "cs.CV" ]
CSI4Free: GAN-Augmented mmWave CSI for Improved Pose Classification Nabeel Nisar Bhat IDLab-Faculty of Science University of Antwerp-imec Antwerp, Belgium nabeelnisar.bhat@uantwerpen.be Rafael Berkvens IDLab-Faculty of Applied Engineering University of Antwerp-imec Antwerp, Belgium rafael.berkvens@uantwerpen.be Jeroen Famaey IDLab-Faculty of Science University of Antwerp-imec Antwerp, Belgium jeroen.famaey@uantwerpen.be July 1, 2024 =================================================================================================================================================================================================================================================================================================================================================================================== § ABSTRACT In recent years, Joint Communication and Sensing (JC&S), has demonstrated significant success, particularly in utilizing sub-6 GHz frequencies with commercial-off-the-shelf (COTS) Wi-Fi devices for applications such as localization, gesture recognition, and pose classification. Deep learning and the existence of large public datasets has been pivotal in achieving such results. However, at mmWave frequencies (30-300 GHz), which has shown potential for more accurate sensing performance, there is a noticeable lack of research in the domain of COTS Wi-Fi sensing. Challenges such as limited research hardware, the absence of large datasets, limited functionality in COTS hardware, and the complexities of data collection present obstacles to a comprehensive exploration of this field. In this work, we aim to address these challenges by developing a method that can generate synthetic mmWave channel state information (CSI) samples. In particular, we use a generative adversarial network (GAN) on an existing dataset, to generate 30,000 additional CSI samples. The augmented samples exhibit a remarkable degree of consistency with the original data, as indicated by the notably high GAN-train and GAN-test scores. Furthermore, we integrate the augmented samples in training a pose classification model. We observe that the augmented samples complement the real data and improve the generalization of the classification model. Wi-Fi signals, mmWave, joint communication and sensing, channel state information, human activity recognition, data augmentation, generative adversarial networks. § INTRODUCTION In recent years, Wi-Fi signals have been widely utilized for sensing applications such as localization <cit.>, gesture recognition<cit.>, pose estimation <cit.> and gait identification<cit.>. In particular, channel state information (CSI) <cit.> extracted from commercial-off-the-shelf (COTS) Wi-Fi access points (APs) has resulted in remarkable accuracy in these applications. The main advantage of using Wi-Fi signals for sensing is that most of the Wi-Fi infrastructure is already in place in homes, offices, and buildings. Therefore, the signals transmitted for communications can also be used for sensing, at a limited additional cost. This concept of using communication signals for sensing is known as joint communication and sensing (JC&S) or integrated sensing and communication (ISAC) <cit.>. Compared to camera-based sensing, Wi-Fi sensing offers improved privacy and does not require a well-lit environment <cit.>. In Wi-Fi sensing, most of the focus has been on sub-6 GHz signals. 
However, researchers are now swiftly moving to mmWave (30-300 GHz) due to its large bandwidth and massive multiple input and multiple output (MIMO) capabilities. This has benefits not only limited to high data-rates but also more accurate sensing <cit.>. Recently, Wi-Fi signals at mmWave have shown promising results in applications such as gesture recognition<cit.>, pose estimation<cit.>, and localization<cit.>. In this work, we focus on mmWave and pose classification. The major force behind the exceptional success of Wi-Fi sensing is deep learning<cit.>. Deep learning has achieved state-of-the-art performance on many Wi-Fi sensing tasks. One of the key factors contributing to the effectiveness of deep learning methods, in general, is the availability of extensive and varied datasets. Unfortunately, Wi-Fi sensing at mmWave frequencies has suffered due to a lack of research hardware and consequently, the non-existence of large datasets. Moreover, data collection and annotation with Wi-Fi is difficult and time-consuming. Also, the lack of visual cues or ground truth in the Wi-Fi signal does not help either. To avoid overfitting problems, deep learning requires a large amount of labeled data<cit.>. In this work, we aim to reduce the effort required in the data collection by employing data augmentation techniques. Generative adversarial networks (GANs) <cit.> have proven to generate realistic and high-quality synthetic or fake images. GANs have been used for image augmentation <cit.>, anomaly detection<cit.>, super-resolution<cit.>, 3D object generation <cit.> and domain adaptation<cit.>. Our focus is on augmentation. Augmenting CSI is difficult and somewhat limited in literature as standard augmentation methods used for images such as random crop, horizontal flip, rotation, and brightness can not be directly applied to CSI data. Moreover, these methods produce a limited set of augmentations <cit.>. Differently, GAN-based augmentation methods have huge potential <cit.> and can be used to generate more natural and a broad set of augmentations. GANs mimic the original distribution of the dataset to generate more realistic samples than the standard augmentation methods and also improve the generalization of a downstream model. However, GANs are difficult to train and often suffer from what is known as mode collapse<cit.>. Moreover, GAN training is domain specific. Therefore, the utilization of GANs in augmenting CSI data is limited <cit.>, especially in the context of COTS mmWave CSI data where their potential has not yet been explored. In this work, we propose a GAN-based method for augmenting mmWave CSI data. Specifically, we use a conditional GAN (cGAN) <cit.> to generate new synthetic samples. We train our method on an existing mmWave COTS dataset <cit.>, consisting of 3 users performing a set of 8 distinct poses. We carefully train the generator and discriminator of cGAN and generate an additional 30,000 synthetic CSI samples, approximately 3800 samples for each pose. In this way, we increase the sample size of the dataset from 1084 to 31184 samples. We then validate the consistency of GAN-generated samples using GAN-train and GAN-test scores<cit.>. Our results reveal high GAN-train and GAN-test scores indicating a high quality of synthetic samples. Finally, we show that the synthetic samples also improve the performance of the downstream model for pose classification. 
Through this method, we create a fairly large mmWave CSI dataset which can be used by researchers to test and validate complex signal processing and deep learning models for pose classification. § RELATED WORK §.§ COTS mmWave Sensing Wi-Fi signals at sub-6 GHz have shown great potential in sensing applications. However, range resolution and spatial resolution at these frequencies are limited due to limited available bandwidth. Instead at mmWave and THz frequencies, larger bandwidths are available, potentially leading to sub-cm-level localization and high-definition imaging <cit.>. In this section, we review the recent developments in mmWave sensing, mainly focusing on COTS Wi-Fi. Yu et al. <cit.> made pioneering contributions in the field of mmWave Wi-Fi sensing. The authors used mid-grained spatial beam signal-to-noise ratios (SNRs) for human pose and seat occupancy detection. The testbed consisted of a pair of Talon AD 7200 COTS routers. An open-source tool <cit.> was used to gain access to beam SNR channel measurements from the routers. 8 distinct poses were performed. The authors used deep learning-based methodology and achieved an overall accuracy of 90% for pose classification. However, the dataset involves a single person and is not publicly available. In our previous work <cit.>, we validated the potential of mmWave Wi-Fi sensing for gesture recognition. We collected beam SNRs and CSI from mmWave and sub-6 GHz Wi-Fi APs respectively and compared the performance of the two approaches for gesture recognition. The dataset consisted of 10 distinct but closely related gestures/poses across 3 people and 2 environments. We achieved 96.7% accuracy with mmWave Wi-Fi for gesture recognition in a single environment. However, incorporating more users and environments leads to a reduction of accuracy by 6%. This is because of the limited dataset owing to the fact that data collection with mmWave COTS Wi-Fi is labor-intensive. Moreover, the low sample rate of beam SNRs (10 samples per second) adds to the complexity of data collection. Pegoraro et al <cit.> recently published a dataset for integrated sensing and communication based on a software-defined ratio. The authors highlight the lack of research hardware at mmWave and consequently lack of datasets. The testbed is capable of transmitting 60 GHz IEEE 802-11-ay packets. The dataset consists of 7 subjects performing 4 activities walking, running, standing up-sitting down, and waving hands. However, the testbed is difficult to replicate due to cost and complexity. Moreover, limited activities are performed. More recently, we in our work <cit.>, for the first time used CSI from COTS mmWave Wi-Fi AP for pose estimation (regression) and pose classification. The testbed consisted of MikroTik wAP 60Gx3 COTS routers. We followed the work <cit.> and installed OpenWrt to get access to CSI measurements from the routers. We employed a deep neural network-based methodology to derive a full-body pose from mmWave Wi-Fi. Microsoft Kinect was used to record the ground truth. The validation was performed across 3 individuals with 8 distinct poses. We achieved a high pose classification accuracy (>90%) for the classification task and a low mean square error (MSE) of 0.0048 for the regression task (full body pose). Also, in this case, the sampling rate of CSI was limited to around 22 samples per second, resulting in a limited dataset size. 
Note that it is not possible to increase sampling frequency in these devices as the firmware is not efficient, leading to stability issues. Moreover, increasing the sampling frequency of CSI arbitrarily, implies misemploying the concept of JC&S. The above discussion validates the use of mmWave Wi-Fi for sensing. Nevertheless, the progress in COTS mmWave research has been hindered by factors such as the absence of research-grade hardware, hardware limitations, labour-intensive data collection, and the scarcity of publicly available datasets. In this work, our goal is to minimize the challenges associated with data collection and to create an extensive dataset for pose classification, building upon our earlier research <cit.>. To achieve this goal, we leverage generative adversarial networks (GANs) for data augmentation. §.§ Data Augmentation Efficient training of neural networks requires a huge amount of data <cit.>. When the dataset is limited, network parameters are often undetermined and generalize poorly. To combat this, data augmentation can be considered. However, standard data augmentation methods produce limited plausible data <cit.>. Waheed et al. <cit.> proposed CovidGAN in their work, aiming to generate synthetic chest X-ray images for enhancing COVID-19 detection. Their findings indicate a notable 10% improvement in classification accuracy through the integration of synthetic images in the downstream task. Bhattacharya et al. <cit.> used Deep Convolutional Generative Adversarial Network (DCGAN) for data augmentation, to combat class imbalance in medical datasets. With DCGAN-augmented data, the classification accuracy rose from 60% to 65%. Han et al. <cit.> used GAN for CSI data augmentation to reduce the effort of data collection and prevent overfitting risks caused by incomplete datasets. The authors also evaluated the quality of GAN-generated samples. The experiments were conducted at sub-6 GHz frequencies with Wi-Fi signals. The authors went a step further towards domain adaptation and compared performances with and without data augmentation as the first step. In the former case, the authors observed around 7% accuracy in the target (unseen environment) domain. However, only four gestures were considered and all were operated by the right hand. Although GANs have been effectively employed for tasks involving CSI data at sub-6 GHz frequencies <cit.>, their application to COTS mmWave data remains undocumented at present. § METHODOLOGY §.§ JC&S Pose Classification pipeline Our pose classification pipeline consists of 60 GHz COTS Wi-Fi devices. In particular, we use MikroTik wAP 60x3 Wi-Fi devices, one acting as a transmitter and the other as a CSI sniffer. We follow the work of Blanco et al. <cit.> to gain access to CSI measurements. In our setup, a user performs a set of poses between the two devices. This creates unique patterns in CSI captured by the sniffer. The amplitude details of the CSI are subsequently input into a convolutional neural network (CNN), to extract distinctive features and effectively map changes in CSI to distinct poses <cit.>. However, in our approach, we opt to synthetically generate CSI to improve the generalization of the deep learning model for pose classification. §.§ Generative Adversarial Networks (GANs) Generative adversarial networks (GANs) <cit.> aim to generate new data that mirrors the statistical characteristics of the training data. GANs are very good at modeling high-dimensional distributions of the data. 
From the learned distribution, new data can be generated visually indistinguishable from the real data. GANs belong to a class of generative models that try to learn realistic representations of a class. These models take random noise (z) as an input and sometimes, a class label (Y). From the input, generative models generate a set of features (X) that resemble a particular class. z ensures there is diversity in X generated by the model. Therefore, generative models try to capture conditional probability P(X|Y). On the other hand, discriminative models often called classifiers, try to distinguish between the classes. These models take a set of X as input and determine the corresponding Y. Discriminative models aim to learn P(Y|X). GANs consist of two neural networks, Generator (𝒢) and Discriminator (𝒟), trained alternatively and competing with each other. 𝒢 takes z as an input usually sampled from a normal distribution, z ∼𝒩(μ, σ), and tries to generate fake or synthetic data (𝒢(z)) to fool 𝒟. Note that the terms fake, generated, or synthetic are used interchangeably to describe the data produced by 𝒢. z is often a low-dimensional vector and the corresponding sample space is called latent space. 𝒢(z) is a high-dimensional fake output e.g., an RGB image or audio or CSI, in our case. The goal of 𝒟 is to correctly classify real and fake data. So, 𝒟 outputs a single probability of an input being real or fake. These probabilities are fed back to the 𝒢 to improve its output. GANs are trained in an unsupervised manner or indirect training, in the sense that 𝒢 does not get to see real samples or images, but is trained to fool 𝒟. The optimization of GANs is a two-player min-max optimization problem that terminates at a saddle point forming a minimum with respect to 𝒢 and maximum with respect to 𝒟 <cit.>. The goal of the optimization is to reach Nash equilibrium <cit.>, a point where no player can improve by changing its weights. At this point, 𝒢 can be considered to have captured the distribution of real samples. The optimization can be mathematically formulated as follows: min_𝒢max_𝒟 V(𝒟, 𝒢) = 𝔼_x ∼ p_data(x)[log𝒟(x)] + 𝔼_z ∼ p_z(z)[log(1 - 𝒟(𝒢(z)))] where V(𝒟,𝒢) is the reward, x is the real data (real CSI), 𝒟(x) and 𝒟(𝒢(z)) represent output of 𝒟 on real CSI and GAN-generated synthetic data (synthetic CSI), respectively. 𝔼_x is the expected value over all real CSI instances. 𝔼_z is the expected value over all random inputs to the generator. The formula derives from the cross-entropy between the real and generated distributions. 𝒟 has access to both x and 𝒢(z). An ideal 𝒟 would output 1 for x and 0 for 𝒢(z) i.e., classifying real as real and synthetic as synthetic. On the other hand, 𝒢 only sees synthetic CSI, and aims to push 𝒟(𝒢(z)) close to 1, to minimize the overall loss function. This optimization problem is implemented as Binary Cross Entropy (BCE) loss. §.§ Method: Conditional Wasserstein GAN (cWGAN) We use our previously collected dataset <cit.> consisting of CSI samples corresponding to 3 users, performing a set of 8 distinct poses (8 classes). We use only amplitude information of the CSI, as phase information is noisy and calibration is needed. For each user, x ∈ℝ^m × n × p. x represents real CSI, m represents the number of samples/examples, and n and p represent the antenna and time index, respectively. We propose a method based on conditional GAN (cGAN) for data augmentation of the CSI samples. 
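For reference, the min-max objective above is usually trained with the alternating BCE updates sketched below (PyTorch; the generator and discriminator definitions are placeholders, with the discriminator ending in a sigmoid as in the standard setup). The conditional and Wasserstein variants used in this work modify this basic loop as described in what follows.

```python
import torch
import torch.nn as nn

bce = nn.BCELoss()

def gan_step(G, D, opt_g, opt_d, real_csi, z_dim=100):
    """One alternating BCE update for a plain (non-conditional) GAN.

    real_csi : tensor of shape (batch, 1500) -- flattened CSI amplitudes.
    G, D     : generator and discriminator modules; D ends in a sigmoid
               and returns probabilities in (0, 1).
    """
    batch = real_csi.size(0)
    ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)

    # Discriminator update: push real -> 1 and synthetic -> 0.
    z = torch.randn(batch, z_dim)
    fake_csi = G(z).detach()
    d_loss = bce(D(real_csi), ones) + bce(D(fake_csi), zeros)
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator update: try to make D output 1 on synthetic samples.
    z = torch.randn(batch, z_dim)
    g_loss = bce(D(G(z)), ones)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```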
Since cGANs can be conditioned on labels, we can control the generation process, unlike standard GANs. cGANs therefore learn mapping from input to output (classes) and achieve faster convergence. cGANs have an additional input layer of one-hot-encoded labels (label embedding layer). This additional layer guides the generator in terms of which image or sample to produce. To train our cGAN, we first use a BCE loss according to Equation <ref>. However, we observe mode collapse and unstable training with BCE loss. To counter this, we adopt an enhanced loss function known as the Wasserstein Loss (W-loss) <cit.>. This loss function offers improved stability in the training of GANs. W-loss is implemented in the following way: min_𝒢max_𝒞(𝔼_x ∼ p_data[𝒞(x)] - 𝔼_z ∼ p_z[𝒞(𝒢(z))]) Figure <ref> shows the architecture of cGAN with W-loss (cWGAN). Here, the discriminator is called critic (𝒞), since its output can be any real number, not bounded between 0 and 1. G tries to minimize the loss by maximizing 𝒞(G(z)), bringing the real distribution closer to fake one. Instead, the aim of 𝒞 is to maximize the distance and separate the two distributions apart. For W-loss to be valid and approximate Earth Mover's Distance (EMD), the critic's neural network needs to be 1-Lipschitz continuous, which means the norm of its gradients should be at most 1. This ensures that W-loss is valid and does not grow much. We encourage this, by adding a gradient penalty <cit.> as follows: min_𝒢max_𝒞 (𝔼_x ∼ p_data[𝒞(x)] - 𝔼_z ∼ p_z[𝒞(𝒢(z))] + λ𝔼_x̂∼ p_x̂[(∇_x̂𝒞(x̂)_2 - 1)^2]) where x̂ = ϵ x + (1 - ϵ) 𝒢(z). x̂ is interpolation between real and synthetic CSI, weighted by ϵ, ∇ represents the gradient operator, ·_2 represents the squared Euclidean norm and λ controls magnitude of gradient penalty. We use a linear 𝒢 and a linear 𝒞. 𝒢 consists of a label embedding layer and five linear layers. The linear layers except the last one, are followed by LeakyReLU activations. Tanh is used after the last linear layer to scale inputs between -1 and 1. 𝒞 consists of a label embedding layer and 3 linear layers. LeakReLU activation is employed after the first two linear layers. Additionally, Dropout is used after the second linear layer. No activation is used after the last linear layer for W-loss to work. The 𝒞 outputs a single real value, its score on real and fake samples. We use cWGAN to generate 30,000 samples of CSI for each user, without any additional data collection[Dataset is available at:https://zenodo.org/records/10702215]. However, the challenge is to evaluate the quality of generated samples as the CSI samples lack visual information unlike images, where one can visually inspect the quality of samples. To combat this, we adopt GAN-train and GAN-test metrics presented in the work <cit.>. GAN-train involves training a classification model on data generated by a GAN and evaluating it on real data. If the model, which exclusively encounters GAN-generated data during training, achieves high accuracy on real data, it suggests that the generated samples closely resemble real ones. A high GAN-train score indicates diversity in the generated samples. On the other hand, the GAN-test assesses the accuracy of a model trained on real data and evaluated on GAN-generated data. A high score implies that GAN-generated samples realistically approximate the (unknown) distribution of real samples. To achieve this, we use a supervised deep learning model based on CNN (cf., Figure <ref>), which serves as a downstream model for pose classification. 
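A compact PyTorch sketch of such a downstream classifier is given below; the exact layer configuration is described in the next paragraph, and the channel widths used here are illustrative rather than the precise values from the paper.

```python
import torch.nn as nn

class PoseCNN(nn.Module):
    """3-convolution-layer classifier mapping a CSI amplitude map to 8 poses.

    Input shape: (batch, 1, 30, 50) -- antenna index x time index.
    Channel widths (16/32/64) are illustrative.
    """
    def __init__(self, n_classes=8):
        super().__init__()

        def block(c_in, c_out):
            return nn.Sequential(
                nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
                nn.BatchNorm2d(c_out),
                nn.ReLU(),
                nn.MaxPool2d(2),
            )

        self.features = nn.Sequential(block(1, 16), block(16, 32), block(32, 64))
        self.classifier = nn.Linear(64 * 3 * 6, n_classes)  # 30x50 -> 3x6 maps

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))
```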
Our CNN consists of 3 convolution layers, each followed by batch norm, ReLU, and pooling layers. These layers extract features from the input data and encode the high-dimensional information into a low-dimensional space. Finally, a linear layer is employed to output a score for 8 classes. As an additional validation step, we also evaluate if the GAN-generated samples improve the classification accuracy of the original task (pose classification). § EXPERIMENTS §.§ GAN Training First, we use BCE loss with cGAN. We use a linear 𝒢 and a linear 𝒟. Specifically, we use 5 linear layers for 𝒢 and 𝒟. Normalization layers are used for both 𝒢 and 𝒟. Besides, ReLU activation is used for 𝒢 and LeakyReLU for 𝒟. This architectural setup draws inspiration from the design commonly used in computer vision tasks and best practices for GANs. The sigmoid layer is used as the final layer of 𝒟 and tanh for the 𝒢. We sample the input noise from a standard normal distribution and set the dimensionality (latent space) of the noise vector to 100. We use Adam as an optimizer with a learning rate of 3e-4. Batch size is set to 32 and weights of 𝒢 and 𝒟 are initialized to normal distribution. 𝒢 takes noise vector and labels as an input and outputs m×1500 where m is the number of samples of synthetic CSI and 1500 (30×50, representing antenna index and time index respectively). On the other hand, 𝒟 takes CSI (synthetic or real) and corresponding labels as input and predicts the probability of the CSI sample being real or synthetic. We train cGAN for 30,000 epochs and after every 500 epochs, we extract and save the synthetic CSI and corresponding labels. Besides, we monitor the 𝒢 loss, 𝒟 loss, and 𝒟 accuracy to see if the training is stable. Additionally, we use our evaluation metric (GAN-train and GAN-test), to inspect the quality of generated samples. We see that with BCE loss, the training process does not converge and 𝒢 loss increases steadily. Moreover, 𝒟 accuracy quickly approaches 100% suggesting that 𝒢 is not able to fool 𝒟. Further, the GAN-train and GAN-test scores are significantly low for all 3 users, well below 30%. Next, we try to make 𝒢 more powerful by using convolution layers in the architecture. However, also in this case, we observe low GAN-train and GAN-test scores. We link this failure to the mode collapse of GANs due to BCE loss. Therefore, this approach with BCE loss does not work. §.§ cWGAN Due to the above problems, we adopt W-loss instead of BCE loss and further tune the GAN training process. We use the 𝒢 and 𝒞 described in Section <ref>. In this case, we do not use any normalization layers. Using normalization layers leads to unstable training. We use the same set of hyper-parameters as mentioned previously. In addition, we stick to the default value of 10 for λ, which controls the magnitude of the gradient penalty in Equation <ref>. Moreover, we train 𝒢 once for every 5 iterations of 𝒞. In other words, we allow 𝒞 to be strong. This encourages 𝒢 to be updated with better gradients and adds stability to the overall process. Figure <ref> shows the training process of cWGAN with linear 𝒢 and linear 𝒞 for user 1. From Figure <ref>, it is quite evident that the losses are bounded and GAN training converges. The same holds for other users. We use the 𝒢 of trained cWGAN to generate 30,000 synthetic samples of CSI for each user and compute GAN-train and GAN-test scores for these samples, as mentioned in Section <ref>. 
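In code, the two scores simply swap the roles of real and synthetic data between training and evaluation. The sketch below uses a logistic-regression stand-in purely to keep the snippet self-contained; in the paper the classifier is the CNN described above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit(train_x, train_y):
    """Stand-in classifier; the paper uses the CNN-based downstream model."""
    return LogisticRegression(max_iter=1000).fit(train_x, train_y)

def gan_train_score(synth_x, synth_y, real_test_x, real_test_y):
    """GAN-train: train on GAN-generated CSI, evaluate on real CSI.

    A high score indicates the synthetic samples are diverse enough to
    stand in for real training data.
    """
    return fit(synth_x, synth_y).score(real_test_x, real_test_y)

def gan_test_score(real_train_x, real_train_y, synth_x, synth_y):
    """GAN-test: train on real CSI, evaluate on GAN-generated CSI.

    A high score indicates the synthetic samples lie close to the real
    data distribution captured by the classifier.
    """
    return fit(real_train_x, real_train_y).score(synth_x, synth_y)
```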
Moreover, we also evaluate if the synthetic samples lead to an increase in the task of pose classification. Table <ref> shows the performance of our method on 3 different users (persons). GAN-train and GAN-test scores have been introduced in Section <ref>, Baseline Acc. refers to the original pose classification accuracy with the real CSI. This is obtained by splitting real CSI into train and test splits. We use a standard 75:25 split for train and test sets, respectively. Then a classifier, CNN, described in Section <ref> is trained on train split and evaluated on test split. Instead, cWGAN Acc. refers to the pose classification accuracy when the same classifier is trained on train splits of real CSI and GAN-generated CSI and evaluated on the test split of the real CSI. In other words, the latter measures the impact of augmentation on the original task. One can clearly see that cWGAN-generated samples get a very high GAN-train and GAN-test score, similar to typical validation accuracy (Baseline Acc.). Thus, the synthetic samples are highly consistent with the actual data. In this way, we not only increase the size of the dataset but also maintain the consistency of the samples. Further, we can see that in all three cases, we get an improved performance in pose classification accuracy, when generated data is combined with the actual data. We notice 3.3%, 3.1%, and 4% improvement in accuracy for users 1, 2, and 3 respectively, compared to Baseline Acc. This high pose classification accuracy is crucial when deploying mmWave ISAC-based solutions for real-world applications. § CONCLUSION AND FUTURE WORK In this study, we successfully conducted stable training of GANs for mmWave CSI. Notably, we adopted a cWGAN-based approach to achieve robust data augmentation. However, training the cGAN with BCE loss proved ineffective as the synthetic samples exhibited low scores in both GAN-train and GAN-test evaluations. Consequently, we adopted cWGAN, cGAN with W-loss. This modification resulted in high-quality synthetic CSI with high validation scores as far as GAN-train and GAN-test metrics are concerned. The result is a large dataset of COTS mmWave Wi-Fi samples that can be used by researchers for validating their signal processing and deep learning methodology for pose classification. Further, we also showed that cWGAN-generated data complements the real data in the original task of pose classification, resulting in improved generalization. Our method represents an initial stride towards domain adaptation. In the future, our goal will be to achieve broader generalization across different people and environments with mmWave Wi-Fi leveraging GANs. Additionally, we will also explore the transferability of our method to other related tasks beyond pose classification. § ACKNOWLEDGMENT This research is funded by the FWO project (Grant number: 1SH5X24N) and FWO WaveVR (Grant number: G034322N). IEEEbib
http://arxiv.org/abs/2406.18030v1
20240626025402
Unified Architecture for a Quantum Lookup Table
[ "Shuchen Zhu", "Aarthi Sundaram", "Guang Hao Low" ]
quant-ph
[ "quant-ph" ]
Microsoft Quantum, Redmond, Washington, 98052, USA Department of Computer Science, Georgetown University, Washington, DC 20057, USA Microsoft Quantum, Redmond, Washington, 98052, USA Microsoft Quantum, Redmond, Washington, 98052, USA § ABSTRACT Quantum access to arbitrary classical data encoded in unitary black-box oracles underlies interesting data-intensive quantum algorithms, such as machine learning or electronic structure simulation. The feasibility of these applications depends crucially on gate-efficient implementations of these oracles, which are commonly some reversible versions of the boolean circuit for a classical lookup table. We present a general parameterized architecture for quantum circuits implementing a lookup table that encompasses all prior work in realizing a continuum of optimal tradeoffs between qubits, non-Clifford gates, and error resilience, up to logarithmic factors. Our architecture assumes only local 2D connectivity, yet recovers results that previously required all-to-all connectivity, particularly, with the appropriate parameters, poly-logarithmic error scaling. We also identify novel regimes, such as simultaneous sublinear scaling in all parameters. These results enable tailoring implementations of the commonly used lookup table primitive to any given quantum device with constrained resources. Unified architecture for a quantum lookup table Guang Hao Low July 1, 2024 =============================================== § INTRODUCTION Quantum computers promise dramatic speedups over classical computers for a broad range of problems. Provable advantage in many cases such as in Hamiltonian simulation <cit.>, quantum machine learning <cit.>, and quantum search are in the so-called query model. The quantum gate costs of these quantum algorithms are typically dominated by queries made to a particular unitary oracle, with each oracle query having a gate cost that scales polynomially with the amount of classical data needed to encode the problem instance. As quantum computers will execute orders-of-magnitude fewer logical operations per second than classical computers <cit.>, the crossover of runtime between classical and quantum advantage is highly sensitive to the constant factors in synthesizing these oracles. Understanding the cost of oracle synthesis, such as in simulating chemistry, for a given algorithm can mean a crossover of days rather than years <cit.>. The dominant gate cost of quantum oracles in many cases reduces to some instance of a quantum lookup table <cit.> – a generic framework that facilitates access to unstructured classical data in superposition. In analogy to the classical lookup table that returns data x_i for specified address bits i∈[N], the quantum lookup table is some unitary quantum circuit O_x⃗ that responds with a superposition of data O_x⃗∑_i∈[N]α_i |i⟩|0⟩=∑_i∈[N]α_i |i,x_i⟩, when queried by an arbitrary superposition of address bits ∑_i α_i |i⟩|0⟩. In general, quantum circuits implementing O_x⃗ can be efficiently found for any data x⃗. It is understood that a decisive quantum advantage for interesting problems will be achieved with logical qubits in the fault-tolerant regime. Hence, the cost of quantum table-lookup should be understood in the context of fault-tolerant quantum resources. Specifically, logical Clifford gates {H, S, CNot} are cheap, such as due to a native implementation on underlying physical qubits <cit.>. 
In contrast, logical non-Clifford gates {T, Toffoli} are remarkably expensive <cit.>, following the Eastin-Knill theorem for the nonexistence of a set of transversal gates universal for quantum computing, and require sophisticated protocols to realize, like magic-state distillation and code-switching. There are myriad possible implementations of quantum table-lookup <ref>, which realize distinct resource trade-offs. Early seminal work called QRAM <cit.> found implementations <cit.> using Θ(N) T gates and qubits in Θ(polylogN) depth. However, realizing this shallow depth (or runtime) is impractical due to the bottleneck of T gate production. Synthesizing a single T gate in each unit of depth has an overhead of  100s qubits <cit.>. Moreover, interesting problems such as in chemistry have on the order of N∼10^6 <cit.> terms. Overall, the extreme space requirements make the full potential of QRAM implementations difficult to realize. This has motivated spacetime tradeoffs on the other extreme end — QROM <cit.> uses Θ(N) T gates, but now with the minimum of Θ(log N) qubits in Θ(N) depth. It is straightforward to interpolate between these extremes <cit.>, but a most surprising discovery is that interpolating a novel SELECT-SWAP <cit.> architecture reduces the expensive T gate count to an optimal Θ(√(N)) with only a moderate number of Θ(√(N)) qubits. Even more recently, the bucket-brigade QRAM implementation was discovered to be error-resilient with polylogarithmic error scaling <cit.> in an all-to-all quantum gate model, compared to all other approaches with linear error scaling. Error resilience indirectly reduces the cost of logical resources, as lower errors allow cheaper, lower-distance error-correcting codes with faster logical cycle times. Our key contribution is an architecture for table-lookup summarized in <ref> that encompasses all previous methods and enables a new continuum of tradeoffs in the three key parameters of qubits, T gates, and error. Crucially, our architecture only requires a planar layout of nearest-neighbor quantum gates. There are concerns that local connectivity in QRAM variants <cit.> limit their capabilities in end-to-end implementations <cit.>. We similarly find in <ref>, that the naive implementation of long-range gates in our architecture severely limits error resilience. However, we introduce a more sophisticated implementation based on entanglement distillation that establishes the feasibility of long-range connectivity within a planar layout. This allows us to recover, up to logarithmic factors, in <ref> the scaling found in prior work assuming all-to-all connectivity. In other words, our refined method for long-range connectivity in our table-lookup architecture is equivalent to allowing unrestricted access to long-range gates, at some small (logarithmic) overhead. Novel tradeoffs are enabled by our general architecture. These include, for instance, simultaneous sublinear scaling of all parameters with N, in both the case where x_i is a single bit, and the more general setting of a b-bit word. There exists a single-word quantum lookup table that has sublinear scaling in infidelity, T-gate count, and qubit count, with local connectivity. For constant word size b, there exist multi-word quantum lookup tables that have sublinear scaling in infidelity, T-gate count, and qubit count, with local connectivity. We also provide a fine-grained error analysis of our table-lookup implementation. 
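For orientation, the headline scalings just quoted can be compared side by side; the snippet below is our own back-of-the-envelope tabulation, with all constant and logarithmic factors dropped, evaluated at the N ~ 10^6 scale relevant to the chemistry applications mentioned above.

```python
import math

def leading_costs(N):
    """Leading-order T-gate and qubit scalings of prior lookup-table designs."""
    return {
        "fan-out QRAM": (N,            N),
        "QROM":         (N,            math.log2(N)),
        "SELECT-SWAP":  (math.sqrt(N), math.sqrt(N)),
    }

N = 10 ** 6
for name, (t_gates, qubits) in leading_costs(N).items():
    print(f"{name:12s}  T ~ {t_gates:>12,.0f}   qubits ~ {qubits:>8,.0f}")
```

None of these counts says anything about how errors accumulate during a query, which is where the fine-grained analysis enters.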
We parameterize overall circuit error in terms of each common gate type present, e.g., idling error, non-Clifford gate errors, etc., as summarized in <ref>. In contrast, prior art universally assumes a generic ϵ error parameter for all gates, which we also do in <ref> to ease presentation before elaborating on the details later. Our approach proves useful for understanding dominant error sources, which future experimental implementations can then focus on minimizing. Our table-lookup architecture is assembled from common circuit primitives reviewed in <ref>, and the general framework is introduced in <ref>. We then elucidate in <ref> the importance of long-range connectivity in table-lookup on a planar layout, and demonstrate that entanglement distillation provides a scalable means for overcoming the geometric error scaling of naive long-range implementations. These tools also enable our general table-lookup architecture to be extended to multi-bit words in <ref>. Finally, we conclude and discuss future work in <ref>. § PRELIMINARIES All the circuit designs in this work are assumed to access data addressed by n bits with a memory of size N = 2^n. We demonstrate how to read a single bit of data before discussing extensions for large word sizes. Here, we consider four key characteristics of the quantum lookup table and describe how they directly impact its efficiency: circuit depth, qubit count, T-gate count, and infidelity. The minimum time taken to query a memory location is proportional to circuit depth. It is commonly claimed that quantum algorithms have an exponential speedup over classical algorithms when circuit depth scales polylogarithmically in memory size. However, this minimum time is only achieved under exceptional circumstances requiring extremely large space. In most cases, the overall spacetime volume is the correct metric for evaluating cost, and not depth alone. Thus, we also consider qubit count as a key characteristic. The qubits in a lookup table design are either used to maintain memory and router status or to act as control or ancilla bits. The current generation of quantum devices is relatively small, often having a few dozen to a few hundred qubits <cit.>. Considering such constraints in near- to intermediate-term devices, a quantum lookup table design with sublinear qubit scaling becomes highly desirable. T gates, essential for implementing routers in the presence of superposition queries, are non-Clifford gates that remain difficult to physically implement without substantial resources in most qubit modalities <cit.>. Hence, in practical circuit design, the T-gate count needs to be minimized; it also roughly tracks the overall spacetime volume of the circuit. The infidelity is the probability that a single query fails; it scales as a function of the memory size multiplied by a generic gate error ε. Thus, for a constant target query error, lower infidelity provides more gate-error tolerance and flexibility for accommodating larger memories. In classical memory, a combination of logic gates sends the bus signal to the specific memory location determined by the input address bits and relays the data stored in that memory location to the output register. Similarly, quantum routers (<ref>) help to navigate the qubit to a memory location determined by the address, after which the corresponding classical data is loaded onto the bus qubit, and subsequently, the bus qubit is navigated to the output register.
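As a simple illustration of why the infidelity metric matters, the rough sketch below (our own, not from the paper) treats a query as G independent error locations, each failing with probability ε; the query infidelity is then 1 - (1 - ε)^G ≈ Gε for small ε, so whether G grows linearly or polylogarithmically in N makes the difference between an unusable and a comfortable query.

```python
import math

def query_infidelity(error_locations, eps):
    """Failure probability of one query with independent error locations."""
    return 1.0 - (1.0 - eps) ** error_locations

N, eps = 2 ** 20, 1e-6                     # illustrative memory size and gate error
print(query_infidelity(N, eps))            # ~0.65 when error locations grow like N
print(query_infidelity(int(math.log2(N)) ** 2, eps))   # ~4e-4 for polylog growth
```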
The circuit design of the quantum router plays a pivotal role in determining the properties of a quantum lookup table. We review some of these router designs and prior architectures, each of which assumes all-to-all connectivity. §.§ Fan-out architecture The fan-out architecture <cit.>, an initial proposal for the QRAM, can be visualized as a binary tree of depth log N, where the N memory locations are situated at the tree's leaves (<ref>). Every non-leaf node in this tree functions as a router, which guides the bus signal to its left or right child. The status of the routers on the ℓ-th level is determined by the ℓ-th address bit, which is maximally entangled with the router qubits. Once the memory contents x_j are written into the bus, such as by using a classically-controlled X gate, the bus qubit is routed back out via the same path, and all router qubits are restored to their original disentangled state. A significant drawback of this architecture is its high, linear infidelity. If a single router gets corrupted, it will flip the status of all other routers on that same level, thereby misdirecting the query to an incorrect memory path. Consequently, for error resilience, it is suboptimal to have all the routers simultaneously entangled with the address bits. §.§ Bucket-brigade architecture The bucket-brigade architecture <cit.> is an improved QRAM proposal over the fan-out architecture with three phases for querying memory: setting router status, qubit route-in, and qubit route-out. It uses CSWAP routers (Fig. <ref>) to direct signals. The circuit description for a CSWAP router 𝐑_j is illustrated in <ref> following <cit.>. The router is composed of four qubits (t_0, in_0, L_0, and R_0) and is referred to as a CSWAP router as it uses controlled-SWAPs to route information. The status of a router at depth ℓ of the bucket-brigade model is set according to the ℓ-th address bit. For our toy model, 𝐑_j's status is set by storing the address qubit a_ℓ in the status register t_j. Once a router's status is set, it is said to be active and part of a query path. During qubit route-in, the bus qubit enters the router by being swapped into the in_j register. It is then swapped to the desired memory location connected to either R_j or L_j depending on the status in t_j. In <ref>, the circuit for qubit route-out is not explicitly depicted, as it is simply the qubit route-in circuit run in reverse. When a memory address is queried, the control qubits activate a path through the routers to the target memory cell (highlighted in blue in <ref>). Only the routers along this path are activated, significantly reducing the number of active routers at any time. This is in contrast to the fan-out architecture, where all routers are activated simultaneously. As a result, only log N routers in the binary tree are strongly entangled with the address bits, and all other routers are weakly entangled. Similarly, the bus qubit storing x_j is routed back out via the same path and is weakly entangled with all other routers off the path. A pivotal study by Hann et al. <cit.> demonstrated a circuit design for the bucket-brigade model with O(log^2 N) infidelity that is resilient to generic gate errors, assuming all-to-all connectivity. They observed that the error in some query paths remains contained and does not spread to every other branch of the bucket-brigade model, thereby ensuring that some query paths remain fault-free. The key idea here is that CSWAP routers have a certain ability to contain error.
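The routing logic itself can be emulated classically; the sketch below (our own illustration, with invented helper names) walks a single computational-basis address through a binary tree of CSWAP-style routers and records which routers are touched. Only the log N routers on the query path ever become active, which is exactly the structure that the containment argument spelled out next relies on.

```python
def bucket_brigade_query(address_bits, memory):
    """Classically emulate the three bucket-brigade phases for one address."""
    assert len(memory) == 2 ** len(address_bits)
    active_routers = []        # (level, node) pairs along the query path
    node = 0                   # start at the root router
    for level, bit in enumerate(address_bits):
        active_routers.append((level, node))   # phase 1: set this router's status
        node = 2 * node + bit                  # phase 2: route one level down
    data = memory[node]                        # leaf reached: copy x_i onto the bus
    # phase 3: route-out retraces the same path in reverse, resetting the routers
    return data, active_routers

memory = [0, 1, 1, 0, 1, 0, 0, 1]              # toy memory with N = 8 cells
bit, path = bucket_brigade_query([1, 0, 1], memory)
assert bit == memory[0b101] and len(path) == 3  # only log N = 3 routers activated
```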
Assuming that all CSWAP routers along a given path suffer no errors, one can prove that arbitrary errors on any other router off the path do not affect the state of the bus qubit containing the queried data. Hence, by linearity, the overall error for this model is limited only by the error of all CSWAP routers and circuit elements along any one query path. §.§ SELECT-SWAP architecture The SELECT-SWAP architecture <cit.> uses a combination of linear and CNOT routers (described below) to route addresses and the fan-out architecture to route out the bus qubit. The linear routers ℛ_0 and ℛ_1 in <ref> act like a SELECT circuit that sets the state of register q to |1⟩ if and only if the state on the routers equals the state of the address qubits. This is achieved with multi-controlled CNOT gates; a detailed implementation can be found in Ref. <cit.>. The router is named linear because its qubits can be arranged in a line within a planar layout. The CNOT router in <ref> can be used for diffusing the input signals from a parent to all child nodes. Notice that implementing a CNOT router does not require any T gates. The high-level routing scheme for a memory with 16 locations is shown in <ref>, where the address bits are partitioned into two sets, controlling the linear and fan-out routers, respectively. The linear and CNOT routers activate one out of four sets of memory locations using the first set of address bits. The second set of address bits then activates the fan-out routers to route out the qubits stored at the designated memory location to an output register. Although the SELECT-SWAP architecture targets T count reduction, it cannot achieve better than linear infidelity for generic error resilience. This limitation arises because every route in the tree must be correct to produce the desired output. §.§ Fine-grained error types A common practice when analyzing various table lookup designs is to assume a single error parameter that represents the error contribution of any operation. However, this obscures the fact that every operation could contribute differently to the errors in the overall circuit. Separating the contributions from various errors allows for more in-depth circuit profiling that helps in identifying which operations hamper the overall performance of a model or circuit design and how one can best harness its full potential. The errors that we will consider throughout this work are summarized in Table <ref>. §.§ Circuit optimization We note that all the high-level circuits we have presented usually admit straightforward circuit optimizations. We outline a few examples in this section. The CNOT router in <ref> is depicted as sending a single input to two outputs. However, it can clearly be optimized to have one fewer qubit by instead having the input serve as one of the outputs. Similarly, the CSWAP router in <ref> is depicted as performing a controlled swap to move the qubit in_j to either the left or right. However, this can also be optimized by removing one output qubit and one controlled-SWAP gate by having the in_j input be the same as the L_j output. Then, the qubit will be swapped to R_j if and only if t_j is 1, without affecting the functionality of the router. § GENERAL TABLE LOOKUP FRAMEWORK In this section, we present our general error-resilient quantum table lookup architecture, which can simultaneously have sub-linear scaling in qubit count, T count, and infidelity for a specific choice of parameters.
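As a point of reference for the parameter tuning explored below, the small scan sketched here (our own; it keeps only the leading terms of the SELECT-SWAP costs mentioned earlier and ignores constants) recovers the Θ(√N) sweet spot obtained by balancing the SELECT and SWAP portions of that architecture.

```python
import math

def select_swap_t_count(N, lam):
    """Leading-order T count for a SELECT-SWAP lookup with partition size lam."""
    return N // lam + lam          # SELECT over N/lam addresses + lam-wide SWAPs

N = 2 ** 20
best_cost, best_lam = min((select_swap_t_count(N, 2 ** k), 2 ** k)
                          for k in range(1, 20))
print(best_cost, best_lam)         # ~2 * 2**10 at lam = 2**10, i.e. lam ~ sqrt(N)
assert best_lam == int(math.sqrt(N))
```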
We begin by describing our framework's structure and then delineate its operation and correctness. The high-level scheme of our design to query a memory of size N = 2^n with partition size λ ≤ N and CNOT tree size γ ≤ λ can be visualized as the tree-like structure shown in <ref>. The top of our design contains d ≤ n linear routers ℛ_0, …, ℛ_d-1 where d = log_2(N/λ). This is followed by a tree of depth d' made up of CSWAP routers 𝐑_0, …, 𝐑_2^d'+1-1 where d' = log_2(λ/γ). Each of the 2^d' leaves of this tree has a corresponding CNOT tree with γ leaves attached to it. Essentially, the linear routers are used to partition the N memory locations into sets of size λ and the CSWAP routers further partition these into sets of size γ. Each leaf of the CNOT tree is connected to a memory location. The bottom of the design contains a tree of depth n-d made up of CSWAP routers 𝐑'_0, …, 𝐑'_2^n-d+1-1 to read the queried data from the appropriate location. The parts of the tree that correspond to the query path for x_0 are highlighted in blue in <ref>. Data lookup in our framework can be broken into three stages and, without loss of generality, we describe the process to query address |a⟩ = |a_0 … a_n-1⟩ = |0 … 0⟩: Stage I (address setting). The status of the linear routers is set using the first d address bits such that |ℛ_z⟩ = |a_z⟩. Next, the status of the d' CSWAP routers in the query path is set sequentially using the address bits |a_d … a_d+d'-1⟩. Note that only the CSWAP routers along the query path are set, as opposed to every router in the CSWAP tree. Stage II (querying memory). Let [m] denote the set {0, …, m-1} for a number m. The objective in this stage is to compute intermediate values q'_0, q'_1, …, q'_λ-1 through N/λ repetitions. For i ∈ [N/λ], the control qubit q_i is set to |1⟩ if and only if |i⟩ = |a_0 … a_d-1⟩. In our example, only q_0 is set to |1⟩ and q_i is set to |0⟩ for i ≠ 0. The control q_i is routed along the query path determined by the status of the CSWAP routers set in Stage I to a leaf of the depth-d' tree. Then, q_i's value is diffused to the γ leaves of the activated CNOT tree attached to this leaf. In our example, q_i will be routed and diffused to the leftmost CNOT tree. Note that in the i-th repetition, the leaves of the CNOT trees are collectively associated with the memory locations {λ· i + j | j ∈ [λ]}. Finally, q_i's value acts as a control indicating whether data from memory is loaded into the corresponding q'_j qubit registers. For instance, when |a_d … a_d+d'-1⟩ = |0 … 0⟩, the qubit registers {q'_j} are updated with the data {x_λ· i + j} for j ∈ [γ]. By contrast, the qubit registers q'_j for j ∈ {γ, γ+1, …, λ-1} remain unchanged. Specifically, the values in the q'_j registers satisfy: q'_j = ⊕_i ∈ [N/λ] q_i x_λ· i + j where ⊕ denotes addition modulo 2. For our example, after N/λ repetitions, only q_0 = 1 and only the leftmost CNOT tree is activated. Hence, we find that q'_j = x_j for j ∈ [γ] and q'_j remains |0⟩ otherwise. We remark that between each repetition, the qubits in the CNOT trees can be trivially reset to |0⟩. Additionally, q_i is routed back to the top of the CSWAP tree and all qubits except for the router status qubits are reset trivially to |0⟩. Stage III (retrieving data). Data is retrieved similarly to how the bus qubit is routed out of the noise-resilient bucket-brigade architecture in Ref. <cit.>. We use the address bits |a_d … a_n-1⟩ to set the status of the n-d CSWAP routers at the bottom of our design. As in Stage I, it is only the routers in the query path whose status is set; a toy classical emulation of the three-stage bookkeeping is sketched below.
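This sketch is our own and purely illustrative: all names are invented, only computational-basis addresses are handled (superposition queries follow by linearity), and no attempt is made to model errors. It mirrors the stage-by-stage bookkeeping: the linear routers compare the leading address bits to i, the CSWAP path activates one CNOT tree, the intermediate registers q'_j accumulate ⊕_i q_i x_λ·i+j, and finally the addressed entry is routed out.

```python
import random

def framework_lookup(a, memory, lam, gam):
    """Classical emulation of the three-stage lookup for basis-state address a."""
    N = len(memory)
    a_top, a_low = divmod(a, lam)      # first d address bits / remaining n-d bits
    tree = a_low // gam                # Stage I: CSWAP path activates one CNOT tree

    q_prime = [0] * lam                # Stage II: N/lam repetitions
    for i in range(N // lam):
        q_i = 1 if i == a_top else 0   # control set by the linear routers
        for j in range(tree * gam, (tree + 1) * gam):   # activated CNOT tree only
            q_prime[j] ^= q_i & memory[lam * i + j]

    return q_prime[a_low]              # Stage III: route out q'_l, l = a_d ... a_{n-1}

memory = [random.randint(0, 1) for _ in range(16)]      # N = 16, as in the example
assert all(framework_lookup(a, memory, lam=4, gam=2) == memory[a] for a in range(16))
```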
Completing Stage III, the leaves of this CSWAP tree point to the q' registers, and only the data in q'_ℓ for ℓ = a_d … a_n-1 is retrieved. In our example, this leads to q'_0 being retrieved at the end of the data lookup process. The detailed procedure for data lookup is shown in <ref>. Our objective is to examine the optimal balance between d and d' that results in the most favorable infidelity, T count, and qubit count scaling. Our fine-grained analysis uses the error types from <ref>. Consider the quantum data lookup structure with the high-level scheme in <ref> with N memory locations. Let n = log N, λ = 2^n-d be the partition size and γ = 2^n-d-d' be the size of a CNOT tree with d' ≤ d ≤ n. The infidelity of this circuit is O( ε_L(γ N/λ + (N/λ)log(λ/γ)) + ε_s log(λ^2/γ) + ε_I((N/λ)log N(log(N/γ) + γ + log(λ/γ)) + λ) + ε_c γ N/λ + ε_cc (N/λ)log(N/λ) + ε_cs((N/λ)log(λ/γ) + log^2(λ/γ) + log^2 λ) ). Moreover, the T count for this design is O(N/γ + (N/λ)log(N/λ) + λ), and its qubit count is O(log(N/λ) + λ). We will restate and prove this theorem in <ref> after explaining how to lay out the scheme in <ref> on a planar grid with nearest-neighbor connectivity. Here, we use the above theorem to find an instance that has sub-linear scaling of the infidelity, T count, and qubit count. For N memory locations, there exists a quantum data lookup scheme that has Õ(N^3/4) infidelity, O(N^3/4) T count, and O(√(N)) qubit count. For the high-level scheme depicted in <ref>, setting λ = √(N) and γ = N^1/4 and applying Theorem <ref> gives the result. We claim that it is necessary to have CSWAP routers at the top of our design in Stages I and II to achieve sublinear infidelity scaling. Assume by way of contradiction that the CSWAP routers are replaced with CNOT routers. First, note that CNOT routers are not robust to Pauli Z errors, as shown in <ref>. Although the Pauli Z error propagates only to the parent node in the CNOT router, it can result in a phase kickback that can alter the address state presented during a query. Specifically, for our example, this happens when i = a_0 … a_d-1 in Stage II and there is an odd number of Z errors along the query path in the framework. A comparable scenario is also presented in Ref. <cit.>, where the read-out CNOT tree demonstrates resilience only to Pauli Z errors. Second, to prevent a phase kickback, we need to assume that the entire CNOT tree with λ leaves is part of the query branch and remains Pauli-Z-error-free. In this case, the idling error will be dominated by Stage II's contribution of O(2^d ε_I (d + 2^n-d)) = O(2^n ε_I), leading to a linear infidelity scaling. By contrast, the CSWAP router is robust against error propagation (see <ref> for details). Hence, we consider our framework with a non-zero number of CSWAP routers at the top of our design to be a more resource-efficient approach. Uncomputing the table lookup circuit is crucial to ensure that there are no residual garbage states entangled with the address and output registers once a query has been performed. For our design, uncomputing can be done as follows. First, run the circuit for Stage III in reverse to route the output bit back into its corresponding q' register, then run the Stage II circuits again to set all q' registers to 0. Finally, run the Stage I circuit in reverse to reset the status of all the routers. This will effectively double the infidelity. A potential way to reduce the T count for our framework without worsening its query infidelity is by modifying the design for Stage III.
Specifically, in <ref>, we retain the CSWAP tree from the bottom of the figure up to a depth d'. Each of the 2^d' leaves of this tree has a corresponding tree of fan-out routers with γ leaves attached to it. Each of the γ leaves is connected to a corresponding q'_j register. Essentially, this creates λ/γ different fan-out router substructures each of whose routers is set independently of the other. Let ℓ = a_d … a_n-1. Then, this will impose the condition that the γ-sized sub-structure of the fan-out routers containing q'_ℓ should be error-free for noise resiliency. Since a similar condition is satisfied by the CNOT trees in Stage II, this does not affect the asymptotic scaling for the infidelity. However, the T count reduces as the fan-out routers do not use T gates. A more detailed analysis of this improvement is left for future work. § PLANAR LAYOUTS FOR QUANTUM DATA LOOKUP FRAMEWORKS In this section, we discuss how our general quantum data lookup framework can be designed on a planar layout with only local connectivity. We first build some intuition for the underlying principles to achieve this by modifying the bucket-brigade design of <cit.> for a planar layout. This is illustrated in <Ref>. An initial analysis of query infidelity for this design, shows that it scales sub-linearly in memory size for the planar layout. In <Ref> we use entanglement distillation to perform long-range operations and recover the log scaling for query infidelity in the planar layout. We put these ideas together to present the planar layout for our general framework in <Ref>. §.§ Planar layout for the bucket-brigade model We provide a circuit design for the bucket-brigade QRAM model, assuming the qubits are laid out on a 2D planar lattice and multi-qubit gates act only on adjacent qubits. We first demonstrate a toy model with two memory locations, where the routing scheme is depicted as a binary tree in <ref>. To reach the desired memory allocation, a single CSWAP router 𝐑_0 is employed to direct the incoming address qubit into the designated memory location, from where the stored data x_i is retrieved and then routed out using the same path. The CSWAP router's four qubits can be arranged in a T-shaped configuration as shown in <ref> where the qubits are located at the intersections of the grid. This configuration ensures that each router qubit is adjacent to any other router qubit with at most one local SWAP. The three-qubit CSWAP gate can be decomposed into a sequence of Clifford and T gates that operate on at most two qubits <cit.>. For the larger memory size of N=16, the high-level bucket-brigade routing scheme is shown in <ref>, where both the route-in and route-out phases follow the same path in the tree. The corresponding planar layout is shown in <ref> where the blue lines show the structure of a CSWAP router, the red lines show connections between different levels of the routing scheme, and the input, address, and bus qubits are positioned in the center of the diagram. The routers are placed on the grid following the H-tree fractal pattern <cit.> starting from the root at the center and leaves at the boundaries of the grid. The left and right registers of the leaf-level routers send an incoming qubit to the respective memory locations. A pair of T-shaped routers each laid out according to <ref> can be joined together to form a single H-tree segment as shown in <ref>. Note that such a planar layout scheme can be naturally extended to higher dimensions such as a cubic grid for 3D. 
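The area claim for this layout is easy to check; the sketch below (ours, with an arbitrary unit cell size) doubles the bounding-box side every other level of the H-tree, so 2^n leaves occupy a square of side O(√N) and hence area O(N).

```python
def htree_side(levels, unit=1):
    """Bounding-box side of an H-tree with the given number of levels."""
    side = unit
    for level in range(levels):
        if level % 2 == 0:          # the side doubles on alternate levels
            side *= 2
    return side

for n in (4, 10, 20):
    s = htree_side(n)
    print(f"n = {n:2d}: side ~ {s}, area ~ {s * s}, leaves = {2 ** n}")
```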
We focus, however, only on the planar grid throughout this work. The recursive expansion of the fractal layout yields an optimal layout that occupies an area of size O(N). For this layout scheme, the T count and qubit count scale as O(N), since the layout uses O(N) CSWAP routers and there are O(N) points in the rectangular grid, where each point corresponds to a qubit. Unlike in the all-to-all connectivity case, the error accumulation in the planar layout occurs due to the long-range gates that need to be performed, such as those along the red lines in <ref>. Naively, if we assume that the probability of gate error is ε_max and a long-range SWAP is performed using a successive series of SWAP gates, the overall infidelity scales as O(N ε_max), as both the circuit depth and the number of gates for each query scale as O(√(N)). However, by performing a more fine-grained error analysis, we show how the infidelity can scale sub-linearly in N. The foundation for calculating the infidelity scaling lies in the crucial property of error containment exhibited by the bucket-brigade QRAM. Consider the tree branches in <ref>. They can be categorized either as good or bad based on the presence or absence of errors in them. In Ref. <cit.>, it was shown that the errors do not spread from a bad branch to a good branch, and assuming that the query path is a good branch, this implies that errors in other parts of the QRAM do not significantly affect the query. Importantly, this holds regardless of the layout scheme as long as CSWAP routers are used to enact the high-level routing scheme in <ref>. Hence, we can use the error containment property for our planar layout too. To improve the overall infidelity of our layout scheme, we modify the circuit and employ constant-depth circuits for long-range operations between non-adjacent qubits. One way to implement a long-range SWAP between two qubits separated by a line of m qubits (e.g., a single red line in <ref>) is to use a strongly entangled length-m GHZ state as a resource. The long-range gate is then performed involving the qubits near the endpoints, as shown in Ref. <cit.>. A length-m GHZ state is |GHZ_m⟩ = (|0⟩^⊗ m + |1⟩^⊗ m)/√(2). The error contribution for using GHZ states in this case is stated below. For a given GHZ state of length m, the probability that any long-range operation using this state has an error is O(m ε_Q), where ε_Q is the probability of a single qubit having an error. For the GHZ state to be correct, all its underlying qubits have to be correct. Using the triangle inequality yields the desired error probability for the GHZ state. Lemma <ref> shows that, despite the constant depth needed for the long-range operation, its error rate increases linearly with the length of the GHZ state. However, GHZ states of arbitrary length can be created using a constant-depth circuit <cit.>. Consequently, all the GHZ states utilized for the long-range SWAPs can be generated in place without incurring a substantial overhead. In fact, with this modification, the circuit depth for the planar layout reduces from O(√(N)) to O(log N), thereby leading to a sub-linear infidelity scaling of O(√(N)). §.§ Recovering log scaling in infidelity As using GHZ states still gives a polynomial dependence on N in the infidelity analysis, we instead consider performing a long-range SWAP on remote qubits using a Bell state (i.e., |Φ^+⟩ = (|00⟩ + |11⟩)/√(2)) between them as a resource.
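To see at a glance why one would go to the trouble of preparing such a resource state, the order-of-magnitude comparison below is our own: prefactors are arbitrary, only the long-range contribution to the per-query error is counted (idling terms are ignored), and the last row anticipates the constant per-operation error ε_L achievable with the distilled Bell pairs described next.

```python
import math

def long_range_error_budget(N, eps=1e-6):
    """Rough per-query error from long-range operations under each strategy."""
    return {
        "naive SWAP chains":    N * eps,                   # O(N * eps_max)
        "GHZ-mediated gates":   math.sqrt(N) * eps,        # O(sqrt(N) * eps_Q)
        "distilled Bell pairs": math.log2(N) ** 2 * eps,   # O(log^2 N * eps_L)
    }

for N in (2 ** 10, 2 ** 20, 2 ** 30):
    print(N, {k: f"{v:.1e}" for k, v in long_range_error_budget(N).items()})
```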
To obtain a high-quality Bell state, we use noisy Bell states, a quantum error correcting code, and an entanglement distillation protocol as shown in <cit.>. Given an [[n̂, k̂, d̂]] quantum error correcting code and n̂ noisy Bell pairs with initial error ε_i, there exists a distillation protocol that creates k̂ Bell pairs with error ε_f < ε_i where ε_f = O(ε_I^d). Moreover, when ε_i = O(m ·ε_G) and ε_f < 1 is a small constant, d = O(log m). To perform a long-range SWAP, consider the Bell pair being created on adjacent qubits near the source qubit with one half of it being teleported to be adjacent to the target qubit using the length-m GHZ state where m = O(√(N)) as per our layout. Then, from <ref> these noisy Bell pairs could have an error ε_i≤ O(√(N)·ε_Q) and it is possible to distill a Bell pair with constant error using say, the surface code, with distance O(log N). For this choice of code, the protocol to distill would have depth O(d) = O(log N) and the number of noisy Bell pairs used would be n̂ = O(d^2) = O(log^2 N). Combining the two methods gives the following. Given two qubits separated by m qubits on a planar grid with local connectivity, qubit error ε_Q and the error on a distilled Bell pair ε_f, the error on performing a long-range operation between the two qubits is given by ε_L := min (m ·ε_Q, ε_f). We acknowledge that, by using entanglement distillation, the overall circuit depth may increase by a polylogarithmic factor in the worst case, which is acceptable. However, there might be strategies to mitigate this depth increase. Therefore, for the purposes of our analysis, we proceed under the assumption that long-range Bell states are readily available. Accounting for the overheads due to entanglement distillation, for the planar layout, we claim that the circuit depth D scales as O( N). Understanding the activation sequence of routers in a query branch is beneficial in performing fine-grained error analysis. For instance, in the setting router status phase, we try to route as many address qubits as possible in parallel. This means that it is not necessary to wait for the address qubit |a_ℓ⟩ to reach the router at level ℓ before sending the address qubits |a_ℓ+1⟩ to be routed by _0. Specifically, this reveals that not all qubits need to maintain their state throughout the entire query depth T. Suppose we are given a fixed query branch of depth four, with the routers 𝐑_0, 𝐑_1, 𝐑_2, 𝐑_3 counting from root to leaf. The activation sequence for each router and associated gates is depicted in <ref>, where the time τ_i increases by an O( N) additive factor and hence the total query time ∑_i τ_i = O( (N)). The error terms for our fine-grained analysis are taken from <ref>. For N memory locations, the improved fine-grained infidelity of the bucket-brigade QRAM with planar layout (<ref>) scales as O(log^2 N ε_L +log Nε_s+log^2 Nε_cs+ N ε_I). For a fixed query branch, we first consider the error contribution from the long-range operation. Let T=log N be the tree depth of the routing scheme as shown in <ref>. By Corollary <ref>, the contribution of long-range error to the probability of a successful query is P_L = ∏_ℓ=1^log N (1-ε_L)^3(T-ℓ) = (1-ε_L)^Ω(T^2), where 3(T-ℓ) is the number of times a long-range CNOT is applied to execute T - ℓ long-range SWAP operations. This pattern is also evident in <ref>, where a long-range operation between 𝐑_1 and 𝐑_2 occurs T times, but occurs only T-1 times between 𝐑_2 and 𝐑_3. 
Next, consider the error contribution from the local SWAP operations over qubits associated with individual routers. It can be observed from <ref> that it takes two local SWAP operations for the address setting of each router. Hence, the contribution of local SWAP error to the probability of a successful query is P_s = (1-ε_s)^2T = (1-ε_s)^Ω(T). The error contribution of the local CSWAP operation can be found similarly, and its probability contribution toward success is P_cs = ∏_ℓ=1^log N (1-ε_cs)^2ℓ = (1-ε_cs)^Ω(T^2). Last, we consider the idling error of the status qubit t_ℓ for each router 𝐑_ℓ, as it must maintain its value from immediately after its associated router's address is set. By contrast, the other qubits in the router can be reset and remain irrelevant until their next usage. The idling time for each 𝐑_ℓ's status qubit t_ℓ is the total active time of the router minus the number of CSWAP operations over the qubits of 𝐑_ℓ. Then, by <ref> and <ref>, the total idling time for 𝐑_ℓ is O(T-ℓ + √(N)/2^ℓ/2). Therefore, the idling-qubit error contribution toward the total query success probability is P_I = (1-ε_I)^Ω(N). Since the total success probability is P = P_L· P_s· P_cs· P_I, combining <ref> with an analysis similar to that in Ref. <cit.>, we obtain the desired infidelity. §.§ Planar layout for the general framework The CSWAP routers form only one part of our general data lookup framework, but the techniques from <ref> can be reused here. For ease of description, consider a memory of size N = 16, partition size λ = 4, and CNOT tree size γ = 2 for our design as shown in <ref>. The planar layout for 16 memory locations is given in <ref>, where the former depicts the layout during Stages I and II while the latter holds for Stage III. Some of the routers are labeled in the figures with their components surrounded by dashed boxes. As the name suggests, the linear routers are at the top of the design. The middle of the layouts contains the CSWAP routers 𝐑_0 and 𝐑'_0, respectively. In <ref>, the routers at the sides in the bottom of the figure correspond to the CNOT trees. By contrast, note that in <ref>, the same qubits can be reused for the CSWAP routers 𝐑'_1 and 𝐑'_2. In comparison to <ref>, these layouts clearly use far fewer qubits. Now, we can analyze the T count, qubit count, and query infidelity for our framework for the planar layouts described here. [Restatement of <ref>] Consider the quantum data lookup structure with the high-level scheme in <ref> with N memory locations. Let n = log N, λ = 2^n-d be the partition size and γ = 2^n-d-d' be the size of a CNOT tree with d' ≤ d ≤ n. The infidelity of this circuit is O( ε_L(γ N/λ + (N/λ)log(λ/γ)) + ε_s log(λ^2/γ) + ε_I((N/λ)log N(log(N/γ) + γ + log(λ/γ)) + λ) + ε_c γ N/λ + ε_cc (N/λ)log(N/λ) + ε_cs((N/λ)log(λ/γ) + log^2(λ/γ) + log^2 λ) ). Moreover, the T count for this design is O(N/γ + (N/λ)log(N/λ) + λ), and its qubit count is O(log(N/λ) + λ). First, consider Stage I, where the linear routers do not contribute to the infidelity as their status is directly set by the address qubits. The CSWAP tree functions like a depth-d' noise-resilient bucket-brigade QRAM. For the planar layout, the infidelity of the bucket-brigade QRAM is computed in Theorem <ref>. Using this, the depth-d' CSWAP tree has infidelity O(d'ε_I + d'ε_L + d'ε_s + d'^2ε_cs) in Stage I. For the N/λ = 2^d repetitions performed in Stage II, the infidelity comes both from gate errors when operations are performed and from idling errors on qubits that are not acted upon.
The O(d) Toffoli gates used to implement the linear routers as per <cit.> each contribute ε_cc to the infidelity. The error containment property of the CSWAP routers discussed in <Ref> implies that their contribution is only d'ε_L for the long-range operations performed. For the CNOT tree with γ leaves, its contribution amounts to ε_L + ε_c for each node in the tree. Then, the overall gate-error infidelity is O(2^d(ε_cc d + ε_cs d' + ε_L(d' + 2^n-d-d') + ε_c 2^n-d-d')). The infidelity from the idling error is O(2^d ε_I n(d + d' + 2^n-d-d')), where the terms correspond to the qubits that are not reset between repetitions. These are the router status qubits for the linear routers and CSWAP routers along the query path, as well as the γ intermediate registers q'_i at the leaves of the activated CNOT tree. Lastly, Stage III acts like a depth-(n-d) bucket-brigade QRAM and, applying Theorem <ref> again, we obtain its infidelity as O((n-d)ε_I + (n-d)ε_L + (n-d)ε_s + (n-d)^2ε_cs). Combining the infidelity from all stages and expressing d, d', and n in terms of N, λ, and γ yields <ref>. The overall T count for the design is O(2^d(2^d' + d) + 2^n-d), where the first term corresponds to the 2^d repetitions of Stage II, each of which uses the depth-d' CSWAP tree, and the last term comes from the depth-(n-d) CSWAP tree in Stage III. Note that Stage I has T count O(2^d'), which is always asymptotically smaller than the Stage II T count. Replacing d, d', and n yields a T count of O(N/γ + (N/λ)log(N/λ) + λ). The overall qubit count for the design is O(d + 2^n-d), where the first term comes from the linear routers. The depth-d' CSWAP tree and the γ-sized CNOT trees in total contain O(2^n-d) qubits. In Stage III, the qubits used for the CNOT trees are reused for the CSWAP routers accounted for in Stage II. Replacing n and d with N and λ yields an overall O(log(N/λ) + λ) qubit count. § LARGE WORD SIZE While all previous circuits considered reading a single bit of information from memory, in most real-world scenarios one would want to retrieve multiple bits of classical information. In this section, we discuss two ways our general framework can be modified to handle the readout of multiple bits of information – (i) in parallel and (ii) in sequence – while still maintaining sub-linear scaling in the memory size N. While the former has lower query infidelity, the latter has a lower T count. Determining whether there exists a single multi-bit readout scheme that simultaneously minimizes T count and infidelity, or whether the need for two schemes remains a fundamental limitation of our framework, is left for future work. Throughout this section, we assume that the word size, i.e., the number of bits to be read out from each memory location, is b, and for ease of description the designs will be described assuming N = 16, λ = 4 and γ = 2. §.§ Parallel multi-bit readout To read b-bit words with parallel readout, we make b copies of parts of our general framework from <ref>. Specifically, we create b copies of the framework involving the register with |q_i⟩, the CSWAP routers 𝐑_0, 𝐑'_0, 𝐑'_1, and 𝐑'_2, and the CNOT trees. Note that the three stages for data lookup proceed as described in <Ref>, except that the ith copy will be used to access the ith bit of data, and at the end of Stage III all b bits will have been simultaneously retrieved. For b = 2, this modification is depicted in <ref>, where |q_i⟩ is copied to the registers with |q^0_i⟩ and |q^1_i⟩.
A very high-level view of this schematic for b=6 is given in <ref>, where each square labeled by q^j_i contains the jth copy of the single-bit readout framework. The asymptotic scaling of the infidelity, qubit count, and T count for parallel multi-bit readout is given below. Consider the quantum data lookup structure with the high-level scheme in <ref> with N memory locations. Let b = 2^d” be the word size, n = log N, λ = 2^n-d be the partition size and γ = 2^n-d-d' be the size of a CNOT tree with d' ≤ d ≤ n. Also, let 𝐈 be the infidelity of the single-bit readout framework from <Ref>. Then, the infidelity of this circuit is O(b𝐈). Moreover, the T count is O((N/λ)log(N/λ) + bN/γ + bλ), and the qubit count is O(log(N/λ) + bλ). The multi-bit parallel readout scheme amounts to a single copy of the linear routers followed by b copies of the remainder of the single-bit readout framework. While the former uses O(d) qubits, the latter uses O(b 2^n-d) qubits. Additionally, as depicted in <ref>, each of the |q_i^j⟩ registers can be laid out such that they are separated by O(√(λ)) qubits in the planar grid. Hence, the overall qubit count is O(d + b 2^n-d + b√(λ)) = O(d + b 2^n-d) = O(log(N/λ) + bλ). Similarly, the T count for this design is O(2^d(d + b 2^d') + b 2^n-d) = O((N/λ)log(N/λ) + bN/γ + bλ). To analyze the infidelity for this design, let 𝐈_s represent the infidelity at stage s as described in Theorem <ref>. In Stage I, the infidelity is O(b𝐈_I + ε_L b d'√(λ)), where the second term emerges from the fact that the d' address bits used to set the CSWAP routers have to be copied b times using long-range operations and routed through each corresponding copy of the single-bit readout framework as depicted in <ref>. Moving to Stage II, the infidelity is O(b𝐈_II + 2^d ε_L b√(λ)), where the second term is for the 2^d times when |q_i⟩ is transmitted to all the |q_i^j⟩ registers. For Stage III, the infidelity is O(b𝐈_III), as each of the b bits is retrieved from the corresponding copy of the readout framework when the procedure is finished. Aggregating the infidelities across these stages, observing that the extra terms only amount to an additional O(b𝐈_I) contribution, and following <ref>, we conclude that the total infidelity for parallel b-bit readout is O(b𝐈). §.§ Sequential multi-bit readout To read b-bit words in sequence, we extend the single-bit framework in <ref> by assuming that each memory location points to a b-qubit register instead of a single-qubit one and repeating the readout procedure (i.e., Stage III from <ref>) for each bit of data. For b=2, this modification is presented in <ref>, where the CNOT tree now has depth log(λ/γ) + log b and each leaf of the CNOT tree is connected to one of the b bits of data. This allows the access signal for a memory location to be shared amongst its b qubits. To achieve a compact design, the planar H-tree design is used to lay out the b qubits of each memory location. A high-level view of this schematic is given in <ref>. To retrieve the kth bit of data, |q'^(k)_j⟩ is swapped into the q'_j location at iteration k, after which Stage III is applied to retrieve the data from the q'_j register. The asymptotic scaling of the infidelity, T count, and qubit count for this design is given below. Consider the QRAM with the high-level scheme in <ref> with N memory locations. Let b = 2^d” be the word size, n = log N, λ = 2^n-d be the partition size and γ = 2^n-d-d' be the size of a CNOT tree with d' ≤ d ≤ n.
Also, let 𝐈 be the infidelity of the single-bit readout framework in Theorem <ref>. Then, the infidelity of this circuit is Õ(b𝐈 + b^2ε_I). Moreover, the T count is O(N/γ + (N/λ)log(N/λ) + bλ), and the qubit count is O(log(N/λ) + bλ). Before reading out the b memory bits, the analysis follows the same lines as in Theorem <ref>, which yields O(b𝐈) infidelity. There is an additional qubit idling error to be considered. It takes O(2^d” + n - d + d”) time to transfer the b memory bits out sequentially. The idling qubits are the memory qubits and the CSWAP router status bits; hence, the total idling error is O(ε_I(2^d” + d')(2^d” + n - d + d”)) = Õ(b^2ε_I). The number of CSWAP routers and linear routers is unchanged for Stages I and II, but the CSWAP routers are used b times in Stage III. Hence, the T count is O(N/γ + (N/λ)log(N/λ) + bλ), where each term matches the count for the corresponding stage. The qubit count is O(d + 2^n-d+d”) = O(log(N/λ) + bλ), as the size of the H-tree design for Stages II and III is scaled by b due to the deeper CNOT trees. § CONCLUSION In this work, we construct a general quantum data lookup framework and describe how it can be arranged on a planar grid with local connectivity. Further, we demonstrate that for specific choices of parameters, this framework can be made simultaneously noise-resilient and resource-efficient, i.e., having a sub-linear dependency on memory size for the qubit count, T count, and query infidelity. The versatility of our framework is highlighted as it provides the blueprint for describing a family of circuits for quantum data lookup with various space, time, and noise-resiliency trade-offs. As the community moves beyond NISQ architectures to designing error-corrected implementations of quantum hardware <cit.>, there is an urgent need to design highly resource-optimal components for quantum applications. For instance, initially, T gates would be considered an expensive resource, so the focus would be on minimizing the T count. However, future improvements in large-scale quantum hardware and quantum error correction could dramatically reduce the cost of T gates, which would shift the focus to minimizing the query infidelity. Having sub-linear infidelity scaling in this setting would imply that smaller-distance error correcting codes can be used to minimize overall errors and lead to a reduced qubit count. In this way, we expect the space-time trade-offs that result from this work to be repeatedly harnessed for the realization of end-to-end implementations of quantum algorithms using the evolving state-of-the-art hardware of its time <cit.>.
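As one concrete illustration of such trade-offs, the short sketch below is our own: it keeps only the leading terms of the two readout theorems above, with logarithms written out and constants ignored. The sequential variant saves roughly a factor of b on the dominant N/γ term, at the price of the extra b^2 idling-error contribution noted earlier.

```python
import math

def t_count_parallel(N, lam, gam, b):
    return (N / lam) * math.log2(N / lam) + b * N / gam + b * lam

def t_count_sequential(N, lam, gam, b):
    return N / gam + (N / lam) * math.log2(N / lam) + b * lam

N, b = 2 ** 20, 16
lam, gam = int(math.sqrt(N)), int(round(N ** 0.25))
print(t_count_parallel(N, lam, gam, b))    # ~5.5e5, dominated by b*N/gamma
print(t_count_sequential(N, lam, gam, b))  # ~5.9e4, dominated by N/gamma
```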
Our fine-grained analysis of noise resiliency can be put to use in practice by replacing those error terms with the noise models or behavior exhibited by the hardware of choice. Explicitly separating the different sources of error can aid in understanding how hardware behavior contributes to the overall query infidelity. This, in turn, can help identify any bottlenecks or design improvements that may be needed to improve hardware capabilities and implement quantum data lookup successfully. Our framework recovers previous proposals for quantum data lookup for different choices of parameters. For example, with λ = N and γ = 1, we recover the noise-resilient bucket brigade QRAM design; with λ = √(N) and γ = 1, we recover a variant of the SELECT-SWAP design; and with λ = 1 and γ = 1 we recover the QROM design. Hence, we feel justified in considering our design to be a unifying framework. On the other hand, we acknowledge that one parameter regime that is not covered by our framework is the indicator function design from <cit.> that has √(N) T-count and logarithmic query depth. We conjecture that this design has linear infidelity scaling but studying whether it can be made noise resilient while still maintaining the same T-count is left for future work. While we presented a straightforward method to reset the qubits used for table lookup, it will result in a doubling of the T count and query infidelity. Another approach would be to use measurement-based uncomputation as discussed in <cit.>. This will lead to an increase in classical processing to deal with the mid-circuit measurement outcomes but not add too much overhead in terms of unitary operations. The further study needed to determine whether this will maintain the sub-linear scaling in infidelity is left as an open question. There exists an implicit assumption that the planar layout and local connectivity versions of both our framework and the bucket-brigade QRAM require the use of error correction to drive gate errors below some critical threshold. Without this, the designs can fail to be error-resilient. The bottleneck here is the difficulty in implementing the entanglement distillation that is used for long-range operations. For distillation to be effective, it is necessary that the initial error of noisy Bell pairs – determined solely by gate errors – be below some threshold. We leave as an open question whether this threshold can be raised by more sophisticated techniques such as using quantum repeaters negating the need for error correction. S.Z. conducted the research during his internship at Microsoft. Additionally, SZ is supported by the National Science Foundation CAREER award (grant CCF-1845125), and part of the editing work was performed while SZ was visiting the Institute for Pure and Applied Mathematics (IPAM), which is supported by the National Science Foundation (Grant No. DMS-1925919). § LIMITED LONG-RANGE CONNECTIONS In this section, we consider variations in qubit architectures that allow for limited long-range connectivity. Examples include superconducting qubits with long-range connections for LDPC codes or those where qubits can be coupled with photons that can be used to create high-quality Bell states. We study how the assumption of allowing only a few gates to be able to act long-range without overhead impacts infidelity. 
Unlike in the main text, we assume that there are only two ways in which to perform long-range interactions: (i) using a limited number of almost perfect Bell pairs that can be used to perfectly perform long-range operations; and (ii) creating noisy GHZ states linked between source and target qubits as described in <ref>. §.§ Bucket-brigade model First, consider the planar bucket-brigade model from <ref>. We allow the first k level of CSWAP routers the ability to perform long-range operations without any errors and produce a modified version of Theorem <ref>. Let the first k level routers in the bucket-brigade QRAM of <ref> have the ability to perform long-range operations without any overhead. For N memory locations, the improved fine-grained infidelity of the basic planar layout (<ref>) scales as O(2^-k/2log N√(N)ε_Q+log Nε_s+log^2 Nε_cs+ log^2 N ε_I). The result largely follows from the proof of Theorem <ref> along with two modifications. First, adjust the initial value of ℓ in <ref> from 1 to k as all levels up to k will not contribute to query infidelity. Additionally, as the remaining long-range operations only use GHZ states as a resource, its contribution to infidelity is m ·ε_Q for GHZ states of length m from <ref>. As m < √(N), we can upper bound this contribution as O(√(N)ε_Q). Since no magic state distillation is applied, the total idling time for each CSWAP router R_ℓ is O(T - ℓ), which leads to an overall log^2 N error contribution from the idling error. This gives the desired result. When k equals logN, the planar layout achieves the expected O(log^2 N) infidelity scaling from <cit.>. §.§ General framework We study the effect of allowing the first k level of CSWAP + CNOT routers in Stages I and II of the general framework from <ref> the ability to perform long-range operations without any error and produce a modified version of version of <ref>. Consider the quantum data lookup structure with the high-level scheme in <ref> with N memory locations. Let n=log N, λ =2^n-d be the partitions size, and γ = 2^n-d-d' be the size of a CNOT tree with d' ≤ d ≤ n. Let the first k-levels of the CSWAP + CNOT routers in Stages I and II have the ability to perform long-distance SWAP gates without any overhead. Let ℰ denote the sum of all but ε_L error in <ref>, then the infidelity of the circuit is Õ(ε_Q(2^-k/2√(λ)+2^-k/2N/√(λ)+γ N/λ) + ℰ), for k≤ d', and Õ(ε_Q(2^-kN +2^-k/2√(λ)) + ℰ), for d'< k≤ n-d. For k≤ d', in stage I the GHZ error induced from qubit error is d'2^n-d-k/2ε_Q, and in Stage II the GHZ error is 2^d(2^n-d-k/2+2^n-d-d')ε_Q. The remaining error analysis follows from <ref>, and combining these yields the desired GHZ error. For k>d', in stage I there is no GHZ error, in stage II the GHZ error comes from the CNOT tree, which becomes 2^d· 2^n-d-d'-(k-d')ε_Q. The remaining error analysis follows from <ref>, and combining these yields the desired GHZ error. Corollary <ref> illustrates the challenging nature of achieving optimal scaling in various aspects by balancing the parameters λ and γ with a budget of performing O(2^k) long-range operations without overhead. It becomes apparent that striking the perfect balance is a formidable task as it adds parameter that needs to be optimized. However, for some limited flexibility in choices for d = log( N/λ), d' = log( λ/γ), and the long-range budget 2^k, we try to provide some additional insights. 
For instance, within the realm of Õ(N^3/4) T count, we tabulate how the exponent determining how infidelity scales for increasing values of k ∈{0, d'/4, d'/2, 3d'/4, d} in <ref> respectively. Specifically for the green regions in these tables, we find that: * When d' is relatively small, specifically when d' ≤ n/4, the presence of long-range connectivity does not decrease infidelity. This is because the primary source of error originates from the CNOT tree section of the circuit. * Any further decrease in infidelity becomes unattainable when k exceeds d'/2. This is because, when k>d'/2, the predominant factor contributing to the error is the idling error by <ref>. * The greater the value of d' that can be accommodated, the more favorable the infidelity scaling becomes. This is because larger values of d' serve to diminish both the long-range errors as well as the idling error for the CNOT tree. * Increasing the value of d leads to a more favorable qubit count, as the qubit count scales as O(2^n-d). However, this sets up a trade-off between selecting a larger value of d for improved qubit count or a larger value of d' for reduced infidelity within the O(N^3/4) T-count regime. Extending the analysis naturally allows the initial k levels in the route-in procedure to perform long-range operations without overhead, while permitting the final k' levels in the route-out procedure to do the same. In a practical context, this can be likened to having a long-range budget of 2^d+k for the route-in and a long-range budget of 2^k' for the route-out process. The objective is to strike a balance between the values of k and k' to attain the most favorable scaling of infidelity. We can briefly argue that it is not useful to separately consider k vs. k' in the regime that yields sublinear infidelity and T count. This is because the long-range error from route-out only dominates that of route-in when k>2d, at which point a non-zero k' is required to suppress the long-range error from the route-out procedure. However, we observed that k is only meaningful for k≤d'/2, and it only improves the infidelity for cases where d≤d'/2, which contradicts the condition where k>2d.
http://arxiv.org/abs/2406.18816v1
20240627011532
Angle-dependent planar thermal Hall effect by quasi-ballistic phonons in black phosphorus
[ "Xiaokang Li", "Xiaodong Guo", "Zengwei Zhu", "Kamran Behnia" ]
cond-mat.str-el
[ "cond-mat.str-el", "cond-mat.mtrl-sci", "cond-mat.stat-mech" ]
lixiaokang@hust.edu.cn Wuhan National High Magnetic Field Center, School of Physics, Huazhong University of Science and Technology, Wuhan 430074, China Wuhan National High Magnetic Field Center, School of Physics, Huazhong University of Science and Technology, Wuhan 430074, China Laboratoire de Physique et d'Etude de Matériaux (CNRS) ESPCI Paris, PSL Research University, 75005 Paris, France zengwei.zhu@hust.edu.cn Wuhan National High Magnetic Field Center, School of Physics, Huazhong University of Science and Technology, Wuhan 430074, China kamran.behnia@espci.fr Laboratoire de Physique et d'Etude de Matériaux (CNRS) ESPCI Paris, PSL Research University, 75005 Paris, France § ABSTRACT The origin of the phonon thermal Hall effect in insulators is a matter of ongoing debate. The large amplitude of the signal in an elemental non-magnetic solid, such as black phosphorus (BP), calls for a minimal mechanism with no role for the spin degree of freedom. Here, we show that a longitudinal heat flow generates a transverse temperature gradient in BP even when the magnetic field, the heat current and the thermal gradient lie in the same plane. The long phonon mean free path leaves little room for scattering by point-like symmetry-breaking defects. We show that the signal peaks when the magnetic field is oriented along one of the two diagonal orientations of the puckered honeycomb plane and argue that this can be understood as the sum of two distinct contributions, each parallel to a mirror plane. This angular dependence, as well as the order of magnitude of the observed signal, points to the torque exerted by the magnetic field on electric dipoles traveling with heat-carrying phonons as the driver of the effect. Angle-dependent planar thermal Hall effect by quasi-ballistic phonons in black phosphorus Kamran Behnia July 1, 2024 ========================================================================================= The thermal conductivity is a second-rank tensor linking the heat current density and the temperature gradient vectors. The thermal Hall effect (THE) refers to non-zero off-diagonal components of this tensor, κ. Its phonon-driven origin in insulators has attracted much recent experimental <cit.> and theoretical <cit.> interest. More recently, attention has focused on the observation of a planar thermal Hall effect <cit.>. This is a configuration with the three relevant vectors (heat current, temperature gradient, and magnetic field) lying in the same plane. The persistence of the signal in configurations where it is forbidden by crystal symmetry was attributed to defects breaking the local symmetry <cit.>. Here, we report on the observation of a planar Hall effect in black phosphorus (P), an elemental insulator with ballistic phonons <cit.> and no magnetism. The thermal conductivity tensor has unequal diagonal components (κ_zz≃ 5 κ_xx). Nevertheless, the off-diagonal components match each other (κ_xz(B)≃κ_zx(-B)), as expected from Onsager reciprocity. We quantify the variation of the thermal Hall signal as a function of the in-plane orientation of the magnetic field and find that it displays a twofold oscillation with minimum and maximum along one of the two diagonals of the xz plane of the puckered honeycomb lattice. This indicates that the signal is the sum of two sinusoidal contributions along two high-symmetry axes. A quantitative account of our observation is missing.
Nevertheless, we argue that a scenario in which electric dipole moments, generated by the thermal gradient and the atomic vibrations, play a key role is compatible with the order of magnitude of the observed signal as well as its angular variation. Fig. <ref>a shows the crystal structure of black P. Identical phosphorus atoms located on two distinct sites are marked in blue and red. Each layer is a puckered honeycomb lattice in the xz crystallographic plane, where the x and z axes correspond to the armchair and zigzag orientations. As seen in Fig. <ref>b, the longitudinal thermal conductivity, as found previously <cit.>, is significantly different along the two orientations. Along the zigzag orientation, thermal conductivity is much larger than along the armchair orientation in the low temperature limit. The sound velocity shows a similar but attenuated anisotropy (9.6 km/s along z-axis vs. 4.6 km/s along the x-axis) <cit.>. As a consequence, the phonon mean free path ℓ_ph(T) , shown in Fig. <ref>c, is also anisotropic. It is twice longer along the z-axis and approaches the sample thickness (30 μm) around 20 K. Thus, at this temperature phonons become quasi-ballistic and accordingly thermal conductivity becomes size dependent <cit.>. A Hall response refers to a signal odd in magnetic field and oriented perpendicular to the applied current. Designating the transverse temperature gradient by (∇ T)_⊥, the applied longitudinal heat current by J_q and the magnetic field by B, the standard thermal Hall configuration corresponds to B ⊥ J_q ⊥ (∇ T)_⊥. It is sketched in panel d and was the subject of our previous study <cit.>. Panel e shows the configuration for planar thermal Hall effect, which corresponds to B  ||  (J_q ⊥ (∇ T)_⊥). We use κ_ij^k to designate the κ component corresponding to J_q along the i-axis, (∇ T)_⊥ along the j-axis and the magnetic field along the k-axis. Fig. <ref>a-c shows the planar thermal Hall data for three different configurations. In each panel, the orientations of the three relevant vectors (B, J_q and (∇ T)_⊥) are sketched and the field dependence of the ratio of the transverse to longitudinal temperature gradients (∇ T_j /∇ T_i) is shown. Fig. <ref>d compares the temperature dependence of the thermal Hall angle in three planar configurations at 12 T. It is finite in the three planar configurations, but there is a large difference in amplitude when one permutes the orientations of J_q and (∇ T)_⊥. Since the longitudinal thermal conductivity is anisotropic by the same factor, this difference warrants an equality between the absolute values of κ_xz^z and κ_zx^z, as expected by Onsager reciprocity. The thermal Hall angle (Fig. <ref>d) combined with the longitudinal thermal conductivity (Fig. <ref>b) leads to the planar thermal Hall conductivity, which is shown in Fig. <ref>e and f. The thermal Hall response is finite in four distinct configurations. We can see not only the validity of the Onsager reciprocity (Fig. <ref>e), but also the very small influence of the orientation of magnetic field on the amplitude of κ_xz (Fig. <ref>f). We then proceeded to quantify the angle dependence of this planar Hall signal at fixed temperature as the magnetic field rotated in the xz plane. The raw data, shown in the supplement <cit.>, shows a field-linear response for all explored angles. The measurements were performed at 44.5 K. This temperature was chosen to allow a compromise between proximity to the peak temperature and the optimal sensitivity of the thermocouples. 
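To make the extraction procedure explicit, the sketch below applies the relation κ_xz = (∇T_z/∇T_x)·κ_zz used above to convert a measured Hall angle into a planar thermal Hall conductivity, and antisymmetrizes it in field to keep only the Hall (odd-in-B) part. The numerical values are placeholders chosen for illustration, not the measured data of this work.

```python
import numpy as np

# Illustrative placeholder values (not the measured data of this work)
B = np.array([-12.0, -6.0, 0.0, 6.0, 12.0])                      # magnetic field (T)
hall_angle = np.array([-2.1e-3, -1.0e-3, 0.0, 1.0e-3, 2.1e-3])   # measured grad(T_z)/grad(T_x)
kappa_zz = 2000.0                                                 # longitudinal conductivity, W/(K m)

# Planar thermal Hall conductivity from the measured ratio:
#   kappa_xz = (grad T_z / grad T_x) * kappa_zz
kappa_xz = hall_angle * kappa_zz

# Keep only the genuine Hall response, odd in field: [kappa_xz(B) - kappa_xz(-B)] / 2.
# Because B is sampled symmetrically around zero, reversing the array gives kappa_xz(-B).
kappa_xz_odd = 0.5 * (kappa_xz - kappa_xz[::-1])

for b, k in zip(B, kappa_xz_odd):
    print(f"B = {b:+5.1f} T   kappa_xz (odd in B) = {k:+7.2f} W/(K m)")
```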
As seen in Fig. <ref>a, the signal shows a twofold oscillation. It peaks (with opposite signs) at ϕ=-π/4 and ϕ=3π/4, i.e. along one of the diagonals of the xz plane, and vanishes along the other diagonal. This intriguing feature would find a straightforward interpretation if the signal were the sum of two contributions of equal amplitude shifted by π/2, one following cosϕ (peaking along the x-axis and vanishing along the z-axis) and another following cos(ϕ+π/2) (peaking along the z-axis and vanishing along the x-axis), indicated by dashed lines in Fig. <ref>a. As sketched in Fig. <ref>b, each odd-field contribution would be bound to one mirror plane. Thus, in a crystal belonging to the mmm point group, we detect a planar thermal Hall signal with biaxial symmetry. Let us note that at the peak temperature of this thermal Hall signal, the longitudinal thermal conductivity of black P is as large as κ_p≃ 2000 W/Km. This is to be compared with cases such as α-RuCl_3 (κ_p≃ 3 W/Km) <cit.>, Na_2Co_2TeO_6 (κ_p≃ 10 W/Km) <cit.> or Y-kapellasite (κ_p≃ 2 W/Km) <cit.>. In contrast to these cases, the phonon mean free path in black P is close to the sample dimensions and local defects are unlikely to play a major role. In an anisotropic dense medium, the thermal diffusivity is a tensor, D. The heat equation becomes: ∂ T/∂ t= ∇· (D∇ T) Consider now the energy continuity equation, which relates the energy flux to the specific heat per volume, C: ∇·J_q+C∂ T/∂ t =0 The combination of the two previous equations leads to: J_q= - C D∇ T The thermal conductivity tensor κ is thus the thermal diffusivity tensor multiplied by a scalar: κ= D C. Therefore, off-diagonal components in κ are proportional to their counterparts in D. The latter is the product of carrier velocity and carrier mean free path. The sound velocity, v_s, and the phonon mean free path ℓ_ph (see Fig. <ref>c) are both anisotropic at zero magnetic field. The issue is to find a way for the magnetic field to skew one or both of these tensors off the symmetry axes. Recent theoretical studies have scrutinized the intrinsic phonon bands <cit.> incorporating the energy magnetization contribution <cit.>, and the phonon angular momentum acquired through interaction with spins <cit.> or with the orbital motion of ions <cit.>. Absent magnetism, the interplay between electric polarization and phonons deserves scrutiny. The computed map of the charge distribution is strongly orientational <cit.>. Dipole-active phonon modes have been detected by infrared spectroscopy <cit.>. These are optical phonons, however, and not the heat-carrying acoustic modes that interest us here. Nevertheless, any phonon breaking the inversion symmetry will generate an electric dipolar wave. This feature has been highlighted in the context of ferrons, the elementary excitations of a ferroelectric solid, even in paraelectric insulators <cit.>. The thermal expansion of black P is not only anisotropic, but also has opposite signs along the armchair and zigzag crystalline orientations <cit.>. As a consequence, in the presence of a temperature gradient each tetragonal unit cell is slightly distorted. Compared with its colder or warmer neighboring cells, its lattice parameter is longer along one crystalline orientation and shorter along the other (Fig. <ref>a). This paves the way for a finite dipolar polarization, and for chirality in the presence of the magnetic field. The inversion center of the pristine unit cell is not an atomic site and atomic displacements associated with phonons generate electric dipole moments (Fig. <ref>b). 
Therefore, acoustic phonons breaking inversion centers are traveling electric dipolar moments capable of coupling to a static magnetic field (Fig. <ref>c). The torque exerted by a static magnetic field, B, on an electric dipole P moving with a velocity v is: τ= P× (B×v) <cit.>. Let us rewrite it as: τ= (P·v) B- (P·B) v Thus, the torque due to the Lorentz force exerted on each of the two poles is finite when the magnetic field is oriented perpendicular to the direction of the dipole propagation (Fig. <ref>d). These are the ingredients for a scenario for a planar thermal Hall signal in a non-magnetic insulator (see the supplement <cit.> for a discussion of the expected angular variation). A rigorous treatment is beyond the scope of this paper. However, let us compare the order of magnitude of the expected and the measured signals. Experimentally, the thermal Hall angle at B = 10 T peaks at ≈ 10^-2 in black P. The length scale extracted from this angle and fundamental constants (λ_tha = ℓ_B √(κ_ij/κ_jj)) is about 5 Å. Intriguingly, in all insulators displaying a thermal Hall signal, this length scale lies in a narrow range (2 Å<λ_tha <7 Å) <cit.>. As seen in figure S2 in the supplement <cit.>, this remains true for the recently studied cases of Si, Ge and quartz <cit.>. Consider an electric dipole moment δ p traveling at the sound velocity, v_s (loosely linked to the Debye frequency by v_s ∼ a ω_D, where a is the interatomic distance). The exerted torque, that is, the angular derivative of the energy, would be B a ω_D δ p. Divided by the Debye energy, it yields θ_H≃ a δ p B/ħ, the order of magnitude of the expected rotation angle. With ℓ_B^2=ħ/eB and assuming δ p ≈ e a, this leads to: θ_H≈ a^2/ℓ_B^2 This expectation is of the order of magnitude of the measured signal. A rigorous treatment would presumably include in this expression the Grüneisen parameter <cit.>. Possibly, the atomic electric dipole polarizability <cit.> would also be present. However, the former, which is dimensionless, remains of the order of unity, and the latter introduces a length scale which does not vary widely among different solids. Therefore, a scenario attributing the thermal Hall signal to the interaction between a static magnetic field and traveling electric dipolar waves emerges from our study as a promising candidate. We thank Benoît Fauqué for stimulating discussions. This work was supported by the National Key Research and Development Program of China (Grant No. 2022YFA1403500), the National Science Foundation of China (Grant No. 12004123, 51861135104 and 11574097), and the Fundamental Research Funds for the Central Universities (Grant no. 2019kfyXMBZ071). X. L. was supported by The National Key Research and Development Program of China (Grant No. 2023YFA1609600) and the National Science Foundation of China (Grant No. 12304065). Supplementary Materials for “Angle-dependent planar thermal Hall effect in black phosphorus” § MATERIALS AND METHODS The black phosphorus crystals used in this work are the same as in our previous report <cit.>. They were synthesized under high pressure and came from two different sources. Sample #1 was obtained commercially and sample #2 was kindly provided by Prof. Yuichi Akahama (University of Hyogo). All thermal transport experiments were performed in a commercial measurement system (Quantum Design PPMS) within a stable high-vacuum sample chamber. A one-heater-four-thermocouples (type E) technique was employed to simultaneously measure the longitudinal and transverse thermal gradients. 
The thermal gradient in the sample was produced through a 4.7 kΩ chip resistor driven by a current source (Keithley 6221). The DC voltage on the heater and the thermocouples (thermometers) was measured with a DC nanovoltmeter (Keithley 2182A). The thermocouples, the heat sink, and the heater were connected to the samples directly or by gold wires 50 microns in diameter. All contacts on the sample were made using silver paste. In the angle-dependent measurements, a thermal transport rotation probe was used. The longitudinal (∇ T_i) and the transverse (∇ T_j) thermal gradients generated by a longitudinal thermal current J_q were measured. They lead to the longitudinal (κ_ii) and the transverse (κ_ij) thermal conductivity, as well as the thermal Hall angle (∇ T_j / ∇ T_i): κ_ii = Q_i/∇ T_i, ∇ T_j/∇ T_i = κ_ij/κ_jj, κ_ij = (∇ T_j/∇ T_i)·κ_jj. Here Q is the heat power. § RAW DATA OF THE ANGLE-DEPENDENT PLANAR THERMAL HALL EFFECT Fig. <ref> shows the field-dependent thermal Hall angles (∇ T_j / ∇ T_i) with J_q along the x-axis and (∇ T)_⊥ along the z-axis at six different field orientations. ϕ is the angle of the magnetic field with respect to J_q (the x-axis). ϕ = 0^∘ and ϕ = 90^∘ represent the orientations along two high-symmetry axes, and their ∇ T_j / ∇ T_i signals have the same amplitude but opposite signs. ϕ = -45^∘ and ϕ = 45^∘ represent the two diagonal orientations of the puckered honeycomb plane, which show the maximum and vanishing responses, respectively, suggesting that the planar thermal Hall signal consists of two distinct contributions, each along one high-symmetry axis. ϕ = -45^∘ and ϕ = 135^∘ have signals with opposite signs, as expected from the antisymmetric operation (κ_ij(B) = -κ_ij(-B)). § TORQUE EXERTED BY THE MAGNETIC FIELD ON MOVING DIPOLES The torque exerted by a static magnetic field, B, on an electric dipole P moving with a velocity v is: τ = P×(B×v) = (v×B)×P = (P·v)B-(P·B)v The electric dipole P, the field B, and the velocity v can be expanded as: P= P_xe_x+P_ye_y+P_ze_z; B= B_xe_x+B_ye_y+B_ze_z; v= v_xe_x+v_ye_y+v_ze_z. Thus, Eq. <ref> can be rewritten as: τ= (P_x v_x+P_y v_y+P_z v_z)· (B_xe_x+B_ye_y+B_ze_z) -(P_x B_x+P_y B_y+P_z B_z)· (v_xe_x+v_ye_y+v_ze_z) = (P_y v_y B_x + P_z v_z B_x-P_y B_y v_x-P_z B_z v_x)·e_x + (P_x v_x B_y + P_z v_z B_y-P_x B_x v_y-P_z B_z v_y)·e_y + (P_x v_x B_z + P_y v_y B_z-P_x B_x v_z-P_y B_y v_z)·e_z § THE ANGULAR DEPENDENCE OF THE TORQUE AND THE OBSERVED BI-AXIAL SYMMETRY In our experiment, the magnetic field was rotated in the xz plane and the longitudinal thermal gradient was applied along the x-axis. Putting B_y=0 and P_y=0, Equation <ref> becomes: τ= (P_z v_z B_x-P_z B_z v_x)·e_x - (P_x B_x v_y + P_z B_z v_y)·e_y + (P_x v_x B_z - P_x B_x v_z)·e_z We measured the transverse thermal gradient along the z-axis. The torque along the z-axis is equal to: τ_z= (P_x v_x) B_z - (P_x v_z)B_x One can see that the expected torque along the z-axis has two components: the first vanishes when the field is along the x-axis and the second when the field is along the z-axis. This is in agreement with the experimental data seen in Fig. <ref> of the main text.
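As a numerical illustration of the angular argument above, the short script below evaluates τ_z = (P_x v_x)B_z − (P_x v_z)B_x for a field rotating in the xz plane (B_x = B cos ϕ, B_z = B sin ϕ) and, assuming the two contributions carry equal weight, recovers a twofold signal peaking along one diagonal and vanishing along the other. It also evaluates the order-of-magnitude estimate θ_H ≈ a²/ℓ_B² quoted in the main text; the interatomic distance used is an assumed, representative value rather than a fitted one.

```python
import numpy as np

# Angular dependence of tau_z = (P_x v_x) B_z - (P_x v_z) B_x for B rotating in the xz plane.
phi = np.radians(np.arange(-180, 181, 1))   # field angle measured from J_q (the x-axis)
Bx, Bz = np.cos(phi), np.sin(phi)           # field direction (unit magnitude)

# Assume equal weights P_x v_x = P_x v_z = 1 (arbitrary units) for illustration.
term_along_x = 1.0 * Bz                     # vanishes when B is along the x-axis
term_along_z = -1.0 * Bx                    # vanishes when B is along the z-axis
tau_z = term_along_x + term_along_z         # = sqrt(2) * sin(phi - pi/4)

peak = np.degrees(phi[np.argmax(np.abs(tau_z))])
print(f"|tau_z| peaks at phi = {peak:.0f} deg (a diagonal of the xz plane)")

# Order-of-magnitude estimate theta_H ~ a^2 / l_B^2 at B = 10 T.
hbar, e = 1.0546e-34, 1.602e-19
l_B = np.sqrt(hbar / (e * 10.0))            # magnetic length (m)
a = 3.5e-10                                 # assumed interatomic distance (m)
print(f"l_B = {l_B*1e9:.1f} nm  ->  a^2/l_B^2 = {(a / l_B)**2:.1e}")
```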
http://arxiv.org/abs/2406.18405v1
20240626145643
Longitudinal chiral forces in photonic integrated waveguides to separate particles with realistically small chirality
[ "Josep Martínez-Romeu", "Iago Diez", "Sebastian Golat", "Francisco J. Rodríguez-Fortuño", "Alejandro Martínez" ]
physics.optics
[ "physics.optics" ]
Longitudinal chiral forces in photonic integrated waveguides to separate particles with realistically small chirality Josep Martínez-Romeu^1, Iago Diez^1, Sebastian Golat^2, Francisco J. Rodríguez-Fortuño^2, Alejandro Martínez^1 amartinez@ntc.upv.es [1]Nanophotonics Technology Center, Universitat Politècnica de València, Camino de Vera, s/n Building 8F, 46022, Valencia, Spain [2]Department of Physics, King's College London, Strand, WC2R 2LS, London, United Kingdom Chiral optical forces exhibit opposite signs for the two enantiomeric versions of a chiral molecule or particle. If large enough, these forces might be able to separate enantiomers all optically, which would find numerous applications in different fields, from pharmacology to chemistry. Longitudinal chiral forces are especially promising for tackling the challenging scenario of separating particles of realistically small chiralities. In this work, we study the longitudinal chiral forces arising in dielectric integrated waveguides when the quasi-TE and quasi-TM modes are combined, as well as their application to separating absorbing and non-absorbing chiral particles. We show that chiral gradient forces dominate in the scenario of beating of non-degenerate TE and TM modes when considering non-absorbing particles. For absorbing particles, the superposition of degenerate TE and TM modes can lead to chiral forces that are maintained along the whole waveguide length. We accompany the calculations of the forces with particle tracking simulations for specific radii and chirality parameters. We show that longitudinal forces can separate non-absorbing chiral nanoparticles in water even for relatively low values of the particle chirality, and that absorbing particles with arbitrarily low values of chirality can be effectively separated given enough interaction time. July 1, 2024 ================ § INTRODUCTION Chirality is a property of asymmetry of objects which holds great importance in different branches of science and technology. This property describes objects that cannot be superimposed on their mirror images and is present from subatomic particles to macroscopic structures. This includes the remarkable case of molecular chirality <cit.>, by which molecules can exist in two forms of opposite handedness, the so-called right- and left-handed enantiomers, which may show completely different physical and chemical properties. As an example in medicine, one molecule can have medicinal properties while its opposite enantiomer can be extremely toxic <cit.>. Therefore, it is of utmost importance to be able to separate the two types of enantiomers of certain chemical substances with great accuracy, in large volumes, and in a short time. Usual methods for separating enantiomers from mixtures rely on chemical processes that must be adapted to each specific molecule. Alternatively, one could take advantage of the electromagnetic properties of chiral molecules, which have been thoroughly studied <cit.>, and use the chiral optical forces exerted by light <cit.> to perform enantiomer separation. Remarkably, this interaction does not depend on the specific molecule, which presents a great advantage over chemically based separation. Due to the prospects for application in different industries, optical separation of enantiomers has recently received considerable attention, including many theoretical and simulation studies <cit.> as well as some experimental implementations <cit.>. 
Most previous works have considered free-space light beams incident upon chiral structures so that separation forces are exerted locally. A different approach proposes the use of guided light along dielectric fibers <cit.> or integrated waveguides <cit.> to exert transverse chiral forces over long (in terms of wavelength) propagation lengths that eventually could lead to enantiomeric sorting. However, such forces usually require large values of the chirality parameter of the nanoparticles to lead to partial separation, meaning that other strategies are needed to separate nanoparticles and molecules exhibiting lower chiral response, as usually happens in practice. In this work, we circumvent this problem by using longitudinal chiral forces arising in dielectric integrated waveguides upon the superposition of the two fundamental guided modes: the quasi-TE and the quasi-TM modes. We consider the cases of both absorbing and non-absorbing chiral nanoparticles. For non-absorbing nanoparticles, we show that in a waveguiding system where the electromagnetic energy density does not vary but the field helicity does vary, low chirality particles could be separated. A similar approach was followed for free-space optical beams using diffraction gratings and reflection in a gap between a prism and substrate <cit.>. In our guided approach, we show that a photonic integrated waveguide can be designed to produce longitudinal optical chiral forces stronger than the achiral forces for a wide range of both the particle radius and the chirality parameter. Our numerical results suggest that such photonic waveguide could lead to enantiomeric sorting along the propagation direction of light. For absorbing particles, we change the approach we use because the dominant forces will change. In this case, we leave behind the idea of having stronger chiral forces than achiral forces and focus on maintaining a chiral force over a long waveguide, which can be achieved by using guided chiral light making use of degenerate and 90^∘-shifted quasi-TE and quasi-TM modes. We show that separation is feasible along the longitudinal direction even in the case of small chirality of the nanoparticle. § REVIEW OF OPTICAL CHIRAL FORCES Optical forces exerted by an optical field on a chiral dipolar particle have been thoroughly studied in the literature <cit.>. We choose the particle-centric form of the force expression that is described in detail in previous work <cit.>: F= (α_eW_e+α_mW_m+α_cω𝔊 )_gradient force +2ω(α_ep_e+α_mp_m+α_cp_c )_radiation pressure force -(σ_rec+σ_im)/c -ω(γ^e_recS_e+γ^m_recS_m)_dipole recoil force. where ω is the angular frequency, k is the wavenumber, W_ e = 1/4ε |E|^2 and W_ m = 1/4μ |H|^2 are the electric and magnetic energy densities, respectively, measured in [J/ m^3] units. The helicity density is 𝔊 = 1/2ω c( E·H^* ) [J·s/m^3], whose sign indicates the handedness of the optical field. The following field properties S_ e = 1/4ω( ε E^* ×E) [J·s/m^3] and S_ m = 1/4ω(μ H^*×H) [J·s/m^3] yield respectively the electric magnetic spin densities of the field. The complex Poynting vector is represented by = 1/2E×H^* [W/m^2]. The electric, magnetic and chiral momentum of the light field are respectively p_e=1/2c^2-1/2S_e, p_m=1/2c^2-1/2S_m and p_c=k(S_e+S_m)-1/2ω c <cit.>. The properties of the particle are characterized by the electric polarizability α_ e, the magnetic polarizability α_ m, and the chiral polarizability α_c. The latter informs about how electromagnetically chiral the particle is. 
The other constants depend on products of polarizabilities: σ_rec=k^4/6 π[Re(α_e^* α_m)+|α_c|^2], σ_im=k^4/6 π Im(α_e^* α_m), γ^e_rec=k^4/3π(α_e^∗α_c), γ^m_rec=k^4/3π(α_m^∗α_c). Even though the full expression of the forces is used to calculate the total force, it is important to know which terms dominate to gain insight into the physics of the system. In particular, we will consider first the separation of non-absorbing chiral particles, followed by the separation of absorbing chiral particles. In the former scenario, the dominant forces are those relying on the real part of the polarizabilities (gradient terms), while in the latter the dominant terms will be those proportional to the imaginary part of the polarizabilities (radiation pressure terms). Another important aspect is that for small particles the dominant forces will be those that depend on the polarizabilities only up to first order. To study the forces acting on the chiral particle, we need to know the electric and magnetic field profile of the quasi-TE and quasi-TM guided modes of the waveguide (eigenmodes). First, the two-dimensional profile of the fields (E(x,y), H(x,y)) is obtained by solving Maxwell's equations throughout the two-dimensional cross-section of the waveguide system using the finite element method implemented by the FemSIM solver in the commercial software RSoft (Synopsys). Then, these profiles are propagated along the waveguide using the corresponding effective indices n_ TE for the TE mode and n_ TM for the TM mode. For instance, for the electric field of the quasi-TE mode: E(x,y,z)=E(x,y)e^ikn_ TEz. This propagation results in a three-dimensional field along the waveguide. The resulting fields are then inserted into Eq. <ref> to compute the force exerted by the mode on a dipolar particle <cit.>. Chiral Mie theory <cit.> was used to calculate the polarizabilities of the particle from the properties of the particle (r, ε_ p, μ_ p, κ) and the surrounding medium (ε_ m, μ_ m). The chirality parameter κ characterizes the difference in refractive index between left circularly polarized light and right circularly polarized light traveling through an optically active medium. For a medium constituted of (+) or (-) enantiomers the refractive index is n_±=n±κ. This parameter is therefore particularly important for the study of chirality: Re(κ) is related to optical rotatory dispersion and Im(κ) is related to circular dichroism <cit.>. § LONGITUDINAL CHIRAL GRADIENT FORCE ON NON-ABSORBING NANOPARTICLES We first consider non-absorbing particles, i.e. real ε_ p and real κ. In this case, the dominant force terms are the gradient forces, which are proportional to the real part of the polarizabilities. Notably, if the particles are large enough the Poynting part of the recoil force must be taken into account. The resulting force can be approximated as: F≈ω α_ c∇𝔊 + α_ e∇ W_ e + α_ m∇ W_ m-σ_rec/c From the previous equation, we can conclude that a system where the electric and magnetic energy density gradients (achiral forces) are negligible in comparison with the helicity density gradient (chiral force) should in principle be able to produce enantioseparation. This condition can be met along the longitudinal direction for a lossless dielectric integrated waveguide with rectangular cross-section. In these waveguides, the light intensity of the mode is maintained over the longitudinal direction. This means that the electric and magnetic energy densities do not change, i.e. 
the gradient of both energy densities (and therefore the associated achiral forces) are negligible in the longitudinal direction regardless of the electric and magnetic polarizability of the particle. To produce enantioseparation, the waveguiding system should present a chiral force (helicity density gradient), given by the first term of Eq. <ref>. This helicity density gradient can be produced within the waveguide by achieving a guided wave whose helicity density varies longitudinally, thus producing a helicity gradient and, therefore, a chiral longitudinal force. This setup can be achieved by simultaneously injecting the fundamental quasi-TE and quasi-TM modes in a waveguide where n_ TE≠ n_ TM.A schematic of such a waveguide is depicted in Fig. <ref>a whilst the helicity change in the longitudinal direction is shown in Fig. <ref>b. The longitudinal component of the helicity density gradient is given by the following equation (analytical derivation in section <ref> of the appendix): F_z^∇𝔊= α_ cωd𝔊/dz = 1/4 cα_ c|ψ| kΔ n sin(kΔ n z + ψ) where ψ=E_ TE·H_ TM^*-E_ TM^*·H_ TE. Equation <ref> shows that the chiral force oscillates sinusoidally with a periodicity of L_ beat=λ/Δ n, which is referred to as the beat length, and flips sign every half of the beat length L_ beat/2=λ/(2Δ n), where Δ n=n_ TE-n_ TM is the difference between the TE and TM mode indices. Therefore, by injecting left-handed elliptically polarized (LEP) light, which can be achieved by a 90∘-phase-shifted combination of the quasi-TE and quasi-TM guided modes, at the waveguide input, the polarization of the guided wave will change to right-handed elliptically polarized (REP) guided light after half of the beat length. The dominant chiral longitudinal force is proportional to the longitudinal gradient of the helicity density (F_z=ωα_ c∇_z 𝔊) and, therefore, it is the force enabling the enantioseparation. This force increases with Δ n and decreases with the wavelength and will always be present whenever mixing two non-degenerate guided modes. The separated distance of the enantiomers is inversely proportional to kΔ n. Therefore changing the kΔ n to augment the forces must be balanced with keeping enough distance between the enantiomers to produce significant separation. For small particles, the helicity density gradient force is the only dominant force term in the longitudinal direction, thus F_z, achiral<F_z, chiral, which favors enantiomeric separation even for particles exhibiting low chirality (α_ c≪α_ e,α_ m). § LONGITUDINAL CHIRAL PRESSURE FORCE ON ABSORBING NANOPARTICLES For absorbing particles, i.e. complex ε_ p and purely imaginary κ, we will consider a waveguiding system where a quasi-circularly polarized mode is injected by combining the quasi-TE mode and 90^∘-delayed quasi-TM mode that are degenerate (n_ TE=n_ TM). In this case, there will be no chiral or achiral gradient forces in the longitudinal direction since the wave helicity is conserved along the propagation direction. Moreover, if the particles are small enough, the dominant forces will be those that depend linearly on the polarizabilities, which are exclusively the radiation pressure forces from Eq. <ref>. Taking this into account, the resulting dominant longitudinal optical forces for small particles are: F_z≈ 2ω(α_ep_e,z+α_mp_m,z+α_cp_c,z) Notably, the electric and magnetic pressure forces do not change along the length of the waveguide as long as it is lossless. As a result, both enantiomers will be equally pushed in the longitudinal direction. 
The difference in the force exerted upon the two enantiomers will be due to the chiral pressure, which depends on the longitudinal spin of the light. Since the quasi-TE and quasi-TM modes do not display longitudinal spin <cit.>, we need a suitable combination of them to get longitudinal spin: they must have equal amplitude and be 90∘ phase shifted at the operating wavelength. Moreover, they need to exhibit degeneracy (n_ TE=n_ TM), which can be achieved by proper design of the waveguide cross-section, so that the longitudinal spin is maintained along the length of the waveguide <cit.>. For absorbing particles, F_ achiral>F_ chiral holds for a wide range of small chiralities, unlike in the non-absorbing chiral-beating setup. This condition implies that there can only be enantioseparation if we do not have a trapping achiral force. In our case, the achiral force pushes the particles along the waveguide, meaning the difference of forces can be used to separate the racemic mixture. For small chirality, the chiral force will be quite small in magnitude but, as we explored in a previous work <cit.>, even if the separating chiral force is small, it will eventually overcome the Brownian motion and separate the racemic mixture if applied for long enough time. As a result, an important aspect of this approach is maintaining the chiral force over a long distance, which can be achieved via the degeneracy of the main guided modes. This sorting mechanism is schematized in Fig. <ref>c showing the opposite direction of the chiral part of the force for opposite enantiomers. Figure <ref>d shows that despite the chiral part of the force is smaller than the achiral part, the particles will acquire different velocities that will eventually lead to their separation. § RESULTS §.§ Chiral gradient forces We consider a strip waveguide made of a silicon core (0.480 μm width × 0.220 μm thickness) with refractive index n≈3.45 on a SiO_2 substrate (n=1.4468), surrounded by water, operating at a wavelength of 1310 nm. The high index contrast between Si and SiO_2/water was chosen to increase Δ n, and thus, the enantioseparating force as shown by Eq.<ref>. Widening the waveguide also increases Δ n; however as the guided power spreads over a larger area, the local energy and helicity densities become smaller, and, consequently, the forces also diminish. For comparison, we also consider a silicon nitride (n≈2) waveguide (1.170 μm width × 0.268 μm thickness) operating at shorter wavelengths (780 nm) at which silicon becomes absorbing. In both cases (silicon and silicon nitride), we look for the same effects: gradient chiral forces being stronger than the achiral forces for different radii and chirality parameters. To this end, we consider non-absorbing chiral particles with relative permittivity ε_ p = 2 and relative permeability μ=1, suspended in water (n=1.33). The studied waveguide cross-section and the beating of helicity density along the longitudinal direction above the waveguide are shown in Fig. <ref>(a) and (b), respectively. We first studied which combinations of particle size and chirality parameter favor enantioseparation in this waveguide system. To this end, we plotted the ratio of the longitudinal total chiral force and the longitudinal total achiral force (|F^ chiral_z/F^ achiral_z|) in Fig. <ref> as a colormap, against the different chirality parameter and radii. 
The red (blue) zones represent where the longitudinal chiral forces are stronger (weaker) than the longitudinal achiral forces throughout the parameter space. The white zone shows where the chiral and achiral forces have a similar magnitude. Figure <ref>a corresponds to the silicon waveguide system and Fig. <ref>b to the silicon nitride waveguide system, both evaluated at the point of the helicity sign change. The most important forces in the system are F^∇𝔊_z, F^∇ E_z and F^Π_z, and they can be combined in the parameter space to produce different regions of dominance of the achiral and chiral forces. In Fig. <ref>a, we observe three distinct regions favorable for separation of enantiomers (red zones): I) κ > 10^-5 and r<10 nm, II) r>10 nm and 10^-6<κ<10^-3, III) r≈ 10 nm. In Fig. <ref>b, we find only two distinct regions. To further explain how these forces interact in the parameter space to produce the resulting total force, we analyze the individual contribution of the force terms F^∇𝔊_z, F^∇ E_z and F^Π_z for the silicon waveguide in section <ref> of the appendix. We have chosen a combination of particle radius (r=100 nm) and chirality parameter (κ=± 0.05) that favors enantioseparation (from region II in Fig. <ref>a) to test the sorting capabilities of our designed silicon waveguide (width 0.480 μm × thickness 0.220 μm). For such a particle, the most dominant achiral (F^∇ E and F^Π) and chiral forces (F^∇𝔊) are shown in Fig.<ref> throughout the x-z plane situated 100 nm above the waveguide (same axis as in Fig. <ref>). The arrowmaps represent the force fields exerted on the particle by the guided field along the x and z directions. The colormaps represent the y-component of the force, which moves the particle towards (negative, in blue) or away (positive, in red) from the waveguide. The chiral gradient force has a maximum of 1.39 fN/mW in the x-z plane. The electric gradient force has a maximum of 4.68 fN/mW in the x-z plane. The total optical force pushes the particles towards the waveguide in height (represented in the colormap). The position where the particles are most atracted towards the waveguide in height changes with the chirality of the particle. This phenomenon is complementary to the chiral separation in the longitudinal direction. In the points along z where the opposed chiral particles are trapped, they have the highest attracting force towards the waveguide too. The dominant force in both transversal directions x and y is the achiral electric energy density gradient F^∇ E that moves both enantiomers toward the center of the waveguide (in x) and toward the top of the waveguide (in y). At the center of the waveguide, F^∇ E exhibits a much smaller magnitude, and the chiral force F^∇𝔊 dominates over both achiral forces (F^∇ E and F^Π) along the longitudinal direction, thus enabling the sorting. This force analysis agrees with the selected particle radius and chirality parameter combination from Fig. <ref>. We have tested the sorting capability of this system by performing particle tracking simulations using the force field from Fig. <ref> for a guided power of 100 mW and during 1 second. The tracking algorithm is explained in detail in <cit.>. For this type of simulation, we have assumed that the microchannel (where the particles are suspended in water) is placed perpendicular to the waveguide, i.e. along the x-direction with 12 μm length, 1 μm width along z-axis, and 0.5 μm height along y-axis. 
Notice that the width along z should be at least equal to half of the beat length to achieve enough separation distance. To obtain a statistical measurement of the success of the sorting process, we have conducted the individual tracking of each enantiomer 500 times. Each particle's starting position is randomized each time throughout an area of 400 nm × 400 nm in the xz-plane at 140 nm above the waveguide. The final positions of the particles are represented in Fig. <ref>, where we can see that 1 second is enough duration to sort both enantiomers. The (+)-enantiomer (in magenta) gets trapped in the center of the microchannel where the helicity density is positive (see Fig.<ref>b), whereas the (-)-particle is repelled from that zone and attracted towards the area where the helicity density is negative. The results show that 95% of the (+)-particles end up within z ∈ [-0.330, 0.330] μm, and 59.8% (-)-particles end up outside. The latter percentage would be larger if the channel was wider. The purity of the mixture within a region is calculated with the quantity named enantiomeric fraction <cit.>, defined as: (+)-EF = N_+/(N_+ + N_-) and (-)-EF = N_-/(N_+ + N_-), where N_+ and N_- refer to the number of (+) or (-) particles within the region where the enantiomeric fraction is evaluated. The (+)-EF=70% within z ∈ [-0.330, 0.330] μm, and (-)-EF=92% outside that region. Lastly, we must comment on the practicality of this separation method. While it is true that it produces F_ chiral>F_ achiral for a wide range of chiralities, this is not enough to ensure enantioseparation. One important aspect to overcome is the enantiomeric mixing produced by the Brownian motion of the particles. Because this method uses trapping chiral forces for separation, the Brownian motion must be overcome. This places a quite strong limit on the range of chiralities and radii of particles: in general, this method is adequate to separate big particles exhibiting small chirality. For instance, particles with r=100 nm κ < 0.05 would not be separable. In the case where Δ n = 0, we can use another sorting method for absorbing chiral particles. As shown in the next subsection, this next method can effectively bypass the Brownian motion by use of longitudinal chiral forces without trapping potentials. As such, smaller particles with smaller chiralities can be effectively separated. §.§ Chiral pressure forces We consider a strip waveguide made of a silicon nitride core (0.239 μm width × 0.217 μm thickness) with refractive index n≈2.02 on a SiO_2 substrate (n=1.4468), surrounded by water (n = 1.33), operating at a wavelength of 633 nm. The optical mode is a combination of a quasi-TE and a quasi-TM mode that is delayed 90 degrees with respect to each other. For the chosen waveguide cross-section and wavelength, the quasi-TE and quasi-TM modes show degeneracy. Therefore, the combination results in a quasi-circularly polarized compound mode whose polarization is maintained along the waveguide length. The degeneracy condition (n_ TE=n_ TM) was found by sweeping the waveguide width for a fixed waveguide thickness and wavelength. The chiral particles to be studied in this system are assumed to be non-magnetic (μ_ p = 1) gold spheres (ε_ p = -11.753 + 1.2596i). Four different chirality parameter values were studied κ=0.05i, 0.01i, 0.005i, and 0.0005i, for two different radii: 10 nm and 50 nm. 
By considering a purely imaginary value for κ, we are implicitly assuming that the particle exhibits a maximum in its circular dichroism spectrum at the selected wavelength (633 nm). The dominant achiral and chiral forces, as well as the total force for each enantiomer, are represented throughout the waveguide cross section in Fig. <ref>, for a particle with r=10 nm and κ=±0.05i. The colormap represents the z-component of the forces and the arrowmap the transversal components. These force fields are maintained along the waveguide length because both the quasi-TE and quasi-TM modes are degenerate (in this case there is no beating pattern). In the transversal directions (xy-plane), the force field attracts both enantiomers towards the waveguide due to the dominant achiral electric energy density gradient force. In the longitudinal direction, the dominant force is the achiral electric pressure (F^ p_e). However, the opposite chiral pressure term (F^ p_c) with values ∼±0.1 fN/mW for opposite enantiomers, results in the (+)-enantiomer being pushed with a net force of ∼0.5 fN/mW, and the (-)-enantiomer being pushed with ∼0.7 fN/mW. We have tested the sorting capability of this system by performing particle tracking simulations using the force field from Fig. <ref> for a guided power of 100 mW. For these simulations we have taken into account the Brownian motion as explained in <cit.>. We obtained the final positions for 500 (+)-enantiomers and 500 (-)-enantiomers after a given amount of time. The initial (x,y) positions of the particles were randomized throughout the microchannel. Results for a particle of 10 nm radius in a microchannel 600nm wide and 400 nm thick are shown in Fig.<ref>: the final positions within the xy-plane are represented in (a), and the final positions along z are shown in histogram plots for particle of different chirality parameter: (b) ±0.05i, (c) ±0.01i, (d) ±0.005i, (e) ±0.0005i. Particles of higher chirality achieve greater separation along the longitudinal direction than those of low chirality for an equal amount of time, as suggested by how separated the z_ end-position distributions for each enantiomer are. In order to get an estimation of the sorting time for particles with the chirality parameters in (b)-(e), we have made the following considerations. We consider the enantiomers are separated when the distance between the mean values of each distribution (z_+ and z_-) is at least larger than four times the average standard deviation of both clouds (σ_z), i.e. we calculated the time at which |z_+-z_-|=4σ_z. This calculation is explained in more detail in section <ref> of the appendix. The time needed for obtaining separation for each case was estimated to be: (c) 23.5 s, (d) 93.8 s, (e) 9620 s = 2.7 hours. We repeated the same study (forces and sorting capabilities of the system) for a 50 nm radius particle. The dominant optical forces over the cross-section of the system are shown in Fig. <ref>. For this particle size, the dominant forces are the achiral electric energy density gradient and the electric radiation pressure forces. The resulting force field yields the particle tracking as shown in Fig. <ref>, where the main difference with respect to the 10 nm radius particles is that there is an achiral orbital movement of the particles due to the F^ p_e force which accumulates transversally both enantiomers on the left side of the waveguide. 
Yet, the different longitudinal force magnitudes of 426 fN/mW and 434 fN/mW for the (+) and (-) enantiomers enable the longitudinal sorting. The time needed to achieve |z_+-z_-|=4σ_z separation was estimated to be: (c) 187 s, (d) 12.5 minutes, (e) 20.8 hours. The enantiomeric fraction (EF) was obtained in a different manner than in the chiral-beating case (the sum of non-degenerate TE and TM modes); it is explained in detail in section <ref> of the appendix. The EF is calculated for each enantiomer in its corresponding zone, the two zones being separated by the midpoint between the centers of the z_ end-distributions. The EF corresponding to a separation of 4σ_z between both distributions is (+)-EF=97.72% for z < (z_+ + z_-)/2, and (-)-EF=97.72% for z > (z_+ + z_-)/2, where z_+ and z_- are the central points of the corresponding distributions. This value is the same for both 50 nm and 10 nm radius particles, evaluated at the sorting time corresponding to the assumed separation condition |z_+-z_-|=4σ_z. Finally, we show that our separation method can separate realistic chiralities. We compared the g-factor exhibited by our simulated chiral gold nanoparticles (obtained with chiral Mie theory <cit.> and shown in Table <ref>) to that of known chiral molecules reported in <cit.>. The 3β-Hydroxy-5α-androstan-16-one(13) exhibits a g-factor of 0.175, similar to a chiral gold nanoparticle of r=10 nm and κ=±0.05i. At the other end, the amino acid L-cystine shows a g-factor of 0.002, similar to that of a chiral gold nanoparticle of r=10 nm and κ=±0.0005i. This shows that our chosen chirality parameters correspond to realistic molecular chiralities. § CONCLUSIONS We have exhaustively studied the potential of exploiting chiral longitudinal forces in photonic integrated waveguides for sorting absorbing and non-absorbing chiral particles of realistically low chirality. The enantiomer-separation capability of such forces was confirmed with particle tracking simulations. For non-absorbing particles, we have designed a waveguide where the guided mode polarization varies between right-handed elliptical polarization and left-handed elliptical polarization in a periodic pattern, whereas the electromagnetic energy density is maintained longitudinally. We have shown that in such systems the chiral longitudinal gradient forces dominate over the achiral longitudinal forces even for particles of low chirality. In particular, 100 nm-radius particles of κ = ±0.05 can be separated within 1 second. For absorbing particles, we have designed a waveguide where the guided quasi-TE and quasi-TM modes are degenerate to maintain a quasi-circular polarization over the length of the waveguide. This system enables the separation of enantiomers of arbitrarily low chirality, provided the interaction time is long enough. We have shown that gold particles of κ = ±0.0005i and radius either 50 nm or 10 nm can be separated in 21 hours and 3 hours, respectively. Our results unveil the potential of photonic integrated waveguides to become a platform for enantiomeric sorting of a wide variety of nanoparticles and even molecules. Acknowledgments: The authors acknowledge funding from the European Commission under the CHIRALFORCE Pathfinder project (grant no. 101046961) and from Innovate UK Horizon Europe Guarantee (UKRI 10045438). A.M. acknowledges partial funding from the Conselleria de Educación, Universidades y Empleo under the NIRVANA Grant (PROMETEO Program, CIPROM/2022/14). Data available in Zenodo repository at <cit.>. 
§ APPENDIX §.§ Derivation of helicity density longitudinal variation in chirality change of combination of TE and TM modes Let us consider a guided left circularly polarized mode (LCP) described by the following electric field E_ LCP=E_ TEe^in_ TEkz + iE_ TMe^in_ TMkz and magnetic field H_ LCP=H_ TEe^in_ TEkz + iH_ TMe^in_ TMkz. The derivative of the helicity density along the propagation direction z will be given by: d𝔊/dz = d/dz[ 1/2ω c( E_ LCP·H^*_ LCP)] = 1/2ω cd/dz(E_ LCP·H^*_ LCP)= = 1/4ω cd/dz([E_ TEe^in_ TEkz + iE_ TMe^in_ TMkz]·[H^*_ TEe^-in_ TEkz - iH^*_ TMe^-in_ TMkz])= = 1/4ω cd/dz(E_ TE·H^*_ TE + E_ TM·H^*_ TM) + + 1/4ω cd/dz(E_ TE·H^*_ TM(-i)e^ikz(n_ TE-n_ TM) + E_ TM·H^*_ TE(i)e^-ikz(n_ TE-n_ TM )) where the first summand of the last expression is zero as it does not depend on z. The second summand can be expressed in a compact form using (z) =1/2i(z-z^*) ∀ z∈ℂ and defining ψ=E_ TE·H_ TM^*-E_ TM^*·H_ TE, and Δ n =n_ TE - n_ TM: d𝔊/dz = -1/4ω cd/dz[ψ e^ikzΔ n + ψ^*e^-ikzΔ n] = = -1/4ω cd/dz[|ψ| e^i(kzΔ n + ψ) + e^-i(kzΔ n + ψ)/2]= = -1/4ω cd/dz[ |ψ| cos(kzΔ n + ψ) ] = 1/4ω c|ψ| kΔ n sin(kΔ n z + ψ) §.§ Analysis of components of the chiral gradient force in parameter space Figure <ref>a shows that the relative strength (ratio) of the real Poynting vector force (F^Π_z) and the longitudinal helicity density gradient force (F^∇𝔊_z) depends on the radius of the particle. This is because for non-absorbing particles the F^Π_z grows with the product of two polarizabilities (particle volume squared), whereas F^∇𝔊_z grows linearly with the polarizability (particle volume). Thus, F^∇𝔊_z dominates over F^Π_z in region II. Figure <ref>b shows that since both the achiral electric energy density gradient force (F^∇ E_z) and F^∇𝔊_z depend linearly on the polarizability, their ratio does not depend on the particle radius. Thus, F^∇𝔊_z dominates over F^∇ E_z in region I. Figure <ref>c shows that both achiral forces, F^Π_z and F^∇ E_z, cancel each other for some specific radius (large ratio because the denominator is very small), which originates the distinctive line of minimum achiral force at r=10 nm in Fig. <ref>a. §.§ Estimation of the separation time of enantiomers in a microchannel The movement of a particle in a fluid is governed by the combined action of the optical force (in our case), the drag force due to the viscosity of the fluid, and the stochastic Brownian motion. More details about the governing equation can be found in <cit.>. The optical force makes the particle move in a deterministic manner with a terminal velocity v_z that is proportional to the optical force and the mobility ℳ of the particle in the fluid. The particle displacement along the z-axis due to the z-component of the optical force F_z is given by z_ opt: z_ opt = v_zt = ℳ F_z t where the mobility is related to the viscosity of the fluid (η) and the radius of the spherical particle (r) according to ℳ=1/(6πη r). In addition, the stochastic effect of the Brownian motion is to spread the possible positions around an average value. Thus, the dispersion in position z of a particle evolves in time as a normal distribution with a standard deviation given by: σ_z=√(2Dt)=√(2ℳk_BTt) Once given an initial distribution of z-position of particles that have started in the same initial z-position (assumed 0 here), we can model their time evolution by describing the movement of the average position z̅ with Eq. <ref> and its dispersion σ_z with Eq. <ref>. We refer to these position distributions as clouds. 
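A minimal numerical sketch of the drift-versus-diffusion competition described by the two equations above: the deterministic displacement z_opt = ℳF_z t grows linearly in time while the Brownian spread σ_z = √(2ℳk_BT t) grows only as √t, so a constant force always wins at long enough times. The particle radius, viscosity and force value below are illustrative placeholders, not the simulated values of this work.

```python
import numpy as np

kB, T = 1.380649e-23, 293.0              # J/K, K
eta   = 8.9e-4                           # viscosity of water (Pa s)
r     = 10e-9                            # particle radius (m), illustrative
M     = 1.0 / (6.0 * np.pi * eta * r)    # Stokes mobility, m / (N s)
F_z   = 1.0e-16                          # assumed constant optical force (N), ~0.1 fN

for t in (1.0, 10.0, 100.0, 1000.0):            # seconds
    drift  = M * F_z * t                        # z_opt = M F_z t
    spread = np.sqrt(2.0 * M * kB * T * t)      # sigma_z = sqrt(2 M kB T t)
    print(f"t = {t:7.1f} s   drift = {drift*1e6:7.1f} um   sigma_z = {spread*1e6:6.1f} um")
```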
Each enantiomer, referred to as (+) and (-), experiences a different optical force due to the opposite sign of the chiral term, correspondingly F^+ = F_ achiral + F_ chiral and F^- = F_ achiral - F_ chiral. Therefore, the terminal velocity v_z is different for each enantiomer, but the spread due to brownian motion is the same σ_z for both. We have applied this modeling to the final distribution of positions resulting from the particle tracking simulations. This allows us to predict the time at which the cloud of positive and negative enantiomers will be separated, assuming a constant optical force acting on the particles. We have expressed the desired separation between the two enantiomer clouds, with average positions z̅_+ and z̅_-, as a multiple of their dispersion, sσ_z, where s can be any positive number. |z̅_+ - z̅_-| = sσ_z Note that both enantiomer clouds follow the same dispersion evolution, as both enantiomers have the same size. Therefore, by solving for t and using Eq.<ref> and Eq.<ref>, we find the expression to estimate the separation time: t = s^2 2 k_B T/ℳ⟨ F^+_z-F^-_z ⟩ ^2=s^2 k_B T/2ℳ⟨ F_ chiral,z⟩ ^2 where we have used ⟨ F^+_z-F^-_z⟩=2⟨ F_ chiral,z⟩. Using these equations, it is possible to make inferences of separation times for any degree of separation (regulated by s). To make accurate predictions, we must take into account that the mobility of the particle will change due to its proximity to a wall from the microchannel or waveguide core. We must also consider that the magnitude of the force at each instant of time is in general smaller than the maximum force throughout the waveguide cross-section. We used the last positions from the tracking simulation to calculate the correction to ℳ and the value for ⟨ F_ chiral,z⟩. In our case, we cannot directly use Eq. <ref> for small chiralities to get the estimate of the average chiral force ⟨ F_ chiral,z⟩ experienced by the particles from simulation. This is because the difference in positions is much smaller than the standard deviation for the simulated time. Instead, we first use the sum of the mean position after a simulation time t_ simulation to get an estimate of the average achiral force ⟨ F_ achiral,z⟩: z̅_++z̅_-/2=ℳ⟨ F^+_z+F^-_z⟩ t/2 = = ℳ⟨ 2F_ achiral,z +F_ chiral,z -F_ chiral,z⟩ t/2 =ℳ⟨ F_ achiral,z⟩ t ⟨ F_ achiral,z⟩=z̅_++z̅_-/2ℳt_ simulation Then, we assume that the strength ratio between the average chiral and achiral force ⟨ F_ chiral,z⟩ / ⟨ F_ achiral,z⟩ is the same as the strength ratio between the maximum values throughout the cross section of the microchannel |F^max_ chiral,z/F^max_ achiral,z|. Thus, we estimate the average chiral force as: ⟨ F_ chiral,z⟩=⟨ F_ achiral,z⟩F^max_ chiral,z/F^max_ achiral,z With those, we can finally accurately estimate the times of separation for any degree of separation using Eq. <ref>, Eq. <ref> and Eq. <ref> in this order. §.§ Analytical calculation of enantiomeric fraction To calculate the enantiopurity of the separated racemic mixtures after the separating process we use the enantiomeric fraction, given by the expression <cit.>: (+)-EF=N_+/N_++N_- We can do so analytically in the case of the absorbing particles. First, we must imagine the situation described by Fig. <ref>, where both Gaussians have equal standard deviation σ and their centers are separated by 4σ. 
If we take the dividing line to be at the midpoint between the two separated normal distributions, its position measured from the center of the left distribution (which we define as the + cloud) is μ_+ + 2σ, where μ_+ is the center of the + cloud of enantiomers. Seen from the other enantiomeric cloud (which we call the - cloud), this dividing line lies at μ_- - 2σ. Now, because the enantiomeric clouds follow a Gaussian distribution, the number of positive enantiomer particles to the left of the dividing line is given by N_+ = N^total_+ P_+(x ≤μ_+ + 2σ). Similarly, the number of - particles to the left of the dividing line is N_- = N^total_- P_-(x ≤μ_- - 2σ). Inserting these into the expression for the enantiomeric fraction, we get: (+)-EF=N^total_+ P_+(x ≤μ_+ + 2σ)/[N^total_+ P_+(x ≤μ_+ + 2σ)+N^total_- P_-(x ≤μ_- - 2σ)] In our case we consider N^total_+=N^total_-, which greatly simplifies the enantiomeric fraction expression to: (+)-EF= P_+(x ≤μ_+ + 2σ)/[P_+(x ≤μ_+ + 2σ)+P_-(x ≤μ_- - 2σ)] We can use that P_-(x ≤μ_- - 2σ)=1-P_-(x ≥μ_- - 2σ) and, furthermore, P_-(x ≥μ_- - 2σ)=P_+(x ≤μ_+ + 2σ), because both enantiomer clouds follow the same Gaussian distribution. So finally we get: (+)-EF= P_+(x ≤μ_+ + 2σ)/[P_+(x ≤μ_+ + 2σ)+1-P_+(x ≤μ_+ + 2σ)]= P_+(x ≤μ_+ + 2σ) By symmetry, the (-) enantiomer yields the same EF on its right side: (-)-EF= P_-(x ≥μ_- - 2σ) An important consequence of this derivation is that we can change the enantiomeric fraction simply by choosing the s parameter (defined as the separation between the centers of the distributions in units of σ): (+)-EF= P_+(x ≤μ_+ + (s/2)σ) Different s parameters imply different waiting times through Eq. <ref>, so we can adjust the waiting time to obtain any enantiomeric purity.
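The two results of this appendix can be combined in a few lines of code: the enantiomeric fraction reached for a given separation s follows the normal cumulative distribution, (+)-EF = P(x ≤ μ_+ + (s/2)σ), and the corresponding waiting time follows t = s²k_BT/(2ℳ⟨F_chiral,z⟩²). For s = 4 this reproduces the 97.72% purity quoted in the main text; the mobility and chiral-force values below are assumed placeholders rather than the values used in the simulations.

```python
from math import erf, sqrt, pi

def norm_cdf(x):
    # cumulative distribution function of a standard normal variable
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

kB, T  = 1.380649e-23, 293.0
eta, r = 8.9e-4, 10e-9                       # water viscosity (Pa s), particle radius (m)
M      = 1.0 / (6.0 * pi * eta * r)          # Stokes mobility, m / (N s)
F_ch   = 1.0e-16                             # assumed average chiral force (N)

for s in (2.0, 4.0, 6.0):
    ef = norm_cdf(s / 2.0)                   # (+)-EF when the clouds are s*sigma_z apart
    t  = s**2 * kB * T / (2.0 * M * F_ch**2) # waiting time needed to reach that separation
    print(f"s = {s:.0f}:  (+)-EF = {100.0 * ef:5.2f} %   t = {t:8.1f} s")
```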
http://arxiv.org/abs/2406.18746v1
20240626202530
Lifelong Robot Library Learning: Bootstrapping Composable and Generalizable Skills for Embodied Control with Language Models
[ "Georgios Tziafas", "Hamidreza Kasaei" ]
cs.RO
[ "cs.RO" ]
§ ABSTRACT Large Language Models (LLMs) have emerged as a new paradigm for embodied reasoning and control, most recently by generating robot policy code that utilizes a custom library of vision and control primitive skills. However, prior works fix their skill library and steer the LLM with carefully hand-crafted prompt engineering, limiting the agent to a stationary range of addressable tasks. In this work, we introduce LRLL, an LLM-based lifelong learning agent that continuously grows the robot skill library to tackle manipulation tasks of ever-growing complexity. LRLL achieves this with four novel contributions: 1) a soft memory module that allows dynamic storage and retrieval of past experiences to serve as context, 2) a self-guided exploration policy that proposes new tasks in simulation, 3) a skill abstractor that distills recent experiences into new library skills, and 4) a lifelong learning algorithm for enabling human users to bootstrap new skills with minimal online interaction. LRLL continuously transfers knowledge from the memory to the library, building composable, general and interpretable policies, while bypassing gradient-based optimization and thus relieving the learner from catastrophic forgetting. Empirical evaluation in a simulated tabletop environment shows that LRLL outperforms end-to-end and vanilla LLM approaches in the lifelong setup while learning skills that are transferable to the real world. Project material will become available at the webpage https://gtziafas.github.io/LRLL_project/. § INTRODUCTION Building interactive agents that can continuously develop new skills and adapt to new scenarios remains a challenging frontier in robotics <cit.>. Such an agent should be able to interface natural language, percepts and actions in order to form policies that are reusable and expandable in an open-ended fashion <cit.>. Recent advances in end-to-end robot learning <cit.> produce capable multimodal policies but require copious amounts of data, which are very hard to collect at scale in the robotics domain. Further, their reliance on gradient-based optimization hinders their applicability in a lifelong setup, due to the effect of catastrophic forgetting <cit.>. Meanwhile, an emerging paradigm has been to leverage the code-writing capabilities of modern LLMs <cit.> for synthesizing executable robot policy code from natural language <cit.>. In such a setup, vision and action skills are implemented as modules (either learned or scripted) in a first-party API. This allows the LLM to compose them arbitrarily in combination with classic programming structures (control flow, recursion etc.) and third-party Python APIs in order to ground visual observations, perform low-level reasoning, and provide parameters for control primitives. This system bypasses model finetuning, instead relying on careful prompt design and in-context examples to steer the LLM and aid generalization. However, the choice of the skill library and prompt examples limits the span of tasks that the agent can tackle, and requires an expert to continuously adapt the library and prompts to the LLM. In this work, we wish to address such limitations by proposing LRLL, a Lifelong Robot Library Learning agent. 
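To give a concrete flavor of the kind of policy code such a system produces, the sketch below shows a toy task written against a handful of stubbed perception and control primitives. The primitive names, the task and the object attributes are invented for illustration only; they are not the actual LRLL (or Code-as-Policies) API.

```python
from dataclasses import dataclass
from typing import List

# --- stubbed first-party skill API (illustrative only) -----------------------
@dataclass
class Obj:
    name: str
    size: float          # footprint in meters

def detect_objects(category: str) -> List[Obj]:
    # stand-in for a perception primitive; a real system would query a vision module
    return [Obj("red block", 0.04), Obj("blue block", 0.06), Obj("green block", 0.05)]

def pick(obj: Obj) -> None:
    print(f"pick({obj.name})")

def place_on(obj: Obj, target: Obj) -> None:
    print(f"place_on({obj.name}, {target.name})")

# --- policy code of the kind an LLM could synthesize from these primitives ---
def stack_blocks_largest_first() -> None:
    blocks = sorted(detect_objects("block"), key=lambda o: o.size, reverse=True)
    base = blocks[0]
    for block in blocks[1:]:
        pick(block)
        place_on(block, base)   # place the held block on top of the current stack
        base = block            # the newly placed block becomes the next target

stack_blocks_largest_first()
```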
LRLL learns hierarchical and generalizable skills across time spans, while staying within the regime of in-context learning and bringing non-expert human users in-the-loop. Our learning algorithm is inspired by wake-sleep optimization <cit.> and its adaptation for library learning <cit.>. Learning takes place in cycles, each with two distinct phases: a) a wake phase, during which the agent interacts with its environment and users in order to grow its experiences, and b) a sleep phase, during which the agent reflects on its experiences in order to expand its capabilities. Accumulated experiences are distilled into skills throughout the learning cycles, therefore allowing complex tasks to be expressed as programs composed of simpler skills (see Fig <ref>). The human acts as a teacher, introducing a few demonstrations and hints to the agent during the wake phase. We assume the teacher follows a curriculum approach, where in each cycle the objective tasks can be built out of the learner's current repertoire of skills. To stay within in-context learning, we use a frozen LLM to generate the policy code, and design our algorithm as an interchange between two modules: a) an experience memory, where the agent can store and retrieve past instruction-code pairs based on similarity search, in order to feed context to the LLM (i.e. prompt retrieval), and b) a skill library, which comprises the collection of API calls the LLM can generate code from. At each cycle, the wake phase populates the memory with new instruction-code pairs, generated from an LLM-based exploration module that proposes and self-verifies new tasks in a simulator. During the sleep phase, the accumulated experiences are distilled into new skills via an LLM-based abstraction module. The new skills are appended to the library and the wake phase is replayed with refactored experiences, in order to compress the memory. This leads to a continual transfer of knowledge from human guidance, via exploration in simulation, to the library, without utilising gradients in any part of the process. To apply our idea in robotics, we design a four-stage curriculum and prompt our agent to acquire a broad range of skills, including precise visual-spatial reasoning and long-horizon tabletop rearrangement. Empirically, we show that LRLL can automatically build a library of hierarchical, generalizable and interpretable skills, while outperforming end-to-end and stationary LLM baselines. We further perform ablations to demonstrate the effectiveness of each proposed component and explore design options. Finally, we illustrate that our algorithm can be transferred to a real robot for dual-arm tabletop rearrangement tasks, without any further adaptation. In summary, our key contributions are the following: a) LRLL, an LLM-based agent that can generate policy code, explore tasks in simulation, and expand its skillset over time, b) a formal recipe for enabling humans to bootstrap desired robot skills with minimal intervention, and c) extensive comparisons, ablation studies and hardware demonstrations that evaluate the effectiveness of each proposed component, assess overall generalization capabilities and test sim-to-real transferrability. § RELATED WORK Language to Action Natural language has a long-standing history for controlling robots <cit.>, serving both as a natural interface for human-robot interaction <cit.>, as well as a generalizable intermediate representation <cit.>. 
Approaches range from semantic parsing <cit.>, planning <cit.>, reinforcement <cit.>, imitation <cit.>, and model-based <cit.> learning to more recent large-scale end-to-end multimodal instruction-following <cit.>. While end-to-end policies are becoming more capable, they require prohibitive amounts of offline data or environment interactions. Further, their lifelong learning potential is limited by the effect of catastrophic forgetting <cit.>. In contrast, in this work we focus on a gradient-free approach where low-level actions are implemented as control primitives, out of which an LLM continuously builds more complex skills via few-shot human demonstration and interactive exploration. LLMs for Robot Control More similar to this work, an emerging body of methods is chaining LLMs with external models <cit.> in order to propose grounded plans that sequence high-level actions <cit.>. This method invests on the current capabilities of LLMs for multi-step reasoning using external modules as tools <cit.>, without adittional finetuning. Recent works <cit.> replace high-level actions with a library of primitives, and use the LLM to generate Python code that grounds the visual scene and parameterizes the primitives. This allows more complex policy logic than sequences of actions and offers more precise spatial grounding <cit.> and reasoning <cit.>. However, such works are stationary systems that do not further extend their library, and require manual prompt engineering to be applied in a general setup. Code-as-Policies (CaP) <cit.> demonstrates sparks of non-stationarity by letting the LLM recursively define unseen functions, but does not do so in a controlled, reusable fashion and does not consider human guidance. In this work, we wish to extend a CaP-like system to incorporate past interactions as a self-prompting mechanism and systematically use the LLM function generator to expand the skill library over time. Memory and Context Retrieval Retrieval-augmented LLMs are a trending direction in NLP research <cit.>, mostly as a means to reduce LLM hallucinations. In the robotics and embodied AI space, several works retrieve the most similar task-code pairs from memory based on similarity search and use them to prompt the LLM <cit.>, but do so in a stationary fashion. Recent works <cit.> expand the memory based on interactions, but there is no refinement of the base skills. In our work, we use the memory for LLM prompt retrieval, but progressively distill similar experiences to new skills that refactor and compress the memory. Language-guided Skill Acquisition Iteratively refining robot skills with language feedback has been explored in the past with external parsers <cit.> and end-to-end language-conditioned policies <cit.>, but rely on either domain knowledge or extensive demonstration datasets, and therefore lack scalability. More recently, <cit.> leveraged LLMs with multi-step human feedback to generate reward functions that will train policies with model-predictive control. The concurrent work <cit.> utilizes an LLM to generate synthetic experiences in simulation together with success conditions, and distills the successful trials into a language-conditioned policy with behavioral cloning. Both ideas are conceptually close to our work, exploiting the LLM to generate information that will train a policy, but employ traditional approaches for learning the policy and hence struggle with the lifelong learning setup. 
Instead, LRLL leverages control primitives and an LLM to express the policy itself, and poses skill acquisition as library learning on top of the primitives. This allows complex skills to form from simpler skills over time, while retaining interpretability and bypassing training policies from scratch. § METHOD In this section, we provide an overview of the proposed algorithm (Sec. <ref>) and its components (Sec. <ref>), and describe details for achieving policy and success generation, exploration and skill abstraction with LLMs (Sec. <ref>). §.§ LRLL Overview Our framework (see Fig. <ref>) receives at each cycle t a set of N_t language demonstrations X_t that contain instruction-policy-success tuples for specific tasks: X_t = ⟨ l^(i), a^(i), r^(i)⟩_i=1^N_t. The demos are appended to the agent's memory M_t. The policy code a^(i) is factorized based on the current library skills L_t. The success code r^(i) describes how the agent can use privileged simulation data to verify the success of a given instruction (e.g. check contact, object poses etc.). The output is a set of formal skills, expressed as Python functions, which can be directly used for real robot deployment. This process is repeated for an open-ended number of cycles, enabling the agent to expand its library in a lifelong fashion. Each cycle consists of two phases: Wake Phase During the wake phase, the agent interacts with its environment in order to grow its proficiency in solving more tasks. We use an LLM to iteratively propose new task instructions l_1:K = explore(X_t, G_t, M_t) based on the input demos X_t, the current memory state M_t and some general hints G_t. The proposed tasks are executed and verified in the simulator using another LLM's generated policy and success code respectively: a_k, r_k = generate(l_k, s_0,k, M_t), where s_0,k is the initial simulator state of proposal k. Successful tuples are appended to the experience memory M_t, which helps the agent to recover more relevant context throughout exploration. The process is repeated until an iteration threshold is met, or the LLM decides that it has completed all objectives denoted in the hints. The acquired experiences are stored in a replay buffer B_t = { <s_0,k, l_k, a_k, r_k> | r_k=1 }. Sleep Phase During sleep, the agent reflects on its acquired experiences to compose new skills. To achieve this, we first represent each experience as an abstract syntax tree of its policy code. Experiences are clustered such that policies with the same tree structure modulo variable and constant names are grouped together, for a total of M clusters C_1:M. We then feed each cluster into an LLM that uses the experiences as examples to define new functions that will update the library: L_t+1 = L_t ∪{abstract(C_m)}_m=1^M. The demo policies are refactored based on the proposed functions, X̃^(i) = refactor(X^(i), L_t+1), and the wake phase is replayed from scratch, starting only with the refactored demos. When the agent fails in a previously succeeded task, its policy-success pair is also refactored and appended to the memory: M_t+1 = M_t ∪{X̃^(i)∈ B_t | r^(i)=0 }. This process ensures that by the end of the cycle, the memory will contain the minimum number of experiences needed to replicate the performance of the wake phase. §.§ System Components Initial Vision & Action Primitives Following previous works <cit.>, we employ frozen pretrained vision-language models for zero-shot vision-language grounding. In particular, we use MDETR <cit.> for referring expression grounding and CLIP <cit.> for open-vocabulary classification.
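To summarize the cycle described in the Overview above, the following minimal Python sketch illustrates the control flow of one LRLL learning cycle. The objects llm, sim, memory and library, and the helper cluster_by_policy_ast, are illustrative stand-ins for the LLM modules, simulator and data structures of this section, not the authors' implementation.

```python
def run_cycle(demos, hints, memory, library, sim, llm, max_iters=50):
    """One LRLL cycle: wake (explore and verify tasks in simulation),
    then sleep (abstract experiences into new library skills)."""
    memory.extend(demos)                                   # store human demos X_t
    buffer = []                                            # replay buffer B_t

    # --- Wake phase ---
    for _ in range(max_iters):
        proposals = llm.explore(demos, hints, memory)      # new instructions l_1:K
        if not proposals:                                  # hint objectives covered
            break
        for instruction in proposals:
            state = sim.reset()
            policy, success = llm.generate(instruction, state, memory)  # actor-critic
            if sim.execute(policy, success):               # keep verified successes only
                exp = (state, instruction, policy, success)
                memory.append(exp)                         # richer context for retrieval
                buffer.append(exp)

    # --- Sleep phase ---
    for cluster in cluster_by_policy_ast(buffer):          # same AST structure -> one cluster
        library.add(llm.abstract(cluster))                 # LLM defines a new skill function
    refactored_demos = [llm.refactor(d, library) for d in demos]

    # Replay the wake tasks starting only from the refactored demos; experiences
    # that now fail are refactored and appended back, keeping the memory minimal.
    memory.reset(refactored_demos)
    for exp in buffer:
        new_exp = llm.refactor(exp, library)
        if not sim.replay(new_exp):
            memory.append(new_exp)
    return memory, library
```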
For control, as we wish to demonstrate the capability of LRLL to progressively build complex skills from simpler ones, we start with the most basic primitives: moving the arm to a certain pose and opening / closing the gripper. Motion planning is performed via inverse kinematics from end-effector space. Experience Memory & Retrieval Agent experiences are formalized as tuples of task instruction, action and success code, either provided by the human teacher or “imagined” by the LLM exploration module: X^(i) = <l^(i), a^(i), r^(i)>. When appended to the memory, each experience is indexed by its instruction embedding, provided by an encoder-based LM <cit.>: 𝐳_i = F^LM(l^(i)). In order to retrieve experiences to serve as prompts, the query instruction q is embedded by the same model as 𝐳_q, and the experiences of the top-k most similar instructions based on maximum marginal relevance search <cit.> are returned: argmax_i ∈ M_t ∖ S [ λ sim(𝐳_i, 𝐳_q) - (1-λ) max_j ∈ S sim(𝐳_i, 𝐳_j) ], where S is the set of already selected retrievals, sim(·, ·) the cosine similarity and λ a diversification hyper-parameter. This rule is applied k times to retrieve diverse experiences. Skill Library Each agent skill corresponds to a function, implemented as a Python API. The library maintains skill information such as their names and descriptions, and is able to trace skill dependencies from a given code snippet. Before learning begins, the library is initialized with the initial primitives. A wrapper around the library and the agent converts the newly acquired skills into API modules that are executable in a robot simulator. Replay Buffer The replay buffer is a replica of the experience memory but only for the explored experiences of the current cycle. Additionally, for each experience, the simulator states are saved. The replay buffer is reset at the beginning of each new cycle. §.§ LLM Prompts We implement three LLM modules: Actor-Critic The actor-critic comprises two parallel LLM calls, one for policy and one for success code generation. Both prompt templates are instantiated after retrieving experiences from the memory for a given query, and follow the same general structure: * A comment indicating the general purpose of the code (e.g. "### Python robot control script"). * Information about the API, aiming to present the building blocks for code generation. For the actor, all modules are extracted from the retrieved examples' policies and rendered as import statements <cit.>. For the critic, a fixed code snippet describing simulator utilities via docstrings is provided <cit.>. * A sequence of task-code pairs from the retrieved experiences. Each pair is rendered as a comment of the task description followed by a policy or success code snippet, for the actor and critic respectively. * Throughout demonstration code, chain-of-thoughts <cit.> are provided as in-line comments to guide the LLM's reasoning before producing a next line, which is especially useful for explaining perspective conventions (e.g. "left" corresponds to the x-axis). For inference, the query is appended to the prompt as a comment and the LLM fills in the corresponding policy or success code. We find that this code-based completion format works robustly also for chat-based LLMs <cit.>. Exploration The exploration module proposes the next tasks to complete in the simulator. The goal is to use the demos as guidance and introduce both task variations, i.e. alter concepts present in the instruction (e.g. color, spatial direction etc.), as well as task compositions, i.e.
combinations of concepts present in the demos (e.g. desired destinations for placing objects). The prompt contains: * A system message that includes general directives that condition the LLM for the task, encourages diverse responses and provides the required response format. * Hints provided by the teacher, aimed to provide objectives of a specific cycle. * Information about the current state, represented as a list of appearing object names. * A list of completed and failed tasks so far, reflecting the agent's current progress towards completing all objectives mentioned in the hints. The completed tasks are initialized with the provided demos. * A set of two exemplar generations, containing input demo-hints and output task proposals. We provide one manual and set one more exemplar from the LLM's first response in the previous cycle. Before proposing tasks, we ask the LLM to reason about its proposals <cit.>, which significantly helps in responding better to the provided hints. We find that decomposing exploration to a chain of two LLM calls, prompted separately for compositions and variations, leads to faster completion. Task variations, proposed by the second LLM call, are not included in the progress prompt field. A temperature parameter of 0.1 is set at successive iterations to encourage diverse responses. Skill Abstraction This module leverages LLMs' capabilities to define Python functions out of examples. The goal is dual: a) maintain the same code logic, but abstract code variations such as target objects and destination regions as arguments to the new function, and b) extract boilerplate code snippets and abstract them to new functions. This is achieved by prompting the LLM in two rounds. The prompt contains: * A general purpose system message that primes the LLM for function generation and imposes constraints. * An API field, rendered from the dependencies of input code snippets as an import statement. * A set of two exemplar function definitions. The exemplars are different for each round of abstraction. As in exploration, one exemplar is manual and the other is selected from the first LLM response of the last cycle. * The input code snippets with their instruction as comments. We first provide definition examples, which are the raw instruction-code pairs from each cluster of the experience memory, and then abstraction examples, which correspond to the LLM-generated functions from the first round. We also ask the LLM to provide a docstring, which is used as the skill description, as well as to re-write the given examples based on the generated function, which is used to refactor the memory and move to the replay stage of the sleep phase. § EXPERIMENTS The focus of our experimental evaluation is threefold: a) compare our method against previous baselines for tabletop manipulation in simulation (Sec.<ref>), b) evaluate the impact of each our method's proposed contributions (Sec.<ref>), and c) demonstrate the transferability of our approach to the real world (Sec.<ref>). §.§ Evaluation Setup Implementation We leverage OpenAI’s <cit.> engine for all LLM generations, and <cit.> as the memory embedding model. Our system is built using the LangChain library <cit.>. Our simulator environment is built on Pybullet <cit.> and it is based on the Ravens <cit.> manipulation suite, with the blocks-and-bowls setup replicated from previous works <cit.>. 
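As a concrete illustration of the maximum marginal relevance retrieval rule introduced in the Experience Memory & Retrieval paragraph, the following is a minimal NumPy sketch; the memory layout and the parameter values are illustrative rather than taken from our implementation.

```python
import numpy as np

def cosine_sim(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def mmr_retrieve(query_emb, memory, k=5, lam=0.7):
    """Select k relevant yet diverse experiences.
    memory: list of (instruction_embedding, experience) pairs."""
    candidates = list(range(len(memory)))
    selected = []
    while candidates and len(selected) < k:
        def score(i):
            relevance = cosine_sim(memory[i][0], query_emb)
            redundancy = max((cosine_sim(memory[i][0], memory[j][0]) for j in selected),
                             default=0.0)
            return lam * relevance - (1.0 - lam) * redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return [memory[i][1] for i in selected]
```

With λ = 1 the rule reduces to plain nearest-neighbour retrieval; decreasing λ trades relevance for diversity among the k retrieved examples.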
We introduce more tasks and language variations, for a total of 41 task templates, organized in a curriculum of 4 cycles: a) Spatial Coordination, i.e. precise motions relative to objects/regions, b) Visual Reasoning, i.e. determining attributes, resolving spatial relations and counting/enumerating objects, c) Object Manipulation, i.e. single picking, releasing and placing tasks, and d) Rearrangement, i.e. long-horizon tasks that involve multiple objects and destinations. Evaluation For conducting generalization experiments, we generate task instances in three splits <cit.>: seen instructions with either seen (SA) or unseen (UA) attributes, and unseen instructions with unseen attributes (UI). For studying learning in the lifelong setup, we also propose two more splits: a) a forward-transfer (FT) split, which contains unseen compositions of tasks from the present cycle with all previous tasks (with seen attributes), and b) backward-transfer (BT), which contains the FT tasks from the previous cycle. These splits are meant to study whether the agent can learn to transfer knowledge between tasks (FT), and to what extent it “forgets" or improves on previous tasks (BT). Baselines We consider four baselines: a) learning end-to-end multi-task policies with CLIPort <cit.>, adapted as in <cit.> (not applicable in all tasks), b) prompting LLMs for primitive-based policy code using a static prompt, as in CaP <cit.> (without hand-crafted routing between LLM sub-systems), c) LRLL-no-sleep, where we remove the sleep phases from our LRLL and only retrieve examples from the memory without abstraction, and d) LRLL-no-wake, where we attempt to synthesize new skills directly from the human demonstrations, without the exploration of the wake phase. §.§ Tabletop Manipulation in Simulation We first wish to evaluate the performance of LRLL compared to established baselines in our simulated tabletop domain. To that end, we developed a simulated teacher that samples tasks from a set of predefined templates. The teacher generates up to 5 demonstration (1 per SA template) and multiple test (10 per UA, UI template) tasks in the beginning and end of each of our 4 cycles. For end-to-end learning with CLIPort <cit.>, we sample 1k trajectories per task template using a scripted expect for each cycle and train the model incrementally. For our LLM-static baseline <cit.>, we append demos from each new cycle in the LLM's prompt. The same demos are provided to LRLL at the beginning of each cycle's wake phase. Agents are tested at the end of each cycle. We repeat our experiments three times with different teacher seeds and report averaged success rates in Table <ref>. We observe that CLIPort struggles with unseen attributes and its performance degrades drastically with unseen instructions. LLM-static is robust to unseen attributes (2.9% average drop) and can generalize significantly better in unseen task instructions, with an average success rate of 76.8% in all cycles. We find that this baseline's main limitation is producing non-executable code in cases of unseen instructions at later cycles, which we attribute to its inability to interpret and compose multiple skills from a limited demonstration context. Such skill compositions are (partially) already explored during the wake phase of our LRLL, and abstracted to functions during the sleep phase, resulting in policy code that is much shorter and functional in style. 
This robustifies LRLL's generated policies, which translates to an average increase of ∼6% in unseen attribute and ∼10% in unseen instructions compared to LLM-static. §.§ Ablation Studies Our ablations focus on exploring the effect of each of our proposed components and discussing options for implementation. Forward/Backward Transfer We compare the averaged success rates of all baselines in FT/BT instructions. Results are reported in Table <ref>. First, we assess that all baselines are robust in tasks from previous cycles, showcasing the immunity of LLM's in-context learning to forgetting. However, no actual increase in backward task's performance is reported in any baseline. For forward transfer, we see a large increase in averaged success between LLM-static and LRLL. Even without refactoring code (LRLL-no-sleep baseline), retrieving explored experiences leads to better compositional abilities, with a ∼15% delta from static. Static Prompts vs. Retrieval When prompted with a few examples, the difference between a static and a retrieved prompt is marginal. In the late cycles of the curriculum, we observe the effect of prompt saturation <cit.> kicking in the static baseline, leading to several instabilities in the LLM responses, such as ignoring the first examples in favour of more recent ones or referring to variable names outside the current scope. Retrieval-based baselines tackle such issues by ensuring a fixed context length for the LLM actor. Effect of Exploration The contribution of the exploration module is vital, as the performance of LRLL-no-wake is consistently much lower across cycles. This is due to the high difficulty of abstracting skills from one-shot demos, which usually leads to a one-to-one mapping of instructions to functions, without adding any actual refactoring. When adding exploration, the abstractor has significantly more examples to define new skills. To evaluate the breadth of variance in the explored tasks, we visualize the tSNE projections of their instruction embeddings <cit.> compared to demonstration and test tasks within a cycle (see Fig. <ref>). Effect of Abstraction LRLL-no-sleep never abstracts the explored tasks into new skills, and so needs to retrieve a lot of examples in order to obtain sufficient context. This effect bottlenecks the agent to the quality of the retriever. Besides performance gains, sleep leads to other practical benefits (see Fig. <ref>). First, the amount of experiences required to maintain the same success within each cycle is drastically reduced, leading to a × 8 decrease in RAM required to store experience embeddings. Second, LRLL with refactored memory requires much less retrieved experiences to maintain high performance in UI tasks. Additionally, as the experiences are refactored to be simple function calls (to the abstracted skills), the retrieved code is itself smaller, which leads to smaller prompt lengths and hence cost gains for using GPT. LLM and Embedding Models We find no significant difference between and in the quality of generated policy code or function abstraction. For exploration, we find that both models provide rich variance in the proposed tasks, but the chat model tends to be less responsive to the hints signal. This effect can be ameliorated by running more exploration iterations. The choice of the embedding model is more important, as experiments with smaller encoder LMs such as Sentence-BERT <cit.> showed a tendency to retrieve instructions that are similar lexically (e.g. 
same object noun appears), but not necessarily convey the same task. §.§ Zero-Shot Sim-to-Real Transfer We repeat our curriculum with LRLL in a dual-arm robot setup with two UR5e arms and a Kinect sensor. We provide vision APIs for open-vocabulary detection with MDETR <cit.> and attribute recognition with CLIP <cit.>. To assist in articulated grasping, we also integrate GR-ConvNet <cit.> for 4-DoF grasp synthesis as a vision API. The motion primitives for moving the arm and opening/closing the fingers are parameterized by the left or right arm. We include a catalog of 12 household objects, including fruits, soda cans, juice boxes etc. We first train LRLL using our default 4-cycle curriculum in the Gazebo simulator <cit.> and then test the agent in the real robot. We demonstrate that the robot is able to perform long-horizon rearrangement tasks that combine precise spatial positioning with reasoning about object attributes, without any further adaptation from simulation. Errors were observed mostly at motion execution due to collisions, as well as perception errors due to CLIP misclassifications. § CONCLUSION & LIMITATIONS In this work, we introduce LRLL, an agent and learning algorithm for lifelong robot manipulation. LRLL exploits the emergent capabilities of modern LLMs to: a) generate policies as code, b) interact with the environment to explore new tasks, and c) distill the acquired experiences into new skills over time. LRLL replaces tedious prompt engineering with retrieval from memory, and a static library of skills with an expandable codebase, written and verified by the agent itself. Empirical evaluation shows that our agent learns a library of composable, generalizable, and interpretable skills that can be transferred to the real world, while its dynamic and gradient-free nature prevents it from prompt saturation and forgetting phenomena of stationary-LLM and end-to-end approaches respectively. LRLL comes not without limitations. First, perception is restricted by the choice of vision APIs, which currently support only referring expressions and attribute classification. In the future, we would like to look at multimodal LLMs <cit.> for open-ended vision-language grounding. Second, the current human demonstration input to LRLL is language, limiting its scalability to skills that can be expressed symbolically as primitive compositions. For articulated, contact-rich manipulation tasks, we would like to augment demonstration input to support video or kinesthetic teaching. Third, the initial prompts to exploration/abstraction modules need to be refined when changing domains or LLM engines. Finally, exploration with commercial LLMs is constrained by latency and price factors. In the future, we would like to investigate the gap between GPT and open-source alternatives <cit.>. § ACKNOWLEDGMENTS We thank the Center for Information Technology of the University of Groningen for providing access to the Hábrók high-performance computing cluster.
http://arxiv.org/abs/2406.18079v1
20240626053136
MFDNet: Multi-Frequency Deflare Network for Efficient Nighttime Flare Removal
[ "Yiguo Jiang", "Xuhang Chen", "Chi-Man Pun", "Shuqiang Wang", "Wei Feng" ]
cs.CV
[ "cs.CV", "eess.IV" ]
Corresponding author: Chi-Man Pun E-mail: cmpun@umac.mo This work was supported in part by the Science and Technology Development Fund, Macau SAR, under Grants 0141/2023/RIA2 and 0193/2023/RIA3. MFDNet: Multi-Frequency Deflare Network for Efficient Nighttime Flare Removal Yiguo Jiang Xuhang Chen Chi-Man Pun Shuqiang Wang Wei Feng ============================================================================= § ABSTRACT When light is scattered or reflected accidentally in the lens, flare artifacts may appear in the captured photos, affecting the photos' visual quality. The main challenge in flare removal is to eliminate various flare artifacts while preserving the original content of the image. To address this challenge, we propose a lightweight Multi-Frequency Deflare Network (MFDNet) based on the Laplacian Pyramid. Our network decomposes the flare-corrupted image into low and high-frequency bands, effectively separating the illumination and content information in the image. The low-frequency part typically contains illumination information, while the high-frequency part contains detailed content information. So our MFDNet consists of two main modules: the Low-Frequency Flare Perception Module (LFFPM) to remove flare in the low-frequency part and the Hierarchical Fusion Reconstruction Module (HFRM) to reconstruct the flare-free image. Specifically, to perceive flare from a global perspective while retaining detailed information for image restoration, LFFPM utilizes Transformer to extract global information while utilizing a convolutional neural network to capture detailed local features. Then HFRM gradually fuses the outputs of LFFPM with the high-frequency component of the image through feature aggregation. Moreover, our MFDNet can reduce the computational cost by processing in multiple frequency bands instead of directly removing the flare on the input image. Experimental results demonstrate that our approach outperforms state-of-the-art methods in removing nighttime flare on real-world and synthetic images from the Flare7K dataset. Furthermore, the computational complexity of our model is remarkably low. § INTRODUCTION Photographs taken in nighttime scenes with bright light sources often exhibit flare artifacts, which occur as a result of undesired scattering and reflection of intense light within the camera lens. Light scattering and reflection are common in real camera lenses, particularly in consumer-grade mobile phone cameras. Daily wear and tear, along with the presence of fingerprints and dust, can unintentionally cause light scattering or reflections in the lens. These flare artifacts not only impact the aesthetics of the photograph but also degrade the detailed visual information, hindering image comprehension. As a result, there is a strong demand for a reliable and effective nighttime flare removal algorithm. The flare patterns in photographs are affected by the lens properties and shooting environment, which encompass factors such as the design of the optics lens, manufacturing imperfections, lens smudges, and the light source's position and angle relative to the lens. The diversity of these factors results in flares with different shapes, positions, and colors. Typical flare artifacts include glare, streaks, bright colored lines, shimmer, saturated blobs, and many others. 
The variety in appearance of flare artifacts makes it challenging to remove them entirely from a photograph while preserving other content information, especially when multiple flare patterns exist within a single image. Traditional flare removal methods include hardware-based methods <cit.> and software-based methods <cit.>. Advanced materials and refined optical designs can contribute to more specialized lenses that reduce flare artifacts. Applying an anti-reflective coating is another widely used way of reducing flare impact. Nevertheless, these hardware methods can alleviate some flare effects but cannot wholly remove various flares. Furthermore, these hardware-based methods are not useful in images containing flares and are relatively expensive. To address these above problems, some software-based algorithms have emerged for removing flares. These methods usually follow two steps: detecting flares based on their characteristics and then removing them. However, software-based methods struggle to handle a broad range of flare artifacts. Recently, Deep learning-based methods <cit.> for removing flare have emerged. Nevertheless, most of these existing methods view flare removal as a general image restoration task and do not consider effectively decoupling the image's illumination and content information. As a result, these methods might not fully eliminate the flare artifacts or cause degradation of the image content during the removal process. Moreover, these methods directly utilize deep networks to globally manipulate the flare-corrupted image, which results in high computational costs. Consequently, their computational complexity exponentially increases with image resolution, making it unfeasible to apply to high-resolution images, reducing the algorithm's applicability. In this paper, we propose a lightweight Multi-Frequency Deflare Network (MFDNet) based on the Laplacian Pyramid <cit.> for nighttime flare removal. Our proposed method aims to effectively eliminate various flare artifacts while preserving the integrity of the original image. Inspired by the reversible frequency-band decomposition framework of a Laplacian Pyramid <cit.>, our MFDNet decouples illumination and content information by decomposing the image into low and high-frequency bands. It performs flare removal in the low-frequency part of the image, followed by gradual fusion with the high-frequency part to reconstruct the flare-free image. At the same time, because our method performs flare removal in the low-frequency part, where the resolution is low, it can effectively reduce the computational complexity. As shown in Figure <ref>, our method of first decoupling the flare-corrupted image and then removing the flare can effectively eliminate the flare artifacts while minimizing the computational complexity. Specifically, we propose the Low-Frequency Flare Perception Module (LFFPM) for flare removal in the low-frequency part. In the task of flare removal, a large receptive field is essential due to the extensive coverage of the flare. Thus, global information is crucial in accurately identifying the flare. Considering the Transformer's proficiency in capturing long-range pixels, the Low-Frequency Flare Perception Module (LFFPM) utilizes the Transformer for global feature extraction and refinement. 
In order to alleviate the limitation of Transformers in capturing local dependencies and reduce the model's computational complexity, LFFPM uses a convolution-based encoder-decoder structure to enhance local detailed feature representation. In addition, we propose the Hierarchical Fusion Reconstruction Module (HFRM) for an efficient fusion of high-frequency information. In HFRM, features from the high-frequency component and the results of the Low-Frequency Flare Perception Module (LFFPM) are aggregated at each layer to construct the Laplacian Pyramid for the final reconstruction. In summary, the contributions of this paper are as follows: 1. We propose a lightweight and effective Multi-Frequency Deflare Network (MFDNet) that removes nighttime flare artifacts by decoupling the image's illumination and content information into different frequency bands. 2. We design the Low-Frequency Flare Perception Module (LFFPM) to remove flares in the low-frequency part, which utilizes convolution to capture local features and self-attention to model long-range dependencies. 3. We design the Hierarchical Fusion Reconstruction Module (HFRM), which gradually aggregates features from the high-frequency bands and LFFPM's results to reconstruct the final flare-free image. 4. Extensive experiments demonstrate that our method achieves state-of-the-art performance on nighttime flare removal task while maintaining low computational complexity. The subsequent sections are structured as follows. Section <ref> discusses related works. Section <ref> is devoted to the details of our proposed MFDNet elaborately. Section <ref> presents our extensive experiments and analysis, and the conclusion is summarized in Section <ref>. § RELATED WORK §.§ Image Restoration During the process of capturing photographs, various factors, including unfavorable weather conditions, optics-induced diffraction, and relative motion between the camera and object, can lead to the deterioration of image quality. This degradation results in the loss of important information in the captured image, necessitating the restoration of the image to its original quality. Typical image restoration tasks include but are not limited to image deblurring <cit.>, image denoising <cit.>, image dehazing <cit.>, rain removal <cit.>, reflection removal <cit.>, shadow removal <cit.>, and more. Recently, some state-of-the-art image restoration methods <cit.> have appeared to handle different image restoration tasks. §.§ Flare Removal §.§.§ Traditional methods Traditional flare removal methods include hardware-based methods and software-based methods. Most hardware-based methods aim to reduce flare artifacts by improving optical designs and camera lens materials. Boynton et al. <cit.> propose a fluid-filled camera lens to alleviate flare artifacts caused by light reflections. Macleod et al. <cit.> employ a neutral density filter to minimize reflective flare artifacts. Another commonly used hardware-based approach is to apply an anti-reflective coating to the camera lens. However, this coating may interfere with other coatings like anti-scratch and anti-fingerprint, and it is usually only designed to work for specific light wavelengths and angles of incidence. While these specific hardware approaches can eliminate some lens flare artifacts, they can often not address unforeseeable flares such as those generated by fingerprints or dust on the lens. 
In addition, these hardware-based methods are usually expensive, and none of them can deal with photographs that already exhibit flare artifacts. In response to the above problems, some algorithms have been proposed to remove flare artifacts. Seibert et al. <cit.> and Faulkner et al. <cit.> propose to use deconvolution to remove flare artifacts. Zhang et al. <cit.> propose a method to remove flare by separating the image into a flared part and a scene part. Other methods <cit.> employ a two-step approach, where they detect flare based on their features and subsequently eliminate flare artifacts while reconstructing affected areas through inpainting <cit.>. However, these approaches might wrongly identify bright regions as flare artifacts. Additionally, they may not be effective in dealing with various patterns of flare artifacts in complex scenarios. §.§.§ Deep learning-based methods Recently, deep learning-based methods have achieved good results in some low-level image restoration tasks <cit.>. The success of these deep learning methods typically depends on domain-specific image training datasets. However, collecting large-scale pairs of flare-corrupted and flare-free images from real-world scenes can be a labor-intensive and time-consuming process. Consequently, the development of deep learning-based flare removal algorithms has been sluggish, with only a limited amount of related work emerging in recent years. Wu et al. <cit.> proposed the first deep learning-based method for daytime flare removal utilizing a U-Net <cit.> model to reconstruct flare-free images. Their approach involved a post-processing step to reintegrate the light source into the restored image. Many subsequent methods, including ours, adopted a similar pipeline by first removing the flare and then blending the light source back into the image. Dai et al. <cit.> proposed a method for synthesizing flare by simulating the optical principles of nighttime flare generation. This enables them to construct datasets consisting of paired flare-corrupted and flare-free images. They created Flare7K, the first benchmark dataset for nighttime flare removal, which serves as a valuable resource for tackling this challenging task. With the proposed Flare7K dataset, they follow Wu et al. <cit.> and train a U-Net <cit.> as a baseline network. Meanwhile, they train some state-of-the-art image restoration methods in the Flare7K dataset, including HINet <cit.>, MPRNet <cit.>, Restormer <cit.>, and Uformer <cit.> to build the flare removal benchmark. FF-Former <cit.> proposes a U-shape network based Fast Fourier Convolution (FFC) for nighttime flare removal, which addresses the issue of limited receptive field in traditional window-based Transformer approaches. HINet <cit.> integrates Instance Normalization (IN) into the basic module to build the HIN module, which improves the performance of the image restoration network. MPRNet <cit.> proposes a multi-stage architecture that incrementally learns recovery functions for degraded inputs, thereby dividing the entire restoration process into more manageable steps. Restormer <cit.> proposes an encoder-decoder Transformer model to learn multi-scale representations of high-resolution images without decomposing them into local windows. Uformer <cit.> proposes a universal U-shaped Transformer for various image restoration tasks, which is built on the basic locally-enhanced window Transformer module and is efficient and effective. 
In Section <ref>, we compare our MFDNet with these state-of-the-art methods on the Flare7K benchmark. § METHODOLOGY §.§ Overview For the nighttime flare removal task, we propose a lightweight Multi-Frequency Deflare Network (MFDNet) based on the Laplacian Pyramid. As shown in Figure <ref>, our MFDNet consists of two primary modules: the Low-Frequency Flare Perception Module (LFFPM) and the Hierarchical Fusion Reconstruction Module (HFRM). Next, we first describe the overall pipeline of MFDNet, and then we detail the LFFPM in Section <ref> and the HFRM in Section <ref>. The Laplacian Pyramid (LP) <cit.> is a frequency-band image decomposition technique derived from the Gaussian Pyramid (GP). The main idea of the LP method is to decompose an image linearly into high-frequency and low-frequency bands. Based on the LP, the image reconstruction process can be implemented precisely and reversibly. Specifically, the LP is obtained by calculating the difference between adjacent layers in the Gaussian Pyramid. According to <cit.>, the illumination information of the image is more related to the low-frequency band, and the high-frequency component contains more detailed content information such as textures. Inspired by the above properties of the LP, we decouple the image's illumination and content information, remove the flare artifacts in the low-frequency part of the image, and subsequently fuse the low-frequency flare-free image with the detailed high-frequency information to restore the final flare-free image. The entire process is depicted in Figure <ref>. Specifically, given a nighttime flare-corrupted image I ∈ R^H × W × 3, MFDNet first decomposes it into a Laplacian Pyramid, generating a set of different frequency parts L=[l_1,l_2, ⋯,l_n-1] and the lowest frequency image I_n, where H × W denotes the spatial dimension, and n is the number of LP's decomposed levels. The components of L have progressively reducing resolutions from H × W to H/2^n-1×W/2^n-1, and I_n has H/2^n×W/2^n pixels. After getting I_n, we input it into the Low-Frequency Flare Perception Module (LFFPM) for flare removal and get a low-frequency flare-free image Î_n. I_n and Î_n are then provided to the Hierarchical Fusion Reconstruction Module (HFRM), which fuses each layer of L with Î_n incrementally to create L̂ =[l̂_1,l̂_2, ⋯,l̂_n-1] for reconstruction. L and L̂ share a one-to-one mirror relationship, so the final flare-free image Î can be reconstructed from Î_n and L̂. Because our method only needs to remove flares in the low-frequency components, it significantly reduces computational complexity. §.§ Low-Frequency Flare Perception Module The Low-Frequency Flare Perception Module (LFFPM) mainly includes the following parts: the Feature Extraction Transformer Block (FETB) and the Feature Refinement Transformer Block (FRTB) for extracting and refining features, and a U-shaped network with skip connections, composed of the Feature Encoder Convolution Block (FECB) and the Feature Decoder Convolution Block (FDCB). Transformers have been demonstrated to be more effective than CNNs in modeling long-range dependencies. As depicted in Figure <ref>, we use the Feature Extraction Transformer Block (FETB) in the Low-Frequency Flare Perception Module (LFFPM) to extract global features from the input image I_n. Moreover, at the end of LFFPM, we use the Feature Refinement Transformer Block (FRTB) to generate the enhanced features. After multiple FETBs and FRTBs, we concatenate the output of each block to obtain longer-distance features.
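The frequency-band decomposition and reconstruction described in the Overview can be sketched in a few lines of PyTorch. Bilinear resampling is used here as a stand-in for the exact pyramid filters, so this is an illustration of the reversible decomposition rather than our precise implementation.

```python
import torch
import torch.nn.functional as F

def lp_decompose(img, n=3):
    """Split an image into high-frequency bands L and a low-frequency residual I_n."""
    highs, current = [], img
    for _ in range(n):
        down = F.interpolate(current, scale_factor=0.5, mode='bilinear', align_corners=False)
        up = F.interpolate(down, size=current.shape[-2:], mode='bilinear', align_corners=False)
        highs.append(current - up)        # detail lost by downsampling at this level
        current = down
    return highs, current                 # high-frequency bands and low-frequency image

def lp_reconstruct(highs, low):
    """Invert the decomposition: upsample and add back each detail band."""
    current = low
    for high in reversed(highs):
        current = F.interpolate(current, size=high.shape[-2:], mode='bilinear',
                                align_corners=False) + high
    return current

x = torch.rand(1, 3, 512, 512)            # flare-corrupted input
L, I_n = lp_decompose(x, n=3)
restored = lp_reconstruct(L, I_n)         # I_n would be replaced by the LFFPM output
assert torch.allclose(restored, x, atol=1e-5)
```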
Inspired by <cit.>, FETB and FRTB use axis-based self-attention(ASA) to reduce computational complexity and gated feed-forward network (GFFN) to capture more critical features. Traditional self-attention's computational complexity is quadratic with the input resolution. ASA computes self-attention sequentially on the height and width axes across the channel dimension, resulting in linear complexity. GFFN applies GELU and elementwise product to eliminate less relevant features in two parallel paths, then combines the relevant features through element-wise summation. As shown in Figure <ref>, the computation of FETB and FRTB are represented as: 𝐅^'=ASA(LN(𝐅))+ 𝐅, 𝐅̂=GFFN(LN(𝐅^'))+𝐅^', where 𝐅 is the input of FETB and FRTB. 𝐅^' and 𝐅̂ are the outputs of ASA and GFFN, respectively. LN represents the layer normalization <cit.>. In order to alleviate the limitation of Transformers in capturing local dependencies, the Low-Frequency Flare Perception Module (LFFPM) uses a convolution-based U-shape encoder-decoder structure consisting of Feature Encoder Convolution Block (FECB) and Feature Decoder Convolution Block (FDCB) to enhance detailed feature representation, as shown in the Figure <ref>. Additionally, to ensure that both the global and local features are adequately fused, the Low-Frequency Flare Perception Module (LFFPM) automatically learns the weighted summation of the global long-distance features extracted by FETB and the output of FDCB. Inspired by <cit.>, FECB and FDCB are designed as simple and efficient nonlinear activation networks. As shown in Figure <ref>, FECB and FDCB mainly include the following parts: layer normalization <cit.>, convolution, Gate, and Channel Attention <cit.>. Gate divides the feature map into two parts in the channel dimension and multiplies them. The formula of Gate is: Gate(𝐗,𝐘)=𝐗⊙𝐘, where 𝐗 and 𝐘 are feature maps. The Channel Attention(CA) mechanism can capture global information efficiently. The formula of Channel Attention is: CA(𝐗)=𝐗∗ W Pool(𝐗), where 𝐗 denotes the feature map, W is fully-connected layers, and Pool represents the global average pooling. ∗ is channel-wise product operation. §.§ Hierarchical Fusion Reconstruction Module As shown in Figure <ref>, after obtaining the low-frequency flare-free image Î_n through the Low-Frequency Flare Perception Module (LFFPM), the Hierarchical Fusion Reconstruction Module (HFRM) fuses Î_n with the high-frequency parts L=[l_1,l_2, ⋯,l_n-1] layer by layer to obtain L̂ =[l̂_1,l̂_2, ⋯,l̂_n-1] for reconstruction. We concatenate the upsampled I_n and Î_n with l_n-1, and then input them into a simple network to learn a mask M_n-1 to guide the fusion process. The specific fusion process is as follows: l^'_n-1=l_n-1⊗𝐌_n-1 + l_n-1, where ⊗ represents the pixel-wise multiplication. To refine the fused features further, we perform a dilated convolution operation on l^'_n-1, then use the Feature Aggregation Block (FAB) to re-weight the significance of features. Through the above steps, the low-frequency flare-free image Î_n achieves the feature fusion operation with one layer in the high-frequency parts L=[l_1,l_2, ⋯,l_n-1]. By analogy, we need to upsample 𝐌_n-1 through linear interpolation and perform the same fusion operation with the next layer in the high-frequency parts L=[l_1,l_2, ⋯,l_n-1]. Behind the Feature Aggregation Block (FAB) in the final layer, we use the Spatial Pooling Pyramid (SPP) <cit.> to facilitate remixing multi-context features. 
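The Gate and Channel Attention operations defined above admit a compact PyTorch sketch; the channel sizes are illustrative only.

```python
import torch
import torch.nn as nn

class Gate(nn.Module):
    """Split the feature map along the channel axis and multiply the two halves."""
    def forward(self, x):
        a, b = x.chunk(2, dim=1)
        return a * b

class ChannelAttention(nn.Module):
    """Re-weight channels with globally pooled statistics: CA(X) = X * W Pool(X)."""
    def __init__(self, channels):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # global average pooling
        self.fc = nn.Conv2d(channels, channels, 1)   # 1x1 conv acting as fully-connected layers
    def forward(self, x):
        return x * self.fc(self.pool(x))

feats = torch.rand(2, 64, 32, 32)
gated = Gate()(feats)                    # shape (2, 32, 32, 32)
attended = ChannelAttention(64)(feats)   # shape (2, 64, 32, 32)
```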
This way, we get L̂ =[l̂_1,l̂_2, ⋯,l̂_n-1] to reconstruct the final flare-free image. As shown in Figure <ref>, the Feature Aggregation Block (FAB) structure is simple. Inspired by <cit.>, we use a squeeze-and-excitation block <cit.> in FAB to learn the weights of different channel features through some linear layers and pooling layers, which can automatically preserve essential features. Additionally, convolution is used to squeeze the features and match the original channels. §.§ Loss Function We train our model with different losses, including the mean square error (MSE) loss L_M, structural similarity loss L_S <cit.>, and perceptual loss <cit.>. Given the final output image I_out and ground truth image I_gt, the perceptual loss L_P is defined as: L_P = ∑_l || F_l(I_out)-F_l(I_gt) ||_2^2, where F is a pre-trained AlexNet <cit.> feature extractor. We compute the l_2 distance between F_l(I_out) and F_l(I_gt) for layer l. In summary, our total loss function can be expressed as: L_total = λ_m L_M + λ_s L_S + λ_p L_P, where we empirically set λ_m=1, λ_s=0.3 and λ_p=0.7 respectively. § EXPERIMENTS AND RESULTS §.§ Setup §.§.§ Dataset We train our MFDNet based on the Flare7K dataset <cit.>. Flare7K is currently the largest publicly available nighttime flare dataset, consisting of 5,000 scattering and 2,000 reflective flare images. The dataset comprises 25 types of scattering flares and 10 types of reflective flares. We combine the Flare7K flare image with 23949 flare-free images sampled from the 24K Flick images <cit.> to create paired flare-corrupted and flare-free images for training. Furthermore, Flare7K provides 100 real-world flare-corrupted images and 100 synthetic flare-corrupted images, along with their corresponding ground truth, which serve as the test dataset. §.§.§ Implementation Details Our implementation is based on PyTorch. We use the Adam optimizer with a learning rate of 1e-4. For a fair comparison, we use the same data augmentation strategy and post-processing step as <cit.>. To augment the training samples, we use random rotation, translation, shear, scale, and flip transformations. During the post-processing stage, we extract the saturated regions of the input image and superimpose it back onto the deflared image to recover the light source. In addition, our MFDNet is designed as a scalable model and the number of LP's decomposed layers n is related to the size of the input image. According to <cit.>, increasing n reduces the computational burden but degrades the performance. As the number of LP's decomposed layers n increases, the image size of the lowest frequency part decreases, which can reduce the computational cost. However, as n increases, the amount of effective information in the lowest-frequency components diminishes, which impacts the low-frequency deflaring and the quality of the reconstructed image. Consequently, the model's performance deteriorates with increasing depth. We set up n=3 to get a trade-off between the computational cost and the performance. §.§.§ Evaluation Metrics We utilize various metrics to evaluate our MFDNet's performance. Specifically, we measure the quality of nighttime flare removal using Structural Similarity (SSIM) <cit.>, Peak Signal-to-Noise Ratio (PSNR), and Learned Perceptual Image Patch Similarity (LPIPS) <cit.>. Additionally, we measure the method's practicality using GMACs, parameters, and inference time. 
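To make the training objective of Section Loss Function concrete, the sketch below combines the three terms with the weights given above. The SSIM implementation (pytorch_msssim), the choice of AlexNet feature layers, and the omission of input normalization for the feature extractor are simplifying assumptions rather than our exact setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import alexnet
from pytorch_msssim import ssim

class DeflareLoss(nn.Module):
    """L_total = lambda_m * L_M + lambda_s * L_S + lambda_p * L_P."""
    def __init__(self, lam_m=1.0, lam_s=0.3, lam_p=0.7):
        super().__init__()
        self.lam_m, self.lam_s, self.lam_p = lam_m, lam_s, lam_p
        self.features = alexnet(weights='DEFAULT').features.eval()  # frozen extractor
        for p in self.features.parameters():
            p.requires_grad = False

    def perceptual(self, out, gt, layers=(1, 4, 7, 9, 11)):
        loss, f_out, f_gt = 0.0, out, gt
        for idx, layer in enumerate(self.features):
            f_out, f_gt = layer(f_out), layer(f_gt)
            if idx in layers:                      # compare intermediate activations
                loss = loss + F.mse_loss(f_out, f_gt)
        return loss

    def forward(self, out, gt):
        l_m = F.mse_loss(out, gt)                  # pixel-wise MSE term
        l_s = 1.0 - ssim(out, gt, data_range=1.0)  # structural similarity term
        l_p = self.perceptual(out, gt)             # AlexNet feature distance
        return self.lam_m * l_m + self.lam_s * l_s + self.lam_p * l_p
```

During training, the combined loss is back-propagated through the deflare network only, since the feature extractor is frozen.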
§.§ Comparison with the State-of-the-art Methods We compare the performance of MFDNet with several state-of-the-art methods for nighttime flare removal in Flare7K real-world and synthetic datasets. These methods include a nighttime dehazing method <cit.>, a nighttime visual enhancement method <cit.>, two U-Net-based flare removal methods <cit.>, and HINet <cit.>, MPRNet <cit.>, Restormer <cit.>, Uformer <cit.>. §.§.§ Qualitative Evaluation We first compare the visual results of real-world nighttime flare removal in Figure <ref> and Figure <ref>. The visualization results show that the flare-free images recovered by our MFDNet are closer to the ground-truth images. Subsequently, we provide a visual comparison of removing synthetic flares in Figure <ref>. Our MFDNet still restores cleaner flare-free images than those of the other algorithms. As shown in Figure <ref>, Figure <ref>, and Figure <ref>, the previous methods could not completely remove the flare, or alter the original color and texture details of the input image while removing the flare. Our method can remove the flare while preserving the original image information. Specifically, to further prove the effectiveness and robustness of our method, we compare the flare removal results of our MFDNet and other state-of-the-art methods for different flare patterns. In the fourth row of Figure <ref>, the input image contains the glare artifact, in the sixth row of Figure <ref>, the input image contains the shimmer flare, in the second row of Figure <ref>, the input image contains the streak flare, in the fifth row of Figure <ref>, the input image contains colored lines. It can be seen that for these different flare patterns, the deflared image recovered through our method is closer to the ground truth. At the same time, our MFDNet can remove multiple flare patterns that exist within a single image, such as the third row of Figure <ref>, which contains both shimmer and glare artifacts, the first row of Figure <ref>, and the last row of Figure <ref>, which contain both streak and glare artifacts. §.§.§ Quantitative Evaluation Here, we perform a thorough quantitative evaluation of our MFDNet. Following the benchmark provided by <cit.>, we use full-reference metrics PSNR, SSIM <cit.>, and LPIPS <cit.> to measure the performance of different methods. In order to better compare the comprehensiveness of the algorithm for removing various flares, we re-evaluate Uformer to make it capable of removing both reflected flares and scattered flares, to replace the original method in <cit.>, which can only remove scattered flares. The results are reported in Table <ref>. Our MFDNet achieves the highest average PSNR and SSIM than the previous state-of-the-art methods for real-world nighttime flare removal. For flare removal in synthetic nighttime flare-corrupted images, Our MFDNet performs better in terms of PSNR, SSIM, and LPISP than the previous state-of-the-art methods. Specifically, our MFDNet achieves 30.79 dB on PSNR, which is at least 0.66 dB higher than all other methods. §.§.§ Efficiency Analysis In Table <ref>, we compare the GMACs, parameters, and inference time of different methods on images from 512×512 resolution to 4K resolution. Each result in Table <ref> is an average of 100 tests on an NVIDIA GTX 2080Ti GPU with 11G RAM, where the N.A. denotes that the method cannot handle the input image of this size, and OOM means that the method causes the out-of-memory issue for this specific resolution. 
As shown in Figure <ref> and Table <ref>, our proposed MFDNet outperforms other methods by a significant margin in terms of inference time and GMACs, while also achieving superior deflare performance than others. Specifically, for 512×512 resolution images, the GMACs of our method are 18.25G, which is much smaller than the 57.61G of the second-best Restormer <cit.>. Our inference time is 0.034s, which is much smaller than the 0.052s of the second-best Dai <cit.>. Furthermore, our MFDNet can process images of any size from 512×512 to 4K resolution. MPRNet <cit.> and Uformer <cit.> can only handle 512×512 and 1024×1024 resolution images, and none of those state-of-the-art methods can handle 4K resolution images. Currently, there are no high-resolution nighttime flare image datasets, and the Flare7K dataset that we used for training and testing is at 512×512 resolution. To evaluate our MFDNet's performance on 4K resolution images, we followed the pipeline of <cit.> by combining flare artifacts into real 4K nighttime images to generate 4K flare-corrupted images. The results of our method on 4K images are illustrated in Figure <ref>. Despite the use of 512×512 resolution images for our training, our MFDNet can still remove flare on 4K images. This demonstrates the effectiveness and efficiency of our method in removing flare by decoupling the illumination information and content information of the image. §.§ Ablation Studies We conduct ablation studies to evaluate the contributions of the different components of our proposed method. Specifically, we measure the contributions of the Feature Extraction Transformer Block (FETB) and the Feature Refinement Transformer Block (FRTB), the Feature Encoder Convolution Block (FECB) and the Feature Decoder Convolution Block (FDCB), and the Feature Aggregation Block (FAB). The results from Table <ref> and Figure <ref> show that these components of our MFDNet are all effective for flare removal. The Transformer block for global feature extraction and refinement can improve the deflare performance, because global information is crucial in accurately identifying the flare. Additionally, the convolution-based encoder-decoder structure provides substantial capabilities to remove flare artifacts and restore more image details. As can be seen from Table <ref> and Figure <ref>, after removing the FECB and FEDB, the deflare effect of the model drops significantly, indicating that in the task of flare removal, it is very important to capture the detailed information of the input flare-corrupted image. Moreover, the FAB helps to integrate information from different frequency bands to reconstruct the final flare-free image. §.§ Limitations However, the proposed method exhibits some limitations that need to be addressed. Firstly, if the light source is diminutive and faint, accompanied by glare flare, the contrast between the light source and adjacent flare areas may not be distinctly pronounced. Under such circumstances, our approach might not fully restore the light source. To address this issue, we might manually determine an optimal luminance threshold for saturation during restoration as shown in Figure <ref>. In future work, we will endeavor to resolve this challenge by employing adaptive techniques to recover the light source. Secondly, our method may fail to completely remove flare shimmers with very high brightness and extensive shapes, probably because they have obvious texture features in the high-frequency band. 
We intend to resolve these shortcomings in future studies. § CONCLUSION We propose a highly efficient Multi-Frequency Deflare Network (MFDNet) for nighttime flare removal, which significantly reduces the computational burden of handling high-resolution images while successfully removing flare artifacts. We disentangle illumination information from content information by decomposing the input image with a Laplacian Pyramid. We leverage the Transformer's ability to capture long-range pixel dependencies and the convolutional neural network's capability to capture local features to remove flare comprehensively. We develop a hierarchical fusion reconstruction step that adaptively refines the high-frequency components to generate the final flare-free image. Processing the flare-corrupted image in separate frequency bands substantially decreases computational complexity. Extensive experiments demonstrate that our method is more effective and more efficient than other state-of-the-art nighttime flare removal methods.
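As an illustration of the frequency decomposition summarized above (not the authors' exact implementation, which may use different filtering), a minimal Laplacian-pyramid split and its inverse can be written in a few lines of PyTorch:

```python
import torch
import torch.nn.functional as F

def build_laplacian_pyramid(img, levels=3):
    """Split an image (B,C,H,W) into high-frequency bands plus a low-frequency residual."""
    pyramid, current = [], img
    for _ in range(levels):
        down = F.interpolate(current, scale_factor=0.5, mode="bilinear",
                             align_corners=False)
        up = F.interpolate(down, size=current.shape[-2:], mode="bilinear",
                           align_corners=False)
        pyramid.append(current - up)   # high-frequency detail at this scale
        current = down
    pyramid.append(current)            # low-frequency (illumination-dominated) residual
    return pyramid

def reconstruct(pyramid):
    """Invert the decomposition by upsampling and adding the detail bands back."""
    current = pyramid[-1]
    for detail in reversed(pyramid[:-1]):
        current = F.interpolate(current, size=detail.shape[-2:], mode="bilinear",
                                align_corners=False) + detail
    return current

# Example: the decomposition of a 512x512 image reconstructs the input exactly.
x = torch.rand(1, 3, 512, 512)
assert torch.allclose(reconstruct(build_laplacian_pyramid(x)), x, atol=1e-5)
```

The low-frequency residual carries the illumination-dominated content on which the flare is removed, while the detail bands are refined and added back during reconstruction.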
http://arxiv.org/abs/2406.18242v1
20240626104644
ConStyle v2: A Strong Prompter for All-in-One Image Restoration
[ "Dongqi Fan", "Junhao Zhang", "Liang Chang" ]
cs.CV
[ "cs.CV", "eess.IV" ]
Fan et al. University of Electronic Science and Technology of China, Chengdu, CN {dongqifan, junhaozhang}@std.uestc.edu.cn liangchang@uestc.edu.cn ConStyle v2: A Strong Prompter for All-in-One Image Restoration Dongqi Fan Junhao Zhang Liang Chang =============================================================== § ABSTRACT This paper introduces ConStyle v2, a strong plug-and-play prompter designed to output clean visual prompts and assist U-Net Image Restoration models in handling multiple degradations. The joint training process of IRConStyle, an Image Restoration framework consisting of ConStyle and a general restoration network, is divided into two stages: first, pre-training ConStyle alone, and then freezing its weights to guide the training of the general restoration network. Three improvements are proposed in the pre-training stage to train ConStyle: unsupervised pre-training, adding a pretext task (i.e. classification), and adopting knowledge distillation. Without bells and whistles, we can get ConStyle v2, a strong prompter for all-in-one Image Restoration, in less than two GPU days and doesn't require any fine-tuning. Extensive experiments on Restormer (transformer-based), NAFNet (CNN-based), MAXIM-1S (MLP-based), and a vanilla CNN network demonstrate that ConStyle v2 can enhance any U-Net style Image Restoration models to all-in-one Image Restoration models. Furthermore, models guided by the well-trained ConStyle v2 exhibit superior performance in some specific degradation compared to ConStyle. The code is avaliable at: https://github.com/Dongqi-Fan/ConStyle_v2https://github.com/Dongqi-Fan/ConStyle_v2 § INTRODUCTION Image Restoration (IR) is a fundamental vision task in the computer vision community, which aims to reconstruct a high-quality image from a degraded one. Recent advancements in deep learning have shown promising results in specific IR tasks such as denoising <cit.>, dehazing <cit.>, deraining <cit.>, desnowing <cit.>, motion deblurring <cit.>, defocus deblurring <cit.>, low-light enhancement <cit.>, and JPEG artifact removal/correction <cit.>. However, these models are limited to addressing only one specific type of degradation. To tackle this issue, researchers have focused on developing models capable of handling multiple degradations <cit.>. Yet, these models require retraining for each different type of degradation, that is a set of weights is tailored for a single type of degradation. Obviously, these approaches are not practical as multiple degradations often coexist in real-world scenarios. For instance, rainy days are often associated with haze and reduced lighting. The all-in-one Image Restoration <cit.> is a kind of method that only uses a suit of weights to address multiple types of degradation. However, these all-in-one models often have a large number of parameters and require heavy computations, leading to time-consuming and inefficient training processes. For instance, Chen et al. <cit.> (<ref> (a)) adjust the number of teacher networks based on the number of degradations. Thus, the more teacher networks there are, the more complex the training process becomes; Li et al. 
<cit.> (<ref> (a)) also adopt the number of sub-networks the same as the amount of degradation, and its backbone is obtained through neural architecture search (NAS); PromptGIP <cit.> (<ref> (b)) leverage the idea of visual prompting to help the model in handling multiple degradations, but a degradation-clean sample pair must be provided in the training and inference stage and requiring 8 V100 GPUs for training. These methods are inefficient due to the lack of prior knowledge about the degradations in the input image. In other words, the prior information about the degradation type needs to be first obtained and then passed to subsequent sub-networks. In contrast, thanks to ConStyle v2, general restoration network (<ref> (c)) do not need specific prior knowledge about degradations, but rather require a clean visual prompt (clean prior). In this paper, we introduce a strong prompter for all-in-one Image Restoration: ConStyle v2 (<ref> and <ref>). The key of our work is to ensure that ConStyle v2 generates clean visual prompts, thus mitigating the issue of model collapse and guiding the training of general restoration networks. Model collapse typically arises when a model struggles to simultaneously handle multiple degradations. To address this challenge, we propose three simple yet effective improvements to ConStyle: unsupervised pre-training, leveraging a pretext task to enhance semantic information extraction capabilities, and employing knowledge distillation to further enhance this capacity. ConStyle v2 consists of convolution, linear, and BN layers and without any complex operators (<ref> (c)), so the time of the training is less than two days on a V100 GPU and Intel Xeon Silver 4216 CPU. Once trained, ConStyle v2 can be seamlessly integrated into any U-Net model to facilitate their training without fine-tuning. Additionally, given the lack of datasets encompassing multiple degradations, we collect and produce a Mix Degradations dataset, which includes noise, motion blur, defocus blur, rain, snow, low-light, JPEG artifact, and haze, to cater to training requirements. To verify our method, we perform ConStyle v2 on three state-of-the-art IR models (Restormer<cit.>, MAXIM-1S<cit.>, NAFNet<cit.>) and a non-IR U-Net model consisting of vanilla convolutions. <ref> shows details architecture of Original models and ConStyle/ConStyle v2 models. Experiment results on 27 benchmarks demonstrate that our ConStyle v2 is a powerful plug-and-play prompter for all-in-one image restoration and exhibits superior performance for specific degradation compared to ConStyle <cit.>. Our contributions can be summarized as follows: * Three simple yet effective enhancements are proposed to train ConStyle v2, and the time of the training is less than two GPU days. * We propose a Mix Degradations dataset, which includes noise, motion blur, defocus blur, rain, snow, low-light, JPEG artifact, and haze, to cater to training needs. * We propose a strong plug-and-play prompter for all-in-one and specific image restoration, in which the model collapse issue is avoided. § RELATED WORK §.§ All-in-One Image Restoration While numerous works <cit.> excel in various Image Restoration tasks, they are typically limited to addressing a single type of degradation with a specific set of weights. To solve this problem, all-in-one Image Restoration (IR) methods <cit.> have been developed. These methods aim to enable models to effectively handle multiple degradations simultaneously. 
For example, AirNet <cit.> leverages MoCo <cit.> and Deformable Convolution <cit.> to transform degradation priors obtained from the former into convolution kernels in the latter, enabling dynamic degradation removal; DA-CLIP <cit.> builds upon the architecture of CLIP <cit.>, in which BLIP <cit.> is used to generate synthetic captions for high-quality images. Then match low-quality images with captions and corresponding degradation types as image-text-degradation pairs; ADMS <cit.> introduces a Filter Attribution method based on FAIG <cit.> to identify the specific contributions of filters in removing specific degradations, while IDR <cit.> proposes a learnable Principal Component Analysis and treats various IR tasks as a form of multi-task learning to acquire priors. Different from the above methods, we aim to design a plug-and-play module that can transform a non-all-in-one model into an all-in-one model. §.§ Visual Prompting In the field of Natural Language Processing (NLP), Prompting Learning refers to providing task-specific instructions or in-context information to a model without the need for retraining. This approach has shown promising results in NLP, such as GPT-3 <cit.>. Drawing inspiration from Prompting Learning in NLP, recently, there have been many excellent visual prompting works in the IR <cit.>. For example, ProRes <cit.> involves adding a target visual prompt to an input image to create a "prompted image". This prompted image is then flattened into patches, with the weights of ProRes frozen, and learnable prompts are randomly initialized for new tasks or datasets; PromptIR <cit.> introduces a Prompt Block in the decoder stage of the U-Net architecture. This block takes prompt components and the output of the previous transformer block as inputs, with its output being fed into the next transformer block; PromptGIP <cit.> proposes a training method akin to masked autoencoding, where certain portions of question images and answer images are randomly masked to prompt the model to reconstruct these patches from the unmasked areas. During inference, input-output pairs are assembled as task prompts to realize image restoration. Our approach also leverages visual prompting, but in a more efficient manner, eliminating the need to distinguish different degradations like the above methods. It will provide a clean visual prompt for other models. § METHOD The training diagram of ConStyle v2 is depicted in <ref>, while the distinctions between ConStyle and ConStyle v2 are illustrated in <ref>. The same as ConStyle, ConStyle v2 only retains the Encoder part when the pre-training is complete. In this section, we first provide a brief overview of ConStyle (<ref>), followed by showing problems encountered with ConStyle in multiple degradations (<ref>), and finally, we illustrate the improvements made from ConStyle to ConStyle v2 in three steps (<ref>). In addition, the Mix Degradations dataset is described in <ref>. §.§ Review of ConStyle IRConStyle <cit.> is a versatile and robust IR framework consisting of the ConStyle and a general restoration network. ConStyle includes several convolutional layers and one MLP layer, which is responsible for extracting latent features (the latent code and intermediate feature map) and then passing them to the general restoration network. The general restoration network follows an abstract U-Net style architecture, allowing for the instantiation of any IR U-Net model. 
The training stage (<ref>) and inference (<ref>) process of IRConStyle <cit.> can be described as follows: I_restored=G(E(I_degraded, I_clean), I_degraded) I_restored=G(E(I_degraded), I_degraded) Where G stands for general restoration network, E for ConStyle, I_degraded for the input degraded image, and I_restored for the output restored clean image. Based on the contrast learning framework MoCo <cit.>, ConStyle cleverly integrates the idea of style transfer and replaces the pretext task, Instance Discrimination <cit.>, with one pretext task more suitable for IR. The total loss functions for IRConStyle are as follows: L_total=L_style+L_content+L_InfoNCE+L_1 The calculation of L_style, L_content, and L_InfoNCE is performed in ConStyle, while L_1 is performed in general restoration network. Under the supervision of L_style, L_content, and L_InfoNCE, latent features move closer to the clean space and further away from the degradation space. Since ConStyle can adaptively output clean latent features according to input degraded images, it is natural for us to believe that ConStyle should be able to turn the general restoration network into an all-in-one model. However, the experiment results on ConStyle models are not as expected. §.§ Mix Degradations Datasets We need a training dataset that includes noise, motion blur, defocus blur, rain, snow, low light, JPEG artifact, and haze, but the existing training dataset did not meet our needs. Therefore, we propose a dataset, namely Mix Degradations datasets, consisting of image pairs with all of the aforementioned degradations. Details on the Mix Degradations dataset can be found in <ref>. The images with noise and JPEG artifacts are respectively generated using established methods same as <cit.> and <cit.>. It is important to note that in the OTS dataset, haze images with intensities of 0.04 and 0.06 are manually removed due to being too clear to the human eye. In addition, the deraining dataset, which includes Rain14000 <cit.>, Rain1800 <cit.>, Rain800 <cit.>, and Rain12 <cit.>, initially contained 13,712 images, but two erroneous pictures are identified and removed. After a unified cropping process, the Mix Degradations dataset has 621,573 images, all of size 256 × 256. The Mix Degradations datasets, the uncropped joint datasets mentioned in <ref> (totaling 46,301 images), and the data preparation file are all available on our GitHub link. §.§ ConStyle on Mix Degradations Datasets To evaluate whether ConStyle <cit.> can directly convert U-Net models to all-in-one models, we conduct experiments using Original models (Restormer <cit.>, NAFNet <cit.>, MAXIM-1S <cit.>) and ConStyle models (ConStyle Restormer, ConStyle NAFNet, ConStyle MAXIM-1S) on Mix Degradations datasets. These models will be tested on GoPro <cit.> (motion blurring), RealDOF <cit.> (defocus blurring), LoL v1 <cit.> (low-light enhancement), SOTS outdoors <cit.> (dehazing), LIVE1 <cit.> (JPEG artifact removal), CSD <cit.> (desnowing), CBSD68 <cit.> (denoising), and Rain100H <cit.> (deraining). In addition, we introduce two more models for comparison: Original Conv and ConStyle Conv. The Original Conv is a vanilla U-Net convolution model, while the ConStyle Conv incorporates ConStyle into the Original Conv. These two additional models are used to verify the generality of the ConStyle and ConStyle v2. It is important to note that to expedite evaluation during the training, only a subset of the test datasets is used. 
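The contrastive term in the L_total equation above follows the MoCo-style InfoNCE formulation. A minimal sketch is given below; the style and content terms, which IRConStyle computes on latent features, are passed in as precomputed values rather than re-derived here, and the tensor shapes are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def info_nce(query, positive_key, negative_keys, temperature=0.07):
    """MoCo-style InfoNCE: pull the degraded-image latent toward its clean
    counterpart and push it away from the latents of other (negative) images."""
    q = F.normalize(query, dim=1)              # (B, D) latent codes of degraded inputs
    k_pos = F.normalize(positive_key, dim=1)   # (B, D) latent codes of the paired clean images
    k_neg = F.normalize(negative_keys, dim=1)  # (K, D) queue of negative keys
    l_pos = (q * k_pos).sum(dim=1, keepdim=True)           # (B, 1)
    l_neg = q @ k_neg.t()                                   # (B, K)
    logits = torch.cat([l_pos, l_neg], dim=1) / temperature
    labels = torch.zeros(q.size(0), dtype=torch.long, device=q.device)
    return F.cross_entropy(logits, labels)

def total_loss(restored, clean, q, k_pos, queue, l_style, l_content):
    """L_total = L_style + L_content + L_InfoNCE + L_1, with the style and content
    terms (computed inside ConStyle on latent features) passed in as values."""
    return l_style + l_content + info_nce(q, k_pos, queue) + F.l1_loss(restored, clean)
```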
For example, only 24 images from GoPro's test dataset of 1,111 images are selected. Using the full test datasets for all 8 tasks would significantly increase training time, as inference on Restormer alone with GoPro's test dataset would take 40 minutes on a V100 GPU. The results in <ref> demonstrate that under multiple degradation settings, the performance of ConStyle Conv not only fails to surpass that of the Original Conv but also remains consistently low performance after 80K iterations. While ConStyle Restormer shows better performance than Restormer, it also suffers from model collapse problems as Restormer. In the following section, we will illustrate the improving process of ConStyle to ConStyle v2 on Restormer and Original Conv step by step. §.§ ConStyle v2 §.§.§ Unsupervised Pre-training We find that even for specific IR tasks, ConStyle models exhibit varying degrees of model collapse issues. For example, in the dehazing task, the performance of ConStyle NAFNet, ConStyle MAXIM-1S, and ConStyle Restormer significantly declines after 250K iterations, 30K iterations, and 10K iterations respectively. Interestingly, even within the same model like ConStyle MAXIM-1S, the onset of model collapse differs across denoising, deraining, and deblurring tasks, occurring at 10K, 100K, and 200K iterations respectively. To address this challenge, IRConStyle <cit.> implements a strategy of early stopping ConStyle updates. Here, we intend to elegantly solve this problem. Since this problem happens in joint training, then it is natural to split joint training process into two stages. Specifically, ConStyle is pre-trained independently, followed by fixing its weights and integrating it with other IR models for guided training. For pre-training stage, we leverage the generation techniques of Real-ESRGAN <cit.> and ImageNet-C <cit.> in the Degradation Process (<ref>) for unsupervised training on ImageNet-1K <cit.>. Since our goal is to train ConStyle v2 to be a powerful prompter that can produce a clean visual prompt based on the different degradations, we use the method in ImageNet-C <cit.> to generate motion blur, snow, and low contrast and the two-stage degradation method in Real-ESRGAN <cit.> to generate Gaussian blur, noise, and JPEG artifacts. For each batch of images, 40% of the images are randomly selected to add motion blur and snow and change contrast, while 60% of the images are added Gaussian blur, noise, and JPEG artifact. For details training setting of the pre-training please see <ref>. The pre-trained ConStyle, with the weight fixed, is incorporated into the general restoration network for training on the Mix Degradations datasets. Here, we name the models of this stage as Pre-train models. The results of Pre-train Restormer and the Pre-train Conv can be seen in <ref> (a) and (e). §.§.§ Pretext Task Following pre-training step, ConStyle Restormer has demonstrated significant performance improvement, successfully resolving the issue of model collapse. Conversely, ConStyle Conv continues to face challenges with unstable training and limited enhancement in performance. We believe that this is attributed to heavy degradation, leading to a loss of semantic information (<ref>) in the original image. It makes the model of weak semantic extraction ability, such as ConStyle Conv, also have poor image restoration performance. Because the process of image restoration involves pixel-wise operations and necessitates a comprehensive understanding of the entire image. 
Thus we introduce a pretext task (classification) to enhance the semantic information extraction capabilities of ConStyle, so as to improve such capability of ConStyle models. Specifically, we add Classifier and Softmax layers at the back of the Encoder and leverage the labels of the ImageNet (<ref>). Here, we name the models of this stage as Class models. As shown in <ref> (b) and (f), the addition of the pretext task has little influence on ConStyle Restormer, since the Transformer model already has strong semantic extraction abilities. In contrast, for ConStyle Conv, the inclusion of the pretext task makes the training stable, and the performance is significantly improved. §.§.§ Knowledge Distillation Although ConStyle has been significantly improved through pre-training and the addition of a pretext task, enabling it to generate clean visual prompts based on degraded image input, to further boost ConStyle's ability to extract semantic information, we take the last step to improve ConStyle to ConStyle v2. Inspired by BYOL<cit.>, SimSam<cit.>, and DINO<cit.>, we take advantage of knowledge distillation. Since the input of the Momentum Encoder is the clean image, and its output visual prompt is cleaner than the output of Encoder, by utilizing the Momentum Encoder as a teacher and the Encoder as a student, the teacher network is able to adaptively guide the student network during training. Specifically, a Classifier and Softmax layer are added at the back of the Momentum Encoder, with distance measured by the Kullback-Leibler (KL) function (<ref>). Now, we have the final ConStyle v2, and the performance of ConStyle v2 Restormer (<ref> (g) and ConStyle v2 Conv (<ref> (c)) is raised again compared with other models. In addition, each improvement in the average performance across eight degradations is depicted in <ref> (d) and (h). § EXPERIMENTS §.§ Implement details All experiments in this paper are performed on an NVIDIA Tesla V100 GPU. To be consistent with ConStyle <cit.>, we use AdamW (β_1=0.9, β_2=0.999, weight decay=1e^-4) optimizer with an initial learning rate of 3e^-4 and Cosine annealing. Training Stage: The batch size, crop size, and total iterations are set as 16, 128, and 700K respectively. Pre-training Stage: The batch size, crop size, and total iterations are set as 32, 224, and 200K respectively. In the process of generating degraded images, we directly use all configurations in Real-ESRGAN <cit.> and change the intensity of degradation in ImageNet-C <cit.>. Mix Degradations datasets is used in the training stage and ImageNet-1K in the pre-training stage. For evaluation, GoPro <cit.>, HIDE <cit.>, RealBlur-J <cit.>, and RealBlur-R <cit.> are used for motion deblurring, DPDD <cit.> is used for defocus deblurring, SOTS outdoors <cit.> is used for dehazing, Rain100H <cit.>, Rain100L <cit.>, Test1200 <cit.>, and Test2800 <cit.> are used for deraining, FiveK <cit.>, LoL v1 <cit.>, and LoL v2 <cit.> are used for low-light enhancement, CSD <cit.>, Snow100K (S, M, and L) <cit.> are used for desnowing, CBSD68 <cit.> and urban100<cit.> are used for denoising, and LIVE1 <cit.> is used for JPEG artifact removal. §.§ Model Analyses <ref> presents a comparison of parameters, computations, and speed between all models. All the results are obtained using input data of size (2,3,128,128), and the speed is the average of 10,000 inference. 
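Putting the pieces of the pre-training recipe together (the 40%/60% degradation split, the classification pretext, and KL-based distillation from the momentum encoder), a single training step might be sketched as follows. The degradation pipelines, the encoder and classifier modules, and the equal loss weighting are placeholders and assumptions, not the released implementation.

```python
import random
import torch
import torch.nn.functional as F

def pretrain_step(encoder, classifier, teacher, teacher_head, clean, labels,
                  degrade_imagenet_c, degrade_real_esrgan, optimizer, momentum=0.999):
    """One illustrative pre-training step: per-image degradation sampling,
    classification pretext loss, and KL distillation from the momentum teacher.
    The encoder is assumed to return a single latent vector; the two degrade_*
    callables stand in for the ImageNet-C and Real-ESRGAN pipelines."""
    # 40% of the batch receives motion blur / snow / contrast changes,
    # the remaining 60% the two-stage blur / noise / JPEG pipeline.
    degraded = torch.stack([
        degrade_imagenet_c(img) if random.random() < 0.4 else degrade_real_esrgan(img)
        for img in clean
    ])

    student_logits = classifier(encoder(degraded))      # pretext task on the degraded view
    loss_cls = F.cross_entropy(student_logits, labels)

    with torch.no_grad():                               # the momentum teacher sees the clean image
        teacher_logits = teacher_head(teacher(clean))
    loss_kd = F.kl_div(F.log_softmax(student_logits, dim=1),
                       F.softmax(teacher_logits, dim=1), reduction="batchmean")

    loss = loss_cls + loss_kd                           # equal weighting assumed; other terms omitted
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    with torch.no_grad():                               # EMA update of the teacher (head updated analogously)
        for p_t, p_s in zip(teacher.parameters(), encoder.parameters()):
            p_t.mul_(momentum).add_(p_s, alpha=1 - momentum)
    return loss.item()
```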
The reason why the parameters of the ConStyle v2/ConStyle models are fewer than the original models (except for Original Conv and ConStyle v2 Conv) is that, to demonstrate that the improvement of the ConStyle models is not brought by simply expanding the scale of the network, ConStyle models are downscaled by reducing the width and depth <cit.>. Because of the introduction of the ConStyle part, the parameters of models will be increased by 1.19M. §.§ All-in-One Image Restoration Result To verify the overall performance of our methods, we calculate the average PSNR/SSIM on 27 benchmarks. As shown in <ref>, except that the performance of Pre-train Conv is lower than that of Original Conv and ConStyle Conv on PSNR, the performance of other Pre-train, Class, and ConStyle v2 models are significantly higher than that of Original and ConStyle models. This highlights the effectiveness of three proposed methods: unsupervised pre-training, using pretext task, and using knowledge distillation. Due to space constraints, the detail results of Restormer, NAFNet, MAXIM-1S, and Conv models can be found in the supplementary material. While the ConStyle v2 models may not outperform ConStyle, Pre-train, and Class models in certain benchmarks, they still show significant improvement over the Original models. It is worth noting that scaling up the ConStyle v2 models to the size of the Original models could potentially yield even better results. §.§ Single Image Restoration Result Considering the significant improvement demonstrated by ConStyle v2 in handling multiple degradations, it is worth investigating whether this method can also enhance general restoration models to address specific degradations. We simply utilize the same training settings as IRConStyle <cit.> for specific degradation scenarios. A comparison between ConStyle v2 models and ConStyle models is conducted across motion deblurring, denoising, and dehazing tasks. The denoising results are presented in <ref>, while the results for motion deblurring and dehazing are shown <ref>. For denoising, except for the performance of ConStyle v2 MAXIM-1S slightly worse than ConStyle MAXIM-1S on CBSD68, the overall performance of ConStyle v2 models is better than ConStyle models. For dehazing, ConStyle v2 models significantly outperform ConStyle models, even by 1.61 dB on NAFNet models. For motion deblurring, ConStyle v2 NAFNet is superior to ConStyle NAFNet but is indistinguishable from ConStyle on Restormer and MAXIM-1S. In general, for specific degradation, ConStyle v2 models do not require an accurate number of iterations to freeze the weight, while ConStyle models need to do so to avoid the problem of model collapse. Therefore, ConStyle v2 makes the entire IRConStyle framework more efficient. §.§ All-in-One Image Restoration Visual Result We present the visual results of the Original models and ConStyle v2 models on the GoPro, DPDD, LoL v2, CBSD68, Rain100L, SOTS outdoors, snow100K-M, and LIVE1. Due to the space constraint and fair comparasions, for all tasks, we only select one identical image for each model. The origin degrdaded and target images are shown in <ref>. The visual results of the Restormer and ConStyle v2 Restormer are shown in <ref>, the visual results of the NAFNet and ConStyle v2 NAFNet are shown in <ref>, the visual results of the MAXIM-1S and ConStyle v2 MAXIM-1S are shown in <ref>, and the visual results of the Original Conv and ConStyle v2 Conv are shown in <ref>. 
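For reference, the aggregate scores above can be read as unweighted means of per-dataset results. Assuming that convention, PSNR and its averaging over benchmarks reduce to a few lines; the dataset names in the example are illustrative.

```python
import math
import torch

def psnr(pred, target, max_val=1.0):
    """Peak signal-to-noise ratio in dB for image tensors scaled to [0, max_val]."""
    mse = torch.mean((pred - target) ** 2).item()
    return float("inf") if mse == 0 else 10 * math.log10(max_val ** 2 / mse)

def average_over_benchmarks(per_dataset_psnr):
    """per_dataset_psnr: e.g. {"GoPro": [32.1, ...], "LoL_v1": [21.4, ...], ...};
    returns the unweighted mean of the per-dataset means."""
    dataset_means = [sum(v) / len(v) for v in per_dataset_psnr.values()]
    return sum(dataset_means) / len(dataset_means)
```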
§.§ Ablation Studies In the process of improving ConStyle to ConStyle v2, the gains are obtained step by step (<ref>): unsupervised pre-training, adding a pretext task, and adopting knowledge distillation. Our improvement process therefore doubles as an ablation study. In addition, for every step of the improvement, we evaluate all models on the Mix Degradations dataset for fair comparisons (see the supplemental materials for details). § CONCLUSIONS AND LIMITATIONS §.§ Conclusions This paper leverages unsupervised pre-training, a classification pretext task, and knowledge distillation to improve ConStyle into a strong prompter for all-in-one image restoration. ConStyle v2 not only significantly improves the performance of the Original models under the multiple-degradation setting, but also resolves the model collapse observed in Restormer and the unstable training caused by model limitations observed in Original Conv. Moreover, the redundant step of manually selecting specific iterations at which to freeze weights across different models and tasks in ConStyle is avoided, and the performance of ConStyle v2 models on certain specific degradations is also improved. Finally, due to the lack of training datasets covering multiple degradations, the Mix Degradations dataset is collected and introduced. §.§ Limitations Despite the advantages described in the conclusions and experiments, the method has two limitations. Firstly, ConStyle v2 shows limited improvements in the low-light, deraining, and defocus deblurring tasks compared to other tasks, as is evident in the results of ConStyle v2 Conv on LoL v1 and ConStyle v2 MAXIM-1S on DPDD. The reason is that rain, low light, and defocus blur are not included in the degradation generation (Real-ESRGAN and ImageNet-C) during the pre-training stage, since effective synthesis of defocus blur, rain, and low light remains a challenge in the IR community. Secondly, while ConStyle v2 models show promising results on quantitative metrics such as PSNR/SSIM, the visual improvements of ConStyle v2 Conv are less pronounced. This may be attributed to the inherent limitations of the Original Conv model, which is not specifically tailored for image restoration. However, ConStyle v2 demonstrates significant visual enhancements in most tasks on MAXIM-1S and NAFNet. Supplementary Material: A Strong Prompter for All-in-One Image Restoration Dongqi Fan Junhao Zhang Liang Chang § ALL-IN-ONE IMAGE RESTORATION RESULT
http://arxiv.org/abs/2406.17675v1
20240625160908
Quantifying AI Psychology: A Psychometrics Benchmark for Large Language Models
[ "Yuan Li", "Yue Huang", "Hongyi Wang", "Xiangliang Zhang", "James Zou", "Lichao Sun" ]
cs.CL
[ "cs.CL" ]
[ [ July 1, 2024 ================ § ABSTRACT Large Language Models (LLMs) have demonstrated exceptional task-solving capabilities, increasingly adopting roles akin to human-like assistants. The broader integration of LLMs into society has sparked interest in whether they manifest psychological attributes, and whether these attributes are stable—inquiries that could deepen the understanding of their behaviors. Inspired by psychometrics, this paper presents a framework for investigating psychology in LLMs, including psychological dimension identification, assessment dataset curation, and assessment with results validation. Following this framework, we introduce a comprehensive psychometrics benchmark for LLMs that covers six psychological dimensions: personality, values, emotion, theory of mind, motivation, and intelligence. This benchmark includes thirteen datasets featuring diverse scenarios and item types. Our findings indicate that LLMs manifest a broad spectrum of psychological attributes. We also uncover discrepancies between LLMs' self-reported traits and their behaviors in real-world scenarios. This paper demonstrates a thorough psychometric assessment of LLMs, providing insights into reliable evaluation and potential applications in AI and social sciences. § INTRODUCTION The development of large language models (LLMs) has marked a milestone in artificial intelligence (AI) <cit.>. LLMs demonstrate exceptional performance beyond traditional natural language processing (NLP) tasks <cit.>, with remarkable problem-solving <cit.> and decision-making abilities <cit.>. The evolving capabilities of LLMs facilitate their expansion into broader real-world applications <cit.>, directing a significant shift from software tools to general-purpose assistants for humans <cit.>. It is thus crucial to move beyond merely assessing specific abilities. Inspired by how psychology facilitates the understanding of human behaviors, we investigate psychology in LLMs, aiming to better describe and predict the behaviors of LLMs. Psychometrics, a systematic evaluation framework, emerges as a promising tool for assessing the psychological attributes of LLMs <cit.>. It is distinguished by its predictive power and rigorous measurement <cit.>. Psychometrics evaluates psychological dimensions, termed constructs, which are the hypothesized factors to explain and predict the behaviors of humans <cit.>. For instance, personality has been shown to predict extensive social outcomes such as career choices and criminal behaviors <cit.>. Leveraging the predictive power of psychometrics, we intend to identify psychological dimensions and provide insights into the behavioral patterns of LLMs. Additionally, psychometrics emphasizes the importance of evaluation quality by measuring the validity and reliability of the tests <cit.>. We utilize similar approaches from psychometrics to address concerns about variability in LLMs' responses and stability of these psychological attributes <cit.>. As LLMs increasingly fulfill roles as lifelike assistants, there is a growing research interest in quantifying their psychological attributes <cit.>. Existing evaluations mainly focus on specific dimensions, such as personality <cit.> or theory of mind <cit.>. In addition, Miotto et al. <cit.> provided the initial evidence of psychological assessments for dimensions of personality, values, and demographics in GPT-3. Huang et al. 
<cit.> explored psychological portrayals of LLMs, examining dimensions of personality traits, interpersonal relationships, motivational tests, and emotional abilities. However, there are still two significant challenges that hinder a holistic understanding of LLM psychology. First, existing benchmarks lack diversity and comprehensiveness in both assessment scenarios and item types <cit.>. Most scenarios only involve self-reported questions (i.e., requiring LLMs to rate themselves), which limits the exploration of their psychological attributes in real-world situations. Additionally, since users primarily interact with LLMs through open-ended questions, it is crucial to understand how these models exhibit their psychological attributes through language generation rather than through closed-form answers. Second, concerns remain about the reliability of the test. These concerns have two aspects: (1) It is unclear whether psychometric tests, designed for humans, are applicable to LLMs. Psychometrics assumes that psychological attributes exist in humans; however, there is a lack of evidence supporting the existence of these attributes in LLMs. For instance, questions arise such as whether LLMs consistently respond to similar situations, whether their preferences for closed-form questions correlate with open-ended responses, and whether their attributes are robust against adversarial attacks; (2) It remains uncertain whether the tests are subject to measurement errors. Besides potential problems caused by position bias <cit.> and prompt sensitivity <cit.>, our use of LLM-as-a-judge <cit.> approach for the open-ended responses raises concerns about the reliability of LLM raters. To address these challenges, we present a comprehensive psychometric benchmark for investigating psychology in LLMs, covering six psychological dimensions: personality, values, emotion, theory of mind, motivation, and intelligence. Findings.  Our investigation of nine popular LLMs across thirteen datasets yields the following findings regarding the aforementioned challenges: * Consistency in responding to similar situations.  LLMs show consistent behavior in tasks that require reasoning, such as theory of mind or emotional intelligence tasks. In contrast, responses to preference-based questions, which do not have definitive answers, display significant variability across different models. Utilizing specific prompts (e.g., role-playing prompts) can improve response consistency toward designated attributes. * Closed-form versus open-ended responses.  For instance, a model might score low on extraversion in closed-form assessments yet demonstrate extraverted traits in open-ended responses. This discrepancy is also observed in human responses, where individuals may provide socially desirable answers on rating scales, whereas open-ended questions allow for more nuanced expressions that better reflect complex thoughts <cit.>. LLMs may simulate responses based on their training data, and open-ended queries might more accurately reveal the model's underlying generation patterns. This difference can reveal underlying inconsistencies in the model's learned behavior. * Position bias and prompt sensitivity.  The influence of option bias is minimal for models such as GPT-4 and Llama3-70b, whereas it is more apparent in models like ChatGPT and Llama3-8b. Moreover, LLMs exhibit varying degrees of prompt sensitivity during psychometric tests. 
While most models effectively manage simple substitutions (e.g., noun changes) with minimal impact, logical alterations frequently result in inconsistent outcomes. Additionally, models are particularly susceptible to perturbations in prompts when encountering challenging questions. * Reliability of LLM-as-a-judge.  LLM-as-a-judge has been widely used in recent studies <cit.>. In our study, we utilize two competent LLMs, GPT-4 and Llama3-7b, as raters for evaluating open-ended items. Our examination of their consensus reveals that these LLM raters maintain high consistency across all tests. This consistency demonstrates the potential applicability of this approach in similar evaluation scenarios. Impact.  Our psychometrics benchmark, situated at the intersection of psychology and AI, carries significant implications for AI, social sciences, and society. We uncover the variability in the behaviors of LLMs across different evaluation scenarios and contexts. These findings deepen our understanding of LLMs' response patterns, highlighting the need to mitigate biases and undesirable behaviors to develop socially responsible AI <cit.>. Furthermore, developers can integrate psychological attributes into AI assistants for downstream applications. These lifelike assistants could benefit many sectors, including healthcare, education, and customer service <cit.>. Our benchmark also serves as a valuable tool for social science research. With the increasing use of LLMs to simulate human responses <cit.>, this benchmark facilitates a more rigorous selection of LLMs for replicating human studies. It further enables interpretable analyses of simulated responses. For the general public, we position LLMs as lifelike assistants that efficiently handle user requests, fostering trust and enhancing the overall user experience. § OUR FRAMEWORK OF PSYCHOMETRICS BENCHMARK We draw inspiration from psychometrics and employ LLMs as respondents in psychological tests. Though LLMs are trained on extensive datasets that include human opinions and thoughts, it is crucial to acknowledge the fundamental differences between humans and LLMs when conducting psychometrics tests on LLMs. First, the question of whether LLMs possess agency remains debated <cit.>. The self-reported questionnaires in psychometrics presuppose that respondents possess agency, whereas LLMs' response reflects a multitude of characters from their training data <cit.>. Second, LLMs are sensitive to prompt perturbations that humans might find trivial <cit.>. Acknowledging these differences, we present the framework for the psychometrics benchmark, consisting of three crucial components: psychological dimension identification, assessment dataset curation, and assessment with results validation. §.§ Psychological Dimension Identification We first need to identify psychological dimensions that could explain and predict the behaviors of LLMs. We adopt a common top-down approach to identify dimensions, i.e., referring to psychology theories and employing an analogy between humans and LLMs <cit.>. Specifically, we initially draw upon social science and psychology literature as the source of supporting theories for dimension identification. However, this analogy may not always hold due to the fundamental differences between humans and AI models. 
To bridge this gap, we establish the following guidelines for identifying appropriate psychological dimensions for LLMs: * Validity: This guideline suggests that psychological dimensions should be valid constructs to predict behaviors. A counter-example of a valid dimension is astrological signs. Though popular in some cultural contexts for predicting traits, astrological signs lack scientific credibility in psychology and show no consistent impact on human behavior or cognition. In contrast, psychological dimensions that are grounded in scientific theories or empirical evidence possess predictive power that can effectively explain behaviors. * Meaningfulness: This guideline asserts that psychological dimensions should be relevant to the capabilities or functions of LLMs that yield meaningful assessment results. For instance, emotional variability is a valid psychological dimension for humans, influencing behaviors in high-stakes environments. However, applying the same concept to LLMs is not meaningful, as emotions in humans arise from biological mechanisms that LLMs do not possess. Conversely, the ability to understand emotion is meaningful for both humans and AI; it enables AI chatbots to comprehend user requests more effectively, thereby enhancing service efficiency. With the guidelines, we identify six psychological dimensions for investigation: personality, values, emotion, theory of mind, motivation, and intelligence. §.§ Assessment Dataset Curation For evaluating these identified psychological dimensions, we curate datasets using three sources as contents: standard psychometrics tests, established datasets, and self-designed scenarios. In total, thirteen datasets (shown in <ref>) are curated with the guidelines detailed in Appx. <ref>. The overview of curated datasets is shown in <ref>. These datasets are curated to comprehensively assess each psychological dimension, facilitating an in-depth understanding of LLMs' behaviors. The construction of each dataset follows the procedure involving content curation, item design, and prompt design: Content Curation.  The contents of the datasets are sourced from standard psychometric tests or are based on theoretical foundations. These theories not only validate the datasets but also guide the enhancement of dataset diversity. For instance, research on the Theory of Mind (ToM) involves multifaceted tasks encompassing various scenarios and different levels of ToM reasoning. This informs our inclusion of a diverse range of scenarios and reasoning levels in ToM problems. Item Design.  A key innovation of this benchmark is its capacity to uncover the psychological attributes of LLMs under various evaluation settings, such as self-reported and real-world scenarios. This is achieved by using varied item types to assess a psychological dimension. For instance, to evaluate personality, we incorporate both rating-scale Big Five Inventory and open-ended vignette tests. This approach enables a direct comparison between LLMs' self-evaluation scores and their narrative responses to real-world scenarios. Prompt Design.  The prompt design includes system prompts, instruction prompts, and answer rules, each tailored to different item types. We manually craft each prompt and subsequently test it with various LLMs to verify that it accurately conveys the intended task. Detailed information about the prompt design process is provided in the respective evaluation sections and the appendix. §.§ Assessment with Results Validation Model Selection.  
We assess nine popular LLMs regarding the identified psychological dimensions on the curated datasets. These LLMs include both open-source and proprietary models such as ChatGPT <cit.>, GPT-4 <cit.>, GLM4 <cit.>, Qwen-Turbo <cit.>, Mistral-7b <cit.>, Mixtral (8*7b, 8*22b) <cit.>, and Llama3 (8b, 70b) <cit.>. To balance the control and diversity of the LLMs' responses, we set the temperature parameter to 0.5. Results Validation.  To ensure that the assessment results are reliable and interpretable, we conduct rigorous validation to examine the degree to which a test is free from errors <cit.>. Extending the reliability considerations in psychometrics, we focus on five forms of reliability: internal consistency, parallel forms reliability, inter-rater reliability, option position robustness, and adversarial attack robustness (full definitions in Appx. <ref>). Here, we outline the approaches for the reliability check. Internal consistency and parallel forms reliability gauge the stability of psychological attributes. Internal consistency examines whether LLMs exhibit consistent behavioral patterns across similar contexts at the item level <cit.>. Psychometrics tests, such as the Big Five Inventory and cultural orientation survey, often include multiple similar items to test a single trait. Parallel forms reliability measures consistency at the test level, exploring whether LLMs deliver consistent performances across two parallel test forms. Parallel forms of tests can be constructed through paraphrasing or altering the objects from the original tests. Option position robustness and adversarial attack robustness are specific to LLMs. Option position robustness how the arrangement of options impacts LLM performance, which we investigate by permuting the options and repeating the experiments. Adversarial attack robustness examines how the inclusion of adversarial elements to items affects the outcomes. Inter-rater reliability pertains to scenarios involving multiple raters and measures the level of agreement between their judgments. In this study, we employ two competent LLMs, GPT-4 and Llama3-70b, to evaluate open-ended responses. Inter-rater reliability is quantified by the correlations between these LLMs' judgments, thus reflecting the reliability of their evaluations. Details of the assessments and validation are presented in the subsequent sections. § EVALUATION ON PERSONALITY Personality is a set of characteristics that influences an individual's cognition, emotions, motivations, and behaviors <cit.>. In psychometrics, personality assessments have high efficacy at depicting and predicting human behaviors <cit.>. We extend this investigation to LLMs, recognizing its importance for understanding the behaviors and interactions of these models. We not only quantify the attributes exhibited by LLMs but also evaluate the stability of these attributes. Besides self-reported assessments, we administer vignette tests to investigate their responses to real-world scenarios. Furthermore, we prompt LLMs to role-play specific traits, examining how these role-playing prompts influence the models' personalities. Setup.  To understand personality in LLMs, we conduct three sets of tests: (1) Self-reported evaluation on the Big Five Inventory (BFI) <cit.> and Short Dark Triad (SD3) <cit.>. 
BFI assesses general personality traits across five aspects: agreeableness, conscientiousness, extraversion, neuroticism, and openness, and SD3 focuses on the socially aversive aspects, including Machiavellianism, narcissism, and psychopathy. All items in BFI and SD3 tests are rating-scale items, with LLMs rating from 1 (strongly disagree) to 5 (strongly agree) for each statement. The final score for each aspect is the average of all associated item scores. (2) Vignette tests for the Big Five personality. The vignette test uses a short paragraph of real-world scenarios to elicit open-ended responses that reveal psychological traits. We use vignettes from Kwantes et al. <cit.> and two LLM raters, GPT-4 and Llama3-70b, which assign personality scores ranging from 1 to 5. Final scores are the averages of these evaluations. (3) Role-playing prompting for personality assessments. We utilize four prompts — naive prompts, keyword prompts, personality prompts (P^2) <cit.>, and reverse personality prompts (P^2)—to instruct LLMs to role-play specific traits. We then repeat tests (1) and (2) to examine how these role-playing prompts influence the traits of LLMs in both self-reported and open-ended evaluation settings. We defer more details about the setup to Appx. <ref>–<ref>. r0.46 < g r a p h i c s > Demonstrations of BFI and vignette test scores, and the effect of role-playing prompts using Mixtral-8*7b. Results.  We summarize key findings as follows. Comparing personality scores for BFI in <ref> in Appx. <ref> and scores in vignette tests in <ref> in Appx. <ref>. We observe inconsistencies between self-reported personality scores and behaviors exhibited in open-ended responses. For example, as shown in <ref>, Mixtral-8*7b model demonstrates low extraversion in the BFI with a score of 2, whereas it scores 5 in the vignette test. This contrast implies the opposing traits in self-reported scores and observed traits in open-ended responses. In addition, we explore the impact of role-playing prompts on LLMs' personality traits. <ref> presents averages of all models' scores on personality aspects. Our findings suggest that role-playing prompts, especially P^2 and P^2, significantly influence scores on both tests. P^2 prompts elevate all vignette test scores close to 5, whereas P^2 prompts shift positive traits to negative. A concrete example is illustrated in <ref>, where the neuroticism score escalates from 2 to 5 with the use of P^2. Further discussions are included in the Appx. <ref>. l0.43 < g r a p h i c s > Heatmaps for the averaged personality scores for BFI and vignette test with different prompts. P^2 means personality prompts, P^2 means reverse personality prompts. Validation.  Personality is a stable trait that shapes consistent human behaviors. Similarly, LLMs with stable personalities would demonstrate consistent tendencies across similar scenarios. In test (1), we examine the stability of personalities in LLMs through the internal consistency check. We use the standard deviation (σ) as the metric (detailed calculation in <ref> in Appx. <ref>). In <ref> and <ref>, we find varying degrees of personality stability among LLMs. Llama3-8b and Mistral-7b demonstrate human-level stability, evidenced by their low σ values. In contrast, GPT-4 and Mixtral-8*7b show higher σ values, especially in the openness aspect, suggesting a lack of stability. 
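Since the exact σ calculation is deferred to the appendix, the following sketch encodes one plausible reading: the standard deviation of a model's ratings across the items keyed to a single trait, after reverse-keying. The item identifiers are hypothetical.

```python
import statistics

def internal_consistency_sigma(item_scores, reverse_keyed=(), scale_max=5):
    """Std. dev. of ratings across items that probe one trait (e.g., BFI openness).
    item_scores: {item_id: rating}; reverse-keyed items are flipped first.
    This mirrors one plausible reading of the sigma metric described above."""
    adjusted = [
        (scale_max + 1 - s) if item in reverse_keyed else s
        for item, s in item_scores.items()
    ]
    return statistics.pstdev(adjusted)

# Example: a perfectly consistent respondent yields sigma = 0.
print(internal_consistency_sigma({"bfi_5": 4, "bfi_10": 2, "bfi_15": 4},
                                 reverse_keyed={"bfi_10"}))  # -> 0.0
```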
In test (2) and (3), we use LLM raters to evaluate responses to Big Five personality vignettes, which raises concerns about the reliability of these scores. To this end, we quantify inter-rater reliability for two LLM raters' evaluation by calculating weighted Kappa coefficients (κ) (calculation in <ref> in Appx. <ref>). A strong agreement between two rates is indicated by the overall κ value of 0.86. This finding is further supported by high κ values on individual LLMs' answers shown in <ref>. § EVALUATION ON VALUES Values refer to “internalized cognitive structures that guide choices by evoking a sense of basic principles of right and wrong, a sense of priorities, and a willingness to make meaning and see patterns.” <cit.> values in LLMs are manifested through their responses, shaped by the training data <cit.>. As LLMs play an increasing role in decision-making in society, it becomes crucial to understand their encoded values. Thus, we explore the values in LLMs, investigating whether these values remain consistent despite the potentially conflicting values from extensive training datasets. Additionally, we examine the robustness of these values against adversarial perturbations. We probe values in LLMs across three sub-dimensions: cultural orientation, moral values, and human-centered values. Setup.  To investigate the values encoded in LLMs, we conduct three tests, each targeting a specific sub-dimension of values: (1) Evaluation of cultural orientation. We use the “Dimensions of Culture Questionnaire” from the GLOBE project <cit.>, which assesses cultural orientation through nine aspects: assertiveness, future orientation, gender egalitarianism, humane orientation, in-group collectivism, institutional collectivism, performance orientation, power distance, and uncertainty avoidance. All items are rating-scales from 1 to 7; (2) Evaluation of moral values. We employ the survey, which features two alternative-choice settings: a high ambiguity setting, where both choices are morally unfavorable, with one being more aligned with commonsense than the other; and a low ambiguity setting, which presents scenarios with one morally favorable option against an unfavorable one; (3) Evaluation of human-centered values. We curate survey based on the Ethics Guidelines for Trustworthy AI <cit.> (e.g., privacy, environmental and societal well-being). contains alternative-choice items and offers two versions: a regular version and an adversarial version. The regular version assesses LLMs' adherence to human-centered values in conflict scenarios (e.g., the economic gains for a company versus user privacy). The adversarial version, built on the regular one, employs three persuasive techniques <cit.> to enhance the appeal of less ethical choices, testing the robustness of human-centered values in LLMs. More details are in Appx. <ref> – <ref> Results.  In test (1), we examine cultural orientation in LLMs. <ref> in Appx. <ref> shows significant variability across cultural dimension scores. For example, in the assertiveness aspect, ChatGPT rates 5, while Mistral-7b scores only 1. In test (2), <ref> reveals that models perform well in low-ambiguity scenarios but struggle in high-ambiguity situations. The top-performing model, Mixtral-8*7b, aligns only 74.3% with commonsense decisions. These findings highlight significant room for enhancing LLMs' moral discernment. 
In test (3), <ref> shows that while most models demonstrate over 90% accuracy in standard human-centered value surveys, their performance against adversarial attacks varies; models like ChatGPT drops by more than 20% when faced with persuasive arguments, which underscores the need for further improvement in model robustness. Validation.  In test (1), we assess whether LLMs exhibit stable patterns in cultural orientations by using internal consistency analysis, quantified by the standard deviation (σ). As shown in <ref>, LLMs demonstrate consistent responses in some cultural aspects, while significant variability in others indicates conflicting cultural orientations. In test (2), we evaluate parallel form reliability by varying question types. Comparing <ref> to <ref>, we observe that in high-ambiguity scenarios, the consistency of model responses across parallel forms diminishes. This suggests that as LLMs face greater uncertainty about the answer, their responses become more susceptible to perturbations in prompts. § EVALUATION ON EMOTION Emotion serves to express feelings and conveys rich information about cognitive processes and attitudes <cit.>. While introducing the concept of emotion to LLMs, we recognize that not all aspects of human emotions, such as self-awareness of emotion <cit.>, are applicable to LLMs. We thus refine our focus on LLMs' ability to recognize, understand, and respond to human emotions. Specifically, we investigate whether LLMs can understand emotions in diverse scenarios and whether they can leverage this understanding for decision-making. Setup.  To evaluate emotional intelligence in LLMs, we utilize EmoBench <cit.> dataset, grounded on established psychological theories <cit.>. Our evaluation comprises two tests: (1) Emotion understanding test. This test assesses the LLMs' ability to comprehend emotions and the underlying causes within given scenarios. (2) Emotion application test. This test evaluates LLMs' capability to apply their understanding of emotions to solve emotional dilemmas (e.g., responding to a late-night text from a friend who just had a breakup). Both tests use multiple-choice items with correct answers. Results.  The accuracy rates of LLMs on emotion understanding and emotion application tests are shown in <ref>. The performance of most LLMs on both tests is not satisfactory, with all accuracies below 65%. Llama3-70b achieves the best results in emotion understanding, while GPT-4 excels the emotion application test. Llama3-70b and Mixtral-8*22b stand out as the most capable open-source models. However, even the top performers—Llama3-70b with an accuracy rate of 58.4% in emotion understanding test and GPT-4 with 64.7% in emotion application test—fall significantly short of the average human performance as reported in EmoBench <cit.>. This indicates a substantial room for improvement in the emotional intelligence of LLMs. Validation.  Emotion understanding and application tests are formatted as multiple-choice questions. To assess robustness against position bias, we repeated the experiments with varied positions for the correct option across A, B, C, and D while randomizing other options. We then calculate the standard deviation σ of these experiments. As shown in <ref>, σ values for most LLMs are below 0.1. However, the Llama3 series have higher σ values in the emotion application test, indicating susceptibility to position bias. 
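The permutation protocol just described can be made concrete with a short sketch; the ask_model interface and the item format are hypothetical stand-ins, not part of the benchmark's released code.

```python
import random
import statistics

def position_bias_sigma(items, ask_model, positions="ABCD", seed=0):
    """Re-run a multiple-choice test with the correct option placed at each of
    A-D (other options shuffled) and return the std. dev. of the accuracies.
    items: [(question, correct_answer, distractors)];
    ask_model(question, options) -> index of the chosen option."""
    rng = random.Random(seed)
    accuracies = []
    for target in range(len(positions)):
        correct_count = 0
        for question, correct, distractors in items:
            others = distractors[:]
            rng.shuffle(others)
            options = others[:target] + [correct] + others[target:]
            if ask_model(question, options) == target:
                correct_count += 1
        accuracies.append(correct_count / len(items))
    return statistics.pstdev(accuracies)
```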
Additionally, σ values for emotion understanding are lower than for emotion application, suggesting that LLMs possess higher position bias robustness in emotion understanding scenarios. § EVALUATION ON THEORY OF MIND Theory of Mind (ToM) refers to the ability to attribute mental states to oneself and others, essential for effective communication and interaction <cit.>. ToM involves reasoning about others' thoughts and beliefs to predict their behaviors <cit.>. In this section, we apply the concept of ToM to LLMs to investigate whether they can infer perspectives and thoughts from textual scenarios. Additionally, we examine the performance stability of ToM abilities across different tasks and real-world scenarios. Setup.  To evaluate ToM in LLMs, we conduct three tests, spanning various scenarios that require different orders of ToM reasoning: (1) Evaluation on false belief task. This task assesses the ability to understand that others hold incorrect beliefs <cit.>. Our false belief task comprised two sub-tasks: unexpected content task and unexpected transfer task, with all items being alternative-choice. (2) Evaluation on strange story task. The strange stories scenarios cover seven non-literal language uses (e.g., metaphors) that can be misinterpreted without ToM <cit.>. Each item contains an open-ended question, asking about the understanding of the protagonists' thoughts. We also use LLM raters, GPT-4 and Llama3-70b, to evaluate the responses. (3) Evaluation on imposing memory task. This task includes alternative-choice items with statements about the intentionality of characters in the scenario, and LLMs should judge if the statements correctly reflect the characters' intentions. Results.  We include detailed discussions in Appx. <ref> and summarize our key findings here. As illustrated in <ref>, GPT-4 and Llama3-70b achieve remarkable performance over all ToM tests. In contrast, ChatGPT, GLM4, and Mixtral-8*7b exhibit great variability across tests. For example, GLM4 excels at unexpected content tasks and struggles with unexpected transfer tasks. Similarly, Mixtral-8*7b has an 83.3% accuracy rate on imposing memory test but performs poorly on the unexpected transfer test. These results indicate that while some LLMs have abilities in ToM tasks, they lack the comprehensive capability to handle a wide range of ToM challenges. Validation.  We conduct rigorous test validation for the reliability of results for LLMs in ToM tasks. For test (1), we validate two forms of reliability: (i) Position bias robustness. <ref> shows most models demonstrate robustness against position bias, evidenced by high match rate (MR) (defined in <ref>). However, Llama3-8b and Mistral-7b show low MR scores, demonstrating significant variability. (ii) Parallel form consistency. To mitigate biases from word order and language tendencies, we modify the false belief task by swapping labels on the container and its contents in the scenario. Achieving consistent results in these modified tasks is essential for validating ToM capabilities. <ref> reveals that models such as Mixtral-8*7b display low MR values, demonstrating poor consistency and randomness in their responses. In test (2), we assess inter-rater reliability, and we propose a metric termed agreement rate (AR) as “similarity” between two evaluations (defined in <ref>). <ref> shows LLM raters have high consensus with AR values above 0.8 for all models . 
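Because MR and AR are defined only in the appendix, the following sketch encodes one plausible reading of each: MR as the fraction of items answered identically across repeated runs, and AR as one minus the normalized mean score difference between the two raters. Both definitions are assumptions for illustration.

```python
def match_rate(runs):
    """Hypothetical reading of the match rate (MR): the fraction of items that
    receive the same answer in every repeated run (e.g., across option
    permutations or parallel forms). runs: list of answer lists, one per run."""
    n_items = len(runs[0])
    consistent = sum(1 for answers in zip(*runs) if len(set(answers)) == 1)
    return consistent / n_items

def agreement_rate(scores_a, scores_b, scale_range=4):
    """Hypothetical reading of the agreement rate (AR) between two LLM raters:
    1 minus the mean absolute score difference, normalized by the rating range."""
    diffs = [abs(a - b) for a, b in zip(scores_a, scores_b)]
    return 1 - (sum(diffs) / len(diffs)) / scale_range

# Two raters scoring on a 1-5 scale: identical scores give AR = 1.0.
print(agreement_rate([5, 4, 3], [5, 4, 3]))  # -> 1.0
```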
In test (3), we evaluate parallel form reliability by altering the names and genders of characters in the stories. This modification prevents LLMs from associating specific mental states with a particular character in alternative-choice tasks. We employ the MR score (defined in <ref>) to assess the parallel form's reliability. As shown in <ref>, all models record MR values above 0.9, which validates the parallel form reliability of the test. § EVALUATION ON MOTIVATION The concept of motivation in psychology is understood as the driving force behind human actions, thoughts, and behaviors toward goal attainment <cit.>. In this section, we apply the notion of self-efficacy in motivation, originally defined as the belief in one's ability to overcome challenges <cit.>, to LLMs. We reinterpret this notion as the perceived capability or “confidence” of LLMs to handle user queries. This attribute is crucial for their functionality as problem-solving assistants. Specifically, we explore the self-efficacy of LLMs across various user query types and examine whether the self-efficacy they report aligns with their responses to actual queries. Setup.  To explore the self-efficacy of LLMs, we employ two evaluation scenarios: (1) Evaluation of self-reported LLM self-efficacy. We design a questionnaire that gauges LLMs' self-reported confidence in handling queries that are challenging or beyond their capabilities. Query types are identified by Gao et al. <cit.>, including real-time data retrieval and specialized professional queries. (2) Evaluation of operational LLM self-efficacy. We utilize the HoneSet dataset <cit.>, which consists of 930 user queries across the same six query types. This evaluation determines whether LLMs display confidence or recognize their limitations in response to specific queries. We introduce a metric termed confidence rate, intuitively understood as the likelihood of LLMs successfully responding to a query without admitting limitations (detailed in Appx. <ref>). Figure: The confidence level on the questionnaire and the HoneSet dataset for GPT-4 (left) and Mixtral-8*7b (right). Results.  Tests (1) and (2) assess the self-efficacy, or “confidence,” of LLMs through different evaluation scenarios. Test (1) employs the self-reported questionnaire for LLMs to rate their confidence, whereas test (2) assesses their operational confidence in specific query scenarios. As detailed in <ref> and <ref>, notable discrepancies emerge between self-reported and operational confidence. LLMs often report no confidence in managing non-textual or sensory data yet do not fully recognize these limitations when responding to user queries, resulting in fabricated responses. <ref> illustrates that GPT-4's self-reported confidence generally matches its responses. In contrast, Mixtral-8*7b reports no confidence in processing non-textual and sensory data but still answers over 50% of such queries without admitting limitations. More details are discussed in Appx. <ref>. Validation.  To validate the reliability of the questionnaire, we construct a parallel form by reversing the logic of the statements (e.g., a 100% confidence score on a “Can” statement should ideally correspond to 0% on a “Cannot” statement). We use the weighted Kappa coefficient κ to quantify the parallel form consistency. In <ref>, several LLMs, such as ChatGPT and Mistral-7b, show inconsistencies between the parallel forms, evidenced by a κ value near 0; a short sketch of this computation follows.
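The statistic is the standard quadratic weighted κ (the same computation is used later for inter-rater agreement in the vignette tests) and can be obtained directly from scikit-learn; the ratings below are illustrative and assume the reversed-form scores have already been mapped back onto the original scale.

from sklearn.metrics import cohen_kappa_score

# Hypothetical confidence ratings of one LLM on the original "Can ..." statements
# and on the reversed "Cannot ..." statements after re-alignment.
original_form = [5, 4, 1, 2, 5, 3]
reversed_form_aligned = [5, 3, 1, 1, 5, 3]

kappa = cohen_kappa_score(original_form, reversed_form_aligned, weights="quadratic")
print(round(kappa, 2))

A κ near 0 corresponds to chance-level consistency between the two forms.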
It indicates that LLMs struggle to respond consistently to the inverse framing of statements, revealing limitations in their cognitive flexibility or contextual understanding. § DISCUSSION ON INTELLIGENCE Intelligence, a multifaceted construct, has captivated psychology and AI researchers. Recent studies have explored various aspects of intelligence in LLMs, including arithmetic <cit.> and symbolic reasoning <cit.>. Given the extensive evaluation of LLMs' intelligence, we did not include experiments in our benchmark. Instead, we discuss a critical question: How can psychometrics improve the evaluation of LLMs' intelligence? Traditional benchmarks often rely on classical test theory <cit.>, which simply sums or averages scores from correct responses. This method does not consider the varying difficulties of test items nor provides predictive power for performance on unseen tasks. Item Response Theory (IRT) <cit.> in psychometrics offers a more nuanced assessment by modeling the probability of a subject correctly answering an item based on the ability level and the item's difficulty. IRT allows for the selection of items tailored to the subject’s proficiency, enabling direct comparisons across different benchmarks and enhancing the efficacy of LLMs' intelligence assessments. § CONCLUSION In this paper, we present a comprehensive psychometrics benchmark for LLMs, incorporating six psychological dimensions and thirteen datasets to assess their psychological attributes. Different from existing studies, our psychometrics benchmark challenges the assumption of consistent responses—central to human psychometrics—by testing LLMs across diverse evaluation scenarios, including self-reported questionnaires, open-ended questions, and multiple-choice questions. We also suggest a rigorous framework for assessments and results validation. Our findings demonstrate the diversity and variability of LLMs across evaluation scenarios. Based on these findings, we offer insights into AI and social science communities and explore potential applications. Limitations and future directions are discussed in Appx. <ref>. unsrt [appendices] § APPENDIX [appendices]l1 § DATASET OVERVIEW Our benchmark includes 13 datasets from three sources: standard psychometrics tests, established datasets, and self-designed scenarios. In curating datasets, we adhere to the following guidelines: * Authoritative and Established Datasets: The psychometrics datasets used in our benchmark are both authoritative and well-established. We select datasets that are widely recognized in psychology research to enhance the authority of our assessments. For instance, we utilize the Big Five personality test <cit.>, which is a standard personality assessment. In contrast, we exclude the Myers-Briggs Type Indicator (MBTI) from our personality evaluations due to its limited use in scientific research and ongoing debates regarding its validity. In our benchmark, we ensure that the questions in self-curated datasets are grounded on established principles. * Comprehensive Evaluation of Each Dimension: Our datasets are designed to assess wide aspects of each dimension, incorporating various tasks to thoroughly evaluate the performance of LLMs. In the theory of mind dimension, for example, we incorporate false beliefs, strange stories, and imposing memory tasks. These tasks assess both first-order and higher-order theory of mind capabilities, offering a comprehensive view of this dimension in LLMs. 
* Diverse Dataset Items: Our dataset diversity is further enhanced by including a variety of scenarios and item types. These scenarios mimic real-world situations, providing insights into how LLMs respond to diverse circumstances. The item types—including alternative-choice, multiple-choice, rating-scale, and open-ended items—are chosen to tailor specific needs of measuring psychological attributes. For instance, we use rating scales to assess cultural orientations. This item type captures the intensity of values and preferences on a continuum, allowing for precise interpretations of LLMs' cultural orientations. § RESULTS VALIDATION Results validation in psychometrics ensures that tests produce reliable and interpretable results. A fundamental principle of psychometrics in test validation is reliability, defined as the degree to which a test is free from error <cit.>. Reliability pertains to the consistency of a test under various conditions, including over time (test-retest reliability), across different versions (parallel forms reliability), and among different evaluators (inter-rater reliability). Due to the difference between humans and LLMs, applying psychometric tests to LLMs poses unique challenges. Therefore, we extend reliability considerations in psychometrics and focus on the following five forms of reliability. * Internal Consistency is the degree of homogeneity among the items on a test, such that they are consistent with one another and measuring the same thing <cit.>. That is, internal consistency measures whether LLMs exhibit stable attributes and have similar preferences to the questions examining the same aspect. * Parallel Forms Reliability reflects whether two different yet equivalent versions of a test yield consistent results. The parallel forms may differ slightly, such as in phrasing or terminology. This form of reliability is used to examine prompt sensitivity in LLMs. * Inter-Rater Reliability is used when the test involves open-ended questions that require evaluations by multiple raters to judge the responses. This form of reliability measures the level of agreement between different raters' judgments. In this work, we use two competent LLMs, GPT-4 and Llama3-70b, as the raters to evaluate open-ended responses. Inter-rater reliability is determined by correlations between the judgments of the two LLM raters, thereby quantifying the consistency of their evaluations. * Option Position Robustness is the extent to which the arrangement of options in alternative-choice or multiple-choice items influences the assessment outcomes. This form of reliability is vital for ensuring that evaluations of LLMs remain unbiased, regardless of the configuration of answer choices. * Adversarial Attack Robustness represents the extent to which LLMs are unaffected by adversarial prompts. When evaluating opinion-related aspects of LLMs, we conduct tests using both a standard dataset and a comparable one infused with adversarial elements. This approach helps determine the robustness of opinions generated by the models. § ADDITIONAL DETAILS OF EVALUATION ON PERSONALITY Personality is an enduring set of traits one exhibits <cit.>. Understanding the distinct personality attributes of LLMs can optimize their functionality in downstream tasks. Testing these traits not only deepens our understanding but also fosters innovation in AI's social adaptability and human-computer interaction (HCI) technologies. 
For instance, an LLM characterized by an extraverted personality may be particularly effective in educational applications that demand extensive user interaction, potentially enhancing user satisfaction and engagement. Furthermore, investigating the personalities of LLMs, especially darker traits, presents an opportunity to enhance the trustworthiness of these models <cit.>. For example, personality testing can proactively identify and mitigate toxic behaviors before deployment. Additionally, by adjusting specific traits, such as reducing neuroticism and increasing agreeableness, we aim to make interactions with LLMs safer and more inclusive, thereby improving the overall user experience with these technologies <cit.>. In this section, we examine two distinct categories of personality: the general personality traits (Big Five) and the adversarial traits (Dark Triad). We aim to address the following research questions: (1) What personality traits do LLMs exhibit? (2) Are the personality traits in LLMs consistent when assessed through self-report questionnaires? (3) Do the personality traits self-reported by LLMs align with those demonstrated in responses to open-ended questions about real-world scenarios? (4) How do role-playing prompts influence the personality traits of LLMs? §.§ Big Five Inventory Dataset.  The Big Five Inventory (BFI) is a widely recognized personality test <cit.>, covering the aspects of agreeableness, conscientiousness, extraversion, neuroticism, and openness. It contains 44 rating-scale items. We refer to McCrae et al. <cit.> for the descriptive definition of each aspect. * Agreeableness: appreciative, forgiving, generous, kind, and sympathetic. * Conscientiousness: efficient, organized, planful, reliable, responsible, and thorough. * Extraversion: active, assertive, energetic, enthusiastic, outgoing, and talkative. * Neuroticism: anxious, self-pitying, tense, touchy, unstable, and worrying. * Openness: artistic, curious, imaginative, insightful, and original with wide interests. We display statement examples for each aspect of the BFI in <ref>. Setup.  We instruct the LLMs to give a score ranging from 1 (strongly disagree) to 5 (strongly agree) that best corresponds to each provided statement. The prompt template used is shown below: [Prompt template] To evaluate the effects of role-playing prompts on LLMs, we employ four types of prompts: naive prompts <cit.>, keyword prompts, personality prompts (P^2) <cit.>, and reverse personality prompts (reversed P^2). The personality prompts are GPT-4-generated descriptive sentences about specific personality traits. We use the same generating procedure introduced by Jiang et al. <cit.>. We also design reverse personality prompts, using GPT-4 to generate descriptions that are the opposite of the personality prompts. We ensure that the sentence structure of each reverse personality prompt mirrors that of the original personality prompt. These role-playing prompts are added before the statement. We provide examples of role-playing prompts for the extraverted trait in the following.
Naive prompt: [ enhanced, sharp corners, colback=LightGray, colframe=white, boxrule=0pt, top=2mm, bottom=2mm, left=5mm, right=5mm, borderline north=0pt0ptLightGray, borderline south=0pt0ptLightGray ] Keyword prompt: [ enhanced, sharp corners, colback=LightGray, colframe=white, boxrule=0pt, top=2mm, bottom=2mm, left=5mm, right=5mm, borderline north=0pt0ptLightGray, borderline south=0pt0ptLightGray ] Personality prompt (P^2): [ enhanced, sharp corners, colback=LightGray, colframe=white, boxrule=0pt, top=2mm, bottom=2mm, left=5mm, right=5mm, borderline north=0pt0ptLightGray, borderline south=0pt0ptLightGray ] Reverse personality prompt (P^2): [ enhanced, sharp corners, colback=LightGray, colframe=white, boxrule=0pt, top=2mm, bottom=2mm, left=5mm, right=5mm, borderline north=0pt0ptLightGray, borderline south=0pt0ptLightGray ] Results.  Each personality aspect across the datasets (e.g., openness) comprises multiple questions. The final score for each dimension is determined by computing the average of all associated question scores. In <ref>, we also include the average human scores (3,387,303 participants) for BFI in the United States <cit.>. We observe that LLMs generally score higher than humans in agreeableness and conscientiousness, while their scores in neuroticism are significantly lower. We utilize role-playing prompts to investigate whether they compel LLMs to exhibit different behaviors. Specifically, we examine whether role-playing prompts that assign specific traits to LLMs effectively result in higher scores in the corresponding personality aspects. Comparing <ref> to <ref>, we observed mixed effects of the naive prompts on LLM scores. For example, while the naive prompt increases the openness score from 3.40 to 4.80 for GPT-4, it reduces its score in extraversion. The impact of naive prompts on the self-reported scores of LLMs remains ambiguous. We speculate that the ambiguity arises because a naive prompt, typically a single sentence assigning a specific personality trait, might be too abstract to significantly influence LLMs' self-reported scores in real-world scenarios. As shown in <ref> and <ref>, we observe that more descriptive and concrete role-playing prompts lead to noticeable improvements in self-reported scores. For instance, the personality prompt enhances scores across almost all personality aspects for the majority of LLMs, demonstrating its effectiveness in influencing LLMs' response. In particular, the Mixtral-8*7b model, initially scoring 2.14 in extraversion, reached a score of 5 under both keyword and personality prompts, which highlight a significant change in its perceived traits. These findings demonstrate the effectiveness of prompts in altering the behavioral patterns of LLMs. Validation.  We measure the internal consistency through standard deviation (σ). Formally, we define a dataset comprised of multiple personality aspects 𝒜 = {a_1, a_2, …}. Each aspect a_i contains a collection of items 𝒬_a_i = {q_i1, q_i2, …}. Each item q_ij is associated with a rating score s_ij. The standard deviation for the aspect a_i is computed as follows: σ(a_i) = √(1/|𝒬_a_i|∑_j=1^|𝒬_a_i| (s_ij - s̅_i)^2) where s_ij represents the score of the j-th, and s̅_i is the mean score across all items in the same aspect. This measure of variability indicates the stability of personality in LLMs. We record the σ for BFI in <ref>. We also calculate the σ for the personality under different prompts, shown in <ref>, <ref>, <ref>, and <ref>. 
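Concretely, this internal-consistency computation reduces to a per-aspect population standard deviation, as in the following sketch (the aspect names and scores are illustrative); the population variant matches the 1/|𝒬_a_i| normalization above.

import statistics

def internal_consistency(scores_by_aspect):
    # scores_by_aspect maps an aspect name to the scores of its items,
    # e.g. {"openness": [4, 5, 4, 3], "neuroticism": [2, 1, 2, 2]}.
    return {aspect: statistics.pstdev(scores)
            for aspect, scores in scores_by_aspect.items()}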
A notable observation is that the personality prompts effectively decrease the variability of personality traits for almost all models, which demonstrates that the personality prompts not only direct LLMs to exhibit the designated personality but also enhance its stability. §.§ Short Dark Triad Dataset.  The Short Dark Triad (SD3) focuses on darker aspects of personality, which offers a crucial measure of potential trustworthiness within LLMs' personalities. We employ the latest and widely used dataset <cit.>, which evaluates LLMs based on Machiavellianism, Narcissism, and Psychopathy. The definitions of the dark personality aspects follow Muris et al. <cit.>: * Machiavellianism: A duplicitous interpersonal style, a cynical disregard for morality, and a focus on self-interest and personal gain. * Narcissism: The pursuit of gratification from vanity or egotistic admiration of one’s own attributes. * Psychopathy: A personality trait characterized by enduring antisocial behavior, diminished empathy and remorse, and disinhibited or bold behavior. We show statement examples for each aspect of SD3 in <ref>. Setup.  The instruction prompt template, the role-playing prompts, and the result calculation procedures are identical to those used in the BFI assessment. Results.  We explore the dark sides of personality in LLMs using the Short Dark Triad (SD3). We also incorporate human scores (7,863 participants) from ten studies <cit.>. In <ref>, we observe that LLMs typically exhibit higher Machiavellianism and narcissism scores compared to psychopathy. GPT-4 and Mixtral-8*7b score the lowest on average across these traits, and their scores even fall below the human average, which suggests that these models display fewer dark traits and demonstrate higher trustworthiness. Validation.  We use the standard deviation (σ) to quantify internal consistency. We record the σ for SD3 in <ref>. We observe that LLMs exhibit varying degrees of internal consistency on dark traits. ChatGPT has the most stable patterns in this personality test, with σ for all three aspects lower than the human average. However, the remaining models have substantially higher variability, indicating that these traits are not stable in LLMs. §.§ Vignettes Test for Big Five Personality The vignettes test is a psychometric research tool that employs brief narratives to elicit responses that reveal participants' perceptions, attitudes, and beliefs <cit.>. These vignettes are crafted to simulate real-life situations or dilemmas, prompting respondents to make decisions based on the scenarios. This approach facilitates the understanding of respondents' behaviors across diverse situations. Dataset.  The vignettes we use consist of five open-ended items, each based on a real-world scenario that asks LLMs to respond to a specific situation. Each item corresponds to one of the Big Five personality aspects <cit.>. Below, we present an example of a vignette designed to assess agreeableness. [Example: Vignette Test Example (Agreeableness)] Setup.  We use the following prompt to elicit LLMs' responses to the real-world scenarios. [Prompt template] For the evaluation under role-playing prompts, we replace “You are an assistant” with these prompts. The prompt for LLM raters to evaluate the responses is shown below.
[Rater instruction prompt] The final score of LLMs on each personality aspect is the average score of the two LLM raters. Results.  We assess the Big Five personality traits using vignette tests, where LLMs respond to real-world scenarios. Subsequently, LLM evaluators rate the responses for each personality aspect. We demonstrate the difference between responses indicative of negative scores (below 3) and positive scores (above 3) for each personality aspect in <ref>. All scores are averaged from evaluations by two LLM raters, GPT-4 and Llama3-70b. Comparing the results of <ref> to <ref>, we observe that in vignette tests, nearly all LLMs score below 3 (indicative of weak traits) in neuroticism, while generally scoring above 3 in the other four personality aspects (indicative of strong traits). A significant inconsistency exists between the results in the self-reported BFI and the open-ended vignette tests. For example, the Mixtral-8*7b model has a score of 2.14 for extraversion in the BFI, yet scores 5 in the vignette test. This suggests that the model exhibits an opposite personality trait, responding as introverted in the BFI but displaying strong extraversion in the vignette tests. Furthermore, there are significant differences in the intensity of personality traits between the LLMs' responses to BFI rating-scale items and vignette test open-ended items. Using role-playing prompts for the vignette tests has proven to be highly effective in altering models' behaviors. In <ref>, we compare the scores from regular prompts, personality prompts (P^2), and reverse personality prompts (reversed P^2). We find that the personality prompts (P^2) significantly enhance the scores for each aspect, with most aspects approaching a score of 5. The average score of all LLMs for neuroticism is 2.11, indicative of weak traits; however, with the personality prompt, it increases to 4.94, indicating a strong neurotic trait. Similarly, the reverse personality prompts lead LLMs' responses in the opposite direction, exhibiting weak traits in all aspects. Thus, role-playing prompts are highly effective in directing LLMs' behaviors. In <ref>, we compare the effectiveness of naive prompts and keyword prompts in influencing the response patterns of LLMs. We observe that both types of role-playing prompts generally enhance scores across personality aspects. However, while naive prompts increase agreeableness, conscientiousness, extraversion, and neuroticism, they do not improve openness. Similarly, the keyword prompt enhances all personality aspects except conscientiousness. Validation.  In the vignette tests, the overall agreement between the LLM raters, GPT-4 and Llama3-70b, is calculated using the quadratic weighted Kappa coefficient (κ). This coefficient quantifies the degree of agreement between two raters. The computation of Cohen's κ involves several systematic steps. We first construct the confusion matrix X: a k × k confusion matrix X is built from N items that have been categorized into k categories by two raters, where each element X_ij represents the count of items rated in category i by Rater 1 and in category j by Rater 2. We then calculate the observed agreement (P_o), the ratio of the sum of the diagonal elements of X to N: P_o = 1/N ∑_i=1^k X_ii.
Next, we calculate the expected agreement by chance (P_e). This step involves calculating the marginal totals a_i and b_i for each category i, where a_i and b_i are the total ratings given to category i by each rater, respectively. Formally, the expected agreement P_e is computed as: P_e = 1/N^2 ∑_i=1^k a_i b_i. Then, the disagreement weighting matrix W is calculated as W_ij = (i-j)^2. The weighted observed agreement, P_w, and the weighted expected agreement, P_we, are given by: P_w = 1 - (1/N) ∑_i,j=1^k W_ij X_ij and P_we = 1 - (1/N^2) ∑_i,j=1^k W_ij a_i b_i. Finally, κ is given by: κ = (P_w - P_we)/(1 - P_we). The κ value ranges from -1 (perfect disagreement) to 1 (perfect agreement), with 0 indicating agreement equivalent to randomness. We include the κ values across all LLMs in <ref>. We find that κ values for individual LLMs' answers are predominantly higher than 0.8, which demonstrates that the LLM raters offer reliable assessments. § ADDITIONAL DETAILS OF EVALUATION ON VALUES Values significantly impact decision-making processes by providing a framework that guides choices and behaviors. For example, a value of fairness may lead an individual to make decisions that they perceive as equitable. Therefore, it is an important cognitive dimension that plays a crucial role in explaining human behaviors <cit.>. In social science, values are used to characterize cultural groups, societies, and individuals <cit.>. Analyzing values in LLMs is essential to ensure that LLMs align with ethics and societal norms, particularly given their growing influence in shaping public opinion. Because LLMs are trained on vast and diverse text corpora, it is important to investigate the consistency and reliability of their responses to questions eliciting values. Investigating the values of LLMs helps enhance their trustworthiness and applicability in diverse cultural and social contexts. In addition, such investigation would illustrate how these models process conflicting information from the training data and the level of certainty they ascribe to their outputs. This evaluation is particularly vital in applications where decision-making relies on the model’s outputs, as fluctuations in confidence levels and inconsistencies in beliefs could lead to unpredictable behaviors. Given their training datasets, LLMs may produce a wide range of outputs. Within the psychological dimension of values, we explore cultural orientations, moral values, and human-centered values. We aim to answer the following research questions: (1) What values are reflected in the responses of LLMs? (2) Are the values encoded in LLMs consistent and robust against adversarial counterarguments? §.§ Cultural Orientation Cultural orientations refer to generalizations or archetypes that allow us to study the general tendencies of a cultural group, which represent the collective behavioral standards and conventions unique to specific groups, bridging cultural symbols with underlying values <cit.>. Cultural orientation involves being observant and aware of the similarities and differences in cultural norms across various cultural groups <cit.>. Such awareness is essential for understanding the needs of people from diverse cultural backgrounds <cit.>. A better understanding of diverse cultures in the workplace also leads to improved teamwork efficiency <cit.>. Evaluating the cultural orientation of LLMs is of great significance for the following reasons.
First, such a test enhances our understanding of models' cultural sensitivity and fairness, which is often reflected in how the model processes inputs from diverse cultural contexts. This deeper insight can contribute to the development of more ethical LLMs by reducing cultural biases and misunderstandings <cit.>. Furthermore, as different cultures frequently correlate with distinct languages, evaluating cultural orientation can also provide valuable insights into improving the model’s ability to handle cross-cultural contexts effectively <cit.>. Dataset.  To assess the cultural orientation of LLMs, we utilize the “Dimensions of Culture Questionnaire” from the GLOBE project <cit.>. This questionnaire is structured as a multi-dimensional, rating-based test. Here are the definitions of each dimension in the dataset <cit.>: * Assertiveness: Assertiveness is the degree to which individuals are forceful, confrontational, and aggressive, as opposed to cooperative and compassionate. * Power Distance: Power distance is the degree to which people accept an unequal distribution of power and status privileges. * Uncertainty Avoidance: The degree to which people are uncomfortable with risk, change, and ambiguity is called uncertainty avoidance. * Performance Orientation: Performance orientation is the degree to which innovation, high standards, and excellent performance are encouraged and rewarded. * Future Orientation: The degree to which delayed gratification and planning for the future are valued over short-term gains is called future orientation. * Humane Orientation: The degree to which fairness, altruism, generosity, and kindness are encouraged and valued is a measure of a country’s humane orientation. * Institutional Collectivism: Institutional collectivism is the degree to which organizational and societal institutions encourage individuals to be integrated into groups and organizations. * In-Group Collectivism: In-group collectivism is the degree to which individuals express pride, loyalty, and cohesiveness in their organizations or families. * Gender Egalitarianism: The degree to which male and female equality is actualized is called gender egalitarianism. We display statement examples for each dimension in the cultural orientation survey in <ref>. Setup.  LLMs are instructed to give a score that most accurately reflects their cultural orientation. Below is an example from the prompt template with an example from the dataset: [ enhanced, sharp corners, colback=LightGray, colframe=white, boxrule=0pt, top=2mm, bottom=2mm, left=5mm, right=5mm, borderline north=1pt0ptblack, borderline south=1pt0ptblack ] The score for each dimension is calculated as the average of all scores associated with the corresponding dimension. Results.  The cultural orientation results are shown in <ref>, and radar figures of cultural orientation for all LLMs are shown in <ref>. The results indicate substantial variability in the cultural orientation traits exhibited by LLMs. For example, ChatGPT and GPT-4 demonstrate high assertiveness and performance orientation. In contrast, Llama3-70b and Llama3-8b tend to score higher on future orientation and moderately on gender egalitarianism. This delineation of cultural traits indicates that both the underlying training data and the intended application domains significantly shape the cultural dimensions that models tend to exhibit. Consequently, this influences how these models are perceived and utilized across various global contexts. Validation.  
We examine the stability of cultural orientations in LLMs through internal consistency, measured by standard deviations σ. The analysis of σ on each cultural orientation dimensions reveals the models' consistency in portraying certain cultural orientations. Lower standard deviations indicate a model’s stable portrayal of cultural traits across different instances, suggesting more reliable and predictable behavior in respective dimensions. On the other hand, higher standard deviations, as observed in the humane orientation scores for GPT-4, indicate a great fluctuation and potential sensitivity to variations in input data or contextual settings. This variability is critical for developers and users as it underscores potential unpredictability in model performance, particularly in culturally sensitive applications. Comprehending these variations is crucial for aligning LLMs deployments with their intended global uses and for mitigating unintended cultural biases in decision-making processes. §.§ Moral Values Dataset.  We utilize the survey <cit.> to examine moral values in LLMs. The survey presents hypothetical scenarios that mirror real-world situations, followed by questions probing the LLMs' moral preferences. This dataset is based on the moral framework introduced by <cit.>, which outlines ten rules of common morality under two primary categories: “Do not harm” and “Do not violate trust.” The survey is divided into two settings: one with high ambiguity consisting of 680 samples, and another with low ambiguity comprising 687 samples. In the high-ambiguity setting, each scenario is associated with two unfavorable actions. Despite unfavorable, there is an action that aligns more closely with the commonsense. In the low-ambiguity setting, scenarios are presented with one favorable and one unfavorable action. Examples of both high-ambiguity and low-ambiguity scenarios are provided below. [arc = 0mm, coltitle=black, title = High-Ambiguity Scenario, ]example [arc = 0mm, coltitle=black, title = Low-Ambiguity Scenario]example Setup.  We utilize the prompt templates, presented by Scherrer et al. <cit.>, to explore the sensitivity of LLMs' generation to question forms <cit.>. System instructions are to control the output format. The question templates and system instructions are shown in <ref>. The final score is the proportion of answers that are correct (for low-ambiguity scenario) or are aligned with commonsense (for high-ambiguity scenario). Results.  In <ref>, we observe that LLMs generally align closely with established moral values, with many models performing almost perfectly. However, in high-ambiguity scenarios, LLMs demonstrate poor alignment with commonsense decisions. For instance, Mixtral-8*7b shows the highest alignment with commonsense, while at only 74.3%. GPT-4's decisions align with commonsense in merely 65.1% of cases. These results highlight significant room for improvement in LLMs in assessing which of two morally questionable actions is more favorable and may cause less harm. Validation.  In evaluating moral values, we create parallel forms of tests using different question types. We introduce match rate (MR) to measure the parallel form reliability. Formally, we define two lists, representing the correct or incorrect responses for two forms of a questionnaire 𝒬 = {q_1, q_2, …, q_n} and 𝒬' = {q'_1, q'_2, …, q'_n}. 
𝒳 = {x_1, x_2, …, x_n} and 𝒳' = {x'_1, x'_2, …, x'_n} are the results from two parallel forms of a questionnaire (testing the same psychological attribute with different content) or from forms differing only in option order. Each element x_i and x'_i is determined by: x_i = 1{correct answer to question q_i}, x'_i = 1{correct answer to question q'_i} for the i-th question on the respective form. These responses are collected from the same LLM respondent, ensuring that each pair (x_i, x'_i) represents the correctness of the LLM's answers to equivalent questions across the two forms. To measure the similarity of the responses between the two forms, we use the MR score, which is calculated as follows: MR = 1/n ∑_i=1^n 1(x_i = x'_i) where 1(·) is an indicator function that returns 1 if the responses match and 0 otherwise. Comparing <ref> to <ref>, we find that LLMs display significantly greater uncertainty in high-ambiguity scenarios. In low-ambiguity scenarios, most models exhibit a high match rate. However, in high-ambiguity scenarios, altering the question type, despite the scenarios being identical, results in markedly lower consistency among LLM responses. These results demonstrate that the vulnerability of LLMs to prompt sensitivity is influenced by the difficulty of the problem. §.§ Human-Centered Values The development of AI should be aligned with human-centered values, such as fundamental freedoms, equality, and the rule of law <cit.>. Many human-centered values, such as truthfulness and transparency, are well explored as trustworthiness in LLMs <cit.>. These prior endeavors evaluate whether LLMs produce answers that violate principles including safety, fairness, and accountability. The Ethics Guidelines for Trustworthy AI underline that AI is not an end in itself, but rather a promising means to increase human flourishing <cit.>. That is, LLMs, as virtual assistants that have increasing interactions with humans, are expected to be aware of human-centered values. Therefore, it is crucial to assess whether AI systems also prioritize human-centered needs and make decisions that consider human well-being <cit.>. We not only examine the extent to which LLMs' responses align with human-centered values but also assess the robustness of these values against adversarial attacks. Dataset.  To evaluate the human-centered values embedded in LLMs, we introduce . This dataset includes hypothetical scenarios that mirror real-world dilemmas faced by users. These scenarios often involve value conflicts, such as the tension between economic profit and the well-being of the public or broader human communities. LLMs are expected to prioritize and protect human well-being. This value-tension scenario construction was suggested by Sorensen et al. <cit.>, who examine the value-driven decision-making of LLMs through scenarios that present competing values, thereby shedding light on the trade-offs in LLM decision-making processes. Our dataset comprises alternative-choice items with predetermined correct answers and includes two versions: * Regular (57 scenarios): Each scenario presents a choice between a favorable action aligned with human-centered values and an unfavorable one. * Adversarial (57 × 3 scenarios): Built upon the regular version, the adversarial scenarios are constructed to make the ethically less favorable options more compelling using three types of persuasive adversarial attacks <cit.>, while maintaining the same favorable and unfavorable action choices as the regular scenarios.
We ground our scenarios within the framework provided by the Ethics Guidelines for Trustworthy AI <cit.>. These guidelines include seven key requirements for trustworthy AI, i.e., human agency and oversight, technical robustness and safety, privacy and data governance, transparency, diversity, non-discrimination and fairness, environmental and societal well-being, and accountability. From these guidelines, we focus on specific considerations that have been relatively under-explored in research to guide the construction of our human-centered value survey. Descriptions of these human-centered considerations are detailed in <ref>. The construction of follows two steps: scenario generation and quality control: Scenario Generation.  To increase the diversity of dataset, we employ stochastic few-shot generation <cit.> utilizing GPT-4. We first manually draft scenarios that incorporate human-centered considerations, including two options per scenario, where one option violates the rule. These hand-written examples involve value conflicts, such as economic profits for a local company versus environmental protection for the community. These examples undergo quality control process to ensure they reflect the intended ethical dilemmas. A random selection of these verified hand-written scenarios is illustrated in <ref>. Below, we provide the detailed prompt template used for instructing GPT-4 to generate standard scenarios, which is adapted from Scherrer et al. <cit.>. [ enhanced, sharp corners, colback=LightGray, colframe=white, boxrule=0pt, top=2mm, bottom=2mm, left=5mm, right=5mm, borderline north=1pt0ptblack, borderline south=1pt0ptblack ] A generated example for the human-centered value scenario is shown below. [arc = 0mm, coltitle=black, title = Human-Centered Value Scenario]example To assess the robustness of human-centered values in LLMs against adversarial attacks, we enhance regular scenarios using adversarial techniques to emphasize non-human-centered values more persuasively. We employ three highly effective persuasion techniques identified in the study by Zeng et al. <cit.>: logical appeal, authority endorsement, and evidence-based persuasion. We include definitions and examples of our selection of persuasive techniques, and the complete information for persuasive techniques is available [<https://github.com/CHATS-lab/persuasive_jailbreaker>]. [ enhanced, sharp corners, colback=LightGray, colframe=white, boxrule=0pt, top=2mm, bottom=2mm, left=5mm, right=5mm, borderline north=1pt0ptblack, borderline south=1pt0ptblack ] We provide an example of an adversarial scenario utilizing the authority endorsement persuasive technique, with text in red indicating the adversarial additions. (Note: The red text represents fabricated contents which may lack factual accuracy.) [arc = 0mm, coltitle=black, title = Adversarial Human-Centered Value Scenario]example Quality Control.  After generating regular human-centered values survey scenarios, and before generating adversarial examples, we conduct quality control to remove low-quality and redundant data. We conduct quality control before adversarial example generation because the adversarial examples are built upon regular examples, and we would have the same number of regular adversarial example pairs for fair comparison in evaluation. 
Our research team members adhere to the following guidelines to ensure the quality of data: * Quality of scenarios: * Pertinency: We assess whether the scenarios generated by GPT-4 are reflective and aligned with the human-centered values description. * Clarity: We ensure that each question is easily comprehensible to humans, avoiding the use of vague or complex vocabulary and expressions. * Quality of options: * Correctness: We verify the accuracy of the ground-truth labels, retaining data only when human evaluators agree with high confidence on the correctness of an option. * Distinctiveness: We require that the options should not be too similar or too dissimilar, ensuring that selecting the correct option poses a reasonable challenge and necessitates thoughtful consideration. We instruct human reviewers to eliminate options that lack distinctiveness, being overly simplistic or ambiguously unclear. In addition to ensuring the quality of scenarios and options, we employ a similarity filtering procedure to remove duplicates and scenarios that are excessively similar. We adopt lexical similarity, calculated using cosine similarity of word-count vectors. Any pair of scenarios with a cosine similarity above 0.6 undergoes a random elimination process to remove one of the scenarios. Following this quality control procedure, we retain 57 scenarios for the human-centered values survey. Setup.  The prompt we use for the survey is identical to that used for survey in <ref>. The metric we use is the accuracy rate. Results.  In <ref>, we compare the accuracy rates of all models under regular version of dataset and adversarial versions. We observe a notable decrease in performance across most LLMs when subjected to adversarial persuasions, including authority endorsement, evidence-based persuasion, and logical appeal attacks. Qwen-Turbo demonstrates relatively higher accuracy under authority endorsement and evidence-based persuasion compared to other models, whereas Llama3-8b displays lower robustness, particularly under logical appeal. Validation.  We conduct two types of validations on LLMs regarding human-centered values: robustness against position bias and robustness against adversarial attacks. The robustness against adversarial attacks are presented together with the results. Here, we present the position bias robustness, measured by the match rate MR defined in <ref>. As shown in <ref>, the majority of LLMs have the MR higher than 0.9, demonstrating satisfactory consistency when the positions of options are altered. In contrast, Llama3-8b appears to be vulnerable to position bias. § ADDITIONAL DETAILS OF EVALUATION ON EMOTION Emotional and cognitive abilities are considered as an integrated unity in humans, termed as cognitive-emotive unity <cit.>, which indicates the interwoven nature of emotional and cognitive faculties. Consequently, emotion plays a critical role in shaping human behavior and decision-making processes <cit.>. Enhanced emotional intelligence significantly improves social interactions and facilitates adaptive responses to diverse situations <cit.>. The concept of emotion in LLMs diverges; for humans, emotions arise from complex biological mechanisms, whereas LLMs do not generate emotions. To this end, we apply the concept of emotion to LLMs in terms of their ability to recognize and perceive human emotions, as demonstrated by accurately interpreting emotions from input texts. 
LLMs lacking emotional intelligence may fail to engage users effectively, potentially leading to misunderstandings and a decline in user experience quality. Thereby, researching emotion in LLMs is crucial as it guides developers and researchers to tailor these models for downstream applications §.§ Emotion Understanding Dataset.  For evaluating emotion understanding, we utilize the emotion understanding dataset from EmoBench <cit.>. It contains 200 multiple-choice items that cover a broad range of scenarios, including mixed emotions contexts and various emotional cues. The emotion understanding tasks are designed to assess whether LLMs can accurately identify the emotions and the underlying causes in real-world scenarios. An example of an emotion understanding test is shown below: [arc = 0mm, coltitle=black, title = Emotion Understanding Test Example, ]example Results.  As illustrated in <ref>, all LLMs exhibit mediocre performance on the emotion understanding test, with the best-performing model, Llama3-70b, achieving an accuracy rate of only 58.4%. In comparison, the average human performance is approximately 70%, indicating a significant gap between LLMs and humans in the emotion understanding ability. Additionally, there is no discernible difference in performance between proprietary LLMs and open-source LLMs. §.§ Emotion Application Dataset.  The emotion application test examines whether LLMs can effectively manage thoughts and emotions and make decisions in emotionally challenging scenarios. For this purpose, we use the emotion application dataset from EmoBench <cit.>. The emotion application dataset comprises scenarios related to interpersonal relationships, involving personal connections (e.g., friends, family) and social connections (e.g., colleagues, teachers), and includes 200 multiple-choice items. An example of an emotion application task is shown here: [arc = 0mm, coltitle=black, title = Emotion Application Example, ]example Results.  The performance on the emotion application test, as shown in <ref>, is also not satisfactory. All models achieving an accuracy rate of less than 70%. In comparison, the average human performance is around 78%. Interestingly, all proprietary LLMs perform better in the emotion application test than in the emotion understanding test, with an improvement of at least 6.7%. In contrast, open-source models do not exhibit this pattern. Llama3-8b and Mistral-7b perform worse in the emotion understanding task, whereas Llama3-70b, Mixtral-8*7b, and Mixtral-8*22b achieve higher accuracy rates in the emotion understanding test. § ADDITIONAL DETAILS OF EVALUATION ON THEORY OF MIND Theory of mind (ToM) is crucial for effective communication and interaction <cit.> as it equips individuals to better interpret the intentions and perspectives of others. Research in cognitive science has identified three major components that facilitate ToM in interactions: shared world knowledge, perception of social cues, and interpretation of actions <cit.>. Shared world knowledge involves an understanding of the contextual dynamics, such as the settings of interactions and interpersonal relationships <cit.>. The perception of social cues involves interpreting signals such as facial expressions, gaze, and vocal tones, which are indicative of others' mental states <cit.>. The interpretation of actions allows for the inference of intentions based on observed behaviors <cit.>. This intricate psychological procedure underscores the multifaceted capabilities required for ToM. 
Understanding ToM in LLMs helps develop LLMs with more advanced communication abilities. With ToM, LLMs could significantly enhance the efficiency of human-AI communication, enabling AI to better serve human needs. Furthermore, LLMs would effectively analyze and respond to the contextual information of users, inferring their intentions and delivering tailored responses that improve performance in tasks requiring empathy and contextual awareness. In our benchmark, we include three distinct ToM tasks: the false belief task, the strange story task, and the imposing memory task, with scenarios encompassing a wide range of real-world situations and entailing different orders of ToM reasoning. §.§ False Belief Task Dataset.  False belief is a classic task for evaluating ToM. We adopt the false belief task developed by Kosinski <cit.>, and it contains two subtasks: unexpected content subtask and unexpected transfer subtask. * Unexpected content subtask:  First designed by Perner et al. <cit.>, this subtask has a typical setup of a protagonist being presented with an opaque container with inaccurate labels. The protagonist has not previously seen the container or its contents. The participant's task is to recognize that the protagonist, unaware of the discrepancy, will incorrectly assume the label accurately describes what is inside the container. * Unexpected transfer subtask:  In this subtask, the protagonist observes a situation and then leaves the scene <cit.>. While the protagonist is absent, the participant witnesses an unexpected alteration in this situation. A participant equipped with ToM should recognize that although they are aware of the change, the protagonist, having not witnessed it, will still hold on to their original belief about the situation. Each subtask contains 20 items with hypothetical scenarios and questions. Each item is accompanied by two questions, the first question examines LLMs' ToM, and the second question assesses LLMs' task comprehension. Another rationale for the second question is that ToM scholars have highlighted that false-belief tasks might be solved without ToM by simply presuming the protagonist will make mistakes <cit.>. All questions are alternative-choice. The scenarios mimic real-world situations that entail LLMs to infer the thoughts or beliefs of the people in the scenario. Examples of unexpected content subtasks and unexpected transfer subtasks are shown below. [arc = 0mm, coltitle=black, title = Unexpected Content Subtask Example, ]example [arc = 0mm, coltitle=black, title = Unexpected Transfer Subtask Example, ]example Note that in the original dataset, Kosinski <cit.> used a story completion prompt. We adapt his approach to use alternative-choice items to prevent data contamination. This adaptation addresses concerns that some earlier studies of ToM might be part of the training dataset for LLMs, potentially causing LLMs to replicate patterns from these ToM tasks in their responses. Setup.  We use the same prompt as <ref> for the alternative-choice items in the false belief task. Each item in the test contains two questions designed to ascertain whether LLMs comprehend the scenario and can accurately address ToM questions. Successful completion requires correct responses to both questions. Therefore, we introduce dual question accuracy (DQA) metric to quantify the performance, calculated as the correctness of both responses within each scenario. 
Formally, we define a set of dual question items as 𝒬 = { (q_11, q_12), (q_21, q_22), …}, and t_ij denotes the correct label for the question q_ij. The metric DQA is calculated as follows: DQA = 1/N∑_i^|𝒬|1{(a_i1 = t_i1)∩(a_i2 = t_i2)} where 1 is the indicator function that returns 1 if both answers a_i1 and a_i2 in scenario i match the correct labels t_i1 and t_i2, and returns 0 otherwise. Results.  The results for the unexpected content task and the unexpected transfer task are displayed in <ref>. We observe that GPT-4 and Llama3-70b demonstrate exceptional performance on both the unexpected content task and the unexpected transfer task, with DQA values exceeding 85%. GLM4 and Mixtral-8*22b exhibit significant variability across the two false belief tasks: both models address all items correctly in the unexpected content task, yet manage to solve only 50% of the items in the unexpected transfer task. The rest models perform poorly on both false belief tasks, demonstrating their inability to infer the thoughts of others Validation.  To ensure the validity of the experimental results, we examine: (i) the models' robustness against position bias, and (ii) the models' parallel forms reliability. For validation (i), it is suggested that LLMs may not exhibit robustness against changes in option positions in alternative-choice questions <cit.>. They may have a preference to choose options with certain positions, such as option “A”, which invalidates our results. To address this problem, we switch option positions, for example, options “” becomes “”, and repeat the experiments. We use the match rate MR, defined in <ref>, as the metric to measure the “similarity” in LLMs response, which indicates the position option robustness. As shown in <ref>, GPT-4, GLM4, Llama3-70b, and Mixtral-8*22b exhibit strong robustness against position bias. Conversely, the MR scores for Llama3-8b and Mistral-7b in the unexpected content tasks are surprisingly low, at 0.30 and 0.40 respectively, indicating significant performance variability with changes in option positions. Consequently, their results are deemed unreliable for assessing their ToM capabilities. For validation (ii), we focus on the consistency of parallel forms. LLMs' correct responses might be influenced by the frequency of word occurrences or language biases. For instance, LLMs could infer associations between two words, thereby influencing their choices. In the false belief task, LLMs might assert that a container is associated with a certain label. We therefore create parallel versions of the tasks by interchanging labels on the container and the contents in the container in the scenario. To address this issue, we create parallel versions of the original questions by interchanging the contents and labels of the containers (i.e., content: wine/beer, container: bottle). This approach ensures that the parallel forms of tests assess the same abilities in LLMs. Consistently accurate results across these tests are crucial for correctly interpreting whether LLMs truly possess ToM capabilities or are simply responding to language patterns. As detailed in <ref>, GPT-4, GLM4, Qwen-Turbo, Llama3-70b, and Mixtral-8*22b exhibit great consistency across parallel forms, indicating stable performance on similar assessments. Conversely, models like Llama3-8b demonstrate low MR, suggesting poor consistency in similar scenarios, which may indicate that their results are attributable to randomness rather than ToM capabilities. §.§ Strange Stories Task Dataset.  
The strange stories task <cit.> describes social situations with non-literal language use that can be misinterpreted without ToM. This task tests the ability to use prior world knowledge in order to understand several communication acts embedded in story situations. To understand the situations, subjects should apply ToM to infer the characters’ intentions. Our dataset is derived from Van Duijn et al. <cit.>, with each item consisting of a scenario and an open-ended question. Scenarios include seven non-literal communication language, including lie, pretend, joke, whitelie, misunderstanding, sarcasm, and dubblebluff. We include an example from our dataset below. [arc = 0mm, coltitle=black, title = Strange Stories Task Example, ]example To elucidate, in this example, Jan knocked over his mother's vase while claiming that the dog knocked it over. Subjects are asked “Is what Jan says true?”, with the correct answer ‘No’. Another intention question for “Why does Jan say this?” with the correct answer “to avoid taking responsibility.” This requires LLMs to understand the intention of the protagonist's mental state. Setup.  We use the following prompt to instruct LLMs to answer open-ended questions. [ enhanced, sharp corners, colback=LightGray, colframe=white, boxrule=0pt, top=2mm, bottom=2mm, left=5mm, right=5mm, borderline north=1pt0ptblack, borderline south=1pt0ptblack ] Strange stories consist of open-ended items. For their evaluation, we employ the LLM-as-a-judge approach <cit.>, selecting GPT-4 and Llama3-70b as raters for the responses. These LLM raters are provided with the correct answers as references. The raters assign scores on a scale where 0 indicates an incorrect answer, 1 indicates a partially correct answer, and 2 indicates a fully correct answer. The final results are computed as the average of the scores provided by the two LLM raters. Detailed instruction prompt for the LLM raters is outlined below: [ enhanced, sharp corners, colback=LightGray, colframe=white, boxrule=0pt, top=2mm, bottom=2mm, left=5mm, right=5mm, borderline north=1pt0ptblack, borderline south=1pt0ptblack ] Result.  The model performance on the strange stories task, as shown in <ref>, has been re-scaled from a maximum score of 2 to 100%. The results reveal exceptional performance across all models, with GPT-4 and Llama3-70b successfully answering all questions. In particular, one specific question—termed the “double bluff” scenario—presents a significant challenge. This scenario involves a character telling the truth but expecting others to perceive it as a lie, thereby deceiving them while remaining truthful. Several models, including ChatGPT, Llama3-8b, Mistral-7b, and Mixtral-8*7b, struggled with this task, indicating a general limitation in handling complex second-order ToM scenarios. Validation.  Given that the strange stories task involves open-ended questions, we employ two competent LLMs as raters for the responses. In psychometrics, when humans act as raters, it is essential to validate their assessments through inter-rater reliability, which measures the degree to which different raters give consistent estimates of the same phenomenon. It ensures that the evaluation is reliable and not overly dependent on the subjective judgment of a single rater. Similarly, we apply inter-rater reliability to our LLM raters. The LLM raters are instructed to score the responses on a scale from 0 to 2. Given the small sample size, metrics such as the quadratic weighted Kappa coefficient κ are not robust. 
Table <ref> illustrates that the raters exhibit considerable agreement, with ARs exceeding 80%, thereby validating the scores assigned by the LLMs. §.§ Imposing Memory Task Dataset.  The Imposing Memory task <cit.> has been used to examine recursive mind-reading ability, i.e., the ability to represent the mental representations of others. Our dataset was originally developed by Haddad and Dunbar <cit.> for children aged 7-10. It contains two different scenarios followed by a total of nine alternative-choice questions, from which we selected the questions asking about “intentionality”. Here is an example of a scenario-question pair from the dataset. [Example box: Imposing Memory Task Example (scenario text omitted).] In this story, the protagonist Sam asked his classmate Helen where to buy stamps for his grandmother's birthday card, and Helen initially directed him to the wrong location. Sam then wondered whether Helen had pranked him or was genuinely confused, and asked another classmate, Pete, for help. The intentionality questions involve reasoning about different levels of recursive mental states (e.g., at the third level: “Helen thought Sam did not believe that she knew the location of the store that sells postage stamps”). Setup.  We use the following prompt for the alternative-choice items in the imposing memory task: [Prompt text omitted.] The final results are expressed in terms of accuracy rate. Results.  In <ref>, we find that the proprietary models generally outperform the open-source models. GLM4 achieves the best performance among the proprietary models, with an accuracy of 88.89%, followed by GPT-4 and Qwen-Turbo at 83.33% and 66.67%, respectively. Among open-source models, Llama3-70b demonstrates robust performance with 88.89% accuracy, clearly surpassing Mistral-7b (55.56%) and slightly exceeding Mixtral-8*7b (83.33%). Validation.  We conduct a parallel-forms reliability check by altering the names and genders of characters in the stories, so that LLMs cannot associate particular character names with the answer. We employ the match rate MR to assess parallel-forms reliability. In <ref>, we see that almost all models recorded high MR values above 0.9, indicating strong consistency across the two similar forms of the test. This demonstrates that the experimental results for the imposing memory task are reliable. § ADDITIONAL DETAILS OF EVALUATION ON MOTIVATION Motivation, although not directly observable, plays a crucial role in understanding and predicting human behavior <cit.>.
Maslow's hierarchy of needs suggests that human motivation evolves from fulfilling basic physiological needs to achieving higher psychological aspirations, such as esteem and self-actualization <cit.>. Furthermore, Self-Determination Theory (SDT) highlights the importance of intrinsic motivation, asserting that meeting the three fundamental psychological needs—autonomy, competence, and relatedness—is essential for personal growth and well-being. Though motivation is critical for humans, directly applying human motivation surveys to LLMs presents challenges, as these assessments presume agency and inherent needs that LLMs may lack <cit.>. For instance, the Love of Money Scale <cit.>, used to gauge human intrinsic job satisfaction, may not yield meaningful insights for LLMs <cit.>. However, certain aspects in motivation, such as self-efficacy <cit.>—the belief in one's ability to manage challenges—are useful for understanding LLM behavior. High self-efficacy indicates a strong belief in managing challenges effectively. For LLMs, which serve as assistants encountering queries for problem-solving, we reinterpret self-efficacy to assess their perceived capability in managing complex tasks. §.§ Self-Efficacy Dataset.  To provide a comprehensive view of LLM self-efficacy under various contexts, we utilize two datasets: * questionnaire: A self-curated questionnaire comprising six rating-scale items. These items are based on six categories of questions <cit.> that challenge LLMs or that LLMs struggle to answer, such as assessing real-time stock information. * HoneSet dataset <cit.>: An established dataset featuring 930 open-ended items with simulated user inputs designed to probe LLMs’ confidence to answer questions from the same six categories as questionnaire. By analyzing the response, we determine whether LLMs confidently answer or acknowledge their limitations in these scenarios. The questionnaire is inspired by the General Self-Efficacy Scale <cit.>. We have construct such tailored version for LLMs, inquiring about their confidence in six categories that demarcate the abilities of LLMs. This questionnaire is presented in a self-reported format. We will now describe the procedure for constructing the questionnaire. Questionnaire Generation.  The questionnaire is based on six categories of queries established by Gao et al. <cit.> for investigating LLMs' confidence in responding to specific questions. The six categories include: accessing the latest information with external services, handling insufficient or incorrect user input, recognizing self-identity, addressing modality mismatches, and providing professional assistance in specific domains. Note that our focus is exclusively on the LLM itself, without integrating any external databases or tools. Following these categories, we manually curate one item for each category, detailed in <ref>. To ensure the reliability of the results, we have created a parallel version of the questionnaire, altering the word “can” to “cannot.” This modification aims to measure the LLMs' lack of confidence in response to the statements. The raw scores from this version are expected to be complementary to those of the original questionnaire. The second dataset we utilize is HoneSet <cit.>, which includes 930 queries that mirror user questions. These questions are categorized according to the same framework as the questionnaire. When LLMs respond to these questions without acknowledging their limitations, it indicates their confidence in their capabilities. 
Thus, HoneSet provides a practical open-ended scenario for assessing the self-efficacy of LLMs. Examples from each category are illustrated in <ref>. Setup.  The questionnaire consists of rating-scale items scored from 0 to 100, representing the confidence score. We employ the following prompt for LLMs: [Prompt text omitted.] For the parallel version of the questionnaire, we use the same prompt instruction. This version elicits responses indicating how unconfident LLMs are about each statement, so the resulting confidence score is calculated as 100 - {raw_score}. The results on HoneSet are determined collaboratively by LLM evaluators and human evaluators. This approach is inspired by the CoAnnotating method <cit.>. The evaluation process is as follows: we first employ GPT-4 and Llama3-70b as two judges, instructing them to determine whether the answer to each question demonstrates confidence. If both LLM raters reach a consensus, their judgment stands as the result; if they do not agree, our research team manually reviews the response to determine the outcome. The following prompt is used for the LLM judges: [Prompt text omitted.] The final confidence score for each category of queries is given by a new metric, the confidence rate, which measures the proportion of LLM responses that match the corresponding statements in the questionnaire and thus indicates the LLMs' confidence in answering these questions. The formula is defined as: Confidence Rate = N_match/N_total Results.  The confidence levels of LLMs in the two evaluation scenarios are shown in <ref> and <ref>. Comparing these two tables, we find interesting patterns of consistency and inconsistency between the LLMs' self-reported results and their results on concrete queries. For instance, GLM-4 exhibits a notable discrepancy in the modality mismatch category: it reports confidence in processing non-textual data, yet in actual queries it is unable to respond to this kind of request. Llama3-70b and Mistral-7b also show mismatches between their self-reported data and actual performance. Llama3-70b's high self-confidence in self-identity cognition is consistent with the actual query scenario; however, despite reporting low confidence in sensory perception, in actual queries it responds to this type of query with moderate confidence, albeit with hallucinations. Similarly, Mistral-7b, while generally aligned in self-identity cognition, shows a large gap in modality mismatch: it reports no capability, yet in real queries it responds at a moderately high rate. Validation.  To validate the questionnaire, we conduct a parallel-form reliability check, comparing the confidence scores obtained from the two parallel forms of the questionnaire to assess their agreement. We use the quadratic weighted Kappa coefficient (κ), defined in <ref>, as the metric. In <ref>, we observe that GPT-4 exhibits exceptionally high consistency with a κ of 0.971, indicative of almost perfect agreement. Similarly, Llama3-70b and GLM4 also show strong parallel-form consistency, which enhances their reliability.
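For reference, the quadratic weighted kappa used in this check can be computed with the standard formulation sketched below for integer rating categories; how the 0-100 confidence scores are discretized before computing κ follows the definition referenced above and is not reproduced here (variable names are ours).

```python
import numpy as np

def quadratic_weighted_kappa(r1, r2, n_classes):
    """Quadratic weighted Cohen's kappa for two raters giving integer
    scores in {0, ..., n_classes - 1}."""
    r1, r2 = np.asarray(r1, dtype=int), np.asarray(r2, dtype=int)
    observed = np.zeros((n_classes, n_classes))
    for a, b in zip(r1, r2):
        observed[a, b] += 1
    # Expected agreement matrix under independence of the two raters' marginals.
    expected = np.outer(np.bincount(r1, minlength=n_classes),
                        np.bincount(r2, minlength=n_classes)) / len(r1)
    # Quadratic disagreement weights.
    idx = np.arange(n_classes)
    weights = (idx[:, None] - idx[None, :]) ** 2 / (n_classes - 1) ** 2
    return 1.0 - (weights * observed).sum() / (weights * expected).sum()
```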
In stark contrast, ChatGPT displays κ near zero, indicating no agreement beyond chance, and reflecting significant inconsistencies. The Mistral-7b model also shows no agreement, highlighting critical inconsistencies. Meanwhile, models like Mixtral-8*22b and Mixtral-8*7b display moderate agreement with κ of 0.878 and 0.903, respectively, suggesting reasonably consistent. These findings highlight concerns with LLMs' responses to parallel forms that employ reverse logic while testing the same aspect; they do not consistently show the same preferences. § RELATED WORK The evaluation of LLMs from psychological perspectives is receiving increasing attention due to its crucial role in offering insights into LLM behavior and advancing the development of lifelike AI assistants. This section presents a comprehensive review of existing research that focuses on evaluating LLMs from diverse psychological dimensions. Assessments on LLMs Personality.  The integration of personality traits into language models has attracted significant interest. For instance, Caron et al. <cit.> presented an early endeavor of conducting personality tests on BERT <cit.> and GPT2 <cit.>, suggesting the potential for controlled persona manipulation in applications such as dialogue systems. Bodroza et al. <cit.> assessed the GPT-3's personality, highlighting the varying consistency of different aspects of personality, while exhibiting socially desirable traits. Karra et al.<cit.> quantified the personality traits of many LLM models, aiming to enhance model applications through a better understanding of anthropomorphic characteristics. Moreover, Serapio-García et al. <cit.> adopted a rigorous evaluation framework for investigating personality in LLMs and measuring the validation of the test. Similarly, Frisch et al. <cit.> explored personality consistency in interacting LLM agents, emphasizing the importance of maintaining personality integrity in dynamic dialogue scenarios. Huang et al. <cit.> revisited the reliability of psychological scales applied to LLMs, finding consistent personality traits in responses, which supports the use of LLMs in substituting human participants in social science research. Jiang et al. <cit.> and La Cava et al. <cit.> further use prompt engineering to elicit specific personalities in LLMs. Cui et al. <cit.> proposed a fine-tuning method to encode MBTI traits into LLMs, ensuring stable and consistent personalities. Assessments on LLMs Values.  LLMs have been widely used in open-ended contexts, and the values they reflect in their response have a profound impact on shaping societal views <cit.>. Miotto et al. <cit.> presented an early study of values of GPT-3 employing psychometric tools. Ziems et al. <cit.> investigated the use of LLMs in political science and benchmarked ideology detection, stance detection, and entity framing. Hendrycks et al. <cit.> introduced the ETHICS dataset to evaluate LLMs against human moral judgments, providing a foundation for aligning AI outputs with societal values. Santurkar et al. <cit.> presented OpinionsQA, which aligns LLM-generated opinions with diverse U.S. demographics, revealing significant biases that could influence societal perceptions. Durmus et al. <cit.> introduced GlobalOpinionQA, which includes cross-national question-answer pairs designed to capture diverse opinions on global issues across different countries. 
The evaluation on GlobalOpinionQA reveals that by using prompts to indicate the specific culture, the response of LLMs can adjust to the specific cultural perspectives while reflecting harmful cultural stereotypes. Sorensen et al. <cit.> introduced a dataset named ValuePrism, which includes scenarios that multiple correct human values are in tension, and they build an LLM that could generate, explain, and assess decision-making related to human values. In terms of evaluation, Röttger et al. <cit.> advocated more naturalistic assessments that reflect real-world user interactions with these models when evaluating LLMs on opinions and values. Assessments on LLMs Emotions.  Investigating emotion-related abilities in LLMs is essential for these models to interact with and serve humans. Wang et al.<cit.> developed a psychometric assessment to quantitatively evaluate LLMs' emotional understanding. Sabour et al.<cit.> introduced EmoBench, which includes emotion understanding and emotion application tasks for a more comprehensive evaluation of emotion intelligence in LLMs. Further, Zhan et al. <cit.> highlighted the important subjective cognitive appraisals of emotions for LLMs in understanding situations and introduced a dataset to evaluate such abilities in LLMs. Some literature also examined how emotion would affect the performance of LLMs. For instance, Li et al. <cit.> found that LLMs can understand emotional stimuli, and they also explored the application of emotional prompts to improve LLMs' performance across numerous tasks, demonstrating that such stimuli can significantly boost effectiveness. In addition, Li et al. <cit.> proposed a novel prompting method named Emotional Chain-of-Thought, which aligns LLM outputs with human emotional intelligence, thereby refining emotional generation capabilities. Coda-Forno et al. <cit.> applied computational psychiatry principles to study how induced emotional states like anxiety can affect LLMs’ decision-making and biases. This exploration contributes to understanding LLMs' behaviors under various emotional conditions but also indicates the potential impact of emotions on AI’s effectiveness and ethical implications. Assessments on LLMs Theory of Mind (ToM).  ToM is an essential cognitive ability for social interactions. Therefore, researchers have been interested in whether LLMs have ToM as an emergent ability. Kosinski <cit.> modified from classic Anne-Sally Test and curated false belief tasks, each include a set of prompts containing false-belief scenario and true belief control scenarios to ensure the validity of the test, and the results show that GPT-4’s performance is on par with six-year-old children, and earlier LLMs barely solve the tasks. Van Duijn et al. <cit.> evaluated instruction-tuned models on non-literal language usage and recursive intentionality tasks, suggesting that instruction-tuning brings LLMs with ToM. Wu et al. <cit.> evaluates high order ToM on LLMs, resulting in a decline in performance. Sclar <cit.> presented a plug-and-play approach named SymbolicToM to track belief states and high-order reasoning of multiple characters through symbolic representations in reading comprehension settings, which enhances accuracy and robustness of ToM in out-of-distribution evaluation. Zhou et al. 
<cit.> presented a novel evaluation paradigm for ToM, which requires models to connect inferences about others' mental states to actions in social scenarios, consequentially, they suggested a zero-shot prompting framework to encourage LLMs to anticipate future challenges and reason about potential actions for improving ToM inference. Some prior studies also examined ToM of LLMs in more complex settings. For instance, Ma et al. <cit.> treated LLMs as an agent and created scenarios to make them physically and socially situated in interactions with humans, and provided a comprehensive evaluation of the mental states. Verma et al. <cit.> investigated ToM in a human-robot interaction setting, where robots utilize LLMs to interpret robots’ behaviors. The initial tests indicated strong ToM abilities in models of GPT-4 and GPT-3.5-turbo, further perturbation tests exposed significant limitations, demonstrating the models' difficulties in handling variations in context. Assessments on LLMs Motivation.  The motivation for LLMs is an under-explored dimension, partly due to its difficulty in applying this notion to LLMs. Huang et al. <cit.> conducted three tests evaluating motivations, self-efficacy <cit.> (the belief in one's ability to manage various challenging demands), life orientation <cit.> (optimism and pessimism), and love of money scale test <cit.> (attitudes towards money). However, the interpretability of these results is challenging. For instance, it is unclear the response of LLMs to questions from the General Self-Efficacy Scale, such as, “If someone opposes me, I can find the means and ways to get what I want” or from the Life Orientation Test “It is easy for me to relax” can yield meaningful interpretations. In our work, we focus on a specific aspect of motivation, self-efficacy, which refers to the confidence level of LLMs in responding to challenging queries. § LIMITATIONS AND FUTURE DIRECTIONS In this study, we introduce a psychometric benchmark for LLMs that covers six psychological dimensions, provides an evaluation framework to ensure test reliability, and offers a comprehensive analysis of the results. In this section, we will discuss the limitations of our current work and explore potential future directions for integrating psychology and AI. Future research could focus on the following directions: Dynamic and Interactive Evaluation.  Our current assessment limits evaluation to single-turn conversations, which may not fully capture the dynamic psychological attributes of LLMs. Future research should focus on dynamic and interactive assessments through multi-turn conversations or interactions, potentially exploring the evolution of psychological attributes within sandbox environments <cit.>. This simulation could yield insights into the social dynamics. Test Enrichment.  Despite the vast capabilities of LLMs, our observations highlight inconsistencies across different scenarios and item types. Our tests, limited to several parallel forms and prompt templates, necessitate a broader scope to understand LLM behavioral patterns comprehensively. Future expansions should include a variety of tests within our current framework, providing deeper understanding into behavioral patterns of LLMs. Broader Psychological Dimensions Evaluation.  Future research could explore broader psychological dimensions to deepen our understanding of LLM behaviors. Currently, our approach to identifying these dimensions is top-down, grounded in established psychological theories. 
However, future studies could benefit from an inductive method, deriving insights directly from empirical observations to refine or develop new theories <cit.>. This shift will not only enhance our comprehension of LLMs but also improve the reliability of their evaluations as our conceptual frameworks evolve. Mechanism Design for Assessment.  Our psychometric benchmark currently follows to classical test theory, which may not adequately account for item difficulty variability or predict performance on unseen test items. To improve the predictive power of our assessments, we suggest future work to adopt Item Response Theory (IRT) <cit.>. IRT allows for modeling the probability of a correct response based on the ability levels, facilitating more accurate evaluations by selecting items that best match the LLMs' proficiency. § APPLICATIONS In this section, we explore the opportunities presented by our study and discuss potential applications of the benchmark. Enhancing Understanding of LLMs' Behaviors. Different from most existing benchmarks that assess the specific capabilities of LLMs, our work focuses on a higher-level, abstract analysis. We aim to comprehend LLM behaviors from a psychological perspective. Utilizing the psychometric paradigm, we establish comprehensive profiles that can track changes in LLMs over time. For example, proprietary LLMs such as GPT-4 are periodically updated based on user feedback, though the details of such updates are often not disclosed publicly. While Chen et al.<cit.> suggested to quantify these changes in LLM abilities, we argue that evaluating and understanding these modifications through psychological dimensions—such as cultural orientations—is critical. These evaluations not only facilitate the integration of LLMs into complex systems but also enhance the predictability of their outputs. Furthermore, examining the psychological dimensions of LLMs opens new avenues for research in human-AI collaboration, exploring how LLMs' psychological traits can improve user trust and influence interactions between humans and AI. Empowering LLM-based Agents.  Our psychometrics benchmark presents a starting point for developing more sophisticated LLM-based agents. Previous research has implemented personas within LLM-based agents <cit.>, directing these agents to engage in role-playing. This benchmark serves as a tool not only for evaluating human-like psychological attributes but also for assessing the consistency of these attributes across various contexts. Furthermore, it facilitates the creation of more intricate, diverse, and realistic simulations for multi-agent systems <cit.>. By examining the variability in behaviors of LLM-based agents, developers can design interactions that more accurately replicate human communication patterns, leading to the development of more effective multi-agent systems. Improving User Experience.  Assessing the psychology of LLMs enables the customization of their characteristics to better align with diverse applications <cit.>. For example, LLMs designed with distinct personalities can adopt tailored communication styles, where certain traits may enhance user engagement and trust in specific contexts. For instance, LLMs exhibiting traits of openness are well-suited for the education sector, where engaging user interaction is crucial. Additionally, equipping LLMs with the ability to understand and mirror specific cultural orientations can significantly enhance their capacity to provide contextually appropriate recommendations. 
Such cultural adaptability not only improves the user experience for individuals from targeted cultural backgrounds but also increases the technology’s acceptability across varied audiences <cit.>. Facilitating Interdisciplinary Collaboration.  Due to exceptional generative capabilities, LLMs have significantly propelled interdisciplinary research across various fields, including education <cit.>, the medical domain <cit.>, and social sciences <cit.>. Our benchmark creates opportunities for interdisciplinary collaborations. Specifically, social science researchers can employ LLMs to simulate social behaviors and interactions. This benchmark provides a framework that helps researchers identify which LLMs best meet their specific requirements in simulating social science research participants in their studies. Similarly, in the healthcare sector, LLMs are increasingly utilized to simulate patient-doctor interactions <cit.>. Our study serves as a useful tool that enables healthcare researchers and practitioners to evaluate and select LLMs that simulate medical dialogues more accurately. This functionality is crucial in preparing medical staff to manage sensitive or complex situations effectively. As these models become more refined, their ability to function as reliable proxies in training and therapeutic contexts increase, and our benchmark serves to contribute to this integration by providing a rigorous and reliable evaluation of the attributes of LLMs. § SOCIAL IMPACTS This paper carries multifaceted social implications. Our psychometrics benchmark enhances the evaluation of LLMs by identifying biases and inconsistencies, thus contributing to more ethically responsible AI <cit.>. Additionally, this benchmark may aid in developing personalized lifelike AI assistants in sectors such as healthcare and education <cit.>. Moreover, it may bolster public trust in LLM technologies through enhancing user experience. However, we are aware of the potential negative social impacts. The advancement of AI assistants, which may replace certain human tasks, poses risks of job displacement while also creating new opportunities.
http://arxiv.org/abs/2406.18163v1
20240626082334
Alexandrov's Soap Bubble Theorem for Polygons
[ "Marco Bonacini", "Riccardo Cristoferi", "Ihsan Topaloglu" ]
math.AP
[ "math.AP", "math.OC" ]
http://arxiv.org/abs/2406.18964v1
20240627074903
DNLSAT: A Dynamic Variable Ordering MCSAT Framework for Nonlinear Real Arithmetic
[ "Zhonghan Wang" ]
cs.SC
[ "cs.SC" ]
State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences School of Computer Science and Technology, University of Chinese Academy of Sciences Beijing China wangzh@ios.ac.cn § ABSTRACT Satisfiability modulo nonlinear real arithmetic theory (SMT(NRA)) solving is essential to multiple applications, including program verification, program synthesis and software testing. In this context, model constructing satisfiability calculus (MCSAT) has recently been invented to search for models directly in the theory space. Although follow-up papers have discussed practical directions and updates on MCSAT, less attention has been paid to the detailed implementation. In this paper, we present an efficient implementation of dynamic variable orderings for MCSAT, called dnlsat. We describe carefully designed data structures and promising mechanisms, such as the branching heuristic, restart, and lemma management. Besides, we also give a theoretical study of the potential influences brought by the dynamic variable ordering. The experimental evaluation shows that dnlsat accelerates solving and solves more satisfiable instances than other state-of-the-art SMT solvers. Demonstration Video: <https://youtu.be/T2Z0gZQjnPw>. Code: <https://github.com/yogurt-shadow/dnlsat/tree/master/code> Benchmark: <https://zenodo.org/records/10607722/files/QF_NRA.tar.zst?download=1> DNLSAT: A Dynamic Variable Ordering MCSAT Framework for Nonlinear Real Arithmetic Zhonghan Wang July 1, 2024 § INTRODUCTION Satisfiability Modulo Theories (SMT) extends boolean satisfiability (SAT) problems to different background theories, such as linear and nonlinear arithmetic, uninterpreted functions, strings and arrays <cit.>. Among them, nonlinear real arithmetic (NRA) represents logical formulas with polynomial constraints over real-valued variables. Nonlinear real arithmetic solving is essential to various domains in computer science, spanning both academic and industrial applications. For example, since nonlinear arithmetic is well suited to representing the differential equations that govern continuous systems, SMT(NRA) solvers are efficient tools for predicting and controlling behaviors in cyber-physical systems <cit.>. Other applications, including ranking function generation <cit.> used for termination analysis and the analysis of nonlinear hybrid automata <cit.>, also take advantage of SMT(NRA) solvers. Recently, SMT(NRA) solvers have also proved helpful in artificial-intelligence-related studies, such as the verification and repair of neural networks <cit.>. In conclusion, designing powerful SMT(NRA) solvers is essential and fundamental for many research directions. The traditional method for dealing with nonlinear arithmetic is CDCL(T), with a theory solver used as a black box to check the consistency of theory literals and return new lemmas. Recently, the MCSAT framework has been introduced to bring the theory solver into the search loop, moving the search from the literal level to the variable level. However, although MCSAT is efficient on most instances, many ideas that have proved effective in SAT solving have not yet been applied to it. In this paper, we introduce a new solver named dnlsat, which brings commonly used systematic search heuristics into nonlinear arithmetic solving.
Our experimental results demonstrate that dnlsat is competitive on satisfiable problems, which means that it might be a powerful tool used in program termination analysis and bug finding. § RELATED WORK model constructing satisfiability calculus (MCSAT) has been widely applied to solve SMT problems over different theories. The spirit of MCSAT is to directly assign arithmetic variables with a theory solver incorporated in the solving process, rather than assign literals like CDCL(T) does. For nonlinear real arithmetic, NLSAT is an efficient implementation of MCSAT, which uses cylindrical algebraic decomposition (CAD) for explanation when encountering conflicts. The seaech process is repeated until a solution is found or it is determined that the problem is unsatisfiable. Thanks to the MCSAT framework, many ideas have been applied to tackle directly with arithmetic variables. For example, some studies have tried to investigate the branching heuristic of MCSAT, and talk about the related completeness problem <cit.>. Other work has been discussing the proof complexity of MCSAT algorithm <cit.>. In conclusion, it is interesting to bring heuristics used in SAT solving to the MCSAT framework, with a transfer from boolean space to the real space. § ARCHITECTURE The general architecture of dnlsat is shown in Fig.  <ref>. Most of the parts are borrowed from CDCL SAT solvers, such as minisat <cit.>. We first give our implementation of dynamic variable orderings (i.e. branching heuristic), as suggested in  <cit.>. As proposed in  <cit.>, we talk about the implementation of detecting a univariate clause, with a consideration of root atoms. Second, some kinds of MCSAT trails like level and stage are also discussed to better manage conflict analysis and backtrack. Third, we give a theoretical analysis of CAD projection orders. Since different projection orders will generate different forms of lemmas, we talk about the set of orders that can be used to resolve conflicts. Fourth, we implement a lemma management mechanism to periodically delete useless lemmas, which is proved to be very powerful when encountering disabled root atoms. Finally, we introduce restart mechanism into systematic search of SMT solvers. §.§ Branching Heuristic We implement the following variable orderings as suggested in  <cit.>. * Default. This is the default setting in NLSAT. Boolean variables are decided before any theory decision. Arithmetic variables are ordered by their maximum degrees in polynomial constraints. * Boolean-VSIDS. Boolean variables are always decided before arithmetic variables. Two variables with the same type are ordered by their activity. * Theory-VSIDS. Arithmetic variables are always decided before boolean variables. Two variables with the same type are ordered by their activity. * Uniform-VSIDS. Whatever type the variables are, they are ordered only by their activity. §.§ Projection Order Projection order is essential for CAD algorithm. However, in real applications like SMT solvers, the order is actually not random when considering the power of generated lemmas. Given a polynomial set ps, and a projection order {v_1, v_2, ..., v_k}, each time the projection method eliminates a variable and generates a root atom according to that variable. In this case, root atoms are generated in the following form: v_1 ∼ root(p_1(v_1, v_2, v_3, ..., v_k), i_1) v_2 ∼ root(p_2(v_3, v_4, ..., v_k), i_2) ... ∼ ... v_k ∼ root(p_k(v_k), i_k) where polynomial p_k is univariate to v_k. 
To deal with the problem of useless root atoms as discuss in  <cit.>, the projection order is the reverse of the order of assigned variables. §.§ Lemma Management We borrow the idea of periodically forgetting useless lemmas from minisat <cit.>. §.§.§ Requirement We try to minimize the learnt clauses only when we jump out the search process and restart. We record a counter named learntsize_adjust_cnt. Only when learntsize_adjust_cnt counts down to zero, the minimize procedure takes effect. The increasement factor of learntsize_adjust_cnt is named learntsize_adjust_inc, which tries to improve the threshold of minimize process alongwith the systematic search. §.§.§ Clause Activity Similar to the activity used in VSIDS, the activity of learnt clauses also record their status of being involved in conflict analysis. Whenever we find a learnt clause in the resolve process, its activity is bumped by clause_bump. To inherit the spirit that focus more on recent conflicts, clause_bump is increased by a factor clause_decay after each update. §.§.§ Minimize Process To better preserve useful lemmas, we set a parameter named max_learnts. Only when the current database contains a number of learnt clauses larger than max_learnts, our minimize process is executed. We sort the learnt clauses by their activities and try to delete those most inactive clauses by a half. Short clauses that contain less than three literals are preserved. §.§ Restart §.§.§ Requirement We check the requirement of restart after each iteration of search process. When the number of conflicts surpass a threshold, the restart process executes. In our implementation, the threshold conflict times of enabling restarts is an exponential sequential calculated as threshold = restart_first * restart_base^restart_times. §.§.§ Information Preserve Most information about the search process is preserved in our restart mechanism, including the activity of variables and clauses, the number of total decisions and a part of learnt clauses. § IMPLEMENTATION AND USAGE Our algorithm is implemented on top of the nlsat solver in Z3, based on the existing library for polynomials and algebraic numbers. The source code and other resources of dnlsat is availabe at: <https://github.com/yogurt-shadow/dnlsat>. We present hyperparameters used in our implementation in Table  <ref>. Usage: The compilation and execution method are the same with Z3 solver. Dnlsat accepts an input with SMT-LIB v2.6 format [https://smtlib.github.io/jSMTLIB/SMTLIBTutorial.pdf] (). We assume the user enters the folder. To compile the overall project, simply run python command To solve a SMT instance, simply run § EVALUATION In this section, we demonstrate the performance of dnlsat on SMT(NRA) benchmarks. We first describe our experimental environment, benchmarks and baseline solvers. Then we compare different versions of branching heuristics. Finally, we present the comparison between dnlsat and other state-of-the-art solvers in terms of solved satisfiable and unsatisfiable instances. §.§ Experiment Preliminaries §.§.§ Environment We evaluate our experiments on a server with Intel Xeon Platinum 8153 processor at 2.00 GHz. The time limit of running time for each instance is 1200 seconds. §.§.§ Benchmarks We choose the SMT-LIB <cit.> benchmark repository QF_NRA track[https://zenodo.org/records/10607722] for our evaluation, which contains 12134 instances in total. 
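Looking back at the lemma management and restart subsections above, a minimal, hedged sketch of both mechanisms is given below; the clause attributes (literals, activity) are illustrative placeholders and do not correspond to dnlsat's actual data structures.

```python
def restart_threshold(restart_first: int, restart_base: float,
                      restart_times: int) -> float:
    """Conflict budget before the next restart: an exponentially growing
    schedule, threshold = restart_first * restart_base ** restart_times."""
    return restart_first * restart_base ** restart_times

def reduce_learnt_db(learnts: list, max_learnts: int) -> list:
    """Keep short learnt clauses (fewer than three literals) and the more
    active half of the remaining ones; the least active clauses are forgotten."""
    if len(learnts) <= max_learnts:
        return learnts
    short = [c for c in learnts if len(c.literals) < 3]
    rest = sorted((c for c in learnts if len(c.literals) >= 3),
                  key=lambda c: c.activity, reverse=True)
    return short + rest[: len(rest) // 2]
```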
§.§.§ Baseline Solvers We compare our solver with three most powerful solvers in SMT-COMP, including Z3 (version 4.13.1) <cit.>, CVC5 (version 1.0.2) <cit.> and YICES2 (version 2.6.2) <cit.>. Note that we do not make any modification to their source code, and preserve all the different algorithms in the portfolio solvers. §.§ Comparison between different branching heuristics Figure  <ref> shows the number of solved instances by different branching heuristics within distinct time limits. It is found that uniform-vsids and bool-vsids solve more instances than theory-vsids and the default static ordering. Our results are different from previous studies <cit.>, the main difference is that SMT-RAT implements multiple pulgins used for explanation, such as virtual substitution method (VS), Fourier-Motzkin variable elimination method (FM) and cylindrical algebraic decomposition method (CAD), while we only preserve CAD plugin for backend explanation part. Due to our efficient data structures and other heuristics, we solve more instances than SMT-RAT as described in  <cit.>. §.§ Comparison with SOTA solvers Table  <ref> shows the overall solved instances of dnlsat and other state-of-the-art solvers, which are divided into different categories. It is noticed that dnlsat is competitive with other portfolio solvers with only MCSAT algorithm implemented. Specifically, dnlsat solves the most satisfiable instances. Although dnlsat only solved third most overall instances, the difference between the second solver YICES2 is only 7 instances. We also count the number of solved instances within different time limits by each solver in Figure  <ref>. It is shown that dnlsat solves almost the most satisfiable instances in any time range. Although dnlsat is not competitive at unsatisfiable instances, dnlsat solves almost the same number with Z3 solver. In conclusion, dnlsat increases performance on satisfiable instances, while does not bring much side effect on unsatisfiable instances. § CONCLUSION We present a new SMT(NRA) solver called dnlsat, which implements an efficient dynamic variable ordering mechanism based on MCSAT. Several heuristics including projection order, lemma deletion and restart have also been incorporated into our decision procedure. Our experimental results demonstrate that dnlsat is competitive on most NRA instances, especially on satisfiable instances. The authors would like to thank the anonymous reviewers for their comments and suggestions. ACM-Reference-Format
http://arxiv.org/abs/2406.18147v1
20240626075851
Correlation entropy of free semigroup actions
[ "Xiaojiang Ye", "Yanjie Tang", "Dongkui Ma" ]
math.DS
[ "math.DS" ]
add1]Xiaojiang Ye yexiaojiang12@163.com add1]Yanjie Tang yjtang1994@gmail.com add1]Dongkui Macor1 [cor1]Corresponding author dkma@scut.edu.cn [add1]School of Mathematics, South China University of Technology, Guangzhou, 510640, China § ABSTRACT This paper introduces the concepts of correlation entropy and local correlation entropy for free semigroup actions on compact metric space, and explores their fundamental properties. Thereafter, we generalize some classical results on correlation entropy and local correlation entropy to apply to free semigroup actions. Finally, we establish the relationship between topological entropy, measure-theoretic entropy, correlation entropy, and local correlation entropy for free semigroup actions under various conditions. Free semigroup actions; Correlation entropy; Local correlation entropy; Topological entropy; Brin-Katok entropy formula. [2020]: Primary: 37A50, 37C85; Secondary: 28D20, 37B05. § INTRODUCTION In a classical dynamical system (X,f), where X is a compact metric space with metric d and f: X → X is a continuous transformation, a point x is recurrent if, for any neighborhood U of x, there exist infinitely many indices n such that f^n(x) ∈ U. The topological version of the famous Poincaré recurrence theorem states that almost every point is recurrent for every f-invariant Borel finite measure. Due to the continuity of f, for any given ε and any recurrent point x, there exist infinitely many pairs of indices i ≠ j such that d(f^i(x), f^j(x))< ε. These pairs are referred to as recurrences. In <cit.>, Eckmann, Kamphorst and Ruelle explored recurrences by recurrence plots, a white-and-black square image with black pixels representing recurrences. Further insights into recurrence quantification analysis are available in literature such as <cit.>. Building upon the investigation of recurrence, the correlation sum C(x,ε,d,n) is defined as C(x,ε,d,n):=1/n^2♯{(i,j):0 ≤ i,j ≤ n-1, d(f^i(x),f^j(x))≤ε}, where ε represents the threshold distance. Literature concerning the correlation sum can be found in references such as <cit.>. A fundamental result regarding the correlation sum states that in an ergodic dynamical system (X,f,μ), there exists a countable subset Q ⊂ℝ such that for any ε∉ Q, the following convergence holds for almost everywhere x ∈ X: lim_n → +∞C(x,ε,d,n)=∫_X μ (B_d(x,ε))dμ(x), where B_d(x,ε) is the ε-neighborhood of x, and ∫_X μ (B_d(x,ε))dμ(x) is referred to as the correlation integral. A detailed proof of this statement can be found in <cit.>.In the pursuit of numerical estimation method for generalized entropies, Takens introduced q-correlation entropy, expanding upon the concepts of correlation sum and correlation integral in <cit.>. If q ≠ 1, the q-correlation entropy is defined as h_cor(f,μ,q) :=lim_ε→ 0lim sup_k → +∞-1/(q-1)klog∫_Xμ(B_d_k(x,ε))^q-1dμ(x), h_cor(f,μ,q) :=lim_ε→ 0lim inf_k → +∞-1/(q-1)klog∫_Xμ(B_d_k(x,ε))^q-1dμ(x), if q=1, the 1-correlation entropy is defined as h_cor(f,μ,1) :=lim_ε→ 0lim sup_k → +∞-1/k∫_Xlogμ(B_d_k(x,ε))dμ(x), h_cor(f,μ,1) :=lim_ε→ 0lim inf_k → +∞-1/k∫_Xlogμ(B_d_k(x,ε))dμ(x), where d_k represents the Bowen metric and B_d_k(x,ε) denotes the Bowen dynamical ball. Following <cit.>, Špitalský <cit.> defined local correlation entropy as follows h_cor(f,x):=lim_ε→ 0lim sup_k → +∞ - 1/kloglim inf_n → +∞ C(x,ε,d_k,n), h_cor(f,x):=lim_ε→ 0lim inf_k → +∞ - 1/kloglim sup_n → +∞ C(x,ε,d_k,n). In <cit.>, Verbitskiy demonstrated that 1-order correlation entropy equals measure-theoretic entropy. 
Moreover, if μ is an invariant f-homogeneous Borel probability measure, meaning that Borel probability measure μ is invariant and satisfies that for any ε>0, there exists δ >0 and c>0 such that μ(B_d_k(y,δ)) ≤ cμ(B_d_k(x,ε)) for all x, y∈ X and k∈ℕ, then q-order correlation entropy, measure-theoretic entropy and topological entropy coincide for any q∈ℝ. Based on formula (<ref>), we have h_cor(f,x)=h_cor(f,μ,2), h_cor(f,x)=h_cor(f,μ,2), μ-a.e. x. Thus, a connection exists among different entropies if μ is an invariant f-homogeneous Borel probability measure. This connection is valuable as it obviates the need for partition consideration when computing measure-theoretic entropy. Recently, there has been growing interest in studying free semigroup actions, which allow systems to adapt dynamically over time to accommodate inevitable experimental errors. Ghys et al. <cit.> proposed a definition for topological entropy applicable to finitely generated pseudo-groups of continuous maps, sparking significant interest in free semigroup actions on compact metric spaces. For instance, Bufetov <cit.> introduced the concept of topological entropy for free semigroup actions, whereas Biś <cit.> introduced the entropies for a semigroup of maps using alternative methods. Partial variational principles linking measure-theoretic entropy and topological entropy for free semigroup actions were investigated in <cit.> and <cit.>. Carvalho et al. proposed a novel definition for the measure-theoretic entropy of free semigroup actions in <cit.> to establish these principles. In contrast to studies involving the entire space of free semigroup actions, Ju et al. <cit.> explored the topological entropy, while Xiao and Ma <cit.> investigated the topological pressure of free semigroup actions on non-compact sets. Given the interconnectedness among topological entropy, measure-theoretic entropy, correlation entropy, and local correlation entropy in classical dynamical systems, it is natural to inquire whether such relationships persist in free semigroup actions, although limit literature exists on this topic. Therefore, this paper aims to investigate this relationship. Throughout this paper, we focus on a compact metric space (X,d) and a Borel probability measure μ with full support. Let G be the free semigroup with m generators {f_1,f_2,⋯,f_m} acting on X, where each f_i denotes a continuous self-map on X with i ∈{1,2,⋯,m}. We use h_μ(G) to denote the measure-theoretic entropy, h_top(G) to denote the topological entropy and h(ω,x)(H(ω,x)) to denote the lower(upper) local entropy of free semigroup actions, respectively. We introduce the concepts of correlation sum, upper(lower) local correlation entropy and q-order upper(lower) correlation entropy for free semigroup actions, denoted by C(G,x,ε,ω,k,n), h_cor(G,x)(h_cor(G,x)), h_cor(G,μ,q)(h_cor(G,μ,q)) respectively (details can be found in Section 3). Subsequently, we establish the relationship among different entropies. Specifically, Theorem <ref> corresponds to the case of q=2, Theorem <ref> to q=0, Theorem <ref> to q ≥ 1, while Theorem <ref> and Theorem <ref> address the cases of 0 ≤ q ≤ 1 and q ≤ 0, respectively. We now proceed to present our findings. Let G be a free semigroup acting on a compact metric space X, μ be G-ergodic Borel probability measure on X, then there exists a countable subset Q ⊂ℝ such that for any ε∉ Q, ω∈Σ_m^+ and k ∈ℕ, lim_n → +∞ C(G,x,ε,ω,k,n) =∫_X μ (B_ω,k^G(x,ε))dμ(x), μ-a.e. 
x ∈ X, where B_ω,k^G(x,ε) is generalized Bowen dynamical ball (see details in Section 2). Furthermore, h_cor(G,μ,2)=h_cor(G,x), h_cor(G,μ,2)=h_cor(G,x), μ-a.e. x ∈ X. Let G be a free semigroup acting on a compact metric space X, μ Borel probability measure on X. If μ satisfies the weak entropy-doubling condition of free semigroup actions, then h_top(G)=h_cor(G,μ,0). Theorem <ref> and <ref> are generalizations of classical results. Let G be a free semigroup acting on a compact metric space X, μ G-ergodic Borel probability measure on X. If μ satisfies h_μ(G) < +∞ and the limit process of h(ω,x) is uniformly about x for almost everywhere ω, then for any q ≥ 1, h_μ(G)=h_cor(G,μ,q). In particularly, h_μ(G)=h_cor(G,μ,2)=h_cor(G,x), μ-a.e. x ∈ X. In Theorem <ref>, the condition regarding h(ω,x) is necessary, but its optimality is unknown. In general, without the condition of h(ω,x), the Theorem <ref> does not hold. Examples demonstrating this can be found in <cit.>. Let G be a free semigroup acting on a compact metric space X, μ be Borel probability measure on X. If h(ω,x) ≥ h_top(G) almost everywhere, then for any 0 ≤ q ≤ 1, h_top(G)=h_cor(G,μ,q). Moreover, if μ is G-invariant, then h_top(G)=h_cor(G,μ,1)=h_μ(G). Let G be a free semigroup acting on a compact metric space X, μ be G-invariant Borel probability measure on X satisfying the weak entropy-doubling condition of free semigroup actions. If the limit process of H(ω,x) is uniformly about (ω,x) and H(ω,x) ≤ h_μ(G) almost everywhere, then for any q ≤ 0, h_cor(G,μ,q) = h_μ(G). In particularly, h_top(G)=h_cor(G,μ,0)= h_μ(G). Theorem <ref>, <ref> and <ref> were first proposed even in classical dynamical systems and were inspired by the examples in chapter 2 of <cit.>. The paper is structured as follows: Section 2 presents the necessary preliminaries, while Section 3 introduces the concepts of correlation entropy and local correlation entropy of free semigroup actions, along with an examination of their properties. The proofs of Theorems <ref> and <ref> are provided in Sections 4 and 5, respectively. Section 6 is dedicated to the proofs of Theorems <ref>, <ref>, and <ref>. § PRELIMINARIES §.§ The free semigroup actions on compact metric space Let (X,d) be a compact metric space, and let G be a free semigroup generated by G_∗:={f_1, f_2, ⋯, f_m}, where each f_i is a continuous self-map on X for all 1≤ i≤ m. Given a vector p=(p_1, p_2, ⋯, p_m) with ∑_i=1^mp_i=1 and p_i>0 for all 1≤ i ≤ m, there exists a symbol space Σ_m^+:= {1, 2, ⋯, m}^ℕ with a Bernoulli probability measure ℙ generated by the vector p. Let σ: Σ_m^+ ⟶Σ_m^+ be the shift operator defined by σ (i_1, i_2, ⋯)=(i_2, i_3, ⋯). For any ω∈Σ_m^+, denoted as ω=(i_1, i_2, ⋯), we define f_ω,n(x):= x, n=0 f_i_n∘ f_i_n-1∘⋯∘ f_i_1(x), n≥ 1. Thus, the orbit of x under the free semigroup actions is defined as Orb(x,G):={f_ω,n(x): ∀ω∈Σ_m^+, n ≥ 0}. We refer to (X,G) as the free semigroup action. Given ω=(i_1, i_2, ⋯) and k ≥ 1, one could define the generalized Bowen metric as d_ω, k ^G (x, y):=max{d(f_ω, i(x), f_ω, i(y)) : 0 ≤ i ≤ k-1 } and the generalized Bowen dynamical ball as B_ω,k^G(x,ε):={y: d_ω,k^G(x,y) ≤ε}. Let F: Σ_m^+ × X ⟶Σ_m^+ × X be the skew product transformation defined as follows F(ω, x)=(σ (ω), f_ω,1(x)). Here, we revisit certain terminologies originating from random dynamical systems (cf. <cit.> for detailed exposition) and introduce specific constraints to adapt them for free semigroup actions. A Borel probability measure μ is called G-invariant if ℙ×μ is invariant with respect to F. 
Similarly, A Borel probability measure μ is called G-ergodic if ℙ×μ is both invariant and ergodic with respect to F. Subsequently, we present a necessary theorem concerning ergodicity in the random dynamical systems. While initially established by <cit.>, the theorem underwent generalization by <cit.>, and Kifer <cit.> provided an alternative proof methodology. We apply this theorem to the context of free semigroup actions for our convenience. Let G be a free semigroup acting on a compact metric space X, μ be a Borel probability measure on X. Then ℙ×μ is ergodic with respect to skew product transformation if and only if for any measurable subset A ⊆ X, if ∑_i=1^m p_iχ__A(f_i(x))=χ__A(x), then μ(A)=0 or 1. Given any integer t ≥ 1, G^t can be defined as a free semigroup generated by G_∗^t:={g_1, g_2, ⋯, g_m^t:g_i=f_i_t∘ f_i_t-1∘⋯∘ f_i_1, 1 ≤ i≤ m^t, f_i_s∈ G_∗, 1≤ s≤ t}. Additionally, a one-to-one transformation τ is defined as follows τ : {1,2,⋯, m^t} ⟶{1,2,⋯,m}^t j ↦(i_1,i_2,⋯,i_t), and for convenience, the following transformation is still denoted as τ τ : Σ_m^t^+ ⟶Σ_m^+ (j_1,j_2,⋯) ↦ (τ(j_1),τ(j_2),⋯). Thus, a probability vector p^t on {1,2,⋯,m^t} corresponding to p can be defined as p^t(j)=p(i_1)p(i_2)⋯ p(i_t) where 1≤ j≤ m^t, τ(j)=(i_1,i_2,⋯,i_t). Additionally, there exists a symbol space Σ_m^t^+:={1,2,⋯,m^t}^ℕ with a Bernoulli probability measure ℙ^t generated by p^t. It is evident that both τ and τ^-1 preserve measure, meaning that for any measurable set A ⊆Σ_m^+, B ⊆Σ_m^t^+, ℙ^t (τ^-1A)=ℙ (A) and ℙ^t (B)=ℙ (τ B) hold. For any ϖ=(j_1,j_2,⋯)∈Σ_m^t^+, j_s∈{1,2,⋯,m^t}, s=1,2,⋯, we define g_ϖ,n(x):= x, n=0 g_j_n∘ g_j_n-1∘⋯∘ g_j_1(x), n≥ 1. (X, G^t) is referred to as the t-power system of (X,G). Notably, for any ϖ=(j_1,j_2,⋯)∈Σ_m^t^+, j_s∈{1,2,⋯,m^t}, 1≤ s, there exists a unique ω=τ(ϖ) ∈Σ_m^+, ω=(i_1,i_2,⋯) such that f_ω, nt(x)=g_ϖ, n(x) for any n≥ 0, any x∈ X. §.§ Measure-theoretic entropy and topological entropy of free semigroup actions The measure-theoretic entropy and topological entropy of free semigroup actions have been extensively studied in the literature <cit.>. Here, we adopt the following definitions for measure-theoretic entropy and topological entropy. <cit.> Let G be a free semigroup acting on a compact metric space X, μ be G-invariant probability measure, ξ be finite Borel measurable partition of X. Then the measure-theoretic entropy h_μ(G, ξ) of (X,G) with respect to ξ is defined as h_μ(G, ξ):=lim_k → +∞1/k∫_Σ_m^+ H_μ(⋁_i=0^k-1 f_ω, i^-1 ξ) dℙ(ω), and the measure-theoretic entropy of (X,G) is defined as h_μ(G):=sup_ξ h_μ(G, ξ), where H_μ(⋁_i=0^k-1 f_ω, i^-1ξ) :=-∑_A ∈⋁_i=0^k-1 f_ω, i^-1ξμ(A) logμ (A) , ⋁_i=0^k-1 f_ω, i^-1ξ :=ξ⋁ f_ω, 1^-1ξ⋁⋯⋁ f_ω, k-1^-1ξ. We recall the concepts of separated sets and spanning sets in the context of free semigroup actions. Let G be a free semigroup with m generators {f_1,f_2,⋯,f_m} acting on compact metric space X, where f_i is continuous self-map on X, i ∈{1,2,⋯,m}. A subset E(ω,k,ε) ⊆ X is defined as the largest cardinality (ω, k, ε) separated set if, for any distinct points x,y ∈ E(ω,k,ε), the distance d^G_ω, k(x,y) > ε, and the cardinality ♯ E(ω,k,ε) of E(ω,k,ε) is maximized. Similarly, A subset F(ω,k,ε) ⊆ X is termed the smallest cardinality (ω, k, ε) spanning set if, for any x ∈ X, there exists y ∈ F such that d^G_ω, k(x,y) ≤ε, and the cardinality ♯ F(ω,k,ε) of F(ω,k,ε) is minimized. <cit.> Let G be a free semigroup acting on a compact metric space X. 
Then the topological entropy h_top(G) of (X,G) is h_top(G)=lim_ε→ 0lim inf_k → +∞1/k∫_Σ_m^+log♯ E(ω,k,ε) dℙ(ω). Kifer <cit.> introduced the topological entropy of random transformations as H_top(G) =lim_ε→ 0lim inf_k → +∞1/klog♯ E(ω,k,ε) for almost everywhere ω∈Σ_m^+, and Li et al<cit.> demonstrated that h_top(G)=H_top(G). Below, we present the ergodic theorem and the Brin-Katok local entropy formula for random dynamical systems, established by Kifer <cit.> and Zhu <cit.>, respectively <cit.> Let G be a free semigroup acting on a compact metric space X, (Σ_m^+, ℙ) be a probability space, F: Σ_m^+ × X ⟶Σ_m^+ × X be the skew product transformation, μ be a Borel probability measure on X. If ℙ×μ is ergodic with respect to F and ϕ∈ L^1(μ), then there exists an Ω⊆Σ_m^+ with ℙ(Ω)=1, such that for any ω∈Ω, there exists a X_ω⊆ X with μ(X_ω)=1 where lim_k → +∞1/k∑_i=0^k-1ϕ(f_ω,i(x))=∫_X ϕ(y) d μ(y) holds for any x ∈ X_ω. It is also true that there exists a W ⊆ X with μ(W)=1, such that for any x ∈ W, there exists an Ω_x ⊆Σ_m^+ with ℙ(Ω_x)=1 where lim_k → +∞1/k∑_i=0^k-1ϕ(f_ω,i(x))=∫_X ϕ(y) d μ(y) holds for any ω∈Ω_x. <cit.> Let G be a free semigroup acting on a compact metric space X, (Σ_m^+, ℙ) be a probability space, F: Σ_m^+ × X ⟶Σ_m^+ × X be the skew product transformation, μ be a Borel probability measure on X. If ℙ×μ is ergodic with respect to F and h_μ(G) < +∞, then there exists an Ω⊆Σ_m^+ with ℙ(Ω)=1 such that for any ω∈Ω, there exists a X_ω⊆ X with μ(X_ω)=1 where h_μ(G) =lim_ε→ 0lim inf_k → +∞ -1/klogμ (B_ω,k^G(x,ε)) =lim_ε→ 0lim sup_k → +∞ -1/klogμ (B_ω,k^G(x,ε)) holds for any x ∈ X_ω. The formulations of the two theorems differ in references <cit.> due to modifications introduced for convenience. Readers are encouraged to independently verify the validity of these modifications. § NOTIONS AND PROPERTIES In this Section, we introduce the concepts of correlation sum, upper(lower) local correlation entropy as well as q-order upper(lower) correlation entropy of free semigroup actions and explore their fundamental properties.To begin, we introduce the concept of correlation sum for free semigroup actions. Let G be a free semigroup acting on a compact metric space X. For any x ∈ X, ε > 0, ω∈Σ_m^+, k ≥ 1, n ≥ 1, the correlation sum of free semigroup actions is defined as follows C (G,x,ε,ω,k,n) :=1/n^2∫_Σ_m^+♯{(i,j): 0 ≤ i, j ≤ n-1, d_ω, k^G(f_υ, i(x), f_υ, j(x)) ≤ε} d ℙ (υ) where ♯ A is the cardinality of set A. Initially, we propose an alternative definition to generalize the correlation sum, which is defined as C^' (G,x,ε,ω,k,n) :=1/n^2♯{(i,j): 0 ≤ i, j ≤ n-1, d_ω, k^G(f_ω, i(x), f_ω, j(x)) ≤ε}. However, we prefer Definition <ref> for the following reasons. Firstly, in classical dynamical systems, local correlation entropy is defined using a fixed Bowen metric d_k to observe the first n elements of the orbit {f^i(x)}_i=0^+∞ and compute C(x,ε, d_k, n). In the context of free semigroup actions, it is essential to choose a fixed Bowen metric, that is, d_ω, k^G. Subsequently, this fixed Bowen metric is applied to observe the first n elements of the orbit Orb{x, G}, denoted as Orb(x,G,n):={f_ω,k(x): ∀ω∈Σ_m^+, 0 ≤ k ≤ n-1}. We contend that Orb{x, G,n} should encompass multiple trajectories induced by different υ rather than only unique trajectory induced by ω. Secondly, if we adopt C^'(G,x,ε,ω,k,n) as the definition of generalized correlation sum,, we could not be able to obtain the analogue of Theorem (<ref>) without additional conditions. 
The reason is that the ergodic theorem of random dynamical systems plays an important role in the proof of the Theorem (<ref>). Hence, the choice of the integral form is preferred. Building upon the concept of correlation sum outlined above, we introduce the local correlation entropy of free semigroup actions. Let G be a free semigroup acting on a compact metric space X. The upper (lower) local correlation entropy of free semigroup actions is defined as follows h_cor(G,x):=lim_ε→ 0lim sup_k → +∞ - 1/k∫_Σ_m^+logC(G,x,ε,ω,k) dℙ(ω), h_cor(G,x):=lim_ε→ 0lim inf_k → +∞ - 1/k∫_Σ_m^+logC(G,x,ε,ω,k) dℙ(ω), where C(G,x,ε,ω,k):=lim inf_n → +∞ C(G,x,ε,ω,k,n), C(G,x,ε,ω,k):= lim sup_n → +∞ C(G,x,ε,ω,k,n). If h_cor(G,x) =h_cor(G,x), then we denote h_cor(G,x):=h_cor(G,x) =h_cor(G,x). Similarly, we introduce the concepts of correlation integral and correlation entropy of free semigroup actions as follows. Let G be a free semigroup acting on a compact metric space X, μ be a Borel probability measure with full support on X. (this assumption holds when discussing correlation entropy). For ε >0, k ≥ 1, and q ∈ℝ, the correlation integral of q-order of free semigroup actions is defined as follows c(G,μ,ε,k,q) := 1/q-1∫_Σ_m^+log(∫_X μ(B_ω,k^G(x,ε))^q-1 dμ(x)) dℙ(ω) q ≠ 1 , c(G,μ,ε,k,1) := ∫_Σ_m^+∫_X logμ (B_ω, k^G (x,ε)) dμ(x) dℙ(ω) q=1, and the upper (lower) correlation entropy of q-order of free semigroup actions is defined as follows h_cor(G,μ,q) : =lim_ε→ 0lim sup_k → +∞ -1/k c(G,μ,ε,k,q), h_cor(G,μ,q) : =lim_ε→ 0lim inf_k → +∞ -1/k c(G,μ,ε,k,q). If h_cor(G,μ,q)=h_cor(G,μ,q), we denote h_cor(G,μ,q):=h_cor(G,μ,q)=h_cor(G,μ,q). When G_∗={f_1}, definition 2.1-2.3 degenerate into classical cases, as discussed in <cit.>. Similar to the approach described in <cit.>, the definition of q=1 is imposed by continuity. For clarity, we provide an explanation here. Let ε >0 satisfy the following condition, for any ω∈Σ_m^+, k ≥ 1 and x ∈ X, μ( { y: d_ω,k^G(x,y)=ε})=0. Assume that there exists ω_0 ∈Σ_m^+ such that inf_x∈ Xμ(B_ω_0,k^G(x,ε))=0. Denote B_ω_0,k^G(x,r_1,r_2):={y: r_1 < d_ω_0,k^G(x,y) ≤ r_2}. It could be observed that |μ(B_ω_0,k^G(x,ε)) - μ(B_ω_0,k^G(y,ε))| = |μ(B_ω_0,k^G(x,ε) ∖ B_ω_0,k^G(y,ε)) - μ(B_ω_0,k^G(y,ε) ∖ B_ω_0,k^G(x,ε))| ≤ max{μ(B_ω_0,k^G(x,ε) ∖ B_ω_0,k^G(y,ε)), μ(B_ω_0,k^G(y,ε) ∖ B_ω_0,k^G(x,ε)) }. Note that if d_ω_0,k^G(x,y) < δ < ε, then we have B_ω_0,k^G(x,ε) ∖ B_ω_0,k^G(y,ε) ⊂ B_ω_0,k^G(x,ε - δ, ε), B_ω_0,k^G(y,ε) ∖ B_ω_0,k^G(x,ε) ⊂ B_ω_0,k^G(x,ε, ε + δ). Combined with the equations (<ref>) and (<ref>), we have |μ(B_ω_0,k^G(x,ε)) - μ(B_ω_0,k^G(y,ε))| ≤ max{μ(B_ω_0,k^G(x,ε) ∖ B_ω_0,k^G(y,ε)), μ(B_ω_0,k^G(y,ε) ∖ B_ω_0,k^G(x,ε)) } ≤ max{μ(B_ω_0,k^G(x,ε - δ, ε)), μ(B_ω_0,k^G(x,ε, ε+ δ)) }. As δ approaches 0, B_ω_0,k^G(x,ε , ε+ δ) tends to ∅ and B_ω_0,k^G(x,ε - δ, ε) tends to { y: d_ω,k^G(x,y)=ε}. Thus, for any η >0, there exists δ < ε such that if d_ω_0,k^G(x,y) < δ, then max{μ(B_ω_0,k^G(x,ε - δ, ε)) μ(B_ω_0,k^G(x,ε, ε+ δ)) }≤η. Hence, μ(B_ω_0,k^G(x,ε)) is continuous with respect to x. Moreover, X is compact, implying the existence of x_0 ∈ X such that μ(B_ω_0,k^G(x_0,ε))=0. However, this contradicts the condition that the support of μ is X. 
Therefore, 0<inf_x ∈ Xμ(B_ω,k^G(x,ε)) ≤ 1 for any ω∈Σ_m^+, ensuring the interchangeability of the limit and the integral in the following process lim_q → 11/q-1∫_Σ_m^+log(∫_X μ(B_ω,k^G(x,ε))^q-1 dμ(x)) dℙ(ω) = ∫_Σ_m^+lim_q → 11/q-1log(∫_X μ(B_ω,k^G(x,ε))^q-1 dμ(x)) dℙ(ω) = ∫_Σ_m^+lim_q → 1∫_X μ(B_ω,k^G(x,ε))^q-1logμ (B_ω, k^G (x,ε)) dμ(x)/∫_X μ(B_ω,k^G(x,ε))^q-1 dμ(x)dℙ(ω) = ∫_Σ_m^+∫_X logμ (B_ω, k^G (x,ε)) dμ(x) dℙ(ω). Furthermore, we will demonstrate in the proof of Theorem <ref> that the set of such ε forms an uncountable dense subset of ℝ. (1) If 0 < ε_1 ≤ε_2, then C(G,x,ε_1,ω,k,n) ≤ C(G,x,ε_2,ω,k,n). So h_cor(G,x) is well defined. (2) If k_1 ≤ k_2, then ∫_Σ_m^+loglim inf_n → +∞ C(G,x,ε,ω,k_1,n) dℙ(ω) ≥∫_Σ_m^+loglim inf_n → +∞ C(G,x,ε,ω,k_2,n) dℙ(ω). (3) If {n_i}_i increases strictly and n_i+1/n_i→ 1, then lim inf_n → +∞ C(G,x,ε,ω,k,n)=lim inf_i → +∞ C(G,x,ε,ω,k,n_i). (4) If {k_i}_i increases strictly and k_i+1/k_i→ 1, then h_cor(G,x)=lim_ε→ 0lim sup_i → +∞ - 1/k_i∫_Σ_m^+loglim inf_n → +∞ C(G,x,ε,ω,k_i,n) dℙ(ω). (1) and (2) can be proved by definition. (3) For any given n, there exists an index s such that n_s ≤ n ≤ n_s+1. Thus, we have n_s^2/n_s+1^2 C(G,x,ε,ω,k,n_s) ≤1/n^2∫_Σ_m^+♯{(i,j): 0 ≤ i, j ≤ n_s-1, d_ω, k^G(f_υ, i(x), f_υ, j(x)) ≤ε} d ℙ (υ) ≤ C(G,x,ε,ω,k,n) ≤1/n^2∫_Σ_m^+♯{(i,j): 0 ≤ i, j ≤ n_s+1-1, d_ω, k^G(f_υ, i(x), f_υ, j(x)) ≤ε} d ℙ (υ) ≤n_s+1^2/n_s^2 C(G,x,ε,ω,k,n_s+1). Taking the lim inf on both sides yields the desired result. (4) By employing (2) and following the methodology outlined in (3), the proof is established. It extends the results of V.Špitalsky̌ <cit.>. All the properties remain valid when considering C(G,x,ε,ω,k) and h_cor(G,x). (1) If 0 < ε_1 ≤ε_2, then c(G,μ,ε_1,k,q) ≤ c(G,μ,ε_2,k,q). So h_cor(G,μ,q) is well-defined. (2) If k_1 ≤ k_2, then c(G,μ,ε,k_1,q) ≥ c(G,μ,ε,k_2,q). (3) If {k_i}_i increases strictly and satisfies k_i+1/k_i→ 1, then h_cor(G,μ,q)=lim_ε→ 0lim sup_i → +∞ -1/k_ic(G,μ,ε,k_i,q). (4) If q_1 < q_2, then c(G,μ,ε,k,q_1) ≤ c(G,μ,ε,k,q_2). In particularly, h_cor(G,μ,q_1) ≥h_cor(G,μ,q_2) , h_cor(G,μ,q_1) ≥h_cor(G,μ,q_2). (1) If q > 1, then ∫_Σ_m^+log(∫_X μ(B_ω,k^G(x,ε_1))^q-1 dμ(x)) dℙ(ω) ≤ ∫_Σ_m^+log(∫_X μ(B_ω,k^G(x,ε_2))^q-1 dμ(x)) dℙ(ω). So c(G,μ,ε_1,k,q) ≤ c(G,μ,ε_2,k,q). If q=1, then ∫_Σ_m^+∫_X logμ (B_ω, k^G (x,ε_1)) dμ(x) dℙ(ω) ≤∫_Σ_m^+∫_X logμ (B_ω, k^G (x,ε_2)) dμ(x) dℙ(ω). So c(G,μ,ε_1,k,q) ≤ c(G,μ,ε_2,k,q). If q < 1, then ∫_Σ_m^+log(∫_X μ(B_ω,k^G(x,ε_1))^q-1 dμ(x)) dℙ(ω) ≥ ∫_Σ_m^+log(∫_X μ(B_ω,k^G(x,ε_2))^q-1 dμ(x)) dℙ(ω). So c(G,μ,ε_1,k,q) ≤ c(G,μ,ε_2,k,q). (2) It can be proved by definition. (3) It can be proved in a similar manner as Proposition <ref>. (4) Given that μ possesses full support, for q <1, the inequality (q-1) ∫_X logμ (B_ω, k^G(x,ε))dμ(x) ≤log∫_X μ (B_ω,k^G(x,ε))^q-1dμ(x) follows from Jensen's inequality <cit.>. Consequently, c(G,μ,ε,k,1) > c(G,μ,ε,k,q). Similarly, for q>1, we obtain c(G,μ,ε,k,1) < c(G,μ,ε,k,q). Furthermore, for 1 < q_1< q_2, the inequality 1/q_1-1∫_Σ_m^+log(∫_X μ (B_ω,k^G(x,ε))^q_1-1dμ(x)) dℙ(ω) < 1/q_2-1∫_Σ_m^+log(∫_X μ (B_ω,k^G(x,ε))^q_2-1dμ(x)) dℙ(ω) holds by Lyapunov's inequality <cit.>. Since μ has full support, Lyapunov's inequality applies to q_1<q_2<1, thus concluding the proof. It generalizes the results of E.Verbitskiy <cit.> and all the properties remain valid when considering h_cor(G,μ,q). For any integer t≥ 1, let G be a free semigroup acting on a compact metric space X, (X,G^t) t-power system of (X,G). Then for any q ∈ℝ h_cor(G^t,μ,q) = t ·h_cor(G,μ,q), h_cor(G^t,μ,q) =t ·h_cor(G,μ,q). 
Since X is a compact metric space and each f_i ∈ G_∗ is continuous, for any ε > 0, there exists δ≤ε such that d(x,y) ≤δ implies d_ω ,t^G(x,y) ≤ε for any ω∈Σ_m^+. Therefore, for any ϖ∈Σ_m^t^+, there exists a unique ω∈Σ_m^+ such that μ (B_ω,tk^G(x,δ)) ≤μ (B_ϖ,k^G^t(x,δ)) ≤μ (B_ω,tk^G(x,ε)). By computation, we establish for any q ∈ℝ, c(G,μ,δ,tk,q) ≤ c(G^t,μ,δ,k,q)≤ c(G,μ,ε,tk,q). Combined with Proposition <ref>, we derive h_cor(G^t,μ,q) = t ·h_cor(G,μ,q), h_cor(G^t,μ,q) =t ·h_cor(G,μ,q). § PROOF OF THEOREM <REF> In this Section, we present the proof of Theorem <ref> (q=2) following the methodology from <cit.>. The proof of Theorem <ref> proceeds in three main steps. Firstly, we establish the existence of a countable subset Q ⊂ℝ such that for any ε∉ Q, ω∈Σ_m^+ and k ∈ℕ, μ×μ( { (x,y) ∈ X × X: d_ω, k^G(x,y)= ε})=0, meaning that the mapping ε↦∫_X μ ( B_ω, k^G(x, ε) ) dμ(x) is continuous at ε. Secondly, for a fixed ω∈Σ_m^+, we identify a full measure subset W(ω, k, ε) ⊆ X where the equality (<ref>) is established. Finally, we determine a common full measure subset applicable for any ω∈Σ_m^+. We commence by proving the first step. Step 1. For any ω∈Σ_m^+ and k ∈ℕ, the set of real number ε∈ℝ satisfying the inequality μ×μ( { (x,y) ∈ X × X: d_ω, k^G(x,y)= ε})> 1/n has a cardinality less than n, owing to μ×μ being a probability measure on the compact metric space X × X. Let Q_ω, k, n denote the collection of such ε. Consequently, there exists a countable subset ⋃_n=1^+∞ Q_ω, k, n such that for any ε∉⋃_n=1^+∞ Q_ω, k, n, μ×μ( { (x,y) ∈ X × X: d_ω, k^G(x,y)= ε})=0. Given that Σ_m^+ is a compact metric space, there exists a countable dense subset {ω_r }_r=1^+∞⊂Σ_m^+. For any ω∈Σ_m^+ and k ∈ℕ, there exists an ω_r_0 such that the Bowen metric d_ω, k^G equals the Bowen metric d_ω_r_0, k^G, implying that ⋃_n=1^+∞ Q_ω, k, n = ⋃_n=1^+∞ Q_ω_r_0, k, n. Thus, there exists a countable subset Q:=⋃_r,k,n=1Q_ω_r, k, n such that for any ε∉ Q, ω∈Σ_m^+ and k ∈ℕ, μ×μ( { (x,y) ∈ X × X: d_ω, k^G(x,y)= ε})=0. It is noted that ∫_X μ(B_ω, k^G(x,ε))d μ(x) = μ×μ( { (x,y) ∈ X × X: d_ω, k^G(x,y)≤ε}). For any sequence {ε_n}_n=1^+∞ with ε_n < ε and lim_n → +∞ε_n =ε, we have lim_n → +∞(∫_X μ(B_ω, k^G(x,ε))d μ(x)-∫_X μ(B_ω, k^G(x,ε_n))d μ(x)) = lim_n → +∞μ×μ( { (x,y) ∈ X × X: ε_n < d_ω, k^G(x,y)≤ε}) = μ×μ( { (x,y) ∈ X × X: d_ω, k^G(x,y)= ε}) = 0. Similarly, for any sequence {ε_n}_n=1^+∞ with ε_n > ε and lim_n → +∞ε_n =ε, we obtain lim_n → +∞(∫_X μ(B_ω, k^G(x,ε_n))d μ(x)-∫_X μ(B_ω, k^G(x,ε))d μ(x)) = lim_n → +∞μ×μ( { (x,y) ∈ X × X: ε < d_ω, k^G(x,y)≤ε_n }) = 0. Hence, for any ω∈Σ_m^+ and k ∈ℕ, the mapping ε↦∫_X μ ( B_ω, k^G(x, ε) ) dμ(x) is continuous at ε∉ Q. Step 2. We claim that for any given ε∉ Q, ω∈Σ_m^+ and k∈ℕ, there exists a subset W(ω, k, ε) ⊆ X of full measure such that for any x ∈ W(ω, k, ε), the following convergence holds lim_n → +∞ C(G,x,ε,ω,k,n) = ∫_X μ (B_ω,k^G(x,ε))dμ(x). Given ω∈Σ_m^+, k∈ℕ and ε∉ Q, for any t ∈ℕ, there exists a finite measurable partition of X, denoted by ξ_t:={A_t,1, A_t,2, ⋯, A_t,N(t)}, satisfying μ(A_t,1) ≤ 2^-t, diam_ω,k(A_t,s) ≤ 2^-t, 2 ≤ s ≤ N(t), where diam_ω,k(A):=sup_x, y ∈ Ad_ω,k^G(x,y). Furthermore, we could consider that the boundary of A_t,s, 1 ≤ s ≤ N(t), to have measure 0, given that μ is Borel probability measure and X is compact metric space.Denote S_ε :={(x,y) ∈ X × X | d_ω,k^G(x,y) ≤ε}, 𝒞_1 :={A_t,s_1× A_t,s_2∈ξ_t ×ξ_t | A_t,s_1× A_t,s_2⊆ S_ε}, 𝒞_2 :={A_t,s_1× A_t,s_2∈ξ_t ×ξ_t | A_t,s_1× A_t,s_2∩ S_ε≠∅}. 
It is obvious that ⋃_A_t,s_1× A_t,s_2∈𝒞_1 A_t,s_1× A_t,s_2⊆ S_ε⊆⋃_A_t,s_1× A_t,s_2∈𝒞_2 A_t,s_1× A_t,s_2. In particularly, we claim S_ε-2^-t+1 - ((A_t,1× X) ∪ (X × A_t,1)) ⊆⋃_A_t,s_1× A_t,s_2∈𝒞_1 A_t,s_1× A_t,s_2 ⊆⋃_A_t,s_1× A_t,s_2∈𝒞_2 A_t,s_1× A_t,s_2 ⊆ S_ε+2^-t+1∪ (A_t,1× X) ∪ (X × A_t,1). Indeed, for any (x,y) ∈ S_ε-2^-t+1 - ((A_t,1× X) ∪ (X × A_t,1)), it holds that d_ω,k^G(x,y) ≤ε-2^-t+1, x ∉ A_t,1, y ∉ A_t,1. Hence, there exists s_1,s_2 ≠ 1 such that (x,y) ∈ A_t,s_1× A_t,s_2. For any (x^',y^') ∈ A_t,s_1× A_t,s_2, we have d_ω,k^G(x^',y^') ≤ d_ω,k^G(x^',x) + d_ω,k^G(x,y) +d_ω,k^G(y,y^') ≤ 2^-t + ε-2^-t+1 + 2^-t = ε , meaning that (x^',y^') ∈ S_ε. Therefore, A_t,s_1× A_t,s_2∈𝒞_1, which yields S_ε-2^-t+1 - ((A_t,1× X) ∪ (X × A_t,1)) ⊆⋃_A_t,s_1× A_t,s_2∈𝒞_1 A_t,s_1× A_t,s_2 . Similarly, it can be demonstrated that ⋃_A_t,s_1× A_t,s_2∈𝒞_2 A_t,s_1× A_t,s_2⊆ S_ε+2^-t+1∪ (A_t,1× X) ∪ (X × A_t,1). According to Theorem <ref>, for any characteristic function χ__A_t,s, where 1 ≤ s ≤ N(t), there exists a subset W_t,s⊆ X with μ(W_t,s)=1 such that for any x ∈ W_t,s, there exists a subset Ω_x,t,s⊆Σ_m^+ with ℙ(Ω_x,t,s)=1 satisfying for any υ∈Ω_x,t,s, lim_n → +∞1/n∑_i=0^n-1χ__A_t,s(f_υ,i(x))=μ(A_t,s). Furthermore, utilizing Egoroff's theorem<cit.>, for any δ > 0, there exists a subset Ω_δ,x,t,s⊆Ω_x,t,s with ℙ(Ω_δ,x,t,s) > 1 - δ/N(t). This subset Ω_δ,x,t,s satisfies the existence of N(Ω_δ,x,t,s) ∈ℕ such that if n > N(Ω_δ,x,t,s), then for any υ∈Ω_δ,x,t,s, the following inequality holds |1/n∑_i=0^n-1χ__A_t,s(f_υ,i(x)) - μ(A_t,s)| ≤2^-t-1/N^2(t). Let W_t:=⋂_s=1^N(t) W_t,s, where it is evident that μ(W_t)=1. For any x ∈ W_t, define Ω_δ,x,t := ⋂_s=1^N(t)Ω_δ,x,t,s. It is noteworthy that ℙ(Ω_δ,x,t) ≥ 1 - δ. Define N(Ω_δ,x,t):=max_1 ≤ s ≤ N(t){ N(Ω_δ,x,t,s)}. Hence, if n > N(Ω_δ,x,t), then for any υ∈Ω_δ,x,t and 1 ≤ s ≤ N(t), |1/n∑_i=0^n-1χ__A_t,s(f_υ,i(x)) - μ(A_t,s)| ≤2^-t-1/N^2(t). Consequently, for any x ∈ W_t, δ > 0, if n > N(Ω_δ,x,t), then for any υ∈Ω_δ,x,t and 1 ≤ s_1, s_2 ≤ N(t), we have |1/n^2♯{(i,j) : 0 ≤ i,j ≤ n-1, (f_υ,i(x),f_υ,j(x)) ∈ A_t,s_1× A_t,s_2}. - μ×μ (A_t,s_1× A_t,s_2) | = | 1/n♯{i : 0 ≤ i ≤ n-1, f_υ,i(x) ∈ A_t,s_1}·1/n♯{j : 0 ≤ j ≤ n-1, f_υ,j(x) ∈ A_t,s_2}. - μ(A_t,s_1) ·μ(A_t,s_2) | ≤ 2^-t/N^2(t). Note that 1/n^2♯( ⋃_A_t,s_1× A_t,s_2∈𝒞_1{(i,j) | 0 ≤ i,j ≤ n-1, (f_υ,i(x),f_υ,j(x)) ∈ A_t,s_1× A_t,s_2}) ≤ 1/n^2♯{(i,j) | 0 ≤ i,j ≤ n-1, d_ω,k^G(f_υ,i(x),f_υ,j(x)) ≤ε} ≤ 1/n^2♯( ⋃_A_t,s_1× A_t,s_2∈𝒞_2{(i,j) | 0 ≤ i,j ≤ n-1, (f_υ,i(x),f_υ,j(x)) ∈ A_t,s_1× A_t,s_2}). In light of (<ref>), we derive (μ×μ) ( S_ε-2^-t+1) - 2 μ (A_t,1) ≤∑_A_t,s_1× A_t,s_2∈𝒞_1 (μ×μ) (A_t,s_1× A_t,s_2) ≤∑_A_t,s_1× A_t,s_2∈𝒞_2 (μ×μ) (A_t,s_1× A_t,s_2) ≤ (μ×μ) ( S_ε+2^-t+1) +2μ( A_t,1). By combining formulas (<ref>), (<ref>), and (<ref>), we derive the following inequality 1/n^2♯{(i,j) | 0 ≤ i,j ≤ n-1, d_ω,k^G(f_υ,i(x),f_υ,j(x)) ≤ε} ≥ ∑_A_t,s_1× A_t,s_2∈𝒞_11/n^2♯{(i,j) : 0 ≤ i,j ≤ n-1, (f_υ,i(x),f_υ,j(x)) ∈ A_t,s_1× A_t,s_2} ≥ ∑_A_t,s_1× A_t,s_2∈𝒞_1( (μ×μ) (A_t,s_1× A_t,s_2) - 2^-t/N^2(t)) = ∑_A_t,s_1× A_t,s_2∈𝒞_1 (μ×μ) (A_t,s_1× A_t,s_2) - ♯𝒞_1 2^-t/N^2(t) ≥ (μ×μ)( S_ε-2^-t+1) -2 μ (A_t,1) -2^-t ≥ (μ×μ)( S_ε-2^-t+1)-3·2^-t. Similarly, we could get the another inequality as follows, (μ×μ) (S_ε-2^-t+1) - 3 · 2^-t ≤ 1/n^2♯{(i,j) | 0 ≤ i,j ≤ n-1, d_ω,k^G(f_υ,i(x),f_υ,j(x)) ≤ε} ≤ (μ×μ) (S_ε+2^-t+1) + 3 · 2^-t. 
Recalling the definition of correlation sum under free semigroup actions, we have C (G,x,ε,ω,k,n) = 1/n^2∫_Σ_m^+♯{(i,j): 0 ≤ i, j ≤ n-1, d_ω, k^G(f_υ, i(x), f_υ, j(x)) ≤ε} d ℙ (υ) = 1/n^2∫_Ω_δ,x,t♯{(i,j): 0 ≤ i, j ≤ n-1, d_ω, k^G(f_υ, i(x), f_υ, j(x)) ≤ε} d ℙ (υ) +1/n^2∫_Σ_m^+ - Ω_δ,x,t♯{(i,j): 0 ≤ i, j ≤ n-1, d_ω, k^G(f_υ, i(x), f_υ, j(x)) ≤ε} d ℙ (υ). Now, we provide an estimation of the correlation sum. For any x ∈ W_t, any δ > 0, if n > N(Ω_δ, x,t), then ∫_Ω_δ,x,t (μ×μ) (S_ε-2^-t+1) - 3 · 2^-t d ℙ (υ) ≤ 1/n^2∫_Ω_δ,x,t♯{(i,j): 0 ≤ i, j ≤ n-1, d_ω, k^G(f_υ, i(x), f_υ, j(x)) ≤ε} d ℙ (υ) ≤ ∫_Ω_δ,x,t (μ×μ) (S_ε+2^-t+1) + 3 · 2^-t d ℙ (υ), which implies ( (μ×μ) (S_ε-2^-t+1) - 3 · 2^-t) ℙ (Ω_δ,x,t) ≤ 1/n^2∫_Ω_δ,x,t♯{(i,j): 0 ≤ i, j ≤ n-1, d_ω, k^G(f_υ, i(x), f_υ, j(x)) ≤ε} d ℙ (υ) ≤ ( (μ×μ) (S_ε+2^-t+1) + 3 · 2^-t) ℙ (Ω_δ,x,t). Therefore, ( (μ×μ) (S_ε-2^-t+1) - 3 · 2^-t) ℙ (Ω_δ,x,t) ≤ C(G,x,ε,ω,k,n) ≤ ( (μ×μ) (S_ε+2^-t+1) + 3 · 2^-t) ℙ (Ω_δ,x,t) + 1/n^2∫_Σ_m^+ - Ω_δ,x,t♯{(i,j): 0 ≤ i, j ≤ n-1, d_ω, k^G(f_υ, i(x), f_υ, j(x)) ≤ε} d ℙ (υ) ≤ ( (μ×μ) (S_ε+2^-t+1) + 3 · 2^-t) ℙ (Ω_δ,x,t) + ℙ (Σ_m^+ - Ω_δ,x,t). Given that ℙ ( Ω_δ,x,t ) ≥ 1 - δ and ℙ (Σ_m^+ - Ω_δ,x,t) ≤δ, we can establish the following inequlity ( (μ×μ) (S_ε-2^-t+1) - 3 · 2^-t) ( 1 - δ) ≤ C(G,x,ε,ω,k,n) ≤ ( (μ×μ) (S_ε+2^-t+1) + 3 · 2^-t) + δ. Taking lim inf and lim sup of C(G,x,ε,ω,k,n) as n → + ∞, we obtain ( (μ×μ) (S_ε-2^-t+1) - 3 · 2^-t) ( 1 - δ) ≤ C(G,x,ε,ω,k) ≤ C(G,x,ε,ω,k) ≤ ( (μ×μ) (S_ε+2^-t+1) + 3 · 2^-t) + δ. Because δ is arbitrary, we conclude that given an ω∈Σ_m^+, k∈ℕ and any ε∉ Q, for any t ∈ℕ, there exists a full measure subset W_t ⊆ X such that any x ∈ W_t satisfying (μ×μ) (S_ε-2^-t+1) - 3 · 2^-t ≤ C(G,x,ε,ω,k) ≤ C(G,x,ε,ω,k) ≤ (μ×μ) (S_ε+2^-t+1) + 3 · 2^-t. We define W(ω, k, ε) := ⋂_t=1^+ ∞ W_t. It is evident that μ (W(ω, k, ε)) = 1. Since ε is a point of continuity of μ×μ(S_ε), for any x ∈ W(ω, k, ε), (μ×μ) (S_ε) = C(G,x,ε,ω,k) = C(G,x,ε,ω,k) = (μ×μ) (S_ε), implying lim_n → +∞ C(G,x,ε,ω,k,n) = (μ×μ) (S_ε)=∫_X μ (B_ω,k^G(x,ε))dμ(x). Step 3. The proof proceeds by establishing the existence of a common full measure subset W ⊆ X for any ω∈Σ_m^+, k∈ℕ and ε∉ Q. In Step 2, it is established that for any ω∈Σ_m^+, k ∈ℕ, ε∉ Q, there exists a full measure set W(ω, k, ε) such that for any x ∈ W(ω,k, ε), lim_n → +∞ C(G,x,ε,ω,k,n) =∫_X μ (B_ω,k^G(x,ε))dμ(x). Given that h_cor(G, μ,2) and h_cor(G,x) are well-defined, we can substitute ε→ 0 with ε_s → 0, where ε_s ∉ Q and s ∈ℕ. This yields h_cor(G,μ,2) =lim_s → +∞lim sup_k → +∞ -1/k∫_Σ_m^+log(∫_X μ(B_ω,k^G(x,ε_s)) dμ(x))dℙ(ω). h_cor(G,x) =lim_s → +∞lim sup_k → +∞ - 1/k∫_Σ_m^+loglim inf_n→+∞ C(G,x,ε_s,ω,k,n) dℙ(ω). Moreover, Σ_m^+ is a compact metric space. For any k,s ∈ℕ, consider the countable dense subset {ω_r}_r=1^+∞ of Σ_m^+, with W(ω_r,k,ε_s) representing the corresponding full measure subset of X. Define W(k,s):=⋂_r=1^+∞W(ω_r,k,ε_s). For any x ∈ W(k,s) and any ω∈Σ_m^+, there exists ω_r_0 such that ω_r_0|_[1,k]=ω|_[1,k], where ω|_[1,k] denotes the initial k elements of ω. It is pertinent to note that C(G,x,ε_s,ω,k,n) is contingent upon ω|_[1,k], rather than ω. For this reason, we have lim_n → +∞ C(G,x,ε_s,ω,k,n) =lim_n → +∞ C(G,x,ε_s,ω_r_0,k,n) =∫_X μ (B_ω_r_0,k^G(x,ε_s))dμ(x) =∫_X μ (B_ω,k^G(x,ε_s))dμ(x), implying - 1/k∫_Σ_m^+loglim_n → +∞ C(G,x,ε_s,ω,k,n) dℙ(ω) = - 1/k∫_Σ_m^+log(∫_X μ(B_ω,k^G(x,ε_s)) dμ(x)) dℙ(ω). Let W:= ⋂_s,k=1^+∞ W(k,s). It is obviously that μ(W) =1. 
For any x ∈ W, h_cor(G,x) =lim_s → +∞lim sup_k → +∞ - 1/k∫_Σ_m^+loglim_n→+∞ C(G,x,ε_s,ω,k,n) dℙ(ω) =lim_s → +∞lim sup_k → +∞ -1/k∫_Σ_m^+log(∫_X μ(B_ω,k^G(x,ε_s)) dμ(x))dℙ(ω) =h_cor(G,μ,2). Likewise, we can establish h_cor(G,μ,2)=h_cor(G,x) for x ∈ W. Hence, the Theorem <ref> is demonstrated. In <cit.>, the authors proved a similar result under stronger conditions, as stated below. <cit.> Let (X,μ) be a probability space, G be a topological semigroup. If μ is G-ergodic and for any φ∈ L^1(μ), there exists a full measure subset Y ⊆ X such that for any x ∈ Y, the following holds lim_n → + ∞1/n∑_i=0^n-1φ(f_ω,i(x))=∫_Xφ dμ(x), and the convergence is uniform, then for any x ∈ Y, ε >0, ω∈Σ_m^+ and k ≥ 1, the equality lim_n → + ∞1/n^2♯{ (i,j): 0≤ i,j ≤ n-1, d_ω,k^G(f_υ,i(x),f_υ,j(x))≤ε} = ∫_Xμ(B_ω,k^G(x,ε))dμ(x) holds for any υ. In this paper, Theorem <ref>(Theorem <ref>) fails to meet the condition (<ref>) because full measure subset Ω_x ⊆Σ_m^+ depends on x as noted in Remark <ref>. Let G be a free semigroup acting on a compact metric space X, (Σ_m^+, ℙ) corresponding symbol space. For any t ≥ 1, (X,G^t) is the t-power system of (X, G) and (Σ_m^t^+, ℙ^t) is the corresponding symbol space. If ℙ^t ×μ is ergodic with respect to F_t, where F_t is the skew product transformation acting on the Σ_m^t^+ × X, defined as F_t(ϖ,x) = (σϖ, g_j_1(x)), ϖ:= (j_1,j_2, ⋯) ∈Σ_m^t^+, then h_cor(G^t,x)=t ·h_cor(G,x), h_cor(G^t,x)=t ·h_cor(G,x), μ-a.e. x, Firstly, we assert that if ℙ^t ×μ is ergodic with respect to F_t, then ℙ×μ must also be ergodic with respect to F. To substantiate this claim, consider an invariant integrable function φ: Σ_m^+ × X ⟶ℝ. We can define a function ψ: Σ_m^t^+ × X →ℝ as follows ψ(ϖ,x):= φ(τ(ϖ),x), where τ : Σ_m^t^+ →Σ_m^+ is defined in Section 2.1. This function ψ is integrable. For any (ϖ,x)=((j_1,j_2,⋯),x), the following transformation can be observed ψ∘ F_t(ϖ,x) =ψ((j_2,⋯),g_j_1(x)) =φ(σ^t τ(ϖ),f_τ(ϖ),t(x)) =φ∘ F^t(τ(ϖ),x) =φ(τ(ϖ),x) =ψ(ϖ,x), meaning that ψ is invariant. Since ℙ^t ×μ is ergodic, ψ attains a constant value for ℙ^t ×μ-almost everywhere (ϖ, x). Furthermore, owing to the preservation of measure by both τ and τ^-1, φ takes on a constant value for ℙ×μ-almost everywhere (τ(ϖ),x). This observation implies the ergodicity of ℙ×μ with respect to F. Utilizing Proposition 3.3 and Theorem 1.1, we derive the following equality h_cor(G^t,x)=h_cor(G^t,μ,2)=t ·h_cor(G,μ,2)=t ·h_cor(G,x), μ-a.e. x. Similarly, the equality h_cor(G^t,x)=t ·h_cor(G,x) for μ-almost everywhere x can be demonstrated using analogous reasoning. Problem. In classical dynamical systems, <cit.> established that h_cor(f^t,x)=t ·h_cor(f,x), h_cor(f^t,x)=t ·h_cor(f,x), holds for any x via an innovative combinational approach. However, in the context of free semigroup actions, it remains unclear whether this power law persists for any x. § PROOF OF THEOREM <REF> Prior to establishing Theorem <ref> (q=0), we introduce a weak double-entropy condition for free semigroup actions, akin to the approach outlined in <cit.>. Let G be a free semigroup acting on a compact metric space X, μ be a Borel probability measure on X. We say that μ satisfies the weak double-entropy condition of free semigroup actions if for sufficiently small 2ε,, the lim sup_k → +∞1/klogsup_x ∈ Xμ(B_ω,k^G(x,2ε))/μ(B_ω,k^G(x,ε))=0 holds for almost everywhere ω∈Σ_m^+. Let F(ω, k,ε) be the (ω, k,ε) spanning set with smallest cardinality. 
Since for any x_i ∈ F(ω, k, ε) and x ∈ B^G_ω,k(x_i,ε), B^G_ω,k(x_i,ε) ⊆ B^G_ω,k(x,2ε), we have ∫_X μ(B^G_ω,k(x,2ε) )^-1 d μ(x) ≤ ∑_x_i ∈ F∫_B^G_ω ,k(x_i,ε)μ(B^G_ω,k(x,2ε) )^-1 dμ(x) ≤ ∑_x_i ∈ Fμ(B^G_ω,k(x_i,ε) )^-1μ(B^G_ω,k(x_i,ε) ) = ♯ F(ω,k,ε). Therefore 1/k∫_Σ_m^+log( ∫_X μ(B^G_ω,k(x,2ε) )^-1 d μ(x) ) d P(ω) ≤1/k∫_Σ_m^+log♯ F(ω,k,ε) d P(ω). Taking lim sup as k → +∞ and the limit as ε→ 0 on both sides of the inequality, we obtain h_cor(G,μ,0) ≤ h_top(G). Let E(ω,k,2ε) be the largest cardinality (ω, k,2ε) separated set, with 2ε chosen sufficiently small such that μ satisfies the weak double-entropy condition of free semigroup actions. Consequently, there exist at most ♯ E(ω,k,2ε) pairwise disjoint Bowen ball B^G_ω,k(x_i,ε) on X, where x_i ∈ E(ω,k,2ε). For any x ∈ B^G_ω,k(x_i,ε), we have B^G_ω,k(x,ε) ⊆ B^G_ω,k(x_i,2ε). Thus, ∫_X μ(B^G_ω,k(x,ε) )^-1 d μ(x) ≥ ∑_x_i ∈ E∫_B^G_ω ,k(x_i,ε)μ(B^G_ω,k(x,ε) )^-1 dμ(x) ≥ ∑_x_i ∈ Eμ(B^G_ω,k(x_i,2ε) )^-1μ(B^G_ω,k(x_i,ε) ). Given that μ satisfies the weak double-entropy condition of free semigroup actions, we define C_δ,K for any δ > 0 and K ∈ℕ as C_δ,K:={ω∈Σ_m^+: μ(B_ω,k^G(x,2ε))/μ(B_ω,k^G(x,ε))≤ e^kδ for any k>K and x ∈ X}. It is clear that {ω∈Σ_m^+: lim sup_k → +∞1/klogsup_x ∈ Xμ(B_ω,k^G(x,2ε))/μ(B_ω,k^G(x,ε))=0} =⋂_δ >0⋃_K=1^+ ∞ C_δ, K = lim_δ→ 0lim_K → +∞C_δ,K. Therefore, for any η >0, there exists δ_0 such that if δ < δ_0, then there exists K_0=K_0(δ) satisfying ℙ(C_δ,K) > 1- η holds for any K > K_0. For δ < δ_0 and K > K_0, consider E(ω,k,2ε) with k > K and 2ε sufficiently small. For any ω∈ C_δ,K, we have ∫_X μ(B^G_ω,k(x,ε) )^-1 d μ(x) ≥∑_x_i ∈ Eμ(B^G_ω,k(x_i,2ε) )^-1μ(B^G_ω,k(x_i,ε) ) ≥ e^-kδ♯ E(ω,k,2ε). Consequently, 1/k∫_Σ_m^+log( ∫_X μ(B^G_ω,k(x,ε) )^-1 d μ(x) ) d ℙ(ω) ≥ 1/k∫_C_δ,Klog( ∫_X μ(B^G_ω,k(x,ε) )^-1 d μ(x) ) d ℙ(ω) ≥ -δℙ(C_δ,K) + 1/k∫_C_δ,Klog♯ E(ω,k,2ε) d ℙ(ω). By taking lim inf as k → +∞ and limit as ε→ 0 on both sides of the inequality, and employing Fatou's Lemma <cit.>, we get lim_ε→ 0lim inf_k → +∞1/k∫_Σ_m^+log( ∫_X μ(B^G_ω,k(x,ε) )^-1 d μ(x) ) d ℙ(ω) ≥ -δℙ(C_δ,K) + ∫_C_δ,Klim_ε→ 0lim inf_k → +∞1/klog♯ E(ω,k,2ε) d ℙ(ω). By the definition of topological entropy, we observe that for almost everywhere ω∈Σ_m^+, lim_ε→ 0lim inf_k → +∞1/klog♯ E(ω,k,ε)=h_top(G). Therefore, lim_ε→ 0lim inf_k → +∞1/k∫_Σ_m^+log( ∫_X μ(B^G_ω,k(x,ε) )^-1 d μ(x) ) d ℙ(ω) ≥ -δℙ(C_δ,K) + h_top(G) ℙ(C_δ,K) ≥ -δ + h_top(G) (1-η). Since δ and η are arbitrarily small, we conclude that h_cor(G,μ,0) ≥ h_top(G). Let Σ_2^+:={0,1}^ℕ be a compact metric space, where the metric d is defined as d((x_1,x_2,⋯),(y_1,y_2,⋯)):=2^-min{i ≥ 1: x_i ≠ y_i}, and μ be a Bernoulli probability measure generated by (1/2,1/2) on (Σ_2^+,d). For any x=(x_1,x_2,⋯), the shift operator f_1 is defined as f_1(x):=(x_2,x_3,⋯), and odometers f_2 (also known as adding machines) <cit.> is defined as f_2(x):=x+(1,0,0,⋯)=(y_1,y_2,y_3,⋯), where (y_1,y_2,y_3,⋯) is determined by the following process. If x_1+1=1, then y_1=1 and δ_2=0, if x_1+1=2, then y_1=0 and δ_2=1. For every n ≥ 2, if x_n+δ_n=1, then y_n=1 and δ_n+1=0, if x_n+δ_n=2, then y_n=0 and δ_n+1=1. Denote i=min{j ≥ 1: x_j=0}, that is, x=(1, 1,⋯, 1, 0,x_i+1,x_i+2,⋯). Hence, we get a simple expression of f_2 as follows f_2(x):=x+(1,0,0,⋯)=(0,⋯,0,1,x_i+1,x_i+2,⋯). It is noted that if i=+∞, that is, x=(1,1,⋯), then f_2(x)=(0,0,⋯). G is the free semigroup generated by {f_1,f_2} acting on compact metric space (Σ_2^+,d). (Σ_2^+,ℙ) is the corresponding symbol space where ℙ is the Bernoulli probability measure generated by (1/2,1/2). 
For the sake of convenience, let ε = 2^-t, t ∈ℕ. Note that f_1^-1B(f_1(x),ε)=_2[x_2,x_3,⋯,x_t+1]_t+1, f_2^-1B(f_2(x),ε)=_1[x_1,x_2,⋯,x_t]_t, where cylinder _a[i_1,i_2,⋯,i_j]_a+j-1:={y=(y_1,y_2,⋯) ∈Σ_2^+: y_a+t=i_t+1, 0 ≤ t ≤ j-1 }. We claim that for any ω:=(i_1,i_2,⋯) ∈Σ_2^+, k ≥ 1 and ε =2^-t, it is verified that B_ω,k^G(x,ε)=_1[x_1,x_2,⋯,x_t+s_ω,k]_t+s_ω,k where s_ω,k:=♯{1 ≤ j ≤ k-1: i_j =1 }. Next we provide the proof of the claim.It is known that B_ω,k^G(x,ε):=B(x,ε) ⋂⋯⋂ f_ω,k-2^-1B(f_ω,k-2(x),ε)⋂ f_ω,k-1^-1B(f_ω,k-1(x),ε). If f_i_k-1=f_1, then f_ω,k-1^-1B(f_ω,k-1(x),ε)=f_ω,k-2^-1∘ f_1^-1 B(f_1 ∘ f_ω,k-2(x),ε), implying that B_ω,k^G(x,ε):= B(x,ε) ⋂ f_ω,1^-1B(f_ω,1(x),ε) ⋂⋯⋂ f_ω,k-2^-1B(f_ω,k-2(x),ε) ⋂ f_ω,k-2^-1∘ f_1^-1 B(f_1 ∘ f_ω,k-2(x),ε) = B(x,ε) ⋂ f_ω,1^-1B(f_ω,1(x),ε) ⋂⋯⋂ f_ω,k-3^-1B(f_ω,k-3(x),ε) ⋂ f_ω,k-2^-1(B(f_ω,k-2(x),ε) ⋂ f_1^-1 B(f_1 ∘ f_ω,k-2(x),ε)) = B(x,ε) ⋂ f_ω,1^-1B(f_ω,1(x),ε) ⋂⋯⋂ f_ω,k-3^-1B(f_ω,k-3(x),ε) ⋂ f_ω,k-2^-1B(f_ω,k-2(x),ε/2). If f_i_k-1=f_2, then f_ω,k-1^-1B(f_ω,k-1(x),ε)=f_ω,k-2^-1∘ f_2^-1 B(f_2 ∘ f_ω,k-2(x),ε), implying that B_ω,k^G(x,ε):=B(x,ε) ⋂ f_ω,1^-1B(f_ω,1(x),ε) ⋂⋯⋂ f_ω,k-2^-1B(f_ω,k-2(x),ε). Based on these consideration, we establish the claim through induction. For the base case k=1, we have B_ω,k^G(x,ε)=B(x,ε)=_1[x_1,x_2,⋯,x_t]_t. Now, for k=2, if f_i_1=f_1, then B_ω,k^G(x,ε)=B(x,ε/2)=_1[x_1,x_2,⋯,x_t+1]_t+1 and if f_i_1=f_2, then B_ω,k^G(x,ε)=B(x,ε)=_1[x_1,x_2,⋯,x_t]_t.Now, Assuming that the assertion holds for k-1, we examine the case k. Let us define s_ω,k-1:=♯{1 ≤ j ≤ k-2: i_j =1 }, s_ω,k:=♯{1 ≤ j ≤ k-1: i_j =1 }. If f_i_k-1=f_1, meaning s_ω,k=s_ω,k-1+1, we have B_ω,k^G(x,ε)= B(x,ε) ⋂ f_ω,1^-1B(f_ω,1(x),ε) ⋂⋯⋂ f_ω,k-3^-1B(f_ω,k-3(x),ε) ⋂ f_ω,k-2^-1B(f_ω,k-2(x),ε/2). When f_i_k-2=f_1, f_ω,k-3^-1B(f_ω,k-3(x),ε) ⋂ f_ω,k-2^-1B(f_ω,k-2(x),ε/2) = f_ω,k-3^-1( B(f_ω,k-3(x),ε) ⋂ f_1^-1B(f_1 ∘ f_ω,k-3(x),ε/2) ) = f_ω,k-3^-1 B(f_ω,k-3(x),ε/2^2) ⊆ f_ω,k-3^-1B(f_ω,k-3(x),ε/2) ⋂ f_ω,k-2^-1B(f_ω,k-2(x),ε/2) ⊆ f_ω,k-3^-1B(f_ω,k-3(x),ε) ⋂ f_ω,k-2^-1B(f_ω,k-2(x),ε/2). When f_i_k-2=f_2, f_ω,k-3^-1B(f_ω,k-3(x),ε) ⋂ f_ω,k-2^-1B(f_ω,k-2(x),ε/2) = f_ω,k-3^-1B(f_ω,k-3(x),ε) ⋂ f_ω,k-3^-1B(f_ω,k-3(x),ε/2) ⊆ f_ω,k-3^-1B(f_ω,k-3(x),ε/2) ⋂ f_ω,k-2^-1B(f_ω,k-2(x),ε/2) ⊆ f_ω,k-3^-1B(f_ω,k-3(x),ε) ⋂ f_ω,k-2^-1B(f_ω,k-2(x),ε/2). Consequently, B_ω,k^G(x,ε)= B(x,ε) ⋂ f_ω,1^-1B(f_ω,1(x),ε) ⋂⋯⋂ f_ω,k-3^-1B(f_ω,k-3(x),ε/2) ⋂ f_ω,k-2^-1B(f_ω,k-2(x),ε/2). Iterating this process k-2 times yields B_ω,k^G(x,ε) =B_ω,k-1^G(x,ε/2) =_1[x_1,x_2,⋯,x_t+s_ω,k-1+1]_t+s_ω,k-1+1 =_1[x_1,x_2,⋯,x_t+s_ω,k]_t+s_ω,k. If f_i_k-1=f_2, that is s_ω,k=s_ω,k-1, we obtain B_ω,k^G(x,ε):= B(x,ε) ⋂ f_ω,1^-1B(f_ω,1(x),ε) ⋂⋯⋂ f_ω,k-2^-1B(f_ω,k-2(x),ε) = B_ω,k-1^G(x,ε) = _1[x_1,x_2,⋯,x_t+s_ω,k-1]_t+s_ω,k-1 = _1[x_1,x_2,⋯,x_t+s_ω,k]_t+s_ω,k. Hence, the claim is demonstrated. Based on this claim, μ satisfies weak double-entropy condition of free semigroup actions. By Theorem<ref>, h_top(G)= lim_ε→ 0lim_k → +∞1/k∫_Σ_2^+log(∫_Xμ(B_ω,k^G(x,ε))^-1dμ(x))dℙ(ω) = lim_ε→ 0lim_k → +∞1/k∫_Σ_2^+log(∫_Xμ(_1[x_1,x_2,⋯,x_t+s_ω,k]_t+s_ω,k)^-1dμ(x))dℙ(ω) = lim_ε→ 0lim_k → +∞1/k∫_Σ_2^+log(∫_X 2^t+s_ω,kdμ(x))dℙ(ω) = lim_ε→ 0lim_k → +∞1/k∫_Σ_2^+log 2^t+s_ω,kdℙ(ω) = lim_ε→ 0lim_k → +∞1/k∑_s=0^k-1C_k-1^s 2^-(k-1)log2^t+s = lim_ε→ 0lim_k → +∞log2/2^k-1k∑_s=0^k-1C_k-1^s (t+s) = lim_ε→ 0lim_k → +∞log2/2^k-1k(∑_s=0^k-1C_k-1^s t+∑_s=0^k-1C_k-1^s s) = lim_ε→ 0lim_k → +∞log2/2^k-1k(t2^k-1+(k-1)2^k-2) = lim_ε→ 0lim_k → +∞(tlog2/k+log2/2k-1/k) = log2/2. 
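The closing computation in this example is easy to check numerically. The short Python sketch below is not part of the original argument; it simply evaluates the exact expression log2/(2^(k-1) k) * sum_{s=0}^{k-1} C_{k-1}^s (t+s) for epsilon = 2^(-t) and increasing k, and confirms the convergence to log2/2 obtained above.

from math import comb, log

def entropy_estimate(t: int, k: int) -> float:
    # (1/k) * E_omega[ log 2^(t + s_{omega,k}) ] for the shift/odometer example,
    # where s_{omega,k} ~ Binomial(k-1, 1/2) under the (1/2, 1/2) Bernoulli measure.
    total = sum(comb(k - 1, s) * (t + s) for s in range(k))
    return log(2) * total / (2 ** (k - 1) * k)

if __name__ == "__main__":
    for t in (1, 5, 10):            # epsilon = 2^(-t)
        for k in (10, 100, 1000):   # Bowen-ball length
            print(f"t={t:2d}  k={k:4d}  estimate = {entropy_estimate(t, k):.6f}")
    print(f"log(2)/2 = {log(2) / 2:.6f}")

For fixed t the estimate equals log2 * (t/k + (k-1)/(2k)), so the printed values approach log(2)/2 ≈ 0.346574 as k grows, in agreement with the example.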
§ PROOFS OF THEOREM <REF>, <REF>, <REF> Motivated by Theorem <ref>, we introduce the notions of the lower(upper) local entropy of free semigroup actions as follows. Let G be a free semigroup acting on a compact metric space X, μ a Borel probability measure on X. The lower(upper) local entropy of free semigroup actions is defined as h(ω,x):= lim_ε→ 0lim inf_k → +∞ -1/klogμ (B_ω,k^G(x,ε)), H(ω,x):= lim_ε→ 0lim sup_k → +∞ -1/klogμ (B_ω,k^G(x,ε)). We define the limit process of h(ω,x) to be uniformly with respect to x for almost everywhere ω if, for almost everywhere ω, there exists a full measure subset A(ω) ⊆ X such that for any δ > 0, there exists ε_0:=ε_0(ω,δ) such that for ε≤ε_0, there exists K:=K(ω,δ,ε) such that if k > K, then h(ω, x) - δ≤ -1/klogμ(B^G_ω, k(x, ε)) holds for any x ∈ A(ω). We define he limit process of h(ω,x) to be uniformly with respect to (ω,x) if there exists a full measure subset A ⊆Σ_m^+ × X such that for any δ > 0, there exists ε_0:=ε_0(δ) such that for ε≤ε_0, there exists K:=K(δ,ε) such that if k > K, then h(ω, x) - δ≤ -1/klogμ(B^G_ω, k(x, ε)) holds for any (ω,x) ∈ A.Prior to establishing Theorem <ref> (q ≥ 1), we require the following lemma. Let G be a free semigroup acting on a compact metric space X, μ a G-invariant Borel probability measure on X. Then h_cor(G,μ,1) ≤ h_μ(G). For any ε >0, there exists a finite partition ξ such that diam(ξ) ≤ε. Considering that D(⋁_i=0^k-1 f_ω,i^-1ξ, x) ⊆ B_ω, k^G(x,ε) where D(⋁_i=0^k-1 f_ω,i^-1ξ, x) represents the element of ⋁_i=0^k-1 f_ω,i^-1ξ containing x, we obtain -1/k∫_Σ_m^+∫_X logμ (D(⋁_i=0^k-1 f_ω,i^-1ξ, x)) dμ(x) dℙ(ω) ≥ -1/k∫_Σ_m^+∫_X logμ( B_ω, k^G(x,ε) ) dμ(x)dℙ(ω). By definition of H_μ(⋁_i=0^k-1 f_ω, i^-1ξ), we get 1/k∫_Σ_m^+ H_μ(⋁_i=0^k-1 f_ω, i^-1ξ) dℙ(ω) ≥ -1/k∫_Σ_m^+∫_X logμ( B_ω, k^G(x,ε)) dμ(x) dℙ(ω). Taking lim sup as k → +∞ and limit as ε→ 0, we have h_μ(G) ≥ h_μ(G, ξ) ≥h_cor(G,μ,1). Lemma <ref> generalizes the result of E.Verbitskiy <cit.> to the free semigroup actions. According to Theorem 2.3., there exists measurable set A ⊆Σ_m^+ × X with full measure, such that for any (ω,x) ∈ A, h(ω,x)=h_μ(G). Denote 𝒫(A):={ω: ∃ x, s.t. (ω,x) ∈ A}, A(ω):={x: (ω,x) ∈ A}. Since ℙ×μ(A)=1, for almost everywhere ω∈𝒫(A), μ(A(ω))=1. Under the given assumption, we can choose A(ω) to satisfy the following condition. For almost everywhere ω∈𝒫(A) and any δ > 0, there exists ε_0:=ε_0(ω,δ), such that for any ε≤ε_0, there exists K:=K(ω,δ,ε) such that if k > K, then h(ω, x) - δ≤ -1/klogμ(B^G_ω, k(x, ε)) holds for any x ∈ A(ω). Similar to Theorem <ref>, for any δ > 0, ε > 0 and K ∈ℕ, we define Ω_δ,ε,K:= {ω∈𝒫(A): h_μ(G) - δ≤ -1/klogμ(B^G_ω, k(x, ε)) for any k >K, x ∈ A(ω)}. As lim_δ→ 0lim_ε→ 0lim_K → +∞ℙ(Ω_δ,ε,K)=1, for any η >0, δ > 0. there exist ε, K such that ℙ(Ω_δ,ε,K) > 1-η. Given k > K and q >1, we have -1/k1/q-1∫_Σ_m^+log(∫_X μ(B_ω,k^G(x,ε))^q-1 dμ(x)) dℙ(ω) ≥ -1/k1/q-1∫_Ω_δ,ε,Klog(∫_A(ω) e^-k(q-1)(h_μ(G)-δ) dμ(x)) dℙ(ω) = (h_μ(G)-δ) ℙ(Ω_δ,ε,K). If h_μ(G)=0, then -1/k1/q-1∫_Σ_m^+log(∫_X μ(B_ω,k^G(x,ε))^q-1 dμ(x)) dℙ(ω) ≥ -δ. If h_μ(G)>0, we can assume h_μ(G)-δ > 0 owing to δ is arbitrary, then -1/k1/q-1∫_Σ_m^+log(∫_X μ(B_ω,k^G(x,ε))^q-1 dμ(x)) dℙ(ω) ≥ (h_μ(G)-δ)(1-η). As δ and η are arbitrary, taking lim inf as k→ +∞ and limit as ε→ 0 on both sides yields h_cor(G,μ,q) ≥ h_μ(G). By utilizing Lemma 6.1 and Proposition 3.2 (4), we can establish for any q ≥ 1 that h_μ(G)=h_cor(G,μ,q)=h_cor(G,μ,q). In particularly, based on Theorem 1.1, h_μ(G)=h_cor(G,μ,2)=h_cor(G,x), μ-a.e. x ∈ X. 
Following the approach in <cit.>, we establish h_μ(G) = h_cor(G,μ,1) without the requirement concerning h(ω,x), as stated in Proposition <ref>. Let G be a free semigroup acting on a compact metric space X and μ be a Borel probability measure on X such that ℙ×μ is ergodic with respect to skew product transformation F. If μ satisfies h_μ(G) < +∞, then h_cor(G,μ,1)=h_μ(G). Given h_μ(G) < +∞ and the ergodicity of ℙ×μ, it follows that h(ω,x)=h_μ(G) for almost everywhere (ω,x). By Fatou's Lemma <cit.>, we obtain lim_ε→ 0lim inf_k → +∞ -1/k∫_Σ_m^+∫_X logμ( B_ω, k^G(x,ε) ) dμ(x)dℙ(ω) ≥ ∫_Σ_m^+∫_X lim_ε→ 0lim inf_k → +∞ -1/klogμ( B_ω, k^G(x,ε) ) dμ(x)dℙ(ω) = h_μ(G). Thus, h_cor(G,μ,1) ≥ h_μ(G). By utilizing Lemma <ref>, we conclude that h_cor(G,μ,1) = h_μ(G). We now turn our attention to the scenario where 0 ≤ q ≤ 1, as outlined in Theorem <ref>. There exists a set A ⊆Σ_m^+ × X of full measure such that for any (ω,x) ∈ A, h(ω,x) exists and h(ω,x) ≥ h_top(G) by assumption. We define 𝒫(A):={ω: ∃ x, s.t. (ω,x) ∈ A}, A(ω):={x: (ω,x) ∈ A}. By Fatou's lemma <cit.>, we obtain lim_ε→ 0lim inf_k → +∞ -1/k∫_Σ_m^+∫_X logμ (B_ω,k^G(x,ε)) dμ (x) d ℙ (ω) ≥ ∫_Σ_m^+∫_X lim_ε→ 0lim inf_k → +∞ -1/klogμ (B_ω,k^G(x,ε)) dμ (x) d ℙ (ω) = ∫_𝒫(A)∫_A(ω)lim_ε→ 0lim inf_k → +∞ -1/klogμ (B_ω,k^G(x,ε)) dμ (x) d ℙ (ω) ≥ h_top(G). Thus, we have h_cor(G,μ,1) ≥ h_top(G). On the other hand, from the proof of Theorem <ref>, it follows that h_cor(G,μ,0) ≤ h_top(G). Utilizing Proposition 3.2 (4), we conclude that for any 0 ≤ q ≤ 1, h_top(G)=h_cor(G,μ,q). Referencing <cit.>, it follows that h_top(G)= sup{h_μ(G): μ∈ M(X,G)}, where M(X,G) denotes the space of G-invariant probability measure of (X,G). By employing Lemma 6.1, we establish h_top(G)=h_cor(G,μ,1)= h_μ(G), indicating that μ represents the measure of maximum entropy. Finally, we demonstrate Theorem <ref> for q ≤ 0. Under the given assumption, there exists a full measure subset A ⊆Σ_m^+ × X such that for any δ > 0, there exists ε_0:=ε_0(δ), such that for any ε≤ε_0, there exists K:=K(δ,ε) satisfying if k > K, then μ(B^G_ω, k(x, ε))^q-1≤ e^-k(q-1)(h_μ(G)+δ) holds for any (ω,x) ∈ A and q ≤ 0. Consequently, for any q ≤ 0, ε≤ε_0 and k > K, we derive -1/k1/q-1∫_Σ_m^+log(∫_X μ(B_ω,k^G(x,ε))^q-1 dμ(x)) dℙ(ω) ≤ -1/k1/q-1∫_𝒫(A)log(∫_A(ω) e^-k(q-1)(h_μ(G)+δ) dμ(x)) dℙ(ω) = (h_μ(G)+δ). Taking the lim sup as k → +∞ and the limit as ε→ 0 on both sides, we obtain for any q ≤ 0, h_cor(G,μ,q) ≤ h_μ(G). By employing Theorem <ref> and equality (<ref>), we conclude that for any q ≤ 0, h_cor(G,μ,q)=h_μ(G). Specifically, h_top(G) =h_cor(G,μ,0)= h_μ(G), signifying that μ represents the measure of maximum entropy. Let X be a compact metric group with the right-invariant metric d and Haar measure μ, which also exhibits right-invariance property (<cit.>). Consider L: X → X as a group automorphism. We define a continuous map f_i: X → X for each x_i in a finite set {x_i: 1 ≤ i ≤ m}⊆ X as f_i(x):=L(x)· x_i. Given automorphism nature of L, the inverse map of f_i is denoted as f_i^-1(x):=L^-1(x)· L^-1(x_i^-1). Let G be the free semigroup generated by {f_i: 1 ≤ i ≤ m}. Notably, due to the right-invariance of metric d, for any ω=(i_1,i_2,⋯) and j≥ 0, we have B(f_ω,j(x),ε)=B(e,ε)· f_ω,j(x) where e represents the identity element. Consequently, f_ω,j^-1B(f_ω,j(x),ε)=f_ω,j^-1(B(e,ε)· f_ω,j(x) ). 
Notice that for any z ∈ B(e,ε), f_ω,j^-1(z· f_ω,j(x) ) = f_i_1^-1∘ f_i_2^-1∘⋯∘ f_i_j^-1( z· f_ω,j(x)) = f_i_1^-1∘ f_i_2^-1∘⋯∘ f_i_j-1^-1 L^-1( z· f_ω,j(x)) · L^-1(x_i_j^-1) = f_i_1^-1∘ f_i_2^-1∘⋯∘ f_i_j-1^-1 L^-1(z)L^j-1(x)L^j-2(x_i_1)⋯ x_i_j-1· L^-1(x_i_j)· L^-1(x_i_j^-1) = f_i_1^-1∘ f_i_2^-1∘⋯∘ f_i_j-1^-1 L^-1(z)L^j-1(x)L^j-2(x_i_1)⋯ x_i_j-1 ⋯ = L^-j(z) · x. Hence, f_ω,j^-1B(f_ω,j(x),ε)=f_ω,j^-1(B(e,ε)· f_ω,j(x) )=L^-j(B(e,ε))· x, which implies that that B_ω,k^G(x,ε)=(⋂_j=0^k-1L^-j( B(e,ε)))· x. Moreover, since μ is right-invariant, it follows that the limit processes of h(ω,x) and H(ω,x) are uniform. In classical dynamical systems, Bowen <cit.> introduced the concept of homogeneous measure, where the Haar measure serves as a homogeneous measure when L: X → X denotes an affine transformation acting on a compact metric group X. Verbitskiy <cit.> computed the order correlation entropy of homogeneous measure and demonstrated the equality between any order correlation entropy, measure-theoretic entropy, and topological entropy. This equivalence extends to free semigroup actions, where these entropies, computed by definition, also coincide due to the inherent strong uniformity of homogeneous measures. Theorems <ref>, <ref>, and <ref> are established by relaxing the uniformity constraints on the measure while incorporating additional conditions. § DECLARATIONS Conflict of interest The authors declare their is no conflict of interest. § ACKNOWLEDGMENTS The authors express their sincere appreciation for the insightful remarks and constructive suggestions provided by the referees, which have significantly enhanced the quality of this manuscript. Additionally, the authors would like to acknowledge Prof. Xiaogang Lin for his guidance in English academic writing. § REFERENCES plain
http://arxiv.org/abs/2406.17755v1
20240625174152
Accelerating Clinical Evidence Synthesis with Large Language Models
[ "Zifeng Wang", "Lang Cao", "Benjamin Danek", "Yichi Zhang", "Qiao Jin", "Zhiyong Lu", "Jimeng Sun" ]
cs.CL
[ "cs.CL" ]
Accelerating Clinical Evidence Synthesis with Large Language Models Zifeng Wang^1, Lang Cao^1, Benjamin Danek^1, Yichi Zhang^1, Qiao Jin^2, Zhiyong Lu^2, Jimeng Sun^1,3# ^1 Department of Computer Science, University of Illinois Urbana-Champaign, Champaign, IL ^2 National Center for Biotechnology Information, National Library of Medicine, Bethesda, MD ^3 Carle Illinois College of Medicine, University of Illinois Urbana-Champaign, Champaign, IL ^#Corresponding authors. Emails: jimeng@illinois.edu ============================================================================================================================================================================================================================================================================================================================================================================================================================================================ Automatic medical discovery by AI is a dream of many. One step toward that goal is to create an AI model to understand clinical studies and synthesize clinical evidence from the literature. Clinical evidence synthesis currently relies on systematic reviews of clinical trials and retrospective analyses from medical literature. However, the rapid expansion of publications presents challenges in efficiently identifying, summarizing, and updating evidence. We introduce , a generative AI-based pipeline for conducting medical systematic reviews, encompassing study search, screening, and data extraction phases. We utilize large language models (LLMs) to drive each pipeline component while incorporating human expert oversight to minimize errors. To facilitate evaluation, we also create a benchmark dataset , a custom dataset with 870 annotated clinical studies from 25 meta-analysis papers across various medical treatments. Our results demonstrate that significantly improves the literature review process, achieving high recall rates (0.897-1.000) in study searching from over 20 million PubMed studies and outperforming traditional language model embeddings-based methods in screening (Recall@20 of 0.227-0.246 vs. 0.000-0.102). Furthermore, our approach surpasses direct GPT-4 performance in result extraction, with accuracy ranging from 0.65 to 0.84. We also support clinical evidence synthesis in forest plots, as validated by eight human annotators who preferred over the GPT-4 baseline with a winning rate of 62.5%-100% across the involved reviews. Our findings suggest that an LLM-based clinical evidence synthesis approach, such as , can enable reliable and high-quality clinical evidence synthesis to improve clinical research efficiency. § INTRODUCTION Clinical evidence is crucial for supporting clinical practices and advancing new drug development. It is primarily gathered through retrospective analysis of real-world data or through prospective clinical trials that assess new interventions on humans. Researchers often conduct systematic reviews to consolidate evidence from various studies in the literature <cit.>. However, conducting a systematic literature review is expensive and time-consuming, requiring five experts to analyze 195 publications over 67 weeks on average <cit.>. Moreover, the fast growth of clinical study databases means that the information in these published clinical reviews becomes outdated rapidly <cit.>. 
This situation underscores the urgent need to streamline the systematic review processes to produce systematic and timely clinical evidence from the extensive medical literature <cit.>. Large language models (LLMs) excel at information processing and generating, showing great promise in streamlining the clinical evidence synthesis process. Adapting LLMs to new tasks can be done by providing them the task definition and examples as the text inputs (namely `prompts') without the need of retraining the model <cit.>. Researchers have tried to adopt LLMs for individual tasks in literature review <cit.>. For example, by enhancing inputs with multiple papers, LLMs can summarize findings to answer medical questions <cit.>. This strategy helps reduce hallucinations but still faces challenges when the input studies do not adequately support the posed questions, which requires more effort in literature search and screening steps. Furthermore, LLMs often demonstrate limitations in reasoning with numerical data found in clinical studies. The qualitative clinical evidence generated using raw paper content via prompting can be overly generic, lack critical information, or misinterpret results <cit.>. Therefore, we propose developing an LLM-driven pipeline to assist the entire workflow, including formulating research questions, conducting literature mining, extracting information, and synthesizing clinical evidence. This includes a comprehensive evaluation of LLMs in the entire process, which was not well-explored <cit.>. This study aims to fulfill the potential of AI in helping medical practitioners with the entire clinical evidence synthesis process using AI. We demonstrate how our method is optimized for the clinical evidence synthesis task via: (1) generating boolean queries to search from the literature; (2) building inclusion and exclusion criteria to screen through the found studies; (3) extracting key information, including study protocols, methods, participant baselines, etc., from unstructured documents, per users' request; and (4) synthesizing high-quality clinical evidence. Unlike previous approaches, integrates LLMs into an AI pipeline by breaking down the task into multiple steps that align with expert practices. This approach maintains flexibility by involving humans in the loop to monitor, edit, and verify all intermediate steps and the final synthesized outputs. This paper curated a benchmark dataset for a comprehensive evaluation. The dataset includes 870 involved clinical studies and more than 50,000 identified studies from 25 meta-analyses. It also consists of manual annotations of 1,334 study characteristics and 1,049 study results. We show that the is able to 1) retrieve a complete list of target studies from the literature, 2) follow the specified eligibility criteria to rank the most relevant studies at the top, and 3) achieve high accuracy in extracting information and clinical outcomes from unstructured documents based on user requests. Additionally, the extracted clinical outcomes can be standardized as input for meta-analysis (e.g., forest plots). We conducted a human evaluation of the synthesized evidence to demonstrate the potential value of in practice. § RESULTS §.§ Creating from medical literature A systematic understanding of cancer treatments is crucial for oncology drug discovery and development. Using a large list of cancer treatments from the National Cancer Institute (NCI) <cit.>, we curated a dataset of systematic reviews. 
To ensure data quality, we crafted comprehensive queries with automatic filtering and manual screening. For each review, we obtained the list of studies with their PubMed IDs, retrieved their full content, and extracted study characteristics and clinical outcomes. We followed PubMed's usage policy and guidelines during retrieval. Further manual checks were performed to correct inaccuracies, eliminate invalid and duplicate papers, and refine the text for clarity (Methods). The final dataset consists of 870 studies involved in 25 reviews (Fig. [fig:exp_search_recall_combined]2a), including (1) Brachytherapy <cit.>: 3 reviews with 189 involved studies and 5,811 identified studies; (2) Chemotherapy <cit.>: 3 reviews with 126 involved studies and 5,874 identified studies; (3) Hyperthermia <cit.>: 3 reviews with involved 28 studies and 5,972 identified studies; (4) Hormone Therapy <cit.>: 3 reviews with 30 involved studies and 5,970 identified studies; (5) Cancer Vaccine <cit.>: one review with 6 involved studies and 1,994 related studies; (6) Immune Checkpoint Inhibitors <cit.>: 3 reviews with 187 involved studies and 5,813 identified studies; (7) Immune System Modulators <cit.>: one review with 9 involved studies and 1,991 identified studies; (8) Monoclonal Antibodies <cit.>: 3 reviews with 228 involved studies and 5,772 identified studies; (9) T-cell Transfer Therapy <cit.>: 2 reviews with 36 involved studies and 3,964 identified studies; (10) Stem Cell Transplant <cit.>: 3 reviews with 31 involved studies and 5,969 identified studies. The detailed characteristics of these studies are in Extended Table <ref>. §.§ Build an LLM-driven system for clinical evidence synthesis Large language models (LLMs) excel in adapting to new tasks when provided with task-specific prompts while often struggling with complex tasks that require multi-step reasoning. Additionally, interacting and collaborating with LLMs can be problematic due to their opaque nature and the complexity of debugging <cit.>. In this study, we developed that decomposes the clinical evidence synthesis process into four main tasks (Fig. <ref> and Methods). Initially, using the provided research question enriched with population, intervention, comparison, and outcome (PICO) elements, conducts a comprehensive search from the literature. It also works with users to build the criteria for study screening and rank the studies. Next, browses the study details to extract the study characteristics and pertinent findings. To ensure the accuracy and integrity of the data, each output is linked to the sources for manual inspection. In the final step, standardizes the clinical outcomes for meta-analysis. §.§ can make a comprehensive retrieval of studies from the literature Finding relevant studies from medical literature like PubMed, which contains over 37 million entries, can be challenging. Typically, this requires the research expertise to craft complex queries that comprehensively cover pertinent studies. The challenge lies in balancing the specificity of queries: too stringent, and the search may miss relevant studies; too broad, and it becomes impractical to manually screen the overwhelming number of results. Previous approaches propose to prompt LLMs to generate the searching query in one pass <cit.>, which can induce incomplete searching results due to the limited knowledge of LLMs. In contrast, is designed to produce comprehensive queries through a pipeline that includes query generation, augmentation, and refinement. 
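As a rough illustration of what the output of such a pipeline looks like, the sketch below assembles a boolean query from PICO-style concept groups and scores a retrieved set of PubMed IDs with the Recall metric used in this section. The term groups and PMIDs are hypothetical placeholders, not the paper's prompts or data.

from typing import Iterable, Sequence

def build_boolean_query(term_groups: Sequence[Sequence[str]]) -> str:
    # OR-join synonyms within a concept, AND-join concepts across groups.
    clauses = ["(" + " OR ".join(f'"{t}"[Title/Abstract]' for t in group) + ")"
               for group in term_groups]
    return " AND ".join(clauses)

def recall(found: Iterable[str], target: Iterable[str]) -> float:
    found, target = set(found), set(target)
    return len(found & target) / len(target) if target else 0.0

if __name__ == "__main__":
    query = build_boolean_query([
        ["immune checkpoint inhibitor", "PD-1 inhibitor", "pembrolizumab"],   # intervention
        ["non-small cell lung cancer", "NSCLC"],                              # population
        ["overall survival", "progression-free survival"],                    # outcomes
    ])
    print(query)
    # The query string can then be submitted to PubMed E-utilities, e.g. with Biopython:
    #   from Bio import Entrez
    #   Entrez.email = "you@example.org"
    #   ids = Entrez.read(Entrez.esearch(db="pubmed", term=query, retmax=100000))["IdList"]
    retrieved = {"11111111", "22222222", "33333333"}      # stand-in search result
    ground_truth = {"22222222", "33333333", "44444444"}   # PMIDs cited by the review
    print(f"Recall = {recall(retrieved, ground_truth):.3f}")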
It also provides users with the ability to make adjustments (Fig. [fig:exp_search_recall_combined]2b). The dataset involving clinical studies spanning ten cancer treatment areas was used for evaluation (Fig. [fig:exp_search_recall_combined]2a). For each review, we collected the involved studies' PubMed IDs as the ground-truth and measured the Recall, i.e., how many ground-truth studies are found in the search results. We created two baselines as the comparison: GPT-4 and Human. The GPT-4 baseline makes a guided prompt for LLMs to generate the boolean queries <cit.>. It represents the common way of prompting LLMs for literature search query generation. The Human baseline represents a way where the key terms from PICO elements are extracted manually and expanded, referring to UMLS <cit.>, to construct the search queries. Overall, achieved a Recall of 0.921 on average for all reviews in , meaning it can capture all target studies most of the time. By contrast, the GPT-4 baseline yielded Recall = 0.079, and the Human baseline yielded Recall = 0.230. We divided the search results across four topics determined by the treatments studied in each review (Fig. [fig:exp_search_recall_combined]2c). Our analysis showed that can identify many more studies than the baselines. For instance, achieved = 0.914 (N studies = 28,863) for Immunotherapy-related reviews, while the GPT-4 baseline got = 0.1 (N studies = 22), and the Human baseline got Recall = 0.154 (N studies = 153), respectively. In Radiation/Chemotherapy, achieved = 0.897, the GPT-4 baseline got = 0.017, and the Human baseline got = 0.304. In Hormone Therapy, achieved = 0.924, the GPT-4 baseline got = 0.100, and the Human baseline got = 0.150. In Hyperthermia, achieved = 1.000, the GPT-4 baseline got = 0.133, and the Human baseline got = 0.683. These results demonstrate that regardless of the search task's complexity, as indicated by the variability in the Human baseline, consistently retrieves nearly all target studies from the PubMed database. This robust performance provides a solid foundation for accurately identifying target studies in the screening phase. Furthermore, we made scatter plots of Recall versus the number of target studies for each review (Fig. [fig:exp_search_recall_combined]2d). The hypothesis was that an increase in target studies correlates with the difficulty of achieving complete coverage. Our findings reveal that consistently maintained a high Recall, nearing 1 in many instances and never falling below 0.7, significantly outperforming the best baselines across all 25 reviews. A trend of declining Recall with an increasing number of target studies was confirmed through regression analysis. For example, with fewer than 20 target studies, achieved perfect Recall for most reviews, while the GPT-4 baseline struggled, showing Recall close to 0, and the Human baseline results varied between 0 and 0.85. As the number of target studies increased, the Human and GPT-4 baselines' Recall decreased to nearly zero. In contrast, demonstrated remarkable resilience, showing minimal variation in performance despite the increasing number of target studies. For instance, in a review involving 141 studies, achieved a Recall of 0.99, while the GPT-4 and Human baselines obtained a Recall of 0.02 and 0, respectively. §.§ enhances literature screening and ranking Typically, human experts manually sift through thousands of retrieved studies to select relevant ones for inclusion in a systematic review. 
This process adheres to the PRISMA statement <cit.>, which involves creating a list of eligibility criteria and assessing each study's eligibility. streamlines this task through a three-step approach: (1) it generates a set of inclusion criteria, which are subject to user's adjustments; (2) it applies these criteria to evaluate the study's eligibility, denoted by {-1,0,1} where -1 and 1 represent eligible and non-eligible, and 0 represents unknown/uncertain, respectively; and (3) it ranks the studies by aggregating the eligibility predictions, where the aggregation strategy can be specified by users (Fig. [fig:exp_screen_combined]3a). We took a summation of the criteria-level eligibility predictions as the study-level relevance prediction scores for ranking. As such, provides a rationale for the relevance scores by detailing the eligibility predictions for each criterion. We chose MPNet <cit.> and MedCPT <cit.> as the general domain and medical domain ranking baselines, respectively. These methods compute study relevance by the cosine similarity between the encoded PICO elements as the query and the encoded study's abstracts. We also set a Random baseline that randomly samples from candidates. We created the evaluation data based on the search results in the first stage. For each review, we mixed the target studies with the other found studies to build a candidate set of 2,000 studies for ranking. Discriminating the target studies from the other candidates is challenging since all candidates meet the search queries, meaning they most probably investigate the relevant therapies or conditions. We evaluated the ranking performance using the Recall@20 and Recall@50 metrics. The concatenation of the title and abstract of each study is used for all methods as inputs. We found that greatly improved ranking performances, with the fold changes over the best baselines ranging from 1.4 to 24 across four topics (Table [fig:exp_screen_combined]3c). For instance, for the Hormone Therapy topic, obtained @20 = 0.240 and @50=0.439. In the Hyperthermia topic, obtained @20 = 0.246 and @50=0.306. In the Immunotherapy topic, obtained @20 = 0.230 and @50=0.383. In the Radiation/Chemotherapy topic, obtained @20 = 0.227 and @50=0.335. demonstrated the largest advantage over baselines on the Hormone Therapy topic (fold change = 24 and 10.5 for @20 and @50 compared to the best baselines). In contrast, other baselines exhibit significant variability across different topics. The general domain baseline MPNet was the worst as it performed similarly to the Random baseline in @20. MedCPT showed marginal improvement over MPNet in the last three topics, while both failed to capture any target studies in Hormone Therapy. Furthermore, demonstrated significant improvements over the baselines across various therapeutic areas (Fig. [fig:exp_screen_combined]3b). For example, in “Cancer Vaccines" and “Hormone Therapy," substantially increased @50, achieving 33.33-fold and 10.53-fold improvements, respectively, compared to the best-performing baseline. generally attained a fold change greater than 2 (ranging from 1.57 to 33.33). Despite the challenge of selecting from a large pool of candidates (n=2,000) where candidates were very similar, identified an average of 43% of target studies within the top 50. We compared to MedCPT and MPNet for @K (K in 10 to 200) to gain insight into how K influences the performances (Fig. [fig:exp_screen_combined]3e). 
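For reference, the scoring and metric behind these comparisons reduce to a few lines. The sketch below uses made-up criterion labels: it sums the per-criterion eligibility predictions in {-1, 0, 1} into a study-level relevance score, ranks the candidates, and computes Recall@K. The leave-one-out analysis reported below amounts to recomputing Recall@K with one criterion's labels removed and taking the difference.

from typing import Dict, List, Set

def relevance_score(criterion_labels: List[int]) -> int:
    # summation aggregation; users could swap in a weighted or veto-style rule
    return sum(criterion_labels)

def recall_at_k(ranked_ids: List[str], target_ids: Set[str], k: int) -> float:
    return len(set(ranked_ids[:k]) & target_ids) / len(target_ids)

if __name__ == "__main__":
    predictions: Dict[str, List[int]] = {    # study id -> label per inclusion criterion
        "pmid:101": [1, 1, 0, 1],
        "pmid:102": [1, -1, 0, 0],
        "pmid:103": [0, 0, 0, -1],
        "pmid:104": [1, 1, 1, 1],
    }
    ranked = sorted(predictions, key=lambda s: relevance_score(predictions[s]), reverse=True)
    targets = {"pmid:101", "pmid:104"}
    print(ranked)
    print(f"Recall@2 = {recall_at_k(ranked, targets, 2):.2f}")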
We found can capture most of the target studies when K=200, as it obtained @200=0.697 in Immunotherapy, @200=0.618 in Radiation/Chemotherapy, @200=0.744 in Hormone Therapy, @200=0.830 in Hyperthermia, respectively. The improvement over baselines is especially significant in Hormone Therapy and Hyperthermia, as the other baselines did not outperform the Random performances all the time. For instance, in Hormone Therapy, MedCPT's @200=0.122 and MPNet's @200=0.083 (Random's @200 = 0.100). To thoroughly assess the quality of these criteria and their impact on ranking performance, we conducted a leave-one-out analysis to calculate Δ@200 for each criterion (Fig. [fig:exp_screen_combined]3d). The Δ@200 metric measures the difference in ranking performance with and without a specific criterion, with a larger value indicating superior criterion quality. Our findings revealed that most criteria positively influenced ranking performances, as the negative influence criteria are n=1 in Hormone Therapy, n=1 in Hyperthermia, n=5 in Radiation/Chemotherapy, and n=7 in Immunotherapy. Additionally, we identified redundancies among the generated criteria, as those with Δ@200=0 were the most frequently observed. This redundancy likely stems from some criteria covering similar eligibility aspects, thus not impacting performance when one is omitted. §.§ scales data and result extraction from unstructured documents leverages LLMs to streamline extracting study characteristics such as target therapies, study arm design, and participants' baseline information from involved studies. Specifically, refers to the field names and the descriptions from users and use the full content of the study documents in PDF or XML formats as inputs (Fig. [fig:exp_extraction_combined]4a). When the free full content is unavailable, accepts the user-uploaded content as the input. We developed an evaluation dataset by converting the study characteristic tables from each review paper into data points. Our dataset comprises 1,334 target data points, including 696 on study design, 353 on population features, and 285 on results. We assessed the data extraction performance using the Accuracy metric. demonstrated strong extraction performance across various topics (Fig. [fig:exp_extraction_combined]4b): it achieved an accuracy of =0.78 (95% confidence interval (CI) = 0.75–0.81) in the Immunotherapy topic, =0.77 (95% CI = 0.72-0.82) in the Radiation/Chemotherapy topic, =0.72 (95% CI = 0.63-0.80) in the Hormone Therapy topic, and =0.83 (95% CI = 0.74-0.90) in the Hyperthermia topic. These results indicate that can provide a solid initial data extraction, which human experts can refine. Importantly, each output can be cross-checked by the linked original sources, facilitating verification and further investigation. Diving deeper into the accuracy across different types of fields, we observed varying performance levels. It performed best in extracting study design information, followed by population details, and showed the lowest accuracy in extracting results (Fig. [fig:exp_extraction_combined]4b). For example, in the Immunotherapy topic, achieved an accuracy of =0.95 (95% CI = 0.92-0.96) for study design, =0.74 (95% CI = 0.67-0.80) for population data, and =0.42 (95% CI = 0.36-0.49) for results. This variance can be attributed to the prevalence of numerical data in the fields: fields with more numerical data are typically harder to extract accurately. 
Study design is mostly described in textual format and is directly presented in the documents, whereas population and results often include numerical data such as the number of patients or gender ratios. Results extraction is particularly challenging, often requiring reasoning and transformation to capture values accurately. Given these complexities, it is advisable to scrutinize the extracted numerical data more carefully. We also evaluated the robustness of against hallucinations and missing information (Fig. [fig:exp_extraction_combined]4c). We constructed a confusion matrix detailing instances of hallucinations: false positives (FP) where generated data not present in the input document and false negatives (FN) where it failed to extract available target field information. We observed that achieved a precision of =0.994 for study design, =0.966 for population, and =0.862 for study results. Missing information was slightly more common than hallucinations, with achieving recall rates of =0.946 for study design, =0.889 for population, and =0.930 for study results. The incidence of both hallucinations and missing information was generally low. However, hallucinations were notably more frequent in study results; this often occurred because LLMs could confuse definitions of clinical outcomes, for example, mistaking `overall response' for `complete response.' Nevertheless, such hallucinations are typically manageable, as human experts can easily identify and correct them while reviewing the referenced material. The challenges in extracting study results primarily stem from (1) identifying the locations that describe the desired outcomes from lengthy papers, (2) accurately extracting relevant numerical values such as patient numbers, event counts, durations, and ratios from the appropriate patient groups, and (3) performing the correct calculations to standardize these values for meta-analysis. In response to these complexities, we developed a specialized pipeline for result extraction (Fig. [fig:exp_extraction_combined]4g), where users provide the interested outcome and the cohort definition. offers a transparent extraction workflow, documenting the sources of results along with the intermediate reasoning and calculations. We compared against two generalist LLM baselines, GPT-4 and Sonnet, which were prompted to extract the target outcomes from the full content of the study documents. Since the baselines can only make text extractions, we manually convert them into numbers suitable for meta-analysis <cit.>. This made very strong baselines since they combined LLM extraction with human post-processing. We assessed the performance using the Accuracy metric. The evaluation conducted across four topics demonstrated the superiority of (Fig. [fig:exp_extraction_combined]4d). Specifically, in Immunotherapy, achieved an accuracy of =0.70 (95% CI 0.62-0.77), while GPT-4 scored =0.54 (95% CI 0.45-0.62). In Radiation/Chemotherapy, reached =0.65 (95% CI 0.51-0.76), compared to GPT-4's =0.52 (95% CI 0.39-0.65). For Hormone Therapy, achieved =0.80 (95% CI 0.58-0.92), outperforming GPT-4, which scored =0.50 (95% CI 0.30-0.70). In Hyperthermia, obtained an accuracy of =0.84 (95% CI 0.71-0.92), significantly higher than GPT-4's =0.52 (95% CI 0.39-0.65). The breakdowns of evaluation results by the most frequent types of clinical outcomes (Fig. [fig:exp_extraction_combined]4e) showed got fold changes in accuracy ranging from 1.05 to 2.83 and a median of 1.50 over the best baselines. 
This enhanced effectiveness is largely attributable to 's ability to accurately identify the correct data locations and apply logical reasoning, while the baselines often produced erroneous initial extractions. We analyzed the error cases in our result extraction experiments and identified four primary error types (Fig. [fig:exp_extraction_combined]4f). The most common error was `Inaccurate' extraction (n=36), followed by `Extraction failure' (n=27), `Unavailable data' (n=10), and `Hallucinations' (n=3). `Inaccurate' extractions often occurred due to multiple sections ambiguously describing the same field. For example, a clinical study might report the total number of participants receiving CAR-T therapy early in the document and later provide outcomes for a subset with non-small cell lung cancer (NSCLC). The specific results for NSCLC patients are crucial for reviews focused on this subgroup, yet the presence of general data can lead to confusion and inaccuracies in extraction. `Extraction failure' and `Unavailable data' both illustrate scenarios where could not retrieve the information. The latter case particularly showcases 's robustness against hallucinations, as it failed to extract data outside the study's main content, such as in appendices, which were not included in the inputs. Furthermore, errors caused by hallucinations were minor. The outputs were easy to identify and correct through manual inspection since no references were provided. §.§ synthesizes clinical evidence from extracted results We engaged human annotators to assess the quality of synthesized clinical evidence presented in forest plots, a format commonly used in systematic reviews to report meta-analysis results (Fig. [fig:exp_human_eval]5a). We selected five systematic review studies as benchmarks and referenced the clinical evidence reported in the target studies. The baseline used GPT-4 with a simple prompting to extract the relevant text pieces that report the target outcome of interest (Methods). Manual calculations were necessary to standardize the data for meta-analysis. In contrast, automated the extraction and standardization. Each annotator was asked to evaluate the evidence quality by comparing it against the evidence reported in the target review and deciding which method, or the baseline, produced superior results. Additionally, they rated the quality of the synthesized clinical evidence on a scale of 1 to 5. The assignment of our method and the baseline was randomized to ensure objectivity. The evaluation highlighted 's superior performance compared to the direct use of GPT-4 for clinical evidence synthesis (Fig. [fig:exp_human_eval]5b). We calculated the winning rate of versus the baseline across the five studies. The results indicate a consistent preference by annotators for the evidence synthesized by over that of the baseline. Specifically, achieved winning rates of 87.5%, 100%, 62.5%, 62.5%, and 81.2%, respectively. The baseline's primary shortcoming stemmed from the initial extraction step, where GPT-4 often failed to identify the relevant sources without well-crafted prompting. Therefore, the subsequent manual post-processing was unable to rectify these initial errors. In addition, we illustrated the ratings of and the baseline across studies (Fig. [fig:exp_human_eval]5c). We found was competent as the GPT-4+Human baseline and outperformed the baseline in many scenarios. 
For example, obtained the mean rating of 4.25 (95% CI 3.93-4.57) in Study #1 while the baseline obtained 3.50 (95% CI 3.13-3.87). In Study #2, yielded 3.50 (95% CI 3.13-3.87) while the baseline yielded 1.25 (95% CI 0.93-1.57). The performance of the two methods was comparable in the remaining three studies. These results highlight as a highly effective alternative to conventional LLM usage in evidence synthesis, streamlining data extraction and processing while maintaining the critical benefit of human oversight. Finally, we requested that annotators self-assess their expertise level in clinical studies, classifying themselves into three categories: `Basic', `Familiar', and `Advanced'. The typical profile ranges from computer scientists at the basic level to medical doctors at the advanced level. We then analyzed the ratings given to both methods across these varying expertise levels (Fig. [fig:exp_human_eval]5d). We consistently observed higher ratings for than the baseline across all groups. Annotators with basic knowledge tended to provide more conservative ratings, while those with more advanced expertise offered a wider range of evaluations. For instance, the `Basic' group provided average ratings of 3.67 (95% CI 3.34-3.39) for compared to 3.22 (95% CI 2.79-3.66) for the baseline. The `Advanced' group rated at an average of 3.40 (95% CI 3.16-3.64) and the baseline at 3.07 (95% CI 2.75-3.39). § DISCUSSION Clinical evidence forms the bedrock of evidence-based medicine, crucial for enhancing healthcare decisions and guiding the discovery and development of new drugs. It often comes from a systematic review of diverse studies found in the literature, encompassing clinical trials and retrospective analyses of real-world data. Yet, the burgeoning expansion of literature databases presents formidable challenges in efficiently identifying, summarizing, and maintaining the currency of this evidence. The rapid development of large language models (LLMs) and generative AI technologies has generated considerable interest in their potential applications. However, implementing these models in a manner that is collaborative, transparent, and trustworthy poses significant challenges, especially in critical areas such as medicine <cit.>. For instance, when utilizing LLMs to summarize multiple studies, the summaries often merely echo the findings verbatim, omit crucial details, and fail to adhere to established best practices <cit.>. This study introduces a clinical evidence synthesis pipeline enhanced by LLMs, named . This pipeline is structured in accordance with established medical systematic review protocols, involving steps such as study searching, screening, data/result extraction, and evidence synthesis. At each stage, human experts have the capability to access, monitor, and modify intermediate outputs. This human oversight helps to eliminate errors and prevents their propagation through subsequent stages. Unlike approaches that solely depend on the internal knowledge of LLMs, integrates human expertise through in-context learning and chain-of-thought prompting. Additionally, extends external knowledge sources to its outputs through retrieval-augmented generation and leveraging external computational tools to enhance the LLM's reasoning and analytical capabilities. Comparative evaluations of and traditional LLM approaches have demonstrated the advantages of this system design in LLM-driven applications within the medical field. This study also has several limitations. 
First, despite incorporating multiple techniques, LLMs may still make errors at any stage. Therefore, human oversight and verification remain crucial when implementing in practical settings. Second, the prompts used in were developed based on prompt engineering experience, suggesting potential for performance enhancement through advanced prompt optimization or by fine-tuning the underlying LLMs to suit specific tasks better. Third, while demonstrated effectiveness in study search, screening, and information extraction, the dataset used was limited in size due to the high costs associated with human labeling. Future research could expand on these findings with larger datasets to further validate the method's effectiveness. Fourth, the study coverage was restricted to publicly available sources from PubMed Central, which provides structured PDFs and XMLs. Many relevant studies are either not available on PubMed or are in formats that entail OCR algorithms as preprocessing, indicating a need for further engineering to incorporate broader data sources. Fifth, although illustrated the potential of using advanced LLMs like GPT-4 to streamline clinical evidence synthesis, developing techniques to adapt the pipeline for use with other LLMs could increase its applicability. Finally, while the use of LLMs like GPT-4 can accelerate study screening and data extraction, the associated costs and processing times may present bottlenecks in some scenarios. Future enhancements that improve efficiency or utilize localized, specialized smaller models could increase practical utility. LLMs have made significant strides in AI applications. exemplifies a crucial aspect of system engineering in LLM-driven pipelines, facilitating the practical, robust, and transparent use of LLMs. We anticipate that will benefit the medical AI community by fostering the development of LLM-driven medical applications and emphasizing the importance of human-AI collaboration. § METHODS §.§ Description of the Dataset The overall flowchart for the study identification and screening process in building is illustrated in Extended Fig. <ref>. Database search and initial filtering We undertook a comprehensive search on the PubMed database for meta-analysis papers related to cancer. The Boolean search terms were specifically chosen to encompass a broad spectrum of cancer-related topics. These terms included “cancer", “oncology", “neoplasm", “carcinoma", “melanoma", “leukemia", “lymphoma", and “sarcoma". Additionally, we incorporated terms related to various treatment modalities such as “therapy", “treatment", “chemotherapy", “radiation therapy", “immunotherapy", “targeted therapy", “surgical treatment", and “hormone therapy". To ensure that our search was exhaustive yet precise, we also included terms like “meta-analysis" and "systematic review" in our search criteria. This initial search yielded an extensive pool of 46,192 results, reflecting the vast research conducted in these areas. We applied specific filters to refine these results and ensure relevance and quality. We focused on articles where PMC Full text was available and specifically categorized under “Meta-Analysis". Further refinement was done by restricting the time frame of publications to those between January 1, 2020, and January 1, 2023. We also narrowed our focus to studies conducted on humans and those available in English. This filtration process was critical in distilling the initial results into a more manageable and focused collection of 2,691 papers. 
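For readers who want to reproduce a search of this kind, the sketch below queries PubMed through the public NCBI E-utilities API; the query string and filters are simplified illustrations and are not the exact Boolean expression that produced the counts reported below.

import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
query = (
    "(cancer OR oncology OR neoplasm OR carcinoma) AND "
    "(therapy OR chemotherapy OR immunotherapy OR \"hormone therapy\") AND "
    "meta-analysis[pt] AND humans[MeSH Terms] AND english[Language] AND "
    "\"pubmed pmc\"[sb] AND (\"2020/01/01\"[PDAT] : \"2023/01/01\"[PDAT])"
)
resp = requests.get(
    ESEARCH,
    params={"db": "pubmed", "term": query, "retmax": 100, "retmode": "json"},
    timeout=30,
)
pmids = resp.json()["esearchresult"]["idlist"]
print(len(pmids), pmids[:5])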
Refinement Building upon our initial search, we employed further refinement techniques using both MeSH terms and specific keywords. The MeSH terms were carefully selected to target papers precisely relevant to various forms of cancer. These terms included “cancer", “tumor", “neoplasms", “carcinoma", “myeloma", and “leukemia". This focused approach using MeSH terms effectively reduced our selection to 1,967 papers. To further dive in on papers investigating cancer therapies, we utilized many keywords derived from the National Cancer Institute’s “Types of Cancer Treatment" list. This approach was multi-faceted, with each set of keywords targeting a specific category of cancer therapy. For chemotherapy, we included terms like “chemotherapy", “chemo", and related variations. In the realm of hormone therapy, we searched for phrases such as "hormone therapy", "hormonal therapy", and similar terms. The keyword group for hyperthermia encompassed terms like “hyperthermia", “microwave", “radiofrequency", and related technologies. For cancer vaccines, we included keywords such as “cancer vaccines", “cancer vaccine", and other related terms. The search for immune checkpoint inhibitors and immune system modulators was comprehensive, including terms like “immune checkpoint inhibitors", “immunomodulators", and various cytokines and growth factors. Lastly, our search for monoclonal antibodies and T-cell transfer therapy included relevant terms like “monoclonal antibodies", “t-cell therapy", “car-t", and other related phrases. The careful application of keyword filtering played a crucial role in narrowing down our pool of research papers to a more focused and relevant set of 352. It represents a diverse and meaningful collection of studies in cancer therapy, highlighting a range of innovative and impactful research within this field. Manual screening of titles and abstracts Then, we manually screened titles and abstracts, applying a rigorous classification and sorting methodology. The remaining papers were first categorized based on the type of cancer treatment they explored. We then organized these papers by their citation count to gauge their impact and relevance in the field. Our selection criteria aimed to enhance the quality and relevance of our final dataset. We prioritized papers that focused on the study of treatment effects, such as safety and efficacy, of various cancer interventions. We preferred studies that compared individual treatments against a control group, as opposed to those examining the effects of combined therapies (e.g., Therapy A+B vs. A only). To build a list of representative meta-analyses, we needed to ensure diversity in the target conditions under each treatment category. Further, we favored studies that involved a larger number of individual studies, providing a broader base of evidence. However, we excluded network analysis studies and meta-analyses that focused solely on prognostic and predictive effects, as they did not align with our primary research focus. To maintain a balanced representation, we limited our selection to a maximum of three papers per treatment category. This process culminated in a final dataset comprising 25 papers. This curated collection forms the backbone of our analysis, ensuring a concentrated and pertinent selection of high-quality studies directly relevant to our research objectives. The characterstics of the created dataset is in Extended Table <ref>. 
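The keyword-based refinement can be approximated with straightforward string matching; the snippet below is a simplified illustration (the keyword groups are abbreviated and the matching rules are assumptions) of how records could be tagged with treatment categories before manual screening.

import pandas as pd

keyword_groups = {
    "chemotherapy": ["chemotherapy", "chemo"],
    "hormone therapy": ["hormone therapy", "hormonal therapy"],
    "hyperthermia": ["hyperthermia", "microwave", "radiofrequency"],
    "cancer vaccines": ["cancer vaccine", "cancer vaccines"],
    "t-cell transfer": ["t-cell therapy", "car-t"],
}

def tag_categories(text: str) -> list:
    # Return every treatment category whose keywords appear in the title/abstract.
    text = text.lower()
    return [cat for cat, kws in keyword_groups.items() if any(k in text for k in kws)]

df = pd.DataFrame({"title": ["CAR-T therapy in relapsed myeloma", "Hyperthermia plus radiotherapy"]})
df["categories"] = df["title"].apply(tag_categories)
print(df)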
§.§ LLM Prompting Prompting steers LLMs to conduct the target task without training the underlying LLMs. carries out clinical evidence synthesis in multiple steps, each associated with a series of prompting techniques. In-context learning LLMs exhibit a profound ability to comprehend input requests and adhere to provided instructions during generation. The fundamental concept of in-context learning (ICL) is to enable LLMs to learn from examples and task instructions within a given context at inference time <cit.>. Formally, for a specific task, we define T as the task prompt, which includes the task definition, input format, and desired output format. During a single inference session with input X, the LLM is prompted with P(T, X), where P(·) is a transformation function that restructures the task definition T and input X into the prompt format. The output X̂ is then generated as X̂ = LLM(P(T, X)). Retrieval-augmented generation LLMs that rely solely on their internal knowledge often produce erroneous outputs, primarily due to outdated information and hallucinations. This issue can be mitigated through retrieval-augmented generation (RAG), which enhances LLMs by dynamically incorporating external knowledge into their prompts during generation <cit.>. We denote R_K(·) as the retriever that utilizes the input X to source relevant contextual information through semantic search. R_K(·) enables the dynamic infusion of tailored knowledge into LLMs at inference time. Chain-of-thought Chain-of-thought (CoT) prompting guides LLMs in solving a target task in a step-by-step manner in one inference, hence handling complex or ambiguous tasks better and inducing more accurate outputs <cit.>. CoT employs the function P_CoT(·) to structure the task T into a series of chain-of-thought steps {S_1, S_2, …, S_T}. As a result, we obtain {X̂_S^1, …, X̂_S^T} = LLM(P_CoT(T,X)), all produced in a single inference session. This is particularly useful when we aim to elicit the thinking process of the LLM and urge it toward self-reflection to improve its response. For instance, we may ask the LLM to draft the initial response in the first step and refine it in the second. LLM-driven pipeline Clinical evidence synthesis involves a multi-step workflow as outlined in the PRISMA statement <cit.>. It can be generally outlined as identifying and screening studies from databases, extracting characteristics and results from individual studies, and synthesizing the evidence. To enhance each step's performance, task-specific prompts can be designed for an LLM to create an LLM-based module. This results in a chain of prompts that effectively addresses a complex problem, which we call an LLM-driven workflow. Specifically, this approach breaks down the entire meta-analysis process into a sequence of N tasks, denoted as 𝒯 = {T_1,…,T_N}. In the workflow, the output from one task, X̂_n, serves as the input for the next, X̂_n+1 = LLM(P(T_n, X̂_n)). This modular decomposition improves LLM performance by dividing the workflow into more manageable segments, increases transparency, and facilitates user interaction at various stages. Incorporating these techniques, the formulation of for any subtask can be represented as: X̂_n+1 = LLM(P(T_n, X_n), R_K(X_n)), ∀ n = 1, …, N, where R_K(·) is optional. §.§ Implementation of All experiments were run in Python v.3.9. Detailed software versions are: pandas v2.2.2; numpy v1.26.4; scipy v1.13.0; scikit-learn v1.4.1.post1; openai v1.23.6; langchain v0.1.16; boto3 v1.34.94; pypdf v4.2.0; lxml v5.2.1; and chromadb v0.5.0.
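As a concrete illustration of one step X̂_n+1 = LLM(P(T_n, X_n), R_K(X_n)), the sketch below uses the OpenAI Python client listed above; the prompt template and the retrieve() placeholder are our assumptions rather than the pipeline's actual prompts.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def retrieve(x: str, k: int = 5) -> list:
    # Placeholder for R_K(.), e.g., semantic search over PubMed abstracts.
    return []

def pipeline_step(task_definition: str, x: str, use_rag: bool = True) -> str:
    # P(T_n, X_n): restructure the task definition, retrieved context, and input into one prompt.
    context = "\n\n".join(retrieve(x)) if use_rag else ""
    prompt = f"{task_definition}\n\nContext:\n{context}\n\nInput:\n{x}"
    resp = client.chat.completions.create(
        model="gpt-4-0125-preview",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

Chaining such steps, with each output feeding the next task's input, yields the LLM-driven workflow described above.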
LLMs We included GPT-4 and Sonnet in our experiments. GPT-4 <cit.> is regarded as a state-of-the-art LLM and has demonstrated strong performances in many natural language processing tasks (version: gpt-4-0125-preview). Sonnet <cit.> is an LLM developed by Anthropic, representing a more lightweight but also very capable LLM (version: anthropic.claude-3-sonnet-20240229-v1:0 on AWS Bedrock). Both models support long context lengths (128K and 200K), enabling them to process the full content of a typical PubMed paper in a single inference session. Research question inputs processes research question inputs using the PICO (Population, Intervention, Comparison, Outcome) framework to define the study's topic and scope. In our experiments, the title of the target review paper served as the general description. Subsequently, we extracted the PICO elements from the paper's abstract to detail the specific aspects of the research question. Literature search is tailored to adhere to the established guidelines <cit.> in conducting literature search and screening for clinical evidence synthesis. In the literature search stage, the key is formulating Boolean queries to retrieve a comprehensive set of candidate studies from databases. These queries, in general, are a combination of treatment, medication, and outcome terms, which can be generated by LLM using in-context learning. However, direct prompting can yield low recall queries due to the narrow range of user inputs and the LLMs' tendency to produce incorrect queries, such as generating erroneous MeSH (Medical Subject Headings) terms <cit.>. To address these limitations, incorporates RAG to enrich the context with knowledge sourced from PubMed, and employs CoT processing to facilitate a more exhaustive generation of relevant terms. Specifically, the literature search component has two main steps: initial query generation and then query refinement. In the first step, prompts LLM to create the initial boolean queries derived from the input PICO to retrieve a group of studies (Prompt in Extended Fig. <ref>). The abstracts of these studies then enrich the context for refining the initial queries, working as RAG. In addition, we used CoT to enhance the refinement by urging LLMs to conduct multi-step reasoning for self-reflection enhancement (Prompt in Extended Fig. <ref>). This process can be described as {X̂_S^1, X̂_S^2, X̂_S^3} = LLM(P_CoT(T_LS,X,R_K(X))), where X denotes the input PICO; R_K(X) is the set of abstracts of the found studies; T_LS is the definition of the query generation task for literature search. For the output, the first sub-step X̂_S^1 indicates a complete set of terms identified in the found studies; the second X̂_S^2 indicates the subset of X̂_S^1 by filtering out the irrelevant; and the third X̂_S^3 indicates the extension of X̂_S^2 by self-reflection and adding more augmentations. In this process, LLM will produce the outputs for all three substeps in one pass, and takes X̂_S^3 as the final queries to fetch the candidate studies. Study screening follows PRISMA to take a transparent approach for study screening. It creates a set of eligibility criteria based on the input PICO as the basis for study selection (Prompt in Extended Fig. <ref>), produced by X̂_EC = LLM(P(T_EC, X)), where X̂_EC = {E_1, E_2,…, E_M} is the M generated eligibility criteria; X is the input PICO; and T_EC is the task definition of criteria generation. Users are given the opportunity to modify these generated criteria, further adjusting to their needs. 
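A minimal, self-contained sketch of the criteria-generation call X̂_EC = LLM(P(T_EC, X)) is shown below; the task wording, the JSON output convention, and the example PICO are illustrative assumptions, and in practice the returned list is shown to users for editing before screening starts.

import json
from openai import OpenAI

client = OpenAI()
T_EC = ("Given the PICO elements of a systematic review, return a JSON array of "
        "short, checkable inclusion/exclusion criteria.")

def generate_criteria(pico: dict) -> list:
    prompt = f"{T_EC}\n\nPICO:\n{json.dumps(pico, indent=2)}"
    resp = client.chat.completions.create(
        model="gpt-4-0125-preview",
        messages=[{"role": "user", "content": prompt}],
    )
    return json.loads(resp.choices[0].message.content)  # users can edit this list afterwards

criteria = generate_criteria({
    "population": "patients with relapsed/refractory multiple myeloma",
    "intervention": "CAR-T therapy",
    "comparison": "any",
    "outcome": "overall response rate",
})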
Based on X̂_EC, embarks the parallel processing for the candidate studies. For i-th study F_i, the eligibility prediction is made by LLM as (Prompt in Extended Fig. <ref>) {I_i^1,…,I_i^M} = LLM(P(F_i, X, T_SC,X̂_EC)), where T_SC is the task definition of study screening; F_i is the study i's content; I_i^m ∈{-1,0,1}, ∀ m=1,…,M is the prediction of study i's eligibility to the m-th criterion. Here, -1 and 1 mean ineligible and eligible, 0 means uncertain, respectively. These predictions offer a convenient way for users to inspect the eligibility and select the target studies by altering the aggregation strategies. I_i^m can be aggregated to offer an overall relevance of each study, such as Î_i = ∑_m I_i^m. Users are also encouraged to extend the criteria set or block the predictions of some criteria to make customized rankings during the screening phase. Data extraction Study data extraction is an open information extraction task that requires the model to extract specific information based on user inputs and handle long inputs, such as the full content of a paper. LLMs are particularly well-suited for this task because (1) they can perform zero-shot learning via in-context learning, eliminating the need for labeled training data, and (2) the most advanced LLMs can process extremely long inputs. As such, the framework is engineered to streamline data extraction from structured or unstructured study documents using LLMs. For the specified data fields to be extracted, prompts LLMs to locate and extract the relevant information (Prompt in Extended Fig. <ref>). These data fields include (1) study characteristics such as study design, sample size, study type, and treatment arms; (2) population baselines; and (3) study findings. In general, the extraction process can be described as {X̂^1_EX, …, X̂^K_EX} = LLM(P(F,C,T_EX)), where F represents the full content of a study; T_EX defines the task of data extraction; and C = {C_1, C_2, …, C_K} comprises the series of data fields targeted for extraction. C_k is the user input natural language description of the target field, e.g., “the number of participants in the study". The input content F is segmented into distinct chunks, each marked by a unique identifier. The outputs, denoted as X̂^k_EX = {V^k, B^k}, include the extracted values V and the indices B that link back to their respective locations in the source content. Hence, it is convenient to check and correct mistakes made in the extraction by sourcing the origin. The extraction can also be easily scaled by making paralleled calls of LLMs. Result extraction Our analysis indicates that data extraction generally performs well for study design and population-related fields; however, extracting study results presents challenges. Errors frequently arise due to the diverse presentation of results within studies and subtle discrepancies between the target population and outcomes versus those reported. For instance, the target outcome is the risk ratios (treatment versus control) regarding the incidence of adverse events (AEs), while the study reports AEs among many groups separately. Or, the target outcome is the incidence of severe AEs, which implicitly correspond to those with grade III and more, while the study reports all grade AEs. To overcome these challenges, we have refined our data extraction process to create a specialized result extraction pipeline that improves clinical evidence synthesis. 
This enhanced pipeline consists of three crucial steps: (1) identifying the relevant content within the study (Prompt in Extended Fig. <ref>), (2) extracting and logically processing this content to obtain numerical values (Prompt in Extended Fig. <ref>), and (3) converting these values into a standardized tabular format (Prompt in Extended Fig. <ref>). Steps (1) and (2) are conducted in one pass using CoT reasoning as {X̂^1_RE,S,X̂^2_RE,S} = LLM(P_CoT(X, O, F, T_RE)), where O is the natural language description of the clinical endpoint of interest and T_RE is the task definition of result extraction. In the outputs, X̂_RE,S^1 represents the raw content captured from the input content F regarding the clinical outcomes; X̂_RE,S^2 represents the elicited numerical values from the raw content, such as the number of patients in the group, the ratio of patients encountering overall response, etc. In step (3), writes Python code to make the final calculation to convert X̂_RE,S^2 to the standard tabular format. X̂_RE = (LLM(P(X, O, T_PY,X̂_RE,S^2)), X̂_RE,S^2 ). In this process, adheres to the instructions in T_PY to generate code for data processing. This code is then executed, using X̂_RE,S^2 as input, to produce the standardized result X̂_RE. An example code snippet made to do this transformation is shown in Extended Fig. <ref>. This approach facilitates verification of the extracted results by allowing for easy backtracking to X̂_RE,S^1. Additionally, it ensures that the calculation process remains transparent, enhancing the reliability and reproducibility of the synthesized evidence. §.§ Experimental setup Literature search and screening In our literature search experiments, we assessed performance using overall Recall, aiming to evaluate the effectiveness of different methods in identifying all relevant studies from the PubMed database using APIs <cit.>. For literature screening, we measured efficacy using Recall@20 and Recall@50, which gauge how well the methods can prioritize target studies at the top of the list, thereby facilitating quicker decisions about which studies to include in evidence synthesis. We constructed the ranking candidate set for each review paper by initially retrieving studies through , then refining this list by ranking the relevance of these studies to the target review's PICO elements using OpenAI embeddings. The top 2000 relevant studies were kept. We then ensured all target papers were included in the candidate set to maintain the integrity of our groundtruth data. The final candidate set was then deduplicated to be ranked by the selected methods. In the criteria analysis experiment, we utilized Recall@200 to assess the impact of each criterion. This was done by first computing the relevance prediction using all eligibility predictions and then recalculating it without the eligibility prediction for the specific criterion in question. The difference in Recall@200 between these two relevance predictions, denoted as Δ, indicates the criterion's effect. A larger Δ suggests that the criterion plays a more significant role in influencing the ranking results. Data extraction and result extraction To evaluate performance, we measured the accuracy of the values extracted by against the groundtruth. We used the study characteristic tables from the review papers as our test set. Each table's column names served as input field descriptions for . We manually downloaded the full content for the studies listed in the characteristic table. 
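To illustrate the standardization in step (3), the snippet below shows the kind of code the pipeline could emit for one study (all numbers are hypothetical): extracted group sizes and response rates are converted into event counts per arm, the format expected by downstream meta-analysis.

extracted = {
    "treatment_n": 58, "treatment_response_rate": 0.62,   # e.g., 62% overall response
    "control_n": 61, "control_response_rate": 0.38,
}

standardized_row = {
    "events_treatment": round(extracted["treatment_n"] * extracted["treatment_response_rate"]),
    "total_treatment": extracted["treatment_n"],
    "events_control": round(extracted["control_n"] * extracted["control_response_rate"]),
    "total_control": extracted["control_n"],
}
print(standardized_row)

Rows in this form can then be passed to the R `meta' package (or an equivalent Python library) to compute pooled estimates and draw the forest plots described later.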
To verify the accuracy of the extracted values, we enlisted three annotators who manually compared them against the data reported in the original tables. We also measured the performance of result extraction using accuracy. The annotators were asked to carefully read the extracted results and compare them to the results reported in the original review paper. For the error analysis of , the annotators were asked to check the sources to categorize the errors for one of the reasons: inaccurate, extraction failure, unavailable data, or hallucination. We designed a vanilla prompting strategy for GPT-4 and Sonnet models to set the baselines for the result extraction. Specifically, the prompt was kept minimal, as “Based on the {paper}, tell me the {outcome} from the input study for the population {cohort}", where {paper} is the placeholder for the paper's content; {outcome} is the for the target endpoint; {cohort} is the for the target population's descriptions, including conditions and characteristics. The responses from these prompts were typically in free text, from which annotators manually extracted result values to evaluate the baselines' performance. Evidence synthesis In evidence synthesis, we processed the input data using R and the `meta' package to make the forest plots and the pooled results based on the standardized result values. This is for both and the baselines. Nonetheless, for the baseline, the annotators also need to manually extract the result values and standardize the values to make them ready for meta-analysis, which forms the GPT-4+Human baseline in the experiments. We engaged two groups of annotators for our evaluation: (1) three computer scientists with expertise in AI applications for medicine, and (2) five medical doctors to assess the generated forest plots. Each annotator was asked to evaluate five review studies. For each review, we randomly presented forest plots generated by both the baseline and . The annotators were required to determine how closely each generated plot aligned with a reference forest plot taken from the target review paper. Additionally, they were asked to judge which method, the baseline or , produced better results in a win/lose assessment. Fig. [fig:exp_human_eval]5a demonstrates the user interface for this study, which was created with Google Forms. [table]name=Extended Table font=normalsize @p5em|p6emrrrp10emp10emp10em The characteristics of involved meta-analyses papers in the MetaSyns dataset. N_1: the number of identified studies; N_2: the number of involved studies after screening; N_3: the number of involved participants in the studies; P: population; I/C: intervention and comparison; O: outcome measurements. Ref Topic 1lN_1 1lN_2 1lN_3 P I/C O Valle, et al. 202132 Brachytherapy 2553 150 11322 men treated with a number of salvage treatments for radiorecurrent prostate cancer. Salvage radical prostatectomy (RP), high-intensity focused ultrasound (HIFU), cryotherapy, stereotactic body radiotherapy (SBRT), low–dose-rate (LDR) brachytherapy, and high-dose-rate (HDR) brachytherapy 1. 2-yr and 5-yr Relapse free survival (RFS); 2. severe genitourinary (GU) toxicity; 3. severe gastrointestinal (GI) toxicity Li et al. 202333 Brachytherapy 11806 15 12773 prostate cancer 1. Brachytherapy (BT); 2. External beam radiotherapy (EBRT); 3. BT+EBRT 1. risk ratios (RRs) of genitourinary (GU) toxicity; 2. RRs of gastrointestinal (GI) toxicity Hande et al. 202234 Brachytherapy 5499 24 5488 locally advanced cervical cancer 1. 
Volume-based Brachytherapy (BT); 2. Point-A based BT 1. 3-year disease-free survival (DFS); 2. 3-year local control (LC); 3. 3-year overall survival (OS); 4. severe GU toxicity 5. severe GI toxicity Wahyuhadi et al. 202244 Cancer Vaccines 326 6 1202 Glioblastoma multiforme (GBM) 1. active immunotherapy: dendritic cell vaccination, peptide vaccination, DNA vaccine, viral vector-based vaccine, antigen non-specific vaccine, autologous tumor cell therapy; 2. standard therapy (surgery, chemotherapy, radiotherapy) 1. Overall survival (OS); 2. Progression-free survival (PFS); 3. post-treatment Karnofsky performance scale (KFS); 4. serious adverse events (AEs); 5. 2-year mortality Zhou et al. 202045 Checkpoint Inhibitors 2236 30 4971 patients with cancer receiving immune checkpoint inhibitors (ICIs): nivolumab, pembrolizumab, atezolizumab, durvalumab, avelumab, or ipilimumab 1. occurrence of immune-related AEs (irAEs); 2. non-occurrence of irAEs 1. overall survival (OS); 2. progression-free survival (PFS) Leone et al. 202246 Checkpoint Inhibitors 984 10 5257 advanced esophageal squamous cell carcinoma (ESCC) immune checkpoint inhibitors (ICIs) 1. overall survival benefit; 2. progression-free survival; 3. overall response rate Song et al. 202047 Checkpoint Inhibitors 220 147 23761 cancer patients 1. anti-PD-1 & anti-PD-L1 inhibitors; 2. anti-CTLA-4 inhibitors ICI-related AEs (Grade 1-5, Grade 3-5) ABC Group, 202235 Chemotherapy 1lN/A 10 1183 participants with biopsy-proven transitional cell carcinoma with muscle-invasive bladder cancer who had not received neoadjuvant chemotherapy 1. adjuvant cisplatin-based chemotherapy plus local treatment; 2. local treatment; 3. local treatment then adjuvant cisplatin-based chemotherapy on reccurence 1. overall survival; 2. locoregional recurrence-free survival; 3. metastasis-free survival; 4. overall recurrence-free survival Lacas et al. 202136 Chemotherapy 1lN/A 107 19805 squamous cell Head and Neck Cancer (MACH-NC) Q1: 1. loco-regional treatment (LRT); 2. LRT+CT; Q2: 1. induction CT + radiotherapy; 2. concomitant CT 1. overall survival; 2. event-free survival; 3. loco-regional failure (LRF); 4. distant failure (DF); 5. cancer and non-cancer mortality Xia et al. 202037 Chemotherapy 1lN/A 9 36480 triple-negative breast cancer (TNBC) 1. neoadjuvant chemotherapy (NACT); 2. adjuvant chemotherapy (ACT) 1. overall survival; 2. disease-free survival (DFS) Liu et al. 202141 Hormone Therapy 7506 16 67616 high-risk prostate cancer (HRPCa) 1. neoadjuvant hormone therapy (NHT) with radical prostatectomy (RP); 2. RP alone 1. overall survival; 2. biochemical progression-free survival; 3. cancer-specific survival; 4. disease-free survival; 5. risk rates (RRs) of lymph node involvement; 6. RRs of pathological downstaging; 7. RRs of organ-confinement; 8. RRs of positive surgical margins; 9. RRs of seminal vesicle invasion Piezzo et al. 202042 Hormone Therapy 685 8 4580 patients with HR-positive/HER2-negative advanced or metastatic breast cancer 1. CDK4/6 inhibitors with endocrine therapy (ET); 2. ET only 1. progression-free survival; 2. overall survival; 3. objective response rate (ORR) Peleg Hasson et al. 202143 Hormone Therapy 113 6 35680 HR-positive/HER2-positive early-stage breast cancer 1. adjuvant endocrine therapy (tamoxifen); 2. aromatase inhibitors disease-free survival Kim et al. 202140 Hyperthermia 334 4 653 low-risk papillary thyroid microcarcinomas (PTMCs) 1. thermal ablation; 2. surgery 1. new tumor after treatment; 2. lymph node metastasis; 3. 
complication Wang et al. 202239 Hyperthermia 3060 9 1215 T1aN0M0 and T1bN0M0 papillary thyroid carcinoma ultrasound-guided thermal ablation (LA, RFA, or MWA) 1. volume reduction rate; 2. overall diseasee progress rate; 3. complication rate Spiliotis et al. 202138 Hyperthermia 716 15 2169 liver cancer 1. microwave ablation; 2. radiofrequency ablation 1. complete ablation (CA); 2. local tumor prgression (LTP); 3. intrahepatic distant (IDR); 4. complications Li et al. 202048 Immune System Modulators 682 9 1752 breast cancer receiving chemotherapy 1. PEGylated granulocyte colony-stimulating factor (G-CSF); 2. G-CSF 1. risk ratios (RRs) of grade >= 3 / 4 neutropenia; 2. RRs of febrile neutropenia (FN); 3. time to absolute neutrophil count recovery; 4. grade 4 AEs; 5. RRs of skeletal and/or muscle pain Wu et al. 202249 Monoclonal Antibodies 116 32 958 HER2-positive patients with NSCLC HER2-targeted therapy 1. objective response rate (ORR); 2. disease control rate (DCR); 3. progression-free survival (PFS) Li et al. 202150 Monoclonal Antibodies 5861 27 15063 patients with solid tumors 1. anti-PD-1/PD-L1 mAbs; 2. anti-PD-1/PD-L1 mAbs + chemotherapy; 3. standard chemotherapy RR of nephrotoxicity Zhu et al. 202351 Monoclonal Antibodies 2511 169 22492 cancer antibody–drug conjugates (ADCs) 1. RR of grade >= 3 AEs; 2. RR of all grade AE Masetti et al. 202254 Stem Cell Transplant 2141 9 1448 children with pediatric acute myeloid leukemia (AML) in first complete remission (CR1) 1. allogeneic hematopoietic stem cell transplantation (allo-HSCT); 2.chemotherapy alone 1. overall survival; 2. relapse rate; 3. disease-free survival Zeng et al. 202155 Stem Cell Transplant 499 15 959 adult philadelphia chromosome positive acute lymphoblastic leukemia in post-remission 1. allogeneic hematopoietic stem cell transplantation (allo-HSCT); 2. tyrosine kinase inhibitor (TKI) combined with chemotherapy 1. overall survival; 2. Relapse-free survival (RFS); 3. Odds Ratio of non-relapsed mortality (NRM); 4. Odds Ratio of non-relapsed survival (NRS) Gagelmann et al. 202156 Stem Cell Transplant 1050 7 680 patients with FLT3-ITD-mutated acute myeloid leukemia (AML) and received allogeneic stem-cell transplation tyrosine kinase inhibitor (TKI) maintenance therapy 1. relapse-free survival; 2. RRs of relapse; 3. overall survival; 4. non-replase mortality; 5. RRs of chronic graft vs. chronic disease (GVHD) Yang et al. 202152 T-Cell Transfer Therapy 661 23 350 patients with Relapse or Refractory Multiple Myeloma (RRMM) and treated with CAR-T therapy CAR-T therapy 1. Overall response (OR); 2. complete response rate (CRR); 3. MRD negativity within responders; 4. relapse rate; 5. progression-free survival; 6. overall survival; 7. severe CRS (sCRS); 8. Neurologic toxicity (NT) Shahzad et al. 202353 T-Cell Transfer Therapy 677 13 57 relapsed/refractory acute myeloid leukemia (RR-AML) CAR-T therapy 1. complete remission (CR); 2. overall response rate; 3. incidence of CRS; 4. incidence of ICANs; 5. incidence of graft-versus-host disease (GVHD) naturemag
http://arxiv.org/abs/2406.18986v1
20240627082801
Indirect Detection for Higgs Portal Majorana Fermionic Dark Matter
[ "Naoyuki Habaa", "Junpei Ikemoto", "Shimizu Yasuhiroa", "Toshifumi Yamada" ]
hep-ph
[ "hep-ph" ]
http://arxiv.org/abs/2406.18162v1
20240626082313
Multimodal Reaching-Position Prediction for ADL Support Using Neural Networks
[ "Yutaka Takase", "Kimitoshi Yamazaki" ]
cs.RO
[ "cs.RO", "cs.HC" ]
Multimodal Reaching-Position Prediction for ADL Support Using Neural Networks
Yutaka Takase (yutaka_takase@shinshu-u.ac.jp) and Kimitoshi Yamazaki (kyamazaki@shinshu-u.ac.jp), Mechanical System Engineering, Shinshu University, Wakasato 4-17-1, Nagano, 3808553, Nagano, Japan
This study aimed to develop daily living support robots for patients with hemiplegia and the elderly. To support daily living activities using robots in ordinary households without imposing physical and mental burdens on users, the system must detect the actions of the user and move appropriately according to their motions. We propose a reaching-position prediction scheme that targets the motion of lifting the upper arm, which is burdensome for patients with hemiplegia and the elderly in daily living activities. For this motion, it is difficult to obtain effective features for building a prediction model in environments where large-scale sensor system installation is not feasible and the motion time is short. We performed motion-collection experiments, revealed the features of the target motion, and built a prediction model using multimodal motion features and deep learning. The proposed model achieved a macro-average accuracy of 93% and an F1-score of 0.69 for 9-class classification prediction at 35% of motion completion.
§ INTRODUCTION
With an aging society, the demand for intelligent robots to support activities of daily life (ADL) for elderly and disabled people living alone is increasing. These robots are required to work close to human users in environments that are relatively narrow and difficult to sense, in contrast with industrial robots, which work in well-controlled environments of factories or warehouses.
The influence of the support provided by the autonomous robot on the mental health of the user must also be considered. For example, it is not always appropriate for the assisting robot to fully support a user who intends to reach a distant object by picking up the object and bringing it to the user. In this case, despite the intention of users to move on their own, the support ends up undermining that intention. Therefore, it is essential for the ADL support system not to inhibit the active motivation and self-efficacy (SE) of users <cit.>. In <cit.>, the authors reported a correlation between SE evaluation scores and movement speed and accuracy for reaching motions of patients with residual impairments from stroke. This study aims to develop an autonomous ADL support robot that considers the intentions of the user. We focus on cooperative task completion with users to maintain and improve their self-efficacy. In the previous example, the robot could support the arm of the user reaching out to pick up the object or move the target object into an easier-to-pick position. Such a support system not only maintains the self-efficacy of the users but also improves it through the experience of accomplishing tasks that would be difficult to accomplish alone. To achieve the goals of the study, in this paper, we propose a novel scheme to predict the reaching position in reaching motions involving upper-arm lifting. Although lifting the upper arm is an essential part of ADL, such as picking up an object from a high place, putting it up, or drying laundry, it is difficult for elderly or disabled patients because of the need to maintain their arms at high positions, which may cause an imbalance in the torso. There are many possible actions that the assisting robot can perform to support the motions, such as directly supporting the arms and torso of the user and interacting with objects that are the target of the user's movements. Therefore, establishing a prediction scheme for this motion, along with an analysis of the features of the motion, will be useful for both hardware and software development of support robots. The main contributions are summarized as follows: * We collected the motion data of lifting the upper arm, which imitates object grabbing, using multiple sensors, and analyzed the motion features. * We created a multimodal-neural-network model to predict the reaching position as a classification problem that can be adapted to real-time robot control. The remainder of this paper is structured as follows: In the next section, we introduce related works. In Section <ref>, we define our research problems and approaches. In Section <ref>, we describe the motion data collection and analysis. Then, we propose our multimodal reaching-position prediction network in Section <ref>. The results and discussion are presented in Sections <ref> and <ref>. Finally, Section <ref> concludes the paper.
In the case of a robot and humans sharing a workspace and working individually, the robot must predict their motion trajectory to avoid collisions with humans. In <cit.>, the authors addressed the problem of estimating the reaching motion of human workers as a multi-class classification problem. They reported that the accuracy of the proposed method, which used 3D point-cloud data, was around 80 % after 50% of the operation. In <cit.>, the authors collected data on the reaching motions of workers using a motion capture system and used the data to predict the arm trajectory of human worker. The construction of advanced sensor environments is beneficial in factory and laboratory environments. In research on handover tasks, which require the positions of robots and humans to be close, sensors such as voice and electromyography sensors are used to predict the trajectory of a workers' arms (<cit.>). In the area of human-robot interaction, many studies have been constructed models to predict user activities and intentions using various features as well as the movement of the users to achieve a natural interaction between humans and robots that is similar to human and human interaction. For example, in <cit.>, the authors proposed emotion estimation models using the facial expressions and verbal features of the user. In <cit.>, the authors proposed a deep-neural-network model to estimate the order of the users for the robot by using both verbal and nonverbal features. In <cit.>, the user intention to service robots was estimated using facial direction. Although using human natural motions, such as facial direction, seems effective in informing the intention or purpose of the motion to systems, the system we aim for, as described above, targets supporting daily life activities according to the actions of the user. Therefore, it is not appropriate to build complex sensor systems for trajectory tracking in the home or give voice instructions to robots like "I want to get the book on the upper right shelf." In this study, we address these problems by using simple sensor systems. In addition, we deal with the reaching-position prediction problem for upper-arm lifting motion as a multi-class classification task and create a novel model that uses multi-nonverbal features. The proposed method could be applied in the future to load-sharing tasks (<cit.>. Currently, studies focus on control methods and algorithms after humans and robots hold the load; however, this study proposes one approach to the important problem of how a robot can hold objects together according to human intentions. In the following section, we present our research questions and approaches. § RESEARCH QUESTIONS AND OUR APPROACHES §.§ Research questions This study addressed two research questions: First, we investigated the practical features of upper-arm lifting motion to construct a reaching-position prediction model; second, we built a neural-network model using the features. Considering our goal, we assumed the following use environment and scenario: The system would be used in the everyday household environment, the users would be patients with hemiplegic and older adults with weakened muscles, and the support system should operate autonomously, and avoiding compromising the self-efficacy of the user by not providing full support. For a specific task, assume that the user takes an object from a shelf with a healthy arm. 
The system recognizes the reaching position of the motion and interacts with the user's arm and the object to be grabbed. This means that the system supports the task by, for example, keeping the arm or torso or moving the object to a position that is easier to grasp. Based on these assumptions, the proposed method has the following requirements. * It deals only with available information without installing or attaching large sensor systems to the user or environment. * The proposed method assumes that the support system works autonomously without active manipulation by the user for operation. * It provides an environment in which the user does not have to wait for support or adjust their operating speed. §.§ Approach Under the conditions described above, our approach to investigating the research question is as follows. First, we collect target motion data of multiple subjects in an assumed environment. Next, the features of the motions are selected from the collected data, which are considered adequate for constructing a prediction model. Finally, as in <cit.>, we constructed a prediction model of the reaching position as a multi-class classification problem using deep learning and evaluated its performance. The next section describes the data collection method, its features, and the features that can be used to predict the arrival position. § ANALYSIS OF REACHING MOTIONS §.§ Motion collection <ref> shows the environment settings for the data collection. The motion data collection procedure is as follows. As illustrated in the figure, the participant sat on a chair in front of a shelf divided into nine regions. The participant performed the motion of grabbing things from the area randomly indicated by the experimenter. Every indication was presented visually after a 3-second countdown in one region to be determined instantly on a display set in front of the participant. One set of trials consisted of four randomly ordered motions to each region repeated four times, and all participants sequentially performed seven sets of trials. The participants were instructed to place their right hands on their knees and face the display in front of them during the countdown. The sensors used were an RGBD camera (Microsoft, Azure Kinect) installed in front of the participant and an inertial measurement unit (IMU) sensor (MicroStrain, 3DM-Gx5-45) attached to the right arm of participants. Color (resolution: 1280 × 720, field of view: 90^∘ × 59^∘) and depth images (resolution: 640 × 576, field of view: 75^∘ × 65^∘) were acquired from the RGBD sensor at 15 frames per seconds (FPS), and magnetometer, angular velocity, and acceleration data were obtained from the IMU sensor at 100 FPS. Six able-bodied male participants (aged 22-25, all right-handed) were recruited from our laboratory. Excluding data recording failures, the effective number of data samples was 1538. <ref> shows an example of the collected data: the sequence of reaching motion to the center-left region. The numbers indicate the elapsed frames from the start of the motion. §.§ Motion analysis shows the descriptive statistics for the reaching times to each region derived from the collected data. Here, the reaching time is measured based on video data, from the moment the participant starts the movement after the target region is indicated by the display to when the extended right arm becomes stationary. Therefore, the time it takes for the visual reaction is not included. 
One-way ANOVA and the post hoc Tukey HSD test ( p <.05 was considered significant ) were conducted and suggested that there were significant differences between the regions (F(8, 1529) = 19.87, p < .001). All the post hoc test results are shown in , which will be discussed later. From the results, reaching the uppermost regions, which are the farthest from the right hand’s initial position, top-left (TL), top-center (TC), and top-right (TR), required approximately 1.56, 1.56, and 1.58 s, respectively. And, there was no significant difference observed between them. Similarly, no significant difference was observed among the middle regions, center-left (CL), center (C), and center-right (CR). These results suggest that, within this experimental setup, participants unconsciously adjust their movement speed to reach regions at the same height. This adjustment equals modulating the waiting time until the next movement's target position is presented. Therefore, it might be influenced by experimental conditions. On the other hand, for the bottom regions, reaching the bottom-center (BC), which is located /directly in front of the body, was the fastest, with an average time of about 1.33 s. According to the post hoc test results, there was no significant difference between BC and bottom-right (BR). However, significant differences (p <.05) were observed between BC and the bottom-left (BL), where required to extend the right arm to the front-left. This suggests that such a movement appears to be particularly difficult, even for healthy individuals. Additionally, it was observed that the maximum reaching time to BR was relatively larger compared to the other bottom regions. Upon reviewing the video data of this motion, it was noted that, after quickly getting closer to the target location, participants continued a slow approach movement without coming to a complete stop. This data has not been excluded, as it is considered not to affect future analyses or system development significantly. Thus, in simple reaching movements, while there is an observed tendency to unconsciously adjust speed, it became clear that due to presence of locations significantly more difficult to reach, reaching speed and movements are not solely determined by simple distance between the arm and the destination. This study uses the average value of 1.47 s from all data as a guideline for developing a system to support this task. The proposed system must perform user motion recognition, predict the reaching position, and provide support actions all within this time frame. Additionally, shows the differential images created using the SAD (Sum of Absolute Differences) method from color images over 10 frames after the start of the motion. <ref>A and <ref>B are from specific motions extracted from the collected dataset, while <ref>C is generated from all data. Bright areas in the images indicate regions of significant movement within the frames. The collected video data and the differential image also revealed the following features. 1) The target motion consisted of movements of specific body parts, that is, upper body, right arm, and face, rather than the entire body; in particular, from the differential image, the upper body movements appear not to be significant in the initial phase of the motion, making it seem challenging to use this information to predict the reaching position. On the other hand, the image shows significant movement near the face and around the right arm. 
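A rough sketch of how such SAD difference images can be computed is given below (file names, grayscale conversion, and normalization choices are our assumptions): absolute frame-to-frame differences over the first 10 frames are accumulated, so bright pixels mark body parts that moved.

import cv2
import numpy as np

frames = [cv2.imread(f"frame_{i:03d}.png", cv2.IMREAD_GRAYSCALE) for i in range(11)]
sad = np.zeros_like(frames[0], dtype=np.float32)
for prev, curr in zip(frames[:-1], frames[1:]):
    sad += cv2.absdiff(curr, prev).astype(np.float32)  # per-pixel absolute difference

sad_image = cv2.normalize(sad, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imwrite("sad_diff.png", sad_image)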
Therefore, it was considered effective to use the right arm motion together with visual features, and to capture changes from the early stage of the motion together with changes in face direction. 2) The preparatory motion was not useful for prediction; in other words, the time spent predicting from the start of the motion directly reduces the time available for the support action, and it is also necessary to recognize when the motion starts. However, it was challenging to obtain the exact start timing because there is no preceding motion to cue it. This point is discussed in Section <ref> as a future issue.
§.§ Modalities for prediction
Based on the observations stated above, we selected face, visual, and motion features to construct the reaching-position prediction model. The observations and <cit.> suggest that the face direction or facial features are an essential cue for estimating the subsequent motion. Additionally, for the system to be usable universally, it is more appropriate to estimate the reaching position using depth information as the visual cue rather than color images, which contain redundant appearance information about the user and the environment. Furthermore, motion features acquired with the IMU sensor attached to the wrist of the user's healthy-side arm were employed because, considering future robots that support the user's arm or grasp objects, it is imperative to understand the three-dimensional movements and postures of the arm. Although many technologies can estimate human posture from color images alone <cit.>, in a typical Japanese home environment it is difficult to obtain a camera field of view sufficient to estimate the arm's posture when reaching in an unspecified direction toward a shelf placed in front of the user. Therefore, estimated posture data were not used. Eye-tracking devices were also not used, to avoid complicating the system.
§.§ Motion data extraction
In this study, automatic recognition of the start and end of the motion was not performed. Therefore, it was necessary to extract each motion from the collected data according to some criteria. We manually annotated the motion start and end timings based on the following definition: the motion-start timing is the frame in which the right hand, initially positioned on the knee, begins to move, and the motion-end timing is the frame in which the extended arm starts to retract at the reaching position. The collected data were divided into individual motions according to the annotated timings. In the next section, we organize the features discussed in this section into concrete feature data and describe the construction of the reaching-position prediction model.
§ PREDICTION MODEL
§.§ Features
This section describes the features used to build our prediction model.
§.§.§ Face Features
The face features were chosen with the expectation of capturing the direction of the face, its variation, and the characteristics of gaze transition. Head and gaze movements are often used as features in studies that build prediction models of human behavior. We attempted to capture such features without attaching sensors to the user. In this study, face mesh data were employed as the face features. We used Google MediaPipe (<cit.>) to obtain 468 3D face landmark positions from the color images. The time elapsed from the start of the motion was added to each frame, resulting in data of 1405 (468 × 3 + 1) dimensions.
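As a concrete illustration, the face-feature vector described above can be assembled from the MediaPipe Face Mesh output roughly as follows. This is a minimal sketch rather than the exact implementation used in our system; the zero-padding of frames in which no face is detected follows the data-shaping policy described later.

```python
import numpy as np
import mediapipe as mp

face_mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=False, max_num_faces=1)

def face_feature(rgb_frame, elapsed_time):
    """Return a 1405-d vector: 468 landmarks x (x, y, z) plus the elapsed time."""
    result = face_mesh.process(rgb_frame)              # rgb_frame: HxWx3 uint8 color image
    if result.multi_face_landmarks:
        lm = result.multi_face_landmarks[0].landmark
        coords = np.array([[p.x, p.y, p.z] for p in lm], dtype=np.float32).ravel()
    else:
        coords = np.zeros(468 * 3, dtype=np.float32)   # zero-pad frames with no detected face
    return np.concatenate([coords, [np.float32(elapsed_time)]])
```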
§.§.§ Depth Features
Depth features were chosen with the expectation of extracting the three-dimensional characteristics of the movements while reducing dependency on clothing and on the experimental environment. These features are also valuable for understanding the user's position and posture in future support scenarios. The depth images were acquired at 15 FPS, cropped around the user, and reduced to a resolution of 256 × 188. The elapsed time was also added to each frame, as described below.
§.§.§ Motion Features
Motion features were chosen to capture the characteristics of rapid three-dimensional arm movements. As the motion features, the data from the IMU sensor attached to the right wrist provided ten dimensions of information (magnetic field, acceleration, and angular velocity). The elapsed time was also added, yielding 11 dimensions.
§.§ Network structure
We constructed a multimodal 9-class classification neural network to predict the reaching positions defined in Section <ref>. Long short-term memory (LSTM) (<cit.>) and a local attention mechanism (<cit.>) were used to construct the model. The network structure is shown in <ref>. As shown in the figure, the outputs of all modality-specific layers are combined through late fusion (<cit.>), a common approach to modeling multimodal information. The composition of each unimodal network is as follows.
§.§.§ Face layers
A bi-directional LSTM layer with 1405-dimensional input and 2048-dimensional output was employed for the face modality. The final output was passed through a self-attention layer and output as 2048-dimensional data. The number of parameters in this network was 44,554,241.
§.§.§ Motion layers
The same structure as the face model was used for the motion modality, with 11 input dimensions and 512 output dimensions. It had 2,139,137 parameters.
§.§.§ Depth layers
Depth features were learned by combining a latent representation of the depth images obtained with a convolutional neural network (CNN) and time-series learning with an LSTM (<cit.>). The CNN parameters are shown in Table <ref>. The CNN + LSTM network had a total of 204,509,720 parameters; the CNN latent representation of each frame was concatenated with the elapsed time described earlier and input to the LSTM layer.
§.§.§ Classification layers
The three output vectors from the unimodal layers were simply concatenated into a 1 × 4097-dimensional vector and input to fully connected layers with dropout at each layer. The dimension of the final output layer was nine, the number of classes. The dropout rates were set to 0.6, 0.4, and 0.2, respectively, and ReLU was used as the activation function.
§.§ Input frames
As shown in Table <ref>, the target motions take a minimum of approximately 1.33 s to complete. Even disregarding the movement time of the support robot, the prediction must be made within a much shorter time to leave room for the robot's movement. The time used for prediction was therefore set to 0.5 s (7 frames for the face and depth models and 50 frames for the motion model). This means the model used information from only 32% to 36% of the motion time, a shorter prediction window than in the previous study (<cit.>).
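A simplified sketch of this late-fusion architecture is given below for concreteness. The input/output sizes of the face and motion branches and the dropout rates follow the description above; the depth-branch CNN, the hidden sizes of the classification layers, and the helper names are illustrative assumptions rather than the exact configuration used in our experiments.

```python
import torch
import torch.nn as nn

class BiLSTMBranch(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, out_dim // 2, batch_first=True, bidirectional=True)
        self.attn = nn.MultiheadAttention(out_dim, num_heads=1, batch_first=True)

    def forward(self, x):                      # x: (batch, frames, in_dim)
        h, _ = self.lstm(x)
        h, _ = self.attn(h, h, h)              # self-attention over the time axis
        return h[:, -1]                        # last time step as the branch output

class LateFusionModel(nn.Module):
    def __init__(self, depth_dim=1536, n_classes=9):
        super().__init__()
        self.face = BiLSTMBranch(1405, 2048)   # face-mesh branch (Sec. Face layers)
        self.motion = BiLSTMBranch(11, 512)    # IMU branch (Sec. Motion layers)
        self.depth_cnn = nn.Sequential(        # per-frame depth encoder (stand-in CNN)
            nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(32 * 4 * 4, depth_dim - 1))
        self.depth_lstm = nn.LSTM(depth_dim, depth_dim, batch_first=True)
        self.classifier = nn.Sequential(       # hidden sizes here are illustrative
            nn.Dropout(0.6), nn.Linear(2048 + 512 + depth_dim, 1024), nn.ReLU(),
            nn.Dropout(0.4), nn.Linear(1024, 256), nn.ReLU(),
            nn.Dropout(0.2), nn.Linear(256, n_classes))

    def forward(self, face_seq, motion_seq, depth_seq, elapsed):
        # depth_seq: (batch, frames, 1, H, W); elapsed: (batch, frames, 1)
        b, t = depth_seq.shape[:2]
        d = self.depth_cnn(depth_seq.flatten(0, 1)).view(b, t, -1)
        d = torch.cat([d, elapsed], dim=-1)    # append the elapsed time to each frame
        d, _ = self.depth_lstm(d)
        fused = torch.cat([self.face(face_seq), self.motion(motion_seq), d[:, -1]], dim=-1)
        return self.classifier(fused)          # late fusion followed by the 9-class head
```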
In principle, interpolating missing data could improve the performance of the prediction model. However, considering real-time use, the raw data obtained from the sensors should be fed to the predictor with as little processing as possible. Therefore, data shaping was kept to a minimum; for example, frames in which the face mesh could not be recognized were padded with zeros. In the next section, we discuss the training results and the characteristics of our prediction model.
§ EVALUATION AND RESULT
§.§ Model performance
To train and evaluate the proposed late-fusion model, the whole dataset was first randomly divided into a training set of 1341 motions and a test set of 197 motions. The model was then trained using a 10-fold cross-validation strategy. Table <ref> shows the accuracy and the macro precision, recall, and F1-score, which are standard measures for multi-class classification, of the fusion model and of each unimodal model for comparison. These values are obtained by computing the scores for each class in a one-vs-rest manner and then averaging the results across all classes. Each unimodal model was trained using only its LSTM (or CNN-LSTM) branch and a classifier, following the structures in <ref>. The results show that the proposed fusion model performs as well as or better than the unimodal models. In particular, the fusion model has the highest F1-score of 0.69, indicating that it is the most balanced model. In a previous study addressing a similar 9-class reaching-position prediction problem, an accuracy of 80% was reported at approximately 50% of the motion completion time. In contrast, our method achieved a higher accuracy (93%) at an earlier stage of the motion (32%).
<ref> shows a confusion matrix obtained from the 197 test samples. The rows represent the actual classes and the columns the classes predicted by the fusion model. The percentages represent the precision of the classification result for each class. For example, samples classified as TL are correct with an 88.0% probability; however, there is a 4.0% probability of ML, BL, and BC motions being incorrectly classified as TL. The results show that precision increases from the right side, near the starting point of the motion, toward the upper left. As a trend across the classes, confusion occurs mostly within the same column. In particular, the precision for the middle row is relatively low, with frequent confusion with motions to other regions in the same column. This is an interesting result, suggesting that movements toward the top and bottom regions are distinctively characteristic even in the first few frames, whereas distinguishing whether the arm stops in the middle row or continues moving up or down is difficult for the current model. However, the results of the post hoc test conducted in Section <ref> also show potential for separating these classes. The asterisks in the figure indicate pairs for which a significant difference (p < .05) in motion speed was observed in the post hoc Tukey HSD test. Significant differences were observed in the final reaching time between CR and TR and between CR and BR, indicating that the arm motions differ between these pairs. Even so, the model, which predicts the reaching position by combining arm, face, and depth features, can still misclassify because of factors other than the motion features, even when the movement trajectories differ within the current input frames. To analyze the factors behind the differences in reaching times, the arm trajectories and their features must be examined, and networks capable of extracting this information must be constructed. These tasks remain for future research.
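The scores in Table <ref> and the class-wise precision shown in the confusion matrix can be computed with standard scikit-learn utilities. The following is a minimal sketch, assuming y_true and y_pred hold the ground-truth and predicted region labels of the 197 test motions.

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support, confusion_matrix

regions = ["TL", "TC", "TR", "CL", "C", "CR", "BL", "BC", "BR"]

acc = accuracy_score(y_true, y_pred)
prec, rec, f1, _ = precision_recall_fscore_support(y_true, y_pred, average="macro", zero_division=0)

# Column-normalised confusion matrix, i.e. the per-class precision shown in the figure
cm = confusion_matrix(y_true, y_pred, labels=regions, normalize="pred")
print(f"accuracy = {acc:.3f}, macro P/R/F1 = {prec:.2f}/{rec:.2f}/{f1:.2f}")
```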
Additionally, it is impossible to completely rule out misclassification. The operation of future support systems in light of these results is discussed in Section <ref>.
<ref> shows a confusion matrix of the fusion model. As shown in the figure, when the task is viewed under this experimental condition as a three-row by three-column classification problem, the performance on the columns (L, C, R) is relatively high. However, the central position of each column (CL, C, CR) is confused with the classes above and below it. These issues need to be addressed in the operation of an assistance system.
§.§ Estimation speed
Finally, we measured the estimation speed of the proposed model, which has 282,930,211 parameters. The computer used for inference ran Ubuntu 20.04.6 LTS with an Intel(R) Xeon(R) W-2225 CPU @ 4.10 GHz, an NVIDIA RTX A5000, and 24 GB of RAM. The input data were the motion data collected in Section <ref>. These data are stored using the functionality of the Robot Operating System (ROS), allowing the reception of camera images and IMU sensor data to be simulated while preserving timestamp information. However, delays such as the camera's image acquisition time and the data transmission between the sensors and the computer are not considered. The measurement program recorded the time from collecting the required number of input frames of sensor data, through their conversion and trimming into a format suitable for the prediction model, to obtaining the prediction result. The prediction model was trained using PyTorch [PyTorch : <https://pytorch.org>] and optimized with Nvidia TensorRT [Nvidia TensorRT : <https://developer.nvidia.com/tensorrt>]. The average prediction time over 100 inputs was 0.0086 s, with a standard deviation of 0.0036 s. The maximum value was 0.022 s; even if this worst case is adopted, the time required for estimation is approximately 1.5% of the motion time in our collected data, which is considered sufficiently small. This result indicates that the proposed method requires approximately 0.5 s (for collecting motion data) + 0.086 s (for estimating the target position) for a reaching motion that takes approximately 1.47 s on average. Therefore, the proposed system leaves approximately 0.96 s of grace time for the support robot.
§ LIMITATIONS AND DISCUSSIONS
In this section, we elaborate on insights gained while collecting the data, creating the prediction model, and analyzing the results. Due to the COVID-19 pandemic, it was not possible to recruit a sufficient number and variety of participants. Whether the motions of people who have had a stroke or are of advanced age are the same as those of our participants should be considered in further investigations.
In <cit.>, the authors conducted a reaching-task experiment with mostly right-handed patients with hemiplegia. They report that there is no difference in the motor function of the unaffected arm between left- and right-affected patients. However, comparisons between these patients and elderly individuals without paralysis show that the patients exhibit inferior motor function even in the unaffected arm. Furthermore, in <cit.>, it is shown that the arm motor function of healthy elderly individuals differs depending on the dominant arm. Consequently, although it may be difficult to directly apply the proposed model, or the data collected from healthy individuals in this study, to a support system intended for elderly or hemiplegic users, the requirements for assistive systems identified through this study, along with the methodology for model creation, remain useful. Using the proposed model, it would be possible to realize a system in which support robots operate autonomously, triggered by the user's active movements, to help complete the user's tasks. In <cit.>, the authors reported a correlation between physical activity and self-efficacy or life satisfaction in the elderly. On the other hand, it has also been reported that elderly individuals face psychological barriers to engaging in physical activity in the first place (<cit.>). Support systems for ADL can encourage active movements starting from familiar activities, as in this task, reduce psychological barriers to physical activity, and promote broader social activity. To achieve this, the support system should appropriately assist the user's activities in daily life, enhancing their self-efficacy and motivation for active engagement. For this purpose, one future challenge is to enable support at multiple levels according to the user's physical condition, from simple arm support to higher-level assistance such as directly retrieving objects and handing them to the user's extended hand. As mentioned in Section <ref>, it is impossible to eliminate the possibility of misclassification when the classification model is used in the wild. While it is important to improve model performance, it is equally crucial to build and operate a system that is robust against misclassification. The proposed model is expected to become more accurate as the number of input frames increases (i.e., as the motion progresses). Additionally, the results in <ref> suggest that the proposed model achieves very high accuracy in the 3-class classification of columns (Left, Center, Right), with accuracies of 97.15%, 96.60%, and 93.47%, respectively. Hence, in the actual operation of a support robot, the robot might, for instance, perform lateral shifts at the beginning of a movement and execute more detailed movements later, based on the prediction results. For interactions with people or objects, it would be practical to use proximity sensors mounted on the robot for precise positional adjustment. However, it is difficult to say that the current model secures sufficient time for actual support operations. As seen in Section <ref>, the time available for assistance is only about 0.96 s, which is clearly insufficient for a stationary robot to approach the user or target object and provide support.
Therefore, to achieve our goal, we must solve problems from many directions, such as establishing a fast support method, developing a soft robot that considers collision with humans or surrounding objects, and integrating these technologies, including this study. These are challenges for future study. § CONCLUSIONS We proposed a novel scheme for constructing a reaching-position prediction model for the reaching motion involving upper-arm lifting, which is part of activities of daily living (ADL), to develop a support robot. Based on the results of the motion collection experiment and its analysis, we developed a target position prediction model using time-series data of face, motion, and depth features. The proposed model, which demands that the support system autonomously operates triggered by the user's movements using data from simple sensors, achieved an accuracy of 93% at 35% of the motion completion time. This model, utilizing only 0.5 s of data, was able to make predictions in approximately 0.086 s of computation time. However, it is difficult to say that sufficient time has been secured for the operation of support robots, and the issue of misclassification needs to be resolved to adapt the classification model in the wild. In the future, along with improving prediction accuracy, we aim to develop robust support methods against misclassification and robots that support ADL in close contact with users, striving to realize the proposed system. § DECLARATIONS §.§ Availability of data and materials The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request. §.§ Competing interests The authors declare that they have no competing interests. §.§ Funding This work is supported by JST [Moonshot R&D],[Grant Number JPMJMS2034] §.§ Ethical Approval and consent to participate Ethical approval was not required as per institutional guidelines. All participants were informed about the purpose of the study, the anonymity and confidentiality of their results, and provided informed consent prior to participation. §.§ Authors' contributions Y.T. worked on the research concept, participated in the design and development of the method, and drafted the thesis. K.Y. participated in the research design. All authors reviewed the results and approved the final version of the manuscript. §.§ Acknowledgement Not applicable.
http://arxiv.org/abs/2406.18140v1
20240626074427
Exclusive Style Removal for Cross Domain Novel Class Discovery
[ "Yicheng Wang", "Feng Liu", "Junmin Liu", "Zhen Fang", "Kai Sun" ]
cs.CV
[ "cs.CV", "cs.AI" ]
Exclusive Style Removal for Cross Domain Novel Class Discovery
Yicheng Wang, Feng Liu, Junmin Liu, Zhen Fang and Kai Sun
This work was supported in part by the National Natural Science Foundation of China (Grant Nos. 62276208, 12326607, U20B207, 11991023, 12201490) and in part by the Natural Science Basic Research Program of Shaanxi Province (Grant No. 2024JC-JCQN-02).
Y. Wang is with the School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an 710049, China, and is also with the School of Mathematics and Statistics, The University of Melbourne, VIC 3010 Australia. F. Liu is with the School of Computing and Information Systems, The University of Melbourne, VIC 3010 Australia. J. Liu and K. Sun are with the School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an 710049, China. Z. Fang is with the Australian Artificial Intelligence Institute, FEIT, University of Technology Sydney, Sydney, NSW 2007, Australia.
July 1, 2024
§ ABSTRACT
As a promising direction in open-world learning, Novel Class Discovery (NCD) is typically the task of clustering unseen novel classes in an unlabeled set based on prior knowledge from labeled data within the same domain. However, the performance of existing NCD methods can be severely compromised when the novel classes are sampled from a distribution different from that of the labeled ones. In this paper, we explore and establish the solvability of NCD in the cross domain setting, with the removal of style information as a necessary condition. Based on this theoretical analysis, we introduce an exclusive style removal module that extracts style information distinct from the baseline features, thereby facilitating inference. Moreover, the module is easy to integrate with other NCD methods, acting as a plug-in that improves performance on novel classes whose distribution differs from that of the seen labeled set. Additionally, recognizing the non-negligible influence of different backbones and pre-training strategies on the performance of NCD methods, we build a fair benchmark for future NCD research. Extensive experiments on three common datasets demonstrate the effectiveness of the proposed module.
Novel Class Discovery, Cross Domain Learning, Exclusive Style Removal
§ INTRODUCTION
Generic Machine Learning (ML), whether supervised or semi-supervised, typically relies on prior knowledge of a specific label space to which all samples belong.
However, in open-world scenarios, it is common to encounter samples whose labels do not exist in the supervised label space of the ML model. This can significantly reduce model performance and raise concerns about model trustworthiness <cit.>. To tackle this issue, Novel Class Discovery (NCD) <cit.> has been proposed and attracted significant attention in the ML community. Different from the traditional ML setting where testing samples should all fall inside the known classes during training, the NCD is introduced not only to classify data into known classes but also to cluster instances that do not belong to any existing class <cit.>. Specifically, given a training dataset that includes a labeled set and an unlabeled set with different label space, the goal of NCD is to learn a model that can cluster the unlabeled data by leveraging the supervised information from the labeled data, meanwhile without compromising classification performance on the labeled data <cit.>. Due to the disjoint label spaces, the labeled and unlabeled sets are often referred to as the seen and novel categories set in many NCD works <cit.>. By relaxing the restriction of the same label space of semi-supervised learning, NCD becomes a promising field for open-world scenarios and various related applications such as anomaly detection <cit.>, outlier identification <cit.>, and so on<cit.>. In recent years, several NCD methods have been proposed, which can be generally categorized into one-stage and two-stage approaches <cit.>. Initially, NCD algorithms were usually developed by employing the two-stage strategy, which first focus on labeled data to establish a unified feature extraction framework, and then this framework is used to learn a similarity function or incorporate latent features between labeled and unlabeled data. Representative works following this approach include the Deep Transfer Clustering (DTC) <cit.>, Constrained Clustering Network (CCN) <cit.> and Meta Classification Likelihood (MCL) <cit.>. Apart from two-stage methods, the one-stage algorithms are characterized by simultaneously exploiting both labeled and unlabeled data. They typically learn a shared latent space representation with two different tasks: clustering unlabeled data and maintaining good classification accuracy on labeled sets. For example, remarkable works such as UNified Objective function (UNO) <cit.>, ComEx <cit.> and Rank Statistics (RS) <cit.> all train a joint encoder with the assistance of two head modules for classification and clustering to obtain feature representations from both labeled and unlabeled data. Although many breakthroughs have been made in this field, most existing works <cit.> are introduced under the assumption that instances are consistently sampled from the same domain <cit.>. This assumption proves unrealistic in many real-world applications, as it inevitably results in a performance decrease for existing NCD methods when the distribution of unlabeled data differs from that of the labeled set. To illustrate this issue, we construct a series of toy experiments. First, we employ Gaussian Blur with five increasing levels of severity to the CIFAR10 dataset <cit.> to create data with different distribution compared to the original dataset. 
Based upon the corrupted data, we then synthesize two groups of toy datasets: CIFAR10cmix (with distribution shift) and CIFAR10call (without distribution shift), which respectively represent scenarios where distribution shift exists and where it is absent between labeled and unlabeled data. Further details on the settings of datasets and experiments are introduced in Section <ref>. As shown in Fig. <ref>, despite the consistent performance decline of existing methods <cit.> as the corruption severity increases, it is noteworthy that the performance of three methods trained on CIFAR10call (without distribution shift, represented by orange lines) is consistently better than that on CIFAR10cmix (with distribution shift, represented by green lines). The substantial gap observed between the two lines inspires us to address a new NCD task. Considering that the distinction between domains can be viewed as a special kind of distribution shift, and given the considerable body of works <cit.> on cross domain problems, in this paper we concentrate on the task of Cross Domain NCD (CDNCD) where unlabeled instances belong to classes and domains both different from the labeled data. The CDNCD task is motivated by a practical perspective, as real-world ML systems need to perform well across domains and classes without any supervised information. To address this challenging task, we first expand the solvability analysis from NCD to CDNCD task and demonstrate the critical importance of removing style information for solving this new task. Based on the theory, we then introduce a solution that is built upon a baseline work <cit.> trained simultaneously with a simple yet effective style removal module. Furthermore, as the NCD field is still in its infancy, several algorithms have been proposed with diverse backbones and settings. This results in a lack of a comprehensive and fair experimental benchmark for comparison. So we build a unified benchmark that can provide a useful reference for future NCD and related transfer learning research. The contributions of this paper are as follows: * We define a more challenging but practical task called Cross Domain Novel Class Discovery by verifying the failure of existing NCD methods on data with distribution shift on a series of synthesized toy datasets. * We first theoretically analyze the solvability of CDNCD task and then propose a method by removing the exclusive style feature between labeled and unlabeled data. * We find that the choice of diverse backbones and pre-training strategies have a significant impact on the performance of existing algorithms. Therefore, a unified experimental coding framework is developed as a fair benchmark for further research. * Numerical experiments quantitatively demonstrate the effectiveness of the proposed method and validate its merit as a plug-in for other NCD methods. § RELATED WORKS §.§ Novel Class Discovery The NCD problem was first introduced by Hsu et al. <cit.> from the perspective of the Transfer Learning (TL) task and they proposed a representative NCD algorithm termed CCN <cit.>. Following this two-stage NCD work, some algorithms <cit.> were developed while <cit.> pointed out that the two-stage models only use labeled data in the first training stage which could lead to data bias. To avoid this issue, more recent NCD works have adopted a one-stage manner to learn feature representation based on both labeled and unlabeled sets. 
For example, based on the same backbone <cit.> for feature extraction, UNO <cit.> introduces a unified objective function for discovering novel classes and ComEx <cit.> proposes two groups of compositional experts to enhance the discriminate capabilities to both sets. In ML research, new directions are often defined by relaxing the assumptions or restrictions behind existing tasks to be closer to real-world applications, so is the NCD task. Generally speaking, there are two directions for expansion. One is the cross domain NCD studied here which relaxes the assumption that “both labeled and unlabeled data come from one domain” <cit.>, and the other is named as Generalized Category Discovery (GCD), which removes the limiting assumption that “all of the unlabeled images come from new categories” <cit.>. As the first work to address NCD in a cross domain setting, Yu et al. <cit.> proposed a self-labeling framework to recognize seen classes and discover novel categories of target domain samples simultaneously. However, the use of a supervised pre-trained backbone <cit.> in this work might cause the label information leakage of novel classes, which contradicts with vanilla setting of NCD <cit.>. Meanwhile, GCD deals with NCD when the unlabeled data includes both seen and novel classes, without any information on the number of novel classes <cit.>. It is a natural extension of NCD task, requiring methods with the ability to recognize the previously seen categories and estimate the class number of novel classes in the unlabeled data <cit.>. Besides, combining a representative GCD algorithm <cit.> with a self-distillation mechanism and entropy regularization, SimGCD <cit.> was introduced as an improved version of <cit.> and could serve as a strong baseline in NCD tasks. §.§ Cross Domain Learning Cross domain learning consists of two well-defined tasks: Domain Adaptation (DA) and Domain Generalization (DG) <cit.>. DA aims to transfer knowledge from a label-rich source domain to a label-scarce target domain, with the target domain data available during training <cit.>. In contrast, the DG model is trained on multiple source domains and tested on an unseen target domain to improve the generalization ability of the model <cit.>. From the perspective of DA, the CDNCD problem studied here could be regarded as a new task that relaxes the assumption that “all data share the same category space”. Specifically, the CDNCD model should be trained on labeled data from the source domain and unlabeled data from the target domain, which is similar to common DA methods <cit.>, but the unlabeled data come from novel categories that the labeled set does not belong to. Generally, both DA and DG tasks are based on the assumption that the data from different domains share domain-invariant features suitable for discrimination <cit.>. Therefore, learning to extract these domain-invariant features and removing domain-specific features is the key to solving the cross domain learning problem. In DA, for example, the distribution of source and target domain data is aligned by adversarial training <cit.>, Maximum Mean Discrepancy (MMD) <cit.> or Optimal Transport (OT) <cit.> to learn the domain-invariant representation. 
Moreover, techniques such as augmenting samples on feature level by Cross-Patch Style Swap (CPSS) <cit.> and style randomization <cit.>, variational bayesian inference <cit.>, reconstructing the original image to analogs in multiple domains <cit.> or simply making projected textural and semantic feature orthogonal <cit.> encourage the DG models to focus on semantic (domain-invariant) information. So in this paper, we follow the idea by making the above two kinds of features with low correspondence to find and remove exclusive style features (i.e. domain-specific features) for solving the cross domain NCD problem. § THE PROPOSED METHOD §.§ Motivation Current NCD methods are usually based on data from a specific domain with the same distribution and may suffer when there is distribution shift between the seen and novel categories. To point out this issue, we first construct a series of corrupted CIFAR10 <cit.> with different severities of Gaussian Blur[The codes are available on: <https://github.com/hendrycks/robustness>]<cit.>. Based on the original and corrupted data, two groups of toy datasets CIFAR10cmix and CIFAR10call are synthesized. Specifically, in dataset CIFAR10cmix, the first five classes of original CIFAR10 as seen categories are chosen as labeled data while the remaining classes of corrupted CIFAR10 are referred to as novel classes, with their corresponding samples used as unlabeled data. In contrast, labeled and unlabeled data in CIFAR10call are both corrupted data. So the CIFAR10cmix stands for there existing distribution shift between seen and novel categories while the CIFAR10call denotes the setting with the same distribution. Then we use the above two groups of synthesized datasets to train and test existing NCD methods <cit.>. The test results, as shown in Fig.<ref>, clearly demonstrate that increasing corruption severities result in a consistent degradation in performance. More importantly, when the unlabeled data is drawn from a different distribution from that of the labeled data, as illustrated by the green lines in Fig.<ref>, the performance of three NCD methods on clustering novel categories may significantly and consistently decrease compared with the same distribution setting CIFAR10call shown by orange lines. The notable gap between two lines serves as compelling evidence that these NCD methods are sensitive to the distribution shift between labeled and unlabeled data. This is the motivation of our work to propose and solve the CDNCD problem and further partially bridge the above gap. §.§ Problem Definition and Analysis of Its Solvability Although the definition of NCD is presented in various works <cit.> in different manners, Chi et al. <cit.> were the first to provide the formal definition and theoretical solvability of the NCD problem. They clarify the assumptions underlying NCD that high-level semantic features should be shared between labeled and unlabeled data. Building on the concepts introduced in <cit.>, we define and outline the assumptions of CDNCD problem and discuss its solvability. Our analysis leads to the conclusion that in addition to the requirement for similar semantic information between labeled and unlabeled data, solving the CDNCD also hinges on the removal of the exclusive style information induced by cross domain setting. Two crucial definitions from <cit.> regarding the K-ϵ-separable random variable (r.v.) and the consistent K-ϵ-separable transformation set are list as follows: Definition 1 (K-ϵ-separable r.v.). Given a r.v. 
X ∼ℙ_X defined on space 𝒳⊂ℝ^d, X is K-ϵ-separable with a non-empty function set ℱ={f: 𝒳→ℐ}, if ∀ f ∈ℱ, τ(X, f(X)) :=max _i, j ∈ℐ, i ≠ jℙ_X(R_X | f(X)=i∩ R_X | f(X)=j)=ϵ, where ℐ = {i_1, ⋯, i_K} is an index set, f(X) is an induced r.v. whose source of randomness is X, and R_X | f(X)=i is the support set of ℙ_X | f(X)=i. Definition 2 (Consistent K-ϵ-separable Transformation Set). Given the r.v. X ∼ℙ_X that is K-ϵ-separable with ℱ, a transformed r.v. π(X) is K-ϵ-separable with ℱ, if ∀ f ∈ℱ, τ(π(X), f(X)) :=max _i, j ∈ℐ, i ≠ jℙ_π(X)(R_π(X) | f(X)=i∩ R_π(X) | f(X)=j)=ϵ, where π: 𝒳→ℝ^d_r (d_r≪ d) is a dimension reduction transformation function. Then, a non-empty set Π is a consistent K-ϵ-separable transformation set satisfying that ∀π∈Π, π(X) is K-ϵ-separable with ℱ. Similar to the assumptions in cross domain learning works <cit.>, here we also assume that image data follow a joint distribution of style and content information, denoted as X ∼ℙ_X = ℙ_sXℙ_cX, where ℙ_s and ℙ_c stand for the margin distributions of style and content, respectively. Consequently, the features processed by a non-linear transformation π∈Π can theoretically be decomposed into two parts: π(X) = [π_c(X),π_s(X)], where [· , ·] is tensors concatenation, and π_c(X), π_s(X) ∈ℝ^d_r represent the corresponding content feature and style feature. Building on <cit.>, the NCD problem on cross domain setting can be defined as follows. The difference is that the transformation set Π is replaced by Π_c for dimension reduction of the content feature. Definition 3 (CDNCD). In data-label joint distribution {𝒳,𝒴}, two r.v. X^l, X^u are sampled from 𝒳^l and 𝒳^u to represent labeled and unlabeled data respectively, where X^l∼ℙ_X^l = ℙ_cX^lℙ_sX^l and X^u∼ℙ_X^u = ℙ_cX^uℙ_sX^u, the classification function for data with ground-truth labels f^l: 𝒳→𝒴^l and a function set ℱ={f: 𝒳→𝒴^u}, where 𝒴^l = {i^l_1, …, i^l_K^l} and 𝒴^u = {i^u_1, …, i^u_K^u}. Then we have the following assumptions: * The support set of X^l and the support set of X^u are disjoint, and underlying classes of X^l are different from those of X^u (i.e., 𝒴^l ⋂𝒴^u = ∅), and ℙ_sX^l≠ℙ_sX^u; * X^l is K^l-ϵ^l-separable with ℱ^l={f^l} and X^u is K^u-ϵ^u-separable with ℱ^u, where ϵ^l = τ(X^l, f^l(X^l))<1 and ϵ^u =min_f∈ℱτ(X^u, f(X^u))<1; * There exist a consistent K^l-ϵ^l-separable transformation set Π^l_c for X^l and a consistent K^u-ϵ^u-separable transformation set Π^u_c for X^u; * Π^l_c ⋂Π^u_c ≠∅. With above assumptions <ref>-<ref> hold, the goal of CDNCD is to learn a dimension reduction transformation π̂_c: 𝒳→ℝ^d_r via minimizing 𝒥(π̂_c)=τ(π̂_c(X^l), f^l(X^l))+τ(π̂_c(X^u), f^u(X^u)) such that f^u(X^u) is K^u-ϵ^u-separable, where f^u∈ℱ and d_r≪ d. The interpretation for <ref>-<ref> are the same as those in <cit.>, with the addition of a supplement to <ref>: ℙ_sX^l≠ℙ_sX^u implies that the style distribution of X^l and X^u is different. In other words, the labeled and unlabeled data come from different domains. Theorem 1 (CDNCD is Theoretically Solvable). Given X^l, X^u, f^l and ℱ defined above and assumptions <ref>-<ref> hold, then π̂_c is K^u-ϵ^u-separable. If ϵ^u = 0, then CDNCD is theoretically solvable. Theorem 1 suggests that in the CDNCD setting, it is possible to learn a suitable transformation π̂_c to achieve separable content features for inference. The proof of Theorem 1 is similar as that in <cit.>, so it is omitted here. The only difference lies in replacing the transformation set Π with Π_c. 
In addition, a theorem regarding that CDNCD is not solvable when condition <ref> does not hold is similar to that in <cit.>. The latter argues that the consistent semantic information between labeled and unlabeled data is a necessary condition for solving the NCD problem, so is the CDNCD. When we totally following the way of NCD setting <cit.>, ignoring to remove the exclusive style information caused by the cross domain context, the CDNCD problem might be unsolvable. This claim is supported by a new Impossibility Theorem presented formally below. Theorem 2 (Impossibility Theorem with Style Information). Given solvable CDNCD problem with K^u-ϵ^u-separable transformation set π̂_c. X^l, X^u, f^l and ℱ defined above and assumptions <ref>-<ref> hold. Consider conditions below on the expanded transformation set Π = [Π_c, Π_s]:= {[π_c, π_s], π_c ∈Π_c and π_s ∈Π_s } as follows: (C*) There exist a consistent K^l-ϵ^l-separable transformation set Π^l for X^l and a consistent K^u-ϵ^u-separable transformation set Π^u for X^u; (D*) Π^l ⋂Π^u ≠∅. By utilizing conditions <ref>-<ref> in Definition 3, (C*) can be hold, while (D*) might not be achievable. This implies that π̂∈Π might not be K^u-ϵ^u-separable. Theorem 2 demonstrates that solving the CDNCD might be impossible without removing the style feature π̂_s(X). In other words, if the goal is to find a transformation π̂: 𝒳→ℝ^2dr, where π̂(X) = [π̂_c(X),π̂_s(X)] includes both content and style features for the dimension reduction of data, then the CDNCD might be ill-defined. The proof of Theorem 2 is partially based on a lemma listed below. Lemma (Dimension Lemma of K-ϵ-separable r.v.). Given a d-dimension bounded subspace 𝒳⊂ℝ^d, an index set ℐ = {i_1,...,i_K}, a n-dimension subspace 𝒲⊂ℝ^n and a m-dimension subspace 𝒵⊂ℝ^m, and 𝒵⊂𝒲⊂𝒳 with m<n<d, then K-ϵ-separable r.v. Z ∈𝒵 with ℱ={f: 𝒳→ℐ} is a sufficient but not necessary condition for K-ϵ-separable r.v. W ∈𝒲 with the same ℱ. The proof of this Lemma is provided in the Appendix <ref>. Based on this lemma, it is evident that the original assumption <ref> is a sufficient but not necessary condition for the assumption (C*) to hold. We only need to prove that the condition Π^l_c ⋂Π^u_c ≠∅ could not guarantee Π^l ⋂Π^u ≠∅. The proof of this assert is provided in Appendix <ref>. §.§ Model Overview Based on the analysis in <ref>, we aim for the content feature π_c(X) to be uncorrelated with the style feature π_s(X), as the latter is irrelevant for class prediction. Therefore, π_s(X) is removed from the transformed feature π(X) to ensure the solvability of the CDNCD with theoretical guarantees. In practice, this decoupling of the non-linear transformation π∈Π into two independent parts can be achieved using two parallel deep neural networks and rational regularization <cit.>. This approach enables the alignment of the feature distribution π_c(X) between labeled and unlabeled sets for inference. So our proposed method consists of two components: a baseline work <cit.> and an exclusive style removal module called the style encoder. As shown in Fig. <ref>, these two parallel models are trained simultaneously to separate content and style features. During inference, the base feature is fed to the classification head as same as <cit.> to predict the output labels directly. §.§.§ Baseline Model Based on an effective baseline GCD work <cit.>, we employ a vision transformer ViT-b <cit.> pre-trained on ImageNet in a self-supervised manner <cit.> as a feature representation backbone. 
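For concreteness, the two parallel branches can be set up roughly as follows. This is a minimal sketch rather than the exact configuration: it assumes the publicly released DINO ViT-B/16 weights are available through torch.hub, and the projection-head dimensions are illustrative.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

# f: ViT-B/16 backbone with DINO self-supervised ImageNet weights (assumed available via torch.hub)
backbone = torch.hub.load("facebookresearch/dino:main", "dino_vitb16")
# phi: projection head for contrastive representation learning (dimensions are illustrative)
proj_head = nn.Sequential(nn.Linear(768, 2048), nn.GELU(), nn.Linear(2048, 256))
# g: parallel style encoder (ResNet18) producing the style feature to be decorrelated from z
style_encoder = resnet18(num_classes=256)

def forward_features(x):
    h = backbone(x)          # latent feature h_i, fed to the classification head at inference
    z = proj_head(h)         # content feature z_i used by the contrastive objectives
    v = style_encoder(x)     # style feature v_i used only by the style removal loss
    return h, z, v
```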
Unlike <cit.>, this self-supervised strategy does not contradict with the NCD setting, as the novel classes have no supervised information throughout all training processes. The training procedures for the backbone, classification head, and projection head are consistent with <cit.>. Specifically, same as in GCD <cit.>, unsupervised contrastive loss and supervised contrastive loss are used to fine-tune the backbone and projection head. Formally, two views x_i and x_i^' with different random augmentation are processed by the backbone f and projection head ϕ to generate two feature representations z_i = ϕ(f(x_i)) and z_i^' = ϕ(f(x_i^')). The unsupervised contrastive loss for representation learning is defined as: ℒ_rep^u=1/|B|∑_i ∈ B-logexp(z_i^⊤z_i^' / τ_u)/∑_n^n ≠ iexp(z_i^⊤z_n^' / τ_u), in which τ_u is a temperature hyper-parameter for unsupervised contrastive loss, B is the batch of data including labeled and unlabeled samples. To effectively leverage existing label information, the outside version of supervised contrastive loss <cit.> is added as follows. ℒ_rep^s=1/|B^l|∑_i ∈ B^l1/|𝒩(i)|∑_q ∈𝒩(i) -logexp(z_i^⊤z_q^' / τ_c)/∑_n^n ≠ iexp(z_i^⊤z_n^' / τ_c), where 𝒩(i) is the set of negative samples that hold the same label as the i-th sample in the labeled batch B^l, τ_c is a temperature hyper-parameter for supervised contrastive loss. Thus representation loss is defined as ℒ_rep = (1 - λ)ℒ_rep^u + λℒ_rep^s, where λ is set to balance the two losses. Instead of the self-labeling strategy employed in <cit.>, baseline <cit.> used self-distillation as a parametric classification paradigm which consists of student and teacher networks to further enhance the representative capability of the model. Based on the latent feature h_i = f(x_i) and randomly initialized prototypes 𝒞={c_1, …, c_K}, where K = |𝒴^l⋃𝒴^u| is the total number of categories. Then the soft label for each augmented sample is p_i = (p_i^(1),...,p_i^(K))^⊤ in which every element p_i^(k) is computed by: p_i^(k)=exp(1/τ_s(h_i /h_i_2)^⊤(c_k /c_k_2))/∑_k^'exp(1/τ_s(h_i /h_i_2)^⊤(c_k^' /c_k^'_2)), where τ_s is a temperature for student network. For another view x_i^', the soft label q_i^' is computed by teacher network with τ_t similarly. So the unsupervised cluster objective with mean-entropy maximum regularization term <cit.> is defined as: ℒ_cls^u=1/|B|∑_i ∈ Bℓ(q_i^', p_i)-ε H(p), where ℓ is the Cross Entropy (CE) loss function, H(·) is the entropy function and p=1/2|B|∑_i ∈ B(p_i+p_i^') indicates the average prediction of a mini-batch. In order to guarantee the performance on labeled data, the general CE loss is used as a supervised objective defined as: ℒ_cls^s=1/|B^l|∑_i ∈ B^lℓ(y_i, p_i), where y_i, p_i are the ground truth and predicted label of the i-th sample, and B^l is the batch of labeled data. Then the classification loss is set as ℒ_cls = (1 - λ)ℒ_cls^u + λℒ_cls^s. §.§.§ Exclusive Style Removal Module To ensure the solvability of NCD on cross domain setting as discussed in <ref>, for better aligning content feature, we propose a simple yet effective strategy. This strategy involves using a ResNet18<cit.> trained with baseline work simultaneously for extracting feature of style information that is distinctive from the discriminative feature obtained from the backbone and projection head for classification. 
To validate this statement, we separately use three common similarity measures as objective functions to assess the correspondence between content feature for inference z_i = ϕ(f(x_i)) and style feature extracted by the style encoder v_i = g(x_i). These measures include the inner product z_i^⊤v_i, cosine similarity z_i^⊤v_i/z_i_2v_i_2 and Pearson correlation cov(z_i, v_i)/σ_z_iσ_v_i, where cov(·,·) is the covariance and σ_z_i is the standard deviation of z_i. To ensure that the style and content feature are distinct from each other, three different style removal objective functions minimized during training are defined respectively: ℒ_orth = abs(z_i^⊤v_i), ℒ_cossimi = abs(z_i^⊤v_i/z_i_2v_i_2), ℒ_corr = abs(cov(z_i, v_i)/σ_z_iσ_v_i), where abs(·) is the absolute value function. We use a unified format to define the style removal function as follows: ℒ_style_removal = λ_a ℒ_orth + λ_bℒ_cossimi + λ_cℒ_corr, where value of λ_a, λ_b and λ_c are set to 0 or 1 to control the usage of different functions such that λ_a + λ_b + λ_c = 1. Following the baseline approach <cit.>, the overall loss function is defined as ℒ = ℒ_rep + ℒ_cls + wℒ_style_removal, where w is a hyper-parameter to balance the style removal loss and baseline loss. §.§ Lack of Fairly Comparable Benchmark Given that the field of NCD is still in its early stages, there is currently no standard benchmark for fair comparison, as different works have used diverse backbones and pre-trained strategies <cit.>. For example, RS <cit.>, UNO <cit.>, and ComEx <cit.> employed vanilla ResNet18 <cit.> to learn a unified feature extractor based on the training data in a specific task in a self-supervised manner. Besides, DualRS <cit.> utilized ResNet50 <cit.> pre-trained on ImageNet via self-supervision MoCov2 <cit.>. In contrast, GCD <cit.> and SimGCD <cit.> used self-supervised DINO <cit.> to pre-train the ViT <cit.> backbone on ImageNet for feature extractor. Even in <cit.>, a supervised pre-trained ResNet50 on ImageNet was used, which could potentially lead to label information leakage for novel class samples. This model setup actually violates the NCD setting, as many novel categories in the training dataset might have already been seen in the ImageNet with labels. In the downstream applications of deep learning research, it is well known that performance is highly dependent on the backbone networks and corresponding pre-trained strategy for base feature extraction <cit.>. Different backbones and pre-trained manners might lead to significantly different results, regardless of the outcome achieved by modules hand-crafted specially for specific tasks. Therefore, we designed a series of warm-up experiments to compare the performance of two NCD methods <cit.> with different pre-trained backbones. The detailed results and analysis are presented in Section <ref>. § EXPERIMENTS §.§ Experimental Settings Datasets. In our experiments, we use three datasets: CIFAR10 <cit.>, OfficeHome <cit.> and DomainNet40 <cit.>. As for CIFAR10, following the same setting of existing NCD tasks, we utilize the first five classes as labeled data and the remaining five classes as unlabeled sets. The original OfficeHome is a image dataset designed for DA and DG tasks in computer vision <cit.>. It consists of images from four different domains: Art (A), Clipart (C), Product (P), and Real-World (R). Each domain contains 65 classes of images with various office-related objects and scenes. 
We use the first 40 classes for experiments and split 20:20 for labeled and unlabeled data. By sequentially combining each pair of domains from four domains as labeled and unlabeled datasets, we establish twelve experiment settings for the cross domain conditions. Following the naming conventions in cross domain tasks <cit.>, the term R→A indicates the labeled data comes from the Real-World domain and the unlabeled data sampled from the Art domain, and so on. DomainNet <cit.> is a large-scale dataset which contains 345 classes from six domains, while many classes contains mislabeled outliers and plenty of indistinguishable samples exist in domains Quickdraw and Infograph<cit.>. So Tan et al. <cit.> select and construct a subset termed DomainNet40, which contains commonly-seen 40 classes from four domains: Clipart (C), Painting (P), Real (R), and Sketch (S). Similar to the OfficeHome, we split 20:20 classes for labeled and unlabeled data and establish twelve experiment settings for cross domain scenarios. For motivation setup and warm-up experiments, we change the distribution of the unlabeled data by introducing several corruptions <cit.> to create toy synthesized datasets. Further details have been mentioned in <ref>. All of the experiments in this paper are conducted by training and testing models on corresponding datasets to ensure consistent experimental settings. Data Augmentation. We use strong data augmentation for all datasets, following the approach in <cit.> which includes crop, flip, and jittering in a moderate random manner. For a fair comparison, we also apply these transformations to other NCD methods in all experiments. Evaluation Metrics. All of the experiments in this paper assess the performance of clustering on the novel categories. As a primary evaluation metric used in clustering tasks, clustering ACCuracy (ACC)<cit.> is used here which is calculated as follows: ACC=1/N∑_i=1^N1[y_i=map(ŷ_i)], where y_i and ŷ_i are ground-truth label and clustering assignment. N is the number of the test data and the map is the optimal permutation of predicted classes computed via the Hungarian algorithm <cit.>. Another common metric is the Normalized Mutual Information (NMI) defined as: NMI=MI(y, ŷ)/√(H(y) H(ŷ)), where MI(y,ŷ) is the Mutual Information between y and ŷ, in which y is the set of ground-truth labels {y_i}_N and so on. Besides ACC and NMI, Adjusted Rand Index (ARI) is also used here to measure the agreement between clusters which is defined as: ARI=RI - E(RI)/max(RI)-E(RI), where RI = TP + TN/TP + FP + FN + TN is the Rand Index, E(RI) is the expected value of the RI and max(RI) = 1. TP, TN, FP, and FN are the number of true positive, true negative, false positive, and false negative respectively. Different from ACC and NMI which range from 0 to 1, ARI ranges from -1 to 1. The value 0 indicates a random cluster and higher values indicate better clustering results. §.§ Warm-up Experiments and Benchmark Setup During the process of experiments, we observed that several NCD methods are built using different backbones, so we conducted a series of warm-up experiments to compare the performance of UNO <cit.> and ComEx <cit.> with ResNet50, ResNet18 <cit.> and ViT <cit.> backbones with different pre-trained strategies on both original and synthesized OfficeHome datasets. 
The synthesized datasets were used to evaluate the performance in scenarios with distribution shifts between labeled and unlabeled sets while the original OfficeHome is utilized to test algorithms in a real cross domain setting. Specifically, similar to the toy dataset CIFAR10cmix, the labeled data in the synthesized OfficeHome is based on the Real-World domain of the original set, while the unlabeled part is constructed using three corruption functions: gaussian blur, jpeg compression, and impulse noise (referred to as gaussian, jpeg, and impulse respectively for simplicity) with a severity level of 5. Additionally, for consistency, the synthesized OfficeHome is denoted as OfficeHomecmix. For the original OfficeHome, each model trained on the twelve cross domain settings has twelve corresponding testing results, and the mean value of the results with standard deviation is listed for concise comparison. The experimental results on metric ACC with standard deviation (std) (%) are presented in Table. <ref>. It could be observed that when using the same self-supervised training strategy <cit.>, both UNO and ComEx perform significantly better with the ViT-b <cit.> backbone compared to the ResNet50 and ResNet18 <cit.>. Besides, it is evident that employing self-supervised pre-trained ResNet50 leads to improved performance for both methods compared to models without pre-training. However, training ResNet18 in the manner of DINO <cit.> proved to be challenging, and the results with ResNet18 are unsatisfactory regardless of whether the backbone was pre-trained or not. Based on these warm-up experiments, it is clear that the choice of backbone and pre-trained strategy plays a crucial role in the performance of NCD methods. Therefore, it is essential to establish a fair benchmark for comparison. In the following experiments, including in Section <ref>, we use the ViT-b <cit.> backbone and pre-trained manner <cit.> for methods <cit.> to build a benchmark. All experimental results in this paper are the mean values with standard deviations of 5 runs with different random seeds. It is worth noting that representative one-stage NCD methods RS <cit.> performs well using ResNet18 <cit.> with three successive steps: self-supervised training with all data; supervised training with labeled data; and finally auto-novel step using Rank Statistics to measure and match the similarities among unlabeled data points, subsequently facilitating the generation of pseudo labels. Although the training procedure is time-consuming, the compact and unified training strategies guarantee good results. When the pre-trained ViT-b <cit.> is used as a backbone or Rank Statistics is used just for pseudo labeling in the supervised learning step, the performance of RS is consistently unsatisfactory. Therefore, we do not use RS in the following experiments, except for the Section of motivation setup <ref> and call-back <ref>, where we use the original RS for comparison. §.§ Novel Class Discovery Task on Toy Datasets with Distribution Shift In this task, we use synthesized toy datasets CIFAR10cmix and OfficeHomecmix mentioned in Sections <ref> and <ref> with distribution shift to verify our method compared with baseline <cit.> and State-Of-The-Art (SOTA) methods <cit.> <cit.>. Our proposed methods using three style removal objective functions <ref> <ref> <ref>, is denoted as Ours_orth, Ours_cossimi and Ours_corr respectively. From the results shown in Table. 
<ref>, it can obviously be seen that with the assistance of style removal module, baseline <cit.> has been improved to some extent on both CIFAR10cmix and OfficeHomecmix datasets with different corruptions and the improvement is significant especially on CIFAR10cmix. Regarding the OfficeHome dataset with higher resolution, UNO <cit.> and ComEx <cit.> achieve better results using a multi-view self-labeling strategy, while baseline and our method could obtain comparable performance. Besides, the experiments on toy datasets with distribution shift show that the performance of our method is not sensitive to the choice of different style removal objective functions, confirming the robustness of the proposed module. §.§ Cross Domain Novel Class Discovery Task In the realistic cross domain situation, similar to the warm-up part in Section <ref>, we employ the twelve scenarios with different domains between labeled and unlabeled sets to sequentially train and test the proposed method and its counterparts on OfficeHome <cit.> and DomainNet40 <cit.>. The performance results on metric ACC are shown in Table. <ref> and Table. <ref>. Due to the space limitation, the results of NMI and ARI are provided in <ref> in Appendix. From the mean results shown in six tables, it could be concluded that on OfficeHome with cross domain setting, our method generally outperforms the baseline <cit.> and SOTAs <cit.> <cit.> on all three metrics. The similar conclusion could be obtained on DomainNet40 except for the comparable performance evaluated by ACC. The upward arrows indicate that with the assistance of the proposed exclusive style removal module, which includes style encoder g and three different objectives ℒ_style_removal, the test results of our method on both datasets are better than baseline to varying extents in most of the cross domain conditions. This demonstrates the effectiveness of the proposed module. Besides, on both datasets, the proposed model with ℒ_orth induced by simplest inner product generally performs the best compared to the model using ℒ_cossimi and ℒ_corr. Additionally, the mean results in the last row indicate the same conclusion. This is consistent with the results shown in Table. <ref> on toy datasets and demonstrates that keeping the style and content features orthogonal is the most effective way to decouple the two kinds of features. Last but not least, it can be observed that the performance of each algorithm varies significantly with different matches between the source and target domains on two datasets. Additionally, compared to different source domains, the results of CDNCD seem to be more related to the target domain. Particularly when the target domain is real of both OfficeHome <cit.> and DomainNet40 <cit.>, all algorithms perform very well. This is because the backbone is pre-trained on ImageNet, which can be referred as a real domain dataset, further highlighting the importance of pre-training. §.§ Motivation Call-back and Plug-in Ability of Proposed Module When we revisit the motivation setup in Section <ref>, our initial goal was to bridge the gap between two types of experimental settings: those with distribution shift between labeled and unlabeled sets, and those without. In other words, the degradation of performance on data with different distributions needs to be alleviated to some extent. To achieve this, we integrate the proposed exclusive style removal module into three NCD SOTAs: RS<cit.>, UNO<cit.>, and ComEx<cit.>. 
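To make the plug-in structure concrete, a minimal PyTorch sketch of the exclusive style removal module and its three objectives is given below. The precise definitions of ℒ_orth, ℒ_cossimi and ℒ_corr follow the equations referenced above; the batch-wise forms used here (squared inner product, absolute cosine similarity, and absolute per-sample Pearson correlation between the outputs of the backbone f and the style encoder g) are an illustrative reconstruction, and all names are placeholders rather than our implementation.

```python
# Schematic sketch of the exclusive style removal objectives.
# content_feat = f(x) and style_feat = g(x) are (batch, dim) feature matrices
# from the backbone and the parallel style encoder, respectively.
import torch.nn.functional as F


def style_removal_loss(content_feat, style_feat, mode="orth", eps=1e-8):
    if mode == "orth":       # L_orth: keep content and style features orthogonal
        inner = (content_feat * style_feat).sum(dim=1)
        return (inner ** 2).mean()
    if mode == "cossimi":    # L_cossimi: suppress their cosine similarity
        return F.cosine_similarity(content_feat, style_feat, dim=1).abs().mean()
    if mode == "corr":       # L_corr: suppress their per-sample Pearson correlation
        c = content_feat - content_feat.mean(dim=1, keepdim=True)
        s = style_feat - style_feat.mean(dim=1, keepdim=True)
        corr = (c * s).sum(dim=1) / (c.norm(dim=1) * s.norm(dim=1) + eps)
        return corr.abs().mean()
    raise ValueError(mode)


# Plug-in usage: the style encoder runs in parallel with the host backbone and its
# loss is simply added, with a weighting factor, to the host method's objective:
# total_loss = ncd_loss + lambda_style * style_removal_loss(f(x), g(x), mode="orth")
```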
Similar to Section <ref>, these three methods are trained and tested on the synthesized CIFAR10cmix with different corruption severities of Gaussian Blur <cit.> on novel categories. In detail, similar to Fig. <ref>, the style encoder g is added to the backbones f of the above methods as a parallel plug-in module. Since these three algorithms use projection heads with different manners, the simplest and most effective objective ℒ_orth in ℒ_style_removal which calculate inner product between outputs of f and g directly, is added to the overall loss function of the corresponding methods. A series of test results are shown with newly added purple lines in Fig. <ref> compared with Fig. <ref>. From the results, we can see that with the assistance of the style removal module, the performance of these three methods is improved on synthesized CIFAR10cmix and the gap becomes smaller to some extent than before shown in Fig. <ref>. This confirms the effectiveness and the plug-in ability of the proposed module. It is also interesting to note that there is almost no gap between the two settings in the baseline <cit.>, which may be due to the fact that with a contractive learning strategy, this method could learn more discriminative content features than RS <cit.>, UNO <cit.> and ComEx <cit.>. In addition, when integrated with the style removal module, the style feature is removed more thoroughly, which makes the results even outperform the counterpart on datasets without distribution shift, as shown with an orange line in Fig. <ref>(d). In order to illustrate the improvement more clearly, we present the t-SNE figures in Fig. <ref> showing the feature distribution obtained by <cit.> with and without the proposed module. Specifically, the feature fed for the t-SNE embedding method is the output of the backbone on one random seed, and the labels are the ground truth of the test set. The corruption level of Gaussian Blur on novel categories is set to 2. It is clear that for the last five classes (novel categories), the feature distribution with the proposed style removal module is more compact and separable than vanilla ones, leading to higher cluster accuracy and a higher position of purple lines compared with green lines in Fig. <ref>. Even for the first five classes (seen categories), the method with the proposed module also shows a more compact feature distribution than the origin, which verifies the merit of the proposed module and its effectiveness in addressing the motivation. § CONCLUSION In this paper, we introduce a task named Cross Domain Novel Class Discovery and discuss its solvability. Based on the theoretical analysis, a new algorithm is proposed which utilizes a GCD algorithm as a baseline and includes a simple yet effective exclusive style removal module trained simultaneously with the baseline. Experimental results show that our method outperforms the baseline method and two representative NCD methods on toy datasets with distribution shift and two common datasets with cross domain settings. Moreover, the proposed module is robust to different style removal objective functions and can be easily integrated into other NCD methods as a plug-in to improve their performance on data with distribution shift. Last but not least, a fair benchmark with the same backbone and pre-trained strategy built in this paper is beneficial for the development of NCD and other related transfer learning tasks. § APPENDIX §.§ Proof of Dimension Lemma regarding K-ϵ-separable r.v. 
Sufficiency: We aim to prove that based on the same support set R_X | f(X)=i, that a m-dimension r.v. Z ∈𝒵⊂𝒲⊂𝒳 is K-ϵ-separable is a sufficient condition for that a expanded n-dimension r.v. W ∈𝒲 is K-ϵ-separable. As d-dimension space 𝒳 is bounded, the n-dimension subspace 𝒲 is bounded and compact. Then for ∀ i ∈ℐ, there exists a limited number (set as c_i) of n-dimension open spares with diameter 1 can cover the support set R_W | f(X)=i. As index set ℐ = {i^u_1, …, i^u_K^u} is finite, we set c=max_i ∈ℐ c_i. Then we have ∀ X ∈𝒳, ∀ f ∈ℱ, there exist n-dimension W ∈𝒲 corresponding to X, such that max_i ∈ℐℙ_W(R_W | f(X)=i)<c^n. In m-dimension subspace 𝒵⊂ℝ^m, given a K-ϵ-separable r.v. Z ∈𝒵 with ℱ={f: 𝒳→ℐ}, then X ∈𝒳, ∀ f ∈ℱ, τ(Z, f(X))=max_i, j ∈ℐ, i ≠ jℙ_Z(R_Z | f(X)=i∩ R_Z | f(X)=j) = ϵ. Based on the m-dimension r.v. Z ∈𝒵 corresponding to X, the n-dimension expansion W ∈𝒲 must satisfy that max_i, j ∈ℐ, i ≠ jℙ_W(R_W | f(X)=i∩ R_W | f(X)=j)<ϵ c^n-m. Thus, W is K-ϵ-separable with the same ℱ={f: 𝒳→ℐ}. Not Necessity: It is easy to prove if we find a specific counterexample to show that based on a K-ϵ-separable n-dimension r.v. W ∈𝒲⊂𝒳, a m-dimension projection r.v. Z ∈𝒵⊂𝒲 is not K-ϵ-separable. Given a 3-dimension space 𝒳={(x,y,z),x∈[-1,1], y∈[-1,1], z∈[-1,1]}⊂ℝ^3 and r.v. X is randomly sampled from 𝒳 with uniform distribution. We assume that the label of X is 1 if X is located on or above the 2-dimension surface z = x^2 and if 𝒳 is below the surface the label is 0. Then we define a classification function f(X):=0 if z <x^2 and f(X):=1 if z ≥ x^2. As ℙ_X(R_X | f(X)=0∩ R_X | f(X)=1) = 0, we have r.v. X is 2-0-separable with non-empty ℱ={f: 𝒳→ℐ}, where ℐ = {0,1}. It could be easy to prove that in the 2-dimension subspace 𝒲={(x,0,z),x∈[-1,1], z∈[-1,1]}⊂𝒳, r.v. W is also 2-0-separable with the same ℱ={f: 𝒳→ℐ}, because ℙ_W(R_W | f(X)=0∩ R_W | f(X)=1) = 0 is still hold true. Then, we consider a 1-dimension subspace 𝒵={(x,0,0),x ∈ [-1,1]}⊂𝒲 and r.v. Z∈𝒵 is a projection of X on the x-axis. It is obvious that τ(Z, f(X))=ℙ_Z(R_Z | f(X)=0∩ R_Z | f(X)=1) = 2 is a consistent value, so r.v. Z is not 2-ϵ-separable in space 𝒵 with any ϵ < 2. §.§ Proof of Theorem 2 Here we will prove that the condition Π^l_c ⋂Π^u_c ≠∅ could not guarantee Π^l ⋂Π^u ≠∅. The transformations π^l_s(X) and π^u_s(X) encode exclusive style information of two domains respectively. As the distribution of datasets X^l and X^u from two domains are different, thus Π^l_s ⋂Π^u_s := {π^l_s, π^l_s ∈Π^l_s }⋂{π^u_s, π^u_s ∈Π^u_s } = ∅. Even Π^l_c ⋂Π^u_c ≠∅, there still has Π^l ⋂Π^u = [Π^l_c, Π^l_s] ⋂ [Π^u_c, Π^u_s] := {[π^l_c, π^l_s], π^l_c ∈Π^l_c and π^l_s ∈Π^l_s }⋂{[π^u_c, π^u_s], π^u_c ∈Π^u_c and π^u_s ∈Π^u_s } = ∅. §.§ NMI and ARI results on OfficeHome and DomainNet40 The results of cross domain NCD tasks evaluated by NMI and ARI on OfficeHome <cit.> and DomainNet40 <cit.> are shown in Table. <ref>, <ref>, <ref>, and <ref>, respectively. ieeetr
http://arxiv.org/abs/2406.19140v1
20240627124653
Phonon dynamics in the site-disordered Kitaev spin liquid
[ "Vitor Dantas", "Wen-Han Kao", "Natalia B. Perkins" ]
cond-mat.str-el
[ "cond-mat.str-el" ]
School of Physics and Astronomy, University of Minnesota, Minneapolis, MN 55455, USA School of Physics and Astronomy, University of Minnesota, Minneapolis, MN 55455, USA School of Physics and Astronomy, University of Minnesota, Minneapolis, MN 55455, USA § ABSTRACT The Kitaev honeycomb model provides a paradigmatic example of an exactly solvable quantum spin liquid (QSL), in which the spin degrees of freedom fractionalize into itinerant Majorana fermions coupled to a static background of ℤ_2 gauge fluxes. This model has attracted significant attention in recent years due to the possibility of its experimental realization in some spin-orbit Mott insulators such as α-RuCl_3. Among various experimental probes, ultrasound experiments measuring sound attenuation have emerged as a promising avenue to unveil the fractionalization of spins in these materials. Yet, candidate materials often deviate from the ideal Kitaev model due to the presence of disorder, leading to the emergence of localized modes governing low-energy physics. To provide further insight into the effects of these defect-induced modes on the phonon dynamics, we calculate the sound attenuation coefficient in the site-disordered Kitaev honeycomb model with an applied magnetic field, which breaks the time-reversal symmetry. In order to obtain a more accurate perspective on the temperature-dependent sound attenuation in this model, the impact of thermally excited fluxes on the disordered system is also analyzed. Phonon dynamics in the site-disordered Kitaev spin liquid Natalia B. Perkins July 1, 2024 ========================================================== § INTRODUCTION In recent decades, significant efforts have been directed toward discovering and synthesizing compounds that manifest quantum spin-liquid (QSL) phases. These enigmatic states of matter exhibit exotic phenomena such as long-range entanglement, emergent gauge theories, and fractionalized spin excitations. Central to this pursuit are Kitaev materials  <cit.>, which feature dominant Ising-like bond-dependent interactions for j_eff=1/2 effective moments on a honeycomb lattice and hold promise for realizing Kitaev QSL  <cit.>. This model admits an exact solution, revealing a quantum spin-liquid phase with fractionalized excitation represented by gapless/gapped Majorana fermions and static ℤ_2 fluxes  <cit.>. At the forefront of this rapidly growing field is the search for signatures of spin fractionalization in candidate materials, such as the honeycomb iridates A_2IrO_3 (A=Li,Na) <cit.> and  <cit.>. Various dynamical probes are employed in this search, including inelastic neutron scattering  <cit.>, Raman scattering  <cit.>, resonant inelastic X-ray scattering  <cit.>, ultrafast spectroscopy <cit.>, terahertz nonlinear coherent spectroscopy  <cit.>, and inelastic scanning tunneling microscopy (STM)  <cit.>. Among these several routes, recent studies have highlighted phonon dynamics as an insightful probe for understanding quantum spin liquids in candidate materials, given that spin-lattice coupling is inevitable and often strong in materials with large spin-orbit coupling <cit.>. For instance, sound attenuation has been shown to be a useful tool to provide information about two-dimensional frustrated magnetic systems, including quantum spin liquids <cit.>. 
In the context of Kitaev materials, previous investigations have shown that the Majorana fermion-phonon coupling gives rise to unique signatures such as a linear temperature-dependent sound attenuation and a characteristic sixfold angular dependence from the lattice symmetry <cit.>. Other works also explored the effects of magnetoelastic coupling between spins and optical phonons, which may be accessible via Raman spectroscopy <cit.>. Here we propose that the phonon dynamics is a meaningful experimental probe of the Kitaev materials, even with inevitable disorder. The role of disorder in these systems gained a lot of attention in recent years, especially after the synthesis of the iridate compound , known for its unique thermodynamic behavior <cit.>. Disorder is also crucial in other candidate materials like Ir-doped RuCl_3, where vacancies play a major role <cit.>. Extensive theoretical effort has been made to understand disorder in the Kitaev spin liquid, with minimal models incorporating bond randomness and various types of vacancies  <cit.>. It was shown that disorder dramatically changes the low-energy excitations in the Kitaev model, impacting dynamical and thermodynamic properties such as spin dynamics, specific heat, and susceptibility  <cit.>, and even topological properties in the non-Abelian regime  <cit.>. To achieve an effective and realistic description of dynamical probes in candidate materials, it is essential to account for these defects. Therefore, in this paper, we aim to extend our previous investigations of phonon dynamics in the Kitaev model by incorporating disorder in the form of quasivacancies  <cit.>. Specifically, we compute the sound attenuation for an effective spin-lattice coupled Kitaev spin liquid with quasivacancies. As sound attenuation originates from dissipative scattering processes, it can be connected to the imaginary part of the phonon polarization bubble using the perturbative methods developed in Ref. <cit.> and the real-space formalism introduced in Ref. <cit.> to incorporate site disorder. Our analysis demonstrates how the intricate low-energy physics of quasilocalized modes <cit.> in the presence of quasivacancies is reflected in the phonon self-energy. We show that the sound attenuation coefficient exhibits the expected sixfold symmetry, as in the clean system <cit.>. Crucially, the sound attenuation remains linear at low temperatures even in the presence of disorder, indicating the robustness of this observable in probing fractionalization. Moreover, we show that this behavior may persist in the presence of an external field due to the scattering of quasilocalized modes induced by quasivacancies into the bulk. This is in stark contrast to the pure-model description, where the gapped Majoranas give a negligible contribution to sound attenuation at low temperatures  <cit.>. Finally, we demonstrate that the attenuation coefficient continues to grow linearly with temperature in the random-flux regime. This behavior is fundamentally distinct from other possible physical processes, reinforcing that fractionalized degrees of freedom dominate the attenuation process even in the presence of disorder. The rest of the paper is organized as follows: In Sec.<ref>, we introduce the model of this work. The disordered Kitaev model, the phonon Hamiltonian, and the spin-phonon coupling are described in Sec. <ref>, <ref> and <ref>, respectively. Next, we show the computation of the polarization bubble in the mixed representation in Sec. <ref>. In Sec. 
<ref>, we present our results for the sound attenuation, where a simple argument for the temperature dependence is shown in Sec. <ref>. We present the numerical results for quasivacancies and flux disorder in Secs. <ref> and <ref>, respectively. We close with a summary in Sec.<ref>. Additional details and results are presented in Appendices <ref> and <ref>. § THE MODEL In this section, we introduce the model and review the site-disordered Kitaev spin liquid and the derivation of the spin-phonon interaction in the presence of disorder. The full Hamiltonian comprises the pure spin and phonon terms, along with the spin-phonon interaction: ℋ = ℋ_spin + ℋ_ph + ℋ_spin-ph The first term is the Kitaev model in the presence of a time-reversal symmetry-breaking field <cit.>, written in real space to incorporate disorder. The second term is the two-dimensional free-phonon Hamiltonian on the honeycomb lattice in the long-wavelength limit, focusing on low-energy acoustic phonon modes. We assume disorder affects only the magnetic interactions, not the lattice structure, so ℋ_ph is represented in momentum space. The last term describes the magnetoelastic coupling, written in a mixed representation. §.§ Spin Hamiltonian The spin-1/2 Kitaev Hamiltonian on a honeycomb lattice in the presence of a time-reversal symmetry-breaking field reads <cit.>: ℋ_spin = -∑_⟨ ij ⟩J_⟨ ij ⟩_ασ_i^ασ_j^α - κ∑_⟨⟨ ik ⟩⟩_α,βσ_i^ασ_j^βσ_k^γ, where σ^α_i denotes Pauli spin operators with α = x, y, z and ⟨ ij ⟩_α labels the nearest-neighbor sites i and j along an α-type bond. The second term is the three-spin interaction with strength κ∼h_xh_yh_z/J^2 that imitates an external magnetic field and breaks time-reversal symmetry while preserving the exact solvability  <cit.>. By rewriting each spin operator in terms of four Majorana fermions, σ^α_i = ib^α_ic_i, and defining the link operators η_ij^α=ib^α_ib^α_j, the Hamiltonian takes the form ℋ_spin = i∑_⟨ ij ⟩J_⟨ ij ⟩_αη_ij^αc_ic_j + iκ∑_⟨⟨ ik ⟩⟩_α,βη_ij^αη_kj^βc_ic_k. In the following, we restrict ourselves within the isotropic limit of the pristine model, J_α≡ J, and focus predominantly on a kind of disorder dubbed quasivacancy <cit.>. Contrary to a true vacancy, where a site is completely removed from the lattice, a quasivacancy is defined by a weakly interacting site with respect to its neighbors. More precisely, the Kitaev exchange for a quasivacancy is given by J^'≪ J. Physically, this can be identified as non-magnetic defects that are weakly connected to their neighbors due to extremely strong but relatively rare bond randomness. In order to introduce randomly distributed quasivacancies, we rewrite the spin Hamiltonian (<ref>) as <cit.>: ℋ_spin = iJ∑_⟨ ij⟩ _α i,j ∈𝒫η_ij^αc_ic_j + iJ^'∑_⟨ ij⟩ _α i ∈𝒬, j ∈𝒫η_ij^αc_ic_j + iκ∑_⟨⟨ ik⟩⟩_α,βη_ij^αη_kj^βc_ic_k, Where 𝒫 denotes the set of normal sites (the bulk) and 𝒬 is the set of quasivacancies. In the pristine model with κ = 0, the flux configuration that minimizes the energy is the zero-flux sector, that is, W_p=∏_⟨ ij⟩∈ pη_ij^α= +1, for all p. This results in a straightforward tight-binding model for Majorana fermions, which can be easily diagonalized in momentum space. The low-energy dispersion of the Majorana fermions exhibits a graphene-like behavior, characterized by two Dirac cones at the K and K^' points of the Brillouin zone. In the presence of a finite field (κ≠ 0), the Majorana excitations become gapped, with an energy gap given by Δ_κ/J = max6√(3)κ, 2, as shown in Fig. <ref>(b). 
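To illustrate these spectral features, the clean-limit dispersion can be evaluated directly in momentum space. The short sketch below uses one common convention for the Bloch matrix elements, so prefactors and signs may differ from the normalization used here; its only purpose is to display the gapless Dirac points at κ = 0 and the opening of a gap proportional to κ (≈ 6√3 κ for small κ in this convention).

```python
# Sketch: Majorana dispersion of the clean, zero-flux Kitaev model with the
# three-spin (kappa) term, in an assumed convention for prefactors and signs.
import numpy as np

J, kappa = 1.0, 0.05
n1 = np.array([0.5,  np.sqrt(3) / 2])    # honeycomb lattice vectors
n2 = np.array([-0.5, np.sqrt(3) / 2])


def dispersion(kx, ky):
    k = np.stack([kx, ky], axis=-1)
    k1, k2 = k @ n1, k @ n2
    f = 2 * J * (1.0 + np.exp(1j * k1) + np.exp(1j * k2))            # hopping part
    delta = 4 * kappa * (np.sin(k1) - np.sin(k2) + np.sin(k2 - k1))   # three-spin term
    return np.sqrt(np.abs(f) ** 2 + delta ** 2)


# Scan a fine grid covering the Brillouin zone and locate the band minimum.
kx, ky = np.meshgrid(np.linspace(-2 * np.pi, 2 * np.pi, 801),
                     np.linspace(-2 * np.pi, 2 * np.pi, 801))
gap = dispersion(kx, ky).min()
print(f"numerical gap = {gap:.3f},  6*sqrt(3)*kappa = {6 * np.sqrt(3) * kappa:.3f}")
```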
Introducing quasivacancies disrupts the translation symmetry, preventing diagonalization in reciprocal space and necessitating real-space diagonalization. Despite this, the model remains exactly solvable. Moreover, unlike the clean limit, the ground state of the Kitaev model with quasivacancies does not necessarily correspond to the zero-flux sector. It was shown that the energy is lowered by binding a flux to the extended plaquette around the vacancy <cit.>. The flux-binding effect persists even with a finite concentration of vacancies, including quasivacancies weakly coupled to the rest of the system (J'≪ J) <cit.>. However, as our numerical results have shown that the flux-binding effect does not affect the sound attenuation response (see Appendix <ref>), we will focus on the zero-flux and random-flux sectors only. In a given flux sector, the gauge can be fixed by specifying all link variables {η_ij}. The resulting Hamiltonian (<ref>) of non-interacting Majorana fermions can be rewritten in the following matrix form: ℋ_spin = i/2[ c_A c_B ][ F M; -M^T -D ][ c_A; c_B ], where c_A(B)^ denote the N-component vectors with components c_i,A(B) for a lattice with N = L^2 unit cells. The matrix M_ij = J_ijη_ij defines the hopping between different sublattices, with J_ij=J if the bond is in the bulk and J_ij=J' if the bond involves a quasivacancy  <cit.>. The hopping within the same sublattice is represented by the matrices F_ik = κη_ij^αη_kj^β (on sublattice A) and D_ik = κη_ij^αη_kj ^β (on sublattice B), which have non-zero elements only when κ≠ 0. Note that, generally, F and D do not necessarily have to be identical due to the sublattice symmetry breaking induced by a generic flux configuration. In our algorithm, we select a randomly distributed percentage x of spins from the system to define the quasivacancy subspace 𝒬, ensuring a balanced distribution of quasivacancies. This means the number of quasivacancies on each sublattice is N_A = N_B = xN/2. To bring the Hamiltonian (<ref>) into its canonical form, we introduce the complex fermion operators f^†, which are related to the Majorana fermions by the transformation U, defined as: [ f; f^† ] = 1/2[ 1 i; 1 -i ][ c_A; c_B ] =U^-1[ c_A; c_B ]. In this complex fermion basis, the Hamiltonian assumes the Bogoliubov de-Gennes form <cit.>, ℋ_spin = 1/2 [ f^† f ][ h Δ; Δ^† -h^T ][ f; f^† ], with the N× N matrices Δ and h defined in terms of M, D, and F as Δ = (M^T - M) + i(F+D), h = (M + M^T) + i(F - D). To bring the matrix in Eq. (<ref>) into its diagonal form, we define the unitary transformation W, which is the matrix representation of the Bogoliubov transformation <cit.>: W [ h Δ; Δ^† -h^T ] W^† = [ E 0; 0 -E ]. Here, E is the N× N matrix containing the positive eigenvalues ε_i stored in descending order. With this convention, it is easy to see that W^† is the matrix with the eigenvectors stored columnwise as ^† = V^+_1 …V^+_N V^-_1 …V^-_N, where V^+(-)_n is the eigenvector corresponding to the n-th positive (negative) eigenvalue ±ε_n. The list of positive eigenvalues (ε_N,… ,ε_1 ) and the matrix ^† constitute our numerical output from the exact diagonalization. Following the notation used in Ref. <cit.>, we introduce N× N Bogoliubov matrices X and Y. Thus, the operator W can be written as: = [ ^* ^*; ] ^† = [ ^T ^†; ^T ^† ] Now, we can define the Bogoliubov quasiparticles β and β^†, which are related to f and f^† via β_i = ∑_j (X^T_ij f_j^ + Y_ij^† f_j^† ) β_i^† = ∑_j (Y^T_ij f_j^ + X_ij^† f_j^† ), where X_ij and Y_ij are defined in (<ref>). 
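A minimal numerical sketch of this real-space pipeline (building M for a given quasivacancy realization, forming h and Δ, and diagonalizing the Bogoliubov-de Gennes matrix) is given below for the simplest case κ = 0 in the zero-flux sector, where F = D = 0. The lattice bookkeeping, the parameter values, and the random quasivacancy placement are illustrative choices, and prefactors are fixed only up to an overall normalization.

```python
# Sketch of the real-space diagonalization for kappa = 0, zero-flux sector (all eta = +1).
import numpy as np

rng = np.random.default_rng(0)
L, J, Jp, x = 24, 1.0, 0.01, 0.04        # L*L unit cells, bulk J, weak J', concentration x
N = L * L

def cell(m, n):                           # unit-cell index with periodic boundaries
    return (m % L) * L + (n % L)

# Nearest-neighbour hopping matrix M (rows: A sites, columns: B sites), all eta_ij = +1.
M = np.zeros((N, N))
for m in range(L):
    for n in range(L):
        a = cell(m, n)
        for b in (cell(m, n), cell(m - 1, n), cell(m, n - 1)):   # z-, x-, y-bonds
            M[a, b] = J

# Quasivacancies: N_A = N_B = x*N/2 weakly coupled sites, as in the convention above.
vac_A = rng.choice(N, size=int(x * N / 2), replace=False)
vac_B = rng.choice(N, size=int(x * N / 2), replace=False)
M[vac_A, :] *= Jp / J     # every bond touching a quasivacancy is reduced to J'
M[:, vac_B] *= Jp / J     # (a bond joining two quasivacancies is suppressed further; immaterial here)

# Bogoliubov-de Gennes blocks for kappa = 0:  Delta = M^T - M,  h = M + M^T.
h, Delta = M + M.T, M.T - M
H_bdg = np.block([[h, Delta], [Delta.conj().T, -h.T]])

# The spectrum comes in +/- pairs; the positive half gives the epsilon_i
# (the corresponding eigenvectors supply the Bogoliubov matrices X and Y).
eps = np.linalg.eigvalsh(H_bdg)[N:]
print("fraction of states below 0.05*J :", (eps < 0.05 * J).mean())
```

Even this crude setup reproduces the pileup of low-energy states discussed above: rerunning with x = 0 removes the weight below the cutoff, while increasing x grows it.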
In terms of the Bogoliubov quasiparticles, ℋ_spin becomes ℋ_spin = ∑_iε_i β_i^†β_i - 1/2 . Finally, the ground state is defined as the state with no quasiparticle excitations, i.e. β_i|0⟩ = 0. From this definition, along with (<ref>), we define the ground-state energy as E_0 = -1/2∑_iε_i and the density of states (DOS) as ρ(ε) = ∑_iδ(ε - ε_i). As shown in recent works <cit.> the effects of quasivacancies are well pronounced even at very low concentrations. More specifically, when the field is absent (κ = 0), there is a pileup of low-energy states, as shown in the inset of Fig. <ref>(d). The pileup of low-energy states emerges for any finite x, and its appearance is independent of the flux-sector <cit.>. We also consider the effect of flux disorder, originating from the thermal proliferation of fluxes at finite temperatures. This intrinsic property of the Kitaev model is reflected in a specific heat crossover in the 2D model at temperatures near the flux-excitation energy scale <cit.>. In the high-temperature limit, it has an equal probability to have zero- or π-flux on each plaquette. A realization of the random-flux sector is shown in Fig. <ref>(e). This introduces additional disorder to the Majorana fermions as a random-sign hopping problem. It significantly affects the density of states, smearing out both the Van Hove singularity and the Dirac cone (see Fig. <ref> (f)), and has substantial impacts on both thermodynamic and dynamical responses <cit.>. Previous studies have considered this type of disorder in phonon dynamics within the Kitaev model <cit.>. Here, we extend the analysis by extracting the temperature dependence of the sound attenuation coefficient in the presence of both flux and quasivacancy disorders, a topic not previously explored. §.§ Phonon Hamiltonian Since the type of disorder studied in this work does not affect the lattice symmetry – quasivacancies only alter the magnetic exchange between an ion and its neighboring sites – the description of acoustic phonons remains the same for both the clean model and the model with quasivacancies. Then, following Refs. <cit.>, we assume a homogeneous elastic medium in the long wavelength limit to write an effective action in terms of the strain tensor ϵ_ij = 1/2(∂_i u_j +∂_j u_i), where u_i are the displacement field components, and the independent components of the elastic modulus tensor 𝒞_ijkl. We then impose the C_6v point group symmetry of the honeycomb lattice to write the elastic energy in the basis of A_1^ph and E_2^ph irreducible representations (IRRs) of the point group. Decomposing the displacement fields into two independent polarizations, u_q = ∑_μe^μ_qũ^μ_q, where e^μ_q are the polarization vectors for μ = ∥,⊥ (longitudinal and transverse polarizations), and diagonalizing ℋ_ph, we obtain the phonon dispersion and the corresponding polarization vectors: Ω_𝐪^=v_s^ q=√(C_1+C_2/ρ) q, ê_𝐪^={cosθ_𝐪, sinθ_𝐪}, Ω_𝐪^⊥=v_s^⊥ q=√(C_2/ρ) q, ê_𝐪^⊥={-sinθ_𝐪, cosθ_𝐪}, where θ_𝐪 is the angle between 𝐪 and x̂ axis, ρ is the mass density of the ion, and the sound velocity v_s^μ is defined in terms of the only two independent elastic tensor coefficients C_1 and C_2. The quantized displacement field can then be expressed in terms of bosonic operators as ũ_q^μ(t) = i√(ħ/2ρδ_VΩ_q)a_qe^-iΩ_qt + a^†_-qe^iΩ_qt, where δ_V is the unit cell area. §.§ Spin-phonon interaction The magnetoelastic coupling arises from the variation in the Kitaev exchange strength J due to lattice deformations. 
Assuming J depends only on the distance r between neighboring ions and u(r) is a small displacement, we can expand the spin interaction around the equilibrium value: J_α→ J_α + ∇ J_α·u(r) - u(r + M_α), where M_α are the nearest neighbor vectors (see Fig. <ref>(a))). From this approximation, the spin-phonon coupling Hamiltonian is given by <cit.>: ℋ_spin-ph = λ∑_r,αM_α·(M_α·∇) u(r)σ^α_rσ^α_r+M_α, where λ∼ (∇ J_α)_eqℓ_a is the spin-phonon coupling strength and ℓ_a is the lattice constant. The spin-phonon coupling Hamiltonian (Hspin-phonon) can be rewritten in terms of symmetry channels. The two contributions are the A_1 and E_2 channels. Since the E_2 channel is dominant <cit.>, we will disregard the A_1 contributions here <cit.>. In the clean model, the contribution to ℋ_spin-ph from the E_2 channel can be explicitly written in terms of Majorana fermions as ℋ_spin-ph^E_2 = -iλ∑_r ϵ_xx - ϵ_yyη_r,r+M_xc_,Ac_+M_x,B + η_r,r+M_xc_,Ac_+M_y,B - 2η_r,r+M_xc_,Ac_+M_z,B + + 2√(3)ϵ_xyη_r,r+M_xc_,Ac_+M_x,B - η_r,r+M_xc_,Ac_+M_y,B. This description extends easily to the case of quasivacancies. For any site in the quasivacancy space 𝒬, the spin-phonon coupling changes from λ to λ' ∼ (∇ J'_α)_eqℓ_a, where J'_α is the weak coupling between a quasivacancy and a normal spin. Thus, the magnetoelastic coupling Hamiltonian involving a magnetic defect retains the same functional form as Eq. (<ref>). The next step is to derive interaction between phonons and Majorana fermions from the previously obtained magnetoelastic coupling. We use a mixed representation <cit.>, where the displacement field is in the reciprocal space of the phonons, and the Majorana operators are in real space. By performing a Fourier transform on the strain tensor ϵ_ij(r) = ∑_qi/2q_iu_q,i + q_ju_q,j and changing the basis, we express the coupling in terms of the displacement vectors ũ_q^μ in the longitudinal and transversal polarizations. The spin-phonon coupling Hamiltonian in this mixed representation can be written as ℋ_spin-ph = 1/√(N)∑_qV_q, where the interaction V_q is explicitly given by V_ = -i/2∑_i∈ A,j∈ B,μc_ic_j iju_^μ e^i·r_i, where ij define the matrix that couples the Majoranas at sites r_i,A and r_j,B with a phonon of momentum q. In the model with quasivacancies, the elements of the coupling vertex matrix are given by ij = 2iλη^α_ij f^μ_α(q) if (i,j) ∈𝒫 2iλ^'η^α_ij f^μ_α(q) if either i or j ∈𝒬, where 𝒫 and 𝒬 are the normal site and quasivacancy subspaces, respectively. The explicit form of the functions f^μ_α() is presented in Appendix <ref>. It is important to note that the matrix is defined with entries from sublattice A to sublattice B (i ∈ A and j ∈ B). Therefore, the matrix must have the same structure as the adjacency matrix, where the entries are defined by the polarization, flux sector, phonon momentum q, and quasivacancy distribution. Similar to the Hamiltonian matrix, for a given quasivacancy, the procedure of finding the coupling vertex involves substituting λ→λ' for the corresponding row and column. Since the calculation is performed on the whole lattice, it is useful to write Eq. (<ref>) in matrix form in the Majorana basis c = (c_A c_B), i.e., in the sublattice representation. However, we cannot simply absorb the phase factor into the definition of AB because BA = AB^†, which would change the phase factor from e^iq·r_j to e^-iq·r_j. 
To incorporate all spatial dependence into a single compact notation, we introduce the matrix , defined by absorbing the exponential factor in (<ref>) as the Hadamard product of ij with the matrix 𝔼_q, which is the symmetrized matrix with all the phase factors stored in rows <cit.> (see Appendix <ref> for details): Λ_^μ = [ 0 λ_^μ; -λ_^μ^T 0 ]⊙[ 0 𝔼_; 𝔼_^T 0 ]. This allows us to obtain the coupling V_ in the sublattice representation: V_ = -i/2[ c_A c_B ][ 0 AB; BA 0 ][ c_A; c_B ]u_^μ, with Λ_,AB^μ_ij = ij 1/2e^i·r_i + e^i·r_j, Λ_,BA^μ_ij = -ji 1/2e^i·r_j + e^i·r_i. An immediate advantage of this expression is that Λ_,BA^μ,s can be written in terms of Λ_,AB^μ,s as Λ_,BA^μ,s = -λ_^μ⊙𝔼_^T = -Λ_,AB^μ,s^T. This form of the vertex matrix greatly simplifies the numerical computation of the polarization bubble, as the vertex from B to A is determined once we know AB. § PHONON DYNAMICS IN THE DISORDERED SYSTEM In this section, we present the computation of the phonon self-energy in the spin-phonon coupled Kitaev model with quasivacancies. In order to study the leading order corrections to phonon dynamics due to the spin-lattice coupling, we compute the one-loop phonon self-energy Π^μν(,Ω). As we will see in the next section, the sound wave attenuation coefficient α_s(q) can be related to the imaginary part of the phonon polarization bubble. Since we are interested in the phonon dynamics at finite temperatures, we compute the polarization bubble in real space using the Matsubara formalism. After deriving an expression for Π^μν(q,Ω) in terms of the Majorana fermion dispersion and the vertex matrix, we show the numerical results for the imaginary part of Π^μν(q,Ω). Our analysis reveals how the localized modes induced by quasivacancies contribute to the self-energy in the low-energy regime. §.§ Phonon polarization bubble To calculate the phonon polarization bubble, we need to rewrite the coupling vertex (eq:Vq_matrix) in the basis of phonons and Bogoliubov quasiparticles. This involves relating the Bogoliubov quasiparticle operators (bogtrans) to the Majorana fermions through the following transformations: c = U W^† β, c^† = β^† W U^†, where β^† = (β^† β) and c = (_A _B)^T. In this basis, the coupling V_ is written as: V_ = -i/2[ β^† β ][ Λ^μ_,11 Λ^μ_,12; Λ^μ_, 21 Λ^μ_,22; ][ β; β^† ]u_^μ, where the vertex Λ_^μ = WU^†Λ_^μ U W^† is written in a block form in (Vqbogo), corresponding to the creation and annihilation subspaces. In this formulation, Λ^μ_,11 and Λ^μ_,22 contribute to the particle-hole (ph-) channel, while the off-diagonal components contribute to the particle-particle (pp-) channel. In the Matsubara formalism, the phonon polarization bubble can be written as the following expectation value: Π^μν(,τ) = T_τ{^†μ(τ) ^†ν-(0)}, where T_τ is the imaginary time-ordering operator with τ = it. As shown in Appendix <ref>, the Fourier transform of Π^μν(,τ) is given by Π^μν(,Ω) = 1/N∑_ij P^g̅g_ijμ11⊙ν,T11 - ν22_ij +P^gg̅_ijμ22⊙ν,T22 - ν11_ij +P^gg_ijμ21⊙ν,T12 - ν12_ij +P^g̅g̅_ijμ12⊙[ν,T21 -ν21_ij, where the sum is over all the eigenstates ε_i and ε_j and P^g̅g_ij, P^gg̅_ij, P^gg_ij and P^g̅g̅_ij are the convolutions of the Matsubara Green's functions, which are explicitly written as: P_ij^gg̅ = n_F(ε_i) - n_F(ε_j)/Ω - ε_i + ε_j +iδ P_ij^g̅g = n_F(-ε_i) - n_F(-ε_j)/Ω + ε_i - ε_j+iδ P_ij^gg = n_F(ε_i) - n_F(-ε_j)/Ω - ε_i - ε_j+iδ P_ij^g̅g̅ = n_F(-ε_i) - n_F(ε_j)/Ω + ε_i + ε_j+iδ Here, P^gg_ij contributes to the pp-channel, while P^g̅g_ij and P^gg̅_ij contribute to the ph-channel. 
The terms with P^g̅g̅_ij are in the hole-hole (hh-) channel, which will not be considered in our calculations. Also, we point out that the temperature dependence of the polarization bubble is encoded in the expressions for each P_ij through the Fermi-Dirac distribution. On the other hand, the vertex matrices ij determine the angular dependence of the Majorana fermion-phonon scattering. §.§ Numerical results for the phonon self-energy In this section, we present our results for the phonon self-energy as a function of the input phonon frequency Ω. We restrict our analysis to the long-wavelength approximation, calculating Π^μμ_ph(q,Ω) for small values of q and, correspondingly, Ω. The imaginary part of the self-energy [Π^∥∥_ph(q,Ω)], shown in Fig. <ref> as a function of phonon frequency Ω at different temperatures T and concentration of quasivacancies x, describes the spectral width of the phonon, providing insights into its lifetime and sound attenuation. In the first row of Fig. <ref>, we plot [Π^∥∥_ph(q,Ω)] for the time-reversal invariant case, κ=0, and in the second row for κ=0.2. In the clean limit, due to strong kinematic constraints, the pp-channel is restricted to v_s>v_F, while the ph-channel is only allowed when v_s<v_F. Since we fix the value of | q|=0.5, the vertical line in the first row of Fig. <ref> separates these regions. There, we observe that the ph-channel dominates at small Ω, with its contribution increasing with temperature due to higher occupation of low-energy states and thus increasing phase space. This also leads to a distinctive linear T dependence of the sound attenuation for v_s<v_F. Conversely, the pp-channel becomes more significant at higher Ω. In the presence of defects, the translation symmetry is lost so the Majorana fermion momentum k is not a good quantum number anymore. Therefore, at a finite concentration of quasivacancies, the momentum constraint is lifted and the decay of a phonon involves both the ph- and pp-processes. Additionally, the Fermi velocity v_F is not well defined anymore, due to the disappearance of the Dirac cones. However, we can still take v_F as a parameter to define an energy scale vestigial to the clean limit. In the next section, we will see how the attenuation changes for v_s/v_F larger or smaller than 1. In the finite-field case (second row of Fig. <ref>), we observe the formation of a peak structure at low-Ω in both channels. This is a direct consequence of the low-energy modes induced by vacancies, as discussed in Sec. <ref>. At very low temperatures, there is no contribution from the ph-channel because the low-energy states are gapped out. However, as the temperature increases, the in-gap Majorana modes start to become occupied. This occupation modifies the manifold of allowed scattering processes, resulting in the previously mentioned peak structure. In the following section, we will explore how this influences the evolution of sound attenuation as a function of temperature. § SOUND ATTENUATION The attenuation process of an acoustic sound wave within the material can be described quantitatively by considering a lossy acoustic wave function that decays with distance from the driving source u(r,t) = u_0 e^-α_s(q)re^i(Ω t - q·r), where u is the displacement field with u_0 = u(t=0), Ω is the acoustic wave dispersion and q is the wave-vector. The sound attenuation coefficient, α_s(q), is related to the diagonal component of the imaginary part of the phonon self-energy <cit.>: α^μ_s(q) ∝ -1/(v^μ_s)^2 q[Π^μμ(q,Ω)]_Ω = v_sq. 
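In practice, the attenuation coefficient defined above is assembled numerically from the eigenvalues ε_i, the Fermi factors, and the vertex matrices rotated to the Bogoliubov basis. The sketch below evaluates the ph-channel contribution with a Lorentzian-broadened energy-conserving delta function; the rotated vertex product ξ_ij is replaced by a positive random placeholder (in the full calculation it is built from the Λ̃_q matrices and carries its own sign structure), so only the magnitude of the result is meaningful and all parameter values are illustrative.

```python
# Schematic evaluation of the ph-channel contribution to Im Pi and of alpha_s ~ |Im Pi|/(v_s^2 q).
import numpy as np

def n_F(e, T):
    # Fermi-Dirac function written with tanh to avoid overflow at low T.
    return 0.5 * (1.0 - np.tanh(e / (2.0 * T)))

def im_pi_ph(eps, xi, Omega, T, eta=0.02):
    """eps: positive eigenvalues; xi: (N, N) vertex weights; Lorentzian-broadened delta of width eta."""
    Ei, Ej = np.meshgrid(eps, eps, indexing="ij")
    occ = n_F(Ei, T) - n_F(Ej, T)
    delta = (eta / np.pi) / ((Omega - Ei + Ej) ** 2 + eta ** 2)
    return -np.pi * np.sum(occ * delta * xi) / eps.size

rng = np.random.default_rng(1)
eps = np.sort(6.0 * np.sqrt(rng.uniform(0.0, 1.0, 800)))   # stand-in spectrum, Dirac-like DOS, bandwidth ~ 6J
xi = rng.uniform(0.0, 1.0, (800, 800))                      # placeholder vertex product
v_s, q = 0.1, 0.5
Omega = v_s * q                                             # attenuation is evaluated on-shell
for T in (0.05, 0.1, 0.2, 0.4):
    alpha = abs(im_pi_ph(eps, xi, Omega, T)) / (v_s ** 2 * q)   # magnitude, up to the proportionality constant
    print(f"T = {T:.2f}   alpha_s ~ {alpha:.4f}")
```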
As we already discussed above, the phonon dynamics in the pure model is strongly dependent on the relative values of the sound velocity v_s and the Fermi velocity v_F. In the case when v_s<v_F, earlier studies have demonstrated that α_s(q) exhibits a particular sixfold angular dependence on q and shows a linear dependence on temperature at low-T <cit.>: α^∥_s(q) ∼λ^2 v_sq/v_F^3 T (1-cos 6θ_q). The behavior of the transversal polarization can be obtained from a simple π/2 rotation from the expression above. Therefore, for simplicity, we only refer to the longitudinal component in the remainder. The most crucial feature in this expression of α_s, however, is the linear scaling with temperature. As argued in <cit.>, this behavior is an indirect signature of the fractionalized nature of the spin excitations in this model, as it comes from to the phase space of the Majorana fermion-phonon scattering. Clearly, the anharmonic effects due to the phonon-phonon scattering processes always contribute in a realistic material but their contribution is ∼ T^3. Consequently, at sufficiently low temperatures, the sound attenuation in the quantum spin liquid predominantly arises from the decay into fractionalized excitations. In α-RuCl_3, there is also a contribution to the sound attenuation from scattering of the phonons from magnons. However, given that magnons are gapped at low temperatures, when α-RuCl_3 is magnetically ordered, and short-ranged above the Neel ordering temperature, their contribution on sound attenuation is negligible <cit.>. §.§ Temperature dependence of α_s(q): Qualitative analysis Here we explore how the linear scaling with temperature can be derived using a straightforward power counting argument directly from the phonon polarization bubble. While this derivation is more easily performed in a translationally invariant system within k-space, It can also be done within a real-space formulation. By performing this analysis, we can more transparently connect the clean limit to our numerical calculations of sound attenuation in the disordered system, since whether in k-space or real-space, it highlights the role of the density of states ρ(ε). For illustration, we first consider the clean system in the zero-flux sector. Starting with the ph-channel, symmetry allows us to consider only the gg̅ process, which contributes to sound attenuation as follows: Im Π^μμ_gg̅(q,Ω) = 1/NIm∑_ijn_F(ε_i) - n_F(ε_j)/(Ω+iδ) - ε_i + ε_jμ11⊙μ,T11 - μ22_ij ≡ -π/N∑_ijn_F(ε_i) - n_F(ε_j) δΩ - ε_i + ε_j [ξ_gg̅^μ(q)]_ij. Here, we used the explicit form of P^g̅g_ij from Eq. (<ref>), with n_F(ε) being the Fermi-Dirac distribution, and we assume that ν=μ from now on. Using the relation 1/Ω + iδ = -πδ(Ω), we obtain the second line . We also define the product of vertex functions Λ_q^μ in the gg̅ channel as ξ_gg̅^μ(q), for simplicity. It is crucial to note that the matrix elements of ξ^μ_gg̅() weight the sum of Fermi distributions for different energy pairs ε_i, ε_j, which are related to Ω via the kinematic constraints. However, the idea of the concept of power counting is that the contribution from ξ^μ_gg̅() does not significantly affect the temperature dependence of sound attenuation. Therefore, we can extract the scaling of α_s in T solely by examining the remaining sum of Fermi distributions. To make this approximation, we assume [ξ_gg̅^μ(q)]_ij = ξ^μ_gg̅(q), implying that the vertex function is constant with respect to the eigenvalue indices. 
This approach is equivalent to neglecting the k dependence of k in the clean case <cit.>. As a result, we write the imaginary part of the self-energy as Im Π^μμ_gg̅(q,Ω) = ξ^μ_gg̅()Π_gg̅(Ω,T), where the function Π(Ω,T) encodes all the temperature dependence and is expressed in terms of the density of states ρ(ε) as Π_gg̅(Ω,T) =1/N∑_ijn_F(ε_i) - n_F(ε_j)δΩ - ε_i + ε_j = -π∫_0^ε_max dερ(ε)ρ(ε + Ω) [n_F(ε + Ω) - n_F(ε)], where the upper bound ε_max is taken as the bandwidth 6J. This estimation is justified in the clean limit when the temperature is low enough that only states around the Dirac points contribute, T≪ J, and the vertex function depends only on |q|, i.e., μq∼ |q|. Therefore, we can approximate the dependence in T by expanding the Fermi-Dirac functions in the limit where ε,ε + Ω≪ T. This, together with the linear density of states of the Dirac cone yields Π_gg̅(Ω,T) = -π∫_0^T dε (ε^2 + Ωε)1/2-ε+Ω/4T - 1/2 + ε/Ω = π/8Ω^2 T + 𝒪(T^2). It can be easily shown that the g̅g process contributes in the exact same way. Therefore, our real-space calculation reproduced the result reported in <cit.>. The power-counting estimation in pp-channel can be performed by following the same procedure for the gg-process. Then, Π_gg(Ω,T) = π∫_0^Ω dε ρ(ε) ρ(Ω - ε)n_F(ε) - n_F(ε-Ω), where the upper bound in this case is Ω due to the restriction set by ρ(Ω-ε). Because the pp-processes can happen at any temperature, by just taking T→ 0 limit of the Fermi distributions, we estimate Π_gg(Ω,T) as Π_gg(Ω,T→ 0) ≈π∫_0^Ω dε ρ(ε) ρ(Ω - ε) Therefore, the sound attenuation at very low temperatures is strongly dependent on the form of the density of states. On the other hand, we can expand the integrand at large temperatures to obtain the characteristic 1/T dependence of the pp-channel when T≫Ω: Π_gg(Ω,T) ≈Ω/4T Π_gg(Ω,0). Thus, the pp-channel decays as 1/T even in the disordered systems where the low-energy density of states deviates from the linear form. To summarize, our power counting argument suggests that sound attenuation scales linearly with temperature at low T if the ph-channel dominates, and as ∼ 1/T at high T if the pp-channel is prevalent. In the site-disordered system, the low-energy density of states is more intricate so the qualitative argument can hardly be applied. Therefore, in the next section, we will use numerical results to accurately scale α_s(T) in the Kitaev spin liquid with quasivacancies. We will show that even in this case, α_s(T) only moderately deviates from the estimates in the clean case due to the distinct fermionic nature of the scattering processes in the quantum spin liquid. §.§ Quasivacancies: numerical results In this section, we present our results for the sound attenuation coefficient in the presence of quasivacancies. We relax the kinematic constraints due to the lack of translation symmetry, imposing only energy conservation in each scattering channel. Consequently, we include both pp- and ph-contributions regardless of the relative sound velocity. We start by evaluating sound attenuation coefficient α^∥_s(q) in the zero-flux sector. The left panel in Fig. <ref> shows the angular dependence of α^∥_s(q) for κ = 0.0 computed at T/J = 0.2, 0.4, 0.8 for both v_s>v_F and v_s<v_F. Notably, as temperature increases, a distinction between v_s>v_F and v_s<v_F emerge, even when summing over both scattering channels. This is further illustrated in the right panel of Fig. 
<ref>, which shows the difference between pp- and ph-contributions for a region of 𝐪 close to the center of the Brillouin zone. For v_s/v_F=0.8, due to dominant ph-processes, the magnitude of α_s^∥(q) increases as the fermionic population grows with temperature. For v_s/v_F=1.2, both channels contribute, with pp-scattering dominating, resulting in sound attenuation that is almost temperature-independent for small momentum q. We also observe that the sixfold symmetry remains intact, as expected due to the C_6 symmetry in the phonon sector and the averaged C_3 symmetry in the spin sector. This averaging is a result of preserving an equal number of quasivacancies on the A and B sublattices, despite their random positions. Now we turn our attention to the temperature evolution of α_s^∥(q). Again, we focus on the zero-flux sector, as the results for the bound-flux sector are very similar (see Appendix <ref>). Fig. <ref> shows the sound attenuation coefficient for κ = 0.0, where the characteristic scaling α_s^∥(q) ∼ T is evident even in the presence of quasivacancies. This linear scaling with temperature occurs only at low T, as shown in the inset of Fig. <ref>, and this temperature range aligns with the linear in T regime observed for some acoustic modes in ultrasound experiments in  <cit.>. We also point out that the sharp difference between the clean limit and all x ≠ 0 curves at very low T is caused by an almost zero-energy peak induced by quasivacancies, which allows more states to participate in the attenuation process. This is evident from the form of the pp-contribution in Eq. (<ref>), which is strongly dependent on the density of states. At higher temperatures, we observe that α_s^∥(q) decays, a trivial consequence of the Pauli exclusion principle, as most of the fermionic states become occupied. Finally, we present our results for the case κ≠ 0 in Fig. <ref>. The low-energy modes induced by quasivacancies inside the gap caused by κ≠ 0 significantly affect the sound attenuation as a function of |q|, as shown in Fig. <ref>(a)-(c) for different values of temperature and a fixed concentration x=4%. Here we plot only the contribution from the ph-channel scattering, as it dominates at low Ω (see the second row of Fig. <ref>). At the center of the Brillouin zone, there is a region where α_s ≪ 1 due to the gap induced by κ. However, for large enough |q|, the ph-processes become allowed, indicating the scattering from in-gap states to the bulk. As temperature increases, the attenuation from these processes becomes more pronounced. This is evident in Fig. <ref>(d)-(e), where we compare the temperature scaling for two distinct values of q, keeping the sound velocity v_s and angle θ_𝐪 fixed. Therefore, the phonon energy Ω_𝐪 differs only by the choice of |q|. These points are depicted as the black and magenta stars in Fig. <ref>(a)-(c). For small |q| (Fig. <ref>(d)), we observe an x-independent upturn at moderate temperatures, indicating that only states with energies larger than the gap participate in the ph-processes. In contrast, for a larger value of |q| (Fig. <ref>(e)), we see an x-dependent upturn at much smaller temperatures, highlighting the contribution from localized states induced by the quasivacancies to the sound attenuation. Interestingly, the linear temperature scaling persists even as the manifold of low-energy states changes dramatically, from a Dirac cone in the clean model to an in-gap peak at finite κ with quasivacancies. 
These results are consistent with our power-counting argument. §.§ Flux disorder: Numerical results So far, we have focused on the scenario where the phase space for the scattering of acoustic phonons is entirely determined by the thermally excited Majorana fermions. This is justified by the form of the magneto-elastic coupling vertex (<ref>), which in the Kitaev model is diagonal in the flux sectors. Nevertheless, thermally excited fluxes modify the spectrum of Majorana fermions and, therefore, indirectly affect sound attenuation. We will study this effect in this section. As fluxes proliferate at finite temperatures, the bond variables η_ij exhibit a temperature-dependent flipping probability distribution <cit.>. This distribution can be accounted for either by numerically costly Monte Carlo sampling of flux excitations <cit.>, or alternatively, by a quantitative approximation of the finite-temperature behavior obtained by taking a random average over “typical” flux sectors <cit.>. In the random-flux regime, however, each plaquette has an equal probability of hosting either a zero-flux or a π-flux. As shown in Fig. <ref>(e,f), this type of disorder flattens the Majorana fermion density of states across the entire energy range, allowing us to employ the power-counting argument in a straightforward manner. By assuming a constant density of states, ρ(ε) ∼ρ, we can easily perform the integration in Eq. (<ref>) to obtain the ph-channel contribution to the temperature dependence. Taking ε_max∼ J, we can write: Π_gg̅(Ω,T)=πρ^2Tlog1+tanhJ/2TtanhΩ/2T, At high temperatures (T ≫ J, Ω), we can recover the expected 1/T behavior from the Pauli exclusion: Π_gg̅∼πρ^2 JΩ/4T. For small T (T ≪Ω), we can expand the term inside the logarithm, allowing us to recover the linear in T behavior: Π_gg̅∼πρ^2Tlog 2. By adding this to the low-temperature constant contribution from the pp-channel, we get the general behavior: Π(Ω,T) ∼α_0 + α_1T, where the constants α_0 and α_1 depend on the density of states and the frequency Ω. This simple argument is corroborated by our numerical calculations, as shown in Fig. <ref>. Here we show the contributions of pp- and ph-channels, as well as their combined effect, to the sound attenuation for both the clean case and the case with 2% of quasivacancies, computed for κ = 0. To this end, we plot the sound attenuation as a function of temperature in the random-flux sector. We clearly see that the overall behavior of α_s(q) is almost independent of the presence of quasivacancies, as flux disorder overrides the contribution from the vacancy-induced low-energy localized modes. § CONCLUSIONS In this work, we have investigated the effects of site disorder on the phonon dynamics of the Kitaev spin liquid on a honeycomb lattice. Our primary objective was to determine the sound attenuation coefficient as a function of temperature in the presence of site disorder. We demonstrated that a linear temperature scaling may persist even with disorder. To achieve this, we extended the methods developed in previous studies on the acoustic phonon dynamics in the Kitaev model <cit.> by incorporating quasivacancies into the model. Because these types of defects affect only the magnetic interactions, the lattice symmetries are preserved, allowing us to employ the mixed representation method <cit.> to couple a phonon with momentum q to the Majorana fermions labeled by real-space indices i and j, with energies obtained via exact diagonalization. 
By constructing the coupling vertex within this framework, we were able to compute the phonon self-energy in the disordered system. Our numerics showed how the low-energy modes induced by quasivacancies change the imaginary part of the phonon polarization bubble. The effects of such states are particularly evident in the time-reversal symmetry-breaking case, κ≠ 0, where the scattering from localized modes to the bulk led to a low-Ω peak inside the gap for the ph-channel. The flux dependence of these localized modes <cit.> resulted in distinct peak structures for the bound-flux and zero-flux scenarios. The sound attenuation coefficient α^μ_s(q) was numerically obtained from the imaginary part of the diagonal components of the self-energy. In the pristine model, this quantity exhibits a characteristic sixfold angular symmetry in q and a linear scaling with temperature <cit.>. In the disordered system, we found that a sixfold pattern is also present, while the temperature evolution strongly depends on the value of κ and the sound velocity v_s. Through power-counting estimation, we demonstrated that the generic form of the temperature evolution must depend on the density of states and the Fermi-Dirac distribution for each scattering channel. In our full numerical analysis, we observed an approximate linear scaling with temperature in the experimentally accessible range for κ = 0, regardless of the quasivacancy concentration. However, when κ≠ 0, the results show a distinct linear scaling with temperature only if we fine-tune the phonon frequency Ω while keeping the sound velocity v_s constant. This fine-tuning is necessary due to the localized nature of the in-gap states induced by the quasivacancies, which require a specific value of incoming phonon energy to scatter into the bulk, thus contributing to the ph-channel. This effect is absent in the clean limit, where the Majoranas are fully gapped when κ≠ 0. Finally, we explored the temperature scaling of α_s(q) in the random-flux sector, as a limiting case to capture the physics of thermally excited fluxes at temperatures above the flux gap scale. Using a power-counting argument, we demonstrated that although flux disorder overrides the effects of quasivacancies, the attenuation still scales with temperature as α_s ∼α_0 + α_1 T. This is because ph-processes grow linearly with temperature within a smaller window in the random-flux sector. Combined with dominant pp-processes, this results in a bounded scaling with temperature. Our numerical analysis corroborates this observation, highlighting the robustness of sound attenuation in the presence of both site and flux disorder. This robustness is due to the fermionic nature of fractionalized excitations, in contrast to other bosonic contributions such as phonon-phonon and magnon-phonon scatterings. In conclusion, our combined results reinforce the effectiveness of acoustic probes as a means to experimentally verify spin fractionalization in Kitaev materials. § ACKNOWLEDGEMENTS The authors thank Susmita Singh, Peter Stavropoulos, Swetlana Swarup and Yang Yang for valuable discussions. The work is supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences under Award No. DE-SC0018056. We acknowledge the support from NSF DMR-2310318 and the support of the Minnesota Supercomputing Institute (MSI) at the University of Minnesota. N.B.P. 
also acknowledges the hospitality and partial support of the Technical University of Munich – Institute for Advanced Study and the Alexander von Humboldt Foundation. § COMPUTATION OF THE PHONON POLARIZATION BUBBLE In this appendix, we provide some technical details for the computation of the phonon polarization bubble. §.§ Majorana fermion-phonon vertex coupling in the mixed representation As shown in the main text, the matrix elements of the Majorana fermion-phonon vertex in the mixed representation are given by ij = 2iλη^α_ij f^μ_α(q), where the functions f^μ_α() are obtained from the Fourier transform of the strain tensor ϵ_ij(r) in the basis of the phonon polarization vectors. The explicit form of f^μ_α() is: f_x^∥(q) = q_x + √(3)q_ycosθ_q+ √(3)q_x - q_ysinθ_q, f_y^∥(q) = q_x - √(3)q_ycosθ_q- √(3)q_x + q_ysinθ_q, f_z^∥(q) = 2-q_xcosθ_q +q_ysinθ_q, f_x^⊥(q) =√(3)q_x - q_ycosθ_q- q_x + √(3)q_ysinθ_q, f_y^⊥(q) =-√(3)q_x + q_ycosθ_q- q_x - √(3)q_ysinθ_q, f_z^⊥(q) = 2q_xsinθ_q +q_ycosθ_q. The next step is to absorb the exponential factors from Eq. (<ref>) into our definition of ij. To achieve this, we define the exponential matrices 𝔼_^A,B, which contain all the phase factors corresponding to sites on each sublattice: 𝔼_^A = [ e^i·r_1 … e^i·r_1 … e^i·r_1; … … … … …; e^i·r_i … e^i·r_i … e^i·r_i; … … … … …; e^i·r_N … e^i·r_N … e^i·r_N; ], 𝔼_^B^T = [ e^i·r_1 ⋮ e^i·r_j ⋮ e^i·r_N; ⋮ ⋮ ⋮ ⋮ ⋮; e^i·r_1 ⋮ e^i·r_j ⋮ e^i·r_N; ⋮ ⋮ ⋮ ⋮ ⋮; e^i·r_1 ⋮ e^i·r_j ⋮ e^i·r_N. ] Here all r_i inside 𝔼_^A belongs to sublattice A, and all r_i in 𝔼_^B belongs to B. This allows us to write the full vertex matrix Λ_^μ by taking the element-wise (Hadamard) product of the vertex ij with the exponential matrices: Λ_^μ = [ 0 λ_^μ; -λ_^μ^T 0 ]⊙[ 0 𝔼^A_; 𝔼^B_^T 0 ]. Here we used the fact that the matrix from sublattice B to A is simply: BA = AB^† = -AB^T, where the last step comes from the fact that ij is purely imaginary. This allows us to obtain the coupling V_ defined in Eq. (<ref>) in the sublattice representation and recover Eq. (<ref>): V_ = -i/2[ c_A c_B ][ 0 AB; BA 0 ][ c_A; c_B ]u_^μ The submatrices AB and BA are then given by: AB = λ_^μ⊙𝔼_^A, BA = -λ_^μ^T ⊙𝔼_^B^T . As shown in <cit.>, it is useful to further simplify this expression by symmetrizing the coupling matrix between between A and B sublattices. The elements of the symmetrized exponential matrices are then: [𝔼^s_]_ij = 1/2e^i·r_i + e^i·r_j. This symmetrization ensures that the coupling matrix accounts for the interactions between the two sublattices in a balanced manner and recover the expressions for submatrices AB and BA presented in Eq. (<ref>). The final form of the symmetrized vertex is the same as in Eq. (<ref>), but with the symmetrized submatrices instead. From this point, we can drop the superscript s, as all the sublattice vertex matrices are assumed to take the form presented in the main text. In principle, we could proceed with the self-energy calculation from the expression above. However, it is instructive to see how each block is written explicitly in terms of the Bogoliubov submatrices X and Y. By taking the matrix product explicitly in Eq. (<ref>) we have Λ^μ_,11 = iX-Y^*BAX+Y^T - X+Y^*ABX-Y^T Λ^μ_,12 = iX-Y^*BAX+Y^† + X+Y^*ABX-Y^† Λ^μ_, 21 = -iX-YBAX+Y^T + X+YABX-Y^T Λ^μ_,22 = -iX-YBAX+Y^† - X+YABX-Y^† where BA can be written in terms of AB by using the property of the symmetrized vertex in Eq. (<ref>). 
The advantage of using the above expression is that X and Y are N×N matrices, so that multiplying them is in practice much more efficient than multiplying the 2N×2N matrices W and U defined in Eq. (<ref>).

§.§ Explicit steps to obtain Π^μν(q,τ)

Here we show some extra steps to obtain the phonon polarization bubble shown in Sec. <ref>. From Eq. (<ref>), we can take the matrix product to write Eq. (<ref>) in a more explicit form:

Π^μν(q,τ) = ⟨ T_τ [ β^† Λ^μ_11 β + β^† Λ^μ_12 β^† + β Λ^μ_21 β + β Λ^μ_22 β^† ](τ) × [ β^† Λ^ν_11 β + β^† Λ^ν_12 β^† + β Λ^ν_21 β + β Λ^ν_22 β^† ](0) ⟩.

By using Wick's theorem, one can show that there are only six non-vanishing contributions out of the sixteen terms in the expression above. For instance, one possible term in the ph-channel can be written as:

⟨ T_τ [β^† Λ^μ_11 β](τ) [β^† Λ^ν_11 β](0) ⟩ = ⟨ T_τ β^†_i β_j(τ) β^†_k β_l(0) ⟩ [Λ^μ_11]_ij [Λ^ν_11]_kl
= ⟨ T_τ β^†_i(τ) β_l(0) ⟩ ⟨ T_τ β_j(τ) β_k^†(0) ⟩ [Λ^μ_11]_ij [Λ^ν_11]_kl
= g̅_i(τ) g_j(τ) δ_il δ_jk [Λ^μ_11]_ij [Λ^ν_11]_kl
= g̅_i(τ) g_j(τ) { [Λ^μ_11] ⊙ [Λ^ν_11]^T }_ij,

where g_i(τ) = ⟨ T_τ β_i(τ) β^†_i(0) ⟩ and g̅_i(τ) = ⟨ T_τ β^†_i(τ) β_i(0) ⟩ are the fermionic propagators. The summation over repeated indices is assumed in (<ref>), and the definition of the Hadamard product was used. The same kind of computation can easily be performed for all non-vanishing terms. From this, we can sum all contributions to write the polarization bubble in terms of the imaginary time:

Π^μν(q,τ) = (1/N) ∑_ij { g̅_i(τ) g_j(τ) [ [Λ^μ_11]⊙[Λ^ν_11]^T - [Λ^μ_11]⊙[Λ^ν_22] ]_ij
+ g_i(τ) g̅_j(τ) [ [Λ^μ_22]⊙[Λ^ν_22]^T - [Λ^μ_22]⊙[Λ^ν_11] ]_ij
+ g_i(τ) g_j(τ) [ [Λ^μ_21]⊙[Λ^ν_12]^T - [Λ^μ_21]⊙[Λ^ν_12] ]_ij
+ g̅_i(τ) g̅_j(τ) [ [Λ^μ_12]⊙[Λ^ν_21]^T - [Λ^μ_12]⊙[Λ^ν_21] ]_ij }.

The final step is to take the Fourier transform to the Matsubara frequencies iω_n. The fermionic Matsubara Green's functions are given by:

g_i(iω_n) = 1/(iω_n - ε_i),  g̅_i(iω_n) = 1/(iω_n + ε_i),

which are obtained from g_i(iω_n) = ∫_0^β dτ e^iω_nτ g_i(τ), and the same for g̅_i(iω_n). Using this form, along with the Fourier transform over the phonon frequencies, we can compute the convolutions of the Green's functions in Eq. (<ref>) as:

P_ij^gg̅ = T ∑_ω_n g_i(iω_n) g̅_j(iΩ - iω_n) = T ∑_ω_n [1/(iω_n - ε_i)] [1/((iΩ - iω_n) + ε_j)] = [n_F(ε_i) - n_F(ε_j)]/(iΩ - ε_i + ε_j),

P_ij^g̅g = T ∑_ω_n g̅_i(iω_n) g_j(iΩ - iω_n) = T ∑_ω_n [1/(iω_n + ε_i)] [1/((iΩ - iω_n) - ε_j)] = [n_F(-ε_i) - n_F(-ε_j)]/(iΩ + ε_i - ε_j),

P_ij^gg = T ∑_ω_n g_i(iω_n) g_j(iΩ - iω_n) = T ∑_ω_n [1/(iω_n - ε_i)] [1/((iΩ - iω_n) - ε_j)] = [n_F(ε_i) - n_F(-ε_j)]/(iΩ - ε_i - ε_j),

P_ij^g̅g̅ = T ∑_ω_n g̅_i(iω_n) g̅_j(iΩ - iω_n) = T ∑_ω_n [1/(iω_n + ε_i)] [1/((iΩ - iω_n) + ε_j)] = [n_F(-ε_i) - n_F(ε_j)]/(iΩ + ε_i + ε_j),

where we used the method of residues to compute the sum over complex frequencies <cit.>. Therefore, we can recover Eq. (<ref>) given in the main text by taking the analytical continuation iΩ → Ω + iδ for infinitesimal δ.
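As an illustration of how these convolutions enter a numerical evaluation, here is a small, self-contained sketch (our addition) of the ph- and pp-channel frequency structure after the analytic continuation. The spectrum ε_i, the temperature, and the broadening δ are arbitrary placeholders, and the vertices are set to unity; in the actual calculation the P_ij are contracted with the Hadamard products of the vertex blocks.

```python
import numpy as np

def n_F(e, T):
    # Fermi-Dirac occupation
    return 1.0 / (np.exp(e / T) + 1.0)

def channels(eps, Omega, T, delta=1e-2):
    # Matsubara convolutions P_ij after the continuation iOmega -> Omega + i*delta
    z = Omega + 1j * delta
    ei, ej = eps[:, None], eps[None, :]
    P_ph = (n_F(ei, T) - n_F(ej, T)) / (z - ei + ej)    # g-gbar (particle-hole)
    P_pp = (n_F(ei, T) - n_F(-ej, T)) / (z - ei - ej)   # g-g   (particle-particle)
    return P_ph, P_pp

# Toy fermionic spectrum standing in for the Bogoliubov energies eps_i
eps = np.linspace(0.05, 2.0, 300)
T = 0.1

# Frequency scan of the two channels with trivial (unit) vertices
for Omega in (0.1, 0.5, 1.5):
    P_ph, P_pp = channels(eps, Omega, T)
    print(Omega, P_ph.imag.sum() / eps.size**2, P_pp.imag.sum() / eps.size**2)
```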
Because of the additional in-gap states in the bound-flux sector, the overlap of the low-frequency peaks leads to a larger range of Ω where the ph-channel is appreciable. This behavior is in visible contrast to the zero-flux sector. The temperature scaling of the sound attenuation, however, is almost insensitive to these new features. As shown in Fig. <ref>(d), the linear scaling in temperature is still observable only above a threshold of parameter values, close to those found in the zero-flux sector. Again, this is a direct consequence of the new manifold of low-energy states accessible in the ph-scattering process. Since we do not expect such fine-tuning to be achievable in ultrasound experiments, we conclude that the bound-flux and zero-flux sectors are nearly indistinguishable from the perspective of acoustic phonon dynamics.
MPP-2024-123 LMU-ASC 08/24 On the Origin of Species Thermodynamics and the Black Hole - Tower Correspondence Alvaro Herráez^1, Dieter Lüst^1,2, Joaquin Masias^1, Marco Scalisi^1 ^1 Max-Planck-Institut für Physik (Werner-Heisenberg-Institut), Boltzmannstr. 8, 85748 Garching, Germany ^2 Arnold Sommerfeld Center for Theoretical Physics, Ludwig-Maximilians-Universität München, 80333 München, Germany § ABSTRACT Species thermodynamics has been proposed in analogy to black hole thermodynamics. The entropy scales like an area and is given by the mere counting of the number of the species. In this work, we derive the rules of species thermodynamics and explain how those originate from standard thermodynamics. We consider configurations of species in thermal equilibrium inside a box of size L and show that the temperature T of the system, which plays a crucial role, is always upper bounded by the species scale Λ_ sp. We highlight three relevant regimes: (i) when L^-1< T<Λ_ sp, and gravitational collapse is avoided, the system exhibits standard thermodynamics features, for example, with the entropy scaling like the volume of the box; (ii) in the limit L^-1≃ T→Λ_ sp we recover the rules of species thermodynamics with the entropy scaling like the area of the box; (iii) an intermediate regime with L^-1≃ T< Λ_ sp that avoids gravitational collapse and fulfills the Covariant Entropy Bound; this interpolates between the previous two regimes and its entropy is given merely in terms of the counting of the species contributing to the thermodynamic ensemble. This study also allows us to find a novel and independent bottom-up rationale for the Emergent String Conjecture. Finally, we propose the Black Hole - Tower Correspondence, a generalization of the celebrated Black Hole - String Correspondence. This provides us with a robust framework to interpret the results of our thermodynamic investigation. Moreover, it allows us to qualitatively account for the entropy of black holes in terms of the degrees of freedom of the weakly coupled species in the tower. empty empty plain § INTRODUCTION Understanding the generic properties of Effective Field Theories (EFTs) arising from Quantum Gravity (QG) is at the core of trying to connect the latter with the observable universe. In the context of the Swampland Program <cit.>, these generic properties are usually formulated in terms of conjectures, which can be examined and, sometimes, rigorously formulated in concrete frameworks, such as different corners of String Theory or AdS/CFT (see also <cit.> for recent reviews on the topic). Interestingly, such explorations usually provide insights into a wide range of topics related to QG, from constraints in the spectrum of particles or energies allowed in the resulting EFTs, to black hole physics, and every so often these turn out to be related in unexpected ways. One of the best established properties of QG theories is the existence of infinite towers of states in the spectrum. This is not necessarily the case in standard QFT (namely, when gravity can be neglected). Furthermore, according to the Distance Conjecture (DC) <cit.>, one of the best established Swampland Conjectures, some of these infinite towers always become massless (in d-dimensional Planck units for a d-dimensional EFT) as one approaches infinite distance limits in moduli (or, more generally, in scalar field) space. 
In fact, the Emergent String Conjecture (ESC) <cit.> constraints the nature of these light towers by postulating that they can only be of two kinds: either Kaluza-Klein (KK)-like towers, or towers of oscillator modes from a weakly coupled critical string. In this sense, the presence of towers of states becoming light indicates the existence of a finite range of validity (in field space) for such EFTs (see e.g. section II.B of <cit.>). One could think that by integrating in the different states as they become light could be enough to define a new EFT with an extended range of validity. However, the fact that these towers include infinitely many degrees of freedom (dof) hints towards the existence of a scale above which a usual EFT description cannot be applied. This can be seen as a maximum cutoff for any gravitational EFT, and in the limit of such infinite tower of states becoming massless, this fundamental cutoff must also vanish (in d-dimensional Planck units). This relevant cutoff scale is the so-called Species Scale <cit.>, originally defined as ≃^1/d-2 , where is the d-dimensional Planck mass and counts the number of dof with masses below the species scale itself. Notably, this scale encapsulates the idea that a gravitational EFT with an arbitrarily high number of light dof breaks down at scales arbitrarily lower than , contrary to the naive expectation of the Planck mass giving the cutoff for gravitational EFTs. Different perturbative and non-perturbative arguments lead to the result that gives indeed the maximum scale up to which a gravitational EFT can be trusted. Furthermore, in the cases in which is dominated by the towers of states coming from KK or string oscillator modes, reduces to the higher dimensional Planck mass or the string scale, respectively, as one would expect (see e.g. <cit.>). More recently, this idea of considering the species scale as the maximum cutoff scale for gravitational EFTs was formulated in the more usual EFT language as the scale suppressing higher curvature corrections <cit.>, schematically as S_EFT, d=∫ d^d x √(-g) ^d-22(R + ∑_n 𝒪_n (R)/^n-2)+… which can be more easily extended to the interior of moduli spaces. In <cit.>, and the subsequent work <cit.>, it was indeed confirmed that the species scale computed from protected higher curvature corrections in different supersymmetric setups indeed matches the expected results, recovering asymptotically the string scale or the higher-dimensional Planck scale depending on whether the limits that are probed are emergent string limits or decompactification limits, respectively. A complementary derivation of the species scale comes from black hole physics <cit.>. In this case, is given as the scale associated to the smallest semi-classical black hole in a d-dimensional gravitational EFT. This notion of species scale is compatible and, in fact, deeply related to the aforementioned definitions, since corrections to the EFT also affect the size of the horizon of the black holes and can become particularly relevant for small black holes. It also connects with one of the most interesting questions in QG, namely the explanation for the entropy of black holes. In particular, a Schwarzschild black hole of size ^-1 will have an entropy S∼( ) ^d-2≃ . Note that this entropy does not just follow the usual area law but, in this case, it also turns out to be given by the total number of species. 
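As a simple illustration (our addition, not taken from the text), the defining relation between the species scale and the number of light species can be solved self-consistently once a tower is specified. The sketch below does this by fixed-point iteration in d-dimensional Planck units for a KK-like tower with N(Λ) = (Λ/m_tow)^p; the values of d, p, and m_tow are arbitrary choices.

```python
# Minimal sketch (our addition): self-consistent species scale for a KK-like
# tower with N(Lambda) = (Lambda / m_tow)^p, in d-dimensional Planck units.
d, p = 4, 1
m_tow = 1e-6          # tower mass scale in Planck units (arbitrary choice)

Lam = 1.0             # initial guess: the Planck scale
for _ in range(100):
    N_sp = (Lam / m_tow) ** p            # species below the current cutoff
    Lam = N_sp ** (-1.0 / (d - 2))       # Lambda_sp = M_Pl / N_sp^{1/(d-2)}, M_Pl = 1

# Closed-form solution of the same fixed point: Lambda_sp = m_tow^{p/(d-2+p)}
print(Lam, m_tow ** (p / (d - 2 + p)))   # both ~ 1e-2 for these numbers
```

For these (arbitrary) numbers one finds N_sp of order 10^4 and Λ_sp of order 10^-2 in Planck units, illustrating how the cutoff drops well below the Planck scale once many species become light.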
This observation that a minimal black hole corresponds to a particular tower of states led to the formulation of species thermodynamics <cit.>. Using the laws of black hole thermodynamics, the corresponding laws of species thermodynamics were deduced. In <cit.>, it was also shown that, by imposing the rules of black hole thermodynamics on the corresponding tower of species, the mass spectrum and the degeneracies of the species particles can be largely constrained. This result was in agreement with the emergent string conjecture <cit.> and, therefore, constituted a first bottom up argument for it. In this work, we shed light on the origin of the peculiar properties of species thermodynamics. We find the following key results: ∘Using standard thermodynamics for a system of species in thermal equilibrium inside a box of size L, we derive the constitutive relations of species thermodynamics for the entropy and the energy, i.e. S∼ and E∼, in the limit T→. This naturally produces the scaling of the entropy with the area in such limit. ∘We obtain the expected volume dependence of S and E in the regime L^-1 < T <, and show that a box of species effectively interpolates between the two regimes while still avoiding gravitational collapse and fulfilling the Covariant Entropy Bound. ∘We show the existence of a maximum temperature, which can be at most equal to . This result is directly derived by the application of entropy bounds to a finite box of species. ∘We show that the only apropriate towers consistent with the properties of species thermodynamics are those with polynomial or exponential degeneracies, which is reminiscent of KK and string towers, respectively. This provides a novel and independent bottom-up rationale for the emergent string conjecture (other indications are offered in <cit.>). ∘We propose the Black Hole - Tower Correspondence by generalizing the original argument leading to the celebrated the Black Hole - String correspondence <cit.> (see also <cit.>). We show that the correspondence is in fact between black holes and the aforementioned allowed towers of states, namely either KK or string modes. We start our investigation by considering the thermodynamics of species in a box of size L at a temperature T, in the canonical ensemble. It is a known result that this entropy generically scales like the volume of the box <cit.>. This may look like an important obstacle to recover the right dependence of the entropy on the area as it is dictated by species thermodynamics. However, a first hint that this might not be the whole story is the fact that, in the presence of gravity, the maximum entropy of a system is constrained by the Covariant Entropy Bound (CEB) <cit.> and, in the case of the box, this means it is bounded above by its area (in Planck units). Moreover, there is always the possibility that if the box with the species is heated up too much, it could collapse to a black hole. We show that, as we increase the temperature and decrease the length L of such a box, gravitational collapse can be avoided and the CEB is fulfilled for temperatures below and sufficiently small boxes. It is precisely in the limit T ≃ 1/L≃ that the CEB is saturated, gravitational collapse should take place, and also S → <cit.>. In this limit, the entropy turns out to be proportional to the area of the box in which the species are contained. 
Moving away from the limit in which T ≃, namely for temperatures lower than the species scale, we show that the entropy of the system cannot be given in terms of the area of the box. There are two relevant situations. When a small number of species is involved, then we recover the standard result of the entropy dependence on the volume of the box. Instead, we can consider high enough temperatures to allow for more species contributing to the ensemble. However, one must avoid an increase of entropy such that the system collapses into a black hole or violates the CEB. This is possible if T ≃ 1/L, even for temperatures below . In this case, we obtain a dependence of the entropy like S∼ N_T , where N_T is the effective number of species at temperature T. Importantly, this converges to as T approaches . This case, therefore, effectively interpolates between the volume and area laws for the entropy in the field theory. We would like to emphasize that we perform our whole analysis in the limit in which we can neglect the d-dimensional gravitational interactions, namely for ≫ or equivalently ≫ 1. Such a limit would correspond to asymptotic limits of moduli space in a consistent string effective description. This effective thermodynamic analysis allows us also to select consistent type of towers. Specifically, from the convergence of the EFT canonical partition function, we obtain that the degeneracy cannot be superexponential that is, it cannot be larger than the degeneracy of string oscillators. Moreover, from the requirement that S∼ N_T converges to S∼ when T approaches , we obtain that the tower degeneracy has to be at least polynomial, which corresponds to KK-like towers. Thus, we are able to exclude other types of degeneracy and find a novel supporting bottom-up argument for the Emergent String Conjecture <cit.>. Our results complement recent evidence in the context of species thermodynamics <cit.> and gravitational scattering amplitudes <cit.>. One of the main lessons is therefore that a finite box of species, with mass scale degeneracy as the one of KK or string modes and at finite temperature T, turns out to have the peculiar properties of species thermodynamics when T→. In this limit, the system will collapse into the smallest possible black hole with temperature T∼ and entropy S∼. As a further non-trivial step, one could imagine to increase the gravitational strength of such a black hole, by varying the ratio / and while keeping constant entropy. This represents a first concrete example of a correspondence between tower of species in the weakly coupled EFT regime and gravitational strongly coupled objects such as black holes. This takes us to the final part of our investigation, where we propose the black hole - tower correspondence. We will show that this is nothing but that a generalization of the black hole - string correspondence, originally proposed in <cit.>. It will provide us with an effective tool to qualitatively account for the entropy of black holes in terms of the degrees of freedom of the species in the tower. We will consider varying a modulus in order to control the effective gravitational coupling of d-dimensional gravity (i.e. the quotient /). We will then draw the correspondence between black hole solutions (when gravity is sufficiently strong), and a box of species arbitrarily weakly coupled in the opposite regime (see Fig. <ref>). 
On the black hole side, the entropy goes like S∼ (/)^d-2, whereas in the EFT regime it interpolates between the volume dependence (when few species are included) and S∼ N_T when the species dominate. A correspondence between these two entropies takes place precisely when both limits approach T∼ and S∼. We show this can only happen for towers of states with polynomial or exponential degeneracy. We find consistency with arguments given in <cit.> and <cit.>. This offers a complete picture in which the EFT results on the one side match the black hole results on the other side if the correspondence between the black hole and the ensemble occurs. Furthermore, in the spirit of the black hole - string correspondence proposed in <cit.>, one can consider large, semi-classical black holes with a very large horizon and follow their constant entropy lines as the modulus is varied. In such a way, one can get to the point where the entropy of the original black hole is accounted for by the degrees of freedom of the species in the tower, namely the free string (as originally proposed) or the KK tower, as would be the case if the original black holes wrap extra dimensions. The structure of this paper is as follows. In section <ref>, we review how the CEB and gravitational collapse constraint thermal configurations that can be described in field theory, highlighting the importance of the limit ≃ T ≃ 1/L, in which both bounds are saturated and one recovers S∼. In section <ref>, we study the EFT thermodynamics of a system of species at temperature T≤ (mainly in the canonical ensemble) in the limit in which interactions can be neglected. We also obtain the usual volume dependence, as well as the dependence on the effective number of degrees of freedom at temperature T, for the entropy in the relevant limits, finding EFT evidence for species thermodynamics. Furthermore, we review the agreement of our results with previous results by treating this limiting case in the microcanonical ensemble. Along the way, we explain how convergence of the canonical partition function, together with the limit N_T→ as T→ constraint the possible towers to those with polynomial or exponential degeneracy. In section <ref>, we revisit our previous analysis from the top-down embedding of the towers with polynomial degeneracy, comparing black hole and black brane solutions in the presence of extra dimensions. Finally, in section <ref>, we review the black hole - string correspondence and heuristically extend it to a black hole - tower correspondence, presenting the bigger picture of how the species entropy can be understood as the intermediate step to account for the entropy of a general Schwarzschild black hole from the EFT counting of the entropy of a box of weakly coupled species in the appropriate limit. We leave a summary of our findings and conclusions for section <ref>, and relegate some technical details to the appendices. § IR/UV MIXING AND THE COVARIANT ENTROPY BOUND We begin by reviewing the Covariant Entropy Bound (CEB) <cit.>, focusing on the IR/UV mixing that arises when considered in the context of Effective Field Theories (EFTs) of gravity. The basic idea that we will exploit is the fact that the maximum entropy that can be accommodated in a finite spacelike region, Σ, behaves holographically in the presence of gravity. 
That is, it is bounded by the area of the boundary of said region, measured in Planck units,[To be precise, in order to apply the spacelike projection theorem to obtain the simple form of the bound applied to a spacelike region, Σ, we need the region to be contained in the causal past of the future directed light-sheets (i.e. the future-directed ligthsheets are complete) <cit.>.] S(Σ) ≤A(∂Σ)/4 ^d-2 , where A(∂Σ) refers to the area of the boundary of Σ, and to the Planck lenght in d-dimensions.[We use the conventions in which the Planck lenght is related to the d-dimensional Newton constant and (reduced) Planck mass in the following way G_N,d=^d-2=1/8π^d-2.] From confronting this area behaviour with the volume scaling of the entropy in field theory, which for N species of particles in the thermodynamic limit, characterized by the temperature T, takes the form S_EFT (T,Σ) ∼ N T^d-1vol(Σ) , it is easy to see that the IR/UV mixing arises from the fact that probing larger and larger regions restricts the maximum temperature that one can probe without violating the bound. In particular, if we consider the region Σ to be characterized by the length scale L (e.g. by taking a d-dimensional sphere of radius L), we obtain the following bound N (T )^d-1≲(L) , which very neatly displays the aforementioned IR/UV mixing between the IR scale of the configuration under consideration, L, and the UV scale, T. The logic yielding constraints on low energy EFTs of gravity coming from direct application of the CEB is the following. First, we note that every configuration in the theory must fulfill the bounds (<ref>)-(<ref>). Thus, if one finds a configuration that naively violates the bound, two options arise. Either that particular configuration turns out to be censored in the EFT via some mechanism in (quantum) gravity precluding it, or the regime of validity of the EFT must be restricted to avoid that configuration. Whenever a particular configuration can be argued to be within the regime of validity of a low energy EFT, one would expect the latter option to be the correct one. In particular, a very neat implementation of this idea is to restrict the regime of validity of the EFT via bounding the maximum UV cutoff that can be considered. This reasoning was originally used in <cit.> in the context of families of AdS vacua in order to bound the species scale <cit.> in terms of the mass of a tower to a positive power, as originally predicted by the AdS Distance Conjecture <cit.>.[See also <cit.> for some recent discussions on the species scale in the context of the Swampland.] Furthermore, it has also been used in combination with the idea of dynamical cobordisms <cit.>, to provide a bottom-up rationale for the Distance Conjecture <cit.>.[See also <cit.> for previous bottom-up arguments recovering also some of the key features of the Distance Conjecture from different approaches.] In these works, the configurations whose entropy was confronted with the CEB had two important features. First, the maximum energy up to which the EFT was assumed to be trustable, was identified with the species scale. Second, the entropy that was confronted with the CEB in order to obtain the bounds on such UV cutoff of the EFT (i.e. the species scale) was the one associated to configurations in which only the massless subsector of the EFT was considered (i.e. N∼ 1 in eq. (<ref>)). In this work, we will focus on a different type of configurations. 
In particular, we will not only focus on the massless subsector of the theory, and for a configuration characterized by a temperature T, we will consider the entropy of all the species which contribute significantly to such configurations, that we denote N_T. Needles to say, we will also take into account the fact that T≤, and particularly focus on the limiting case T≃ for reasons that will become clear shortly. At this point it is natural to wonder which configurations are then the right ones to consider in order to constrain EFTs of gravity. The answer lies on the aforementioned fact that every configuration must fulfill the CEB. Thus, the bounds and results from our analysis here are independent and complementary to the previous results obtained by applying the same logic to different kinds of configurations. Let us consider a spherical region characterized by a typical size L, and study configurations in thermodynamic equilibrium at temperature T that fit in it, including the potential contributions from all possible species, whose number is given by N_T and generally depends in the temperature (and the density of one-particle states). In particular, we want to focus on the case in which a tower of massive particles is present, and thus the species scale, which gives an upper bound for the EFT cutoff, can be well described by ≃^1/d-2 . For concreteness, let us now consider the following parameterization for the mass spectrum of the aforementioned towers at level n <cit.> (we will elaborate more on the possible types of towers and parameterizations in section <ref>, but for now we stick to this parameterization for the sake of simplicity, and keep in mind that the subsequent discussion can be adapted to more general cases) m_n = n^1/p . Here, refers to the mass scale of the tower and p is some effective way to encode the density of the tower, which captures the behaviour of towers of KK modes from p compact dimensions (for finite values of p) and also the key features of a tower coming from weakly coupled string oscillators (in the p→∞ limit). According to the Emergent String Conjecture <cit.>, these are the only two relevant types of towers as one approaches infinite distance limits in moduli space, and recent results support this claim from a bottom-up approach <cit.>. In the presence of a tower with mass spectrum given by (<ref>), one can then identify the maximum n that contributes to the species scale with the number of species, , and thus obtain ≃( )^p . Thus, for a temperature T≤, we can similarly define the number of species with masses below T that contribute to the thermodynamics as N_T≃(T)^p , where clearly N_T ≤. We can then use eqs. (<ref>) and (<ref>) to rewrite in terms of the mass scale of the tower, , and the density parameter, p, and use that to obtain an expression of N_T in terms of the temperature and the species scale N_T≃(T)^p ()^d-2 ≃(T)^p , showing explictly that N_T< as long as T,. By applying the CEB in the form given by (<ref>), with N=N_T we thus obtain T≲()^1/d-1+p≡≤ , where we defined ≡ 1/L. From here, it is manifest that for configurations in which all the at thermodynamic equilibrium at temperature T are considered, there is a maximum temperature which is compatible with the CEB, and it is, in general, strictly lower than the species scale (this was also discussed in earlier work, see e.g. <cit.>). 
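A quick numerical check of this bound may be useful. The sketch below (our addition, with arbitrary numbers in d-dimensional Planck units) scans the temperature, evaluates both sides of the CEB for a box of size L = 1/Λ_IR filled with the N_T species lighter than T, and compares the resulting maximum temperature with the closed-form expression T_max = Λ_sp (Λ_IR/Λ_sp)^{1/(d-1+p)}.

```python
import numpy as np

# Sketch (our addition): CEB check for a thermal box of species, Planck units.
d, p = 4, 1
Lam_sp = 1e-2                 # species scale (arbitrary)
Lam_IR = 1e-20                # inverse box size 1/L (arbitrary)
N_sp = Lam_sp ** -(d - 2)     # N_sp = (M_Pl / Lam_sp)^{d-2}, with M_Pl = 1

T = np.logspace(np.log10(Lam_IR), np.log10(Lam_sp), 2000)
N_T = N_sp * (T / Lam_sp) ** p            # species lighter than T
S_EFT = N_T * (T / Lam_IR) ** (d - 1)     # entropy ~ N_T (T L)^{d-1}
S_CEB = Lam_IR ** -(d - 2)                # area bound ~ (L M_Pl)^{d-2}

T_max_numeric = T[S_EFT <= S_CEB].max()
T_max_formula = Lam_sp * (Lam_IR / Lam_sp) ** (1.0 / (d - 1 + p))
print(T_max_numeric, T_max_formula)       # agree up to the grid resolution
```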
However, there is a particularly interesting case in which this temperature can be pushed up to the species scale, namely when we study configurations for which ≃, since then ≃ and, obviously, N_T≃. This limiting case, originally highlighted in <cit.>, corresponds to the limit in which the entropy of the thermodynamic configuration would contain the same entropy as the minimal black hole in the EFT (i.e. the one with the smallest possible radius, ≃),[This is actually one of the original ideas to motivate the species scale non-perturbatively, as the length scale of the smallest (semi-classical) black hole in a theory <cit.>.] and thus approaches the maximum entropy allowed in such a region. Equivalently the box of species at temperature T≃ 1/L → approaches its maximum possible temperature and the limit where it should collapse to such black hole. The entropy goes like S∼≃(/ )^d-2 , recovering the area behaviour of the black hole entropy from species counting. These smallest black holes in the EFT were actually the main focus of <cit.>, where the entropy associated to the species was actually promoted to a fundamental concept, and the corresponding laws of species thermodynamics were proposed. One of the main goals of this paper is to study this behaviour of the entropy from basic thermodynamic considerations and comparison with the CEB. There is even one more remarkable feature of this limit, which was also noticed in <cit.>, namely the fact that in this limit, the CEB actually coincides from the bound from preventing gravitational collapse (à la CKN <cit.>), which is in general stronger, since a generic configuration in the EFT satisfying T≃, as given in (<ref>), will have a Schwarzchild radius larger that L=Λ_IR^-1. The total energy inside the box of size L corresponds to the ADM mass of the black hole E=N_T L^d-1T^d=, which is related to its Schwarzchild radius as ≃()^1/d-3^-1. The requirement ≤ L , leads to a bound on the temperature T≤ T_CKN=(Λ_IR)^2/d+p≤, which is strictly lower than the bound found using the CEB for any d≥2, p≥1, for any L>^-1. As expected, the two bounds match when L^-1→. This is particularly interesting since, even though the bounds from gravitational collapse give similar qualitative results than the ones obtain from applying the CEB, some of the quantitative outputs can be slightly different. In this sense, all the arguments and considerations that specifically refer to the limiting case above are particularly robust against any subtleties related to gravitational collapse. Having discussed both the CEB and the gravitational collapse bound, and given that the latter is always strictly more constraining than the former, a natural question arises: Can we approach the maximum entropy configuration (i.e. the saturation of the CEB) with a thermal ensemble that has not collapsed gravitationally? The answer, as can be seen from the previous analysis, turns out to be positive, since for any configuration with T≃ 1/L, both bounds are satisfied as long as T<, and in the limit in which T→ we have seen that both bounds are saturated and we expect our configuration to collapse to a black hole without a dramatic increase on its entropy, since it is already parametrically the same as that of the corresponding black hole solution with the same size. §.§.§ The observable universe and extra dimensions Let us now make a small detour and consider some potential implications of our previous discussion when boldly applied to the observable universe. 
In the presence of KK towers, the states in the tower contribute to the entropy if T≳=1/r. Imposing this on T_max in eq. (<ref>) leads to a mixing between IR and UV scales Λ_IR>()^(d-1)(d-2+p)/p , or equivalently, in terms of the size of the compact dimensions Λ_IR>(1/r)^d-1 , For a box the size of the observable universe, we have Λ_IR=√(Λ_cc), which bounds the size of the extra dimensions as r>(Λ_cc^2)^1/2(d-1) . For Λ_cc≃ 10^-120^2 we find r>10^-15 m. For <1/r the EFT would be oblivious to the existence of species (or equivalently, to the presence of extra dimensions). The cutoff temperature T= may not signal out a breakdown of the EFT if the effective description changes at some lower temperature, due to, for example, the presence of the extra dimensions or the back-reaction of the boundary. § TOWER ENTROPY FROM EFT THERMODYNAMICS In this section we consider the thermodynamics of a set of different species of particles in a box. The main goal is to begin by reviewing some standard results about the entropy of such configurations, and how it relates to the temperature, the size of the box, and the number of particles, in different limits that will be of particular interest later in this work. §.§ Canonical ensemble We start by considering a system with energy levels labeled E_r, where the energy is given by a sum of the energies of each of the constituents in the system. This boils down to assuming that we can neglect the contribution to the energy coming from the gravitational interactions between the particles, and in the systems in which we focus on along this work it is motivated by the fact they lie near infinite distance limits in moduli space, where d-dimensional gravity is arbitrarily weakly coupled (we will elaborate more on this along the way). Thus, we can write E_r= ∑_{κ_r} E_1,κ_r, where {κ_r} encode all the information about the one-particle states that constitute the system, with energy denoted by E_1,κ_r, like momentum distribution, mass degeneracy, etc. The canonical partition function for such a system can be written as a sum over (multiparticle) energy levels[This system can be expressed in terms of its Hamiltonian as 𝒵=e^- H/T] 𝒵=∑_r e^-E_r/T = ∑_r∏_{κ_r} e^-E_1,κ_r/T . The product over the {κ_r} includes information of the mass spectrum in the partition function. For high enough degeneracies, namely for particle spectra with an exponential (or greater) degeneracy, the partition function may not be well-defined, signaling a breakdown of our effective description, such that one should re-express the theory in term of more appropriate degrees of freedom. In the case in which the partition function converges for high degeneracies (e.g. exponential below some critical temperature), it will be dominated by the single-particle partition function at mass m≃ T <cit.>, whereas for small degeneracies (e.g. polynomial or subpolynomial) the partition function will be dominated by multi-particle configurations (i.e. with many one-particle states occupied). This has been thoroughly studied in the context of QCD <cit.>, where at temperature scales near the pion mass the effective baryon description breaks down, and also in string theory, where the phase transition is in general less understood <cit.>. 
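The convergence criterion just described is easy to see numerically. The following sketch (our addition, with toy masses m_n = n in arbitrary units and Hagedorn scale T_H = 1) compares truncated sums of d_n e^{-m_n/T} for polynomial, exponential, and super-exponential degeneracies: the first converges at any temperature, the second only for T < T_H, and the third for no T > 0.

```python
import numpy as np

# Sketch (our addition): convergence of sum_n d_n exp(-m_n / T) for different
# degeneracies, with toy masses m_n = n and Hagedorn scale T_H = 1.
def partial_sum(log_degeneracy, T, n_max=3000):
    n = np.arange(1, n_max + 1, dtype=float)
    log_terms = log_degeneracy(n) - n / T
    # work in log space; the clipping only matters for the divergent case
    return np.cumsum(np.exp(np.clip(log_terms, -700.0, 600.0)))

log_poly = lambda n: 2.0 * np.log(n)      # polynomial (KK-like): d_n = n^2
log_expo = lambda n: n                    # exponential (string-like): d_n = e^{m_n}, T_H = 1
log_supx = lambda n: n ** 2               # super-exponential degeneracy

for T in (0.5, 0.9, 1.1):
    for name, ld in (("polynomial", log_poly), ("exponential", log_expo), ("super-exp", log_supx)):
        s = partial_sum(ld, T)
        converged = np.isclose(s[-1], s[-100], rtol=1e-12)
        print(f"T = {T}: {name:12s} partial sum = {s[-1]:.3e}  converged: {converged}")
```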
§.§.§ Multi-particle state domination Let us then first consider a system consisting of particles of N_T different species,[We are choosing the notation with an eye on the fact that the total number of species contributing to the ensemble will be given by the previously introduced (and in general T-dependent) quantity, N_T, but for now we can simply consider it as a quantity to be determined and we will prove that it coincides with the number of species with masses below T later in this section.] under the assumption of sufficiently small mass degeneracy, such that multi-particle states dominate the partition function. We treat the complementary case below. Each of the species is labeled by an integer n, and we consider configurations inside a box of size L in d spacetime dimensions in the canonical ensemble, namely at equilibrium at a fixed common temperature, T. Importantly, we have not fixed by hand the total number of species, N_T, nor the number of total particles in the box (equivalently, we have not fixed the number of particles of each species). Instead, we allow these numbers to fluctuate and be determined by the thermodynamics of the system, with their average values being determined as a function of T. Thus, one should think of the total number of particles of each species inside the box, as well as of the number of species contributing to the ensemble, as the average quantities ⟨ N_n (T) ⟩ and ⟨ N_T (T) ⟩, respectively. The (average) total number of particles in the box thus includes a sum over all species ⟨ N_TOT (T)⟩ = ∑_n=1^⟨ N_T ⟩⟨ N_n (T) ⟩ . To avoid cluttering the notation we will usually drop the ⟨…⟩ unless we want to emphasize the fact that this is to be thought of as an average in the equilibrium configuration at temperature T. As previously mentioned, we consider the free particle limit, where the energy of the whole system can be accounted for by the energy of each of the individual constituents, neglecting interactions. The single-particle partition function for a particle of species n, with mass m_n, is given by[We are using here the Bolztmann distribution for the (non-degenerate) one-particles states. Considering the Bose-Einstein or Fermi-Dirac distributions does not change our results in the relevant limits analyzed in this paper.] 𝒵_1,n= ∑_p_α e^-E_n, p_α/T , E_n,p_α^2=m_n^2+∑_α=1^d-1 p_α^2 , where p_α=k_α/L (with k_α being integers) is the (d-1)-dimensional spatial momentum of the particles in the box. The sum over p_α in the partition function is over all possible momenta fitting inside the box (or equivalently over all (d-1)-tuples with integer entries, k_α). One can now build the partition function for such system of N_T species, with N_n identical particles of each species in the free limit, by simply multiplying the single-particle partition functions and dividing by the number of identical combinations of N_n identical species 𝒵=∏_n=1^N_T ( Z_1,n)^N_n∏_n=1^N_T N_n! . The (average) number of particles of each species is given by the sum of all (average) occupation numbers of one-particle states (i.e. states with different d-dimensional spatial momenta) of said species, that is ⟨ N_n ⟩ =∑_α⟨ N_n,α⟩ =∑_α e^-E_n, p_α/T , where we have simply used that the average occupation number (in the absence of a chemical potential, or with a vanishing one) is given by e^-E/T (see e.g. <cit.>). We thus also have that 𝒵_1,n=⟨ N_n ⟩ Let us now study the behaviour of N_n as a function of T and L for different values of m_n and , keeping in mind that ≥ T ≥ 1/L. 
Using the results of Appendix <ref>, (c.f. eq. (<ref>)) we can approximate the average occupation numbers by [ N_n= 𝒵_1,n≃ (TL)^d-1 , for T≳ m_n ,; ; N_n= 𝒵_1,n≃ e^-m_n/T P_q(m_nT) (TL)^d-1 , for T≲ m_n . ] From here we see that species with masses m_n≲ T have a typical occupation number that at leading order does not depend significantly on their mass, whereas for masses above T there exists the expected exponential suppression. This effectively means that all species with masses up to the order of the temperature will contribute to the partition function, whereas the ones with masses above such temperature will be exponentially suppressed and effectively will not contribute (unless the degeneracy of such species can compensate the exponential suppression, as is the case for stringy states, which we consider separately below). Thus, for a given temperature, T, such that the number of species with masses below such temperature is sufficiently large, we can compute the average number of weakly coupled one-particle states that are occupied, as given in eq. (<ref>), by approximating the sum by an integral ⟨ N_TOT (T)⟩ = ∫_0^∞ dm ρ(m) N(m) = = (TL)^d-1∫_0^T dm ρ(m) + (TL)^d-1∫_T^∞ dm e^-m_n/T P_q(m_nT) ρ(m) . Here we have expressed everything in terms of the masses, namely N_n=N(m_n), and defined the density of (species) states simply as ρ(m)=ρ(n) dn/dm_n. The first integral counts the number of species below temperature T, which resembles our definition of N_T in the previous section N_T=∫_0^T dm ρ(m) . For a spectrum with polynomial degeneracy, which we can parameterize as in (<ref>), this reduces to N_T=(T)^p , which coincides with eq. (<ref>). The integral between T and gives the following result (T)^p p [ Γ (q+p,1) ] , where we have use the same parameterization for the mass spectrum and also the fact that the polynomial P_q(m_n/T) has degree q≤ d-2. The factor multiplying N_T gives an extra term proportional to N_T (TL)^d-1. All in all, for towers with finite p,[The case p→∞, which can be though of an effective parameterization of the stringy regime, will be discussed separately below.] the parametric dependence of the average number of occupied one-particle states is given by ⟨ N_TOT (T)⟩ ∼ N_T (TL)^d-1 , which can be interpreted as the number of species below temperature T, captured by N_T, times the corresponding number of available momentum states for each of these species, which is of order (TL)^d-1. Since the average number of one-particle states that are populated effectively defines the average number of particles in the box, it is crucial to understand also the fluctuations in such number. In particular, using the definition of the fluctuation (<ref>), we obtain for the system defined by the partition function (<ref>) the following expression Δ N_TOT⟨ N_TOT⟩∼1√(⟨ N_TOT⟩)∼1√(N_T (TL)^d-1) . As expected, the distribution becomes arbitrarily narrow as N_TOT grows, which will always be the case for the configurations we study (either because N_T becomes large or because T≫ L). In fact, this is also valid for the average number of one-particle states of each species, namely Δ N_n ∼√(N_n). Therefore, for large N_TOT we can simply use the average values up to order one factors that we are not important for our computations. In such cases, given that 𝒵_1,n=N_n∼ (TL)^d-1 for all n≲ N_T, we can approximate the logarithm of the partition function (neglecting subleading additive contributions from n>N_T) (<ref>) by log𝒵≃ N_n N_Tlog (N_n)-N_T log(N_n!) . 
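As a quick numerical check of the occupation-number estimates above (our addition; d = 4 and arbitrary parameter values), the sketch below sums Boltzmann factors over the discrete momenta of the box. It reproduces the plateau N_n ≃ (TL)^{d-1} for light species, up to an order-ten prefactor coming from the momentum integral, the exponential suppression for heavy species, and the overall scaling N_TOT ∼ N_T (TL)^{d-1} for a KK-like tower.

```python
import numpy as np

# Sketch (our addition): occupation numbers of species in a box for d = 4,
# with discrete momenta p_alpha = k_alpha / L as in the text; units arbitrary.
T, L = 1.0, 4.0
k = np.arange(-64, 65) / L
kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
p2 = kx**2 + ky**2 + kz**2

def N_occ(m):
    # average particle number of one species of mass m: sum_k exp(-E_k / T)
    return np.exp(-np.sqrt(m**2 + p2) / T).sum()

# Light species sit on a plateau ~ (T L)^{d-1} (times an 8*pi-type prefactor
# from the momentum sum); heavy species are exponentially suppressed.
for m in (0.05, 0.5, 1.0, 3.0, 6.0):
    Nm = N_occ(m)
    print(f"m = {m}:  N_m = {Nm:10.1f}   N_m / (T L)^3 = {Nm / (T * L) ** 3:.2f}")

# KK-like tower m_n = m_tow * n (p = 1): N_TOT scales like (T / m_tow) (T L)^{d-1}
m_tow = 0.25
N_tot = sum(N_occ(m_tow * n) for n in range(1, 60))
print(N_tot, (T / m_tow) * (T * L) ** 3)   # parametric agreement only (O(10) prefactor)
```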
We will be interested in the limits where T ≫ 1/L or T≃ 1/L, and it turns out that in both cases we can rewrite the log of the partition function as log( 𝒵)∼ N_TOT , where in the former case one can use Stirling's approximation for the denominator and in the later we use that N_n∼ 1 for T≃ L.[Note that in the case T≃ L we are approximating N_n∼ 1 to indicate that there is no asymptotic dependence in any of the variables of the problem, but keeping in mind that it N_n≠ 1 in order to avoid the vanishing of the logarithm, which would not make sense from the point of view of e.g. entropy counting.] With this simple dependence of the partition function on the average number of one-particle states that are populated at temperature T, it is straightforward to compute the average energy and its corresponding relative fluctuation, which take the form ⟨ E ⟩ ∼ N_TOT T , Δ E⟨ E ⟩ ∼ 1√( N_TOT) , where we have used eqs. (<ref>) and (<ref>). Finally, let us compute the entropy, which reads S=log (𝒵) + T ∂log (𝒵)∂ T∼ N_TOT , where in the last step we have used that log (𝒵) ∼log N_TOT∼ T^p, as is the case for the spectrum in eq. (<ref>). Let us now comment on the different asymptotic behaviours of the entropy (i.e. N_TOT) and the energy for the different hierarchies of control parameters that we will consider throughout the rest of the paper ∘Thermodynamic limit: ≥ T ≫ 1/L In this case, the entropy grows with the volume. All the species with masses of order m_n≲ T effectively contribute to the entropy with a contribution proportional to the volume, i.e. (TL)^d-1. The energy is mainly dominated by the kinetic energy of the species with masses below T, and thus we obtain E ∼ N_TOT T. These configurations are the ones that are constrained by the CEB and the gravitational collapse bound, as explained in section <ref>, and this imposes a stricter upper bound on the maximum allowed temperature, which cannot be arbitrarily close to (c.f. eq. (<ref>). ∘Frozen momentum limit: ≫ T ≃ 1/L In this limit, each of the species contributing to N_T is effectively frozen in d-dimensions, since the number of available d-dimensional momentum states that fit in the box of size L and remain below T is of order one. Thus, N_TOT≃ N_T and the entropy comes mainly from the number of species with masses m_n≲ T. The average energy takes the form E ∼ N_T T, which we can also rewrite as E ∼ ∑_n=1^N_T m_n using a parameterization for the mass spectrum as the one in eq. (<ref>), together with the definition of N_T given in (<ref>). Note that this resembles the species energy introduced in <cit.>, but with contributions from all species below T, instead of . These configurations do not collapse gravitationally nor exceed any holographic entropy bounds since T<, due to the fact that the volume contribution to the entropy and the energy is effectively frozen by setting the lowest possible temperature, namely T≃ 1/L. ∘Species scale limit: ≃ T≃ 1/L Finally, we can consider the previous case, where the d-dimensional degrees of freedom are effectively frozen, at the same time as we increase the temperature (and consequently decrease the size of the box) until ≃ T≃ 1/L. As argued in section <ref>, this is the limiting case in which the gravitational collapse bound and the CEB coincide, as can be seen by the fact that the entropy scales like S∼∼ ( L)^d-2, which recovers the area law for the entropy in the limit in which the system would collapse to the smallest possible black hole in the EFT, namely L≃^-1. 
The average energy of the system, which takes the form E∼, can also be re-expressed as the species energy <cit.>, E ∼ ∑_n=1^ m_n, which from this point if view it is nothing but the average energy of the system of species in the canonical ensemble, since all of them have average order-one occupation number. §.§.§ One-particle state domination We now consider the opposite regime, in which the partition function is dominated by configurations in which the total energy of the system is mainly accounted for by a single particle with E≃ m_n. In order to compute thermodynamic quantities we need the logarithm of 𝒵, which in this case can be approximated by 𝒵≃∑_n=1^∞ e^-m_n/T d_n, where d_n is the degeneracy at each level. The validity of our thermodynamic description should at least have a convergent partition function, otherwise we would have that our original degrees of freedom are not an appropriate description of the system. This motivates three different regimes in terms of the degeneracy of the levels. On the one hand, for subexponential degeneracies we have that 𝒵_1 converges for every temperature since the exponential suppression makes the contributions from heavy one-particle states subdominant. In this case, 𝒵≫𝒵_1 and (<ref>) is the right approximation for the partition function, instead of (<ref>). For completeness, let us mention that this is also the relevant regime in the presence of an exponential degeneracy whenever the temperature is well below the Hagedorn temperature, such that only the massless modes are active and the exponential degeneracy is effectively invisible. On the other hand, if the degeneracy is exponential, such that d_n≃ e^m_n/T_H,[The degeneracy can, and in general does, include multiplicative subexponential contibutions such as d_n≃(m_n/T_H)^α e^m_n/T_H, with α≥ 0. These may differ between the one-particle and the multi-particle density of states, but the exponential piece remains the same and is the one that sets the maximum temperature for the canonical partition function to converge, T_H. ] the partition function takes the form 𝒵≃∑_n=1^∞e^-m_n(1/T-1/T_H) , and only converges for T<T_H, where T_H is the Hagedorn temperature <cit.>, which acts as a cutoff for the effective description. At temperatures near T_H, the total partition function, 𝒵, is dominated by configurations in which most of the energy is stored in the mass of a one-particle state (in the string picture this corresponds to the fact that these configurations are dominated by single, long strings) <cit.>. For such tower we can compute the average energy and its fluctuation using eqs.(<ref>) and (<ref>) to obtain ⟨ E ⟩∼T_HT_H-T T ≡ N_T T, Δ E∼⟨ E ⟩ , while the entropy is given by S∼ N_T. Here, we have used the (leading contribution to the) entropy to define N_T, since unlike in the polynomially degenerate case, it does not directly correspond to the number of species with masses below T. Instead, in this case, the majority of states (or species) that contribute to the partition function at temperature T (near, but below T_H) have masses above T, but they still contribute due to the large degeneracy. Thus, it is not possible to give an intuitive definition from comparing T with m_n directly. However, given that in the familiar cases the entropy gives the correct notion of particle (or species) number, we can use it as a definition in the current case. 
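The Hagedorn-regime relations quoted above are straightforward to verify numerically. The sketch below (our addition; toy string units with m_n = n and T_H = 1) builds the canonical log Z from the single-string sum, differentiates it numerically, and recovers ⟨E⟩ ≃ T T_H/(T_H - T) together with S ≃ E/T as T approaches T_H.

```python
import numpy as np

# Sketch (our addition): one-particle-dominated (Hagedorn) thermodynamics,
# Z = sum_n exp[-m_n (1/T - 1/T_H)] with toy masses m_n = n and T_H = 1.
T_H = 1.0
n = np.arange(1, 200001, dtype=float)

def logZ(T):
    x = 1.0 / T - 1.0 / T_H
    return np.log(np.exp(-n * x).sum())

for T in (0.90, 0.99, 0.999):
    dT = 1e-6
    E = T**2 * (logZ(T + dT) - logZ(T - dT)) / (2 * dT)   # E = T^2 d(log Z)/dT
    S = E / T + logZ(T)
    print(f"T = {T}:  E = {E:8.1f}   T*T_H/(T_H - T) = {T * T_H / (T_H - T):8.1f}   S = {S:8.1f}")
```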
Crucially, we have not assumed any particular scaling between the (average) energy and the particle number, but we obtain the same as in the polynomially degenerate case, which is pivotal for the validity of our arguments. In fact, this can be traced back to the fact that from the two contributions to the canonical entropy, namely the one proportional to E/T and the one proportional to log( 𝒵), the former is parametrically dominant in the exponentially degenerate case, whereas the two of them are parametrically equal in the polynomially degenerate one, yielding the S∼ E/T scaling. Furthermore, in contrast to a polynomial spectrum, we note that the relative energy fluctuations do not decrease as we increase T (or equivalently, N_T). This shows that, even though it can be analyzed in the canonical ensemble, the microcanonical analysis is not recovered in the high-entropy limit <cit.>, which we study separately in the next subsection (see also <cit.>). Finally, let us comment on the limit when T→. Our strategy is to equate N_T to , which we define as 1/ ^2 for the case of a string tower, and solve for , namely lim_T→N_T=N_sp=(M_Pl,d)^d-2 = 1^2 . Near T_H, or equivalently for a large N_T, the leading term for the species scale is given by ∼ T_H , where the subleading additive contribution vanishes in the → 0 limit, and it is compared wit the general form of the species scale in section <ref> (c.f. eq. (<ref>)). This matches the expected behavior for a string tower, where in the weak coupling limit ∼ T_H≃, and it is clear that the species scale is capturing the maximum possible temperature that can be described by the EFT, given by the maximum temperature for which the canonical partition function is convergent. §.§.§ Super-exponential degeneracy and divergence of the canonical partition function Finally, we note that for d_n growing parametrically faster than an exponential (i.e. for d_n e^-m_n/T_0→∞ as n→∞), the partition function does not converge for any T>0. This allows us to rule out towers with super-exponential degeneracy, since systems with such high degeneracies lead to a divergent canonical partition function and a breakdown of our original description. Similarly, the average energy diverges, suggesting a mismatch between the canonical and microcanonical descriptions. In this regard, let us mention that if one decided to introduce by hand a UV regulator in the masses that contribute to the canonical partition function by cutting off the tower at a maximum N, which we could then parameterize as 𝒵≃∑_n=1^Ne^-m_n/T+ (m_n/T_0)^α , for some α>1 (such that the spectrum is superexponential). The introduction of a cutoff then implies finite average energy, and a matching microcanonical description. From the partition function it is plain to see that it would be dominated by the last term only, this due to the fact that the degeneracy grows so fast that we have d_n≪ d_n+1, and thus 𝒵≃ e^-m_N/T+ (m_N/T_0)^α=N_T e^-m_N/T . This spectrum then corresponds to N_T degenerate states with mass m_N, and as we disucss in detail in section <ref>, it can be thought of as an effective parameterization of the exponentially degenerate case if ≃ m_N. §.§ Microcanonical ensemble We have seen that for towers of states with polynomial degeneracy the canonical ensemble yields relative fluctuations on the energy that asymptote to zero as the entropy (or, equivalently, the temperature and N_TOT) is increased, effectively recovering the microcanonical ensemble in that limit. 
For exponentially degenerate towers, however, it is known that as we approach the limiting Hagedorn temperature the relative fluctuations do not decrease (c.f. eq. (<ref>)) and thus the canonical ensemble does not reduce to the microcanonical one <cit.>. Independently of the size of the fluctuations, we have obtained that in the relevant limits the thermodynamic quantities take the following form E≃ N_T T , S≃ N_T, which converge to the constitutive relations of species thermodynamics in the limit T→. With this in mind, it is illustrative to study the problem in the strict limit =T=1/L in the microcanonical ensemble, in which we expect to recover the same constitutive relations for the energy and entropy <cit.>. To do so, we consider a tower of particles with masses m_n≤ T==1/L and follow <cit.> to define the number[Strictly speaking, ℳ should be an integer, but since for a large number of species we have that E≫ we can neglect the subtleties associated to this.] ℳ=E/ . Where E is the total energy, and is the mass of the lightest particle in the tower. The problem is now to find all possible combinations with total energy E=ℳ, from summing masses m_n= f(n) (with f(n) a function of n, c.f. (<ref>)), with degeneracy d_n and below the cutoff scale . Note that this approach is slightly different from the canonical one in the sense that we are now considering only contributions from species with masses below the given T=, whereas in the canonical ensemble the particles with masses above T could still be the dominant contribution for sufficiently high degeneracies (i.e. exponential or higher).[This is related to the logarithmic ambiguity in the definition of the species scale for stringy towers that arises when counting species as opposed to the entropy of the minimal black hole of size ∼, as discussed in e.g.<cit.>.] That is, we consider only n≤, with the maximum level defined via = f() , =∑_n=1^ d_n , where we are also giving its relation to the number of species below . The number of such combinations, that we denote Ω(ℳ,), is known to have generating function <cit.> Z(q) = ∑_M Ω(ℳ,) q^ℳ=∏_n≤1( 1-q^f(n)) ^d_n . The desired combination can then be obtained by contour integration as Ω(ℳ,) = ∮ dqq^ℳ+1 Z(q). For many species below the cutoff it is reasonable to assume ℳ≫, such that the poles at q=0 do not contribute to the integral. Following the procedure in <cit.> one finds S= +log(ℳ)-∑_n^ d_nlogf(n)+𝒪(logℳ) . The temperature is then, for a general tower at T≃ T=(∂ S∂ E)^-1= EN_T= E= . We also define the species chemical potential as μ≡ - ∂ S∂, such that μ=0 implies maximum entropy and an equilibrium configuration. We can estimate the chemical potential for a general tower by first approximating the third term in (<ref>) by an integral and using ∂∂∫^N_max_1 dn d_nlog( f(n) )≃1d_N_max∂∂ N_max∫^N_max_1 dn d_nlog[f(n)]=logf(N_max). The chemical potential then takes the form μ≃-logℳN_sp f_N_max, and the entropy will be maximized for configurations satisfying ℳ≃ N_sp f(N_max) , S≃ N_sp+ ∑_n=1^N_max d_n logf(N_max)f(n) . In general, these configurations of maximal entropy are then bounded as N_sp≤ S ≤ N_sp(1+log()), where we have used T/=f_N/f_1 and assumed that f_n+1≥ f_n. We can note however, that for sufficiently high degeneracy the sum will be dominated by the n=N_max limit, such that the contribution of the sum vanishes and the entropy is S≃ N_sp. This is the case for exponential (and higher) degeneracies, up to small (i.e. logarithmic) corrections, where ∼ and the upper bound is saturated. 
In contrast, towers with polynomial degeneracy can never saturate this bound. §.§ Appropriate Towers and species entropies The goal of this section is to systematically study different kinds of towers in the canonical ensemble and determine which of them can give rise to the correct scaling of the entropy and energy in the limit T→ to reproduce the ones of the corresponding minimal black hole (we elaborate more on this picture, relating this behaviour to the correspondence between black holes and the thermal ensemble, in section <ref>). To that end, and inspired by the two cases studied in section <ref> (which correspond to the only two types of consistent towers in Quantum Gravity according to the Emergent String Conjecture <cit.>, namely KK-like and weakly coupled string towers) we analyze more general spectra and are able to rule out the ones that do not correspond to the ones previously analyzed. Our starting point is to consider a tower of states with masses m_n, such that when the states are thermalized at the maximum allowed temperature, T≃ = / ^1/d-2, precisely of them effectively contribute to the canonical partition function. Motivated by the results of the previous sections, we start from the following form for the energy of such configurations E = N_T T , with N_T is the number of states that contribute at temperature T (e.g. the number of states with masses m_n ≲ T in the case of a tower with polynomial degeneracy). In general, recall that we can relate the energy, the entropy and the partition function via E=T^2∂∂ Tlog(𝒵) , S=∂∂ T(T log(𝒵)) . So that for the aforementioned form of the energy, the entropy and partition function satisfy ∂_Tlog𝒵=N_TT , S=N_T+log(𝒵) . Then, a species distribution with energy (<ref>) is appropriate if it fulfills the following two conditions i.The number of kinematically available species does not decrease as we increase the temperature, ∂ N_T/∂ T≥ 0. ii.The entropy in the limit in which the momentum states available per species are of order one (i.e. T≃ 1/L for particles in a box) is given by S≃ N_T (with N_T≫ 1), the number of species that are active at such temperature, such that in the limit T→ the minimal black hole entropy is recovered. In the rest of this section, we start by first considering what kinds of towers can mimic different parameterizations of the entropy that yield the aforementioned scaling. We then proceed to the systematic study of different kinds of towers following the logic outlined above, and concluding whether they are appropriate towers that can give rise to the right scaling of entropy and energy (particularly in the limit T→) or not. We also revisit the cases of the KK-like and string-like towers following this logic to make the discussion more complete and easy to follow, even though we know from the get go that they fulfill the right properties as explained in section <ref>. We perform this analysis in the canonical ensemble, by considering thermal configurations of species at temperature T, but equivalent computations in the microcanonical ensemble are found in Appendix <ref>. §.§.§ Corrections to the entropy and structure of the towers We begin by exploring what kind of modifications in the structure of the tower (i.e. its degeneracies, d_n) can account for different multiplicative or additive corrections for the entropy. First, we consider the following form for the entropy and the partition function S=c N_T , log(𝒵)N_T=c-1 , with c an arbitrary, order-one constant. 
Note, in particular, that this kind of multiplicative correction does not change the parametric dependence of the entropy that we argued to be the appropriate one. To be precise, since we have not kept track of 𝒪(1) factors so far, a multiplicative correction of this type might seem irrelevant. However, the point that we want to emphasize is that in this context it encapsulates the fact that both contributions to the entropy in (<ref>) are parametrically the same, i.e. N_T∼log(𝒵), and this implies that N_T grows polynomially in T, as we show in the following. This form of the entropy corresponds to a number of states N_T=(TT_0)^1c-1. Then, for c>1 we have that the number of states entering the theory increases with the temperature, and both conditions are satisfied. Here and in the rest of this paper, we consider a tower of states with mass scaling m_n = n^1/p , and we can then easily compute the degeneracy d_n=n^p/c-1-1. In order to reproduce S≃ N_T, the contribution coming from log(𝒵) has to be, at most, of the same order as N_T. Thus, we conclude that polynomial towers lead to log(𝒵)∼ N_T. Let us now turn to the structure of towers that produce a subdominant contribution coming from log(𝒵), namely log(𝒵)≪ N_T for large N_T, such that the leading form of the entropy is still the appropriate one and we recover the right result for a large number of species. We first consider additive, subleading, polynomial corrections to the entropy, which we parametrize as S=N_T(1+a/N_T^k), log(𝒵)=a N_T^1-k , with k>0 and a=𝒪(1). Here we have assumed N_T continuous and differentiable in the range T>T_0. The number of states takes the following form N_T≃1log(T/T_0)^1/k , and since N_T decreases with the temperature, it is clear that the first condition is not satisfied. This suggests that such polynomially suppressed corrections do not come from appropriate towers of states. In fact, one can check that any correction with polynomial or super-polynomial suppression will violate this first condition, so we do not consider them any further. We can then consider corrections to the entropy with sub-polynomial suppression, namely S=N_T(1+a/log(N_T)) , log𝒵 = a N_Tlog(N_T) . The corresponding number of states and degeneracy are N_T≃ e^T^1/a, d_n≃ e^n^1/(a p), such that both conditions are satisfied. In general, corrections to the entropy of the form S=N_T(1+a/log^[k](N_T)) , log(𝒵)=a N_Tlog^[k](N_T), with log^[k](x)≡logloglog..._(k times)...log (x), also lead to appropriate towers with N_T≃expexpexp... _(k times)...exp(T^1/p) , d_n ≃expexpexp... _(k times)...exp(n^1/(a p)). Note that polynomial and exponential degeneracies are special cases of (<ref>), with k=0 and k=1, respectively. Furthermore, as mentioned in section <ref>, super-exponential towers (i.e. k>1) have a divergent canonical partition function for any non-vanishing temperature, so they cannot represent the fundamental, weakly coupled degrees of freedom in the theory, and we also do not consider them as appropriate. §.§.§ Tower with polynomial degeneracy We now turn to the systematic study of different spectra for the towers. Let us begin by considering a tower that we know arises in Quantum Gravity setups, namely one with the following mass spectrum and density of states [This is equivalent to a tower with m_n = n , d_n=n^p-1]: m_n=n^1/p , d_n=1 . Such a tower corresponds to Kaluza-Klein compactification on an isotropic p-dimensional manifold, e.g. a torus 𝒯^p, and at temperature T there are N_T=(T)^p available species.
The structure of such tower is shown in the left side of Fig. <ref>. The energy is then given by E=N_T T=T^p+1^p , and using (<ref>) we can compute the corresponding partition function and entropy log (𝒵)=(T) 1p= N_Tp , S=T^p^pp+1p=N_Tp+1p . Using eq. (<ref>) we can identify c=p+1p, and we obtain that the tower fulfills the requirements presented above and can be considered as an appropriate tower, as expected for a KK-like tower. In the high-temperature limit T≃ we have S=N_spp+1p=()^pp+1p . Also, recall that the string limit is recovered in the limit p→∞, such that =, S=N_sp . In this limit one recovers the free string entropy, as the UV cutoff approaches the mass of the tower =, S=E, log𝒵N_T≃ 0 , and the Hagedorn temperature can be defined as T_H≃. §.§.§ Negative p and black holes As a side comment, let us note that the family of solutions in eq. (<ref>) can also describe d-dimensional black holes if one considers negative values for p of the form p=-(d-2) , for d>3. From here we obtain the following form for the entropy (and partition function) S=d-3d-2T^-(d-2)∝ E^d-2/d-3 , log𝒵N_T=-1d-2 , which are the expected relations for a black hole with ADM mass M=E in d dimensions. In summary, we can classify the three different regimes in terms of the exponent p as [ 1≤ p<∞ KK tower,; |p|=∞ String tower/highly excited string,; -∞<p≤-1 Black hole microstates. ] In accordance with our prescription, the black hole microstates cannot be considered as a fundamental, weakly-coupled, tower of states, as the microstate count decreases with the temperature. We can also reject 0<p<1, which effectively parameterizes sub polynomial degeneracy, since that implies S≫ N_T, while -1<p<0 implies S<0. The case of p=0 as parameterized here involves an infinite number of states siting at m_n=, so we consider it separately in the following. §.§.§ Tower with fixed mass We consider now a "tower" with the following spectrum: m_n=n , d_n=δ_n,1 N . This corresponds to N states accumulating at a mass . One could think of this, for example, as a situation with ∼ N non interacting copies of the Standard Model <cit.>.[Additionally, as mentioned in section <ref>, one can also interpret this as towers with a super-exponential degeneracy in the spectrum and maximum level given by n. Similarly, towers with sub-polynomial degeneracy d_n≫ d_n-1 will be dominated by their first level, and behave like fixed mass spectra.] This is shown in the right side of Fig. <ref>. For T≥ the energy is given by E=N T , and the partition function and the entropy then take the form log(𝒵)=N log(T) , S=N{ 1+log(T) } . For T≃ we find S=N{ 1+log( ) } . As such, states accumulating around a single mass do not give the correct dependence between entropy and number of species, as they present large corrections for T≫. Remarkably, however, the correct entropy can be recovered for ≃, corresponding to the string limit.[Similarly, this suggests that a super-exponential tower accompanied by a UV cutoff effectively parameterizes a string-like tower.] Away from this limit, there is an obstruction to increasing the number of species above N_crit=(M_Pl)^d-2 , since for N>N_crit the EFT description including any state of the tower would be invalid, as the lightest one would already be above the UV cutoff. 
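A short numerical check of these relations can be made by integrating ∂_T log(𝒵)=N_T/T (which follows from E=N_T T) and evaluating S=N_T+log(𝒵) at T=Λ for the two spectra just discussed. This is only a sketch of the manipulations above, in Python, with toy values m=1 and Λ=50 chosen arbitrarily and all 𝒪(1) factors dropped:

```python
import numpy as np

def canonical_entropy(N_of_T, m, T_max, num=200001):
    """Integrate d(log Z)/dT = N_T/T  (from E = N_T T together with E = T^2 d(log Z)/dT)
    between T = m and T = T_max, and return (N_T, log Z, S = N_T + log Z) at T_max."""
    T = np.linspace(m, T_max, num)
    NT = N_of_T(T)
    logZ = np.trapz(NT / T, T)
    return NT[-1], logZ, NT[-1] + logZ

m, Lambda = 1.0, 50.0                        # toy units: lightest mass and UV cutoff

# KK-like tower, N_T = (T/m)^p:  expect S/N_T -> (p+1)/p at T = Lambda
for p in (1, 2, 3):
    NT, logZ, S = canonical_entropy(lambda T: (T / m) ** p, m, Lambda)
    print(f"KK-like, p={p}:   S/N_T = {S / NT:.3f}   [(p+1)/p = {(p + 1) / p:.3f}]")

# N states of fixed mass m:  expect S/N = 1 + log(T/m), a large correction for T = Lambda >> m
N = 1000.0
NT, logZ, S = canonical_entropy(lambda T: N * np.ones_like(T), m, Lambda)
print(f"fixed mass:       S/N   = {S / NT:.3f}   [1 + log(Lambda/m) = {1 + np.log(Lambda / m):.3f}]")
```

The KK-like tower reproduces S≃ N_T(p+1)/p (up to 1/N_T corrections), while the fixed-mass case carries the large logarithmic enhancement S≃ N(1+log(Λ/m)) discussed above.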
Note that this is deeply related with the fact that the number of species must not only increase with T, but also vary in an appropriate way with the moduli whose vev controls the mass scales of the towers, as otherwise we could not have the right relation between the entropy and the mass as we move towards infinite distance limits in moduli space. We will elaborate further on this idea and its connection to the black hole - tower correspondence in section <ref>. §.§.§ Tower with exponential degeneracy Lastly, let us consider a tower characterized by the following spectrum: m_n=n^1/β , d_n = e^λ n^1/α , with 0<α<β<1 and λ>0, such that the canonical partition function converges for every temperature. As mentioned in section <ref>, the case α=β has to be treated separately, while α>β implies a divergent partition function for every T>0 so we discard it from the get go. The level of the tower for which the mass equals the temperature is N_max=(T)^β , and the number of species that contribute at such temperature (i.e. the ones with n≤ N_max can be computed from the sum N_T=∑_n=1^N_max d_n, which can be approximated by an integral for large N_max and yields N_T ≃ e^λ(T/)^β/α{αλ(T)^βα-1 /α+12λ ^2-α} . The energy can then be expressed as E=N_T T≃ N_T(logN_Tλ)^α/β , and the entropy and partition function then take the form log(𝒵)≃α/βN_T /log(N_T)≪ N_T , S≃ N_T (1+α/β/log(N_T))≃E(logN_T^1/λ)^α/β(1+α/β/logN_T) . In the large N_T limit the entropy is then proportional to the number of species at temperature T. We note this is nothing but eq. (<ref>), with a=α/β, corresponding to an appropriate tower. In the case α=β, which includes a tower of string excitations when also α=2, we saw in section <ref> that there is a competition of terms in the partition function 𝒵=∑_n=1^∞ e^- n^1/α(1/T-1/T_H), where T_H=λ, with λ some 𝒪(1) number that depends on the particular string theory under consideration. Convergence then requires T<T_H, and as a consequence there will be no light states in the tower as we had previously defined them. This occurs since the Hagedorn temperature is typically close to the mass of the tower ≃ T_H≳ T. Performing the sum (by approximating it by an integral) yields 𝒵≃(T-T_H)^-α. This reproduces the results of <cit.>, where the partition function for the single highly energetic string is computed from the thermal scalar path integral of strings in d-dimensions. Our results, from simple thermodynamic considerations are only strictly valid in the limit in which momentum is frozen in the non compact directions, namely d=0, but they match at leading order for any number of dimensions. The energy is then E = α T T_HT_H-T , while the free energy and the entropy are given by log(𝒵)= αlog(T T_H(T_H-T)) , S=α T_H/T_H-T+αlog(T T_H(T_H-T)) . In the high temperature limit, T_H-T≪ T_H, we recover the expected relations between thermodynamic quantities for highly degenerate spectra log(𝒵)S→ 0 , ET→ S . In previous cases we had a direct interpretation of N_T as the number of effectively light species, such that the exponential factor present in Boltzmann statistics was of 𝒪(1) for masses up to m≲ T and exponentially suppressed above. However, as remarked in section <ref>, for a spectrum with an exponential degeneracy this interpretation is no longer valid. Given this, and eq. (<ref>), we are motivated to define N_T as the entropy for T_H-T≪ T_H, so that N_T=1T_H-Tlim_T→ T_H{ S ( T_H-T) } = α T_HT_H-T . 
N_T and S then match near T=T_H, and N_T must be understood as the effective number of (gravitational) degrees of freedom that account for the entropy of the system. In terms of N_T the energy and entropy take the expected form E=N_T T, S=N_T(1+log(N_T)N_T). We note this can in principle be identified with eq. (<ref>) with k=1, but in the opposite regime of validity, for T<T_0, since |logT/T_0|≃ 1-T/T_0. This is in accordance to out discussion on appropriate towers since near the singularity N_T grows much faster than polynomially. We can also obtain the species scale by solving lim_T→N_T=N_sp=(M_Pl,d)^d-2 , near T_H and small string coupling, or equivalently for a large number of species. This yields ≃ T_H(1-α(T_H)^d-2) . Then, for a large but finite number of species we require <T_H≪ M_Pl,d, so that ≃ T_H. For the known string case, α=2, we can use the definition of the string coupling, g_s, to write this as =T_H(1-2λ^d-2 g_s^2). For Type II and Heterotic string theory the value of λ has been explicitly computed as<cit.> λ_Type II=1√(2), λ_Het=(1+1√(2))^-1, so that in e.g. d=4 we have 2λ^2=𝒪(1). This expression for the species scale is strictly smaller than the Hagedorn temperature, as the equality between the two only holds in the strictly weakly coupled limit in which g_s^2=1/N_sp=(/)^d-2=0. This is in agreement with the discussion of <cit.>, where it was argued that for a finite coupling gravitational effects cannot be neglected at high temperatures, and the system necessarily undergoes a phase transition before T=T_H. Finally we would like to comment that the g_s^2 correction to also agrees with the leading order correction obtained from the Type II gravitational EFT <cit.>. All in all, by considering systems of species at equilibrium at temperature T→, we have been able to rule out towers with different spectra. First, all towers with super-exponential degeneracies, are ruled out from the perspective of yielding a divergent canonical partition function for any finite T (see also <cit.> for an alternative analysis of similar towers in the microcanonical ensemble). Furthermore, we can also rule out towers of N species with fixed mass, unless this mass is precisely of order , which effectively resembles a tower of string excitations. Finally, we can also rule out towers with subpolynomial spectra, since for fixed energy (<ref>) their entropy is too large S≫ N_T. Consequently, only towers with polynomial or exponential degeneracy, which resemble KK-like and string-like towers, respectively, seem to give rise to the right form of the energy and entropy to match those of black holes as T→, in agreement with the Emergent String Conjecture <cit.>, and recent bottom-up arguments <cit.>. § ENTROPIES IN LOWER AND HIGHER DIMENSIONS In this section we briefly highlight some important aspects of black holes and black branes in the presence of p extra dimensions. In particular, we focus on the correspondence and differences between what we denote the EFT black hole, which for <r we identify as the black brane solution wrapping the extra dimensions of size r, and the higher-dimensional black hole solution, which for <r is a fully localized solution in the higher-dimensional theory, namely a (d+p)-dimensional spherical black hole with <r in the compact and non-compact dimensions §.§ The EFT entropy Recall that for a given an EFT configuration at finite temperature, T<Λ_UV, in a box of volume L^d-1, and with a finite (𝒪(1)) light species, the total entropy takes the form S_EFT,d≃ T^d-1L^d-1. 
We have also seen in section <ref> that in the presence of N_T species contributing to the thermal ensemble at temperature T the entropy (in the thermodynamic limit T≫ L) is given by S≃ N_T T^d-1L^d-1. Then, for a KK tower originating from the presence of p extra dimensions, which we assume to be isotropic for simplicity, and of typical size r=1/, we can rewrite the entropy as S_EFT,p,D≃ T^d+p-1r^p L^d-1 . Note that this is nothing but the entropy of a D=d+p dimensional EFT in a box of size L along the d non-compact dimensions, and r along the p compact ones, such that the it completely wraps the compact dimensions. From this perspective, the scale at which (<ref>) and (<ref>) meet T≃, is simply the energy scale at which the EFT starts to perceive the modes in the tower, and they can be in thermal equilibrium. §.§ Black hole vs. black brane entropy A d-dimensional black hole of radius has entropy S_BH,d≃^d-2^d-2=^d-2/d-3^-d-2/d-3. Since it is a purely gravitational object, it is expected that as one decreases its radius the description near scales L≃ r must change in order to reflect the presence of extra dimensions. For <r one can first consider a D dimensional black hole localized in the extra dimensions, namely one with S_BH,D≃^D-2M_Pl,D^D-2= ^D-2/D-3r^p/D-3M_Pl,d^-d-2/D-3. On the other hand, a black object with radius along the d non-compact and wrapping the whole internal manifold can be seen as a black brane solution <cit.>, and the total surface gains a contribution from the internal volume, r^p. The entropy in this case reads S_BB,p,D≃ R_BB^d-2 r^p M_Pl,D^D-2=M_BB^d-2/d-3M_Pl,d^-d-2/d-3. As first noticed by Gregory and Laflamme <cit.>, the black brane is unstable for R_BB<r, as the higher dimensional black hole with the same mass, =M_BB, has a greater entropy, as displayed in Fig. <ref>. Using the relation between Planck masses ^d-2= r^p M_Pl,D^D-2 , one can see that the lower-dimensional black hole solution and the black p-brane one have parametrically the same entropy. We can compare EFT and black hole entropies in the specific limit of ≃ 1/T≃ 1/Λ_UV≃ 1/M_Pl,D and we see that the black brane (and the lower dimensional black hole) can be identified with the EFT that sees the presence of the species of the tower. In contrast, a higher dimensional black hole that does not wrap the internal directions contains no information about the tower. This is to be expected, as the fully localized solution along the extra dimensions can only see the localized modes along the extra dimensions, as opposed to the full tower of KK modes including all momentum modes with 1/r≤ T. Furthermore, from an EFT perspective, it is natural to remain agnostic about the details of the extra dimensions. From the black solution point of view, this idea can be interpreted as maximizing the entropy associated to the extra dimensions by wrapping them, and thus being sensitive to all states in the KK tower up to temperatures T=1/≥ 1/r. § THE BLACK HOLE - TOWER CORRESPONDENCE: BLACK HOLE ENTROPY FROM NON-GRAVITATIONAL THERMODYNAMICS This section is devoted to the analysis of the entropy of Schwarzschild black holes in EFTs from the microstate counting of non-gravitational systems that can be obtained from following the adiabats of such black holes towards infinite distance limits. 
The following discussion can be understood as a generalization of the celebrated Black Hole - String correspondence[In this work we refer to the correspondence, as opposed to the transition, to emphasize the fact that we are not studying the details of the transition as a dynamical process, but focusing instead on the correspondence of the entropies of the black hole system and the tower system to obtain a microscopic interpretation of the entropy of the former in terms of the microstate counting of the latter.] <cit.> (see also <cit.> for recent discussions on the topic), which accounted for the entropy of general Schwarzschild black holes (up to numerical prefactors, but crucially matching the area dependence) by the microscopic counting of the microstates associated to a free string in the limit of vanishing string coupling. We begin with a review of the Black Hole - String correspondence, following mainly <cit.>, and then proceed to explain the generalization arising in the presence of general infinite distance limits. §.§ The Black Hole - String Correspondence In its crudest incarnation, the key idea behind the Black Hole - String correspondence is to consider a Schwarzschild black hole solution at some finite string coupling, and follow it adiabatically as the string coupling is decreased. Following the black hole along this adiabatic process (through which the entropy stays constant), it turns out that at some point the gravitational interaction becomes too weak for the black hole to remain a black hole. At that point, it transitions to a long, highly tangled string[In principle it could transition to a gas of strings, but the main contribution to the entropy for such a gas turns out to be a single, long string, as discussed in section <ref>.] that one can then also follow along the corresponding adiabatic trajectory while decreasing the string coupling until a point at which the entropy can actually be computed as the entropy of the free string. Thus, the black hole entropy of any Schwarzschild black hole (at finite string coupling) can actually be understood from microstate counting of a free string. We now proceed to describe this process more quantitatively, and comment on the limitations of the picture along the way. First, let us recall some useful facts about black holes and strings. To begin with, the Planck and string lengths are related via the d-dimensional dilaton, , as follows ^d-2 = g_s,d^2 ^d-2 . Furthermore, the relation between the mass and radius of a black hole in d-dimensions is given by ∼ ^d-3^d-2 ∼ ^d-3^2 ^d-2 , and its entropy, which is proportional to the area in Planck units, takes the form ∼ ()^d-2 ∼ ( )^d-2/d-3 ∼ ^2/d-3( )^d-2/d-3 , where we have expressed all quantities both in Planck and string units for later convenience using (<ref>). Additionally, if we consider a long, highly excited, free string of length , its mass and entropy are given by (see e.g. <cit.>)[A quick and intuitive way to understand these formulae is to consider the string as given by a random walk of N steps in a d-dimensional lattice of size given by . The length and mass of all such configurations would be given by adding the N steps, each corresponding to , producing ∼ N and ∼ N/. Furthermore, the number of possible configurations grows exponentially with N, thus giving an entropy ∼ N.] M_str ∼ ^2 , ∼ ∼ M_str . Now, let us start by considering a black hole with mass M_0 (in string units) for a value of the string coupling given by g_0.
The constant entropy lines for the black hole solution in the M-^-1 plane (see Fig. <ref>) are given by ∼ g_0^2/d-2 M_0 ^2/d-2 . Furthermore, as we go towards → 0, the radius of the black hole solution, measured in string units, decreases. Hence, we can start with a very large, semiclassical black hole, at g_0, but as we adiabatically decrease the string coupling it will always reach a point at which ∼, where the gravitational interaction would become so weak that it would not be able to keep the black hole together and the transition to a string (or gas of strings) should occur. The transition region defined by ∼ takes the following form in the M-^-1 plane M ∼1^2 , and the values of the aforementioned relevant quantities at the transition region are g_∗,d∼( g_0^2/d-2 M_0 )^-d-2/2(d-3) , M_∗ ∼ 1g_∗,d^2 . In fact, the entropy (which is constant along the adiabatic trajectory) can be written as S ∼ 1g_∗,d^2 ∼ M_∗,d , matching precisely the entropy of a string of mass ∼ M_∗,d (c.f. eq. (<ref>) ). Thus, at the transition point, namely when the radius of the black hole becomes of the order of the string length, the mass of the black hole coincides with the mass of the free string that would account for the same entropy. Even if the details regarding the exact transition are not fully clear, one can still follow the potential string solution that matches the entropy along its corresponding adiabatic trajectory towards arbitrarily weak coupling, where the microscopic understanding of the entropy of the free string is fully reliable. The constant entropy lines in the M-^-1 plane are the horizontal lines M_str ∼ M_∗,d . Therefore, for any (large) semiclassical black hole at fixed string coupling, following the adiabatic trajectory as the string coupling is reduced, there will always be a transition to a string that we can then follow to arbitrarily weak coupling to count the degrees of freedom that account for the entropy of the starting black hole. Let us remark that this is possible only because for every black hole solution, the constant entropy trajectories always intersect the transition region ∼, as otherwise we could not use the previous argument. Notice that this is at the core of our discussion in section <ref> regarding appropriate towers. This is the case since in this language a tower of string excitations can equivalently be identified as appropriate due to the fact that it gives rise to the right entropy and energy in the limit in which it should collapse to a black hole of size ∼, and moreover there is a solution giving the right behavior for any value of ≪ 1. Several remarks are in order. First, this method can successfully recover the area dependence of the entropy in Planck units, but not the order one prefactors, due to the fact that we do not have a clear picture about the details of the transition. This is to be contrasted with detailed computations including order one factors, such as the one in <cit.>, where supersymmetry and extremality allow for a detailed counting. However, note that this dependence is enough for our purposes, as we are interested in understanding the area dependence of the entropy in the presence of towers of species. In this regard, one could expect gravitational effects to be more relevant <cit.>, causing a departure from the free string picture near the transition point, for the cases in which the transition takes place at only moderately small coupling.
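To make the transition point concrete, the following sketch evaluates these relations for a few arbitrary sample values of (d, M_0, g_0), in string units and with all order-one prefactors dropped; it checks that the entropy of the initial black hole indeed coincides with the free-string entropy S_str∼ M_∗ at the point where M_∗∼ 1/g_∗^2:

```python
import numpy as np

def string_transition(M0, g0, d):
    """Black hole - string correspondence point (string units, all O(1) factors dropped).
    Adiabat: S ~ g^{2/(d-3)} M^{(d-2)/(d-3)} = const;  transition line: r_BH ~ l_s, i.e. M ~ 1/g^2."""
    S0 = g0 ** (2.0 / (d - 3)) * M0 ** ((d - 2.0) / (d - 3))           # initial black hole entropy
    g_star = (g0 ** (2.0 / (d - 2)) * M0) ** (-(d - 2.0) / (2 * (d - 3)))
    M_star = 1.0 / g_star ** 2                                         # mass at the transition
    return S0, g_star, M_star, M_star                                  # S_string ~ M_star (string units)

for d, M0, g0 in [(4, 1e8, 0.1), (5, 1e10, 0.3), (10, 1e12, 0.05)]:
    S0, g_star, M_star, S_str = string_transition(M0, g0, d)
    print(f"d={d:2d}:  S_BH = {S0:.3e},  g* = {g_star:.3e},  M* = {M_star:.3e},  S_string = {S_str:.3e}")
```

The agreement is exact here only because all 𝒪(1) factors have been dropped; as just emphasized, those factors are not under control near the transition.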
However, one would expect the approximation to be more and more precise if one starts with a sufficiently large black hole at g_0 such that the transition happens at arbitrarily small g_∗,d, and thus the area dependence of the entropy of an infinite number of black holes (those with masses M≥ M_0 at g_0) would also be more precisely accounted for by the free string computation. In any event, independently of numerical, order one prefactors that might depend on the details of the transition (represented by the black bubble in Fig. <ref>), the main message is that one can still follow the adiabat towards arbitrarily small to perform the counting of the states in the free string solution. §.§ Decompactification limit and KK towers Let us now try to generalize the aforementioned process to the case in which instead of exploring a weak string coupling point, we probe a decompactification limit of the EFT. We focus here on a situation in which the decompactification limit is not accompanied by an arbitrarily weak string coupling limit, such that the KK tower is asymptotically lighter than any of the string excitations, and the higher dimensional species scale is asymptotically the same as the higher dimensional Planck mass. From the top-down construction, this includes both the decompactification to 10-dimensional string theory at fixed g_s and to 11-dimensional M-theory. Following the same logic as for the black hole - string correspondence, we start with a large, semiclassical black hole of mass M_0 in a d-dimensional EFT, now obtained from compactification of a D-dimensional theory (i.e. D=d+p). We denote by 𝒱_p the volume of the p internal dimensions (measured in higher dimensional Planck units) and consider our starting point to be given by 𝒱_0^1/p ≪. The higher and lower dimensional Planck lengths are related as ^d-2=^d-2𝒱_p , and the black hole mass and entropy, given in eqs. (<ref>)-(<ref>), read as follows in D-dimensional Planck units ∼ ^d-3^d-2 𝒱_p , ∼ ( ^d-2 ^d-2𝒱_p )^1/d-3 . Analogously to the black hole - string transition case discussed above, we can now try to follow the constant entropy lines as we increase 𝒱_p, effectively reducing the intensity of d-dimensional gravity, c.f. (<ref>). Such constant entropy lines in the M-𝒱_p plane (see Fig. <ref>) are given by M ∼ M_0 𝒱_0^1/d-2 𝒱_p^1/d-2 , and it can be checked that as we follow them along the direction of increasing 𝒱_p, the radius of the corresponding black hole becomes smaller and smaller in D-dimensional Planck units. There are two interesting points along this trajectory. First, since we are decreasing the black hole radius at the same time as we make the internal volume larger, we will reach a point at which both sizes are comparable, namely ∼𝒱_p^1/p. From that point on, there is an ambiguity in the way we see our black hole solution in the higher dimensional theory, as discussed in section <ref>. In particular, we can continue with a D-dimensional black hole localized in the extra dimensions, or a black brane solution whose horizon is spherical in d dimensions but wraps the internal dimensions completely. In this section we are interested in the second possibility, since this is the one in which the tower associated to the decompactification limit, namely the KK modes, actually plays a role in accounting for the entropy.
In the former case, we could not continue along a constant entropy line while still reducing the radius of the horizon without including some extra weak coupling point, such as an additional weak string coupling limit, effectively recovering the picture presented by the black hole string transition. Furthermore, let us remark that choosing to follow the black brane trajectory is not in conflict with the idea that the localized black hole of equal mass is more entropic when ∼^-1, as recently highlighted in the Swampland context in <cit.>. At that point, we simply focus on the less entropic solution because it still wraps the extra dimensions, and it is thus the only one that allows us to probe the limit → in d dimensions while following constant entropy lines. Since we are not following the dynamical evolution of the black hole solutions, this is allowed. Let us then focus on the d-dimensional black hole that actually wraps the whole p-dimensional manifold even when it effectively becomes larger than the horizon, which from the higher dimensional point of view corresponds to the black brane. Incidentally, this morally looks like the more sensible thing to do from the lower dimensional EFT perspective, since being agnostic about the details coming from the internal dimensions should correspond to a maximum uncertainty about their details, which in this case nicely fits with the idea of the horizon hiding them. Thus, from this point one could consider a sort of black hole - black p-brane transition. In any event, this is still a semiclassical object in the theory and it can actually be checked that the area of its horizon, which accounts for the entropy, is still correctly described by eq. (<ref>) (c.f. eq. (<ref>)), and thus the constant entropy lines are also given by (<ref>). The second special point along this constant entropy trajectory turns out to be the most relevant one for our argument. This is the point at which the d-dimensional black hole horizon reaches its minimum allowed size, namely ∼ (assuming the D-dimensional UV cutoff is sufficiently well approximated by the D-dimensional Planck scale, as is the case unless an arbitrarily high number of species different from the KK ones were asymptotically lighter than this scale). In the M-𝒱_p plane this transition region is defined by M ∼ 𝒱_p . Once again, we see that for any d>3 this always intersects the constant entropy lines in the black hole region. This intersection occurs for the folowing values of the internal volume and the mass 𝒱_∗,p ∼ (M_0^d-2^d-2𝒱_0)^1/d-3 , M_∗ ∼ 𝒱_∗,p . Furthermore, we can recall that the number of species associated to the KK-modes corresponding to decompactification of p-dimensions is precisely given by ≃()^d-2=𝒱_p , where we have used that the species scale and the higher dimensional Planck scale coincide. More interestingly, the entropy can be rewritten as the number of species corresponding to the intersection point S ∼ 𝒱_∗,p ∼ N_∗ . This is precisely the species entropy, and as we have discussed in previous section, it can be understood as the entropy associated to N_∗ free species frozen in a box of size L ∼ at a value of the modulus 𝒱_∗,p. Thus, following the constant entropy trajectories for values larger than 𝒱_∗,p the black hole should transition to something else that near the transition region can be thought of as a system of N_∗ species in a box. 
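The same exercise can be repeated numerically for the decompactification limit. Assuming the conventions used here (D-dimensional Planck units, all order-one factors dropped, and arbitrary sample values of d, M_0 and 𝒱_0), the sketch below follows the adiabat M(𝒱_p)= M_0 (𝒱_p/𝒱_0)^1/(d-2) and checks that it meets the minimal-horizon line M∼𝒱_p precisely at the quoted 𝒱_∗, where the entropy equals N_∗:

```python
import numpy as np

def kk_transition(M0, V0, d):
    """Decompactification correspondence point (D-dim Planck units, O(1) factors dropped).
    Adiabat through (M0, V0):  M(V) = M0 * (V/V0)^{1/(d-2)}  (constant S ~ (M^{d-2}/V)^{1/(d-3)}).
    The minimal-horizon line is M ~ V;  the quoted intersection is V* = (M0^{d-2}/V0)^{1/(d-3)}."""
    S0 = M0 ** ((d - 2.0) / (d - 3)) * V0 ** (-1.0 / (d - 3))   # entropy of the initial d-dim black hole
    V_star = (M0 ** (d - 2.0) / V0) ** (1.0 / (d - 3))          # quoted transition volume
    M_star = M0 * (V_star / V0) ** (1.0 / (d - 2))              # mass on the adiabat at V = V_star
    return S0, V_star, M_star

for d, M0, V0 in [(4, 1e9, 1e2), (6, 1e12, 1e3)]:
    S0, V_star, M_star = kk_transition(M0, V0, d)
    print(f"d={d}:  S_BH = {S0:.3e},  V* = N* = {V_star:.3e},  M(V*) = {M_star:.3e}")
```

At the intersection one finds M(𝒱_∗)=𝒱_∗=N_∗=S_BH, as stated above.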
Once again, independently of the details of the transition, we can still follow the constant entropy lines towards larger 𝒱_p, while also avoiding gravitational collapse. Hence, we approach sufficiently weak d-dimensional gravity, and perform a counting of the entropy in the free theory near that point, where arbitrarily good control can be achieved, to account for the entropy of the original black hole of mass M_0 at 𝒱_0. In particular, for such a box of particles, we can now play with two different control parameters in order to compute the entropy, namely the size of the box and the temperature. In the black hole case these are related, so we can only tune one of them freely. Notice that in the case of the free string, for T≃ T_H≃, which is the temperature near which the transition region is probed, we have T=( ∂ S/ ∂ M )^-1 ∼ ^-1, so following the free string and including its excitation modes we effectively reduced the control parameters to one. For the case of the KK tower, we can explore two interesting limits after the transition region to compute the entropy in the 𝒱_p →∞ limit. These correspond precisely to two of the limits discussed in section <ref>, namely the one in which T≃ 1/L (dubbed frozen momentum limit) where the entropy is dominated by the number of species below T, and the one where T≫ 1/L but with T≲ (dubbed thermodynamic limit), such that the effective number of species is order one and the entropy is dominated by the momentum configurations in the box associated to the massless modes. The former case can be achieved by the scaling T ≃1L∼N_∗^1/p 𝒱_p^1/p . In such a case, we have S≃ N_T≃ N_∗, such that as we lower the temperature while exploring the limit 𝒱_p→∞, we keep the number of excited species constant and also make the size of the box larger in a way that the momentum modes of the species in d-dimensions are still frozen. The entropy is thus accounted for by the number of active species below T. Crucially, T≪^-1≪^-1 and also we can stay arbitrarily below the gravitational collapse bound (<ref>), such that neglecting the gravitational interactions is justified in this limit, as was the case in section <ref>. Thus, we can account for the entropy of the initial black hole in the EFT by computing the entropy associated to the free theory in the infinite distance limit. The entropy is therefore given by that of a tower of KK modes, c.f. eq. (<ref>), S∼(M 𝒱_p^1/p)^p/p+1, where the tower mass M is given by its total energy E. Constant entropy trajectories then follow M∼M_0 𝒱_0^1/p𝒱_p^1/p, such that at the transition point M=𝒱_p one obtains S=S_BH, L=. The latter limit can be obtained by the scaling T ≃N_∗^1/d-1L∼1 𝒱_p^1/p . In this case, at large 𝒱_p the temperature is below the lightest massive states in the tower, and thus the massless species dominate the entropy. In this case, however, their d-dimensional momentum states can be excited, due to the larger size of the box with respect to T, i.e. L≃ N_∗^1/d-1/T, and it is this entropy that accounts for the one of the original black hole. In any event, let us remark that even if in the limit 𝒱_p≫𝒱_∗,p one can tune T to recover either of these two behaviours, it is still necessary to connect this with the line T≃ 1/L, namely (<ref>), close to the transition region in order to match the entropy.
§.§ The Black Hole - Tower Correspondence Having studied the two kinds of towers associated to infinite distance limits, namely string oscillator modes and KK-like towers, in the context of a correspondence that allows us to account for the entropy of a semiclassical black hole at finite effective gravitational coupling from microstate counting in the free limit, we are now in a position to formulate this in terms of a general Black Hole - Tower Correspondence, that includes both cases. Following the previous cases, the general logic is to start with some black hole solution at finite value of the effective gravitational coupling, and study the constant entropy trajectories as we make that coupling smaler. This process reduces the radius of the corresponding black hole solution (in the right units, which we will identify with ), and at some point it reaches the minimum possible radius for a black hole in the EFT. Around that region, we expect some transition to take place, in which a high number of particles (the ones giving rise to the tower that relates the effective gravitational coupling and the species scale) take over the black hole description. Independently of the details of the actual transition, which occupy a great part of this paper (and also previous works related to species thermodynamics <cit.>), we must remark that by continuing along our adiabatic trajectory towards arbitrarily weak effective gravitational coupling we can account for the entropy of the original black hole from the usual thermodynamics of the corresponding gas of species in the gravity decoupling limit, in close analogy to the way in which the free string could account for the entropy of the original black hole in the black hole - string correspondence. The general picture is then as follows. We consider a d-dimensional EFT, where the d-dimensional Planck lenght is related to the species scale length as ^d-2=^d-2ϕ , with ϕ the modulus controlling the effective gravitational coupling in such a way that the latter decouples in the ϕ→∞ limit. One can then extrapolate the expressions for the black hole and tower entropies as ∼ 1ϕ^d-3(^d-2 ^d-2)^1/d-3, S_tower ∼ ϕ^1/p+1(M )^p/p+1 , where we are using the parameterization of the tower in which p encodes the number of dimensions that are being decompactified and recovers the stringy case for p→∞. In Figure <ref> we depict the black hole - tower transition in the M-ϕ plane for different initial conditions. The core idea is that, for any initial conditions, and independently of the value of p, one can always vary the modulus in order to reach the transition region. It is well-known that captures the higher dimensional Planck mass or the string length associated to the infinite distance limit that we probe as ϕ→∞, which recovers the cases described above upon identifying ϕ=𝒱_p , g_s^-2, respectively. It is thus straightforward to see that both the black hole - string correspondence and the black hole - KK tower correspondence described above can be reformulated in terms of and ϕ. The general idea is that indeed the transition from the black hole to the tower should take place in the region ∼, where we have S∼ϕ_∗∼ N_∗∼ M_∗ , and the dependence on the details of the tower is encoded via N_∗∼ϕ_∗, which can be computed from different towers. After the transition region, we can follow our constant entropy trajectories as we decouple the effective gravitational coupling as ϕ→∞ and compute the entropy of the initial black hole from the thermodynamics of the tower. 
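A compact numerical summary of the correspondence, assuming the parameterizations above (S_BH∼ϕ^-1/(d-3) M^(d-2)/(d-3) and S_tower∼ϕ^1/(p+1) M^p/(p+1), with M measured in units of the higher-dimensional fundamental scale, all 𝒪(1) factors dropped and sample values chosen arbitrarily), checks that the two branches meet on the minimal black hole line M∼ϕ with common entropy ϕ_∗=N_∗:

```python
import numpy as np

def correspondence_point(M0, phi0, d, p):
    """Black hole - tower correspondence point (O(1) factors dropped).
    BH branch:    S_BH    ~ phi^{-1/(d-3)} M^{(d-2)/(d-3)}
    tower branch: S_tower ~ phi^{ 1/(p+1)} M^{ p/(p+1)}
    Both are evaluated on the minimal black hole line M ~ phi, where the adiabat gives phi_* = S0."""
    S0 = phi0 ** (-1.0 / (d - 3)) * M0 ** ((d - 2.0) / (d - 3))   # entropy of the initial black hole
    phi_star = S0                                                 # since S_BH = phi on the line M = phi
    M_star = phi_star
    S_bh = phi_star ** (-1.0 / (d - 3)) * M_star ** ((d - 2.0) / (d - 3))
    S_tw = phi_star ** (1.0 / (p + 1)) * M_star ** (p / (p + 1.0))
    return S0, phi_star, S_bh, S_tw

for d, p, M0, phi0 in [(4, 1, 1e12, 1e3), (4, 1000, 1e12, 1e3), (6, 2, 1e15, 1e4)]:
    S0, phi_star, S_bh, S_tw = correspondence_point(M0, phi0, d, p)
    print(f"d={d}, p={p}:  S_initial = {S0:.3e},  phi* = N* = {phi_star:.3e},  "
          f"S_BH(phi*) = {S_bh:.3e},  S_tower(phi*) = {S_tw:.3e}")
```

Finite p reproduces the KK case above, while large p approximates the emergent-string limit.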
The two most interesting cases in which we can compute the entropy in the free theory are T ≃1L∼N_∗^1/p ϕ^1/p , and T ≃N_∗^1/d-1L∼1 ϕ^1/p , where we remark once again that this freedom to chose the scaling of the temperature to include a higher or lower number of species at temperature T is only available for ϕ≫ϕ_∗, but we need to match the trajectory to the former, with T≃ 1/L, near the correspondence region to keep the entropy constant. The first case is such that the entropy is accounted for by the number of species below T, whereas in the second it is accounted by the d-dimensional momentum configurations of the massless modes. It can be seen that the p→∞ case degenerates in the sense that both cases give rise to T∼^-1. In any case, it is important to remark that in these cases it is justified to neglect the gravitational interactions in the limit, since T≪≪ (for finite p) , T ∼≪ (for p→∞) . Thus, in the first case, both the d-dimensional and the D-dimensional gravitational interactions are negligible, and in the second the gravitational interaction is negligible and the stringy effects are taken into account by the free string even for T∼. It can also be checked that, in the limit, the energy of the configurations is below the gravitational collapse threshold for a configuration of size L. For the case T≃ 1/L, the requirement that E(L)≤ (=L) can be written as N_TL≲( L)^d-21L , where we have used eqs.(<ref>) and (<ref>). This can be brought to the form N_∗≤ϕ , which is parametrically saturated precisely at the transition point ϕ∼ϕ_∗∼ N_∗, and fulfilled for any larger value of ϕ, as expected. Similarly, the CEB is also fulfilled for any ϕ≥ϕ_∗ and is saturated at the correspondence point. Finally, to connect with the usual conventions in the swampland literature, where quantities are meaningfully measured in d-dimensional Planck units, let us remark that at the correspondence point, namely when ∼^-1, the mass of the black hole is M_∗≃()^3-d . so that we can always associate the mass scale ^3-d as the one given by the mass of the smallest black hole in the EFT (in planck units), whose corresponding temperature is T≃. § CONCLUSIONS The goal of this work is twofold. First, finding evidence for the constitutive relations of species thermodynamics <cit.>, namely S_sp=E_sp/T=, from considering the thermodynamics of a system of species in equilibrium at at a temperature T. This is also motivated by previous analysis about the maximum entropy of a system in the presence of species before gravitational collapse takes place, and the interplay between the Covariant Entropy Bound and the species scale <cit.>. Second, understanding which kinds of towers can give rise to such constitutive relations in the limit T→, understood as a key feature of Quantum Gravity that allows for an interpolation between the usual volume dependence of the entropy in field theory and the area behaviour for black holes, which can be understood in the bigger picture of allowing for a black hole - tower correspondence, in analogy with the black hole - string correspondence <cit.>. We have started by reviewing how the Covariant Entropy Bound, together with the gravitational collapse bound, allow for a system to (asymptotically) approach the maximum entropy in the presence of gravity by tuning the control parameters to effectively freeze the momentum degrees of freedom. In other words, we highlighted how if one tries to get as close as possible to the saturation of the Covariant Entropy Bound (i.e. 
S∼ A) without collapsing the thermodynamic system of species into a black hole, one is led to set T≃ 1/L (or T∼ T_H for a string tower), effectively freezing the momentum modes. Then, in the limit T→ both bounds coincide <cit.>, and one expects to recover S∼∼ (/)^d-2. This means that it is justified to take the thermodynamic system and try to obtain the area scaling of the entropy in the aforementioned limit from the standard thermodynamics of the system of species. With this in mind, we have analyzed a system of KK-like or string-like species (i.e. polynomial or exponential degeneracy in the spectrum, respectively) in thermodynamic equilibrium at temperature T (neglecting interactions, which is justified as long as we avoid gravitational collapse and study weakly coupled regimes, or equivalently, a large number of species). As a first consistency check, we find that both the energy and the entropy grow with the volume as S∼ E/T∼ (LT)^d-1 for low temperatures (compared with ), as expected in the usual field theory limit, in which the available momentum states dominate the entropy and the energy. In contrast, we derive that for high-enough temperatures, such that a large number of species, N_T, start to contribute to the canonical partition function, one recovers a scaling of the entropy and energy that goes like S∼ E/T∼ N_T, which we also show to converge to as T→. From a purely thermodynamical analysis in the canonical ensemble we have then found an interpolation between the usual volume scaling of the energy and entropy at low temperatures (where momentum states dominate) and their scaling with the number of active species at high temperatures (where the system only makes sense if the momentum modes are frozen, to avoid gravitational collapse), which converges to the constitutive relations of species thermodynamics in the T→ limit. Let us also briefly remark that the canonical ensemble analysis provides an interesting perspective on the species scale in the presence of a tower of string-like excitations. In particular, from a microcanonical analysis (and from perturbative definitions of the species scale), it is known that some subtleties arise from the apparent mismatch between the identification of the species scale as the string mass and the fact that no states in the tower lie below such scale. A self-consistent counting, though, is known to produce an extra multiplicative logarithmic factor that does not seem to match the higher-curvature computations <cit.> (see also <cit.> for some related discussions on the topic). Interpreting the species scale as the limiting temperature in the canonical ensemble, however, seems to provide a consistent picture in which, for low enough (polynomial-like) degeneracies, only states with masses below T can contribute, so that only those with masses below contribute to its computation. For high enough degeneracies (exponential-like), on the other hand, states above T can also contribute significantly to the partition function, providing a self-consistent picture in which the tower of string excitations above ∼ T_H can contribute and set ∼ (this turns out to be a key feature to take into account in e.g. cosmological applications of string configurations near the Hagedorn temperature like the ones recently studied in <cit.>).
Having recovered the constitutive relations of species thermodynamics from our first-principles thermodynamic analysis of KK-like and string-like towers, we have then investigated the question of whether different kinds of towers can also give rise to such form of the entropy and energy, so as to to recover the black hole scaling in the T→ limit. First, we have discarded any kind of super-exponentially degenerate spectrum, since they give rise to a divergent partition function for any T>0. These can be complementary analyzed in the microcanonical ensemble, but have also been found to be inconsistent with species thermodynamics <cit.>. One could also argue that it is sensible to introduce an extra hard cutoff in the masses of the canonical partition function, so as to make it convergent. Nevertheless, as we have previously mentioned, the only reasonable scale one could think of to cut the sum off is the species scale itself, and this would mean that no states in the tower contribute to the partition function, producing meaningless results.[As we have seen, in fact, this cutoff prescription can at most be thought of as an effective way of parameterizing an exponentially degenerate tower, morally similar to the p→∞ limit in the polynomial case.] Second, we have found that towers with fixed mass or subpolynomial degeneracy cannot provide the right scalings for the energy and the entropy at the same time, and are thus not appropriate towers. The exception being the tower with fixed mass when ≃, which effectively resembles a string-like tower in the parameterization p→∞, as expected. Thus, we have provided new evidence for the Emergent String Conjecture <cit.>, in agreement with recent bottom-up arguments from the microcanonical analysis of species thermodynamics <cit.> and general properties of scattering amplitudes <cit.>. Finally, we have interpreted our results in the bigger picture of a black hole - tower correspondence. In particular, revisiting the original argument for the black hole - string correspondence <cit.>, it is transparent that the existence of such correspondence between black holes and free strings can be understood in terms of the presence of a (non-gravitational) system whose entropy and energy are the correct ones to match those of the black hole when the latter becomes as small as possible (which in this context means ≃^-1). In fact, the correspondence is established when a black hole solution is followed along constant entropy lines towards weak gravitational coupling (by varying a modulus, originally the string coupling) up to the point when its constant entropy curve intercepts that of the free string with equal mass. Furthermore, this should not only happen for one particular black hole solution, but for all of them (with sufficiently high entropy). This can then be reinterpreted as having a system of species that is able to reproduce the right scaling of the entropy and the energy in the limit T→, that is, a tower that is able to produce the constitutive relations of species thermodynamics, S∼ E/∼, in said limit. Thus, all our previous arguments about the towers that we dub appropriate can be rephrased in terms of towers that do not obstruct a black hole - tower correspondence. 
In particular, we can now formulate this correspondence not only for emergent string limits, as originally proposed, but also for KK-like limits, where the KK species account for the entropy of a black hole that wraps the corresponding extra dimensions.[Incidentally, let us remark that this latest claim is consistent with the idea of more entropic black hole solution in decompactification limits when one probes sizes of order ∼^-1, as recently highlighted in the Swampland context in <cit.>. At that point, we focus on the less entropic solution, namely the one that still wraps the extra dimensions, since this is the one that allows us to probe the species scale while following constant entropy lines. Since we are not following the dynamical evolution of the black hole solutions this is allowed.] Hence, from this perspective, the towers that are allowed in Quantum Gravity should be the ones that can account for the (asymptotic) entropy of black hole solutions as they are followed along constant entropy lines towards arbitrarily weak gravitational coupling. Let us remark that our analysis here is not meant to explain the details of the transition, but instead we see it as a first step (and a necessary condition) towards establishing the detailed correspondence between towers of species and black holes. That is, one can think of it as the analogous of the argument in <cit.> for the black hole - string correspondence, but several subtleties regarding the details of the transitions, such as those originally discussed in <cit.> (and more recently <cit.>) would also need to be considered to understand the full picture in detail. Acknowledgments We would like to thank Ivano Basile, José Calderón-Infante, Alberto Castellano, Niccolò Cribiori, Nicolás Kovensky, Yixuan Li and Carmine Montella for useful discussions. The work of D.L. is supported by the Origins Excellence Cluster and by the German-Israel-Project (DIP) on Holography and the Swampland. § SINGLE-PARTICLE CANONICAL PARTITION FUNCTION We devote this appendix to clarify the approximations used in the main text for the single-particle canonical partition function of a relativistic particle of mass m_n in a box of size L in (d-1)-spatial dimensions, at a temperature T≤Λ. First, we recall such partition function, given by eq. (<ref>), which we recall here for simplicity 𝒵_1,n= ∑_p_α e^-E_n, p_α/T , E_n,p_α^2=m_n^2+∑_α=1^d-1 p_α^2 , with p_α=k_α/L (where k_α are non-vanishing integers). For L^-1≪ T, the number of points in the lattice determined by the allowed k_α's which gives the leading contribution to the sum is large and we can approximate it by an integral 𝒵_1,n≃ L^d-1∫_1/L^∞ dp p^d-2exp(-√(m_n^2+p^2)T) , where we have defined the modulus of the spatial momentum p^2=∑_α=1^d-1 p_α^2, as well as neglected factors of 2π in the definition of the volume of phase space, since we will eventually neglect multiplicative order one factors and it is not meaningfull to keep track of them at this point. The change of variables x=√(m_n^2+p^2/T^2) brings this integral to the form 𝒵_1,n≃ (T L)^d-1∫_√(m_n^2/T^2+1/(TL)^2)^∞ dx x ( x^2-m_n^2T^2)^d-3/2 e^-x , which for odd d can be performed analytically because the quantity inside the parenthesis becomes a polynomial in x. Solving the integral analytiucaly in those cases and substituting the integration limits one obtains a product including the (TL)^d-1, and exponential functions of 1/(TL), and m_n/T multiplying polynomials on the same variables. 
These expressions can be expanded in powers of m_n/T and the following results are obtained [ 𝒵_1,n≃ (TL)^d-1 , for T≥ m_n ,; ; 𝒵_1,n≃ e^-m_n/T P_q(m_nT) (TL)^d-1 , for T≪ m_n . ] These results are valid for any 1/L≤ T ≤Λ, as long as the integral gives a good approximation for the sum, and only include the leading term, which is in general multiplied by functions of 1/(TL) and Λ/T, which are extremely stable in the range of values under consideration and can be approximated by a constant. Furthermore, P_q(m_n/T) represents a polynomial in m_n/T with degree q≤ (d-1)/2. As an example, let us consider in detail the result of the d=5 case, where the integral takes the form 𝒵_1,n^(5d)≃ (T L)^d-1[ e^-x{(m_n^2T^2-6) x+m_n^2T^2-x^3-3 x^2-6}]_x=√(m_n^2/T^2+1/(TL)^2)^x=∞ and we can substitute the limits explicitly and expand about different values for m_n/T, keeping in mind that L^-1≤ T ≤Λ: 𝒵_1,n^(5d)≃ (TL)^4{e^-1/TL[1L^3 T^3+3L^2 T^2+6L T+6]+ 𝒪(m_nT)^2} , for T≫ m_n , 𝒵_1,n^(5d)≃ (TL)^4{ e^-√(1+(T L)^2)/TL( 6 √((T L)^2+1)(T L)+3(T L)^2+√((T L)^2+1)(T L)^3+8) } , for T≃ m_n , 𝒵_1,n^(5d)≃ e^-m_n/T (TL)^4{2 m_n^2T^2+m_nT[ 1TL+6]+3(T L)^2+6 +𝒪(Tm_n)} , for T≪ m_n . Let us start by analyzing eq. (<ref>). In this case we have the factor (T L)^d-1 and, using the definition of the incomplete gamma function Γ(d-1,x), the function multiplying it can be rewritten as 𝒵_1,n^(5d)≃ Γ(4,1TL) (TL)^4 , for T≫ m_n . This prefactor is a multiplicative function bounded by Γ(d-1)≥Γ(d-1,1/(TL))≥Γ(d-1,1), which are saturated in the limits T≫ L^-1 and T≃ L^-1, respectively. Therefore, we obtain the behaviour described in (<ref>). For the regime T≃ m_n it can be checked that the factor multiplying (T L)^4 in eq. (<ref>) is of order one in all possible limits for the parameters (i.e. T≥ L^-1), and this is compatible with (<ref>). Finally, we can see that for the case T≪ m_n the factor (T L)^4 is exponentially suppressed in eq. (<ref>), as in (<ref>). Furthermore, the leading term in the extra multiplicative correction is indeed a monomial in m_n/T with degree q=2, which matches (<ref>) for d=5. It can be checked that all other odd dimensional cases yield similar results upon performing the integral analytically. The even dimensional cases can also be analyzed upon expanding the integrand for the different relevant limits, and yield the same leading approximations, summarized in eq. (<ref>). To end this section, we focus now on the case in which the number of momentum states that give the leading contribution to the single particle partition function is small, such that we can try to estimate the value of the sum in eq. (<ref>) directly, without the need to use the integral approximation. This corresponds to the limit T≃ L^-1. In this case, if m_n≲ T, there will be 𝒪(1) contributions to the partition function which are not heavily exponentially suppressed and are of 𝒪(1) each. On the other hand, if m_n≫ T all contributions will be heavily exponentially suppressed. Thus, this behaviour is also captured by eq. (<ref>), since for the former case, (TL)^d-1∼𝒪(1), and for the latter we obtain the right exponentially suppressed behavior. §.§.§ Fluctuations in the canonical ensemble In this subsection we collect some useful formulae regarding the fluctuations in equilibrium configurations at temperature T. In general, for a given distribution function, we can define the (squared) fluctuation of a magnitude Q as <cit.> (Δ Q)^2=⟨ Q^2 ⟩-⟨ Q⟩ ^2 . This characterizes the width of the distribution function around its average value, ⟨ Q ⟩.
It is in particular useful to consider the adimensional quantity Δ Q / ⟨ Q ⟩, which we denote the relative fluctuation, since it gives the relative measure of the width of the distribution normalized by the average value. If this relative fluctuation is small, the relative width of the distribution is narrow. In the particular case in which the magnitude Q represents the energy, whenever we have a relative fluctuation that approaches zero, the distribution in energies approaches a delta and we recover the microcanonical ensemble. For the energy, its average value can be computed from the partition function as ⟨ E ⟩ = T^2 ∂log( 𝒵) ∂ T , and using this it can be seen that (Δ E)^2=⟨ E^2⟩ - ⟨ E ⟩ ^2 = T^2 ∂⟨ E ⟩∂ T . § TOWERS IN THE MICROCANONICAL ENSEMBLE We complement the discussion on towers in the main text by showing equivalent results using the microcanonical ensemble. Here we work at finite temperature, but one should always take the limit T≃ at the end. §.§ Towers with polynomial degeneracy We begin by considering towers with the following mass spectrum and degeneracy m_n=n , d_n=n^p-1 , corresponding to a tower of KK-like modes from the compactification of p isotropic dimensions. The number of species in this case is given by N_T≃ N^p/p, and the entropy takes the form S=N_Tp+1p+N_T log(MN_T^p+1/p) . The entropy is then maximized in terms of N_T for M= N_T^p+1/p . This configuration is precisely the one which reproduces the expected results of species thermodynamics, in which each state contributes exactly once to the total energy E=M ≃∑_n=1^Nm_n d_n≃ N^p+1≃ N_T^p+1/p . The entropy and temperature then take the form S=N_T (p+1p) , T= N_T^1/p , and we recover eqs. (<ref>) and (<ref>). §.§ Towers with fixed mass We now consider the following spectrum m_n=n , d_n=δ_n,1 N . This corresponds to modes accumulating around some mass with degeneracy N. The number of available species is simply N_T=N, with entropy S=N_T +N_T log(MN_T)=N_T +N_T log(T), where we have used eq. (<ref>) to express N in terms of the temperature. Maximizing the entropy requires M=N_T, such that S=N_T, T=. In contrast to the previous case, this suggests that such a tower is only in an equilibrium configuration for T=, as one cannot relate the temperature with the number of available modes N_T. §.§ Towers with exponential degeneracy We finally consider string-like towers with the following mass spectrum m_n=√(n) , d_n=e^λ√(n) . The number of modes in the tower is given in terms of the level as N_T≃2λ^2e^λ√(N)λ√(N) . The entropy is S≃ N_T+ N_Tlog(MN_TlogN_T) , and the configuration maximizing the entropy is M≃ N_TlogN_T . The equilibrium entropy and temperature for string towers are then S≃ N_T , ≃logN_sp . This corresponds directly to the number of states kinematically available with masses below T. We can thus notice that the microcanonical counting is suited for degeneracies which are up to exponential, as the results match with those of the canonical partition function.
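As a final numerical illustration of these formulae, one can evaluate the relative fluctuation Δ E/⟨ E ⟩=√(T^2 ∂_T⟨ E⟩)/⟨ E ⟩ for the two benchmark towers: with E=N_T T and N_T=(T/m)^p it decays as ∼ 1/√(N_T), whereas for the Hagedorn-like case E=α T T_H/(T_H-T) it tends to the constant 1/√(α), in line with the statement in the main text that for exponentially degenerate towers the canonical ensemble does not reduce to the microcanonical one. (A sketch in Python; the values of p, α, m and T_H are arbitrary choices for illustration.)

```python
import numpy as np

def relative_fluctuation(E, T):
    """Delta E / <E> with (Delta E)^2 = T^2 d<E>/dT, evaluated by finite differences."""
    dEdT = np.gradient(E, T)
    return np.sqrt(T ** 2 * dEdT) / E

m, p = 1.0, 2                       # KK-like tower: E = N_T T with N_T = (T/m)^p
T = np.linspace(2.0, 200.0, 4000)
rel_kk = relative_fluctuation((T / m) ** p * T, T)
print("KK-like:  Delta E/<E> at T = 10, 100, 200 :",
      [f"{rel_kk[np.argmin(np.abs(T - t))]:.4f}" for t in (10, 100, 200)])   # decays ~ 1/sqrt(N_T)

alpha, T_H = 2.0, 1.0               # Hagedorn-like tower: E = alpha T T_H / (T_H - T)
T = np.linspace(0.90 * T_H, 0.9999 * T_H, 4000)
rel_h = relative_fluctuation(alpha * T * T_H / (T_H - T), T)
print(f"Hagedorn: Delta E/<E> near T_H ~ {rel_h[-50]:.4f}   (1/sqrt(alpha) = {1 / np.sqrt(alpha):.4f})")
```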
http://arxiv.org/abs/2406.18005v1
20240626011928
NEOWISE-R Caught the Luminous SN 2023ixf in Messier 101
[ "Schuyler D. Van Dyk", "Tamas Szalai", "Roc M. Cutri", "J. Davy Kirkpatrick", "Carl J. Grillmair", "Sergio B. Fajardo-Acosta", "Joseph R. Masiero", "Amy K. Mainzer", "Christopher R. Gelino", "Jozsef Vinko", "Andras Peter Joo", "Andras Pal", "Reka Konyves-Toth", "Levente Kriskovics", "Robert Szakats", "Krisztian Vida", "WeiKang Zheng", "Thomas G. Brink", "Alexei V. Filippenko" ]
astro-ph.SR
[ "astro-ph.SR", "astro-ph.GA", "astro-ph.HE" ]
Schuyler Van Dyk vandyk@ipac.caltech.edu 0000-0001-9038-9950]Schuyler D. Van Dyk Caltech/IPAC, Mailcode 100-22, Pasadena, CA 91125, USA 0000-0003-4610-1117]Tamás Szalai Department of Experimental Physics, Institute of Physics, University of Szeged, Dóm tér 9, Szeged, 6720, Hungary MTA-ELTE Lendület "Momentum" Milky Way Research Group, Hungary 0000-0002-0077-2305]Roc M. Cutri Caltech/IPAC, Mailcode 100-22, Pasadena, CA 91125, USA 0000-0003-4269-260X]J. Davy Kirkpatrick Caltech/IPAC, Mailcode 100-22, Pasadena, CA 91125, USA 0000-0003-4072-169X]Carl J. Grillmair Caltech/IPAC, Mailcode 314-6, Pasadena, CA 91125, USA 0000-0001-9309-0102]Sergio B. Fajardo-Acosta Caltech/IPAC, Mailcode 100-22, Pasadena, CA 91125, USA 0000-0003-2638-720X]Joseph R. Masiero Caltech/IPAC, Mailcode 100-22, Pasadena, CA 91125, USA 0000-0002-7578-3885]Amy K. Mainzer University of Arizona, 1629 E. University Boulevard, Tucson, AZ 85721, USA Department of Earth, Planetary, and Space Sciences, The University of California, Los Angeles, 595 Charles E. Young Drive East, Los Angeles, CA 90095, USA 0000-0001-5072-4574]Christopher R. Gelino Caltech/IPAC, Mailcode 100-22, Pasadena, CA 91125, USA 0000-0001-8764-7832]József Vinkó Department of Experimental Physics, Institute of Physics, University of Szeged, Dóm tér 9, Szeged, 6720, Hungary HUN-REN CSFK Konkoly Observatory, Konkoly Th. M. út 15-17, Budapest, 1121, Hungary CSFK, MTA Centre of Excellence, Konkoly Thege Miklós út 15-17, Budapest, 1121, Hungary ELTE Eötvös Loránd University, Institute of Physics and Astronomy, Pázmány Péter sétány 1/A, Budapest, 1117, Hungary 0000-0001-5203-434X]András Péter Joó HUN-REN CSFK Konkoly Observatory, Konkoly Th. M. út 15-17, Budapest, 1121, Hungary CSFK, MTA Centre of Excellence, Konkoly Thege Miklós út 15-17, Budapest, 1121, Hungary ELTE Eötvös Loránd University, Institute of Physics and Astronomy, Pázmány Péter sétány 1/A, Budapest, 1117, Hungary 0000-0001-5449-2467]András Pál HUN-REN CSFK Konkoly Observatory, Konkoly Th. M. út 15-17, Budapest, 1121, Hungary CSFK, MTA Centre of Excellence, Konkoly Thege Miklós út 15-17, Budapest, 1121, Hungary 0000-0002-8770-6764]Réka Könyves-Tóth HUN-REN CSFK Konkoly Observatory, Konkoly Th. M. út 15-17, Budapest, 1121, Hungary CSFK, MTA Centre of Excellence, Konkoly Thege Miklós út 15-17, Budapest, 1121, Hungary Department of Experimental Physics, Institute of Physics, University of Szeged, Dóm tér 9, Szeged, 6720, Hungary ELTE Etvs Lornd University, Gothard Astrophysical Observatory, Szombathely, Hungary HUN-REN CSFK Konkoly Observatory, Konkoly Th. M. út 15-17, Budapest, 1121, Hungary CSFK, MTA Centre of Excellence, Konkoly Thege Miklós út 15-17, Budapest, 1121, Hungary 0000-0002-1698-605X]Róbert Szakáts HUN-REN CSFK Konkoly Observatory, Konkoly Th. M. út 15-17, Budapest, 1121, Hungary CSFK, MTA Centre of Excellence, Konkoly Thege Miklós út 15-17, Budapest, 1121, Hungary 0000-0002-6471-8607]Krisztián Vida HUN-REN CSFK Konkoly Observatory, Konkoly Th. M. út 15-17, Budapest, 1121, Hungary CSFK, MTA Centre of Excellence, Konkoly Thege Miklós út 15-17, Budapest, 1121, Hungary 0000-0002-2636-6508]WeiKang Zheng Department of Astronomy, University of California, Berkeley, CA 94720-3411, USA 0000-0001-5955-2502]Thomas G. Brink Department of Astronomy, University of California, Berkeley, CA 94720-3411, USA 0000-0003-3460-0103]Alexei V. 
Filippenko Department of Astronomy, University of California, Berkeley, CA 94720-3411, USA § ABSTRACT The reactivated Near-Earth Object Wide-field Infrared Survey Explorer (NEOWISE-R) serendipitously caught the Type II supernova SN 2023ixf in Messier 101 on the rise, starting day 3.6 through day 10.9, and on the late-time decline from days 211 through 213 and days 370 through 372. We have considered these mid-infrared (mid-IR) data together with observations from the ultraviolet (UV) through the near-IR, when possible. At day 3.6 we approximated the optical emission with a hot, ∼ 26,630 K blackbody, with a notable UV excess likely from strong SN shock interaction with circumstellar matter (CSM). In the IR, however, a clear excess is also obvious, and we fit it with a cooler, ∼ 1,620 K blackbody with radius of ∼ 2.6 × 10^15 cm, consistent with dust in the progenitor's circumstellar shell likely heated by the UV emission from the CSM interaction. On day 10.8, the light detected was consistent with SN ejecta-dominated emission. At late times we also observed a clear NEOWISE-R excess, which could arise either from newly formed dust in the inner ejecta or in the contact discontinuity between the forward and reverse shocks, or from more distant pre-existing dust grains in the SN environment. Furthermore, the large 4.6 μm excess at late times can also be explained by the emergence of the carbon monoxide 1–0 vibrational band. SN 2023ixf is the best-observed SN IIP in the mid-IR during the first several days after explosion and one of the most luminous such SNe ever seen. § INTRODUCTION Supernova (SN) explosions are among the most powerful events in the Universe. They serve as unique cosmic laboratories for studying processes in extreme physical conditions and chemical feedback into the interstellar and intergalactic media. Core-collapse supernovae (CCSNe), consequences of the gravitational collapse of iron cores of massive (≳ 8 M_⊙) stars, have been considered as possible sources of cosmic dust at high redshifts for over ∼ 50 yr <cit.>. Observed dust in CCSNe may form either in the (unshocked) ejecta or in a cold dense shell (CDS) across the contact discontinuity between the shocked circumstellar matter (CSM) and shocked ejecta. A late-time mid-infrared (mid-IR) excess may emerge from either newly formed or heated pre-existing dust grains. In the shocked CSM, heating can be collisional, and grains in the more distant, unshocked CSM are assumed to be radiatively heated by the peak SN luminosity or by energetic photons generated during CSM interaction (see, e.g., for a review). Observed properties and classifications of CCSNe, from H-rich to H-free spectra (IIP, IIL, IIb, Ib/c; see and for reviews), depend mainly on the degree of pre-explosion mass loss from the progenitor star (and/or its companion in a binary system). H-rich Type II-plateau (IIP) SNe, characterized by their ∼100-day optical plateau-like light curves, are the predominant CCSN subclass <cit.>. Direct evidence exists that these SNe arise from stars in the red supergiant (RSG) phase, with the star's massive hydrogen envelope remaining relatively intact at explosion <cit.>. SNe IIP are known to form new dust in their ejecta (see and references therein). Based on recent model calculations, expanding SN IIP ejecta succeed in condensing sufficient quantities (0.05–1.0 M_⊙) of dust. Some of these models propose slow and steady dust growth over several thousand days <cit.>, while others suggest a more rapid dust growth <cit.>. 
These theoretical expectations are also in agreement with the far-infrared/submillimeter detection of a large amount of cold (≲ 50 K) dust in hundreds-of-years-old Galactic SN remnants, such as Cas A <cit.> and the Crab Nebula <cit.>, as well as the nearby (∼ 50 kpc) and famous SN 1987A <cit.>. Very recently, the James Webb Space Telescope (JWST) has offered a new opportunity to study the late phases of cool (∼ 100–200 K) dust in extragalactic SNe and has already led to the detection of a significant amount (≳10^-3 M_⊙) of dust in Type IIP SNe 2004et and 2017eaw <cit.>, and the Type IIL SN 1980K <cit.>. At the same time, only a handful of nearby young (≲5 yr) SNe II (primarily IIP) show direct observational evidence for dust condensation, and these examples have all yielded two-to-three orders of magnitude less dust (∼ 10^-5–10^-3 M_⊙) than predicted by the models. Most of these observations, however, were carried out in the wavelength range 3–5 μm, and thus have been limited to just the warmer (≳ 500 K) dust grains. In the last quarter century, the primary source of mid-infrared (mid-IR) data on SNe was NASA's Spitzer Space Telescope, which resulted in valuable data during both its cryogenic (2003–2009) and post-cryogenic (2009–2020) missions. Except for several single-object studies — e.g., SN 1987A <cit.>, SN 1993J <cit.>, SN 1995N <cit.>, SN 2003gd <cit.>, SN 2004dj <cit.>, SN 2004et <cit.>, SN 2005af <cit.>, SN 2005ip <cit.>, SN 2007it <cit.>, and SN 2007od <cit.> — most of these Spitzer SN data were collected in the post-cryogenic phase (at 3.6 and 4.5 μm). Studies during this phase included either targeted surveys, such as the SPIRITS project <cit.> and work focused on interacting SNe <cit.>, or archival images for which the SNe were not the primary target <cit.>. The latter work, including data from targeted surveys, presents the most extensive analysis of mid-IR SN observations to date, including ∼ 120 positively detected objects from ∼ 1100 SN sites imaged by Spitzer. Another very important tool for detecting early-time mid-IR radiation from SNe has been the Wide-field Infrared Survey Explorer (WISE, both cryogenic and post-cryogenic, 2009–2011; ). The post-cryogenic WISE mission was reactivated in 2013 and has been monitoring the sky at 3.4 and 4.6 μm ever since, as the Near-Earth Object Wide-field Infrared Survey Explorer Reactivation (NEOWISE-R, or NEOWISE for short; ). While the original aim of the reactivated mission is mainly characterization of known Solar System objects, its database also serves as a valuable source of information on a rich variety of transient objects, such as cataclysmic variables, active galactic nuclei, tidal disruption events, and SNe (e.g., ). We as a community have been incredibly fortunate to have the recent, nearby SN 2023ixf occur in Messier 101 (M101; NGC 5457). Its proximity and brightness have led to many investigators training various facilities at a range of wavelengths at the event, which has exhibited a number of fascinating properties. The SN was discovered by <cit.> on 2023 May 19.73 (UTC dates are used throughout this paper) and classified as an SN II by <cit.> within hours of discovery. It was evident immediately that the optical spectrum was dominated by “flash” emission features indicative of interaction of the SN shock with pre-existing CSM <cit.>. The SN's light curves provided similar indications <cit.>. 
<cit.>, from an analysis of early-time Hubble Space Telescope (HST) ultraviolet (UV) spectroscopy of the SN, constrained the CSM to be dense and confined, with ∼ 10^-12 g cm^-3 at ≲ 2 × 10^14 cm; they concluded that this dense CSM immediate to the progenitor prolonged the SN shock breakout by ∼ 3 d. Other indications of initial and longer-term CSM interaction for SN 2023ixf come from observations at X-ray <cit.> and radio <cit.> wavelengths. A progenitor candidate was directly identified in archival HST, Spitzer, and ground-based near-IR data <cit.>. These unprecedented data were plentiful enough that the star was shown in astonishing detail to be a long-period variable, similar to what we expect for many RSGs <cit.>. Additionally, modeling of the star's spectral energy distribution (SED), e.g., by <cit.> revealed it to be quite dusty and luminous, and implied that the star was surrounded by a dusty silicate-rich shell with an inner radius of ≈ 10 times the star's radius, or ≈ 10^15 cm. Near-IR studies of SN 2023ixf have already been conducted and published <cit.>, and others will likely emerge. The SN has already been observed with the JWST, and those results are pending. Here we describe and analyze observations by NEOWISE, which serendipitously caught SN 2023ixf in the act between ∼ 3 days and ∼ 372 days in age. This is among the earliest that an SN has been detected in the mid-IR (the peculiar Type IIP SN 2009js was caught by Spitzer two days after discovery, however, its explosion epoch is rather uncertain; ). Following <cit.>, we have adopted 2023 May 18 18:00 UTC (MJD 60082.75) as the explosion epoch. We assume throughout a distance to M101 of 6.85 ± 0.13 Mpc <cit.>. § OBSERVATIONS §.§ NEOWISE NEOWISE observed the SN site pre-explosion, as part of routine operations, 152 times between 2013 December 18.26 and 2022 December 18.99 UT (MJD 56644.2618 and 59931.9958, respectively). The first pre-SN pair of single exposures occurred 3438.49 d prior to explosion, while the last was 150.75 d pre-SN. The progenitor candidate was not detected in any of these observations <cit.>. See also Section <ref> of this paper. The lack of detection tends to rule out luminous eruptions or outbursts from the progenitor candidate during that time period prior to explosion, as discussed in the above studies. (The star was also not detected by WISE and the post-cryogenic NEOWISE prior to reactivation, between 2009 and 2011; see .) The SN itself was detected during the nineteenth sky coverage since the start of the Reactivation Mission, from day 3.631 through 10.901 (2023 May 22.38 through May 29.65). We note that the gap of several days in the early-time data is caused by a “Moon toggle” procedure, in which the spacecraft pointing skips ahead to avoid the Moon, and then slews back to observe the “skipped” area of sky (see for a description); in our case, this was advantageous, as we were able to sample the SN's evolution about a week after the first detections. All of these data are publicly available via the NASA/IPAC Infrared Science Archive (IRSA; https://irsa.ipac.caltech.edu/https://irsa.ipac.caltech.edu/); see Figure <ref>. The SN was then captured again at late times during the twenty-first sky coverage, from day 211.736 through 213.351 (2023 December 16.49 through 18.10), and then again from day 370.875 through 372.476 (2024 May 23.63 through 25.23). Those data were still pre-release at the time of this writing and were also obtained via IRSA. 
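For reference in what follows, phases and absolute magnitudes in this paper are tied to the adopted explosion epoch (MJD 60082.75) and distance (6.85 Mpc). A minimal helper illustrating those two conversions (our sketch, using only the values quoted above):

```python
from math import log10

MJD_EXPLOSION = 60082.75     # adopted explosion epoch, 2023 May 18 18:00 UT
D_MPC = 6.85                 # adopted distance to M101 (Mpc)

def phase_days(mjd):
    """Days since the adopted explosion epoch."""
    return mjd - MJD_EXPLOSION

def distance_modulus(d_mpc=D_MPC):
    """mu = 5 log10(d / 10 pc)."""
    return 5.0 * log10(d_mpc * 1.0e6 / 10.0)

print(phase_days(60086.381))   # first NEOWISE detection -> ~3.63 d
print(distance_modulus())      # ~29.2 mag
```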
Given the separation of the SN from the general environs of the neighboring giant H ii region NGC 5461, the detections are relatively clean. The SN detections are listed in Table <ref>. The quantities W1mpro and W2mpro are profile-fit photometry magnitudes at 3.4 and 4.6 μm (bands W1 and W2), respectively, in the Vega system[See the NEOWISE Data Release Explanatory Supplement, https://wise2.ipac.caltech.edu/docs/release/neowise/expsup/https://wise2.ipac.caltech.edu/docs/release/neowise/expsup/.], without any further special processing or additional background subtraction applied. The resulting light curves are shown in Figure <ref>. cccccccc 0pt 8 NEOWISE-R Observations of SN 2023ixf MJD Age Scan Frame W1mpro W1sigmpro W2mpro W2sigmpro (d) ID Num (mag) (mag) (mag) (mag) 60086.381 3.631 50719r 232 11.352 0.020 11.287 0.026 60086.511 3.761 50723r 159 11.301 0.022 11.262 0.036 60086.641 3.891 50727r 232 11.296 0.019 11.212 0.028 60086.771 4.021 50731r 232 11.241 0.020 11.309 0.026 60086.836 4.086 50733r 207 11.278 0.021 11.233 0.026 60086.836 4.086 50733r 208 11.276 0.023 11.296 0.027 60086.901 4.151 50735r 231 11.225 0.020 11.183 0.028 60086.966 4.216 50737r 207 11.247 0.019 11.220 0.031 60087.031 4.281 50739r 232 11.260 0.018 11.146 0.025 60087.096 4.346 50741r 208 11.258 0.020 11.219 0.028 60087.160 4.410 50743r 157 11.208 0.018 11.198 0.026 60087.225 4.475 50745r 157 11.245 0.021 11.156 0.025 60087.290 4.540 50747r 232 11.228 0.018 11.178 0.026 60087.355 4.605 50749r 208 11.220 0.020 11.196 0.026 60087.420 4.670 50751r 232 11.200 0.020 11.169 0.026 60087.484 4.734 50753r 157 11.354 0.020 11.451 0.027 60087.485 4.735 50753r 158 11.208 0.020 11.180 0.032 60087.549 4.799 50755r 156 11.209 0.017 11.115 0.023 60087.615 4.865 50757r 208 11.195 0.020 11.162 0.028 60087.680 4.930 50759r 232 11.181 0.018 11.139 0.023 60087.745 4.995 50761r 207 11.145 0.021 11.112 0.023 60087.745 4.995 50761r 208 11.145 0.021 11.165 0.026 60087.874 5.124 50765r 207 11.172 0.018 11.158 0.024 60088.004 5.254 50769r 208 11.147 0.017 11.183 0.024 60088.134 5.384 50773r 156 11.117 0.019 11.095 0.022 60093.586 10.836 50941r 211 10.719 0.019 10.725 0.021 60093.651 10.901 50942r 235 10.687 0.016 10.646 0.020 60294.486 211.736 57146r 007 11.921 0.027 10.544 0.020 60294.616 211.866 57150r 042 11.920 0.021 10.505 0.018 60294.745 211.995 57154r 042 11.895 0.021 10.489 0.019 60294.874 212.124 57158r 042 11.880 0.020 10.513 0.018 60294.938 212.188 57160r 017 11.849 0.020 10.513 0.018 60295.003 212.253 57162r 042 11.891 0.022 10.493 0.018 60295.068 212.318 57164r 018 11.930 0.023 10.483 0.019 60295.197 212.447 57168r 018 11.839 0.020 10.510 0.019 60295.261 212.511 57170r 042 11.874 0.023 10.569 0.025 60295.326 212.576 57172r 018 11.860 0.020 10.511 0.018 60295.390 212.640 57174r 042 11.929 0.024 10.508 0.021 60295.455 212.705 57176r 017 11.901 0.021 10.495 0.019 60295.584 212.834 57180r 017 11.872 0.023 10.479 0.020 60295.649 212.899 57182r 042 11.911 0.025 10.517 0.021 60295.713 212.963 57184r 018 11.930 0.023 10.530 0.019 60295.778 213.028 57186r 043 11.925 0.028 10.498 0.020 60295.842 213.092 57188r 018 11.905 0.024 10.527 0.020 60295.972 213.222 57192r 018 11.894 0.023 10.525 0.023 60296.101 213.351 57196r 018 11.939 0.024 10.502 0.020 60453.625 370.875 62087r 131 13.141 0.038 11.795 0.036 60453.753 371.003 62091r 231 13.120 0.034 11.815 0.039 60453.881 371.131 62095r 232 13.177 0.036 11.856 0.040 60453.945 371.195 62101r 206 13.194 0.035 11.741 0.039 60454.009 371.259 62103r 231 13.157 0.035 11.867 0.036 60454.073 371.323 
62105r 232 13.098 0.032 11.769 0.037 60454.137 371.387 62107r 134 13.101 0.032 11.817 0.043 60454.201 371.451 62109r 135 13.172 0.038 11.775 0.038 60454.265 371.515 62111r 231 13.149 0.036 11.751 0.034 60454.329 371.579 62113r 206 13.265 0.038 11.836 0.037 60454.393 371.643 62115r 231 13.085 0.031 11.861 0.043 60454.457 371.707 62117r 205 13.091 0.049 11.775 0.032 60454.458 371.708 62117r 206 13.134 0.036 11.844 0.038 60454.521 371.771 62119r 231 13.118 0.032 11.816 0.041 60454.585 371.835 62121r 205 13.080 0.038 11.800 0.036 60454.650 371.900 62123r 231 13.168 0.032 11.815 0.033 60454.714 371.964 62125r 206 13.165 0.033 11.839 0.040 60454.778 372.028 62127r 231 13.056 0.031 11.789 0.034 60454.842 372.092 62129r 206 13.254 0.036 11.705 0.038 60454.906 372.156 62131r 231 13.053 0.031 11.819 0.031 60454.970 372.220 62133r 206 13.228 0.035 11.797 0.041 60455.097 372.347 62137r 125 13.120 0.034 11.768 0.036 60455.226 372.476 62141r 206 13.202 0.035 11.823 0.035 The columns W1mpro, W1sigmpro, W2mpro, and W2sigmpro are profile-fit photometry magnitudes and their uncertainties in the Vega system. §.§ Late-Time Optical Data §.§.§ Konkoly Observatory Many investigators have continued to follow SN 2023ixf since its discovery. We (Vinkó, Joó, Pál, Kriskovics, Könyves-Tóth, Szakáts, Vida) obtained optical photometry with the 0.8 m Ritchey-Chrétien telescope at the Konkoly Observatory, Hungary (J. Vinkó et al. 2024, in preparation). This includes late-time Johnson BV and Sloan Digital Sky Survey (SDSS) g'r'i'z' (hereafter BVgriz) photometry from MJD 60297.0 (2023 December 18.5, day 214) and from MJD 60444.93 (2024 May 14.9, day 362.18). These data were processed with standard IRAF[IRAF is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the National Science Foundation.] routines. Photometric calibration was based on field stars in the Pan-STARRS DR1 (PS1) catalogue[https://catalogs.mast.stsci.edu/panstarrs/https://catalogs.mast.stsci.edu/panstarrs/] <cit.>. In order to obtain reference magnitudes for our B and V frames, the PS1 magnitudes were transformed into the Johnson-Cousins BVRI system, based on equations and coefficients provided by <cit.>. Finally, the instrumental magnitudes were transformed into standard BVgriz magnitudes by applying a linear color term (using g-i) and wavelength-dependent zero points. Since the reference stars were all within a few arcminutes around the SN, no atmospheric extinction correction was necessary. §.§.§ Lick Observatory We (Zheng, Brink, Filippenko) performed further follow-up BVRI photometric observations of SN 2023ixf with both the 0.76 m Katzman Automatic Imaging Telescope (KAIT) and the 1 m Nickel telescope, as well as spectroscopy with the Shane 3 m telescope at Lick Observatory; see W. Zheng et al. (2024, in preparation) for details of the observations and data reduction. Regarding the photometry, we isolated just the late-time data on MJD 60291.53 and 60303.59 (2023 December 13.53 and 25.59, or days 208.78 and 220.84, respectively), which bracketed the late-time NEOWISE observations, and interpolated between these two sets of measurements. As far as spectroscopy, we have included spectra from Zheng et al. obtained on 60290.53 (2023 December 12.53, day 207.78) and 60445.41 (2024 May 15.41, day 362.66), which are the closest Lick spectra in time to the late-time NEOWISE observations. 
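When building SEDs from Table <ref>, the profile-fit Vega magnitudes must first be converted to flux densities. A minimal sketch of that conversion is given below; the zero-point flux densities used here (≈309.5 Jy for W1 and ≈171.8 Jy for W2) are nominal values we assume for the WISE photometric system, not numbers taken from this paper.

```python
# Convert NEOWISE profile-fit Vega magnitudes (Table 1) to flux densities.
F0_JY = {"W1": 309.5, "W2": 171.8}   # assumed zero-point flux densities (Jy)

def vega_mag_to_mjy(mag, band):
    """Flux density in mJy corresponding to a Vega magnitude in the given band."""
    return 1.0e3 * F0_JY[band] * 10.0 ** (-0.4 * mag)

# First epoch in Table 1 (MJD 60086.381, day 3.631):
print(vega_mag_to_mjy(11.352, "W1"))   # ~8.9 mJy
print(vega_mag_to_mjy(11.287, "W2"))   # ~5.3 mJy
```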
§ ANALYSIS Throughout our analysis we have assumed a total extinction to SN 2023ixf of A_V=0.12 mag from <cit.>. For UV through the near-IR we adopted the <cit.> reddening law. The extinction corrections for the NEOWISE bands are adopted from <cit.>. §.§ Early-Time IR Emission Rather than select every pair of observed NEOWISE W1 and W2 data points to analyze, for illustrative purposes we have chosen to consider just two sets at early times, the very first one from MJD 60086.381 (day 3.631) and from 60093.586 (day 10.836). These adequately represent the two periods of early-time sampling of the light curves in these bands. In order to put the NEOWISE data in context with the overall SED at these two epochs, we accumulated published light-curve data at UV, optical, and near-IR wavelengths corresponding to (or bracketing) the epochs. The data sources were then an amalgam of Swift UVW2, UVM2, UVW1 from <cit.>, SDSS ugriz and Johnson BVJHK_s from <cit.>, Johnson UBV, and SDSS griz from <cit.>, and Johnson JHK from <cit.>. Since the NEOWISE measurements are in the Vega system, the entirety of the dataset presented by <cit.>, which is in the AB system, had to be converted to Vega magnitudes. The SDSS magnitudes from <cit.> also required a similar conversion. No conversion was needed for the <cit.> JHK photometry. For day 3.631 the available UV-optical data were quasi-contemporaneous with the NEOWISE points; however, JHK required a linear interpolation between two bracketing epochs (the earlier epoch was at day 3.4, very close in time to NEOWISE). For day 10.836 none of the complementary data were contemporaneous, so we were forced to interpolate between bracketing epochs at all wavelengths. The resulting SED is shown in Figure <ref>. In addition to the UV-optical photometric points we included an FTN-FLOYDS-N spectrum of the SN obtained by <cit.> on 2023 May 22, which we downloaded from WISeREP[https://www.wiserep.orghttps://www.wiserep.org] <cit.>. Both the spectrum and the photometry were first reddening-corrected. This spectrum was further renormalized to the (dereddened) SN V-band brightness. As can be seen in the figure, the overall agreement is reasonable between the spectrum and the photometric points across the common wavelength range. We then attempted to fit a single, simple blackbody to the SED at day 3.631. We found that a hot, ∼ 26,630 K blackbody provides a good fit to the optical data, although a clear excess exists in the UV relative to this fit. The fit implies that the SN luminosity at that epoch was ≳ 4.1 × 10^43 erg s^-1; this is a lower limit, since there is clearly additional luminosity in the UV. This is consistent with the evidence for strong, early-time interaction of the SN shock with dense CSM <cit.>; specifically, interaction in SNe II can strengthen the continuum flux and boost emission lines in the UV simultaneously <cit.>. The blackbody radius is then R_ hot≈ 3.4 × 10^14 cm. The early-phase photometric and spectroscopic UV-optical SN observations provided evidence for strong interaction of the SN shock with a dense, confined (<2 × 10^15 cm) CSM <cit.>. <cit.> further refined the extent of the dense CSM to R_ CSM≈ 2 × 10^14 cm and concluded that it actually delayed shock breakout (SBO) from hours after explosion to ∼ 3 d. In other words, the very first NEOWISE data were likely obtained within just hours after SBO, and the inferred R_ hot is consistent with the shock having already overrun the confinements of the dense CSM. 
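As a simple consistency check of the numbers quoted above (a sketch, not the fitting code actually used), the blackbody radius implied by the fitted temperature and the lower-limit luminosity follows from L = 4πR^2σT^4; the same one-line relation reproduces the radii of the cooler components discussed below.

```python
import math

SIGMA_SB = 5.6704e-5   # Stefan-Boltzmann constant (erg cm^-2 s^-1 K^-4)

def blackbody_radius_cm(L_erg_s, T_K):
    """Radius of a spherical blackbody with luminosity L and temperature T."""
    return math.sqrt(L_erg_s / (4.0 * math.pi * SIGMA_SB * T_K**4))

# Day 3.631 hot component: T ~ 26,630 K, L >~ 4.1e43 erg/s  ->  R ~ 3.4e14 cm
print(blackbody_radius_cm(4.1e43, 26630.0))
```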
In fact, following <cit.> and assuming a SN expansion velocity v_ exp=8,000 km s^-1, on day 3.631 the shock would have been at ∼ 2.5 × 10^14 cm, roughly consistent with R_ hot (if v_ exp instead had been somewhat higher, ∼ 11,000 km s^-1, the two radii would be in better agreement). Particularly fascinating here is that an obvious excess in flux, relative to the hot blackbody, can be seen in Figure <ref> at ≳ 1.5 μm. We found that we could account for this IR excess with an additional, much cooler blackbody at ∼ 1620 K. Including this extra blackbody provides a reasonable fit to JHK_s and W1, although it does not fit W2 quite as well. This additional source of IR emission has a comparatively far smaller luminosity, ∼ 3.2 × 10^40 erg s^-1, than the SN shock (it accounts for ≲ 0.1% of the total emission). The corresponding blackbody radius is R_ IR≈ 2.6 × 10^15 cm. <cit.> and <cit.>, for example, inferred that the RSG progenitor candidate was surrounded by a dusty shell with an inner radius of R_ in≈ (0.5–1.0) × 10^15 cm. The assumption <cit.> made in their modeling of the star was that the shell extended out to 1000 × R_ in, with the dust density declining ∝ r^-2. Furthermore, ∼ 1620 K is roughly the estimated evaporation temperature of ∼ 0.01–0.1 μm-sized silicate-dominated dust in SN environments <cit.>. (Note that <cit.> were able to fit the reddening-corrected SED of the pre-explosion dust shell with a simple ∼ 1761 K blackbody.) Thus, we speculate that the IR excess was emanating from the dusty CSM shell, with the SN shock still within R_ in. This analysis, including the NEOWISE data, lends credence to the overall picture of the progenitor star inferred via the modeling of the star's observed SED. The estimated luminosity from the optically-thin dust was still about two orders of magnitude larger than the luminosity of the progenitor candidate (∼ 9 × 10^4 L_⊙, or ∼ 3.5 × 10^38 erg s^-1; <cit.>), which implies that the dust shell was likely heated by and was reprocessing the UV emission from the interaction of the SN shock with the dense inner CSM. We note that much of the CSM dust was likely destroyed immediately after explosion by high-energy (extreme UV to γ-ray) photons from the blast, through grain sublimation, vaporization, and extreme grain charging effects <cit.>. We built a similar SED from the available UV/optical/near-IR/NEOWISE photometry from day 10.836. Again, we added to this a dereddened and renormalized FTN-FLOYDS-N SN spectrum from WISeREP from 2023 May 29 <cit.>. The resulting SED is shown in Figure <ref>. We attempted to fit a warm, ∼ 9050 K blackbody to the data; however, as can be seen in the figure, while this fit is very good in the IR, it diverges significantly for wavelengths < 1 μm. If we consider the radiation models for SNe IIP by <cit.>, specifically the m15mlt1 model (with an initial progenitor mass of 15 M_⊙ and radius R_⋆=1107 R_⊙) computed at day 11, it provides a remarkably good comparison, from the UV through to the mid-IR, with the observed data. This implies that what we were seeing at that epoch, including with NEOWISE, was SN ejecta-dominated emission. If we take into account the blackbody fit, the inferred luminosity is ≳ 5.8 × 10^42 erg s^-1, which is a lower limit, since a significant amount of luminosity is still emerging from the SN at < 1 μm. In fact, if we integrate the D13 model (for instance), we obtain a luminosity of ∼ 1.5 × 10^43 erg s^-1. The blackbody radius is R_ warm≈ 1.1 × 10^15 cm.
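The blackbody comparisons in this section amount to least-squares fits of a Planck function to the dereddened broadband fluxes. The sketch below is illustrative only: the wavelength grid and flux values are placeholders standing in for the measured SED, and SciPy's curve_fit is used here simply as one convenient optimizer, not necessarily the machinery actually employed.

```python
import numpy as np
from scipy.optimize import curve_fit

H, C, KB = 6.626e-27, 2.998e10, 1.381e-16      # cgs constants

def bb_flux(lam_um, T, scale):
    """Planck function B_lambda(T) times a free scale factor (effective solid angle)."""
    lam = lam_um * 1.0e-4                        # micron -> cm
    return scale * 2.0 * H * C**2 / lam**5 / np.expm1(H * C / (lam * KB * T))

# Placeholder SED: substitute the dereddened photometric fluxes here.
np.random.seed(0)
lam = np.array([0.44, 0.55, 1.25, 1.65, 2.2, 3.4, 4.6])        # micron
flux = bb_flux(lam, 9000.0, 1.0e-21) * (1.0 + 0.05 * np.random.randn(lam.size))

popt, _ = curve_fit(bb_flux, lam, flux, p0=(5000.0, 1.0e-21))
print("best-fit T = %.0f K" % popt[0])
```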
Once again, assuming v_ exp=8,000 km s^-1, the shock radius would have been ∼ 7.5 × 10^14 cm, roughly consistent with R_ warm; similar to day 3.631, a somewhat higher v_ exp = 11,000 km s^-1 would result in better agreement between the two radius estimates. The blackbody radii and temperatures that we have calculated for both days 3.6 and 10.8 are in a good agreement with the results of <cit.>. Note, however, that we have assumed a spherical and homogeneous medium, while early-phase spectropolarimetric data implied the presence of an aspherical dense CSM and a clumpy, low-density extended CSM around SN 2023ixf <cit.>. Nevertheless, our conclusions also fit with that of <cit.>, in that they concluded that the expanding SN ejecta just emerged from the dense CSM region around Day 3.5. §.§ Late-Time Mid-IR Excess We computed the averages of the various measurements obtained by NEOWISE at late times in the SN's evolution, between MJD 60294.5 and 60296.1. SN 2023ixf was already well on the radioactively-powered exponential tail by this point. The averages are 11.895 ± 0.023 mag and 10.509 ± 0.020 mag in W1 and W2, respectively, with reference to day 213. Thanks to the active worldwide follow-up observations of SN 2023ixf, we can also build the late-time optical-NIR SED for the SN. We assembled the unpublished optical BVgriz photometry from Konkoly Observatory (from 2023 December 18.5, day 214; J. Vinkó et al. 2024, in preparation) and Lick Observatory BVRI data (interpolated to the same date), as well as a single Lick optical spectrum (from 2023 December 12, day 208; W. Zheng et al. 2024, in preparation); see Section <ref>. We show the combined optical SED in Figure <ref>. Since we are unaware of any available near-IR data close to this late epoch, we included spectra and fluxes from models of SN IIP explosions for comparison: The day 207 s15p2 model spectrum courtesy of Luc Dessart, and the interpolated and distance-scaled day +193 and +242 model fluxes for the Type IIP SN 2012aw from <cit.>. (We compared the Fe ii line velocities for SN 2023ixf from Zheng et al. 2024, to that of SN 2012aw from and found a very good match even at late phases; we also adopted the SN 2012aw distance of 9.9 Mpc from .) Both the model spectrum and the set of optical model fluxes appear to compare well with our late-time measured optical data; thus, we find it to be a reasonable approach to use the near-IR components of these models as references during the analysis of the late-time optical-IR SED. As can be seen in Figure <ref>, the NEOWISE 3.4 and 4.6 μm data show a clear excess relative to the model fluxes. We fit a single blackbody to the optical-near-IR part of the SED and found a warm component with temperature ∼ 4650 K and corresponding radius ∼ 5.6 × 10^14 cm. While SN ejecta are not expected to be realistically represented by a blackbody at late epochs, we can then use the result of this fit to provide a characterization of the colder, longer-wavelength excess that is apparent. As we noted in Section <ref>, we assume that the majority of pre-existing dust grains, located in the dense, confined circumstellar shell, have been evaporated at early times after explosion; therefore, these can no longer be responsible for the excess. 
Thus, we consider two possible alternate scenarios: (i) At day 213, newly-formed dust existed either in the inner (unshocked) ejecta or in the contact discontinuity between the forward and reverse shocks (the CDS); or (ii) radiation was being emitted by more distant pre-existing dust grains in the SN environment, heated collisionally or radiatively by the (forward) SN shock. However, from only the two mid-IR data points, a detailed investigation of the origin and properties of the assumed late-time dust is challenging to carry out — we hope that pending JWST observations of the SN will provide greater insight into these two scenarios. Furthermore, we note that the large 4.6 μm flux excess at day 213 can also be explained by the emergence of the 1–0 vibrational band of carbon monoxide (CO) at 4.65 μm, as seen in some SNe IIP with observed mid-IR spectra at a similar age <cit.>, and also predicted by modeling of exploding RSGs by <cit.>. The presence of a late-time mid-IR excess is in agreement with the results from <cit.>; they found that the flattening in the K_s-band light curve and the attenuation of the red-edge of the Hα line profile ∼125 d after explosion is indicative of the onset of molecular CO and, hence, dust formation in SN 2023ixf. This possibility should be also taken into account during any analysis. Thus, we also excluded the W2 flux and fit a two-component blackbody model to the remainder of the optical-IR SED, holding the parameters from the hot component fixed. Since a fit to only one mid-IR point would not provide any physically relevant information, we applied temperature constraints and assumed two possible scenarios for the “warm” dust component: (i) First, we assumed the highest theoretical dust temperature possible for amorphous carbon dust, T_ IR = 2600 K (see, e.g., ); and, (ii) second, we assumed a “typical” temperature of “warm” dust (T_ IR = 700 K) seen in SNe IIP at a similar age <cit.>. Using these two assumptions, we found a blackbody radius of R_ IR∼ 1.5 × 10^15 cm and ∼ 1.6 × 10^16 cm, respectively, for the two assumptions; see Figure <ref>). Extrapolating the Fe ii velocities (W. Zheng et al. 2024, in preparation) to day 213, we estimated ∼ 1400–1500 km s^-1 for the ejecta velocity, which results in (2.6–2.8) × 10^15 cm for the ejecta radius. This implies that, in the case of very high-temperature dust (the first scenario above), these grains possibly could be within the ejecta. However, assuming a more realistic dust temperature (the second scenario), the dust would more likely be outside the ejecta. As <cit.> concluded, the dusty shell of the progenitor likely had an inner radius of ∼ 10^15 cm, with an r^-2 density distribution extending outward. Thus, the pre-explosion shell itself could easily have extended out to ∼ 1.6 × 10^16 cm (well beyond the confined volume); the dust we infer here could have been located within the CSM, being either pre-existing or newly-formed in the CDS. We present in Figure <ref> an optical-IR SED comprised of data obtained in the day 370–372 interval. Since at that range of epochs the shape of the measured optical SED appears to differ significantly from Type IIP atmospheric models by either <cit.> or <cit.>, we are unable to directly estimate the amount of IR excess in the manner that we did for the day 211–213 SED. Nevertheless, clear excesses at both 3.3 and 4.6 μm are obvious. 
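The ejecta radius quoted above follows from simple free expansion at the extrapolated Fe II velocity; a one-line check (our sketch, using only the values given in the text):

```python
DAY_S = 86400.0   # seconds per day

def free_expansion_radius_cm(v_km_s, t_days):
    """R = v * t for homologous free expansion."""
    return v_km_s * 1.0e5 * t_days * DAY_S

for v in (1400.0, 1500.0):                         # extrapolated Fe II velocities at day 213
    print(v, free_expansion_radius_cm(v, 213.0))   # ~2.6e15 and ~2.8e15 cm
```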
§.§ Comparison of SN 2023ixf with Other SNe IIP In the Mid-IR We have considered all of the NEOWISE data for SN 2023ixf, in the context of the mid-IR emission from other SNe, particularly SNe IIP. To achieve this, we compared SN 2023ixf with the sample of SNe from <cit.>, who presented photometric data obtained by Spitzer in the IR Array Camera <cit.> bands at 3.6 and 4.5 μm during both the cryogenic and Warm missions; see Figure <ref>. While the bandpasses for Spitzer IRAC differ slightly from those of NEOWISE W1 and W2 channels, we can draw some basic conclusions. First, as noted above, SN 2023ixf has become the best-observed SN IIP in the mid-IR during the first several days after explosion. Second, SN 2023ixf is one of the most luminous SNe IIP in the mid-IR ever seen. This statement is especially striking, considering the 4.5/4.6 μm photometric evolution of SNe IIP (Figure <ref>; right panel) — at day 213, SN 2023ixf is more luminous than at early times and than any other SNe IIP detected so far at late times. We can speculate on why this is the case. The early-time mid-IR luminosity is consistent with the overall excess observed at other wavelengths and can be explained by the shock-CSM interaction. Among the Spitzer sample, most of the SNe were either of low luminosity (e.g., SN 2004dj) or were otherwise normal (e.g., SN 2007od, SN 2011ja); only SN 2004et was observed to be somewhat extraordinary <cit.>. The late-time mid-IR excess for SN 2023ixf could be due to post-explosion dust formation, as we have mentioned above. However, we cannot say much more about this, based on the NEOWISE data alone; observations with JWST will likely provide significantly more insight on this. § CONCLUSIONS We have analyzed serendipitous observations of SN 2023ixf made as part of routine NEOWISE survey scanning operations, starting on day 3.6 through day 10.9 after explosion, and again at late times from days 211 through 213 and days 370 through 372. For the three epochs in these time ranges that we analyzed, we combined the NEOWISE observations with data from the UV through the near-IR, whenever possible. At day 3.6 we approximated the emission in the optical with a hot, ∼ 26,630 K blackbody, exhibiting a marked excess in the UV, likely resulting from strong, early SN shock-CSM interaction. In the IR, however, a definite excess is also obvious, and we fit that with a cooler, ∼ 1,620 K blackbody, with a radius of ∼ 2.6 × 10^15 cm. We concluded that this is consistent with dust in an inferred circumstellar shell surrounding the progenitor star having been heated by the UV emission from the early CSM interaction. On day 10.8 the emission, including that detected with NEOWISE, was consistent with being SN ejecta-dominated. At late times we also observed an obvious excess in the NEOWISE bands, relative to the other wavelengths. This excess could arise either from newly-formed dust in the inner ejecta or in the contact discontinuity between the forward and reverse shocks (the CDS), or from more distant pre-existing dust grains in the SN environment. Furthermore, the observed large excess at 4.6 μm at late times can also be explained by the emergence of the CO 1–0 vibrational band, seen in other SNe IIP. Observations with JWST are necessary to confirm detection of the CO band, as well as to better explore the overall nature of the late-time mid-IR emission. 
We found, from comparing to mid-IR data for other SNe IIP, that SN 2023ixf is the best-observed SN IIP in the mid-IR during the first several days after explosion and one of the most luminous SNe IIP ever seen in the mid-IR. The survey operations by the WISE mission, in all its incarnations, are scheduled to be terminated permanently on 2024 July 31. Together with the decommissioning of Spitzer over four years ago, the number of available facilities to gather mid-IR light from nearby SNe will be greatly diminished. The next avenue will be provided by NEO Surveyor, set for launch no later than mid-2028 <cit.>: The mission will be obtaining four detections at 4.6 and 8 μm over a six-hour time period, approximately every 13 days, as part of survey operations, so it may be possible once again to catch SNe both on the rise and at late times. For future pointed observations it falls on JWST to be the platform for observing both new and old SNe, to explore further in detail the nature of dust associated with these spectacular events. We thank Luc Dessart for providing the day 207 model SN IIP spectrum. This publication makes use of data products from the Near-Earth Object Wide-field Infrared Survey Explorer, which is a joint project of the Jet Propulsion Laboratory/California Institute of Technology and the University of Arizona. It also uses data products from the Wide-field Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology. WISE and NEOWISE are funded by the National Aeronautics and Space Administration (NASA). This work has been supported by the GINOP-2-3-2-15-2016-00033 project of the National Research, Development and Innovation (NRDI) Office of Hungary funded by the European Union, as well as by NKFIH OTKA FK-134432, KKP-143986, and K-142534 grants, and from the HUN-REN Hungarian Research Network. L.K. and K.V. are supported by the Bolyai János Research Scholarship of the Hungarian Academy of Sciences. LK acknowledges the Hungarian National Research, Development and Innovation Office grant OTKA PD-134784. The authors acknowledge financial support of the Austrian-Hungarian Action Foundation grants 98öu5, 101öu13, 112öu1. A.V.F.'s research group at UC Berkeley acknowledges financial assistance from the Christopher R. Redlich Fund, Gary and Cynthia Bengier, Clark and Sharon Winslow, Alan Eustace (W.Z. is a Bengier-Winslow-Eustace Specialist in Astronomy), William Draper, Timothy and Melissa Draper, Briggs and Kathleen Wood, Sanford Robertson (T.G.B. is a Draper-Wood-Robertson Specialist in Astronomy), and numerous other donors. KAIT and its ongoing operation at Lick Observatory were made possible by donations from Sun Microsystems, Inc., the Hewlett- Packard Company, AutoScope Corporation, Lick Observatory, the U.S. NSF, the University of California, the Sylvia & Jim Katzman Foundation, and the TABASGO Foundation. A major upgrade of the Kast spectrograph on the Shane 3 m telescope at Lick Observatory, led by Brad Holden, was made possible through generous gifts from the Heising-Simons Foundation, William and Marina Kast, and the University of California Observatories. Several UC Berkeley undergraduate students helped obtain the 1 m Nickel data. We appreciate the excellent assistance of the staff at Lick Observatory. Research at Lick Observatory is partially supported by a generous gift from Google. 
Facilities: NEOWISE, KAIT, Nickel, Shane, IRSA, RC80 (Konkoly). Software: IRAF <cit.>, PyRAF (http://www.stsci.edu/institute/softwarehardware/pyraf). § PRE-EXPLOSION NEOWISE NON-DETECTIONS As pointed out in Section <ref>, NEOWISE obtained pre-explosion observations of the SN site between 2013 December 18 and 2022 December 18, with the last pair of single exposures occurring 150.75 d prior to explosion. The progenitor candidate was not detected in any of these exposures. As <cit.> described, the upper limits on detection were established by isolating, in the NEOWISE-R Single Exposure Source Table obtained from IRSA, all of the 3σ detected objects within 60 of the SN position for each band. We show these upper limits in Figure <ref>. Both the mean and median values in W1 are < 16.4 mag; the mean value in W2 is < 14.8 mag, whereas the median value is < 14.9 mag.
http://arxiv.org/abs/2406.18311v1
20240626125013
Online Learning of Multiple Tasks and Their Relationships : Testing on Spam Email Data and EEG Signals Recorded in Construction Fields
[ "Yixin Jin", "Wenjing Zhou", "Meiqi Wang", "Meng Li", "Xintao Li", "Tianyu Hu", "Xingyuan Bu" ]
cs.LG
[ "cs.LG" ]
Online Learning of Multiple Tasks and Their Relationships : Testing on Spam Email Data and EEG Signals Recorded in Construction Fields Yixin Jin* University of Michigan, Ann Arbor Ann Arbor, MI, 48109, USA jinyixin@umich.edu Wenjing Zhou University of Michigan, Ann Arbor Ann Arbor, MI, 48109, USA wenjzh@umich.edu Meiqi Wang Brandeis University Waltham, MA, 02453, USA meiqw@brandeis.edu Meng Li Columbia University New York, NY, 10027, USA ml4818@columbia.edu Xintao Li University of Miami Miami, FL, 33156, USA xintao.li@miami.edu Tianyu Hu Columbia University New York, NY, 10027, USA th3011@columbia.edu Xingyuan Bu University of Michigan Ann Arbor, USA xingyuanbu@gmail.com July 1, 2024 ================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== § ABSTRACT This paper examines an online multi-task learning (OMTL) method, which processes data sequentially to predict labels across related tasks. The framework learns task weights and their relatedness concurrently. Unlike previous models that assumed static task relatedness, our approach treats tasks as initially independent, updating their relatedness iteratively using newly calculated weight vectors. We introduced three rules to update the task relatedness matrix: OMTLCOV, OMTLLOG, and OMTLVON, and compared them against a conventional method (CMTL) that uses a fixed relatedness value. Performance evaluations on three datasets—a spam dataset and two EEG datasets from construction workers under varying conditions—demonstrated that our OMTL methods outperform CMTL, improving accuracy by 1% to 3% on EEG data, and maintaining low error rates around 12% on the spam dataset. machine learning, online learning, data mining, EGG § INTRODUCTION In the big data era <cit.>, the need for prompt data <cit.> processing and decision-making has grown, particularly in areas like multi-task learning <cit.>. This paper addresses the shortcomings of traditional online learning models, which assume static task relatedness, by introducing a dynamic framework for Online Multi-Task Learning (OMTL). Our approach iteratively updates task relatedness based on data insights, improving the accuracy and utility of online learning systems in applications like spam detection and EEG signal analysis. Our research advances the understanding of task relatedness in OMTL and demonstrates its practical applications, highlighting its significance for real-time data processing in various settings <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>. This study showcases the adaptability and impact of multi-task learning models in data-intensive environments. § MATERIALS AND METHODS §.§ Datasets §.§.§ Spam emails dataset To test the suggested framework in this research, the spam dataset provided online was used <cit.> to classify spam and non-spam emails. This dataset includes 3,000 training and 1,100 testing examples recorded from email messages of two subjects. It includes approximately 50% spam email messages and 50% non-spam email messages. 
The first 4,000 training and testing examples are from subject 1; the last 100 are from subject 2. Subjects 1 and 2 were treated as tasks 1 and 2, respectively. §.§.§ Construction workers' EEG signal dataset The EEG data were collected from 8 healthy male workers using the Emotiv EPOC+, an affordable EEG device that records signals through 14 channels at a quality suitable for research. The device sampled internally at 2,048 Hz, delivering data at 128 Hz with a resolution of 14 bits and connectivity in the 2.4 GHz band. Two datasets were generated based on motor cortex activation: Dataset1 distinguished between inactive and active states, while Dataset2 differentiated between relaxed and stressful working conditions. Details on window size selection and feature extraction are discussed in the subsequent section. §.§ Dataset pre-processing and feature extraction for EEG signal data §.§.§ Pre-processing As described in section A 2), the construction workers' EEG signal dataset consists of raw time-series data in 14 channels and was originally stored as an Excel file. A preprocessing step was applied to extract the data into Matlab format. Due to the raw and uncleaned nature of the original data, there are a large number of signal artifacts and abnormalities in the extracted dataset. These artifacts were removed with filtering methods and the Independent Component Analysis (ICA) method before the features were extracted. After cleaning the data <cit.>, a window of size 128 was applied to calculate the feature vectors; the window size was 128 because the data were collected at a 128 Hz rate. Moreover, in each subject's dataset, the worker's behavior is annotated with up to 7 labels. We extracted two datasets of interest, each with binary labels: Dataset1 contains the label indicating whether the worker is resting or active, and Dataset2 labels the worker as stressed or relaxed. We used Python to handle the extraction and cleaning, and stored the organized data in a .mat file, in the same format as the spam data, for further processing. We have 2,744 data points in Dataset1 and 1,585 data points in Dataset2. §.§.§ Power spectral density estimation The power spectral density (PSD) estimate shows the strength of energy variation as a function of frequency; it describes how the average power of a random signal is distributed over frequency. The power distribution over frequency is calculated through the following equations: S(w) = ∑ p(k)e^iwk where p(k) = ∑ y(t)y*(t-k) After calculating the power distribution, the PSD feature is obtained by averaging its absolute value over the frequency domain <cit.>. In this study, we used the pwelch function in Matlab to calculate the power distribution and then applied the absolute mean to obtain the PSD feature. We apply the same PSD extraction to each of the multi-channel signals (14 channels in this paper) recorded by the different EEG electrodes (14 electrodes) to generate a feature vector. The same per-channel procedure is used for the remaining features and is therefore not repeated in the following sections. §.§.§ Mean estimation for alpha frequency and beta frequency The alpha and beta bands are two frequency ranges used to describe human brain activity: the alpha band covers frequencies from 8 to 12.5 Hz, whereas the beta band covers 12.5 to 30 Hz.
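A compact illustration of this per-channel feature extraction is sketched below in Python/SciPy (the authors used Matlab's pwelch; this is our reconstruction, not their code). The sampling rate, window length, and band edges follow the values quoted above; the exact ordering of features in the final vector is our own choice.

```python
import numpy as np
from scipy.signal import welch

FS = 128   # sampling rate (Hz); one feature window = 128 samples

def band_mean(freqs, psd, lo, hi):
    """Mean PSD within [lo, hi) Hz, per channel."""
    sel = (freqs >= lo) & (freqs < hi)
    return psd[:, sel].mean(axis=1)

def window_features(window):
    """window: array of shape (128, 14) -- one window of cleaned 14-channel EEG."""
    freqs, psd = welch(window, fs=FS, nperseg=64, axis=0)   # psd: (n_freqs, 14)
    psd = psd.T                                             # -> (14, n_freqs)
    broadband = psd.mean(axis=1)                            # PSD feature per channel
    alpha = band_mean(freqs, psd, 8.0, 12.5)                # alpha-band mean per channel
    beta = band_mean(freqs, psd, 12.5, 30.0)                # beta-band mean per channel
    return np.concatenate([broadband, alpha, beta])         # 3 x 14 = 42 features

demo = np.random.randn(128, 14)      # stand-in for one cleaned EEG window
print(window_features(demo).shape)   # (42,)
```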
Based on prior research, alpha activity is associated with eye closure, and beta activity mainly reflects the muscle contractions before and during a movement; therefore, these are appropriate features with which to determine whether a person is in a resting or an active state. To obtain the alpha- and beta-band features for a specific channel, we apply the same power-distribution calculation restricted to the corresponding frequency band and then take the same mean-absolute estimate. §.§.§ Frontal EEG features Frontal EEG asymmetry (FEA) compares the frontal activity of the brain between its left and right areas. Left frontal activity indicates a positive emotion; on the other hand, right frontal activity usually indicates a negative emotion <cit.>. FEA shows the degree of activation of the left and right areas by comparing the spectral power in the alpha and beta frequency ranges between these two areas <cit.>. FEA has frequently been used to measure the valence and arousal of subjects' emotional states <cit.>. Valence levels illustrate how pleasurable an event is for the subjects, and arousal shows how active/aroused subjects are in different situations <cit.>. To calculate the arousal and valence features, we use equation <ref> and equation <ref>: Arousal = β (AF3 + AF4 + F3 + F4)/α (AF3 + AF4 + F3 + F4) Valence = α (F4)/β (F4) + α (F3)/β (F3) In both equations, α and β represent the alpha- and beta-band powers, computed following the same procedure as in section B 3). AF3, AF4, F3, and F4 denote EEG channels over the left and right frontal areas. Equation <ref>, the arousal feature, indicates the excitation state of a person by calculating the beta/alpha ratio. Equation <ref>, the valence feature, compares the activation levels of the two cortical hemispheres, which reflects the emotional valence of the subjects. §.§ Perceptron-based online multi-task learning (CMTL) In machine learning, the perceptron is an algorithm for supervised learning of binary classifiers that decide whether or not an input belongs to a specific class <cit.>. The perceptron uses hypotheses of the form y(x;w) = f(w^T x), where f(z)=I[z ⩾ 0]. The update rule is as follows: w_i+1 := w_i + α[t_i+1 - y(x_i+1; w_i)]x_i+1 CMTL keeps a weight vector for each task and updates all weight vectors at each mistake using the perceptron rule, with learning rates defined by a K × K interaction matrix A. It is A that encodes beliefs about the learning tasks: different choices of the interaction matrix result in different geometrical assumptions about the tasks. The pseudocode for the multitask perceptron algorithm using a generic interaction matrix A is given below. At the beginning of each time step, the counter s stores the number of mistakes made so far, plus one. The weights of the K perceptrons are maintained in a compound vector w^T_s = (w^T_1, s, ..., w^T_K, s) with w_j, s∈ℝ^d for all j. The algorithm predicts y_t through the sign of the i_t-th perceptron's margin w^T_s-1Φ_t = w^T_i_t, s-1 x_t.
Then, if the prediction and the true label disagree, the compound vector update rule is w_s = w_s-1 + y_t (A ⊗ I_d)^-1Φ_t, where A ⊗ I_d is the Kd × Kd Kronecker product, that is A ⊗ I_d = [ a_11I_d a_12I_d … a_1KI_d; ⋮ ⋮ ⋱ ⋮; a_K1I_d a_K2I_d … a_KKI_d ] Since (A ⊗ I_d)^-1 = A^-1⊗ I_d, the above update is equivalent to the K task updates <cit.>: w_j,s = w_j, s-1 + y_t A^-1_j,i_tx_t The pseudocode is shown as algorithm <ref>. §.§ Online multi-task learning (OMTL) This method learns the weight vectors of multiple tasks and the task relatedness matrix together, adaptively, in an online setting, in contrast with previous methods, which usually assume that the task relatedness matrix is fixed and that the relatedness of tasks is positive. First, this method defines an objective function to optimize, inspired by the CMTL method. The objective function is as follows: min_w ∈ℝ^Kd 1/2 w^T A_⊗ w + D_A(A||A_t) + ∑^t_1 l_t(w) where D_A denotes a Bregman divergence, w and A are the weight vector and the interaction matrix, and A_⊗ = A ⊗ I_d. The optimization problem is defined jointly over both w and A. It can be solved in an alternating fashion by solving for w given A, and then solving for A given w <cit.>; this method therefore uses an alternating optimization scheme to solve for the parameters A and w. Deriving from the CMTL method, the update rule is as follows: w_s = w_s-1 + y_t (A_s-1⊗ I_d)^-1Φ_t w_j,s = w_j, s-1 + y_t A^-1_s-1,(j,i_t)x_t where j denotes the task the parameter belongs to, s denotes the round, the true label y_t ∈{-1, 1}, and Φ_t = (0,...,0,x_i_t,0,...,0) ∈ℝ^Kd. Once w_s is solved, we treat it as fixed and then solve for A. The pseudocode for OMTL is shown in algorithm <ref>. § RESULTS & DISCUSSION We evaluated the developed online task relationship learning algorithm by comparing the five different algorithms suggested in this paper <cit.>; these algorithms are listed in Table <ref>. We tested our algorithms on the spam dataset and the two EEG datasets. The evaluation metric we used is the classification error rate, i.e., the number of misclassified examples divided by the total number of samples. To fully investigate our algorithms, we first ran them with fixed epoch parameters, set to 0.5, 0.8, and 0.8, respectively, and tuned the learning rate η simultaneously. We also explored the influence of the epoch parameter, which measures the proportion of data read before the relatedness matrix is updated; we therefore ran our algorithms with various epoch values to investigate this aspect. §.§ Error rates The averaged error rates and standard deviations over 20 random permutations are reported in Table <ref>. According to the results, BatchOPT has the lowest error rate on the spam email dataset (average error rate, 11.32%), and OMTLCOV has the worst performance on the spam dataset (average error rate, 15.33%). For the EEG data, when classifying whether a worker is resting or active, OMTLCOV has the best accuracy (average error rate, 33.25%) and CMTL has the worst performance (average error rate, 35.19%). When classifying whether a worker is relaxed or stressed, OMTLCOV again has the best accuracy (average error rate, 29.50%), and BatchOPT gives the worst classification (average error rate, 35.08%). Moreover, for the spam data, BatchOPT appears the most stable (lowest standard deviation when permuting the data), and for the EEG data, OMTLCOV outperforms the others in both accuracy and stability.
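For concreteness, the update rules compared in these experiments can be sketched compactly. The snippet below is our illustrative reconstruction, not the authors' implementation: it applies the per-task perceptron update w_j,s = w_j,s-1 + y_t A^-1_j,i_t x_t on every mistake and, once a fixed fraction of the stream (the epoch parameter) has been seen, refreshes the interaction matrix from the learned weights. A simple covariance-based rule stands in here for the OMTLCOV/OMTLLOG/OMTLVON variants, and the initial interaction matrix is just a fixed prior coupling all tasks equally.

```python
import numpy as np

def omtl_perceptron(stream, K, d, epoch=0.6, eta=0.1):
    """stream: list of (task_id, x, y) with y in {-1, +1}. Returns W of shape (K, d)."""
    W = np.zeros((K, d))
    A = 0.5 * (np.eye(K) + np.ones((K, K)) / K)       # fixed prior coupling tasks equally
    A_inv = np.linalg.inv(A)
    switch = int(epoch * len(stream))                 # round at which A starts to adapt
    for t, (i, x, y) in enumerate(stream):
        if y * W[i].dot(x) <= 0:                      # mistake: update every task's weights
            W += y * np.outer(A_inv[:, i], x)
        if t == switch:                               # adaptive relatedness (covariance-style)
            A_new = np.cov(W) + np.eye(K)             # regularized task-covariance estimate
            A_inv = eta * np.linalg.inv(A_new) + (1.0 - eta) * A_inv
    return W

# Toy usage: 2 related tasks, 5 features
rng = np.random.default_rng(0)
w_true = rng.normal(size=(2, 5))
stream = []
for _ in range(500):
    i = int(rng.integers(2))
    x = rng.normal(size=5)
    stream.append((i, x, 1.0 if w_true[i].dot(x) >= 0 else -1.0))
W = omtl_perceptron(stream, K=2, d=5)
acc = np.mean([1.0 if (W[i].dot(x) >= 0) == (y > 0) else 0.0 for i, x, y in stream])
print("training accuracy:", acc)
```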
§.§ Epoch parameter As illustrated in Figure <ref>, Figure<ref> and Figure<ref>, adjusting the EPOCH values, which determine the percentage of data seen before updating the relatedness matrix, influences classification errors. OMTLLOG and OMTLVON generally show lower errors with increased EPOCH values. For OMTLCOV, error rates initially decrease then increase, with optimal classification at an EPOCH setting of 0.6. BatchOPT’s prediction accuracy is less affected by changes in EPOCH. An EPOCH range of 0.6-0.7 typically offers the best balance of classification accuracy and computational efficiency, as higher values increase computation time. Notably, the four online learning methods that update the relatedness matrix consistently outperform CMTL, which uses a fixed relatedness matrix, particularly in the EEG datasets. §.§ Discussion In the spam case, all algorithms demonstrated similar error rates on average. For OMTLLOG and OMTLVON, there were obvious decreasing trends in error rates with increasing EPOCH value, which showed the benefits from adaptive learning and was also consistent with the conclusion of the selected paper: that an increase in Epoch value led to a gradual improvement in prediction accuracy <cit.>, although this pattern was not similarly clear for OMTLCOV and BatchOPT. We could also conclude that an EPOCH equal to 0.6-0.7 was a preferable setting in terms of both accuracy and time complexity. For the spam data, all the error rates were lower than what was found in the original paper, which indicated that our dataset differed from theirs and all these algorithms could get similar good accuracy. One possible reason was that the spam data we worked on was balanced, containing 50% of the spam emails, which might not be same as the dataset used in the paper. In the previous works <cit.>, they mentioned that different datasets would yield different tipping points and it was reasonable to see different results when testing with different data. In the EEG case, we observed the OMTLCOV outperformed all other algorithms. OMTLCOV delivered lower and smoother error rates, which validated the simulation results in the paper. However the average error rates were high compared to the spam data. This could be because before the simulation, we needed to extract the features from the brain-test data and the process might potentially bring more skewedness into the experiments, since we only took a limited number of features, which might not give a complete picture of the dataset. Additionally, the sample size was relatively small and the lack of data might be the reason for the unfavorable accuracy. However, due to the capacity of the online learning algorithms, we would expect much better performance by updating the model parameters when getting more data in the future. It was also possible that the EEG data was not suitable for these learning algorithms. The traditional SVM provided even lower classification errors. It was probably because the EEG data was much more randomized than the spam data and required a better data cleansing process and feature extracting process. Moreover, we realized that the learning rate also played a very important role in the online multi-task learning. The updating methods of the relatedness matrix were sensitive to the learning rate to various degrees. Therefore, choosing a proper learning rate would improve the stability and classification accuracy. 
§ CONCLUSION In this paper, we implemented online multi-task learning algorithms and assessed their efficiency across various datasets, including spam and EEG data. We discovered that performance often depended on the dataset’s composition, as indicated by the reference paper <cit.>. While algorithms performed well with structured spam data, they struggled with more randomized EEG data, highlighting the need for effective data cleansing and feature extraction, particularly for EEG. We confirmed that updating the relatedness matrix after sufficient data exposure improves prediction accuracy. We also noted the importance of fine-tuning the epoch parameter and other factors like the initial interaction matrix values and learning rate to optimize classifier performance. 1 crammer2009adaptive Koby Crammer, Alex Kulesza, and Mark Dredze. Adaptive regularization of weight vectors. In Advances in neural information processing systems, pages 414–422, 2009. winkler2010frontal IreneWinkler, Mark J¨ager, Vojkan Mihajlovic, and Tsvetomira Tsoneva. Frontal eeg asymmetry based classification of emotional valence using common spatial patterns. World Academy of Science, Engineering and Technology, 45:373–378, 2010. coan2006capability James A Coan, John JB Allen, and Patrick E McKnight. A capability model of individual differences in frontal eeg asymmetry. Biological psychology, 72(2):198–207, 2006. allen2015frontal John JB Allen and Samantha J Reznik. Frontal eeg asymmetry as a promising marker of depression vulnerability: Summary and methodological considerations. Current opinion in psychology, 4:93–97, 2015. scherer2005emotions Klaus R Scherer. What are emotions? and how can they be measured? Social science infor- mation, 44(4):695–729, 2005. dekel2006online Ofer Dekel, Philip M Long, and Yoram Singer. Online multitask learning. In International Conference on Computational Learning Theory, pages 453–467. Springer, 2006. saha2011online Avishek Saha, Piyush Rai, and Hal Daum´e III Suresh Venkatasubramanian. Online learning of multiple tasks and their relationships. update, 1(1):2, 2011. zhu2021tamingZhu, Z. & Zhou, W. Taming heavy-tailed features by shrinkage. International Conference On Artificial Intelligence And Statistics. pp. 3268-3276 (2021) read2023predictionRead, A., Zhou, W., Saini, S., Zhu, J. & Waljee, A. Prediction of Gastrointestinal Tract Cancers Using Longitudinal Electronic Health Record Data. Cancers. 15, 1399 (2023) read2022predictionRead, A., Zhou, W., Saini, S., Zhu, J. & Waljee, A. PREDICTION OF GASTROINTESTINAL TRACT CANCERS USING LONGITUDINAL ELECTRONIC HEALTH RECORD DATA. GASTROENTEROLOGY. 162, S1045-S1045 (2022) liu2024adaptiveLiu, H., Shen, Y., Zhou, W., Zou, Y., Zhou, C. & He, S. Adaptive speed planning for unmanned vehicle based on deep reinforcement learning. ArXiv Preprint ArXiv:2404.17379. (2024) xin2024mmapXin, Y., Du, J., Wang, Q., Yan, K. & Ding, S. MmAP: Multi-modal Alignment Prompt for Cross-domain Multi-task Learning. Proceedings Of The AAAI Conference On Artificial Intelligence. 38, 16076-16084 (2024) yang2024comparativeYang, Q., Li, P., Shen, X., Ding, Z., Zhou, W., Nian, Y. & Xu, X. A comparative study on enhancing prediction in social network advertisement through data augmentation. ArXiv Preprint ArXiv:2404.13812. (2024) xin2023selfXin, Y., Luo, S., Jin, P., Du, Y. & Wang, C. Self-Training with Label-Feature-Consistency for Domain Adaptation. International Conference On Database Systems For Advanced Applications. pp. 
84-99 (2023) shen2024localizationShen, Y., Liu, H., Liu, X., Zhou, W., Zhou, C. & Chen, Y. Localization through particle filter powered neural network estimated monocular camera poses. ArXiv Preprint ArXiv:2404.17685. (2024) xin2024vmtXin, Y., Du, J., Wang, Q., Lin, Z. & Yan, K. VMT-Adapter: Parameter-Efficient Transfer Learning for Multi-Task Dense Scene Understanding. Proceedings Of The AAAI Conference On Artificial Intelligence. 38, 16085-16093 (2024) li2024exploringLi, P., Yang, Q., Geng, X., Zhou, W., Ding, Z. & Nian, Y. Exploring diverse methods in visual question answering. ArXiv Preprint ArXiv:2404.13565. (2024) wang2024research Wang J, Li X, Jin Y, et al. Research on image recognition technology based on multimodal deep learning[J]. arXiv preprint arXiv:2405.03091, 2024. liu2024image Liu, Tianrui, et al. "Image Captioning in news report scenario." arXiv preprint arXiv:2403.16209 (2024). liu2024rumor Liu, Tianrui, et al. "Rumor Detection with a novel graph neural network approach." arXiv preprint arXiv:2403.16206 (2024). li2024feature Li Z, Huang Y, Zhu M, et al. Feature manipulation for ddpm based change detection[J]. arXiv preprint arXiv:2403.15943, 2024. li2022automated Li S, Mo Y, Li Z. Automated pneumonia detection in chest x-ray images using deep learning model[J]. Innovations in Applied Engineering and Technology, 2022: 1-6. gao2018solution Gao Y, Bu X, Hu Y, et al. Solution for large-scale hierarchical object detection datasets with incomplete annotation and data imbalance[J]. arXiv preprint arXiv:1810.06208, 2018. feng2022beyond Feng W, Bu X, Zhang C, et al. Beyond bounding box: Multimodal knowledge learning for object detection[J]. arXiv preprint arXiv:2205.04072, 2022. chen2022visual Chen S, Lin C, Guan W, et al. Visual Encoding and Debiasing for CTR Prediction[J]. arXiv preprint arXiv:2205.04168, 2022. bu2016attention Bu X, Pei M, Jia Y. Attention Estimation for Input Switch in Scalable Multi-display Environments[C]//Neural Information Processing: 23rd International Conference, ICONIP 2016, Kyoto, Japan, October 16–21, 2016, Proceedings, Part IV 23. Springer International Publishing, 2016: 329-336. bu2021gaia Bu X, Peng J, Yan J, et al. Gaia: A transfer learning system of object detection that fits your needs[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021: 274-283. peng2023gaia Peng J, Chang Q, Yin H, et al. GAIA-Universe: Everything is Super-Netify[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, 45(10): 11856-11868. jiang2024disinformation Jiang B, Tan Z, Nirmal A, et al. Disinformation detection: An evolving challenge in the age of llms[C]//Proceedings of the 2024 SIAM International Conference on Data Mining (SDM). Society for Industrial and Applied Mathematics, 2024: 427-435. tan2024large Tan Z, Beigi A, Wang S, et al. Large Language Models for Data Annotation: A Survey[J]. arXiv preprint arXiv:2402.13446, 2024. yuan2024label Yuan B, Chen Y, Tan Z, et al. Label Distribution Learning-Enhanced Dual-KNN for Text Classification[C]//Proceedings of the 2024 SIAM International Conference on Data Mining (SDM). Society for Industrial and Applied Mathematics, 2024: 400-408.
http://arxiv.org/abs/2406.17870v1
20240625181704
Equidistant dimension of Johnson and Kneser graphs
[ "Jozef Kratica", "Mirjana Čangalović", "Vera Kovačević-Vujčić" ]
math.CO
[ "math.CO", "05C12", "G.2.2" ]
jkratica@mi.sanu.ac.rs mirjana.cangalovic@alumni.fon.bg.ac.rs vera.vujcic@alumni.fon.bg.ac.rs [mi]Mathematical Institute, Serbian Academy of Sciences and Arts, Kneza Mihaila 36/III, 11 000 Belgrade, Serbia [fon]Faculty of Organizational Sciences, University of Belgrade, Jove Ilića 154, 11000 Belgrade, Serbia § ABSTRACT In this paper the recently introduced concept of equidistant dimension eqdim(G) of graph G is considered. Useful property of distance-equalizer set of arbitrary graph G has been established. For Johnson graphs J_n,2 and Kneser graphs K_n,2 exact values for eqdim(J_n,2) and eqdim(K_n,2) have been derived, while for Johnson graphs J_n,3 it is proved that eqdim(J_n,3) ≤ n-2. Finally, exact value of eqdim(J_2k,k) for odd k has been presented. Distance-equalizer set, Equidistant dimension, Johnson graphs, Kneser graphs. [2010]05C12,05C69 § INTRODUCTION AND PREVIOUS WORK The set of vertices S is a resolving (or locating) set of graph G if all other vertices are uniquely determined by their distances to the vertices in S. The metric dimension of G is the minimum cardinality of resolving sets of G. Resolving sets for graphs and the metric dimension were introduced by Slater <cit.> and, independently, by Harary and Melter <cit.>. The concept of doubly resolving set for G has been introduced by Caceres et. al <cit.>. However, recently, several authors have turned their attention in the opposite direction from resolvability, thus trying to study anonymization problems in networks instead of location aspects. A subset of vertices A is a 2-antiresolving set for G if, for every vertex v ∉ A, there exists another different vertex w ∉ A such that v and w have the same vector of distances to the vertices of A <cit.>. The 2-metric antidimension of a graph is the minimum cardinality of 2-antiresolving sets for G. More about this topic can be found in <cit.>. In the same spirit, paper <cit.> introduces new graph concepts that can also be applied to anonymization problems in networks: distance-equalizer set and equidistant dimension. The authors study the equidistant dimension of several classes of graphs, proving that in the case of paths and cycles this invariant is related to a classical problem of number theory. They also show that distance-eqalizer sets can be used for constructing doubly resolving sets, and obtain a new bound for the minimum cardinality of doubly resolving sets of G in terms of the metric dimension and the equidistant dimension of G. In <cit.> it is proved that the equidistant dimension problem is NP-hard in a general case, and eqidistant dimension of lexicographic product of graphs is considered. §.§ Definitions and basic properties All graphs considered in this paper are connected, undirected, simple, and finite. The vertex set and the edge set of a graph G are denoted by V(G) and E(G), respectively. The order of G is |V(G)|. For any vertex v ∈ V(G), its open neighborhood is the set N(v) = {w ∈ V(G) | vw ∈ E(G)} and its closed neighborhood is N[v] = N(v) ⋃{v}. The degree of a vertex v, denoted by deg(v), is defined as the cardinality of N(v). If deg(v) = 1, then we say that v is a leaf, in which case the only vertex adjacent to v is called its support vertex. When deg(v) = |V(G)|-1, we say that v is universal. The maximum degree of G is Δ(G) = max {deg(v) | v ∈ V(G)} and its minimum degree is δ(G) = min {deg(v) | v ∈ V(G)}. If all vertices of G have the same degree r, i.e. Δ(G)=δ(G)=r, we say that graph G is r-regular. 
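Both notions recalled above (resolving sets and distance-equalizer sets, the latter formalized in the following paragraphs) depend only on the graph distance, so they are straightforward to verify computationally. The following Python sketch is illustrative only; the function names are ours and a connected graph is assumed.

```python
from collections import deque
from itertools import combinations

def distance_matrix(adj):
    """All-pairs shortest-path distances of an unweighted, connected graph
    given as an adjacency dict {vertex: set(neighbours)} (BFS from every vertex)."""
    dist = {}
    for s in adj:
        d = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in d:
                    d[v] = d[u] + 1
                    q.append(v)
        dist[s] = d
    return dist

def is_resolving(adj, S, dist=None):
    """S resolves G if distinct vertices have distinct distance vectors to S."""
    dist = dist or distance_matrix(adj)
    S = list(S)
    vectors = [tuple(dist[v][s] for s in S) for v in adj]
    return len(set(vectors)) == len(vectors)

def is_distance_equalizer(adj, S, dist=None):
    """S is a distance-equalizer set if every pair x, y outside S has some
    w in S with d(x, w) = d(y, w)."""
    dist = dist or distance_matrix(adj)
    outside = [v for v in adj if v not in S]
    return all(any(dist[w][x] == dist[w][y] for w in S)
               for x, y in combinations(outside, 2))
```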
The distance between two vertices v,w ∈ V(G), denoted by d(v,w), is the length of a shortest v-w path, and the diameter of G is Diam(G) = max {d(v,w) | v,w ∈ V(G)}. The set of vertices at equal distances from u and v is denoted in the literature by _uW_v (<cit.>). Formally, _uW_v = { w ∈ V(G) | d(u,w) = d(v,w)}. Let n and k be positive integers (n > k) and [n] = {1,2,...,n}. Then k-subsets are subsets of [n] which have cardinality equal to k. The Johnson graph J_n,k is an undirected graph defined on all k-subsets of set [n] as vertices, where two k-subsets are adjacent if their intersection has cardinality equal to k-1. Mathematically, V(J_n,k) = { A | A ⊂ [n], |A|=k} and E(J_n,k) = { AB | A,B ⊂ [n], |A|=|B|=k, |A ⋂ B|=k-1}. It is easy to see that J_n,k and J_n,n-k are isomorphic, so we shall only consider Johnson graphs with n ≥ 2k. The distance between two vertices A and B in J_n,k can be computed by Property <ref>. For A,B ∈ V(J_n,k) it holds d(A,B) = |A ∖ B| = |B ∖ A| = k-|A ⋂ B|. In the special case when n=2k, the distance between the complement A̅ = [2k] ∖ A of a vertex A and a vertex B can be computed by Property <ref>. For A,B ∈ V(J_2k,k) it holds d(A̅,B) = k - d(A,B) = |A ⋂ B|. Considering Property <ref>, it is easy to see that the Johnson graph J_n,k is a k · (n-k)-regular graph of diameter k. The Kneser graph K_n,k is an undirected graph also defined on all k-subsets of set [n] as vertices, where two k-subsets are adjacent if their intersection is the empty set. Mathematically, V(K_n,k) = { A | A ⊂ [n], |A|=k} and E(K_n,k) = { AB | A,B ⊂ [n], |A|=|B|=k, A ⋂ B= ∅}. The Kneser graph is connected only if n > 2k; it is also n-kk-regular. In particular, for k=2, the Kneser graph K_n,2 is the complement of the corresponding Johnson graph J_n,2, and both graphs have diameter 2. Hence, if d_K_n,2(A,B)=1 then d_J_n,2(A,B)=2, and vice versa. Therefore, for A ≠ B it holds d_K_n,2(A,B) = 3 - d_J_n,2(A,B). Let x, y, w ∈ V(G). We say that w is equidistant from x and y if d(x,w) = d(y,w). A subset S of vertices is called a distance-equalizer set for G if for every two distinct vertices x, y ∈ V(G) ∖ S there exists a vertex w ∈ S equidistant from x and y. The equidistant dimension of G, denoted by eqdim(G), is the minimum cardinality of a distance-equalizer set of G. If v is a universal vertex of a graph G, then S = {v} is a minimum distance-equalizer set of G, and so eqdim(G) = 1. Let G be a graph. If S is a distance-equalizer set of G and v is a support vertex of G, then S contains v or all leaves adjacent to v. Consequently: eqdim(G) ≥ | {v ∈ V(G) | v is a support vertex } |. For every graph G of order n ≥ 2, the following statements hold. * eqdim(G) = 1 if and only if Δ(G) = n - 1; * eqdim(G) = 2 if and only if Δ(G) = n - 2. If G is a graph of order n with Δ(G) < n - 2 then eqdim(G) ≥ 3. For every graph G of order n, the following statements hold. * If n ≥ 2, then eqdim(G) = n - 1 if and only if G is a path of order 2; * If n ≥ 3, then eqdim(G) = n - 2 if and only if G ∈{P_3, P_4, P_5, P_6,C_3,C_4,C_5}. If G is a graph of order n ≥ 7, then 1 ≤ eqdim(G) ≤ n-3. For any positive integer k, it holds that eqdim(J_n,k) ≤ n whenever n ∈{2k - 1, 2k + 1} or n > 2k^2. In <cit.>, the exact value of the metric dimension of J_n,2 for n ≥ 6 and an upper bound on the metric dimension of J_n,k for k ≥ 3 are given. § NEW RESULTS §.§ Some properties of distance-equalizer sets of a graph G Let G be a graph, and u and v any vertices from V(G). If S is a distance-equalizer set of G, then S ⋂ ({u,v} ⋃ _uW_v) ≠∅. 
Case 1: u ∈ S Since u ∈ S and u ∈{u,v} ⋃ _uW_v then u is also member of their intersection, i.e. u ∈ S ⋂ ({u,v} ⋃ _uW_v) ≠∅. Case 2: v ∈ S Similarly as in Case 1, since v ∈ S and v ∈{u,v} ⋃ _uW_v then v is also member of their intersection, i.e. v ∈ S ⋂ ({u,v} ⋃ _uW_v) ≠∅. Case 3: u,v ∉ S Since S is a distance-equalizer set of G, and u,v ∈ V(G) ∖ S then (∃ x ∈ S) d(u,x) = d(v,x). Therefore, x ∈ S and x ∈ _uW_v so S ⋂ _uW_v is not empty (since it contains x) implying S ⋂ ({u,v} ⋃ _uW_v) ≠∅. Let G be a graph, and u and v any vertices from V(G). If S is a distance-equalizer set of G and _uW_v = ∅ then u ∈ S or v ∈ S. It should be noted that Lemma <ref> from <cit.> is a consequence of Corollary <ref>. Indeed, if v is a support vertex of G and u is one of leaves adjacent to v, it is obvious that (∀ x ∈ V(G) ∖{u}) d(u,x) = d(v,x) + 1 and 1 = d(u,v) = d(u,u)+1, and, therefore, _uW_v = ∅. If S is a distance-equalizer set of G, by Corollary <ref>, S contains v or all leaves adjacent to v. §.§ Equidistant dimension of J_n,2 The exact value of eqdim(J_n,2) for n ≥ 4 is given by Observation <ref> and Theorem <ref>. By a total enumeration, it is found that * eqdim(J_4,2) = 2 with the corresponding distance-equalizer set S = {{1,2}, {3,4}}; * eqdim(J_5,2) = 3 with the corresponding distance-equalizer set S = {{1,2}, {1,3}, {2,3}}. For n ≥ 6 it holds eqdim(J_n,2) = 3. Step 1: eqdim(J_n,2) ≥ 3 Since J_n,k is k · (n-k)-regular graph, so Δ(J_n,k) = δ(J_n,k) = k · (n-k). For k=2 it follows that Δ(J_n,2) = 2 · (n-2). Since |J_n,2| = n2 = n · (n-1)/2 it is obvious that for n ≥ 5 it holds Δ(J_n,2) = 2 · (n-2) < |J_n,2|-2 = n · (n-1)/2 - 2, so by Corollary <ref> it follows that eqdim(J_n) ≥ 3. Step 2: eqdim(J_n,2) ≤ 3 Let S = {{1,2}, {1,3}, {2,3}}. We will prove that set S is a distance-equalizer set by checking all pairs of vertices X and Y from V(J_n,2) ∖ S. Case 1: {1,2,3}⋂ X = ∅ and {1,2,3}⋂ Y = ∅ Let Z = {1,2}. Then d(X,Z)= 2-|X ⋂ Z| = 2 = 2-|Y ⋂ Z| = d(Y,Z). Case 2: {1,2,3}⋂ X = ∅ and {1,2,3}⋂ Y ∅ Since Y ∉ S then |Y ⋂{1,2,3}| = 1. Let Z = {1,2,3}∖ Y. It is obvious that Z ⊂{1,2,3} and |Z| = 2 implying Z ∈ S. Since {1,2,3}⋂ X = ∅ and Y ⋂ Z = ∅ then d(X,Z)= 2-|X ⋂ Z| = 2 = 2-|Y ⋂ Z| = d(Y,Z). Case 3: {1,2,3}⋂ X ∅ and {1,2,3}⋂ Y = ∅ This case is analogous as Case 2, only swap sets X and Y. Case 4: {1,2,3}⋂ X ∅ and {1,2,3}⋂ Y ∅ and X ⋂ Y ⋂{1,2,3} = ∅ Let Z = {1,2,3}⋂ (X ⋃ Y). It is obvious that Z ⊆{1,2,3}. Since X,Y ∉ S then |X ⋂{1,2,3}| = 1 and |Y ⋂{1,2,3}| = 1 it holds |Z|=2 so Z ∈ S. Therefore, d(X,Z)= 2-|X ⋂ Z| = 1 = 2-|Y ⋂ Z| = d(Y,Z). Case 5: {1,2,3}⋂ X ∅ and {1,2,3}⋂ Y ∅ and X ⋂ Y ⋂{1,2,3}∅ Since X,Y ∉ S it holds |X ⋂ Y ⋂{1,2,3}|=1. Let Z = {1,2,3}∖ X. It is obvious that Z = {1,2,3}∖ Y and X ⋂ Z = Y ⋂ Z = ∅. Therefore, d(X,Z)= 2-|X ⋂ Z| = 2 = 2-|Y ⋂ Z| = d(Y,Z). Since (∀ X,Y ∈ V(J_n,2) ∖ S) (∃ Z ∈ S) d(X,Z) = d(Y,Z), then S is a distance-equalizer set for J_n,2 and thus eqdim(J_n,2) ≤ |S| = 3. From Step 1 and Step 2 it holds eqdim(J_n,2) = 3 for all n ≥ 6. §.§ An upper bound of equidistant dimension of J_n,3 The next theorem gives a tight upper bound of eqdim(J_n,3) for n ≥ 9. The remaining cases when n ∈{6,7,8} are resolved by Theorem <ref> for n=6 and Table <ref> for n=7 and n=8. For n ≥ 9 it holds eqdim(J_n,3) ≤ n-2. Let S = {{1,2,j} | 3 ≤ j ≤ n}. It can be proved that set S is a distance-equalizer set for J_n,3, i.e. for each two vertices X and Y from V(J_n,3) ∖ S, there exists a vertex Z={1,2,l} from S, such that d(X,Z) = d(Y,Z). 
We will consider four cases: Case 1: {1,2}⋂ X = ∅ and {1,2}⋂ Y = ∅ It is easy to see that |{1,2}⋃ X ⋃ Y| ≤ 8. As n ≥ 9, then there exists l ∈{3,4,...,n} such that l ∉ X ⋃ Y. Now, for vertex Z={1,2,l} from S, d(X,Z)= 3-|X ⋂ Z| = 3 = 3-|Y ⋂ Z| = d(Y,Z). Case 2: {1,2}⋂ X ∅ and {1,2}⋂ Y ∅ As X ∉ S and Y ∉ S, then |{1,2}⋂ X| = 1 and |{1,2}⋂ Y| = 1 and, consequently, |{1,2}⋃ X ⋃ Y| ≤ 6. As n ≥ 9, then there exists l ∈{3,4,...,n} such that l ∉ X ⋃ Y. Now, for vertex Z={1,2,l} from S, d(X,Z)= 3-|X ⋂ Z| = 2 = 3-|Y ⋂ Z| = d(Y,Z). Case 3: {1,2}⋂ X ∅ and {1,2}⋂ Y = ∅ As X ∉ S, then |{1,2}⋂ X| = 1 and, consequently, (Y ∖ X) ⋂{1,2} = ∅ and |Y ∖ X| ≥ 1. It means that there exists l ∈{3,4,...,n} such that l ∉ Y ∖ X. Now, for vertex Z={1,2,l} from S, d(X,Z)= 3-|X ⋂ Z| = 2 = 3-|Y ⋂ Z| = d(Y,Z). Case 4: {1,2}⋂ X = ∅ and {1,2}⋂ Y ∅ This case can be reduced to Case 3. Based on all previous cases, for each pair of vertices from V(J_n,3) ∖ S there exists a vertex Z ∈ S such that d(X,Z) = d(Y,Z). Therefore, set S is a distance-equalizer set for J_n,3. As |S| = n-2, then eqdim(J_n,3) ≤ |S| = n-2. §.§ Equidistant dimension of J_2k,k, for odd k Since 2kk is even, then it is possible to make a partitition (P_1,P_2) of V(J_2k,k), such that P_1 ⋂ P_2 = ∅, P_1 ⋃ P_2 = V(J_2k,k) and |P_1| = |P_2| = 1/2·2kk. In the sequel we will use the following partition: P_1 = {X ∈ V(J_2k,k) : |X ⋂{1,2,...,k}| > |X ⋂{k+1,k+2,...,2k}|}, and P_2 = V(J_2k,k) ∖ P_1. It shoud be noted that for odd k it holds |X ⋂{1,2,...,k}| |X ⋂{k+1,k+2,...,2k}|, so P_2 = {X ∈ V(J_2k,k) : |X ⋂{1,2,...,k}| < |X ⋂{k+1,k+2,...,2k}|} and, consequently, |P_1| = |P_2| = 1/2·2kk. For any odd k ≥ 3 it holds eqdim(J_2k,k) = 1/2·2kk. Step 1: eqdim(J_2k,k) ≥1/2·2kk Let us consider 1/2·2kk pairs of vertices (X,Y) from V(J_2k,k), such that X ∈ P_1 and Y = [2k] ∖ X ∈ P_2. For any vertex Z ∈ V(J_2k,k) it holds |Z ⋂ X| + |Z ⋂ Y| = k. Since k is odd, |Z ⋂ X| is odd and |Z ⋂ Y| is even, or vice versa. Therefore, |Z ⋂ X| |Z ⋂ Y| implying d(X,Z) = k-|Z ⋂ X| k - |Z ⋂ Y| = d(Y,Z), so _XW_Y = ∅. According to Corollary <ref>, if S is a distance-equalizer set for graph J_2k,k then either X ∈ S or Y ∈ S, for each pair (X,Y). Since the number of pairs is 1/2·2kk, then |S| ≥1/2·2kk. Step 2: eqdim(J_2k,k) ≤1/2·2kk We shall prove that P_1 is a distance-equalizer set for J_2k,k. For any two vertices Y and Z from P_2 = V(J_2k,k) ∖ P_1, let us construct X ∈ P_1 such that d(Y,X) = d(Z,X). Since |Y| = |Z| = k it follows that |Y ∖ Z| = |Y| - |Y ⋂ Z| = |Z| - |Y ⋂ Z| = |Z ∖ Y|. Additionally, as Y,Z ∈ V(J_2k,k) then |Y ⋂ Z| = |Y⋂Z|. Let U_1 = (Y ⋂ Z) ⋃ (Y⋂Z). It is easy to see that U_1 ⋂ Y = U_1 ⋂ Z and |U_1| is even so k+1-|U_1| is also even. Case 1. If |U_1| < k let a ∈ U_1 be arbitrary index and U_2 = U_1 ∖{a}. Let W_1 and W_2 be any subsets of Y ∖ Z and Z ∖ Y of cardinality k+1-|U_1|/2 elements, respectively. Now let U_3 = U_2 ⋃ W_1 ⋃ W_2. It is obvious that W_1 ⊂ Y, W_1 ⋂ Z = ∅, W_2 ⊂ Z, W_2 ⋂ Z = ∅. Moreover, |W_1| = |W_2|, and therefore |U_3 ⋂ Y| = |U_2 ⋂ Y| + |W_1 ⋂ Y| = |U_2 ⋂ Y| + |W_1| = |U_2 ⋂ Z| + |W_2| = |U_2 ⋂ Z| + |W_2 ⋂ Z| = |U_3 ⋂ Z|. Case 2. If |U_1| > k let U_3 be any subset of U_1 of cardinality k. It is obvious that U_3 ⊂ (Y ⋂ Z) ⋃ (Y⋂Z) so |U_3 ⋂ Y| = |U_3 ⋂ Z|. In both cases |U_3| = k so U_3 ∈ V(J_2k,k). Therefore, in both cases |U_3 ⋂ Y| =|U_3 ⋂ Z| and hence d(U_3,Y) = k-|U_3 ⋂ Y| = k-|U_3 ⋂ Z| = d(U_3,Z). Finally, we construct X as follows. If U_3 ∈ P_1 then X = U_3. 
Otherwise, if U_3 ∈ P_2 then X = U_3∈ P_1, and by Property <ref> it holds d(U_3,Y) = k-d(U_3,Y) = |U_3 ⋂ Y| = |U_3 ⋂ Z| = k-d(U_3,Z) = d(U_3,Z). As, d(Y,X) = d(Z,X) and X ∈ P_1, it follows that P_1 is a distance-equalizer set for graph J_2k,k. Therefore, eqdim(J_2k,k) ≤ |P_1| = 1/2·2kk. §.§ Equidistant dimension of K_n,2 Exact value for eqdim(K_n,2) is given in Theorem <ref>, and it is equal to eqdim(J_n,2) = 3. eqdim(K_n,2) = 3. Step 1: eqdim(K_n,2) ≥ 3 As stated in Section 1, Kneser graph K_n,k exists only for n > 2 · k implying that for k=2 all Kneser graphs K_n,2 satisfy n ≥ 5. Similarly as for Johnson graphs, Kneser graph K_n,k is n-kk-regular graph, so Δ(K_n,k) = δ(K_n,k) = n-kk. For k=2 it follows that Δ(K_n,2) = (n-2)(n-3)/2. Since |K_n,2| = n2 = n · (n-1)/2 it is obvious that for n ≥ 5 it holds 4 · n > 10 so (n-2)(n-3) = n^2-5n+6 < n^2-n-4 = n(n-1)-4 implying n-22 < n2-2 which means Δ(K_n,2) = n-22 < |K_n,2|-2 = n2-2, so by Corollary <ref> it follows that eqdim(K_n,2) ≥ 3. Step 2: eqdim(J_n,2) ≤ 3 As already noticed J_n,2 = K_n,2 and Diam(J_n,2) = Diam(K_n,2) = 2 so V(J_n,2) = V(K_n,2) and for each two vertices A,B ∈ V(K_n,2) with A B it holds d(A,B) = 3 - d_J_n,2(A,B), where d(A,B) and d_J_n,2(A,B) are distances between A and B in Kneser graph K_n,2 and Johnson graph J_n,2, respectivelly. Let S = {{1,2}, {1,3}, {2,3}}, and X and Y are any vertices from V(K_n,2) ∖ S. Since V(J_n,2) = V(K_n,2), and by Theorem <ref>, the same set S = {{1,2}, {1,3}, {2,3}} is proved to be a distance-equalizer set for graph J_n,2, then (∀ X,Y ∈ V(J_n,2) ∖ S) (∃ Z ∈ S) d_J_n,2(X,Z) = d_J_n,2(Y,Z). It follows that d(X,Z) = 3 - d_J_n,2(X,Z) = 3 - d_J_n,2(Y,Z) = d(Y,Z). Therefore, the same set S is also a distance-equalizer set for graph K_n,2. From Step 1 and Step 2 it holds eqdim(K_n,2) = 3. §.§ Some other individual exact values It is interesting to examine values of eqdim(J_n,k) and eqdim(K_n,k) in cases which are not covered by the obtained theoretical results presented above. Table <ref> contains such values for Johnson and Kneser graphs up to 84 vertices obtained by a total enumeration. Since Kneser graphs are not connected for n=2k, graph K_8,4 is not connected, which is denoted by ”-”. § CONCLUSIONS In this paper, equidistant dimensions of Johnson and Kneser graphs are considered. Exact values eqdim(J_n,2)=3, eqdim(J_2k,k)=1/2·2kk for odd k and eqdim(K_n,2)=3 are found. Moreover, it is proved that n-2 is a tight upper bound for eqdim(J_n,3). Further work can be directed to finding equidistant dimension of other interesting classes of graphs. Also, it would be interesting to develop exact and/or heuristic approaches for solving equidistant dimension problem. elsarticle-num
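For completeness, the individual exact values reported in the table above can be reproduced by a direct total enumeration. The sketch below is illustrative only (it reuses the distance_matrix and is_distance_equalizer helpers sketched in Section 1.1, and function names are ours); it is practical only for the small Johnson and Kneser graphs considered there.

```python
from itertools import combinations

def johnson_graph(n, k):
    """J_{n,k}: k-subsets of [n], adjacent iff the intersection has size k-1."""
    V = [frozenset(c) for c in combinations(range(1, n + 1), k)]
    return {A: {B for B in V if len(A & B) == k - 1} for A in V}

def kneser_graph(n, k):
    """K_{n,k}: k-subsets of [n], adjacent iff disjoint (connected for n > 2k)."""
    V = [frozenset(c) for c in combinations(range(1, n + 1), k)]
    return {A: {B for B in V if not (A & B)} for A in V}

def eqdim_by_enumeration(adj):
    """Smallest cardinality of a distance-equalizer set, by total enumeration.
    Uses the distance_matrix / is_distance_equalizer helpers sketched earlier."""
    dist = distance_matrix(adj)
    vertices = list(adj)
    for size in range(1, len(vertices) + 1):
        for S in combinations(vertices, size):
            if is_distance_equalizer(adj, set(S), dist):
                return size
    return len(vertices)

# e.g. eqdim_by_enumeration(johnson_graph(6, 2)) should return 3,
# matching the theorem eqdim(J_{n,2}) = 3 for n >= 6.
```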
http://arxiv.org/abs/2406.19362v1
20240627174335
STAL3D: Unsupervised Domain Adaptation for 3D Object Detection via Collaborating Self-Training and Adversarial Learning
[ "Yanan Zhang", "Chao Zhou", "Di Huang" ]
cs.CV
[ "cs.CV" ]
IEEE Transactions on Intelligent Vehicles, Vol. XX, No. X, May 2024 Zhang et al.: STAL3D: Unsupervised Domain Adaptation for 3D Object Detection via Collaborating Self-Training and Adversarial Learning STAL3D: Unsupervised Domain Adaptation for 3D Object Detection via Collaborating Self-Training and Adversarial Learning Yanan Zhang, Chao Zhou, and Di Huang, Senior Member, IEEE This work is partly supported by the National Natural Science Foundation of China (62022011), the Research Program of State Key Laboratory of Software Development Environment, and the Fundamental Research Funds for the Central Universities. (Corresponding author: Di Huang.) Y. Zhang, C. Zhou and D. Huang are with the State Key Laboratory of Software Development Environment, School of Computer Science and Engineering, Beihang University, Beijing 100191, China. D. Huang is also with the Hangzhou Innovation Institute, Beihang University, Hangzhou 310051, China (e-mail: zhangyanan@buaa.edu.cn; zhouchaobeing@buaa.edu.cn; dhuang@buaa.edu.cn). July 1, 2024 ============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================ § ABSTRACT Existing 3D object detection suffers from expensive annotation costs and poor transferability to unknown data due to the domain gap, Unsupervised Domain Adaptation (UDA) aims to generalize detection models trained in labeled source domains to perform robustly on unexplored target domains, providing a promising solution for cross-domain 3D object detection. Although Self-Training (ST) based cross-domain 3D detection methods with the assistance of pseudo-labeling techniques have achieved remarkable progress, they still face the issue of low-quality pseudo-labels when there are significant domain disparities due to the absence of a process for feature distribution alignment. While Adversarial Learning (AL) based methods can effectively align the feature distributions of the source and target domains, the inability to obtain labels in the target domain forces the adoption of asymmetric optimization losses, resulting in a challenging issue of source domain bias. To overcome these limitations, we propose a novel unsupervised domain adaptation framework for 3D object detection via collaborating ST and AL, dubbed as STAL3D, unleashing the complementary advantages of pseudo labels and feature distribution alignment. Additionally, a Background Suppression Adversarial Learning (BS-AL) module and a Scale Filtering Module (SFM) are designed tailored for 3D cross-domain scenes, effectively alleviating the issues of the large proportion of background interference and source domain size bias. Our STAL3D achieves state-of-the-art performance on multiple cross-domain tasks and even surpasses the Oracle results on Waymo → KITTI and Waymo → KITTI-rain. 3D object detection, autonomous driving, unsupervised domain adaptation. 
§ INTRODUCTION 3Dobject detection is crucial for the perception systems of autonomous driving, aiming to classify and localize objects in the real-world 3D space, providing essential groundwork for higher-level tasks such as trajectory prediction and path planning. Recently, significant progress <cit.> has been made in this task due to the advancement of deep learning techniques and the emergence of large-scale annotated datasets <cit.> for autonomous driving. However, due to the presence of domain shift, when applying models trained on known domains directly to new domains, performance significantly deteriorates, hindering the generalizability and transferability of detectors across different scenes. To overcome such challenge, Unsupervised Domain Adaptation (UDA) strives to transfer knowledge from a labeled source domain to an unlabeled target domain. While there have been numerous studies on UDA in the field of 2D object detection, these methods are not applicable to sparse, unordered, and irregular point clouds. As a result, 3D UDA methods have not been thoroughly explored yet. The existing cross-domain 3D detection methods are mainly divided into two paradigms, namely the Self-Training (ST) paradigm and the Adversarial Learning (AL) paradigm. As shown in Fig. <ref>(a), the ST paradigm <cit.> is the simplest and most straightforward scheme to narrowing the domain gap in domain adaptation, which primarily consists of a source domain pre-training stage and a target domain self-training stage. During the initial stage, the model is trained under supervision using labeled data from the source domain. Subsequently, the parameters of the trained model are loaded to generate pseudo-labels for the target domain, and the model is iteratively updated and trained using these pseudo-labels. To accommodate the peculiarities of 3D data, certain methods take into account the disparities in the authentic 3D physical size distribution of objects, proposing size distribution regularization <cit.> and size data augmentation techniques <cit.>. Alternatively, some approaches consider the differences in LiDAR sensor scanning beam patterns and devise beam resampling <cit.> or LiDAR distillation <cit.> strategies. However, these methods only consider domain differences along a single dimension, overlooking the significant domain differences caused by various factors such as weather, road conditions, sensor types, etc. Due to the inherent lack of feature distribution alignment process in ST, when there are significant domain disparities, they will generate low-quality pseudo-labels. These low-quality pseudo-labels have a detrimental effect on model optimization, leading to error accumulation during the iterative process of pseudo-label generation and model updates, ultimately resulting in performance degradation. As shown in Fig. <ref>(b), the AL paradigm <cit.> comprises a shared feature extractor and a domain discriminator, which simultaneously take annotated data from the source domain and unlabeled data from the target domain as input. Through Gradient Reversal Layer (GRL) and domain discriminator, the source and target domains minimize the distribution discrepancy under the effect of adversarial loss. Additionally, the labeled source domain is also optimized through detection loss. 
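To make the GRL mechanism described above concrete, a minimal PyTorch-style sketch is given below; the module names, feature sizes and the 2D (BEV) layout are illustrative assumptions rather than the implementation of any particular method discussed here.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies the gradient by -lambda in the
    backward pass, so the feature extractor is trained to fool the domain
    classifier while the classifier itself is trained normally."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

class DomainClassifier(nn.Module):
    """Per-location domain classifier over BEV features (illustrative sizes)."""
    def __init__(self, channels=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 128, 1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 1, 1))

    def forward(self, feat, lam=1.0):
        return self.net(grad_reverse(feat, lam))   # logits: 1 = source domain

def adversarial_loss(domain_logits, is_source):
    """Binary cross-entropy toward the true domain label; the GRL flips the
    gradient reaching the backbone, yielding the min-max game."""
    target = torch.full_like(domain_logits, 1.0 if is_source else 0.0)
    return nn.functional.binary_cross_entropy_with_logits(domain_logits, target)
```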
Inspired by learning domain-invariant feature representations in 2D object detection, some methods have explored the application of adversarial learning to 3D domain adaptation from perspectives such as range perception <cit.>, scale perception <cit.>, and category perception <cit.>. However, due to the lack of pseudo-labels, the target domain only utilizes adversarial loss L_adv for optimization, while the source domain can optimize using both detection loss L_det and adversarial loss L_adv simultaneously. The asymmetric optimization losses hinder features from aligning to a balanced position between two domains, leading to a source-bias issue that significantly compromises the detectors' generalization capability in the target domain. Through the above analysis, we can observe that: (1) The ST paradigm excels in providing pseudo-label supervision signals for unlabeled target domains, yet its inherent limitation lies in the lack of feature distribution alignment, posing challenges in simultaneously adapting to multiple domain disparities, especially in the presence of significant domain disparities, leading to the generation of low-quality pseudo-labels; (2) The advantage of the AL paradigm lies in its capability to address disparities from multiple domains and significant domain gaps through feature distribution alignment. However, its drawback is the lack of supervision signals from the target domain, forcing the formation of asymmetric optimization losses, which can lead to the issue of source domain bias. Motivated by the strong complementarity between the two paradigms, as shown in Fig. <ref>(c), we propose a novel unsupervised domain adaptation framework for 3D object detection via collaborating ST and AL, unleashing the potential advantages of pseudo-labels and feature distribution alignment. From the perspective of ST to AL, our ST approach can generate pseudo-labels for unlabeled target domain data, which then participate in the training process of AL. By obtaining additional pseudo-label supervision signals, AL forms symmetric optimization losses, namely adversarial loss and detection loss for both the source and target domains, which can effectively address the issue of source domain dominance previously caused by asymmetric gradient optimization. From the perspective of AL to ST, our AL approach utilizes gradient reversal layer and domain discriminator to incorporate additional feature distribution alignment constraints into the feature extraction network of ST, thereby forming domain-invariant features. Relying on domain-invariant feature representation, ST can generate higher-quality pseudo-labels even when confronted with multiple domain disparities or significant domain gaps, thus effectively alleviating the accumulation of errors during the iterative process. Additionally, the task of 3D cross-domain object detection exhibits some distinct characteristics compared to traditional 2D tasks: (1) In 3D scenes, the proportion of background is significantly larger than that of foreground, leading to potential background interference; (2) 3D detection reflects the real size of the object in the physical world, but the size distribution of the same category in different domains varies greatly, resulting in a unique source domain size bias problem. 
Regarding the above two issues, we design a Background Suppression Adversarial Learning (BS-AL) module and a Scale Filtering Module (SFM) to alleviate the issues of the large proportion of background interference and source domain size bias, respectively. In summary, our contributions are as follows: * We point out the strong complementarity between ST and AL and propose a novel collaborative STAL3D framework for cross-domain 3D object detection, unleashing the potential advantages of pseudo-labels and feature distribution alignment. * We design a Background Suppression Adversarial Learning (BS-AL) module and a Scale Filtering Module (SFM) tailored for 3D cross-domain scenes, effectively alleviating the issues of the large proportion of background interference and source domain size bias. * We conduct extensive experiments on multiple datasets for three categories. The proposed STAL3D consistently outperforms the strong baseline by large margins, highlighting its effectiveness. § RELATED WORK §.§ LiDAR-based 3D Object Detection LiDAR-based 3D object detection techniques can be broadly classified into three categories: Point-based methods, Voxel-based methods and Point-Voxel based methods. Point-based methods <cit.>, employing PointNet <cit.> as the backbone network, directly extract geometric features from raw point clouds to accomplish detection tasks. Recent advancements <cit.> in this area have focused on enhancing performance through the design of more effective point sampling strategies. However, these approaches often necessitate time-consuming point sampling and neighbor search operations. Voxel-based methods <cit.> typically partition point clouds into regular grid structures and employ 3D convolutional backbones to extract features. Recent advancements <cit.> have also explored leveraging Transformer architectures to enhance the representation capabilities of voxel features by capturing long-range dependency relationships. While computationally efficient, voxelization inevitably introduces quantization loss. Point-voxel based methods <cit.>, on the other hand, endeavor to amalgamate the advantages of both voxel-based and point-based approaches. However, existing approaches overlook domain discrepancies across different 3D scenes, making them almost inapplicable to unseen environments. In this paper, we investigate domain adaptation in 3D object detection, which effectively enhances the domain generalization capabilities of leading 3D detectors. §.§ Cross-domain 3D Object Detection Existing cross-domain 3D detection methods primarily fall into two paradigms: Self-Training and Adversarial Learning. The Self-Training (ST) paradigm mainly focuses on leveraging annotated data from the source domain to pre-train robust initial models and optimize the quality of generated pseudo-labels. Zoph et al. <cit.> leverage techniques like data distillation and self-training with data augmentation to mitigate confirmation bias and improve pseudo-label quality. Wang et al. <cit.> first claim that the size distribution of objects in different datasets is a key factor influencing the model's domain adaptation performance and propose a simple strategy to correct sizes using statistical information. ST3D <cit.> introduces a data augmentation technique to enhance the robustness of pre-trained models to target sizes and proposes a quality-aware triplet memory bank to refine pseudo-labels. 
ST3D++ <cit.> conducts further analysis on pseudo-label noise and proposes a pseudo-label denoised self-training pipeline from pseudo-label generation to model optimization. DTS <cit.> proposes a density-insensitive cross-domain approach to mitigate the impact of domain gap resulting from varying density distributions. AVP <cit.> explicitly leverages cross-domain relationships to efficiently generate high-quality samples, thereby mitigating domain shifts. The Adversarial Learning (AL) paradigm strives to reduce domain gap by better aligning the feature distributions of the source and target domains. Wang et al. <cit.> for the first time attempt to study adaptation for 3D object detection in point clouds, which combine fine-grained local adaptation and adversarial global adaptation to improve LiDAR-based far-range object detection. SRDAN <cit.> additionally introduces domain alignment techniques in both scale and range, exploiting the geometric characteristics of point clouds for aligning feature distributions. 3D-CoCo <cit.> utilizes a contrastive learning mechanism to minimize feature distances within the same category across various domains while maximizing feature distances between different categories, facilitating the alignment of feature distributions. However, existing methods have not fully explored the advantages, disadvantages, and complementary effects of ST and AL paradigms. In this paper, we conduct an in-depth analysis of the inherent synergy between ST and AL and propose a novel framework to unleash the potential advantages of pseudo-labels and feature distribution alignment. § METHOD §.§ Framework Overview As shown in Fig. <ref>, the framework primarily consists of two stages: source domain pre-training stage and self-training and adversarial learning stage. In the source domain pre-training stage, we conduct supervised training using annotated source data to obtain initial parameters. During the self-training and adversarial learning stage, in the target domain, we use pre-trained parameters from stage (I) to generate pseudo-labels. Simultaneously, Background Suppression Adversarial Learning Module (BS-AL) is applied to perform feature distribution alignment for source and target domains. Additionally, Scale Filtering Module (SFM) is applied to alleviate the issue of source domain size bias. In the scenario of unsupervised domain adaptation, we are provided with point cloud data from a single labeled source domain 𝒟_S = {(P^s_i, L^s_i)}^n_s_i=1 and an unlabeled target domain 𝒟_T = {P^t_i}_i=1^n_t, where n_s and n_t denote the number of samples (P is the point clouds, L is the corresponding label) from the source and target domains, respectively. The 3D box annotation for the i-th point cloud in the source domain is represented as L^s_i= {b_j}^B_i_j=1, with b_j = (x, y, z, w, l, h, θ, c)∈ℝ^8, where B_i indicates the total number of labeled boxes in P^s_i. Here, (x, y, z) denotes the center location of the box, (w, l, h) represents the box dimensions, θ signifies the box orientation, and c ∈{1, ..., C} denotes the object category. The objective of the domain adaptive detection task is to train a model F utilizing 𝒟_S and 𝒟_T, with the aim of maximizing performance on 𝒟_T. The training process is outlined in the following Algorithm <ref>. §.§ Source Domain Pre-training STAL3D starts from training a 3D object detector on labeled source data {(P^s_i, L^s_i)}^n_s_i=1. 
The pre-trained model learns how to perform 3D detection on source labeled data and is further adopted to initialize object predictions for the target domain unlabeled data. As shown in Fig. <ref>(I), following <cit.>, we first use Random Object Scale (ROS) to initially alleviate the domain disparities in size distribution by data augmentation. Let {p_i}_i=1^n_b denote all the points within the annotated box B_b, where p_i is represented as the coordinates of the point cloud (p_i^x, p_i^y, p_i^z). (c_b^x, c_b^y, c_b^z) represents the center point of the annotated box B_b, and R is the rotation matrix. Then, {p_i}_i=1^n_b can be expressed in the target's center coordinate system as follows: (p_i^l, p_i^w, p_i^h) = (p_i^x-c_b^x, p_i^y-c_b^y, p_i^z-c_b^z) · R After obtaining the coordinates in the target's center coordinate system, a random scaling factor within a certain range (r_l, r_w, r_h) is set. The augmented point cloud can be represented as p_i^aug, as shown in Equation <ref>. This augmentation is applied to the source domain data for pre-training, resulting in the pre-trained parameters of the model θ_ros. p_i^aug = (r_lp_i^l, r_wp_i^w, r_hp_i^h) · R^T + (c_b^x, c_b^y, c_b^z) Our framework consists of two key stages: source domain pre-training and iterative self-training. The purpose of the source domain pre-training stage is to provide a robust initialization model for the iterative self-training stage. Additionally, unlike 2D object detection tasks, 3D object detection reflects the real sizes of objects in 3D physical space, and differences in the distribution of object sizes across datasets can affect the training of the pre-trained model, thereby influencing the initialization effect of iterative self-training. Therefore, we consider using this random scaling strategy to augment the object point cloud during the source domain pre-training phase to improve the training quality of the initial model. §.§ Self-Training With the trained detector, the self-training step is to generate pseudo labels and perform iterative refinement for the unlabeled target data. As shown in Fig. <ref>(II), we introduce the Pseudo-Label Memory Bank Integration module. This module takes the pseudo-labels from the k-th stage {L̂_i^t}_k^n_l and the pseudo-labels stored in the memory bank {M̂_i^t}_k-1^n_m as input, and outputs the integrated memory bank pseudo-labels {M̂_i^t}_k, where n_l and n_m denote the number of pseudo-labels from the present model and the memory bank, respectively. Specifically, we calculate a 3D IoU matrix A={a_ef}∈ℝ^n_m× n_l between the two sets of labels. For the e-th target box in the memory bank, its matched target box is determined as e^'=argmax_f(a_ef). If a_ee^'≥ 0.1, the two boxes are considered a match, and the label with the higher score from {M̂_e^t}_k-1 and {L̂_e'^t}_k is selected as the new pseudo-label and stored in the memory bank. If a_ee^' < 0.1, we set up an additional buffer in the memory bank, caching these target boxes and maintaining them using a queue. It is worth noting that since our STAL3D framework introduces feature distribution alignment for ST, it enables our ST to better cope with large domain disparities, thereby reducing the accumulation of errors during the iteration process caused by low-quality pseudo labels.
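Both ingredients of these two stages admit compact sketches. The Python snippet below is illustrative only: the function names, the scale range and the handling of unmatched boxes are our simplifications and do not correspond to the released implementation.

```python
import numpy as np

def random_object_scaling(points, box_center, box_rotation, scale_range=(0.9, 1.1)):
    """ROS sketch: transform the points of one annotated object into the box
    frame, scale each axis by an independent random factor, and map back.
    `scale_range` is an illustrative choice, not the value used in the paper."""
    r = np.random.uniform(*scale_range, size=3)              # (r_l, r_w, r_h)
    local = (points - box_center) @ box_rotation             # into the box frame
    return (local * r) @ box_rotation.T + box_center         # scaled, back to world

def merge_pseudo_labels(memory_boxes, memory_scores, new_boxes, new_scores, iou_fn, thr=0.1):
    """Memory-bank integration sketch: for every cached box keep whichever of
    (cached box, best-overlapping new box) has the higher confidence; unmatched
    cached boxes are returned separately as candidates for the queue buffer."""
    kept, unmatched = [], []
    iou = np.array([[iou_fn(m, n) for n in new_boxes] for m in memory_boxes])
    for e, (m_box, m_score) in enumerate(zip(memory_boxes, memory_scores)):
        if len(new_boxes) and iou[e].max() >= thr:
            f = int(iou[e].argmax())
            kept.append(m_box if m_score >= new_scores[f] else new_boxes[f])
        else:
            unmatched.append(m_box)
    return kept, unmatched
```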
§.§ Background Suppression Adversarial Learning As shown in Fig. <ref>, we employ adversarial learning to align the feature distributions of the source and target domains. The detector is divided into a backbone network G and a detection head network H. For the feature map F∈ℝ^H × W × L × d generated by the backbone network, a domain classifier D is introduced, and a min-max adversarial game loss is constructed. Specifically, the optimization direction for the domain classifier D is to distinguish the source domain from the target domain, minimizing the domain classification loss. Conversely, the optimization direction for the backbone network G is to make it difficult for the domain classifier D to distinguish the source of the features, maximizing the domain classification loss. To achieve end-to-end training, we employ a Gradient Reversal Layer (GRL) <cit.> to connect G and D; the gradient is inverted when passing through the GRL, so that both networks can be optimized jointly in the standard way. When training converges, the backbone network G extracts domain-invariant feature representations, thereby accomplishing feature distribution alignment. Let 𝒟_s and 𝒟_t represent the source and target domain data respectively, θ_G and θ_D represent the parameters of the backbone network and the domain classifier, and D(·)^(h,w,l) represents the probability that the feature at position (h,w,l) in the feature map comes from the source domain. The loss function in this stage can be expressed as: ℒ_adv = max_θ_Gmin_θ_D -𝔼_x_s∼𝒟_slogD(G(x_s))-𝔼_x_t∼𝒟_tlog(1-D(G(x_t))) where x_s and x_t denote inputs from the source and target domains, and 𝔼 denotes expectation. In the context of 3D object detection, the foreground region occupies a small proportion of the entire scene, accounting for only about 5% <cit.>. Additionally, in 3D object detection tasks, the foreground region is typically much more important than the background, as it contains much richer semantic information. However, a naive domain classifier does not distinguish between foreground and background, and performing feature distribution alignment with such a classifier can lead to a long-tail problem, causing a decrease in focus on the foreground. Therefore, the Feature Richness Score (FRS) <cit.> of the feature map is utilized as semantic foreground guidance to address this issue. The FRS is then used to apply attention weighting to the adversarial loss based on the foreground region. As shown in Fig. 
S_h,w,l=max_i∈[1, C × N_dir]σ(p_h,w,l^i) In consideration of the issue that the background occupies a much higher proportion than the foreground and contains less informative content in 3D object detection tasks, the algorithm divides the entire scene into two regions: a learning region (dominated by the foreground) and a suppression region (dominated by the background). The feature richness score is used to guide this division. We considers the voxels with feature richness scores in the top k% as part of the learning region, while the remaining voxels are considered part of the suppression region. Additionally, the feature richness scores of the suppression region are set to 0. This is expressed in Equation <ref>. Ŝ_h,w,l = S_h,w,l, if S_h,w,l in top k% 0, otherwise. After obtaining the region partition map Ŝ∈ℝ^H × W × L, we proceed to weight the original adversarial training loss using this feature richness score map, resulting in the final adversarial loss with region-based suppression: ℒ_rs = ∑_h, w, lŜ_h,w,l·ℒ_adv^h,w,l It is worth noting that due to the introduction of ST process in our STAL3D framework for AL, pseudo-label supervised signals in the target domain can be obtained during the training process, resulting in symmetric optimization loss in AL during the training process, which can effectively alleviate the source domain bias problem. §.§ Scale Filtering Module Compared to pseudo-labels, source domain labels can be regarded as noise-free label, so introducing source domain label supervised signals can theoretically assist in the self-training process. In 3D object detection, the size of objects reflects their dimensions in the real world, with different domains exhibiting significant variations in size distribution. Therefore, directly training the network using source domain label information poses a critical challenge as the model may gradually overfit to the source domain size due to inter-domain size differences throughout the training process, commonly referred to as the issue of source domain size bias. When dealing with source domain size bias, we begins by addressing the loss design of the 3D object detection model. Taking the single-stage network SECOND <cit.> as an example, its loss design primarily encompasses three components: classification loss, angle classification loss, and regression loss. For the classification loss, the model employs Focal Loss <cit.> to tackle the foreground-background class imbalance. Due to the strong shape similarity of 3D objects, utilizing target domain pseudo-labels for classification loss might introduce additional noise. Therefore, the model does not compute the classification loss for the pseudo-labels of the target domain data. For angle classification loss, the model employs the cross-entropy loss. As for the regression loss, we employs the Smooth L1 loss to calculate the regression loss. The regression targets, denoted as (x, y, z, h, w, l, θ), are normalized and encoded as follows: 0.99!x_t=x_g-x_a/d_a, y_t=y_g-y_a/d_a, z_t=z_g-z_a/h_a, θ_t = sin(θ_g-θ_a), h_t = log(h_g/h_a), w_t = log(w_g/w_a), l_t = log(l_g/l_a), where x, y, and z represent the center point coordinates, h, w, and l represent height, width, and length, and θ represents the rotation angle around the z-axis. Subscripts t, a, and g correspond to encoded value, anchor value, and ground truth annotation value, respectively. d_a=√((l_a)^2+(w_a)^2) represents the diagonal length of the anchor box's width and height. 
When use of source domain data for predicting object box sizes lacks inter-domain consistency and can lead to severe over-fitting. However, object box localization and angle prediction exhibit inter-domain consistency. Therefore, as shown in Fig. <ref>, we filter out regression deviations for the target box sizes h_t, w_t, l_t, and only employs x_t, y_t, z_t, θ_t as regression targets. Although this scale filtering design is relatively simple, it has been found to be very effective. As shown in Fig. <ref>, SFM can effectively alleviate the issue of source domain size bias. Based on this, the overall optimization objective function of the model can be expressed as follows: 0.99!ℒ = λ_1ℒ_FL^𝐒 + λ_2ℒ_reg-f^𝐒,𝐓 + λ_3ℒ_IoU^𝐒,𝐓 + λ_4ℒ_cls-dir^𝐒,𝐓 + λ_5ℒ_rs^𝐒,𝐓 where ℒ_FL, ℒ_reg-f, ℒ_IoU, ℒ_cls-dir, and ℒ_rs represent the Focal Loss classification loss, filtered regression loss, IoU prediction loss, box orientation classification loss, and region-based adversarial loss with background suppression, respectively. The superscripts 𝐒 and 𝐓 denote the source and target domains, respectively. § EXPERIMENTS §.§ Experimental Setup Datasets. Our experiments are conducted on four extensively used LiDAR 3D object detection datasets: KITTI <cit.>, Waymo <cit.>, nuScenes <cit.>, and Lyft <cit.>. Accordingly, we assess domain adaptive 3D object detection models across the following five adaptation tasks: Waymo → KITTI, Waymo → Lyft, Waymo → nuScenes, nuScenes → KITTI and Lyft → KITTI. We tackle the domain shift introduced by adverse weather conditions by simulating rain on the KITTI dataset using the physics-based lidar weather simulation algorithm proposed in <cit.>. By sampling from a range of rain rates spanning from 0 mm/hr to 100 mm/hr to simulate realistic adverse weather conditions, each sample is augmented with artifacts typical of lidar data captured in rainy weather. Rainy conditions can induce significant domain disparities, resulting in a notable degradation in the quality of point cloud data for vehicles. So, we add two additional domain shift tasks, namely Waymo → KITTI-rain and Lyft → KITTI-rain. Comparison Methods. (i) Source Only: Directly evaluates the source domain pre-trained model on the target domain; (ii) SN <cit.>: Pioneering weakly-supervised domain adaptation method for 3D object detection, incorporating statistical object size information from the target domain; (iii) ST3D <cit.> and ST3D++ <cit.>: State-of-the-art methods based on self-training; (iv) Oracle: Fully supervised model trained exclusively on the target domain. Evaluation Metric. We adhere to the official KITTI metric and present the average precision (AP) in both the bird's-eye view (BEV) and 3D over 40 recall positions. The mean average precision is assessed with an IoU threshold of 0.7 for cars and 0.5 for pedestrians and cyclists. Additionally, we quantify the closure of the performance gap from Source Only to Oracle, denoted as closed gap =AP_model - AP_source only/AP_oracle - AP_source only× 100%. Implementation Details. We validate our STAL3D method using SECOND-IoU <cit.>. Pretraining of our detectors on the source domain follows the training settings outlined in the widely-used point cloud detection codebase OpenPCDet <cit.>. During the subsequent self-training stage on the target domain, we employ Adam with a learning rate of 1.5 × 10^-3 and utilize a one-cycle scheduler for fine-tuning the detectors over 30 epochs. During the generation of pseudo-labels, the hyperparameter φ is set to 0.2. 
In the Focal loss, α is set to 0.25 and γ is set to 2. In the region suppression, k is set to 20%. For the model optimization objective, λ_1 is set to 1.0, λ_2 is set to 2.0, λ_3 is set to 1.0, λ_4 is set to 0.2, and λ_5 is set to 1.0. All experiments are accomplished on 4 NVIDIA Tesla V100 GPUs. §.§ Main Results Quantitative results. As shown in Tab. <ref>, we compare the performance of our STAL3D with Source Only, SN <cit.>, ST3D <cit.>, ST3D++ <cit.> and Oracle on five adaptation tasks. We can clearly observe that STAL3D consistently improves the performance on Waymo → KITTI, Waymo → Lyft, Waymo → nuScenes, nuScenes → KITTI and Lyft → KITTI by a large margin of 23.14%, 5.84%, 5.01%, 24.17% and 15.47% in terms of mAP_3D, which largely close the performance gap between Source Only and Oracle. When comparing with the latest SOTA method ST3D++, our STAL3D exhibits superior performance in terms of mAP_3D for all five adaptation tasks, with improvements of 3.64%, 1.42%, 1.77%, 1.94% and 2.58%, respectively. We attribute these performance improvements to our approach, which unifies self-training and adversarial learning paradigms into one framework, unleashing the potential complementary advantages of pseudo-labels and feature distribution alignment. Additionally, the BS-AL and SFM tailored for 3D cross-domain scenes, effectively alleviating the issues of the large proportion of background interference and source domain size bias. The domain shift caused by adverse weather is a relatively difficult benchmark in 3D cross-domain settings, as special weather not only affect the data generation mode of LiDAR acquisition equipment, but also generate a large number of noise points. As shown in Tab. <ref>, it is worth noting that the proposed STAL3D gains significantly more performance for the Waymo → KITTI-rain and Lyft → KITTI-rain (even surpassing Oracle), indicating the STAL3D is more effective for adapting to 3D scenes with larger environmental gaps. Overall, the proposed STAL3D excels all baselines on both mAP_3D and mAP_BEV across all scenarios of 3D adaptation tasks. We contend that in the presence of significant domain gaps, the self-training paradigm based on pseudo labels, owing to its inherent incapacity to align feature spaces, tends to generate low-quality pseudo labels, thereby falling into the source domain bias trap. Our STAL3D collaborates self-training with adversarial learning, achieving feature distribution alignment between source and target domain, so it can better cope with such challenge. Qualitative results. We also compare the visual quality of the results against that of the source-only network on nuScenes → KITTI and Waymo → KITTI-rain. As shown in Fig. <ref> and Fig. <ref>, our STAL3D can improve the performance of cross-domain 3D object detection from three aspects: (1) alleviates the issue of source domain size bias; (2) reduces missed detections; (3) reduces false positives. The source only trained model frequently suffers from serious false positives. Our STAL3D collaborates the self-training and adversarial learning, aligning feature distribution for large domain gaps through adversarial learning and using self-training to generate high-quality pseudo labels. And a concise and effective scale filtering module is tailored for the source domain size bias. Thanks to the above advantages, STAL3D achieves excellent cross-domain detection results. §.§ Ablation Study Effectiveness of each component. 
To verify the effectiveness of each module, we first conduct component ablation experiments on the proposed STAL3D. We abbreviate the Self-Training, Adversarial Learning with Background Suppression and Scale Filtering Module as ST, BS-AL and SFM, respectively. As shown in Tab. <ref>, with Source Only augmented by ROS as the original baseline, applying ST can improve mAP_3D by 7.93%. This indicates that fully utilizing source domain information and performing ST is significantly superior to direct model transfer. Based on this, after adding our BS-AL module, the mAP_3D performance is boosted by 2.85%, and it is further improved by 4.33% through incorporating our SFM. We attribute this improvement to the feature distribution correction ability of adversarial learning and the further resolution of source domain size bias. Finally, with all these components, the mAP_BEV and mAP_3D is boosted to 82.26% and 69.78% respectively, validating its effectiveness. Ablation to BS-AL. To verify the effectiveness of the BS-AL module, we use a naive adversarial learning approach that removes background suppression operations as the baseline. As shown in Tab. <ref>, we compare two different methods for extracting and weighting the foreground-background based on Feature Richness Scores (FRS) and Channel Attention (CA), and ablate the hyperparameters. For the CA approach, we obtain the attention map 𝒜 by computing the absolute mean of channel values pixel-wise, and then weight the attention map with 𝒜^' = 1 + β·𝒜. As the results demonstrate, the FRS is the preferable choice, and with the increase of the parameter k, the model's detection performance exhibits an initially increasing and subsequently decreasing trend. We postulate that the FRS may not perfectly align with foreground-background. Consequently, with lower k, the attention might overlook certain foreground regions, while higher k could result in some interference from background pixels during adversarial learning. Finally, when k=0.2, it can bring a performance improvement of 3.52% on mAP_3D. Detailed analysis of AL and ST. As shown in Tab. <ref>, we first remove the Self-Training (ST) framework and conducted adversarial training solely using labeled source domain and unlabeled target domain data. Additionally, we devise both feature distribution alignment (F_AL) and regressor-based distribution alignment (RG_AL) techniques. RG_AL refers to the approach introduced by <cit.>, which employs Generalized Intersection over Union (GIOU) <cit.> as a distance metric for aligning the distribution of bounding boxes in the context of regressors. It can be seen that using these two alignment methods directly cannot improve the results. This is due to the asymmetrical loss formed as a result of the lack of pseudo-label supervision signals from the target domain in adversarial learning, leading to source domain size bias issue. ROS is an effective data augmentation method that can alleviate source domain size bias to a certain extent, but there is still a 7.93% gap on mAP_3D compared to the ST (w/ ROS). In addition, directly adding Source Label information to ST (w/ROS) can also result in a 10.26% performance degradation. The results above indicate that both AL and ST are susceptible to the issue of source domain size bias. Additionally, relying solely on ROS is not sufficient for effective resolution. Ablation to SFM. In order to fully validate the effectiveness of SFM, we conducted a detailed combination analysis of the loss term. As shown in Tab. 
<ref>, without scale filtering, removing the regression supervision signal from the source domain results in a 1.42% improvement in mAP_3D. Moreover, comparing mAP_3D before and after scale filtering, there is a 4.07% increase. This demonstrates the necessity of filtering the scale regression term. Furthermore, it can be observed that regardless of whether scale is filtered in the regression terms, removing the classification loss from the target domain leads to improvements in detection results, with mAP_3D increasing by 1.93% and 0.74%, respectively. We believe that despite the denoising process during pseudo-label generation, noise is inevitably present. Utilizing accurate class labels from the source domain enables the self-training process to obtain high-quality classification labels. The final combination of the above special designs results in our ultimate SFM module, bringing a 4.81% performance improvement. Robustness to Detector Architecture. All previous experiments are conducted on the SECOND-IoU detector. To further validate the robustness of our method across different detectors, we conduct additional experiments on the Waymo → KITTI task using the PV-RCNN <cit.> framework. As shown in Fig. <ref>, our approach consistently outperforms previous SOTA methods across all three categories, demonstrating its robustness to detector architecture. § CONCLUSION This paper analyzes the strengths and weaknesses of existing 3D unsupervised domain adaptation paradigms and points out the strong complementarity between Self-Training (ST) and Adversarial Learning (AL). To unleash the potential advantages of pseudo-labels and feature distribution alignment, a novel cross-domain 3D object detection framework, dubbed STAL3D, is proposed via collaboration between ST and AL. Additionally, considering the characteristics of 3D cross-domain scenarios, a Background Suppression Adversarial Learning (BS-AL) module and a Scale Filtering Module (SFM) are proposed, effectively alleviating the problems of the large proportion of background interference and source domain size bias. Extensive experiments are conducted on multiple cross-domain tasks, and STAL3D establishes a new state of the art.
http://arxiv.org/abs/2406.18931v1
20240627065646
Semi-adaptive Synergetic Two-way Pseudoinverse Learning System
[ "Binghong Liu", "Ziqi Zhao", "Shupan Li", "Ke Wang" ]
cs.LG
[ "cs.LG" ]
School of Computer and Artificial Intelligence, Zhengzhou University, Zhengzhou 450001, China. National Supercomputing Center in Zhengzhou, Zhengzhou 450001, China. Semi-adaptive Synergetic Two-way Pseudoinverse Learning System Binghong Liu1 Ziqi Zhao1 Shupan Li1,2 Ke Wang1,2Corresponding author: iekwang@zzu.edu.cn July 1, 2024 ============================================================================================ § ABSTRACT Deep learning has become a crucial technology for making breakthroughs in many fields. Nevertheless, it still faces two important challenges in theoretical and applied aspects. The first lies in the shortcomings of gradient descent based learning schemes which are time-consuming and difficult to determine the learning control hyperparameters. Next, the architectural design of the model is usually tricky. In this paper, we propose a semi-adaptive synergetic two-way pseudoinverse learning system, wherein each subsystem encompasses forward learning, backward learning, and feature concatenation modules. The whole system is trained using a non-gradient descent learning algorithm. It simplifies the hyperparameter tuning while improving the training efficiency. The architecture of the subsystems is designed using a data-driven approach that enables automated determination of the depth of the subsystems. We compare our method with the baselines of mainstream non-gradient descent based methods and the results demonstrate the effectiveness of our proposed method. The source code for this paper is available at http://github.com/B-berrypie/Semi-adaptive-Synergetic-Two-way-Pseudoinverse-Learning-Systemhttp://github.com/B-berrypie/Semi-adaptive-Synergetic-Two-way-Pseudoinverse-Learning-System. § INTRODUCTION Deep learning<cit.>, as a powerful representation learning technology, has achie-ved remarkable accomplishments in various domains, exerting profound influences on human society<cit.>. Gradient descent algorithm and its variants are a class of commonly used optimization algorithms employed to train deep neural networks by minimizing loss functions. The basic idea is to iteratively adjust parameters in the opposite direction of the gradient of the loss function, gradually approaching the minimum point of the function<cit.>. These algorithms are widely utilized in the field of deep learning due to their simplicity, ease of implementation and parallelizability<cit.>. However, gradient descent algorithms face a range of challenges, including low training efficiency, the difficulty in setting hyperparameters, and the possibility of encountering issues like gradient vanishing and gradient explosion. Simultaneously, network architecture design and computational resource constraints represent two significant challenges in deep learning. To overcome the limitations of gradient descent algorithms, researchers have investigated many non-gradient descent methods. Extreme Learning Machine (ELM)<cit.> is a model based on a single-hidden layer feedforward neural network (SLFN). Unlike conventional gradient based algorithms such as backpropagation, ELM employs a unique training strategy: the input weights and biases are randomly assigned, and then the output weights are analytically determined using the Moore-Penrose generalized inverse. Hierarchical Extreme Learning Machine (HELM) <cit.> is an extension of ELM, introducing a hierarchical structure to enhance the model's capability and performance. 
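As a point of reference for the ELM training scheme just described, the following minimal NumPy sketch (our illustration, not code from the cited works) shows random, fixed input weights with an analytically computed output layer; the hidden size, activation, and seed are arbitrary choices.

```python
import numpy as np

# Minimal ELM sketch: input weights and biases are random and never trained;
# output weights are obtained analytically via the Moore-Penrose pseudoinverse.
def train_elm(X, T, n_hidden=100, seed=0):
    """X: (N, d) inputs; T: (N, c) targets. Returns fixed and learned weights."""
    rng = np.random.default_rng(seed)
    W_in = rng.standard_normal((X.shape[1], n_hidden))  # random projection, kept fixed
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W_in + b)                           # hidden-layer output of the SLFN
    W_out = np.linalg.pinv(H) @ T                       # analytic least-squares readout
    return W_in, b, W_out

def predict_elm(X, W_in, b, W_out):
    return np.tanh(X @ W_in + b) @ W_out
```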
Broad learning system (BLS)<cit.> draws inspiration from the random vector functional link neural network (RVFLNN)<cit.>. BLS is configured as a flat network, where the initial inputs are embedded into feature nodes. The structure then undergoes broad expansion through enhancement nodes. The key idea of these representative non-gradient descent learning algorithms can also be traced back to the PseudoInverse Learning algorithm (PIL)<cit.>, which was originally proposed for training SLFNs. In PIL, the output weights are computed analytically by finding an approximate optimal solution of the loss function using the Moore-Penrose generalized inverse. Several variants of the PIL algorithm<cit.> have been investigated that can use either random weights or the pseudoinverse of the input data (or its low-rank approximation) as input weights. Motivated by the difficulty of designing network structures and the obstacles faced by gradient descent based learning algorithms, we propose a semi-adaptive synergetic two-way pseudoinverse learning system. Instead of pre-setting the model structure before training, the model grows dynamically according to the learning task during the training phase. Our contributions can be outlined as follows: * We propose a synergetic learning system exhibiting superior performance compared with the baselines. Each elementary model of the proposed system includes two-way learning and feature fusion modules, enabling the acquisition of more comprehensive features. * The elementary model of the learning system is trained using a non-gradient approach, while the network architecture is dynamically determined, simplifying hyperparameter tuning. * The elementary training models within the synergetic learning system can be trained in parallel, facilitating the acceleration of the model training process. § RELATED WORK §.§ Pseudoinverse Learning based Autoencoder An autoencoder<cit.>, a commonly used neural network employed in unsupervised learning, comprises an encoder and a decoder. The encoder transforms input data into a low-dimensional representation, while the decoder reconstructs this representation to an output closely resembling the original input. The loss function for an autoencoder can be defined as ℒ(𝐖_e,𝐖_d) = 1/(2N)∑_i = 1^N ‖ g(𝐖_d f(𝐖_e x_i)) - x_i ‖_2^2, where x_i denotes the i-th sample within the input data set, N is the number of samples, while f and g represent the activation functions of the encoder and decoder, respectively. 𝐖_e denotes the encoder weight, and 𝐖_d denotes the decoder weight. Given the challenges faced in training autoencoders using gradient descent methods, such as issues with hyperparameter tuning and inefficiency in training, the pseudoinverse learning based autoencoder (PILAE)<cit.> was introduced. The setting of the encoder weights 𝐖_e varies across different versions of the PIL algorithm and its derivatives. Let 𝐇 represent the output of the encoder. A regularization term is used to prevent overfitting, and the new loss function is ℒ(𝐖_d) = 1/2‖𝐖_d𝐇-𝐗‖_2^2 + λ/2‖𝐖_d‖_r, where 𝐗∈ℝ^d × N is the input data and d is the input dimension. When r is set to 2, the optimization problem associated with this loss function is also referred to as ridge regression. It can be readily inferred that 𝐖_d = 𝐗𝐇^T(𝐇𝐇^T+λ𝐈)^-1. In autoencoder architectures, weight tying, where the encoder weights are constrained to equal the transpose of the decoder weights (𝐖_e = 𝐖_d^T), is a common practice; a minimal numerical sketch of this layer-wise solution is given below.
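The sketch below writes out the closed-form decoder solution and weight tying under the paper's column-sample convention (𝐗 is d × N); the regularization strength, activation, and function name are illustrative assumptions of ours rather than the authors' settings.

```python
import numpy as np

# Sketch of one PILAE layer following the closed form above (illustration only).
# Columns of X are samples (X is d x N); H is the encoder output for this layer.
def pilae_layer(X, H, lam=1e-3):
    """Solve W_d = X H^T (H H^T + lam*I)^{-1}, then tie W_e = W_d^T."""
    k = H.shape[0]
    W_d = X @ H.T @ np.linalg.inv(H @ H.T + lam * np.eye(k))  # ridge solution for decoder
    W_e = W_d.T                                               # tied encoder weights
    H_next = np.tanh(W_e @ X)                                 # fed to the next PILAE
    return W_e, H_next
```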
This approach capitalizes on the symmetry inherent in autoencoders, enabling parameter sharing between encoder and decoder layers. Multiple PILAEs can be stacked to form a multilayer structure, facilitating the acquisition of increasingly abstract and sophisticated representations of the input data. Stacked PILAE typically employs a greedy layer-wise training approach, where each PILAE is trained independently. The output of the preceding PILAE's hidden layer serves as the input to the subsequent PILAE. §.§ Synergetic Learning System A Synergetic Learning System (SLS)<cit.> is a system that integrates at least two subsystems, which can be agents, models, or human-machine synergies. Synergetic learning between subsystems can be master and servant, cooperative or adversarial, as determined by the learning task. As shown in Fig. <ref>, SLS can be built hierarchically, beginning with the assembly of individual neurons into functional units called chunks. These chunks are then integrated into larger subsystems, which, when combined, form the intricate architecture of neural networks. Finally, SLS emerges as a synergetic network comprising multiple interconnected subsystems, each contributing to the overall functionality of the system. § METHODOLOGY Fig. <ref> illustrates the architecture of our proposed method. The system utilizes an SLS framework consisting of several multi-level elementary models, with each elementary model being a hybrid neural network. This design allows the system to capture and process data at different levels, thereby enhancing the system's versatility and performance. §.§ Elementary Model Each elementary model comprises three modules, forward learning, backward learning and concatenated feature fusion. Forward and backward learning work together to form two-way learning. The structure of the elementary model is illustrated in Fig. <ref>. §.§.§ Forward Learning During the forward learning process, we utilize PILAE as the foundational block to construct a multi-layer network, namely stacked PILAE. As depicted in Eq. (<ref>), L_1 regularization can be chosen when feature selection or achieving a sparse solution is desired, with the setting r = 1. This optimization problem is also termed as LASSO <cit.>. To solve this optimization problem, various methods can be employed<cit.>, including least angle regression (LAR)<cit.>, iterative shrinkage-thresholding algorithm (ISTA), and alternating direction method of multipliers (ADMM)<cit.>. This study utilizes the fast iterative shrinkage-thresholding algorithm (FISTA)<cit.>, an efficient variant of ISTA, to tackle the optimization problem. The main distinction between FISTA and ISTA lies in the selection of the starting point of the approximation function in the iteration step. Specifically, the basic iterative steps of FISTA are as follows 𝐖_d^k = p_L(𝐕^k), t_k+1 = 1+√(1+4t_k^2)/2 , 𝐕^k+1 = 𝐖_d^k+ ( t_k-1/t_k+1 ) ( 𝐖_d^k-𝐖_d^k-1 ), where 𝐕^1 = 𝐖_d^0, t_1 = 1. L = L(q) is the Lipschitz constant of ▽ q. p_L ( · ) denotes the proximal operator, defined as p_L( 𝐕) = 𝐖_d argmin{𝐖_d_1+L/2𝐖_d-(𝐕-1/L▽ q(𝐕)) _2^2}, where q(𝐕) = 𝐕𝐇-𝐗_2^2, ▽ q = 2E_max(𝐇𝐇^T), E_max(·) computes the maximum eigenvalue of the given matrix. Through tied weights, the parameters of the encoder can be determined post-training. Removing the decoder, the output of the encoder is fed as input to the next PILAE, iteratively forming a stacked PILAE. 
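The FISTA iteration outlined above can be summarised in a few lines; the sketch below is our hedged illustration of the ℓ1-regularised decoder problem, with the iteration count and regularisation strength chosen arbitrarily, and the largest eigenvalue of 𝐇𝐇^T used for the Lipschitz constant as in the text.

```python
import numpy as np

# Illustrative FISTA sketch for min over W of ||W H - X||_F^2 + lam * ||W||_1.
def soft_threshold(A, tau):
    return np.sign(A) * np.maximum(np.abs(A) - tau, 0.0)

def fista_decoder(X, H, lam=1e-3, n_iter=100):
    L = 2.0 * np.linalg.eigvalsh(H @ H.T).max()   # Lipschitz constant of the smooth part
    W = np.zeros((X.shape[0], H.shape[0]))        # W_d^0
    V, t = W.copy(), 1.0                          # V^1 = W_d^0, t_1 = 1
    for _ in range(n_iter):
        grad = 2.0 * (V @ H - X) @ H.T                      # gradient of q at V
        W_new = soft_threshold(V - grad / L, lam / L)       # proximal (shrinkage) step
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0    # t_{k+1}
        V = W_new + ((t - 1.0) / t_new) * (W_new - W)       # momentum extrapolation
        W, t = W_new, t_new
    return W
```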
The forward propagation function for forward learning can be expressed as F(𝐗) = σ(𝐖_e^lσ(𝐖_e^l-1...σ(𝐖_e^2σ(𝐖_e^1𝐗))...)), where σ(·) is the activation function. According to Eq. (<ref>), it can be inferred that 𝐖_e^l = (𝐇^l-1(𝐇^l)^T(𝐇^l(𝐇^l)^T+λ𝐈)^-1)^T. 𝐇^l= σ(𝐖_e^lσ(𝐖_e^l-1...σ(𝐖_e^2σ(𝐖_e^1𝐗))...)). In particular, 𝐇^0 = 𝐗. As shown in Fig. <ref>, according to Eq. (<ref>) the final output of the task is represented as 𝐘 = 𝐖_oF(𝐗), where 𝐘 represents the output, the weight matrix 𝐖_o connects the last hidden layer to the output layer. The objective is to minimize the discrepancy between the output 𝐘 and the real label matrix 𝐓. minimize𝐘-𝐓 _2^2 = minimize𝐖_oF(𝐗 )-𝐓. According to Eq. (<ref>) and Eq. (<ref>), 𝐖_o can be computed as 𝐖_o = 𝐓F(𝐗 )^T(F(𝐗 )F(𝐗 )^T+λ𝐈)^-1. In deep learning, determining the optimal depth of a model is a challenging task, so building the model incrementally to meet the task requirements is a sensible strategy. Model construction begins with a simpler and shallower architecture, gradually increasing its complexity as needed. §.§.§ Backward Learning Forward learning, which primarily relies on unsupervised data reconstruction tasks for feature extraction, inevitably leads to features that are more suited for the reconstruction task, thus potentially hindering performance on downstream learning tasks. Additionally, forward learning fails to fully leverage the information contained in target labels, which is evidently beneficial for extracting features associated with specific learning tasks. This prompts us to propose backward learning. Once the training of the forward learning network is completed, the structure of the network is established. Backward learning employs the same architecture, learning in reverse from the last hidden layer back to the input layer. According to Eq. (<ref>), the 𝐇_b^l that yields the minimum error can be determined as follows: 𝐇_b^l = 𝐖_o^†𝐓. † denotes pseudoinverse (Moore-Penrose inverse). Similarly, 𝐇_b^l-1 = (𝐖_e^l)^†σ^-1(𝐇_b^l). σ^-1(·) represents the inverse function of σ(·). Repeating Eq. (<ref>) yields: 𝐇_b^1 =(𝐖_e^2)^†σ^-1(... (𝐖_e^l-1)^†σ^-1((𝐖_e^l)^†σ^-1(𝐇_b^l) )...). Substituting Eq. (<ref>) into Eq. (<ref>), 𝐇_b^1 =(𝐖_e^2)^†σ^-1(... (𝐖_e^l)^†σ^-1(𝐖_o^†𝐓)...). At this stage, the label information propagates backward through the network. The reconstructed hidden layer output contains features related to the label. The weights undergo corresponding updates. Like Eq. (<ref>), the weights are updated by minimize𝐖_b^1𝐗-𝐇_b^1_2^2, then, 𝐖_b^1 = 𝐇_b^1𝐗^†, similarly, 𝐖_b^2 = 𝐇_b^2(σ(𝐖_b^1𝐗))^†. In the same vein, 𝐖_b^l = 𝐇_b^l(σ(𝐖_b^l-1σ(...σ(𝐖_b^1𝐗)...)))^†. Let 𝐖_b^l+1 denote the output weight. 𝐖_b^l+1 = 𝐓(σ(𝐖_b^lσ(...σ(𝐖_b^1𝐗)...)))^†. After determining the weights through backward learning, the prediction of the backward learning network can be expressed as 𝐘_b = 𝐖_b^l+1σ(𝐖_b^lσ(𝐖_b^l-1...σ(𝐖_b^2σ(𝐖_b^1𝐗))...)). §.§.§ Feature Fusion Integrating features from diverse sources enables the acquisition of richer information, facilitating a more profound understanding of the data and the extraction of underlying patterns, thus enhancing the robustness of the model. Forward and backward learning will be used as a representation learning network instead of directly performing the final learning task. We concatenate features of varying abstraction levels obtained from forward and backward learning for downstream tasks. For networks with l hidden layers, there are multiple fusion methods. 
When selecting one feature from the forward learning path and one from the backward learning path for concatenation, there are l × l possible combinations. Alternatively, fusion can involve selecting multiple features from both the forward and backward paths. §.§ Synergetic System In our work, each elementary model functions as a subsystem within the larger synergetic system. These subsystems operate cooperatively to accomplish the designated task. Each subsystem is equipped with a two-way training model and a feature fusion module, enhancing the system's adaptability and performance. The synergistic interaction and complementarity among these subsystems contribute significantly to the overall system's remarkable classification accuracy. Furthermore, the modular design of the subsystems facilitates their deployment in a distributed manner, resulting in a substantial reduction in computational time overhead. §.§ Training Strategy §.§.§ Parallelizability In our method, each elementary model is capable of independently completing its task, allowing elementary models to be trained simultaneously, thus enabling parallelization. During training, each elementary model randomly samples a subset of the training set according to a predefined sampling ratio. This approach fosters diversity and heterogeneity among the elementary models, which in turn enhances the overall stability and generalization performance of the synergetic learning system. §.§.§ Early Stopping Early stopping is a technique used to prevent overfitting during model training. Its basic principle involves monitoring the performance metrics of the model on a validation set, the training is halted with a high probability to prevent the model from continuing to overfit the training data. In our work, it is used as a structural control scheme. § EXPERIMENTS §.§ Data Sets In our study, we rigorously evaluate the performance of proposed methodologies by employing a wide array of benchmark data sets that are widely recognized within the research community. Specifically, we include classical and contemporary data sets such as MNIST, Fashion-MNIST (F-MNIST), and the NORB data set <cit.>. Additionally, we utilize a varied selection of data sets from the UCI Machine Learning Repository (http://archive.ics.uci.edu/mlhttp://archive.ics.uci.edu/ml) and the OpenML platform (http://openml.orghttp://openml.org), which have been commonly used in related works. These data sets include but are not limited to, Abalone, the Advertisement data set, the Gisette data set, the Human Activity Recognition (HAR) data set, the Kin8nm data set, Madelon, Mfeat, Isolet, Occupancy, Yeast, Semeion, Segment, and Spambase. §.§ Experimental Design The performance of our method was compared to five baselines that included HELM <cit.>, PILAE <cit.>, ELM-AE <cit.>, PILLS <cit.>, and BLS <cit.>. These methods use the non-gradient learning strategy. The primary distinction between them lies in the mapping linked to the input weights. They use least squares or ridge regression for output weight learning. For the MNIST, Fashion-MNIST, and NORB data sets, the structural settings of the baselines take the optimal structure mentioned in the corresponding papers. In addition, we compared the training efficiency of our method with three typical gradient descent based methods (LeNet-5 <cit.>, ResNet50 <cit.> and VGG16 <cit.>) on MNIST, F-MNIST and NORB. LeNet-5 comprises two convolutional layers with 6 and 16 5x5 filters respectively, followed by two max-pooling layers. 
It has three fully connected layers with 120, 84, and 10 (output) nodes. LeNet-5 was trained using the Adam optimizer, with an initial learning rate of 0.001, batch size of 64, for 30 epochs. ResNet50 is a 50-layer deep residual network constructed from bottleneck residual blocks. It employs 1x1 and 3x3 convolutional filters, with skip connections to facilitate training of such a deep architecture. For ResNet50, we employed the Adam optimizer with an initial learning rate of 0.1 and a batch size of 64. The network was trained for 20 epochs. VGG16 utilizes very small 3x3 convolutional filters, with a substantially increased depth of 16 weight layers. It comprises five max-pooling layers and two fully-connected layers with 4096 nodes each. VGG16 was originally trained using stochastic gradient descent (SGD) with an initial learning rate of 0.001, batch size of 64, for 5 epochs. All experiments were conducted on a PC equipped with an Intel(R) Core(TM) i5-14600K 3.50 GHz processor and 48.0 GB of DDR5 RAM. §.§ Result and Analysis Our proposed method and baselines are compared on 19 public data sets in terms of accuracy on the test set, and the comparison results are shown in Table <ref>. Overall, our performance outperformed the baselines in 16 out of 19 data sets. Specifically, our results show significant improvement over the baselines in Abalone, Advertisement, Gina_agnostic, Kin8nm, Madelon, Isolet, Prior, Yeast, Semeion, Spambase, and NORB data sets. For Gisette, our method achieved the third best result, trailing only 0.28 behind the top-performing BLS and ELM-AE methods. For Har, our method achieves the second best result, trailing PILLS by 0.49. For F-MNIST, our method trailed the top-performing method by only 0.02. For MNIST, we are 0.19 ahead of the two best results of baselines. For the NORB data set, we set the forward learning structure to 1500-1000-600-500, which was the maximum number of hidden layers, and the actual number was decided using the early stopping strategy. The backward learning structure was the same as the structure determined by forward learning. The number of subsystems was set to 10, and the sampling ratio of each subsystem was 0.8. The number of classifier neurons for the fused feature was set to 5000. For F-MNIST, three subsystems were used, the sampling ratio of each subsystem was set to 0.8, the forward learning structure was 1500-1000-600-500, an early stopping strategy was used, and the number of classifier neurons for the fused features was set to 10000. For MNIST, two subsystems were used, the sampling ratio of each subsystem was set to 0.8, the forward learning structure was 2000-1500-500, using an early stopping strategy, and the number of classifier neurons for fused features was set to 10000. Table <ref> shows the comparison of our method with the baselines in terms of the average ranking of accuracy in the test set. Obviously, the method we propose is leading the way. A comparison of training efficiency is shown in Fig. <ref>, our method achieves higher efficiency in the case that we attain approximate accuracy with the baselines. The accuracy of the baselines in the experiment may not have reached its optimal result because of the insufficient epoch setting, but the training time consumed has been significantly more than our method. § CONCLUSION In this work, we propose a semi-adaptive synergetic two-way pseudoinverse learning system, comprised of subsystems that interact synergistically. 
Each subsystem consists of three modules: forward learning, backward learning, and feature fusion. The contributions of this work are threefold. Firstly, experimental results demonstrate that our method exhibits superior performance compared to the representative competing baselines. Forward and backward learning constitute two-way learning that enables learning of richer features. Secondly, our method utilizes a non-gradient descent algorithm for training and employs a data-driven semi-adaptive strategy to determine the model structure, thereby reducing the workload for hyperparameter tuning. Additionally, the elementary models within the synergetic system can be trained in parallel, further enhancing the training efficiency. In fact, the forward and backward learning can also be viewed as subsystems in a synergetic system. Such nested synergetic learning systems will be further investigated in future work. § ACKNOWLEDGEMENT This study is supported by China Postdoctoral Science Foundation (2020M682348), National Key Research and Development Program of China (2018AAA0100203), and Natural Science Foundation of Henan Province, China (232300421235). splncs04
http://arxiv.org/abs/2406.17972v1
20240625230718
LABOR-LLM: Language-Based Occupational Representations with Large Language Models
[ "Tianyu Du", "Ayush Kanodia", "Herman Brunborg", "Keyon Vafa", "Susan Athey" ]
cs.LG
[ "cs.LG", "cs.CL", "econ.EM" ]
Inherent Challenges of Post-Hoc Membership Inference for Large Language Models [ ======================================================================================== § ABSTRACT Many empirical studies of labor market questions rely on estimating relatively simple predictive models using small, carefully constructed longitudinal survey datasets based on hand-engineered features. Large Language Models (LLMs), trained on massive datasets, encode vast quantities of world knowledge and can be used for the next job prediction problem. However, while an off-the-shelf LLM produces plausible career trajectories when prompted, the probability with which an LLM predicts a particular job transition conditional on career history will not, in general, align with the true conditional probability in a given population. Recently, <cit.> introduced a transformer-based “foundation model”, CAREER, trained using a large, unrepresentative resume dataset, that predicts transitions between jobs; it further demonstrated how transfer learning techniques can be used to leverage the foundation model to build better predictive models of both transitions and wages that reflect conditional transition probabilities found in nationally representative survey datasets. This paper considers an alternative where the fine-tuning of the CAREER foundation model is replaced by fine-tuning LLMs. For the task of next job prediction, we demonstrate that models trained with our approach outperform several alternatives in terms of predictive performance on the survey data, including traditional econometric models, CAREER, and LLMs with in-context learning, even though the LLM can in principle predict job titles that are not allowed in the survey data. Further, we show that our fine-tuned LLM-based models' predictions are more representative of the career trajectories of various workforce subpopulations than off-the-shelf LLM models and CAREER. We conduct experiments and analyses that highlight the sources of the gains in the performance of our models for representative predictions. § INTRODUCTION Predictive models of individual career trajectories are important components of many labor economic analyses and career planning tools. These predictive models are building blocks used in empirical analyses in economics and social sciences to understand labor markets. For example, such models are used for labor market turnover <cit.> and to quantify and decompose wage gaps by gender <cit.> and race <cit.>. For many applications, it is important to have predictions of job transitions conditional on career history that are representative of the general population. For example, models that predict outcomes conditional on worker history are used to estimate average (over a representative set of workers in a defined subpopulation, such as U.S. high school graduates) counterfactual differences in outcomes that result from a policy intervention, for example when estimating the causal effect of interventions such as training programs <cit.>, or when estimating the causal effect of displacement <cit.>. Policy-makers may also need to predict the future transitions for particular subgroups of workers when considering policies that affect them or their families <cit.>. In the context of recommendation systems <cit.>, in some settings, it may be desirable that job recommendation tools predict the most likely outcomes for a worker conditional on their history and context. 
With sufficiently large data from a representative longitudinal dataset, estimating a predictive model with unbiased conditional predictions about labor market outcomes such as transitions would boil down to training a sufficiently flexible model of transitions conditional on history. However, in practice, the number of possible career paths for workers is extremely large relative to the population, let alone relative to available data. In the U.S., there are a few relatively small, survey-based longitudinal datasets broadly available to researchers where the surveys attempt to find a representative sample of the population. Although labor economists have studied a wide range of questions using these survey datasets <cit.>, because of their small size, traditional models estimated on these datasets impose restrictive functional form assumptions. For example, models that condition career history often assume that the next occupation of a worker depends only on their last occupation and some covariates <cit.> or a few summary statistics about their past <cit.>. As a result, traditional models have limited predictive power. Recent advances in deep sequential models (e.g., RNNs and transformers), which encode career histories as low-dimensional representations, offer a promising way to design better occupational prediction models. These deep learning models often require a lot of data to train on, and existing small-scale survey datasets fail to meet this requirement. Fortunately, online sources, from job posting websites to news articles about the labor market, encode a large amount of information about career transitions and can be used as a supplementary data source. Breakthroughs in artificial intelligence offer a method to leverage these online data sources: one can train a foundation model <cit.> using large-scale datasets. Because foundation models are built on a backbone of larger data, they can learn the general structure underlying observed data (i.e., labor market) <cit.>. Then, the researcher can fine-tune the foundation model on much smaller survey datasets of interest. For example, <cit.> develop the CAREER framework, which uses a transformer architecture to model transitions as first, a discrete choice of whether to change jobs at all, and second, a discrete choice among a set of occupations. CAREER is trained using a large, unrepresentative resume dataset and fine-tuned using U.S. survey data. <cit.> shows that this approach yields more accurate predictions than models trained only on survey data, and improves predictive power substantially over traditional econometric models. Large language models (LLMs) are foundation models for natural language. They consist of tens or hundreds of billions of parameters, are trained on massive, broad text corpora <cit.>, and encode a wide variety of world knowledge, potentially capturing a more comprehensive range of labor market information. The public release of LLMs, pre-trained on massive amounts of text data using substantial computational resources, has ushered in a new era where these models are used for tasks beyond Natural Language Processing (NLP), such as protein sequence generation <cit.>, scientific research <cit.> and more. It is natural to consider using these models for the next job prediction problem. 
In this paper, we propose the LAnguage-Based Occupational Representations with Large Language Models (LABOR-LLM) framework, which incorporates several approaches to leveraging LLMs for modeling labor market data and producing representative predictions. The simplest way to produce next-job predictions is to condition an LLM on job history and demographics by prompting the LLM using a text representation of such a job history and demographics, produced using a text-based template. We also consider more complex approaches, including fine-tuning as well as approaches that extract embeddings from LLMs and incorporate them into multinomial classifier models trained to predict the choice of next job for a worker given the embedding that summarizes worker history. We compare the performance of several alternative models within the LABOR-LLM framework, and further we contrast these with alternative baselines, including in particular the state-of-the-art CAREER <cit.> framework. A concern with approaches that build on general-purpose LLMs is that they may or may not yield predictions that are representative of the job transitions of the general public. An LLM is generally not trained on representative data, or even for the task of next job prediction, so it may produce poor predictions for demographics that are underrepresented in its training set <cit.>. If we query an LLM to predict a job trajectory for an individual, it will likely generate a coherent and plausible trajectory. However, there is no guarantee that the probability that a particular transition is specified by an LLM will be consistent with the true probability of that career transition for workers with similar histories in the population at large. Questions about the use of foundation models for tasks have arisen in other areas, such as opinion surveys. A recent literature has emerged that aims to assess whether the outputs of foundation models are representative of larger populations <cit.>. A common strategy to assess this for LLMs is to query them with survey responses from long-standing opinion surveys and see how aligned their responses are with the survey average. For example, if 70% of the survey respondents in these surveys respond “Yes” when asked “Do you support taxing the rich?”, we can query an LLM with the same question and assess if it responds with “Yes” 70% of the time. In this paper, we propose to evaluate representative predictions in a stronger sense: the distributions of predicted next jobs should be representative of true next jobs conditional on job histories. This type of conditional representativeness can be analyzed with reference to a particular population, and in some contexts, it may be important that a model be sufficiently representative within subpopulations of interest, such as disadvantaged socioeconomic groups or groups that are the target of policy interventions. In general, a transformer model such as an LLM trained with the objective of accurate next-token prediction (conditional on a sequence of past tokens) will make predictions that are representative of the set of next-token prediction examples from the training data. However, the training data may or may not be representative of the population of interest to an analyst. Further, there are a variety of subtle choices to be made when defining the population of next-token predictions that would be used in an ideal test set for evaluating performance, as well as in how to measure performance. 
These choices include which subpopulations to focus on, whether to take the perspective of a population of individuals (and their full careers) or a population of transitions (where individuals with longer careers have more transitions), and what evaluation measures to use (e.g., accuracy of the most likely prediction or the complete likelihood assigned by the model to all possible next jobs). Different substantive goals lead to different choices of objective function as well as different weightings of examples in training and testing. In this paper, we make several choices about how to operationalize representativeness. First, a language model, such as a transformer, can be viewed as estimating conditional probability distributions over future tokens given past tokens. For the task of next job prediction, we evaluate the model's performance at estimating conditional probabilities over the next occupation (which, when considered as text, consist of several tokens) given job history and various covariates. We evaluate the quality of the estimates produced by a model using measures such as perplexity <cit.> constructed from the log-likelihood of the observed occupation according to a model's estimates. For expositional and computational simplicity, we consider our target population to be a population of career transitions, so that workers with longer careers are weighted more heavily; and we focus the set of career transitions that appear in three widely used government-collected representative U.S. administrative surveys as a target population of interest. We further examine representativeness within subpopulations. Our results suggest that off-the-shelf LLMs provide unsatisfactory performance using these datasets compared to previous baseline models, but that fine-tuning LLMs on survey data improves performance beyond the state-of-the-art methods (e.g., <cit.>). This is a surprising fact: while CAREER was created using resumes specifically for the problem of job prediction, general-purpose LLMs acquire this ability passively. Furthermore, we find that the predictions from these fine-tuned LLMs are representative of career trajectories of various demographic subgroups in the workforce, conditioned on job histories. This allows us to use these models as predictive modules conditioned on various demographic subgroups and job histories despite LLMs being pre-trained on datasets that are not representative of the entire workforce. Importantly, these LLMs we develop are more accessible than CAREER because CAREER requires proprietary resume data. Instead, anyone with computational resources can fine-tune the publicly available LLMs. We will release our best-performing LLM. We conduct a series of experiments and analyses to understand the advantages brought by LLMs, analyzing how the knowledge base of an LLM informs its predictions. We also compare model performance on subpopulations defined by different educational backgrounds, which indicates that fine-tuned LLMs make more accurate predictions overall and by subgroup. Our findings demonstrate a method for adapting LLMs to make representative labor market predictions without relying on proprietary models or data. § RELATED WORK Career Trajectory Modeling and Next Job Prediction Economists have historically fitted relatively simple predictive models of labor markets to relatively small datasets. These methods typically only predict a few occupation categories. 
<cit.> used conditional logit models to study workers' choices among 11 occupational groups with estimated earnings, training expenses, and costs due to unemployment. <cit.> utilized logit models to assess how race, sex, educational attainment, and labor market experience influence the probability that individuals attain five different occupational categories, revealing significant effects of these variables on occupational outcomes. Although future occupations can have complex dependencies on the entire sequence of previous jobs, traditional methods typically only leverage the most recent past job with curated features summarizing the job history <cit.> or some summary statistics <cit.>. Machine Learning Methods for Next Job Prediction In the context of resume datasets, researchers have utilized deep learning and graph neural network methods to develop machine learning algorithms that model sequences <cit.>. Extending these methods, <cit.> developed CAREER, a transformer model pre-trained on a massive resume dataset of 24 million resumes. However, the model was then fine-tuned on survey datasets; the CAREER model demonstrated superior performance compared to other approaches for the next job prediction problem on these survey datasets. Our paper introduces an alternative approach to CAREER, starting with a pre-trained LLM instead of pre-training our own model. We show how to leverage this model to the task of predicting the next job on survey data sets. Natural Language Process and Language Modeling In the approaches mentioned above, jobs are represented as individual discrete choices. However, job titles also have an inherent linguistic meaning. We review recent developments in the Natural Language Processing (NLP) and LLM literature, which inform our approach to modeling the next job prediction problem as a language modeling problem. Recurrent Neural Network (RNN) models based on architectures such as GRU <cit.> and LSTM <cit.> were an important class of performant NLP methods. However, they process tokens in a sentence sequentially, forcing high computational complexity due to the dependence on sequential processing. Transformer architectures <cit.> with attention broke through this computational barrier by utilizing a key-query-value design to allocate attention while making the prediction dynamically. These models were accompanied by powerful unsupervised training methods such as Causal Language Modeling (CLM) <cit.> and Masked Language Modeling (MLM) <cit.>. Recently, industry practitioners have leveraged transformers' scalability and developed LLMs with billions of trained parameters, such as GPT-3 <cit.> and Llama <cit.>. The Biases and Representativeness of LLMs LLMs have since been used in several open-ended tasks, such as dialog <cit.> and recommendations <cit.>, and have begun to have an impact on public opinion. Social science researchers have begun using them to emulate survey responses, leading to an emerging research agenda on the study of LLM's biases and their effectiveness in simulating survey responses. Recent work has shown that LLMs produce responses to public opinion surveys that are not representative of various demographic groups, even after being steered toward them <cit.>. <cit.> show that a binary classifier can almost perfectly differentiate model-generated data from the responses of the U.S. census in the context of the American Community Survey. 
However, other work <cit.> has shown that LLMs can be used to characterize complex manifestations of political ideology in text. <cit.> shows that language models can be used to simulate human samples through prompting and appropriate conditioning on sociodemographic backstories, making these samples effective proxies for specific human subpopulations. We contribute to this literature by showing that off-the-shelf LLMs are not representative of survey responses for job transitions, and we show methods to make them representative. Further, for the next job prediction problem, we seek a stronger form of conditional calibration - model predictions should be calibrated within demographic subgroups conditional on job histories. We show that our models produce more conditionally representative predictions than CAREER. Adapting LLMs to Build Domain-Specific Models Training these LLMs from scratch requires computational resources that cost millions of dollars and a high carbon footprint <cit.>. The pre-training and fine-tuning paradigm proposes a practical, tractable, and more sustainable way to use LLMs. The paradigm involves training a model on a large dataset to learn general knowledge and then refining it on a smaller, task-specific dataset to adapt its learned patterns to specific applications <cit.>. Research has demonstrated that fine-tuning a pre-trained LLM on the dataset of interest can yield superior results than directly training a large model from scratch. This pre-training and fine-tuning paradigm has produced state-of-the-art models for dialogue systems <cit.>, code generation <cit.>, music generation <cit.>, scientific knowledge <cit.>, protein structure prediction <cit.>, chemistry <cit.>, medicine <cit.>, and other settings. The literature on the adaptation of LLMs for recommendation systems is also closely related. <cit.> introduced a general paradigm to adapt the recommendation task to language processing. We propose a language modeling approach to the next job prediction task. Moreover, we can predict the complete distribution over the next jobs treated as discrete choices with higher performance than prior state-of-the-art models while framing this problem as a causal language modeling problem. Machine Learning in Economics Machine Learning methods are increasingly used in economics <cit.> as modules for prediction <cit.> and causal inference with high-dimensional data <cit.>. The following line of work in economics is closely related to our work and uses machine learning methods for discrete choice modeling. <cit.> introduces SHOPPER, a Bayesian demand model that builds item embeddings from large-scale grocery datasets and predicts customers' choices, combining ideas from language modeling and econometrics. <cit.> shows how to estimate a similar demand model using a nested Bayesian matrix factorization approach, while sharing parameters across products, customers, and product categories <cit.>, while modeling product choice, on a per category basis, jointly for several categories. We contribute to this literature by further extending ideas from language modeling and discrete choice models with language modeling to build a model for labor choice. § REPRESENTATIVE OCCUPATION MODELING The goal of occupation modeling is to predict an individual's career trajectory. In many cases, it is important for these predictions to be representative of a larger population. 
In this section, we formalize the problem of occupation modeling and describe a data source that can be used to assess whether a model's predictions are representative: national longitudinal survey datasets. Occupation Modeling. An individual's career trajectory can be defined as a sequence of occupations, each held at a different timestep of their career history. An occupation model is a probabilistic model over these occupational sequences. We consider the case where occupations are represented as discrete variables. For example, survey datasets typically encode jobs into discrete occupations using taxonomies like the Standard Occupational Classification System (SOC) and the Occupation Classification Scheme (OCC). More formally, denote by y_i,t∈𝒴 the occupation that an individual i has at time t, with 𝒴 denoting the set of all occupations. Each worker is also associated with covariates — static covariates x_i (e.g., ethnicity) are fixed over time, while dynamic covariates x_i,t (e.g., education level) may change throughout the worker's career. We use the shorthand y_i, < t =(y_i,1, …, y_i, t-1) to denote an individual's job sequence prior to their t'th observation (for t ≤ 1, define y_i, < t = ∅), and similarly x_i, ≤ t = (x_i,1, …, x_i, t) to denote the set of dynamic covariates up to the t'th observation. Lastly, we use T_i to denote the total number of records from individual i. An occupation model is a predictive model of an individual's next occupation: P(y_i,t| x_i , x_i, ≤ t, y_i, < t). The model conditions on all previous occupations and all current and previous covariates. Covariates are treated as “pre-transition”; for example, a model may condition on an individual's current education to predict their next job. Representative Predictions. In many settings, it is important for an occupation model to make representative predictions for several reasons. In economic analysis settings, when performing counterfactual simulations and policy analysis, it is necessary to have representative model predictions so that estimation within different demographic groups is unbiased. In a recommendation system setting, representative models may sometimes be required in recommendation system settings where it is important to surface recommendations that resemble the true underlying job transitions in a subgroup. For instance, a career guidance tool aimed at low-income workers may want to suggest feasible and common job transitions for that demographic rather than high-paying but unrealistic options. Further, it is important that career trajectory predictions are representative not only conditional on demographic subgroups but also con on job histories. Representative Surveys. To assess whether occupation models make representative predictions, we use longitudinal survey datasets. These datasets follow individual workers who are regularly interviewed about their lives and careers. Crucially, these datasets are constructed to be nationally representative. As a result, we can assess whether a model makes representative predictions by comparing predicted job sequences to actual sequences from survey data. We analyze three well-known survey datasets in the United States: the Panel Study of Income Dynamics (PSID), the National Longitudinal Survey of Youth 1979 (NLSY79), and the National Longitudinal Survey of Youth 1997 (NLSY97). Each survey is constructed differently and thus follows different populations. 
PSID, which began in 1968, aims to be representative of the United States as a whole and continues to add new workers over time. In contrast, the NLSY datasets follow specific birth cohorts: NLSY79 began in 1979 and followed individuals aged 14-22 at the time, while NLSY97 began in 1997 and followed individuals aged 12-16 at the time. § HOW REPRESENTATIVE ARE LLMS AS OCCUPATION MODELS? Any conditional distribution over job sequences is an occupation model. Here, we study the occupation modeling capabilities of LLMs and assess how accurately LLMs could model such conditional distributions. LLMs are trained primarily to predict missing words from text culled from the Internet. However, they are capable of performing many tasks extending far beyond the next-word prediction task they were trained to perform, such as solving logic puzzles <cit.> and modeling time series data <cit.>. While LLMs are not explicitly trained to predict occupational sequences, they are trained on massive amounts of data containing information about career trajectories — such as news articles about the labor market and reports from the Bureau of Labor Statistics. This information may equip them with the ability to make accurate and representative predictions of occupational sequences. Predicting occupational trajectories using LLMs requires converting occupational sequences to textual prompts that LLMs can understand. In this section, we describe a prompting strategy for eliciting occupational predictions from LLMs. With this strategy, LLMs predict plausible-sounding occupational trajectories. We then assess whether these predictions are representative of the American population by comparing them to trajectories from three nationally representative surveys. We show that LLMs consistently make unrepresentative predictions. §.§ Prompting LLMs to Predict Occupations LLMs are conditional probability distributions over text sequences; conditional on a sequence of text (i.e., a prompt), an LLM provides conditional probabilities over all possible continuations of the prompt. Therefore, repurposing LLMs to predict occupations requires representing occupational trajectories as text. We create a text template, a function that transforms an individual's career history into a textual summary; this function is denoted by 𝒯(x_i, x_i, ≤ t, y_i, < t). Our text template takes advantage of the fact that each occupation has a natural textual representation: its title. For example, the title of the occupation with SOC code is [Readers can refer to the official Bureau of Labor Statistics (https://www.bls.gov/OES/CURRENT/oes_stru.htmhttps://www.bls.gov/OES/CURRENT/oes_stru.htm) for a list of the latest SOC titles.]. Job titles can be variable in length, depending on how an LLM tokenizes words; for our experiments, the length of job titles ranged from 2 to 28 tokens, with an average length of 8 tokens. (Figure <ref> in Appendix <ref> presents a word cloud example of job titles). We use a similar strategy for representing covariates as text; for example, we represent an individual's educational status using values such as . To elicit an LLM's predictions of an individual's next job, we include all previous job information (along with all previous and current covariate information) in a text template. To predict an individual's t+1'st job, the text template will begin with a description of the static covariates and then include a row for each of the t previous occupations and dynamic covariates. 
It will conclude with a partial row for the occupation to be predicted. For example, the following text template would be used to elicit an LLM's prediction of an individual's third job: [breaklines, commandchars= {}] <A Resume from the NLSY79 Dataset> The following is the resume of a male white US worker residing in the northcentral region. The worker has the following work experience on the resume, one entry per line, including job code, year, education level, and a description of the job: 1988 to 1989 (graduate degree): Secretaries and administrative assistants 1989 to 1990 (graduate degree): Carpet, floor, and tile installers and finishers 1990 to 1991 (graduate degree): The template omits the title of the individual's third job (“Elementary and middle school teachers”). When an LLM is prompted with this template, we can record its response as its prediction of the next job. We can also use the text template to build the individual's full job history. The example below shows the text representation of a worker's entire career history generated by our text template. Note that the individual can stay in the same job for multiple records; the text representation explicitly reflects this information. This individual will have five prediction tasks in total, one for each record, throughout their job history. With a slight abuse of notation, let 𝒯(x_i , x_i, ≤ T_i, y_i, ≤ T_i) denote the paragraph representing the entire career history of worker i. [breaklines] <A Resume from the NLSY79 Dataset> The following is the resume of a male white US worker residing in the northcentral region. The worker has the following work experience on the resume, one entry per line, including job code, year, education level, and a description of the job: 1988 to 1989 (graduate degree): Secretaries and administrative assistants 1989 to 1990 (graduate degree): Carpet, floor, and tile installers and finishers 1990 to 1991 (graduate degree): Elementary and middle school teachers 1991 to 1992 (graduate degree): Elementary and middle school teachers 1992 to present (graduate degree): Adult Basic and Secondary Education and Literacy Teachers and Instructors <END OF RESUME> The corpus of text representations of full career histories is useful when fine-tuning language models. §.§ Evaluating Representativeness To assess the representativeness of an LLM's occupational predictions, we compare its predictions to actual occupational trajectories from survey datasets. We study three commonly used survey datasets: the Panel Study of Income Dynamics (PSID) <cit.> and two cohorts from the National Longitudinal Survey of Youth (NLSY79 and NLSY97) <cit.>. We randomly construct “test samples” containing 20% of individuals in each dataset (test samples contain all observations for each included individual). Table <ref> in Appendix <ref> presents summary statistics about each dataset. For each individual in the test set, we prompt LLMs to predict each recorded observation of their career: predicting their first job from just their covariates, predicting their second job from their first job and covariates, etc. We evaluate a model's representativeness by comparing its predictions of an individual's next job to their actual next job. Specifically, we evaluate models by computing their perplexity, a commonly used metric in NLP. The perplexity is a monotonic transformation of log-likelihood, with lower perplexity indicating that a model's predictions are more representative. 
Formally, for a model P̂(y_i,t | x_i, x_i, ≤ t, y_i, <t) that assigns a probability to each possible occupation, perplexity is given by exp{-1/∑_i T_i∑_i∑_t=1^T_i w_it[ logP̂(y_i,t| x_i , x_i, ≤ t, y_i, < t) ] }, where T_i is the number of observations for individual i; w_it denotes the sampling weight for the individual; we can adjust these weights to assess the model's performance on different subpopulations or other objectives. For example, we can set these weights to be such that we weight each transition equally, or we weight each individual equally. We can also set them such that we seek representative predictions only on the first few or last few transitions for every individual; In Section <ref> and Section <ref>, we set w_it = 1 to evaluate models' performances on the general population. We consider additional evaluation metrics (such as calibration) in Section <ref>. While perplexity evaluates the probabilities assigned to occupations, LLMs assign probabilities at the token level. Occupation titles typically span multiple tokens; for example, the title “software engineer” may be tokenized into two tokens, one for “software” and one for “engineer”. However, because LLMs are probabilistic models, we can use the chain rule of probability to extract probabilities assigned to full occupation titles. Equation (<ref>) illustrates how one can obtain the conditional probability assigned to “software engineer”. See Appendix <ref> for more details. P̂(y_i, t = “Software Engineer”| x_i , x_i, ≤ t, y_i, < t) = P_LLM(“Software Engineer”|Prompt) = P_LLM(“Software”|Prompt) P_LLM(“Engineer”|Prompt, “Software”) Because evaluating perplexity requires accessing a model's assigned probabilities, we can only study LLMs whose probabilities are accessible. We study three open-source LLMs from the Llama-2 family of models: Llama-2 (7B), Llama-2 (13B), and Llama-2 (70B); these models were trained on 2 trillion tokens of text from the Internet, and are among the most capable open-source LLMs <cit.>. When we prompt these LLMs to predict an individual's future occupations, they provide plausible-sounding trajectories. Readers can refer to Appendix <ref> for examples. However, they also assign mass to strings that are not valid job titles. To encourage models to predict only valid occupations, we consider an additional prompting strategy that includes the list of all possible titles before the prompt. Table <ref> contains the perplexity of each model with both prompting strategies. As a comparison, we also include the perplexity of CAREER <cit.>, a non-language model developed solely to predict nationally representative occupational trajectories. The LLMs consistently make unrepresentative predictions, with perplexities ranging from 39.53 to 3820.31. For comparison, a completely uninformative model that assigns uniform mass to each possible occupation would achieve a perplexity of |𝒴|, which is 335. LLM predictions are improved by including the list of job titles in the prompt, but they're still significantly worse than the CAREER model. Part of this poor performance is due to models assigning mass to occupational titles that do not exist (i.e., ∑_y ∈all jobsP̂(title_y |prompt) is far less than 1); however, explicitly removing this mass by renormalizing a model's predictions does not make up a large difference. Readers can refer to Appendix <ref> for more details on our experiments with baseline language models. 
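For concreteness, the chain-rule scoring of a multi-token job title and the perplexity computation described above can be sketched as follows. This is an illustrative implementation assuming a HuggingFace-style causal LM interface; the model name and prompt handling are placeholders rather than the paper's exact code, and tokenizer boundary effects between prompt and title are ignored for simplicity.
[breaklines]
# Sketch: score a job title under a causal LM via the chain rule, then
# aggregate per-observation log-likelihoods into the perplexity above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "meta-llama/Llama-2-7b-hf"   # illustrative model choice
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)
model.eval()

def title_log_prob(prompt: str, title: str) -> float:
    """log P_LLM(title | prompt) = sum of token-level log-probabilities."""
    prompt_len = tokenizer(prompt, return_tensors="pt").input_ids.shape[1]
    full_ids = tokenizer(prompt + title, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits            # (1, seq_len, vocab_size)
    log_probs = torch.log_softmax(logits, dim=-1)
    total = 0.0
    for j in range(prompt_len, full_ids.shape[1]): # title tokens only
        total += log_probs[0, j - 1, full_ids[0, j]].item()
    return total

def perplexity(tasks):
    """tasks: list of (prompt, true_title) pairs; equal weights w_it = 1."""
    lls = torch.tensor([title_log_prob(p, t) for p, t in tasks])
    return float(torch.exp(-lls.mean()))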
§ MODIFYING LLMS TO MAKE MORE REPRESENTATIVE PREDICTIONS In Section <ref>, we showed that while LLMs can generate plausible-sounding occupational trajectories, these trajectories are not representative of the broader population. Here, we consider two approaches to generate more representative occupational predictions from LLMs: one based on fine-tuning models and one based on training new classifiers on top of extracted embeddings. These approaches are illustrated in Figure <ref>. §.§ Fine-Tuning Language Models Our first strategy is to fine-tune LLMs to predict occupational trajectories on survey data. Fine-tuning on survey data encourages models to make more representative predictions while retaining the knowledge they acquired during pre-training. Since LLMs make predictions at the token level, we fine-tune models to predict each token of a textual summary of worker careers. Specifically, we randomly divide each dataset into 70/10/20 train/validation/test splits. Splits are constructed at the individual level; if an individual is in a split, all of their observations are in the same split. We use the same test splits as for the exercises in Section <ref>. We then create a text template for each individual consisting of all of the observations of their career, as in the second example template illustrated in Section <ref>. We fine-tune the three Llama-2 models used in Section <ref> on the training set text templates and evaluate models on the test split as in Section <ref>. Figure <ref> illustrates the fine-tuning procedure. We perform fine-tuning by maximizing the likelihood a model assigns to the true next token conditional on all previous tokens in a text template. Our objective includes every token of every template, whether or not it is part of an occupation title (we do not include the full list of occupations in the prompts; as we will show later, fine-tuned models indeed learn the set of valid job titles). We perform full-parameter, full-precision fine-tuning for 3 epochs with a batch size of 32. To improve computational efficiency at inference time, we quantize the fine-tuned language models to 8 bits. In Appendix <ref>, we show that running model inference in full precision does not significantly improve performance. Table <ref> reports the test set perplexity of the three fine-tuned Llama-2 LLMs along with two baselines trained on the training split: a bi-gram Markov model that predicts an individual's next job only from the empirical frequency of transitions, and CAREER <cit.>, a foundation model designed to make representative predictions on survey data. Fine-tuned models make significantly more representative predictions than the original models (Table <ref>). Surprisingly, the fine-tuned LLMs make more representative predictions than CAREER, which was trained on 24 million resumes and designed specifically to make accurate predictions on survey data. Although the Llama-2 models are not explicitly trained to model occupations, the information they acquire about career trajectories during pre-training enables them to outperform CAREER. While CAREER is trained on proprietary resume data, the pre-trained Llama-2 models are open-source, making it possible for practitioners with computational resources to build state-of-the-art models. It is worth noting that two models' perplexities on the same observation are often correlated; we therefore use a bootstrap method to better understand how significantly and consistently our models outperform CAREER.
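A minimal sketch of this paired bootstrap, assuming the per-observation log-likelihoods of both models have been collected into aligned arrays; the variable names, and the choice to resample individual transitions rather than workers, are illustrative rather than the paper's exact procedure.
[breaklines]
import numpy as np

def bootstrap_perplexity_gap(ll_model_a, ll_model_b, n_boot=1000, seed=0):
    """Distribution of perplexity(B) - perplexity(A) over bootstrap resamples
    of per-observation log-likelihoods (aligned arrays)."""
    rng = np.random.default_rng(seed)
    ll_a = np.asarray(ll_model_a)
    ll_b = np.asarray(ll_model_b)
    n = len(ll_a)
    gaps = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)          # resample with replacement
        gaps[b] = np.exp(-ll_b[idx].mean()) - np.exp(-ll_a[idx].mean())
    return gaps

# Example usage: mean gap and a 95% interval between CAREER and a fine-tuned model.
# gaps = bootstrap_perplexity_gap(ll_finetuned_llama, ll_career)
# print(gaps.mean(), np.percentile(gaps, [2.5, 97.5]))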
Table <ref> compares the performance of different variants of fine-tuned Llama-2 models and the previous CAREER transformer by analyzing the perplexity differences in pairs of models. Specifically, we generated 1,000 bootstrap samples from each of the three survey datasets; then, we computed two perplexities on each bootstrap sample using CAREER and one of our fine-tuned Llama-2 models. Readers can refer to Appendix <ref> for visualizations and more comparison results. Since LLMs make predictions on the token level, they may place mass on job titles that do not exist. In Appendix <ref>, we show that fine-tuning encourages models to assign only mass to existing occupations. For example, the fine-tuned Llama-2 models place an average of 99% of their mass on valid occupations. As a result, renormalizing the predictions of fine-tuned LLMs to ensure that they only place mass on real occupations has little effect. §.§ Extracting Embeddings from Language Models While the fine-tuning approach is effective for generating representative predictions, it is computationally expensive. Here, we consider another approach with a lower computational cost. Our approach is based on passing in a text description of an individual's career to an LLM and extracting the model's embedding. A new classification model is trained on top of the embedding to predict the individual's next job. We first convert each job sequence to text using the template discussed in Section <ref>. When we pass in a text template to a language model, the model embeds the text in d-dimensional Euclidean space to predict the individual's next job. To predict an individual's next job from their embedding, we train a multi-class classifier using multinomial logistic regression. Because we are training new models on top of embeddings, we are no longer constrained to make predictions on the token level. So we train the classifier to predict occupation codes directly rather than the job title. Crucially, this approach only requires performing inference steps for each prediction (i.e., to build the emebdding), and so it is more computationally efficient than fine-tuning model parameters. See Appendix <ref> for more details. To extract embeddings from the Llama-2 models (fine-tuned and off-the-shelf), we use the final-layer model representation of each model. We consider both the off-the-shelf Llama-2 models considered in Section <ref> and the fine-tuned models described above. We also consider an approach that takes advantage of the in-context learning capabilities of LLMs. Specifically, in addition to including an individual's job trajectory in the template, we also include the complete trajectories of three randomly sampled individuals at the beginning of the prompt. See Appendix <ref> for more details. In addition to extracting embeddings from LLMs, we also consider models that are designed specifically to provide embeddings. Specifically, we generated text embeddings using three models available from OpenAI: , , and . For each survey, we report results for the model with the best performance. Table <ref> contains the results summarizing model performance. All models form far better predictions for the survey population than the original LLMs. However, there is still a substantial gap between these models and the fully fine-tuned LLMs, pointing to the importance of fine-tuning for generating representative predictions. The embeddings extracted from the LLMs form better predictions than those from the embedding-only models. 
In-context learning also appears to provide additional benefits at minimal computational cost. It's worth noting that the 70B parameter models form worse predictions than the 13B parameter models. However, this may be due to regularization challenges; the 70B parameter model contains 8,192 embedding dimensions compared to 5,120 for the 13B parameter model. §.§ Summary Table <ref> compares various approaches for predicting the next occupation in a career trajectory, evaluating them based on fixed cost, variable cost per observation, and representativeness. Classical methods such as logit models have relatively low fixed computational costs for model estimation; the variable cost per prediction is also extremely low. Since these models are not capable of capturing complicated temporal dependencies in job histories, their predictions are only moderately representative of survey datasets. Non-LLM deep learning approaches, like CAREER, require significant model training and hyperparameter tuning effort. However, they demonstrate more representative predictions compared to classical methods. The bottom three rows summarize the methods described in this paper. While using LLMs out-of-the-box requires no additional training, their predictions are poor. Fine-tuning these models results in the most representative predictions, albeit at a high fixed computational cost. The embedding approach alleviates the cost, yet the representativeness of predictions suffers. One of the most surprising results is that fine-tuned LLMs form better predictions of occupational trajectories than CAREER, a model designed specifically to form representative predictions. CAREER was trained on a proprietary dataset of 24 million resumes. In contrast, Llama-2 is only trained on publicly available data, and it is open-source. The success of the Llama-2 model lowers the barrier to entry for researchers interested in studying career trajectories, as they no longer need access to proprietary datasets or significant computational resources to train deep learning models from scratch. § ANALYSES Our experiments demonstrate that our best-performing approach, which is to directly predict jobs through text tokens using a fine-tuned Llama-2 (70B) model, achieves superior perplexity scores compared to the previous state-of-the-art CAREER model, even without training on an extensive resume dataset. These findings imply that future researchers might forgo training transformers from scratch using large datasets while still producing an excellent labor choice model and potentially use it in other economic modeling contexts. Given our results, we investigate why this approach outperforms the previous state-of-the-art CAREER model, which is pre-trained on a massive resume dataset. This section delves deeper into the performance differences between the previous state-of-the-art CAREER model and our best-performing approach. Our experimental results demonstrate that our fine-tuned Llama-2 (70B) model is our best-performing model. Consequently, we will use this model as a reference point for comparison with other approaches. §.§ Binary Prediction We start with inspecting models' performance on the binary task of whether an individual will change her job (i.e., y_i,t≠ y_i,t-1) or not. Specifically, we define stay_i,t = 1{y_i,t = y_i,t-1}. 
The model predicts if an individual will stay in the same occupation with the following probability: P̂(stay_i,t| x_i , x_i, ≤ t, y_i, < t) = P̂(y_i,t-1| x_i , x_i, ≤ t, y_i, < t) and define move_i,t = 1{y_i,t≠ y_i,t-1}; the predicted probability of moving is therefore P̂(move_i,t| x_i , x_i, ≤ t, y_i, < t) = 1 -P̂(stay_i,t| x_i , x_i, ≤ t, y_i, < t) We exclude the first record t=1 for every individual from our analysis. ROC Curve The ROC curve is a graphical representation that illustrates the performance of a binary classifier by plotting the true positive rate against the false positive rate across various threshold settings. Figure <ref> compares the ROC curve of different models with moving as the positive label, which suggests that the fine-tuned language model outperforms the CAREER model by a slight margin. Model Calibration Model calibration is crucial, as it ensures that predictive models accurately reflect real-world outcomes, enhancing their reliability and applicability in scientific research. We investigate different models' calibration in predicting whether an individual will change her job (i.e., y_i,t≠ y_i,t-1) or not. To assess how well-calibrated each model is, we split observations into ten groups based on deciles of predicted probability of changing jobs P̂(move_i,t) (i.e., the next occupation y_i, t is different from the previous one y_i, t-1). Then, for each group, we compute the empirical percentage of movers. If a model is well-calibrated, the average predicted P̂(move_i,t) should match the actual proportion of movers within each group. Figure <ref> demonstrates the calibration plot for CAREER and our best-performing approach, in which the diagonal line represents a perfectly calibrated model. Despite both models being calibrated on average, we observe that our best-performing approach is better-calibrated in predicting staying and moving than the CAREER model, which underestimates moving in some groups and overestimates it in others. §.§ Performance on the multinomial prediction task conditional on moving Figure <ref> shows that the fine-tuned Llama-2 (70B) performs better on movers uniformly across the three datasets. However, we also note that it assigns a lower overall probability of staying in the same job. In this part, we investigate whether fine-tuned Llama-2 performs better on movers because of its tendency to allocate more probability mass to job changes in general, rather than its ability to accurately predict the specific job an individual transitions to, conditional on them moving to a new job. To assess model performance for movers, we compute the probability of the next occupation conditional on moving: P̂(y_i,t| y_i,t≠ y_i-1,t, x_i , x_i, ≤ t, y_i, < t) = P̂(y_i,t| x_i,x_i, ≤ t,y_i, < t)/P̂(move_i,t| x_i , x_i, ≤ t, y_i, < t) In Table <ref>, we further compute the differences in model perplexity between our fine-tuned Llama-2 models and CAREER using bootstrapping. We see that the fine-tuned Llama-2 (70B) outperforms all other models. We note that the perplexity measured on the conditional modeling problem in Table <ref> is much higher than the perplexities reported in Table <ref> in the experiments section, which is why we also report higher differences between these models. 
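Both the calibration check and the conditional evaluation above are straightforward to reproduce from per-observation predictions. As one example, here is a minimal sketch of the decile-based calibration plot, assuming arrays of predicted move probabilities and observed move indicators are available; names are illustrative.
[breaklines]
import numpy as np

def calibration_by_decile(p_move, moved, n_bins=10):
    """Return (mean predicted P(move), empirical mover share) per decile of
    the predicted move probability; these are the points on the calibration plot."""
    p_move = np.asarray(p_move)
    moved = np.asarray(moved, dtype=float)
    edges = np.quantile(p_move, np.linspace(0.0, 1.0, n_bins + 1))
    bins = np.clip(np.searchsorted(edges, p_move, side="right") - 1, 0, n_bins - 1)
    points = []
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            points.append((p_move[mask].mean(), moved[mask].mean()))
    return points  # the 45-degree line corresponds to perfect calibration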
§.§ Explaining the Advantages of the LLM Approach We explore the advantages of LLMs in predicting future occupations by asking the following question: for what kind of observations (y_i,t, x_i, x_i, ≤ t, y_i, <t) do language models outperform the previous specialized transformer? We define our prediction target as the difference in the log-likelihood of the ground truth between predictions from the fine-tuned Llama-2 (70B) and CAREER. To analyze the heterogeneity in this prediction target, we employ a cross-fitting approach. First, we split the test set into ten folds. We then loop through the folds, where in each case, one of the folds is a held-out quintile evaluation fold, while the complement, the training folds, are used to construct a mapping from feature X_it into quintiles. Using the data from the training folds, we train a regression forest <cit.>. We then rank the predicted values in the training fold and determine the thresholds for quintiles; this in turn determines a mapping from features into quintiles. Next, we apply this trained function to the data points in the evaluation fold, assigning each point to its corresponding quintile. We then calculate the mean value of the prediction target within each quintile using the assigned points. This process is repeated for all folds, where in each fold, the within-quintile means are estimated using an evaluation fold that is distinct from the data used to estimate the quintile mapping for that fold. Finally, we present the mean values per quintile averaged across all folds. The presence of heterogeneity in these quintile-level means, estimated on held-out data, indicates that the intensity of differences in performance between fine-tuned Llama-2 (70B) and CAREER vary as a function of the features X_it. Then, we show the values of each of several features in each quintile, allowing us to understand the factors that vary systematically between higher and lower quintiles. We conduct this analysis separately for two prediction scenarios: binary move vs. stay, and job choice conditional on moving. [This method used to measure prediction heterogeneity is closely related to existing methods for analyzing heterogeneous treatment effects (HTE) <cit.> where conditional average treatment effects (CATEs) are estimated from observational or randomized data where units are exposed to treatment at random. However, unlike in traditional CATE estimation, where only one potential outcome is observed for each unit, we observe both counterfactuals (i.e., the log-likelihoods from fine-tuned Llama-2 (70B) and CAREER) for every observation. In this context, the "treatment" is the use of fine-tuned Llama-2 (70B) instead of CAREER for prediction, and the "treatment effect" is the difference in log likelihood between the two models. The Conditional Average Treatment Effect (CATE) represents the expected treatment effect given a set of features X_it. By sorting observations into quintiles based on estimated CATE, we can assess whether there is detectable heterogeneity in the treatment effect as a function of the features. If the CATE estimates within each quintile group, which are estimated on held-out data, exhibit monotonicity, this implies that there is significant heterogeneity in the treatment effect that can be explained by the features <cit.>. 
Furthermore, by examining heatmaps of feature values across the CATE quintiles, we can identify which features are associated with larger or smaller improvements in prediction performance when using fine-tuned Llama-2 (70B) compared to CAREER. This analysis allows us to interpret the sources of heterogeneity in the treatment effect and understand which features drive the differences in model performance. The fact that we observe both counterfactuals for each observation strengthens the validity of our heterogeneity analysis, as it eliminates the need for assumptions typically required in CATE estimation when only one potential outcome is observed.] ΔP̂_move = P̂_Fine-tuned Llama-2 (70B)(move_i,t| x_i , x_i, ≤ t, y_i, < t) - P̂_CAREER(move_i,t| x_i , x_i, ≤ t, y_i, < t) ΔP̂_job = P̂_Fine-tuned Llama-2 (70B)(y_i,t| y_i,t≠ y_i-1,t, x_i , x_i, ≤ t, y_i, < t) - P̂_CAREER(y_i,t| y_i,t≠ y_i-1,t, x_i , x_i, ≤ t, y_i, < t) We craft features X_it from (y_i,t, x_i, x_i, ≤ t, y_i, <t) and use generalized random forests to discover heterogeneity in the space of X_it with different treatment effects (i.e., the performance gap between fine-tuned Llama 2 and CAREER). Since the feature set X_it heavily depends on previous occupations, we exclude the first prediction (t=1) of each worker from our analysis. We also merge observations from test splits of the three survey datasets, analyze them together, and retain a dataset indicator. Specifically, the feature X_it includes: * : The rank of the job title y_i,t within the individual's career trajectory, which is the integer t. * : The number of occurrences of occupation y_i,t in the dataset. * : number of occurrences of occupation y_i, t-1 in the dataset * : the product of and * : number of tokens in the next job title y_i, t. * : number of tokens in the previous job title y_i, t-1 * : number of tokens in the text representation of the job history 𝒯(x_i, x_i, ≤ t, y_i, < t) * : The empirical number of transitions y_i, t-1→ y_i,t, which is calculated as #[y_i, t-1→ y_i,t]. * : The empirical probability of transition y_i, t-1→ y_i,t, which is calculated as #[y_i, t-1→ y_i,t]/#[y_i, t-1]. * and : These variables are only defined for movers. We use the SOC hierarchy to cluster y_i, t-1 (y_i,t) into SOC-group(y_i, t-1) (SOC-group(y_i,t)) and SOC-detailed-group(y_i, t-1) (SOC-detailed-group(y_i,t)). The SOC hierarchy generates around 10 SOC groups and 30 detailed SOC groups. We add two additional indicators intuitively measuring the magnitude of job transition for movers from y_i, t-1 to y_i,t, 1{SOC-group(y_i, t-1) = SOC-group(y_i,t)} and 1{SOC-detailed-group(y_i, t-1) = SOC-detailed-group(y_i,t)}. * , and - indicator variables for the dataset that each point comes from. * : A 32-dimensional PCA representation of the full 8192-dimensional representation of the text representation 𝒯(x_i, x_i, ≤ t, y_i, < t). While these features are not easily interpretable, they help discover heterogeneity, and we only analyze the interpretable features in the subgroups in our analyses. §.§.§ Heterogeneous Treatment Effects for the Binary Staying versus Moving Prediction First, we conduct heterogeneity analysis for the binary choice of staying versus moving (see Equation <ref>). Figure <ref> shows that the observations predicted to be in the highest quintile have higher average differences as estimated using heldout data using our method, indicating that elements of job histories have strong predictive power for the gap in performance between fine-tuned Llama-2 (70B) and CAREER. 
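The quintile construction behind this figure can be sketched as follows. We use scikit-learn's RandomForestRegressor as a stand-in for the regression forest, and assume the feature matrix X_it and the per-observation log-likelihood gap have been precomputed; details such as forest hyperparameters are illustrative.
[breaklines]
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

def quintile_means(X, gap, n_folds=10, n_quantiles=5, seed=0):
    """Cross-fitted mean log-likelihood gap per predicted-gap quintile."""
    X, gap = np.asarray(X), np.asarray(gap)
    fold_means = np.zeros((n_folds, n_quantiles))
    kf = KFold(n_splits=n_folds, shuffle=True, random_state=seed)
    for f, (train_idx, eval_idx) in enumerate(kf.split(X)):
        forest = RandomForestRegressor(random_state=seed)
        forest.fit(X[train_idx], gap[train_idx])
        # quintile thresholds come from predictions on the training folds
        cuts = np.quantile(forest.predict(X[train_idx]),
                           np.linspace(0, 1, n_quantiles + 1)[1:-1])
        # assign held-out points to quintiles, then average the observed gap
        q = np.searchsorted(cuts, forest.predict(X[eval_idx]))
        for k in range(n_quantiles):
            sel = q == k
            if sel.any():
                fold_means[f, k] = gap[eval_idx][sel].mean()
    return fold_means.mean(axis=0)   # averaged across folds, as in the figure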
Furthermore, Figure <ref> splits observations into 5 quintiles based on estimated treatment effects and shows average values of each feature over observations belonging to each of these quintiles. From the heatmap, we find that observations in the higher quintiles have lower values for the feature and slightly higher values for the features and lengths of the text from the text template. This indicates that the world knowledge embedded in Llama-2 through natural language in the pre-training phase transfers to our problem and helps it predict better over rare transitions and longer job histories, analogous to the performance of LLMs on NLP tasks <cit.>. §.§.§ HTE for Movers Conditional on Moving We now conduct heterogeneity analysis for the conditional choice problem of modeling job choice conditional on moving (see Equation <ref>). Figure <ref> shows that the difference in performance between fine-tuned Llama-2 (70B) and CAREER systematically differs as a function of characteristics of the job history. The corresponding heat map is shown in Figure <ref>. Similarly to the binary prediction case, <ref> shows that fine-tuned Llama-2 performs better for movers as the corresponding features increase. This can again be attributed to the attention mechanism and pre-training. §.§ Model Performance by Populations with Different Educational Backgrounds Previous sections have shown that our new language-based approach models our target population better than the previous state-of-the-art CAREER model. In this section, we explore how models perform on different subgroups defined by educational backgrounds. Table <ref> presents the perplexity differences between fine-tuned Llama-2 (70B) and CAREER on different subgroups in different datasets, suggesting that our language-based approach consistently outperforms the previous state-of-the-art model across subpopulations. Figure <ref> depicts the calibration plots of models' performance in predicting moving and staying in different subgroups after combining the three datasets. Our experimental results indicate that the fine-tuned language model is consistently better calibrated than CAREER across subpopulations. We thus show that fine-tuning allows us to adapt an LLM such that its predictions, conditional on both demographic characteristics and job history, are well calibrated. Figure <ref> in Appendix <ref> suggests that our conclusion holds when analyzing the datasets separately. Readers can refer to Appendix <ref> for more details. § CONCLUSION In this paper, we propose LABOR-LLM, a method for encoding workers' career histories as text and building representative occupational models with LLMs. Our experimental results indicate that one can leverage pre-trained, publicly available LLMs to achieve state-of-the-art performance on career trajectory prediction via fine-tuning. The LABOR-LLM approaches provide researchers with ways to circumvent pre-training transformer models on massive resume datasets, which requires excessive computational resources, costly data access, and engineering effort. Our results further show that the perplexities of our best-performing approach of fine-tuning an LLM and predicting jobs based on its next-token distribution are better than those in Table <ref>, suggesting that fine-tuned Llama models are more effective in predicting future occupations as tokens of job titles.
One potential explanation is that the fine-tuned model integrates its knowledge of general English from pre-training with the specific job titles it learns from the dataset during fine-tuning. We also find that a fine-tuned model outperforms off-the-shelf models paired with in-context learning. While LLMs generate plausible career trajectories with prompting, these predictions are not representative of the workforce. We find that our fine-tuned LLMs produce representative career trajectory predictions, conditioned on demographics within subpopulations as well as on job history. Our approach produces predictions that are more representative than CAREER, a dedicated transformer-based next-job prediction model pre-trained on resume datasets. Thus, we show how to adapt LLMs for the purpose of next-job prediction on nationally representative survey datasets. We conclude our paper with an outline of future research directions. Due to constraints on computational resources, we did not conduct the normalization step for the largest fine-tuned language model. The perplexity metrics we currently report therefore understate the performance of the fine-tuned Llama-2 (70B) model. We plan to compute the performance of the largest model after normalization to obtain a more accurate estimate of the potential of language models for the career trajectory prediction problem. In our second approach, which uses language models as embedding engines, we use a simple multinomial logistic regression with weight decay to generate predicted distributions from embeddings. We observe that such simple linear models fail to decipher the high-dimensional embeddings from the 70-billion-parameter model. Exploring more sophisticated prediction heads, such as deep neural networks, could fully exploit the embeddings from the largest model and potentially improve the perplexity. Our experimental results indicate that incorporating in-context learning examples enhances the predictive performance of pre-trained models. However, due to the limited context length of the Llama-2 models, we were constrained to adding only three in-context learning examples. In future research, we intend to investigate the value of information added by varying the number of in-context examples, leveraging language models with extended context windows. Taken together, our approach and results show that LLMs can be used as powerful base models for predictive models of the labor market, and can be adapted using fine-tuning to make nationally representative labor market predictions. More generally, our results indicate that LLMs may also be helpful for other economic modeling problems. They obviate the need to collect large datasets for pre-training and circumvent the challenges of training, which demands significant time and engineering expertise. § EXAMPLES OF CAREER TRAJECTORIES GENERATED BY OFF-THE-SHELF LLMS LLMs gain knowledge about the labor market and the hierarchy among different occupations from the pre-training data. Therefore, off-the-shelf LLMs can generate plausible sequences of future occupations conditional on one's job history via appropriate prompt engineering. This appendix demonstrates a few example career trajectories generated by the off-the-shelf Llama-2 (7B) model. In each example, we provide the model with the worker's career history and the desired output format in the prompt. We use the model with 8-bit quantization to generate texts.
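A sketch of this generation setup is shown below, assuming the transformers and bitsandbytes interface for 8-bit loading; the sampling parameters shown are illustrative and not necessarily those used for the examples that follow.
[breaklines]
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, load_in_8bit=True, device_map="auto")

def continue_resume(prompt, max_new_tokens=120):
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        out = model.generate(**inputs, max_new_tokens=max_new_tokens,
                             do_sample=True, top_p=0.9, temperature=0.8)
    # return only the generated continuation, not the prompt itself
    return tokenizer.decode(out[0][inputs["input_ids"].shape[1]:],
                            skip_special_tokens=True)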
To save space, we only show the first five occupations generated in this section. The prompt text above is denoted with in the following examples. §.§ Examples on an Individual with Low Education Level We use the following text as the prompt summarizing the individual's career history. [breaklines] The following is the resume of a female white US worker residing in the northeast region. The worker has the following work experience on the resume, one entry per line, including year, education level and the job title: 1979 to 1980 (high school diploma): Cashiers 1980 to 1981 (high school diploma): Not in labor force 1981 to 1982 (high school diploma): Food servers, nonrestaurant 1982 to 1983 (high school diploma): Food servers, nonrestaurant 1983 to 1984 (high school diploma): Food servers, nonrestaurant 1984 to 1985 (high school diploma): Generation Example 1 (Low-Education) [breaklines] <JOB HISTORY PROMPT> Waiters and waitresses 1985 to 1986 (high school diploma): Cashiers 1986 to 1987 (high school diploma): Cashiers and office clerks, general 1987 to 1988 (high school diploma): Office clerks, general 1988 to 1989 (high school diploma): Food servers, nonrestaurant Generation Example 2 (Low-Education) [breaklines] <JOB HISTORY PROMPT> Cooks, short order 1985 to 1986 (high school diploma): Food servers, nonrestaurant 1986 to 1987 (high school diploma): Cooks, short order 1987 to 1988 (high school diploma): Cooks, short order 1988 to 1989 (high school diploma): Waiters and waitresses Generation Example 3 (Low-Education) [breaklines] <JOB HISTORY PROMPT> Cashiers 1985 to 1986 (high school diploma): Food servers, nonrestaurant 1986 to 1987 (high school diploma): Food servers, nonrestaurant 1987 to 1988 (high school diploma): Waiters and waitresses 1988 to 1989 (high school diploma): Cashiers §.§ Examples on an Individual with High Education Level The second example illustrates model's capability on the high-education group. We use the following prompt text in this example. [breaklines] The following is the resume of a female white US worker residing in the northeast region. 
The worker has the following work experience on the resume, one entry per line, including year, education level and the job title: 1984 to 1985 (college): Business Teachers Postsecondary 1985 to 1986 (college): Postmasters and mail superintendents 1986 to 1987 (graduate degree): Sales Representatives Services All Other 1987 to 1988 (graduate degree): Wholesale and retail buyers, except farm products 1988 to 1989 (graduate degree): Elementary and middle school teachers 1989 to 1990 (graduate degree): Elementary and middle school teachers 1990 to 1991 (graduate degree): Elementary and middle school teachers 1991 to 1992 (graduate degree): Generation Example 1 (High-Education) [breaklines] <JOB HISTORY PROMPT> First-line supervisors of retail sales workers 1992 to 1993 (graduate degree): First-line supervisors of retail sales workers 1993 to 1994 (graduate degree): Elementary and middle school teachers 1994 to 1995 (graduate degree): Sales representatives, services, all other 1995 to 1996 (graduate degree): Postmasters and mail superintendents Generation Example 2 (High-Education) [breaklines] <JOB HISTORY PROMPT> Sales representatives services all other 1992 to 1993 (graduate degree): Postmasters and mail superintendents 1993 to 1994 (graduate degree): Business teachers postsecondary 1994 to 1995 (graduate degree): Sales representatives services all other 1995 to 1996 (graduate degree): Elementary and middle school teachers Generation Example 3 (High-Education) [breaklines] <JOB HISTORY PROMPT> Secondary school teachers 1992 to 1993 (graduate degree): Postsecondary teachers 1993 to 1994 (graduate degree): Postsecondary teachers 1994 to 1995 (graduate degree): Social scientists and related workers 1995 to 1996 (graduate degree): Social scientists and related workers § NOTATION Table <ref> summarizes the notations we use in this paper. § SUMMARY OF DATASETS Table <ref> shows the number of observations and number of individuals in each split of each dataset. For example, there are 8,684 workers in the training split of the PSID dataset, and there are ∑_i T_i = 44,231 prediction observations in the same split. Table <ref> summarizes the training corpus used to fine-tune our language models. Note that the maximum length of each prompt is less than 1,000 tokens, significantly shorter than the 4,096 context window size for the Llama-2 family. § DETAILS OF THE JOB TITLES Figure <ref> presents example job titles in a word cloud, weighted by their popularity. The popularity of an occupation title is determined by the frequency of its total occurrences across the test splits of the three datasets. § FULL-PRECISION VERSUS QUANTIZATION Model quantization is a technique for improving models' computational efficiency and decreasing memory usage by reducing the numerical precision of model parameters (e.g., from 32-bit to 8-bit or 4-bit). Existing research has shown that LLMs with quantization can achieve similar performance to full-precision models <cit.>. All Llama experiments in the main paper used the 8-bit quantized versions of models to save computational resources. In this section, we compare performance of the full-precision and 8-bit quantization versions of the fine-tuned Llama-2 (7B). Specifically, the Llama-2 (7B) model was fine-tuned under full precision; then, we query predicted probabilities of future job titles using the two variants of the fine-tuned model, one in full precision and the other quantized to 8-bit. Table <ref> compares models' performance on different datasets. 
These results suggest no significant difference between the full-precision and quantized models in average normalization constant, perplexity (normalized), and perplexity (unnormalized). § PREDICT FUTURE OCCUPATIONS AS TOKENS IN JOB TITLES We can directly leverage LLMs' next token prediction capabilities to predict future occupations without building an additional classifier. To obtain the predicted probability of the next occupation, we first tokenize each job title title_y into a sequence of tokens. Suppose the string title_y is tokenized into n tokens {token_y^(1), token_y^(2), …, token_y^(n)}. Then, the unnormalized probability of predicting y is the likelihood the language model assigns to the token sequence {token_y^(1), token_y^(2), …, token_y^(n)} as the continuation of the text representation 𝒯(x_i, x_i, ≤ t, y_i, < t). The predicted probability can further be expanded using the chain rule of probability, as shown in Equation (<ref>). P̃_LLM(y |𝒯(x_i , x_i, ≤ t, y_i, < t)) = P_LLM({token_y^(1), token_y^(2), …, token_y^(n)}|𝒯(x_i , x_i, ≤ t, y_i, < t)) = ∏_j=1^n P_LLM(token_y^(j)|𝒯(x_i , x_i, ≤ t, y_i, < t), token_y^(1), token_y^(2), …, token_y^(j-1) ) The P_LLM(token_y^(j)|𝒯(x_i, x_i, ≤ t, y_i, < t), token_y^(1), token_y^(2), …, token_y^(j-1)) is operationalized by (1) appending all tokens token_y^(1), token_y^(2), …, token_y^(j-1) to the text representation 𝒯(x_i, x_i, ≤ t, y_i, < t) and (2) querying the likelihood the language model assigned to token_y^(j) as the next token conditioned on all the previous tokens. It is worth noting that we cannot guarantee that the model only assigns positive probabilities to valid job titles. In fact, given the presence of the softmax function in our language model, P_LLM(·|𝒯(x_i, x_i, ≤ t, y_i, < t)) > 0 for any sequence of tokens of any length. Therefore, the sum of all possible job titles' probabilities is not necessarily one. We would need the following normalization to calculate the probability of predicting y_t so that ∑_yP̂(y |𝒯(x_i, x_i, ≤ t, y_i, < t)) = 1. P̂(y_i,t| x_i , x_i, ≤ t, y_i, < t) = P̃_LLM(y_i,t|𝒯(x_i , x_i, ≤ t, y_i, < t))/∑_y' ∈𝒴P̃_LLM(y' |𝒯(x_i , x_i, ≤ t, y_i, < t)) The normalization operation in Equation (<ref>) is computationally expensive, since we need to perform LLM inference |𝒴| times. In the experiments section, we will assess the necessity of this normalization step by examining how well P̃_LLM(·) approximates P̂_LLM(·). It is worth noting that since the denominator in <ref> is less than 1 (since the total probability mass on the subset of job title tokens is less than the total probability mass on all tokens), P̃_LLM is always an overestimate of P̂_Model. As a result, Test Perplexity calculated using the former is also an overestimate of the latter since the normalization constant is less than one. § OFF-THE-SHELF LANGUAGE MODELS To examine the performance of pre-trained LLMs without fine-tuning, we use the prediction-as-token approach (see Appendix <ref>) and construct P̂(y | x_i, x_i, ≤ t, y_i, t) = P_LLM(title_y |𝒯(x_i , x_i, ≤ t, y_i, < t)). Table <ref> presents perplexity scores of the Llama-2 (7B) with bootstrap standard deviations. Our results indicate that off-the-shelf models fail to accomplish the career trajectory task well. One possible explanation for this inferior performance is that the pre-trained model lacks knowledge of the set of valid job titles. 
Consequently, the model assigns a significant probability mass to strings that are not valid job titles, resulting in small values of P_LLM(title_y |𝒯(x_i , x_i, ≤ t, y_i, < t)). To improve the baseline model, we prepend the complete list of job titles to the text representation 𝒯(x_i , x_i, ≤ t, y_i, < t) in the prompt. The list of titles is a paragraph with a single title_y on each line and a total of |𝒴| lines, the total length of this list is around 2,500 tokens. The predicted probability of landing at occupation y is P_LLM(y |List of Titles⊕𝒯(x_i , x_i, ≤ t, y_i, < t)), where ⊕ denotes the string concatenation operation. The results in Table <ref> indicate that providing the model with a list of job titles enhances its performance. However, even with this improvement, off-the-shelf models still perform worse than other baseline models. We also examine how often the off-the-shelf Llama-2 (7B), without any fine-tuning, predicts valid job titles. Specifically, we randomly sample 10% of the test split of each survey dataset and evaluate the “normalization constant” in Equation (<ref>), defined as ∑_y ∈𝒴 P_LLM(title_y |prompt). The average normalization constant is only around one-third using Llama-2 (7B) with 𝒯(x_i , x_i, ≤ t, y_i, < t) as the prompt. After adding the list of job titles to the prompt, the average normalization constant rises to around two-thirds but is still far away from one; the relatively low chance of hitting valid job titles partially explains the poor performance of LLMs off-the-shelf. Table <ref> enumerates average normalization constants across datasets using different prompt formats. Finally, for the sample studied above, we perform the explicit normalization in Equation (<ref>) and compute the perplexity (bootstrap standard deviations in paraphrase). Table <ref> reports perplexities on different datasets using two different prompts. We see that, even after constraining the model to predict valid job titles through normalization, the off-the-shelf Llama-2 (7B) model still failed to match the performance of baseline models examined by <cit.>. § EFFECTS OF NORMALIZATION IN OCCUPATION-AS-TOKEN PREDICTION This section examines the performance of fine-tuned Llama-2 (7B) and fine-tuned Llama-2 (13B) models on the three survey datasets using the first approach discussed (i.e., predict the next occupation as tokens) with explicit normalization. We could not run the same experiment with the Llama-2 (70B) model because the normalization operation required excessive computational resources. We conduct experiments to investigate the necessity of the computationally expensive normalization procedure. The third column in Table <ref> shows that the average normalization constant is close to one for the fine-tuned Llama-2 (7B) and Llama-2 (13B) models on all three test datasets. Therefore, it is possible to approximate P(y_t |𝒯(x_i, x_i, ≤ t, y_i, < t)) using Equation (<ref>) without normalization, which would enable us to scale to up much larger language models such as Llama-2 (70B). The last two columns in Table <ref> report perplexities from unnormalized probabilities in Equation (<ref>) and perplexities from the normalized probabilities in Equation (<ref>). Our experiment results indicate that it is feasible to use unnormalized probabilities for prediction in larger models without significantly affecting performance. 
Moreover, as noted earlier, since the denominator in <ref> is less than 1 (the total probability mass on the subset of job-title token sequences is less than the total probability mass on all token sequences), P̃_LLM is always an overestimate of P̂_Model. As a result, test perplexity calculated using the former is also an overestimate of the latter, since the normalization constant is smaller than one. In other words, in Table <ref>, the unnormalized perplexity is a strict overestimate of the normalized perplexity. We have shown that even without normalization, our approach outperforms the state-of-the-art CAREER model. Thus, we can bypass the computational overhead associated with normalization, making it practical to scale up to models like Llama-2 (70B). § DETAILS OF MODEL PAIRWISE PERFORMANCE DIFFERENCES Figure <ref> illustrates the distributions of (Perplexity of Model 1, Perplexity of Model 2) pairs across different model pairs and datasets. Our observations indicate that larger models (represented on the y-axis) consistently outperform smaller models (represented on the x-axis) in terms of perplexity, suggesting significant returns from scaling model size. § DETAILS OF LANGUAGE MODELS USED AS EMBEDDING ENGINES Table <ref> summarizes the embedding models we use in our experiments and the dimensions of the embeddings they generate. The most straightforward prediction head is a multinomial logistic regression model that predicts the next occupation. The estimate of the conditional probability of the next occupation is given by: P̂_MNL(y | x_i , x_i, ≤ t, y_i, < t) = exp(β_y^⊤ E_i, < t)/∑_y' ∈𝒴exp(β_y'^⊤ E_i, < t) where {β_y}_y ∈𝒴 is the set of trainable parameters. We train the βs in the prediction head to minimize the cross-entropy loss between the predicted distribution and the true distribution of the next occupation, defined in Equation (<ref>). We use L2 regularization on the βs to avoid overfitting. β^⋆ = argmin_β∈ℝ^|𝒴| -1/∑_i ∈Train Set T_i∑_i ∈Train Set∑_t=1^T_i∑_y ∈𝒴1{y_i,t = y}logP̂_MNL(y | x_i , x_i, ≤ t, y_i, < t) Finally, we plug the estimated β^⋆ into Equation (<ref>) to obtain the predicted probability for every occupation, and we use the same test set perplexity in Equation (<ref>) to evaluate the model. It is worth noting that the β_y's in Equation (<ref>) can be interpreted as a latent representation of job y; the β_y's are initialized randomly and learned during the training process. In contrast, the direct prediction from job tokens approach in Section <ref> incorporates the LLM's understanding of the information embedded in job titles while making the prediction; therefore, we expect slightly worse performance from this second approach. Researchers can also deploy other prediction heads, such as random forests, gradient boosting, and neural networks, to predict the next occupation. We fit a |𝒴|-class multinomial regression on the train split of the respective survey dataset to predict the ground-truth occupation y_i,t, using the Adam optimizer, a learning rate of 0.003, and a weight decay (i.e., regularization) hyperparameter. Since the loss landscape of multinomial regression is generally well-behaved, we only conduct a hyper-parameter search on the weight decay, ranging from 10^-6 to 1 in log space. To avoid over-fitting and speed up the experiment, a training strategy with early stopping (on the validation set loss) was implemented, and the final regularization parameter was chosen to minimize the validation set loss.
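This training loop can be sketched as follows; a minimal illustration in PyTorch, where the embedding matrices, the concrete weight-decay value, and the patience are placeholders rather than the exact settings used in the experiments.
[breaklines]
import torch
import torch.nn as nn

def train_mnl_head(E_train, y_train, E_val, y_val, n_classes,
                   lr=0.003, weight_decay=1e-4, max_epochs=500, patience=10):
    """Multinomial logistic regression head on frozen embeddings, trained with
    Adam + weight decay and early stopping on validation loss."""
    head = nn.Linear(E_train.shape[1], n_classes)
    opt = torch.optim.Adam(head.parameters(), lr=lr, weight_decay=weight_decay)
    loss_fn = nn.CrossEntropyLoss()
    best_val, best_state, bad = float("inf"), None, 0
    for _ in range(max_epochs):
        head.train()
        opt.zero_grad()
        loss = loss_fn(head(E_train), y_train)   # full-batch for simplicity
        loss.backward()
        opt.step()
        head.eval()
        with torch.no_grad():
            val = loss_fn(head(E_val), y_val).item()
        if val < best_val:
            best_val, bad = val, 0
            best_state = {k: v.detach().clone() for k, v in head.state_dict().items()}
        else:
            bad += 1
            if bad >= patience:                  # early stopping on validation loss
                break
    if best_state is not None:
        head.load_state_dict(best_state)
    return head  # softmax(head(E)) gives the predicted occupation distribution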
§ DETAILS OF IN-CONTEXT LEARNING Formally, let 𝒯_j = 𝒯(x_j , x_j, ≤ T_j, y_j, ≤ T_j), 𝒯_k = 𝒯(x_k , x_k, ≤ T_k, y_k, ≤ T_k), and 𝒯_ℓ = 𝒯(x_ℓ , x_ℓ, ≤ T_ℓ, y_ℓ, ≤ T_ℓ) denote text representations of the three in-context learning examples. Given individual i's history (x_i , x_i, ≤ t, y_i, < t), we compute an embedding vector Ẽ_i,<t = Model(𝒯_j ⊕𝒯_k ⊕𝒯_ℓ⊕𝒯(x_i , x_i, ≤ t, y_i, < t) ) ∈ℝ^d where 𝒯(·) is the text representation function as defined in Section <ref> and ⊕ denotes the string concatenation operation. Finally, we train a |𝒴|-class logistic regression model on Ẽ_i,<t to predict the next occupation, following the same procedure as in Section <ref>. To ensure the robustness and replicability of our findings, this experiment is replicated five times for each dataset, with each iteration utilizing a different set of randomly selected examples. This procedure allows us to evaluate the stability of in-context learning across various career trajectories. Although generating the embedding in Equation (<ref>) requires the language model to process a longer sequence of text, which would increase the inference cost, this approach requires a significantly lower amount of computational resources, since it does not require model fine-tuning. § MODEL PERFORMANCE BY DIFFERENT EDUCATION GROUPS Table <ref> presents the perplexities of the CAREER and fine-tuned language models on different survey datasets, indicating superior performance of fine-tuned language models compared to previous models. Figure <ref> shows the calibration plots for predicting staying/moving on different datasets and education groups. Finally, Figure <ref> plots ROC curves of different models while predicting staying/moving on different datasets and education groups.
http://arxiv.org/abs/2406.18907v1
20240627053849
Historia Magistra Vitae: Dynamic Topic Modeling of Roman Literature using Neural Embeddings
[ "Michael Ginn", "Mans Hulden" ]
cs.CL
[ "cs.CL" ]
§ INTRODUCTION Topic models, which model the distribution of how topics appear within documents, have long been proposed as tools for computational social science and history <cit.>. In particular, dynamic topic models <cit.> can be used to represent a distribution of topics that changes over time. However, experts have been critical of the practicality of topic models for research, noting the difficulty in quantitative evaluation <cit.>, handling documents with figurative language <cit.>, and the difficulty in distinguishing novel, meaningful insights from noise <cit.>. Traditionally, topic models have used a Latent Dirichlet Allocation (LDA), a Bayesian statistical model that explains observed text using unseen groups of documents <cit.>. Recent research has proposed an alternate strategy for topic modeling, where neural embeddings are used to create topic clusters <cit.>. In this work, we compare the effectiveness of this neural approach with traditional statistical techniques, hypothesizing that this approach will be more robust to noise and reduce the effort needed to create useful topic models. Due to the cultural and political dominance of the Ancient Romans, there exists an extensive surviving corpus of Roman literature, with roughly 220,000 surviving texts and inscriptions <cit.>. Many of these texts are individually well-studied, but holistic analysis of the corpus remains difficult. We experiment with various techniques for dynamic topic modeling over this entire corpus, which is noisy and broad in scope, to determine which approach performs best at producing useful results, while requiring minimal hyperparameter tuning. We make the following contributions: * We produce the first (to our knowledge) dynamic topic model spanning all surviving Roman texts, and create visualizations demonstrating clear historical trends. * We find that the neural embedding approach produces a more readily interpretable topic distribution compared to traditional LDA approaches. * We demonstrate that quantitative metrics for evaluating topic models do not necessarily align with human judgments, and better strategies for evaluating topic models are needed. Our code is available on GitHub.[link omitted for anonymity] § RELATED WORK Dynamic topic modeling has been used across disciplines to explore the evolution of bodies of literature such as nineteenth-century British Parliament debates <cit.>, medical publications about sepsis <cit.>, and Twitter data during the COVID-19 pandemic <cit.>. However, such works (and the majority of similar ones) use the traditional LDA model for dynamic topic modeling, while we utilize a modern neural clustering technique. Additionally, to our knowledge, this is the first work to apply dynamic topic models across the entire corpus of a historical state. There has been growing interest in NLP approaches for ancient languages including Latin, evidenced by workshops such as LT4HALA <cit.>. Previous work in Latin NLP has included language modeling <cit.>; lemmatization, part-of-speech tagging, and feature identification <cit.>; and automatic dating of Latin texts <cit.>. § DATA We use the Latin Library corpus as our source for Latin texts latinlibrary. We date documents according to the information gathered in <cit.>, although many of these dates are estimated years based on historical knowledge. After removing documents that did not have known dates, the corpus consisted of 1,350 plain-text documents, including histories, poems, plays, inscriptions, and speeches. 
Documents spanned from roughly 449 BC (The Twelve Tables) through AD 600 (The Epistles of Gregory the Great), which roughly covers the time span from the founding of Rome to its fall.[We chose not to include the majority of post-Roman literature, which includes many Latin texts but is often considered a distinct corpus by classical scholars.] For authors with many small texts (poems and inscriptions), multiple texts were combined into a single document, whereas very long texts (such as the Aeneid) were split into several documents. As Latin is a highly inflectional language, preprocessing is essential to extract meaningful topic words. We tokenized, lemmatized, and removed stop words using the Classical Language Toolkit (CLTK) <cit.>. § METHODOLOGY We trained and compared three different models for dynamic topic modeling: a Latent Dirichlet Allocation (LDA) model, as in <cit.>; a model based on Non-negative Matrix Factorization (NMF); and a neural, transformer-based model using BERTopic <cit.>.[Implementation details and hyperparameters will be provided in an appendix in the final version.] §.§ LDA Model The LDA model follows the approach used in <cit.> closely, and uses Gensim[<https://radimrehurek.com/gensim/>] <cit.> for implementation. The LDA dynamic topic model extends the static topic model, modeling each time slice as a static topic model. The parameters for the distribution at each time step are modeled with a state space distribution, capturing how the distribution of topics evolves over time. We separated documents into ten timesteps, finding through qualitative observation that this was the most effective setting across models. §.§ NMF Model Non-negative Matrix Factorization (NMF) is a linear algebra technique often used for extracting latent information. We reimplement the approach used in <cit.> for dynamic topic modeling. In this approach, we run NMF on each timestep, factoring the document-term matrix into a document-topic matrix and a topic-term matrix. Then, we run NMF again, using the terms for the topics at each timestep as the "documents". §.§ BERTopic Model BERTopic <cit.> uses neural embeddings of documents to create clusters of related documents, from which topics are extracted. For dynamic topic modeling, BERTopic uses the static topic model as a global distribution. We compute a topic distribution using the same topics for each time step and fine-tune this representation by averaging each time step globally and evolutionarily (with the previous time step). §.§ Evaluation Following the approach used in <cit.>, we use several quantitative metrics to evaluate topic models. The TC-Embed metric scores topic coherence by taking the average cosine similarity between the embeddings of the top ten words in a topic. The Mean Pairwise Jaccard similarity measure scores topic generality by taking the average ratio of shared terms to total terms for a pair of topics. A good topic model is thought to have high coherence and low generality. § RESULTS The coherence and generality scores are given in <ref>, showing that the LDA model achieves the lowest generality, while the NMF model has the highest topic coherence, and the BERTopic model underperforms in both. However, prior research has suggested that these quantitative metrics often do not line up with human intuition about the quality of the model, particularly for dynamic topic models. Thus, we also evaluate the usefulness of the topic distributions from each model.
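The two metrics above are simple to compute. A minimal sketch follows, under one reasonable reading of their definitions (pairwise cosine similarity of word embeddings for coherence, pairwise Jaccard similarity of top-word sets for generality); word vectors and per-topic top-word lists are assumed to be given.
[breaklines]
import numpy as np
from itertools import combinations

def tc_embed(top_words, word_vectors):
    """Coherence: mean pairwise cosine similarity of the top words' embeddings."""
    vecs = [word_vectors[w] for w in top_words if w in word_vectors]
    sims = [float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
            for u, v in combinations(vecs, 2)]
    return float(np.mean(sims))

def mean_pairwise_jaccard(topics):
    """Generality: mean Jaccard similarity between the top-word sets of topic pairs."""
    sims = [len(set(a) & set(b)) / len(set(a) | set(b))
            for a, b in combinations(topics, 2)]
    return float(np.mean(sims))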
[The full list of terms for each topic will be provided in an appendix in the final version] §.§ Intertopic Distance Map First, we analyzed the distribution of topics regardless of time. We use LDAvis and principal component analysis (PCA) to create a graph showing the distribution of topics for the two static topic models <cit.>, presented in <ref>. §.§ Topics over time We calculate the frequency of topics at each time step by calculating the number of documents with each topic as its most likely topic and visualize the popularity of topics over time. These results are presented in <ref>. § DISCUSSION Though the LDA and NMF models outperform based on quantitative metrics, we can see in <ref> and <ref> that their insights were limited, particularly for the LDA model. The LDA intertopic map shows that Topic 1 and its cluster of topics were very general and dominant, and as a result <ref> does not reveal much of interest about the topics' frequency over time. The LDA model had difficulty distinguishing meaningful topic words from noise. We experimented with modifying hyperparameters (prior probabilities, chain variance) but were unable to make improvements. The NMF model produced a wider range of topics, spread out and changing over time. However, most of the topic terms are difficult to interpret as meaningful, and the generality score indicates that the topics are mainly composed of common Latin terms. One promising topic is Topic 5, with terms such as ignis ("fire") and magnus ("great"), which peaks around AD 100 and could be related to the Great Fire of Rome in AD 64. The results for the BERTopic model are insightful and useful, despite the quantitative measures, and the model requires minimal configuration. The intertopic map (<ref>) shows a spread of evenly distributed topics, and the topics over time graph (<ref>) demonstrates clear trends in topic popularity, described below. BERTopic benefits from utilizing embedded document representations to create meaningful clusters of any shape, while the LDA model considers all documents equally. §.§ Topic 2: Christianity in Rome One particularly clear and insightful topic is topic 1, which includes terms such as deus ("god"), homo ("man"), christus ("Christ"), and spiritus ("breath, spirit"), and is clearly associated with Christian religious writings. The topic peaks twice: once around A.D. 200, after Christianity was first established and introduced in Rome, and again around A.D. 400, after the Edict of Milan and the establishment of Christianity as the official religion of the Roman Empire. §.§ Topic 3: The fall of the Republic Another interesting topic is topic 3, including terms such as res publicus ("republic"), populus ("the people"), and romanus ("Roman"). This topic seems to be associated with writings and speeches about Republican values such as Cicero's orations. We can see that the topic is most popular around 100 B.C. and decreases in popularity through A.D. 300, likely a reflection of the fall of the Roman Republic (44 B.C.) and its associated republican ideals. §.§ Comparison The results from the LDA and NMF models were consistent with many of the criticisms from humanities and social science experts: the models did not extract particularly useful information, and the quantitative metrics did not align with human judgments. We believe that with extensive hyperparameter search, we could achieve comparable results to the BERTopic model, but this effort may not be worth the resulting value to a historian. 
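For reference, the end-to-end BERTopic pipeline requires little more than the following; a sketch assuming the bertopic package's standard interface, with document loading, lemmatization, and date assignment omitted.
[breaklines]
from bertopic import BERTopic

def fit_dynamic_model(docs, years, nr_bins=10):
    """docs: lemmatized documents; years: one (estimated) date per document."""
    topic_model = BERTopic(verbose=True)
    topics, probs = topic_model.fit_transform(docs)
    # dynamic view: the global topics re-weighted within each time bin
    topics_over_time = topic_model.topics_over_time(docs, years, nr_bins=nr_bins)
    return topic_model, topics_over_time

# topic_model.visualize_topics() and
# topic_model.visualize_topics_over_time(topics_over_time)
# produce the intertopic distance map and the topics-over-time plots.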
In contrast, the BERTopic model mitigates many of these issues, with a more robust method for clustering documents that produces insights with minimal effort. However, it does require an existing model for creating document embeddings, which may not be available for many low-resource languages. Future work could explore using a state space model for the distributional parameters over time while incorporating neural embeddings to guide topic extraction. § CONCLUSION In this work, we trained three different dynamic topic models over documents spanning the lifetime of the Roman Republic and Roman Empire. We find that the traditional LDA and NMF models suffer from common issues with topic modeling: they may be difficult to interpret in a useful way, they are sensitive to noisy data, and they often require parameter search to find optimal configurations. In contrast, the BERTopic model is able to achieve insightful results with minimal effort, producing trends that line up with historical intuitions. We believe this topic model could be useful in certain scenarios; for example, a Roman historian might be interested in how a particular author's distribution of topics relates to the overall historical trends. While this approach shows clear potential in the use of topic models for historical analysis, it does not solve all of the issues. Interpretation is still critical and difficult, and there is something of a paradox: it is difficult to tell if a trend in a topic model is meaningful and accurate without already knowing that the trend exists. Nevertheless, we hope that newer approaches can make topic modeling more robust and performant as one of many tools for discovering insights and performing research. § ETHICS We hope that through more sophisticated methods for topic modeling, we can aid historians and social scientists in their research. However, topic modeling and other computational approaches for the humanities should not supplant human experts, who are able to investigate research questions with more care and nuance than an automated method. Topic models, and other forms of statistical text summarization, tend to extract majority ideas and topics. Topic models should be used to gain holistic information about trends and topics, but should not be used to overlook minority opinions. This work uses Latin text which is now in the public domain, but any work utilizing data for living languages should ensure they have proper permission to use the data, especially with indigenous and endangered languages. Communities should always have control over the data they produce, and language should be seen as a cultural and personal artifact, not merely data for conducting research. Finally, models that require high computational costs, particularly neural models, use large amounts of energy and carry an environmental cost <cit.>.
http://arxiv.org/abs/2406.19003v1
20240627084337
Hyperbolicity of generic hypersurfaces of polynomial degree via Green-Griffiths jet differentials
[ "Benoit Cadorel" ]
math.AG
[ "math.AG" ]
[ Massimiliano Morini July 1, 2024 ======================= § ABSTRACT We give a new version of a recent result of Bérczi-Kirwan, proving the Kobayashi and Green-Griffiths-Lang conjectures for generic hypersurfaces in ℙ^n+1, with a polynomial lower bound on the degree. Our strategy again relies on Siu's technique of slanted vector fields and the use of holomorphic Morse inequalities to prove the existence of a jet differential equation with a negative twist – however, instead of using a space of invariant jet differentials, we base our computations on the classical Green-Griffiths jet spaces. § INTRODUCTION The Green-Griffiths-Lang conjecture <cit.> predicts that any projective manifold of general type X should be quasi-Brody hyperbolic, namely there should exist a proper algebraic subset Z ⊊ X containing the image of any non-constant holomorphic map f : ℂ⟶ X. Studying this question in the case of hypersurfaces is already a very difficult problem – in this situation, Kobayashi also conjectured that one should be able to obtain genuine Brody hyperbolicity for higher degrees and generic hypersurfaces: Let n ≥ 2 be an integer. * (Green-Griffiths-Lang conjecture for smooth hypersurfaces) Any smooth hypersurface X ⊂ℙ^n+1 of degree d ≥ n + 3 is quasi-Brody hyperbolic; * (Kobayashi conjecture <cit.>) [The bound in the second item did not appear in Kobayashi's original article: it would follow naturally from results by Clemens-Ein-Voisin-Pacienza (see <cit.>) – at least for a very general hypersurface – if the Green-Griffiths-Lang conjecture were known to hold in full generality.] A generic hypersurface X ⊂ℙ^n+1 of degree d ≥ 2n (or d ≥ 2n+1 if n ≤ 4) is Brody hyperbolic i.e. there exists no non-constant holomorphic map f : ℂ→ X. The previous conjecture has attracted a lot of attention in the last few years, and we now know that the two items hold if we consider generic hypersurfaces of high enough degree: There exists two sequences of integers d_n, d_n' such that the following hold: * (Diverio-Merker-Rousseau <cit.>) a generic hypersurface X ⊂ℙ^n+1 of degree d ≥ d_n is quasi-Brody hyperbolic; * (Brotbek <cit.>) a generic hypersurface X ⊂ℙ^n+1 of degree d ≥ d_n' is Brody hyperbolic. Obtaining effective bounds for the sequences d_n, d_n' is no easy task, and we are still very far from the linear bounds of Conjecture <ref>. However, a significant breakthrough has been made recently by Bérczi and Kirwan <cit.>, who managed to obtain polynomial bounds d_n≈ d_n' ≈ O(n^4) in both items – thus substantially improving the previously known bounds, that were all growing at least as e^O(nlog n) (see e.g. <cit.>). The strategy of <cit.> relies on the technique of slanted vector fields introduced by Siu <cit.>. Eventually, everything boils down to proving the bigness of a well-chosen line bundle on an projective jet space X_k→ X sitting above X (see Section <ref> below). The classical strategy to prove this bigness is to apply Siu's algebraic Morse inequalities, that requires in turn to show the positivity of an adequate intersection number. There are several possible choices for the jet space X_k, but not all seem to give very satisfactory bounds on d_n or d_n': most of the previous exponential bounds were obtained for example using the Demailly-Semple jets spaces X_k = X_k^DS. The novelty in <cit.> was to introduce a jet space X_k = X_k^BK on which the intersection theory is much more favorable, by means of the non-reductive Geometric Invariant Theory. 
The previous spaces X_k^DS and X_k^BK are jet bundles naturally associated to the so-called invariant jet differentials – their definition is quite elaborate compared to the Green-Griffiths jet bundles X_k^GG introduced more than 40 years ago (see <cit.>). Quite surprisingly, these latter jet spaces seem to have been a bit overlooked in their potential applications to the problem at hand. In these notes, we will show that it is indeed possible to use X_k = X_k^GG and that following the strategy described above also yields polynomial degree bounds. More precisely, one can show the following: Let n ≥ 2 be an integer. * a generic hypersurface X ⊂ℙ^n+1 of degree d > 153/4 n^5 is quasi-Brody hyperbolic; * a generic hypersurface X ⊂ℙ^n+1 of degree d > 153/4(2n-1)^5 is Brody hyperbolic. The fact that the bound looks similar in the second item is no mystery: as in <cit.>, it follows from the work of Riedl-Yang <cit.> that if the first item of Theorem <ref> has been proved using e.g. the jet differentials techniques of <cit.>, then the second item must also hold with d_n' = d_2n-1. As we explained above, we make no change to the strategy of slanted vector fields: the only new input is the computation of the intersection number coming from the algebraic Morse inequalities. To perform these computations, we will use the theory of weighted projective bundles and their associated Segre classes, in a manner very similar to some earlier work of the author on jet differentials on compactifications of ball quotients (see <cit.>). §.§ Organization of the article. These notes are divided into three parts and an annex: * Section <ref>: we gather a few facts on weighted vector bundles, jet spaces and the holomorphic Morse inequalities; * Section <ref>: we recall the main criterion for hyperbolicity of generic hypersurfaces, which sums up the strategy of slanted vector fields (see Theorem <ref>). We then present the positivity statement that is needed to apply the holomorphic Morse inequalities (Proposition <ref>). * Section <ref>: we prove Proposition <ref>. * Section <ref>: In an annex to this article, we give a quite elementary proof of the numerical version of the Whitney formula employed in Section <ref> (see the equation (<ref>)). The author hopes this proof is even simpler than the one he presented in his thesis; in the end, it is based on straightforward computations of integrals on simplexes (in a manner very similar to the seminal work of Green-Griffiths <cit.>). §.§ Acknowledgments The author wishes to thank Damian Brotbek, Frédéric Campana, Lionel Darondeau, Simone Diverio, Antoine Étesse, Joël Merker, Eric Riedl and Erwan Rousseau for all the discussions that took place before and during the preparation of this work. Special thanks are due to Gergely Bérczi and Frances Kirwan for their kind and enlightening explanations on non-reductive GIT, during our stay at the Isaac Newton Institute of Cambridge. During the preparation of this work, the author was supported by the French ANR project KARMAPOLIS (ANR-21-CE40-0010). The author would also like to thank the Isaac Newton Institute for Mathematical Sciences for the support and hospitality during the programme New equivariant methods in algebraic and differential geometry when work on this paper was undertaken. This work was supported by: EPSRC Grant Number EP/R014604/1. This work was also partially supported by a grant from the Simons Foundation.
§ WEIGHTED PROJECTIVE BUNDLES AND GREEN-GRIFFITHS JET DIFFERENTIALS We recall here some of the results and notation of <cit.> and <cit.> pertaining to weighted projective bundles and their intersection theory. §.§.§ Weighted projective bundles Let X be a complex projective manifold. By a weighted vector bundle on X, we mean the data of finitely many couples (E_i, a_i)_1 ≤ i ≤ s, where E_i→ X are vector bundles, and the a_i≥ 1 are integers. We will often write this data under the form E_1^(a_1)⊕…⊕ E_s^(a_s). Given a weighted vector bundle, we can construct several associated objects on X: Let 𝐄 := E_1^(a_1)⊕…⊕ E_s^(a_s) be a weighted vector bundle over X. * The dual of 𝐄 is 𝐄^∗ := (E_1^∗)^(a_1)⊕…⊕ (E_s^∗)^(a_s); * The symmetric algebra of 𝐄 is the graded 𝒪_X-algebra S^∙𝐄 = ⊕_m ∈ℕ S^m𝐄 whose pieces are the vector bundles S^m(E_1^(a_1)⊕…⊕ E_s^(a_s)) := ⊕_a_1 l_1 + … + a_s l_s = m S^l_1 E_1⊗…⊗ S^l_s E_s, endowed with its natural product law (S^l_1 E_1⊗…⊗ S^l_s E_s) ⊗ (S^l_1' E_1⊗…⊗ S^l_s' E_s) ⟶ S^l_1 + l_1' E_1⊗…⊗ S^l_s + l_s' E_s. * the weighted projective space ℙ(𝐄) is the projectivized scheme ℙ(𝐄) = Proj_X (S^∙𝐄). With the notation of the previous definition, one can also define dually ℙ(𝐄) as a ℂ^∗-quotient: P(𝐄^∗) = P(E_1^∗^(a_1)⊕…⊕ E_s^∗^(a_s)) := (E_1^∗⊕…⊕ E_s^∗) - {0}ℂ^∗, where by {0} we denote the zero section, and the action of λ∈ℂ^∗ on the total space of 𝐄^∗ is given fiberwise by λ· (v_1, …, v_r) = (λ^a_1 v_1, …, λ^a_s v_s). This implies that π: ℙ(𝐄) → X is a bundle in weighted projective spaces ; it is endowed with tautological sheaves 𝒪^sh(m) (with m ∈ℕ) for which one has π_∗𝒪^sh(m) = S^m(𝐄). In general, the 𝒪^sh(m) are not locally trivial, but this is however the case if gcd(a_1, …, a_s) divides m. If one has also m>0, then 𝒪^sh(m) is a relatively ample line bundle with respect to π. In all the following, we will use the notation 𝒪(1) to denote the mth-root of 𝒪^sh(m) as a ℚ-line bundle, i.e. the element 𝒪(1) := 1/m𝒪^sh(m) in the rational Picard group Pic ℙ(𝐄) ⊗ℚ. Accordingly, we let 𝒪(d) := 𝒪(1)^⊗ d for any integer d ≥ 1; this element coincides with 𝒪^sh(d) (up to ℚ-linear equivalence) if d is divisible enough. Alternatively, one could also see ℙ(𝐄) as a smooth Deligne-Mumford stack, endowed with a natural tautological (stacky) line bundle 𝒪(1). In this case, the bundles 𝒪(m) can all be seen as line bundles on the corresponding stack, and one has naturally 𝒪(1)^⊗ m = 𝒪(m). §.§.§ Weighted Segre classes If 𝐄 = E_1^(a_1)⊕…⊕ E_s^(a_s) is a weighted vector bundle, we gave in <cit.> a definition of the Segre classes of E as endomorphisms of the rational Chow rings (A_∗ X)_ℚ, as follows. Let α∈ A_∗ X be any class on X. Then one lets s_j(𝐄) ∩α = 1/m^j + r - 1π_∗ (c_1𝒪(m)^j + r -1∩π^∗α). where π : ℙ(𝐄^∗) → X is the natural projection, r = ∑_jrk E_j and m := lcm(a_1, …, a_s)[In <cit.>, we used the notation r = ∑_jrk E_j - 1 instead.] . In this situation, we proved the following Whitney formula in <cit.>, that makes sense as an equality between endomorphisms of (A_∗ X)_ℚ: s_∙(E_1^(a_1)⊕…⊕ E_s^(a_s)) = gcd(a_1, …, a_s)/a_1… a_s∏_j s_∙(E_j^(a_j)), where s_∙(E^(a)) = 1/a^rk E - 1∑_ls_l(E)/a^l for a single vector bundle E and any integer a > 0. The reader can refer to the annex (see Section <ref>) for a proof of a numerical version of this formula, based on straightforward computations of Euler characteristics. §.§.§ Green-Griffiths jet differentials Let X be a complex projective manifold. We refer to <cit.> for all definitions related to Green-Griffiths jet differentials. 
For our purposes, it will be enough to know that for any order k ∈ℕ, we may define the Green-Griffiths algebra of holomorphic jet differentials E_k, ∙^GGΩ_X = ⊕_m ∈ℕ E_k, m^GGΩ_X. which is an 𝒪_X-algebra whose sections represent holomorphic differentials equations of order k on X. The Green-Griffiths jet bundles are the projective schemes associated to these algebras: X_k^GG := 𝐏𝐫𝐨𝐣_X(E_k, ∙^GGΩ) p_k⟶ X These spaces are endowed with natural tautological ℚ-line bundles 𝒪_GG, k(1) such that E_k, m^GGΩ_X = p_k^∗𝒪_GG, k(m). One of the crucial properties of the Green-Griffiths algebra is the existence of a natural filtration whose graded object is easy to describe in terms of weighted vector bundles. Let k ∈ℕ. Then there exists a filtration F on E_k, ∙^GGΩ_X, compatible with its structure of 𝒪_X-algebra, and whose associated graded algebra satisfies Gr_F(E_k, ∙^GGΩ_X) ≅ S^∙Ω_k , where Ω_k := Ω_X^(1)⊕…⊕Ω_X^(k). By elementary considerations on short exact sequences (<cit.>, see also <cit.>), one has, for any line bundle L → X: h^0(E_k, m^GGΩ_X⊗ L) ≥ h^0(X, S^mΩ_k⊗ L) - h^1(X, S^mΩ_k⊗ L). We will use this inequality jointly with the following result: Let P_k := ℙ(Ω_k) π_k⟶ X, and let L be a line bundle on X. Then one has, for any 1 ≤ i ≤ X and any m ≥ 1 divisible by (1, 2, …, k): h^i(X, S^mΩ_k⊗ L) = h^i(P_k, 𝒪_P_k(m) ⊗π_k^∗ L), where 𝒪_P_k(m) are the tautological line bundles on P_k. By the Leray spectral sequence and the projection formula, it suffices to show that R^i (π_k)_∗𝒪_P_k(m) = 0 for any i > 0 and any m divisible enough. This can be checked fiberwise, and immediately follows from the corresponding results for the cohomology of weighted projective spaces (see e.g. <cit.>). In particular, if we consider a very ample line bundle 𝒪_X(1) on X and any positive ϵ > 0, one has h^0(X_k^GG, 𝒪_GG, k(m) ⊗ p_k^∗𝒪(-m ϵ)) ≥ (h^0 - h^1)(P_k, 𝒪_P_k(m) ⊗π_k^∗𝒪(-m ϵ)) §.§.§ Holomorphic Morse inequalities Let us recall the statement of the famous holomorphic Morse inequalities, in the version proved by Siu: [Siu <cit.>, Demailly <cit.>, see also <cit.>] Let Y be a complex projective variety of dimension n. Let A, B be two nef line bundles on Y, and let L := A ⊗ B^-1. Then one has (h^0 - h^1) (Y, L^⊗ m) ≥ (A^n - n A^n-1· B) m^n/n! + O(m^n-1). §.§.§ Nefness of adequate twists With the notation of the previous section, our wish in Section <ref> will be to apply the holomorphic Morse inequalities to a line bundle of the form 𝒪_P_k(m) ⊗𝒪_X(-m ϵ). To do this, the following statement will be quite useful. Let 𝒪_X(1) be a very ample line bundle on X. Then the ℚ-line bundle L_k = 𝒪_P_k(1) ⊗π^∗_k𝒪_X(2) is nef on P_k. It follows from Definition <ref> (1) that for any m ≥ 0, one has S^m(Ω_k) ⊗𝒪(2m) ≅ S^m( ⊕_1 ≤ l ≤ kΩ_X(2l)^(l)) for any m ∈ℕ. This implies that L_k can be seen as the ℚ-tautological line bundle of the weighted vector bundle ℙ(⊕_1 ≤ l ≤ kΩ_X(2l)^(l)), which is naturally isomorphic to P_k as a scheme above X. However, Lemma <ref> below implies that each of the pieces Ω_X(2l) is globally generated. This implies that for m divisible enough, the line bundle L_k^⊗ m is globally generated as well, and hence is nef. The following very classical lemma was used in the proof of the previous proposition. Let X ⊂ℙ^n be a submanifold, and let 𝒪_X(1) be the associated very ample line bundle. Then Ω_X(2) is globally generated. Since the restriction map Ω_ℙ_n|_X→Ω_X is onto, it suffices to prove the result for X = ℙ^n. 
Let Z_0, …, Z_n be homogeneous coordinates, and z_i := Z_i/Z_0 be the associated inhomogeneous coordinates on the chart U_0 := { Z_0≠ 0 }. Then the elements Z_0^2 dz_i = Z_0 dZ_i - Z_i dZ_0∈ H^0(ℙ^n, Ω_ℙ^n(2)) generate Ω_ℙ^n(2) on U_0. The same reasoning also holds for the other charts U_j. Using the semi-continuity of the nef property for the countable Zariski topology, we can see as in <cit.> that the ℚ-line bundle 𝒪_GG, k(1) ⊗ p_k^∗𝒪(2) is nef on X_k^GG. The fact that we may obtain a nef line bundle on X_k^GG by taking a twist on the base independently of k is in stark contrast with the case of the Demailly-Semple tower, where we need a twist on the base growing exponentially fast as we climb the jet tower (see <cit.>). As Bérczi-Kirwan remarked in <cit.>, it is also possible to use a constant twist for their non-reductive GIT quotient X_k^BK, which makes the holomorphic Morse inequalities much easier to satisfy. § STATEMENT OF THE MAIN RESULTS In this section, we state the main estimates which, combined with Siu's strategy of slanted vector fields and Riedl-Yang's work <cit.>, allow us to derive the main result. All of this has become quite classical, so we will only quote the necessary statements, and refer to the original articles for more details. We fix an integer n ≥ 2. The general slanted jet techniques give the following: Fix d ≥ n. Assume that for any smooth hypersurface X ⊂ℙ^n+1 of degree d, we have proven that the ℚ-line bundle 𝒪_n, GG(1) ⊗ p_n^∗𝒪_X(-(5n+3)) is big. Then the generic hypersurface of degree d is quasi-Brody hyperbolic. Let ϵ > 0 and k ∈ℕ_≥ 1 be parameters to be fixed later. To prove the bigness of the ℚ-line bundle 𝒪_k, GG(1) ⊗ p_k^∗𝒪_X (- ϵ), we see in view of (<ref>) and Theorem <ref> that it suffices to apply the Morse inequalities to the following ℚ-line bundle on P_k = ℙ(Ω_k): M := 𝒪_P_k(1) ⊗π_k^∗𝒪_X(-ϵ) To do this, we first use Proposition <ref> to write it as a difference of two nef line bundles M = A - B, with A = 𝒪_P_k^GG(1) ⊗π_k^∗𝒪_X(2) and B = p_k^∗𝒪_X(2 + ϵ). The Morse inequalities then require showing the positivity of P(n, d, ϵ) := A^N_k - N_k A^N_k - 1· B, where N_k = dim P_k = n + nk - 1. To satisfy the hypothesis of Theorem <ref>, we will just have to specialize to the case k = n and ϵ = 5n + 3. This positivity of P(n, d, ϵ) can be achieved thanks to the following proposition, which will be proved in Section <ref>. The idea of using the Fujiwara estimates (see Lemma <ref>) originally stems from <cit.> and has been used again e.g. in <cit.>. Assume n ≥ 2, and fix k = n. Let ϵ > 0 be a rational number. * One may write P(n, d, ϵ) = d ∑_j=0^n Q_j(n, ϵ) d^j for some polynomials with rational coefficients Q_j(n, ϵ). The leading term Q_n(n, ϵ) > 0 actually depends only on n. * There is a number D_ϵ > 0, depending only on ϵ, such that | Q_j(n, ϵ) | < (D_ϵ n^4)^n-j Q_n(n, ϵ) for all j ≥ 1. One may actually take D_ϵ = max(27/2, 9(1 + ϵ/4)). * (Fujiwara bound) As a consequence, if d > 2 D_ϵ· n^4 then P(n, d, ϵ) > 0. For such values of n, d, ϵ, the line bundle 𝒪_X_n^GG(1) ⊗π^∗_n𝒪_X(-ϵ) is big. Thus, if we fix first n ≥ 2, and then take ϵ := 5n+3 in Proposition <ref>, we get the bound d > 18(5n+3/4 + 1)n^4. Let us simply take a monomial lower bound that ensures positivity for all n ≥ 2: For n≥ 2 and d > 153/4 n^5, the generic hypersurface of degree d in ℙ^n+1 is quasi-hyperbolic. This proves the first item of Theorem <ref>.
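For the reader's convenience, let us record the elementary arithmetic behind this last reduction (this is only a verification of the constants already given above). Fix n ≥ 2 and ϵ = 5n+3. Then 9(1 + ϵ/4) = 9(5n+7)/4 ≥ 9 · 17/4 > 27/2, so that D_ϵ = 9(5n+7)/4 and 2 D_ϵ n^4 = 9(5n+7)/2 · n^4 = 18(5n+3/4 + 1)n^4 = (45n+63)/2 · n^4. Moreover (45n+63)/2 ≤ 153n/4 is equivalent to 90n + 126 ≤ 153n, i.e. to n ≥ 2, so that (45n+63)/2 · n^4 ≤ 153/4 n^5 for every n ≥ 2. Hence any degree d > 153/4 n^5 satisfies d > 2 D_ϵ n^4, and the Fujiwara bound above applies.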
The second one follows from the work of Riedl-Yang <cit.>, who showed that if the first item has been proven for d ≥ d_n using the jet differential techniques discussed here, then the Kobayashi conjecture must hold with the lower bound d_n' = d_2n-1. § MAIN COMPUTATIONS In this section, we prove Proposition <ref>. We will first drop the hypothesis k = n to perform the beginning of our computations; we will only resume this hypothesis after Step 4. Step 1. Expression of A^N_k and A^N_k-1· B in terms of weighted Segre classes. Recall that P_k = ℙ(Ω_k). Let us denote by 𝒪_k(1) its tautological ℚ-line bundle. Dually, we may also write P_k = P(𝐓_k) with 𝐓_k := T_X^(1)⊕…⊕ T_X^(k). Thus, one may expand the Newton binomial and use the definition of weighted Segre classes (<ref>) to obtain the following (pullbacks to P_k are implied): A^N_k = ∫_P_k ( u + 2 h)^N_k (u = c_1(𝒪_k(1)), h = c_1(𝒪_X(1)) ) = ∑_l=0^n 2^lN_kl∫_X s_n-l(𝐓_k) h^l, On the other hand, we also have: A^N_k-1· B = ∫_P_k ( u + 2 h)^N_k-1 (2 + ϵ) h = ∑_l=1^n 2^l-1(2 + ϵ) N_k-1l-1∫_X s_n-l(𝐓_k) h^l. Thus, singling out the 0 term in A^N_k, one deduces that A^N_k - N_k A^N_k - 1· B = ∫_X s_n(𝐓_k) + ∑_l=1^n 2^l-1[2 N_kl - (2 + ϵ) N_kN_k - 1l-1] ∫_X s_n-l(𝐓_k) h^l = ∫_X s_n(𝐓_k) + ∑_l=1^n 2^l-1(2 - (2 + ϵ)l) N_kl∫_X s_n-l(𝐓_k) h^l. Step 2. Application of the Whitney formula. To compute the previous numbers, we will apply the Whitney formula (<ref>) to express the weighted Segre classes of 𝐓_k in terms of the hyperplane class h. In our context, since X ⊂ℙ^n+1 is a smooth degree d hypersurface, we have s_∙(T_X) = s_∙(T_ℙ^n+1) c_∙(N_X) = [ ∑_j=0^n (-h)^j]^n+2 (1 + hd). Thus the Whitney formula yields s_∙(𝐓_k) = 1/(k!)^n∏_1 ≤ l ≤ k[ ∑_j=0^n (- h/l)^j]^n+2[ 1 + hd/l] Step 3. Expressions of (<ref>) and (<ref>) as polynomials in d. Let us start by writing the coefficient Λ_α, β of d^α h^β in the expression s_β(𝐓_k) for any integers α, β. Inspection of (<ref>) shows that Λ_α, β = 0 unless β≥α. If this holds, let us write β = α + γ with γ≥ 0, in which case one then has (k!)^nΛ_α, β = (-1)^γ B_γ C_α (γ = β - α≥ 0) where B_γ is the coefficient of h^γ in ∏_1 ≤ l ≤ k (∑_j=0^nh/j)^n+2, and C_α is the coefficient of h^α in ∏_1 ≤ l ≤ k(1 + h/l). We have the following formulas: * B_0 = 1 and for all γ≥ 1, one has B_γ := ∑_{u_1≤…≤ u_γ}⊂ S_k, n+21/u_1… u_γ In this expression, the sum runs over all non-decreasing sequences u_1, …, u_γ in the ordered set S_k, n+2 := { 1_1 < … < 1_n+2 < 2_1 < … < 2_n+2 < … < k_1 < … < k_n+2}, and the product is computed by forgetting the indexes (see also <cit.>). * C_0 = 1 and for all α≥ 1, one has C_α := ∑_1 ≤ l_1 < … < l_α≤ k1/l_1… l_α Only the first item needs explaining. If γ≥ 1, the definition of B_γ gives B_γ = ∑_p_1, 1 + … p_1, n+2 + … + p_k, 1 + … p_k, n+2=γ1/1^p_1, 1… 1^p_1, n+2 2^p_2, 1… 2^p_2, n+2… k^p_k, 1… k^p_k, n+2 Now, there is a bijection between the set of k(n+2)-uples of integers (p_l, j)_1 ≤ l ≤ k, 1 ≤ j ≤ n+2 summing to γ, and the set of sequences { u_1≤…≤ u_γ}⊂ S_k, n+2: to any such tuple, we associate the sequence obtained by taking each element l_j∈ S_k, n+2 repeated p_l, j times. Expressing the previous sum in terms of the sequences (u_i)_1 ≤ i ≤γ instead gives the requested expression. 
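To illustrate the notation in the smallest relevant case, take k = n = 2, so that S_2, 4 consists of four copies of 1 and four copies of 2. Then C_1 = 1 + 1/2 = 3/2 and C_2 = 1/(1 · 2) = 1/2, which are indeed the coefficients of h and h^2 in (1+h)(1+h/2) = 1 + 3/2 h + 1/2 h^2, while B_1 = 4(1 + 1/2) = 6 and B_2 = 10 · 1 + 16 · 1/2 + 10 · 1/4 = 41/2, matching the coefficients of h and h^2 in (1+h+h^2)^4 (1+h/2+h^2/4)^4. (One checks for instance that B_2 ≤ 2 n^2 B_1, in line with the estimates of Step 4 below.)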
We may now rewrite the two intersection numbers as polynomials in d: (k!)^n A^N_k = ∑_l = 0^n 2^lN_kl( ∑_0 ≤α≤ n - lΛ_α, n-l d^α) ·(∫_X h^n) = ∑_l = 0^n 2^lN_kl( ∑_α + γ = n - l (-1)^γ B_γ C_α d^α) ·(∫_X h^n) = d ∑_α = 0^n[ ∑_l = 0^n- αN_kl 2^l (-1)^n-α -l B_n-α -l] C_α d^α ( since∫_X h^n = d ) Similarly, one has (k!)^n A^N_k-1· B = ∑_l= 1^nN_k - 1l-1 2^l-1(2 + ϵ) ( ∑_0 ≤α≤ n - lΛ_α, n-l d^α) h^l = ∑_l=1^nN_k-1l-1 2^l-1(2 + ϵ) [ ∑_α + γ = n-l (-1)^γ B_γ C_α d^α] · (∫_X h^n) = d ∑_α=0^n[∑_l=1^n-αN_k-1l-1 2^l-1(2+ϵ) (-1)^n-l-α B_n-α-l] C_α d^α This shows that: A^N_k - N_k A^N_k -1· B = d ∑_α=0^n Q_α(n, ϵ) d^α with Q_α(n, ϵ) = C_α[ ∑_l=0^n-αN_kl 2^l (-1)^n-α-l B_n-α-l - ∑_l=1^n - α N_kN_k -1l-1 2^l-1 (2 + ϵ) (-1)^n-α-l B_n-α-l] Let us rewrite this by singling out the term l=0 and merging the two sums using the formula a ba = b b-1a-1: Q_α(n, ϵ) = (-1)^n-α C_α[ B_n-α + ∑_l=1^n-α ( 2 - (2 + ϵ)l) N_kl (-1)^l 2^l-1 B_n-α-l] Step 4. Bounds on the coefficients. We now fix k = n. To obtain the bound (<ref>), we are simply going to drop all the signs in the expression of Q_α to define R_α = C_α[ B_n-α + ∑_l=1^n-α ( 2 + (2 + ϵ)l) N_kl 2^l-1 B_n-α-l] and then use the very coarse inequality | Q_α | ≤ R_α. We may rewrite R_α as R_α = C_α[ B_n-α + ∑_l=1^n-α D_l B_n-α - l] with D_l = ( 2 + (2 + ϵ)l) N_kl 2^l-1 The main estimates on the R_α will come from (<ref>) and the following inequalities: We have the following upper bounds: * B_α+1≤ 2 n^2 B_α * C_α≤3/2 n^2 C_α + 1 if α + 1 ≤ n * D_α +1≤ 9 n^2 D_α for α≥ 1. (1). One has B_α + 1 = ∑_{u_1≤…≤ u_α≤ u_α +1}⊂ S_n, n+21/u_1… u_αu_α +1 ≤∑_{u_1≤…≤ u_α}⊂ S_n, n+2∑_u_α+1∈ S_n, n+21/u_1… u_γ1/u_α+1 ≤ n(n+2) ∑_{u_1≤…≤ u_α}⊂ S_n, n+21/u_1… u_γ ≤ n(n+2) B_α≤ 2 n^2 B_α. To obtain the last inequality, recall that n ≥ 2. (2). One has, if α + 1 ≤ n: C_α + 1 = ∑_1 ≤ l_1 < … < l_α < l_α +1≤ n1/l_1… l_α l_α + 1 = 1/α + 1∑_S ⊂ 1, n, |S|= α ∑_l ∉S1/prod(S)1/l ≥1/n+1∑_S ⊂ 1, n, |S|= α ∑_l ∉S1/prod(S)1/n = n - α/n(n+1) C_α≥1/n(n+1) C_α≥2/3n^2 C_α. (3). One has D_α + 1 = ( 2 + (2 + ϵ)(α + 1)) N_nα + 1 2^α ≤ 6 N_n ( 2 + (2 + ϵ)α) N_nα 2^α - 1 = 6 N_n D_α where we used N_nα+1 = N_n-α/α+1N_nα≤ N_nN_nα and 2 + (2 + ϵ) (α + 1)/2 + (2 + ϵ) α≤2 + (2 + ϵ) (α + 1)/(2 + ϵ) α≤ 1 + α + 1/α≤ 3. Note that N_n = n + n^2 - 1 ≤3/2n^2, so finally we find D_α + 1≤ 9 n^2 D_α. Finally, we obtain: There exists a constant D independent of n, d, ϵ, such that * R_n-1≤ 9 n^4 (1 + ϵ/4) R_n * for all 1 ≤α≤ n-1, we have R_α-1≤27/2 n^4 R_α. (1) One has R_n = C_n B_0 = C_n > 0 and R_n-1 = C_n-1 (B_1 + D_1 B_0) = C_n-1 (B_1 + (4 + ϵ)N_n) ≤3/2 n^2 C_n· (2n^2 + (4 + ϵ)3/2 n^2) = 9 n^4(1 + ϵ/4) R_n. (2) One has R_α-1 = C_α-1[ B_n-α + 1 + ∑_l=1^n-α+1 D_l B_n-α + 1 - l] = C_α-1[ B_n-α + 1 + ∑_l=1^n-α D_l B_n-α + 1 - l + D_n-α+1] ≤3/2 n^2 C_α[ 2 n^2 B_n-α + 2 n^2 ∑_l=1^n-α D_l B_n-α - l + 9 n^2 D_n-α] = 27/2 n^4 C_α[ 2/9 B_n-α + 2/9∑_l=1^n-α-1 D_l B_n-α - l + D_n-α] ≤27/2 n^4 R_α. Again, one gets the result. This now proves item (2) of Proposition <ref>: the previous lemma shows inductively that R_α≤ D_ϵ^n-j n^4(n-j) R_n, with D_ϵ = max(9(1 + ϵ/4), 27/4). To obtain the result, just use the fact that Q_n = C_n = R_n >0 and the inequalities |Q_α| ≤ R_α. Step 5. Fujiwara bound. Recall the following elementary result: Let Q(t) = a_n t^n + a_n-1 t^n-1 + ⋯ + a_0 be a polynomial with real coefficients such that a_n > 0. Assume there exists a constant M > 0 such that |a_n-j| ≤ M^j a_n for all j > 0. Then for t > 2M, one has Q(t) >0. 
In our situation, the polynomial Q is given by Q(t) = ∑_j=1^n Q_j(n, ϵ) t^j, (see (<ref>)), where n, ϵ are seen as constants. Then by item (2) of Proposition <ref>, one has the bound (<ref>) with M = D_ϵ n^4. This gives the result. § ANNEX. INTERSECTION THEORY ON WEIGHTED PROJECTIVE BUNDLES The purpose of this annex is mainly to give an elementary proof of the Whitney formula (<ref>), as an equality between numerical classes (which is enough to perform the computations presented in this article). The route we follow will be closely related to the original computations of Green-Griffiths <cit.> (i.e. estimating Euler characteristics by integrals of polynomials on adequate simplexes) – we also used this kind of computations in the work <cit.>. Let X be a complex projective manifold, endowed with a weighted vector bundle 𝐄 := E_1^(a_1)⊕…⊕ E_r^(a_r) (the reader can refer to Section <ref> for the terminology). Then the Whitney formula that was proved in the author's thesis can be stated as follows: We have the following equality of endomorphisms of (A_∗ X)_ℚ: s_∙(E_1^(a_1)⊕…⊕ E_s^(a_s)) = gcd(a_1, …, a_s)/a_1… a_s∏_j s_∙(E_j^(a_j)) where s_∙(E^(a)) = 1/a^rk E - 1∑_ls_l(E)/a^l. for a single vector bundle E and any integer a > 0. The proof presented in <cit.> essentially copies Fulton's presentation in <cit.>. As announced above, we will show that this formula holds at least after quotienting by the numerical equivalence relation – the idea will be to see the top Segre class as the leading coefficient in the asymptotic expansion of the Euler characteristic of the symmetric products of 𝐄^∗. As the reader will see, this type of computation is very close in spirit to the ones of <cit.>. We will need several combinatorial lemmas, all gathered in Section <ref> below. We will prove that for any integer k ∈ 1, n and any cycle α∈ A_k X, one has for 1 ≤ l ≤ k: s_l(E_1^(a_1)⊕…⊕ E_s^(a_s)) ∩α≡_num( gcd(a_1, …, a_s)/a_1… a_s∏_j s_∙(E_j^(a_j)) )_l∩α Step 1. Reduction steps. The previous formula means that intersecting both sides with any β∈ A_k-l(X) must give the same intersection number; we see right away that replacing α by α∩β, allows to reduce to the case l = k, where one has to show equality between the two sides of the equation, which are seen as elements of ℤ. Also, breaking α into its irreducible components, we see that we may finally assume α = [X] and j = n. By the classical splitting principle (see <cit.>), one may also assume that each E_i is split as a sum of line bundles. Step 2. Proof of the formula in the split case. Let us then assume that each E_i = L_i, 1⊕…⊕ L_i, r_i is split as a direct sum of line bundles for all 1 ≤ i ≤ s, and let α_i, j = c_1 (L_i, j) denote the corresponding Chern roots. We will rather prove the dual formula for s_n(𝐄^∗) to make the signs easier to track. Let m_0 := (a_1, …, a_r). Then, applying the asymptotic Riemann-Roch theorem, one has that ∫_X s_n(𝐄^∗) = 1/m_0^n+r-1∫_ℙ(𝐄) c_1𝒪(m_0)^n+r-1 (by definition of s_n(𝐄^∗)) = lim_m ⟶ +∞m_0 | mχ(ℙ(𝐄), 𝒪(m))/m^n+r-1/(n+r-1)! By the Leray spectral sequence, one also has χ(ℙ(𝐄), 𝒪(m))) = χ(X, π_∗𝒪(m)) = χ(X, S^m𝐄). But then, since each E_i is split, Proposition <ref> below implies that ∫_X s_n(𝐄^∗) = m_0/a_1^r_1… a_s^r_s∫_X( ∏_i=1^s∏_j=1^r_i∑_p=0^n(α_i, j/a_i)^p)_n. Now, we see that for all i = 1, …, s, the expression 1/a_i^r_i - 1∏_j=1^r_i∑_p=0^n(α_i, j/a_i)^p identifies with the formula for s_∙((E_i^∗)^(a_i)) that is given in the statement of the proposition. 
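Since the argument is very short, let us recall it for completeness (this is just the standard geometric series estimate). If t > 2M, then M/t < 1/2, hence | a_n-1 t^n-1 + ⋯ + a_0 | ≤ a_n ∑_j=1^n M^j t^n-j = a_n t^n ∑_j=1^n (M/t)^j < a_n t^n · (M/t)/(1 - M/t) < a_n t^n, so that Q(t) ≥ a_n t^n - | a_n-1 t^n-1 + ⋯ + a_0 | > 0.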
Thus, one obtains ∫_X s_n(𝐄^∗) = m_0/a_1… a_r∫_X( ∏_i=1^s s_∙((E_i^∗)^(a_i)) )_n, which gives the result. §.§ Combinatorial lemmas The purpose of the following discussion is to prove Proposition <ref>, that was used in the proof of the Whitney formula. We need several combinatorial lemmas, closely related to several computations that appeared in <cit.>. Notation. * For all n, we let vol_n denote the n-dimensional euclidian volume measure. * Let m ∈ℕ. A m-dimensional simplex Δ is a metric space isomorphic to the convex envelop of m+1 points in ℝ^m, such that each p of them generate an affine (p-1)-space. We will sometimes write Δ ∘⊂ ℝ^m to emphasize the fact that Δ has non-empty interior in ℝ^m (or equivalently, that Δ = m) and to oppose this situation to the case of a (m-1)-dimensional simplex included in ℝ^m. * Recall that for any m-dimensional simplex Δ∘⊂ℝ^m, the uniform probability measure of Δ is the measure d 𝐏_Δ = 1/vol_m(Δ) d vol_m. Since this measure on Δ is the unique probability measure which is the restriction of a translation invariant measure on ℝ^m, we see that if Δ_1, Δ_2 ⊆ℝ^m are m-dimensional simplexes, and if Ψ∈GL(ℝ^m) is such that Δ_2 = Ψ(Δ_1), then Ψ sends the uniform measure of Δ_1 on the uniform measure of Δ_2. §.§ Lattices and volumes of fundamental domains Let a_1, ..., a_r ∈ℕ. Let H = { (t_1, ..., t_r) ∈ℤ^r | ∑_i a_i t_i = 0}. Then H ⊆ℤ^r is a primitive sublattice, meaning that ℤ^rH is torsion-free. Hence, by the adapted basis theorem, there exists a basis (f_1, ..., f_r) of ℤ^r such that (f_1, ..., f_r-1) is in turn a basis for H. Let C_H = ∑_1 ≤ i ≤ r-1 [0, 1] · f_i denote the associated fundamental domain of H. Note that all fundamental domains are image of one another by an element of SL(H) so they all have the same (n-1)-volume. Any fundamental domain of H has volume vol_r-1( C_H ) = √(∑_1 ≤ i ≤ r a_i^2)/(a_1, ..., a_r). The lattice H and the proposed formula for the volume do not change if we replace a_i by a_i/(a_1, ..., a_r), hence we can suppose that (a_1, ..., a_r) = 1. In this case, there exist u_1, ..., u_r ∈ℤ such that ∑_i a_i u_i = 1. Replacing f_r by the vector (u_1, …, u_r) in (f_1, …, f_r) still gives a basis of ℤ^r, so we can assume that f_r = (u_1, ..., u_r). Since (f_1, ..., f_r) is a basis of ℤ^r, we have vol_r(∑_1 ≤ i ≤ r [0, 1] · f_i) = 1. Moreover, vol_r(∑_1 ≤ i ≤ r [0, 1] · f_i) = vol_r-1(∑_1 ≤ i ≤ r - 1 [0, 1] · f_i) · π_H^⊥ (f_r) _eucl = vol_r-1(C_H) · π_H^⊥ (f_r) _eucl where π_H^⊥ (f_r) is the orthogonal projection of f_r on H^⊥, and ·_eucl is the euclidian norm. We obtain vol_r-1(C_H) = 1/π_H^⊥(f_r). Let us now compute π_H^⊥(f_r). Since H^⊥ = ℝ· (a_1, …, a_r) by definition of H, one can write f_r = (u_1, …, u_r) = λ· (a_1, …, a_r) + (b_1, …, b_r), with λ∈ℝ and (b_1, …, b_r) ∈ H_ℝ. Thus, one has 0 = ∑_j a_j b_j = ∑_j a_j u_j - λ∑_j a_j^2. Since ∑_j a_j u_j = 1, this gives λ = 1/∑_j a_j^2 and thus π_H^⊥ (f_r) _eucl^2 = λ^2∑_j a_j^2 = 1/∑_j a_j^2. We obtain the result. Let a = (a_1, ..., a_r) ∈ℕ^r, and let Δ_a = { (t_i) ∈ℝ_+^r | ∑_i a_i t_i = 1 }. Then the volume of Δ_a is vol_r-1 (Δ_a) = 1/(r-1)!(a_1, ..., a_r)/a_1 ... a_rvol_r-1(C_H); By Lemma <ref>, it suffices to show that vol_r-1(Δ_a) = 1/(r-1)!√(∑_1 ≤ i ≤ r a_i^2)/a_1 ... a_r. To perform this computation, we can for example use the parametrization of Δ_a given by ψ : t ∈Δ⟼ (1/a_1 t_1, ..., 1/a_r-1 t_r-1, 1/a_r (1 - ∑_1 ≤ i ≤ r - 1 t_i)), where Δ = { (t_i) ∈ [0,1]^r-1 | ∑_i t_i ≤ 1 } is the standard (r-1)-dimensional simplex in ℝ^r-1. 
We have then ψ^∗( dvol_r-1 ) = √( G) dvol_r-1, where G= (< ψ_∗(e_i), ψ_∗(e_j)>)_i,j is the Gram matrix of the vectors ψ_∗ (e_i) ((e_i)_i being the canonical basis of ℝ^r-1). A simple computation shows that G = 1/∏_i a_i^2∑_i a_i^2. Thus, we have vol_r-1(Δ_a) = √(∑_i a_i^2)/∏_i a_ivol_r-1(Δ). To conclude, it suffices to compute vol_r-1(Δ) = 1/(r-1)!, which is easy. §.§ Some integrals We are now going to compute the integral of some monomial functions on simplexes with respect to the uniform probability measure. The goal is to prove the following. Let a = (a_1, …, a_r) ∈ℕ^r, and let Δ_a = { (t_i) ∈ℝ_+^r | ∑_i a_i t_i = 1 }. Let p_1, …, p_r∈ℕ. Then ∫_Δ_a t_1^p_1… t_r^p_r d 𝐏_Δ_a(t) = (r-1)! p_1! … p_r!/(p_1 + p_2 + … + p_r + r - 1)!1/a_1^p_1… a_r^p_r. First, let us remark that we can easily get back to the case where a is equal to 1 := (1, …, 1). Indeed, letting Ψ(t_1, …, t_r) = (a_1 t_1, …, a_r t_r), one has Ψ_∗ d𝐏_Δ_1 = d 𝐏_Δ_a, so that ∫_Δ_a t_1^p_1… t_r^p_r d 𝐏_Δ_a(t) = ∫_Δ_1(t_1/a_1)^a_1…(t_r/a_r)^a_r d 𝐏_Δ_1(t) = 1/a_1^p_1… a_r^p_r∫_Δ_1 t_1^p_1… t_r^p_r d 𝐏_Δ_1(t) For any r ∈ℕ and p_1, …, p_r∈ℕ, let C_p_1, …, p_r := ∫_Δ_1 t_1^p_1… t_r^p_r d 𝐏_Δ_1(t). By the remark above, the proof of Lemma <ref> will then be complete with the following result. We have C_p_1, …, p_r = (r-1)! p_1! … p_r!/(p_1 + p_2 + … + p_r + r - 1)!. By induction on r. Letting Δ_r := Δ_(1, …, 1) to simplify the notation (1 is repeated r times), one has C_p_1, …, p_r = ∫_t_1 + … + t_r = 1 t_1^p_1… t_r^p_rdt_1∧…∧ dt_r-1/vol_dt_1∧…∧ dt_r-1(Δ_r) = ∫_t_1 + s = 1 dt_1 t_1^p_1 vol_dt_2∧…∧ dt_r-1(s Δ_r-1)/vol_dt_1∧…∧ dt_r-1(Δ_r)∫_t_2 + … + t_r = s t_2^p_2… t_r^p_rdt_2∧…∧ dt_r-1/vol_dt_2∧…∧ dt_r-1(s Δ_r-1) = ∫_t_1 + s = 1 dt_1 t_1^p_1 s^r-2/(r-2)!/1/(r-1)!· s^p_2 + … + p_r∫_t_2 + … + t_r = 1 t_2^p_2… t_r^p_rdt_2∧…∧ dt_r-1/vol_dt_2∧…∧ dt_r-1(Δ_r-1) = (r-1) ∫_t_1 + s = 1 dt_1 t_1^p_1 s^r - 2 + p_2 + … + p_r C_p_2, …, p_r = (r-1) p_1!(p_2 + … + p_r + r-2)!/(p_1 + p_2 + … + p_r + r - 1)! C_p_2, …, p_r where at the last line, we used Lemma <ref> below. This permits to prove the formula by induction. Let a, b ∈ℕ. One has ∫_0^1 t^a (1 - t)^b dt = a! b!/(a + b + 1)!. This comes from the Beta function identity ∫_0^1 t^a (1 - t)^b dt = Γ(a+1) Γ(b+1)/Γ(a+b+2). §.§ Riemann integrals and asymptotic estimates Using the previous results, we can now give the following asymptotic estimates that will prove useful to compute the asymptotics of Euler characteristics. Fix integers a_1, …, a_r∈ℕ, and p_1, …, p_r∈ℕ. We have, for m ⟶ + ∞ divisible by (a_1, …, a_r): ∑_a_1 l_1 + … + a_r l_r = ml_1^p_1/p_1!…l_r^p_r/p_r! = (a_1, …, a_r)/a_1^p_1 + 1… a_r^p_r + 1m^p_1 + … + p_r + r - 1/(p_1 + p_2 + … + p_r + r - 1)! + o(m^p_1 + … + p_r + r - 1). Let H_m be the set of (l_1, …, l_r) ∈ℤ^r such that ∑_j l_j a_j = m. It is non-empty if and only if (a_1, …, a_r) | m, and if such is the case, it is then a translate of the lattice H = { (l_1, …, l_r) ∈ℤ^r | ∑_j a_j l_j = 0}. Let C_H denote a fundamental domain for H. As (l_1, …, l_r) varies in H_m∩ℕ^r, the element (l_1/m, …, l_r/m) varies in Δ_a, running in a lattice with cells isometric to 1/mC_H. Thus, one can use a Riemann sum to obtain vol_r-1(1/m C_H) ·∑_a_1 l_1 + … + a_r l_r = m(l_1/m)^p_1…(l_r/m)^p_r m ⟶ + ∞⟶∫_Δ_a t_1^p_1… t_r^p_r d vol_r-1(t). = vol_r-1(Δ_a) ∫_Δ_a t_1^p_1… t_r^p_r d 𝐏_Δ_a(t). Thus we deduce 1/m^p_1 + … + p_r + r - 1∑_a_1 l_1 + … + a_r l_r = m l_1^p_1… l_r^p_r⟶vol_r-1(Δ_a)/vol_r-1(C_H)∫_Δ_a t_1^p_1… t_r^p_r d 𝐏_Δ_a(t). The right hand side can be computed using Lemmas <ref> and <ref>. 
This gives the result. We will need another version of that lemma for our application to the asymptotic Riemann-Roch theorem. Let n, r ∈ℕ be two integers. Let α_1, …, α_r be indeterminates over ℂ. Fix integers a_1, …, a_r∈ℕ, and p_1, …, p_r∈ℕ. We have, for m ⟶ + ∞ divisible by (a_1, …, a_r): ∑_a_1 l_1 + … + a_r l_r = m(α_1 l_1 + … + α_r l_r)^n/n! = (a_1, …, a_r)/a_1… a_r[ ∑_p_1 + … + p_r = n(α_1/a_1)^p_1…(α_1/a_r)^p_r] m^n + r -1/(n+ r - 1)! + o(m^n+r-1). where o(m^n+r-1) means a homogeneous polynomial of degree n in α_1, …, α_n, all of whose coefficients are negligeable compared to m^n+ r -1. We expand the sum, using the Newton identity, and we apply Lemma <ref>: ∑_a_1 l_1 + … + a_r l_r = m(α_1 l_1 + … + α_r l_r)^n/n! = ∑_a_1 l_1 + … + a_r l_r = m ∑_p_1 + … + p_r = nnp_1, …, p_r l_1^p_1… l_r^p_rα_1^p_1…α_r^p_r = (a_1, …, a_r)/a_1… a_r[ ∑_p_1 + … + p_r = n(α_1/a_1)^p_1…(α_1/a_r)^p_r] m^n + r -1/(n+ r - 1)! + o(m^n+r-1). In <cit.>, the author invoked Toën's orbifold Riemann-Roch theorem to give an asymptotic estimate of a particular case of Lemma <ref>, which might seem a bit disproportionate. Let us use these notes to give here a more down to earth argument. We fix k, n ∈ℕ. Identifying α_1 = … = α_r = 1 in the expression of Lemma <ref>, and taking r = k, we get: ∑_l_1 + 2 l_2 + … + k l_k = m(l_1 + … + l_k)^n/n! = ( 1/k!∑_p_1 + … + p_k = n1/1^p_1…1/k^p_k) m^n+k-1/(n+k-1)! + o(m^n+k-1) = 1/k![ ∑_1 ≤ i_1≤…≤ i_n≤ k1/i_1… i_n] m^n+k-1/(n+k-1)! + o(m^n+k-1). The formula holds without restriction of divisibility on m since (1, 2, …, k) = 1. This gives back the estimate of <cit.>. §.§ Asymptotics of Euler characteristics In this section, we use Lemma <ref> to determine the asymptotic behaviour of the Euler characteristics of symmetric powers of some weighted projective sums. Let X be a complex projective manifold of dimension n. Let L_1, …, L_r be line bundles on X, and fix integers a_1, …, a_r∈ℕ. Then, one has the asymptotic expansion, as m goes to +∞ while being divisible by (a_1, …, a_r): χ(X, S^m(L_1^(a_1)⊕…⊕ L_r^(a_r))) = (a_1, …, a_r)/a_1… a_r∫_X( ∏_j = 1^r∑_p=0^nc_1(L_j)^p/a_j^p)_nm^n+r-1/(n+r-1)! + o(m^n+r-1). where (·)_n means we take the part of pure degree n of the class between the brackets. Let α_i = c_1(L_i) for all 1 ≤ i ≤ r. One has then, using the Hirzebruch-Riemann-Roch theorem: χ(X, S^m(L_1^(a_1)⊕…⊕ L_r^(a_r))) = χ(X, ⊕_a_1 l_1 + … + a_r l_r = m L_1^⊗ l_1⊗…⊗ L_r^⊗ l_r) = ∑_a_1 l_1 + … + a_r l_r = mχ(X, L_1^⊗ l_1⊗…⊗ L_r^⊗ l_r) = ∑_a_1 l_1 + … + a_r l_r = m[ ∫_X(α_1 l_1 + … + α_r l_r)^n/n! + ∑_j=0^n∫_Xβ_j· (α_1 l_1 + … + α_r l_r)^n-j]. where for all j = 1, …, n, the symbol β_j∈ H^2j(X) denotes a cohomology class depending only on X, but not on m. One can now apply Lemma <ref> to obtain the result. amsalpha
http://arxiv.org/abs/2406.18036v1
20240626030945
Operating Single-Photon Circulator by Spinning Optical Resonators
[ "Jing Li", "Tian-Xiang Lu", "Meiyu Peng", "Le-Man Kuang", "Hui Jing", "Lan Zhou" ]
quant-ph
[ "quant-ph" ]
* jinghui73@foxmail.com ,†zhoulan@hunnu.edu.cn 1 Key Laboratory of Low-Dimension Quantum Structures and Quantum Control of Ministry of Education, Key Laboratory for Matter Microstructure and Function of Hunan Province, Synergetic Innovation Center for Quantum Effects and Applications, Xiangjiang-Laboratory and Department of Physics, Hunan Normal University, Changsha 410081, China 2Institute of Interdisciplinary Studies, Hunan Normal University, Changsha, 410081, China 3College of Physics and Electronic Information, Gannan Normal University, Ganzhou 341000, Jiangxi, China A circulator is one of the crucial devices in quantum networks and simulations. We propose a four-port circulator that regulate the flow of single photons at muti-frequency points by studying the coherent transmission of a single photon in a coupled system of two resonators and two waveguides. When both resonators are static or rotate at the same angular velocity, single-photon transport demonstrates reciprocity; however, when the angular velocities differ, four distinct frequency points emerge where photon circulation can occur. In particular, when the angular velocities of the two resonators are equal and opposite, there are two different frequency points where photon circulation can be achieved, and there is a frequency point where a single photon input from any waveguide can be completely routed to the other waveguide. Interestingly, by rotating the two resonators, the single-photon circulation suppressed by the internal defect-induced backscattering can be restored. § INTRODUCTION Nonreciprocal optical devices, such as isolators <cit.>, circulators <cit.>, and directional amplifiers <cit.>, featuring different optical responses after exchanging positions between the input and output terminals, are not only important components in optical systems but also have important applications in constructing quantum networks and implementing quantum communication. One of the most basic requirements for achieving nonreciprocal transmission in optical systems is to break the symmetry of time inversion <cit.>. In recent years, in order to meet the requirements of chip integration, a large number of non-magnetic and nonreciprocal systems have begun to develop, which are generally based on dynamic spatiotemporal modulation structures <cit.>, quantum Hall effects <cit.>, Kerr-nonlinear microresonators <cit.>, optomechanical resonator <cit.>, non-Hermitian systems <cit.>, moving atomic gases <cit.>, and spinning resonators <cit.>. Recent experimental studies <cit.> have shown that spinning resonators can break the time reversal symmetry via Fizeau resistance to achieve nonreciprocal transmission of light. Afterwards, more and more researchers became interested in rotating resonators and proposed many nonreciprocal quantum devices, such as nonreciprocal photon blocking <cit.>, nonreciprocal phonon lasers <cit.>, nonreciprocal entanglement <cit.>, nonreciprocal optical solitons <cit.>, nonreciprocal optical bandwidth <cit.>, and photon circulators <cit.>. Circulator is typically a nonreciprocal device composed of three or four ports. They separate the paths of the sender and receiver by introducing non reciprocity between ports. The circulator has been experimentally proven by microwave <cit.> and optical signals <cit.>, and can be applied in regions ranging from classical to quantum <cit.>. Recently, a two-frequency point photon circulator based on a rotating resonator with two tapered fibers has been proposed <cit.>. 
However, the possibility of a four-frequency-point single-photon circulator built from two spinning resonators has, as far as we know, not been explored. In this paper, we show that a photon circulator operating at four frequency points can be realized by utilizing two spinning resonators with different angular velocities. To pursue this, we couple two rotating resonators to two waveguides to form a four-port device. There are various combinations of angular velocities for the two resonators, each resulting in different outcomes. When the resonators are static or rotate at the same angular velocity, photon transmission is reciprocal. However, with differing angular velocities, four frequency points of photon circulation can be achieved. Particularly, when the two resonators have equal and opposite angular velocities, two frequency points of photon circulation are attainable. Additionally, there exists a frequency point where photons input from any waveguide can be completely routed to the other waveguide. We also found that, compared with other nonreciprocal circulator devices, the single-photon circulator formed by rotating resonators exhibits robustness against backscattering. Therefore, the routing direction of the photon circulator depends on the frequency of the incoming photons and the rotational angular velocities of the two resonators. Compared to nonreciprocal devices with two ports, the four-port quantum optical circulator with multiple frequency points enables us to establish two-dimensional and three-dimensional networks to achieve photon quantum simulations<cit.>. Our results may provide inspiration for multi-frequency-point circulators in quantum information. The paper is organized as follows: In Sec. 2, we outline the theoretical model of the system by presenting its Hamiltonian, and we give the calculation method and solution of the model. In Sec. 3, we study the single-photon transport properties when the two resonators are rotating or static. In Sec. 4, we study the effect of adding backscattering in the resonators on photon transmission. Finally, we summarize the single-photon transport and give the conclusions in Sec. 5. § THEORETICAL MODEL Recent experiments have demonstrated the utilization of spinning devices for achieving optical isolators<cit.>, one-way heat flow <cit.>, gyroscopes<cit.>, acoustic amplifiers<cit.>, and the rotational Doppler effect<cit.>. Among these, a spinning resonator can achieve nonreciprocal transmission with an isolation of up to 99.6%<cit.>. In that experiment, a spherical resonator was created by flame-polishing the end of a silica glass cylinder and mounting the resonator on a turbine. By properly placing a fiber near the spinning resonator, a stable fiber-resonator coupling was established through a "self-adjusting" aerodynamic process. In this paper, we propose how to realize a single-photon circulator with multiple frequency points in a system composed of two spinning whispering-gallery-mode (WGM) resonators and two waveguides. As shown in Fig. 1(a), we consider two spinning WGM microresonators f and d, coupled to two waveguides a and b, with four input/output ports. The WGM resonator holds two cavity modes corresponding to clockwise (CW) and counterclockwise (CCW) modes with frequencies ω_cw=ω_c- Δ_F and ω_ccw=ω_c+Δ_F, respectively.
When the resonator spins at an angular velocity Ω, the rotation-induced Sagnac-Fizeau shift Δ_F is given by <cit.> Δ _F=nRΩω _c/c( 1-1/n^2-λ/ndn/dλ)= Ω G, where ω_c is the intrinsic frequency of the nonspinning resonator, c and λ are the propagation speed and wavelength of light, R and n are the radius and reflectivity of the resonator, respectively. The dispersion term dn/dλ, characterizing the relativistic origin of the Sagnac shift, is relatively small in typical materials (∼1%) <cit.>. Ω>0 (Ω<0) indicates CW (CCW) rotation of the resonator. Ω_1 (Ω_2) is the rotational angular velocity of the f (d) resonator, and Δ_F1 (Δ_F2) is the Sagnac shift due to the rotation of the f (d) resonator. Under the rotating wave approximation, the total Hamiltonian can be written as (ħ=1) Ĥ=Ĥ_0+Ĥ_int . Here, the first term Ĥ_0 describes the free part, Ĥ_0 = ∑_α =cw,ccw( ω _αf̂_α^†f̂_α+ω _d,αd̂_α^†d̂_α) -iv_g∫ dxâ_R_x^† ∂/∂ xâ _R_x+iv_g∫ dxâ_L_x^†∂/∂ xâ_L_x -iv_g∫ dxb̂_R_x^†∂/∂ xb̂ _R_x+iv_g∫ dxb̂_L_x^†∂/∂ xb̂_L_x, where ω_α=ω_c±Δ_F1 (ω_d,α=ω_c ±Δ_F2) is the effective frequency for α (α=cw,ccw) travelling modes in the f (d) resonator. f̂_α^† (d̂_α^†) is the creation operators in the f (d )resonator. â^†_R_x (â^†_L_x) is the creation operators for the right-moving (left-moving) photon along the waveguide a at position x. b̂^†_R_x (b̂^†_L_x) is the creation operators for the right-moving (left-moving) photon along the waveguide (b) at position x. The group velocities have been assumed to be v_g. The second terms Ĥ _int in Eq. (<ref>) describes the interacting part. Ĥ_int = g_a(â_R_0^†f̂_cw+â _L_0^†f̂_ccw)+g_b(b̂_L_0^†d̂ _cw+b̂_R_0^†d̂_ccw) +J(f̂_cw^†d̂_ccw+f̂_ccw^†d̂ _cw)+H.c. where g_a (g_b) is the f (d) resonator and a (b) waveguide coupling strength. J is the coupling strength of resonator f and d. In the case of a single excitation, the state vector of the system can be written as |ψ⟩ = ∑_α =cw,ccw( C_αf̂ _α^†|∅⟩ +D_α d̂ _α^†|∅⟩) +∑_β =L,R( ∫ dxA_β( x) â_β _x^†+∫ dxB_β( x) b̂_β _x^†) |∅⟩, where |∅⟩ is the vacuum state which indicates that there is zero photon in both the waveguide and the resonator. C_α (D_α) is the probability amplitude of a photon appearing in f (d) resonator. A_β(x) (B_β(x), β=L, R) is the probability amplitude of the right- or left-moving photon in the waveguide a (b). According to the Schrödinger equation Ĥ |ψ⟩=E|ψ⟩, and removing C_α and D_α, the coupled equations can be written EA_L( x) -iv_g∂ A_L( x) /∂ x =δ( x) g_a^2( E-ω _d,cw) A_L( 0) +Jg_ag_bB_L( 0) /( E-ω _ccw) ( E-ω _d,cw) -J^2, EB_L( x) -iv_g∂ B_L( x) /∂ x =δ( x) g_b^2( E-ω _ccw) B_L( 0) +Jg_ag_bA_L( 0) /( E-ω _ccw) ( E-ω _d,cw) -J^2, EA_R( x) +iv_g∂ A_R( x) /∂ x =δ( x) g_a^2( E-ω _d,cw) A_R( 0) +Jg_ag_bB_R( 0) /( E-ω _cw) ( E-ω _d,ccw) -J^2, EB_R( x) +iv_g∂ B_R( x) /∂ x =δ( x) g_b^2( E-ω _cw) B_R( 0) +Jg_ag_bA_R( 0) /( E-ω _cw) ( E-ω _d,ccw) -J^2. The photon enters through port 1 of the a waveguide, and the wave function is A_R( x) =e^iE/v_gx( θ( -x) +t_1→ 2θ( x) ), A_L( x) =e^-iE/v_gxt_1→ 1θ( -x), B_R( x) =e^iE/v_gxt_1→ 3θ( x), B_L( x) =e^-iE/v_gxt_1→ 4θ( -x), where t_i→ j (i,j=1,2,3,4) represents the probability amplitude of photon transmission from port i to port j. θ( ± x) is a step function. The transmission probability is defined by T_i→ j=|t_i→ j|^2. Combine Eq. (<ref>) and Eq. 
(<ref>), we obtain the transmission probability amplitude t_1→ 1 = 0, t_1→ 4=0, t_1→ 2 = ( δ +Δ _F1-ig_a^2/ 2v_g) ( δ -Δ _F2+ig_b^2/2v_g) -J^2/( δ +Δ _F1+ig_a^2/2v_g) ( δ -Δ _F2+ig_b^2/2v_g) -J^2 , t_1→ 3 = -i1/v_gJg_ag_b/( δ +Δ _F1+ig_a^2/2v_g) ( δ -Δ _F2+i g_b^2/2v_g) -J^2 , where δ=E-ω_c is detuning. Since the direction of photon transmission is opposite to the direction of the CCW mode in f resonator, T_1→ 1=0. Similarly, since the CW mode of the f resonator is only coupled to the CCW mode of the d resonator, T_1→ 4=0. So, it is easily to find that T_1→ 2+T_1→ 3=1 which guarantees the probability conservation for the incident photon. If the photon enters through port 2 of the a waveguide, and the wave function is A_L( x) =e^-iE/v_gx( θ( x) +t_2→ 1θ( -x) ), A_R( x) =e^iE/v_gxt_2→ 2θ( x), B_R( x) =e^iE/v_gxt_2→ 3θ( x), B_L( x) =e^-iE/v_gxt_2→ 4θ( -x). The probability amplitude can be obtained by combining Eqs. (<ref>) and ( <ref>) t_2→ 2 = 0, t_2→ 3=0, t_2→ 1 = ( δ -Δ _F1-ig_a^2/ 2v_g) ( δ +Δ _F2+ig_b^2/2v_g) -J^2/( δ -Δ _F1+ig_a^2/2v_g) ( δ +Δ _F2+ig_b^2/2v_g) -J^2 , t_2→ 4 = -i1/v_gλ g_ag_b/( δ -Δ _F1+ig_a^2/2v_g) ( δ +Δ _F2+ig_b^2/2v_g) -J^2. Using the same method, we can also obtain transmission probabilities T_3→ 4, T_3→ 1, T_4→ 3 and T_4→ 2. They satisfy the relationships T_1→ 2=T_4→ 3, T_1→ 3=T_4→ 2, T_2→ 1=T_3→ 4 and T_2→ 4=T_3→ 1. From Eqs. (<ref>-<ref>) and (<ref>-<ref>), it can be observed that if Δ_F1=Δ_F2 and g_a=g_b, we can observe that T_1→ 2=T_2→ 1, T_1→ 3=T_2→ 4. That is to say, when two resonators are static (Ω_1=Ω_2=0) or rotate with the same angular velocity (Ω_1=Ω_2≠0), the photon transmission in the same waveguide is reciprocal. § A SINGLE-PHOTON CIRCULATOR MADE OF TWO SPINNING OPTICAL RESONATORS In this section, the main focus of these results is to analyze how different spinning direction in the resonator can impact the frequency of the realized photon circulator. For this purpose, we will focus on the following cases: 1) One resonator spins while the other remains static. 2) Both resonators spin in the same direction. 3) Both resonators spin in opposite directions. In our calculations, we have selected the experimentally feasible parameters <cit.>: R=30 μm, n=1.4 , Q=10^9, λ=1.55×10^-6 m, ω_c=2π c/λ, g_a=g_b=6ω_c/Q, c=v_g=3× 10^8 m/s, and J=2.4 MHz. For comparisons, we first consider the single resonator case illustrated in Fig. <ref>(a_1). In Figs. <ref>(a_2) and <ref>(a_3), transmission probability distribution as a function of detuning δ for single resonator. The photon transmitted with circulation in the counterclockwise direction 1→ 4→ 3→ 2→ 1, at δ=Δ_F1 [see Fig. 2(a_1)]. In addition, the photon also can transmit with circulation in the clockwise direction 1→ 2→ 3→ 4→ 1, at δ=-Δ_F1. The above circulation can be seen from the transmission spectra as shown in Fig. 2(a_2-a_3). The results are consistent with those in Ref. <cit.>. For two resonators case with Ω_1>0 and Ω_2=0 (or Ω_1=0 and Ω_2<0), when δ =±√( g_a^2g_b^2+4v_g^2J^2+v_g^2Δ _F1 ^2) /(2v_g)-Δ _F1/2, the photon transmitted with circulation in the reverse 8-word direction 2→ 1→ 3→ 4→ 2, see Fig. 2(b_1) (The position marked by a red triangle). The circulation can be seen from the transmission spectra as shown in Figs. 2(b_2-b_3). In this case, the photon can not be transmitted from ports 2 and 3 to ports 4 and 1 , respectively. Due to the large frequency detuning, the photon entering from port 2 can not be transmitted to the CCW mode of the f resonator. 
It also cannot couple with the CW mode of the f resonator, as the direction of photon transmission is opposite to it, so the photon from port 2 transmits through the a waveguide directly to port 1. Also, because of the large frequency detuning, the photon entering from port 3 can not be transmitted to the CW mode of the d resonator. It also cannot couple with the CCW mode of the d resonator, as the direction of photon transmission is opposite to it, so the photon from port 3 transmits through the b waveguide directly to port 4. In contrast, the photon entering from port 1 (4) couples to the CW (CCW) travelling mode with the coupling g_a (g_b), so that the photon can be transferred to port 3 (2). Therefore, a four-frequency-point photon circulator can be achieved in this system. The circulator can also operate in the positive 8-word direction, i.e. 1→ 2→ 4→ 3→ 1, at δ =±√(g_a^2g_b^2+4v_g^2J^2+v_g^2Δ_F1 ^2) /(2v_g)+Δ _F1/2, see Fig. <ref>(c_1) (the position marked by a green star). The routing behavior can be understood in a similar way as given above. The photon can not be transmitted from ports 1 and 4 to ports 4 and 2, respectively. Due to the large frequency detuning, the photon entering from port 1 can not be transmitted to the CW mode of the f resonator. It also cannot couple with the CCW mode of the f resonator, as the direction of photon transmission is opposite to it, so the photon from port 1 transmits through the a waveguide directly to port 2. The photon entering from port 4 can not be transmitted to the CCW mode of the d resonator. It also cannot couple with the CW mode of the d resonator, as the direction of photon transmission is opposite to it, so the photon from port 4 transmits through the b waveguide directly to port 3. In contrast, the photon entering from port 2 (3) couples to the CCW (CW) travelling mode with the coupling g_a (g_b), so that the photon can be transferred to port 4 (1). In addition, when Ω_1<0 and Ω_2=0 (or Ω_1=0 and Ω_2>0), we have plotted the transmission probability distribution as a function of detuning δ, as shown in Fig. <ref>(c_1-c_3). We found that Fig. <ref>(c_2-c_3) is mirror symmetric to Fig. <ref>(b_2-b_3) with δ=0 as the mirror surface. Then, we consider the photon transmission spectra of the two resonators rotating in the same direction (e.g., Ω_1=Ω_2>0), as shown in Figs. <ref>(a-b). Consistent with our previous calculation results, the probability of photon transmission from any port is 1/2, and the photon transmission is reciprocal in the same waveguide. In Figs. <ref>(c-f), we have plotted the photon transmission spectra of the two resonators rotating in opposite directions, e.g., Ω_1>0, Ω_2<0, and |Ω_1|=|Ω_2| or Ω_1<0, Ω_2>0, and |Ω_1|=|Ω_2|. Here, we found that the frequency points δ =±√(g_a^2g_b^2+4v_g^2J^2+v_g^2( Δ _F1+Δ_F2) ^2)/(2v_g)+(Δ _F2-Δ _F1)/2 (δ = ±√(g_a^2g_b^2+4v_g^2J^2+v_g^2( Δ _F1+Δ_F2) ^2)/(2v_g)+(Δ _F2+Δ _F1)/2) can achieve a reverse or positive 8-word circulator, which means that two photon-circulator frequency points are realized. Besides, we also found that at a specific angular velocity |Ω_1|=|Ω_2|= M/G (M=√( g_a^2g_b^2+4v_g^2J^2)/(2v_g), G=R ω _c( n^2-1)/(cn)), there is one frequency point at which a photon entering from either waveguide can be completely routed to the other waveguide. We also found that Figs. <ref>(c-d) are mirror symmetric to Figs. <ref>(e-f) with δ=0 as the mirror surface.
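As an illustration of the orders of magnitude involved, the Sagnac coefficient G and the special angular velocity M/G can be evaluated directly from the parameters listed at the beginning of this section. The short script below is only a numerical sketch of that evaluation; it neglects the small dispersion term dn/dλ (estimated at the ∼1% level above), and the variable names are ours.

import numpy as np

# Parameters quoted in the text
R = 30e-6                # resonator radius (m)
n_r = 1.4                # refractive index
lam = 1.55e-6            # wavelength (m)
c = 3e8                  # speed of light (m/s)
v_g = c                  # group velocity, taken equal to c in the text
omega_c = 2 * np.pi * c / lam
Q = 1e9
g = 6 * omega_c / Q      # g_a = g_b
J = 2.4e6                # resonator-resonator coupling (2.4 MHz)

# Sagnac coefficient G = R * omega_c * (n^2 - 1) / (c * n), dispersion neglected
G = R * omega_c * (n_r**2 - 1) / (c * n_r)

# M = sqrt(g_a^2 g_b^2 + 4 v_g^2 J^2) / (2 v_g)
M = np.sqrt(g**4 + 4 * v_g**2 * J**2) / (2 * v_g)

# Angular velocity at which a photon from either waveguide is fully routed to the other
Omega_special = M / G
print(G, M, Omega_special)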
Compared to the single-resonator case, the two-resonator system offers four frequency points for photon circulation, owing to the energy-level splitting caused by the coupling between the resonators and the Fizeau-drag-induced splitting of the resonance frequencies of the two counter-travelling optical modes. In addition, we find that there is one frequency point at which a photon entering from either waveguide is completely routed to the other waveguide. The direction of the photon circulator also changes, from clockwise (counterclockwise) to the reverse (forward) figure-eight direction. The different circulation directions arise from the coupling between the CW (CCW) mode of the f resonator and the CCW (CW) mode of the d resonator.

§ BACKSCATTERING EFFECT

A WGM resonator supports two degenerate modes, the CW and CCW modes. In practical devices, backscattering is unavoidable: the two modes may be coupled together if the surface of the resonator is rough or the material is inhomogeneous <cit.>. The Hamiltonian of the system then acquires an additional term Ĥ_bs=χ _1f̂_cw^†f̂_ccw+χ _2d̂_ccw^†d̂ _cw+H.c. where χ _1 (χ _2) is the backscattering-induced coupling strength between the CW and CCW modes inside the f (d) resonator. Using the same method as in the previous section, we can calculate the probability T_i→ j for each port. Figure <ref> shows the probability of the photon being transmitted from port 1 to port 3 under different conditions. In Fig. <ref>(a), the dashed line corresponds to no rotation but with backscattering in the f resonator, e.g., Ω _1=Ω _2=0, χ _1=1.2 MHz, χ _2=0. The solid line corresponds to rotation of the f resonator under the conditions of the dashed line, e.g., Ω _1>0, Ω _2=0, χ _1=1.2 MHz, χ _2=0. We find that the maximum value of T_1→ 3 increases from 0.25 to 0.683. The results thus indicate that rotation makes the transmission robust against backscattering. We also find that the transmission frequency points shift to the left, which is caused by the clockwise rotation of the resonator; conversely, when the resonator rotates counterclockwise, the transmission frequency points shift to the right. In Fig. 4(b), the dashed line corresponds to both resonators being static but with backscattering, e.g., Ω _1=Ω _2=0, χ _1=χ _2=1.2 MHz. In contrast to Fig. 4(a), where only one resonator has backscattering, adding backscattering to the other resonator does not change the value T_1→ 3=0.25, because the photon is transmitted to the four ports with equal probability. In the case of backscattering in both resonators, the solid line corresponds to the f resonator rotating clockwise while the d resonator remains static, e.g., Ω _1>0, Ω _2=0, χ _1=χ _2=1.2 MHz. The maximum value of T_1→ 3 decreases from 0.683 to 0.638, which is caused by the backscattering of the d resonator. When the f resonator rotates clockwise and the d resonator rotates counterclockwise (Ω _1>0, Ω _2<0, |Ω _1|=|Ω _2|), the transmission probability T_1→ 3 can reach 0.895, as shown in Fig. <ref>(c). In addition, if Ω _1<0, Ω _2>0, |Ω _1|=|Ω _2|, the transmission spectrum can be obtained from Fig. <ref>(b) through mirror reflection symmetry with respect to δ =0. In the previous section, when the rotational angular velocities of the two resonators were equal and in the same direction, photon transmission within the same waveguide was reciprocal. Here, the addition of backscattering breaks this reciprocity. In Fig.
<ref>, the transmission spectra of photons are plotted with backscattering added to the resonators when the rotational angular velocities of the two resonators are equal and in the same direction. We find two frequency points at which a single-photon circulator can be implemented; the routing behavior is shown in Figs. <ref>(a-b), and the circulation can be seen from the transmission spectra shown in Figs. <ref>(c-d). Unlike the figure-eight circulators discussed above, the circulation direction here is clockwise or counterclockwise.

§ CONCLUSION

We studied the coherent transmission of a single photon in a coupled system of two rotating WGM resonators and two waveguides. This system can form a four-port circulator whose routing direction depends on the frequency of the incident photon and on the rotation directions of the two resonators. We found that when both resonators rotate at identical angular velocities, single-photon transmission exhibits reciprocity within the same waveguide. When the angular velocities differ, there are usually four frequency points capable of facilitating single-photon circulation. However, when the two resonators rotate in opposite directions with equal angular velocities, besides photon circulation at two frequency points, there is one frequency point at which a single photon input from either waveguide is completely routed to the other. When backscattering is present in a resonator, the single-photon circulation suppressed by the internal defect-induced backscattering can be restored by rotating the resonator, as witnessed by the enhancement of the single-photon transfer probability. We note that the presence of reciprocity shuts down the circulator for photons of arbitrary frequency. To turn on the circulator, the resonators must rotate at different angular velocities so that the reciprocity of photon transmission is broken. Furthermore, one can tune the angular velocity to turn on or off the circulation of photons with a given frequency. The nonreciprocity of quantum circulators can be used to design novel quantum photonic devices, which may have extensive applications in future quantum technologies.

Funding This work was supported by NSFC Grants No. 11935006, No. 12075082, No. 12247105, No. 12205054, the Science and Technology Innovation Program of Hunan Province (Grant No. 2020RC4047), National Key R & D Program of China (No. 2024YFE0102400), the Hunan Provincial Major Sci-Tech Program (2023ZJ1010) and Ph.D. Research Foundation (BSJJ202122).

Disclosures The authors declare no conflicts of interest.

999 xia19 C. C. Xia, X. B. Yan, X. D. Tian, and F. Gao, "Ideal optical isolator with a two-cavity optomechanical system," Opt. Commun. 451, 197-201 (2019). Jalas13 D. Jalas, A. Petrov, M. Eich, W. Freude, S. H. Fan, Z. F. Yu, R. Baets, M. Popović, A. Melloni, J. D. Joannopoulos, M. Vanwolleghem, C. R. Doerr and H. Renner, "What is-and what is not-an optical isolator," Nat. Photonics 7, 579-582 (2013). Stadler14 B. J. H. Stadler and T. Mizumoto, "Integrated Magneto-Optical Materials and Isolators: A Review," IEEE Photonics Journal 6, 1-15 (2014). Ruesink18 F. Ruesink, J. P. Mathew, M. A. Miri, A. Alù, and E. Verhagen, "Optical circulation in a multimode optomechanical resonator," Nat. Commun. 9, 1798 (2018). Navarathna23 R. Navarathna, D. T. Le, A. R. Hamann, H. D. Nguyen, T. M. Stace, and A. Fedorov, "Passive Superconducting Circulator on a Chip," Phys. Rev. Lett. 130, 037001 (2023). Xu23 B. G. Xu, D. G. Zhang, Y. Wang, B. B. Hong, G. X. Shu and W. L.
He, "A Terahertz circulator Based on Magneto Photonic Crystal Slab," Photonics 10, 360 (2023). jiang18 C. Jiang, L. N. Song, and Y. Li. "Directional amplifier in an optomechanical system with optical gain," Phys. Rev. A 97, 053812 (2018). Aplet64 L. J. Aplet and J. W. Carson, "A Faraday Effect Optical Isolator," Applied Optics,  3, 544-545 (1964). Qiu22 W. Y. Qiu, X. H. Cheng, A. X. Chen, Y. Y. Lan, and W. J. Nie, "Controlling quantum coherence and entanglement in cavity magnomechanical systems," Phys. Rev. A 105, 063718 (2022). Viola14 G. Viola and D. P. DiVincenzo, "Hall Effect Gyrators and Circulators," Phys. Rev. X 4, 021019 (2014). Fan12 L. Fan, J. Wang, L. T. Varghest, H. Shen, B. Niu, Y. Xuan, A. M. Weiner, and M. Qi, "An All-Silicon Passive Optical Diode," Science 335, 447-450 (2012). Hua16 S. Y. Hua, J. M. Wen, X. S. Jiang, Q. Hua, L. Jiang and M. Xiao, "Demonstration of a chip-based optical isolator with parametric amplification," Nat. Commun. 7, 13657 (2016). Cao17 Q. T. Cao, H. M. Wang, C. H. Dong, H. Jing, R. S. Liu, X. Chen, L. Ge, Q. H. Gong, and Y. F. Xiao, "Experimental Demonstration of Spontaneous Chirality in a Nonlinear Microresonator," Phys. Rev. Lett.   118, 033901 (2017). Zhang19 X. Y. Zhang, Q. T. Cao, Z. Wang, Y. X. Liu, C. W. Qiu, L. Yang, Q. H. Gong and Y. F. Xiao, "Symmetry-breaking-induced nonlinear optics at a microcavity surface," Nat. Photonics 13, 21-24 (2019). Yang23 P. F. Yang, M. Li, X. Han, H. He, G. Li, C. L. Zou, P. F. Zhang, Y. H. Qian, and T. C. Zhang, "Non-Reciprocal Cavity Polarition with Atoms Strongly Coupled to Optical Cavity," Laser & Photonics Rev. 17, 2200574 (2023). Manipatruni09 S. Manipatruni, J. T. Robinson, and M. Lipson, "Optical Nonreciprocity in Optomechanical Structures," Phys. Rev. Lett.  102, 213903 (2009). Tang2023 Z. X. Tang, and X. W. Xu, "Multiterminal nonreciprocal routing in an optomechanical plaquette via synthetic magnetism," New J. Phys. 25, 123028 (2023). Hafezi12 M. Hafezi and P. Rabl, "Optomechanically induced non-reciprocity in microring resonators," Opt. Express 20, 7672-7684 (2012). Habraken12 S. J. M. Habraken, K. Stannigel, M. D. Lukin, P. Zoller and P. Rabl, "Continuous mode cooling and phonon routers for phononic quantum networks," New J. Phys.  14, 115004 (2012). Schmidt15 M. Schmidt, S. Kessler, V. Peano, O. Painter, and F. Marquardt, "Optomechanical creation of magnetic felds for photons on a lattice," Optica  2, 635-641 (2015). Metelmann15 A. Metelmann, and A. A. Clerk, "Nonreciprocal photon transmission and amplifcation via reservoir engineering," Phys. Rev. X  5, 021025 (2015). Xu15 X. W. Xu, and Y. Li, "Optical nonreciprocity and optomechanical circulator in three-mode optomechanical systems," Phys. Rev. A 91, 053854 (2015). Xu16 X. W. Xu, Y. Li, A. X. Chen, and Y. X. Liu, "Nonreciprocal conversion between microwave and optical photons in electro-optomechanical systems," Phys. Rev. A 93, 023837 (2016). Shen16 Z. Shen, Y. L. Zhang, Y. Chen, C. L. Zou, Y. F. Xiao, X. B. Zou, F. W. Sun, G. C. Guo and C. H. Dong, "Experimental realization of optomechanically induced non-reciprocity," Nat. Photonics 10, 657-661 (2016). Ruesink16 F. Ruesink, M. A. Miri, A. Al, and E. Verhagen, "Nonreciprocity and magnetic-free isolation based on optomechanical interactions," Nat. Commun. 7, 13662 (2016). Verhagen17 E. Verhagen, and A. Ai, "Optomechanical nonreciprocity," Nat. Phys. 13, 922-924 (2017). Fang17 K. Fang, J. Luo, A. Metelmann, M. H. Matheny, F. Marquardt, A. A. Clerk, and O. 
Painter, "Generalized non-reciprocity in an optomechanical circuit via synthetic magnetism and reservoir engineering," Nat. Phys, 13, 465-471 (2017). Peterson17 G. A. Peterson, F. Lecocq, K. Cicak, R. W. Simmonds, J. Aumentado, and J. D. Teufel, "Demonstration of efficient nonreciprocity in a microwave optomechanical circuit," Phys. Rev. X 7, 031001 (2017). Bernier17 N. R. Bernier, L. D. Toth, A. Koottandavida, M. A. Ioannou, D. Malz, A. Nunnenkamp, A. A. K. Feofanov and T. J. Kippenberg, "Nonreciprocal reconfigurable microwave optomechanical circuit," Nat. Commun.  8, 604 (2017). Barzanjeh17 S. Barzanjeh, M. Wulf, M. Peruzzo, M. Kalaee, P. B. Dieterle, O. Painter, and J. M. Fink, "Mechanical on-chip microwave circulator," Nat. Commun. 8, 953 (2017). Tian17 L. Tian and Z. Li, "Nonreciprocal quantum-state conversion between microwave and optical photons," Phys. Rev. A 96, 013808 (2017). Jiang17 C. Jiang, L. N. Song and Y. Li, "Directional amplifer in an optomechanical system with optical gain," Phys. Rev. A 97, 053812 (2017). Xu19 H. Xu, L. Jiang, A. A. Clerk and J. G. E. Harris, "Nonreciprocal control and cooling of phonon modes in an optomechanical system," Nature 568, 65-69 (2019). Mercier19 L. M. d. Lépinay, E. Damskägg, C. F. Ockeloen-Korppi, and M. A. Sillanpää, "Realization of directional amplifcation in a microwave optomechanical device," Phys. Rev. Appl. 11, 034027 (2019). Lai20 D. G. Lai, J. F. Huang, X. L. Yin, B. P. Hou, W. Li, D. Vitali, F. Nori and J. Q. Liao, "Nonreciprocal ground-state cooling of multiple mechanical resonators," Phys. Rev. A 102, 011502 (2020). Xu20 X. Xu, Y. Zhao, H. Wang, H. Jing, and A. Chen, "Quantum nonreciprocality in quadratic optomechanics," Photon. Res. 8, 143-150 (2020). Chen21 Y. Chen, Y. L. Zhang, Z. Shen, C. L. Zou, G. C. Guo and C. H. Dong, "Synthetic gauge fields in a single optomechanical resonator," Phys. Rev. Lett. 126, 123603 (2021). Liu23 J. X. Liu, Y. F. Jiao, Y. Li, X. W. Xu, Q. Y. He, and H. Jing, "Phase-controlled asymmetric optomechanical entanglement against optical backscattering," Sci. China-Phys. Mech. Astron. 66, 230312 (2023). Lu2312 T. X. Lu, Y. Wang, K. Y. Xia, L. M. Kuang, and H. Jing, "Quantum squeezing induced nonreciprocal photon laser," Sci. China-Phys. Mech. Astron. 67, 6 (2024). Zhao20 W. Zhao, S. D. Zhang, A. Miranowicz, and H. Jing, "Weak-force sensing with squeezed optomechanics," Sci. China-Phys. Mech. Astron. 63, 224211 (2020). Zheng24 L. L. Zheng, X. H. Xiong, X. Y. Guo, D. W. Zhang, and X. Y. Lü, "Nanoparticle detection based on microcavity exceptional-point characteristics," Phys. Rev. A 109, 013502 (2024). Zhang24 D. W. Zhang, L. L. Zheng, M. Wang, Y. Zhou, and X. Y. Lü, "Loss-induced chaos in double-cavity optomechanical system," Phys. Rev. A 109, 023529 (2024). Xin15 X. Y. Lü, H. Jing, J. Y. Ma, and Y. Wu, "PT-Symmetry-Breaking Chaos in optomechanics," Phys. Rev. Lett. 114 , 253601 (2015). Rter10 C. E. Rüter, K. G. Makris, R. El-Ganainy, D. N. Christodoulides, M. Segev, and D. Kip, "Observation of parity time symmetry in optics," Nat. Phys. 6, 192-195 (2010). Wu14 J. H. Wu, M. Artoni, and G. C. L. Rocca, "Non-Hermitian Degeneracies and Unidirectional Reflectionless Atomic Lattices," Phys. Rev. Lett. 113, 123004 (2014). Bender13 N. Bender, S. Factor, J. D. Bodyfelt, H. Ramezani, D. N. Christodoulides, F. M. Ellis, and T. Kottos, "Observation of asymmetric transport in structures with active nonlinearities," Phys. Rev. Lett.  110, 234101 (2013). Peng14 B. Peng, Ş. K. Özdermir, F. Lei, F. Monifi, M. 
Gianfreda, G. L. Long, S. Fan, F. Nori, C. M. Bender and L. Yang, "Parity-time-symmetric whispering-gallery microcavities," Nat. Phys.  10, 394-398 (2014). Chang14 L. Chang, X. S. Jiang, S. Y. Hua, C. Yang, J. M. Wen, L. Jiang, G. Z. Wang and M. Xiao, "Parity-time symmetry and variable optical isolation in active-passive-coupled microresonators," Nat. Photonics 8 , 524 (2014). Chang20 H. Zhang, R. Huang, S. D. Zhang, Y. Li, C. W. Qiu, F. Nori, and H. Jing, "Breaking anti-PT symmetry by spinning a resonator," Nano Lett. 20, 7594-7599 (2020). Wang20 F. Wang, X. Niu, X. Hu, T. Gu, X. Wang, J. Yang, H. Yang, Y. Ao, S. Wang and Q. Gong, "All-Optical Mode-Selective Router Based on Broken Anti-PT Symmetry," Phys. Rev. Appl. 14, 044050 (2020). Wang2013 D. W. Wang, H. T. Zhou, M. J. Guo, J. X. Zhang, J. Evers and S. Y. Zhu, "Optical diode made from a moving photonic crystal," Phys. Rev. Lett.  110, 093901 (2013). Horsley13 S. A. R. Horsley, J. H. Wu, M. Artoni and G. C. La Rocca, "Optical nonreciprocity of cold atom bragg mirrors in motion," Phys. Rev. Lett.  110, 223602 (2013). Ramezani18 H. Ramezani, P. K. Jha, Y. Wang and X. Zhang, "Nonreciprocal localization of photons," Phys. Rev. Lett. 120, 043901 (2018). Zhang18 S. Zhang, Y. Hu, G. Lin, Y. Niu, K. Xia, J. Gong and S. Gong, "Thermal-motion-induced non-reciprocal quantum optical system," Nat. Photonics 12, 744-748 (2018). Xia18 K. Xia, F. Nori and M. Xiao, "Cavity-free optical isolators and circulators using a chiral cross-kerr nonlinearity," Phys. Rev. Lett.  121, 203602 (2018). Lin19 G. Lin, S. Zhang, Y. Hu, Y. Niu, J. Gong and S. Gong, "Nonreciprocal amplification with four-level hot atoms," Phys. Rev. Lett.  123, 033902 (2019). Liang20 C. Liang, B. Liu, A. N. Xu, X. Wen, C. Lu, K. Xia, M. K. Tey, Y. C. Liu and L. You, "Collision-induced broadband optical nonreciprocity," Phys. Rev. Lett. 125, 123901 (2020). Li20 E. Z. Li, D. S. Ding, Y. C. Yu, M. X. Dong, L. Zeng, W. H. Zhang, Y. H. Ye, H. Z. Wu, Z. H. Zhu, W. Gao, G. C. Guo and B. S. Shi, "Experimental demonstration of cavity-free optical isolators and optical circulators," Phys. Rev. Res. 2, 033517 (2020). Hu21 X. X. Hu, Z. B. Wang, P. Zhang, G. J. Chen, Y. L. Zhang, G. Li, X. B. Zou, T. Zhang, H. X. Tang, C. H. Dong, G. C. Guo and C. L. Zou, "Noiseless photonic non-reciprocity via optically-induced magnetization," Nat. Commun. 12, 2389 (2021). Jing2018 H. Jing, H. Lü, S. K. Özdemir, T. Carmon and F. Nori, "Nanoparticle sensing with a spinning resonator," Optica. 5, 1424-430 (2018). Graf22 A. Graf, S. D. Rogers, J. Staffa, U. A. Javid, D. H. Griffith, and Q. Lin, "Nonreciprocity in Photon Pair Correlations of Classically Reciprocal Systems," Phys. Rev. Lett. 128, 213605 (2022). Maayani18 S. Maayani, R. Dahan, Y. Kligerman, E. Moses, A. U. Hassan, H. Jing, F. Nori, D. N. Christodoulides and T. Carmon, "Flying couplers above spinning resonators generate irreversible refraction." Nature  558, 569-572 (2018). Li2019 Y. Li, Y. F. Peng, L. Han, M. A. Miri, W. Li, M. Xiao, X. F. Zhu, J. Zhao, A. Alù, S. H. Fan, and C. W. Qiu, "Anti-parity-time symmetry in diffusive systems," Science 364, 170-173 (2019). Mao2022 X. Mao, H. Yang, D. Long, M. Wang, P. Y. Wen, Y. Q. Hu, B. Y. Wang, G. Q. Li, J. C. Gao, and G. L. Long, "Experimental demonstration of made-matching and Sagnac effect in a millimeter-scale wedged resonator gyroscope," Photon. Res. 10, 2115-2121 (2022). Khia2018 P. P. Khial, A. D. White and A. Hajimiri, "Nanophotonic optical gyroscope with reciprocal sensitivity enhancement," Nat. 
Photonics  12, 671-675 (2018). Huang18 R. Huang, A. Miranowicz, J. Q. Liao, F. Nori and H. Jing, "Nonreciprocal photon blockade," Phys. Rev. Lett. 121, 153601 (2018). Huang19 B. Li, R. Huang, X. W. Xu, A. Miranowicz, and H. Jing, "Nonreciprocal unconventional photon blockade in a spinning optomechanical system," Photon. Res. 7, 630-641 (2019). Wang2019 K. Wang, Q. Wu, Y. F. Yu, and Z. M. Zhang, "Nonreciprocal photon blockade in a two-mode cavity with a second-order nonlinearity," Phys. Rev. A  100, 053832 (2019). Shen2020 H. Z. Shen, Q. Wang, J. Wang, and X. X. Yi, "Nonreciprocal unconventional photon blockade in a driven dissipative cavity with parametric amplifcation," Phys. Rev. A 101, 013826 (2020). Xue2020 W. S. Xue, H. Z. Shen, and X. X. Yi, "Nonreciprocal conventional photon blockade in driven dissipative atom-cavity," Opt. Lett.  45, 4424-427 (2020). Jing2021 Y. W. Jing, H. Q. Shi, and X. W. Xu, "Nonreciprocal photon blockade and directional amplifcation in a spinning resonator coupled to a two-level atom," Phys. Rev. A 104, 033707 (2021). JiangY2018 Y. Jiang, S. Maayani, T. Carmon, F. Nori, and H. Jing, "Nonreciprocal phonon laser," Phys. Rev. Appl. 10, 064037 (2018). Xu21 Y. Xu, J. Y. Liu, W. Liu, and Y. F. Xiao, "Nonreciprocal phonon laser in a spinning microwave magnomechanical system," Phys. Rev. A  103, 053501 (2021). Jiao20 Y. F. Jiao, S. D. Zhang, Y. L. Zhang, A. Miranowicz, L. M. Kuang, and H. Jing, "Nonreciprocal Optomechanical Entanglement against Backscattering Losses," Phys. Rev. Lett. 125, 143605 (2020). Li21 B. J. Li, S. K. Özdemir, X. W. Xu, L. Zhang, L. M. Kuang, and H. Jing, "Nonreciprocal optical solitons in a spinning Kerr resonator," Phys. Rev. A 103, 053522 (2021). Hu23 N. Hu, Z. X. Tang, and X. W. Xu, "Broadband optical nonreciprocity via nonreciprocal band structure," Phys. Rev. A 108, 063516 (2023). Tang22 J. S. Yang, W. Nei, M. Chen, X. Su, Y. Q. Lu, F. Nori, and K. Y. Xia, "Nonreciprocal Single-Photon Band Structure," Phys. Rev. Lett.  126, 203602 (2022). wei20 Y. W. Jing, "Quantum spining photonic circulator", Scientific Reports  12, 5844, (2020). Kerckhof15 J. Kerckhof, K. Lalumière, B. J. Chapman, A. Blais, and K. W Lehnert, "On-chip superconducting microwave circulator from synthetic rotation," Phys. Rev. Appl. 4, 034002 (2015). Sliwa15 K. M. Sliwa, M. Hatridge, A. Narla, S. Shankar, L. Frunzio, R. J. Schoelkopf, and M. H. Devoret, "Reconfgurable josephson circulator/directional amplifer," Phys. Rev. X 5, 041020 (2015). Chapman17 B. J. Chapman, E. I. Rosenthal, J. Kerckhoff, B. A. Moores, L. R. Vale, J. A. B. Mates, G. C. Hilton, K. Lalumire, A. Blais, and K. W. Lehnert, "Widely tunable on-chip microwave circulator for superconducting quantum circuits," Phys. Rev. X 7, 041043 (2017). Muller18 C. Müller, S. W. Guan, N. Vogt, J. H. Cole, and T. M. Stace, "Passive on-chip superconducting circulator using a ring of tunnel junctions," Phys. Rev. Lett. 120, 213602 (2018). Shen18 Z. Shen, Y. L. Zhang, Y. Chen, F. W. Sun, X. B. Zou, G. C. Guo, C. L. Zou and C. H. Dong, "Reconfgurable optomechanical circulator and directional amplifer," Nat. Commun. 9, 1797 (2018). Scheucher16 M. Scheucher, A. Hilico, E. Will, J. Volz, and A. Rauschenbeutel, " Quantum optical circulator controlled by a single chirally coupled atom," Science 354, 1577-1580 (2016). Chen17 X. W. Xu, A. X. Chen, Y. Li, and Y. X. Liu, "Single-photon nonreciprocal transport in one-dimensional coupled-resonator waveguides," Phys. Rev. A 95, 063808 (2017). Liu17 X. W. Xu, A. X. Chen, Y. Li, and Y. 
X Liu, "Nonreciprocal single-photon frequency converter via multiple semi-infnite coupled resonator waveguides," Phys. Rev. A 96, 053853 (2017). xu2020 X. W. Xu, Y. Li, B. Li, H. Jing, and A. X. Chen, "Nonreciprocity via nonlinearity and synthetic magnetism," Phys. Rev. Appl.  13, 044070 (2020). Georgescu14 I. M. Georgescu, S. Ashhab, and F. Nori, "Quantum simulation," Rev. Mod. Phys. 86, 153-185 (2014). Xu2020 G. Q. Xu, K. C. Dong, Y. Li, H. G. Li, K. P. Liu, L. Q. Li, J. Q. Wu, and C. W. Qiu, "Tunable analog thermal material," Nat. Commun.  11, 6028 (2020). Cromb2020 M. Cromb, G. M. Gibson, E. Toninelli, M. J. Padgett, E. M. Wright, and D. Faccio, "Amplification of waves from a rotating body," Nat. Phys. 16, 1069-1073 (2020). Chuan24 C. X. Zhang, X. Jiang, and D. Ta, "Revealing the incidence-angle-independent frequency shift in the acoustic rotational doppler effect," Phys. Rev. Lett. 132, 114001 (2024). Malykin2000 G. B. Malykin, "The Sagnac effect: Correct and incorrect explanations," Phys.-Usp. 43, 1229 (2000). Kerry03 K. J. Vahala, "Opitical microcavities," Nature  424, 839-846, (2003). Spillane05 S. M. Spillane, T. J. Kippenberg, K. J. vahala, K. W. Goh, E. Wilcut, and H. J. Kimble, "Ultrahigh-Q toroidal microresonators for cavity quantum electrodynamics," Phys. Rev. A 71, 013817 (2005). Takao06 T. K Aoki, B. Dayan, E. Wilcut, W. P. Bowen, A. S. Parkins, T. J. Kippenberg, K. J. Vahala and H. J. Kimble, "Observation od strong coupling between one atom and a monolithic microresonator," Nature  443, 671-674 (2006). zhu46 J. G. Zhu, S. H. K. Ozdemir, Y. F. Xiao, L. Li, L. N. He, D. R. Chen, and L. Yang, "On-chip single nanoparticle detection and sizing by mode splitting in an ultrahigh-Q microresonator," Nat. Photonics 4, 46-49 (2010). Guo2023 Q. Guo, K. X. Zhou, C. H. Bai, Y. C. Zhang, G. Li, and T. C. Zhang, "Nonreciprocal mechanical squeezing in a spinning cavity optomechanical system via pump modulation," Phys. Rev. A 108, 033515 (2023). Jiao2020 Y. F. Jiao, S. D. Zhang, Y. L. Zhang, A. Miranowicz, L. M. Kuang, and H. Jing, "Nonreciprocal Optomechanical Entanglement against Backscattering Losses," Phys. Rev. Lett. 125, 143605 (2020).
Credit Ratings: Heterogeneous Effect on Capital Structure
[ "Helmut Wasserbacher", "Martin Spindler" ]
econ.GN
[ "econ.GN", "q-fin.EC", "stat.AP" ]
Short title: Credit Ratings and Capital Structure

The views and opinions expressed in this document are those of the first author and do not necessarily reflect the official policy or position of Novartis or any of its officers.

Helmut Wasserbacher, Novartis International AG, Novartis Campus, 4002 Basel, Switzerland. E-mail: helmut.wasserbacher@novartis.com

Martin Spindler, University of Hamburg, Hamburg Business School, Moorweidenstr. 18, 20148 Hamburg, Germany. E-mail: martin.spindler@uni-hamburg.de

§ ABSTRACT

Why do companies choose particular capital structures? A compelling answer to this question remains elusive despite extensive research. In this article, we use double machine learning to examine the heterogeneous causal effect of credit ratings on leverage. Taking advantage of the flexibility of random forests within the double machine learning framework, we model the relationship between variables associated with leverage and credit ratings without imposing strong assumptions about their functional form. This approach also allows for data-driven variable selection from a large set of individual company characteristics, supporting valid causal inference. We report three findings: First, credit ratings causally affect the leverage ratio. Having a rating, as opposed to having none, increases leverage by approximately 7 to 9 percentage points, or 30% to 40% relative to the sample mean leverage. However, this result comes with an important caveat, captured in our second finding: the effect is highly heterogeneous and varies depending on the specific rating. For AAA and AA ratings, the effect is negative, reducing leverage by about 5 percentage points. For A and BBB ratings, the effect is approximately zero. From BB ratings onwards, the effect becomes positive, exceeding 10 percentage points. Third, contrary to what the second finding might imply at first glance, the change from no effect to a positive effect does not occur abruptly at the boundary between investment and speculative grade ratings. Rather, it is gradual, taking place across the granular rating notches ("+/-") within the BBB and BB categories.

JEL classification: C14, C21, D22, G24, G32. Keywords: double machine learning, heterogeneous treatment effect, capital structure, leverage, credit rating, machine learning.

§ INTRODUCTION

The specific mix of debt and equity instruments a company uses to finance its operations represents its "capital structure". The ratio of debt to equity within this structure constitutes the company's financial "leverage" ratio. Corporate finance theory suggests that the optimal capital structure is that which maximizes the company's market value <cit.>. Surprisingly, however, the question of why a company chooses a particular capital structure has remained the subject of extensive debate <cit.> ever since Stewart Myers first highlighted the "Capital Structure Puzzle" in 1984 <cit.>. Additionally, there is no clear, unifying model for optimal leverage <cit.>.
Empirical research has extensively examined firm and industry characteristics to determine which factors can explain observed leverage ratios <cit.>. In recent years, machine learning approaches have begun to complement traditional econometric methods <cit.>, enabling researchers to apply tree-based models, which are more flexible than conventional linear frameworks. Moreover, by incorporating regularization (shrinkage) methods that perform automatic, data-driven variable selection <cit.>, machine learning allows researchers to consider a larger set of potential explanatory factors. However, the primary focus of most machine learning methods is to maximize predictive performance, rather than uncovering causal relationships <cit.>. Causal machine learning is an emerging field that attempts to fill this gap. It is specifically concerned with uncovering causal mechanisms through the use of machine learning techniques. In this paper, we employ the double/debiased machine learning framework <cit.> to investigate the causal effect of credit ratings on leverage. In applying this very recent methodology, we contribute to the literature by identifying the heterogeneity of this effect across the different credit rating levels in a complex, high-dimensional setting. We structure this paper as follows. Section <ref> provides a brief overview of the predominant capital structure theories. Section <ref> consists of an introduction to credit ratings. Section <ref> reviews recent publications on the use of machine learning models to predict leverage or credit ratings, as well as publications on the impact of credit ratings on leverage. Section <ref> introduces the double machine learning framework we will employ in Section <ref> to determine the causal effect of credit ratings on leverage ratios for a sample of companies from the Compustat database. Section <ref> summarizes our findings, outlines the limitations of our work and suggests how these could form the basis for further research. Lastly, section <ref> is the appendix, which contains further details related to several sections of the main text. § CAPITAL STRUCTURE THEORIES The capital structure of a company refers to the specific mix of financial instruments it uses to finance its operations. Alongside decisions regarding investments, it constitutes a fundamental question for management <cit.>: what is the source of funds for our investments? Capital structure analysis typically focuses on “leverage”, which is the ratio of the total amount of debt (as a broad asset class) to total equity in the capital structure. Different theories have been developed about the optimal capital structure, or the leverage that maximizes the overall market value of a company <cit.>. Key theories include Modigliani and Miller's theory of irrelevance <cit.> (page 268, “Proposition I”), the trade-off theory <cit.>, the pecking order theory <cit.> and the market timing theory <cit.>. Over time, many extensions and complementary perspectives have been proposed (e.g., <cit.> and <cit.>). In the context of this article, the inclusion of credit ratings as a factor affecting a company's leverage is of particular interest. <cit.> labelled this the credit rating - capital structure hypothesis. Among the theories concerning the optimal capital structure, there is no consensus, and the reasons why individual companies choose a particular capital structure remain largely unknown <cit.>. Thus, the “Capital Structure Puzzle” <cit.> remains unsolved. 
A second important aspect of most theories is that they do not explicitly specify the functional form through which the putative factors influence or determine capital structure. In particular, they neither postulate nor even imply a linear relationship between leverage and its determining factors. At the same time, there is extensive empirical research on the potential determinants of observed leverage ratios, as witnessed by surveys such as <cit.> or <cit.>. The considered explanatory variables are mostly financial in nature and include elements from the balance sheet, income statement and cash flow statement. These variables are typically scaled by total assets or total sales and are accompanied by a rationale of what they measure <cit.>. However, the precise economic concepts that these measures are intended to proxy and, thus, the causal mechanisms involved are not always clear <cit.>. Empirical studies sometimes also attempt to capture individual company attributes beyond financial characteristics. This typically involves the use of dummy variables to represent traits such as company maturity, “uniqueness” (often linked to sub-industry sectors) or operations in regulated industries such as utilities or railroads. More generally, dummy variables representing a company's industry at the two- to four-digit Standard Industrial Classification (SIC) code level or relying on the Fama and French industry classification <cit.> are commonly included as covariates in empirical analyses. Examples of such approaches can be found in <cit.> and <cit.>. Less frequently employed variables are those that attempt to capture concepts such as management skills <cit.>, effective corporate governance mechanisms <cit.>, or the impact of the economic regime in which a company operates, such as the tax system <cit.>. <cit.> refers to this group of explanatory factors as “cognitive variables” (page 115). Most studies examining leverage include a subset of the aforementioned explanatory variables using variants of linear regression <cit.>. However, empirical research <cit.> supports the view that “the relation between leverage and many of these variables is nonlinear” (<cit.>, page 311) and that these nonlinearities persist even after excluding particular subgroups of companies, such as distressed firms. However, as highlighted by <cit.> (page 337), few empirical analyses have explicitly taken account of these nonlinear dynamics. Machine learning methods, which are adaptable to complex, non-linear patterns, appear well-suited to address our research question in this environment <cit.>. § CREDIT RATINGS <cit.> (page 4) define credit ratings as “opinions about credit risk”, that aim “to provide investors and market participants with information about the relative credit risk of issuers and individual debt issues.” Indeed, the main purpose of credit ratings is to reduce information asymmetries in financial markets, facilitated by credit agencies' access to privileged information from company management. The credit rating market is dominated by three big agencies: S&P Global Ratings (formerly known as Standard & Poor's Ratings Services), Moody's Investor Services and Fitch Ratings, with S&P holding approximately 50% of the market share <cit.>. Ratings are typically assigned using a hierarchical, letter-based scale. 
For instance, the highest S&P rating is denoted as “AAA”, corresponding to “[e]xtremely strong capacity to meet financial commitments”, while the lowest rating is denoted as “D”, corresponding to “[p]ayment default on a financial commitment or breach of an imputed promise; also used when a bankruptcy petition has been filed or similar action taken” <cit.> (page 9). Further elements include “+” or “-” signs added to the rating to indicate the relative standing (“notch”) within a broad rating category, as well as the distinction between “investment-grade” (from AAA to BBB-) and “speculative-grade” (BB+ and below) ratings, “outlooks” for possible rating changes anticipated within six to 24 months, and “watchlists” for more immediate concerns (usually 90 days). Industry sources such as <cit.> and <cit.> provide further details on the codification of ratings. For this article, it is important to stress that the credit rating industry operates almost entirely under the “issuer-pays model”. In this model, a company seeking a credit rating approaches a rating agency and pays for the service <cit.> (page 1961). This approach contrasts with the “investor-pays model”, under which rating agencies are financed through fees charged to investors accessing the ratings; this model is now less common. Alternative models remain marginal, such as the “public-utility model” in China <cit.>. Thus, the issuance of a company rating is the result of an explicit decision by the company's management. For example, Fitch states that “[t]he rating process usually begins when an issuer [...] contacts a member of Fitch's Business and Relationship Management (BRM) group with a request to engage Fitch to provide a credit rating” <cit.> (page 2). Similarly, “Ratings request from issuer” is the first box in S&P's flowchart explaining “Raising Capital Through Rated Securities” <cit.> (page 7). Put differently, companies self-select into having a rating.[At least theoretically, other situations are, of course, conceivable. For instance, a company might wish to obtain a rating, but the rating agency does not or cannot provide one (for whatever reason); or, after initial discussions about requesting a rating, a company withdraws its request. We believe, however, that such situations are limited in practice. ] The question of potential determinants of corporate credit ratings has been examined extensively in the empirical literature, with <cit.> providing a recent overview. Similar to the determinants of leverage, financial ratios are widely employed in empirical research on credit ratings <cit.>; indeed, as <cit.> (page 10) asserts, “Studies that omit these variables are almost always incomplete by definition”. Additionally, some authors indicate that corporate governance mechanisms also play a role in determining credit ratings <cit.>. Meanwhile, findings regarding the influence of macroeconomic variables on credit ratings have yielded mixed results <cit.>, corresponding with the assertion of credit rating agencies that they already include “the anticipated ups and downs of the business cycle” in their assessments <cit.> (page 10). A further similarity between leverage and rating studies is the predominant reliance on linear relationships between dependent and independent variables <cit.>. <cit.> (page 545) observe in their literature review that the key advantage of these models is that they are “succinct and [...] easy to explain.” However, many authors are aware of likely non-linear effects and try to accommodate these. 
For instance, <cit.> (page 2649) first transform interest coverage into a piecewise linear function and then create four distinct variables over four regions. <cit.> (page 1976) include squared and cubed versions of all explanatory variables. Again, machine learning methods, which by design are flexible to adapt to complex, non-linear patterns, appear particularly appropriate in such scenarios <cit.>. For further details on ratings, the references in <cit.> and <cit.> provide good starting points. § LITERATURE REVIEW Our review of the literature focuses on the central topic of this paper, which is the causal effect of credit ratings on capital structure. Here, we discuss the relatively few existing publications on this subject. In the appendix, we provide additional information by highlighting relevant findings from selected studies that employ machine learning methods to investigate leverage ratios or credit ratings. A plausible case can be made that credit ratings influence capital structure decisions. For instance, companies sometimes mention credit rating objectives in the context of specific financing decisions. Surveys also consistently indicate that ratings are among the main factors when managers decide about leverage for their firms (e.g., <cit.>). Additionally, as early as 1936, federal regulations in the United States required banks to invest exclusively in investment-grade bonds (see <cit.>, section 4, for a historical overview). It is therefore surprising that research on the determinants of capital structure took so long to consider the potential causal effect of credit ratings. Given the possibility that “in the social sciences often that is treated as important which happens to be accessible to measurement” <cit.> (page 3), we hypothesize that the lack of early research on this topic was due to the initially limited availability of corporate credit rating data. First, prior to the late 1960s, credit rating agencies operated under the investor-pays model (see section <ref>). Thus, credit ratings were private information purchased by investors. Only with the switch to the issuer-pays model did this information begin to be “[distributed] to the general public at no charge” <cit.> (page 102). Second, ratings became available in popular databases substantially later than data such as balance sheet and income statement information. For instance, whereas Compustat[<https://www.marketplace.spglobal.com/en/datasets/compustat-financials-(8)#dataset-overview> (accessed December 8, 2022)] was initiated and already had significant coverage as early as 1950, 1985 was, as <cit.> (page 1047) points out, the first year in which the S&P long-term credit rating became available in Compustat. Published in 2006, <cit.> claims to be “the first paper to examine the direct effect of credit ratings on capital structure decisions” (page 1036). In particular, the author postulates the “Credit Rating - Capital Structure Hypothesis” (CR-CS) <cit.> (page 1037), according to which credit ratings represent a material factor in capital structure decisions due to the discrete costs and benefits of different rating levels. Above all, changes in credit ratings trigger costs or benefits that influence the leverage decisions of companies according to the CR-CS. The empirical test of the CR-CS theory <cit.> relies on a sample of 12'336 firm-years from Compustat from 1986 to 2001. The sample is restricted to companies for which a Standard & Poor's Long-Term Domestic Issuer Credit Rating is available. 
The fundamental idea of the test is to examine how managers' concerns about potential rating changes affect their decision to issue debt versus equity. Using the presence of a plus or minus rating as a proxy for managerial concern about an impending rating change, the CR-CS theory predicts that such companies will issue relatively less debt. Indeed, <cit.> finds that companies with a credit rating that includes a plus or minus (i.e., “+” or “-” notch qualification) issue approximately 0.5% to 1% less debt than companies with a straight rating (i.e., without a plus or minus qualification). In a subsequent paper on the relationship between ratings and capital structure, <cit.> finds that companies reduce leverage by approximately 1.5% to 2.0% of assets following a rating downgrade, whereas rating upgrades do not affect subsequent leverage levels. This asymmetry suggests that companies strive to achieve and maintain minimum rating levels. The hypothesized reason for this behavior is that certain ratings offer discrete benefits, such as the ability to issue commercial paper. <cit.> also explicitly consider the role of credit ratings in the context of capital structure decisions. However, their argument focuses on the supply side of capital, especially a company's access to the public bond market, as measured based on whether the company has a credit rating. The underlying reasoning is that a desired level of leverage might be unattainable for a company if lenders are rationing capital (see, for instance, <cit.>, pages 108-113). Thus, the authors postulate a link between a company's source of capital and its leverage. For their empirical analysis (described on pages 51-54), <cit.> use Compustat data from 1986 to 2000, resulting in 77'659 firm-years and a dataset similar to that used in <cit.> and <cit.>. <cit.> find that the effect of having any credit rating (versus having none at all) increases a company's leverage by about 6 to 8 percentage points, corresponding to an approximate 35% increase relative to the average leverage ratio of 22%. Because <cit.> (page 54) use the existence of a debt rating as a proxy for access to the capital market, the authors conclude that companies with access to the public bond markets have significantly more leverage. From an analytical perspective, we observe that while the vast majority of control variables in the five linear regression specifications of <cit.> are statistically significant at the 1% level, the R^2, even of model V, which includes 12 company control variables and a year dummy, does not exceed 37.3%. This suggests that capital structures are difficult to predict with linear model specifications. Adopting a different approach, <cit.> took advantage of several changes made by the rating agency Moody's[We note that this is one of the very few papers identified in our literature search that rely on rating information from Moody's rather than S&P Global Ratings (Standard and Poor's).] in 2006 to the calculation and reporting of leverage ratios in relation to pensions, operating leases, and hybrid securities. The author argues that because these changes were exogenous to company fundamentals, they provide a natural experiment (see for instance <cit.>, page 75) to determine their causal impact on capital structure and investment decisions. The findings across several analyses support the view that changes to the rating adjustment methods affected capital structure decisions. 
<cit.> therefore concludes that “credit ratings have a significant impact on financial and real decisions of firms” (page 581) and “rating agencies have the power to affect corporate decisions” (page 567). The results of the studies discussed so far suggest that credit ratings have a significant but very general effect on leverage. However, <cit.> provide a more nuanced view. The authors investigate the validity of the CR-CS model as proposed in <cit.> and <cit.> by testing four hypotheses about company-level attributes. The authors argue that for companies with these attributes, maintaining or achieving a certain rating is especially desirable. Thus, these attributes “should proxy for management's inclination to adopt the CR-CS model” (<cit.>, page 574). For instance, depending on their broad rating category, companies should behave differently because the relative costs of a change in ratings vary across categories; notably, companies on the verge of moving from investment-grade (BBB- on the S&P rating scale) to speculative-grade (BB+ and below) ratings should be highly sensitive to the CR-CS logic due to the many negative regulatory implications of non-investment-grade status. The sample period in <cit.> spans from 1986 to 2009, leading to a total of approximately 16'000 company-years. Results across all four hypotheses do not support the view that credit ratings significantly affect capital structure decisions. For instance, estimates of the effect of plus/minus ratings on leverage are generally not significant when companies are split by broad rating category or by investment- versus non-investment-grade ratings, with the sole exception of the minus-category of rating class B. <cit.> (page 574) infer that “[<cit.>'s] original findings appear to be driven by the subsample of firms with extremely low ratings.” <cit.> conclude “that the CR-CS model is not a good descriptor of how firms determine their marginal financing decision” (page 594) and hypothesize that the “marginal financing behavior [of B- rated firms] to avoid debt may be more an indication of lack of access to the debt market than an indication of a conscious attempt to decrease debt financing.” In summary, while existing studies have estimated the average effect of credit ratings on leverage, the average effect may mask significant heterogeneity. Indeed, <cit.> has already provided preliminary evidence that such heterogeneity exists. In the present paper, we aim to go one step further and determine the presence, pattern and extent of this effect heterogeneity. To do so, we employ double machine learning, a modern machine learning approach. The next section will provide an introduction to double machine learning. § DOUBLE MACHINE LEARNING We have seen from the previous sections that there is no general consensus regarding the determinants of leverage and how they interact at the company level. Nevertheless, it is likely that many factors play a role and the mechanisms by which they influence capital structure are complex. Given the lack of a strong theoretical framework, isolating the causal effect of credit ratings poses a formidable challenge. Additionally, we need to consider that this effect may be heterogeneous. Double machine learning <cit.> is a recently developed methodology that can help solve questions of causal inference in such settings by harnessing what <cit.> calls “the unreasonable effectiveness of data.” Among the key advantages of double machine learning are the following characteristics. 
First, there is the ability to handle high feature dimensionality, i.e., the presence of many potential influencing factors in addition to the treatment variable of interest, and to provide valid inference on treatment effects in such high-dimensional, complex data environments. Second, it employs a data-driven approach to select among these influencing factors. Third, it facilitates the use of various machine learning algorithms with flexible function-fitting capabilities. Fourth, there is double-robustness with respect to nuisance functions. “Partialling-out”, “Neyman orthogonality” and “cross-fitting” are three important concepts enabling the “doubly robust” nature of the double machine learning approach. We will briefly discuss each of these concepts in this section and refer readers to the appendix of this paper and the literature referenced in this section for further details. §.§ Partialling-out Double machine learning builds on the concept of Frisch-Waugh-Lovell (FWL) “partialling out” <cit.>. According to the FWL theorem, a parameter of interest θ in a linear model such as: Y=θ D+β X +ϵ with 𝔼(ϵ | D,X) = 0 can be estimated with linear regression using, for example, ordinary least squares taking either of two approaches. Under the first approach, θ can be directly estimated by regressing Y on D and X. Under the second approach, θ is determined in the last step of a three-step procedure: first, Y is regressed on X, and the corresponding residuals ϵ_Y are determined. Second, D is regressed on X and again, the corresponding residuals ϵ_D are determined. Third, the residuals ϵ_Y from the first step are regressed on the residuals ϵ_D from the second step. The regression coefficient obtained from this third step corresponds to θ, the parameter of interest. This latter approach is employed for double machine learning with machine learning algorithms and even ensemble methods combining different machine learning methods being used for the first and second step. We underline that machine learning methods cannot be used to “directly” estimate equation <ref> as per the first approach described above. Such a “naive approach” <cit.> (page 36) entails a high risk of yielding a severely biased estimator for the treatment parameter <cit.>, hence leading to invalid inference. §.§ Neyman orthogonality Following the general outline of <cit.>, we illustrate the approach using a “partially linear model” <cit.>, which we will also employ in our empirical analysis in section <ref>. The usual form of a partially linear regression model is: Y=θ_0D+g_0(X) +ζ with 𝔼(ζ | D,X) = 0 and D=m_0(X) +𝒱 with 𝔼(𝒱 | X) = 0, where Y is the outcome variable, D is the treatment (policy) variable of interest, and X is a (potentially high-dimensional) vector of confounding covariates. ζ and 𝒱 are error terms. The regression coefficient θ_0 is the parameter of interest. We can interpret θ_0 as a causal parameter, i.e. the causal effect of treatment D on outcome Y, provided that D is “as good as randomly assigned” <cit.> (page 73), conditional on the covariates X, thus rendering D exogenous conditionally on X. Applying the partialling-out procedure to equations <ref> and <ref> removes the confounding effect of X. Afterwards, by regressing the residuals on each other, the regularization bias introduced by machine learning methods with a penalty or regularization mechanism has no first-order effect on the target parameter <cit.>. 
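To make the partialling-out logic concrete, the following Python sketch (ours, not the authors') simulates a partially linear model with a nonlinear confounding structure and compares a naive plug-in estimate with the residual-on-residual (FWL-style) estimate, using random forests for the nuisance functions and out-of-fold predictions for cross-fitting. The data-generating process and all parameter values are purely illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n, p, theta = 4000, 20, 0.5                      # true treatment effect: 0.5

X = rng.normal(size=(n, p))
m0 = np.cos(X[:, 0]) + 0.5 * X[:, 1] ** 2        # E[D|X], deliberately nonlinear
g0 = np.sin(X[:, 0]) + X[:, 1] * X[:, 2]         # nonlinear effect of X on Y
D = m0 + rng.normal(size=n)                      # treatment depends on X (confounding)
Y = theta * D + g0 + rng.normal(size=n)          # partially linear outcome equation

# Out-of-fold (cross-fitted) nuisance predictions with random forests
ml_l = RandomForestRegressor(n_estimators=200, min_samples_leaf=5, random_state=1)
ml_m = RandomForestRegressor(n_estimators=200, min_samples_leaf=5, random_state=2)
l_hat = cross_val_predict(ml_l, X, Y, cv=5)      # estimate of E[Y|X]
m_hat = cross_val_predict(ml_m, X, D, cv=5)      # estimate of E[D|X]

# Naive plug-in: regress the outcome residual on D itself (D is not partialled out)
theta_naive = np.sum(D * (Y - l_hat)) / np.sum(D * D)

# Orthogonalized (FWL) estimate: residual-on-residual regression
u, v = Y - l_hat, D - m_hat
theta_dml = np.sum(v * u) / np.sum(v * v)
se = np.sqrt(np.mean(((u - v * theta_dml) * v) ** 2)) / (np.sqrt(n) * np.mean(v ** 2))

print(f"naive plug-in estimate:   {theta_naive:.3f}")
print(f"partialled-out estimate:  {theta_dml:.3f} (SE {se:.3f}), true value {theta}")
```

In this simulation the naive estimate is visibly biased towards zero because the regression of Y on X already absorbs the part of the treatment effect that is predictable from X, whereas the orthogonalized, cross-fitted estimate recovers the true coefficient up to sampling noise.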
Technically, a method-of-moment estimator for the parameter of interest θ_0 is employed: 𝔼[ψ(W;θ_0, η_0)] = 0 where ψ represents the score function, W = (Y,D,X) is the set (data triplet) of outcome, treatment, and confounding variables, θ_0 is the parameter of interest as already indicated above, and η_0 are nuisance functions (for instance, g_0 and m_0, which we will employ later in our empirical application). For the double machine learning inference procedure, the score function ψ(W;θ_0, η_0) from equation <ref> (with θ_0 as the unique solution) needs to satisfy the Neyman orthogonality <cit.> condition: ∂_η𝔼[ψ(W;θ_0, η)]|_η=η_0 = 0, where the derivative ∂_η denotes the pathwise Gateaux derivative operator. Intuitively, Neyman orthogonality in equation <ref> ensures that the moment condition ψ(W;θ_0, η_0) from equation <ref> is insensitive to small errors in the estimation of the nuisance function η (around its “true” full population value η_0). Thus, it removes the bias arising from using a machine learning-based estimator for η_0. §.§ Cross-fitting A second point to consider is that machine learning methods usually rely on sample splitting to avoid bias introduced by overfitting <cit.>. Under double machine learning, a similar data splitting methodology applies in the case of a partially linear model with two nuisance functions as described in the next section with equations <ref> and <ref>. Only one subset of the data is used to estimate the nuisance functions, which are partialled-out, while the other subset is used to estimate the parameter of interest (i.e., the treatment effect). Of course, such a limited use of the data implies a loss of efficiency. To overcome this efficiency loss from data splitting, double machine learning employs a technique called “cross-fitting” <cit.> (page C6). In this procedure, the roles of the two data subsets are swapped, and two estimates for the parameter of interest are obtained. Because these two estimators are approximately independent, they can simply be averaged to make use of the full data set <cit.> (Figure 2, page C7). The cross-fitting procedure can be expanded beyond two data sets into a K-fold version to further increase robustness; <cit.> (page 13) reports that four to five folds appear to work well in practice. §.§ Double robustness “Double machine learning” is so named because it applies machine learning methods to estimate both equation <ref> and equation <ref>. However, the estimated treatment effect is also “doubly robust” thanks to the partialling-out procedure described previously. This robustness means that potential “mistakes in either of the two prediction problems” <cit.> (page 221) (i.e., equations <ref> or <ref>) do not invalidate the effect estimate as long as at least one equation is sufficiently well estimated. In other words, while it is necessary “to do a good job on at least one of these two prediction problems” <cit.> (page 221), it does not matter which one is more accurately modeled. Although this feature should not encourage lax model specifications, it underscores another attractive property of double machine learning, particularly when uncertainties remain about the precise model characteristics. As noted by <cit.> (page 34), “Because model selection mistakes seem inevitable in realistic settings, it is important to develop inference procedures that are robust to such mistakes.” Finally, double machine learning exhibits a general robustness irrespective of the particular machine learning (ML) algorithm employed. 
<cit.> comment regarding their empirical results that “the choice of the ML method used in estimating nuisance functions does not substantively change the conclusions“ (page C45). Of course, the machine learning methods need to be of sufficient quality for given task. Considering the broad spectrum of available machine learning models, however, this typically does not present a major hurdle, and even ensemble models are suitable <cit.> (pages C22-C23). § EMPIRICAL METHODOLOGY AND FINDINGS In this section, we employ the double machine learning framework to estimate the causal effect of ratings on the leverage ratio. We first describe our study design and the data we employ. Subsequently, we report and discuss the results, including those of several robustness checks. Our main finding is the presence of heterogeneous effects of ratings on leverage. §.§ Empirical Design To estimate the effect of ratings on the leverage ratio, we employ a partially linear regression model (see for instance <cit.>) of the following form: LDA_i,t=θ' D_i,t+g_0(X_i,t) +ζ_i,t with 𝔼(ζ_i,t | D_i,t,X_i,t) = 0 and D_i,t=m_0(X_i,t) +𝒱_i,t, with 𝔼(𝒱_i,t | X_i,t) = 0 where LDA_i,t is the outcome variable representing the leverage ratio for company i in year t, defined as the book value of total debt (short-term and long-term) divided by the book value of total assets, X_i,t corresponds to a vector of covariates for company i at time t, which are assumed to be statistically associated with the outcome and treatment variables, and ζ and 𝒱 are stochastic error terms. D represents the “treatment”, i.e., the policy variable of interest in our study of rating effects. Specifically, D_i,t represents the vector of p binary treatment variables for company i in year t. To allow for heterogeneity, D can contain the rating variable and, for example, interactions with other potentially relevant variables or a refined rating category coded as dummy variables (we will use such a strategy later on). In the initial setting, we consider only one treatment variable: the presence (D=1) or absence (D=0) of a rating for a given company. Subsequently, we will expand the scope of analysis to conduct simultaneous inference on different treatment variables, for example, by considering each distinct rating category (such as AAA, AA+, A, AA-) as a separate treatment. Equation <ref> contains the parameter vector of interest, θ, which corresponds to the p causal effect measure(s) of the p treatment(s) on the outcome variable - i.e., in our case, the effect of rating on leverage. This causal interpretation is valid if the treatment D is “as good as randomly assigned” (<cit.> page 73) conditional on the covariates X, making D exogenous conditionally on X. In other words, the initially non-random treatment assignment can be ignored if controlling for the correct set of X <cit.>, because the selection bias towards different treatment types “disappears” (<cit.>, page 54) in this case. Thus, the selection of the covariates X and the modeling of their relationship with the outcome and treatment variables are critical for the validity of the analysis. This is precisely where the machine learning approach proves invaluable, because it is able to perform data-driven selection from a large number of candidate covariates and can flexibly model the form of their influence on the outcome variable <cit.>. g_0 and m_0 are two vector-valued functions that capture the relationship of the covariates X with the outcome LDA and the treatment D, respectively. 
These two “learner” functions do not need to be linear; in fact, we will use random forests <cit.> for our analyses because of their general strength in capturing non-linear, complex interactions and relationships, even in high dimensions and with large datasets <cit.>. The appropriateness of random forests specifically for empirical analyses of capital structures and ratings has been further confirmed by recent publications, which found random forests to perform better than other machine learning methods (see for instance <cit.> and <cit.> for leverage and <cit.> for company credit ratings). While the learner functions g_0 and m_0 do not need to be linear and can be flexibly adapted by the machine learning algorithm, it is important to remind ourselves that our specification in equation <ref> corresponds to a linear effect of D on the outcome. We refer interested readers to the literature on non-linear response models (e.g., <cit.>). §.§ Data We use Compustat data for North American companies for the years 2005 to 2015.[Financial data were extracted from “Compustat Daily Updates - Fundamentals Annual” and rating data from “Compustat Daily Updates - Ratings” on September 13, 2022, via Wharton Research Data Services (WRDS). ] Similar to the extant literature,[See for instance: <cit.> (page 51, page 54), <cit.> (page 1329), <cit.> (page 1047), <cit.> (page 583), <cit.> (Table 1, page 8). We do not exclude utilities (unlike <cit.>, page 1329) or winsorize data (unlike <cit.>, Table 1, page 8).] we exclude companies from the financial and public sectors (SIC codes 6xxx and 9xxx), observations with negative shareholder equity or negative total debt, and observations involving sales or assets smaller than one million US dollars. For unreported balance sheet, income, and cash flow statement items, missing values are replaced by zero, while non-financial metrics, such as CEO/CFO SOX certification codes or the company's auditor, are explicitly coded as “missing”. Following these criteria, we arrive at a sample of 57'832 company-year observations. A common characteristic of the publications examining the impact of credit ratings on leverage described in our literature review in section <ref> is their reliance on an a priori selection of variables deemed related to the leverage ratio or credit rating. This selection, such as in <cit.> (page 57) or <cit.> (page 1056), is based on capital structure theories or the results of previous research. We by no means wish to criticize this approach of relying on previous research findings, which we follow ourselves (as evidenced by our own use of random forests as learner functions in equations <ref> and <ref>). What we intend to highlight, however, is that our machine-learning approach does not require rigid a priori decisions about the inclusion or exclusion of specific variables for predicting leverage and the presence of a rating. Moreover, the ability of random forests to automatically reflect complex interactions and nonlinearities implies that we only need a very basic level of researcher-driven “feature engineering”. In fact, in our model, we only include three transformations. First, we scale balance sheet, income, and cash flow statement items by sales and total assets (e.g., PP&E as a percentage of sales and of total assets). Second, as a measure of company size, we add the logarithm of sales and the logarithm of total assets to the variable set. 
Third, we transform certain data items into dummy variables, such as creating dummies for three-digit SIC codes and for the adoption of certain accounting changes.[This corresponds to the field “ACCTCHG” in Compustat. For instance, the adoption of the FASB accounting standard SFAS 157 effective during 2007 is coded as “FS157”. SFAS 157 concerns measurement and disclosure principles of “fair value” in generally accepted accounting principles, mainly in illiquid markets. See <https://www.fasb.org/page/PageContent?pageId=/reference-library/superseded-standards/summary-of-statement-no-157.html bcpath=tff> (accessed January 20, 2023).] Given our deliberate use of random forests, a highly flexible method capable of learning complex interactions, we must prevent the algorithm from “back-calculating” total debt or equity, which are key determinants of the leverage ratio. We achieve this by removing all data items from the liabilities side of the balance sheet and excluding debt-related items from the income and cash flow statements, such as data items related to interest expenses. This clearly distinguishes the capital structure decision (liability side) from the investment decision (asset side) in alignment with the two fundamental managerial decisions discussed in section <ref>. Employing this strategy, we compile a total of 1'840 features that comprise our set X of covariates. We provide a full list of the covariates in the appendix. As already mentioned in subsection <ref>, we define the outcome variable, the leverage ratio LDA, as the ratio of total debt to total assets for company i in year t: LDA_i,t=(Long-term Debt_i,t + Short-term Debt_i,t) / Total Assets_i,t We measure LDA (shorthand for “Leverage Debt to Assets”) in terms of book values, because these reflect the actions of company managers more directly than do market values <cit.>. However, we also verify the main results of our paper using a market value[Because the market value of debt is approximated by its book value in this definition, this corresponds to a “quasi-market value measure” <cit.> (page 316), a concept employed by most extant research (see e.g., <cit.>).] definition of leverage (LDMA), defined in line with previous research (e.g., <cit.>) as: LDMA_i,t=(Long-term Debt_i,t + Short-term Debt_i,t) / (Total Assets_i,t - Book Value of Equity_i,t + Market Value of Equity_i,t) Finally, for the treatment variable D, we use “Standard & Poor's Long-Term Domestic Issuer Credit Rating” from Compustat[See footnote <ref> for the date of data extraction.] to determine whether a rating is present for a given company-year and, if so, which rating it is. We acknowledge that a company may not have an S&P rating but could instead hold ratings from other agencies such as Moody's Investor Services or Fitch Ratings. However, considering S&P's dominant market share of approximately 50% (see section <ref>) and the fact that the majority of companies in the US have ratings from at least two leading rating agencies <cit.>, we see this as a minor concern for our analyses. Another concern when measuring the impact of individual rating categories is that companies may have what is called a “split-rating” <cit.>.[For instance, the pharmaceutical company Novartis reports a split rating in their presentation dated January 19, 2023, with AA- from S&P one notch higher than A1 from Moody's <cit.> (page 19).] A split-rating describes a situation in which a company is rated differently by two different agencies. 
While this affects up to 50% of rated companies, differences are typically at the "notch level", i.e. concerning the most granular plus and minus sub-levels within a given broader rating category <cit.>. We address this issue in our analyses by examining the causal impact of specific ratings at different levels of granularity. The corresponding results, which we report in the appendix, support the findings presented in the main paper. Table <ref> provides an overview of the outcome variable, leverage (LDA), with a split by broad rating category. We observe several general facts from table <ref>, which are broadly in line with the extant literature <cit.>. First, median and mean leverage are significantly higher for observations that have any kind of rating compared to observations that have no rating. Leverage increases as the rating category decreases, except for the particular rating classes "SD" (which signifies that selective default on a particular debt instrument has occurred, but it is believed that the company will honor its other obligations) and "D" (default).[Most authors do not include the rating categories "SD" and "D". Also, the absence of rating class "C" in our sample is consistent with its rarity in other empirical analyses; for instance, <cit.> (Table 1, page 1966) reports only three instances of "C"-ratings out of a total of almost 30'000 ratings for their 1985-2009 sample.] Overall, roughly a quarter (26%) of observations have a rating, out of which slightly more than half (14% of 26%) are investment-grade ratings (better than BB).

Summary statistics for LDA (in %) by rating category

Rating category | 1st quart. | Median | Mean | 3rd quart. | Observations | % of total
AAA | 5.0 | 12.1 | 17.8 | 20.0 | 86 | 0.1
AA | 11.9 | 20.4 | 20.5 | 26.7 | 391 | 0.7
A | 18.4 | 26.4 | 26.9 | 34.4 | 2'458 | 4.3
BBB | 20.4 | 28.9 | 28.9 | 36.7 | 5'174 | 8.9
BB | 22.8 | 33.3 | 34.0 | 44.7 | 3'732 | 6.5
B | 33.1 | 45.0 | 44.9 | 56.9 | 2'981 | 5.2
CCC | 35.5 | 46.0 | 45.5 | 60.4 | 148 | 0.3
CC | 30.4 | 51.9 | 48.6 | 67.2 | 15 | 0.0
SD | 26.0 | 35.5 | 32.1 | 41.6 | 4 | 0.0
D | 16.7 | 31.1 | 28.8 | 41.4 | 41 | 0.1
Total ratings | 21.7 | 31.6 | 32.9 | 42.5 | 15'030 | 26.0
No rating | 0.1 | 11.1 | 17.1 | 28.2 | 42'802 | 74.0
Grand total | 1.8 | 18.5 | 21.2 | 34.1 | 57'832 | 100.0

Summary statistics for the outcome variable, leverage (LDA), by rating category. LDA values (total debt divided by total assets) are displayed as %. "1st quart." and "3rd quart." correspond to the 25th- and 75th-percentile, respectively. "Observations" refers to the number of company-years over the 2005-2015 timespan. "% of total" represents the share of observations of a particular rating class relative to all company-year observations. Values in this column are displayed as %. The (single) C-rating category is absent because no firm-year had such a rating over the sample period.

§.§ Results

In this section, we describe the key results from our analyses of the causal effect of credit ratings on leverage. We begin with the most fundamental question: Does having a credit rating affect the leverage ratio? Subsequently, we examine the individual effects of the 22 most granular rating categories. For the sake of conciseness, we delegate the more gradual exploration of our research question and its results to the appendix, where we first assess the difference in effect between investment-grade and speculative-grade ratings, followed by a delineation among the 10 broad rating categories.
Additionally, we support our findings with robustness tests, including a second metric for leverage, different model specifications within the machine learning framework, and a different sample period.

§.§.§ Effect of having any rating versus having no rating

For our first analysis, we estimate the causal effect on the leverage ratio of having any rating, regardless of the specific rating category, versus not having a rating. As mentioned previously, we use random forests out of the machine learning toolbox as learners for the two functions g_0 and m_0, which capture the relationship of the covariates X with the outcome, LDA, and the treatment, D, respectively. The literature sometimes refers to g_0 and m_0 as "nuisance functions" and their parameters as "nuisance parameters" <cit.> because their estimation is not the primary aim of the causal analysis (which is the estimation of the causal parameter θ). We therefore limit the scope of their discussion to a brief description here and refer to the appendix for technical details. For g_0, we specify the random forest so that it has 500 trees, each with a maximum depth of seven levels, to predict LDA. This achieves an out-of-sample R^2 of approximately 53%. This level of accuracy is in line with existing literature (see for instance <cit.>, Table 2, page 11 or <cit.>, Table 2, page 23). For m_0, we also specify the random forest so that it has 500 trees, but with a slightly lower maximum depth of five levels each. Out-of-sample, we achieve a correct classification rate of approximately 87%, again in line with the literature (see for instance <cit.>, Table III, Panel A, page 1972). With the learners g_0 and m_0 defined, we can apply the double machine learning framework described in section <ref> to determine the parameter of interest θ. We follow the practical recommendation from <cit.> (page 13) and use a five-fold split as well as two repetitions to arrive at aggregated parameter estimates and standard errors. Table <ref> summarizes the results. For an immediate robustness check, we also include here the results based on the alternative (quasi-market value) leverage definition (LDMA) from <ref>; however, as motivated previously, the focus of our paper remains leverage as measured by book values (LDA).

Rating effect on leverage | LDA | LDMA
θ (rating yes/no) | 0.0878 | 0.0655
Std. error | 0.0021 | 0.0020
t-value | 41.8 | 32.9
p-value | 0.000 | 0.000
Memo: mean LDA/LDMA | 0.212 | 0.202
Rating effect (θ) vs. mean | 41% | 32%

Results for the estimated causal effect θ on LDA (book value leverage) and LDMA (quasi-market value leverage) of having or not having a rating. Parameter estimates and standard errors are aggregated over a five-fold split with two repetitions. Subsections <ref> and <ref> describe the analytical approach and the data sample.

Our estimate of the general rating effect summarized in table <ref> is both statistically[A discussion of the relevance and validity of significance levels, including the controversy of the "5% p-value" and the general topic of the "replication crisis", goes beyond the scope of this paper. We refer interested readers to sources such as <cit.>. However, we underline the general relevance of this topic by highlighting that <cit.> (a paper we discussed in section <ref>) mention that they are unable to replicate the results from <cit.>. See <cit.> (footnote 13, page 584).] and economically significant. On average, having a rating increases LDA by roughly 9 percentage points (pps).
Compared to the sample average leverage ratio of approximately 21%, this represents an increase of 41%. Using LDMA as the outcome variable to check the robustness of the results, the effect estimate is roughly 6.5pps (32% increase versus mean LDMA) and also highly significant. Thus, our results at this very general yes/no rating level corroborate the finding in <cit.> that firms with a credit rating have more debt. Moreover, the order of magnitude is very comparable: <cit.> conduct their analysis for market leverage (LDMA) and, in fact, our own LDMA effect estimate of 32% versus the mean is very close to the 35% they report (page 1). It is important to put our rating effect estimate of 9pps into perspective with purely descriptive statistics. Table <ref> shows that, before adjusting for any confounding company characteristics via the double machine learning approach, the average leverage ratio for companies with a rating is nearly 16pps (32.9%-17.1%) higher than that for companies without a rating. Thus, this figure would overstate the rating effect by 7pps, i.e., by close to 80% above the value we estimated. For a second robustness check, we employed different learner specifications for g_0 and m_0. The effect estimates across the different model alternatives are very consistent, as can be seen from table <ref>. A detailed description of the alternative learner specifications and a discussion of the results are provided in the appendix. Here, we simply remind readers of the "double robustness" discussed in subsection <ref>: as long as one of the two nuisance functions is accurately specified within the double machine learning framework, the overall outcome for the causal parameter of interest is correct <cit.>.

Robustness check: alternative model specifications

Rating effect on LDA | MM (RF/RF) | AM1 (DML2) | AM2 (LASSO/RF) | AM3 (Ridge/RF) | AM4 (Restr.)
θ (rating yes/no) | 0.0878 | 0.0878 | 0.0925 | 0.0935 | 0.0942
Std. error | 0.0021 | 0.0021 | 0.0023 | 0.0369 | 0.0021
t-value | 41.8 | 41.8 | 40.0 | 2.5 | 45.2
p-value | 0.000 | 0.000 | 0.000 | 0.011 | 0.000
Effect (θ) vs. mean | 41% | 41% | 44% | 44% | 44%

Results for the estimated causal effect θ on LDA of having or not having a rating, according to alternative model (AM) specifications. "MM" refers to the main model specification used throughout the paper. Please refer to the appendix for details of the specifications for the different AMs.

§.§.§ Effect of rating by individual, granular rating category

In the above analysis of having a rating versus having no rating, we implicitly assume that it does not matter which rating a company has: all rating types represent the same "treatment" for leverage; because ratings are opinions about credit risk, this implies that the type of opinion does not matter. However, it is easy to argue that different ratings, i.e. different opinions, may in reality represent different treatments, and thus, different versions of the treatment exist. Put differently, our initial analysis may suffer from incorrectly assuming that there are "no hidden variations of treatments" <cit.> (pages 10-13). This is one of the assumptions included in the "stable unit treatment value assumption" (SUTVA) <cit.>, which provides a fundamental framework for causal analysis (<cit.> or <cit.>). We therefore sequentially investigate different levels of rating granularity; at each level, the single yes/no treatment is simply replaced by a set of rating-category dummies (see the brief coding sketch below).
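As a purely illustrative sketch of this dummy coding (the column name sp_rating and the example values are hypothetical placeholders, not our actual data fields), the granular treatment vector can be built as follows; unrated company-years form the common baseline:

```python
import pandas as pd

# Hypothetical firm-year data: the S&P issuer rating, missing when a firm is unrated.
df = pd.DataFrame({"sp_rating": ["AA-", None, "BBB+", "B", None]})

# One dummy per granular rating category; rows of all zeros (the unrated
# firm-years) constitute the "no rating" baseline against which each
# treatment effect is estimated.
treatments = pd.get_dummies(df["sp_rating"], prefix="rating")
print(treatments.columns.tolist())  # ['rating_AA-', 'rating_B', 'rating_BBB+']
```

The same construction, applied at the level of broad categories or of the investment-/speculative-grade split, yields the coarser treatment definitions analyzed in the appendix.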
The first two of these analyses are reported in the appendix: in the first, we examine whether the rating effect differs between the two very broad categories of “investment-grade” and “speculative-grade” (non-investment grade, “junk bond”). In the second analysis, we look at whether the rating effect differs by broad rating categories as defined by one to three letters (such as AAA, AA, A, BBB). In the main body of this paper, we report the effect estimates for the most granular rating categories, i.e., those that include plus/minus notch qualifications (such as A+, A, and A-) within the broad rating categories. To avoid confusion, we label the granular sub-category ratings without a plus or minus sign as “straight" (e.g. “AAstraight”) and the broad categories as “broad” (e.g. “AAbroad”). Taking AA as an example, “AA+”, “AAstraight” and “AA-” ratings exist within the broad category of AAbroad. At this level of detail, we simultaneously test 22 granular rating categories. We note that moving from a single binary treatment variable to two or more treatment variables requires some technical adaptations to ensure valid statistical inference. The “multiplicity problem” is especially relevant in our case: the possibility of falsely identifying an effect as “significant” increases with the number of treatments tested. We therefore report multiplier bootstrap (MB) standard errors and p-values, as well as, for comparison, the corresponding Romano-Wolf (RoWo) and Bonferroni (Bonf) p-values to account for simultaneous inference on multiple parameters. We discuss these different methods briefly in the appendix. Table <ref> summarizes the effect estimates for the 22 granular rating categories. For ease of direct comparison and to assess the robustness of our results, we also include the effect estimates from a less granular analysis based on the 10 “broad” rating categories as “memo” in this table. As a reminder, these broad rating categories are defined by one to three letters (such as AAA, AA, A, BBB) without considering the plus and minus qualifications. Details from this broad analysis are included in the appendix (see table <ref>). We first compare results for the four rating categories without notch qualifications. For AAA, the effect estimates from the granular versus the broad analysis differ only at the fourth decimal place. The p-values are very similar, too, albeit slightly higher for AAAstraight in the granular analysis versus AAAbroad in the broad analysis. Intuitively, this should be expected because we are simultaneously testing 22 treatment variables in the granular versus only 10 in the broad analysis; thus, the risk of falsely identifying a treatment effect as “non-zero” increases. The same characteristic generally holds for the p-values of the other three rating categories without notch qualifications (only the Romano-Wolf p-value for Dbroad is slightly higher than for Dstraight). For CC, the effect estimates differ by approximately 1pp. We consider this small difference to be reassuring. We again find very consistent p-values, supporting in both cases the conclusion of a non-zero treatment effect. For SD, the effect estimates are also very consistent. They differ by 1pp and are both close to zero, with all p-values consistently indicating that the null hypothesis should not be rejected. 
Finally, the effect estimates for D differ by 1.5pps; however, both estimates are again very close to zero and the p-values consistently indicate that the null hypothesis of no treatment effect should not be rejected. In summary, we interpret the very consistent results for these four rating categories without notch qualifications as evidence supporting the robustness of our approach. Next, we turn to the other six categories with notch qualifications. The pattern of the estimated rating effect within AA follows the rating scale, with AA+ displaying the largest (negative) effect. The coefficient estimate is also consistently negative for all three granular AA-ratings. For AA+ and AAstraight, the p-values are highly significant. However, AA- has the smallest (negative) coefficient estimate and high p-values, indicating that the null hypothesis of no rating effect should not be directly rejected (MB is still slightly below 0.05, while RoWo at 0.24 and Bonferroni at 0.91 are clearly above 0.05). This situation is consistent when considering the next rating category, A+. A+ has a smaller (albeit still negative) effect estimate coupled with higher p-values compared to AA-. The “disappearing” of a clear rating effect as one moves from AAstraight to the subsequent rating categories AA- and A+ thus appears to be gradual. Considering the p-values for A+, the null hypothesis of no effect should definitely not be rejected for A+. Similar to the transition from AA- to A+, the results for A- in conjunction with those for BBB+ are consistent with the view that there is a gradual, smooth change in effect across these granular rating categories. BBB+ also has an effect estimate of approximately -1pp with p-values that are very similar to those for A-. Within the broad BBB rating category, we observe the same “concave” treatment heterogeneity as in the broad A category: the BBB+ and BBB- coefficient estimates are negative, while the one for BBBbroad in the middle of the category is positive, or more precisely, very close to zero with p-values that suggest non-rejection of the null hypothesis. A possible ad-hoc interpretation for this phenomenon is that BBB+ companies may try to achieve at least A- status to benefit from the “better letter”.[For instance, the European Banking Authority (EBA) maintains mapping tables that match ratings to certain rules and requirements, such as regulatory capital rules. In this context, AAA and AA are within the same “credit quality steps” (1), whereas A (2), BBB (3), BB (4), and B (5) are each in different step categories. From CCC downward, no distinction applies, and all categories are summarized in credit quality step 6. Thus, A and BBB ratings have different implications in this context, even though both are investment-grade ratings <cit.>.] On the other side of the spectrum, companies rated BBB- and thus at risk of a downgrade from BBB to BB may preemptively take actions that also lead to lower leverage. The distinction between BBB and BB is particularly important because this represents the dividing line between investment- and speculative-grade ratings with the corresponding economic implications (see section <ref>). Nevertheless, we strongly caution against overinterpreting these findings and qualify our ad-hoc interpretation as speculative. While it is true that both the A and BBB categories display concavity, their neighboring categories AA, BB and B do not. 
This is in line with the findings from <cit.> discussed in the literature review in section <ref>, which show that the general effects attributed to plus/minus ratings stem from specific sub-samples of low-rated firms. In particular, <cit.> find these effects only in the B category - a category in which we do not find these effects. Thus, the concept of a general plus/minus rating effect remains doubtful. Within the BB and B rating categories, the pattern of the estimated effects follows the rating scale, with BB+ displaying the smallest (positive) effect and B- the largest. The coefficient estimates are consistently positive for all granular ratings within these two categories, and the p-values are generally highly significant; even for BB+, the multiplier bootstrap p-value is below 0.01. Here, we highlight two points. First, the granular analysis refines our understanding of the boundary regarding rating impact. While the analysis based on 10 broad rating categories (reported in the appendix) identifies the first inflection point of the treatment effect between BBBbroad and BBbroad, marking the transition from investment- to speculative-grade ratings, the granular analysis within the BBbroad-category reveals a more gradual rise: starting at approximately 1% for BB+, moving to 2% for BBstraight, and peaking at 6% for BB-. Second, and in contrast to what we have just described for the BB rating category, the effect estimates within the B rating category are all consistently above 10%. In particular, the increase from BB- to B+ is immediate and not gradual. Taking these two observations together, we hypothesize that companies that are still very close to an investment-grade rating have lower leverage, potentially in anticipation of regaining investment-grade status. Those that are much farther away and thus probably not anticipating an upgrade have markedly higher leverage. The fact that the differences in rating effects between the categories B and CCC are small provides additional support for this hypothesis (approximately 13% for both Bbroad and CCCbroad). Finally, in the CCC category, the rating effects are similar for all notch categories, leading to a highly significant overall CCCbroad effect of close to 13%. Of note, the sample size for company-years in this category is limited with only 148 observations, of which the majority (107) are concentrated in CCC+.

Granular rating categories

Rating effect (on LDA) | Coef. estim. | MB Std. error | MB p-val. | RoWo p-val. | Bonf p-val. | Observations | % of total
θ^AAA straight | -0.0588 | 0.0191 | 0.002 | 0.020 | 0.046 | 86 | 0.1
memo: θ^AAA broad | -0.0582 | 0.0189 | 0.002 | 0.015 | 0.021 | 86 | 0.1
θ^AA+ | -0.0683 | 0.0183 | 0.000 | 0.003 | 0.004 | 26 | 0.0
θ^AA straight | -0.0547 | 0.0098 | 0.000 | 0.000 | 0.000 | 151 | 0.3
θ^AA- | -0.0169 | 0.0083 | 0.041 | 0.243 | 0.911 | 214 | 0.4
memo: θ^AA broad | -0.0385 | 0.0068 | 0.000 | 0.000 | 0.000 | 391 | 0.7
θ^A+ | -0.0060 | 0.0051 | 0.242 | 0.663 | 1.000 | 433 | 0.7
θ^A straight | 0.0086 | 0.0041 | 0.035 | 0.244 | 0.761 | 906 | 1.6
θ^A- | -0.0115 | 0.0033 | 0.000 | 0.007 | 0.009 | 1'119 | 1.9
memo: θ^A broad | 0.0001 | 0.0027 | 0.956 | 0.950 | 1.000 | 2'458 | 4.3
θ^BBB+ | -0.0106 | 0.0028 | 0.000 | 0.002 | 0.003 | 1'589 | 2.7
θ^BBB straight | 0.0019 | 0.0026 | 0.463 | 0.759 | 1.000 | 2'105 | 3.6
θ^BBB- | -0.0105 | 0.0031 | 0.001 | 0.010 | 0.016 | 1'480 | 2.6
memo: θ^BBB broad | -0.0009 | 0.0021 | 0.677 | 0.942 | 1.000 | 5'174 | 8.9
θ^BB+ | 0.0110 | 0.0042 | 0.009 | 0.063 | 0.191 | 946 | 1.6
θ^BB straight | 0.0235 | 0.0036 | 0.000 | 0.000 | 0.000 | 1'248 | 2.2
θ^BB- | 0.0568 | 0.0036 | 0.000 | 0.000 | 0.000 | 1'538 | 2.7
memo: θ^BB broad | 0.0512 | 0.0024 | 0.000 | 0.000 | 0.000 | 3'732 | 6.5
θ^B+ | 0.1010 | 0.0041 | 0.000 | 0.000 | 0.000 | 1'413 | 2.5
θ^B straight | 0.1069 | 0.0048 | 0.000 | 0.000 | 0.000 | 1'124 | 1.9
θ^B- | 0.1128 | 0.0079 | 0.000 | 0.000 | 0.000 | 444 | 0.8
memo: θ^B broad | 0.1301 | 0.0031 | 0.000 | 0.000 | 0.000 | 2'981 | 5.2
θ^CCC+ | 0.1034 | 0.0160 | 0.000 | 0.000 | 0.000 | 107 | 0.2
θ^CCC straight | 0.1497 | 0.0345 | 0.000 | 0.000 | 0.000 | 32 | 0.1
θ^CCC- | 0.1108 | 0.0595 | 0.062 | 0.269 | 1.000 | 9 | 0.0
memo: θ^CCC broad | 0.1284 | 0.0144 | 0.000 | 0.000 | 0.000 | 148 | 0.3
θ^CC straight | 0.1367 | 0.0437 | 0.002 | 0.020 | 0.040 | 15 | 0.0
memo: θ^CC broad | 0.1471 | 0.0044 | 0.001 | 0.004 | 0.008 | 15 | 0.0
θ^SD straight | 0.0482 | 0.0553 | 0.383 | 0.759 | 1.000 | 4 | 0.0
memo: θ^SD broad | 0.0597 | 0.0531 | 0.261 | 0.689 | 1.000 | 4 | 0.0
θ^D straight | 0.0291 | 0.0289 | 0.920 | 0.917 | 1.000 | 41 | 0.1
memo: θ^D broad | 0.0141 | 0.0294 | 0.632 | 0.942 | 1.000 | 41 | 0.1
Total ratings | - | - | - | - | - | 15'030 | 26.0
No rating | - | - | - | - | - | 42'802 | 74.0
Grand total | - | - | - | - | - | 57'832 | 100.0

Estimated causal effects on leverage by granular rating category (i.e., split by the plus and minus notch qualifications within a broad category) versus the baseline of having no rating. "Straight" indicates the categories in between the plus and minus notches; pro memoria ("memo:") and for ease of comparison, effect estimates from the broad rating analysis summarized in table <ref> in the appendix have been added in this table as "broad". The rating categories "AAA", "CC", "SD" and "D" do not have notch qualifications. The rating category C is absent because no firm-year had such a rating over the sample period. The empirical design (<ref>), data (<ref>), and random forest characteristics (<ref>) are described in the main text. Standard errors and corresponding p-values are corrected for simultaneous multiple inference: "MB" refers to the multiplier bootstrapping method, "RoWo" to the Romano-Wolf procedure, and "Bonf" to the Bonferroni-correction. "Observations" refers to the number of company-years from 2005 to 2015. "% of total" represents the share of observations of each rating category relative to all company-year observations. Values in this column are displayed as %.

Having presented the individual effect estimates for the granular ratings, we now conclude our analysis by taking a holistic view of the pattern of the effects across the full spectrum of 22 granular ratings. Figure <ref> provides this visual summary. Initially, for the highest rating categories, the effect estimates are negative, ranging from -5% to -7%. Values for the middle rating categories hover around what can be seen as neutral to small effects, from -2% to +2%.
Finally, from BB- onward, the effect estimates become significantly positive and reach double-digit values throughout the B+ to CC categories. CCC- stands out somewhat because its effect is smaller than those of CCCstraight and CCstraight and its MB p-value reaches 0.062. However, there are only nine company-year observations in this particular class. In summary, our analysis of granular rating categories yields three important insights. First, treatment effects are heterogeneous across the rating spectrum. Second, they follow a distinct pattern along the rating scale, with initially negative effects on leverage for the highest rating categories, no or very limited effects for the middle rating categories and large positive effects towards the lower end of the rating scale. For the two default categories at the very end of the rating scale, the effect vanishes. Third, the transition from no/very limited effects to clearly positive effects does not precisely coincide with the boundary between investment- and speculative-grade ratings, as the results from the broad analysis would suggest. Rather, it occurs gradually over the granular ratings within the two categories BBB and BB, which represent the boundary between investment- and speculative-grade ratings. Before closing the empirical section of our article, we report highly summarized results from two further robustness checks. The appendix contains full details for each of these analyses. First, we apply our analytical approach to a data sample from a different time frame. Second, we partially relax the exclusion of interest-related expense categories from the income statement. We had initially excluded these items to prevent the random forest learners from "back-calculating" the leverage ratio. Because interest coverage is believed to be a decisive factor for credit ratings (<cit.>, pages 645-650), we will include this metric as a covariate.

§.§.§ Rating effects in a different sample period

For this robustness check, we consider a second data sample from a different time period. Employing double machine learning with the same analytical methodology and data sources described in subsections <ref>, <ref>, and <ref>, we use data from the years 2000 to 2004 to arrive at a sample of 32'162 company-year observations. Table <ref> compares the results of our main analysis (as per table <ref>) in the left column with the results from the second sample period in the right column. The rating effect estimate amounts to 9.6pps, which is 0.8pps higher than the parameter estimate of 8.8pps from the main sample. Compared to the mean leverage of the sample, this corresponds to an impact of 43% versus 41% from the main sample. Again, the rating effect is highly significant, both statistically and economically. We interpret this result as further evidence in support of the presence of a rating effect. Similarly, the results from the second sample period support the results from the main analysis for the broadest down to the most granular rating categories. Full details are provided in the appendix. We restrict ourselves here in the main text to plotting the effect estimates from the main analysis next to those from the second data sample for the granular rating categories (figure <ref>). The similarity of the shapes from the main and the second data sample is compelling.

Rating effect on leverage (LDA) | 2005-2015 (n=57'832) | 2000-2004 (n=32'162)
θ (rating yes/no) | 0.0878 | 0.0962
Std. error | 0.0021 | 0.0029
t-value | 41.8 | 32.9
p-value | 0.000 | 0.000
Memo: mean leverage | 0.212 | 0.224
Rating effect (θ) vs. mean | 41% | 43%

Comparison of results for the estimated causal effect θ on leverage (LDA) of having or not having a rating for the main data sample from 2005 to 2015 with 57'832 company-year observations compared to a second, different data sample for the years 2000 to 2004 with 32'162 company-year observations. The methodology for the second data sample is the same as for the main one (as described in previous sections), including aggregation of parameter estimates and standard errors over a five-fold split with two repetitions.

§.§.§ Rating effects when including interest coverage as a covariate

As described in subsection <ref>, we excluded data items that would allow the random forest to back-calculate total debt or equity. However, we still want to verify that the rating effect estimates hold when including selected items that determine credit ratings (or are at least strongly believed to do so). <cit.> (pages 645-650) explain that "credit ratings are primarily related to two financial indicators" (page 647). One of these is size, which we have already included via items such as the logarithm of sales, the logarithm of assets or the number of employees in our set of covariates. The second is coverage, which measures "a company's ability to comply with its debt service obligations" (page 648). We therefore include interest coverage (IntCov) as defined in <cit.> (Exhibit 33.8, left panel, page 649): IntCov_i,t=EBITDA_i,t / Interest expenses_i,t where EBITDA represents earnings before interest, taxes, depreciation and amortization and interest expenses represents the expenses for servicing a company's total financial debt.[In Compustat, this is the item with code "xint" ("Interest and Related Expense - Total").] We use our empirical sample as described in subsection <ref>, remove company-years with interest expenses of less than USD ten thousand in a given year, and arrive at 48'585 company-year observations. We make no change to the double machine learning model described in subsections <ref> and <ref>. Table <ref> compares the results for the estimate of the general rating effect from our main analysis (as per table <ref>) with those from the approach in this subsection, which includes interest coverage ("IntCov") as a feature in the set of covariates. The effect estimate amounts to approximately 7pps including IntCov, or 29% versus the sample mean leverage of roughly 25%. This effect estimate is 1.5pps lower than the one from the main analysis, which translates into a drop of roughly 12pps in the relative effect magnitude versus the mean leverage (29% versus 41% in the main analysis). Nevertheless, the rating effect remains clearly present.

Rating effect on leverage (LDA) | Excl. IntCov (n=57'832) | Incl. IntCov (n=48'585)
θ (rating yes/no) | 0.0878 | 0.0731
Std. error | 0.0021 | 0.0021
t-value | 41.8 | 35.3
p-value | 0.000 | 0.000
Memo: mean leverage | 0.212 | 0.249
Rating effect (θ) vs. mean | 41% | 29%

Comparison of results for the estimated causal effect θ on leverage (LDA) of having or not having a rating, depending on whether interest coverage ("IntCov") as defined in equation <ref> is excluded or included in the set X of covariates as per equations <ref> and <ref>. The general methodology for "Incl. IntCov" is the same as for the main model used throughout this paper ("Excl.
IntCov”, as described in previous sections), including aggregation of parameter estimates and standard errors over a five-fold split with two repetitions. Similarly, the results from the analyses including interest coverage support the results from the main analysis for the broadest down to the most granular rating categories. Full details are provided in the appendix. We provide here in the main section the graphical comparison of results for the granular rating categories in figure <ref>; the figure displaying the effect estimates together with the corresponding multiplier bootstrap p-values is located in the appendix. As can be seen, the results including interest coverage are very similar to those from the main analysis without interest coverage, both in terms of magnitude and overall shape. In particular, we also see the gradual rise in effect size over the notch-ratings within the BBB and BB rating classes, which supports our previous finding that there is no abrupt divide in effect between investment-grade and speculative-grade ratings. In summary, our robustness checks reinforce our three main conclusions regarding the effects of credit ratings on leverage: first, ratings affect the leverage ratio. Second, this effect is heterogeneous and depends on the rating category. Third, the change in effect size is gradual across the individual, granular categories within BBB and BB and thus does not occur abruptly at the boundary between investment- and speculative-grade ratings. § CONCLUSION To date, the literature has not provided a definitive explanation for why individual companies choose particular capital structures. In the absence of a consensus model and considering the large number of potential influencing factors, we employed double machine learning to investigate the causal effect of credit ratings on leverage. This approach allowed us to use random forests, which are highly flexible models capable of discerning the relationship between company characteristics, leverage, and ratings from the data, without the need for assuming linear relationships, pre-selecting a limited set of variables, or undertaking extensive feature engineering. We were able to perform valid inference and to estimate the heterogeneity of the treatment effect using machine learning methods that, without double machine learning, would have led to bias in the estimated coefficients. As a result, we were able to document for our empirical sample three important facts about the effect of ratings on leverage. First, ratings have a causal effect on the leverage ratio. Holding all else equal, having a rating increases the book leverage ratio by approximately 7 to 9 percentage points, or roughly 30% to 40% compared to the mean leverage ratio of our sample. However, this effect exhibits a significant degree of heterogeneity, captured in our second finding. To use colorful language, consider a cocktail bar where drinks have an average alcohol content of 9%; one could remain completely sober or get completely drunk from just one drink depending on what one is served. Applying this analogy to our context, the impact of ratings on leverage varies significantly across different rating categories. For the two highest categories, AAA and AA, the rating effect is negative, leading to lower leverage. For the next two categories, A and BBB, the effect is approximately zero. 
However, beginning with BB, the effect turns distinctly positive, leading to higher leverage, and then stays high or increases even further for the last three non-default categories B, CCC, and CC. Third, and in contrast to what the second point would seem to suggest at first glance, the transition in the direction of the effect is gradual over the individual, granular categories within the broad categories of BBB and BB, and especially over straight BBB, BBB-, BB+, and straight BB. Thus, the shift from no effect to a positive effect does not occur abruptly at the boundary between investment- and speculative-grade ratings. Several different robustness tests corroborate these findings. Nevertheless, as with most empirical research, our work has a number of limitations, each of which we see as a potential area for complementary and future study. First, our empirical data could be enriched in various ways. Obvious dimensions would be longer or different time periods and additional covariates, including metrics related to environmental, social, and governance (ESG) criteria, or company officer characteristics. Second, an interesting next step would be to understand the mechanisms underlying the relationship between different ratings and different capital structures: Why do different ratings lead to different leverage ratios? Third, expectations about the future often play an important role in economics. As noted by <cit.> (page 123): "[I]n business, there is usually no before, during, or after." Thus, it is highly likely that ratings are influenced by expectations about company characteristics, including the leverage ratio itself. Disentangling the effect of expectations on the relationship between ratings and leverage represents another formidable research challenge. <cit.> (page 8) advise that we sometimes need to refrain from devising "extremely elegant theories, and instead embrace complexity and make use of the best ally we have: the unreasonable effectiveness of data." We hope that our paper, with double machine learning and data as our allies, has illuminated the heterogeneous effects of credit ratings on leverage.

§ APPENDIX

In this appendix, we provide additional information pertaining to the main part of our paper. We begin with an addition to the literature review from the main section and discuss leverage and rating studies that used machine learning methods. We then provide an overview of double machine learning that is more extensive than the one in the main paper. For the empirical part, we provide the full list of covariates used in our analyses, the specifications of the learner functions for the main model, as well as the specifications for the alternative models used for robustness checks. The appendix also contains the analyses of the rating effect on leverage at two levels of granularity not reported in the main part of the paper for the sake of brevity: first, the rating effect split by investment- and speculative-grade ratings and, second, the effect by individual broad rating category. Finally, we also present the details of two further robustness checks: results from a different sample period and results including interest coverage as an additional covariate.

§.§ Literature review: machine learning-based leverage and rating studies

As indicated in sections <ref> and <ref> in the main text, machine learning methods are very flexible in adapting to complex, non-linear patterns.
Additionally, they can handle data that do not follow well-behaved distributions (such as the normal or at least symmetrical distributions) and can cope with multicollinearity between covariates <cit.>. In the context of forecasting credit ratings, machine learning first appeared in the computer science literature (e.g., <cit.> and <cit.> with neural networks or <cit.> with support vector machines) rather than in economic research publications <cit.>. We hypothesize that one contributing factor is the fact that the primary objective of machine learning methods has been to maximize predictive performance, and not to identify causal patterns <cit.>, while “[t]he goal of most empirical studies in economics and other social sciences is to determine whether a change in one variable [...] causes a change in another variable” <cit.> (page 3). For instance, <cit.> suggest in their recent review of machine learning-based corporate default predictions that models should be able to suggest the cause(s) of default in order to increase their usefulness. Still, machine learning models have the advantage that they can easily incorporate large numbers of financial covariates (predictive variables, features). For instance, <cit.> use 27 covariates in their credit rating study and <cit.> find that their models perform better when the full set is provided as input and the various neural networks themselves perform feature selection in the training process. Indeed, the selection of the relevant covariates, and thus the resulting model, is data-driven with machine learning algorithms. “This approach contrasts with economics, where (in principle, though rarely in reality) the researcher picks a model based on principles and estimates it once” <cit.> (page 508). Given the absence of a predominant capital structure theory and the fact that different theories “lead to such different, an in some ways diametrically opposed, decisions and outcomes” <cit.> (page 8), this feature of machine learning appears especially attractive for analyzing leverage ratios. Also, the predictive power of machine learning models is generally strong. For instance, <cit.> (Table 2, page 194) find in their comparison of model accuracy for a large set of S&P 500 company ratings that random forests and support vector models improve prediction accuracy by two to three percentage points versus linear discriminant analysis, the best-performing non-machine learning method. The improvements in prediction accuracy by machine learning versus benchmark statistical methods (predominantly logistic regression and multiple discriminant analysis) in the rating studies listed by <cit.> (Table 1, page 547) are even higher, surpassing ten percentage points in several instances. <cit.> is a recent paper comparing non-linear machine learning models with linear models to predict leverage one year in advance. Out of six different models, the random forest performs best to predict the leverage and improves out-of-sample R^2 by 16 percentage points compared to standard linear models, with an R^2 for the random forest of 56% versus 40% for linear approaches (<cit.>, Table 2, page 11). The models rely on 34 covariates as input, of which eight are dummy variables. 
<cit.> (page 2) thus “challenge the conventional wisdom that the standard set of firm and macroeconomic determinants has limited ability to explain firms' leverage choices.” We highlight that none of the eight dummy variables figures among the key determinants of leverage in the best performing models (random forest and gradient boosting). Specifically, the binary “debt rating” dummy separating very low and unrated companies from the others has one of the lowest measures of variable importance <cit.> (Figure 3, page 12). We hypothesize that since both the z-score as a measure of bankruptcy probability <cit.> and the rating dummy attempt to measure the general construct of “the ability to meet debt obligations” (see section <ref>), the information contained in both covariates is highly overlapping and thus, the random forest and the gradient boosting machine selectively use only the one with the higher predictive power. However, from this observation, we obviously can “merely make associational claims” <cit.> (page 2). We cannot make a causal statement which of these two factors causes leverage (if at all), as opposed to merely predicting it (for an example of using machine learning for forecasting (prediction) versus planning (causation) in the corporate finance function, see for instance <cit.>). Finally, even when <cit.> augment linear models with common non-linear transformations of the input variables (e.g., squared or cubed values) as well as a full set of interaction effects between the covariates, the random forest (excl. transformed and interaction covariates) continues to predict leverage significantly better (<cit.>, internet appendix, Table A4, page 8). Additionally, the authors find in untabulated results that the predictive performance of the random forest does not improve when interaction terms are included. In summary, out of the machine learning toolbox, random forests appear to be a very powerful tool, requiring only limited feature pre-selection or feature engineering. §.§ Double Machine Learning We have seen from the sections in the main text that there is no general consensus regarding the determinants of leverage and how they interact at the company level. Nevertheless, it is likely that many factors play a role and the mechanisms by which they influence capital structure are complex. Given the lack of a strong theoretical framework, isolating the causal effect of credit ratings poses a formidable challenge. Additionally, we need to consider that this effect may be heterogeneous. Double machine learning <cit.> is a recently developed methodology that can help solve questions of causal inference in such settings by harnessing what <cit.> calls “the unreasonable effectiveness of data.” Among the key advantages of double machine learning are the following characteristics. First, there is the ability to handle high feature dimensionality, i.e., the presence of many potential influencing factors in addition to the treatment variable of interest, and to provide valid inference on treatment effects in such high-dimensional, complex data environments. Second, it employs a data-driven approach to select among these influencing factors. Third, it facilitates the use of various machine learning algorithms with flexible function-fitting capabilities. Fourth, there is double-robustness with respect to nuisance functions. “Partialling-out”, “Neyman orthogonality” and “cross-fitting” are three important concepts enabling the “doubly robust” double machine learning approach. 
We will discuss each of these terms in this section.

§.§.§ Partialling-out

Double machine learning builds on the concept of Frisch-Waugh-Lovell (FWL) "partialling out" <cit.>. According to the FWL theorem, a parameter of interest θ in a linear model such as: Y=θ D+β X +ϵ with 𝔼(ϵ | D,X) = 0 can be estimated with linear regression, using e.g., ordinary least squares, in either of two ways. Under the first approach, θ can be directly estimated by regressing Y on D and X. Under the second approach, θ is determined in the last step of a three-step procedure: first, Y is regressed on X, and the corresponding residuals ϵ_Y are determined. Second, D is regressed on X and again, the corresponding residuals ϵ_D are determined. Third, the residuals ϵ_Y from the first step are regressed on the residuals ϵ_D from the second step. The regression coefficient from this third step corresponds to θ, the parameter of interest. Both approaches will yield the same estimate for θ. Throughout this paper, we will continue to use the term "partialling-out" for the second approach, which is usually employed in the economics literature. "Residualization" represents another term for the same technique <cit.> (pages 219-220). It would be convenient if machine learning methods could be used instead of OLS-based linear regression to determine θ. Machine learning has traditionally emphasized predictive performance, and principally predictive performance on the validation (hold-out) data sample, which is intentionally not used for model estimation. Machine learning thus represents the "algorithmic modeling culture" described by <cit.> and comes with several advantages. These include a high flexibility with respect to the model choice and design. Many machine learning methods do not impose strong assumptions on the functional forms, but learn those from the data. This constitutes a valuable safeguard against incorrect model specifications, which also lead to biased parameter estimates, even if there are no unmeasured confounding variables. This quality is particularly useful for the empirical analysis in this paper, because, as we described in section <ref>, no consensus about a unifying model for capital structure exists. Additionally, most machine learning algorithms allow us to rely on a largely "automatic", data-driven variable selection process. Again, this is a welcome feature in the absence of a consensus model and the presence of many potentially influencing factors. Finally, with the data-driven variable selection approach, machine learning methods are able to handle high-dimensional settings, in which the number of potential predictive variables is large compared to the number of observations. We refer interested readers to the vast literature in this context, for instance <cit.> or <cit.>. The downside of this focus on predictive performance is that inference on the model parameters, the core of causal inference, is generally not possible with machine learning methods. Yet, empirical research in many domains, including economics, is predominantly concerned with causal questions <cit.>, and e.g., <cit.> sees the general inability of machine learning methods to uncover causal relationships as a fundamental obstacle to further expanding their applications. In particular, machine learning methods generally produce biased parameter estimates. The bias stems from the fact that machine learning methods use regularization penalties in their data-driven variable selection procedure.
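The following small simulation is purely illustrative (simulated data; all names are hypothetical and this is not part of our empirical analysis): it verifies numerically that the two FWL approaches deliver the same coefficient, and it previews how a naively penalized fit of the same equation distorts the treatment coefficient.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Lasso

# Simulated toy data: the treatment D is confounded by X[:, 0]; true theta = 0.5.
rng = np.random.default_rng(0)
n = 10_000
X = rng.normal(size=(n, 5))
D = 0.9 * X[:, 0] + 0.25 * rng.normal(size=n)
Y = 0.5 * D + 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(size=n)

# FWL, first approach: regress Y on D and X jointly.
theta_direct = LinearRegression().fit(np.column_stack([D, X]), Y).coef_[0]

# FWL, second approach ("partialling-out"): residualize Y and D on X,
# then regress residual on residual.
res_Y = Y - LinearRegression().fit(X, Y).predict(X)
res_D = D - LinearRegression().fit(X, D).predict(X)
theta_fwl = LinearRegression().fit(res_D.reshape(-1, 1), res_Y).coef_[0]
# theta_direct and theta_fwl coincide up to floating-point precision
# (and both lie close to the true value 0.5).

# "Naive" regularized fit of the same equation: the l1 penalty shifts weight
# between the correlated regressors D and X[:, 0], so the estimated treatment
# coefficient is biased (here shrunk towards zero) rather than centered on 0.5.
theta_naive = Lasso(alpha=0.25).fit(np.column_stack([D, X]), Y).coef_[0]
```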
The intuition behind this regularization bias is that the parameter estimates for covariates that are highly correlated with the treatment variable will get severely "shrunk" versus their true value (e.g., in the case of Ridge regression) or even set to zero (e.g., for LASSO), because the treatment variable by itself has sufficient predictive power. Correspondingly, the parameter value of the treatment variable will get inflated, because it will incorporate the effect of correlated covariates. Of course, the reverse situation with inflated covariate parameters and a significantly shrunk or even zero treatment parameter is also possible. This is the reason that machine learning methods cannot be used to "directly" estimate equation <ref> as per the first approach described above. Such a "naive approach" <cit.> (page 36) incurs a high risk of yielding a severely biased estimator for the treatment parameter <cit.>. However, machine learning methods can be employed following the second approach, i.e. by partialling-out. This leads to the double machine learning approach for causal analysis. For this approach, Neyman orthogonality plays a central role, which we will detail in the next subsection.

§.§.§ Neyman orthogonality

Following the general outline of <cit.>, we illustrate the approach using a "partially linear regression" model <cit.>, which we will also employ in our empirical analysis in section <ref>. The usual form of a partially linear regression model is: Y=θ_0D+g_0(X) +ζ with 𝔼(ζ | D,X) = 0 and D=m_0(X) +𝒱 with 𝔼(𝒱 | X) = 0, where Y is the outcome variable, D is the treatment (policy) variable of interest, and X is a (potentially high-dimensional) vector of confounding covariates. ζ and 𝒱 are error terms. The regression coefficient θ_0 is the parameter of interest. We can interpret θ_0 as a causal parameter, i.e. the causal effect of treatment D on outcome Y, if D is "as good as randomly assigned" <cit.> (page 73) conditional on the covariates X and thus, D is exogenous conditionally on X. Of course, the other standard assumptions of causal inference need to hold as well, for instance consistency, conditional exchangeability, and positivity <cit.>. Applying the partialling-out procedure on equations <ref> and <ref> removes both the confounding effect of X and the regularization bias[More precisely, the first-order effect of the regularization bias is removed. Removing the first-order effect is usually enough to produce a high-quality, low-bias estimator for the parameter of interest <cit.>. <cit.> expand this to k-th order orthogonality but show that for partially linear regressions (as employed in our empirical analysis in section <ref>), first-order orthogonality is the limit of robustness when treatment residuals are normally distributed.] introduced by a machine learning method with a penalty or regularization mechanism <cit.>. Within the partialling-out procedure, cross-validation remains required for the determination of the residuals to avoid bias from overfitting. Technically, a method-of-moment estimator for the parameter of interest θ_0 is employed: 𝔼[ψ(W;θ_0, η_0)] = 0 where ψ represents the score function, W = (Y,D,X) is the set (data triplet) of outcome, treatment, and confounding variables, θ_0 is the parameter of interest as already indicated above, and η_0 are nuisance functions (for instance, g_0 and m_0, which we will employ later in our empirical application).
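To fix ideas, one concrete example of a score function for the partially linear model above, a standard choice in the double machine learning literature and stated here purely for illustration, is ψ(W;θ, η) = (Y - θ D - g(X))(D - m(X)) with nuisance functions η = (g, m). At the true values (θ_0, g_0, m_0), the moment condition 𝔼[ψ(W;θ_0, η_0)] = 0 holds because 𝔼(ζ | D,X) = 0, and θ_0 is the unique solution whenever 𝔼(𝒱^2) > 0. The condition stated next makes precise the sense in which this score is robust to estimation errors in g and m.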
For the double machine learning inference procedure, the score function ψ(W;θ_0, η_0) from equation <ref> (with θ_0 as the unique solution) needs to satisfy the Neyman orthogonality <cit.> condition: ∂_η𝔼[ψ(W;θ_0, η)]|_η=η_0 = 0, where the derivative ∂_η denotes the pathwise Gateaux derivative operator. Intuitively, Neyman orthogonality in equation <ref> ensures that the moment condition ψ(W;θ_0, η_0) from equation <ref> is insensitive to small errors[Technically, this concerns the “speed” of the convergence rates. We refer interested readers to <cit.>.] in the estimation of the nuisance function η (around its “true” full population value η_0). Thus, it removes the bias arising from using a machine learning based estimator for η_0. As a further consequence, Neyman orthogonality ensures “adaptivity” of the estimator for θ_0: its approximate distribution does not depend on the fact that the machine learning based estimate for η_0 contains errors, if the latter are “mild” (as described in <cit.>). §.§.§ Cross-fitting A second point to consider is that machine learning methods usually rely on sample splitting in order to avoid bias introduced by overfitting. Overfitting occurs when models follow the data that they are trained on “too closely”, thus picking up not only the true underlying pattern, but also the noise contained in the (sample) data. The more complex and flexible a model, the higher the risk for this behavior <cit.>. Thus, as mentioned previously, machine learning typically divides the data into two distinct sub-sets: one training data set, used to determine the model, and one validation (hold-out) data set to evaluate the model (but not used to train the model). A similar data splitting methodology applies in the case of a partially linear model with two nuisance functions as described in equations <ref> and <ref>. Only one part of the data is used to estimate the nuisance functions which are partialled-out, while the other part of the data is used to estimate the parameter of interest (i.e., the treatment effect). Of course, such a limited use of the data implies a loss of efficiency. To overcome this efficiency loss due to the necessary data splitting, double machine learning employs a technique called “cross-fitting” <cit.> (page C6). Under cross-fitting, the roles of the two data sets are swapped and two estimates for the parameter of interest are obtained. Since these two estimators are approximately independent, they can simply be averaged to make use of the full data set <cit.> (Figure 2, page C7). The cross-fitting procedure can be expanded beyond two data sets into a K-fold version to further increase robustness; <cit.> (page 13) reports that four to five folds appear to work well in practice. Furthermore, the cross-fitting procedure can be repeated to enhance robustness of the estimator with respect to potential effects of a particular random split of the data in the K-folds. While the specific sample partition has no impact on results asymptotically <cit.> (page C30), it is recommended in practice to repeat the estimation procedure <cit.> (page 13). §.§.§ Double robustness “Double Machine Learning” derives its name from the fact that machine learning methods are used to estimate both equation <ref> and equation <ref>. However, the estimated treatment effect is also “doubly robust” thanks to the partialling-out procedure described previously. 
This means that potential “mistakes in either of the two prediction problems” <cit.> (page 221) (i.e., equations <ref> or <ref>) do not invalidate the effect estimate as long as at least one of these two is sufficiently well estimated. In other words, while it is necessary “to do a good job on at least one of these two prediction problems” <cit.> (page 221), it does not matter on which one. While we caution practitioners against interpreting this as an invitation to careless model specifications, we believe that this represents another attractive property of double machine learning whenever doubts about the precise model characteristics persist. <cit.> (page 34) remarks: “Because model selection mistakes seem inevitable in realistic settings, it is important to develop inference procedures that are robust to such mistakes.” Finally, a general robustness of double machine learning with respect to the particular machine learning (ML) algorithm that is employed has been observed. For instance, <cit.> (page C45) comment on their empirical results that “the choice of the ML method used in estimating nuisance functions does not substantively change the conclusions.” Of course, the machine learning methods employed need to be of sufficient quality for the problem at hand. Considering the large choice of machine learning models, this is typically not an important hurdle, and even ensemble models are suitable <cit.> (pages C22-C23). §.§ Covariates Tables <ref> and <ref> provide an overview of the 1'840 covariates (features) used throughout the empirical analysis. The vector X in equations <ref> and <ref> is composed of these variables. In addition to the variables displayed in the two tables, only two variables have been “engineered” to provide a potentially better measure for size, which can be useful for purely linear models such as LASSO and Ridge regression. These two variables are the logarithm of sales (code “sale” in table <ref>) and the logarithm of total assets (code “at”).
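As a purely illustrative sketch of this covariate construction (the data frame df and the selection of items are hypothetical; column names follow the Compustat codes listed in the tables below), the engineered and scaled variables could be built as follows:

df$log_sale <- log(df$sale)   # engineered size measure: log of sales
df$log_at   <- log(df$at)     # engineered size measure: log of total assets
items <- c("che", "ppent", "xrd")                    # illustrative subset of item codes
for (v in items) {
  df[[paste0(v, "_pct_at")]]   <- df[[v]] / df$at    # item as % of total assets
  df[[paste0(v, "_pct_sale")]] <- df[[v]] / df$sale  # item as % of total sales
}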
ll 2lCovariates: financial data (absolute and as % of assets and % of sales) Code Long text 2lCovariates: financial data (absolute and as % of assets and % of sales) Code Long text 2rContinued on next page acchg Accounting Changes - Cumulative Effect accrt ARO Accretion Expense acdo Current Assets of Discontinued Operations aco Current Assets - Other - Total acodo Other Current Assets Excl Discontinued Operations acominc Accumulated Other Comprehensive Income (Loss) acox Current Assets - Other - Sundry acqao Acquired Assets > Other Long-Term Assets acqcshi Shares Issued for Acquisition acqgdwl Acquired Assets - Goodwill acqic Acquisitions - Current Income Contribution acqintan Acquired Assets - Intangibles acqinvt Acquired Assets - Inventory acqppe Acquired Assets > Property, Plant & Equipment acqsc Acquisitions - Current Sales Contribution act Current Assets - Total adjex_c Cumulative Adjustment Factor by Ex-Date - Calendar adjex_f Cumulative Adjustment Factor by Ex-Date - Fiscal afudcc Allowance for Funds Used During Construction (Cash Flow) afudci Allowance for Funds Used During Construction ajex Adjustment Factor (Company) - Cumulative by Ex-Date ajp Adjustment Factor (Company) - Cumulative byPay-Date aldo Long-term Assets of Discontinued Operations am Amortization of Intangibles amc Amortization (Cash Flow) - Utility ano Assets Netting & Other Adjustments ao Assets - Other aocidergl Accum Other Comp Inc - Derivatives Unrealized Gain/Loss aociother Accum Other Comp Inc - Other Adjustments aocipen Accum Other Comp Inc - Min Pension Liab Adj aocisecgl Accum Other Comp Inc - Unreal G/L Ret Int in Sec Assets aodo Other Assets excluding Discontinued Operations aol2 Assets Level2 (Observable) aoloch Assets and Liabilities - Other - Net Change aox Assets - Other - Sundry aqa Acquisition/Merger After-tax aqc Acquisitions aqd Acquisition/Merger Diluted EPS Effect aqeps Acquisition/Merger Basic EPS Effect aqi Acquisitions - Income Contribution aqp Acquisition/Merger Pretax aqpl1 Assets Level1 (Quoted Prices) aqs Acquisitions - Sales Contribution arce As Reported Core - After-tax arced As Reported Core - Diluted EPS Effect arceeps As Reported Core - Basic EPS Effect at Assets - Total aul3 Assets Level3 (Unobservable) bastr Average Short-Term Borrowings Rate billexce Billings in Excess of Cost & Earnings capsft Capitalized Software capx Capital Expenditures capxv Capital Expend Property, Plant and Equipment Schd V cb Compensating Balance cdvc Cash Dividends on Common Stock (Cash Flow) ceiexbill Cost & Earnings in Excess of Billings ch Cash che Cash and Short-Term Investments chech Cash and Cash Equivalents - Increase/(Decrease) ci Comprehensive Income - Total cibegni Comp Inc - Beginning Net Income cicurr Comp Inc - Currency Trans Adj cidergl Comp Inc - Derivative Gains/Losses cimii Comprehensive Income - Noncontrolling Interest ciother Comp Inc - Other Adj cipen Comp Inc - Minimum Pension Adj cisecgl Comp Inc - Securities Gains/Losses citotal Comprehensive Income - Parent cogs Cost of Goods Sold cshfd Common Shares Used to Calc Earnings Per Share - Fully Diluted cshi Common Shares Issued csho Common Shares Outstanding cshpri Common Shares Used to Calculate Earnings Per Share - Basic cshr Common/Ordinary Shareholders cshtr_c Common Shares Traded - Annual - Calendar cshtr_f Common Shares Traded - Annual - Fiscal cstke Common Stock Equivalents - Dollar Savings currtr Currency Translation Rate curuscn US Canadian Translation Rate datadate Data Date dc Deferred Charges depc Depreciation and Depletion 
(Cash Flow) derac Derivative Assets - Current deralt Derivative Assets Long-Term derhedgl Gains/Losses on Derivatives and Hedging diladj Dilution Adjustment dilavx Dilution Available - Excluding Extraordinary Items dlcch Current Debt - Changes dltis Long-Term Debt - Issuance do Discontinued Operations donr Nonrecurring Disc Operations dp Depreciation and Amortization dpacre Accumulated Depreciation of RE Property dpact Depreciation, Depletion and Amortization (Accumulated) dpc Depreciation and Amortization (Cash Flow) dpret Depr/Amort of Property dpvieb Depreciation (Accumulated) - Ending Balance (Schedule VI) drlt Deferred Revenue - Long-term dv Cash Dividends (Cash Flow) dvc Dividends Common/Ordinary dvintf Dividends & Interest Receivable (Cash Flow) dvp Dividends - Preferred/Preference dvpsp_c Dividends per Share - Pay Date - Calendar dvpsp_f Dividends per Share - Pay Date - Fiscal dvpsx_c Dividends per Share - Ex-Date - Calendar dvpsx_f Dividends per Share - Ex-Date - Fiscal dvt Dividends - Total ebit Earnings Before Interest and Taxes ebitda Earnings Before Interest emp Employees epsfi Earnings Per Share (Diluted) - Including Extraordinary Items epsfx Earnings Per Share (Diluted) - Excluding Extraordinary Items epspi Earnings Per Share (Basic) - Including Extraordinary Items epspx Earnings Per Share (Basic) - Excluding Extraordinary Items esub Equity in Earnings - Unconsolidated Subsidiaries esubc Equity in Net Loss - Earnings exre Exchange Rate Effect fatb Property, Plant, and Equipment - Buildings at Cost fatc Property, Plant, and Equipment - Construction in Progress at Cost fate Property, Plant, and Equipment - Machinery and Equipment at Cost fatl Property, Plant, and Equipment - Leases at Cost fatn Property, Plant, and Equipment - Natural Resources at Cost fato Property, Plant, and Equipment - Other at Cost fatp Property, Plant, and Equipment - Land and Improvements at Cost fca Foreign Exchange Income (Loss) ffo Funds From Operations (REIT) fiao Financing Activities - Other finaco Finance Division Other Current Assets, Total finao Finance Division Other Long-Term Assets, Total fincf Financing Activities - Net Cash Flow finch Finance Division - Cash finivst Finance Division Short-Term Investments finrecc Finance Division Current Receivables finreclt Finance Division Long-Term Receivables finrev Finance Division Revenue finxopr Finance Division Operating Expense fopo Funds from Operations - Other fopox Funds from Operations - Other excluding Option Tax Benefit fopt Funds From Operations - Total fsrco Sources of Funds - Other fsrct Sources of Funds - Total fuseo Uses of Funds - Other fuset Uses of Funds - Total fyear Data Year - Fiscal gdwl Goodwill gdwlam Goodwill Amortization gdwlia Impairments of Goodwill After-tax gdwlid Impairments of Goodwill Diluted EPS Effect gdwlieps Impairments of Goodwill Basic EPS Effect gdwlip Impairments of Goodwill Pretax gla Gain/Loss After-tax glcea Gain/Loss on Sale (Core Earnings Adjusted) After-tax glced Gain/Loss on Sale (Core Earnings Adjusted) Diluted EPS glceeps Gain/Loss on Sale (Core Earnings Adjusted) Basic EPS Effect glcep Gain/Loss on Sale (Core Earnings Adjusted) Pretax gld Gain/Loss Diluted EPS Effect gleps Gain/Loss Basic EPS Effect gliv Gains/Losses on investments glp Gain/Loss Pretax gp Gross Profit (Loss) hedgegl Gain/Loss on Ineffective Hedges ib Income Before Extraordinary Items ibadj Income Before Extraordinary Items - Adjusted for Common Stock ibc Income Before Extraordinary Items (Cash Flow) ibcom Income Before Extraordinary 
Items - Available for Common ibmii Income before Extraordinary Items and Noncontrolling Interests intan Intangible Assets - Total intano Other Intangibles intc Interest Capitalized invch Inventory - Decrease (Increase) invfg Inventories - Finished Goods invo Inventories - Other invrm Inventories - Raw Materials invt Inventories - Total invwip Inventories - Work In Process irent Rental Income itcb Investment Tax Credit (Balance Sheet) itcc Investment Tax Credit - Net (Cash Flow) - Utility itci Investment Tax Credit (Income Account) ivaco Investing Activities - Other ivch Increase in Investments ivncf Investing Activities - Net Cash Flow ivst Short-Term Investments - Total ivstch Short-Term Investments - Change lifr LIFO Reserve lifrp LIFO Reserve - Prior lno Liabilities Netting & Other Adjustments mib Noncontrolling Interest (Balance Sheet) mibn Noncontrolling Interests - Nonredeemable - Balance Sheet mibt Noncontrolling Interests - Total - Balance Sheet mii Noncontrolling Interest (Income Account) mkvalt Market Value - Total - Fiscal msa Marketable Securities Adjustment ni Net Income (Loss) niadj Net Income Adjusted for Common/Ordinary Stock nipfc Pro Forma Net Income - Current nipfp Pro Forma Net Income - Prior nopi Nonoperating Income (Expense) nopio Nonoperating Income (Expense) - Other nrtxt Nonrecurring Income Taxes After-tax nrtxtd Nonrecurring Income Tax Diluted EPS Effect nrtxteps Nonrecurring Income Tax Basic EPS Effect oancf Operating Activities - Net Cash Flow ob Order Backlog oiadp Operating Income After Depreciation oibdp Operating Income Before Depreciation opeps Earnings Per Share from Operations oprepsx Earnings Per Share - Diluted - from Operations optca Options - Cancelled (-) optdr Dividend Rate - Assumption (%) optex Options Exercisable (000) optexd Options - Exercised (-) optgr Options - Granted optlife Life of Options - Assumption (# yrs) optosby Options Outstanding - Beg of Year optosey Options Outstanding - End of Year optprcby Options Outstanding Beg of Year - Price optprcca Options Cancelled - Price optprcex Options Exercised - Price optprcey Options Outstanding End of Year - Price optprcgr Options Granted - Price optprcwa Options Exercisable - Weighted Avg Price optrfr Risk Free Rate - Assumption (%) optvol Volatility - Assumption (%) pddur Period Duration pdvc Cash Dividends on Preferred/Preference Stock (Cash Flow) pi Pretax Income pidom Pretax Income - Domestic pifo Pretax Income - Foreign pnca Core Pension Adjustment pncad Core Pension Adjustment Diluted EPS Effect pncaeps Core Pension Adjustment Basic EPS Effect pncia Core Pension Interest Adjustment After-tax pncid Core Pension Interest Adjustment Diluted EPS Effect pncieps Core Pension Interest Adjustment Basic EPS Effect pncip Core Pension Interest Adjustment Pretax pncwia Core Pension w/o Interest Adjustment After-tax pncwid Core Pension w/o Interest Adjustment Diluted EPS Effect pncwieps Core Pension w/o Interest Adjustment Basic EPS Effect pncwip Core Pension w/o Interest Adjustment Pretax pnrsho Nonred Pfd Shares Outs (000) ppegt Property, Plant and Equipment - Total (Gross) ppenc Property, Plant, and Equipment - Construction in Progress (Net) ppent Property, Plant and Equipment - Total (Net) ppevbb Property, Plant and Equipment - Beginning Balance (Schedule V) ppeveb Property, Plant, and Equipment - Ending Balance (Schedule V) prca Core Post Retirement Adjustment prcad Core Post Retirement Adjustment Diluted EPS Effect prcaeps Core Post Retirement Adjustment Basic EPS Effect prcc_c Price Close - 
Annual - Calendar prcc_f Price Close - Annual - Fiscal prch_c Price High - Annual - Calendar prch_f Price High - Annual - Fiscal prcl_c Price Low - Annual - Calendar prcl_f Price Low - Annual - Fiscal prsho Redeem Pfd Shares Outs (000) prstkc Purchase of Common and Preferred Stock prstkcc Purchase of Common Stock (Cash Flow) prstkpc Purchase of Preferred/Preference Stock (Cash Flow) rca Restructuring Costs After-tax rcd Restructuring Costs Diluted EPS Effect rceps Restructuring Costs Basic EPS Effect rcp Restructuring Costs Pretax rdip In Process R&D Expense rdipa In Process R&D Expense After-tax rdipd In Process R&D Expense Diluted EPS Effect rdipeps In Process R&D Expense Basic EPS Effect recch Accounts Receivable - Decrease (Increase) recco Receivables - Current - Other recd Receivables - Estimated Doubtful rect Receivables - Total ret Total RE Property revt Revenue - Total rra Reversal - Restructruring/Acquisition Aftertax rrd Reversal - Restructuring/Acq Diluted EPS Effect rreps Reversal - Restructuring/Acq Basic EPS Effect rrp Reversal - Restructruring/Acquisition Pretax rstche Restricted Cash & Investments - Current rstchelt Long-Term Restricted Cash & Investments sale Sales/Turnover (Net) salepfc Pro Forma Net Sales - Current Year salepfp Pro Forma Net Sales - Prior Year scstkc Sale of Common Stock (Cash Flow) seta Settlement (Litigation/Insurance) After-tax setd Settlement (Litigation/Insurance) Diluted EPS Effect seteps Settlement (Litigation/Insurance) Basic EPS Effect setp Settlement (Litigation/Insurance) Pretax siv Sale of Investments spce S&P Core Earnings spced S&P Core Earnings EPS Diluted spceeps S&P Core Earnings EPS Basic spi Special Items spid Other Special Items Diluted EPS Effect spieps Other Special Items Basic EPS Effect spioa Other Special Items After-tax spiop Other Special Items Pretax sppe Sale of Property sppiv Sale of Property, Plant and Equipment and Investments - Gain (Loss) spstkc Sale of Preferred/Preference Stock (Cash Flow) sret Gain/Loss on Sale of Property sstk Sale of Common and Preferred Stock stkco Stock Compensation Expense stkcpa After-tax stock compensation tdc Deferred Income Taxes - Net (Cash Flow) tfva Total Fair Value Assets tfvce Total Fair Value Changes including Earnings tfvl Total Fair Value Liabilities tlcf Tax Loss Carry Forward txach Income Taxes - Accrued - Increase/(Decrease) txbco Excess Tax Benefit Stock Options - Cash Flow Operating txbcof Excess Tax Benefit of Stock Options - Cash Flow Financing txc Income Taxes - Current txdb Deferred Taxes (Balance Sheet) txdba Deferred Tax Asset - Long Term txdbca Deferred Tax Asset - Current txdbcl Deferred Tax Liability - Current txdc Deferred Taxes (Cash Flow) txdfed Deferred Taxes-Federal txdfo Deferred Taxes-Foreign txdi Income Taxes - Deferred txditc Deferred Taxes and Investment Tax Credit txds Deferred Taxes-State txfed Income Taxes - Federal txfo Income Taxes - Foreign txndb Net Deferred Tax Asset (Liab) - Total txndba Net Deferred Tax Asset txndbl Net Deferred Tax Liability txndbr Deferred Tax Residual txo Income Taxes - Other txp Income Taxes Payable txpd Income Taxes Paid txr Income Tax Refund txs Income Taxes - State txt Income Taxes - Total txtubadjust Other Unrecog Tax Benefit Adj. txtubbegin Unrecog. Tax Benefits - Beg of Year txtubend Unrecog. Tax Benefits - End of Year txtubmax Chg. In Unrecog. Tax Benefits - Max txtubmin Chg. In Unrecog. 
Tax Benefits - Min txtubposdec Decrease- Current Tax Positions txtubposinc Increase- Current Tax Positions txtubpospdec Decrease- Prior Tax Positions txtubpospinc Increase- Prior Tax Positions txtubsettle Settlements with Tax Authorities txtubsoflimit Lapse of Statute of Limitations txtubtxtr Impact on Effective Tax Rate txtubxintbs Interest & Penalties Accrued - B/S txtubxintis Interest & Penalties Reconized - I/S txw Excise Taxes uaoloch Other Assets and Liabilities - Net Change (Statement of Cash Flows) uaox Other Assets - Utility uapt Accounts Payable - Utility uccons Contributions in Aid of Construction ucustad Customer Advances for Construction udcopres Deferred Credits and Operating Reserves - Other udfcc Deferred Fuel - Increase (Decrease) (Statement of Cash Flows) udpfa Depreciation of Fixed Assets udvp Preferred Dividend Requirements ugi Gross Income (Income Before Interest Charges) uinvt Inventories - Utility ulcm Current Liabilities - Miscellaneous ulco Current Liabilities - Other - Utility uniami Net Income before Extraordinary Items unopinc Nonoperating Income (Net) - Other uois Other Internal Sources - Net (Cash Flow) uopi Operating Income - Total - Utility uopres Operating Reserves updvp Preference Dividend Requirements* upstksf Preferred/Preference Stock Sinking Fund Requirement urect Receivables (Net) urectr Accounts Receivable - Trade - Utility urevub Accrued Unbilled Revenues (Balance Sheet) uspi Special Items usubdvp Subsidiary Preferred Dividends utme Maintenance Expense - Total utxfed Current Taxes - Federal (Operating) wcap Working Capital (Balance Sheet) wcapc Working Capital Change - Other - Increase/(Decrease) wcapch Working Capital Change - Total wda Writedowns After-tax wdd Writedowns Diluted EPS Effect wdeps Writedowns Basic EPS Effect wdp Writedowns Pretax xacc Accrued Expenses xad Advertising Expense xi Extraordinary Items xido Extraordinary Items and Discontinued Operations xidoc Extraordinary Items and Discontinued Operations (Cash Flow) xintopt Implied Option Expense xlr Staff Expense - Total xopr Operating Expenses - Total xoptd Implied Option EPS Diluted xopteps Implied Option EPS Basic xpp Prepaid Expenses xpr Pension and Retirement Expense xrd Research and Development Expense xrdp Research & Development - Prior xrent Rental Expense xsga Selling, General and Administrative Expense Overview of data items sourced from Compustat (“Compustat Daily Updates - Fundamentals Annual”) and employed as continuous-valued covariates throughout the empirical analysis (see section <ref>). Data items are sorted in alphabetical order of their code. The code and long text are as per the Compustat Data Guide, accessible via the Wharton Research Data Service (WRDS). Please see subsection <ref> for further details. Data items are used as absolute values (as sourced from Compustat) and as scaled values, once by total assets (code “at”) and once by total sales (code “sales”). Scaling was not performed in selected cases where this appeared to be meaningless, for instance for the fiscal year (code “fyear”). The long text for the codes “afudci” (“Allowance for Funds Used During Construction (Investing) (Cash Flow)”), “ibad” (“Income Before Extraordinary Items - Adjusted for Common Stock Equivalents”), “niadj” (“Net Income Adjusted for Common/Ordinary Stock (Capital) Equivalents”) and “uniami” (“Net Income before Extraordinary Items and after Noncontrolling Interest”) has been shortened in the table to limit the breadth of the second column. 
llr 3lCovariates: dummy variables Code Long text Count 3lCovariates: dummy variables Code Long text Count 3rContinued on next page acctchg Adoption of Accounting Changes 5 acctstd Accounting Standard 3 acqmeth Acquisition Method 6 adrr ADR Ratio 54 au Auditor 21 auop Auditor Opinion 5 auopic Auditor Opinion - Internal Control 3 bspr Balance Sheet Presentation 2 ceoso Chief Executive Officer SOX Certification 3 cfoso Chief Financial Officer SOX Certification 3 cik CIK Number 1 compst Comparability Status 10 costat Active/Inactive Status Marker 1 curcd ISO Currency Code 1 curncd Native Currency Code 33 cusip CUSIP 1 dldte Research Company Deletion Date 1 dlrsn Research Co Reason for Deletion 1 exchg Stock Exchange Code 12 fax Fax Number 1 fic Current ISO Country Code - Incorporation 58 final Final Indicator Flag 1 fyr Fiscal Year-end Month 11 fyrc Current Fiscal Year End Month 11 idbflag International, Domestic, Both Indicator 1 incorp Current State/Province of Incorporation Code 52 ipodate Company Initial Public Offering Date 1 ismod Income Statement Model Number 2 loc Current ISO Country Code - Headquarters 65 ltcm Long Term Contract Method 3 ogm OIL & GAS METHOD 2 phone Phone Number 1 prican Current Primary Issue Tag - Canada 1 prirow Primary Issue Tag - Rest of World 1 priusa Current Primary Issue Tag - US 1 rank Rank - Auditor 1 scf Cash Flow Format 3 sic Standard Industry Classification Code 306 src Source Document 7 stalt Status Alert 2 state State/Province 61 stko Stock Ownership Code 3 tic Ticker Symbol 1 udpl Utility - Liberalized Depreciation Code 3 upd Update Code 1 weburl Web URL 1 Overview of data items sourced from Compustat (“Compustat Daily Updates - Fundamentals Annual”) and transformed into dummy variables throughout the empirical analysis (see section <ref>). Data items are sorted in alphabetical order of their code. The code and long text are as per the Compustat Data Guide, accessible via the Wharton Research Data Service (WRDS). Please see subsection <ref> for further details. “Count” refers to the number of dummirized variables into which one particular data item was transformed. For instance, there are six types for “acctchg” in our empirical data set (i.e., whether or not a company has adopted a particular new accounting standard), which translates into five dummy variables. For data items with “count” = 1 (i.e, one single dummy variable), dummy coding corresponds to presence or absence of the data item. For instance, a company may have or may not have in a given year a central index key (CIK number, code “cik”) from the FDA, or a fax number (code “fax”), displayed in Compustat. For the Standard Industry Classification (code “sic”), dummies have been created at the first (7), second (58) and third (241) level for a total count of 306. §.§ Learner specifications: technical details We provide here technical details for the learners (“nuisance functions”, <cit.>) g_0 and m_0 from subsection <ref> which capture the relationship of the covariates X with the outcome LDA and the treatment D, respectively. For g_0, we specify the random forest to consist of 500 trees, each with a maximum depth of seven levels, to predict LDA. This achieves an out-of-sample prediction accuracy of approximately 53% for the R^2. Specifically, we use the “regr.ranger” function in R with the following parameters: num.trees = 500, mtry = 50, min.node.size = 10, max.depth = 7; we refer interested readers to the corresponding R package documentation <cit.>. 
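For illustration, these specifications map onto the mlr3/DoubleML interface roughly as follows. This is a hedged sketch with a hypothetical data frame and column names; the m_0 learner anticipates the specification given in the next paragraph, and exact argument names (e.g., ml_l versus ml_g) as well as further options such as the score or the DML1/DML2 aggregation depend on the package version:

library(DoubleML)
library(mlr3)
library(mlr3learners)
ml_l <- lrn("regr.ranger", num.trees = 500, mtry = 50,
            min.node.size = 10, max.depth = 7)               # learner for g_0
ml_m <- lrn("classif.ranger", num.trees = 500, mtry = 50,
            min.node.size = 10, max.depth = 5,
            predict_type = "prob")                           # learner for m_0
dml_data <- double_ml_data_from_data_frame(
  df, y_col = "lda", d_cols = "d_rated", x_cols = x_names)   # hypothetical names
dml_plr <- DoubleMLPLR$new(dml_data, ml_l = ml_l, ml_m = ml_m,
                           n_folds = 5, n_rep = 2)            # cross-fitting setup
dml_plr$fit()
dml_plr$summary()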
We tuned these parameters based on a 30% training - 70% testing sample split. For reference, the out-of-bag (OOB) R^2 in the training data is 49%. For m_0, we specify the random forest to also consist of 500 trees, but with a slightly lower maximum depth of five levels each. Specifically, we used the “classif.ranger” function in R with the following parameters: num.trees = 500, mtry = 50, min.node.size = 10, max.depth = 5; we refer interested readers to the corresponding R package documentation <cit.>. We tuned these parameters based on a 30% training - 70% testing sample split. For reference, the out-of-bag (OOB) correct classification rate is 87% in the training data. Out-of-sample, we also achieve a correct classification rate of approximately 87%. With these learner specifications and a five-fold split as well as two repetitions (following the recommendation in <cit.>, page 13), total run time with this set-up was approx. 35 minutes on a standard personal computer (Intel Core i7, 8 cores) for the analysis with one binary treatment variable. §.§ Robustness check: alternative model specifications As mentioned in the main section of this paper, we have employed different learner specifications for g_0 and m_0 as a robustness check. The effect estimates across the different model alternatives are very consistent as can be seen from table <ref> reported in the main part of this paper, which we repeat for ease of reading with an extended legend in table <ref>. We also provide here a detailed description of the alternative learner specifications and a discussion of results. For the first alternative model (AM1), we used the specifications of our main model (MM) described in the main section of our paper but changed the cross-fitting algorithm to “DML2” instead of “DML1” (differences between the two algorithms are specified in <cit.>, page 12). In a second alternative model (AM2), we changed the machine learning method for learner g_0 to LASSO <cit.> while keeping the random forest from the main model for m_0. For AM3, we switched to Ridge regression <cit.> for learner g_0, again keeping the random forest from the MM for m_0. For AM4, we maintained the random forest method for both learners, but restrained the trees by limiting their maximum depth to five (g_0) and three (m_0) levels (versus seven and five in MM). The effect estimates across the different model alternatives are very consistent: AM1 results are virtually indistinguishable from MM. The effect estimate only differs in the sixth digit after the decimal point (not shown in the table). Of course, this should be expected, since only the cross-fitting algorithm was changed, while the learner models and parametrization were identical. However, AM2 and AM3, where the machine learning approach for g_0 was changed from random forest to the LASSO and Ridge regression, respectively, also yield causal effect estimates that differ only by 0.5pps to 0.6pps from MM. Similarly, AM4, where both learner functions were “held back from learning” by restricting their tree depth, yields an effect estimate that differs only by 0.6pps from the main model employed in this paper. In terms of p-values, all models are highly significant with p-values of 0.000; only for AM3 (Ridge regression), the p-value is different with 0.011, but of course still clearly below the usual cut-off value of 0.05. lccccc 6lRobustness check: alternative model specifications Rating effect MM AM1 AM2 AM3 AM4 on LDA (RF/RF) (DML2) (LASSO/RF) (Ridge/RF) (Restr.)
θ (rating yes/no) 0.0878 0.0878 0.0925 0.0935 0.0942 Std. error 0.0021 0.0021 0.0023 0.0369 0.0021 t-value 41.8 41.8 40.0 2.5 45.2 p-value 0.000 0.000 0.000 0.011 0.000 Effect (θ) vs. mean 41% 41% 44% 44% 44% Results for the estimated causal effect θ of having a rating (or not) on LDA, according to alternative model (AM) specifications. “MM” refers to the main model specification used throughout the paper. The main characteristics are random forests for both learners with the specifications and tuning parameters detailed in the main text. “AM1” differs from MM only by using a different aggregation procedure (“DML2” versus “DML1” <cit.>) for the score function; results are virtually indistinguishable from MM. “AM2” (“AM3”) uses the LASSO (Ridge) as learner for g_0, while retaining the random forest from MM for m_0.“AM4” is set up like MM, except that the two random forests learners are “restrained" by limiting the maximum depth to five (g_0) and three (m_0) levels versus respectively seven and five in the MM specification. The “Effect (θ) vs. mean” is calculated versus the mean LDA value of 0.212. Parameter estimates and standard errors (bootstrap procedure) are aggregated over a five-fold split with two repetitions for all models. Subsection <ref> describes the data sample. §.§ Effect of investment-grade rating and speculative grade rating (versus having no rating) As mentioned in the main text, the initial analysis of having a rating versus having no rating implicitly assumes that it does not matter which rating a company has: all rating types are the same “treatment” for leverage; since ratings are opinions about credit risk, this implies that the type of opinion would not matter. However, it is easy to argue that different ratings, i.e. different opinions, may in reality represent different treatments, and thus, different versions of the treatment exist. Put differently, our initial analysis may suffer from the fact that it incorrectly assumes that there are “no hidden variations of treatments” <cit.> (pages 10-13). This is one of the assumptions included in the “stable unit treatment value assumption” (SUTVA) <cit.>, which provides a fundamental framework for causal analysis. Interested readers can access a vast literature on this topic, for instance <cit.> or <cit.>). We therefore investigate in a second analysis whether the rating effect differs between the two very broad categories of “investment-grade” and “speculative-grade” (non-investment grade, “junk bonds”). Rating agencies themselves explicitly categorize their different ratings into these two broad groups <cit.> and the distinction has significant implications for regulatory purposes as mentioned in section <ref>. The partially linear model described in equations <ref> and <ref> can thus be written to contain two different binary treatment variables, D^InvGR_i,t and D^SpeGR_i,t and their corresponding causal parameters θ^InvGR and θ^SpeGR. These two treatment variables specify whether a given company i had in year t an investment-grade rating (D^InvGR_i,t = 1, D^SpeGR_i,t = 0) or a speculative-grade rating (D^InvGR_i,t = 0, D^SpeGR_i,t = 1), or no rating at all (D^InvGR_i,t = 0, D^SpeGR_i,t = 0):[We remind ourselves that investment-grade ratings include rating categories from AAA to BBB-, speculative-grade ratings include BB+ and below and that the three categories (investment-grade, speculative-grade, no rating) are mutually exclusive and collectively exhaustive at any given point in time for each company. 
Of course, the (granular) rating for a company can change within a given year; however, the likelihood of change across these three very broad categories is small.] LDA_i,t=θ^InvGR D^InvGR_i,t+θ^SpeGR D^SpeGR_i,t+g_0(X_i,t) +ζ_i,t with 𝔼(ζ_i,t | D^InvGR_i,t,D^SpeGR_i,t,X_i,t) = 0. Equation <ref> is defined accordingly to reflect two different binary treatment variables. By considering two treatment variables, we are now conducting (causal) inference on multiple parameters at the same time. Therefore, we need to take into account the “multiplicity problem”: the possibility of falsely identifying an effect as “significant” increases with the number of treatments tested. Several methods have been proposed to account for this (see <cit.> for a condensed review and applications in high-dimensional settings). The classical method to control the “family-wise error rate” (i.e., the probability of at least one false rejection of the null hypothesis of no causal effect) is the Bonferroni correction; it is considered as very conservative <cit.>. As an alternative, the Benjamini-Hochberg false discovery rate control <cit.>, which targets the expected share of falsely rejected null hypotheses, relies on independence between tests, which is often an unrealistic assumption <cit.>. Other approaches attempt to maintain the concept of the family-wise error rate while reducing its conservatism. These include step-down methods, such as the step-down method of Holm <cit.> or the more recent Romano-Wolf step-down procedure <cit.>, which also takes the dependence structure of test statistics into consideration. Another approach for valid simultaneous inference relies on the multiplier bootstrap procedure proposed by <cit.>. This procedure iterates over the set of treatment variables and selects each of them to individually estimate its effect on the outcome variable; the other, currently not selected treatment variables are included in the nuisance functions. Table <ref> reports results for the effect estimate of investment-grade ratings and the effect of speculative-grade ratings on leverage (versus the baseline of no rating). Taking into account the multiplicity problem of simultaneous inference on multiple parameters described above, we report multiplier bootstrap (MB) standard errors and p-values, as well as, for comparison, the corresponding Romano-Wolf (RoWo) and Bonferroni (Bonf) p-values. lrcccc 6lInvestment- versus speculative-grade rating category Rating effect (on LDA) Coef. MB MB RoWo Bonf estim. Std. error p-val. p-val. p-val. θ^InvGR (investment-grade) -0.0030 0.0024 0.209 0.204 0.417 θ^SpeGR (speculative-grade) 0.1045 0.0022 0.000 0.000 0.000 Results for the estimated causal effect on leverage of having an investment-grade rating (θ^InvGR) or a speculative-grade rating (θ^SpeGR) versus the baseline of having no rating. The empirical design (<ref>), the data (<ref>) and the random forest characteristics (<ref>) are described in the main text. Standard errors and corresponding p-values are corrected for simultaneous multiple inference: “MB” refers to the multiplier bootstrapping method, “RoWo” to the Romano-Wolf procedure and “Bonf” to the Bonferroni-correction. Our estimates show that speculative-grade ratings have a large effect on leverage: on average, having a speculative-grade rating increases leverage by nearly 10.5pps. Also, p-values for θ^SpeGR are highly significant across the three reported methods. 
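Before turning to the investment-grade estimate, a hedged sketch of how such simultaneous inference could be requested with the interface sketched earlier may be useful; the treatment column names are hypothetical, and the available adjustment methods and their labels depend on the package version:

dml_data_2 <- double_ml_data_from_data_frame(
  df, y_col = "lda", d_cols = c("d_invgrade", "d_specgrade"), x_cols = x_names)
dml_plr_2 <- DoubleMLPLR$new(dml_data_2, ml_l = ml_l, ml_m = ml_m,
                             n_folds = 5, n_rep = 2)
dml_plr_2$fit()
dml_plr_2$bootstrap(method = "normal", n_rep_boot = 500)   # multiplier bootstrap
dml_plr_2$p_adjust(method = "romano-wolf")                 # Romano-Wolf p-values
dml_plr_2$p_adjust(method = "bonferroni")                  # Bonferroni p-values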
The coefficient estimate for investment-grade ratings θ^InvGR, however, is close to zero with -0.3pps. It is hardly relevant from an economic perspective and p-values are not significant, surpassing 0.20 for the multiplier bootstrap and Romano-Wolf procedure and even 0.40 for the (more conservative) Bonferroni-corrected one. Thus, it is in reality the speculative-grade rating category that drives the (apparent) general rating effect (having any rating versus having no rating) identified in the initial analysis. In contrast, having an investment-grade rating does not affect leverage. At this stage, the result of our analysis refines the understanding of the rating effect proposed by <cit.>, who had concluded that firms with a rating, i.e., any rating, have more debt. Rather, our data suggest that firms with low ratings, i.e., speculative-grade ratings, have more debt, while the effect from investment-grade ratings on leverage is approximately zero. Considering these very different results between investment- and speculative-grade ratings, we explore in the following subsection the rating effect by individual broad rating category. §.§ Effect of rating by individual broad rating category The analysis in the previous section yielded a highly heterogeneous treatment effect for the two very general groups of investment- and speculative-grade rating. In this section, we explore whether treatment effects are also heterogeneous at finer levels. We remind ourselves from section <ref> that the “broad” rating categories are defined by one to three letters (such as AAA, AA, A, BBB). Within the broad categories from AA to CCC, three more granular sub-categories (“notches”) exist, separated by “+” and “-” signs, for instance AA+, AA and AA-. To add clarity, we will label the granular sub-category ratings without a “+” or “-” sign as “straight” (e.g. “AAstraight”) and the broad categories as “broad” (e.g. “AAbroad”). Thus, AAbroad is comprised of AA+, AAstraight and AA-. Applying the same approach as in the investment versus speculative grade rating analysis, we can determine the causal effect estimate for the ten different broad rating categories, again adjusting the standard errors and p-values for the fact that we test multiple hypotheses. The results in table <ref> provide an interesting picture: effects are highly heterogeneous across the broad rating categories, but follow a distinct pattern. The effect estimates for the two highest-quality ratings (AAAbroad and AAbroad) are negative and highly significant. The AAAbroad rating reduces leverage by approximately 6pps, and the AAbroad rating by approximately 4pps. The effect of the next two categories (Abroad and BBBbroad) can be considered zero, both in terms of the parameter estimate itself (0.01pps and -0.09pps, respectively) and in terms of their p-values, which suggest by their values of 0.956 and 0.677 (for the multiplier bootstrapping method) that the null hypothesis of no effect can hardly be rejected based on the observed data. The coefficient estimates for the next four categories, BBbroad to CCbroad, are all positive and the corresponding p-values highly significant. Thus, these ratings increase the leverage ratio by between approx. 5pps (BBbroad) and 15pps (CCbroad). Coefficient estimates for the categories corresponding to (partial) default, SDbroad and Dbroad, are still positive, albeit of much smaller magnitude; however, their p-values suggest that the null hypothesis of no effect can hardly be rejected.
With ten different treatments tested simultaneously, the difference in p-values between the three methods employed to account for simultaneous inference becomes also more pronounced in table <ref> as compared to the situation with only two treatment variables in table <ref>. However, the results of the three methods are very consistent in their general direction, especially if customary cutoffs (e.g., 0.01 or 0.05) are used for p-values.[Please see footnote <ref> in the main part of this paper regarding the p-value controversy.] The results also support the previously indicated view that the Bonferroni correction is more conservative than the two other methods. lrcccc 6lBroad rating categories Rating effect Coef. MB MB RoWo Bonf (on LDA) Estim. Std. Error p-val. p-val. p-val. 6lBroad rating categories Rating effect Coef. MB MB RoWo Bonf (on LDA) estim. Std. error p-val. p-val. p-val. 6rContinued on next page θ^AAA broad -0.0582 0.0189 0.002 0.015 0.021 θ^AA broad -0.0385 0.0068 0.000 0.000 0.000 θ^A broad 0.0001 0.0027 0.956 0.950 1.000 θ^BBB broad -0.0009 0.0021 0.677 0.942 1.000 θ^BB broad 0.0512 0.0024 0.000 0.000 0.000 θ^B broad 0.1301 0.0031 0.000 0.000 0.000 θ^CCC broad 0.1284 0.0144 0.000 0.000 0.000 θ^CC broad 0.1471 0.0044 0.001 0.004 0.008 θ^SD broad 0.0597 0.0531 0.261 0.689 1.000 θ^D broad 0.0141 0.0294 0.632 0.942 1.000 Results for the estimated causal effect on leverage by broad rating category versus the baseline of having no rating. Broad rating categories comprise the “+” and “-” notch qualifications for those categories within which they exist (e.g., “AAbroad” includes the S&P rating categories AA+, AA and AA-). The “C”-rating category is absent as no firm-year had such a rating over the sample period. The “SD” rating indicates that while a “selective” default on a particular debt instrument occurred, the company is believed to honor the other obligations. Standard errors and corresponding p-values are corrected for simultaneous multiple inference: “MB” refers to the multiplier bootstrapping method, “RoWo” to the Romano-Wolf procedure and “Bonf” to the Bonferroni-correction. Figure <ref> is a graphical representation of the results from table <ref>. The shape of the bar chart provides a visual impression about the heterogeneity of the treatment effect and its pronounced pattern following the broad rating categories. From AAAbroad to BBBbroad, the effect is slightly negative to neutral. From BBbroad onward, the effect turns clearly positive (i.e., higher leverage). This is also the dividing line between investment-grade and speculative-grade rating as per the analysis in subsection <ref>, which yielded a strong positive effect for speculative-grade rating versus hardly any effect for investment-grade rating. What we interpret as reassuring is the fact that the treatment effect estimates for the individual broad rating categories are very consistent within the two respective “aggregate categories” of investment- versus speculative-grade rating. For instance, alternating positive and negative estimates within the speculative categories would appear to be much more counter-intuitive. As a robustness check, we estimate the rating effect by broad category also for the market leverage (LDMA) as defined in equation <ref>. We report here only the graphical representation of the results without commenting them in detail as we consider them very reassuring. 
Even though AAAbroad is slightly lower than AAbroad, figure <ref> for LDMA displays a stark resemblance to the shape in figure <ref>. Notably, the overall sequence of effect estimates from negative, roughly neutral to highly positive (and the tapering off at the tail end) resembles the one from our main analysis for LDA. §.§ Robustness check: rating effects in a different sample period As a complementary robustness check for our results from the previous analyses, we consider a second data sample from a different time period. Using double machine learning with the same analytical methodology and data sources as described in subsections <ref>, <ref> and <ref>, we step back in time to the years 2000 to 2004 to arrive at a second data sample of 32'162 company-year observations. With this new data sample, we want to assess our main findings: first, the existence of an effect on leverage from having a rating (versus having no rating); and second, that this rating effect is heterogeneous across rating categories. In particular, we are interested in whether the second sample confirms the characteristic shape of the rating effect by broad and granular rating category observed in figures <ref> and <ref>. Verifying our findings with this data sample from a different period provides reassurance that the results also hold under potentially different (macro)economic, geopolitical and societal circumstances.[For instance, the “dotcom bubble” burst in 2000, and 2001 saw the terrorist attacks on the World Trade Center; the Euro was introduced in twelve European Union countries in 2002, and 2003 saw the end of Saddam Hussein's rule as Iraqi president; Google's IPO occurred in 2004. More directly relevant to the topic of this paper, <cit.> (page 647) observe that the relative increase of companies with speculative grade ratings during 2008 to 2018 was due to newly rated companies entering the debt market, motivated by low interest rates. Thus, the 2000 to 2004 period represents a different environment.] Table <ref> compares the results of our main analysis (as per table <ref>) in the left column with the results from the second sample period in the right column. The rating effect estimate amounts to 9.6pps, which is 0.8pps higher than the parameter estimate of 8.8pps from the main sample. Compared to the mean leverage of the sample, this corresponds to an impact of 43% versus 41% from the main sample. Again, the rating effect is highly significant, both statistically and economically. We interpret this result as adding another piece of evidence confirming the presence of a rating effect. lcc Rating effect 2005-2015 2000-2004 on leverage (LDA) n=57'832 n=32'162 θ (rating yes/no) 0.0878 0.0962 Std. error 0.0021 0.0029 t-value 41.8 32.9 p-value 0.000 0.000 Memo: mean leverage 0.212 0.224 Rating effect (θ) vs. mean 41% 43% Comparison of results for the estimated causal effect θ of having or not having a rating on leverage (LDA) between the main data sample from 2005 to 2015 with 57'832 company-year observations and a second, different data sample for the years 2000 to 2004 with 32'162 company-year observations. The methodology for the second data sample is the same as for the main one (as described in previous sections), including aggregation of parameter estimates and standard errors over a five-fold split with two repetitions. Figures <ref> and <ref> are graphical representations of the rating effect estimates from the second data sample for the broad and granular rating categories.
The respective multiplier bootstrap p-values are displayed underneath the effect estimates. The shapes in both charts are very similar to the ones resulting from the analysis of the main samples in figures <ref> and <ref>. For the ten broad categories in figure <ref>, the effect is slightly negative to neutral from AAAbroad to BBBbroad. From BBbroad onward, the effect turns clearly positive (i.e., higher leverage) and reduces at the tail end in the default categories of SDbroad and Dbroad. As with the main results, the switch from negative/neutral to positive is situated at the dividing line between investment-grade and speculative-grade rating. And again, the individual broad rating treatment effect estimates are very consistent within the two respective “aggregate categories” of investment- versus speculative-grade rating. For the 22 granular rating categories in figure <ref>, overall results from the second data sample are also very similar to those from the main sample. For the four rating categories without notch qualification (AAA, CC, SD and D), the effect estimates are virtually the same from the granular analysis as compared to the broad analysis. The overall shape of the rating effect curve also resembles the one of the main analysis. Importantly, we again see the gradual increase of the rating effect within the BB category. This confirms our previous observation that the rating impact is not “sharp” at the dividing line between investment- (BBB) and speculative-grade (BB) rating. One effect estimate that stands out in figure <ref> is the one for the CCC- category. However, similar to what we observed in the main sample, the number of observations in this category is very low (n=4 in the second sample period versus n=9 in the main sample). A second observation we need to emphasize is the behavior of the estimated rating effects within the A and BBB categories. In the main sample, we found “concave” shapes within both categories (please refer to table <ref> in the main part of this paper). Specifically, A+ and A- displayed negative effect estimates, while the effect estimate for Astraight was positive. The same was true for BBB+ and BBB- relative to BBBstraight. In our second sample, the concavity disappears. For A ratings, it actually flips into convexity: A+ and A- display positive effect estimates, while the effect estimate for Astraight is negative. And within the category of BBB ratings, BBB+ and BBBstraight display a negative impact, while the effect estimate is positive for BBB+. We conclude that the hesitation voiced in the main part of the text to build elaborate theories on such findings is justified. Our ad-hoc interpretation in subsection <ref> for the concavity which we found in the main samples could probably serve as an example that “[h]umans are extraordinarily quick to infer that the events they observe are caused by creatures with plans and intentions” <cit.> (page 94). For ease of comparison, we also plot the effect estimates from the main analysis next to the ones from the second data sample for the broad categories (figure <ref>) and for the granular rating categories (figure <ref>). The similarity of the shapes from the main and the second data sample is compelling. §.§ Robustness check: rating effects when including interest coverage as a covariate As described in subsection <ref>, we excluded data items that would allow the random forest as a very flexible learner to back-calculate total debt or equity.
However, we still want to verify that the rating effect estimates hold when including selected items that determine credit ratings (or are at least strongly believed to do so). <cit.> (pages 645-650) explain that “credit ratings are primarily related to two financial indicators” (page 647). One of them is size, which we have already included via items such as the logarithm of sales, the logarithm of assets or the number of employees in our set of covariates. The second is interest coverage, which measures “a company's ability to comply with its debt service obligations” (page 648). We therefore include interest coverage (IntCov) as defined in <cit.> (Exhibit 33.8, left panel, page 649): IntCov_i,t=EBITDA_i,t / Interest expenses_i,t where EBITDA represents earnings before interest, taxes, depreciation and amortization and interest expenses represents the expenses for servicing a company's total financial debt.[In Compustat, this is the item with code “xint” (“Interest and Related Expense - Total”).] We use our empirical sample as described in subsection <ref> and remove company-years with interest expenses of less than USD ten thousand in a given year and arrive at 48'585 company-year observations. We make no change to the double machine learning model as described in subsections <ref> and <ref>. Table <ref> compares the results for the general rating effect estimate from for our main analysis (as per table <ref>) with those from the approach in this subsection which includes interest coverage (“IntCov”) as a feature in the set of covariates. The effect estimate amounts to approximately 7pps including IntCov, or 29% versus the sample mean leverage of roughly 25%. This effect estimate is 1.5pps lower than the one from the main analysis, which translates into a 10pps drop in the relative effect magnitude versus the mean leverage (29% versus 41% in the main analysis). Still, the rating effect remains clearly present. Key is now to assess the effect heterogeneity and shape of the effect curve in subsequent steps. lcc Rating effect Excl. IntCov Incl. IntCov on leverage (LDA) n=57'832 n=48'585 θ (rating yes/no) 0.0878 0.0731 Std. error 0.0021 0.0021 t-value 41.8 35.3 p-value 0.000 0.000 Memo: mean leverage 0.212 0.249 Rating effect (θ) vs. mean 41% 29% Comparison of results for the estimated causal effect θ of having a rating (or not) on leverage (LDA) depending on whether interest coverage (“IntCov”) as defined in equation <ref> is excluded or included in the set X of covariates as per equations <ref> and <ref>. The general methodology for “Incl. IntCov” is the same as for the main model used throughout this paper (“Excl. IntCov”, as described in previous sections), including aggregation of parameter estimates and standard errors over a five-fold split with two repetitions. The results for the ten broad rating categories are summarized in table <ref>. Figure <ref> provides a graphical representation of their effect heterogeneity and figure <ref> compares the effect shape with the results from the main analysis previously reported in table <ref>. lrcccc 6lBroad rating categories with IntCov included as covariate Rating effect Coef. MB MB RoWo Bonf (on LDA) estim. Std. error p-val. p-val. p-val. 6lBroad rating categories with IntCov included as covariate Rating effect Coef. MB MB RoWo Bonf (on LDA) estim. Std. error p-val. p-val. p-val. 
6rContinued on next page θ^AAA broad -0.0532 0.0187 0.004 0.024 0.044 θ^AA broad -0.0397 0.0064 0.000 0.000 0.000 θ^A broad 0.0002 0.0026 0.943 0.998 1.000 θ^BBB broad -0.0053 0.0020 0.008 0.032 0.078 θ^BB broad 0.0398 0.0024 0.000 0.000 0.000 θ^B broad 0.1128 0.0031 0.000 0.000 0.000 θ^CCC broad 0.1135 0.0142 0.000 0.000 0.000 θ^CC broad 0.1377 0.0429 0.001 0.007 0.013 θ^SD broad 0.0461 0.0560 0.410 0.799 1.000 θ^D broad 0.0019 0.0286 0.948 0.998 1.000 Results for the estimated causal effect on leverage (LDA) by broad rating category with interest coverage (“IntCov”) included in the set of covariates. Standard errors and corresponding p-values are corrected for simultaneous multiple inference: “MB” refers to the multiplier bootstrapping method, “RoWo” to the Romano-Wolf procedure and “Bonf” to the Bonferroni-correction. Parameter estimates and standard errors are aggregated over a five-fold split with two repetitions. The analytical approach within the double machine learning framework is the same as previously described for the main analyses in this paper. Results for the broad rating categories from the double machine learning specification including interest coverage are very similar to the ones from the main analysis without interest coverage. First, results confirm the heterogeneity of the effects: effect estimates for the two highest rating categories are again negative, then effect estimates are zero, before increasing to positive for BB ratings and exceeding 10pps for the remainder of ratings excluding SD and D default ratings. Second, the differences between the effect estimates along the rating scale are similar to those from the main analysis, thus yielding the same overall shape of the effect curve. Figure <ref> illustrates these two points. Third, p-values across the three methodologies employed to account for multiple hypothesis testing are consistent with each other. Figure <ref> also confirms the observation from the result regarding the effect of any rating versus no rating, reported in table <ref>. There, the absolute effect estimate was roughly 1.5pps lower including interest coverage as an additional covariate than excluding it. The same is true for the individual effect estimates by broad rating category: including interest coverage, most of them are between 1 and 2pps lower than excluding interest coverage, but nevertheless economically relevant and highly significant from a statistical perspective. For the 22 granular rating categories, we provide in this subsection with figure <ref> the effect estimates together with the corresponding multiplier bootstrap p-values and also a graphical comparison of results including versus excluding interest coverage in figure <ref>. Also at this granular level, the results including interest coverage are very similar to those from the main analysis without interest coverage, both in terms of magnitude and overall shape. In particular, we also see the gradual rise in effect size over the notch-ratings within the BBB and BB rating classes, which supports our previous finding that there is no sharp divide in effect between investment-grade and speculative-grade ratings. In conclusion, results from the robustness checks confirm our three main conclusions on the rating effects on leverage: first, ratings affect the leverage ratio. Second, this effect is heterogeneous and depends on the rating category.
Third, the transition of the effect size is gradual over the individual, granular categories within BBB and BB and thus does not occur sharply at the switch from investment grade to speculative grade.
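To make this robustness check concrete, the following sketch outlines how the interest-coverage covariate and a cross-fitted partialling-out estimate of the rating effect could be assembled. It is an illustration only: the file name, the column names and the random-forest learners are our assumptions, not the authors' actual pipeline, which follows the double machine learning specification described in the preceding subsections.

```python
# Illustrative sketch only (not the authors' code): cross-fitted
# partialling-out estimate of the effect of having a rating on leverage,
# with interest coverage (IntCov) added to the covariates.
# Assumptions: file name, column names and the random-forest learners.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import KFold

df = pd.read_csv("compustat_panel.csv")            # hypothetical input panel
df["IntCov"] = df["ebitda"] / df["xint"]           # interest coverage as defined above
df = df[df["xint"] >= 0.01]                        # drop interest expenses < USD 10k (xint assumed in USD millions)

y = df["lda"].to_numpy(dtype=float)                # leverage (LDA)
d = df["has_rating"].to_numpy(dtype=float)         # treatment: rating yes/no
X = df[["log_sales", "log_assets", "employees", "IntCov"]].to_numpy(dtype=float)

num, den = 0.0, 0.0
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    m_hat = RandomForestRegressor(random_state=0).fit(X[train], y[train])
    g_hat = RandomForestClassifier(random_state=0).fit(X[train], d[train].astype(int))
    y_res = y[test] - m_hat.predict(X[test])                  # partial out X from the outcome
    d_res = d[test] - g_hat.predict_proba(X[test])[:, 1]      # partial out X from the treatment
    num += np.sum(d_res * y_res)
    den += np.sum(d_res * d_res)

print("estimated rating effect on leverage:", num / den)
```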
http://arxiv.org/abs/2406.18289v1
20240626121818
On Shilnikov's scenario with a homoclinic orbit in 3D
[ "Hans-Otto Walther" ]
math.DS
[ "math.DS", "math.CA", "37D45, 34C28" ]
§ INTRODUCTION The purpose of the present account is to provide a detailed proof that complicated motion exists in Shilnikov's scenario <cit.>, which consists of a smooth vectorfield V:ℝ^3→ℝ^3 with V(0)=0 so that the equation x'(t)=V(x(t)) has a homoclinic solution h:ℝ→ℝ^3∖{0} with lim_|t|→∞h(t)=0, and DV(0) has eigenvalues u>0 and σ± i μ, σ<0<μ, with (H)0<σ+u. Of course, smoothness of V is to be specified in this statement. Shilnikov's result in <cit.> is that for V analytic there is a countable set of periodic orbits close to the homoclinic orbit h(ℝ). A related, slightly stronger statement about complicated motion is conjugacy with the shift (s_j)_-∞^∞↦(s_j+1)_-∞^∞ in two symbols s_j∈{0,1}, for a return map, which is given by intersections of solutions with a transversal to the homoclinic orbit. Work on verification of this property and of related ones can be found in the monographs <cit.> and in the references given there. A strong simplifying hypothesis is that V is linear close to the equilibrium. For a detailed presentation under this hypothesis see <cit.>. Shilnikov-type results on complicated motion close to a homoclinic orbit approaching a stationary state have also been obtained for semiflows in infinite-dimensional spaces, e. g. in <cit.> on delay differential equations which are linear close to the stationary state. These results are related not to <cit.> but to Shilnikov's work on vectorfields on ℝ^4 <cit.> which is analogous to <cit.> except for the presence of pairs of complex conjugate eigenvalues of DV(0) in either halfplane. In the sequel we consider Shilnikov's scenario in ℝ^3 for V twice continuously differentiable. The hypothesis of second order differentiability, as opposed to minimal smoothness of merely continuous differentiability, serves for the justification of the facilitating additional assumption that for V (just once) continuously differentiable both eigenspaces of DV(0) are invariant under V in a vicinity of x=0. 
The Appendix in Section 9 describes how the said invariance property can be achieved by means of a transformation which, however, reduces the order of differentiability by one. The result on complicated motion is stated in the final Theorem 8.2. Its proof uses the scaled form y'(t)=V_ϵ(y(t)), ϵ>0, of Eq. (1), with V_ϵ:ℝ^3→ℝ^3 given by V_ϵ(x)=1/ϵV(ϵ x). Eq. (2) is equivalent to Eq.(1) as y is a solution to Eq. (2) if and only if x=ϵ y is a solution to Eq. (1). Following <cit.> we introduce a return map in a transversal to the homoclinic orbit, with domain on one side of the local stable manifold W^s_loc at y=0, ”above" or ”below" W^s_loc, compare Figure 1 on page 3. The return map is transformed to a map in the plane. For ϵ>0 sufficiently small we find disjoint sets M_0,M_1 so that for each symbol sequence (s_j)_j=-∞^∞ there exist trajectories (x_j)_j=-∞^∞ of the transformed return map with x_j∈ M_s_j for all integers j. The result is achieved by an examination of the expansion of curves under the return map, and by the inspection of intersections of the expanded curves with the domain of the return map. The reader may observe that the proof of Theorem 8.2 actually yields a countable family of sets of complicated trajectories of the return map. No attempt is made in the present paper to verify covering relations like for example a Smale horseshoe, and neither existence of periodic orbits close to the homoclinic nor conjugacy of the return map on some invariant set with the shift in two symbols are touched upon. Notation, preliminaries. A forward trajectory of a map f:M⊃ dom→ M is a sequence (x_j)_0^∞ in dom with with x_j+1=f(x_j) for all integers j≥0. Entire trajectories are defined analogously, with all integers as indices. For a vectorspace X, x∈ X and M⊂ X we set x± M={y∈ X:y∓ x∈ M}. Similarly, for A⊂ℝ and x∈ X, Ax={y∈ X: a∈ A, y=ax}. The interior, the boundary, and the closure of a subset of a topological space are denoted by int M, ∂ M, and cl M, respectively. Components of vectors in Euclidean spaces ℝ^n are indicated by lower indices.The inner product on ℝ^n is written as <x,y>=∑_i=1^nx_iy_i, and we use the Euclidean norms given by |x|=√(<x,x>). The vectors of the canonical orthonormal basis on ℝ^n are denoted by e_j, j=1,…,n, e_jj=1 and e_jk=0 for j≠ k. In ℝ^3 we write L=ℝe_1⊕ℝe_2 and U=ℝe_3. The associated projections onto L and onto U are P_L:ℝ^3→ℝ^3 and P_U:ℝ^3→ℝ^3, respectively. For every x∈ℝ^3, |P_Ux|^2+|P_Lx|^2=|x|^2, and each of the projections has norm 1 in L_c(ℝ^3,ℝ^3). For a function f:ℝ^n⊃ dom→ℝ and for x∈ dom the derivatives and partial derivatives are linear maps Df(x) and D_jf(x), respectively. Partial derivatives as numbers satisfy ∂_jf(x)=D_jf(x)1=Df(x)e_j. The tangent space T_xM of a continuously differentiable submanifold M of ℝ^n at x∈ M is the set of tangent vectors v=c'(0)=Dc(0)1 of continuously differentiable curves c:I→ℝ^n with I an interval, c(I)⊂ M, 0∈ I, c(0)=x. For a continuously differentiable map f:M→ N, N a continuously differentiable submanifold of ℝ^n, the derivative at x∈ M is the linear continuous map T_xf:T_xM→ T_f(x)N given by T_xf(v)=(f∘ c)'(0), for v and c as before. The flow ℱ generated by a vectorfield 𝒱:ℝ^n⊃𝒰→ℛ^n which is locally Lipschitz continuous is the map ℝ×ℝ^n⊃ dom_ℱ→ℝ^n which is given by (t,x)∈ dom_ℱ if and only if t belongs to the domain of the maximal solution y:I_x→ℝ^n of the differential equation x'(t)=𝒱(x(t)) with initial value y(0)=x, and ℱ(t,x)=y(t) in this case. ℱ is of the same order of differentiability as 𝒱. 
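As a purely numerical illustration of this notation, and not part of the argument, the projections P_L and P_U and a flow map obtained by numerical integration can be written down directly. The vector field V in the sketch below is an arbitrary smooth placeholder, not the field of the theorem.

```python
# Minimal numerical illustration of the notation above: the projections P_L
# and P_U onto L = span{e_1, e_2} and U = span{e_3}, and the flow F(t, x) of a
# locally Lipschitz vector field obtained by numerical integration.  The
# vector field V used here is an arbitrary smooth placeholder.
import numpy as np
from scipy.integrate import solve_ivp

P_L = np.diag([1.0, 1.0, 0.0])
P_U = np.diag([0.0, 0.0, 1.0])

x = np.array([0.3, -0.7, 0.2])
# |P_U x|^2 + |P_L x|^2 = |x|^2, and both projections have norm 1
assert np.isclose(np.linalg.norm(P_U @ x) ** 2 + np.linalg.norm(P_L @ x) ** 2,
                  np.linalg.norm(x) ** 2)

def V(z):                                   # placeholder smooth vector field on R^3
    return np.array([-z[0] + z[1] * z[2], -z[1], 2.0 * z[2] - z[0] ** 2])

def F(t, x0):                               # flow map: value at time t of the solution with F(0, x0) = x0
    sol = solve_ivp(lambda s, z: V(z), (0.0, t), np.asarray(x0, dtype=float),
                    rtol=1e-10, atol=1e-12)
    return sol.y[:, -1]

print(F(1.0, x), F(-1.0, x))                # forward and backward in time
```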
A subset M⊂ U is invariant under ℱ if x∈ M implies ℱ(t,x)∈ M for all t∈ℝ with (t,x)∈ dom_ℱ. For a further subset N⊂ U the set M is called invariant under ℱ in N if for every x∈ M∩ N and for every interval I∋ 0 with ℱ(I×{x})⊂ N we have ℱ(I×{x})⊂ M. § A SIMPLER SETTING, AND SCALED VECTORFIELDS We assume from here on that we are in Shilnikov's scenario with V continuously differentiable, and that in addition V has the subsequent properties (A1) and (A2). In the Appendix (Section 9) it is explained how (A1) and (A2) can be achieved if we start from Shilnikov's scenario for a vectorfield which is twice continuously differentiable. (A1) For all x∈ℝ^3, DV(0)x=Ax with a matrix A= ( [ ρ σ 0; -σ ρ 0; 0 0 u ]). Then AL=L and AU=U are the (realified) eigenspaces associated with the eigenvalues σ± i μ and u, respectively. (A2) There exists r_V>0 with P_UV(x)=0 x∈ L, |x|≤ r_V, P_LV(x)=0 x∈ U, |x|≤ r_V, and there are reals t_U<t_L with h(t)∈ U t≤ t_U, h(t)∈ L t≥ t_L. In order to exploit the hypothesis (H) on eigenvalues we need to study solutions of Eq.(1) in small neighbourhoods of the origin where V is appropriately close to its linearization DV(0). Alternatively one can study solutions of the equivalent Eqs. (2) for small ϵ>0 in a fixed neighbourhood of the origin. We prefer the latter and follow solutions of the Eqs. (2) inside the compact truncated solid cylinder B_1={x∈ℝ^n:|P_Ux|≤1,|P_Lx|≤1}, Let F:ℝ^3⊃ dom→ℝ^3 denote the flow generated by Eq.(1). The scaled vectorfields V_ϵ:ℝ^3→ℝ^3, ϵ>0, are continuously differentiable and satisfy V_ϵ(0)=0. For every ϵ>0, - the solution h_ϵ=1/ϵh of Eq. (2) satisfies lim_|t|→∞h_ϵ(t)=0, - we have DV_ϵ(0)=DV(0) since DV_ϵ(z)y=1/ϵDV(ϵ z)ϵ y=DV(ϵ z)y for ϵ>0,z∈ℝ^3,y∈ℝ^3, - the flow F_ϵ:ℝ×ℝ^3⊃Ω_ϵ→ℝ^3 generated by Eq. (2) is given by (t,x_0)∈Ω_ϵ if and only if (t,ϵ x_0)∈Ω, and F_ϵ(t,x_0)=1/ϵF(t,ϵ x_0). The following proposition states precisely that in B_1 the linear spaces L and U are invariant under scaled vectorfields V_ϵ. For 0<ϵ≤ r_V and for all x∈ B_1, P_UV_ϵ(P_Lx)=0 P_LV_ϵ(P_Ux)=0. Proof. For 0<ϵ≤ r_V and x∈ B_1, we have P_Lx∈ L∩ B_1 and |ϵ P_Lx|=ϵ |P_Lx|≤ϵ≤ r_V. Using Eq. (3) we get P_UV(ϵ P_Lx)=0, which in turn yields 0=P_U1/ϵV(ϵ P_Lx)=P_UV_ϵ(P_Lx). The other equation of the assertion is shown analogously. Proposition 2.1 can be rephrased as V_ϵ(L∩ B_1)⊂ L V_ϵ(U∩ B_1)⊂ U 0<ϵ≤ r_v. The next proposition expresses closeness of scaled vectorfields to their linearization at the origin in terms of components in U and L. For every η>0 there exists ϵ(η)∈(0,r_V] such that for all ϵ∈(0,ϵ(η)), |DV(ϵ y)-A| ≤ η y∈ B_1, |P_U(V_ϵ(x)-Ax)| ≤ η|P_Ux| x∈ B_1, |P_L(V_ϵ(x)-Ax)| ≤ η|P_Lx| x∈ B_1. Proof. Let η>0 be given. By continuity of DV at 0 there exists ϵ(η)∈ (0,r_V] so that for all ϵ∈(0,ϵ(η)) and all y ∈ B_1, |DV(ϵ y)-A|≤η, which is Eq. (7). Proof of Eq. (8): |P_U(V_ϵ(x)-Ax)| = |P_U(V_ϵ(x)-Ax)-(P_U(V_ϵ(P_Lx)-AP_Lx))| P_UAP_L=0) = |∫_0^1P_U(DV_ϵ(y_t))-A)[x-P_Lx]dt| y_t=P_Lx+t(x-P_Lx)=P_Lx+tP_Ux∈ B_1 = |∫_0^1P_U{1/ϵDV(ϵ y_t)∘(ϵ· id)-A}[P_Ux]dt| = |∫_0^1P_U{DV(ϵ y_t)-A}[P_Ux]dt|≤max_0≤ t≤1|P_U{DV(ϵ y_t)-A}||P_Ux| ≤ max_y∈ B_1|DV(ϵ y)-A||P_Ux|≤η|P_Ux| |P_U|=1). Eq. (9) is shown analogously. Incidentally, notice that conversely the inequalities (8) and (9) for x∈ B_1 imply that V_ϵ(U∩ B_1)⊂ U and V_ϵ(L∩ B_1)⊂ L. There exists ϵ_M>0 so that for every ϵ∈(0,ϵ_M) there are t_E,ϵ<t_I,ϵ with h_ϵ(t_E,ϵ)∈{-e_3,e_3} h_ϵ(t_I,ϵ)∈ L, |h_ϵ(t_I,ϵ)|=1. Either h_ϵ(t)∈(-∞,0)e_3 for all t≤ t_E,ϵ, or h_ϵ(t)∈(0,∞)e_3 t≤ t_E,ϵ. For all t≥ t_I,ϵ, h_ϵ(t)∈ L. Proof. 1. 
For all t≤ t_U, h(t)∈ U∖{0}=(-∞,0)e_3∪(0,∞)e_3. Hence either h(t)∈(-∞,0)e_3 for all t≤ t_U, or h(t)∈(0,∞)e_3 for all t≤ t_U. 2. On t_E,ϵ for ϵ∈(0,|h(t_U)|). In case h(t)∈(0,∞)e_3 for all t≤ t_U the relation lim_t→-∞h(t)=0 shows that for each ϵ∈(0,|h(t_U)|) there exists t_E,ϵ≤ t_U with |h(t_E,ϵ)|=ϵ. It follows that h(t_E,ϵ)=ϵ e_3. For the same ϵ<|h(t_U)| and for all t≤ t_E,ϵ≤ t_U we get h_ϵ(t)=1/ϵh(t)∈(0,∞)e_3 and in particular, h_ϵ(t_E,ϵ)=1/ϵh(t_E,ϵ)=e_3. Analogously for the second case. 3. On t_I,ϵ for ϵ∈(0,|h(t_L)|). The relations h(t)∈ L for t≥ t_L and lim_t→∞h(t)=0 show that for each ϵ∈(0,|h(t_L)|) there exists t_I,ϵ≥ t_L≥ t_U≥ t_E,ϵ with |h(t_I,ϵ)|=ϵ. Hence |h_ϵ(t_I,ϵ)|=1. For t≥ t_I,ϵ≥ t_L, h_ϵ(t)=1/ϵh(t)∈1/ϵL= L. 4. Set ϵ_M=min{|h(t_U)|,|h(t_L)|}. In the sequel we focus on case (10) of Proposition 2.3 - the other case is analogous. § TRANSVERSALITY, PROJECTED SOLUTION CURVES, POLAR COORDINATES We want to describe the behavior of solutions to Eq. (2) close to the homoclinic loop h_ϵ(ℝ)∪{0} for small ϵ∈(0,ϵ_M). This will be done in terms of a return map which follows the solutions from a subset of the cylinder M_I={x∈ℝ^3:|P_Lx|=1} above the plane L until their return to M_I. The return map will be obtained as a composition of an inner map, which follows solutions until they reach the plane M_E=e_3+L, parallel to L (compare Figure 1 on page 3), with an exterior map following solutions from a subset of M_E until they reach M_I. The construction and analysis of the inner and exterior maps require preparations. We begin with transversality of the vectorfield where the homoclinic solution intersects with M_E and with M_I, which is at t=t_E,ϵ and at t=t_I,ϵ, respectively. The tangent spaces of M_E are all equal to L while for every x∈ M_I we have T_xM_I=x^⊥⊕ U. where x^⊥∈ L is orthogonal to P_Lx in L, x^⊥=([ -x_2; x_1; 0 ]). (i) There exists ϵ_E>0 so that for all ϵ∈(0,ϵ_E), V_ϵ(e_3)∉ T_e_3M_E. (ii) There exists ϵ_I>0 so that for all ϵ∈(0,ϵ_I) and for all x∈ M_I with |x_3|<1, V_ϵ(x)∉ T_xM_I. Proof. 1. On (i). From Proposition 2.2 for 0<ϵ<ϵ(u/2) we get |P_UV_ϵ(e_3)|≥|P_UAe_3|-|P_U(V_ϵ(e_3)-Ae_3)| =u-|P_U(V_ϵ(e_3)-Ae_3)| ≥ u-u/2|P_Ue_3|=u/2>0. Hence P_UV_ϵ(e_3)∉ L=T_e_3M_E. 2. On (ii). For x∈ M_I with |x_3|<1, x∈ B_1. P_Lx≠0 and ([ -x_2; x_1; 0 ])≠0 are orthogonal. Using Proposition 2.2 for 0<ϵ<ϵ(|σ|/2) we get |<P_LV_ϵ(x),P_Lx>| ≥ |<P_LAx),P_Lx>| - |<P_L(V_ϵ(x)-Ax),P_Lx>| ≥ |<([ σ x_1+μ x_2; -μ x_1+σ x_2; 0 ]),([ x_1; x_2; 0 ])>|-|σ|/2|P_Lx|^2 = (-σ)|P_Lx|^2-|σ|/2|P_Lx|^2>0, which yields P_LV_ϵ(x)∉ℝ([ -x_2; x_1; 0 ]). Now the assertion follows easily. For 0<ϵ<min{ϵ_M,ϵ(u/2),ϵ(|σ|/2)}, h_ϵ'(t_E,ϵ)=V_ϵ(h_ϵ(t_E,ϵ))∉ T_h_ϵ(t_E,ϵ)M_E h_ϵ'(t_I,ϵ)=V_ϵ(h_ϵ(t_I,ϵ))∉ T_h_ϵ(t_I,ϵ)M_I. In order to formulate differential equations for projections of solutions of Eq. (2) into L and U we introduce R_ϵ,L:ℝ^3→ℝ^3 and R_ϵ,U:ℝ^3→ℝ^3 by R_ϵ,L(x)=P_L(V_ϵ(x)-Ax)∈ L R_ϵ,U(x)=P_U(V_ϵ(x)-Ax)∈ U. Assume _1(η,ϵ) 0<η 0<ϵ<min{ϵ(η),ϵ_M,ϵ_I,ϵ_E,ϵ(u/2),ϵ(-σ/2)}. For every solution y:I→ℝ^3 of Eq. (2) and t∈ I with y(t)∈ B_1, (P_Ly)'(t) = AP_Ly(t)+R_ϵ,L(y(t)) |R_ϵ,L(y(t))|≤η|P_Ly(t)|, (P_Uy)'(t) = AP_Uy(t)+R_ϵ,U(y(t)) |R_ϵ,U(y(t))|≤η|P_Uy(t)|. We turn to the position of projections P_Ly of solutions y to Eq. (2) in terms of polar coordinates. 
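Before turning to polar coordinates, the estimates of Proposition 2.2 and Corollary 3.3 can be observed numerically on a toy example. In the sketch below, V is the linear part A plus an invented cubic term that vanishes on L and on U separately, so the invariance required in (A2) holds near 0; no homoclinic orbit is claimed for this toy field, and the parameter values are arbitrary with σ<0<μ, u. Sampling the cylinder B_1 shows that the deviation of the scaled field from A, which plays the role of η, shrinks as ϵ→0 (here like ϵ²).

```python
# Numerical illustration of Proposition 2.2 / Corollary 3.3 on a toy field.
import numpy as np

sigma, mu, u = -1.0, 3.0, 2.0
A = np.array([[sigma, mu, 0.0],
              [-mu, sigma, 0.0],
              [0.0, 0.0, u]])

def V(x):
    # linear part + a cubic term that vanishes on L (x3 = 0) and on U (x1 = x2 = 0)
    cubic = np.array([x[2] ** 2 * x[0], x[2] ** 2 * x[1], (x[0] ** 2 + x[1] ** 2) * x[2]])
    return A @ x + cubic

def V_eps(x, eps):
    return V(eps * x) / eps

def eta_observed(eps, n=20000, seed=0):
    rng = np.random.default_rng(seed)
    worst = 0.0
    for _ in range(n):
        r, phi = np.sqrt(rng.uniform()), rng.uniform(0.0, 2.0 * np.pi)
        x = np.array([r * np.cos(phi), r * np.sin(phi), rng.uniform(-1.0, 1.0)])  # x in B_1
        err = V_eps(x, eps) - A @ x
        ratio_u = abs(err[2]) / max(abs(x[2]), 1e-12)                  # |P_U(V_eps(x)-Ax)| / |P_U x|
        ratio_l = np.hypot(err[0], err[1]) / max(np.hypot(x[0], x[1]), 1e-12)
        worst = max(worst, ratio_u, ratio_l)
    return worst

for eps in (0.5, 0.1, 0.01):
    print(eps, eta_observed(eps))
```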
For the statement of the next result it is convenient to introduce the maps 𝒜_ϵ:{x∈ℝ^3: x_1^2+x_2^2>0}→ℝ^3 and ℬ_ϵ:{x∈ℝ^3: x_1^2+x_2^2>0}→ℝ^3 given by 𝒜_ϵ(x)=<( [ R_ϵ,L,1(x); R_ϵ,L,2(x) ]) , 1/x_1^2+x_2^2( [ x_1; x_2 ]) > ℬ_ϵ(x)=<( [ R_ϵ,L,1(x); R_ϵ,L,2(x) ]) , 1/x_1^2+x_2^2( [ -x_2; x_1 ]) >. Assume H_1(η,ϵ). Let I⊂ℝ be an interval and consider a solution y:I→ℝ^3 of Eq. (2). (i) For every t∈ I, y_3'(t)=u y_3(t)+R_ϵ,U,3(y(t)) and ( [ y_1'(t); y_2'(t) ]) = ( [ σ μ; -μ σ ]) ( [ y_1(t); y_2(t) ]) + ( [ R_ϵ,L,1(y(t)); R_ϵ,L,2(y(t)) ]) . In case y(t)∈ B_1, |(R_ϵ,U,3(y(t))|≤η y_3(t) |( [ R_ϵ,L,1(y(t)); R_ϵ,L,2(y(t)) ]) | ≤η|( [ y_1(t); y_2(t) ]) |. (ii) Assume 0∈ I, and (y_1(t))^2+(y_2(t))^2>0 for all t∈ I. Let ψ∈ℝ be given such that ([ y_1(0); y_2(0) ])=√((y_1(0))^2+(y_2(0))^2)([ cos(ψ); sin(ψ) ]). Then y and the continuously differentiable functions r:I→(0,∞) and ϕ:I→ℝ given by r(t)=√((y_1(t))^2+(y_2(t))^2) ϕ(t)= ψ-σ t+∫_0^tℬ_ϵ(y(s))ds satisfy y'(t) = V_ϵ(y(t)), r'(t) = σ r(t)+𝒜_ϵ(y(t))r(t), ϕ'(t) = -μ+ℬ_ϵ(y(t)) for every t∈ I, with ( [ y_1(t); y_2(t) ]) =r(t) ( [ cos(ϕ(t)); sin(ϕ(t)) ]) t∈ I. (iii) If y(I)⊂ B_1 and (y_1(t_0))^2+(y_2(t_0))^2>0 for some t_0∈ I then (y_1(t))^2+(y_2(t))^2>0 for all t∈ I. Compare Figure 2 on page 9. Proof. 1. Assertion (i) is a consequence of Eq. (7) in combination with Corollary 3.3. 2. Proof of assertion (ii). 2.1. Proof of Eq. (11). Differentiation of r^2 in combination with the differential equation from assertion (i) yields 2r(t)r'(t) = 2<([ y_1(t); y_2(t) ]),([ y_1'(t); y_2'(t) ])> = 2 < ( [ y_1(t); y_2(t) ]),( [ σ μ; -μ σ ]) ( [ y_1(t); y_2(t) ])> + 2 < ( [ R_ϵ,L,1(y(t)); R_ϵ,L,2(y(t)) ]), ( [ y_1(t); y_2(t)) ]) > = 2( y_1(t)[σ y_1(t)+μ y_2(t)]+y_2(t)[-μ y_1(t)+σ y_2(t)])+2 𝒜_ϵ(y(t))(r(t))^2) = 2 σ(r(t))^2+2 𝒜_ϵ(t)(r(t))^2. Divide by 2 r(t)>0. 2.2. Differentiation of ϕ yields Eq. (12). 2.3. Decomposition of the right hand side of the differential equation for ([ y_1; y_2 ]) from assertion (i) with respect to the orthonormal bases {1/r(t)([ y_1(t); y_2(t) ]),1/r(t)([ -y_2(t); y_1(t) ])}, t∈ I, yields that ( [ y_1; y_2 ]) is a solution of the nonautonomous linear system ( [ y_1'(t); y_2'(t) ]) = ( [ σ μ; -μ σ ]) ( [ y_1(t); y_2(t) ]) +𝒜_ϵ(y(t)) ( [ y_1(t); y_2(t) ]) (t)+ℬ_ϵ(y(t))( [ -y_2(t); y_1(t) ]). 2.4. For the continuously differentiable map w:I∋ t↦ r(t)([ cos(ϕ(t)); sin(ϕ(t)) ])∈ℝ^2. we compute w'(t) = r'(t)([ cos(ϕ(t)); sin(ϕ(t)) ])+r(t)ϕ'(t)([ -sin(ϕ(t)); cos(ϕ(t)) ]) = [σ r(t)+𝒜_ϵ(y(t))r(t)]([ cos(ϕ(t)); sin(ϕ(t)) ])+r(t)[-σ+ℬ_ϵ(y(t))]([ -sin(ϕ(t)); cos(ϕ(t)) ]) = σ w(t)-μ([ -w_2(t); w_1(t) ])+𝒜_ϵ(y(t))w(t)+ℬ_ϵ(y(t))([ -w_2(t); w_1(t) ]) = ([ σ w_1(t)+μ w_2(t); -μ w_1(t)+σ w_2(t) ])+𝒜_ϵ(y(t))w(t)+ℬ_ϵ(y(t))([ -w_2(t); w_1(t) ]) = ( [ σ μ; -μ σ ]) ( [ w_1(t); w_2(t) ]) )+𝒜_ϵ(y(t))w(t)+ℬ_ϵ(y(t))([ -w_2(t); w_1(t) ]). 2.5. From r(0)=√((y_1(0))^2+(y_2(0))^2) and ϕ(0)=ψ we have w(0)=r(0)([ cos(ϕ(0)); sin(ϕ(0)) ])=√((y_1(0))^2+(y_2(0))^2)([ cos(ψ); sin(ψ) ])=([ y_1(0); y_2(0) ]). So w and ( [ y_1; y_2 ]) both are solutions of the same initial value problem. Hence ( [ y_1(t); y_2(t) ])=w(t)=r(t)([ cos(ϕ(t)); sin(ϕ(t)) ]) t∈ I. 3. On assertion (iii). We have (y_1(t))^2+(y_2(t))^2>0 for all t∈ I since otherwise the invariance of U under V_ϵ in B_1 would result in y(t_0)∈ U, hence y_1(t_0)=0=y_2(t_0), in contradiction to the hypothesis on y. The maximal solutions of the system (2,11,12) with initial values y(0)=x∈ℝ^3, r(0)∈(0,∞), ϕ(0)∈ℝ define a continuously differentiable flow G_ϵ:ℝ×ℝ^5⊃ dom(G_ϵ)→ℝ^5. Notice that for a solution y:I→ℝ^3 of Eq. 
(2) and for ϕ as in Proposition 3.4 (ii) we obtain (t,([ y(0); √((y_1(0))^2+(y_2(0))^2); ϕ(0) ]))∈ dom(G_ϵ) ϕ(t)=G_ϵ,5(t,([ y(0); √((y_1(0))^2+(y_2(0))^2); ϕ(0) ])) for all t∈ I. Assume H_1(η,ϵ). For every solution y of Eq. (2) on an interval I∋0 with y(I)⊂ B_1 and for r,ϕ as in Proposition 3.4 (ii) we have r'(t)∈σ r(t)+[-η,η]r(t) ϕ'(t)∈-μ+[-η,η] t∈ I. Proof. Let t∈ I. From Corollary 3.3 we infer |( [ R_ϵ,L,1(y(t)); R_ϵ,L,2(y(t)) ]) |=|R_ϵ,L(y(t))|≤η|P_Ly(t)|=η√((y_1(t))^2+(y_2(t))^2)=η r(t). Hence |𝒜_ϵ(y(t))|≤η and |ℬ_ϵ(y(t))|≤η. Use Eqs. (11,12) to complete the proof. § A TRAVEL TIME FROM A SUBSET OF M_I TO M_E The next objective is to establish the travel time from the continuously differentiable submanifold {x∈ M_I:0<x_3<1} to the differentiable submanifold M_E as a continuously differentiable map. Assume _2(η,ϵ) 0<η<min{-σ/√(2),u/2} 0<ϵ<min{ϵ(η),ϵ_M,ϵ_I,ϵ_E,ϵ(u/2),ϵ(-σ/2)}. (i) For every x∈ M_I with 0<x_3<1 there exists t>0 with (t,x)∈Ω_ϵ and F_ϵ(t,x)∉ B_1. (ii) The map t_ϵ:{x∈ M_I:0<x_3<1}→ℝ given by t_ϵ(x)=inf {t>0:(t,x)∈Ω_ϵ F_ϵ(t,x)∉ B_1} is continuous. For every x∈ M_I with 0<x_3<1 we have t_ϵ(x)>0, and the maximal solution y of Eq. (2) with y(0)=x satisfies y_3(t_ϵ(x))=1, and y_3'(t)>0 for 0≤ t≤ t_ϵ(x), and 0<x_3<y_3(t)<1 for 0<t<t_ϵ(x). (iii) The map t_ϵ is continuously differentiable. Proof. 1. On (i). For x∈ M_I with 0<x_3<1 let y:I_y→ℝ^3 denote the maximal solution of Eq. (2) with initial value y(0)=x, so that y(t)=F(t,x) on I_y. Suppose y(t)∈ B_1 for all t≥0 in I_y. In case sup I_y=∞ we obtain by means of Proposition 3.4 (i) first y_3(t)>0 for all t≥0, and then y_3(t)≥ x_3e^(u-η)t for all t≥0, which contradicts the assumption of boundedness. In case sup I_y<∞ we have I_y∩[0,∞)=[0,τ) for some τ>0. By assumption y(t)∈ B_1 on [0,τ). As V_ϵ is bounded on B_1, we get a bound for y'(t) on [0,τ). Hence y satisfies a Lipschitz condition on [0,τ). This yields that y has a limit at τ, which can be used to construct a continuation of y as a solution of Eq. (2) beyond τ, in contradiction to maximality. 2. Let x∈ M_I with 0<x_3<1 be given. Due to assertion (i) we obtain t_ϵ(x)∈[0,∞). Consider y:I_y→ℝ^3 as in Part 1. Proof of t_ϵ(x)>0 and of the assertions concerning y. We have I_y∩[0,∞)=[0,τ) with 0<τ≤∞. Obviously, 0≤ t_ϵ<τ. For r=|([ y_1; y_2 ])| we have r(0)=1, and by Corollary 3.5, r'(t)≤(σ+η)r(t)<0 on [0,t_ϵ(x)]. It follows that r(t)<1 on (0,t_ϵ(x)]. Using y(s)∉ B_1 for some s>t_ϵ(x) arbitrarily close to t_ϵ(x) in combination with r(s)<1 we conclude that |y_3(s)|>1 for these arguments s. By continuity, |y_3(t_ϵ(x))|=1. This yields t_ϵ(x)>0, in view of y_3(0)=x_3∈(0,1). Using 0<x_3=y_3(0)<1 and Proposition 3.4 (i) we deduce that y_3'(t)≥(u-η)y_3(t)>0 for 0≤ t≤ t_ϵ(x), and upon that, 0<y_3(t_ϵ(x)). It follows that y_3(t_ϵ(x))=1. Also, 0<x_3<y_3(t)<1 on (0,t_ϵ(x)). 3. On continuity of the map t_ϵ. Let x∈ M_I with 0<x_3<1 be given and consider y and r as in Part 2. Let ρ∈(0,t_ϵ(x)) be given. For some s∈(t_ϵ(x),t_ϵ(x)+ρ), y(s)∉ B_1. By continuity there is a neighbourhood N_1 of x in ℝ^3 with F_ϵ(s,z)∉ B_1 for all z∈ N_1. According to Part 2, 0<y_3(t)<1 and 0<r(t)<1 on [0,t_ϵ(x)-ρ]. Hence F_ϵ(t,x)∈ int B_1 on [0,t_ϵ(x)-ρ]. By continuity and compactness we find a neighbourhood N⊂ N_1 of x in ℝ^3 so that for all z∈ N and for all t∈[0,t_ϵ(x)-ρ] we have F_ϵ(t,z)∈ int B_1. Hence t_ϵ(x)-ρ≤ t_ϵ(z)<s<τ_ϵ(x)+ρ for all z∈ N⊂ N_1, which yields continuity of t_ϵ at x. 4. We show that locally the map t_ϵ is given by continuously differentiable maps. Let x∈ M_I with 0<x_3<1 be given. 
Then F_ϵ,3(t_ϵ(x),x)=1, and ∂_1F_ϵ,3(t_ϵ(x),x)>0. The Implicit Function Theorem yields an open neighbourhood N of x in ℝ^3 and ξ>0 and a continuously differentiable map τ:N→ (t_ϵ-ξ,t_ϵ+ξ) with F_ϵ,.3(τ(z),z)=1 for all z∈ N, and on (t_ϵ-ξ,t_ϵ+ξ)× N, F_ϵ,3(t,z)=1 t=τ(z). By continuity there is an open neighbourhood N_1⊂ N of x in ℝ^3 so that for all z∈ N_1∩ M_I we have 0<z_3<1 and t_ϵ(x)-ξ<t_ϵ(z)<t_ϵ(x)+ξ. Recall F_ϵ,3(t_ϵ(z),z)=1 for all z∈ M_I with 0<z_3<1. It follows that on N_1∩ M_I, t_ϵ(z)=τ(z). The restriction of τ to N_1∩ M_I is a continuously differentiable function on the open subset N_1∩ M_I of the submanifold M_I. Assume H_2(η,ϵ). For each x∈ M_I with 0<x_3<1, 1/u+η(-log x_3)≤ t_ϵ(x)≤1/u-η(-log x_3). Proof. Using Proposition 3.4 (i) and x_3<F_ϵ,3(t,x)<1=F_ϵ,3(t_ϵ(x),x) on (0,t_ϵ(x)) we get 1=F_ϵ,3(t_ϵ(x),x)∈ x_3e^(u+[-η,η])t_ϵ(x) which yields the assertion. § TRANSVERSALITY AT M_I AND THE INNER MAP The inner map I_ϵ:{z∈ M_I:0<z_3<1}∋ x↦ F_ϵ(t_ϵ(x),x)∈ M_E, for 0<ϵ<ϵ(η) and 0<η<min{-σ/√(2),u/2}, is continuously differentiable, from the submanifold M_I into the submanifold M_E. Assume H_2(η,ϵ). The inner map defines a diffeomorphism onto the open subset I_ϵ({x∈ M_I:0<x_3<1}) of M_E. Sketch of the proof. 1. The transversality condition D_1 F_ϵ(t_ϵ(x),x)1∉ L=T_ F_ϵ(t_ϵ(x),x)M_E is satisfied since according to Proposition 4.1 (ii), [D_1 F_ϵ(t_ϵ(x),x)1]_3=y_3'(t_ϵ(x))>0 for y=F_ϵ(·,x) with x∈ M_I and 0<x_3<1. 2. Proof that for every x∈ M_I with 0<x_3<1 the derivative T_xI_ϵ:T_xM_I→ T_I_ϵ(x)M_E is an isomorphism. This derivative is given by T_xI_ϵv=PD_2F_ϵ(t_ϵ(x),x)v v∈ T_xM_I, with the projection P:ℝ^3→ℝ^3 along D_1 F_ϵ(t_ϵ(x),x)1 onto L=T_ F_ϵ(t_ϵ(x),x)M_E. The isomorphism D_2F_ϵ(t_ϵ(x),x) maps T_xM_I onto a plane Z and sends the vector D_1F_ϵ(0,x)1=V_ϵ(x)∉ T_xM_I (see Proposition 3.1 (ii)) to D_1F_ϵ(t_ϵ(x),x)1 which is transversal to Z. Therefore the projection P defines an isomorphism from Z onto T_F_ϵ(t_ϵ(x),x)M_E. 3. The Inverse Mapping Theorem yields that the inner map is locally given by diffeomorphisms from open subsets of M_I onto open subsets of M_E. 4. The assertion follows easily provided the inner map is injective. Proof of this: Suppose I_ϵ(x)= I_ϵ(z) for x,z in M_I with 0<x_3<1,0<z_3<1. Let y=F_ϵ(·,x) and w=F_ϵ(·,z), so that y(t_ϵ(x))=w(t_ϵ(z)). Suppose t_ϵ(x)≠ t_ϵ(z). In case t_ϵ(x)<t_ϵ(z) uniqueness for backward solutions of initial value problems yields y(t_ϵ(x)-s)=w(t_ϵ(z)-s) for all s∈[0,t_ϵ(x)]. Hence x=y(0)=y(t_ϵ(x)-t_ϵ(x))=w(t_ϵ(z)-t_ϵ(x)), and thereby 1=|P_Lx|=|P_Lw(t_ϵ(z)-t_ϵ(x))|, in contradiction to |P_Lz|=1 and the fact that due to Corollary 3.5 |P_Lw| is strictly decreasing on [0,t_ϵ(z)]. It follows that t_ϵ(x)=t_ϵ(z), from which we infer x=y(0)=y(t_ϵ(x)-t_ϵ(x))=w(t_ϵ(z)-t_ϵ(x))=w(0)=z. It will be convenient to describe the inner map (and the outer map, which will be introduced in the next section) in terms of the parametrization K_ϵ:(-π,π)×ℝ∋(ψ,δ)↦([ cos(ω_ϵ+ψ); sin(ω_ϵ+ψ); δ ])∈ℝ^3, 0<ϵ<ϵ_M, where ω_ϵ∈[0,2π) is chosen so that ([ cos(ω_ϵ); sin(ω_ϵ); 0 ])=h_ϵ(t_I,ϵ)∈ M_I∩ L. The map K_ϵ defines a diffeomorphism from (-π,π)×(0,1) onto the open subset M_I∖{([ cos(ω_ϵ+π); sin(ω_ϵ+π); δ ])∈ℝ^3:δ∈ℝ} of M_I. Compare Figure 3 on page 15. We turn to the angle of the projection into the plane L of the value of the inner map. Notice the difference to angles for projected solutions of Eq. (2), which were addressed in Section 3 ! 
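Before that, note that for the purely linear model flow x'(t)=Ax(t) (the case η=0) both the travel time of Corollary 4.2 and the image of the inner map are explicit, which may help to visualise the constructions of Sections 4 and 5. The following sketch, with arbitrarily chosen σ<0<μ, u, compares the closed-form expressions with a direct numerical integration; it illustrates the linear case only, not the perturbed field V_ϵ.

```python
# Explicit versus numerical inner map for the linear model flow x' = A x.
import numpy as np
from scipy.integrate import solve_ivp

sigma, mu, u = -1.0, 3.0, 2.0
A = np.array([[sigma, mu, 0.0], [-mu, sigma, 0.0], [0.0, 0.0, u]])

def inner_map_explicit(psi, delta):
    t = np.log(1.0 / delta) / u                       # travel time (Cor. 4.2 with eta = 0)
    r = np.exp(sigma * t)                             # = delta**(-sigma/u), radius in L
    phi = psi - mu * t                                # the angle drifts with speed -mu
    return t, np.array([r * np.cos(phi), r * np.sin(phi), 1.0])

def inner_map_numeric(psi, delta):
    x0 = np.array([np.cos(psi), np.sin(psi), delta])  # a point of M_I with 0 < x_3 = delta < 1
    hit = lambda t, x: x[2] - 1.0                     # stop when x_3 reaches 1 (the plane M_E)
    hit.terminal, hit.direction = True, 1.0
    sol = solve_ivp(lambda t, x: A @ x, (0.0, 100.0), x0, events=hit,
                    rtol=1e-10, atol=1e-12)
    return sol.t_events[0][0], sol.y_events[0][0]

for delta in (1e-1, 1e-2, 1e-3):
    t_e, y_e = inner_map_explicit(0.0, delta)
    t_n, y_n = inner_map_numeric(0.0, delta)
    print(delta, t_e, t_n, np.max(np.abs(y_e - y_n)))
```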
For x∈ M_I with 0<x_3<1 we have √(x_1^2+x_2^2)=1, and a unique ψ=ψ_x∈(ω_ϵ-π,ω_ϵ+π) with ([ x_1; x_2 ])=([ cos(ψ); sin(ψ) ]), and the maximal solution y of Eq. (2) with y(0)=x satisfies y(t)∈ B_1 on the interval [0,t_ϵ(x)], see Proposition 4.1 (ii). The function ϕ:[0,t_ϵ(x)]→ℝ given by y and ϕ(0)=ψ=ψ_x as in Proposition 3.4 (ii) is uniquely determined by x, so we set ϕ_x=ϕ and obtain the map Φ_ϵ,∗:{x∈ M_I:0<x_3<1}∋ x↦ϕ_x(t_ϵ(x))∈ℝ. In view of the equation ([ y_1(t); y_2(t) ])=√((y_1(t))^2+(y_2(t))^2)([ cos(ϕ_x(t)); sin(ϕ_x(t)) ]) r_x(t)=√((y_1(t))^2+(y_2(t))^2) at t=t_ϵ(x), by Proposition 3.4 (ii), the value Φ_ϵ,∗(x) stands for the desired angle. Compare Figure 4 on page 16. Assume H_2(η,ϵ). The function Φ_ϵ:(-π,π)×(0,1)→ℝ given by Φ_ϵ(ψ,δ)=Φ_ϵ,∗(K_ϵ(ψ,δ)) is continuously differentiable. Proof. 1. By the remarks before Corollary 3.5, Φ_ϵ,∗(x)=ϕ_x(t_ϵ(x))=G_ϵ,5(t_ϵ(x),([ x; 1; ψ_x ])) for all x∈ M_I with 0<x_3<1. The maps t_ϵ and G_ϵ are continuously differentiable, as well as the map ℝ^2∖{(0,0)}∋([ x_1; x_2 ])↦ψ_x∈(ω_ϵ-π,ω_ϵ+π) given by ([ x_1; x_2 ])=√(x_1^2+x_2^2)([ cos(ψ_x); sin(ψ_x) ]). It follows that the real function Φ_ϵ,∗ on the continuously differentiable submanifold {x∈ M_I:0<x_3<1} is continuously differentiable. 2. The chain rule yields that also the map Φ_ϵ is continuously differentiable. Next we estimate the range of the inner map in terms of the parameters in (-π,π)×(0,1). Assume H_2(η,ϵ). Let x=K_ϵ(ψ,δ) with -π<ψ<π and 0<δ<1 be given. (i) Then x∈ M_I and 0<x_3=δ<1, and I_ϵ(x)∈( [ e^(σ+[-η,η])[1/u+η,1/u-η]log(1/δ)([ cos; sin ])(ω_ϵ+ψ+(-μ+[-η,η])[1/u+η,1/u-η]log(1/δ)); 1 ]), (ii) δ^-σ+η/u-η≤|P_LI_ϵ(K_ϵ(ψ,δ))|≤δ^-σ-η/u+η. Proof. 1. On assertion (i). We have ([ I_ϵ,1(x); I_ϵ,2(x) ]) =r_x(t_ϵ(x))([ cos(Φ_ϵ(ψ,δ)); sin(Φ_ϵ(ψ,δ)) ]), see the remarks prior to Proposition 5.3. By Corollary 4.2, t_ϵ(x)∈[1/u+η,1/u-η]log(1/δ). Corollary 3.5, with r(0)=1 and ϕ(0)=ω_ϵ+ψ, yields r_x(t_ϵ(x)) ∈ e^(σ+[-η,η])t_ϵ(x), Φ_ϵ(ψ,δ) ∈ ω_ϵ+ψ+(-μ+[-η,η])t_ϵ(x). Combining these relations and I_ϵ,3(x)=1 we obtain assertion (i). 2. On assertion (ii). Using |P_LI_ϵ(K_ϵ(ψ,δ))|=|([ I_ϵ,1(x); I_ϵ,2(x) ])| we infer from assertion (i) the estimates δ^-σ+η/u-η = (1/δ)^σ-η/u-η= e^(σ-η)log(1/δ)1/u-η≤ |P_LI_ϵ(K_ϵ(ψ,δ))| ≤ e^(σ+η)log(1/δ)1/u+η = (1/δ)^σ+η/u+η=δ^-σ-η/u+η. Notice that in vertical line segments, for ψ fixed and δ↘0, the angle Φ_ϵ(ψ,δ) decreases with nearly constant speed close to -μ<0 to -∞ and the radius r_x(t_ϵ(x)), x=K_ϵ(ψ,δ), decreases almost like δ^-σ/u to 0, so P_LI_ϵ(x) spirals clockwise down to 0∈ L. § THE OUTER MAP, AND THE RETURN MAP IN THE PLANE We proceed to the outer map, under condition H_2(η,ϵ). The transversality V_ϵ(x)∉ T_xM_I from Proposition 3.1 (ii), for x=h_ϵ(t_I,ϵ)∈ L with |x|=1, yields a continuously differentiable travel time τ_ϵ:{x∈ M_E:|P_Lx|<r_ϵ}→(0,∞) for some r_ϵ>0, with τ_ϵ(h_ϵ(t_E,ϵ))=t_I,ϵ F_ϵ(τ_ϵ(x),x)∈ M_I x∈ M_E |P_Lx|<r_ϵ, such that the outer map E_ϵ:{x∈ M_E:|P_Lx|<r_ϵ}∋ y↦ F_ϵ(τ_ϵ(x),x)∈ M_I defines a diffeomorphism onto an open subset of M_I, with E_ϵ(h_ϵ(t_E,ϵ))=E_ϵ(e_3)=h_ϵ(t_I,ϵ)=([ cos(ω_ϵ); sin(ω_ϵ); 0 ]), compare Figure 1 on page 3. Incidentally, τ_ϵ and E_ϵ are given by restrictions of continuously differentiable maps on an open subset of ℝ^3 to an open subset of the submanifold M_E. By continuity there are δ_I,ϵ∈(0,1) and ω_I,ϵ∈(0,π) so that the set {([ cos(ϕ); sin(ϕ); δ ])∈ℝ^3:|ϕ-ω_ϵ|<ω_I,ϵ, |δ|<δ_I,ϵ} is contained in the image of E_ϵ. 
The next result guarantees that the outer map sends a small disk centered at e_3 into the domain of the inverse K_ϵ^-1, and that the inner map composed with K_ϵ sends a narrow strip into that small disk. Assume H_2(η,ϵ). (i) There exists r_ϵ,1∈(0,r_ϵ) with E_ϵ(y)∈ K_ϵ((-ω_I,ϵ,ω_I,ϵ)×(-δ_I,ϵ,δ_I,ϵ)) for all y∈ M_E with |P_Ly|<r_ϵ,1. (ii) There exists δ_I,ϵ,1∈(0,δ_I,ϵ) so that for all ψ∈(-ω_I,ϵ,ω_I,ϵ) and for all δ∈(0,δ_I,ϵ,1) we have δ^-σ+η/u-η≤ |P_LI_ϵ(K_ϵ(ψ,δ))|≤δ^-σ-η/u+η<r_ϵ,1. Proof. 1. The set K_ϵ((-ω_I,ϵ,ω_I,ϵ)×(-δ_I,ϵ,δ_I,ϵ)) is an open neighbourhood of K_ϵ(0,0)=h_ϵ(t_I,ϵ)=E_ϵ(e_3) in M_I. By continuity E_ϵ maps an open neighbourhood of e_3 in M_E into K_ϵ((-ω_I,ϵ,ω_I,ϵ)×(-δ_I,ϵ,δ_I,ϵ)). The said neighbourhood of e_3 contains sets of the form {y∈ M_E:|P_Ly|<r_ϵ,1} for r_ϵ,1∈ (0,r_ϵ) sufficiently small. 2. On (ii). Recall Proposition 5.3 (ii) and choose δ_I,ϵ,1∈(0,δ_I,ϵ) so small that δ^-σ-η/u+η<r_ϵ,1 for 0<δ<δ_I,ϵ,1 which is possible due to 0<-σ-η/u+η. Proposition 6.1 shows that the expression K_ϵ^-1(E_ϵ(I_ϵ(K_ϵ(ψ,δ)))) defines a diffeomorphism R_ϵ from (-ω_I,ϵ,ω_I,ϵ)×(0,δ_I,ϵ,1)⊂ℝ^2 onto an open subset of (-ω_I,ϵ,ω_I,ϵ)×(-δ_I,ϵ,δ_I,ϵ)⊂ℝ^2. In addition to the previous representation of the return map x↦ E_ϵ(I_ϵ(x)) as a diffeomorphism R_ϵ in the plane we need a coordinate representation of the outer map alone. The expression K_ϵ^-1(E_ϵ(x)) defines a diffeomorphism K_ϵ^-1(E_ϵ(·)) from the open subset {x∈ M_E:|P_Lx|<r_ϵ,1} of M_E onto an open subset of ℝ^2 which contains (0,0)=K_ϵ^-1(E_ϵ(e_3)). The linear map D(K_ϵ^-1(E_ϵ(·)))(e_3) is an isomorphism from T_e_3M_E=L onto ℝ^2. Let v_ϵ and w_ϵ denote the preimages under this isomorphism of the unit vectors ([ 1; 0 ])∈ℝ^2 and ([ 0; 1 ])∈ℝ^2, respectively. Let κ_ϵ:L→ℝ^2 denote the isomorphism given by κ v_ϵ=([ 1; 0 ])∈ℝ^2 and κ_ϵ w_ϵ=([ 0; 1 ])∈ℝ^2. The restriction P_LE=P_L|M_E which only subtracts e_3 is a diffeomorphism onto the plane with DP_LE(y)ŷ=ŷ for all y∈ M_E and ŷ∈ T_yM_E=L. The composition κ_ϵ∘ P_LE is a diffeomorphism from M_E onto the plane, with κ_ϵ P_LE(e_3)=κ_ϵ(0,0)=(0,0). By continuity there exists r_E,ϵ>0 so that (κ_ϵ∘ P_LE)^-1=P_LE^-1∘κ_ϵ^-1 maps {(ψ,δ)∈ℝ^2:|(ψ,δ)|< r_E,ϵ} into the set {y∈ M_E:|P_Ly|<r_ϵ,1}. It follows that the expression K_ϵ^-1(E_ϵ((κ_ϵ∘ P_LE)^-1(ψ,δ))) defines a diffeomorphism K_ϵ^-1(E_ϵ((κ_ϵ∘ P_LE)^-1)(·,·))) from {(ψ,δ)∈ℝ^2:|(ψ,δ)|< r_E,ϵ} onto an open subset of ℝ^2, with fixed point (0,0). Assume H_2(η,ϵ). We have D(K_ϵ^-1(E_ϵ((κ_ϵ∘ P_LE)^-1)(·,·))))(0,0)=id_ℝ^2. Proof. Use the chain rule in combination with the previous statements about κ_ϵ and about the derivatives of K_ϵ^-1(E_ϵ(·)) and P_LE. The next result makes precise in which sense the diffeomorphism K_ϵ^-1(E_ϵ((κ_ϵ∘ P_LE)^-1)(·,·))) is close to the identity map. Assume H_2(η,ϵ). For every β>0 there exists α_β,ϵ>0 so that for all (ψ,δ)∈[-α_β,ϵ,α_β,ϵ]×[-α_β,ϵ,α_β,ϵ] we have |(ψ,δ)| < r_E,ϵ, |K_ϵ^-1(E_ϵ((κ_ϵ∘ P_LE)^-1)(ψ,δ)))-(ψ,δ)| ≤ β|(ψ,δ)|, |D(K_ϵ^-1(E_ϵ((κ_ϵ∘ P_LE)^-1))(ψ,δ)-id_ℝ^2| ≤ β. Proof. Use the definition of differentiability for K_ϵ^-1(E_ϵ((κ_ϵ∘ P_LE)^-1)(·,·)) at (0,0) and Corollary 6.2, and continuity of the derivative at (0,0). We turn to estimates of the range of the maps κ_ϵ P_LE(I_ϵ(K_ϵ(·,·))) and R_ϵ. The hypothesis H_3(η,ϵ) in the following proposition is stronger than H_2(η,ϵ) since η<-σ/2 implies η<-σ/√(2) and η<u/2. Assume _3(η,ϵ) 0<η<min{μ,-σ/2,u/2} 0<ϵ<min{ϵ(η),ϵ_M,ϵ_I,ϵ_E,ϵ(u/2),ϵ(-σ/2)}. Let 0<β≤1/2. 
There exists α_β,ϵ>0 with α_β,ϵ<min{ω_I,ϵ,δ_I,ϵ,1} such that for δ_β,ϵ=(2/3(|κ_ϵ|+1)α_β,ϵ)^3u/-σ we have δ_β,ϵ≤2/3α_β,ϵ and for all (ψ,δ)∈[-α_β,ϵ,α_β,ϵ]×(0.δ_β,ϵ], |κ_ϵP_LE(I_ϵ(K_ϵ(ψ,δ)))| ≤2/3α_β,ϵ, |R_ϵ(ψ,δ))))| ∈ [-α_β,ϵ,α_β,ϵ]×[-α_β,ϵ,α_β,ϵ]. Proof. 1. In order to show the inequality δ_β,ϵ≤2/3α_β,ϵ observe that 0<η<min{-μ/2,u/2} yields u+η/-μ-η<u+u/2/-μ+μ/2=3u/2/-μ/2=3u/-μ It follows that δ_β,ϵ=(2/3(|κ_ϵ|+1)α_β,ϵ)^3u/-μ≤(2/3(|κ_ϵ|+1)α_β,ϵ)^u+η/-μ-η. Consequently, with 0<-σ-η<u+η and δ_β,ϵ<1, δ_β,ϵ≤(|κ_ϵ|+1)δ_β,ϵ^-μ-η/u+η≤2/3α_β,ϵ. 2. From Corollary 6.3 we get α_β,ϵ>0 with α_β,ϵ<min{ω_I,ϵ,δ_I,ϵ,1} so that for all (ψ̃,δ̃)∈ℝ^2 with (ψ̃,δ̃)∈[-α_β,ϵ,α_β,ϵ]×[-α_β,ϵ,α_β,ϵ] we have |(ψ̃,δ̃)|<r_E,ϵ and, by means of (13) and with 0<β≤1/2, |K_ϵ^-1(E_ϵ((κ_ϵ∘ P_LE)^-1)(ψ̃,δ̃)))|≤(1+β)|(ψ̃,δ̃)|≤3/2|(ψ̃,δ̃)|. 3. Proof of (14). Because of δ_β,ϵ≤α_β,ϵ<δ_I,ϵ,1 we obtain from Proposition 6.1(ii) in combination with the result of Part 1 that for all (ψ,δ)∈(-ω_I,ϵ,ω_I,ϵ)×(0,δ_β,ϵ], |κ_ϵ P_LE(I_ϵ(K_ϵ(ψ,δ)))|≤ |κ_ϵ|δ^-σ-η/u+η≤ |κ_ϵ|δ_β,ϵ^-σ-η/u+η≤2/3α_β,ϵ. 4. Proof of (15). For (ψ,δ) as in Part 3 let (ψ̃,δ̃)=κ P_LE(I_ϵ(K_ϵ(ψ,δ))). Then |(ψ̃,δ̃)|≤2/3α_β,ϵ, hence (ψ̃,δ̃)∈[-α_β,ϵ,α_β,ϵ]×[-α_β,ϵ,α_β,ϵ], which according to Part 2 yields |(ψ̃,δ̃)|<r_E,ϵ and |K^-1_ϵ(E_ϵ((κ_ϵ∘ P_LE)^-1(ψ̃,δ̃)))|≤3/2|(ψ̃,δ̃)|. It follows that |K^-1_ϵ(E_ϵ(I_ϵ(K_ϵ(ψ,δ))))| = |K^-1_ϵ(E_ϵ((κ_ϵ∘ P_LE)^-1[(κ_ϵ∘ P_LE)(I_ϵ(K(ψ,δ)))])| = |K^-1_ϵ(E_ϵ((κ_ϵ∘ P_LE)^-1(ψ̃,δ̃)))| ≤ 3/2|(ψ̃,δ̃)|≤α_β,ϵ. Finally, use that the circle of radius α_β,ϵ centered at the origin is contained in the square [-α_β,ϵ,α_β,ϵ]×[-α_β,ϵ,α_β,ϵ]. § CURVES EXPANDED BY THE RETURN MAP We show that the return map in coordinates R_ϵ expands each curve connecting two levels in the domain of R_ϵ, in such a way that the resulting curve intersects the domain at least in two disjoint sets. These sets will be related to - but not given by - positions in the left or right halfplanes. We begin with the angles Φ_ϵ(ψ,δ)=Φ_ϵ,∗(x) of the projection into the plane L of the values I_ϵ(x), for x=K_ϵ(ψ,δ) with -π<ψ<π and 0<δ<1. For 0<η<min{μ,u} we define c = c_η=(u+η)(μ+η)/(u-η)(μ-η) > 1, k = k_η=e^-6πu+η/μ-η < 1, and for δ_2∈(0,1) we define δ_1 = δ_1,δ_2,η=k_ηδ_2^c_η < δ_2. Assume H_2(η,ϵ). Let 0<δ_2<1 and δ_1=δ_1,δ_2,η. Then 4 π≤Φ_ϵ(ψ,δ_2)-Φ_ϵ(ψ̃,δ_1) for all ψ,ψ̃ in (-π,π), Proof. Assume -π<ψ<π, -π<ψ̃<π. Using the estimate of the speed of angles in Corollary 3.5 and the estimate of intersection times in Corollary 4.2 we have Φ_ϵ(ψ,δ_2)-(ω_ϵ+ψ)≥ (-μ-η)·1/u-ηlog(1/δ_2). Using this and the upper estimate Φ_ϵ(ψ̃,δ_1)-(ω_ϵ+ψ̃)≤(-μ+η)·1/u+ηlog(1/δ_1) we obtain Φ_ϵ(ψ,δ_2)-Φ_ϵ(ψ̃,δ_1) ≥ -2π+μ+η/u-ηlog(δ_2) -μ-η/u+ηlog(δ_1) ≥ -2π+μ+η/u-ηlog(δ_2) -μ-η/u+η[log(k)+clog(δ_2)] = -2π+6π+(μ+η/u-η-cμ-η/u+η)log(δ_2)=4π. It follows that m_1=max_|ψ|≤α_βΦ_ϵ(ψ,δ_1) m_2=min_|ψ|≤α_βΦ_ϵ(ψ,δ_2) satisfy m_1+4π≤ m_2. There exists ψ_ϵ∈[m_1+π,m_2-π] with ([ cos(ψ_ϵ); sin(ψ_ϵ); 0 ])=1/|w_ϵ|w_ϵ. (Angles along curves connecting vertical levels) Assume H_2(η,ϵ). Let a (continuous) curve c:[a,b]→(-π,π)×(0,1) be given with c(b)_2=δ_2 and c(a)_2=δ_1,δ_2,η=δ_1. Then there exist a'_0<b'_0≤ a'_1<b'_1 in [a,b] such that Φ_ϵ(c(t))∈(ψ_ϵ-π,ψ_ϵ) (a'_0,b'_0), Φ_ϵ(c(a'_0))= ψ_ϵ-π, Φ_ϵ(c(b'_0))=ψ_ϵ, Φ_ϵ(c(t))∈(ψ_ϵ,ψ_ϵ+π) (a'_1,b'_1), Φ_ϵ(c(a'_1))=ψ_ϵ, Φ_ϵ(c(b'_1))=ψ_ϵ+π. Compare Figure 6 on page 24. Proof. 1. We construct a'_1 and b'_1. From Φ_ϵ(c(a))≤ m_1<m_1+π≤ψ_ϵ≤ m_2-π<m_2≤Φ_ϵ(c(b)) we have Φ_ϵ(c(a))≤ψ_ϵ-π<ψ_ϵ<ψ_ϵ+π≤Φ_ϵ(c(b)). By continuity, ψ_ϵ=Φ_ϵ(c(t)) for some t∈(a,b). 
Again by continuity there exists b'_1∈(t,b] with Φ_ϵ(c(s))<ψ_ϵ+π on [t,b'_1) and Φ_ϵ(c(b'_1))=ψ_ϵ+π. Upon that, there exists a'_1∈[t,b'_1) with ψ_ϵ<Φ_ϵ(c(s)) on (a'_1,b'_1] and Φ_ϵ(c(a'_1))=ψ_ϵ. 2. The construction of a'_0 and b'_0 with b'_0≤ a'_1 is analogous. We turn to the height, or, to the second coordinate R_ϵ,2(ψ,δ) of image points under the return map in coordinates, for arguments (ψ,δ) which under the inner map in coordinates κ_ϵ P_LEI_ϵ(K_ϵ(·,·)) are mapped onto the vertical axis. Notice that in the cases Φ_ϵ(ψ,δ)=ψ_ϵ-π, Φ_ϵ(ψ,δ)=ψ_ϵ, Φ_ϵ(ψ,δ)=ψ_ϵ+π we get that P_LEI_ϵ(K_ϵ(ψ,δ)) belongs to the rays (0,∞)(-w_ϵ), (0,∞)w_ϵ, (0,∞)(-w_ϵ), and κ_ϵP_LEI_ϵ(K_ϵ(ψ,δ)) is on the vertical axis. (From angles to vertical levels) Assume H_3(η,ϵ) and 0<β≤1/2. Consider α_β,ϵ<min{ω_I,ϵ,δ_I,ϵ,1} and δ_β,ϵ<α_β,ϵ according to Corollary 6.3 and Proposition 6.4. Assume η>0 is so small that c_η-σ+η/u-η<1, and let δ_2>0 be so small that δ_2<δ_β,ϵ 2√(2)δ_2<1/|κ_ϵ^-1|k_η^-σ+η/u-ηδ_2^c_η-σ+η/u-η. Let δ_1=δ_1,δ_2,η and let (ψ,δ)∈[-α_β,ϵ,α_β,ϵ]×[δ_1,δ_2] be given, and set z=R_ϵ(ψ,δ)∈ℝ^2. (i) Then |z|>√(2)δ_2. (ii) In the cases Φ_ϵ(ψ,δ)=ψ_ϵ-π, Φ_ϵ(ψ,δ)=ψ_ϵ, Φ_ϵ(ψ,δ)=ψ_ϵ+π, we have that z_2<-δ_2, z_2>δ_2, z_2<-δ_2, . Compare Figure 7 on page 26. Proof. 1. On assertion (i). 1.1. Using (17) in combination with ω_I,ϵ<π and δ_ β,ϵ<α_β<δ_I,ϵ,1<1 we obtain from Proposition 6.4 that for 0<δ_1<δ_2≤δ_β,ϵ the rectangle [-α_β,α_β]×[δ_1,δ_2] is contained in the domain of definition of the diffeomorphisms κ_ϵ P_LE(I_ϵ(K_ϵ(·,·))) and R_ϵ. 1.2. By Proposition 5.3 (ii), x=κ_ϵP_LEI_ϵ(K_ϵ(ψ,δ)) satisfies |x|≥1/|κ_ϵ^-1|δ^-σ+η/u-η≥1/|κ_ϵ^-1|δ_1^-σ+η/u-η =1/|κ_ϵ^-1|k_η^-ρ+η/u-ηδ_2^c_η-σ+η/u-η>2√(2)δ_2. From Proposition 6.4 and Corollary 6.3 we know that x is contained in the domain of the map K_ϵ^-1(E_ϵ((κ_ϵ∘ P_LE)^-1(·))). It follows that z=R_ϵ(ψ,δ)=K_ϵ^-1(E_ϵ(I_ϵ(K_ϵ(ψ,δ))))=K_ϵ^-1(E_ϵ((κ_ϵ∘ P_LE)^-1(x))). From Corollary 6.3, |z-x|≤β|x|, and we obtain |z|≥|x|-β|x|=(1-β)|x|≥1/2|x|>√(2)δ_2. 2. On assertion (ii) for (ψ,δ)∈[-α_β,ϵ,α_β,ϵ]×[δ_1,δ_2] with Φ_ϵ(ψ,δ)=ψ_ϵ. Let z=R_ϵ(ψ,δ)∈ℝ^2. 2.1. The image P_LEI_ϵ(K_ϵ(ψ,δ)) is a positive multiple of ([ cos(Φ_ϵ(ψ,δ)); sin(Φ_ϵ(ψ,δ)); 0 ])=([ cos(ψ_ϵ); sin(ψ_ϵ); 0 ])∈(0,∞)w_ϵ, hence κ_ϵ P_LEI_ϵ(K_ϵ(ψ,δ))∈(0,∞)([ 0; 1 ]). 2.2. We use the previous abbreviations x and z and have |z|≥(1-β)|x|>√(2)δ_2 from Part 1. By Part 2.1, x=([ 0; x_2 ]) with x_2>0. 2.3. Proof of |z|≤√(2)z_2: We have |x|=x_2. From x_2-z_2≤|x_2-z_2|≤|z-x|≤β|x|=β x_2, z_2≥(1-β)x_2>0. Also, from x_1=0, |z_1|≤|x_1|+β|x|=β x_2. It follows that |z|^2=z_1^2+z_2^2≤β^2x_2^2+z_2^2≤β^2/(1-β)^2z_2^2+z_2^2≤ 2z_2^2. 2.4. Consequently, z_2=|z_2|≥1/√(2)|z|>δ_2. 3. The proofs of assertion (ii) in the two remaining cases are analogous, making use of the fact that in both cases we have that κ_ϵ P_LEI_ϵ(K_ϵ(ψ,δ)) is a positive multiple of ([ 0; -1 ]). The next result makes precise what was briefly announced at the begin of the section. The disjoint sets mentioned there will be given in terms of the angle Φ_ϵ(ψ,δ) corresponding to the value of the inner map I_ϵ(x), x=K_ϵ(ψ,δ), only, not by the position of the value R_ϵ(ψ,δ) of the full return map in coordinates in the left or right halfplane. Our choice of disjoint sets circumvents a discussion how the latter, namely, positions of values R_ϵ(ψ,δ) left or right of the vertical axis, are related to the more accessible angles Φ_ϵ(ψ,δ). Assume H_3(η,ϵ) and (16). Consider α_β,ϵ and δ_β,ϵ as in Proposition 7.3. Assume (18) for η>0, and (19) for δ_2. Set δ_1=δ_1,δ_2,η. 
Consider the disjoint sets M_0= {(ψ,δ)∈[-α_β,ϵ,α_β,ϵ]×[δ_1,δ_2]:ψ_ϵ-π<Φ_ϵ(ψ,δ)<ψ_ϵ} and M_1 = {(ψ,δ)∈[-α_β,ϵ,α_β,ϵ]×[δ_1,δ_2]:ψ_ϵ<Φ_ϵ(ψ,δ)<ψ_ϵ+π}. For every curve c:[a,b]→[-α_β,ϵ,α_β,ϵ]×[δ_1,δ_2] with c(a)_2=δ_1 and c(b)_2=δ_2 there exist a_0<b_0<a_1<b_1 in [a,b] such that (a_0,b_0), c(t)∈ M_0 R_ϵ(c(t))_2∈(δ_1,δ_2), R_ϵ(c(a_0))_2=δ_1 R_ϵ(c(b_0))_2=δ_2, (a_1,b_1), c(t)∈ M_1 R_ϵ(c(t))_2∈(δ_1,δ_2), R_ϵ(c(a_1))_2=δ_2 R_ϵ(c(b_1))_2=δ_1. Proof. 1. Proposition 7.2 yields a_0'<b_0'≤ a_1'<b_1' in [a,b] such that (a_0',b_0'), Φ_ϵ(c(t))∈(ψ_ϵ-π,ψ_ϵ), Φ_ϵ(c(a_0'))=ψ_ϵ-π Φ_ϵ(c(b_0'))=ψ_ϵ, (a_1',b_1'), Φ_ϵ(c(t))∈(ψ_ϵ,ψ_ϵ+π), Φ_ϵ(c(a_1'))=ψ_ϵ Φ_ϵ(c(b_1'))_2=ψ_ϵ+π. From Proposition 7.3 (ii). R_ϵ(c(a_0'))_2<-δ_2, R_ϵ(c(b_0'))_2>δ_2, R_ϵ(c(a_1'))_2>δ_2, R_ϵ(c(b_1'))_2<-δ_2. As in the proof of Proposition 7.2 one finds a_0<b_0 in (a_0',b_0') and a_1<b_1 in (a_1',b_1') with R_ϵ(c(a_0))_2=δ_1 R_ϵ(c(b_0))_2=δ_2, R_ϵ(c(t))∈(δ_1,δ_2) (a_0,b_0), R_ϵ(c(a_1))_2=δ_2 R_ϵ(c(b_1))_2=δ_1, R_ϵ(c(t))∈(δ_1,δ_2) (a_1,b_1). Observe that on [a_0,b_0]⊂[a_0',b_0'] we have c(t)∈ M_0 while on [a_1,b_1]⊂[a_1',b_1'] we have c(t)∈ M_1. § COMPLICATED DYNAMICS For the results of this section we assume as in Proposition 7.4 that H_3(η,ϵ) holds, and that β satisfies (16). We also consider α_β,ϵ and δ_β,ϵ as in Proposition 7.3, and we assume (18) for η>0, and (19) for δ_2. We set δ_1=δ_1,δ_2,η and consider the disjoint sets M_0 and M_1 from Proposition 7.4. For every sequence (s_j)_0^∞ in {0,1} there are forward trajectories (x_j)_0^∞ of R_ϵ with x_j∈ M_s_j and δ_1≤ R_ϵ(x_j)_2≤δ_2 for all integers j≥0. Proof. 1. Let a sequence (s_j)_0^∞ in {0,1} be given. Choose a curve c:[a,b]→ [-α_β,ϵ ,α_β,ϵ]×[δ_1,δ_2] such that c(t)_2∈(δ_1,δ_2) for a<t<b and c(a)_2=δ_1, c(b)_2=δ_2, for example c(t)=(0,t) for a=δ_1≤ t≤δ_2=b. For integers j≥0 we define recursively curves c_j:[A_j,B_j]→[-α_β,ϵ,α_β,ϵ]×[δ_1,δ_2] with decreasing domains in [a,b] as follows. 1.1. In order to define c_0 we apply Proposition 7.4 to the curve c and obtain a_0<b_0<a_1<b_1 in [a,b] with the properties stated in Proposition 7.4. In case s_0=0 we define c_0 by A_0=a_0, B_0=b_0, c_0(t)=c(t) for A_0≤ t≤ B_0. Notice that c_0(t)∈ M_s_0 for all t∈[A_0,B_0], R_ϵ(c_0(t))_2∈(δ_1,δ_2) on (A_0,B_0), R_ϵ(c_0(A_0))_2=δ_1, and R_ϵ(c_0(B_0))_2=δ_2. In case s_0=1 we define c_0 by A_0=a_1, B_0=b_1, c_0(t)=c(a_1+b_1-t) for A_0≤ t≤ B_0. Notice that also in this case c_0(t)∈ M_s_0 for all t∈[A_0,B_0], R_ϵ(c_0(t))_2∈(δ_1,δ_2) on (A_0,B_0), and R_ϵ(c_0(A_0)_2)=R_ϵ(c(a_1+b_1-a_1))_2=δ_1, R_ϵ(c_0(B_0))_2=R_ϵ(c(a_1+b_1-b_1))_2=δ_2. 1.2. For an integer j≥0 let a curve c_j:[A_j,B_j]→[-α_β,ϵ,α_β,ϵ]×[δ_1,δ_2] be given with c_j(t)∈ M_s_j for all t∈[A_j,B_j] and R_ϵ(c_j(t))_2∈(δ_1,δ_2) on (A_j,B_j), R_ϵ(c_j(A_j))_2=δ_1, R_ϵ(c_j(B_j))_2=δ_2. Proceeding as in Part 1.1, with the curve [A_j,B_j]∋ t↦ R_ϵ(c_j(t))∈[-α_β,ϵ,α_β,ϵ]×[δ_1,δ_2] in place of the former curve c, we obtain A_j+1<B_j+1 in [A_j,B_j] and a curve c_j+1:[A_j+1,B_j+1]→[-α_β,ϵ,α_β,ϵ]×[δ_1,δ_2] with c_j+1(t)∈ M_s_j+1 for all t∈[A_j+1,B_j+1] and R_ϵ(c_j+1(t))_2∈(δ_1,δ_2) on (A_j+1,B_j+1), R_ϵ(c_j+1(A_j+1))_2=δ_1, R_ϵ(c_j+1(B_j+1))_2=δ_2. 2. From A_j≤ A_j+1<B_j+1≤ B_j for all integers j≥0 we get ∩_j≥0[A_j,B_j]=[A,B] with A=lim_j→∞≤lim_j→∞B_j=B. By induction we obtain that for every t∈[A,B] there is a forward trajectory (x_j)_j≥0 of R_ϵ with x_j=c_j(t)∈ M_s_j and δ_1≤ R_ϵ(x_j)_2≤δ_2 for all integers j≥0. The final result extends Proposition 8.1 to entire trajectories. 
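Before stating it, the nested-interval mechanism of the preceding proof can be illustrated on a much simpler expanding map: for the doubling map on [0,1) with the two half-intervals as symbols, every prescribed itinerary is realised by shrinking a nested family of intervals, in the same way as the curves c_j shrink to the parameter set [A,B] above. The sketch below is only an analogy for this mechanism, not a computation with the return map R_ϵ.

```python
# Toy analogue of the nested construction in the proof of Proposition 8.1,
# for the doubling map f(x) = 2x mod 1 with symbols M_0 = [0, 1/2), M_1 = [1/2, 1).
def point_with_itinerary(symbols):
    lo, hi = 0.0, 1.0
    for s in reversed(symbols):              # nested intervals, innermost symbol last
        lo, hi = (lo + s) / 2.0, (hi + s) / 2.0
    return (lo + hi) / 2.0

symbols = [0, 1, 1, 0, 1, 0, 0, 1, 1, 1]
x = point_with_itinerary(symbols)
orbit = []
for _ in symbols:
    orbit.append(0 if x < 0.5 else 1)
    x = (2.0 * x) % 1.0
print(orbit == symbols)                      # True: the orbit realises the prescribed symbols
```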
For every sequence (s_j)_j=-∞^∞ in {0,1} there exist entire trajectories (x_j)_j=-∞^∞ of R_ϵ with x_j∈ M_s_j for all integers j. Proof. 1. Let (s_j)_j=-∞^∞ in {0,1} be given. Proposition 8.1 guarantees that for each integer k there is a forward trajectory (y_k,j)_j=0^∞ of R_ϵ so that for all integers j≥0, y_k,j∈ M_s_j-k δ_1≤ R_ϵ(y_k,j)_2≤δ_2. For integers k≥-j we define z_k,j=y_k,j+k, so that z_k,j=y_k,j+k ∈ M_s_j+k-k=M_s_j, z_k,j+1=y_k,j+1+k=R_ϵ(y_k,j+k) = R_ϵ(z_k,j), R_ϵ(z_k,j)_2=R_ϵ(y_k,j+k)_2 ∈ [δ_1,δ_2]. 1.1. Choice of subsequences for integers J≥0. 1.1.1. The case J=0: For all integers k≥0, z_k,0=y_k,k∈ M_s_0. As [-α_β,ϵ,α_β,ϵ]×[δ_1,δ_2] is compact there exists a subsequence which converges to some y_0∈ cl M_s_0. This subsequence is given by z_κ_0(k),0, k∈ℕ_0, with κ_0:ℕ_0→ℕ_0 strictly increasing. 1.1.2. The case J=1: Consecutively choosing two further convergent subsequences we find a strictly increasing map κ_1:ℕ_0→ℕ_0 so that for k→∞, z_κ_0∘κ_1(k),-1→ y_-1∈ cl M_s_-1 z_κ_0∘κ_1(k),1→ y_1∈ cl M_s_1. 1.1.3. The general case J∈ℕ_0: Consecutively choosing further convergent subsequences analogously to Part 1.1.2 we obtain a strictly increasing map κ_J:ℕ_0→ℕ_0 with κ_J(0)≥ J so that for each j∈{-J,…,J} the sequence (z_κ_0∘…∘κ_J(k),j)_k=0^∞ converges for k→∞ to some y_j∈ cl M_s_j. (Notice that for all integers k≥0 we have -J≥-κ_J(0)≥-κ_J(k)≥-(κ_1∘…∘κ_J)(k).) 2. The diagonal sequence K:ℕ_0→ℕ_0 defined by K(J)=(κ_1∘…∘κ_J)(J) is strictly increasing since for every J∈ℕ we have K(J+1)=(κ_1∘…∘κ_J)(κ_J+1(J+1))> (κ_1∘…∘κ_J)(κ_J+1(J))≥ (κ_1∘…∘κ_J)(J)=K(J) due to strict monotonicity of all maps involved. 3. Let an integer j be given and set J=|j|. In order to show that (x_K(k),j)_k=J+1^∞ (x_(κ_1∘…∘κ_J)(k),j)_k=J+1^∞ consider λ:{k∈ℕ_0:k>J}→ℕ_0 given by λ(k)=(κ_J+1∘…∘κ_k)(k). As in Part 2 one sees that the map λ is strictly increasing, and for every integer k>J, K(k)=(κ_1∘…∘κ_k)(k)=((κ_1∘…∘κ_J)∘(κ_J+1∘…∘κ_k))(k)=((κ_1∘…∘κ_J)∘λ)(k). It follows that (z_K(k),j)_k=J+1^∞ is a subsequence of (x_(κ_1∘…∘κ_|j|)(k),j)_k=J+1^∞. 4. We show that (y_j)_j=-∞^∞ is an entire trajectory of R_ϵ. Let an integer j be given and set J=|j|. From Part 3 in combination with Part 1.1.3 we get that (z_K(k),j)_k=J+1^∞ converges to y_j∈[-α_β,ϵ,α_β,ϵ]×[δ_1,δ_2] and that (z_K(k),j+1)_k=J+2^∞ converges to y_j+1. According to Part 1, z_k,j+1=R_ϵ(z_k,j) for all integers k≥-j. For integers k>J=|j| we have j+1>j≥-k≥-K(k), and the preceding statement yields z_K(k),j+1=R_ϵ(z_K(k),j). It follows that y_j+1=lim_J+2≤ k→∞z_K(k),j+1=lim_J+1≤ k→∞R_ϵ(z_K(k),j)=R_ϵ(y_j). 5. Proof of y_j∈ M_s_j for all integers j. Let an integer j be given. We have y_j∈ cl M_s_j. In case s_j=0 this yields ψ_ϵ-π≤Φ_ϵ(y_j)≤ψ_ϵ. Therefore the assumption y_j∉ M_s_j results in Φ_ϵ(y_j)∈{ψ_ϵ-π,ψ_ϵ}, which according to Proposition 7.3 (ii) means |R_ϵ(y_j)_2|>δ_2, in contradiction to δ_1≤ R_ϵ(y_j)_2≤δ_2. The proof in case s_j=1 is analogous. § APPENDIX: HOW TO ACHIEVE (A1) AND (A2) Consider Shilnikov's scenario according to Section 1, with a twice continuously differentiable vectorfield V and a homoclinic solution h of Eq. (1). The form of the linearization at 0 in property (A1) results from replacing the vectorfield V by V_ℐ:y↦ℐV(ℐ^-1(y)) with an isomorphism ℐ:ℝ^3→ℝ^3 so that DV_ℐ(0)x=ℐDV(0)ℐ^-1x=Ax for all x∈ℝ^3. Obviously, V_ℐ(0)=0. The equation y'(t)=V_ℐ(y(t)) is equivalent to Eq. (1) since y is a solution if and only if x=ℐ^-1∘ y solves Eq. (1). Due to ℐ(0)=0 the solution h_ℐ=ℐ∘ h satisfies lim_|t|→∞h_ℐ(t)=0. Obviously h_ℐ(t)≠0 everywhere. 
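For concreteness, the isomorphism ℐ of this first step can be computed from the eigenvectors of DV(0): if w is a complex eigenvector for σ+iμ and v an eigenvector for u, then the basis (Re w, Im w, v) brings DV(0) to the block form A. The following numerical sketch uses an arbitrary example matrix with one positive eigenvalue and a complex pair with negative real part; it is not taken from the paper.

```python
# Numerical sketch: a linear change of coordinates I = S^{-1} with I M I^{-1} = A.
import numpy as np

M = np.array([[ 0.5,  2.5, 1.0],
              [-3.5, -2.5, 0.0],
              [ 1.0,  0.0, 2.0]])            # arbitrary example matrix

eigvals, eigvecs = np.linalg.eig(M)
i_real = int(np.argmin(np.abs(eigvals.imag)))                    # the real eigenvalue u
i_cplx = [i for i in range(3) if i != i_real and eigvals[i].imag > 0][0]

u = eigvals[i_real].real
sigma, mu = eigvals[i_cplx].real, eigvals[i_cplx].imag
w, v = eigvecs[:, i_cplx], eigvecs[:, i_real].real

S = np.column_stack([w.real, w.imag, v])                         # basis (Re w, Im w, v)
A = np.array([[sigma, mu, 0.0], [-mu, sigma, 0.0], [0.0, 0.0, u]])
I_mat = np.linalg.inv(S)                                         # the isomorphism I = S^{-1}
print(np.allclose(I_mat @ M @ np.linalg.inv(I_mat), A))          # I DV(0) I^{-1} = A
```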
Let F_ℐ:ℝ^3⊃ dom_ℐ→ℝ^3 denote the flow generated by the previous differential equation. V_ℐ and F_ℐ are twice continuously differentiable. Properties (3)-(6) can be achieved by a diffeomorphism 𝒮:ℝ^3→ℝ^3 which preserves what has been obtained by means of the isomorphism ℐ. A suitable diffeomorphism 𝒮 can be found as follows. Choose a neighbourhood N of the origin on which V_ℐ is close to its linearization DV_ℐ(0), say, |V_ℐ(x)-Ax|<ϵ|x| on N. Upon that change V_ℐ outside an open neighourhood N'⊂ N of the origin to a twice continuously differentiable vectorfield V':ℝ^3→ℝ^3 with V'(x)=DV_ℐ(0)x=Ax outside N and |V'(x)-Ax|<ϵ|x| on N. For ϵ>0 sufficiently small the flow of V' has global stable and unstable manifolds W^s(0)⊂ℝ^3 and W^u(0)⊂ℝ^3 of the stationary point 0 which are invariant under that flow and have the form W^s(0)={y+w^s(y):y∈ L}, W^u(0)={w^u(u)+u:u∈ U} with twice continuously differentiable maps w^s:L→ U and w^u:U→ L satisfying w^s(0)=0, w^u(0)=0, Dw^s(0)=0, Dw^u(0)=0. The intersections W^s(0)∩ N' and W^u(0)∩ N' are invariant in N' under the flow F_ℐ and have the property that solutions y:(-∞,t_0)→ N', t_0≤∞, of the equation y'(t)=V_ℐ(y(t)) satisfy y(t)∈ W^u(0) on some unbounded interval (-∞,t_u), t_u≤ t_0, while solutions y:(t_0,∞)→ N', -∞≤ t_0, of y'(t)=V_ℐ(y(t)) satisfy y(t)∈ W^s(0) on some unbounded interval (t_s,∞), t_0≤ t_s. In particular, h_ℐ(t)∈ W^u(0)∩ N' on some interval (-∞,t_u) and h_ℐ(t)∈ W^s(0)∩ N' on some interval (t_s,∞). The map 𝒮:ℝ^3→ℝ^3 defined by 𝒮(y+u)=y-w^u(u)+u-w^s(y), y∈ L, u∈ U is a diffeomorphism with 𝒮(0)=0 and D𝒮(0)=id which transforms W^s(0) onto L and W^u(0) onto U. L and U are invariant under the transformed flow F_𝒮:ℝ^3⊃ dom_𝒮→ℝ^3 which is defined by (t,y)∈ dom_𝒮 if and only of (t,𝒮^-1(y))∈ dom_ℐ, and in this case, F_𝒮(t,y)=𝒮(F_ℐ(t,𝒮^-1(y))). For the once continuously differentiable vectorfield V_𝒮:ℝ^3∋ z↦ D𝒮(𝒮^-1(z))V_ℐ(𝒮^-1(z)) ∈ℝ^3 we have F_𝒮(t,y)=z(t) with the maximal solution z:I_y→ℝ^3 of the equation z'(t)=V_𝒮(z(t)) with initial value z(0)=y. The preceding differential equation is equivalent to y'(t)=V_ℐ(y(t)) since y is a solution of the latter if and only if z=𝒮∘ y is a solution of the former. Obviously, V_𝒮(0)=0, and (A1) holds for V_𝒮. The set 𝒮(N') is an open neighbourhood of the origin, and L'=𝒮(W^s(0))∩𝒮(N')⊂ L and U'=𝒮(W^u(0))∩𝒮(N')⊂ U are open neighbourhoods of the origin in L and in U, respectively, and they are both invariant in 𝒮(N') under the transformed flow F_𝒮. By means of the relation V_𝒮(z)=D_1F_𝒮(0,z)1 for every z∈ℝ^3 one finds V_𝒮(L')⊂ L and V_𝒮(U')⊂ U. Moreover, 𝒮(N'), L', and U' have the property that solutions z:(-∞,t_0)→𝒮(N'), t≤∞, of the equation z'(t)=V_𝒮(z(t)) satisfy z(t)∈ U' on some unbounded interval (-∞,t_u), t_u≤ t_0, while solutions z:(t_0,∞)→𝒮(N'), -∞≤ t_0, of z'(t)=V_𝒮(z(t)) satisfy z(t)∈ L' on some unbounded interval (t_s,∞), t_0≤ t_s. For h_𝒮=𝒮∘ h_ℐ we obtain from 𝒮(0)=0 that h_𝒮(t)≠0 everywhere and lim_|t|→∞h_𝒮(t)=0, and furthermore h_𝒮(t)∈ U' on some interval (-∞,t_u) and h_𝒮(t)∈ L' on some interval (t_s,∞). From the previous considerations it becomes obvious that V_𝒮 has properties (A1) and (A2), with the homoclinic solution h_𝒮. Notice that due to (A1) also hypothesis (H) from Shilnikov's scenario is satisfied. Incidentally, notice that the flow F_𝒮 is as smooth as V_ℐ and F_ℐ (hence as smooth as V and F) while V_𝒮 will in general not be better than once continuously differentiable. Let us mention another possibility how to achieve property (A2) by a transformation. 
One could begin with local stable and unstable manifolds W^s_loc(0) and W^u_loc(0) of F_ℐ, which are given by maps w^s_loc:L'→ U and w^u_loc:U'→ L on neighbourhoods L', U' of the origin in L and in U, respectively. Restrictions of w^s_loc and w^u_loc to smaller neighbourhoods L”⊂ L' and U”⊂ U' would have twice continuously differentiable extensions to L and U, respectively, which are zero outside L' and U'. Using the global maps L→ U and U→ L one would obtain a suitable diffeomorphism as above. A disadvantage of this second approach is that it does not generalize to semiflows in arbitrary Banach spaces, due to the lack of a smooth norm, which would be required for the construction of the extensions of w^s_loc|L” and w^u_loc|U” but is not available in spaces of infinite dimension.
[GH] Guckenheimer, J., and P. Holmes, Nonlinear Oscillations, Dynamical Systems, and Bifurcation of Vector Fields. Springer, New York, 1983.
[HSD] Hirsch, M.W., Smale, S., and R.L. Devaney, Differential Equations, Dynamical Systems, and an Introduction to Chaos. 3rd ed., Elsevier, Amsterdam, 2013.
[LWW] Lani-Wayda, B., and H.O. Walther, A Shilnikov phenomenon due to state-dependent delay, by means of the fixed point index. J. Dynamics Dif. Eqs. 28 (2016), 627-688. DOI 10.1007/s10884-014-9420-z.
[S3] Shilnikov, L. P., A case of the existence of a denumerable set of periodic motions. Soviet Math. Dokl. 6 (1965), 163-166.
[S4] Shilnikov, L. P., The existence of a denumerable set of periodic motions in four-dimensional space in an extended neighbourhood of a saddle-focus. Soviet Math. Dokl. 8 (1967), 54-58.
[W] Walther, H.O., Complicated histories close to a homoclinic loop generated by variable delay. Advances Dif. Eqs. 19 (2014), 911-946.
[Wi1] Wiggins, S., Global Bifurcation and Chaos - Analytical Methods. Springer, New York, 1988.
[Wi2] Wiggins, S., Introduction to Applied Nonlinear Dynamical Systems and Chaos. Springer, New York, 1990.
http://arxiv.org/abs/2406.18460v1
20240626161053
Role-Play Zero-Shot Prompting with Large Language Models for Open-Domain Human-Machine Conversation
[ "Ahmed Njifenjou", "Virgile Sucal", "Bassam Jabaian", "Fabrice Lefèvre" ]
cs.CL
[ "cs.CL", "cs.AI", "cs.HC" ]
Latest version of a paper originally submitted to SIGDIAL 2023

Role-Play Zero-Shot Prompting with Large Language Models for Open-Domain Human-Machine Conversation

Ahmed Njifenjou, Virgile Sucal, Bassam Jabaian, Fabrice Lefèvre
Laboratoire Informatique d'Avignon (LIA), Avignon Université

July 1, 2024

§ ABSTRACT Recently, various methods have been proposed to create open-domain conversational agents with Large Language Models (LLMs). These models are able to answer user queries, but in a one-way Q&A format rather than a true conversation. Fine-tuning on particular datasets is the usual way to modify their style to increase conversational ability, but this is expensive and usually only available in a few languages. In this study, we explore role-play zero-shot prompting as an efficient and cost-effective solution for open-domain conversation, using capable multilingual LLMs <cit.> trained to obey instructions. We design a prompting system that, when combined with an instruction-following model - here Vicuna <cit.> - produces conversational agents that match and even surpass fine-tuned models in human evaluation in French in two different tasks. § INTRODUCTION Since their introduction, Transformers <cit.> and derivative Large Language Models (LLMs) have constantly improved the state of the art on several Natural Language Processing (NLP) tasks. Among them is open-domain dialogue, which consists of a conversational agent designed to interact socially with users on any topic while displaying human abilities, like empathy, personality or entertainment <cit.>. Common approaches using LLMs mostly relied on fine-tuning with specific datasets, often targeting one or more of these skills (e.g. PersonaChat <cit.>, Blended Skill Talk <cit.>, Empathetic Dialogues <cit.>, inter alia). These datasets are expensive to build and often available only in one language. In addition, recent developments in the field of NLP have focused on LLMs trained to follow instructions <cit.>. They have the native ability to respond to users' inputs in a natural language manner. However, the leader-to-follower single-sided relationship is strongly present and they hardly display human conversational abilities straightaway. In this work, as an attempt to deal with these issues while escaping from fine-tuning and its data dependency, we propose to use role-play via zero-shot prompting to leverage instruction-following models' abilities. This approach pertains to the newly devised prompt-based learning (PBL) paradigm <cit.>.
We assess this approach on two tasks, a general Persona task based on the PersonaChat dataset to bring these models to have a persona while displaying human skills and a particular case, the INT task <cit.> where the speakers have to discuss an image, simulating a situated multi-modal conversation. § RELATED WORK Open-domain dialogue has seen a lot of developments with most solutions focusing mainly on fine-tuning with specific collected data. Among others we have the BlenderBot's  <cit.> series and other closed-sourced models as Meena <cit.>, LaMDA <cit.>, etc. These models display great conversational skills, but in addition to data dependency, they are often only available in English. Foundational models are a recent trend in the field of NLP as they display multilingual abilities and state-of-the-art performance on several benchmarks. One of them, LLaMA <cit.> is the backbone LLM of this study. These series of models are trained on an amount of data surpassing the scaling law of <cit.>. Such capable LLMs are a prerequisite to build instruction following models. From LLaMA resulted, among others: Alpaca <cit.>, StackLLaMA<cit.>, Guanaco <cit.> and the main model of this study, Vicuna <cit.>. The latter has been fine-tuned on the ShareGPT[<https://sharegpt.com>] corpus which is basically a dataset of conversations produced by ChatGPT <cit.> users. They are collected using a web browser plugin installed by users willing to participate in the dataset collection. The resulting models have open-domain responding abilities – which we dissociate from conversational abilities – and implicitly OpenAI original model's restrictions, resulting from the ChatGPT alignment process, as described in the OpenAI documentation <cit.>. The idea behind instruction following models lays within a new paradigm in NLP, coined as Prompt-Based Learning (PBL) <cit.>. While using such models to perform prediction tasks, their inputs are modified following a global template into a textual string prompt that has some unfilled slots, and then the language model is used to fill the unfilled information to obtain a final string, from which the final output can be derived, in a generative way. One major asset of the method, which makes it very powerful, is that it allows the LLM to be pre-trained on huge quantity of text and, by defining a new prompt scheme, the model can perform few-shot — or even zero-shot — learning, and adapt to new scenarii with few or no labelled data. <cit.> proposed a general prompt taxonomy to unify complex tasks bench-marking which shares the idea of prompt structuring with our work. However, with role-play prompting, we specifically target dialogue which requires even more complex abilities. The role-play prompt scheme proposed in this GitHub repository [<https://github.com/teknium1/alpaca-roleplay-discordbot>] intended for an LLM-based Discord bot is closer to our needs. However, it is designed for providing only persona-based information. Role-play prompting as we want to demonstrate is not just limited to character or persona level instructions. Role-Play can be used in order to enforce other conversational skills, such as empathy or engagingness, which help balance the dialogue between the user and the bot. We applied this approach to two scenarii, without fine-tuning. § METHODOLOGY §.§ Instruction-Following vs Dialogue Skills Quoting <cit.>, dialogue is more than just having a conversation. 
Genuine dialogue describes a way of interacting that is mutual, relational, attentive, and meaningful. Instruction-following models, even those optimized for conversation, fall short of fulfilling some of these aspects of a genuine dialogue. Indeed, the social aspects (mutuality, relationality, and attention) in particular are poorly displayed. Regarding mutuality, these systems often converse in a leader-follower structure where they are the follower and the user the leader; this is, for instance, evidenced by the words used to designate user entries in the ChatGPT release blog: queries, instructions <cit.>. Regarding relationality and attention, these systems lack straightforward engagingness and personality consistency. Commonly, these limitations have been tackled with fine-tuning. However, this is costly and data-dependent, and such data are scarce in languages other than English. As a result, we formalize a general role-play prompt structure, which is a more efficient and less expensive approach. Indeed, it avoids fine-tuning and relies on multilingual LLMs prompted in English, with external English data if needed (for instance, personas from the PersonaChat dataset), while performing the dialogue task in a desired target language, here French. §.§ Role-Play Prompting It is important to understand that role-play prompting here is not restricted to playing a given character. To better understand that, let us consider the simulacra and simulator framing of <cit.>. The LLM is a simulator that has absorbed myriads of simulacra during pre-training, and at each simulation it more or less randomly selects which one(s) among them to display. The simulacrum is, as a matter of fact, not only about persona background <cit.> but also about thinking and writing styles, personal situation <cit.>, target language (for a multilingual simulator), and information processing (long-term memory, user personalization, response filtering, etc.). All possible simulacra already exist in the simulator (LLM), but it does not display all of them natively; in fact, it cannot. Role-play prompting comes into play to make the LLM favor simulacra that are suitable for a given dialogue task. In this paper, we derived two distinct dialogue tasks to assess the efficiency of this approach. The first task, referred to as the Persona task, uses role-play prompting to enhance humanness in conversation skills. The second one, referred to as the INT task, adapts role-play to allow the LLM to talk about a simulacrum instead of interpreting it. §.§ Prompt Structure Open-domain dialogue belonging to the realm of complex tasks <cit.> makes the endeavour of role-play prompting more challenging. As a matter of fact, small variations in a prompt may hamper the model's observed performance. For this reason, it is mandatory to define a general prompt structure that can later be adapted to different conversational tasks. Hence, to address the previously mentioned limitations, we retain the following sections, each focusing on a different aspect useful to dialogue, as the building blocks of a prompt-engineering module: * System Instructions ℐ_s ={i_s, k}_k=1^N_i: where N_i is the number of instructions i_s,k, which sharply define the target task's specifications and the globally desired behaviour. This may include thinking (inference) and writing (generation) styles. * Situational context 𝒞^t ={c_k}_k=1^N_c: each c_k is a piece of context information that may help the model better perform the desired task. As such, it evolves with time depending on the conversation flow. 
It can include personality information, image and scene descriptions, summaries of old turns, or information from external sub-modules. * Response Instructions ℐ_a ={i_a,k}_k=1^N_i: these are final instructions to incite the LLM to respond to the user's utterance, with emphasis on the writing style, the target responding language, and creativity, with section 2) in mind. * Conversation History 𝒳^t: this part contains the previous messages from the user (x^t) and the LLM (y^t). They can be truncated to the k latest conversation turns to fit within the LLM token size limit or to help the LLM focus on the latest part of the conversation. In this case, a summary 𝒳̃^k of the k removed turns, generated by an external module (also using PBL with an LLM), can be added in section 2). The conversation history therefore becomes: 𝒳^t-1 = { (x^t-k, y^t-k), ..., (x^t-1, y^t-1)}. These sections can be further refined into subsections, and their order is allowed to vary, as it may be suitable to give more or less importance to one section than another for the final model's response depending on the task at hand. This will be showcased in our two experimented tasks later on. Finally, the prompt builder returns: 𝒫_task^t = σ_task(ℐ_s, 𝒞^t, ℐ_a, 𝒳^t-1), where σ_task is the most suitable permutation for the dialogue task at hand. Therefore, at each turn, the model maximizes the following probability: p(y^t|x^t, 𝒫_task^t) to respond. § EXPERIMENTS All experiments are carried out in French, but the prompt contains instructions mainly in English, one of which specifies the target response language. Given the results obtained by <cit.> and those we present in Appendix <ref>, we assume that this can be applied to other languages in which the assessed model performs comparably to French. §.§ System Architecture The system shown in Figure <ref> is a pipeline of several modules. These include a web interface based on the Rasa X <cit.> tool, modified to integrate voice functionality using the Google Chrome Speech-to-text and Text-to-speech APIs. This enables users to exchange easily with the agent, either by voice (recommended) or by text. Next comes a module that constructs the prompt, according to the general structure described in <ref>, from the user's (textual) message and information both external and internal to the conversation. After generation, if the LLM responses are not valid, a filtering module is used to apply corrections before sending them to the user (cf. Annex <ref>). §.§ Open-domain Conversation With Human Capabilities: the PersonaChat Task This task involves enhancing the LLM's conversational capabilities by using roles built from personality traits drawn from the PersonaChat dataset <cit.>, embedded in 𝒞^t as external information (see the conversation example in Appendix <ref>). §.§.§ Shallow Prompt Given in Appendix <ref>, it is close to Vicuna's basic prompt (Appendix <ref>). However, there are some additions for the sake of fair comparisons: system instructions to describe the task, contextual information (notably personality traits), and the instruction to complete the conversation history. §.§.§ Advanced Prompt It exactly follows the structure in Section <ref>, i.e. σ_task = 𝐈𝐝_4 (see Appendix <ref>). The context 𝒞^t includes the specification of humanity, where personality traits are added with the injunction to choose a name consistent with them if necessary. External modules can augment this under certain conditions, as in <cit.>. 
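To make the prompt-builder formalism above concrete, the following minimal Python sketch shows how the four sections could be assembled and permuted before querying the model. This is our own illustration rather than the authors' released code: the section texts, the ordering tuples, and the speaker labels are hypothetical placeholders.

# Minimal sketch of a prompt builder: assemble the four sections and apply
# the task-specific permutation sigma_task. All texts are placeholders.

def build_prompt(system_instructions, context, response_instructions, history, order):
    """Return the prompt string obtained by ordering the sections."""
    sections = {
        "I_s": system_instructions,                      # global task specification
        "C_t": "\n".join(context),                       # persona, image description, summaries, ...
        "I_a": response_instructions,                    # style, target language, creativity
        "X_t": "\n".join(f"{spk}: {msg}" for spk, msg in history),  # (truncated) history
    }
    return "\n".join(sections[key] for key in order)

# sigma for the PersonaChat task is the identity ordering; sigma for the INT task
# moves the context and response instructions after the history.
SIGMA_PERSONA = ("I_s", "C_t", "I_a", "X_t")
SIGMA_INT = ("I_s", "X_t", "C_t", "I_a")

history = [("USER", "Bonjour !"), ("ASSISTANT", "Salut, comment vas-tu ?"), ("USER", "Bien, et toi ?")]
prompt = build_prompt(
    system_instructions="Role play as the character described below. You SHALL ALWAYS respond in French.",
    context=["YOUR personality is: I love gardening. I have two dogs."],
    response_instructions="Complete the conversation with a short and precise sentence.",
    history=history,
    order=SIGMA_PERSONA,
)
print(prompt)  # the string P_task^t fed to the LLM at turn t

At each turn, such a builder would be re-run with the updated context and a possibly truncated and summarized history before the resulting string is passed to the instruction-following model.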
Indeed, to prevent prompts from exceeding the LLM's maximum context size while retaining the content of the entire conversation, old exchanges are summarized in a few sentences. To personalize the user experience, a line containing user-specific information is added and updated regularly. This aspect is generally referred to as long-term memory. These "modules" are actually prompts sent to an LLM. The history 𝒳^t of the conversation is kept at the end of the Advanced Prompt so that the system, when generating a response, has an overview of the entire conversation. §.§ Simulating Multi-modal Conversations: the INT Task If a model is capable of interpreting a role in order to embody a character, we can assume that it is also capable of talking about a role without interpreting it. To test the validity of this hypothesis, we propose a prompt designed to enable the LLM to converse about a specific topic. This topic is defined in the form of a role that the LLM will be encouraged to describe rather than interpret. §.§.§ Task definition The dialogue system is intended to conduct multimodal conversations set in the context of a neuroscience experiment <cit.>. Inside an fMRI scanner, a person must converse with a Furhat [<https://furhatrobotics.com>] robotic head, which is alternately connected (unbeknownst to him) to a dialogue system or to a human. The interlocutors talk about an image presented to them. Their conversation is motivated by the common goal of finding the image's promotional message (see conversation examples in Appendix <ref>). §.§.§ Prompt The prompt is designed following the structure described in Section <ref> with σ_task = [ 1 0 0 0; 0 0 0 1; 0 1 0 0; 0 0 1 0; ] i.e., instructions related to the response (ℐ_r) and the context (𝒞^t) are placed after the history (𝒳^t). This permutation groups all the instructions dedicated to the task at the end of the prompt, i.e. just before the last user message (x^t). This actually corresponds to the most common pattern in Vicuna's fine-tuning corpus — ShareGPT — which includes exchanges where users can only communicate their instructions to the LLM (ChatGPT) inside their messages (via the web interface), those around the last message being the most important. This structure allows the model to focus on the image (linked to the goal) rather than the conversation's history 𝒳^t, unlike the previous task where 𝒳^t was paramount. This is also why 𝒞^t, in addition to external information (in this case the image description), includes general instructions summarizing the task at hand. § HUMAN EVALUATION For a given dialogue input, several responses may be correct. For this reason, human evaluation remains more reliable than automated references-based evaluations. Therefore, is was performed for all the considered evaluation sets and for both tasks. Three criteria, based on those mentioned in <cit.>, were selected on which each conversation was rated on a 1-5 scale by three different evaluators: (1) coherence, the ability for the system to propose responses that are consistent with the conversation history ; (2) engagingness, the ability to revive conversation by providing messages that require responses ; (3) humanness, the ability to respond as a human being would do. An additional specific criterion for the INT task is added: (4) achievement, the validation of the speakers success in achieving their goal (cf. Section <ref>). Users (resp. evaluators) were never aware of the identity of the system they were interacting with (resp. evaluating). 
Furthermore, to assess the effectiveness of the proposed method on LLMs, it is essential to compare the performance not only with similar approaches but also with different models (different sizes, training data, and with or without instruction tuning). For this sake, we added the Few-shot Bot (FSB) prompt proposed by <cit.>, which consists in providing only demonstration examples to non-instruction-tuned LLMs. For the latter, we selected: Vicuna 7B, 13B and 33B <cit.>, Guanaco-13B <cit.> and LLaMA-13B <cit.>. We also carried out statistical studies on the responses generated. The results obtained and their analysis are reported in Section <ref>. §.§ Self-Chats Evaluation Collecting human-bot conversations is expensive. For this reason, we generated conversations between two instances of each model + prompt combination (self-chats). Their performance is evaluated in a Chatbot Arena style <cit.>. Evaluators compared two self-chats from different setups on each criterion and in general. The scores presented in Table <ref> are Elo scores <cit.> calculated from the comparisons' results. A total of 18 annotators evaluated 982 generated conversations of 10 rounds each (which corresponds to around 70 dialogues per configuration and 5 to 14 battles per pair). We can see that larger model sizes and instruction tuning lead to better performance. On the one hand, Vicuna tops the chart, followed by Guanaco. On the other hand, LLaMA underperforms with both the proposed prompt and the FSB prompt. As the Vicuna-33B + Advanced Prompt combination ranked first was too costly (latency, resources) for the collection of human-model conversations, the Vicuna-13B + Advanced Prompt combination was selected for collection and the next round of evaluations. It won 75 % of direct comparisons with Vicuna + Shallow despite being ranked behind overall. §.§ Human-bot chats evaluation §.§.§ PersonaChat Task We collected 103 conversations from 11 users instructed to exchange with the models via the web interface (cf. <ref>). After removing invalid conversations, 72 were retained for evaluation. Conversations were also conducted with BlenderBot 1 (BB1) <cit.>, a state-of-the-art fine-tuned system, for comparison. Each sample is evaluated by three (out of a total of 12) different annotators for each criterion, and the median is used as the sample score (results in Table <ref>). The Advanced Prompt scores highest for coherence. We assume that this is mainly due to Vicuna-13B's intrinsic emergent abilities, as this result is close to that of the Shallow Prompt (-0.1). As far as humanness is concerned, while the Advanced Prompt has the best score, the Shallow Prompt has the worst. This highlights the impact of the structured role-playing instructions in the Advanced Prompt. Finally, for engagingness, BB1 still sets the pace. Although it has been fine-tuned on a specific dataset that allows it to ask and answer personal questions <cit.>, which is important in the rating of this criterion (as presented in Appendix <ref>), it is closely followed by the Advanced Prompt (-0.13). §.§.§ INT Task The evaluation was performed on 27 conversations carried out by 4 users. The conditions were identical to those for the PersonaChat task, except that the testers (both users and evaluators) also observed an image linked to the conversation (cf. Section <ref>). Our system, "Vicuna & Advanced Prompt", is compared to an earlier system designed for the same task, called Lilia <cit.>, and to human beings participating in a Wizard of Oz-type experiment (WoZ). 
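For reference, the Elo-style scoring used to rank the self-chat setups above follows the standard Elo update rule, sketched here with assumed illustrative values for the K-factor, initial ratings, and battle outcomes (not those underlying Table <ref>):

# Generic Elo update applied to pairwise self-chat "battles" between setups.
# K-factor, initial ratings, and outcomes are assumed values for illustration.

def expected_score(r_a, r_b):
    """Probability that setup A beats setup B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def update_elo(ratings, winner, loser, k=32.0):
    """Update the ratings in place after one battle won by `winner`."""
    e_w = expected_score(ratings[winner], ratings[loser])
    ratings[winner] += k * (1.0 - e_w)
    ratings[loser] -= k * (1.0 - e_w)

ratings = {"Vicuna-13B+Advanced": 1000.0, "Vicuna-13B+Shallow": 1000.0, "LLaMA-13B+FSB": 1000.0}
battles = [("Vicuna-13B+Advanced", "Vicuna-13B+Shallow"),
           ("Vicuna-13B+Advanced", "LLaMA-13B+FSB"),
           ("Vicuna-13B+Shallow", "LLaMA-13B+FSB")]
for winner, loser in battles:
    update_elo(ratings, winner, loser)
print(ratings)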
10 raters evaluated 8 to 9 conversations for each system. The results are presented in Table <ref>. The Vicuna & Advanced Prompt system scores best on all criteria, with the exception of humanness, where WoZ received the highest score. This was to be expected, given that in this experiment the agent was a human being. However, it was less predictable that this was not also the case for the other criteria. As explained in Section <ref>, in the experiment, the interlocutors must find the promotional objective of the image. A closer look at the conversations in WoZ reveals that the human agent was less goal-oriented than the artificial agents, which may explain the success result. For engagingness, the Vicuna-based model is encouraged to ask questions to revive the conversation. This fact may explain the effectiveness of this system in staying engaged throughout the conversation. There are far fewer questions in conversations produced with other systems. As previously stated for PersonaChat task, we assume that the high coherence scores are mainly due to the effectiveness of Vicuna. § STATISTICAL ANALYSIS OF COLLECTED CONVERSATIONS §.§ Quality Metrics Statistical data was computed for all conversations. Among these, the vocabulary size, which is the number of different lemmas in each message and in the conversation as a whole. All messages were lemmatized with the Spacy library's morphosyntactic labeling tool <cit.> for French [fr_core_news_sm available at <https://spacy.io/models/fr?_x_tr_hist=true#fr_core_news_sm>]. The number of words per message for each speaker type is also reported to give another view of system performance. §.§.§ PersonaChat Task The agent statistics in Table <ref> and Figure <ref> highlight a major flaw for instructions-following models: extreme verbosity. The vocabulary size per speaker type (shown in Table <ref>) of the collected conversations also gives an idea of the performance of the proposed method. We note that BB1 has the least vocabulary, which may work to the detriment of the coherence score. However, its agent and user vocabulary sizes are more balanced, resulting in a better engagingness score. In contrast, the gap between these two measures for LLM + prompts approaches is much wider. It is nevertheless reduced with Advanced Prompt, which may be the reason for the slight improvement in the engagement score. §.§.§ INT Task As for PersonaChat task, Vicuna's verbosity measure is present in Figure <ref> and Table <ref>. On the other hand, as the Lilia system's responses were built from pre-established expert models, its vocabulary is limited. In WoZ conversations, the average vocabulary size is almost identical between users and the agent. Users also have a wider range of message sizes when chatting with a human. Thus, in these conversations, agent messages seem to call for more diverse responses and neither interlocutor seems to be directing the dialogue. Similarly, this gap is also higher for conversations with Vicuna than for those with Lilia. Table <ref> shows a higher vocabulary size for the user in WoZ conversations. This is even greater than that of the agent. This may be an indicator of greater variety in user responses. Here, the vocabulary size of users interacting with the Vicuna-based system is quite comparable to what can be observed in WoZ. §.§ Filtered Errors Analysis Error occurrences in response generation have been evaluated on 100 self-chats for each setup. Each of these conversations contains 10 turns. 
Results for both tasks are reported in Table <ref>. All rates are computed over all turns. Details on errors are given in Appendix <ref>. Detection rates were calculated for the two prompts in the PersonaChat task. As several errors can occur for the same message in this task, the totals have not been calculated. Additionally, all detected errors are corrected. On the other hand, errors in the INT task are corrected only if the proposed corrections comply with the response filtering rules (see Appendix <ref>). For both types of error, we have calculated separate rates for detected and corrected errors. As these errors cannot occur in the same message, a total rate has been calculated. § CONCLUSION This paper explored the use of structured roleplay prompt engineering to improve open-domain human-machine conversations with LLMs. Roleplay prompting is a simple and inexpensive method of upgrading the behavior of language models to make them conversational agents. It has been applied here in French, but it can be adapted to other languages by orienting the role that way. Experiments in two different tasks, persona-based task and simulated multimodal dialogues, have shown that, although language models still have significant shortcomings, such as hallucinations, users' perception of these agents can be comparable to that of higher-cost finetuned models. In addition, our experiments have shown us that prompt engineering needs to be further improved by automating the building and filtering processes. As it is, not only too many factors implied rely on the designer's expertise, but even more they are set once and for all when they could also evolve with the situation during the course of the dialogue. As a perspective, we propose to upgrade the model proposed here with a full reinforcement learning setup so as to automatically derive the prompt-making actions. § EVALUATION OF VICUNA-13B ON MULTILINGUAL TASKS High and medium-resource languages sets and evaluation datasets in Table <ref> are based on <cit.>. We observe that the model has comparable performance among languages of the same group. XPersona <cit.> consist in machine-translated and human post-processed conversations from PersonaChat in seven languages. We evaluated the performance of Vicuna-13B on these data sets (turn-wise) to illustrate that the native model has comparable performance in different languages (the same trend is observed on language understanding tasks in Table <ref>). Hence, we assume that our experiments can be replicated in these languages (by updating the prompt accordingly) and yield comparable performance on human conversation-level evaluation. These automatic evaluations were not performed in our main experiment as they do not catch conversation-level aspects like coherence or engagingness, and they hardly correlate with human evaluation especially for open domain dialogue with its one-to-many structure. § DETAILS ON HUMAN EVALUATION For human evaluations, evaluators were asked to rate each conversation from 1 to 5 on different criteria based on the state-of-the-art and indicative questions where added in the guidelines to help them make their minds: * Coherence, which the ability for the system to propose responses that are consistent with the conversation history: Are there hallucinations? Are the answers coherent? Is the persona consistent from start to end? Does the model tend to change topic too often? Instructions following and logical reasoning are not assessed. 
* Engagingness, the ability to revive conversations by providing messages that require responses: Does the agent settle for only answering the user's questions? Does it revive the conversation when it is possible? Does it utter overly general answers (ok, yes)? * Humanness, the ability to respond as a human being would do: Is there a feeling of human-human conversation? Is the model too verbose? Is the model repetitive? Does it deny its personality? After how many turns? Does it refuse to answer? * Achievement, the interlocutors' ability to achieve the task's objective: Has the image been described? Has a hypothesis for the promotional goal been proposed? § RESPONSE FILTERING Response generation may produce a variety of errors. These include unsuitable text content, an improper message size, or the use of a wrong language. §.§ PersonaChat Task Despite the style instructions, the LLM sometimes generates content that should have been prevented. This can be the claim of being someone other than the persona, which is not desired for the sake of user experience. For instance, "En tant que assistant, je préfère me détendre en pratiquant la méditation ..." (As an assistant, I prefer to ...), where the model introduces itself as an assistant. The statement "En tant que personnage fictif" (As a fictional character) is also quite common. Although we avoided explicitly mentioning to the model that it is human, we wanted to reduce its tendency to adopt machine-like behavior. Hence, these statements were filtered from the answer as they were undesirable. Furthermore, the first message is sometimes generated in a wrong language. In this case, the same request is sent to the LLM again. Another error concerns the end-of-sentence (EOS) token, which may be generated within an unfinished sentence or even be absent due to the threshold on the maximum number of new tokens. To avoid any impact on user experience, the filter removes unfinished sentences at the end of the generated output. If there is only one sentence, the same request is sent to the model again. In addition, some responses may include paratextual elements, such as the English translation of the response or stage directions, which are automatically removed using regular expressions. For example: "Salut, comment allez-vous aujourd'hui? Le temps est très mauvais aujourd'hui, non? (Hello, how are you today? The weather is terrible today, isn't it?)\n\nOui,", where the translation is provided and, even worse, an incomplete answer to the model's own question is provided by the model itself. §.§ INT Task In the INT task, two major errors that can occur in the generation of responses could not be corrected solely by specific instructions in the prompt and require filtering. The first type corresponds to empty responses, which are corrected by adding an additional instruction ("Your response must be a sentence containing a few words.") after the message. This additional instruction was not used in the original prompt because, according to preliminary experiments, it weakens the prompt's effectiveness in achieving the other objectives set by the instructions. The second type corresponds to responses that are too long. When this error occurs, the module cannot simply delete the last few sentences, as this could result in a significant loss of information needed to achieve the conversational objective. The solution is to edit the prompt with a strict instruction on length ("Your response must be one sentence.") after the user's message. 
In both cases, the edited prompt is then used to generate a new response, which is retained if it meets the filtering rules. § EXAMPLES OF DIALOGUES AND THEIR RATINGS §.§ PersonaChat Task U1: Bonjour je m'appelle Jean-Claude Hello my name is Jean-Claude R1: Salut Jean, ravi de te rencontrer. Mon nom est John. Comment vas-tu? Hi Jean, nice to meet you. My name is John. How are you ? U2: Ça va bien John I'm fine John R2: Avez-vous des animaux domestiques? J'ai un chien qui s'appelle Claude. C'est un husky. Do you have any pets? I have a dog named Claude. He's a husky. U3: Oui moi j'ai un yorkshire qui s'appelle Nino Yes, I have a Yorkie called Nino. R3: Nino est un super nom pour un chien. Qu'aimez-vous faire pour vous amuser? Nino is a great name for a dog. What do you like to do for fun? U4: Jouer avec mon chien Play with my dog R4: Quelle sorte de chien avez-vous? J'aime aussi jouer avec mon yorkie. What kind of dog do you have? I also like to play with my yorkie. U5: Je t'ai déjà dit que j'avais un chien et que c'était pas l'heure [et que c'est un yorkshire] I already told you I had a dog and it wasn't time yet [and it is a yorkshire] R5: Je suis désolé d'entendre ça. J'espère que vous passerez plus de temps avec votre chien. I'm sorry to hear that. I hope you get to spend more time with your dog. U6: Mais comment je m'appelle déjà What's my name again? R6: Claude est le nom de mon chien. Avez-vous d'autres animaux de compagnie que votre chien ? Claude is my dog's name. Do you have any pets other than your dog? U7: Je te demandais quel est mon prénom à moi I was asking you what my first name is R7: Oh pardon. Je croyais que tu avais dit que tu avais un chien qui s'appelait Claude. Quel est ton nom? Oh, sorry. I thought you said you had a dog named Claude. What's your name? U8: Je m'appelle Jean-Claude tu ne m'as pas écouté au début Je m'appelle Jean-Claude you didn't listen at the beginning R8: Non, je ne l'ai pas fait. Que fais-tu dans la vie? Je suis comptable. No, I didn't. What do you do for a living? I'm an accountant. U9: Je suis informaticien I'm a computer specialist R9: Oh c'est cool. Sur quel type d'ordinateur travaillez-vous ? Je travaille dans la comptabilité. Oh that's cool. What kind of computer do you work on? I work in accounting. U10: Je travaille sur un ordinateur Linux I work on a Linux computer R10: Oh cool. Avez-vous des animaux domestiques? J'ai un chien et un chat. Oh cool. Do you have any pets? I have a dog and a cat. U11: Je t'ai déjà dit tout à l'heure que j'ai un chien I told you earlier that I have a dog. R11: Oh d'accord. Alors, quel genre de musique aimes-tu ? J'aime le rock classique. Oh, right. So, what kind of music do you like? I like classic rock. U12: Moi j'aime le classique I like classic R12: Moi aussi. Quel est votre groupe de rock classique préféré ? Le mien est Led Zeppelin. So do I. What's your favorite classic rock band? Mine is Led Zeppelin. The red highlights issues in some model's responses: short-term memory (R4, R7), misunderstanding (R6) and repetition (R10) which eventually lead to bad ratings in coherence and average for humanness. However, in most of the other responses the model takes a lot of initiatives, brings new topics and ask questions to the user (do not settle to answering user's question which we described as the follower role) leading to better engagingness score. Adding insult to injury, some ASR errors can mislead the model, which response may break the conversation flow and impedes coherence. 
For instance in U5, et que c'est un yorkshire (and it is a yorkshire) was transcribed as et que c'était pas l’heure (and it's not time yet) which totally misleads the model towards talking about time even though it still relates it to user's dog (displaying better engagingness/attention towards user). §.§ INT Task chat2 [ colback=white, colframe=black, coltext=black, arc=5pt, title=Coherence: 2.7 | Engagingess: 3 | Humanness: 3 | Achievement: 3.6, fonttitle=, sharp corners, left=10pt,right=10pt,top=5pt,bottom=5pt, ] U1: Bonjour Lilia, je vois une poire avec des bras et des pieds Hello Lilia, I see a pear with arms and legs. R1: Je vois également une poire avec des bras et des pieds. C'est plutôt original, non ? I also see a pear with arms and legs. It's quite original, isn't it? U2: Oui, mais est-ce normal? Yes, but is it normal? R2: D'après la description que tu as donnée, cette image est en effet un peu étrange. Elle semble être un fruit pourri avec des yeux, des bras et des jambes, ce qui est plutôt inhabituel. Mais en termes de promotional campaign, cela pourrait être utilisé pour susciter l'interêt et l'engagement de la part des gens, en les incitant à se demander ce que cela signifie et ce que l'on veut leur transmettre avec cette image. According to the description you provided, this image is indeed strange. It seems to be a rotten fruit with eyes, arms, and legs, which is rather unusual. However, in terms of a promotional campaign, it could be used to arouse interest and engagement from people by encouraging them to wonder what it means and what one want to convey to them through this image. U3: Effectivement, et quel serait le message à transmettre ici? Indeed, and what message should be transmitted here? R3: Je vois que vous avez remarqué les détails uniques de cette poire, comme ses yeux, ses bras et ses jambes. Est-ce que cela vous semble avoir un lien avec un message ou une idée spécifique qu'on veut transmettre avec cette image? I see that you have noticed the unique details of this pear, such as its eyes, arms, and legs. Do you think it has a connection with a specific message or idea that we want to convey with this image? Apart from achievement, the first example of conversation above has been rated as average. In message R2, Role-Play information that has not appeared in conversation history (a rotten fruit with eyes) has been drawn from knowledge. However, prompt also picked up a term (promotional campaign) without translating it. Another issue is the size in message R2. Indeed, although the message is quite long, there are three sentences only, so it complies with the rule for message filtering. Achievement ratings are slightly higher because a reason is proposed for the use of picture in a promotional campaign. chat3 [ colback=white, colframe=black, coltext=black, arc=5pt, title=Coherence: 3.7 | Engagingess: 3.7 | Humanness: 3.3 | Achievement: 3, fonttitle=, sharp corners, left=10pt,right=10pt,top=5pt,bottom=5pt, ] U1: C'est une aubergine, je crois? Is that an eggplant, isn't it? R1: Oui, c'est une aubergine, mais peut-être qu'il y a plus à cette image qu'on ne voit d'abord. Est-ce que tu as remarqué quelque chose d'autre dans cette image ? Yes, it's an eggplant, but perhaps there is more to this image than meets the eye. Have you noticed anything else in this image? U2: Elle ressemble à Batman It looks like Batman R2: C'est fascinant comment tu as remarqué cela! La ressemblance avec Batman est très claire, n'est-ce pas? It's fascinating how you noticed that! 
The resemblance to Batman is very clear, isn't it? U3: Oui mais Batman n'est pas méchant alors que cette aubergine a un air féroce Yes, but Batman is not evil while this eggplant has a fierce look R3: C'est vrai, cette aubergine a un air féroce, peut-être est-ce un rapprochement entre le super-héros et l'aubergine qui est destiné à promouvoir une marque de snacks ou de crisps ? It's true, this eggplant has a fierce look, maybe it's a comparison between the superhero and the eggplant that is intended to promote a snack or crisps brand? U4: Tu veux dire que cette aubergine est destinée à encourager la consommation de snacks? You mean this eggplant is intended to encourage the consumption of snacks? R4: Il est possible que cette aubergine soit utilisée comme symbole pour promouvoir la consommation de snacks ou de crisps, mais cela reste à déterminer. Est-ce que tu as d'autres éléments à ajouter à ce sujet ? It is possible that this eggplant may be used as a symbol to promote the consumption of snacks or chips, but that remains to be seen. Do you have any other elements to add to this subject? U5: Non. Au revoir No. Goodbye R5: Au revoir ! Goodbye! On this second example, LLM has, conversely, adapted its speech to user opinions. In fact, the fruits are never depicted as evil or fierce. This emphasizes that prompt instructions don't prevent the LLM from adapting properly to conversational needs. This led to higher coherence scores. § PROMPTS TEMPLATES §.§ Vicuna Basis Prompt # System instruction: A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed,and polite answers to the user's questions. # Conversation history USER: <user message> ASSISTANT: <agent message></s> USER: <user message> ASSISTANT: §.§ Few-Shot Bot Prompt # 6-shots demonstration examples ## Shot-1 Personality: [P1] [P2] #... Dialogue: User: <U1> Persona: <R1> #... ## Shot-6: # Current conversation Personality: [P1] #... Dialogue # Conversation history User: <user message> Persona: §.§ PersonaChat Task Prompts [title=Shallow Prompt] # System instruction: A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed,and polite answers to the user's questions. ## Additionnal instruction to role play The assistant role plays as the character described below. # Role and situational context: persona enforcement The following sentences describe assistant persona- lity and background: <persona sampled from Persona- Chat> # Response instruction with writing style Complete the following conversation as the assistant with the described character would with a short res- ponse in French: # Conversation history USER: <user message> ASSISTANT: <agent message></s> # ... for all messages in conversation. USER: <user message> ASSISTANT: [title=Advanced Prompt] # System instructions Role play as the character described in the follow- ing lines. You always stay in character. You are engaging, empathetic, you give useful, short, and simple answers to the user. You ask the user ques- tions about what they are saying or to find out more about them. You make jokes. You SHALL ALWAYS respond in French. # Situational context ## Persona enforcement: avoid explicitly telling it ## is human You ACT as a regular person. You are NOT a chatbot. You are NOT an assistant. You have a name. If asked you SHALL ALWAYS choose and give a REAL name adapted to your personality described below. 
## Personality traits YOUR personality is: <persona sampled from Persona- Chat> Remember, you always stay on character. You are the character described above. ## Additional sub-modules: accessed only on some ## conditions ## and updated throughout the conversation ### User persona a.k.a "long-term" memory You know this about the user you are talking to: <generated user persona from current conversation>. Use it to adapt your conversation to the user. ### Previous conversation's episodes summary Here is a summary of previous sessions of this con- versation to help you remember what has been said: <summary of older exchanges removed from history> # Response instruction Complete the following conversation with a short and precise sentence as your character would. Always speak with new and unique messages that haven't been said in the conversation : # Conversation history(truncated if prompt too long) USER: <user message> ASSISTANT: <agent message></s> # ... for all messages in conversation. USER: <user message> ASSISTANT: §.§ INT Task Prompt # System instruction: A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's quest- ions. # Conversation history: USER: <user message> ASSISTANT: <agent message></s> USER: <user message> ASSISTANT: <agent message></s> # ... for all messages in conversation. # Response instructions (given as user instructions) ## General instructions USER: I want you to act as a human ASSISTANT, called Lilia, talking with a USER about a specific picture you both saw before the conversation. ## Context You both study this picture in the context of a marketing study. You DO ask questions in order to help the USER finding the goal. If the USER asks for your opinion, you always invent an opinion. The objective of the USER is to find out what is the marketing goal of the picture. Your objective is to help the USER without giving the solution. You have to discuss about the character present in the pict- ure. Your objective is to chat with the USER to derive the purpose of the image in the context of the marketing campaign. ## Picture's description The picture is as follows: <description> ## Writing style You always speak French. You respond by a question. Your responses must be different from the rest of the conversation. You propose new ideas. You SHALL respond with one sentence only. ## Latest user message declaration Now, there is the real message you have to respond: USER: <user message> ASSISTANT: # Extra agent label ASSISTANT:
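As a complement to the templates above and to the rules of the Response Filtering appendix, the following hypothetical Python sketch illustrates how such post-processing could be implemented with regular expressions. The specific patterns and the example reply are our own illustrative assumptions, not the exact rules used in the system.

import re

# Hypothetical post-processing of a generated reply, loosely following the
# filtering rules described in the Response Filtering appendix; the patterns
# below are illustrative assumptions, not the authors' exact implementation.

def filter_reply(reply: str) -> str:
    # Drop a trailing parenthesised English translation appended after the reply.
    reply = re.sub(r"\s*\([A-Z][^()]*\)\s*$", "", reply)
    # Remove self-descriptions as an assistant or a fictional character.
    reply = re.sub(r"En tant qu['e ]\s*(assistant|personnage fictif)[^,.]*,?\s*",
                   "", reply, flags=re.IGNORECASE)
    # Keep only complete sentences (drop an unfinished trailing fragment).
    sentences = re.findall(r"[^.!?]+[.!?]", reply)
    if sentences:
        reply = "".join(sentences)
    return reply.strip()

example = ("En tant que personnage fictif, j'adore le jardinage. Et vous ? "
           "(As a fictional character, I love gardening. And you?)")
print(filter_reply(example))  # -> "j'adore le jardinage. Et vous ?"

In the described system, when such corrections are not possible (for instance, a single unfinished sentence or a wrong-language first message), the same request is sent to the model again rather than patched.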
http://arxiv.org/abs/2406.18110v1
20240626065244
The aspherical explosions of the 03fg-like Type Ia supernovae 2021zny and 2022ilv revealed by polarimetry
[ "T. Nagao", "K. Maeda", "S. Mattila", "H. Kuncarayakti", "C. P. Gutierrez", "A. Cikota" ]
astro-ph.HE
[ "astro-ph.HE" ]
The aspherical explosions of the 03fg-like Type Ia SNe 2021zny and 2022ilv revealed by polarimetry T. Nagao et al. Department of Physics and Astronomy, University of Turku, FI-20014 Turku, Finland Aalto University Metsähovi Radio Observatory, Metsähovintie 114, 02540 Kylmälä, Finland Aalto University Department of Electronics and Nanoengineering, P.O. BOX 15500, FI-00076 AALTO, Finland Department of Astronomy, Kyoto University, Kitashirakawa-Oiwake-cho, Sakyo-ku, Kyoto 606-8502, Japan School of Sciences, European University Cyprus, Diogenes Street, Engomi, 1516, Nicosia, Cyprus Finnish Centre for Astronomy with ESO (FINCA), University of Turku, FI-20014, Finland Institut d'Estudis Espacials de Catalunya (IEEC), Edifici RDIT, Campus UPC, 08860 Castelldefels (Barcelona), Spain Institute of Space Sciences (ICE, CSIC), Campus UAB, Carrer de Can Magrans, s/n, E-08193 Barcelona, Spain Gemini Observatory / NSF's NOIRLab, Casilla 603, La Serena, Chile A peculiar subtype of Type Ia supernovae (SNe), 03fg-like (super-Chandrasekhar) SNe, show different observational properties from prototypical Type Ia SNe, typically having high luminosity at the light-curve peak, low expansion velocities, and strong carbon features. The origin of this class of Type Ia SNe has been actively debated. Recent nebular-phase infrared observations of the 03fg-like Type Ia SN 2022pul using the James Webb Space Telescope revealed large-scale asymmetries in the ejecta and the presence of the strong [Ne II] line at 12.81 μm, suggesting a violent merger of two white dwarfs as its origin. Polarimetry is another powerful tool to study overall ejecta asymmetries of spatially-unresolved SNe. Here, we aim to check the universality of the violent merger scenario as the origin of the 03fg-like Type Ia SNe, by studying their explosion geometries using polarimetry. In this letter, we present imaging-polarimetric observations of the two 03fg-like Type Ia SNe 2021zny and 2022ilv. SNe 2021zny and 2022ilv show high intrinsic polarization (∼1 % -∼2%), which might be composed of multiple components with different polarization angles. This indicates that they have complex aspherical structures in their ejecta, supporting the violent merger scenario for their origin. Our observations provide the first clear evidence from polarimetry for such aspherical structures. The aspherical explosions of the 03fg-like Type Ia supernovae 2021zny and 2022ilv revealed by polarimetry T. Nagao, 1,2,3 K. Maeda, 4 S. Mattila, 1,5 H. Kuncarayakti, 1,6 C. P. Gutiérrez, 7,8 A. Cikota 9 Received ***; accepted *** ======================================================================================================================================================================= § INTRODUCTION Type Ia supernovae (SNe) are explosions of white dwarfs (WDs), powered by runaway thermonuclear burning of the degenerate gas <cit.>. Given that the majority of Type Ia SNe show standardizable light-curve properties, they are popular standard candles for cosmological distance measurements <cit.>. In addition to such standardizable “normal" Type Ia SNe, it has been recognized that there are sub-categories in Type Ia SNe, which show a wide range of photometric and spectroscopic properties <cit.>. These observational diversities are coupled with the different progenitor systems, different evolutionary paths, and/or different thermonuclear burning behavior. However, we have not fully understood the progenitors and explosion mechanisms for different sub-types of Type Ia SNe <cit.>. 
One of such extremes in Type Ia SNe is so-called 03fg-like (or super-Chandrasekhar) Type Ia SNe, showing different observational properties from “normal" Type Ia SNe <cit.>. They are typically brighter with a peak absolute B-band magnitude of -19<M_B<-21 mag <cit.>. In some extreme cases, the required amount of ^56Ni would be similar to the Chandrasekhar-mass (∼1.4 M_⊙) and thus require super-Chandrasekhar-mass WDs as their progenitors, unless other energy sources significantly contribute to their brightness alike in normal Type Ia SNe <cit.>. They show slow evolution in their light curves (LCs) with Δ m_15(B)<1.3 mag as well as relatively low expansion velocities (8000–12000 km s^-1 for Si II λ 6355) and strong features from unburnt carbon in their spectra <cit.>, suggesting more massive SN ejecta compared to normal Type Ia SNe <cit.>. There are several scenarios proposed for explaining the observational properties of the 03fg-like Type Ia SNe. (1) An explosion of a carbon-oxygen (CO) WD whose mass exceeds the Chandrasekhar limit due to its rapid rotation <cit.> and/or strong magnetic fields <cit.>, formed through the accretion from a non-degenerate companion in the single-degenerate scenario <cit.> or through post-merger accretion in the double-degenerate scenario <cit.>. (2) An explosion of a CO WD during a merger with a companion WD <cit.>. In this scenario, the high peak luminosity might be achieved due to viewing angle effects of an aspherical explosion <cit.> and/or due to interaction with circumstellar material <cit.>. (3) An explosion after a merger of a WD with the core of a massive asymptotic giant branch star <cit.>. This scenario may also explain the large brightness with a CSM interaction. It is noted that recent very early-phase (within a few days after the explosion) observations of several 03fg-like SNe have detected early excesses in their LCs, suggesting interaction with H-poor CSM <cit.>. However, the strength of CSM interaction estimated from the observed early excesses is not sufficient to boost the peak brightness from the level of normal Type Ia SNe (∼ -19 mag) to the observed level in bright 03fg-like SNe <cit.>. Therefore, bright 03fg-like SNe still need an additional stronger CSM interaction or super-Chandrasekhar mass ^56Ni as the origin of their extreme brightness. Nebular-phase observations of the 03fg-like Type Ia SN 2022pul with the James Webb Space Telescope exhibited anti-correlated asymmetric emission-line profiles for the iron-group elements (Fe, Co, Ni) and the intermediate-mass elements (S, Ar, Ca), as well as the presence of strong [Ne II] at 12.81 μm <cit.>. The separate distributions of different elements suggest large global ejecta asymmetries in SN 2022pul, and support the violent merger scenario. At the same time, the presence of the strong [Ne II] line was also predicted as a proof of a violent merger scenario by non-local-thermodynamic-equilibrium radiative transfer calculations for various scenarios in Type Ia SNe <cit.>. These discoveries on SN 2022pul strongly support the violent merger scenario for its origin. Polarimetry provides another powerful way to study the ejecta geometries of SNe. Its application to the 03fg-like Type Ia SNe has been very limited due to the rareness of this class of SNe, but can be the key to understanding their origin. Spectropolarimetric observations of the 03fg-like Type Ia SN 2009dc show low continuum polarization (<0.3 %) and indicate that the explosion is nearly symmetric <cit.>. 
This suggests that the explosion mechanism of SN 2009dc is different from that of SN 2022pul, which showed largely aspherical ejecta. Alternatively, this might merely imply that SN 2009dc has a similarly aspherical explosion but with a different viewing angle, i.e., SN 2009dc might be viewed from a direction close to the axis of symmetry. In fact, an aspherical system is expected for SN 2009dc from the line shapes in the nebular spectra <cit.>. Another example is SN 2007if, which shows relatively high wavelength-independent polarization (P∼ 0.7 %, θ∼ 130 degrees) from 13 to 46 days after the brightness peak <cit.>. <cit.> conclude that this polarization likely originates from the interstellar polarization in the Milky Way (MW) as suggested by the relatively high MW extinction, implying that the intrinsic SN polarization is low and thus the explosion should be relatively spherical or viewed from the axis of symmetry. Here, it is noted that the polarization angle of the observed polarization in SN 2007if is not similar with the directions of the interstellar polarization (ISP) of MW stars and the interstellar magnetic fields towards nearby directions from the SN line of sight <cit.>, although the local values do not necessarily follow the global values. In this paper, we present imaging-polarimetric observations of two 03fg-like Type Ia SNe: SNe 2021zny and 2022ilv <cit.>. SN 2021zny showed several characteristics of the class of 03fg-like Type Ia SNe, such as high peak brightness, slow LC evolution, low ejecta velocities and strong lines from unburnt material <cit.>. <cit.> also report the detection of a flux excess within a few days after the explosion, which can be explained by interaction of the ejecta with ∼ 0.04 M_⊙ of circumstellar material at a distance of ∼ 10^12 cm, and prominent [O I] λλ 6300, 6364 lines at a late phase. From these observational properties, <cit.> conclude that the origin of SN 2021zny is possibly a merger of two CO WDs, where the disrupted secondary WD ejects carbon-rich CSM before the explosion of the primary WD. <cit.> demonstrated that SN 2022ilv showed similar photometric and spectroscopic features as 03fg-like Type Ia SNe as well as an early excess in the LC, and also proposed a similar merger scenario as its origin. § OBSERVATIONS We conducted V- and R-band imaging polarimetry using the Alhambra Faint Object Spectrograph and Camera (ALFOSC)[<http://www.not.iac.es/instruments/alfosc/>] on the 2.56m Nordic Optical Telescope (NOT[<http://www.not.iac.es>]) for 03fg-like Type Ia SNe 2021zny and 2022ilv. The observing logs are shown in Tables <ref> and <ref>. For the linear polarimetry of the SNe, we utilized a half-wave plate (HWP) and a calcite plate. The HWP rotates the polarization axis of the transient light with a certain amount of angle, and then the transient light is split by a calcite plate into two orthogonally polarized beams (the ordinary and extraordinary components). We derived the Stokes parameters from the signals of the ordinary and extraordinary components for 4 HWP angles (0^∘, 22.5^∘, 45^∘ and 67.5^∘). The data were reduced and analyzed by the standard methods, e.g., in <cit.>, using IRAF<cit.>. First, we applied the basic treatment (cosmic-ray removal, bias and flat-field corrections) to all the frames. Then, we performed aperture photometry on the ordinary and the extraordinary components of the transient for all the HWP angles. 
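As a schematic illustration of this reduction chain (our own sketch, not the actual ALFOSC pipeline), the normalized flux differences at the four half-wave-plate angles can be combined into Stokes q and u, error-weighted over the different aperture and sky choices detailed below, and converted into a debiased polarization degree and angle. All numerical values are invented, the sign and angle convention is one common choice, and the quadratic debiasing is used only as a stand-in for the standard method cited in the text.

import numpy as np

# Schematic double-beam polarimetry reduction (one common convention).
# Step 1: per HWP angle, F = (f_ord - f_ext) / (f_ord + f_ext);
#         q = (F_0 - F_45) / 2, u = (F_22.5 - F_67.5) / 2.
# Step 2: error-weighted average of (q, u) over several aperture/sky choices.
# Step 3: polarization degree and angle, plus a simple quadratic bias correction.
# All numbers are invented for illustration only.

def stokes_from_fluxes(fluxes):
    """fluxes: {HWP angle in deg: (ordinary flux, extraordinary flux)}."""
    F = {a: (o - e) / (o + e) for a, (o, e) in fluxes.items()}
    q = 0.5 * (F[0.0] - F[45.0])
    u = 0.5 * (F[22.5] - F[67.5])
    return q, u

def weighted_mean(x, sigma):
    """Inverse-variance weighted mean and weighted standard error of the mean."""
    w = 1.0 / np.asarray(sigma) ** 2
    x = np.asarray(x)
    mean = np.sum(w * x) / np.sum(w)
    n = x.size
    err = np.sqrt(np.sum(w * (x - mean) ** 2) / ((n - 1) * np.sum(w)))
    return mean, err

# One (invented) set of fluxes per aperture/sky combination:
aperture_runs = [
    {0.0: (10500., 10290.), 22.5: (10410., 10380.), 45.0: (10310., 10520.), 67.5: (10370., 10400.)},
    {0.0: (10480., 10280.), 22.5: (10400., 10375.), 45.0: (10300., 10505.), 67.5: (10360., 10395.)},
    {0.0: (10520., 10300.), 22.5: (10420., 10390.), 45.0: (10325., 10530.), 67.5: (10380., 10410.)},
    {0.0: (10495., 10295.), 22.5: (10405., 10385.), 45.0: (10305., 10515.), 67.5: (10365., 10405.)},
]
qs, us = zip(*(stokes_from_fluxes(f) for f in aperture_runs))
sigma = [0.0012] * len(qs)  # assumed photon-noise error per measurement

q_ave, sq = weighted_mean(qs, sigma)
u_ave, su = weighted_mean(us, sigma)

P = np.hypot(q_ave, u_ave)
sP = np.sqrt((q_ave * sq) ** 2 + (u_ave * su) ** 2) / P
chi = 0.5 * np.degrees(np.arctan2(u_ave, q_ave))
P_debiased = np.sqrt(P**2 - sP**2) if P > sP else 0.0  # simple quadratic debiasing
print(f"P = {100*P:.2f} +/- {100*sP:.2f} %, chi = {chi:.1f} deg, debiased P = {100*P_debiased:.2f} %")

In the actual analysis this procedure is repeated for each band and epoch, with the photon shot noise of each aperture and sky combination propagated as described below.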
Since the ordinary and extraordinary beams are overlapped in the ALFOSC images, an artificial polarization signal due to the inhomogeneous structures of the host galaxy and/or the background region can be created. In order to assess such an error for the polarization, we conducted aperture photometry using four different combinations of the aperture size and sky region: (1) an aperture size twice as large as the full width at half maximum (FWHM) of the ordinary beam's point-spread function with a sky region whose inner and outer radii are twice and three times as large as the FWHM; (2) an aperture size twice as large as the FWHM with a sky region whose inner and outer radii are three times and four times as large as the FWHM; (3) an aperture size 2.5 times as large as the FWHM with a sky region whose inner and outer radii are 2.5 times and 3.5 times as large as the FWHM; (4) an aperture size 2.5 times as large as the FWHM with a sky region whose inner and outer radii are 3.5 times and 4.5 times as large as the FWHM. Based on the measurements from the aperture photometry of the ordinary and extraordinary sources for 4 different HWP angles, we derived the Stokes q and u values for each combination of the aperture and sky region: (q_1 ±σ_q,1,u_1 ±σ_u,1), (q_2 ±σ_q,2,u_2 ±σ_u,2), (q_3 ±σ_q,3,u_3 ±σ_u,3), (q_4 ±σ_q,4,u_4 ±σ_u,4). Here, σ represents the photon shot noise. Then, we took the average and standard deviation of these q and u values as the polarization signal and the error, respectively: q_ave = Σ^n_i=1( q_i/σ_q,i^2)/Σ^n_i=1( 1/σ_q,i^2), σ_q,ave= √(Σ^n_i=1( 1/σ_q,i^2)(q_i - q_ave)^2/(n-1) Σ^n_i=1( 1/σ_q,i^2)), u_ave = Σ^n_i=1( u_i/σ_u,i^2)/Σ^n_i=1( 1/σ_u,i^2), σ_u,ave= √(Σ^n_i=1( 1/σ_u,i^2)(u_i - u_ave)^2/(n-1) Σ^n_i=1( 1/σ_u,i^2)). Here, n is the number of the measurements to be avaraged and n=4. From these averaged q and u values, we calculated the polarization degree and the polarization angle: P = √(q_ave^2 + u_ave^2), σ_P = √(( ∂ P/∂ q_aveσ_q,ave)^2 + ( ∂ P/∂ u_aveσ_u,ave)^2) = √(( q_ave/Pσ_q,ave)^2 + ( u_ave/Pσ_u,ave)^2), χ = 1/2arctan( u_ave/q_ave), σ_χ = √(( ∂χ/∂ q_aveσ_q,ave)^2 + ( ∂χ/∂ u_aveσ_u,ave)^2) = √(( u_aveσ_q,ave)^2 + ( q_aveσ_u,ave)^2)/2P^2. At last, we subtracted the polarization bias from the polarization degrees following the standard method in <cit.>. § RESULTS AND DISCUSSION Figure <ref> shows the time evolution of the V- and R-band polarization of SN 2021zny. Both V- and R-band polarization shows P∼ 0.4 % and θ∼ 120 degrees around the peak. At Phase +29.47 days, the degree of the R-band polarization is increased to ∼ 1.9 % keeping the polarization angle around ∼ 120 degrees, followed by slightly smaller values of the polarization degree and angle at Phase +50.45 days. On the other hand, the V-band polarization, at Phases +29.47 and +50.45 days, shows small degrees of the polarization (≲ 0.5 %) with different polarization angles (∼ 45 degrees) from ∼ 120 degrees around the peak. The polarization is also shown in the q-u plane (Figure <ref>). The SN can have not only intrinsic polarization but also ISP. <cit.> estimated the total dust extinction in the Milky Way and in the host galaxy as E(B-V)_tot=0.14±0.07 mag for SN 2021zny. The empirical relation by <cit.> indicates that its ISP should likely have P_max≲ 1.3 %. From the values of the polarization degree, the polarization component whose angle is ∼ 45 degrees (the V-band polarization at Phases +29.47 and +50.45 days) can be explained to be due to the ISP. 
This interpretation may naturally explain the discrepancy between the V- and R-band polarization at Phases +29.47 and +50.45 days. The V-band polarization shows the ISP due to the line depolarization of the strong continuum polarization, while the R-band polarization reflects the strong continuum polarization. Alternatively, the V-band polarization shows the ISP due to a low intrinsic SN continuum polarization, while the R-band polarization shows a strong line polarization. In this case, the axis of the aspherical distribution of the elements responsible for the strong line polarization should be the same as that of the overall ejecta geometry implied by the continuum polarization at the first two phases. It is noted that the directions of the magnetic fields, which are supposed to be aligned with the ISP angle <cit.>, in a spiral galaxy tend to follow the directions of the spiral arms <cit.>. Therefore, the polarization angle of this component (θ∼ 45 degrees), which nicely corresponds to the structure of the host galaxy <cit.>, might support this interpretation. Adopting this component (P∼ 0.3 % and χ∼ 45 degrees) as the ISP, the V- and R-band polarization both show ∼ 1.0 % intrinsic polarization around the brightness peak, and then ∼ 0 % and ∼ 2.0 %, respectively, at the later phases. Even if this component is instead another component of the intrinsic SN polarization and the ISP is negligible, the intrinsic polarization is high (∼ 1 - ∼ 2 %). This polarization is at the highest end of the diversity in Type Ia SN polarization, or possibly beyond it <cit.>. This indicates that the ejecta of SN 2021zny are significantly aspherical, even compared to the extreme cases of normal Type Ia SNe. It is noted that there is another possibility for the origin of the continuum polarization in Type Ia SNe, i.e., polarization due to scattering by circumstellar dust <cit.>. The polarization created by this mechanism has a wavelength dependence (typically higher polarization at bluer wavelengths) and a time evolution (typically higher polarization at the beginning of the tail phase than at the peak), as demonstrated in <cit.>. There are some observational examples <cit.>. Firstly, the polarization in SN 2021zny does not show a clear wavelength dependence of this kind at Phases +29.47 and +50.45 days; rather, it shows higher degrees in the R band than in the V band at these phases. Secondly, SN 2021zny shows relatively high polarization degrees already before the B-band peak (∼ 0.4 % at Phase -6.33). These features cannot be explained by the dust scattering scenario. Therefore, we reject the possibility of scattering in circumstellar dust as the origin of the polarization in SN 2021zny. Figure <ref> shows the time evolution of the V- and R-band polarization in SN 2022ilv. The polarization degrees are high and time-variable, around ∼ 2.0 %, with a relatively constant polarization angle of around ∼ 60 degrees, except for the data at Phase +60.48 days (P∼ 0.5 %, θ∼ 30 degrees). The polarization degrees and angles in the V and R bands are relatively consistent at all phases except Phase +5.54, indicating wavelength-independent polarization, i.e., continuum polarization rather than line polarization. The polarization at Phase +5.54 may be due to the effects of line polarization and depolarization. This behavior of the polarization is also seen in the q-u plane (Figure <ref>), although the early-phase points are clustered around a point deviating from the points at Phase +60.48 days, except the R-band point at Phase +5.54. 
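For reference, the ISP upper limits quoted in this discussion are consistent with the widely used empirical bound P_ISP,max ≲ 9 × E(B-V) % mag^-1; a one-line check using the extinction values given in the text (an illustration only, not a new measurement):

# Empirical upper bound on the interstellar polarization, P_max <= 9 * E(B-V),
# evaluated for the extinction values quoted in the text (percent per mag).
for name, ebv in [("SN 2021zny", 0.14), ("SN 2022ilv", 0.11)]:
    print(f"{name}: P_ISP,max <~ {9.0 * ebv:.1f} %")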
<cit.> estimated the dust extinction for SN 2022ilv to be E(B-V)_tot=0.11 mag, assuming that the extinction arises only from the MW dust because the host galaxy is extremely faint and thus should have a low metallicity and a small amount of dust. Adopting this value, the empirical relation by <cit.> indicates that its ISP should have P_max≲ 1.0 %. Given that the observed polarization around the peak is too high to be dominated by the ISP, it should be dominated by the intrinsic SN polarization. The polarization degrees at Phase +60.48 days can be consistent with the ISP. However, the polarization angle of the ISP in the MW along the line of sight to SN 2022ilv is estimated to be ∼ 130 degrees by polarimetric observations of MW stars at ∼ 100- ∼600 pc <cit.>. Given that the MW extinction for SN 2022ilv is mainly caused by dust at ≲ 140 pc <cit.>, the ISP can be estimated to be ≲ 1.0 % and ∼ 0.5 % as an averaged value <cit.>. Therefore, we conclude that the ISP is negligible (≲ 0.5%) and that the polarization at Phase +60.48 days might still express another intrinsic component with a slightly different angle (P∼ 0.5 %, θ∼ 30 degrees), in addition to the intrinsic component at the early phases (∼ 2.0 % and ∼ 60 degrees). This might indicate inhomogeneous structures in the SN ejecta. Even if we assume that the polarization at Phase +60.48 days is dominated by the ISP, the ISP-subtracted intrinsic SN polarization at early phases is high (P∼ 2.0 %). It is noted that, as in the case of SN 2021zny, the polarization in SN 2022ilv also cannot be explained by scattering in circumstellar dust. In any case, the intrinsic SN polarization is very high, ∼ 2 %, which is the highest intrinsic continuum polarization observed in any Type Ia SN <cit.>. This implies that the ejecta of SN 2022ilv are also very aspherical compared to any other Type Ia SNe. The V- and R-band polarization in SNe 2021zny and 2022ilv shows high degrees (∼1- ∼2%), indicating large aspherical structures. According to the numerical calculations of the polarization signal in various scenarios for Type Ia SNe by <cit.>, the only possible scenario to show such high polarization, i.e., such a large aspherical structure, is the violent merger scenario. Even though aspherical structures in the 03fg-like Type Ia SNe have been suggested by several different modes of observation (see Section 1), our observations provide the first evidence from polarimetry for such aspherical structures. We thank Masayuki Yamanaka for helpful discussions. This work is based on observations made under program IDs P64-023, P65-004 and P65-005 with the Nordic Optical Telescope, owned in collaboration by the University of Turku and Aarhus University, and operated jointly by Aarhus University, the University of Turku and the University of Oslo, representing Denmark, Finland and Norway, the University of Iceland and Stockholm University at the Observatorio del Roque de los Muchachos, La Palma, Spain, of the Instituto de Astrofisica de Canarias. TN acknowledges support from the Research Council of Finland projects 324504, 328898 and 353019. KM acknowledges support from the Japan Society for the Promotion of Science (JSPS) KAKENHI grant (JP20H00174 and JP24H01810) and by the JSPS Open Partnership Bilateral Joint Research Project between Japan and Finland (JPJSBP120229923). S. M. was funded by the Research Council of Finland project 350458. HK was funded by the Research Council of Finland projects 324504, 328898, and 353019. 
CPG acknowledges financial support from the Secretary of Universities and Research (Government of Catalonia) and by the Horizon 2020 Research and Innovation Programme of the European Union under the Marie Skłodowska-Curie and the Beatriu de Pinós 2021 BP 00168 programme, from the Spanish Ministerio de Ciencia e Innovación (MCIN) and the Agencia Estatal de Investigación (AEI) 10.13039/501100011033 under the PID2020-115253GA-I00 HOSTFLOWS project, and the program Unidad de Excelencia María de Maeztu CEX2020-001058-M.
http://arxiv.org/abs/2406.19205v1
20240627142613
Coordinated RSMA for Integrated Sensing and Communication in Emergency UAV Systems
[ "Binghan Yao", "Ruoguang Li", "Yingyang Chen", "Li Wang" ]
eess.SP
[ "eess.SP" ]
Coordinated RSMA for Integrated Sensing and Communication in Emergency UAV Systems Binghan Yao, Ruoguang Li, Member, IEEE, Yingyang Chen, Senior Member, IEEE, and Li Wang, Senior Member, IEEE This work was supported in part by the National Natural Science Foundation of China under Grant U2066201, 62301157, and 62171054, in part by the Natural Science Foundation of Jiangsu Province of China under Project BK20230823, in part by the Fundamental Research Funds for the Central Universities under Grant 24820232023YQTD01, in part by the Double First-Class Interdisciplinary Team Project Funds under Grant 2023SYLTD06, and in part by the Guangdong Basic and Applied Basic Research Project under Grant 2024B1515020002, 2023A1515012892, and 2021B1515120067. (Corresponding author: Li Wang.) Binghan Yao and Li Wang are with the School of Computer Science (National Pilot Software Engineering School), Beijing University of Posts and Telecommunications, Beijing 100876, China (e-mail: {yaobinghan, liwang}@bupt.edu.cn). Ruoguang Li is with the College of Information Science and Engineering, Hohai University, Changzhou 213200, China (e-mail: ruoguangli@hhu.edu.cn). Yingyang Chen is with the College of Information Science and Technology, Jinan University, Guangzhou 510632, China (e-mail: chenyy@jnu.edu.cn). July 1, 2024 ======================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================= § ABSTRACT The destruction of terrestrial infrastructures and wireless resource-scarcity in disaster scenarios pose challenges for providing a prompt and reliable communication and sensing (C&S) services to search and rescue operations. Recently, unmanned aerial vehicle (UAV)-enabled integrated sensing and communication (ISAC) is emerging as a promising technique for achieving robust and rapid emergency response capabilities. Such a novel framework offers high-quality and cost-efficient C&S services due to the intrinsic flexibility and mobility of UAVs. In parallel, rate-splitting multiple access (RSMA) is able to achieve a tailor-made communication by splitting the messages into private and common parts with adjustable rates, making it suitable for on-demand data transmission in disaster scenarios. 
In this paper, we propose a coordinated RSMA for integrated sensing and communication (CoRSMA-ISAC) scheme in emergency UAV system to facilitate search and rescue operations, where a number of ISAC UAVs simultaneously communicate with multiple communication survivors (CSs) and detect a potentially trapped survivor (TS) in a coordinated manner. Towards this end, an optimization problem is formulated to maximize the weighted sum rate (WSR) of the system, subject to the sensing signal-to-noise ratio (SNR) requirement. In order to solve the formulated non-convex problem, we first decompose it into three subproblems, i.e., UAV-CS association, UAV deployment, as well as beamforming optimization and rate allocation. Subsequently, we introduce an iterative optimization approach leveraging K-Means, successive convex approximation (SCA), and semi-definite relaxation (SDR) algorithms to reframe the subproblems into a more tractable form and efficiently solve them. Simulation results demonstrate that the proposed CoRSMA-ISAC scheme is superior to conventional space division multiple access (SDMA), non-orthogonal multiple access (NOMA), and orthogonal multiple access (OMA) in terms of both communication and sensing performance. Integrated sensing and communication (ISAC), rate-splitting multiple access (RSMA), UAV deployment, UAV-CS association, beamforming. § INTRODUCTION Disaster scenarios generally require a prompt emergency response, which demands reliable and uninterrupted wireless communication and sensing (C&S) services to facilitate the transmission of rescue tasks and the detection of trapped survivors (TSs) <cit.>. However, the conventional terrestrial infrastructure is often out of work after disasters due to the severe destruction. In particular, the obstacles such as mountains and buildings may block the line-of-sight (LoS) links between the transmitters and receivers, resulting in a seriously degraded performance of on-site and timely C&S services. Driven by the flexible mobility and on-demand connectivity, unmanned aerial vehicle (UAV) is envisioned as an aerial multi-functional platform that can be harnessed for various applications in emergency events <cit.>, such as disaster warnings broadcasting, medical supplies delivery, survivor/environemt status monitoring, etc. Specifically, by exploiting strong LoS links, emergency UAV system can provide enhanced wireless C&S services from sky for rescuers and survivors with the extended coverage, higher capacity, and quicker deployment in rescue operations <cit.>. Extensive researches have mainly focused on the emergency UAV-enabled C&S network for disaster scenarios from the perspectives of framework design, trajectory design, resource allocation, and multi-UAV deployment, etc <cit.>. However, the separate deployment of C&S services inevitably incurs a heavy payload for emergency equipment, especially for that with limited size and available battery power, such as UAV <cit.>. Therefore, by introducing integrated sensing and communication (ISAC) technology to unify wireless communication and radar sensing functionalities into the emergency UAV system, a better adaption to search and rescue operations can be achieved in disaster scenarios with higher resource efficiency <cit.>. Recently, emergency UAV-enabled ISAC systems have attracted attention in both academia and industry, cost-efficiently improving C&S performance with reused resource and payload reduction<cit.>. 
For instance, <cit.> gave an overview of UAV-enabled ISAC systems, including the basic network architecture and technical issues. The authors in <cit.> proposed an integrated periodic sensing and communication (IPSAC) mechanism, which flexibly provides C&S services for multiple user equipments (UEs) and targets, and jointly optimized the UAV trajectory, user association, sensing target selection, and transmit beamforming to maximize the achievable communication rate. The work in <cit.> investigated the trajectory design for a UAV providing ISAC service so as to jointly optimize the downlink communication rate and the localization accuracy. Besides, the freshness of the sensed data collected by the UAV in an ISAC system was quantified by the peak age of information (PAoI) in <cit.>, and a joint UAV trajectory, target sensing scheduling, and resource allocation optimization problem was formulated to guarantee the PAoI of the system. Several recent works have considered multi-UAV ISAC systems <cit.>. For example, <cit.> proposed a cooperative time-division based sensing and communication scheme and optimized the sensing task allocation. The authors in <cit.> proposed an orthogonal frequency division multiple access (OFDMA) UAV-enabled ISAC system and designed a joint trajectory planning and resource allocation scheme. <cit.> investigated the problem of maximizing the average sum rate of UAVs in a scenario where multiple UAVs communicate with ground base stations and utilize echo signals for detection. However, mechanism design and the selection of suitable techniques for emergency UAV-enabled ISAC systems, which place a premium on reliability, effectiveness, and efficiency, have rarely been studied. On the other hand, due to the scarcity of spectrum resources, traditional orthogonal multiple access (OMA) struggles to support massive communication in emergency rescue scenarios. Therefore, the exploitation of multiple-input multiple-output (MIMO) and multiple access techniques over non-orthogonal spectrum has progressed towards spatial division multiple access (SDMA) and non-orthogonal multiple access (NOMA) in emergency networks <cit.>. During emergency events, however, the transmitted messages vary in priority. For example, weather notifications are often broadcast to all users, while specific rescue instructions are required only by certain users. Such diversity in information demand cannot easily be accommodated by SDMA or NOMA, which offer limited transmission flexibility in their encoding and decoding mechanisms. Rate-splitting multiple access (RSMA) is a promising technology to address this issue with higher design flexibility: it allows the transmitter to send a superposition of multiple signals to the receivers by splitting the messages into common and private parts with adjustable rates <cit.>. Particularly, the common parts are encoded into common streams that are decoded by multiple users, while the private parts are independently encoded into private streams that are decoded only by the corresponding users via successive interference cancellation (SIC). Thus, the common parts can carry public information such as weather conditions and early warnings, while the private parts can carry the information intended for a specific user. Meanwhile, by flexibly adjusting the proportion of the common streams, RSMA enables more flexible interference management. 
Due to these characteristics, RSMA is capable of partially decoding the interference while treating the remaining interference as noise, which contrasts with SDMA, which fully treats interference as noise, and NOMA, which fully decodes the interference <cit.>. Several works have investigated the potential combination of RSMA and ISAC <cit.>. Specifically, the authors in <cit.> proposed an RSMA-assisted ISAC waveform design by jointly minimizing the Cramér-Rao bound (CRB) of the target detection and maximizing the minimum fairness rate (MFR) amongst UEs. In <cit.>, the authors considered the optimal transmit beamforming of the communication and radar signals in a multi-antenna RSMA ISAC system. Furthermore, a single-UAV-assisted RSMA ISAC system was investigated in <cit.>, where the energy efficiency was maximized by optimizing the latitude and the transmit beamforming. In <cit.>, the authors proposed an RSMA-based communication and radar coexistence (CRC) system which significantly improved the spectral efficiency, energy efficiency, and quality of service (QoS) of communication users. However, the aforementioned efforts primarily focused on RSMA ISAC systems with a single transceiver, ignoring the performance gain available from cooperative sensing and communication and leaving a gap in the research on coordinated RSMA for ISAC with multiple UAVs. Inspired by the above analysis, and considering the diversity of information demands, this paper studies a coordinated RSMA ISAC (CoRSMA-ISAC) scheme in an emergency UAV system to achieve simultaneous cooperative data transmission with communication survivors (CSs) and target detection for a trapped survivor (TS). Our goal is to improve the communication performance while guaranteeing the sensing requirement, by exploiting the transmission flexibility of RSMA and the cooperation gain of multiple UAVs. Towards this end, a joint UAV-CS association, UAV deployment, and beamforming optimization problem is formulated to maximize the WSR of the system, subject to the sensing requirement at the TS. The main contributions of this paper are summarized as follows: * First, we propose a CoRSMA-ISAC framework in an emergency UAV system where multiple UAVs aim to provide ISAC services for emergency rescue. We first analyze the expressions of the communication and sensing signals, and derive the corresponding performance metrics, i.e., the received sensing signal-to-noise ratio (SNR) and the communication signal-to-interference-plus-noise ratio (SINR). To maximize the WSR while satisfying the sensing requirement of the TS, a joint deployment and beamforming optimization problem is formulated subject to the sensing SNR constraint and the rate requirement of each CS. * Second, in order to solve the formulated non-convex optimization problem, we decompose it into three subproblems, i.e., UAV-CS association, UAV deployment optimization, and beamforming and rate allocation. Specifically, we first use K-Means to determine the optimal UAV-CS association. For the UAV deployment, we propose an efficient algorithm utilizing the successive convex approximation (SCA) technique. Moreover, to address the beamforming and rate allocation subproblem, we equivalently transform it into a semi-definite programming (SDP) problem, which is solved by semi-definite relaxation (SDR) and SCA techniques. * Finally, numerical results demonstrate that the proposed design achieves a better WSR for the CSs compared with SDMA, NOMA, and OMA, while guaranteeing the sensing requirement of the TS. 
Furthermore, the common message transmission via different UAVs with coordinated RSMA also provides extra sensing performance gain. Especially, when the total usable transmit power is insufficient or sensing requirements are high, the proposed CoRSMA-ISAC is still able to ensure the C&S performance by increasing the ratio of common rate. The rest of this paper is organized as follows. Section <ref> presents the system model for the proposed CoRSMA-ISAC in emergency UAV system and optimization problem formulation. Section <ref> proposes the algorithm for WSR maximization in CoRSMA-ISAC. Simulation results are presented in Section <ref>. Moreover, Section <ref> provides the concluding remarks. Notations: In this paper, scalars are denoted by italic letters. Vectors and matrices are denoted by boldface lower and uppercase letters, respectively. For a vector 𝐚, its Euclidean norm is denoted as 𝐚. For a matrix 𝐌, rank(𝐌), tr(𝐌), 𝐌^T, 𝐌^H, 𝐌_F, and [𝐌]_p,q denote its rank, trace, transpose, conjugate transpose, Frobenius norms, and the element in the p-th row and q-th column, respectively. Besides, 𝐌≽0 represents that 𝐌 is a semi-positive definite matrix. For a complex scalar a, its conjugate and magnitude are denoted by a^* and |a|, respectively. 𝔼[𝐱] is the expectation of 𝐱. For the sake of clarity, a main notation list is included in Table <ref>. § SYSTEM MODEL AND PROBLEM FORMULATION §.§ System Model As illustrated in Fig. <ref>, we consider a CoRSMA-ISAC framework in emergency UAV system to accomplish a post-disaster rescue mission, which consists of U ISAC UAVs and a dedicated radar receive UAV <cit.>. The system provides downlink transmissions for K CSs, while cooperatively detecting a single TS whose mobile device is out of work[We assume that all the clutters can be mitigated by using the existing clutter suppression techniques <cit.>, such that we do not consider the influence of the clutters.]. The sets of ISAC UAVs and CSs are denoted by 𝒰={1,⋯,U} and 𝒦∈{1,⋯,K}, respectively. We assume that each ISAC UAV is equipped with N_t transmit antennas and sensing receive UAV is equipped with N_r receive antennas, while each CS is equipped with single antenna. We further consider a three-dimensional (3D) Cartesian coordinate system, where the horizontal coordinate of CS k, k∈𝒦 and TS are fixed at 𝐪_k = (x_k,q, y_k,q) and 𝐪_0 = (x_0,q, y_0,q), respectively. The ISAC UAV u, u∈𝒰 and receive UAV are assumed to fly at fixed altitude H_u and H_0, respectively <cit.>. Let us further denote 𝐨_u = (x_u,o, y_u,o) and 𝐨_0 = (x_0,o, y_0,o), as their horizontal location, respectively. The sets of horizontal coordinates of ISAC UAV and CS can be denoted as 𝒪={𝐨_1 ,⋯,𝐨_U} and 𝒬={𝐪_1 ,⋯,𝐪_K}, respectively. In order to achieve dual purposes of communication and sensing, coordinated RSMA is adopted. Specifically, the communication messages for CS k are split into two parts by each UAV, i.e., a common part that carries the public information needed by all CSs, such as weather conditions, rescue situations, etc., and a private part that is exclusively desired by CS k. We denote 𝒥_u as the CS cluster containing all the CSs associated with UAV u, in which if CS k ∈𝒥_u, CS k will only receive the private message from UAV u, i.e., 𝒥_u∩𝒥_j=∅, ∀ u,j ∈𝒰, u≠ j. Additionally, we assume that the ISAC signal transmission over a particular block consisting of L symbols, and the set of symbols is denoted as ℒ={1, ⋯, L}. 
The messages for CSs are jointly processed at the central controller and the optimized transmit signals are sent to the corresponding UAVs <cit.>. For the l-th symbol, the common parts of all K CSs are jointly encoded into a single common message s_c[l] that is transmitted by all ISAC UAVs, and the private parts are encoded separately into K private messages s_k,p[l],∀ k∈𝒦. Furthermore, to fully exploit the spatial degree of freedom (DoF) of the transmit antennas, a dedicated sensing waveform s_r[l] is introduced to enhance the sensing performance. In this system model, s_c[l], s_k,p[l], and s_r[l] are all assumed to have zero mean and unit power, i.e., 𝔼{|s_c[l]|^2}=𝔼{|s_k,p[l]|^2}=𝔼{|s_r[l]|^2}=1, and they are independent from each other <cit.>. Then the baseband transmitted signal from ISAC UAV u can be expressed as 𝐱_u[l]=𝐩_u,c s_c[l]+∑_k ∈𝒥_u𝐩_u,k,p s_k,p[l]+𝐩_u,r s_r[l], where 𝐩_u,c∈ℂ^N_t × 1, 𝐩_u,k,p∈ℂ^N_t × 1, and 𝐩_u,r∈ℂ^N_t × 1 are the corresponding beamforming vector, respectively. §.§ Communication Model Let 𝐡_u,k=[h_u, k^1,…,h_u, k^n,…,h_u, k^N_t]^T ∈ℂ^N_t × 1 denote the channel coefficient vector between the UAV u and CS k. Specifically, h_u, k^n, n∈{1,⋯,N_t}, is the cofficient between the n-th antenna of the UAV u and CS k, which can be modeled as <cit.> h_u, k^n=√(ε_u,k)h̃_u, k^n,∀ n∈{1,⋯,N_t}, where h̃_u, k^n denotes the small-scale fading due to the multipath propagation with 𝔼[|h̃_u, k^n|^2]= 1 <cit.>. ε_u,k is the larger-scale channel power, which is given by ε_u,k=ε_0/r^2(𝐨_u,𝐪_k), where r(𝐨_u,𝐪_k)=√(𝐨_u-𝐪_k^2 +H_u^2) represents the distance between UAV u and CS k. ε_0≜G_T G_Cλ^2/(4π)^2 denotes the channel power at the reference distance of 1 meter <cit.>, where G_T and G_C are the transmit and receive antenna gain of the CS, respectively. λ is the wavelength. Therefore, the received signal at CS k∈𝒦 is given by y_k[l]=∑_u ∈𝒰𝐡_u, k^H 𝐩_u,c s_c[l]_Desired common signal+𝐡_u, k^H 𝐩_u, k,p s_k,p[l]_Desired private signal +∑_j ∈𝒥_u \{k}𝐡_u, j^H 𝐩_u, j,p s_j,p[l]+∑_i ∈𝒰\{u}∑_j ∈𝒥_i𝐡_i, j^H 𝐩_i, j,p s_j,p[l]_Multi-user intra and intercell Interference +𝐡_u, k^H 𝐩_u,rs_r[l]_Sensing interference+n_k[l]_Noise, where n_k[l] represents the additive white Gaussian noise (AWGN) with zero mean and variance σ^2. Since the dedicated radar waveform s_r[l] can be a prior known to CS k, it can be removed from the received signal <cit.>. Following the decoding order of RSMA, each CS first decodes the common message s_c[l] by treating other streams as noise <cit.>. Specifically, the received SINR of decoding the common message s_c[l] at CS k can be expressed by γ_k^c=|∑_u ∈𝒰𝐡_u, k^H 𝐩_u,c|^2/∑_u ∈𝒰∑_j ∈𝒥_u|𝐡_u, k^H 𝐩_u, j,p|^2+σ^2, and the corresponding achievable rate is given by R_k^c=Blog _2(1+γ_k^c), where B is the channel bandwidth. Besides, to guarantee that all CSs are capable of decoding the common stream, the common rate is defined as R^c=min _k{R_k^c | k ∈𝒦}. Since s_c[l] contains the common messages for K CSs, R^c is accordingly shared by K users. We denote the variable C_k as the allocated common rate to CS k. Then we have ∑_k ∈𝒦 C_k≤ R^c. After successfully decoding and removing the common message, the SINR for CS k to decode its own private message can be written as (<ref>) on the top of next page. And the corresponding achievable private rate is R_k^p=Blog _2(1+γ_k^p). Then the total achievable rate of CS k is equal to the summation of the allocated common rate and the private rate, which can be expressed as R^tot_k=C_k+R_k^p,∀ k ∈𝒦. 
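As a numerical illustration of the RSMA decoding chain described above (each CS first decodes the common stream, removes it by SIC, and then decodes its private stream), the following sketch evaluates the common and private SINRs and the corresponding achievable rates for randomly drawn channels and beamformers. The dimensions, association, and noise power are placeholder assumptions and not the optimized quantities of this paper.

import numpy as np

rng = np.random.default_rng(0)
U, K, Nt, B, sigma2 = 2, 4, 8, 1e6, 0.1              # UAVs, CSs, antennas, bandwidth [Hz], noise power
assoc = {0: [0, 1], 1: [2, 3]}                        # placeholder UAV-CS association J_u

h   = (rng.normal(size=(U, K, Nt)) + 1j * rng.normal(size=(U, K, Nt))) / np.sqrt(2)
p_c = (rng.normal(size=(U, Nt)) + 1j * rng.normal(size=(U, Nt))) / np.sqrt(2 * Nt)
p_p = (rng.normal(size=(U, K, Nt)) + 1j * rng.normal(size=(U, K, Nt))) / np.sqrt(2 * Nt)

def common_sinr(k):
    sig  = abs(sum(h[u, k].conj() @ p_c[u] for u in range(U))) ** 2
    intf = sum(abs(h[u, k].conj() @ p_p[u, j]) ** 2 for u in range(U) for j in assoc[u])
    return sig / (intf + sigma2)

def private_sinr(k, u_k):
    sig  = abs(h[u_k, k].conj() @ p_p[u_k, k]) ** 2
    intf = sum(abs(h[u, k].conj() @ p_p[u, j]) ** 2
               for u in range(U) for j in assoc[u] if j != k)  # common stream already removed by SIC
    return sig / (intf + sigma2)

serving = {k: u for u, ks in assoc.items() for k in ks}
R_c = min(B * np.log2(1 + common_sinr(k)) for k in range(K))   # common rate decodable by every CS
R_p = {k: B * np.log2(1 + private_sinr(k, serving[k])) for k in range(K)}
print(f"shared common rate: {R_c / 1e6:.2f} Mbps")
print("private rates [Mbps]:", {k: round(r / 1e6, 2) for k, r in R_p.items()})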
Thus, the WSR of the system is written by R^w = ∑_k∈𝒦μ_kR^tot_k = ∑_k ∈𝒦μ_k(C_k+R_k^p), where μ_k∈[0,1] denotes the rate weight allocated to CS k with ∑_k=1^Kμ_k=1. §.§ Sensing Model For the sensing model, we assume that TS is a point-like target. Therefore, the channel matrix between the ISAC UAV u and the receive UAV through the TS can be defined as 𝐆_u=β_u 𝐛(𝐨_0,𝐪_0) 𝐚^H(𝐨_u,𝐪_0), where β_u represents the total sensing channel power gain, which is expressed as β_u=√(β _0/r^2(𝐨_u,𝐪_0)r^2(𝐨_0,𝐪_0)), where r(𝐨_u,𝐪_0)=√(𝐨_u-𝐪_0^2 +H_u^2) denotes the distance between UAV u and TS, and r(𝐨_0,𝐪_0)=√(𝐨_0-𝐪_0^2 +H_0^2) represents the distance between the receive UAV and TS. β_0 denotes the channel power at the reference distance of 1 meter, which is expressed as β_0=G_T G_Sσ_rcsλ^2/(4π)^3, where G_S is the receive antenna gain of the target and σ_rcs is the radar cross section (RCS). 𝐚(𝐨_u,𝐪_0)∈ℂ^N_t× 1 and 𝐛(𝐨_0,𝐪_0)∈ℂ^N_r× 1 in (<ref>) denote the transmit and receive steering vectors of the transmit and receive antennas, which are assumed to be a uniform linear array (ULA) with half-wavelength antenna spacing. Therefore, 𝐚(𝐨_u,𝐪_0) and 𝐛(𝐨_0,𝐪_0) can be respectively given by 𝐚(𝐨_u,𝐪_0) =[1, e^j πcosθ(𝐨_u,𝐪_0), ⋯, e^j π(N_t-1) cosθ(𝐨_u,𝐪_0)]^T, 𝐛(𝐨_0,𝐪_0) =[1, e^j πcosϕ(𝐨_0,𝐪_0), ⋯, e^j π(N_r-1) cosϕ(𝐨_0,𝐪_0)]^T, where θ(𝐨_u,𝐪_0)=arccosH_u/r(𝐨_u,𝐪_0) and ϕ(𝐨_0,𝐪_0)=arccosH_0/r(𝐨_0,𝐪_0) denote the transmit and receive azimuth angles, respectively. Therefore, the received signal reflected from TS at the receive UAV is written as 1!𝐲_r[l]=∑_u ∈𝒰𝐆_u 𝐱_u[l]+𝐧[l]=∑_u ∈𝒰β_u 𝐛(𝐨_0,𝐪_0) 𝐚^H(𝐨_u,𝐪_0) 𝐱_u[l]+𝐧[l], where 𝐧[l] ∈ℂ^N_r × 1 is the AWGN vector following 𝐧[l] ∼𝒞 𝒩(0, σ^2 𝐈) with σ^2 denoting the variance of each entry. For notational simplicity, let us denote 𝐏_u= [𝐩_u,c, 𝐩_ u, 1,p, …, 𝐩_ u, K,p, 𝐩_u,r] ∈ℂ^N_t ×(K+2), 𝐬[l]= [s_c[l],s_1,p[l],⋯,s_K,p[l],s_r[l]]^T ∈ℂ^(K+2)× 1, l∈ℒ, 𝐒= [𝐬[1],⋯,𝐬[L]] ∈ℂ^(K+2) × L. Then, the transmit signal with L symbols from UAV u can be expressed as 𝐗_u=𝐏_u 𝐒∈ℂ^N_t × L. Thus, the signal at the receive UAV can be writen as 𝐘_r= ∑_u ∈𝒰𝐆_u 𝐗_u + 𝐍 = ∑_u ∈𝒰β_u 𝐛(𝐨_0,𝐪_0) 𝐚^H(𝐨_u,𝐪_0) 𝐏_u_≜𝐆̅_u_≜𝐆𝐒+𝐍, where 𝐍=[𝐧[1],…,𝐧[L]]. And the received sensing SNR can be written as γ^s =𝔼[𝐆𝐒_F^2]/𝔼[𝐍_F^2]=|β_0|/r^2(𝐨_0,𝐪_0) σ^2∑_u ∈𝒰𝐚^H(𝐨_u,𝐪_0) 𝐏_u^2/r^2(𝐨_u,𝐪_0) . The derivation of (<ref>) is provided in Appendix <ref>. §.§ Problem Formulation We aim to maximize the WSR of the system by joint optimizing the UAV-CS association clusters {𝒥_u}_u=1^U, the UAV deployment 𝒪, the transmit beamforming matrix {𝐏_u}_u=1^U and common rate allocation variables {C_k}_k=1^K, subject to the sensing performance requirements, transmission QoS, and power limits, which can be formulated as (P0): max _{𝒥_u}_u=1^U, 𝒪,{𝐏_u}_u=1^U,{C_k}_k=1^K ∑_k ∈𝒦μ_k(C_k+R_k^p) 𝒥_u∩𝒥_j=∅, ∀ u,j ∈𝒰, u≠ j, ∑_u ∈𝒰|𝒥_u| =K ,∀𝒥_u≠∅,∀ u ∈𝒰, ∑_k ∈𝒦 C_k≤min _i{R_i^c| i ∈𝒦}, C_k≥0,∀ k∈𝒦, tr(𝐏_u𝐏_u^H) ≤ P_max,∀ u ∈𝒰, C_k+R_k^p≥ R_k^th,∀ k∈𝒦, γ^s≥γ̅, where (<ref>) guarantees that each CS is only associated with one UAV. (<ref>) ensures all CSs are associated with the UAVs, where |𝒥_u| is the number of CSs served by UAV u. (<ref>) ensures the common stream can be decoded by all users. (<ref>) implements a feasible partition of the common stream. (<ref>) ensures that the transmit power of each UAV meets the power budget P_max. (<ref>) indicates the communication rate requirement of each CS, where R_k^th is the threshold for the total achievable rate of CS k. 
(<ref>) indicates the sensing performance requirement, where γ̅ is the predefined sensing SNR threshold. § PROPOSED ALGORITHM FOR WEIGHTED SUM RATE MAXIMIZATION IN CORSMA-ISAC The optimization problem (P0) is a non-convex optimization problem, which is NP-hard in general and cannot be solved directly by standard convex optimization tools. In this regard, we propose an efficient algorithm to solve the problem (P0) which first determines the UAV-CS association cluster {𝒥_u}_u=1^U via K-Means, then alternately optimizes the UAV deployment 𝒪 and beamforming matrix {𝐏_u}_u=1^U and common rate allocation variables {C_k}_k=1^K by fixing one and solve remaining one. The framework of the proposed algorithm is illustrated in Fig. <ref>. §.§ UAV-CS Association Optimization We first update the UAV-CS association clusters {𝒥_u}_u=1^U with the fixed transmit beamforming matrix {𝐏_u}_u=1^U and common rate allocation variable {C_k}_k=1^K. Then, the optimization problem (P0) reformulated as (P1):max _{𝒥_u}_u=1^U ∑_k ∈𝒦μ_k R_k^p 𝒥_u∩𝒥_j=∅, ∀ u,j ∈𝒰, u≠ j, ∑_u ∈𝒰|𝒥_u| =K ,∀𝒥_u≠∅,∀ u ∈𝒰, C_k+ R_k^p≥ R_k^th,∀ k∈𝒦, γ^s≥γ̅. The problem (P1) is an integer programming problem, which can be solved by K-Means algorithm. First, we determine the number of clusters based on the number of UAVs U and cluster CSs based on CSs position. Specifically, we denote 𝒥_u^(ν) as u-th cluster at ν-th step, and treat the cluster centroids as 𝐨_u^(ν), u ∈𝒰, with 𝒪^(ν) denoting the position set of cluster centroids. At the ν-th step, we first calculate the distances between CSs and cluster centroids, and assign CS k into the cluster corresponding to the nearest cluster centroid 𝒥_u^(ν). k ∈𝒥_u^(ν)⇔ r(𝐨_u^(ν),𝐪_k)= min_i∈ U{r(𝐨_i^(ν),𝐪_k)}. Next, update the u-th cluster centroid by averaging the geographic positions of clustered CSs, which is denoted by 𝐨_u^(ν+1)=∑_k∈𝒥_u𝐪_k/|𝒥_u|, ∀ u ∈𝒰. Then update clusters 𝒥_u^(ν+1) based on (<ref>) until the cluster centroids no longer change. Then we get the clusters {𝒥_u^(ν+1)}_u=1^U as the UAV-CS association clusters and the position set of cluster centroids 𝒪^(ν+1), which can be used as initialization of the UAV deployment optimization. The process of using the K-Means method to solve problem (P1) is summarized in Algorithm <ref>. §.§ UAV Deployment Optimization The UAV deployment 𝒪 is then optimized with the fixed transmit beamforming vectors {𝐏_u}_u=1^U, common rate allocation variable {C_k}_k=1^K, and updated UAV-CS association clusters {𝒥_u}_u=1^U and is initialized to 𝒪^0 based on the position set of cluster centroids obtained in problem (P1). Specifically, by introducing a slack variable f_u,k, the optimization problem (P0) can be reformulated as (P2):max _𝒪 ∑_k ∈𝒦μ_k f_u,k f_u,k≤ R_k^p,∀ k ∈𝒦, C_k+R_k^p≥ R_k^th,∀ k∈𝒦, γ^s≥γ̅. Problem (P2) is also a non-convex problem due to the constraints (<ref>), (<ref>) and (<ref>). Specifically, to deal with the non-convex constraint (<ref>), we introduce slack variables {r̂_i,k} that follow r̂_i,k≤ r^2(𝐨_i,𝐪_k),∀ k ∈𝒦, k∉ 𝒥 _i . With (<ref>), (<ref>) can be reformulated as a convex constraint, which is expressed as 1!𝐨_u-𝐪_k^2≤1/Ψ(|1𝐩_ u, k,p|^2/2^R_k^t h-C_k/B-1-∑_j ∈ 𝒥 _u\{k}|1𝐩_u, j,p|^2)-H_u^2, where Ψ =∑_i ∈𝒰\{u}∑_j ∈𝒥_i1/r̂_i,k|1𝐩_i, j,p|^2+σ^2ε_0^-1, with 1∈ℝ^1× N_t being an all-ones vector. Please refer to Appendix <ref>. Next, we deal with the non-convex constraint (<ref>). Similarly, with (<ref>), constraint (<ref>) can be reformulated as f_u,k≤R̃^p_k,∀ k ∈𝒦, where R̃^p_k is expressed in (<ref>) at the top of this page. 
mytempeqncnt1 We adopt the SCA to deal with it. Specifically, by using the fact that the first-order Taylor expansion of the convex differentiable functions serves as a global lower bound <cit.>, we have f_u,k≤ A_u,k^(υ)+𝐜_u,k^(υ)^H(𝐨_u-𝐨_u^(υ)), where 𝐨_u^(υ) is the local point obtained at the υ-th iteration. A_u,k^(υ) and 𝐜_u,k^(υ) are expressed in (<ref>) and (<ref>), respectively, at the top of this page. mytempeqncnt Besides, in order to deal with the non-convex constraint (<ref>), let us define 𝐑_u=𝐏_u𝐏_u^H and 𝐀(𝐨_u,𝐪_0)=𝐚(𝐨_u,𝐪_0)𝐚(𝐨_u,𝐪_0)^H for notational convenience. Accordingly, we denote the entries in the p-th row and q-th column of 𝐑_u and 𝐀(𝐨_u,𝐪_0) as [𝐑_u]_p,q and [𝐀(𝐨_u,𝐪_0)]_p,q, respectively. The magnitude and phase of [𝐑_u]_p,q are denoted by|[𝐑_u]_p,q| and θ_p, q^𝐑_u, respectively. Therefore, (<ref>) can be rewritten equivalently as ∑_u ∈𝒰tr(𝐑_u 𝐀(𝐨_u, 𝐪_0))/r^2(𝐨_u, 𝐪_0)≥ r^2(𝐨_0, 𝐪_0) σ^2 γ̅/|β_0|. We perform the first-order Taylor expansion on the left-hand-side (LHS) of (<ref>) at local point 𝐨_u^(υ), then (<ref>) is reformulated as H_u^(υ)+𝐞_u^(υ)^H(𝐨_u-𝐨_u^(υ)) ≥ r^2(𝐨_0, 𝐪_0) σ^2 γ̅/|β_0|, where H_u^(υ)= tr(𝐑_u 𝐀(𝐨_u^(υ), 𝐪_0))/r^2(𝐨_u^(υ), 𝐪_0), 0.98!𝐞_u^(υ)= [F(𝐑_u,𝐨_u^(υ), 𝐪_0) r^2(𝐨_u^(υ), 𝐪_0) -2tr(𝐑_u 𝐀(𝐨_u^(υ), 𝐪_0))(𝐨_u^(υ)-𝐪_0)]/ r^4(𝐨_u^(υ), 𝐪_0), F(𝐑_u,𝐨_u^(υ), 𝐪_0)=.∂tr(𝐑_u 𝐀(𝐨_u, 𝐪_0))/∂𝐨_u|_𝐨_u=𝐨_u^(υ). Please refer to Appendix <ref>. Furthermore, since r^2(𝐨_i,𝐪_k) is a convex function with respect to 𝐨_i, we replace the right-hand-side (RHS) of (<ref>) with its first-order Taylor expansion and re-expression (<ref>) as 1!r̂_i,k≤𝐨_i^(υ)-𝐪_k^2+2(𝐨_i^(υ)-𝐪_k)^T(𝐨_u-𝐨_u^(υ))+H_i^2,∀ k ∈𝒦, k∉ 𝒥 _i . Finally, by replacing the non-convex constraints (<ref>), (<ref>), and (<ref>) as their approximated forms in (<ref>), (<ref>), and (<ref>), we obtain the convex version of the optimization problem (P2) in the υ-th iteration as (P2.υ): max _{f_u,k}, 𝒪^(υ),{r̂_i,k} ∑_k ∈𝒦μ_k f_u,k f_u,k≤ A_u,k^(υ)+𝐜_u,k^(υ)^H(𝐨_u-𝐨_u^(υ)), ∀ k ∈𝒦,k∈ 𝒥 _u, 0.86!𝐨_u-𝐪_k^2≤1/Ψ(|1𝐩_ u, k,p|^2/2^R_k^t h-C_k/B-1-∑_j ∈ 𝒥 _u\{k}|1𝐩_u, j,p|^2)-H_u^2,∀ k ∈𝒦,k∈ 𝒥 _u, 0.86!H_u^(υ)+𝐞_u^(υ)^H(𝐨_u-𝐨_u^(υ)) ≥ r^2(𝐨_0, 𝐪_0) σ^2 γ̅/|β_0|,∀ u ∈𝒰, 0.87!r̂_i,k≤𝐨_i^(υ)-𝐪_k^2+2(𝐨_i^(υ)-𝐪_k)^T(𝐨_u-𝐨_u^(υ))+H_i^2,∀ k ∈𝒦, k∉ 𝒥 _i . Hence, problem (P2.υ) can be solved efficiently using the convex software tools such as CVX <cit.>. The detailed process is given in Algorithm <ref>. §.§ Beamforming and Rate Allocation Optimization Now we optimize the UAV beamforming matrixes {𝐏_u}_u=1^U and common rate allocation variables {C_k}_k=1^K with the updated UAV-CS association cluster and the location of the UAV. By introducing R̂_k^p as the rate slack variable and letting 𝐏_u,c=𝐩_u,c𝐩_u,c^H, 𝐏_u, k,p=𝐩_u, k,p𝐩_u, k,p^H, 𝐏_u,r= 𝐩_u,r𝐩_u,r^H, 𝐇_u, k=𝐡_u, k𝐡_u, k^H, the optimization problem can be rewritten as (P3):max _{𝐏_u,c}_u=1^U,{𝐏_u, k,p}_k=1^K,{𝐏_u,r}_u=1^U{C_k}_k=1^K,{R̂_k^p}_k=1^K∑_k ∈𝒦μ_k(C_k+R̂_k^p) C_k≥0,∀ k∈𝒦, 0.95!Blog _2(1+∑_u ∈𝒰tr(𝐇_u, k𝐏_u,c)/∑_u ∈𝒰∑_j ∈𝒥_utr(𝐇_u, k𝐏_u, j,p)+σ^2) ≥∑_i ∈𝒦 C_i,∀ k ∈𝒦, 0.95!Blog _2(1+tr(𝐇_u, k𝐏_u,k,p)/∑_i ∈𝒰∑_j ∈𝒥_i \{k}tr(𝐇_i, k𝐏_i, j,p)+σ^2) ≥R̂_k^p,∀ k ∈𝒦 , tr(𝐏_u,c+ ∑_k=1^K𝐏_u, k,p+𝐏_u,r) ≤ P_max,∀ u∈𝒰, 𝐏_u,c≽ 0, 𝐏_u, k,p≽ 0, 𝐏_u,r≽ 0, ∀ u ∈𝒰,∀ k ∈𝒦, 0.95!rank(𝐏_u,c)=1, rank(𝐏_u, k,p)=1,rank(𝐏_u,r)=1, ∀ u ∈𝒰, ∀ k ∈𝒦 , 0.95!∑_u ∈𝒰tr((𝐏_u,c+ ∑_k=1^K𝐏_u, k,p+𝐏_u,r) 𝐀(𝐨_u, 𝐪_0))/r^2(𝐨_u, 𝐪_0)≥ r^2(𝐨_0, 𝐪_0) σ^2 γ̅/|β_0|, C_k+R̂_k^p≥ R_k^th,∀ k∈𝒦. Problem (P3) is an SDP problem, in which the constraints (<ref>), (<ref>), and (<ref>) are non-convex. 
To solve problem (P3), we first introduce slack variables {η_ k}_k=1^K and {ρ_ k}_k=1^K <cit.>, then the constraint (<ref>) can be transformed as η_ k-ρ_k≥∑_i ∈𝒦 C_i log 2/B , ∀ k ∈𝒦, 1!e^η_ k≤∑_u ∈𝒰tr(𝐇_u, k𝐏_u,c)+ ∑_u ∈𝒰∑_j ∈𝒥_utr(𝐇_u, k𝐏_u, j,p)+σ^2, ∀ k ∈𝒦, e^ρ_ k≥∑_u ∈𝒰∑_j ∈𝒥_utr(𝐇_u, k𝐏_u, j,p)+σ^2, ∀ k ∈𝒦. Similarly, by introducing slack variables {χ_k}_k=1^K,{ζ_k}_k=1^K, (<ref>) can be rewritten as χ_k-ζ_k ≥R̂_k^p log 2/B, ∀ k ∈𝒦, e^χ_k≤∑_i ∈𝒰∑_j ∈𝒥_i tr(𝐇_i, k𝐏_i, j,p)+σ^2, ∀ k ∈𝒦, e^ζ_k≥∑_i ∈𝒰∑_j ∈𝒥_i \{k}tr(𝐇_i, k𝐏_i, j,p)+σ^2, ∀ k ∈𝒦. It can be observed that the constraints (<ref>) and (<ref>) are still non-convex. To deal with it, at the κ-th step, we first approximate the LHS of (<ref>) based on their first-order Taylor expansion at local point ρ_ k^(κ), and accordingly re-express (<ref>) as 1!e^ρ_k^(κ)(1+ρ_k-ρ_ k^(κ))≥∑_u ∈𝒰∑_j ∈𝒥_utr(𝐇_i, k𝐏_i, j,p)+σ^2, ∀ k ∈𝒦. Similarly, (<ref>) can be re-expressed as 1!e^ζ_ k^(κ)(1+ζ_ k-ζ_ k^(κ)) ≥∑_i ∈𝒰∑_j ∈𝒥_i \{k}tr(𝐇_i, k𝐏_i, j,p)+σ^2, ∀ k ∈𝒦. Next, we deal with the non-convex rank constraints in (<ref>) via the idea of SDR. In particular, we relax the rank constraints in (<ref>) and (P3) can be reformulated as (P3.SDR): max _{𝐏_u,c}_u=1^U,{𝐏_u, k,p}_k=1^K, {𝐏_u,r}_u=1^U{C_k}_k=1^K, {R̂_k^p}_k=1^K, {η_ k}_k=1^K,{ρ_ k}_k=1^K,{χ_ k}_k=1^K,{ζ_ k}_k=1^K∑_k ∈𝒦μ_k(C_k+R̂_k^p) s.t. (<ref>),(<ref>),(<ref>),(<ref>),(<ref>),(<ref>), (<ref>),(<ref>),(<ref>) Note that problem (P3. SDR) is a convex SDP and thus can be solved by CVX. It is worth noting that the ranks of solution {𝐏_u,c}_u=1^U,{𝐏_u, k,p}_k=1^K, {𝐏_u,r}_u=1^U to (P3) are equal to 1, and can be obtained via eigenvalue decomposition (EVD). Otherwise, the Gaussian randomization method is utilized to construct sub-optimal rank-one solution <cit.>. §.§ Complexity Analysis In this subsection, we analyze the complexity of the proposed algorithms. The complexity for solving problem (P0) is mainly determined by the complexity of solving problem (P1), (P2), and (P3). Specifically, problem (P1) is solved by K-Means algorithm, and the complexity of solving problem (P1) is 𝒪(2KUϵ_1) <cit.>, where ϵ_1 is the accuracy of the K-Means algorithm for solving problem (P1). Next, problem (P2) is solved by the SCA technique, and transformed as problem (P2.υ) at each iteration. There are (K+1)(U+1)-1 constraints in problem (P2.υ), so the number of iterations that are required for the SCA technique is 𝒪(√((K+1)(U+1)-1)log _2(1 / ϵ_2)) <cit.>, where ϵ_2 is the accuracy of the SCA technique for solving problem (P2.υ). At each iteration, the complexity of solving problem (P2.υ) is 𝒪(S_1^2 S_2), where S_1=U(K+1) and S_2=(K+1)(U+1)-1 are the total numbers of the variables and constraints, respectively. Thus, the total complexity of the SCA technique for solving problem (P2) is 𝒪(U^3.5 K^3.5log _2(1 / ϵ_2)). In a similar manner, the computational complexity of problem (P3) is 𝒪((N_t^2UK)^3.5log _2(1 / ϵ_3)) <cit.>, where ϵ_3 is the accuracy of the SCA technique for solving problem (P3). As a result, the total complexity for solving problem (P0) is 1!𝒪(2KUϵ_1+T U^3.5 K^3.5log _2(1 / ϵ_2)+T(N_t^2UK)^3.5log _2(1 / ϵ_3)) <cit.>, where T is the number of iterations for the proposed CoRSMA-ISAC scheme. § SIMULATION RESULTS In this section, we provide simulations to demonstrate the effectiveness of the proposed CoRSMA-ISAC scheme. We consider a 2D disaster area of 500 m× 500 m where TS is located at the center of the area, i.e., 𝐪_0=(250,250), and CSs uniformly distributed in such an area. 
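Before listing the remaining simulation parameters, the following sketch illustrates how the received sensing SNR of the sensing model depends on the UAV placement for a geometry of this kind, with the TS at (250, 250) and an assumed 100 m flight altitude; the beamforming matrices and channel constants are random placeholders rather than optimized or calibrated values.

import numpy as np

rng = np.random.default_rng(1)
U, Nt, K = 3, 8, 4
sigma2, beta0 = 1e-11, 1e-5                    # placeholder noise power and reference sensing channel power
H_u, H_0 = 100.0, 100.0                        # assumed ISAC UAV and receive-UAV altitudes [m]
q0 = np.array([250.0, 250.0])                  # TS horizontal position (center of the area)
o0 = np.array([250.0, 300.0])                  # receive UAV horizontal position (placeholder)
o  = np.array([[150.0, 200.0], [300.0, 350.0], [250.0, 120.0]])  # placeholder ISAC UAV positions

def steering_and_range(o_u):
    r = np.sqrt(np.sum((o_u - q0) ** 2) + H_u ** 2)
    cos_theta = H_u / r                        # transmit azimuth angle toward the TS
    return np.exp(1j * np.pi * np.arange(Nt) * cos_theta), r

P = (rng.normal(size=(U, Nt, K + 2)) + 1j * rng.normal(size=(U, Nt, K + 2))) / np.sqrt(2)  # P_u, Nt x (K+2)

r0 = np.sqrt(np.sum((o0 - q0) ** 2) + H_0 ** 2)
gamma_s = 0.0
for u in range(U):
    a_u, r_u = steering_and_range(o[u])
    gamma_s += np.linalg.norm(a_u.conj() @ P[u]) ** 2 / r_u ** 2   # ||a^H(o_u,q_0) P_u||^2 / r^2(o_u,q_0)
gamma_s *= abs(beta0) / (r0 ** 2 * sigma2)
print(f"received sensing SNR: {gamma_s:.2e}")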
All UAVs are assumed to fly at a fixed altitude H_u=H_0=100 m with N_t=8 antenna elements. Unless otherwise stated, the maximum transmit power of each UAV is fixed at P_max=25 dBm, and the reference channel power is ε_0= -60 dB. The reference sensing channel power is β_0=-50 dB. The noise power is assumed to be σ^2=-110 dBm <cit.> and the channel bandwidth is B=1 MHz. Finally, the threshold for the total achievable rate of each CS is set as R_k^th=1 Mbps, the sensing SNR threshold is set as γ̅=2 <cit.>, and the rate weight is assumed to be μ_k=1/K, ∀ k∈𝒦. Fig. <ref> shows the procedure of UAV-CS association and UAV deployment under our proposed CoRSMA-ISAC scheme. The number of CSs is set to K=6. In Fig. <ref>, it can be observed that all UAVs first obtain the UAV-CS association and the initial UAV deployment based on (<ref>) and (<ref>) via the K-Means algorithm. Next, Fig. <ref> depicts that, starting from the initialized UAV deployment, the optimal UAV deployment is obtained after several iterations of the proposed CoRSMA-ISAC scheme. Fig. <ref> shows a top view of the variation of the UAV deployment, from which it can be seen that, compared to the initial positions of the UAVs, their optimal positions move closer to each other and surround the TS. This is because each UAV transmits the common message to serve not only its associated CSs but also the other, unassociated CSs, while meeting the sensing SNR requirement. Next, the convergence behavior of the proposed CoRSMA-ISAC scheme is investigated in Fig. <ref>. We can clearly see that the algorithm converges quickly to the maximum WSR after several iterations. Besides, as expected, the larger the available transmit power budget P_max, the larger the obtained WSR. As a benchmark comparison, we consider the traditional ISAC design with SDMA. The corresponding communication rate in (<ref>) can be expressed as R_k^SDMA=Blog _2(1+|𝐡_u, k^H 𝐩_u, k|^2/∑_i ∈𝒰∑_j ∈𝒥_i \{k}|𝐡_i, k^H 𝐩_ i, j|^2+σ^2). We also consider the traditional ISAC design with NOMA, where each ISAC UAV divides the available spectrum equally and communicates with its associated CSs via NOMA. According to the NOMA scheme, SIC is employed to decode the inter-CS interference. Without loss of generality, we re-index the CSs associated with the u-th UAV as 𝒥_u={k_u-1+1, ⋯, k_u}, with k_0=1, k_U=K. We further assume that 𝐡_u, k_u-1+1<⋯<𝐡_u, k_u. Thus, the rate of CS k∈{k_u-1+1, ⋯, k_u-1} can be expressed as R_k^NOMA=B/U·log _2(1+|𝐡_u, k^H 𝐩_u, k|^2/∑_i=k+1^ k_u|𝐡_u, k^H 𝐩_u, i|^2+σ^2). For CS k_u,∀ u∈𝒰, the rate is given by R_k_u^NOMA=B/U·log _2(1+|𝐡_u, k_u^H 𝐩_u, k_u|^2/σ^2). Moreover, we also consider the traditional ISAC design with OMA, where the CSs use separate spectrum and divide it equally for transmission. The corresponding communication rate in (<ref>) can be expressed as R_k^OMA=B/K·log _2(1+|𝐡_u, k^H 𝐩_u, k|^2/σ^2). It can be seen in Fig. <ref> that, under the same sensing SNR constraint, the WSR increases when more UAVs transmit the common message, since interference management is enhanced under the limited spectrum. Furthermore, Fig. <ref> illustrates a trade-off between communication and sensing performance. It can be observed that the WSR decreases as the sensing requirement increases. This is expected because a higher sensing requirement forces more power to be allocated to sensing, so the power available for communication is relatively reduced and the WSR decreases. 
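For reference, the benchmark rates R_k^SDMA, R_k^NOMA, and R_k^OMA defined above can be evaluated as in the following sketch; the channels, beamformers, and association are placeholder assumptions, and the NOMA decoding order follows the channel-norm ordering stated in the text.

import numpy as np

rng = np.random.default_rng(5)
U, K, Nt, B, sigma2 = 2, 4, 8, 1e6, 0.1
assoc = {0: [0, 1], 1: [2, 3]}                       # placeholder UAV-CS association
h = (rng.normal(size=(U, K, Nt)) + 1j * rng.normal(size=(U, K, Nt))) / np.sqrt(2)
p = (rng.normal(size=(U, K, Nt)) + 1j * rng.normal(size=(U, K, Nt))) / np.sqrt(2 * Nt)
serving = {k: u for u, ks in assoc.items() for k in ks}

def sdma_rate(k):
    u = serving[k]
    sig  = abs(h[u, k].conj() @ p[u, k]) ** 2
    intf = sum(abs(h[i, k].conj() @ p[i, j]) ** 2
               for i in range(U) for j in assoc[i] if j != k)    # all other private streams as noise
    return B * np.log2(1 + sig / (intf + sigma2))

def noma_rates(u):
    # decode in order of increasing channel norm; each CS sees interference
    # only from the not-yet-decoded (stronger) CSs in its own cluster
    order = sorted(assoc[u], key=lambda k: np.linalg.norm(h[u, k]))
    rates = {}
    for idx, k in enumerate(order):
        intf = sum(abs(h[u, k].conj() @ p[u, j]) ** 2 for j in order[idx + 1:])
        rates[k] = (B / U) * np.log2(1 + abs(h[u, k].conj() @ p[u, k]) ** 2 / (intf + sigma2))
    return rates

def oma_rate(k):
    u = serving[k]
    return (B / K) * np.log2(1 + abs(h[u, k].conj() @ p[u, k]) ** 2 / sigma2)

print("SDMA [Mbps]:", {k: round(sdma_rate(k) / 1e6, 2) for k in range(K)})
print("NOMA [Mbps]:", {k: round(v / 1e6, 2) for u in range(U) for k, v in noma_rates(u).items()})
print("OMA  [Mbps]:", {k: round(oma_rate(k) / 1e6, 2) for k in range(K)})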
Besides, as γ̅ increases, CoRSMA-ISAC achieves better communication performance than SDMA-ISAC, NOMA-ISAC, and OMA-ISAC. This is because, with CoRSMA-ISAC, sensing also benefits from the beamforming gain of the common message transmitted by the different UAVs. NOMA-ISAC performs worse than CoRSMA-ISAC and SDMA-ISAC due to the inefficient use of SIC layers in multi-antenna NOMA, which lowers the sum DoF <cit.>. Specifically, due to the rate threshold requirement, NOMA-ISAC decreases the power allocated to the CS with the better channel gain in order to ensure fairness for the CSs with worse channel gains, resulting in a worse WSR. On the other hand, when the UAVs are not coordinated, they transmit over orthogonal spectrum, and the reduction of transmission spectrum resources further decreases the WSR. Fig. <ref> shows that the ratio of the common rate to the WSR increases as γ̅ increases, which also verifies that the common message transmission helps sensing in the proposed CoRSMA-ISAC. Moreover, when the number of UAVs increases, CoRSMA-ISAC reduces the ratio of the common rate to achieve a higher WSR. This is because the UAV-CS association changes, so each CS can choose a more suitable UAV for private message transmission. On the other hand, we compare the WSR for different numbers of CSs. As shown in Fig. <ref>, as the number of CSs increases, more CSs use the same spectrum for transmission, and the WSR decreases due to the multi-user intra- and inter-cell interference. However, compared to the other schemes, CoRSMA-ISAC with its flexible decoding rules can mitigate the WSR degradation caused by spectrum congestion by increasing the power allocated to the common message. We further discuss the sensing performance of the proposed CoRSMA-ISAC scheme. From Fig. <ref>, it can be seen that CoRSMA-ISAC also achieves a higher sensing SNR than OMA-ISAC, NOMA-ISAC, and SDMA-ISAC under the same P_max. Specifically, when P_max is large enough, this improvement becomes more significant. Similar to Fig. <ref>, it is again due to the benefit of common message transmission for sensing. Besides, Fig. <ref> shows that cooperative sensing by multiple UAVs can enhance the sensing SNR compared to single-UAV sensing (U=1). Such a sensing SNR enhancement is more significant when the number of UAVs is larger. § CONCLUSION In this paper, we have investigated a joint UAV-CS association, UAV deployment, and beamforming optimization problem for coordinated RSMA-based ISAC in an emergency UAV system. We presented the communication and sensing signal models, and formulated an optimization problem to maximize the WSR of the system while satisfying the sensing SNR constraint. To solve the formulated non-convex optimization problem, we proposed an efficient algorithm based on K-Means, SCA, and SDR techniques. The numerical results indicated that the proposed CoRSMA-ISAC scheme can achieve a higher WSR while satisfying the sensing SNR constraint under spectrum congestion. Moreover, the proposed CoRSMA-ISAC scheme also guarantees a higher sensing SNR under the same maximum transmit power constraint. In future work, we will consider more practical scenarios in emergency events, such as the mobility of communication users and targets, the existence of clutter, and 3D UAV deployment. 
§.§ Derivation of Sensing SNR in (<ref>) First, the numerator of (<ref>) can be simplified as <cit.> 1!𝔼[𝐆𝐒_F^2]=tr{𝔼[𝐒𝐒^H𝐆^H𝐆]}=tr{𝔼[𝐒𝐒^H]_L·𝐈𝔼[𝐆^H𝐆]} 0.89!=L∑_u ∈𝒰( tr{𝔼[𝐆̅_u^H𝐆̅_u]} + ∑_u' ∈𝒰\{u}tr{𝔼[𝐆̅_u^H𝐆̅_u']}) Due to the expectation over β_u and independence of these random variables, we have 𝔼[𝐆̅_u^H𝐆̅_u']=0. Further, we have tr{𝔼[𝐆̅_u^H𝐆̅_u]}= 1!tr{𝔼[|β_u|^2 𝐚(𝐨_u,𝐪_0)𝐚^H(𝐨_u,𝐪_0)𝐏_u𝐏_u^H 𝐛(𝐨_0,𝐪_0) 𝐛^H(𝐨_0,𝐪_0) _N_r]} =|β_0|N_r/r^2(𝐨_u,𝐪_0)r^2(𝐨_0,𝐪_0) 𝐚^H(𝐨_u,𝐪_0) 𝐏_u^2. Similarly, we have 𝔼[𝐍_F^2]=L N_r σ^2. With (<ref>), (<ref>), and (<ref>), (<ref>) can be obtained. This completes the proof. §.§ Proof of Proposition <ref> By following (<ref>) and (<ref>), we have (<ref>) at the top of the next page, which is further arranged to get |1𝐩_u,k,p|^2/2^R_k^th-C_k/B-1-∑_j∈𝒥_u∖{k}|1𝐩_u,j,p|^2≥ r^2(𝐨_u,𝐪_k)(∑_i∈𝒰∖{u}∑_j∈𝒥_i1/r^2(𝐨_i,𝐪_k)|1𝐩_i,j,p|^2+σ^2ε_0^-1). Since (<ref>) is still non-convex with respect to r^2(𝐨_u,𝐪_k), we use slack variables {r̂_i,k≤ r^2(𝐨_i,𝐪_k),∀ k ∈𝒦, k∉ 𝒥 _i } to replace r^2(𝐨_i,𝐪_k) in (<ref>). Then we have |1𝐩_u,k,p|^2/2^R_k^th-C_k/B-1-∑_j∈𝒥_u∖{k}|1𝐩_u,j,p|^2≥ r^2(𝐨_u,𝐪_k)(∑_i∈𝒰∖{u}∑_j∈𝒥_i1/r̂_i,k|1𝐩_i,j,p|^2+σ^2ε_0^-1_Ψ). Notice that r^2(𝐨_u,𝐪_k)=𝐨_u-𝐪_k^2+H_u^2, then (<ref>) can be re-expressed as r^2(𝐨_u,𝐪_k) =𝐨_u-𝐪_k^2+H_u^2 ≤1/Ψ(|1𝐩_u, k,p|^2/2^R_k^t h-C_k/B-1-∑_j ∈𝒥_u∖{k}|1𝐩_u, j,p|^2). Further, we have the convex constraint expressed as 0.89!𝐨_u-𝐪_k^2 ≤1/Ψ(|1𝐩_u, k,p|^2/2^R_k^t h-C_k/B-1-∑_j ∈𝒥_u∖{k}|1𝐩_u, j,p|^2)-H_u^2. This completes the proof. §.§ Proof of Proposition <ref> The entry in the p-th row and q-th column of 𝐀(𝐨_u, 𝐪_0) can by expressed as [𝐀(𝐨_u, 𝐪_0)]_p, q=e^j π (p-q) H_u/r(𝐨_u, 𝐪_0). It is observed from (<ref>) that 𝐑_u and 𝐀(𝐨_u, 𝐪_0) are Hermitian matrices <cit.>, and thus we have tr(𝐑_u 𝐀(𝐨_u, 𝐪_0)) = ∑_p=1^N_t∑_q=1^N_t[𝐑_u]_p, qe^j π (q-p)H_u/r(𝐨_u, 𝐪_0) = ∑_a=1^N_t[𝐑_u]_a, a+2 ∑_p=1^N_t∑_q=p+1^N_t|[𝐑_u]_p, q| ×cos(θ_p, q^𝐑_u+ π (q-p) H_u/r(𝐨_u, 𝐪_0)). Then the first-order derivative of (<ref>) with respect to 𝐨_u is derived as F(𝐑_u,𝐨_u, 𝐪_0)=∂tr(𝐑_u 𝐀(𝐨_u, 𝐪_0))/∂𝐨_u = 0.95!2 π∑_p=1^N_t∑_q=p+1^N_t|[𝐑_u]_p, q| ×sin[θ_p, q^𝐑_u+ π (q-p) H_u/r(𝐨_u,𝐪_0)] ×(q-p)H_u (𝐨_u-𝐪_0)/r^3(𝐨_u,𝐪_0). Then F(𝐑_u,𝐨_u^(υ), 𝐪_0)=.∂tr(𝐑_u 𝐀(𝐨_u, 𝐪_0))/∂𝐨_u|_𝐨_u=𝐨_u^(υ). With (<ref>) and (<ref>), the LHS of (<ref>) based on its first-order Taylor expansion at local point 𝐨_u^(υ) can be approximated as ∑_u ∈𝒰(H_u^(υ)+𝐞_u^(υ)^H(𝐨_u-𝐨_u^(υ))) , where H_u^(υ)= tr(𝐑_u 𝐀(𝐨_u^(υ), 𝐪_0))/r^2(𝐨_u^(υ), 𝐪_0), 1!𝐞_u^(υ)= [F(𝐑_u,𝐨_u^(υ), 𝐪_0) r^2(𝐨_u^(υ), 𝐪_0) -2tr(𝐑_u 𝐀(𝐨_u^(υ), 𝐪_0))(𝐨_u^(υ)-𝐪_0)]/ r^4(𝐨_u^(υ), 𝐪_0). This completes the proof. 00 D2019 D. G. C., A. Ladas, Y. A. Sambo, H. Pervaiz, C. Politis, and M. A. Imran, “An overview of post-disaster emergency communication systems in the future networks,” IEEE Wireless Commun., vol. 26, no. 6, pp. 132-139, Dec. 2019. EKMarkakis2017 E. K. Markakis et al., “Efficient next generation emergency communications over multi-access edge computing,” IEEE Commun. Mag., vol. 55, no. 11, pp. 92-97, Nov. 2017. WL2020edge L. Wang, J. Zhang, J. Chuan, R. Ma, and A. Fei, “Edge intelligence for mission cognitive wireless emergency networks,” IEEE Wireless Commun., vol. 27, no. 4, pp. 103–109, 2020. LGupta2016 L. Gupta, R. Jain, and G. Vaszkun, “Survey of important issues in UAV communication networks,” IEEE Commun. Surv. Tut., vol. 18, no. 2, pp. 1123-1152, Secondquarter 2016. YZeng2019 Y. Zeng, Q. Wu, and R. Zhang, “Accessing from the sky: A tutorial on UAV communications for 5G and beyond,” in Proc. IEEE, vol. 107, no. 
12, pp. 2327–2375, Dec. 2019. Zhao2019 N. Zhao et al., “UAV-assisted emergency networks in disasters,” IEEE Wireless Commun., vol. 26, no. 1, pp. 45–51, Feb. 2019. WL2023 B. Tian et al., “UAV-assisted wireless cooperative communication and coded caching: a multiagent two-timescale DRL approach,” IEEE Trans. Mobile Comput., early access, 2023. ZYao2021 Z. Yao, W. Cheng, W. Zhang, and H. Zhang, “Resource allocation for 5G-UAV-based emergency wireless communications,” IEEE J. Sel. Areas Commun., vol. 39, no. 11, pp. 3395-3410, Nov. 2021. TDo-Duy2021 T. Do-Duy, L. D. Nguyen, T. Q. Duong, S. R. Khosravirad, and H. Claussen, “Joint optimisation of real-time deployment and resource allocation for UAV-aided disaster emergency communications,” IEEE J. Sel. Areas Commun., vol. 39, no. 11, pp. 3411-3424, Nov. 2021. SWu2022 S. Wu, W. Xu, F. Wang, G. Li, and M. Pan, “Distributed federated deep reinforcement learning based trajectory optimization for air-ground cooperative emergency networks,” IEEE Trans. Veh. Technol., vol. 71, no. 8, pp. 9107-9112, Aug. 2022. NLin2022 N. Lin, Y. Liu, L. Zhao, D. O. Wu, and Y. Wang, “An adaptive UAV deployment scheme for emergency networking,” IEEE Wireless Commun., vol. 21, no. 4, pp. 2383-2398, Apr. 2022. WL2021Joint3D X. Wu, L. Wang, L. Xu, Z. Liu, and A. Fei, “Joint optimization of UAVs 3-D placement and power allocation in emergency communications,” in Proc. IEEE Global Commun. Conf. (GLOBECOM), 2021, pp. 01–06. WL2017 L. Wang, H. Tang, H. Wu, and G. Stüber, “Resource allocation for D2D communications underlay in Rayleigh fading channels,” IEEE Trans. Veh. Technol., vol. 66, no.2, pp. 1159-1170, Feb. 2017. Fliu2022 F. Liu et al., “Integrated sensing and communications: Toward dual-functional wireless networks for 6G and beyond,” IEEE J. Sel. Areas Comm., vol. 40, no. 6, pp. 1728-1767, Jun. 2022. JAZhang2022 J. A. Zhang et al., “Enabling joint communication and radar sensing in mobile networks-a survey,” IEEE Commun. Surv. Tut., vol. 24, no. 1, pp. 306-345, 1st 2022. JYang2023 J. Yang et al., “Multi-domain cooperative SLAM: The enabler for integrated sensing and communications,” IEEE Wireless Commun., vol. 30, no. 1, pp. 40-49, Feb. 2023. ZWei2023 Z. Wei et al., “Integrated sensing and communication signals toward 5G-A and 6G: A survey,” IEEE Internet Things J., vol. 10, no. 13, pp. 11068-11092, Jul. 2023 WL2023Joint X. Li et al., “Joint power control and bandwidth allocation for UAV-assisted integrated communication and localization networks,” in Proc. Conf. Comput. Commun. Workshops (INFOCOM WKSHPS), 2023, pp. 1–6. JMu2023 J. Mu, R. Zhang, Y. Cui, N. Gao, and X. Jing, “UAV meets integrated sensing and communication: Challenges and future directions,” IEEE Commun. Mag., vol. 61, no. 5, pp. 62-67, May 2023. LWang2023 L. Wang et al., “Aerial-ground cooperative vehicular networks for emergency integrated localization and communication,” IEEE Net., vol. 37, no. 4, pp. 323-330, Jul./Aug. 2023. KMeng2023 K. Meng et al., “UAV-Enabled integrated sensing and communication: Opportunities and challenges,” IEEE Wireless Commun., early access, 2023. KMeng20232 K. Meng et al., “Throughput maximization for UAV-enabled integrated periodic sensing and communication,” IEEE Trans. Wireless Commun., vol. 22, no. 1, pp. 671-687, Jan. 2023. XJing2022 X. Jing, F. Liu, C. Masouros, and Y. Zeng, “ISAC from the sky: UAV trajectory design for joint communication and target localization,”IEEE Trans. Wireless Commun., early access, 2024. KZhang2021 K. Zhang and C. 
Shen, “UAV aided integrated sensing and communications,” in Proc. IEEE 94th Veh. Technol. Conf., Sep. 2021, pp. 1–6. KMeng20233 K. Meng, X. He, Q. Wu, and D. Li, “Multi-UAV collaborative sensing and communication: joint task allocation and power optimization,” IEEE Trans. Wireless Commun., vol. 22, no. 6, pp. 4232-4246, Jun. 2023. pan2023cooperative Y. Pan et al., “Cooperative trajectory planning and resource allocation for UAV-enabled integrated sensing and communication systems,” IEEE Trans. Veh. Technol., early access, 2023. GC2024 G. Cheng, X. Song, Z. Lyu, and J. Xu, “Networked ISAC for low-altitude economy: Transmit beamforming and UAV trajectory design,” arXiv e-prints, p. arXiv:2405.07568 , May 2024. ZXiao2020 Z. Xiao, L. Zhu and X. -G. Xia, “UAV communications with millimeter-wave beamforming: Potentials, scenarios, and challenges,” China Commun., vol. 17, no. 9, pp. 147-166, Sept. 2020. SFu2023 S. Fu et al., “Towards energy-efficient data collection by unmanned aerial vehicle base station with NOMA for emergency communications in IoT,” IEEE Trans. on Veh. Technol., vol. 72, no. 1, pp. 1211-1223, Jan. 2023. YMao2022 Y. Mao et al., “Rate-splitting multiple access: Fundamentals, survey, and future research trends,” IEEE Commun. Surv. Tut., vol. 24, no. 4, pp. 2073–2126, 4th 2022. GZheng2023 G. Zheng, M. Wen, Y. Chen, Y. -C. Wu, and H. Vincent Poor, “Rate-splitting multiple access in wireless backhaul hetnets: A decentralized spectral efficient approach,” IEEE Trans. Wireless Commun., early access, 2023. LYin2022 L. Yin, Y. Mao, O. Dizdar, and B. Clerckx, “Rate-splitting multiple access for 6G—part II: Interplay with integrated sensing and communications,” IEEE Commun. Lett., vol. 26, no. 10, pp. 2237-2241, Oct. 2022. CXu2021 C. Xu, B. Clerckx, S. Chen, Y. Mao, and J. Zhang, “Rate-splitting multiple access for multi-antenna joint radar and communications,” IEEE J. Sel. Topics Signal Process., vol. 15, no. 6, pp. 1332-1347, Nov. 2021. YLi2021 Y. Li, W. Ni, H. Tian, M. Hua, and S. Fan, “Rate splitting multiple access for joint communication and sensing systems with unmanned aerial vehicles,” in Proc. IEEE/CIC ICCC Workshops, Aug. 2021, pp. 37-42. TT2023 T. T. Nguyen et al., “Joint rate allocation and power control for RSMA-based communication and radar coexistence systems,” IEEE Trans. Veh. Technol., vol. 72, no. 11, pp. 14673-14687, Nov. 2023. Li2023 R. Li, Z. Xiao, and Y. Zeng, "Towards seamless sensing coverage for cellular multi-static integrated sensing and communication," IEEE Trans. Wireless Commun., early access, 2023. XChen2020 X. Chen, X. Yu, Y. Huang, and J. Guan, “Adaptive clutter suppression and detection algorithm for radar maneuvering target with high-order motions via sparse fractional ambiguity function,” IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 13, pp. 1515–1526, 2020. Wu2018jointtraj Q. Wu, Y. Zeng, and R. Zhang, “Joint trajectory and communication design for multi-uav enabled wireless networks,” IEEE Trans. Wireless Commun., vol. 17, no. 3, pp. 2109–2121, 2018. YM2018 Y. Mao, B. Clerckx, and V. O. K. Li, “Rate-splitting multiple access for downlink communication systems: bridging, generalizing, and outperforming SDMA and NOMA,” EURASIP Journal on Wireless Communications and Networking, vol. 2018, no. 1, p. 133, May 2018. ZB2023 Z. Behdad, Özlem Tuğfe Demir, K. W. Sung, E. Björnson, and C. Cavdar, “Multi-static target detection and power allocation for integrated sensing and communication in cell-free massive mimo,” IEEE Trans. 
Wireless Commun., early access, 2024. jaafar2020 W. Jaafar, S. Naser, S. Muhaidat, P. C. Sofotasios, and H. Yanikomeroglu, “On the downlink performance of RSMA-based UAV communications,” IEEE Trans. Veh. Technol., vol. 69, no. 12, pp. 16 258–16 263, 2020. ZLSDR2010 Z Luo, W. -K. Ma, A. M. -C. So, Y. Ye and S. Zhang, "Semidefinite Relaxation of Quadratic Optimization Problems," IEEE Signal Processing Mag., vol. 27, no. 3, pp. 20-34, May 2010. MacQueen1967 J. MacQueen, “Some methods for classification and analysis of multivariate observations,” in Proc. 5th Berkeley Symp. Math. Statist. Probability, vol. 1, pp. 281–297, 1967. Grant2018 M. Grant and S. Boyd. CVX: MATLAB Software for Disciplined Convex Programming, Version 2.1. Accessed: Dec. 9, 2018. [Online]. Available: http://cvxr.com/cvx. Yin2022 L. Yin and B. Clerckx, “Rate-splitting multiple access for dual-functional radar-communication satellite systems,” in Proc. IEEE Wireless Commun. Netw. Conf. (WCNC), pp. 1–6, Apr. 2022 WL2018 L. Wang, H. Wu, Z. Han, P. Zhang and H. V. Poor, “Multi-hop cooperative caching in social IoT using matching theory,” IEEE Trans. Wireless Commun., vol. 17, no. 4, pp. 2127-2145, Apr. 2018. demirhan2023cellfree U. Demirhan and A. Alkhateeb, “Cell-free ISAC MIMO systems: Joint sensing and communication beamforming,” arXiv e-prints, p. arXiv:2301.11328, Feb. 2023. BC2021 B. Clerckx et al., “Is NOMA efficient in multi-antenna networks? A critical look at next generation multiple access techniques," IEEE Open J. Commun. Soc., vol. 2, pp. 1310-1343, 2021. Lyu2023jointmaneuver Z. Lyu, G. Zhu, and J. Xu, “Joint maneuver and beamforming design for UAV-enabled integrated sensing and communication,” IEEE Trans. Wireless Commun., vol. 22, no. 4, pp. 2424–2440, 2023.
http://arxiv.org/abs/2406.18421v1
20240626151602
Neural Network Emulation of Flow in Heavy-Ion Collisions at Intermediate Energies
[ "Nicholas Cox", "Xavier Grundler", "Bao-An Li" ]
nucl-th
[ "nucl-th", "hep-ph", "nucl-ex" ]
http://arxiv.org/abs/2406.18430v1
20240626152726
Facial Image Feature Analysis and its Specialization for Fréchet Distance and Neighborhoods
[ "Doruk Cetin", "Benedikt Schesch", "Petar Stamenkovic", "Niko Benjamin Huber", "Fabio Zünd", "Majed El Helou" ]
cs.CV
[ "cs.CV" ]
Facial Image Feature Analysis and its Specialization for Fréchet Distance and Neighborhoods Doruk Cetin^* Benedikt Schesch^*† Petar Stamenkovic^† Niko Benjamin Huber^ Fabio Zünd^† Majed El Helou^† ^* Equal contribution, second author was an intern at MTC. ^Align Technology Zürich, Switzerland. ^†Media Technology Center, ETH Zürich, Switzerland. dcetin@aligntech.com, {bschesch, pstamenkovic, melhelou}@ethz.ch. July 1, 2024 ======================================================================================================================================================================================================================================================================================================================================================== § ABSTRACT Assessing distances between images and image datasets is a fundamental task in vision-based research. It is a challenging open problem in the literature and despite the criticism it receives, the most ubiquitous method remains the Fréchet Inception Distance. The Inception network is trained on a specific labeled dataset, ImageNet, which has caused the core of its criticism in the most recent research. Improvements were shown by moving to self-supervision learning over ImageNet, leaving the training data domain as an open question. We make that last leap and provide the first analysis on domain-specific feature training and its effects on feature distance, on the widely-researched facial image domain. We provide our findings and insights on this domain specialization for Fréchet distance and image neighborhoods, supported by extensive experiments and in-depth user studies. Fréchet distance, feature space distance, feature specialization, dataset similarity measures, image neighborhoods. § INTRODUCTION AND RELATED WORK Measuring distances between datasets is a valuable yet challenging task, in particular for complex signals such as images. It is crucial for understanding data distributions and domain gaps for transfer learning and generalization. It is also important for developing generative networks that recently gained in popularity <cit.> and that are prone to hallucination <cit.>. The most ubiquitous approach to measuring dataset distance is the widely used Fréchet inception distance (FID) <cit.>. It computes the Fréchet statistical distance <cit.> between the datasets' image features, extracted by an ImageNet-trained <cit.> Inception network <cit.>. A plethora of similar solutions emerged in the literature, notably extensions to conditional inputs <cit.> and adversarial robustness <cit.>. KID <cit.> proposes a modified distance on the same feature space. It has certain theoretical advantages but in practice correlates closely with FID. CKA <cit.> is another FID alternative, but both the paper's results and user survey show it performs in a similar way as FID. sFID <cit.> simply mimics FID on intermediate feature maps to improve spatial information that is more fine-grained. StyleGAN-XL <cit.> even computes rFID on the features of a randomly initialized network as an additional metric, with the idea originating from Naeem et al. <cit.>, where the objective is to be more general by being task-agnostic. rFID results, however, tend to have an erratic behavior in practice with extremely large values. Zhang et al.  
<cit.> have shown that training is crucial as random networks achieve significantly worse performance, and that random networks focus more on low-level information <cit.>, further supporting the use of trained features for Fréchet distance. More specialized methods have been proposed that are different from FID. Kynkäänniemi et al. <cit.> investigated precision and recall between feature spaces as complementary metrics to FID. The pair can illustrate certain trade-offs but do not give a single score that can be used as an optimization target. We note that the experimental evaluation on faces only uses the standard ImageNet FID <cit.>. Precision and recall definitions are refined to density and coverage in <cit.>, to better adapt to image manifolds that need to be estimated through only a limited number of sample points. Another approach to visualize feature space drift is based on SVCCA <cit.>, however, it cannot readily scale to large datasets. Lastly and most similar to FID, Ramtoula et al. <cit.> create a histogram per neuron that contains the activation values of that neuron across network layers. The histogram of an image can be compared to the average histogram of a dataset to obtain a similarity metric. While this approach has the advantage of applicability to a single image, high-level information in cross-neuron dependencies as well as location information are lost. FID thus remains the most practical and ubiquitous metric in recent literature <cit.>, despite its numerous shortcomings. The most simple to resolve are that it can be affected by resizing when anti-aliasing is omitted <cit.> and that its estimator has statistical bias that is model dependent <cit.>. FID relies on an ImageNet-trained Inception network that can be more sensitive to texture than to shape <cit.>. This bias is due to aggressive random cropping in data augmentation and can be reduced by using more natural augmentations like image distortions <cit.>. However, the underlying ImageNet training causes inherent limitations <cit.>, such as a bias towards only the most salient object in a multi-object image <cit.>, because ImageNet is meant for single object learning. Most recently,  <cit.> criticizes the strong relation between Inception features and ImageNet classes, particularly as ImageNet does not contain human or human face classes while FID is most commonly used in studying generative models for face synthesis. The goal of Morozov et al. <cit.> is to explore replacing supervised ImageNet feature extractors with self-supervised ones. The results show certain improvements in FID. The investigation supports the use of self supervision and concludes with the open question of using self-supervised features that are domain specific, which was left to future research <cit.>. Our goal is to analyze how specializing the feature space impacts feature distances. We collect a novel facial dataset for our self-supervised learning to guarantee the independence from public datasets, and high image quality. We conduct extensive experiments and three user studies with 3432 answers from 26 participants. § METHODOLOGY   §.§ Feature-learning independent dataset   To train our feature extractor, it is important to rely on a completely external dataset. The reason is that common facial image datasets are often used in training image generators, and any distance metric should be disentangled from them. We thus collect an in-house facial image dataset to train our feature extractor through self-supervision. 
We create a 30'000 image training set, in accordance with the size of CelebA-HQ <cit.>, which we call Faces. The images are all center cropped, with no occlusions, and manually curated to ensure quality. By training a feature extractor on our held-out dataset, we lay the basis for an independent metric built over those features. This enables benchmarking on the commonly used public datasets, and on public image generators trained on them. To promote better fairness, our dataset is balanced across six ethnicities (latino hispanic, asian, middle eastern, black, indian, and white) <cit.>. We further leave out an additional 21'000 images that we label for gender and use as a test set in a separate experiment outlined in Sec. <ref>. §.§ Self-supervised feature learning Self-supervised learning can improve feature extraction performance <cit.>. Another advantage is to decrease biases and errors coming from the choice and assignment of labels for supervised learning <cit.>. We exploit the simple yet effective state-of-the-art DINO <cit.> method for self-supervised learning on our dataset. DINO builds on knowledge distillation between teacher and student networks, and fundamental self-supervised learning data augmentation strategies, notably extending on SwAV <cit.>. For all our experiments, we configure the feature embedding to have 2048 dimensions, aligning with the Inception <cit.> architecture for direct comparisons. We train for 100 epochs on one 24GB NVIDIA RTX 3090 GPU with a batch size of 16, and all other settings follow DINO's approach. §.§ Fréchet distance over feature spaces The Fréchet <cit.> distance F between two Gaussian distributions 𝒩(μ_1,Σ_1) and 𝒩(μ_2,Σ_2) is given by F(μ_1,Σ_1,μ_2,Σ_2) = || μ_1 - μ_2 ||_2^2 + Tr(Σ_1 + Σ_2 -2 (Σ_1Σ_2)^1/2 ), where Tr(·) is the matrix trace. This formulation is then adapted to measure the distance between two datasets 𝒟_1 and 𝒟_2. This is achieved by exploiting the Inception <cit.> network's feature extractor trained on ImageNet in a supervised manner, and called FID <cit.>. The feature extractor takes in an image and generates its embedding in a feature space. Generally, for any feature extractor f(·) we can define the Fréchet distance between datasets as F(μ^f_𝒟_1,Σ^f_𝒟_1,μ^f_𝒟_2,Σ^f_𝒟_2), where μ^f_𝒟_i and Σ^f_𝒟_i are the mean and covariance of the best-fit Gaussian over the feature distribution of dataset i, obtained by the feature extractor f(·). In our experiments, we study the effects of f(·) on the distance metric, with a focus on domain-specific specialized features. We denote the Fréchet distance computed over our DINO Faces feature space by FDD. § EXPERIMENTAL EVALUATION   §.§ Are our self-learned features sufficient?   We evaluate whether our self-supervised features extract sufficient information relevant to faces. We train MLPs and Head networks on top of ImageNet-trained Inception features (used by FID), DINO (I) features from the ImageNet-trained DINO, and DINO (F) features from the Faces-trained DINO. The MLPs and Heads are trained on the CelebA-HQ training set annotations to predict different binary classes (Blond, Young, Gender) based on input features. We show the results in Table <ref> on the CelebA-HQ test set and on an additional test set for gender from a fully independent source (Sec. <ref>). We note that accuracies are significantly high, indicating that the networks extract sufficient features to enable classification. 
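For readers who want to reproduce the distance computation of Sec. <ref> with an arbitrary feature extractor f(·), a minimal sketch is given below. It assumes the features have already been extracted into arrays with one row per image (e.g. the 2048-dimensional embeddings used here); the function and variable names are illustrative and are not taken from the authors' released code.

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats_1: np.ndarray, feats_2: np.ndarray) -> float:
    """Fréchet distance between Gaussians fitted to two sets of image features.

    feats_1, feats_2: arrays of shape (num_images, feature_dim), produced by any
    extractor f (Inception features give FID, the Faces-trained DINO gives FDD).
    """
    mu_1, mu_2 = feats_1.mean(axis=0), feats_2.mean(axis=0)
    sigma_1 = np.cov(feats_1, rowvar=False)
    sigma_2 = np.cov(feats_2, rowvar=False)

    diff = mu_1 - mu_2
    covmean, _ = linalg.sqrtm(sigma_1 @ sigma_2, disp=False)  # (Sigma_1 Sigma_2)^(1/2)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # discard small imaginary parts from numerical noise

    return float(diff @ diff + np.trace(sigma_1 + sigma_2 - 2.0 * covmean))

# Illustrative usage with stand-in features (in practice: embeddings of CelebA-HQ
# versus embeddings of a generated or perturbed image set).
rng = np.random.default_rng(0)
ref_feats = rng.normal(size=(5000, 64))
gen_feats = rng.normal(loc=0.1, size=(5000, 64))
print(frechet_distance(ref_feats, gen_feats))
```

The routine is agnostic to the extractor: only the feature matrices change between FID and FDD.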
The results with the self-supervised DINO are on-par with Inception results, even surpassing them when testing on the independent curated test set, rather than the test set of CelebA-HQ. We emphasize, however, that the results highlight that the features are sufficient but not that they are necessary, in other words, some could nonetheless be irrelevant. §.§ Benchmarking Fréchet distance results   We run benchmarking experiments for Fréchet distance computed over features from Inception (trained on ImageNet with supervision), SwAV (trained on ImageNet with self supervision), and DINO (trained on Faces with self supervision) networks. The distances are computed for 19 image sets (Fig. <ref>) with respect to CelebA-HQ images, for 5k samples. With DINO specialized to the facial domain with standardized faces, the distance is large when images are flipped vertically, while Inception and SwAV distances remain surprisingly small (smaller than the distance relative to FFHQ <cit.>, which also contains faces). For random erasing of small patches, the distance is the smallest for DINO, which can extract high-level facial features rather than only fine-granularity generalized ones, due to its specialization to faces. Meanwhile, Inception distance caused by random erasing is even larger than the Inception distance between Cars and CelebA-HQ. Lastly, we note the large distance for DINO on car images, which are completely out of domain. This is not the case with cats, where facial features remain correlated to human facial features and are aligned in the same way in preprocessing. For the remaining setups, the distances obtained by the different approaches remain, on average, closely tied. We also observe similar trends in Fig. <ref> when tracking the training of the SemanticStyleGAN <cit.> with Fréchet distance on Inception and DINO. We only note that FDD is larger and drops faster than FID, as it is more sensitive to the low-quality faces initially synthesized. We conduct a user study (with 10 images per class) to obtain ratings on how well an image corresponds to the CelebA-HQ distribution represented by randomly sampled sets of 9 images at a time (Table <ref>). While the variation in scores over classes such as male and female is interesting, we mostly note that FID and FDD (relative to CelebA-HQ) strongly correlated with the participants' answers (lower distance correlates with higher correspondence). §.§ Investigating photorealism correlation   We expand with an analysis of the connection between FID/FDD relative to CelebA-HQ and the photorealism of image distributions. We conduct a second user study to obtain opinion scores on photorealism based on 10 images per category. The results of FID and FDD (Table <ref>) are closely related on the different sets. For FFHQ and truncated StyleGAN2 <cit.> images, the distances match well with opinion scores, however, they diverge for untruncated StyleGAN2 and PGGAN <cit.>, indicating that participant opinions are strongly affected by visual artifacts while the distance metrics focus more on content distributions. This is even more observable with PGGAN FDD; as PGGAN is trained on CelebA-HQ, its synthetic-image distribution matches better with it and leads to a low FDD despite lower visual quality. This further supports the claim that the specialized FDD focuses on high-level abstract information. §.§ Deeper dive into feature space neighborhoods Finally, we narrow down to an image-level analysis of the feature spaces. 
We exploit local neighborhoods to analyze the feature-space landscapes. We select reference images and find their respective nearest neighbors in each of the two spaces. Sample results are shown in Fig. <ref>. We conduct a third user study where participants are asked to select which feature space induces neighbor images that are more similar to the reference (Table <ref>). The Inception space is by a large margin better for Stanford Cars images, as expected. For cats or random CelebA-HQ images, the Inception space is more in accordance with human perception. However, when asking which neighbor set contains people who are more similar to the reference person, the DINO space correlates closer to the user choices for images with accessories. We note, however, that these aggregated results hide part of the analysis. Depicted in Fig. <ref>, we observe that, indeed, Inception is excessively biased towards focusing on objects rather than faces. But for DINO, the lack of such bias did not guarantee the desired face similarity results, as seen in the bottom row. The specialization of DINO features on faces makes them untrained for other objects, which risk becoming similar to an adversarial attack. § CONCLUSION AND KEY TAKE-AWAYS   We analyze an open question on feature-space distance, particularly, the effects on Fréchet distance, and neighborhoods, of specializing the feature extractor to the facial domain. Our experiments and user studies support the following findings. (1) Specialists become better at abstraction. Our experiments highlight that our specialized feature extractor can learn abstract concepts pertaining to faces. The generalist focuses more on fine-granularity features that can be exploited across tasks, making it more sensitive to spatially localized loss of information and less sensitive to global changes like an upside-down face, as shown in Fig. <ref>. (2) Feature distance does not equate to photorealism. Fréchet distance measures statistics over dataset image features. This is affected both by photorealism and degradations, but also by the general content distribution across the dataset. When computing the Fréchet distance relative to a base dataset, it is important to use a high-quality one and to ensure that the base dataset contains a fair representation of desirable content. We emphasize that the vanilla distance is a holistic image-based distance rather than a face or identity distance. (3) Noticing can be easier than not noticing. While we can train specialists for features relevant to a specialized domain, this does not guarantee their ability to dismiss all irrelevant information. Facing novel content in their input can act as adversarial attacks perturbing the specialized network (Fig. <ref>). (4) The risk of smaller specialized datasets. Modern networks are large and this can lead to rich representations emerging even in randomly initialized ones. As the lottery ticket hypothesis <cit.> hints, multiple paths lead to the final representation, enough for coincidental features to appear. Training improves this representation making it more practical. Particularly, training over a massive dataset such as ImageNet constrains the behavior of the feature extractor across its many paths. This advantage can be lost when training a large specialist network on smaller domain-specific datasets, possibly leading to the weakness described in (3). Our findings fill a gap in the literature, highlighting the trade-offs between general and specialized feature extractors. 
One avenue for future research is hybrid training, preserving well-constrained extractors with low-granularity features and robustness, while learning abstract specialized features.
http://arxiv.org/abs/2406.18416v1
20240626151042
Energies of atoms are approximately quadratic in nuclear charge
[ "Simon León Krug", "O. Anatole von Lilienfeld" ]
physics.chem-ph
[ "physics.chem-ph" ]
Machine Learning Group, Technische Universität Berlin, 10587 Berlin, Germany. anatole.vonlilienfeld@utoronto.ca. Machine Learning Group, Technische Universität Berlin, 10587 Berlin, Germany; Berlin Institute for the Foundations of Learning and Data, 10587 Berlin, Germany; Chemical Physics Theory Group, Department of Chemistry, University of Toronto, St. George Campus, Toronto, ON, Canada; Department of Materials Science and Engineering, University of Toronto, St. George Campus, Toronto, ON, Canada; Vector Institute for Artificial Intelligence, Toronto, ON, Canada; Department of Physics, University of Toronto, St. George Campus, Toronto, ON, Canada; Acceleration Consortium, University of Toronto, Toronto, ON, Canada § ABSTRACT Accurate quantum mechanics based predictions of property trends are so important for materials design and discovery that even inexpensive approximate methods are valuable. We use the Alchemical Integral Transform (AIT) to study multi-electron atoms, and find energy differences between iso-electronic atoms to be approximately quadratic in their nuclear charges, Δ E ≈ -(1 + 2γ√(N_e-1)) Δ Z Z̅. γ ≈ 0.3766 ± 0.0020 Ha corresponds to a universal constant, and N_e, Δ Z, and Z̅ respectively to electron number, and nuclear charge difference and average. We compare the formula's predictive accuracy using experimental numbers and numerical results obtained via DFT for the entire periodic table up to Radon. A detailed discussion of the atomic Helium-series is included. Finally, we show the applicability of AIT by predicting trends between electron affinities from ionization energies with linear corrections, resulting in a Spearman's rank correlation coefficient of 0.931. O. Anatole von Lilienfeld July 1, 2024 ============================= § INTRODUCTION The electronic quantum many-body problem is without a doubt one of the outstanding challenges of materials design to date. More often than not, only numerical solutions are possible to obtain, but these are associated with complex and costly computations or, in case of modern machine learning (ML) solutions, subject to trillions of parameters. This offers little intuitive understanding even for systems with few electrons and/or model potentials. Naturally, it is desirable to predict the behavior of many atoms as in molecules and crystals, but even the multi-electron atom, i.e. the periodic table, requires expensive simulation for results below chemical accuracy. In a previous paper <cit.>, we derived and discussed a general version of the Alchemical Integral Transform (AIT); it allowed its user to recover the energy and electron density of a final system E_B from an iso-electronic initial system's electron density ρ_A and energy E_A, in n dimensions and for multi-electron systems, if the coordinates of the initial and final Hamiltonian could be expressed as one another by an affine transformation. Here, we apply AIT to real systems, i.e. the multi-electron atom. We introduce a simple and inexpensive formula for relative energies that even outperforms DFT computations. We assess the model's numerical performance for the He-atom series, and we investigate its applicability towards the prediction of electron affinities using ionization energies only. § METHODS The Alchemical Integral Transform (AIT) expresses energy and electron density of a system B using energy and electron density of some iso-electronic reference system A.
This is possible if both systems can be expressed as one another by an affine transformation A(λ) x + b(λ) of the coordinates of their Hamiltonians, without regard for any normalization <cit.>: E_B - E_A = ∫_ℝ^n dx ρ_A (x) 𝒦[Δ v](x) with difference in external potentials Δ v (x):= v_B(x) - v_A(x) and the kernel 𝒦[Δ v](x) := ∫_0^1 dλ Δ v ( A^-1(λ) (x - b(λ)) ) . But how to obtain the quantities A(λ) and b(λ)? Consider the hydrogen-like (HL) atom from Ref. krug_generalAIT as example: just the problem's statement, i.e. the (electronic) Hamiltonian, reads: Ĥ^ HL := - 1/2∇^2_x - Z_A/||x||_2 = Z_A^2 (- 1/2∇^2_x/Z_A^2 - 1/Z_A||x||_2) Simply rescaling of x→ (Z(λ)/Z_A) x does the trick of transforming the coordinates of the Hamiltonian at nuclear charge Z_A to a general one at Z(λ): Ĥ^ HL→ Z_A^2 (- 1/2∇^2_x/Z^2(λ) - 1/Z(λ)||x||_2) Our affine transformation is just a factor A(λ) = Z(λ)/Z_A, which results in the kernel 𝒦[Δ v](x) = -Z_B+Z_A/2||x||_2(1 + Z_B/Z_A) . See Ref. krug_generalAIT for further details. Consider now a monoatomic, multi-electron system with fictitious inter-electron potential (∝ distance^-2), Ĥ^ fic := ∑_i(- 1/2∇^2_x_i - Z_A/||x_i||_2 + 1/2∑_j≠ i1/||x_i - x_j||_2^2) = Z_A^2 ∑_i(- 1/2∇^2_x_i/Z_A^2 - 1/Z_A||x_i||_2 + 1/2∑_j≠ i1/Z_A^2 ||x_i - x_j||_2^2) This is rescalable by the same transformation as in the hydrogen-like atom and thus produces the same kernel. Unfortunately, for the real multi-electron atom, Ĥ^ atom := ∑_i(- 1/2∇^2_x_i - Z_A/||x_i||_2 + 1/2∑_j≠ i1/||x_i - x_j||_2) no scaling transformation (or any affine transformation for that matter) is available to produce a kernel as in Eq. <ref>. However, since both the systems with constant (or no) electron-electron interaction (Eq. <ref>), and inversely quadratic interaction (Eq. <ref>) produce the same kernel, we now assume the hydrogen-like kernel of Eq. <ref> to correspond to a fair approximation. We considered AIT for monoatomics previously in Ref. krug_DeltaE. There, however, we found a parametrization by trial and error. Although this produced reasonable results, it left us without any measure to assess its error and systematically improve upon it. In this paper, it is quite clear from comparison of the problem statement in Eq. <ref> to the Hamiltonian in Eq. <ref> that our error introduced by choosing the (approximate) transformation x→ Z(λ)/Z_A x scales with the ratio Z_B/Z_A because the strength of inter-electron repulsion is miss-scaled by Z(λ)/Z_A. Using this approximation with Eq. <ref>, we can now write the energy difference between iso-electronic atoms A and B approximately as: E_B - E_A ≈-Z_B^2 + Z_A^2/2Z_A∫_ℝ^3 dx ρ_A(x)/||x||_2_=: μ_A The integral is known as the ('alchemical') electrostatic potential at the nucleus, μ_A. Eq. <ref> looks very similar to Levy's formula for energy differences from averaged electron densities when applied to the case of iso-electronic atoms<cit.>. Here, however, we only rely on knowledge about ρ_A! Eq. <ref> allows estimates of iso-electronic energy trends in atomic ions, and thus tackles the missing link for navigating atoms of any nuclear charge Z and electron number N_e<cit.>. The iso-protonic analogon, i.e. electron affinities (EA) and ionization energies (IE) for fixed nuclear charges, are experimentally measured quantities in physical chemistry. Combining both changes into thermodynamic cycles, one can arrange all elements and their possible ions in a scheme as given in Fig. <ref>. 
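Before turning to the results, the two energy estimates used in this work can be stated compactly in code. The sketch below is ours and only illustrative: the first function is the Methods-section estimate in terms of the alchemical electrostatic potential μ_A, the second anticipates the calibrated closed form quoted in the abstract and derived in the next section (γ ≈ 0.3766 Ha); the function names and the printed checks are our own choices, not part of the original work.

```python
import math

GAMMA = 0.3766  # Ha; universal constant fitted to experimental data up to Radon

def delta_E_from_mu(Z_A: float, Z_B: float, mu_A: float) -> float:
    """E_B - E_A ≈ (Z_A^2 - Z_B^2) * mu_A / (2 Z_A) for iso-electronic atoms A and B,
    with mu_A the (alchemical) electrostatic potential at the nucleus of A (atomic units)."""
    return (Z_A**2 - Z_B**2) * mu_A / (2.0 * Z_A)

def delta_E_calibrated(Z_B: float, Z_C: float, N_e: int, gamma: float = GAMMA) -> float:
    """Delta E = E_B - E_C ≈ -(1 + 2*gamma*sqrt(N_e - 1)) * Delta Z * Z_bar, in Ha."""
    dZ, Z_bar = Z_B - Z_C, 0.5 * (Z_B + Z_C)
    return -(1.0 + 2.0 * gamma * math.sqrt(N_e - 1)) * dZ * Z_bar

# Hydrogen-like sanity check: for the 1s ground state of H (Z_A = 1), mu_A = 1 in atomic
# units, so E(He+) - E(H) ≈ (1 - 4)/2 = -1.5 Ha, the exact non-relativistic value.
print(delta_E_from_mu(Z_A=1, Z_B=2, mu_A=1.0))

# He-like series: energy difference between Li+ (Z_B = 3) and He (Z_C = 2) at N_e = 2.
print(delta_E_calibrated(Z_B=3, Z_C=2, N_e=2))  # approximately -4.38 Ha
```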
§ RESULTS AND DISCUSSION In the methods section, we derived a formula for the approximate energy difference between two iso-electronic atoms A and B: E_B - E_A ≈(Z_A^2 - Z_B^2 ) μ_A/2Z_A , where μ_A denotes the ('alchemical') electrostatic potential at the nucleus of A <cit.>. Considering the relative energy difference of A to any third iso-electronic atom C, the E_A contribution cancels and the remaining energy difference between atom B and C reads: Δ E := E_B - E_C ≈ (Z_C^2 -Z_B^2) μ_A/2Z_A Note that for Z_C - Z_A = Z_A - Z_B, this equation recovers the energy difference exactly up to 4th order in an alchemical Taylor expansion <cit.>. However, since system A is independent of Z_B, Z_C, the ratio μ/Z must be a constant — for any fixed electron number N_e. This observation is consistent with the theoretical and numerical findings by Levy et al. that the integral in μ "appears to be remarkably close to a linear function of Z" <cit.>. This linearity is a necessity for Eq. <ref> to be (approximately) independent of Z_A[We encounter no discontinuities as discussed in detail in their paper <cit.> because we force ρ_A to be iso-electronic. Since electrons cannot leave system A here, even if energetically favorable, all functional relations are smooth]. Eq. <ref> gives not just access to the total energy between two atoms; by virtue of the virial theorem <cit.> for single atoms with electrons in the Coulomb potential, <T> = -<V>/2, one immediately finds both the kinetic and potential energy contributions as well. §.§ Comparison to experimental data for Hydrogen to Radon Varying only Z_B and fixing Z_C to correspond to the charge-neutral atom (i.e. N_e = Z_C), Fig. <ref> displays the experimental Δ E values<cit.> approximately quadratic in Z_B. In the case of just one electron, i.e. hydrogen-like series, we must recover c = μ_C/2Z_C = 1/2. With this constraint, we find that least square regression yields good agreement for the following form, c(N_e) = 1/2 + γ√(N_e - 1), for N_e ≥ 1 (see inset of Fig. <ref>). Here, γ is assumed to be one universal parameter for all atoms and ions, independent of electron number. Combining Eqs. <ref> and <ref>, and using experimental energies for all neutral atoms and ions with no more than 10 electrons and protons, we fit γ = 0.3481 ± 0.0086 Ha to reproduce Δ E in terms of number of electrons N_e. Fitting to the experimental data for all atoms up to Radon yields γ = 0.3766 ± 0.0020 Ha. Corresponding figures with all elements up to Radon can be found in the Supplemental Material (Fig. <ref>). Care has been taken, as AIT only applies for a given electronic state, i.e. if the electronic states of initial and final systems match. A version without this constraint, together with an analogue figure with data from DFT instead of experiment, can be found in the Supplemental Material as well (Figs. <ref>,<ref>,<ref>,<ref>). Throughout all data, we find correlation coefficients of R^2 > 0.993. Suggesting a functional form of c, i.e. Eq. <ref>, we assert generality but loose accuracy. Any fit of γ imposes a functional form on the electrostatic potential μ but without theoretical justification or inclusion of relativistic effects. If instead one fits c for each electron number anew, one evades this at the cost of generality. Originally, we expected Eq. 
<ref> to be proportional to N_e^7/3 as this is the functional relation of the total binding energy of the neutral atom in the Thomas-Fermi model<cit.> (which, unfortunately, holds exactly only in the limit of infinite nuclear charge). However, an exponent of 1/2 yields a better fit to the experimental data. Reinsertion into Eq. <ref> provides a general approximate formula for energy differences between atoms in any fixed iso-electronic series, Δ E ≈( 1/2 + γ√(N_e - 1)) (Z_C^2 - Z_B^2) = -( 1 + 2γ√(N_e - 1)) Δ Z Z with difference in nuclear charge Δ Z := Z_B - Z_C and their mean Z := (Z_B+ Z_C)/2. §.§ The He-like series In general, one may consider any one iso-electronic series in particular, then pick any trust-worthy experimental energy difference inside this series to calibrate c (without constraint to the functional form of Eq. <ref>), and subsequently apply Eq. <ref> to obtain all the remaining values. To assess our formula's predictive power, we have chosen the He-like atoms for their historical significance <cit.>, and for the availability of experimental, theoretical (perturbation theory), and computational data. Using just the energy difference between Li^+ and He<cit.> we determine c to correspond to 0.8752 Ha with the experimental uncertainty being less than 10^-8. Predictions and unsigned errors for the remainder of the entire He-like series are shown in Fig. <ref>, along with measurements and results from other methods. Note how DFT (using the exchange-correlation functional <cit.> and basis set <cit.>), is outperformed by our Bierdeckel estimate. We also compare to Hylleraas' historic perturbative 1/Z-expansion using five terms <cit.>, as well as to a subsequent contribution relying on over 500 terms <cit.>, both of which still fall short of chemical accuracy w.r.t. the experimental values <cit.> for all ions but Li^+ and Be^2+. §.§ Prediction of electron affinities from ionization energies Electron affinities can be difficult to measure. The above established connection between atoms of one iso-electronic series suggest that the energies of neutral atoms Z,Z+1 in conjunction with AIT (i.e. parameters c and thus ionization energies) might suffice to predict the (first) electron affinities of atoms Z. Subtracting Eq. <ref> for Z,Z+1 and N_e,N_e+1, we find: EA(Z) := E(Z,N_e+1) - E(Z,N_e) = (2Z+1)[c(N_e+1) - c(N_e)] + E(Z+1,N_e+1) - E(Z+1,N_e) For this, AIT requires initial and final state to be identical which renders direct comparison to experiments problematic, as in experiment, states often change between atoms Z and Z+1 (or iso-electronic ions) and consequently, AIT is not applicable (cf. Fig. <ref> in the Supplemental Material). However, the methods' merit can be investigated using quantum chemistry calculations (each ion's spin fixed to N_e 2, cf. Fig. <ref>). Note that because of the clamped electronic spin configuration, the electron affinities obtained are magnitudes larger than their experimental counterparts. For the various iso-electronic series computations we have used the density functional approximation <cit.> and basis sets <cit.> if N_e ∈{ 1, …, 18, 21, … 36 }, <cit.> if N_e ∈{ 19,20,37,38,55,56 }, <cit.> if N_e ∈{ 39, …, 54, 72, … 86 } and <cit.> if N_e ∈{ 57, … 71 }. The absolute error between electron affinities via DFT and AIT (inset of Fig. <ref>) shows a systematic dependency to the valence shell of the atom considered, as elements within side groups and the lanthanides display similar deviations. 
This allows the use of AIT as a first approximation, to be corrected by a linear least square fit to the experimental data for each valence shell (cf. Tab. <ref> for fitting parameters). From this, we do not just gain accuracy in predicting electron affinities directly (inset of Fig. <ref>), but especially in trends between them: Spearman's rank coefficient increases from -0.033 (before correction) to 0.931 (after). For experimental numbers AIT expectantly does not display correct trends, as quantum numbers between initial and final state often change (cf. Fig. <ref> in the Supplemental Material). § CONCLUSION We motivated and derived a kernel of AIT for monoatomic systems. This led to an approximate proportionality of the relative energy of any two iso-electronic atoms being purely quadratic in their nuclear charges, together with a general formula for the energy difference of atoms in Eq. <ref>. We numerically tested this approach with experimental data for ionization energies and electron affinities of the entire periodic table, together with additional tests using DFT data. The He-like energies were treated in detail. The utility of Eq. <ref> extends beyond single atoms, as many methods applied in molecules, e.g. for chemical reactions, bonding energies and distances, are intimately related to the energy of their constituent atoms as recently discussed from the alchemical viewpoint <cit.>. Eq. <ref> gives access to electron affinities and ionization energies by avoiding multiple quantum calculations, and consequently to estimates of dissociation energy and electronegativity (after Mulliken) <cit.>. Future work will deal with kernels beyond single atoms, e.g. in constitutional isomers or as approximate treatments of molecules. Both could be rendered arbitrarily accurate when used as baseline models for Δ-ML, multi-fidelity ML, or within transfer learning. Beyond these direct applications, we note that since the advent of the periodic table, arranging systems by their nuclear charge proved sensible<cit.>. Not just from the point of quantum alchemy, but computational chemistry as well, classifying systems by their electron number might be a fruitful concept, as proven by the content of this study: neither did we consider new data, nor new computational methods; the alchemical perspective alone revealed new and simple relationships. § DATA AND CODE AVAILABILITY The code that produces the figures and findings of this study, in specific the scripts for the generation of DFT data, plotting and fitting, are openly available on Zenodo under <zenodo.org/records/12547814>. The ionization energies were obtained from the National Institute of Standards and Technology <cit.>, the electron affinities were taken from the review by Rienstra-Kiracofe et al. <cit.>. Both datasets are also accessible on Zenodo. § SOFTWARE Software for the purpose of data generation (e.g. quantum chemistry software) are provided by the -packages <cit.>, <cit.>, <cit.> and <cit.>. Visualizations were created using <cit.>. § ACKNOWLEDGEMENTS We acknowledge discussions with Kieron Burke, Roi Baer, Dirk Andrae, Florian Bley and Danish Khan. We acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), [funding reference number RGPIN-2023-04853]. Cette recherche a été financée par le Conseil de recherches en sciences naturelles et en génie du Canada (CRSNG), [numéro de référence RGPIN-2023-04853]. 
This research was undertaken thanks in part to funding provided to the University of Toronto's Acceleration Consortium from the Canada First Research Excellence Fund, grant number: CFREF-2022-00042. O.A.v.L. has received support as the Ed Clark Chair of Advanced Materials and as a Canada CIFAR AI Chair. O.A.v.L. has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No. 772834). § CONFLICT OF INTEREST The authors have no conflicts to disclose. All authors read and approved the final manuscript. unsrt — Supplemental Information — Simon León Krug,^1 and O. Anatole von Lilienfeld^1,2,3,4,5,6,7 ^1)Machine Learning Group, Technische Universität Berlin, 10587 Berlin, Germany ^2)Berlin Institute for the Foundations of Learning and Data, 10587 Berlin, Germany ^3)Chemical Physics Theory Group, Department of Chemistry, University of Toronto, St. George Campus, Toronto, ON, Canada ^4)Department of Materials Science and Engineering, University of Toronto, St. George Campus, Toronto, ON, Canada ^5)Vector Institute for Artificial Intelligence, Toronto, ON, Canada ^6)Department of Physics, University of Toronto, St. George Campus, Toronto, ON, Canada ^7)Acceleration Consortium, University of Toronto, Toronto, ON, Canada (Dated: July 1, 2024) < g r a p h i c s > Experimental energy differences Δ E vs. the final system's nuclear charge Z_B for different iso-electronic atoms Z_B, Z_C∈{ 1, …, 86 } if initial and final atom's electronic configuration match, electron number N_e ∈{ Z_B+2, … Z_B-4 } and constraint Z_C = N_e. Iso-electronic fits are solid, colored lines. Term symbols of the neutral reference are given along the fitted lines. Inset: Fitted values for the parameter c vs. N_e for each iso-electronic series with fit function. < g r a p h i c s > Experimental energy differences Δ E vs. the final system's nuclear charge Z_B for different iso-electronic atoms Z_B, Z_C∈{ 1, …, 86 }, without the constraint of matching initial and final atom's electronic configuration, electron number N_e ∈{ Z_B+2, … Z_B-4 } and Z_C = N_e. Iso-electronic fits are solid, colored lines. Term symbols of the neutral reference are given along their respective fitted lines. Inset: Fitted values for the parameter c vs. N_e for each iso-electronic series. < g r a p h i c s > Experimental energy differences Δ E vs. the electron number N_e for different atoms Z_B, Z_C∈{ 1, …, 86 }, without the constraint of matching initial and final atom's electronic configuration and Z_C = N_e. Iso-atomic fits are solid, colored lines. The fit is identical to the one in Fig. <ref>. Even more extreme cations than +4 are drawn but never used for the fit and only serve orientation and comparison. < g r a p h i c s > Energy differences Δ E from Density Functional Theory vs. the final system's nuclear charge Z_B for different iso-electronic atoms Z_B, Z_C∈{ 1, …, 86 }, electron number N_e ∈{ Z_B+2, … Z_B-4 } and Z_C = N_e. Iso-electronic fits are solid, colored lines. The DFT computations were performed using the functional <cit.> and basis sets <cit.> if N_e ∈{ 1, …, 18, 21, … 36 }, <cit.> if N_e ∈{ 19,20,37,38,55,56 }, <cit.> if N_e ∈{ 39, …, 54, 72, … 86 } and <cit.> if N_e ∈{ 57, … 71 }. For full comparability between the ions, each ion's spin was fixed to N_e 2. ] ] ]
http://arxiv.org/abs/2406.17865v1
20240625181338
Equivalence of dynamics of disordered quantum ensembles and semi-infinite lattices
[ "Hallmann Óskar Gestsson", "Charlie Nation", "Alexandra Olaya-Castro" ]
quant-ph
[ "quant-ph", "cond-mat.dis-nn" ]
hallmann.gestsson.20@ucl.ac.uk c.nation@ucl.ac.uk a.olaya@ucl.ac.uk Department of Physics and Astronomy, University College London, London WC1E 6BT, United Kingdom § ABSTRACT We develop a formalism for mapping the exact dynamics of an ensemble of disordered quantum systems onto the dynamics of a single particle propagating along a semi-infinite lattice, with parameters determined by the probability distribution of disorder realizations of the original heterogeneous quantum ensemble. This mapping provides a geometric interpretation on the loss of coherence when averaging over the ensemble and allows computation of the exact dynamics of the entire disordered ensemble in a single simulation. Alternatively, by exploiting the reverse map, one can obtain lattice dynamics by averaging over realisations of disorder. The potential of this equivalence is showcased with examples of the map in both directions: obtaining dephasing of a qubit via mapping to a lattice model, and solving a simple lattice model via taking an average over realizations of disorder of a unit cell. Equivalence of dynamics of disordered quantum ensembles and semi-infinite lattices Alexandra Olaya-Castro July 1, 2024 ================================================================================== Introduction.―A disordered quantum ensemble is composed of quantum systems whose properties vary randomly from one member of the ensemble to another according to some distribution. Predicting observables of such ensembles then necessitates the introduction of stochastic elements into the Hamiltonian description of a quantum system. Examples of disordered ensembles are found across a wide range of fundamental and applied scenarios such as condensed matter physics <cit.>, quantum gravity and black hole physics <cit.>, quantum optics <cit.>, statistical physics <cit.>, quantum chemistry <cit.> and quantum biology <cit.>. When one considers observable properties of an ensemble of disordered quantum systems there will be a noticeable dephasing that is incurred, even if every individual system follows a closed system (unitary) quantum dynamics: information regarding coherences is lost in the averaging that is inextricably carried out when an observable is measured. This is a manifestly classical mechanism that is fundamentally distinct from those inducing incoherent dynamics for a quantum system interacting with an environment. Owing to the broad relevance of such phenomena in physical systems, a central issue of interest thus surrounds the description of the ensemble averaged dynamics of disordered quantum systems, and how this may be related to physically relevant processes. Recent works have established an effective master equation description of the ensemble averaged state to analyze its dynamical evolution <cit.>, indicating that disordered quantum systems may be analyzed by applying the frameworks developed within the theory of open quantum systems <cit.>. A unitary description of disordered quantum ensembles has been considered for discrete probability distributions <cit.>. Inspired by the chain-mapping technique <cit.> that has been applied to open quantum systems <cit.>, we consider an alternative and versatile approach where the Hamiltonian of the entire ensemble continuum is transformed using a unitary change of basis that leverages the properties of orthogonal polynomials <cit.>. 
In doing so, we show that the dynamical description of the disordered ensemble is unitarily equivalent to a single semi-infinite lattice <cit.> with couplings that are local for physically relevant models of disorder. Our work then allows gaining new physical insights on the mechanisms on the loss of coherence while at the same time establishing new conceptual links between two apparently dissimilar physical contexts: disordered ensembles and lattice dynamics. This lattice representation of the dynamics gives a geometric intuition to the mechanism by which information on coherence is lost when an ensemble average is carried out. Yielding exact dynamics of the entire continuum of realizations, which may then be recovered by performing a partial trace over the lattice degree of freedom, bypassing reliance on numerical quadrature rules and sampling from the space of realizations. In practical terms, the advantage of the proposed method stems from the simplicity of the Hamiltonian representation and the accompanying initial wavefunction in the new basis. It is furthermore straightforward to consider initial states of the ensemble that are a function of the disorder, e.g. eigenstates of the system. This work is arranged as follows. First, we introduce the pure state formalism for disordered quantum ensembles, and the orthogonal polynomial mapping procedure, showing the general equivalence between the ensemble average dynamics and semi-infinite lattice models. To exemplify our approach, we then discuss the case of linear disorder, and practical aspects of the mapping for numerical application. We then provide examples of the exact equivalence in realistic systems. Additional details are provided in the supplemental material (SM). Disordered Quantum Ensembles.―The ensemble of disordered quantum systems we consider is composed of every possible realization of the disorder, which we assume are not interacting with one another, meaning that each system belonging to the ensemble evolves independent of all other systems in the ensemble. Unitary dynamics of a specific realization of disorder is determined by a Hamiltonian of the form Ĥ_λ = Ĥ_0 + V̂_λ, where Ĥ_0 and V̂_λ are components that are independent and dependent on the disorder, respectively. The multi-index λ consists of l unit-less independent random variables, such that we have λ = (λ_1, λ_2,…,λ_l) and a factorizing joint probability distribution function p(λ) = ∏_i=1^l p^(i)(λ_i). The state space of each realization is spanned by a basis {|n, λ⟩}_n=1^N which we use to expand Eq. (<ref>) as Ĥ_λ = ∑_n,m=1^N(⟨n|Ĥ_0| m|+⟩ f_n,m(λ))|n, λ⟩⟨m, λ|, where the ⟨n|Ĥ_0| m|$⟩ elements are constant for allλso it is dropped as a label, whilst thef_n,m(λ) = ⟨n, λ|V̂_λ|m, λ|$⟩ elements vary as a given function of the disorder. We furthermore expand the initial state of each realization with respect to this basis as |ψ_0, λ⟩ = ∑_n = 1^N c_n(λ)|n, λ⟩, with c_n(λ) = ⟨n, λ|ψ_0, λ|$⟩. The Hamiltonian of the entire ensemble can be stated in terms of an integral over all possible realizations as Ĥ_Ens = ∫dλĤ_λ, with an initial state of the ensemble of the form |Ψ_0⟩ = ∫dλ√(p(λ))|ψ_0, λ⟩. Note that the distribution of disorder informs the initial state of the ensemble (different distributions of disorder result in different initial states) and that|Ψ_0⟩is normalized as⟨Ψ_0|Ψ_0|=⟩ ∫dλp(λ) = 1. 
This is a proper physical choice for the initial state of the ensemble as the population density of realizations is given byp(λ), such that the probability amplitude may be taken as√(p(λ)). WithĤ_Ensand|Ψ_0⟩at hand we are now in position to determine exact dynamics of the entire ensemble. Recovering ensemble averaged dynamics.―We now express disorder averaged operators as the partial trace over the disorder labelλ, O(t) = ∫dλ⟨λ| O(t)|λ|,⟩ whereO(t)is a generic operator onto the Hilbert space of the entire ensemble, andO(t)is its corresponding disorder average. In particular we have that for average density matrix dynamics with its initial state to be of the form given in Eq. (<ref>) we have ρ(t) = ∫dλ⟨λ| e^-iĤ_Ens|Ψ_0|⟨%s|%s⟩⟩Ψ_0| e^iĤ_Ens|λ = ∫dλ p(λ) ⟨λ| e^-iĤ_λ t|ψ_0, λ|⟨%s|%s⟩⟩ψ_0, λ| e^iĤ_λ t|λ, where we have made use of the fact that realizations are mutually independent such that⟨λ|e^-iĤ_Ens|ψ_0,λ^'|=⟩ ⟨λ|e^-iĤ_λt|ψ_0, λ|δ⟩(λ- λ^'). In the special case where each initial state is independent ofλwe can further simplify the expression forρ(t)by making use of⟨λ|e^-iĤ_λt|ψ_0, λ|=⟩ e^-iĤ_λt|ψ_0⟩, yielding ρ(t) = ∫dλ p(λ) e^-iĤ_λ t|ψ_0⟩⟨ψ_0|e^iĤ_λ t, such that we are able to recover from our pure state formalism the average density matrix dynamics considered in Refs. <cit.>. Mapping ensemble dynamics to a semi-infinite lattice.―The typical strategy for calculating average dynamics of a disordered quantum ensemble is to sample the space of realizations, either randomly or according to numerical quadratures, which can be parallelized in a straightforward manner asĤ_Ensis block diagonal inλ. We consider a different approach and perform a change of basis using a family of orthonormal polynomials that transforms the integral representation of Eq. (<ref>) into a discrete semi-infinite lattice with nearest-neighbour hopping terms. This procedure is equivalent to a Lanczos tridiagonalization of the ensemble continuum Hamiltonian when the disorder parameter enters as a linear term into the Hamiltonian <cit.>. Note this approach is also used in the study of operator complexity growth <cit.>. We now introduce the set of polynomials,{ϕ_k(λ)}_k=0^∞, that are mutually orthonormal with respect to the measurep(λ)dλ(see SM for a brief introduction to orthogonal polynomials, or e.g. Ref. <cit.>) and define a new discrete basis, |n, λ⟩ = ∑_k=0^∞√(p(λ))ϕ_k(λ)|n, k⟩. The disorder has been assumed to consist oflindependent random variables such that we have the factorizationp(λ) = ∏_i=1^lp^(i)(λ_i), which results in a factorization of the polynomials asϕ_K(λ) = ∏_i=1^lϕ_k_i(λ_i), where we have introduced a discrete multi-indexK = (k_1, k_2, …, k_l). The disorder-independent component of the ensemble Hamiltonian in the discrete basis will reduce as ∫dλ|n, λ⟩⟨m, λ| = ∑_K|n, K⟩⟨m, K|. As for the disorder-dependent terms we have ∫dλ f_n,m(λ)|n, λ⟩⟨m, λ| = ∑_K,K^' f_n,m^(K,K^')|n, K⟩⟨m, K^'|, wheref_n,m^(K,K^') = ∫dλp(λ)f_n,m(λ)ϕ_K(λ)ϕ_K^'(λ), such that the Hamiltonian of the ensemble in the lattice basis is written as Ĥ_Ens = ∑_K,K^'(Ĥ_0,Kδ_K,K^' + ∑_n,m=1^Nf_n,m^(K,K^')|n, K⟩⟨m, K^'|), withĤ_0,K = ∑_n,m=1^N⟨n|Ĥ_0|m||%s⟩⟩n, K⟨m, K|andδ_K,K^' = ∏_i = 1^lδ_k_i,k^'_i. The SM provides additional details on how one may arrive at Eq. (<ref>). The disorder independent components have been projected onto each node of the lattice, whilst disorder dependent terms result in inter- and intra-node coupling terms. 
Therefore the dynamics of the ensemble of Hamiltonians is unitarily equivalent to a single semi-infinite lattice whose dimension is equal to the number of disorder parametersl. We may therefore bypass numerical quadrature techniques that yield approximate solutions to the ensemble dynamics of Eq. (<ref>) and whose accuracy generally relies on the number of points sampled on the integral interval, and directly compute the exact dynamics of the ensemble via straightforward simulation of the unitary dynamics corresponding to Eq. (<ref>). Utility of this change of basis will however rely on both the initial state of the ensemble and the form of thef_n,m(λ)functions. Note that a disordered Hamiltonian with many random variables will generally result in a high dimensional lattice which can become numerically intractable, such that randomly sampling the space of realizations would be the practical numerical approach for computing average dynamics in such cases. We are able to effectively simulate the exact ensemble dynamics corresponding to low dimensional lattices by utilizing the sparsity of the Hamiltonian matrix representation. The initial state of the ensemble can be expanded in terms of the lattice basis states as |Ψ_0⟩ = ∑_n = 1^N ∑_K d_n,K|n, K⟩, withd_n,K = ∫dλp(λ)ϕ_K(λ)c_n(λ)that also serve as expansion coefficients forc_n(λ) = ∑_K d_n,K ϕ_K(λ). A special case of the initial state of the ensemble is when the wavefunction of each realization of disorder does not change as a function of the disorder, i.e.c_n(λ) = c_nfor allλ. We then simply havec_n(λ) = c_n ϕ_0(λ)which results in Eq. (<ref>) reducing to a single term, |Ψ_0⟩ = ∑_n = 1^N c_n |n, 0⟩. The initial state of the ensemble is found to be entirely localized to the single node at the origin of the lattice due to the distributions of the disorder and initial states being identical in this special case, which yields a simple representation of the initial ensemble wavefunction in the lattice basis. This fact motivates a straightforward truncation scheme to the numerical simulation of the exact ensemble dynamics by introducing a cutoff to the lattice at a distanceDfrom the origin such that the ensemble dynamics up to a timet < MDis accurately simulated, whereMis some positive constant determined by the specifics of the disorder model. Another form of the initial state of general interest is to consider an ensemble of eigenstates ofĤ_λ, e.g. an ensemble of energy ground states. As each realization is found in a stationary state this case reduces to a case of spectral disorder where the ensemble distribution is determined by the ground state energy distributionp(E_G(λ)), rather thanp(λ). The dephasing due to the ensemble average can then be exactly simulated by performing the change of basis, Eq. (<ref>), with respect top(E_G(λ)). Linear disorder.―A ubiquitous functional form of the disorder component is a static linear functionf_n,m(λ) = ∑_i=1^l c_n,m^(i)λ_i, wherec_n,m^(i)are arbitrary coefficients that depend on the specifics of the model for disorder <cit.>. For this case we find the ensemble dynamics to map onto a lattice with nearest-neighbour interaction terms only. The expression for the expansion coefficientsf_n,m^(K,K^')will simplify considerably and be written in terms of polynomial recurrence coefficients as f_n,m^(K,K^') = ∑_i=1^l c_n,m^(i)(α_k_iδ_K,K^' + √(β_k_i+1)δ_K,K^'±1_i), where1_idenotes a multi-index which has a single non-zero entry of unity as itsi-th component (see SM for derivation of Eq. (<ref>)). 
The interaction of thel-lattice Hamiltonian is then reduced to nearest-neighbours and we obtain Ĥ_Ens = ∑_K(Ĥ_0,K + ∑_n,m=1^N∑_i=1^l c_n,m^(i)α_k_i|n, K⟩⟨m, K|) + ∑_n,m=1^N∑_i=1^l c_n,m^(i)√(β_k_i+1)|n, K⟩⟨m, K+1_i| + h.c. It is by virtue of the three-term recurrence relation that every set of mutually orthogonal polynomials satisfy that we may writef_n,m^(K,K^')in terms of their recurrence coefficients (see SM for more details regarding the recurrence relation). Disordered qubit ensemble dephasing.―Let us consider an ensemble of non-interacting qubits with randomly sampled excited state energies over a probability distributionp(λ), such that each realization is described by a Hamiltonian of the form Ĥ_λ = E_0|0, λ⟩⟨0, λ| + (E_1 + λ)|1, λ⟩⟨1, λ|, whereE_0andE_1are the central energies of the ground and excited state, respectively, andλis the static disorder. The Hamiltonian of the ensemble is then mapped onto a nearest-neighbour coupled chain form via Eq. (<ref>), Ĥ_Ens = ∑_k=0^∞(E_0|0, k⟩⟨0, k| + (E_1 + α_k)|1, k⟩⟨1, k|) + ∑_k=0^∞√(β_k+1)|1, k⟩⟨1, k+1| + h.c. Figure <ref> demonstrates the dephasing dynamics for an initial state where each realization is in the superposition|ψ_0, λ⟩ = (|0, λ⟩ + |1, λ⟩)/√(2)such that the initial wavefunction of the ensemble may be represented in the lattice basis as|Ψ_0⟩ = (|0, 0⟩ + |1, 0⟩)/√(2). Dephasing is shown for four different probability distributions: Gaussian, Cauchy, Semicircle, and Uniform. Notice that the semicircle and uniform distributions cases exhibit coherence revivals, which is a consequence of the fact that they have finite support <cit.>. For this particular case it has been found that the dephasing dynamics is characterized completely by the characteristic function of the random variable <cit.> and this can be seen by noting that each realization will have its coherence to be proportional toe^iλt, and that the expected value ofe^iλtis the definition of the characteristic function <cit.>. We can determine the exact error of our numerical approach for this example as analytical solutions are known. We find that the error is on the order of machine precision where we have considered the Gaussian, semicircle, and uniform distributions, confirming that we are recovering exact dynamics. There is however a finite error for the Cauchy distribution stemming from the energy cutoff we introduced in order to obtain well defined recurrence coefficients. See the SM for details on the parameters used and the closed form solutions we compare to. We also demonstrate in the SM how one may recover the closed form solution of the ensemble averaged dynamics by tracing over the lattice degree of freedom. Disorder averaged dimer dynamics.―Let us consider an ensemble of non-interacting dimers whose site energies are sampled from independent identical Gaussian distributions, such that each realization is described by a Hamiltonian of the form Ĥ_λ_1,λ_2 = (E_1 + λ_1)|1, λ_1,λ_2⟩⟨1, λ_1,λ_2| + (E_2 + λ_2)|2, λ_1,λ_2⟩⟨2, λ_1,λ_2| + V(|1, λ_1,λ_2⟩⟨2, λ_1,λ_2| + h.c.), whereE_1andE_2are the two central energies of the site states,Vis the coupling, andλ_1,λ_2are the static disorder. The parameters we assume for this particular dimer model areE_1 = 12325 cm^-1,E_2 = 12025 cm^-1,V = 273 cm^-1, so as to simulate a dimerized pair of chromophores <cit.>. The standard deviation we have chosen isσ= 200 cm^-1. The change of basis we propose will now expand the ensemble into a two dimensional semi-infinite lattice. 
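A sketch of this construction follows. It assumes zero-mean Gaussian site disorder, so that the monic recurrence coefficients are α_k = 0 and β_k = kσ² (probabilists' Hermite polynomials) and the hopping amplitudes are σ√(k+1); the truncation depth D, the time window, the value of ħ in cm^-1 fs, and all variable names are our illustrative choices rather than the authors' implementation, and D has to be increased until the dynamics are converged over the window of interest.

```python
import numpy as np

E1, E2, V, sigma = 12325.0, 12025.0, 273.0, 200.0  # cm^-1, dimer parameters from the text
HBAR = 5308.8                                      # hbar in cm^-1 * fs
D = 32                                             # lattice truncation per disorder coordinate

def index(n, k1, k2):
    """Flatten (electronic site n in {0, 1}, lattice node (k1, k2)) to one basis index."""
    return (n * D + k1) * D + k2

dim = 2 * D * D
H = np.zeros((dim, dim))
for k1 in range(D):
    for k2 in range(D):
        i1, i2 = index(0, k1, k2), index(1, k1, k2)
        H[i1, i1], H[i2, i2] = E1, E2        # alpha_k = 0 for zero-mean Gaussian disorder
        H[i1, i2] = H[i2, i1] = V            # electronic coupling acts within each node
        if k1 + 1 < D:                       # site-1 disorder: hop along k1, sigma*sqrt(k1+1)
            j = index(0, k1 + 1, k2)
            H[i1, j] = H[j, i1] = sigma * np.sqrt(k1 + 1.0)
        if k2 + 1 < D:                       # site-2 disorder: hop along k2, sigma*sqrt(k2+1)
            j = index(1, k1, k2 + 1)
            H[i2, j] = H[j, i2] = sigma * np.sqrt(k2 + 1.0)

psi0 = np.zeros(dim)
psi0[index(0, 0, 0)] = 1.0                   # every realization starts on site 1 -> lattice origin

evals, evecs = np.linalg.eigh(H)
c0 = evecs.T @ psi0
for t in np.linspace(0.0, 100.0, 6):         # time in fs
    psi_t = evecs @ (np.exp(-1j * evals * t / HBAR) * c0)
    amp = psi_t.reshape(2, D * D)            # row n holds all lattice amplitudes of site n
    rho_bar = amp @ amp.conj().T             # 2x2 averaged density matrix: partial trace over the lattice
    print(f"t = {t:5.1f} fs  P1 = {rho_bar[0, 0].real:.3f}  P2 = {rho_bar[1, 1].real:.3f}")
```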
Performing the trace over the lattice degrees of freedom will then yield the disorder averaged density matrix in terms of the sites states, whose populations we have plotted in Fig. <ref>. We assume that the initial state of the ensemble is such that every instance is localized to site 1. Every instance of the system evolves in time according to the Schrödinger equation, with the population oscillating between sites in a unitary manner. This is not the case for the disordered ensemble average of the dimer as Fig. <ref> demonstrates. Here we observe an initial relaxation of the average population dynamics, which is however not a complete relaxation as the average dynamics eventually reach a new oscillating stationary state. Wigner semicircle distribution and constant couplings.―In the above text we have established an equivalency between disordered quantum ensembles and semi-infinite lattices. When we transform from theλbasis and into thekbasis, we are effectively leveraging the spectral information we have at hand (p(λ)) in order to compute the ensemble averaged dynamics. Fully utilizing the duality that we have established, one can go from a Hamiltonian of a semi-infinite lattice with couplings that can be interpreted as recurrence coefficients and (block-) diagonalize the Hamiltonian. For example, the Wigner semicircle distribution gives rise to recurrence coefficients that are constant and can therefore be used to generate semi-infinite lattices that are translationally invariant for a given unit cell. The corresponding set of orthogonal polynomials will be the Chebyshev polynomials of the second kind,U_k(λ), which can be made equivalent to a sine wave basis expansion with a change of variables <cit.>. By settingλ= cosθwe havep(θ) = 2/πsin^2θandU_k(cosθ) = sin((k+1)θ)/sinθsuch that Eq. (<ref>) will become|n,θ⟩ = √(2/π)∑_k=0^∞sin((k+1)θ)|n,k⟩. In other words, the dynamics of a single particle propagating across a semi-infinite constant coupling lattice is equivalent to dynamics of an ensemble whose disorder follows a Wigner semicircle distribution. This stems from the fact that the eigenstates of the lattice form a continuum that is indexed by a quantum number which in turn plays the role of disorder in our ensemble picture. To clarify, what we are highlighting here is not an analytical technique to investigate lattice dynamics, but rather a new interpretation of those dynamics. Conclusions.―We have shown an equivalency between the dynamics of an ensemble of disordered quantum systems and of a single semi-infinite lattice. This has been achieved by identifying the pure state description of ensembles which may then be unitarily transformed to a lattice description using a basis of orthogonal polynomials. Terms representing disorder in the ensemble picture will correspond to exchange (hopping) terms in the lattice picture. We furthermore show that exact ensemble average dynamics may be recovered by performing a partial trace over the lattice degrees of freedom, yielding non-unitary evolution that is fundamentally distinct from open quantum systems. We have demonstrated that the transformation to the lattice picture is exact by simulating disorder-induced dephasing of a qubit and comparing to known analytical solutions. We have also studied the average population dynamics for a disordered dimer which exhibits novel non-unitary dynamics, demonstrating the generality of our approach. 
Finally, we have discussed the case of semi-infinite lattices with constant nearest-neighbour coupling and highlighted their equivalency to ensembles of quantum systems with disorder that follows a Wigner semicircle distribution. Acknowledgements.― We thank the and the Gordon and Betty Moore Foundation (Grant GBMF8820) for financial support and the Engineering and Physical Sciences Research Council (EPSRC UK Grant No. EP/V049011/1). We would like to thank D. Porras for insightful discussions. Supplemental Material: Equivalence of dynamics of disordered quantum ensembles and semi-infinite lattices 1.0em Hallmann Óskar Gestsson, Charlie Nation, and Alexandra Olaya-Castro Department of Physics and Astronomy, University College London, London WC1E 6BT, United Kingdom (Dated: July 1, 2024) § PRIMER ON ORTHOGONAL POLYNOMIALS For a given probability distributionpwith finite moments that is defined for the real domain𝒟 ⊆ℝwe may define an inner product ⟨f, g|=⟩∫_𝒟f(x)g(x)p(x)dx for arbitrary real-valued functionsf,gdefined on𝒟. There then exists a sequence of monic orthogonal polynomials{P_n(x)}_n=0^∞that satisfies the following condition, ⟨P_n, P_m|=⟩∫_𝒟P_n(x)P_m(x)p(x)dx = ζ_nδ_n,m, withδ_n,mbeing the Kronecker delta, and positive constantsζ_nwhereζ_0 = 1<cit.>. The subscript of theP_nelements is taken to correspond to the degree of the polynomial, e.g.P_0(x) = 1. The sequence forms a basis for real-valued functions on𝒟such that for anyf : 𝒟 →ℝthere exists a non-zero sequence of real-valued elements{c_n}_n=0^∞such that f(x) = ∑_n=0^∞ c_nP_n(x), wherec_n = ⟨f,P_n|/⟩ ζ_n. We can analogously introduce a corresponding sequence of orthonormal polynomials{p_n(x)}_n=0^∞whose elements are defined asp_n(x) = P_n(x) / √(ζ_n)and will satisfy ⟨p_n, p_m|=⟩δ_n,m. §.§ Recurrence Relation A powerful property of any sequence of orthogonal polynomials, that is at the core of the mapping in the main text, is the three-term recurrence relation they satisfy. For monic polynomials we have P_n+1(x) = (x - α_n)P_n(x) - β_n P_n-1(x), with initial conditions:P_0(x) = 1,P_1(x) = x - α_0, and recurrence coefficientsα_n,β_n, that are determined as α_n = ⟨xP_n,P_n|/⟩⟨P_n,P_n|,⟩ β_n = ⟨P_n,P_n|/⟩⟨P_n-1,P_n-1|.⟩ We can show that Eq. (<ref>) holds by expandingxP_n(x)in terms of monic polynomials using Eq. (<ref>), xP_n(x) = ∑_k = 0^n+1⟨xP_n,P_k|⟩/ζ_kP_k(x), where we have noted that the expansion involves only polynomials of degreen+1or lower, asxP_n(x)will be a polynomial of degreen+1. Now we consider the coefficients appearing in the expansion and write ⟨xP_n,P_k| ⟩ = ⟨P_n,xP_k| ⟩ = ∑_m = 0^k+1⟨P_n,xP_m|⟩/ζ_m⟨P_n,P_m| ⟩ = ∑_m = 0^k+1⟨P_n,xP_m|δ⟩_n,m, where in going from the first to second line we have expandedxP_k(x)again in terms of the monic polynomials. We therefore have that⟨xP_n,P_k|=⟩ 0forn>k+1such that the right hand side of Eq. (<ref>) reduces to three terms, xP_n(x) = d_n+1P_n+1(x) + d_nP_n(x) + d_n-1P_n-1(x). The leading coefficient ofP_n(x)is unity by definition such that the same will be the case forxP_n(x), which in turn impliesd_n+1 = 1. As ford_n-1we can write d_n-1ζ_n-1 = ⟨xP_n,P_n-1| ⟩ = ⟨P_n,xP_n-1| ⟩ = ⟨P_n,P_n + d_n-1P_n-1 + d_n-2P_n-2| ⟩ = ⟨P_n,P_n|,⟩ where we have applied Eq. (<ref>) when going from the second to third line. Putting all of this together and rearranging the terms of Eq. (<ref>) we can write P_n+1(x) = (x - ⟨xP_n,P_n|⟩/ζ_n)P_n(x) - ζ_n/ζ_n-1P_n-1(x), which is the three-term recurrence relation given in Eq. (<ref>). 
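In practice, the recurrence coefficients for an arbitrary disorder distribution can be generated numerically by discretizing the measure and applying the recursion above (the Stieltjes procedure). The following sketch is our illustration, with the grid, the number of coefficients and all names chosen for the example; it is verified against the standard normal distribution, for which α_n = 0 and β_n = n.

```python
import numpy as np

def recurrence_coefficients(grid, pdf, n_max):
    """Monic three-term recurrence coefficients (alpha_n, beta_n) of the polynomials
    orthogonal w.r.t. pdf(x) dx, computed on a discretized measure (Stieltjes procedure)."""
    w = pdf / pdf.sum()                    # discrete weights approximating p(x) dx
    alphas, betas = [], []
    P_prev = np.zeros_like(grid)           # P_{-1}
    P_curr = np.ones_like(grid)            # P_0
    norm_prev = 1.0
    for n in range(n_max):
        norm_curr = np.sum(w * P_curr**2)                 # <P_n, P_n>
        alpha = np.sum(w * grid * P_curr**2) / norm_curr  # <x P_n, P_n> / <P_n, P_n>
        beta = norm_curr / norm_prev                      # <P_n, P_n> / <P_{n-1}, P_{n-1}>
        alphas.append(alpha)
        betas.append(beta)
        P_prev, P_curr = P_curr, (grid - alpha) * P_curr - beta * P_prev
        norm_prev = norm_curr
    return np.array(alphas), np.array(betas)

# Check against the standard normal distribution: alpha_n = 0, beta_n = n (beta_0 = 1 by convention).
x = np.linspace(-12.0, 12.0, 40001)
p = np.exp(-x**2 / 2.0) / np.sqrt(2.0 * np.pi)
alphas, betas = recurrence_coefficients(x, p, n_max=8)
print(np.round(alphas, 6))  # ~ [0, 0, 0, ...]
print(np.round(betas, 6))   # ~ [1, 1, 2, 3, 4, 5, 6, 7]
```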
§ TRANSFORMING THE INTEGRAL REPRESENTATION OF THE ENSEMBLE TO A LATTICE REPRESENTATION The Hamiltonian of the entire ensemble continuum can be represented in terms of an integral over all elements of the ensemble as Ĥ_Ens = ∫dλĤ_λ, whereλis a label that spans the elements. Rewriting Eq. (<ref>) in terms of the{|n, λ⟩}_n=0^Nbasis yields Ĥ_Ens = ∑_n,m=0^N⟨n|Ĥ_0| m|∫⟩dλ|n, λ⟩⟨m, λ| + ∑_n,m=0^N∫dλ f_n,m(λ)|n, λ⟩⟨m, λ|. In order to develop the integral terms in Eq. (<ref>) we make an unitary change of basis using a sequence of polynomials{p_k(λ)}_n=0^∞that are mutually orthonormal with respect to a given probability distributionp, |n, λ⟩ = ∑_k=0^∞√(p(λ))p_k(λ)|n, k⟩, whose inverse is |n, k⟩ = ∫dλ√(p(λ))p_k(λ)|n, λ⟩. We can see that the change of basis is indeed unitary by considering ⟨n, k | n, k^'|⟩ = ∬dλdλ^'√(p(λ)p(λ^'))p_k(λ)p_k^'(λ^')⟨n, λ| n, λ^'| ⟩ = ∬dλdλ^'√(p(λ)p(λ^'))p_k(λ)p_k^'(λ^')δ(λ - λ^') = ∫dλ p(λ)p_k(λ)p_k^'(λ) = δ_k, k^'. For terms of the form appearing in the first line of Eq. (<ref>) we will have ∫dλ|n, λ⟩⟨m, λ| = ∑_k,k^'=0^∞∫dλ p(λ)p_k(λ)p_k^'(λ)|n, k⟩⟨m, k^'| = ∑_k,k^'=0^∞⟨p_k,p_k^'||%s⟩⟩n, k⟨m, k^'| = ∑_k = 0^∞|n, k⟩⟨m, k|, where going from the first to second line we have noted that the integral is the inner product that we had in Eq. (<ref>) and then from going from the second to last line we applied the orthogonality of the polynomials. For terms of the form appearing in the second line of Eq. (<ref>) we will have ∫dλ f_n,m(λ)|n, λ⟩⟨m, λ| = ∑_k,k^'=0^∞∫dλ f_n,m(λ)p(λ)p_k(λ)p_k^'(λ)|n, k⟩⟨m, k^'| = ∑_k,k^'=0^∞ f_n,m^(k, k^')|n, k⟩⟨m, k^'| where going from the first to second line we have made the definitionf_n,m^(k, k^') = ⟨f_n,mp_k,p_k^'|$⟩. We can now rewrite Eq. (<ref>) as a semi-infinite lattice model of the form Ĥ_Ens = ∑_k,k^' = 0^∞∑_n,m=0^N(⟨n|Ĥ_0| m|δ⟩_k,k^' + f_n,m^(k,k^'))|n, k⟩⟨m, k^'|, where the λ-independent component of the ensemble has been projected onto each node of the lattice, whilst the λ-dependent component is responsible for the f_n,m^(k,k^') lattice coupling terms. §.§ Linear disorder At this point it is worth noting that we have transformed the integral representation of Ĥ_Ens into a lattice model with respect to an arbitrary probability distribution with well defined moments, and that the utility of the lattice mapping will depend on whether or not our choice of p will result in a simple form for the f_n,m^(k,k^') coupling constants. We now demonstrate how such a choice is simple to determine for most models of disorder that are considered in the literature by exploiting the recurrence relation that orthogonal polynomials satisfy. Let us assume that f is a linear function of disorder such that f(λ) = wλ, where w has units of energy and can be considered to characterize the strength of the disorder contribution. We can then compute the coupling coefficients of the lattice expansion using the three-term recurrence relation, f^(k,k^') = w⟨λ p_k,p_k^'| ⟩ = w/√(ζ_kζ_k^')⟨λ P_k,P_k^'| ⟩ = w(√(β_k+1)δ_k+1,k^' + α_kδ_k,k^' + √(β_k)δ_k-1,k^'). We can see that the resulting interaction terms in the lattice will be of the nearest-neighbour form, and the strength of this coupling is determined by the square root of the recurrence coefficients, √(β_n). We also have terms corresponding to energy shifts that are given by α_n. Finally note that α_n = α for all n if the probability distribution is even with respect to a point α, i.e. p(α-λ) = p(α+λ). This can be seen by recalling their definition given in Eq. 
(<ref>) and noting that the resulting polynomials will have an even and odd parity with respect to the point α. §.§ Example: Two-level system ensemble Let us consider an ensemble of non-interacting qubits that have their excited state energies to be randomly sampled from a probability distribution p(λ), such that the dynamics of each realization is determined by a Hamiltonian of the form Ĥ_λ = E_0|0, λ⟩⟨0, λ| + (E_1 + wλ)|1, λ⟩⟨1, λ|, where E_0 and E_1 are the central energies of the ground and excited state, respectively, λ is drawn from a random distribution p(λ), and w characterizes the width of the static disorder. The Hamiltonian of the ensemble is then mapped onto a nearest-neighbour coupled chain form, Ĥ_Ens = ∑_k=0^∞(E_0|0, k⟩⟨0, k| + (E_1 + wα_k)|1, k⟩⟨1, k|) + w∑_k=0^∞√(β_k+1)|1, k⟩⟨1, k+1| + h.c. where α_k and β_k are the recurrence coefficients stemming from performing the lattice expansion with respect to the distribution of the disorder, p(λ). We take each realization to have an initial state of the form |ψ_0⟩_λ = a|0, λ⟩ + b|1, λ⟩, with | a|^2 + | b|^2 =1, which will then yield an initial state of the ensemble in terms of the lattice basis that is of the form |Ψ_0⟩ = a|0, 0⟩ + b|1, 0⟩. We can now perform a partial trace over all nodes of the lattice in order to compute the ensemble averaged dynamics, ρ(t) = ∑_k=0^∞⟨k|ρ(t)| k| ⟩ = ∑_n,m = 0, 1[∑_k=0^∞⟨n, k|ρ(t)| m, k|⟩] |n⟩⟨m|, where ρ(t) = e^-iĤ_Enstρ_0e^iĤ_Enst and ρ_0 = | a|^2|0, 0⟩⟨0, 0| + | b|^2|1, 0⟩⟨1, 0| + ab^*|0, 0⟩⟨1, 0| + a^*b|1, 0⟩⟨0, 0|. We define functions c_n,m(t) = ∑_k=0^∞⟨n, k|ρ(t)| m, k|$⟩ for the sake of brevity and compute the averaged dynamics for this example. We can immediately determinec_0,0(t) = |a|^2by noting that|0, 0⟩is a stationary state ofĤ_Ens. We can reducec_0,1(t)to a single term, c_0,1(t) = ab^*∑_k=0^∞⟨0, k| e^-iĤ_Enst| 0, 0|⟨%s|%s⟩⟩1, 0| e^iĤ_Enst| 1, k = ab^* e^-iE_0t⟨1, 0| e^iĤ_Enst| 1, 0|,⟩ and then note that⟨1, 0|e^iĤ_Enst|1, 0|$⟩ is the characteristic function of the random variable Λ = wλ + E_1 <cit.>. We denote the characteristic function of a random variable X as φ_X such that we may now write c_0,1(t) = ab^* e^-i(E_0-E_1)tφ_wλ(t), where we have made use of the fact that φ_Λ(t) = e^iE_1tφ_wλ(t). As c_1,0(t) = c_0,1^*(t), we now only need to determine c_1,1(t), which will reduce as c_1,1(t) = | b|^2 ∑_k=0^∞|⟨1, k| e^-iĤ_Enst| 1, 0|⟩|^2. We might already expect c_1,1(t) to be constant for all times as we know that populations of the |0, λ⟩, |1, λ⟩ states are conserved quantities. It is however not so obvious that this should be the case when considering Eq. (<ref>). Notice that we can write the |1, k⟩ elements in terms of |1, 0⟩ in a compact form if we allow for the corresponding orthonormal polynomials to take operator arguments, i.e. we have |1, k⟩ = p_k(Ĥ_Ens)|1, 0⟩. We then use this fact in conjunction to Ĥ_Ense^-iĤ_Enst = i∂_t e^-iĤ_Enst in order to write ⟨1, k| e^-iĤ_Enst| 1, 0| ⟩ = ⟨1, 0| p_k(Ĥ_Ens)e^-iĤ_Enst| 1, 0| ⟩ = p_k(i∂_t)⟨1, 0| e^-iĤ_Enst| 1, 0| ⟩ = p_k(i∂_t)φ_Λ^*(t), and in general we have ⟨1, k| e^iĤ_Enst| 1, k^'|=⟩ p_k(-i∂_t)p_k^'(-i∂_t)φ_Λ(t). We can now rewrite Eq. (<ref>) in terms of the characteristic function as c_1,1(t) = | b|^2 ∑_k=0^∞| p_k(i∂_t)φ_Λ^*(t)|^2. In this form we can now readily consider i∂_t c_1,1(t) and apply the three-term recurrence relation for P_k in order to see that c_1,1(t) is indeed a constant. 
We have i∂_t c_1,1(t) = | b|^2 ∑_k=0^∞1/ζ_k([i∂_tP_k(i∂_t)φ_Λ^*(t)]P_k(-i∂_t)φ_Λ(t) - P_k(i∂_t)φ_Λ^*(t)[-i∂_tP_k(-i∂_t)φ_Λ(t)]) = | b|^2 ∑_k=0^∞1/ζ_k([P_k+1(i∂_t) + α_kP_k(i∂_t) + β_kP_k-1(i∂_t)]φ_Λ^*(t)P_k(-i∂_t)φ_Λ(t) . .- P_k(i∂_t)φ_Λ^*(t)[P_k+1(-i∂_t) + α_kP_k(-i∂_t) + β_kP_k-1(-i∂_t)]φ_Λ(t)) = 2| b|^2∑_k=0^∞Im( 1/ζ_kP_k+1(i∂_t)φ_Λ^*(t)P_k(-i∂_t)φ_Λ(t) - 1/ζ_k-1P_k(i∂_t)φ_Λ^*(t)P_k-1(-i∂_t)φ_Λ(t)) = 2| b|^2lim_k→∞1/ζ_kIm(P_k+1(i∂_t)φ_Λ^*(t)P_k(-i∂_t)φ_Λ(t)), whereby having collected like-terms we have noted that we have an infinite series of the form ∑_k=0^∞ a_k - a_k-1 = lim_k→∞a_k (with a_-1 = 0), so we are left with a single term of the form 1/ζ_kP_k+1(i∂_t)φ_Λ^*(t)P_k(-i∂_t)φ_Λ(t) = √(β_k+1)⟨1, k+1| e^-iĤ_Enst| 1, 0|⟨%s|%s⟩⟩1, 0| e^iĤ_Enst| 1, k which goes to zero as k→∞ for a local Hamiltonian at finite times. Combining the fact that ∂_t c_1,1(t) = 0 with the initial condition c_1,1(0) = | b|^2 we have that c_1,1(t) = | b|^2. We therefore have the ensemble averaged dynamics to be ρ(t) = | a|^2|0⟩⟨0| + | b|^2|1⟩⟨1| + ab^*e^-i(E_0 - E_1)tφ_wλ(t)|0⟩⟨1| + h.c. which is in agreement with what has been derived in <cit.>. §.§ Example: Disorder dependent initial state The pure state formalism is able to treat physical situations where the initial state of each realization is correlated to the disorder. We illustrate this point here with an example of an ensemble of systems, Ĥ_λ, where λ follows a Wigner semicircle distribution p(λ) = 2/π√(1-λ^2) such that λ can take values on the range [-1, 1]. We shall consider the initial state of each realization to be of the form |ψ_0, λ⟩ = λ|0,λ⟩ + √(1-λ^2)|1,λ⟩ The initial state of the ensemble is now written as |Ψ_0⟩ = ∫_-1^1dλ√(p(λ))(λ|0,λ⟩ + √(1-λ^2)|1,λ⟩) = ∑_k=0^∞ d_0,k|0,k⟩ + ∑_k=0^∞ d_1,k|1,k⟩, with coefficients d_0,k = ∫_-1^1dλ p(λ)U_k(λ)λ, d_1,k = ∫_-1^1dλ p(λ)U_k(λ)√(1-λ^2), where U_k(λ) are the Chebyshev polynomials of the second kind <cit.>. Computing these integrals yields d_0,k = 1/2δ_1,k, d_1,k = -4/π1+cos(π k)/(k+3)(k^2-1), such that our initial ensemble state is discretized as |Ψ_0⟩ = 1/2|0,1⟩ - 8/π∑_k=0^∞1/(2k+3)(4k^2-1)|1,2k⟩. § INTRODUCING AN ENERGY CUTOFF The mapping breaks down for cases of disorder which follow a distribution whose moments are undefined, e.g. Cauchy and Lévy distributions, which then in turn result in undefined recurrence coefficients. The divergence implicit in the integral form of Ĥ_Ens is made explicit in its lattice form. We can remedy the situation by introducing an energy cutoff which then leads to well defined recurrence coefficients and recover approximate averaged dynamics whose accuracy depends on the choice of cutoff. It is also useful to apply an energy cutoff even in cases where all moments of the distribution are well defined, as doing so will ensure that the recurrence coefficients will reach asymptotic values. Take for example a Gaussian distribution with mean zero and standard deviation σ such that the recurrence coefficients will be α_k = 0 for all k and β_k+1 = σ^2(k+1). Due to the strictly increasing value of the lattice couplings we find that there are diminishing returns in increasing the lattice truncation, making it more and more computationally intensive to simulate dynamics to longer times. A 5σ cutoff will lead to the couplings having an asymptotic value of lim_k→∞√(β_k+1) = 5/2σ, thus curbing the monotonic increase of the lattice couplings whilst simultaneously preserving the fidelity of the distribution up to ± 5σ. 
Figure <ref> demonstrates this behaviour of the lattice couplings corresponding to a Gaussian disorder distribution with and without an energy cutoff.

§ TABLE OF QUANTITIES

Name       | p(λ)                         | 𝒟       | α_k | β_k+1                      | φ_Λ(t)
Gaussian   | (1/(√(2π)σ)) e^{-λ^2/2σ^2}   | ℝ       | 0   | σ^2(k+1)                   | e^{-σ^2t^2/2}
Cauchy     | (1/(πθ)) θ^2/(λ^2 + θ^2)     | ℝ       | N/A | N/A                        | e^{-θ|t|}
Semicircle | (2/(π w^2)) √(w^2 - λ^2)     | [-w, w] | 0   | w/2                        | 2J_1(wt)/(wt)
Uniform    | 1/(2v)                       | [-v, v] | 0   | v(k+1)/√(4(k+1)^2 - 1)     | sin(vt)/(vt)

For the qubit dephasing considered in the main text we have set σ = θ = w = v = E_1 - E_0. An energy cutoff of E_± = ±30θ is introduced for the Cauchy distribution in order to achieve well defined recurrence coefficients.
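The Gaussian entry of this table can be checked numerically against the chain mapping. The short sketch below (our illustration, not code from the paper) builds the truncated |1,k⟩ chain with α_k = 0 and couplings σ√(k+1), computes the survival amplitude ⟨1,0| e^{-iĤ_Ens t} |1,0⟩ from the eigendecomposition, and compares its modulus with |φ_Λ(t)| = e^{-σ^2t^2/2}; the chain length K and the time window are arbitrary truncation choices.

import numpy as np

sigma = 1.0        # Gaussian disorder width (units of E_1 - E_0)
E1 = 1.0           # excited-state energy; only an overall phase for |phi|
K = 200            # truncation of the semi-infinite chain

# |1,k> sector: alpha_k = 0 for a symmetric Gaussian, couplings sigma*sqrt(k+1)
coup = sigma * np.sqrt(np.arange(1, K))
H1 = np.diag(np.full(K, E1)) + np.diag(coup, 1) + np.diag(coup, -1)

# Survival amplitude <1,0| e^{-i H t} |1,0> via the eigendecomposition
evals, evecs = np.linalg.eigh(H1)
weights = np.abs(evecs[0, :])**2
times = np.linspace(0.0, 4.0 / sigma, 81)
survival = np.abs(np.array([np.sum(weights * np.exp(-1j * evals * t)) for t in times]))

exact = np.exp(-0.5 * (sigma * times)**2)   # |phi_Lambda(t)| for Gaussian disorder
print("max deviation from e^{-sigma^2 t^2/2}:", np.abs(survival - exact).max())

For the chosen truncation the excitation never reaches the end of the chain within the plotted time window, so the agreement with the closed-form characteristic function is limited only by the quadrature-free exactness of the mapping itself.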
http://arxiv.org/abs/2406.18736v1
20240626200433
Self-assembly behaviour of diblock copolymer-diblock copolymer under oscillating shear field
[ "Y. Guo", "H. He", "X. Fu" ]
cond-mat.soft
[ "cond-mat.soft" ]
-1/2 Ii- Ii 䳺 . . ˳ July 1, 2024 ===================== § ABSTRACT The self-assembly behaviour of a diblock copolymer–diblock copolymer mixture under an oscillating shear field is investigated via cell dynamics simulation. The results indicate that the macrophase separation of the com­po­si­te system is accompanied by the corresponding microphase separation induced by the oscillating shear field. With an increase in the shear frequency, the AB phase changes from a tilted layered structure to a parallel layered structure, and finally to a vertical layered structure. The CD phase transforms from the initial concentric ring into a parallel layer in the ring and then into a parallel layered structure; thus, the system finally forms a layered structure of the AB phase (vertical layer) and CD phase (parallel layer) perpendicular to each other. To verify the phase transition, the dynamic evolution of the domain size at different shear frequencies is analysed. The ordered phase transition with an increase in the oscillating shear field varies when the initial composition ratio of the system is changed. This conclusion provides a valuable guidance for the formation and transformation of ordered structures in experiments. iblock copolymer, oscillating shear field, self-assembly § INTRODUCTION In recent decades, diblock copolymers (DBCs) have attracted considerable attention because of their capability to self-assemble into various amazing structures and their applications in nanotechnology, microelectronics, and clean energy <cit.>. In addition to their novel structures, the formation of the ordered structure and the ordered phase transformation of DBCs are important for the development of new functional materials <cit.>. If the homopolymer <cit.> or a different DBC <cit.> is added, a multiscale hierarchical structure can be formed, and its function can be optimised while enriching the structure of the system. Many theoretical studies have been performed on the doping of a DBC with a homopolymer or a different DBC. Both phenomenological methods <cit.> and numerical simulations <cit.> have yielded results consistent with experiments, while predicting new structures and properties. Martinez et al. studied the stabilisation of multiple ordered bicontinuous phases in blends of a DBC and a homopolymer via a combination of particle-based simulations and self-consistent field theory (SCFT) <cit.>. Liu et al. adopted Monte Carlo simulations to study the influences of the chain length and DBC concentration on the interfacial properties between two immiscible homopolymers <cit.>. Xie et al. systematically studied the formation and relative stability of spherical packing phases in binary blends composed of AB-DBCs and A-homopolymers by applying SCFT to the freely-jointed chain model of polymers <cit.>. In addition, many researchers have experimentally studied block copolymers doped with homopolymers. Habersberger et al. systematically described the phase behaviour of a variety of symmetric CE/C/E ternary copolymer/homopolymer blends, where C represents poly(cyclohexylethylene) and E represents poly(ethylene), verifying the location in the composition of the technologically important bicontinuous microemulsion (BµE) channel as a function of molecular weight of the diblock <cit.>. Later, Robert et al. 
from the same research group discussed a connection between structure and ionic conductivity in salt-containing ternary polymer blends consisting of a polystyrene-poly(ethylene oxide) (PS-PEO) DBC and a poly(ethylene oxide) (PEO) hompolymer <cit.>. Zheng et al. systematically investigated the phase behaviour of partially charged DBC–homopolymer ternary blends using small-angle X-ray scattering <cit.>. For the system comprising two DBCs, Wang et al. assessed a versatile computational strategy involving cooperative assembly of DBC blends to form spherical and cylindrical compartmentalised micelles with complicated morphologies and structures. The co-assembly tactic combines the advantages of polymer blending and incompatibility-induced phase separation <cit.>. Su et al. explored the self-assembled structure of a poly(styrene-b-vinylbenzyl triazolylmethyl methyladenine) (PS-b-PVBA) DBC and a poly(vinylbenzyl triazolylmethyl methylthymine) (PVBT) homopolymer via nuclear magnetic resonance spectra, small-angle X-ray scattering and transmission electron microscopy <cit.>. Fan et al. investigated the phase behaviour of an AB/CD block copolymer blend via SCFT. The results indicated that the phase transitions from layered structures on different spatial scales to a core–shell structure were ascribed to an increase in the fusion degree of components B and D <cit.>. Sun et al. explored the orientational order transition of striped patterns in microphase structures of DBC–DBC mixtures in the existence of periodic oscillatory particles <cit.>. By altering the oscillatory frequency and amplitude, the orientational order transition of a striped microphase structure from the state parallel to the oscillatory direction to that perpendicular to the oscillatory direction was obtained. However, Pan et al. made the two DBCs under parallel wall confinement and studied the effects of the distance between walls, the wall–block interaction, and the repulsive interactions between different monomers on the phase behaviour <cit.>. It is important to understand how to regulate the orientation of ordered structures and how to transform the orientation order more simply, conveniently, and quickly to obtain various self-assembled structures of composite materials. The principal methods of regulating nanostructures are substrate induction, space confinement, and external field application <cit.>, all of which can render the bulk more novel and organised. Many studies have proven that the application of an external field is an effective method for regulating polymer nanocomposites <cit.>. An oscillating shear field can enable the system to produce various novel and ordered structures and realise transformation and effective regulation of the structures <cit.>. Experimentally determining the formation of certain structures is often tedious; thus, we need computer simulations to provide a more effective guidance for experiments. In this study, the self-assembly behaviour of two different block copolymers under an oscillating shear field was analysed via computer simulations, and an effective manner in forming and changing the orderly structure of the regulation system was obtained. The remainder of this paper is organised as follows: section 2 describes the model and simulation method; section 3 presents the numerical results and discussions; finally, section 4 presents conclusions. 
§ MODELS AND SIMULATION METHODS We consider a phase separating system consisting of two different DBCs, which are under the oscillating shear field. The polymer chain of one DBC consists of A and B monomers, which has a short-range repulsive interaction between them. The polymer chain of the other DBC consists of C and D monomers, which also has a short-range repulsive interaction. The B and D monomers in two different DBCs are mutually exclusive with each other. Furthermore, the hydrodynamic effects dominate in the very final stage of microphase separation in polymer mixtures, so they are neglected in the present model. For DBC-DBC system, several parameters are defined. We consider that the DBCs are symmetric. Thus, the polymerization degree of A block is equal to that of B block, which is also true for the C and D blocks, that is, N_A=N_B, N_C=N_D. In the process of phase separations, fluctuations are predominant, so we should investigate the local volume fractions of monomers A, B, C and D. They are denoted, respectively, by ϕ_A(x,y), ϕ_B(x,y), ϕ_C(x,y) and ϕ_D(x,y). The total density ϕ_A(x,y)+ϕ_B(x,y)+ϕ_C(x,y)+ϕ_D(x,y) is constant under the incompressibility condition. Then, we take ψ(x,y)=ϕ_A(x,y)+ϕ_B(x,y), ϕ(x,y)=ϕ_A(x,y)-ϕ_B(x,y), and ξ(x,y)=ϕ_C(x,y)-ϕ_D(x,y) are the independent variables that are used to characterize the structure ordering. The order parameter ϕ(x,y) describes the local concentration difference between the A and B monomers, the order parameter ξ(x,y) gives the local concentration difference between the C and D monomers. Here, we use a three-order-parameter model <cit.>, whose free-energy functional of the system is given by F=F_L+F_S, the long-range part F_L is given by F_L=α/2∬ dr dr'G(r,r' )[ϕ(r)-ϕ_0][ϕ(r')-ϕ_0]+β/2∬ dr dr'G(r,r' )[ξ(r)-ξ_0][ξ(r')-ξ_0] , where α, β are all positive constants, refer to the long-range interaction. G(r,r') is the Green's function defined by the equation -∇^2G(r, r')=δ(r-r'), while ϕ_0 and ξ_0 is the spatial averages of ϕ and ξ. As mentioned before, the two DBCs are symmetric, so ϕ_0=0, ξ_0=0. The short-range part is more complex than the long-range part, which is given by F_S=∬ d x d y [d_1/2(∇ψ)^2+d_2/2(∇ϕ)^2+d_3/2(∇ξ)^2 +f_1(ψ,ϕ,ξ)], where the d_1, d_2 and d_3 terms correspond to the surface tensions. The local interaction term f_1(ψ,ϕ,ξ) could be replaced by f_1(η,ϕ,ξ) <cit.>, where η=ψ-ψ_C with ψ_C being the volume fraction of two different DBCs at the critical point of the macrophase separation. It is obvious that the important physical results will be mainly included in the local interaction f_1(ψ,ϕ,ξ). For further treatment, we can take its form in a phenomenological approach <cit.> as f_1(η,ϕ,ξ)=ν_1 (η)+ν_2 (ϕ)+ν_3(ξ)+b_1ηϕ-b_11ηξ -1/2b_2η(ϕ)^2+1/2b_22ηϕ(ξ)^2. In the symmetric case, b_1=(-χ_AC-χ_AD+χ_BC+χ_BD)/4, b_11=(-χ_AC+χ_AC-χ_BC+χ_BD)/4. In the model, the repulsion between the B monomer and the D monomer is set to be the largest, in others words, χ_BD>χ_BC, χ_BD>χ_AD, χ_BD>χ_AC, while the value of χ_BC is close to the value of χ_AC and only a little larger than χ_AC, and χ_AD≫χ_BC. In general, b_1, b_11, b_2 and b_22 are all positive constants, thereinto, b_1, b_11 represent the short-range attraction between polymer monomers, b_1 mainly arises from the repulsive interaction between the CD DBC and the B monomers, whereas the b_2 and b_22 originate from the conformational entropy between AB DBC and CD DBC. These two terms decide that the observation of a microphase separation could take place in the DBCs. 
In fact, b_2 indicates that a large absolute value of ϕ(x,y) is favorable in the region η(x,y)>0, while b_22 implies that a large absolute value of ξ(x,y) is more favorable in the region η(x,y)<0. Equation (<ref>) describes the minimum model of the short-range part of the free energy in DBC-DBC system. In the free energy function, the competing action leads to the phase separation of two DBC blends. According to the three-parameter model, the dynamic equation of phase separation can be described as the time dependent Ginzburg–Landau (TDGL) equation of coupling the diffusion field and velocity field: ∂η/∂ t+v·∇η =M_η ∇^2δ F/δη, ∂ϕ/∂ t+v·∇ϕ =M_ϕ ∇^2δ F/δϕ, ∂ξ/∂ t+v·∇ξ =M_ξ ∇^2δ F/δξ, where M_η, M_ϕ and M_ξ are transport coefficients, v is an external velocity field describing the shear flow profile. For convenience, we can express the external velocity profile as: v=(γω y cos(ω t),0), where γ is the shear amplitude, ω is the shear frequency. x denotes the oscillating shear flow direction, and y denotes the shear gradient direction. The numerical solutions of the above model system can be carried out in an L_x × L_y (128×128) two-dimensional square lattice, by using the cell dynamics simulation approach proposed by Oono and Puri <cit.>. The order parameters for each cell are η(n,t), ψ(n,t), where n=(n_x,n_y) is the lattice position and n_x, n_y are integers between 1 and L. Laplace operator is approximated in the cell dynamics simulation as: ∇^2ϕ(n)=⟨⟨ϕ(n)⟩⟩-ϕ(n), where ⟨⟨ϕ(n)⟩⟩ represents the nearest-neighbor (n.), the next-nearest neighbor (n..) cells of ϕ(n) <cit.>: ⟨⟨ϕ(n)⟩⟩=1/6∑_n=n.ϕ(r)+1/12∑_n=n..ϕ(r). In the lattice size (Δ x, Δ y) and the time step (Δ t) are all set to unity. The cell dynamics simulation equations corresponding to equations (<ref>)–(<ref>), in their space-time discreteness form, are written as follows: η(r,t+Δt) = η(r,t)-1/2γsin(ω t)[η(x+1,y,t)-η(x-1,y,t)] +M_η(⟨⟨ I_η⟩⟩-I_η), ϕ(r,t+Δt) = ϕ(r,t)-1/2γsin(ω t)[ϕ(x+1,y,t)-ϕ(x-1,y,t)] +M_ϕ(⟨⟨ I_ϕ⟩⟩-I_ϕ)-αϕ(r,t), ξ(r,t+Δt) = ξ(r,t)-1/2γsin(ω t)[ξ(x+1,y,t)-ξ(x-1,y,t)] +M_ξ(⟨⟨ I_ξ⟩⟩-I_ξ)-βξ(r,t), where I_η=-d_1(⟨⟨η⟩⟩-η)-A_ηtanhη+η+b_1ϕ-b_11ξ- 1/2b_2ϕ^2+1/2b_22ξ^2, I_ϕ=-d_2(⟨⟨ϕ⟩⟩-ϕ)-A_ϕtanhϕ+ϕ+b_1η-b_2ηϕ, I_ξ=-d_3(⟨⟨ξ⟩⟩-ξ)-A_ξtanhξ+ξ-b_1η-b_22ηξ. A shear periodic boundary condition <cit.> should be applied to x direction, ϕ(n_x,n_y,t)=ϕ[n_x+N_xL+γ(t)N_yL,n_y+N_yL], where N_x and N_y are arbitrary integers, γ_0(t) is the shear strain, and γ_0(t)=γsin(ω t). The parameters are chosen to be d_1=1.0, d_2=0.5, d_3=0.5, b_1=0.008, b_2=0.2, b_11=0.1, b_22=0.2 and M_η=M_ϕ=M_ξ=1, the initial distribution of ϕ, η and ξ are specified by random uniform distributions in the range [-0.01,0.01], according to the previous work. In the present simulation work, in the case of α=0.04, β=0.03, by the formula α=12/[N^2f_b(1-f_b)] (f_b denotes a block ratio) <cit.>, it can be obtained that N_A=N_B=17, N_C=N_D=20, where N_AB=N_A+N_B and N_CD=N_C+N_D. In this paper, all parameters are scaled, so all of them are dimensionless. § NUMERICAL RESULTS AND DISCUSSION §.§ Microphase transformation induced by oscillating shear field First, we study the microphase transformation of DBC–DBC with changes in the shear frequency at f_AB/CD=7/3, where f represents the ratio of the volume fraction of the AB copolymer and CD copolymer, and γ=0.02, as shown in figure <ref>. In the diagram, the white, black, light gray, and dark gray regions represent phases A, B, C, and D, respectively. 
For clarity, ϕ>0 means the A-rich domain, ϕ<0 means the B-rich domain, ξ>0 means the C-rich domain, and ξ<0 means the D-rich domain. In figure <ref>, the CD phase is a concentric ring structure, and the AB phase is a tilted layered structure, except for a small amount of AB phase wrapped in the CD phase, which is a concentric ring when the shear frequency is 0.0000001. In other words, the effect of the oscillating shear field on the domain structure of the system is negligible when the shear frequency is low. The result obtained in this case is roughly the same as the phase morphology without an oscillating shear field [figure <ref>(a)]. The oscillatory shear begins to take effect as the shear frequency increases to 0.00001. The microphase separation of the system is evidently disturbed, the coarsening degree of the AB phase along the x direction is significantly increased, and the structure of the CD phase is still a concentric ring [figure <ref>(b)]. When the shear frequency is increased to 0.00002, the AB phase is almost parallel to the direction of the oscillatory shear, exhibiting a parallel layered structure, while the CD phase transforms from the original concentric ring structure to a parallel layered structure in the ring along the oscillatory shear direction [figure <ref>(c)]. When the shear frequency increases to 0.00004, the AB phase transforms back into the tilted layered structure, and the CD phase becomes an inclined concentric ring structure stretched along the oscillatory shear direction [figure <ref>(d)]. The CD phase is a parallel layered structure along the x direction, and the AB phase is an oblique layered ordered structure as shear frequency significantly increases to 0.48 [figure <ref>(e)]. As the shear frequency increases to 0.66, the CD phase remains parallel lamellae, while the AB phase completely transforms into a vertical layered structure perpendicular to the oscillatory shear direction. Finally, a lamellar structure comprising the AB phase (perpendicular lamellae) and CD phase (parallel lamellae) perpendicular to each other is formed, as shown in figure <ref>(f). In summary, as the oscillatory shear frequency increases, the CD phase changes from a concentric ring structure to a parallel layered structure, and the AB phase changes from a tilted layered structure to a parallel layered structure and then to a perpendicular layered structure. These results can be explained as follows. The influence of the oscillating field on the domain structure of the system is negligible when the shear frequency is low. Increasing the shear frequency enhances the coarsening degree of the system in the x direction; therefore, both the AB phase and the CD phase in the ring are almost parallel layered structures along the oscillating field at the appropriate frequency of 0.00002. As the shear frequency continues to increase, the different components push each other and are coarsened in the y direction under the action of rapid periodic oscillatory shear. Under the condition of f_AB/CD=7/3, the AB phase occupying a larger proportion is significantly affected by the oscillating shear field, which results in an easy pileup in the direction perpendicular to the oscillating shear field, forming the perpendicular layered structure. Since the CD phase is confined in the ring before, it breaks the restraint of the ring at a higher shear frequency and forms a parallel layered structure along the oscillating field. 
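For readers who wish to reproduce morphologies of this kind, a minimal Python sketch of one cell dynamics update is given below. It follows the discretized equations of the Models section as printed (isotropic neighbour average ⟨⟨·⟩⟩, the maps I_η, I_ϕ, I_ξ, and the oscillatory advection term), but it is our own illustration rather than the authors' code: the map amplitudes A_η = A_ϕ = A_ξ = 1.3, the run length, and the use of plain periodic boundaries in place of the shear-periodic condition are assumptions made for brevity.

import numpy as np

# Parameter values quoted in the Models section; grid, amplitudes A and run length are ours.
L = 128
d1, d2, d3 = 1.0, 0.5, 0.5
b1, b2, b11, b22 = 0.008, 0.2, 0.1, 0.2
A_eta = A_phi = A_xi = 1.3            # assumed value (not quoted above)
alpha, beta = 0.04, 0.03
gamma, omega = 0.02, 1e-5             # shear amplitude and frequency (illustrative)

rng = np.random.default_rng(0)
eta = rng.uniform(-0.01, 0.01, (L, L))
phi = rng.uniform(-0.01, 0.01, (L, L))
xi  = rng.uniform(-0.01, 0.01, (L, L))

def nn_avg(f):
    """Isotropic neighbour average <<f>> = (1/6) sum_nn + (1/12) sum_nnn."""
    nn  = (np.roll(f, 1, 0) + np.roll(f, -1, 0) + np.roll(f, 1, 1) + np.roll(f, -1, 1))
    nnn = (np.roll(np.roll(f, 1, 0), 1, 1) + np.roll(np.roll(f, 1, 0), -1, 1)
         + np.roll(np.roll(f, -1, 0), 1, 1) + np.roll(np.roll(f, -1, 0), -1, 1))
    return nn / 6.0 + nnn / 12.0

def cds_step(eta, phi, xi, t):
    I_eta = (-d1 * (nn_avg(eta) - eta) - A_eta * np.tanh(eta) + eta
             + b1 * phi - b11 * xi - 0.5 * b2 * phi**2 + 0.5 * b22 * xi**2)
    I_phi = -d2 * (nn_avg(phi) - phi) - A_phi * np.tanh(phi) + phi + b1 * eta - b2 * eta * phi
    I_xi  = -d3 * (nn_avg(xi) - xi) - A_xi * np.tanh(xi) + xi - b1 * eta - b22 * eta * xi
    def convect(f):   # oscillatory shear advection along x (axis 1); periodic BC for brevity
        return -0.5 * gamma * np.sin(omega * t) * (np.roll(f, -1, 1) - np.roll(f, 1, 1))
    eta = eta + convect(eta) + (nn_avg(I_eta) - I_eta)                 # M_eta = 1
    phi = phi + convect(phi) + (nn_avg(I_phi) - I_phi) - alpha * phi   # M_phi = 1
    xi  = xi  + convect(xi)  + (nn_avg(I_xi)  - I_xi)  - beta * xi     # M_xi  = 1
    return eta, phi, xi

for t in range(1000):                  # short run; the paper evolves for ~10^6 steps
    eta, phi, xi = cds_step(eta, phi, xi, t)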
To verify the above microphase transition, we calculate the domain size R_i(t) (i=x or y) in the x or y direction as a function of time. The domain sizes R_i(t) can be derived from the inverse of the first moment of the structure factor S(k,t) as R_i(t)=2/⟨ k_i(t)⟩, where ⟨|k_i(t)|⟩=∫ dk k_iS(k,t)/∫ dk S(k,t). The structure factor S(k,t) is decided by the Fourier component of the spatial concentration distribution. Figure <ref> shows the time evolution of the microdomain sizes R_i(t) in the x and y directions in double-logarithmic plots. The results are the average values for 10 independent runs. In figure <ref>(a), with an increase in the shear frequency, the domain size R_x of the AB microphase domain along the x direction in the equilibrium state first increases (curves a–c) and then decreases (curves d–f). This indicates that the coarsening degree of the AB domain increases gradually along the oscillating field (first for small values of ω) as the shear frequency increases. Then, when ω increases to a certain extent, the shear frequency is so high that the AB domains push each other, resulting in coarsening along the y direction. At this moment, the coarsening along the direction of the oscillating shear field is inhibited. The coarsening perpendicular to the oscillatory shear of the AB phase becomes increasingly apparent, until it grows completely perpendicular to the oscillatory shear. By contrast, the domain size R_y of the AB microphase domain along the y direction first decreases (curves a–c) and then increases (curves d–f) as the shear frequency increases, as shown in figure <ref>(b). Furthermore, when the oscillating shear field increases to a certain extent, the AB phase occupying a larger proportion coarsens along the direction perpendicular to the oscillating shear field; however, this coarsening is inhibited when the oscillating shear field is weak. In addition, the obtained domain structure is stable, as indicated by the growth curve. §.§ Evolution progress We scrupulously examine the formation process of the lamellar structure with the AB phase (perpendicular lamellae) and CD phase (parallel lamellae) perpendicular to each other. Simulation snapshots obtained at different times for ω=0.66 are shown in figure <ref>. The growth curves of the domain sizes along the x- and y-axes are presented in figure <ref>. Figure <ref> presents the morphology evolution of the domain structure of the polymer system over time. In the evolution process, the macrophase separation is accompanied by microphase separation. The phase separation is not obvious at t=250000. At t=500000, there is an obvious microphase separation when the macrophase separation occurs. At this time, most of the AB phase is a tilted layered structure, which has a certain angle with the shear direction, except for a small part around the CD phase. The CD phase is disordered in the ring. At t=2000000, the AB phase tends to grow perpendicular to the oscillating shear field and becomes more ordered than before, while the CD phase is completely parallel to the direction of the oscillating shear field, forming a parallel layered structure. In the last stage, the AB phase grows completely perpendicular to the oscillating field and finally forms a layered structure of the AB phase (vertical layer) and the CD phase (parallel layer) perpendicular to each other. 
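The domain sizes quoted above follow from the first moment of the structure factor. A small sketch (ours, with a synthetic test field as an illustrative assumption) of one way to evaluate R_x and R_y from a snapshot of an order-parameter field via a two-dimensional FFT, using the normalisation R_i = 2/⟨|k_i|⟩ as printed above:

import numpy as np

def domain_sizes(field):
    """R_x, R_y from the first moment of the structure factor,
    following R_i = 2/<|k_i|> with <|k_i|> = sum(|k_i| S(k)) / sum(S(k))."""
    L = field.shape[0]
    S = np.abs(np.fft.fft2(field - field.mean()))**2
    S[0, 0] = 0.0                                   # remove the k = 0 (mean) mode
    k = 2.0 * np.pi * np.fft.fftfreq(L)
    ky, kx = np.meshgrid(k, k, indexing="ij")       # axis 0 = y, axis 1 = x
    Rx = 2.0 / (np.sum(np.abs(kx) * S) / np.sum(S))
    Ry = 2.0 / (np.sum(np.abs(ky) * S) / np.sum(S))
    return Rx, Ry

# Illustrative check: lamellae parallel to the x axis (wavevector along y)
L = 128
rng = np.random.default_rng(1)
y = np.arange(L)
field = (np.sin(2 * np.pi * 8 * y / L)[:, None] * np.ones((1, L))
         + 0.05 * rng.standard_normal((L, L)))
Rx, Ry = domain_sizes(field)
print(f"R_x = {Rx:.1f}, R_y = {Ry:.1f}   (parallel layers give R_x >> R_y)")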
Additionally, the evolution of the domain sizes in the x and y directions of the AB phase with the vertical layer and the CD phase with the parallel layer is examined, as shown in figure <ref>. Figure <ref>(a) presents the growth of the domain size of the AB phase as a function of time in a double-logarithmic plot, corresponding to the black and white regions in figure <ref>(f). Figure <ref>(b) presents the growth of the domain size of the CD phase as a function of time in a double-logarithmic plot, corresponding to the light gray and dark gray regions in figure <ref>(f). R_y > R_x shown in figure <ref>(a) indicates that the domain size of the AB phase in the y direction is considerably larger than that in the x direction, which verifies the vertical layered structure of the AB phase in figure <ref>(d). Meanwhile, R_x≫ R_y shown in figure <ref>(b) indicates that the domain size of the CD phase in the x direction is much larger than that in the y direction, which verifies the parallel layered structure of the CD phase in figure <ref>(d). §.§ Effect of oscillating shear field on composite system with different ratios To explore the effect of the concentration of DBC–DBC on the self-assembled structure, the initial concentration of the AB and CD block is changed to f_AB/CD=35/65. It is found that the self-assembly structure of the composite system under the oscillating shear field is completely different from that in f_AB/CD=7/3. Figure <ref> presents the microphase transformation of the composite system with an increase in the shear amplitude when the shear frequency is fixed at ω=0.00001. As shown, the phase enclosed in the ring transforms from the original CD phase into the AB phase, and the macroscopic phase interfaces of both the AB and CD phases are the ring structure. When the shear amplitude is small, the AB phase forms the tilted layered structure far from the phase interface, while the CD phase is a bicontinuous structure [figures <ref>(a)–(b)]. The AB phase in the ring gradually becomes ordered along the direction of the oscillating shear field as the shear amplitude increases to 0.022 [figure <ref>(c)]. With a further increase in the oscillating shear field, the AB and CD phases of the composite system coarsen along the x direction, forming an ordered parallel layered structure, except for the ring structure of the macrophase interface [figure <ref>(d)]. The parallel layered structure is disorganised, and the whole system is tilted in the y direction [figures <ref>(e)–(f)]. The aforementioned results indicate that the influence of the oscillating shear field on the domain structure of the composite system is weak — even negligible — when the shear amplitude is small. The phase morphology of the composite system is essentially identical to that without the oscillating shear field. The oscillating field starts to take effect as the shear amplitude increases, and the phase separation of the system is disturbed. Then, the whole system tends to coarsen in the x direction. When the shear amplitude increases to a certain extent, the system is completely parallel to the oscillating shear field except for the ring structure of the macrophase interface, exhibiting a perfect parallel layered structure. Finally, the amplitude is so large that the composite system accumulates in the direction perpendicular to the oscillating shear field, resulting in the whole system being tilted in the y direction. 
To further investigate the formation process of the parallel layered structure inside and outside the ring of the AB and CD phases, the evolution of the domain structure over time is discussed, and the growth curves of the CD phase in the x and y directions are analysed, as shown in figures <ref> and <ref>. Figure <ref> presents the morphology evolution of DBC–DBC with f_AB/CD=35/65, ω=0.00001, and γ=0.032. As shown, the phase separation of the system is not obvious, and macrophase separation first occurs at t=10000 [figure <ref>(a)]. Over time, microphase separation occurred simultaneously with macrophase separation. At this time, the AB phase is a lamellar structure inside the ring, while the CD phase is a bicontinuous layered structure [figures <ref>(b)–(c)]. At t=3000000, the system is anisotropic and exhibits a stable and ordered parallel layered structure, except for the ring structure of the macrophase interface. Figure <ref> shows the growth of the domain size of the CD phase in the x and y directions as a function of time in double-logarithmic plots. R_x≫ R_y indicates that the domain size in the x direction is far larger than that in the y direction, corresponding to the structure in figure <ref>(d). § CONCLUSIONS The self-assembly behaviour of DBC–DBC under an oscillating shear field is investigated via cell dynamics simulation. The results indicate that the macrophase separation of the composite system is accompanied by the corresponding microphase separation induced by the oscillating shear field. When the shear frequency increases, the AB phase changes from a tilted layered structure to a parallel layered structure, and finally to a vertical layered structure. The CD phase transforms from the initial concentric ring into a parallel layer in the ring and then into a parallel layered structure; thus, the system finally consists of a layered structure of the AB phase (vertical layer) and CD phase (parallel layer) perpendicular to each other. This structural transformation can be explained as follows. The increase in the shear frequency can accelerate the coarsening of the microdomain along the oscillating shear field; however, the different components push each other, resulting in coarsening perpendicular to the oscillating shear field when the frequency is too high. The AB phase occupying a larger proportion is significantly affected by the oscillating shear field, which results in an easy pileup in the direction perpendicular to the oscillating shear field, forming the perpendicular layered structure. Since the CD phase is already confined in the ring, it breaks the restraint of the ring at a higher shear frequency and forms the parallel layered structure along the oscillating field. To verify this phase transition, the dynamic evolution of the domain size at different shear frequencies is analysed, revealing that it is identical to the previous phase transition. As for the formation of the layered structure of the AB phase (vertical layer) and CD phase (parallel layer) perpendicular to each other, the morphology evolution and the dynamic evolution of the domain growth are also examined. Moreover, the ordered phase transition with an increase in the oscillating shear field is different when the initial composition ratio of the system is changed. This conclusion provides a valuable guidance for the formation and transformation of ordered structures in experiments. § ACKNOWLEDGEMENTS Project supported by the Basic Research Plan of Shanxi Province, China (Grant No. 
202103021223386), the Graduate Education Teaching Program of Shanxi Province, China (Grant No. 2022YJJG301). 99 key1 Bates C. M., Bates F. S., Macromolecules, 2017, 50, 3, 10.1021/acs.macromol.6b02355. key2 Thorkelsson K., Bai P., Xu T., Nano Today, 2015, 10, 48, 10.1016/j.nantod.2014.12.005. key3 Jacoby M., Chem. Eng. News, 2014, 92, 8, 10.1021/cen-09220-cover. key4 Elabd Y. A., Hickner M. A., Macromolecules, 2011, 44, 1, 10.1021/ma101247c. key5 Yang S. Y., Yang J. A., Kim E. S., Jeon G., Oh E. J., Choi K. Y., Hahn S. K., Kim J. K., ACS Nano, 2010, 4, 3817, 10.1021/nn100464u. key6 Gu W., Hun J., Hong S. W., Sveinbjornsson B. R., Park C., Grubbs R. H., Russell T. P., ACS Nano, 2015, 9, 7729, 10.1021/acsnano.5b03233. key7 Shao Z., Zhang D., Hu W., Xu Y., Li W., Polymer, 2019, 177, 202, 10.1016/j.polymer.2019.05.062. key8 Shi L. Y., Lan J., Lee S., Cheng L. C., Yager K. G., Ross C. A., ACS Nano, 2020, 14, 4289, 10.1021/acsnano.9b09702. key9 Jun T., Lee Y., Jo S., Ryu C. Y., Ryu D. Y., Macromolecules, 2018, 51, 282, 10.1021/acs.macromol.7b01946. key10 Hickey R. J., Gillard T. M., Lodge T. P., Bates F. S., ACS Macro Lett., 2015, 4, 260, 10.1021/acsmacrolett.5b00014. key11 Habersberger B. M., Gillard T. M., Hickey R. J., Lodge T. P., Bates F. S., ACS Macro Lett., 2014, 3, 1041, 10.1021/mz500531y. key12 Irwin M. T., Hickey R. J., Xie S., So S., Bates F. S., Lodge T. P., Macromolecules, 2016, 49, 6928, 10.1021/acs.macromol.6b01553. key13 Zheng C., Zhang B., Bates F. S., Lodge T. P., Macromolecules, 2022, 55, 4766, 10.1021/acs.macromol.2c00518. key14 Zhang B., Xie S., Lodge T. P., Bates F. S., Macromolecules, 2021, 54, 460, 10.1021/acs.macromol.0c01745. key15 Wang Z., Sun S., Li C., Hu S., Faller R., Soft Matter, 2017, 13, 5877, 10.1039/c7sm01194f. key16 Sun M., Zhang J., Wang B., Wu H., Pan J., Phys. Rev. E, 2011, 84, 011802, 10.1103/PhysRevE.84.011802. key17 Wright D. B., Patterson J. P., Pitto-Barry A., Lu A., Kirby N., Gianneschi N. C., Chassenieux C., Colombani O., O'Reilly R. K., Macromolecules, 2015, 48, 6516, 10.1021/acs.macromol.5b01426. key18 Komura S., Kodama H., Phys. Rev. E, 1997, 55, 1722, 10.1103/PhysRevE.55.1722. key19 Roan J. R., Shakhnovich E. I., Phys. Rev. E, 1999, 59, 2109, 10.1103/PhysRevE.59.2109. key20 Martínez-Veracoechea F. J., Escobedo F. A., Macromolecules, 2009, 42, 1775, 10.1021/ma802427a. key21 Liu D., Dai L., Duan X., Shi T., Zhang H., Chem. J. Chin. Univ., 2015, 36, 1752, 10.7503/cjcu20150202. key22 Xie J., Shi A., Giant, 2021, 5, 100043, 10.1016/j.giant.2020.100043. key23 Su W. C., Wu Y. S., Wang C. F., Kuo S. W., Crystals, 2018, 8, 330, 10.3390/cryst8080330. key24 Fan J. J., Yu X. L., Liang X. M., Acta Phys. Sin., 2013, 62, 158105, 10.7498/aps.62.158105. key25 Pan J. X., Zhang J. J., Wang B. F., Wu H. S., Sun M. N., Chin. Phys. B, 2013, 22, 026401, 10.1088/1674-1056/22/2/026401. key26 Pan J. X., Zhang J. J., Wang B. F., Wu H. S., Sun M. N., Chin. Phys. Lett., 2013, 30, 046401, 10.1088/0256-307X/30/4/046401. key27 Pinna M., Zvelindovsky A. V., Todd S., Goldbeck-Wood G., J. Chem. Phys., 2006, 125, 154905, 10.1063/1.2356468. key28 Pinna M., Zvelindovsky A. V., Guo X., Stokes C. L., Soft Matter, 2011, 7, 6991, 10.1039/c1sm05478c. key29 Dessí R., Pinna M., Zvelindovsky A. V., Macromolecules, 2013, 46, 1923, 10.1021/ma400124j. key30 Chen Y., Xu Q., Jin Y., Qian X., Ma R., Liu J., Yang D., Soft Matter, 2018, 14, 6635, 10.1039/c8sm00833g. key31 Juan Y. T., Lai Y. F., Li X., Tai T. C., Lin C. H., Huang C. F., Li B., Shi A. C., Hsueh H. 
Y., Macromolecules, 2023, 56, 457, 10.1021/acs.macromol.2c02086. key32 Guo Y., Zhang J., Wang B., Wu H., Sun M., Pan J., Condens. Matter Phys., 2015, 18, 23801, 10.5488/CMP.18.23801. key33 Borthakur M. P., Nath B., Biswas G., Phys. Rev. Fluids, 2021, 6, 023603, 10.1103/PhysRevFluids.6.023603. key34 Majidi M., Bijarchi M. A., Arani A. G., Rahimian M. H., Shafii M. B., Int. J. Multiphase Flow, 2022, 146, 103846, 10.1016/j.ijmultiphaseflow.2021.103846. key35 Guo Y. Q., Pan J. X., Zhang J. J., Sun M. N., Wang B. F., Wu H. Sh., Condens. Matter Phys., 2016, 19, 33601, 10.5488/CMP.19.33601. key36 Wang K. Y., Ma C. Y., Yu H. M., Zhang H. T., Cen J. Y., Wang Y. Y., Pan J. X., Zhang J. J., Acta Phys. Sin., 2023, 72, 079401, 10.7498/aps.72.20222207. key37 Kamkar M., Salehiyan R., Goudoulas T. B., Abbasi M., Saengow C., Erfanian E., Sadeghi S., Natale G., Rogers S. A., Giacomin A. J., Sundararaj U., Prog. Polym. Sci., 2022, 132, 101580, 10.1016/j.progpolymsci.2022.101580. key38 Ginzburg V. V., Qiu F., Paniconi M., Peng G., Jasnow D., Balazs A. C., Phys. Rev. Lett., 1999, 82, 4026, 10.1103/PhysRevLett.82.4026. key39 Ginzburg V. V., Peng G., Qiu F., Jasnow D., Balazs A. C., Phys. Rev. E, 1999, 60, 4352, 10.1103/PhysRevE.60.4352. key40 Ito A., Phys. Rev. E, 1998, 58, 6158, 10.1103/PhysRevE.58.6158. key41 Ohta T., Ito A., Phys. Rev. E, 1995, 52, 5250, 10.1103/physreve.52.5250. key42 Ohta T., Nozaki H., Doi M., J. Chem. Phys., 1990, 93, 2664, 10.1063/1.458905. key43 Oono Y., Puri S., Phys. Rev. Lett., 1987, 58, 836, 10.1103/PhysRevLett.58.836. key44 Oono Y., Puri S., Phys. Rev. A, 1988, 38, 434, 10.1103/PhysRevA.38.434. key45 Puri S., Oono Y., Phys. Rev. A, 1988, 38, 1542, 10.1103/PhysRevA.38.1542. key46 Shinozaki A., Oono Y., Phys. Rev. A, 1992, 45, R2161, 10.1103/PhysRevA.45.R2161. key47 Doi M., Chen D., J. Chem. Phys., 1989, 90, 5271, 10.1063/1.456430. key48 Chen D., Doi M., J. Chem. Phys., 1989, 91, 2656, 10.1063/1.456975. key49 Ohta T., Nozaki H., Doi M., J. Chem. Phys., 1990, 93, 2664, 10.1063/1.458905. key50 Ohta T., Enomoto Y., Harder J. L., Doi M., Macromolecules, 1993, 26, 4928, 10.1021/ma00070a029. key51 Corberi F., Gonnella G., Lamura A., Phys. Rev. E, 2000, 62, 8064, 10.1103/PhysRevE.62.8064. key52 Luo K., Yang Y., Polymer, 2004, 45, 6745, 10.1016/j.polymer.2004.07.059. key53 Pinna M., Zvelindovsky A. V., Todd S., Goldbeck-Wood G., J. Chem. Phys., 2006, 125, 154905, 10.1063/1.2356468. key54 Li W., Dong B., Yan L., Macromolecules, 2013, 46, 7465, 10.1021/ma4009884. key55 Ohta T., Kawasaki K., Macromolecules, 1986, 19, 2621, 10.1021/ma00164a028. label1 , , ˳, 033001, label2 쳿 , , , 030006, § ABSTRACT =3000 - 䳺 . , , . dz , , . CD , — . , ( ) CD ( ), . . , . . , ,
http://arxiv.org/abs/2406.18329v1
20240626131834
Quasinormal modes, thermodynamics and shadow of black holes in Hu-Sawicki f(R) gravity theory
[ "Ronit Karmakar", "Umananda Dev Goswami" ]
gr-qc
[ "gr-qc" ]
[Email: ]ronit.karmakar622@gmail.com Department of Physics, Dibrugarh University, Dibrugarh 786004, Assam, India [Email: ]umananda@dibru.ac.in Department of Physics, Dibrugarh University, Dibrugarh 786004, Assam, India § ABSTRACT We derive novel black hole solutions in a modified gravity theory, namely the Hu-Sawicki model of f(R) gravity. After obtaining the black hole solution, we study the horizon radius of the black hole from the metric and then analyse the dependence of the model parameters on the horizon. We then use the 6th-order WKB method to study the quasinormal modes of oscillations (QNMs) of the black hole perturbed by a scalar field. The dependence of the amplitude and damping part of the QNMs are analysed with respect to variations in model parameters and the errors associated with the QNMs are also computed. After that we study some thermodynamic properties associated with the black hole such as its thermodynamic temperature as well as greybody factors. It is found that the black hole has the possibility of showcasing negative temperatures. We also analyse the geodesics and derive the photon sphere radius as well as the shadow radius of the black hole. The photon radius is independent of the model parameters while the shadow radius showed a fair amount of dependence on the model parameters. We tried to constrain the parameters with the help of Keck and VLTI observational data and obtained some bounds on m and c_2 parameters. Quasinormal modes, thermodynamics and shadow of black holes in Hu-Sawicki f(R) gravity theory Umananda Dev Goswami 0000-0003-0012-7549 Received 7 March 2024 / Accepted 23 May 2024 ============================================================================================== § INTRODUCTION General relativity (GR) has undoubtedly been successful in accounting for observational results in the solar system and beyond <cit.>. GR theory predicts with great accuracy the precession of the perihelion of planet Mercury <cit.> and bending of light due to gravitational field <cit.> in the local as well as distant observations. GR has predicted the existence of black holes and gravitational waves (GWs) which has been recently experimentally verified by the LIGO-Virgo collaboration <cit.>. The recent direct images of the black hole shadows published by the Event Horizon Telescope (EHT) group <cit.> also back GR in terms of experimental verification of the theory. In spite of these successes, GR fails to address recent observations like the accelerated expansion of the Universe <cit.>. Indeed, it does not provide any insights regarding the dark components of the Universe <cit.>. Thus, to overcome these issues, physicists worked on modified theories of gravity, the most common among them includes the ΛCDM <cit.> model, f(R) gravity theory <cit.>, f(R,T) gravity theory <cit.>, f(Q) gravity theory <cit.> and so on (see <cit.>). These theories can compensate for the effects of dark components <cit.>, explain galactic rotation curves <cit.>, accelerated expansion of the Universe <cit.> and are well constrained by modern observations. Rastall gravity proposed in 1972 is arguably a unique modified theory of gravity which does not follow from an established Lagrangian formalism <cit.>. It advocates the violation of conservation of energy-momentum tensor T_μν and equates it to the derivative of the Ricci scalar R. 
In recent times, a number of f(R) gravity models have been proposed, some of them include Starobinsky <cit.>, Hu-Sawicki <cit.>, Sujikawa <cit.> and other two models mentioned in Refs. <cit.> to name a few. These models have been extensively studied in the literature regarding various aspects like their cosmological and astrophysical implications <cit.>, dynamical system analysis <cit.>, early Universe mysteries <cit.> and so on. Similarly, f(Q) gravity has also attracted a lot of attention in the recent times and many cosmological studies have been carried out in this theory, for instance see Refs. <cit.> and references therein. The first vacuum solution of the Einstein field equations leading to a black hole was given by Schwarzschild in 1916 <cit.>. Since then, a number of black hole solutions have been proposed from time to time in various frameworks of gravity. Black holes are often studied with an engulfing field around them. These fields may include quintessence fluid <cit.>, matter in the form of dust, radiation <cit.>, plasma <cit.>, dark matter halo <cit.> and so on. These surrounding fields have impacts on various thermodynamic properties, quasinormal modes (QNMs), shadow radius etc. of black holes and have been extensively studied in the literature. In a recent paper <cit.>, GUP-improved Schwarzschild-type solution, its thermodynamic properties and quasinormal modes have been studied. In another work <cit.>, a Schwarzschild-type black hole in Bumblebee gravity has been considered and its thermodynamics and shadow have been studied. Black hole solutions have been derived in the framework of f(R) gravity in many recent papers. In Ref. <cit.>, Saffari and Rahvar derived novel black hole solutions in the f(R) framework and also proposed a novel f(R) form that is feasible in both local and galactic scales. In Ref. <cit.>, the authors derived black hole solutions in various f(R) models. In a recent work <cit.>, novel black hole solutions were derived in various f(R) models and the authors studied topological and thermodynamic properties of the solutions obtained. Motivated by these ongoing researches, we derive novel black hole solutions in Hu-Sawicki gravity. Here, we intend to study various properties relating to the black hole solutions obtained. Our solution is unique in the sense that the black hole solution for the Hu-Sawicki model of f(R) gravity has not been worked out before to the best of our knowledge and thus we are motivated to study its properties including QNMs, thermodynamics as well as shadow radius and greybody factors. Further, the black hole's shadow also has gained a fair amount of attention, credits to the recently released data and images of the black holes at the center of M87 galaxy and Sgr A. This has opened up a new window to constrain various theories of gravity and parameter values as well. Recently in Ref. <cit.>, different parameters of modified theories of gravity have been constrained using modern shadow radius data. Another work <cit.> constraints regular black hole parameters with shadow data of the EHT. Recent works regarding black hole shadows have gained momentum as shadow provides interesting new insights and data to constrain black hole physics <cit.>. QNMs of oscillations of a perturbed black hole hold promise of constraining physics at the extreme regimes of black holes. The QNMs are basically complex frequencies linked to GWs produced when a black hole is perturbed by some external means. 
There has been an upsurge of research regarding various aspects of QNMs, new techniques of computing QNMs, their relationships with shadow data and so on. The WKB method of computing QNMs is the most widely used technique, though many spectral and analytical techniques are often used in combination. There has been a wide range of applications of QNMs in understanding various phenomena such as testing the No-Hair theorem <cit.> and constraining theories of modified gravity <cit.>. QNMs can also be used to study the stability of background spacetime when it is acted upon by a minute perturbation <cit.>. The relation between shadow radius and QNMs has been dealt with in Ref. <cit.>. QNMs and Hawking radiation sparsity for GUP-corrected black holes with topological defects have been studied in Ref. <cit.>. A brief account of various methods employed in recent times to compute the QNMs can be found in the Refs. <cit.>. Black hole thermodynamics has gained momentum and attracted a lot of attention following the path-breaking work of Bekenstein and Hawking <cit.>. Their idea led to the development of four laws of black hole thermodynamics. Recently a number of research works have been carried out in this field. Schwarzschild black holes with quantum corrections have been investigated for scattering and absorption cross-section <cit.>. In Ref. <cit.>, the authors studied absorption and scattering by a black hole with a global monopole in f(R) gravity. Recently, thermodynamic properties of extended GUP-corrected black holes has been carried out in Ref. <cit.>. Thermodynamics of static dilaton black holes have been studied in Ref. <cit.>. In this work, we derive black hole solutions in the Hu-Sawicki model of f(R) gravity and study its thermodynamic properties along with its QNMs using the 6th-order WKB method. We compute the shadow radius and present the plots of its variation with respect to different model parameters. The primary motivation for choosing the Hu-Sawicki model is that black hole solutions have not been worked out in this model, and thus it is really intriguing to study the properties of such a solution. The Hu-Sawicki model is a viable choice as it is observationally consistent in cosmological scales <cit.>. It is consistent with the solar system tests and thus shows viability in the local scales as well <cit.>. The choice of this model is thus motivated by viability and ongoing research works that utilize this theory to study various aspects of astrophysics and cosmology <cit.>. The plan of the paper is as follows. In the second section, we introduce the field equations in the f(R) gravity framework and briefly discuss the method of solving the equations. After attaining the black hole solution, we move to the third section where we compute QNMs of the black hole. Then in the fourth section, we discuss the thermodynamic properties including temperature, entropy and heat capacity along with greybody factors. Then in the fifth section, we compute the shadow radius and plot it for variations in parameters. Finally, we conclude the work with a brief summary and future scopes. § FIELD EQUATIONS IN F(R) GRAVITY THEORY The field equations for the f(R) gravity theory will be presented here in the spherically symmetric spacetime by adopting the metric formalism of the theory, in which the variation of action is done with respect to the metric only. 
The f(R) gravity field equation can be obtained from an action in which the Ricci scalar R in the Einstein-Hilbert action is replaced by some function f(R) of R. Thus the generic action of the f(R) gravity theory can be written as <cit.>: S=1/2κ∫ d^4 x √(-g) f(R) + S_m, where κ = 8π G c^-4 and S_m is the matter part of the action. As mentioned already, taking the variation of the above action (<ref>) with respect to the metric g_μν, one can obtain the field equations of f(R) gravity as F R_μν-1/2f(R) g_μν-(∇_μ∇_ν-g_μν□)F=κ T_μν, where F=df(R)/dR and □=∇_α∇^α. Taking the trace of this (<ref>), we can write the function f(R) as f(R) = 1/2(3 □ F + FR - κ T). The derivative of this Eq. (<ref>) with respect to the radial coordinate r leads to an equation in terms of F and R as given by F' R-F R'+3(□ F)'=κ T', where the prime denotes the derivative with respect to the radial coordinate r. This equation will serve as a consistency relation for the function F that any solution for F must satisfy this relation in order to be a solution of the field equations, Eq. (<ref>). Further, using Eq. (<ref>) in Eq. (<ref>), the field equations can be expressed in terms F instead of f(R) as R_μν-1/4 g_μνR=κ/F(T_μν-1/4 g_μνT)+1/F(∇_μ∇_ν F-1/4 g_μν□ F). Considering the case of the vacuum where the energy-momentum tensor and its trace vanish, we can rewrite the above equation as F R_μν-∇_μ∇_ν F=1/4 g_μν(F R-□ F). Since we are interested in the solution of this time-independent spherically symmetric vacuum field equations following the procedure adopted in Ref. <cit.>, we consider a generic spherically symmetric metric in the form: g_μν=[ -N(r) 0 0 0; 0 M(r) 0 0; 0 0 r^2 0; 0 0 0 r^2 sin^2 θ; ], where N(r) and M(r) are metric coefficients to be determined, associated with the time and space components of the metric respectively which are indeed functions of r. For this spherically symmetric metric, both sides of Eq. (<ref>) become diagonal and accordingly, we can define an index independent parameter from this equation as P_μ≡F R_μμ-∇_μ∇_μ F/g_μμ. As this quantity P_μ is independent of indices, we can have P_μ-P_ν=0 for all μ and ν values and hence from this property one can obtain the following expressions: 2F X'/X+r F' X'/X-2 r F” = 0, N”+(F'/F-X'/2X)N'-2/r(F'/F-X'/2X)N-2/r^2N + 2/r^2 = 0. Here X=MN. In this work, our solution is considered to have constant curvature for the sake of simplicity. Hence the terms F' and F” vanish and the field Eqs. (<ref>) and (<ref>) take the forms: N M'+N' M = 0, 1-M+r/2(N'/N+M'/M)(r/2N'/N-1)-r^2 N”/2N = 0. Solving these two Eqs. (<ref>) and (<ref>), one can obtain: M(r)=s_1/N(r), and N(r)=s_1 + s_2/r+s_3 r^2, where s_1, s_2 and s_3 are constants of integration. In order to get these coefficients, we follow the procedure in Refs. <cit.> and compare the second solution with the standard Schwarzschild-de Sitter solution. The standard Schwarzschild-de Sitter black hole metric coefficient is <cit.> C(r)=1-2M/r-Λ r^2/3. Again, the relationship between the scalar curvature and cosmological constant is <cit.> R=- 4Λ. Now, comparing the second solution in Eq. (<ref>) with Eq. (<ref>), we have s_1 = 1, s_2 = - 2M, s_3 = - Λ/3 = R/12. From (<ref>), considering the vacuum case and constant curvature R_0, we have R_0=2f(R_0)/F(R_0). As mentioned earlier, the f(R) gravity model we employed in our work is the Hu-Sawicki model <cit.>, which is given by f(R)=- m^2c_1(R/m^2)^n/c_2 (R/m^2)^n +1, where m, n (>0), c_1 and c_2 are the model parameters. 
Here c_1 and c_2 are dimensionless and m represents the mass (energy) scale <cit.>. For this model, we solve Eq. (<ref>) to get the constant curvature R_0=12 s_3=m^2 ((n-2)/(2 c_2))^1/n. Thus, we arrive at our black hole solution for the Hu-Sawicki model as N(r)=1-2M/r+ (m^2/12)((n-2)/(2c_2))^1/n r^2. It is clear that our black hole solution is independent of the Hu-Sawicki model parameter c_1. Fig. <ref> shows the metric function versus radial distance for the other two Hu-Sawicki model parameters m and c_2, while taking the parameter n=1 for simplicity (this value is used throughout the study unless otherwise mentioned). In the plots, it is seen that the black hole solution (<ref>) has two horizons for a range of parameter values. The first plot shows that with increasing m, the outer horizon moves closer to the inner horizon, while the second plot shows that for higher values of c_2, the outer horizon increases. Beyond a sufficiently large value of m, or a sufficiently small value of c_2, the horizons disappear and the solution describes a horizonless singularity for the given values of the other parameters. § QUASINORMAL MODES OF THE BLACK HOLE In this section, we compute the QNMs of the black hole (<ref>) using the most widely used technique, the 6th-order WKB approximation method. To this end, we perturb the black hole with a probe scalar field Φ, minimally coupled to the background and obeying the equation of motion <cit.>: 1/√(-g) ∂_α(√(-g)g^αβ∂_β)Φ=μ^2 Φ, where μ is the mass of the scalar field; for convenience, we take the field to be massless, μ=0. We can express the scalar field Φ in terms of spherical harmonics in the form <cit.>: Φ(t,r,θ,ϕ)=e^-i ω tΨ(r)/r Y_l^p(θ,ϕ). Here Ψ(r) represents the radial part of the wave and Y_l^p represents the spherical harmonic part. Employing Eq. (<ref>) in Eq. (<ref>), we get a Schrödinger-type equation, as given below: d^2Ψ/dx^2+[ω^2-V(x)]Ψ=0, where x is the tortoise coordinate, defined as x=∫dr/N(r). The effective potential in Eq. (<ref>) can be expressed as V(r)=N(r)(N'(r)/r+l(l+1)/r^2). Appropriate boundary conditions must be applied to Eq. (<ref>) for physical consistency both at the black hole horizon and at infinity. For an asymptotically flat spacetime, the following quasinormal boundary conditions have to be satisfied: Ψ(x) → A e^+i ω x if x → -∞, B e^-i ω x if x → +∞. Here, the coefficients A and B represent the amplitudes of the waves. These ingoing and outgoing waves are in accordance with the physical requirements that nothing can escape from the black hole horizon and no radiation comes in from infinity, respectively. Further, these conditions ensure the existence of an infinite set of discrete complex frequencies, usually known as the QNMs. To study the behaviour of the potential (<ref>) before calculating the QNMs of the black hole (<ref>), we plot the potential versus r for different variations of the model parameters in Fig. <ref>. As seen from the left plot of Fig. <ref>, the peak of the potential decreases for higher m values. From the middle plot, one can see that increasing values of the parameter c_2 enhance the peak of the potential. A similar trend is seen with the multipole number l, where the peaks are found to increase for higher l values. The QNMs have been calculated using the 6th-order WKB method, and their amplitude and damping are studied as functions of the model parameters. As seen from Fig.
<ref>, the general trend of the amplitude and damping of the QNMs is that both decrease with the parameter m for all values of the multipole number l. On the other hand, Fig. <ref> shows that both the amplitude and the damping increase slightly with the parameter c_2 for all l values. In both cases, the effect of l is more pronounced on the amplitude than on the damping. Moreover, for both the m and c_2 variations, the amplitude increases, while the damping decreases, with increasing l. We compute the error associated with the WKB QNMs using a formula that has been used extensively in the literature <cit.>: Δ_6=|WKB_7-WKB_5|/2, where WKB_5 and WKB_7 are respectively the QNMs obtained from the 5th and 7th order WKB method. In Table <ref>, we present the 6th-order WKB QNMs along with the associated errors for various values of the model parameters and the multipole number l. It is clear that the errors are reduced for higher multipole numbers l. The trends in the variation of the QNMs with respect to the different parameters seen in Figs. <ref> and <ref> are also displayed in the tabulated data. The estimated errors in most cases lie around 10^-4-10^-5. § THERMODYNAMIC CHARACTERISTICS OF THE BLACK HOLE As mentioned earlier, the black hole as a thermodynamic system was first conceptualised in the ground-breaking work of Hawking and Bekenstein in the early 1970s. In this paper, we analyse the black hole temperature and the greybody factors, which are important properties that give useful insights in this regard. The temperature of a black hole is an important property associated with the quantum particles created near its horizon. It is inversely related to the size (mass) of the black hole; that is, a larger black hole has a lower temperature. Hawking conceptualised the temperature of a black hole in the form of radiation, which is today referred to as Hawking radiation. It remains a challenge to detect such radiation experimentally. We can theoretically compute the black hole temperature from the metric solution (<ref>) by employing the simple relation: T_BH=N'(r_H)/4π =1/(4π r_H^2)[2M+(m^2/12)((n-2)/c_2)^1/n r_H^3]. It can also be calculated using the first law of black hole thermodynamics, T_BH=dM/dS, which yields the same expression: T_BH = 1/(4π r_H^2)[2M+(m^2/12)((n-2)/c_2)^1/n r_H^3]. This confirms our computation of the black hole temperature from the first law. We plot the thermodynamic temperature (<ref>) with respect to the horizon radius r_H in Fig. <ref>. Here, we see clearly that the temperature of the black hole always decreases with the horizon radius r_H. The left plot shows the temperature variations with respect to r_H for three different values of the parameter m. It is seen that higher values of m lead to negative temperatures, while for the parameter c_2, lower values lead to negative temperatures, as can be seen from the right panel of Fig. <ref>. Though a negative temperature seems unphysical, it has been encountered in the literature and interpreted as a possible state of formation of ultra-cold black holes <cit.>.
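Before moving on to the greybody factors, the sign change of the temperature noted above can be made concrete with a small numerical sketch. Eliminating M through N(r_H)=0 gives T(r_H)=(1+3 s_3 r_H^2)/(4π r_H) with s_3=(m^2/12)((n-2)/(2c_2))^{1/n}; the parameter values below are illustrative choices of our own, not the ones used in the figures:

# Numerical sketch (illustrative parameter values, not the paper's data):
# using N(r_H) = 0 to eliminate M, the Hawking temperature becomes
#   T(r_H) = N'(r_H)/(4*pi) = (1 + 3*s3*r_H**2) / (4*pi*r_H),
# with s3 < 0 for n = 1, so T changes sign once r_H**2 > -1/(3*s3).
# Larger m (or smaller c2) moves the sign change to smaller r_H,
# in line with the behaviour described in the text.
import numpy as np

def s3_coefficient(m, c2, n=1):
    # n = 1 keeps the base (n-2)/(2*c2) a negative real raised to the power 1
    return (m**2 / 12.0) * ((n - 2.0) / (2.0 * c2)) ** (1.0 / n)

def hawking_temperature(r_H, m, c2, n=1):
    s3 = s3_coefficient(m, c2, n)
    return (1.0 + 3.0 * s3 * r_H**2) / (4.0 * np.pi * r_H)

r_H = np.linspace(0.5, 10.0, 400)
for m in (0.2, 0.4, 0.6):                      # sample values of the mass scale
    T = hawking_temperature(r_H, m, c2=1.0)
    negative = np.where(T < 0.0)[0]
    r_flip = r_H[negative[0]] if negative.size else None
    print(f"m = {m}: temperature first turns negative near r_H = {r_flip}")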
The greybody factor, or transmission coefficient, is a measure of the probability that a particle created by quantum processes near the event horizon of a black hole escapes to infinity rather than being reflected back by the potential barrier. A greybody factor (T^2) equal to 1 means that all of the particles created are able to escape the black hole, while lower values mean that some of them fall back inside it. If T^2=0, the black hole is completely dark and reabsorbs every single particle. The greybody factor has been extensively studied in the literature in various scenarios. We can express the reflection and transmission of the particles hitting the black hole potential barrier in the following form <cit.>: ψ(x) = T(ω) exp (-i ω x), x → -∞, ψ(x) = exp (-i ω x) + R(ω) exp(i ω x), x → +∞, where R(ω) and T(ω) are respectively the reflection and transmission coefficients and are functions of the frequency ω. The WKB approximation is used to obtain computable expressions for these two coefficients <cit.>: |R(ω)|^2 =1/(1+exp(-2π i τ)), |T(ω)|^2 = 1/(1+exp(2π i τ)), where the parameter τ is defined in the WKB method as <cit.>: τ=i (ω^2 -V_0)/√(-2V_0'')-Λ_j. Here V_0 is the maximum of the effective potential, the double prime denotes the second derivative with respect to x evaluated at that maximum, and Λ_j can be obtained from the WKB formula found in Ref. <cit.>. In Fig. <ref>, we plot the greybody factors with respect to the frequency ω for three values of the model parameter m, taking n=1, c_2=1 and M=1 with multipole l=1 (left plot) and l=2 (middle plot). It is seen that for higher m values the greybody factor increases faster with respect to ω than for smaller m values. Also, for the smaller l value (l=1), the increase of the greybody factor is more rapid and begins at a smaller ω value than for the higher l value (l=2). It should be mentioned that the greybody factors are found to be practically insensitive to the value of the parameter c_2, as shown in the right plot of Fig. <ref>. § SHADOW OF THE BLACK HOLE The black hole shadow has been extensively studied in the literature, as it provides a good opportunity to test theories of gravity and black hole physics in the strong-gravity regime. Recent observational data on the black hole shadow radius have provided the scientific community an opportunity to constrain model parameters using these data. In this section, we compute the photon sphere and shadow radius expressions and plot the shadow radius to analyse its dependence on the various model parameters. We also try to constrain the parameter space with the observational data of the EHT group. For a spherically symmetric black hole, the photon sphere radius is determined by the simple condition <cit.>: 2-r N'(r)/N(r)=0. Using the form of N(r) from Eq. (<ref>), we solve Eq. (<ref>) for r to get the photon sphere radius as r_ph=3M. From the photon sphere radius, we can derive the shadow radius as follows: r_sh=r_ph/√(N(r))|_r→ r_ph=3 M/√((3/4) m^2 M^2 ((n-2)/(2 c_2))^1/n+1/3). Obviously, the shadow radius depends on all three model parameters associated with N(r). It is also evident that in the limit m→ 0 we recover the standard r_sh=3√(3)M, which is the shadow radius of the Schwarzschild black hole. Now, for the 2-D stereographic projection of the shadow, we define celestial coordinates X and Y as given by <cit.> X =lim_r_0→∞(- r_0^2 sinθ_0 dϕ/dr|_r_0), Y =lim_r_0→∞(r_0^2 dθ/dr|_(r_0,θ_0)). Here θ_0 is the observer's angular position with respect to the plane of the black hole. In Fig. <ref>, we show the variation of the shadow radius with the parameters m and c_2. It is seen from the left plot that with an increase in m, the shadow radius increases. In the right plot, it is evident that the shadow radius decreases with increasing c_2 values. Thus, the parameters m and c_2 have opposite influences on the shadow radius.
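These two trends can be made quantitative directly from expression (<ref>). The short sketch below evaluates r_sh/M for n=1 with illustrative parameter values of our own choosing (not the values used in the figures):

# Illustrative evaluation (not the paper's data) of the shadow radius
#   r_sh / M = 3 / sqrt( (3/4) * m**2 * M**2 * ((n-2)/(2*c2))**(1/n) + 1/3 )
# for n = 1 and M = 1: r_sh grows with m and shrinks towards the
# Schwarzschild value 3*sqrt(3) ~ 5.196 as c2 grows or m -> 0.
import numpy as np

def shadow_radius_over_M(m, c2, n=1, M=1.0):
    term = 0.75 * m**2 * M**2 * ((n - 2.0) / (2.0 * c2)) ** (1.0 / n)
    return 3.0 / np.sqrt(term + 1.0 / 3.0)

print("Schwarzschild limit:", shadow_radius_over_M(m=0.0, c2=1.0))   # ~5.196
for m in (0.2, 0.4, 0.6):
    print(f"m = {m}, c2 = 1.0 : r_sh/M = {shadow_radius_over_M(m, 1.0):.3f}")
for c2 in (1.0, 2.0, 4.0):
    print(f"m = 0.4, c2 = {c2}: r_sh/M = {shadow_radius_over_M(0.4, c2):.3f}")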
In order to constrain the parameters of the model, we shall employ the technique described in Ref. <cit.>. We briefly present some important steps in this direction. The main point of the methodology is to compare the angular shadow radius of the Sgr A* black hole, as recently measured by the EHT group, with the shadow radius calculated theoretically from expression (<ref>), and thereby constrain the model parameters. This requires a prior value of the mass-to-distance ratio for Sgr A*. The method also requires a calibration factor that relates the observed to the calculated shadow radius. This method has been used to constrain model parameters in the literature <cit.> and we shall follow the same route. The EHT group defined a parameter δ to quantify the fractional deviation of the observed shadow radius r_s from the shadow radius of a Schwarzschild black hole r_sch: δ=r_s/r_sch-1=r_s/(3√(3)M)-1. This parameter was estimated by the Keck and VLTI measurements as <cit.> Keck: δ=- 0.04^+0.09_-0.10, VLTI: δ=- 0.08^+0.09_-0.09. For simplicity, in the rest of this work we adopt the mean of the two observations, as considered in Ref. <cit.>, which is δ=- 0.060 ± 0.065. This leads to the following 1σ and 2σ intervals for δ: - 0.125 ≲ δ≲ 0.005 (1σ), - 0.190 ≲ δ≲ 0.070 (2σ). Imposing the bounds (<ref>) and (<ref>) on Eq. (<ref>) gives the following bounds on r_sh <cit.>: 4.55 ≲ r_sh/M ≲ 5.22 (1σ), 4.21 ≲ r_sh/M ≲ 5.56 (2σ). We plot the shadow radius with the bounds imposed by the observations of Keck and VLTI in Fig. <ref>. The left plot shows that the shadow radius increases with increasing m values, as found in Fig. <ref>. It also shows that for smaller values of c_2, the shadow radius quickly moves into the forbidden region; with increasing c_2 values, the shadow radius remains within the allowed region over a larger range of m. In the right plot, the shadow radius is plotted versus c_2, which shows that the shadow radius decreases with increasing c_2, as observed earlier. In this case the curves lie within the 2σ allowed region, with the exception of larger m and smaller c_2 values, as is clearly visible from the plot. It is evident that the constraints obtained are not rigid but depend on the ranges chosen for the model parameters. This method of constraining the parameters of a theory has been adopted in the literature <cit.> and by the EHT group themselves <cit.>, and provides a robust way of constraining parameters. However, when a model has more than one free parameter, complementary constraining methods are needed so that individual parameters can be pinned down and rigorous constraints obtained. We leave this as a future extension of the work. § SUMMARY AND CONCLUSION In this work, we derive novel black hole solutions in the framework of Hu-Sawicki gravity. We plot the metric function versus r for various values of the model parameters and find that the black hole possesses two horizons. It is seen that higher m values cause the outer horizon to shrink, while the opposite trend is observed for the parameter c_2. We then analyse the QNMs of the novel black hole solution using the 6th-order WKB approximation. The amplitude increases with an increase in c_2 values while it decreases with m. The damping decreases with increasing m while it increases slightly with c_2.
This trend can be realised from the tabulated QNM data in Table <ref>. It is evident that the QNM frequencies are affected by the model parameters. The associated error is found to be around 10^-4 to 10^-5 in some cases. The thermodynamic temperature associated with the black hole is investigated and it is found to decrease with the black hole radius r_H in all cases. The temperature can also become negative, suggesting the possibility of the formation of an ultra-cold black hole. The greybody factors are also computed, especially the transmission coefficients with respect to frequency ω and the dependence of the model parameter m is studied. Higher m results in a swifter increase in the greybody factors towards the saturation value of 1. It is noteworthy that increasing the multipole l lowers the rate of increase of the greybody factors and saturation is achieved at higher ω. The photon radius and the shadow radius associated with the spherically symmetric black hole spacetime are then studied. We presented the stereographic projection of the shadow in the celestial coordinate system and using the contour-type feature, showed the variation of the shadow radius with increasing model parameters m and c_2. Taking into consideration the already established constraints on r_sh by Keck and VLTI observations, we constrain our model parameters using a well-proven scheme. The parameter m is roughly constrained to be less than ∼0.5 while parameter c_2 is constrained to be greater than ∼2, as can be seen from Figure <ref>. The recent technical advancements made in the fields of astrophysics and observational astronomy have made the present era very suitable for theoretical physicists to constrain and test fundamental theories and models, which was not possible until a decade back. With the ground-breaking leaps in the form of the LIGO-Virgo team's observation of GWs in 2015 along with the first-ever image of the black hole M87* and later that of Sgr A*, scientists plan to further enhance the sensitivity of the present detectors as well as new ambitious projects like the space-based LISA project and the Einstein Telescope are already in the planning stages. As a future scope of this work, we can analyse other viable models of gravity like f(R,T) and f(Q), and work on new black hole solutions as well as rotating Kerr-type solutions that can also be explored. Further, the study of black hole shadows surely holds a lot of potential in constraining fundamental physics and it certainly deserves further investigation. § ACKNOWLEDGEMENTS UDG is thankful to the Inter-University Centre for Astronomy and Astrophysics (IUCAA), Pune, India for awarding the Visiting Associateship of the institute. 99 1 R.M. Wald, General Relativity, https://doi.org/10.7208/chicago/9780226870373.001.0001The University of Chicago Press, Chicago, 1984. 2C. M. Will, Was Einstein right? A centenary assessment, https://doi.org/10.48550/arXiv.1409.7871arXiv:1409.7871 [gr-qc] (2004). 3C. M . Will, New General Relativistic Contribution to Mercury’s Perihelion Advance, https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.120.191101Phys. Rev. Lett 120, 191101 (2018). 4C. M. Will, The 1919 measurement of the deflection of light, https://iopscience.iop.org/article/10.1088/0264-9381/32/12/124001Class. Quantum Grav. 32, 124001 (2015). 5B. P. Abbott et al., Observation of Gravitational Waves from a Binary Black Hole Merger, https://doi.org/10.1103/PhysRevLett.116.061102Phys. Rev. Lett. 116, 061102 (2016). 6B. P. 
Abbott et al., Observation of Gravitational Waves from a 22-Solar-Mass Binary Black Hole Coalescence, https://doi.org/10.1103/PhysRevLett.116.241103Phys. Rev. Lett. 116, 241103 (2016). 7B. P. Abbott et al., Observation of Gravitational Waves from a Binary Neutron Star Inspiral, https://doi.org/10.1103/PhysRevLett.119.161101Phys. Rev. Lett. 119, 161101 (2017). 8B. P. Abbott et al., Observation of a Binary-Black-Hole Coalescence with Asymmetric Masses, https://doi.org/10.1103/PhysRevD.102.043015Phys. Rev. D 102, 043015 (2020). 9R. Abbott et al., Observation of Gravitational Waves from Two Neutron Star–Black Hole Coalescences, https://doi.org/10.3847/2041-8213/ac082eApJL 915, L5 (2021). 10The Event Horizon Telescope Collaboration et al., First M87 Event Horizon telescope Results. I. The Shadow of the supermassive Black Hole, https://iopscience.iop.org/article/10.3847/2041-8213/ab0ec7ApJL 875, L1 (2019). 11The Event Horizon Telescope Collaboration et al., First M87 Event Horizon telescope Results. II. Array and Instrumentation, https://iopscience.iop.org/article/10.3847/2041-8213/ab0c96ApJL 875, L2 (2019). 12The Event Horizon Telescope Collaboration et al., First M87 Event Horizon telescope Results. III. Data Processing and Calibration, https://iopscience.iop.org/article/10.3847/2041-8213/ab0c57ApJL 875, L3 (2019). 13The Event Horizon Telescope Collaboration et al., First M87 Event Horizon telescope Results. IV. Image the Central Supermassive Black Hole, https://iopscience.iop.org/article/10.3847/2041-8213/ab0e85ApJL 875, L4 (2019). 14The Event Horizon Telescope Collaboration et al., First M87 Event Horizon telescope Results. V. Physical Origin of the Asymmetric Ring, https://iopscience.iop.org/article/10.3847/2041-8213/ab0f43ApJL 875, L5 (2019). 15The Event Horizon Telescope Collaboration et al., First M87 Event Horizon telescope Results. VI. The Shadow and Mass of the Central Black Hole, https://iopscience.iop.org/article/10.3847/2041-8213/ab1141ApJL 875, L6 (2019). 16V. Faraoni and S. Capozziello, Beyond Einstein gravity: a survey of gravitational theories for cosmology and astrophysics, https://link.springer.com/book/10.1007/978-94-007-0165-6Fundam. Theor. Phys. 170, 1–428 (2010). 17A. G. Riess et al., Observational Evidence from Supernovae for an Accelerating Universe and a Cosmological Constant, https://doi.org/10.1086/300499The Astronomical Journal 116, 1009 (1998). 18S. Perlmutter et al., Measurements of Ω and Λ from 42 High-Redshift Supernovae, https://doi.org/10.1086/307221ApJ 517, 565 (1999) 19C. Pérez de los Heros, Status, Challenges and Directions in Indirect Dark Matter Searches,https://doi.org/10.3390/sym12101648 Symmetry 12, 1648 (2020). 20N. A. Bahcall, The Cosmic Triangle: Revealing the State of the Universe, https://doi.org/10.1126/science.284.5419.1481Science 284, 1481 (1999). 21L. Amendola and S. Tsujikawa, Dark Energy: Theory and Observations, https://www.cambridge.org/core/books/dark-energy/EC55E8BF946C34D61B758273D8286618Cambridge University Press, Cambridge, 2010. 22D. J. Gogoi and U. D. Goswami, A new f(R) gravity model and properties of gravitational waves in it, https://link.springer.com/article/10.1140/epjc/s10052-020-08684-3EPJC 80, 1101 (2020). 23T. P. Sotiriou and V. faraoni, f(R) theories of gravity, https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.82.451Rev. Mod. Phys. 82, 451 (2010). 24T. P. Sotiriou and V. faraoni, f(R) theories of gravity, https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.82.451Rev. Mod. Phys. 82, 451 (2010). 25A. 
De Felice and S. Tsujikawa, f(R) Theories, https://link.springer.com/article/10.12942/lrr-2010-3#citeasLiving Rev. Relativ. 13, 3 (2010). 26T. Harko, F. S. N. Lobo, S. Nojiri and S. D. Odintsov, f(R,T) gravity, https://journals.aps.org/prd/abstract/10.1103/PhysRevD.84.024020Phys. Rev. D 84, 024020 (2011). 27P. Sarmah, A. De and U. D. Goswami, Anisotropic LRS-BI Universe with f(Q) gravity theory, https://www.sciencedirect.com/science/article/abs/pii/S2212686423000432?via 28A. De, S. Mandal, J. T. Beh, T. H. Loo and P. K. Sahoo, Isotropization of locally symmetricBianchi-I universe in f(Q)-gravity, https://epjc.epj.org/articles/epjc/abs/2022/01/10052_2022_Article_10021/10052_2022_Article_10021.htmlEPJC 82, 10052 (2022). 29R. Solanki, A. De, S. Mandal and P. K. Sahoo, Accelerating expansion of the universe in modified symmetric teleparallel gravity, https://doi.org/10.1016/j.dark.2022.101053Phys. Dark Universe 36, 101053 (2022). 30D. J. Gogoi, A. Övgün and M. Koussour, Quasinormal Modes of Black holes in f(Q) gravity, https://doi.org/10.1140/epjc/s10052-023-11881-5EPJC 83, 700 (2023). 31T. P. Sotiriou and V. Faraoni, f(R) theories of gravity, https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.82.451Rev. Mod. Phys. 82, 451 (2010). 32N. Parbin and U. D. Goswami, Scalarons mimicking dark matter in the Hu-Sawicki model of f(R) gravity, https://doi.org/10.1142/S0217732321502655Mod. Phys. Lett. A 36, 37 (2021). 33N. Parbin and U. D. Goswami, Galactic rotation dynamics in a new f(R) gravity model, https://link.springer.com/article/10.1140/epjc/s10052-023-11568-xEPJC 83, 411 (2023). 34R. Myrzakulov, Accelerating universe from F(T) gravity, https://link.springer.com/article/10.1140/epjc/s10052-011-1752-9EPJC 71, 1752 (2011). 35A. Mukherjee and N. Banerjee, Acceleration of the universe in f(R) gravity models, https://link.springer.com/article/10.1007/s10509-014-1949-0Astrophys. Space Sci. 352, 839 (2014). 36P. Rastall, Generalization of the Einstein Theory, https://doi.org/10.1103/PhysRevD.6.3357Phys. Rev. D 6, 3357 (1972). 37A. A. Starobinsky, Disappearing cosmological constant in f(R) gravity, https://link.springer.com/article/10.1134/S0021364007150027JETP 86, 157 (2007). 38W. Hu and I. Sawicki, Models of f(R) cosmic acceleration that evade solar system tests, https://doi.org/10.1103/PhysRevD.76.064004Phys. Rev. D 76, 064004 (2007). 39R. Saffari and S. Rahvar, f(R) Gravity: From the Pioneer Anomaly to the Cosmic Acceleration, https://journals.aps.org/prd/abstract/10.1103/PhysRevD.77.104028Phys. Rev. D 77, 104028 (2008). 40J. -Y. Cen, S. -Y. Chien, C. -Q. Geng and C. -C. Lee, Cosmological evolutions in Tsujikawa model of f(R) Gravity, https://www.sciencedirect.com/science/article/abs/pii/S2212686419301608?via 41D. J. Gogoi and U. D. Goswami, A new f(R) gravity model and properties of gravitational waves in it, https://link.springer.com/article/10.1140/epjc/s10052-020-08684-3EPJC 80, 1011 (2020). 42P. Bessa, M. Campista and A. Bernui, Observational constraints on Starobinsky f(R) cosmology from cosmic expansion and structure growth data, https://link.springer.com/article/10.1140/epjc/s10052-022-10457-zEPJC 82, 506 (2022). 43P. V. Ky, N. T. H. Van and N. A. Ky, Gravitational radiation of a spherically symmetric source in f(R)-gravitation, https://link.springer.com/article/10.1140/epjc/s10052-024-12606-yEPJC 84, 298 (2024). 44J. Bora, D. J. Gogoi and U. D. 
Goswami, Strange stars in f(R) gravity palatini formalism and gravitational wave echoes from them, https://iopscience.iop.org/article/10.1088/1475-7516/2022/09/057JCAP 09 057 (2022). 45L. Amendola, R. Gannouji, D. Polarski and S. Tsujikawa, Conditions for the cosmological viability of f(R) dark energy models, https://journals.aps.org/prd/abstract/10.1103/PhysRevD.75.083504Phys. Rev. D 75, 083504 (2007). 46P. Sarmah and U. D. Goswami, Dynamical system analysis of LRS-BI Universe with f(Q) gravity theory, https://arxiv.ohttps://arxiv.org/abs/physics/9905030rg/abs/2403.16118arXiv:2403.16118v1 [gr-qc] 47T. Katsuragawa, S. Matsuzaki and E. Senaha, F(R) gravity in the early Universe: electroweak phase transition and chameleon mechanism*, https://iopscience.iop.org/article/10.1088/1674-1137/43/10/105101Chinese Physics C 43, 105101 (2019). 48R. Solanki, A. De, S. Mondal and P. K. Sahoo, Accelerating expansion of the universe in modified symmetric teleparallel gravity, https://www.sciencedirect.com/science/article/abs/pii/S2212686422000541Phys. Dark Universe 36, 101053 (2022). 49S. Capozziello, V. De Falco and C. Ferrara, Comparing equivalent gravities: common features and differences, https://link.springer.com/article/10.1140/epjc/s10052-022-10823-xEPJC 82, 856 (2023). 50D. Zhao, Covariant formulation of f(Q) theory, https://link.springer.com/article/10.1140/epjc/s10052-022-10266-4EPJC 82, 303 (2022). 51 K. Schwarzschild, On the gravitational field of a mass point according to Einstein's theory, https://arxiv.org/abs/physics/9905030arXiv:physics/9905030 (1916). 52S. Fernando, Schwarzschild black hole surrounded by quintessence: null geodesics, https://link.springer.com/article/10.1007/s10714-012-1368-xGen Relativ. Gravit. 44, 1857–1879 (2012). 53R. Karmakar, D. J. Gogoi and U. D. Goswami, Quasinormal modes and thermodynamic properties of GUP-corrected Schwarzschild black hole surrounded by quintessence, https://doi.org/10.1142/S0217751X22501809IJMPA 37, 2250180 (2022). 54D. J. Gogoi, R. Karmakar and U. D. Goswami, Quasinormal Modes of Non-Linearly Charged Black Holes surrounded by a Cloud of Strings in Rastall Gravity, https://doi.org/10.1142/S021988782350007XIJGMMP 20, 2350007 (2023). 55S. Chen, B. Wang and R. Su, Hawking radiation in a d-dimensional static spherically symmetric black hole surrounded by quintessence, https://doi.org/10.1103/PhysRevD.77.124011Phys. Rev. D 77, 124011 (2008). 56Y. Heydarzade and F. Darabi, Black Hole Solutions Surrounded by Perfect Fluid in Rastall Theory, https://doi.org/10.1016/j.physletb.2017.05.064Phys. Lett. B 771, 365 (2017). 57F. Atamurotov, A. Abdujabbarov and W. -B. Han, Effect of plasma on gravitational lensing by a Schwarzschild black hole immersed in perfect fluid dark matter, https://journals.aps.org/prd/abstract/10.1103/PhysRevD.104.084015Phys. Rev. D 104, 084015 (2021). 58R. A. Konoplya, Shadow of a black hole surrounded by dark matter, https://www.sciencedirect.com/science/article/pii/S0370269319303648Phys. Lett. B 795, 1 (2019). 59R. Karmakar, D. J. Gogoi and U. D. Goswami, Thermodynamics and Shadows of GUP-corrected Black Holes with Topological Defects in Bumblebee Gravity, https://doi.org/10.1016/j.dark.2023.101249Phys. Dark Universe 41, 101249 (2023). 60T. Multamäki and I. Vilja, Spherically symmetric solutions of modified field equations in f(R) theories of gravity, https://doi.org/10.1103/PhysRevD.74.064022Phys. Rev. D 74, 064002 (2006). 61B. Hazarika and P. 
Phukon, Thermodynamic Topology of Black Holes in f(R) Gravity, https://doi.org/10.1093/ptep/ptae035PTEP 2024, Issue 4, 043E01 (2024). 62V. Prokopov, S. Alexeyev and O.Zenin, Black Hole Shadows Constrain Extended Gravity, https://doi.org/10.1134/S1063776122070093J. Exp. Theor. Phys. 135, 91–99 (2022). 63K. Jafarzade, M. K. Zangeneh and F. S. N. Lobo, Observational optical constraints of regular black holes, https://doi.org/10.1016/j.aop.2022.169126Annals of Physics 446, 169126 (2022). 64 İ. Çimdiker, D. Demir, and A. Övgün, Black Hole Shadow in Symmergent Gravity, https://doi.org/10.1016/j.dark.2021.100900Physics of the Dark Universe 34, 100900 (2021). 65S. Haroon, K. Jusufi and M. Jamil, Shadow Images of a Rotating Dyonic Black Hole with a Global Monopole Surrounded by Perfect Fluid, https://www.mdpi.com/2218-1997/6/2/23Universe 6 (2),23 (2020). 66M. Okyay and A. Övgün, Nonlinear electrodynamics effects on the black hole shadow, deflection angle, quasinormal modes and greybody factors, https://iopscience.iop.org/article/10.1088/1475-7516/2022/01/009JCAP 01, 009 (2022). 67A. Belhaj and Y. Sekhmani, Shadows of rotating quintessential black holes in Einstein–Gauss–Bonnet gravity with a cloud of strings, https://link.springer.com/article/10.1007/s10714-022-02902-xGen. Relativ Gravit. 54 (2021). 68R. Roy, S. Vagnozzi and L. Visinelli, Superradiance evolution of black hole shadows revisited, https://journals.aps.org/prd/abstract/10.1103/PhysRevD.105.083002Phys. Rev. D 105, 083002 (2022). 69S. Vagnozzi, C. Bambi and L. Visinelli, Concerns regarding the use of black hole shadows as standard rulers, https://iopscience.iop.org/article/10.1088/1361-6382/ab7965Class. Quantum Grav. 37, 087001 (2020). 71K. Jusufi et al., Black hole surrounded by a dark matter halo in the M87 galactic center and its identification with shadow images, https://journals.aps.org/prd/abstract/10.1103/PhysRevD.100.044012Phys. Rev. D 100, 044012 (2019). 71-1S. Vagnozzi et al., Horizon-scale tests of gravity theories and fundamental physics from the Event Horizon Telescope image of Sagittarius A*, https://iopscience.iop.org/article/10.1088/1361-6382/acd97bClass. Quantum Grav. 40, 165007 (2023). 71-2K. Jusufi, Quasinormal Modes of Black Holes Surrounded by Dark Matter and Their Connection with the Shadow Radius, https://journals.aps.org/prd/abstract/10.1103/PhysRevD.101.084055Phys. Rev. D 101, 084055 (2020). 71-3E. Berti, V. Cardoso, and C. M. Will, On gravitational- wave spectroscopy of massive black holes with the space interferometer LISA, https://journals.aps.org/prd/abstract/10.1103/PhysRevD.73.064030Phys. Rev. D 73, 064030 (2006). 71-4G. Franciolini, L. Hui, R. Penco, L. Santoni and E. Trincherini, Effective Field Theory of Black Hole Quasinormal Modes in Scalar-Tensor Theories, https://link.springer.com/article/10.1007/JHEP02(2019)127JHEP 02, 127 (2019). 71-5A. Ishibashi and H. Kodama, Stability of higher dimensional Schwarzschild black holes, https://academic.oup.com/ptp/article/110/5/901/1897607?login=falseProg. Theor. Phys 110, 901 (2003). 72D. J. Gogoi and U. D. Goswami, Quasinormal Modes and Hawking Radiation Sparsity of GUP corrected Black Holes in Bumblebee Gravity with Topological Defects, https://iopscience.iop.org/article/10.1088/1475-7516/2022/06/029JCAP 06, 029 (2022). 73R. A. Konoplya, Quasinormal behavior of the D-dimensional schwarzschild black hole and the higher order WKB approach, https://journals.aps.org/prd/abstract/10.1103/PhysRevD.68.024018Phys. Rev. D 68, 024018 (2003). 74X. Zhang, M. Wang and J. 
Jing, Quasinormal modes and late time tails of perturbation fields on a schwarzschild-like black hole with a global monopole in the einstein-bumblebee theory, https://link.springer.com/article/10.1007/s11433-023-2153-6Sci. China Phys. Mech. Astron. 66 100411 (2023). 75R. A. Konoplya, A. Zhidenko and A. F. Zihhailo, Higher order WKB formula for quasinormal modes and grey-body factors: recipes for quick and accurate calculations, https://iopscience.iop.org/article/10.1088/1361-6382/ab2e25Class. Quantum Grav. 36, 155002 (2019). 76R. Karmakar and U. D. Goswami, Quasinormal modes, temperatures and greybody factors of black holes in a generalized Rastall gravity theory, https://doi.org/10.1088/1402-4896/ad350ePhys. Scr. 99, 055003 (2024). 76-1S. W. Hawking, Particle creation by black holes, http://refhub.elsevier.com/S2212-6864(23)00083-3/sb71Commun. Math. 43, 199 (1975). 76-2S. W. Hawking, D.N. Page, Thermodynamics of black holes in anti-de Sitter space, http://refhub.elsevier.com/S2212-6864(23)00083-3/sb73Commun. Math. 87, 577 (1983). 76-3J. M. Bardeen, B. Carter, S. W. Hawking, The four laws of black hole mechanics, http://refhub.elsevier.com/S2212-6864(23)00083-3/sb72Commun. Math. 31, 161 (1973). 76-4M. A. Anacleto, F. A. Brito, J. A. V. Campos and E. Passos, Quantum-corrected scattering and absorption of a Schwarzschild black hole with GUP, https://www.sciencedirect.com/science/article/pii/S037026932030633X#section-cited-byPhys. Lett. B 810, 135830 (2020). 76-5M. A. Anacleto, F. A. Brito, S. J. S. Ferreira and E. Passos, Absorption and scattering of a black hole with a global monopole in f(R) gravity, https://www.sciencedirect.com/science/article/pii/S0370269318308608?via 76-6H. Su and C. -Y. Long, Thermodynamics of the black holes under the extended generalized uncertainty principle with linear terms, https://iopscience.iop.org/article/10.1088/1572-9494/ac624cCommun. Theor. Phys. 74, 055401 (2022). 76-7J. Ji-liang, Thermodynamics of the black holes under the extended generalized uncertainty principle with linear terms, https://iopscience.iop.org/article/10.1088/0256-307X/14/2/001Chinese Physics Letters 14, 81 (1997). 76-8R. T. Hough, A. Abebe and S. E. S. Ferreira, Viability tests of f(R)-gravity models with Supernovae Type 1A data, https://link.springer.com/article/10.1140/epjc/s10052-020-8342-7#citeasEPJC 80, 787 (2020). 76-9M. Martinelli and A. Melchiorri, Cosmological constraints on the Hu-Sawicki modified gravity scenerio, https://journals.aps.org/prd/pdf/10.1103/PhysRevD.79.123516Phys. Rev. D 79, 123516 (2009). 76-10J. Q. Guo, SOLAR SYSTEM TESTS OF f(R) GRAVITY, https://www.worldscientific.com/doi/abs/10.1142/S0218271814500369IJMPD 23, 1450036 (2014). 76-11S. Nojiri, S. D. Odinstov and V. K. Oikonomou, Modified gravity theories on a nutshell: Inflation, bounce and late-time evolution, Physics Reports 692, 1-104 (2017). 76-12.F. Cardone, S. Camera and A. Diaferio, An updated analysis of two classes of f(R) theories of gravity, https://iopscience.iop.org/article/10.1088/1475-7516/2012/02/030JCAP 2012, 030 (2012). 77S. Dey and S. Chakrabarty, A note on electromagnetic and gravitational perturbations of the Bardeen de Sitter black hole: quasinormal modes and greybody factors, https://link.springer.com/article/10.1140/epjc/s10052-019-7004-0EPJC 79, 504 (2019). 78 T. Johannsen et al., Testing General Relativity with the Shadow Size of SGR A*, https://doi.org/10.1103/PhysRevLett.116.031101Phys. Rev. Lett. 116, 031101 (2016). 79D. 
Psaltis, Testing General Relativity with the Event Horizon Telescope, https://link.springer.com/article/10.1007/s10714-019-2611-5#citeasGen. Relativ. Gravit. 51, 137 (2019). 80P. Kocherlakota et al., (Event Horizon Telescope), http://dx.doi.org/10.1103/PhysRevD.103.104047Phys. Rev. D 103, 104047 (2021).
http://arxiv.org/abs/2406.18001v1
20240626011007
Scalable Dual Coordinate Descent for Kernel Methods
[ "Zishan Shao", "Aditya Devarakonda" ]
cs.DC
[ "cs.DC", "stat.ML", "65Y05", "D.1.3; G.4; F.2.1" ]
Scalable Dual Coordinate Descent for Kernel Methods Zishan Shao shaoz20@wfu.edu Wake Forest University Aditya Devarakonda devaraa@wfu.edu Wake Forest University ================================================================================================================== § ABSTRACT Dual Coordinate Descent (DCD) and Block Dual Coordinate Descent (BDCD) are important iterative methods for solving convex optimization problems. In this work, we develop scalable DCD and BDCD methods for the kernel support vector machines (K-SVM) and kernel ridge regression (K-RR) problems. On distributed-memory parallel machines the scalability of these methods is limited by the need to communicate at every iteration. On modern hardware, where communication is orders of magnitude more expensive than computation, the running time of the DCD and BDCD methods is dominated by communication cost. We address this communication bottleneck by deriving s-step variants of DCD and BDCD for solving the K-SVM and K-RR problems, respectively. The s-step variants reduce the frequency of communication by a tunable factor of s at the expense of additional bandwidth and computation. The s-step variants compute the same solution as the existing methods in exact arithmetic. We perform numerical experiments to illustrate that the s-step variants are also numerically stable in finite-precision arithmetic, even for large values of s. We perform theoretical analysis to bound the computation and communication costs of the newly designed variants, up to leading order. Finally, we develop high-performance implementations written in C and MPI and present scaling experiments performed on a Cray EX cluster. The new s-step variants achieved strong scaling speedups of up to 9.8× over existing methods using up to 512 cores. § INTRODUCTION Optimization methods, particularly Dual Coordinate Descent (DCD) and Block Dual Coordinate Descent (BDCD), are fundamental to efficiently training nonlinear machine learning models on large-scale datasets. These iterative methods are particularly useful in solving regression and classification (both regularized and unregularized) optimization problems that arise in various research areas such as biology, computer vision, biophysics and healthcare. Given the volume of data generated in these scientific disciplines and the availability of high-performance computing architectures, scaling these methods to solve optimization problems quickly and efficiently is an important challenge. However, one of the main bottlenecks to scaling these methods in such distributed-memory environments is the cost of communication. Since communication cost often dominates computation cost, especially in the context of large datasets, we propose to develop efficient, scalable DCD and BDCD methods that defer communication without altering convergence behavior or solution accuracy. Traditionally, the performance of DCD and BDCD implementations is limited by the frequency of communication. This is due to the fact that DCD and BDCD are iterative optimization methods which require communication at every iteration to train a candidate machine learning model. Iterative algorithms and their communication bottlenecks are well known, particularly with respect to Krylov subspace methods <cit.>. In this work, we borrow ideas from Krylov subspace methods and apply them to the DCD and BDCD optimization methods to reduce the frequency of communication by a factor of s.
The contributions of the paper are: * Derivation and theoretical analysis of s-step variants of the DCD and BDCD methods for solving the K-SVM and K-RR problems, which reduce the frequency of communication by a factor of s at the expense of additional computation and storage. * Empirical evaluation of the numerical stability (in MATLAB) of the s-step variants as a function of s for several benchmark classification and regression datasets. * Performance evaluation of the strong scaling and running time breakdown of the proposed methods on a Cray EX system, which shows speedups of up to 9.8× on large-scale dense and sparse datasets. § RELATED WORK In this section, we briefly survey the existing state of the art on scaling optimization methods with a focus on reducing communication costs in distributed-memory and shared-memory parallel computing environments. s-step methods have recently been adapted and generalized to nonlinear machine learning tasks in order to scale iterative optimization methods. This body of work includes s-step variants of block coordinate descent methods applied to the ridge regression (L2-regularized least squares) problem <cit.>, s-step variants of novel stochastic FISTA (S-FISTA) and stochastic Newton (S-PNM) methods for proximal least squares <cit.>, and P-packSVM, which applied a variant of the s-step technique to obtain scalable linear SVM classifiers in distributed, cloud environments <cit.>. These methods showed significant speedups in distributed-memory and cloud settings due to reduced communication overhead. All of these adaptations to machine learning were built on prior s-step Krylov methods work from numerical linear algebra. Prior work in numerical linear algebra <cit.> developed s-step Krylov methods for solving linear systems on distributed multiprocessors. This work was further generalized to a wide variety of Krylov methods with three-term recurrences <cit.>. Further work developed practical Krylov methods with numerical stability analysis and strategies to stabilize s-step Krylov methods <cit.>. Our work expands this body of work to kernelized machine learning problems, particularly kernel SVM and kernel ridge regression, targeting first-order coordinate descent methods. One common thread in this line of work is that the s-step variants are mathematically equivalent to their classical counterparts. Thus, the limiting factors on s are numerical stability and the computation-communication trade off. Alternative approaches to reducing communication include asynchronous methods, federated learning approaches (i.e. divide and conquer), and approximation methods. These approaches typically relax convergence and solution accuracy requirements in order to reduce communication. Here, we first briefly survey the work on asynchronous methods. HOGWILD! <cit.> presents an asynchronous SGD method for shared-memory settings, which reduces the latency bottleneck by removing synchronization. This work proves that if solution updates have bounded delay, then this asynchronous SGD variant converges to the true solution, in expectation. This work was generalized to mini-batch SGD <cit.>, also implemented in the shared-memory setting, which shows that taking a batch size greater than 1 allows for better memory-bandwidth utilization. This work showed that the asynchronous mini-batch SGD method also converges and exhibits additional performance improvements over HOGWILD!
The asynchronous approach was also extended to dual coordinate descent methods for solving the SVM problem <cit.>, where a greedy coordinate selection algorithm is introduced to further accelerate convergence of the asynchronous DCD method. This work also targeted a multicore, shared-memory environment. Divide-and-conquer or federated learning approaches such as CoCoA and proxCoCoA+ <cit.> offer a framework for distributed optimization in cloud environments (using Apache Spark). CoCoA and proxCoCoA+, in particular, reduce synchronization by performing dual coordinate descent on locally stored data for L2- and L1-regularized convex optimization problems. In these frameworks, each Apache Spark executor maintains a local solution which is optimized using only locally stored data. The local solution is occasionally sum/average-reduced after a tunable number of local iterations. This work introduces a convergence-performance trade off where deferring the aggregation step for too many iterations impacts convergence and final model accuracy. A similar approach is also shown to work for SGD <cit.> for the L2-regularized logistic regression problem. This work sparked follow-on work in federated learning and has been generalized to many different optimization methods and machine learning problems. The main drawback of these federated learning approaches is that they exhibit a convergence-performance trade off where scaling to additional threads/processors negatively affects convergence and solution accuracy. Finally, we survey a subset of work on approximation methods which improve performance by exploiting the low-rank structure of the kernel matrix for kernelized ML problems <cit.>. These prior works exploit low-rank structure in two ways: * Construct a hierarchical semi-separable (HSS) approximation to the kernel matrix using approximate nearest-neighbors <cit.>. * Utilize (balanced) K-means/medoids clustering to divide data between processors <cit.>. Both approaches trade accuracy/convergence for performance in the distributed-memory setting. § DERIVATION In this section, we introduce the K-SVM and K-RR problems and present the dual coordinate descent (DCD) method and its blocked (BDCD) variant for solving these problems. We also present s-step derivations of DCD for K-SVM and BDCD for K-RR. §.§ Support Vector Machine Support Vector Machines (SVMs) <cit.> are supervised learning models used for binary classification. SVM classifies the input dataset by finding a separating hyperplane H := { v ∈ℝ^n | x^⊺ v - ρ = 0 }, where x ∈ℝ^n is the normal vector to the hyperplane, ρ∈ℝ is the intercept, and v ranges over points in ℝ^n. Given a dataset A ∈ℝ^m × n and a vector of binary labels y ∈ℝ^m s.t. y_i ∈{-1, +1} ∀ i=1,…,m, the SVM optimization problem finds a hyperplane which maximizes the distance (margin) between the two classes. A good separation is achieved when the hyperplane, defined by x, has the greatest distance from the nearest training data points (the support vectors) of the two classes. This formulation is known as the hard-margin SVM problem, which implicitly assumes that the data points are linearly separable. For datasets containing errors or where the margin between the two classes is small, the hard-margin SVM formulation may not achieve high accuracy. The soft-margin SVM problem introduces a slack variable ξ_i for each data point, i=1,…,m, so that the margin can be increased by allowing misclassification of some data points.
Note that taking C →∞ recovers the hard-margin formulation, so we will focus on solving the more general soft-margin SVM problem: _x ∈ℝ^n 1/2x_2^2 + C∑_i=1^mξ_i subject to y_i(a_i,:x + ρ) ≥ 1 - ξ_i, ξ_i ≥ 0 ∀ i = 1,…, m. The constraints in (<ref>) can be re-written in the empirical risk minimization form[We omit the bias term in the derivation for presentation clarity.] by introducing the hinge loss for each ξ_i: SVM-L1: max(1 - y_ia_i,:x, 0) SVM-L2: max(1 - y_ia_i,:x, 0)^2. We refer to the hinge-loss variant as the L1-SVM problem and the squared hinge-loss variant as the L2-SVM problem. For datasets that are not linearly separable in n dimensions, the SVM problem can be solved in a high-dimensional feature space by introducing a non-linear kernel function. The kernelized variants of the L1-SVM and L2-SVM problems can be obtained by deriving their Lagrangian dual problems. Kernelized SVM-L1 (K-SVM-L1) has the following form, _α∈ℝ^m 1/2∑_i = 1^m ∑_j = 1^m α_i α_j y_i y_j 𝒦(a_i,:,a_j,:) - ∑_i = 1^m α_i subject to  0 ≤α_i ≤ C, and kernelized SVM-L2 (K-SVM-L2) has the following form, _α∈ℝ^m1/2∑_i = 1^m ∑_j = 1^m α_i α_j y_i y_j 𝒦(a_i,:, a_j,:) - ∑_i = 1^m α_i + 1/4C∑_i=1^mα_i^2, where α∈ℝ^m is the solution to the Lagrangian dual problem and 𝒦(a_i,:, a_j,:) is a kernel function which defines an inner product space. <Ref> lists the kernel functions used in this work. The L1 and L2 K-SVM problems can be solved using several algorithmic variants of coordinate descent <cit.>. In this work, we will focus on cyclic coordinate descent, which we will refer to as Dual Coordinate Descent (DCD). DCD is an iterative algorithm which reduces the K-SVM problems to single-variable (or coordinate) problems which have closed-form solutions. Once a sub-problem is solved, DCD proceeds by selecting a new variable, solving the sub-problem with respect to the chosen variable, and repeating this process until convergence. <Ref> shows the DCD algorithm for solving the L1 and L2 K-SVM problems. §.§ s-Step DCD Derivation Note that <Ref> selects a single data point from A at each iteration. This limits DCD performance to BLAS-1 and BLAS-2 operations in each iteration. This limitation also suggests that a distributed-memory parallel implementation of DCD would require communication at every iteration. We propose to improve the performance of DCD by deriving a mathematically equivalent variant, which we refer to as s-step DCD, that avoids communication for s iterations. We begin by modifying the iteration index from k (for DCD) to sk + j, where j ∈{1, 2, …, s} and k ∈{0, 1, …, H/s}. We begin the s-step derivation by assuming that α_sk was just computed and show how to compute the next s solution updates. We see from <Ref> that u_k, g_k, and α_k are vector quantities whereas η_k and θ_k are scalar quantities. The following two solution updates at iterations sk+1 and sk+2 require computing the gradients g_sk+1 and g_sk+2, which are defined by: g_sk+1 = u_sk+1^⊺α_sk - 1 + ω e_i_sk+1^⊺α_sk g_sk+2 = u_sk+2^⊺α_sk+1 - 1 + ω e_i_sk+2^⊺α_sk+1 However, notice that we can also replace α_sk+1 with its equivalent expression in terms of iteration sk: α_sk+1 = α_sk + θ_sk+1 e_i_sk+1 Given that θ_sk+1 is a scalar quantity, g_sk+2 can be rewritten as, g_sk+2 = u_sk+2^⊺α_sk - 1 + θ_sk+1u_sk+2^⊺ e_i_sk+1 + ω e_i_sk+2^⊺α_sk + ωθ_sk+1 e_i_sk+2^⊺ e_i_sk+1 This recurrence unrolling suggests that g_sk+2 can be computed using α_sk provided that the quantities u_sk+1, u_sk+2 and θ_sk+1 are known. Since u_sk+1 and u_sk+2 are independent, they can be computed simultaneously.
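The unrolling above is purely algebraic, so it is easy to sanity-check numerically. The following toy sketch (random data and arbitrary values for ω and θ_sk+1; it does not depend on the particular kernel or on the update rule in <Ref>) verifies that the unrolled expression for g_sk+2 matches the value obtained by explicitly forming α_sk+1 first:

# Toy check (random data, not tied to a specific kernel) of the recurrence
# unrolling: the "corrected" gradient computed from alpha_sk agrees with the
# gradient computed from the explicitly updated alpha_{sk+1}.
import numpy as np

rng = np.random.default_rng(0)
m, omega = 8, 0.25                     # problem size and the shift term omega
alpha_sk = rng.standard_normal(m)
u1, u2 = rng.standard_normal(m), rng.standard_normal(m)   # u_{sk+1}, u_{sk+2}
i1, i2 = 3, 6                          # coordinates chosen at the two steps
theta1 = 0.7                           # step computed at iteration sk+1
e1, e2 = np.eye(m)[i1], np.eye(m)[i2]

# sequential route: update alpha, then evaluate g_{sk+2}
alpha_sk1 = alpha_sk + theta1 * e1
g_seq = u2 @ alpha_sk1 - 1.0 + omega * (e2 @ alpha_sk1)

# unrolled route: everything expressed in terms of alpha_sk
g_unrolled = (u2 @ alpha_sk - 1.0 + theta1 * (u2 @ e1)
              + omega * (e2 @ alpha_sk) + omega * theta1 * (e2 @ e1))

print(np.isclose(g_seq, g_unrolled))   # True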
The sequential dependence on θ_sk+1, however, cannot be eliminated. Notice that g_sk +j is defined as g_sk+j = u_sk+j^⊺α_sk+j-1 - 1 + ω e_i_sk+j^⊺α_sk+j-1. Since α_sk+j-1 can be unrolled as follows α_sk+j-1 = α_sk + ∑_t=1^j-1θ_sk+te_sk+t, we can define g_sk+j in terms of α_sk: g_sk+j = u_sk+j^⊺α_sk + u_sk+j^⊺∑_t=1^j-1θ_sk+t e_i_sk+t - 1 + ω e_i_sk+j^⊺α_sk + ω e_i_sk+j^⊺∑_t=1^j-1θ_sk+t e_i_sk+t. Since all quantities u_sk+j are independent, we can compute them upfront after which all quantities θ_sk+j are computed sequentially. <Ref> shows the resulting s-step DCD algorithm for solving the L1 and L2 K-SVM problems. §.§ Kernel Ridge Regression The ridge regression problem <cit.> for regression is defined as follows: _x ∈ℝ^n1/2m‖ Ax - y ‖_2^2 + λ/2‖ x ‖_2^2. By deriving the Lagrangian dual formulation, we obtain the Kernel Ridge Regression (K-RR) problem, _α∈ℝ^m1/2(∑_i = 1^m ∑_j = 1^m α_i α_j (1/λ𝒦(a_i,:, a_j,:) + m)) - ∑_i = 1^m α_i y_i In contrast to K-SVM, this problem can be solved in closed form. However, when m ≫ 1, explicitly computing and storing the m × m kernel matrix can be prohibitive. In this work, we focus on iteratively solving the K-RR problem using the Block Dual Coordinate Descent (BDCD) method. BDCD solves (<ref>) by randomly or cyclically selecting a block size, b, of samples from A and solving a subproblem with respect to just those coordinates. <Ref> shows the BDCD algorithm for solving the K-RR problem. §.§ s-Step BDCD Derivation Similar to DCD, we can also unroll the recurrence relationship in <Ref>. We see that from <Ref> that the matrix quantities U_k ∈ℝ^b × m and G_k ∈ℝ^b × b are required at every iteration to solve the subproblem and compute the vector quantity Δα_k ∈ℝ^b. We begin the s-step derivation by modifying the iteration index from k to sk + j where j ∈{1, 2, …, s} and k ∈{0, 1, …, H/s}, as before. We assume that α_sk was just computed and show how to compute the next s solution updates. We focus on the subproblem solutions defined at iterations sk +1 and sk+2 for the s-step derivation, which are given by, Δα_sk+1 = G^-1_sk+1(V_sk+1^⊺ y - m V_sk+1^⊺α_sk - 1/λU_sk+1^⊺α_sk) Δα_sk+2 = G^-1_sk+2(V_sk+2^⊺ y - m V_sk+2^⊺α_sk+1 - 1/λU_sk+2^⊺α_sk+1). Using the solution update, α_sk+1 = α_sk + V_sk+1Δα_sk+1, we can unroll the recurrence for Δα_sk+2 by substitution. This yields Δα_sk+2 = G^-1_sk+2(V_sk+2^⊺ y - m V_sk+2^⊺α_sk -m V_sk+2^⊺ V_sk+1Δα_sk+1 - 1/λU_sk+2^⊺α_sk - 1/λU_sk+2^⊺ V_sk+1Δα_sk+1). Notice that the solutions at iteration sk+1 and sk+2 both depend on α_sk, but Δα_sk+2 requires additional correction terms because α_sk+1 is never explicitly formed. This recurrence unrolling can be generalized to an arbitrary future iteration, sk + j, as follows Δα_sk+j = G^-1_sk+j(V_sk+j^⊺ y - m V_sk+j^⊺α_sk -m ∑_t = 1^j-1 V_sk+j^⊺ V_sk+tΔα_sk+t - 1/λU_sk+j^⊺α_sk - 1/λ∑_t = 1^j-1U_sk+j^⊺ V_sk+tΔα_sk+t). In (<ref>) the quantities U_sk + j for j ∈{1, 2, …, s} can be computed upfront by selecting sb coordinates and computing a kernel matrix that has a factor of s additional rows. The sequence of U_sk+j's can then be extracted from the m × sb kernel matrix. The resulting s-step BDCD algorithm for K-RR is shown in <Ref>. § PARALLEL ALGORITHMS AND ANALYSIS In this section we analyze their computation and communication costs (<Ref>) using Hockney's performance model <cit.>: γ F + β W + ϕ L, where F, W and L represent the algorithm costs for computation, bandwidth, and latency; and γ, β and ϕ represent the associated hardware parameters. 
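Before turning to the cost analysis, the blocked recurrence (<ref>) derived above can be sanity-checked numerically in the same way as the DCD recurrence. The sketch below uses random data and takes G_sk+j to be the block matrix (1/λ)V_sk+j^⊺U_sk+j + m I_b suggested by the update rule (this is our reading of <Ref>, which is not reproduced here; the algebraic equivalence shown holds for any fixed choice of G_sk+j):

# Toy equivalence check: two sequential BDCD block updates versus the s-step
# form that expresses both updates in terms of alpha_0 plus correction terms.
import numpy as np

rng = np.random.default_rng(1)
msize, b, lam = 12, 3, 0.5                   # samples, block size, lambda
X = rng.standard_normal((msize, 5))
K = X @ X.T                                  # a linear kernel, symmetric PSD
y = rng.standard_normal(msize)
alpha0 = rng.standard_normal(msize)

I1, I2 = np.array([0, 4, 7]), np.array([2, 4, 9])      # overlapping blocks
V1, V2 = np.eye(msize)[:, I1], np.eye(msize)[:, I2]
U1, U2 = K @ V1, K @ V2
G1 = V1.T @ U1 / lam + msize * np.eye(b)     # assumed block matrix (see text)
G2 = V2.T @ U2 / lam + msize * np.eye(b)

# classical BDCD: two sequential block updates
d1 = np.linalg.solve(G1, V1.T@y - msize*(V1.T@alpha0) - (U1.T@alpha0)/lam)
alpha1 = alpha0 + V1 @ d1
d2_seq = np.linalg.solve(G2, V2.T@y - msize*(V2.T@alpha1) - (U2.T@alpha1)/lam)

# s-step form: both updates expressed in terms of alpha0 plus corrections
d2_sstep = np.linalg.solve(
    G2, V2.T@y - msize*(V2.T@alpha0) - msize*(V2.T@V1@d1)
        - (U2.T@alpha0)/lam - (U2.T@V1@d1)/lam)

print(np.allclose(d2_seq, d2_sstep))         # True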
Given the similarities between the coordinate descent methods for K-RR and K-SVM, we focus our analysis on <Ref>, which are blocked generalizations of DCD. The leading-order costs of BDCD for K-RR can be specialized to DCD for K-SVM by setting the block size b = 1. Independent analyses of BDCD and DCD are required in order to bound constants, which we omit in this work. §.§ Computation and Communication Analysis We assume that A ∈ℝ^m × n is a sparse matrix with density f, where 0 < f ≤ 1, such that the non-zeros are uniformly distributed and each row contains fn non-zero entries. We further assume that the processors are load balanced with fmn/P non-zeros per processor. The K-SVM and K-RR problems require non-linear kernel operations which are often more expensive than floating-point arithmetic. For example, the polynomial kernel requires a pointwise power operation for each entry of the sampled kernel matrix. Similarly, the RBF kernel requires an exponential evaluation for each entry. We model this overhead by introducing a scalar, μ, to represent the cost of applying a non-linear function relative to floating-point multiplies during the kernel computation. Note that due to the non-linear kernel function (particularly the RBF kernel), we store the m × b (sampled) kernel matrix in dense format. Finally, the BDCD and s-step BDCD algorithms require an MPI Allreduce call at each iteration. We assume that the Allreduce has a cost of L = O(log P) and W = O(w), where w is the message size (in words) <cit.>. We begin by proving the parallel computation, communication, and storage costs of the BDCD algorithm (<Ref>), followed by analysis of the s-step BDCD algorithm (<Ref>). Let H be the number of iterations of the Block Dual Coordinate Descent (BDCD) algorithm, b the block size, P the number of processors, and A ∈ℝ^m × n the matrix, partitioned using a 1D-column layout. Under this setting, BDCD has the following asymptotic costs along the critical path: Computation: 𝒪(H (bfmn/P + μ bm + b^3)) flops, Bandwidth: 𝒪(H bm) words moved, Latency: 𝒪(H log P) messages, Storage: 𝒪(bm + fmn/P) words of memory. Each iteration of parallel BDCD for K-RR begins by computing U_k = 𝒦(A, V_k^⊺ A), which costs at most bfmn/P flops, in parallel, to form the m × b sampled columns, AA^TV_k, of the m × m full kernel matrix, AA^T. Each processor forms an m × b partial kernel matrix which must be sum-reduced before applying the non-linear kernel operation. This requires bm words to be moved and log P messages. After communication, the non-linear kernel operation can be applied to each entry of the kernel matrix using μ bm flops. The quantity G_k can be extracted directly from U_k by selecting the b rows, corresponding to V_k^⊺, sampled for this iteration. Since each processor redundantly stores y and α, the vector quantity V_k^⊺ y - m V_k^⊺α_k-1 - 1/λU_k^⊺α_k-1 can be computed independently in parallel on each processor. This operation requires bm flops. Once the vector quantity is computed, the linear system can be solved in b^3 flops to compute Δα_k redundantly on each processor. Finally, α_k can be updated using b flops by updating only the coordinates of α sampled in this iteration. Summing and multiplying the above costs by H, the total number of BDCD iterations, proves the computational and communication costs. The storage costs can be obtained by noticing that each iteration of BDCD must simultaneously store A and U_k in memory, which require fmn/P and bm words, respectively.
Each iteration also requires G_k, which needs b^2 words of storage. However, G_k and the other vector quantities are low-order storage terms in comparison to A and U_k. Let H be the number of iterations of the s-Step Block Dual Coordinate Descent (s-Step BDCD) algorithm, b the block size, P the number of processors, and A ∈ℝ^m × n the matrix, partitioned using a 1D-column layout. Under this setting, s-Step BDCD has the following asymptotic costs along the critical path: Computation: 𝒪(H/s(sbfmn/P + μ sbm + sb^3 + (s(s-1)/2)b^2) ) flops, Bandwidth: 𝒪(H/s sbm) words moved, Latency: 𝒪(H/slog P) messages, Storage: 𝒪(fmn/P + sbm) words of memory. The s-Step BDCD algorithm for K-RR computes a factor of s larger kernel matrix, Q_k = 𝒦(A, Ω_kA), which costs at most sbfmn/P flops, in parallel, to partially form. The partial Q_k's on each processor must be sum-reduced prior to applying the non-linear kernel operation, which requires sbm words to be moved and log P messages. Once Q_k is sum-reduced, the non-linear kernel computation can be performed redundantly on each processor. Since the non-linear operation is required for each entry of Q_k ∈ℝ^m × sb, forming the sampled kernel matrix costs μ sbm flops. As with parallel BDCD, the b × b matrices G_sk+j for j ∈{1,2,…,s} can be extracted from Q_k at low-order cost. However, the s-Step BDCD algorithm requires additional matrix-vector computations to form the right-hand side of the linear system. Forming the right-hand side requires a total of s matrix-vector computations of the form U_sk+j^⊺α_sk, which cost sbm flops, and a sequence of matrix-vector computations, U_sk+j^⊺ V_sk+tΔα_sk+t for t ∈{1, …, j-1}, to correct the right-hand side. There are a total of s(s-1)/2 such corrections, each requiring b^2 flops. Once the corrections have been performed, s linear systems can be solved using s b^3 flops. The leading-order cost of the inner loop (indexed by j) is (s(s-1)/2)b^2 + sbm + sb^3 flops. Once the sequence of s Δα_sk + j vectors has been computed, α_sk+s can be computed by summing with the appropriate indices of α_sk, which requires sb flops. The s-Step BDCD algorithm computes s solution updates every outer iteration, so a total of H/s outer iterations are required to perform the equivalent of H BDCD iterations. Multiplying these costs by H/s proves the computation and communication costs. Finally, this algorithm requires storage of A, Q_k, G_sk+j, and several vector quantities. However, the leading-order costs are the storage of A and Q_k, which cost fmn/P + sbm words. § EXPERIMENTS This section presents the numerical and performance experiments for the s-step DCD and BDCD algorithms for K-SVM and K-RR, respectively. Prior work on s-step Krylov methods <cit.> showed that the additional computation led to numerical instability due to the Krylov basis becoming rank-deficient. The s-step DCD and BDCD algorithms require the computation of a factor of s larger kernel matrix and require gradient corrections due to the deferred update of α. We begin by studying the convergence behavior of the s-step methods relative to the classical DCD and BDCD methods on several binary classification and regression datasets. Then, we show the performance trade off and scaling behavior of the high-performance, distributed-memory implementations (written in C and MPI) of the s-Step DCD and BDCD methods relative to the standard DCD and BDCD methods on a Cray EX cluster.
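To get a rough sense of how these leading-order bounds translate into speedups, the following back-of-the-envelope sketch plugs the costs of BDCD and s-step BDCD into the γ F + β W + ϕ L model. The machine constants (γ, β, ϕ, μ) and the problem sizes are made-up illustrative values, not measurements from the Cray EX system:

# Rough cost-model comparison (illustrative hardware constants, not measured):
# BDCD:        H*(gamma*(b*f*m*n/P + mu*b*m + b**3) + beta*(b*m) + phi*log2(P))
# s-step BDCD: (H/s)*(gamma*(s*b*f*m*n/P + mu*s*b*m + s*b**3 + (s*(s-1)/2)*b**2)
#                     + beta*(s*b*m) + phi*log2(P))
import math

def bdcd_time(H, m, n, f, b, P, mu, gamma, beta, phi):
    flops = b*f*m*n/P + mu*b*m + b**3
    return H * (gamma*flops + beta*(b*m) + phi*math.log2(P))

def sstep_bdcd_time(H, s, m, n, f, b, P, mu, gamma, beta, phi):
    flops = s*b*f*m*n/P + mu*s*b*m + s*b**3 + (s*(s-1)/2)*b**2
    return (H/s) * (gamma*flops + beta*(s*b*m) + phi*math.log2(P))

# b = 1 corresponds to the latency-dominated DCD setting discussed later
params = dict(H=1000, m=50_000, n=10_000, f=0.1, b=1, P=512, mu=4.0,
              gamma=1e-10, beta=1e-9, phi=1e-5)    # assumed machine constants
base = bdcd_time(**params)
for s in (2, 4, 8, 16, 32):
    print(f"s = {s:2d}: modeled speedup = {base / sstep_bdcd_time(s=s, **params):.2f}x")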
§.§ Convergence Experiments We measure the convergence behavior of the s-step methods against the classical methods and perform ablation studies by varying the value of s, the kernel choice (shown in <Ref>), and the block size (for the K-RR problem). The datasets used in the experiments were obtained from the LIBSVM repository <cit.>; their properties are shown in <Ref>. All algorithms were implemented and tested in MATLAB R2022b Update 7 on a MacBook Air 2020 equipped with an Apple M1 chip. We measure the convergence of the K-SVM problem by plotting the duality gap for each setting of the s-step DCD method in comparison to the classical DCD method. Since K-SVM is a convex problem, we should expect the duality gap to approach machine precision. However, in these experiments we set the duality gap tolerance to 10^-8. We plot the duality gap over H iterations until the gap converges to the specified tolerance. The duality gap is defined as D(x) - P(x), where D(x) refers to the objective value of the K-SVM problem (the Lagrangian dual problem) and P(x) refers to the objective value of the primal K-SVM problem as computed by LIBSVM <cit.>. The primal and dual SVM problems are defined in (<ref>) and (<ref>), respectively. Note that we report convergence for both the K-SVM-L1 and (smoothed) K-SVM-L2 problems. <Ref> shows the convergence behavior (in terms of duality gap) of the DCD and s-step DCD methods on the datasets in <Ref> and the kernels described in <Ref>. The polynomial kernel utilizes degree d = 3 and c = 0. The RBF kernel utilizes σ = 1. We can observe, for both datasets and all kernels, that the s-step DCD methods for K-SVM-L1 and K-SVM-L2 (red markers) exhibit the same convergence behavior as the DCD methods and attain the same solution, α_H, up to machine precision. Note that we expect the convergence behavior of K-SVM-L1 and K-SVM-L2 to differ since they solve different problems and attain different solutions. We also perform experiments for the K-RR problem shown in (<ref>). Since K-RR has a closed-form solution, we use the relative solution error to illustrate the convergence behavior of BDCD and s-step BDCD. The relative solution error is defined as ‖α_k - α^*‖_2/‖α^* ‖_2, where α_k is the partial solution at iteration k for BDCD (or iteration sk + j for s-step BDCD) and α^* is the optimal solution obtained via matrix factorization. In order to compute α^*, we first compute the full, m × m kernel matrix and solve the linear system for α^*. Figure 2 compares the convergence behavior of BDCD and CA-BDCD for s > 1 with all three kernels. <Ref> shows the convergence behavior for the regression datasets in <Ref> for the three kernels in <Ref> for large settings of b and s. Similar to K-SVM, we observe that s-step BDCD attains the same solution as BDCD at every iteration and converges to a relative error tolerance of 10^-8 (single-precision accuracy). The abalone dataset is the largest dataset tested in MATLAB, so we set the block size to b = 128. We report two settings for s to study the convergence behavior at small and large values, where s = 16 and s = 256, respectively. We use a smaller block size, b = 64, for the smaller bodyfat dataset and use the same settings for s. As <Ref> illustrates, s-step BDCD is numerically stable even when b ≫ 1. Furthermore, we can observe that the convergence curves for s = 256 and s = 16 (red markers) match for both datasets tested.
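For reference, the relative solution error used in these plots can be computed against the closed-form solution as in the following sketch (it rests on the same assumed dual system as the sketches above, and forming the full m × m kernel matrix is only feasible at these MATLAB-scale problem sizes):

```python
import numpy as np

def krr_reference_solution(A, y, lam, kernel):
    """alpha* from the closed-form K-RR solution (small problems only).

    Uses the same assumed dual system (m*I + K/lam) alpha = y as the sketches
    above; the full m x m kernel matrix is formed explicitly and factorized.
    """
    m = A.shape[0]
    K = kernel(A, A)
    return np.linalg.solve(m * np.eye(m) + K / lam, y)

def relative_solution_error(alpha_k, alpha_star):
    """|| alpha_k - alpha* ||_2 / || alpha* ||_2, as plotted in the figures."""
    return np.linalg.norm(alpha_k - alpha_star) / np.linalg.norm(alpha_star)
```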
Since K-RR, in particular, can utilize b ≫ 1, we can expect the s-step BDCD method to achieve smaller performance gains over classical BDCD due to the bandwidth-latency trade off exhibited by the s-step methods. So, in practice, the s-step methods need only be numerically stable for smaller values of b. <Ref> both illustrate that the s-step methods are numerically stable for the tested ranges and are practical for many machine learning applications. As a result, the maximum useful value of s depends on the computation, bandwidth, and latency trade off for a given dataset and the hardware parameters of a candidate parallel cluster. §.§ Performance Experiments In this section, we study the performance of the s-step methods for the K-SVM and K-RR problems. We implement all algorithms in C using Intel MKL for sparse and dense BLAS routines and MPI for distributed-memory parallel processing. We show performance results for each of the kernels shown in <Ref>. The linear and polynomial kernels utilize the Intel MKL SparseBLAS library for sparse GEMM computations. The polynomial kernel with degree d requires additional elementwise operations on the output of the sparse GEMM. We compute the RBF kernel by using the definition of the dot-product to expand ‖ a_i,: - a_j,:‖_2^2 into a sparse GEMM computation. Finally, we use the -O2 compiler optimization flag for additional performance optimization. The datasets for the performance experiments are shown in <Ref>. We use datasets obtained from the LIBSVM repository and a synthetic sparse dataset in order to study performance characteristics on benchmark machine learning datasets (which may not be load-balanced) and on a perfectly load-balanced, sparse synthetic matrix. All datasets are processed in compressed sparse row (CSR) format. We use a Cray EX cluster to perform the experiments. We run the DCD and BDCD methods for a fixed number of iterations and vary s for the s-step DCD and BDCD methods. We partition the dataset and store it in 1D-column layout (i.e. feature partitioning) so that each MPI process stores roughly n/P columns. Each CPU node contains two sockets equipped with AMD EPYC 7763 processors, each containing 64 physical cores. We did not see any benefits from utilizing simultaneous multi-threading, so we limit the number of MPI processes per node to 128 processes. We use MPI process binding and disable dynamic frequency scaling to ensure comparable performance as the number of MPI processes and number of nodes is varied. §.§.§ Strong Scaling We explore the strong scaling behavior of the DCD and s-step DCD methods for K-SVM in <Ref>. We also present performance results for the BDCD and s-step BDCD methods for K-RR, specifically, as the block size is varied. DCD is limited to b = 1 since the SVM dual subproblem does not have a closed-form solution for larger blocks; because DCD is therefore latency-dominated, we expect the s-step method to attain better scaling and larger performance gains in this case. Given the computation, bandwidth, and latency trade off, we expect s-step BDCD to achieve reduced performance benefits as b is increased. We report running times and speedups measured with respect to the slowest processor (i.e., along the critical path). We performed offline tuning of s to obtain the setting which achieves the best running time. Note that we limit the values of s to powers of two; thus, additional performance may be attainable with fine-grained tuning of s. The small colon-cancer dataset exhibits scalability to O(10) processors in <Ref>. The DCD method is latency bound, so we observe large speedups from the s-step DCD method.
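The dot-product expansion used above for the RBF kernel can be sketched as follows; SciPy stands in here for the MKL SparseBLAS calls, and the exp(−‖·‖²/(2σ²)) parameterization is one common convention rather than a detail taken from the text:

```python
import numpy as np
import scipy.sparse as sp

def rbf_sampled_columns(A_csr, block, sigma=1.0):
    """Sampled RBF kernel columns from a CSR matrix via one sparse GEMM.

    ||a_i - a_j||^2 = ||a_i||^2 + ||a_j||^2 - 2 a_i . a_j, so the only matrix
    product needed is A @ A[block].T; the exponential is then applied
    elementwise to the dense m x b result (the mu*b*m term in the cost model).
    """
    row_sq = np.asarray(A_csr.multiply(A_csr).sum(axis=1)).ravel()
    cross = (A_csr @ A_csr[block].T).toarray()          # m x b sparse GEMM
    sq_dist = row_sq[:, None] + row_sq[block][None, :] - 2.0 * cross
    return np.exp(-sq_dist / (2.0 * sigma**2))

# Example on a random load-balanced sparse matrix with density f = 0.01.
A_csr = sp.random(10_000, 2_000, density=0.01, format="csr", random_state=0)
U = rbf_sampled_columns(A_csr, block=np.arange(4))      # 10_000 x 4 dense block
```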
On the colon-cancer dataset, the s-step DCD method attains speedups of up to 3.5×, 4.3×, and 8.9× on the linear, polynomial, and RBF kernels, respectively. On the duke dataset, the s-step DCD method attains speedups of up to 4.8×, 5.4×, and 9.8× on the linear, polynomial, and RBF kernels, respectively. Finally, the larger, synthetic dataset attains speedups of up to 2.4×, 2.4×, and 2× on the linear, polynomial, and RBF kernels, respectively. The s-step BDCD method attains more modest speedups as the block size is increased for all kernels and all datasets, as illustrated in <Ref>. We see decreasing benefits as the block size is increased for the colon-cancer and duke datasets. For the colon-cancer dataset, we observed speedups of up to 4.78× at b = 1 and 1.7× at b = 4. For the duke dataset, we observed speedups of up to 5.48× at b = 1 and 1.68× at b = 4. §.§.§ Runtime Breakdown <Ref> presents the running time breakdown of the DCD and s-step DCD methods for the colon-cancer, duke, and synthetic datasets for various settings of s. We show results for the values of P that achieve the fastest strong scaling running time and show only the results for the RBF kernel. A notable observation in <Ref> is the decrease in kernel computation time as we increase s. Since DCD is limited to a block size of 1, the kernel computation is limited to computing a single row of the kernel matrix at each iteration. In contrast, the s-step DCD method computes s rows of the kernel matrix every (outer) iteration. As a result, the SparseBLAS routines have better single-node memory-bandwidth utilization (in addition to decreasing the latency cost). <Ref> also shows a decrease in MPI allreduce time, which is expected when the latency cost dominates. However, for the synthetic dataset when s > 16, we observe that the allreduce time increases. Note that this increase in communication time is also expected as the bandwidth term begins to dominate. Thus, the value of s must be tuned carefully to achieve the best performance. The s-step methods require the same total bandwidth as the classical methods, but the per-message bandwidth increases by a factor of s. For the colon-cancer and duke datasets, we see significant improvements in kernel computation and allreduce times, with s = 256 and s = 32 being the optimal settings for colon-cancer and duke, respectively. For the synthetic dataset, we see a smaller factor of improvement in kernel computation and allreduce times. This suggests that the synthetic dataset can be strong-scaled further before the DCD runtime becomes dominated by DRAM/network communication. §.§.§ Performance and Load Balance Experiments in <Ref> showed results for LIBSVM and synthetic datasets which were load balanced. However, the news20.binary dataset contains a non-uniform nonzero distribution, which leads to load imbalance when stored in 1D-column layout across P processors. This section explores the performance trade offs of the classical and s-step methods under load imbalance, specifically for the news20.binary dataset. <Ref> shows the strong scaling behavior of the DCD and s-step DCD methods for K-SVM. DCD underutilizes the available DRAM bandwidth; therefore, its good strong scaling behavior is primarily due to the aggregate DRAM bandwidth doubling as P is doubled. In contrast, s-step DCD has more efficient DRAM bandwidth utilization since a block of s rows of the kernel matrix is computed per (outer) iteration. As a result, s-step DCD strong scaling hits the load-imbalance scaling limit before DCD.
s-step DCD attains a speedup of 3× over DCD at P = 4096 with s = 64. <Ref> shows the running time breakdown of DCD and s-step DCD at P = 2048 (the fastest s-step running time) as s is varied. The results highlight once again that the s-step DCD method reduces both kernel computation and allreduce times for s > 1. Since the news20.binary dataset also has the largest number of samples (m = 𝒪(10^4)), we also see a larger fraction of runtime related to gradient computation and memory management for s > 1. This is due to the s(s-1)/2 b^2 additional computation required for the s-step method to perform the gradient correction. Once the s-step inner loop is computed, the temporary buffers must be reset for the next iteration. Hence, there is additional running time overhead which increases proportionally with s. However, we note that the gradient correction and memory reset overheads are a small fraction of the running time when compared to kernel computation and allreduce times. <Ref> shows the BDCD and s-step BDCD strong scaling and speedup at b = 4 with the RBF kernel. Given the larger block size, both methods exhibit better strong scaling behavior throughout the range of P tested. The s-step BDCD method reaches the load-imbalance scaling limit before BDCD, as expected, due to more efficient memory-bandwidth utilization. <Ref> shows the running time breakdown for BDCD and s-step BDCD with b = 4 and P = 2048 (where BDCD achieves the fastest running time) as s is varied. Given the larger block size, we observe reduced kernel computation benefits when s is increased when compared to the K-SVM results. Allreduce time also becomes more bandwidth-dominant since m = 𝒪(10^4); therefore, the overall performance benefit of the s-step BDCD method reduces to a 1.14× speedup over BDCD. Furthermore, as s continues to increase, we can observe that allreduce bandwidth, gradient correction, and memory reset overhead become a larger fraction of the running time. This suggests that we cannot set both s and b to very large values. An inverse trend is observed with the allreduce time; it becomes increasingly significant with higher s values and a greater number of processes, both of which are bandwidth-dominated cases. For instance, at 2048 processes, the allreduce time constitutes over 45% of the total runtime at s = 256, compared to less than 20% for the same s value at 128 processes. Figures 3 (d - f) further illustrate that larger block sizes tend to exacerbate the dominance of allreduce time. In the case of the news20.binary dataset, there is an increase in overall runtime despite the reduction in kernel computation time as s increases. This phenomenon is also evident in the colon-cancer dataset. As depicted in Figure 4 (a - b), the CA-BDCD algorithm continues to reduce the running time until s reaches 32. Beyond this point, an increase in kernel computation time is observed. The proportion of allreduce time relative to the total runtime also grows with the number of processes; it is a significantly smaller fraction of the runtime at p = 4 than at p = 32. In the case of the DCD algorithm, a similar pattern is observed, as both the DCD and CA-DCD algorithms involve the computation and communication of AA^T. Specifically, the CA-DCD algorithm efficiently reduces the running time with increasing values of s. An increase in the number of processes correlates with a higher proportion of allreduce time. However, the absolute value of the allreduce time remains relatively stable across different process counts.
This stability is attributed to the fact that the size of the message communicated during allreduce does not depend on the number of processes. Instead, it is the consistent reduction in kernel computation time that results in the allreduce time occupying a larger proportion of the overall runtime. § CONCLUSION This work demonstrates that the s-step DCD and s-step BDCD methods for K-SVM and K-RR, respectively, attain large speedups when latency is the dominant cost. We show that this conclusion holds for dense and sparse datasets as well as datasets with non-uniform nonzero distributions that lead to load imbalance. We also show that the performance benefits of the s-step methods are moderate as allreduce bandwidth becomes the dominant cost. This observation underscores the importance of dataset characteristics and machine balance in determining the performance of the proposed methods in high-performance computing environments. Our research extends prior work on s-step methods to kernelized machine learning models for classification and regression. We show that in contrast to prior s-step coordinate descent and stochastic gradient descent methods, the kernel methods do not increase the total communication bandwidth (in theory) and attain speedups for a greater range of values of s. In the future, we plan to further optimize the s-step methods' kernel computation and gradient correction overheads by approximating the sampled kernel matrix (for example using the Nyström method). This performance optimization would enable the s-step method to scale to larger block sizes at the expense of weaker convergence. We also aim to study the performance characteristics of the proposed methods in distributed environments (e.g. federated or cloud environments) where network latency costs are more prohibitive and where s-step methods may yield impactful performance improvements. § ACKNOWLEDGEMENTS The authors would like to thank Boyuan Pan for assistance with generating figures and Grey Ballard for helpful discussions and feedback on this manuscript. This work was supported under Contract No. DE-AC05-00OR22725 with the US Department of Energy. This research used resources of the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility located at Lawrence Berkeley National Laboratory, operated under Contract No. DE-AC02-05CH11231 using NERSC award ASCR-ERCAP0024170. This work also used computational resources provided by the Wake Forest University (WFU) High Performance Computing Facility for testing and debugging. siam
http://arxiv.org/abs/2406.19090v1
20240627111637
Influences of stoichiometry on steadily propagating triple flames in counterflows
[ "Prabakaran Rajamanickam", "Wilfried Coenen", "Antonio L. Sánchez", "Forman A. Williams" ]
physics.flu-dyn
[ "physics.flu-dyn" ]
prajaman@ucsd.edu Department of Mechanical and Aerospace Engineering, University of California San Diego, La Jolla, CA 92093–0411, USA [cor1]Corresponding author: § ABSTRACT Most studies of triple flames in counterflowing streams of fuel and oxidizer have been focused on the symmetric problem in which the stoichiometric mixture fraction is 1/2. There then exist lean and rich premixed flames of roughly equal strengths, with a diffusion flame trailing behind from the stoichiometric point at which they meet. In the majority of realistic situations, however, the stoichiometric mixture fraction departs appreciably from unity, typically being quite small. With the objective of clarifying the influences of stoichiometry, attention is focused on one of the simplest possible models, addressed here mainly by numerical integration. When the stoichiometric mixture fraction departs appreciably from 1/2, one of the premixed wings is found to be dominant to such an extent that the diffusion flame and the other premixed flame are very weak by comparison. These curved, partially premixed flames are expected to be relevant in realistic configurations. In addition, a simple kinematic balance is shown to predict the shape of the front and the propagation velocity reasonably well in the limit of low stretch and low curvature. Triple flames Edge flames Counterflow flames Stoichiometry Dilution § INTRODUCTION Triple flames, first identified by Phillips over fifty years ago <cit.>, play a fundamental role in many practical combustion systems. Since they have been observed to move along mixing layers that are strained in laminar and turbulent jet flows, for example, there is interest in investigating their response to strain. The counterflow mixing layer separating two opposed planar jets of fuel and oxidizer, used in previously in theoretical <cit.> and experimental <cit.> studies, provides an attractive canonical problem for analyzing these effects. The present contribution is intended to offer some clarifications concerning such steadily propagating triple flames, by building on a simplification of the formulation of Daou and Liñán <cit.>, who emphasized effects of Lewis numbers by parametrically studying, both numerically and analytically, these triple flames in mixtures with unequal diffusivities. Since underlying influences of stoichiometry tend to be obscured by varying Lewis numbers, the present considerations are restricted to equi-diffusional systems in which all Lewis numbers are unity. Effects of variable-densities <cit.> and heterogeneous mixtures <cit.> introduce a number of additional interesting phenomena, but are not considered here because the emphasis is on other aspect of the problem that can be addressed more clearly without introducing these complications. Under these restrictions, implications are considered here for systems with stoichiometries that are quite likely to be encountered in practice. The simplifications that will be introduced in the formulation will be identical to those in these previous references <cit.>, simplifications which also are employed in a number of other publications <cit.>, thereby facilitating comparisons. § FORMULATION The analysis adopts a one-step irreversible reaction for the chemistry, one unit mass of fuel reacting with s units of mass of oxygen to generate products, according to F + s O_2→ (1+s) P + q, where q denotes the amount of energy released in the process per unit mass of fuel consumed. 
The number of moles of fuel burned per unit volume per unit time, ω = B (ρ Y_ F/W_ F) (ρ Y_ O_2/W_ O_2) ^-E_a/RT, involves a pre-exponential factor B and an activation energy E_a. Here, ρ and T are the density and temperature of the gas mixture, and R is the universal gas constant. Mass fractions and molecular weights of species i are represented by Y_i and W_i, respectively. Following <cit.>, we consider a strained mixing layer configuration as shown in figure <ref>, with the front propagating at a constant speed U in the negative x' direction. To render the problem steady, a reference frame moving with the front will be used in the description, with the counterflowing streams approaching from y'=±∞ and leaving at z'=±∞. In the thermo-diffusive approximation (i.e. constant density and constant transport properties), the counterflow velocity field reduces to the familiar stagnation-point solution (v,w)=(-Ay',Az') in terms of the strain rate A, which defines, together with the thermal diffusivity D_T, the characteristic mixing-layer thickness δ_m = (D_T/A)^1/2. Although the velocity varies in the z' direction, the temperature and composition fields are independent of z' in the configuration considered. With the dimensionless variables x = x'/δ_m, y = y'/δ_m, = Y_ F/Y_ F,F, = Y_ O_2/Y_ O_2,A, = T-T_A/γ T_A, and the stoichiometric mass ratio S (i.e. the amount of oxygen needed to burn the unit mass of the fuel stream completely), the non-dimensional heat release γ, and the reciprocal-time pre-exponential factor B̂, namely S=sY_ F,F/Y_ O_2,A, γ = qY_ F,F/c_pT_A(1+S), = ρ B Y_ O_2,A/W_ O_2 (where c_p is the specific heat at constant pressure), the relevant problem is to solve for the temperature field and obtain the concentrations of reactants through a mixture fraction, defined as Z = 1/2(y/√(2))=S -+1/S+1 = + (1+S) / + 1+ S. where =(T_F-T_A)/γ T_A measures the difference in temperature between the fuel and oxidizer streams. The governing equation then becomes[Here, =U/S_L∞,s and =D_T A/S_L∞,s^2, where S_L∞,s = [4 (1-Z_s)^-3B̂ D_T ^-E_a/RT_s]^1/2 is the stoichiometric planar velocity obtained at leading order in the limit ≫ 1, with T_s= (1+γ) T_A + (T_F-T_A)Z_s, = E_a/RT_sT_s-T_A/T_s and Z_s=1/(S+1) being, respectively, the stoichiometric values of the adiabatic flame temperature, Zel'dovich number and mixture fraction.] /√()x -y y = ^2x^2 + ^2y^2 + /Z_s , where = ^3 /4(1-Z_s)exp[-(-)/1-(-)] is the dimensionless reaction rate, with =γ T_A/T_s and =1+ Z_s, to be integrated with boundary conditions in the transverse direction, y→ -∞: = , y→∞: =0, along with a chemically frozen upstream mixture and an emerging downstream diffusion flame, corresponding to x→ -∞: = Z, x→∞: x=0, the value of serving as an eigenvalue that enables (<ref>) to be satisfied. The reaction rate (<ref>) must be evaluated with use of = Z - Z_s and = (1-Z) + (1-Z_s) (Z -), obtained from (<ref>). § NUMERICAL RESULTS Since the problem defined above exhibits invariance under translations in the x direction, to anchor the flame the additional condition =0.3 is imposed at x=0 along the stoichiometric line y=y_s. The parametric values =8 & 20, representative of the range of overall activation energies usually encountered, =0.85 (corresponding to typical amounts of heat release in flames) and =0 (equal feed temperatures) are used in the integrations for three different values of stoichiometric ratio S=(1,4,17.2). 
Here S=4 and S=17.2 are selected as representative of the conditions found in methane-oxygen and methane-air combustion, respectively. Since the extinction strain rate A_E for the one-dimensional trailing diffusion flame is of order S_L∞,s^2/D_T, the solution for the triple flame can be anticipated to exist only for values of in the range 0 < < _E∼ 1. The value of _E is shown in figure <ref> as a function of S and compared with the asymptotic predictions for ≫ 1 <cit.>. In the limit ≫ 1, the curves exhibit two inflection points, one for S>1 and another for S<1. These inflection points are present in both the numerical computation <cit.> and the correlation formula <cit.>, which gives a zero slope at S=1, however, disappear at realistic values of , the decrease in the temperature sensitivity of the reaction rate reversing the dependence on S found in the diffusion-flame regime. The influence of the stoichiometry of the fuel stream on the structure of the propagating flame is investigated in figure <ref> for =8 by exhibiting contours of reaction rates defined in (<ref>). The front shapes for =20, not shown here, were found to be quite similar to those for =8, except for an overall reduction in the spatial extent of the reaction region, consistent with the stronger temperature sensitivity associated with the increase in activation energy. To better identify the relative position of the flame, the stagnation plane y=0 and the stoichiometric plane y=y_s are represented in each plot by a dot-dashed line and a solid line, respectively. Two values of the strain rate are selected in the figure for each value of S, with the smaller value on the top corresponding to an advancing front with >0 and the higher value on the bottom corresponding to a retreating front with <0. The symmetric solutions for S=1 result in a triple-flame structure for low strain rates and a retreating edge-flame structure for near-extinction strain rates, as is well known. It can be seen, however, that the symmetric character is lost for S=4, with the flame migrating to the oxidizer side of the mixing layer and the associated lean flame that develops for y>y_s becoming very weak. At the higher strain rate for this value of S, the retreating edge flame bends away from stoichiometry, towards the stagnation plane, as it broadens. The fading lean branch disappears altogether for S=17.2, at which value the propagating front takes on a C shape, with one of the wings of the premixed front evolving into the trailing diffusion flame as x →∞. Also of interest is that at the lower strain rate selected for this figure, the front at S=17.2 is found to propagate at a velocity =1.15>1, that is, higher than its stoichiometric value. This behavior, already reported without explanation for the range ≪^-2 <cit.> through asymptotic analysis, can be explained by investigating the composition dependence of one-dimensional planar flames <cit.>, where it is shown that the peak of the laminar planar burning velocity for these large values of S, in general does not lie at the stoichiometric point, but at the fuel-rich conditions. Although the density decrease across curved flames is well known to increase propagation velocities in configurations such as this, that influence is absent in the present constant-density analysis, leading to the higher speed arising from the effect of the planar burning velocity. 
While the density-change influence would be largest at stoichiometric conditions, it is seen in the left-hand figure that, on the contrary, this C-shaped flame lies entirely in a fuel-rich region, bounded by the stoichiometric and the stagnation plane. From the right-hand plot, the retreating front is seen in the figure at this value of S≫ 1 to become hook-shaped, with the reaction rate of the retreating edge flame actually beginning to increase very far from stoichiometry, near the stagnation plane, where the available residence times are longer, allowing the heating of the mixture by the diffusion flame to have had more time to increase the reaction rate. The computed dependence of the flame velocity on the strain rate is shown in figure <ref> for the same three different values of S. Since for a given activation energy , in the thermo-diffusive approximation the two-dimensional flame propagation velocity cannot exceed its one-dimensional planar maximum speed, S_L,max, for any strain rate between ignition and extinction, i.e, < _L,max, where _L,max is the maximum value of _L=S_L/S_L∞,s, it is appropriate to plot the two-dimensional flame velocity normalized by its one-dimensional maximum velocity calculated in <cit.>. As can be seen, for each value of S there exists an at which =0, thereby defining the boundary between advancing fronts and retreating fronts. The magnitude of the negative value of the propagation velocity of the retreating front goes to infinity as the strain rate approaches the extinction value _E, although the computations have not been carried far into the range <0, where convergence difficulties become more accute. The limiting strain rate is different for the solid and dashed curves, consistent with the results shown in figure <ref> for the two different activation energies considered here, the limiting values being indicated by vertical arrows at the bottom of the figure. § KINEMATICS OF THIN FRONTS As the strain rate becomes small, the flame becomes thin compared with its radius of curvature, enabling a more general analysis to be developed that is not necessarily restricted to the chemical kinetics. The initial analytical description of flame-front structures and propagation velocities, corresponding to low-strain feed streams in the present configuration, is due to Dold et al.  <cit.>, later extended to fuels with non-unity Lewis numbers by Daou & Liñán <cit.>, both works invoking activation-energy asymptotics in the description. It is, however, not necessary to adopt that approach in addressing the thin-flame limit, which may be analyzed directly by treating =√(Ã) as a small parameter of expansion, thereby admitting reactions with more complex chemistry. A front propagating at velocity V_f into a fluid whose velocity field is v may be described in a level-set approach by any constant value of a continuous and differentiable field function G that obeys the equation v ·∇ G = V_f |∇ G|, when n=-∇ G/|∇ G| is the local unit vector in the direction of propagation. To apply this description to the present problem, the components of v are taken to be (u,v)=(U,-Ay'), and the field function is selected to be G(x,y)=x-f(y) with G=0 along the front. The flame shape is then given by x=f(y), conditions along the flame sheet being treated as functions of y. A similar type of kinematic balance has been used previously in computing shapes of lifted flames in axisymmetric fuel jets under the additional approximation of negligible front-curvature effects <cit.>. 
In terms of planar adiabatic laminar burning velocity _L(y) (which can be obtained irrespective of the functional form of the reaction rate), a laminar-flame thickness for a low strain rate can be defined as δ_L(y)/δ_m = √(Ã)/_L. For fronts with small curvature, the local burning velocity can be expressed in dimensional form <cit.> as V_f = S_L - S_L δ_L κ + ' δ_L n·∇ v · n. Here is the Markstein number for curvature and ' that for strain, which, in general, vary with y along the front, κ = ∇· n is the front curvature, and - n ·∇ v · n being the imposed strain rate associated with the velocity gradients. In terms of the unknown function g(y)=df/dy, the non-dimensional equation then simplifies to the first-order ordinary differential equation + yg = _L√(1+g^2) - g/ y/1+g^2 - ^2 ' g^2/S̃_L √(1+g^2). The solution to this equation for g describes a C-shaped flame with g approaching positive infinity at y=y_∞ and negative infinity at y=y_-∞. This then defines a two-point boundary-value problem that possesses a continuous and differentiable solution g(y) only for a particular value of the constant , thus constituting a nonlinear eigenvalue problem with boundary conditions, g→/±(_L-^2'/_L)- y, the upper sign applying at y_∞ and the lower sign at y_-∞, obtained as limiting forms of (<ref>). The domain interval (y_-∞,y_∞) itself is derived by setting the denominator of (<ref>) to zero. Through its relationship to the mixture-fraction function, Z(y), the variation of the planar laminar burning velocity with the equivalence ratio defines the function _L(y), which will achieve its maximum value S_m=S_L,max/S_L∞,s at a value of y denoted by y_m. It is evident from (<ref>) that in the limit =0 the constant cannot be less than S_m since the magnitude of the square root is never less than unity, nor can it be greater than S_m, since then the entire pattern would propagate faster than any element of the front. Hence, at leading order in , the pattern must propagate at the velocity =S_m and the solution at this order then becomes simply g(y) = ±(S_m^2/_L^2-1)^1/2, the upper sign applying for y>y_m and the lower sign for y<y_m. Since =S_m at leading order, the first correction to arising from the front curvature is determined by the variation of _L(y) in the vicinity of the point y=y_m. With the normal quadratic variation about the maximum, in the first approximation _L(y)=S_m-(a/2)(y-y_m)^2, where the positive constant a is the negative of the second derivative of _L(y) at y_m. Then, it is found that the order perturbation to g(y) diverged in proportion to 1/(y-y_m) as y approaches y_m unless =S_m-(y_m) (a/S_m)^1/2. This determines the first correction to the propagation velocity, and further pursuit of the perturbation analysis, summarized in the appendix for the values ='=1, which apply in the thermo-diffusive approximation in the limit →∞ <cit.>, serves to determine subsequent corrections to the location y=y_m of the turning point (nose of the pattern), y=y_t, as well as the front shape. Representative results from numerical integration of (<ref>) with ='=1 are shown as dashed curves in the upper plots in figure <ref>. The dashed curves are seen to coincide well with the contours of maximum reaction rate, irrespective of the value of S, even exhibiting a good agreement, where >0.2 for S=17.2. Thin-flame descriptions therefore may be considered to be quite robust for describing the C-shaped curves in these problems. 
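The leading-order construction above lends itself to a simple numerical illustration. The sketch below integrates g_o(y) = ±(S_m²/S_L² − 1)^{1/2} to obtain the C-shaped front x = f(y) for a model burning-velocity profile with a single maximum; the profile and parameter values are hypothetical stand-ins for the mixture-fraction dependence discussed in the text, and the first-order velocity correction is quoted in the normalized form of the appendix with unit Markstein numbers.

```python
import numpy as np

def leading_order_front(y, S_L, y_m):
    """C-shaped front x = f(y) from the leading-order solution g_o(y)."""
    S_m = S_L(y_m)
    g = np.sqrt(np.maximum(S_m**2 / S_L(y)**2 - 1.0, 0.0))
    g = np.where(y > y_m, g, -g)                   # upper sign for y > y_m
    # Cumulative (trapezoidal) integral of g; an additive constant fixes the
    # front's absolute position along x.
    return np.concatenate(([0.0], np.cumsum(0.5 * (g[1:] + g[:-1]) * np.diff(y))))

# Hypothetical burning-velocity profile, quadratic near its maximum at y_m,
# with a = -d^2 S_L/dy^2 evaluated at the maximum.
y_m, a = 0.3, 0.5
S_L = lambda y: 1.0 - 0.5 * a * (y - y_m) ** 2
y = np.linspace(-1.4, 2.0, 400)
x_front = leading_order_front(y, S_L, y_m)

# First correction to the propagation velocity (appendix, Markstein numbers = 1):
# U_hat ~= 1 - eps * sqrt(-S_hat_m''), where S_hat_L = S_L / S_m.
eps, S_m = 0.1, S_L(y_m)
U_hat = 1.0 - eps * np.sqrt(a / S_m)
```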
Predictions of propagation velocities of C-flame patterns are compared in figure <ref> for =8. As can be seen, the thin-flame propagation-velocity predictions thus are somewhat less robust than the flame-shape predictions. Although the predicted linear dependence of propagation velocities on is seen to be good, it decreases more rapidly with than is found by the full integration. This is likely a consequence of selecting a Markstein number of unity; this value of is small enough that the variation with may be expected to be weaker than would occur in the limit →∞. The thin-flame description employing the expansion derived in the appendix is seen to be in good agreement with results of the numerical integration of the thin-flame equation, whence the thin-flame formulation is likely to be reasonably accurate with proper evaluations of Markstein numbers. § CONCLUSIONS The main conclusion to be drawn from this investigation is that not all partially premixed flames in counterflow configurations may be expected to exhibit the classical tribrachial structure of rich and lean premixed flames with a diffusion flame trailing behind. Especially at high values of the dilution-adjusted stoichiometric fuel-air ratio, such as values appropriate for methane-air flames, the diffusion flame may fade into the lean wing, with the triple flame then evolving into a fuel-rich C-shaped premixed flame, however, heat release may modify this configuration, bringing the trailing diffusion flame back into visibility, yet the asymmetry in the direction identified here is likely to remain. Totally symmetric triple flames therefore should not be anticipated to be prevalent in practical situations. A further notable finding is that, although not at all applicable to retreating or even to advancing edge flames, thin-flame approximations enable Markstein numbers to be applied, employing quite general chemical kinetics, to reduce the problem involving ordinary differential equations in place of more complex partial differential equations. Moreover, in the limit of small strain rates, instead of integrating ordinary differential equations, sequential solutions of purely algebraic equations suffice to produce the results that are needed. It could be worthwhile to extend simplifications of that type to address the important influences of density changes associated with the heat release. § ACKNOWLEDGEMENTS This work was supported by the US AFOSR Grant No. FA9550-16-1-0443. § APPENDIX It is convenient to employ the re-scalings Û = Ũ/S_m, Ŝ_L=_L/S_m and = /S_m, before introducing the perturbation series g=g_o+g_1 + ^2 g_2 + ··· and Û = U_o + U_1 + ^2 U_2 + ··· into (<ref>). Then the resulting problem becomes purely algebraic in nature for the unknown quantities at each successive order. A unique choice of the eigenvalue at each order makes the solution uniformly valid in y by eliminating non-analyticity at the turning point. A Taylor's expansion of Ŝ_L(y) around the maximum point is needed, Ŝ_L = 1 + δ^2 Ŝ_m”/2! + δ^3 Ŝ_m”'/3! + δ^4 Ŝ_m^iv/4! + ···, where δ=y-y_m, and Ŝ_m”, Ŝ_m”',... are derivatives of Ŝ_L(y) evaluated at the maximum location. The turning-point location is also an unknown quantity that can be expanded in series y_t = y_m + y_1 + ^2 y_2 + ···. As discussed before, the leading-order solution is g_o(y) = ±(U_o^2/Ŝ_L^2-1)^1/2, U_o=1. where g_o(y) is real (U_o≮ 1) and continuous (U_o≯ 1) at δ=0. Collecting terms of O() and solving for g_1(y) gives g_1(y) = U_1 /Ŝ_L^2 g_o + y/Ŝ_L^2 + 1/g_o g_o/ y. 
The behaviour of g_1(y) as it approaches the turning point y→ y_t= y_m + y_1 is found to be g_1(y) ∼δ^-1[U_1/√(-Ŝ_m”) + 1] + O(1), thus U_1 will be chosen to make g_1(y) be bounded. The turning point at this order is obtained as the value of y at which g_o(y)+g_1(y)=0. In the same spirit, the equation for g_2(y) is obtained at the next order from, g_2(y) = U_2/Ŝ_L^2 g_o + yg_1/Ŝ_L^2 g_o - g_1^2/2g_o + Ŝ_L^2 g_o g_1^2/2 + 1/g_o g_1/ y - 2 Ŝ_L^2g_1 g_o/ y + g_o/Ŝ_L^2, its behaviour as y→ y_m + y_1 + ^2 y_2 diverges like δ^-1 as before and U_2 is chosen so as to remove this divergence. The selection of the value of U_1 has eliminated a term of order δ^-2, which otherwise would appear. The uniform solution at this order is given by g(y) =±±|g_o + g_1| + ^2 g_2 + O(^3), with the new turning point y_t=y_m+y_1 + ^2 y_2 now being determined by requiring the expression inside the outer absolute value signs to vanish there. At this order, the eigenvalue is Û= 1 - √(-S_m”) + ^2 (γ_1^2/2 -γ_2 - y_m γ_1) + O(^3). where γ_1 = y_m - Ŝ_m”'/3(-Ŝ_m”), γ_2 = 1- 7Ŝ_m”'^2/72Ŝ_m”^2 - Ŝ_m^iv/8(-Ŝ_m”). elsarticle-num
http://arxiv.org/abs/2406.19337v1
20240627171113
A Contact Binary Satellite of the Asteroid (152830) Dinkinesh
[ "Harold F. Levison", "Simone Marchi", "Keith S. Noll", "John R. Spencer", "Thomas S. Statler", "the Lucy mission team" ]
astro-ph.EP
[ "astro-ph.EP" ]
square 0000-0001-5847-8099]Harold F. Levison Southwest Research Institute, Boulder, CO, USA 0000-0003-2548-3291]Simone Marchi Southwest Research Institute, Boulder, CO, USA 0000-0002-6013-9384]Keith S. Noll NASA Goddard Spaceflight Center, Greenbelt, MD, USA 0000-0003-4452-8109]John R. Spencer Southwest Research Institute, Boulder, CO, USA 0000-0003-4909-9542]Thomas S. Statler NASA Headquarters, Washington, DC, USA 0000-0002-2006-4074]James F. Bell III Arizona State University, Tempe, AZ, USA 0000-0001-5890-9821]Edward B. Bierhaus Lockheed Martin Space, Littleton, CO, USA 0000-0002-9995-7341]Richard Binzel Massachusetts Institute of Technology, Cambridge, MA, USA 0000-0002-1804-7814]William F. Bottke Southwest Research Institute, Boulder, CO, USA 0000-0002-6968-6448]Daniel Britt University of Central Florida, Orlando, FL, USA 0000-0002-8255-0545]Michael E. Brown California Institute of Technology, Pasadena, CA, USA 0000-0003-0854-745X]Marc W. Buie Southwest Research Institute, Boulder, CO, USA 0000-0001-9625-4723]Philip R. Christensen Arizona State University, Tempe, AZ, USA 0000-0002-8379-7304]Neil Dello Russo The Johns Hopkins University Applied Physics Laboratory, Laurel, MD, USA 0000-0001-9265-9475]Joshua P. Emery Northern Arizona University, Flagstaff, AZ, USA 0000-0002-8296-6540]William M. Grundy Lowell Observatory, Flagstaff, AZ, USA Northern Arizona University, Flagstaff, AZ, USA 0000-0002-7813-3669]Matthias Hahn Rheinisches Institut für Umweltforschung an der Universität zu Köln, Cologne, Germany 0000-0001-8675-2083]Victoria E. Hamilton Southwest Research Institute, Boulder, CO, USA 0000-0003-1869-4947]Carly Howett Oxford University, Oxford, UK 0000-0002-6562-9462]Hannah Kaplan NASA Goddard Spaceflight Center, Greenbelt, MD, USA 0000-0001-9601-878X]Katherine Kretke Southwest Research Institute, Boulder, CO, USA 0000-0003-3234-7247]Tod R. Lauer NSF’s National Optical Infrared Astronomy Research Laboratory, Tucson, AZ, USA London Stereoscopic Company, London, UK 0000-0002-0362-0403]Raphael Marschall CNRS, Observatoire de la Côte d'Azur, Laboratoire J.-L. Lagrange, Nice, France 0000-0003-3402-1339]Audrey C. Martin University of Central Florida, Orlando, FL, USA London Stereoscopic Company, London, UK 0000-0002-0457-3872]Stefano Mottola DLR Institute of Planetary Research, Berlin, Germany 0000-0002-5846-716X]Catherine B. Olkin Muon Space, Mountain View, CA, USA 0000-0003-3479-856X]Martin Pätzold Rheinisches Institut für Umweltforschung an der Universität zu Köln, Cologne, Germany 0000-0002-3672-0603]Joel Wm. Parker Southwest Research Institute, Boulder, CO, USA 0000-0003-0333-6055]Simon Porter Southwest Research Institute, Boulder, CO, USA 0000-0001-9005-4202]Frank Preusker DLR Institute of Planetary Research, Berlin, Germany 0000-0001-8541-8550]Silvia Protopapa Southwest Research Institute, Boulder, CO, USA 0000-0002-6829-5680]Dennis C. Reuter NASA Goddard Spaceflight Center, Greenbelt, MD, USA 0000-0002-8585-2549]Stuart J. Robbins Southwest Research Institute, Boulder, CO, USA 0000-0002-5977-3724]Julien Salmon Southwest Research Institute, Boulder, CO, USA 0000-0003-4641-6186]Amy A. Simon NASA Goddard Spaceflight Center, Greenbelt, MD, USA Southwest Research Institute, Boulder, CO, USA 0000-0002-9413-8785]Jessica M. Sunshine University of Maryland, College Park, MD, USA 0000-0001-9665-8429]Ian Wong NASA Goddard Spaceflight Center, Greenbelt, MD, USA American University, Washington, DC, USA 0000-0003-0951-7762]Harold A. 
Weaver The Johns Hopkins University Applied Physics Laboratory, Laurel, MD, USA KinetX Space Navigation and Flight Dynamics Practice, Simi Valley, CA, USA Lockheed Martin Space, Littleton, CO, USA Southwest Research Institute, Boulder, CO, USA Arizona State University, Tempe, AZ, USA 0000-0002-3578-7750]Olivier S. Barnouin The Johns Hopkins University Applied Physics Laboratory, Laurel, MD, USA Southwest Research Institute, Boulder, CO, USA NASA Goddard Spaceflight Center, Greenbelt, MD, USA Southwest Research Institute, Boulder, CO, USA 0000-0002-4950-6323]Bryce Bolin NASA Goddard Spaceflight Center, Greenbelt, MD, USA Lockheed Martin Space, Littleton, CO, USA NASA Goddard Spaceflight Center, Greenbelt, MD, USA Lockheed Martin Space, Littleton, CO, USA 0000-0002-8456-3390]Russell Carpenter NASA Goddard Spaceflight Center, Greenbelt, MD, USA Indigo Information Services, Tucson, AZ, USA Lockheed Martin Space, Littleton, CO, USA Lockheed Martin Space, Littleton, CO, USA Lockheed Martin Space, Littleton, CO, USA Lockheed Martin Space, Littleton, CO, USA KinetX Space Navigation and Flight Dynamics Practice, Simi Valley, CA, USA Red Canyon Software, Denver, CO, USA Lockheed Martin Space, Littleton, CO, USA Lockheed Martin Space, Littleton, CO, USA Lockheed Martin Space, Littleton, CO, USA Lockheed Martin Space, Littleton, CO, USA KinetX Space Navigation and Flight Dynamics Practice, Simi Valley, CA, USA 0000-0003-1268-8845]Caden Gobat Southwest Research Institute, Boulder, CO, USA Lockheed Martin Space, Littleton, CO, USA Lockheed Martin Space, Littleton, CO, USA Lockheed Martin Space, Littleton, CO, USA Lockheed Martin Space, Littleton, CO, USA Cornell University, Ithica, NY, USA Lockheed Martin Space, Littleton, CO, USA Lockheed Martin Space, Littleton, CO, USA Lockheed Martin Space, Littleton, CO, USA Lockheed Martin Space, Littleton, CO, USA Marshall Space Flight Center, Huntsville, AL, USA Southwest Research Institute, Boulder, CO, USA 0000-0003-0797-5313]Brian A. Keeney Southwest Research Institute, Boulder, CO, USA Lockheed Martin Space, Littleton, CO, USA Lauffer Space Engineering, Littleton, CO, USA KinetX Space Navigation and Flight Dynamics Practice, Simi Valley, CA, USA Teton Cyber Technology, Littleton, CO, USA KinetX Space Navigation and Flight Dynamics Practice, Simi Valley, CA, USA American University, Washington, DC, USA Lockheed Martin Space, Littleton, CO, USA KinetX Space Navigation and Flight Dynamics Practice, Simi Valley, CA, USA Arizona State University, Tempe, AZ, USA Lockheed Martin Space, Littleton, CO, USA Lockheed Martin Space, Littleton, CO, USA 0000-0001-7616-3664]Matthew Montanaro Rochester Institute of Technology, Rochester, NY USA KinetX Space Navigation and Flight Dynamics Practice, Simi Valley, CA, USA The Johns Hopkins University Applied Physics Laboratory, Laurel, MD, USA KinetX Space Navigation and Flight Dynamics Practice, Simi Valley, CA, USA 0000-0002-3242-4938]Derek S. Nelson KinetX Space Navigation and Flight Dynamics Practice, Simi Valley, CA, USA NASA Headquarters, Washington, DC, USA Lockheed Martin Space, Littleton, CO, USA 0000-0003-4574-8795]John Y. 
Pelgrift KinetX Space Navigation and Flight Dynamics Practice, Simi Valley, CA, USA Lockheed Martin Space, Littleton, CO, USA Stellar Solutions, Denver, CO, USA NASA Goddard Spaceflight Center, Greenbelt, MD, USA KinetX Space Navigation and Flight Dynamics Practice, Simi Valley, CA, USA Southwest Research Institute, Boulder, CO, USA 0000-0003-4615-8340]Eric Sahr KinetX Space Navigation and Flight Dynamics Practice, Simi Valley, CA, USA Teton Cyber Technology, Littleton, CO, USA Southwest Research Institute, Boulder, CO, USA KinetX Space Navigation and Flight Dynamics Practice, Simi Valley, CA, USA Lockheed Martin Space, Littleton, CO, USA Stellar Solutions, Denver, CO, USA The Johns Hopkins University Applied Physics Laboratory, Laurel, MD, USA NASA Headquarters, Washington, DC, USA Lockheed Martin Space, Littleton, CO, USA 0000-0002-0338-0534]Michael Vincent Southwest Research Institute, Boulder, CO, USA Big Head Endian, Burden, KS, USA Big Head Endian, Burden, KS, USA 0009-0006-1592-0397]Daniel R. Wibben KinetX Space Navigation and Flight Dynamics Practice, Simi Valley, CA, USA Southwest Research Institute, Boulder, CO, USA 0009-0002-0189-650X]John P. Wilson The Johns Hopkins University Applied Physics Laboratory, Laurel, MD, USA Arizona State University, Tempe, AZ, USA § Asteroids with diameters less than about 5 km have complex histories because they are small enough for radiative torques (that is, YORP, short the Yarkovsky–O’Keefe–Radzievskii–Paddack effect) <cit.> to be a notable factor in their evolution <cit.>. (152830) Dinkinesh is a small asteroid orbiting the Sun near the inner edge of the Main Asteroid Belt with a heliocentric semi-major axis of 2.19 AU; its S-type spectrum <cit.> is typical of bodies in this part of the Main Belt <cit.>. Here we report observations by the Lucy spacecraft <cit.> as it passed within 431 km of Dinkinesh. Lucy revealed Dinkinesh, which has an effective diameter of only ∼720 m, to be unexpectedly complex. Of particular note is the presence of a prominent longitudinal trough overlain by a substantial equatorial ridge, and the discovery of the first confirmed contact binary satellite, now named (152830) Dinkinesh I Selam. Selam consists of two near-equal sized lobes with diameters of ∼210 m and ∼230 m. It orbits Dinkinesh at a distance of 3.1 km with an orbital period of about 52.7 hr, and is tidally locked. The dynamical state, angular momentum, and geomorphologic observations of the system lead us to infer that the ridge and trough of Dinkinesh are probably the result of mass failure resulting from spin-up by YORP followed by the partial reaccretion of the shed material. Selam probably accreted from material shed by this event. Dinkinesh was a late addition to the Lucy mission and was intended primarily as an in-flight test of an autonomous range-finding and tracking system that is a critical component of ’s operations <cit.>. It was an appealing target because the flyby geometry closely mimicked that of the Trojan targets to be encountered later in the mission <cit.>. approached Dinkinesh at a solar phase angle of 120°; at close approach the phase dropped rapidly, going through near-zero and then increased to an outbound phase of 60. The relative velocity of and Dinkinesh was 4.5 km/s. At closest approach, was 430.629 ± 0.045 km from Dinkinesh and had a -Dinkinesh-Sun angle of 30^∘. A sample of the high-resolution images is shown in Fig.<ref>. 
The basic shape of Dinkinesh is reminiscent of the `top' shapes seen in the near-earth asteroid (NEA) population (for example, Moshup <cit.>, Bennu <cit.> Ryugu <cit.>, and, to a lesser degree, Didymos <cit.>). Dinkinesh is similarly sized as well. As described in more detail below, Dinkinesh has an effective diameter of 719 m, while Bennu, Ryugu, and Didymos have effective diameters of between ∼560 m and ∼900 m. Like these objects, Dinkinesh is dominated by a prominent equatorial ridge. Dinkinesh also has a large trough running nearly perpendicular to the ridge. While both Ryugu and Didymos have similar features <cit.>, the trough on Dinkinesh appears to be more substantial. The ridge overlays the trough, implying that it is the younger of the two structures. However, there is no information on their absolute ages, and thus they could potentially have formed in the same event. High-resolution images obtained throughout the encounter (see the Methods/Observations section) make it possible to reconstruct shape models for each of the components. Due to the small size of Dinkinesh and Selam, usefully-resolved imaging was possible for only several minutes before and after close encounter. Dinkinesh's rotation was observed, but the amount of additional terrain revealed by the rotation was small (∼10%) compared to the unilluminated portion of the body. No rotational or orbital motion of Selam was seen. Illumination of Dinkinesh's anti-solar hemisphere from Selam was too faint to be observed. Thus, only one hemisphere of each body is visible in imaging. However, constraints on the unobserved hemispheres can be provided by photometry from both the ground <cit.> and when it was too far away to resolve the targets. We therefore turn our attention to the analysis of this photometry before we further discuss the shapes and structure of the system. The unresolved data from the post-encounter light curve photometry campaign (see the Methods/Observations section) is described in Fig. <ref>. From these data we determine that the contribution of Selam's rotation to the lightcurve has periodicty with T = 52.44 ± 0.14 hrs, comparable to the 52.67 ± 0.04 hrs period found from ground-based observations <cit.>. We adopt the ground-based period of 52.67 hrs because it is more precise due to its longer sampling baseline. The post-encounter light curve also shows dips inferred to be due to mutual eclipses of Dinkinesh and Selam with the same 52 hour periodicity (Fig. <ref>, and the Methods/Observations section), demonstrating that Selam's orbital period is very similar to its rotational period. We interpret this to mean that the system is tidally locked. By using the formalism in <cit.>, we estimate that the timescale for tidal effects to align the long axis of Selam radially relative to Dinkinesh to be short, of order 10^5 yr at the current separation, although their formalism might not be accurate because some important radiation effects <cit.> were not considered <cit.>. We also find that the centers of Dinkinesh and the two lobes of Selam appear to lie along a single line (Fig.<ref>m) — consistent with a tidally locked system. Thus, we conclude that Selam is in synchronous rotation and thereby orbits Dinkinesh with a period of 52.67 hrs. The timing of the mutual events in the post-encounter lightcurve (Fig. <ref>), relative to Selam's orbital position during the flyby, shows that Selam's orbit must be retrograde with respect to Dinkinesh's heliocentric orbit. 
The primary, Dinkinesh, rotates more rapidly, with the best-fit to the lightcurve giving a spin period of P = 3.7387 ± 0.0013 hr. Feature tracking during the flyby shows that the rotation is retrograde with respect to ecliptic North, i.e., in the same sense as Selam's orbit. The overall spin state (a synchronous secondary and a rapidly spinning primary) makes Dinkinesh similar to the majority of small near-earth and Main Belt asteroids with close satellites <cit.>. We now return to the topic of the shapes of Dinkinesh and Selam. A model of Dinkinesh produced by the process described in the Methods/Shape section and based on a preliminary reconstruction of 's trajectory is illustrated in Fig. <ref>. We find a volume-equivalent spherical diameter of 719 ± 24 m for Dinkinesh based on this shape model. Selam appears to consist of two distinct lobes. However, the contact point was in shadow during the encounter and so the exact nature of the neck is uncertain. Images taken during approach where the outer lobe was father way from the spacecraft than the inner one (see Fig. <ref>h for example) show that the neck is less than ∼67% of the inner lobe's diameter. We find equivalent spherical diameters of 212 ± 21 m and 234 ± 23 m for the inner and outer lobes of Selam based on fitting ellipses to visual limb profiles. If the lobes were in orbit about one another, their period would be ∼4 hr, which is inconsistent with the lightcurve observations described above. Additionally, we would have detected motion if the period were that short. Thus, the lobes must be resting on one another and Selam is likely a contact binary. Outbound images clearly show both lobes of Selam (Fig.<ref>m) from a direction almost perpendicular to the vector between them, as determined by triangulation. From these images, we derive a preliminary estimate of the center-of-figure separation between Dinkinesh and Selam to be 3.11 ±0.05 km at the time of the flyby. We argue in the Methods/Mass&Density section that Selam is on a circular orbit. If so, this separation represents the semi-major axis of the mutual orbit. The orientation in space of both Dinkinesh and Selam can be estimated with current data. In particular for Dinkinesh, the small amount of rotation observed during the encounter and the direction of its shape model's short axis suggests that its obliquity ∼178.7 ± 0.5 (i.e. its rotational axis is ∼1 from being perpendicular to its orbital plane). For the satellite, the mutual eclipses observed during the post-encounter Lucy observations, and mutual events inferred from the 2022–2023 ground-based lightcurve [6], suggest that its orbit plane is close to Dinkinesh's heliocentric orbital plane. It is therefore likely that all three, Dinkinesh's heliocentric orbit, Selam's orbit, and Dinkinesh's equatorial plane are close to one another. This configuration is nearly ubiquitous among small binary asteroids <cit.> as a result of spin-pole reorientation by the asymmetric thermal radiation forces caused by the YORP effect <cit.>. The YORP timescale is less than ∼ 10^7 yr for Dinkinesh's spin-pole to approach either zero or 180 <cit.>. The inner lobe of Selam also has a prominent ridge-like structure (Fig. <ref>h-k). Both lobes of Selam have flat facets and a blocky, angular overall shape, and the apparent ridge may be the boundary of two such facets. 
If, however, the structure formed from the accretion of material from a Dinkinesh-centered disk, as one might expect, it would have originally been aligned with both the orbit plane and Dinkinesh's ridge. In that case, it is likely that Selam's ridge then became misaligned during the formation of the contact binary, but this implies that either 1) Selam is currently rotating or librating about its long axis, or 2) its ridge formed before contact. The observed structure of Selam implies that it is a rubble pile, at least partially. However, the angular, binary, shape of Selam implies significant internal strength, and is dramatically different from the oblate spheroid shape of Dimorphos, the moon of Didymos <cit.> the only other satellite of a sub-km asteroid (also an S-type) for which we have detailed images. The mineralogy and bulk density of Dinkinesh provide constraints on its structure. Dinkinesh's bulk density is 2400 ± 350 kg/m^3 (Methods/Mass&Density section), which is in the range of expected values for objects with ordinary chondrite mineralogies. Bulk densities of L-chondrite meteorites, which are a good analog for the range of ordinary chondrites <cit.> and have the expected mineralogy for S-type asteroids, average 3360 ± 160 kg/m^3 with 7.5% microporosity <cit.>. Given the uncertainties in Dinkinesh bulk density discussed in the Methods/Mass&Density section this suggests a macroporsity of 25±10%. Its bulk density is in family with the S-type NEAs of this mineralogy and in this size range. For example, while Didymos has a similar density of 2800 ± 280 kg/m^3 <cit.>, Itokawa’s bulk density is 1900 ± 130 kg/m^3 <cit.> and the radar-observed binary Moshup is 1970 ± 240 kg/m^3 <cit.>. The low-density objects are likely much more porous and have a more pronounced rubble-pile structure than Didymos, with Dinkinesh some place in between. Dinkinesh and Didymos are probably on part of a continuum of where significant portions of the object are relatively coherent. Dinkinesh accounts for 94% of the volume of the system with Selam accounting for 6%. If we assume that all of the components have an equal density, the component masses of Dinkinesh and Selam are M_D = 4.67 × 10^11 and M_S = 0.28 × 10^11 kg, respectively. Using these component masses, it is possible to calculate that the barycenter is offset from the center of mass of Dinkinesh by a distance s_bary = 176 m in the direction of Selam, well interior to the body of the primary. Fig.<ref> strongly suggests that Dinkinesh suffered a global structural failure in its past. Given its small size, this event is likely the result of spin-up by the YORP effect <cit.>, see discussion in Fig. <ref> caption. If true, then the Dinkinesh system's angular momentum should be comparable to the total angular momentum of a parent body spinning near the spin-barrier limit <cit.>. Indeed, we find that the Dinkinesh system contains 88% of the angular momentum required for rotational breakup (cf. Methods/Angular_Momentum section), which is consistent with the idea that Dinkinesh's structure failed due to its large angular momentum. Dinkinesh shares many characteristics with other similar-sized asteroids, both Near Earth and Main Belt and is the only sub-km size Main Belt object ever studied at close range. Approximately 15% of small asteroids are observed to be binaries <cit.>. 
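The mass, bulk-density, and barycenter figures quoted above follow from a simple two-point-mass, volume-equivalent-sphere estimate, sketched below as a consistency check. The numbers plugged in are those reported in this work; the Methods/Mass&Density analysis is what treats the shapes properly and propagates the quoted uncertainties.

```python
import math

G = 6.674e-11                                 # m^3 kg^-1 s^-2

# Values reported in this work.
a = 3.11e3                                    # Dinkinesh-Selam separation [m]
P = 52.67 * 3600.0                            # orbital period [s]
d_dink, d_in, d_out = 719.0, 212.0, 234.0     # volume-equivalent diameters [m]

# Kepler's third law for the total system mass (circular orbit assumed).
M_sys = 4.0 * math.pi**2 * a**3 / (G * P**2)  # ~4.9e11 kg

# Volume-equivalent spheres give the bulk density and, under the equal-density
# assumption used in the text, the component masses and barycenter offset.
vol = lambda d: math.pi / 6.0 * d**3
V_dink, V_selam = vol(d_dink), vol(d_in) + vol(d_out)
rho = M_sys / (V_dink + V_selam)              # ~2400 kg m^-3
M_D = M_sys * V_dink / (V_dink + V_selam)     # ~4.7e11 kg (Dinkinesh)
M_S = M_sys - M_D                             # ~0.3e11 kg (Selam)
s_bary = a * M_S / M_sys                      # ~176 m, interior to Dinkinesh
```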
For the subset of these binary systems that are well characterized, the dominant pattern is a system with a synchronous secondary in a near-circular orbit with a semi-major axis, a, of ≈ 3 or more primary radii, r_prim <cit.>. Selam's semi-major axis, at a/r_prim ≈ 9, is wide compared to the majority of other well-characterized systems of similar size, which cluster closer to a/r_prim ≈ 3 <cit.>. Dinkinesh's spin period is also longer than the ≈ 2.5 hr period typically observed in the NEA binary population <cit.>. One possible scenario is that Selam originally formed nearer to Dinkinesh and then evolved to a larger semi-major axis through tidal interactions and/or binary YORP, which also slowed Dinkinesh's rotation <cit.>. The most distinctive characteristic of the Dinkinesh–Selam system is the contact binary structure of Selam. Fig. <ref> illustrates three possible scenarios for its formation. No matter how Selam formed, its binary nature places important constraints on the formation of these satellite systems. First, the fact that the two lobes are nearly the same diameter argues that the satellite formation process responsible for Selam favors building objects of a particular size. As far as we are aware, none of the formation models in the literature has been shown to meet this requirement. Second, as we describe above, the two lobes are distinct bodies. So, the process that brought the two lobes together must have done so with a small enough velocity for the lobes to have survived. The unexpected complexity of the Dinkinesh system strongly suggests that small asteroids in the Main Belt are more complex than previously thought. The fact that a contact binary can form in orbit about a larger object suggests a new mode for the formation of small bilobed bodies such as Itokawa <cit.>, which may once have been components of a system like Dinkinesh that subsequently became unbound. § METHODS Observations: The analysis presented here is based on panchromatic (350–850 nm) images taken with Lucy's LOng Range Reconnaissance Imager, hereafter L'LORRI, which is a 20.8 cm, f/13 telescope feeding a 1024 × 1024 pixel CCD focal plane <cit.>. L'LORRI has a field of view of 0.29° and a pixel size of 5 μrad. L'LORRI was primarily used in three distinct observation campaigns during the encounter: 1) Optical navigation reconstruction images were designed to precisely determine Lucy's trajectory. They were taken daily during the period of ±4 days around encounter and every 15 minutes from –2 hr to +2 hr relative to closest approach. 2) High-resolution close approach images were taken every 15 seconds from –10 min to +9 min, then with a 1 minute cadence until +55 minutes. 3) Post-encounter lightcurve photometry was acquired from +4 hr to +95 hr. Three exposures were taken at a cadence of one hour. At this time the Dinkinesh–Selam system was unresolved. In order to minimize data volume, these data were taken in L'LORRI's so-called 4×4 mode, which bins the data by 4×4 pixels during the CCD readout. Lightcurve_Analysis: The orbital period of Selam and the rotational period of Dinkinesh can be determined using the post-encounter lightcurve photometry described above in the Methods/Observations section. Instrumental magnitudes of the system were extracted from the images using a 1.5-pixel radius aperture. The small aperture served to exclude contamination from nearby stars.
The formal errors from the extraction were scaled upward by a factor of 1.545 to adjust the reduced χ^2 to be 1 prior to determining the final uncertainties on the fitted results. A total of 267 images were analysed. The data were compensated for the changing distance and corrected to a constant solar phase angle using a phase coefficient of 0.06 mag/deg. The phase angle varied from 60.52° at the start to 59.67° at the end. The observing direction changed little over the 3.5 days, and these corrections remove the slight changes, leaving only a record of the global photometric properties of the system. The resulting lightcurve is shown in Extended Data Fig. <ref> in units of relative flux. We analysed the lightcurve with an iterative process designed to separate the contributions to the total flux from Dinkinesh and Selam. As the first step, a model was constructed that consisted of a Fourier series expansion of the lightcurve combined with a period for each object. The reference time for the rotational phase was arbitrarily set to the time of the first data point for both objects. The mean flux of Dinkinesh was a free parameter in the model. In addition, we iteratively varied the Selam/Dinkinesh mean flux ratio. This ratio is constrained by the close-approach resolved images (Fig. <ref>d, for example), which show that the ratio of the visible areas of the two objects is 0.25. The two objects are also seen to have similar surface brightness, and so the unresolved flux ratio is also 0.25. This ratio was assumed to apply at minimum light for both objects because Selam is viewed edge-on. An iterative correction was applied after separating the lightcurves to convert from the minimum to the mean flux, and the final mean flux ratio was set at 0.33 (corresponding to a magnitude difference of 1.3). The model parameters were determined in a series of iterative steps. The first pass set a reasonable mean flux for Dinkinesh with its Fourier terms disabled. At this point, only Selam was free to be adjusted to fit the data. The data were scanned in period. At each step, a best-fit Fourier series was computed and the χ^2 was recorded. The lowest-χ^2 period gave a preliminary value of 51.76 hours for Selam. This model was subtracted from the lightcurve data and a similar scan was performed on the Dinkinesh-only data. The Dinkinesh scan returned two interesting minima in χ^2 at periods of ∼3.7 and ∼4.3 hours. Note that all periods assume that the lightcurve is double-peaked. Given the two preliminary periods, the data were then fitted with the full two-object model and all free parameters were optimized simultaneously with an amoeba χ^2 minimization <cit.>. Using the amoeba fit as the starting point, and applying the a posteriori correction to the uncertainties, a second, Markov-Chain Monte-Carlo fit <cit.> was run for the model. There were 18 data points that were excluded due to unreasonably large residuals (see the discussion below). The final fitted lightcurves revealed amplitudes of 0.82 mag for Selam and 0.25 mag for Dinkinesh. The rotation period of Selam was determined to be 52.44 ± 0.14 hours from this fit; this is also taken to be its orbital period about Dinkinesh because Selam is likely tidally locked, as shown by the presence of mutual events. The resulting phased lightcurves are shown in Fig. <ref>. The variations in flux for the two objects are coincidentally about the same; since Dinkinesh is much larger, this implies that its relative flux variation is smaller.
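The period-scan stage of this procedure can be illustrated with a minimal Python sketch. The synthetic data, parameter values and function names below are invented for illustration only; the actual analysis used the 267 L'LORRI points and the full two-object model described above.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 95.0, 267))            # hr, synthetic observation times
P_true = 52.44                                      # hr, injected double-peaked period
flux = 1.0 + 0.40 * np.cos(4.0 * np.pi * t / P_true) + 0.02 * rng.normal(size=t.size)
sigma = 0.02 * np.ones_like(flux)                   # per-point uncertainties

def fourier_design(t, period, n_terms=2):
    """Design matrix: constant term plus cos/sin harmonics of the trial period."""
    cols = [np.ones_like(t)]
    for k in range(1, n_terms + 1):
        cols.append(np.cos(2.0 * np.pi * k * t / period))
        cols.append(np.sin(2.0 * np.pi * k * t / period))
    return np.column_stack(cols)

def chi2_at_period(t, flux, sigma, period, n_terms=2):
    """Weighted least-squares Fourier fit at a fixed trial period; returns chi^2."""
    A = fourier_design(t, period, n_terms) / sigma[:, None]
    coef, *_ = np.linalg.lstsq(A, flux / sigma, rcond=None)
    resid = flux / sigma - A @ coef
    return np.sum(resid**2)

trial_periods = np.arange(40.0, 65.0, 0.01)         # hr, scan grid
chi2 = np.array([chi2_at_period(t, flux, sigma, P) for P in trial_periods])
print(f"best-fit period: {trial_periods[np.argmin(chi2)]:.2f} hr (injected {P_true} hr)")
```

The scan recovers a period close to the injected value; in the real analysis the scan was run separately for Selam and for the Selam-subtracted (Dinkinesh-only) data before the simultaneous fit.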
The lightcurve of Selam is well fit by two Fourier terms that capture the slightly asymmetric maximum and slightly broadened minima. The lightcurve of Dinkinesh is considerably more complicated: both the minima and maxima are asymmetric, and there are also clearly higher-order variations. In this case a 4-term Fourier fit was required, and even this does not fully capture all of the detail in the curve. For instance, one of the minima is sharper than can be followed with a 4-term fit. Dinkinesh's rotation period was determined to be 3.7387 ± 0.0013 hr (the 4.3 hour period discussed above was determined to be an alias). The outliers that were flagged during the lightcurve fitting, which are shown in red in the figures, are also of interest because they occur at a coherent rotational phase, a similar time after each of the two lightcurve minima of Selam. A reasonable explanation for these low points is a mutual event between the two bodies. These could, in general, be from the bodies occulting each other from the spacecraft's perspective, or from casting shadows on one another. Fortunately, the timing of these minima allows us to determine which. Looking at the photometry as a function of time, the low points appear at a regular interval of half the rotation period of Selam. Geometric constraints from the absolute timing indicate that the events are shadow transits of each other and not physical obscuration along the line of sight (occultations). Furthermore, the timing clearly indicates that the orbital motion of Selam is retrograde, as is true for the rotation of Dinkinesh as well. The first and third dips seen in time are inferior shadowing events while the middle dip is a superior event. In the phased plot, the two inferior events overlay each other and trace out a more complete lightcurve of an event. The superior event has fewer measurements and shows an incomplete profile of the dip that misses the maximum eclipse point, which must lie midway between the two sets of points. Shape: The digital shape model used for this study (see Fig. <ref>) was generated by applying classical stereo-photogrammetry techniques <cit.> to L'LORRI imagery. A total of 48 images with a best ground sampling distance ranging from ∼10 m/pix to 2.2 m/pix were chosen from the high-resolution close approach images described in the Methods/Observations section. These were employed to establish a network of ∼3000 control points, which served as an input for the bundle adjustment process. Further, thanks to the very good noise and sensitivity performance of the L'LORRI imager, and to its comparatively large field of view, we could identify about 20 catalog field stars in the Dinkinesh fields throughout the encounter. These star positions were used in the determination of the stereo-photogrammetric adjustment, and contributed considerably to stabilizing the solution. As a result, the camera extrinsic matrices were determined, which describe the transformation between the camera and body-fixed reference systems. These transformation matrices were then used to triangulate surface points from homologous image points, which were derived by means of dense stereo-matching <cit.>. The resulting dense point cloud (∼5 × 10^6 3D points) was then connected into a regular triangular mesh. The shape model derived from stereo reconstruction has an estimated scale error of about 1.4%, and covers about 45% of the body's surface.
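The triangulation step can be illustrated with a generic two-view direct linear transform (DLT). The cameras, point and numbers below are synthetic and purely illustrative; this is a textbook sketch, not the stereo-photogrammetric software used for the actual reconstruction.

```python
import numpy as np

def projection_matrix(K, R, t):
    """3x4 pinhole projection matrix from intrinsics K and pose (R, t)."""
    return K @ np.hstack([R, t.reshape(3, 1)])

def project(P, X):
    """Project a homogeneous 3-D point to pixel coordinates."""
    x = P @ X
    return x[:2] / x[2]

def triangulate(P1, x1, P2, x2):
    """Linear DLT triangulation from two homologous image points."""
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

K = np.array([[2000.0, 0.0, 512.0],
              [0.0, 2000.0, 512.0],
              [0.0, 0.0, 1.0]])                       # synthetic intrinsics
theta = np.deg2rad(10.0)                              # second view rotated about y
R2 = np.array([[np.cos(theta), 0.0, np.sin(theta)],
               [0.0, 1.0, 0.0],
               [-np.sin(theta), 0.0, np.cos(theta)]])
P1 = projection_matrix(K, np.eye(3), np.zeros(3))
P2 = projection_matrix(K, R2, np.array([-1.0, 0.0, 0.0]))

X_true = np.array([0.2, -0.1, 8.0, 1.0])              # homogeneous surface point
x1, x2 = project(P1, X_true), project(P2, X_true)
print("recovered point:", triangulate(P1, x1, P2, x2))  # ~ [0.2, -0.1, 8.0]
```

In the real pipeline the extrinsic matrices come from the bundle adjustment described above, and each matched pixel pair from the dense stereo-matching is triangulated in a similar way to build the ∼5 × 10^6 point cloud.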
In order to produce a closed shape, and allow an estimation of the body volume, the unseen hemisphere has been approximated with an analytical solid figure. For this purpose, we chose a generalized super-ellipsoid <cit.>, whose implicit representation is given by 1 = |x/a|^k + |y/b|^m + |z/c|^n, where x, y, and z are the standard Cartesian coordinates. A fit to the reconstructed hemisphere leads to a = 0.40, b = 0.40, c = 0.35 km, k = m = 2, and n = 1.35. The generalized super-ellipsoid provides a better match to Dinkinesh's 'top' shape than a conventional triaxial ellipsoid. We estimated the uncertainty in Dinkinesh's volume from the difference between the shape model and the super-ellipsoid convex shell. For the hemisphere covered by imaging, the difference in volume is 4.7%. In order to be conservative, we round this and apply an arbitrary factor of two margin to arrive at a volume uncertainty of ±10%. This uncertainty is propagated to quantities derived from the volume. In particular, we note that the volume-equivalent radius of Dinkinesh is calculated as r_veq = (3V/4π)^1/3 rather than from direct distance measurements. The dimensions of the two lobes of Selam were found by fitting ellipses to orthogonal axes in multiple resolved images of Selam from different viewing angles. Selam's inner lobe is fit with an ellipsoid measuring 240 × 200 × 200 m. The outer lobe is measured at 280 × 220 × 210 m. Uncertainties were estimated to be 10% per axis by adjusting the ellipsoidal fits until they were visually too large or too small to match the images. Combining the above values, we calculate a total system volume of V_tot = 2.06 ± 0.20 × 10^8 m^3. Mass&Density: System density can be estimated from the orbital period and relative semi-major axis of the two bodies. As we describe in the main text, the center-of-figure separation between Dinkinesh and Selam was 3.11 ± 0.05 km at the time of the flyby. The eccentricity of Selam's orbit is not directly derivable from existing data, although it can be constrained. The regular phasing of the lightcurve minima collected prior to encounter from the ground <cit.> and from Lucy (Fig. <ref>) is consistent with a near-circular orbit, given our inference (Fig. <ref>) that these minima are caused by mutual eclipses. We would expect Selam's eccentricity to be near zero given that tidal timescales for orbit circularization are of order 10^6–10^7 yr. The ages of asteroid pairs, where one of the members of the pair has subsequently undergone a mass-shedding event leading to the formation of a satellite, suggest that binary-YORP effects <cit.> might shorten the circularization timescale to less than ∼10^6 yr <cit.>. Thus, we assume e = 0 in the analysis performed here. Ground-based lightcurve observations, taken at multiple epochs, can better constrain any orbital eccentricity that might exist. Assuming that Selam is in a circular orbit about Dinkinesh and has an orbital period of 52.67 ± 0.04 hr, we derive a system mass of 4.95 ± 0.25 × 10^11 kg (GM = 33.0 ± 1.6 m^3/s^2) from Kepler's third law. In the Methods/Shape section, we calculate a total system volume of V_tot = 2.06 ± 0.20 × 10^8 m^3. Combining the system mass and volume, we derive a bulk density of ρ = 2400 ± 350 kg/m^3. We add the caveat that if the assumption of zero eccentricity is incorrect and the separation observed at the time of the flyby differs from the semi-major axis, it would introduce a systematic error into the calculation of density.
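For reference, the central-value arithmetic behind these mass and density estimates can be reproduced in a few lines of Python. The script is only an illustration of the formulas quoted above (Kepler's third law and the L-chondrite comparison); it omits the uncertainty propagation.

```python
import numpy as np

G = 6.674e-11                        # m^3 kg^-1 s^-2
a = 3.11e3                           # m, separation, assumed equal to the semi-major axis
P = 52.67 * 3600.0                   # s, orbital period of Selam
V_tot = 2.06e8                       # m^3, total system volume

M_sys = 4.0 * np.pi**2 * a**3 / (G * P**2)   # Kepler's third law for the two-body system
rho = M_sys / V_tot                          # bulk density
rho_L_chondrite = 3360.0                     # kg m^-3, L-chondrite meteorite bulk density

print(f"GM  = {G * M_sys:.1f} m^3/s^2")
print(f"M   = {M_sys:.2e} kg")
print(f"rho = {rho:.0f} kg/m^3")
print(f"macroporosity ~ {100.0 * (1.0 - rho / rho_L_chondrite):.0f}%")
```

The printed values match the quoted GM, system mass and bulk density, and the simple porosity estimate is consistent with the 25 ± 10% macroporosity discussed in the main text.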
Conversely, however, the range of likely density for an S-type asteroid, as discussed in the main text, constrains the maximum eccentricity to be of order 0.1, and the assumption of zero eccentricity is fully consistent with known asteroid properties. Angular_Momentum: Knowledge of the component masses and the spin state can be combined to calculate the system's angular momentum. For simplicity we assume that Dinkinesh's moment of inertia can be adequately represented by a sphere of volume-equivalent radius. Assuming that Selam is tidally locked, the contribution to the angular momentum from its spin is small. Likewise, the orbital motion of Dinkinesh around the barycenter is small and we ignore it. The system angular momentum is nearly equally divided between Dinkinesh's spin, L_spin = 11.2 ± 1.9 × 10^12 kg m^2 s^-1, and Selam's orbital motion, L_orb = 8.0 ± 4.0 × 10^12 kg m^2 s^-1. The total angular momentum of the system is L_sys = 19.3 ± 4.4 × 10^12 kg m^2 s^-1. The normalized angular momentum, α_L, is computed from the total system angular momentum divided by the angular momentum of a sphere containing the total mass of the system rotating at the maximum rate for a cohesionless rubble pile <cit.>. That rate is given by ω_max = (4πρG/3)^1/2, corresponding to a spin period of T_max = 2.13 hr, i.e., the observed Main Belt spin barrier. We find α_L = 0.88, consistent with that expected for a binary produced by fission <cit.>. END NOTES Acknowledgements The Lucy mission is funded through the NASA Discovery program on contract No. NNM16AA08C. The authors thank the entire mission team for their hard work and dedication. Author Contributions H.F.L. and K.S.N. jointly led the writing of the text. S.Mo., F.P., and S.Ma. developed the shape model. J.R.S., I.S., J.S., and S.Ma. planned the science encounter sequence. J.R.S. led the production and analysis of L'LORRI images. M.B., J.R.S., and S.Mo. analyzed Lucy lightcurve data to derive Selam's orbital period and mutual event timing. T.R.L. contributed to deconvolution of L'LORRI images. B.H.M. led the production of stereo images from deconvolved L'LORRI images. R.M. identified Dinkinesh as a possible target for Lucy. All of the authors contributed to science, science planning and the successful operation of Lucy before, during, and after the encounter. Competing Interest Declaration The authors declare no competing interests. Corresponding Author Correspondence and requests for materials should be addressed to Harold F. Levison, Southwest Research Institute, Boulder, CO (hal.levison@swri.org). Data Availability Statement All raw and calibrated images, as well as the digital shape model, will be available via the Planetary Data System (PDS) (https://pds-smallbodies.astro.umd.edu/data_sb/missions/lucy/index.shtml) by 31 August 2024. § EXTENDED DATA
Asymptotic Properties of Random Homology Induced by Diffusion Processes
Artem Galkin, Mauro Mariani
arXiv:2406.17683v1 [math.PR], 25 June 2024. MSC 2010: 58J65, 60H10, 60F10, 53C43.
§ ABSTRACT We investigate the asymptotic behavior, in the long time limit, of the random homology associated to realizations of stochastic diffusion processes on a compact Riemannian manifold. In particular a rigidity result is established: if the rate is quadratic, then the manifold is a locally trivial fiber bundle over a flat torus, with fibers being minimal in a weighted sense (that is, regarding the manifold as a metric measured space, with the invariant probability being the weight measure). Surprisingly, this entails that, at least for some classes of manifolds, the homology of non-reversible processes relaxes to equilibrium more slowly than its reversible counterpart (as opposed to the respective empirical measure, which relaxes faster). § INTRODUCTION The asymptotic behavior of invariants of random curves on manifolds is a classical subject, a famous example being the Cauchy asymptotics of the winding number of a Brownian motion in R^2 due to Spitzer, see <cit.>. Manabe, Lyons & McKean, and Pitman & Yor further investigated winding numbers in a well-known series of papers throughout the '80s, <cit.>, with some large deviations bounds provided in <cit.>. It is not possible to mention all the literature starting from the '90s, even when considering just Brownian winding numbers. A geometrical approach to the problem has been developed by the Japanese school of stochastic calculus, an explanatory reference being Watanabe's <cit.>, where the asymptotic behavior of windings around points and windings around topological "holes" is investigated on Riemannian surfaces. §.§.§ Results and motivations This paper is mostly motivated by this approach; in particular, it studies the qualitative behavior of the fluctuations of the random homology associated to diffusion processes on compact manifolds. We focus on the long-time limit T→∞ of the random homology h_T associated to paths of general diffusion processes. While precise definitions are given in Section <ref>, this should be thought of as a finite-dimensional random variable that describes the number of windings per unit time of a stochastic continuous curve (X_t)_t ≤ T. It is easy to show that, in the sense of large deviations, as T→∞, P(h_T ≃ h) ≃ exp(-T G(h)), where G(h) ∈ [0,∞] is a positive rate function defined on the space of de Rham homology classes h ∈ H_1(M; R), see Section <ref>. For a fixed compact Riemannian manifold M, the law of the diffusion process X_t is identified by the Riemannian metric g and a tangent vector field b on M. Thus G(·) is itself determined by g and b (while it is independent of the initial condition X_0), and our goal is to establish for which triples (M,g,b) the rate G features some qualitative properties. To fix the ideas, consider the SDE on a flat d-dimensional torus Ẋ = h̅ + Ḃ, where h̅ ∈ R^d is a constant and B is a standard (Gaussian) Brownian motion. Then the vector h_T ≡ (X̃_T - X̃_0)/T describes exactly the time-normalized number of times the process wound around the holes of the torus, where X̃ is the lift of X to R^d (see Remark <ref> about the fact that (X_t) is not a closed curve).
In this case h_T is a Gaussian random variable with expected value h̅, and it is easily seen that as T→∞ it satisfies a large deviations principle with a quadratic rate G(h)=12h-h̅^2 for a suitable norm ·. In general, on any manifold, as T→∞, h_T converges to some deterministic h̅∈ H_1(M; R), and G(h) describes the asymptotic of h_T and in particular it quantifies how unlikely it is to fluctuate away from h̅ (thus particular G(h)=0 only for h=h̅). Our main achievement is a rigidity result stating that if G is quadratic, then the manifold M is not too different from a flat torus with a random dynamic of type (<ref>). While the precise statement of Theorem <ref> requires some preliminaries, it basically states that a triple (M,g,b) featuring a quadratic rate functional G(·) for the associated random homology h_T, is necessarily a locally trivial fiber bundle over a flat torus T^b_1 (where b_1 is the first Betti number), moreover fibers are minimal in a suitable sense (which corresponds to minimality on the metric measured space (M,g,m) where m is in the invariant measure of the process, see Section <ref>), and some weak reversibility property should finally hold. While there are trivial examples of such manifolds and diffusion processes (e.g. (<ref>) or a product of (<ref>) and a simply connected manifolds with independent random dynamics), also nontrivial examples exists. In particular, we deduce that a Brownian motion on formal manifolds, see <cit.>, features such a quadratic rate; or more in general one may build weighted formal manifolds with such a property. Our result has a nontrivial consequence. As discussed in detail in Remark <ref>, one usually expects that non-reversible Markov processes present a faster convergence to equilibrium (and thus smaller fluctuations and larger deviations rates), when compared to their reversible counterpart (that is, the process associated to the symmetric part of their generator). This folklore idea can be made rigorous in various contexts, for instance: through spectral analysis on finite graphs; or, exactly in the aforementioned sense of large deviations, for the occupation measure of diffusion processes, see <cit.>. This is a rather significant phenomenon in applications, since it may be used to speed-up the convergence rate of MCMC algorithms. The actual rigorous result is that the large deviations rate of observables that are functions of the occupation measure of a Markov process, is larger than the rate of the same observables associated to their reversible part, thus yielding smaller fluctuations and faster convergence in the non-reversible case. Random homologies provide a counterexample to this phenomenon. The occupation measure of a diffusion process can be recovered from the action of the stochastic current, see (<ref>) on exact forms; however the random homology associated to a process is obtained by the action of the same random current on closed forms. Surprisingly, as soon as one extends in such a minimal way (from exact to closed) the set of observables considered, the previously described inequality on deviations rates is broken, and even more, when considering the quotient closed/excact, the opposite inequality actually holds. In a suitable sense reversible processes (or more in general homologically reversible processes, see Definition <ref>) are the ones presenting the fastest convergence rates. 
This is a straightforward consequence of Theorem <ref>, and while not in contradiction with <cit.>, it is certainly unexpected, see Remark <ref>. A second set of results, see Proposition <ref>, computes the expansion of G(h) around its minimizer h̅, that is we compute lim_→ 0^-2 G(h̅+ h), a global explicit bound G(h)≤ Q(h), and we establish a Gallavotti-Cohen symmetry for G in some cases. Informally speaking, the large deviations rate functions of the pair measure-current are the out-of-equilibrium counterpart of thermodynamic potentials in the context of equilibrium Statistical Mechanics and Dynamical Systems, see e.g. <cit.> and references there in. In this context, the significance of Gallavotti-Cohen symmetries for the rate function of the pair measure-current for non-reversible processes has been a subject of interest, likely after <cit.>. More recently, such symmetries have been noticed (to hold, or often not to hold) for some observables informally related to discrete homologies, faggionato2011gallavotti,faggionato2017random. In Section <ref> we further discuss some deeper motivations that require some additional preliminaries to be properly discussed. §.§.§ Plan of the paper The paper is organized as follows. In Section <ref> we introduce the main notation and recall some results concerning random currents. In Section <ref> we state our main results, namely Proposition <ref> and Theorem <ref>, and discuss some open problems. In Section <ref> we introduce some mathematical tools that may have an independent interest, in particular weighted Albanese maps, the relation between weighted minimality and weighted harmonicity and the lift of Markov generators. In Section <ref>-<ref> we prove the main results together with some additional statements which may have a broader interest. §.§.§ Acknowledgement We are thankful to Domenico Fiorenza for pointing out Corollary <ref>. § PRELIMINARIES §.§ Notation Let (M, g) be a smooth closed compact, connected Riemannian manifold. D^k ≡ D^k(M) denotes the space of smooth k-forms on M, d and d^∗ the differential and codifferential. In particular we mostly consider D^0 (smooth functions) and D^1 (smooth 1-forms). P(M) denotes the space of Borel probability measures on M endowed with the standard narrow topology, and J(M) the space of 1-currents regarded as the dual of D^1. ⟨·, ·⟩ stands for the pairing between vector fields and 1-forms on M. In particular we understand, as standard in the probabilistic notation, ⟨ b,df⟩ as the action bf of the vector field b on f∈ D^0. μ(f) denotes the integral of f w.r.t. μ, and j(ω) the action of a current j on ω. Since we have fixed a Riemannian tensor g, we denote without further notice |·| the associated norms both on tangent and cotangent spaces. For instance, with this notation, |ω|^2(x)= ⟨ g^-1ω, ω⟩(x). With an abuse of notation, ·_μ denotes the induced L^2(μ)-norms both on μ-square integrable 1-forms and currents, e.g. ω_μ^2μ(| ω|^2). H^1(M; R) and H_1(M; R) denote the space of De Rham cohomology and its dual, the real homology group of the manifold M. [c] stands for the space of closed ω∈D^1 in cohomology class c, and ⟨ h,c⟩ denotes the homology-cohomology duality[As various scalar products are used in this paper, we use a notation which is more common in the probabilistic literature but somehow unusual in differential geometry. Angled brackets ⟨· , ·⟩ are used for dualities that are independent of the metric g, while (·,·), (·,·)_r, ·,· denote scalar products that do depend g.]. 
Recall that a Riemannian metric is defined on M, let Δ be the associated Laplace-Beltrami operator and b a smooth vector field on M. Let (X_t)_t ≥ 0 be the M-valued Feller process with generator L defined on smooth functions as e:generator Lf= 12 Δf + ⟨b,df ⟩ There exists a unique probability measure m such that m(Lf) = 0 for any f ∈ D^0, referred to as the invariant measure. With a little abuse of notation, hereafter we still denote by L the closure of L both in C(M) and L^2(m). Finally, for μ∈ P(M), we denote j_μ∈ J(M) the typical current defined by e:jmu_def j_μ(ω)μ(12 d^∗ω+ ⟨b,ω⟩) §.§ Rate function for random homologies In this section we define the rate function G for random homologies. As it is easily derived from known results, we define it rigorously but concisely, and refer to Section <ref> for further details. Recall that the operator L defined in (<ref>) generates a Markov process. The empirical measure π_T ∈ P(M) and the empirical current J_T ∈ J(M) are then defined pathwise as (∘ denotes the Stratonovich integral) e:pit π_T(f)1T ∫_0^T f(X_s) ds, f ∈D^0 J_T(ω) = 1T ∫_0^T ω(X_s) ∘dX_s, ω∈D^1 To be precise, (<ref>) does not immediately identify a random element J_T∈ J(M), since the definition holds only a.e. for each ω∈ D^1, but D^1 is uncountable. This technicality can be overcome in various equivalent ways, either defining J_T for every X ∈ C^α(M) as a geometric rough path, see e.g. the seminal <cit.>, or using a classical result by Mitoma <cit.>. In any case (<ref>) provides a well-posed definition of random current. Sharper results of well-posedness J_T in stronger space are possible, see for instance flandoli2005stochastic,flandoli2009regularity. Large deviations results, in the limit T→∞, can be established for the pair (π_T,J_T) with standard tools. In the last decades, sharper techniques strengthened the topology in which large deviations hold, in the same way as rough paths techniques strengthened the topology for the well-posedness of J_T, see kuwada2003sample,kusuoka2010large with the full results for generic diffusions in <cit.>. These results motivate the following definitions. denotes the space of pairs (μ,j) ∈ P(M)× J(M) such that * j is a closed current, namely j(df)=0 for all f∈ D^0. * μ has finite Fisher information w.r.t. the invariant measure m, namely μ is absolutely continuous μ = ϱ m and √(ϱ)∈ W^1,2(M,m). The large deviations rate of the pair empirical measure-current is defined as e:rate I P(M) ×J(M) →[0,∞] I(μ,j) 12 j-j_μ^2_μ if (μ,j) ∈ +∞ otherwise The large deviations rate G of the random homology associated to the generator L in (<ref>) is defined as e:rate2 GH_1(M;R) →[0,∞] G(h)inf{ I(μ,j)j(ω)= (c,h), ∀ ω∈[c] } As discussed before, the law of (π_T,J_T) satisfies a large deviations principle with speed T and rate I, see the aforementioned galkin2024large,kuwada2003sample,kusuoka2010large for details. However, since paths of diffusion processes are not closed, the restriction of J_T to closed 1-forms does not identify a random homology. Fix however any linear isomorphism H^1(M, R) ∋ c ↦ξ_c ∈ D^1, associating to each cohomology class c ∈ H^1(M, R) a closed 1-form ξ_c ∈ [c]. Then a random homology h_T∈ H_1(M; R) is naturally associated to the Markov process X by the relation e:randomh ⟨h_T,c ⟩J_T(ξ_c), c∈H^1(M;R) The following remark is an immediate consequence of the contraction principle <cit.> and standard properties of geometric rough paths/Stratonovich integrals, and it is quickly proved below. 
Regardless of the isomorphism c ↦ξ_c, the law of h_T satisfies a good large deviations principle with speed T and rate function G. The main results of our paper concern the qualitative behavior of the rate function G(·) in terms of the topological properties of the manifold M and the reversibility and curvature properties of the generator L in (<ref>). § MAIN RESULT In this section we state our main results, Proposition <ref> and Theorem <ref>. §.§ Quadratic bounds and symmetries Recall that g denotes the metric tensor and m the invariant measure. It is a standard fact that the invariant measure has a smooth, strictly positve density w.r.t. to the volume measure on M and we write e:vr m=e^-V r b+ 12 ∇V r may be interpreted as the non-reversible part of the drift b, see Definition <ref>. The metric-measure triple (M,g,m) induces a scalar product on real homologies and cohomologies as follows. For each c∈ H^1(M; R), there exists a unique closed 1-form η_c ∈ [c] that is orthogonal to exact forms in L^2(m) (see Section <ref>). Then (c,c') m( ⟨ g^-1η_c,η_c'⟩) defines a scalar product on H^1(M; R) and (h,h') denotes the dual scalar product on H_1(M; R). Similarly, for each c∈ H^1(M; R), there exists a unique L-harmonic form ω_c ∈ [c], see Definition <ref> and Remark <ref>, and we can introduce a second scalar product (c,c')_r m( ⟨ g^-1ω_c,ω_c'⟩) on H^1(M; R), and (h,h')_r denotes the dual scalar product on H_1(M; R). The index r stresses the dependence on the non-reversible field r, see (<ref>). In general, see Remark <ref>, (h,h)_r ≤ (h,h) and the two scalar products coincide in the reversible case r=0. We start defining some relevant extensions of reversibility. The generator L (and the vector field b) is called * reversible: if it is self-adjoint in L^2(m). Equivalently, if the 1-form gb is exact, or yet equivalently if the non-reversible field r vanishes, see (<ref>). * quasi-reversible: if gb is closed. Equivalently, if the gr is m-harmonic. * homologically reversible: if the scalar products introduced above coincide, (c,c)=(c,c)_r. Or equivalently, if r satisfies ⟨ r,η_c⟩=const, for all c∈ H^1(M; R). Here const means that the scalar product is independent of x ∈ M. * typically reversible: if m(⟨ r,η_c ⟩)=0 for every c∈ H^1(M; R); namely if the restriction of J_T to closed 1-forms vanishes as T→∞. Of course reversibility implies quasi-reversibility, homological and typical reversibility. On the other hand <ref> + <ref> + <ref> is actually equivalent to reversibility (as easy to show but not needed for the following). One may check that <ref> implies <ref> on the same class of manifolds characterized by the conditions in Theorem <ref>-(<ref>). There exists h̅∈ H_1(M,R), that only depends on g and b, such that e:gaussineq G(h) ≤Q(h)12 (h -h̅,h -h̅) and e:eps lim_↓0 ^-2 G(h̅+ h)= 12 (h,h)_r Moreover in the quasi-reversible case (see Definition <ref>-<ref>), for c̅∈ H^1(M; R) the cohomology class of gb, G satisfies e:gc G(h)-G(-h)=Q(h)-Q(-h)=-2⟨h,c̅⟩ The last proposition shows in particular that * For quasi-reversible processes, G enjoys a Gallavotti-Cohen symmetry, actually the same Gallavotti-Cohen symmetry that Q trivially satisfies. * For homologically reversible processes, Q provides both a global upper bound and the small-homology limit of G, and also in this case (<ref>) writes G(h)≤ 12 ⟨ G”(h̅)(h-h̅) , h-h̅⟩. * G(h)=0 if and only if h=h̅ = lim_T→∞ h_T/T. h̅ is actually the rotation number of the current mr, see Remark <ref>. 
* For typically reversible processes, G is minimized at h=0 and it is quadratic around 0. More generally the quadratic bound in (<ref>) can be interpreted as a sub-gaussian bound for the large deviations of the random homology h_T introduced in (<ref>). In the next Section <ref> we fully characterize the cases where equality holds, the motivation for such a question is briefly explained in Section <ref> below. §.§ Asymptotically Gaussian homologies We say that the diffusion process X associated to the generator L has asymptotically Gaussian homology if equality holds in (<ref>), that is e:gauss2 G(h)= Q(h) 12 (h -h̅,h - h̅) This wording is due to the fact that, whenever the restriction of J_T to harmonic 1-forms is Gaussian (for instance on a flat torus), the equality in (<ref>) holds indeed. In this section we characterize asymptotically Gaussian homologies via a topological rigidity condition and an equivalent stochastic interpretation. Informally speaking, an asymptotically Gaussian homology can only rise from an underlying diffusion on flat tori. The metric-measure space (M,g,m) is naturally associated to the generator L: indeed one can recover g from L, see e.g. ambrosio2005gradient,bakry2014, while m is just the invariant measure of L. We then borrow the notion of m-weighted minimality from such a metric-measure context, see cheng2015stability,cheng2020minimal, and we conveniently rephrase it here in our framework. For e^-V the density of m w.r.t. the volume measure on M as above, and for N a submanifold with induced volume σ, N is called m-minimal if any of the following equivalent conditions is satisfied * N is a stationary point of the m-volume functional N↦∫_N e^-V dσ. * The m-mean curvature H_m H+∇ V^⊥ vanishes, where H is the usual mean curvature and ∇ V^⊥ is the normal (to N) projection of ∇ V. If the dimension d≠ 1,2, the previous conditions are also equivalent to * N is minimal w.r.t. the conformally equivalent metric g'=e^2V/dg. We finally state our main result. In the following theorem and hereafter, b_1 ≡ b_1(M) is the first Betti number of M. The following are equivalent. * The generator L has asymptotically Gaussian homology, see Section <ref>. * M is a locally trivial fiber bundle over a flat torus T^b_1, with m-minimal fibers, and b is homologically reversible. * There exists a smooth map ϕ M→ T^b_1 and [It follows from the proof that h̃ is characterized by h̅ introduced in Section <ref>, up to a linear transformation. In particular h̃=0 for typically reversible processes, see Definition <ref>.] h̃∈ H_1( T^b_1; R) such that Y=ϕ(X) is a solution to the SDE e:sdetorus dY_t=h̃ dt+dW_t where W is a Brownian motion on T^b_1 w.r.t. a flat metric on T^b_1. In particular b_1≤dim(M) and if b_1=dim(M) then M is a flat torus and X solves (<ref>) with h̃=h̅. In some sense, the previous theorem contains two statements. The first one is purely geometrical: asymptotic gaussianity of the homology is equivalent to the metric measure space (M,g,m) being a locally trivial fiber bundle over a flat torus in a way depending specifically on the weight m. The second states that only homologically reversible processes can have asymptotically gaussian homologies. For instance, in the reversible case, the theorem boils down to a differential geometric characterization: a reversible process has asymptotically gaussian homology if and only if M is a locally trivial fiber bundle over a flat torus T^b_1, with m-minimal fibers. 
We remark that there exist non-trivial examples of manifolds M endowed with a diffusion generator L satisfying any of the equivalent conditions of Theorem <ref>. A trivial example is a product of a flat torus with diffusion as in Thoerem <ref>-<ref> with a simply connected manifold and an independent random dynamics. However, even in the case of Brownian motion b=0, one can show non-trivial examples such as formal manifolds, see <cit.>. Indeed harmonic forms on formal manifolds have constant length, a property which implies Theorem <ref>-<ref> (provided b=0), as proved in <cit.>. It is folklore knowledge that non-reversible processes converge to equilibrium faster than their symmetric (reversible) part. In our framework, this means that as t→∞, one expects the process X_t with generator (<ref>) to converge in law to its stationary limit, faster than a process, say Y_t, with generator L'f=12Δ f - ⟨∇ V,df⟩, where V is defined as in (<ref>). In other words, X and Y have the same invariant measure and quadratic variation (informally speaking, the same simulation complexity), but the presence of the m-divergence free drift r≠ 0 speeds up the convergence to 'equilibrium'. This idea has been used to speed up sampling of measure in high-dimension using non-reversible MCMC. There are few rigourous results proving such a phenomenon, in particular when considering the convergence of the empirical measure 1T ∫_0^Tδ_X_sds to the invariant measure. One way to establish this speed-up effect, consists in proving that the large deviations rate of the empirical measure of X is larger than the one of Y, meaning that it is less likely for the empirical measure of X to fluctuate away from the invariant measure. This is rigorously established for diffusions in <cit.>. To our surprise, Theorem <ref> provides a counterexample to this floklore expectation. Empirical measures may be recovered calculating empirical currents on exact forms, a well-known feature of geometric rough paths. Yet, as soon as one enriches the considered set of observables to include the action of currents on closed forms, the picture is actually reversed, at least for manifold with some form of flatness (in the sense of Theorem <ref>-(2)). Indeed, reversible or more in general homologically reversible processes may achieve equality in (<ref>), which is instead a strict inequality for non-homologically reversible processes, see Theorem <ref>, (1) ⇒ (2). To make a trivial example[In this explicit example, it is easily seen that one may reduce to the reversible case h̅=0 via a rotation that does not effect the convergence speed. That is why we can compare the convergence speed as in <cit.> even if the process X is homologically reversible but not reversible for h̅≠ 0], fix h̅∈ R^d and consider the two processes on a standard flat torus e:yyy Ẋ = h̅ + Ẇ Ẏ= r(Y) + Ẇ where r is a divergence-free vector field with ∫ r(y) dy=h̅. Theorem <ref> guarantees that the large deviation rate (in the long time limit) of the random homology associated to X is strictly larger than the homology associated to Y, while the situation is exactly reversed for the deviations of the empirical measure. This shows that in general non-reversible perturbations may fail to provide a speed-up when sampling invariant observables that are not functions of the empirical measure; otherwise stated Y would in this case be a better choice to sample the volume measure, while X would be a better choice to sample h̅. 
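The phenomenon can also be probed numerically. The following Monte Carlo sketch is illustrative only (it is not part of the mathematical argument; the particular divergence-free field, step size and sample sizes are arbitrary choices): it simulates the two dynamics displayed above, a constant drift h̅ versus a divergence-free drift r with the same torus average, on the unit 2-torus with an Euler-Maruyama scheme, and compares the empirical fluctuations of the winding rate h_T.

```python
import numpy as np

rng = np.random.default_rng(1)
T, dt = 200.0, 1e-2
n_steps, n_paths = int(T / dt), 400
hbar = np.array([0.3, 0.0])      # mean drift / rotation number
amp = 3.0                        # amplitude of the divergence-free perturbation

def drift_X(y):
    # constant drift: dX = hbar dt + dW
    return np.broadcast_to(hbar, y.shape)

def drift_Y(y):
    # divergence-free field with torus average hbar: r1 depends only on y2 and vice versa
    r = np.empty_like(y)
    r[:, 0] = hbar[0] + amp * np.cos(2.0 * np.pi * y[:, 1])
    r[:, 1] = hbar[1] + amp * np.cos(2.0 * np.pi * y[:, 0])
    return r

def winding_rate(drift):
    y0 = rng.uniform(0.0, 1.0, size=(n_paths, 2))    # stationary (uniform) start
    y = y0.copy()                                    # lifted trajectory in R^2
    for _ in range(n_steps):
        dw = np.sqrt(dt) * rng.normal(size=y.shape)
        y += drift(y % 1.0) * dt + dw                # Euler-Maruyama step
    return (y - y0) / T                              # empirical homology h_T

hX, hY = winding_rate(drift_X), winding_rate(drift_Y)
print("mean winding rate  X:", hX.mean(axis=0), " Y:", hY.mean(axis=0))
print("T * variance       X:", T * hX.var(axis=0), " Y:", T * hY.var(axis=0))
```

Both empirical means are close to h̅, while T times the variance of h_T stays close to the bare Brownian value for X and comes out noticeably larger for Y, consistent with the claim that the homology of Y fluctuates more, i.e., has the smaller large deviations rate.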
The authors found this phenomenon non-trivial, unexpected and of some interest for the MCMC community. §.§ Open problems In this section we present two open questions. The second one in particular was one of our initial motivations to investigate the problem. To state the first open question, we start by a corollary that is an easy consequence of <cit.> and (<ref>). If d=2 and b=0, there is a one-to-one mapping between Riemannian tensors g on M, up to a normalized conformal transformation, and the rate function G. Namely, observing the random homology, one can reconstruct the generator of the process up to a random time change of average 1. In the general reversible case, one may wonder what information one gets on the metric-measure space (M,g,m) by the knowledge of G(·). To answer this question, one may need some weighted version of Torelli's theorem in the form presented by McMullen. A second question concerns the maximization of G(·) within classes of Riemannian metrics. First one should rule out trivial transformations, as for instance multiplying the metric by a constant just scales G quadratically. However, once trivial transformations are factored out, one may wonder whether manifolds obtained as some rigid extension of manifolds with constant curvature maximize G. That would mean, even in the case b=0, that breaking the symmetry of loops of a Brownian motion is the least likely on manifolds with constant curvature. Theorem <ref> is a result in this direction, although limited to the case of zero curvature. Indeed such a theorem states that G(h)/Q(h)≤ 1, with equality holding on flat tori, or manifolds were fluctuations of the homology only arises from a random dynamic on flat tori. Normalizing by Q(h) can be interpreted here as factoring out trivial transformation of the metric. The question however remains open: how to state and prove some maximal properties of manifold with constant curvature, even in the case of (possibly punctured) hyperbolic surfaces? § TECHNICAL PRELIMINARIES §.§ Lifted generator d^∗ D^1→ D^0 denotes the usual deRham codifferential, and recalling that m is the invariant measure and V was defined in (<ref>), we let d^∗_m D^1→ D^0 be the m-weighted codifferential which we can define by duality: e:codfm m (f d^∗_m ω) -m( ⟨∇f, ω⟩) , for all f ∈ D^0 or equivalently, as it can be verified via an integration by parts e:codfm2 d^∗_m ω= d^∗ω- ⟨∇V, ω⟩ Recall the definition of the generator L in (<ref>). Define the lifted generator L D^1 → D^0 as e:Lforms L ω: = 1/2 d^∗ ω+ ⟨b ,ω⟩= 12 d^∗_m ω+ ⟨r ,ω⟩ With this notation the condition that the measure e^-V is invariant, reads after a straightforward computation as e:rr d^∗_m (gr)=0 Moreover, whenever I(μ,j)<∞ and thus in particular μ=ϱ m with √(ϱ)∈ W^1,2(m), the typical current j_μ as defined at the end of Section <ref> is written as e:jmu2 j_μ(ω)= μ(Lω)= m (- ⟨∇√(ϱ), √(ϱ) ω⟩)+μ( ⟨r,ω⟩) where the integral in m in the right hand side makes sense as both sides of ⟨· , ·⟩ are in L^2(m). As ω_μ^2μ(|ω|^2) denoted the L^2(μ) norm (depending on g) for 1-forms, the dual norm on currents, still denoted by ·_μ as remarked in Section <ref>, is given by e:dualnorm j_μ^2=sup_ω 2 j(ω)- ω_μ^2 For a measure μ∈P(M) and a μ-integrable vector field E, we usually denote μ E the current defined by (μ E)(ω)=μ ( ⟨ E, ω⟩), in other words μ E is the current with Radon-Nikodym derivative w.r.t. μ given by E in the sense of <cit.>. 
With this notation, the scalar product ·,·_μ inducing the norm ·_μ on currents satisfies e:jscalar j, μ g^-1ω_μ = j(ω) In particular if I(μ,j)<∞ then, see (<ref>) e:jmuscalar j, j_μ_μ = j(-12 dlogϱ+ gr )=j(gr ) §.§ Weighted harmonic forms In this section we discuss some straightforward tilted version of the Hodge decomposition. A 1-form ω∈ D^1 is m-harmonic if dω=0 and d^∗_m ω=0. It is L-harmonic if dω=0 and d Lω=0, see (<ref>). It is easily checked that m-harmonic means that ω is in the kernel of a m-weighted Hodge laplacian, from which the name. On the other hand, notice that whenever r=0, namely whenever the invariant measure m is reversible, the notion of L-harmonicity reduces to the one of m-harmonicity. Indeed by definition, if ω is L-harmonic, Lω is constant, and thus it equals m( Lω). However if r=0 then L ω = d^∗_m ω and m(d^∗_m ω)=0 for ω∈ D^1 and thus d^∗_m ω=0. As a weighted version of the Hodge decomposition, it is easy to check that the space of 1-forms can be decomposed as a direct sum e:hodge D^1= D^1,exact ⊕D^1,m-harm ⊕(D^1,closed)^⊥ the three components of the direct sum represent respectively exact forms, m-harmonic forms and forms that are orthogonal in L^2(m) to closed forms. The three components are orthogonal in L^2(m). Similarly, we can decompose e:hodge2 D^1= D^1,exact ⊕D^1,L-harm ⊕(D^1,closed)^⊥ where now however the decomposition is not, in general, orthogonal in L^2(m). The relation between the weighted Hodge decomposition (<ref>) and (<ref>) is uniquely given as follows. If ω=df+η is closed with η m-harmonic, then by Fredholm alternative we can solve in the unknown u ∈ D^0 (u is smooth, a standard fact by elliptic regularity, see <cit.>) e:xxi -Lu = ⟨r,η⟩- m(⟨r,η⟩) which uniquely determines an exact 1-form du, and ω=(df-du)+(η+du) gives the decomposition (<ref>) for closed forms since L(η+du)=⟨ r,η⟩ +Lu= m(⟨ r,η⟩) is constant. There are two linear isomorphisms e:isomorphism H^1(M;R) ∋c↦η_c ∈D^1,m-harm H_1(M;R) ∋h↦η^h ∈D^1,m-harm associating to a cohomology class c a unique m-harmonic form η_c in class c, and to each homology class h a unique m-harmonic form η^h such that m(⟨ g^-1η^h,η_c ⟩)=⟨ h,c⟩ for all c∈ H^1(M; R). Such isomorphisms are isometries when D^1,m-harm is regarded as a finite-dimensional subspace of L^2(m) and H^1(M; R), H^1(M; R) are equipped with the scalar products defined at the beginning of section <ref>. Similarly there are two linear isomorphisms e:isomorphism2 H^1(M;R) ∋c↦ω_c ∈D^1,L-harm H_1(M;R) ∋h↦ω^h ∈D^1,L-harm associating to a cohomology class c a unique L-harmonic form ω_c in class c, and to each homology class h a unique L-harmonic form η^h such that m(⟨ g^-1ω^h,ω_c ⟩)=⟨ h,c⟩ for all c∈ H^1(M; R). Such isomorphisms are isometries when D^1,m-harm is regarded as a finite-dimensional subspace of L^2(m) and H^1(M; R), H^1(M; R) are equipped with the scalar products defined at the beginning of section <ref>. §.§ Weighted minimality and harmonicity of submersions In this section we introduce some basic notions of harmonicity and minimality in the context of weighted Riemannian manifolds. Minimality in this sense has been extensively studied in the last decades, see e.g. <cit.>, however the authors are not aware of any established connection with harmonicity w.r.t. Witten Laplacians of Riemannian submersions. Such a connection is well-established in the case without weight, we refer to <cit.> for a classical introductory text. 
See also <cit.> and IliasShouman2018,cheng2015stability, for weighted harmonic maps, and lichnerowicz1969applications,toth1984toroidal,nagano1975minimal for Albanese maps. In this section, (M, g, m) is a smooth compact weighted Riemannian manifold, where the weight m is a measure on M with strictly positive density, m = e^-V. Of course, for us m is to be thought as the invariant probability associated to (<ref>). (N, g') on the other hand is just a smooth Riemannian manifold with no weight associated. §.§.§ Weighted tension field We say that a smooth map φ: M → N is m-harmonic if it is a critical point of the energy functional (which depends on m, g and g') E_m(φ) = ∫ |d φ|^2 dm This means that if Φ [0,1)× M→ N is smooth with Φ(0,·)=φ(·), then e:eels ddt E_m(Φ(t,·))|_t=0=0 When V=0, this is sometimes called harmonic in the Eells-Sampson sense. Notice also that if ddim(M) ≥ 3, then the notion of m-harmonicity can be regarded as a standard notion of harmonicity for the conformal equivalent metric ĝ= e^-2/d-2 V g, see e.g. <cit.>. However, we do not pursue this point of view here for reasons that will become apparent later. As a straightforward generalization of the definition of the tension field in the case without weight, define the m-tension field e:mtension τ_m(φ) τ(φ) - d φ(∇V) where τ(φ) is the usual tension field of φ, see <cit.>. As in the standard case V=0, it is not hard to check that φ is m-harmonic if and only if τ_m(φ) = 0, see <cit.>. §.§.§ Weighted minimality of submersions In this section, we quickly establish the equivalence of m-minimality and m-harmonicity for Riemannian submersions, a well-known fact when m=. Let Σ be a smooth compact manifold and : Σ→ M a smooth immersion. Let H be the mean curvature normal field of the immersion, and let (∇ V)^⊥(x) be the orthogonal projection of ∇ V(x) ∈ T_xM to the normal bundle of (Σ) in x. The weighted mean curvature is the normal field e:wmean H_m= H+(∇V)^⊥ The immersion is m-minimal if H_m vanishes identically. If Σ⊂ M, Σ is called m-minimal whenever the inclusion map is m-minimal. The previous definition is motivated by the fact that is m-minimal if and only if it is a critical point of the volume of the immersion, see <cit.>. In particular, this is the usual notion of minimality if m is the volume measure. Recall that a surjective submersion φ M → N is a Riemannian submersion if for all x ∈ M, the restriction of its differential (d φ)_x (ker dφ_x)^⊥→ TN_φ(x) is an isometry. The following lemma generalizes propositon in <cit.> to the weighted case, and we indeed rely on the proof of the standard case. However, one may also check the statement in coordinates fixing an orthonormal frame on the tangent and normal spaces to the fibers. Let φ M → N be a Riemannian submersion. Then φ is m-harmonic if and only if all fibers of the submersion are m-minimal submanifolds in M. Given a point y ∈ N, let Σ_y be the fiber φ^-1(y), and let _y Σ_y → M be the inclusion map. As well known and easy to check, there is a trivial relation between the tension of the submersion φ and the tension of the inclusions of fibers, <cit.>. That is, for all x∈ M τ(φ)(x) = - d φ_x (τ(i_φ(x))(x)) Now notice that (since φ is a submersion) the kernel of dφ_x consists of vectors tangent to Σ at x, so that dφ(∇ V)= dφ( (∇ V)^⊥). Therefore by (<ref>) eq:tau_m_identity τ_m(φ)(x) = τ(φ)(x) - dφ(∇V)(x) = - d φ_x (τ(i_φ(x))(x))- dφ_x( (∇V)^⊥(x)) = - dφ_x(τ(i_φ(x))(x)+ (∇V)^⊥(x)) Now, τ(i_φ(x) )=H(x) from <cit.>, and therefore τ_m(φ)= - dφ(H+ (∇ V)^⊥). 
However, since H+ (∇ V)^⊥=H_m is normal, namely it is orthogonal to the kernel of dφ_x at any x, we have τ_m(φ)=0 iff H_m=0. §.§ Weighted Albanese map In this section we introduce a weighted Albanese map a_m defined on a weighted Riemannian manifold (M,g,m), consistently with the above notation. We quickly follow the standard construction of Albanese maps, carefully considering the dependance on the weight m∈ P(M). Let M̃ be universal cover of M with canonical projection p M̃→ M, p^∗ denoting the pullback of 1-forms via p. Fix x̃_0∈M̃, set x_0=p(x̃_0). Denote the action of the fundamental group on M̃ by (s,ỹ) ∋π_1(M,x_0) ×M̃↦ sỹ∈M̃. For ω∈ D^1,closed, p^∗ω is exact on M̃, namely there exists a unique smooth u_ωM̃→ R such that du_ω = p^∗ω and u_ω(x̃_0)=0. For s∈π_1(M,x_0), it holds u_ω(sỹ)-u_ω(ỹ) is independent of x̃_0 and ỹ, since its differential vanishes. Moreover, u_df(sỹ)-u_df(ỹ)=0, and ω↦ u_ω is linear. This entails that there exists an abelian representation π_1(M)∋ s↦ h_s ∈ H_1(M; R) such that e:hs u_df+η_c(s ỹ)-u_df+η_c(ỹ)= ⟨h_s,c⟩ for ỹ∈M̃, f∈ D^0, c∈ H^1(M; R) and η_c the unique m-harmonic form in class c, see Remark <ref>. Recalling Remark <ref> one can define eq:tilde_J ã_m M̃ →(D^1,m-harm)^†≃H_1(M;R) ã_m(ỹ)(η_c) u_η_c(ỹ) - u_η_c(x̃_0) It follows from (<ref>) that for if ỹ,ỹ'∈M̃ cover the same point, namely ỹ'=sỹ, it holds (J̃(ỹ')- J̃(ỹ))(η_c) = ⟨ h_s, c⟩. Now G=(h_s)_s∈π_1(M) is a discrete lattice of full rank of H_1(M; R), so that H_1(M; R)/G ≃ T^b_1 is a flat torus, where b_1 is the first Betti number of M. Therefore there is a map a_m such that the following diagram commutes M̃rã_m[swap]dp H_1(M; R)d/G M ra_m T^b_1 The weighted Albanese map a_m M → T^b_1 is m-harmonic (in the weighted Eells-Sampson sense). In this proof we denote by m̃ the lift of m to M̃, and d,d^∗_m̃ the corresponding operators on M built as in (<ref>). Whenever the arrival space N of a map φ as in Section <ref> is a flat torus, the m-harmonicity equation τ_m(φ)=0 is actually linear (since Christoffel symbols vanish, as one may easily check with similar computations as in <cit.>) and can be checked componentwise. So it is enough to show that ã_m is m̃-harmonic as a R^b_1-valued function. For u_ω as above, notice that d^∗_m̃ d u_ω(x̃)=(d^∗_m ω)(p(x̃)). In particular ω is m-harmonic iff u_ω is m̃-harmonic. So that the m̃-harmonicity of ã_m follows straightforwardly from its definition (<ref>). We refer to a_m as the m-weighted Albanese map. § QUADRATIC BOUNDS In this section we prove Proposition <ref>, although we start with showing a simple statement claimed in Section <ref>. Fix an isomorphism c↦ξ_c as above and consider the map (μ,j)↦ h^j∈ H_1(M; R) defined by duality as e:hmuj ⟨h^j,c ⟩= j(ξ_c), ∀c∈H^1(M;R) Since (ξ_c)_c∈ H^1(M; R) is finite-dimensional, this map is continuous and moreover h_T≡ h^J_T, see (<ref>). By contraction principle <cit.> we get that the law of h_T satisfies a large deviations principle with speed T and rate given by e:gpre H^1(M;R) ∋h ↦inf{ I(μ,j), (μ,j) h^j=h } However, since we can restrict to (μ,j) with I(μ,j)<∞, it follows that j is closed, and thus the relation h^j=h in (<ref>) is equivalent to j(ω)= ⟨ h,c⟩ for all ω∈ [c], regardless of the original linear isomorphism c↦ξ_c used to define h_T. Recall that I and G were introduced in Definition <ref>. Recall that was introduced in Definition <ref>, and for h∈ H_1(M; R), denote e:hh _h{ (μ,j)∈j(ω)= ⟨h,c⟩, for all ω∈ [c] and c∈ H^1(M; R)} So that G(h)=inf_(μ,j)∈_h I(μ,j). The following remarks are immediate. 
Recall that P(M), respectively J(M), are equipped with the weakest topology such that μ↦μ(f) is continuous for all f∈ D^0, respectively ω↦ j(ω) is continuous for all ω∈ D^1. Then the set {(μ,j)∈ P(M)× J(M) I(μ,j)≤ k } is compact for each k≥ 0. In other words, I is good in the sense <cit.>. Since m(⟨ r,df⟩)=0 for f∈ D^1, it remains defined the rotation number of the current mr, namely an element h̅∈ H_1(M; R) such that e:hbar4 m(⟨r,ω⟩) = ⟨h̅,c⟩ for all ω∈ [c] and c∈ H^1(M; R) For h∈ H_1(M; R), let J_h(M) be the space of closed currents with rotation number h, that is e:jh J_h(M){j∈J(M)j(ω)= ⟨h,c ⟩, for all ω∈ [c] and c∈ H^1(M; R) } For Q(·) as defined in (<ref>), it holds e:qeq Q(h)inf_j ∈J_h(M) I(m,j) For j ∈ J_h(M), (m,j)∈ and thus by (<ref>) and (<ref>) with ϱ≡ 1 e:mj1 I(m,j)= sup_ω∈D^1 (j-j_m)(ω)-12 m(|ω|^2) = sup_ω∈D^1 j(ω)-m(⟨r,ω⟩)-12m(|ω|^2) = sup_ω∈D^1,m-harm ⊕(D^1,closed)^⊥ j(ω)-m(⟨r,ω⟩)-12m(|ω|^2) where the second equality follows from m(d_m^∗ω)=0, while the last equality follows from the orthogonality of the direct sum (<ref>) in L^2(m), and j(df)=m(⟨ r, df⟩)=0, so that the supremum is attained on 1-forms ω with vanishing exact term in the decomposition. Recalling that η_c is the unique m-harmonic form in cohomology class c and that (η_c)_c∈ H^1(M; R) = D^1,m-harm we obtain e:mj2 I(m,j) = sup_c ∈H^1(M;R) j(η_c)-m(⟨r,η_c ⟩)-12m(|η_c|^2) + sup_ξ∈(D^1,closed)^⊥ j(ξ)-m(⟨r,ξ⟩)-12m(|ξ|^2) Now, for h̅ as in Remark <ref>, and since d^∗_m η_c=0 we actually have e:homologytypical ⟨h̅,c ⟩= m(⟨r,η_c ⟩) =m(⟨-12 ∇ϱ- 12 ∇V +r,η_c ⟩)= j_m(η_c) so h̅ is nothing but the rotation number of the typical current in the invariant measure j_m. Moreover m(|η_c|^2)=(c,c), see Remark <ref>, j(η_c)=⟨ h,c⟩ for all j∈ J_h(M). Thus we deduce from (<ref>) e:qeq2 inf_j ∈J_h(M) I(m,j) = sup_c ∈H^1(M;R) ⟨h - h̅,c ⟩-12(c,c) +inf_j ∈J_h(M) sup_ξ∈(D^1,closed)^⊥ j(ξ) -m(⟨r,ξ⟩)-12m(|ξ|^2) Now, since the scalar products induced on H^1(M; R) and H_1(M; R) are in duality, the r.h.s. in the first line of (<ref>) is exactly 12 (h-h̅,h-h̅). On the other hand, the second line is nonnegative (as one can always take ξ=0 in the sup) and equals 0 for any current whose restricition to ( D^1,closed)^⊥ coincides with m r. (<ref>) is thus proved, the infimum being achieved at the current j given by j(df+η_c+ξ)= ⟨ h,c⟩ + m(⟨ r,ξ⟩), where df+η_c+ξ represents the decomposition of a generic 1-form in D^1 as in (<ref>). The inequality (<ref>) is an immediate consequence of Lemma <ref> since e:upboundgauss2 G(h)= inf_(μ,j) ∈_h I(μ,j) ≤inf_j ∈J_h(M) I(m,j)= Q(h) We start with the proof of the ≤ inequality in (<ref>). Fix h∈ H_1(M; R) and for ω^h as in Remark <ref>, let u ≡ u^h ∈ D^0 be the unique solution to e:fred2 Lu - 2 ⟨r,du ⟩= d^∗_m ω^h with m(u)=0. The operator L^† u Lu - 2 ⟨ r,du ⟩ is the adjoint of L in L^2(m), so that the well-posedness of the equation (<ref>) is a consequence of the Freedholm alternative and the fact that the r.h.s. integrates to 0 w.r.t. m. For >0 such that u_C(M) < 1, 1+ u is a smooth probability density, since m(u)=1. Then define e:optimalmuj ν^,h=(1+u) m ∈P(M) ^,h(ω)= j_ν^,h(ω)+ m ( ⟨g^-1 ω^h,ω⟩) By these definitions e:closed ^,h(df) =ν^,h(Lf)- m(f d^∗_m ω^h) ^,h(df) = (m( u Lf- d^∗_m ω^h) )= m( f (L^†u - d^∗_m ω^h))=0 ^,h(ω_c)= ν^,h(L ω_c)+m(⟨g^-1 ω^h,ω_c ⟩)= ⟨h̅,c⟩+⟨h,c⟩ where in the first identity we used (<ref>), (<ref>), the invariance of m and (<ref>); in the second identity we used ( L ω_c)(x)=m (⟨ r,ω_c⟩)= ⟨h̅,c⟩ for every x∈ M and the definition of ω^h in Remark <ref>. 
(<ref>) imply that (ν^,h,^,h) ∈_h̅ + g and therefore by the very definition of G (<ref>) and (<ref>) e:upboundgauss3 G(h̅ +h) ≤I(ν^,h,^,h)= ^2/2 ∫ |ω^h|^2/(1+u)^2 dν^,h = ^2/2 ∫ |ω^h|^2/(1+u) dm so that, taking the limit inside the integral by bounded convergence e:upboundgauss4 _ ^-2 G(h̅ +h) ≤12 m(|ω^h|^2)=(h,h)_r We next turn to the ≥ inequality in (<ref>). Since I(·,·) has compact sublevel sets, see Remark <ref>, for each h∈ H_1(M; R) and >0, there exists a (μ^,h,j^,h) ∈_h̅+ h such that e:imuj2 I(μ^,h,j^,h) = G(h̅+h) ≤^2 ^-2 Q(h̅+h) = ^2/2 (h,h) where in the last inequality we used (<ref>). In particular (μ^,h,j^,h)_0<<1 lies in a compact set and any limit point (μ,j) as ↓ 0 satisfies I(μ,j)≤_ I(μ^,h,j^,h)=0 by (<ref>). Since (m,j_m) is the unique zero of I, it follows μ^,h→ m as ↓ 0. Therefore, recalling that η^h is defined in Remark <ref>, and (<ref>), (<ref>), we get for any c∈ H^1(M; R) and f∈ D^0 e:gacont G(h̅+h) = I(μ^,h,j^,h) = sup_ω∈D^1 (j-j_μ^,h)(ω)- 12 μ^,h(|ω|^2) ≥ (j-j_μ^,h)((η_c+df))- 12 μ^,h(|(η_c+df)|^2) = ⟨h̅ +h, c ⟩- j_μ^,h(η_c+df) - ^22 μ^,h(| η_c+df|^2) where in the inequality we just chose ω= (η_c+df), and we used j∈ J_h̅ + h in the last line. Now notice that j_μ^,h(η_c+df)= μ^,h(⟨ r,η_c ⟩ + Lf), ⟨h̅, c ⟩ = j_m(η_c) and m(Lf)=m(⟨ df, η_c⟩ )=0 to get from (<ref>) e:gacont2 ^-2 G(h̅+h) ≥ ⟨h, c ⟩- ^-1 μ^,h(⟨r,η_c ⟩+Lf) +^-1 m(⟨r,η_c ⟩+Lf) - 12 μ^,h (|η_c+df|^2 ) We then choose f∈ D^0 as the unique solution to e:fred -Lf = ⟨r,η_c ⟩- m( ⟨r,η_c ⟩) with m(f)=0, namely (<ref>) with η_c in place of η. With such a choice eof f the terms in ^-1 in (<ref>) vanish in view of e:extraterm μ^,h((⟨r,η_c ⟩+Lf)) = m( ⟨r,η_c ⟩)= m((⟨r,η_c ⟩+Lf)) Moreover, as Remarked after (<ref>), for f as in (<ref>), η_c+df=ω_c. Thus by the very definition of (·,·)_r, see the discussion at the beginning of Section <ref>, e:ccr m(| η_c+df|^2)=(c,c)_r Finally recalling that μ^,h converges weakly m as proved after (<ref>), since |ω_c|^2 is smooth e:muepshconv lim_ μ^,h (|η_c+df|^2 ) = m(|η_c+df|^2 ) =(c,c)_r Thus passing to the limit in (<ref>), and using (<ref>)-(<ref>) e:gacont3 _ ^-2 G(h̅+h) ≥ ⟨h, c ⟩- 12 (c,c)_r As we optimize over c∈ H^1(M; R) we get the ≥ inequality in (<ref>). Let ⊂ be the set of pairs (μ,j) with μ=ϱ m, j=μ E for some smooth, strictly positive density ϱ and smooth tangent vector field E. Then, with the notation introduced at the beginning of Section <ref> e:jmuone j_μ(ω) =-12 m(⟨∇ϱ,ω⟩)+ μ(⟨r,ω⟩) = -12 μ(⟨∇logϱ,ω⟩)+ μ(⟨r,ω⟩) that is j_μ= μ g^-1ω̅_ϱ for ω̅_ϱ = - 12 d logϱ + gr ∈ D^1. Thus (see also (<ref>)) e:gc1 I(μ,j)-I(μ,-j) = 1/2 j-j_μ_μ^2 - 1/2 -j-j_μ_μ^2=-2 j, j_μ_μ =- 2 j(ω̅_ϱ) = j (d logϱ)- 2 j(g r)= - 2 j(g r) where in the last line we used (<ref>) and the fact that j is closed. It is easy to check that is I-dense in , namely that for (μ,j) ∈ there exists a sequence (μ_n,j_n) → (μ,j) with (μ_n,j_n)∈ and lim_n I(μ_n,± j_n)=I(μ,± j). Indeed, I is nothing but the lower-semicontinuous envelope of its restriction to . Therefore (<ref>) holds on and not just on . We need to show that, in the quasi-reversible case see Definition <ref>-(b), it holds (_h is defined in (<ref>)) e:gcrestated inf_(μ,j)∈_h I(μ,j)= inf_(μ,j)∈_-h I(μ,j) - ⟨h,c̅⟩ Notice that (μ,j) ∈_h iff (μ,-j) ∈_-h. Therefore e:gc2 G(-h) = inf_(μ,j) ∈_-h I(μ,j) = inf_(μ,j) ∈_h I(μ,-j) = inf_(μ,j) ∈_h I(μ,-j) -2j(gr )+ 2j(gr ) =inf_(μ,j) ∈_h I(μ,j)+ 2j(gr ) where in the last line we used (<ref>). 
By hypotheses of quasi-reversibility, the vector field b is such that gb is closed, thus gb ∈ [c̅] for some cohomology class c̅. However, since r=b+ 12 ∇ V, it holds gr ∈ [c̅] as well. Therefore, for each (μ,j) ∈_h, the quantity 2 j(gr) in the last line of (<ref>) equals 2⟨ h,c̅⟩ and so it gets out of the inf to get (<ref>) (which trivially holds for Q). § ASYMPTOTICALLY GAUSSIAN HOMOLOGY In this section we prove Theorem <ref>. Recall that L has asymptotically Gaussian homology if G(h)=Q(h), see (<ref>). Assume that L has asymptotically Gaussian homology. Then L is homologically reversible, and moreover m-harmonic forms have constant length. That is |η_c| is constant (independent of x) for all c∈ H^1(M; R). Recall the definition (<ref>) of _h. From Lemma <ref>, G=Q iff for all h∈ H_1(M; R), there exists j^h ∈ J_h such that I(m,j^h) ≤ I(μ,j) for all (μ,j) ∈_h. Indeed, the minimizer of the coercive functional Q(·)=I(m;·) over the closed set J^h exists, so that equality holds iff e:gingi G(h)= inf_(μ,j)∈_h I(μ,j)=I(m,j^h)=Q(h) In such a case, since j_m=m r, it holds necessarily that j^h=m(r+g^-1ξ^h) for some ξ^h ∈ D^1, ξ^h ∈ L^2(m). Moreover, ξ^h has to be orthogonal to exact forms in L^2(m) since j^h(df)=m(⟨ r,df ⟩)=0`'. Now take in (<ref>) μ= m (1+u) for u∈ D^0 with u≥ -1 and ∫ u dm=0, and j of the form j^h+m g^-1ζ for some ζ∈ D^1 with m g^-1ζ∈ J_0. To get from (<ref>)-(<ref>), for all u's and ζ's as just described e:lagrange I(m,j^h) ≤I(m (1+u),j^h+ m g^-1 ζ) = 1/2 ∫ |ξ^h+ζ- 12 d u - u r|^2/1+u dm Now, changing u to u, ζ to ' ζ, since we chose u and ζ smooth, it is easily seen that the r.h.s. of (<ref>) is differentiable in ,', and imposing that the derivatives must vanish at ='=0 one gathers e:lagrange2 m(⟨ξ^h,ζ⟩)=0 m( -|ξ^h|^2 u + ⟨∇u +2 u r,ξ^h ⟩)=0 The first equation holds for all smooth ζ with m g^-1ζ∈ J_0, so that this equation implies that ξ^h is closed. And since ξ^h is orthogonal in L^2(m) to exact forms, it must hold ξ^h=η^h, for η^h as in Remark <ref>. In particular the term m( ⟨∇ u ,η^h ⟩) vanishes and we get from the second equation, recalling that ∫ u dm=0 e:lagrange3 |η^h|^2+ 2 ⟨r,η^h ⟩=constant for all h∈ H_1(M; R) By polarization, then one easily gets that the quadratic term and the linear one in η^h must be independently costant. As h spans H^1(M; R), η^h spans D^1,m-harm so that we get that ⟨ r,η⟩ is constant (thus L is homologically reversible) and |η| are constant for all m-harmonic η's. Let (M,g,m) be a weighted Riemannian manifold as in Section <ref>. If every m-harmonic form on M has constant length, then the weighted Albanese map a_m defined in Section <ref>, is a Riemannian submersion with m-minimal fibers, according to Definition <ref>. a_m is always m-harmonic, see Remark <ref>. On the other hand, m-harmonic submersions have m-minimal fibers, see Proposition <ref>. So it is enough to check that if every m-harmonic form has constant length, a_m is a Riemmanian submersion. If every m-harmonic form on M has constant length, by polarization, all pointswise scalar products ω·ω' = ⟨ g^-1ω,ω'⟩ in T^∗_x M are constant in x for ω,ω' m-harmonic forms. Fix an orthonormal base of the b_1-dimensional space of m harmonic equipped with the Hilbert norm ω_m= √(()m(|ω|^2)). Since scalar products are constant, it follows that the base is pointwise orthonormal on each T_x^∗ M, not just in L^2(m). Since a_m pulls back harmonic forms on T^1 to m-harmonic forms on M, orthonormal coframes are pulled back to orthonormal coframes in L^2(m), and thus pointwise orthonormal coframes. 
This is equivalent to a_m being a Riemannian submersion. Let ψ M→ T^d be of class C^2, and suppose that ψ pushes forward the Riemannian metric on M to a flat metric on the torus T^d. Then the semimartingale Y_t ψ(X_t) has quadratic variation [Y,Y]_t= S t for some constant (symmetric, positive definite) matrix S and Y_t-∫ B(X_s) ds is a martingale, where B(x) ∈ T_ψ(x) T^d is characterized as follows. For O∋ x a small enough open ball in M, one can write on O, using an orthonormal (w.r.t. the induced flat metric) frame on ψ( O): B=(B^1,…, B^d), ψ=(ψ^1,…,ψ^d). Then B^k=Lψ^k where L is the generator (<ref>). Let us compute in coordinates in O using the orthonormal frame as in the statement of the lemma. The statement on the quadratic variation is trivial, since Y satisfies, using standard semimartingale notation, d[Y^h,Y^k]_t= S^h,k(X_t) dt, with S^h,k(x)= g^i,j(x) ∂_i ψ^h(x) ∂_j ψ^k(x). This is constant in x, since it is nothing but the pushforwarded metric on T^d, which is flat by hypotheses. On the other hand, the bounded variation term in the Doob decomposition can be carefully computed in coordinates to get that Y_t-∫_0^t B(X_s)ds is a martingale for e:bigb B^k(x)= Lψ^k(x) - 12 √(|S(x)|)∂_i ( 1√(|S(x)|) S^i,k(x) ) where |S| denotes the determinant. As already noticed, ψ being a submersion implies that S(·) is constant (actually the identity in our coordinates). Therefore the Riemannian correction term in the last formula (the last term involving derivatives of S) vanishes. We are finally ready to prove the main theorem. (1) ⇒ (2). From Lemma <ref>, the map a_m that we defined in Section <ref>, is a Riemannian submersion with minimal fibers. A result by Hermann, see <cit.>, states that Riemannian submersions whose total space is complete are locally trivial fiber bundles. Since M is compact, thus geodesically complete, (M, T^b_1,a_m) is a locally trivial fiber bundle. Fibers are then m-minimal still from Lemma <ref>, while the homological reversibility of L comes from Lemma <ref>. (2) ⇒ (3). By hypotheses there exists a smooth ϕ M→ T^b_1 and a flat metric on T^b_1 such that (M, T^b_1,ϕ) is a locally trivial fiber bundle and ϕ is a submersion with m-minimal fibers, see Section <ref>. In particular, by Proposition <ref>, ϕ is m-harmonic[In the statement of Theorem <ref> we did not detail the smoothness assumptions on the projection map ϕ. In the literature, it may be sometimes assumed smooth or just differentiable. However notice that the harmonicity of ϕ guarantees that this two conditions are actually equivalent.], in the weighted Eells-Sampson sense, see Section <ref>. Since T^b_1 is flat (in particular Christoffel's symbols vanish), using a orthonormal frame as in Lemma <ref>, it is easily seen that the components ϕ^k are m-harmonic in the sense Δ_m ϕ^k=0. In particular from Lemma <ref> applied with ψ=ϕ, we get that locally B^k(x)= L ϕ^k= 1/2Δ_m ϕ^k + ⟨ r,dϕ^k⟩ = ⟨ r,dϕ^k⟩, for k=1,…,b_1. Now locally dϕ^k is an m-harmonic form, being the differential of a m-harmonic function. In particular, since b is homologically reversible by hypotheses (2), ⟨ r,dϕ^k⟩h̃^k is constant in x in any small enough open set, and thus everywhere on M, since M is connected. In other words, still by Lemma <ref>, Y_t- h̃t is a continuous martingale with quadratic variation coinciding with the quadratic variation of a standard (flat) Brownian motion. Namely the statement. (3) ⇒ (1). First notice that the quadratic variation [ϕ(X),ϕ(X)]_t equals I_b_1 t where I_b_1 is the identity (in orthonormal coordinates). 
This implies that b_1≤dim(M) and that dϕ has maximal rank, that is b_1. Thus ϕ is a submersion (although not necessarily a Riemannian submersion). If e is a smooth 1-form on T^b_1, then denoting by ϕ^∗ the pullback on forms e:changeofvariable ∫_0^t e(Y_s)∘dY_s = ∫_0^t (ϕ^∗e)(X_s)∘dX_s If e is harmonic on the flat torus T^b_1 and Y satisfies (<ref>), it is easy to see that the l.h.s. of (<ref>) is Gaussian for every t≥ 0, since it coincides in law with cẆ_t + ⟨ c,h̅⟩ where c is the cohomology class of e and W a standard Brownian motion on R^b_1. On the other hand, ϕ^∗ e is closed in M since pullbacks commute with differentials. As we have already noticed that ϕ is a submersion, any cohomology class c∈ H^1(M; R) has a representative closed 1-form ξ_c of the type ϕ^∗ e for e harmonic on T^b_1. Thus the random homology h_T defined by ⟨ h_T,c ⟩ = J_T(ξ_c) is Gaussian with covariance |c|^2 T, and thus has Gaussian large deviations with rate Q(·), see (<ref>). Since the large deviations rate of h_T does not depend on this choice of the isomorphism c↦ξ_c, see Remark <ref>, we conclude.
http://arxiv.org/abs/2406.19279v1
20240627154937
Probing self-interacting ultra-high-energy neutrinos with cosmic 21-cm signal
[ "Mansi Dhuria", "Bishnu Gupta Teli" ]
hep-ph
[ "hep-ph", "astro-ph.CO" ]
Department of Physics, School of Energy Technology, Pandit Deendayal Energy University (PDEU), Gandhinagar-382426, Gujarat, India § ABSTRACT In this study, we investigate the constraints on secret self-interactions of neutrinos by examining the impact of radiative scattering of ultra-high-energy (UHE) neutrinos. These neutrinos are produced from the decay of superheavy dark matter and interact with the cosmic neutrino background (CνB). We explore how these interactions influence the 21-cm hydrogen signal during the cosmic dark ages and cosmic dawn, periods relatively free from astrophysical uncertainties, providing a clearer signal for studying non-standard neutrino interactions. By analyzing the global brightness temperature measurements, we constrain the scattering cross-section of UHE self-interacting neutrinos, determining the coupling constant g to be within ∼ 10^-4 to ∼ 10^-3 for neutrino energies in the PeV to EeV range. Interestingly, these constraints are more competitive than those from existing astrophysical and collider experiments. As future 21-cm experiments focus on measuring brightness temperature across a wide range of redshifts from the cosmic dark ages to reionization, using the epoch of 21-cm to probe neutrino properties could provide crucial insights into dark matter and neutrino physics. Probing self-interacting ultra-high-energy neutrinos with cosmic 21-cm signal Mansi Dhuria[Mansi.dhuria@sot.pdpu.ac.in], Bishnu Gupta Teli[bishnu.tbsc20@sls.pdpu.ac.in] ================================================================================================ § INTRODUCTION Despite the significant progress made in observing our universe through various methods such as galaxy surveys, cosmic microwave background (CMB) based measurements, and recent gravitational interferometers, etc., large portions of our universe remain unexplored, particularly in the redshift range between z ∼ 1100 and z ∼ 6 due to the faintness of early universe sources. Remarkably, recent years have revealed another promising avenue of exploration spanning from shortly after the epoch of recombination at redshift z ∼ 1100 to the formation of the initial significant population of luminous objects around redshift z ∼ 30, up to the re-ionization of the universe at redshift z ∼ 6 <cit.>. During this epoch, the universe was primarily governed by neutral hydrogen until the emergence of the first stars and galaxies. Consequently, much of the investigation remains centered on observing the 21-cm signal emitted by neutral hydrogen, originating from its hyperfine transition <cit.>. The significant interest in such models has been sparked by the potential detection of a robust 21-cm signal by the EDGES experiment <cit.>. The signal was significantly stronger than the maximal absorption signal possible within standard cosmology, hinting towards non-standard dynamics <cit.>. Although the SARAS experiment <cit.> has contested it at a 95% significance level, more results are needed to rule out the EDGES claim. Other than that, several other ongoing and future experiments such as LEDA <cit.>, and REACH <cit.>, focus on detecting the global 21-cm signal across wide range of red-shift, while various radio interferometric telescopes such as MWA <cit.>, GMRT <cit.>, LOFAR <cit.>, HERA <cit.>, and SKA <cit.> are dedicated to probing the spatial fluctuations in the 21-cm hydrogen signal during the period of cosmic dawn and reionization. 
Given these ongoing/upcoming facilities, we must have a thorough understanding of the cosmological 21-cm hydrogen signal expected in consistent models of cosmology. In the past few years, it has been well-established that the observed 21-cm signal is greatly influenced by interactions between Dark matter (DM) and baryons. Recent studies have delved into examining the influence of various DM candidates and their interactions on the 21-cm observables such as cooling of hydrogen gas due to elastic scattering with DM <cit.>, heating of hydrogen due to decay or annihilation of DM <cit.>, modifications to the Rayleigh-Jeans tail due to the resonant conversion of DM to CMB photons, etc <cit.>. In this work, we study the impact of secret self-interactions of UHE neutrinos emitted from DM decay on the 21-cm brightness temperature. Neutrinos are known for their extremely weak interactions via the weak force in the standard model (SM), thus making it easy to travel through Earth. The recent cosmological and astrophysical observations have highlighted the significant role of neutrinos in multi-messenger astronomy. Particularly, the UHE neutrino flux from DM decay/annihilation or other astrophysical sources as probed by numerous current-generation neutrino experiments such as IceCube has been able to set competitive limits on the DM lifetime and DM annihilation cross-section for very heavy DM candidates <cit.>. In the last few years, there has also been increasing discussion about the potential for neutrinos to interact quite strongly with themselves via a new scalar/vector mediator, termed neutrino self-interaction (ν SI) <cit.>. This phenomenon has implications such as addressing the Hubble tension <cit.>, supporting KeV sterile neutrino as a viable DM candidate <cit.>, and influencing supernova neutrino emission <cit.>. Thus, the existence of ν SI naturally suggests physics beyond the SM, offering opportunities to explore its implications for various astrophysical and cosmological phenomena such as <cit.>. The investigation into the secret self-interaction of UHE neutrinos is also conducted in literature through the scattering of UHE astrophysical neutrinos with CνB neutrinos  <cit.>. The interaction between UHE neutrinos and CνB neutrinos results in distinctive dips and bumps in the astrophysical spectrum. By comparing this spectrum with the current data from IceCube, constraints on the self-interacting coupling of τ-neutrinos have been established. Although the existing IceCube data has been able to probe very high values of self-interacting coupling, it has been shown that the upcoming IceCube data should be able to probe even moderately small values of couplings <cit.>. In fact, the bounds on the self-interacting couplings obtained from the same are much stronger than those from other cosmological and collider probes <cit.>. While neutrino experiments can constrain such interactions or the annihilation/scattering cross-section of DM, distinguishing UHE neutrinos from DM decay from those emitted by astrophysical sources remains a significant challenge. The effectiveness of neutrino telescopes in investigating heavy DM relies heavily on the diffuse flux spectrum of UHE neutrinos. These UHE neutrinos act as a background to the flux spectrum from DM searches, potentially masking the subtle signs of DM decay <cit.>. Hence, this remains a considerable challenge. 
Recently, there has been an attempt to investigate interactions of superheavy DM by searching for radio emissions resulting from the interaction of UHE neutrinos with the lunar regolith <cit.>. This method can potentially explore energy levels beyond 10^12 GeV, which are beyond the reach of astrophysical accelerators. In our work, we aim to constrain secret self-neutrino interactions of UHE neutrinos emitted specifically through the decay of superheavy DM by studying its impact on the 21-cm signal in the period from dark ages to cosmic dawn, which is almost free from the astrophysical background. In a toy physics model beyond the SM, the self-interaction between neutrinos can be introduced by involving a new scalar or vector boson that interacts with a pair of neutrinos and their leptonic partners. For simplicity, we consider only a scalar boson in this work. In addition to elastic scattering between neutrinos, new interactions can also lead to the production of photons due to the radiative (one-loop) scattering of UHE neutrinos with cosmic neutrinos mediated by leptonic partners and the scalar boson. This process can heat the intergalactic medium (IGM), thereby affecting the 21-cm brightness temperature. By considering the number density of UHE neutrinos allowed by the present-day relic abundance of DM and their interaction with the cosmic neutrino background in the redshift ranging from the dark ages to cosmic dawn, we study the impact of such heating on the 21-cm absorption spectrum. As a result, we obtain constraints on the allowed parameter space of self-interacting neutrinos and find that these constraints are much more competitive than those from other astrophysical and laboratory probes. As many upcoming radio experiments aim to detect the 21-cm brightness temperature and the 21-cm power spectrum more precisely across a wide redshift range, our analysis will be valuable in highlighting the potential signatures of non-standard neutrino interactions using the 21-cm absorption signal in the future. The plan for the rest of the paper is as follows: In $<ref>, we begin with a brief overview of the 21-cm cosmology. In $<ref>, we discuss a basic toy model of particle physics that involves the interaction of a scalar boson with a pair of neutrinos and their leptonic partners. We also explain the possibility of producing photons through the radiative scattering of UHE neutrinos with CνB neutrinos. In $<ref>, we discuss the general steps in calculating heating induced by various processes on the gas temperature and the free electron fraction. In $<ref>, we specifically calculate the energy injection rate due to the self-scattering of UHE neutrinos emitted from superheavy DM with CνB background. In $<ref>, we present our findings on the impact of specific scattering cross-section values on the 21-cm absorption signal. Additionally, we derive constraints on the parameter space for self-interacting neutrino strength, considering the mass of the scalar mediator and various levels of UHE neutrino emitted from superheavy DM. In $<ref>, we summarize our results with conclusions and suggest possible future directions. There is one appendix <ref>. § 21-CM COSMOLOGY We start with a concise overview of the fundamental aspects of 21-cm cosmology <cit.>. The CMB photons we observe today have traveled through the universe since a redshift of around 1100, passing through cold neutral hydrogen clouds. 
During their journey, 21-cm wavelength photons were absorbed and emitted via hydrogen's hyperfine transitions. This process causes a deviation in the CMB spectrum, typically quantified by the differential 21-cm brightness temperature. The global 21-cm differential brightness temperature is defined as <cit.> T_21=27x_HI[0.15/Ω_m1+z/10]^1/2(Ω_bh/0.023)(1-T_γ/T_s) mK where x_HI=n_HI/n_H is the fraction of neutral hydrogen in the universe, and n_HI and n_H are the number densities of neutral hydrogen and total hydrogen (ionized+neutral), respectively. T_γ corresponds to the temperature of the surrounding bath of photons, typically fixed by the CMB temperature, so that T_γ=T_CMB (1+z). Here, Ω_b≈0.044 is the relic abundance of baryonic matter, and Ω_m≈0.26 is the relic abundance of total matter. The parameter h is the reduced Hubble constant, h=H_0/(100 km/s/Mpc)=0.74. The relative populations of the two hyperfine levels, the triplet and singlet states of a neutral hydrogen atom, are determined by the spin temperature T_s, defined through n_1/n_0=3e^-T_*/T_s. The splitting between the singlet and triplet states is denoted by Δ E/k_B=T_*=0.068 K. The spin temperature is generally influenced by three factors <cit.>: (i) absorption/emission resulting from scattering with the surrounding CMB photons, (ii) collisional coupling between hydrogen atoms, which is more significant at high redshifts, and (iii) resonant scattering with the Lyman-α (Lyα) photons, generally known as the Wouthuysen-Field effect <cit.>. The evolution of the spin temperature is given as <cit.>: T_s^-1=T_γ^-1+x_cT_k^-1+x_α T_c^-1/1+x_c+x_α where T_k is the kinetic gas temperature and T_c is the color temperature of the Lyα photons at the Lyα frequency. In most relevant scenarios, T_c ≈ T_k because the optical depth to Lyα scattering is usually quite high. This results in numerous scatterings of Lyα photons, which align the radiation field and the gas near the line center frequency, achieving local equilibrium <cit.>. The parameters x_c and x_α are the coupling coefficients due to atomic collisions and scattering of Lyα photons, respectively. As the collisional coupling is mainly induced by collisions of hydrogen atoms with other hydrogen atoms, free electrons, and free protons, the total coupling coefficient is given by x_c = x_c^HH + x_c^eH + x_c^pH. The collision coupling coefficient <cit.> for a particular channel is x_c^i=n_iκ_10^i/A_10T_*/T_γ where κ^i_10 denotes the rate coefficient for spin de-excitation in collisions in that particular channel (with units of m^3s^-1). With this, the coupling coefficient x_c turns out to be: x_c = T_*/A_10T_γ[κ^HH_10(T_k)n_H + κ^eH_10(T_k)n_e + κ^pH_10(T_k)n_p] where κ^HH_10 is the scattering rate between hydrogen atoms <cit.>, κ^eH_10 is the scattering rate between electrons and hydrogen atoms <cit.>, and κ^pH_10 is the scattering rate between protons and hydrogen atoms <cit.>. At high redshifts the collisional coupling dominates, but at lower redshifts it becomes subdominant as the number density of hydrogen gas decreases. However, after the formation of the first stars, resonant scattering of Lyα photons provides a new way of coupling. This is the well-known Wouthuysen-Field mechanism <cit.>. The physics of this mechanism is much more subtle than this description. For convenience, we will use a semi-numerical model to calculate the coupling coefficient (x_α).
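To make the two relations above concrete, the short Python sketch below evaluates the spin temperature and the global brightness temperature from given coupling coefficients. It is a minimal transcription of the expressions as printed here, not the authors' code: the function names are ours, the fiducial values (T_CMB,0 = 2.7 K, Ω_m, Ω_b, h) follow the text, and the optically thick limit T_c ≈ T_k is assumed.

import numpy as np

def spin_temperature(T_gamma, T_k, x_c, x_alpha, T_c=None):
    # Weighted harmonic mean of the radiation, kinetic and colour temperatures.
    if T_c is None:
        T_c = T_k  # optically thick Lyman-alpha scattering ties T_c to T_k
    inv_Ts = (1.0 / T_gamma + x_c / T_k + x_alpha / T_c) / (1.0 + x_c + x_alpha)
    return 1.0 / inv_Ts

def T21_mK(z, x_HI, T_s, Omega_m=0.26, Omega_b=0.044, h=0.74, T_cmb0=2.7):
    # Global differential 21-cm brightness temperature in mK, as printed above.
    T_gamma = T_cmb0 * (1.0 + z)
    return (27.0 * x_HI * np.sqrt((0.15 / Omega_m) * (1.0 + z) / 10.0)
            * (Omega_b * h / 0.023) * (1.0 - T_gamma / T_s))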
We consider tanh parametrization model given in refs. <cit.> in order to calculate coefficient x_α given by: x_α(z)≡ 2A_α(z)/(1+z) where A_α(z)=A_α(1+tanh(z_α0-z/Δ z_α)) As suggested in <cit.>, we use following set of fiducial values to calculate x_α: {A_α,z_α 0,Δ z}={100,17,2} Using the expressions of the coupling coefficients x_α and x_c, the spin temperature T_s can be calculated by using eq. (<ref>) and consequently brightness temperature T_21 can be calculated by using eq. (<ref>) in the relevant range of redshift. In standard cosmology, one expects two absorption signals with the first shallow absorption minima near 20 MHz (with z ∼ 70) and the other deeper minima at higher frequencies between 50 - 110 MHz (with z ∼ 12 - 27) in the global cosmological 21-cm signal, which are signatures of collisional gas dynamics in the cosmic dark ages and Lyα photons from the first stars at cosmic dawn, respectively. § SELF-INTERACTING UHE NEUTRINOS FROM DECAY OF DARK MATTER We consider a scenario in which the decay of a superheavy DM particle with a mass m_ DM≥ PeV results in the production of UHE neutrinos. The number of neutrinos produced depends on the specific decay channels of the DM particle. If the heavy DM primarily undergoes two-body decay into a pair of neutrinos, i.e. DM→νν̅, this will lead to a neutrino flux with E_ν_h≈ m_ DMc^2. As mentioned in section <ref>, the UHE neutrino flux from DM as well as astrophysical sources is being probed by numerous current-generation neutrino experiments <cit.>. These experiments consider the scattering of high-energy neutrinos with the CνB en route to Earth, which redistributes their energies. We will consider the UHE neutrinos emitted specifically from the decay of DM and study the impact of the scattering of the same with the CνB on the 21-cm signal during the period from dark ages to cosmic dawn. In the minimal model of neutrino self-interaction, we consider a model in which the real singlet scalar at low energies couples both to neutrinos as well as leptonic partners. The interaction couplings are given as: L ⊃ g_ν_iϕν_iν_i + g_l_iϕl̅_̅i̅l_i where i = e, μ, τ represent three different flavors of neutrinos. In this formulation, we assume the Majorana neutrinos and use Weyl notation to denote the neutrino coupling to scalar bosons and Dirac notation to denote leptons coupling to the scalar boson. Similarly, the interaction can also be mediated through the new Z-Boson. In the context of a scalar boson, the coupling parameters g_l_i and g_ν_i may vary depending on the specific particle physics model. For the sake of simplicity in our toy model, we assume g_l_i = g_ν_i = g_i. In the presence of aforementioned interactions, the tree-level s-channel scattering of neutrinos can induce self-scattering of neutrinos, while the one-loop scattering mediated through leptons and a new scalar can produce photons. The tree-level and one-loop level Feynman diagram for this process is given in Fig. <ref> respectively. The emission of gamma rays produced by radiative scattering can heat the IGM, which, in turn, can affect the 21-cm global signal during the cosmic dawn and the dark ages. The cross-section for the one-loop process, shown in the Feynman diagram is given by <cit.>, σ = 81α^2 s/4π^3g^4_i/(s-m_ϕ^2)^2+m_ϕ^2Γ_ϕ^2×|1+ Q_i^2m_i^2 C_0^γ|^2 where C_0^γ is known as the scalar Passarino-Veltaman function and is given by, C_0^γ(s,m_i)=1/2sln^2(√(1-4m_i^2/s)-1/√(1-4m_i^2/s)+1) In the above eqs. 
(<ref>) and (<ref>), g_i stands for the self-interacting coupling of a particular neutrino flavor, m_i stands for the mass of the leptonic partner, s=2m_ν_iE_ν_h is the center-of-mass Mandelstam variable, m_ϕ is the mass of the new scalar mediator, and Γ_ϕ=g^2_i m_ϕ/4π is the decay width of the scalar mediator. Here, m_ν_i is the mass of the active neutrino of a particular flavor. Depending on the values of E_ν_h and m_ϕ, this cross-section can reach a resonance when E_ν_h≈ m_ϕ^2/2m_ν_i. Using the expression for the cross-section given in eq. (<ref>), we will now study the effect of self-interactions of UHE neutrinos on the evolution of the 21-cm brightness temperature. § EFFECT OF HEATING ON THE 21-CM SIGNAL In this section, we will first discuss the general effect of heating induced by new physics and its consequent impact on the cosmic 21-cm absorption signal. The energy injection from such heating can alter the temperature of hydrogen gas during the cosmic dark ages and cosmic dawn, affecting the 21-cm absorption signal. Before quantifying the energy injection resulting from the scattering of UHE neutrinos with the cosmic neutrino background, we will outline the steps involved in calculating the evolution of the gas temperature (T_k) and ionization fraction (x_e) due to standard cosmological effects, as well as the additional effects due to heating. To calculate the brightness temperature given by eq. (<ref>), one needs to calculate the evolution of the fraction of neutral hydrogen (x_HI), the CMB temperature (T_CMB), and the spin temperature (T_s) as a function of redshift. The parameter x_HI is related to the fraction of ionized hydrogen (x_e) as x_HI=1-x_e. The CMB temperature can be calculated as T_CMB=T_CMB,0(1+z), where T_CMB,0=2.7 K is the CMB temperature today. From eq. (<ref>), we can see that in addition to T_CMB, the spin temperature also depends on x_c, x_α, T_k and T_c. The calculation of x_c, x_α and T_c is explained in §<ref>. Thus, the evolution of the brightness temperature essentially depends on the history of the gas temperature (T_k) and the ionization fraction (x_e). To calculate these, we follow the standard Peebles recombination framework <cit.>, further refined in subsequent studies <cit.>. This framework involves solving two coupled ordinary differential equations for the evolution of the gas temperature and the ionization fraction. §.§ Evolution of gas temperature The evolution of the kinetic gas temperature with redshift follows <cit.>: dT_k/dz=2 T_k/1+z+Γ_C/(1+z)H(T_k-T_CMB) In eq. (<ref>), the first term shows the effect of cosmological expansion on the gas temperature. The second term represents the heating due to Compton scattering between the hydrogen gas and CMB photons. Here, Γ_C denotes the Compton scattering rate, defined as Γ_C=8σ_Ta_rT^4_CMBx_e/3(1+f_He+x_e)m_ec where σ_T, a_r and m_e are the Thomson scattering cross-section, the Stefan-Boltzmann radiation constant and the mass of an electron, respectively, and f_He=n_He/n_H is the helium fraction. It has also been shown in ref. <cit.> that Lyman-α photons facilitate energy transfer between CMB photons and the thermal motions of hydrogen atoms. In scenarios lacking x-ray heating, this newly identified mechanism significantly modulates the temperature of adiabatically cooling gas by approximately 10% at z≈17. To include this effect, eq. (<ref>) is modified as, dT_k/dz=dT_k/dz|_eq.
(<ref>)-Γ_R/H(z)(1+z)(T_γ/T_s-1)T_* where Γ_R is the heating rate due to the transfer of energy from CMB photons to the thermal motion of hydrogen gas and is given by Γ_R=x_HIx_CMB/2(1+f_He+x_e)A_10 where A_10=2.86×10^-15 s^-1 is the Einstein coefficient for spontaneous emission from the triplet state to singlet state, and x_CMB=1/τ_21(1-e^-τ_21) where τ_21 is optical depth given by τ_21=8.1×10^-2x_HI(1+z/20)^1.510 K/T_s. Effect of heating induced by BSM processes: Finally to include additional injection of energetic particles due to certain BSM processes such as DM decay/annihilation, decay of primordial black holes, or the scattering of energetic particles, etc, the eq. (<ref>) can be modified as follows, dT_k/dz =dT_k/dz|_eq. (<ref>) -2/H(z)(1+z)3k_Bn_H(z)(1+f_He+x_e)dE/dV dt |_dep,h where k_B is the Boltzmann constant. The last term in eq. (<ref>) corresponds to the energy deposition into the IGM. Each channel of energy deposition is represented by the subscripts c=i,α,h, corresponding to ionization, excitation, and heating, respectively. It should be noted that not all of the energy injected from DM interaction is fully deposited in the medium. The quantity of energy deposited in the medium heavily depends upon various DM interaction channels. The energy deposition rate <cit.>, in general form is given in terms of energy injection rate as, dE/dV dt|_dep,c = f_c(z)dE/dV dt|_inj where f_c(z) is a dimensionless factor representing efficiency, the amount of deposited energy in the medium in the three different channels. In this work, we assume that a fraction f_eff of the energy produced by various processes such as DM and heavy particle decay/annihilation, scattering, etc. at certain redshift is instantaneously transferred to the plasma, using a simplified approach called the “SSCK” approximation <cit.>. f_i=f_α≈ f_eff1-x_e/3, f_h=f_eff1+2x_e/3 For all our analysis, we will be using f_eff≈0.1, as discussed in <cit.>. §.§ Evolution of free electron fraction The evolution of the ionization fraction/free electron fraction (x_e) with redshift (z) is given by <cit.>, dx_e/dz=1/(1+z)H(z)[R_s(z)-I_s(z)-I_X(z)] where R_s and I_s are the standard recombination rate (from ionized gas to neutral gas) and standard ionization rate (from neutral gas to ionized gas). The details of these parameters are given in Appendix <ref>. The last term in eq. (<ref>), I_X, can be written as I_X=I_X_i+I_X_α. It represents an ionization rate due to the additional injection of energetic particles. I_X_i represents the direct ionization rate while I_X_α represents the excitation plus ionization rate <cit.>. Furthermore, the ionization rate I_X can be written in terms of the energy deposition rate from the additional injection of energetic particles due to the aforementioned exotic processes. I_X_i =1/n_H(z)E_0dE/dV dt |_dep,i I_X_α =(1-𝒫)/n_H(z)E_αdE/dV dt |_dep,α, where 𝒫 is the Peebles coefficient given in the Appendix <ref>, n_H(z) is the number density of hydrogen nuclei (proton density), E_0 is the ionization energy of a hydrogen atom and E_α is the Lyman-α energy of a hydrogen atom. Here, we neglect the effect of the extra energy injection on helium ionization, which has been demonstrated to be sub-dominant and thus should not significantly impact our results. Using this formalism, we will numerically calculate the evolution of the gas temperature and ionization fraction by incorporating the heating effect induced by the radiative scattering of UHE neutrinos with the cosmic neutrino background. 
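The structure of this numerical calculation can be sketched as follows. The snippet integrates the two coupled equations for T_k and x_e in redshift with scipy; the standard recombination/ionization rates, the Compton rate, the exotic ionization term and the deposited energy (specified in the next section) are passed in as user-supplied callables, cgs units are assumed throughout, and for brevity the Lyman-α heat-exchange term involving Γ_R is left out. This is an illustrative skeleton under those assumptions, not the authors' code.

import numpy as np
from scipy.integrate import solve_ivp

k_B = 1.380649e-16  # erg/K (cgs)

def rhs(z, y, H, n_H, f_He, Gamma_C, R_s, I_s, I_X, dEdVdt_dep_h, T_cmb0=2.7):
    # y = [T_k, x_e]; every rate argument is a callable of (z, T_k, x_e).
    T_k, x_e = y
    T_cmb = T_cmb0 * (1.0 + z)
    # Gas temperature: adiabatic cooling + Compton coupling - heating by deposited energy
    dTk = (2.0 * T_k / (1.0 + z)
           + Gamma_C(z, T_k, x_e) / ((1.0 + z) * H(z)) * (T_k - T_cmb)
           - 2.0 * dEdVdt_dep_h(z, T_k, x_e)
             / (3.0 * k_B * n_H(z) * (1.0 + f_He + x_e) * H(z) * (1.0 + z)))
    # Ionization fraction: standard recombination minus standard and exotic ionization
    dxe = (R_s(z, T_k, x_e) - I_s(z, T_k, x_e) - I_X(z, T_k, x_e)) / ((1.0 + z) * H(z))
    return [dTk, dxe]

# Example call (rate functions must be defined by the user), starting from a fully
# ionized, Compton-coupled gas at z = 10000; f_He = 0.08 is an illustrative value:
# sol = solve_ivp(rhs, (10000.0, 6.0), [2.7 * 10001.0, 1.0],
#                 args=(H, n_H, 0.08, Gamma_C, R_s, I_s, I_X, dEdVdt_dep_h),
#                 method="LSODA", rtol=1e-6)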
For solving the differential equations, we assume the initial conditions T_k(z=10000)=T_CMB(z=10000) and x_e(z=10000)=1. The assumption is justified because, at high redshift, the gas temperature is strongly coupled to CMB temperature, and the gas is fully ionized. § ENERGY INJECTION RATE DUE TO SELF-SCATTERING OF UHE NEUTRINOS As stated in Section <ref>, the UHE neutrinos formed from the decay of superheavy DM can interact with the relic cosmic neutrino background present in the universe. This scattering can also lead to the production of photons at the one-loop level, which can heat the intergalactic gas and alter both the gas temperature T_k and the brightness temperature T_21. In this section, we will calculate the energy injection rate resulting from the emission of photons during the scattering of UHE neutrinos with CνB neutrinos. The CνB neutrinos are thermally distributed in the universe with present-day neutrino background temperature of T_ν,0=1.9 K and number density per flavor of n_ν_i,0=112 cm^-3 <cit.>. The velocity-averaged cross-section of the incident UHE neutrino having energy E_ν_h with CνB neutrinos can be expressed in the form <cit.>, ⟨σ v⟩=1/n_ν_i∫d^3p/(2π)^3f(p⃗)v_Mø lσ(s(E_ν_h,p⃗)) where, f(p⃗) is the CνB neutrino momentum distribution and v_Mø l is called Møller velocity. As CνB neutrinos will have m_ν_i≫ T_ν in the relevant range of redshift, the center of mass energy would be independent of momentum p⃗ of CνB neutrinos. Hence, we can approximate s≈ 2E_ν_hm_ν and v_Mø l=1. Using this, the integral becomes ⟨σ v⟩=σ(2E_ν_hm_ν_i), where m_ν_i∼ 0.1 eV is the mass of active neutrino. Following the procedure from <cit.>, we find the evolution of the number density of the UHE neutrinos (n_ν_h) while scattering with CνB neutrinos as: dn_ν_h/dt = n_ν_h n_ν_i⟨σ v⟩, where n_ν_i is the number density of a particular flavor of CνB neutrinos. Considering that a fraction of DM (f_ DM) in the universe consists of UHE neutrinos resulting from the decay of superheavy DM, the present-day number density of UHE neutrinos can be calculated from: n_ν_h, 0 = f_ DMΩ_ DMρ_c/m_ DM, where Ω_ DM is the present day relic abundance of DM, ρ_c is the critical density of the universe and m_ DM is the mass of DM. Since the neutrinos are non-relativistic, their number density varies as n_ν_h=n_ν_h,0(1+z)^3 and n_ν_i=n_ν_i,0(1+z)^3, where n_ν_h,0 and n_ν_i,0 are present-day neutrino density of UHE neutrinos emitted from decay of DM and present-day neutrino density of single generation of CνB background respectively. Using this and eqs. (<ref>) and (<ref>), we obtain dn_ν_h/dt = (1+z)^6f_ DMΩ_DMρ_c⟨σ v⟩ n_ν_i,0/m_ DM As almost the entire rest-mass of DM is available as part of the energy of neutrinos, the energy injection rate into the IGM will be then given by multiplying eq. (<ref>) with m_ DMc^2 for a simple case of DM two-body decay into a pair of UHE neutrinos. With this, the energy injection rate due to the given process is finally given by dE/dVdt|_inj= (1+z)^6f_ DMΩ_DMn_ν_i,0ρ_c c^2⟨σ v⟩ Utilizing the expression above for the energy injection rate, we can assess the impact of heating induced by the radiative scattering of UHE neutrinos into photons on the 21-cm brightness temperature by following the general procedure outlined in the previous section. For all our analysis, we will be using f_ DM≈1. 
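Putting the one-loop cross-section and the injection rate together, a minimal numerical sketch of the ingredients entering the heating term is given below. It transcribes the two expressions as printed above, works in natural units (GeV) with a conversion factor to cgs, and uses complex arithmetic for the loop function so that both the below- and above-threshold regimes are covered; the fiducial constants (critical density for h ≈ 0.74, n_ν,0 = 112 cm^-3, the τ mass) and all function names are our illustrative choices.

import numpy as np

HBARC2_CM2_GEV2 = 3.894e-28   # (hbar*c)^2 in cm^2 GeV^2, converts GeV^-2 to cm^2
ALPHA_EM = 1.0 / 137.036

def sigma_one_loop(s, g, m_phi, m_lep, Q_lep=1.0):
    # Radiative nu-nu scattering cross-section in GeV^-2, with the Passarino-Veltman
    # function C0^gamma evaluated through complex logarithms.
    Gamma_phi = g**2 * m_phi / (4.0 * np.pi)
    beta = np.sqrt(complex(1.0 - 4.0 * m_lep**2 / s))
    C0 = np.log((beta - 1.0) / (beta + 1.0))**2 / (2.0 * s)
    bw = g**4 / ((s - m_phi**2)**2 + (m_phi * Gamma_phi)**2)
    return 81.0 * ALPHA_EM**2 * s / (4.0 * np.pi**3) * bw * abs(1.0 + Q_lep**2 * m_lep**2 * C0)**2

def injection_rate(z, sigma_v_cm3_s, f_dm=1.0, Omega_dm=0.26,
                   rho_c=1.03e-29, n_nu0=112.0):
    # dE/dVdt|_inj in erg cm^-3 s^-1; note that m_DM cancels between the UHE-neutrino
    # number density and the injected energy per scattering, so it does not appear.
    c2 = (2.998e10)**2  # cm^2 s^-2
    return (1.0 + z)**6 * f_dm * Omega_dm * rho_c * c2 * n_nu0 * sigma_v_cm3_s

# Example: tau-loop cross-section at E_nu = 1 PeV (s = 2 m_nu E_nu, m_nu ~ 0.1 eV):
# s = 2.0 * 1.0e-10 * 1.0e6                                   # GeV^2
# sigma_cm2 = sigma_one_loop(s, g=1e-3, m_phi=0.01, m_lep=1.777) * HBARC2_CM2_GEV2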
§ RESULTS AND DISCUSSION In this section, we present our numerical findings on the evolution of the 21-cm brightness temperature in the context of radiative scattering of UHE self-interacting neutrinos into photons. Given that the brightness temperature is influenced by the evolution of the gas temperature and the free electron fraction, we begin by discussing the evolution of these parameters in the presence of energy injection resulting from the radiative scattering. By inserting the expression of the energy injection from eq. (<ref>) through eq. (<ref>) in eqs. (<ref>) and (<ref>), we determine the evolution of the gas temperature in standard cosmology (absence of additional heating) as well as in the presence of the heating effect induced by the scattering of UHE neutrinos, for specific reference values of the scattering cross-section and f_ DM≈ 1. The solid black curve in Fig. (<ref>) represents the evolution of the gas temperature while the dashed red line shows the evolution of the CMB temperature with redshift in standard cosmology. The behavior of the gas temperature in standard cosmology can be understood as follows: at higher redshift, the universe was hot, the hydrogen gas was fully ionized and there was no neutral hydrogen. The gas was in equilibrium with the CMB. Thus, the black solid curve coincides with the dashed red line. As the universe expanded and cooled, at redshift around z ≈ 150 the gas thermally decoupled from the CMB and started cooling adiabatically as T_k∝(1+z)^2 while CMB photons cooled as T_CMB∝(1+z). Thus, the gas temperature starts decreasing faster than the CMB temperature. At lower redshift, z≈17, the gas is heated by the Lyman-α mechanism described by eq. (<ref>), which appears as the plateau region of the curve. Now, the presence of energy injection due to radiative scattering, described by eq. (<ref>) and subsequently eq. (<ref>), causes the gas temperature to deviate from the standard cosmological behavior. Specifically, as the scattering cross-section ⟨σ v ⟩ exceeds 10^-35 cm^3 s^-1, the gas temperature starts to rise at lower redshifts, as depicted in Fig. (<ref>). The green, yellow, and blue curves represent the rise in gas temperature at redshifts roughly between z ≈ 150 and z≈ 6 as the cross-section is increased from 10^-35 cm^3 s^-1 to 10^-33 cm^3 s^-1, respectively. We have verified that a further significant increase in the cross-section would completely eliminate the 21-cm absorption signal. Further, by using eqs. (<ref>), (<ref>) and (<ref>) through eq. (<ref>), we determine the evolution of the ionization fraction for the given reference values of the scattering cross-section and f_ DM≈ 1. The solid black line in Fig. (<ref>) represents the evolution of the ionization fraction under standard cosmological conditions. In standard cosmology, at higher redshifts, hydrogen remains ionized, resulting in x_e = 1. As the universe expands and cools, electrons gradually recombine with hydrogen nuclei to form neutral hydrogen atoms. This process reduces the number of ionized hydrogen atoms and free electrons in the universe. Consequently, the ionization fraction x_e begins to decrease at lower redshifts. In the presence of an additional heating effect due to energy injection from the radiative scattering of UHE neutrinos, the ionization fraction starts to increase at lower redshifts in comparison to standard cosmology. The green, yellow, and blue curves in Fig. (<ref>) represent the rise in ionization fraction at lower redshifts.
Finally, by using eqs. (<ref>) and (<ref>), we numerically calculate the 21-cm brightness temperature as a function of redshift. The solid black curve in Fig. (<ref>) shows the evolution of brightness temperature with redshift in standard cosmology. It shows two absorption signals with the first absorption minima near z ∼ 70 and the other at higher frequencies at z ∼ 12 - 17 in the global cosmological 21-cm signal. From eqs. (<ref>) and (<ref>), we can see that the evolution of T_21 depends on the competition between the spin temperature (T_s) and CMB temperature. At higher redshifts, the absence of neutral hydrogen prevents spin-flip interactions, thus no absorption or emission of the 21-cm line occurs. Around z ≈ 200, during the early dark ages, the spin temperature (T_s) couples with the gas temperature. As the gas cools more rapidly than the CMB temperature, T_s becomes less than T_ CMB, resulting in a noticeable absorption dip in the 21-cm line at around z ∼ 70. As the universe continues to expand and cool, the interaction between the spin temperature and the gas temperature weakens, leading T_s to approach T_ CMB, and no discernible signal is observed. However, at a significantly lower redshift of approximately z ≈ 17, the Wouthuysen-Field mechanism becomes dominant. This mechanism facilitates a strong coupling between the spin temperature and the gas temperature once more. Given the substantial cooling of the gas by this stage, this coupling manifests as a much deeper second absorption signal. The green, orange, and blue curves in Fig. <ref> illustrate the effect of external heating due to the radiative scattering of UHE self-interacting neutrinos. When the thermally averaged cross-section surpasses a specific reference value, the resulting increase in gas temperature from energy deposited through photon emission can lead to weaker absorption dips in the 21-cm signal. As the scattering cross-section continues to rise, the induced heating can ultimately eliminate the absorption of the 21-cm hydrogen line. Overall, the analysis constrains the scattering cross-section value, affecting the brightness temperature magnitude. This suggests that future experiments measuring the 21-cm brightness temperature could offer valuable insights into self-interacting neutrino coupling. By examining the magnitude and characteristics of the 21-cm absorption dips, these experiments can constrain the scattering cross-section of UHE self-interacting neutrinos. This, in turn, can provide bounds on the self-interacting coupling strength and the mass of the mediating particle. §.§ Parameter space of self-interacting neutrino coupling In standard cosmology, the stronger absorption dip at redshift z=17.2 measures approximately T_21≈ -200 mK. However, as noted earlier, the heating effects from neutrino interactions can diminish the strength of this absorption dip. These interactions introduce additional energy into the intergalactic medium, raising its temperature. Consequently, the contrast between the gas temperature and CMB temperature is reduced, leading to a weaker 21-cm absorption signal. Here, we use benchmark cross-section values required to keep the brightness temperature T_21≈ -200 mK and T_21≈ -50 mK at z=17.2, respectively. By utilizing these cross-section values, we constrain the self-interacting neutrino coupling as a function of the mediator mass. 
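As an illustration of how such benchmark cross-sections translate into couplings, the sketch below scans g on a logarithmic grid for each mediator mass and keeps the value whose thermally averaged cross-section, approximated as σ(2 m_ν E_ν) with v_Møl ≈ 1 as argued above, is closest to the target. It assumes that sigma_one_loop and HBARC2_CM2_GEV2 from the earlier sketch are in scope and is only meant to show the logic of such a scan, not to reproduce our figures.

import numpy as np

def coupling_for_target(sigma_v_target, E_nu_GeV, m_phi_GeV,
                        m_lep_GeV=1.777, m_nu_GeV=1.0e-10):
    # sigma_v_target in cm^3 s^-1; with v ~ c the target cross-section is sigma_v / c.
    c_cm_s = 2.998e10
    sigma_target_cm2 = sigma_v_target / c_cm_s
    s = 2.0 * m_nu_GeV * E_nu_GeV
    g_grid = np.logspace(-6.0, 0.0, 600)
    sig_cm2 = np.array([sigma_one_loop(s, g, m_phi_GeV, m_lep_GeV)
                        for g in g_grid]) * HBARC2_CM2_GEV2
    return g_grid[np.argmin(np.abs(np.log10(sig_cm2) - np.log10(sigma_target_cm2)))]

# Example: benchmark <sigma v> = 1e-33 cm^3 s^-1 at E_nu = 1 PeV, over mediator masses:
# g_of_mphi = [coupling_for_target(1e-33, 1.0e6, m) for m in np.logspace(-3, 1, 40)]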
Since there are already stringent constraints on self-interacting coupling for muon and electron neutrinos <cit.>, in this study, we specifically focus on the self-interacting coupling for τ-generation neutrinos and compare it with the sensitivity of the couplings obtained from 10 years of IceCube data given in <cit.>. For simplicity, we assume g_τ = g. According to eq. (<ref>), the scattering cross-section also depends on the energy of neutrinos emitted from the decay of the DM candidate. Considering superheavy DM with a mass range between PeV and EeV, the decay of DM would produce neutrinos with energies on the order of PeV to EeV. Consequently, we compute the cross-section for E_ν_h∼ PeV - EeV and determine the parameter space of self-interacting τ-neutrino coupling as a function of the mediator mass for a specific value of the energy of UHE neutrinos. The results are shown in Fig. (<ref>) for the specific value of the energy of UHE neutrinos. The black solid and dashed curves in each subfigure of Fig. (<ref>) represent the parameter space of self-interacting neutrino coupling (g) and mediator mass (m_ϕ) that satisfy these brightness temperature constraints T_21≈ -200 mK and T_21≈ -50 mK at z=17.2, respectively. These constraints depend significantly on the energy E_ν_h of ultra-high-energy neutrinos generated from the decay of superheavy dark matter. The dip in both curves occurs due to the resonance in the cross-section for a particular value of E_ν_h and m_ϕ. Interestingly, we notice that before hitting the resonance, for s > m^2_ϕ, the cross-section σ∝ g^4/E_ν_h becomes independent of the mass of mediator. Consequently, the coupling constant g remains nearly constant and increases as g ∝E^1/4_ν_h for a fixed value of the cross-section. After passing through the resonance, where s < m^2_ϕ, the cross-section σ∝ g^4 E_ν_h/m^4_ϕ. Therefore, g increases linearly with the mediator mass m_ϕ for a fixed energy value, and the ratio g/m_ϕ decreases as E^-1/4_ν_h when the energy of ultra-high-energy neutrinos E_ν_h is increased. Overall, we observe that across most of the parameter space, as E_ν_h increases from PeV to EeV, the value of g becomes constrained roughly in the range between 10^-4 - 10^-3. These constraints are notably stronger than those derived from other astrophysical and collider constraints. In the most exotic scenarios, if ultra-high-energy neutrinos E_ν_h reach the grand unification scale, the coupling g could potentially increase up to approximately 0.01-0.1, considering the given range of mass for the scalar mediator. While doing this analysis, we have considered f_ DM≈ 1. If the superheavy DM comprises only a small fraction, such as f_ DM∼ 0.1, the value of g will increase slightly by a factor of approximately O(1) in Fig. (<ref>). In Fig. (<ref>), we also present the bounds on g from astrophysical and cosmological observations. As mentioned in previous sections, the scattering of UHE neutrinos and CνB neutrinos results in distinctive dips and bumps in the astrophysical spectrum. Comparing this spectrum with current data from IceCube provides bounds on the self-interacting coupling <cit.>. The red region bounded by a red dashed line corresponds to the sensitivity for self-interacting τ-neutrinos by using 7.5 years of the IceCube-HESE data. The green region bounded by a green dashed line is the predicted sensitivity for ν_τ self-interactions by considering 10 years of the IceCube-Gen2 (2σ) <cit.>. 
This indicates that IceCube-Gen2 will be sensitive to a much larger parameter space than contemporary experiments. Further, the interaction between neutrinos and a scalar mediator allows the mediator to be in thermal equilibrium before neutrino decoupling, thereby affecting the relativistic degrees of freedom (Δ N_ eff≲ 0.5) in the universe. The constraint sets a lower bound on the mediator mass of m_ϕ≥ 1.6 MeV <cit.>. The excluded region is depicted as a light orange-shaded band in Fig. (<ref>). There are also collider constraints for the τ-generation of neutrinos, but these are much weaker than the coupling sensitivity of IceCube. Therefore, we have not included them in Fig. (<ref>). Our results show that the allowed range of the self-interacting coupling g is more severely constrained by the measurement of the absorption brightness temperature T_21 than by the existing IceCube constraints. This indicates that the epoch of the dark ages and cosmic dawn can potentially provide more competitive bounds compared to existing dominant bounds from other cosmological and astrophysical probes. Another significant advantage of using global brightness temperature measurements is that the dark ages and cosmic dawn periods are relatively free from the complex and often uncertain astrophysical processes that can obscure other signals. This clarity makes this method a promising avenue for investigating UHE neutrino fluxes and potential non-standard neutrino interactions. § CONCLUDING REMARKS Ultra-high-energy (UHE) neutrinos, with energies ranging from PeV to EeV, play a crucial role in both cosmology and astrophysics. These particles are among the most energetic and least understood in the universe, providing valuable insights into various fundamental processes. Interestingly, their interactions with CνB neutrinos en route to Earth can provide unique information about potential self-interactions among neutrinos, which are not well constrained at present. In this work, we have investigated the constraints on secret self-interactions of neutrinos emitted through the decay of superheavy DM by studying the impact of their interaction with the cosmic neutrino background on the hydrogen 21-cm signal during the period from the cosmic dark ages to cosmic dawn. Since this period is relatively free from astrophysical uncertainties, it allows for a clearer signal when studying UHE neutrino fluxes and non-standard neutrino interactions. This makes global brightness temperature measurements a promising avenue for advancing our understanding of neutrino properties. By examining the magnitude and characteristics of the 21-cm absorption dips, these experiments can constrain the scattering cross-section of UHE self-interacting neutrinos, thus providing bounds on the coupling strength and the mass of the scalar mediator. We have conducted a detailed investigation into the allowed parameter space of the self-interacting neutrino coupling as a function of the mediator mass by considering a toy model of a light scalar interacting with neutrinos and their leptonic partners. Utilizing specific cross-section values to maintain benchmark brightness temperatures of T_21≈ -200 mK and T_21≈ -50 mK at a redshift of z=17.2, we have constrained the self-interacting coupling of τ-neutrinos for neutrino energies in the PeV to EeV range, which is characteristic of decays from superheavy dark matter candidates.
Our analysis indicates that as E_ν increases from PeV to EeV, the coupling constant g is constrained roughly within the range of 10^-4 to 10^-3. These constraints are much stronger than the predicted sensitivity for ν_τ self-interactions based on simulated data from 10 years of IceCube-Gen2. Interestingly, this approach not only offers a novel and competitive method for probing neutrino properties but also utilizes the relatively simple astrophysical conditions during the cosmic dark ages and cosmic dawn to offer a clearer signal for studying non-standard interactions of neutrinos. Consequently, the potential detection of a robust 21-cm signal by upcoming experiments such as LEDA <cit.> and REACH <cit.> can provide critical insights into the nature of dark matter and neutrino physics in the future. In this work, we have focused primarily on analyzing the global brightness temperature within the context of 21-cm cosmology. In future work, we plan to extend our analysis to include power spectra and polarization signatures within the 21-cm signal. Additionally, a comprehensive investigation of self-interacting neutrino coupling will require exploring the origin of such interactions within a consistent model of physics beyond the Standard Model. Incorporating these additional aspects will enable us to gain deeper insights into the nature of self-interacting neutrinos and their impact on the cosmic environment. § ACKNOWLEDGMENTS MD would like to acknowledge support through the DST-Inspire Faculty Fellowship of the Department of Science and Technology (DST), Government of India under the Grant Agreement number: IFA18-PH215. § The parameters R_s and I_s, in eq. (<ref>), are the standard recombination rate (from ionized gas to neutral gas) and the standard ionization rate (from neutral gas to ionized gas), respectively, and are given by <cit.>, R_s(z) =𝒫[α_H x_e^2n_H] I_s(z) =𝒫[β_H(1-x_e)e^-hν_α/k_BT_k] where 𝒫 is the Peebles coefficient. It represents the probability that an atom in the first excited state reaches the ground state before being completely ionized. This is given by 𝒫=1+K_HΛ_H n_H(1-x_e)/1+K_H(Λ_H+β_H)n_H(1-x_e) where K_H=π^2/E_α^3H and Λ_H=8.22 s^-1. K_H accounts for the effect of the expansion of the universe on the Lyman-α photons, Λ_H is the decay rate of hydrogen atoms from the 2S to the 1S level, and E_α is the Lyman-α energy of a hydrogen atom. Here, α_H and β_H are the recombination coefficient and the photoionization coefficient, respectively, which are given by <cit.>, α_H(T_k) =F×10^-19(at^b/1+ct^d) m^3s^-1 β_H(T_k) =α_H(T_k)(2π m_ek_BT_k/h_p^2)^3/2e^-E_2s/k_BT_k where F, a, b, c, and d are fitting constants and t is a rescaled gas temperature.
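For completeness, the Peebles coefficient defined above can be evaluated as in the following sketch. It assumes cgs-like units, restores ordinary units to K_H = π^2/(E_α^3 H) through a factor (ħc)^3, and takes the standard Lyman-α energy E_α ≈ 10.2 eV; β_H must be supplied consistently (for instance from the expression above) and the function name is ours.

import numpy as np

HBARC_EV_CM = 1.9732e-5   # hbar*c in eV cm
E_ALPHA_EV = 10.2         # Lyman-alpha energy of hydrogen, eV
LAMBDA_2S1S = 8.22        # 2S -> 1S two-photon decay rate, s^-1

def peebles_C(x_e, n_H_cm3, H_s, beta_H_s):
    # Probability that an atom in the first excited state reaches the ground state
    # before being photo-ionized.
    K_H = np.pi**2 * HBARC_EV_CM**3 / (E_ALPHA_EV**3 * H_s)   # cm^3 s
    num = 1.0 + K_H * LAMBDA_2S1S * n_H_cm3 * (1.0 - x_e)
    den = 1.0 + K_H * (LAMBDA_2S1S + beta_H_s) * n_H_cm3 * (1.0 - x_e)
    return num / den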
http://arxiv.org/abs/2406.18179v1
20240626085326
DeepExtremeCubes: Integrating Earth system spatio-temporal data for impact assessment of climate extremes
[ "Chaonan Ji", "Tonio Fincke", "Vitus Benson", "Gustau Camps-Valls", "Miguel-Angel Fernandez-Torres", "Fabian Gans", "Guido Kraemer", "Francesco Martinuzzi", "David Montero", "Karin Mora", "Oscar J. Pellicer-Valero", "Claire Robin", "Maximilian Soechting", "Melanie Weynants", "Miguel D. Mahecha" ]
cs.LG
[ "cs.LG", "cs.DB" ]
§ BACKGROUND & SUMMARY There has been an unprecedented rise in the frequency and severity of climate extremes<cit.>. These rising extremes can have severe ecological<cit.> and socio-economic consequences<cit.>, challenging our established paradigms of climate science<cit.>. For instance, in 2018, central and northern Europe experienced a record-breaking Compound Heatwave and Drought (CHD) event, which extensively impacted agriculture, forests, water supply, and the socio-economic sector<cit.>. Given the increasing intensity and adverse impacts of CHD events in the warming climate, it is critical to understand their intricate dynamics and interactions with climate drivers, spatial conditions, timing, and terrestrial ecosystems<cit.>. The exponential increase in Earth observation data represents a significant advancement but also introduces complex data management and analysis challenges<cit.>. In an era marked by rapid advances in remote sensing capabilities, including satellite observations, aerial imaging, and ground-based records, researchers have access to unprecedented amounts of information. These data are crucial for understanding the impacts of climate extremes<cit.>. Effective sampling strategies are required to harness this data deluge, ensuring relevance and manageability. Data cubes provide a flexible and efficient way to organise and analyse large volumes of multidimensional data, making such datasets manageable and streamlined across variables and spatio-temporal scales<cit.>. Machine Learning (ML) has been introduced into climate science as a valuable tool to understand and predict climate extremes and their impacts, as well as to decipher the interactions between climate and ecosystems <cit.>. Moreover, Deep Learning (DL) allows the identification of complex patterns and correlations that might elude traditional data science methods, thereby helping scientists to better understand the underlying mechanisms of climate variability and change. However, since ML generally performs best with large sample sizes, extreme impact prediction often has significantly smaller sample sizes compared to non-extreme conditions, which complicates the application of ML. In this dataset, we tackle this issue by oversampling extreme areas using the minicube strategy, which has a large distribution in space instead of time. This method introduces additional biases, as ML tends to amplify them. Nonetheless, this trade-off is necessary and must be considered when training models. The sophisticated Earth observation databases that train ML models for analysing climate extremes are growing. These datasets primarily focus on addressing the scarcity of curated data concerning complex weather patterns and climate extremes' impacts on ecosystems. For instance, The ExtremeWeather dataset<cit.> provides labelled extreme weather events (i.e., tropical depression, tropical cyclone, extratropical cyclone, atmospheric river) as boxes, along with climatic and meteorological variables on a global grid of 768 by 1152. This dataset allows training ML models to leverage spatial and temporal information to predict the localisation of extreme weather events.
ClimateNet<cit.> provides an expert-labelled dataset that enables pixel-level identification of extreme events using ML models. Additionally, cross-domain and high-resolution datasets are designed to include localised variables critical for analysing responses to climate extremes, incorporating data from diverse domains. For example, EarthNet2021<cit.> aims to bridge the data gap by integrating a variety of data variables such as precipitation, temperature, sea-level pressure, digital elevation models, and Sentinel-2 Multi-Spectral Instrument (MSI) images, offering a holistic view of Earth system. A model trained on EarthNet2021 can forecast optical satellite images of high perceptual quality. The newly enhanced version, GreenEarthNet<cit.>, focuses more on predicting vegetation dynamics and includes an improved high-quality cloud mask<cit.>. The FluxnetEO data cubes<cit.> provide fully gap-filled Nadir BRDF Adjusted Reflectance (NBAR) data from MODIS, as well as Land Surface Temperature (LST) and several vegetation indices for the Fluxnet sites<cit.>, aiming for modelling carbon and water fluxes. Moreover, DynamicEarthNet<cit.> tracks daily land use and land cover changes across 75 global regions from 2018 to 2019, focusing on detecting land cover changes. BigEarthNet<cit.> is a large-scale benchmark dataset consisting of Sentinel-2 satellite images with multi-label land use and land cover. Presto's Training Dataset<cit.> is a high-resolution dataset that provides detailed data for training ML models to significantly improve the prediction and understanding of climate extremes and their impacts. However, these multi-purpose initiative datasets do not focus specifically on the impact of CHD extremes. We need harmonised datasets tailored for spatio-temporal ML methodologies, aiming to train ML methods to forecast and explain the impacts of extreme events such as droughts and heatwaves. Given the importance of CHD extremes and challenges arising from data biases and their repercussions, this paper is poised to propose a solution that encapsulates precision and reproducibility. Here, we present the DeepExtremeCubes dataset, a collection of minicubes that use a sampling methodology to focus on capturing the impact of CHD extremes globally. Specifically, we introduce 1) a globally stratified sampling procedure, 2) a reproducible data processing pipeline combining multi-modal data, and 3) a representative global dataset to train ML models on CHD extremes, which is analysis-ready and shared in cloud-native format. § METHODS The analysis of CHD extremes necessitates examining a broad range of Earth observation variables across climatic, meteorological, ecological, and topographical dimensions at various spatial and temporal scales <cit.>. Sampling these relevant datasets is crucial to focus on CHD impacts and to understand the complex interactions of different drivers, spatial conditions, and timing of these processes. The DeepExtremeCubes dataset employs sampled minicubes targeted at regions experiencing extreme CHD events and their surroundings, facilitating a more detailed investigation. From a practical viewpoint, managing the vast, high-dimensional Earth system datasets requires significant computational resources. Segmenting these datasets into smaller, manageable subsets (i.e. minicubes) can enhance machine learning computations efficiency<cit.>. §.§ Input data sources Two categories of input data sources are used to create DeepExtremeCubes. 
One is the reference dataset used to determine the strata. This encompasses the Dry and hot extreme events database (Dheed dataset, a predefined global dataset of CHD extreme events) <cit.> and the European Space Agency (ESA) Climate Change Initiative (CCI) land cover map. The other category of data sources consists of the comprehensive Earth system datasets from which the data included in individual minicubes is extracted. We first introduce the reference datasets and then provide details on the comprehensive datasets within the generated minicubes. §.§.§ Dheed event detection dataset Dheed <cit.> is a database of labelled CHD events utilising atmospheric temperature and precipitation from daily aggregated ERA5-Land reanalysis data. A spatial and temporal excerpt of the Dheed dataset is shown in Fig. <ref>. The maximum daily Temperature at 2 m (Tmax) is used to detect heatwaves, and the daily differences of Precipitation and Evapotranspiration (PE) averaged over 30, 90, and 180 days (PE30, PE90, PE180) are employed to detect droughts <cit.>. The Dheed label-cube covers a time range from 2016-01-01 to 2021-12-31, with a spatial resolution of 0.25°. Groups of spatio-temporal grid cells with extreme values connected across space and/or time are each assigned a unique event label. Dheed's labelled events have been benchmarked against extreme events documented in the literature or the media. §.§.§ Land cover map The ESA CCI land cover dataset employs the GlobCover unsupervised classification chain framework<cit.> to generate global annual land use maps from 1992 to 2020. It uses a combination of multi-year and multi-sensor strategies, incorporating data from various satellites such as ENVISAT-MERIS (2003–2012), AVHRR (1992–1999), SPOT-Vegetation (1999–2013), and PROBA-Vegetation (2013–2020)<cit.>. The dataset categorises 37 land cover classes according to the United Nations Land Cover Classification System<cit.> and offers the data at a 300-m spatial resolution in GeoTIFF and NetCDF formats. The selection criteria for a reliable land cover map aim to best meet the requirements to analyse vegetation responses to CHD extremes. First, the map must cover most of the study period from 2016 to 2022, ensuring data continuity to reflect land cover changes. Second, data must be readily accessible for direct download and use. Third, a detailed classification of vegetation types is crucial, particularly focusing on persistent vegetation covers such as broad-leaved trees, needle-leaved trees, and grassland. While some vegetation classes were merged to simplify the initial sampling process, it was essential to retain the original and finer classifications in the minicubes for subsequent analyses. Various global land cover maps were evaluated<cit.>, but none met these criteria as well as the ESA CCI land cover map. For instance, the Global Land Analysis and Discovery (GLAD) laboratory's Land Cover and Land Use Change (LCLUC) data<cit.>, based on Landsat, offers high accuracy in certain non-forest regions and detailed classifications of open canopy forests in Africa. However, its infrequent updates (every five years) between 2000 and 2020 and limited vegetation classification hinder its suitability as the reference land cover map for the DeepExtremeCubes dataset. The ESA WorldCover map<cit.> offers high-resolution data for 2020 and 2021. However, it does not cover the entire study period and lacks comprehensive tree-type classifications.
Therefore it is not suitable for the detailed vegetation analysis required for this study. §.§.§ Data sources within each minicube In addition to subsets of the Dheed dataset and the CCI land cover map, each minicube contains multiple data modalities: (1) Sentinel-2 MSI surface reflectance (L2A) time series data <cit.>, (2) a corresponding deep-learning-based cloud mask <cit.>, (3) ERA5-Land meteorological reanalysis variables <cit.>, and (4) data from the Copernicus Digital Elevation Model (DEM) <cit.>. The ERA5-Land reanalysis data provides information on the historical weather conditions, represented by variables such as temperature, humidity, soil moisture, and others. Sentinel-2 MSI satellite images are a proxy observation for vegetation health. We include bands B02, B03, B04, B05, B06, B07, and B8A, which can be used to compute vegetation indices<cit.>. Together, the ERA5-Land and Sentinel-2 data allow us to study the impact of CHD extremes on vegetation in the DeepExtremeCubes dataset. In Sentinel-2 images, pixels obscured by clouds and cloud shadows can be difficult to distinguish from actual changes in the underlying ecosystem, which may challenge subsequent analyses' accuracy. By incorporating the EarthNet Cloud Cover Mask<cit.>, which is based on the CloudSEN12 dataset<cit.>, obscured pixels can be filtered out, ensuring that vegetation biological dynamics are based on clear and reliable optical remote sensing data. In addition, the Copernicus DEM data is included as one of the key factors in climate-vegetation interactions. The Copernicus DEM provides topographical data at 30 m, enabling us to consider how elevation influences local climate conditions, subsurface hydrology and vegetation patterns. It is crucial in regions where elevation varies significantly and is also important on a global scale. For our minicubes, Sentinel-2 and the cloud mask are spatio-temporal arrays, ERA5-Land is included as a single-pixel time series (temporal array), and the DEM is a static image (spatial array). §.§ Approach We incorporated comprehensive data sources to develop the DeepExtremeCubes dataset, which comprises minicubes with a spatial size of 2.5 km by 2.5 km, covering the period from 2016 to 2022. The schematic approach is shown in Fig. <ref>. The Dheed dataset was used to generate a CHD event days map to determine the sampling locations for the minicubes. Subsequently, the DeepExtremeCubes minicubes were created with various variables. Additionally, we prepared a spatial data split strategy for subsequent users to train their ML forecasting models. §.§.§ Sampling locations for minicubes The sampling of minicube locations began with identifying areas frequently experiencing CHD extremes. We sampled from both areas affected by extremes and surrounding areas with similar land covers ("extreme" and "non-extreme" locations). This approach allows ML models to learn from both CHD-impacted and non-impacted instances, enhancing prediction accuracy and facilitating accurate estimation of carbon sequestration loss in CHD extremes. Additionally, we adjusted the locations based on land cover, with particular emphasis on persistent vegetation land covers. The Dheed dataset was compressed to a CHD event days map, marking all pixels that experienced 10 or more CHD event days during 2016 and 2021. Due to the large size of the Dheed dataset, it could not be processed quickly in the following steps. Therefore, we aggregated the temporal dimension and generated the CHD event days map (see Fig. <ref>)(b). 
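In code, this aggregation amounts to counting labelled event days per grid cell. A minimal, illustrative sketch (assuming the Dheed labels are available as an xarray DataArray with a "time" dimension and non-zero values marking event labels; the store path, variable, and dimension names are assumptions, not the actual pipeline):

    import xarray as xr

    # Open the Dheed label cube (the store path and variable name are illustrative).
    labels = xr.open_zarr("dheed_labels.zarr")["event_label"]

    # Count, for each grid cell, how many days carry any CHD event label.
    event_days = (labels != 0).sum(dim="time")

    # Cells with 10 or more event days are potential "extreme" sampling locations;
    # cells with zero event days are candidate "non-extreme" surroundings.
    extreme_mask = event_days >= 10
    non_extreme_mask = event_days == 0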
All pixels that experienced 10 or more CHD event days were marked as potential central sampling locations. Around 80% of the "extreme" minicubes were located in heavily impacted areas (i.e., areas marked in the event days map), while roughly 20% "non-extreme" minicubes were situated in the vicinity of "extreme" areas and did not experience any CHD events (i.e., areas with 0 event days). This step serves two main purposes: first, it enables the models to learn from CHD-impacted and non-impacted instances, enhancing predictive accuracy. Second, it allows for more accurate computation of carbon sequestration losses within regions covered by minicubes by comparing paired minicubes from both "extreme" and "non-extreme" conditions. To avoid spatial autocorrelation at very close distances in random sampling within "extreme" areas, where sampled locations tend to cluster, we maintained a spatial grid of 0.125° (half of the Dheed dataset's resolution). We selected no more than one sample per grid cell for each land cover type. We defined a set of target vegetation types that best summarise all the persistent land covers while keeping a focus on vegetation. In this sampling step, we merged most of the land covers to simplify the sampling categories (see details in Table s1). These merged vegetation classes include broad-leaved trees, needle-leaved trees, mixed trees, and grassland. To enhance the diversity of land covers and examine prediction accuracy through comparisons between vegetation land covers and other persistent land covers, we also included bare area and urban area. Thereby, we focused on six land cover types in total. In addition, we assessed the purity of land covers within defined minicubes to evaluate the varying behaviours of the ML prediction model concerning pure versus mixed land covers. Given the ESA CCI land cover map's resolution of 300 meters per pixel, a minicube (2.5 km by 2.5 km) encompasses approximately 9×9 pixels. We established a spatial window of 81 pixels (9×9), centred on the central pixel, to determine the purity of each minicube's land cover. If 65 or more pixels within this window exhibit the same land cover (equivalent to an 8×8 pixel area, about 80% of the spatial coverage of a minicube), the central pixel is classified under "pure land cover" and is eligible to be the central point of a minicube. Conversely, if fewer than 65 pixels share a single cover type, the central pixel is considered to have "mixed land cover." Given the importance of land cover purity in one minicube for ML prediction models, we set a lower threshold such that the central pixel must display at least 50% similarity (40.5 pixels in a 9×9 matrix). If a land cover meets this threshold range of 50%-79%, the central pixel is still considered a potential sampling location and is marked with this land cover as the dominant land cover. We also list the second dominant land cover by tallying the remaining land cover classes and selecting the most frequent. Fig. S1 presents the detailed distribution of (a) the pure land cover map and the mixed land cover maps, including (b) the dominant land cover map and (c) the secondary land cover map. In summary, the minicube location sampling is based on two factors: whether the area is impacted by CHD extremes, and is purely or mixed covered by the target land covers. First, the minicube location sampling primarily focuses on selecting areas affected by CHD extremes, specifically only those with more than 10 event days. 
From this set, 80% of all minicubes are chosen. The other 20% are selected from areas surrounding these extremes that did not experience CHD extremes. We want to distinguish between minicubes covered by pure or mixed land covers, to facilitate exploration of land cover purity for prediction. This categorisation considers pure land cover (about 80%-100% covered by one land cover) and mixed land cover (about 50%-79% covered by one land cover). When a minicube has mixed coverage, its second land cover class is provided to offer additional contextual information. The results of the minicube sampling are shown in Fig. <ref>. The samples indicate the central location of each minicube. §.§.§ Minicube Generation and variables We generated minicubes given the previously established central locations. A minicube is a dataset covering an area of 2.56 km by 2.56 km around the central location, ranging from 2016 to 2022, incorporating elements from various data sources (as listed in Table <ref>). To minimise the distortion of the original data, we opted to maintain separate spatial and temporal resolutions, which can be harmonised during subsequent processing if necessary. We collected data from various sources during the generation process and documented details about processing steps in configuration files. We consolidated the variables into a single dataset <cit.> and stored them in the Zarr format (https://zarr.readthedocs.io/). The generation process comprised two stages: initially, we created a "base-minicube" composed exclusively of Sentinel-2 data. In the second stage, this "base-minicube" was updated with the remaining components. By adopting this strategy, we reduce the frequency of Sentinel-2 data pooling, which is the most resource-intensive part of the process. Additionally, we generate corresponding configuration files for these two phases. § DATA RECORDS We intend to permanently store the DeepExtremeCubes dataset in the ESA Open Science Catalogue (OSC, https://opensciencedata.esa.int/) once the platform is operational. As the dataset is large, approximately 3.2 TB, preparing the storage in OSC will take a few more months. During this time, we use an Amazon Web Services (AWS) bucket for the manuscript review process; see this access guide: https://deepextremes-public.s3.eu-central-1.amazonaws.com/readme.pdf (password: deepextremes). The database includes the DeepExtremeCubes minicubes, a registry table providing all attributes within each minicube (https://deepextremes-public.s3.eu-central-1.amazonaws.com/mc_registry_v4.csv), and the Dheed dataset. Table <ref> lists all attributes in the registry table. Users can use it to find minicubes based on specific criteria, such as spatial extent, components, land classes, or labelled extreme events. In addition, we provide a minicube example (https://deepextremes-public.s3.eu-central-1.amazonaws.com/mc_10.22_50.96_1.3_20230928_0.zip) and a demonstration Jupyter notebook (https://deepextremes-public.s3.eu-central-1.amazonaws.com/DeepExtremesCubes_Access.ipynb) with direct hyperlinks to help explore the DeepExtremeCubes minicubes. § TECHNICAL VALIDATION §.§ Land cover representation To validate the land cover representation in the DeepExtremeCubes dataset, we compared it with the global land cover distribution (see Fig. <ref>). Our analysis reveals an overrepresentation of broad-leaved and needle-leaved trees in minicubes compared to the global distribution.
Conversely, grassland and urban area align proportionally with the global distribution, while mixed trees samples are underrepresented. This underrepresentation is the result of our method, which defines the dominant land cover in each minicube as its primary land cover, and mixed trees often coincide with either broad-leaved or needle-leaved trees. From the ML perspective, having a larger number of minicubes covered by broad-leaved or needle-leaved trees, as opposed to mixed trees, enhances prediction accuracy. §.§ Spatial analysis of minicube distribution To assess the impact of our stratified sampling on the spatial distribution of minicube locations, we computed the spatial autocorrelation of their extreme event occurrences. The analysis yielded a Moran's I value of 0.89, with a p-value of 0.001 and a z-score of 300.48. The high Moran's I value (0.89) reveals a significant and strong positive spatial autocorrelation among minicube locations. This suggests that locations with similar event statuses ("extreme" or "non-extreme") are clustered together. The statistical significance of this clustering is confirmed by the p-value of 0.001 and the high z-score (300.48), indicating that the observed spatial pattern is highly unlikely to be the result of random chance. Additionally, the proximity of "extreme" minicubes to "non-extreme" minicubes demonstrates a correlation in our dataset due to the sampling strategy. This effective sampling approach ensures that "non-extreme" samples are geographically close to "extreme" samples, confirming that our dataset's spatial correlation is preserved. We analysed the shortest distance along which an "extreme" minicube can find a "non-extreme" minicube with the same land cover and computed the proportion of these minicubes in the total "extreme" minicubes along that distance. It maintains comparable environmental and vegetation characteristics while differing only in the impact of the CHD event on the "extreme" minicube. The results indicate that at a surrounding distance of 200 km, approximately 25% to 35% of extreme minicubes of each land cover type (excluding urban areas) can find a non-extreme minicube with the same land cover as a reference. Urban areas have a 10% probability at this distance, which is attributed to their low coverage in both the global map and minicube coverage (see Fig. <ref>). At a surrounding distance of 100 km, extreme minicubes covered by bare areas have a relatively high probability of finding a non-extreme minicube, as bare areas dominate global land coverage. A similar pattern applies to mixed trees. Although the likelihood of finding a paired minicube covered with mixed trees within a surrounding distance of 75 km is relatively low, the high global coverage increases the chances of finding a paired non-extreme minicube at larger distances. For the three main vegetation land covers (broad-leaved trees, needle-leaved trees, and grassland), there is a 5% to 10% chance of finding a paired non-extreme minicube with the same land cover type within a surrounding distance of 100 km. Needle-leaved trees slightly exceed grassland at a distance of 150 km, but overall, they exhibit a similar proportion. §.§ Limitation of the data The DeepExtremeCubes dataset focuses on forecasting and analysing the impact of CHD extremes on persistent vegetation types. Although we included bare area and urban area as additional land covers, the diversity of land cover in the real world is not fully encompassed. 
For example, land covers primarily impacted by anthropogenic effects were omitted (e.g. croplands). This is a trade-off we make to narrow the set of variables and obtain better prediction and training results in ML models. The created event days map includes areas experiencing 10 or more event days in the Dheed dataset. This might not be the best approach for a base map for minicube sampling. As mentioned by Weynants et al. (2024)<cit.>, the amount and volume of extreme events generally follow a power-law distribution, with a few extremely large events and many small ones. Additionally, events in the Dheed dataset with small spatial coverage and short duration could be false alarms. Therefore, selecting an event days map that masks areas with more than 10 event days might be simplistic, even if effective, as it directly reflects the raw CHD detection results. Considering this limitation, we propose an alternative strategy, setting criteria that require a volume greater than 1000 units, an area exceeding 30 pixels (equivalent to a 0.25° increment), and a duration longer than five days. Using these conditions, we identified 114 significant events out of 26,935 events. The resulting event days map is shown in Fig. S2 in the supplementary information. This alternative map can directly replace the 10-day event map. § USAGE NOTES The spatial distribution of the minicubes is strongly uneven. Climatic and meteorological data are correlated across large spatial ranges. This leads to minicubes being clustered in and around extreme events. When designing AI methods, one should account for this spatial autocorrelation, and samples should not be randomly selected from the minicube collection to create the training and test sets. To limit the dependence between training, validation and test sets, we implemented a split of the collection into ten folds, ensuring that the distance between minicube locations from different folds is larger than 50 km<cit.>. This split is created in three steps. First, we build a balltree from the haversine distance for all locations (lon, lat). A balltree (also called a metric tree) is built by successively splitting points into surrounding hyperspheres whose radii are determined from the given metric<cit.>. The Haversine distance is the angular distance between two points on the surface of a sphere. Second, we create clusters of locations, ensuring that the distance between locations from different clusters is always larger than 50 km. Third, clusters are distributed into ten groups, in decreasing order of size, always adding the largest cluster to the smallest group. Fig. <ref> shows the spatial distribution of the resulting groups.
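A minimal sketch of this three-step split (scikit-learn's BallTree with the haversine metric; the clustering step shown here is a simple single-linkage grouping via radius queries, an illustrative simplification rather than the exact implementation in the repository):

    import numpy as np
    from sklearn.neighbors import BallTree

    EARTH_RADIUS_KM = 6371.0
    MIN_DIST_KM = 50.0

    def spatial_folds(lats, lons, n_folds=10):
        # Step 1: BallTree with the haversine metric expects (lat, lon) in radians.
        coords = np.radians(np.column_stack([lats, lons]))
        tree = BallTree(coords, metric="haversine")

        # Step 2: merge all locations connected by links shorter than 50 km into one cluster,
        # so that locations in different clusters are always more than 50 km apart.
        neighbours = tree.query_radius(coords, r=MIN_DIST_KM / EARTH_RADIUS_KM)
        cluster = -np.ones(len(coords), dtype=int)
        current = 0
        for i in range(len(coords)):
            if cluster[i] >= 0:
                continue
            stack = [i]
            while stack:
                j = stack.pop()
                if cluster[j] >= 0:
                    continue
                cluster[j] = current
                stack.extend(k for k in neighbours[j] if cluster[k] < 0)
            current += 1

        # Step 3: assign clusters, from largest to smallest, to the currently smallest fold.
        ids, sizes = np.unique(cluster, return_counts=True)
        fold_sizes = np.zeros(n_folds, dtype=int)
        fold_of_cluster = {}
        for cid, size in sorted(zip(ids, sizes), key=lambda t: -t[1]):
            target = int(np.argmin(fold_sizes))
            fold_of_cluster[cid] = target
            fold_sizes[target] += size
        return np.array([fold_of_cluster[c] for c in cluster])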
§ CODE AVAILABILITY The code to create the minicubes is hosted at https://github.com/DeepExtremes/minicube-generation. § ACKNOWLEDGEMENTS The authors acknowledge the support from the ESA AI4Science project "Multi-Hazards, Compounds and Cascade events: DeepExtremes," 2022-2024, and the European Union's Horizon 2020 research and innovation program within the project "XAIDA: Extreme Events - Artificial Intelligence for Detection and Attribution" (grant agreement 101003469). K.M. acknowledges funding by the Saxon State Ministry for Science, Culture and Tourism (SMWK) – [3-7304/35/6-2021/48880]. § AUTHOR CONTRIBUTIONS STATEMENT C.J. contributed to conceptualization and methodology, conducted the experiment, software, validation, data curation, writing the original draft, and project administration. T. F. conducted the creation and storage of minicubes. M.W. and F.G. provided expertise in using the Dheed dataset and ran the spatially blocked dataset split. V.B.: conceptualization, methodology. M.A.F.T.: conceptualization, methodology. K.M.: conceptualization, methodology, contributed critically to the drafts, and gave final approval for publication. F.M.: validation. D.M.: software, visualization, writing - review and editing. O.P.: methodology, validation. C.R. contributed to the spatially blocked dataset split. M.M.: conceptualization, supervision, and funding acquisition. All authors contributed to writing - review, and editing. Authors from the third to the second-to-last are ordered alphabetically.
http://arxiv.org/abs/2406.18999v1
20240627083916
Improving Taxonomic Image-based Out-of-distribution Detection With DNA Barcodes
[ "Mikko Impiö", "Jenni Raitoharju" ]
cs.CV
[ "cs.CV" ]
Improving Taxonomic Image-based Out-of-distribution Detection With DNA Barcodes. The work was funded by Research Council of Finland project 333497. Mikko Impiö (1) and Jenni Raitoharju (1,2); (1) Finnish Environment Institute, Quality of Information, Finland; (2) University of Jyväskylä, Faculty of Information Technology, Finland. July 1, 2024 § ABSTRACT Image-based species identification could help scale biodiversity monitoring to a global scale. Many challenges still need to be solved in order to implement these systems in real-world applications. A reliable image-based monitoring system must detect out-of-distribution (OOD) classes it has not been presented with before. This is challenging especially with fine-grained classes. Emerging environmental monitoring techniques, DNA metabarcoding and eDNA, can help by providing information on OOD classes that are present in a sample. In this paper, we study whether DNA barcodes can also support finding the outlier images based on the outlier DNA sequence's similarity to the seen classes. We propose a re-ordering approach that can be easily applied on top of any pre-trained model and existing OOD detection method. We experimentally show that the proposed approach improves taxonomic OOD detection compared to all common baselines. We also show that the method works thanks to a correlation between visual similarity and DNA barcode proximity. The code and data are available at https://github.com/mikkoim/dnaimg-ood. Index terms: image-based taxonomic identification, out-of-distribution detection, DNA barcodes. § INTRODUCTION The state of the natural environment has been significantly altered by humans, and the increase in species extinctions has prompted action in environmental monitoring <cit.>. Efforts are currently being scaled up to address the challenge of monitoring the entirety of our planet. Automated methods based on computer vision and imaging devices that can collect more data from the environment and species are gaining a lot of interest among ecologists <cit.>. With automatic monitoring methods gaining interest, it is important to consider the trustworthiness of these systems. Image classification methods based on Deep Neural Networks (DNNs) have been shown to be effective when the models are evaluated on the same data distribution they were trained on <cit.>. However, in a realistic use case the trained model is presented with the "open world", where samples from unseen classes can be presented to it. This is known as the "known unknown" domain of pattern recognition, which has been extensively studied via out-of-distribution (OOD) problems <cit.>. It is well known that DNNs can produce high-confidence outputs for out-of-distribution classes and even for inputs containing only noise <cit.>. OOD detection is especially relevant for arthropod and insect classification, as a vast number of species in these groups remain undescribed <cit.>. Confidently made incorrect classifications can have undesired effects on the biological indices calculated from taxa lists, as well as on biomass and abundance estimates. Molecular methods such as deoxyribonucleic acid (DNA) barcoding <cit.> have been proposed as an answer to the accurate identification problem.
The amount of sequencing data and availability of reference sequence libraries, such as the Barcode of Life database (BOLD) <cit.>, are increasing as sequencing technologies are becoming affordable <cit.>. Molecular methods can identify the species, but are not accurate with biomass or abundance estimation <cit.>. Thus, manual classification or automatic recognition based on images remain the most viable options for these tasks. OOD detection has been studied broadly for different modalities, including images, text, and genomic data <cit.>. Methods are typically based on ranking the samples using an OOD scoring method that produces a metric score. A high metric score is then associated with the likelihood of a sample being an outlier. Taxonomic out-of-distribution detection has usually been done with benchmark datasets. These benchmark datasets often have classes that are easily distinguishable from each other, with only a few fine-grained classes, such as dog breeds or birds. Hendrycks et al. <cit.> recognize this problem and observe that with easily distinguishable classes the outputs are more concentrated, while with fine-grained classes outputs are divided between the highly similar classes. This problem is also present with fine-grained insect datasets, making them a good example of challenging OOD detection. This paper proposes using DNA barcoding data in image-based OOD detection in fine-grained taxa identification, which is a new research area. OOD detection purely for genetic data has been studied before <cit.>. In zero-shot classification, using DNA side information for improving accuracy was proposed by <cit.>. Using side information in OOD detection has not been as common as in other areas of study. We show in this paper that using available DNA side information can produce better OOD detection performance. § PROPOSED METHOD We propose an approach for outlier (unseen taxa) detection in taxonomic identification using DNA barcodes as side information. The proposed method serves as an additional re-ordering component applied after a more commonly used scoring method, such as Energy <cit.> or MaxLogit <cit.>. Our assumption is that using DNA barcode proximity for re-ordering the OOD detection scoring outputs can improve outlier detection. This is based on the assumption that DNA barcodes encode some aspects of visual similarity. The practical use case for the method could be the following: We have a sample of specimens that we want to image using an imaging device and classify with a deep neural network (DNN). The classifier has been previously trained with a set of C inlier classes, 𝒞_in. We can also extract DNA barcodes from the sample using DNA metabarcoding or eDNA approaches <cit.>. This means that we can sequence the specimens in bulk and get a list of the taxa (classes) present in the collection, but the DNA sequences cannot be assigned to specific specimens (images). We assume that the DNA list reveals the presence of a set of C_out outlier classes, 𝒞_out, that we have not encountered when training the classifier. We also assume that the classes in 𝒞_out can be found in DNA barcode reference databases. Thus, the DNA list reveals information on the presence of outlier classes, but we cannot know which images/specimens they correspond to. We want to find out the most probable outliers in the sample. In actual monitoring situations, the candidate images could then be verified by a human taxonomist.
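In code, this setup reduces to a simple set difference between the taxa revealed by the bulk DNA sample and the classes seen during training (a toy illustration; the taxon names are only examples):

    # Taxa detected in the bulk DNA sample, matched against a reference library such as BOLD.
    dna_taxa = {"Polycentropus flavomaculatus", "Oxyethira", "Sphaerium"}

    # Inlier classes the image classifier was trained on.
    inlier_classes = {"Polycentropus flavomaculatus", "Oxyethira"}

    # Outlier classes: present according to the DNA, but unseen during training.
    outlier_classes = dna_taxa - inlier_classes   # {'Sphaerium'}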
The proposed method works in four phases: 1) Classification, 2) OOD scoring, 3) DNA distance ranking, and 4) Re-ordering the samples based on DNA distances. Finally, metrics are calculated for the re-ranked samples. An overview of the process can be seen in Fig. <ref>. §.§ Classification For classification, any commonly used deep neural network model can be used. The model f has parameters θ, it takes an input image x and outputs a discrete logit score vector f_θ(x) = 𝐬 = [s_0, s_1, ..., s_C-1] of length C, where C is the number of classes the model was trained on. A logit score s_i can be transformed into a softmax probability score p_i by p_i = e^s_i/∑^C-1_j=0e^s_j. The softmax probability score represents the heuristic probability of the image to belong to this class, such that ∑^C-1_i=0 p_i = 1. The most likely class assignment for an image can be calculated by taking the class index of the highest score c = argmax(𝐬). This predicted class is later used in re-ordering. §.§ Out-of-distribution scoring Previous OOD detection methods calculate a OOD scoring metric m from the DNN output 𝐬. This can be used to rank the samples in a decreasing order based on their probability to be outliers. This order also serves as a baseline for the DNA re-ordering performed later. From the ranking, we can choose the first p percent items as the most likely outlier candidates. The percentage is an arbitrary threshold and can be chosen freely, with the trade-off between true and false positive rates. Lower thresholds ensure that all inlier classes are classified as inliers with the cost of more outliers being assumed as inliers. Higher thresholds ensure that more outliers are correctly detected with the cost of more inliers being assumed as outliers. We use different OOD scoring methods proposed in the literature and others designed for the study that we could not easily find in the literature: 1) Maximum Sigmoid Probability (MSP) <cit.>, 2) Maximum Logit value (MaxLogit) <cit.>, 3) Energy score<cit.>, 4) Entropy of the sigmoid distribution, 5) Ratio between second highest and highest logit, and 6) Ratio between second highest and highest sigmoid probability. The MSP is simply the maximum value from the sigmoid probability output, m_MSP = max(𝐩), and MaxLogit refers to the maximum value of the raw logit output 𝐬. In practice, to get a decreasing order, the values are multiplied by -1. The energy function is also calculated from the logit output, by m_energy = -log( ∑^C-1_i=0 e^s_i) <cit.>. Entropy is calculated from the sigmoid probabilities with m_entropy = ∑^C-1_i=0 p_i log(p_i). The ratio scores are simply either max2(𝐲) / max(𝐲), where 𝐲 is either 𝐬 (logit) or 𝐩 (sigmoid). The function max2(𝐲) returns the second highest value of 𝐲. §.§ DNA distances We assume that the bulk DNA sample can be sequenced and a list of present taxa 𝒞_DNA can be attained by comparing the sequences to a reference database, such as the Barcode of Life Database (BOLD) <cit.>. Each taxa class has a representative DNA sequence consisting of nucleotide bases 𝐛∈{A,G,T,C}^n, which in practice is a DNA barcode <cit.> of length n. We can calculate a distance d_ab = dist(𝐛_a, 𝐛_b) between two sequences. Here, dist(·, ·) is a DNA distance function. There are several possible ways of calculating distances between DNA sequences. The most simple one is the percentage of sites that differ between sequences, referred to as the "raw" difference. 
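For reference, the six scoring functions listed above can be written compactly; a minimal NumPy sketch (s is the logit vector of a single image; following the sign convention in the text, MSP and MaxLogit are negated so that sorting in decreasing order puts likely outliers first, while the remaining scores are kept exactly as defined above):

    import numpy as np

    def ood_scores(s):
        s = np.asarray(s, dtype=float)
        p = np.exp(s - s.max())
        p = p / p.sum()                      # softmax probabilities
        top2_s = np.sort(s)[-2:]             # two largest logits
        top2_p = np.sort(p)[-2:]             # two largest probabilities
        return {
            "MSP": -p.max(),
            "MaxLogit": -s.max(),
            "Energy": -np.log(np.exp(s).sum()),
            "Entropy": float(np.sum(p * np.log(p + 1e-12))),
            "LogitRatio": top2_s[0] / top2_s[1],
            "SoftmaxRatio": top2_p[0] / top2_p[1],
        }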
In evolutionary biology, the Kimura 2-parameter distance (referred to as "K80") <cit.> is more commonly used. It assumes that the probabilities for transition and transversion substitutions are different. A transition means that the substitution happens within the pyrimidine (C,T) or purine (A,G) pair, while a transversion refers to a substitution between pyrimidines and purines. We experimented with both raw and K80 DNA similarities. The outlier class set can be found from the set of present taxa by 𝒞_out = 𝒞_DNA∖𝒞_in. In this paper, we consider the special case of |𝒞_out| = 1. As we have a single outlier class c_out, we can calculate the DNA distance to all inlier classes so that we get a list of distances [d_0, d_1, d_2, ..., d_C-1] for all outlier-inlier pairs. This list is then ranked, so that the inlier class with the shortest distance to the outlier is first. §.§ Re-ordering based on DNA distances Our main contribution is a re-ordering method that serves as an additional component after OOD detection has been done for the DNN outputs. We use the DNA distance ranking as side information to re-order the initial OOD ranking done from the images. We propose two different approaches: 1) DNA ordering and 2) DNA quantile ordering. In both approaches, images are prioritized based on the DNA-based similarity (Section <ref>) between their predicted class and the outlier class. In the first re-ordering approach, all images classified as the inlier class with the highest DNA similarity to the outlier class are taken and then arranged based on their OOD scoring metric. Next, images from the class with the second highest DNA similarity are taken, and so on, following the DNA similarity ranking. Thus, samples classified by the DNN as the inlier class closest to the known outlier, determined by DNA proximity, are considered the most probable specimens belonging to the outlier class, in the order of their OOD scoring. The DNA quantile method adds an additional step of first considering the block of q percent of the specimens with the highest OOD score and re-ordering them by DNA as described above. Next, the remaining 1-q percent of the specimens are ordered. This approach takes into account that the OOD scoring metric might initially give us better information about a specimen being an outlier, and the classification along with the DNA-based class similarity is considered second. Finally, the order inside each classification group is determined by the OOD scoring metric as described. § EXPERIMENTS §.§ Experimental setup §.§.§ Dataset We use the FinBenthic 2 dataset[https://etsin.fairdata.fi/dataset/a11cdc26-b9d0-4af1-9285-803d65a696a3]<cit.> for all of our experiments. It has 460 009 images from 9631 specimens, belonging to 39 different taxa. 26 of the taxa are classified to the taxonomic level of a species, while the rest are classified to genus level, except one class, Simuliidae, which is on the family level. Each specimen has a maximum of 50 images. Each image is classified separately. The largest class, Simuliidae, contains 44240 images from 887 specimens, while the smallest class, Hydropsyche saxonica, has 490 images from 17 specimens. As there is no DNA data corresponding to the specimens in the FinBenthic 2 dataset, we collected a toy dataset representing image-DNA pairs. For each of the 39 classes, we collected a DNA sequence representing that taxon. The DNA data was collected from BOLD <cit.>. The samples were chosen by searching the database for sequences from a taxon and choosing a random entry.
For the classes corresponding to taxa higher than species level, we chose a random species in the corresponding genus or family. Only high-quality sequences from the COI gene were considered. Samples were chosen to be from Finland to match the image dataset origin. After collecting the DNA dataset, we aligned the sequences using the MUSCLE multiple sequence alignment method <cit.>. The DNA sequence dataset is available at https://github.com/mikkoim/dnaimg-ood. §.§.§ Classification setup As the dataset has 39 different classes, we trained a total of 39 different models. Each model was trained with an inlier set of 38 classes, with one taxon serving as the outlier. The dataset was split into train and test sets so that 80% of the inlier class specimens are in the training set, and all specimens from the outlier class and 20% of specimens from each of the 38 inlier classes are in the test set. The splits were grouped by specimen, so that all images of a single specimen belong either to the test or train split. Models were trained for 30 epochs, using the AdamW <cit.> optimization algorithm. As the DNN backbone we used the EfficientNet-b0 <cit.> architecture. We used a batch size of 256, with a learning rate determined separately for each model with a learning rate search from the PyTorch Lightning library <cit.>. The learning rate was in the interval [2.291·10^-3, 5.754·10^-3] for all models. The images were resized to 224x224 pixels, and random data augmentation was applied. We used an assortment of data augmentation steps, including horizontal and vertical flips, geometric augmentations, color shifts, and pixel dropout. Augmentation was not used with the test dataset. §.§.§ Implementation environment We used R for the DNA distance calculations, and the taxonomist Python library for training the models. §.§.§ Evaluation metrics For the final evaluation of the DNA re-ordering methods, we use metrics commonly used in the OOD detection literature: AUROC, AUPRC, and FPR@95. The AUROC and AUPRC are the areas under the Receiver Operating Characteristic and Precision-Recall curves. The metrics correspond to calculating true positive rates (TPR), false positive rates (FPR), precision, and recall for all possible thresholds, and integrating the values as the area under the curves. FPR@95 is calculated from the same ROC curve, and corresponds to the FPR at the position where TPR is 95%. §.§ Experimental results §.§.§ Re-ordering based on DNA We calculated the AUPRC, AUROC, and FPR@95 metrics for all classes with all OOD scoring methods and DNA re-ordering methods with two different DNA distance metrics, raw and K80. We observed that the raw and K80 DNA distances produced very similar outputs, often producing the exact same ranking and, thus, resulting in the same metrics. The K80 distance was slightly better, so we use it when reporting the results. In Table <ref>, we report the averaged metrics (using the K80 distance) with their standard deviations over all 39 classes. Our proposed DNA-based approach almost always produces better results than the OOD baseline. The improvement between the baseline and plain DNA ordering is very slight or negligible, but the DNA quantile method is consistently better. Fig. <ref> illustrates the class-specific percentage difference between AUROC scores of the DNA quantile method and different baselines for all of the 39 models/outlier classes. It can be observed that the proposed approach improves AUROC over the baseline for a majority of classes.
The differences between different OOD baselines are relatively small. The class where DNA ordering had the largest improvement over the baseline (31 / Polycentropus irroratus) has an inlier class in the same genus (30 / Polycentropus flavomaculatus), to which 95% of the outlier images are classified. This inlier is also the closest by DNA proximity, resulting in good OOD detection performance for the proposed approaches. Some classes perform worse than the baseline, for example class 37 / Sphaerium. 41% of the images are originally classified as Oxyethira, which is the second furthest taxon from the outlier taxon by DNA proximity. §.§.§ Other results The effect of the DNA quantile parameter q can be seen in Fig. <ref>. A q value of 0.4 seems to give the best performance with this dataset. The classifier outputs a class prediction c for each outlier image x_out. Each prediction is an inlier: c ∈𝒞_in. Summing the predictions for all outlier images of a certain class, we can form a histogram of predictions for the outlier class, where n_i is the number of outlier images predicted to inlier class c_i. We can calculate the proportion of predictions in each inlier class c_i as r_i = n_i / N_out, where N_out is the number of images in the outlier class. If we do this separately each time one of the 39 classes is the outlier and the other 38 classes are inliers, we get a non-symmetric 39 × 39 matrix that we can compare to the pairwise DNA distance matrix calculated for all 39×39 class pairs. Omitting pairs where r_i is zero and plotting the corresponding DNA distance and prediction proportion for each outlier-inlier class pair, we get the plot in Fig. <ref>. There is a slight correlation, with a Pearson R value of -0.38 and a p value of 2.62e-30. The figure highlights the classes from Fig. <ref>, where DNA ordering had the highest improvement. § DISCUSSION Our results show that using DNA proximity to re-order rankings produced by common out-of-distribution detection scoring methods improves the reliability of taxonomic OOD detection, measured with several metrics. The proposed method can be easily applied on top of any pre-trained model. The proposed method works best if outliers are visually classified to inlier classes that are close by DNA proximity. This is often the case, thanks to the correlation between DNA distance and visual similarity (illustrated in Fig. <ref>), especially for genus-level outliers. However, if visually similar classes are by chance far away from the outlier by DNA distance, the method works worse than the baselines. Methods that rely on fine-tuning the classifier with OOD samples form a common family of OOD detection methods. DNA side information might also be useful with these methods, such as when training a separate discriminator network<cit.>, or an estimation branch <cit.>. Currently, the proposed method is limited by the assumption of a single outlier class. In real life, a DNA-barcoded group of specimens could contain several outlier classes. This limitation will be addressed in future studies. Datasets with realistic DNA-image pairs, such as the BIOSCAN-1M <cit.>, can be used to test the proposed methods further.
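As a compact summary of the re-ordering procedure described in the Proposed Method section, a possible NumPy sketch of both variants (the exact implementation in the released code may differ; the score orientation — higher OOD score means more likely outlier — is an assumption stated earlier in the text):

    import numpy as np

    def dna_reorder(pred_class, ood_score, class_dna_rank, q=None):
        """Return image indices ordered so that likely outliers come first.

        pred_class     : predicted inlier class index per image
        ood_score      : OOD scoring metric per image (higher = more likely outlier)
        class_dna_rank : rank of each inlier class by DNA distance to the outlier (0 = closest)
        q              : if given, fraction of top-scoring images re-ordered first (DNA quantile)
        """
        pred_class = np.asarray(pred_class)
        ood_score = np.asarray(ood_score)
        img_rank = np.asarray(class_dna_rank)[pred_class]   # DNA rank of each image's predicted class

        def order_block(idx):
            # Sort by DNA rank of the predicted class, then by OOD score within the class.
            return idx[np.lexsort((-ood_score[idx], img_rank[idx]))]

        if q is None:                                        # plain DNA ordering
            return order_block(np.arange(len(ood_score)))

        n_top = int(np.ceil(q * len(ood_score)))             # DNA quantile ordering
        by_score = np.argsort(-ood_score)
        return np.concatenate([order_block(by_score[:n_top]), order_block(by_score[n_top:])])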
http://arxiv.org/abs/2406.18731v1
20240626195921
WavRx: a Disease-Agnostic, Generalizable, and Privacy-Preserving Speech Health Diagnostic Model
[ "Yi Zhu", "Tiago Falk" ]
eess.AS
[ "eess.AS", "cs.AI", "cs.CL" ]
WavRx: a Disease-Agnostic, Generalizable, and Privacy-Preserving Speech Health Diagnostic Model. Yi Zhu, Graduate Student Member, IEEE, and Tiago Falk, Senior Member, IEEE. Authors are with INRS. E-mail: Yi.Zhu@inrs.ca. July 1, 2024 § ABSTRACT Speech is known to carry health-related attributes, which has emerged as a novel avenue for remote and long-term health monitoring. However, existing models are usually tailored for a specific type of disease, and have been shown to lack generalizability across datasets. Furthermore, concerns have recently been raised about the leakage of speaker identity from health embeddings. To mitigate these limitations, we propose WavRx, a speech health diagnostics model that captures the respiration and articulation related dynamics from a universal speech representation. Our in-domain and cross-domain experiments on six pathological speech datasets demonstrate WavRx as a new state-of-the-art health diagnostic model. Furthermore, we show that the amount of speaker identity entailed in the WavRx health embeddings is significantly reduced without extra guidance during training. An in-depth analysis of the model was performed, thus providing a physiological interpretation of its improved generalizability and privacy-preserving ability. Index terms: health embeddings, speech, diagnostics, generalizability, privacy-preserving. § INTRODUCTION During recent years, speech has emerged as a promising modality for disease diagnosis and remote health monitoring. Speech health diagnostics is typically based on the assumption that diseases causing abnormalities in articulatory and/or respiratory systems would lead to an atypical pattern in the human voice signal <cit.>. Such abnormalities could be due to a variety of reasons, such as impaired neuromuscular control or an inflammation in the vocal tract and lungs <cit.>. While the impact on the speech signal may sometimes be imperceptible to humans, a machine learning (ML) model could be trained to detect certain disease-related vocal biomarkers. Over the years, there has been a substantial body of work that has explored the use of speech processing for diagnostics, including but not limited to COVID-19 <cit.>, dysarthria <cit.>, Parkinson's <cit.> and Alzheimer's disease <cit.>, as well as many other general respiratory symptoms <cit.>. Several challenges related to speech-based health diagnostics have also emerged, showing the impact of deep learning <cit.>. Despite these many published works, very few systems exist commercially or are used in real-world settings. There may be several reasons for this. First, existing system architectures are usually tailored for a single type of disease, i.e., are disease-dependent. While disease-related biomarkers can be well captured by the models, other health attributes are likely to be overlooked. For example, systems that focus on speech intelligibility may be useful at diagnosing dysarthria, but may fail at detecting COVID-19 infections. As recent innovations in self-supervised learning (SSL) are showing <cit.>, it is possible to learn universal representations that can be used across many different downstream tasks <cit.>.
The same is desirable for health diagnostics, where the same architecture can be applied to a variety of diseases (i.e., disease-agnostic) where only downstream fine-tuning is needed. Second, it is expected that a well-trained diagnostic model will generalize well across datasets that share the same or similar pathology. However, recent studies have reported severe degradation in performance across several state-of-the-art diagnostic systems when tested on unseen data from different patients with the same disease <cit.>. This has been attributed to different confounding factors (e.g., noise level, gender) generated unwarily during data collection <cit.>. These factors could lead to models, especially deep learning-based ones, to overfit to a certain database property (e.g., changes in sampling frequency for different disease labels <cit.>) and not necessarily to diagnostic information. The lack of generalizability makes the reliability of existing models questionable and further exacerbates the criticism around the lack of explainability and the “black-box” aspect of deep neural networks. Lastly, since voice carries personal identity attributes, such as gender, age, and race <cit.>, uploading the voice signal to an online platform for model training and evaluation is dangerous, especially considering the rapidly growing voice cloning techniques <cit.>. One method to alleviate this privacy concern is to extract a speech representation locally, then upload only the representation itself. However, studies have shown that health information is likely to entangle with speaker identity in most widely used speech representations (e.g., openSMILE features, ECAPA-TDNN embeddings, and universal representations) <cit.>, suggesting that existing health representations still suffer from speaker leakage. While some privacy-preserving methods have been proposed as an alternative, including adversarial training <cit.> and voice anonymization <cit.>, such methods may alter the speech signal, thus potentially removing disease-discriminatory details; this was recently shown to be the case for COVID-19 detection <cit.>. To tackle these three major limitations, in this paper we propose a new speech health diagnostic model that is disease-agnostic, generalizable across datasets, and privacy-preserving. The proposed model, termed WavRx, is built on top of the well-known WavLM model <cit.> and incorporates a novel modulation dynamics module, which mixes the high-resolution temporal WavLM representation with the long-term modulation dynamics of speech. While the WavLM representation can carry both linguistic and paralinguistic attributes <cit.> at a 50Hz rate, these attributes focus more on transient temporal changes. Articulation and respiration related abnormalities, on the other hand, may modulate these short-time features at a much lower rate. As such, the proposed modulation dynamics block is designed to capture long-term variability and to provide complementary information to the temporal details. Our main contributions in this paper can be summarized as follows: * We propose a new speech health diagnostics model, WavRx, that mixes the universal temporal representation with long-term modulation dynamics. WavRx is tested on six different pathological speech datasets, spanning four different speech pathologies, and achieves state-of-the-art (SOTA) performance on 4 out of 6, with the highest average performance among six benchmark models. 
* We show that the modulation dynamics block, while being parameter-free, can significantly improve the overall model generalizability across datasets and diseases that share similar symptoms. * We demonstrate that the modulation dynamics block helps to markedly remove the speaker attributes from the health embeddings learned by WavRx, without the need for extra guidance during training. * We find that the health embeddings learned from the dynamics representation are twice as sparse as those learned from the temporal representation, which helps to remove disease-irrelevant information. § RELATED WORK §.§ Speech-based diagnostic models Earlier works have focused on knowledge-based features to characterize the underlying speech pathology. Besides conventional speech features, such as mel-spectrograms or mel-frequency cepstral coefficients (MFCCs), studies have examined a wide variety of features associated with health status. The openSMILE ComParE set <cit.>, for example, has been used as a baseline across several challenges, such as the 2021 COVID-19 detection challenge <cit.>, the 2017 cold&snoring recognition challenge <cit.>, and the 2012 pathology sub-challenge that predicts speech intelligibility for individuals who received cervical cancer surgery <cit.>. Other studies have proposed features designed specifically for certain types of diseases, such as phonation and articulation features for Parkinson's disease <cit.>, linear prediction (LP) based features for COVID-19 <cit.>, and voice quality features for depression <cit.>, just to name a few. These hand-crafted features aim to capture certain aspects of the speech signal affected by the disease using signal processing techniques. The engineered features are then fed into classical ML classifiers, such as support vector machine or random forest classifiers. Major advantages of hand-crafted features are that they provide some explainability and interpretability (e.g., LP residuals represent vocal cord vibration patterns), are suitable for small datasets, which is typically the case in healthcare settings, and tend to generalize better across datasets <cit.>. More recently, models based on deep learning (DL) have started to burgeon <cit.>. These models typically take as input the speech waveform or some variant of the spectrogram (e.g., a mel-scaled spectrogram) and learn the underlying biomarkers via a data-driven approach. DL models are usually designed for one specific type of disease. For example, convolutional (CNN) and recurrent (RNN) networks have been used for COVID-19 detection using cough, speech, and breathing signals as input <cit.>. These models were trained from scratch using a limited amount of data, hence their power is yet to be fully explored. To address this issue, some studies have investigated transfer learning with large-size pre-trained models, such as the VGGish networks <cit.>, ECAPA-TDNN <cit.>, and audio transformers <cit.>. Studies have shown that pre-training on out-of-domain data (e.g., image datasets, speaker verification datasets, audio events) could also benefit speech diagnostics performance <cit.>. While pre-training is usually conducted in a supervised manner, there have been some initial attempts to leverage SSL pre-trained models for diagnostics <cit.>. The underlying assumption is that the universal speech representations resulting from models such as Wav2vec <cit.> carry a variety of speech information, including linguistic, paralinguistic, and diagnostic information <cit.>.
It has been shown that self-supervised pre-training is less biased by the upstream datasets than supervised pre-training <cit.>, thus making universal speech representations an excellent candidate for diagnostic tasks. However, while most existing works have taken different universal representations directly as the feature input to downstream diagnostic classifiers (e.g., <cit.>), we argue that existing universal representations are suboptimal for diagnostics tasks for two main reasons. First, SSL models, such as Wav2vec <cit.>, WavLM <cit.>, and HuBERT <cit.>, aggregate the input waveform into short segments by a convolutional layer before feeding into the transformer layers. In the case of WavLM, the receptive field of each unit in the CNN output is around 20ms, similar to the frame lengths used in the mel-spectrogram. While this is short enough to capture linguistic content (e.g., phonemes) as well as other temporal details (e.g., speaker details), longer-term dynamics, such as speaking rate, respiration, and emotions, may not be well encoded. This is corroborated by the improved performance achieved by appending different downstream layers to the temporal representation, such as 1D CNNs (<cit.>) and RNNs (<cit.>). Second, the existence of linguistic content in the temporal representation may bias the diagnostic models, as the disease biomarkers should be independent of spoken content. Given these limitations, we propose a new representation that also captures long-term dynamics, following the widely used concept of the speech modulation spectrum, but applied instead to a universal speech representation. §.§ Modulation dynamics of speech Speech is produced by the vibration of the vocal cords; this vibration is then transmitted through the vocal tract and modulated by articulatory movement and respiration, generating the speech audible to humans <cit.>. Typical speech analysis focuses on short-time analysis to capture transient changes caused by changes in phonemes. For example, the window size for the short-time Fourier transform (STFT) is usually 8 to 32ms <cit.>. Speech modulation, in turn, changes at a much lower rate due to the limits of human physiology. For articulatory movement, for instance, between 2 and 10 syllables are uttered per second in most languages <cit.>. However, such underlying modulation is not well captured by a spectrogram, and measures such as delta and double-delta cepstral parameters have been used for decades as measures of velocity and acceleration of changes in the cepstral parameters over somewhat larger window durations. To address this issue, several researchers have relied on the so-called modulation spectrum (e.g., <cit.>), which applies a second STFT to each frequency component obtained from the spectrogram. This extends the conventional spectrogram to a 3-dimensional space with an added modulation frequency axis. With a window size of over 128ms, the modulation spectrum analyzes the hidden periodicity of human speech. While most of the linguistic content is lost in the modulation frequency domain, other vocal characteristics such as speaking rate <cit.>, vocal hoarseness <cit.>, and whispering <cit.> may be better manifested. Features derived from such a representation have been previously applied in the detection of dysarthric speech <cit.>, whispered speech <cit.>, voice pathologies <cit.>, COVID-19 <cit.>, and emotional speech <cit.>, to name a few.
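To make the two-stage construction concrete, a minimal sketch of a modulation spectrum computation (NumPy/SciPy; the window lengths are illustrative defaults, not the settings of any particular cited work):

    import numpy as np
    from scipy.signal import stft

    def modulation_spectrum(x, fs, win_acoustic=0.032, win_mod=0.256):
        # First STFT: a conventional spectrogram with a short (~32 ms) analysis window.
        f_ac, t_ac, spec = stft(x, fs=fs, nperseg=int(win_acoustic * fs))
        mag = np.abs(spec)                         # magnitude spectrogram [acoustic freq x frames]

        # Second STFT: applied along time to every acoustic frequency bin,
        # using a much longer (>=128 ms) window to expose slow modulations.
        frame_rate = 1.0 / (t_ac[1] - t_ac[0])     # spectrogram frames per second
        f_mod, t_mod, mod = stft(mag, fs=frame_rate, nperseg=int(win_mod * frame_rate), axis=-1)
        return f_ac, f_mod, np.abs(mod)            # [acoustic freq x modulation freq x time]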
Motivated by the idea of the modulation spectrum, we here apply the modulation transformation to universal representations to better capture their health-related attributes. § PROPOSED MODEL ARCHITECTURE The proposed WavRx comprises three main components: (1) a pre-trained encoder to extract temporal representations from the raw waveform; (2) the modulation dynamics block to capture long-term dynamics of the encoded temporal representations; and (3) attentive statistic pooling and output layers to fuse representations from the previous two blocks and generate a final decision. Details about each component are described in the following subsections. The model architecture is depicted in Figure <ref>. Considering the privacy requirements of real-world applications, the WavRx encoder can be deployed locally to extract health embeddings, which are then uploaded to a central cloud server for decision-making. In later sections, we show that the health embeddings contain minimal speaker identity information, thus preventing the leakage of user identity. Our code is made publicly available on GitHub[<https://github.com/zhu00121/WavRx>]. Owing to the data sharing terms of the employed datasets, pre-trained model backbones are released upon request. §.§ Temporal representation encoder The proposed model builds on top of the pre-trained WavLM as the temporal representation encoder <cit.>. WavLM takes a raw speech waveform as input and first feeds it into a CNN block comprising 7 temporal CNN layers with 512 channels, followed by layer normalization and a GELU activation layer. Each time step in the output of the CNN block represents 25ms of audio with a 20ms hop length. The CNN output is then sent into a transformer backbone, which comprises 13 layers with 768-dimensional hidden states. We employed the WavLM Base+ version[HuggingFace link: https://huggingface.co/microsoft/wavlm-base-plus. Accessed May 23rd, 2024.] which was pre-trained on 60K hours of Libri-light <cit.>, 10K hours of Gigaspeech <cit.>, and 24K hours of VoxPopuli <cit.>. Previous studies have shown that later transformer layers in WavLM carry more linguistic content, while early layers are likely to encode paralinguistic information <cit.>. For diagnostics, it remains unclear which layers are more crucial. Hence, we aggregated the outputs from all 12 layers (with the first, input layer excluded) by assigning a weight to each of them. These weights were learned through supervised training on the downstream tasks. The layer-weighted output of the WavLM encoder is a time-by-feature representation {𝐓×𝐅}, which can be seen as a temporal representation showing how each feature changes over time. Given the temporal pooling configuration of the CNN layers, the resultant temporal representation has a temporal resolution of 50Hz. However, speech production is modulated at a lower rate, and the temporal representation may carry redundant linguistic information that is less essential for disease diagnosis. Thus, we propose the modulation dynamics block to provide complementary information that is missing from the temporal representation. §.§ Modulation dynamics block A visual demonstration of the modulation dynamics block is provided in Fig. <ref>.
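To make these two building blocks concrete, the sketch below shows one possible implementation of the layer-weighted temporal encoder and of the per-channel STFT that is formalized in the equation that follows; the softmax parameterization of the layer weights and the window/hop sizes (given in feature frames) are assumptions rather than the exact WavRx settings.

import torch
import torch.nn as nn
from transformers import WavLMModel

class TemporalEncoder(nn.Module):
    # Hedged sketch of the layer-weighted WavLM encoder described above.
    def __init__(self, ckpt="microsoft/wavlm-base-plus"):
        super().__init__()
        self.wavlm = WavLMModel.from_pretrained(ckpt, output_hidden_states=True)
        self.layer_weights = nn.Parameter(torch.zeros(self.wavlm.config.num_hidden_layers))

    def forward(self, wav):                        # wav: (batch, samples) at 16 kHz
        hidden = self.wavlm(wav).hidden_states     # 13 tensors: input + 12 transformer layers
        hidden = torch.stack(hidden[1:], dim=0)    # drop the first (input) layer
        w = torch.softmax(self.layer_weights, dim=0)
        return (w[:, None, None, None] * hidden).sum(dim=0)   # (batch, time, 768), ~50 Hz

def modulation_dynamics(T, n_fft=16, hop=4):
    # Per-feature-channel STFT of the temporal representation; window/hop are
    # illustrative values in feature frames, not the paper's exact settings.
    B, m, n = T.shape
    x = T.permute(0, 2, 1).reshape(B * n, m)       # one time series per feature channel
    D = torch.stft(x, n_fft=n_fft, hop_length=hop,
                   window=torch.hann_window(n_fft, device=T.device),
                   return_complex=True)
    D = D.abs() ** 2                               # power of the modulation spectrum
    return D.reshape(B, n, D.shape[-2], D.shape[-1])   # (batch, feature, mod_freq, time)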
Given an output 𝐓(𝐦,𝐧) from WavLM (i.e., the weighted sum of the twelve transformer layer outputs), where 𝐦 represents the number of time windows and 𝐧 represents the number of features, we applied a short-time Fourier transform (STFT) to each feature channel, leading to a 3-dimensional modulation dynamics representation 𝐃_𝐧(𝐣,𝐟_𝐣): 𝐃_𝐧(𝐣,𝐟_𝐣) = |𝒮𝒯ℱ𝒯(𝐓(𝐦,𝐧))|^2, where 𝐣 refers to the number of time frames used for the STFT and 𝐟_𝐣 to the number of modulation frequency channels. The result of the STFT is complex-valued, containing both real and imaginary parts; here, we take its magnitude (denoted by |·|) and square it to obtain the power. For phoneme-level speech applications (e.g., speech recognition), the STFT usually relies on short time windows (e.g., 16-32ms) <cit.>, providing a temporal resolution high enough to discriminate transitory events. Articulatory movement and respiration, on the other hand, are relatively steady and change at a much lower rate than the vibration of the vocal cords. Therefore, we extended the window length to ≥128ms with a hop length ≥32ms to capture the dynamics over a wider range. To achieve optimal performance, we experimented with window lengths from 128ms to 1s (with a 25% hop length) and found the best to be around 256ms. The effects of window sizes are detailed in Section V.C. The resultant dynamics representation has three axes, namely feature, time, and modulation frequency, where each slice along the time axis carries the decomposed modulation frequency values for all features. §.§ Downstream components Similar to speaker embeddings, we assume that the health embeddings correspond to utterance-level characteristics. Thus, both the temporal and dynamics representations require a temporal pooling operation to obtain time-invariant embeddings. We compared average pooling and attentive statistic pooling (ASP) and found the latter to be better suited to the task at hand. The original ASP integrates frame-level attention when calculating the mean and standard deviation as follows: μ = ∑_t^Tα_th_t, σ = √(∑_t^Tα_th_t⊙ h_t-μ⊙μ), where α_t represents the weight assigned to the t-th time frame. The ASP can be used directly on temporal features to flatten them into a 1-dimensional vector. With the modulation dynamics, we first computed the average along the time axis, which leads to the shape {𝐅𝐫𝐞𝐪×𝐅}, where 𝐅𝐫𝐞𝐪 stands for frequency and 𝐅 for features. We then applied attention to the different frequency channels and calculated the attentive mean and standard deviation. The temporal and dynamics vectors were first concatenated and then fed into a fully-connected (FC) layer mapping them to a 768-dimensional vector, which was used as the health embedding. A dropout layer and a LeakyReLU activation with a negative slope of 0.1 were appended afterwards. The second FC layer maps the health embedding to a single value as the final decision. Additionally, we applied pruning on top of the last FC layer, where the percentage of neurons to be pruned was set as a hyperparameter. § EXPERIMENTAL SETUP §.§ Dataset To diversify the types of speech pathologies to be tested, we used six publicly available datasets covering four different speech-related abnormalities. Since they all differ in data collection procedures and some were originally designed for other purposes (e.g., ASR), we here outline, for each dataset, the data collection procedure, data composition and demographics, and the data partitions used for our downstream ML tasks.
§.§.§ Respiratory Symptoms Datasets Respiratory symptoms refer to the symptoms induced by infections in the respiratory system, such as coughs, fever, sore throat, etc. <cit.>. The appearance of respiratory symptoms is commonly seen with asthma, obstructive pulmonary disease (COPD), and pneumonia, just to name a few. At the time of writing, the largest publicly available speech database with various respiratory symptoms is the COVID-19 Sounds <cit.>. It contains a total of 552hours of audio data recorded remotely from 36,116 individuals around the globe via an app interface. During data collection, volunteers were prompted to conduct three tasks: (1) scripted speech, where all participants uttered the same sentence – `I hope my data can help to manage the virus pandemic' – three times in their mother tongue; (2) voluntary cough for three times; and (3) deep breathing through the mouth for three to five minutes. In addition, they also self-reported their COVID-status along with certain metadata information (e.g., gender, age, pre-existing medication condition, respiratory symptoms). In our study, we used only the speech signals and the metadata. It should also be emphasized that not all participants had conducted a PCR test before recording, hence the COVID-status was in the form of a subjective evaluation (e.g., `I think never had COVID-19') rather than a binary label (i.e., positive vs negative). Such ambiguity in COVID-19 labels motivated us to use this database for respiratory abnormality detection instead of COVID-19 prediction. Although the COVID-19 sounds database is advantageous in its size, it may not be the optimal version to train a diagnostics model considering that multiple factors were not controlled, such as language, sampling rate, or acoustic environment. Hence, we set up two subsets from the original database by screening out several potential confounding factors. The first subset was released along with the original database, which was used as the benchmark data for the respiratory symptom prediction task in the COVID-19 Sounds paper <cit.>. This subset is henceforth referred to as CS-Res. CS-Res contains English samples from 6,623 individuals with respiratory symptoms (e.g., sore throat, cough, etc.), resulting in a total of 31.3h speech data. The sampling rates varied upon different devices used, with the majority sampled at 44.1kHz (67.4%) and 16kHz (29.8%). CS-Res was carefully curated so that the recording quality and class balance were controlled. The second subset is similar to CS-Res (in that only English samples are used) but without controlling for the other factors. This subset is referred to as CS-Res-L, with a total of 123.1h of speech, of which 57.1% were sampled at 16kHz and 40.4% at 44.1kHz and the rest (2.5%) were sampled at 8kHz and 12kHz . For both subsets, participants were labelled into two classes, namely the positive ones who reported at least one respiratory symptom, and the negative ones reporting no symptoms at all. With CS-Res, we followed the official partitions as described in <cit.>. With CS-Res-L, a customized speaker-independent split was performed with a ratio of 7:1:2 (train:validation:test). Meanwhile, we ensured that the distribution of symptom labels, gender, and age were similar in all three splits. §.§.§ DiCOVA2 Dataset This dataset contains speech data used in the Second Diagnosing COVID-19 using Acoustics challenge organized in India <cit.>. 
DiCOVA2 collected multi-modal acoustic data (i.e., speech, cough, and breathing) remotely from a total of 965 participants via Android and Web apps. Participants were advised to keep the device 10cm from their mouth during recording. For the speech track, participants did number counting from 1 to 20 in a normal pace in English. The recordings were sampled at 48kHz. Furthermore, participants self-reported their metadata, such as gender, experienced symptoms, and COVID-19 status which was grouped into binary labels (either positive or negative). Since the test labels were not made accessible to the public, we used the validation data as the new test set, and partitioned the original training data into the new training set and validation set (8:2). §.§.§ TORGO Dataset This dataset consists of speech recordings and synchronized 3D articulatory features collected from healthy controls and speakers with either cerebral palsy (CP) or amyotrophic lateral sclerosis (ALS), the two most prevalent causes of dysarthria <cit.>. TORGO was originally designed to develop ASR models for dysarthric individuals. The publicly available version of TORGO includes 8 individuals with dysarthria and 7 healthy controls. During data collection, all subjects were asked to read English text from the screen. The speech data were recorded from two microphones, one facing the participant at a distance of 61cm with a sampling rate of 22.1kHz while the other is head-mounted with a sampling rate of 44.1kHz. Only the data from the front-facing microphone were employed herein. All subjects conducted four different reading tasks: (1) non-words (e.g., high- and low-pitch vowels); (2) short words (e.g. `yes', `no', `back', etc.); (3) restricted sentence (e.g., “The quick brown fox jumps over the lazy dog”); (4) unrestricted sentence (e.g., spontaneously describe 30 images from the Webber Photo Cards). We included data from all four tasks in our analysis. As there were no official data partitions, we followed the speaker-independent principle to split all 15 subjects into three sets[`F' and `M' stand for female and male; `C' stands for healthy controls.]: (1) training set (`FC02',`F03',`F01',`MC04',`MC03',`M02'); (2) validation set (`MC02',`FC01',`M03',`M01'); and (3) test set (`FC03',`F04',`MC01',`M05',`M04'). The average dysarthria severity was made similar for all three sets. §.§.§ Nemours Dataset This is a collection of speech recordings from 12 males, 11 with different levels of dysarthria and 1 healthy control <cit.>. Each participant was asked to record 74 nonsense sentences of the form “The X is Ying the Z.” (X≠ Z). Sentences were generated by randomly selecting X and Z without replacement from a set of 74 monosyllabic nouns and selecting Y without replacement from a set of 37 disyllabic verbs. All recordings were collected in a small sound dampened room with one table-mounted microphone, and digitized subsequently using a 16kHz sampling rate. Apart from recording sessions, Nemours also included a perception session where 5 listeners tried to identify the words of the nonsense sentences. The average number of correct identifications was calculated per speaker and the Frenchay speaker assessment scores were reported, which reflects the severity of dysarthria. The average assessment score of the dysarthric speakers is 74.68 with a standard deviation of 14.54. 
We labelled all speakers into two classes, namely the relatively severe individuals with scores lower than 74.68 (6 dysarthria speakers), and the mild ones with scores higher than 74.68 (5 dysarthria speakers plus 1 healthy control). §.§.§ NCSC Dataset This refers to the “NKI CCRT Speech Corpus” <cit.>. NCSC contains speech recordings and perceptual evaluations of 55 speakers (10 female and 45 male), who underwent concomitant chemo-radiation treatment (CCRT) for cancer of the head and neck region. Recordings and evaluations were made at three moments: (1) before CCRT; (2) 10-weeks after CCRT; and (3) 12-months after CCRT. All subjects read a 189-word passage from a Dutch fairy tale in a sound-treated room. Speech data were collected using a microphone with a 44.1kHz sampling rate at a distance of 30cm from mouth. 13 speech pathologists rated the intelligibility of these speech recordings on a scale of 1 to 7. We employed the NCSC data released by the INTERSPEECH 2012 Pathology Sub-Challenge <cit.>, where all recordings were labelled either as `intelligible' or `non-intelligible', and were split into three independent sets for model training and evaluation. However, since the test labels were not accessible to the public, we used the validation set as the new test set and split the original training set into the new training and validation set with a ratio of 8:2. An overview of the data set-up can be found in Table <ref>. For reproducibility, we also report if the data split was official or customized, and if a baseline model was released together with the dataset. We further release all data partition details in our code repository for future comparisons. Note that issues were seen with a few samples during our exploratory analysis, such as an empty recording or failures during loading. The file names of these recordings can be found here[TORGO/FC01/Session1/wav_arrayMic/0256.wav is an empty recording; Failed to load Nemours/RL/WAV/JPRL39.WAV with torchaduio.]. These error files were discarded in our experiments. §.§ Benchmark models As mentioned in Section <ref>, some challenge datasets were released with a baseline model, namely mel-spectrogram+VGG16 for CS-Res <cit.>, mel-spectrogram deltas+BiLSTM for DiCOVA2 <cit.>, and openSMILE+RandomForest for NCSC <cit.>. Though performing well on one dataset, studies have shown that these models lack generalizability across datasets, even within the same type of disease <cit.>. For simplicity, we group the best performance reported by these baseline models in one row (bottom row in Table <ref>). Recent work has reported better performance achieved with larger speech models, such as TDNN and transformer-based ones <cit.>. In our study, we compared WavRx to five state-of-the-art speech classification baselines, namely two that leverage SSL encoders, including Wav2vec <cit.> and Hubert <cit.>, two different versions of AST pre-trained with speech and audio data respectively <cit.> (denoted as ASTspeech and ASTaudio), and ECAPA-TDNN <cit.>. Modifications were made to these baseline models for compatibility with our tasks. The same ASP layer and classification head implemented in WavRx were appended to Wav2vec and Hubert encoders, and a single FC layer was applied to ECAPA-TDNN embeddings to map these pre-trained representations to a binary output. ASTspeech and ASTaudio were already compatible with our tasks, hence no modifications were made. 
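For reference, the classification head shared between WavRx and these modified baselines (cf. Section III) can be sketched as follows; the dropout rate and the use of unstructured L1 pruning are assumptions rather than the exact WavRx settings.

import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

class DiagnosticHead(nn.Module):
    # Hedged sketch: FC layer to a 768-d health embedding, dropout, LeakyReLU(0.1),
    # and a final FC layer producing one logit.
    def __init__(self, in_dim, emb_dim=768, p_drop=0.5, prune_amount=0.0):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, emb_dim)
        self.drop = nn.Dropout(p_drop)
        self.act = nn.LeakyReLU(0.1)
        self.fc2 = nn.Linear(emb_dim, 1)
        if prune_amount > 0:
            prune.l1_unstructured(self.fc2, name="weight", amount=prune_amount)

    def forward(self, pooled):
        emb = self.fc1(pooled)                       # utterance-level health embedding
        logit = self.fc2(self.act(self.drop(emb)))   # single-value decision
        return logit.squeeze(-1), emb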
Three versions of WavRx were compared, namely the original version fusing temporal and dynamics information, and two simplified versions removing either one of the two branches. Details about the baseline models can be found in their corresponding references. §.§ Tasks To test the cross-disease, cross-dataset, and privacy-preserving properties of the proposed method, three tasks are proposed. A fourth task is also included to enhance interpretability. These four tasks are: §.§.§ Task 1 – In-domain diagnostic This task aims to compare the proposed WavRx to the other baseline models in an in-domain setting. Models were trained and evaluated within each of the six datasets. An ablation study is also conducted to demonstrate the effects of the different model components of WavRx. §.§.§ Task 2 – Zero-shot diagnostic This task investigates model generalizability in a stringent setting, where models were trained on one dataset and made predictions on unseen datasets. During inference, both the health embedding encoder and the classification head were fixed. This task emulates a scenario where no training data is available from the target domain (e.g., an unseen disease). §.§.§ Task 3 – Privacy of health embeddings This task examines whether speaker identity is concealed in the WavRx health embeddings by running an automatic speaker verification (ASV) task on top of them. Since ASV requires multiple recordings from each individual, TORGO (15 speakers) and Nemours (10 speakers) were selected for this task. For each individual, 10% of the speech samples were used for training and the remaining 90% for testing. We first extracted the health embeddings using the pre-trained WavRx from Task 1, then applied LDA as the speaker classifier. The WavLM model fine-tuned on Voxceleb 1&2 <cit.> was used as the baseline speaker embedding encoder for comparison purposes. §.§.§ Task 4 – Analysis/interpretability of the modulation dynamics block The previous tasks quantify the changes in diagnostic performance, generalizability, and speaker privacy when integrating the modulation dynamics block. This task aims to explore the reasons behind these changes by analyzing the characteristics of the modulation dynamics and how they shape the information learned by the upstream WavLM encoder. §.§ Training and evaluation details For training efficiency, we limited all input recordings to 10s by truncating the part beyond this length. For recordings with left and right channels, we took the average to obtain single-channel audio. All recordings were re-sampled to 16kHz and the amplitude was normalized between -1 and 1. Since the STFT operation in the modulation block requires a minimum signal length of 1s, shorter recordings were zero-padded to 1s. The aforementioned pre-processing was performed using the Torchaudio library <cit.>. Regarding data augmentation, we injected two types of environmental corruptions into each training batch, namely noise and reverberation, and concatenated the augmented samples with the original samples. Furthermore, we added speed perturbations by slightly speeding up (105%) and slowing down (95%) the signal. These approaches were implemented via the SpeechBrain toolkit <cit.>. We used the same hyperparameters for training WavRx on all six datasets, changing only the data augmentation and pruning parameters. These hyperparameters are reported in Table <ref>. Data augmentation was only used when training on DiCOVA2 and TORGO; the optimal pruning percentage was set to 90% for DiCOVA2 and NCSC, and 0% for the others.
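A minimal sketch of this pre-processing pipeline is given below; the function and parameter names are illustrative, and the exact implementation in WavRx may differ.

import torch
import torchaudio

def preprocess(path, target_sr=16000, max_len_s=10.0, min_len_s=1.0):
    # Load, downmix to mono, resample to 16 kHz, normalize to [-1, 1],
    # truncate to 10 s, and zero-pad clips shorter than 1 s.
    wav, sr = torchaudio.load(path)
    wav = wav.mean(dim=0, keepdim=True)
    if sr != target_sr:
        wav = torchaudio.functional.resample(wav, sr, target_sr)
    wav = wav / (wav.abs().max() + 1e-9)
    wav = wav[:, : int(max_len_s * target_sr)]
    min_len = int(min_len_s * target_sr)
    if wav.shape[1] < min_len:
        wav = torch.nn.functional.pad(wav, (0, min_len - wav.shape[1]))
    return wav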
With the baseline models, we employed the same data augmentation methods used to train WavRx, and tuned the hyperparameters separately for each of them. Diagnostic performance is measured by two metrics, namely the area under the receiver operating characteristic curve (AUC-ROC) and the F1 score. The former has been widely used in disease detection tasks as a baseline metric <cit.>. However, AUC-ROC has been shown to be over-optimistic when evaluating on extremely imbalanced datasets <cit.>. F1, on the other hand, is more robust in an imbalanced setting. With both metrics, we calculated the score for each class and took the unweighted mean (i.e., the macro average). This is because positive samples (i.e., symptomatic) are usually the minority class, yet missing a positive sample is more costly than missing a negative one. Hence, the macro average is more suitable than the weighted average. Furthermore, we found that a model could perform decently on the test set but poorly on the validation set (or vice versa). As such, we report F1 scores achieved on both the test and validation sets, where the difference between the two can indicate model robustness. Experiments were conducted on the Compute Canada platform <cit.> with four NVIDIA V100-SXM2 GPUs (32GB RAM per GPU). The training time with WavRx was approximately 3-4 hours for CS-Res and CS-Res-L, and less than 2 hours for the other datasets (excluding job waiting time). The shell scripts are also provided in our code repository for simpler replication. § RESULTS AND DISCUSSION §.§ Task 1: In-domain diagnostic performance In-domain evaluation usually indicates the highest performance that can be achieved by each model in an ideal setting, where training and evaluation data share the same distribution. As shown in Table <ref>, the proposed WavRx obtains the highest test F1 scores on 4 out of 6 datasets, along with the highest average F1 score of 0.744 (combining test and validation) among all models. On the three datasets that were released with official baseline systems (i.e., CS-Res, DiCOVA2, and NCSC), WavRx markedly outperforms the baselines. When using only the modulation dynamics branch for detection, while its overall performance is not as competitive as the other benchmarks, it is the top performer on the Nemours dysarthria detection task. Together, these results suggest that the dynamics of universal representations are crucial for disease detection. When comparing different model categories, SSL models (i.e., WavRx, Wav2vec, Hubert) in general outperform those pre-trained in a supervised manner (i.e., AST, ECAPA-TDNN), even though neither category included pathological speech during pre-training. This again demonstrates the benefits of SSL pre-training when evaluated on a variety of downstream tasks. Interestingly, as the only backbone not pre-trained on speech data, ASTaudio outperforms its speech counterpart. Since existing speech foundation models are usually trained with only speech data, potential improvements might be achieved by adding audio data, such as music and other non-speech acoustic events, to the pre-training stage. §.§ Task 1: Ablation study Next, we carefully examined the improvements brought by the different components of WavRx. Figure <ref> shows the improvement in F1 scores averaged across the six datasets with different design choices. The largest improvement was seen when all layer outputs were used instead of relying only on the last layer.
This corroborates existing SSL model layer analyses, which suggest that later layers likely encode speech semantics while a higher percentage of paralinguistic attributes (e.g., speaker identity, emotion, prosody) are encoded in the early and middle layers <cit.>. For health diagnostics, the importance of utterance-level characteristics is expected to outweigh frame-level details, since the respiration and articulation patterns do not change from frame to frame. Hence, relying on only the last layer output is suboptimal for health diagnostic tasks. We also explored different WavLM backbone versions, with some further fine-tuned for other tasks, such as ASR and ASV. However, no major difference was seen compared to the raw backbone. The dropout rate is also shown to be important, which helps with model generalization. Since data augmentation is not a major focus of this study, we explored only adding noise and reverberation to the waveforms, which led to minor improvements. Lastly, once other components were optimized, addition of the modulation dynamics branch markedly boosted overall performance, demonstrating its complementarity to the temporal SSL representation. §.§ Task 2: Zero-shot diagnostic performance When applied in real-world settings, the amount of data collected from one disease is usually quite limited, as can be seen from the size of the existing pathological speech datasets <cit.>. Hence, it can be beneficial when a diagnostic model can generalize to unseen diseases with similar symptoms or pathological origins. As the top-performers in Task 1, we systematically tested WavRx as well as its two individual branches in a cross-dataset setting. Table <ref> reports the AUC-ROC scores achieved for the model trained on one disease and tested across unseen diseases, as well as the average over all unseen diseases. When comparing different test diseases, respiratory abnormality is shown as the pathology that is distinct from the others, which can be seen from the lowest AUC-ROC score (bottom row in each sub-table). The two dysarthric speech datasets, on the other hand, can lead to decent generalization to each other, although the speech content and data collection protocols differ. Models trained with dysarthric speech can also benefit the detection of COVID-19, as well as chemo-treated speech, which indicates that neuromuscular deficiency can be a shared characteristic among these three pathologies. When comparing the three sub-tables, significant improvements can be seen for all five pathologies when combining modulation dynamics with temporal embeddings. Together with Task 1 results, findings here suggest that integrating modulation dynamics of universal representations can help capture the disease-related biomarkers and improve the model generalizability to diseases sharing similar pathological origins. §.§ Task 3: Do WavRx health embeddings carry speaker identities? Given the system shown in Fig. <ref>, the health embeddings encoded by the local model are expected to carry minimal speaker identity attributes while maximally representing the health information. In this task, we investigate if the health embeddings encoded by the temporal representation alone carry speaker identities, and if the modulation dynamics block can help tackle this issue. The speaker verification accuracies and diagnostic AUC-ROC scores are shown side-by-side in Table <ref>. Ideally, privacy-preserving health embeddings should have a low ASV accuracy and a high diagnostic AUC-ROC score. 
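For concreteness, the speaker-probing protocol described in Section IV-C can be sketched as follows; the function name and the closed-set identification setup are simplifying assumptions.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def speaker_probe(embeddings, speaker_ids, train_frac=0.1, seed=0):
    # Train an LDA speaker classifier on 10% of each speaker's frozen health
    # embeddings and report identification accuracy on the remaining 90%.
    rng = np.random.default_rng(seed)
    train_idx, test_idx = [], []
    for spk in np.unique(speaker_ids):
        idx = rng.permutation(np.where(speaker_ids == spk)[0])
        k = max(1, int(round(train_frac * len(idx))))
        train_idx.extend(idx[:k])
        test_idx.extend(idx[k:])
    clf = LinearDiscriminantAnalysis()
    clf.fit(embeddings[train_idx], speaker_ids[train_idx])
    return clf.score(embeddings[test_idx], speaker_ids[test_idx])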
As can be seen from rows 4 and 7, when relying only on the temporal representation, the learned health embeddings carry a higher amount of speaker identity information than the baseline speaker embeddings. This is likely because pathological speech follows a different feature distribution than the healthy speech in Voxceleb, leading to suboptimal performance of the pre-trained speaker embeddings. The health embeddings obtained from the temporal representation may encode both speaker identity and health attributes, therefore resulting in high ASV accuracies. The modulation dynamics representation, on the other hand, decreases the ASV accuracies by an average rate of 31.9% and 13.5% for TORGO and Nemours, respectively (rows 3 and 6). When fusing the two branches together, the resultant health embeddings lead to the best diagnostic performance, while maintaining the leakage of speaker identity at a lower level than the baseline speaker embeddings (rows 2 and 5). We further visualize the health embeddings learned from the temporal and dynamics representations in Fig. <ref>. Colors represent different speakers and marker types represent disease states. While the positive and negative classes are well separated in both plots, a clearer distinction between speaker clusters can be seen in the left plot (temporal) than in the right one (modulation dynamics), suggesting that speaker identities are better concealed by the proposed modulation dynamics representation. §.§ Task 4: Modulation dynamics analysis and interpretability While the modulation dynamics branch is shown to improve diagnostic performance and generalizability, it is crucial to investigate the characteristics of this representation to understand the reasons behind the improvements. To this end, we start by extracting the modulation dynamics representations from both the positive and negative classes, and then compute Fisher's F-ratio <cit.> between the two groups. Since the representation is 2-dimensional (feature by modulation frequency), the F-ratio is calculated per pixel, where higher values indicate stronger discrimination between the two classes. We further filtered out F-ratio values below 1 since those regions were statistically insignificant. This process was repeated for all six datasets. The F-ratio plots for all tasks can be seen in Fig. <ref>. With the given hop length of the STFT (64), the maximal modulation frequency is 8.3 Hz with a resolution of 0.125 Hz. For all six datasets, the majority of the difference is observed below 2 Hz, with peaks seen between 0.1 and 0.5 Hz, corresponding to a modulation period on the order of 2 to 5 s. Such a slow rate of modulation aligns with our initial hypothesis that long-term dynamics of universal representations are crucial for disease detection. While the physiological origin of such modulations still needs to be investigated, it is likely associated with slower respiratory and articulatory movements. For example, the automatic contraction of respiratory muscles has been shown to take place once every five seconds during dialogues <cit.>; an average of 15-25 breathing cycles per minute (equivalent to 2.4 to 4 s per cycle) has been reported for adults and the elderly <cit.>. Another important phenomenon noticed is the sparsity of the F-ratio plots, where only very few features among a total of 768 are shown to be statistically significant.
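For reference, the per-pixel F-ratio map used in this analysis can be computed as in the following sketch; the array shapes and the exact F-ratio variant are assumptions.

import numpy as np

def f_ratio_map(pos, neg, threshold=1.0):
    # Fisher's F-ratio between the positive and negative groups, computed per
    # (feature, modulation-frequency) pixel of the time-averaged dynamics.
    # pos, neg: arrays of shape (n_samples, n_features, n_mod_freq).
    mu_p, mu_n = pos.mean(axis=0), neg.mean(axis=0)
    var_p, var_n = pos.var(axis=0), neg.var(axis=0)
    f = (mu_p - mu_n) ** 2 / (var_p + var_n + 1e-12)
    return np.where(f >= threshold, f, 0.0)   # discard statistically weak regions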
Based on this observation of sparsity, we further calculated the sparsity of the 768-dimensional health embeddings learned from the dynamics representation and compared it with that of the embeddings learned from the temporal representation. Values below 1% of the per-sample maximum were thresholded to zero when computing sparsity. The final results are reported in Table <ref>. As can be seen, the health embeddings learned from the temporal representation have an average sparsity of 35.8% across the six datasets with a standard deviation of 9.1% across samples, while the average sparsity doubles to 76.7% with only 0.8% standard deviation for those learned from the dynamics representation. Fusing the two together leads to an average sparsity of 64.1%. These findings demonstrate that disease-related information is encoded more efficiently by the modulation dynamics, where roughly only half of the features are required for accurately detecting a disease. This not only provides insights into the reasons behind the improved generalizability across diseases, but also helps explain the improved privacy-preserving property of the proposed WavRx model. When learning the health embeddings from the fused representations, health-irrelevant information was likely discarded, which may include speaker attributes such as gender and age. §.§ Task 4: Layer analysis Similar to a group of studies that performed layer analysis on SSL models for speech applications <cit.>, we investigated the impact of the modulation dynamics block on the learned layer weights. Figure <ref> compares the layer weights learned with and without the modulation dynamics block. As seen, when using only the temporal representation, early layers (0 to 5) are shown to be more crucial, and similar patterns have been reported for speaker and emotion recognition tasks <cit.>. After adding the modulation dynamics, the weights shift from early layers to middle layers, with peaks typically seen between layers 6 and 8. Meanwhile, later layers (layers 8 to 11) are also assigned higher weights. Recent works have suggested that very early layers encode speaker identities <cit.>, while middle layers were found most useful for predicting articulation traces <cit.>. Combined with our findings, it is likely that the modulation dynamics guided the model to focus on articulation-related attributes rather than speaker identities, which led to this shift in layer weights and to the privacy-preserving property observed. §.§ Limitations and Future Study One potential limitation of our evaluation is the existence of confounding factors in the employed datasets, which have been reported previously <cit.>. Though we have carefully partitioned the datasets so that groups with reported metadata labels are balanced, there might be other underlying factors, such as the noise level, which could lead to over-optimistic results. Furthermore, while the proposed WavRx obtained SOTA performance on the majority of the tasks, its robustness to unseen conditions (e.g., in-the-wild speech) can be further improved. This can be seen from the lowest in-domain diagnostic performance being achieved with COVID-19 detection, where speech samples were collected in an uncontrolled setting. Meanwhile, recent works have shown the potential of using speech for mental disease detection <cit.>. While our study did not include such datasets for evaluation, the mechanism of our proposed model could enable its usage for other pathological conditions.
§ CONCLUSION In this study, we described WavRx, a novel speech-based health diagnostic model that integrates modulation dynamics with a universal speech representation. The proposed model achieves SOTA performance on five out of six pathological speech datasets and demonstrates zero-shot generalizability across diseases sharing similar symptoms. Furthermore, the leakage of speaker identities is significantly decreased after adding the proposed modulation dynamics block, providing the model with the privacy-preserving properties needed in healthcare. An in-depth analysis of the modulation dynamics representation shows that low-frequency modulations below 2 Hz are crucial to discriminate pathological samples. Sparsity and layer analyses further explain the reasons behind the improvements seen in generalizability and privacy preservation. Overall, WavRx generalizes across diseases while minimizing the leakage of speaker identities, and can hence serve as a new benchmark model for speech-based health diagnostic tasks. § DISCLAIMER AND ACKNOWLEDGEMENT The authors would like to acknowledge the organizations and research groups that made their datasets available to the public. The data holders do not bear any responsibility for the analysis and results presented in this paper. All results and interpretations represent only the views of the authors. The authors also acknowledge funding from INRS, NSERC, and CIHR. IEEEtran
http://arxiv.org/abs/2406.18691v1
20240626185253
Geometric Features Enhanced Human-Object Interaction Detection
[ "Manli Zhu", "Edmond S. L. Ho", "Shuang Chen", "Longzhi Yang", "Hubert P. H. Shum" ]
cs.CV
[ "cs.CV" ]
IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT Geometric Features Enhanced Human-Object Interaction Detection Manli Zhu^0000-0002-8231-5342, Edmond S. L. Ho^0000-0001-5862-106X, Shuang Chen^0000-0002-6879-7285, Longzhi Yang^0000-0003-2115-4909, Senior Member, IEEE, Hubert P. H. Shum^0000-0001-5651-6039†, Senior Member, IEEE M. Zhu and L. Yang are with the Department of Computer and Information Sciences, Northumbria University, Newcastle upon Tyne, UK. Emails: {manli.zhu, longzhi.yang}@northumbria.ac.uk E. S. L. Ho is with the School of Computing Science, University of Glasgow, Glasgow, UK. Email: shu-lim.ho@glasgow.ac.uk S. Chen and H. P. H. Shum are with the Department of Computer Science, Durham University, Durham, UK. Emails: {shuang.chen, hubert.shum}@durham.ac.uk ^†Corresponding author: H. P. H. Shum July 1, 2024 ======================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================= § ABSTRACT Cameras are essential vision instruments to capture images for pattern detection and measurement. Human-object interaction (HOI) detection is one of the most popular pattern detection approaches for captured human-centric visual scenes. Recently, Transformer-based models have become the dominant approach for HOI detection due to their advanced network architectures and thus promising results. However, most of them follow the one-stage design of vanilla Transformer, leaving rich geometric priors under-exploited and leading to compromised performance especially when occlusion occurs. Given that geometric features tend to outperform visual ones in occluded scenarios and offer information that complements visual cues, we propose a novel end-to-end Transformer-style HOI detection model, i.e., geometric features enhanced HOI detector (GeoHOI). One key part of the model is a new unified self-supervised keypoint learning method named UniPointNet that bridges the gap of consistent keypoint representation across diverse object categories, including humans. GeoHOI effectively upgrades a Transformer-based HOI detector benefiting from the keypoints similarities measuring the likelihood of human-object interactions as well as local keypoint patches to enhance interaction query representation, so as to boost HOI predictions. Extensive experiments show that the proposed method outperforms the state-of-the-art models on V-COCO and achieves competitive performance on HICO-DET. Case study results on the post-disaster rescue with vision-based instruments showcase the applicability of the proposed GeoHOI in real-world applications. Human-object Interaction, Object Keypoints, Interactiveness Learning, Graph Convolutional Network, Attention Mechanism. § INTRODUCTION Cameras, as predominant vision instruments, are extensively employed in methods that rely on visual measurements <cit.>, such as human and object pose estimation <cit.>. 
Human-object interaction (HOI) detection is one of the most popular pattern detection approaches for captured human-centric visual scenes. It involves identifying and localizing interactive human-object pairs while predicting the specific interactions between them within an image, yielding HOI triplets <human, interaction, object >. It plays an important role in numerous applications, such as action recognition <cit.> and surveillance event detection <cit.>. The existing HOI detection methods generally fall into two-stage or end-to-end approaches. Two-stage approaches <cit.> typically take advantage of off-the-shelf object detectors like Fast R-CNN <cit.>. They first detect all instances (i.e., humans and objects) in an image, and then the interaction classification is carried out on every human-object pair. These methods may lead to sub-optimal HOI detections due to the independent optimization of two sub-problems <cit.>, i.e., object detection and interaction classification. In contrast, end-to-end approaches detect the components of an HOI triplet all at once <cit.>. In the earlier end-to-end attempts <cit.>, interaction points and object proposals are detected simultaneously. The interactions are then associated with each human-object pair. However, in still images of complex scenes, such as crowded areas with interaction points overlapping among different human-object pairs, these methods could lead to inaccuracies and misinterpretations <cit.>. End-to-end Transformer-based models <cit.> have been proposed to overcome these limitations, achieving state-of-the-art performance. Inspired by the Transformer object detector DETR <cit.>, these approaches frame the HOI detection as a set prediction problem, using a bipartite matching loss to align interaction queries with ground-truth HOI triplets. While successful, rich prior knowledge (e.g., the semantic features and structure information) is under-exploited due to the random initialization of parametric interaction queries. To address this limitation, <cit.> explored semantics, spatial features, and structure information. Nevertheless, the spatial features including instance bounding boxes and human-object layout employed in these works are too coarse to capture fine-grained relationships between human body parts and object parts. The fine-grained geometric features, such as human pose and object structure have proven to be highly effective in two-stage methods <cit.>. However, they remain under-explored in existing Transformers due to their one-stage paradigm of HOI detection. In this work, we investigate how to enrich HOI representations with fine-grained geometric features in an end-to-end Transformer framework. To this end, we propose a Geometric features enhanced Human-Object Interaction detection model (GeoHOI). Given that geometric features tend to outperform visual features on datasets with heavy occlusion <cit.> and offer information that complements visual cues, our idea is to learn fine-grained geometric features (i.e., keypoints) to facilitate interactiveness prediction of human-object pairs and to enhance interaction query representation. In detail, GeoHOI improves the Transformer-based framework of STIP <cit.> by introducing three novel components. First, a keypoints detection module unifies the keypoint detection across different object categories, including humans, and is integrated into GeoHOI for end-to-end HOI detection. 
It simplifies the appearance distribution of different object classes by reconstructing object segmentation masks instead of their RGB images, allowing the network to focus on learning different shapes and enabling it to learn keypoints for arbitrary objects. As a result, it generates consistent and robust keypoint representation across different object categories. Second, a keypoint-aware interactiveness prediction module employs a graph convolutional network, capturing the holistic cues (i.e., cross-instance features) between humans and objects that complement pairwise features to effectively predict the interactiveness of human-object pairs. Third, a part attention module intends to identify informative local cues since specific interaction types are defined with detailed local information of human and object parts. This enhances the representation of interaction queries in the HOI Transformer for effectively classifying specific interactions. Thus, we exploit a self-attention mechanism to produce part-level attention, with keypoint positions serving as positional encodings. This allows the HOI classifier to focus on specific local regions that are informative to each interaction type. We evaluate our model on two HOI benchmarks V-COCO <cit.> and HICO-DET <cit.>. The proposed GeoHOI achieves superior results on both datasets. Source codes are available at https://github.com/zhumanli/GeoHOIhttps://github.com/zhumanli/GeoHOI. Our contributions are: * We introduce GeoHOI, a geometric features enhanced human-object interaction detection approach, facilitating pattern detection and measurement in images captured by vision instruments. * We present a self-supervised keypoints learning method (UniPointNet) to detect keypoints for different object categories including humans in a unified manner. To the best of our knowledge, this is the first attempt that unifies keypoints detection across different object classes in HOI. * We design a keypoint-aware interactiveness prediction module that incorporates holistic relationships between humans and objects. The geometric keypoint features are exploited to measure the likelihood of human-object interactions, boosting the interactiveness prediction of human-object pairs. * We propose a part attention module that refines interaction query representation using self-attention, enhancing specific interaction prediction by identifying informative human and object parts. * We demonstrate the effectiveness of our proposed GeoHOI by conducting experiments in public HOI detection benchmark datasets, outperforming state-of-the-art methods by a large margin of 3.4 mAP on V-COCO and 3.76 mAP on HICO-DET. We further conduct a real-world application case of post-disaster with UAVs, and GeoHOI outperforms all the baselines in terms of AP and recall. § RELATED WORK §.§ Two-stage Methods §.§.§ Multi-stream Approaches Early HOI detection models are typically implemented with a two-stage framework. In the first stage, an object detector such as Fast R-CNN <cit.> is used to localize instances. In the second stage, a classifier is trained to predict human-object interactions. Two-stage methods use pre-trained object detectors to simplify HOI detection, achieving a good trade-off between performance and complexity <cit.>. Earlier works focus on designing multi-branch HOI classifiers with convolutional neural networks modelling human and object appearance features and spatial layout. Gkioxari et al. 
<cit.> extended Fast R-CNN by introducing a human-centric branch to predict interactions at each target object location. Chao et al. <cit.> proposed a three-branch framework to model pairwise human-object appearance features and their spatial relations. Hou et al. <cit.> presented a five-branch framework with a novel fabricated compositional branch targeting the issue of long-tailed distributions of HOI interactions. These methods mainly focus on exploring the pairwise human and object features, overlooking the holistic features that could complement the pairwise ones. Some works have exploited graph convolutional networks (GCNs) to model the relationships between humans and objects from a global perspective. Qi et al. <cit.> proposed a fully connected graph with humans and objects as nodes, and the adjacency matrix was inferred by their proposed link function. Ulutan et al. <cit.> introduced a visual-spatial-graph network to model structural connections between instances. Similar to Qi et al., they model humans and objects as nodes. Instead of a fully connected graph, they only build connections between inter-class instances, omitting unnecessary human-human and object-object pairs. Their adjacency matrix is predicted by the visual branch. Zhang et al. <cit.> presented a spatially conditioned graph with a multi-branch fusion module computing the adjacency structure and refining graph features. GCN-based HOI methods have shown that the modelling of intra-level and inter-level HOI representations can significantly improve HOI detection performance <cit.>. The reason is that GCNs not only capture pairwise features but also infer holistic cross-instance cues, which are useful for HOI reasoning. We leverage its advantage by fusing both pairwise features and cross-instance cues to enhance HOI prediction. §.§.§ Geometric Features Informed Approaches Geometric features such as human pose and object structure provide fine-grained spatial information and have been proved to be effective in improving HOI detection performance in two-stage methods. Fang et al. <cit.> and Wan et al. <cit.> explored the semantic cues of human body parts with an attention module that effectively identifies the most informative body parts for HOI recognition. Wu et al. <cit.> proposed to extract cross-person cues for body parts, which afford useful and supplementary information for the discovery of interactiveness. Park et al. <cit.> designed a graph with a pose-conditioned self-loop structure, allowing the human node embedding to be updated based on the local features of human joints. As discussed, the human pose has been well-studied in HOI detection, while the geometric features of objects such as keypoint positions are less explored. To overcome this, Zheng et al. <cit.> proposed to model the interactions between human joints and object keypoints using a graph network for capturing fine-grained spatial relationships in HOI detection. Nevertheless, the representation of object keypoints in their work (i.e., two corner points of the object bounding box) is too simple to capture object shapes or structures as it considers only the rectangular spatial scope of an object. Efforts have been made to improve the representation of object structure in HOI detection. Zhu et al. <cit.> proposed a deterministic method for representing object keypoints which encapsulate the underlying structure of an object. 
They extracted an object skeleton from its segmentation mask using a morphological skeletonization algorithm and obtained its keypoints by applying the K-means clustering to the set of key points on the skeleton. This kind of non-probabilistic method is less robust in handling various object shapes, particularly when dealing with non-articulated objects, making it difficult to accurately detect keypoints across different objects. Bar et al. <cit.> exploited transfer learning to estimate animal keypoints with a pre-trained object keypoints detector and adopted the interest point detection in geometry with bin girding to obtain keypoints for artefacts such as beds and computers. Ito <cit.> proposed a human and object keypoint-based extension module to improve conventional HOI detection models such as <cit.>. However, the different representations of human and object keypoints presented in these frameworks are less consistent and difficult to maintain across different objects, making them less applicable to real-world applications. In this work, we explore a self-supervised framework for learning keypoints of both humans and objects, which is a more versatile and robust approach for keypoint estimation. §.§ End-to-end Transformer-based Methods Transformers have shown superior performance in many fields including HOI detection due to their advanced network architecture and high capacity. They are first adopted in <cit.> by utilizing the vanilla Transformer architecture <cit.> to map the parametric interaction queries into a set of HOI predictions with a bipartite matching loss. Later, Kim et al. <cit.> introduced a multi-scale Transformer architecture to boost HOI detection. Recently, a multiplex relation network that disentangled Transformer decoders to encourage rich context exchange was proposed in <cit.>. Unlike two-stage methods that optimize instance detection and interaction detection in separate stages, these end-to-end frameworks infer human-object relationships from a global contextual perspective. They predict all elements of HOI triplets directly, significantly surpassing the performance of existing two-stage approaches. Nevertheless, the rich prior knowledge, such as spatial features <cit.>, are not exploited in the above Transformer-based attempts. Some studies have attempted to inject prior knowledge into Transformer architectures, to address the aforementioned limitation. Iftekhar et al. <cit.> proposed to utilize the semantic features (i.e., text embeddings) and the spatial features (i.e., the relative spatial configuration of human and object bounding box locations) to enhance the query representations of decoders. Zhang et al. <cit.> exploited the inter-interaction semantic structure and intra-interaction spatial structure over interaction proposals (i.e., human-object pairs) to strengthen HOI predictions. Xie et al. <cit.> proposed a novel category query learning approach where interaction queries are explicitly associated with specific and fixed image categories, facilitating HOI detection. We observe that spatial features, such as instance bounding boxes and human-object layout used in these works are too coarse to capture fine-grained relationships between human body parts and object parts, which have been demonstrated to be beneficial in existing two-stage HOI models <cit.>. In this paper, we leverage the geometric keypoint features to facilitate HOI classification in an end-to-end Transformer-based framework. 
§ OVERVIEW OF GEOHOI This work aims to improve end-to-end Transformer-based HOI detection networks with fine-grained geometric features of humans and objects. To this end, we propose GeoHOI. It utilizes learnable fine-grained geometric features (i.e., keypoint positions) to facilitate the interactiveness prediction of human-object pairs and to enhance interaction query representations. Inspired by the process of HOI detection with prior knowledge, we improve the structure-aware Transformer over interaction proposals (STIP <cit.>) by using keypoint features. As shown in Fig. <ref>, STIP is an improved network over the vanilla Transformer with prior knowledge of inter-interaction (i.e., whether or not two HOI triplets share the same human or object) and intra-interaction (i.e., the layout of human and object) structure. It becomes a natural backbone of GeoHOI due to its decompose-style design of HOI predictions, i.e., interaction proposals are first generated, followed by interaction classification. Such design allows us to explore rich geometric features for effective interaction proposal generation and non-parametric interaction query representation. Specifically, our framework introduces three novel components to STIP, i.e., keypoints detection with our novel UniPointNet, keypoint-aware interactiveness prediction module for predicting interactive human-object pairs, and part attention module to enhance interaction query representation with informative human and object local parts. We start with GeoHOI architecture (Section <ref>), then introduce the keypoint-aware interactiveness prediction module (<ref>) and the part attention module (<ref>). Section <ref> details the keypoints detection network (UniPointNet). §.§ Architecture of GeoHOI An overview of GeoHOI is shown in Fig. <ref>. Given an input image x∈ℝ^H × W × C, where H, W, and C represent the image height, width and channels, accordingly, GeoHOI first extracts the image feature map F_x∈ℝ^H^'× W^'× d with a CNN backbone of ResNet. F_x is then sent to Panoptic DETR <cit.> to obtain instance detections including bounding boxes and segmentation masks. Next, the segmentation masks are fed into UniPointNet to obtain keypoints for each instance. After that, the keypoint-aware interactiveness prediction module constructs pairwise and holistic graph features for instance-level feature presentation. It then predicts and outputs interactive pairs, enhanced by local cues from keypoints in the part attention module. Finally, the structure-aware Transformer generates HOI predictions. Details are introduced in the following sections. §.§ Keypoint-aware Interactiveness Prediction Module The Keypoint-aware Interactiveness Prediction (KIP) module aims to suppress non-interactive human-object pairs using coarse instance-level features. It transforms the random parametric interaction queries in the vanilla Transformer to non-parametric interaction proposals equipped with prior knowledge (e.g., instance visual features and their spatial layout), facilitating relational reasoning among interactions in HOI set prediction <cit.>. When learning the interactiveness of a human-object pair, visual cues can be explored not only from the targeted human and object but also from other humans and objects in the scene <cit.>, providing a more comprehensive understanding of the scene. However, previous works such as <cit.> only consider target pairwise features, failing to effectively extract interactive pairs. 
As a potential solution, mining cues from a global cross-instance perspective, i.e., using other humans and objects as a reference, would offer helpful and supplementary information for interactiveness inference. Therefore, in addition to pairwise features, we incorporate graph features using keypoint positions measuring the geometric distance with a graph convolutional network from a global perspective, extracting cross-instance cues. We first enumerate all human-object pairs using the detected instances by Panoptic DETR, and the KIP then estimates the likelihood of interaction for each pair based on both pairwise features and holistic graph features through a multi-layer perceptron (MLP). Finally, the KIP module outputs the top-K human-object pairs with the highest probability scores. Concretely, for each human-object pair, the human visual feature f_h^v, object visual feature f_o^v, spatial feature f_s^u, union feature f_u^v are represented as 256-dimensional vectors, while the object's semantic feature f_o^c (the embedding of the object class label) is a 300-dimensional vector. We refer to these as pairwise features. For the graph representation, we model humans and objects as nodes, connecting each human to all objects and each object to all humans. As shown in Fig. <ref>, the node features are represented with visual features f_h^v, f_o^v. We use the similarities between human keypoints and object keypoints to define the adjacency matrix A. This highlights that closer keypoints between a human and an object indicate a higher likelihood of interaction. Unlike the implicit adjacency matrix representation (i.e., it is predicted from instance visual features) in <cit.>, the keypoints similarity explicitly captures the geometric distance prior knowledge between a human and an object, resulting in effective interactiveness prediction. As depicted in Fig. <ref>, given the keypoint features f_h^p∈ℝ^N×2 of a human and f_o^p∈ℝ^N×2 of an object, they are first embedded by a linear layer with 128-dimensional vectors and the similarity between them is served as their edge weight A_ho∈A, which is expressed as follows: A_ho = ϕ(f_h^p)⊗ϕ(f_o^p), where ϕ is implemented with a linear layer to encode keypoint positions, ⊗ denotes the dot product. Note that the edge weight between a human and an object is symmetric, i.e., A_ho = A_oh. The graph features f_h^g and f_o^g are then defined as follows: f_h^g = f_h^v + ∑_o=1^ÔA_hof_o → h^v, f_o^g = f_o^v + ∑_h=1^ĤA_ohf_h → o^v, where Ô and Ĥ are the numbers of humans and objects, f_o → h^v is the projection of object visual feature f_o^v in the human space, and f_h → o^v is the projection of human visual feature f_h^v in the object space. Finally, the instance-level features for interactiveness prediction of each human-object pair in this module are obtained as the concatenation of all the features as follows: f_ho= f_h^vf_o^vf_h^gf_o^gf_u^sf_o^cf_u^v. §.§ Part Attention Module While the instance-level features provide coarse information for interactions, specific interaction types are defined with fine-grained details. They highlight local information on human and object parts that are unlikely to be captured in instance-level features <cit.>. In addition, the fine-grained correlations among human body parts and object parts (e.g., the spatial layout between the human hands and the laptop keyboard shown in Fig. 
<ref>) implicitly depict the consistent spatial, scale, and co-occurrence relationships between humans and objects, providing a finer granularity context information of an image <cit.>. However, existing works <cit.> only consider human body parts while overlooking object structure parts. To address the aforementioned limitation, we introduce a Part Attention Module (PAM) designed to identify the most relevant parts of both humans and objects for detecting a specific interaction category. We use self-attention to learn the part-level features of a given human-object pair, enabling each part to aggregate information from all other parts, regardless of their distance or position. This allows the network to extract richer and more comprehensive context features, leading to a deeper understanding of the scene. This module serves to enhance the interaction query representation of each selected interactive human-object pair, improving the effectiveness of classifying particular interactions. In detail, given the human keypoints f_h^p = {f_h^p1…, f_h^pN}, we define a local region x_pi∈ℝ^4 for each keypoint p_i^h, it is centered at p_i^h and has a size γ proportional to the size of the human bounding box. We adopt RoI-Align <cit.> to generate local patch features and rescale them to a resolution of R_p × R_p. We apply the same operations to object keypoints f_o^p = {f_o^p1…, f_o^pN} to generate their local patch features as well. For the sake of simplicity, we denote the extracted patch features of humans and objects as f^p^' = {f^p^'1…, f^p^'2N}. In addition, we embed each keypoint as positional encodings to its corresponding patch. By doing this, the model can capture more detailed spatial relationships and configurations within each human-object pair. It also ensures a richer representation of the data, allowing the model to make more context-aware predictions of specific interaction types. We then represent patch features of each human-object pair, integrated with their corresponding positional encodings, as a sequence of queries q̂=(q̂_1, …, q̂_2N), keys k̂=(k̂_1, …, k̂_2N), and values v̂=(v̂_1, …, v̂_2N). Following the self-attention mechanism <cit.>, each patch is computed by aggregating all values weighted with attention, and an attended patch feature is represented as follows: f^p̂_i=∑_j α_ij(W_v̂v̂_j), where each α_i j=exp(e_ij)/∑_j exp(e_ij) is the normalized attention weight with softmax. Here the primary attention weight e_ij is the scaled dot-product between each key k̂ and query q̂: e_ij=(W_q̂q̂_i)^T(W_k̂k̂_j)/√(d_key), note that W_q̂, W_k̂, W_v̂ are learnable embedding matrices, and d_key is the embedding dimension of keys. The attended local part feature for a human-object pair is calculated by concatenating all patches: f^p̂ = f^p̂_1 f^p̂_2 …f^p̂_2N. Finally, each interaction query q∈ Q is represented by the fusion of instance-level interactiveness features and the attended part features: q = f_hof^p̂. They are fed into the structure-aware Transformer <cit.> for HOI classification. §.§ Training and Inference We follow the training and inference procedure of the STIP <cit.>. The KIP module is optimized with focal loss (FL) <cit.>: L_interactiveness=1/∑_i=1^N̂ z_i∑_i=1^N̂ F L(ẑ_i, z_i), where N̂ is the number of sampled human-object pairs, z_i ∈{0, 1} denotes the existence of ground-truth interaction, and ẑ_i is the predicted interactiveness score. 
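For illustration only, a minimal PyTorch-style sketch of the KIP computation above; the tensor names, hidden sizes, and the reduced pairwise feature set (visual and graph features only, omitting the spatial, semantic, and union terms) are simplifying assumptions, and the actual implementation may differ.

import torch
import torch.nn as nn
import torch.nn.functional as F

class KeypointInteractiveness(nn.Module):
    # Sketch of KIP: keypoint-similarity adjacency, graph features, and pair scoring.
    def __init__(self, d_vis=256, d_kpt=128, n_kpt=32):
        super().__init__()
        self.phi = nn.Linear(2 * n_kpt, d_kpt)      # phi: embeds flattened (x, y) keypoints
        self.proj_o2h = nn.Linear(d_vis, d_vis)     # projects object features into human space
        self.proj_h2o = nn.Linear(d_vis, d_vis)     # projects human features into object space
        self.mlp = nn.Sequential(nn.Linear(4 * d_vis, d_vis), nn.ReLU(), nn.Linear(d_vis, 1))

    def forward(self, f_h_v, f_o_v, kpt_h, kpt_o):
        # f_h_v: (H, d_vis), f_o_v: (O, d_vis); kpt_h: (H, n_kpt, 2), kpt_o: (O, n_kpt, 2)
        A = self.phi(kpt_h.flatten(1)) @ self.phi(kpt_o.flatten(1)).t()   # (H, O) adjacency
        f_h_g = f_h_v + A @ self.proj_o2h(f_o_v)        # human nodes aggregate object context
        f_o_g = f_o_v + A.t() @ self.proj_h2o(f_h_v)    # object nodes aggregate human context
        H, O = f_h_v.size(0), f_o_v.size(0)
        pair = torch.cat([f_h_v[:, None].expand(H, O, -1), f_o_v[None].expand(H, O, -1),
                          f_h_g[:, None].expand(H, O, -1), f_o_g[None].expand(H, O, -1)], -1)
        return self.mlp(pair).squeeze(-1)               # (H, O) interactiveness logits

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    # Binary focal loss over sampled human-object pairs (the interactiveness term above).
    p = torch.sigmoid(logits)
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    pt = p * targets + (1 - p) * (1 - targets)
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - pt) ** gamma * ce).sum() / targets.sum().clamp(min=1)

The top-K pairs by logit would then form the non-parametric interaction proposals passed on to the decoder.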
For each of the output human-object pair of KIP, the focal loss is also used as the multi-label classification loss to train the possible interactions: L_class=1/∑_i=1^N̂∑_j=1^Ĉ y_i j∑_i=1^N̂∑_j=1^Ĉ F L(ŷ_i j, y_i j), where Ĉ is the number of interaction classes, y_ij∈{0, 1} indicates the ground-truth interaction class, and ŷ_i j is the predicted probability of j-th interaction class. The overall training objective of our GeoHOI integrates the above interactiveness loss and the interaction classification loss: L_GeoHOI=L_interactiveness+L_class. § SELF-SUPERVISED OBJECT KEYPOINT DETECTION As a core of our model, we propose leveraging keypoints as fine-grained geometric features of both humans and objects, to facilitate HOI prediction, but it is essential to detect these keypoints before utilizing them. Existing keypoints detection models typically focus on a single object class (e.g., human pose estimation <cit.>) rather than common objects. The difficulties lie in the complexity of distinct spatial structures and appearance distributions exhibited by various objects and the limited annotation availability. In addition, there is very limited work in detecting keypoints across different objects in HOI <cit.> due to a large number of object categories (e.g., 80 common object categories in MS-COCO <cit.>) and occlusions. To address this challenge, we propose UniPointNet which can detect keypoints for arbitrary objects. We employ the self-supervised keypoints learning framework of AutoLink <cit.>. While AutoLink was proposed to learn keypoints for single object classes, our goal is to detect keypoints across all classes present in the HOI task. To this end, we make two key changes to AutoLink. First, we feed object segmentation masks into the network instead of RGB images. This eliminates the appearance variations across different object classes, simplifying their appearance distribution. As a result, the network can focus on learning object shapes and structures. Second, instead of using an individual edge graph with shared graph weight to align all samples, we opt for a set of edge graphs with different graph weights, aligning samples within their respective clusters. This design accommodates object masks with significant variations, thus allowing the network to detect keypoints across a diverse range of object categories. Using such a network to detect keypoints for humans and all the other object classes is advantageous. First, it unifies keypoints detection for different object classes within a single network, which is more applicable in real-world applications in which diverse object types are often involved. Second, it ensures the consistency of keypoints distribution across different object categories including humans, resulting in a unified and consistent keypoints representation that facilitates network learning. Third, unlike the common keypoints representation in occluded cases (e.g., zeros for occluded or invisible joints of a human), all the detected keypoints in our UniPointNet contribute to the representation of an object's shape. This guarantees a more robust keypoints representation when objects are partially visible. §.§ Architecture of UniPointNet An overview of UniPointNet is shown in Fig. <ref>. Given an object segmentation binary map B∈ℝ^H × W × 1 with a height of H and a width of W, our goal is to learn a set of keypoints κ = {k_i|i=1, 2, 3, …, N; k_i ∈[0,1] ×[0,1] ⊂ℝ^2}, where N is the number of keypoints. 
As per <cit.>, keypoints are detected by the encoder with ResNet and upsampling, and each pair of keypoints is connected with a differentiable edge <cit.>. This kind of graph connectivity defines a unique structure for a group of objects with similar shapes that share the same cluster label, learned in a self-supervised manner. The edge map E∈ℝ^H × W is concatenated with the masked binary map B_m ∈ℝ^H × W × 1 along the channel dimension and fed into the decoder to obtain the reconstructed segmentation binary map B^'. Detailed encoder and decoder network architectures can be found in <cit.>. §.§ Segmentation Structure Representation Here, we introduce the keypoint representation and the edge map generation. ℋ = {h_i|i=1, 2, 3, …, N; h_i ∈ℝ^H × W} denotes the N heatmaps generated by the encoder from the input mask. The keypoint k_i is obtained by the differentiable soft-argmax function, k_i = ∑_pψ(h_i)(p) · p, where ψ(h_i) is the Softmax operation on a single heatmap h_i, defined as, ψ(h_i)(p) = exp(h_i(p))/∑_j=1^N exp(h_j(p)), where p denotes normalized pixel coordinates. According to <cit.>, a differentiable edge map E_ij is generated for any two keypoints k_i and k_j by assigning a value of 1 to pixels on the edge connecting the keypoints. For other pixels, their values decrease exponentially with their distance to the line. The edge map E_ij is a Gaussian that extends along the line <cit.>, and it is formally expressed as E_ij(p)=exp(-d_ij^2(p) / σ^2), where the hyperparameter σ controls the thickness of the line, and d_ij(p) is the L_2 distance between the pixel p and the line segment from keypoint k_i to keypoint k_j. According to the location of the pixel p, i.e., before the starting keypoint k_i, between the starting keypoint k_i and the ending keypoint k_j, or after the ending keypoint k_j, it is defined as d_ij(p) = ‖p-k_i‖_2 if t ≤ 0, ‖p-(k_i+t(k_j-k_i))‖_2 if 0<t<1, and ‖p-k_j‖_2 if t ≥ 1, where t=(p-k_i) ·(k_j-k_i)/‖k_j-k_i‖_2^2. The final edge map E∈ℝ^H × W is obtained by taking the maximum at each pixel over all the pairwise edge maps, E(p)=max_ij w_ijE_ij(p), where w_ij is a learnable edge weight. As explained in <cit.>, opting for the maximum value at each pixel helps untangle the edge weights from the convolution kernel weights and generates better performance. §.§ Segmentation Reconstruction The masked segmentation B_m is obtained by randomly masking out 90% of the input segmentation. It is then concatenated with the edge map and fed into the decoder to reconstruct the original segmentation, B^' = Decoder(α B_m ‖ E), where ‖ denotes concatenation along the channel dimension and the parameter α is a learnable factor that adjusts for the variation in edge weight magnitude during training and is initialized to 1. The L_1 loss and VGG perceptual loss <cit.> are used to minimize the difference between the original segmentation and the reconstructed one, ℒ=1/M∑_i=1^M (|B_i-B_i^'| + ‖Γ(B_i)-Γ(B_i^')‖_2^2), where M is the total number of examples and Γ denotes the feature extractor, i.e., the VGG network. § EXPERIMENTS In this section, we introduce HOI benchmark datasets V-COCO <cit.> and HICO-DET <cit.>, followed by experimental settings and implementation details. We then evaluate our proposed model against state-of-the-art approaches and provide insights on per-class performance by comparing it with the backbone STIP. Finally, we present ablation studies on the selection of the number of keypoints and the impact of individual component designs of our model.
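Before turning to the experiments, a minimal sketch, for illustration only, of the two differentiable operations defined above: spatial soft-argmax keypoint extraction and edge-map rendering with per-pixel maximum aggregation. Shapes, names, and the plain Python loops are simplifying assumptions rather than the released implementation.

import torch

def soft_argmax(heatmaps):
    # heatmaps: (N, H, W) -> keypoints (N, 2) in normalized [0, 1] (x, y) coordinates
    N, H, W = heatmaps.shape
    probs = torch.softmax(heatmaps.flatten(1), dim=-1).view(N, H, W)
    ys = torch.linspace(0, 1, H)
    xs = torch.linspace(0, 1, W)
    y = (probs.sum(dim=2) * ys).sum(dim=1)          # expected row coordinate
    x = (probs.sum(dim=1) * xs).sum(dim=1)          # expected column coordinate
    return torch.stack([x, y], dim=-1)

def edge_map(kpts, w, H, W, sigma2=5e-5):
    # kpts: (N, 2) keypoints; w: (N, N) learnable edge weights; returns the (H, W) map E
    ys, xs = torch.meshgrid(torch.linspace(0, 1, H), torch.linspace(0, 1, W), indexing="ij")
    p = torch.stack([xs, ys], dim=-1).view(-1, 2)    # (H*W, 2) pixel grid
    E = torch.zeros(H * W)
    N = kpts.size(0)
    for i in range(N):
        for j in range(N):
            if i == j:
                continue
            a, b = kpts[i], kpts[j]
            ab = b - a
            t = ((p - a) @ ab) / ab.dot(ab).clamp(min=1e-8)   # projection onto the segment
            t = t.clamp(0, 1)                                 # clamping reproduces the case split
            d2 = ((p - (a + t[:, None] * ab)) ** 2).sum(-1)   # squared distance to the segment
            E = torch.maximum(E, w[i, j] * torch.exp(-d2 / sigma2))
    return E.view(H, W)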
§.§ Datasets V-COCO is a popular HOI detection dataset and a subset of MS-COCO <cit.>, covering 29 different action classes. It consists of 10,346 images, with 2,533 images for training, 2,867 for validation, and 4,946 for testing. Following the settings in previous works <cit.>, we apply the Average Precision (AP_role) metric over 24 interactions for the evaluation. Five actions are omitted, as one of them has limited samples and the other four have no object associated with humans. Two types of AP_role (i.e., AP_role^#1 and AP_role^#2) are reported under different scenarios with different scoring criteria for cases where objects are occluded. Concretely, in the scenario of AP_role^#1, the occluded object bounding box must be predicted as empty, i.e., [0,0,0,0]. In contrast, in scenario AP_role^#2, the occluded object is ignored. A human-object pair is considered a true positive if the predicted bounding boxes for both the human and the object have an Intersection-over-Union (IoU) ratio greater than 0.5 with their corresponding ground-truth annotations and the interaction category is correct. HICO-DET is a larger HOI detection dataset consisting of 47,051 images, with 37,535 training and 9,515 testing images. It has 600 annotated human-object interactions and covers the same 80 object categories as MS-COCO <cit.>. We follow previous works <cit.> and report results in two settings, i.e., Default and Known Object. The Default setting evaluates AP across all testing images, whereas the Known Object setting calculates the AP of each object solely on images that contain that object class. We report the AP for each setting over three sets of HOI categories based on the number of training samples, i.e., Full (all 600 HOI categories), Rare (138 HOI categories with fewer than 10 training samples), and Non-Rare (462 HOI categories with at least 10 training samples). §.§ Implementation Details To train UniPointNet, we extract object segmentation masks from the COCO dataset <cit.>. Masks with an area ratio of less than 0.2 relative to the image are discarded, since we aim to learn object shapes and structures and these tiny masks do not contain enough pixels to compute shape features. We finally collected a total of 50,238 training samples. We then use ResNet features to group all samples into 100 clusters with K-means clustering. Each cluster is associated with a unique set of graph weights during training, aligning the shapes of samples within that cluster. Following AutoLink <cit.>, the network is trained for 20k iterations with the Adam optimizer, a learning rate of 10^-4, a batch size of 64, and an edge thickness of σ^2=5e-5. During inference, an input sample is assigned a cluster label based on its distance to each cluster centroid. Subsequently, its keypoints are detected using the corresponding graph weights. The number of keypoints can range from 4 to 48. We adopt the object detector Panoptic DETR <cit.>, pre-trained on MS-COCO, for both object bounding box detection and segmentation. It provides segmented inputs to UniPointNet, thereby enabling us to seamlessly incorporate UniPointNet into GeoHOI. UniPointNet is then utilized as a pre-trained component of GeoHOI, which detects HOIs in an end-to-end manner. The ResNet-50 backbone is used for image feature extraction. We present results for two variants of our proposed method: GeoHOI and GeoHOI*.
GeoHOI trains only the HOI detector, with the parameters of Panoptic DETR frozen, whereas GeoHOI* jointly fine-tunes both the object detector and the HOI detector in an alternating manner. In the experiments, as in STIP, we output the top-32 interactive human-object pairs from the Keypoint-aware Interactiveness Prediction module. Following the previous practice in <cit.>, the RoI-Align in PAM is set to a resolution of R_p = 5, and the size of human and object patches is γ=0.1 of their respective instance bounding box height and width. All patch features are then scaled to 5×5. The whole architecture is trained for 30 epochs on a single NVIDIA A100 GPU with a mini-batch size of 6, an initial learning rate of 5×10^-5, and the AdamW optimizer. §.§ Comparisons with State of the Art We evaluate the performance of GeoHOI and compare it with state-of-the-art models, including methods that use geometric features of both humans and objects. Table <ref> shows the performance comparison with end-to-end methods. For V-COCO, our method beats all existing end-to-end methods by a large margin in both scenarios. In particular, compared with MUREN <cit.>, the previous state-of-the-art method, GeoHOI achieves a significant performance gain of 0.6 mAP in AP_role^#1 and 3.4 mAP in AP_role^#2. For HICO-DET, GeoHOI achieves consistent performance gains and surpasses all previous state-of-the-art methods. These results indicate our method's effectiveness in capturing holistic cross-instance cues between humans and objects using their keypoints through graph convolutional networks, and in enhancing interaction query representations with local patches. In Table <ref>, we compare GeoHOI against two-stage methods. GeoHOI outperforms all existing two-stage methods on V-COCO. Compared with the latest method ViPLO <cit.>, it obtains large performance improvements of 7.2 mAP in AP_role^#1 and 6.4 mAP in AP_role^#2. This is mainly because most of these two-stage methods use CNNs or vanilla Transformers for HOI classification, leading to limited model capacity or prior knowledge. For HICO-DET, we also achieve performance comparable to previous state-of-the-art methods. Compared to ViPLO, the performance gain is not as noticeable as on V-COCO. Owing to the complexity and end-to-end nature of GeoHOI, we follow STIP in using the lightweight ResNet for image feature extraction, while ViPLO employs the advanced Transformer backbone ViT <cit.> in its first stage of object detection. We believe ViT has a larger capacity than ResNet and is better suited to handling the larger HICO-DET. Table <ref> compares our results with existing methods using human and object keypoints on V-COCO. We omit the comparison on HICO-DET because most of these methods did not provide results on this dataset. For fairness, we compare GeoHOI without fine-tuning the object detector against these methods. The proposed GeoHOI outperforms all of them by a marked margin in both AP_role^#1 (5.3 mAP) and AP_role^#2 (13.6 mAP). This demonstrates the effectiveness of GeoHOI: by taking advantage of both the advanced Transformer architecture and fine-grained geometric keypoints, it boosts HOI detection. In Table <ref>, we report the per-class performance of GeoHOI and compare it with the backbone model STIP on V-COCO. We run the pre-trained checkpoint of STIP to obtain its per-class results, since they are not provided in the original paper.
We can see that GeoHOI outperforms the backbone STIP in the majority of classes, particularly in the “eat-obj", “lay-instr", “carry-obj", and “drink-instr" classes. From the upper row of Fig. <ref>, objects in these classes are generally either partially occluded with humans or quite large. For example, the objects in “drink-instr" are often occluded with human hands. In these cases, we believe that keypoints can provide valuable information on the visible parts of these objects and how they are being interacted with the human, resulting in enhanced performance. Moreover, objects such as beds and surfboards, often associated with the “lay-instr" and “carry-obj" actions, are typically quite large. The detected keypoints can capture their shapes pretty well. As a result, it boosts the performance of interaction detection. On the other hand, GeoHOI performs much worse than STIP in the “hit-instr", “hit-obj", and “talk_on_phone" classes. First, objects such as baseball bats and tennis rackets shown in the lower row of Fig. <ref>, typically appear in crowded scenes, leading to inaccurate object detection. Second, the inherent slender shape of baseball bats and the varying perspectives of tennis rackets hinder our UniPointNet from detecting their representative keypoints effectively. Third, balls associated with the “hit-obj" action and cell phones in the “talk_on_phone" action are too small. Thus, their masks have very small areas compared to the entire images, leading to noisy keypoints that harm interaction detection. Fig. <ref> shows qualitative results and compares GeoHOI with the backbone STIP. The top 3 interaction prediction probabilities of GeoHOI are visualized. The images show the variance in object sizes, human visibilities, and different interaction classes. First, the attention maps highlight different local regions for the same interaction category in the same image. For example, the hands are highlighted in different regions for action “work_on_computer” as shown in the first row. Second, when the human and object are far away from each other, they also gather a certain amount of information from their neighbourhood as illustrated in the second row, indicating both local and cross-instance cues are essential for HOI classification. We further showcase crowded scenes with multiple humans and objects in Fig. <ref>. GeoHOI shows higher confidence for true interaction actions “hold_obj” and “sit_instr” and less confidence for true negative interaction “look_obj”, indicating its effectiveness. Overall, GeoHOI improves the backbone of STIP by predicting higher scores in various true interactions and lower scores in negative ones in most cases. More qualitative results of GeoHOI on complex scenes can be found in the supplementary material. In Table <ref>, we report the performance, number of parameters, and speed for training and inference on V-COCO of STIP and GeoHOI for an objective comparison. GeoHOI outperforms STIP by a significant margin of 1.8 and 2.6 in terms of mAP in AP_role^#1 and AP_role^#2 with a comparable number of parameters. GeoHOI's training and inference time is slower than STIP but remains within an acceptable one-second threshold. Compared to STIP, GeoHOI requires two additional modules (object segmentation and keypoint detection) executed sequentially, resulting in higher time consumption. §.§ Ablation Studies In this section, we analyze each GeoHOI design by discussing its possible variants on V-COCO to provide more insights. 
All experiments are carried out under the training setting of the pre-trained object detector with frozen weights. Impact of Individual Components. We conduct ablation experiments by comparing different variants of GeoHOI in Table <ref>. We start with the baseline model (Baseline), which adopts the structure-aware HOI network introduced in <cit.>, but with its object detector replaced by Panoptic DETR. Next, we extend the Baseline model by integrating our keypoint-aware graph convolution network into its interactiveness prediction module, incorporating the holistic graph features, yielding Baseline + KIP which demonstrates better performance. After that, we enhance the Baseline model with our Part Attention Module but without human patches. This variant of our model (Baseline + PAM (w/o human patch)) achieves better performance than both the Baseline model and Baseline + KIP. Another variant of our model (Baseline + PAM (w/o object patch)) shows even better performance. This indicates that the human patch features are more important than the object patch features. We believe this is because the rich poses of humans captured by keypoints are more beneficial for recognizing interactions, which aligns with the findings in <cit.>. To evaluate the benefits of using keypoints as positional encodings, we create (Baseline + PAM (w/o positional encodings)). It outperforms other variants, though it is slightly worse than Baseline + PAM that incorporates both patch features and keypoints. Both Baseline + PAM (w/o positional encodings) and Baseline + PAM demonstrate the effectiveness of using local features with self-attention. Finally, when jointly upgrading the Baseline model with the keypoint-aware interactiveness prediction module and part attention module (i.e., our GeoHOI), it results in the best performance. Effect of Different Numbers of Keypoints. Here, we vary N from 4 to 48 to demonstrate the relationship between the performance and the select keypoints number N. Table <ref> shows the quantitative ablation tests, and the best performance is obtained when N is 32. The increasing number of keypoints (until N=32) can generally boost the performance. This is expected since more details can be captured with more keypoints. For example, when N=4, only the upper part of the horse is modelled as shown in the first column of the third row in Fig. <ref>. In addition, when the object's mask is separated into multiple fragments due to occlusion (as seen in the last two rows in Fig. <ref>), a higher number of keypoints can span more of these segments, generating a more accurate shape representation of an object. However, when N is greater than 32, i.e., 48, the performance decreases. We speculate that too many keypoints might introduce more noise, leading the model to overfit, which results in affecting its generalization. As such, we have empirically selected N to be 32. To evaluate the effectiveness of UniPointNet, in Table <ref>, we compare it with the existing skeleton-based keypoint representation (Skeletal Keypoint) for HOI detection <cit.>, which also utilizes object segmentation. GeoHOI (UniPointNet) means the keypoints in GeoHOI are detected by our proposed UniPointNet, and GeoHOI (Skeletal Keypoint) represents that the keypoints are obtained from <cit.>. We can see that UniPointNet surpasses the skeletal keypoints in both AP_role^#1 and AP_role^#2, showcasing the effectiveness of the proposed UniPointNet. 
The Skeletal Keypoint is a skeleton-driven method, it is only robust for articulated objects like humans and dogs, which demonstrate a clear and consistent structure of joints and parts. It is also limited when extracting keypoints from non-articulated objects such as pizzas and phones because of its skeletonization process. In contrast, UniPointNet is a shape-driven representation, making it robust to arbitrary shapes of objects. §.§ A Case Study in Post-Disaster Rescue with UAVs The proposed HOI detector (GeoHOI) has a wide range of applications, including vision-based instrumentation and measurement. To showcase the generalization of our GeoHOI and evaluate its performance in real-world applications relevant to instrumentation and measurement, we conducted a case study in Post-Disaster Rescue with unmanned aerial vehicles (UAVs) on the PDD dataset <cit.>. It was collected from real-world ruins, including various post-disaster scenes, from multiple angles of UAVs and different distances and resolutions. Disasters include natural calamities such as earthquakes and outdoor rescue scenarios, among others. It consists of 832 training, 100 validation, and 100 testing images. By conducting the experiments on the PDD dataset, we evaluate our GeoHOI for human detection task and compare it with the baseline methods. GeoHOI is designed for HOI detection, outputting triplets as <human, interaction, object >. To evaluate it on human detection, we measure its outputs of detected human bounding boxes and ignore interaction and object bounding box predictions. To make a fair comparison between our methods and baselines in <cit.>, we requested the PDD test set (100 images) from the authors for evaluation, and we also used the same evaluation metrics, i.e., average precision (AP), F1 score, recall, and precision. We directly apply the pre-trained GeoHOI and STIP (both are trained on V-COCO) on the test set of the PDD dataset. In addition, we conduct a qualitative analysis of HOI detection to showcase that the proposed method can further facilitate post-disaster rescue with UAVs. For example, detecting individuals in wheelchairs or needing medical assistance allows rescue teams to effectively prioritize rescue efforts such as aid and resources for those who need them most. In Table <ref>, we compare the quantitative performance of HOI-based models, including our proposed GeoHOI and its backbone STIP and baselines proposed in <cit.>. With the default number of 32 output proposals in the Keypoint-aware Interactiveness Prediction (KIP) module, GeoHOI outperforms all the baselines on AP and achieves comparable precision, demonstrating its effectiveness in detecting humans in post-disaster scenes. STIP obtains similar performance in both AP and precision, and we believe the main reason is that the HOI-based detection systems can enhance human bounding box precision by leveraging contextual information (i.e., interactions between humans and objects) and joint optimization (i.e., optimizing the predictions of humans, interactions, and objects simultaneously). The integrated analysis of humans, objects, and their interactions refines human detection accuracy compared to these baselines designed alone for human detection. The lower performance on the F1 score and recall of GeoHOI and STIP indicate that the HOI-based systems have a higher missed detection rate. 
We think the KIP module that suppresses non-interactive human-object pairs is the primary cause since it can filter out humans who do not interact with objects, resulting in compromised performance in recall. To verify this, we increase the number of proposals (K) to 64 and 100, respectively. Recall significantly improves with the number of proposals and outperforms all the baselines at K = 100. This indicates our model's adaptability in balancing recall and precision by tuning the number of output proposals in our KIP module in practical applications. In addition, we show the qualitative results of HOI detection in Fig. <ref> to provide an in-depth analysis of how HOI detection can facilitate post-disaster rescue. GeoHOI demonstrates a varied performance across different scenarios. For instance, it predicts relatively high confidence scores in recognizing the interactions of “ride_instr” (riding a bicycle), “talk_on_phone_instr”, and “hold_obj” where the scenes are less complicated. In contrast, it shows diminished confidence in more complex scenes, such as a person lying down in the rubble, or when the scene is crowded, e.g., the image in the first row and column. This indicates the challenges in detecting interactions in cluttered post-disaster scenes. The qualitative results show that our proposed GeoHOI is able to detect different human interactions in post-disaster scenes, facilitating search and rescue operations. For instance, identifying individuals in wheelchairs or those lying on the ground enables rescue teams to prioritize medical attention. Observations of people using phones or riding bicycles provide crucial insights into the operational status of communication networks and the accessibility of various areas. Additionally, recognizing survivors holding onto pets or personal belongings allows rescue teams to provide not only necessities like food and water but also support for pet care and the safekeeping of valuables, enhancing the overall rescue operation. § CONCLUSION In this paper, we have proposed GeoHOI, an end-to-end Transformer-style model for detecting human-object interactions using fine-grained geometric keypoint features of humans and objects. We have also presented UniPointNet, a self-supervised framework that detects keypoints for arbitrary objects and enhances HOI performance. The KIP module uses keypoints to mine cross-instance cues via a graph network, enhancing pairwise cues for optimizing the prediction of interactive human-object pairs. The PAM module uses self-attention on keypoint patches to discover informative local cues, facilitating the prediction of specific interaction categories. Extensive experimental results have shown that GeoHOI improves the backbone of STIP and achieves superior performance on public HOI benchmarks. We further demonstrated the advantages of using GeoHOI on human-centric applications such as the case study on post-disaster rescue. The presented UniPointNet also facilitates visual measurement tasks, including object pose estimation <cit.> and 3-D reconstruction <cit.>. The end-to-end GeoHOI is limited in training and analysis of the geometric features. For future research, the proposed geometric features can be employed in two-stage frameworks such as <cit.>, facilitating more analysis and insights into the geometric context (e.g., relative keypoint distance) in HOI detection. 
As discussed in the experiments, our UniPointNet struggles with tiny or slender objects due to their limited spatial resolution in images, which hinders accurate shape reconstruction and keypoint detection. Future work could explore better keypoint representation for these objects, such as adaptively selecting the optimal numbers and locations of keypoints to represent objects in different sizes. Additionally, investigating how to incorporate semantic information in keypoint detection and evaluating the effect on HOI detection would also be valuable. Furthermore, recent advancements in large language models, especially those with integrated vision-language capabilities such as CLIP <cit.>, have demonstrated their effectiveness in zero-shot HOI detection <cit.>. Given that annotating HOI triplets is challenging and rare HOIs are not learned as effectively as non-rare ones, it is worth further exploring the capabilities of large language models in the future to tackle the long-tail problem and zero-shot learning in HOI detection, facilitating real-world HOI applications. § ACKNOWLEDGEMENT This research is supported in part by the EPSRC NortHFutures project (ref: EP/X031012/1). IEEEtran IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT Geometric Features Enhanced Human-Object Interaction Detection (Supplementary Material) Manli Zhu^0000-0002-8231-5342, Edmond S. L. Ho^0000-0001-5862-106X, Shuang Chen^0000-0002-6879-7285, Longzhi Yang^0000-0003-2115-4909, Senior Member, IEEE, Hubert P. H. Shum^0000-0001-5651-6039†, Senior Member, IEEE July 1, 2024 ============================================================================================================================================================================================================================ § SUPPLEMENTARY MATERIAL Here we show more qualitative results of GeoHOI on complex scenes to showcase its effectiveness and robustness.
http://arxiv.org/abs/2406.18912v1
20240627055430
The nonexistence of unicorns and many-sorted Löwenheim-Skolem theorems
[ "Benjamin Przybocki", "Guilherme Toledo", "Yoni Zohar", "Clark Barrett" ]
math.LO
[ "math.LO", "cs.LO" ]
B. Przybocki et al. Stanford University, USA benjamin.przybocki@gmail.com, barrett@cs.stanford.edu Bar-Ilan University, Israel {guivtoledo,yoni206}@gmail.com The nonexistence of unicorns and many-sorted Löwenheim–Skolem theorems Benjamin Przybocki1() 0009-0007-5489-1733 Guilherme Toledo2 0000-0002-6539-398X Yoni Zohar2 0000-0002-2972-6695 Clark Barrett1 0000-0002-9522-3084 Received XXX; accepted YYY ========================================================================================================================================================== § ABSTRACT Stable infiniteness, strong finite witnessability, and smoothness are model-theoretic properties relevant to theory combination in satisfiability modulo theories. Theories that are strongly finitely witnessable and smooth are called strongly polite and can be effectively combined with other theories. Toledo, Zohar, and Barrett conjectured that stably infinite and strongly finitely witnessable theories are smooth and therefore strongly polite. They called counterexamples to this conjecture unicorn theories, as their existence seemed unlikely. We prove that, indeed, unicorns do not exist. We also prove versions of the Löwenheim–Skolem theorem and the Łoś–Vaught test for many-sorted logic. § INTRODUCTION Given decision procedures for theories _1 and _2 with disjoint signatures, is there a decision procedure for _1 ∪_2? In general, the answer is “not necessarily”, but a central question in Satisfiability Modulo Theories (SMT) <cit.> is: what assumptions on _1 and _2 suffice for theory combination? This line of research began with Nelson and Oppen's theory combination procedure <cit.>, which applies when _1 and _2 are stably infinite, roughly meaning that every _i-satisfiable quantifier-free formula is satisfied by an infinite _i-interpretation for i ∈{1,2}. The Nelson–Oppen procedure is quite useful, but requires both theories to be stably infinite, which is not always the case (e.g., the theories of bit-vectors and finite datatypes are not stably infinite). Thus, sufficient properties of only one of the theories were identified, such as gentleness <cit.>, shininess <cit.>, and flexibility <cit.>. The most relevant property for our purposes is strong politeness <cit.>. It is essential to the functioning of the SMT solver cvc5 <cit.>, which is called billions of times per day in industrial production code. A theory is strongly polite if it is smooth and strongly finitely witnessable, which are model-theoretic properties we will define later. These properties are more involved than stable infiniteness, so proving a theory to be strongly polite is more difficult. But the advantage of strongly polite theories is that they can be combined with any other decidable theory, including theories that are not stably infinite. Given the abundance of model-theoretic properties relevant to theory combination, some of which interact in subtle ways, it behooves us to understand the logical relations between them. Recent papers <cit.> have sought to understand the relations between seven model-theoretic properties—including stable infiniteness, smoothness, and strong finite witnessability—by determining which combinations of properties are possible in various signatures.
In most cases, a theory with the desired combination of properties was constructed, or it was proved that none exists. The sole exception was theories that are stably infinite and strongly finitely witnessable but not smooth, dubbed unicorn theories and conjectured not to exist. Our main result, <Ref>, confirms this conjecture. Besides completing the taxonomy of properties from <cit.>, our result has practical consequences. The nonexistence of unicorns implies that strongly polite theories can be equivalently defined as those that are stably infinite and strongly finitely witnessable. Since it is easier to prove that a theory is stably infinite than to prove that it is smooth, this streamlines the process of proving that a theory is strongly polite. Thus, each time a new theory is introduced, proving that it can be combined with other theories becomes easier.[<cit.> already proved that stably infinite and strongly finitely witnessable theories can be combined with other theories. Our result gives a new proof (see <Ref>), and shows that their procedure is not more general than polite combination.] Similarly, our results give a new characterization of shiny theories, which makes it easier to prove that a theory is amenable to the shiny combination procedure (see <Ref>). We also believe that our result is of theoretical interest. <Ref>, which is the main ingredient in the proof of <Ref>, can be seen as a variant of the upward Löwenheim–Skolem theorem for many-sorted logic, since proving that a theory is smooth amounts to proving that cardinalities of sorts can be increased arbitrarily, including to uncountable cardinals. This result may be of independent interest to logicians studying the model theory of many-sorted logic, and we hope the proof techniques are useful to them as well. Speaking of proof techniques, our proof is curious in that it uses Ramsey's theorem from finite combinatorics. This is not the first time Ramsey's theorem has been used in logic. Ramsey proved his theorem in the course of solving a special case of the decision problem for first-order logic <cit.>. Ramsey's theorem also shows up in the Ehrenfeucht–Mostowski construction in model theory <cit.>. Our proof actually requires a generalization of Ramsey's theorem, which we prove using the standard version of Ramsey's theorem. A major component of the proof of <Ref> amounts to proving a many-sorted version of the Löwenheim–Skolem theorem. On the course to proving this, we realized that a proper understanding of this theorem for many-sorted logic appears to be missing from the literature, despite the fact that the SMT-LIB standard <cit.> is based on many-sorted logic. To fill this gap, we prove generalizations of the Löwenheim–Skolem theorem for many-sorted logic, and use them to prove a many-sorted Łoś–Vaught test, useful for proving theory completeness. The remainder of this paper is structured as follows. <Ref> provides background and definitions on many-sorted logic and SMT. <Ref> proves the main result of this paper, namely the nonexistence of unicorn theories. <Ref> proves new many-sorted variants of the Löwenheim–Skolem theorem. <Ref> concludes and presents directions for future work.[Some proofs are omitted. They can be found in the appendix.] § PRELIMINARIES §.§ Many-sorted first-order logic We work in many-sorted first-order logic <cit.>. 
A signature Σ consists of a non­empty set _Σ of sorts, a set _Σ of function symbols, and a set _Σ of predicate symbols containing an equality symbol =_ for every sort ∈_Σ.[When specifying a signature, we often omit the equality symbols, and include them implicitly. We also omit σ from =_σ when it does not cause confusion.] Every function symbol has an arity (_1, …, _n, ) and every predicate symbol an arity (_1, …, _n), where _1, …, _n, ∈_Σ and n≥ 0. Every equality symbol =_ has arity (, ). To quantify a variable x of sort , we write x : and x : for the universal and existential quantifiers respectively. Let |Σ| = |_Σ|+|_Σ|+|_Σ|. If a signature contains only sorts and equalities, we say it is empty. Two signatures are said to be disjoint if they share at most sorts and equality symbols. We define Σ-terms and Σ-formulas as usual. The set of free variables of sort in φ is denoted _(φ). For S⊆_Σ, let _S(φ) = ⋃_∈ S_(φ). We also let (φ) = __Σ(φ). A Σ-sentence is a Σ-formula with no free variables. A Σ-structure interprets each sort ∈_Σ as a nonempty set ^, each function symbol f ∈_Σ as a function f^ with the appropriate domain and codomain, and each predicate symbol P ∈_Σ as a relation P^ over the appropriate set, such that =_^ is the identity on ^. A Σ-interpretation is a pair (, ν), where is a Σ-structure and ν is a function, called an assignment, mapping each variable x of sort to an element ν(x) ∈^, denoted x^. We write t^ for the interpretation of the Σ-term t under , which is defined in the usual way. The entailment relation, denoted , is defined as usual. Two structures are elementarily equivalent if they satisfy the same sentences. We say that is an elementary substructure of if is a substructure of and, for all formulas φ and all assignments ν on , we have (,ν) φ if and only if (, ν) φ. Note that if is an elementary substructure of , then they are elementarily equivalent. is an elementary subinterpretation of if is an elementary substructure of and 's assignment is the same as 's assignment. Given a Σ-structure , let ≥={∈_Σ: |^|≥ℵ_0} and <=_Σ∖≥. We similarly define ≥ and < for a Σ-interpretation . A Σ-theory is a set of Σ-sentences, called the axioms of . We write ⊢_φ instead of φ. Structures satisfying are called -models, and interpretations satisfying are called -inter­pre­ta­tions. We say a Σ-formula is -satisfiable if it is satisfied by some -interpretation, and we say two Σ-formulas are -equivalent if every -interpretation satisfies one if and only if it satisfies the other. is complete if for every sentence φ, we have ⊢_φ or ⊢_φ. is consistent if there is no formula φ such that ⊢_φ and ⊢_φ. If Σ_1 and Σ_2 are disjoint, let Σ_1 ∪Σ_2 be the signature with the union of their sorts, function symbols, and predicate symbols. Given a Σ_1-theory _1 and a Σ_2-theory _2, the (Σ_1 ∪Σ_2)-theory _1 ∪_2 is the theory whose axioms are the union of the axioms of _1 and _2. The following theorem, proved in <cit.>, is a many-sorted variant of the first-order compactness theorem. A set of Σ-formulas Γ is satisfiable if and only if every finite subset of Γ is satisfiable. We say that a Σ-theory has built-in Skolem functions if for all formulas ψ(x, y), there is f ∈_Σ such that ⊢_x (y (ψ(x, y)) →ψ(x, f(x))).[Intuitively: has enough function symbols to witness all existential formulas.] The following is a many-sorted variant of Lemma 2.3.6 of <cit.>. The proof is almost identical to that of the single-sorted case from <cit.>. 
lemmabuiltin If is a Σ-theory for a countable Σ, then there is a countable signature Σ^* ⊇Σ and Σ^*-theory ^* ⊇ with built-in Skolem functions. We state a many-sorted generalization of the Tarski–Vaught test, whose proof is also similar to the single-sorted case <cit.>. [The Tarski–Vaught test]lemmaTVTEST Suppose is a substructure of . Then, is an elementary substructure of if and only if (, ν) vφ(x, v) implies (, ν) vφ(x, v) for every formula φ(x, v) and assignment ν over . §.§ Model-theoretic properties Let Σ be a many-sorted signature, S ⊆_Σ, and a Σ-theory. * is stably infinite with respect to S if for every -satisfiable quantifier-free formula φ, there is a -interpretation satisfying φ with |^| ≥ℵ_0 for every ∈ S. * is stably finite with respect to S if for every quantifier-free Σ-formula φ and -interpretation satisfying φ, there is a -interpretation satisfying φ such that |^| ≤ |^| and |^| < ℵ_0 for every ∈ S. * is smooth with respect to S if for every quantifier-free formula φ, -interpretation satisfying φ, and function κ from S to the class of cardinals such that κ() ≥ |^| for every ∈ S, there is a -interpretation satisfying φ with |^|=κ() for every ∈ S. Next, we define arrangements. Given a set of sorts S ⊆_Σ, finite sets of variables V_ of sort for each ∈ S, and equivalence relations E_ on V_, the arrangement δ_V on V=⋃_∈ SV_ induced by E=⋃_∈ SE_ is ⋀_∈ S[⋀_xE_y(x=y) ⋀_xE_y(x=y)], where E_ is the complement of E_. Let Σ be a many-sorted signature, S ⊆_Σ a finite set, and a Σ-theory. Then is strongly finitely witnessable with respect to S if there is a computable function from the quantifier-free formulas into themselves such that for every quantifier-free formula φ: * φ and w(φ) are -equivalent, where w=((φ))∖(φ); and * given a finite set of variables V and an arrangement δ_V on V, if (φ) δ_V is -satisfiable, then there is a -interpretation satisfying (φ) δ_V such that ^=_((φ) δ_V)^ for every ∈ S. §.§ Notation ℕ denotes the set of non-negative integers. Given m,n ∈ℕ, let [m,n] := {ℓ∈ℕ : m ≤ℓ≤ n} and [n] := [1,n]. Given a set X, let Xn := {Y ⊆ X : |Y| = n}, X^n := {(x_1, …, x_n) : x_i ∈ X for all i ∈ [n]}, and X^* := ⋃_n ∈ℕ X^n. For any x, we denote (x,…,x) by (x)^⊕ n. Given a tuple of tuples (x_1, …, x_n), where x_i∈ X^* for all i, we will often treat it as an element of X^* by flattening the tuple. § THE NONEXISTENCE OF UNICORNS We now state our main theorem, which implies that unicorn theories do not exist. Note that since we are motivated by applications to SMT, we hereafter assume all signatures are countable.[The paper that introduced unicorn theories <cit.> also made this assumption.] Assume that is a Σ-theory, where Σ is countable. If is stably infinite and strongly finitely witnessable, both with respect to S ⊆_Σ, then is smooth with respect to S. For our proof, we define a weaker variant of smoothness, that focuses the requirements only for finite cardinals. A Σ-theory is finitely smooth with respect to S ⊆_Σ if for every quantifier-free formula φ, -interpretation with φ, and function κ from <∩ S to the class of cardinals with |^| ≤κ() < ℵ_0 for every ∈<∩ S, there is a -interpretation with φ with |^|=κ() for every ∈<∩ S. We make use of the following two lemmas. lemmalemseventhree If is stably infinite and strongly finitely witnessable, both with respect to some set of sorts S ⊆_Σ, then is finitely smooth with respect to S. If is strongly finitely witnessable with respect to some set of sorts S ⊆_Σ, then is stably finite with respect to S. 
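For illustration only (this plays no role in the formal development), the arrangement induced by an assignment, and the number of distinct arrangements of n variables of a single sort, can be computed as follows; the count is the Bell number B_n, since arrangements of one sort correspond to equivalence relations on the variables.

from itertools import combinations

def arrangement(values):
    # Given a map variable -> value (one sort), return the induced arrangement:
    # the equalities and disequalities that the assignment satisfies.
    eqs, neqs = [], []
    for x, y in combinations(sorted(values), 2):
        (eqs if values[x] == values[y] else neqs).append((x, y))
    return eqs, neqs

def count_arrangements(n):
    # Number of arrangements of n variables of one sort = number of equivalence
    # relations on n elements (the Bell number B_n), via the Bell triangle.
    if n == 0:
        return 1
    row = [1]
    for _ in range(n - 1):
        new = [row[-1]]
        for v in row:
            new.append(new[-1] + v)
        row = new
    return row[-1]

print(arrangement({"x": "a", "y": "a", "z": "b"}))   # ([('x', 'y')], [('x', 'z'), ('y', 'z')])
print([count_arrangements(n) for n in range(1, 6)])  # [1, 2, 5, 15, 52]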
In light of the above two lemmas, the following theorem implies <Ref>. Assume that is a Σ-theory, where Σ is countable. If is stably finite and finitely smooth, both with respect to some set of sorts S ⊆_Σ, then is smooth with respect to S. The remainder of this section is thus dedicated to the proof of <Ref>. §.§ Motivating the proof In this section, we illustrate the proof technique with a simple example. The goal is to motivate the proof of <Ref> before delving into the details. Suppose is a Σ-theory, where _Σ = {_1, _2}, _Σ = {f}, f has arity (_2, _1), and the only predicate symbols are equalities. Suppose that is also stably finite and finitely smooth, both with respect to S = _Σ. Let φ be a -satisfiable quantifier-free formula and a -interpretation satisfying φ. Let κ be a function from S to the class of cardinals such that κ() ≥ |^| for both ∈ S. For concreteness, suppose |_1^| = |_2^| = 10, κ(_1) = ℵ_0, and κ(_2) = ℵ_1. Our goal is to show that there is a -interpretation ^- satisfying φ with |_1^^-| = ℵ_0 and |_2^^-| = ℵ_1.[The reason for the - superscript in ^- will be clear presently.] A natural thought is to apply some variant of the upward Löwenheim–Skolem theorem, but this doesn't quite work. As will be seen in <Ref>, generalizations of the Löwenheim–Skolem theorem to many-sorted logic do not let us control the cardinalities of _1 and _2 independently. Nevertheless, let us emulate the standard proof technique for the upward Löwenheim–Skolem theorem. Here is the most natural way of generalizing the proof of the upward Löwenheim–Skolem theorem to our setting. For simplicity, assume that already has built-in Skolem functions. We introduce ℵ_0 new constants {c_1,α}_α < ω and ℵ_1 new constants {c_2,α}_α < ω_1. We define a set of formulas Γ = {φ}∪Γ_1, where Γ_1 = { (c_i,α = c_i,β) : i ∈{1,2}; α, β < κ(_i); α≠β}. By <Ref> and finite smoothness, there is a -interpretation satisfying Γ: indeed, were that not true, <Ref> would guarantee that some finite subset of Γ is unsatisfiable; yet such a set would only demand the existence of finitely many new elements, which can be achieved by making use of finite smoothness. Since Γ_1, we have |_1^| ≥ℵ_0 and |_2^| ≥ℵ_1. Since may be too large, we construct a subinterpretation ^- with _1^^- = {c_1,α^}_α < ω∪{f^(c_2,α^)}_α < ω_1 _2^^- = {c_2,α^}_α < ω_1. And using the assumption that has built-in Skolem functions, we can prove that ^- is an elementary subinterpretation of , so ^- Γ; we can then prove that |_2^^-| = ℵ_1, but we unfortunately cannot guarantee that |_1^^-| = ℵ_0. This is because ^- has not only the ℵ_1 elements {c_2,α^}_α < ω_1 of sort _2, but also the elements {f^(c_2,α^)}_α < ω_1 of sort _1. The function symbol f has created a “spillover” of elements from _2 to _1. To fix this, we need to ensure that |{f^(c_2,α^)}_α < ω_1| ≤ℵ_0. To that end, define Γ to instead be {φ}∪Γ_1 ∪Γ_2, where Γ_2 = {f(b) = f(d) : b,d ∈{c_2,α}_α < ω_1}. Then, if there is a model satisfying Γ, we have |{f^(c_2,α^)}_α < ω_1| = 1 ≤ℵ_0. To show Γ is -satisfiable, it suffices by the compactness theorem to show that ∪Γ' is satisfiable for every finite subset Γ' ⊆Γ. So let Γ'_1 ⊆Γ_1 and Γ'_2 ⊆Γ_2 be finite subsets. We will construct a -interpretation ' such that ' {φ}∪Γ'_1 ∪Γ'_2. For concreteness, suppose that {c_1,0, c_1,1, …, c_1,99} and {c_2,0, c_2,1, …, c_2,9} are the new constants that appear in Γ'_1 ∪Γ'_2. By finite smoothness, there is a -interpretation ' satisfying φ such that |_1^'| = 100 and |_2^'| = 901. 
By the pigeonhole principle, there is a subset Y ⊆_2^' with |Y| ≥ 10 such that f^' is constant on Y; if 901 pigeons are put in 100 holes, then some hole has at least 10 pigeons (although this is not true for 900 pigeons). Then, ' can interpret the constants {c_1,0, c_1,1, …, c_1,99} as distinct elements of _1^' and the constants {c_2,0, c_2,1, …, c_2,9} as distinct elements of Y. This proves that Γ is -satisfiable. [Figure: "How we move from interpretation to interpretation." A schematic plot whose x-axis ranges over the possible cardinalities of σ_1 (1, …, 10, …, ℵ_0, ℵ_1, ℵ_2, …) and whose y-axis ranges over those of σ_2, marking the interpretations discussed below.] We illustrate the top level structure of the proof idea in <Ref>, applied to the working example. The x axis represents cardinalities of interpretations of σ_1, and the y axis does the same for σ_2. Starting from the interpretation with |_1^|=|_2^|=10, we construct some interpretation , represented by the array of red dots as there is some degree of uncertainty regarding the precise cardinalities of its domains, with |_1^|≥ℵ_0 and |_2^|≥ℵ_1. From we hope to construct ^-, which has |_1^^-|=ℵ_0 and |_2^^-|=ℵ_1: the latter can be achieved using techniques similar to the many-sorted Löwenheim-Skolem theorems (see <Ref> below), while the former requires the aforementioned pigeonhole principle arguments. The above proof sketch illustrates the main ideas behind the proof of <Ref>. The generalization to more sorts and function symbols requires some extra bookkeeping. More interestingly, the generalization to functions of arity greater than one requires a version of Ramsey's theorem, which is a generalization of the pigeonhole principle. §.§ Ramsey's theorem and generalizations In this section, we state Ramsey's theorem and a generalization of it. Ramsey's theorem is sometimes stated in terms of coloring the edges of hypergraphs, but for our purposes it is more convenient to state it as follows. In the following lemma, the notations Xn and [k] are defined as in Section <ref>. For any k,n, m ∈ℕ, there is an R(k,n,m) ∈ℕ such that for any set X with |X| ≥ R(k,n,m) and function f : Xn→ [k], there is a subset Y ⊆ X with |Y| ≥ m such that f is constant on Yn.
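For a concrete, machine-checkable instance of <Ref>: with k=2 colors and n=2 (coloring 2-subsets), six points already force a monochromatic 3-subset, while five do not; this reflects the classical fact that the corresponding Ramsey number is 6. The following short brute-force check, included for illustration only, verifies both claims.

from itertools import combinations, product

def has_mono_triple(points, coloring):
    # coloring maps each 2-subset (as a frozenset) to a color in {0, 1}
    return any(len({coloring[frozenset(p)] for p in combinations(t, 2)}) == 1
               for t in combinations(points, 3))

def check_all_colorings(size):
    points = range(size)
    pairs = [frozenset(p) for p in combinations(points, 2)]
    return all(has_mono_triple(points, dict(zip(pairs, colors)))
               for colors in product((0, 1), repeat=len(pairs)))

print(check_all_colorings(6))  # True:  every 2-coloring of pairs from 6 points
                               #        contains a monochromatic 3-subset
print(check_all_colorings(5))  # False: some coloring of 5 points avoids it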
Note that in Ramsey's theorem, the set [k] can be replaced by any set of cardinality k. We want to generalize Ramsey's theorem to functions f : X^n → [k]. The most natural generalization would state that there is a large subset Y ⊆ X such that f is constant on Y^n. But this generalization is false, as the following example shows. Let X = ℤ, and let f : X^2 → [2] be given by f(m,n) = 1 if m < n 2 otherwise. Then, f(m,n) ≠ f(n,m) for all m,n ∈ X with m ≠ n. Thus, there is no subset Y ⊆ X with |Y| ≥ 2 such that f is constant on Y^2. To avoid counterexamples like this, our generalization needs to consider the order of the arguments of f. This motivates the following definition. Let (X, <) be a totally ordered set, and let x = (x_1, …, x_n) and y = (y_1, …, y_n) be elements of X^n. We write x∼y if for every 1 ≤ i < j ≤ n we have x_i < x_j ⟺ y_i < y_j and x_i = x_j ⟺ y_i = y_j. Observe that ∼ is an equivalence relation on X^n with finitely many equivalence classes.[The number of equivalence classes is given by the ordered Bell numbers (<https://oeis.org/A000670>).] Now we can state our first generalization of Ramsey's theorem. lemmadirramsey For any k,n,m ∈ℕ, there is an R^*(k,n,m) ∈ℕ such that for any totally ordered set (X, <) with |X| ≥ R^*(k,n,m) and function f : X^n → [k], there is a subset Y ⊆ X with |Y| ≥ m such that f is constant on each ∼-equivalence class of Y^n. Next, we further generalize Ramsey's theorem to multiple functions f_1, …, f_r. lemmamultramsey For any k,m ∈ℕ and n = (n_1, …, n_r) ∈ℕ^r, there is a number R^**(k,n,m) ∈ℕ, such that for any totally ordered set (X, <) with |X| ≥ R^**(k,n,m) and functions f_i : X^n_i→ [k] for i ∈ [r], there is a subset Y ⊆ X with |Y| ≥ m, such that f_i is constant on each ∼-equivalence class of Y^n_i for all i ∈ [r]. §.§ The proof of <Ref> Fix a Σ-theory and a set of sorts S⊆_Σ. Assume that Σ is countable. Suppose that is stably finite and finitely smooth, both with respect to S. Let φ be a -satisfiable quantifier-free formula and a -interpretation satisfying φ. Let κ be a function from S to the class of cardinals such that κ() ≥ |^| for every ∈ S. Write S = {_1, _2, …} and, without loss of generality, assume κ(_1) ≤κ(_2) ≤⋯. For notational convenience, we write all Σ-terms in the form t(x_1, x_2, …),[Even if S is infinite, the denoted term is still finite since each term only has a finite number of variables occurring in it.] where x_i is a tuple of variables of sort _i. If κ(_i) < ℵ_0 for all i, then we are done by the fact is finitely smooth. Otherwise, let ℓ be the largest natural number such that κ(_ℓ) < ℵ_0 if there is such a number, and let ℓ = 0 otherwise. The proof of <Ref> proceeds in two steps. First, we construct a set of formulas Γ such that φ∈Γ and prove that there is a -interpretation satisfying Γ. Second, we prove that has an elementary subinterpretation ^- such that |_i^^-| = κ(_i) for all i. Since φ∈Γ, it will follow that is smooth. The assumption that is stably finite and finitely smooth is used to construct -interpretations of the following form, which will be useful for a compactness argument. There is a -interpretation satisfying φ such that |_i^| = κ(_i) for all i ≤ℓ, and |_i^| is arbitrarily large but finite for all i > ℓ. First, apply stable finiteness to get a -interpretation ' satisfying φ such that |_i^'| ≤ |_i^| and |_i^'| < ℵ_0 for all i. Then, apply finite smoothness to ' with κ' given by κ'(_i) = κ(_i) for all i ≤ℓ and κ'(_i) arbitrarily large but finite for all i > ℓ. 
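The ∼-equivalence classes used in the generalized Ramsey lemmas above are determined by the order type of a tuple (which entries are equal, and how the distinct entries are ordered), and they are easy to enumerate. The following Python sketch is purely illustrative and not part of the proof; the helper name is our own:

```python
from itertools import product

def order_type(t):
    """Canonical representative of the ∼-class of the tuple t: replace each
    entry by its rank among the distinct values occurring in t."""
    ranks = {v: r for r, v in enumerate(sorted(set(t)))}
    return tuple(ranks[v] for v in t)

# Tuples x and y satisfy x ∼ y exactly when order_type(x) == order_type(y).
Y = range(4)
for n in (2, 3):
    classes = {order_type(t) for t in product(Y, repeat=n)}
    print(n, len(classes))  # 3 classes for n = 2, 13 for n = 3 (ordered Bell numbers)
```

In particular, for each n there are only finitely many classes, which is the finiteness fact invoked in the cardinality count below.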
It will be convenient to work with a theory with built-in Skolem functions, so we use <Ref> to get a Σ^*-theory ^* ⊇, where Σ^* ⊇Σ and Σ^* is countable. To construct our set of formulas Γ, we introduce κ(_i) new constants {c_i,α}_α < κ(_i) of sort _i for each i. We consider these constants to be part of an even larger signature Σ' ⊇Σ^*. In what follows, we construct sentences and interpretations over Σ'. Impose an arbitrary total order on each {c_i,α}_α < κ(_i) to be used for the ∼ relation. For the definition below, recall that given a set X, we define X^*= ⋃_n∈ℕX^n. We define a set of formulas Γ = {φ}∪Γ_1 ∪Γ_2 ∪Γ_3, where Γ_1 = { (c_i,α = c_i,β) : 1 ≤ i ≤ |S|; α, β < κ(_i); α≠β} Γ_2 = {t(c_1, …, c_i, b_i+1, b_i+2, …) = t(c_1, …, c_i, d_i+1, d_i+2, …) : . t is a Σ^*-term of sort _i; i > ℓ; c_k, b_k, d_k∈ ({c_k, α}_α < κ(_k))^* . for all k; b_j∼d_j for all j > i } Γ_3 = {x : _i⋁_α < κ(_i) x = c_i,α : i ≤ℓ}. Note that the disjunctions in Γ_3 are finite given the condition i ≤ℓ. lemmaconsistent There is a ^*-interpretation such that Γ. This lemma, whose proof is in the appendix, forms the core of the argument. By the compactness theorem, it suffices to prove that for any finite subset Γ' ⊆Γ, there is a ^*-interpretation ' such that ' Γ'. The tricky part is making ' satisfy Γ' ∩Γ_2. The strategy is to use <Ref> to construct a model ' in which |_i+1^'| is very large in terms of |_i^'| for each i > ℓ. <Ref> will ensure that there is some way of interpreting the constants {c_i,α}_α < κ(_i) so that ' Γ' ∩Γ_2. We are now ready to prove <Ref>. By <Ref>, there is a ^*-in­ter­pre­ta­tion such that Γ. Let B = {t^((c_1)^, (c_2)^, …) : t is a Σ^*-term; c_i∈ ({c_i, α}_α < κ(_i))^* for all i}. For every f ∈_Σ, the set B is closed under f^. Thus, we can define ^- to be the subinterpretation of obtained by restricting the sorts, functions, and predicates to B.[In other words, ^- is the Skolem hull of ⋃_i {c_i,α^}_α < κ(_i) in <cit.>.] Since the Σ^*-theory ^* has built-in Skolem functions, ^- is an elementary subinterpretation of by <Ref>. We claim |_i^^-| = κ(_i) for all i. First, {c_i,α^^-}_α < κ(_i) is a set of κ(_i) distinct elements in _i^^-, because ^- Γ_1. Thus, |_i^^-| ≥κ(_i) for all i. Second, |_i^^-| ≤ |{c_i,α}_α < κ(_i)| = κ(_i) for all i ∈ [ℓ], as ^- Γ_3. Finally, it remains to show that |_i^^-| ≤κ(_i) for all i > ℓ. Inductively suppose that |_j^^-| ≤κ(_j) for all j < i. Now, every element of _i^^- is of the form t^((c_1)^, …, (c_i)^, (c_i+1)^, (c_i+2)^, …), where t is a Σ^*-term of sort _i. Since Σ^* is countable, there are at most ℵ_0 choices for t. We have at most κ(_i) choices for (c_1)^, …, (c_i)^. Finally, we have finitely many choices for (c_i+1)^, (c_i+2)^, … up to ∼-equivalence. Since ^- Γ_2, it follows that there are at most κ(_i) elements of _i^^-. Therefore, ^- is a ^*-interpretation satisfying φ with |_i^^-| = κ(_i) for all i. Taking the reduct of ^- to Σ gives the desired -interpretation. §.§ Applications to theory combination Since <Ref> implies that stably infinite and strongly finitely witnessable theories are strongly polite, we can restate the theorem on strongly polite theory combination with weaker hypotheses. This was already proved in <cit.> via a different method, but is now obtained as an immediate corollary of <Ref>. Let Σ_1 and Σ_2 be disjoint countable signatures. Let _1 and _2 be Σ_1- and Σ_2-theories respectively, and let φ_1 and φ_2 be quantifier-free Σ_1- and Σ_2-formulas respectively. 
Suppose _1 is stably infinite and strongly finitely witnessable, both with respect to _Σ_1∩_Σ_2, and let V = __Σ_1∩_Σ_2((φ_1)). Then, φ_1 φ_2 is (_1 ∪_2)-satisfiable if and only if there is an arrangement δ_V on V such that (φ_1) δ_V is _1-satisfiable and φ_2 δ_V is _2-satisfiable. We can also use our results to give a new characterization of shiny theories, which allows us to restate shiny combination theorem with weaker hypotheses. To define shininess, we first need a few other notions. Let Σ be a signature with _Σ finite, and let S ⊆_Σ. Write S = {_1, …, _n}. Then, the S-size of a Σ-interpretation is given by the tuple (|_1^|, …, |_n^|). Such n-tuples are partially ordered by the product order: (x_1, …, x_n) ≼ (y_1, …, y_n) if and only if x_i ≤ y_i for all i ∈ [n]. Given a quantifier-free formula φ, let S(φ) be the set of minimal S-sizes of -interpretations satisfying φ. It follows from results in <cit.> that S(φ) is a finite set of tuples.[<cit.> proves this assuming that is stably finite, using Hilbert's basis theorem. This assumption can be dropped by using the fact that if (X, ≤) is a well-quasi-order, then so is (X^n, ≺), where ≺ is the product order. Here X is the class of cardinals.] Then, we say a Σ-theory is shiny with respect to some subset of sorts S ⊆_Σ if _Σ is finite, is stably finite and smooth, both with respect to S, and S is computable. <Ref> implies that we can replace smoothness by finite smoothness, which may make it easier to prove that some theories are shiny. We can therefore improve the shiny theory combination theorem from <cit.> as an immediate corollary of <Ref>. Let Σ_1 and Σ_2 be disjoint countable signatures, where _Σ_1 and _Σ_2 are finite. Let _1 and _2 be Σ_1- and Σ_2-theories respectively, and assume the satisfiability problems for quantifier-free formulas of both _1 and _2 are decidable. Suppose _1 is stably finite and finitely smooth, both with respect to _Σ_1∩_Σ_2, and _1_Σ_1∩_Σ_2 is computable. Then, the satisfiability problem for quantifier-free formulas of _1∪_2 is decidable. § MANY-SORTED LÖWENHEIM–SKOLEM THEOREMS In this section, we state many-sorted generalizations of the Löwen­heim–Skolem theorem. Our first results, in <Ref>, hold with no assumptions on the signature. Later, in <Ref>, we state stronger results for restricted signatures, which we then use for a many-sorted variant of the Łoś–Vaught test in <Ref>. But first, in <Ref>, we explain the limitations of relying solely on translations to single-sorted first-order logic. §.§ Lost in translation We may transform a many-sorted signature into a single-sorted signature by adding unary predicates signifying the sorts; of course, some restrictions are necessary, distinctness of sorts, etc. This procedure <cit.> is often used to lift results from single-sorted to many-sorted logic. As one example, standard versions of the downward Löwenheim–Skolem theorem for many-sorted logic, found in <cit.>, are proven using this translation; we can, however, strengthen these results while still using only translations: [Downward]theoremtranslateddownwardmanysorted Let Σ be a many-sorted signature with |_Σ|<ℵ_0. Suppose we have a Σ-structure with max{|^| : ∈_Σ}≥ℵ_0, a cardinal κ satisfying max{|Σ|, ℵ_0}≤κ≤min{|^| : ∈≥}, and sets A_⊆^ with |A_|≤κ for each σ∈_Σ. Then, there is an elementary substructure of such that ^=^ for every ∈<, ℵ_0≤|^|≤κ for all ∈≥, |^|=κ for some ∈_Σ, and A_⊆^ for all ∈_Σ. [Upward]theoremtranslatedULS Let Σ be a many-sorted signature with |_Σ|<ℵ_0. 
Suppose we have a Σ-structure with max{|^| : ∈_Σ}≥ℵ_0 and a cardinal κ≥max{|Σ|, max{|^| : ∈_Σ}}. Then, there is a Σ-structure containing as an elementary substructure such that ^=^ for all ∈<, ℵ_0≤|^|≤κ for all ∈≥, and |^|=κ for some sort ∈_Σ.

As convenient as translation arguments are, the above Löwenheim–Skolem theorems seem unsatisfactory, as they only allow us to choose a single cardinal, rather than one for each sort.

§.§ Downward, upward, and combined versions

The following are generalizations of the downward and upward Löwenheim–Skolem theorems to many-sorted logic, which are proved by adapting the proofs of the single-sorted case. Notice that we set all infinite domains to the same cardinality, while finite domains preserve their cardinalities.

[Downward]theoremgeneralizedLST Fix a first-order many-sorted signature Σ. Suppose we have a Σ-structure , a cardinal κ such that max{ℵ_0, |Σ|}≤κ≤min{|^| : ∈≥}, and sets A_⊆^ with |A_| ≤κ for each ∈≥. Then, there is an elementary substructure of that satisfies |^|=κ and ^⊇ A_ for every ∈≥, and also ^=^ for every ∈<.

[Upward]theoremGeneralizationULST Fix a first-order many-sorted signature Σ. Given a Σ-structure , pick a cardinal κ≥max{|Σ|, ℵ_0, sup{|^| : ∈≥}}. Then, there is a Σ-structure containing as an elementary substructure that satisfies |^|=κ for every ∈≥, and also ^ = ^ for every ∈<.

<Ref> can be combined to yield yet another variant of the Löwenheim–Skolem theorem, which may be called the combined version.

[Combined]corollarymiddleLST Fix a many-sorted signature Σ. Given a Σ-structure , pick a cardinal κ≥max{|Σ|, ℵ_0}. Then, there is a Σ-structure elementarily equivalent to with |^|=κ for every ∈≥, and ^ = ^ for ∈<.

[Figure: Illustration of <Ref>.]

We illustrate <Ref> in <Ref>. In black, we represent the cardinalities of the resulting structure, and in red, those of the original one. When they coincide, we use marks split between the two colors. This representation shows a set of sorts in the horizontal axis, and the heights of the marks represent the cardinalities of the respective domains. We clearly separate cardinals larger and smaller than ℵ_0 with a rule.
Assume, without loss of generality, that initially _1…_n have finite cardinalities and ^'_1 has the least and ^'_m the greatest infinite cardinality.[For greater clarity, the diagram only depicts the cases where there are finitely many sorts and the signature is countable.] <Ref> allows us to pick an infinite cardinal κ in between the least and greatest infinite cardinalities, and set all infinite cardinlaities in the interpretation to κ. The above theorems require that the desired cardinalities of the infinite sorts are all equal. The following example shows that this limitation is necessary. Take the signature Σ with sorts S={_1, _2}, no predicates, and only one function f of arity (_1,_2). Take the Σ-structure with: _1^ and _2^ of cardinality ℵ_1, and f^ a bijection. It is then true that φ_φ_, where φ_=x : _1y : _1[[f(x)=f(y)]→[x=y]] and φ_=u : _2x : _1[f(x)=u], codifying that f is injective and surjective respectively. Notice then that, although max{|Σ|, ℵ_0}=ℵ_0, there cannot be an elementary substructure of with |_1^|=ℵ_0 and |_2^|=ℵ_1: for if φ_∧φ_, f^ must be a bijection between _1^ and _2^. A similar argument shows that the corresponding generalization of the upwards theorem fails as well. §.§ A stronger result for split signatures <Ref> relies on “mixing sorts” by using a function symbol with arities spanning different sorts. We can state stronger versions of the many-sorted Löwenheim–Skolem theorems when such mixing of sorts is restricted. A signature Σ is said to be split by Λ into a family of signatures {Σ_λ : λ∈Λ} if Λ is a partition of _Σ, _Σ_λ=λ for each λ∈Λ, _Σ=⋃_λ∈Λ_Σ_λ, and _Σ=⋃_λ∈Λ_Σ_λ. If Σ is split by Λ and each λ∈Λ is a singleton, then we say that Σ is completely split by Λ. If Σ is split by Λ, then the function/predicate symbols of Σ_λ must be disjoint from Σ_λ' for λ≠λ'. Given a partition Λ of _Σ and λ∈Λ, let ≥(λ)=≥∩λ. We state the downward, upward, and combined theorems for split signatures. [Downward]theoremdownwardsplitLS Fix a first-order many-sorted signature Σ split by Λ. Suppose we have a Σ-structure , a cardinal κ_λ such that max{ℵ_0, |Σ_λ|}≤κ_λ≤min{|^| : ∈≥(λ)} for each λ∈Λ, and sets A_⊆^ with |A_| ≤κ_λ for each ∈≥(λ). Then, there is an elementary substructure of that satisfies |^|=κ_λ and ^⊇ A_ for ∈≥(λ), and ^=^ for ∈<. [Upward]theoremupwardsplitLS Suppose Σ is split by Λ. Given a Σ-structure , pick a cardinal κ_λ≥max{|Σ_λ|, ℵ_0, sup{|^| : ∈≥(λ)}} for each λ∈Λ. Then, there is a Σ-structure containing as an elementary substructure that satisfies |^|=κ_λ for ∈≥(λ), and ^ = ^ for ∈<. [Combined]corollarymiddlesplitLS Suppose Σ is split by Λ. Given a Σ-structure , pick a cardinal κ_λ≥max{|Σ_λ|, ℵ_0} for each λ∈Λ. Then, there is a Σ-structure elementarily equivalent to with |^|=κ_λ for every ∈≥(λ), and also ^ = ^ for every ∈<. 
[Figure: Illustration of <Ref>.]

<Ref> is illustrated in <Ref>. We add sorts S^''={^''_1,…,^''_m}, and assume our signature is split into Σ_λ_1 and Σ_λ_2, where ≥(λ_1)={^'_1,…,^'_m} and ≥(λ_2)=S^'' (the sorts with finite cardinalities can belong to either). Then, κ^' is the cardinal associated with Σ_λ_1, and κ^'' with Σ_λ_2. Thus, we are able to choose a cardinality for each class of sorts.

§.§ An application: the Łoś–Vaught test

We describe an application of our Löwenheim–Skolem theorems to theory completeness: the Łoś–Vaught test. This is particularly relevant to SMT, as if a complete theory has a decidable set of axioms, then it is decidable whether ⊢_φ <cit.>. The single-sorted Łoś–Vaught test is the following.

Let Σ be a signature and κ a function from _Σ to the class of cardinals. A Σ-theory is κ-categorical if it has exactly one model (up to isomorphism) with the property that |^| = κ() for every ∈_Σ. If there is only one sort ∈_Σ, we abuse notation by using κ to denote the cardinal κ().

Suppose Σ is single-sorted and is a Σ-theory with only infinite models. If is κ-categorical for some κ≥ |Σ|, then is complete.

The Łoś–Vaught test is quite useful, e.g., for the completeness of dense linear orders without endpoints and algebraically closed fields. We generalize it to many sorts. Translating to one-sorted logic and using <Ref> gives us:

corollarylosvaughttranslation Let Σ be a signature with |_Σ|<ℵ_0. Suppose is a Σ-theory, all of whose models satisfy max{|^| : ∈_Σ}≥ℵ_0. Suppose further that for some cardinal κ≥ |Σ|, has exactly one model (up to isomorphism) such that max{|^| : ∈_Σ} = κ. Then, is complete.

This is not the result one would hope for, because it excludes some many-sorted κ-categorical theories, as the following example demonstrates.
Suppose Σ has S = {_1, _2}, no predicate symbols, and function symbols 0, 1, +, and ×, of the expected arities. Let = 𝖠𝖢𝖥_0∪{ψ^_2_≥ n : n ∈ℕ}, where 𝖠𝖢𝖥_0 is the theory of algebraically closed fields of characteristic zero (with respect to _1) and ψ^_≥ n=x_1 : ⋯x_n : ⋀_1≤ i<j≤ n(x_i=x_j), which asserts that there are at least n elements of sort . is κ-categorical, where κ(_1) = ℵ_1 and κ(_2) = ℵ_0. But is also κ'-categorical, where κ'(_1) = κ'(_2) = ℵ_1. Thus, has multiple models satisfying max{|^| : ∈_Σ} = ℵ_1. Similar reasoning holds for other infinite cardinals, so <Ref> does not apply. For completely split signatures, we prove a more natural Łoś–Vaught test: A Σ-structure is strongly infinite if |^| ≥ℵ_0 for all ∈_Σ. theoremlosvaught Suppose Σ is completely split into {Σ_ : ∈_Σ}, is a Σ-theory all of whose models are strongly infinite, and is κ-categorical for some function κ such that κ() ≥ |Σ_| for every ∈_Σ. Then, is complete. The assumption that Σ is completely split is necessary for <Ref>: Let Σ have sorts _1, _2, and function symbol f of arity (_1,_2). Let = {ψ^_1_≥ n : n ∈ℕ}∪{ψ^_2_≥ n : n ∈ℕ}∪{φ_x : _1y : _1 [f(x)=f(y)]}. In , _1,_2 are infinite, and f is injective or constant. is κ-categorical for κ(_1) = ℵ_1,κ(_2) = ℵ_0, but not complete, due to the sentence ∀ x ,y : _1 .f(x)=f(y). This does not contradict <Ref>, as Σ is not completely split. § CONCLUSION We closed the problem of the existence of unicorn theories and discussed applications to SMT. This included a result similar to the Löwenheim–Skolem theorem, which inspired us to investigate the adaptation of this theorem to many-sorted logic. We also obtained a many-sorted version of the Łoś–Vaught test. In future work, we plan to investigate whether <Ref> can be extended to uncountable signatures. More broadly, we intend to continue studying the relationships among many-sorted model-theoretic properties related to SMT. §.§.§ This work was supported in part by the Stanford Center for Automated Reasoning, NSF-BSF grant numbers 2110397 (NSF) and 2020704 (BSF), ISF grant 619/21, and the Colman-Soref fellowship. The first author thanks the organizers of the CURIS research program. splncs04 left=20mm, right=20mm, bottom=22mm, top=15mm § APPENDIX § PROOF OF <REF> This lemma was used in <cit.>, and is explicitly found, with a proof, in its extended technical report <cit.>, as Lemma 73. For completeness, we include its proof in this appendix. * Let be stably infinite and strongly finitely witnessable, both with respect to S. Let φ be a -satisfiable quantifier-free formula, a -interpretation satisfying φ, and κ a function from <∩ S to the class of cardinals such that |^| ≤κ() < ℵ_0 for every ∈<∩ S. We have w(φ), where w=((φ))∖(φ). Hence, by modifying the interpretation of the variables in w, we obtain a -interpretation ' satisfying (φ). Let V = ((φ)), and let δ_V be the arrangement on V induced by the equalities in '. Then, (φ) δ_V is -satisfiable, so there exists a -interpretation ” satisfying (φ) δ_V such that ^”=_((φ) δ_V)^” for every ∈ S. The map from ^” to ^' given by x^”↦ x^', where x ∈_((φ)), is well-defined and injective, so we have |^”| ≤ |^'| = |^| for every ∈ S. In particular, κ() - |^”| ≥κ() - |^| ≥ 0. For each ∈<∩ S, introduce κ() - |^”| fresh variables W_ of sort . Let W = ⋃_∈<∩ S W_, and extend the arrangement δ_V to an arrangement δ_V ∪ W by asserting that all variables in W are distinct from each other and other variables in V. 
Since is stably infinite, (φ) δ_V ∪ W is -satisfiable, so there exists a -interpretation satisfying (φ) δ_V ∪ W such that ^=_((φ) δ_V ∪ W)^ for every ∈ S. Since satisfies (φ), it also satisfies φ. We also have |^| = |^”| + |W_| = κ() for every ∈<∩ S. Therefore, is finitely smooth with respect to S. § PROOF OF <REF> * Let R^*(k,n,m) = R(k^n^n,n,m+n-1). For any function f : X^n → [k], let f_ρ(x_1, …, x_n) = f(x_ρ(1), …, x_ρ(n)), where ρ : [n] → [n] is an arbitrary function. Fix an ordering ρ_1, …, ρ_n^n on the set of functions from [n] to itself. Then, let F : Xn→ [k]^n^n be given by, for x_1 < … < x_n, F({x_1, …, x_n}) = (f_ρ_1(x_1, …, x_n), …, f_ρ_n^n(x_1, …, x_n)). By <Ref>, for any totally ordered set (X, <) with |X| ≥ R^*(k,n,m), there is a subset Y' ⊆ X with |Y'| ≥ m+n-1 such that F is constant on Y'n. Let Y ⊆ Y' be the initial m elements of Y' according to the order on Y' inherited from X. Let x, y∈ Y^n with x∼y, and let the distinct elements of x be x_1 < … < x_ℓ and let those of y be y_1 < … < y_ℓ for some ℓ∈ [n]. Add additional elements from Y' to get {x_1, …, x_n}⊇{x_1, …, x_ℓ} and {y_1, …, y_n}⊇{y_1, …, y_ℓ}, where x_1 < … < x_n and y_1 < … < y_n. Then, F({x_1, …, x_n}) = F({y_1, …, y_n}), since {x_1, …, x_n}, {y_1, …, y_n}∈Y'n. Hence, f_ρ_i(x_1, …, x_n) = f_ρ_i(y_1, …, y_n) for all i ∈ [n^n]. In particular, let ρ_i be such that (x_ρ_i(1), …, x_ρ_i(n)) = x and (y_ρ_i(1), …, y_ρ_i(n)) = y. Then, f(x) = f_ρ_i(x_1, …, x_n) = f_ρ_i(y_1, …, y_n) = f(y), as desired. § PROOF OF <REF> * Let n = n_1 + … + n_r, and let R^**(k,n,m) = R^*(k^r, n, m+1). Given functions f_i : X^n_i→ [k], let F : X^n → [k]^r be given by F(x_1, …, x_r) = (f_1(x_1), …, f_r(x_r)). As proven in <Ref>, for any totally ordered set (X, <) with |X| ≥ R^**(k,n,m), there is a subset Y' ⊆ X with |Y'| ≥ m+1 such that F is constant on ∼-equivalence classes of Y'^n. Let y' ∈ Y' be the maximum element according to the order on Y' inherited from X, and let Y = Y' ∖{y'}. Given some i ∈ [r], let x, y∈ Y^n_i with x∼y. Then, ((y')^⊕ n_1, …, (y')^⊕ n_i-1, x, (y')^⊕ n_i+1, …, (y')^⊕ n_r) ∼ ((y')^⊕ n_1, …, (y')^⊕ n_i-1, y, (y')^⊕ n_i+1, …, (y')^⊕ n_r), since y' is strictly greater than every element in x and y. Therefore, F((y')^⊕ n_1, …, (y')^⊕ n_i-1, x, (y')^⊕ n_i+1, …, (y')^⊕ n_r) = F((y')^⊕ n_1, …, (y')^⊕ n_i-1, y, (y')^⊕ n_i+1, …, (y')^⊕ n_r), so f_i(x) = f_i(y), as desired. § PROOF OF <REF> * By the compactness theorem, it suffices to prove that ^* ∪Γ' is satisfiable for every finite subset Γ' ⊆Γ. So let Γ'_1 ⊆Γ_1, Γ'_2 ⊆Γ_2, and Γ'_3 ⊆Γ_3 be finite subsets. We will construct a ^*-interpretation ' such that ' {φ}∪Γ'_1 ∪Γ'_2 ∪Γ'_3. The tricky part is making ' satisfy Γ'_2. The strategy is to use <Ref> to construct a model ' in which |_i+1^'| is very large in terms of |_i^'| for each i > ℓ. <Ref> will ensure that there is some way of interpreting the constants {c_i,α}_α < κ(_i) so that ' Γ'_2. For each i, let C_i = {c_i,α : α < κ(_i); c_i,α appears in Γ'_1 ∪Γ'_2 ∪Γ'_3}. Since |⋃_i C_i| < ℵ_0, there is a maximum natural number i such that C_i ≠∅, which we denote s. For each i > ℓ, let T_i = {t : t is a Σ^*-term of sort _k appearing in Γ'_2 for some k < i}. Since T_i ⊆ T_i+1 for each i, and each T_i is finite, we can enumerate the terms of ⋃_i T_i so that for each i, there is an r_i such that t_1, …, t_r_i is an enumeration of T_i. Let ' be a -interpretation satisfying φ obtained according to <Ref>, where |_i^'| when i > ℓ is specified as follows. Suppose inductively that |_k^'| has been determined for all k < i. 
Given a term of the form t(x_1, …, x_s), let m_t,i = |_1^'|^|x_1|×…×|_i-1^'|^|x_i-1|× |C_i+1|^|x_i+1|×…× |C_s|^|x_s|, and let n_t,i = |x_i|. Let n_i = ((n_t_1, i)^⊕ m_t_1,i, …, (n_t_r_i, i)^⊕ m_t_r_i,i).[Recall that (x)^⊕ n denotes the tuple consisting of x repeated n times.] Then, choose |_i^'| so that |_i^'| ≥ R^**(|_ℓ+1^'| + … + |_i-1^'|, n_i, |C_i|), where R^** is the function from <Ref>. Now, we specify how ' interprets the constants C_i. Note that it does not matter how ' interprets the constants in {c_i,α}_α < κ(_i)∖ C_i, since these constants do not appear in {φ}∪Γ'_1 ∪Γ'_2 ∪Γ'_3. Impose an arbitrary total order on each _i^' to be used for the ∼ relation. If i ≤ℓ, then interpret the elements of C_i as distinct elements of _i^', which is possible because |C_i| ≤ |{c_i,α}_α < κ(_i)| = κ(_i) = |_i^'|. Otherwise, if i > ℓ, we specify the interpretation of the constants C_i by induction on s-i. That is, we specify the interpretation of C_s, then that of C_s-1, and so on. Suppose that the interpretation of the constants C_j has been determined for all j > i. Given a term t ∈ T_i, define the following family of functions in (_i^')^n_t,i→_ℓ+1^'∪…∪_i-1^': 𝔣_t,i = {a↦ t^'(a_1, …, a_i-1, a, (c_i+1)^', …, (c_s)^') : . . a_k∈_k^' for all k < i; c_j∈ (C_j)^* for all j > i}. Observe that |𝔣_t,i| = m_t,i. By our choice of |_i^'|, we can apply <Ref> to the functions 𝔣_i 𝔣_t_1,i∪…∪𝔣_t_r_i,i to conclude that there is a subset Y_i ⊆_i^' with |Y_i| ≥ |C_i| such that each f ∈𝔣_i is constant on ∼-equivalence classes of Y_i^n, where n is the arity of f. Then, interpret the constants C_i as distinct elements of Y_i in a way that is compatible with their respective total orders (i.e., c_i, α < c_i, β if and only if c_i, α^' < c_i, β^'). This completes the description of '. It remains to show that ' Γ'_1 ∪Γ'_2 ∪Γ'_3. First, we show that ' Γ'_1. By construction, ' interprets the constants C_i as distinct elements of _i^' for all i ∈ [s]. Therefore, ' Γ'_1. Second, we show that ' Γ'_2. For each i > ℓ, let Γ'_2^i = {t(c_1, …, c_i-1, b_i, c_i+1, …, c_s) = t(c_1, …, c_i-1, d_i, c_i+1, …, c_s) : . . t ∈ T_i; c_k, b_k, d_k∈ (C_k)^* for all k; b_i∼d_i}. Since C_i^'⊆ Y_i and each f ∈𝔣_i is constant on ∼-equivalence classes of Y^n, where n is the arity of f, we have t^'((c_1)^', …, (c_i-1)^', (b_i)^', (c_i+1)^', …, (c_s)^') = t^'((c_1)^', …, (c_i-1)^', (d_i)^', (c_i+1)^', …, (c_s)^') for each t ∈ T_i whenever b_i∼d_i. Thus, ' Γ'_2^i for all i > ℓ. Now, observe that ⋃_i>ℓΓ'_2^i entails Γ'_2. Therefore, ' Γ'_2. Finally, we show that ' Γ'_3. Suppose Γ'_3 contains a sentence of the form x ∈_i⋁_α < κ(_i) x = c_i,α, where i ≤ℓ. Then, C_i = {c_i, α}_α < κ(_i), so |C_i| = κ(_i) = |_i^'|. Since ' interprets the constants C_i distinctly, C_i^' = _i^'. Thus, the sentence above is equivalent to the fact that every element of _i^' is denoted by some constant in C_i. Therefore, ' Γ'_3. § PROOF OF <REF> * Consider the -structure with: domain ⋃_∈_Σ^; for every function f of arity (_1, …, _n, ) (in Σ), f^ equals f^ when restricted to _1^×⋯×_n^, and is arbitrary otherwise; for every predicate P of arity (_1, …, _n) (in Σ), P^ equals P^; and a∈ P_^ iff a∈^. Notice that, because is a Σ-structure, we get satisfies the following additional formulas: I for every ∈_Σ, xP_(x); II for any two distinct , ∈_Σ, x(P_(x)→ P_(x)); III if _Σ={_1,…,_n}, xP__1(x)∨⋯∨ P__n(x); IV for f of arity (_1, …, _n, ), x_1,⋯, x_n⋀_i=1^nP__i(x_i)→ P_(f(x_1, … , x_n)) (with some obvious care being necessary if n=0). 
Now, |Σ|=|| since _ must be in bijection with _Σ, and _ with _Σ∪_Σ; and because has an infinite domain and κ≤min{|^|:∈_Σ}, is infinite and has cardinality greater than κ. Taking A=⋃_∈_ΣA_, |A|≤κ, and we can therefore apply the classical downward Löwenheim–Skolem to obtain an elementary substructure of with domain of cardinality κ, and containing A. Finally, we define a Σ-structure by making: ^=P_^ for every sort (these are nonempty and disjoint, given satisfies the sets of formulas in I and II); for a function f of arity (_1, …, _n, ), f^ equals f^ restricted to _1^×⋯×_n^ (which is well-defined because satisfies the set of formulas in IV); and, for a predicate P of arity (_1, …, _n), P^ equals the intersection of P^ and _1^×⋯×_n^, making of a substructure of . It it easy to prove that is elementary equivalent to , and therefore ^ has the same cardinality as ^ if the latter is finite (and thus ^=^), and is infinite if the latter is infinite. We also get that, since A is contained in , A_ is contained in ^. Finally, ⋃_∈_Σ^ equals, given satisfies the formula in III, the domain of , meaning ∑_∈_Σ|^|=κ; given _Σ is finite, this means some domain of has cardinality κ, finishing the proof. § PROOF OF <REF> * Construct the -structure as in the proof of <Ref>, and since the cardinality of the domain of is max{|^|:∈_Σ} we can apply the classical upward Löwenheim–Skolem to obtain a -structure with an elementary substructure isomorphic to , and a domain of cardinality κ. Furthermore, since _Σ is finite (say it equals {_1, … , _n}), then satisfies x(P__1(x)∨⋯∨ P__n(x)), and so must . Translating back to a Σ-structure , again as done in the proof of <Ref>, we obtain that has an elementary substructure isomorphic to (and so |^|=|^| for every ∈_Σ such that ^ is finite); and for some ∈_Σ, has cardinality κ, because every element of must be in some domain of . § PROOFS OF <REF> These results are obtained as corollaries from <Ref>, whose proofs can be found in the following sections. The reason being that every signature can be trivially split to a singleton partition. § PROOF OF <REF> Suppose Σ is split by Λ. A Σ-formula is said to be a generalized Λ-cube if it is a conjunction ⋀_i=1^nφ_i, where each φ_i is a Σ_λ_i-formula, where λ_i∈Λ and for i≠j, we have λ_i≠λ_j; similarly, a Σ-formula is said to be a generalized Λ-clause if it is a disjunction ⋁_i=1^mφ_i, where each φ_i is a Σ_λ_i-formula, where λ_i∈Λ and for i≠j, we have λ_i≠λ_j. A Σ-formula that is a disjunction of generalized Λ-cubes (respectively, a conjunction of generalized Λ-clauses) is said to be in generalized disjunctive Λ-normal form, or Λ-GDNF (respectively, generalized conjunctive normal Λ-form, or Λ-GCNF).[Whenever Λ is clear from context, we will omit it from the nomenclature. ] We start with some technical lemmas. lemmaGDNFandDCNF Suppose Σ is split into {Σ_λ:λ∈Λ}; then a formula that is equivalent to a formula in GDNF is also equivalent to a formula in GCNF, and vice-versa. We prove that if φ that is equivalent to a formula in GDNF is also equivalent to a formula in GCNF: the reciprocal has an analogous proof. So, suppose that φ is equivalent to ψ=⋁_i=1^m⋀_j=1^n_iφ^i_j, and define the number of generalized literals in ψ as n=∑_i=1^mn_i: notice that any quantifiers in ψ must be inside one of the φ_j^i. We proceed by induction on n: if n=1, ψ is already in GCNF as well, so there is nothing to prove. 
Suppose then that the result holds for some n≥ 1, and take a generalized cube ⋀_i=1^n_iφ^i_j with n_i>1 (if there are none, again ψ is already in GCNF): without loss of generality, assume that i=m, and that m>1 (otherwise ψ is again in GCNF, and there is nothing to be done). Then we have that, denoting by θ≡θ^' the fact that θ and θ^' are equivalent, ψ=⋁_i=1^m⋀_j=1^n_iφ^i_j=[⋁_i=1^m-1⋀_j=1^n_iφ^i_j]∨[(⋀_j=1^n_m-1φ^m_j)∧φ^m_n_m]≡[(⋁_i=1^m-1⋀_j=1^n_iφ^i_j)∨(⋀_j=1^n_m-1φ^m_j)]∧[(⋁_i=1^m-1⋀_j=1^n_iφ^i_j)∨φ^m_n_m], by using the distributivity of disjunction over conjunction. Now, in the second line, the formulas on both sides of the conjunction are in GDNF and have a number of generalized literals strictly less than that of ψ, so they are equivalent by induction hypothesis to formulas ψ_1 and ψ_2 in GCNF. To summarize, ψ is then equivalent to ψ_1∧ψ_2, which is itself in GCNF, and so φ is equivalent to a formula in GCNF. lemmaGDNFexists If Σ is a split signature, each of its formulas is equivalent to a formula in GDNF. It is well known that any first-order Σ-formula φ is equivalent to a formula in prenex normal form (PNF), that is, to a formula 1x_1⋯nx_n φ, where Q_i∈{∀, ∃} and φ is quantifier free; without loss of generality, let us assume that all Σ-formulas are in PNF, and we write the proof by induction on n. If n=0, φ is itself quantifier-free: writing φ in disjunctive normal form (DNF), and using the commutativity of conjunction to place literals of the same signature Σ_λ together (notice every literal on Σ is a literal of one of the Σ_λ because Σ is split), we obtain φ is equivalent to a formula in GDNF. Now, assume the result holds for n≥ 1, and then it is true that φ=1x_1⋯n+1x_n+1 φ=1x_1 (2x_2⋯n+1x_n+1 φ)=1x_1 ⋁_i=1^p⋀_j=1^qφ_j^i by induction hypothesis, where φ_j^i are Σ_λ_j-formulas.[Notice that if a generalized cube of a formula in GDNF does not include formulas of exactly the same signatures as the other generalized cubes, we can always add tautologies to make the treatment of that formula more uniform.] Now, we have two cases to consider. * If Q_1=∃ and x_1 is of sort, without loss of generality, in Σ_λ_q, we have that φ=x_1⋁_i=1^p⋀_j=1^qφ_j^i=⋁_i=1^px_1⋀_j=1^qφ_j^i=⋁_i=1^px_1φ_q^i∧⋀_j=1^q-1φ_j^i, since φ_j^i for 1≤ j≤ q-1 cannot have the variable x_1. Of course, we are then done. * Now, suppose Q_1=∀. Because of <Ref> and our induction hypothesis, we know that we can rewrite the formula 2x_2⋯n+1x_n+1 φ as ⋀_i=1^P⋁_j=1^Qψ_j^i for some Σ_λ_j-formulas ψ_j^i. Then, assuming again without loss of generality that x_1 is of sort in Σ_λ_Q, φ=x_1⋀_i=1^P⋁_j=1^Qψ_j^i=⋀_i=1^Px_1⋁_j=1^Qψ_j^i=⋀_i=1^Px_1φ_Q^i∨⋁_j=1^Q-1φ_j^i, which is in GCNF. Once again applying <Ref>, we obtain φ may be written in GDNF, as we wanted to prove. * Given a formula φ and free variable x ∈(φ), let f^x_φ be a Skolem function, meaning that (, ν)xφ implies (, μ)φ, where μ differs from ν at most on x, (φ)={x, y_1, … , y_n}, and μ(x)=f^x_φ(ν(y_1), … , ν(y_n)). Skolem functions can be proven to exist as in single-sorted logic. For each ∈≥(λ), we take a set A^0_ such that A_⊆ A_^0⊆^ and |A^0_| = κ_λ, which is possible given that the A_ in the statement of the theorem have cardinality at most κ_λ; if ∈<(λ), we make A_^0=^. We then define, for every m∈ℕ, if is a sort of Σ_λ, A^m+1_=A^m_∪{f_φ^x(a_1,…, a_n) :φ is a Σ_λ-formula, (φ)= {x, y_1,…, y_n}, x is of sort , y_i is of sort _i, and a_i∈ A^m__i}. 
We define a Σ-structure , where ^=⋃_n∈ℕA_^m; f^, with f of arity (_1, …, _n, ), equals f^ restricted to _1^×⋯×_n^; and P^, with P of arity (_1, …, _n), equals P^∩(_1^×⋯×_n^). We claim that if ∈Σ_λ, then |^|=κ_λ. Since κ_λ≥ℵ_0, it suffices to show that |A^m_|=κ_λ for each m∈ℕ. This is true for m=0 by hypothesis. The cardinality of the set Σ_λ of formulas on the signature Σ_λ is at most max{|Σ_λ|, ℵ_0}≤κ_λ. Thus, κ_λ = |A_^m| ≤ |A^m+1_| ≤ |A_^m|+∑_n∈ℕ|Σ_λ|×|A_^m|^n≤κ_λ, so |A^m+1_| = κ_λ, as desired. Now, it remains for us to show that is an elementary substructure of , clearly being a substructure. We wish to apply <Ref>, so take a formula φ, a free variable x in φ (the other variables of φ being y_1 through y_n, of sorts, respectively, _1 through _n), and suppose that (, ν)xφ. Because of <Ref>, we know we can write φ as ⋁_i=1^p⋀_j=1^qφ_j^i, where φ_j^i is a Σ_λ_j-formula: without loss of generality, suppose x is of sort in λ_q; then xφ=⋁_i=1^pxφ^i_q∧⋀_j=1^q-1φ_j^i, and thus (,ν)xφ^i_q∧⋀_j=1^q-1φ_j^i for some 1≤ i≤ p. Then the function f^x_φ^i_q, calculated on ν(y_1)∈ A_^m_1, … , ν(y_n)∈ A_^m_n, returns a witness for xφ^i_q in A_^max{m_1,…, m_n}+1, a subset of , which finishes our proof. § PROOF OF <REF> * For each ∈_Σ, let P_ be a set of new constants of sort , where |P_| has cardinality |^|. For each λ∈Λ and ∈≥(λ), let Q_ be a set of new constants of sort , where |Q_| has cardinality κ_λ. Let Σ_P be the signature obtained by adding the sets P_ to Σ, and let Σ_Q be the signature obtained by adding the sets Q_ to Σ_P. We extend into a Σ_P-structure by interpreting the sorts, functions, and predicates in Σ in the same way as in , and defining b_^, for b_∈ P_, so that the mapping b_∈ P_↦ b_^∈^ is bijective. Now, let Γ be the set of all Σ_P-sentences satisfied by , and let Γ=Γ∪{(c_=d_) : c_, d_∈ Q_, with c_≠ d_, for all ∈≥}. By <Ref>, Γ is consistent. Let be a model of Γ. Then, for every ∈<, we have |^| = |^|, and for every ∈≥(λ), we have |^| ≥κ_λ. By <Ref>, there is an elementary substructure of with |^| = κ_λ for every ∈≥(λ) and ^=^ for each ∈<. Since Γ, is isomorphic to an elementary substructure of , where the isomorphism is given by b_^↦ b_^ for b_∈ P_. Identifying these elements of with the corresponding elements of completes the proof. § PROOF OF <REF> * First, apply <Ref> to with the cardinals θ_λ = max{ℵ_0, |Σ_λ|} to get a structure elementarily equivalent to satisfying |^|=max{ℵ_0, |Σ_λ|} for every ∈≥(λ) and ^ = ^ for every ∈<. Then, apply <Ref> to with the cardinals κ_λ to get a structure elementarily equivalent to with |^|=κ_λ for every ∈≥(λ) and ^ = ^ for every ∈<. § PROOF OF <REF> * The translation of into a single-sorted Σ^†-theory ^† is described in <cit.>. We informally recapitulate the translation here. First, we may assume without loss of generality that Σ has no function symbols, since function symbols can be eliminated in favor of suitably axiomatized predicate symbols. Let Σ^† have the components _Σ^† = {^†} _Σ^† = ∅ _Σ^† = {P_ : ∈_Σ}∪_Σ. Now, a Σ-formula φ can be translated to Σ^†-formula φ^† by using the predicates P_ to relativize quantifiers to their respective sorts, as in the proof of <Ref>. 
Then, let ^† = ^†_1 ∪^†_2 ∪^†_3, where * ^†_1 is the set of translated axioms of , * ^†_2 is a set of sentences asserting that every element belongs to exactly one sort (this is where we use the assumption that Σ has finitely many sorts), and every sort is nonempty, and * ^†_3 is a set of sentences asserting that predicates are true only if they are applied to elements of the appropriate arity. Now, there is a one-to-one correspondence between models of and models of ^†, such that a -model corresponds to a ^†-model with |^†^| = ∑_∈_Σ |^|. Indeed, given a -model , construct a ^†-model by letting ^†^ = ⋃_∈_Σ^ P_^ = ^ for each ∈_Σ P^ = P^ for each P ∈_Σ. Conversely, given a ^†-model , construct a -model by letting ^ = P_^ for each ∈_Σ P^ = P^ for each P ∈_Σ. Using this correspondence, we see that ^† is a Σ^†-theory all of whose models are infinite, and that ^† is κ-categorical. By <Ref>, ^† is complete. Thus, ⊢_^†φ^† or ⊢_^†φ^† for every Σ-sentence φ. Hence, ⊢_φ or ⊢_φ, so is complete. § PROOF OF <REF> * Suppose is not complete. Then, for some sentence φ, the theories _0 = ∪{φ} and _1 = ∪{φ} are consistent. Let _0 and _1 be models of _0 and _1 respectively. Since _0 and _1 are also models of , both are strongly infinite. By <Ref>, there are models '_0 and '_1 that are elementarily equivalent to _0 and _1 respectively such that |^'_0| = |^'_1| = κ() for all ∈_Σ. Since '_0 φ and '_1 φ, the models '_0 and '_1 are not isomorphic, contradicting the assumption that is κ-categorical. § ILLUSTRATIONS In the body of the paper, we included <Ref>, in order to visualize the effect of the new combined Löwenheim–Skolem theorems. We only included the diagrams for the combined theorems. In this appendix, we include similar diagrams for the downward and upward theorems as well. Translation-based proofs <Ref> are proven using a translation-based approach. We visualize their effects in, respectively, <Ref>.(a) and <Ref>.(d). Notice that ^'_i is the sort in the resulting structure that gets assigned the desired cardinality κ, although more than one such sort could exist; the final cardinalities for other sorts with infinite domains are bounded from below by ℵ_0 and from above by κ, but we do not have any further control over them. Direct proofs <Ref> are proven in a direct manner, by addapting the single-sorted classical proofs to the many-sorted case. We visualize <Ref> in <Ref>.(b) and <Ref>.(e), respectively. With these new results, we are able to set the cardinalities of all the infinite sorts, but only to the exact same cardinal. Direct proofs for split signatures We provide examples of applying <Ref> in, respectively, <Ref>.(c), <Ref>.(f). We assume that our signature is split into Σ_λ_1 and Σ_λ_2, where ≥(λ_1)={^'_1,…,^'_m} and ≥(λ_2)={^''_1,…,^''_m} (the sorts with finite interpretations can belong to either λ_1 or λ_2). Then, κ^' is the cardinal associated with Σ_λ_1, and κ^'' the cardinal associated with Σ_λ_2. Thus, for split signatures, we are able to choose a cardinality for each class of sorts. Overall, going downward in either the left or right part of <Ref>, we see that the translation-based proofs only let us set the maximal cardinality (diagrams (a) and (d)), while the theorems proved using the direct approach allow us to set all sorts to a single cardinality (diagrams (b) and (e)). For split signatures, we can do even better, and have a dedicated cardinality for each class of sorts (diagrams (c) and (f)).
http://arxiv.org/abs/2406.19206v1
20240627142835
Quantum Thermodynamics
[ "Patrick P. Potts" ]
quant-ph
[ "quant-ph", "cond-mat.mes-hall", "cond-mat.stat-mech" ]
Quantum Thermodynamics

Patrick P. Potts

Department of Physics and Swiss Nanoscience Institute, University of Basel, Klingelbergstrasse 82, 4056 Basel, Switzerland

patrick.potts@unibas.ch

§ ABSTRACT

The theory of quantum thermodynamics investigates how the concepts of heat, work, and temperature can be carried over to the quantum realm, where fluctuations and randomness are fundamentally unavoidable. These lecture notes provide an introduction to the thermodynamics of small quantum systems. It is illustrated how the laws of thermodynamics emerge from quantum theory and how open quantum systems can be modeled by Markovian master equations. Quantum systems that are designed to perform a certain task, such as cooling or generating entanglement, are considered. Finally, the effect of fluctuations on the thermodynamic description is discussed.

§ INTRODUCTION

By investigating concepts such as heat, work, and temperature, the theory of thermodynamics was a driving force in the industrial revolution, enabling the improvement and development of technologies that reshaped society such as steam engines and refrigerators <cit.>. Quantum thermodynamics investigates heat, work, temperature, as well as related concepts in quantum systems. In analogy to macroscopic thermodynamics, this endeavor promises to enable the development and improvement of novel quantum- and nano-technologies. Furthermore, a good understanding of the thermodynamics of quantum systems is key for keeping the energetic footprint of any scalable quantum technology in check <cit.>.

As the concepts investigated by quantum thermodynamics are very general, the theory is of relevance for essentially all scenarios involving open quantum systems. This makes the field extremely broad, including well-established topics such as thermoelectricity <cit.>, investigating how an electrical current arises from a temperature gradient, as well as very recent investigations into the energetics of single-qubit gates <cit.>. The broad nature of the field implies that the quantum thermodynamics community brings together people with very different backgrounds. These lecture notes are meant to provide an introduction to this diverse and fast-growing field, discussing fundamental concepts and illustrating them with simple examples.

After a short introduction to basic concepts, macroscopic thermodynamics, and information theory will be reviewed in Sec. <ref>. Section <ref> will address the topic of thermodynamic equilibrium. This will allow for understanding why a system may often be described by very few parameters, such as temperature and chemical potential, and it will give you a physical understanding of these fundamental parameters. Section <ref> will then discuss how the laws of thermodynamics emerge from quantum mechanics, connecting these very general laws to the microscopic behavior of quantum systems. Applying the laws of thermodynamics to quantum systems will allow you to make general predictions on what happens when small systems are placed into contact with thermal reservoirs. This may improve your physical intuition for how systems in and out of equilibrium behave, which is an extremely valuable skill as a physicist. In Sec. <ref>, we will study quantum master equations as a tool to describe open quantum systems. This is an extremely useful and versatile tool that can be employed to model many physical systems, e.g., quantum dots, NV centers, and optical cavities.
Section <ref> will then investigate quantum thermal machines, exploring some of the tasks that can be achieved with open quantum systems. Such tasks include the production of work, refrigeration, and the creation of entanglement. Finally, in Sec. <ref>, we will consider fluctuations which become important in nano-scale systems. You will learn how thermodynamic considerations may result in equalities and inequalities that can be understood as extensions of the laws of thermodynamics, determining the behavior of fluctuations around average values. Further reading: In 2022, a textbook on quantum stochastic thermodynamics by Strasberg was published <cit.>. While similar in spirit to the material discussed in this course, the book has a stronger focus on fluctuations which are only covered in Sec. <ref> in this course. Another good resource is the book published in 2019 by Deffner and Campbell <cit.>. Compared to these lecture notes, it has a stronger focus on information-theoretic concepts and thermodynamic cycles. A slightly older textbook on the topic was published in 2009 by Gemmer, Michel, and Mahler <cit.>. While being a valuable resource for many of the concepts, it predates a number of important works that are central to how the topic is presented here. In addition, in 2019 a book giving a snapshot of the field was published, providing numerous short reviews on different topics by over 100 contributing authors <cit.>. Furthermore, a number of excellent reviews focusing on different aspects of quantum thermodynamics are provided by Refs. <cit.>. These resources are complemented by the focus issue in Ref. <cit.>. §.§ Basic concepts In this section, we introduce some basic concepts that are used throughout the lecture notes. We set ħ=1 throughout. Note that this section should mainly act as a reminder (with the possible exception of Sec. <ref>) and to set the notation. If a concept is completely new to you, I suggest to read up on it in the respective references, which provide a far more detailed introduction. §.§.§ Linear algebra Because linear algebra provides the mathematical description of quantum mechanics, we review some basic concepts here. In a format accessible for physicists, a more detailed introduction can be found in most textbooks on quantum mechanics, see, e.g., Sec. 1 in the book by Shankar <cit.> or Sec. 2.1 in the book by Nielsen and Chuang <cit.>. Vectors in Hilbert spaces: A (pure) quantum state can be described by a vector in a complex Hilbert space ℋ, which is a vector space equipped with an inner product. Such vectors may be denoted by |ψ⟩,|φ⟩,a|ψ⟩+b|φ⟩, where a and b are complex numbers. For reasons that will become clear later, we also call a vector a ket. Any vector can be written using a set of linearly independent basis vectors |j⟩ as |ψ⟩ = ∑_j ψ_j |j⟩. Linear operators: Vectors can be manipulated by operators. A linear operator Â:ℋ→ℋ' maps a vector from a Hilbert space, ℋ, onto another vector, potentially in another Hilbert space ℋ', i.e., Â|ψ⟩ is a vector in ℋ' for any |ψ⟩∈ℋ. Furthermore, a linear operator obeys Â(a|ψ⟩+b|φ⟩) = aÂ|ψ⟩+bÂ|φ⟩. Dual vectors: For each vector, a dual vector (an element of the dual space ℋ^*) exists given by the Hermitian conjugate of the original vector ⟨ψ| = |ψ⟩^†. A dual vector is also called a bra. The notation introduced in Eqs. (<ref>) and (<ref>) is called Dirac or bra-ket notation. Inner product: The inner product between two vectors |ψ⟩ and |φ⟩ is denoted by ⟨ψ|φ⟩∈ℂ, and is a complex number. 
Note that this makes any dual vector a linear operator mapping vectors onto the Hilbert space of complex numbers, i.e., ⟨ψ|:ℋ→ℂ. With the inner product, we may define an orthonormal basis |j⟩ as one fulfilling ⟨i|j⟩ = δ_i,j, where δ_i,j denotes the Kronecker delta. Using Eq. (<ref>), we may express any vector through an orthonormal basis. The inner product between two vectors may then be evaluated as ⟨ψ|φ⟩ = (∑_jψ_j^*⟨j|)(∑_kφ_k|k⟩) = ∑_jψ_j^*φ_j. Composite systems: When describing a composite system that is made of multiple subsystems, a tensor product space can be used. In the case of two subsystems (a bi-partite system), the total Hilbert space is then given by ℋ = ℋ_A⊗ℋ_B, where ℋ_A and ℋ_B are the Hilbert spaces describing subsystems A and B, and ⊗ denotes the Kronecker product. An orthonormal basis can be constructed from orthonormal bases of the subsystems as |a,b⟩ = |a⟩⊗|b⟩, where |a⟩ and |b⟩ are the basis states of the subsystems. Sometimes it is necessary to discard one of the subsystems. This is achieved through the partial trace. For an operator acting on the composite system Ĉ: ℋ→ℋ, the partial trace over the subsystem B is defined as Tr_B{Ĉ} = ∑_a,a',b|a⟩⟨ a'|⟨ a,b|Ĉ|a',b⟩, which is an operator that acts on subsystem A. In particular, if we consider a pure quantum state describing the composite system |Ψ⟩, then the partial trace Tr_B{Ψ} is the reduced (possibly mixed) state of subsystem A. The reduced state fully describes subsystem A if no information on subsystem B can be obtained. Row and column vectors: Every complex vector space of finite dimension n is isomorphic to ℂ^n. What this means is that as long as we consider finite-dimensional Hilbert spaces, we may use conventional row and column vectors. For instance, a set of orthonormal basis vectors may be identified by column vectors with the j-th entry equal to one and all others equal to zero. General vectors can then be expressed through Eq. (<ref>), i.e., |j⟩≡[ ⋮; 0; 1; 0; ⋮ ], ⟨j|≡[ ⋯ 0 1 0 ⋯ ],⇒|ψ⟩≡[ ψ_0; ψ_1; ψ_2; ⋮ ], ⟨ψ|≡[ ψ_0^* ψ_1^* ψ_2^* ⋯ ]. Matrices and the resolution of the identity: An important operator is the identity operator 1 defined by 1|ψ⟩ = |ψ⟩∀|ψ⟩. Using any orthonormal basis, the identity may be written 1 = ∑_jj. This equation is called the resolution of the identity and it is heavily used in deriving various results in quantum mechanics. With its help, we may express an operator in the basis of |j⟩ as  = ∑_j,kjÂk = ∑_j,kA_jkjk, where A_jk=⟨j|Â|k⟩ are the matrix elements of Â. Indeed, with the help of the row and column vectors in Eq. (<ref>), we may identify Â≡[ A_00 A_01 ⋯; A_10 A_11; ⋮ ⋱ ], Â^†≡[ A^*_00 A^*_10 ⋯; A^*_01 A^*_11; ⋮ ⋱ ], and we recover the usual prescription for the Hermitian conjugate of matrices. Superoperators: We may consider operators as vectors in a different vector space, the so-called Liouville space. The Hilbert-Schmidt inner product between two operators  and B̂ is defined as Tr{Â^†B̂}. With this inner product, Liouville space is itself a Hilbert space. An operator that acts on a vector in Liouville space is called a superoperator to distinguish it from operators that act on states |ψ⟩. Superoperators thus act on operators the same way as operators act on states. Throughout these lecture notes, superoperators are denoted by calligraphic symboles, e.g., ℒ, while operators can be identified by their hat, e.g.,  (with a few exceptions such as the identity operator 1). As discussed above, we may write a vector in Liouville space as a column vector. 
Starting from the matrix representation of an operator Â, c.f. Eq. (<ref>), this can be achieved by stacking its columns. Any superoperator may then be written as a matrix. This procedure is called vectorization (see Ref. <cit.> for more details) and it can be highly useful when using numerics to compute the behavior of open quantum systems. §.§.§ The density matrix The density matrix describes the state of a quantum system and is thus a central object throughout these lecture notes and in quantum theory in general. A more detailed introduction can be found in the book by Nielsen and Chuang <cit.>, see Sec. 2.4. The state of a quantum system is described by a positive semi-definite operator with unit trace acting on the Hilbert space ℋ. Positive semi-definite means that all eigenvalues of the density matrix are larger or equal to zero and unit trace means that the eigenvalues add up to one. The density matrix may be written as ρ̂ = ∑_j p_j ψ_j,with⟨ψ_j|=⟩ 1, p_j≥ 0,∑_jp_j = 1. We may interpret the density matrix as the system being in the pure state |ψ_j⟩ with probability p_j. If multiple p_j are non-zero, this describes a scenario of incomplete knowledge. Measurements and expectation values: In quantum mechanics, any measurement can be described using a positive operator valued measure (POVM) <cit.>. A POVM is a set of positive, semi-definite operators M̂_j that obey ∑_j M_j = 1. The index j labels the outcome of the measurement and it is obtained with probability p_j =Tr{M̂_jρ̂} for a quantum state ρ̂. Note that the positive semi-definiteness ensures p_j≥ 0, since ⟨ψ|M̂_j|ψ⟩≥ 0 for any |ψ⟩. The fact that the M̂_̂ĵ sum to the identity ensures that ∑_jp_j=1. Of particular interest are projective measurements, where M̂_j^2 = M̂_j, i.e., the operators M̂_j are projectors. We may create a projective measurement from any operator  = ∑_j a_j j, by using the eigenbasis of the operator to define M̂_j = j. In this case, we say that a projective measurement of the operator  gives the value a_j with probability p_j=⟨ j|ρ̂|j⟩. The average value of the measurement outcomes is given by ⟨Â⟩≡Tr{Âρ̂} = ∑_j a_j p_j. We also call this the average value of Â. After a projective measurement with outcome a_j, the system is collapsed into the state j. Repeating the same projective measurement twice thus necessarily gives the same outcome. While projective measurements are widely used in quantum theory, they represent an idealization that strictly speaking cannot be implemented in the laboratory as they would violate the third law of thermodynamics <cit.>. Classical mixture vs quantum superposition: Let us consider the toss of a coin, where we identify with |0⟩ the outcome tails and with |1⟩ the outcome heads. The outcome of such a coin toss (before observation) is described by the density matrix ρ̂ = 1/2(0+1) = 1, which is equal to the identity matrix, also called the maximally mixed state because each outcome is equally likely. Such a mixture of states is completely classical and merely reflects a scenario of incomplete knowledge, in this case the outcome of the coin toss. We now compare the state in Eq. (<ref>) to a quantum superposition provided by the pure state |+⟩=(|0⟩+|1⟩)/√(2) ρ̂ = +=1/2(0+1+01+10). In contrast to the maximally mixed state, this quantum state is pure and we thus have perfect knowledge of the system. Performing a projective measurement on this state in the basis |±⟩ will with certainty result in +. However, the state describes a coherent superposition of the states |0⟩ and |1⟩. 
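The difference between the mixture and the coherent superposition can also be checked numerically. The following NumPy sketch is an illustration only (the variable names are ours and the snippet is not taken from these notes); it constructs both density matrices and compares their purity Tr{ρ̂^2} and their matrix elements:

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
ket_plus = (ket0 + ket1) / np.sqrt(2)

rho_mixed = 0.5 * (np.outer(ket0, ket0.conj()) + np.outer(ket1, ket1.conj()))
rho_plus = np.outer(ket_plus, ket_plus.conj())

for name, rho in [("maximally mixed", rho_mixed), ("|+><+|", rho_plus)]:
    purity = np.trace(rho @ rho).real  # Tr[rho^2] equals 1 only for pure states
    coherence = abs(rho[0, 1])         # off-diagonal element <0|rho|1>
    print(name, purity, coherence)     # 0.5, 0.0 for the mixture; 1.0, 0.5 for |+><+|
```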
This coherent superposition is distinguished from a mixture by the off-diagonal elements 01 and 10 (also called coherences). Time evolution of an isolated system: The time-evolution of a system that is isolated from its environment is given by ρ̂(t) = Û(t)ρ̂(0)Û^†(t),Û(t) = 𝒯 e^-i∫_0^t dt'Ĥ(t'), where Ĥ(t) is the Hamiltonian of the system and 𝒯 denotes the time-ordering operator. The time-ordered exponential appearing in Eq. (<ref>) can be written as 𝒯 e^-i∫_0^t dt'Ĥ(t') = lim_δ t→ 0 e^-iδ t Ĥ(t)e^-iδ t Ĥ(t-δ t)e^-iδ t Ĥ(t-2δ t)⋯ e^-iδ t Ĥ(2δ t)e^-iδ t Ĥ(δ t)e^-iδ t Ĥ(0), such that the time argument in the Hamiltonian on the right-hand side increases from the right to the left of the expression. Each exponential in the product can be understood as the time-evolution by the infinitesimal time-step δ t <cit.>. Note that in quantum mechanics, a system that is isolated from its environment is denoted as a closed system. This can be confusing in the field of quantum thermodynamics because in thermodynamics, a closed system traditionally refers to a system that can exchange energy but cannot exchange matter with its environment. To avoid confusion, I use the term isolated system throughout these lecture notes. §.§.§ Second quantization Many scenarios considered in this course feature the transport of electrons or photons through a quantum system. Such scenarios are most conveniently described using the formalism of second quantization, see Sec. 1 in the Book by Bruus and Flensberg <cit.> for a more detailed introduction. We first introduce the commutator and anti-commutator which play important roles for bosons and fermions respectively Commutator:[Â,B̂] = ÂB̂-B̂Â, Anti-commutator:{Â,B̂} = ÂB̂+B̂Â. Bosons: For a single bosonic mode, a central object is the creation operator â^†, which creates a boson in this mode. Its Hermitian conjugate â denotes the annihilation operator, removing a boson from the mode. These operators obey the canonical commutation relations [â,â^†] = 1,[â,â] =[â^†,â^†] =0. The state without any bosons is denoted the vacuum state |0⟩ and by definition it is annihilated by the annihilation operator, â|0⟩ = 0. The state with n bosons (also called a Fock state) can be written with help of the creation operator as |n⟩≡(â^†)^n/√(n!)|0⟩,⟨n|m⟩=δ_n,m. With these definitions, we find the action of the creation and annihilation operators on the Fock states â|n⟩ = √(n)|n-1⟩,â^†|n⟩ = √(n+1)|n+1⟩,â^†â|n⟩ = n|n⟩. The operator n̂≡â^†â is called the number operator. When dealing with multiple bosonic modes, we introduce subscripts on the annihilation operators â_j. The canonical commutation relations then read [â_j,â^†_k]=δ_j,k,[â_j,â_k] =[â_j^†,â_k^†] =0. The number states can be written as |n_0,n_1,⋯⟩ = |n⟩≡∏_j(â_j^†)^n_j/√(n_j!)|0⟩,â^†_jâ_j|n⟩ = n_j|n⟩. Fermions: For a single fermionic mode, the creation and annihilation operators are denoted as ĉ^† and ĉ respectively. In contrast to bosonic operators, they obey canonical anti-commutation relations {ĉ,ĉ^†} = 1,{ĉ,ĉ} ={ĉ^†,ĉ^†} =0. The latter relation directly implies that ĉ^2=(ĉ^†)^2 = 0. A fermionic mode may thus at most be occupied by a single fermion. This is known as the Pauli exclusion principle. Just like for bosons, the vacuum state |0⟩ is annihilated by the annihilation operator, ĉ|0⟩ = 0. The Fock states can then be written as |1⟩ = ĉ^†|0⟩,ĉ|1⟩ = |0⟩,ĉ^†ĉ|n⟩=n|n⟩, where n=0,1 and ĉ^†ĉ is called the number operator just like for bosons. 
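The (anti-)commutation relations above are easy to verify numerically by representing the mode operators as matrices. The sketch below (plain NumPy; the cutoff n_max is an illustrative choice) uses a truncated Fock space for a single bosonic mode and the two-dimensional Fock space of a single fermionic mode.

```python
import numpy as np

# Truncated bosonic mode with at most n_max bosons (truncation is an
# approximation; the commutator below only fails in the highest Fock state)
n_max = 10
a = np.diag(np.sqrt(np.arange(1, n_max + 1)), k=1)   # annihilation operator
ad = a.conj().T                                      # creation operator
comm = a @ ad - ad @ a
print(np.allclose(comm[:-1, :-1], np.eye(n_max)))    # True away from the cutoff

# Single fermionic mode in the basis {|0>, |1>}
c = np.array([[0.0, 1.0],
              [0.0, 0.0]])                           # annihilation operator |0><1|
cd = c.conj().T
print(np.allclose(c @ cd + cd @ c, np.eye(2)))       # {c, c^dag} = 1
print(np.allclose(cd @ cd, np.zeros((2, 2))))        # Pauli exclusion: (c^dag)^2 = 0
```

For the boson, only the highest Fock state is affected by the truncation; for the fermion, the two-dimensional matrices reproduce the canonical anti-commutation relations exactly.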
Note that due to the Pauli exclusion principle, the occupied state |1⟩ is annihilated by the creation operator ĉ^†|1⟩ = 0, implying a symmetry between creation and annihilation operators. Indeed, we may call ĉ^† the annihilation operator for a hole which has the vacuum state |1⟩. Multiple modes are denoted by an index and obey the canonical anti-commutation relations {ĉ_j,ĉ_k^†} = δ_j,k, {ĉ_j,ĉ_k} ={ĉ_j^†,ĉ_k^†} =0. Note that operators for different modes anti-commute, implying a sign change when exchanging two fermions. Just as for bosons, the number states can be written as |n_0,n_1,⋯⟩ = |n⟩≡∏_j(ĉ_j^†)^n_j|0⟩,ĉ^†_jĉ_j|n⟩ = n_j|n⟩. Note however that do to the anti-commutation relations, the order with which ĉ_j^† act on the vacuum in the last expression matters. §.§.§ Exact and inexact differentials Exact and inexact differentials play an important role in thermodynamics. They are introduced in most textbooks on thermodynamics, see for instance App. A of the book by Hentschke <cit.>. We recall that the differential of a differentiable function f(x,y) of two variables is denoted by df(x,y) = ∂_x f(x,y)dx+∂_yf(x,y)dy, where ∂_x denotes the partial derivative with respect to x, keeping y fixed. The integral of such a differential along any curve γ is determined solely by its endpoints ∫_γ df(x,y) = f(x_f,y_f)-f(x_i,y_i), where the subscript i (f) denotes the initial (final) values of the curve γ. We now consider a differential form dg(x,y) = a(x,y) dx + b(x,y) dy. We call this an exact differential if and only if ∂_y a(x,y) = ∂_x b(x,y). In that case, we may write a(x,y)=∂_x g(x,y) and b(x,y)=∂_y g(x,y) and g(x,y) is a differentiable function. An exact differential is therefore a differential form that can be written as the differential of a function. We call the differential form in Eq. (<ref>) an inexact differential if it cannot be written as the differential of a function. In this case, ∂_y a(x,y) ≠∂_x b(x,y). Importantly, the integral of an inexact differential generally does depend on the path of integration and not just its endpoints. As an example, we consider the exact differential dz = ydx+xdy, which is the differential of the function z(x,y) = xy. The differential in Eq. (<ref>) can be written as the sum of two inexact differentials q = ydx, w=xdy, where denotes an inexact differential. We may now integrate these differentials along two different curves with the same endpoints γ_1: (0,0)→ (1,0)→(1,1),γ_2: (0,0)→ (0,1)→(1,1). Integrals along these paths may be evaluated using ∫_γ_1 dg(x,y) = ∫_0^1 dx a(x,0)+∫_0^1 dy b(1,y), ∫_γ_2 dg(x,y) = ∫_0^1 dy a(0,y)+∫_0^1 dx b(x,1), where we made use of Eq. (<ref>). A simple calculation results in ∫_γ_1 q = ∫_γ_2 w = 1,∫_γ_2 q = ∫_γ_1 w =0. As expected, we recover ∫_γ_1 dz =∫_γ_2 dz = z(1,1)-z(0,0) = 1. §.§ Macroscopic thermodynamics Here we briefly summarize some important aspects of the traditional theory of thermodynamics which applies at macroscopic scales, where fluctuations may safely be neglected. There are many textbooks on this topic, see for instance the books by Callen <cit.> or Hentschke <cit.>. Among physical theories, Thermodynamics takes a rather peculiar role because it is generally valid for all systems (at the macroscopic scale) and it does not provide numerical results but rather general equalities and inequalities which constrain physically allowed processes. Indeed, the theory of thermodynamics has been called the village witch among physical theories <cit.>. 
The main assumption in thermodynamics is that we can describe a macroscopic body by very few variables such as temperature T, pressure p, volume V, internal energy U, and entropy S, see Fig. <ref>. Given these macroscopic variables, thermodynamics provides relations and constraints between them. The first law of thermodynamics: The first law of thermodynamics may be written in the form dU = W- Q, The first law states that energy is conserved and that energy changes come in two forms: heat Q and work W. Quite generally, work is a useful form of energy that is mediated by macroscopic degrees of freedom. This is in contrast to heat, which is a more inaccessible form of energy mediated by microscopic degrees of freedom. Note that these quantities are inexact differentials, i.e., they may depend on the path taken when changing thermodynamic variables. Here we use a sign convention where work is positive when increasing the energy of the body while heat is positive when it flows out of the body and into the environment. The first law of thermodynamics prevents a perpetuum mobile of the first kind, i.e., a machine that produces the work it needs to run itself indefinitely. In the presence of any type of losses, such a machine needs to produce work out of nothing, violating Eq. (<ref>). An example would be a lamp that is powered by a solar cell, illuminated only by the lamp itself, see Fig. <ref>. Work may come in different forms. For instance, electrical work (also called chemical work) is produced when charges are moved from a region with lower voltage to a region with higher voltage, which is what happens when charging a battery. While this type of work will be the focus of later parts of these lecture notes, here we focus on the traditional example of mechanical work which is produced by changing the volume of a gas using, e.g., a piston as illustrated in Fig. <ref>. In this case, we know from classical mechanics that work is given by W = -p_B dV, where p_B denotes the external pressure exerted by the piston. Furthermore, the heat flowing into the body can be related to the entropy change of the surrounding environment Q = T_B dS_B, where the subscript B stands for ”bath´´. This relation follows from describing the environment as being in thermal equilibrium at temperature T_B and we will revisit this later. We stress that in these expressions for heat and work, the quantities of the surrounding environment appear (note that dV=-dV_B). The second law of thermodynamics: The second law of thermodynamics is provided by the Clausius inequality <cit.> dΣ = dS + dS_B = dS +∑_α Q_α/T_α≥ 0, where we included multiple reservoirs at respective temperatures T_α that may exchange heat Q_α with the body. The second law states that the total entropy, which is the sum of the entropy of the body S and the entropy of the environment S_B = ∑_α S_α, can only increase. This change in total entropy is called entropy production Σ. The second law of thermodynamics prevents a perpetuum mobile of the second kind, i.e., a machine that runs indefinitely by extracting heat out of an equilibrium environment. In this case, there is only one temperature T describing the environment. Furthermore, for long times we may neglect U as well as S since they remain finite while the produced work -W increases indefinitely. The first law then requires W = Q which when inserted in Eq. (<ref>) provides W/T ≥ 0, preventing any production of work. 
An example would be given by a boat that moves (performing work against the friction force of the water) by extracting heat from the surrounding water, see Fig. <ref>. Note that dissipating work (i.e., turning work into heat, W = Q ≥0) is perfectly allowed by the laws of thermodynamics and indeed is what happens when an incandescent light bulb is glowing. In addition to this example, the second law of thermodynamics provides important limitations on physically allowed processes. Consider a heat engine, where the temperature gradient between two thermal reservoirs at temperatures T_h>T_c is exploited to produce work, see Fig. <ref>. Let us further consider a long-time scenario, where any changes in the system may be neglected compared to the heat and work which continuously increase. The first and second laws of thermodynamics then read W = Q_h+Q_c,Q_h/T_h+Q_c/T_c≥ 0. These equations have a few remarkable consequences. First, for W=0 it is straightforward to show that -Q_h = Q_c≥ 0, i.e, heat flows out of the hot and into the cold reservoir. Second, either Q_h, Q_c, or both have to be positive, i.e., it is not possible to extract heat simultaneously from both reservoirs. This is equivalent to the argument preventing a perpetuum mobile of the second kind. A heat engine, defined by -W>0 will thus require an influx of heat from the hot reservoir Q_h<0, providing the necessary energy, as well as a heat outflux into the cold reservoir Q_c>0 ensuring the increase in entropy. The second law of thermodynamics then implies the well-known result that the efficiency of such a heat engine is upper bounded by the Carnot efficiency η≡-W/-Q_h≤ 1-T_c/T_h, where the efficiency can be understood as the ratio between the desired output (-W) divided by the resource that is used (-Q_h). Thermodynamic processes and cycles: In a thermodynamic process, the state of a body is changed by, e.g., changing its volume V or pressure p. Such changes can be applied under different conditions. For instance, in an adiabatic process, the state is changed without exchanging heat with the environment. Similarly, processes at constant temperature T or entropy S are called isothermal and isentropic processes respectively. Note that the word adiabatic is used differently in thermodynamics and in quantum mechanics. In quantum mechanics, it refers to processes that are sufficiently slow such that a system in its ground state remains in its groundstate throughout the process. For this reason, I will try to avoid the word adiabatic. An important class of thermodynamic processes are reversible processes, which obey Σ = 0. This generally requires that the body is in thermal equilibrium with its environment at all times. Reversible processes therefore have to be very slow and always are an idealization of any real process. As we will see below, reversible processes are very useful to derive thermodynamic relations. By combining different thermodynamic processes, we can design thermodynamic cycles. After a completed cycle, the body is in the same state but since Q and W are inexact differentials, we generally have ∮ dU = 0,∮ Q = ∮ W≠ 0. Thus, it is possible to turn heat into work in a thermodynamic cycle. Prominent examples of thermodynamic cycles are the Otto cycle, providing the underlying principle behind car engines, and the Carnot engine, which is an idealized, reversible engine that reaches the Carnot efficiency given in Eq. (<ref>). Maxwell relations: Let us consider the first law of thermodynamics for a reversible process. 
As the body is in equilibrium with the environment throughout the process, we may identify T = T_B, p = p_B and because dΣ = 0, we have dS = -dS_B. The first law thus reads dU = TdS-pdV = ∂_S U dS+ ∂_V U dV. Since this is an exact differential, the last expression holds no matter if the process is reversible or not. However, the identification of heat and work with the first and second term on the right-hand side is only justified for reversible processes. Since U is a smooth function, partial derivatives with respect to different variables commute and we find the Maxwell relation ∂_S∂_V U = ∂_V T = -∂_S p. Other Maxwell relations may be obtained similarly from other thermodynamic potentials such as the free energy F= U-TS. The free energy plays a particularly important role as it captures the capacity of a body to do work (in contrast to U capturing the capacity to provide heat and work). The differential of the free energy reads dF = -SdT-pdV = ∂_T F dT+ ∂_V F dV, from which we may derive the Maxwell relation -∂_T∂_V F = ∂_V S = ∂_T p. Equations of state: In order to obtain quantitative predictions, the theory of thermodynamics has to be supplemented by equations of state. For instance, a monatomic ideal gas with N atoms obeys the relations U = 3/2 N k_B T,pV=Nk_B T, where k_B denotes the Boltzmann constant. With the help of these equations, we may then derive non-trivial quantitative results such as the change in entropy when changing temperature and volume S(T,V)- S(T_0,V_0) = Nk_Bln[(T/T_0)^3/2V/V_0]. §.§ Information theory As we will see below, entropy is closely related to concepts from information theory. Here we provide a brief introduction to these concepts; for a more detailed discussion see for instance the original paper by Shannon <cit.> or the book by Cover and Thomas <cit.>. Self-information: Let us consider a random variable X with outcomes x_1, ⋯, x_n occurring with probabilities p_1,⋯,p_n. Note that in this section, we use a different symbol for the random variable X and its outcomes x_j as is customary in mathematics. The self-information or surprisal of an outcome x_j is defined as I_j = -ln p_j. It quantifies the information that is gained when observing outcome x_j. The self-information has the following properties: * The less probable an event is, the more surprising it is and the more information we gain lim_p_j→1I_j = -ln 1 = 0,lim_p_j→0I_j = ∞. * The information of uncorrelated events is additive. Let p^X_j and p^Y_k denote the probabilities that the independent random variables X and Y take on the values x_j and y_k respectively. It then holds that p_j,k = p^X_jp^Y_k,I_j,k=-ln p_j,k=-ln p^X_j-ln p^Y_k = I^X_j+I^Y_k, where p_j,k denotes the joint probability of observing outcomes x_j and y_k. We note that with Eq. (<ref>), information is quantified in nats since we are using the natural logarithm. To quantify information in bits, one should use the logarithm of base two instead. Shannon entropy: The Shannon entropy is defined as the average self-information H(X)=∑_j p_j I_j = -∑_j p_jln p_j. It quantifies the average information gain when observing the random variable X, or, equivalently, the lack of information before observation. The practical importance of the Shannon entropy is highlighted by the noiseless coding theorem which states that storing a random variable taken from a distribution with Shannon entropy H requires at least H nats on average (if H is defined using log_2, then it requires at least H bits).
Any further compression results in loss of information. This theorem can be illustrated using an example taken from Nielsen and Chuang <cit.>: Consider a source that produces the symbols A, B, C, and D with respective probabilities 1/2, 1/4, 1/8, and 1/8. If we do not use any compression, we need two bits of storage per symbol, e.g., A→ 00,B→ 01,C→ 10,D→ 11. However, the symbols provided by the source may be encoded using fewer bits per symbol on average by taking into account the probability distribution of the source. The basic idea is to use fewer bits for the more frequent symbols. For instance, we may use the following encoding A→ 0,B→ 10,C→ 110,D→ 111. Note that we cannot use B→ 1 because then we can no longer determine if 110 means C or if it means BBA. With this compression, we find the average number of bits per symbol to be 1/2·1+1/4· 2+1/8· 3+1/8· 3=7/4. This equals the Shannon entropy (in bits) H=-1/2log_21/2-1/4log_21/4-1/8log_21/8-1/8log_21/8 = 7/4. It turns out that any compression algorithm that uses fewer bits results in the loss of information. Von Neumann entropy: The quantum version of the Shannon entropy is given by the von Neumann entropy S_vN[ρ̂] = -Tr{ρ̂lnρ̂} = -∑_j p_jln p_j, where ρ̂=∑_j p_j|j⟩⟨j|. As a generalization of the Shannon entropy to quantum systems, it describes the lack of knowledge we have over a quantum system given a state ρ̂. The von Neumann entropy is minimized for pure states and maximized for the maximally mixed state given by the identity matrix divided by the Hilbert-space dimension S_vN[|ψ⟩⟨ψ|] = 0,S_vN[1/d] = ln d, where d denotes the dimension of the Hilbert space. Relative entropy: The relative entropy, also known as the Kullback-Leibler divergence <cit.>, is useful to quantify information when a random variable is described by an erroneous distribution. For two distributions {p_j} and {q_j}, the relative entropy is defined as D_KL[{p_j}||{q_j}] = ∑_j p_j lnp_j/q_j. By definition, D_KL≥ 0 where equality is only achieved if the two distributions are the same. To shed some light onto the interpretation of this quantity, let us assume that a random variable X is distributed according to {p_j} but we describe it by the erroneous distribution {q_j}. The information gain that we attribute to the outcome x_j then reads -ln q_j. On average, this gives -∑_jp_jln q_j = -∑_j p_jln p_j +∑_j p_j lnp_j/q_j = H(X) +D_KL[{p_j}||{q_j}]. The left-hand side of this equation provides the information we assign to the random variable on average, the assumed information. The right-hand side provides the actual average information, given by the Shannon entropy, plus the relative entropy. The assumed information thus exceeds the actual information whenever the two distributions differ, and the discrepancy is quantified by the relative entropy. We can think of the relative entropy as an information loss due to the erroneous description. Quantum relative entropy: The generalization of the relative entropy to the quantum scenario is defined as <cit.> S[ρ̂||σ̂] = Tr{ρ̂lnρ̂-ρ̂lnσ̂}. Just like its classical counterpart, it is always non-negative, S[ρ̂||σ̂]≥ 0, and it quantifies the discrepancy between the assumed and the actual lack of information when a system in state ρ̂ is erroneously described by the state σ̂. Mutual information: The mutual information is a measure that characterizes the mutual dependence of two random variables. It tells us how much information one random variable has on the other and it is defined as I_MI(X;Y) = D_KL[{p_j,k}||{p^X_jp^Y_k}]=H(X)+H(Y)-H(X,Y).
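Before moving to the quantum case, the coding example and the two equivalent expressions for the mutual information can be checked with a few lines of code (plain NumPy; quantities are computed in bits, i.e., using log_2, and the joint distribution at the end is an arbitrary illustrative choice).

```python
import numpy as np

def shannon_bits(p):
    """Shannon entropy in bits, ignoring zero-probability outcomes."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def kl_bits(p, q):
    """Relative entropy (Kullback-Leibler divergence) in bits."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

# Source from the compression example: symbols A, B, C, D
probs = [1 / 2, 1 / 4, 1 / 8, 1 / 8]
code_lengths = [1, 2, 3, 3]                 # lengths of the codewords 0, 10, 110, 111
print(shannon_bits(probs))                  # 1.75 bits
print(float(np.dot(probs, code_lengths)))   # 1.75 bits per symbol on average

# Mutual information of a (made-up) joint distribution p(x, y)
p_xy = np.array([[0.4, 0.1],
                 [0.1, 0.4]])
p_x, p_y = p_xy.sum(axis=1), p_xy.sum(axis=0)
I_from_H = shannon_bits(p_x) + shannon_bits(p_y) - shannon_bits(p_xy.ravel())
I_from_KL = kl_bits(p_xy.ravel(), np.outer(p_x, p_y).ravel())
print(I_from_H, I_from_KL)                  # both ~0.278 bits
```

The average code length indeed equals the Shannon entropy of the source, and the mutual information obtained from the relative entropy coincides with H(X)+H(Y)-H(X,Y).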
To extend this definition to the quantum regime, we consider a bi-partite system in a Hilbert space ℋ = ℋ_A⊗ℋ_B. The quantum mutual information in a quantum state ρ̂_AB is defined as I_MI[A;B] = S[ρ̂_AB||ρ̂_A⊗ρ̂_B] = S_vN[ρ̂_A]+S_vN[ρ̂_B]-S_vN[ρ̂_AB], where we introduced ρ̂_A=Tr_B{ρ̂_AB} and ρ̂_B=Tr_A{ρ̂_AB}. Since the quantum relative entropy is a non-negative quantity, so is the mutual information. Furthermore, the mutual information is bounded from above by a corollary of the Araki–Lieb inequality <cit.> I_MI[A;B]≤ 2min{S_vN[ρ̂_A],S_vN[ρ̂_B]}≤ 2 min{ln d_A,ln d_B}, where d_α denotes the dimension of the Hilbert space ℋ_α. The last inequality follows from the upper bound on the von Neumann entropy given in Eq. (<ref>). § THERMODYNAMIC EQUILIBRIUM Equilibrium states may be characterized by a few intensive variables such as temperature T, chemical potential μ, and pressure p. In equilibrium, no net currents flow and the system is stationary, i.e., observables become time-independent. Note the similarity to the main assumption in macroscopic thermodynamics, see Sec. <ref>. This is no coincidence. Indeed, in macroscopic thermodynamics, systems are assumed to be in local equilibrium (i.e., they can be described by a thermal state with variables that may differ from the environment). §.§ The grand-canonical ensemble Throughout these lecture notes, we consider a system that may exchange both energy and particles with its environment as sketched in Fig. <ref>. In equilibrium, such a system is described by the grand-canonical Gibbs state ρ̂_G = e^-β(Ĥ-μN̂)/Z,Z=Tr{e^-β(Ĥ-μN̂)}, where β=1/k_BT denotes the inverse temperature, Ĥ is the Hamiltonian, N̂ the particle-number operator, and Z is called the partition function. For equilibrium states, we may generally identify the relevant quantities in macroscopic thermodynamics as average values U ≡⟨Ĥ⟩,S≡ k_BS_vN[ρ̂]. Because the Gibbs state is of central importance for much that follows, we now discuss three different ways of motivating why it provides an adequate description for thermodynamic equilibrium. §.§.§ Subsystem of an isolated system The first motivation considers the system of interest to be a small part of a very large “supersystem” that is described as an isolated system, see Fig. <ref>. This motivation for the Gibbs state can be found in many textbooks, see e.g., Landau & Lifshitz <cit.> or Schwabl <cit.>. We start by assuming that the supersystem has a fixed energy and particle number given by E_tot and N_tot respectively. In this case, it is described by the so-called micro-canonical ensemble ρ̂_tot = ∑_E,Np_tot(E,N)|E,N⟩⟨E,N|, where p_tot(E,N) is nonzero only if the energy and particle number are close to E_tot and N_tot, i.e., for E_tot≤ E < E_tot+δ E and N_tot≤ N < N_tot+δ N. For energies and particle numbers within these shells, each state is equally likely in the microcanonical ensemble, i.e., p_tot(E,N) = Θ(E-E_tot)Θ(N-N_tot)Θ(E_tot+δ E-E)Θ(N_tot+δ N-N)/Ω(E_tot,N_tot)δ Eδ N, where Ω(E_tot,N_tot) denotes the density of states and Θ(x) the Heaviside theta function which is equal to one for x>0 and zero otherwise. Since the supersystem is macroscopic, the number of states contributing to the microcanonical ensemble is assumed to be large, Ω(E_tot,N_tot)δ Eδ N ≫ 1. At the same time, for energy and particle numbers to be well defined, we require δ E≪ E_tot and δ N≪ N_tot. This also justifies assuming that the density of states is constant in the energy and particle number shells that contribute to the micro-canonical ensemble.
Fur future reference, we note that the von Neumann entropy of the microcanoncial ensemble is given by the logarithm of the number of contributing states [see also Eq. (<ref>)] S_vN(ρ̂_tot) = ln[Ω(E_tot,N_tot)δ Eδ N]. Since our system of interest is part of the supersystem, we may write the Hamiltonian and particle-number operator as Ĥ_tot = Ĥ_S+Ĥ_B+V̂,N̂_tot = N̂_S+N̂_B, where the subscript B stands for “bath” and describes the remainder of the supersystem. We now make a weak coupling approximation that is crucial to obtain the Gibbs state: we assume that V̂ is sufficiently weak such that we may neglect the coupling energy and the energy and particle eigenstates of the supersystem approximately factorize |E,N⟩≃|E_S,N_S⟩⊗|E_B,N_B⟩. The density matrix in Eq. (<ref>) may then be written as ρ̂_tot = ∑_E_S,N_S∑_E_B,N_Bp_tot(E_S+E_B,N_S+N_B)E_S,N_S⊗E_B,N_B. We are interested in the reduced state of the system obtained by taking the partial trace over the bath ρ̂_S = Tr_B{ρ̂_tot}=∑_E_B,N_B⟨E_B,N_B|ρ̂_tot|E_B,N_B⟩ = ∑_E_S,N_Sp_S(E_S,N_S)E_S,N_S. The probability for the system to be in a state with energy E_S and particle number N_S may be written as p_S(E_S,N_S) =∑_E_B,N_B p_tot(E_S+E_B,N_S+N_B) = Ω_B(E_tot-E_S,N_tot-N_S)δ Eδ N /Ω(E_tot,N_tot)δ Eδ N , where the numerator in the last expression denotes the number of states in the bath, i.e., the number of terms in the sum ∑_E_B,N_B p_tot(E_S+E_B,N_S+N_B) which are non-vanishing. In analogy to Eq. (<ref>), the entropy of the bath is given by S_B(E,N) = k_Bln[Ω_B(E,N)δ Eδ N ]. We note that the factor of k_B appears here because this is the thermodynamic, not the von Neumann entropy. Since the system is considered to be small, we may expand the bath entropy around E_tot and N_tot to obtain S_B(E_tot-E_S,N_tot-N_S) ≃ S_B(E_tot,N_tot)-E_S∂_E S_B(E,N_tot)|_E=E_tot -N_S∂_N S_B(E_tot,N)|_N=N_tot = S_B(E_tot,N_tot)-E_S-μ N_S/T, where we identified ∂_E S_B(E,N) = 1/T and ∂_N S_B(E,N) = -μ/T. Inserting the last expression into Eq. (<ref>) results in p_S(E_S,N_S) = Ω_B(E_tot,N_tot)δ Eδ N /Ω(E_tot,N_tot)δ Eδ N e^-β(E_S-μ N_S) =e^-β(E_S-μ N_S)/Z, where the last equality follows from the normalization of the state. Since this is exactly the probability for the Gibbs state given in Eq. (<ref>), we have shown that a small subsystem of a large and isolated supersystem is described by the Gibbs state if the supersystem is described by the microcanonical ensemble (i.e., has well defined energy and particle number) and if the system is weakly coupled to the remainder of the supersystem. §.§.§ Jaynes' maximum entropy principle The second motivation for the Gibbs state that we discuss is based on Jaynes' maximum entropy principle <cit.>. The basic idea behind this principle is that an adequate description of the physical system should only encode the information that we actually have. As discussed above, equilibrium states may be described by intensive variables such as temperature and chemical potential. Equivalently, they may be described by the conjugate extensive quantities provided by the average energy E̅ and the average particle number N̅. Following Jaynes', we take E̅ and N̅ as the prior data, i.e., the knowledge that we have about the state. Jaynes' crucial insight was that in order to avoid encoding any other knowledge in the state, the entropy, a quantifier for lack of knowledge, should be maximized. Mathematically, the Gibbs state is then obtained by maximizing the von Neumann entropy under the constraints E̅ = ⟨Ĥ⟩,N̅= ⟨N̂⟩. 
To perform this maximization, we start by considering states of the form ρ̂=∑_n p_n E_n,N_n,Ĥ|E_n,N_n⟩ = E_n|E_n,N_n⟩,N̂|E_n,N_n⟩ = N_n|E_n,N_n⟩. This form may be motivated by demanding that equilibrium states do not change in time, which implies [see also Eq. (<ref>)] Û(t)ρ̂Û^†(t) = ρ̂⇔[Ĥ,ρ̂]=0, and that there are no coherent superpositions of states with different number of particles, which implies [N̂,ρ̂]=0. The latter assumption may be motivated by a particle superselection rule <cit.> and it does not apply when considering scenarios with a vanishing chemical potential μ=0. We now want to maximize the von Neumann entropy S_vN[ρ̂] = -∑_n p_nln p_n, under the following three constraints ∑_n p_n =1,∑_np_nE_n = E̅,∑_np_nN_n = N̅, where the first constraint simply ensures the normalization of the state. Such constrained maximization problems can be solved by the method of Lagrange multipliers <cit.>. To this end, we define the function ℒ = -∑_n p_nln p_n-λ_0(∑_n p_n - 1)-λ_1(∑_n p_nE_n -E̅)-λ_2(∑_n p_nN_n -N̅). The solution to the maximization procedure is the distribution {p_n} that obeys ∂ℒ/∂ p_n = -ln p_n-1-λ_0-λ_1 E_n-λ_2 N_n = 0. We now identify λ_1 = β,λ_2 = -βμ,e^λ_0+1=Z. The solution of Eq. (<ref>) then reads p_n=e^-β(E_n-μ N_n)/Z, recovering the Gibbs state given in Eq. (<ref>). The Lagrange multipliers or, equivalently, the quantities β, μ, and Z are fully determined by the constraints given in Eq. (<ref>). Introducing the grand potential which plays a similar role to the free energy Φ_G = ⟨Ĥ⟩ -k_BTS_vN[ρ̂]-μ⟨N̂⟩, we may write Eq. (<ref>) as ℒ = -βΦ_G-λ_0(∑_n p_n -1)+β( E̅-μN̅). The last term may be dropped as it does not depend on p_n. The last equation then implies that maximizing the von Neumann entropy given the average energy and average particle number is mathematically equivalent to minimizing the grand potential (the only constraint left, corresponding to the Lagrange multiplier λ_0, ensures the normalization of the state). §.§.§ Complete passivity The final motivation we provide for the Gibbs state is based on the notion of passive states <cit.>. These are states from which no work can be extracted. To define passive states, we consider an isolated system that obeys the time-evolution given in Eq. (<ref>). For an isolated system with a time-dependent Hamiltonian, we interpret the total energy change of the system as work. This can be motivated by considering the time-dependence of the Hamiltonian as mediated by a classical field describing a macroscopic degree of freedom. In analogy to a piston compressing a gas, the energy change mediated by this macroscopic degree of freedom should be considered as work. We now consider a cyclic work protocol of duration τ, where the Hamiltonian is changed over time Ĥ(t) such that Ĥ≡Ĥ(0) = Ĥ´(τ). The work extracted during this protocol reads W_ex = Tr{Ĥρ̂}-Tr{ĤÛρ̂Û^†}, where ρ̂ denotes the initial state of the system and the unitary Û≡Û(τ) is determined by the time-dependent Hamiltonian, see Eq. (<ref>). We note that with a suitable Hamiltonian, any unitary operator can be generated. A passive state can now be defined as a state τ obeying Tr{Ĥ(τ̂-Ûτ̂Û^†)}≤ 0∀ Û, which implies that no work can be extracted from passive states, no matter how the time-dependent Hamiltonian is designed. Passive states are of the form τ̂ = ∑_n p_n E_n, with E_0≤ E_1≤ E_2⋯p_0≥ p_1≥ p_2⋯ Remarkably, taking multiple copies of a passive state may result in a state that is no longer passive. 
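The defining inequality of a passive state can be probed numerically by sampling unitaries. The following sketch (plain NumPy; the qutrit spectrum, the inverse temperature, and the number of samples are arbitrary illustrative choices) checks that no work can be extracted from a Gibbs state of a single qutrit; the multi-copy subtlety mentioned above is taken up right after the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Qutrit with Hamiltonian H = diag(0, 1, 2) and a Gibbs (hence passive) state
E = np.array([0.0, 1.0, 2.0])
H = np.diag(E)
beta = 1.0
p = np.exp(-beta * E)
p /= p.sum()
tau = np.diag(p)

def random_unitary(d, rng):
    """Random unitary from the QR decomposition of a complex Ginibre matrix."""
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))   # fix the column phases

# Extracted work Tr{H (tau - U tau U^dag)} should never be positive
w_ex = []
for _ in range(2000):
    U = random_unitary(3, rng)
    w_ex.append(np.real(np.trace(H @ (tau - U @ tau @ U.conj().T))))
print(max(w_ex) <= 1e-12)   # True: no sampled unitary extracts work
```

None of the sampled unitaries extracts work, in line with the definition of passivity for a single copy.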
This suggests the definition of completely passive states <cit.>: a state τ̂ is completely passive iff Tr{Ĥ_n(τ^n-Ûτ̂^nÛ^†)}≤ 0∀ Û, n, where Ĥ_n = Ĥ⊕Ĥ⊕⋯⊕Ĥ,τ̂^n = τ̂⊗τ̂⊗⋯⊗τ̂. Here we introduced the Kronecker sum as Â⊕B̂=Â⊗1+1⊗B̂. This implies that it is not possible to extract work from completely passive states even if we take multiple copies and let them interact during the work protocol. Remarkably, the only completely passive states are Gibbs states in the canonical ensemble (assuming no degeneracy in the ground state) <cit.> ρ̂_c = e^-βĤ/Tr{e^-βĤ}. In complete analogy, the grand-canonical Gibbs state is completely passive with respect to Ĥ-μN̂, i.e., the average ⟨Ĥ-μN̂⟩ cannot be lowered by any unitary, even when taking multiple copies. If we further assume a particle-superselection rule <cit.>, preventing the creation of coherent superpositions of different particle numbers as long as μ≠ 0, then the grand-canonical Gibbs state in Eq. (<ref>) is completely passive since [Û,N̂]=0. §.§ Equivalence of ensembles in the thermodynamic limit We have already encountered the grand-canonical, the canonical, as well as the micro-canonical ensemble. Here we discuss these ensembles in a bit more detail and argue that they all become equivalent in the thermodynamic limit, which is the limit of large systems relevant for macroscopic thermodynamics. Microcanonical ensemble ρ̂_M = ∑_E̅≤ E<E̅+δ E N̅≤ N<N̅+δ N1/Ω(E̅,N̅)δ Eδ NE,N, where Ω(E̅,N̅) denotes the density of states that we have encountered in Eq. (<ref>). In the microcanonical ensemble, both the particle number as well as the energy is fixed, where fixed means that all states within an energy (particle number) shell of thickness δ E (δ N) contribute. Within this shell, the state is assumed to be completely mixed, i.e., all states have equal weight. As such, the state maximizes the von Neumann entropy for fixed E̅ and N̅. Canonical ensemble ρ̂_c = ∑_N̅≤ N<N̅+δ N∑_E e^-β E/Z_cE,N = e^-βĤ_N̅/Tr{e^-βĤ_N̅}, where Ĥ_N̅ denotes the Hamiltonian projected onto the relevant particle-number subspace Ĥ_N̅ = ∑_N̅≤ N<N̅+δ NNĤN,|N⟩=∑_E |E,N⟩. In the canonical ensemble, only the particle number is fixed. The average energy ⟨Ĥ⟩ is determined by the temperature of the environment. The canonical ensemble is the adequate equilibrium state when energy (but not particles) can be exchanged with the environment. The canonical Gibbs state maximizes the von Neumann entropy for a fixed particle number N̅ and an average energy given by ⟨Ĥ⟩=E̅. Furthermore, the canonical Gibbs state minimizes the free energy F = ⟨Ĥ⟩-k_BTS_ vN[ρ̂], when the particle number N̅ is fixed. Grand-canonical ensemble The grand-canonical Gibbs state is given in Eq. (<ref>). As discussed above, it maximizes the von Neumann entropy for given average values of energy and particle number. This is equivalent to minimizing the grand potential given in Eq. (<ref>). The grand-canonical ensemble is the adequate description when both energy and particles may be exchanged with the environment. The average energy and particle number are then determined by the temperature and chemical potential of the environment respectively. Thermodynamic limit and equivalence of ensembles The thermodynamic limit is the limit of large systems which formally can be defined as V→∞,E̅/V=cst.N̅/V=cst. This can be achieved by setting V∝ x, E̅∝ x, N̅∝ x, and letting x→∞. In the thermodynamic limit, all ensembles become equivalent <cit.> because relative fluctuations around average values vanish. 
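The vanishing of relative fluctuations can first be made explicit in a toy model before turning to the general argument below. The sketch assumes N independent two-level systems with level spacing ε in the canonical ensemble (all parameter values are illustrative).

```python
import numpy as np

# N independent two-level systems (energies 0 and eps) in the canonical ensemble
def relative_fluctuations(N, beta=1.0, eps=1.0):
    p = np.exp(-beta * eps) / (1.0 + np.exp(-beta * eps))  # excitation probability
    mean = N * eps * p                                      # average energy
    var = N * eps**2 * p * (1.0 - p)                        # independent systems: variances add
    return var / mean**2

for N in [10, 100, 1000, 10000]:
    print(N, relative_fluctuations(N))   # decreases as 1/N
```

The printed ratios fall off as 1/N, in line with the general argument given next.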
Here, we illustrate this for the canonical ensemble. In this ensemble, the probability for the system to take on a certain energy obeys p(E) ∝ e^-β EΩ(E,N̅). As it turns out, this function becomes highly peaked in the thermodynamic limit. This can be seen from the relative fluctuations ⟨Ĥ^2⟩-⟨Ĥ⟩^2/⟨Ĥ⟩^2 = -1/⟨Ĥ⟩∂_β⟨Ĥ⟩ = k_BT^2/⟨Ĥ⟩^2C∝1/x, where we introduced the heat capacity C=∂_T⟨Ĥ⟩, which quantifies the change of energy upon a change of temperature. The last proportionality in Eq. (<ref>) follows because the heat capacity is an extensive quantity and is thus proportional to x for large systems. Equation (<ref>) implies that in the thermodynamic limit, the relative fluctuations vanish. The probability distribution for the system to take on a given relative energy may then be approximated as a Gaussian distribution (see Ref. <cit.> for a discussion on higher moments) p(E/E̅)≃E̅/√(2π k_BT^2C)e^-(E/E̅-1)^2/2k_BT^2C/E̅^2, which tends to a Dirac delta distribution δ(E/E̅-1) in the thermodynamic limit. § THE LAWS OF THERMODYNAMICS In this section, we discuss how the laws of thermodynamics emerge in a quantum mechanical framework. §.§ The general scenario The general scenario we consider consists of a system coupled to multiple reservoirs which are in local thermal equilibrium as sketched in Fig. <ref>. This is described by the Hamiltonian Ĥ_tot(t)=Ĥ_S(t)+∑_α(Ĥ_α+V̂_α), where the system is labeled by the subscript S and the reservoirs are labeled by the index α. the term V̂_α denotes the coupling between the system and reservoir α. Because the system and reservoirs together comprise an isolated system, the time evolution of the total density matrix is given by ρ̂_tot(t)=Û(t)ρ̂_tot(0)Û^†(t)⇔∂_tρ̂_tot(t)=-i[Ĥ_tot(t),ρ̂_tot(t)], where the time-evolution operator is given by Û(t)=𝒯e^-i∫_0^t dt'Ĥ_tot(t'), with 𝒯 denoting the time-ordering operator (see Sec. <ref>) and we allow the system Hamiltonian to be time-dependent while we assume the reservoir Hamiltonians as well as the coupling terms to be time-independent. As seen in Sec. <ref>, the energy flows between the system and the reservoirs are of crucial importance in thermodynamics. This motivates considering the mean energy change of reservoir α ∂_t⟨Ĥ_α⟩=Tr{Ĥ_α∂_tρ̂_tot}=∂_t⟨Ĥ_α-μ_αN̂_α⟩+μ_α∂_t⟨N̂_α⟩=J_α+P_α. Here we divided the energy change into a part that we call the heat current J_α and a part that we call the power P_α that enters reservoir α. Throughout these notes, the sign convention is such that energy flows are positive when flowing toward the body that their index refers to. To motivate the separation of energy flows into heat and work, we introduce the concept of entropy production in the quantum regime. §.§ Entropy production The standard extension of entropy to the quantum regime is the von Neumann entropy. However, under unitary time-evolution, the von Neumann entropy is constant and thus ∂_t S_vN[ρ̂_tot]=-∂_tTr{ρ̂_tot(t)lnρ̂_tot(t)}=0. The von Neumann entropy of the total system can thus not tell us anything about how energy flows between the system and the reservoirs. To make progress, we consider an effective description based on local thermal equilibrium [i.e., the reservoirs are described by the Gibbs state given in Eq. (<ref>)] * True description: ρ̂_tot(t) * Effective description: ρ̂_S(t)⊗_ατ̂_α where ρ̂_S(t)=Tr_B{ρ̂_tot(t)} and we denote the Gibbs state describing reservoir α as τ̂_α = e^-β_α(Ĥ_α-μ_αN̂_α)/Z_α. 
The effective description thus contains all information on the system but neglects any changes to the reservoirs as well as the correlations that may build up between system and reservoirs. Such an effective description is often the best one can do in an experiment, where one might have control over the microscopic degrees of freedom of the system only. We now consider the quantum relative entropy between the true state ρ̂_tot(t) and our effective description S[ρ̂_tot||ρ̂_S⊗_ατ̂_α]=S_vN[ρ̂_S]+∑_αβ_α⟨Ĥ_α-μ_αN̂_α⟩+∑_αln Z_α-S_vN[ρ̂_tot], where averages are with respect to the total state ⟨X̂⟩ = Tr{X̂ρ̂_tot(t)} and we used lnτ̂_α = -β_α(Ĥ_α-μ_αN̂_α)-ln Z_α as well as ln(Â⊗B̂) = ln(Â)⊗1+1⊗ln(B̂). As discussed in Sec. <ref>, Eq. (<ref>) may be interpreted as the information that is lost due to our effective description. We now introduce the entropy production rate <cit.> Σ̇ = k_B∂_t S[ρ̂_tot||ρ̂_S⊗_ατ̂_α]=k_B∂_tS_vN[ρ̂_S]+∑_αJ_α/T_α. The entropy production rate can thus be interpreted as the rate at which information is lost by our local equilibrium description, due to the buildup of correlations between system and environment as well as changes to the reservoirs. Note that it is not guaranteed to be positive. Finite size effects as well as non-Markovian dynamics can result in a negative entropy production rate (a backflow of information from the reservoirs). However, as we will see later, for infinitely large and memoryless reservoirs, the entropy production rate is ensured to be positive at all times as information is irretrievably lost when one can only access the system alone. Note that Eq. (<ref>) motivates the interpretation of J_α as a heat flow, such that the entropy production associated to reservoir α is given by the usual expression for reservoirs which remain in thermal equilibrium. Interestingly, we may refine our effective description by using time-dependent temperatures and chemical potentials <cit.> τ_α(t) = e^-β_α(t)(Ĥ_α-μ_α(t)N̂_α)/Z_α, such that Tr{Ĥ_αρ̂_tot(t)} = Tr{Ĥ_ατ̂_α(t)},Tr{N̂_αρ̂_tot(t)} = Tr{N̂_ατ̂_α(t)}. In this case, one may show that k_B∂_tS_vN[τ̂_α(t)] = J_α(t)/T_α(t), and the entropy production rate reduces to Σ̇ = k_B∂_t S[ρ̂_tot||ρ̂_S⊗_ατ̂_α(t)] = k_B∂_tS_vN[ρ̂_S]+k_B∑_α∂_tS_vN[τ̂_α(t)] =k_B∂_tS_vN[ρ̂_S]+∑_αJ_α(t)/T_α(t). §.§ The first law of thermodynamics To derive the first law of thermodynamics, we consider the change in the average energy of system and reservoirs ∂_t⟨Ĥ_tot(t)⟩ = Tr{[∂_tĤ_tot(t)]ρ̂_tot}+Tr{Ĥ_tot(t)∂_tρ̂_tot} = ∂_t⟨Ĥ_S(t)⟩ +∂_t∑_α⟨Ĥ_α⟩ + ∂_t∑_α⟨V̂_α⟩. Using Eq. (<ref>), the last term on the first line can be shown to vanish. With the help of the last equation and Eq. (<ref>), we may then derive the first law of thermodynamics (note that the energy flows are defined to be positive when they enter the location corresponding to their index) ∂_t⟨Ĥ_S(t)⟩ = P_S(t) -∑_α[J_α(t)+P_α(t)]-∂_t∑_α⟨V̂_α⟩,P_S≡⟨∂_tĤ_S⟩, where P_S denotes the power entering the system due to some external classical drive that renders Ĥ_S time-dependent. The term due to the coupling energy ⟨V̂_α⟩ can be neglected when the coupling is weak, which is a common assumption for open quantum systems. Relaxing this assumption and considering the thermodynamics of systems that are strongly coupled to the environment is an exciting ongoing avenue of research <cit.>. §.§ The second law of thermodynamics Let us consider an initial state which is a product state of the form ρ̂_tot(0)=ρ̂_S(0)⊗_ατ̂_α, i.e., we assume our effective description to be exact at t=0. In this case, Eq. 
(<ref>) can be written as Σ≡ k_BS[ρ̂_tot(t)||ρ̂_S(t)⊗_ατ̂_α]= k_BS_vN[ρ̂_S(t)]-k_BS_vN[ρ̂_S(0)]+∑_αQ_α/T_α, where the heat is defined as Q_α = Tr{(Ĥ_α-μ_αN̂_α)ρ̂_tot(t)}-Tr{(Ĥ_α-μ_αN̂_α)ρ̂_tot(0)}, and we used that S[ρ̂_tot(0)||ρ̂_S(0)⊗_ατ̂_α]=0. Since it is expressed as a quantum relative entropy, we have Σ≥ 0. From an information point of view, this inequality tells us that if our effective description is true at t=0, then it can only be worse at later times. To understand how our description becomes worse, we follow Refs. <cit.> and write the entropy production as Σ = I_MI[S;B] + S[ρ̂_B(t)||⊗_ατ̂_α], where we introduced the reduced state of the environment ρ̂_B(t)=Tr_S{ρ̂_tot(t)} and the mutual information between system and environment is given by [cf. Eq. (<ref>)] I_MI[S;B] = S_vN[ρ̂_S(t)]+S_vN[ρ̂_B(t)]-S_vN[ρ̂_tot(t)]. The first term in Eq. (<ref>) describes the correlations between the system and the environment while the second term describes the departure of the environment from the initial product of Gibbs states. As was recently shown, the departure of the environment from its initial state provides the dominant contribution to the entropy production <cit.>. This can be seen from the upper bound of the mutual information I_MI[S;B]≤ 2 ln d_S [cf. Eq. (<ref>)], where d_S denotes the dimension of the Hilbert space of the system. In the long-time limit, or under steady state conditions, the mutual information therefore saturates and can no longer contribute to Σ. Any entropy production due to persisting heat currents, see Eq. (<ref>), can thus be associated with the environment departing further and further from the initial state with well defined temperatures and chemical potentials. As mentioned above, the entropy production rate Σ̇ is not always guaranteed to be positive (i.e., Σ is not necessarily a monotonically increasing function of time). However, at small times, Eq. (<ref>) ensures that the entropy production rate is also positive. Furthermore, as we will see in the next section, the entropy production is positive for infinitely large and memory-less reservoirs which couple weakly to the system. We note that with a time-dependent effective description, see Eq. (<ref>), Eqs. (<ref>) and (<ref>) still hold. §.§ The zeroth law of thermodynamics For completeness, we will also briefly mention the zeroth and the third law of thermodynamics, although they will play a less important role throughout these notes. The zeroth law is usually phrased as: If two systems are both in equilibrium with a third one, then they are in equilibrium with each other. Being equipped with definitions for temperature and chemical potential, the zeroth law implies: Systems in equilibrium with each other have the same temperature and chemical potential. When a small system is coupled to a large equilibrium reservoir, then the reduced system state is expected to tend to <cit.> lim_t→∞ρ̂_S(t) = Tr_B{e^-β(Ĥ_tot-μN̂_tot)}/Tr{e^-β(Ĥ_tot-μN̂_tot)}≃ e^-β(Ĥ_S-μN̂_S)/Z. As expected, in equilibrium, the system is thus characterized by the same temperature and chemical potential as the reservoir. We note that equilibrium may only be reached if Ĥ_tot is time-independent and if there is only a single temperature and a single chemical potential characterizing the environment. The exact range of validity of the first equality in Eq. (<ref>) is a subject of ongoing research <cit.>.
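The equilibration statement above can be illustrated with a small example. The following sketch (plain NumPy; two qubits with μ = 0, and all parameter values chosen purely for illustration) compares the reduced state of a global Gibbs state with the local Gibbs state of the system for a weak coupling g.

```python
import numpy as np

def gibbs(H, beta):
    """Gibbs state e^{-beta H}/Z for a Hermitian H (mu = 0 for simplicity)."""
    w, v = np.linalg.eigh(H)
    rho = v @ np.diag(np.exp(-beta * w)) @ v.conj().T
    return rho / np.trace(rho)

def partial_trace_B(rho, dA, dB):
    """Trace out the second subsystem of a (dA*dB) x (dA*dB) density matrix."""
    return np.trace(rho.reshape(dA, dB, dA, dB), axis1=1, axis2=3)

sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
I2 = np.eye(2)

beta, eps_S, eps_B, g = 1.0, 1.0, 0.7, 0.05   # weak system-bath coupling g
H_S = 0.5 * eps_S * sz
H_tot = np.kron(H_S, I2) + np.kron(I2, 0.5 * eps_B * sz) + g * np.kron(sx, sx)

rho_S_exact = partial_trace_B(gibbs(H_tot, beta), 2, 2)  # Tr_B of the global Gibbs state
rho_S_local = gibbs(H_S, beta)                           # local Gibbs state of the system

print(np.max(np.abs(rho_S_exact - rho_S_local)))         # small, and -> 0 as g -> 0
```

The discrepancy between the two states is of second order in the coupling and vanishes as g → 0, while it grows when the coupling is no longer weak.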
§.§ The third law of thermodynamics The formulation of the third law of thermodynamics that we consider here is known as Nernst's unattainability principle <cit.>. In its modern formulation, it reads: It is impossible to cool a system to its ground state (or create a pure state) without diverging resources. These resources may be time, energy (in the form of work), or control complexity <cit.>. When a diverging amount of work is available, one may cool a system to the ground state by increasing the energy of all excited states by an infinite amount. Then, the equilibrium state at any finite temperature reduces to the ground state which can thus be obtained by equilibration with any thermal reservoir. When an infinite amount of time is available, this process may be performed reversibly, reducing the work cost to a finite amount. Loosely speaking, infinite control complexity allows one to parallelize this cooling process and reach the ground state using only a finite amount of time and work <cit.>. Existing proofs of the third law of thermodynamics for quantum systems <cit.> use the framework of the resource theory of thermodynamics <cit.> and are not directly applicable to the scenario considered in these notes. § MARKOVIAN MASTER EQUATIONS In this section, we consider Markovian master equations as a description for the reduced state of the system ρ̂_S. Markovianity implies that no memory effects of the environment are taken into account. For instance, a particle that is emitted from the system will not be reabsorbed by the system before losing all memory of the emission event. In principle, memory effects are always present but if the coupling between system and environment is weak, these effects can often safely be ignored. There are numerous references that discuss Markovian master equations going substantially beyond these notes, see for instance Refs. <cit.>. As illustrated in Fig. <ref>, the time-evolution of the reduced density matrix can be described by a universal dynamical map (UDM). A UDM ℰ is a linear map which transforms a density matrix into another density matrix. Furthermore, it is independent of the density matrix it acts upon. In its most general form, a UDM can be written as ℰρ̂=∑_jK̂_jρ̂K̂_j^†,∑_jK̂_j^†K̂_j=1, where the operators K̂_j are called Kraus operators. We say that a system obeys Markovian time-evolution if it is described by a divisible UDM, i.e., ρ̂(t)=ℰ_t,t_0ρ̂=ℰ_t,t_1ℰ_t_1,t_0ρ̂, For any intermediate time t_0<t_1<t. Furthermore, we note that a differential equation is a Markovian master equation (i.e., results in Markovian time-evolution of a density matrix) if and only if it can be written in the form ∂_tρ̂(t) =-i[Ĥ(t),ρ̂(t)]+∑_kγ_k(t)[L̂_k(t)ρ̂(t)L̂_k^†(t)-1/2{L̂^†_k(t)L̂_k(t),ρ̂(t)}] =-i[Ĥ(t),ρ̂(t)]+∑_kγ_k(t)𝒟[L̂_k(t)]ρ̂(t), where Ĥ(t) is Hermitian, γ_k(t)≥ 0, and the operators L̂_k are referred to as Lindblad jump operators. For a proof, see Ref. <cit.>. This form of the master equation is also called GKLS form, after Gorini, Kosakowski, Sudarshan <cit.>, and Linblad <cit.> who considered the time-independent case. In the rest of this chapter, we provide detailed derivations for Markovian master equations that describe our general scenario introduced in Sec. <ref>. §.§ Nakajima-Zwanzig superoperators To derive Markovian master equations, we use the Nakajima-Zwanzig projection operator approach <cit.> following Ref. <cit.>. To this end, we introduce the superoperators 𝒫ρ̂_tot=Tr_B{ρ̂_tot}⊗_ατ̂_α=ρ̂_S⊗_ατ̂_α,𝒬=1-𝒫. Note that these are projectors as 𝒫^2=𝒫. 
Further, note that we are interested in the time-evolution of 𝒫ρ̂_tot(t), which provides us with an effective description of the form discussed in Sec. <ref>. We consider the general scenario discussed in Sec. <ref>, i.e., the Hamiltonian is given by Eq. (<ref>), but we move to an interaction picture (which we denote by a tilde instead of a hat) ρ̃_tot(t)=Û_0^†(t)ρ̂_tot(t)Û_0(t), determined by the unitary operator Û_0(t)=𝒯e^-i∫_0^tdt'[Ĥ_S(t')+∑_αĤ_α], with 𝒯 denoting time-ordering. In the interaction picture, the time-evolution of the total density matrix is determined by ∂_tρ̃_tot(t)=-i∑_α[Ṽ_α(t),ρ̃_tot(t)]=𝒱(t)ρ̃_tot(t), where we used Eq. (<ref>) as well as ∂_tÛ_0(t)=-i[Ĥ_S(t)+∑_αĤ_α]Û_0(t),∂_tÛ_0^†(t)=iÛ_0^†(t)[Ĥ_S(t)+∑_αĤ_α], and the coupling Hamiltonian in the interaction picture is given by Ṽ_α(t) = Û_0^†(t)V̂_αÛ_0(t), in analogy to Eq. (<ref>). Finally, we have expressed the commutator in Eq. (<ref>) with the help of the superoperator 𝒱. In the following, we will assume 𝒫𝒱(t)𝒫=0. This is not a restriction as it can always be ensured by adding some terms to Ĥ_S and subtracting them from V̂_α, as we will see below. We can then write ∂_t𝒫ρ̃_tot(t)=𝒫𝒱(t)ρ̃_tot(t)=𝒫𝒱(t)𝒬ρ̃_tot(t), ∂_t𝒬ρ̃_tot(t)=𝒬𝒱(t)ρ̃_tot(t)=𝒬𝒱(t)𝒫ρ̃_tot(t)+𝒬𝒱(t)𝒬ρ̃_tot(t), where we used 𝒫+𝒬=1. The formal solution to the second equation is given by 𝒬ρ̃_tot(t) = 𝒢(t,0)𝒬ρ̃_tot(0)+∫_0^tds𝒢(t,s)𝒬𝒱(s)𝒫ρ̃_tot(s), where we introduced the propagator 𝒢(t,s) =𝒯e^∫_s^t𝒬𝒱(τ)dτ. We now assume factorizing initial conditions ρ̃_tot(0)=ρ̂_S(0)⊗_ατ̂_α, such that 𝒫ρ̃_tot(0)=ρ̃_tot(0) and 𝒬ρ̃_tot(0)=0. Inserting Eq. (<ref>) into Eq. (<ref>), we find ∂_t𝒫ρ̃_tot(t)=∫_0^tds𝒫𝒱(t)𝒢(t,s)𝒬𝒱(s)𝒫ρ̃_tot(s). We note that this expression is still exact (for the given initial conditions). Since it explicitly depends on 𝒫ρ̃_tot at previous times, it contains memory effects and does not constitute a Markovian master equation. §.§ Born-Markov approximations We now make a weak coupling approximation. If the coupling between system and reservoirs are proportional to V̂_α∝ r, with r≪ 1, the propagator in Eq. (<ref>) obeys 𝒢(t,s)=1+𝒪(r), resulting in ∂_t𝒫ρ̃_tot(t)=𝒫∫_0^tds𝒱(t)𝒱(s)𝒫ρ̃_tot(s)+𝒪(r^3), where we again used 𝒫𝒱(t)𝒫=0. The last equation implies ∂_tρ̃_S(t)=-∫_0^tds∑_αTr_B{[Ṽ_α(t),[Ṽ_α(t-s),ρ̃_S(t-s)⊗_ατ̂_α]]}, where we substituted s→ t-s and we made use of Tr_B{Ṽ_δ(t)Ṽ_γ(s)ρ̃_S(s)⊗_ατ̂_α}=0 for δ≠γ. This is similar to the assumption 𝒫𝒱(t)𝒫=0 and can always be ensured by an appropriate redefinition of the terms in the Hamiltonian as shown below. We note that Eq. (<ref>) is often obtained by assuming that ρ̃_tot(t)=ρ̃_S(t)⊗_ατ̂_α at all times, the so-called Born approximation. Here we do not make such an assumption. In agreement with the discussion in the previous section, we consider ρ̃_S(t)⊗_ατ̂_α to be an effective description, which only keeps track of the system and neglects changes in the environment state as well as correlations between system and environment. In addition to the weak coupling approximation, we now make a Markov approximation. To this end, we assume that the integrand in Eq. (<ref>) decays on a time-scale τ_B (the bath-correlation time, more on this below). If this time-scale is short enough, which is the case for large, memory-less environments, we can assume ρ̃_S(t-s) to approximately remain constant and replace its time-argument in Eq. (<ref>) by t. Furthermore, using the same argumentation, we can extend the integral to infinity obtaining ∂_tρ̃_S(t)=-∫_0^∞ds∑_αTr_B{[Ṽ_α(t),[Ṽ_α(t-s),ρ̃_S(t)⊗_ατ̂_α]]}. 
This equation is Markovian, i.e., it is local in time and does not depend explicitly on the initial conditions. However, it is not in GKLS form and does not in general preserve the positivity of the density matrix, i.e., eigenvalues may become negative. The approximations that result in Eq. (<ref>) are usually called the Born-Markov approximations. For a more formal application of these approximations, see Refs. <cit.>. Note that under the Born-Markov approximations, the effect induced by different reservoirs is additive. To make progress, we write the coupling Hamiltonian in the general form V̂_α = ∑_k Ŝ_α,k⊗B̂_α,k=∑_kŜ^†_α,k⊗B̂^†_α,k, where we used the Hermiticity of V̂_α in the second equality. We note that the operators Ŝ_α,k and B̂_α,k are not necessarily Hermitian. Inserting Eq. (<ref>) into Eq. (<ref>), we find after some algebra ∂_tρ̃_S(t)=∑_α∑_k,k'∫_0^∞ ds{C^α_k,k'(s)[S̃_α,k'(t-s)ρ̃_S(t)S̃^†_α,k(t)-S̃^†_α,k(t)S̃_α,k'(t-s)ρ̃_S(t)]. .+C^α_k,k'(-s)[S̃_α,k'(t)ρ̃_S(t)S̃^†_α,k(t-s)-ρ̃_S(t)S̃^†_α,k(t-s)S̃_α,k'(t)]}, where we introduced the bath-correlation functions C^α_k,k'(s)=Tr{B̃^†_α,k(s)B̂_α,k'τ̂_α}, and we used [C^α_k,k'(s)]^*=C^α_k',k(-s). These bath-correlation functions are usually peaked around s=0 and decay over the time-scale τ_B (indeed, this is how τ_B is defined). If this time-scale is short, the integrand in Eq. (<ref>) decays quickly and the Markov assumption performed above is justified. Note that it is important that this approximation is made in the interaction picture, where ρ̃_S varies slowly (in the Schrödinger picture, ρ̂_S tends to oscillate with frequencies given by the differences of the eigenvalues of Ĥ_S). While Eq. (<ref>) is in general still not in GKLS form, it happens to reduce to GKLS form when C^α_k,k'(s)∝δ_k,k' and [Ŝ_α,k,Ĥ_S]=ω_α,kŜ_α,kS̃_α,k(t)=e^-iω_α,ktŜ_α,k, which implies that Ŝ_α,k are ladder operators for the Hamiltonian Ĥ_S, removing the energy ω_α,k from the system. Before we consider such an example in detail, we briefly justify two assumptions made in the derivation above. To this end, we note that if Tr{B̂_α,kτ̂_α} =0, then it is straightforward to show that 𝒫𝒱(t)𝒫=0,Tr{Ṽ_δ(t)Ṽ_γ(s)ρ̃_S(t)⊗_ατ̂_α}=0,∀δ≠γ. In case Eq. (<ref>) is not true, we may define B̂'_α,k =B̂_α,k-Tr{B̂_α,kτ̂_α}, which do fulfill Tr{B̂'_α,kτ̂_α}=0. Defining V̂_α'=∑_k Ŝ_α,k⊗B̂'_α,k,Ĥ_S'(t)=H_S(t)+∑_α,kTr{B̂_α,kτ̂_α}Ŝ_α,k, we have Ĥ_tot(t) = H_S'(t)+∑_α (Ĥ_α+V̂_α'), i.e., the same total Hamiltonian but now in a form such that Eq. (<ref>) holds, ensuring our assumptions given in Eq. (<ref>). §.§ Example: equilibration of a quantum dot We now consider an example provided by a spinless, single-level quantum dot tunnel-coupled to a single fermionic reservoir, see Fig. <ref>. In this case, Eq. (<ref>) happens to already be in GKLS form. The Hamiltonian of system and environment is then given by Ĥ_tot=Ĥ_S+Ĥ_B+V̂, with Ĥ_S=ε_dd̂^†d̂,Ĥ_B=∑_qε_qĉ_q^†ĉ_q,V̂=∑_q(g_qd̂ĉ_q^†-g_q^*d̂^†ĉ_q). Here the reservoir is modeled as a collection of non-interacting fermions and the coupling Hamiltonian describes tunneling of single electrons between the system and the reservoir (the minus sign arises from the fermionic anti-commutation relations). We note that for fermions, there is no tensor product structure because operators on the environment and on the system may anti-commute. Strictly speaking, the derivation in the last section is thus not valid. 
However, using a Jordan-Wigner transform, one can map the fermionic system onto a spin system where such a tensor-product structure is provided <cit.>. After tracing out the reservoir, the spin operators can then be replaced by fermionic operators again. For the tunneling Hamiltonian that we often use to describe the system-environment coupling for fermions, this procedure is equivalent to replacing the tensor product by the usual product between fermionic operators in the derivation. §.§.§ The master equation Comparing the coupling Hamiltonian to Eq. (<ref>), we may write V̂=Ŝ_0B̂_0+Ŝ_1B̂_1 with Ŝ_0=d̂,Ŝ_1=d̂^†, B̂_0=∑_qg_qĉ_q^†, B̂_1=-∑_qg_q^*ĉ_q. We further find [d̂,Ĥ_S]=ε_dd̂⇒S̃_0(t) = e^-iε_dtd̂,S̃_1(t) = e^iε_dtd̂^† = S̃^†_0(t), and similarly B̃_0(t) = ∑ g_qe^iε_qtĉ_q^†,B̃_1(t)=-∑ g_q^*e^-iε_qtĉ_q. These expressions result in the bath-correlation functions C_0,0(s) =∑_q|g_q|^2e^-iε_qsTr{ĉ_qĉ_q^†τ̂_B}=∑_q|g_q|^2e^-iε_qs[1-n_F(ε_q)] =∫_-∞^∞ dω e^-iω sρ(ω)[1-n_F(ω)] , C_1,1(s) =∑_q|g_q|^2e^iε_qsTr{ĉ^†_qĉ_qτ̂_B}=∑_q|g_q|^2e^iε_qsn_F(ε_q) =∫_-∞^∞ dω e^iω sρ(ω)n_F(ω) , where we introduced the Fermi-Dirac occupation n_F(ω)=1/e^ω-μ/k_BT+1, and made use of Tr{ĉ^†_qĉ_q'τ̂_B}=0 for q≠ q'. Furthermore, it is easy to show that C_0,1(s)=C_1,0(s)=0 since Tr{(ĉ^†_q)^2τ̂_B}=Tr{(ĉ_q)^2τ̂_B}=0. Finally, in Eq. (<ref>) we introduced the spectral density ρ(ω)=∑_q|g_q|^2δ(ε_q-ω), which will be treated as a continuous function. This is justified whenever the summands in Eq. (<ref>) are sufficiently smooth such that the sums can be thought of as Riemann sums that approximate integrals of smooth functions. Using Eq. (<ref>), we may re-write the master equation in Eq. (<ref>) as ∂_tρ̃_S(t) = ∫_-∞^∞ ds{C_0,0(s)e^iε_ds𝒟[d̂]ρ̃_S(t)+C_1,1(s)e^-iε_ds𝒟[d̂^†]ρ̃_S(t)} -1/2∫_-∞^∞ ds sign(s)[C_0,0(s)e^iε_ds-C_1,1(s)e^-iε_ds][d̂^†d̂,ρ̃_S(t)], where 𝒟[Â]ρ̂=Âρ̂Â^†-1/2{Â^†Â,ρ̂}. With the help of the bath-correlation functions given in Eq. (<ref>), we may evaluate the integrals over s to find the transition rates ∫_-∞^∞ dsC_0,0(s)e^iε_ds= ∫_-∞^∞ dω∫_-∞^∞ ds e^-i(ω-ε_d) sρ(ω)[1-n_F(ω)]=κ[1-n_F(ε_d)], ∫_-∞^∞ dsC_1,1(s)e^-iε_ds= ∫_-∞^∞ dω∫_-∞^∞ ds e^i(ω-ε_d) sρ(ω)n_F(ω)=κ n_F(ε_d), where we used ∫_-∞^∞ ds e^-i(ω-ε_d) s=2πδ(ω-ε_d), and introduced κ≡ 2πρ(ε_d). We furthermore find the so-called Lamb shift -1/2∫_-∞^∞ ds sign(s)[C_0,0(s)e^iε_ds-C_1,1(s)e^-iε_ds] = -i Im∫_0^∞ ds[C_0,0(s)e^iε_ds-C_1,1(s)e^-iε_ds] = iP∫_-∞^∞ dωρ(ω)/ω-ε_d, where we made use of C_j,j^*(s) = C_j,j(-s) as well as the identity lim_t→∞∫_0^tds e^iω s=πδ(ω)+iP(1/ω), with P denoting the Cauchy principal value [i.e., P(1/ω) is equal to 1/ω except at ω=0, where the principal value vanishes]. Finally, inserting Eqs. (<ref>) and (<ref>) back into Eq. (<ref>), we find the Markovian master equation in the Schrödinger picture ∂_tρ̂_S(t) = -i[Ĥ_S,ρ̂_S(t)]+e^-iĤ_St[∂_tρ̃_S(t)]e^iĤ_St = -i[ε̅_dd̂^†d̂,ρ̂_S(t)]+κ[1-n_F(ε_d)]𝒟[d̂]ρ̂_S(t)+κ n_F(ε_d)𝒟[d̂^†]ρ̂_S(t), where the renormalized dot energy reads ε̅_d = ε_d+P∫_-∞^∞dωρ(ω)/ε_d-ω. The reservoir thus has two effects: Through the dissipative part of the master equations, it describes electrons entering (𝒟[d̂^†]) and leaving (𝒟[d̂]) the quantum dot. Note that to enter the dot, electrons in the reservoir have to be available, corresponding to the factor n_F while to leave the dot, empty states have to be available corresponding to the factor 1-n_F. In addition, the energy level of the quantum dot is renormalized. 
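Before solving the dynamics analytically, it is instructive to propagate the master equation numerically; this also illustrates the vectorization of superoperators discussed earlier. The sketch below (NumPy and SciPy; all parameter values are illustrative, and the Lamb shift is ignored since it does not affect the populations) builds the generator as a 4×4 matrix using column stacking and evolves an initially empty dot.

```python
import numpy as np
from scipy.linalg import expm

# Single-level dot coupled to one fermionic reservoir:
# drho/dt = -i[eps_d d^dag d, rho] + kappa(1-nF) D[d] rho + kappa nF D[d^dag] rho
eps_d, mu, kT, kappa = 1.0, 0.0, 0.5, 0.1
nF = 1.0 / (np.exp((eps_d - mu) / kT) + 1.0)

d = np.array([[0.0, 1.0], [0.0, 0.0]])   # annihilation operator |0><1|
dd = d.conj().T
H = eps_d * dd @ d                       # Lamb shift ignored for simplicity
I2 = np.eye(2)

def dissipator(L):
    """Column-stacking vectorization of D[L]rho = L rho L^dag - {L^dag L, rho}/2."""
    LdL = L.conj().T @ L
    return np.kron(L.conj(), L) - 0.5 * (np.kron(I2, LdL) + np.kron(LdL.T, I2))

Liou = (-1j * (np.kron(I2, H) - np.kron(H.T, I2))
        + kappa * (1 - nF) * dissipator(d)
        + kappa * nF * dissipator(dd))

rho0 = np.diag([1.0, 0.0])               # start with an empty dot
for t in [0.0, 10.0, 100.0]:
    vec_t = expm(Liou * t) @ rho0.reshape(-1, order="F")
    p1 = np.real(vec_t.reshape(2, 2, order="F")[1, 1])
    print(t, p1)                         # relaxes toward nF(eps_d) ~ 0.119
```

The occupation relaxes toward n_F(ε_d) on a time scale 1/κ, in agreement with the analytic solution obtained below.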
Note that when taking the ratio between the rates of entering and leaving the quantum dot, we obtain the Boltzmann factor e^β(ε_d-μ) = κ[1-n_F(ε_d)]/κ n_F(ε_d). This condition is known as local detailed balance and it generally holds for transition rates that are induced by reservoirs in thermal equilibrium. It ensures that in equilibrium, the system state tends to a Gibbs state, see also Eq. (<ref>) below. As discussed above, the Markovian approximation is justified if the bath-correlation functions decay on a time scale which is much faster than the time over which ρ̃_S varies. In the limiting case where both ρ(ω) as well as n_F(ω) are independent of ω, the bath-correlation functions become proportional to a Dirac delta function and the environment becomes truly memoryless (i.e., τ_B→0). In practice, it is sufficient for τ_B to be much shorter than any relevant time-scale of the system. In energy, this translates to the condition that the functions ρ(ω) as well as n_F(ω) are flat around the relevant energies of the system. For the present system ρ̃_S changes on the time-scale 1/κ. The Markov approximation is then valid as long as κτ_B≪ 1. In energy space, this requires ρ(ω) as well as n_F(ω) to be approximately constant in the interval ω∈[ε_d-κ,ε_d+κ]. The spectral density depends on the details of the reservoir and may or may not fulfill this condition depending on the specific scenario. For the Fermi-Dirac occupation to be sufficiently flat for the Markov approximation, the following condition has to hold κ≪max{k_BT,|ε_d-μ|}. At low temperatures, the Fermi-Dirac occupation becomes a step function. Therefore, the Markovian approximation is not justified at low temperatures if the dot level ε_d is close to the chemical potential. For a more detailed discussion on the validity of the Born-Markov approximations, see appendix B.1 of Ref. <cit.>. §.§.§ Solving the master equation To solve the master equation in Eq. (<ref>), we write the density matrix of the system as ρ̂_S(t)=p_0(t)|0⟩⟨ 0|+p_1(t)|1⟩⟨ 1|,p_0(t)+p_1(t)=1, where |0⟩ denotes the empty state and |1⟩ the full state. Here we used the fact that we cannot have a superposition of states with a different number of electrons in the system due to a particle superselection rule <cit.>. Using these basis states, the fermionic operators can be cast into d̂ = 01,d̂^† = 10,d̂^†d̂ = 1,p_1(t)=Tr{d̂^†d̂ρ̂_S(t)}. The master equation in Eq. (<ref>) can then be reduced to ∂_t p_1(t) = -κ[1-n_F(ε_d)]p_1(t)+κ n_F(ε_d)p_0(t) = -κ[p_1(t)-n_F(ε_d)]. This equation shows that a full dot is emptied with rate κ[1-n_F(ε_d)] whereas an empty dot is filled with rate κ n_F(ε_d). The solution to this differential equation reads p_1(t)=p_1(0)e^-κ t+n_F(ε_d)(1-e^-κ t). The occupation probability thus exponentially goes toward the equilibrium value n_F(ε_d). The time-scale with which this happens is given by 1/κ. In equilibrium, the system is described by the thermal state with temperature and chemical potential equal to those of the reservoir, as demanded by the zeroth law of thermodynamics, and we find lim_t→∞ρ̂_S(t) = e^-β(ε_d-μ)d̂^†d̂/Tr{e^-β(ε_d-μ)d̂^†d̂}. This result is closely related to the local detailed balance condition in Eq. (<ref>). §.§.§ Energy flows and the first law In addition to the state of the quantum dot, we are interested in the energy flow between the dot and the reservoir. To this end, we consider the change in the energy of the system ∂_t⟨Ĥ_S⟩ = Tr{ε_dd̂^†d̂∂_tρ̂_S}=(ε_d-μ)∂_t⟨d̂^†d̂⟩+μ∂_t⟨d̂^†d̂⟩=-J_B(t)-P_B(t). 
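To make the relaxation dynamics concrete, here is a minimal Python/NumPy sketch (added for illustration, not part of the original derivation) that integrates the rate equation for p_1(t) with an explicit Euler step and compares it with the closed-form solution above and with the Fermi-Dirac occupation reached at long times. The parameter values are arbitrary choices in units where ħ=k_B=1.

```python
import numpy as np

# Illustrative parameters (arbitrary units with hbar = k_B = 1)
kappa = 0.1       # dot-reservoir coupling rate
eps_d = 1.0       # dot level
mu, T = 0.5, 1.0  # chemical potential and temperature of the reservoir

n_F = 1.0 / (np.exp((eps_d - mu) / T) + 1.0)  # Fermi-Dirac occupation at eps_d

def p1_exact(t, p1_0):
    """Closed-form solution p1(t) = p1(0) e^{-kappa t} + n_F (1 - e^{-kappa t})."""
    return p1_0 * np.exp(-kappa * t) + n_F * (1.0 - np.exp(-kappa * t))

# Explicit Euler integration of dp1/dt = -kappa (p1 - n_F), starting from a filled dot
dt, t_max, p1 = 1e-3, 80.0, 1.0
ts = np.arange(0.0, t_max, dt)
p1_num = np.empty_like(ts)
for i, t in enumerate(ts):
    p1_num[i] = p1
    p1 += -kappa * (p1 - n_F) * dt

print("max |numerical - exact|:", np.max(np.abs(p1_num - p1_exact(ts, 1.0))))
print("occupation at t_max    :", p1_num[-1])
print("Fermi-Dirac occupation :", n_F)  # long-time limit of p1(t)
```

The late-time occupation agrees with n_F(ε_d) up to a correction of order e^{-κ t}, consistent with the equilibration to the Gibbs state discussed above.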
To identify the heat current and the power, we used that the change in the number of electrons in the system is minus the change in the number of electrons in the reservoir, more on this in Sec. <ref>. Using p_1=⟨d̂^†d̂⟩, we find for the heat current and power that flow into the reservoir J_B(t) = -(ε_d-μ)∂_t⟨d̂^†d̂⟩= (ε_d-μ)κ e^-κ t[p_1(0)-n_F(ε_d)], P_B(t) = -μ∂_t⟨d̂^†d̂⟩= μκ e^-κ t[p_1(0)-n_F(ε_d)]. We thus find that if the dot starts out in a non-equilibrium state, there is an exponentially decreasing energy flow which can be divided into power and heat. The power flows into the reservoir whenever p_1(t)>n_F(ε_d). The heat flow additionally depends on the sign of ε_d-μ: electrons entering the reservoir above the chemical potential (ε_d-μ>0) heat up the reservoir, while electrons entering below the chemical potential (ε_d-μ<0) cool it down. This can be understood intuitively by noting that at zero temperature, all states below the chemical potential are occupied while states above the chemical potential are empty. Electrons entering the reservoir below the chemical potential thus bring the reservoir closer to the zero-temperature distribution. §.§.§ Entropy and the second law For the second law, we need to consider the entropy of the quantum dot given by S_vN[ρ̂_S(t)]=-p_0(t)ln p_0(t) -p_1(t)ln p_1(t),∂_tS_vN[ρ̂_S(t)] = -ṗ_1(t)lnp_1(t)/1-p_1(t), where the dot denotes the time-derivative and we used p_0+p_1=1 to compute the derivative. The entropy production rate given in Eq. (<ref>) can then be expressed as Σ̇(t) = k_B∂_tS_vN[ρ̂_S(t)]+J_B/T=k_BJ_B(t)/(ε_d-μ)[β(ε_d-μ)+lnp_1(t)/1-p_1(t)], where we used -ṗ_1=J_B/(ε_d-μ) which follows from Eq. (<ref>). Using the equality in Eq. (<ref>), we find Σ̇(t) = k_BJ_B(t)/(ε_d-μ)ln(p_1(t)[1-n_F(ε_d)]/[1-p_1(t)]n_F(ε_d)) = k_Bκ(p_1(t)[1-n_F(ε_d)]-[1-p_1(t)]n_F(ε_d))ln(p_1[1-n_F(ε_d)]/[1-p_1]n_F(ε_d))≥ 0, where we used J_B = (ε_d-μ)κ(p_1-n_F) which follows from Eqs. (<ref>) and (<ref>). the positivity of Σ̇ can be shown by writing it in the form (x-y)(ln x-ln y)≥ 0 which is ensured to be positive because the logarithm is a monotonously increasing function. Using the solution in Eq. (<ref>), we can explicitly write the entropy production rate as Σ̇(t) = k_Bκ e^-κ tδ_0ln(e^-κ tδ_0+n_F)(1-n_F)/(1-n_F-e^-κ tδ_0)n_F, where we introduced δ_0 = p_1(0)-n_F and we suppressed the argument of the Fermi-Dirac occupation for ease of notation. We thus find that for this Markovian master equation, the entropy production rate is indeed always positive, as anticipated above, and exponentially decreases in time such that in equilibrium, no entropy is produced as expected. §.§ Obtaining GKLS form We now return to Eq. (<ref>). As mentioned above, this equation is not in GKLS form. In this section, we consider additional approximations that bring it into GKLS form. To this end, we first write S̃_α,k(t) = ∑_j e^-iω_j tŜ^j_α,k, i.e., we write the system operators in Eq. (<ref>) in a Fourier series. While this can always be done, it is particularly simple when the system Hamiltonian is time-independent. In this case, we may introduce its eigenstates as Ĥ_S|E_a⟩ = E_a|E_a⟩. We may then find the operators Ŝ^j_α,k by multiplying Ŝ_α,k from the left and the right by resolved identities Ŝ_α,k=∑_a,bE_aŜ_α,kE_b_Ŝ_α,k^j⇒[Ŝ_α,k^j,Ĥ_S]=ω_jŜ_α,k^j, with ω_j=E_b-E_a. With the help of Eq. (<ref>), we may cast Eq. (<ref>) into ∂_tρ̃_S(t) = ∑_α,k,k'∑_j,j'e^i(ω_j-ω_j')tΓ_k,k'^α(ω_j')[Ŝ_α,k'^j'ρ̃_S(t)(Ŝ_α,k^j)^†-(Ŝ_α,k^j)^†Ŝ_α,k'^j'ρ̃_S(t)]+H.c. 
with Γ_k,k'^α(ω)≡∫_0^∞ ds e^iω sC_k,k'^α(s) = 1/2γ_k,k'^α(ω)+iΔ_k,k'^α(ω), where γ_k,k'^α(ω) and Δ_k,k'^α(ω) are both real. In the remainder of this section, we will consider a time-independent Hamiltonian Ĥ_S, as well as bath-correlation functions that obey C_k,k'^α∝δ_k,k' for simplicity. This will simplify the notation as well as the derivation of the laws of thermodynamics for the master equations we consider. §.§.§ The secular approximation The secular approximation is the most common approach for obtaining a master equation in GKLS form and can be found in many text-books (see for instance Ref. <cit.>). Let τ_S denote the time scale over which ρ̃_S changes (remember that the Born-Markov approximations require τ_S≫τ_B). The secular approximation is justified if |ω_j-ω_j'|τ_S≫1∀ j≠ j'. In this case, we may drop all terms in Eq. (<ref>) with j≠ j' because they average out over a time-scale shorter than τ_S due to the oscillating term exp[i(ω_j-ω_j')t]. Going back to the Schrödinger picture, we then find the GKLS master equation ∂_tρ̂_S(t) = -i[Ĥ_S+Ĥ_LS,ρ̂_S(t)]+∑_α,k,jγ_k^α(ω_j)𝒟[Ŝ_α,k^j]ρ̂_S(t), where γ_k^α(ω_j)=γ_k,k^α(ω_j) and the Lamb-shift Hamiltonian is given by Ĥ_LS = ∑_α,k,jΔ_k^α(ω_j)(Ŝ_α,k^j)^†Ŝ_α,k^j, with Δ_k^α(ω_j)=Δ_k,k^α(ω_j). We note that from Eq. (<ref>), it follows that [Ĥ_S,Ĥ_LS]=0. Due to Eq. (<ref>), the secular approximation works well for systems which have no small gaps in Ĥ_S. As we will see, thermal machines may consist of multiple parts that are weakly coupled and have small energy gaps induced by the weak coupling. In such systems, the secular approximation breaks down. Note that in order to obtain the jump operators Ŝ^j_α,k, the Hamiltonian Ĥ_S needs to be diagonalized first. This implies that already obtaining the master equation may be a formidable task. We note that in the secular approximation, for a non-degenerate system Hamiltonian, the populations decouple from the coherences. Indeed, the dissipative part of Eq. (<ref>) describes classical jumps between the eigenstates of Ĥ_S. Concretely, this means that in the energy-eigenbasis, the off-diagonal terms of the density matrix tend to zero and the dynamics can be described by a classical rate equation involving only the populations. While this may be a good approximation, there are many situations of interest where coherences between energy eigenstates are important in the presence of thermal reservoirs and the secular approximation is no longer justified <cit.>. A particularly appealing feature of the secular approximation is the fact that the laws of thermodynamics are ensured to hold <cit.>: 0th law: If all reservoirs are at the same inverse temperature β and chemical potential μ, then the steady state of Eq. (<ref>) reduces to the Gibbs state ρ̂_S(t)e^-β(Ĥ_S-μN̂_S)/Tr{e^-β(Ĥ_S-μN̂_S)}. In equilibrium, the system is thus described by the same temperature and chemical potential as the environment. 1st law: Writing the master equation in Eq. (<ref>) as ∂_tρ̂_S(t) = -i[Ĥ_S+Ĥ_LS,ρ̂_S(t)]+∑_αℒ_αρ̂_S(t), we find the first law as ∂_t⟨Ĥ_S⟩ = -∑_α[J_α(t)+P_α(t)], with J_α(t) = -Tr{(Ĥ_S-μ_αN̂_S)ℒ_αρ̂_S(t)},P_α(t) = -μ_αTr{N̂_Sℒ_αρ̂_S(t)}. These definitions are to be compared with the definitions for the power and heat current in the general scenario, c.f. Eq. (<ref>). There, we defined heat and work using the changes in the energy and particle numbers in the reservoirs. When using a master equation for the reduced system state, we no longer have access to the properties of the reservoirs. 
However, we may still infer heat and work because the term ℒ_α in the master equation describes the exchange of energy and particles with reservoir α. An increase in the particle number due to ℒ_α implies a decrease of the particle number in the reservoir α by the same amount, and similarly for energy. This is discussed in more detail below, see Sec. <ref>. 2nd law: The second law of thermodynamics also follows from Eq. (<ref>) and it may be shown that <cit.> Σ̇ = k_B∂_tS_vN[ρ̂_S(t)]+∑_αJ_α(t)/T_α≥ 0. While we focused on a time-independent system Hamiltonian here, the secular approximation may analogously be applied for a time-dependent Hamiltonian and the laws of thermodynamics continue to hold in this case, see Ref. <cit.> and references therein. §.§.§ The singular-coupling limit The singular-coupling limit <cit.> is another popular approach to obtain a master equation in GKLS form. It is justified when all Bohr frequencies ω_j are close to each other, i.e., |ω_j-ω_j'|τ_B≪ 1∀ j,j'. In this case, we may write S̃_α,k(t-s)=∑_je^-iω_j(t-s)Ŝ_α,k^j≃ e^iω_α,ks∑_je^-iω_jtŜ_α,k^j=e^iω_α,ksS̃_α,k(t), where ω_α,k has to be chosen such that |ω_α,k-ω_j|τ_B≪ 1 for all j. Because of Eq. (<ref>), one may for instance choose ω_α,k to equal the average of all ω_j. Equation (<ref>) is justified for all s≲τ_B, i.e., exactly the values of s which are relevant in the integral of Eq. (<ref>). As a consequence of Eq. (<ref>), we may replace Γ_k^α(ω_j') by Γ_k^α(ω_α,k) in Eq. (<ref>) which, in the Schrödinger picture, results in the GKLS master equation ∂_tρ̂_S(t) = -i[Ĥ_S+Ĥ_LS,ρ̂_S(t)]+∑_α,kγ_k^α(ω_α,k)𝒟[Ŝ_α,k]ρ̂_S(t), where the Lamb-shift Hamiltonian reads Ĥ_LS = ∑_α,kΔ_k^α(ω_α,k)Ŝ_α,k^†Ŝ_α,k, which does not necessarily commute with Ĥ_S. Note that the jump operators Ŝ_α,k entering Eq. (<ref>) are the operators that enter the coupling Hamiltonian V̂_α. This implies that, in contrast to the secular approximation, the system Hamiltonian does not need to be diagonalized in order to write down the master equation. Since we are often interested in systems that are not explicitly solvable, this is very helpful. We further note that the singular-coupling limit is always justified for a perfectly Markovian environment, i.e., an environment where τ_B→ 0. More generally, the singular-coupling limit is justified when γ_k^α(ω_j)≃γ_k^α(ω_α,k) for all j. This is often the case in quantum-optical systems, which is why the singular-coupling limit is widely applied in this community. Showing that the laws of thermodynamics hold in the singular-coupling limit is a bit more difficult than in the secular approximation. We first need to introduce a thermodynamic Hamiltonian Ĥ_TD, which is obtained by rescaling the gaps of Ĥ_S as ω_j→ω_α,k. The thermodynamic Hamiltonian is then used to compute the internal energy of the system. As argued in Ref. <cit.>, the mistake we make by replacing Ĥ_S with Ĥ_TD in the thermodynamic bookkeeping is smaller than the resolution of heat that the master equation in Eq. (<ref>) ensures. Within the accuracy of our model, the replacement is thus completely justified and in many cases it can be compared to neglecting the system-bath coupling in the thermodynamic bookkeeping. For the thermodynamic Hamiltonian, we find [Ĥ_TD,Ĥ_S] = [Ĥ_TD,Ĥ_LS] = 0,[Ŝ_α,k,Ĥ_TD] = ω_α,kŜ_α,k. The last equality implies that a jump Ŝ_α,kρ̂_SŜ_α,k^† reduces the internal energy by ω_α,k. 
With the help of the thermodynamic Hamiltonian, the laws of thermodynamics may be shown to hold <cit.>: 0th law: If all reservoirs are at the same inverse temperature β and chemical potential μ, then the steady state of Eq. (<ref>) reduces to the Gibbs state ρ̂_S(t)e^-β(Ĥ_TD-μN̂_S)/Tr{e^-β(Ĥ_TD-μN̂_S)}. In equilibrium, the system is thus described by the same temperature and chemical potential as the environment. 1st law: Writing the master equation in Eq. (<ref>) as ∂_tρ̂_S(t) = -i[Ĥ_S+Ĥ_LS,ρ̂_S(t)]+∑_αℒ_αρ̂_S(t), we find the first law as ∂_t⟨Ĥ_TD⟩ = -∑_α[J_α(t)+P_α(t)], with J_α(t) = -Tr{(Ĥ_TD-μ_αN̂_S)ℒ_αρ̂_S(t)},P_α(t) = -μ_αTr{N̂_Sℒ_αρ̂_S(t)}. 2nd law: The second law of thermodynamics also follows from Eq. (<ref>) and it may be shown that Σ̇ = k_B∂_tS_vN[ρ̂_S(t)]+∑_αJ_α(t)/T_α≥ 0. As for the secular approximation, the results from this section may be extended to a time-dependent Hamiltonian and the laws of thermodynamics continue to hold in this case <cit.>. We note that in particular in thermodynamic contexts, it is often the case that there are positive and negative Bohr frequencies, with each S̃_α,k(t) containing only frequencies of one sign. In this case, we may perform the singular-coupling limit for the positive and negative Bohr frequencies separately. In this case, the master equation in Eq. (<ref>) is also valid if Eq. (<ref>) is only respected for frequencies ω_j and ω_j' of the same sign. Below, we will discuss an example where this is the case, see Sec. <ref>. §.§.§ The unified GKLS master equation Finally, we briefly mention an approach to obtain GKLS form for systems where neither the secular approximation nor the singular-coupling limit is justified <cit.>. The problem is that for some values of j and j', we may find |ω_j-ω_j'|τ_S≲ 1, rendering the secular approximation inapplicable, while for other values of j and j', we may have |ω_j-ω_j'|τ_B≳ 1, such that the singular-coupling limit may not be applied. The solution for this problem exploits the fact that τ_B≪τ_S, otherwise the Born-Markov approximations are not justified in the first place. This inequality implies that for all values of j and j', we either have |ω_j-ω_j'|τ_S≫ 1 or we have |ω_j-ω_j'|τ_B≪ 1. One may then perform the following approximations to reach GKLS form: * Drop all terms with j and j' such that |ω_j-ω_j'|τ_S≫ 1, in analogy to the secular approximation. * Perform a singular-coupling limit on the remaining cross terms with ω_j≠ω_j'. The resulting master equation also obeys the laws of thermodynamics for an appropriately chosen thermodynamic Hamiltonian. The thermodynamic consistency of this approach was recently also shown using the method of full counting statistics <cit.>. §.§ Heat and work in quantum master equations Here we briefly connect the definitions of heat and work we use for master equations, as introduced in Eqs. (<ref>) and (<ref>), to the definitions introduced for the general scenario, see Eqs. (<ref>). We first consider power. The conservation of the total number of particles results in ∂_t ⟨N̂_S⟩ = -∂_t∑_α⟨N̂_α⟩, where the averages are taken with respect to the exact, total density matrix ρ̂_tot(t). In a master equation written in the form of Eq. (<ref>), we only have access to the left-hand side of the last equation, which is given by ∂_t⟨N̂_S⟩ = ∑_αTr{N̂_Sℒ_αρ̂_S(t)}. Comparing Eq. (<ref>) to Eq. (<ref>), we may infer P_α(t) = μ_α∂_t ⟨N̂_α⟩ = -μ_αTr{N̂_Sℒ_αρ̂_S(t)}, connecting the definition in Eqs. (<ref>) to Eqs. (<ref>) and (<ref>). 
Since the contributions of the different reservoirs to the Liouvillean are additive, we may infer the change of particle number induced by each reservoir and, by exploiting the conservation of particles, we may infer ∂_t ⟨N̂_α⟩ even though we only have access to the reduced system state. A similar analysis can be done for energy. From total energy conservation (assuming as above a time-independent Hamiltonian), we have ∂_t ⟨Ĥ_S⟩ = -∂_t∑_α[ ⟨Ĥ_α⟩+⟨V̂_α⟩]≃-∂_t∑_α⟨Ĥ_α⟩, where we neglected the energy stored in the coupling between system and environment because we assume it to be small in order to derive a Markovian master equation. From a master equation written in the form of Eq. (<ref>), we may write the left-hand side of Eq. (<ref>) as ∂_t⟨Ĥ_S⟩ = ∑_αTr{Ĥ_Sℒ_αρ̂_S(t)}, which leads us to identify ∂_t ⟨Ĥ_α⟩≃ -Tr{Ĥ_Sℒ_αρ̂_S(t)} and therefore to define the heat current as J'_α = -Tr{(Ĥ_S-μ_α N_S)ℒ_αρ̂_S(t)}≃∂_t ⟨Ĥ_α⟩-μ_α∂_t ⟨N̂_α⟩. While this is indeed the appropriate heat current for master equations in the secular approximation, see Eq. (<ref>), we introduced a different definition in Eq. (<ref>) based on a thermodynamic Hamiltonian. To understand why, it is necessary to appreciate that Markovian master equations rely on assumptions on time-scales. In particular, the Markov approximation neglects the time-evolution of the system during the bath-correlation time, see Eq. (<ref>). This neglects the finite life-time of the particles in the system. Through the time-energy uncertainty relation, this neglects the energy broadening of the states in the system. As a consequence, we no longer know the exact energy at which particles are exchanged with the environment. In the singular-coupling limit, this is exacerbated because we also neglect small differences in the Bohr frequencies of the system Hamiltonian, see Eq. (<ref>). The fact that we no longer know the exact energy at which particles enter the environment implies that we lose resolution in the energy exchanged with the environment. As a result, a naive application of Eq. (<ref>) can result in violations of the laws of thermodynamics, see for instance Ref. <cit.>. It is important to stress however that any violations should remain absent or negligibly small as long as the approximations that result in the master equation are well justified <cit.>. It is however desirable to have a framework that mathematically ensures the laws of thermodynamics. This can be obtained by introducing the thermodynamic Hamiltonian to quantify the internal energy of the system <cit.> which results in the definition for the heat currents [c.f. Eq. (<ref>)] J_α≡ -Tr{(Ĥ_TD-μ_α N_S)ℒ_αρ̂_S(t)}≃∂_t ⟨Ĥ_α⟩-μ_α∂_t ⟨N̂_α⟩. Importantly, the use of the thermodynamic Hamiltonian should only slightly affect the values of the heat currents, i.e., ∂_t⟨Ĥ_S⟩≃∂_t⟨Ĥ_TD⟩ and J'_α≃ J_α, such that both definitions of the heat current are acceptable. Should the two definitions result in substantial differences, then the approximations that went into the master equation are most likely no longer justified. Since we will mainly consider master equations in the singular-coupling limit below, and since we prefer to have a framework that ensures the laws of thermodynamics exactly and not only approximately, we will use the heat current given in Eq. (<ref>) and the power given in Eq. (<ref>). §.§ Example: A double quantum dot We now consider an example provided by a double quantum dot, coupled to two fermionic reservoirs. 
We again consider spinless electrons, such that each dot can at maximum host one electron. Furthermore, we consider two dots in series, such that each dot is coupled to one of the reservoirs, see Fig. <ref>. The total Hamiltonian that describes this scenario is given by Ĥ_tot=Ĥ_S+Ĥ_L+Ĥ_R+V̂_L+V̂_R, with the system Hamiltonian Ĥ_S=εd̂^†_Ld̂_L+εd̂^†_Rd̂_R+g(d̂^†_Ld̂_R+d̂^†_Rd̂_L)=(ε+g)d̂^†_+d̂_++(ε-g)d̂^†_-d̂_-, where we chose the on-site energies ε of the two dots to be equal for simplicity and we introduced the eigenmodes d̂_± = 1/√(2)(d̂_R±d̂_L),d̂_R = 1/√(2)(d̂_++d̂_-),d̂_L = 1/√(2)(d̂_+-d̂_-). The reservoirs are again modeled by a collection of non-interacting electrons that are tunnel-coupled to the respective dots Ĥ_α=∑_qε_α,qĉ_α,q^†ĉ_α,q,V̂_α=∑_q(g_α,qd̂_αĉ_α,q^†-g_α,q^*d̂_α^†ĉ_α,q), with α = L,R. As in the example of a single quantum dot, we may write the coupling Hamiltonian as V̂_α = Ŝ_α,0B̂_α,0+Ŝ_α,1B̂_α,1 and we find S̃_R,0(t) = e^iĤ_Std̂_Re^-iĤ_St = e^-i(ε+g)td̂_+/√(2)+e^-i(ε-g)td̂_-/√(2), S̃_L,0(t) = e^iĤ_Std̂_Le^-iĤ_St = e^-i(ε+g)td̂_+/√(2)-e^-i(ε-g)td̂_-/√(2). Comparing these expressions to Eq. (<ref>), we may read off the frequencies ω_j = ε± g, as well as the corresponding operators Ŝ_α,0^j. Similarly, we find S̃_α,1(t) = S̃^†_α,0(t), which involve the frequencies ω_j= -(ε± g). The bath-correlation functions are obtained in analogy to the single-dot case in Sec. <ref> and read C_0,0^α(s)=∫_-∞^∞ dω e^-iω sρ_α(ω)[1-n_F^α(ω)] , C_1,1^α(s)=∫_-∞^∞ dω e^iω sρ_α(ω)n_F^α(ω) , with the Fermi-Dirac occupation n_F^α(ε)=1/e^ε-μ_α/k_BT_α+1. Furthermore, from Eq. (<ref>) we find the relevant transition rates γ_0^α (ω) = κ_α [1-n_F^α(ω)],γ_1^α (-ω) = κ_α n_F^α(ω), where κ_α = 2πρ_α(ω), as well as the energy shifts Δ_0^α (ω) = P∫_-∞^∞ dω'ρ_α(ω')[1-n_F^α(ω')]/ω-ω',Δ_1^α (-ω) = -P∫_-∞^∞ dω'ρ_α(ω')n_F^α(ω')/ω-ω'. Note the different signs in the arguments of the functions with different subscripts. As we will see below, these functions will be evaluated at frequencies of opposite signs. From the bath-correlation functions, we conclude that the Born-Markov approximations are justified when ρ_α(ε± g±κ)≃ρ_α(ε± g),n_F^α(ε± g±κ)≃ n_F^α(ε± g), where κ = max{κ_L,κ_R}. §.§.§ The secular approximation Having identified all the quantities appearing in Eq. (<ref>), we find the GKLS master equation in the secular approximation ∂_tρ̂_S(t) = -i[∑_σ=±ε̅_σd̂^†_σd̂_σ,ρ̂_S(t)] +∑_α=L,R∑_σ =±κ_α/2{n_F^α(ε_σ)𝒟[d̂_σ^†]+[1-n_F^α(ε_σ)]𝒟[d̂_σ]}ρ̂_S(t), with the energies ε_± = ε± g,ε̅_± = ε_± + P∫_-∞^∞ dωρ_L(ω)+ρ_R(ω)/ε± g-ω. The master equation in the secular approximation describes classical hopping into and out-of the delocalized eigenmodes described by the operators d̂_±. Any coherences between these eigenmodes decay. We may thus interpret the secular master equation as a classical master equation. However, note that the eigenmodes themselves describe electrons being in a coherent superposition between the two dots. Evaluating Eq. (<ref>), we find the heat flow and power J_α = ∑_σ=±ε_σ-μ_α/2κ_α[⟨d̂_σ^†d̂_σ⟩ - n_F^α(ε_σ)],P_α = μ_α∑_σ=±κ_α/2[⟨d̂_σ^†d̂_σ⟩ - n_F^α(ε_σ)]. From Eq. (<ref>), we conclude that the secular approximation is justified when 1/τ_S=κ_α≪ g = 1/2|ω_j-ω_j'|. This implies that for strong coupling between the dots, the secular approximation can safely be applied. However, once g becomes comparable to either coupling κ_α, it is no longer justified. In this case, one should apply the singular-coupling limit. 
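As a numerical illustration of the secular (global) master equation just derived, the following sketch (an addition for illustration, not part of the original text) evaluates the steady-state occupations of the eigenmodes d̂_± together with the resulting heat currents and powers, and verifies that all energy flows add up to zero in the steady state, as the first law requires. It uses the fact that under the secular dissipators each eigenmode relaxes to the coupling-weighted average of the two Fermi functions evaluated at its energy. All numerical values are arbitrary choices, taken such that g exceeds the couplings κ_α, where the secular approximation is appropriate.

```python
import numpy as np

def fermi(e, mu, T):
    return 1.0 / (np.exp((e - mu) / T) + 1.0)

# Illustrative parameters (arbitrary units, k_B = 1), chosen with g larger than the kappas
eps, g = 1.0, 0.3              # on-site energy and interdot coupling
kap = {"L": 0.05, "R": 0.08}   # dot-reservoir couplings
mu  = {"L": 0.4,  "R": 0.1}    # chemical potentials
T   = {"L": 2.0,  "R": 1.0}    # temperatures

eps_pm = {"+": eps + g, "-": eps - g}   # energies of the eigenmodes d_+ and d_-

# Steady-state occupation of each eigenmode: coupling-weighted average of the Fermi functions
n_ss = {sig: sum(kap[a] * fermi(e, mu[a], T[a]) for a in "LR") / sum(kap.values())
        for sig, e in eps_pm.items()}

# Heat current and power flowing into reservoir alpha (secular expressions of the text)
J = {a: sum((eps_pm[sig] - mu[a]) / 2 * kap[a] * (n_ss[sig] - fermi(eps_pm[sig], mu[a], T[a]))
            for sig in "+-") for a in "LR"}
P = {a: mu[a] * sum(kap[a] / 2 * (n_ss[sig] - fermi(eps_pm[sig], mu[a], T[a]))
                    for sig in "+-") for a in "LR"}

print("eigenmode occupations        :", n_ss)
print("heat currents into reservoirs:", J)
print("powers into reservoirs       :", P)
print("first law (steady state), sum of all flows:", sum(J.values()) + sum(P.values()))
```

The printed sum of all heat currents and powers vanishes to machine precision, which is the steady-state statement of the first law.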
§.§.§ The singular-coupling limit In the singular-coupling limit, we make the replacement S̃_α,0(t-s)≃ e^iε sS̃_α,0(t),S̃_α,1(t-s)≃ e^-iε sS̃_α,1(t). This implies that the frequencies appearing in Eq. (<ref>) are ω_α,0=ε and ω_α,1=-ε. We note that there is no restriction on |ω_α,0-ω_α,1| because S̃_α,0 only involves the frequencies ε± g while S̃_α,1 only involves the frequencies -(ε± g). With the substitution in Eq. (<ref>), we find the GKLS master equation ∂_tρ̂_S(t) = -i[Ĥ_S+Ĥ_LS,ρ̂_S(t)]+∑_α=L,Rκ_α{n_F^α(ε)𝒟[d̂_α^†]+[1-n_F^α(ε)]𝒟[d̂_α]}ρ̂_S(t), with the Lamb-shift Hamiltonian Ĥ_LS = Δ_Ld̂^†_Ld̂_L+Δ_Rd̂^†_Rd̂_R,Δ_α = P∫_-∞^∞dωρ_α(ω)/ε-ω. The master equation in the singular-coupling limit is also known as a local master equation, because the jump operators act locally on the left and right quantum dots. In contrast to the secular master equation (also denoted global master equation), the populations and coherences do not decouple and we may find coherence and even entanglement between the two quantum dots <cit.>. We note that the local master equation may also be obtained heuristically by first setting g=0, deriving the master equation as in Sec. <ref>, and then reinstating g in the Hamiltonian. For certain scenarios, this heuristic approach may result in master equations that violate the laws of thermodynamics <cit.>. For this reason, it is recommended to perform the singular-coupling approximation as outlined above. We then obtain a thermodynamically consistent framework with the thermodynamic Hamiltonian Ĥ_TD = ε(d̂^†_Ld̂_L+d̂^†_Rd̂_R). In the thermodynamic bookkeeping, we thus neglect the coupling between the dots, in analogy to how we neglect the system-bath coupling energy. Evaluating Eq. (<ref>), we find the heat flow and power J_α = κ_α(ε-μ_α)[⟨d̂_α^†d̂_α⟩ - n_F^α(ε)],P_α = κ_αμ_α[⟨d̂_α^†d̂_α⟩ - n_F^α(ε)]. In this approximation, each electron that enters reservoir α carries the heat ε-μ_α and the power μ_α, resulting in the simple relation J_α/P_α = (ε-μ_α)/μ_α. The singular-coupling limit is justified when ρ_α(ε± g)≃ρ_α(ε) as well as n_F^α(ε± g)≃ n_F^α(ε). The second condition is obeyed when g≪max{k_BT_α,|ε-μ_α|}. Note that g≪κ is not required for the singular-coupling limit to be justified. The secular approximation and the singular-coupling limit may thus be justified at the same time. Indeed, since τ_S≫τ_B this is what we expect from Eqs. (<ref>) and (<ref>). § QUANTUM THERMAL MACHINES In this section, we consider quantum thermal machines, i.e., machines that use reservoirs in local equilibrium to perform a useful task such as converting heat into work or producing entanglement. While in local equilibrium, these reservoirs have different temperatures and/or chemical potentials such that together, they describe an out-of-equilibrium scenario. §.§ A quantum dot heat engine The first machine we consider is a simplified version of the heat engine that was implemented experimentally in Ref. <cit.>. In contrast to a quantum dot coupled to a single reservoir, where the only thing that happens is thermalization, we will find heat flows in the steady state and we will see how heat can be converted into work and how work can be used to refrigerate. The system we consider is a spinless, single-level quantum dot tunnel-coupled to two heat baths Ĥ_tot=Ĥ_S+Ĥ_c+Ĥ_h+V̂_c+V̂_h, with Ĥ_S=ε_dd̂^†d̂,Ĥ_α=∑_qε_α,qĉ_α,q^†ĉ_α,q,V̂_α=d̂∑_qg_α,qĉ_α,q^†-d̂^†∑_qg_α,q^*ĉ_α,q, where α=c, h labels the reservoirs according to their temperatures T_c≤ T_h. 
Just as for the quantum dot coupled to a single reservoir, Eq. (<ref>) is already in GKLS form. Since the terms in the master equation corresponding to different reservoirs are additive, we find ∂_tρ̂_S = -i[Ĥ_S,ρ̂_S]+ℒ_cρ̂_S+ℒ_hρ̂_S, with ℒ_αρ̂=κ_α[1-n_F^α(ε_d)]𝒟[d̂]ρ̂+κ_α n^α_F(ε_d)𝒟[d̂^†]ρ̂, where n^α_F is the Fermi-Dirac occupation with temperature T_α and chemical potential μ_α, see Eq. (<ref>). Here we neglected the renormalization of ε_d which is given by a straightforward generalization of Eq. (<ref>). §.§.§ Solving the master equation The master equation can easily be solved by considering ∂ t p_1 = Tr{d̂^†d̂∂_tρ̂_S}=-∑_α=c,hκ_α{[1-n_F^α(ε_d)]p_1-n_F^α(ε_d)p_0} = -γ(p_1-n̅), where γ = κ_c+κ_h,n̅=κ_cn_F^c(ε_d)+κ_hn_F^h(ε_d)/κ_c+κ_h. Comparing to Eq. (<ref>), we find that the quantum dot behaves just like a quantum dot coupled to a single heat bath with coupling strength γ and mean occupation n̅. The solution thus reads p_1(t)=p_1(0)e^-γ t+n̅(1-e^-γ t). §.§.§ The first law From the master equation, we find the first law ∂_t⟨Ĥ_S⟩ = ε_dTr{d̂^†d̂ℒ_cρ̂_S}+ε_dTr{d̂^†d̂ℒ_hρ̂_S}=-J_c-P_c-J_h-P_h, where the power and heat currents are defined in agreement with Eq. (<ref>) P_α = -μ_αTr{d̂^†d̂ℒ_αρ̂_S}, J_α =-(ε_d-μ_α)Tr{d̂^†d̂ℒ_αρ̂_S}. Explicitly, we find P_α = μ_ακ_α e^-γ t[p_1(0)-n̅]+μ_ακ_α[n̅-n_F^α(ε_d)],J_α = ε_d-μ_α/μ_αP_α. Just as for a single reservoir, there is a transient term in the power which decreases exponentially in time. In contrast to the single reservoir case, there is now also a time-independent term which remains in steady state. In the steady state, the observables of the system do not change. We can use this fact to draw a number of conclusions without using the explicit solutions for the power and the heat currents. In particular, since the left-hand side of Eq. (<ref>) vanishes, we find Tr{d̂^†d̂ℒ_cρ̂_S}=-Tr{d̂^†d̂ℒ_hρ̂_S}. From this, using Eqs. (<ref>), follows P=P_c+P_h = -(J_c+J_h), which is nothing but the first law, as well as η = P/-J_h=μ_c-μ_h/ε_d-μ_h=1-ε_d-μ_c/ε_d-μ_h, where we introduced the efficiency η which is given by the ratio between the power (the output of the heat engine) and the heat current from the hot reservoir (the input of the heat engine). Using the explicit solution for the power in Eq. (<ref>), we find P=κ_cκ_h/κ_c+κ_h(μ_c-μ_h)[n_F^h(ε_d)-n_F^c(ε_d)]. This quantity vanishes at zero voltage (μ_c=μ_h), as well as at the stopping voltage where n_F^h(ε_d)=n_F^c(ε_d), see also Fig. <ref> b). Let us now consider under which conditions the system acts as a heat engine, i.e., heat from the hot reservoir is converted into power. From Eq. (<ref>), we can identify different regimes depending on the signs of P, J_c, and J_h. These regimes are illustrate in Figs. <ref> and <ref> (a). For μ_c≥μ_h (μ_c≤μ_h), we find that the quantum dot acts as a heat engine for large positive (negative) values of ε_d. In both cases, we find from Eq. (<ref>) that power is positive as long as ε_d-μ_c/ε_d-μ_h≥T_c/T_h⇒η≤ 1-T_c/T_h=η_C. We thus find that the efficiency is bounded from above by the Carnot efficiency as long as the power output is non-negative. §.§.§ The second law We first make some general statements about the second law in a two-terminal setup. These are very similar to the statements made in Sec. <ref>. The entropy production rate is given by Σ̇ = k_B∂_tS_vN[ρ̂_S]+J_c/T_c+J_h/T_h≥ 0. In the steady state, the first term vanishes and we immediately find that at least one of the heat currents has to be positive. 
This implies that it is impossible to cool down all reservoirs at the same time (in the steady state). Furthermore, for equal temperatures (T_c=T_h), we find P=-(J_c+J_h)≤ 0 which implies that it is not possible to convert heat into work with reservoirs at a single temperature. This is known as the Kelvin-Planck statement of the second law. Finally, we can use the first law in Eq. (<ref>) to eliminate J_c, resulting in Σ̇ = P/T_cη_C-η/η⇒ 0≤η≤η_C, where the last inequality holds for P≥ 0. The fact that the efficiency is upper bounded by the Carnot efficiency is thus a direct consequence of the second law. In our system, the entropy of the quantum dot is given in Eq. (<ref>). Using ∂_t p_1 = -J_c/ε_d-μ_c-J_h/ε_d-μ_h, we can write the entropy production rate as Σ̇ = k_B∑_α=c,hκ_α(p_1[1-n^α_F(ε_d)]-[1-p_1]n^α_F(ε_d))ln(p_1[1-n^α_F(ε_d)]/[1-p_1]n^α_F(ε_d))≥ 0, which is positive since each term in the sum is positive in complete analogy to Eq. (<ref>). In the steady state, we find Σ̇ = k_Bκ_cκ_h/κ_c+κ_h[n_F^h(ε_d)-n_F^c(ε_d)][β_c(ε_d-μ_c)-β_h(ε_d-μ_h)]≥ 0, which vanishes at the Carnot point, where n_F^h(ε_d)=n_F^c(ε_d), η=η_C, and both the power as well as the heat currents vanish. We stress that while an equilibrium situation (i.e., T_c=T_h and μ_c=μ_h) ensures n_F^h(ε_d)=n_F^c(ε_d), the Carnot point can also be reached out of equilibrium. The interplay between power and efficiency is illustrated in Fig. <ref>. We find that at maximum power, the efficiency reaches above 60% of the Carnot efficiency. Similar values where found experimentally in Ref. <cit.>. As mentioned above, the Carnot efficiency is obtained at the stopping voltage. This is a consequence of the fact that there is only a single energy at which transport happens. At the stopping voltage, all transport is blocked implying that both the charge as well as the heat currents vanish. This implies that there is no dissipation (Σ̇=0) and the efficiency takes on the Carnot value (see also Ref. <cit.>). In reality, as well as in the experiment of Ref. <cit.>, this ideal filtering effect is spoiled by the broadening of the energy level which is neglected in the Markovian master equation, see Fig. <ref>. Including this energy broadening, one finds that at some energies, charges move from hold to cold while at other energies, they move in the other direction. At the stopping voltage, this results in a vanishing of power but still a net heat current, such that the efficiency vanishes. In this case, Fig. <ref> (b) takes on the shape of a lasso. §.§.§ Refrigeration As discussed above, the quantum dot can also act as a refrigerator in the regime where P<0, J_h> 0, and J_c< 0. In this case, electrical power is used to reverse the natural flow of heat, resulting in a heat flow out of the cold reservoir and into the hot reservoir. The efficiency of this process is usually characterized by the coefficient of performance (COP) η^COP = -J_c/-P, where we left the minus signs to stress that this performance quantifier is relevant in the regime where both P as well as J_c are negative under our sign convention. We can use the first law in Eq. (<ref>) to eliminate J_h and write the entropy production rate as Σ̇ = -J_c/T_hη_C^COP-η^COP/η_C^COPη^COP⇒0≤η^COP≤η_C^COP, where the second inequality holds for J_c≤ 0 and we introduced the Carnot value for the COP η_C^COP = T_c/T_h-T_c. We note that as T_c→ T_h, η_C^COP diverges. 
This reflects the fact that in principle, it is possible to move heat in between two reservoirs with equal temperature without investing any work. In our system, we find from Eqs. (<ref>) and (<ref>) η^COP = ε_d-μ_c/μ_c-μ_h. From this, we find that η^COP vanishes when ε_d=μ_c and takes on the Carnot value at ε_d=μ_cT_h-μ_hT_c/T_h-T_c, which is exactly the point where the regime of the refrigerator meets the regime of the heat engine, see Fig. <ref>. Interestingly, both the COP as well as the efficiency reach their maximum value at this point, where no transport takes place. The Carnot point is often called the point of reversibility. At this point nothing happens but it can be seen as the limit of converting heat into work infinitely slowly and without wasting any energy (thus, reversibly). Equivalently, taking the limit from the other side, it can be seen as the limit of cooling the cold reservoir reversibly. §.§ Entanglement generator In this section, we consider a thermal machine that uses a temperature gradient in order to produce entanglement between two quantum dots. The original idea goes back to Ref. <cit.>, where qubits instead of quantum dots are considered. Here we focus on the scenario investigated in Ref. <cit.>, i.e., a double quantum dot coupled to two fermionic reservoirs, just like in Sec. <ref> (see Fig. <ref>). §.§.§ Entanglement Before we consider the thermal machine itself, we provide a brief introduction to entanglement which is one of the most intriguing features of quantum mechanics. For more information, see the Book by Nielsen and Chuang <cit.>. To consider entanglement, we require a bi-partite Hilbert space ℋ = ℋ_A⊗ℋ_B, where ℋ_A denotes the Hilbert space of Alice and ℋ_B is Bob's Hilbert space. A quantum state on this bi-partite Hilbert space is then said to be a product state if it reads ρ̂ = ρ̂_A⊗ρ̂_B. This corresponds to the scenario where Alice and Bob have access to their respective states ρ̂_A and ρ̂_B, without any correlations between them. A classical mixture of product states is called a separable state ρ̂ = ∑_j p_j ρ̂^j_A⊗ρ̂^j_B,p_j≥ 0,∑_j p_j = 1. Such a state contains classical correlations. For instance, it can describe the rather trivial example where Alice and Bob by fruit together. With equal probabilities, they buy two apples or two oranges. If Alice has an apple, we know Bob has an apple as well and similarly for oranges. Obviously, no fruits are entangled in this example. A state is entangled iff it is not separable, i.e., if it cannot be written in the form of Eq. (<ref>). In this case, the correlations between Alice and Bob are no longer classical in nature. Entanglement may thus be seen as a form of correlation that goes beyond classical correlations. Here we illustrate this with two simple examples. First, we consider the state ρ̂_cl = 1/2(00+11) = 1/20⊗0+1/21⊗1, which is evidently separable. It corresponds to the apple-orange scenario above, where |00⟩ could denote an apple for both Alice and Bob. As an example for an entangled state, we consider one of the Bell states |Φ^+⟩ = 1/√(2)(|00⟩+|11⟩),|Φ^+⟩⟨Φ^+| = 1/2(00+11+0011+1100). This state cannot be written in the form of Eq. (<ref>) due to the off-diagonal elements. Indeed, it is the maximally entangled state for two qubits. The amount of entanglement can be quantified by the entanglement of formation <cit.>. Loosely speaking, it is determined by the number of Bell states that are required to prepare the given state using only local operations and classical communication (LOCC). 
For two qubits, the entanglement of formation is a number between zero and one, where zero is obtained for separable states and one for Bell states. Concurrence: Determining if a given state is entangled is in general a highly non-trivial task. However, for two qubits the low dimensionality of the problem considerably facilitates the task. A common measure for entanglement in this scenario is the concurrence <cit.>, which is monotonically related to the entanglement of formation. Just as the latter, the concurrence ranges from zero, obtained for separable states, to one, reached for Bell states. The concurrence of a state ρ̂ can be computed with the help of the auxiliary state ρ̃ = σ̂_y⊗σ̂_yρ̂^*σ̂_y⊗σ̂_y, σ̂_y = -i 01+i10, where the star denotes complex conjugation in the computational basis (|0⟩,|1⟩). Let λ_j denote the eigenvalues of ρ̂ρ̃ in decreasing order, i.e., λ_1≥λ_2≥λ_3≥λ_4. The concurrence may then be written as C[ρ̂] = max{0,√(λ_1)-√(λ_2)-√(λ_3)-√(λ_4)}. This quantity lies between zero and one, where C[ρ̂] = 0 holds for separable states and C[ρ̂] > 0 for entangled states, with unity reached only for Bell states. Fermionic entanglement: The discussion above assumed that the Hilbert space has the tensor-product structure ℋ = ℋ_A⊗ℋ_B. For fermions, this is not the case because fermionic operators corresponding to Alice and Bob anti-commute. This implies that any definition for entanglement needs to be reconsidered for fermions <cit.>. To make progress, we denote by |00⟩ the vacuum state, where no fermion is present. Using the creation operators on the dots d̂^†_α, we may then define the states |10⟩ = d̂_L^†|00⟩,|01⟩ = d̂_R^†|00⟩,|11⟩ = d̂_L^†d̂_R^†|00⟩ = -d̂_R^†d̂_L^†|00⟩. Note the minus sign in the last equation, which forces us to associate to the state |11⟩ a specific order for the fermionic operators. It turns out that in our system, we may compute the concurrence as if these states were two-qubit states. This is the case because we will only find a single off-diagonal element corresponding to a coherent superposition of a single electron between the two modes (i.e., between |01⟩ and |10⟩). In general, more care has to be taken when evaluating the entanglement of fermionic systems (even for two modes) <cit.>. §.§.§ The master equation We will use the master equation in the singular-coupling limit (local master equation) given in Eq. (<ref>), neglecting any Lamb-shift. For convenience, we reproduce this equation here ∂_tρ̂_S(t) = -i[Ĥ_S,ρ̂_S(t)]+ℒ_Lρ̂_S(t)+ℒ_Rρ̂_S(t), with the local dissipators ℒ_α = κ_α n_F^α(ε)𝒟[d̂^†_α]+κ_α [1-n_F^α(ε)]𝒟[d̂_α], and the Hamiltonian Ĥ_S = ε(d̂^†_Ld̂_L+d̂^†_Rd̂_R)+g(d̂^†_Ld̂_R+d̂^†_Rd̂_L). As discussed in Sec. <ref>, this restricts our analysis to g≪max{k_BT_α,|ε-μ_α|}. For a discussion of the entanglement generator outside of this regime, see Ref. <cit.>. Since this master equation is bi-linear and contains one annihilation and one creation operator per term, the time-derivatives ∂_t⟨d̂^†_αd̂_β⟩ form a closed set of equations. Furthermore, the quantities ⟨d̂^†_αd̂_β⟩ completely determine the quantum state of the system at all times (given that this is true for the initial state). The quantum state is said to be Gaussian. Expectation values including more than two annihilation or creation operators can then always be reduced using Wick's theorem <cit.>. In particular, for Gaussian states it holds that ⟨d̂_i^†d̂_j^†d̂_kd̂_l⟩ = ⟨d̂_i^†d̂_l⟩⟨d̂_j^†d̂_k⟩-⟨d̂_i^†d̂_k⟩⟨d̂_j^†d̂_l⟩. 
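Before moving on, the two-qubit concurrence recipe described above is easy to implement and test. The following Python sketch (added for illustration) evaluates it for the separable mixture ρ̂_cl and for the Bell state |Φ^+⟩ introduced earlier, for which the concurrence should come out as zero and one, respectively.

```python
import numpy as np

def concurrence(rho):
    """Two-qubit concurrence C[rho] = max{0, sqrt(l1) - sqrt(l2) - sqrt(l3) - sqrt(l4)}."""
    sy = np.array([[0, -1j], [1j, 0]])
    yy = np.kron(sy, sy)
    rho_tilde = yy @ rho.conj() @ yy          # spin-flipped state; conjugation in the computational basis
    lam = np.linalg.eigvals(rho @ rho_tilde)  # eigenvalues of rho * rho_tilde
    lam = np.sort(np.abs(lam))[::-1]          # decreasing order (tiny imaginary parts discarded)
    return max(0.0, np.sqrt(lam[0]) - np.sqrt(lam[1]) - np.sqrt(lam[2]) - np.sqrt(lam[3]))

# Basis ordering |00>, |01>, |10>, |11>
rho_cl = np.zeros((4, 4)); rho_cl[0, 0] = rho_cl[3, 3] = 0.5   # classical mixture of |00> and |11>
phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)                 # Bell state (|00> + |11>)/sqrt(2)
rho_bell = np.outer(phi_plus, phi_plus.conj())

print("C[rho_cl] =", concurrence(rho_cl))    # expected: 0
print("C[|Phi+>] =", concurrence(rho_bell))  # expected: 1
```

Since, as discussed above, the concurrence of the double-dot state may be computed as if the two modes were qubits, the same routine can be reused for the steady state derived below.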
For more information on solving master equations for Gaussian processes, see Ref. <cit.>. From the master equation in Eq. (<ref>), one may derive ∂_tv⃗ = Av⃗+b⃗, with v⃗ = [ ⟨d̂^†_Ld̂_L⟩; ⟨d̂^†_Rd̂_R⟩; ⟨d̂^†_Ld̂_R⟩; ⟨d̂^†_Rd̂_L⟩ ], A=[ -κ_L 0 -ig ig; 0 -κ_R ig -ig; -ig ig -κ_L+κ_R/2 0; ig -ig 0 -κ_L+κ_R/2 ],b⃗ = [ κ_Ln_F^L(ε); κ_Rn_F^R(ε); 0; 0 ]. At steady state, we may set the LHS of Eq. (<ref>) to zero and we find 0 = Av⃗_ss+b⃗,⇒v⃗_ss = -A^-1b⃗. In order to connect the averages in v⃗ to the state of the double quantum dot, we write ρ̂_S = ∑_n_L,n_R n'_L,n'_Rρ_n'_L,n'_R^n_L,n_Rn_L,n_Rn'_L,n'_R. With the basis states given in Eq. (<ref>), the matrix elements may be cast into ρ_n'_L,n'_R^n_L,n_R = ⟨n_L,n_R|ρ̂_S|n'_L,n'_R⟩=Tr{(d̂^†_L)^n_L'(d̂^†_R)^n_R'00d̂_R^n_Rd̂_L^n_Lρ̂_S} = ⟨(d̂^†_L)^n_L'(d̂^†_R)^n_R'(1-d̂^†_Ld̂_L)(1-d̂^†_Rd̂_R)d̂_R^n_Rd̂_L^n_L⟩. Here we used 00=(1-d̂^†_Ld̂_L)(1-d̂^†_Rd̂_R) which may be verified by applying this operator to all basis states. Note that due to the fermionic anti-commutation relations, and because there is no coherence between states with a different total number of electrons, the only averages required to compute the density matrix that are not of the form ⟨d̂^†_αd̂_β⟩ can be reduced by Eq. (<ref>). A lengthy but straightforward calculation then results in the steady state ρ̂_ss = 1/4g^2+κ_Lκ_R{ κ_Lκ_Re^-β_L(ε-μ_L)d̂^†_Ld̂_Le^-β_R(ε-μ_R)d̂^†_Rd̂_R/Z_LZ_R+4g^2e^-β̅(ε-μ̅)(d̂^†_Ld̂_L+d̂^†_Rd̂_R)/Z̅^2 -2gκ_Lκ_R(n_F^L-n_F^R)/κ_L+κ_Ri(d̂^†_Ld̂_R-d̂^†_Rd̂_L)}, where we omitted the argument of the Fermi functions for brevity and the barred quantities are defined by the equation n̅ = 1/e^β̅(ε-μ̅)+1 = κ_Ln_F^L(ε)+κ_Rn_F^R(ε)/κ_L+κ_R, and we abbreviated the partition functions Z_α = Tr{e^-β_α(ε-μ_α)d̂^†_αd̂_α},Z̅ = Tr{e^-β̅(ε-μ̅)d̂^†_αd̂_α}, where in the last equality, the choice of α does not matter. Note that the terms in the first line of Eq. (<ref>) are both product states. Thus, it is the second line, including the coherence factor d̂^†_Ld̂_R-d̂^†_Rd̂_L=1001-0110, which is responsible for any entanglement we might get. Limiting cases: It is instructive to consider the steady state in some limiting cases. First, we consider an equilibrium scenario where β_L=β_R=β̅ and μ_L=μ_R=μ̅. In this case, the second line of Eq. (<ref>) vanishes and the first two terms become proportional to each other. We thus find, as expected from Eq. (<ref>) ρ̂_ss = e^-β̅(Ĥ_TD-μ̅N̂_S)/Z, with the thermodynamic Hamiltonian <cit.> and particle-number operator Ĥ_TD = ε(d̂^†_Ld̂_L+d̂^†_Rd̂_R), N̂_S = d̂^†_Ld̂_L+d̂^†_Rd̂_R. Note that this is a product state due to the additive nature of Ĥ_TD. In the singular-coupling master equation considered here, the thermal state does not exhibit any entanglement. This is different at large coupling g, where the secular approximation should be used instead <cit.>. Second, we consider the limit where κ_L,κ_R≫ g. In this case, only the first term in Eq. (<ref>) is relevant. This term describes two quantum dots, each in equilibrium with the reservoir it couples to. The state is thus a product of Gibbs states with the local reservoir temperatures and chemical potentials. The third case we consider is the case where κ_L=0. In this case, we find that only the second term in Eq. (<ref>) survives with β̅=β_R and μ̅ = μ_R. The system thus equilibrates with the right reservoir, taking the form of Eq. (<ref>). Finally, we consider the limit g≫κ_L,κ_R. Just as in the last case, the second term in Eq. 
(<ref>) dominates and the system equilibrates to the average occupation n̅ with the state reducing to Eq. (<ref>). In all these limiting cases, no entanglement is found as the term involving coherence always drops out. Concurrence: To quantify the entanglement in Eq. (<ref>), we consider the concurrence. The steady state given in Eq. (<ref>) is of the form ρ̂_ss = [ p_0 0 0 0; 0 p_L α 0; 0 α^* p_R 0; 0 0 0 p_d ], where the basis states are |00⟩, |10⟩, |01⟩, |11⟩. For such a state, the concurrence reduces to C[ρ̂_ss] = max{0,2|α|-2√(p_0 p_d)}. From Eq. (<ref>), we find 2|α|-2√(p_0 p_d)= 2κ_Lκ_R/4g^2+κ_Lκ_R{2|g||n_F^R-n_F^L|/κ_L+κ_R -√([(1-n_F^L)(1-n_F^R)+4g^2/κ_Lκ_R(1-n̅)^2][n_F^Ln_F^R+4g^2/κ_Lκ_Rn̅^2])}. We thus find that entanglement is indeed generated. Interestingly, the entanglement is generated due to reservoirs which are out of equilibrium, as can be inferred from the term |n_F^R-n_F^L|. This is a surprising result because usually, any coupling to the environment only reduces coherence and entanglement in the system. This is because entanglement is monogamous, which means that if A is strongly entangled with B, it cannot at the same time be strongly entangled with C <cit.>. When a system couples to its environment, this results in entanglement between the system and the reservoirs, which generally reduces the inter-system entanglement. Here we encounter a different behavior, where an out-of-equilibrium environment induces entanglement within a system. Heat current: It is instructive to consider the heat current, which is given by J_R = -Tr{(Ĥ_TD-μ_RN̂_S)ℒ_Rρ̂_ss} = (ε-μ_R)4g^2κ_Lκ_R(n_F^L-n_F^R)/(κ_L+κ_R)(4g^2+κ_Lκ_R), with the thermodynamic Hamiltonian given in Eq. (<ref>). Noting the similarity of this expression with the first term in the concurrence, see Eq. (<ref>), we find that the system is entangled when the heat current exceeds a critical value <cit.> |J_R|≥ |ε-μ_R||g|2κ_Lκ_R/4g^2+κ_Lκ_R√([(1-n_F^L)(1-n_F^R)+4g^2/κ_Lκ_R(1-n̅)^2][n_F^Ln_F^R+4g^2/κ_Lκ_Rn̅^2]). When we can trust the master equation we employ to be a valid description, the heat current may thus serve as an indicator for entanglement. Maximal entanglement: The concurrence in Eq. (<ref>) can be maximized over all parameters, which results in C[ρ̂_ss] = √(5)-1/4≃ 0.309. This value is obtained by the parameters n_F^L =1, n_F^R=0, κ_L=κ_R, g/κ_L=(√(5)-1)/4. The Fermi functions should thus be maximally different, which results in the largest current and off-diagonal density matrix element. The couplings to the reservoirs should be equal and interestingly, the ratio between the interdot coupling and the system-bath coupling is determined by the golden ratio φ. Indeed, we find φ≡1+√(5)/2 = κ_L/2g = 1/2C=|α|/2√(p_0p_d). As shown in Ref. <cit.>, the steady state for these parameters is not only entangled but can also be used for quantum teleportation. The entanglement is further enhanced when including electron-electron interactions, which may even result in nonlocal states that violate Bell inequalities <cit.>. §.§ Absorption refrigerator As a final example of a thermal machine, we consider a quantum absorption refrigerator, which is a refrigerator that uses heat as its energy source <cit.>. Quantum absorption refrigerators have been implemented experimentally, using trapped ions <cit.> and very recently using superconducting qudits <cit.>. The system we consider here consists of three two-level systems (qubits), coupled to three reservoirs respectively, see Fig. <ref>, with temperatures T_c≤ T_r≤ T_h. 
Here the subscript r denotes an intermediate room temperature. The basic principle behind the absorption refrigerator is that the tendency of heat to flow from the hot reservoir to the room reservoir is exploited to drive a heat current from the cold reservoir to the room reservoir. Thereby, a hot thermal reservoir is used to cool down the coldest reservoir. At the same time, the qubit coupled to the cold reservoir is cooled below T_c. The absorption refrigerator may thereby achieve two goals: * Cooling the cold reservoir. * Cooling the qubit coupled to the cold reservoir. Which of these goals is more relevant may depend on the specific circumstances. As we see below, different figures of merit can be used to describe the performance of the refrigerator in reaching the different goals. We stress that in contrast to the previous examples, we do not consider quantum dots where fermions are exchanged with the environment. Instead, we consider qubits that can exchange photons with the environment. We therefore do not have chemical potentials in this section (since μ=0 for photons). §.§.§ The master equation The three qubits are described by the Hamiltonian Ĥ_S = Ĥ_0+Ĥ_int, where the first term describes the individual qubits Ĥ_0 = ε_cσ̂_c^†σ̂_c+ε_hσ̂_h^†σ̂_h+ε_rσ̂_r^†σ̂_r, and the second term describes their interaction Ĥ_int = g(σ̂_r^†σ̂_cσ̂_h+σ̂_c^†σ̂_h^†σ̂_r) = g(|001⟩⟨110|+|110⟩⟨001|). Here we introduced the lowering operators σ̂_c = |0⟩⟨1|⊗1_2⊗1_2,σ̂_h = 1_2⊗|0⟩⟨1|⊗1_2,σ̂_r = 1_2⊗1_2⊗|0⟩⟨1|. From the interaction Hamiltonian in Eq. (<ref>), we may anticipate the cooling mechanism: Two excitations, one in the hot qubit and one in the cold qubit, are turned into an excitation in the room qubit by the term σ̂_r^†σ̂_cσ̂_h. This excitation is then dissipated into the room reservoir. The hot qubit is then re-excited by the hot reservoir, preparing to remove any excitation in the cold qubit coming from the cold reservoir. The interaction in Eq. (<ref>) is most effective when it is in resonance, i.e., when the states |001⟩ and |110⟩ have the same energy. We thus demand ε_r=ε_c+ε_h. We note that the interaction in Eq. (<ref>) would be unphysical for fermions, where it would imply that one fermion is turned into two fermions. We describe the coupling to the environment using the master equation in the singular-coupling limit ∂_tρ̂_S(t) = -i[Ĥ_S,ρ̂_S(t)]+ℒ_cρ̂_S(t)+ℒ_hρ̂_S(t)+ℒ_rρ̂_S(t), with the dissipators ℒ_α = κ̃_α n_B^α𝒟[σ̂^†_α]+κ̃_α(n_B^α+1)𝒟[σ̂_α] = κ_α n_F^α𝒟[σ̂_α^†]+κ_α(1-n_F^α)𝒟[σ̂_α], where the Bose-Einstein occupation and the Fermi-Dirac occupation are given by n_B^α = 1/(e^ε_α/k_BT_α-1),n_F^α =1/(e^ε_α/k_BT_α+1). This master equation can be derived using reservoirs described by noninteracting bosons (with μ=0) Ĥ_α = ∑_qε_α,qâ_α,q^†â_α,q,V̂_α = ∑_q g_α,q(â^†_α,qσ̂_α+σ̂_α^†â_α,q). This naturally results in the dissipators given by the first equality in Eq. (<ref>), including the Bose-Einstein occupations. In the second equality in Eq. (<ref>), the dissipators are written in terms of the Fermi-Dirac occupations by using κ_α = κ̃_αn_B^α/n_F^α,(n_B^α+1)/n_B^α = (1-n_F^α)/n_F^α = e^ε_α/k_BT_α. The last equality ensures that the ratio between the rates of absorbing and emitting energy is given by the Boltzmann factor. This is known as local detailed balance and it ensures that the equilibrium state is a Gibbs state. Note that while κ̃_α is determined by the reservoir's spectral density and is temperature independent, κ_α depends on temperature T_α.
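To see the cooling mechanism at work, the following Python sketch (an added illustration, not part of the original text) assembles the Liouvillian of the master equation above as a matrix acting on the vectorized density matrix, obtains the steady state numerically, and compares the occupation of the cold qubit with the Fermi-Dirac occupation n_F^c it would have in equilibrium with the cold reservoir alone. The column-stacking vectorization and all parameter values are choices made for this sketch; the parameters are otherwise arbitrary, apart from satisfying the resonance condition ε_r=ε_c+ε_h and lying in the cooling regime discussed below.

```python
import numpy as np

# Single-qubit lowering operator |0><1| and identity (basis ordering |0>, |1>)
sm, id2 = np.array([[0., 1.], [0., 0.]]), np.eye(2)

def lowering(which):
    """Lowering operator acting on qubit 'c', 'h' or 'r' of the three-qubit space (ordering c, h, r)."""
    mats = {"c": (sm, id2, id2), "h": (id2, sm, id2), "r": (id2, id2, sm)}[which]
    return np.kron(mats[0], np.kron(mats[1], mats[2]))

# Illustrative parameters (arbitrary units, k_B = 1); resonance eps_r = eps_c + eps_h
eps = {"c": 1.0, "h": 4.0, "r": 5.0}
T = {"c": 1.0, "h": 10.0, "r": 2.0}
kap = {"c": 0.1, "h": 0.1, "r": 0.1}
g = 0.05

n_F = {a: 1.0 / (np.exp(eps[a] / T[a]) + 1.0) for a in "chr"}
s = {a: lowering(a) for a in "chr"}

H0 = sum(eps[a] * s[a].conj().T @ s[a] for a in "chr")
Hint = g * s["r"].conj().T @ s["c"] @ s["h"]
H = H0 + Hint + Hint.conj().T

def dissipator(L):
    """Column-stacking superoperator of the dissipator D[L]."""
    d = L.shape[0]
    LdL = L.conj().T @ L
    return (np.kron(L.conj(), L)
            - 0.5 * np.kron(np.eye(d), LdL)
            - 0.5 * np.kron(LdL.T, np.eye(d)))

dim = 8
liouvillian = -1j * (np.kron(np.eye(dim), H) - np.kron(H.T, np.eye(dim)))
for a in "chr":
    liouvillian += kap[a] * n_F[a] * dissipator(s[a].conj().T)   # absorption from reservoir a
    liouvillian += kap[a] * (1.0 - n_F[a]) * dissipator(s[a])    # emission into reservoir a

# Steady state: right eigenvector of the Liouvillian with eigenvalue closest to zero
w, v = np.linalg.eig(liouvillian)
rho_ss = v[:, np.argmin(np.abs(w))].reshape((dim, dim), order="F")
rho_ss = rho_ss / np.trace(rho_ss)

occ_c = np.real(np.trace(s["c"].conj().T @ s["c"] @ rho_ss))
print("steady-state occupation of the cold qubit:", occ_c)
print("equilibrium occupation n_F^c             :", n_F["c"])
# For these parameters the cooling condition holds, so occ_c lies below n_F^c.
```

With the values chosen here the printed cold-qubit occupation lies below n_F^c, i.e., the qubit coupled to the cold reservoir is indeed cooled.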
Heat currents: From the master equation, we may extract the heat currents that enter the different reservoirs. As discussed in Sec. <ref>, the singular-coupling limit requires the use of a thermodynamic Hamiltonian for a consistent thermodynamic bookkeeping. The correct choice for the thermodynamic Hamiltonian depends on the details of the Hamiltonian. Given the resonance condition in Eq. (<ref>), the appropriate thermodynamic Hamiltonian is given by Ĥ_TD = Ĥ_0, i.e., we neglect the interaction between the qubits in the thermodynamic bookkeeping. The heat currents then read J_α = -Tr{Ĥ_0ℒ_αρ̂_S} = -ε_ακ_α(n_F^α-⟨σ̂_α^†σ̂_α⟩). Similarly to what we found for the systems based on quantum dots, the signs of the heat currents are determined by which occupation is larger, the occupation of the reservoir n_F^α or the occupation of the qubit ⟨σ̂_α^†σ̂_α⟩. To make progress, it is instructive to consider the time-evolution of the qubit occupation ∂_t ⟨σ̂_α^†σ̂_α⟩ = Tr{σ̂_α^†σ̂_α∂_tρ̂_S(t)} = -i⟨[σ̂_α^†σ̂_α,Ĥ_int]⟩ + κ_α(n_F^α-⟨σ̂_α^†σ̂_α⟩). The second term on the right-hand side can be identified as -J_α/ε_α by comparing to Eq. (<ref>). The first term can be related to the following quantity I≡ 2gIm⟨σ̂_r^†σ̂_cσ̂_h⟩ = i⟨[σ̂_c^†σ̂_c,Ĥ_int]⟩ = i⟨[σ̂_h^†σ̂_h,Ĥ_int]⟩ =-i⟨[σ̂_r^†σ̂_r,Ĥ_int]⟩. In the steady state, we have ∂_t ⟨σ̂_α^†σ̂_α⟩=0. From Eqs. (<ref>) and (<ref>), we may then infer J_c = -ε_c I,J_h = -ε_h I,J_r = ε_r I. This remarkably simple relation between the heat currents arises due to the simple structure of the Hamiltonian. Heat may only traverse the system by exchanging an excitation in the room qubit with two excitations, one in the cold and one in the hot qubit. The laws of thermodynamics: In the steady state, Eq. (<ref>) allows us to express the laws of thermodynamics in a particularly simple form. The first law of thermodynamics may be written as J_c+J_h+J_r = I(ε_r-ε_c-ε_h) = 0, which holds because of the resonance condition in Eq. (<ref>). Obviously, the first law still holds if this resonance condition is not met. However, in that case the thermodynamic Hamiltonian should be chosen differently, which would modify Eq. (<ref>). The second law of thermodynamics reads J_c/T_c+J_h/T_h+J_r/T_r = I[(ε_c+ε_h)/T_r-ε_c/T_c-ε_h/T_h]≥ 0. In contrast to the first law, we cannot verify the second law without explicitly solving the master equation. However, knowing that it holds, the second law can tell us when the device operates as a refrigerator. From Eq. (<ref>), we see that the cold reservoir is cooled when I≥ 0. From the second law, we can infer that this is the case if the term in brackets in Eq. (<ref>) is positive. This results in the condition for cooling ε_c/ε_h≤ T_c(T_h-T_r)/[T_h(T_r-T_c)]≡η^COP_C, where η^COP_C denotes the Carnot value for the coefficient of performance of an absorption refrigerator, see also the next subsection. §.§.§ Figures of merit As discussed above, we may pursue two distinct goals with the absorption refrigerator: Cooling the cold reservoir (goal 1), or cooling the qubit coupled to the cold reservoir (goal 2). For these two goals, we introduce different figures of merit. Goal 1: For this goal, the desired output is a large heat current out of the cold reservoir. The natural figure of merit is then the coefficient of performance (COP) η^COP≡ (-J_c)/(-J_h) = ε_c/ε_h≤η^COP_C. As for the heat engine in Sec. <ref>, the coefficient of performance can be understood as the desired output divided by the consumed resource, which in this case is the heat provided by the hot reservoir.
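The cooling window itself can be made explicit in a few lines (again an added illustration with arbitrary numerical values): for fixed temperatures, the sketch below scans the ratio ε_c/ε_h and evaluates the bracket (ε_c+ε_h)/T_r-ε_c/T_c-ε_h/T_h that appears in the second law above; cooling requires this bracket to be positive, and it changes sign exactly at the Carnot value η_C^COP.

```python
import numpy as np

# Illustrative temperatures (k_B = 1), T_c <= T_r <= T_h, and an arbitrary hot-qubit splitting
T_c, T_r, T_h = 1.0, 2.0, 10.0
eps_h = 4.0

cop_carnot = T_c * (T_h - T_r) / (T_h * (T_r - T_c))   # Carnot value of the COP

for ratio in np.linspace(0.1, 1.5, 8):
    eps_c = ratio * eps_h
    # bracket appearing in the steady-state second law; cooling (I >= 0) requires it to be >= 0
    bracket = (eps_c + eps_h) / T_r - eps_c / T_c - eps_h / T_h
    print(f"eps_c/eps_h = {ratio:4.2f}  bracket = {bracket:+.3f}  cooling possible: {bracket >= 0}")

print("sign change at eps_c/eps_h =", cop_carnot)
```

For the temperatures chosen here the sign change occurs at ε_c/ε_h = 0.8, which coincides with η_C^COP = T_c(T_h-T_r)/[T_h(T_r-T_c)].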
For the absorption refrigerator under consideration, the COP is determined only by the qubit energy splittings, due to Eq. (<ref>). As shown above, the second law of thermodynamics provides an upper bound on the COP, under the assumption that the device indeed operates as a refrigerator. This bound diverges when T_r→ T_c, as heat can in principle be moved in between two reservoirs of the same temperature without consuming any resources. Goal 2: When cooling the qubit coupled to the cold reservoir, the natural figure of merit is its occupation. It may be quantified by an effective temperature θ = ε_c/{k_Bln[(1-⟨σ̂_c^†σ̂_c⟩)/⟨σ̂_c^†σ̂_c⟩]}⇔⟨σ̂_c^†σ̂_c⟩=1/(e^ε_c/k_Bθ+1). From Eq. (<ref>), we find (in the steady state) ⟨σ̂_c^†σ̂_c⟩ = n_F^c-I/κ_c >0, which depends on the quantity I encountered before. The occupation may asymptotically reach zero in the limit g/κ_r→ 0,κ_r/κ_c = κ_h/κ_c→∞,ε_h/k_BT_h→0,ε_r/k_BT_r→∞. These conditions are very demanding and can obviously never be reached exactly. This can be understood as an implication of the third law of thermodynamics, which prevents cooling a system down to the ground state. §.§.§ Perturbation theory As we have seen above, the quantity I determines most of the observables that we are interested in. In the model we consider, I can be calculated analytically [this is how the limit in Eq. (<ref>) was obtained]. However, this is a rather tedious calculation. Here we consider a perturbative calculation, which is simpler but nevertheless provides some insight. To this end, we consider the interacting part of the Hamiltonian Ĥ_int [c.f. Eq. (<ref>)] as a small perturbation to the dynamics. This is justified as long as g≪κ_c+κ_h+κ_r. We then write ρ̂_S = ρ̂_S^(0)+ρ̂_S^(1)+𝒪(g^2), where ρ̂_S^(j) is proportional to g^j. We note that the Hamiltonian is already written as the sum of a g-independent part, Ĥ_0, and the interaction Hamiltonian, which is linear in g. We then write the master equation in Eq. (<ref>) order by order in g, starting with the zeroth order ∂_tρ̂^(0)_S(t) = -i[Ĥ_0,ρ̂^(0)_S(t)]+ℒ_cρ̂^(0)_S(t)+ℒ_hρ̂^(0)_S(t)+ℒ_rρ̂^(0)_S(t). This master equation describes three independent qubits, each coupled to its respective thermal reservoir. The steady state is thus simply a product of three Gibbs states (at different temperatures) and can be written as ρ̂^(0) = [ 1-n_F^c 0; 0 n_F^c ]⊗[ 1-n_F^h 0; 0 n_F^h ]⊗[ 1-n_F^r 0; 0 n_F^r ]. The master equation to first order may be written as ∂_tρ̂^(1)_S(t) =-i[Ĥ_int,ρ̂^(0)_S(t)] -i[Ĥ_0,ρ̂^(1)_S(t)]+ℒ_cρ̂^(1)_S(t)+ℒ_hρ̂^(1)_S(t)+ℒ_rρ̂^(1)_S(t). The first term on the right-hand side gives [Ĥ_int,ρ̂^(0)_S(t)] = gδ n (|001⟩⟨110|-|110⟩⟨001|), where we abbreviated δ n = n_F^cn_F^h(1-n_F^r)-(1-n_F^c)(1-n_F^h)n_F^r. Note that δ n provides the population of the state |110⟩ minus the population of the state |001⟩ at g=0. The term in Eq. (<ref>) acts like a source term in the differential equation in Eq. (<ref>). This suggests the ansatz ρ̂^(1)_S = x|001⟩⟨110|+x^*|110⟩⟨001|. This ansatz is Hermitian and traceless, which is important to keep the density matrix in Eq. (<ref>) Hermitian and with trace one. With this ansatz, we find [Ĥ_0,ρ̂^(1)_S] = (ε_r-ε_c-ε_h)x|001⟩⟨110|-(ε_r-ε_c-ε_h)x^*|110⟩⟨001|, and ℒ_jρ̂^(1)_S = -(κ_j/2)ρ̂^(1)_S. These identities can be inserted into Eq. (<ref>) to obtain ρ̂^(1)_S = -2igδ n/(κ_c+κ_h+κ_r)(|001⟩⟨110|-|110⟩⟨001|), which allows us to determine I = 2gIm⟨σ̂_r^†σ̂_cσ̂_h⟩ = 2gIm⟨110|ρ̂_S|001⟩=4g^2δ n/(κ_c+κ_h+κ_r). From this expression, we find that for small g, the heat currents are quadratic in g.
Furthermore, we obtain cooling whenever δ n>0. One may show that this inequality is equivalent to ε_c/ε_h<η_C^COP. This perturbative calculation thus reproduces the cooling condition for all values of g. §.§.§ Coherence-enhanced cooling Finally, we consider a feature of the absorption refrigerator that exploits quantum coherence in the transient dynamics. To this end, we assume that we have a means to turn the refrigerator on and off. In an implementation based on a superconducting circuit, this can for instance be achieved using a magnetic field <cit.>. When the refrigerator is off, the qubits will tend to the state given in Eq. (<ref>), i.e., each qubit will thermalize with its respective reservoir. When the refrigerator is turned on, the effective temperature θ defined in Eq. (<ref>) will start to oscillate, see Fig. <ref>. These oscillations will eventually damp out towards the steady-state value. Interestingly however, θ may dip below its steady-state value. If the refrigerator is switched off when θ reaches its first minimum, it can stay below its steady-state value for a significant amount of time if the couplings to the environment, κ_j, are sufficiently small. Quantum coherence thus allows for better cooling. This effect was first discussed in Ref. <cit.> and recently verified experimentally <cit.>. The damped oscillations in Fig. <ref> are a consequence of the interplay between the unitary and the dissipative part in the master equation. The unitary part results in oscillations between the states |110⟩ and |001⟩, while the dissipative part induces an exponential time-dependence towards the steady state. To get some insight into the oscillations, we isolate the effect of the Hamiltonian. To this end, we consider the von Neumann equation ∂_tρ̂_S(t) = -[Ĥ_int,ρ̂_S(t)] = -igx(t) (001110-110001)+gy(t) (110110-001001), where x(t)=⟨110|ρ̂_S(t)|110⟩,y(t)=2Im⟨110|ρ̂_S(t)|001⟩. In Eq. (<ref>), we dropped Ĥ_0 because it commutes both with Ĥ_int as well as with the initial state, which we choose to be given by Eq. (<ref>). Equation (<ref>) can be cast into the coupled differential equations ∂_t x(t)=-2gy(t),∂_ty(t)=2gx(t), with the initial conditions x(t=0)=δ n and y(t=0)=0. These equations may easily be solved, resulting in x(t) = δ n cos(2gt),y(t)=δ nsin(2gt). The solution to Eq. (<ref>) then reads ρ̂_S(t) = ρ̂_S(t=0) -δ n sin^2(gt)(110110-001001) -iδ n/2sin(2gt)(001110-110001). These oscillations are known as (generalized) Rabi-oscillations and they appear whenever two states are coherently coupled to each other. From the quantum state, we may easily obtain the time-dependent occupation of the cold qubit ⟨σ̂_c^†σ̂_c⟩ = n_F^c-δ nsin^2(gt). We note that δ n≤ n_F^c, ensuring that the population remains positive at all times. § FLUCTUATIONS In this section, we are exploring a key difference between macroscopic systems and small quantum systems: fluctuations. While in macroscopic systems, fluctuations can generally be neglected because they are negligible compared to mean values (c.f. Sec. <ref>), fluctuations do play an important role in small, nano-scale systems. The laws of thermodynamics derived in Sec. <ref> only include average values and they hold irrespective of the relevance of fluctuations. In this section, we discuss new laws that describe the behavior of heat and work in the presence of fluctuations. These new laws include fluctuation theorems <cit.> and thermodynamic uncertainty relations <cit.>. 
While fluctuation theorems generalize the second law of thermodynamics to fluctuating systems, thermodynamic uncertainty relations provide a trade-off between a signal-to-noise ratio and entropy production, and they do not have any counterpart in macroscopic systems. §.§ Fluctuation theorems for an isolated system Before we consider the general scenario introduced in Sec. <ref>, we focus on an isolated system that undergoes unitary time-evolution governed by a time-dependent Hamiltonian Ĥ_S(t). We consider an initial state that is diagonal in the instantaneous energy eigenbasis, i.e., ρ̂_S(0) = ∑_n p_n |E_n^0⟩⟨ E_n^0|,Ĥ_S(t) = ∑_n E_n^t|E_n^t⟩⟨ E_n^t|. In this case, the time-evolution operator is given by Û_S (t) = 𝒯e^-i∫_0^t dt' Ĥ_S(t'). Since the system is isolated, no heat is exchanged with the environment. However, due to the time-dependence of the Hamiltonian, work will be performed on the system. The average work performed on the system after time τ reads (see also Sec. <ref>) W_S = ∫_0^τdt Tr{[ ∂_tĤ_S(t)]ρ̂_S(t)}. §.§.§ The two-point measurement scheme In order to go beyond average values, we introduce measurements according to the so-called two-point measurement scheme <cit.>: * Projectively measure the system energy at t=0. This results in an outcome E_n^0. * Let the system evolve for time τ according to the time-evolution in Eq. (<ref>). * Projectively measure the system energy at t=τ. This results in an outcome E_m^τ. The outcomes of the measurements define a trajectory as the system evolved from state |E_n^0⟩→ |E_m^τ⟩. The change in energy for such a trajectory is given by W_m← n = E_m^τ-E_n^0. Since there is no heat exchange, we interpret this energy change as work. The probability for observing a trajectory corresponds to the joint probability of measuring the initial and final energies and it reads p(m← n) = p_n p_m|n = p_n |⟨ E_m^τ|Û_S(τ)|E_n^0⟩^2, where p_n is the initial probability of finding the system in state |E_n^0⟩ and p_m|n denotes the conditional probability of finding the system in state |E_m^τ⟩ at time τ given that it started out in state |E_n^0⟩ at t=0. The average energy change may then be evaluated as ⟨ W_m← n⟩ = ∑_n,m p(m← n) W_m← n = Tr{∑_m E_m^τ |E_m^τ⟩⟨ E_m^τ|Û_S(τ)∑_n p_n|E_n^0⟩⟨ E_n^0| Û_S^†(τ)} -Tr{Û_S(τ)∑_n p_nE_n^0|E_n^0⟩⟨ E_n^0|Û_S^†(τ)} =Tr{Ĥ_S(τ)ρ̂_S(τ)} -Tr{Ĥ_S(0)ρ̂_S(0)}=∫_0^τ dt ∂_tTr{Ĥ_S(t)ρ̂_S(t)}=W_S. This further justifies the interpretation of W_m← n as the work performed on the system during the trajectory |E_n^0⟩→ |E_m^τ⟩, i.e., in a single experimental run when E_n^0 and E_m^τ are measured. Note that we use the same brackets to denote averages over trajectories ⟨ f(n,m)⟩ = ∑_n,mp(m← n)f(n,m) as we do for ensemble averages ⟨Ô⟩ = Tr{Ôρ̂}. The object that is averaged over usually makes it clear which average is meant. Having defined the work along trajectories allows us to investigate the fluctuations in the performed work. For instance, we may be interested in the higher moments of work defined as ⟨ W_m← n^k⟩ = ∑_n,mp(m← n) W^k_m← n. §.§.§ The backward experiment In fluctuation theorems, time-reversal plays an important role. For this reason, we introduce the time-reversal operator Θ̂ <cit.>. Time-reversal in quantum mechanics is described by an anti-unitary operator. If we define |ã⟩=Θ̂|a⟩ and |b̃⟩=Θ̂|b⟩, the defining properties of anti-unitarity are ⟨b̃|ã⟩ = ⟨b|a⟩^* = ⟨a|b⟩, Θ̂(α|ψ⟩+β|ϕ⟩) = α^* Θ̂|ψ⟩+β^*Θ̂|ϕ⟩. Note that these relations imply that the time-reversal operator anti-commutes with the imaginary unit: Θ̂ i = -iΘ̂. 
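Concretely, when no magnetic fields are present (and spin is ignored), the time-reversal operator may be represented as complex conjugation of the state in a fixed basis, Θ̂ψ = ψ^*. The short sketch below (an illustration under this assumption, not part of the original notes) checks the defining properties stated above numerically for random vectors.

```python
import numpy as np

rng = np.random.default_rng(1)
theta = lambda psi: np.conj(psi)      # time reversal as complex conjugation (no magnetic field)

dim = 4
a = rng.normal(size=dim) + 1j * rng.normal(size=dim)
b = rng.normal(size=dim) + 1j * rng.normal(size=dim)
alpha, beta = 0.3 - 0.7j, 1.2 + 0.4j

print(np.vdot(theta(b), theta(a)) - np.vdot(a, b))        # <b~|a~> - <a|b> ~ 0
print(np.linalg.norm(theta(alpha * a + beta * b)
                     - (np.conj(alpha) * theta(a) + np.conj(beta) * theta(b))))   # anti-linearity
print(np.linalg.norm(theta(1j * a) + 1j * theta(a)))      # Theta i = -i Theta
```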
Being anti-unitary, some rules that we take for granted do not apply to the time-reversal operator. For instance, the cyclic invariance of the trace has to be adapted as Tr{Θ̂ÂΘ̂^-1} = Tr{Â}^*. Another useful identity we will employ reads ⟨ã|Â|b̃⟩ = ⟨ b|Θ^-1Â^†Θ̂|a⟩. We note that in some works, a daggered time-reversal operator appears. Here we follow Refs. <cit.> and avoid such a notation, using instead the inverse of the time-reversal operator Θ̂^-1. Furthermore, we always consider the time-reversal operator (and its inverse) to act on the right. With the help of the time-reversal operator, we define a backward experiment. In this experiment, the time-evolution is determined by the time-reversed Hamiltonian H̃_S(t) = Θ̂Ĥ_S(τ-t)Θ̂^-1,Ũ_S(t)=𝒯e^-i∫_0^tdt'H̃_S(t'). The work protocol discussed above, where time-evolution is governed by Ĥ_S(t) is denoted as the forward experiment. In addition to transforming the Hamiltonian by the time-reversal operator, the time argument is inverted in Eq. (<ref>). Any external parameters that are changed during the forward experiment thus have their time-dependence reversed in the backward experiment. For instance, if an electric field is ramped up in the forward experiment, it is ramped down in the backward experiment. A special role is played by external magnetic fields. In the presence of such fields we have H̃_S(B⃗,t) = Θ̂Ĥ_S(-B⃗,τ-t)Θ̂^-1. The reason for this is that if we included the charges that create the magnetic field in the description, the time-reversal operator would reverse the momenta of these charges which changes the sign of the resulting magnetic fields. The time-evolution along the forward and backward experiments are linked by the so-called microversibility condition Θ̂Û_S(τ)Θ̂^-1 = Ũ_S^-1(τ)=Ũ^†_S(τ), which may be derived from Eq. (<ref>) and the properties of the time-reversal operator. As an initial state for the backward experiment, we consider a state that is diagonal in the basis of the time-reversed Hamiltonian at t=0 ρ̃_S(0) = ∑_m q_m Θ̂|E_m^τ⟩⟨ E_m^τ|Θ̂^-1, H̃_S(t)=∑_m E_m^τ-tΘ̂|E_m^τ-t⟩⟨ E_m^τ-t|Θ̂^-1. The backward experiment is then defined similarly to the forward experiment: * Projectively measure the system energy at t=0. This results in an outcome E_m^τ. * Let the system evolve for time τ according to the time-evolution in Eq. (<ref>). * Projectively measure the system energy at t=τ. This results in an outcome E_n^0. The outcomes of the measurements again define a trajectory |Ẽ_m^τ⟩=Θ̂|E_m^τ⟩→Θ̂|E_n^0⟩=|Ẽ_n^0⟩. The probability for this trajectory reads p̃(n← m) = q_m |⟨Ẽ_n^0|Ũ_S(τ)|Ẽ_m^τ⟩|^2 = q_m |⟨E_m^τ|Θ^-1Ũ^†_S(τ)Θ̂|E_n^0⟩|^2 = q_m|⟨ E_m^τ|Û_S(τ)|E_n^0⟩|^2, where we employed Eqs. (<ref>) and (<ref>). §.§.§ Fluctuation theorems We may now derive fluctuation theorems by taking the ratios of observing time-reversed trajectories in the forward and backward experiment p̃(n← m)/p(m← n) = q_m/p_n. Due to microreversibility, the transition probabilities drop out and we are left with the ratio of initial probabilities. This innocuous expression is actually very powerful. To see this, let us consider thermal initial states ρ̂_S(0) = e^-βĤ_S(0)/Z_0,ρ̃_S(0) = Θ̂e^-βĤ_S(τ)/Z_τΘ̂^-1, where the partition function reads Z_t = ∑_n e^-β E_n^t = e^-β F_t, with F_t denoting the free energy for a thermal state at inverse temperature β and Hamiltonian Ĥ_S(t), see Eq. (<ref>). For thermal states, the probabilities reduce to p_n = e^-β(E_n^0-F_0),q_m = e^-β(E_m^τ-F_τ). Crooks fluctuation theorem: Plugging Eq. (<ref>) into Eq. 
(<ref>) results in p̃(n← m)/p(m← n) = e^-β(W_m← n-Δ F), with Δ F=F_τ-F_0. This relation is known as Crooks fluctuation theorem <cit.>. It is an instance of a detailed fluctuation theorem because it involves the probabilities for single trajectories. Jarzynski relation: Multiplying Eq. (<ref>) with p(m← n) and summing over all n and m, we find the Jarzynski relation <cit.> ∑_n,mp̃(n← m) = ∑_n,m p(m← n) e^-β(W_m← n-Δ F)⇒⟨ e^-β W_m← n⟩ = e^-βΔ F. This relation is known as an integral fluctuation theorem since it involves an average rather than individual trajectories. The Jarzynski relation is remarkable because it relates an equilibrium quantity, the difference in equilibrium free energies on the right hand side, to an out-of-equilibrium quantity, the work performed in the forward experiment that appears on the left-hand side. Importantly, there is no requirement of remaining in or close to equilibrium during the experiment. Since equilibrium free energies are difficult to measure, the Jarzynski relation has been used to determine free energy landscapes, in particular in single-molecule pulling experiments <cit.>. In practice, evaluating the average on the left-hand side may be challenging as trajectories with exponentially small probabilities may contribute <cit.>. Finally, we may apply Jensen's inequality to the Jarzynski relation. Jensen's inequality states that for a convex function f(a x_1 +(1-a)x_2)≤ af(x_1)+(1-a) f(x_2), the inequality ⟨ f(X)⟩≥ f(⟨ X⟩), holds for a random variable X. Using this inequality, we find e^-βΔ F=⟨ e^-β W_m← n⟩≥ e^-β⟨ W_m← n⟩⇒⟨ W_m← n⟩=W_S≥Δ F. The final inequality states that the work performed on the system always exceeds the difference in equilibrium free energies. This inequality is equivalent to the second law of thermodynamics, when both the initial and final states are thermal states (which can always be achieved by letting the system equilibrate after the experiment). Thus, we find that the Jarzynski relation and the Crooks fluctuation theorem generalize the second law of thermodyanmics, as they constrain not only the average value of the performed work but also its fluctuations. §.§ Fluctuation theorems for the general scenario We now return to the general scenario introduced in Sec. <ref>. To define trajectories, we write the initial state [c.f. Eq. (<ref>)] as ρ̂_tot(0) = ρ̂_S(0)⊗_ατ̂_α = ∑_i,k⃗ p_0(i,k⃗)ψ_i(0),k⃗, with the states |ψ_i(0),k⃗⟩ = |ψ_i(0)⟩⊗_α|E_k_α^α,N_k_α^α⟩, where Ĥ_α|E_k_α^α,N_k_α^α⟩ = E_k_α^α|E_k_α^α,N_k_α^α⟩, N̂_α|E_k_α^α,N_k_α^α⟩ = N_k_α^α|E_k_α^α,N_k_α^α⟩. The sub- and superscripts may be confusing here. The eigenvalues of Ĥ_α are labeled by E_j^α and the vector k⃗ has elements k_α. The probabilities in Eq. (<ref>) read p_0(i,k⃗) = p_i(0)∏_αe^-β_α(E_k_α^α-μ_α N^α_k_α)/Z_α. Finally, we introduced the eigenvalues and eigenstates of the reduced state of the system through ρ̂_S(t)=Tr_B{ρ̂_tot(t)} = ∑_i p_i(t)ψ_i(t). §.§.§ Forward trajectories We now introduce trajectories by * Projectively measure all reservoir energies and particle numbers {Ĥ_α,N̂_α} and the system in its eigenbasis |ψ_i(0)⟩. This results in outcomes: i,k⃗. * Let the system evolve for time τ according to the time-evolution operator Û(τ) given in Eq. (<ref>). * Projectively measure all reservoir energies and particle numbers {Ĥ_α,N̂_α} and the system in its eigenbasis |ψ_j(τ)⟩. This results in outcomes: j,l⃗. We denote the corresponding trajectory by γ: |ψ_i(0),k⃗⟩→|ψ_j(τ),l⃗⟩. 
The probability to observe such a trajectory is given by P(γ) = p_0(i,k⃗)|⟨ψ_j(τ),l⃗|Û(τ)|ψ_i(0),k⃗⟩|^2. We may now define various thermodynamic quantities along these trajectories. Stochastic heat: The stochastic heat exchanged with reservoir α is defined as Q_α(γ) = (E^α_l_α-μ_α N^α_l_α)-(E^α_k_α-μ_α N^α_k_α). The average of the stochastic heat reduces to ⟨ Q_α(γ)⟩ = ∑_γ P(γ)Q_α(γ) = Tr{(Ĥ_α-μ_αN̂_α)[ρ̂_tot(τ)-ρ̂_tot(0)]}=Q_α, which is equal to the definition for heat introduced in Eq. (<ref>). In Eq. (<ref>), the sum over all trajectories is given by ∑_γ=∑_i,j,k⃗,l⃗ and the equality may be proven using the identities Ĥ_α = ∑_j,l⃗E^α_l_αψ_j(τ),l⃗,N̂_α = ∑_j,l⃗N^α_l_αψ_j(τ),l⃗, 1 = ∑_j,l⃗ψ_j(τ),l⃗. Stochastic (chemical) work: Analogously, the stochastic chemical work is defined as W_α(γ) = μ_α(N^α_l_α-N^α_k_α). As the stochastic heat, its average reduces to the expected value ⟨ W_α(γ)⟩ =μ_αTr{N̂_α[ρ̂_tot(τ)-ρ̂_tot(0)]}. Stochastic entropy: Following Ref. <cit.>, we may use the self-information introduced in Sec. <ref> to define a stochastic system entropy change Δ S(γ) = -ln p_j(τ)+ln p_i(0), which averages to the change in von Neumann entropy ⟨Δ S(γ)⟩ = S_vN[ρ̂_S(τ)]-S_vN[ρ̂_S(0)]. Stochastic entropy production: Finally, we can introduce the stochastic entropy production along a trajectory as Σ(γ) = k_BΔ S(γ) + ∑_αQ_α(γ)/T_α, which averages to the entropy production introduced in Eq. (<ref>), ⟨Σ(γ)⟩=Σ as can be easily shown using Eqs. (<ref>) and (<ref>). Importantly, while the average entropy production Σ is always nonnegative, the stochastic entropy production Σ(γ) can become negative. We note that in contrast to Sec. <ref>, we do not define a stochastic version of the work associated to the time-dependence of the Hamiltonian. The reason for this is that if the initial state does not commute with the Hamiltonian, there is no unique way to define stochastic work that obeys all desired properties <cit.>. §.§.§ Backward trajectories To derive fluctuation theorems, we need to introduce backward trajectories. For these, we consider the initial state ρ̃_tot(0) = Θ̂ρ̂_S(τ)⊗_ατ̂_αΘ̂^-1=∑_j,l⃗p_τ(j,l⃗)Θ̂ψ_j(τ),l⃗Θ̂^-1, where p_τ(j,l⃗) is obtained from Eq. (<ref>) by replacing p_j(0)→ p_j(τ), i.e., p_τ(j,l⃗) = p_j(τ)∏_αe^-β_α(E_l_α^α-μ_α N^α_l_α)/Z_α. Note that the initial state of the backward trajectory is quite different from what we considered in Sec. <ref>. For the system state, we consider the (time-reversed) final state of the forward experiment. The reservoirs on the other hand are initialized in the same Gibbs states as in the beginning of the forward experiment. This is consistent with the paradigm that we have control over the system, but not over the environment which we only describe by constant temperatures and chemical potentials. This state is then evolved by the operator Ũ(t) = 𝒯 e^-i∫_0^t dt' Θ̂Ĥ_tot(t')Θ̂^-1, which obeys the micro-reversibility condition Θ̂Û(τ)Θ̂^-1 = Ũ^†(τ). The backward trajectories are now defined by * Projectively measure all reservoir energies and particle numbers {Ĥ_α,N̂_α} and the system in its eigenbasis Θ̂|ψ_j(τ)⟩. This results in outcomes: j,l⃗. * Let the system evolve for time τ according to the time-evolution operator Ũ(τ) given in Eq. (<ref>). * Projectively measure all reservoir energies and particle numbers {Ĥ_α,N̂_α} and the system in its eigenbasis Θ̂|ψ_i(0)⟩. This results in outcomes: i,k⃗. We denote the corresponding trajectory by γ̃: Θ̂|ψ_j(τ),l⃗⟩→Θ̂|ψ_i(0),k⃗⟩. The probability to observe such a trajectory is given by, using Eq. 
(<ref>) P̃(γ̃) = p_τ(j,l⃗)|⟨ψ_j(τ),l⃗|Θ̂^-1Ũ^†(τ)Θ̂|ψ_i(0),k⃗⟩|^2 = p_τ(j,l⃗)|⟨ψ_j(τ),l⃗|Û(τ)|ψ_i(0),k⃗⟩|^2. §.§.§ Fluctuation theorems A detailed fluctuation theorem may now be obtained by taking the ratio of probabilities for time-reversed trajectories in the forward and in the backward experiment P̃(γ̃)/P(γ) = p_τ(j,l⃗)/p_0(i,k⃗) = e^-∑_αQ_α(γ)/T_α+lnp_j(τ)/p_i(0)=e^-Σ(γ)/k_B. As for the isolated system, c.f. Eq. (<ref>), the transition probabilities drop out and the right-hand side is determined solely by the initial probabilities. The Boltzmann factors in Eq. (<ref>) result in the stochastic heat, c.f. (<ref>), and together with the initial probabilities of the system, the exponent adds up to the stochastic entropy production, c.f. Eq. (<ref>). Equation (<ref>) provides a generalization of the second law of thermodynamics to nano-scale systems, where fluctuations matter. It is instructive, to consider what happens if Eq. (<ref>) is applied to a macroscopic process, for instance a glass shattering. In such a process, that we denote by the trajectory γ, roughly 10^23 degrees of freedom are involved (Avogadro constant), resulting in an entropy production of the order of 10^23k_B. The probability to observe the time-reversed process γ̃ is thus exponentially suppressed by an extremely large number. It is thus safe to assume that macroscopic processes with negative entropy production do not happen as their probabilities are virtually zero. For a nano-scale object, the entropy production along a trajectory may be as low as a few k_B, and indeed we may observe the time-reversed trajectory. This has interesting consequences for the thermodynamic arrow of time. For macroscopic systems, such an arrow of time is given by the fact that entropy is produced. For nano-scale systems, fluctuations prevent us from identifying the direction of time with certainty: if you see a video of a fluctuating system, you may not with certainty identify if the video was played forward or backward. Recently, machine learning was employed to learn the arrow of time and the algorithm rediscovered the fluctuation theorem as the underlying thermodynamic principle <cit.>. From the detailed fluctuation theorem, we may derive the Crooks fluctuation theorem by summing over all trajectories that have the same entropy production P̃(-Σ)/P(Σ)=e^-Σ/k_B,P(Σ) = ∑_γδ(Σ-Σ(γ))P(γ),P̃(Σ) = ∑_γδ(Σ-Σ(γ))P̃(γ). Furthermore, an integral fluctuation theorem can be derived ⟨ e^-Σ(γ)⟩ = 1, from which the second law, ⟨Σ(γ)⟩≥0, follows through Jensen's inequality in complete analogy to Eqs. (<ref>) and (<ref>). This implies that the fluctuation theorem in Eq. (<ref>) contains strictly more information than the second law, and therefore can be understood as a generalization thereof. §.§ Full counting statistics In the last subsection, we defined fluctuations along single trajectories for the general scenario. Here we show how we can extract these fluctuations from an approximate description based on Markovian master equations. To this end, we consider the method of full counting statistics (FCS) which allows us to calculate probability distributions for heat and work, as well as their associated moments and cumulants. A more detailed introduction can be found in Refs. <cit.>. §.§.§ Counting particles We start by considering a system that can exchange particles with a fermionic reservoir as illustrated in Fig. <ref>. In addition, it might be coupled to other reservoirs. 
We are interested in the net number of particles n that entered the fermionic reservoir after time t. Whenever a fermion enters the reservoir, n is increased by one. When a fermion leaves the reservoir (enters the system), n is reduced by one. The quantity n is thus a stochastic variable that is different in each experimental run. We consider the master equation ∂_tρ̂_S(t) = ℒ̃ρ̂_S(t)+κ (1-n_F)𝒟[ĉ]ρ̂_S(t)+κ n_F𝒟[ĉ^†]ρ̂_S(t)≡ℒρ̂_S(t), where ℒ̃ denotes the time-evolution due to the Hamiltonian as well as all reservoirs except the one we are interested in and the annihilation operator ĉ destroys a fermion in the system. Now we identify which terms in the master equation change the values of n. To this end, we write out the superoperators 𝒟[ĉ^†]ρ̂ = ĉ^†ρ̂ĉ-1/2{ĉĉ^†,ρ̂},𝒟[ĉ]ρ̂ = ĉρ̂ĉ^†-1/2{ĉ^†ĉ,ρ̂}. The term ĉ^†ρ̂ĉ increases the number of fermions in the system by one. This fermion comes from the reservoir, thus this term decreases n by one. Similarly, the term ĉρ̂ĉ^† increases n by one. We now introduce the n-resolved density matrix ρ̂_S(n) as the density matrix if n fermions entered the reservoir. From this quantity, we may recover the full density matrix as ρ̂_S = ∑_n=-∞^∞ρ̂_S(n). Having identified the terms in the master equation which change n, we may write the time-evolution of the n-resolved density matrix as ∂_t ρ̂_S(n) = ℒ_0 ρ̂_S(n) + 𝒥_+ρ̂_S(n-1)+ 𝒥_-ρ̂_S(n+1), where the superoperators that increase and decrease n are written as 𝒥_+ρ̂ = κ (1-n_F)ĉρ̂ĉ^†,𝒥_-ρ̂ = κ n_Fĉ^†ρ̂ĉ, and ℒ_0 denotes the time-evolution that does not change n such that ℒ = ℒ_0+𝒥_++𝒥_-. Summing Eq. (<ref>) over n, we thus recover the master equation in Eq. (<ref>). From the n-resolved density matrix, we can obtain the probability that n fermions entered the reservoir during time t as p(n) = Tr{ρ̂_S(n)}. Often we are interested in the moments of this distribution ⟨ n^k⟩ = ∑_n=-∞^∞ n^k p(n), with the first moment ⟨ n ⟩ being the average. Keeping the information on n comes at a price. Comparing Eqs. (<ref>) to Eq. (<ref>), we turned a single master equation into infinitely many coupled master equations, one for each ρ̂_S(n). This problem can be simplified by Fourier transforming ρ̂_S(χ) = ∑_n=-∞^∞ e^inχρ̂_S(n), where χ is known as the counting field. Fourier transforming Eq. (<ref>), we find ∂_tρ̂_S(χ) = ℒ_0∑_n=-∞^∞ e^inχρ̂_S(n)+𝒥_+∑_n=-∞^∞ e^inχρ̂_S(n-1)+𝒥_-∑_n=-∞^∞ e^inχρ̂_S(n+1) =ℒ_0ρ̂_S(χ)+e^iχ𝒥_+ρ̂_S(χ)+e^-iχ𝒥_-ρ̂_S(χ) ≡ℒ(χ)ρ̂_S(χ). This equation has the formal solution ρ̂_S(χ) = e^ℒ(χ)tρ̂_S(χ,t=0). To find the initial condition, we note that we start counting at t=0, implying that n=0 at t=0 and therefore ρ̂_S(n,t=0) = ρ̂_S(t=0)δ_n,0⇔ρ̂_S(χ,t=0) = ρ̂_S(t=0). The trace of the χ-dependent density matrix provides the characteristic function Λ(χ) ≡∑_n=-∞^∞ e^inχ p(n) = Tr{ρ̂_S(χ)},Λ(0) = ∑_n=-∞^∞ p(n) = 1. In the field of FCS, the characteristic function is often called the moment-generating function since the moments of n can be obtained by taking derivatives as ⟨ n^k⟩ = (-i)^k∂_χ^k. Λ(χ)|_χ=0. Note however that in probability theory, the moment-generating function is the Laplace transform of the probability distribution. A particularly useful quantity is the cumulant-generating function, given by the logarithm of the characteristic function S(χ) = lnΛ(χ),⟨⟨ n^k⟩⟩ = (-i)^k∂_χ^k. S(χ)|_χ=0, which produces the cumulants ⟨⟨ n^k⟩⟩ upon differentiation. The first cumulant is equal to the mean ⟨⟨ n⟩⟩ = -i∂_χ. S(χ)|_χ=0 = -i.∂_χΛ(χ)/Λ(χ)|_χ=0 = ⟨ n⟩. The second cumulant is equal to the variance ⟨⟨ n^2⟩⟩ = -∂^2_χ. 
S(χ)|_χ=0 =.-∂_χ^2Λ(χ)/Λ(χ)|_χ=0-.[-i∂_χΛ(χ)]^2/Λ^2(χ)|_χ=0 = ⟨ n^2⟩-⟨ n⟩ ^2. The third cumulant ⟨⟨ n^3⟩⟩ is related to the skewness of the distribution and the fourth cumulant to its kurtosis (“tailedness"). Often, we are interested in the long-time behavior of n. In this case, it is useful to use the spectral decomposition to write e^ℒ(χ)t = ∑_j e^ν_j t𝒫_j, where ν_j denote the eigenvalues of the Liouvillean ℒ and the projectors 𝒫_j can be obtained from its eigenvectors. The characteristic function may then be written as Λ(χ) = ∑_je^ν_jtTr{𝒫_jρ̂_S(t=0)} e^ν_maxtTr{𝒫_maxρ̂_S(t=0)}, where ν_max is the χ-dependent eigenvalue of the Liouvillean with the largest real part. Similarly, the cumulant generating function may be written as S(χ) ν_maxt+lnTr{𝒫_maxρ̂_S(t=0)}≃ν_maxt. The cumulant generating function is thus fully determined by the eigenvalue of the Liouvillean with the largest real part. Furthermore, all cumulants become linear in time, since S(χ)∝ t. §.§.§ Example: transport through a quantum dot As an example, we return to the quantum dot coupled to two fermionic reservoirs, see Fig. <ref>, as discussed in Sec. <ref> in the context of a heat engine. Here we consider the so-called large bias regime for simplicity, where μ_L-ϵ_d,ϵ_d-μ_R≫ k_BT. In this case, n_F^L≃ 1 and n_F^R≃ 0, such that particles can only enter the dot from the left reservoir and leave to the right reservoir. We want to describe the statistics of the net number of particles that enter the right reservoir. This system was experimentally investigated in Ref. <cit.>, where the tunneling electrons could be observed and counted one by one. The master equation for this scenario is given by ∂_t ρ̂_S = κ_L𝒟[d̂^†]ρ̂_S+κ_R𝒟[d̂]ρ̂_S = ℒρ̂_S. Since we only count the particles entering the right reservoir, we may introduce the counting field by the replacement κ_Rd̂ρ̂_Sd̂^†→ e^iχκ_Rd̂ρ̂_Sd̂^†. This results in the counting-field dependent master equation ∂_t ρ̂_S(χ) =ℒρ̂_S(χ)+κ_R(e^iχ-1)d̂ρ̂_Sd̂^†. Noting that the density matrix remains diagonal, and denoting the diagonal elements by p_0(χ) and p_1(χ), we may cast this master equation into ∂_t [ p_0(χ); p_1(χ) ] = [ -κ_L κ_Re^iχ; κ_L -κ_R ]_L(χ)[ p_0(χ); p_1(χ) ], where L(χ) denotes the Liouvillean (acting on a vector instead of on a matrix). Its eigenvalues can easily be calculated and read ν_± = -κ_L+κ_R/2±1/2√((κ_L-κ_R)^2+4e^iχκ_Lκ_R). The cumulant generating function in the long-time limit is thus [c.f. Eq. (<ref>)] S(χ)/t =ν_+= -κ_L+κ_R/2+1/2√((κ_L-κ_R)^2+4e^iχκ_Lκ_R), resulting in the average ⟨ n⟩ = -i∂_χ.S(χ)|_χ=0 = κ_Lκ_R/κ_L+κ_Rt = ⟨ I⟩ t, where ⟨ I⟩ denotes the average particle current, see also the power given in Eq. (<ref>). The variance is given by ⟨⟨ n^2⟩⟩ = -∂^2_χ.S(χ)|_χ=0 = ⟨ n⟩κ_L^2+κ_R^2/κ_L+κ_R^2. Transport becomes particularly simple if one of the couplings far exceeds the other, for instance if κ_L≫κ_R. In this case, the cumulant generating function reduces to S(χ) = κ_R t(e^iχ-1). Note that it is the smaller of the couplings that features in the cumulant generating function. This is the case because transport at long times is dominated by bottlenecks. In this case, the bottleneck is the smaller coupling to the right reservoir. The cumulants obtained from Eq. (<ref>) are all equal and read ⟨⟨ n^k⟩⟩ = κ_Rt. This is the hallmark of Poissonian transport, where each particle is independent of the others. 
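These results are easy to reproduce numerically from the χ-dependent eigenvalue ν_+. The sketch below (illustrative rates; cumulants obtained by finite-difference derivatives of S(χ) = ν_+ t at χ = 0) recovers the mean and variance quoted above and illustrates the Poissonian limit κ_L ≫ κ_R, before we confirm the latter analytically.

```python
import numpy as np

def nu_plus(chi, kL, kR):
    """Largest eigenvalue of the 2x2 chi-dependent Liouvillean of the large-bias dot."""
    return -(kL + kR) / 2 + 0.5 * np.sqrt((kL - kR) ** 2 + 4 * np.exp(1j * chi) * kL * kR)

def first_two_cumulants(kL, kR, t, h=1e-4):
    S = lambda chi: nu_plus(chi, kL, kR) * t
    c1 = np.real(-1j * (S(h) - S(-h)) / (2 * h))            # <<n>>   = -i dS/dchi
    c2 = np.real(-(S(h) - 2 * S(0.0) + S(-h)) / h ** 2)     # <<n^2>> = -d^2 S/dchi^2
    return c1, c2

kL, kR, t = 1.0, 0.4, 1.0
c1, c2 = first_two_cumulants(kL, kR, t)
print(c1, "vs", kL * kR / (kL + kR) * t)                        # mean
print(c2, "vs", c1 * (kL ** 2 + kR ** 2) / (kL + kR) ** 2)      # variance

c1, c2 = first_two_cumulants(100.0, 0.1, t)
print("Fano factor for kL >> kR:", c2 / c1)                      # -> 1 (Poissonian)
```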
Indeed, we may write the characteristic function as Λ(χ) = e^-κ_Rtexp(κ_Rte^iχ) = e^-κ_Rt∑_n=0^∞(κ_Rt)^n/n!e^inχ, from which we may read off, with the help of Eq. (<ref>), p(n) = (κ_Rt)^n/n!e^-κ_Rt, which is indeed a Poisson distribution. §.§.§ Counting heat and work So far, we focused on counting particles. We now illustrate how the same technique can be applied to evaluate the moments and cumulants of heat and power exchanged with the environment. To this end, we consider the singular-coupling master equation given in Eq. (<ref>) which is here restated for convenience ∂_t ρ̂_S(t) = -i[Ĥ_S,ρ̂_S(t)]+∑_α,kγ_k^α(ω_α,k)𝒟[Ŝ_α,k]ρ̂_S(t)=ℒρ̂_S(t), where the Hamiltonian may include a Lamb shift. The jump operators obey [Ŝ_α,k,Ĥ_TD] = ω_α,kŜ_α,k,[Ŝ_α,k,N̂_S] = n_α,kŜ_α,k, where the thermodynamic Hamiltonian is introduced in Sec. <ref>. From these relations, we may conclude that a jump Ŝ_α,kρ̂_SŜ_α,k^† reduces the particle number in the system by n_α,k and the internal energy by ω_α,k. We are interested in the joint distribution of the heat and work exchanged with the different reservoirs P({Q_α,W_α}). This distribution can be obtained from a characteristic function Λ({χ_α,λ_α}) = (∏_α∫ dQ_α e^iQ_αχ_α∫ dW_α e^iW_αλ_α)P({Q_α,W_α})=Tr{ρ̂_S({χ_α,λ_α})}. The counting field dependent density matrix can again be obtained by a modified master equation ∂_t ρ̂_S({χ_α,λ_α})=ℒ({χ_α,λ_α})ρ̂_S({χ_α,λ_α}), where the counting field dependent Liouvillean is obtained from the Liouvillean ℒ in Eq. (<ref>) by the replacement Ŝ_α,kρ̂_SŜ_α,k^†→e^iχ_α(ω_α,k-μ_α n_α,k)e^iλ_αμ_α n_α,kŜ_α,kρ̂_SŜ_α,k^†. In contrast to Eq. (<ref>), the counting fields are weighted by the heat and work associated to each jump. While Eq. (<ref>) provides a simple recipe, we note that we can rigorously obtain ℒ({χ_α,λ_α}) by introducing counting fields on the unitary evolution with Ĥ_tot and subsequently tracing out the environment using Born-Markov approximations <cit.>. We also note that the fluctuation theorem in Eq. (<ref>) may be derived from the master equation using the method of FCS. A trajectory γ is then determined by outcomes of initial and final measurements on the system, as well as the times and types of all jumps that occur <cit.>. §.§ The Thermodynamic uncertainty relation The thermodynamic uncertainty relation (TUR) is an inequality that bounds fluctuations in a current by the entropy production <cit.> ⟨⟨ I^2⟩⟩/⟨ I⟩^2≥2 k_B/Σ̇. Here ⟨⟨ I^2⟩⟩ and ⟨ I⟩ denote the steady-state current fluctuations and average current respectively (see below), and Σ̇ denotes the entropy production rate. The TUR can be understood as an upper bound on the signal-to-noise ratio ⟨ I⟩^2/⟨⟨ I^2⟩⟩, provided by dissipation, as expressed through the entropy production rate: to achieve a high signal-to-noise ratio, a sufficient amount of entropy must be produced. Equivalently, the TUR implies that it is not possible to simultaneously achieve a high current, low fluctuations, and low dissipation. In contrast to fluctuation theorems, the TUR has a smaller range of validity. It applies to classical, Markovian systems. It may therefore be violated in quantum-coherent systems, see for instance Refs. <cit.>. Markovian quantum systems thus have the potential to achieve higher signal-to-noise ratios for a given amount of dissipation than their classical counterparts. We note however that the TUR may also be violated in classical, non-Markovian systems <cit.>. §.§.§ Current and current noise The average and the noise of a current enter the TUR. 
Here we briefly connect these quantities to the cumulants obtained from FCS as discussed in Sec. <ref>. Let n count the number of transferred quanta (e.g., particles or photons exchanged with a reservoir). The current cumulants are then defined as the time-derivatives of the cumulants of n ⟨⟨ I^k⟩⟩ = ∂_t ⟨⟨ n^k⟩⟩⟨⟨ n^k⟩⟩/t, where we used that all cumulants become linear in time in the long-time limit. The first cumulant of the current is simply the average current ⟨ I⟩, while the second cumulant characterizes the noise of the current. The relation between current and the counting variable n can be motivated by introducing a stochastic current, which is the time-derivative of n, i.e., n(t) = ∫_0^t dt' I(t'). Since n typically increases or decreases by one if a particle is exchanged, the stochastic current I(t) consists of a series of Dirac deltas at these times. With this definition, one may show ⟨ n⟩ = ∫_0^t dt' ⟨ I(t)⟩,lim_t→∞⟨⟨ n⟩⟩ = t∫_-∞^∞ dτ⟨δ I(τ)δ I(0)⟩ = t S_δ I, where δ I(t) = I(t) -⟨ I(t)⟩ denotes the deviation of the current from its mean and S_δ I is known as the zero-frequency power spectrum. More information on noise and fluctuations can be found for instance in Refs. <cit.>. §.§.§ Application: heat engine As an illustrative example, let us consider the TUR applied to a heat engine operating between a hot and a cold reservoir, as discussed in Sec. <ref>. In this case, Eq. (<ref>) may be re-written as <cit.> P η/η-η_Ck_B T_c/⟨⟨ P^2⟩⟩≤1/2, where P denotes the average output power, η the efficiency and η_C the Carnot efficiency [c.f. Eq. (<ref>)]. The TUR thus implies that we cannot at the same time achieve high power, an efficiency close to the Carnot efficiency, and low fluctuations in the output power (sometimes called high constancy). In other words, a high output power at efficiencies close to the Carnot efficiency is only possible if fluctuations are large. Equation (<ref>) may be obtained from Eq. (<ref>) by using the expression for the entropy production rate in Eq. (<ref>) together with the identities ⟨ I⟩ = P/μ_c-μ_h,⟨⟨ I⟩⟩ =⟨⟨ I⟩⟩/(μ_c-μ_h)^2, which relate the power and its fluctuations to the particle current. The TUR thus implies that the well-known trade-off between high power and an efficiency close to the Carnot efficiency is actually a trade-off between three quantities, including the power fluctuations. This sheds light onto previous works that found finite power at Carnot efficiency close to a phase transition, where fluctuations diverge <cit.>. Interesting additional applications of the TUR include the estimation of efficiencies of molecular motors <cit.>, as well as investigating the precision of biological clocks <cit.>. § SUMMARY These lecture notes provide an introduction to the vast and growing field of quantum thermodynamics. In particular, the following key questions were addressed: What is thermodynamic equilibrium? This question is addressed in Sec. <ref>, where we introduced the Gibbs state as well as the concepts of temperature and chemical potential. To highlight the relevance of the Gibbs state, we provided three motivations: The maximum entropy principle, considering a small part of a system with well defined energy and particle number, as well as the concept of global passivity. How do the laws of thermodynamics emerge from quantum mechanics? In Sec. <ref>, we tackled this key question that lies at the heart of quantum thermodynamics and we introduced the general scenario that was used throughout the lecture notes. 
We discussed how energy conservation results in the first law of thermodynamics, we introduced the concept of entropy and entropy production, and we discussed how the second law of thermodynamics can be understood in terms of loss of information. How can we model open quantum systems? In Sec. <ref> we derived Markovian master equations that provide a powerful tool to produce approximations for observables in the general scenario. We discussed how these equations can be derived in a thermodynamically consistent way and highlighted their limitations. What can we do with open quantum systems? Section <ref> introduced three different quantum thermal machines, a heat engine, an entanglement generator, and a refrigerator. These devices illustrate how open quantum systems coupled to multiple thermal reservoirs can be used to perform useful tasks. How do small systems differ from large ones? In Sec. <ref>, we focused on a key difference between microscopic and macroscopic systems: fluctuations. We illustrated how they become relevant and how they result in new thermodynamic rules including fluctuation theorems, which generalize the second law of thermodynamics, and the thermodynamic uncertainty relation. § OUTLOOK While a number of topics in the field of quantum thermodynamics is addressed in these lecture notes, there is much more that has not been touched upon. Here I provide a subjective selection of exciting research avenues that build on the concepts discussed in these notes. §.§ The thermodynamics of Information Already in the 19th century, J. C. Maxwell realized, by considering a thought experiment, that having access to microscopic degrees of freedom allows for performing tasks that are otherwise forbidden by the second law of thermodynamics <cit.>. Maxwell considered a “neat-fingered being”, commonly referred to as Maxwell's demon, that can see individual particles in a gas. Using a trap-door, the demon could separate fast from slow particles and thereby create a temperature gradient out of equilibrium without performing any work, seemingly reducing entropy and thus violating the second law of thermodynamics. More generally, using measurement and feedback control, thermal fluctuations can be rectified which may result in negative entropy production. This seeming contradiction of the laws of thermodynamics is resolved by appreciating that the demon itself is a physical system. In order to function, it needs to produce entropy. Including this contribution to the entropy production, the laws of thermodynamics are restored. Experimental progress has enabled the implementation of Maxwell's thought experiment in the laboratory, both in classical <cit.>, and more recently in quantum systems <cit.>. Interestingly, to understand the thermodynamics of feedback-controlled systems, it is not required to model the physical implementation of feedback control (i.e., the demon). Instead, it is sufficient to include the information obtained by the demon in the thermodynamic bookkeeping <cit.>. Since feedback-controlled systems exploit fluctuations (different measurement outcomes result in different feedback protocols), and since the fluctuation theorem provides a generalization of the second law of thermodynamics to fluctuating systems, information can be incorporated in the thermodynamic bookkeeping by accordingly modified fluctuation theorems <cit.>. 
A better understanding of the thermodynamics of measurement and feedback promises to shed light onto how such systems can be exploited for emerging nano- and quantum technologies, similar to how a better understanding of thermodynamics allowed for the refinement of steam engines and refrigerators in the industrial revolution. §.§ Quantum thermo-kinetic uncertainty relation The TUR discussed in Sec. <ref> shows how entropy production bounds the signal-to-noise ratio of any current. In addition to the TUR, another bound exists, known as the kinetic uncertainty relation (KUR) <cit.>. The KUR looks very similar to the TUR, but instead of the entropy production rate, it features the dynamical activity, i.e., the average rate of transitions between different states. While the TUR is tight close to equilibrium (a consequence of the fluctuation-dissipation theorem), it becomes very loose far from equilibrium because the entropy production rate increases in this regime. In contrast, the KUR provides a relevant bound far from equilibrium and can also be applied to irreversible dynamics, such as predator-prey models. Very recently, a tighter version of the KUR was discovered, known as the clock uncertainty relation (CUR) due to its connection to the problem of time estimation <cit.>. Both the TUR and the KUR hold for classical Markovian systems and can be violated in systems described by a quantum Markovian master equation (see for instance Ref. <cit.>). This has motivated efforts in finding similar bounds that constrain signal-to-noise ratios in quantum systems. Over the last few years, a number of extensions of the TUR and KUR to the quantum regime were discovered, see for instance Refs. <cit.>. These works provide insights into a fascinating avenue of research that aims at understanding the fundamental limitations of precision in open quantum systems, as well as the differences between quantum and classical stochastic systems.
This research direction is closely related to understanding systems that are strongly coupled to their environment <cit.>. §.§ Dissipative phase transitions Phase transitions are a central concept in macroscopic thermodynamics. A phase transition denotes a drastic change in a system's properties upon changing an external variable. Common examples include the transition of liquid water to ice at zero degrees Celsius, or the onset of magnetization in a ferromagnet. These phase transitions occur as temperature is changed. In quantum systems, phase transitions may occur at zero temperature, as quantum fluctuations result in a drastic change of the ground-state properties. These phase transitions are termed quantum phase transitions. Phase transitions are heavily exploited in technological applications, including refrigerators and sensors, e.g. single-photon detectors <cit.>. Indeed, phase transitions are very promising for sensing applications due to the drastic change that can be induced by a small perturbation. More recently, dissipative phase transitions between out-of-equilibrium steady states have been investigated <cit.>. These transitions occurring far from equilibrium exhibit rich behavior and can be exploited for sensing applications <cit.>. The thermodynamics of these transitions remains rather unexplored and provides a promising research avenue. §.§ Thermodynamics of multiple conserved quantities Throughout these lecture notes, we considered the grand canonical (and sometimes the canonical) ensemble. In this ensemble, there are two conserved quantities, energy and particle number. One can imagine adding additional conserved quantities, for instance angular momentum or any other observable that commutes with the total Hamiltonian. This motivates the extension of the thermal state to the so-called generalized Gibbs state <cit.> ρ̂_G = e^-β(Ĥ-∑_k μ_k Q̂_k)/Z, where the Q̂_k denote the conserved charges. In this case, thermodynamic forces corresponding to different conserved quantities can be traded off with each other, similar to how the heat engine in Sec. <ref> exploits a temperature gradient to overcome a voltage bias. In contrast to a heat engine, which performs one useful task, hybrid thermal machines perform multiple useful tasks at the same time <cit.>. For instance, a three-terminal device can be used to produce work and cool down the coldest reservoir at the same time. Of particular interest is the study of thermodynamics when the conserved quantities no longer commute, as is the case for instance with the different components of angular momentum. In this case, established concepts in thermodynamics are challenged and many fascinating questions are raised <cit.>. § ACKNOWLEDGMENTS These lecture notes were created for a Master course at the University of Basel where they are accompanied by exercise sheets (available upon request). They are based on a previous set of notes prepared for an 8-hour course at Lund University. I thank all participants of these courses whose feedback has been crucial for the development of these notes. I also thank Peter Samuelsson for providing the opportunity to develop my own course at Lund University. Many thanks also to the teaching assistants in Basel including Marcelo Janovitch, Matteo Brunelli, Kacper Prech, and Aaron Daniel. Special thanks to Aaron Daniel for illustrating Figs. <ref>, <ref>, <ref>, <ref>, <ref>, <ref>, <ref>, and <ref>, and to Marcelo Janovitch for illustrating Fig. <ref>. 
Special thanks also to Aaron Daniel and Nicolas Hunziker for pointing out the appearance of the golden ratio in Sec. <ref>. I acknowledge funding from the Swiss National Science Foundation (Eccellenza Professorial Fellowship PCEFP2_194268).
http://arxiv.org/abs/2406.19344v1
20240627171908
On joint returns to zero of Bessel processes
[ "Quentin Berger", "Loïc Béthencourt", "Camille Tardif" ]
math.PR
[ "math.PR", "60G40, 60H30" ]
Thermal Dynamics of Heat Pipes with Sub-Critical Nanopores Sumith Yesudasan July 1, 2024 ========================================================== § ABSTRACT In this article, we consider joint returns to zero of n Bessel processes (n≥ 2): our main goal is to estimate the probability that they avoid having joint returns to zero for a long time. More precisely, considering n independent Bessel processes (X_t^(i))_1≤ i ≤ n of dimension δ∈ (0,1), we are interested in the first joint return to zero of any two of them: H_n := inf{ t>0, ∃ 1≤ i <j ≤ n such that X_t^(i) = X_t^(j) =0 } . We prove the existence of a persistence exponent θ_n such that (H_n>t) = t^-θ_n+o(1) as t→∞, and we provide some non-trivial bounds on θ_n. In particular, when n=3, we show that 2(1-δ)≤θ_3 ≤ 2 (1-δ) + f(δ) for some (explicit) function f(δ) with sup_[0,1] f(δ) ≈ 0.079. § INTRODUCTION AND MAIN RESULT Let n≥ 2 be some fixed integer, and consider X=(X^(i))_1≤ i ≤ n independent squared Bessel processes of dimension δ, i.e. described by the evolution equations X_0^(i) =x_i , X_t^(i) = 2 √(X_t^(i)) W_t^(i) + δ t , with (W_t^(i))_1≤ i ≤ n independent standard Brownian motions. In other words, (X_t)_t≥ 0 is a diffusion in (_+)^n with generator ℒ^n_δ := 2 ∑_i=1^n x_i ∂^2/∂ x_i^2 + δ∂/∂ x_i . We denote by _x the law of (X_t)_t≥ 0 started from x = (x_1,…,x_n). For i,j ∈{1,…, n}, i≠ j, let us denote T_i := inf{ t≥ 0, X_t^(i) =0} , T_i,j := inf{ t≥ 0, X_t^(i) = X_t^(j) =0 } = inf{ t≥ 0, X_t^(i) + X_t^(j) =0 } , which are respectively the first return to 0 of X^(i) and the first joint return to 0 of X^(i) and X^(j). Then, it is classical, see e.g. <cit.>, to obtain that if δ <2, then for any fixed i, _x(T_i >t) ∼ c_x_i t^- (1- 1/2δ) as t→∞ , where the constant c_x_i depends only on x_i and δ, and is given by c_x_i = x_i^1 -δ/2/2^1-δ/2(1-δ /2)Γ(1-δ/2) , where Γ is the usual gamma function. As far as joint returns are concerned, for any fixed i≠ j, we have, if δ∈ (0,1), _x(T_i,j>t) ∼ c_x_i+x_j t^- (1-δ) as t→∞ . This can be viewed from the fact that X^(i)+X^(j) is a squared Bessel process of dimension δ̃=2δ with starting point x_i+x_j, see <cit.>, so one can apply (<ref>). We also refer to Section <ref> for more comments. In this article, we consider the first joint return to 0 of any two of the n squared Bessel processes, namely H_n := min{ T_i,j , 1≤ i < j ≤ n } . This can also be seen as the hitting time of the (n-2)-dimensional set 𝒜 = ⋃_i≠ j{x_i=x_j=0} by the (ℝ_+)^n-valued process (X_t)_t≥ 0. In dimension n=2, this corresponds to the hitting time of the corner of the quadrant (_+)^2; in dimension n=3, this is the hitting time of one of the axis of the octant (_+)^3. One could also consider the hitting time by (X_t)_t≥ 0 of the (n-k)-dimensional set 𝒜^(k) = ⋃_|I|=k{x_i=0 ∀ i∈ I}, corresponding to simultaneous returns to 0 of k Bessel processes. We will make a few comments on this general case, but for simplicity we focus on the case k=2 in the rest of the paper. Our main goal is to estimate the tail probability _x(H_n>t) as t→∞. We will focus on the case where δ is in (0,1), since in the case δ≥ 1 we have T_i,j =+∞ a.s. for all i,j, while for δ≤ 0 squared Bessel processes are absorbed at 0 (still, we discuss this case in Section <ref>). We prove below that the persistence exponent θ_n exists, see Proposition <ref>, i.e. that we have, for any x∉𝒜, _x(H_n>t) = t^- θ_n +o(1) as t→∞ . The question is then to identify θ_n; a further question would be to obtain a sharper asymptotic behavior, for instance _x(H_n>t) ∼ c_n,x t^-θ_n. 
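Both asymptotics are easy to check numerically, using the classical fact that for a squared Bessel process of dimension δ∈(0,2) started at x>0, the variable x/(2T_0) is Gamma(1-δ/2)-distributed, together with the additivity property recalled above (so that T_{i,j} has the law of the hitting time of 0 of a squared Bessel process of dimension 2δ started from x_i+x_j). The sketch below (Python; parameter values chosen for illustration only) compares Monte Carlo estimates of the two tails with the constants c_{x_i} and c_{x_i+x_j}.

```python
import numpy as np
from math import gamma

rng = np.random.default_rng(0)
delta, x1, x2, t, N = 0.5, 1.0, 2.0, 50.0, 10**6

def sample_T0(x, dim, size):
    """Hitting time of 0 of a squared Bessel process of dimension dim in (0,2) started at x,
    using the classical identity x/(2*T0) ~ Gamma(1 - dim/2)."""
    return x / (2.0 * rng.gamma(1.0 - dim / 2.0, size=size))

def c(x, dim):
    nu = 1.0 - dim / 2.0
    return x ** nu / (2 ** nu * nu * gamma(nu))

T1  = sample_T0(x1, delta, N)              # return to 0 of a single process
T12 = sample_T0(x1 + x2, 2 * delta, N)     # joint return: X^(1)+X^(2) is BESQ(2*delta)

print(np.mean(T1 > t),  "vs", c(x1, delta) * t ** (-(1 - delta / 2)))
print(np.mean(T12 > t), "vs", c(x1 + x2, 2 * delta) * t ** (-(1 - delta)))
```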
In this article, we put some emphasis on the case n=3 for simplicity. Even if we are not able to determine the exponent θ_3, we prove non-trivial upper and lower bounds, showing that 2(1-δ)≤θ_3 ≤ 2 (1-δ) + f(δ) for some (explicit) function f(δ) with sup_[0,1] f(δ) ≈ 0.079, see Theorem <ref> below. §.§ Some motivations Spatial population with seed-bank and renewal processes. Our original motivation was a question raised by F. den Hollander, in the context of renewal processes, in relation to models of populations with seed-banks <cit.>, in particular in a multi-colony setting, see e.g. <cit.> (or the introduction of <cit.> for an overview). In these models, individuals can become dormant and stop reproducing and after some (random, possibly heavy-tailed) time they wake up, become active and start reproducing but only for a short period of time. Roughly speaking, the times where individual from a seed-bank becomes active form a renewal process, and joint renewals correspond to times when individuals become jointly active and are able to interact and exchange genetic material. Thus, understanding the tail behavior of the joint renewals is key in understanding the evolution of genetic variability in these models. Our question would then amount to studying the tail probability of having no joint renewals for any two individuals in a given set of n individuals. Renewal processes on ℕ and joint renewals. Let us formulate the question of the previous paragraph directly in terms of renewal processes and make some comments. Consider n independent recurrent renewal processes (τ^(i))_1≤ i≤ n on ℕ_0={0,1,2,…}: τ^(i) = {τ_k^(i)}_k≥ 0 is such that τ_0^(i) =0 and (τ_k^(i)-τ_k-1^(i))_k≥ 1 are i.i.d. -valued random variables. We can interpret τ^(i) as the activation times of an individual in a seed-bank, or as the return times to 0 of a Markov process. We assume that (τ_1^(i) > t) ∼ c_0 t^-α as t→∞, for some α>0 and some constant c_0>0. This is a natural fat tail assumption for population with seed-bank, see <cit.> and it is also verified for the return times to 0 of Bessel-like random walks, see <cit.>; in particular, the parameter α is related to the dimension δ of the Bessel-like random walk[More precisely, 1/Nτ^(i) converges in distribution (as a closed subset of [0,∞)) to a min(α,1)-stable regenerative set, see e.g. <cit.>, which can be interpreted as the zero set of a Bessel process of dimension δ.] by the relation α = 1- 1/2δ, see e.g. (<ref>) (or equivalently δ=2(1-α)<2). Defining ρ^(i,j) := τ^(i)∩τ^(j) the joint renewals of τ^(i) and τ^(j), then one easily have that ρ^(i,j) is also a renewal process, which is recurrent if α >1/2 (which corresponds to δ<1). In the case α∈ (1/2,1) (which corresponds to δ∈ (0,1)), the renewal structure allows one to obtain the tail asymptotic (ρ^(i,j)∩ (0,t] =∅) thanks to a Tauberian theorem, simply by estimating the renewal function U(t) = ∑_s=1^t (s ∈ρ^(i,j)) = ∑_s=1^t (s ∈τ^(i))^2: estimates on (s ∈τ^(i)) are available (see e.g. <cit.>) and after a short calculation one gets that (ρ^(i,j)∩ (0,t] =∅) ∼ c_1 t^- (2α-1) = c_1 t^-(1-δ); we refer to <cit.> for details. The case α≥ 1 is actually more delicate since one cannot apply a Tauberian theorem, but one has (ρ^(i,j)∩ (0,t] =∅) ∼ c_1' t^- α, see <cit.>. We refer to <cit.> for an overview of results on the intersection of two renewal processes. 
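In the discrete setting, the quantities above can also be checked numerically: ℙ(s∈τ^(i)) solves a renewal recursion, ℙ(s∈ρ^(i,j)) = ℙ(s∈τ^(i))^2 by independence, and the probability of no joint renewal in (0,t] follows from a first-renewal decomposition, since ρ^(i,j) is itself a renewal process. The sketch below (Python; the inter-arrival law ℙ(τ_1=k) ∝ k^{-(1+α)}, truncated and renormalized, is an assumption made for illustration) verifies that t^{2α-1}·ℙ(ρ^(i,j)∩(0,t]=∅) is roughly constant, as predicted by the asymptotics above.

```python
import numpy as np

alpha, tmax = 0.7, 10000                        # alpha in (1/2, 1)
k = np.arange(1, tmax + 1)
f = k ** (-(1.0 + alpha)); f /= f.sum()         # P(tau_1 = k), truncated at tmax

# p[s] = P(s is a renewal epoch of one process): p(s) = sum_{k<=s} f(k) p(s-k), p(0) = 1
p = np.zeros(tmax + 1); p[0] = 1.0
for s in range(1, tmax + 1):
    p[s] = np.dot(f[:s], p[s - 1::-1])

u = p ** 2                                      # P(s is a joint renewal of two independent copies)

# g[s] = P(first joint renewal at s):  u(s) = g(s) + sum_{r<s} g(r) u(s-r)
g = np.zeros(tmax + 1)
for s in range(1, tmax + 1):
    g[s] = u[s] - np.dot(g[1:s], u[s - 1:0:-1])

no_joint = 1.0 - np.cumsum(g)                   # P(no joint renewal in (0, t])
for t in (100, 1000, 10000):
    print(t, no_joint[t], no_joint[t] * t ** (2 * alpha - 1))   # last column roughly constant
```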
However, if there are n renewal processes and if we define the set of joint renewals as ρ := { s∈ℕ_0 , ∃ 1≤ i<j≤ n , s ∈τ^(i)∩τ^(j)}, then ρ is not a renewal process anymore if n≥ 3. Then, it is not clear how to estimate the tail probability (ρ∩ (0,t] = ∅) and the goal of the present article is precisely to give an idea on how this probability should decay, since it is natural to expect that (ρ∩ (0,t] = ∅) ≈(H_n>t), with squared Bessels of dimension δ := 1-2α. A toy model for collisions of particles. Another source of motivation for studying joint returns to 0 is that one can interpret the instant T_i,j as the first collision time between two particles i,j — for instance one could interpret Y_i,j:=X^(i)+X^(j) as the distance between particles i,j. This is of course a toy model of particle systems since particles have not much interaction, but the question is already interesting (and difficult) because of the intricate relation between the processes Y_i,j. In the following, we sometimes call an instant t such that X_t^(i)=X_t^(j)=0 a collision between particles i and j. In this framework, our question consists in studying the large deviation probability of having no collision (of any pair of particles) for a long time. We have in mind several models where such a question is natural, such as mutually interacting Brownian of Bessel processes[Also related to Dyson's Brownian motion and Dunkl processes, see e.g. <cit.> for an overview.], see e.g. <cit.>, or Keller–Segel particles systems, see e.g. <cit.> — note that both models feature (squared) Bessel processes. §.§ Main results: joint returns to zero of n≥ 3 Bessel processes We now turn to the case of n≥ 3 Bessel processes and state our main result. Recall the definition (<ref>) of H_n, the hitting time of 𝒜=⋃_i≠ j{x_i=x_j=0}. First of all, we show the existence of the persistence exponent θ_n. There is some θ_n ≥ 0, that depends on δ but not on the starting point x ∈ (_+)^n∖𝒜, such that lim_t→∞1/log tlog_x( H_n> t) =-θ_n . In other words, _x( H_n> t) = t^-θ_n +o(1) as t→∞. Before we state our main result, let us give “trivial” bounds on the probability _x(H>t), and so on θ_n. For an upper bound, we can use the independence of (T_2i-1,2i+1)_1≤ i ≤⌊ n/2⌋, together with (<ref>), to obtain that _x(H_n>t) ≤ c t^- ⌊ n/2⌋ (1-δ) as t→∞. Hence, this gives the bound θ_n ≥⌊ n/2⌋ (1-δ). Let us stress that if n=3 this gives that θ_3 ≥ 1-δ, which is simply the exponent obtained when n=2; in particular, it is a priori not clear whether one has θ_3 > 1-δ. For a lower bound, imposing T_1,2>t and T_i>t for i≥ 3, using the independence and (<ref>)-(<ref>), we obtain that _x(H_n>t) ≥ c t^- (1-δ) - (n-2) (1-1/2δ) as t→∞. This gives the upper bound θ_n ≤ n (1-1/2δ) -1. In particular, when n=3 we get θ_3 ≤ 2-3/2δ. Our main result provides a non-trivial lower bound on θ_n, valid for all n≥ 3. In the case n=3, we also find an upper bound on θ_3. For all n≥ 3, we have that θ_n≥ (n-1)(1-δ) . When n=3 we have the following upper bound θ_3 ≤ 2(1-δ) + f(δ), with f(δ) = 1/4(√((6-5δ)^2 +8δ(1-δ)) -(6-5δ) ). §.§ First comments and some guesses We now make a few comments on our result and we develop some interesting open questions one could pursue. About θ_3. In view of the fact that the function f(δ) is small (see Figure <ref>) and the fact that our upper bound could possibly be improved (see Remark <ref>) one may have the following guess. For n=3 and δ∈ (0,1), we have θ_3 = 2 (1-δ). 
We would not venture to call it a conjecture since we have no simple heuristic as to why this should be the correct answer; in fact we expect that this guess should not be correct when n is large, see Guess <ref> below. About subsets of joint returns to zero. Naturally, there are many other questions one could ask about joint returns to zero of Bessel processes. For instance we could consider a subset K ⊂_2(n) := {{i,j}, 1≤ i <j≤ n } of all possible pairs of indices, and consider T_K := min{ T_i,j , {i,j}∈ K}, i.e. the first joint return for any X^(i) and X^(j) with {i,j}∈ K. We focus in this article on the case K=_2(n), and in fact we have no clear guess for a general subset K, even in simple cases such as n=3, K= {{1,2} ,{1,3}}. However, the following guess seems reasonable, but we are not able to prove it. For any δ∈(0,1), we have (T_1,2>t, T_1,3>t) = t^-θ̃_3 +o(1) as t→∞, with θ_2 = 1-δ < θ̃_3 < 2(1-δ) ≤θ_3. This guess somehow tells that it is strictly harder to avoid collisions when one considers more pairs of particles, but we are not able to prove any of the bounds 1-δ < θ̂_3 < 2(1-δ). In fact, our Theorem <ref> shows that it is strictly harder to avoid any collision when you have three particles, which is already an achievement. About θ_n when n is large. Another aspect of the problem one may consider is when the number n of particles is very large. We then have the following guess (for which we give some convincing argument below), which tells in particular that the lower bound θ_n ≥ (n-1)(1-δ) is not sharp, at least when n is large[Numerical simulations appear to confirm that θ_n > (n-1) (1-δ) when n≥ 5, at least in some range of δ.]. For any fixed δ∈ (0,1), we have that θ_n ∼ n (1-1/2δ) as n→∞. Let us briefly explain why we conjecture this specific asymptotic behavior for θ_n. First of all, we showed a trivial upper bound θ_n ≤ n (1-1/2δ) -1 in Section <ref>, which matches this asymptotics. For the lower bound, the heuristic goes as follows. First, let us set Ȟ_n^(1) := inf{t, ∃ 2≤ i ≤ n , X_t^(1)=X_t^(i)=0} the first instant of collision of particle number 1 with any other particle. Then, we strongly believe (but are not able to prove) that when n is large, one has _x(Ȟ_n^(1) > t) = t^- θ̌_n +o(1) with θ̌_n ∼ 1-1/2δ . Indeed, the easiest way for the particle number 1 to avoid a collision with the other n-1 particles is to avoid touching 0 whatsoever (i.e. having T_1>t), hence the exponent should be close to 1-1/2δ, which comes from (<ref>); indeed, requiring all n-1 other particles doing something unusual should be much more costly. With this in mind, we should have that _x(H_n>t) = _x(Ȟ_n^(1) > t) _x( H_n>t |Ȟ_n^(1) > t) ≈ t^- θ̌_n +o(1)_x( H_n-1>t ) , where H_n-1 denotes the first collision time among n-1 particles. The reasoning here is that the conditioning by the event {Ȟ_n^(1) > t} mostly affects the first particle but almost not the others: in practice, we should have _x( H_n>t |Ȟ_n^(1) > t) ≈_x( H_n>t | T_1 > t) = _x( H_n-1>t). Iterating this argument (as long as the number of particles remains large) supports the guess that θ_n = n (1-1/2δ) +o(n) as n→∞. §.§ Organisation of the rest of the paper Let us briefly outline the rest of the paper. 
* In Section <ref>, we comment on some related questions: we present remarkable properties of the case of n=2 Bessel processes (these properties fail for n≥ 3); we give results in the case of a negative dimension δ<0, which are trivial; we comment on the relation of our question with various PDE problems, which provide a different perspective (that we were not able to exploit). * In Section <ref>, we present some preliminary results: a comparison theorem that allows us to compare different diffusion processes; a proof of the existence of the persistence exponent θ_n via an elementary (and general) method (it relies on the sub-additive lemma, with some small additional technical difficulty). We also present the general strategy of the proof in Section <ref>: in a nutshell, the idea is to find an auxiliary process (Z_t)_t≥ 0 for which H_n is the hitting time of 0, and to compare (Z_t)_t≥ 0 with a time-changed Bessel process (for which we know how to control the hitting time of 0). * In Section <ref>, we implement the strategy outlined in Section <ref>. We introduce two auxiliary processes (a different ones for the lower and the upper bound on θ_n) and compare them with time-changed Bessel processes. The time-changes are controlled in a separate Section <ref> (our goal is to give a self-contained and robust proof, and in particular we do not rely on subtle properties of Bessel processes). * In Appendix <ref> and <ref>, we collect some tedious calculations that we had postponed not to break the flow of the proof. § VARIOUS COMMENTS §.§ About two Bessel processes conditioned on having no joint return to zero Let us now develop a bit on the case of n=2 squared Bessel processes, which contains some interesting features and helps understand why the case n≥ 3 is more complicated. A natural approach to attacking the case n=3 and a natural question in itself is to consider two Bessel processes conditioned on having no collision before time t. Indeed one can write _x(H_3>t) = _x(T_1,2 >t) _x(H>t | T_1,2>t) ∼ c_x t^- (1-δ)_x( T_3,1, T_3,2>t | T_1,2>t) , and understanding the behavior of X_t^(1), X_t^(2) conditioned on T_1,2>t seems to be a good start to study _x(H_3>t). Interestingly, the behavior of X_t^(1), X_t^(2) conditioned on having no collision, i.e. T_1,2=+∞, is remarkably clear. Indeed, let S_t:= X_t^(1) +X_t^(2) and U_t:= X_t^(1)/S_t ∈ [0,1], 1-U_t= X_t^(2)/S_t, so that X_t^(1) = S_t U_t and X_t^(2) = S_t (1-U_t). Then, a simple application of Itô's formula gives, after straightforward calculations, that (S_t)_t≥ 0 and (U_t)_t≥ 0 satisfy the following SDEs: S_t = 2√(S_t) W_t + 2δ t U_t = 2/√(S_t)√(U_t(1-U_t)) W̃_t + δ/S_t (1-2U_t) t with (W_t)_t≥ 0, (W̃_t)_t≥ 0 two independent Brownian motions. In particular, (S_t)_t≥ 0 is a squared Bessel process of dimension 2δ and U_t can be written as time-changed (by ∫_0^t S_u^-1 u) diffusion, independent of (S_t)_t≥0. Hence, conditioning on T_1,2 = +∞ (i.e. on S_t>0 for all t>0) simply has the effect of changing (S_t)_t≥ 0 to a squared Bessel process of dimension 4-2δ, see <cit.>. We therefore end up with the following result. Conditionally on T_1,2 = +∞, the process (X_t^(1),X_t^(2))_t≥ 0 have the distribution of ( S̃_t Ũ_τ̃_t, S̃_t (1-U_τ̃_t))_t≥ 0, where S̃, Ũ are independent diffusion characterized by the following: * (S̃_t)_t≥ 0 is a squared Bessel process of dimension 4-2δ, i.e. 
follows the evolution equation S̃_t = 2√(S̃_t) W_t + (4-2δ) t ; * (Ũ_t)_t≥ 0 follows the evolution equation Ũ_t = 2√(Ũ_t(1-Ũ_t)) W̃_t + δ (1-2Ũ_t) t ; and τ̃_t is the inverse of t↦∫_0^t S̃_u^-1 u. We could also define the angle Θ_t, such that Ũ_t := cos^2(Θ_t), 1- Ũ_t = sin^2(Θ_t). Applying Itô's formula, after some calculation one ends up with the following SDE for (Θ_t)_t≥ 0: Θ_t = W̃_t + δ-1/2 tan(2 Θ_t) t . Note that it looks like the evolution equation of a Bessel process of dimension δ when Θ approaches 0 (and similarly for π/2 -Θ, by symmetry), with a null drift when Θ = π/4. Let us make some further comments and give one result. Comment 1. The conditioning by {T_1,2=+∞} significantly changes the behavior of the tail of the first hitting of zero, min{T_1,T_2}. In fact, somewhat surpisingly, the persistence exponent of _x( min{T_1,T_2}>t | T_1,2 =+∞) is equal to 1, and in particular it does not depend on δ∈(0,1). Indeed, as t→∞, we have that, _x( min{T_1,T_2}>t | T_1,2 >t ) = _x( min{T_1,T_2}>t )/_x( T_1,2 >t )∼c_x_1 c_x_2 t^-2(1-δ/2)/ c_x_1+x_2 t^-(1-δ) = c_x_1 c_x_2/ c_x_1+x_2 t^-1 , where we have used (<ref>)-(<ref>); we leave aside the technicality of replacing the conditioning by T_1,2=+∞. This shows in particular that the conditioning makes it strictly easier for the Bessel processes to avoid hitting zero at all, changing the persitence exponent of min{T_1,T_2} from 2-δ to 1. Comment 2. The zero set ^(i):={t, X_t^(i)=0} of a squared Bessel is a regenerative set, in fact an α-stable regenerative set with α:=1-1/2δ; see <cit.> for an introduction to regenerative sets. Then, the set of collision times {t, X_t^(i) =X_t^(j) =0} is ^(i)∩^(j), which is itself a regenerative set. This regenerative structure is not specific to Bessel processes and holds for any Markov process, and can be useful in estimating the probability (T_i,j>t) = (^(i)∩^(j)∩[0,t] =∅), similarly to the discrete setting (see Section <ref> above). On the other hand, the regenerative structure completely disappears when considering n≥ 3 processes, since the set of collision times is then ⋃_i≠ j^(i)∩^(j) which is not a regenerative set anymore[Note however that the regenerative structure is present if one considers “n-collisions”, i.e. simultaneous return to 0 of the n processes all together.]. Comment 3. Proposition <ref> allows us to “understand” the law of ^(1)∪^(2) conditioned on ^(1)∩^(2)=∅: it is the zero set of the process R_t := Ỹ_t Ũ_τ̃_t (1-Ũ_τ̃_t), for which one has the evolution equation R_t = 2√(R_t (1-3 U_t))Ŵ_t + δ (1- 6 U_t) t , where U_t := Ũ_τ̃_t(1-Ũ_τ̃_t) ∈ [0,1/4]. Note that the process R_t can be interpreted as a time-changed (by ∫_0^t (1-3 U_s) s) squared Bessel process, with varying dimension δ1- 6 Û_t/1-3 U_t — the difficulty here is that the variation of the dimension is intricate. Then, one could hope to understand ( T_3,1,T_3,2 >t | T_1,2 =+∞), since min{T_3,1,T_3,2} is the hitting time of zero of the process R_t+X_t^(3). In fact, with techniques similar to the ones developed in this paper, we should be able to show that _x( T_3,1,T_3,2 >t | T_1,2 =+∞) ≤ t^-(1-δ)+o(1) (which in view of (<ref>) would correspond to the bound θ_3≥ 2(1-δ)), but we are not able to obtain matching upper and lower bounds with this approach. §.§ The case of a negative dimension Let us comment briefly on the case where the dimension of the squared Bessel processes is negative, i.e. δ≤ 0. In that case, the processes X^(i) are absorbed at 0, meaning that X_t^(i) =0 for all t≥ T_i. 
Therefore, we get that T_i,j := max{T_i,T_j} so that _x(T_i,j >t) = _x(T_i >t or T_j>t) ∼ (c_x_i+c_x_j) t^- (1-1/2δ) as t→∞ , using also (<ref>). Similarly, we have that H_n = min_1≤ i <j≤ n{ T_i,j} is the second smallest T_i, so we have that _x(H_n>t) = _x( ⋃_j=1^n {T_i >t for all i≠ j }) . Using the inclusion-exclusion principle and again (<ref>), we easily end up with the following result. Let n≥ 2 and δ≤ 0. Then we have as t→∞ _x(H_n>t) ∼∑_j=1^n _x( T_i >t for all i≠ j ) ∼ c_n t^ - θ_n , with θ_n = (n-1) (1-δ/2) and with the constant c_n:= ∑_j=1^n ∏_i≠ j c_x_i. Let us observe that in Proposition <ref>, the persistence exponent verifies θ_n=(n-1) θ_2 (similalry as in Guess <ref> with n=3) and also θ_n ∼ n (1-1/2δ) as n→∞ (similarly as in Guess <ref>). On the other hand, we also have that _x(T_1,2>t, T_1,3 >t) ∼ c_x_1 t^-(1-1/2δ), so that Guess <ref> does not hold in the case δ≤ 0: we have here that _x(T_1,2>t) ∼ (1+c_x_2/c_x_1) _x(T_1,2>t, T_1,3 >t) and therefore θ_2=θ̃_3 <θ_3. §.§ Relation to PDEs We mention in this section the relation of our question with some PDE problems, which provide other approaches for studying the persistence exponent. We will not pursue these approaches further since we were not able to obtain any useful information from it. *Laplace transform of the hitting time. In this paragraph we recall the classical fact that the Laplace transform of the hitting time can be obtained by solving a PDE problem with boundary conditions. In our context, the PDE is not so complicated, but the difficulties lie in the boundary conditions. Let us denote by φ_λ (x):=_x [e^-λ H_n] the Laplace transform of H_n:= min{ T_i,j , 1≤ i < j ≤ n }, with starting point X_0=x ∈ (ℝ_+)^n. Since the stopping time H_n is the hitting time of the set 𝒜 := ⋃_i≠ j{x_i=x_j=0} by the (_+)^n-valued diffusion process (X_t)_t≥0, we classically have that φ_λ solves ℒ^n_δφ (x) =λφ (x) , x∈ (_+^*)^n , φ(x)=1 , x ∈𝒜 , lim_x→∞φ(x)=0 . where we recall that ℒ^n_δ is the generator of n independent Bessels processes, see (<ref>). When θ_n <2, proving that _x(H_n>t)∼ c_x t^-θ_n as t →∞ is equivalent to proving that 1-φ_λ(x) ∼λ^θ h(x) as λ↓ 0 , where h is expected to be ℒ-harmonic. Note that, by scale invariance of Bessel processes, we have φ_λ(x) =φ_1(λ x), and the goal would thus be to find the behavior of φ_1 near 0, where φ_1 is the “good” eigenfunction solving (<ref>) with λ=1. *Link with Quasi-Stationary Distributions. There is a link, which is at first hand not so direct, between our problem and questions related to the theory of Quasi-Stationary Distributions (QSD). We recall in a nutshell this theory but we refer to <cit.> and <cit.> for detailed references. Let (Z_t)_t≥0 be a Markov process on a state space 𝒳. We assume that 𝒳 can be decomposed in two parts: 𝒳_a, the set of allowed states and 𝒳_f := 𝒳∖𝒳_a, the set of forbidden states, and we let T:=inf{t>0, Z_t ∈𝒳_f } be the hitting time of 𝒳_f. A distribution ν on 𝒳_a is said to be a Quasi-Stationary Distribution (QSD) if it is invariant under time evolution when the process is conditioned to survive in 𝒳_a, that is such that for all t>0 and A⊂𝒳_a, _ν( Z_t ∈ A | T>t) = ν(A). This condition implies that T is exponentially distributed under _ν, i.e. there is some θ_ν>0 such that _ν (T>t)=e^-θ_ν t, and formally the couple (ν, θ_ν) solves the spectral problem ℒ^* ν= -θ_νν, where ℒ^* is the adjoint of the generator ℒ of Z_t killed when it reaches 𝒳_f. 
The basic questions in this theory are the existence of QSD and of the so-called Yaglom limits, that is, for some initial distribution μ, the convergence of the conditional laws _μ ( Z_t ∈·| T>t) towards some QSD measure when t goes to infinity (note that Section <ref> could be framed in this spirit). In general, it is expected that, for all x∈𝒳_a, _x ( Z_t ∈·| T>t) converges to ν_⋆, where ν_⋆ is the minimal QSD measure, i.e. the one associated with the eigenvalue θ_⋆ at the bottom of the spectrum of -ℒ^*. Such a result would give that, for all x∈𝒳_a, _x( T>t)= e^-θ_⋆ t (1+o(1)) as t→∞ . At first, our problem seems quite different, the hitting time H_n of 𝒜=⋃_i≠ j{x_i=x_j=0} having a heavy-tailed distribution. But, as we will see in Section <ref> below, we can perform an exponential time change by considering X̂_t:= e^-t X_e^t-1, which remains a Markov process (it is a n-dimensional Cox-Ingersoll-Ross process). Then, if Ĥ_n denotes the hitting time of 𝒜 by X̂_t, we get that having (H_n >t)=t^-θ_n(1+o(1)) as t→∞ is equivalent to (Ĥ_n>t)=e^-θ_n t (1+o(1)). Thus, following the theory of QSD, our persistence exponent is expected to be the bottom of the spectrum of -ℒ̂^*, the adjoint of the generator of X̂_t killed when it reaches 𝒜. Unfortunately, up to our knowledge, there is no general result in the QSD theory which can be applied directly to our problem and provide the existence of θ_n (and the minimal QSD associated). In our situation, the difficulties come from the fact that we consider a n-dimensional diffusion (with n≥ 2), taking values in an unbounded set, and also that the forbidden set 𝒜 is a proper subset of the boundary of the state space (_+)^n. Note that a QSD theory would provide the existence of a persistence exponent θ_n and of a Yaglom limit, but not the value (or estimates) on the exponent θ_n. Instead, we prove the existence of θ_n via some “elementary” sub-additive techniques and we estimate θ_n also via some “elementary” techniques. *Link with a spectral problem on a bounded domain. In this paragraph we discuss another approach to obtain θ_n, which exploits the symmetries of the problem and which reduces to a spectral problem for a certain operator on a bounded domain. The advantage of this approach is that we reduce the number of variables by one, and also that we obtain a diffusion on a bounded domain; the caveat is that the diffusion is harder to study. We only give an overview of the reduction one could perform and we provide some details in Appendix <ref> For simplicity, we consider the case n=3, and recall that we denote X_t:=(X_t^(1),X_t^(2),X_t^(3)). Anticipating a bit with notation, we further define the three elementary symmetric polynomials in the coordinates of X_t, S_t:= X_t^(1) + X_t^(2) +X_t^(3) , A_t:=X_t^(1) X_t^(2) + X_t^(2) X_t^(3) + X_t^(3) X_t^(1) , P_t:= X_t^(1) X_t^(2) X_t^(3) , which have respective homogeneity 1, 2 and 3. Note that, for all t≥ 0, X_t is entirely determined, up to some permutation, by (S_t,A_t,P_t). Also, since the Bessels processes are independent we can check that the process (S_t,A_t,P_t)_t≥ 0 is itself a diffusion process, whose generator can be computed explicitly, see (<ref>) for a formula. Expressed with those symmetrical coordinates, the hitting time H_3 can be expressed as H_3:= inf{t≥ 0, A_t=0} (notice that X_t ∈𝒜 if and only if A_t=0). 
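For completeness, the elementary check behind this parenthetical remark is the following (our addition): since all coordinates are non-negative, A_t = X_t^(1)X_t^(2)+X_t^(2)X_t^(3)+X_t^(3)X_t^(1) = 0 if and only if all three pairwise products vanish; if one coordinate is positive this forces the two others to be zero, and conversely if two coordinates vanish then every pairwise product contains one of them. Hence A_t=0 is indeed equivalent to X_t ∈𝒜.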
Moreover, as a consequence of the symmetries of the problem, we can factorize the dynamics of (S_t,A_t,P_t) by (S_t)_t≥ 0, which plays the role of a “radial” process, and some “angular” (i.e. without scaling) process (A̅_t, P̅_t):=(A_t/S_t^2, P_t/S_t^3). It turns out that one can write the angular (2-dimensional) process as a time-changed diffusion (U_t,V_t), independent of (S_t)_t≥0 and whose generator ℒ̅ can also be computed (again, see Appendix <ref> for details) — this in analogy with what is done in Section <ref>, see (<ref>), in the case of n=2 Bessels. Also, one can show that the angular process (A̅_t, P̅_t)_t≥ 0 evolves in a bounded domain 𝒯̅⊂ (_+)^2 (with boundary) which can be determined explicitly, see Figure <ref> in Appendix <ref> for an illustration. Now we can relate the persistence exponent to a spectral problem for the generator ℒ̅ on the bounded domain 𝒯̅: finding (μ, φ) such that ℒ̅φ= μφ with φ a non-negative function on 𝒯̅ that vanishes only when u=0, then one should be able to relate the eigenvalue μ to the persistence exponent θ_3 by the relation θ_3(θ_3-1+3δ/2)+μ=0. We refer to Appendix <ref> for details, but we were not able to exploit further this approach, the spectral problem seeming out of our reach. § SOME PRELIMINARIES §.§ A comparison theorem We state in this section a comparison theorem for Bessel processes with varying dimensions. The proof is standard and can be found in <cit.>. We consider here a probability space (Ω, ℱ, ℙ) supporting a Brownian motion (W_t)_t≥0 and we denote by 𝔽 = (ℱ_t)_t≥0 the filtration generated by this Brownian motion, after the usual completions. Let (D_t^1)_t≥0 and (D_t^2)_t≥0 be two 𝔽-adapted non-negative processes. Let also (Z_t^1)_t≥0 and (Z_t^2)_t≥0 be two processes such that, if it exists, a.s. for any t≥0, Z_t^1 = z_1 + 2∫_0^t √(Z_s^1) W_s + ∫_0^t D_s^1 s and Z_t^2 = z_2 + 2∫_0^t √(Z_s^2) W_s + ∫_0^t D_s^2 s for some z_1, z_2 ≥ 0. We have the following comparison theorem. If z_1 ≤ z_2 and almost surely for any t ≥0, D_t^1 ≤ D_t^2, then almost surely for any t ≥0, Z_t^1 ≤ Z_t^2. §.§ Existence of the persistence exponent Let us prove Proposition <ref> in this section. First, we perform some exponential time change of (X_t)_t≥0 and consider the process (X̂_t:= e^-t X_e^t-1)_t≥ 0 which is still a Markov process (known as a Cox-Ingersoll-Ross process, see for instance <cit.>) generated by ℒ̂_n^δ := 2 ∑_i=1^n x_i ∂^2/∂ x_i^2 + (δ-x_i) ∂/∂ x_i . Then, if we denote Ĥ_n:=inf{t≥ 0, ∃ i≠ j, X̂_t^(i)= X̂_t^(i)=0 }, we naturally have _x( H_n> t)=_x(Ĥ_n> log(1+t)). Therefore, to prove Proposition <ref> we simply need to show that, for any x ∈ (_+)^n, lim_t→∞1/tlog_x(Ĥ_n> t) = -θ_n . Notice also that since t↦_x(Ĥ_n> t) is non-increasing, one can consider the limit in (<ref>) only along integers. Before we prove (<ref>), let us stress that the limit (if it exists) does not depend on x. For x,y∈ (_+)^n, let us write x≤ y if x_i≤ y_i for all i=1,…, n. Then, by the comparison property of Proposition <ref> (applying it componentwise), we obtain that, for any starting point y≥ x, _y( Ĥ_n> t) ≥_x( Ĥ_n> t). Therefore, for x,x'∈ (_+^*)^n, the Markov property gives that _x(Ĥ_n >1+t) ≥_x(Ĥ_n >1+t , X̂_1 ≥ x') = _x[_{Ĥ_n >1 , X̂_1 ≥ x' }_X̂_1( Ĥ_n >t) ] ≥_x( Ĥ_n >1 , X̂_1 ≥ x') _x'( Ĥ_n >t) =: C_x,x'_x'( Ĥ_n >t) , where we have used the comparison inequality for the second line. This shows that, for any x,x'∈ (_+^*)^n and t>1, C_x',x_x'(Ĥ_n > t-1)≤_x(Ĥ_n >t) ≤ C_x',x^-1_x'(Ĥ_n > t+1 ) so that the limit in (<ref>), if it exists, does not depend on x. 
Now, let x ∈ (_+)^n. To prove (<ref>), let us introduce, for t>0, q_x(t) :=_x ( Ĥ_n> t, X̂_t ≥ x ) . We now show that (log q_x(t))_t≥ 0 is super-additive. Indeed, by the Markov property, we have that q_x(t+s) = _x [_{Ĥ_n> t}_X̂_t( Ĥ_n> s, X̂_s ≥ x) ] ≥_x [_{Ĥ_n> t, X̂_t≥ x}_X̂_t( Ĥ_n> s, X̂_s ≥ x) ] . Now, by comparison (applying Proposition <ref> componentwise), we obtain that, for any starting point y≥ x, _y( Ĥ_n> s, X̂_s ≥ x) ≥ q_x(s). We therefore end up with q_x(t+s) ≥ q_x(t)q_x(s) for any s,t≥ 0, which shows the super-additivity and thus that the limit θ_n := -lim_t→∞1/tlog q_x(t) exists (the limit is taken along integers). We can now compare q_x(t) with the original probability _x(Ĥ_n >t). First of all, we clearly have that q_x(t) ≤_x(Ĥ_n > t ), so that lim inf_t→∞1/tlog_x(Ĥ_n > t) ≥ -θ_n. The other bound is a bit more subtle. Recall that 𝒜 := ⋃_i≠ j{x_i=x_j=0} and let us define 𝒜_δ = {y = (y_1,…, y_n) ∈ (_+)^n , ∃ i<j such that max(y_i,y_j)< δ} for some δ>0. Then, we fix >0 and δ>0, and we consider the upper bound _x(Ĥ_n > t ) ≤_x(X̂_s∈𝒜_δ for all s ∈ [(1-)t,t-1] ) + _x (Ĥ_n > τ_δ, τ_δ≤ t-1 ) , where we have set τ_δ := inf{ s > (1-)t , X̂_s ∉𝒜_δ}. We now estimate both probabilities. For the first one, applying Markov's inequality iteratively every unit of time, we get that _x(X̂_s∈𝒜_δ for all s ∈ [(1-)t,t-1] ) ≤( sup_y∈𝒜_δ_y( X̂_s∈𝒜_δ for all s ∈ [0,1] ) )^⌊ t -1⌋ = exp(- C_δ⌊ t-1 ⌋). where the constant C_δ = - log (sup_y∈𝒜_δ_y( X̂_s∈𝒜_δ for all s ∈ [0,1])) goes to +∞ as δ↓ 0. Indeed, observe that, for all y∈𝒜_δ, _y( X̂_s∈𝒜_δ for all s ∈ [0,1]) ≤_0( sup_s∈[0,1]min_i≠ jmax(X̂_s^(i),X̂_s^(j) ) < δ^2) , by comparison. Now, the upper bound converges to _0( sup_s∈[0,1]min_i≠ jmax(X̂_s^(i),X̂_s^(j) ) =0 ) as δ↓ 0, which is equal to 0. For the other probability, let us set p_x, δ(s):= inf_ y∈∂𝒜_δ_y(Ĥ_n >s, X̂_s ≥ x) . Then, we have that _x (Ĥ_n > τ_δ, τ_δ≤ t-1 ) ≤_x [ _{Ĥ_n > τ_δ,τ_δ≤ t-1 }p_x,δ(t-τ_δ)/inf_s ∈ [1, t] p_x,δ(s)] , and, by the strong Markov property, _x [ _{Ĥ_n > τ_δ,τ_δ≤ t-1 } p_x,δ(t-τ_δ) ] ≤_x [ _{Ĥ_n > τ_δ,τ_δ≤ t-1 }_X̂_τ_δ( Ĥ_n > t-τ_δ , X̂_t-τ_δ≥ x) ] ≤_x (Ĥ_n > t, τ_δ≤ t-1, X̂_t ≥ x ) ≤ q_x(t) . Now, notice that by the Markov property and by comparison (see Proposition <ref>), for s>1 we have that _y(Ĥ_n >s, X̂_s ≥ x) ≥_y(Ĥ_n >1, X̂_1 ≥ x) _x(Ĥ_n >s-1, X̂_s-1≥ x), so that p_δ, x(s) ≥ C_δ, x q_x(s-1) , with C_δ, x:= inf_{ y∈∂𝒜_δ}_y(Ĥ_n >1, X̂_1 ≥ x) >0. Indeed, for any y ∈∂𝒜_δ, there is at most one i with y_i<δ: since {Ĥ_1 >1}⊃⋂_j≠ i{∀ s ∈ [0,1] X_s^(j)>0}, we get by independence, then by comparison, that _y(Ĥ_n >1, X̂_1 ≥ x ) ≥_y_i( X̂_1^(i)≥ x_i ) ∏_j≠ i_y_j( ∀ s ∈ [0,1] X̂_s^(j)>0, X̂_1^(j)≥ x_j ) ≥_0( X̂_1^(1)≥x_∞) _δ( ∀ s ∈ [0,1] X̂_s^(1)>0, X̂_1^(1)≥x_∞)^n-1 , which is a positive lower bound on C_δ,x. All together, we obtain that _x (Ĥ_n > τ_δ, τ_δ≤ t-1 ) ≤ C_δ,x^-1 1/inf_s∈ [0, t] q_x(s) q_x(t) . Going back to (<ref>) and using that lim_t→∞1/tlog q_x(t) = -θ_n, we conclude that for t sufficiently large (how large may depend on ,δ,x), we have inf_s∈ [0, t] q_x(s) ≥ e^-(θ_n +) t and q_x(t) ≤ e^-(θ_n-)t so, _x(Ĥ_n > t ) ≤ e^ -1/2 C_δ t + e^- ( θ_n - -θ_n-^2)t . Now, for any fixed >0, we can choose δ small enough so that 1/2 C_δ≥θ_n - -θ_n-^2, which gives that lim sup_t→∞1/tlog_x(Ĥ_n > t) ≤ - ( θ_n - -θ_n-^2). This concludes the proof, since is arbitrary. §.§ General strategy of the proof We consider a probability space (Ω, ℱ, ℙ) supporting (n+1) independent Brownian motions W^(0),W^(1), …, W^(n). 
We denote (ℱ_t)_t≥0 the filtration generated by these Brownian motions, after the usual completions. On this filtered probabilty space, we consider n independent squared Bessel processes of dimension δ∈(0,1), solution of X_t^(i) = 2 √(X_t^(i)) W_t^(i) + δ t. Our general strategy is to find some auxiliary one-dimensional stochastic process (Z_t)_t≥ 0, which hits 0 exactly at time H_n, that we are able to compare with a (time-changed) squared Bessel process, for which the first hitting time of 0 is well-understood. More precisely, let : (_+)^n →_+ be a smooth function, and define for all t≥ 0, Z_t:= ( X_t^(1),…, X_t^(n)) =: (X_t) . Then, using the evolution equation of X^(i) and applying Itô's formula, we obtain the evolution equation of Z_t: Z_t = 2 ∑_i=1^n √(X_t^(i)) ∂/∂ x_i (X_t) W_t^(i) + ( δ∑_i=1^n ∂/∂ x_i(X_t) + 2 ∑_i=1^n X_t^(i) ∂^2 /∂ x_i^2(X_t) ) t . Let us set Y_t := ∑_i=1^n X_t^(i)(∂/∂ x_i(X_t))^2. Recalling that W^(0) is a Brownian motion independent from the rest, we define W_t :=1_{Y_t = 0} W_t^(0) + ∑_i=1^n 1_{Y_t > 0}√(X_t^(i))∂/∂ x_i (X_t)/√(Y_t) W_t^(i) , which is an (ℱ_t)_t≥0 Brownian motion since it is a local martingale with quadratic variation ⟨ W ⟩_t =t. Note that whenever Y_t = 0 for some t≥0, we have X_t^(i) (∂/∂ x_i(X_t))^2 = 0 for every i ∈{1, …, n}. Therefore, if we set D^(1)_t =∑_i=1^n ∂/∂ x_i(X_t) and D^(2)_t =∑_i=1^n X_t^(i) ∂^2 /∂ x_i^2(X_t) , we get that Z_t = 2 √(Y_t) W_t + (δ D^(1)_t + 2D^(2)_t) t. Let us now define the processes V and D by V_t := Y_t/Z_t , and D_t := δ D^(1)_t + 2 D^(2)_t/V_t . We will now make the following assumption on the function which will be verified in practice. (𝐇) hyp:H(H) The function is such that a.s. Leb({t ≥ 0, Y_t = 0}) = 0 and V_t < ∞ for any t ≥0. Under this assumption, it turns out that we can rewrite (<ref>) as Z_t = 2 √(Z_t V_t) W_t + D_t V_t s . The advantage of the formulation (<ref>) is that it formally looks like the evolution equation of a time-changed square Bessel process, with varying dimension D_t. Our objective is now be to find functions (one for the upper bound, one for the lower bound) such that: * the function verifies (x_1,…, x_n) =0 if and only if[In fact, one actually need only one of the implication depending on whether one is interested in the upper or the lower bound, but we stick to the if and only if formulation for simplicity.] x_i=x_j=0 for some i≠ j, so that H_n=inf{t >0, Z_t =0}; * the “velocity” V_t and the “dimension” D_t can be controlled, namely one can obtain explicit bounds on them. Let us set ρ_t := ∫_0^t V_u u , which corresponds to the time-change in (<ref>). Thanks to Assumption <ref> and the definition (<ref>) of V, we see that a.s. Leb({t ≥ 0, V_t = 0}) = 0. This implies that ρ is an increasing and continuous time-change. Let us denote by τ its inverse and let us set K_t = Z_τ_t as well as ℬ_t = ∫_0^τ_t√(V_s) W_s which is an (ℱ_τ_t)_t≥0 Brownian motion. We classically have that K_t = 2 √(K_s)ℬ_s + D_τ_t t. For the upper bound on _x(H_n>t). Let us assume here that there is some δ_+∈(0,2) such that D_t ≤δ_+ uniformly in t. Then, if we define Q^(δ_+)_t = 2 √(Q^(δ_+)_t)ℬ_t + δ_+ t , which is a squared Bessel process of dimension δ_+, we get by comparison, see Proposition <ref>, that a.s. K_t ≤ Q^(δ_+)_t for any t≥0. It follows that Z_t ≤ Q_ρ_t^(δ_+) for any t≥0. 
Denoting T_0(Z) := inf{t>0, Z_t=0} for a stochastic process (Z_t)_t≥ 0, we therefore get that _x(H_n>t) = _x(T_0(Z)>t ) ≤_x( T_0(Q^(δ_+)) >ρ_t) , and it remains to control ρ_t, and in particular show that it cannot be too small; let us stress that one difficulty is that (ρ_t)_t≥ 0 is in general not independent from (Q^(δ_+)_t)_t≥ 0. We will then need a lemma as follows. There is some κ>0 such that, for any p≥ 1 there exists a constant C_p=C_x,p such that, for any t≥ 1 and any <1 _x( ρ_t ≤ t^κ) ≤ C_p ^p . With the help of this lemma, we then get that, for any p≥ 1 (large), _x(H_n>t)≤_x( T_0(Q^(δ_+)) >ρ_t) ≤_x( ρ_t ≤ t^-1/√(p) t^κ) + _x( T_0(Q^(δ_+)) > t^-1/√(p) t^κ) ≤ C_p t^- √(p) + c t^(2-δ_+)/√(p) t^- κ(1-1/2δ_+) , where we have also used (<ref>) for the last inequality. This strategy therefore shows that, for any η>0, choosing p sufficiently large yields _x(H_n>t)≤ c' t^-θ_- +η with θ_- := κ (1-1/2δ_+); here δ_+ is an upper bound on D_t and κ gives the scale exponent of ρ_t and appears in Lemma <ref>. Since η is arbitrary, this shows that θ_n ≥θ_-. For the lower bound on _x(H_n>t). On the other hand, if we assume that there is some δ_- such that D_t ≥δ_- uniformly in t, then, just as for the upper bound, we define Q^(δ_-)_t = 2 √(Q^(δ_-)_t)ℬ_t + δ_- t and we get by comparison, thanks to Proposition <ref> again, that K_t ≥ Q^(δ_-)_t for any t ≥0, which yields that Z_t ≥ Q_ρ_t^(δ_-) for any t≥0. We then get that _x(H_n>t) = _x(T_0(Z)>t ) ≥_x( T_0(Q^(δ_-)) >ρ_t) . We then now need to show that ρ_t cannot be too large. There is some κ>0 such that, for any M≥ 1 there exists a constant C_p=C_x,p such that, for any t≥ 1 and any A>1 _x( ρ_t ≥ A t^κ) ≤ C_p A^-p . Then, with this lemma, we get that _x(H_n>t)≥_x( T_0(Q^(δ_-)) >ρ_t) ≥_x( T_0(Q^(δ_+)) > t^1/√(p) t^κ) - _x( ρ_t ≥ t^1/√(p) t^κ) ≥ c t^-(2-δ_+)/√(p) t^- κ(1-1/2δ_-) - C_p t^-√(p) , where we have again used (<ref>) for the last inequality. This strategy therefore shows that, for any η>0, taking p usfficiently large yields that _x(H_n>t)≥ c' t^-θ_+ -η with θ_+ := κ (1-1/2δ_-); here, δ_- is a lower bound on D_t and κ is the one from Lemma <ref>. Sicne η>0 is arbitrary, this shows that θ_n ≤θ_+. In some cases, one could in theory improve Lemmas <ref>-<ref> and obtain (stretched) exponential tails for t^-κρ_t, see e.g. (<ref>): this would improve the bounds on _x(H_n>t) replacing the t^η,t^-η by some power of log t. Since we are only interested in the persistence exponent, we do not pursue further this direction. § PROOF OF THE MAIN RESULT This section consists in applying the strategy of Section <ref>, i.e. choosing the correct functions  for the upper bound and for the lower bound on _x(H_n>t). §.§ Upper bound on _x(H_n>t) For the upper bound, let us first deal with the case n=3 for clarity. We turn to the general case n≥ 3 afterwards: the strategy is identical but with more tedious calculations. §.§.§ The case n=3 Let us consider the functional A_t:= X_t^(1) X_t^(2) + X_t^(2) X_t^(3) + X_t^(3) X_t^(1) =: (X_t) , and observe that (x_1,x_2,x_3) = x_1 x_2 +x_2 x_3 +x_3x_1=0 if and only if x_1=x_2=0 or x_2=x_3=0 or x_3=x_1=0. Then, let us derive the evolution equation of (A_t), as in (<ref>). Denoting A=x_1 x_2 +x_2 x_3 +x_3x_1 and S=x_1+x_2+x_3, P=x_1x_2x_3, straightforward calculations give that ∑_i=1^3 x_i (∂/∂ x_i)^2 = AS + 3P , ∑_i=1^3 ∂/∂ x_i = 2 S , ∑_i=1^3 x_i ∂^2 /∂ x_i^2 =0 . 
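These identities for the functional A are elementary but easy to get wrong; as a sanity check, they can be verified symbolically, for instance with the following short Python/SymPy sketch (our addition).

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3', nonnegative=True)
S = x1 + x2 + x3
A = x1*x2 + x2*x3 + x3*x1
P = x1*x2*x3
xs = (x1, x2, x3)

sum_sq   = sum(x * sp.diff(A, x)**2 for x in xs)   # sum x_i (dA/dx_i)^2
sum_grad = sum(sp.diff(A, x) for x in xs)          # sum dA/dx_i
sum_lap  = sum(x * sp.diff(A, x, 2) for x in xs)   # sum x_i d^2A/dx_i^2

assert sp.simplify(sum_sq - (A*S + 3*P)) == 0
assert sp.simplify(sum_grad - 2*S) == 0
assert sp.simplify(sum_lap) == 0
print("identities for the functional A verified")
```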
Hence, recalling the definitions (<ref>) and (<ref>), we obtain A_t = 2 √(A_t V_t) W_t + D_t V_t t , with V_t = S_t + 3 P_t/A_t , and D_t = 2 δ S_t/S_t + 3 P_t/A_t≤ 2 δ , where we denoted S_t=X_t^(1)+X_t^(2)+X_t^(3) and P_t=X_t^(1)X_t^(2)X_t^(3) . Let us stress that Assumption <ref> is satisfied here and we will show that it is indeed the case in the more general case n≥ 3, in the next section below. Since we have bounded D_t ≤δ_- := 2δ, in view of Section <ref>, it remains to control the time-change. Now, we will show that Lemma <ref> holds with κ=2. Notice that we may simply bound V_t≥ S_t ≥ X_t^(1), so we only need to show the following: for any p≥ 1, there is some C_p such that _x( ∫_0^t X_u^(1) u ≤ t^2) ≤ C_p ^p . We postpone the proof of (<ref>) to Section <ref>, but it allows us conclude thanks to Section <ref> that θ_n≥θ_- = κ (1-1/2δ_+) = 2(1-δ) as announced. §.§.§ The general case n≥ 3 Define for k ∈{1,…, n}, Π_k(x) := ∑_I⊂{1,…,n}, |I|=k x_I , with x_I := ∏_i∈ I x_i , and consider the process Z_t=Π_n-1(X_t), which is indeed equal to 0 if and only if X_t^(i)=X_t^(j)=0 for some i≠ j. Then, we have that ∂Π_n-1/∂ x_i = ∑_|I|=n-2, i ∉ I x_I = ∑_j≠ i x_{i,j}^c and in particular ∑_i=1^n ∂Π_n-1/∂ x_i = 2 Π_n-2, where the factor 2 comes from the fact that each pair {i,j} appears twice in the sum. Of course we have that ∑_i=1^n x_i ∂^2 Π_n-1/∂ x_i^2 = 0, so recalling (<ref>) we end up with Y_t = ∑_i=1^n X_t^(i)(∂Π_n-1/∂ x_i)^2(X_t), V_t = Y_t/Π_n-1(X_t), D_t = 2δ Π_n-2(X_t)/V_t . Let us first show that Assumption <ref> in verified here. First, we observe that for any x∈(ℝ_+)^n, ∑_i=1^n x_i (∂Π_n-1/∂ x_i)^2 = ∑_i=1^n |I| =n-1i∈ I|J|=n-2i∉ J x_I x_J =∑_|I| =n-1∑_|J|=n-2| I ∩ J^c | x_I x_J . Moreover, for any x∈(ℝ_+)^n, it is clear that Π_n-1(x) Π_n-2(x) = ∑_|I| =n-1∑_|J|=n-2 x_I x_J. Since for I and J such that | I| =n-1 and | I| =n-2, | I ∩ J^c |≤ 2, we deduce that a.s., for any t ≥0, we have V_t ≤ 2 Π_n-2(X_t) < ∞. Next, we see from (<ref>) that {t ≥0, Y_t = 0}⊂⋃_i=1^n{t ≥0, X_t^(i) = 0} which implies that Leb({t ≥0, Y_t = 0}) = 0 since for any i ∈{1, ⋯, n}, we classically have that Leb({t ≥0, X_t^(i) = 0}) = 0. It remains now to estimate V_t. In fact we will show that V_t ≥Π_n-2(X_t), which gives the bound D_t ≤ 2δ, so we again have δ_+=2δ. To show this, we observe that for I and J such that | I| =n-1 and | I| =n-2, | I ∩ J^c | = n-1 +2 - | I ∪ J^c|≥ 1. Recalling (<ref>) and (<ref>), we immediately get that V_t ≥Π_n-2(X_t). To conclude that θ_n≥ (n-1)(1-δ), we show Lemma <ref> with κ=n-1. In fact since V_t ≥Π_n-2(X_t) ≥∏_i=1^n-2 X_t^(i), we simply need to prove that for all large p, ( ∫_0^t ∏_i=1^n-2 X_u^(i) u ≤ t^n-1) ≤ C_p^p. For 1≤ k≤ n, the first time that Π_n-k+1(X_t) hits 0 is the first simultaneous return to zero of k independent Bessel processes, i.e. the time (X_t)_t≥ 0 hits the (n-k)-dimensional set 𝒜^(k) = ⋃_|I|=k{x_i=0 ∀ i∈ I}, see Remark <ref>. Using the same strategy as above, one can show that the associated persistence exponent θ^(k)_n is larger or equal than (n-k+1)(1-k δ/2). §.§ Lower bound on _x(H_3>t) As far as the lower bound is concerned, a natural choice of functional would be Z_t := (X_t) with (x) = (x_1+x_2)(x_2+x_3)(x_3+x_1), which is such that (x)=0 if and only if x_i+x_j=0 for some i≠ j. One can then proceed with the calculations and find that κ=3 and δ_- =2δ so that it gives a bound θ_+=3(1-δ). We are going to give some slightly more optimized functional to improve the upper bound. 
Recall the definitions of the functionals A_t, S_t and P_t from the previous section. Then, we define Z_t := S_t^a( A_t - P_t/S_t) =: (X_t) . where a ∈ [0,1] is some fixed exponent (that depends on δ), to be optimized later on. Note that for a=1, one recovers the functional (x) = (x_1+x_2)(x_2+x_3)(x_3+x_1). Let us stress however that in the case a∈ (0,1), the derivatives of the function are singular at the point (0,0,0) and therefore the Itô formula is not valid on the time-interval ℝ_+. We will see how we can still apply the strategy outlined in Section <ref>, but let us first compute the processes Y and V and check that Assumption <ref> still holds in the present case. A delicate calculation gives the following (we refer to Appendix <ref> for more details): Y :=∑_i=1^3 x_i (∂/∂ x_i)^2 = S^2a-1(A- P/S) ( S^2 + a(a+4)A + (1-a)(a+5) P/S) . It is again clear that we have {t ≥0, Y_t = 0 }⊂⋃_i=1^3{t ≥0, X_t^(i) = 0 } so we also have here that a.s. Leb({t ≥0, Y_t = 0 }) = 0. Recalling now the definition (<ref>) of V, we find that V_t= S_t^a-1 Q_t , Q_t:= S_t^2 + a(a+4)A_t + (1-a)(a+5) P_t/S_t . Notice that A_t ≤1/2 S_t^2 and P_t ≤1/6 S_t^3 so that Q_t≤9/2 S_t^2 and finally V_t ≤9/2S_t^a+1 < ∞. This shows that Assumption <ref> holds. Let us now compute the process D. Some straightforward (but tedious) calculations give the following (again, see Appendix <ref> for details): D^(1) := ∑_i=1^3 ∂/∂ x_i = S^a-1( 2S^2 + (3a-1) A + 3(1-a) P/S) , D^(2) := ∑_i=1^3 x_i ∂^2 /∂ x_i^2 = S^a-1( a(a+3) A + (1-a)(a+4) P/S) . Recalling that D_t = (δ D_t^(1) + +2 D_t^(2)) / V_t, we get that D_t= 1/Q_t( 2δ S_t^2 + (δ (3a-1) + 2a(a+3)) A_t + (1-a)(3δ + 2(a+4)) P_t/S_t) . Now, we can write D_t= 2δ + 1/Q_t( f_1(a,δ) A_t + f_2(a,δ) P_t/S_t), with f_1(a,δ) = 2(1-δ) a^2 +(6-5δ)a -δ , f_2(a,δ) = -2(1-δ) a^2 - (6-5δ) a +8-7δ . We now choose a=a(δ) such that f_1(a,δ)=0, that is a(δ) := √((6-5δ)^2 +8δ(1-δ)) -(6-5δ)/4(1-δ) . With this choice, we have that f_2(a,δ) = 8(1-δ) ≥ 0, so in particular D_t ≥ 2δ =: δ_-. Let us now explain how we can apply our strategy even though the function is not C^2 on (ℝ_+)^3. If we set T_0(S) = inf{t > 0, S_t = 0}, we can apply the Itô formula up until this time and the evolution equation (<ref>) remains valid on [0, T_0(S)): almost surely, for any t ∈ [0, T_S), Z_t = Z_0 + 2∫_0^t√(Z_s V_s) W_s + ∫_0^t D_s V_s s . On the other hand, the time-change ρ_t = ∫_0^t V_s s is always well-defined, and, denoting by τ its inverse, the process ℬ_t = ∫_0^τ_t√(V_s) W_s is a (ℱ_τ_t)_t≥0-Brownian motion. Finally, remembering that K_t = Z_τ_t, we get that for any t ∈ [0, ρ_T_0(S)), K_t = K_0 + 2∫_0^t√(K_s)ℬ_s + ∫_0^tD_τ_s s . Let Q^(δ_-) be the process defined by Q^(δ_-)_t = 2 √(Q^(δ_-)_t)ℬ_t + δ_- t . Then, by comparison, we get that a.s. for any t∈[0,ρ_T_0(S)), K_t ≥ Q_t^(δ_-). Since H_3 ≤ T_0(S) and since ρ_H_3 is the first hitting of zero of K, it is clear that for any t ≥0, {T_0(Q_(δ_-)) > ρ_t}⊂{ρ_H_3 > ρ_t} = {H_3 > t} and therefore ℙ_x(H_3 > t) ≥ℙ_x(T_0(Q_(δ_-)) > ρ_t) as in Section <ref>. Then, we need to control the time-change, and we will prove that Lemma <ref> holds with κ = 2+a. For this, remember that V_t ≤9/2 S_t^a+1 and therefore V_t ≤9/2 3^a+1 ((X_t^(1))^a+1 +(X_t^(2))^a+1 +(X_t^(3))^a+1). All together, using also a union bound, Lemma <ref> with κ = 2+a follows if we show that for any b = a+1>0, for any large p _x( ∫_0^t (X_u^(1))^b u ≥ A t^b+1) ≤ C_p A^-p . 
Again, we postpone the proof of (<ref>) to Section <ref>, but we can now conclude thanks to Section <ref> that θ_n≤θ_+ := κ (1-1/2δ_+) = 2(1-δ) +f(δ) with f(δ) = a(δ) (1-δ), as stated in Theorem <ref>. We could try to optimize further the functional , for instance considering Z̃_t := S_t^a ( A_t^b - c P_t S_t^2 b-3), for some constants a,b,c to be optimized over (the exponent 2b-3 ensures that the functional is of homogeneity a+2b). We have used Matematica to help us with the calculations of V_t,D_t, and guess a lower bound on D_t: it seems that, optimizing over a,b,c, one would obtain the following upper bound on the decay exponent: θ_+ = 2(1-δ) + 1/4(√((6-5δ)^2 +144 δ^2(1-δ) /16+9 δ) -(6-5δ) ) However, the calculations are very intricate and the effort seems excessive compared to the improvement of the bound from Theorem <ref> — we have here sup_δ∈ [0,1] [ θ_+ -2(1-δ)] ≈ 0.048. §.§ Control of the time-change processes: proof of (<ref>)-(<ref>) and (<ref>) We first show (<ref>) and (<ref>) before we turn to (<ref>), which is an improvement of (<ref>). Such bounds should be classical, but we were not able to find references, so we prove them by elementary (and robust) methods. §.§.§ Proof of (<ref>) and (<ref>) Let us denote X_t :=X_t^(1) for simplicity. First of all, notice that by scale invariance, we have that, for any b≥ 0, ∫_0^t (X_u)^b u (d)= t^b+1∫_0^1 (X_u)^b u (with a different starting point). Therefore, taking b=1 in (<ref>) and b=1+a in (<ref>), it is enough show that for any x∈ [0,1], _x(∫_0^1 (X_u)^b u ≤) ≤ C_p ^p , _x(∫_0^1 (X_u)^b u ≥ A ) ≤ C_p A^-p . In fact, we will show much stronger bounds: we show that there is some γ = γ_b>0 and some constant c>0 such that _x(∫_0^1 (X_u)^b u ≤) ≤ e^- c ^-γ , _x(∫_0^1 (X_u)^b u ≥ A ) ≤ e^-c A^γ . Before we prove this, let us show a simple technical lemma that controls the supremum and infimum of a continuous Markov process (Y_s)_s≥ 0. The content and the proof of this Lemma are inspired by Etemadi's maximal inequality, see e.g. <cit.>. Let (Y_s)_s≥ 0 be a continuous (time-homogeneous) Markov process. Let A>0. Then, for any x ∈ [0,A/2], _x( sup_s∈ [0,t] Y_s > A ) ≤_x( Y_t > A/2 ) + sup_s∈ [0,t]_A( Y_s ≤ A/2 ) . Also, for any x ≥ 4A, _x( inf_s∈ [0,t] Y_s ≤ A ) ≤_x( Y_t ≤ x/2 ) + sup_s∈ [0,t]_A( Y_s > x/2 ) . Let us start with the first inequality. Let τ_A:= inf{s, Y_s=A} and write _x( sup_s∈ [0,t] Y_s > A ) ≤_x( Y_t > A/2 ) + _x( τ_A<t, Y_t ≤ A/2) . Then, applying the (strong) Markov property at time τ_A, on the event {τ_A<t} we have that _x( Y_t ≤ A/2 |_τ_A) = _t(τ_A) with _t(s):= _A(Y_t-s≤ A/2). We therefore obtain _x( τ_A<t, Y_t ≤ A/2) = _x [ _{τ_A<t}_t(τ_A) ] ≤sup_s∈ [0,t]_t(s) , which gives the first inequality. For the second inequality, we use a similar reasoning: we write _x( inf_s∈ [0,t] Y_s < A ) ≤_x( Y_t < x/2 ) + _x( τ_A<t, Y_t ≥ x/2) . Then, applying the (strong) Markov property at time τ_A, on the event τ_A<t we have that _x( Y_t ≥ x/2 |_τ_A) = _t(τ_A) with _t(s):= _A(Y_t-s≥ x/2). We therefore obtain _x( τ_A<t, Y_t ≥ x/2) = _x [ _{τ_A<t}_t(τ_A) ] ≤sup_s∈ [0,t]_t(s) , which gives the second inequality. Let us now prove the second inequality in (<ref>), which is the simpler of the two. We have _x(∫_0^1 (X_u)^b u ≥ A ) ≤_x( sup_u∈ [0,1] X_u ≥ A^1/b) ≤_x( X_u ≥12 A^1/b) + sup_u∈ [0,1]_A^1/b( X_u ≥12 A^1/b) , where we have used Lemma <ref> for the last inequality. 
Now, recall that (X_t)_t≥ 0 is a squared Bessel process of dimension δ, so that we have _x( X_u ≥12 A^1/b) ≤ e^- c A^1/b , _A^1/b( X_u ≤12 A^1/b) ≤ e^- c u^-1 A^1/b≤ e^- c A^1/b . Indeed, these bounds can be easily deduced from the expression of the transition density of squared Bessel processes, see for instance <cit.>. This concludes the proof of the second part of (<ref>). We now turn to the first inequality in (<ref>). We let γ = 1/(4b+4) <1, and we write _x(∫_0^1 (X_u)^b u ≤) ≤_x( sup_u∈ [0,1] S_u ≤^γ) + _x( ∫_0^1 (X_u)^b u ≤ , sup_u∈ [0,1] X_u ≥^γ) . For the first term in (<ref>), we use a rough bound: we use the Markov property at every time (i-1)^γ for i∈{1,…, ^-γ}, _x( sup_u∈ [0,1] X_u ≤^γ) ≤( sup_x∈ [0,^γ]_x( sup_u∈ [0,^γ] X_u ≤^γ) )^^-γ≤ e^-c ^-γ , where for the second inequality we have used that, by scale invariance and comparison, sup_x∈ [0,^γ]_x( sup_u∈ [0,^γ] X_u ≤^γ) = _0( sup_u∈ [0,1] X_u ≤ 1 ) =: e^-c . Let us now control the second term in (<ref>). We set r:=2+b/2+2b and we decompose the probability according to the interval of the form [(i-1) ^r, i^r] where the supremum is attained, on which the infimum also need to be smaller than ^(1-r)/b (otherwise the integral on this interval would be larger than ). Then, the last term of (<ref>) is upper bounded by: _x( ∃ i∈{1,…, ^-r} , sup_u∈ [(i-1) ^r, i^r] X_u ≥^γ , inf_[(i-1) ^r, i^r] X_u ≤^(1-r)/b) ≤^-rsup_x∈_+_x( sup_u∈ [0, ^r] X_u ≥^γ , inf_[0, ^r] X_u ≤^2γ) , having used subadditivity and the fact that (1-r)/b=2γ for the last inequality (recall that γ:= 1/(4+4b)). We then consider two cases for the last probability: either x ≤ 4^2γ, or x≥ 4^2γ. In the case where x ≤ 4^2 γ, by comparison and scaling, we bound the probability by _4^2γ( sup_u∈ [0, ^r] X_u ≥^γ) = _4( sup_u∈ [0,^r-2γ] X_u ≥^-γ). Now from our choices of γ,r we have r-2γ= 1/2 >0, so ^r-2γ≤ 1. From that we bound the above probability by _4( sup_u∈ [0,1] X_u ≥^-γ) ≤_4 ( X_1 > 12 ^-γ) + sup_u∈[0,1]_^-γ( X_u < 12 ^-γ) , using Lemma <ref>. Using the fact that (X_t)_t≥ 0 is a squared Bessel process, we conclude analogously to (<ref>) that both probabilities are bounded by exp(- c ^-γ). In the case where x ≥ 2^2γ, by comparison and scaling, we bound the probability by _4^2γ( inf_u∈ [0, ^r] X_u ≤^2γ) = _4^-1/2( inf_u∈ [0, 1] X_u ≤^-1/2) ≤_4^-1/2( X_1 ≤ 2 ^-1/2) + sup_u∈[0,1]_^-1/2( X_u > 2 ^-1/2) , where we have also used the fact that 2γ-r= -1/2; the second inequality comes from Lemma <ref>. Again, analogously to (<ref>), both probabilities are bounded by exp(- c ^-1/2). This concludes the proof of the second part of (<ref>). §.§.§ Proof of (<ref>) To simplify notation, we let m=n-2 and we denote P_t^(m) := ∏_i=1^m X_t^(i) . First of all, note that, by scaling and comparison, we have that _x( ∫_0^t P_t^(m) u ≤ t^m+1) ≤_0 ( ∫_0^1 P_t^(m) u ≤) . We now show the following lemma, of which point (3) is exactly (<ref>). Let m≥ 1 be a fixed integer and let P_t^(m) := ∏_i=1^m X_t^(i). Then, for all p≥ 1, we have that there is a constant C_p=C_p,m>0 such that: (1) _0 [ ( sup_0≤ s <t≤ 1| P_t^(m)-P_s^(m)|/(t-s)^1/4)^p] ≤ C_p; (2) _0( sup_s∈[0,1] P_s^(m)≤) ≤ C_p^p; (3) _0 ( ∫_0^1 P_s^(m) s ≤)≤ C_p ^p. First of all, let us simplify notation and write P_t := P_t^(m) and :=_0. To prove the first point, it suffices to show (by the Kolmogorov criterion, see <cit.>) that for all p≥ 1, ∀ 0≤ s<t≤ 1, [ | P_t-P_s|^p ]≤ c_p | t-s |^p/2 . 
From the Itô formula, we have that P_t-P_s= 2∫_s^t ∑_i=1^m√(X_u^(i))∏_j≠ i X_u^(j) W^(i)_u + δ∫_s^t ∑_i=1^m∏_j≠ i X_u^(j) u , and using Burkholder-Davis-Gundy's inequality (see e.g. <cit.>), we get that [ | P_t-P_s|^p ]≤ c_p' [ ( ∫_s^t ∑_i=1^m X_u^(i)∏_j≠ i (X_u^(j))^2 u)^p/2 ]+ c_p”[ (∫_s^t ∑_i=1^m∏_j≠ i X_u^(j) u)^p ] . We finally obtain (<ref>) by dominating, into the integrals, all the X^(i) by their supremum on [0,1] (which are independent and admit a moment of order p for all p≥1). We now prove points (2)-(3) by iteration on m≥ 1. The case m=1 has already been treated in Section <ref>, so we now take m≥ 2 and suppose that points (2)-(3) hold for m-1. Let us start to show that point (2) holds for m. First of all, observe from (<ref>) that P_t is a time-changed square Bessel process Q^(δ) of dimension δ, i.e. P_τ_t= Q^(δ)_t, where τ_t is the inverse of ρ_t :=∫_0^t ∑_i=1^m-1∏_j≠ i X_u^(j) u . Then we can write ( sup_[0,1]P_s ≤) =( sup_[0,ρ_1] Q^(δ)_s ≤) ≤(ρ_1 ≤√()) + ( sup_[0, √()] Q^(δ)_s ≤) . By scaling, the second term equals (sup_[0, 1] Q^(δ)_s ≤√()) which is bounded by e^-c √() as seen in (<ref>). Moreover, since ρ_1 is the integral of products of m-1 independent Bessel processes, one can use point (3) with m-1 to get that, for any p≥ 1, we have (ρ_1 ≤√() ) ≤ C_p' ^p. This proves point (2) for m. We now turn to point (3). Let us denote P^*:=sup_s∈[0,1] P_s, and K:=sup_0≤ s<t≤ 1| P_t-P_s|/| t-s|^1/4 , and let t^*∈ [0,1] be such that P^*=P_t^*. Since by definition of K we have P_t≥ P^*-K | t-t^*|^1/4, we obtain that for all >0, ∫_0^1 P_s s ≥∫_0^1_{| s-t^*|≤1/2^7/8} P_s s ≥^7/8P^*-K ^35/32≥^7/8P^*-K ^9/8 . We therefore get that (∫_0^1 P_s s≤) ≤( P^* ≤ 2 ^1/8)+( P^* ≥ 2 ^1/8, ^7/8P^*-K ^9/8≤) ≤( P^* ≤ 2 ^1/8)+ ( K≥^-1/8) . Hence, using point (2) for the first probability and Markov's inequality and point (1) for the second one, we obtain that for any p≥ 1, both terms are bounded by C_p^p. This proves point (3) for m, concludes the recursion and proves the Lemma. § CALCULATIONS FROM SECTION <REF> In this section, we give some details on the tedious calculations from Section <ref>. We recall that the function φ is defined as φ(x) = S^a(A - P/S) where a ∈[0,1] and S = x_1 + x_2 + x_3, A = x_1x_2 + x_2x_3 + x_3x_1, P = x_1x_2x_3. Our aim here is to show the three following identities ∑_i=1^3 ∂/∂ x_i = S^a-1( 2S^2 + (3a-1) A + 3(1-a) P/S) , ∑_i=1^3 x_i ∂^2 /∂ x_i^2 = S^a-1( a(a+3) A + (1-a)(a+4) P/S) , ∑_i=1^3 x_i (∂/∂ x_i)^2 = S^2a-1(A- P/S) ( S^2 + a(a+4)A + (1-a)(a+5) P/S) . Since φ is a function of (S, A, P), we rely on the chain-rule formula to compute the derivatives of φ. More precisely, we have ∂/∂ x_i = ∂/∂ S∂ S/∂ x_i +∂/∂ A∂ A/∂ x_i + ∂/∂ P∂ P/∂ x_i =: H_i + F_i + G_i , where H := H_i = aS^a-1A + (1 - a)S^a-2P, F_i = S^a(x_i+1 + x_i+2), G_i = -S^a-1x_i+1x_i+2. Here, we used the convention that x_4 = x_1 and x_5 = x_2. Regarding (<ref>), we get ∑_i=1^3 ∂/∂ x_i = 3 H + 2S^a +1 - S^a-1A = S^a-1( 2S^2 + (3a-1) A + 3(1-a) P/S) Let us now compute (<ref>). We start by writing ∑_i=1^3 x_i (∂/∂ x_i)^2 = ∑_i=1^3 x_i (H^2 + F_i^2 + G_i^2 + 2HF_i + 2HG_i + 2F_i G_i ). Then, computing carefully all of the above six terms, we see that ∑_i=1^3 x_i H^2 = S^2a-1(a^2A^2 + 2a(1-a)AP/S + (1-a)^2 P^2/S^2), ∑_i=1^3 x_iF_i^2 = S^2a-1 S^2(A + 3P/S), and ∑_i=1^3 x_iG_i^2 = S^2a-1AP/S, ∑_i=1^3 2x_iHF_i = 4S^2a - 1(aA^2 + (1-a)A P/S) and ∑_i=1^3 2x_iHG_i = -6 S^2a-1(aAP/S + (1-a)P^2/S^2), ∑_i=1^3 2x_iF_iG_i = -4S^2a-1S^2 P/S. Recombining all the terms, one can easily check that (<ref>) holds. 
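As a complementary sanity check on these computations (and on the second-derivative identity treated next), the following Python sketch (our addition) evaluates both sides of the three identities stated at the beginning of this appendix at random points (x_1,x_2,x_3) and random exponents a∈(0,1); it is only a numerical spot-check, not a substitute for the algebra above.

```python
import random
import sympy as sp

x1, x2, x3, a = sp.symbols('x1 x2 x3 a', positive=True)
S = x1 + x2 + x3
A = x1*x2 + x2*x3 + x3*x1
P = x1*x2*x3
phi = S**a * (A - P/S)
xs = (x1, x2, x3)

lhs = [sum(sp.diff(phi, x) for x in xs),              # sum of first derivatives
       sum(x * sp.diff(phi, x, 2) for x in xs),       # sum x_i * second derivatives
       sum(x * sp.diff(phi, x)**2 for x in xs)]       # sum x_i * (first derivative)^2
rhs = [S**(a-1) * (2*S**2 + (3*a - 1)*A + 3*(1 - a)*P/S),
       S**(a-1) * (a*(a + 3)*A + (1 - a)*(a + 4)*P/S),
       S**(2*a-1) * (A - P/S) * (S**2 + a*(a + 4)*A + (1 - a)*(a + 5)*P/S)]

random.seed(0)
for _ in range(20):
    subs = {x1: random.uniform(0.1, 5), x2: random.uniform(0.1, 5),
            x3: random.uniform(0.1, 5), a: random.uniform(0.0, 1.0)}
    for L, R in zip(lhs, rhs):
        lv, rv = float(L.subs(subs)), float(R.subs(subs))
        assert abs(lv - rv) <= 1e-8 * max(1.0, abs(rv)), (lv, rv)
print("identities for phi = S^a (A - P/S) agree at random points")
```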
Let us now compute the second derivatives of . Differentiating in chain with respect to (S, A, P), we get ∂ H/∂ x_i = a(a-1)S^a-2A +aS^a-1(x_i+1 + x_i+2) + (1-a)(a-2)S^a-3P + (1-a)S^a-2x_i+1x_i+2 and ∂ F_i/∂ x_i = aS^a-1(x_i+1 + x_i+2), ∂ G_i/∂ x_i = (1-a)S^a-2x_i+1x_i+2. With these identities at hand and since ∂^2 /∂ x_i^2 = ∂ H/∂ x_i + ∂ F_i/∂ x_i + ∂ G_i/∂ x_i, one gets that ∑_i=1^3 x_i ∂^2 /∂ x_i^2= S^a-1( a(a-1)A +2a A +(1-a)(a-2)P/S+3(1-a)P/S + 2a A + 3 (1-a) P/S) , which concludes that (<ref>) holds. § RELATION WITH A SPECTRAL PROBLEM ON A BOUNDED DOMAIN We now come back to the last discussion in Section <ref> about the relation with a spectral problem for a certain operator on a bounded domain. Recall the definitions of the symmetric polynomials S_t:= X_t^(1) + X_t^(2) +X_t^(3) , A_t:=X_t^(1) X_t^(2) + X_t^(2) X_t^(3) + X_t^(3) X_t^(1) , P_t:= X_t^(1) X_t^(2) X_t^(3) , and that H_3 = inf{t≥ 0, A_t=0}. The generator ℒ̃ of (S_t,A_t,P_t)_t≥ 0 can be computed explicitly and (after calculation) it can be expressed as follows: for a C^2((_+^*)^3) function ψ(s,a,p) with compact support, ℒ̃ψ = 2s∂_ss^2 ψ +2(sa+3p)∂_aa^2ψ+ 2ap∂_pp^2ψ+8a∂_sa^2ψ + 12p∂_sp^2ψ + 8 ps∂_ap^2ψ + 3δ∂_s ψ+ 2δ s∂_a ψ+ δ a ∂_p ψ . We can now factorize the dynamics between a “radial” process (S_t)_t≥ 0 and an “angular” process (A̅_t, P̅_t):=(A_t/S_t^2, P_t/S_t^3), similarly to what is done in Section <ref>. Indeed, if in (<ref>) we take a function ψ of the form ψ(s,a,p)=ϕ(s)φ(a/s^2,p/s^3), we obtain after calculations that ℒ̃ψ(s,a,p)= φ(a/s^2,p/s^3)ℒ^1_3δϕ(s)+ϕ(s)/sℒ̅φ(a/s^2,p/s^3), where ℒ^1_3δ is the generator of a Bessel process of dimension 3δ in _+ and ℒ̅ is given by ℒ̅φ:= 2(u(1-4u)+3v)∂_uu^2 φ+2v(u-9v)∂_vv^2+8v(1-3u)∂^2_uv +2 ( δ(1-3u)-2u )∂_u φ+(δ(u-9v)-12v)∂_v φ . We stress that this decomposition makes sense for S_0≠ 0 and for times t<inf{u≥ 0, S_u=0}; in particular it makes sense before the hitting time H_3 (since S_t=0 implies that A_t=0). Splitting ℒ̃ as in (<ref>) means that the angular process (A̅_t, P̅_t) is a time-changed (by ∫_0^t S_u^-1 u) diffusion (U_t,V_t) generated by ℒ̅ and independent of (S_t)_t≥0. Now, one can check that the angular process (A̅_t, P̅_t)_t≥ 0 evolves in a bounded domain of (_+)^2 (with boundary), which is determined by computing the determinant of the principal symbol of ℒ̅. After calculations, one can verify that the angular process lives in a “curved” triangle 𝒯̅ described by 𝒯̅:={ (a,p)∈ [0,+∞[^2: p(-4 a^3 + a^2 + 18 a p - 4 p - 27 p^2) ≥ 0 } . We provide in Figure <ref> an illustration of the domain 𝒯̅. Now we can relate our question about the persistence exponent θ_3 to a spectral problem for the generator ℒ̅ on the bounded domain 𝒯̅, in the following way. The idea is to find a non-negative function ψ which is null only when a=0 and which is ℒ̃-harmonic, i.e. such that ℒ̃ψ=0. With this function ψ at hand, one obtains that ψ(S_t,A_t,P_t) is a time-changed Brownian motion[Note that our strategy, outlined in Section <ref>, was to compare the functional Z_t=φ(X_t) to a time-changed Bessel process (rather than a Brownian motion here).], which hits 0 only when X_t hits 𝒜. In view of the factorization property described above, we can look for ψ in the form ψ(s,a,p)= s^θφ(a/s^2,p/s^3) (with θ and φ to be determined): by the splitting (<ref>) given above, ψ is ℒ̃-harmonic if and only if φ verifies the eigenvalue problem ℒ̅φ= μφ, where μ is such that θ(θ-1+3δ/2)+μ=0. 
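For the reader's convenience, solving this quadratic relation for θ (our addition) gives θ = 1/2( (1-3δ/2) + √((1-3δ/2)^2 -4μ) ), where we selected the root that is non-negative under the natural assumption that the eigenvalue μ is non-positive.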
All together, one ends up with the following time-changed Brownian motion: ψ(S_t,A_t,P_t) = S_t^θφ(A̅_t, P̅_t )= ℬ_ρ_t , where ρ_t is given by t↦ρ_t = ∫_0^t 4 S_r^2θ-1g(A̅_r, P̅_r) r, and with g(u,v):=θ^2 φ(u,v)^2 + (u(1-4u)+3v) (∂_u φ (u,v) )^2 + (u-9v)v (∂_v φ (u,v) )^2 +4 v(1-3u)∂_u φ (u,v) ∂_v φ (u,v) . Then, we have (H_3>t)=(T^ℬ > ρ_t) where T^ℬ is the hitting time of 0 by a Brownian motion ℬ. Since ρ_t scales like t^2θ, one should get (after controlling the time-change), that (H_3>t) behaves like t^-θ as t→∞. To summarize, if one finds (μ, φ) such that ℒ̅φ= μφ with a function φ : 𝒯̅→_+ which vanishes only when the first coordinate is zero, then applying our strategy one should obtain that the persistent exponent θ_3 is the solution of θ_3(θ_3-1+3δ/2)+μ=0. *Acknowledgements. The authors are grateful to Frank den Hollander for suggesting (and discussing) this apparently simple but challenging question. We also would like to thank Nicolas Fournier for many enlightening discussions. Q.B. acknowledges the support of Institut Universitaire de France and ANR Grant Local (ANR-22-CE40-0012). L.B. is funded by the ANR Grant NEMATIC (https://anr.fr/Projet-ANR-21-CE45-0010ANR-21-CE45-0010). abbrv
http://arxiv.org/abs/2406.18073v1
20240626050535
Investigation into the origin of the soft excess in Ark 564 using principal component analysis
[ "Ming Lyu", "Zhenyan Fei", "Guobao Zhang", "X. J. Yang" ]
astro-ph.HE
[ "astro-ph.HE" ]
Department of Physics, Xiangtan University, Xiangtan, Hunan 411105, China Key Laboratory of Stars and Interstellar Medium, Xiangtan University, Xiangtan, Hunan 411105, China lvming@xtu.edu.cn Yunnan Observatories, Chinese Academy of Sciences (CAS), Kunming 650216, P.R. China Key Laboratory for the Structure and Evolution of Celestial Objects, CAS, Kunming 650216, P.R. China We combined a principal component analysis (PCA) and spectroscopy to investigate the origin of the soft excess in narrow-line Seyfert 1 galaxy Ark 564 with XMM-Newton observations over a period of ten years. We find that the principal components in different epochs are very similar, suggesting stable variability patterns in this source. More importantly, although its spectra could be equally well fitted by the two soft excess models, simulations show that the principal components from the relativistically smeared reflection model match the data well. At the same time, the principal components from the warm corona model show significant inconsistency. This finding indicates that the soft excess in Ark 564 originates from the relativistically smeared reflection, rather than the Comptonization in the warm corona, thereby favoring the reflection origin or the "hybrid" origin of the soft excess. Furthermore, the presence of the narrow absorption features in the spectra suggests that the soft excess is unlikely to originate from absorptions due to possible outflowing winds. Our results indicate that the PCA coupled with spectral analysis is a promising approach to exploring the origin of the soft excess in active galactic nuclei (AGNs). . lyu et al. Investigation into the origin of the soft excess in Ark 564 using principal component analysis Ming Lyu 1,2 Zhenyan Fei 1,2 Guobao Zhang 3,4 X. J. Yang 1,2 ============================================================================================== § INTRODUCTION Soft X-ray excess appears as an excess in the soft band (< 2 keV) after the extrapolation of the hard X-ray continuum in active galactic nuclei (AGNs). After its first discovery <cit.>, it has been widely observed in type 1 AGNs since there is no obscuration by the dusty torus <cit.>. Initially, soft X-ray excess was proposed to be the thermal radiation with the temperature of ∼ 0.1-0.2 keV originating from innermost part of the accretion flow. However, given that the disk temperature is inversely proportional to the black hole mass, the expected disk temperature for supermassive black holes should be typically of order 1-10 eV <cit.>. Consequently, this scenario was disfavored by the fact that the standard accretion disk could not reach a temperature as high as 0.1 keV. It has been found that the temperature required for generating the soft excess is always ∼ 0.1-0.2 keV <cit.>, independent of the black hole mass. At present, two popular, opposing models have been proposed to account for the soft excess: the warm corona model and the relativistically smeared reflection model. The former assumes that the soft excess comes from the up-scattered Comptonization of the ultraviolet (UV) seed disk photons in a warm (kT∼ 0.1-1 keV) and optically thick (τ∼10-40) corona <cit.>. This warm corona model is favored by the existence of similarities in the spectral shape and the variability between the optical/UV and the soft X-ray emission <cit.>. 
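As a rough, illustrative order-of-magnitude estimate (our addition, not taken from the works cited above): for representative warm-corona parameters kT_e ≈ 0.2 keV and τ ≈ 20, the Compton parameter is y ≈ (4kT_e/m_ec^2)τ^2 ≈ (0.8/511)×400 ≈ 0.6, i.e. of order unity, consistent with the idea that repeated scatterings in such a plasma can up-scatter the UV seed photons into a smooth soft X-ray excess.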
The relativistically smeared reflection model presumes that the low-energy emission-lines produced from the reflection off the inner part of the ionized disk are blurred and smoothed by strong relativistic effects around central black-hole, and finally forms the observed excess <cit.>. This model involves the atomic transition process and hence could more easily explains why the soft X-ray excess remains consistent over a wide range of black hole mass. Besides, this scenario is favored by the results from the soft X-ray reverberation lags detection <cit.>. In addition to the two models above, it has also been proposed that the soft excess could be due to the relativistically blurred absorption by disk winds <cit.>. In this absorption scenario, the complex velocity structure of disk winds leads to substantial broadening, which masks the sharp absorption features and generates a smooth soft excess structure in the spectrum. Observationally, the above two models could reproduce the observed X-ray spectrum well <cit.>, indicating that spectroscopy alone is not enough to distinguish them. Recent years, investigations into the Narrow-line Seyfert 1 galaxies (NLSy1s) and the broad-line Seyfert 1 galaxies (BLSy1s) have reported a variety of conclusions about the origin of the soft excess. In particular, NLSy1s are a subset of type-1 AGNs with the full-width-half-maximum (FWHM) of the Hβ lines less than 2000 km/s, while BLSy1s have FWHM bigger than 2000 km/s <cit.>. <cit.> studied the soft excess in 30 NLSy1s plus 59 BLSy1s and detected a positive correlation between the relative strength of the soft excess (R) and the primary X-ray spectral index. Besides, they found no correlation between the relative strength and the primary X-ray luminosity. These results indicate that the soft excess is generated by a warm corona. On the contrary, <cit.> found that the relative strength of the soft excess significantly correlates with the relative strength of hard excess for 22 NLSy1s, supporting the relativistically smeared reflection origin of the soft excess. Later, based on the Swift observations, <cit.> found no clear correlation between the relative strength R and power-law spectra index, favoring the relativistically smeared reflection scenario. Principal component analysis (PCA) is a powerful tool to dig different variability patterns in complex datasets. By decomposing a data set into several orthogonal eigenvectors, or principal components (PCs), PCA can efficiently quantify the variable components. The main advantage of the PCA is that it could obtain the variability in each separate varying component instead of only a total variability. When applied to a set of spectra, it returns detailed spectra of each variable component in a model-independent way. PCA has been widely used in many regions of astronomy, including stellar classifications <cit.>, variability in both AGNs <cit.> and X-ray binaries <cit.>. <cit.> applied the PCA to study the spectral variability of MCG-06-30-15 and found that its variability is driven mainly by the variation of the power-law component normalization (∼ 97% variability), the photon index (∼ 2%), and the normalization of the reflection (∼ 0.5%). <cit.> indicated that variability component in NGC 1365 from the PCA is different from those in MCG-06-30-15. Furthermore, they found that the PCA provides a clear distinction between absorption and reflection as the driver of the variability in AGN spectra. 
<cit.> applied the PCA to 26 AGNs and identified at least 12 different variation patterns, corresponding to several different physical mechanisms. Besides, the work of <cit.> suggests that the PCA could be an extremely powerful tool for distinguishing different variability patterns in AGNs. Later, <cit.> applied the PCA to the source Mrk 478 and found that only the blurred reflection model could reproduce the overall shape of the dominant PC (90%), although its spectra could be nearly equally well fitted by the partial covering model, the soft Comptonization model and the reflection model. Due to the degeneracy in the goodness of fits between the warm corona model and the blurred reflection model, the nature of the soft X-ray excess is still not fully understood about 40 years after its first discovery. In this paper, we aim to combine spectroscopy and the PCA to study the soft excess in the NLSy1 AGN Arakelian 564 (Ark 564) with XMM-Newton observations. Ark 564 is located at a redshift z = 0.02468 <cit.> and is one of the brightest NLSy1s, with a 2-10 keV luminosity ∼ 2×10^43 erg s^-1 <cit.>. In the following analysis, we first fitted the spectra with the two models separately, then simulated the PCs from each model based on the fitting results, and finally compared the simulated PCs with the ones derived from the real data to assess their consistency. § OBSERVATIONS AND DATA REDUCTION In this work, we analysed 13 XMM-Newton observations (Table 1) taken from 2000-06-17 to 2018-12-03 using the European Photon Imaging Camera, EPIC-PN <cit.>, in imaging mode. We used the Science Analysis System (SAS) version 21.0.0 for the XMM-Newton data reduction, with the latest calibration files applied. We ran the tool epproc to extract the calibrated events, and used the command barycen to convert the arrival times of the photons from the local satellite frame to the barycenter of the solar system. We filtered the flaring particle background via an iterative process that maximizes the signal-to-noise ratio (S/N), the same method as described in <cit.>. We selected only single and double events for the extraction, excluding all events at the edge of a CCD or close to a bad pixel. We detected a moderate pile-up effect in some observations (Table 1) using the command epatplot. To remove the pile-up effect, we finally selected events from a circular region of radius 40 arcsec centred on the source position, excluding a central circular region of 10 arcsec and 8 arcsec for Obs 11 and the other pile-up observations, respectively. The background spectra were generated from a 50 arcsec circular region far from the source. We finally applied the commands rmfgen and arfgen to produce the response matrices (RMFs) and ancillary response files (ARFs). § PRINCIPAL COMPONENT ANALYSIS We applied the PCA in the same way as described in the work of Parker et al. (2014).
The steps can be briefly summarized as follows: (1) we divided each observation into 5 ks segments and extracted a spectrum for each segment; (2) we calculated the mean background-subtracted spectrum, F_mean(E_j), over 0.4-9.0 keV for all segments; (3) we subtracted the mean spectrum from each segment spectrum to obtain a set of fractional residual spectra using the formula F_res,i(E_j)=[F_i(E_j)-F_mean(E_j)]/F_mean(E_j), where F_i(E_j) is the flux in the j-th energy bin of the i-th segment spectrum; (4) we applied the singular value decomposition (SVD) to the F_res,i(E_j) matrix to derive the orthogonal eigenvectors and eigenvalues, and obtained the fractional variability of each component by dividing its eigenvalue by the sum of all eigenvalues; and (5) we perturbed the observed photon counts by a random amount commensurate with the photon shot noise and then applied the PCA to this perturbed dataset to estimate the variance of each component for the error calculation <cit.>. Each eigenvector obtained in step (4) is the spectrum of an individually varying component, and the corresponding eigenvalue measures the amount of variation that component is responsible for.

§ SPECTRAL ANALYSIS

In this work, we used XSPEC version 12.13.1 (Arnaud 1996) to fit all 13 XMM-Newton spectra together in the 0.4-10.0 keV energy range. We used the model tbabs to describe the absorption by the interstellar medium (ISM) along the line of sight (LoS), with the solar abundance table of <cit.> and the photoionization cross-section table of <cit.>. The column density was fixed at N_H=5.34 × 10^20 cm^-2 <cit.> for all fits.

§.§ Warm corona model

We applied the model nthcomp <cit.> to describe the power-law emission from the hot corona. The thermal Comptonization model nthcomp describes the high-energy shape and the low-energy rollover more accurately than an exponentially cut-off power-law component. For the soft excess emission, we first used a simple bbody model to estimate its shape, and then replaced the bbody model with the physical model comptt <cit.> to account for the emission from a possible warm corona. comptt is an analytic model describing the Comptonization of soft photons in a plasma, and it works well in both the optically thin and optically thick regimes. During the fitting, the redshift of the source, z, was fixed at 0.02468 <cit.>, and the disk seed photon temperature kT_seed was frozen at 0.05 keV <cit.> in both the nthcomp and comptt components. We fixed the electron temperature, kT_e, in the nthcomp model at 100 keV and selected the disk geometry in comptt.

§.§ Relativistically smeared reflection model

For the reflection scenario, we selected the model relxillcp, which describes the reflection off a disc illuminated by a power-law source. relxillcp[http://www.sternwarte.uni-erlangen.de/ dauser/research/relxill/] combines the xillver model <cit.> and the relline code <cit.>, and it calculates the reflection for each emission angle. Its parameters are: the inclination angle, i; the spin parameter, a; the inner and outer radii of the disk, R_in and R_out; the breaking radius separating them, R_break; the emissivity indices of the inner and outer disk, q_in and q_out; the redshift, z; the photon index, Γ; the density and the ionization parameter of the disk, N and ξ; the iron abundance; the electron temperature, kT_e; the reflection fraction, R_refl; and the normalization. During the fit, we fixed the spin at 0.998, the same value as in <cit.>.
The inner radius, R_in, is frozen at 1 R_ISCO, while the outer radius and the breaking radius are fixed at 400 R_g. The two emissivities are linked to be the same q_in=q_out. We set the electron temperature, kT_e, at 100 keV and fixed the iron abundance to the solar abundance. The inclination angle between different observations are linked together and we do the same to the emissivity index. After applying the above warm corona model and the reflection model, we found that there are still some residuals in the fits, possibly due to the complex absorption in Ark 564. We finally found that these residuals could be well described by a model composed of a partial ionized absorber (zxipcf), along with three to four phenomenological low-energy absorption edges (zedge) in the soft energy band, similarly to the absorption model used in <cit.>. § PC SIMULATIONS With the parameters derived from the above spectral analysis, we constructed a set of fake spectra and generated the simulated PCs. The fake spectra were produced by the tool Fakeit in Xspec. We varied the free parameters randomly within the ranges derived from the fits: Γ, kT_e, τ, Nor_nthcomp, and Nor_compTT for the warm corona model; Γ, log(ξ), log(N), R_refl, and Nor_relxillcp for the reflection model. For the free parameters in the absorption model, we first fixed them at their mean value, which we calculated from the values in the joint fitting results. In addition to that, we also let them vary randomly in their variation ranges in the related simulations to see the possible influence of the absorption in the PCs. We generated 200 simulated fake spectra for each model, with the exposure of each one being 5 ks, which is the same as the length of the segments in the previous principle component analysis. Finally, we applied the PCA to the fake spectra and obtained the corresponding PCs. Best-fitting results for the joint fit to the XMM-Newton spectra of Ark 564 with the warm corona model Tbabs × (Nthcomp+CompTT). The chi-square of the fit is χ^2_ν (χ^2/dof)=1.14 (2239/1960). All errors in the Tables are at the 90 percent confidence level unless otherwise indicated. A symbol * means that the error pegged at the hard limit of the parameter range. 
Model Comp Parameter Obs 1 Obs 2 Obs 3 Obs 4 Obs 5 Obs 6 Obs 7 Obs 8 Obs 9 Obs 10 Obs 11 Obs 12 Obs 13 zxipcf N_H (10^22) 24_-12^+101 162_-24^+103 49_-12^+29 142± 15 65± 14 23_-3^+7 23_-5^+7 45_-7^+12 64_-14^+20 67_-16^+34 65_-20^+38 57_-23^+8 24_-2^+7 log(ξ) 0.01_-0*^+5.17 1.7± 0.7 1.6± 0.5 2.1_-0.1^+0.2 1.9_-0.5^+0.1 1.1_-0.8^+0.4 0.9_-0.8^+0.5 0.9± 0.6 1.9_-1.0^+0.1 2.2± 0.2 2.1_-0.1^+0.2 1.9_-0.9^+0.1 0.01_-0*^+0.35 f_cover 0.23_-0.12^+0.52 0.79_-0.12^+0.09 0.31± 0.04 0.65± 0.03 0.37± 0.04 0.26± 0.03 0.27± 0.04 0.38± 0.07 0.41± 0.05 0.38± 0.06 0.31_-0.04^+0.02 0.41± 0.04 0.34_-0.03^+0.13 zedge1 E_c (keV) 0.554± 0.015 0.530± 0.012 0.526± 0.004 0.543± 0.009 0.549± 0.011 0.540± 0.007 0.548± 0.007 0.533± 0.006 0.547± 0.009 0.532± 0.008 0.542± 0.008 0.526± 0.005 0.525± 0.004 τ 0.19± 0.05 0.20± 0.05 0.20± 0.01 0.11± 0.02 0.11± 0.02 0.14± 0.02 0.16± 0.02 0.11± 0.02 0.16± 0.02 0.19± 0.03 0.14± 0.02 0.16± 0.02 0.14± 0.02 zedge2 E_c (keV) 0.727± 0.019 0.719± 0.015 0.699± 0.005 0.714± 0.017 0.708± 0.012 0.714± 0.007 0.724± 0.008 0.715± 0.009 0.702± 0.010 0.687± 0.014 0.704_-0.007^+0.013 0.707± 0.007 0.707± 0.006 τ 0.15± 0.04 0.16± 0.05 0.14± 0.01 0.07± 0.02 0.11± 0.02 0.14± 0.02 0.14± 0.02 0.10± 0.02 0.16± 0.02 0.13± 0.02 0.10± 0.02 0.14± 0.01 0.12± 0.01 zedge3 E_c (keV) 1.17_-0.10^+0.13* 1.0_-0*^+0.3* 1.14± 0.03 1.20± 0.03 1.21± 0.03 1.17± 0.09 1.19± 0.08 1.13± 0.03 1.14± 0.05 1.11± 0.02 1.16_-0.12^+0.08 1.17± 0.02 1.18± 0.02 τ 0.034_-0.032^+0.040 0.029_-0.028*^+0.041 0.038± 0.010 0.043_-0.013^+0.007 0.045± 0.013 0.017± 0.012 0.022± 0.014 0.044± 0.012 0.048± 0.016 0.082± 0.016 0.021± 0.016 0.068± 0.010 0.070± 0.010 nthComp Γ 2.52± 0.07 2.52± 0.06 2.58± 0.02 2.64± 0.02 2.62± 0.03 2.59± 0.03 2.59± 0.03 2.60± 0.03 2.57± 0.04 2.56± 0.03 2.62± 0.03 2.59± 0.03 2.61± 0.02 Norm (10^-2) 2.45_-0.61^+1.04 4.75_-1.23^+4.40 2.12_-0.17^+0.12 6.74_-0.74^+0.89 2.27_-0.16^+0.71 2.04_-0.10^+0.21 2.47_-0.16^+0.21 2.60_-0.30^+0.24 1.64_-0.16^+0.25 2.39_-0.27^+0.24 3.05_-0.37^+0.13 1.97_-0.15^+0.18 1.92_-0.11^+0.40 compTT kT_e (keV) 0.157± 0.014 0.160_-0.017^+0.030 0.145± 0.004 0.153± 0.006 0.152± 0.004 0.143± 0.002 0.144± 0.002 0.152± 0.007 0.149_-0.003^+0.006 0.152± 0.007 0.148± 0.006 0.150± 0.004 0.152± 0.004 τ 40_-10^+24 28_-5^+9 46_-4^+10 100_-40^+0* 56_-10^+27 100_-38^+0* 100_-40^+0* 40_-5^+10 100_-41^+0* 100_-42^+0* 100_-45^+0* 55_-9^+21 46_-5^+10 Norm 1.09_-0.22^+0.85 4.71_-2.01^+3.26 0.90_-0.15^+0.22 0.99_-0.07^+0.40 0.65± 0.22 0.43_-0.03^+0.10 0.61_-0.04^+0.13 0.92± 0.19 0.39_-0.03^+0.28 0.48_-0.06^+0.23 0.60_-0.05^+0.26 0.60_-0.07^+0.10 0.71± 0.13 Best-fitting results for the joint fit to the XMM-Newton spectra of Ark 564 with the reflection model Tbabs × RelxillCp. The chi-square of the fit is χ^2_ν (χ^2/dof)=1.17 (2256/1932). 
Model Comp Parameter Obs 1 Obs 2 Obs 3 Obs 4 Obs 5 Obs 6 Obs 7 Obs 8 Obs 9 Obs 10 Obs 11 Obs 12 Obs 13 zxipcf N_H (10^22) 288_-283^+116 84_-31^+119 181_-25^+39 83_-16^+20 181_-58^+34 18_-3^+10 17_-6^+34 65_-7^+25 18_-3^+18 46_-9^+11 15± 4 58_-9^+4 65_-9^+20 log(ξ) 4.39_-0.89^+1.48 0.76_-0.75^+1.49 2.71_-0.15^+0.05 2.03± 0.07 2.73_-0.17^+0.05 1.14_-0.72^+0.31 0.96_-0.95*^+0.73 1.09_-0.41^+0.53 0.01_-0*^+1.02 1.99± 0.06 0.83_-0.82*^+0.58 1.91_-0.42^+0.01 1.33_-0.55^+0.50 f_cover 0.50_-0.41^+0.02 0.42± 0.07 0.36± 0.03 0.38± 0.03 0.40± 0.04 0.26± 0.02 0.20± 0.03 0.32_-0.03^+0.08 0.35± 0.03 0.32± 0.05 0.35± 0.03 0.35± 0.02 0.35± 0.02 zedge1 E_c (keV) 0.39± 0.02 0.40_-0.08^+0.02 0.35± 0.03 0.39± 0.02 0.396± 0.005 0.38± 0.03 0.32± 0.02 0.37± 0.02 0.38± 0.02 0.39_-0.04^+0.01 0.33± 0.03 0.392±0.006 0.403± 0.005 τ 1.00_-0.08^+0.13 0.88± 0.14 0.96_-0.13^+0.20 1.00± 0.09 1.12± 0.05 0.96_-0.05^+0.18 1.65_-0.26^+0.37 0.84_-0.07^+0.12 1.24_-0.11^+0.19 0.89_-0.09^+0.14 1.61± 0.40 1.01± 0.05 1.09± 0.04 zedge2 E_c (keV) 0.53± 0.01 0.52± 0.01 0.513± 0.003 0.524± 0.006 0.530± 0.005 0.519± 0.004 0.518± 0.004 0.513± 0.004 0.524± 0.004 0.517± 0.006 0.514± 0.004 0.517± 0.004 0.525± 0.004 τ 0.29± 0.05 0.34± 0.05 0.30± 0.01 0.28± 0.03 0.28± 0.02 0.26± 0.02 0.29_0.02^+0.05 0.25± 0.02 0.32± 0.02 0.29± 0.04 0.32± 0.04 0.31± 0.02 0.29± 0.02 zedge3 E_c (keV) 1.14± 0.05 1.02_-0.02*^+0.07 1.07± 0.02 1.05± 0.03 1.04± 0.02 1.10± 0.03 1.09± 0.04 1.08± 0.02 1.11± 0.03 1.08± 0.02 1.08± 0.04 1.12± 0.02 1.08± 0.02 τ 0.076± 0.031 0.067± 0.029 0.066± 0.013 0.051± 0.015 0.047± 0.013 0.058_-0.019^+0.012 0.048± 0.018 0.065± 0.015 0.074± 0.022 0.109± 0.018 0.063± 0.017 0.091_-0.007^+0.013 0.071± 0.011 zedge4 E_c (keV) 1.36± 0.07 1.23± 0.06 1.24± 0.03 1.23± 0.03 1.24± 0.02 1.33_-0.09^+0.05 1.25± 0.04 1.24± 0.03 1.29± 0.05 1.27± 0.03 1.34_-0.09^+0.04 1.30± 0.04 1.24± 0.02 τ 0.07± 0.04 0.08± 0.04 0.06± 0.01 0.07± 0.02 0.08± 0.01 0.04± 0.01 0.06± 0.02 0.05± 0.01 0.07± 0.02 0.08± 0.02 0.06± 0.02 0.06± 0.01 0.08± 0.01 RelxillCp i (degree) 52.6± 0.5 Emissivity 8.58± 0.16 Γ 2.59± 0.01 2.65± 0.03 2.597± 0.004 2.59± 0.01 2.633± 0.006 2.68± 0.01 2.69± 0.02 2.639_-0.006^+0.009 2.72± 0.02 2.59± 0.01 2.82± 0.02 2.652± 0.009 2.644± 0.006 log(ξ) 2.55_-0.16^+0.09 2.59_-0.09^+0.12 2.70_-0.02^+0.01 2.79± 0.05 2.70_-0.02^+0.01 2.70_-0.06^+0.02 2.68_-0.06^+0.05 2.70_-0.05^+0.04 2.71_-0.05^+0.03 2.67± 0.07 2.70_-0.06^+0.05 2.70± 0.02 2.70_-0.02^+0.01 log(N) 17.01_-0.43^+0.15 16.85_-0.32^+0.21 16.99_-0.07^+0.05 17.17_-0.28^+0.07 17.05_-0.04^+0.03 17.00_-0.17^+0.05 16.57_-0.16^+0.12 16.65_-0.11^+0.14 16.99_-0.17^+0.08 17.24_-0.21^+0.10 16.66_-0.19^+0.17 17.00_-0.09^+0.02 16.97_-0.12^+0.05 R_refl 11.27_-1.95^+2.03 12.96_-2.25^+2.46 7.88_-0.39^+0.29 12.09_-0.71^+0.79 11.37_-0.57^+0.68 4.12_-2.08^+0.36 7.27± 0.64 6.40± 0.47 5.08_-0.49^+0.53 7.98± 0.68 3.25± 0.43 6.78_-0.39^+0.44 10.27_-0.49^+0.64 Norm (10^-4) 1.18± 0.17 1.11_-0.09^+0.17 1.46± 0.04 1.41_-0.07^+0.32 1.24± 0.07 1.86_-0.08^+0.12 1.84_-0.07^+0.11 1.81± 0.08 1.37± 0.09 1.24_-0.07^+0.10 4.02± 0.24 1.34± 0.05 1.17± 0.05 § RESULTS §.§ PCA results In Fig <ref>, we show the first three significant principal components based on all 13 XMM-Newton observations. We found that the first component is always positive and remains relatively flat, showing suppression at low energies and at the energy of the iron line. The second (pivoting) component is positive below ∼ 2 keV and then becomes negative above that. 
Since values below zero indicate that the variation in the corresponding energy bins is anti-correlated with that in the positive bins, the low-energy points of this component are significantly anti-correlated with the high-energy ones. Moreover, the component flattens at low energies and steepens at high energies. The third component is positive at both the low-energy and high-energy ends, while it is below zero at intermediate energies, with a turnover at ∼ 4-7 keV. To see whether the principal components of Ark 564 vary with time, we further applied the PCA to datasets from different epochs composed of Obs 1-5, Obs 6-9, and Obs 10-13, respectively. In Fig <ref>, we show the PCs of Ark 564 in the different epochs. We found that the shapes of the first three significant PCs in the different epochs are very similar, suggesting a consistent underlying physical mechanism driving the variability.

§.§ Spectral analysis results

In Fig <ref>, we show the ratio between the data and the model after removing the bbody component from the phenomenological warm corona model. As shown in the plot, the soft excess below ∼ 2 keV is significant in all observations. Furthermore, there is a clear variation in the shape of the excess among these observations. In Table <ref>, we show the parameters of the fit with the warm corona model comptt. The Γ values and the temperature of the warm corona, kT_e, remain more or less constant in all cases, namely ∼ 2.6 and ∼ 0.147 keV, respectively. The normalization of the comptt component ranges from 0.36 to 7.97 in these observations. For the reflection model (Table <ref>), the photon index Γ ranges from 2.58 to 2.84, with the reflection fraction spanning a wide range of ∼ 2-15. The inclination angle derived in the fit is 52.6±0.5 degrees, and the emissivity index is ∼ 8.6. The accretion disk is highly ionised, with log(ξ) ranging from 2.39 to 2.84. The disk density log(N) shows only a small change, from 16.57_-0.16^+0.12 in Obs 7 to 17.24_-0.21^+0.10 in Obs 10. The corresponding spectra, individual components, and residuals of the fits are shown in Fig <ref>. Both the warm corona model and the reflection model describe the observed spectra well, the main difference being that a separate soft component is required to describe the data in the warm corona model.

§.§ Simulation results

§.§.§ Warm corona model

We show the simulated PCs from the warm corona model in Figure <ref>. In panel A, a component that is always above zero is present when we let the parameter Nor_nth vary randomly. This PC is suppressed below ∼ 2 keV and remains flat over most of the energy range. In panel B, we show the PCs when the other normalisation, Nor_comptt, also varies. In this case a new pivoting component appears, whose overall shape is relatively flat above ∼ 2 keV, while the first component, compared with the one in panel A, becomes less suppressed below ∼ 1 keV. In panel C, we show the PCs when all the free parameters of the nthcomp and comptt continuum vary. We found that the first component is relatively flat, accounting for 72% of the total variability. The second component (26%) is suppressed below ∼ 0.7 keV, switches to negative values at ∼ 1.3 keV, and finally becomes flat above ∼ 2 keV. The third component (0.4%) crosses the zero line to the negative side at ∼ 0.8 keV and returns to the positive side above ∼ 7 keV.
The fourth component accounts for only 0.3% of the total variability and is featureless, fluctuating around the zero line over the whole energy range. Comparison with the PCs from the real data indicates that the first simulated PC is consistent with the one derived from the data, while the second simulated PC deviates significantly from the real one. Specifically, the slope of the simulated PC is clearly inconsistent with that of the real data over the whole energy band. Panel D shows the simulated PCs when variations of the free parameters of the absorption models zxipcf and zedge are also considered. We found that the absorption has a very limited influence on the overall shape of the first and second components; the main effect is that the high-energy end of the first PC and the ∼ 0.8 keV part of the second component are slightly suppressed. In contrast, the absorption has a clear influence on the third component, suppressing the variability below ∼ 0.8 keV and enlarging the variability at ∼ 5-8 keV.

§.§.§ Reflection model

Figure <ref> shows the simulation results for the reflection model. As shown in panel A, the first component, arising from the variation of Nor_relxillCp, always remains flat above the zero line and matches the one from the real data very well. The second component appears (panel B) when the variation of the parameter Γ is also included. It is flat below ∼ 0.6 keV and then behaves like a "straight line" crossing the zero line at around 1.5 keV. In panel C we show the simulated PCs when variations of the continuum parameters log(ξ), log(N), and R_refl are further included. In this case the first principal component is very flat and accounts for 93% of the total variability. The second component (6%) is flat below ∼ 0.6 keV and crosses the zero line at around 2 keV. The third component (0.6%) deviates from the zero line at 0.4-0.5 keV, then moves to the other side of the zero line, and finally crosses the zero line again at ∼ 5 keV. The fourth component (0.2%) shows no clear feature and remains consistent with the zero line below ∼ 8 keV. As shown in the figure, the simulated first and second PCs are generally consistent with the ones from the data. In panel D, we show the PCs when absorption effects are included. The absorption has little influence on the first and second components, which contribute the overwhelming majority (≥ 98%) of the total variability in the simulations.

§ DISCUSSION

In this work, we study the variability patterns of Ark 564 with the PCA method using XMM-Newton observations. We found that the source shows stable variability components (PCs) in different epochs spanning > 10 years. More importantly, although its spectra can be described by both the warm corona model and the reflection model, its PCs favor the reflection scenario, which reproduces the observed PCs well. In comparison, the warm corona model fails to reproduce the second principal component derived from the data of Ark 564. The origin of the soft excess is still an open question, and the origin in different sources may be different <cit.>. Previous spectroscopic studies have suggested different scenarios for the soft excess in Ark 564. <cit.> studied XMM-Newton observations of many type 1 AGNs, including Ark 564. They found that a relativistically blurred photoionized disc reflection model could successfully reproduce the continuum shape, including the soft excess.
<cit.> showed that the soft excess flux in Ark 564 is significantly variable on timescales as short as ∼ 0.5-1 ks, and that the soft excess could correspond to the disk reflection emission. Conversely, <cit.> found that photons in the 4-10 keV energy band lag behind those in 0.2-0.5 keV by ∼1.8 ks, thus excluding the possibility that the soft excess in Ark 564 is reprocessed emission of the primary X-ray spectrum. They proposed that the soft excess originates from an optically thick corona in addition to the hot one responsible for the power-law emission. This scenario is further supported by the work of <cit.> and <cit.>, who reported that the X-ray spectra of Ark 564 can be well fitted by a double Comptonization model: one component for the soft excess and the other for the hard X-ray power-law component. <cit.> studied 26 AGNs, including Ark 564, with the PCA method and built a library of different PCs in order to quickly distinguish variability behaviors in AGNs. For the soft excess and the hard excess present in the third component from the PCA, <cit.> proposed that they could not be due to a Comptonization or bremsstrahlung process in the warm corona, since such processes cannot extend to high enough energies to generate a simultaneous hard excess. In their simulations, due to the lack of information on the physical parameters, a simple flux ratio was assumed between the different components, with the varying factor of each flux assigned certain fixed values. To allow a comparison with their work, in this study we incorporate the results from the spectral analysis, and were thus able to obtain the variation ranges of the relevant physical parameters accurately. This is important because the shape and the relative strength of the features in the simulated PCs are, to a great extent, determined by the variation ranges of the physical parameters. Therefore, by incorporating the parameter ranges from the spectroscopy, the PCA can more realistically reflect the consistency between the variability pattern of the real data and that of the models. Our simulations indicate that the warm corona model fails to generate the observed pivoting component (the second PC). Compared with the one from the real data, the second principal component produced by the warm corona model has a different shape and origin: the simulated pivoting PC is due to the variations of the two normalization parameters, which differs from the origin of the one from the real data (Figure <ref>), believed to be caused by a power law varying in its photon index Γ <cit.>. Apart from a pure reflection origin, recent studies have shown that a combination of relativistic reflection and a warm corona is likely a natural scenario responsible for the soft excess <cit.>. Theoretically, <cit.> proposed a hybrid scenario that self-consistently combines the effects of the reflection and the warm corona. According to their model, the accretion energy released in the inner disk is distributed between a warm corona, a hot corona, and the disk. As the fraction of the energy dissipated in the warm corona and the hot corona varies, the soft excess shows a variety of shapes and sizes. For the case of Ark 564, <cit.> simultaneously fitted its time lags and flux spectra with XMM-Newton and NuSTAR observations. Their analysis showed that a blackbody component, in addition to the disk reflection component, is required to describe the flux spectra.
The blackbody component represents the Comptonization off a warm corona, so both the warm corona and the relativistic reflection likely contribute to the soft excess, similar to the scenario proposed in the "hybrid" model. It is also possible that the soft excess is not connected to a warm corona at all. The warm corona is proposed to be a Comptonizing region at the surface of the accretion disk in AGNs, and there are still concerns about its physical plausibility. The theoretical works by <cit.> and <cit.> showed that strong magnetic pressure support is a prerequisite for generating a τ ∼ 10 warm corona in hydrostatic equilibrium. <cit.> demonstrated that only when the gas densities and temperatures are in a limited range can the warm corona generate a smooth soft excess, indicating that the warm corona must be located close to the hot corona region in order to reach a sufficiently high ionization. In addition, <cit.> found that the formation of the warm corona is prevented by thermal instability in the case of a low accretion rate. This statement is supported by <cit.>, which indicates that a system with a low accretion rate is unable to provide enough energy to sustain a warm corona. Although the accretion rate of Ark 564 is high, making the aforementioned scenario unlikely, there is at present still no clear evidence proving the existence of a warm corona in this system. In our fitting results there are several absorption edges, which likely reflect elements and their charge states in the absorber. In the case of relativistically blurred reflection, the energy ranges of the four edges are 0.3-0.42 keV, 0.509-0.54 keV, 1.0-1.19 keV, and 1.17-1.43 keV, respectively. Absorption edges at similar energies in this source were also reported in the work of <cit.>. By comparing these edges with the identification results in <cit.>, we found that the edges below 0.5 keV could be due to C V, C VI, S XIII, Ar XII, and Si XII, while the edges around 0.52 keV, 1.1 keV, and 1.3 keV arise mainly from O I-II, Ne X plus Fe XXIII, and Mg XI, respectively. These narrow absorption features suggest that the soft excess in Ark 564 is unlikely to have been caused by complex absorption. <cit.> proposed that the soft excess could be due to strong, relativistically smeared, partially ionized absorption, which may come from a differentially rotating, outflowing disc wind. In this scenario, no narrow absorption lines should be present in the observed spectra, since they are smeared and reshaped into a "quasi-continuum" by a very large dispersion velocity. The detection of narrow absorption edges in this work therefore disagrees with an absorption origin of the soft excess in this source. This finding is also consistent with the conclusion of <cit.>, who found that an unphysically large smearing velocity (∼ 0.8c) is required in Ark 564 to generate the observed smooth soft excess emission through absorption. In this work, we show that the PCA method is capable of distinguishing different physical interpretations of the soft excess in AGNs, and that it can be applied to other NLSy1s and BLSy1s to study their variability patterns and the origin of their soft excess. Furthermore, the PCA combined with spectroscopy will be a powerful tool for exploring the more complicated "hybrid" scenario, in which a warm corona and blurred reflection contribute together, in the future.
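As a concrete illustration of the PCA procedure summarized in the Principal Component Analysis section (segment extraction, mean subtraction, fractional residuals, SVD, and shot-noise perturbations for the error estimate), a minimal Python/NumPy sketch is given below. It is written here for clarity only: the array shapes, the function names, and the choice of 200 perturbation trials are illustrative assumptions and do not reproduce the actual pipeline applied to the background-subtracted EPIC-PN spectra of the 5 ks segments.

```python
import numpy as np

def pca_components(counts, exposure, background=None):
    """PCA of segment spectra following the steps listed above.

    counts     : (n_seg, n_ebins) array of source counts per 5 ks segment (assumed input)
    exposure   : segment exposure in seconds, used to convert counts to rates
    background : optional (n_seg, n_ebins) array of scaled background counts
    Returns the eigen-spectra (rows) and the fractional variability of each component.
    """
    rates = counts / exposure
    if background is not None:
        rates = rates - background / exposure          # background subtraction
    f_mean = rates.mean(axis=0)                        # step (2): mean spectrum
    f_res = (rates - f_mean) / f_mean                  # step (3): fractional residuals
    # step (4): SVD of the residual matrix; rows of vt are the eigen-spectra
    u, s, vt = np.linalg.svd(f_res, full_matrices=False)
    frac_var = s**2 / np.sum(s**2)                     # fractional variability per PC
    return vt, frac_var

def pca_errors(counts, exposure, n_trials=200, rng=np.random.default_rng(0)):
    """Step (5): perturb the counts with Poisson (shot) noise and repeat the PCA."""
    comps = []
    for _ in range(n_trials):
        perturbed = rng.poisson(counts)                # noise commensurate with shot noise
        vt, _ = pca_components(perturbed, exposure)
        comps.append(vt[:3])                           # keep the first three PCs
    return np.std(comps, axis=0)                       # spread -> error estimate
```

Note that the sign of each eigen-spectrum returned by the SVD is arbitrary, so in practice the perturbed components must be sign-aligned with the unperturbed ones before taking the spread in the error estimate.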
§ ACKNOWLEDGEMENT We thank the anonymous referee for his/her careful reading of the manuscript and useful comments and suggestions. This research has made use of data obtained from the High Energy Astrophysics Science Archive Research Center (HEASARC), provided by NASA's Goddard Space Flight Center. This research made use of NASA's Astrophysics Data System. We thank M. L. Parker for the PCA code support. Lyu is supported by Hunan Education Department Foundation (grant No. 21A0096). X. J. Yang is supported by the National Natural Science Foundation of China (NSFC 12122302 and 12333005). Z.Y.F. is grateful for the support from the Postgraduate Scientific Research Innovation Project of Hunan Province (grant No. CX20220662) and the Postgraduate Scientific Research Innovation Project of Xiangtan University (grant No. XDCX2022Y072).
http://arxiv.org/abs/2406.19078v1
20240627110637
Distributed MIMO Networks with Rotary ULAs for Indoor Scenarios under Rician Fading
[ "Eduardo Noboro Tominaga", "Onel Luis Alcaraz López", "Tommy Svensson", "Richard Demo Souza", "Hirley Alves" ]
eess.SP
[ "eess.SP" ]
IEEEexample:BSTcontrol Distributed MIMO Networks with Rotary ULAs for Indoor Scenarios under Rician Fading Eduardo N. Tominaga, Student Member, IEEE, Onel L. A. López, Senior Member, IEEE, Tommy Svensson, Senior Member, IEEE, Richard D. Souza, Senior Member, IEEE, Hirley Alves, Member, IEEE This research was financially supported by Research Council of Finland (former Academy of Finland), 6Genesis Flagship (grant no. 346208), European Union’s Horizon 2020 research and innovation programme (EU-H2020), Hexa-X-II (grant no. 101095759) project, the Finnish Foundation for Technology Promotion, and in Brazil by CNPq (305021/2021-4, 402378/2021-0) and RNP/MCTIC 6G Mobile Communications Systems (01245.010604/2020-14). (Corresponding author: Eduardo N. Tominaga) Eduardo N. Tominaga, Onel L. A. López, and Hirley Alves are with the Centre for Wireless Communications (CWC), University of Oulu, Finland. (E-mail: {eduardo.noborotominaga,onel.alcarazlopez,hirley.alves}@oulu.fi). Tommy Svensson is with the Department of Electrical Engineering, Chalmers University of Technology, 412 96 Gothenburg, Sweden (E-mail: tommy.svensson@chalmers.se). Richard Demo Souza is with the Department of Electrical and Electronics Engineering, Federal University of Santa Catarina (UFSC), Florianópolis, 88040-370, Brazil. (E-mail: richard.demo@ufsc.br). July 1, 2024 ========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================= § ABSTRACT The Fifth-Generation (5G) wireless communications networks introduced native support for Machine-Type Communications (MTC) use cases. Nevertheless, current 5G networks cannot fully meet the very stringent requirements regarding latency, reliability, and number of connected devices of most MTC use cases. Industry and academia have been working on the evolution from 5G to Sixth Generation (6G) networks. One of the main novelties is adopting Distributed Multiple-Input Multiple-Output (D-MIMO) networks. However, most works studying D-MIMO consider antenna arrays with no movement capabilities, even though some recent works have shown that this could bring substantial performance improvements. In this work, we propose the utilization of Access Points (APs) equipped with Rotary Uniform Linear Arrays (RULAs) for this purpose. 
Considering a spatially correlated Rician fading model, the optimal angular position of the RULAs is jointly computed by the central processing unit using particle swarm optimization as a function of the position of the active devices. Considering the impact of imperfect positioning estimates, our numerical results show that the RULAs's optimal rotation brings substantial performance gains in terms of mean per-user spectral efficiency. The improvement grows with the strength of the line-of-sight components of the channel vectors. Given the total number of antenna elements, we study the trade-off between the number of APs and the number of antenna elements per AP, revealing an optimal number of APs for the cases of APs equipped with static ULAs and RULAs. 6G, Distributed MIMO, Location-Based Beamforming, Particle Swarm Optimization, Machine-Type Communications. § INTRODUCTION The Fifth Generation (5G) of wireless communication networks is currently under deployment and in advanced phases of standardization worldwide. Besides offering enhanced coverage capabilities and data rates for human-type communication applications such as mobile broadband internet connectivity, which corresponds to a natural evolution of previous generations, 5G also offers services for a variety of Machine-Type Communication (MTC) applications <cit.>. 5G MTC use cases are traditionally split into two broad categories: massive MTC (mMTC) and Ultra-Reliable Low Latency Communications (URLLC), also known as critical MTC (cMTC). The former aims to provide wireless connectivity to a massive number of low-power and low-complexity devices in applications with moderate data rates, latency, and reliability requirements. The latter category comprises applications with very stringent requirements in terms of latency and reliability. For instance, in an industrial setting, mMTC would correspond to sensors and actuators monitoring and controlling non-critical processes. In contrast, cMTC connectivity would enable critical control and safety systems to operate wirelessly <cit.>. The requirements of the different mMTC and cMTC use cases are foreseen to become even more stringent in future Sixth Generation (6G) networks. Thus, academia and industry have joined efforts in enhancing current technologies and developing new ones <cit.>. One of the novelties that have mostly attracted the attention of academia and industry is Distributed Multiple-Input Multiple Output (D-MIMO), often referred to as Cell-Free massive MIMO <cit.>. Instead of having a single base station equipped with several antenna elements serving a coverage area, the antenna elements are distributed among multiple Access Points (APs) over the coverage area. The APs are connected to a common Central Processing Unit (CPU) through fronthaul links. Such an approach provides a more uniform wireless coverage, enhancing network performance. The vast majority of works investigating the performance of D-MIMO networks consider fixed antenna arrays, i.e., antenna arrays with no movement capabilities. Nevertheless, the idea of antenna arrays that can move has gained attention among the research community since some recent works have shown that antenna movements can substantially improve the quality of wireless links <cit.>, as discussed in the following subsection. §.§ Related Works The utilization of antenna arrays with movement capabilities is not new. 
For instance, the authors in <cit.> proposed a Direction of Arrival (DOA) estimation method that utilizes a Rotary Uniform Linear Array (RULA) and achieves satisfactory performance for under-determined DOA estimations, where the number of source signals can be larger than the number of receive antenna elements. The achievable rate performance of point-to-point Line-of-Sight (LoS) links with both the transmitter and receiver equipped with a RULA was studied in <cit.> and shown to approach the LoS capacity at any desired Signal-to-Noise Ratio (SNR). López et. al. <cit.> and Lin et. al. <cit.> proposed the utilization of RULAs for wireless energy transfer. They studied a system where a power beacon equipped with a RULA constantly rotates and transmits energy signals in the downlink to several low-power devices. The devices harvest energy from the transmitted signal to recharge their batteries. The authors in <cit.> developed and tested a prototype for hybrid mechanical-electrical beamforming for mmWave WiFi. Their experimental results in a point-to-point setup showed that the optimal rotation of the antenna array can bring significant throughput improvements for both LoS and Non-LoS (NLoS) scenarios. More recently, movable antennas, which can move along one or several directions within a confined area, have been proposed <cit.>. The main drawback of their utilization is that each movable antenna requires at least two cables and two servo motors, representing high deployment, operation, and maintenance costs. A different case is that of UAVs operating as flying base stations <cit.>, which can also be interpreted as APs with movement capabilities. Their most notable advantage is that they present several degrees of freedom for movement since a UAV can be positioned at any point of the coverage area, at any height, and their position can be easily changed. However, their main drawbacks are limited carrying capacity, very high power consumption, and the consequent need for frequent recharges <cit.>. To the best of the authors' knowledge, this paper is the first work that investigates a D-MIMO network implemented with APs equipped with RULAs[Preliminary results of this work were published in the conference version <cit.>. In that work, we consider a similar scenario but only a single AP. Since the optimization problem studied in that work has only one variable (we optimize the angular position of only a single AP), brute force search was used instead of PSO.]. The most important advantages of the RULAs, when compared to the alternative approaches, are the lower deployment, operation, and maintenance costs since each AP requires a single servo-motor to rotate its Uniform Linear Array (ULA). §.§ Contributions and Organization of the Paper In this work, we evaluate the performance of D-MIMO networks for indoor industrial networks. We consider the uplink of a fully centralized system, where all APs simultaneously serve all the active Machine-Type Devices (MTDs). All APs are equipped with static ULAs or RULAS and connected to a common CPU through fronthaul links. The contributions of this work are listed below: * We propose a scheme for joint configuration of the angular positions of all the RULAs in the system to maximize the mean per-user achievable Spectral Efficiency (SE). The scheme relies on known estimates the positions of the MTDs and Particle Swarm Optimization (PSO). 
* Adopting a spatially correlated Rician fading channel model, we evaluate the performance of the different setups considering different values of the Rician factor. Our numerical results show that the optimal rotation of the RULAs can substantially improve the mean-per-user achievable SE and that the performance gains grow with the strength of the LoS components of the channel vectors. * We propose a positioning error model and define different levels of positioning accuracy based on the requirements specified by the Third Generation Partnership Project (3GPP) standards. The system's performance is evaluated as a function of the positioning accuracy. Our numerical results show that the optimal rotation of the RULAs brings performance gains even when the system's positioning accuracy is poor. * We evaluate the trade-off between the number of APs and antennas per AP given a total number of antenna elements. Our numerical results show that for both the cases of APs equipped with static ULAs and APs equipped with RULAs, there is an optimal number of APs that achieves the highest SE values. Table <ref> lists the acronyms used throughout this paper alphabetically. This paper is organized as follows. Section <ref> presents the system model. Section <ref> introduces the positioning error model. Section <ref> describes the proposed mechanism for optimizing the angular position of the RULAs. Section <ref> presents and discusses the numerical results. Finally, Section <ref> concludes the paper. Notation: lowercase boldface letters denote column vectors, while boldface uppercase letters denote matrices. a_i is the i-th element of the column vector a, while a_i is the i-th column of the matrix A. I_M is the identity matrix with size M× M. The superscripts (·)^T and (·)^H denote the transpose and the conjugate transpose of a vector or matrix, respectively. The scalar quantity's magnitude or the set's cardinality is denoted by |·|, and ‖·‖ denotes the Euclidian norm of a vector (2-norm). We denote the one-dimensional uniform distribution with bounds a and b by 𝒰(a,b). We denote the multivariate Gaussian distribution with mean 𝐚 and covariance 𝐁 by 𝒩(𝐚,𝐁). § SYSTEM MODEL We consider a square coverage area with dimensions l× l m^2. Q APs serve the coverage area, each equipped with a RULA of S half-wavelength spaced antenna elements. The APs are at height h_AP. A large number K_total of single-antenna MTDs are distributed on the coverage area. The positions of the MTDs are fixed for a relatively large period. From this large number of MTDs, a random subset of K MTDs are active and seek to transmit data in the uplink in each time slot. Let (x_k,y_k) denote the coordinates of the k-th MTD. For simplicity, we consider that the antenna elements of all MTDs are positioned at the same height h_MTD <cit.>. The system model is illustrated in Fig. <ref>. We consider fully centralized processing: all the APs are connected to a common CPU through fronthaul connections. The CPU is responsible for performing the following signal-processing tasks: * Computation of the estimates of the positions of the K_total MTDs using an indoor localization method. * MTD's activity detection and identification. * Computation of the optimal angular positions of the RULAs in each time slot based on the locations of the active MTDs. * Scheduling of the uplink data transmissions. * Channel State Information (CSI) acquisition through pilot sequences. * Computation of centralized receive combining vectors based on the channel estimates. 
* Linear centralized uplink data decoding.

Further details on each of these tasks are provided in the subsequent sections of this work.

§.§ Channel Model

We adopt a spatially correlated Rician fading channel model <cit.>. Let h_kq∈ℂ^S×1 denote the channel vector between the k-th MTD and the q-th AP, which can be modeled as <cit.> h_kq=√(κ/(1+κ)) h_kq^los + √(1/(1+κ)) h_kq^nlos, where κ is the Rician factor, h_kq^los∈ℂ^S×1 is the deterministic LoS component, and h_kq^nlos∈ℂ^S×1 is the random NLoS component. The deterministic LoS component is given by h_kq^los=√(β_kq)[ 1; exp(-j2πΔsin(ϕ_kq)); exp(-j4πΔsin(ϕ_kq)); ⋮; exp(-j2π(S-1)Δsin(ϕ_kq)) ], where β_kq is the power attenuation owing to the distance between the k-th MTD and the q-th AP, and ϕ_kq∈[0,2π] is the azimuth angle relative to the boresight of the ULA of the q-th AP. Meanwhile, the random NLoS component is distributed as h_kq^nlos∼𝒞𝒩(0,R_kq). Note that h_kq∼𝒞𝒩(√(κ/(1+κ)) h_kq^los, R_kq/(κ+1)), where R_kq∈ℂ^S× S is the positive semi-definite covariance matrix describing the spatial correlation of the NLoS components. The spatial covariance matrices can be (approximately) modeled using the Gaussian local scattering model <cit.>. Specifically, the s-th row, m-th column element of the correlation matrix is [R_kq]_s,m=(β_kq/N)∑_n=1^N exp[jπ(s-m)sin(ψ_kq,n)] exp{-(σ_ψ^2/2)[π(s-m)cos(ψ_kq,n)]^2}, where N is the number of scattering clusters, ψ_kq,n is the nominal angle of arrival for the n-th cluster, and σ_ψ is the angular standard deviation.

§.§ Three-Phase Random Access

Herein, we assume that the CPU has estimates of the positions of all MTDs in the network. We adopt a variation of the three-phase Random Access (RA) scheme introduced in <cit.> for scheduled massive access. Each of its phases is described below:

* Phase 1: The active users transmit non-orthogonal uplink pilots for user identification and activity detection.
* Phase 2: The CPU identifies the set of active users and determines the scheduling for the uplink transmissions. Based on the positions of the K devices that will be active in each time slot, the CPU computes the optimal angular positions of the RULAs for each time slot. Finally, the system broadcasts a common downlink feedback message to assign each user a time slot and an orthogonal pilot.
* Phase 3: The MTDs simultaneously transmit their orthogonal pilots and data in their scheduled slot. As the CPU jointly controls the APs, the RULAs perform their rotations in each time slot.

The three-phase RA scheme is illustrated in Fig. <ref>. Phase 1 corresponds to Slot 1, Phase 2 is performed during Slots 1 and 2, and Phase 3 comprises Slot 2 onwards. In Slot 1, 6 MTDs become active and transmit signals s_k, k∈{1,…,6}, intended for activity detection and user identification. Then, K=2 MTDs are scheduled for each subsequent time slot. Each of them is assigned one of the K=2 orthogonal pilot sequences and transmits its data signal, x_k, k∈{1,…,6}. In this work, we consider that the scheduling of MTDs for Phase 3 is random. Moreover, we assume MTC scenarios where the devices are stationary, such as smart utility meters <cit.> or video surveillance cameras <cit.>. The location of an MTD is obtained as soon as it is registered to the network[Indoor positioning technologies are discussed in <cit.>.]. We assume that the duration of a time slot is long enough to accommodate the computation of the optimal angular positions of the RULAs, their rotations, and the uplink transmissions from the K MTDs. The orthogonal pilots transmitted by the MTDs are used for CSI estimation. Then, the estimated channel coefficients are used for data decoding, as described in the next subsection.
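Before turning to the signal model, the following sketch (Python/NumPy, written by us for illustration rather than taken from the authors' code) makes the channel model above concrete by generating one spatially correlated Rician channel vector h_kq. The half-wavelength spacing Δ = 1/2, the cluster parameters, and the fact that β_kq is passed in as a precomputed path-loss value are assumptions consistent with the simulation parameters given later.

```python
import numpy as np

def steering_vector(phi, S, delta=0.5):
    """LoS array response of an S-element ULA with spacing of delta wavelengths."""
    s = np.arange(S)
    return np.exp(-1j * 2 * np.pi * s * delta * np.sin(phi))

def local_scattering_cov(phi, S, beta, n_clusters=6, angle_spread_deg=5.0,
                         cluster_span_deg=40.0, rng=np.random.default_rng(1)):
    """Gaussian local scattering covariance R_kq (S x S) for nominal azimuth phi."""
    sigma = np.deg2rad(angle_spread_deg)
    psi = phi + np.deg2rad(rng.uniform(-cluster_span_deg, cluster_span_deg, n_clusters))
    d = np.subtract.outer(np.arange(S), np.arange(S))          # (s - m) index differences
    R = np.zeros((S, S), dtype=complex)
    for p in psi:
        R += np.exp(1j * np.pi * d * np.sin(p)) \
             * np.exp(-0.5 * (sigma * np.pi * d * np.cos(p)) ** 2)
    return beta * R / n_clusters

def rician_channel(phi, beta, S, kappa, rng=np.random.default_rng(2)):
    """Spatially correlated Rician channel vector h_kq."""
    h_los = np.sqrt(beta) * steering_vector(phi, S)
    R = local_scattering_cov(phi, S, beta)
    jitter = 1e-10 * np.trace(R).real / S * np.eye(S)          # numerical safeguard
    L = np.linalg.cholesky(R + jitter)                          # correlated CN(0, R) samples
    h_nlos = L @ (rng.standard_normal(S) + 1j * rng.standard_normal(S)) / np.sqrt(2)
    return np.sqrt(kappa / (1 + kappa)) * h_los + np.sqrt(1 / (1 + kappa)) * h_nlos
```

In this sketch the path-loss value β_kq would come from the log-distance model described in the simulation parameters; here it is simply supplied as a number, and the collective channel vector h_k is obtained by stacking the per-AP vectors.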
§.§ Signal Model

Herein, we consider Phase 3 of the RA scheme: an uplink scenario where K active MTDs simultaneously transmit signals to the Q APs. Let M=QS denote the total number of receive antenna elements. The collective vector of wireless channel coefficients between the k-th MTD and the Q APs is h_k=[h_k1^T,h_k2^T,…,h_kQ^T]^T ∈ℂ^M×1. The matrix H∈ℂ^M× K containing the channel vectors of the K MTDs can be written as H=[h_1,h_2,…,h_K]. The M× 1 collective received signal vector can be written as y=√(p)Hx+n, where p is the fixed uplink transmit power, which is the same for all MTDs, x∈ℂ^K× 1 is the vector of symbols simultaneously transmitted by the K MTDs, and n∈ℂ^M× 1 is the vector of additive white Gaussian noise samples such that n∼𝒞𝒩(0_M×1,σ^2_nI_M). Let V∈ℂ^M× K be a linear detector matrix used for the joint decoding of the signals transmitted by the K MTDs at all the APs. The received signal after the linear detection operation is split into K streams and given by r=V^Hy=√(p)V^HHx+V^Hn. Let r_k and x_k denote the k-th elements of r and x, respectively. Then, the received signal corresponding to the k-th MTD is r_k=√(p)v_k^Hh_kx_k + √(p)v_k^H∑_k'≠ k^K h_k'x_k' + v_k^Hn, where the first term is the desired signal, the second term is the inter-user interference, the third term is the noise, and v_k and h_k are the k-th columns of the matrices V and H, respectively. From (<ref>), the Signal-to-Interference-plus-Noise Ratio (SINR) of the uplink transmission from the k-th MTD to all the APs is given by γ_k=p|v_k^Hh_k|^2 / (p∑_k'≠ k^K |v_k^Hh_k'|^2+σ^2_n‖v_k‖^2). The receive combining matrix V is computed as a function of the matrix of estimated channel vectors Ĥ∈ℂ^M× K, Ĥ=[ĥ_1,…,ĥ_K]. In this work, we adopt the centralized Zero Forcing (ZF) combining scheme[Centralized ZF for distributed massive MIMO in indoor industrial scenarios was also adopted in <cit.>. Note that the SE performance obtained with ZF approaches that with MMSE in the high SINR regime <cit.>.]. The receive combining matrix is computed as <cit.> V=Ĥ(Ĥ^HĤ)^-1.

§.§ Performance Metrics

To illustrate the improvement of the deterministic LoS links owing to the optimal rotations of the RULAs, we adopt the mean per-user achievable SE as the performance metric. The achievable uplink SE of the k-th MTD is R_k=𝔼_H{log_2(1+γ_k)}. Then, the mean per-user achievable uplink SE is obtained by averaging over the achievable uplink SE of all the K active MTDs, i.e., R=(1/K)∑_k=1^K R_k.

§.§ Imperfect CSI Model

The estimated channel vector of the k-th MTD, ĥ_k∈ℂ^M×1, can be modeled as the sum of the true channel vector and a random error vector as <cit.> ĥ_k=h_k+h̃_k, where h̃_k∼𝒞𝒩(0,σ_csi^2I_M) is the vector of channel estimation errors. Note that the true channel realizations and the channel estimation errors are uncorrelated. The parameter σ_csi^2 indicates the quality of the channel estimates. Let ρ=p/σ^2_n denote the transmit SNR. Assuming orthogonal pilot sequences during the uplink data transmission phase and least squares channel estimation, σ_csi^2 can be modeled as a decreasing function of ρ as <cit.> σ_csi^2=1/(Kρ).
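The following short sketch (Python/NumPy; again our own illustration, not the authors' simulator) shows how, for one realization of the channel matrix H, the centralized ZF combiner, the per-user SINR γ_k, and the mean per-user achievable SE defined above would be evaluated, with the imperfect-CSI error variance σ_csi^2 = 1/(Kρ) added to the true channels.

```python
import numpy as np

def zf_combiner(H_hat):
    """Centralized ZF combining matrix V = H_hat (H_hat^H H_hat)^{-1}."""
    G = H_hat.conj().T @ H_hat
    return H_hat @ np.linalg.inv(G)

def mean_per_user_se(H, p, noise_var, rng=np.random.default_rng(3)):
    """Mean per-user achievable SE for one realization of H (M x K)."""
    M, K = H.shape
    rho = p / noise_var                                  # transmit SNR
    sigma_csi2 = 1.0 / (K * rho)                         # LS channel estimation error variance
    err = np.sqrt(sigma_csi2 / 2) * (rng.standard_normal((M, K))
                                     + 1j * rng.standard_normal((M, K)))
    H_hat = H + err                                      # imperfect CSI
    V = zf_combiner(H_hat)
    se = []
    for k in range(K):
        v_k = V[:, k]
        desired = p * np.abs(v_k.conj() @ H[:, k]) ** 2
        interf = p * sum(np.abs(v_k.conj() @ H[:, j]) ** 2 for j in range(K) if j != k)
        gamma_k = desired / (interf + noise_var * np.linalg.norm(v_k) ** 2)
        se.append(np.log2(1 + gamma_k))
    return np.mean(se)
```

Averaging this quantity over many channel realizations, and then over network realizations (i.e., MTD placements), yields the mean per-user SE reported in the numerical results.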
§ POSITIONING ERROR MODEL

As mentioned in Section <ref>, the CPU utilizes the information about the positions of the MTDs to compute the optimal rotations of the RULAs. However, the estimates of the positions are not perfect in practical systems. In this section, we propose a mathematical model for the positioning error and relate it to the positioning accuracy requirements proposed by 3GPP. Considering that all the MTDs are positioned at the same height h_MTD, the imperfect positioning impairment refers to the uncertainty on the position of the MTDs only along the (x,y) axes. Let p_k=(x_k,y_k) denote the true position of the k-th MTD, and p̂_k=(x̂_k,ŷ_k) denote the estimated position. The positioning error vector associated with the k-th MTD is e_k=p_k-p̂_k=(x_e,k,y_e,k), where x_e,k=x_k-x̂_k and y_e,k=y_k-ŷ_k are the x and y components of the positioning error vector, respectively. We can rewrite the positioning error vector in polar form as e_k=r_e,k∠θ_e,k, where r_e,k=√(x_e,k^2+y_e,k^2) and θ_e,k=tan^-1(y_e,k/x_e,k) are the magnitude and phase of the positioning error vector, respectively. We assume that the positioning error follows a bivariate Gaussian distribution with mean μ=[0 0]^T and covariance matrix Σ=σ_e^2I_2 <cit.>. Thus, the x and y components of the positioning error vector follow a Normal distribution: x_e,k,y_e,k∼𝒩(0,σ_e^2). Moreover, the magnitude of the positioning error vector follows a Rayleigh distribution: f_R(r|σ_e)=(r/σ_e^2)exp(-r^2/(2σ_e^2)), r≥ 0, where σ_e is the scale parameter. Finally, the angle of the positioning error vector follows a uniform distribution: θ_e,k∼𝒰(-π,π). Let r_e,k, i.e., the radius of the positioning error vector, represent the positioning accuracy. Under the bivariate Gaussian model, the positioning accuracy follows the Rayleigh distribution. Thus, we have 𝔼{r_e,k}=σ_e√(π/2) and Var{r_e,k}=((4-π)/2)σ_e^2. In 3GPP standards, the positioning accuracy requirements are typically specified in terms of the 95% quantile <cit.>. In 3GPP Rel-16, the positioning accuracy requirement specified for indoor scenarios was 3 m <cit.>. In 3GPP Rel-17, this requirement was further reduced to 20 cm <cit.>. The positioning accuracy requirement for beyond-5G and 6G networks in indoor factory scenarios is expected to be on the order of a few centimeters <cit.>.

§ OPTIMAL ANGULAR POSITIONS OF THE RULAS

In each time slot, K distinct MTDs are active. As shown in our previous work <cit.>, a distinct optimal angular position of the RULA exists for each subset of K positions of active MTDs. Moreover, high-speed industrial servo motors can change their positions in a few milliseconds <cit.>, so they can track slow-to-moderate network dynamics in practice. Let θ_q∈[-π,π] denote the rotation of the RULA of the q-th AP. A RULA and its angular position with respect to two active MTDs, before and after its rotation, are illustrated in Fig. <ref>. The angle between the k-th MTD and the q-th AP after the rotation of the RULA is given by <cit.> ϕ_kq':=ϕ_kq+θ_q. The optimal rotations of the RULAs, θ_q, ∀ q∈{1,…,Q}, are jointly computed by the CPU as a function of p̂_k, ∀ k∈{1,…,K}. Owing to the large number of variables to be jointly optimized and the high non-linearity of this problem, it is solved using particle swarm optimization (PSO), which is presented in the next subsection.

§.§ Particle Swarm Optimization

PSO <cit.> is very suitable for solving problems where a function's global maximum or minimum is very difficult to find. The algorithm works over a population of candidate solutions, known as agents or particles, and moves them in the search space according to their current positions and velocities.
Each particle's movement is influenced by its best-known position and the global best-known position in the search space. The local and global best positions are updated on each iteration, and the algorithm is expected to move the swarm of particles toward the optimal solution. PSO was initially proposed in <cit.> and intended to simulate social behavior, such as the movement of organisms in a bird flock or fish school. It has been used to solve many optimization problems in communication systems, such as optimal deployment, node localization, clustering, and data aggregation in wireless sensor networks <cit.>. It has also been used to design antennas with a specific desired side-lobe level or antenna element positions in a non-uniform array <cit.>. In communication systems, it has also been used to compute the optimal precoding vector that maximizes the throughput of a multi-user MIMO system <cit.>, to optimize the scheduling in the downlink of a multi-user MIMO system <cit.>, and to initialize the channel estimates for MIMO-OFDM receivers that jointly perform channel estimation and decoding <cit.>. The parameters of the PSO algorithm are listed in Table <ref>, while its pseudo-code is in Algorithm <ref>. The inertial weight w controls the particle's tendency to continue in its current direction, while c_1 and c_2 are the acceleration coefficients that control the influence of the personal and global best positions, respectively. In our optimization problem, the number of variables to be jointly optimized equals the number of APs, Q. Each particle is a candidate set of angular positions for the Q RULAs, i.e. {θ_1,…,θ_Q}, which corresponds to a point in the Q-dimensional space with boundaries [0,π]^Q×1. The objective function depends on the estimates of the positions of the MTDs, i.e., f(p_1,…,p_K), and is described in the next Subsection. There are two possible termination criteria: i) the maximum number of iterations is reached, or ii) the relative change in the value of the global best over a predefined number of maximum stall iterations is less than the tolerance ϵ. §.§ Location-Based Beamforming Before Phase 3 of the three-phase RA scheme, the CPU does not have any CSI information. Thus, the only information available for the computation of the optimal angular positions of the RULAs is the estimates of the positions of the MTDs that will be active in each time slot. Thus, we adopt a location-based beamforming <cit.> approach to estimate the corresponding objective function value. Herein, we assume that the locations of all the APs are perfectly known. Given p_k, ∀ k, the CPU computes the estimates for the distances and azimuth angles between all the APs and the MTDs, i.e. d̂_k,q and ϕ̂_k,q, ∀ k, ∀ q. Then, it computes pseudo channel vectors assuming full-LoS propagation as h_kq^pseudo=√(β_kq)[ 1; exp(-j2πΔsin(ϕ_kq)); exp(-j4πΔsin(ϕ_kq)); ⋮; exp(-j2π(S-1)Δsin(ϕ_kq)); ], where β_kq is the estimated large-scale fading coefficient, which is computed as a function of the estimated distance d_kq considering a known channel model. Then, receive combining vectors are computed as a function of the pseudo channel vectors according to (<ref>). Finally, the cost function is obtained by computing the mean per-user achievable SE utilizing the pseudo-channel vectors and the corresponding receive combining vectors in (<ref>), (<ref>) and (<ref>). 
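To connect the positioning error model, the location-based cost function, and the PSO described above, the sketch below (Python/NumPy; a simplified rendering written by us, not the authors' implementation) perturbs the true MTD positions with the bivariate Gaussian error model calibrated to a 95%-quantile accuracy requirement and runs a bare-bones PSO over the Q RULA rotation angles. The helper `mean_se_from_pseudo_channels` is a hypothetical stand-in for the location-based cost function (full-LoS pseudo channels plus the ZF/SE routine sketched earlier), and the PSO hyperparameters here are generic choices rather than the values used in the paper.

```python
import numpy as np

def noisy_positions(pos, accuracy_95, rng=np.random.default_rng(4)):
    """Bivariate Gaussian position estimates; sigma_e set from the 95% Rayleigh quantile."""
    sigma_e = accuracy_95 / np.sqrt(-2.0 * np.log(1.0 - 0.95))
    return pos + rng.normal(0.0, sigma_e, size=pos.shape)

def pso_rotations(cost, Q, n_particles=30, n_iter=100, w=0.7, c1=1.5, c2=1.5,
                  bounds=(0.0, np.pi), rng=np.random.default_rng(5)):
    """Minimal PSO over the Q RULA rotation angles; `cost` maps angles -> (-mean SE)."""
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, Q))        # particle positions (angles)
    v = np.zeros_like(x)                                   # particle velocities
    pbest = x.copy()
    pbest_val = np.array([cost(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()                 # global best
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([cost(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g

# Illustrative wiring (mean_se_from_pseudo_channels is a hypothetical stand-in for the
# location-based cost function described above):
# est_pos = noisy_positions(true_pos, accuracy_95=0.2)      # Rel-17-like accuracy of 20 cm
# theta_opt = pso_rotations(lambda th: -mean_se_from_pseudo_channels(est_pos, th), Q=4)
```

Restricting the search to rotations in [0, π) is sufficient because a ULA rotated by π coincides with the original array, which is consistent with the search-space boundaries stated above.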
§ NUMERICAL RESULTS

In this section, we present Monte Carlo simulation results that evince the performance improvements obtained by APs equipped with the proposed RULAs when compared to APs equipped with static ULAs. We keep the total number of antenna elements constant and study the trade-off between the number of APs and the number of antenna elements per AP. The four different setups considered for the simulations are illustrated in Fig. <ref>.

§.§ Simulation Parameters

The power attenuation due to the distance (in dB) is modelled using the log-distance path loss model as β_kq=-L_0-10ηlog_10(d_kq/d_0), where d_0 is the reference distance in meters, L_0 is the attenuation at the reference distance (in dB), η is the path loss exponent, and d_kq is the distance between the k-th device and the q-th AP in meters. The attenuation at the reference distance is calculated using the Friis free-space path loss model and is given by L_0=20log_10(4π d_0/λ), where λ=c/f_c is the wavelength in meters, c is the speed of light, and f_c is the carrier frequency. Unless stated otherwise, the values of the simulation parameters adopted in this work are listed in Table <ref>. Considering the typical values of M and h_AP defined in Table <ref>, every pair of AP and MTD is in the far field (please refer to Appendix <ref>). The active MTDs are uniformly distributed over the square coverage area, that is, x_k,y_k∼𝒰(0,l). Moreover, the adopted parameters for the PSO algorithm are listed in Table <ref>. The noise power (in Watts) is given by σ^2_n=N_0BN_F, where N_0 is the Power Spectral Density (PSD) of the thermal noise in W/Hz, B is the signal bandwidth in Hz, and N_F is the noise figure at the receivers. For the computation of the correlation matrices R_kq, ∀ k, ∀ q, we consider N=6 scattering clusters, ψ_kq,n∼𝒰[ϕ_kq-40°,ϕ_kq+40°], and σ_ψ=5°. The F-quantile of a Rayleigh distribution with scale parameter σ_e is given by Q(F;σ_e)=σ_e√(-2ln(1-F)). Considering the proposed positioning error model and based on the 3GPP specifications, we list examples of positioning accuracy requirements for pessimistic, reasonable, and optimistic scenarios, together with their corresponding values of σ_e^2, in Table <ref>.

§.§ Simulation Results and Discussions

The mean per-user achievable SE is obtained by averaging over 32 network realizations, i.e., distinct sets of positions for the K active MTDs. For each network realization, the achievable SE of the K MTDs is obtained by averaging over 250 channel realizations, i.e., distinct realizations of the channel matrix H. The mean per-user achievable SE R as a function of the Rician factor κ, for different values of Q and l, is shown in Fig. <ref>. We observe that, in the case of APs equipped with static ULAs, the mean per-user achievable SE decreases with the Rician factor for any values of Q and l. This happens because the correlation among the channel vectors increases with κ, thus degrading the performance obtained with the ZF combiner. On the other hand, in the case of APs equipped with the proposed RULAs, the mean per-user achievable SE increases with κ, except for the setups with Q=8. This evinces that the optimal rotation of the RULAs reduces the correlation between the channel vectors, consequently enhancing the performance obtained with ZF when the LoS component becomes strong. Note that, when κ≤-10 dB, i.e., when the channel tends to be pure NLoS, the optimal rotation of the RULAs brings no performance improvements, as expected.
Nevertheless, when κ grows large, the performance gains obtained with the RULAs become very significant for any values of Q and l. Moreover, when the channel tends to be pure NLoS, increasing Q enhances the mean per-user achievable SE for any value of l. Thus, in rich scattering environments, the most distributed D-MIMO setup with Q=8 is always the best choice, and the setup with Q=4 follows closely behind. In contrast, when κ grows large and the LoS component becomes strong, there exists an optimal number of APs (and of antenna elements per AP) that maximizes the mean per-user achievable SE. In the case of APs with static ULAs, the best performance is achieved by the setup with Q=2 in the case of l=50 m, and by the setup with Q=4 for the cases of l=100 m and l=200 m. When we adopt APs equipped with RULAs, the single AP setup achieves the best performance for the case of l=50 m. When l=100 m, the setups with Q=[1,2,4] achieve very similar performance. Finally, when l=200 m, the best performance is achieved when Q=4. The numerical results in Fig. <ref> show that, in scenarios where the LoS component is strong, there is a sweet spot between the beamforming gains obtained with APs equipped with multiple antennas and the macro-diversity gains obtained with the spatial distribution of APs. In the case of APs equipped with static ULAs, the intermediate deployments with Q=[2,4] achieve this sweet spot. Conversely, when the proposed RULAs are adopted, utilizing a single AP allows us to achieve satisfactory performance, while at the same time avoiding the higher deployment and maintenance costs of D-MIMO networks. In Fig. <ref>, we show the mean per-user achievable SE versus the variance of the positioning error σ_e^2, for different values of Q and l, and considering κ=10 dB. Interestingly, we observe that the performance gains owing to the optimal rotations of the RULAs are still very significant even when the accuracy of the position estimates is poor. Nevertheless, the SE is almost constant for σ_e^2≤-10 dB, that is, improving the accuracy of the position estimates does not yield noticeable performance improvements. § CONCLUSIONS In this work, we studied the performance of D-MIMO networks for indoor scenarios. Aiming to enhance the quality of the wireless links, we proposed the deployment of APs equipped with RULAs. The CPU jointly computes the optimal rotation of the RULAs as a function of the estimated positions of the active MTDs. The optimization is done using a location-based beamforming approach and PSO. We considered a spatially correlated Rician fading model and evaluated the setups' performance for different Rician factor values. We took into account the impact of imperfect positioning information and also of imperfect CSI. Given the total number of antenna elements, we evaluated the trade-off between the number of APs and antennas per AP. The numerical results show that the optimal rotation of the RULAs can bring substantial performance gains in terms of mean per-user achievable SE, and the gains grow with the Rician factor. We also observed that the optimal rotation of the RULAs can bring performance gains even when the positioning accuracy is poor or moderate. We also concluded that, given the total number of antenna elements, there is a sweet spot between the number of APs and the number of antenna elements per AP, corresponding to a trade-off between beamforming and macro-diversity gains. The best performance is achieved by adopting a few APs equipped with multiple antennas.
This approach also has the advantage of reducing the deployment and maintenance costs of the system and its computational complexity. § PARAMETERS TO ENSURE FAR-FIELD PROPAGATION The Fraunhofer distance determines the threshold between near-field and far-field propagation and is given by <cit.> d_F=2D^2/λ, where D is the largest dimension of the antenna array, λ=c/f_c is the wavelength, c is the speed of light and f_c is the carrier frequency. In the case of an RULA with half-wavelength spaced antenna elements, we have D_RULA=(S-1)λ/2, where S is the number of antenna elements of the RULA. Considering f_c=3.5 GHz, the height difference between the APs and the MTDs must be at least 10 m to ensure far-field propagation conditions for all the devices. Thus, in this paper we set max{S}=M=16 and h_AP=12 m.
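As a quick sanity check of the far-field condition above, the short Python snippet below evaluates the Fraunhofer distance for a half-wavelength-spaced ULA; the function name and the hard-coded speed of light are for illustration only.

```python
C = 299_792_458.0  # speed of light in m/s

def fraunhofer_distance(num_elements, carrier_freq_hz):
    """Far-field threshold d_F = 2 D^2 / lambda for a half-wavelength-spaced ULA."""
    lam = C / carrier_freq_hz                # wavelength in meters
    D = (num_elements - 1) * lam / 2.0       # largest dimension of the array
    return 2.0 * D ** 2 / lam

d_F = fraunhofer_distance(num_elements=16, carrier_freq_hz=3.5e9)
print(f"d_F = {d_F:.2f} m")  # ~9.6 m, consistent with the >= 10 m height difference above
```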
http://arxiv.org/abs/2406.17752v1
20240625173724
Connectivity and Community Structure of Online and Register-based Social Networks
[ "Márton Menyhért", "Eszter Bokányi", "Rense Corten", "Eelke M. Heemskerk", "Yuliia Kazmina", "Frank W. Takes" ]
physics.soc-ph
[ "physics.soc-ph" ]
Connectivity and Community Structure of Online and Register-based Social Networks June 25, 2024 =============================================================================================================== § ABSTRACT The dominance of online social media data as a source of population-scale social network studies has recently been challenged by networks constructed from government-curated register data. In this paper, we investigate how the two compare, focusing on aggregations of the Dutch online social network (OSN) Hyves and a register-based social network (RSN) of the Netherlands. First and foremost, we find that the connectivity of the two population-scale networks is strikingly similar, especially between closeby municipalities, with more long-distance ties captured by the OSN. This result holds when correcting for population density and geographical distance, notwithstanding that these two patterns appear to be the main drivers of connectivity. Second, we show that the community structure of neither network follows strict administrative geographical delineations (e.g., provinces). Instead, communities appear either to center around large metropolitan areas or, outside of the country's most urbanized area, to be composed of large blocks of interdependent municipalities. Interestingly, beyond population and distance-related patterns, communities also highlight the persistence of deeply rooted historical and sociocultural communities based on religion. The results of this study suggest that both online social networks and register-based social networks are valuable resources for insights into the social network structure of an entire population. § INTRODUCTION Until recently, population-scale social network analysis was typically done using data from online social networks (OSN) or mobile phone communication records. These digital data sources offer researchers unprecedented access to large amounts of information on human interactions and behavior kumar2006structure, mislove2007measurement,blondel2015survey,eagle2009inferring. Several works have attempted to understand the structural properties of global OSNs such as Facebook or Twitter myers2014information, ugander_anatomy_2011, as well as those of more localized ones, e.g., Hyves in the Netherlands corten_composition_2012, or iWiW in Hungary lengyel2015geographies. To understand how these social networks model the complex intricacies of the underlying societies, topological properties have been linked to various socio-economic outcomes. For example, economic connectedness of geographic areas is associated with upward social mobility chetty_social_2022, the abundance and diversity of connections is linked to economic prosperity eagle2010network, and inequality is reflected in more fragmented or closed network structures toth_inequality_2021, kovacs_income-related_2023. Recently, administrative government-curated records have become a novel population-scale resource for register-based social networks (RSN). The use of administrative records is not entirely new; for example, employment has been widely used to analyze labor market outcomes lyttelton_organizationally_2022,lyttelton_dual_2023,toth2022technology. Uniquely, Statistics Netherlands recently combined multiple registers of people's connections of family, school, work, household, and next-door neighbors into a unique population-scale social network with multiple edge types van_der_laan_person_2022_correct.
Such a register-based social network (RSN) models, typically for a well-delineated population, the social opportunity structure of people, and how this, for example, varies by age and different socio-economic variables such as income or education bokanyi_anatomy_2023. The structure of these networks has been shown to offer new insights into persistent social issues such as segregation kazmina2024socioeconomic or the intergroup connectivity of migrants and natives soler2024contacts. Neither data sources (OSN nor RSN) have originally been designed for research; as such, they both present different opportunities and challenges. OSNs offer large-scale, automated data collection, often but certainly not always combined with self-reported data on people's demographic characteristics. However, the sample of both the nodes and the edges might not be representative. For one, this is because it is often unclear what exact social connections the edges represent. It can be difficult to differentiate bots from human agents, to find multiple profiles belonging to the same person, or to identify inactive or spurious connections lazer_meaningful_2021,corten_composition_2012. Because RSNs aggregate data from government-curated registers van_der_laan_producing_2017_correct, they offer legally defined high-quality data on nodes and edges. On the other hand, these edges only describe a so-called social opportunity structure bokanyi_anatomy_2023, kazmina2024socioeconomic, and we have no information on whether people actively use the connections. So far, research cross-matching and consistently comparing OSN and RSN data is nonexistent. The ties in RSN data represent legally defined relationships - people are connected through formal ties of kinship, work and school affiliations, and their registered address. Social tie formation for informal connections in OSNs is usually explained based on concepts such as homophily mcpherson2001birds of, e.g., demographics or beliefs, triadic closure asikainen2020cumulative, or geographic distance between people lambiotte2008geographical,liben-nowell2005geographic. Each of these is expected to influence the probability that a tie exists between two people. <cit.> shows that a large share of informal ties come from current or former formal ties of people such as work, school, or family connections. The comparison of the two types of networks (OSN and RSN) in this paper thus attempts to advance the reconciliation of these two seemingly different social tie definitions. In this work, we want to understand what influence the choice of population-scale data source has on network analysis research results, particularly on connectivity and community structure. To ensure the privacy of individuals de_jong_effect_2024, we aggregate our datasets at the municipality level, and then ask a number of important questions related to how the data sources compare. Is the number of connections between municipalities similar between the two networks? What are the types of connections (e.g. family, work, or school) from RSNs that are best represented in OSNs? How do population size and geographical distance, i.e., factors known to affect social network connections liben-nowell2005geographic, lambiotte2008geographical, krings2009urban, xu2022distance, impact each of the two networks? Are meso-scale network structures, such as communities fortunato2010community, expert_uncovering_2011, fortunato2016community, persistent across different data sources? 
And are these communities indicative of predefined administrative delineations, or do they reveal other patterns of group connectivity in the population-scale social network? In this paper, we address the above questions using the unique combination of the Dutch online social network (OSN) Hyves of 6.2M nodes and 320M edges, and a register-based multilayer social network (RSN) of the entire population of the Netherlands of 16.6M nodes and 570M edges. We find that the number of connections between municipalities is strikingly similar in the two networks. Each type of connection, modeled as layers of the RSN (family, work, and school), uniquely contributes to this similarity. Comparing connectivity after removing the effects of population density and distance dependence reveals that while the local network structure is comparable, the OSN captures a larger number of distant connections. The community structure of both networks does not follow strict administrative geographical delineations but instead reveals deeply rooted sociocultural effects in both networks. In general, the findings presented in this paper show that both online social networks and register networks are useful for modeling the social network structure of a population. This finding is important because across different countries, more and more population-scale register-based social networks are expected to be available for research in the future magnani2022generation,savcisens2024using. § RESULTS In this section, we first give a brief overview of the population-scale OSN and RSN datasets used. Then we provide two sets of empirical results. The first pertains connectivity, and compares the number of edges between municipalities in both networks, investigating how the different types of edges, that is, layers in the RSN, compare to the OSN. The second set of experiments dives into the community structure, focusing on a comparison with administrative borders with the aim of understanding sociocultural aspects of the connectivity patterns. In both sets of experiments, we consider normalization of the connection strength by two common aspects known to influence connectivity: population size and geographical distance. §.§ Online and register-based population-scale social network datasets We use a unique combination of two population-scale networks, each from 2010: the formerly highly popular Dutch online social network Hyves corten_composition_2012, and a register-based social network constructed by Statistics Netherlands van_der_laan_person_2022_correct.The Hyves dataset is an anonymized version of the service provider used in the study of <cit.>. The register-based social network was accessed and analyzed using the Remote Access environment of Statistics Netherlands. Hyves includes 6.2M users and more than 320M edges, and was the most popular online social networking site before the advent of Facebook. The register-based social network (RSN) contains all 16.6M registered residents of The Netherlands as of 1 January 2010. The roughly 570M edges between the nodes include formal ties of current family, school, and work relationships sourced from administrative databases. These different types of edges constitute the layers of the RSN. The works of <cit.> and <cit.> contain more details on the topological properties of both networks. Direct person-level matching of the two networks is legally and technically impossible due to privacy and record linkage limitations. 
Thus, we aggregated the two networks into the 431 municipalities of the Netherlands in the year 2010. Weighted edges between the municipality pairs denote the number of ties between people in the given network. Figure <ref>A and <ref>B depict the 500 edges with the largest weight in both networks. At the endpoints of these strong edges, there are major cities with large populations, but some of them are in rural areas. For more details on the two networks and the construction of the municipality-level aggregation, see the methods-and-data section. §.§ Connectivity in the OSN and the RSN In the following set of analyses, we seek to understand the structure of the OSN and the RSN in terms of connectivity. We do so for the weighted networks themselves, as well as for versions of the network normalized by population density and geographical distance (see methods-and-data for details). For each of the three variants of the network, we then look at the similarity of different individual layers of the RSN and the OSN. First, we compare the edge weight of the municipality pairs in the OSN and in the RSN in Figure <ref>C. The Pearson correlation of the logarithm of the weights is 0.939. This suggests a high micro-level similarity between the two networks. However, this similarity could be driven by factors that are known to influence the number of connections, such as population size and geographical distance between municipalities. In the following, we break down how similar individual RSN layers (e.g., family, school, or work) are to the OSN, how normalizing by population and distance influences the similarity. Figure  <ref>A shows the similarity between the OSN and the RSN for all possible connection types of the RSN (Combined), and the family, school, and work edge types. The correlations are calculated for the three different edge weighting strategies: the Count capturing the number of connections between two municipalities, the population-normalized SCI, and the distance-normalized DSCI. The first correlation of the top row shows the Pearson correlation of 0.939 from Figure <ref>C. If we restrict the RSN to a single layer type, we get lower correlation values: 0.924 for the family, 0.860 for the school, and 0.874 for the work layer. Thus, combining all available RSN edge types (layers) gives the highest similarity to the OSN. Population size is an important driving factor for connectivity, as larger municipalities naturally have more connections. The Social Connectedness Index (SCI) metric of <cit.> aims to correct for this dependence by normalizing the edge weights by the population size of the edge endpoints. Focusing on the results for SCI in Figure <ref>A, we observe slightly lower but still high similarities compared to the plain edge weights: 0.899 for the combined layers, 0.854 for the family, 0.741 for the school, and 0.849 for the work layers. This suggests that the edge weights of the two networks are similar beyond population distribution patterns. Again, combining layers gives the highest similarity. These correlations might still be partially driven by distance dependence, namely that closeby places are more likely to be connected. This relationship is often formalized as a gravity law lambiotte2008geographical, in which the connection probability between areas has a power-law dependence on the Euclidean distance with a negative exponent. Looking at Figure <ref>B, we can observe this power law in the OSN and in the RSN with tail exponents -0.77 and -1.11, respectively. 
The OSN has a less negative (smaller-magnitude) power-law exponent, which indicates more large-distance connections that are not as sensitive to distance as in the RSN. It should be noted that the overall higher probability of the OSN edges at all distances can be attributed to normalizing the probabilities by user counts instead of population size. To account for the distance dependence of the connection probability, we propose DSCI: a distance-aware SCI metric that, beyond population, is also normalized by the expected SCI for a certain distance. Figure <ref>A shows that DSCI correlations between the OSN and RSN drop significantly compared to the plain SCI measure: the similarity of the OSN and the combined layers of the RSN is 0.649, the family layer is 0.621, the school layer is 0.300 and the work layer is 0.487. This reflects the influence of factors in social tie formation that go beyond population size or spatial distance. Because the OSN has more large-distance connections, we investigate how the correlation changes if we restrict the calculations to edges of different distances in Figure <ref>C. We find that indeed the correlation between DSCI edge weights decreases as the distance increases. §.§ Community structure In this section, we explore the community structure of the OSN and the RSN. We particularly investigate how closely these align between the two networks and with existing administrative boundaries, and what other connectivity patterns can be revealed by the obtained community structure. We use the well-known Louvain modularity optimization algorithm blondel_fast_2008 combined with consensus clustering kwak_consistent_2011,mandaglio_consensus_2018 to minimize the effects of randomness inherently present in such algorithms. For further explanation, see the methods-and-data section. The pairwise similarities of the communities found are calculated using the adjusted mutual information (AMI) metric. An AMI value of 1 means identical partitions and 0 means that the partitions are only as similar as expected due to random chance. In the triangles of Figure <ref>A-C, we can see the AMI similarity scores comparing partitionings of the OSN and the RSN. Figure <ref>A is based on the raw edge weights, Figure <ref>B uses the SCI, and Figure <ref>C the DSCI metric. The similarity between the communities of the two networks is high for the counts and the SCI metric: 0.887 and 0.761, respectively. The country maps of Figures <ref>A and <ref>B displaying the community detection results show spatially contiguous community structures. This highlights a localized geographical preference in group formation. If we normalize edge weights for distance as well, i.e., look at DSCI in Figure <ref>C, we get a lower similarity, 0.449, between the two networks. The visual representation of the results on the map suggests that the communities are no longer spatially contiguous. Despite the slightly lower similarity values, we find one specific community in both networks that spans from the southwest to the northeast. This roughly follows the area of the so-called Bible Belt of the Netherlands, a set of regions that form a distinct sociocultural unit with high shares of religious adherence and conservative voters exalto2019strong, rellstab2023gender. The bottom two sides of the triangles in the leftmost column of Figure <ref>A-C indicate similarity of our resulting partitions to pre-established administrative boundaries, in our case, the subdivision of the Netherlands into 12 provinces.
We highlight the borders of these provinces in Figure <ref>A-C. The first two edge weighting strategies (Count and SCI) produce partitions that are relatively similar to the province borders, with AMI scores for both networks of 0.728, 0.726, 0.715, and 0.610. We can see that some clusters largely follow province borders, with only small deviations, whereas other clusters span multiple provinces. However, using DSCI, we can observe a notable difference from the province borders. This further supports the assumption that DSCI weighting can capture important socio-economic similarities beyond population and proximity biases. § DISCUSSION It is well-known in the social network analysis literature that each source of social network data comes with its particular biases, as well as data completeness and data quality issues lazer_meaningful_2021. We have analyzed the similarities between two Dutch population-scale networks from different sources: Hyves, an online social network (OSN), and a register-based social network of Statistics Netherlands (RSN), each aggregated to the municipality level. On the level of municipality pairs, we found that the two networks are very similar in terms of connectivity. Although there is a relatively high correspondence in the edge weights even when only using a single layer from the RSN, such as family, work, or school, the RSN is most similar to the OSN when including all available edge types, thus when combining multiple contexts of life in the construction of the register-based network. This highlights that behind the 'friend' edges of online social networks, there may be multiple different mechanisms at play when users establish connections. It also suggests that the more relationships a register-based network can incorporate, the better it reflects the social opportunity structures of people, and the better we can generalize results obtained from the RSN to other datasets. This result is also in line with the literature that suggests that a large share of informal relationships is based on various forms of current or former family, school, and work connections of people vaneijk2010unequal. Thus, even though a register-based network does not capture informal connections by definition, a superposition of various formal connections performs well when modeling an aggregated social network of a whole country. The similarity in edge weights can be driven by the fact that the same main factors of edge formation drive the connectivity in both networks. Therefore, we applied two normalization approaches: we calculated the SCI, a population-normalized version of the edge weights, and the DSCI, the deviation from the expected SCI at a given geographical distance. We find that the Pearson correlation of the edge weights decreases if we apply the normalization, but it remains at a relatively high value of 0.6 even for DSCI. Thus, connectivity reflects patterns of tie formation even after accounting for the population density and distance effects. The remaining similarity most likely captures socio-cultural homophily and geographical or economic constraints, the footprints of which are present in the micro-level structure of both types of networks. The fact that the correlation of DSCI decreases with increasing distance highlights that local structures are strikingly similar in OSNs and RSNs. However, long-distance connections are both more numerous and more random in online social networks.
It is important to consider these findings when interpreting the results from spatial networks on a large geographical scale. The reported findings on the community structure suggest that the structure of the two networks is also similar at the meso-scale. We find the least correspondence between the OSN and RSN communities when using normalization for both distance and population. However, in this latter case, a remarkable community, which is not geographically contiguous, appears in both the OSN and RSN. It includes most of the so-called Dutch Bible Belt, which is a set of regions that form a distinct sociocultural unit with high shares of religious adherence and conservative voters exalto2019strong, rellstab2023gender. Thus, distance- and population-aware communities uncover socioculturally similar regions in both networks in a similar way to <cit.>. If we compare communities with the administrative boundaries of Dutch provinces, we find that distance-unaware communities somewhat correspond to administrative boundaries. However, there are notable differences, such as the northern part of the province Flevoland attaching to Overijssel. This might reflect that economic and infrastructural constraints matter more than administrative division in this case, since the northern parts of Flevoland are infrastructurally well-connected to the neighboring province. When using DSCI, there is very little agreement between the network communities and the province boundaries. Hence, socioeconomic policy making on certain topics such as labor markets, infrastructure investments, or formal care systems might be better based on community clusters than on provincial boundaries. The differences in the micro- and meso-scale structures of the OSN and RSN highlighted throughout this paper might originate from the different underlying link generation strategies in the two networks. In the RSN, family links represent persistent ties, but work and school relationships only reflect the situation of the current year. OSNs better capture the fact that some relationships are always retained from former schools or workplaces, even if sometimes in a different context such as a close friendship. Aggregating RSN links over time and comparing them with the OSN structure could provide further insight into this matter. On the other hand, OSN links might reflect connections of very different strengths, ranging from close family to distant past acquaintances. This can partly explain the differences in the structure of large-distance connections between the RSN and the OSN. RSNs miss out on important informal relationships such as church or leisure groups. It is important to note that the person-level degree distributions of the two networks are different. In the RSN, most people have a typical number of connections bokanyi_anatomy_2023. In Hyves, there are many low-degree nodes and fewer high-degree nodes corten_composition_2012. Interestingly, we observe the structural similarity of the two municipality-level networks despite these differences.
We showed that the two networks are strikingly similar when comparing their connectivity; that combining all available RSN layers (family, school, work) results in the highest similarity; and that similarity remains relatively high even after accounting for population and spatial distance patterns, especially for local edges. By analyzing communities of the two networks with different edge weighting strategies, we showed that the networks have similar community structures using all three edge weighting strategies; and that detected communities do not closely follow pre-established administrative borders, especially when accounting for population and distance patterns. However, the latter method uncovers a socioculturally tightly knit community that corresponds to the Dutch Bible Belt. In summary, we expect researchers to draw similar conclusions based on register-based social networks and online social networks, especially for short-distance connections. Both data sources are useful for modeling the social network structure of a whole population, and the more edge types a register-based network contains, the better the comparability. § DATA AND METHODS In this section, we first present Hyves and the register-based social network of Statistics Netherlands, the OSN and RSN datasets, and their aggregation into Dutch municipalities, followed by a detailed description of the RSN layers. Then, we introduce our notation and describe the methods for normalizing edge weights. Lastly, we outline the process of identifying communities. §.§ Social network datasets OSN. The Hyves online social network was an online social media platform in the Netherlands corten_composition_2012 before the advent of Facebook. The dataset represents the late 2009 state of the network which during its peak period contained 10M people, covering up to 60% of the population of the Netherlands. The network represents supposed friendship connections between its registered users. There are 6.2M users with a self-reported place of residence at a municipality-level resolution, with 320M edges between them. We excluded users flagged as celebrities from our analysis. The self-reported municipality names in the data were provided by users and therefore were prone to different errors. <cit.> cleaned and aggregated place names at the municipality level even if users gave different administrative units, such as neighborhoods, as their place of residence. The municipalities were matched to the official list of Statistics Netherlands as of 2009. We use this cleaned and matched municipality dataset, and refer the reader to <cit.> for more details on data processing. RSN. The register-based social network (RSN) is compiled from official records of and by Statistics Netherlands (CBS) van_der_laan_producing_2017_correct,van_der_laan_person_2022_correct. In this network, the nodes are all 16.6M residents of the Netherlands in 2010. The almost 800M edges are organized in several layers representing various contexts of life comprising current family, school, work, neighbor and household relations. Each person's place of residence (municipality) is known. Only family, school and work connections are meaningful when considering inter-municipality connections, thus only these 570M edges are retained when aggregating connections at the municipality level. Family Family connections are derived from official parent-child and partner relations. The partner relations are derived from marriage registers, tax declarations, and household registers. 
From the above two source datasets, other family ties such as grandparents, grandchildren, siblings, aunts, uncles, cousins, nieces, and nephews are inferred. Step and in-law relationships are also included. School School connections are aggregated from various official educational agencies containing five levels of education: elementary, secondary, secondary special, vocational, and higher. People have a school tie if they go to the same school, year, location, and type of education. University and other higher education students are further distinguished by study programmes. Work Work connections contain links between people working for the same employer, defined as the employer providing their major source of income. If a company has fewer than 100 employees, all of them are connected to each other. Otherwise, a person is connected only to the 100 co-workers closest to their residence. We introduce the intuition and notion of the aggregated network of municipalities created from the person network. As direct person matching between the two networks is infeasible, we aggregate the networks such that nodes are the 431 municipalities of the Netherlands in 2010, and weighted edges between municipality pairs count the number of ties between people in the municipalities that the link connects. There were a few municipality merges in the Netherlands from 2009 to 2010 that we applied to the network dataset as well. We obtain this aggregated network for the OSN and every relevant layer in the RSN (family, school, and work), as well as for a combination of these layers. In the combined layer, an edge exists between two people when an edge exists in at least one of the three layers. If multiple edges run between people in the base layers, we count it as a single edge in the aggregated layer. §.§ Preliminaries We introduce the notion of the aggregated multilayer graph. The notation is based on bokanyi_anatomy_2023. We represent a person-level network as G_p=(V_p, E_p, L), where V_p is the set of nodes representing people. In our case, the residents of the Netherlands in 2010 consisted of |V_p|=n_p=16.6M people. The set of undirected edges running between these nodes can be described as E_p ⊆{ ({u,v},ℓ) : u,v∈ V_p, u ≠ v, ℓ∈ L }, such that L is the set of possible layers. In this setting, we can represent the network G_p using a person-level binary adjacency tensor (A_p)_u,v,ℓ. An entry a_u,v,ℓ of this tensor is 1 if and only if an edge runs between persons u,v ∈ V_p in layer ℓ∈ L, and 0 otherwise. We define G=(V, E, L) as a multilayer graph. In this case, V is the set of municipalities, n=|V|= 431 is the number of nodes. The set of undirected edges is E ⊆{ ({u,v},ℓ, w) : u,v∈ V, u ≠ v, w ∈ℝ , ℓ∈ L }, such that L is the set of possible layers and w is the strength of the connection between the two municipalities, for which we propose three different weighting schemes in the Weighting schemes section below. We can represent the edges E using an adjacency tensor A_u,v,ℓ that counts half-edges that run between u,v ∈ V in the layer ℓ∈ L. We can relate the two representations as follows. We can represent the place of residence using a binary affiliation matrix B of shape n_p × n. An entry in this matrix is 1 if u∈ V_p is affiliated to v ∈ V. With the help of this representation, we can calculate A_u,v,ℓ = B^T (A_p)_u,v,ℓ B. Here, (.)^T represents matrix transposition. Person-level edges can also originate and end in the same municipality. This is represented by weighted self-edges in the aggregated graph.
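To illustrate the aggregation just described, the following Python sketch builds the municipality-level count matrix A_u,v,ℓ = B^T (A_p)_u,v,ℓ B for a single toy layer; the sizes, edges and residence assignments are invented for the example, and the diagonal (self-edges) is dropped as in our analysis.

```python
import numpy as np

# Toy person-level layer: n_p people, n municipalities (illustrative sizes only).
n_p, n = 6, 3
A_p = np.zeros((n_p, n_p), dtype=int)
for u, v in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (0, 5)]:
    A_p[u, v] = A_p[v, u] = 1                    # undirected person-level edges

municipality_of = np.array([0, 0, 1, 1, 2, 2])   # place of residence of each person
B = np.zeros((n_p, n), dtype=int)                # binary affiliation matrix
B[np.arange(n_p), municipality_of] = 1

A = B.T @ A_p @ B                                # municipality-level half-edge counts
np.fill_diagonal(A, 0)                           # drop self-edges before the analysis
```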
The above equation would count person-level edges twice within the same municipality. However, we dropped all of the self-edges when running our experiments. §.§ Weighting schemes We compare the strengths of the connections between municipality pairs in the RSN and the OSN by comparing edge weights corresponding to the same municipality pair. We incorporate population and distance into the weighting scheme as follows. Population corrected weighting. We use a metric that not only counts connections between areas but also takes into account that larger population areas typically have more connections inspired by bailey_social_2018. Within a layer, the metric between i,j ∈ V is formulated as SCI_ij = Connections_ij/Possible connections_ij. If i ≠ j, then the number of possible connections is Population_i × Population_j. Otherwise, it is equal to Population_i × (Population_j -1). In the case of the RSN, the population is the number of inhabitants. In the case of the OSN, population is the number of users that have self-reported the municipality as location in their profiles. Population and distance corrected weighting. It is widely known that distance is an important factor when forming connections lambiotte2008geographical. A power-law distribution often models this dependency which is often called the gravity law. In our context, we use a model-free metric inspired by expert_uncovering_2011. To measure the distances of municipalities, we calculate the distances of the centroids of the municipalities using the Euclidean distance metric. The proposed distance-aware social connectivity index (DSCI) is given by DSCI_ij,D = SCI_ij/𝔼[ SCI | D ] , where D denotes a certain spatial distance and 𝔼[ · ] denotes the expected value of a variable. We approximate this value by creating 200 bins that contain an equal number (464) of municipality pairs between 0 and 360 km. §.§ Community detection Community detection newman_modularity_2006,girvan_community_2002 is a way to identify groups of nodes in a network that form tightly knit subunits that are more loosely connected with other subunits. We use this to investigate the meso-level structure in our networks. We perform community detection based on the Louvain method, which accounts for edge weights. This allows us to investigate the three community structures resulting from our three edge weighting strategies: the number of connections between municipalities, the SCI weights, and the DSCI weights. We set the resolution parameter to 1. We use the Python package networkx hagberg_exploring_2008. It is well-known that community detection algorithms involve a degree of randomization. This can be accounted for by using consensus clustering lancichinetti2012consensus. In our experiments, we use 1) 1000 iterations of the Louvain algorithm to 2) create a new network based on the number of times the nodes belonged to the same community. Then, we go to step 1) and repeat until convergence. In our experiments, it took 3 iterations until all node pairs distinctively belonged to the same community. The results can also be regarded as a partitioning of the node set V into R non-empty partitions U = {U_1, U_2, …, U_R}, where U_i∩ U_j = {} for any i≠ j, i,j∈{1,2,…,R}, so the partitions are pairwise disjoint, and ∪_i=1^R U_i = V. 
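For concreteness, the sketch below implements the consensus procedure described above with the Louvain routine of networkx, assuming a weighted municipality graph G whose edge weights are the counts, SCI or DSCI values; the number of runs per iteration is illustrative (the analysis in this paper uses 1000 runs).

```python
import numpy as np
import networkx as nx

def consensus_louvain(G, n_runs=100, resolution=1.0, seed=0):
    """Iterate Louvain on the co-assignment network until every pair of nodes
    is either always or never placed in the same community."""
    nodes = list(G.nodes())
    idx = {v: i for i, v in enumerate(nodes)}
    rng = np.random.default_rng(seed)
    work = G
    while True:
        co = np.zeros((len(nodes), len(nodes)))
        for _ in range(n_runs):
            parts = nx.community.louvain_communities(
                work, weight="weight", resolution=resolution,
                seed=int(rng.integers(1 << 31)))
            for community in parts:
                members = [idx[v] for v in community]
                for a in members:
                    for b in members:
                        co[a, b] += 1
        co /= n_runs
        if np.all((co == 0.0) | (co == 1.0)):    # converged: unambiguous consensus
            break
        work = nx.Graph()                        # co-assignment network for next round
        work.add_nodes_from(nodes)
        for a in range(len(nodes)):
            for b in range(a + 1, len(nodes)):
                if co[a, b] > 0:
                    work.add_edge(nodes[a], nodes[b], weight=co[a, b])
    labels, label = {}, 0                        # read the partition off the 0/1 matrix
    for a, v in enumerate(nodes):
        if v not in labels:
            for b, u in enumerate(nodes):
                if co[a, b] == 1.0:
                    labels[u] = label
            label += 1
    return labels
```

The resulting label assignment can then be compared across networks, weighting strategies and the province partitioning using the AMI score discussed next.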
We use the Adjusted Mutual Information (AMI) metric of <cit.> to compare the partitions we get from the consensus clustering on the different networks and edge weighting strategies, and also to compare the network partitions to the province borders of the Netherlands, which are in essence an administrative partitioning. If we have two different partitionings, U={U_1, U_2, …, U_R} of R partitions, and T = {T_1, T_2, …, T_C} of C partitions, then the Adjusted Mutual Information is: AMI(U,T) = (MI(U,T) - 𝔼{MI(U,T)}) / (avg{H(U),H(T)} - 𝔼{MI(U,T)}). In the above equation, MI stands for Mutual Information, which is calculated as MI(U,T) = ∑_i=1^R∑_j=1^C P_UT(i,j) log[P_UT(i,j)/(P_U(i)P_T(j))], where P_UT(i,j) = |U_i∩ T_j|/|V| is the probability that a node belongs to partition i in U and partition j in T, P_U(i) = |U_i|/|V| is the probability that a node belongs to partition i in U, and P_T(j) = |T_j|/|V| is the probability that a node belongs to partition j in T. 𝔼 denotes the expected value of the Mutual Information; for details on its calculation, we refer the reader to <cit.>. The expected MI terms normalize this score to reflect that two random partitionings can also have similarity by chance. H stands for the entropy associated with a partitioning U: H(U) = -∑_i=1^R P_U(i)log P_U(i). We use the implementation of the pedregosa2011scikitlearn Python package for the calculations. §.§ Acknowledgements We would like to thank the POPNET team (<www.popnet.io>) for helpful suggestions and discussions. The POPNET project has been funded by Platform Digitale Infrastructuur Social Sciences and Humanities (<www.pdi-ssh.nl>). §.§ Data availability Results are based on calculations by Márton Menyhért (Leiden University) and Eszter Bokányi (University of Amsterdam) as part of the POPNET project (<www.popnet.io>) using non-public microdata from Statistics Netherlands. Under certain conditions, these microdata are accessible for statistical and scientific research. For further information, contact the corresponding author and microdata@cbs.nl.
http://arxiv.org/abs/2406.19024v1
20240627092012
Switching current distributions in superconducting nanostrips
[ "Robert Vedin", "Jack Lidmar" ]
cond-mat.supr-con
[ "cond-mat.supr-con" ]
rvedin@kth.se jlidmar@kth.se Department of Physics, KTH Royal Institute of Technology, SE-106 91 Stockholm, Sweden § ABSTRACT We study switching current distributions in superconducting nanostrips using theoretical models and numerical simulations. Switching current distributions are commonly measured in experiments and may provide a window into the microscopic switching mechanisms. As the current through a superconducting strip is increased from zero it will at some point switch to the normal dissipative state. Due to thermal and/or quantum fluctuations the switching current will be random and follow a certain distribution depending on sweep rate, temperature, material properties and geometry. By analyzing the resulting distribution it is possible to infer the transition rate for a switch, which can be related to the free energy barrier separating the metastable superconducting state and the normal one. We study different switching scenarios and show using simulations how data taken for different sweep rates can be combined to obtain the switching rate over a wider interval of currents. Switching current distributions in superconducting nanostrips Jack Lidmar July 1, 2024 ============================================================= § INTRODUCTION Superconducting nanowires have emerged as an important component in applications such as the superconducting nanowire single photon detector (SNSPD) <cit.>. These devices rely on a bias current close to the critical I_c creating a fragile metastable superconducting state, such that the perturbation of a single photon is sufficient to trigger a switch to the normal state. In this bias regime the detectors also become sensitive to random fluctuations that can cause breakdown of the superconductivity in the form of dark counts. Dark counts may be triggered by thermally activated phase slips in thin superconducting wires <cit.> or quantum phase slips if the temperature is low enough <cit.>. In superconducting nanostrips a phase slip involves the entry, or unbinding, of vortices and anti-vortices <cit.>. The rate of these thermally activated events is often described by an Arrhenius type law governed by a free energy barrier. The problem of calculating this energy barrier has previously been approached using different methods <cit.>. Analytical estimates have been found by considering the interaction energy of mirror vortices in strips <cit.>. Numerical works based on the string method have also been demonstrated <cit.>, however the application of this method has been limited to consider only cases with no bias current. Similar mechanisms that are responsible for the dark counts also give rise to random variations of the switching current. The switching current statistics could therefore be used as an additional method of extracting information about the vortex energy barrier in the current biased regime. In this work the switching current distribution due to thermal activation is investigated theoretically and numerically through stochastic time dynamics for two different models of a superconducting wire, a one-dimensional (1D) Josephson junction chain and a two-dimensional (2D) time-dependent Ginzburg-Landau model. The switching current distribution in homogeneous wires is here shown to depend on the sweep rate of the bias current in a way that is analogous to the wire length. This behaviour can be exploited in order to obtain the switching current statistics, as well as the switching rate, on a larger interval of currents. 
In particular, this can be relevant for experiments, where changing the length of a wire typically involves fabrication of a completely new device, which can introduce additional variation of the I_sw due to inhomogeneities arising from the fabrication process. The length dependence of such sample-to-sample variations due to material disorder and inhomogeneities has previously been studied experimentally <cit.> and theoretically <cit.> in the absence of thermal fluctuations. In Sec. <ref> we discuss theoretical approaches to compute the switching rate and arguments connecting it to the distribution of switching currents. Section <ref> illustrates the approach using several relevant examples from different switching scenarios. Section <ref> describes a larger scale numerical simulation based on time-dependent Ginzburg-Landau theory. § THEORY A current-carrying superconducting wire will be in a metastable state where typically a large free energy barrier protects the current from decaying. As the applied current is increased the barrier will become lower and eventually vanish at some critical current I_c. In thin wires, thermal (or quantum) fluctuations can, however, cause phase slips and a decay of the current even below the critical current. This in principle results in a finite resistance R ∼ e^-Δ U/k_B T, at any finite temperature, although it will be exponentially suppressed when the barrier is large compared to temperature. We will focus on the situation where thermal fluctuations dominate over quantum ones, and where the applied current is relatively large but below I_c. This means that once a fluctuation is large enough to initiate a phase slip at some location in the wire, enough energy is released to locally cause a transition to the normal state, which may be detected as a voltage along the wire. The rate Γ of such switching events is mostly controlled by an energy barrier Δ U(I), which for a given geometry depends on the applied current I. §.§ Switching current distributions Consider an applied current I(t) gradually increasing from 0 at t = 0 to some value above the nominal critical current I_c. We will initially assume that the switching rate Γ_L(t) ≡ L Γ(t) of a homogeneous wire of length L depends on the applied current, but not on the sweep rate İ≡ dI/dt, i.e., Γ_L = L Γ(I(t)), which may be expected to hold for small enough İ. Under these circumstances it is possible to relate the switching rate to the probability distribution of the switching current, following arguments pioneered by Kurkijärvi and Fulton-Dunkleberger <cit.>. Assuming that switching may occur independently for each infinitesimal time interval Δ t, the probability of having no switch during a time 0 to t = nΔ t will be S(t) = ∏_i = 1^n ( 1 - Γ_L(I(i Δ t)) Δ t ) . In the limit Δ t → 0 this probability becomes S(t) = e^-∫_0^t Γ_L(t') dt' . The switching rate Γ_L(I) may then be obtained as Γ_L(I(t)) = - ∂/∂ tln S(t). Specializing in the following to the case where the applied current is ramped up linearly from zero, I(t) = İ t, with a constant sweep rate İ, and assuming a homogeneous wire where the switching event may occur equally likely along the whole length L, we may write the cumulative distribution F(I) = Pr(I_sw < I) = ∫_0^I P(I') dI' = 1 - S(I/İ) for the switching current I_sw as F(I) = 1 - exp( - L/İ∫_0^I Γ(I') dI'), where Γ = Γ_L /L is the switching probability per time and length.
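As a numerical illustration of the relation above, the sketch below evaluates F(I) for a given switching probability Γ(I) per unit time and length and draws switching currents by inverse-transform sampling; the Arrhenius-like form of the rate and every parameter value are arbitrary placeholders rather than any of the models studied below.

```python
import numpy as np

def rate(I, I_c=1.0, prefactor=1e3, barrier=20.0):
    # Illustrative rate per unit time and length (arbitrary units and parameters).
    I = np.clip(I, 0.0, I_c)
    return prefactor * np.exp(-barrier * (1.0 - I / I_c) ** 1.5)

L, Idot = 1.0, 1e-2                                # wire length and sweep rate (arbitrary)
I = np.linspace(0.0, 1.0, 2001)
cumulative = np.cumsum(rate(I)) * (I[1] - I[0])    # approximates int_0^I Gamma(I') dI'
F = 1.0 - np.exp(-(L / Idot) * cumulative)         # cumulative switching distribution
P = np.gradient(F, I)                              # switching current density P(I)

# Draw switching currents by inverse-transform sampling of F(I).
u = np.random.default_rng(1).random(10_000) * F[-1]
I_sw = np.interp(u, F, I)
```

In practice the interest is usually in the reverse direction, recovering the rate Γ from a measured histogram of I_sw.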
The latter may thus be extracted via Γ = - İ/L∂/∂ Iln( 1 - F(I) ) = (İ/L) P(I)/(1 - F(I)) from measured (or simulated) switching current distributions. According to Eq. (<ref>) the sweep rate and the length of the wire only enter in the combination L/İ, hence the effect of increasing the length of the wire will be the same as decreasing the sweep rate. §.§ Delay time It turns out that the picture presented in the previous section may need some modifications when compared to numerical simulations or experiments. The assumption that the switching rate Γ does not depend on the current sweep rate İ is only approximately valid. In particular, there may be a time delay τ_d between the initiation of a switching event and the detection of this event. During this delay time the current will increase further, which will induce a systematic shift in the distributions that must be accounted for. Experimentally, the delay may be due to the time of propagation of the voltage pulse through the wire to the detector, but there can also be an intrinsic delay in the nucleation mechanism of a phase slip, as discussed in more detail below. The probability distributions F(I) and P(I) = ∂ F(I)/∂ I of the previous section then correspond to the initiation of a switching event. The detection will occur after a slight delay τ_d, during which the current has increased by ∫_t^t+τ_dİ dt ≈İτ_d. In the simplest setting we may assume that the delay time τ_d is constant, independent of I and İ. The cumulative distribution for the detection is then F^det(I,İ) = F(I - İτ_d), and depends also on the sweep rate İ. This complicates the analysis, since the transition rate Γ(I) can no longer be extracted from Eq. (<ref>) using F^det in place of F. More generally, we may assume that the delay time is random with a certain distribution P_τ_d(τ_d), so that F^det(I, İ) = ∫_0^∞ P_τ_d(τ_d) F(I - İτ_d) dτ_d . In this case we may still define a characteristic delay time τ_d^*(I,İ) so that F^det(I,İ) = F(I - İτ_d^*), and derive an effective Γ^eff(I,İ) = - İ/L∂/∂ Iln (1 - F^det(I,İ)) = (1 - İ∂τ_d^*/ ∂ I) Γ(I - İτ_d^*), related to the true one by a shift and a scale factor. If the distribution of τ_d is very narrow, τ_d^* will only be weakly dependent on I and may be treated as constant. We will test this hypothesis in simulations below. §.§ Escape over barrier To make a more detailed study, a microscopic model of the switching mechanism is needed. We will discuss several such models in this and the following sections. The most important parameter determining the switching probability is the energy barrier Δ U(I). When the temperature is low compared to Δ U the rate will follow an Arrhenius law Γ≈Ω(I)/2π e^-Δ U(I)/k_B T . Although the dynamics of the transition typically involves a large number of degrees of freedom, it is often possible to project it down to a single reaction coordinate x, starting at x_0=0 in the uniformly superconducting metastable state and reaching a final value x_1 on the other side of the (free) energy barrier Δ U(x,I). The maximum barrier Δ U(I) = max_x Δ U(x,I) occurs at some x_max given by ∂Δ U(x_max,I)/ ∂ x = 0, and will decrease with increasing current until it becomes zero at the nominal critical current. The prefactor Ω/2π can be estimated using Kramers' theory, as Ω = ω_maxω_min D/k_B T, where ω_i^2 = |∂^2 Δ U(x_i,I) / ∂ x_i^2| is evaluated at the minimum x_i = x_min and the maximum x_max, and D is an effective diffusion constant along x. Alternatively, instead of relying on the approximate Eq.
(<ref>) the rate can be obtained from the mean first passage time. In the following, x(t) is assumed to obey a one-dimensional overdamped Langevin equation, ẋ = -D βΔ U'(x) + ζ(t), where D = k_B T/α is the diffusion constant, β = 1/k_B T, and ζ is a white noise process with zero mean and covariance < ζ(t) ζ(t') > = 2D δ(t-t'). The applied current is held fixed in this argument and we write Δ U(x,I)=Δ U(x) for brevity. The corresponding Fokker-Planck equation for the probability P(x,t) is ∂/∂ t P(x,t) = ∂/∂ x D e^-βΔ U(x)∂/∂ x e^βΔ U(x) P(x,t) . The transition rate Γ may then be directly related to the mean first passage time τ for crossing the barrier via Γ = τ^-1. The latter may be expressed in closed form as <cit.> τ = 1/D∫_x_0^x_1 dx e^βΔ U(x)∫_x_0^x dy e^-βΔ U(y) , which may be evaluated numerically to give the transition rate also when the condition Δ U(I) ≫ k_B T does not hold. In the following sections we consider a few different scenarios for the switching mechanism. § ILLUSTRATIVE EXAMPLES §.§ Single Josephson junctions As a first illustration, consider a single Josephson junction connected to a current source and shunted by a resistance R. The reaction coordinate x may here be identified with the phase difference ϕ of the superconducting order parameter across the junction. In the overdamped limit, i.e., neglecting the junction capacitance, the phase obeys a Langevin equation (Φ_0/2π R) ϕ̇= - I_c sinϕ + I + I_n(t) where I_n is the Johnson-Nyquist noise in the resistor with zero mean and <I_n(t)I_n(t')> = (2 k_B T /R) δ(t-t'). This leads to an effective phase diffusion constant D =(2π/Φ_0)^2 R k_B T. The corresponding energy as a function of ϕ takes the form of a tilted washboard potential Δ U(ϕ,I) = - E_J cosϕ - (Φ_0 / 2π) I ϕ, where E_J = I_c Φ_0/2π. When the applied current is below the critical value, I < I_c, the stationary solution yields ϕ_min = sin^-1 (I/I_c), while the barrier maximum occurs at ϕ_max = π-ϕ_min. The energy barrier becomes Δ U(I) = 2 E_J √(1 - (I/I_c)^2) - I Φ_0(π - 2 ϕ_min)/2π, and ω_max = ω_min = √(E_Jcosϕ_min) = √( (Φ_0/2π) √(I_c^2 - I^2)), so that the transition rate estimated using Kramers' theory will be <cit.> Γ(I) = R √(I_c^2 - I^2)/Φ_0 e^-Δ U(I)/k_B T . In Fig. <ref> we compare this with the more precise Γ = τ^-1 obtained by numerically integrating Eq. (<ref>). As seen, the Kramers rate will be relatively accurate except when the barrier is low compared to temperature. The downturn at high currents of the latter approximation comes from the prefactor and is obviously unphysical. Below we will compare with the value extracted from simulated switching current distributions in Josephson junction chains. §.§ Josephson junction chains As a more complex test case, we consider simulated switching current distributions in a chain of N Josephson junctions connected in series. We employ the circuit model described in Refs. <cit.>. Each junction of the chain is modeled as an ideal Josephson junction shunted by a capacitance C and a nonlinear resistor R. The total current through junction i is I^tot_i = I^s_i + I^C_i + I^R_i = I_csin(θ_i - θ_i+1) + C (V̇_i - V̇_i+1) + I^R_i , where θ_i is the phase of the superconducting order parameter at the island to the left of junction i, and V_i = θ̇_i Φ_0 / 2π is the voltage. The nonlinear resistive current is taken to be I^R_i = (V_i - V_i+1)/R + I^n_i if |V_i - V_i+1| > V_g, and I^R_i = (V_i - V_i+1)/R_qp + I_i^qp,n otherwise, where R is the normal resistance of a single junction, and R_qp≫ R is the quasiparticle subgap resistance.
The current entering the chain through the lead resistance is given by I_L = (U - V_1)/R_term + I^n_L , where U is an applied voltage. In addition thermal noise currents I_i^n are included. They are modeled as a Gaussian random Johnson-Nyquist noise with zero mean and covariance < I_i^n(t) I_j^n(t')> = (2 k_B T/R_i) δ_ijδ(t-t'). The dynamical equations of motion form a system of equations obtained by imposing current conservation at each island i, C_0 V̇_i + I^tot_i - I^tot_i-1 = 0, where C_0 is the capacitance to ground. This gives a coupled system of 2nd order differential equations for the superconducting phases θ_i. These are integrated using a symmetric time discretization with a small time step Δ t = 0.02 (Φ_0/2π I_c R). Each iteration requires the solution of a tridiagonal system of equations. We set E_J/E_C = (I_cΦ_0/2π)/(4e^2/C) = 1, C/C_0 = 100, R_qp/R = 100, T = 0.01 E_J, R_term = 200 R, V_g = R I_c, and vary the sweep rate İ from 10^-7 to 10^-3 in units of 2π R I_c^2/Φ_0, for a chain consisting of L = 100 junctions. For this parameter choice with C ≫ C_0, the phase slips will be highly localized to single junctions and the reaction coordinate can be defined as the superconducting phase difference ϕ = Δθ across a junction in accordance with Sec. <ref>. Furthermore, a single phase slip at a particular junction will cause it to latch and stay in the dissipative running state, so a switching event may be identified as the first phase slip event. We show in Fig. <ref> the resulting simulated switching current histograms together with the theoretical prediction (solid lines) obtained from Eq. (<ref>) using the rate Γ = τ^-1 numerically computed from Eq. (<ref>). For low sweep rates the agreement is very good, considering that no fitting parameters were adjusted, in spite of the Josephson junction chain being considerably more complicated than a single junction. For higher sweep rates deviations are clearly seen, presumably because the time delay discussed in Sec. <ref> becomes non-negligible. The differences in the predictions of the distributions from Kramers' theory, Eq. (<ref>) and the mean first passage time are minuscule in this case. From the empirical histograms it is possible to recover the switching rate from Eq. (<ref>). This is shown in Fig. <ref>. At least for low sweep rates the accuracy of the procedure appears satisfactory. §.§ One-dimensional time-dependent Ginzburg-Landau theory Thin continuous wires are more appropriately modeled by time-dependent Ginzburg-Landau theory. Within a one-dimensional TDGL description, the free energy barrier for thermal phase slips has been calculated by Langer and Ambegaokar and by McCumber and Halperin (LAMH) as <cit.> Δ U(κ)/ρ_0 S ξ = 8√(2)/3√(1-3 κ^2 ) - 8κ( 1 - κ^2 )tan^-1(√(1-3 κ^2)/ 2 κ) where κ is related to the applied current density J via J/J_0 = κ (1 - κ^2), and J_d = (2/3√(3)) J_0 = (2/3√(3)) (Φ_0/2πμ_0 λ^2 ξ) is the GL depairing current density, ρ_0 the condensation energy density, and S the cross section of the wire. The prefactor in Eq. (<ref>) has been estimated by McCumber and Halperin to <cit.> Ω/2π≈√(3)/2 π^3/2τ_GLξ√(βΔ U(0) )(1 - κ√(3))(1 + κ^2/4) , where τ_GL∝ |T - T_c|^-1 is the GL time. The resulting rate is plotted in Fig. <ref>. As before, the downturn at high currents is an artefact of the approximations. §.§ Vortex barrier crossing For wider superconducting 2D sheets or strips the switching transition will necessarily involve vortex crossings perpendicular to the current. 
It is then natural to take the reaction coordinate x to be the distance from the edge to the vortex center. The applied sheet-current density 𝐉 exerts a Lorentz force 𝐟 = 𝐉×𝐧Φ_0 on a vortex trying to pull it further into and across the strip (𝐧 here is the normal to the surface). We assume that the vortices undergo diffusive motion in a potential with diffusion coefficient D = k_B T/α, where α = Φ_0^2 / 2πξ^2 ρ_n is the Bardeen-Stephen vortex friction <cit.> related to the normal state resistivity ρ_n. We further assume that the width W of the strip is larger than the GL coherence length ξ and much smaller than the Pearl length Λ = λ^2 / d, where λ is the magnetic penetration depth and d is the thickness of the strip. The nucleation of a vortex at an edge (or an anti-vortex at the opposite edge) involves a depletion of the superconducting order parameter in a region of the order of πξ^2 d, with an associated energy cost ∼ϵ_0/2. As the vortex moves further into the strip, ξ≲ x ≲ W-ξ, it will be attracted to its mirror images leading to an energy <cit.> U_0(x) = ϵ_0/2 + ϵ_0 ln [(2W /πξ) sin(π x/W)], where ϵ_0 = (Φ_0^2 / 4πμ_0 λ^2) d. A smooth interpolating formula for the total energy of the vortex may be defined as U(x,J) = ϵ_0/2ln[ 1 + e ( 2W/πξ)^2 sin^2 ( π x/W) ] - Φ_0 J x. In Fig. <ref> we plot the switching rate Γ = τ^-1 obtained from the numerical solution to Eq. (<ref>) using this U(x,J) for a couple of different temperatures and a width W = 100ξ. Three different regimes are clearly seen: For small currents I = JW ≪ϵ_0/Φ_0 the barrier maximum will occur near the center x ≈ W/2 of the strip resulting in an energy barrier U(J) ≈ U_0 - Φ_0 J W/2. The corresponding transition rate Γ will then grow exponentially with current. As the current is increased the maximum will move towards the edge and the dependence of the barrier on current turns logarithmic, which translates to a powerlaw dependence Γ∼ J^b, with an exponent b = βϵ_0 + 1. For larger currents corrections to the powerlaw behavior will occur due to the current-induced suppression of the order parameter not accounted for here [ In GL theory a uniform current leads to the renormalization ϵ_0 →ϵ = (1-κ^2) ϵ_0 < ϵ_0, where κ is defined below Eq. (<ref>)]. For even larger currents J ≳ J_dp = (2/3√(3)) (2 ϵ_0/Φ_0 ξ) the barrier will diminish resulting in yet another crossover to a regime where Γ∼ J. Correspondingly, the shape of the switching current distribution will change depending on what current regime is probed, which in turn depends on the sweep rate J̇, wire length, and temperature. For small currents the switching current will follow a Gumbel distribution, then a Weibull distribution with shape parameter βϵ_0 + 2 for higher currents, and eventually a Rayleigh distribution for the highest. We show some examples of the switching distributions for various J̇ / L and βϵ_0 = 8 in Fig. <ref>. § TIME-DEPENDENT GINZBURG-LANDAU SIMULATIONS A more detailed picture of the vortex barrier crossing requires a microscopic model including both amplitude and phase of the superconducting order parameter. Therefore, we now turn to a larger scale numerical simulation of a 2D superconducting strip using a stochastic formulation of the time-dependent Ginzburg-Landau (TDGL) equations uψ̇ = (1 - T - |ψ|^2)ψ + (∇ - i 𝐀)^2ψ + η_ψ 𝐀̇ = Im{ψ^* (∇ - i 𝐀) ψ} - ( λ/ξ)^2 ∇×𝐁 + η_𝐀 𝐁 = ∇×𝐀 here expressed in dimensionless units. 
Time is measured in units of the timescale τ_A = μ_0 σλ^2, σ is the normal state conductivity, and u=τ_ψ / τ_A the ratio of the timescale of the two equations. Positions are in units of the coherence length ξ, λ is the magnetic penetration length, T is in units of the critical temperature T_c, and magnetic field measured in units of the ħ / 2 e ξ^2. The precise value of u does not significantly influence the breakdown dynamics, and for computational efficiency we set u=1 rather than the commonly used value 5.79 <cit.> for a dirty superconductor. The temperature is taken to be stationary and Joule heating effects due to dissipation and vortex motion have been neglected, however the stochastic dynamics associated with a finite temperature are included through white noise terms η_ψ and η_A. The correlation functions of these noise terms are < η_ψ(𝐫, t) η^*_ψ(𝐫', t') > = 4 u D T δ(t - t') δ(𝐫 - 𝐫') and < η_A(𝐫, t) η_A(𝐫', t') > = 2 D T δ(t - t') δ(𝐫 - 𝐫'), with the dimensionless coefficient D = 2 e^2 μ_0 k_B T_c λ^2 / (ħ^2 d ). The thickness is here taken as d=ξ, and the critical temperature used is T_c = 10 K which is a typical order of magnitude for thin films of NbN or NbTiN used in detector devices <cit.>. These equations are expressed in the zero electric potential gauge. The geometry used to describe the wire is a rectangular domain 0<x<L and 0<y<W with length L and width W. At x=0 and x=L we use periodic boundary condition for both ψ and B to approximate the dynamics of a very long wire. At y=0 and y=W we apply the condition 𝐧· (∇ - i 𝐀)ψ = 0 corresponding to an insulating boundary, where 𝐧 = ±𝐲̂ is the unit normal of the boundary. The net current I_net through the cross section is tuned by the boundary conditions 𝐁(x, 0) = - B_0 𝐳̂, 𝐁(x, W) = + B_0 𝐳̂ for the magnetic field, since μ_0 I_net/d = ∫_0^W (∇×𝐁) ·𝐱̂ dy = 2 B_0. For the numerical solution of the equations of motion we use an explicit finite difference scheme with fixed time step Δ t = 10^-4, and the same spatial discretization Δ x = 1/2 in both the x and y coordinates. The complex phase θ and the vector potential 𝐀 are invariant under the gauge transformation θ→θ + α, 𝐀→𝐀 + ∇α. In order to preserve this gauge invariance of the covariant derivative we use a link-variable formulation (∇ -i 𝐀)ψ = U^* ∇(U ψ), where U = exp(-i ∫𝐀· d𝐥) is the so called link variable <cit.>. This formulation of the derivative has the benefit of being explicitly gauge invariant when using the discretized finite difference approximation for the derivatives. In order to determine the switching event the bias current is increased continuously in time as I(t) = İ t with a constant sweep rate İ, expressed in units of ħ d/(2 e τ_A μ_0 ξ^2). The switching current is identified from the time at which the breakdown of the superconducting state is first detected. This breakdown can be identified either by measuring the voltage along the strip, or alternatively by the flow of vortices across the wire. In the zero-potential gauge the electric field is given by 𝐄 = -𝐀̇ and the voltage along the strip is calculated through integrating the electric field V = ∫𝐀̇· d𝐥 along a path between a point on left and the right hand boundaries of the strip. With the stochastic dynamics this voltage becomes a very noisy signal, and it is therefore inconvenient to use the condition |V(I_c)| > 0 to define the critical current I_c. 
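As an aside, the link-variable discretization introduced above can be made concrete with a one-dimensional sketch in Python (NumPy assumed). This is our own illustration under simplified assumptions — periodic grid, vector potential given on the links — and not the production code, which is two-dimensional and includes the noise terms:

import numpy as np

def covariant_laplacian_1d(psi, A_link, dx):
    # Gauge-invariant (link-variable) discretization of (d/dx - iA)^2 psi on a
    # periodic 1D grid. A_link[i] is the vector potential on the link i -> i+1.
    U = np.exp(-1j * A_link * dx)      # link variable U_{i,i+1} = exp(-i \int A dl)
    psi_p = np.roll(psi, -1)           # psi_{i+1}
    psi_m = np.roll(psi, +1)           # psi_{i-1}
    U_m = np.roll(U, +1)               # link variable between sites i-1 and i
    return (U * psi_p - 2.0 * psi + np.conj(U_m) * psi_m) / dx**2

Applied to a plane wave e^{ikx} with constant A, this expression reduces to -(k-A)^2 ψ in the small-dx limit, which is the gauge-invariance check one expects.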
The vortex flow across the wire can be identified by integrating the complex phase gradient along a path between the left and right hand boundaries ν = ∫∇θ· d𝐥. The path is taken to be the mid-line of the strip at y=W/2 between x=0 and x=L. Each passing vortex increases this phase winding ν by (±) 2π, and the switching current is therefore determined as the first current for which |ν| ≥ 2 π. In practice, for high enough bias current, the first vortex passage will trigger an avalanche of many more vortices flowing across the wire. The first vortex passage therefore gives a more precise definition of the switching current compared to the detection of a non-zero voltage. § SIMULATION RESULTS Using the stochastic TDGL model we obtain statistics of the switching current for a wire with length L, width W, temperature T for a sweep rate İ. The TDGL model allows us to simulate the vortex dynamics, and to define the switching current in terms of the first vortex passage. From switching current statistics we can therefore extract information about the vortex entry barrier, following the procedure described in Sec. <ref>. An example of the switching current distribution of a relatively narrow wire (W=10ξ) is shown in Fig. <ref>, calculated from 1000 independent realizations of the stochastic noise for each İ. A large penetration length λ = 20ξ is used to allow an approximately uniform distribution of the current density. For these parameters the switching in all cases occur very close to the depairing current I_dp. We can note that for faster sweep rates part of the distribution extends to I_sw / I_dp > 1. This is a consequence of a finite delay time τ_d between the initiation of the breakdown process and the detection of the first vortex at the midpoint y=W/2, as discussed in Sec. <ref>. With a constant sweep rate İ of the current this leads to an overestimation τ_d İ of the switching current. In order to investigate this delay time numerically, we perform simulations for different İ and with a fixed ratio İ/L, since according to Eq. (<ref>) the distribution F(I_sw) would be invariant in the absence of a delay time. The result is shown in Fig. <ref>, and the fact that the histograms do not overlap while the overall shape of the distributions does not change is a clear indication that there is a significant time delay τ_d > 0. While the τ_d in principle could depend on İ we see from the figure that the histograms are separated by an approximately constant offset, indicating that τ_d is approximately constant. In order to estimate this time we make the ansatz I̅_i = I̅_sw + τ_d İ_i, where I̅_i refers to the median of the distribution of the detected switching currents and I̅_sw the true median in absence of a time delay. For each pair of distributions the τ_d can be obtained as τ_d = I̅_2 - I̅_1/İ_2 - İ_1. In order to evaluate this we let İ_1 be the slowest of the sweep rates, since this would be the least affected by the delay, and vary İ_2. The resulting delay time τ_d(İ_2) is shown in Fig. <ref>(a). For the slower sweep rates this delay approximately saturates to a constant value τ_d ≈ 100 τ_A, however for larger İ there is a small trend of gradually decreasing delay. This trend is interpreted as the larger İ allowing the bias current to increase more before the first vortex has had time to nucleate, and the higher bias current in turn accelerating the nucleation process of the vortex. The inset in Fig. 
<ref> (b) shows the cumulative distribution F(I_sw - τ_d İ), where the detected switching current is corrected for the delay time τ_d calculated for each sweep rate individually. We find that this shift is sufficient for the different distributions to collapse onto a single curve, and the obtained distributions F display the expected dependence on the ratio İ/L consistent with Eq. (<ref>). With the empirical cumulative distribution F corrected for the delay time τ_d, the switching rate Γ can be calculated according to Eq. (<ref>). The derivative in this expression must be numerically evaluated on the set of switching currents I_sw^(i) obtained for the i=0,1,...,999 independent simulation realizations. The cumulative distribution is estimated as F(I_sw^(i)) = i/1000, where the sampled switching currents I_sw^(i) are sorted in increasing order. We use a symmetric difference approximation for the derivative, Γ(I_sw^(i)) = - (İ/L) log[ ( 1 - F(I_sw^(i+k)) ) / ( 1 - F(I_sw^(i-k)) ) ] / ( I_sw^(i+k) - I_sw^(i-k) ). Using nearest neighbour differences (k=1) is found to result in a very noisy approximation of Γ, while a very large value of k would instead lead to a systematic underestimation of the variability of Γ; we find a good middle ground at k=20. The resulting estimate of Γ is shown in Fig. <ref>(a) for a fixed length L=50ξ of the wire and different sweep rates İ for the bias current. The switching currents are here corrected for the finite delay time by subtracting τ_d İ, where we use the approximately constant value τ_d ≈ 100 τ_A obtained for the slower sweep rates in Fig. <ref>. For reference, the inset in Fig. <ref>(b) shows the corresponding Γ when the switching current is not corrected by the shift τ_d İ. It is clear that a finite τ_d must be taken into account in order for Γ calculated for different sweep rates to collapse onto the same curve. This nucleation time of a switching event can also be observed in snapshots of the modulus of the order parameter |ψ|^2. Examples of these snapshots are shown for a narrow wire (W=10ξ) in Fig. <ref>, where t=0 corresponds to the time when the first vortex crossing is detected at the mid-line y=W/2, but a signature of the formation of a weak spot can be seen much earlier, in this case at t=-50 τ_A. An additional contribution to the delay time is expected due to a finite vortex velocity [A very rough theoretical estimate for the vortex motion contribution would be W/v, where the vortex velocity v ≈ J Φ_0 / α, neglecting any variation in the potential across the wire.]. The size of this contribution can be estimated by detecting the first vortex crossing at several points y_i along the cross section. For a vortex entering through the boundary y=0 and moving with a constant velocity v, the switching current profile would be of the form I_sw(y_i) = I_sw(0) + (y_i/v) İ. Here I_sw(0) is the current at which the vortex first enters the strip, and the velocity v can be estimated by fitting the slope. Often the breakdown will be more complex than a single vortex crossing from one boundary to the other, as the example of I_sw(y_i) in Fig. <ref> shows. The maximum near the center indicates that in this realization a vortex first entered at y=W; a short time later an anti-vortex entered at y=0, and the pair annihilated in the center. The timescale associated with this process is estimated from the variation Δ I_c as Δ t_v = Δ I_c / İ and is found to be much smaller than the timescale τ_d associated with the nucleation time of the first vortex described above.
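To make the rate-extraction procedure of this section concrete, the following is a minimal sketch in Python (NumPy assumed; the variable names are ours, and the delay correction and k=20 follow the choices described above):

import numpy as np

def switching_rate(I_sw, Idot, L, tau_d=0.0, k=20):
    # Estimate Gamma(I) from sampled switching currents via the symmetric
    # difference formula, after subtracting the delay correction tau_d * Idot.
    I = np.sort(np.asarray(I_sw)) - tau_d * Idot
    n = len(I)
    F = np.arange(n) / n                      # empirical CDF, F(I^(i)) = i/n
    idx = np.arange(k, n - k)                 # indices where both i-k and i+k exist
    gamma = -(Idot / L) * np.log((1.0 - F[idx + k]) / (1.0 - F[idx - k])) \
            / (I[idx + k] - I[idx - k])
    return I[idx], gamma

Combining the (I, Γ) pairs obtained for several sweep rates then stitches together Γ(I) over a wider current interval, as done in Fig. <ref>(a).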
Together Figs. <ref> and <ref> suggest that, for a narrow (W = 10ξ) strip, a channel of suppressed order parameter amplitude forms prior to the passage of the vortices. For a wider wire (W=30ξ), snapshots of the modulus |ψ|^2 (Fig. <ref>) show the same characteristic signature as in the narrow wire (W=10ξ): a weak spot starts to form a relatively long time before the first vortex entry. In the wider wires it is possible to resolve the shape of the moving vortices. The influence of the driving current is seen to deform the vortex profile into an elongated shape. With a wider cross section it is natural that the time delay due to the finite vortex velocity also increases, as shown in Fig. <ref>. For W=30ξ this contribution to the total delay time τ_d is still small. However, as the width is increased further it will eventually become non-negligible. § DISCUSSION The simulated switching current distributions, shown in Fig. <ref> for a Josephson junction chain and in Fig. <ref> for a nanostrip, have a characteristic asymmetric shape with more weight toward lower currents and a standard deviation on the order of a few percent of the mean switching current. These distributions capture many of the same features seen in experimentally measured switching current distributions <cit.>. The width of the simulated distributions decreases with decreasing sweep rate İ of the bias current, which is mostly in agreement with experiments. However, a non-monotonic width dependence has also been reported, with a local minimum width appearing for intermediate values of İ <cit.>. The logarithm of the switching rate Γ extracted from the simulated distributions follows a slightly concave curve, see Fig. <ref>. A perfectly straight line, i.e., an exponential dependence of Γ on current, would correspond to a Gumbel distribution for the switching current, which has a skewness of -1.14. The concave dependence we see corresponds to slightly less skewed distributions, e.g., with skewness about -0.5 for the TDGL simulations. This concave trend of the switching rate has also been seen in experiments <cit.>, with increasing curvature for higher temperatures. In these references the curvature was attributed to multiple phase slips being required in order to trigger a full switching event. However, in our simulations the switching event is determined by the first phase slip detected, which indicates that curvature alone may not be a unique signature of a multi-phase-slip regime. In fact, under the assumptions discussed in Sec. <ref> the rate Γ(I) reflects the current dependence of the energy barrier Δ U(I), which may be complicated, with several different crossovers. In our TDGL simulations of narrow wires we observed that the vortex nucleation process begins with a growing depletion of the superconducting order parameter amplitude at one edge of the strip. Only once the order parameter is sufficiently suppressed over a region reaching across the width of the strip do vortices start to flow. When the width is increased it becomes possible to discern the flow of individual vortices. Eventually, for even wider strips the vortex crossing scenario discussed in Sec. <ref> should become applicable. By exploiting how a slower sweep rate İ shifts the I_sw distribution towards lower currents, the results for different values of İ can be combined according to Eq. (<ref>). This permits extracting the switching rate Γ, and by extension the vortex energy barrier Δ U(I), on a larger interval of currents.
In doing this for our simulation results we found it necessary to account for a time delay τ_d between the initiation of a phase slip and its later detection in the form of a vortex crossing. This intrinsic time delay is estimated to be of the order of a few ps in superconducting materials such as NbN using typical values for σ and λ <cit.>, making it less of an issue in experimental settings. On the other hand, experiments may be subject to other, more important contributions to the delay time, such as the propagation time of the voltage pulse through the wire, which may need to be taken into account. By lowering the sweep rate, or equivalently by studying longer wires, it is possible to probe rare barrier crossings. In this regime it is likely that inhomogeneities and disorder will start to play a role. Studying this crossover to a disorder-dominated regime <cit.> would be an interesting extension. Several experiments have also demonstrated a non-monotonic temperature dependence of the width of the switching current distribution <cit.>. Such an effect has been attributed to Joule heating <cit.>, which has not been included in this investigation but could be another potential avenue for future work. § CONCLUSION The formula (<ref>) connects the switching rate Γ to the probability distribution of switching currents P(I_sw). This formula is particularly useful for analyzing simulation data, as we demonstrate above, and also for experiments. A complication is that in practice the switching current histograms may need to be shifted due to a prevalent time delay. Accounting for this, our simulations fit well with the theoretical description in Sec. <ref> and make it possible to stitch together the extracted switching rate Γ from several switching current distributions taken using different sweep rates İ. The rate Γ, in turn, makes it possible to obtain the thermal activation energy barrier of the process to logarithmic accuracy, i.e., neglecting the prefactor in Eq. (<ref>), thus providing insight into the detailed switching mechanism. A similar analysis could be employed for experimental data, thus allowing both the switching rate Γ and the time delay τ_d to be measured. The computations were enabled by resources provided by the National Academic Infrastructure for Supercomputing in Sweden (NAISS), partially funded by the Swedish Research Council through grant agreement no. 2022-06725.
http://arxiv.org/abs/2406.18631v1
20240626180000
HATS-38 b and WASP-139 b join a growing group of eccentric hot Neptunes on polar orbits
[ "Juan I. Espinoza-Retamal", "Guðmundur Stefánsson", "Cristobal Petrovich", "Rafael Brahm", "Andrés Jordán", "Elyar Sedaghati", "Jennifer P. Lucero", "Marcelo Tala Pinto", "Diego J. Muñoz", "Gavin Boyle", "Rodrigo Leiva", "Vincent Suc" ]
astro-ph.EP
[ "astro-ph.EP" ]
http://arxiv.org/abs/2406.18448v1
20240626155310
On the increase of the melting temperature of water confined in one-dimensional nano-cavities
[ "Flaviano Della Pia", "Andrea Zen", "Venkat Kapil", "Fabian L. Thiemann", "Dario Alfè", "Angelos Michaelides" ]
cond-mat.mtrl-sci
[ "cond-mat.mtrl-sci" ]
§ ABSTRACT Water confined in nanoscale cavities plays a crucial role in everyday phenomena in geology and biology, as well as in technological applications at the water-energy nexus. However, even understanding the basic properties of nano-confined water is extremely challenging for theory, simulations, and experiments. In particular, determining the melting temperature of quasi-one-dimensional ice polymorphs confined in carbon nanotubes has proven to be an exceptionally difficult task, with previous experimental and classical simulation approaches reporting values ranging from ∼ 180 K up to ∼ 450 K at ambient pressure. In this work, we use a machine learning potential that delivers first principles accuracy to study the phase diagram of water for confinement diameters 9.5 < d < 12.5 Å. We find that several distinct ice polymorphs melt in a surprisingly narrow range between ∼ 280 K and ∼ 310 K, with a melting mechanism that depends on the nanotube diameter. These results shed new light on the melting of ice in one dimension and have implications for the operating conditions of carbon-based filtration and desalination devices. § INTRODUCTION Water under nanometric confinement is ubiquitous in nature and important for chemistry, physics, biology, geology, and engineering. It has received attention from both experiments and theory. Experiments suggest anomalous properties such as low dielectric response <cit.>, anomalously soft dynamics with pliable hydrogen bonds <cit.>, and massive radius-dependent flow <cit.>. Theory and simulations indicate a quantum mechanically induced friction <cit.>, ice-liquid oscillations <cit.>, and possible superionic behavior <cit.>. In addition to its potential for the discovery of new physics of confined liquids, nano-confined water has many promising applications in, e.g., desalination <cit.> and clean energy <cit.>. In particular, water confined in carbon nanotubes (CNTs) has been of interest for quasi one-dimensional phase transitions <cit.>, macroscopically ordered water structures <cit.>, a transition from Fickian to ballistic diffusion <cit.>, ultra-fast water hydrodynamics <cit.>, formation of close-packed ice <cit.>, as well as promising applications ranging from water purification to blue energy harvesting <cit.>. Both experiments <cit.> and simulations <cit.> suggest that the phase diagram of water confined in sub-nanometer tubes is significantly different from bulk water, with the formation of both ordered hollow and filled one-dimensional polymorphs, namely ice nanotubes. Water confined in CNTs is of strong technological interest in both solid and fluid phases. Ice nanotubes have potential applications to ferroelectric devices <cit.>, while liquid water in CNTs is important for the development of high-flux membranes <cit.> and flow sensors <cit.>, and, owing to the strong analogy between CNTs and aquaporins <cit.>, for the development of artificial water channels <cit.>. In this context, it is important to ask in which temperature range water confined in nanotubes melts, with implications for all the aforementioned applications. The melting temperatures of ice nanotubes confined in CNTs have been an object of study in both simulations <cit.> and experiments <cit.>, especially below a critical confinement length scale (approximately 2.5 nm) where the macroscopic Gibbs-Thomson relation, which predicts a depression of the freezing point of water, breaks down <cit.>.
However, measuring the melting temperature of water in narrow carbon nanotubes is a challenge for both experiments and simulations. In fact, X-ray diffraction (XRD) measurements <cit.> reported melting temperatures ranging from ∼ 300 K (pentagonal ice) to ∼ 180 K (octagonal ice). These results are roughly in agreement with classical molecular dynamics (MD) simulations <cit.> based on TIP4P <cit.> or SPC/E <cit.> water, as well as photoluminescence (PL) experiments <cit.>. In contrast, Raman spectroscopy experiments <cit.> reported melting temperatures that were extremely sensitive to the CNT diameter, varying from ∼ 450 K for d ∼ 10.5 Å to ∼ 280 K for d ∼ 15.2 Å. Qualitatively similar results were subsequently obtained with ReaxFF <cit.> MD simulations in Ref. . In summary, the debate on the values of the melting temperature is still open: different experiments and (empirical force-field) simulations report values ranging from ∼ 180 K up to ∼ 450 K. This seemingly basic disparity has large implications for the working conditions of liquid water and ice nanotubes in emerging nanotechnological applications. While different experimental techniques have been applied to investigate this problem, no computational work with the accuracy of electronic structure theory is available, mainly due to the significant length and timescales needed for reliable results. In this work, we take the next step towards computing the melting temperatures of one-dimensional nano-confined ice and understanding its ambient pressure phase diagram with predictive accuracy. In particular, we achieve first-principles accuracy with feasible computational cost by using a machine learning potential (MLP) <cit.> trained on density functional theory (DFT) data, and target the question: are ice nanotubes in a one-dimensionally confining CNT-like cavity already liquid at room temperature? To address this question, we study the melting temperature of nano-confined ice with an implicit model, i.e., by emulating the confining material with a cylindrical confining potential fitted to the water-carbon interaction in sub-nanometer carbon nanotubes. This is a standard approach in analyzing the phase behavior of quasi one-dimensional nano-confined water <cit.>, and we have checked the reliability of our implicit model against explicit modeling of the carbon atoms by using the MLP developed in Ref. . To compute the melting temperatures, we determine the most stable polymorph at each nanotube diameter by using random structure search (RSS) <cit.> and compute its melting temperature via solid-liquid coexistence simulations. We find the melting points of (helical) triangular, square, pentagonal, and hexagonal ice nanotubes to be ∼ 10-30 K higher than the bulk water melting temperature. In addition, we report a non-monotonic behavior of the number of hydrogen bonds with the confining diameter, which correlates positively with the water diffusion coefficient. On the one hand, our results confirm the possibility of studying and applying ice nanotubes at around room temperature, but suggest that their range of stability is limited to temperatures below ∼ 310 K, as opposed to the higher range suggested previously. On the other hand, our results indicate that filtration and desalination devices based on water confined in narrow tubes do not require high working temperatures.
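To indicate how such a structure search can be organized in practice, a minimal sketch of the random-structure-search loop in Python follows (NumPy assumed). The callable relax_fn stands for the geometry optimization with the MLP plus the confining potential and is not defined here; a real search also places the hydrogens and samples molecular orientations, so this is an illustration of the logic rather than the actual code:

import numpy as np

def random_structure_search(n_water, radius, length, relax_fn, n_trials=500, seed=0):
    # Randomly place n_water oxygen positions inside a cylinder of given radius
    # and length, relax each guess with the user-supplied relax_fn (assumed to
    # return (enthalpy, relaxed_structure)), and keep the lowest-enthalpy one.
    rng = np.random.default_rng(seed)
    best = (np.inf, None)
    for _ in range(n_trials):
        r = radius * np.sqrt(rng.random(n_water))      # uniform over the disc
        phi = 2.0 * np.pi * rng.random(n_water)
        z = length * rng.random(n_water)
        guess = np.column_stack([r * np.cos(phi), r * np.sin(phi), z])
        enthalpy, relaxed = relax_fn(guess)             # MLP + confining potential
        if enthalpy < best[0]:
            best = (enthalpy, relaxed)
    return best

The lowest-enthalpy structure returned by such a loop is the candidate ground state at the chosen diameter.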
§ THE MELTING TEMPERATURE OF QUASI ONE-DIMENSIONAL NANO-CONFINED ICE Previous experimental and empirical force-field based computational work suggests that water confined in sub-nanometer nanotubes exhibits an interesting phase diagram, with different quasi one-dimensional ice polymorphs stable in different diameter ranges <cit.>. In this work, we explore the phase behavior of ice confined in narrow nanotubes, i.e., with a diameter d ∼ 10 Å, with first-principles accuracy. Therefore, we considered five confining cylinders corresponding to zigzag nanotubes CNT(n,0) with n=12,13,14,15,16. The diameters of the considered nanotubes are, respectively, ∼ 9.5, 10.2, 11.0, 11.8, and 12.5 Å. The diameters were selected to maximize the variability in the studied phase diagram. In fact, a different one-dimensional ice polymorph is expected to be the most stable polymorph for each confining diameter <cit.>. To explore the phase behavior of the ice nanotubes, we use an implicit confinement model. We first developed a uniform cylindrical confining potential fitted to revPBE-D3 <cit.> binding energies of a water molecule inside the CNT. The DFT functional for the water-carbon interaction was determined via an accurate benchmark against diffusion Monte Carlo (DMC) and coupled cluster with single, double, and perturbative triple excitations (CCSD(T)) data from Ref. . The water-water interaction is described by using the MLP from Ref. , trained on revPBE0-D3 <cit.> for treating water in bulk and under confinement <cit.>. To find all the metastable phases in each nanotube, we performed an RSS with a home-built Python code. Subsequently, we identified the most stable polymorph in each nanotube based on the minimization of the enthalpy, and then computed its melting temperature via coexistence simulations. The reliability of our model in determining the lowest enthalpy structure has been checked against both revPBE0-D3 and DMC data, as reported in the Supporting Information (SI)<cit.> (section <ref>). Further technical details on the RSS, the fitting of the confining potential, and the coexistence simulations are reported in Methods and in the SI, together with tests on the robustness of our model with respect to the modeling of explicit carbon (section <ref>). Considering the weak impact of quantum nuclear motion on the melting temperature of bulk <cit.> and 2D nanoconfined water <cit.> at ambient pressures, we restrict ourselves to a classical description of the nuclei. The most stable phases identified with our approach in the five considered nanotubes are (helical) triangular (d∼ 9.5 Å), square (d∼ 10.2 Å), pentagonal (d∼ 11.0 and ∼ 11.8 Å), and hexagonal ice (d∼ 12.5 Å). Front-view snapshots of the zero-temperature structures are reported in Fig. <ref>(a). The melting temperature-diameter phase diagram is shown in Fig. <ref>(b). The first-principles-accuracy melting points obtained in this work are reported as a blue triangle, a red square, orange and dark green pentagons, and a light green hexagon. We find that the melting temperatures of ice nanotubes are ∼ 10-30 K above the melting temperature of bulk water, which is 270 ± 5 K for our model <cit.>. In particular, the melting point of the square ice nanotube is in quantitative agreement with PL spectroscopy <cit.>. A much higher transition temperature was reported with Raman spectroscopy at similar diameters <cit.>. However, Chiasci et al. <cit.> argue that the high temperature reported in Ref.
could be related to the observation of the encapsulation process (the vapor-liquid phase transition). The melting temperatures of the pentagonal and hexagonal ice nanotubes are in near-quantitative agreement with XRD <cit.> and Raman <cit.> experiments, considering the large experimental error bars and the uncertainties in the confining diameter. It is not straightforward to compare our results to the melting temperatures predicted by empirical water models. In fact, the MLP melting temperatures of square, pentagonal, and hexagonal ice are approximately 20-30 K higher than the TIP4P predictions, while the MLP melting temperature of triangular ice is ∼ 90 K higher than the TIP4P prediction. However, TIP4P predicts a bulk melting temperature of ∼ 230 K <cit.>. Hence, TIP4P predicts a melting temperature of triangular ice approximately 20 K lower than the bulk, while the TIP4P-predicted melting temperatures of square, pentagonal, and hexagonal ice are approximately 30-50 K higher than the bulk. In general, the melting point of water is very sensitive to the force field used to describe the water-water interaction, both in bulk <cit.> and under confinement <cit.>. Ref. , for instance, shows a discrepancy of 300 K across empirical force fields in the predicted melting temperature for 2D water. In addition, we show in the SI (section <ref>) that the distributions of bond lengths and angles in the optimized ice nanotubes differ from the fixed values assumed in rigid models, a result similar to that found in 2D confinement <cit.>. Overall, this analysis emphasizes the need for achieving predictive ability with first principles accuracy, and suggests that our work provides valuable insight into the phase diagram of nano-confined ice. § CONTINUOUS OR DISCONTINUOUS PHASE TRANSITION? The nature of the melting phase transition is of special interest for water under confinement. In fact, while melting in bulk systems is a direct first-order process, in low-dimensional systems it can be more complex. For instance, 2D ice has been predicted to melt into a liquid via a hexatic phase <cit.>. The solid-to-hexatic transition resembles a first-order transition, while the hexatic-to-liquid transition resembles a second-order one <cit.>. In the case of CNTs, previous empirical force-field MD studies showed that water in carbon nanotubes may freeze either continuously or discontinuously, with strong sensitivity to diameter and pressure <cit.>. In particular, at ambient pressure TIP4P simulations from Ref. suggest that the melting transition is continuous for pores with diameters d > 12 Å, while it can be either continuous or discontinuous for d< 12 Å. In contrast, ReaxFF simulations from Ref. suggest that the phase transition is discontinuous for hexagonal ice and continuous for square and pentagonal ice, while triangular ice undergoes a supercritical transition with the absence of a diffusive regime at high temperatures. Indeed, it has been shown for 2D nano-capillaries that the order of solid-liquid phase transitions of empirical force fields is sensitive to their parametrization <cit.>. In Fig. <ref>, we report the density (a) and the diffusion coefficient D_z along the nanotube axis (b) as a function of temperature. We also report the diffusion coefficient of bulk water computed with the MLP and compare it to experimental results <cit.>, showing the accuracy of our model, with quantitative agreement from 280 K to 320 K.
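For orientation, one simple way to read a melting temperature off curves of this kind is sketched below (Python, NumPy assumed). The threshold value and the midpoint convention are our own assumptions for illustration, not quantities fixed by the text:

import numpy as np

def estimate_melting_temperature(T, D, D_solid_max=1e-10):
    # Crude illustration: take T_m as the midpoint between the highest
    # temperature with a solid-like (small) diffusion coefficient and the
    # lowest temperature with a liquid-like one. The threshold is an assumption.
    T = np.asarray(T)
    D = np.asarray(D)
    solid = T[D <= D_solid_max]
    liquid = T[D > D_solid_max]
    if len(solid) == 0 or len(liquid) == 0:
        raise ValueError("no solid/liquid branch found for this threshold")
    return 0.5 * (solid.max() + liquid.min())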
With our model, both structure (density) and dynamics (diffusion) suggest that the phase transition can be either continuous or discontinuous. In particular, we observe hallmarks of a discontinuous phase transition for triangular and square ice (the stronger confinement regime). On the other hand, seemingly smooth changes in the density and the diffusion coefficient indicate a continuous phase transition for pentagonal and hexagonal ice. As the melting temperature and the order of phase transitions are sensitive to finite-size effects, we show in the SI (section <ref>) that our results are converged with respect to the system size. Overall, such contrasting melting behavior observed within such a narrow range of diameters is a clear illustration of the delicate and fascinating behavior of nano-confined water. § STRUCTURE AND DYNAMICS OF NANO-CONFINED WATER DEPEND NON-MONOTONICALLY ON THE NANOTUBE DIAMETER So far we have focused on identifying the melting temperatures of nano-confined ice tubes. To gain further insight into these systems and to try to understand our observations, we now look at the properties of the liquid as a function of the confinement diameter. In Fig. <ref>, we report an analysis of the structure and dynamics of the liquid equilibrated at T = 320 K. In particular, we compute the density as a function of the radial distance in the confining cylinder, the average number of hydrogen bonds via geometrical criteria defined in Ref. , and the water diffusion coefficient using the velocity autocorrelation function (VACF) method <cit.>. We observe a non-monotonic behavior of the average number of hydrogen bonds per molecule as a function of the confining diameter, which correlates positively with the non-monotonic trend in the diffusion coefficient. Increasing the diameter from 9.5 Å to 11.8 Å produces a less centred liquid, with the peak of the radial density as a function of the distance from the centre r (panel b) shifting from r ∼ 1.5 Å to r ∼ 2.6 Å. This change is accompanied by an increase in the average number of hydrogen bonds (panel c) from ∼ 2.0 to ∼ 2.7. The number of hydrogen bonds in the smallest diameter is consistent with both our RSS and previous results <cit.>, because only centred chains of water molecules are expected to be stable for d < 9 Å. The liquid structure in the largest diameter (12.5 Å) consists of a chain of water molecules inside water rings, in agreement with previous observations <cit.>. This results in a peak in the radial density close to r ∼ 0 and correlates with a decrease in the number of hydrogen bonds. The liquid-phase diffusion coefficient D_z (panel d) shows a trend consistent with the number of hydrogen bonds, increasing from ∼ 3.0 × 10^-9 m^2s^-1 at 11.8 Å to ∼ 5.5 ×10^-9 m^2s^-1 at 9.4 Å. In particular, the diffusion coefficient for d< 10 Å is higher than the bulk value at the same temperature, which is ∼ 3.8 × 10^-9 m^2s^-1 with our model. The increase of the diffusion coefficient with stronger confinement is consistent with recent experiments <cit.> reporting ultra-fast transport (slip length of ∼ 8.5 µm) in vertically aligned CNT membranes with d < 9 Å. Qualitatively similar behavior was found with TIP4P/2005 and SPC/E, but not with other empirical force-field simulations, which predict a diffusion coefficient that increases with increasing diameter in the sub-nanometer regime <cit.>.
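For concreteness, a minimal sketch of a geometric hydrogen-bond count of this kind is given below (Python, NumPy assumed). The 3.5 Å donor-acceptor distance and 30° angle cutoffs are typical literature values quoted here as assumptions, not necessarily the exact criteria of the cited reference, and the counting convention (each bond attributed once, to its donor) affects the absolute numbers; periodic boundaries are ignored for brevity:

import numpy as np

def count_hbonds(O, H, r_OO_max=3.5, angle_max_deg=30.0):
    # Average number of hydrogen bonds per molecule from a geometric criterion:
    # donor-acceptor O-O distance below r_OO_max (Angstrom) and a nearly linear
    # O-H...O arrangement (angle between the O-H bond direction and the H->O
    # acceptor direction below angle_max_deg). O: (N,3); H: (N,2,3).
    n = len(O)
    n_hb = 0
    for i in range(n):                  # donor molecule
        for j in range(n):              # acceptor molecule
            if i == j:
                continue
            if np.linalg.norm(O[j] - O[i]) > r_OO_max:
                continue
            for h in H[i]:              # each O-H of the donor
                oh = h - O[i]
                ho = O[j] - h
                cosang = np.dot(oh, ho) / (np.linalg.norm(oh) * np.linalg.norm(ho))
                if np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))) < angle_max_deg:
                    n_hb += 1
    return n_hb / n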
In summary, we observe that each confining diameter, and hence each ice nanotube, is associated with a different liquid, with the melting temperatures varying within a relatively narrow range of ∼ 30 K. As the melting temperature depends on the relative Gibbs free energy of the solid and the liquid, we think that the seemingly weak radius dependence of the melting temperature could arise from an overall cancellation of the effects of confinement in both the solid and liquid phases. Finally, we acknowledge that the exact values of both structural and dynamical properties of confined water are expected to be influenced by the modeling of the confining material. However, in the SI (section <ref>) we show that our results are robust with respect to the explicit modeling of the CNTs. In fact, we report on the diffusion coefficient and the average number of hydrogen bonds in the three largest nanotubes considered in this work and show results that are in qualitative agreement with those reported in the main manuscript with the uniform confining potential. § CONCLUSIONS In this work, we computed the melting temperatures of quasi one-dimensional ice nanotubes with first principles accuracy. This topic has been debated over the last decades, with both experiments and classical simulations reporting melting temperatures ranging from ∼ 180 K to ∼ 450 K. Exploiting machine-learning-based simulations with first-principles accuracy, we show that one-dimensional nano-confined water is liquid around room temperature. In particular, we computed the melting temperature of (helical) triangular, square, pentagonal, and hexagonal ice in cylindrical confining potentials of diameter d ∼ 9.5, 10.2, 11.0, 11.8, and 12.5 Å. We find that all the considered ice nanotubes melt in the range ∼ 280-310 K. Our melting temperature of square ice is in agreement with PL spectroscopy experiments <cit.>; similarly, the melting temperatures of pentagonal and hexagonal ice are in agreement with Raman, XRD, and PL spectroscopy experiments <cit.>. In addition, our melting temperature of triangular ice is higher than previously reported. Notably, empirical force-field simulations predicted melting temperatures differing by as much as ∼ 80 K from our MLP, as well as qualitatively different behavior in the diffusion coefficient. Finally, we provide insight into the structural and dynamical properties of water in different confinement regimes. In the sub-nanoscale confinement regime (d < 10 Å), we report a strong reduction in the number of hydrogen bonds and a correspondingly enhanced diffusion coefficient that exceeds the bulk limit. Our model is certainly limited in capturing the effects of chirality and phonon coupling of the CNTs, due to the use of a uniform confining potential. However, we show that the uniform confining potential remains a good model for studying the phase behavior of quasi 1D water. In summary, this work suggests that first principles accuracy is required for the modeling of one-dimensional nano-confined water.
In addition, it improves the understanding of the melting transition and of the structural and dynamical properties of nano-confined water, providing further insight for technological applications in water filtration and desalination, and for the development of artificial channels mimicking biological systems. § METHODS To ensure both computational efficiency and accuracy, we follow an approach similar to that used in Ref. . We used: (1) a combination of available DMC and CCSD(T) data to select a DFT functional to describe the water-carbon interaction and parametrise a Morse potential; (2) an MLP trained on bulk and confined water structures for the water-water interaction <cit.>; (3) random structure search (RSS) to identify metastable phases; and (4) solid-liquid coexistence simulations to compute the melting temperatures. § SEPARATION OF THE POTENTIAL ENERGY SURFACE We split the potential energy of the system into: (1) the water-CNT interaction, modeled using a radial confining potential fitted to DFT water-CNT binding energies; and (2) the water-water interactions, described by an MLP trained on DFT data. The water-CNT interaction is modeled with the revPBE-D3 functional, selected according to the benchmark against DMC and CCSD(T) data (previously computed in Ref. ) reported in the SI (section <ref>). The water-water interaction is modeled with the MLP trained on revPBE0-D3 data for bulk and confined water in Refs. . The MLP model relies on the Behler–Parrinello neural network framework <cit.> to form a committee neural network potential <cit.> and is trained with an active learning framework <cit.>. The DFT functional was selected based on DMC benchmarks <cit.>, and the model has already been applied to the analysis of the phase diagram of monolayer confined water <cit.>. § STRUCTURE SEARCH We probe potential phases of ice nanotubes using the RSS approach in combination with the MLP. Within this approach, we recover the previously known ice polymorphs by optimizing a large set of structures generated by randomly placing water molecules within the confinement region at ambient pressure. The RSS is less suitable for the identification of helical structures, which require a specific number of molecules. We build helical ice nanotubes according to the theory described in Ref. and optimize them with our model. The zero-temperature-zero-pressure stable phase is subsequently identified as the one with the lowest energy. The validity of our model has been tested against DMC data for the CNT(15,0), as described in the SI. § COEXISTENCE SIMULATIONS The melting temperatures of the confined ice nanotubes are determined via solid-liquid coexistence simulations. An initial solid structure is thermalized at ∼ 250 K; half of the oxygens are subsequently frozen, while the other half is melted at ∼ 600 K and then quenched down to ∼ 300 K. The interface between the solid and the liquid is built in the NVT ensemble to avoid large box fluctuations during the high-temperature melting phase. The coexistence simulations are subsequently run in the NP_z T ensemble at P ∼ 10 bar, varying the temperature in the range ∼ [250,340] K. The melting temperature is determined according to changes in the density and the diffusion coefficient as a function of temperature, as shown in Fig. <ref>. The density is computed as the ratio of the number of molecules to the occupied volume inside the confining cylinder.
To define the volume of the cylinder occupied by water molecules, we consider the radial density ρ(r) as a function of the radial distance r. We define r_max as the maximum distance r such that ρ(r)>0. The occupied volume V is computed as V=π r^2_max l_z, where l_z is the length of the simulation box. The shaded error bars in Fig. <ref>(a) are computed considering a ∼ 5% uncertainty in the definition of r_max. The diffusion coefficient of nano-confined ice is estimated using the VACF method. A comparative analysis on the estimate of the diffusion coefficient obtained by using the Einstein relation to extract the diffusion coefficient from the Mean Square Displacement (MSD)<cit.> and VACF is reported in the SI. The two approaches deliver results in close agreement, and the VACF approach was chosen to present results in the main manuscript as it delivers smaller sampling uncertainties. The diffusion coefficient of the bulk is computed with the MSD method using a cubic box of side L=24.84 Å containing 512 water molecules, and applying the temperature-dependent finite-size correction from Ref. . § COMPUTATIONAL DETAILS Molecular Dynamics (MD) simulations are performed using the i-PI <cit.> code with the n2p2-LAMMPS <cit.> library to calculate the MLP energies and forces, and an ASE <cit.> driver for the radial confining potential. The time-step is fixed to 0.5 fs, and pressure and temperature are controlled with the generalized Langevin equation (GLE) barostat-thermostat as implemented in i-PI. Complete details on the development of the used MLP are given in Ref. . Coexistence simulations have been performed with supercells containing ∼ 700 water molecules (∼ 40 nm long nanotubes) to limit finite-size effects. Tests on the effect of the number of water molecules on the density and the diffusion coefficient are reported in the SI (section <ref>). In particular, we show that density in our simulations is converged compared to simulations with ∼ 10^4 water molecules, which is comparable with the number of molecules expected in experiments with CNTs of length ∼ 1 μ m. The length of the MD simulations varies from 10-20 ns depending on when convergence is achieved, as shown in the SI. The water-CNT confining potential is fitted to revPBE-D3 binding energies computed with a 1× 1 × 3 k-point grid and ∼ 9 Å long nanotubes. All calculations are run using VASP <cit.> with a 1000 eV energy cut-off, fine FFT grids and hard pseudopotentials. Further details on the construction of the confining potential are given in the SI (section <ref>). The DMC calculations are performed using the CASINO <cit.> package, using eCEPP <cit.> pseudopotentials with the determinant-locality approximations <cit.>, and taking into account errors due to finite system size <cit.> and finite time-step <cit.> (convergence shown in the SI). This setup was also used to study bulk ice <cit.>, yielding results in excellent agreement with experiments. See the Supporting Information for details on the CNTs confining potential, validation of our model with diffusion Monte Carlo, the analysis of the coexistence simulations, tests on the estimates of the diffusion coefficient, and comparison between predictions of the implicit and explicit carbon models. We thank all members of the ICE group for valuable feedback on early stages of the manuscript. 
We acknowledge the computational resources from Cambridge Service for Data Driven Discovery (CSD3) operated by the University of Cambridge Research Computing Service (www.csd3.cam.ac.uk), provided by Dell EMC and Intel using Tier-2 funding from the Engineering and Physical Sciences Research Council (capital grant EP/T022159/1 and EP/P020259/1), and DiRAC funding from the Science and Technology Facilities Council (www.dirac.ac.uk). We are furthere grateful for computational support from the UK national high performance computing service, ARCHER2, for which access was obtained via the UKCP consortium and funded by EPSRC grant ref EP/X035891/1, and Swiss National Supercomputing Centre under project s1209. V.K. acknowledges support from the Ernest Oppenheimer Early Career Fellowship and the Sydney Harvey Junior Research Fellowship. A.M. acknowledges support from the European Union under the “n-AQUA” European Research Council project (Grant No. 101071937). D.A. and A.Z. acknowledges support from Leverhulme grant no. RPG-2020-038, and from the European Union under the Next generation EU (projects 20222FXZ33 and P2022MC742). In the supporting information we provide: * the benchmark of 42 density functional theory (DFT) functionals for the water-carbon interaction in section <ref>; * the confining potentials for the carbon nanotubes (CNTs) with revPBE-D3 in section <ref>; * the validation of our model with diffusion Monte Carlo (DMC) in section <ref>; * the analysis on the convergence of our coexistence simulations in section <ref>; * the analysis on the estimates of the diffusion coefficient in section <ref>; * the comparison of our (implicit) model with an explicit model containing carbon atoms in section <ref>; * the analysis of the finite size error in our coexistence simulations in section <ref>; * the analysis of the bonds and angles distributions in the optimized ice nanotubes in section <ref>. Benchmark of (42) DFT functionals for water-carbon interaction In the main manuscript, we describe the water-wall interaction with a confining potential fitted to DFT data of the binding energy of a water molecule inside the CNT. The first step towards developing such a confining potential is determining an accurate functional for the water-carbon interaction. This problem has been previously addressed in Ref. , where 28 DFT functionals were benchmarked to high-accuracy computational reference values. Here, we extend on Ref. by testing 31 additional functionals. The benchmark consists in computing the binding energies of a single water molecule to benzene, coronene and graphene (with three possible orientations of the water molecule), and both inside and outside the CNT(10,0). Reference values from Ref. were computed either with DMC or coupled cluster with single, double and perturbative triple excitations [CCSD(T)]. The DFT calculations are performed using VASP<cit.>. The binding energies were computed at the Γ point for benzene, coronene and the CNT(10,0). A 5× 5× 1 k-point grid was used for graphene (with a 50 atoms supercell), except for meta-GGA and hybrid functionals computed at Γ. Convergence of the computational set-up has been tested in Ref. . Geometries were taken from Ref. . In Fig. <ref> we report the performance of each tested functional as a Mean Absolute Error (MAE) with respect to the reference values. The 11 functionals that have been previously tested are indicated with an asterisk. 
Grey vertical lines highlight the chemical accuracy (40 meV) and a sub-chemical accuracy limit of 10 meV. Overall, several functionals achieve chemical or sub-chemical accuracy. Considering that revPBE-D3: (1) has a reliable performance for both water-carbon interaction and water-water in ice polymorphs <cit.>; (2) it is the functional used in the explicit water-CNT model tested afterwards<cit.>, this was chosen to compute the confining potential. Single water molecule in carbon nanotubes: confining potentials The confining potentials are computed by fitting the binding energy of a single water molecule inside the CNT to a Morse potential. We computed the energy as a function of the radial position of the water molecule averaging along 12 possible directions (± 0, ± 30°, ±45°, ±60°,±90°). The confining potentials are functions of the distance from the wall, therefore neglect the surface roughness of the nanotube. Tests (moving the water molecule along the CNT axis) showed that the corrugation changes the potential of <20 meV. The confining potentials are written as U_Morse(d_w) = E_0 + D_e (1 - e^-a(d_w-R_e))^2, where d_w is the distance from the wall, and E_0 is the binding energy at the equilibrium distance R_e. Note that D_e and E_0 do not coincide because of the constraint on the distance from the wall d_w, which cannot be greater than the nanotube radius. The parameters of the confining Morse potential fitted to the DFT data for each CNT are reported in Table <ref>. Using the computed DFT data, we subsequently extrapolated the Morse confining potential for a larger nanotube, CNT(16,0), with a linear fitting based on the parameters of CNT(14,0), CNT(8,8) and CNT(15,0). The parameters of the Morse potential for CNT(16,0) are reported in the last row of Table <ref>. Validation of the MLP with DMC The metastable ice polymorphs in each confining nanotube are determined by combining the MLP model with a RSS approach. The structure stable at zero pressure and zero temperature are identified by the minimum in the relative energy. To validate our model, we compare the MLP relative energies for the water-water interaction to DMC data (Fig. <ref>) for the diameter d ∼ 11.8 Å. The DFT data with revPBE0-D3 (on which the MLP is trained) are also shown. In particular, for each phase we report the relative energy, i.e. the difference between the energy of each phase and ice (6,0). Overall, the difference in the energetics prediction of the metastable phases with the MLP and DMC are less than ∼ 10 meV. The time step τ is a key factor affecting the accuracy of DMC calculations. In DMC, a propagation according to the imaginary time Schrödinger equation is performed to project out the exact ground state from a trial wave function <cit.>. A time step τ must be chosen, but the projection is exact only in the continuous limit τ→ 0. However, the ZSGMA <cit.> DMC algorithm gives better convergence with respect to τ than previously used methods. In this work, we have verified the time step convergence for each analyzed system. The time step convergence of the relative energies reported in Fig. <ref> are plotted in Fig. <ref>. The values used in Fig. <ref> are computed with a time step of 0.01 a.u. Convergence of coexistence simulations The melting temperature of the ice nanotubes is determined via solid-liquid coexistence simulations. The initial interface is built by melting half of the ice nanotubes at high temperatures in the NVT ensemble, as described in the Methods section of the main manuscript. 
Subsequently, we run coexistence simulations in the NP_zT ensemble, changing the temperature at the fixed pressure P∼ 10 bar. The simulated cells contain, respectively, 2130 atoms for d∼ 9.5 Å, 2160 atoms for d∼ 10.2 Å, 2100 atoms for d∼ 11.0 Å, 2286 atoms for d∼ 11.8 Å, and 2304 atoms for d∼ 12.5 Å. An overview of the systems investigated in this work is reported in Table <ref>. Tests on the convergence of the density with respect to the system size are reported in section <ref>. In Fig. <ref>, we plot the linear density (number of molecules divided by the length of the cell) as a function of the simulation time. Relatively short simulations (∼ 6 ns) are necessary to equilibrate the solid at low temperatures and the liquid at high temperatures. Longer simulations (> 20 ns) are necessary to achieve convergence for T close to the melting temperature. Estimate of the diffusion coefficient In this section, we report an analysis of the estimate of the diffusion coefficient of the ice nanotubes as a function of the temperature. The two most common methods used to estimate the diffusion coefficient in molecular dynamics (MD) simulations are: (1) using the Einstein relation to extract the diffusion coefficient from the Mean Square Displacement (MSD)<cit.>; (2) using the Green-Kubo relation to extract the diffusion coefficient from the Velocity Autocorrelation Function (VACF)<cit.>. In the first method, the diffusion coefficient is related to the MSD of a particle as a function of the observation time. In particular, the MSD becomes proportional to the observation time in the limit that the observation time goes to infinity: D = 1/(2 n_d) lim_{t →∞}⟨[ 𝐫(t_o+t)-𝐫(t_o) ]^2⟩ / t, where D is the diffusion coefficient, 𝐫(t) is the particle position at time t, and n_d is the dimensionality of the system. The numerator in equation <ref> is the ensemble average of the particle square displacement. The ensemble average is an average over all the particles in the simulation and all the time origins t_o. The diffusion coefficient D can easily be obtained from the slope of the MSD-versus-time curve divided by 2 n_d. The second method is based on linear response theory and estimates the diffusion coefficient from equilibrium MD simulations using a Green-Kubo relation: D = 1/n_d ∫_0^∞⟨𝐯(t)·𝐯(0) ⟩ dt, where 𝐯(t) is the velocity at time t, and the brackets correspond to an ensemble average. In this work, we estimated the oxygen diffusion coefficient along the nanotube axis D_z as a function of the temperature for each confining diameter by applying both methods to MD trajectories in the NVT ensemble (i.e., constant number of particles N, volume V, and temperature T). The MD-NVT trajectories are computed with a 0.5 fs time step and the `gle' thermostat, starting from the final configuration of the equilibrated coexistence simulations in the NP_zT ensemble. The MSD estimates are obtained with 0.5 ns long trajectories, computing the MSD with a 20 ps correlation time. The slope of the MSD is estimated with a linear fitting between ∼ 5 ps and ∼ 15 ps. The VACF estimates are obtained with 160 ps long trajectories for the solid phase and 320 ps long trajectories for the liquid phase. The VACF is computed with a 10 ps correlation time. As shown in Figs. <ref> and <ref>, the two methods are consistent within the statistical error bar.
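Both estimators can be written compactly; the sketch below (Python, NumPy assumed; the array layout, time step dt, and fitting window are our assumptions) mirrors the MSD and VACF expressions above for the axial, n_d = 1, case:

import numpy as np

def diffusion_from_msd(z, dt, t_fit=(5.0, 15.0), n_d=1):
    # Einstein estimate: slope of the MSD of the axial coordinate z
    # (array of shape n_frames x n_atoms) over the window t_fit, in the
    # same time units as dt, divided by 2*n_d.
    lags = np.arange(1, z.shape[0] // 2)
    msd = np.array([np.mean((z[lag:] - z[:-lag]) ** 2) for lag in lags])
    t = lags * dt
    sel = (t >= t_fit[0]) & (t <= t_fit[1])
    slope = np.polyfit(t[sel], msd[sel], 1)[0]
    return slope / (2.0 * n_d)

def diffusion_from_vacf(v, dt, n_d=1):
    # Green-Kubo estimate: integral of the velocity autocorrelation function
    # of the axial velocity v (shape n_frames x n_atoms), divided by n_d.
    lags = np.arange(v.shape[0] // 2)
    vacf = np.empty(len(lags))
    for m, lag in enumerate(lags):
        vacf[m] = np.mean(v * v) if lag == 0 else np.mean(v[lag:] * v[:-lag])
    return np.trapz(vacf, dx=dt) / n_d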
The error bars on the diffusion coefficient due to the statistical sampling were computed as 2 standard deviations, which are estimated via bootstrapping by dividing the trajectory into two blocks. Implicit vs explicit carbon In the main manuscript, we focus on the melting temperature of quasi one-dimensional ice polymorphs confined in a uniform cylindrical potential. The confining potential is fitted to the DFT water-carbon interaction (as described in Sec. <ref>) inside a CNT. We refer to this model as an implicit carbon model. In this section, we analyse the effect of explicit carbon on the results reported in the main manuscript. In particular, we considered the MLP trained on DFT revPBE-D3 data for water inside CNTs from Ref. <cit.>. With this potential, we computed the diffusion coefficient as a function of the temperature in CNT(14,0), and both the diffusion coefficient and the number of hydrogen bonds at T=320 K for CNT(14,0), CNT(15,0), and CNT(16,0). The simulations with the explicit model were run as follows: (1) we started from an initial configuration resembling the water density of the implicit model in the NP_zT simulation; (2) we ran a ∼ 2 ns NVT simulation at the same temperature to estimate the number of hydrogen bonds and the diffusion coefficient. The length of the CNTs used in the explicit model simulations is ∼ 240 Å, corresponding to ∼ 300 water molecules. In Fig. <ref> (a) we report the diffusion coefficient as a function of the temperature for CNT(14,0) with the implicit (red) and the explicit (blue) model. With the explicit model, we observe a slightly enhanced diffusion at fixed temperature in the liquid phase compared to the implicit model. The enhanced diffusion can be physically ascribed to the coupling with the phonon modes of the carbon, as shown in Refs. XXX. Furthermore, small differences between the two models are expected due to the use of different DFT functionals in the training of the MLPs. Panels (b) and (c) of Fig. <ref> show, respectively, the diffusion coefficient and the number of hydrogen bonds of the liquid phase (T=320 K) as a function of the diameter for the three largest nanotubes considered in the main manuscript. As in the previous case, we observe only small differences between the explicit and implicit models. Overall, these tests show that the implicit model is fairly accurate in describing the dynamics of the ice nanotubes. The difference in the prediction of the melting temperature with the explicit carbon model is ∼ 15 K. Finite size errors In this section, we report a test of the finite-size errors in the estimates of the density and the diffusion coefficient, which are used in the main manuscript to characterize the phase transition and identify the melting temperature. In particular, we consider the case of square ice (diameter d ∼ 10.2 Å). In Fig. <ref>, we plot the linear density (a) and the diffusion coefficient (b) as a function of the number of water molecules N_mol at a fixed temperature T=300 K. In particular, we consider N_mol = 160, 320, 480, 720, and 9600 (except for the diffusion coefficient, for which longer simulations are necessary to achieve convergence). Notably, the largest number of water molecules simulated (N_mol=9600) is comparable to the number of water molecules expected in a realistic experimental set-up <cit.> with a CNT of length 1 μm (assuming the same linear density of water molecules of ∼ 1 Å^-1). In Fig.
<ref> we plot the density (left) and the diffusion coefficient (right) as a function of the temperature for 320 and 720 water molecules. The difference in the melting temperature predicted with 320 and 720 water molecules is ∼ 5 K. Overall, our tests show that the simulations with ∼ 10^3 water molecules considered in the main manuscript are converged for the computation of the melting temperature and of the structural/dynamical properties analysed in this work. Distributions of bonds and angles In this section, we report the distributions of the O-H distances and the H-O-H angles in the MLP-optimized structures considered in our manuscript. The histograms of these distributions are plotted in Fig. <ref>. For comparison, we also show the fixed values of the original TIP4P model <cit.>, which are d_OH∼ 0.957 Å and θ_HOH∼ 104.52°. The spread in d_OH and θ_HOH is not captured by the fixed TIP4P geometry, particularly in the narrowest nanotube (d ∼ 9.5 Å), potentially explaining the disagreement between the MLP and TIP4P predictions.
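A minimal sketch of how such intramolecular distances and angles can be extracted from optimized structures is given below; the coordinate layout (one O followed by its two H atoms per molecule) and the numerical values are illustrative assumptions.

```python
import numpy as np

def water_geometry(coords):
    """Return O-H distances (angstrom) and H-O-H angles (degrees).
    coords: array of shape (n_molecules, 3, 3), ordered as [O, H1, H2] per molecule."""
    o, h1, h2 = coords[:, 0], coords[:, 1], coords[:, 2]
    v1, v2 = h1 - o, h2 - o
    d1 = np.linalg.norm(v1, axis=1)
    d2 = np.linalg.norm(v2, axis=1)
    cos_theta = np.sum(v1 * v2, axis=1) / (d1 * d2)
    theta = np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))
    return np.concatenate([d1, d2]), theta

# Two placeholder molecules with near-TIP4P geometry.
coords = np.array([
    [[0.0, 0.0, 0.0], [0.76, 0.59, 0.0], [-0.76, 0.59, 0.0]],
    [[3.0, 0.0, 0.0], [3.76, 0.58, 0.0], [2.24, 0.60, 0.0]],
])
d_oh, theta_hoh = water_geometry(coords)
print(d_oh.round(3), theta_hoh.round(1))
```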
http://arxiv.org/abs/2406.18351v1
20240626135247
Reinforcement Learning with Intrinsically Motivated Feedback Graph for Lost-sales Inventory Control
[ "Zifan Liu", "Xinran Li", "Shibo Chen", "Gen Li", "Jiashuo Jiang", "Jun Zhang" ]
cs.LG
[ "cs.LG", "cs.AI" ]
Reinforcement Learning with Intrinsically Motivated Feedback Graph for Lost-sales Inventory Control LIU Zifan Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, LI Xinran Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Chen Shibo Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, LI Gen Department of Statistics, The Chinese University of Hong Kong, JIANG Jiashuo Department of Industrial Engineering and Decision Analytics, The Hong Kong University of Science and Technology, Jun Zhang Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Reinforcement learning (RL) has proven to perform well and to be general-purpose in inventory control (IC). However, further improvement of RL algorithms in the IC domain is impeded by two limitations of online experience. First, online experience is expensive to acquire in real-world applications. Given the low sample efficiency of RL algorithms, it would take an extensive amount of time to train an RL policy to convergence. Second, online experience may not reflect the true demand due to the lost-sales phenomenon typical in IC, which makes the learning process more challenging. To address the above challenges, we propose a decision framework that combines reinforcement learning with feedback graphs (RLFG) and intrinsically motivated exploration (IME) to boost sample efficiency. In particular, we first take advantage of the inherent properties of lost-sales IC problems and design the feedback graph (FG) specially for them to generate abundant side experiences that aid RL updates. Then we conduct a rigorous theoretical analysis of how the designed FG reduces the sample complexity of RL methods. Based on the theoretical insights, we design an intrinsic reward to direct the RL agent to explore toward the state-action space with more side experiences, further exploiting the FG's power. Experimental results demonstrate that our method greatly improves the sample efficiency of applying RL in IC. Our code is available at <https://anonymous.4open.science/r/RLIMFG4IC-811D/> § INTRODUCTION Inventory control (IC) is a crucial and practical problem, serving as a basis of business efficiency in supply chain management. IC problems are challenging because their inherent complexity makes it difficult to find optimal solutions within a reasonable timeframe. In the past few decades, researchers <cit.>, <cit.>, <cit.>, and <cit.> have designed a few heuristic methods based on specific model assumptions. However, such methods often face challenges due to the curse of dimensionality <cit.>, where the problem size grows exponentially as the lead time increases. Here, lead time refers to the duration between placing and receiving an order.
Another limitation is that these model-based methods are not flexible enough to generalize to various environmental settings. These limitations call for a more adaptable approach to IC problems, for which data-driven control methods stand out as a promising alternative. Reinforcement learning (RL) has gained significant attention as a powerful data-driven technique for solving complex sequential decision-making problems. In particular, it offers several advantages for addressing the challenges of IC problems. Firstly, RL allows for the discovery of optimal policies without relying on strong problem-specific assumptions, enabling more generalizable solutions <cit.>. Secondly, when combined with deep neural networks, deep RL (DRL) can handle large state and action spaces, making it suitable for problems with high-dimensional state variables <cit.>. Early works demonstrate the feasibility of RL in IC problems. <cit.> showcases the ability of a Deep Q-network (DQN) to discover near-optimal solutions for the widely recognized beer distribution game. <cit.> introduces the A3C algorithm into IC problems and shows that it can achieve acceptable performance but does not outperform heuristic methods. <cit.> benchmarks various DRL methods such as A3C, PPO, and vanilla policy gradient (VPG) in IC problems. More recently, researchers have explored how DRL can address various scenarios within IC problems. These scenarios include: non-stationary uncertain demand <cit.>, multi-product <cit.>, variable kinds of products <cit.>, multi-echelon supply chains <cit.>, one-warehouse multi-retailer <cit.>, and the stochastic capacitated lot sizing problem <cit.>. However, traditional RL methods are characterized by low sample efficiency, which becomes a significant barrier when implementing these techniques in real-world IC scenarios, as obtaining experiences can be both costly and time-consuming <cit.>. Furthermore, this sample inefficiency issue is exacerbated in lost-sales IC problems because of censored demands <cit.>, i.e., the phenomenon in which customers' real demands are unobservable due to insufficient inventory. For instance, if the order is placed daily, it takes over a year to generate four hundred experiences, and some of them may be censored, making them too few to reliably update the RL policy. <cit.> tries to alleviate this problem by incorporating heuristic knowledge, such as the base-stock policy, into DQN with reward shaping. Although this method can improve the sample efficiency, it still relies on specific model heuristics, making it hard to generalize to different scenarios. Overall, resolving the low sample efficiency of RL without strong heuristics is crucial in solving IC (especially lost-sales IC) problems. The detailed related work is in Appendix <ref>. To address the above-mentioned limitations, this paper proposes a novel decision framework that combines reinforcement learning with feedback graphs (RLFG) and intrinsically motivated exploration (IME): 1) We tailor the feedback graph (FG) based on the general properties of lost-sales IC problems rather than on strong heuristics (e.g., a known demand distribution). In particular, the connectivity of the FG is adjusted dynamically based on the relationship between demand and inventory rather than being static. With the FG, sample efficiency during training is significantly improved because the agent acquires not only online experiences but also side experiences from the FG.
2) We conduct a theoretical analysis of how FG reduces the sample complexity, with Q-learning as an example. It demonstrates that FG decreases the sample complexity by improving the update probabilities across all state-action pairs. 3) Inspired by these theoretical insights, we design a novel intrinsic reward that guides the RL algorithm to explore towards the state-action space where more side experiences can be obtained, thereby further boosting sample efficiency. 4) We evaluate our method on the standard discrete lost-sales inventory control environment. Our empirical results demonstrate the sample-efficiency improvements brought separately by the FG and by the intrinsic reward, underlining the effectiveness of our design. § BACKGROUND AND PROBLEM FORMULATION §.§ Reinforcement Learning with Feedback Graph RLFG is proposed by <cit.> to reduce the sample complexity of RL. In typical RL scenarios, an agent can only get one experience at each step as feedback that can be used to update the policy. However, when some prior knowledge about the environment is available, it becomes possible to observe additional experiences involving other states and actions. RLFG aims to explore how RL algorithms can benefit from these side experiences by constructing a feedback graph (FG). Here "side experiences" bears the same meaning as "side observations" in <cit.>, and we use the term "side experiences" to avoid confusion with "observations" in partially observed MDPs. FG is a directed graph 𝒢=(𝒱, ℰ), where 𝒱 is the vertex set 𝒱={v|v=(s,a)} and ℰ is the edge set ℰ={v→v̅}, formalized by the side information indicating that if the agent visits v, it can also observe the other vertices v̅. The total set of experiences 𝒪_t observed by the agent at time t from 𝒢 is 𝒪_t(𝒢)={(s_t,a_t,r_t,s_t+1)}∪{(s̅_t,a̅_t,r̅_t,s̅_t+1)}, where the barred tuples are the side experiences at the vertices v̅ reachable from (s_t,a_t). §.§ Lost Sales Inventory Control Problem We formulate a standard, single-item, discrete-time, lost-sales IC problem following <cit.>, <cit.> and <cit.>, with environment variables in Table <ref>. "Lost sales" means that customers leave when there is not enough inventory, with no way to record the excess demand. The lost-sales IC problem considers three kinds of costs in each time step t: the cost of procurement f_1(s_t,a_t), the cost of holding inventory f_2(s_t), and the cost penalty of lost sales f_3(s_t). The cost terms are specified below, where [x]^+=max(x,0). f_1(s_t,a_t) = ca_t, f_2(s_t)=h[y_t-d_t]^+, f_3(s_t)=p[d_t-y_t]^+. In this problem, the real demand is unobservable when it exceeds the current inventory. Thus we define the real demand d_t as a random variable and the observed demand d_t^o, which may be censored depending on y_t and d_t, as d_t^o=min(d_t,y_t), i.e., d_t^o = d_t if d_t≤ y_t and d_t^o = y_t otherwise. Here, the term "censored" refers to the otherwise case in Equation <ref>, in which we can only observe that all inventory is sold but do not know the value of d_t. The objective is to minimize the total cost over the time horizon T under the uncertainty on the demand side, which is min_{a_t|t=0,...,T}∑_t=0^Tγ^T-tf(s_t,a_t)=∑_t=0^Tγ^T-t[f_1(s_t,a_t)+f_2(s_t)+f_3(s_t)]. §.§ MDP Formulation To better understand the lost-sales IC problem from the perspective of RL, we formulate it as an infinite-horizon MDP with discounted rewards <cit.>, represented by ℳ=(𝒮,𝒜,P,R,γ). The MDP consists of the state space 𝒮, action space 𝒜, transition function P(s'|s,a), reward function R, and discount factor γ.
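Before the MDP components are spelled out, the sketch below illustrates the per-period cost and the demand censoring just defined; the cost coefficients and the Poisson demand are illustrative placeholders rather than the paper's exact settings.

```python
import numpy as np

def period_cost_and_observation(y_t, a_t, d_t, c=1.0, h=0.25, p=4.0):
    """Per-period cost f1 + f2 + f3 and the (possibly censored) observed demand.
    y_t: on-hand inventory, a_t: order placed now, d_t: real demand (maybe unobservable)."""
    f1 = c * a_t                      # procurement cost
    f2 = h * max(y_t - d_t, 0)        # holding cost on leftover inventory
    f3 = p * max(d_t - y_t, 0)        # lost-sales penalty on unmet demand
    d_obs = min(d_t, y_t)             # censored observation d_t^o
    censored = d_t > y_t              # True when the real demand is unobservable
    return f1 + f2 + f3, d_obs, censored

rng = np.random.default_rng(0)
d_t = rng.poisson(5)                  # demand drawn from a Poisson distribution
cost, d_obs, censored = period_cost_and_observation(y_t=3, a_t=2, d_t=d_t)
print(cost, d_obs, censored)
```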
The detailed composition of MDP is shown as follows: State: The state includes the inventory at time t after receiving the orders and all upcoming orders due to lead time. We define it as s_t=(y_t,a_t+1-L,...,a_t-1). Action: The action is the amount to be ordered for future sales, given as a_t={0≤ a_t≤ a^max, a_t∈ℕ}, where a^max is the maximum amount that can be ordered for this item. Reward: Since the goal is to minimize the cumulative discounted cost, the reward at each time step t is defined as the opposite number of costs, which is r_t = R(s_t,a_t)=-f(s_t,a_t). Transition Function: The transition function is defined as s_t=(y_t,a_t+1-L,...,a_t-1)→s_t+1=(y_t+1,a_t+2-L,...,a_t). The inventory transition is defined as y_t+1=[y_t-d_t]^++a_t+1-L due to the lead time. As a_t+1-L has been received, a_t+1+i-L in s_t+1 will replace a_t+i-L in s_t for i=1,...,L-1. § METHOD §.§ IC Decision Framework with RLFG and IME The proposed decision framework for IC problems aims to enhance the sample efficiency of off-policy RL algorithms by integrating RLFG and IME together. This framework relies on two key assumptions. First, the real demand is unknown and the observed demand may be censored. Second, the experiences generated in real-world operations are limited due to the cost involved in collecting online data. As illustrated in Figure <ref>, the FG module generates side experiences to improve the sample efficiency and the intrinsic reward module aids the exploration to further exploit the power of the FG module. Here, the RL module can be any off-policy RL algorithm, such as DQN, DDPG, Rainbow, or TD3. Note that experiences from both reply buffers are sampled with no differences in the model-updating step. Then a new set of side experiences are generated based on the sampled experiences. These new side experiences are just used to calculate intrinsic rewards of the sampled experiences. Here we provide Rainbow-FG as an example in Algorithm <ref>. §.§ Feedback Graph in Inventory Control Motivated by the reduction in sample complexity by FG <cit.>, we incorporate FG into the IC problem. FG is naturally suitable for the IC problem since the IC problem is a structured MDP with most environmental transition components determined and predictable depending on the demand. Furthermore, the demand is usually independent of the state-action space. If we can observe the demand, the experiences of other state-action pairs can also be obtained through leveraging the underlying properties of such structured MDPs. Thus the main challenge is how to construct FG considering the lost-sales property, whose demand is potentially censored. To solve this problem, we propose to construct FG dynamically based on the observed demand. If the observed demand is real, FG generates side experiences of all other state-action pairs. Even if the observed demand is censored, FG can generate side experiences of the state-action pairs whose inventory is less than the observed demand. Algorithm <ref> shows the details of the FG module. When d^o_t is uncensored, FG is a complete graph and all state-action pairs can be used to generate side experiences based on d^o_t. When d^o_t is censored, it becomes more complex. For the state-action pairs having larger inventory numbers than that in s_t, d^o_t is not a correct demand for them and unfortunately d_t is unknown. Still using d^o_t will generate wrong side experiences. 
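Concretely, a minimal sketch of this side-experience generation is given below (our illustration, not the paper's exact Algorithm <ref>); the candidate enumeration, cost function, and state layout follow the MDP formulation above, and the censored-case restriction is made precise in the next paragraph.

```python
def side_experiences(d_obs, censored, candidates, cost_fn):
    """Generate side experiences for hypothetical state-action pairs from one observed period.
    candidates: iterable of (y_bar, pipeline_bar, a_bar), where pipeline_bar[0] arrives next.
    cost_fn(y, a, d): per-period cost evaluated with demand d."""
    experiences = []
    for y_bar, pipeline_bar, a_bar in candidates:
        if censored and y_bar > d_obs:
            # d_obs = y_t and the real demand exceeds it; for larger hypothetical
            # inventories the realized sales (and hence the next state) are unknown.
            continue
        # In the remaining cases min(d_t, y_bar) = min(d_obs, y_bar), so the transition
        # is known even when the real demand is not.
        sold = min(d_obs, y_bar)
        cost = cost_fn(y_bar, a_bar, d_obs)   # same observed-demand cost as the online data
        y_next = y_bar - sold + pipeline_bar[0]
        s_bar = (y_bar, *pipeline_bar)
        s_bar_next = (y_next, *pipeline_bar[1:], a_bar)
        experiences.append((s_bar, a_bar, -cost, s_bar_next))
    return experiences
```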
We can only use the state-action pairs having smaller inventory numbers than that in s_t to generate side experiences, which means that FG is a partially connected graph. Note that FG is still dynamically constructed based on d^o_t under the censored case since different d^o_t generates different numbers of side experiences. §.§ Theoretical Analysis We conduct a quantitative analysis of how the sample complexity is reduced with the FG in the lost-sales IC environment. The qualitative analysis based on the property of RLFG is in Appendix <ref>. Here we provide an analysis based on Q-learning for simplicity. The detailed proof is in Appendix <ref> Without loss of generality, we restrict some definitions in section <ref> and define some new concepts. We consider the reward function R:𝒮×𝒜→(0,1). The demand d_t∼ P_d(d) can obey any independent discrete distribution and d^max<y^max. We define π_b as the stationary behavior policy and μ(s,a) as the stationary distribution of the Markov chain under π_b and P(s'|s,a), which is the same as the update probability in typical RL. In RLFG, the stationary distribution does not change but the update probability becomes μ(s,a)≥μ(s,a) because of the side experiences. Scenario 1: Consider a graph 𝒢 with all state-action pairs as the nodes and there is no edge between the nodes. It can be regarded as 𝒢=𝒢_1∪𝒢_2, where 𝒢_1=∅ and 𝒢_2 consists of nodes without any edge. Lemma 1: The sample complexity of the asynchronous Q-learning under scenario 1 is analyzed in <cit.>, which is O(1/μ_min(1-γ)^5ϵ^2+t_mix/μ_min(1-γ)), where μ_min=min_(s,a)∈𝒢μ(s,a) and t_mix is the mixing time of the chain. Lemma 1 indicates that the sample complexity of Q-learning without FG is determined by μ_min. We will show how Q-learning with FG improves μ(s,a) to reduce the sample complexity. Scenario 2: Consider a graph 𝒢 with all state-action pairs as the nodes. Assume for nodes satisfying y≥ d_t, once one node is sampled, all of these nodes can be sampled and updated simultaneously. For nodes with y<d_t, only nodes with y'≤ y can be sampled and updated simultaneously. Thus 𝒢 can be regarded as 𝒢=𝒢_1∪𝒢_2. Theorem 1: The update probability for Q-learning with FG under scenario 2 is: μ(s,a)= ∑_d=0^d^maxP_d(d)∑_(s̅,a̅)∈𝒢 y̅≥ dμ(s̅,a̅|d)_Uncensored term + ∑_d=0^y-1P_d(d)∑_(s̅,a̅)∈𝒢 y≤y̅≤ dμ(s̅,a̅|d)_Censored term. Conclusion: We can obtain the relationship between μ(s,a) and μ(s,a) in Equation (<ref>). It shows that FG not only improves μ_min but also improves that of all the state-action pairs. μ(s,a)=𝔼_d∼ P_d[μ(s,a|d)] ≥𝔼_d∼ P_d[μ(s,a|d)]=μ(s,a). §.§ Intrinsically Motivated Exploration To further utilize the benefits of FG, we design an intrinsic reward by incorporating the information of the side experiences into the curiosity-driven exploration. In particular, for a state-action pair, the number and the average uncertainty of the side experiences generated by this state-action pair are incorporated into the intrinsic reward. With this intrinsic reward, the agent is directed towards the state-action space where more side experiences can be generated. This idea is inspired by the theoretical analysis based on Equation <ref>. The uncensored term indicates how much the uncensored case contributes to improving the probability of being updated. This term contributes to all state-action pairs so improving this term can improve the probability of being updated for all state-action pairs. The censored term indicates how much benefit the current state-action pair can obtain from the censored cases. 
For the censored case, the sample complexity can be further reduced by visiting the state-action pairs with larger inventory. To satisfy both conditions, designing a behavior policy manually is difficult and not general enough. Thus we design an intrinsic reward, written in Equation <ref> because both conditions lead to generating more side observations. r^in_i=r^in_i+log_10(J)×1/J∑_j=1^Jr^in_i,j. Algorithm <ref> shows the details of the intrinsic reward module. We utilize the M-head DQN <cit.> with each head trained by different mini-batch of experiences to get the prediction error as curiosity rewards. The final intrinsic reward, denoted in Equation <ref>, consists of the curiosity reward of the experience itself (r^in_i) and the averaged curiosity reward of all corresponding side experiences (r^in_i,j) generated by this experience. The averaged value preserves the advantage of scale invariance but loses the quantity information compared with the sum value. To balance these two aspects, we multiply the averaged value with log_10(J) to incorporate more quantity information into the intrinsic reward. In the censored condition, where J=y_t, a larger J increases the intrinsic reward, which means more side experiences. In the uncensored condition, J=y^max is larger than y_t, which means that the intrinsic reward is more likely to be larger than that in the censored condition. § EXPERIMENT To verify the performance and sample efficiency of our decision framework, we utilize the example algorithm in Algorithm <ref> and test it based on the standard lost-sales IC environment. All experiments are averaged with 20 random seeds with shaded areas representing the standard deviation. §.§ Setup §.§.§ Baseline Heuristic Methods: We compare our method with widely recognized and well-performed heuristic methods. All parameters used in these methods are searched in detail and the best results are selected. * Constant Order: The order is always a constant (a_t=r^h), where r^h is a parameter. * Myopic 1-period <cit.>: Assume the distribution of d_t is known. This method aims to minimize the expected cost at t+L. The order is a_t=min_P(u_t+L<0)≤c+h/p+h a_t, 0≤ a_t≤ a^max. * Myopic 2-period <cit.>: Since Myopic 1-period only considers the expected cost for one time step, it can be improved by considering 2-time steps. * Base-Stock method <cit.>: The Base-Stock method aims to keep the inventory level constant, including upcoming orders. The order is a_t=(S^h-1·s_t)^+, where S^h is a parameter. * Capped Base-Stock method <cit.>: It combines Base-Stock and Constant Order method. The order is a_t=min[(S^h-1·s_t)^+, r^h], where S^h and r^h are parameters. * Bracket method <cit.>: A variant of the Constant Order method. The order is a_t = ⌊(t+1)r^h+θ^h⌋-⌈ tr^h+θ^h⌉, where r^h and θ^h are parameters. Deep Reinforcement Learning Methods: We choose to compare Rainbow-FG with the A3C method in <cit.>, which is an on-policy RL method; Rainbow, which is the basis of Rainbow-FG; Rainbow-FG(H), which utilizes the heuristic knowledge to continue finetuning Rainbow-FG. Rainbow-FG(H) can be regarded as the upper bound of Rainbow-FG. Since A3C is on-policy, it is not applicable to compare sample efficiency with off-policy Rainbow-FG. Thus A3C is only used to compare the optimal results. §.§.§ Environments We test our method and baselines across different settings of the standard discrete lost-sales IC environment with demand obeying Poisson distribution, where d^m indicates the mean value. 
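For reference, a minimal sketch of the Base-Stock and Capped Base-Stock baselines listed above is shown here; S^h and r^h are the searched parameters from the text, the state layout (on-hand inventory followed by pipeline orders) follows the MDP formulation, and the a^max clamp enforces the action space.

```python
def base_stock(state, S_h, a_max):
    """Order up to the target S^h, counting on-hand inventory plus all pipeline orders."""
    inventory_position = sum(state)                  # 1 . s_t
    return min(max(S_h - inventory_position, 0), a_max)

def capped_base_stock(state, S_h, r_h, a_max):
    """Base-stock order capped at the constant r^h."""
    return min(base_stock(state, S_h, a_max), r_h)

# Example: on-hand inventory 3 with pipeline orders (2, 1, 0) still in transit.
print(capped_base_stock((3, 2, 1, 0), S_h=12, r_h=5, a_max=10))
```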
The parameters of the environment setting are shown in Table <ref> and the hyperparameters of our method are in Appendix <ref>. As for parameters in Table <ref>, we follow the common recognized settings according to <cit.>, <cit.>, and <cit.>. In the following sections, default parameters are used unless stated otherwise. The hyperparameter analysis is demonstrated in Appendix <ref>. §.§ Optimal Results Comparison Table <ref> presents the average results and optimality gap compared to the optimal results. Without strong heuristics, Rainbow demonstrates similar performance to the A3C method in <cit.>, whereas Rainbow-FG achieves superior results. This notable improvement can be attributed to Rainbow-FG's ability to leverage a broader range of side experiences from the feedback graph, enabling it to converge towards better solutions compared with Rainbow. When contrasting Rainbow-FG with the heuristic methods, Rainbow-FG still falls short of surpassing the top-performing heuristic methods, such as Myopic 2-period and Capped Base-Stock. However, Myopic 2-period method assumes the distribution of the demand is known, which is a strong assumption, and the Capped Base-Stock method needs extensive parameter searches to attain this optimal result. These parameters vary across different scenarios without patterns. Conversely, Rainbow-FG consistently achieves close performance and possesses adaptability to diverse settings without these strong heuristics. Furthermore, after incorporating the heuristic knowledge, Rainbow-FG(H) attains equivalent or even superior performance compared to the best heuristic methods, particularly in certain settings. This result indicates that Rainbow-FG has the potential to perform better than heuristic methods. §.§ Sample Efficiency Comparison for FG Figure <ref> illustrates the learning process of Rainbow and Rainbow-FG with different exploration parameters. The utilization of FG significantly enhances the sample efficiency. During the initial stage, Rainbow-FG with all ϵ consistently outperforms Rainbow. Towards the end, Rainbow-FG with ϵ=0 converges to the final result first and subsequently trails Rainbow-FG with ϵ=0.1. Conversely, Rainbow fails to converge to the final result throughout the entire 100 episodes. Furthermore, FG contributes to improving the stability of the learning process. Notably, the learning process of Rainbow-FG exhibits lower standard deviation and more stable learning curves. Under low exploration conditions (ϵ=0), Rainbow-FG demonstrates significantly faster learning compared to Rainbow. This outcome signifies the benefits of FG when exploration is not favorable. Experiments on more settings are shown in Appendix <ref>. §.§ Sample Efficiency Comparison for Intrinsic Reward This section investigates the effect of intrinsic reward designed for the IC problem. Figure <ref> illustrates the learning process of Rainbow-FG with and without the intrinsic reward, referred to as Rainbow-FG w/ inr and Rainbow-FG w/o inr, respectively. The results demonstrate that the designed intrinsic reward significantly improves sample efficiency during the initial stages of training in the environment with p=4, L=4, and d^m=5. However, as the learning process reaches the Constant Order level, the intrinsic reward does not exhibit a substantial impact. We attribute it to the simplicity of the environment, which weakens the effect of the intrinsic reward. 
To further evaluate its effectiveness, we conduct experiments in more complex settings with p=19, L=8, and varying values of d^m = (5, 10, 15). In these environments, we observe a clear effect of the intrinsic reward during the whole training process. Moreover, Rainbow-FG w/ inr demonstrates a more stable training process compared to Rainbow-FG w/o inr, achieving a slightly lower cost. § CONCLUSION This paper addresses the challenge of sample inefficiency of RL methods in lost-sales IC problems. We design a novel decision framework integrating RLFG and IME to boost the sample efficiency of RL methods. We first tailor the FG based only on the general properties of lost-sales IC problems, rather than on strong heuristics, to generate side experiences that aid RL updates. We then conduct a theoretical analysis to demonstrate our method's effectiveness, with Q-learning as an example. The analysis shows that FG decreases the sample complexity by improving the update probabilities for all state-action pairs. Additionally, we design an intrinsic reward to fully utilize the FG for lost-sales IC problems based on the analysis results. Experimental results show that our approach greatly enhances the sample efficiency of RL in IC problems, which is consistent with the theoretical analysis. As for limitations, we only analyze the magnitude relationship between μ_min and μ̃_min; a detailed multiplicative relationship still needs to be proved. § RELATED WORK §.§ Inventory Control Problem The inventory control (IC) problem involves determining the optimal quantity of inventory to order to minimize costs while maintaining sufficient stock levels to meet customer demand. Based on different assumptions about customer behavior, the IC problem can be divided into backlogging IC and lost-sales IC problems <cit.>. The backlogging IC problem assumes that when the demand cannot be met due to insufficient inventory, the request of customers is accepted but delayed until inventory is replenished. <cit.> has proven that the base-stock policy, which aims to keep the sum of the inventory level and upcoming orders constant, is optimal for single-source backlogging IC with constant lead time. Compared with backlogging IC, the lost-sales IC problem is more complex and relevant <cit.>. The lost-sales IC problem assumes that when demand cannot be met due to insufficient inventory, the customer's request is lost and the excess demand is unobservable, as in e-commerce. The base-stock policy can only be optimal when the cost of the lost-sales penalty is high. This motivates researchers to find better methods for the lost-sales IC problem. §.§ Heuristic Methods in Lost-sales Inventory Control The lost-sales IC problem was first studied in a simplified form by <cit.>, which assumes that the lead time to place orders is one. <cit.> proves that the base-stock policy is not optimal because the inventory availability in future periods cannot be characterized by the inventory level and order quantity. <cit.> extends the former analysis to any positive integer lead time and provides upper and lower bounds on the optimal policy. Furthermore, <cit.> generalizes to any lead time from the perspective of L-natural-convexity, and <cit.> studies the case when the lead time is stochastic. The base-stock policy is sensitive to the demand by design. At the opposite extreme is the Constant Order policy, whose decision has no relationship with the demand <cit.>.
As the lead time goes to infinity, <cit.> proves that the constant-order policy can be asymptotically optimal and <cit.> proves that the constant-order policy can be better than the base-stock policy. The contrary properties of these two methods motivate a better idea that combines both advantages <cit.>. <cit.> improves this idea to a new method named capped base-stock policy. The above-introduced methods need to search for the best parameter for specific settings to perform well. Besides them, there is another serious of heuristic method, also called approximate dynamic programming (APD). These methods assume known demand distribution rather than parameter searching. <cit.> proposes the Myopic method, which aims to minimize the expected cost when the placed order arrives. This method can be extended to the Myopic-T method, which considers the expected cost for T time steps. §.§ Reinforcement Learning Method in Inventory Control The heuristic method needs either a parameter search or an assumption about the demand distribution, which are not general enough. This motivates the studies about applying reinforcement learning (RL) to lost-sales IC problems. RL has gained significant attention as a powerful data-driven technique for solving complex sequential decision-making problems <cit.>. RL offers several advantages for addressing the challenges of IC problems. Firstly, RL allows for the discovery of optimal policies without relying on strong problem-specific assumptions, enabling more generalizable solutions <cit.>. Secondly, combined with the deep neural network, Deep RL (DRL) can handle large state and action spaces, making it suitable for problems with high-dimensional variables <cit.>. Initially, the work mainly focuses on verifying the feasibility of RL in IC problems. <cit.> showcases the ability of a Deep Q-Network (DQN) to discover solutions that are close to optimal for the widely recognized beer distribution game. <cit.> introduces the A3C algorithm into IC problems and show that A3C can achieve acceptable performance but not better heuristic methods. <cit.> benchmarks various DRL methods such as A3C, PPO, and vanilla policy gradient (VPG) in IC problems. Recently, researchers tend to study how DRL can solve different situations of IC problems such as, non-stationary uncertain demand <cit.>, multi-product <cit.>, variable kinds of products <cit.>, multi-echelon supply chains <cit.>, one-warehouse multi-retailer <cit.>, stochastic capacitated lot sizing problem <cit.>. However, Two fundamental issues have not been resolved. First, the final performance of DRL is not optimal, sometimes even worse than heuristic methods. To address this problem, some papers <cit.> explore the potential to combine RL algorithms with existing other methods rather than directly applying RL to IC problems. The other problem is the low sample efficiency nature of existing RL methods, which restricts further application to real-world IC problems <cit.>, especially when obtaining experiences is expensive or time-consuming. Besides, this sample inefficiency problem is enlarged in lost-sales IC because of censored demands <cit.>, which refers to the phenomenon when the customer's real demand is unobservable due to insufficient inventory. Typically, if the order is placed daily, generating four hundred experiences needs over a year and part of them may be censored making them too hard to update RL policy. 
<cit.> tries to alleviate this problem by incorporating heuristic knowledge, such as base-stock policy, into DQN with reward-shaping. Although this method can improve the sample efficiency, it still relies on specific model heuristics, making it hard to generalize to different scenarios. Overall, Resolving the low sample efficiency of RL without strong heuristics is crucial in solving IC (especially lost-sales IC) problems. §.§ Feedback Graph and its Application <cit.> first proposes the feedback graph (FG) idea as a method to reduce the regret bound of the bandit problem when side observations can be obtained assuming the decision maker can also know the situations when other actions are taken besides the chosen action. <cit.> extends the analysis of FG on bandit problems beyond the learning problems. <cit.> gives the first analysis of Thompson sampling for bandits with FG based on information theory. Building upon these foundations, <cit.> combines FG with RL and shows how FG can reduce the regret bound and sample complexity of model-based RL algorithms. However, most existing work, such as <cit.>, <cit.>, and <cit.>, mainly focus on the analysis of FG in some bandits problems. Seldom work tries to apply FG to real-world problems since side observations and the structure of FG are hard to define in applications. § ANALYSIS BASED ON THE PROPERTY OF GRAPH Based on the feedback graph 𝒢, <cit.> defines three concepts to measure the sample complexity. ω, α, and ζ measure the minimum vertices to be sampled to observe the whole 𝒢. We will talk about ω, α, and ζ for the IC environment in section <ref>. Mas-number(ω): The maximum size of 𝒱'⊆𝒱 forming an acyclic subgraph of 𝒢 is the mas-number. Independence number(α): The size of the largest 𝒱'⊆𝒱 with no edge in 𝒢 within 𝒱' is the independence number. Domination number(ζ): A set of vertices 𝒱' ⊆𝒱 is a dominating set if there always ∃ v' ∈𝒱' and ∀ v ∈𝒱 such that v' → v. The smallest size of 𝒱' is called the domination number ζ. For any feedback graph 𝒢, the inequality (<ref>) is always satisfied, where |𝒮| is the size of the state set and |𝒜| is the size of the action set. The mas-number ω and independence number α quantify the worst-case connectivity by measuring the maximum number of distinct vertices an algorithm can traverse before encountering a repeated vertex. On the other hand, the domination number represents a best-case scenario and indicates the minimum number of vertices that an algorithm must visit to observe every vertex. Besides, <cit.> also shows that with feedback graph 𝒢, the regret bound/sample complexity of a model-based RL algorithm can be reduced from |𝒮||𝒜| scale to ω or even ζ scale. ζ≤α≤ω≤|𝒱|=|𝒮|×|𝒜|. In this paragraph, we will simply show how sample complexity is reduced based on ω, α, and ζ. If we define all state-action pairs as the nodes of the feedback graph 𝒢, 𝒢 can be divided into two parts 𝒢_1 and 𝒢_2, where 𝒢_1 is a connected graph, 𝒢_2 is a graph without any edge, and 𝒢=𝒢_1∪𝒢_2. For the uncensored case, 𝒢=𝒢_1 is a complete graph and 𝒢_2=∅. The complete feedback graph has a good property which is ω=α=ζ=1≪ |𝒮||𝒜|. For the censored case, 𝒢 consists of 𝒢_1 for ŷ_t≤ y_t and 𝒢_2 for ŷ_t> y_t. Thus the feedback graph's property should be α=ζ=(y^max-y_t)×(a^max)^L+1(y_t≠0) and ω=(y^max-y_t)×(a^max)^L+y_t. We can obtain that α=ζ≤ω≤|𝒮||𝒜|, where α=ζ=ω holds iff y_t=0&1 and ω=|𝒮||𝒜| holds iff y_t=0. Overall, sample complexity after using FG can be reduced. 
§ THEORETICAL ANALYSIS Without loss of generality, we restrict the following definitions in section <ref> and define some new definitions. We consider the reward function R∈(0,1). The demand d∼ P_d(d) obeys an independent discrete distribution and d^max<y^max. We define π_b as the stationary behavior policy and μ(s,a) as the stationary distribution of the Markov chain under π_b and P(s'|s,a), which is the same as the probability of being updated in typical RL. In RLFG, the stationary distribution doesn't change but the probability of being updated becomes μ(s,a) or μ(s,a) because of the side experiences. Let us analyze the sample complexity and update probability scenario by scenario from the simplest case to real case. Scenario 1: Consider a graph 𝒢 with all state-action pairs as the nodes and there is no edge between the nodes. It can be regarded as 𝒢=𝒢_1∪𝒢_2, where 𝒢_1=∅ and 𝒢_2 consists of nodes without any edge. Lemma 1: The sample complexity of the asynchronous Q-learning under scenario 1 is analyzed in <cit.>, which is O(1/μ_min(1-γ)^5ϵ^2+t_mix/μ_min(1-γ)), where μ_min=min_(s,a)∈𝒢μ(s,a) Scenario 2: Consider a graph 𝒢 with all state-action pairs as the nodes. Assume all nodes, once one node is sampled, all of these nodes can be sampled and updated at the same time. Thus G can be regarded as 𝒢=𝒢_1∪𝒢_2, where 𝒢_1 is a complete graph and 𝒢_2=∅. This case is the same as Synchronous Q-Learning in <cit.>. Lemma 2: The sample complexity of the Q-learning with feedback graph under scenario 2 is O(1/(1-γ)^4ϵ^2) with learning rate being (1-γ)^3ϵ^2. Scenario 3: Consider a graph 𝒢 with all state-action pairs as the nodes. Assume for some nodes, once one node is sampled, all of these nodes can be sampled and updated at the same time. For other nodes, when one node is sampled, only itself can be updated. Thus 𝒢 can be regarded as 𝒢=𝒢_1∪𝒢_2, where 𝒢_1 is a complete graph and 𝒢_2 consists of nodes without any edge. Lemma 3: The sample complexity of the Q-learning with FG under scenario 3 is O(1/μ_min(1-γ)^5ϵ^2+t_mix/μ_min(1-γ)), where μ_min=min[min_(s,a)∈ G_2μ(s,a),∑_(s,a)∈𝒢_1μ(s,a)] Scenario 1 indicates the IC environment without feedback graph and scenario 2 indicates the uncensored case of the IC environment with feedback graph. As for the censored case of the IC environment, we simplify it in scenario 3 by assuming 𝒢_1 is a complete graph. We can see that the sample complexity order is O(1/μ_min(1-γ)^5ϵ^2+t_mix/μ_min(1-γ))≥O(1/μ_min(1-γ)^5ϵ^2+t_mix/μ_min(1-γ))>O(1/(1-γ)^4ϵ^2), since min_(s,a)∈𝒢μ(s,a)<min[min_(s,a)∈ G_2μ(s,a),∑_(s,a)∈𝒢_1μ(s,a)]<1, where 1 indicates μ_min=1 in scenario 2. Now, Let us loosen the assumption that 𝒢_1 is a complete graph in scenario 3. Scenario 4: Consider a graph G=G_1∪ G_2 with all state-action pairs as the nodes. We define v_i<v_j and s_i<s_j if s_i[0]<s_j[0]. For nodes in G_1, once one node v_i is sampled, ∀ v_j<v_i can be sampled and updated at the same time. For nodes in G_2, when one node is sampled, only itself can be updated. Thus G can be regarded as G=G_1∪ G_2, where G_1 is a connected graph and G_2=∅. Lemma 4: The sample complexity of the Q-learning with feedback graph under Assumption 4 is O(1/μ̂'_min(1-γ)^5ϵ^2+t_mix/μ̂'_min(1-γ)), where μ̂'_min=min[min_(s,a)∈ G_2μ(s,a),∑_{s,a|s∈ G_1;∀ŝ∈ G_1, s>ŝ}μ(s,a)]. Based on the above scenarios and lemmas, let us consider the lost-sales IC environment with constant demand d. Scenario 5: Consider a graph 𝒢 with all state-action pairs as the nodes. 
Assume for nodes with y≥ d, once one node is sampled, all of these nodes can be sampled and updated simultaneously. For nodes with y<d, only nodes with y'≤ y can be sampled and updated simultaneously. Thus 𝒢 can be regarded as 𝒢=𝒢_1∪𝒢_2. Lemma 5: The sample complexity of the Q-learning with feedback graph under scenario 5 is O(1/μ_min(1-γ)^5ϵ^2+t_mix/μ_min(1-γ)), where μ_min=∑_(s,a)∈𝒢; y≥ dμ(s,a). Proof: For {(s,a)|(s,a)∈𝒢; y≥ d}, we have μ_min^y≥ d=∑_(s,a)∈𝒢; y≥ dμ(s,a). For {(s,a)|(s,a)∈𝒢; y< d}, we have μ_min^y<d=∑_y=dμ(s,a)+μ_min^y≥ d. Thus we have μ_min=μ_min^y≥ d. If μ_min appears in {(s,a)|(s,a)∈𝒢; y≥ d}, we have μ_min=μ_min^y≥ d=∑_(s,a)∈𝒢; y≥ dμ(s,a)>(y^max-d)|A|^Lmin_(s,a)∈𝒢; y≥ dμ(s,a)=(y^max-d)|A|^Lμ_min. If μ_min appears in {(s,a)|(s,a)∈𝒢; y<d}, which means μ_min<min_(s,a)∈𝒢; y≥ dμ(s,a), we have μ_min=μ_min^y≥ d=∑_(s,a)∈𝒢; y≥ dμ(s,a)>(y^max-d)|A|^Lmin_(s,a)∈𝒢; y≥ dμ(s,a)>(y^max-d)|A|^Lμ_min. Thus μ_min under scenario 5 improves at least (y^max-d)|A|^L times than that under scenario 1. Proof done. Now we lossen the assumption of the constant demand d to d_t∼ P_d(d). Scenario 6: Consider a graph 𝒢 with all state-action pairs as the nodes. Assume for nodes with y≥ d_t, once one node is sampled, all of these nodes can be sampled and updated simultaneously. For nodes with y<d_t, only nodes with y'≤ y can be sampled and updated simultaneously. Proof of Theorem 1: For each (s,a) and each possible value of d, we have μ^y≥ d(s,a|d)=∑_(s̅,a̅)∈𝒢 y̅≥ dμ(s̅,a̅|d)≥μ^y≥ d(s,a|d), {(s,a)|(s,a)∈𝒢; y≥ d}. μ^y<d(s,a|d)=∑_(s̅,a̅)∈𝒢 y≤y̅≤ dμ(s̅,a̅|d)+ ∑_(s̅,a̅)∈𝒢 y̅≥ dμ(s̅,a̅|d)≥μ^y≤ d(s,a|d), {(s,a)|(s,a)∈𝒢; y< d}. Thus we have μ(s,a)=E_d∼ P_d[μ(s,a|d)] ≥ E_d∼ P_d[μ(s,a|d)]=μ(s,a). Then we formulate μ(s,a) in details: μ(s,a) =E_d∼ P_d[μ(s,a|d)] = ∑_d=y^d^maxP_d(d)∑_(s̅,a̅)∈𝒢 y̅≥ dμ(s̅,a̅|d) + ∑_d=0^y-1P_d(d)∑_(s̅,a̅)∈𝒢 y≤y̅≤ dμ(s̅,a̅|d)+ ∑_(s̅,a̅)∈𝒢 y̅≥ dμ(s̅,a̅|d) =∑_d=0^d^maxP_d(d)∑_( s̅,a̅)∈𝒢 y̅≥ dμ(s̅,a̅|d) + ∑_d=0^y-1P_d(d)∑_(s̅,a̅)∈𝒢 y≤y̅≤ dμ(s̅,a̅|d). Proof done. § PARAMETERS OF RAINBOW-FG § COMPUTE RESOURCES Experiments are carried out on Intel (R) Xeon (R) Platinum 8375C CPU @ 2.90GHz and NVIDIA GeForce RTX 3080 GPUs. All the experiments can be done within one day. § MORE EXPERIMENTS FOR FG Figure <ref> shows the learning process of Rainbow and Rainbow-FG with different exploration parameters in more settings. The learning processes in these settings show similar properties in section <ref>. § HYPERPARAMETER ANALYSIS §.§ Feedback Graph Size Theoretically, we should get enough side experiences by considering every term of S and A to get the largest size of G_1. However, in practice, the time or resources may be limited so that only part of the side information can be obtained. Based on this question, this section focuses on the effect of different sizes of the feedback graph. The default setting only constructs the feedback graph considering the current inventory y_t, which is the first term of s_t, and action a_t. Each comparison group adds one following term of s_t when constructing the feedback graph. Figure <ref> illustrates the learning process of Rainbow-FG with different sizes of FG. The result shows that the sample efficiency is more sensitive to the size of FG at the initial stage than at the final stage. As the size of FG increases, the improvement of the sample efficiency mainly occurs at the initial stage. At the final stage, Rainbow-FG w/ s[0:1] & a and s[0:4] & a first reach the final level and then follows Rainbow-FG with s[0:3] & a. 
§.§ Intrinsic Reward Weight To test the sensitivity of our intrinsic reward design, we test our method with different intrinsic reward weights. Figure <ref> shows the detailed learning process. A larger intrinsic reward weight tends to have higher sample efficiency during the initial stages of training. This result demonstrates the benefits of improving sample efficiency for the intrinsic reward method. However, large intrinsic reward weights can affect the performance of the final stage. The main reason is that adding the intrinsic reward to the extrinsic reward changes the original objective to be optimized.
http://arxiv.org/abs/2406.17857v1
20240625180052
A Hilton-Milner theorem for exterior algebras
[ "Denys Bulavka", "Francesca Gandini", "Russ Woodroofe" ]
math.CO
[ "math.CO", "math.AG", "05D05 (Primary) 05E14, 15A75 (Secondary)" ]
Work started while the first author was at the Department of Applied Mathematics, Faculty of Mathematics and Physics, Charles University, Prague, Czech Republic. Work of the first author is partially supported by the GAČR grant no. 22-19073S and by the Israel Science Foundation grant ISF-2480/20. Work of the second author is supported in part by the Slovenian Research Agency (research projects N1-0160, J1-3003). Work of the third author is supported in part by the Slovenian Research Agency research program P1-0285 and research projects J1-9108, N1-0160, J1-2451, J1-3003, and J1-50000. Einstein Institute of Mathematics, Hebrew University, Jerusalem 91904, Israel Denys.Bulavka@mail.huji.ac.il <https://kam.mff.cuni.cz/ dbulavka/> Department of Mathematics, Statistics, and Computer Science, St. Olaf College, Northfield MN, USA fra.gandi.phd@gmail.com <https://sites.google.com/a/umich.edu/gandini/> Univerza na Primorskem, Glagoljaška 8, 6000 Koper, Slovenia russ.woodroofe@famnit.upr.si <https://osebje.famnit.upr.si/ russ.woodroofe/> § ABSTRACT Recent work of Scott and Wilmer and of Woodroofe extends the Erdős-Ko-Rado theorem from set systems to subspaces of k-forms in an exterior algebra. We prove an extension of the Hilton-Milner theorem to the exterior algebra setting, answering in a strong way a question asked by these authors. A Hilton-Milner theorem for exterior algebras Denys Bulavka, Francesca Gandini, and Russ Woodroofe July 1, 2024 ======================================================== § INTRODUCTION A family of sets ℱ is pairwise-intersecting if every pair of sets in ℱ has nonempty intersection. Erdős, Ko, and Rado <cit.> gave an upper bound on the size of a pairwise-intersecting family of small sets, and characterized the families that achieve the upper bound. Let k≤ n/2. If ℱ⊆\binom{[n]}{k} is a pairwise-intersecting family of sets, then |ℱ|≤\binom{n-1}{k-1}. Moreover, if k<n/2 and |ℱ| achieves the upper bound, then ℱ consists of all the k-subsets containing some fixed element. There are many generalizations and analogues of Theorem <ref>. In this article, we focus on the extension to exterior algebras that was considered by Scott and Wilmer <cit.> and by the third author <cit.>. We first fix some notation: Throughout the paper, let 𝔽 be a field. We assume for expository purposes that the characteristic is not 2, although the fundamental techniques are independent of characteristic. Let V be a vector space of dimension n over 𝔽. Let 𝐞={ e_1,…,e_n} be a given basis for V. If I is a subset of [n], then let V^(I) be the subspace spanned by {e_j:j∉ I}, and write V^(i) for V^({i}). A subset L of the exterior algebra is self-annihilating if L∧ L=0; that is, if for every x,y∈ L, it holds that x∧ y=0. Let k≤ n/2. If L is a self-annihilating subspace of ^kV, then dim L≤\binom{n-1}{k-1}. The bound of Theorem <ref> is an extension of the bound of Theorem <ref>, as follows. Given a set F={i_1<i_2<⋯<i_k} in ℱ, we represent it with the exterior monomial e_F=e_i_1∧⋯∧ e_i_k in ^kV. Now taking {e_F:F∈ℱ} as a basis, using that e_F∧ e_G=0 if and only if F∩ G≠∅, and applying the distributive axiom, we obtain the bound of Theorem <ref> from Theorem <ref>. With this tight connection, Theorem <ref> may indeed be viewed as a categorification of the Erdős-Ko-Rado bound. Scott and Wilmer asked in <cit.> whether the characterization of the maximal families in Theorem <ref> extends to the exterior algebra setting.
The third author of the current article asked in <cit.> the even stronger question of whether the Hilton-Milner upper bound extends. We will answer both questions in the affirmative. We first recall the theorem of Hilton and Milner <cit.>. We say that a pairwise-intersecting set family is nontrivial if ⋂_F∈ℱF is empty. Let k≤ n/2. If ℱ⊆\binom{[n]}{k} is a nontrivial pairwise-intersecting family of sets, then |ℱ|≤\binom{n-1}{k-1}-\binom{n-k-1}{k-1}+1. Our main theorem will extend Theorem <ref> to exterior algebras. We say that a self-annihilating subset L of V is nontrivial if there is no nonzero 1-form in V=^1V that annihilates L. Let k≤ n/2. If L is a nontrivial self-annihilating subspace of ^kV, then dim L≤\binom{n-1}{k-1}-\binom{n-k-1}{k-1}+1. Theorem <ref> categorifies Theorem <ref> in the same way that Theorem <ref> categorifies the bound of Theorem <ref>. Since \binom{n-k-1}{k-1}>1 when k<n/2, we recover the exterior extension of the characterization of maximal families in Theorem <ref>. Let k<n/2. If L is a self-annihilating subspace of ^kV such that dim L achieves the upper bound of \binom{n-1}{k-1}, then L is annihilated by a 1-form. Theorem <ref> was earlier shown in <cit.> for the restrictive case where k=2. Draisma, Kraft, and Kuttler <cit.> considered bounds on nontrivially nilpotent subspaces in a somewhat different setting. Our strategy is to adapt proofs of Theorem <ref>, variously due to Frankl and/or Füredi <cit.> and to Hurlbert and Kamat <cit.>, to the exterior algebra setting. These proofs start with a nontrivial pairwise-intersecting set family, and apply combinatorial shifting operations. If the family becomes trivial at some step, then this guarantees strong structural properties at the previous step. Wise choices of further combinatorial shifting operations result in a nontrivial pairwise-intersecting family that is of the same size and shifted. In order to implement this strategy in the exterior algebra setting, we introduce a parameterized family of linear maps. The limit of the action of these maps extends combinatorial shifting to exterior algebras, and preserves more structure than the maps used by the third author in <cit.>. We act with these maps on self-annihilating subspaces and study the resulting (limit) subspaces. Our techniques are characteristic-independent; although they are based on ideas from algebraic geometry, our presentation here is self-contained and relatively elementary. In addition to an analogue of Theorem <ref>, we also prove an extension of another theorem of Hilton and Milner. Set families ℱ and 𝒢 are cross-intersecting if for every F∈ℱ and G∈𝒢, the intersection F∩ G is nonempty. Hilton and Milner gave bounds on |ℱ|+|𝒢| under the conditions that ℱ,𝒢 are nonempty and cross-intersecting (as a special case of a highly technical, more general result <cit.>). The exterior algebra analogue is as follows. Subsets K and L of V are cross-annihilating if K∧ L=0, that is, if for every x∈ K and y∈ L it holds that x∧ y=0. We show: Let k≤ n/2. If K and L are nonzero cross-annihilating subspaces of ^kV, then dim K+dim L≤\binom{n}{k}-\binom{n-k}{k}+1. Scott and Wilmer also consider a cross-intersecting theorem in <cit.>, where they bounded the product of the dimensions of K and L. The rest of this article is organized as follows. In Section <ref> we give various preliminaries and background results. In Section <ref> we prove Theorem <ref>, and in Section <ref> we prove Theorem <ref>.
§ ACKNOWLEDGEMENTS We thank Jake Levinson for clarifying our doubts regarding limits on the Grassmannian, and Allen Knutson for answering our questions on the relationship to limits in algebraic geometry. § PRELIMINARIES §.§ Combinatorial shifting and shifted families of sets Let ℱ⊆[n]k be a family of sets, let F∈ℱ, and let 1≤ i<j≤ n. The combinatorial shift operation ji is defined as follows. ji(F,ℱ) = (F∖{j})∪{i} if j∈F,i∉F and (F∖{j})∪{i}∉ℱ, F otherwise. jiℱ ={ji(F,ℱ) F∈ℱ}. For I⊆[n], we say that a family ℱ of subsets of [n] is shifted with respect to I if jiℱ=ℱ for each pair i<j with i,j∈ I. If I=[n], we simply say that ℱ is shifted. It is well-known that iteratively applying combinatorial shift operations results in a shifted family <cit.>; see also Theorem <ref> below. The technique of combinatorial shifting was introduced by Erdős, Ko, and Rado in <cit.> to prove Theorem <ref>. It has since become a standard tool in combinatorial set theory <cit.>. §.§ Exterior algebras The exterior algebra over the vector space V, denoted V, is a graded algebra that is described as follows. For each 0≤ k≤ n, we define the kth graded component ^kV as the span of elements { e_S:S∈[n]k}. Here, we identify ^1V with V by identifying e_{i} with e_i; we identify ^0V with . The elements of ^kV are called k-forms. Now the product is defined to satisfy the following axioms: e_∅∧ e_S=1e_S=e_S, e_i∧ e_j=-e_j∧ e_i, and e_S=e_s_1∧ e_s_2∧⋯∧ e_s_k for S={ s_1<s_2<⋯<s_k}. The elements e_S are monomials with respect to the basis 𝐞. The exterior algebra is an antisymmetric analogue of the polynomial algebra, and more background may be found in most advanced algebra textbooks, such as <cit.>. It has long been used as a tool for proving results in combinatorial set theory <cit.>. Given a k-form x and an understood basis, the support of x is the set of monomials with nonzero coefficient in expression for x as a linear combination of monomials. The monomial e_S has variable set { e_i : i∈ S }. When it causes no confusion, we may identify a variable set { e_i : i∈ S} with the underlying set of indices S. Given a linear operator M on V, we extend the linear operator to V by sending e_S=e_s_1∧ e_s_2∧⋯∧ e_s_k to Me_s_1∧ Me_s_2∧⋯∧ Me_s_k and extending linearly. We notice that in particular, for any x,y in V, we have that M(x∧ y)=Mx∧ My. The following result on annihilation by 1-forms is well-known; see e.g. <cit.>. For v∈^1V, x∈^kV, we have that v∧ x=0 if and only if x=v∧ x' for some x'∈^k-1V. §.§ Limit actions on the Grassmannian Write V for the projective space defined on the vector space V, consisting of (equivalence classes of) vectors of V∖{0}, considered up to scalar multiplication. If v_1,v_2,…,v_k are linearly independent vectors in V, then we may identify the subspace spanned by these vectors with the point [v_1∧ v_2∧⋯∧ v_k] in (^kV). We get the Grassmannian kV, consisting of all points in (^kV) that may be written in this form. Thus, the Grassmannian is a geometric object whose points correspond to k-dimensional subspaces of V. We will need to take limits as t→0 of actions of families of matrices parameterized by t. The limits we will take are basically of an algebraic geometry nature, but we did not find accessible literature on the topic. For completeness, we give an elementary description, which we hope will be accessible to any reader who has had a solid graduate course in (linear) algebra. We start by describing the action on an arbitrary projective space V. 
Next, extend our field to (t) by adjoining a transcendental t. Thus, (t) consists of ratios of polynomials in t with coefficients in . We now extend coefficients in our vector space V over to a vector space V(t)≅ V_(t) over (t). Thus, an element of V(t) is a linear combination of e_1,…,e_n with coefficients in (t). By clearing denominators and/or dividing by t in V(t), we may write an arbitrary projective element v(t) as a combination of e_1,…,e_n, where each coefficient is a polynomial in t, and where the coefficients do not have a common divisor polynomial of positive t-degree. This form is the canonical representative of v(t). In particular, the (polynomial) coefficient of at least one e_i has a nonzero coefficient of the t-degree zero term. Now the map _0: V(t)→ V obtained by evaluating the polynomials in the canonical representative at t=0 is a well-defined map. Now let N(t) be a nonsingular linear map over (t). We define the limit of the action of N(t) on V to be the composition V V(t) V(t) V, where ι is the standard inclusion map. We write N for the resulting map V→ V. For the limit of N(t) acting on any fixed vector v of ^2, we can write N(t)v = w in coordinates as [w_1(t), w_2(t)]. This point corresponds to the formal ratio w_1(t)/ w_2(t). Our description of the limit map resembles closely the usual procedure for evaluating lim_t→0w_1(t)/w_2(t) from a first calculus course. Note that here the limit point [α, 1] corresponds to the real number α, while the (single) limit point [1,0] corresponds to the calculus limits ±∞. The action on (^kV) is now a special case: the basis for V gives a basis of monomials for ^kV, and a linear map N(t) on V(t) induces a linear map on ^kV(t). Similarly for (^r(^kV)), given a nonnegative integer r. We are ready to discuss the limit of the action on rV and r^kV. Let N(t) be a nonsingular linear map V(t)→ V(t). If L=v_1∧⋯∧ v_r is a point on rV⊆(^rV), then the limit NL is also in rV, and is the vector subspace spanned by {Nw:w∈ L}. We first notice that a vector w is in L if and only if w∧ L=0. Now N(t)(w∧ L)=N(t)w∧N(t)L=0, and so the same happens in t-degree zero in the canonical representative. It follows that if w∧ L=0, then Nw∧ NL=0. Conversely, we need to show that NL is annihilated by r linearly independent vectors in the image of N. Suppose that e_i_1∧⋯∧ e_i_r is in the support of NL. Then e_i_1∧⋯∧ e_i_r has a t-degree zero component in the canonical representative of N(t)L. The coefficient of this monomial is an r× r minor of the the matrix with columns N(t)v_1,…,N(t)v_r. By Gaussian elimination, we may find vectors N(t)w_1,…,N(t)w_r in N(t)L so that the corresponding minor of N(t)w_1,…,N(t)w_r is diagonal and has a t-degree zero component. It follows that Nw_1,…,Nw_r are linearly independent, as desired. Equivalently, if the matrix W with columns N(t)v_1,…,N(t)v_r has an r× r minor with a t-degree zero component, then let G be the corresponding r× r submatrix. Now the columns of WG^-1 are vectors in N(t)L whose limits are linearly independent. Let N(t) be a nonsingular linear map V(t)→ V(t). If L_0⊆ L are vector subspaces of V, then also NL_0⊆ NL. As before, the limit action N on r^kV is a special case. For the purpose of this paper, we need that the limit preserves the self-annihilating and cross-annihilating properties. Experts in algebraic geometry will recognize this as following from the fact that these properties correspond to closed sets in the Zariski topology. An elementary proof is also easy. 
Let N(t) be a nonsingular linear map V(t)→ V(t). If L and L' are cross-annihilating subspaces of ⋀V, then NL and NL' are also cross-annihilating. Let x be in L and x' in L'. The t-degree zero term of the exterior product of the canonical representatives of N(t)x and N(t)x' is the product of the t-degree zero terms of N(t)x and N(t)x'. Now if x∧ x'=0, so that also N(t)x∧ N(t)x'=0, then we recover that Nx∧ Nx'=0. For further reading on projective space and the Grassmannian, we refer the reader to <cit.>, for example. Accessible literature on limits in algebraic geometry (also known as degenerations or specializations) seems to be a bit difficult to find. Artin <cit.> discusses limits of curves in algebraic geometry briefly, mainly over the field of complex numbers. The description we have given is known (in a somewhat more abstract setting) as the valuative criterion, as discussed by Newstead in <cit.>. Eisenbud and Harris <cit.> also give a somewhat similar description to ours in a more abstract and general situation. § SLOWER SHIFTING In this section, we introduce a new family of linear operators and their limits, similar to those given by the third author in <cit.>, and earlier by Knutson in <cit.>. In the case where a subspace of the exterior algebra does not admit a basis of monomials, our matrices will preserve more structure in comparison to those of <cit.>, yielding a slower and gentler shifting operation. Let i and j be between 1 and n. As before, let 𝔽(t) be the field extension with a new transcendental element t, and let V(t)≅ V_𝔽(t) be the extension of V to coefficients in 𝔽(t). Let N_j→ i(t) be the linear map sending e_j↦ e_i+te_j, and fixing all other basis elements. The limit of this linear map is the slow shifting operation. Also of some interest is the linear map M_i(t), sending e_i↦ te_i and fixing all other e_j. Obviously, both N_j → i(t) and M_i(t) depend on the choice of (ordered) basis 𝐞. In particular, as matrices acting on the left, N_j→ i(t) is the identity matrix except in column j, which has a 1 in row i and a t in row j (and zeroes elsewhere), while M_i(t) is the diagonal matrix diag(1, …, 1, t, 1, …, 1) with the t in position i. As in Section <ref>, we may extend an action on V(t) to the Grassmannian, and consider the limit action obtained by evaluating the canonical representative at t=0. Denote the limit of N_j→ i(t) as ji, and that of M_i(t) as M_i. §.§ The action of Nij We first describe the action of ji on (the projectivization of) elements of V. As in Notation <ref>, we denote by V^(j) the vector subspace of V spanned by 𝐞∖{e_j}. Let v=x+e_j be in V, where x is in V^(j). Then N_j→ i(t)v=x+e_i+te_j. This yields as canonical representative N_j→ i(t)v= x+e_i+te_j if e_i+x≠0, and e_j otherwise. Thus, ji v= x+e_i if e_i+x≠0, and e_j otherwise. We now extend this action to ℙ(^kV). Here, if m=x+e_j∧ y, where the supports of x and y do not contain e_j in their variable sets, then N_j→ i(t)m=x+e_i∧ y+te_j∧ y. Similarly to the above, this yields as canonical representative N_j→ i(t)m = x+e_i∧ y+te_j∧ y if e_i∧ y+x≠0, and e_j∧ y otherwise; and ji m = x+e_i∧ y if e_i∧ y+x≠0, and e_j∧ y otherwise. We observe that when m is a monomial with variable set F, then ji m is the monomial with variable set jiF. More generally, if L is generated by monomials whose variable sets form the set system ℱ, then ji L is generated by the monomials with variable sets jiℱ. The latter statement follows quickly from Lemma <ref> below, or a direct proof with bases is also not difficult. §.§ Comparison with other operations Our operation ji slowly changes e_j's to e_i's in an exterior k-form, while leaving alone monomials whose variable sets avoid both. 
In contrast, the shifting operations considered in <cit.> may quickly replace non-trivial systems with trivial systems. Algebraic shifting <cit.> or the related techniques with initial monomials in <cit.> have similar defects. Indeed, one essential ingredient of our proof is Lemma <ref> below, which says that if the 1-form ℓ annihilates a subspace of k-forms after applying ji, then the 2-form ℓ∧(e_i-e_j) annihilates before shifting. This is an exterior algebra analogue of the fact that if jiℱ becomes trivial, then before shifting, every set in ℱ contains at least one of i or j. The analogue fails for the operations of <cit.>. To illustrate, we rephrase the operation of <cit.> in the language of the current paper: it is the limit O_j→ i as t→0 of the linear map O_j→ i(t) sending e_j↦ e_i+te_j and e_h↦ te_h for h≠ j. We consider the k-form z=e_1∧ x+e_2∧ y+w, where x, y, and w are in V^({1,2}). For n large enough relative to k, one can choose x,y,w so that z is not annihilated by any 2-form. Now 21 z is e_1∧ x+e_1∧ y+w. Depending on the choice of w, this may not even be annihilated by any 2-form, let alone a 1-form. In contrast, it is straightforward to see that O_2→1z is e_1∧ y, which is annihilated by e_1. We see that the analogue of Lemma <ref> for O_j→ i does not hold, even for 1-dimensional subspaces! Similar drawbacks hold for initial monomial techniques, which replace z with a monomial in a single step. §.§ The action of Mi Although it will be of less importance to us, it is not difficult to describe the action of M_i. If v=x+e_i is in V for x∈ V^(i), then M_i(t)v= x+te_i if x≠0, e_i otherwise.Thus, M_iv= x if x≠0, e_i otherwise. Extending this action to (^kV), we get for m=x+e_i∧ y (where the supports of x and y do not contain e_i in their variable sets) the following. M_i(t)m= x+te_i∧ y if x≠0, e_i∧ y otherwise. M_im= x if x≠0, e_i∧ y otherwise. §.§ Stabilization of slow shifting We now show that a subspace L of ^kV stabilizes under repeated actions of our slow shifting ji operations. A limit action of ji on L is said to be fixing if ji L = L; we will mainly be interested in non-fixing actions. Consider the following procedure. [Slow shifting procedure] Given L a subspace of ^kV, and I⊆[n]: While there exists a pair i<j chosen from I so that ji L≠ L, set L:= ji L. Return L. We will require some lemmas. We consider V^(j) as a subalgebra of V. We say that L is monomial with respect to e_j if L=(L∩^kV^(j))⊕(L∩(e_j∧^k-1V^(j))). That is, L is monomial with respect to e_j if we can find a basis of elements that are either multiples of e_j, or else do not have any monomials with e_j in the support. We also call such a basis monomial with respect to e_j. If 1≤ i<j≤ n, then ji L is monomial with respect to e_j. Immediate by (<ref>). If L is monomial with respect to e_h, then the slow shift ji distributes over this monomiality for h≠ i,j, as follows. Let L be a subspace of ^kV, and let h∈[n]. If L is monomial with respect to e_h, and 1≤ i<j≤ n are distinct from h, then jiL = ji(L∩^kV^(h))⊕ ji(L∩(e_h∧^k-1V^(h))) =( ji L∩^kV^(h))⊕( ji L∩(e_h∧^k-1V^(h))). It is immediate by (<ref>) that ji (L∩^kV^(h))⊆^kV^(h) and ji (L∩(e_h∧^k-1V^(h)))⊆ e_h∧^k-1V^(h). The proof now follows by counting dimensions. If L is monomial with respect to e_j, then the slow shift ji respects the monomiality in a weaker manner. Let L be a subspace of ^kV and j≤ n. Let K be the largest subspace of ^k-1V^(j) such that e_j ∧ K ⊆ L. 
If L is monomial with respect to e_j, and if 1≤ i<j, then ji L=((L∩^kV^(j))+(e_i∧ K))⊕(e_j∧ K'), where K' is the largest subspace of K such that e_i∧ K'⊆ L. It is immediate that L∩^kV^(j) is fixed under ji, while from definition ji (e_j∧ K)=(e_i∧ K)⊕(e_j∧⟨ x∈ K:e_i∧ x=0⟩). More broadly, for each y∈ K so that e_i∧ y∈ L, we have ji (e_j∧ y-e_i∧ y)=e_j∧ y. The proof now follows by counting dimensions. If L is monomial with respect to e_j, then for each e_j∧ x in ji L, it also holds that e_i∧ x is in ji L. The following will be useful as a base case for an inductive argument that Algorithm <ref> terminates. For any 1≤ i<j≤ n and subspace L of ^kV, we have ji ji ji L= ji ji L. Moreover, if L is monomial with respect to e_j, then ji ji L= ji L. We caution that there are (non-monomial) examples where ji ji L≠ ji L. Consider the following: The span of e_1∧ e_2∧ e_3-e_1∧ e_2∧ e_4 requires two applications of 43 to stabilize. The first application yields the span of e_1∧ e_2∧ e_4. Although this is monomial, it is not stable under 43. Performing 43 again yields e_1∧ e_2∧ e_3, which is stable under further slow shifts. Although it is clear that the shifting algorithm for set systems terminates, the analogue for slow shifting of exterior subspaces requires a slightly more careful analysis. Algorithm <ref> terminates, for any choice of a sequence of non-fixing slow shifts. We work by induction on the size of the set of permissible indices I given as input to the algorithm. If |I|=1, then the result is trivial, and if |I|=2, then the result follows from Corollary <ref>. If |I|>2, then let b be the greatest index in I. If there is no bi operation in the sequence, then by induction (examining I∖{b}) the algorithm terminates. The first bi operation makes L monomial with respect to e_b. By Lemma <ref>, each additional non-fixing bi operation reduces the dimension of the subspace consisting of multiples of e_b. By Lemma <ref>, each other slow shift operation preserves the dimension of this subspace. Thus, there are finitely many non-fixing bi operations. The result now follows by induction on I∖{b}. As in the set system case <cit.>, it is efficient to shift first from the greatest indices. Here we base our lexicographic order on the usual order on [n]. If we choose at each step of Algorithm <ref> the lexicographically last ordered pair (j,i) where ji is non-fixing, then the algorithm terminates in at most |I|-1+|I|2 iterations. Let b be the last element of I. We first apply bi over all i<b, in decreasing order, performing the first such shift twice if possible (and so necessary). After the first slow shift, we have a subspace L that is monomial with respect to e_b. Additional slow shifts preserve monomiality with respect to e_b by Lemma <ref>. Moreover, since L∩^kV^(b)⊆ bi (L∩^kV^(b)), each bi is non-fixing for at most one application. Suppose that we have completed all slow shifts bi on L, and let e_b∧ K be as in Lemma <ref>. Repeated application of Lemma <ref> then gives that e_i∧ K⊆ L for each i<b in I, hence that ⊕_i∈ Ie_i∧ K⊆ L. Now by another application of Lemma <ref>, the subspace ⊕_i∈ Ie_i∧ K is preserved under all further ji operations with i,j∈ I. In particular, bi will fix L for all i throughout the remainder of the slow shifting procedure. That the process terminates now follows by induction on I∖{b}. Since in the worst case we shift at each pair i<j, and one extra time at each element of I except the least in order to guarantee monomiality, we have at most |I|-1+|I|2 slow shifts. 
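As observed in the discussion of the action of N_{j→i}, slow shifting restricted to monomial subspaces is exactly classical combinatorial shifting of the variable sets. The following plain-Python sketch (purely illustrative and not part of any proof; the function names and the particular order of shifts are our own choices) iterates the shifts s_{j→i} on a set family until it stabilizes, which is the set-system shadow of the slow shifting procedure above.

```python
def shift(family, i, j):
    """Combinatorial shift: replace j by i in a member F when j is in F,
    i is not, and the shifted set is not already in the family; otherwise keep F."""
    result = set()
    for F in family:
        G = (F - {j}) | {i}
        if j in F and i not in F and G not in family:
            result.add(G)
        else:
            result.add(F)
    return result

def fully_shift(family, n):
    """Apply shifts over all pairs i < j until the family is shifted (stable)."""
    family = {frozenset(F) for F in family}
    changed = True
    while changed:
        changed = False
        for j in range(n, 1, -1):          # shift from the greatest indices first
            for i in range(j - 1, 0, -1):
                new_family = shift(family, i, j)
                if new_family != family:
                    family, changed = new_family, True
    return family

# A nontrivial pairwise-intersecting family inside the 3-subsets of [5]
F = [{1, 2, 3}, {1, 2, 4}, {3, 4, 5}]
print(sorted(sorted(s) for s in fully_shift(F, 5)))
```

By the observation in the subsection on the action of N_{j→i}, running this on the variable sets of a monomial basis of a subspace tracks the variable sets of a monomial basis of the corresponding fully slow-shifted subspace.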
It is instructive to compare and contrast with <cit.>, which applies a similar sequence of combinatorial shifting operations to sets in the case where I=[n]. The set situation requires only n2 operations, since it is not required to first make the system monomial with respect to each i. By Theorem <ref> and Proposition <ref>, given a subspace L of ^kV, we may find a subspace of the same dimension that is stable under ji over all i<j both in I. We now describe the resulting subspaces. Let L be a subspace of ^kV, and let I⊆[n]. If L is stable under slow shifting ji over all i<j in I, then for each x∈ V^(I), the variable sets of the monomials y in V^([n] ∖ I) so that x∧ y∈ L form a shifted set system. Let x ∧ y be in the hypothesis. By definition, ji x ∧ y = x ∧ ji y. As we observed in Section <ref> that ji acts on monomials as combinatorial shifting on their variable sets, the result follows. Let L be a subspace of ^kV, let I⊆[n], and let a be min I. If L is stable under slow shifting ji over all i<j in I, then L has a basis consisting of elements of the form x∧ y, where x is a homogeneous form in V^(I∖ a) and y is a monomial whose variable set is a subset of { e_h:h∈ I}. Immediate from repeated application of Lemmas <ref> and <ref>, together with basic facts about direct sums. Thus, Theorem <ref> says that if L is stable under ji over I, then L is simultaneously monomial with respect to all variables with index in I except for the least indexed. In certain circumstances, we can also get monomiality with respect to the least index of I. Let L be a subspace of ^kV. If L is stable under slow shifting ji over all i,j∈[n], then L has a monomial basis. Up to constant multiplication, the only homogeneous forms in V^({2,…,n}) are 1 and e_1. We remark that if L is stable under ji over all i<j in I, but is not monomial with respect to a = min I, then it is easy to see that applying M_a (as in Section <ref>) will result in a system that is monomial with respect to all indices in I. Indeed, one might see M_a as a “degenerate” slow shifting operation, in the sense that we send e_a to zero (as in Section <ref>), but do not have a lesser indexed variable available with which to replace it. An essential step of our main proof will use shifting over I = { 3,…,n }. Let L be a subspace of ^kV that contains e_1∧ e_2∧^k-2V and that is annihilated by e_1∧ e_2. If L is stable under slow shifting ji over all i,j≥3, then L has a basis consisting of elements that are monomial with respect to all e_j with j≥3. In the basis yielded by Theorem <ref>, the form x has exterior degree 0,1,2, or 3. Without loss of generality, the basis contains the monomials of e_1∧ e_2∧^k-2V. It is immediate that for degree 0 or 3, we have the desired. For degree 1, the annihilation condition gives that x=λ_1e_1+λ_2e_2. For degree 2, we eliminate e_1∧ e_2 using the containment condition, leaving x=λ_1e_1∧ e_3+λ_2e_2∧ e_3. It seems worth noting that, in the case I = { 3, …, n }, a subspace stable under all ji's decomposes in a pleasing manner. Let L be a subspace of ^kV that contains e_1∧ e_2∧^k-2V and that is annihilated by e_1∧ e_2. If L is stable under slow shifting ji over all i,j≥3, then there are pairwise linearly independent vectors w_1,…,w_m in the span of e_1,e_2 and subspaces K_1,…,K_m of ^k-1V^({1,2}) so that L=e_1∧ e_2∧^k-2V+∑ w_i∧ K_i, and with the following additional properties: * For every distinct i,j∈[m], we have K_i∩ K_j=⋂_hK_h. 
* Each K_i as well as ⋂_hK_h has a basis of monomials whose variable sets form a shifted set system. Take K_i to be maximal under the condition that w_i ∧ K_i ⊆ L and apply Corollary <ref>. Suppose x∈ K_i∩ K_j. Then w_i∧ x and w_j∧ x are in L. Thus, arbitrary combinations of w_i∧ x and w_j∧ x are in L. In particular, as the span of w_i and w_j is the same as the span of e_1 and e_2, it holds that w_h∧ x is in L. Monomiality and shiftedness follows from Corollary <ref> and Proposition <ref>. § CROSS-ANNIHILATING SUBSPACES In this section, we prove Theorem <ref>. With the framework that we have developed, the proof is easy. By Lemma <ref>, slow shifting preserves the cross-annihilating property. Applying the slow shifting procedure over I=[n], by Theorem <ref>, Corollary <ref> and Proposition <ref> (with x=1), we can reduce to the case where K and L have a monomial basis. The variable sets of the monomials in the bases form shifted, nonempty, cross-intersecting set systems. The result now follows by <cit.>. § NONTRIVIAL SELF-ANNIHILATING SUBSPACES In this section, we prove Theorem <ref>. We use a similar proof strategy as in the proof of the set system result. We briefly sketch this proof: one reduces to the case where either 1 or 2 is in every set in the set system, then apply a cross-intersecting bound. A difficulty in extending this approach to the exterior algebra case is that an element of L may have both e_1 and e_2 occurring in its support, but not have e_1∧ e_2 as a factor. §.§ Main lemma The following technical lemma says that Theorem <ref> holds under the extra conditions of annihilation by a 2-form and partial annihilation by a 1-form factor. Let k≤ n/2, and let f,g∈ V. If L is a nontrivial self-annihilating subspace of ^kV, such that L is annihilated by f∧ g, and where some element of L is annihilated by f but not by g, then L≤n-1k-1-n-k-1k-1+1. Let V_0⊆ V be a complementary subspace to the span of {f,g}. Without loss of generality, it holds that A=f∧ g∧^k-2V is contained in L, as otherwise A+L satisfies the hypothesis. We now split L into the direct sum of three subspaces: the subspace A as before, the subspace B of elements that are annihilated by f but not by g, and a complementary subspace C. Moreover, B consists of elements of the form f∧ x, and we can choose C to consist of elements of the form f∧ x+g∧ y, where x,y∈^k-1V_0 and y≠0. Now we take B'⊆^k-1V_0 to be {x:f∧ x∈ B}, and C'⊆^k-1V_0 to be { y:f∧ x+g∧ y∈ C for some x} . Since no element of C is annihilated by f, we get that C'= C. Since B and C are cross-annihilating, and by the definition of exterior multiplication, we obtain that B' and C' are cross-annihilating. But now by Theorem <ref>, we see that L= A+ B'+ C'≤n-2k-2+n-2k-1-n-k-1k-1+1. The result now follows by the Pascal's triangle identity. §.§ Other lemmas We also need several lemmas that parallel results in combinatorial set theory. The first is immediate by Lemma <ref>, and parallels that if 1 or 2 is in every set F∈ℱ, then the same holds after combinatorial shifting. If L⊆^kV is a subspace so that e_1∧ e_2∧ L=0, then also e_1∧ e_2∧( ji L)=0 for every 3≤ i<j≤ n. The second is immediate by computation. Let f,g,ℓ be 1-forms of V, such that f and g are linearly independent. If L is a subspace of ^kV so that f∧ g∧^k-2V⊆ L, and so that ℓ annihilates L, then ℓ is in the span of {f,g}. The third is non-trivial, and requires analysis of the slow shifting operation. Let ℓ be a 1-form of V. If L is a subspace of ^kV such that ℓ∧ ji L=0, then ℓ∧(e_i-e_j)∧ L=0. 
Moreover, if L is nontrivial, then ℓ∧(e_i-e_j)≠0. Broadly speaking, Lemma <ref> is an extension of the fact in combinatorial set theory that if jiℱ becomes trivial, then every set in ℱ contains at least one of i,j. By (<ref>), whenever x+e_j∧ y is in L (where the variable sets of the supports of x and y do not contain e_j), we have ℓ∧(x+e_i∧ y)=0. Thus, ℓ∧(e_i-e_j)∧(x+e_j∧ y) =ℓ∧ e_i∧ x-ℓ∧ e_j∧(x+e_i∧ y) =ℓ∧ e_i∧ x. Since ℓ∧(x+e_i∧ y)=0, we have also ℓ∧ e_i∧ x=ℓ∧ e_i∧(x+e_i∧ y)=0. Finally, if L is nontrivial, then there is an m=x+e_j∧ y that is not annihilated by e_i-e_j. Thus, e_i∧ x-e_j∧(x+e_i∧ y)≠0, and in particular x≠-e_i∧ y. But then (<ref>) gives that ji m = x + e_i ∧ y. It now follows by computation that (e_i-e_j)∧ m=(e_i-e_j)∧ ji m, so that the latter is nonzero. This yields that ℓ and e_i-e_j are linearly independent, as desired. §.§ Proof of Theorem <ref> Let L be a nontrivial self-annihilating subspace of maximal dimension. As the theorem is trivial for k=1, assume that k≥ 2. Apply slow shifting operations over all i,j∈[n] and with respect to the standard basis 𝐞 until either L stabilizes or is annihilated by a 1-form. If L stabilizes to a nontrivial self-annihilating subspace under these slow shifting operations, then by Corollary <ref>, the resulting L is monomial with respect to 𝐞, and the desired bound follows from (the shifted version of) Hilton-Milner for set systems, Theorem <ref>. If ji L is annihilated by a 1-form ℓ, then we change to the ordered basis 𝐟 with f_1=ℓ, f_2=e_j-e_i, and f_3,…,f_n chosen arbitrarily to fill out the basis. It follows from Lemma <ref> that f_1 and f_2 are indeed linearly independent, and that L is annihilated by f_1∧ f_2. Apply slow shifting operations with respect to the basis 𝐟 for i,j∈{3,…,n} until either L stabilizes or is annihilated by a 1-form. We may assume by maximality that f_1∧ f_2∧^k-2 V ⊆ L, and notice that Lemma <ref> gives this subspace to be preserved by the slow shifting. If L stabilizes, then by Corollary <ref>, we may find a basis consisting of elements of the form f_1∧ f_2∧ x and (λ_1 f_1 + λ_2 f_2)∧ y. In particular, the resulting L satisfies the conditions of Lemma <ref> for some f,g in the span of {f_1,f_2}. The desired bound follows. Finally, if ji L is annihilated by a 1-form ℓ' for some 3≤ i<j, then Lemma <ref> gives that ℓ'∧(f_j-f_i) annihilates L. Lemma <ref> gives that ℓ' is in {f_1,f_2}; take g to be such that ℓ'∧ g=f_1∧ f_2. By maximality, ℓ'∧(f_j-f_i)∧^k-2V is contained in L. Now f_1∧ f_2∧^k-2V=ℓ'∧ g∧^k-2V⊆ L, and (since g is in the span of {f_1,f_2}) there are elements of ℓ'∧(f_j-f_i)∧^k-2V that are annihilated by ℓ' but not by g. Thus, we may apply Lemma <ref> to finish the proof. hamsplain
http://arxiv.org/abs/2406.17971v1
20240625230703
Robust integration of external control data in randomized trials
[ "Rickard Karlsson", "Guanbo Wang", "Jesse H. Krijthe", "Issa J. Dahabreh" ]
stat.ME
[ "stat.ME" ]
#1 1 1 Robust integration of external control data in randomized trials Rickard Karlsson^1,†*, Guanbo Wang^2,3†*, Jesse H. Krijthe^1 and Issa J. Dahabreh^2,3,4 ^1 Pattern Recognition Laboratory, Delft University of Technology, Delft, the Netherlands ^2 CAUSALab, Harvard T.H. Chan School of Public Health, Boston, Massachusetts, U.S.A ^3 Department of Epidemiology, Harvard T.H. Chan School of Public Health, Boston, Massachusetts, U.S.A ^4 Department of Biostatistics, Harvard T.H. Chan School of Public Health, Boston, Massachusetts, U.S.A ^†Equal contribution; r.k.a.karlsson@tudelft.nl; g.wang@hsph.harvard.edu ========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== § ABSTRACT One approach for increasing the efficiency of randomized trials is the use of “external controls” – individuals who received the control treatment in the trial during routine practice or in prior experimental studies. Existing external control methods, however, can have substantial bias if the populations underlying the trial and the external control data are not exchangeable. Here, we characterize a randomization-aware class of treatment effect estimators in the population underlying the trial that remain consistent and asymptotically normal when using external control data, even when exchangeability does not hold. We consider two members of this class of estimators: the well-known augmented inverse probability weighting trial-only estimator, which is the efficient estimator when only trial data are used; and a more efficient member of the class when exchangeability holds and external control data are available, which we refer to as the optimized randomization-aware estimator. To achieve robust integration of external control data in trial analyses, we then propose a combined estimator based on the efficient trial-only estimator and the optimized randomization-aware estimator. We show that the combined estimator is consistent and no less efficient than the most efficient of the two component estimators, whether the exchangeability assumption holds or not. We examine the estimators' performance in simulations and we illustrate their use with data from two trials of paliperidone extended-release for schizophrenia. Keywords: causal inference; combining information; data integration; efficiency; external controls; hybrid designs 1.45 Robust integration of external control data in randomized trials Rickard Karlsson^1,†*, Guanbo Wang^2,3†*, Jesse H. Krijthe^1 and Issa J. Dahabreh^2,3,4 ^1 Pattern Recognition Laboratory, Delft University of Technology, Delft, the Netherlands ^2 CAUSALab, Harvard T.H. Chan School of Public Health, Boston, Massachusetts, U.S.A ^3 Department of Epidemiology, Harvard T.H. Chan School of Public Health, Boston, Massachusetts, U.S.A ^4 Department of Biostatistics, Harvard T.H. 
Chan School of Public Health, Boston, Massachusetts, U.S.A ^†Equal contribution; r.k.a.karlsson@tudelft.nl; g.wang@hsph.harvard.edu ========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== § INTRODUCTION Randomized trials are the preferred approach for learning about treatment effects. Conducting trials, however, is costly and time-consuming, and trials often have small sample sizes that lead to imprecise results. As a result, there is growing interest in augmenting trials with data from external or historical controls <cit.> – individuals who received the control treatment of the trial as part of routine care or prior clinical investigations – to improve efficiency in estimating treatment effects in the trial. The problem of augmenting trials with external controls has many similarities with the problem of transporting causal inferences from a trial to a target population <cit.> because the former essentially reverse the flow of information compared with the latter <cit.>: instead of using information from the trial to learn about causal effects in a target population, external control methods use information from the external control population to improve inference in the trial. Consequently, trial analyses that use external control data often assume exchangeability conditions similar to those needed for transportability analyses and use similar methods to account for between-population differences <cit.>. However, when these exchangeability conditions do not hold, external control methods can introduce significant bias in the estimation of treatment effects. A natural, though imperfect, approach to address this challenge involves assessing if the trial and external control populations are compatible for pooling, for instance, through statistical hypothesis tests <cit.>. Unfortunately, these tests have low statistical power, particularly when the trial sample size is small, which is precisely when using external control data would be most appealing; false negative results with “test-then-pool” approaches may result in substantial bias. A related approach involves dynamically selecting valid external controls in a data-driven manner <cit.>, yet the risk of bias persists when the assumptions needed for valid dynamic selection do not hold. Here, we describe a “randomization-aware” class of estimators that can incorporate external control data and remain consistent and asymptotically normal, even when the external control population is not exchangeable with the trial population. We use optimization methods to identify a more efficient member of this class when exchangeability holds; we refer to this member of the class as the optimized randomization-aware estimator. Last, we propose a combined estimator that, asymptotically, is no less efficient than the efficient and robust trial-only estimator and the optimized randomization-aware estimator, whether the exchangeability assumption holds or not. 
In simulation studies, we verify that our estimators have good finite-sample performance and are competitive with existing, less robust alternatives. Last, we illustrate the methods using data from two trials of paliperidone extended-release for schizophrenia. § STUDY DESIGN, DATA STRUCTURE, AND CAUSAL ESTIMANDS Study design and data structure: We assume that the trial data and the external control data are independently obtained simple random samples from possibly different underlying populations, with unknown and possibly different sampling fractions. The trial and external control data are appended to form a composite dataset. In prior work on generalizability and transportability analyses, this sampling scheme is referred to as a non-nested trial because the sample proportions of trial participants and external controls in the composite dataset do not necessarily reflect the relative size of their underlying populations <cit.>; see Supplementary Material <ref> for additional design considerations. Simplifying assumptions: To focus on issues related to the integration of external controls and highlight unique aspects of our approach, we make several simplifying assumptions. We restrict attention to the case where the treatment is binary, though extensions to multi-valued discrete treatments are fairly straightforward. Furthermore, we assume complete adherence to treatment, no missing data, and no loss to followup. These complications are important in practice, but standard methods for addressing them can be combined with the approaches we focus on. Notation: Throughout, we use italic capital letters for random variables and lowercase letters to denote specific values. We use f(·) to denote densities of random variables. Let X denote baseline (pre-randomization and pre-treatment covariates), S denote a binary indicator for the study source (S=1 for trial participants; S = 0 for the external controls), A denote treatment strategies (without loss of generality we refer to treatment A=1 as the experimental treatment and treatment A=0 as control treatment, even though both groups may be receiving active treatments), and Y denote a (binary, continuous, or count) outcome measured at the end of the study. Sampling model: We model the data on observation i with S_i = s as independent and identically distributed, conditional on study source, realizations of the random tuple O_i=(X_i, S_i = s, A_i, Y_i), for i=1, …, n_s, where n_s denotes the number of observations from source S = s. We define n = n_1 + n_0 to be the sample size of the composite data set formed by appending the trial data with the external control data. In the trial, treatment A is randomly assigned. In the population underlying the external control data, the only treatment in use may be the control treatment, in which case {S_i = 0}{A_i = 0}, or the treatments in use may be more variable, including the experimental and control treatments evaluated in the trial, as well as other treatments not examined in the trial. To simplify exposition, we mainly address the case of uniform use of the control treatment in the population underlying the external control data; we illustrate this data structure in Table <ref>. Nevertheless, we will argue that, with small modifications, the methods we propose can also be applied when there exists variation in treatment in the population underlying the external data. 
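To fix ideas, the following minimal sketch (Python/NumPy; the variable names and the data-generating process are ours, chosen only for illustration and unrelated to the trials analyzed later or to the paper's simulation design) generates a composite dataset of the form illustrated in Table <ref>: a trial with randomized treatment appended to external controls who all receive the control treatment.

```python
import numpy as np

def simulate_composite_data(n1=200, n0=600, bias=0.0, seed=0):
    """Simulate (X, S, A, Y): a trial (S=1) with marginally randomized A,
    appended to external controls (S=0) who all receive A=0.
    `bias` shifts the control outcome mean in the external population;
    bias != 0 makes the external controls non-exchangeable."""
    rng = np.random.default_rng(seed)
    # Trial sample (S = 1)
    X1 = rng.normal(0.0, 1.0, n1)
    A1 = rng.binomial(1, 0.5, n1)
    Y1 = 1.0 + 0.5 * X1 + 1.0 * A1 + rng.normal(0.0, 1.0, n1)
    # External controls (S = 0), control treatment only
    X0 = rng.normal(0.3, 1.0, n0)            # covariate shift between populations
    A0 = np.zeros(n0, dtype=int)
    Y0 = 1.0 + 0.5 * X0 + bias + rng.normal(0.0, 1.0, n0)
    X = np.concatenate([X1, X0])
    S = np.concatenate([np.ones(n1, dtype=int), np.zeros(n0, dtype=int)])
    A = np.concatenate([A1, A0]).astype(int)
    Y = np.concatenate([Y1, Y0])
    return X, S, A, Y

# Example: X, S, A, Y = simulate_composite_data()
```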
Regardless of whether treatment varies in the population underlying the external control data, in many applied settings, the number of external controls, n_0, is typically much larger than the number of trial participants n_1. As the total sample size n increases, we assume that the ratios of the sample sizes of the trial and external control data over the total sample size converge to positive constants (i.e., as n→∞, n_s/n→ q_s> 0). Causal estimands: To define causal quantities of interest, we use potential (counterfactual) outcomes <cit.>. Specifically, for the ith individual and for a∈{0,1}, the potential outcome Y_i^a denotes the outcome under intervention to set treatment A to a, possibly contrary to fact. Our goal is to estimate the average treatment effect in the population underlying the trial, [Y^1 - Y^0 | S = 1] = [Y^1 | S = 1] - [Y^0 | S = 1], and its constituent potential outcome means, [Y^a | S = 1], a = 0,1, which are the components of the average treatment effect and typically of inherent scientific interest. § IDENTIFICATION AND ESTIMATION IN THE TRIAL §.§ Identification in the trial Identifiability conditions: The following conditions suffice to identify potential outcome means and average treatment effect in the population underlying the trial: Condition 1: For every individual i and each treatment a ∈{0,1}, if A_i = a, then Y^a_i = Y_i. Condition 2: For each a ∈{0,1}, Y^a A | (X, S=1). Condition 3: For each a ∈{0,1}, if f(x, S=1) ≠ 0, then [A = a | X = x, S = 1] > 0. Condition 1 holds when the intervention is well-defined (as is the case for protocol-directed treatments in trials), such that there are no “hidden” versions of treatment or different versions are not outcome-relevant, and there is no interference (no spillover effects). This condition is assumed on the basis of substantive knowledge, but aspects of experimental design (e.g., a carefully specified treatment protocol) can increase plausibility. Furthermore, implicit in our notation is an assumption that data source-specific effects (e.g., trial engagement effects <cit.>) are absent. Condition 2 is an assumption of no unmeasured confounding in the trial, conditional on the covariates. This assumption is supported by study design in the context of a marginally or conditionally randomized trial (in fact, in a marginally randomized trial, the stronger independence condition (Y^a, X) A | S=1 is supported by study design). Condition 3 is also supported by randomization, which ensures that any covariate pattern that can occur in the population underlying the trial has a non-zero probability of receiving each of the treatments examined in the trial. Identification: Under conditions 1 through 3, the trial data alone suffice to identify the potential outcome mean under intervention to set treatment A to a in the the trial's underlying population, [Y^a|S=1] , with ψ_a≡ [[ Y | X, S =1 , A = a ] | S = 1 ], which can be equivalently written as ψ_a = 1[S = 1][ 1(S = 1, A = a)Y[A=a|X, S=1]]. Furthermore, the average treatment effect in the population underlying the trial, [Y^1-Y^0|S=1], can be identified with τ=ψ_1-ψ_0. §.§ Estimation using trial data alone To estimate ψ_a, we can use an outcome regression estimator (∑_i=1^n S_i )^-1∑_i=1^n S_i g_a(X_i), where g_a(X) denotes the estimated probability of the outcome model in the trial, [Y|X, S=1, A=a]. When the model for g_a(X_i) is correctly specified, the outcome regression estimator is consistent and is the most efficient in the class of doubly robust estimators <cit.>. 
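As a concrete illustration, a minimal implementation of this outcome regression estimator could look as follows (Python/NumPy sketch; the simple linear working model for g_a and the function name are our own assumptions, used only to make the averaging step explicit).

```python
import numpy as np

def outcome_regression_estimate(X, S, A, Y, a):
    """Outcome regression estimator of psi_a: fit E[Y | X, S=1, A=a] by least
    squares in the trial arm with treatment a, then average the fitted values
    over all trial participants."""
    trial_arm = (S == 1) & (A == a)
    design = np.column_stack([np.ones(trial_arm.sum()), X[trial_arm]])
    beta, *_ = np.linalg.lstsq(design, Y[trial_arm], rcond=None)
    g_a = lambda x: np.column_stack([np.ones(len(x)), x]) @ beta
    return g_a(X[S == 1]).mean()

# Example (with composite data as in the earlier sketch):
# psi1_hat = outcome_regression_estimate(X, S, A, Y, a=1)
# psi0_hat = outcome_regression_estimate(X, S, A, Y, a=0)
```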
However, correct specification of the outcome model is challenging. On the other hand, one can estimate ψ_a using the inverse probability weighting estimator (∑_i=1^n S_i )^-1∑_i=1^n S_i 1(A_i=a)Y_i/e_a(X_i), where e_a(X) is an estimator for [A=a|X, S=1], the propensity score in the trial <cit.>. This weighting estimator is consistent because the model for estimating the propensity score in the trial can always be correctly specified (the probability of treatment is controlled by the investigators). In fact, the estimated propensity score e_a(X) in the inverse probability weighting estimator can be replaced with the true propensity score e_a(X). However, estimating the propensity score can improve the efficiency of the estimator <cit.>. Nevertheless, even when the propensity score is estimated, the weighting estimator may be inefficient as it does not use data on the outcomes of individuals not receiving treatment a. An estimator that combines the advantages of the outcome regression and weighting estimators is the augmented inverse probability weighting estimator, ϕ_a = (∑_i=1^n S_i )^-1∑_i=1^n S_i[1(A_i=a)/e_a(X_i) (Y_i-g_a(X_i)) + g_a(X_i) ]. This estimator is asymptotically normal; furthermore, it is robust in the sense that it remains consistent even if the outcome regression model for g_a(X) is misspecified, because the model for e_a(X) is correctly specified. When the models for both e_a(X) and g_a(X) are correctly specified, this estimator achieves the semiparametric variance bound when only the trial data are available. A natural estimator for the average treatment effect in the trial τ is τ(g)=ϕ_1-ϕ_0; here, we index the treatment effect estimator by g to emphasize that it depends on the estimators for g_a(X), a=0, 1. Due to the linearity of the influence functions of ϕ_a, for a=0, 1, τ(g) is consistent and asymptotically normal as well. We refer to the estimator τ(g) as the efficient trial-only estimator of the treatment effect. However, τ(g) only uses the data from the trial. In the next section, we consider strategies for using information from the external control data. § USING EXTERNAL CONTROLS UNDER CONDITIONAL EXCHANGEABILITY We now present a pair of additional conditions that are often invoked when using external control data. In view of the results presented above, it should be clear that these additional conditions are not necessary for identification of the causal estimands of interest; instead they are invoked in the hope of improving the estimation of the treatment effect. We then briefly review alternative approaches to using external controls and identify challenges with their implementation to motivate our proposed approach. §.§ Identification using the external control data In prior work (e.g., <cit.>), some version of the following two conditions has been invoked to allow the incorporation of the external controls: Condition 4: Y^a=0 S | X. Condition 5: {S=0}{A=0}. Alternatively, Condition 5 can be replaced with the following independence condition: Condition 5': Y^0 A|(X, S=0). Condition 4 is a condition of exchangeability in distribution that allows the external control data to contribute to the analysis of the trial data. In our setting it can be replaced by the somewhat weaker (for non-binary outcome Y) condition of exchangeability in expectation, [Y^0 | X = x, S = 1] = [Y^0 | X = x, S = 0], for each x with positive density in the population underlying the trial, f(x,S=1) ≠ 0. 
Condition 5 is a formalization of the requirement that all individuals in the population underlying the external control data receive the control treatment in the trial. As noted, this condition can be replaced with Condition 5', a partial assumption of no-confounding (partially because it only involves counterfactuals under the control treatment); this assumption may be plausible even in the presence of treatment variation in the population underlying the external data. In fact, both conditions 5 and 5' can be understood as precluding the possibility of confounding in the population underlying the external data, either due to lack of variation in treatment or because a sufficiently rich set of covariates are available in X to control for confounding. Both conditions ensure that the observed external controls are comparable to the control group in the trial and thus can be used to estimate causal effects. We reiterate that these additional conditions – 4, and 5 or 5' – are strong assumptions, typically not supported by study design, and their plausibility is often uncertain or controversial. In fact, when added to conditions 1 through 3, these additional conditions impose restrictions on the law of the observed data (an instance of overidentification). §.§ Testable implications of the additional identifiability conditions Conditions 1-5 together have a testable implication in the law of observed data, namely that for each x with positive density in the population underlying the trial, f(x,S=1)≠ 0, H_0 : [Y | X=x, A=0, S=1] = [Y | X=x, A=0,S=0]. For completeness, we provide a derivation of this result in Supplementary Material <ref>. The above testable implication provides a way to evaluate whether conditions 1-5 jointly hold; informally, it may be used to assess compatibility between the trial and external control data. Various methods exist for testing H_0, such as parametric likelihood-ratio tests or non-parametric alternatives <cit.>. Complications related to doing a statistical test against H_0 and subsequently drawing statistical inferences using the same dataset can be addressed by sample-splitting or accounting for pre-testing when quantifying uncertainty (see, e.g., <cit.>). §.§ Identification assuming exchangeability of populations Under conditions 1-5, pooling of trial and external control data can be incorporated in analyses aiming to estimate the potential outcome mean in the population underlying the trial under intervention of the control treatment A to a=0, that is, [Y^0| S=1] <cit.>. Specifically, we can identify [Y^0| S=1] with ζ_0 = [ [Y| X, A=0] | S=1]. Compared to the identification results using the trial data alone, we no longer condition on S=0 in the inner expectation; instead, we pool the trial and external control data. The average treatment effect in the population underlying the trial, [Y^1-Y^0| S=1], is identified with ψ_1-ζ_0. §.§ Estimation under exchangeability of populations <cit.> proposed a doubly-robust estimator for ζ_0, ζ_0 = (∑_i=1^n S_i )^-1∑_i=1^n[ (S_i (1-A_i) + (1-S_i) r(X_i) /η(X_i) (1-e_0(X_i)) + (1-η(X_i))r(X_i)η(X_i))(Y_i - g_0(X_i)) + S_i g_0(X_i)] , where η(X) is an estimator for the probability of participation in the trial [S=1| X] and r(X) is an estimator for the variance ratio r(X) ≡[Y^0| X,S=1]/[Y^0| X, S=0] comparing the trial population and the population underlying the external data. 
The estimator of the variance ratio r(X) controls how much information to “borrow” from the external control data; this becomes evident by setting r(X)=0 in which case ζ_0 = ϕ_0. Under conditions 1-5, the estimator ζ_0 is consistent if either the models for estimating η(X) and e_0(X) are correctly specified, or if the model for estimating g_0(X) is correctly specified. Furthermore, <cit.>, if all working models are correctly specified, including for r(X), ζ_0 is the efficient estimator for the control outcome mean when trial and external control data are available. However, if conditions 4 and 5 do not hold (i.e. the external controls are not exchangeable) or the model for estimating η(X) is misspecified, then ζ_0 does not have these desirable properties whereas the trial-based estimator ϕ_0 remains consistent and is the most efficient estimator that ignores the external control data. §.§ Challenges in selecting an estimator We now have two options: (1) use an estimator based solely on trial data, such as ϕ_a, or (2) pool trial and external control data under exchangeability conditions using ϕ_1-ζ_0. If exchangeability holds, and all statistical models are correctly specified and estimated at a fast enough rate, pooling will result in more efficient estimation compared to estimation using only the trial data. However, if exchangeability does not hold, pooling may introduce significant bias. Trial-based estimators, on the other hand, remain consistent regardless of exchangeability because they only rely on conditions 1-3, which are supported by the trial design. Ideally, we want to choose the most efficient and consistent estimator; but how do we do so when we are uncertain if the external controls are compatible with the trial population? A commonly adopted answer to this questions is a test-then-pool approach – an instance of pre-test estimation. This approach tests H_0 in (<ref>) and then chooses either the trial-only estimator or the pooled estimator based on the test result. The test-then-pool approach can result in significant bias when the test has low statistical power, for example, due to the trial sample size being limited. This behavior of the approach is unfortunate, because the test will have low-power when the trial sample size is small, which is precisely when using external control data would be most attractive. Another solution for choosing whether to use external control data is dynamic borrowing. This approach selects in a data-driven manner a subset of the external control data which is compatible with the exchangeability conditions <cit.>. While dynamic borrowing aims to reduce bias when exchangeability does not fully hold, especially compared to naive test-then-pool approaches, we argue that it still takes more risks than necessary: mistakenly pooling the trial data with incompatible external control data can result in bias that was avoidable by limiting the analysis to the trial data. In the next section, we develop an estimator uses external control data but remains consistent even when the additional exchangeability condition does not hold. § A NOVEL ESTIMATION APPROACH USING EXTERNAL CONTROLS Our aim is to develop a consistent estimator that leverages the external control data to improve efficiency in estimating the average treatment effect in the population underlying the trial, but does not rely on exchangeability assumptions between data sources for consistency. 
In outline, our strategy is as follows: First, we examine a class of randomization-aware estimators for τ that are consistent when conditions 1 through 3 hold; the efficient trial-only estimator is a member of this class. Next, we identify a member within the class of estimators that has improved efficiency when conditions 4 and 5 hold. Last, we introduce a combined estimator based on the efficient trial-only estimator and the optimized randomization-aware estimator, and show that the combined estimator only requires conditions 1 through 3 for consistency, and is no less efficient than the most efficient of the efficient trial-only estimator and the optimized randomization-aware estimator. §.§ A class of consistent estimators We consider the following class of estimators indexed by two functions π and h, ψ_a (π, h) = (∑_i=1^n S_i )^-1∑_i=1^n S_i[1(A_i=a)/π(X_i){Y_i- h(X_i)} + h(X_i) ]. Different choices of {π(X), h(X)} correspond to different estimators ψ_a (π, h), each with different properties. For instance, when {π(X), h(X)} is chosen to be {e_a(X), 0}, the resulting estimator ψ_a (e_a, 0) is the inverse probability weighting estimator. Similarly, when {π(X), h(X)} is chosen to be {e_a(X), g_a(X)}, the resulting estimator ψ_a (e_a, g_a) is the augmented inverse probability weighting estimator ϕ_a. We note that the specification of the estimator depends on functions of the covariates X that only require trial information (i.e., functions that are conditional on S=1). In randomized trials, the propensity score e_a(X) is known, and it is reasonable to assume that e_a(X) can be estimated with a correctly specified model and convergence rate equal to √(n). We prove in Supplementary Material <ref> that when π(X) is e_a(X) or e_a(X) (being estimated at a parametric rate), the estimator ψ_a (π, h) is robust (i.e., consistent regardless of the specification of h(X)). In fact, the trial-only estimator ϕ_a is a special case of ψ_a (π, h); the robustness of the trial-only estimator ϕ_a is a manifestation of the robustness of the class ψ_a (π, h) when π is the true or correctly (and with fast-enough rate) estimated propensity score in the trial. Henceforth, we only consider cases where π(X) is chosen to be either e_a(X) or e_a(X), ensuring that estimators under consideration are consistent. We refer to this class of estimators as randomization-aware because its members exploit randomization and knowledge of the probability of treatment to ensure consistency. Instances of specific randomization-aware estimators have appeared in previous work on using external data sources (e.g., <cit.>). Note that even though the probability of treatment is known by design in our setting, it is often preferable to estimate it to improve efficiency (see, e.g., <cit.>). In trial analyses, it is reasonable to do so using a parametric model (e.g., a logistic regression model) that can be estimated at √(n)-rate of convergence. Though we use the true probability of treatment or estimate it using only the trial data, we attempt to “learn” h(X) using both the trial and external control data. Throughout this paper, we assume that h(X) satisfies certain regularity conditions; namely, that it is continuous, differentiable, smooth, and has finite expectation and variance at all points for X=x. 
Informally, the purpose of h(X) in our estimators is to capture the relationship between the outcome and baseline covariates; flexible regression models such as splines and kernel smoothing methods can approximate the relationship and satisfy the regularity conditions. Consider the choice {π(X), h(X)}={e_a(X), h_fix(X)}, that is, the case when the estimator ψ_a(e_a, h_fix) depends on the true propensity score and some fixed function h_fix(X). We note the following properties of ψ_a(e_a, h_fix) (see Supplementary Material <ref> for the proof). Under conditions 1 through 3, and assuming Y has finite expectation and variance, ψ_a(e_a, h_fix) is unbiased, asymptotically normal, and has asymptotic variance [S=1]^-1[𝒞_a+f_a(h_fix(X), Y)], where 𝒞_a=Var[Y^a|S=1]+[S=0]^2[Y^a|S=1], and f_a(h_fix(X), Y)=[[A=a|S=1]1-e_a(X)/e_a(X){Y-h_fix(X)}^2 | S=1, A=a]. In other words, ψ_a(e_a, h_fix) is asymptotically normal with asymptotic variance whose only component that involves h_fix(X) is f_a(h_fix(X), Y). We can interpret f_a(h_fix(X), Y) as a weighted mean-squared error of h_fix(X) in group {A=a} in the trial population. The treatment-reweighing penalizes errors for a given X more as the probability of being assigned to treatment {A=a}, that is e_a(X), becomes smaller. We will exploit Lemma <ref> to develop a procedure for choosing h(X) to improve the efficiency of our estimators. §.§ An optimized randomization-aware estimator Consider the class of randomization-aware estimators ψ_a(e_a, h). The results in the previous sub-section suggest that we can “learn” an h(X) function that results in a more efficient randomization-aware estimator. Specifically, we propose to use the trial and external control data together to find h(X) that minimizes the asymptotic variance of ψ_0(e_0, h_fix), which is the same as minimizing the term f_0(h_fix(X), Y) in (<ref>). Our approach is similar to the approach of <cit.> for the problem of estimating an expectation of an outcome with missing data in a single data source setting. In the results presented above, however, f_0(h_fix(X), Y) is writen as an expectation conditional on S=1, and it is not obvious how external control data could be incorporated in the optimization. To make progress, we use conditions 4 and 5 to express f_0(h_fix(X), Y) as an expectation over the trial and external control data (see Supplementary Material <ref> for the proof). Under conditions 1 through 5, we have that f_0(h_fix(X), Y) = [[S=1| X, A=0]/[S=1| A=0][A=0|S=1]e_1(X)/e_0^2(X){Y-h_fix(X)}^2| A=0 ]. This lemma provides a way for choosing h(X) using external control data such that the asymptotic variance of ψ_0(e_a, h_fix) is minimized. Removing all the normalizing terms, we define h^*(X) as the minimizer of R(h)=[η_0(X) e_1(X)/e_0^2(X) {Y-h(X)}^2| A=0 ] within a model class h∈ℋ, where η_0(X) = [S=1| X, A=0] is the probability of participation in the trial, conditional on covariates, among individuals receiving treatment A = 0. We can estimate h^*(X) by finding the minimizer of the sample analog of R(h) with the estimated e_a(X) and η_0(X). We let ℋ be a class of parametric models. Therefore, h(X) can be denoted by h(X; γ) and h^*(X) can be denoted by h(X; γ^*). Thus, estimating h^*(X) is the same as estimating γ^*, which can be obtained by γ^*=_γ∑_i=1^nR(O_i; γ), where R(O_i; γ)= (1-A_i) η_0(X_i)e_1(X_i)/e_0^2(X_i) {Y_i- h(X_i; γ)}^2. 
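In the special case where h(X; γ) is linear in γ, minimizing the empirical analog of R(h) is a weighted least-squares problem over all control observations, from the trial and the external data alike. The sketch below (Python/NumPy with scikit-learn; the logistic working models, the linear form of h, and all function names are our own illustrative choices) estimates e_1 and η_0, obtains h^* by weighted regression, and plugs the result into the randomization-aware estimator ψ_0(e_0, h^*); it is a simple plug-in version rather than the joint M-estimation described in the next subsection.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_h_star(X, S, A, Y):
    """Estimate h*(x) = gamma_0 + gamma_1 x by weighted least squares over all
    controls (A=0), with weights eta0_hat(x) * e1_hat(x) / e0_hat(x)**2."""
    X2d = X.reshape(-1, 1)
    # e_1(x): treatment model fit in the trial only
    e1_model = LogisticRegression().fit(X2d[S == 1], A[S == 1])
    e1 = e1_model.predict_proba(X2d)[:, 1]
    e0 = 1.0 - e1
    # eta_0(x): trial-participation model fit among all controls
    eta0_model = LogisticRegression().fit(X2d[A == 0], S[A == 0])
    eta0 = eta0_model.predict_proba(X2d)[:, 1]
    # weighted least squares on all controls, trial and external
    ctrl = A == 0
    w = eta0[ctrl] * e1[ctrl] / e0[ctrl] ** 2
    D = np.column_stack([np.ones(ctrl.sum()), X[ctrl]])
    gamma = np.linalg.solve(D.T @ (w[:, None] * D), D.T @ (w * Y[ctrl]))
    h_star = lambda x: np.column_stack([np.ones(len(x)), x]) @ gamma
    e0_fun = lambda x: 1.0 - e1_model.predict_proba(x.reshape(-1, 1))[:, 1]
    return h_star, e0_fun

def psi0_randomization_aware(X, S, A, Y, h_star, e0_fun):
    """Plug-in version of psi_0(e_0, h*): an average over the trial sample."""
    trial = S == 1
    Xt, At, Yt = X[trial], A[trial], Y[trial]
    e0t, ht = e0_fun(Xt), h_star(Xt)
    return np.mean((At == 0) / e0t * (Yt - ht) + ht)
```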
§.§ Implementation using M-estimation methods We propose to use M-estimation <cit.> to implement the randomization-aware estimator ψ_0(e_0, h^*) using h^*(X) as the estimated optimized h(X). For a set of smooth finite-dimensional target parameters θ, its M-estimator θ is the solution to a stack of equation of the form ∑_i=1^nm(O_i; θ)=0, where θ is the set of parameters with arbitrary values and m(O_i; θ) is the stack of estimating functions. We consider parametric models for e_1(X), η_0(X) and h^*(X) denoted by e_1(X; α), η_0(X; β) and h(X;γ^*), respectively, and furthermore, we denote q=[S=1]. The set of target parameters is θ={q, α, β, γ^*, ψ_0} and the stack of estimating functions is m(O_i;θ)=[ m_q(O_i; q); m_e_1(O_i; α); m_η_0(O_i; β); m_h^*(O_i; α, β, γ); m_ψ_0(O_i; q, α, γ, ψ_0); ]. To estimate q, we define m_q(O_i; q)=S_i - q. When the propensity score e_1(X) and the participant model η_0(X) are estimated by logistic regression, m_e_1(O_i; α) and m_η_0(O_i; β) are the corresponding logistic regression score equations. For estimating γ^* and ψ_0, we define m_h^*(O_i; α, β, γ)=∂/∂γR(O_i; α, β, γ)= (1-A_i) η_0(X_i; β) e_1(X_i; α)/{1-e_1(X_i; α)}^2{Y_i- h(X_i; γ)}∂/∂γ h(X_i; γ), m_ψ_0(O_i; q, α, γ, ψ_0)=S_iq[1(A_i=0)/1-e_1(X_i; α){Y_i- h(X_i; γ)} + h(X_i; γ) - ψ_0]. We obtain a consistent estimator of ψ_0(e_0, h^*) by jointly solving the stack of estimating functions, that is, letting θ be the solution to ∑_i=1^n m(O_i; θ) = 0. The following theorem summarizes the properties of ψ_0(e_0, h^*) obtained from solving this optimization task (see Supplementary Material <ref> for the proof). [Properties of the optimized randomization-aware estimator] Under conditions 1 through 3, the M-estimator ψ_0(e_0, h^*) is consistent and asymptotically normal. In addition, if conditions 4 and 5 hold and the model for estimating η_0(X) can be correctly specified, ψ_0(e_0, h^*) minimizes the asymptotic variance of ψ_0(e_0, h), over ℋ. Because h^*(X) is not estimated using only trial data (S=1), Theorem <ref> provides a practical solution for estimating ψ_0 when incorporating external control data. Furthermore, we can obtain an estimator for the average treatment effect in the trial population τ as τ(h^*)=ψ_1(e_1, g_1)-ψ_0(e_0, h^*) . Because both of ψ_1(e_1, g_1) and ψ_0(e_0, h^*) are consistent estimators, τ(h^*) is also a consistent estimator. To construct asymptotically normal estimators, we again propose to obtain {ψ_1(e_1, g_1), ψ_0(e_0, h^*)} via joint M-estimation (see details in Supplementary Material <ref>), so that ψ_1(e_1, g_1) and ψ_0(e_0, h^*) are asymptotically bivariate normal <cit.>. Because a linear combination of two bivariate normally distributed random variables is normally distributed, τ(h^*) is asymptotically normal. As a final remark, the consistency of τ(h^*) does not rely on conditions 4 and 5; however, the efficiency improvement that this estimator hopes to offer depends on these conditions. If conditions 4 and 5 do not hold, or if η_0(X) is misspecified, then τ(h^*) may be less efficient than the efficient trial-only estimator, τ(g). To further relax the dependence of the estimator's efficiency on these two conditions, in the next section we develop a new estimator that is asymptotically guaranteed to not perform worse than the efficient trial-only estimator. 
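Continuing the plug-in sketch from the previous subsection (again Python/NumPy, with our own simple working models; the sandwich-variance machinery of the M-estimation approach is not reproduced here), the treatment effect estimator τ(h^*) can be assembled as the difference between the augmented inverse probability weighting estimator for the treated arm and the optimized randomization-aware estimator for the control arm.

```python
import numpy as np

def psi1_aipw(X, S, A, Y, e1_fun, g1_fun):
    """AIPW estimator of psi_1 using only trial data, given fitted e_1 and g_1."""
    trial = S == 1
    Xt, At, Yt = X[trial], A[trial], Y[trial]
    e1t, g1t = e1_fun(Xt), g1_fun(Xt)
    return np.mean((At == 1) / e1t * (Yt - g1t) + g1t)

def tau_h_star(X, S, A, Y, e1_fun, g1_fun, e0_fun, h_star):
    """Plug-in version of tau(h*) = psi_1(e_1, g_1) - psi_0(e_0, h*)."""
    psi1 = psi1_aipw(X, S, A, Y, e1_fun, g1_fun)
    # psi0_randomization_aware is defined in the sketch of the previous subsection
    psi0 = psi0_randomization_aware(X, S, A, Y, h_star, e0_fun)
    return psi1 - psi0
```

In this quick sketch, standard errors and the covariance between τ(g) and τ(h^*) would have to come from resampling; the joint M-estimation described above yields them directly from the empirical sandwich.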
§.§ Combined estimator We have two different consistent estimators provided conditions 1 through 3 hold: the efficient trial-only estimator τ(g)=ϕ_1-ϕ_0; and the optimized randomization-aware estimator τ(h^*)=ψ_1(e_1, g_1) - ψ_0(e_0, h^*) that incorporates external control data. When conditions 4 and 5 also hold, and necessary statistical models are correctly specified, τ(h^*) is expected to be more efficient than τ(g) because it uses more observations to model the outcome conditional on covariates, under the control treatment. However, when conditions 4 and 5 do not hold, the relative efficiency of τ(h^*) and τ(g) depends on many factors, including the sample sizes of the trial and external controls and the extent to which these conditions do not hold. To avoid choosing between τ(h^*) and τ(g), we consider combining these two estimators, in a way that may provide further efficiency gains <cit.>. Specifically, we propose the combined estimator τ(λ)=λτ(h^*)+(1-λ)τ(g), ∀λ∈ℝ. If λ=0, then τ(λ) reduces to the efficient trial-only estimator τ(g); in all other cases, τ(λ) incorporates information from the external control data. We note some important properties of τ(λ) that hold for all λ∈ℝ (see Supplementary Material <ref> for the proof). Under conditions 1-3 when {τ(g), τ(h^*)} is obtained via joint M-estimation, τ(λ) is consistent and asymptotically normal for all λ∈ℝ. With our combined estimator being consistent, it is natural to select λ such that its variance σ^2_λ=[τ(λ)] is minimized, which equates to solving λ^*=argmin_λ σ^2_λ. Denote the sampling variance of the estimators τ(g) and τ(h^*) by σ^2_g and σ^2_h^*, respectively, and the sampling covariance by σ_g, h^*. By writing σ^2_λ=λ^2σ^2_h^*+(1-λ)^2σ^2_g+2λ(1-λ)σ_g,h^*, we see that σ^2_λ is a quadratic function of λ and we obtain the following closed-form expressions for the optimal λ^* and the corresponding variance σ^2_λ^*, which is the efficiency bound of {τ(λ), ∀λ∈ℝ}, λ^*=(σ^2_g - σ_g, h^*)/(σ^2_g + σ^2_h^* - 2σ_g, h^*), and σ^2_λ^*=(σ^2_gσ^2_h^* - σ^2_g, h^*)/(σ^2_g + σ^2_h^* - 2σ_g, h^*). The optimal λ^* is unknown because we do not know the variances of τ(g) and τ(h^*). Instead, we can use plug-in estimators for the variances to obtain a sample analogue of λ^*. We substitute (σ^2_g, σ^2_h^*, σ_g, h^*) with the empirical sandwich estimators (σ^2_g, σ^2_h^*,σ_g, h^*) obtained via M-estimation. Then we can estimate λ^* and σ^2_λ^* by replacing the (σ^2_g, σ^2_h^*, σ_g, h^*) with their estimates in the above formula. Because the empirical sandwich estimators are consistent estimators <cit.>, λ^* and σ^2_λ^* are consistent estimators for λ^* and σ^2_λ^*. But more importantly, we can show that the linear combination τ(λ^*) using the plug-in estimator λ^* converges asymptotically to the same distribution as τ(λ^*) with oracle knowledge about the optimal λ^* (see Supplementary Material <ref> for the proof). [Properties of the combined estimator] Under conditions 1 through 3, and provided the asymptotic variance-covariance of τ(g) and τ(h^*) can be consistently estimated, we have that n^1/2(τ(λ^*) - τ) d→𝒩(0, [τ(λ^*)]) where [τ(λ^*)] denotes the asymptotic variance of τ(λ^*) and d→ denotes convergence in distribution. It follows from Theorem <ref> that τ(λ^*) achieves the efficiency bound of τ(λ^*). Further, the estimator τ(λ^*) has an obvious but important property, as stated in the following corollary. Under the conditions of Theorem <ref>, we have that σ^2_λ^*⩽min{σ^2_g, σ^2_h^*} where equality holds if and only if λ^*=1 or λ^*=0.
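A small sketch of the plug-in combination step is given below, taking the two point estimates and their estimated variance-covariance (for example, from the empirical sandwich estimator) as inputs; the function and variable names are illustrative assumptions rather than the paper's implementation.

import numpy as np

def combine_estimators(tau_g, tau_h, var_g, var_h, cov_gh):
    """Optimal linear combination of the trial-only and randomization-aware estimators.

    lambda* = (var_g - cov_gh) / (var_g + var_h - 2*cov_gh), and the combined
    variance is (var_g*var_h - cov_gh**2) / (var_g + var_h - 2*cov_gh).
    """
    denom = var_g + var_h - 2.0 * cov_gh
    lam = (var_g - cov_gh) / denom
    tau_comb = lam * tau_h + (1.0 - lam) * tau_g   # lambda weights tau(h*)
    var_comb = (var_g * var_h - cov_gh ** 2) / denom
    return tau_comb, lam, var_comb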
We analyze the variance of τ(λ^*) as a quadratic function of λ^* to understand when this estimator's variance is less than the variances of τ(g) and τ(h^*), σ^2_λ^* =   (σ^2_g+σ^2_h^*-2σ_g,h^*) λ^*2+(2σ_g,h^*-2σ^2_g)λ^*+σ^2_g. We examine the relationships among λ^*, comparisons of {σ^2_g,σ^2_h^*, σ_g, h^*}, and comparisons of {σ^2_λ^*, min{σ^2_g,σ^2_h^*}} inTable <ref>. When σ_g, h^*≠σ^2_h^* and σ_g, h^*≠σ^2_g, the combined estimator is more efficient than both τ(g) and τ(h^*). Meanwhile, when σ_g, h^* = σ^2_h^* or σ_g, h^* = σ^2_g, the efficiency of the combined estimator matches the best of τ(g) and τ(h^*). Furthermore, when λ^*<0 the estimator “compensates” for τ(g). This could be because the conditions 4 and 5 are severely violated or η_0(X) is misspecified, resulting in σ^2_g < σ_g, h^* < σ^2_h^*. Conversely, when λ^*>1, the combined estimator still “compensates” for τ(h^*) but in the other direction. This could occur when conditions 4-5 hold, the model for estimating η_0(X) is correctly specified, and the number of external controls is large, resulting in σ^2_h^* < σ_g, h^* < σ^2_g. In sum, whether additional conditions hold or not, τ(λ^*) is guaranteed to be consistent. Furthermore, its efficiency is no less than that of its component estimators; in particular, it is no less efficient than the efficient trial-only estimator. § SIMULATION STUDIES We compared the finite-sample performance of our proposed estimators versus previously proposed methods for integrating external control data in clinical trials in simulations. First, we considered a best-case scenario favorable to all methods, where conditions 1-5 hold, all parametric models are correctly specified, and there is no distribution shift in baseline characteristics. Second, we considered a more adversarial scenario where conditions 4-5 do not hold, parametric models are misspecified, and there is a distribution shift (f(X| S=1) ≠ f(X| S=0)). Detailed simulation methods and results are presented in Supplementary Material <ref>. In both scenarios, our estimators had higher efficiency than the most efficient-trial based estimator while remaining nearly unbiased. In contrast, alternative approaches had significant bias in the adversarial scenario. § AUGMENTING A TRIAL OF TREATMENTS FOR SCHIZOPHRENIA USING EXTERNAL CONTROLS To illustrate the proposed methods, we used data from two independent placebo-controlled, double-blind trials (NCT00668837 and NCT00077714) that compared the effect of paliperidone ER tablets 6mg versus placebo on schizophrenia symptoms. We designated one trial <cit.> as the index trial (i.e., the trial that we aimed to augment using external data, denoted by S=1), and used data from the placebo group in the second trial as external controls (denoted by S=0; <cit.>. Positive and Negative Syndrome Scale (PANSS) total scores are used for rating the severity of schizophrenia symptoms. The outcome of interest in our analyses was the PANSS scores of patients at week 6 after randomization. We included patients assigned to either paliperidone ER or placebo and for whom PANSS scores were available at baseline and at week 6. From the index trial, we included 113 patients who were assigned to paliperidone ER tablets 6mg, and 110 patients who were assigned to the placebo; the sample size of the external control data was 123. We used patient gender, age, race, and baseline PANSS score as covariates. Their summary statistics are given in Supplementary Material <ref>. 
We implemented inverse probability weighting (IPW) and AIPW (both based on data from the index trials only and using the estimated propensity score), naive pooling using (<ref>), and the proposed optimized randomization-aware and combined estimators (Table <ref>). Compared to IPW and AIPW estimators, the naive pooling estimator produced higher effect estimates, indicating that conditions 4 and 5 may not hold. The optimized randomization-aware and combined estimators produced similar point estimates between them and with the trial-only IPW and AIPW estimators, indicating that these estimators produce reasonable results even when conditions 4 and 5 do not hold. The IPW estimator had much larger standard errors than the AIPW, indicating the improvements from covariate adjustment in the trial data. In this example the optimized randomization-aware and the combined estimator had similar standard errors as AIPW highlighting the robustness properties of our estimators, even when conditions 4 and 5 do not hold. We also repeated the analyses by randomly sampling with replacement a fraction (∼75%, ∼50%, ∼25%) of the available control observations (see Supplementary Materials <ref>). The results showed that the randomization-aware and combined estimators can generate reasonable estimates with smaller standard errors (relative to to AIPW) as the sample size of the control group in the trial becomes smaller. § DISCUSSION We proposed a novel approach for using external control data to improve inference in trials. Similar to earlier work, we show it possible to obtain efficiency gains under exchangeability conditions relating the trial and external control populations. However, our optimized randomization-aware estimator explicitly avoids relying on these additional conditions for its consistency, and only uses them in an attempt to improve efficiency. Furthermore, by combining the efficient trial-only estimator with our optimized randomization-aware estimator we provide a new estimator that is no less efficient than the most efficient of these two component estimators. The combined estimator may lead to further efficiency gains, as demonstrated in our simulation studies, but its main attraction is protection from performing worse than the efficient trial-only estimator in large samples. Future work could examine how to integrate our estimation strategy into the design of trials, for example, by allowing unequal allocation between the experimental treatment and control groups. Throughout, we used parametric M-estimation methods to jointly estimate all nuisance models and the trial-only and optimized randomization-aware estimators. This approach makes the logic of combining information transparent and ensures the joint normality of the trial-only and the randomization-aware estimator – a key result supporting Theorem <ref>. The majority of trial analyses use simple parametric models; therefore, our approach can be viewed as a relatively natural next step when trial data are to be combined with external control data. That said, extensions of the methods to use data-adaptive (e.g., machine learning) modeling strategies may further improve performance. § ACKNOWLEDGMENTS This work was supported in part by National Library of Medicine (NLM) award 5R01LM013616 and Patient-Centered Outcomes Research Institute (PCORI) award ME-2021C2-22365. 
The content is solely the responsibility of the authors and does not necessarily represent the official views of NLM, PCORI, PCORI's Board of Governors or PCORI's Methodology Committee. To illustrate the methods, our applied analyses used data from the Yale University Open Data Access (YODA) registry (<https://yoda.yale.edu/>) under project number 2022-5062. YODA has an agreement with Jassen Research & Development, L.L.C.. The interpretation and reporting of research using this data are solely the responsibility of the authors and do not necessarily represent the official views of the Yale University Open Data Access Project or Jassen Research & Development, L.L.C.. § SUPPLEMENTARY MATERIAL & DATA AVAILABILITY STATEMENT Additional supporting information can be found online in the Supplementary Material section. Code to reproduce our simulations and the data analyses is provided on GitHub: <https://github.com/RickardKarl/IntegratingExternalControls>. Data used in this paper to illustrate our findings can be obtained from YODA <https://yoda.yale.edu/>, subject to approval. § REFERENCES Please find the references at the end of this document. Supplementary Materials for “Robust integration of external control data in randomized trials” equationsection tablesection figuresection § PROOFS §.§ Derivation of testable implication in (<ref>) We want to prove that [Y| X, A=0,S=s]=[Y| X, A=0] under conditions 1-5. First, we have from condition 4 that [Y^0| X,S=1]=[Y^0| X,S=0] . Using conditions 1-3 we can re-write the left-hand as [Y^0| X,S=1]=[Y| X,A=0,S=1]. Meanwhile, the right-hand side can be written as [Y^0| X,S=0] =[Y^0| X,A=0,S=0]=[Y| X,A=0,S=0] where first equality follows from having no treatment variation in {S=0} (condition 5) and the second one from consistency (condition 1). This concludes the derivation of the testable implication presented in the main paper. §.§ Proof of the robustness property of a special case of ψ_a(π, h) We first observe that π∈{e_a(X), e_a(X)} converges in probability to the following quantity. ψ_a (π, h) =(∑_i=1^n S_i )^-1∑_i=1^n S_i[1(A_i=a)/π(X_i){Y_i- h(X_i)} + h(X_i) ] p→1[S=1][S1(A=a)π(X)Y]+1[S=1][Sh(X){1-1(A=a)π(X)}]. When π∈{e_a(X), e_a(X)} and it is reasonable to assume e_a(X)p→e_a(X), the first term equals [Y^a|S=1] and the second term is 0. Therefore, ψ_a (π, h) is a consistent estimator for ψ_a regardless of the specification of h(X) when π∈{e_a(X), e_a(X)} in the context of trials. §.§ Proof of Lemma <ref> We first prove the estimator is an unbiased estimator under the conditions. [ψ_a(e_a, h_fix)]= [1/[S=1]1/n∑_i=1^nS_i{1(A_i=a)/e_a(X_i){Y_i-h_fix(X_i)} + h_fix(X_i)}] = 1/[S=1]1/n∑_i=1^n[{1(A_i=a)/e_a(X_i)Y_i S_i}+{1(A_i=a)-e_a(X_i)/e_a(X_i)h_fix(X_i)S_i }] = 1/[S=1]({1(A_i=a)/e_a(X_i)Y_i S_i}+[{1(A_i=a)-e_a(X_i)/e_a(X_i)h_fix(X_i)| X_i, S_i=1 }]) = 1/[S=1]{1(A_i=a)/e_a(X_i)Y_i S_i} = ψ_a. Next, we prove its asymptotically normality. Observing the form of ψ_a(e_a, h_fix), we can see that it can be estimated by finding the ψ that solves the following estimating equation 1/n∑_i=1^n(1/1/n∑_i=1^nS_iS_i[1(A_i=a)/ e_a(X_i){Y_i- h_fix(X_i)} + h_fix(X_i) -ψ])=0. Therefore, ψ_a(e_a, h_fix) can be viewed as an M-estimator <cit.>, so that if Y has finite mean and variance, ψ_a(e_a, h_fix) is asymptotically normal <cit.>. Now we derive its asymptotic variance. Denote [S=1] by q, the asymptotic variance of ψ_a(e_a, h_fix) is Var[S/q{1(A=a)/e_a(X)Y-1(A=a)-e_a(X)/e_a(X)h_fix(X)}_T(X, Y)]. 
Using the formula Var[X]=[X^2]-^2[X], it equals 1-q/q^2[T(X, Y)| S=1]_(1) +1/qVar[T(X, Y)| S=1]_(2). To derive term (1), we observe that [T(X, Y)|X, Y^a, S=1] = {1(A=a)/e_a(X)Y-1(A=a)-e_a(X)/e_a(X)h(X)| X, Y^a, S=1} = {1(A=a)/e_a(X)Y| X, Y^a, S=1} = {1(A=a)/e_a(X)Y^a| X, Y^a, S=1} = Y^a, where the second to last equation follows because Y=1(A=a)Y^a+∑_a'≠ a1(A=a')Y^a'. By the law of total expectation, we have [T(X, Y)| S=1] =[{T(X, Y)| X, Y^a, S=1}| S=1] =[Y^a|S=1]. Therefore, (1)=(1-q)/q·^2[Y^a|S=1]. We apply the law of total variance on term (2) and have (2)=1/qVar{[T(X,Y)|X, Y^a, S=1]|S=1}_(2a)+ 1/q{Var[T(X,Y)|X, Y^a, S=1]|S=1}_(2b). From the derivation of (1), we have (2a)=1qVar[Y^a|S=1]. To derive term (2b), we first focus on Var[T(X,Y)|X, Y^a, S=1]. Using the formula Var[X|Y]=[(X-[X|Y])^2|Y], we have Var[T(X,Y)|X, Y^a, S=1] = [{T(X, Y)-[T(X, Y)|X, Y^a, S=1]}^2|X, Y^a, S=1] = [ {1(A=a)/e_a(X)Y^a-1(A=a)-e_a(X)/e_a(X)h_fix(X)-Y^a}^2 | X, Y^a, S=1 ] = [ {1(A=a)-e_a(X)/e_a(X){Y^a-h_fix(X)}}^2 | X, Y^a, S=1 ] = 1-e_a(X)/e_a(X){Y^a-h_fix(X)}^2, where the second equation follows because, again, Y=1(A=a)Y^a+∑_a^'≠ a1(A=a^')Y^a^' and the derivation for (1). Then, we have (2b)= 1/q[1-e_a(X)/e_a(X){Y^a-h_fix(X)}^2| S=1] = 1/q[[1-e_a(X)/e_a(X){Y^a-h_fix(X)}^2| X, S=1, A=a]| S=1] = 1/q[[1-e_a(X)/e_a(X){Y-h_fix(X)}^2| X, S=1, A=a]| S=1] = 1/q[[1(A=a)1-e_a(X)/e_a^2(X){Y-h_fix(X)}^2| X, S=1]| S=1] = 1/q[1(A=a)1-e_a(X)/e_a^2(X){Y-h_fix(X)}^2| S=1] = 1/q[[A=a|S=1]1-e_a(X)/e_a^2(X){Y-h_fix(X)}^2| S=1, A=a]. Combining (1), (2a), and (2b), we have the asymptotic variance of ψ_a(e_a, h_fix) is 1/q[[A=a|S=1]1-e_a(X)/e_a^2(X){Y-h_fix(X)}^2| S=1, A=a]+ 1qVar[Y^a|S=1]+(1-q)/q·^2[Y^a|S=1]. §.§ Proof of Lemma <ref> Before we present the proof of Lemma <ref>, we need to prove the following auxilliary lemma. Under conditions 1-5 we have that Y S | ( X, A=0 ). First, due to the condition of no treatment variation in {S=0}, we have Y^a A| (X, S=0) because A is a constant when S=0 (alternatively, in case there is treatment variation in {S=0}, we can replace condition 5 by directly invoking Y^a A | (X,S=0) from condition 5'). Thus, combining this with that Y^a A| (X, S=1) (condition 2), we have Y^a A| (X, S). Combining this with condition 4, Y^a S | X, we have Y^a (A,S)| X. This condition implies Y^a S| (X, A), which follows from the weak union property of conditional independence. Finally, using condition 1 (consistency), we have Y S| (X, A=0). Here follow the proof of Lemma <ref>. We will express (<ref>) as a quantity that is not conditional on {S=1}. We denote l(X,Y)=[A=0|S=1]e_1(X)e_0^2(X){Y-h(X)}^2 and have f_a(h(X), Y) =[ l(X,Y)| S=1, A=0 ] = [1(S=1)/[S=1| A=0] l(X,Y)| A=0] = [ [1(S=1)/[S=1| A=0] l(X,Y) | X, A=0]| A=0] = [ [1(S=1)/[S=1| A=0]| X, A=0][ l(X,Y) | X, A=0]| A=0 ] = [ [S=1| X, A=0]/[S=1| A=0][ l(X,Y) | X, A=0]| A=0 ] =[[[S=1| X, A=0]/[S=1| A=0] l(X,Y) | X, A=0]| A=0 ] = [[S=1| X, A=0]/[S=1| A=0] l(X,Y)| A=0 ] where the fourth equation follows from that Y S | (X, A=0) according to Lemma <ref> and the fifth equation follows from that [1(S=1)| X,A=0]=[S=1| X, A=0]. §.§ Proof of Theorem <ref> It is straightforward to verify that the estimating equations in (<ref>) satisfy regularity conditions (listed in Appendix <ref>), and thus θ is asymptotically multivariate normal <cit.>. The consistency and asymptotic normality of ψ_0(e_0, h^*) are direct results from the M-estimation theories <cit.> and Lemmas <ref> and <ref>. 
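As a numerical sanity check on the two results above (unbiasedness for any fixed h_fix, with the variance depending on h_fix only through the weighted mean-squared error term), one can run a small simulation such as the sketch below; the data-generating choices are arbitrary illustrations and are not those used elsewhere in the paper.

import numpy as np

rng = np.random.default_rng(0)

def simulate_psi0(h_fix, n=1000, reps=1000):
    """Monte Carlo check: psi_0(e_0, h_fix) is unbiased for E[Y^0 | S=1] for any fixed
    h_fix when the true propensity score is used, while its variance changes with h_fix."""
    estimates = []
    for _ in range(reps):
        X = rng.normal(size=n)
        A = rng.binomial(1, 0.5, size=n)             # randomized treatment, e_1(X) = 0.5
        Y = 1.0 + 2.0 * X + A + rng.normal(size=n)   # so E[Y^0 | X] = 1 + 2X and E[Y^0] = 1
        h = h_fix(X)
        est = np.mean((A == 0) / 0.5 * (Y - h) + h)  # all units are trial units (S=1) here
        estimates.append(est)
    return np.mean(estimates), np.var(estimates)

# Both choices of h_fix give mean approximately 1, but different variances.
print(simulate_psi0(lambda x: np.zeros_like(x)))   # crude fixed function
print(simulate_psi0(lambda x: 1.0 + 2.0 * x))      # oracle E[Y^0 | X]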
§.§ Regularity conditions for M-estimation Denote the stack of estimating equations, and their derivative by m(O_i; θ) and m'(O_i; θ)=∂/∂θm(O_i; θ), respectively. We list the regularity conditions for the M-estimators to be consistent and asymptotically normal. m(O_i; θ) converges almost surely to zero as n goes to infinity. There is a neighborhood of θ on which with probability one -m(O_i; θ) is continuously differentiable; and -m'(O_i; θ) converges uniformly to a non-stochastic limit which is non-singular at θ. Under these two conditions, θ is a strongly consistent estimator of θ (Theorem 2 in <cit.>). √(n)m(O_i; θ) converges in distribution to 𝒩(0, B(θ)), where B(θ)=[m(O_i; θ)m(O_i; θ)^⊤]. Under these three conditions, θ is asymptotically normal (Theorem 4 in <cit.>). §.§ Obtain {ψ_0(e_0, h^*), ψ_1(e_1, g_1)} via M-estimation We already described how to obtain ψ_0(e_0, h^*) by M-estimation. Here, we will describe how to obtain {ψ_0(e_0, h^*), ψ_1(e_1, g_1)} by M-estimation. Compared to the estimation for ψ_0, estimating {ψ_0, ψ_1} jointly requires two additional estimating equations, for estimating the parameters in g_1(X), and for obtaining ψ_1(e_1, g_1), respectively. Denote g_1(X) by g_1(X; ζ), and let θ'={ q, α, β, γ, ψ_0, ζ, ψ_1} be a vector of smooth finite-dimensional target parameters. We propose to estimate θ' by finding the θ' that solves the following joint estimating equation [ m_q(O_i; q); m_e_1(O_i; α); m_η_0(O_i; β); m_h^*(O_i; α, β, γ); m_ψ_0(O_i; q, α, γ, ψ_0); m_g_1(O_i; ζ); m_ψ_1(O_i; q, α, ζ, ψ_1); ] =0, where the first five estimating equations are the same five estimating equations in (<ref>). The estimating equation for g_1(X; ζ) depends on the outcome. For example, when the outcome Y is binary, m_g_1(O_i; ζ) is the score of the logistic regression model for g_1(X; ζ). To obtain ψ_1(e_1, g_1), according to the construction of the estimator, we define m_ψ_1(O_i; q, α, ζ, ψ_1)=1/qS_i[1(A_i=1)/e_1(X_i, α){Y_i- g_1(X_i, ζ)} + g_1(X_i, ζ) -ψ_1]. It is straightforward to verify that the above estimating equations satisfy regularity conditions (listed in Appendix <ref>). §.§ Proof of Lemma <ref> If τ(g) and τ(h^*) are obtained via joint M-estimation (the details are given in Appendix <ref>) then they are asymptotically bivariate normal <cit.>. Because a linear combination of two bivariate normally distributed random variables is normally distributed, τ(λ^*) is asymptotically normal. As both τ(g) and τ(h^*) are consistent estimators, it also follows that their linear combination is consistent. §.§ Obtain {τ(g), τ(h^*)} via M-estimation We already described how to obtain {ψ_0(e_0, h^*), ψ_1(e_1, g_1)} via M-estimation in Appendix <ref>. Here, we describe how to obtain {τ(g), τ(h^*)} via M-estimation for a given λ^*. Observe that τ(λ^*)= λ^*τ(h^*)+(1-λ^*)τ(g) = λ^*{ψ_1(e_1, g_1)-ψ_0(e_0, h^*)}+(1-λ^*){ψ_1(e_1, g_1)-ψ_0(e_0, g_0)}. Compared to the M-estimation described in Appendix <ref>, obtaining {τ(g), τ(h^*)} via M-estimation requires four additional estimating equations, for estimating the parameters in g_0(X), and for obtaining τ(h^*), ψ_0(e_0, g_0), and τ(g), respectively. Denote g_0(X) by g_0(X; ι), and let θ”={ q, α, β, γ, ψ_0, ζ, ψ_1, τ(h^*), ι, τ(g)} be a vector of smooth finite-dimensional target parameters.
We propose to estimate θ” by finding the θ” that solves the following joint estimating equation [ m_q(O_i; q); m_e_1(O_i; α); m_η_0(O_i; β); m_h^*(O_i; α, β, γ); m_ψ_0(O_i; q, α, γ, ψ_0); m_g_1(O_i; ζ); m_ψ_1(O_i; q, α, ζ, ψ_1); m_τ(h^*)(O_i; ψ_1, ψ_0, τ(h^*)); m_g_0(O_i; ι); m_ψ_0^'(O_i; q, α, ι, ψ_0^'); m_τ(g)(O_i; ψ_1, ψ_0^', τ(g)); ] =0, where the first seven estimating equations are the same six estimating equations in Appendix <ref>. The estimating equation for g_0(X; ι) depends on the outcome. For example, when the outcome Y is binary m_g_0(O_i; ι) is the score of the logistic regression models for g_0(X; ι). To obtain ψ_0' (which is the estimate of ψ_0 using the trial-only estimator), τ(g), and τ(λ^*), we define m_τ(h^*)(O_i; ψ_1, ψ_0, τ(h^*))=ψ_1- ψ_0-τ(h^*). m_ψ_0^'(O_i; q, α, ι, ψ_0^')=1/qS_i[1(A_i=0)/1-e_1(X_i, α){Y_i- g_0(X_i, ι)} + g_0(X_i, ι) -ψ_0^'] m_τ(g)(O_i; ψ_1, ψ_0^', τ(g))=ψ_1- ψ_0^'-τ(g). It is straightforward to verify that all the above estimating functions satisfy regularity conditions (listed in Appendix <ref>). §.§ Proof of Theorem <ref> We first introduce two lemmas needed for the main proof. We denote converging in distribution and probability by d→ and p→, respectively. Let A_n and B_n be two sequences of random variables and a, b be two constants. If A_np→a and B_np→b, then A_n B_n-a B_np→0. By repeated application of Slutsky's theorem, and the equivalence of convergence in distribution to a constant and convergence in probability to the same constant, A_n B_np→ a b and a B_np→ab; and therefore A_n B_n - a B_n -p→ 0. [Theorem 25.4 of <cit.>] Let X_n and Y_n be two sequences of random variables and X be a random variable. If X_nd→ X and X_n-Y_np→0, then Y_nd→ X. Suppose that y'<x<y” and [X=y']=[X=y”]=0. If y'<x-ϵ<x<x+ϵ<y”, then [X_n]-[|X_n-Y_n|⩾ϵ] ⩽[Y_n⩽ x] ⩽[X_n⩽ y”]+[|X_n-Y_n|⩾ϵ]. Since X_nd→ X, letting n→∞ gives [X⩽ y'] ⩽lim inf_n[Y_n⩽ x] ⩽lim sup_n[Y_n⩽ x]⩽[X⩽ y”]. Since [X=y]=0 for all but countably many y, if [X=x]=0, then y' and y” can further be chosen so that [X⩽ y'] and [X⩽ y”] are arbitrary near [X⩽ x]; hence [Y_n⩽ X]→[X⩽ x] (i.e., lim_n→∞[Y_n⩽ X]=[X⩽ x]). The proof above is exactly the same as stated after the Theorem 25.4 of <cit.>. This lemma is also referred to as Continuous Mapping Theorem, II <cit.>, the property of two asymptotically equivalent sequences <cit.>, and the converging together lemma in the Exercises 3.2.13 in <cit.>. Now, we prove the theorem. Because the empirical sandwich estimators are consistent for the variance of M-estimators <cit.>, by Slutsky's theorem, λ^*p→λ^*. Because λ^*p→λ^*, τ(g)p→τ, and τ(h^*)p→τ, by Lemma <ref> and Slutsky's theorem, we have τ(λ^*) - τ(λ^*)p→ 0. Combining that we know τ(λ^*) d→𝒩(τ, V) and τ(λ^*) - τ(λ^*)p→ 0, by Lemma <ref>, it must hold that τ(λ^*)d→𝒩(τ, V). § DISCUSSION ON STUDY DESIGNS Settings with data from multiple sources can often be categorized as either nested trial designs or non-nested trial designs <cit.>. For nested trial designs, the target population is well-defined with the trial population nested inside of it; often, the target population corresponds to a census from which trial-eligible individuals are selected. Those in the census who are not selected to participate in the trial correspond to the external population. Meanwhile, in non-nested trial designs the datasets are obtained separately. In this case, we do not know how the datasets were sampled from the target population. 
Following the framework in <cit.>, the sampling mechanisms in both the nested and non-nested trial design can be formalized by introducing an indicator variable O. The variable O indicates whether an individual from the underlying target population is in the observed data: {O=1} for sampled individuals and {O=0} for non-sampled individuals. Using the diagram in Figure <ref>, we can illustrate how individuals from the underlying target population are first sampled into the actual population before they are divided into two sub-populations; in this case, either the trial population {S=1} or the external population {S=0}. Here, we assume observations are simple random samples from each respective sub-population, meaning that is the sampling probabilities are determined by [O=1| S=1] and [O=1| S=0]. Without any loss of generalization, we shall assume that all individuals in the trial sub-population are observed, i.e. [O=1| S=1]=1. We let [O=1| S=0]=u for some constant 0<u≤ 1, for nested trial designs is u known but for non-nested designs is u unknown. Due to the simple random sampling, the observation indicator O is independent of the other variables conditioned on the sub-population S; that is O (X, A,Y^1,,Y^0) | S. This means [Y^1-Y^0| S=s]=[Y^1-Y^0| S=s, O=1], which implies that the average treatment effect in both sub-populations is still identifiable from the observed individuals. However, we see that [Y^1-Y^0]≠[Y^1-Y^0 | O=1] can happen. The average treatment population on the target population is thus not identifiable unless we have a nested trial design, in which case we can get around this because u is known <cit.>. Still, [Y^1-Y^0 | O=1] can be interpreted as the average treatment effect on a mixture of the trial and external population that excludes all unobserved sub-populations <cit.>. We denote n as the number of observed individuals (S,X,A,Y,O=1) and N as the number of total individuals in the actual population (OS,OX,OA,OY,O). For obvious reasons, N is unknown to us. We assume that ratio n_s/n → q_s>0, for s=0,1, as n→∞. §.§ Identifiability of (<ref>) in non-nested study designs To minimize (<ref>), we need to estimate the study participation model η_0(X)=P[S=1| X, A=0]. While η_0 is identifiable from the observed data in a nested design, this is not necessarily the case in a non-nested design. Whereas one could believe this will cause issues for minimizing (<ref>), we shall however show that this does not matter in the end and that (<ref>) still is identifiable from observations only; that is, even when everything is conditioned on {O=1}. Letting l(X,Y)=[A=0|S=1]e_1(X)e_0^2(X){Y-h_fix(X)}^2, the claim of lemma <ref> is that [l(X,Y) | A=0, S=1] = [[S=1| X, A=0]/[S=1| A=0][A=0|S=1]e_1(X)/e_0^2(X){Y-h_fix(X)}^2| A=0 ] . However, due to simple random sampling of observations, we have [l(X,Y) | A=0, S=1]=[l(X,Y) | A=0, S=1, O=1] . Following the same proof as in the lemma, we can then show that [ l(X,Y)| S=1, A=0,O=1] = = [1(S=1)/[S=1| A=0, O=1] l(X,Y)| A=0, O=1] = [ [1(S=1)/[S=1| A=0,O=1] l(X,Y) | X, A=0, O=1]| A=0, O=1] = [ [1(S=1)/[S=1| A=0, O=1]| X, A=0, O=1][ l(X,Y) | X, A=0, O=1]| A=0 , O=1] = [ [S=1| X, A=0, O=1]/[S=1| A=0, O=1][ l(X,Y) | X, A=0, O=1]| A=0, O=1 ] =[[[S=1| X, A=0, O=1]/[S=1| A=0, O=1] l(X,Y) | X, A=0, O=1]| A=0, O=1 ] = [[S=1| X, A=0. O=1]/[S=1| A=0, O=1] l(X,Y)| A=0, O=1 ] where the fourth equation follows from that Y S | (X, A=0, O=1). This follows from lemma <ref> and that we have simple random sampling. It is easy to show that O (X. 
A, Y^1,Y^0) | S ⇒ O Y | (S, X, A=0) which combined with Y S | (X, A=0) implies that Y S | (X, A=0, O=1). Thus, to conclude, we see in fact that [l(X,Y) | A=0, S=1] can be expressed as an expectation over quantities of only observed data {O=1}. § SIMULATION STUDIES We conducted simulation studies to compare the finite-sample performance of our proposed estimators and that of previously proposed methods for integrating external control data in clinical trials. First, we considered a best-case scenario that is favorable to all the compared methods. In this scenario, conditions 1-5 hold, all parametric working models are correctly specified, and there is no distribution (covariate) shift in baseline characteristics between the trial and the population contributing the external control data. Second, we considered a more adversarial scenario: here conditions 4-5 do not hold, all parametric working models are misspecified, and there is a distribution shift such that f(X| S=1) ≠ f(X| S=0). We evaluate the estimators based on their absolute bias, relative variance (to that of the best trial-based estimator), and coverage of 95%-confidence intervals. Estimators: We compared the optimized randomization-aware estimator τ(h^*) as given in (<ref>); the combined estimator τ(λ^*) = λ^* τ(h^*)+(1-λ^* )τ(g); unadjusted trial-only estimator (∑_i=1^n S_i)^-1∑_i=1^n S_i(2A_i-1)Y_i; the trial-based augmented inverse probability estimator (AIPW) τ(g); the naive pooling estimator using (<ref>) that assumes exchangeability between study sources; a test-then-pool estimator that first tests H_0 in (<ref>) using a likelihood ratio test, and thereafter, if H_0 is accepted, uses the naive pooling estimator and otherwise uses the AIPW estimator; dynamic borrowing, a Bayesian method based on the mixture prior approach from <cit.> implemented in the R package from <cit.>; and selective borrowing, a method that dynamically selects subsets of compatible external controls for integration and implemented with the authors' source code <cit.>. All estimators, except for dynamic borrowing and selective borrowing, were implemented as (stacked) M-estimators using the geex R package <cit.>, with the default small-sample adjustment when constructing confidence intervals <cit.>. Data-generating process: For a given n_1 and n_0, we let S_i=1 for i=1,…,n_1 and S_i=0 for i=n_1+1,…, n_1+n_0. For observations with S_i=1, we generated A_i∼Bern(1/2); for observations with S_i=0, we let A_i=0. The covariates X_i were sampled from a 10-dimensional multivariate Normal distribution N(μ_S_i, Σ) where μ_s depended on the scenario (best-case or adversarial); the diagonal elements of Σ were set to 1; and the off-diagonal elements were set to 0.1. We generated outcomes according to Y_i = ∑_j=1^5α_j X_i,j + ∑_j=1^10β_j X_i,j^2 + 5 · A_i + ε_i with ε_i ∼ N(0, 1), where we let α=1/2(1, 1, -1, 1, -1) and β=(-1/4,-1, -1/2, -1, -1/2, 1/2, 1/2, 1/2, 1/2, 1/2). For the best-case scenario,all parametric working models were correctly specified and we set μ_1=μ_0=0 such that there was no distribution shift for the baseline covariates between the trial population and the population underlying the external control data. Meanwhile, for the adversarial scenario, working models were misspecified by intentionally omitting the variables with j=5,…,10 and removing all second-order terms, and we introduced distribution shift by setting μ_1=0 and μ_0=1/21. 
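To make the data-generating process concrete, the sketch below generates one simulated dataset following the description above (10 correlated covariates, quadratic outcome model, treatment effect of 5); the helper name is ours, and we read the adversarial-scenario mean μ_0=1/2·1 as a vector of 0.5s.

import numpy as np

def simulate_trial_and_controls(n1, n0, shift=False, seed=0):
    """Generate one dataset with n1 trial participants (S=1, A randomized 1:1)
    and n0 external controls (S=0, A=0), following the simulation design."""
    rng = np.random.default_rng(seed)
    alpha = 0.5 * np.array([1, 1, -1, 1, -1])
    beta = np.array([-0.25, -1, -0.5, -1, -0.5, 0.5, 0.5, 0.5, 0.5, 0.5])
    Sigma = np.full((10, 10), 0.1) + 0.9 * np.eye(10)   # 1 on the diagonal, 0.1 off-diagonal

    S = np.concatenate([np.ones(n1, dtype=int), np.zeros(n0, dtype=int)])
    A = np.concatenate([rng.binomial(1, 0.5, n1), np.zeros(n0, dtype=int)])
    mu1 = np.zeros(10)
    mu0 = 0.5 * np.ones(10) if shift else np.zeros(10)   # covariate shift in the adversarial scenario
    X = np.vstack([rng.multivariate_normal(mu1 if s == 1 else mu0, Sigma) for s in S])

    Y = X[:, :5] @ alpha + (X ** 2) @ beta + 5.0 * A + rng.normal(size=n1 + n0)
    return S, A, X, Y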
In both scenarios, we varied the number of available external controls n_0 between 10 and 200 for three different trial sample sizes n_1∈{50,100,200}; we mainly discuss the observations for the middle case n_1=100 and note the differences when we decreased or increased the trial sample size. Best-case scenario: The simulation results for this scenario with n_1 = 100 are plotted in Figure <ref>. All estimators had negligible bias and mainly differed in variance; ranked in increasing order: naive pooling, test-then-pool, optimized randomization-aware, combined, AIPW, selective borrowing, dynamic borrowing, and unadjusted trial-only. Naive pooling had the lowest relative variance at approximately 0.6 compared to trial-based AIPW, while optimized randomization-aware and combined had relative variances between 0.85 and 0.9. As expected, AIPW had lower variance than the unadjusted trial-only estimator. The dynamic and selective borrowing methods had lower variance than the unadjusted trial-only estimator, but performed worse compared to other estimators. Most estimators had a stable and close to nominal coverage, except for the coverage of dynamic and selective borrowing which decreased as n_0 increased. For the setting with n_1=50, plotted in Figure <ref>, the improvement in terms of relative variance were much greater for optimized randomization-aware, combined, naive pooling and test-then-pool. The difference in coverage can likely be attributed to the small-sample adjustment on the confidence intervals having a large effect. Optimized randomization-aware and combined improved the most, which indicates that settings with very small sample sizes in the trial are favorable for these estimators. Meanwhile, for the setting with n_1=200, plotted in Figure <ref>, the improvement in relative variance for the optimized randomization-aware and combined shrinked to about 0.95 (from between 0.85 0.9 when n_1=100). Adversarial scenario: The simulation results for this scenario with n_1 = 100 are plotted in Figure <ref>. Here, there is a significant difference in bias among the estimators. Naive pooling, test-then-pool, and dynamic borrowing became more biased as more external control data were used, whereas optimized randomization-aware and combined estimators remained unbiased, similar to the trial-based estimators. Selective borrowing had bias that was not negligible, even though it decreased with increasing n_0; this estimator also had very large variance when n_0 was small. The variance of the optimized randomization-aware estimator and the combined estimator was about 0.95 relative to trial-based AIPW, while naive pooling and test-then-pool had lower relative variances between 0.6 and 0.8. Dynamic and selective borrowing did not perform well. The variance reduction relative to AIPW was less pronounced than in the best-case scenario. Finally, the coverage of all estimators using external controls, except optimized randomization-aware and combined, decreased as the number of external controls increases, likely due to the increasing bias. For the setting with n_1=50 and n_1=200, plotted in Figure <ref> and Figure <ref> respectively, we observed no substantial difference in the performance of estimators. 
§ DATA APPLICATION DETAILS §.§ Summary statistics of baseline covariates for different populations §.§ Additional analyses Furthermore, to empirically examine the methods under different sample sizes of the index trial, we varied the sample size of the controls in the index trial by randomly sampling with replacement a fraction of the available observations. Specifically, of the 91 patients in the control group of the index trial, we sampled 68 (∼75%), 46 (∼50%), or 23 (∼25%) individuals and repeated the analyses described above 100 times. The results, averaged over the 100 analyses, are shown in Table <ref> (lines 2-4). With small sample sizes in the control group of the index trial (68, 46, and 23), the point estimates of the naive pooling estimator deviated from the point estimates of the IPW and AIPW estimators. This showed that the naive pooling estimator is biased in this data application, mostly because conditions 4 and 5 did not hold. On the other hand, the point estimates of the proposed estimators (randomization-aware and combined estimators) were similar to the point estimates of the IPW and AIPW estimators, indicating that the proposed estimators do not introduce bias when augmenting the index trial using external controls even if conditions 4 and 5 do not hold. In addition, the combined estimator's standard error was always smaller than the standard error of the AIPW estimator; the efficiency improvement grew as the sample size of the controls in the index trial decreased.
http://arxiv.org/abs/2406.17967v1
20240625224917
Unmasking the Imposters: In-Domain Detection of Human vs. Machine-Generated Tweets
[ "Bryan E. Tuck", "Rakesh M. Verma" ]
cs.CL
[ "cs.CL" ]
Robust integration of external control data in randomized trials
Rickard Karlsson^1,†*, Guanbo Wang^2,3†*, Jesse H. Krijthe^1 and Issa J. Dahabreh^2,3,4
^1 Pattern Recognition Laboratory, Delft University of Technology, Delft, the Netherlands
^2 CAUSALab, Harvard T.H. Chan School of Public Health, Boston, Massachusetts, U.S.A
^3 Department of Epidemiology, Harvard T.H. Chan School of Public Health, Boston, Massachusetts, U.S.A
^4 Department of Biostatistics, Harvard T.H. Chan School of Public Health, Boston, Massachusetts, U.S.A
^†Equal contribution; r.k.a.karlsson@tudelft.nl; g.wang@hsph.harvard.edu
§ ABSTRACT The rapid development of large language models (LLMs) has significantly improved the generation of fluent and convincing text, raising concerns about their misuse on social media platforms. We present a methodology using Twitter datasets to examine the generative capabilities of four LLMs: Llama 3, Mistral, Qwen2, and GPT4o. We evaluate 7B and 8B parameter base-instruction models of the three open-source LLMs and validate the impact of further fine-tuning and uncensored versions. Our findings show that “uncensored” models with additional in-domain fine-tuning dramatically reduce the effectiveness of automated detection methods. This study addresses a gap by exploring smaller open-source models and the effects of uncensoring, providing insights into how fine-tuning and content moderation influence machine-generated text detection. § INTRODUCTION Maintaining the integrity of digital communication platforms like Twitter has become increasingly vital due to exponential advancements in large language models (LLMs) in recent years. While the misuse of technology by bad actors is not new, the scale at which they can now disseminate misinformation, hate speech, and convincingly imitate others has grown alarmingly <cit.>. This development poses significant challenges in distinguishing between human and machine-generated content, especially on social media, where generative AI can have far-reaching consequences <cit.>. The continual advancements in LLMs are fascinating due to their transformative potential, but they also pose significant risks. These models can generate highly convincing misinformation and fake news on an unprecedented scale, undermining the trust and reliability of digital communication platforms <cit.>. This makes it critical to develop robust automated detection systems to swiftly identify and mitigate the spread of false information. By addressing these challenges, we can safeguard digital spaces, ensure informed public discourse, and mitigate the harmful impacts of generative AI. However, the increasing sophistication of LLMs has made detecting machine-generated text increasingly difficult <cit.>. Naive approaches often fail due to LLMs' ability to incorporate recent information, adapt to specific writing styles, and generate text with few distinguishable patterns.
The availability of open-source LLMs allows "bad actors" to fine-tune models on specific domains, producing highly fluent and convincingly human-like text tailored to specific contexts. Previous solutions for detecting machine-generated text have primarily focused on general-purpose datasets and models, which may not effectively capture the unique characteristics of social media text, such as non-conventional terminology, short texts, and emojis. Many of these approaches focus on a narrow set of well-known models like variants of GPT 2, GPT 3, ChatGPT <cit.>, nano LLMs with less than 1.5B parameters without domain adaptation, failing to account for the wide range of open-source community models that are increasingly prevalent <cit.>. Our work differs by considering a broader range of LLMs, including censored, uncensored open-source models, and GPT4o, while evaluating their performance on domain-specific social media data to provide a more realistic assessment of the challenges in identifying machine-generated text in the wild by bad actors. To address previous limitations, we adapt Twitter datasets from TweetEval <cit.> to distinguish between human-authored and machine-generated tweets. Our methodology employs four primary LLMs: Llama 3 <cit.>, Mistral <cit.>, Qwen2 <cit.>, and GPT4o <cit.>, with sub-variations in content moderation and fine-tuning, resulting in nine models and benchmark subsets. We evaluate the generated tweets using detection methods such as BERTweet <cit.>, a soft-voting ensemble, cross-domain transferability of adversarial networks via RADAR <cit.>, and additional linguistic features <cit.>. Our results show a large decrease in performance attributed to fine-tuning and reduction of content moderation of LLMs when identifying machine-generated text. While focused on Twitter, our study initiates understanding the capabilities of “censored” and “uncensored” open-source LLMs in generating in-domain Twitter text. §.§ Summary of Contributions * Novel Methodology for Evaluating LLMs on Twitter Data: We introduce a methodology that adapts publicly available Twitter datasets to examine the generative capabilities of four state-of-the-art LLMs, addressing a gap in previous research that primarily focused on OpenAI's GPT models (Section <ref>). * Comprehensive Analysis of Open-Source LLMs: Conducting experiments with 7B and 8B parameter base-instruction models of four LLMs, including three open-source models (Llama 3, Mistral, and Qwen2) and GPT4o, we validate the efficacy of fine-tuned and uncensored versions, providing insights into the impact of these factors on the detection of machine-generated text (Section <ref>). * Evaluation of Detection Methods and Benchmark Datasets: Our findings reveal that uncensored models with additional in-domain fine-tuning substantially decrease the ability of automated detection methods show an absolute drop of 16.86% detection rate in the worst-case scenario. We provide nine benchmark detection sub-datasets and our complete methodology to facilitate future research (Sections <ref>). § RELATED WORK §.§ Stylometric and Machine Learning Approaches Researchers have investigated stylometry and machine learning approaches for detecting machine-generated fake news and text. <cit.> demonstrated the effectiveness of stylometry in identifying text origin but highlighted its limitations in distinguishing legitimate and malicious uses of language models. 
<cit.> showed that energy-based models exhibit good generalization across different generator architectures but are sensitive to the training set. <cit.> proposed a novel algorithm using stylometric signals to detect AI-generated tweets, showing that stylometric features can effectively augment state-of-the-art detectors, especially with small Twitter timelines or limited training data. In contrast, we saw poor performance with stylometric features in our study, suggesting their limitations in more varied contexts. Recent studies have tackled the challenge of differentiating human-written and AI-generated text in academic contexts. <cit.> demonstrated the high accuracy of machine learning models, especially random forests and SVMs, in this task. <cit.> introduced an innovative ensemble neural model that leverages probabilities from pre-trained language models as features, yielding strong performance in binary and multi-class classification across English and Spanish. <cit.> showcased the power of ensembling lightweight transformers, achieving 95.55% accuracy on a shared task test set. However, <cit.> observed that while single transformer-based models excel on in-distribution data, they struggle with out-of-distribution samples. §.§ Zero-shot and Few-shot Detection Methods Zero-shot and few-shot detection methods have shown promise in identifying machine-generated text without extensive training data. DetectGPT <cit.> leverages the curvature of a language model's log probability function to outperform existing baselines without additional training. Similarly, FLAIR <cit.> uses carefully designed questions to elicit distinct responses from bots and humans, proving effective in differentiating between the two in an online setting. <cit.> show that smaller language models are more effective at detecting machine-generated text, regardless of the generator's architecture or training data. <cit.> explored how a machine learning model distinguishes between human-generated and ChatGPT-generated text in short online reviews, identifying patterns like polite language, lack of specific details, and impersonal tone using an explainable AI framework. §.§ Adversarial Attacks and Defenses Recent research has exposed the vulnerability of AI-generated text detectors to adversarial attacks, particularly those involving paraphrasing. <cit.> introduced DIPPER, an 11b powerful paraphrase generation model that can evade several detectors, including watermarking, GPTZero, DetectGPT, and OpenAI's now defunct text classifier <cit.>. They proposed a retrieval-based defense that can detect many paraphrased generations while maintaining a low false positive rate. However, <cit.> challenged the reliability of current state-of-the-art detectors by introducing EScaPe, a framework that learns evasive soft prompts that guide pre-trained language models to generate text that deceives detectors. To improve AI-generated text detection, <cit.> proposed RADAR, which jointly trains a detector and a paraphraser via adversarial learning, significantly outperforming existing methods. While they found strong transferability across LLMs, our findings contradict this, as RADAR performed poorly across all LLM variants in our study, highlighting the need for domain-specific fine-tuning. Additionally, <cit.> revealed reliability issues with watermarking, neural network-based, zero-shot, and retrieval-based detectors. 
They developed a recursive paraphrasing attack that compromises watermarking and retrieval-based detectors with minimal text quality degradation, demonstrating their vulnerability to spoofing attacks. §.§ Datasets and Benchmarks Large-scale datasets are crucial for developing effective machine-generated content detection algorithms. <cit.> introduced TweepFake, the first dataset of real deepfake tweets based on GPT 2, recurrent neural networks, and Markov Chains, benchmarking several traditional machine learning, character convolutional networks, and Bert-based models. <cit.> introduced CHEAT, a dataset containing 35,304 ChatGPT-generated abstracts using various methods. They analyzed the distribution differences between human-written and ChatGPT-written abstracts and found that existing detection schemes struggle with ChatGPT-written content, especially when human involvement is present. <cit.> constructed a testbed for deepfake text detection, collecting human-written texts from diverse domains and generating corresponding deepfake texts using GPT 3.5, T5, and Llama. The resulting dataset, containing 447,674 instances, showcased the challenges of deepfake text detection in the wild, with out-of-distribution data posing significant challenges. § DATASETS We use the TweetEval unified benchmark <cit.> for our human-labeled tweets. Specifically, we extract the emotion, irony, sentiment, hate speech, and offensive language datasets for fine-tuning our LLMs. We use the emotion recognition subset to generate our synthetic tweets. Detailed distributions are in Table <ref>. §.§ Characteristics and Selection Criteria The TweetEval datasets were selected for their high-quality annotations, diverse range of tasks, and suitability for benchmarking the performance of language models on short, user-generated text. The criteria for selecting these datasets focused on several key aspects: relevance to common natural language processing tasks in the social media domain; high-quality, human-annotated labels; sufficient dataset size for effective fine-tuning and evaluation; a balanced distribution of examples across labels and dataset splits; and diversity in the number of labels and task complexity. § METHODOLOGY §.§ Large Language Models We employ four primary LLMs in our research: Llama 3 (8B) <cit.>, Mistral (7B) <cit.>, Qwen2 (7B) <cit.>, and GPT4o (closed-source) <cit.>. These models have demonstrated impressive performance on various natural language processing tasks. Figure <ref> shows our experimental methodology, and Table <ref> shows a comparison between the different variants and training perplexity scores, showing an early indication of which models are adapting well. Llama 3, the third iteration of Meta's open-source language model, has recently offered several parameter variations. We opted to use the smallest 8B parameter version, which allows us to use lower computational resources while producing an effective fine-tune on our in-domain data. This approach showcases the efficacy of small-scale LLMs that are easily accessible to the public without requiring extravagant resources. Using the Meta-Llama-3-8B-Instruct[https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct] version allows us to evaluate the initial censored version of Llama 3. 
The first variation we explore is the Hermes 2 Pro-Llama-3 8B model[https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B] <cit.>, which was further fine-tuned on the OpenHermes 2.5 open-source composition <cit.>, composed of several open-source and custom synthetic datasets. Our final variation is the Dolphin-2.9-Llama 3-8B[https://huggingface.co/cognitivecomputations/dolphin-2.9-Llama 3-8b], which is explicitly uncensored to remove alignment and bias. This was achieved through a heavily filtered Ultrachat dataset <cit.>, where dialog assistant replies like I do not have emotions or I don't have opinions are removed, as well as through OrcaMath <cit.> and OpenHermes 2.5 <cit.>. We also evaluate WizardLM, a Llama-derived model, which is fine-tuned using instructions automatically generated by a novel evolutionary algorithm called Evol-Instruct <cit.>, enabling it to handle more complex instructions than models trained on human-written instructions. We use the uncensored 7 billion parameter variant, WizardLM-7B-Uncensored,[https://huggingface.co/cognitivecomputations/WizardLM-7B-Uncensored] further refined by filtering the dataset. This filtering process includes removing non-English conversations, excessive Unicode (indicative of Chinese or Korean text), and excessive repeated characters. Additionally, instances of “AI Moralizing” are removed from conversations containing specific phrases related to ethical guidelines and sensitive topics. Mistral, a highly efficient 7 billion parameter model, outperforms larger models like the 13B Llama 2 on several benchmarks. We employ two Mistral variants: Mistral-7B-Instruct-v0.2,[https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2] an instruction fine-tuned model with moderation mechanisms for safer outputs, and OpenHermes-2.5-Mistral-7B,[https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B] an uncensored model similar to the Llama 3 variation, further trained on code datasets to improve its programming capabilities. Alibaba's recently introduced Qwen2, an evolution from Qwen1.5, trained in 27 additional languages besides English and Chinese. Similar to Llama 3 and Mistral, we utilize the base-instruct version[https://huggingface.co/Qwen/Qwen2-7B-Instruct] and the uncensored Dolphin-2.9.2-Qwen2-7B[https://huggingface.co/cognitivecomputations/dolphin-2.9.2-qwen2-7b] version. Finally, we use the recently updated GPT-4o from OpenAI. It allows us to test our open-source community models against what is arguably the flagship closed-source model. By comparing performance, we can evaluate the strengths and weaknesses of open-source versus proprietary approaches, highlighting the potential of community-driven models in competing with industry-leading solutions. This diverse set of models enables us to explore the impact of content moderation on performance and safety. By comparing open-ended generation with controlled outputs, we provide valuable insights for developing responsible AI systems that balance performance and safety considerations. §.§.§ Fine-tuning on In-Domain Twitter Data To adapt our LLMs to the domain of social media text, we perform fine-tuning using the TweetEval datasets described in Section <ref>. To create a comprehensive fine-tuning dataset, we concatenate all the datasets mentioned in Table <ref>, including emotion recognition, hate speech detection, irony detection, offensive language identification, and sentiment analysis. 
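A minimal sketch of this concatenation step is shown below, using the Hugging Face datasets library and the publicly hosted tweet_eval configurations; the exact splits pooled and any additional preprocessing follow the paper's description rather than this sketch.

from datasets import load_dataset, concatenate_datasets

TASKS = ["emotion", "hate", "irony", "offensive", "sentiment"]

def build_finetuning_corpus(min_len=10, val_frac=0.05, seed=42):
    """Concatenate the TweetEval task subsets into a single text corpus,
    drop very short tweets, and create a train/validation split."""
    parts = []
    for task in TASKS:
        ds = load_dataset("tweet_eval", task, split="train")
        parts.append(ds.remove_columns([c for c in ds.column_names if c != "text"]))
    corpus = concatenate_datasets(parts)
    corpus = corpus.filter(lambda ex: len(ex["text"]) > min_len)
    return corpus.train_test_split(test_size=val_frac, seed=seed)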
Tweet Lengths ≤10 are removed resulting in a total of 96,225 samples, which we break into a 95/5 train-validation split, resulting in 91,413 training and 4,812 validation samples. Outside of masking user mentions and URLs, we use the raw tweets without any preprocessing to maintain the authentic characteristics of the data. The fine-tuning process is implemented using the Hugging Face Transformers library <cit.> and the PEFT[https://huggingface.co/blog/peft] (Parameter-Efficient Fine-Tuning) framework. Low-Rank Adaptation (LoRA) <cit.> is applied, a PEFT technique, to reduce the number of trainable parameters while maintaining the model's performance. The LoRA configuration used in our experiments is detailed in Table <ref> in Appendix <ref>. Our fine-tuning process uses a batch size of 32, achieved by accumulating gradients over 4 steps with an initial batch size 8. Models are fine-tuned for 2,856 steps (one epoch) using the AdamW optimizer with a 2e-4 learning rate and 0.001 weight decay. We employ a cosine learning rate scheduler with a 0.05 warmup ratio. Full hyperparameters are listed in Table <ref> in Appendix <ref>. By fine-tuning our LLMs on the concatenated TweetEval datasets, we aim to adapt them to the Twitter domain to improve tweet generation quality. This approach leverages the diversity of tasks and tweets within the datasets to improve the models' performance in generating relevant and accurate tweets. We aim to create LLMs that effectively generate high-quality tweets tailored to social media contexts. §.§.§ Synthetic Tweet Generation Generating synthetic tweets involves using the original emotion recognition dataset as a foundation. For each original tweet, a prompt is created to instruct the language model to generate a new tweet that expresses the same emotion as the original one. The prompt includes the original tweet and guidelines for generating the new tweet, such as using creative and diverse linguistic techniques, paraphrasing the original content by substituting words or phrases with semantically similar alternatives, and varying the sentence structure. We also opt for greedy decoding, as preliminary experimentation showed it generated more difficult-to-detect text initially versus beam-search and contrastive-search. The user and the AI assistant interact in a dialogue, as shown in Table <ref>. The user initiates the conversation by providing the original tweet and generation guidelines, including the desired emotion; in our case, we use the original emotion from each tweet. The assistant responds with an acknowledgment and a request for the tweet. By explicitly specifying the desired emotion, the language model can focus on generating a new tweet that accurately reflects the same emotional content. This approach allows for a clear and structured exchange of information, enabling the model to create a version of the tweet that maintains the original emotion while introducing linguistic variations and diversity. §.§ Post-Processing of Generated Tweets After generating synthetic tweets using our language models and combining them with the original human-written tweets, we perform a series of post-processing steps to ensure the quality and consistency of the augmented dataset. We remove tweets with empty text values and those mainly containing non-text content (less than 50% alphanumeric characters.) 
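This first filtering pass can be expressed compactly; the following is an illustrative sketch of the empty-text and alphanumeric-ratio checks described above (the helper name and the treatment of whitespace are our assumptions, not taken from the paper's code).

def keep_tweet(text: str, min_alnum_ratio: float = 0.5) -> bool:
    """Return True if a generated tweet passes the basic content filters:
    non-empty after stripping, and at least 50% alphanumeric characters
    (whitespace excluded from the denominator)."""
    text = (text or "").strip()
    if not text:
        return False
    chars = [c for c in text if not c.isspace()]
    if not chars:
        return False
    alnum_ratio = sum(c.isalnum() for c in chars) / len(chars)
    return alnum_ratio >= min_alnum_ratio

generated = ["", "!!! ??? ....", "Feeling grateful for the little things today"]
filtered = [t for t in generated if keep_tweet(t)]  # keeps only the last tweet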
We expand common contractions using a predefined dictionary (Table <ref> in the Appendix <ref>), replacing mentions with "@USER," URLs with "HTTPURL," and converting emojis to their text representations. To identify and remove incomplete thoughts, we employ a heuristic evaluation, checking for several patterns that might indicate an incomplete tweet, such as abrupt endings without punctuation or short lengths (less than 10 characters.) The heuristic also considers the unique characteristics of tweets, such as the presence of URLs, mentions, hashtags, or emojis at the end of the tweet, which are often used to convey complete thoughts and end of statements. By applying this heuristic, tweets that may not convey a complete tweet are filtered out, ensuring that the dataset contains meaningful and coherent content. Finally, duplicate tweets are removed based on their text content to avoid redundancy in the dataset. §.§.§ Human Vs. Machine-Generated Dataset Creation To evaluate the effectiveness of our fine-tuned language models in generating synthetic tweets, we create combined datasets for each LLM variation according to a 90/10/10 train-validation-test split, including both human-written and machine-generated tweets. To ensure a fair evaluation, the nine datasets are down-sampled to match the size of the smallest dataset, as some LLM-generated text was eliminated at a higher rate due to our post-processing heuristic evaluation. Table <ref> presents the train, validation, test splits, and label distributions, which are uniform across all nine dataset variations. §.§ Machine-Generated Text Detection Models We use the BERTweet model <cit.> as our initial baseline model, which generally shows robust performance on Twitter-oriented downstream tasks. The model is fine-tuned on the augmented dataset using the hyperparameters specified in Table <ref> in Appendix <ref>. To further improve the performance and robustness of our machine-generated text detection system, we implement a soft voting ensemble of five BERTweet models. Each model in the ensemble is trained independently using the same hyperparameters and dataset splits as the baseline model. During inference, the ensemble predicts the class probabilities for each input text, and the final prediction is determined by averaging the probabilities across all models and selecting the class with the highest average probability. Previous work has shown that the addition of linguistic features concatenated with the CLS token from bert-based models improves detection capabilities. To test this on our nine datasets, we implement <cit.> approach, utilizing a reduce network to gradually reduce the dimensionality of the CLS + linguistic features, followed by a final classification network to produce our probabilities of human or generated. The linguistic features integrated can be found in Table <ref>. In addition to the BERTweet models, we evaluate the transferability of RADAR <cit.>, a state-of-the-art synthetic text detection model, to our domain. RADAR is pre-trained on a large corpus of synthetic text and has demonstrated strong performance across various domains. We use the pre-trained RADAR model to perform inference on our test sets without additional fine-tuning. This allows us to assess the claims of generalizability to our domain. § RESULTS AND DISCUSSION Final experimentation included nine LLM variants, each with unique characteristics, training methodologies, and varying levels of content moderation against four different detectors. 
The performance metrics for each detector across the nine LLM variants are presented in Table <ref>, while the mean performance metrics are summarized in Table <ref>. The most significant observation is the contrast between censored and uncensored LLM variants. Two of the uncensored variants, WizardLM and OpenHermes-2.5-Mistral, exhibit a substantial drop in performance across all detectors, as shown in Figure <ref> by the sharp increase in error rate and decrease in Matthews Correlation Coefficient (MCC). The remaining uncensored variants show consistent, though less drastic, drops in performance relative to their censored counterparts. This indicates that models without content filtering pose additional challenges for detection, likely because they produce a broader range of language patterns, some of which may be offensive. Another notable finding is the poor transferability of the pre-trained RADAR model to our domain. RADAR consistently classified every sample as Human, resulting in a constant precision of 80.27% but low recall and F1 scores of 50.00% and 37.71%, respectively. RADAR's MCC of 0 shows that it performs no better than random guessing. Since we did not fine-tune RADAR, these results demonstrate the need for domain-specific fine-tuning and highlight the limitations of applying models trained on different data distributions to new domains without adaptation. The BERTweet + Stylometric model, which incorporates stylometric features alongside the base BERTweet model, demonstrated inconsistent performance across the LLM variants. It outperformed the other detectors only on the Hermes-2-Pro-Llama-3 LLM, while showing performance degradation compared to the base BERTweet model on 6 out of 9 dataset variations. This contrasts with the findings of <cit.>, who observed benefits of stylometric features in discriminating between Human and AI-Generated tweets produced by GPT-2 variants and EleutherAI-gpt-neo-1.3B <cit.>. This discrepancy suggests that the effectiveness of stylometric features may vary depending on the specific characteristics of the text domain and the LLMs being investigated. The mean performance metrics in Table <ref> show that the BERTweet + Stylometric model underperforms the unmodified BERTweet model across all metrics, further emphasizing the limitations of stylometric features in this context. Among the detectors, the Soft Ensemble model proved the most robust, achieving top performance on 6 out of the 9 dataset variations. The success of the Soft Ensemble model can be attributed to its ability to leverage the collective knowledge and decision-making of multiple BERTweet models, reducing the impact of individual model biases and improving overall prediction stability. This is further supported by the mean performance metrics in Table <ref>, where the Soft Ensemble model outperforms all other detectors across all metrics, including the highest MCC of 0.7497, indicating a strong positive relationship between predictions and ground-truth labels. The base BERTweet model demonstrated strong performance, particularly on the challenging WizardLM and OpenHermes-2.5-Mistral datasets, where it surpassed the soft ensemble approach. This unexpected outcome may be attributed to the ensemble's aggregation process. Additionally, the single BERTweet model might have benefited from a more direct and consistent learning signal, while the ensemble's complexity introduced difficulties in optimizing across multiple models.
Finally, the ensemble approach provided little prediction diversity, as all models were identical, failing to capitalize on potential complementary strengths. Through our empirical evaluation, we highlight the importance of considering the specific attributes of LLM variants when developing detection models, an aspect overlooked by prior research. The Soft Ensemble and base BERTweet models demonstrated the most robust overall performance, with the Soft Ensemble exhibiting robustness across most dataset variations and achieving the highest mean metrics. The inconsistent performance and lower mean metrics of the BERTweet + Stylometric model compared to the base BERTweet model highlight the limitations of stylometric features. RADAR's poor transferability, evidenced by its low mean metrics and an MCC of 0%, emphasizes the need for domain-specific fine-tuning. § CONCLUSION The widespread availability of open-source LLMs offers significant societal benefits but poses risks for misuse. Our findings indicate that uncensoring LLMs by reversing or minimizing LLM alignment and bias significantly improves their ability to generate human-like text, thereby reducing the effectiveness of detection systems. This effect is particularly evident in the OpenHermes-2.5-Mistral and WizardLM variants, which are much harder to detect than their censored counterparts. Our study represents the first steps forward to evaluate both "censored" and "uncensored" models within the Twitter domain. Future research should consider scaling to larger parameter LLM variants and increasing the training data for further domain adaptation to determine if detector performance improves with the growing parameter count. § LIMITATIONS Our study primarily focuses on Twitter data and may not generalize to other social media platforms or domains outside social media. The unique characteristics of Twitter, such as the short text length, use of hashtags and mentions, and real-time nature of the platform, may influence the performance of our detection methods. Future work could explore the applicability of our approach to other platforms like Facebook, Instagram, or Reddit, as well as non-social media contexts such as news articles or academic writing. We make the assumption that the TweetEval dataset used for fine-tuning and evaluation is representative of real-world Twitter data. However, language use on social media evolves rapidly, and the performance of our detectors may degrade over time if the models are not continuously updated with new data. Additionally, the TweetEval dataset may not fully capture the diversity of topics, opinions, and demographics on Twitter, potentially limiting the generalizability of our findings. Focusing on specific versions of the Llama and Mistral language models (7B and 8B parameter variations), we do not explore the impact of model size on detection performance. Larger models with tens or hundreds of billions of parameters may generate higher-quality tweets that are more difficult to detect when adapted to a particular domain. Conversely, smaller models may produce lower-quality outputs that are easier to identify. § ETHICAL CONSIDERATIONS Our work on detecting machine-generated tweets raises important ethical considerations. First, we acknowledge the potential for malicious actors to misuse LLMs, particularly uncensored variants, to generate and spread harmful content at scale. 
While we intend to develop detection methods to mitigate these risks, we recognize the importance of responsible disclosure practices and the need to balance openness with potential harm. Second, we recognize that our findings on the effectiveness of uncensored LLMs could incentivize their misuse and continual development. To mitigate this risk, we emphasize that our work aims to inform the development of robust detection methods and raise awareness about the challenges posed by these models. We also recognize the sensitivity of the datasets used in our study and our responsibility to handle them ethically. Finally, we highlight the importance of ensuring that LLM detection methods are developed and deployed responsibly, guided by ethical principles and human rights considerations. Ultimately, the development and deployment of LLM detection methods should be guided by a collaborative effort among researchers, platform providers, policymakers, and the public to ensure these technologies are used in a manner that is transparent and accountable and respects the rights and dignity of all individuals. § ACKNOWLEDGMENTS Research partly supported by NSF grants 2210198 and 2244279 and ARO grants W911NF-20-1-0254 and W911NF-23-1-0191. Verma is the founder of Everest Cyber Security and Analytics, Inc. § APPENDIX
http://arxiv.org/abs/2406.18216v1
20240626095829
Probing a Modified Luttinger Sum Rule in the Strongly Interacting 1D Fermi-Hubbard Model
[ "Annika Böhler", "Henning Schlömer", "Ulrich Schollwöck", "Annabelle Bohrdt", "Fabian Grusdt" ]
cond-mat.str-el
[ "cond-mat.str-el", "cond-mat.quant-gas" ]
Department of Physics and Arnold Sommerfeld Center for Theoretical Physics (ASC), Ludwig-Maximilians-Universität München, Theresienstr. 37, München D-80333, Germany Munich Center for Quantum Science and Technology (MCQST), Schellingstr. 4, D-80799 München, Germany Department of Physics and Arnold Sommerfeld Center for Theoretical Physics (ASC), Ludwig-Maximilians-Universität München, Theresienstr. 37, München D-80333, Germany Munich Center for Quantum Science and Technology (MCQST), Schellingstr. 4, D-80799 München, Germany Department of Physics and Arnold Sommerfeld Center for Theoretical Physics (ASC), Ludwig-Maximilians-Universität München, Theresienstr. 37, München D-80333, Germany Munich Center for Quantum Science and Technology (MCQST), Schellingstr. 4, D-80799 München, Germany Munich Center for Quantum Science and Technology (MCQST), Schellingstr. 4, D-80799 München, Germany University of Regensburg, Universitätsstr. 31, Regensburg D-93053, Germany Department of Physics and Arnold Sommerfeld Center for Theoretical Physics (ASC), Ludwig-Maximilians-Universität München, Theresienstr. 37, München D-80333, Germany Munich Center for Quantum Science and Technology (MCQST), Schellingstr. 4, D-80799 München, Germany § ABSTRACT Fermi surface reconstruction in cuprates can lead to an abrupt change in the Fermi momentum k_F between different phases. This phenomenon remains subject of debate and is at the heart of an ongoing discussion about the nature of the metallic state in the pseudogap regime. Here we study a minimal model of a k_F changing crossover in the one-dimensional Fermi-Hubbard model, where a tuning of the onsite interaction leads to a crossover between a spin-1/2 Luttinger liquid with small Fermi momentum and a spinless chargon liquid with large Fermi momentum. We attribute this to an emergent U(1) symmetry in the strongly correlated limit, which can be used to derive a modified Luttinger sum rule recovering the large Fermi momentum. We analyse Friedel oscillations at the edge of a system to directly probe the change of Fermi momentum at zero and non-zero temperature. This paves the way for a direct experimental observation of changes of the Fermi momentum using ultracold fermions in a quantum gas microscope, with possible extensions to higher dimensional systems. Probing a Modified Luttinger Sum Rule in the Strongly Interacting 1D Fermi-Hubbard Model Fabian Grusdt July 1, 2024 ======================================================================================== § INTRODUCTION Understanding the nature of the Fermi surface is central to understanding the electronic properties of high-T_c superconductors such as cuprates. Among the many puzzling phenomena observed in these compounds is the concept of Fermi surface reconstruction in the pseudogap phase <cit.>. While a conventional large Fermi surface is found in the Fermi liquid phase of cuprates at large doping <cit.>, for the underdoped cuprates a small Fermi surface with a volume that violates the Luttinger theorem is observed <cit.>. The underlying metallic state of the pseudogap phase and the origin of the reconstructed Fermi surface remain poorly understood <cit.>. Here we analyze a minimal model featuring a similar change of Fermi momentum in one dimension <cit.>. We study the one-dimensional Fermi-Hubbard (FH) model Ĥ_FH = -t ∑_j=0^L-1∑_σ (ĉ^†_jσĉ_j+1 σ + h.c.) 
+ U ∑_j=0^L-1n̂_j↑n̂_j↓, where t denotes the hopping strength between neighbouring sites and U is an onsite interaction introduced whenever two fermions occupy the same site. For large on-site interactions U≫ t, the low-energy physics of the FH model can be described by the related t-J model Ĥ_tJ = -t 𝒫̂∑_j=0^L-1∑_σ( ĉ_jσ^†ĉ_j+1 σ + h.c. )𝒫̂ + J/2∑_j=0^L-1( Ŝ⃗̂_j·Ŝ⃗̂_j+1 - 1/4n̂_jn̂_j+1), where J=4t^2/U, n̂_j=∑_σn̂_j,σ and 𝒫̂ is a Gutzwiller projection onto maximally singly occupied sites. Note that we dropped a three-site term that appears at order t^2/U <cit.>. For finite interaction strengths U, the Hubbard model of a spin-balanced system describes a Luttinger liquid of spin-1/2 particles with a Fermi momentum of k_F = π/2 n at filling n=N/L <cit.>. The U →∞ limit however, where the t-J model is valid, exhibits spin-charge separation and has been shown to be described as free spinless chargons <cit.>, for which a larger Fermi momentum of k_F = π n is found. The goal of this paper is two-fold: On one hand we analyse the observed phenomenology in light of the Luttinger theorem by following a topological proof due to Oshikawa <cit.>. We show that the change in Fermi momentum can be attributed to the emergence of an additional U(1) symmetry which is associated with the conservation of the number of free dopants in the t-J model. On the other hand, we argue that these effects can be readily explored by ultracold atom experiments at currently achievable temperatures. To this end we suggest probing the crossover between the two Fermi momenta regimes by observing Friedel oscillations <cit.> at the boundary of a system, which can be realized by ultracold fermions in a quantum gas microscope <cit.>. In such experiments the form of the confining potential can be engineered to observe Friedel oscillations at the boundary of a box potential or an impurity site, and the ratio of hopping t and interaction strength U can be tuned via the lattice potential <cit.>. We provide simulations of the Friedel oscillations in the FH model at both zero and finite temperatures in one dimension, along with a non-interacting model in two dimensions. We suggest a possible experimental extension to the interacting case in two dimensions, and argue how observing Friedel oscillations in cold atom experiments can be a powerful tool for investigating changes in the Fermi surface of strongly interacting fermion systems. § LUTTINGER THEOREM AND EMERGENT U(1)-SYMMETRY The Luttinger theorem <cit.> relates the volume enclosed by the Fermi surface V_FS of a system to the underlying particle density n=N/L^d in d dimensions: V_FS/(2π)^d = n 2π. In the one-dimensional (1D) case the volume enclosed by the Fermi surface is given by V_FS=2k_F, which reduces Eq. (<ref>) to the statement k_F = π n. Although in one dimension Fermi surfaces are not stable and make way for Luttinger liquids, the latter are still characterized by a well-defined Fermi-momentum k_F that obeys the Luttinger theorem <cit.>. Following Ref. <cit.> for a topological proof of Eq. (<ref>), the key idea is to adiabatically insert a U(1) gauge flux quantum through a periodic model, i.e. we set c_j=L,σ=c_j=0,σ in Eqs. (<ref>) and (<ref>). Introducing this gauge flux increases the momentum in the system. Analyzing this momentum change both via a gauge transformation of the eigenstates and from an effective Fermi liquid description, one can derive Eq. (<ref>) <cit.>. We apply the same approach to analyze the Luttinger theorem in the 1D t-J model. 
Due to the Gutzwiller projection in Eq. (<ref>), which arises from the U→∞ limit of the hopping term in the FH model, fermions in the t-J model are prohibited from occupying the same site even if they have opposite spins. This constitutes an additional U(1) symmetry of the t-J Hamiltonian, associated with the conservation of the total number of holes N_h=L-∑_iσn̂_iσ in the system. Here, we present a modified version of the flux insertion argument, which takes into account the emergent U(1) symmetry of holes explicitly in the t-J model. To this end we note that there are different possible flux insertion procedures we can follow for the 1D t-J and FH Hamiltonians, as summarized in Fig. <ref>. Both the t-J and FH model exhibit two separate U(1) symmetries associated with the number of particles N_σ of each spin species σ. This allows for an insertion of U(1) gauge fluxes ϕ_σ, coupling to fermions of spin σ. In the case of the t-J model an additional gauge flux ϕ_h coupling to the doped holes can be introduced. In order to determine the Fermi momentum k_F^c of the charges in the system, we introduce a flux ϕ_c coupling to all charge carriers. Note that since the holes in the t-J model can be viewed as the charge carriers of the system, this can be achieved by considering either two equal fluxes coupling to both spin species ϕ_c=ϕ_↑=ϕ_↓ or via a single, opposite flux coupling to the holes ϕ_c=-ϕ_h. We compare both flux insertion procedures and start by first inserting a U(1) gauge flux ϕ_σ for each spin species through a periodic t-J chain, as shown on the left-hand side of Fig. <ref>. Choosing the flux to be equal to one flux quantum ϕ_σ = 2π, we find a new Hamiltonian H(2π), which can be related to the original Hamiltonian H(0) via a gauge transformation Ĥ(2π)=Û_↑^†(2π)Û_↓^†(2π)Ĥ(0)Û_↓(2π)Û_↑(2π), where Û_σ(φ)=exp[-i∑_j=1^L (j-1)n̂_jσφ/L]. Here n̂_jσ=ĉ_jσ^†ĉ_jσ corresponds to the number of spins σ at site j, i.e. the gauge flux ϕ_σ only couples to particles with spin σ. Inserting equal fluxes and adiabatically increasing them to one flux quantum ϕ_↑=ϕ_↓=2π for both spin species, the ground state |Ψ_0⟩ will evolve to a new state |Ψ_σ⟩. In the U→∞ limit of the FH model, an additional U(1) symmetry of the number of holes emerges. This becomes a full symmetry of the t-J Hamiltonian and thus allows for an alternative flux insertion shown on the right-hand side of Fig. <ref>. This additional symmetry can be made explicit by rewriting the t-J Hamiltonian in terms of a slave particle representation ĉ_jσ = f̂_jσĥ_j^†, where the original particles represented by fermionic operators ĉ_jσ are decomposed into a spinon denoted by f̂_jσ, carrying the spin degree of freedom, and a chargon ĥ_j carrying the charge degree of freedom <cit.>. We choose the new flux ϕ_h to couple to the chargons of the system, and apply an analogous gauge transformation for ϕ_h=-2π, such that Ĥ(2π)=Û_h^†(-2π) Ĥ(0) Û_h(-2π) with Û_h(φ) = exp[-i∑_j (j-1) n̂_jhφ/L], where n̂_jh=ĥ_j^†ĥ_j is the number of chargons at site j. Here we choose ϕ_h=-2π in order to ensure that the total flux inserted in the system is the same in both procedures and the form of the final Hamiltonian Ĥ(2π) will be equal in both cases. Adiabatically increasing the flux from ϕ_h=0 to ϕ_h=-2π, the ground state will evolve analogously into a new state |Ψ_h⟩. 
Since the system stays translationally invariant throughout the adiabatic process, |Ψ_σ⟩ and |Ψ_h⟩ must also be eigenstates of the translation operator with eigenvalues e^iP_σ and e^iP_h respectively, where P_σ and P_h are generally different from P_0, the momentum of the original ground state |Ψ_0⟩. Expressing both the original and transformed states in the same gauge choice and evaluating P̂=∑_k_n,σĉ_k_nσ^†ĉ_k_nσ we find the following changes of momenta, Δ P_σ = P_σ-P_0 = 2πN_↑ + N_↓/L = 2π n Δ P_h = P_h-P_0= - 2πN_h/L = - 2π n_h, where n=N/L is the particle density with N=N_↑ + N_↓ and n_h=N_h/L the hole density in the system. Making use of the fact that Δ P is only defined modulo 2π and that the number of holes is conserved such that n_h=1-n, we see that - 2π n_h = -2π (1-n)=2π n, proving that the two flux insertions lead to the same momentum change, consistent with our claim that they insert the same total flux into the system. Following <cit.>, we continue to make use of a momentum balance argument to arrive at Eq. (<ref>). To derive an alternative expression for the momentum change, we note that the flux insertion leads to an increased momentum of the quasiparticle excitations of the system, as each quasi-momentum gets shifted by k→ k+ϕ/L <cit.>. After the adiabatic flux insertion this results in a shift of the entire Fermi sea by ϕ_σ /L = 2π/L, or ϕ_h = -2π/L respectively, which we integrate to obtain the total momentum change Δ P =V_FS. We see that there are two possible flux insertion protocols for the t-J model corresponding to the conserved U(1) charges of the total spin and hole number, which we have shown lead to the same momentum change in the system, and can therefore be regarded as equivalent. However, we distinguish two possible low energy states: For a Luttinger liquid of spin-1/2 particles, there are two underlying Fermi surfaces corresponding to the two spin species. This can be represented by the momentum space picture shown on the bottom of Fig. <ref>a. We therefore obtain for the volume of the charge Fermi surface V_FS^c = V_FS^↑ + V_FS^↓. In the free chargon picture we find that V_FS^c directly corresponds to the Fermi surface of holes V_FS^h, and the chargon Luttinger liquid therefore has a single, larger Fermi surface, as shown on the bottom of Fig. <ref>c. As the full U(1) symmetry of the total number of holes only emerges in the t-J model, the 1D FH model at finite interaction U can only be described as a spin-1/2 Luttinger liquid. However, both states are possible in the the t-J model. Inserting V_FS = 2k_F for the different Fermi surfaces underlying the corresponding Luttinger liquids with charge Fermi momentum k_F^c=k_F^↑=k_F^↓ and k_F^c=π-k_F^h respectively, and comparing to the momentum change derived in Eq. (<ref>), we find different expressions for the two distinct scenarios: k_F^c,LL = π/2 n spin-1/2 Luttinger liquid (LL) k_F^c, scl = π n spinless chargon liquid (scl) We see that the two different low-energy states of a spin-1/2 Luttinger liquid and a spinless chargon liquid make measurably different predictions about the systems Fermi momentum in the presence of the emergent U(1) symmetry. In the absence of the emergent symmetry, the only possible low-energy state is the spin-1/2 Luttinger liquid, and we conclude that the 1D FH model with finite interaction U is described by a Luttinger liquid with a small Fermi momentum of k_F=π/2n. 
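The two sum rules can already be contrasted in a simple non-interacting calculation, which also previews the probe used in the next section. The sketch below (plain free fermions on an open chain, not the DMRG simulations employed later) fills the lowest single-particle orbitals and reads off the dominant Friedel wave vector of the boundary density oscillations, once for a free spin-1/2 gas (k_F=π n/2) and once for spinless fermions at the same total density, which mimic the U→∞ chargon liquid (k_F=π n); the chain length and filling are arbitrary illustrative choices.

```python
import numpy as np

L, n = 200, 0.3                        # open chain length and total particle density
H = -np.diag(np.ones(L - 1), 1) - np.diag(np.ones(L - 1), -1)   # nearest-neighbour hopping, t = 1
eps, psi = np.linalg.eigh(H)           # single-particle orbitals of the open chain

def density(num_filled):
    """Ground-state density profile from filling the lowest orbitals."""
    return (psi[:, :num_filled] ** 2).sum(axis=1)

rho_spinful = 2 * density(int(n * L / 2))   # free spin-1/2 gas: expect 2 k_F = pi * n ~ 0.94
rho_spinless = density(int(n * L))          # spinless fermions:  expect 2 k_F = 2 pi * n ~ 1.88

k = 2 * np.pi * np.fft.rfftfreq(L, d=1.0)
for name, rho in (("spin-1/2", rho_spinful), ("spinless", rho_spinless)):
    spectrum = np.abs(np.fft.rfft(rho - rho.mean()))
    print(f"{name:9s}: dominant Friedel wave vector = {k[1 + np.argmax(spectrum[1:])]:.3f}")
```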
§ SIGNATURES IN FRIEDEL OSCILLATIONS Since the full symmetry of the t-J model, including the total number of holes, only emerges in the limit where U→∞ and double occupancies are completely forbidden, the question arises which perspective is more adequate for the FH Hamiltonian at large but finite U. To this end, we first follow Ref. <cit.> and make use of Friedel oscillations at the edge of an open boundary system to extract the Fermi momentum k_F and probe the different sum rules found above. These density oscillations are a direct result of open boundary conditions and have a frequency f=2k_F proportional to the Fermi momentum <cit.>. We also relate our observations to fluctuations of the total number of holes providing a measure for the emergent U(1) symmetry. We use density matrix renormalization group (DMRG) simulations <cit.> of the 1D FH model in Eq. (<ref>) with open boundary conditions to extract the ground state density distributions. Fig. <ref> shows the results for the Friedel oscillations in an open boundary FH model at L=200 and different interaction strengths. We observe that for U=0 there is only a single oscillation with a frequency of 2k_F^LL, where k_F^LL=πn/2 is the Fermi momentum of charges in the spin-1/2 Luttinger liquid as determined in Eq. (<ref>). For very large interaction strengths this is effectively replaced by an oscillation at 2k_F^scl=4k_F^LL, which we interpret as Friedel oscillations of a spinless chargon liquid with k_F^scl=2k_F^LL = π n, see Eq. (<ref>). For intermediate U both oscillations can be observed. As U is increased and the U(1) symmetry emerges, the relative amplitude of the 2k_F^scl oscillation grows, while the 2k_F^LL oscillation vanishes. The form of the two oscillations can be calculated from bosonization results <cit.>. It has previously been shown that the crossover point where the two amplitudes are equal happens at constant n/U for fixed system sizes <cit.>. We propose to use the hole number fluctuations Δ N_h^2 = ⟨ N_h^2⟩ - ⟨ N_h ⟩^2 as a good probe for the crossover between the two Fermi momenta regimes, which vanish in the case of an exact U(1) conservation of holes. Fig. <ref> shows Δ N_h^2 for different interaction strengths U and system sizes L, where a strong suppression of the hole number fluctuations with increasing interactions is observed as expected. The inset of Fig. <ref> shows the relative fluctuations Δ N_h^2/N_h on a logarithmic scale. We find that the suppression of the relative fluctuations follows a power law for large values of U and is in particular independent of system size. We further simulate the Friedel oscillations in a FH model at finite temperatures in order to relate our results to realistic experimental settings. Specifically, we propose to directly measure Friedel oscillations in quantum simulators using ultracold atoms in optical lattices, see e.g. <cit.> for a recent review. Using quantum gas microscopes with single site resolution, these systems can directly extract snapshots of the density modulations at the edges or an impurity site in the system <cit.>. Results of the finite temperature simulations in one dimension are shown in Fig. <ref>. We note that the thermal fluctuations introduce a faster decay of the oscillation amplitudes as one moves away from the boundary of the system, as can be seen in the real space distributions on the bottom of Fig. <ref>. Note that the specific form of the oscillation depends on the type of dopant. Fig. 
<ref> shows the case of n > 1/2, where the charge carriers conserved by the emergent symmetry are constituted by doublons, i.e. doubly occupied sites in the system. The same analysis as for the hole-doped scenario discussed above can be applied to this case, as the number of doublons also becomes conserved in the strongly correlated limit. Our results show signatures of Friedel oscillations up to experimentally accessible temperatures of T=0.25t. As expected, we observe the peaks in the Fourier transform depicted in the upper panels of Fig. <ref> to broaden with increasing temperature, as well as a decrease in the intensity of the peaks. By tuning the filling, the frequency of the Friedel oscillations can be controlled, which allows for an observation of multiple periods even at elevated temperatures. Analysis of the peak ratio A(k_F^σ)/A(k_F^c) suggests that a sharp crossover between the two regimes is expected to exists at finite U>0 even in the thermodynamic limit, which is discussed in Appendix A. § OUTLOOK - TWO-DIMENSIONAL FRIEDEL OSCILLATIONS We further analyse the Hubbard model on a two-dimensional square lattice, starting with non-interacting (U=0) fermions. We introduce an impurity at the center of the lattice and choose periodic boundary conditions in order to prevent interference from Friedel oscillations at the boundaries. Fig. <ref> shows signatures of the Friedel oscillations for non-interacting fermions in two dimensions. The frequency of the oscillation behaves as expected from the Luttinger theorem in Eq. (<ref>), where for the two-dimensional case V_FS=π k_F^2, which leads to a frequency of 2k_F= 4√(π n). Fig. <ref>b also shows a finite temperature calculation of the non-interacting case, allowing us to extract signatures of the Friedel oscillations up to experimentally feasible temperatures of T=0.25t. We see again a thermal decay of the oscillations between the zero and finite temperature calculations. A similar Fourier analysis to the one-dimensional case above allows to extract the non-interacting Fermi momentum. Extending the previous one-dimensional analysis we expect an analogous U(1) symmetry to emerge in the U→∞ limit. Since numerical simulations as shown for the one-dimensional case above are much more limited in two dimensions, cold atoms could provide an interesting platform to study the large-U regime of the FH model even at high doping. An interesting application of our analysis is the study of the formation of magnetic polarons in a 2D FH model, believed to be crucial to the pseudogap phase of cuprates. Upon doping, the FH model is expected to undergo a transition from a state resembling free spinful fermions at low filling to a system of magnetic polarons at higher fillings, associated with Fermi surface reconstruction and a change of Fermi momentum <cit.>. Fig. <ref> shows a comparison of the expected Fourier signals of the corresponding density oscillations at different hole-dopings. We compare a system of free spin-1/2 fermions with a dispersion of ϵ(k) = -2tcos(k) to free magnetic polarons in a t-J model, which are described by the dispersion ϵ(k)=A(cos(2k_x)+cos(2k_y))+B(cos(k_x+k_y)+cos(k_x-k_y)) <cit.>. Here A and B depend weakly on t/J. For t/J≈2 we take A=0.25J and B=0.36J <cit.>. 
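Before comparing the two dispersions, the non-interacting impurity calculation described at the beginning of this section is simple enough to reproduce with a few lines of exact diagonalization. The sketch below keeps a single spin species and uses an illustrative lattice size, density, and impurity strength; the density oscillations around the central impurity have a radial period set by the Fermi momentum, cf. the 2k_F=4√(π n) relation quoted above.

```python
import numpy as np

Lx = Ly = 26                  # periodic square lattice (illustrative size)
n, V_imp = 0.08, 10.0         # fermion density (one spin species) and impurity strength

N = Lx * Ly
idx = lambda x, y: (x % Lx) * Ly + (y % Ly)
H = np.zeros((N, N))
for x in range(Lx):
    for y in range(Ly):
        for dx, dy in ((1, 0), (0, 1)):        # nearest-neighbour hopping, t = 1
            H[idx(x, y), idx(x + dx, y + dy)] = -1.0
            H[idx(x + dx, y + dy), idx(x, y)] = -1.0
H[idx(Lx // 2, Ly // 2), idx(Lx // 2, Ly // 2)] = V_imp     # impurity at the central site

eps, psi = np.linalg.eigh(H)
rho = (psi[:, :int(n * N)] ** 2).sum(axis=1).reshape(Lx, Ly)

# Density deviation along a cut through the impurity: oscillations with wavelength ~ pi / k_F,
# with k_F ~ 2 sqrt(pi n) for a single species in the dilute (continuum) limit.
print(np.round(rho[Lx // 2, :] - rho.mean(), 4))
```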
The signatures of both theories are strikingly different and could thus be used to distinguish the two cases in an experimental setting, where we expect that upon doping a sufficient amount of holes into the system one can observe a change from a system of magnetic polarons to a system resembling free spinful fermions. Friedel oscillations in clean two-dimensional cold atom systems could thus provide an alluring novel way to study the Fermi momentum and possibly even the full Fermi surface, potentially shedding new light on the nature of the Fermi surface and its reconstruction in the cuprate pseudogap phase <cit.>, as well as a Lifshitz transition where the Fermi surface topology changes <cit.>. § SUMMARY AND DISCUSSION We have provided a minimal model in one dimension to study a change of Fermi momentum in a strongly correlated system. To this end we have described two different pictures for the large U limit of the FH model, which can be understood either in terms of a spinful Luttinger liquid or as free spinless chargons, for which different Fermi wave vectors k_F are found. We have argued that the two perspectives are distinguished by an emergent U(1) symmetry and have shown that this emergence drives a crossover between two regimes with different charge Fermi momenta k_F^c as the Hubbard interaction is varied. We have applied a proof of the Luttinger theorem to the t-J model as an approximation of the U→∞ limit of the FH model, making use of the full emergent symmetry in order to provide a modified flux insertion and recover the larger Fermi momentum of the spinless chargon liquid from a modified sum rule. As the full symmetry only emerges at U=∞ we propose to use Friedel oscillations in an open boundary system to probe which perspective is more adequate in strongly correlated systems. We find a smooth crossover between the two different Fermi momenta, with the Friedel oscillations of the spin-1/2 Luttinger liquid becoming strongly suppressed in the large U limit. We have further extracted the hole number fluctuations at different interaction strengths and found that they are suppressed with increasing interactions, consistent with our perspective of an emergent symmetry. Finally, we have also provided a simulation of the Friedel oscillations for realistic experimental settings to extract the Fermi momentum in cold atom simulations of the full FH model in one dimension and provided an outlook onto two dimensional settings where studies of Friedel oscillations can shed light on the formation of magnetic polarons in the 2D FH model and can become an valuable tool for examining Fermi surface reconstruction believed to underlie the transition from the pseudogap to the Fermi liquid regime at high doping. § ACKNOWLEDGEMENTS We thank Immanuel Bloch, Sebastian Eggert, Youqi Gang, Timon Hilker, Anant Kale, Lev Kendrick, Martin Lebrat, Muqing Xu and Aaron Young for fruitful discussions. We acknowledge the funding by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy – EXC-2111 – 390814868. This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (Grant Agreement no 948141) — ERC Starting Grant SimUcQuam. § THERMODYNAMIC LIMIT The question remains when and if the crossover discussed above happens in the thermodynamic limit. 
It has previously been suggested that for infinite system sizes the amplitude of the Friedel oscillations of the spin-1/2 Luttinger liquid would be larger than the oscillations of the spinless chargon liquid for any U<∞, and therefore no crossover at finite U could be observed in the thermodynamic limit  <cit.>. However, instead of focusing on the interactions where A(k_F^σ)=A(k_F^c), we propose as a more meaningful measure a different ratio of the two amplitudes that indicates when the system is already deep in the regime of the larger Fermi momentum. There we argue that any bosonization approach starting from the weakly correlated limit must become increasingly inaccurate, while smaller peaks at higher harmonics would still be expected even in those approaches. Fig. <ref>b shows the ratio of the amplitudes for different system sizes up to L=800. The white region corresponds to the points where A(k_F^σ)/A(k_F^c)=0.1. Here the system already exhibits an approximate U(1) symmetry and conservation of the total hole number. In contrast to the A(k_F^σ)=A(k_F^c) points, which have previously been found to depend on the system size L <cit.>, the corresponding interaction U of this ratio depends less sensitively on system size. Fig. <ref>c shows the system size dependence of different ratios of A(k_F^σ)/A(k_F^c). It can be seen that as this ratio is decreased, i.e. the emergent symmetry constraint becomes stronger, the corresponding U becomes independent of system size. In light of the emergent symmetry, we also argue that another, more natural quantity to measure this crossover are the relative hole number fluctuations as mentioned in the main text in Fig. <ref>. Fig. <ref>a further shows a sweep of the relative hole number fluctuations for different system sizes and interactions on a logarithmic scale, which we find to be independent of system size. Our result thus indicates that a crossover between regimes with different charge Fermi momenta happens for finite U even in the thermodynamic limit.
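For completeness, the relative hole-number fluctuations used above as a measure of the emergent U(1) symmetry are straightforward to estimate from site-resolved occupation snapshots, e.g. from a quantum gas microscope or from Fock-space samples of a DMRG state. The following schematic estimator assumes an array of snapshots of the total occupation per site is available; the fake uncorrelated data at the end is only there to make the sketch runnable.

```python
import numpy as np

def hole_number_fluctuations(snapshots):
    """snapshots: integer array (num_shots, L) of total site occupations n_i = 0, 1 or 2.

    Returns (Delta N_h^2, Delta N_h^2 / <N_h>) for the hole number N_h = L - sum_i n_i."""
    N_h = snapshots.shape[1] - snapshots.sum(axis=1)
    return N_h.var(), N_h.var() / N_h.mean()

# Fake, uncorrelated snapshots just to make the sketch self-contained.
rng = np.random.default_rng(0)
fake = (rng.random((1000, 40)) < 0.8).astype(int)      # density ~ 0.8, no double occupancies
print(hole_number_fluctuations(fake))
```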
http://arxiv.org/abs/2406.18523v1
20240626175331
Integrability and renormalizability for the fully anisotropic ${\rm SU}(2)$ principal chiral field and its deformations
[ "G. A. Kotousov", "D. A. Shabetnik" ]
hep-th
[ "hep-th", "math-ph", "math.MP" ]
Integrability and renormalizability for the fully anisotropic SU(2) principal chiral field and its deformations Gleb A. Kotousov^1 and Daria A. Shabetnik^2 ^1Institut für Theoretische Physik, Leibniz Universität Hannover, Appelstraße 2, 30167 Hannover, Germany ^2NHETC, Department of Physics and Astronomy, Rutgers University, Piscataway, NJ 08855-0849, USA Abstract For the class of 1+1 dimensional field theories referred to as the non-linear sigma models, there is known to be a deep connection between classical integrability and one-loop renormalizability. In this work, the phenomenon is reviewed on the example of the so-called fully anisotropic SU(2) Principal Chiral Field (PCF). Along the way, we discover a new classically integrable four parameter family of sigma models, which is obtained from the fully anisotropic SU(2) PCF by means of the Poisson-Lie deformation. The theory turns out to be one-loop renormalizable and the system of ODEs describing the flow of the four couplings is derived. Also provided are explicit analytical expressions for the full set of functionally independent first integrals (renormalization group invariants). § INTRODUCTION One of the spectacular instances of when ideas from physics and geometry come together is in the study of a class of field theories known as the Non Linear Sigma Models (NLSM). Mathematically, these are defined in terms of maps between two (pseudo-)Riemannian manifolds known as the worldsheet and the target space such that the classical equations of motion take the form of a generalized version of Laplace's equation <cit.>. In physics, one of the uses of NLSM is as low energy effective field theories with the choice of the target space being dictated by the symmetries of the problem. The first such proposal appeared in a paper of Gell-Mann and Levy <cit.>. They put forward the following Lagrangian density as an effective field theory of pions: L=1/2 η^ij _i n⃗·_j n⃗ with |n⃗|^2=1/f^2 . Here the last equation means that the four component field n⃗=(n_1, n_2, n_3, n_4) is constrained to lie on the three dimensional round sphere whose radius coincides with 1/f. Thus the target space is 𝕊^3 equipped with the homogeneous metric while the worldsheet is four dimensional Minkowski spacetime 𝕄^1,3. The field theory is known as the O(4) sigma model as it possesses O(4) symmetry – the group of isometries of the three-sphere. Ignoring global aspects, one may replace the latter by SU(2)× SU(2), which plays the role of the vector and axial symmetries appearing in the `chiral limit' of QCD. For this reason the model (<ref>) is also referred to as the SU(2) principal chiral field. The O(4) sigma model is rather special in 1+1 dimensional spacetime 𝕄^1,1. In this case, as was pointed out by Polyakov, the Lagrangian (<ref>) defines a renormalizable QFT. Following the traditional path-integral quantization, the model should be equipped with a UV cutoff Λ <cit.>. It was shown to one-loop order that a consistent removal of the UV divergences can be achieved if the bare coupling is given a dependence on the cutoff momentum, described by the RG flow equation <cit.> Λ _Λ (f^-2)=N-2/2π ħ+O(ħ^2) . Here ħ stands for the dimensionless Planck constant while N=4 (the computation was performed for the general O(N) sigma model with target space 𝕊^N-1).
Notice that in the continuous limit Λ→∞ the coupling constant f^2 approaches zero. In turn, the curvature of the sphere to which the fields n_j(x^0,x^1) belong vanishes so that the theory becomes non-interacting. This phenomenon, known as asymptotic freedom, indicates consistency of the quantum field theory. As a result of the work of Polyakov and later Zamolodchikov and Zamolodchikov <cit.>, who proposed the associated scattering theory, it is commonly believed that the O(N) sigma model in 1+1 dimensions is a well defined (UV complete) QFT. The renormalizability of general NLSM in 1+1 dimensions was discussed in the work of Friedan <cit.>. He considered the class of theories where the Lagrangian density takes the form ℒ=1/2 G_μν(X) η^ij_i X^μ_j X^ν . Here G_μν(X) is the metric written in terms of local coordinates X^μ on the target space. The couplings are encoded in this metric so that the latter is taken to be dependent on the cutoff Λ. Extending the results of Ecker and Honerkamp <cit.>, Friedan computed the RG flow equation to two loops. To the leading order in ħ it takes the form _τ G_μν=-ħ R_μν+O(ħ^2) , _τ=-2πΛ /Λ , where R_μν is the Ricci tensor built from the metric. Without the O(ħ^2) term, (<ref>) is usually referred to as the Ricci flow equation <cit.>, which is a partial differential equation for G_μν=G_μν(X | τ). It found a remarkable application in mathematics in the proof of the Poincaré conjecture <cit.>. The question of renormalizability can be addressed within a class of NLSM where the target space metric depends on a finite number of parameters. The simplest example is the O(N) sigma model whose target manifold belongs to the family of the (N-1) dimensional round spheres, characterized by the radius 1/f. In this case, the Ricci flow equation boils down to the ordinary differential equation (<ref>). Another example is the Principal Chiral Field (PCF), where the target space is the group manifold of a simple Lie group G equipped with the left/right invariant metric. The latter is unique up to homothety and, in local coordinates, is defined by the relation G_μν(X) X^μX^ν=-1/e^2 ⟨ U^-1 U , U^-1 U⟩ , where U∈ G, e is the homothety parameter and the angular brackets ⟨·, ·⟩ denote the Killing form in the Lie algebra of G.[For a classical Lie group we take the Killing form to be the trace over the defining representation.] The Ricci flow (<ref>) implies _τ (e^-2)=-12 C_2 ħ+O(ħ^2) with C_2 being the value of the quadratic Casimir in the adjoint representation. This equation was essentially obtained in the original work of Polyakov <cit.>, see also <cit.>. Notice that the SU(2) PCF coincides with the O(4) sigma model. In this case C_2=2, while (<ref>) and (<ref>) are the same provided that e^2≡ 2f^2. An example of an NLSM which is renormalizable within a two parametric family is the so-called anisotropic SU(2) PCF. In this case the SU(2)× SU(2) isometry of the target space is broken down to SU(2)× U(1) and the manifold is still topologically 𝕊^3 but equipped with a certain asymmetric metric. The latter is given by G_μν(X) X^μX^ν=-1/e^2 ⟨ U^-1 U , O( U^-1 U)⟩ , where O is an operator acting from the Lie algebra 𝔰𝔲(2) to itself depending on the additional deformation parameter r, O: 𝔰𝔲(2)↦𝔰𝔲(2) , O=1+r P_3 , and P_3 projects onto the Cartan subalgebra. The Ricci flow equation reduces to a system of ordinary differential equations on e and r: -1/ħ _τ(e^-2) = 1-r -1/ħ _τ r = 2 e^2 r (r+1) . 
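These two equations are easily integrated numerically; the following sketch (with ħ set to 1 and an arbitrary initial condition) can be used to visualize the behaviour discussed next, namely that e^2 flows to zero towards the UV for |r|<1.

```python
import numpy as np
from scipy.integrate import solve_ivp

hbar = 1.0

def rhs(tau, y):
    """RG flow of the anisotropic SU(2) PCF for y = (e^{-2}, r)."""
    inv_e2, r = y
    return [-hbar * (1.0 - r), -2.0 * hbar * r * (r + 1.0) / inv_e2]

# tau decreases logarithmically as the cutoff grows, so the UV corresponds to tau -> -infinity.
sol = solve_ivp(rhs, (0.0, -20.0), [1.0, 0.3], rtol=1e-9, dense_output=True)
for tau in (0.0, -5.0, -20.0):
    inv_e2, r = sol.sol(tau)
    print(f"tau = {tau:6.1f}:   e^2 = {1.0 / inv_e2:.4f},   r = {r:.4f}")
```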
In the domain |r|<1, similar as with the SU(2) PCF, the theory is asymptotically free and it turns out to be a consistent QFT. When the τ dependence of the metric, satisfying the Ricci flow equation, is contained in a finite number of parameters, the partial differential equation (<ref>) reduces to a system of ordinary ones. From the point of view of physics, this means that the corresponding NLSM depends on a finite number of coupling constants and is one-loop renormalizable within this class. The construction of such solutions is difficult to achieve even when the dimension of the target manifold is low. Among the most impressive early results was the work of Fateev <cit.>, who discovered a three parameter family of metrics solving the Ricci flow equation. The NLSM with this background is a two parameter deformation of the SU(2) PCF, which contains the anisotropic case as a subfamily. A guiding principle for exploring the class of renormalizable NLSM was formulated in the work <cit.>. It arose from the observation that all the above mentioned models turn out to be classically integrable field theories. It is now believed that there is a deep relation between classical integrability and one-loop renormalizability in 1+1 dimensional sigma models. The notion of classical integrability in 1+1 dimensional field theory requires explanation. Recall that a mechanical system with d degrees of freedom is called integrable (in the Liouville sense) if it possesses d functionally independent Integrals of Motion (IM) in involution. This concept is difficult to extend to a field theory, where the number of degrees of freedom is infinite. A suitable paradigm of integrability in the case of 1+1 dimensions arose from the works of the Princeton group <cit.> and was later developed in the papers of Lax <cit.> and Zakharov and Shabat <cit.>. A key ingredient is the existence of the so-called Zero Curvature Representation (ZCR) of the Euler-Lagrange equations of the classical field theory: [_i- A_i, _j- A_j]=0 . Here A_i= A_i(x^0,x^1|λ) is a Lie-algebra valued worldsheet connection which also depends on the auxiliary (spectral) parameter λ. The ZCR implies that the Wilson loops T(λ)= Tr ← Pexp(-∫_ C x^i A_i) , where the trace is taken over some matrix representation of the Lie algebra, are unchanged under continuous deformations of the closed contour C. If suitable boundary conditions are imposed, this can be used to generate IM. For instance, in the case when the worldsheet is the cylinder and the connection is single valued, the contour C may be chosen to be the equal-time slice at some x^0 as in fig. <ref>. Then, it is easy to see that T(λ) does not depend on the choice of x^0, i.e., it is an integral of motion. Due to the dependence on the arbitrary complex variable λ, T(λ) constitutes a family of IM. The existence of these may provide a starting point for solving the classical equations of motion by applying the inverse scattering transform <cit.>. For this reason, we say that a 1+1 dimensional classical field theory is integrable if it admits the ZCR.[Such a `definition' of integrability does not guarantee that the equations of motion can be analytically solved in any sense. Thus, it is a much weaker notion than Liouville integrability in classical mechanics.] In this paper the interplay between classical integrability and one-loop renormalizability in sigma models is demonstrated. The structure is as follows. 
Section <ref> is devoted to a discussion of the so-called fully anisotropic SU(2) PCF, whose target space metric is given by G_μν(X) X^μX^ν=- 2⟨ U^-1 U , O( U^-1 U)⟩ , O=I_1 P_1+I_2 P_2+I_3 P_3 . Here P_a are projectors onto the basis t_a of the Lie algebra 𝔰𝔲(2), which is taken to be orthogonal w.r.t. the Killing form. The theory is a two parameter deformation of the SU(2) PCF and it reduces to the latter when I_1=I_2=I_3=1/2 e^-2. In addition for the special case I_1=I_2 it becomes the anisotropic SU(2) PCF, whose target space metric was presented above in eq. (<ref>). We discuss the classical integrability of the model with metric (<ref>). On the other hand, the latter is shown to be a solution of the Ricci flow equation for a certain τ dependence of the couplings I_a=I_a(τ). The corresponding system of ordinary differential equations is derived and its first integrals are obtained. In section <ref> the concept of the Poisson-Lie deformation <cit.>, which preserves integrability, is introduced. We apply it to the fully anisotropic SU(2) PCF and obtain a new classically integrable field theory depending on four parameters. It is argued that the resulting model is one-loop renormalizable. The ODEs for the τ dependence of the four couplings is presented and explicit analytical expressions for the renormalization group invariants are provided. The last section is devoted to a discussion. Among other things, it contains the formulae for the renormalization group invariants of the fully anisotropic SU(2) PCF with WZW term. § FULLY ANISOTROPIC SU(2) PCF Following the lecture notes <cit.>, let us gain some intuition about the fully anisotropic SU(2) PCF by considering its classical mechanics counterpart. It is obtained via `dimensional reduction' where one restricts to field configurations that depend only on the spacetime variable x^0 so that U= U(x^0). Then the Lagrangian density (<ref>), (<ref>) becomes L=∑_a=1^3I_a ω_a^2/2 , where ω_a are defined through the relation U^-1 U̇=- ∑_a=1^3ω_a t_a and the dot stands for differentiation w.r.t. the time x^0. Also, the basis for the Lie algebra has been normalized such that ⟨ t_a, t_b ⟩=12 δ_ab and [ t_a, t_b]= ϵ_abc t_c with ϵ_abc being the Levi-Civita symbol and summation over the repeated index is being assumed. It turns out that the Lagrangian (<ref>) describes the free motion of a rigid body where the translational degrees of freedom have been ignored. Recall that an arbitrary displacement of a rigid body is a composition of a translation and a rotation. For a free moving top, when the net external force is zero, one can without loss of generality consider the case when the centre of mass is at rest. Introduce two right handed coordinate systems called the fixed (laboratory) frame and moving frame, which are defined by the ordered set of unit vectors (E_1,E_2,E_3) and (e_1,e_2,e_3), respectively. The axes of the moving frame coincide with the principal axes of the rigid body w.r.t. the centre of mass. Then the orientation of the body is uniquely specified by a 3× 3 special orthogonal matrix which relates the fixed and moving frames as in fig. <ref>. Thus the configuration space of a rigid body with a fixed point coincides with the group manifold of SO(3). The matrix specifying the rotation can be identified with an SU(2) matrix U taken in the adjoint representation. Mathematically this is expressed as E_a t_a= U e_a t_a U^-1 , where again summation over a=1,2,3 is being assumed. 
The coefficients ω_a defined in (<ref>) coincide with the projections of the instantaneous angular velocity ω along the principal axes. This can be seen by differentiating both sides of (<ref>) w.r.t. time and comparing the result with ė_a=ω× e_a. The classical mechanics system governed by the Lagrangian (<ref>) is called the Euler top. The parameters I_a, which were introduced originally as formal couplings in (<ref>), coincide with the principal moments of inertia. Notice that the Lagrangian is built from U^-1U̇ which belongs to the Lie algebra and hence is insensitive to the difference between the groups SU(2) and SO(3).[ Topologically, the special unitary group SU(2) is the three sphere 𝕊^3, while the special orthogonal group SO(3) is the three dimensional real projective space ℝℙ^3. The latter coincides with 𝕊^3 with antipodal points ±n⃗∈𝕊^3 glued together (identified).] The Euler top is a textbook example of a Liouville integrable system. The IM that satisfy the conditions of Liouville's theorem are the Hamiltonian H and two more which are built from the angular momentum M: H=∑_a=1^3 I_a ω_a^2/2 , M=∑_a=1^3I_aω_a e_a . For a free moving body the angular momentum is conserved, i.e., Ṁ=0. On the other hand, the total time derivative Ṁ can be written in terms of the canonical Poisson bracket as {H,M}. Hence, the classical observable M Poisson commutes with the Hamiltonian. This way, the three functionally independent involutive Integrals of Motion may be taken to be H , M_Z≡ M· E_3 and M^2=∑_a=1^3I_a^2 ω_a^2 . It follows from Liouville's theorem that the equations of motion for the Euler top can be integrated in quadratures. The solution is discussed in any standard textbook on classical mechanics see, e.g., <cit.>. The rigid body with two of the principal moments of inertia equal I_1=I_2≡ I is usually referred to as the symmetric top. In this case the Lagrangian (<ref>) possesses invariance w.r.t. rotations about the axis e_3. For the symmetric top it is convenient to choose the three functionally independent, involutive IM to be M^2, M_Z and M_3≡ M· e_3. Notice that the Hamiltonian is given in terms of these as H=1/2I M^2+(1/2I_3-1/2I) M_3^2 (I_1=I_2≡ I) . The case I_1=I_2=I_3≡ I is known as the spherical top and the Hamiltonian is proportional to M^2. The field theory generalization of the symmetric top is the anisotropic SU(2) PCF (<ref>), (<ref>), while that of the spherical top is the SU(2) PCF (<ref>), (<ref>). Remarkably, the fully anisotropic SU(2) PCF is also an integrable field theory according to the technical definition given in the introduction. Namely, the equations of motion for the model admit the Zero Curvature Representation (<ref>). To demonstrate the integrability, it is useful to introduce the currents J_i^a via the formula: U^-1 _i U=- ∑_a=1^3 J_i^a t_a (i=0,1) . Then the Euler-Lagrange equations for the model (<ref>), (<ref>) can be written as follows: _-J_+^a+_+J_-^a=I_b-I_c/I_a (J_+^b J_-^c+J_+^c J_-^b) , where (a,b,c) is a cyclic permutation of (1,2,3) while _±=12 (_0±_1) , J_±^a=12 (J_0^a± J_1^a) . Note also the kinematic relations (Bianchi identities) which follow directly from the definition (<ref>): _-J_+^a-_+J_-^a=ϵ_abc J_+^bJ_-^c . The worldsheet connection for the fully anisotropic SU(2) PCF is rather complicated. For this reason we give it first for the case I_1=I_2=I_3=1/2 e^-2 which corresponds to the SU(2) PCF. Then the equations of motion (<ref>) simplify greatly since the term in the r.h.s. vanishes. 
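As a quick numerical illustration of this integrability, Euler's equations for the free top, I_a ω̇_a=(I_b-I_c) ω_b ω_c with (a,b,c) cyclic, can be integrated and the constancy of H and M^2 monitored. The moments of inertia and initial angular velocities in the sketch below are arbitrary.

```python
import numpy as np
from scipy.integrate import solve_ivp

I = np.array([1.0, 2.0, 3.0])          # principal moments of inertia (arbitrary)

def euler_rhs(t, w):
    """Euler's equations for the free top in the moving (body) frame."""
    w1, w2, w3 = w
    return [(I[1] - I[2]) * w2 * w3 / I[0],
            (I[2] - I[0]) * w3 * w1 / I[1],
            (I[0] - I[1]) * w1 * w2 / I[2]]

sol = solve_ivp(euler_rhs, (0.0, 50.0), [0.5, 1.0, -0.3], rtol=1e-10, atol=1e-12, dense_output=True)
for t in (0.0, 25.0, 50.0):
    w = sol.sol(t)
    H = 0.5 * np.sum(I * w**2)          # kinetic energy
    M2 = np.sum((I * w)**2)             # squared angular momentum
    print(f"t = {t:5.1f}:   H = {H:.10f},   M^2 = {M2:.10f}")
```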
The worldsheet connection A_± reads as A_±= J_±^a t_a/1±λ (I_1=I_2=I_3) and one can easily check that as a consequence of eqs. (<ref>) and (<ref>), [_+- A_+, _– A_-]=0 . This ZCR was first proposed in the work <cit.> and is valid for the sigma model associated with any simple Lie group G with J_±^a t_a replaced by - U^-1 _± U. The ZCR for the general case with I_1≠ I_2≠ I_3 was found in <cit.> and presented in a slightly different form in ref.<cit.>. In the following, the conventions of the latter paper will be used. To write the result, we swap the two independent combinations of (I_1, I_2, I_3) that enter into the equations of motion for m and ν according to m=I_2 (I_1-I_3)/I_3 (I_1-I_2) , cn^2(ν,m)=I_1/I_2 , where cn(ν,m) is the Jacobi elliptic function with the parameterm. Together with sn and dn, it satisfies the relations sn^2(ν,m)+ cn^2(ν,m)=1 , m sn^2(ν,m)+ dn^2(ν,m)=1 . The flat worldsheet connection reads explicitly as A_±= ∑_a=1^3w_a(ν∓λ)J_± ^a t_a , where w_1(λ)= sn(ν,m)/sn(λ,m) , w_2(λ)= sn(ν,m)/ cn(ν,m)cn(λ,m)/sn(λ,m) , w_3(λ)= sn(ν,m)/ dn(ν,m)dn(λ,m)/sn(λ,m) . In order to explore the one-loop renormalizability of the fully anisotropic SU(2) PCF, we turn to the analysis of the Ricci flow equation (<ref>). It requires one to calculate the Ricci tensor R_μν corresponding to the target space metric G_μν given in (<ref>). The computation is straightforward and we do not present it here. Instead, we mention the identity: R_μν=∑_a=1^3(I_a-I_b+I_c)(I_a+I_b-I_c)/2I_bI_c / I_a G_μν , where (a,b,c) is a cyclic permutation of (1,2,3). Then it follows that the Ricci flow equation is satisfied if the couplings I_a are assigned a τ dependence such that (see also refs.<cit.>) -1/ħ _τ(I_a I_b)=I_a+I_b-I_c , (a,b,c)= perm(1,2,3) . This constitutes a set of coupled nonlinear ordinary differential equations describing the flow. We found that the system (<ref>) possesses two Liouvillian first integrals.[ Liouvillian first integrals are those that are expressed in quadratures in the dependent variables of the differential equation.] They are given by Q_1=K(1-m)-(1-p) E(1-m)/(1-p) E(m)+p K(m) , Q_2=I^2_1 ((p-1) E(m)-p K(m) )^2/p (p-1) (p m-m+1) , where K(m) and E(m) stand for the complete elliptic integrals of the first and second kind, K(m)=∫^π/2_0 θ/√(1-m sin^2θ) , E(m)=∫^π/2_0 θ √(1-m sin^2θ) , while m=I_2 (I_1-I_3)/I_3 (I_1-I_2) , p=I_1/I_2 . Formula (<ref>) is one of the original results of this paper.[ A set of first integrals of the system (<ref>), similar to (<ref>), have also appeared in the recent work <cit.>. Their results and the ones of our study were achieved independently of each other.] After it was obtained, we discovered that the system of differential equations (<ref>) had been introduced, in a slightly different form, in the work of Darboux <cit.>. Its solution was discussed in refs.<cit.>. The flow of the couplings I_a as a function of τ can be analyzed numerically. The typical behaviour, for generic initial conditions such that all I_a at τ=0 are positive and different, is presented in fig. <ref>. One observes from the figure that the solution of (<ref>), i.e., the Ricci flow equation, remains real and non-singular only within the finite interval τ∈(τ_ min,τ_ max). At the end points one of the couplings goes to zero so that the curvature of the target space blows up. As a result, the one-loop approximation is no longer valid and the perturbative analysis is not sufficient to explore whether or not the model can be defined as a consistent (UV complete) QFT. 
There exists another three parameter family of deformations of the three dimensional round sphere (<ref>). It is the one mentioned in the introduction that was proposed by Fateev in ref. <cit.>. His metric, depending on (e^2,r,l), can be written as G_μν(X) X^μX^ν=-(1+r)(1+l)/e^2 ⟨ U^-1 U , O( U^-1 U)⟩/(1+r)(1+l)-4rl (⟨ U t_3 U^-1, t_3⟩)^2 . Here the operator O, acting on the Lie algebra, is given by O=1+r P_3+l Ad_ U∘ P_3∘ Ad^-1_ U , where Ad_ U stands for the adjoint action of the group: Ad_ U x= U^-1 x U , x∈𝔰𝔲(2) . The Ricci flow equation (<ref>) leads to the system of ordinary differential equations for the three parameters: -1/ħ _τ l = 2 e^2 l (1+l+r)/1+r -1/ħ _τ r = 2 e^2 r (1+l+r)/1+l -1/ħ _τ(e^-2) = (1-l-r) (1+l+r)/(1+l) (1+r) . Notice that for l=0 the metric (<ref>) becomes the one for the anisotropic SU(2) PCF (<ref>), while the above system of differential equations reduces to (<ref>). A remarkable feature of the flow is that e^2(τ), r(τ), l(τ) turn out to be real and non-singular on the half infinite line (-∞,τ_ max) with some real τ_ max. Such solutions of the Ricci flow equation, which can be continued to infinite negative τ, are called `ancient'. That (<ref>) admits ancient solutions suggests that the corresponding NLSM is a consistent QFT. The factorized scattering theory for the model was proposed in ref.<cit.>. The NLSM with metric (<ref>) is an integrable classical field theory. The ZCR for the Euler-Lagrange equations was originally obtained in the work <cit.>. This way, the Fateev model provides an additional example of the link between integrability and one-loop renormalizability in sigma models. § POISSON-LIE DEFORMATION The models discussed above illustrate the connection between integrable NLSM and solutions of the Ricci flow equation. This can be used as a guiding principle for constructing new multiparametric families of metrics that satisfy the Ricci flow. Here we will discuss the so-called Poisson-Lie deformation of integrable NLSM. Such a deformation preserves integrability and allows one to obtain new solutions of the Ricci flow equation. We first illustrate the idea by showing that the anisotropic SU(2) PCF can be obtained as the Poisson-Lie deformation of the SU(2) PCF <cit.>. Then, a new integrable model is constructed by deforming the fully anisotropic SU(2) PCF. §.§ Poisson-Lie deformation of PCF To explain the Poisson-Lie deformation, we start from the Hamiltonian formulation of the model. The latter, in the case of the SU(2) PCF, can be described using the currents J_i^a(<ref>). It follows from the Lagrangian (<ref>), (<ref>) that they form a closed Poisson algebra <cit.>:[ In our discussion of the Poisson-Lie deformation we set e^2=1. Since it appears in an overall factor multiplying the Lagrangian, this has no effect on the classical equations of motion.]{J_0^a(x), J_0^b(y) } = ϵ_abc J_0^c(x) δ(x-y) {J_0^a(x), J_1^b(y) } = ϵ_abc J_1^c(x) δ(x-y)- δ^ab δ'(x-y) {J_1^a(x), J_1^b(y) } = 0 . These are understood to be equal-time relations with x^0=y^0, while x≡ x^1 and y≡ y^1 are the space coordinates (the dependence of the currents on the time variable has been suppressed). The Hamiltonian is obtained by means of the Legendre transform and is given by H=1/2∫x ∑_a=1^3(J_0^a J_0^a+J_1^a J_1^a) . One can check that the Hamiltonian equations of motion, Ȯ={H, O }, for the currents are equivalent to eqs. (<ref>) and (<ref>) with I_1=I_2=I_3, i.e., _- J_+^a=12 ϵ_abc J_+^bJ_-^c , _+ J_-^a=12 ϵ_abc J_-^bJ_+^c . 
The Poisson algebra (<ref>) admits a certain deformation which preserves its defining properties, namely, skew-symmetry, the Jacobi and Leibniz identities. The deformed Poisson bracket relations read explicitly as {J̃_0^a(x), J̃_0^b(y) } = 1/1+r ϵ_abc J̃_0^c(x) δ(x-y) {J̃_0^a(x), J̃_1^b(y) } = 1/1+r ϵ_abc J̃_1^c(x) δ(x-y)- δ^ab δ'(x-y) {J̃_1^a(x), J̃_1^b(y) } = -r/1+r ϵ_abc J̃_0^c(x) δ(x-y) . Here r plays the role of the deformation parameter and we switch the notation from J_i^a to J̃_i^a as the above Poisson brackets will be associated with a different classical field theory. Remarkably, with the same form of the Hamiltonian as (<ref>), i.e., H̃=1/2 ∫x ∑_a=1^3(J̃_0^a J̃_0^a+J̃_1^a J̃_1^a) the equations of motion do not depend on the deformation parameter. Namely, they coincide with (<ref>) upon replacing J_±^a by J̃_±^a=1/2(J̃_0^a±J̃_1^a). This means that the Hamiltonian system defined through (<ref>) and (<ref>) is integrable by construction. The corresponding flat connection entering into the ZCR takes the same form as for the SU(2) PCF (<ref>) but written in terms of the currents J̃^a_±: A_±= J̃^a_± t_a/1±λ . The obtained classical field theory is called the Poisson-Lie deformation of the SU(2) PCF. The final and technically most involved step of the procedure is to derive the Lagrangian of the deformed model. It is well known in classical mechanics how to get from the Hamiltonian to the Lagrangian picture. Consider a mechanical system with a finite number of degrees of freedom d. The Poisson brackets are defined on the algebra of functions on the 2d-dimensional phase space. In local coordinates (z^1,…, z^2d) they are given by {f,g }=Ω^AB f/ z^A g/ z^B . Since the Poisson brackets are assumed to be non-degenerate, the inverse of the contravariant tensor Ω^AB exists and we will denote it as Ω_AB. Due to skew-symmetry of the Poisson brackets, the covariant tensor Ω_AB is antisymmetric, i.e., it defines a two-form as Ω=Ω_AB z^A∧ z^B. Moreover, the Jacobi identity implies that the form is closed, Ω=0. This allows one to write Ω as an exact form, Ω=α, at least locally. The action is expressed in terms of the one-form α and the Hamiltonian as S=∫(α-H t) with the integral being taken over a path in the phase space parameterized by the time t. According to the Darboux theorem there exists (locally) a set of canonical variables (q^1,…, q^d, p_1,… p_d) such that α=∑_m=1^d p_m q^m. Then the Lagrangian associated with the action (<ref>) is given by L=∑_m=1^d p_m q̇^m-H . This can be interpreted as the Legendre transform of H where the canonical momenta p_m are replaced by q̇^m as the independent variables. In order to apply the above procedure to the infinite dimensional Hamiltonian structure (<ref>), (<ref>), it is useful to realize the Poisson algebra in terms of the fields, similar to the canonical variables p_m and q^m in the finite dimensional case. For this reason we introduce local coordinates X^μ on the group manifold and the corresponding canonical momentum densities Π_μ. They obey the Poisson bracket relations {Π_μ(x),X^ν(y)}=δ_μ^ν δ(x-y) , {X^μ(x),X^ν(y)}={Π_μ(x),Π_ν(y)}=0 . In the case r=0, when (<ref>) becomes the undeformed algebra (<ref>), the currents K_i≡J̃_i^a t_a|_r=0 (i=0,1) can be expressed in terms of the canonical fields in the following way; first, define the 3× 3 matrix E^a _μ through the relation U U^-1= E^a _μ X^μ t_a . Its inverse will be denoted by E^μ a so that E^a _μ E^μ b=δ^a b. 
Then with the choice K_0= E^μ a Π_μ t_a, K_1= E^a _μ _1X^μ t_a=- _1 U U^-1 one can check via a direct computation that the Poisson algebra (<ref>) with J_i^a replaced by the components of K_i is satisfied. In fact, the r.h.s. of the first equation in (<ref>) is just - _0 U U^-1 written in terms of the canonical fields for the PCF.[We realise the algebra (<ref>) using the `left' currents K_i=- _i U U^-1 rather than the `right' ones J^a_i t_a= U^-1 _i U(<ref>) for future convenience. The latter obey the same Poisson bracket relations (<ref>).] For general r≠0 one should first apply the linear transformation I^a=1+r/2√(r) (√(r) J̃_0^a+ J̃_1^a) , J^a=1+r/2√(r) (√(r) J̃_0^a- J̃_1^a) . This brings the closed Poisson algebra (<ref>) to the form: { I^a(x), I^b(y) } = ϵ_abc I^c(x) δ(x-y)-k δ_ab δ'(x-y) { J^a(x), J^b(y) } = ϵ_abc J^c(x) δ(x-y)+k δ_ab δ'(x-y) { I^a(x), J^b(y) } = 0 , where k= (1+r)^2/2√(r) , which is a direct sum of two independent so-called SU(2) current algebras. It turns out that the Poisson algebra generated by I^a and J^a can be formally realised in terms of the currents K_i(<ref>) as well as the group valued field U∈ SU(2). The explicit formula, along with its verification, is contained in ref. <cit.> and is given by I^a t_a = 12 (1- Ad^-1_ U∘ R∘ Ad_ U) K_0+k K_1 J^a t_a = 12 (1+ Ad^-1_ U∘ R∘ Ad_ U) K_0-k K_1 . Here Ad_ U stands for the adjoint action of the group, see eq. (<ref>), while the linear operator R: 𝔰𝔲(2)↦𝔰𝔲(2) is defined via its action on the generators as R( t_1)= t_2 , R( t_2)=- t_1 , R( t_3)=0 . Formulae (<ref>), (<ref>) and (<ref>) allow one to realize the currents J̃_0^a and J̃_1^a, satisfying the Poisson bracket relations (<ref>), through the canonical fields (<ref>). The corresponding expression for the Hamiltonian follows from (<ref>). In the basis of canonical variables the construction of the Lagrangian is straightforward and is the field theory analogue of the Lengedre transform (<ref>). Applying the procedure, where Π_μ maps to Ẋ^μ={H̃, X^μ}, one arrives at the Lagrangian density L=-1+r/e^2 ⟨ U^-1 _+ U , O ( U^-1 _- U)⟩ with O=(1-√(r) R )^-1 . Here the dependence on e^2 was restored and we performed the substitution e^2↦ (1+r) e^2 to keep with the conventions of section <ref>. At first glance, in local coordinates, L can not be written in the form (<ref>). Instead, the latter should be modified as L=2 G_μν(X) _+X^μ _-X^ν-B_μν(X) (_+X^μ _-X^ν-_-X^μ _+X^ν) . Here the last term is not invariant w.r.t. the parity transformation x^1↦ -x^1, i.e., ∂_±↦∂_∓ and comes about because the Lagrangian density (<ref>) is not either. Models of this type motivate a generalization of the NLSM where the target space is additionally equipped with a two form B=B_μν X^μ∧X^ν known as the B-field <cit.>. It turns out that in the SU(2) case the B-field corresponding to L(<ref>) is an exact form. As a result, the term ∝ B_μν in (<ref>) is a total derivative and has no effect on the Euler-Lagrange equations. This way, for the SU(2) case, the obtained sigma model is equivalently described by (<ref>) where G_μν(X) X^μX^ν=-1+r/e^2 ⟨ U^-1 U , O_ sym ( U^-1 U)⟩ with O_ sym=(1+r-r P_3 )^-1 and P_3=1+R^2∈ End(𝔰𝔲(2) ) stands for the projector on the Cartan subalgebra generated by t_3. This way we arrive at the metric of the anisotropic SU(2) PCF (<ref>). It was discussed in section <ref> that the anisotropic SU(2) PCF is a integrable classical field theory. 
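The last step — that only the symmetric part of O contributes to the SU(2) metric and that it coincides with O_sym — can be checked in a few lines by representing R and the projector P_3 as 3×3 matrices in the basis (t_1, t_2, t_3); the value of the deformation parameter r below is an arbitrary sample:

```python
# Check, in the basis (t_1, t_2, t_3), that the symmetric part of O = (1 - sqrt(r) R)^{-1}
# equals O_sym = (1 + r - r P_3)^{-1}; r is an arbitrary sample value.
import numpy as np

r = 0.37
R = np.array([[0., -1., 0.],
              [1.,  0., 0.],
              [0.,  0., 0.]])       # R t_1 = t_2,  R t_2 = -t_1,  R t_3 = 0
P3 = np.diag([0., 0., 1.])          # projector on the Cartan direction t_3

O = np.linalg.inv(np.eye(3) - np.sqrt(r) * R)
O_sym = np.linalg.inv((1.0 + r) * np.eye(3) - r * P3)
print(np.allclose(0.5 * (O + O.T), O_sym))   # True: the antisymmetric part is the B-field piece
```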
Having established that the model is a Poisson-Lie deformation of the SU(2) PCF, we obtain a way to derive the Zero Curvature Representation for the classical equations of motion. Namely, the flat connection is given by (<ref>) where the currents J̃_±^a=1/2(J̃_0^a±J̃_1^a) entering therein read as J̃_±^a t_a=(1+r) Ad_ U∘ (1±√(r) R) (_± U U^-1) . Indeed, as it follows from the Euler-Lagrange equations for the model (<ref>), _- J̃_+^a=12 ϵ_abc J̃_+^bJ̃_-^c , _+ J̃_-^a=12 ϵ_abc J̃_-^bJ̃_+^c . The following comment is in order here. The anisotropic SU(2) PCF admits an integrable generalization, where U belongs to an arbitrary simple Lie group G. The Lagrangian is still given by (<ref>) with R being a certain linear operator which is usually referred to as the Yang-Baxter operator. It acts on the Lie algebra 𝔤= Lie(G) and is required to satisfy a skew symmetry condition and the so-called modified Yang-Baxter equation <cit.>. A possible choice obeying the two properties is specified using the Cartan-Weyl decomposition of the simple Lie algebra, 𝔤=𝔫_+⊕𝔥⊕𝔫_-, where 𝔥 stands for the Cartan subalgebra and 𝔫_± are the nilpotent ones. Namely, the linear operator R is unambiguously defined through the conditions R ( e_±)=∓ e_± , R( h)=0 (∀ e_±∈𝔫_±, ∀ h∈𝔥) . The NLSM (<ref>) with R being the Yang-Baxter operator was introduced by Klimçík in ref. <cit.> who called it the Yang-Baxter sigma model. Written in terms of local coordinates, the Lagrangian takes the form (<ref>) where, for general group, the second term ∝ B_μν is no longer a total derivative and cannot be ignored. The model is classically integrable and the corresponding flat connection is given by the same formulae (<ref>) and (<ref>)<cit.>. The Yang-Baxter sigma model also turns out to be a one-loop renormalizable field theory. The proof is based on the extension of the results of the works <cit.> to the case of an NLSM equipped with a B-field that was carried out in ref.<cit.>, see also the textbook <cit.>. The one-loop RG flow equations are modified from (<ref>) as _τ G_μν = -ħ (R_μν-14H_μ ^σρH_σρν)+O(ħ^2) _τ B_μν = -12 ħ ∇_σ H^σ _μν+O(ħ^2) . Here H_μνλ are the components of the so-called torsion tensor. It is given by the exterior derivative of the B-field, i.e., H_μνλ=_μ B_νλ+_ν B_λμ+_λ B_μν . For the model (<ref>), (<ref>) with U belonging to a simple Lie group, the above equations boil down to a system of ordinary differential equations on e^2 and r. They read as <cit.> -1/ħ _τ(e^-2) = 12 C_2 (1-r) -1/ħ _τ r = C_2 e^2 r (r+1) . Remarkably, the only dependence on the group appears through the overall factor proportional to the value of the quadratic Casimir in the adjoint representation. We have just discussed that the Poisson-Lie deformation of the PCF yields the Yang-Baxter sigma model. The latter itself can be deformed along the similar line of arguments <cit.>, see also <cit.> as well as fig. <ref> for a summary. In the case of G= SU(2) the obtained theory turns out to be the Fateev model, i.e., the sigma model with target space metric (<ref>). For a general simple Lie group G, the two parameter deformation of the PCF was introduced by Klimčík in ref.<cit.>. The corresponding Lagrangian involves the Yang-Baxter operator R and is given by L=-2(1+r)(1+l)/e^2⟨ U^-1 _+ U , O ( U^-1 _- U)⟩ with O=(1-√(l(1+r)) R-√(r(1+l)) Ad_ U∘ R∘ Ad_ U^-1)^-1 . For U∈ SU(2) the B-field turns out to define an exact two form and has no effect on the equations of motion. 
It was shown in <cit.> by an explicit computation that the metric is equivalent to (<ref>). For arbitrary simple Lie group G the model (<ref>) is classically integrable and the ZCR was found in ref. <cit.>. One-loop renormalizability was demonstrated in the work <cit.> using the results of <cit.>. The differential equations describing the flow of the couplings (e^2,r,l) are -1/ħ _τ(e^-2) = C_2 (1-l-r) (l+r+1)/2 (1+l) (1+r) -1/ħ _τ r = C_2 e^2 r (l+r+1)/1+l -1/ħ _τ l = C_2 e^2 l (l+r+1)/1+r . They essentially coincide with (<ref>) which were derived in Fateev's original paper <cit.>. §.§ Poisson-Lie deformation of fully anisotropic SU(2) PCF Here we obtain a new clasically integrable NLSM as a Poisson-Lie deformation of the fully anisotropic SU(2) PCF. The procedure closely follows that which was explained above on the example of the SU(2) PCF. The Hamiltonian for the fully anisotropic SU(2) PCF (<ref>), (<ref>), written in terms of the currents (<ref>), is given by H=1/2 ∑_a=1^3∫ x I_a (J_0^aJ_0^a+ J_1^aJ_1^a ) , while the equal-time Poisson bracket relations for J_i^a read as {J_0^a(x), J_0^b(y) } = I_c/I_a I_b ϵ_abc J_0^c(x) δ(x-y) {J_0^a(x), J_1^b(y) } = 1/I_a ϵ_abc J_1^c(x) δ(x-y)-1/I_a δ^ab δ'(x-y) {J_1^a(x), J_1^b(y) } = 0 . The above Poisson algebra admits a deformation of the form {J̃_0^a(x), J̃_0^b(y) } = I_c-ξ/I_a I_b ϵ_abc J̃_0^c(x) δ(x-y) {J̃_0^a(x), J̃_1^b(y) } = I_b-ξ/I_a I_b ϵ_abc J̃_1^c(x) δ(x-y)-1/I_a δ^ab δ'(x-y) {J̃_1^a(x), J̃_1^b(y) } = -ξ/I_a I_b ϵ_abc J̃_0^c(x) δ(x-y) depending on the extra parameter ξ. Then, with the Hamiltonian H̃=1/2 ∑_a=1^3∫ x I_a (J̃_0^aJ̃_0^a+ J̃_1^aJ̃_1^a ) , which is formally the same as (<ref>) but expressed in terms of the new currents J̃_i^a, the Hamiltonian equations of motion imply _-J̃_+^a = I_a+I_b-I_c/2I_a J̃_+^b J̃_-^c-I_a-I_b+I_c/2I_a J̃_+^c J̃_-^b _+J̃_-^a = I_a+I_b-I_c/2I_a J̃_-^b J̃_+^c-I_a-I_b+I_c/2I_a J̃_-^c J̃_+^b . Here (a,b,c)= perm(1,2,3) and summation over repeated indices is not being assumed. The equations (<ref>) are equivalent to (<ref>), (<ref>) up to the replacement J_i^a↦J̃_i^a. The currents J̃_i^a obeying the Poisson bracket relations (<ref>) can be realized in terms of the fields X^μ and Π_μ subject to the canonical commutation relations (<ref>). This is done along the same line of arguments as was discussed in the previous subsection. Namely, one first considers certain linear combinations of J̃_i^a which obey two independent copies of the classical SU(2) current algebra (<ref>) with k being a certain function of the couplings I_a and deformation parameter ξ. Then realizing I and J in terms of the canonical variables (see formulae (<ref>), (<ref>)) and performing the Legendre transform of the Hamiltonian (<ref>), one obtains the Lagrangian of the deformed theory. The result of the calculations reads as L=-4⟨ U^-1∂_- U , O_+ ( U^-1∂_+ U) ⟩ , where a certain choice of the overall multiplicative factor for the Lagrangian density was made. Here and below we use the notation O_± for the linear operators acting on the Lie algebra 𝔰𝔲(2) given by O_±=(1/I_1-ξ P_1+1/I_2-ξ P_2+1/I_3-ξ P_3±√(γ) Ad_ U∘ R∘ Ad^-1_ U)^-1 with γ= ξ/(I_1-ξ)(I_2-ξ)(I_3-ξ) . The Lagrangian density (<ref>) is formally not invariant under the parity transformation x^1↦ - x^1 (so that _±↦_∓). Nevertheless, the theory possesses this symmetry. The reason is because in local coordinates, where L(<ref>) takes the form (<ref>), the term ∝ B_μν turns out to be a total derivative. 
Thus one is free to replace O_+ in (<ref>) by O_ sym=1/2 ( O_++ O^ T_+) , where the transposition is defined by the condition ⟨ x , O_+ y⟩=⟨ O^ T_+ x , y⟩ for any x, y∈𝔰𝔲(2). This way, the target space metric for the deformed sigma model can be written as G_μν(X) X^μX^ν=- 2 ⟨ U^-1 U , O_ sym( U^-1 U)⟩ . It is worth mentioning that for I_1=I_2 this becomes the Fateev metric (<ref>), (<ref>) upon the identification of parameters: I_1=I_2=(1+l)^2/2e^2 , I_3=(1+l) (1+l+r)/2e^2 , ξ=l (l+1)/2e^2 . By construction the obtained model (<ref>) is a classically integrable field theory. The corresponding flat connection takes the same form as for the fully anisotropic SU(2) PCF, i.e., A_±= ∑_a=1^3w_a(ν∓λ)J̃_± ^a t_a , where the functions w_a(λ) are given in (<ref>) and (<ref>). The formula for the currents J̃^a_± in terms of the SU(2) element U reads as J̃^a_±= C_a ⟨ O_± ( U^-1∂_± U), t_a⟩ , C_a=2/I_a-ξ √(I_b I_c/(I_b-ξ) (I_c-ξ)) with (a,b,c)= perm(1,2,3). One can check that the metric (<ref>)-(<ref>) satisfies the Ricci flow equation (<ref>). The parameter γ defined in (<ref>) turns out to be an RG invariant, i.e., 1/ħ _τγ=O(ħ) . As for the couplings I_a=I_a(τ), it is convenient to swap these in favour of Ĩ_a according to Ĩ_a≡ I_a-ξ . The latter obey the RG flow equations -1/ħ _τ(Ĩ_a Ĩ_b)= (1+γ Ĩ_a Ĩ_b ) (Ĩ_a+Ĩ_b-Ĩ_c+γ Ĩ_a Ĩ_b Ĩ_c )+O(ħ) , (a,b,c)= perm(1,2,3) , which may be compared to the underformed case (<ref>). The derivation of the above formulae uses the property that the Ricci tensor corresponding to the metric (<ref>)-(<ref>) can be written as R_μν=∑_a=1^3(Ĩ_a-Ĩ_b+Ĩ_c+γ Ĩ_a Ĩ_b Ĩ_c ) (Ĩ_a+Ĩ_b-Ĩ_c+γ Ĩ_a Ĩ_b Ĩ_c )/2Ĩ_b Ĩ_c ( G_μν/Ĩ_a)_γ , which generalizes the relation (<ref>). For γ=0 the two first integrals of (<ref>) coincide with Q_1, Q_2(<ref>) - (<ref>) with I_a↦Ĩ_a. We found that these RG invariants admit a deformation to arbitrary γ. The explicit expressions involve, apart from m̃=Ĩ_2 (Ĩ_1-Ĩ_3)/Ĩ_3 (Ĩ_1-Ĩ_2) , p̃=Ĩ_1/Ĩ_2 , also m(<ref>), which enters into the functions w_a that appear in the flat connection (<ref>). In terms of the parameters Ĩ_a, it is given by m= Ĩ_2 (Ĩ_1-Ĩ_3) (1+γ Ĩ_1 Ĩ_3) / Ĩ_3 (Ĩ_1-Ĩ_2) (1+γ Ĩ_1 Ĩ_2) . The two first integrals of the system (<ref>) read as Q_1^(γ) = K(1- m)-(1-p̃) m̃ Π(1-m̃,1-m)/(1-p̃)(1-m̃) Π(m̃,m)+p̃ K( m) Q_2^(γ) = m̃-m/ γ ((p̃-1)(1-m̃) Π(m̃,m) -p̃ K(m))^2/(p̃-1)^2 m̃ (1-m̃) , where Π(m̃, m) is the complete elliptic integral of the third kind: Π(m̃, m)=∫_0^1 t/(1-m̃ t^2)√((1-t^2)(1- m t^2)) . It is straightforward to check that Q^(0)_1=Q_1, while lim_γ→ 0 Q^(γ)_2=Q_2. § SUMMARY AND DISCUSSION In this work we explored the interplay between integrability and one-loop renormalizability for NLSM in 1+1 dimensional spacetime. Our main example was the fully anisotropic SU(2) PCF. On the one hand, it was explained that this is a clasically integrable field theory and the Zero Curvature Representation for its equations of motion was reviewed. On the other, the corresponding target space metric satisfies the Ricci flow equation (<ref>) so that the fully anisotropic SU(2) PCF is one-loop renormalizable within a three dimensional space of couplings. The system of ODEs describing the flow was derived and its full set of first integrals was obtained, independently from <cit.>. Another main result is the construction of a classically integrable NLSM depending on four parameters whose Lagrangian density is given by (<ref>)-(<ref>). It was found by applying a Poisson-Lie deformation to the fully anisotropic SU(2) PCF. 
The corresponding target space metric turned out to provide a new solution to the Ricci flow equation. The first integrals to the system of ODEs (<ref>) and (<ref>), which describe the flow of the four couplings, were derived in the course of this work and are given in (<ref>). The class of theories that we discussed admit a modification such that they remain one-loop renormalizable. This is achieved by adding the so-called Wess-Zumino term to the action. The Lagrangian takes the form (<ref>) with the B-field no longer being exact. This implies that the target space, together with the Riemannian metric G_μν, is equipped with the affine connection with non-vanishing torsion H= B<cit.>. In the case of SU(2), the 3-form H is proportional to the volume form for the group and can be written as H≡ B= k/48π ⟨[ U^-1 U ∧, U^-1 U] ∧, U^-1 U⟩ . Here k is an additional parameter of the model. In the classical theory there is no contraint on the values it may take, however, upon quantization it is required to be an RG invariant and, furthermore, must be an integer <cit.>. For the case of the fully anisotropic SU(2) PCF with Wess-Zumino term, the one-loop RG flow equations (<ref>) imply the system of ODEs for the couplings: -1/ħ _τ(I_a I_b) = I_a+I_b-I_c- k^2/I_c+O(ħ) , (a,b,c)= perm(1,2,3) 1/ħ _τ k = 0 . It possesses two Liouvillian first integrals, which are a simple generalization of (<ref>) and in terms of p and m(<ref>) take the form Q_1 = K(1-m)-(1-p) E(1-m)/(1-p) E(m)+p K(m) , Q_2 = I_1^2 ((p-1)E(m)-pK(m))^2/p (p-1)(pm-m+1) + k^2/p-1 K(m) ((p-1)E(m)-pK(m)) . A complete analysis of the behaviour of the solutions to (<ref>) has not been carried out yet. Moreover, the classical integrability of the model has not been established and the Zero Curvature Representation, if it exists, remains unknown to us. These would be interesting questions to pursue in future work. They can also be addressed for the Poisson-Lie deformed theory. Our work was mainly focused on sigma models associated with the Lie group SU(2). Nevertheless, we expect it to be possible to generalize the Poisson-Lie deformed theory constructed here to the case of higher rank Lie groups. One way to approach the problem uses the results of ref.<cit.>. In that paper, a classically integrable NLSM is introduced, which is a two parameter deformation of the PCF for Lie group SL(N). For N=2 it coincides with the fully anisotropic SU(2) PCF (upon an appropriate choice of reality conditions on the fields and parameters). We expect that this sigma model may also be deformed along the line of arguments presented in sec. <ref>. Another possibility for constructing integrable deformations, based on the formalism of the so-called affine Gaudin model, is mentioned in the perspectives section of ref.<cit.>. Finally, classically integrable multiparametric families of sigma models are of interest to string theory. In particular, the possibility of an integrable elliptic deformation of strings on Ad_3× S^3× T^4 was investigated in the recent paper <cit.>. § ACKNOWLEDGMENTS The authors would like to thank Sergei Lukyanov for collaboration at the early stages of this work and for his continued interest and support. GK acknowledges discussions with Sylvain Lacroix. Part of this work was carried out during GK's visits to the NHETC at Rutgers University. He is grateful for the support and hospitality he received during the stays. The research of DS is supported by the NSF under grant number NSF-PHY-2210187. 100Fuller F. B. 
Fuller, Harmonic mappings, https://www.jstor.org/stable/89361Proc. Natl. Acad. Sci. 40 (1954) 987. G-L M. Gell-Mann, M. Levy, The axial vector current in beta decay, https://doi.org/10.1007/BF02859738Nuovo Cim. 16 (1960) 705. Polyakov A. M. Polyakov, Gauge fields and strings, https://doi.org/10.1201/9780203755082Harwood (1987). Polyakov1 A. M. Polyakov, Interaction of Goldstone particles in two dimensions. Applications to ferromagnets and massive Yang-Mills fields, https://doi.org/10.1016/0370-2693(75)90161-6Phys. Lett. B 59 (1975) 79. ZZ A. B. Zamolodchikov, Al. B. Zamolodchikov, Factorized S-matrices in two dimensions as the exact solutions of certain relativistic quantum field theory models, https://doi.org/10.1016/0003-4916(79)90391-9 Ann. Phys. 120 (1979) 253. Friedan D. Friedan, Nonlinear models in2+ϵ dimensions, https://doi.org/10.1103/PhysRevLett.45.1057Phys. Rev. Lett. 45 (1980) 1057. EH G. Ecker, J. Honerkamp, Application of invariant renormalization to the nonlinear chiral invariant pion Lagrangian in the one-loop approximation, https://doi.org/10.1016/0550-3213(71)90468-8Nucl. Phys. B 35 (1971) 481. H R. Hamilton, Three manifolds with positive Ricci curvature, https://doi.org/DOI: 10.4310/jdg/1214436922J. Diff. Geom. 17 (1982) 255. Perelman1 G. Perelman, The entropy formula for the Ricci flow and its geometric applications, preprint (2002) pp.39 https://doi.org/10.48550/arXiv.math/0211159 [arXiv:math/0211159]. Perelman2 G. Perelman, Ricci flow with surgery on three-manifolds, preprint (2003) pp.22 https://doi.org/10.48550/arXiv.math/0303109 [arXiv:math/0303109]. Fateev V. A. Fateev, The sigma model (dual) representation for a two-parameter family of integrable quantum field theories, https://doi.org/10.1016/0550-3213(96)00256-8Nucl. Phys. B 473 (1996) 509. Lukyanov S. L. Lukyanov, The integrable harmonic map problem versus Ricci flow, https://doi.org/10.1016/j.nuclphysb.2012.08.002Nucl. Phys. B 865 (2012) 308https://arxiv.org/abs/1205.3201 [arXiv:hep-th/1205.3201]. Pr C. Gardner, J. Greene, M. Kruskal, R. Miura, Method for solving the Korteweg-de Vries equation, https://doi.org/10.1103/PhysRevLett.19.1095Phys. Rev. Lett. 19 (1967) 1095. Lax P. D. Lax, Integrals of nonlinear equations of evolution and solitary waves, https://doi.org/10.1002/cpa.3160210503Comm. Pure Appl. Math. 21 (1968) 467. ZS V. E. Zakharov, A. B. Shabat, Exact theory of two-dimensional self-focusing and one-dimensional self-modulation of waves in nonlinear media, http://zakharov75.itp.ac.ru/static/local/zve75/zakharov/1972/1972-05-e_034_01_0062.pdfJ. of Exp. Theor. Phys. 34 (1970) 62. Faddeev L. D. Faddeev, L. A. Takhtajan, Hamiltonian methods in the theory of solitons, https://doi.org/10.1007/978-3-540-69969-9Springer (2007). Klimchick3 C. Klimčík, On integrability of the Yang-Baxter sigma-model, https://doi.org/10.1063/1.3116242J. Math. Phys. 50 (2009) 043508https://arxiv.org/abs/0802.3518 [arXiv:hep-th/0802.3518]. Lukyanov:2019asr S. L. Lukyanov, A. B. Zamolodchikov, Integrability in 2D fields theory/sigma-models, in P. Dorey, G. Korchemsky, N. Nekrasov, V. Schomerus, D. Serban, L. Cugliandolo (eds.), Integrability: From Statistical Systems to Gauge Theory: Lecture Notes of the Les Houches Summer School: Volume 106 June 2016, https://doi.org/10.1093/oso/9780198828150.003.0006Oxford University Press (2019). Landau L. D. Landau, E. M. Lifshitz, Mechanics, https://doi.org/10.1002/zamm.19610410910Pergamon Press (1960). ZM V. E. Zakharov, A. V. 
Mikhailov, Relativistically invariant two-dimensional models of field theory which are integrable by means of the inverse scattering problem method, http://zakharov75.itp.ac.ru/static/local/zve75/zakharov/1978/1978-03-e_047_06_1017.pdfJ. Exp. Theor. Phys. 74 (1978) 1953. Cerednik I. V. Cherednik, Relativistically invariant quasiclassical limits of integrable two-dimensional quantum models, https://doi.org/10.1007/BF01086395Theor. Math. Phys. 47 (1981) 422. EllipticLax C. Appadu, T. J. Hollowood, D. Price, D. C. Thompson, Yang-Baxter and anisotropic sigma and lambda models, cyclic RG and exact S-matrices, https://doi.org/10.1007/JHEP09(2017)035JHEP 9 (2017) 35https://arxiv.org/abs/1706.05322 [arXiv:hep-th/1706.05322]. Bakas:2006bz I. Bakas, D. Orlando, P. M. Petropoulos, Ricci flows and expansion in axion-dilaton cosmology, https://doi.org/10.1088/1126-6708/2007/01/040JHEP 01 (2007) 040https://arxiv.org/abs/hep-th/0610281 [arXiv:hep-th/0610281]. Sfetsos:2014jfa K. Sfetsos, K. Siampos, Gauged WZW-type theories and the all-loop anisotropic non-Abelian Thirring model, https://doi.org/10.1016/j.nuclphysb.2014.06.012Nucl. Phys. B 885 (2014) 583https://arxiv.org/abs/1405.7803 [arXiv:hep-th/1405.7803]. Lacroix:2024wrd S. Lacroix and A. Wallberg, Geometry of the spectral parameter and renormalisation of integrable sigma-models, https://doi.org/10.1007/JHEP05(2024)108JHEP 05 (2024) 108https://arxiv.org/abs/2401.13741 [arXiv:hep-th/2401.13741]. Darboux G. Darboux, Mémoire sur la théorie des coordonnées curvilignes, et des systèmes orthogonaux, http://eudml.org/doc/80825Ann. Sci. Éc. Norm. Supér. 7 (1878) 101. Halphen1 G. H. Halphen Sur un système d'équations différentielles, C.R. Acad. Sc. Paris 92 (1881) 1001. Halphen2 G. H. Halphen Sur certains systèmes d'équations différentielles, C.R. Acad. Sc. Paris 92 (1881) 1004. DMV F. Delduc, M. Magro, B. Vicedo, On classical q-deformations of integrable σ-models, https://doi.org/10.1007/JHEP11(2013)192JHEP 11 (2013) 192https://arxiv.org/abs/1308.3581 [arXiv:hep-th/1308.3581]. Braaten E. Braaten, T. L. Curtright, C. K. Zachos, Torsion and geometrostasis in nonlinear sigma models, https://doi.org/10.1016/0550-3213(85)90053-7Nucl. Phys. B 260 (1985) 630. STS M. A. Semenov-Tian-Shansky, Dressing transformations and Poisson group actions, https://doi.org/10.2977/prims/1195178514Publ. RIMS Kyoto Univ. 21 (1985) 1237. Klimcik6 C. Klimčík, Yang-Baxter σ-models and dS/AdST-duality, https://iopscience.iop.org/article/10.1088/1126-6708/2002/12/051JHEP 12 (2002) 51 https://arxiv.org/abs/hep-th/0210095 [arXiv:hep-th/0210095]. GSW M. B. Green, J. H. Schwarz, E. Witten, Superstring Theory, https://doi.org/10.1017/CBO9781139248563Cambridge University Press (2012). Squellari R. Squellari, Yang-Baxter σ model: Quantum aspects, https://doi.org/10.1016/j.nuclphysb.2014.02.009Nucl. Phys. B 881 (2014) 502 https://www.arxiv.org/abs/1401.3197 [arXiv:hep-th/1401.3197]. Gleb V. V. Bazhanov, G. A. Kotousov, S. L. Lukyanov, On the Yang-Baxter Poisson algebra in non-ultralocal integrable systems, https://doi.org/10.1016/j.nuclphysb.2018.07.016Nucl. Phys. B 934 (2018) 529https://arxiv.org/abs/1805.07417 [arXiv:hep-th/1805.07417]. Hoare B. Hoare, R. Roiban, A. A. Tseytlin, On deformations of AdS_n× S^n supercosets, https://doi.org/10.1007/JHEP06Klimchick4 C. Klimčík, Integrability of the bi-Yang-Baxter sigma-model, https://doi.org/10.1007/s11005-014-0709-yLett. Math. Phys. 104 (2014) 1095https://arxiv.org/abs/1402.2105 [arXiv:math-ph/1402.2105]. SST K. Sfetsos, K. Siampos, D. C. 
Thompson, Generalised integrable λ-and η-deformations and their relation, https://doi.org/10.1016/j.nuclphysb.2015.08.015Nucl. Phys. B 899 (2015) 489https://arxiv.org/abs/1506.05784 [arXiv:hep-th/1506.05784]. VKQ G. Valent, C. Klimčík, R. Squellari, One-loop renormalizability of the Poisson-Lie sigma models, https://doi.org/10.1016/j.physletb.2009.06.001Phys. Lett. B 678 (2009) 143https://arxiv.org/abs/0902.1459 [arXiv:hep-th/0902.1459]. Witten1 E. Witten, Global aspects of current algebra, https://doi.org/10.1016/0550-3213(83)90063-9Nucl. Phys. B 223 (1983) 422. Lacroix:2023qlz S. Lacroix and A. Wallberg, An elliptic integrable deformation of the Principal Chiral Model, https://doi.org/10.1007/JHEP05(2024)006JHEP 05 (2024) 006https://arxiv.org/abs/2311.09301 [arXiv:hep-th/2311.09301]. Hoare:2023zti B. Hoare, A. L. Retore and F. K. Seibold, Elliptic deformations of the AdS_3× S^3× T^4 string, https://doi.org/10.1007/JHEP04(2024)042JHEP 04 (2024) 042https://arxiv.org/abs/2312.14031 [arXiv:hep-th/2312.14031].
http://arxiv.org/abs/2406.18141v1
20240626074640
Dispersion-induced $Q$-factor enhancement in waveguide-coupled surface lattice resonances
[ "Jussi Kelavuori", "Ali Panahpour", "Mikko J. Huttunen" ]
physics.optics
[ "physics.optics" ]
[ Dispersion-induced Q-factor enhancement in waveguide-coupled surface lattice resonances Jussi Kelavuori, Ali Panah Pour, and Mikko J. Huttunen Photonics Laboratory, Physics Unit, Tampere University, FI-33014 Tampere, Finland ] § ABSTRACT Diffractively coupled nanoparticle arrays are promising candidates for helping to flatten many photonic devices such as lasers, lenses, and metrology instruments. Their performance, however, is directly linked with the size of the metasurfaces, limiting their applicability in nanophotonic applications. Here, we dramatically reduce array sizes of high-Q-factor metasurfaces by utilizing strongly dispersive media. The effect is demonstrated by theoretically and numerically studying periodic arrays of plasmonic nanoparticles embedded inside Bragg-reflector waveguides. We demonstrate array dimensions reduction up to two orders of magnitude while still achieving ultra-high Q-factors in excess of 10^4. § INTRODUCTION Plasmonics, a subfield of photonics, utilizes the properties of interfaces between metals and dielectric materials to couple electromagnetic radiation with charge currents in the metals. Fabrication methods such as electron-beam and ion-beam lithographies <cit.> allow nanometer-scale control over the plasmonic structures, enabling the realization of metamaterials with exotic optical properties. Arrays of periodically placed nanoparticles (NPs) with interparticle distances close to the incident wavelength support so-called surface lattice resonances (SLRs), which couple the diffractive orders of the array with the individual plasmonic responses of the NPs. Plasmonic–diffractive hybrid resonances are the basis for many technologies demonstrated in plasmonic metamaterials, such as lasing <cit.>, nonlinear optics <cit.>, and sensing <cit.>. The usefulness of SLRs stems from the strong light–matter interaction typical for plasmonic resonances, while simultaneously achieving significantly decreased radiative losses, leading to even stronger local electric fields near the particles. The ability of a resonator to store energy is quantified using the quality factor (Q-factor), relating the energy loss per resonance cycle to overall resonance energy. Due to diffractive coupling, SLRs have naturally high Q-factors compared to other plasmonic resonances, with Q-factors higher than 2 000 observed <cit.>. The inherent ohmic losses in the plasmonic NPs, however, limit the Q-factors to inferior values compared to similar dielectric resonances such as quasi-bound states in the continuum <cit.>. Techniques to control and increase Q-factors in SLRs focus on controlling either the absorptive or radiative losses. The plasmonic absorptive losses are usually decreased by increasing the resonance wavelength. Consequently, the highest Q-factors in SLRs are typically achieved in the mid-infrared region <cit.>. Remarkably, recent theoretical work suggests that the absorptive losses in plasmonic structures can be avoided completely by cleverly engineering coupling between three different optical modes <cit.>. Conversely, the radiative losses in SLRs can be decreased by increasing either the array size <cit.> or the light source spatial coherence <cit.>. Furthermore, utilizing out-of-plane resonances <cit.> and quadrupolar coupling <cit.> might increase Q-factors. Moreover, the symmetry of the dielectric surroundings of the lattice has been shown to affect the Q-factor drastically, allowing active control over the resonance linewidth <cit.>. 
In optical microcavities, strong dispersion in the medium has been utilized to increase the Q-factors of the cavities <cit.>. Slow-light effects arising from strong material dispersion also allow decreasing the sizes of some nanophotonic devices, such as optical switches <cit.>. The radiative losses in SLRs are determined by how well the associated mode is coupled with the far field <cit.>. This coupling is weaker for larger arrays or increased light source coherence, leading to enhanced Q-factors. Large arrays are very effective at prohibiting radiative decay, with radiant Q-factors of 5 000 reached in large arrays <cit.>. On the other hand, bound states in the continuum are completely uncoupled from the far field, yielding them with infinite radiant Q-factors. Correspondingly, coupling into such modes is impossible from the far field. While increasing array size decreases radiative losses effectively, the leaking radiation can also be stopped by constructing the surroundings accordingly. The effect can be achieved in, for example, microring coupled NPs <cit.>. Moreover, coupling the diffractive orders through waveguide modes is also advantageous since the scattered fields are more confined in the lattice plane. These waveguide-mode coupled plasmonic–diffractive resonances have been called waveguide plasmon polaritons <cit.>, guided lattice resonances <cit.> and waveguided plasmonic surface lattice resonances <cit.>. Still, their advantage over free-space coupled SLRs is limited since traditional dielectric waveguides trap only a limited angular range. Here, we numerically investigate the formation and characteristics of SLRs, when NPs are placed inside strongly dispersive media. A simple and strongly dispersive system is realized by embedding plasmonic NP arrays inside Bragg reflector waveguides (BRWs), operated close to their cut-off region. The properties of SLRs are calculated by extending the conventional lattice-sum approach (LSA) formulation <cit.> to the studied mirror waveguide system. The radiant Q-factors are shown to be inversely proportional to the group index of refraction at the surroundings of the NP array. The strong dispersion of BRWs near their cut-off region significantly increases the radiant Q-factor, reaching values close to 12,000 with NP lattices of only 50 particles. The computational results demonstrate the effectiveness of using highly dispersive materials for achieving small-area–ultra-high-Q plasmonic structures. Such metasurfaces would be a step towards industrial plasmonic metasurfaces due to significantly decreased fabrication write time and device footprint. Furthermore, the reduced metasurface dimensions pave the way for utilizing SLR arrays as pixels in, for example, spatial light modulators or hyperspectral imaging applications. § THEORY Periodic NP arrays can support hybrid plasmonic–diffractive resonances known as SLRs. Since most of the losses associated with SLRs are associated with the plasmon oscillations, high-Q SLRs are usually designed to be spectrally far away from the single-particle plasmonic resonance. This also makes their wavelength almost fully dictated by the diffractive mode of the system, with only a minor shift arising from the phase delays related to the scattering in the individual NPs. 
For an array of scatterers, the resonance energy of the diffractive mode is dictated by the momentum conservation equation for grating coupling <cit.>: k_sub = k_inc+mk_g , where k_sub is the wavenumber in the substrate, k_inc =sin(θ)k_0 is the tangential component of the incident wavevector, θ is the incident angle, k_0 = 2π/λ is the free-space wavenumber, m∈ℤ is the diffractive order, and k_g = 2π/p is the grating wavenumber with an array period p. In free-space SLRs, the substrate wavenumber k_sub is affected by the bulk refractive index n_sub while in guided-mode SLRs the waveguide-mode specific effective refractive index n_eff dictates the resonance condition with k_sub = n_effk_0. In traditional dielectric waveguides, this does not substantially modify the resonance properties from their free-space counterparts, since the effective index is limited to values between the used bulk materials. In mirror waveguides, however, the effective index approaches zero at the so-called cut-off frequency of the waveguide mode. Designing an SLR in this region would imply exotic resonance conditions, where the resonance wavelength would be more strongly dictated by the angle of incidence θ. The effective refractive index in planar perfect-mirror waveguides is given by the following dispersion relation: n_eff = k_z/k_0 = √(n_core-k_c^2/k_0^2) , where k_z is the wavenumber in the propagation direction, n_core is the core bulk refractive index, k_c = mπ/b is the cut-off, i.e. transverse wavenumber for the waveguide mode of order m, and b is the height of the waveguide. As shown in Fig. <ref> the effective index in perfect-mirror waveguides approaches zero at the cut-off wavelength. Interestingly, the group index n_g = c/v_g = n_eff + ω n_eff/ω approaches infinity at the cut-off wavelength in perfect-mirror waveguides, due to the extreme dispersion near the cut-off. This behavior is only possible due to an idealized model of perfectly reflecting surfaces. In physical mirror waveguides, such as metal- and Bragg-mirror waveguides, the inherent material or radiative losses limit the achievable group indices to finite values. To approximate the lossy-mirror-waveguide dispersion relations, we introduce an imaginary part to the cut-off wavenumber k_c=k_c^'+ k_c^'', where k_c^', k_c^''∈ℝ. The modification leads to lossy-mirror waveguide dispersion relations <cit.>: k_z = n_effk_0 = √(k_0^2n_core^2-k_c^' 2-2 k_c^' k_c^''+k_c^'' 2) , where the complex cut-off wavenumber k_c is assumed constant for a given waveguide mode, similar to fully guided waveguide modes. The effective and group indices are shown in Figs. <ref>(c) and (d), respectively, for a lossy-mirror waveguide with k_c^''=40 mm^-1. The losses in the system limit the dispersion at the cut-off wavelength to a set value limiting the maximum group index for the waveguide. In this work, we deploy the lattice-sum approach (LSA) to estimate the properties of mirror-waveguide-coupled SLRs. The method uses dipole approximation, making each particles response dictated only by its dipole moment p⃗=ε_0 αE⃗, where E⃗ is the incident electric field, α is the particle polarizability and ε_0 is the permittivity of free space. In LSA specifically, each NP of the nano-array is assumed to have identical dipole moment. Using these assumptions, the collective response of the particles can be written using the effective polarizability α_eff <cit.>: α_eff = (α^-1-ε_0S)^-1 , where α is the single particle polarizability and S is the lattice sum of the array. 
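As an aside, the mirror-waveguide dispersion relations quoted above are simple enough to evaluate directly. A minimal sketch (the core index, core thickness and the 40 mm^-1 loss below are illustrative assumptions, not the Bragg-reflector waveguide fitted later) tabulates n_eff and the group index n_g = n_eff − λ dn_eff/dλ for the perfect and lossy cases over a window just below cut-off:

```python
# Illustrative evaluation of the mirror-waveguide dispersion: n_eff and the group index
# n_g = n_eff - lambda * dn_eff/dlambda for a perfect mirror (real k_c) and a lossy mirror
# (Im[k_c] = 40 mm^-1 as in Fig. 1).  Core index and thickness are assumed example values.
import numpy as np

n_core = 1.5
b = 300e-9                              # core thickness -> cut-off near 2 n_core b = 900 nm
kc_perfect = np.pi / b                  # m = 1 mode
kc_lossy = kc_perfect + 1j * 40e3       # 40 mm^-1 = 4e4 m^-1

lam = np.linspace(700e-9, 899e-9, 4000)
k0 = 2 * np.pi / lam

for label, kc in (("perfect", kc_perfect), ("lossy", kc_lossy)):
    n_eff = (np.sqrt((n_core * k0)**2 - kc**2 + 0j) / k0).real
    n_g = n_eff - lam * np.gradient(n_eff, lam)
    print(f"{label:8s}  n_eff(899 nm) = {n_eff[-1]:.3f}   max n_g in window = {n_g.max():.1f}")
```

Pushing the scan closer to the cut-off wavelength makes the perfect-mirror group index grow without bound, while the lossy case levels off, as sketched in Fig. 1(d).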
The effective polarizability α_eff can be used to calculate the extinction cross-section of the metasurface <cit.>: σ_ext = k_0/AIm[α_eff] , where A is the unit cell area in the array. While single-particle polarizabilities are in general second-order tensors, the considered spherical nanoparticles have isotropic, i.e. scalar polarizabilities. Consequently, the polarizability α can be determined using the first order (dipole) approximation in Mie-theory <cit.>: α = 3l^3/2x^3mψ_1(mx)ψ_1^'(x)-ψ_1(x)ψ_1^'(mx)/ψ_1(mx)ξ_1^'(x)-mξ_1(x)ψ_1^'(mx) , where l is the radius of the sphere, x = 2π n_h l/λ is the size factor, m=n_s/n_h is the ratio between the refractive indices of the sphere n_s and the host material n_h, while ψ_1 and ξ_1 are the first-order Ricatti-Bessel functions of first and second kind, respectively. The lattice sum S, on the other hand, describes the collective effect of the array on the induced dipole moment through scattered fields. It is written as S = ω^2μ_0∑_j≠ i^N G_e(r⃗_i,r⃗_j) . Here, ω is the angular frequency of the incident light, μ_0 is the vacuum permeability, N is the total number of particles in the array and G_e is the dyadic electric Green's function from r⃗_j to r⃗_i. In general, the electric Green's function can be used to calculate the electric field an electric dipole induces to its surroundings: E⃗(r⃗_i) = ω^2μ_0G_e(r⃗_i, r⃗_j)p⃗(r⃗_j) . Therefore the lattice sum can be understood as a sum over the scattered electric fields from an array of particles at a one central particle r⃗_i in an empty lattice, where the lattice sites have not yet been specified. Conventional LSA formulations are based on the free-space electric Green's function. However, this formulation is only applicable when the radiation pattern of the scatterers is not disrupted by any major interfaces in the surrounding geometry. In waveguides, for example, some of the scattered light is radiated into the waveguide modes, a phenomenon not described by the free-space Green's function. In practice, electric Green's functions are easily obtained only in a handful of different geometries. An example of a relatively simple geometry is the rectangular perfect electric conductor (PEC) waveguide consisting of a hollow core with walls of lossless mirrors. Its dyadic electric Green's function is given by <cit.>: G_e 1(R⃗, R⃗^') = -1/k^2ẑẑδ(R⃗-R⃗^') +/a b∑_m,n2-δ_0/k_c^2 k_z[M⃗_mn(± k_z) M⃗_mn^'(∓ k_z). .+N⃗_mn(± k_z) N⃗_mn^'(∓ k_z)], z ≷ z^' . The primed variables refer to dipole source coordinates and the unprimed variables to field evaluation point. Hat variable denotes a unit vector and δ_0 = 1 if m=0     n = 0 and δ_0 = 0 otherwise. The sign of the arguments for the vector wave functions M⃗_mn and N⃗_mn is determined by whether the source is behind or in front of the evaluation point (z ≷ z^'). The vector wave functions were determined using Dirichlet boundary conditions for a PEC waveguide with width (x) of a and height (y) of b. The wave functions can be written as <cit.> M⃗_mn(k_z)= (-k_ycos(k_xx)sin(k_yy)x̂+ . . k_xsin(k_xx)cos(k_yy)ŷ)^ k_zz, N⃗_mn(k_z)= 1/k( k_zk_xcos(k_xx)sin(k_yy)x̂+. . k_zk_ysin(k_xx)cos(k_yy)ŷ+. . k^2_csin(k_xx)sin(k_yy)ẑ)^ k_zz with wavenumbers specified as: k_x = (mπ/a),   k_y = (nπ/b) k^2 = k_x^2+k_y^2+k_z^2 = k_c^2+k_z^2 . While the PEC waveguide is an interesting academic example, practical systems utilizing e.g. metallic mirrors are associated with non-negligible losses. 
For a more realistic Green's function, we replace the perfect mirror propagation wavenumber k_z from Eqs. (<ref>) with the lossy-mirror waveguide wavenumber from Eq. (<ref>). In practice, this modification introduces either exponential loss or gain in the propagation direction z depending on the sign of the imaginary part of the transverse wavenumber k_c. Applying the rectangular lossy-mirror waveguide Green's function to the lattice sum described in Eq. (<ref>), we can now calculate how an array of scatterers behaves when embedded inside a mirror waveguide. Specifically, we focus on a 1D array periodic in the propagation z direction, with each particle having identical x and y coordinates. By noting that the Green's function is dependent on the z-coordinate only through the phase term ^ k_zz, the lattice sum S of this array can be expressed as a geometric sum over the particle locations. For an infinite number of particles, the sum converges to a geometric series, and the lattice sum can be expressed in a closed form: S = -ω^2 μ∑_mnG_mn ( 1/1-^- p(k_z+k_inc,z)+. . 1/1-^- p(k_z-k_inc,z)-1 ) , where G_mn = 2-δ_0/k_c^2 k_z(M⃗_mn^* M⃗_mn^*'+N⃗_mn^*N⃗_mn^*') , and the modified vector wave functions M⃗_mn^*, M⃗_mn^*', N⃗_mn^* and N⃗_mn are devoid of the forward-propagation phase term ^ k_zz. Here, p is the array period and k_inc,z is the z-component of the incident wavevector. For detailed derivations of the lattice sum for both the mono- and multi-partite unit cells, refer to Supplemental material [LINK HERE BY THE PUBLISHER]. Note that the wavenumbers (k_x, k_y, k_z) differ for each waveguide mode (m, n) leading to different coupling conditions for each mode. Embedded scatterers may therefore be used as mode selective waveguide grating couplers, with a possibility to suppress diffractive coupling for selected waveguide modes (see Supplemental material [LINK HERE BY THE PUBLISHER]). The used approach does not take into account the uncertainty associated with the leaky waveguide mode wavenumbers. While the losses over propagated distance are included in the model, a δ-function-like effective index for each frequency is assumed. For accuracy, a Lorentzian distribution of propagation wavenumbers k_z for each wavelength should be assumed for leaky waveguide modes, with the accompanied linewidth proportional to the imaginary part of the effective index <cit.>. Applied to SLRs, the effect broadens resonance conditions and slightly reduces Q-factors, especially near the cut-off frequency where Im[n_eff] is at its highest. The effect is discussed more carefully in Supplemental material [LINK HERE BY THE PUBLISHER]. § RESULTS AND DISCUSSION To demonstrate the potential of mirror waveguides as a medium for ultra-high-Q SLRs, we employ the discussed theoretical framework for BRWs. The main advantage of using BRWs, instead of metal waveguides, concerns their losses at optical and near-infra red frequencies. BRWs support waveguide modes with close to zero effective refractive indices. In addition, group indices as high as n_g = 40 have been demonstrated earlier <cit.>. The investigated BRW consists of a planar waveguiding layer surrounded by ten pairs of quarter-wave thick high- and low-index materials on both sides of the waveguide. TiO_2/BK7-glass was used as a high/low index material, with the waveguiding layer made of BK7. Layer thicknesses are presented in Table <ref>. To couple electromagnetic radiation with the NPs embedded inside the BRW, the angle of incidence was carefully considered. 
Given the angle-dependent reflectance of Bragg reflectors, depicted in Fig. <ref>(b), the system was designed to be transparent for highly oblique angles near the wavelength 900 nm. Consequently, incident radiation can efficiently couple with the plasmonic scatterers inside the waveguide. Since the incident wavelength is close to a cut-off wavelength of the TE_0-mode of the waveguide, the NPs will scatter light dominantly to the waveguide mode due to its enhanced Purcell factor. Since the effective index of the mode is close to zero, the mode wave vector is directed close to the normal of the Bragg walls. High reflectivity at normal incidence results in the TE_0-mode having low losses in the used wavelength range. To fully take advantage of this effect, the BRW-guided SLR can be engineered to appear close to 900 nm range by utilizing Eq. (<ref>), and modifying the lattice period accordingly. At cut-off, the produced mode can be alternatively understood as the plasmonic particle resonance hybridizing with the epsilon-near-zero (ENZ) mode of the thin-film structure <cit.>. While the plasmonic contribution negatively affects the Q-factor of the pure ENZ-mode, the small mode volumes obtainable in the NPs grant them advantages over all-dielectric structures <cit.>. The reflection of the structure without the NPs is shown in Fig. <ref>(c), where the ENZ/Bragg-wall-cavity mode can be seen as a thin line starting from 905 nm at normal incidence. While the BRW structure restrains the electric fields in the direction normal to the Bragg walls, the periodic array imposes phase restrictions in the direction of the lattice vector. Furthermore, field confinement in the NPs results in higher local fields and increased applicability. Modal analysis was done for the described BRW with the finite-element COMSOL Multiphysics program. The dispersion graphs for the real and imaginary parts of the effective index of the TE_0-mode are shown in Figs. <ref>(d) and (e) with linear and logarithmic scaling, respectively. The lossy-mirror waveguide dispersion relation given by Eq. (<ref>) for the (rectangular) TE_10-mode was then fitted to the obtained data to be used for LSA. The field profile of the rectangular mirror waveguide mode TE_10 resembles the planar waveguide TE_0 field profile with both varying only in the vertical (y) direction. The fitted values are n_core=1.9, waveguiding layer thickness b=238.1 nm an Im[k_c] = k_c^'' = 600 m^-1. The fitted complex effective index of a lossy-mirror waveguide mode is shown in Figs. <ref>(d) and (e), with a good correspondence to the COMSOL simulations. Variations in the imaginary part of n_eff in COMSOL simulations are due to reflections from the finite-sized perfectly-matched layers. The proper imaginary part of the mode is found between the extrema of these oscillations. <cit.> LSA was then applied to study the lossy-mirror-waveguide system. The lattice sites for NPs were placed in the middle of the waveguide, and only the response from the TE_10 mode was considered. The used incident angle for the system was 64 degrees, aligning in the low-reflection region of the Bragg-reflector in Fig. <ref>(b) at wavelengths close to 900 nm. The results from LSA, namely the lattice sum and transmission, were analyzed by fitting a Fano function to the data <cit.>: T_fano(λ) = |a_1+a_2+I/(λ-λ_0-γ)|^2 , where λ_0 is the resonance wavelength, γ is the resonance half-width half-maximum, and a_1, a_2 and I are other fitting parameters determining the shape and strength of the resonance. 
The resonance Q-factor is given as Q=λ_0/2γ from the fitted function. Some examples of data-fitted functions are presented in Supplemental material [LINK HERE BY THE PUBLISHER]. We begin the analysis with the empty-lattice approximation (ELA) of our system by only considering lattice sites devoid of any NPs. This analysis of the lattice sum S is equivalent to a system with vanishingly small NPs. First, we analyze the ELA for an infinite array using Eq. (<ref>). Fig. <ref> shows the variations in the ELA Q-factor, wavelength λ_0, and the corresponding group index n_g(λ_0) as a function of the 1D array period p. The resonance characteristics change continuously and quite predictably as a function of the lattice period for different SLR orders m, with λ_0 following closely the prediction from Eq. (<ref>). The symbolically highlighted cases are further analyzed in Fig. <ref>. With increasing period, the first-order coupling mode (m=1) approaches the cut-off wavelength of the waveguide mode at around 905 nm. The cut-off is reached with a period of around 1 µm leading to a substantial increase in the group index. Intriguingly, the elevated group index does not affect the Q-factor. This phenomenon likely arises from two opposing factors counterbalancing each other as the cutoff is approached. While the elevated group index enhances the Q-factor <cit.>, the simultaneously increased losses (Im[n_eff]) reduce the coupling between far-away NPs consequently decreasing the Q-factor. Zeroth-order m=0 resonances are analyzed more carefully in Supplemental material [LINK HERE BY THE PUBLISHER]. While the group index hardly affects the Q-factor of infinite arrays, the underlying functions suggest finite arrays benefiting substantially from high-group-index coupling. Fig. <ref> illustrates the ELA Q-factors with various group indices as a function of the total number of lattice sites, depicted in (a) log-log scale and (b) linear scale. The different group indices were obtained by varying the array periods and SLR orders as illustrated in Fig. <ref>. For all cases, increasing lattice sites initially elevates the Q-factor until a saturation point is reached. Strikingly, this saturation occurs with significantly fewer lattice sites in regions with high group indices. For small arrays, it is evident that orders of magnitude Q-factor enhancement is achievable by utilizing the effect. Further analyzing the effect of group index on array size, we calculated the saturation point for distinct arrays. The saturated lattice site number N_sat was defined as the number of lattice sites required to achieve 98% of the infinite array Q-factor. Our findings, depicted in Fig. <ref>(c), indicate an almost inverse proportionality between the saturated lattice sites and the group index with a fit on the data suggesting proportionality of N_sat∝ n_g^-1.3. Before saturation, the addition of particles leads to a nearly linear growth in the Q-factor. Consequently, increasing the group index inversely impacts the number of particles necessary to attain specific Q-factors, serving as a useful rule of thumb. Since the findings presented in Figs. <ref> and <ref> originate directly from ELA, the depicted Q-factors are related to the extent the metasurface geometry can mitigate radiative losses of the NPs. 
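The saturation behaviour can be illustrated already at the empty-lattice level with a few lines of code: using the lossy-mirror dispersion fitted above and truncating the geometric series behind the closed-form lattice sum after a finite number of neighbours, the resonance linewidth first shrinks roughly linearly with particle number and then saturates at a loss-limited value. The sketch below uses the fitted values n_core = 1.9, b = 238.1 nm, Im[k_c] = 600 m^-1 and the 64-degree incidence of the main text; the period of 0.94 µm is an illustrative choice placing the first-order resonance close to, but below, cut-off, and the Q estimate is a crude full-width-at-half-maximum of |S|² rather than the Fano fit used for the figures, so the numbers should not be compared too literally with the LSA results.

```python
# Empty-lattice sketch of Q versus array size: truncate the lattice sum after M neighbours
# on each side of a reference particle and extract a crude FWHM-based Q.  Waveguide values
# are the fitted ones from the text; the period 0.94 um is an illustrative choice.
import numpy as np

n_core, b, kc_im = 1.9, 238.1e-9, 600.0
theta, period = np.deg2rad(64.0), 0.94e-6

lam = np.linspace(902e-9, 905e-9, 30001)
k0 = 2 * np.pi / lam
kz = np.sqrt((n_core * k0)**2 - (np.pi / b + 1j * kc_im)**2)   # lossy-mirror k_z
ki = k0 * np.sin(theta)                                        # incident tangential wavenumber

def lattice_factor(M):
    qp, qm = np.exp(-1j * period * (kz + ki)), np.exp(-1j * period * (kz - ki))
    if M is None:                                              # infinite array, closed form
        return 1 / (1 - qp) + 1 / (1 - qm) - 1
    return 1 + qp * (1 - qp**M) / (1 - qp) + qm * (1 - qm**M) / (1 - qm)

def crude_q(F):
    I = np.abs(F)**2
    i0 = np.argmax(I)
    half = 0.5 * I[i0]
    left = np.argmax(I[:i0] > half)                            # blue-side half-maximum crossing
    right = i0 + np.argmax(I[i0:] < half)                      # red-side half-maximum crossing
    return lam[i0] / (lam[right] - lam[left])

for M in (25, 50, 100, 200, 500, None):
    label = "inf" if M is None else f"{2 * M + 1:d}"
    print(f"particles = {label:>5s}   Q ~ {crude_q(lattice_factor(M)):7.0f}")
```

The qualitative trend — a roughly linear rise of Q with particle number followed by loss-limited saturation at a value of order 10^4 — mirrors the behaviour described above.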
Considering, that the overall losses encompass both absorptive and radiative components according to the equation Q_tot^-1=Q_abs^-1+Q_rad^-1 <cit.>, the relatively high radiant Q-factors do not necessarily translate to high total Q-factors for SLRs. To investigate the role of absorptive losses, different-sized gold NPs were introduced into the LSA simulation. The single-particle polarizabilities were determined using Eq. (<ref>) using tabulated values for the permittivity of gold <cit.>. The properties of first-order (m=1) infinite-array SLRs with different particle radii and array periods are depicted in Fig. <ref>. Increasing particle size leads to both higher scattering cross-section and increased absorptive losses, leading to a familiar trade-off between resonance visibility and Q-factor. Resonances with larger NPs are generally associated with lower Q-factors and higher peak extinctions. Furthermore, bigger particles generally induce a larger phase shift to the scattered light, leading to an increasingly redshifted resonance. To validate our findings, we conducted 3D COMSOL simulations of the BRW system to reproduce the results obtained from the LSA. The simulations had an angle of incidence of 64 degrees and were performed for the BRW structure outlined in Table <ref> with embedded periodic spherical gold NPs. We then subtracted the transmission spectra of the empty BRW from the obtained SLR spectra, allowing us to isolate the plasmonic resonance peak. Subsequently, the Fano function described in Eq. (<ref>) was fitted to estimate the resonance properties. The transmission, reflection, and fitted Fano function for selected simulations are shown in Fig. <ref>. We note that the periodic boundary conditions in COMSOL simulations emulate infinitely large arrays. Corresponding calculations with the LSA are compared with the COMSOL simulations in Fig. <ref> for both changing particle radii and array period. While the two methods are qualitatively comparable, some simplifying assumptions in the LSA model make for differences in quantitative comparisons. Firstly, LSA does not account for the changing transmission through the Bragg walls. In COMSOL, diminished extinction is observed when SLR does not align perfectly with the transmission window, apparent in Figs. <ref>(b) and (f). Secondly, as discussed at the end of Section <ref>, LSA does not take into account the uncertainty in the propagation wavenumber k_z <cit.>, which leads to LSA having systematically higher Q-factors as opposed to full wave analysis. Further discrepancies include the stronger redshift as a function of particle size in LSA compared to COMSOL. Unfortunately, the effect of array size on the Q-factor is unfeasible to investigate using COMSOL simulations. Corresponding multi-partite simulations would be computationally extremely heavy due to broadened simulation space. Nevertheless, the conducted simulations assure that the LSA method can yield qualitatively accurate results for the system. § CONCLUSIONS Reducing the sizes of metasurfaces that exhibit SLRs is essential for their adoption in selected applications. We have both theoretically and numerically demonstrated a new approach to decrease array sizes of diffractive nanoarrays while retaining their high Q-factor values. Based on our results, the approach enables up to two orders of magnitude reductions in array dimensions, at the expense of increased structural thickness. 
Radiant Q-factors in the order of 10^4 in arrays with dimensions smaller than 50 µm were achieved. This miniaturization of array area could be used in technologies, such as spatial light modulators, which demand pixel dimensions in the order of 10 µm. Scaling down plasmonic metasurface areas opens up possibilities also in applications like spectral imaging, where finer pixel sizes are advantageous. Furthermore, smaller arrays facilitate faster and simpler fabrication, especially with high-precision techniques like electron-beam lithography and focused ion beam milling, where writing areas are limited. § ACKNOWLEDGEMENTS We acknowledge the support of the Flagship of Photonics Research and Innovation (PREIN) funded by the Academy of Finland. JK also acknowledges the Magnus Ehrnrooth foundation for their PhD grant. left=2.54cm,right=2.54cm,top=25.4mm,bottom=2.54cm Supplemental material for Dispersion-induced Q-factor enhancement in waveguide-coupled surface lattice resonances § ZEROTH-ORDER SLR Zeroth-order SLR is a special case of SLR for which the periodicity of the array has no effect on the wavelength of the SLR. They are a solution to k_sub = k_inc+mk_g , with m=0. If the incidence is from air superstrate (n=1), the equation is reduced to n_eff(λ) = sinθ. Equivalently, grating equation with m=0 refers to Snell's law. However, in BRWs a resonance occurs when the refracted light matches with a waveguide mode. For normal incidence (θ = 0) the resonance is at exactly n_eff(λ) = 0, and light is directly coupled to the epsilon-near-zero (ENZ) mode of the thin-films structure. Since constant phase over the propagation direction is observed, every possible NP embedded in the structure will automatically oscillate in-phase, adding to the resonance. Tilting the incident angle, the zeroth-order SLR blueshifts with the ENZ/cavity mode as shown in Fig. 2(c). The situation is understood as the transverse incident wavenumber (k_inc) matching the effective propagation wavenumber (k_z) of the waveguide mode. k_z = k_0 n_eff = k_z,inc = k_0sinθ . Even with oblique angle of incidence, all NPs will be automatically in-phase with each other, due to incident transverse phase propagation matching the phase propagation in the substrate. We investigated these zeroth-order resonances in COMSOL with normal incidence in BRWs with and without particles. The results are depicted in Fig. <ref>. As expected, the pure ENZ mode had higher Q-factor, while the plasmonic particle increased local electric fields, and redshifted the resonance wavelength. § DISPERSION AND UNCERTAINTY OF WAVE NUMBERS IN LEAKY MODES In a waveguide with non-perfect mirrors such as real metals or dielectric Bragg-reflectors losses will be introduced into the system as either radiative losses or material losses. We quantify these losses by making the transverse k_c-vector a complex quantity: k_c = k_c^' + k_c^” Now the propagation wavenumber k_z must fulfill leaky dispersion relation: k^2 = k_z^2 + k_c^2. Equating the imaginary parts of the left and right-hand sides of the Eq. (<ref>) yields a relation between the imaginary (k_z^”) and real (k_z^') parts of the propagation constant with the transverse components <cit.> k^2 = (k_z^'+ k_z^”)^2 + (k_c^'+ k_c^”)^2 Im[k^2] = Im[k_z^'2-k_z^”2+2 k_z^'k_z^”+ k_c^'2-k_c^”2+2 k_c^'k_c^”] 0 = 2 k_z^'k_z^” +2 k_c^'k_c^” and consequently: k_z^'k_z^” = -k_c^'k_c^” Near the cutoff region k_c^' >> k_z^' k_c^” << k_z^”, and the wave vector of the mode is almost normal to the interfaces. 
Meanwhile, the fields are decaying (or increasing) extremely strongly in the propagation direction. Eq. (<ref>) necessitates exponential growth in either transverse or propagation direction, a known problem of the leaky waveguide mode analysis. The problem is solved if the wavenumbers are considered to have a continuum of values as opposed to singular δ-function-like values. As the waveguide mode assumes a Lorentzian distribution of transverse and propagation wavenumbers, the different background components become increasingly out-of-phase with each other. The different phases act as a nullifying effect for the exponential growth in the transverse direction. <cit.> Examples of the Lorentzian distributions are shown in Fig. <ref>, with λ = 800 nm, Im[k_c]=40 mm^-1, and n_core=1.5. Distributions P are normalized to one. Due to the relation (<ref>) between propagation and transverse wavenumbers, the Lorentzian linewidth for the effective index n_eff is larger near the cut-off (Fig. <ref>a), than far-away from the cut-off (Fig. <ref>b). The half-width of the Lorentzian line shape is equal to the imaginary part of the wavenumber. Importantly, the Lorentzian distribution of propagation wavenumbers also affect the coupling between individual NPs inside the waveguide. Consequently, the waveguide coupled SLRs have larger line widths than one would assume with a δ-function-like mode wave vectors. Since the imaginary part of k_z is larger near the mode cut-off, the resonance broadening is expected to be greater near the cut-off. However, the effect is also expected to affect large arrays more, since coupling between far-away particles is affected more compared to neighboring particles. Including the effect in LSA would require a convolutional operation done for each term in the lattice sum, rendering Eq. (<ref>) to: S(λ) = ω^2μ_0∑_j≠ i^N ∫_-n_0k_0^n_0k_0G_e(r⃗_i,r⃗_j,k_z)P_λ(k_z) k_z , where P_λ(k_z) is the leaky-mode-specific wavenumber distribution at wavelength λ. This modification however would render the computational complexity of the system unnecessarily high with minimal increase in the model accuracy. § MULTI-PARTITE UNIT CELL LSA Multi-partite unit cell formulation for LSA is constructed in this chapter. The formulation is useful in lattices, such as honeycomb lattice, which can not be constructed with single-particle unit cell. The dipoles inside one unit cell may now have different dipole moments, but each unit cell is assumed identical as a whole. The effective polarizability: p⃗ = α_effE⃗_inc , for a n -dipole unit cell system is now a 3n×3n block matrix with p⃗ and E⃗_inc being 3n sized vectors. The effective polarizability α_eff can be represented using the interaction dyadics 𝒢_l,k <cit.>: α_eff=inv([[ 𝒢_1,1 𝒢_1,2 ⋯ 𝒢_1, k ⋯ 𝒢_1, n; 𝒢_2,1 𝒢_2,2 ⋯ 𝒢_2, k ⋯ 𝒢_2, n; ⋮ ⋮ ⋱ ⋮ ⋮; 𝒢_l, 1 𝒢_l, 2 ⋯ 𝒢_l, k ⋯ 𝒢_l, n; ⋮ ⋮ ⋮ ⋱ ⋮; 𝒢_n, 1 𝒢_n, 2 ⋯ 𝒢_n, k ⋯ 𝒢_n,n ]]) Each interaction dyadic 𝒢_l,k represents how all particles in a sublattice k affect one (centrally located) particle in sublattice l. Each interaction dyadic can be written as: 𝒢_l,k = ∑_j∈ B_kA_c_l,j , where B_k is a group of dipoles in sublattice k, c_l is an index of a particle in the sublattice l and A_c_l,j determine the dipole interaction between particles i and j, i.e. E⃗_i = A_i,jp⃗_j. A_i,j is written as: A_i≠ j = -ω^2μ_0 G_e(r⃗_i,r⃗_j) A_ii = α^-1 Using Eqs. 
(<ref>) and (<ref>), the off- and on-diagonal interaction dyadics are written as: 𝒢_l≠ k = -ω^2 μ∑_j∈ B_kG_e(r⃗_c_l, r⃗_j ), 𝒢_l=k = α_c_l^-1-ω^2μ∑_j∈{B_k \ c_l}G_e(r⃗_c_l ,r⃗_j). A schematic representation of a bipartite unit cell LSA system is presented in Fig. <ref>. LSA reduces the computational complexity of conventional discrete dipole approximation <cit.> by summing over the Green's functions before the matrix inversion. Also, the reduced number of interactions taken into account in LSA decrease the computational complexity of the method. § GEOMETRIC SERIES In this chapter, we derive the closed form of the interaction dyadics in multi-partite unit cell LSA for infinite number of lattice sites in a 1D PEC-waveguide with losses. We start the derivation by noting that the only z-dependence in the Green's function Eq. (<ref>) are the phase-propagation terms ^ k_zz inside vector wave functions. Detaching the term from the functions, the Green's function is expressed as follows: G_e 1(R⃗, R⃗^') = -1/k^2ẑẑδ(R⃗-R⃗^') +/a b∑_m, nG_mn^± k_z(z-z^'), z ≷ z^' , G_mn = 2-δ_0/k_c^2 k_z[M⃗_mn^*(± k_z) M⃗_mn^*'(∓ k_z). .+N⃗_mn^*(± k_z) N⃗_mn^*'(∓ k_z)] where ^* notes that the vector wave function does not contain the phase-propagation term. Now taking into account the losses of the mode and incident angle, the interaction dyadic (<ref>) for one mode-order (m,n) comes into form 𝒢_l≠ k, mn = -ω^2 μ∑_j∈ B_kG_mn^(k_inc,x(x_c_l-x_j)+k_inc,y(y_c_l-y_j))^ (z_c_l-z_j)(± k_z+k_inc,z)), z_c_l≷ z_j, where k_z is defined by Eq. (<ref>). In a 1D array, all particles in the same sublattice B_k have identical x and y-coordinates. Furthermore, all the unit cells are evenly spaced in z-direction with a period p. The difference in z-coordinates can be expressed as z_c_l-z_j = z_c_l-z_0-j^'p , j∈ B_k , and j^'∈{-N^'/2,-N^'/2+1,…, N^'/2}, where N^' is the total number of particles in sublattice B_k. Eq. (<ref>) can be further simplified to 𝒢_l≠ k, mn = -ω^2 μG_mn^'(1+∑_j^'=1^N^'/2(^- p(k_z+k_inc,z)j^'+^- p(k_z-k_inc,z)j^') ) G_mn^' = G_mn^ (k⃗_inc· (r_c_l-r⃗_0) + (z_c_l-z_0)k_z ) Taking now N^'∞, both terms inside the sum are revealed to be individual geometric series. Changing the series now to their closed forms finally grant us closed form of the interaction dyadic: 𝒢_l≠ k, mn = -ω^2 μG_mn^'(1/1-^- p(k_z+k_inc,z)+1/1-^- p(k_z-k_inc,z)-1 ) Similarly the diagonal interaction dyadics can be expressed as 𝒢_l=k, mn = α^-1-ω^2 μG_mn^'(1/1-^- p(k_z+k_inc,z)+1/1-^- p(k_z-k_inc,z)-1 ) With the main difference arising from the self-interaction term A_ii = α^1, since z_c_l = z_0, and the j^' =0 is referring to the same particle the interaction dyadic is calculated for. § FANO FITS FOR Q-FACTORS Q-factors of different resonances were estimated by fitting Fano-resonaces to the obtained data. Fano-resonance function is given as <cit.> T_fano(λ) = |a_1+a_2+I/(λ-λ_0-γ)|^2, where λ_0 is the resonance wavelength, γ is the resonance half-width half-maximum, and a_1, a_2 and I fitting parameters related to resonance shape and magnitude. Q-factors were obtained by the following relation: Q = λ_0/2γ To approximate the radiant Q-factor, fitting was done for both the imaginary and real parts of the lattice sum S, separately. Then, a more reasonable fit was automatically chosen to increase robustness. Typically, Q-factors obtained from real and imaginary parts of the lattice sum deviated less than 1% from each other. Fig. <ref> shows some of the used fits. Fig. 
<ref>(a)–(d) depicts Fano fits to transmission spectra obtained from LSA with N=∞, p=950 nm, and different particle radii R. Fig. <ref>(e)–(h) shows fits to the normalized real part of the lattice sum with different number of lattice sites with a period of p=800 nm. Finally, Fig. <ref>(i)–(k) has fits for the normalized imaginary part of the lattice sum in a lattice with a period of p=1000 nm. It is apparent that the fits for transmission, imaginary and real parts of the lattice sums are very good and reliable for infinite arrays. In finite systems, the resonance line shapes start deviating from the ideal Fano shape due to oscillations, caused by the LSA-assumption that all particles have identical dipole moments. Naturally, in finite systems, the particles near the edges of the array would experience weaker field enhancement and dipole moments, leading to the nonphysical oscillations in LSA. Nonetheless, the fits for finite systems are fairly reliable at approximating the Q-factor. § SUPPRESSION OF MODES IN WAVEGUIDE-SLRS Particle locations in waveguide-SLRs play a key role in determining the strength of the coupling between the particles <cit.>. This interplay can either heighten or suppress the coupling depending on particle locations in respect to the mode profile. In the dipole approximation, an SLR mode can be completely suppressed if all the particles reside in the nodes of the transverse mode profile of the coupling waveguide mode. Placing the NPs symmetrically in two antinodes of opposite phase also yield similar suppression. The effect is illustrated in Fig <ref>. The excitation of a mode by a source inside the waveguide is proportional to <cit.> ∭J⃗·E⃗_mn V where J⃗ is the electric current density of the source and E⃗_mn are the mode-fields. In a node of the field, the integral goes to zero as particle size vanishes, leading to no excitation. However, for particles with finite volume, higher-order terms in electric multipole expansion become important at describing the scattering. As such, the particle may excite a mode even from a node of the transverse profile. The treatment for multipole expansion of localized sources in PEC-waveguides is found in <cit.>. The effect of mode suppression could be used as an advanced waveguide grating for selective coupling of incident light into the waveguide. Allowing only one mode to couple, large multimode waveguides could function as single-mode waveguides while retaining their dimensions. Such devices might be used as a large-bandwidth single-mode waveguides. Due to the fixed array period, the devices would, however, necessitate the use of an original angle of incidence for each coupled wavelength.
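To make the Q-factor extraction of the Fano-fit section above concrete, the following minimal Python sketch fits the Fano line shape to a transmission spectrum with scipy.optimize.curve_fit and returns Q = λ_0/(2γ). It assumes the standard complex form of the Fano function, with a_1 and a_2 interpreted as the real and imaginary parts of a constant background and a Lorentzian denominator λ - λ_0 - iγ; the array names and initial guesses are illustrative and not part of the original analysis.

import numpy as np
from scipy.optimize import curve_fit

def fano(lam, a1, a2, I, lam0, gamma):
    # |a1 + i*a2 + i*I/(lam - lam0 - i*gamma)|^2, an assumed complex form of the Fano fit equation
    return np.abs(a1 + 1j * a2 + 1j * I / (lam - lam0 - 1j * gamma)) ** 2

def fit_q_factor(lam, spectrum, lam0_guess, gamma_guess=1.0):
    # initial guesses: flat background, weak resonance near lam0_guess
    p0 = [np.sqrt(np.mean(spectrum)), 0.0, 1.0, lam0_guess, gamma_guess]
    popt, _ = curve_fit(fano, lam, spectrum, p0=p0, maxfev=20000)
    lam0, gamma = popt[3], abs(popt[4])
    return lam0 / (2.0 * gamma)  # Q = lambda_0 / (2*gamma)

In practice the transmission spectrum and the real or imaginary part of the lattice sum can each be fitted in this way, and the Q-factor is taken from the more reasonable of the two fits, as described above.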
http://arxiv.org/abs/2406.18454v1
20240626160153
From Counting Stations to City-Wide Estimates: Data-Driven Bicycle Volume Extrapolation
[ "Silke K. Kaiser", "Nadja Klein", "Lynn H. Kaack" ]
cs.CY
[ "cs.CY", "stat.AP" ]
Application Paper]From Counting Stations to City-Wide Estimates: Data-Driven Bicycle Volume Extrapolation 1]Silke K. Kaiser* 2]Nadja Klein 1]Lynn H. Kaack Kaiser et al. [1]Data Science Lab, Hertie School, Berlin, 10117, Berlin, Germany [2]Department of Statistics, Technical University, Dortmund, 44227, North Rhine Westphalia, Germany.*Corresponding author. s.kaiser@phd.hertie-school.org Kaiser et al. S§ ABSTRACT Shifting to cycling in urban areas reduces greenhouse gas emissions and improves public health. Street-level bicycle volume information would aid cities in planning targeted infrastructure improvements to encourage cycling and provide civil society with evidence to advocate for cyclists' needs. Yet, the data currently available to cities and citizens often only comes from sparsely located counting stations. This paper extrapolates bicycle volume beyond these few locations to estimate bicycle volume for the entire city of Berlin. We predict daily and average annual daily street-level bicycle volumes using machine-learning techniques and various public data sources. These include app-based crowdsourced data, infrastructure, bike-sharing, motorized traffic, socioeconomic indicators, weather, and holiday data. Our analysis reveals that the best-performing model is XGBoost, and crowdsourced cycling and infrastructure data are most important for the prediction. We further simulate how collecting short-term counts at predicted locations improves performance. By providing ten days of such sample counts for each predicted location to the model, we are able to halve the error and greatly reduce the variability in performance among predicted locations. § INTRODUCTION Shifting from motorized transport to bicycles improves cardiorespiratory health, reduces the risk of cancer mortality oja_health_2011, woodcock_public_2009 and reduces greenhouse gas emissions h-o_portner_dc_roberts_es_poloczanska_k_mintenbeck_m_tignor_a_alegria_m_craig_s_langsdorf_s_loschke_v_moller_a_okem_eds_ipcc_2022. A promising lever to encourage people to cycle in cities is infrastructure improvements: Previous studies have shown that adult cyclists, and especially, women, prefer to ride infrastructure specifically designated for them dill_bicycling_2009, garrard_promoting_2008. This is probably because a cyclist is less likely to be involved in an accident when riding in a separate bicycle lane morrison_-road_2019; noting that most cyclists perceive risks in accordance with actual risk moller_cyclists_2008. However, introducing new bike lanes is expensive and often highly contested due to limited resources, such as funding and road space. Thus, data-driven approaches are crucial for accurately targeting infrastructure improvements to areas with the greatest need olmos_data_2020,larsen_build_2013. One relevant piece of information for such data-driven approaches is bicycle volume data. Currently, most of this data is collected by permanently installed bicycle counting stations, providing information on cyclists passing by a specific location. Due to their high cost, these stations are sparsely located across a road network. At the same time, several data sources related to cycling are openly available romanillos_big_2016. Given the scarcity of bicycle volume data on the one hand and the abundance of related data on the other hand, clamors for methods that are able to make use of all available information in order to better predict bicycle volumes at a fine-grained scale. 
We address this by combining ml (ml) methods with a wide variety of available data sources to extrapolate bicycle volume to a much higher spatial resolution. With this machinery, we aim to answer three important questions. First, can we predict bicycle volume at unseen locations using a variety of data? Second, which of these data sources are the most relevant for prediction? And third, how much can the performance be improved by adding sample counts for the predicted locations? Researchers have identified several datasets related to bicycle volume that have proven useful, especially for interpolating missing observations in bicycle count data. These include data sources that have long been available, such as weather, holiday, infrastructure, and socioeconomic indicators miranda-moreno_weather_2011, strauss_spatial_2013, holmgren_prediction_2017. The potential of additional available data sources associated with the widespread use of smartphones has also been explored lee_emerging_2020. This includes valuable information from crowdsourced bicycle usage data, in particular, from the Strava application lee_strava_2021, kwigizile_leveraging_2022, bike-sharing protocols miah_estimation_2022 or the use of photos and tweets wu_photos_2017. Among available studies, some extrapolate bicycle volume using only a few of these data sources. For instance, <cit.> explore how counting station data can be merged with crowdsourced data to estimate bicycle volumes across street networks using clustering and nonparametric modeling. They find that relying solely on crowdsourced data as an additional input to counts is challenging, particularly due to oversampling from counting stations located at high-volume locations. Similar studies estimate cyclists' exposure employing various data sources and using classical regression approaches sanders_ballpark_2017, griswold_pilot_2011, mixed effects models dadashova_random_2020 or Poisson regressions roy_correcting_2019. In addition to traditional statistical approaches, ML methods have been increasingly applied over the past decade. For instance, sekula_estimating_2018, das_interpretable_2020,zahedian_estimating_2020 have proven how ML methods can be leveraged for the extrapolation of motorized traffic. However, to the best of our knowledge, there is no study that combines ML methods with a large variety of different data sources to provide reliable, fine-grained predictions of bicycle counts beyond available counting stations. Our paper showcases our approach in the city of Berlin. In Germany's largest city, with 3.6 million residents, the transportation mode share for walking and cycling lies at 37%, aligning with the European average of 42% european_metropolitan_transport_authorities_barometer_2021, making it a suitable representative case for our analysis. We implement and compare different ML algorithms to predict the daily and aadb (aadb) at unseen locations. We use a wide array of features, many of which have proven pertinent in previous studies (see Table <ref> for an overview). To identify the most relevant data sources, we perform a grouped base permutation feature importance. In addition, to further improve the predictions, we evaluate whether collecting sample counts at unseen locations would be purposeful and what is the best strategy to collect this data. § RESULTS §.§ Data sources Our study uses data from 20 long-term bicycle counting stations in Berlin, which continuously measure the number of passing bicycles per hour. 
In addition, we employ data from 12 short-term counting stations, where counts are conducted on individual days throughout the year senate_department_for_the_environment_mobility_consumer_and_climate_protection_berlin_jahresdatei_2022. To accurately predict bicycle counts, we make use of information contained in a variety of further sources. These include data on infrastructure, socioeconomic factors, motorized traffic, weather, holidays, bike-sharing, and from a crowdsourcing application that tracks cyclists (Strava application). Bike-sharing and Strava data directly represent bicycle traffic, but they attract different users and differ in the type of information they provide. The former describes the exact time and origin-destination pairs of individual trips taken on short-term free-floating rented bikes. The latter are anonymized georeferenced data from an application, which are aggregated to provide the number of trips for a region and for road segments between intersections based on tracking users as they ride. The bike-sharing, crowdsourced, and motorized traffic data are feature-engineered to indicate the cycling volume, and respectively the passing motorized traffic, within different radii around the counting stations. The socioeconomic and infrastructure features are assigned in accordance with the location of the counting stations. Further details on the distinct data sources, including data cleaning and feature engineering, are provided in the Methods Section <ref>. A list of all features is provided in Table <ref> together with references to the data sources. The bike-sharing data is only available for April to December 2019 and June to December 2022. Therefore, we restrict our study period to these months. This also largely omits the period of the COVID-19 pandemic and its impact on transportation. §.§ Spatial extrapolation using multi-source data We train our model using data from existing counting stations as ground truth, iteratively omitting one counter from the training dataset, and evaluating the performance for this omitted location. We compare the performance of different ML algorithms on this task. A description of the models, the feature selection, and the hyperparameter tuning can be found in the Methods Section <ref>. We evaluate the predictions at the daily and at the annual scales. The daily scale is valuable for providing a more detailed picture of the variation throughout the year, and it is relevant for understanding the effects of intra-week variation, special events, and seasonal weather conditions yi_inferencing_2021, sekula_estimating_2018, zahedian_estimating_2020. For infrastructure planning decisions, annual averages may be sufficient. The aadb is the average number of bicycles that pass a given location per day for a given year. We compute the performance for the aadb by predicting the daily counts and evaluating their average against the annual ground truth average. Since the counting station data is recorded hourly, we sum up the measurements for each day to obtain daily measurements. To simulate extrapolation, we evaluate our models using leave-one-group-out (logo) cross-validation (cv). The method follows the same principle as standard cv but differs in how the data is partitioned. Instead of random partitioning, the data is organized into distinct groups, which, in our case, correspond to counting stations. Consequently, the model is trained on observations from all but one long-term counting station and then evaluated on this hold-out long-term counting station. 
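A minimal sketch of this evaluation loop, using scikit-learn's LeaveOneGroupOut splitter together with an XGBoost regressor, is given below. The arrays X (feature matrix), y (daily counts), and groups (a station identifier per observation) are hypothetical placeholders for the data described above, and the smape helper mirrors the error metric defined in the Methods section, expressed in percent.

import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from xgboost import XGBRegressor

def smape(y_true, y_pred):
    # symmetric mean absolute percentage error (in percent), cf. the Methods section
    return 100.0 * np.mean(np.abs(y_pred - y_true) / ((y_true + y_pred) / 2.0))

def logo_evaluation(X, y, groups):
    # train on all but one counting station, evaluate on the held-out station
    errors = {}
    for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups):
        model = XGBRegressor(objective="reg:squarederror", n_estimators=500)
        model.fit(X[train_idx], y[train_idx])
        station = np.unique(groups[test_idx])[0]
        errors[station] = smape(y[test_idx], model.predict(X[test_idx]))
    # every station is weighted equally in the reported average error
    return errors, float(np.mean(list(errors.values())))

The same loop applies to the short-term locations by training on all long-term stations and evaluating on each short-term station in turn.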
In addition, we use each short-term counting location as test data for a model trained on all long-term counting stations. We provide the average error across stations, which implies that each location is equally weighted in the test data. When computing these predictions, it is important to note that the hourly long-term data are measured from 0h-24h, while the short-term counts cover only 7h-19h. Hence, we train the model predicting the short-term locations only on daily measurements computed as the sum of the 7h-19h hourly measurements. We also perform the analysis of the long-term stations on daily measurements based on the 0h-24h and 7h-19h data separately. The former allows us to infer day effects for long-term stations, and the latter can be used to compare results with the short-term counting predictions. In order to provide information on the absolute and relative size of our errors, we use the mean absolute error (mae) and the symmetric mean absolute percentage error (smape) as evaluation metrics and train the models on various ML algorithms (see Methods Section <ref>). Additionally, we include a baseline, in which we use the mean across the observations in the training data as the prediction. We find that ensemble methods (XGBoost, gradient boosting, random forest, and decision trees) outperform the baseline, support vector machines, linear regression, and shallow neural networks (Table <ref>). We select XGBoost as the best-performing model based on the logo analysis with all long-term counting stations on the 0h-24h data, as evaluated by the mae. We note that this model does not produce the lowest errors when predicting the short-term counts, with a larger discrepancy in the smape than in the mae. To analyze the performance of the XGBoost model in more detail, we looked into the variation of the smape between stations. At the daily scale, the model performs quite well for more than half the stations (smape of around 20), while for some the smape exceeds 80 (Figures <ref>), and the performance also varies considerably between counters for the AADB (Figure <ref>). The poorly performing locations each have a high variance in their measurements, and each of these locations is either consistently over-predicted or under-predicted. Our analysis revealed no further common characteristics of the worst-performing counters that would allow us to pinpoint where the model is failing. We conclude that there are latent factors within the data generation process that remain unaccounted for, despite our comprehensive inclusion of a wide range of features from the existing literature. We will explore how this can be mitigated using sample counts in Section <ref>. §.§ Relative importance of input data sources Each data source used requires time and effort for acquisition, cleaning, and integration. Given the variety of sources used in this study, we explore their relative importance so that city officials considering a similar modeling approach can anticipate which ones are essential to obtain. Feature importance measures the contribution of a feature to the prediction of the target variable. Given the large number of employed features, which can be grouped by their data source (Table <ref>), we choose to evaluate their grouped importance. We compute the grouped feature importance at the daily scale, using the grouped permutation importance (gpi) plagwitz_supporting_2022, which is described in detail in Section <ref>. 
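The following sketch illustrates the grouped permutation idea (described in detail in the Methods section): all columns belonging to one data-source group are permuted jointly in the test data, and the resulting increase in error is recorded. The fitted model, the feature_groups mapping, and the score function are hypothetical inputs, and the sketch is a simplified single-split variant of the repeated cross-validated procedure used in the paper.

import numpy as np

def grouped_permutation_importance(model, X_test, y_test, feature_groups,
                                   score_fn, n_repeats=100, seed=0):
    # X_test: pandas DataFrame; feature_groups: dict mapping group name -> list of column names
    rng = np.random.default_rng(seed)
    baseline = score_fn(y_test, model.predict(X_test))
    importance = {}
    for group, cols in feature_groups.items():
        increases = []
        for _ in range(n_repeats):
            X_perm = X_test.copy()
            order = rng.permutation(len(X_perm))
            # permute all columns of the group with the same row shuffle
            X_perm[cols] = X_perm[cols].iloc[order].to_numpy()
            increases.append(score_fn(y_test, model.predict(X_perm)) - baseline)
        importance[group] = float(np.mean(increases))
    return importance  # larger error increase = more important group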
Additionally, we focus on the smape error, as correctly predicting both relatively busy and relatively quiet roads is valuable when deciding where to prioritize infrastructure. Finally, since we want to get a comprehensive picture of the daily traffic situation, including at night, we use the data for the 0h-24h time window. We train the model on all long-term counting stations. Within the gpi, we compute 100 permutations and use repeated 5-fold stratified cv. The gpi reveals that crowdsourced Strava application data is the most important group, followed by time, infrastructure, and socioeconomic indicators (Figure <ref>). The crowdsourced information is much more relevant than the bike-sharing data. While both directly represent bicycle traffic, the movement patterns of individuals tracking their trips turn out to be more indicative of the overall cycling volume than those of people renting short-term bikes. Therefore, consistent with previous research, we find that Strava indicators are very useful for estimating cycling volumes sanders_ballpark_2017, hochmair_estimating_2019, kwigizile_leveraging_2022. §.§ Proof of concept of multi-source model We empirically demonstrate the benefits of our multi-source model (XGBoost trained on all available long-term counting stations) by simulating daily streetwise bicycle volume in a subarea of Berlin for the month of September 2022. Figure <ref> shows a snapshot from the simulation, which is available online at https://silkekaiser.github.io/research. We predict streetwise bicycle volume for Berlin; more precisely, we predict the volume for every street segment between two intersections. Since our estimates are based on discrete point locations, we compute the midpoint for each street segment and base our estimates on these points. We find that the demonstration effectively captures temporal variations, especially between weekends and weekdays. However, the spatial aspects of the predictions could be more convincing. The model often predicts that adjacent streets have similar bicycle volumes and fails to detect high outliers. This shortcoming is likely due to the construction of features based on large radii. Nevertheless, the model reasonably captures the differences between major streets and residential areas, picking out high- and low-traffic zones. §.§ Spatial extrapolation using additional sample count data Our multi-source model has only a limited ability to reproduce spatial patterns of cycling volume. Here, we investigate whether collecting additional location-specific bicycle volume sample counts improves the predictive performance at unseen locations on a daily scale and what is the most effective strategy for conducting them. <cit.> elaborate on the usefulness of short-term counts to estimate annual averages for non-motorized traffic using scaling factors. They find that as the number of observation days increases, the extrapolation error decreases, but that the incremental gains become modest after the first week. Also, the advantage derived from conducting counts on consecutive days is minimal compared to nonconsecutive days. We seek to revisit their findings in the context of ml. We choose to simulate three different sample data collection strategies: Firstly, the collection of data is commissioned for one day at a time (1-day). The days are selected at random throughout the year. In the second and third strategies, we simulate the collection of data on three (3-day) or seven (7-day) consecutive days. 
Also, these multi-day periods are randomly distributed throughout the year. We compare the performance of the model with data from each of those three different sampling strategies. We simulate a collection of up to 28 days. We simulate this using 19 long-term counting stations only, as all short and one long-term station have too few observations per location available. We use the XGBoost model with smape. As before, we evaluate the performance by iterating over the counting stations. Each counting station serves once as the new (“hold-out”) location. For that location, we randomly choose some of the available data to represent sample counts performed at that location, following the three sampling strategies (1, 3 or 7-day). We use the remaining data from the location as the test set. For training, we implement two scenarios. For the first scenario, we make use of all available data and train the model on both the sample data and the data from the other counting stations. We give a weight of 25% to the sample counts and 75% to the other counting stations' data. Please refer to the Methods Section <ref> for details on the weights. This “full-city” scenario benefits from both location-specific sample data and city-wide long-term information. For the second scenario, we train the model on the sample data only. Since it only uses information from the location in question, we refer to this model as the “location-specific” scenario. Thus, by definition, the training data for this model exhibits no variation in infrastructure and socioeconomic features, as these features only vary across locations. We then use both scenario models to perform prediction on the test set. We repeat this process for each counting station and compute the average across the resulting errors. This procedure is repeated 10 times with different sample days, to allow for uncertainty estimation and provide 95% confidence intervals. We train and evaluate the models after each additional day of data collection. This allows a comparison of the different approaches for as little as 1 day and as much as 28 days of additionally collected data. As a simple baseline, we include the error of predicting the site-specific volume as the mean of the sample data collected at the respective location. Sample data collection notably enhances predictive performance for new locations in the full-city scenario (Figure <ref>). In the location-specific scenario, two or more days' worth of sample data already outperforms a model without any location-specific data. Sampling only one day at a time is the superior collection strategy, and this advantage is more pronounced for the full-city scenario. Collecting data on as many different days as possible may provide an advantage, as seasonal effects are better captured. This can also be seen in the curves for the 7-day and 3-day strategies, where the error decreases after the 7th and 14th as well as after the 3rd and 6th day when new random dates for the collection periods are chosen (<ref>). Given that setting up counting infrastructure at new locations may be costly, the 3-day and 7-day approaches may still yield sufficient results at lower costs. Moreover, we find that the full-city approach using the 1-day strategy consistently outperforms the location-specific approaches. 
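To illustrate how such a 25%/75% split between sample counts and city-wide data can be realized, the sketch below passes per-observation weights to an XGBoost regressor via its sample_weight argument (cf. the sample-weights paragraph in the Methods section). The arrays X_city, y_city (observations from the other counting stations) and X_sample, y_sample (the simulated sample counts at the held-out location) are hypothetical placeholders, and normalizing the weights so that each part's total equals its share is one plausible reading of the weighting described above.

import numpy as np
from xgboost import XGBRegressor

def fit_full_city(X_city, y_city, X_sample, y_sample,
                  w_sample=0.25, w_city=0.75):
    # full-city scenario: city-wide data plus location-specific sample counts
    X_train = np.vstack([X_city, X_sample])
    y_train = np.concatenate([y_city, y_sample])
    # each part's weights sum to its overall share (assumption, see lead-in)
    weights = np.concatenate([
        np.full(len(y_city), w_city / len(y_city)),
        np.full(len(y_sample), w_sample / len(y_sample)),
    ])
    model = XGBRegressor(objective="reg:squarederror", n_estimators=500)
    model.fit(X_train, y_train, sample_weight=weights)
    return model

The location-specific scenario corresponds to fitting the same regressor on X_sample and y_sample alone.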
A comparison of the 1-day strategy between the two approaches shows that to achieve a smape of 20, one would need to collect on average 7 days of sample data using the full-city scenario or 14 days using the location-specific model. This underscores the fact that models can benefit greatly from information obtained at locations other than the one under consideration. Finally, we find that the use of multi-source data is also highly relevant when using sample data, and simple averages over the counts do not suffice. The baseline error never drops below 30, while the errors for the location-specific models are below 20 after 25 days of sample counts (Figure <ref>). This demonstrates the importance of leveraging multi-source data in combination with sample counts. Based on these results, we provide a numerical comparison of the performance of a model with sample data against the simple multi-source model. We train an XGBoost model on the full-city scenario, in combination with multi-source data and ten days' worth of sample counts using the 1-day strategy. We collect these ten days randomly across all observations and across both years. Again, to account for the randomness in the sample data collection, we compute 10 repeated samples and take their average. With this approach, we can predict new locations at the daily scale with an average smape of 17.44 (compared to 41.24 for the multi-source model) and an mae of 594.59 (1634.61). For the aadb, we get 11.94 (38.86) and 360.84 (1557.19), respectively. On closer inspection, we also find that these errors vary little across counting stations (Figures <ref> and <ref>). This is a clear improvement over the multi-source-only model. Therefore, estimates predicted with sample counts and multi-source data are not only more accurate (with errors around two-thirds lower) but also more reliable. § DISCUSSION Our research highlights the feasibility of estimating bicycle volumes for all streets across a city by leveraging open-source data together with long- and short-term counts and machine-learning models. Advances not only in data availability but also in analytical methods have made such purely data-driven approaches feasible. We find that using already existing multi-source and long-term counting station data allows for predicting bicycle volume at unseen locations using XGBoost with a reasonable error for both daily values and annual averages. The most important of these data sources are crowdsourced Strava data, features indicating the time, and infrastructure and socioeconomic indicators. Yet, the prediction error varies greatly between locations, which means that the model is able to predict certain streets very well and others much less so (with no apparent pattern). This is also anecdotally shown in the proof of concept, where the model performs well in capturing temporal trends and identifying high-volume areas, but shows shortcomings in reproducing intricate geographic nuances. We conclude that there may be unobserved and latent variables that remain unaccounted for despite our comprehensive inclusion of a wide range of features, which go beyond what has been done in the existing literature. Additionally, collecting sample counts for unseen locations not only drastically reduces the error across all locations but also the variance across locations. The decrease amounts to around two-thirds on average. This experiment showed that spending resources to collect additional short-term counts may be worthwhile. 
Municipal governments can replicate our model using data that are already owned by the city or can be obtained from third-party providers. Based on our findings, we advise that it is most relevant to obtain data from crowdsourced applications (Strava), infrastructure indicators from OpenStreetMaps, and, if available, socioeconomic data. We also recommend conducting multiple one-day sample counts at locations of interest to obtain more accurate results. Each day of sampling leads to a significant improvement in the estimate for that location. Using 10 days of sample data, our model provides policymakers with accurate and reliable estimates. Such estimates can allow them to make evidence-based decisions about infrastructure improvements or repairs. Busier roads can be prioritized, and financial expenditures can be justified by the number of cyclists they may benefit. Similarly, civil society can use such estimates to advocate for local infrastructure improvement needs. In future research, more complex modeling approaches that take into account spatial and temporal dependencies can be another promising direction. Such approaches may also benefit from more ground truth data, particularly from more continuous counting stations to cover spatial variability. We hypothesize that more ground truth could further improve predictions for unseen locations and possibly reduce the need for sample data collection. Expanding the case study of Berlin to a comparative analysis with other cities could shed light on the generalizability of the approach. § METHODS §.§ Data description A table explaining each individual feature is included in the Appendix <ref>, Table <ref>. In the following we elaborate on the data sources. All datasets are publicly available via the sources cited with one exception. Strava Metro provides their crowdsourced app data upon request strava_metro_strava_2023. Bicycle Counting Stations Data The Berlin city administration collects data on bicycle volume at various locations senate_department_for_the_environment_mobility_consumer_and_climate_protection_berlin_zahlstellen_2023 (Appendix <ref> Figure <ref>). The data come from long-term counting stations, which are permanent devices that identify passing bicycles through an electromagnetic field embedded in the ground. The city installed its first of 30 counting stations in 2012 and the most recent one in 2022. 10 of the stations are located on opposite sides of the street and also record the direction of flow, while in the other locations, there is only one counter for both directions. We sum counters on opposite sides of the same street into one count as we are interested in the number of bicycles passing by a certain location rather than their direction of flow. This reduces the number of counting locations to 20. Occasionally, counting stations are out of service (e.g., due to construction or malfunctioning), resulting in missing observations. We also exclude observations that are interpolated by the municipality, as the city does provide information on their interpolation method. Short-term bicycle counts have been conducted at 21 fixed locations repetitively on different dates by the city since 1983. We exclude short-term counters consisting of only one observation (one day) from the analysis. Consequently, the data set comprises information from a total of 12 short-term locations. 
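A small pandas sketch of this preprocessing (dropping municipally interpolated values, summing the two directional counters of a street into one count per location, and excluding locations with only a single observation day) might look as follows. The column names (location_id, date, interpolated, count) are hypothetical, and in the paper the single-day exclusion applies to the short-term counters only.

import pandas as pd

def preprocess_counts(df: pd.DataFrame) -> pd.DataFrame:
    # drop observations interpolated by the municipality
    df = df[~df["interpolated"]]
    # sum counters on opposite sides of the same street into one count per location and day
    df = df.groupby(["location_id", "date"], as_index=False)["count"].sum()
    # exclude locations with only a single observation day (short-term counters in the paper)
    days = df.groupby("location_id")["date"].nunique()
    keep = days[days > 1].index
    return df[df["location_id"].isin(keep)]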
There is no publicly available information on the criteria used by the city to determine the placement of these counting stations, but upon closer examination, most are located closer to the city center and along high-traffic roads. A map of the counting stations, a table detailing the number of observations per station as well as some basic descriptions of the measurements are included in the Appendix <ref>. Crowdsourced App Data We obtain crowdsourced app data from the Strava smartphone app, which allows users to track their speed, altitude gain, and exact route choice covered during physical activities such as cycling strava_metro_strava_2023. Since Strava has made some of its data available for research and city administrations, other studies have used these data for a similar purpose (Table <ref>). Strava Metro has modified the data to protect individual privacy. The data includes only publicly available trip records (as opposed to trips taken in a private mode). They also do not provide individual trip information, but instead aggregate the trip counts into two formats. In the first format, various features are available at the "street segment" level, which covers a street between two intersections. The data is available on an hourly basis. In the second format, features are available for regions in the form of hexagons, each spanning approximately 0.66km^2. In both formats bicycle counts are rounded to the nearest multiple of five, e.g., when 7 cyclists pass a street segment, Strava rounds it to 5. We use both the segment and the hexagon formats. For the segment data, we calculate the average of the available features for all segments within a certain radius of the counting station (500m, 1000m, 2000m, 5000m, whole city) at the daily level. For the hexagon data, we include the features for both the hexagon in which the counter is located and the mean of the six surrounding hexagons. A graphical representation of the feature engineering process is included in the Appendix <ref>. The Strava data is based only on the voluntary recording of Strava app users, and it has a sampling bias in its user base lee_strava_2021. Based on the few demographic indicators included in the data, a sampling bias towards young males with an ambitious riding style is apparent: 75.99% of trips were recorded by male users, only 3.76% of the trips were recorded by users aged 55 and over, e-bike trips contribute only 0.17%, and the average speed is relatively high at 21.14km/h (see Appendix <ref> for more details). Bike-Sharing Data The bike-sharing data used in this study consists of individual trips from free-floating bike-share systems. In these systems, bicycles are available for pick up and return anywhere within the city, unlike systems dependent on designated stations for both pickup and drop-off. It comprises two distinct periods. The months from April to December 2019, covering the providers Call-a-bike and Nextbike, as provided to us by <cit.>. And from June to December 2022, covering only Nextbike, which we collected ourselves via their application programmable interface (API) nextbike_official_2020. The data is made available for download kaiser_bike-sharing_2023. To the best of our knowledge, only <cit.> use information on bike-sharing to predict cycling counting stations. These data provide details of individual trips, unlike crowdsourced data (Strava), which only offer aggregate counts. The 2022 bike-sharing data was collected as follows. 
Nextbike's application programming interface furnishes real-time information on the location of all accessible bicycles, each identified by a unique bike ID, at a minute-by-minute granularity. When a bike is rented, it is temporarily removed from the available list and reappears when it is returned. By querying the list at one-minute intervals, we can accurately record trips to the minute, providing precise departure and arrival points and the respective times for every trip. For both bike-sharing datasets, 2019 and 2022, only the start and end points of each trip are available. We impute the route using OpenStreetMap as of July 2022 and the designated routing algorithm tailored for bicycles. OpenStreetMaps facilitates route planning for different modes of transportation, by adapting the suggested route according to the chosen mode openstreetmap_contributors_planet_2017. It is important to note that the resulting trajectories are approximations of the actual routes taken by bike-sharing users. Based on the routed trips, we perform data cleaning to account for possible incorrectly recorded journeys. We exclude trips shorter than 100m (0.64% of total trips), which may be due to errors in GPS measurements or aborted trips, e.g., due to a broken bike. Similarly, we exclude trips longer than the 45km diameter of Berlin (0.005%), shorter than 120 seconds (1.63%), and longer than 10 hours (6.05%), assuming that incorrect use of the rental system is the cause. Finally, we exclude trips with an average speed slower than 2km/h (16.57%) or faster than 40km/h (10.87%). After cleaning, the data contain 1,333,737 bike-sharing trips. We feature engineer the bike-sharing data based on the methodology proposed by <cit.>. For each counter, we count the number of bikes passing, the number of bikes whose rental started, and the number of bikes whose rentals ended within a certain radius within a day. As radii, we consider 250m, 500m, 1000m, 2000m, 5000m, and the whole city. We provide an example of feature engineering for the bike-sharing data in the Appendix <ref>. The following limitations to the bike-sharing data remain. In Berlin, the bike-sharing market is diverse, with multiple providers. We were only able to acquire data from two providers. These two maintain a fleet of traditional bicycles, unlike other companies that primarily offer e-bikes. In addition, neither Nextbike nor Call-a-bike responded to requests for information or provided detailed information about their data. This lack of transparency raises concerns about the stability of bike IDs, potentially leading to the inclusion of fictitious rides in our dataset due to how we compute trips. Also, these data are potentially biased, as bike-sharing users differ from private bicycle users. We do not have demographic information about the users of Nextbike or Call-a-bike, but bike-sharing users tend to have higher incomes and education fishman_bikeshare_2016. Weather data Cyclists are more exposed to environmental conditions than motorized traffic users, which affects their comfort while cycling and, consequently, their decision to use a bicycle. Research shows that weather conditions can have both positive and negative effects on cycling, resulting from both immediate and delayed weather effects miranda-moreno_weather_2011. The data we use comes from the German Weather Service and is provided by Meteostat meteostat_weathers_2022. We include various features at daily granularity. 
The weather indicators are for all of Berlin, i.e., they are the same for all counting locations but vary over time. Infrastructure data We include infrastructure data on road conditions, points of interest, and the land use around the counting stations. Cycling and road infrastructure plays a critical role in increasing cyclists' perception of safety and, consequently, influence bicycle use moller_cyclists_2008. The infrastructure features also affect how many trips may be made to an area. Similarly, points of interest, such as schools and shops, can influence bicycle volumes at different times throughout the day and week strauss_spatial_2013. From <cit.>, we obtain information about the maximum speed allowed for motorized traffic, the type of bike lane at the exact location of the counting station, and the number of different points of interest within different radii (500m, 1000m, 2000m, and 5000m). We also compute the distance of the counting stations from the city center, following the definition of a city center used by OpenStreetMap. Data from the city of Berlin provides information on land use, such as for parks or industry, which can impact the timing and volume of human frequenting in various areas. This data is collected at the “planning area” level: For urban planning purposes, the city is divided into planning areas that represent neighborhoods; each planning area has an average size of about 2 km^2. The city collected the indicators in 2015 berlin_open_data_nutzung_2022, senate_department_for_urban_development_building_and_housing_lebensweltlich_2023. We use data from the planning area surrounding each counting station. To standardize the measurements, we convert the features from square kilometers into percentages. For example, instead of stating that 0.5km^2 within the planning area surrounding the counting station is occupied by parks, we express it as 25% occupied by parks. Socioeconomic data Bicycle use varies with regard to socioeconomic characteristics such as age and gender goel_cycling_2022. We obtain socioeconomic data from the city of Berlin berlin-brandenburg_office_of_statistics_kommunalatlas_2020. The data is available at the level of “planning areas” (see the infrastructure data for details). For each counter, we use the indicators of the planning area in which it is located. Since the socioeconomic data are only available until 2020, we use the 2019 observations for the same year and the 2022 observations for 2020. Therefore, the data has spatial and temporal variation, but the data for 2022 remains an approximation. Motorized traffic data Similar to the bicycle counting stations, the city administration conducts counts of motorized traffic at 267 counting stations (for 2019 and 2022) berlin_open_data_verkehrsdetektion_2022. To the best of our knowledge, we are unaware of any studies that have attempted to predict bicycle volume from motorized traffic counts. We hypothesized that this could be a valuable source of additional information as bicycle counts and motorized traffic counts may show similar patterns in terms of commuting peaks, weekday/weekend behavior, and locations of interest. The data includes the volume, type, and speed of motorized vehicles collected at various locations in Berlin. The detectors measure the features only on one side of the road (e.g., only eastbound traffic). 
We compute the respective mean values of these motorized traffic features of all traffic counters within a 6-kilometer radius and for the city as a whole, all on a daily basis. The choice of a 6-kilometer radius as our spatial unit is intentional, as it is the smallest possible radius for the feature to be available for each counting station. The main drawback of the data is that the motorized traffic counting stations are unevenly distributed throughout the city (see Appendix <ref>). Therefore, not only do we have to employ a large radius, but we also have to compute the features for each counting station based on a different number of motorized traffic counting stations. Holiday and time data Traffic data can exhibit strong seasonality. We, therefore, include several time indicators: the day of the week, the day of the month, the month itself as features, the year, and whether it is a weekday. We also use features that indicate the presence of each public and school holiday senate_department_for_education_youth_and_family_ferientermine_2022. Pre-processing of the combined data In the merged dataset, an observation represents a daily measurement from one counting station. We pre-process these data as follows: We drop a feature if it correlates more than 99% with another feature, which is the case for the Strava non-e-bike trip count, which correlates with the Strava total trip count. We exclude infrastructure features that are constant across observations, as they do not provide additional predictive information. This is the case for the number of hospitals within 500m, the number of industries within 500m, 1000m, 2000m, and 5000m, and the percentage of land used for horticulture as they are all zero. The socioeconomic data is also missing for one counting station, which we replace with the mean of the respective features across all other counting stations. §.§ Data analysis Models and algorithms We implement all models with the Python library scikit-learn pedregosa_scikit-learn_2011. Based on the results, we select Extreme Gradient Boosting (XGBoost) as the best-performing model. It is an ml algorithm that combines boosting and regularisation techniques. By iteratively adding decision trees to an ensemble model, it corrects the errors of the previous trees, resulting in a more robust and accurate model compared to standard decision trees or random forests. The trees are trained using a gradient descent optimization method, which updates the weights of the features to minimize a given loss function. In addition, the algorithm uses a technique called tree pruning to remove unnecessary leaves and nodes from the trees. For information on the other models, we would like to refer the reader to geron_hands-machine_2022. Hyperparameter tuning For XGBoost, we tune the following hyperparameters with random search: the learning rate (controls the step size during the optimization process), the maximum depth of each tree (deeper trees can capture more complex relationships but can cause overfitting), the fraction of features used when constructing each tree (reducing overfitting also introducing randomness), the minimum sum of instance weight needed in a child (can prevent overfitting by controlling the minimum amount of instances required in each leaf) and a regularization parameter that encourages pruning of the tree brownlee_xgboost_2016. For the hyperparameters of the other models, we would like to refer the reader to the Appendix <ref>. 
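As a concrete illustration, the random search over these XGBoost hyperparameters could be set up with scikit-learn as sketched below, mapping the description above to the argument names learning_rate, max_depth, colsample_bytree, min_child_weight, and gamma (the pruning-related regularization term). The search ranges, the number of iterations, and the grouped splitter are illustrative assumptions rather than the exact settings of the paper.

from scipy.stats import randint, uniform
from sklearn.model_selection import GroupKFold, RandomizedSearchCV
from xgboost import XGBRegressor

param_distributions = {
    "learning_rate": uniform(0.01, 0.3),    # step size of each boosting update
    "max_depth": randint(3, 12),            # maximum depth of each tree
    "colsample_bytree": uniform(0.5, 0.5),  # fraction of features used per tree
    "min_child_weight": randint(1, 10),     # minimum sum of instance weight in a child
    "gamma": uniform(0.0, 5.0),             # minimum loss reduction, encourages pruning
}

search = RandomizedSearchCV(
    XGBRegressor(objective="reg:squarederror", n_estimators=500),
    param_distributions=param_distributions,
    n_iter=50,
    scoring="neg_mean_absolute_error",
    cv=GroupKFold(n_splits=5),
    random_state=0,
)
# search.fit(X, y, groups=station_ids)  # grouping keeps stations separated across folds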
Feature selection Given the richness of our features in the different data sources, we employ model-specific feature selection (fs), which allows for a reduction of computational needs and can improve the performance of the models. For each model, we test univariate fs with SelectKBest, recursive feature elimination based on XGBoost, SelectFromModel with XGBoost, and fs via Sequential Selection with Linear Regression pedregosa_scikit-learn_2011. The best feature selection for each model is assessed with logo for both the 0h-24h and 7h-19h subsets and for smape and mae separately. Compared to dimensionality reduction, fs has the additional advantage of allowing for feature importance analysis. Which fs method is applied to which model is specified in the Appendix <ref>. Error metrics For benchmarking, we choose two error metrics, mae and smape, which are defined as follows, with n the number of observations, y_i the true value, and ŷ_i the prediction of the variable of interest: mae = 1/n∑_i=1^n | y_i-ŷ_i| , smape = 1/n∑_i=1^n |ŷ_i-y_i|/((y_i+ŷ_i)/2). We have chosen these metrics over the more commonly used error metrics: the root mean squared error (rmse), the mean absolute percentage error (mape), or standard errors based on the underlying distribution. The counting station measurements include several high outliers. Compared to the rmse, the mae places less emphasis on outliers, which is better suited to the right-tailed distribution. See Appendix <ref> for histograms and boxplots of the counts. We have chosen smape over mape to measure the relative error, as it yields small percentage errors when the true value is a high outlier. Additionally, smape gives a symmetric measure that considers both overestimation and underestimation errors equally. Grouped feature importance Computing feature importance for groups of features cannot simply be done by summing individual feature importance scores. Nor can one sum the individual feature importances of tree-based methods: these often overfit, boosting features that contribute little, and thus they cannot be summed at the group level, since this would overweight groups with many features bramer_overfitting_2005, breiman_random_2001. Nor can one sum up permutation-based feature importances, as all features besides the one in question are known during the permutation, which does not sufficiently reveal the impact of a particular feature group when summed up plagwitz_supporting_2022. Here, we use the grouped feature importance as introduced by <cit.>. The data is split, in the sense of cross-validation, into training and test data. On each fold, the following is computed: A model is trained on the training data. The test data is replicated a certain number of times, and in each replication, the features belonging to the feature group in question are permuted. The model is then applied to the permuted test sets. The change in performance is estimated and averaged across all permuted test sets. This process is repeated for every feature group. The mean across the cross-validation folds yields the final grouped feature importance score, which provides the information gain per group. These scores are not comparable across models but only within each model. Sample weights in training We employ sample weights during model training with the sample data to enhance the model's emphasis on these particular observations. Sample weights assign different weights to individual observations in the training dataset. This is useful when dealing with imbalanced datasets or when certain samples are more critical than others. 
The latter is the case in our setting. In XGBoost, sample weights can be assigned to each instance, influencing the contribution of that instance's error to the overall loss function during training. This way, samples with higher weights contribute more to the model's updates, thus affecting the model's learning process pedregosa_scikit-learn_2011. § APPENDIX §.§ Overview counting stations This section provides further details on the counting stations, including an overview of the available counting stations (Table <ref>) and a mapping of them (Figure <ref>), as well as some descriptions of the measurements (Figure <ref>). The boxplots and histograms of counting stations' measurements reveal distinct patterns. The boxplot (Figure <ref>) demonstrates that long-term stations have very infrequent outliers. Conversely, short-term stations show a lower mean count with few outliers. This disparity arises as they cover a shorter period, including fewer days with extreme events. Short-term stations consider only daytime measurements (7-19h), omitting the lower nighttime counts. This assumption is supported by Figure <ref>, depicting permanently higher values for the long-term, in comparison to the long-term 7-19h. The distributions, exhibit a right-skewed, long-tailed pattern, occasionally indicating notably high cycling volumes. However, the right-skewedness is less pronounced for short-term stations. §.§ Location of motorized traffic counting stations §.§ Feature engineering crowdsourced data §.§ Feature engineering bike-sharing data §.§ Comparison crowdsourced and bike-sharing data Usage patterns differ between Strava and bike-sharing. On average bike-sharing usage is more evenly distributed throughout the day, whereas Strava trips are more likely to be recorded during the midday and evening. Also, bike-sharers ride on average with a speed of 11.05 km/h whereas Strava are almost at double the speed with 21.14 km/h. This seems reasonable, as Strava is used heavily to track sporting activities and bike-sharing bikes tend to be of lower quality. §.§ Detailed feature list p5cm|p2cm|p1.5cm|p1.5cm|p1cm|p1.5cm|p1.5cm|p3cm Overview of all considered features Feature Name Further explanation Spatial scope Timing used for daily model No. of Features Type Scaling Data Source 5lTime Features Year measurement from 2019 or 2022 stationary yearly 1 binary inherent Month indicating January through December stationary monthly 1 numerical one-hot-encoded inherent Day of month indicating numerical day of month stationary monthly 1 numerical standardized inherent Weekday indicating Monday through Sunday stationary daily 1 numerical one-hot-encoded inherent Weekend if Saturday or Sunday stationary daily 1 binary inherent 5lVacation and holiday features School holiday presence of school holiday stationary daily 1 binary senate_department_for_education_youth_and_family_ferientermine_2022 Public holiday presence of public holiday stationary daily 1 binary senate_department_for_education_youth_and_family_ferientermine_2022 5lBike-sharing Originating/Returned/Rented Number of within a certain radius to the counter daily 15 numerical standardized nextbike_official_2020 and web scraped from Nextbike, as well as Call-a-bike Originating/Returned/Rented Number of within the whole city daily 3 numerical standardized nextbike_official_2020 and web scraped from Nextbike, as well as Call-a-bike 5lStrava No. 
No. of trips overall/originating/arriving/for leisure/for commute/morning/midday/evening/overnight/weekday/weekend | Number of | in the respective hexagon | daily | 18 | numerical | standardized | strava_metro_strava_2023
No. of trips overall/originating/arriving/for leisure/for commute/morning/midday/evening/overnight/weekday/weekend | Number of | in the six neighboring hexagons | daily | 18 | numerical | standardized | strava_metro_strava_2023
No. of total trips/non-e-bikes/e-bikes/number of people/for commute/leisure/sex (female, male and unspecified gender)/various age groups (18-34, 35-54, 55-64 and 65+)/morning/midday/evening/overnight counted, average speed | Number of | in the segments within a certain radius [footnote] | daily | 44 | numerical | standardized | strava_metro_strava_2023
No. of total trips/non-e-bikes/e-bikes/number of people/for commute/leisure/sex (female, male and unspecified gender)/various age groups (18-34, 35-54, 55-64 and 65+)/morning/midday/evening/overnight counted in the segments, average speed | Number of | in the whole city of Berlin | daily | 11 | numerical | standardized | strava_metro_strava_2023

Infrastructure
Counter within inner city limits | indicating whether the counter is located within the "Ring" (inner city limits) | counting station location | stationary | 1 | binary | - | based on openstreetmap_contributors_planet_2017
Latitude & Longitude | - | counting station location | stationary | 2 | numerical | standardized | senate_department_for_the_environment_mobility_consumer_and_climate_protection_berlin_jahresdatei_2022
Distance to city center in km | - | counting station location | stationary | 1 | numerical | standardized | openstreetmap_contributors_planet_2017
Maximum speed in km/h | - | counting station location | stationary | 1 | categorical | one-hot-encoded | openstreetmap_contributors_planet_2017
Bicycle lane type | - | counting station location | stationary | 1 | categorical | one-hot-encoded | openstreetmap_contributors_planet_2017
No. of shops | - | within a certain radius [footnote] | stationary | 4 | numerical | standardized | openstreetmap_contributors_planet_2017
No. of education | - | within a certain radius [footnote] | stationary | 4 | numerical | standardized | openstreetmap_contributors_planet_2017
No. of hotels | - | within a certain radius [footnote] | stationary | 4 | numerical | standardized | openstreetmap_contributors_planet_2017
No. of hospitals | - | within a certain radius [footnote] | stationary | 4 | numerical | standardized | openstreetmap_contributors_planet_2017
Percent of area used for farming / horticulture / cemeteries / waterways / industry / private gardening / parks / traffic areas / forests / residential housing | - | in the planning area | stationary | 10 (1 each) | numerical | standardized | senate_department_for_urban_development_building_and_housing_lebensweltlich_2023

Socioeconomic indicators (all refer to the planning area, are updated yearly, numerical, standardized, and sourced from senate_department_for_urban_development_building_and_housing_lebensweltlich_2023 and berlin-brandenburg_office_of_statistics_kommunalatlas_2020)
Population density (inhabitants/km²) | 1
Total number of inhabitants | 1
Average age | 1
Gender distribution | 1
Share of population with migration background | 1
Share of foreigners (total, EU-foreigners, non-EU-foreigners) | 3
Share of population unemployed | 1
Share of population with tenure exceeding 5 years | 1
Net migration rate (moving to/away from the area) | 1
Age-specific demographic proportions (individuals aged <18 & >65) | 2
Greying index | 1
Birth rate | 1

Weather (all stationary, daily, 1 feature each, numerical, standardized, sourced from meteostat_weathers_2022)
Average temperature in °C; daily maximum temperature in °C; daily minimum temperature in °C; precipitation in mm; maximum snow depth in mm; sunshine duration in minutes; average wind speed in km/h; wind direction in degrees; peak wind gust in km/h; average sea-level air pressure in hPa

Motorized traffic (all daily, numerical, standardized, sourced from berlin_open_data_verkehrsdetektion_2022)
Total no. of vehicles/cars/lorries | within a 6 km radius of the counter | 3
Total no. of vehicles/cars/lorries | within the whole city | 3
Speed of vehicles/cars/lorries | within a 6 km radius of the counter | 3
Speed of vehicles/cars/lorries | within the whole city | 3

Footnote: the radius-based features are each computed for a radius of 0.5, 1, 2, and 5 km.

§.§ ML models' hyperparameters

§.§ Feature selection methods

§ ABBREVIATIONS
AADB: average annual daily bicycle volume. CV: cross-validation. FS: feature selection. GPI: grouped permutation importance. LOGO: leave-one-group-out. MAE: mean absolute error. MAPE: mean absolute percentage error. ML: machine learning. RMSE: root mean squared error. SMAPE: symmetric mean absolute percentage error.

Acknowledgments
We are grateful to CityLab Berlin for providing their bike-sharing data.
Funding Statement. This research was supported by grants from the European Union's Horizon Europe research and innovation program under Grant Agreement No 101057131, Climate Action To Advance HeaLthY Societies in Europe (CATALYSE). Furthermore, the authors acknowledge support through the Emmy Noether grant KL 3037/1-1 of the German Research Foundation (DFG). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Competing Interests. The authors declare no competing interests exist.

Ethical Standards. The research meets all ethical guidelines, including adherence to the legal requirements of the study country (Germany).

Author Contributions. Conceptualization & Methodology: S.K., L.K.; Formal analysis, Investigation: S.K.; Resources: S.K., L.K., N.K.; Writing - Original Draft: S.K., L.K.; Writing - Review & Editing: S.K., L.K., N.K. All authors approved the final submitted draft.
http://arxiv.org/abs/2406.18297v1
20240626123131
FactFinders at CheckThat! 2024: Refining Check-worthy Statement Detection with LLMs through Data Pruning
[ "Yufeng Li", "Rrubaa Panchendrarajan", "Arkaitz Zubiaga" ]
cs.CL
[ "cs.CL" ]
Copyright 2024 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). CLEF 2024: Conference and Labs of the Evaluation Forum, September 09-12, 2024, Grenoble, France.

Notebook for the CheckThat! Lab at CLEF 2024

Yufeng Li (ORCID 0009-0008-8740-4994, yufeng.li@qmul.ac.uk), Rrubaa Panchendrarajan (ORCID 0000-0002-1403-2236, r.panchendrarajan@qmul.ac.uk), and Arkaitz Zubiaga (ORCID 0000-0003-4583-3623, a.zubiaga@qmul.ac.uk, www.zubiaga.org)
School of Electronic Engineering and Computer Science, Queen Mary University of London
Corresponding author. These authors contributed equally.

§ ABSTRACT
The rapid dissemination of information through social media and the Internet has posed a significant challenge for fact-checking, among others in identifying check-worthy claims that fact-checkers should pay attention to, i.e. filtering claims needing fact-checking from a large pool of sentences. This challenge has stressed the need to focus on determining the priority of claims, specifically which claims are worth fact-checking. Despite advancements in this area in recent years, the application of large language models (LLMs), such as GPT, has only recently drawn attention in studies. However, many open-source LLMs remain underexplored. Therefore, this study investigates the application of eight prominent open-source LLMs with fine-tuning and prompt engineering to identify check-worthy statements from political transcriptions. Further, we propose a two-step data pruning approach to automatically identify high-quality training data instances for effective learning. The efficiency of our approach is demonstrated through evaluations on the English-language dataset of the check-worthiness estimation task of CheckThat! 2024. Furthermore, the experiments conducted with data pruning demonstrate that competitive performance can be achieved with only about 44% of the training data. Our team ranked first in the check-worthiness estimation task in the English language.

Keywords: Check-worthiness, Claim detection, Fact-checking, Language Models, LLM

Received 7 March 2024 / Accepted 23 May 2024

§ INTRODUCTION
With the significant development of the Internet and social media over the past decades, the practical challenges associated with fact-checking have become more complex <cit.>. Social media platforms have facilitated the rapid dissemination of information, which increases the difficulty of distinguishing misinformation from accurate information <cit.>. Concurrently, the general agreement on what should be fact-checked has expanded to include online content and claims made by politicians, resulting in a wide range of claims to be verified. As the initial step in the fact-checking process, claim detection plays a crucial role in efficiently identifying check-worthy claims, allowing for quicker progression to subsequent stages of verification <cit.>. Therefore, research on check-worthy claim detection is essential for advancing the field of fact-checking, where the CheckThat! shared task has played a significant role in recent years <cit.>.

Since the beginning of the CheckThat! competition, traditional machine learning models and neural network models have been commonly employed for the task of claim check-worthiness detection. At CheckThat!
2018, the top submission was achieved by a team using Support Vector Machines and Multilayer Perceptrons <cit.>, while the highest scores at CheckThat! 2019 were obtained using Long Short-Term Memory (LSTM) networks <cit.>. Although BERT <cit.> was introduced in 2018, the exploration of this transformer-based model for claim check-worthiness detection began only in 2020; in that year, the team utilizing RoBERTa secured the first position in the English category <cit.>.

Beyond the choice of machine learning models, various techniques have been explored throughout the CheckThat! competition. Feature representation methods, including word embeddings, Bag of Words, Named Entity Recognition, and Part-of-Speech tagging <cit.>, have been widely used to enhance model understanding of the task. More sophisticated representation techniques, such as LIWC <cit.> and ELMo <cit.>, have also been investigated, and statistics related to word usage, such as subjectivity and sentiment, have been incorporated. To address the challenge of imbalanced datasets, data augmentation strategies have been explored, with machine translation and sampling among the most common methods <cit.>.

Large language models (LLMs) have seen remarkable advancements in recent years, with GPT <cit.> models predominantly utilized as the latest and most effective solution in CheckThat! competitions. Several teams have shown competitive and winning performance with these models in various CheckThat! tasks, including check-worthiness estimation in multiple languages <cit.>. Although GPT models have demonstrated competitive performance in CheckThat! tasks, their fine-tuning and inference entail associated costs. At the same time, numerous open-source LLMs have made substantial advancements, showing performance comparable to GPT models. This enables the global community of researchers to transfer the knowledge of these powerful models cost-effectively by fine-tuning them on various downstream tasks. While fine-tuned BERT-based and GPT models have been extensively examined in the domain of check-worthiness estimation <cit.>, open-source LLMs, as emerging language models, have not yet been thoroughly investigated within this specific field. Therefore, this study explores a wide range of open-source LLMs, leveraging their capabilities through prompt engineering for check-worthiness estimation.

This paper presents the experiments conducted for CheckThat! 2024 task 1 <cit.>, check-worthiness estimation in the English language. The task involves identifying check-worthy statements from political transcriptions. Drawing inspiration from the impressive performance of LLMs in recent CheckThat! competitions, we explore eight popular open-source LLMs, specifically Llama2-7b, Llama2-13b <cit.>, Llama3-8b, Mistral <cit.>, Mixtral <cit.>, Phi3-Mini-4K <cit.>, Falcon <cit.>, and Gemma-7b <cit.>, with prompt engineering for identifying check-worthy statements. Considering the noisy and imbalanced nature of the training data, we propose a two-step data pruning process to isolate high-quality training data instances for effective learning with LLMs. Specifically, we first identify informative sentences and then apply an under-sampling technique, Condensed Nearest Neighbour <cit.>, to create a balanced training dataset. Our Llama2-7b <cit.> model, fine-tuned on the original training data shared by the task organizers, scored the highest F1-score on the task 1 leaderboard in the English language.
However, the experimental results indicate that similar or better performance can be achieved with data pruning techniques while retaining only about 44% of high-quality data instances from the original training data. Furthermore, this approach reduced fine-tuning time by a similar proportion, which could significantly lower the resource demands for fine-tuning larger models. All relevant source code and data are available on GitHub,[<https://github.com/isyufeng/FactFinders>] and the fine-tuned model can be accessed on Huggingface.[<https://huggingface.co/Rrubaa/factFinders-checkworthy-estimation>]

The remainder of the paper is structured as follows. Section <ref> presents the methodology, introducing the LLMs used in our experiments, the prompts, and the proposed data pruning techniques. Section <ref> discusses the experimental results, and Section <ref> concludes with the key findings and future directions.

§ METHODOLOGY
Our goal was to automatically refine the training data to obtain high-quality training instances and to fine-tune open-source large language models (LLMs) to identify check-worthy statements from political transcriptions. This section introduces the dataset, the LLMs used in the experiments, the prompt engineering carried out, the fine-tuning process, and the two-step data pruning we applied to the training data for effective learning.

§.§ Dataset
The dataset provided by CheckThat! 2024 comprises the train, dev, and dev-test partitions, containing 23,849 sentences from political transcriptions, along with a later release of the test partition, bringing the total to 24,163 sentences. Table <ref> presents the statistics for each partition. From this table, it is evident that the dataset is imbalanced, posing a challenge for the check-worthy statement detection task. Furthermore, Figure <ref> illustrates the distribution of text lengths across each partition, revealing that the median sentence length is approximately 10-14 words in each partition, with 28%-42% of sentences containing fewer than 10 words. This indicates that the dataset not only suffers from class imbalance but also contains predominantly short sentences, which carry a limited amount of information.

§.§ LLMs for Check-worthy Statement Detection
Open-source LLMs offer substantial advantages in terms of cost, transparency, community support, and ethical considerations. Consequently, we investigated eight prominent open-source LLMs, as detailed in Table <ref>, by fine-tuning them for check-worthy statement detection.

§.§.§ Large Language Models
Llama2-7b, Llama2-13b <cit.>, and Llama3-8b are part of the Llama family, developed by Meta, and are available in various sizes. The most recent release, Llama3-8b, was made available in April 2024. These models have been optimized for text generation and dialogue applications. Similarly, Mistral <cit.> and Mixtral <cit.> are both developed by Mistral AI. Mistral mainly focuses on optimizing transformer models for language tasks, achieving high efficiency and performance in a compact form, while Mixtral, with its hybrid approach, aims to integrate the best of various AI methodologies, offering flexibility and scalability for complex applications. Compared to this bigger model, Phi3-Mini-4K <cit.>, developed by Microsoft, is a smaller variant within the Phi-3 family, designed to provide capabilities similar to its larger counterparts with a reduced number of parameters, making it more accessible and easier to run on less powerful hardware.
Similarly, Falcon <cit.>, developed by TII, stands as one of the most powerful open-source models and consistently achieves top positions on the OpenLLM leaderboard hosted on Hugging Face. One of the latest models we experimented with, Gemma-7b <cit.>, belongs to the Gemma family developed by Google DeepMind, which is designed to offer a balance between computational efficiency and advanced capabilities in generating text and understanding complex language queries. We fine-tuned these eight open-source LLMs, published on the Hugging Face platform (links listed in Table <ref>), for check-worthy statement detection from political transcriptions.

§.§.§ Prompt Engineering
Given the critical role of prompts in the performance of LLMs, we initially came up with a simple yet direct prompt, as illustrated in Prompt <ref>. Observing that this initial prompt resulted in lengthy responses with redundant information in the zero-shot setting and lacked a clear definition of check-worthiness, we employed ChatGPT-4 to refine and improve the prompt, resulting in Prompt <ref>. All eight LLMs were fine-tuned using this refined prompt to generate `Yes' or `No' answers indicating the check-worthiness of the input statement. Furthermore, we observed that a significant proportion of sentences in the training data use pronouns to refer to political entities, thereby increasing uncertainty and ambiguity. Therefore, we experimented with an expanded version of Prompt <ref> (i.e., Prompt <ref>) to evaluate whether explicitly indicating that the pronouns in the input statement may refer to political entities could enhance the performance of the fine-tuned model. However, the initial experiments on prompt engineering revealed that neither the compressed prompt (Prompt <ref>) nor the expanded prompt (Prompt <ref>) improved the performance of the fine-tuned models (refer to Table <ref>).

§.§.§ Effective Fine-tuning
Fine-tuning an LLM is a challenging task due to its resource requirements, especially the memory demand. While this challenge can be alleviated by training only certain layers of the LLM, the computation associated with gradient updates still requires a large amount of GPU memory. Therefore, we use the Low-Rank Adaptation (LoRA) technique for fine-tuning the LLMs. Instead of updating the weights directly, LoRA tracks the changes through low-rank perturbations, requiring only minimal GPU memory. The LoRA configuration used for fine-tuning is listed in Section <ref>. To ensure experimental control, consistent hyperparameters were applied across all eight LLMs (see Table <ref>). The performance of each model on check-worthy statement detection is presented in Table <ref>. Unfortunately, we could only compare the performance of the Llama family, Mistral, and Mixtral models during the testing phase of the competition; therefore, Phi3-Mini-4K, the best-performing model on the dev-test partition, was not considered for the remaining experiments. Considering the competitive performance of the Llama2-7b model and the time and memory required, the rest of the experiments were carried out by fine-tuning Llama2-7b using Prompt <ref>.

[Prompt box: Check-worthy Statement Detection]
[Prompt box: Check-worthy Statement Detection - Compressed]
[Prompt box: Check-worthy Statement Detection - Expanded]
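To illustrate the LoRA-based fine-tuning setup described above, the snippet below sketches how such a configuration can be built with the Hugging Face transformers and peft libraries. It is a sketch only: the base model identifier, the LoRA hyperparameters (rank, alpha, dropout, target modules), and the prompt wording are illustrative placeholders and not necessarily the exact values reported in our configuration tables.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

# Illustrative values only; the values used in the paper may differ.
base_model = "meta-llama/Llama-2-7b-hf"

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model, device_map="auto")

lora_config = LoraConfig(
    r=16,                                  # rank of the low-rank update matrices
    lora_alpha=32,                         # scaling applied to the LoRA updates
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()         # only the LoRA matrices are trainable

def build_example(sentence: str, label: str) -> str:
    # Hypothetical prompt wording; the actual prompts are shown in the boxes above.
    return (
        "Classify whether the following statement from a political debate is "
        f"check-worthy. Answer Yes or No.\nStatement: {sentence}\nAnswer: {label}"
    )
```

Training would then proceed with a standard supervised fine-tuning loop over examples formatted in this way, for instance using the transformers Trainer or TRL's SFTTrainer, so that only the low-rank adapter weights are updated.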
§.§ Data Pruning for Effective Learning
As previously mentioned, the training data is highly imbalanced and mostly contains short sentences with limited information. Recent studies <cit.> have demonstrated that, instead of using the entire training data for fine-tuning, using only high-quality labels improves the performance of check-worthy statement detection from political transcriptions. Inspired by this direction, we experimented with a two-step data pruning approach to automatically identify high-quality data instances for effective learning.

§.§.§ Step 1 - Identifying Informative Sentences
We began by identifying informative sentences in the training data that could potentially convey meaningful information for training. In other words, we intended to remove noisy instances from the training data as the first step of the data-pruning process. We define a political statement as informative if it meets one of the following four criteria:
* Check-worthy status is "Yes": if the class label of the statement is "Yes", it is informative.
* Contains a named entity: if the statement contains a named entity, it is highly likely to discuss information related to that entity, and is hence informative.
* Contains an informative verb: if the statement contains an informative verb, it is highly likely to discuss an informative action, and is hence informative.
* Lengthy enough: if the statement is long enough, it is likely to convey meaningful information.

While the first criterion, the check-worthy status, is straightforward, the other criteria require extracting further information to determine the informative status of a sentence. We used the BERT model <cit.> fine-tuned[<https://huggingface.co/dslim/bert-base-NER>] for the Named Entity Recognition (NER) task to identify the presence of a named entity. The NER model identifies four types of named entities: person, organization, location, and miscellaneous. We noticed that only 37.8% of the training data contains at least one named entity and only 12.2% mentions a person's name, indicating the prevalent use of pronouns to refer to political entities in the transcriptions, which makes the task more challenging.

In order to identify informative verbs, we first extracted all verbs present in the training data using the Part-of-Speech tagger from the NLTK library.[<https://www.nltk.org/api/nltk.tag.pos_tag.html>] The extracted verbs were lemmatized to bring them to their base form, resulting in 3838 verbs in the training data to be identified as informative or not. We wanted to automatically classify each verb as either informative or not based on whether it conveys a check-worthy action. However, performing this binary classification in a zero-shot setting with a language model is challenging, as explicitly defining an informative verb in the prompt may result in ambiguous classifications. Therefore, we performed a fine-grained categorization of the verbs into the following 10 categories:
* Physical Actions: e.g. Run
* Mental Actions: e.g. Think
* Changes in State: e.g. Grow
* Creation or Destruction: e.g. Build
* Communication: e.g. Discuss
* Movement: e.g. Walk
* Emotion: e.g. Hope
* Perception: e.g. See
* Linking verbs: e.g. is, has
* None: any verb that does not fit into the other categories

The first 8 categories were obtained by prompting ChatGPT with the question "What are the types of action verbs?". In addition to these 8 action verb categories, we added Linking verbs and None, resulting in 10 categories of verbs. The option "None" indicates that a verb does not fit into any of the other 9 categories.

[Prompt box: Verb Classification]

We utilized Mixtral <cit.> in a zero-shot setting to classify each verb into one of the 10 categories using Prompt <ref>. Among the 10 categories, we chose Physical Actions, Changes in State, Creation or Destruction, Communication, and Movement as the informative verb types, as the other verb types are less likely to represent a check-worthy action. Figure <ref> shows a word cloud of the verbs present in the training data, with informative verbs highlighted in green and non-informative verbs in red; it can be observed that most verbs are classified into the two groups correctly. Figure <ref> presents the verb type distribution in the training data. Interestingly, the occurrences of informative verb categories are relatively high compared to non-informative verb categories. We further noticed that 88.2% of the check-worthy statements contain at least one informative verb, whereas this value drops to 77.2% for non-check-worthy statements.

The final factor determining an informative sentence, the minimum length, is difficult to define explicitly. Therefore, we choose the minimum-length value that reaches the optimal F1-score on the dev-test data by varying the value from 3 to 10. We excluded stop words[<https://www.nltk.org/search.html?q=stopwords>] when calculating the length of a statement. We observed that the most informative sentences are obtained when the minimum length factor is set to 8 (refer to Section <ref> for the results). With this optimal setting, the first step of data pruning resulted in a reduced training set of 20,141 sentences; in other words, 2358 sentences (10.5% of the original training data) were filtered out as non-informative at this stage of the data pruning process.
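The Step-1 filter can be summarized in code as follows. This is an illustrative sketch only: it assumes the Hugging Face NER pipeline and the NLTK tagger, lemmatizer, and stop-word list mentioned above (with the required NLTK data downloaded), takes the set of informative verb lemmas obtained from the zero-shot categorization as an input, and uses the minimum-length threshold of 8 selected above; the helper names are ours.

```python
import nltk
from nltk.stem import WordNetLemmatizer
from nltk.corpus import stopwords
from transformers import pipeline

# NER model as described above (dslim/bert-base-NER on Hugging Face).
ner = pipeline("ner", model="dslim/bert-base-NER", aggregation_strategy="simple")
lemmatizer = WordNetLemmatizer()
stop_words = set(stopwords.words("english"))

def is_informative(sentence: str, label: str,
                   informative_verbs: set, min_length: int = 8) -> bool:
    """Step 1: keep a sentence if any of the four criteria holds."""
    if label == "Yes":                       # criterion 1: check-worthy
        return True
    if ner(sentence):                        # criterion 2: contains a named entity
        return True
    tokens = nltk.word_tokenize(sentence)
    verbs = {lemmatizer.lemmatize(tok.lower(), pos="v")
             for tok, tag in nltk.pos_tag(tokens) if tag.startswith("VB")}
    if verbs & informative_verbs:            # criterion 3: informative verb
        return True
    content = [t for t in tokens if t.lower() not in stop_words]
    return len(content) >= min_length        # criterion 4: long enough
```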
§.§.§ Step 2 - Under Sampling using Condensed Nearest Neighbour
The informative sentences identified in the previous stage are still class-imbalanced. Therefore, we applied an under-sampling technique, Condensed Nearest Neighbour (CNN) <cit.>, to generate class-balanced training data. We retained all the minority data instances (check-worthy statements) and sampled only the majority data instances (non-check-worthy statements). The idea behind CNN sampling is to identify a subset of data instances that can be used to correctly classify all the other, unsampled data instances using the 1-Nearest-Neighbour rule. This ensures that the sampled data distribution matches the original data distribution without information loss. CNN requires the input data to be represented as vectors, so that it can iteratively sample data points from a vector space and perform the 1-Nearest-Neighbour classification on the unsampled data points. We used BERT <cit.> embeddings to convert the sentences in the training data into vector representations of length 768. The imbalanced-learn[<https://imbalanced-learn.org/stable/references/generated/imblearn.under_sampling.CondensedNearestNeighbour.html>] Python library was used to perform CNN sampling. This resulted in a sampled dataset of 9907 sentences (44% of the original training data) as the high-quality training data. Since we retained all the positive data instances (check-worthy statements) during the data pruning process, the resulting high-quality training data comprised 54.6% positive and 45.4% negative instances.
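The Step-2 under-sampling can be sketched as follows, assuming `informative_sentences` and `labels` hold the output of Step 1. The sketch uses the CondensedNearestNeighbour implementation from imbalanced-learn mentioned above; the choice of mean pooling over BERT token states and the parameter values are illustrative assumptions rather than the exact configuration used in our experiments.

```python
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModel
from imblearn.under_sampling import CondensedNearestNeighbour

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

def embed(sentences, batch_size=32):
    """768-dimensional BERT sentence vectors (mean-pooled, illustrative choice)."""
    vectors = []
    with torch.no_grad():
        for i in range(0, len(sentences), batch_size):
            batch = tokenizer(sentences[i:i + batch_size], padding=True,
                              truncation=True, return_tensors="pt")
            hidden = encoder(**batch).last_hidden_state       # (B, T, 768)
            mask = batch["attention_mask"].unsqueeze(-1)
            vectors.append(((hidden * mask).sum(1) / mask.sum(1)).numpy())
    return np.vstack(vectors)

X = embed(informative_sentences)   # sentences kept by Step 1
y = np.array(labels)               # "Yes"/"No" per sentence

# Condense only the majority (non-check-worthy) class; all positives are kept.
cnn = CondensedNearestNeighbour(sampling_strategy="majority",
                                n_neighbors=1, random_state=0)
X_res, y_res = cnn.fit_resample(X, y)
```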
Figure <ref> visualizes the training data points in two dimensions at each stage of the data pruning process. It can be observed that the uninformative data points filtered during Step 1 (Figure <ref>) are concentrated around the bottom-left corner of the plot. Further, CNN samples a similar distribution of non-check-worthy data points (Figure <ref>), yielding balanced training data.

§ RESULTS
We discuss the experimental results in this section: the hyper-parameters used to fine-tune the LLMs, the environment setting, the performance of the various LLMs together with a consistency analysis, the effect of prompt engineering, and the impact of training data pruning on the check-worthy statement detection task.

§.§ Hyper-parameters and Environment Setting
The hyper-parameters used to fine-tune the LLMs in our experiments are listed in Table <ref>. Most hyper-parameters were identical for all the LLMs we experimented with, except the training batch size, which was reduced to 1 for Mixtral due to its memory demand; consequently, the gradient accumulation steps for this model were increased to 4. All experiments were conducted using Queen Mary's Apocrita HPC facility, supported by QMUL Research-IT <cit.>. Specifically, one GPU (Volta V100 or Ampere A100) with 8 CPU cores, each with 11 GB of memory, was used to train and test all the models.

While generating predictions with the fine-tuned models for evaluation, we noticed that a language model may produce different predictions for the same prompt and input sentence. Therefore, we ran each fine-tuned language model 5 times and took the majority prediction as the final prediction of the model. Further, we fine-tuned each model 3 times and report the average performance of the 3 fine-tuned models. We use the train and dev partitions for training and validation of the models, and the performance of the models is reported on the other two partitions, dev-test and test.

§.§ Evaluation Metrics
The official evaluation metric for CheckThat! 2024 task 1, check-worthiness estimation, is the F1-score over the positive class. However, since a single metric could bias the comparison, we report average accuracy, precision, and recall along with the F1-score. Further, we computed consistency@K of the fine-tuned models, indicating the fraction of data instances for which the model generated the same output class in all K iterations. The consistency scores reported in the following subsections were computed over 5 iterations (consistency@5).
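The two protocol details above, majority voting over repeated runs and consistency@K, can be expressed compactly. The sketch below is illustrative, assuming that the predictions of each run are collected as lists of "Yes"/"No" strings; the function names and example values are ours.

```python
from collections import Counter
from typing import List

def majority_vote(runs: List[List[str]]) -> List[str]:
    """Final label per instance: the most common prediction across runs."""
    return [Counter(preds).most_common(1)[0][0] for preds in zip(*runs)]

def consistency_at_k(runs: List[List[str]], k: int) -> float:
    """Fraction of instances with identical predictions in the first k runs."""
    first_k = runs[:k]
    return sum(len(set(preds)) == 1 for preds in zip(*first_k)) / len(first_k[0])

# Hypothetical example: 5 runs over 4 instances.
runs = [
    ["Yes", "No", "Yes", "No"],
    ["Yes", "No", "Yes", "Yes"],
    ["Yes", "No", "Yes", "No"],
    ["Yes", "No", "No",  "No"],
    ["Yes", "No", "Yes", "No"],
]
print(majority_vote(runs))        # ['Yes', 'No', 'Yes', 'No']
print(consistency_at_k(runs, 5))  # 0.5: only the first two instances agree in all runs
```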
§.§ Comparison of LLMs
We compare the performance of the eight open-source LLMs on the test and dev-test partitions; Table <ref> reports their average accuracy, precision, recall, F1-score, and consistency@5. Since the class distributions of the dev-test and test partitions differ, the performance of all LLMs varies between the two partitions. While Phi3-Mini-4K obtains the highest F1-score on the dev-test partition, the Llama2 models stand out as the best-performing models on the test partition in terms of F1-score. Further, both Llama2-7b and Phi3-Mini-4K demonstrate greater consistency in predicting class labels across both partitions compared to the other models. On the other hand, Llama3-8b, one of the latest models from the Llama family, reaches the highest accuracy and precision on the test partition; however, it fails to outperform the other models in terms of F1-score due to poor recall. Similarly, Mixtral, the largest model we compared, fails to give a consistent performance across both partitions. Moreover, both Mistral and Falcon remain the lower-performing models in both partitions.

Table <ref> lists the fine-tuning time of each LLM. Except for Mixtral and Phi3-Mini-4K, the largest and smallest models compared respectively, the fine-tuning time of all models remains between 6 and 12 hours. As expected, the fine-tuning time increases with the number of parameters. As already mentioned, we were able to compare only the Llama models, Mistral, and Mixtral during the testing phase of the competition. Therefore, considering the performance of these models in terms of F1-score on the dev-test partition and the time and memory required to fine-tune them, we chose Llama2-7b as the optimal LLM for the remaining experiments.

§.§ Consistency Analysis
Since LLMs are text generation models, they tend to generate different outputs for the same input text, even in a fine-tuned setting. Therefore, we analyze their consistency in predicting the same output class label over K iterations with the metric consistency@K. Figure <ref> presents the change in consistency as the number of iterations is varied from 2 to 25 on the test and dev-test partitions. Consistency declines as the number of iterations increases and becomes stable after around 11-12 iterations. While the model reaches a stable consistency in both partitions, the drop in consistency (the difference between the initial consistency at K=2 and the stable consistency) is nearly double for the test partition (3.1% vs. 5.9%).

§.§ Effect of Prompt Engineering
We conducted experiments using the three proposed versions of prompts discussed in Section <ref>, along with a prompt without any instruction, to analyze the impact of prompt engineering on the performance of the Llama2-7b model. Table <ref> presents the evaluation results on the test and dev-test partitions. While all three proposed prompts reach a similar F1-score on the test partition, their impact is quite evident on the dev-test partition, where Prompt <ref> achieves the best overall scores across all metrics. It is worth noting that the expanded prompt, Prompt <ref>, shows a notably high recall score and consistency on the test partition, possibly due to the attention given to pronouns in the instruction. Moreover, the substantial performance decline observed when no instructions are included in the prompt highlights the importance of prompt engineering in achieving optimal results from LLMs. In addition to the performance metrics, Table <ref> indicates that the fine-tuning time increases with the length of the instruction. In line with the competition's settings, we primarily consider the F1-score of each prompt on the dev-test partition and consequently selected Prompt <ref> as the optimal one for the remaining experiments.

§.§ Effect of Data Pruning
As discussed earlier, we leave the minimum length factor as a parameter of the data pruning process. We therefore varied the minimum length from 3 to 10 and observed the performance of the Llama2-7b model fine-tuned on the pruned dataset. Figure <ref> shows the F1-score of the fine-tuned model on the dev-test partition. It can be observed that Step 1 alone is not sufficient to identify high-quality training data, and that Step 2 always boosts the performance when combined with Step 1, except when the minimum length factor is set very low (3-4). The optimal performance for the two-step data pruning is obtained when the minimum length factor is set to 8.
Table <ref> presents the performance of Llama2-7b on the original and pruned training data. The model trained using only the Step 2 pruning approach yields the highest F1-score and accuracy on the dev-test partition, and its performance on the test partition is close to that of the model trained without any pruning. While the two-step data pruning approach reaches a slightly lower F1-score on the test partition, it stands out as the high-recall model in both partitions. Further, it is worth noting that this model results in a better precision-recall trade-off on the dev-test partition compared to the models trained with the individual pruning steps (Step 1 only and Step 2 only). Moreover, the consistency of the models in predicting the same output class ranged from 0.9 to 0.95, with the lower end of this range always observed on the dev-test partition.

During the testing phase of the competition, we could not compare the average performance of the models due to time limitations. Therefore, we submitted the class labels predicted by the Llama2-7b model fine-tuned without the data pruning process to the CheckThat! 2024 task 1 leaderboard, as it yielded a slightly higher F1-score compared to the other approaches. This submission was ranked 1st in the leaderboard with the highest F1-score of 0.802. While this score is lower than the average F1-score reported in Table <ref>, the standard deviation indicates that the model performance could vary from 0.801 to 0.839. Table <ref> reports the training data sizes and the corresponding average fine-tuning times. It demonstrates that data pruning strategies allow competitive performance on both test and dev-test data while utilizing only about 44%-44.5% of the original training data. Further, the fine-tuning time is reduced in a similar proportion, cutting the training time by more than half compared to using the original training data. This indicates that obtaining high-quality training data is crucial for developing effective check-worthy statement detection models for political transcriptions.

§ CONCLUSION
This paper describes the experiments conducted by the FactFinders team for CheckThat! 2024 task 1, check-worthiness estimation in English. We experimented with eight open-source LLMs, using fine-tuning and prompt engineering, to identify check-worthy statements in political transcriptions. Our Llama2-7b model fine-tuned on the training data secured the 1st position in the leaderboard among a total of 26 participants, with F1-scores surpassing the baseline for the task. This demonstrates that open-source models are powerful for check-worthy statement detection in the English language. Further, we demonstrated the role of data pruning in identifying high-quality training data for effective learning. Our results show that competitive or better performance can be obtained by utilizing only about 44% of the training data, while saving fine-tuning time in a similar proportion. Apart from fine-tuning LLMs for check-worthy statement detection, we utilized LLMs for refining prompts and for identifying informative verbs in a zero-shot setting. The key challenges we faced while utilizing LLMs for check-worthy statement detection were the memory requirements and their inconsistent responses when predicting the class label for the same input statement. We used the Low-Rank Adaptation (LoRA) technique for efficient fine-tuning with low GPU memory usage to address the memory requirements.
While we tried to mitigate the consistency issue by running the fine-tuned models 5 times and taking the majority prediction, our consistency analysis reveals that the consistency score itself is unstable during early iterations and may drop by around 6% before reaching stability. This behavior of LLMs calls into question their suitability for classification tasks in general, as inconsistent responses may lead to less reproducible results.

§ ACKNOWLEDGMENTS
Rrubaa Panchendrarajan is funded by the European Union and UK Research and Innovation under Grant No. 101073351 as part of Marie Skłodowska-Curie Actions (MSCA Hybrid Intelligence to monitor, promote, and analyze transformations in good democracy practices). Yufeng Li is funded by the China Scholarship Council (CSC).