http://arxiv.org/abs/2407.02989v1
Published: 2024-07-03
Numerical solution of nonlinear Schrödinger equation by a hybrid pseudospectral-variational quantum algorithm
Nikolas Köcher, Hendrik Rose, Jörg Schumacher, Stefan Schumacher
Primary category: quant-ph; Categories: quant-ph
Department of Physics and Center for Optoelectronics and Photonics Paderborn (CeOPP), Paderborn University, D-33098 Paderborn, Germany Institute for Photonic Quantum Systems (PhoQS), Paderborn University, D-33098 Paderborn, Germany Institute of Thermodynamics and Fluid Mechanics, Technische Universität Ilmenau, D-98684 Ilmenau, Germany Tandon School of Engineering, New York University, New York City, NY 11201, USA Department of Physics and Center for Optoelectronics and Photonics Paderborn (CeOPP), Paderborn University, D-33098 Paderborn, Germany Institute for Photonic Quantum Systems (PhoQS), Paderborn University, D-33098 Paderborn, Germany Wyant College of Optical Sciences, University of Arizona, Tucson, AZ 85721, USA § ABSTRACT The time-dependent one-dimensional nonlinear Schrödinger equation (NLSE) is solved numerically by a hybrid pseudospectral-variational quantum algorithm that connects a pseudospectral step for the Hamiltonian term with a variational step for the nonlinear term. The Hamiltonian term is treated as an integrating factor by forward and backward Fourier transformations, which are here carried out classically. This split allows us to avoid higher-order time integration schemes, to apply a first-order explicit time stepping for the remaining nonlinear NLSE term in a variational algorithm block, and thus to avoid numerical instabilities. We demonstrate that the analytical solution is reproduced with a small root mean square error for a long time interval over which a nonlinear soliton propagates significantly forward in space while keeping its shape. We analyze the accuracy of the quantum algorithm and compare it with classical approaches. Furthermore, we investigate the influence of algorithm parameters on the accuracy of the results, including the temporal step width and the depth of the quantum circuit. § INTRODUCTION The application of quantum computing methods to the solution of linear and nonlinear ordinary or partial differential equations has received substantial interest in recent years <cit.>. It remains an open question whether the specific and unique properties of quantum algorithms, in comparison to classical ones, lead to faster and more efficient numerical solution methods, leaving aside the technological hurdles of present noisy intermediate-scale quantum (NISQ) devices <cit.>. These properties comprise their encoding capabilities, which grow exponentially with the qubit number, and the unique parallelism and correlations due to the entanglement of multiple qubits. The question has been approached from different methodological directions and for different problem classes. Fundamental nonlinear partial differential equations such as the nonlinear Schrödinger or Burgers equations appear in many applications, ranging from nonlinear optics <cit.>, Bose-Einstein condensation <cit.>, and oceanography <cit.> via fluid turbulence <cit.> to astrophysical jets <cit.>. Solution algorithms comprise (i) a combination of quantum linear systems algorithms (QLSA) with homotopy perturbation methods <cit.>, (ii) linearization of nonlinear problems <cit.> by Carleman embedding <cit.>, (iii) quantum feature map encodings of nonlinearities <cit.> as in kernel methods of machine learning <cit.>, and (iv) variational quantum algorithms (VQA) <cit.>.
VQAs are inspired by variational quantum eigensolvers, which search for the ground state of a many-particle quantum system by minimizing an energy functional <cit.>. This class of variational methods provides the motivation for the present study. Here, we present a numerical solution method for the one-dimensional nonlinear Schrödinger equation (NLSE) that applies a pseudospectral-variational quantum algorithm. This algorithm combines a pseudospectral split-step method for the linear part of the NLSE with a variational algorithm for the nonlinear part. Variational algorithms typically rely on low-order schemes for the time advancement <cit.>. In the case of the NLSE this causes numerical instabilities, which classically have to be suppressed by higher-order time-stepping methods. The present split method overcomes this stability problem. We investigate the dependence of the results on the degree of entanglement of the parametric quantum circuit, which is controlled by the depth of the circuit. It is shown that the algorithm can solve the equation for at least 100 integration time steps, an exceptionally long integration period for a quantum algorithm. Figure <ref> shows an illustrative simulation result obtained with the algorithm presented in this paper. Variational quantum algorithms, i.e., hybrid quantum-classical algorithms, have been used to solve linear and nonlinear partial differential equations in the past years. This includes linear one-dimensional advection-diffusion equations for simple transport problems <cit.> and heat equations in one and two dimensions <cit.>. Nonlinear problems include the steady one-dimensional NLSE <cit.> and the one-dimensional Burgers equation for the nonlinear steepening of a sine wave <cit.>. In refs. <cit.> an alternative Feynman-Kitaev algorithm is used, which orders spatial and temporal qubits in one register and thus avoids the time stepping. However, this increases the number of qubits in the required quantum register and thus limits applications on current NISQ devices to a few time steps only. The outline of this manuscript is as follows. In Sec. II we summarize the NLSE and analytical solutions for the validation of the method together with the basic ideas of variational algorithms and the split-step method. This includes the implementation of the nonlinear term s|Ψ|^2Ψ (see Sec. II A) and the evaluation of the cost function. The numerical results obtained from the quantum algorithm are presented in Sec. III, where we also analyze the accuracy of the method compared to classical algorithms and investigate the dependence on algorithm parameters. We conclude with a summary and outlook in Sec. IV. § METHODS §.§ Nonlinear Schrödinger Equation The one-dimensional, time-dependent nonlinear Schrödinger equation (NLSE), which is also known as the Gross-Pitaevskii equation in the context of Bose-Einstein condensation, for the complex wave function Ψ(x,t) is given in dimensionless form by i ∂Ψ/∂t = -(1/2) ∂^2Ψ/∂x^2 + VΨ - s|Ψ|^2Ψ , Ψ(x,0) = Ψ_0(x) , where the nonlinear coupling constant s describes the strength of the nonlinearity. The potential energy operator is V. The case s>0 gives rise to a focusing nonlinearity and bright soliton solutions; the case s<0 leads to defocusing nonlinearities and dark solitons. The NLSE is integrable in one dimension. For the case of V=0, there exist several analytical solutions of (<ref>).
For x∈ℝ, s=1, and the initial condition Ψ_0(x) = a sech(a(x-x_0)) e^{iv(x-x_0)} , one gets the following analytical solution Ψ(x,t) = a sech(a(x-x_0-vt)) e^{iv(x-x_0) + (i/2)(a^2-v^2)t} , with constants a>0, v>0, and x_0∈ℝ. This solution corresponds to soliton propagation. We can furthermore construct a solution Ψ_p(x,t) that satisfies periodic boundary conditions on a finite domain of length L, Ψ_p(x,t) = max({Ψ(x+kL,t) | k∈ℤ}), where the maximum selects the element of largest modulus. This analytical solution will be used as the test case for the present algorithm. Throughout the rest of the manuscript we set V=0. §.§ Variational Quantum Algorithms VQAs are hybrid quantum-classical algorithms where a parameterized cost function C is minimized by an optimizer <cit.>. The cost function is evaluated by a quantum circuit composed of n qubits and single- and two-qubit gates; the search for the cost minimum is performed classically in an N-dimensional parameter space over a parameter vector λ. This parameter vector λ, which consists of the angles of the single-qubit unitary rotation gates, λ = (λ_1, λ_2, …, λ_N)∈ℝ^N, is the input to the algorithm. For each time step, the trial solutions for the NLSE |Ψ(t+Δt)⟩, represented by normalized quantum states |ψ(t+Δt)⟩, are generated by the parametric quantum circuit |ψ(λ)⟩ = U(λ)|0⟩^⊗n . Because the normalizations of |Ψ(t)⟩ and |ψ(t)⟩ differ, they are related by a constant factor, as further explained in section <ref>. The cost function C(λ) is then also evaluated on the quantum device in a Hadamard-test-like circuit <cit.>. This amounts to minimizing the distance C(λ) = ‖ |ψ(λ)⟩ - F[|ψ(t)⟩] ‖_2^2 = -2 Re⟨ψ(λ)|F[ψ(t)]⟩ + const. → min , where F[|ψ(t)⟩] is the (nonlinear) iterate from the past step t in correspondence with the underlying partial differential equation (PDE). Note that we dropped the additional constant terms in the cost function. Discretized in space and time, Eq. (<ref>) can be formally written as ψ(x_j,t+Δt) = F[ψ(x_k,t)] , see also the next subsection. The cost is evaluated by multiple repeated measurements of identically prepared quantum states in each iteration step. These costs are minimized with a limited-memory Broyden-Fletcher-Goldfarb-Shanno algorithm with constant lower and upper bounds for λ_k (L-BFGS-B), a quasi-Newton method for solving bound-constrained nonlinear optimization problems <cit.>. Thereby, the Hessian matrix of the cost function is approximated by the evaluation of the gradients (or the approximated gradients) in order to find the descent direction in the parameter landscape. The optimal parameter set λ^* initializes the ansatz function such that the solution of the given problem can be observed <cit.>. We use random initial parameters for the first time step and the optimized parameters from the previous step for all subsequent steps. §.§ Pseudospectral Split-Step Method We consider the solution on a finite domain x ∈ [-π,π) and discretize the interval with M grid points x_j = -π + 2πj/M (0≤ j < M). We use periodic boundary conditions such that x_M ≡ x_0. The temporal evolution is solved on an equidistant time grid starting from t_0 = 0 with a fixed time-step width Δt. We apply operator splitting to treat the linear and nonlinear operators in the NLSE (<ref>) separately in a two-step process <cit.>. First, we compute an implicit substep for the linear Laplacian operator using the exact solution in Fourier space, which implicitly includes the periodic boundary condition. The wavenumbers of the Fourier modes are given by k_j = j - M/2.
Then we compute an explicit step for the nonlinear operator using an Euler step. The time-discretized stepping scheme from t to t+Δt proceeds in two substeps. The first, implicit substep is given by Ψ̃(x_j,t) = ℱ^-1( exp(-(i/2) k_j^2 Δt) ℱ(Ψ(x_j,t))) . In the case of V ≠ 0, we could treat the additional potential term of the NLSE again with an integrating factor exp(-iVΔt) <cit.>. Subsequently, the second, explicit (Euler) substep follows as Ψ(x_j,t+Δt) = Ψ̃(x_j,t) + i s Δt |Ψ̃(x_j,t)|^2 Ψ̃(x_j,t) . The symbols ℱ and ℱ^-1 stand for the discrete Fourier and inverse Fourier transforms, respectively. The split-step method has a central advantage over other methods: it remains stable for large temporal step widths Δt because the second derivative in x is treated exactly and no longer appears in the explicit substep. This is important considering the limited number of time steps that can be done with a VQA, partly owing to the accumulating errors of the optimization. A fully explicit stepping scheme would not be applicable to solve the NLSE by VQA with the same amount of resources, since it requires a smaller Δt while the total number of accurate VQA steps remains fixed. Furthermore, all required steps can be mapped to a quantum circuit. Next, we will discuss the realization of this scheme with a quantum algorithm. Note that the implicit step Eq. (<ref>) requires two Fourier transforms and a multiplication by a phase. The latter can be done with the circuit shown in Ref. <cit.>, while the former is realized by a straightforward application of the quantum Fourier transform algorithm <cit.>. The explicit step can be computed by the VQA. §.§ Evaluation of the Cost Function We expand the wave function into the n-qubit basis as follows: |Ψ(t)⟩ = √(2a/Δx) ∑_{j=0}^{2^n-1} ψ(x_j,t) |j⟩ = √(2a/Δx) |ψ(t)⟩ , where |j⟩ denotes the n-qubit basis state corresponding to the binary representation of j. Consequently, M = 2^n for a qubit number n. Equation (<ref>) provides a construction for switching between the representations and satisfies the normalization constraint ⟨ψ(t)|ψ(t)⟩ = 1. Since the wave function is normalized according to ∫_{-∞}^{∞} |Ψ(x,t)|^2 dx = 2a, we do not need an extra optimization parameter for the amplitude. The computed solution is related to the solution of the NLSE by a constant factor. Note that the initial condition for the soliton at t=0 is directly implemented into the cost function. We formulate a cost function for the operator F that describes the explicit step Eq. (<ref>). F is diagonal and its elements f_{j,j} have the form f_{j,j} = 1 + i s Δt (2a/Δx) |ψ(x_j,t)|^2 . Note that the factor 2a/Δx originates from the normalization conditions Eqs. (<ref>) and (<ref>). Substituting F into Eq. (<ref>) and omitting constant terms yields the following cost function: C(λ_{t+Δt}) = s Δt (2a/Δx) Im{⟨ψ(t+Δt)| ψ̃(t)ψ̃^*(t) |ψ̃(t)⟩} - Re{⟨ψ(t+Δt)|ψ̃(t)⟩} , where |ψ̃(t)⟩ is the state vector after the implicit substep and |ψ(t+Δt)⟩ is a function of λ_{t+Δt}. To initialize the state |ψ(t)⟩ = U(λ_t)|0⟩^⊗n and the trial state |ψ(t+Δt)⟩ = U(λ_{t+Δt})|0⟩^⊗n we use the parameterized ansatz circuit shown in Fig. <ref>, consisting of multiple layers of single-qubit rotations and controlled-NOT (CNOT) gates. We define the depth d of the quantum circuit as the number of CNOT layers, each followed by a layer of single-qubit rotation gates R_x and R_z, that are applied after the initial layer of rotation gates. Note that we tested other ansatz circuits as well; the one presented here was the most accurate.
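Before turning to the quantum-circuit realization of this scheme, the classical reference version (used later for the C-RMSE comparison) can be stated compactly. The following NumPy sketch is our own illustration, not the authors' code: it implements the two substeps above with the soliton parameters used in Sec. III (a = 2, x_0 = -1, v = 10, s = 1, M = 64, Δt = 3·10^-3) and periodizes the analytical solution by selecting, per grid point, the image of largest modulus.

```python
import numpy as np

# Grid and parameters as used in the Results section.
M, a, x0, v, s, dt, steps = 64, 2.0, -1.0, 10.0, 1.0, 3e-3, 100
x = -np.pi + 2 * np.pi * np.arange(M) / M
k = np.fft.fftfreq(M, d=1.0 / M)  # integer wavenumbers in FFT bin order, same set as j - M/2

def sech(z):
    return 1.0 / np.cosh(z)

def analytical(t):
    """Periodized bright soliton: per grid point, take the image x + k*L (L = 2*pi)
    with the largest modulus (our reading of the max in the definition of Psi_p)."""
    psi = np.zeros(M, dtype=complex)
    best = np.zeros(M)
    for kk in range(-3, 4):
        xs = x + kk * 2 * np.pi
        cand = (a * sech(a * (xs - x0 - v * t))
                * np.exp(1j * v * (xs - x0) + 0.5j * (a**2 - v**2) * t))
        mask = np.abs(cand) > best
        psi[mask], best[mask] = cand[mask], np.abs(cand[mask])
    return psi

psi = analytical(0.0)
phase = np.exp(-0.5j * k**2 * dt)  # integrating factor exp(-(i/2) k^2 dt)
for n in range(steps):
    psi_t = np.fft.ifft(phase * np.fft.fft(psi))          # implicit (Fourier) substep
    psi = psi_t + 1j * s * dt * np.abs(psi_t)**2 * psi_t  # explicit Euler substep

t_end = steps * dt
rmse = np.sqrt(np.mean((np.abs(psi) - np.abs(analytical(t_end)))**2))
print(f"RMSE of |psi| at t = {t_end:.3f}: {rmse:.3e}")
```

With these settings the explicit Euler substep remains stable over the 100 steps simulated later, illustrating the stability advantage of the integrating-factor treatment of the Laplacian discussed above.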
For each time step, the state |ψ̃(t)⟩ after the first substep is then obtained by additionally applying the quantum Fourier transform (QFT) <cit.>, multiplying by the appropriate phase for each k in Fourier space, see Eq. (<ref>), and applying the inverse QFT. This yields |ψ̃(t)⟩ = Ũ(λ_t) |0⟩^⊗n , with Ũ(λ_t) = (QFT)^† X^{(n-1)} U_ph X^{(n-1)} (QFT) U(λ_t) . After applying the QFT, the zero-frequency component of the state is shifted to the center of Fourier space by applying an X gate to the most significant qubit; another X gate before the inverse QFT reverses this shift. Following reference <cit.>, U_ph can be constructed using n^2 phase and controlled-phase gates, U_ph = ∏_{i,j=0}^{n-1} P_{ij} with P_{ij} = P^{(i)}(γ(2^{2i} - 2^{n+i})) if i = j and P_{ij} = CP^{(ij)}(γ 2^{i+j}) if i ≠ j, where γ = -Δt/2 and P and CP denote the phase and controlled-phase gates. Using these circuits, the second term in Eq. (<ref>) can be computed with a Hadamard test; see, e.g., Ref. <cit.> for a detailed description. Figure <ref> shows the circuit used to evaluate the nonlinearity, denoted as the quantum nonlinear processing unit (QNPU), Im{∑_{j=0}^{2^n-1} ψ_j^*(t+Δt) |ψ̃_j(t)|^2 ψ̃_j(t)} . The QNPU is based on the circuits from refs. <cit.> for evaluating functions of the form F = (f^{(1)})^* ∏_{j=1}^{r} f^{(j)}. To arrive at the circuit shown here we use r=3, f^{(1)}=f^{(2)}=ψ̃(t) and f^{(3)}=ψ^*(t+Δt), and add an S^†-gate to compute the imaginary part instead of the real part of the expression. The value of expression (<ref>) is then obtained as the expectation value of a Z-measurement performed on the ancilla (first) qubit. The unitary U^*(λ_{t+Δt}) initializes the complex conjugate of the state |ψ(t+Δt)⟩ and is obtained by applying U(λ_{t+Δt}) but changing the sign of the angles of rotation of the R_x and R_z gates. To keep the simulation times within reasonable limits, we do not evaluate the cost function on a simulated quantum circuit in the subsequent simulations. Nevertheless, we verified that simulated quantum circuits yield the same value for the cost function as direct matrix multiplication. For the same reason, we compute the Fourier transform classically with a fast Fourier transform (FFT) algorithm rather than with the QFT algorithm. The quantum circuit for the VQA is implemented with the Qiskit quantum simulation software <cit.>. The classical optimization of the quantum circuit is done with the Qiskit implementation of the L-BFGS-B optimizer algorithm. A relevant parameter for the optimization algorithm is ftol, an upper bound on the relative change of the cost between two consecutive iteration steps below which the optimization terminates. Qiskit can be used in three different ways: (1) as an ideal simulator without quantum circuit noise that monitors the complex quantum statevector (statevector simulator), (2) as an ideal simulator without quantum circuit noise but with measurement noise (qasm simulator), and (3) as a simulator that emulates NISQ devices with quantum circuit and measurement noise. We use only (1) throughout this work. § RESULTS This section is divided into two parts. First, we demonstrate simulation results with the presented algorithm and discuss its suitability and accuracy. Second, we investigate the influence of algorithm parameters, i.e., the time-step width and the depth of the quantum circuit. §.§ Simulation Results and Accuracy Analysis Henceforth, we will use the parameters a = 2, x_0 = -1, and v = 10 for the initial state obtained from Ψ_p(x,0) in Eq.
(<ref>), unless stated otherwise. We apply the quantum algorithm with n=6 qubits, a circuit depth of d=12, a time step of Δt = 3·10^-3, and ftol = 10^-14 to compute solutions for the linear case with s=0 and the nonlinear case with s=1, the latter leading to soliton propagation. The simulation results are shown in Fig. <ref>. Both are obtained for a grid resolution of M=64, corresponding to 6 qubits. The simulation demonstrates that our proposed quantum algorithm describes the time evolution correctly over a long propagation range. The analytical solution is visually indistinguishable from the numerical solution and is therefore not shown, to preserve clarity. The accuracy of the quantum algorithm is analyzed in the following by comparing the numerical result for the soliton solution with the analytically exact solution given by Eq. (<ref>). For a quantitative analysis we introduce the root mean square error (RMSE), which is given by RMSE(m) = √( (1/2^n) ∑_{j=0}^{2^n-1} | |Ψ_num(x_j,t_m)| - |Ψ_p(x_j,t_m)| |^2 ) , where Ψ_num denotes the numerical result. To make the comparison more insightful, we investigate not only the quantum algorithm (Q) but also its classical version (C), as well as the classical version in which we manually normalize the solution after each time step (NC); classical version means that we use the classical algorithm given by Eqs. (<ref>) and (<ref>). The respective RMSEs will be referred to as Q-RMSE, C-RMSE, and NC-RMSE. Figure <ref> shows all three error measures versus time t for the same parameters as in Fig. <ref>. We see that all errors are of the same order of magnitude. The Q-RMSE is smaller than the C-RMSE for most times t, which comes from the intrinsic norm conservation in quantum algorithms, which the classical algorithm lacks. This becomes clear when comparing the Q-RMSE against the NC-RMSE, where we note that the NC-RMSE is a lower bound for the Q-RMSE. Note that, due to the random initial parameters before the optimization, the Q-RMSE depends on the random numbers, but the general trend remains the same. Further simulations show that larger values of ftol are sufficient; more precisely, a value of ftol = 10^-9 is sufficient to obtain a Q-RMSE smaller than the C-RMSE for most times. §.§ Dependence on Algorithm Parameters In this section we analyze the impact of parameters, including the time-step width Δt and the quantum circuit depth, on the accuracy of the numerical results. We start by analyzing the dependence on Δt. Figure <ref> shows the Q-RMSE for the last simulated time step at t = 0.3 for multiple VQA runs that differ only in the chosen temporal step width Δt and thus in the number of simulated time steps, which is plotted on the x-axis. Fig. <ref> also depicts the C-RMSE and the NC-RMSE. We observe that the Q-RMSE and the NC-RMSE are similar for a wide range of time steps, while the latter is always smaller, which is compatible with the result from Sec. <ref>. For larger numbers of time steps, however, the Q-RMSE exceeds both the NC-RMSE and the C-RMSE, due to the accumulated errors in the classical optimization. For small numbers of time steps, all algorithms produce large RMSEs because they become unstable for large Δt. In conclusion, one can identify a global minimum at which the Q-RMSE is minimized, which corresponds to 80 steps for the fixed integration time interval of t=0.3. Finally, we discuss the dependence of the Q-RMSE on the circuit depth d. The quantum circuit has 2n(d+1) free parameters, while the state in Eq.
(<ref>) has 2^{n+1} real unknowns (2^n complex amplitudes), which means that for n=6 we have more free parameters than unknowns if d ≥ 10, i.e., the optimization is overdetermined, and underdetermined otherwise. Figure <ref> shows the Q-RMSE for six VQA runs using Δt = 10^-3, x_0=0, and ftol = 10^-13, together with the respective C-RMSE. The simulations differ only in the depth d of the quantum circuit. We see that the error converges at a depth of d'=11, as deeper circuits do not reduce the error significantly. This behavior is expected, since the optimization is overdetermined for d ≥ d'. In contrast, the Q-RMSE for d<10 is large, since the optimization is underdetermined. Overparametrization has been discussed recently for quantum neural networks <cit.>. In that case, the authors could derive an upper bound on the number of network parameters by means of the dimension of the Lie algebra formed by the infinitesimal generators of the network unitary. § CONCLUSION In this work, we introduce a quantum algorithm that combines VQA and pseudospectral split-step methods and demonstrate its suitability, for sufficiently deep quantum circuits, for solving the NLSE over a long time period in which a soliton solution propagates over a significant distance in space. The split-step method allows us to keep a first-order time integration scheme for the nonlinear terms of the NLSE while treating the linear part as an integrating factor in combination with Fourier transforms. This keeps the quantum circuit implementation feasible and allowed us to run the algorithm for longer time intervals than typically seen. The main bottleneck in the presented algorithm is the classical optimization that is required for the VQA. It not only constrains the scalability to larger systems with more qubits by requiring a high-dimensional classical optimization, but must also remain accurate, which constitutes another challenge. One promising candidate for an efficient optimization is surrogate-based optimization <cit.>. Besides these open problems, our results demonstrate that the algorithm works and generates accurate results. The present work should be considered as a proof-of-concept study. A continuation of the study along these lines would involve a number of steps: (1) an inclusion of the Fourier transformations in the form of quantum Fourier transformations into the algorithm, (2) a switch from an ideal statevector simulation to quantum simulation with measurement noise, and eventually (3) an implementation on a NISQ device. These steps will be reported elsewhere. The authors gratefully acknowledge the computing time made available to them on the high-performance computer Noctua 2 at the NHR Center Paderborn Center for Parallel Computing (PC^2). This center is jointly supported by the Federal Ministry of Education and Research and the state governments participating in the NHR (www.nhr-verein.de/unsere-partner). The work of JS is supported by the European Union (ERC, MesoComp, 101052786). Views and opinions expressed are however those of the author only and do not necessarily reflect those of the European Union or the European Research Council. Neither the European Union nor the granting authority can be held responsible for them.
http://arxiv.org/abs/2407.02807v1
Published: 2024-07-03
Regional and Temporal Patterns of Partisan Polarization during the COVID-19 Pandemic in the United States and Canada
Zachary Yang, Anne Imouza, Maximilian Puelma Touzel, Cecile Amadoro, Gabrielle Desrosiers-Brisebois, Kellin Pelrine, Sacha Levy, Jean-Francois Godbout, Reihaneh Rabbany
Primary category: cs.SI; Categories: cs.SI, J.4
Zachary Yang [1,3] (zachary.yang@mail.mcgill.ca), Anne Imouza [2], Maximilian Puelma Touzel [3], Cécile Amadoro [4], Gabrielle Desrosiers-Brisebois [4], Kellin Pelrine [1,3], Sacha Levy [1,3], Jean-François Godbout [3,4], Reihaneh Rabbany [1,3]. Affiliations: [1] School of Computer Science, McGill University; [2] Political Science, McGill University; [3] Montreal Institute for Learning Algorithms; [4] Department of Political Science, University of Montreal. Public health measures were among the most polarizing topics debated online during the COVID-19 pandemic. Much of the discussion surrounded specific events, such as when and which particular interventions came into practice. In this work, we develop and apply an approach to measure subnational and event-driven variation of partisan polarization and explore how these dynamics varied both across and within countries. We apply our measure to a dataset of over 50 million tweets posted during late 2020, a salient period of polarizing discourse in the early phase of the pandemic. In particular, we examine regional variations in both the United States and Canada, focusing on three specific health interventions: lockdowns, masks, and vaccines. We find that more politically conservative regions had higher levels of partisan polarization in both countries, especially in the US, where a strong negative correlation exists between regional vaccination rates and the degree of polarization in vaccine-related discussions. We then analyze the timing, context, and profile of spikes in polarization, linking them to specific events discussed on social media across different regions in both countries. These spikes typically last only a few days, suggesting that online discussions reflect and could even drive changes in public opinion, which in the context of pandemic response impacts public health outcomes across different regions and over time. § INTRODUCTION Partisan polarization is increasingly prevalent in democracies around the world <cit.>. In the United States, the level of opposition between Democrats and Republicans has been growing steadily for decades <cit.> and reached unprecedented heights during the 2020 presidential election <cit.>. This polarization even affected how individuals responded to the pandemic by influencing their assessment of the dangers posed by the virus and their response to public health measures <cit.>. Several studies have now confirmed that supporters of the Democratic party were more likely to follow social distancing measures <cit.>, wear masks <cit.>, and get vaccinated <cit.> than their Republican counterparts. This polarizing trend among partisans is not limited to the United States <cit.>. Other countries, like Canada, have also experienced the rapid politicization of pandemic responses, where researchers found that supporters of the Liberal Party were more likely to follow guidelines than supporters of the Conservative Party or the populist People's Party <cit.>. While countries like the United States and Canada adopted strategies to coordinate efforts in addressing the pandemic at the national level, there was still significant variation in both the amount and types of public health interventions introduced by subnational governments.
For instance, Canadian provinces implemented different policies, ranging from comprehensive lockdowns and school closures to more targeted guidelines focusing on specific populations <cit.>. Similar variation can be observed in the United States, where some states implemented strict lockdown orders and mask mandates, while others declined to impose social distancing restrictions <cit.>. Much like at the national level, these different regional policies also became rapidly politicized along partisan lines <cit.>, especially on social media platforms, where the politicization of the pandemic largely unfolded <cit.>. Numerous studies have now confirmed that online discussions surrounding the pandemic <cit.> exhibited clear regional patterns characterized by the same partisan animosity that impacted the heterogeneous implementation of public health measures <cit.> and the resulting epidemiological outcomes <cit.>. Additional research highlights that these partisan divisions also contributed to the increasing polarization observed on social media <cit.>. The intense reactions from both supporters and opponents of public health measures <cit.> imply that public opinion could have been significantly influenced by local political dynamics and the geography of the pandemic <cit.>. This regional heterogeneity provides us with a unique opportunity to study polarization around specific events, topics, and regions, to understand how various factors affected compliance with COVID-19 guidelines. However, reliably measuring the polarization of public discourse at a more fine-grained resolution is a challenge, even with the large quantity of human-generated text with extensive meta-data available on digital platforms. In this work, we propose a solution to this measurement problem by introducing a comprehensive approach to better understand the geographic and event-driven variation of online partisan polarization of discussions within American states and Canadian provinces. We examine variations in public discourse as contained on Twitter (now X) to determine how polarization is related to: (1) the ideological leanings of different regions; (2) the amount of conspiracy theory-related messages that users have been exposed to; and (3) vaccination data. The paper is organized as follows. First, we briefly outline our approach to region- and time-resolved polarization data from Twitter (X). In this section, we also describe our machine learning method to classify users as conservative or liberal and justify our choice of topic-conditioned language dissimilarity as a proxy for partisan polarization. Next, we present the results of applying our approach to a large-scale dataset we collected in 2020, filtered through three prominent pandemic-related public health interventions: lockdowns, masks, and vaccines. Our findings indicate that conservative regions in both countries exhibited higher polarization levels on these topics overall. We also find a strong negative correlation between vaccination rates in different U.S. regions and the level of polarization in their online discussions related to vaccines. We close with a discussion of limitations of the approach and promising new areas of application. § APPROACH We developed a method to measure geographically-resolved partisan polarization over time from large-scale social media message datasets (see <ref>).
The language of political discussions can vary significantly across socio-demographic groups <cit.>, each having its own lexicon, so dissimilar language on its own does not imply polarized positions. However, when conditioning on discussion of the same, contentious topic, the dissimilarity of the language used by different demographics is more likely to reflect alternative semantic understandings of that specific topic, which we assume are highly correlated with polarization. This correlation is weakened by linguistic differences not captured by the particular definition of semantic separation used, so the latter is only a noisy proxy of polarization strength. That said, we expect that a stronger correlation between semantic separation and polarization exists when language is represented in a more expressive model. Inspired by the demonstrated capacity of modern vector embeddings to represent the semantics of words, our approach transforms the sentences of social media posts using RoBERTa, a powerful open-source language-embedding model <cit.>. As an indicator of polarization, we then measure dissimilarity by how far apart the tweets of left- and right-leaning users are in this embedding space. In particular, we use the C-index <cit.>, a robust clustering measure based on the average of pairwise distances of embedded partisan users within a partisan group relative to the average of the largest and smallest distances overall. To label partisans in our data, we developed and validated a machine learning method that identifies users as conservative or liberal on the ideological spectrum by cross-indexing multiple metadata sources. We also developed and validated a method to geolocate users to resolve polarization's geographic heterogeneity. Details of these components of the approach can be found in the Methods section. §.§ Application to late 2020 pandemic discourse in the United States & Canada We collected a large-scale dataset of political discussions on Twitter (X) occurring between October 9, 2020, and January 4, 2021, comprising about 46.6 million tweets linked to Canada and 12.5 million tweets linked to the United States. We geolocated users based on their provided location and classified them by their declared party affiliation. Specifically, we include identifiers for the two major liberal and conservative ideological divisions in each country: the left (Liberal Party, New Democratic Party, and Green Party) and the right (Conservative Party and People's Party) for Canada; and the Democratic (left) and Republican (right) parties for the United States. Using official population census and election results, we verify that these data provide a politically balanced set of users in the different regions of these two countries. For each of the users, we then compute a vector representation of the language they used in their social media messages during this period. We condition on region, time, and topic and for each combination compute the value of the C-index as a proxy for polarization. To narrow the content of the messages analyzed, we focus on three specific topics of discussion: lockdowns, masks, and vaccines. These topics were chosen because of their salience for polarized discourse around the pandemic <cit.>, and because they span different types of interventions (group behaviour, individual behaviour, and medicine, respectively).
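To make this pipeline concrete, the sketch below (our own illustration, not the authors' released code) embeds a handful of hypothetical tweets with RoBERTa and computes the standard Hubert-Levin form of the C-index over two partisan groups. The paper's exact pooling strategy, distance metric, and C-index variant are specified in the cited references; mean pooling over the last hidden state and Euclidean distance are assumptions made here.

```python
import numpy as np
import torch
from itertools import combinations
from transformers import AutoTokenizer, AutoModel

# Hypothetical example tweets labelled by partisan group (0 = left, 1 = right).
tweets = ["Be a patriot. Wear a mask.", "Masks protect your community.",
          "Mask mandates are government overreach.", "No lockdowns, open the economy."]
labels = np.array([0, 0, 1, 1])

# Embed with RoBERTa; mean pooling over the last hidden state is an assumption.
tok = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModel.from_pretrained("roberta-base")
with torch.no_grad():
    enc = tok(tweets, padding=True, truncation=True, return_tensors="pt")
    hidden = model(**enc).last_hidden_state          # (batch, seq, dim)
    attn = enc["attention_mask"].unsqueeze(-1)
    emb = ((hidden * attn).sum(1) / attn.sum(1)).numpy()

def c_index(points, groups):
    """Hubert-Levin C-index: (S - S_min) / (S_max - S_min), where S sums the
    within-group pairwise distances and S_min/S_max sum the same number of
    smallest/largest distances over all pairs. Lower values = tighter groups."""
    d_all, d_within = [], []
    for i, j in combinations(range(len(points)), 2):
        d = np.linalg.norm(points[i] - points[j])
        d_all.append(d)
        if groups[i] == groups[j]:
            d_within.append(d)
    d_all = np.sort(d_all)
    n_w = len(d_within)
    s, s_min, s_max = np.sum(d_within), d_all[:n_w].sum(), d_all[-n_w:].sum()
    return (s - s_min) / (s_max - s_min)

print("C-index:", c_index(emb, labels))
```

A low C-index means the within-group pairwise distances are close to the smallest achievable over all pairs, i.e., the two partisan groups occupy well-separated regions of the embedding space; computed per region, week, and topic, such a score yields the kind of polarization proxy analyzed below.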
Based on these measurements, we then compare the polarization observed in different American states and Canadian provinces over time for each of the three topics. We also look into how polarization is correlated with epidemiological data and conspiracy-related content. We refer the reader to the Methods section for further details. § RESULTS We organize the presentation of results as follows. First, we report the geographical trends of the observed partisan polarization in the United States and Canada and confirm that conservative states and provinces display more polarized online discourse. Next, we highlight the correlation between partisan polarization on the topic of vaccines and the vaccination rates found across U.S. states. We then present our event-based analysis of the temporal patterns of polarization at the national level in both countries and report correlations between polarization and vaccination data, as well as the volume of conspiracy-related content on Twitter (X). Finally, we examine the different peaks in polarization and explain how they relate to various polarizing events. §.§ Regional Variation in Partisan Polarization Our analysis begins by visualizing the geography of partisan polarization in <ref> and <ref> for the United States and Canada. The regional heterogeneity in the amount of polarization observed over different topics is apparent in both countries. We also see heterogeneity in the amount of conspiracy-related tweets shown in <ref>d and <ref>d for both countries, respectively. Next, we analyze how this heterogeneity varies with the partisan leanings found in each region by analyzing election voting patterns (the 2020 presidential election in the case of the United States and the 2019 federal election for Canada). Our first observation is that conservative states and provinces show higher levels of polarization compared to their liberal counterparts. To display results over all covered regions, we show the polarization rankings for American states and Canadian provinces in <ref> and <ref>, respectively. This ranking is applied separately to each of the three topics and is based on their weekly polarization averaged over the 12-week period. An additional fourth ranking, labelled overall, gives the average over the three topics. Each region is associated with a color graded from blue to red based on the vote margin for the Republican party (US) or the conservative party family (Canada) obtained from the votes reported in the most recent election in the corresponding country. In these figures, a blue-to-red color gradient from progressive to conservative is used, such that the names of predominantly conservative/Republican regions appear in red, predominantly liberal/Democratic regions in blue, and mixed or less definitive regions in purple. Referring to <ref>, in the United States we can see clearly that conservative states are more polarized than liberal states overall, and specifically in discussions related to masks and vaccines. The ranking is more mixed in the discussions about lockdown measures, with outliers from both liberal and conservative states; namely, Idaho, Alabama, and Arkansas show the least polarization, while Delaware (ranked 1st), Colorado (ranked 17th), and New Jersey (ranked 15th) show higher values.
The expected relationship between pandemic response and state partisanship is, however, still present for lockdown discussions, with liberal states such as Vermont (ranked 41st) and Massachusetts (ranked 43rd) displaying less polarization compared to more conservative states such as Mississippi (ranked 2nd), North Dakota (ranked 3rd), and Oklahoma (ranked 4th). Looking now at Canada <ref>, we find that Alberta, a conservative province, shows higher polarization compared to Ontario and British Columbia (among the Canadian provinces with the largest numbers of social media users). Quebec is overall the highest ranked province. Although the pandemic was highly polarizing in Quebec, e.g., with violent protests <cit.>, we want to acknowledge a limitation of our study, which focused on the English language; in Quebec the main language is French, whereas only English tweets were included in our analysis. Finally, the correlation between polarization and the partisan vote margin is more clearly represented in the scatter plot of the two, shown for both countries in <ref>. We see a strong and significant correlation between Republican vote share in the United States and the polarization index for masks and vaccines discourse, but not for lockdowns. The remaining associations (lockdowns for the US, and all topics for Canada) are, however, insignificant (for Canada this is in part due to the relatively small number of regions). The strong correlation that we observe for the polarization score of discourse around vaccines in conservative-leaning states follows the well-known negative correlation between Republican vote share and vaccination rates <cit.>. Since vaccines were not yet available over this time period, we present a comparison using official vaccination rates measured for different states by the U.S. Centers for Disease Control and Prevention for a similar period of time one year later, after the vaccines were rolled out (i.e., October 11, 2021 to January 3, 2022). Averaging on a weekly basis, we confirm this correlation in <ref>, where we observe that vaccine polarization is strongly negatively correlated with vaccination rates in the different American states. We did not observe a similar pattern in Canada, due to the small sample size and the implementation of vaccine mandates. While disentangling the causal relationships among conservative vote margin, polarization score, and vaccination rate is not possible here, the results suggest that polarized discourse played a role in shaping the highly heterogeneous vaccination rates across the U.S. §.§ Temporal Variation in Partisan Polarization We next focus on the temporal trends of daily partisan polarization at the national level for each topic and overall, as displayed in <ref> for the United States and Canada, respectively. In these figures, the value of the metric fluctuates rapidly on the timescale of days. This is on the faster end of the range of timescales found in other topic-tracking studies, e.g., <cit.>. These short timescales are consistent with our assumption that language adapts quickly, in anticipation of or as an immediate response to specific events. In particular, we considered two kinds of events. First, we preselected political and vaccine-related events (shown in <ref> and as vertical lines in <ref>). These provide the scaffold for the socio-political trajectory of each country related to political discourse and pandemic response.
Second, we detected highly polarized events through analysis of the two highest peaks in polarization (shown in <ref> and as red circles in <ref>). We also show in the figures the tweet volume as a relative indicator of the day-by-day reliability of the estimation of the polarization score. While we do not have direct causal evidence linking a highlighted peak in the polarization score to a specific event, we do find that many of the largest polarization peaks occur around highly contentious events related to each country's specific context (<ref>). In the following two sections, we summarize these events and discuss how they relate to the topic for which the polarization simultaneously peaks. United States Polarization Timecourse The left column of <ref> reports the daily polarization measured for the three key topics of lockdowns, masks, and vaccines. Peaks in polarization on the lockdown topic in <ref> may correspond to partisan differences in public support (or discontent) and discourse surrounding these measures. In the days leading up to the 2020 Presidential Election on November 3rd, a pillar of President Trump's campaign messaging on the pandemic characterized lockdowns as tyranny and economic repression <cit.>. For example, on November 1, 2020, the date of the second-largest peak, Trump made a highly controversial claim by stating that the election was a choice between implementing deadly lockdown measures supported by Biden or an efficient end to the crisis with a safe vaccine <cit.>. Trump also made other similar claims on Twitter (X) during this period, e.g., when he said (sic): “Biden wants to LOCK DOWN our Country, maybe for years. Crazy! There will be NO LOCKDOWNS. The great American Comeback is underway!!!” <cit.>. Next, contentious debates related to masks were found coincident with peaks in polarization, as shown in <ref>. For example, while we did not find an event external to social media on October 31, 2020, the date of the highest peak, we did find that the most retweeted tweet by Democrats on that day was “RT @JoeBiden: Be a patriot. Wear a mask.”. This, in turn, generated strong responses that day from Republicans, with the third most retweeted tweet within this group being: “RT @RealBrysonGray: There's literally nothing patriotic about being so scared of a virus with a 99.9...”. This is then possibly an example of influencer post-driven, rather than real-world event-driven, polarization. Another set of divisive messages was observed on November 14, 2020, the next highest peak, after presidential candidate Biden proposed mandatory mask mandates and South Dakota Governor Kristi Noem announced her opposition to this measure; nearly half of all the top retweets referred to Noem's statement. Finally, <ref> reports trends in polarization around the topic of vaccines. Here, some of the peaks observed are simultaneous with important events surrounding vaccine efficacy. We see that the second-largest peak occurred on December 22, 2020, a day after Biden received his first vaccine shot <cit.>. The most retweeted tweet for both partisan groups that day was “RT @JoeBiden: Today, I received the COVID-19 vaccine. To the scientists and researchers who worked tirelessly to make this possible - than…". However, while supporters of Biden congratulated him by tweeting messages like “RT @YAFBiden: And just like that, @JoeBiden has received the COVID-19 vaccine!”, opponents instead promoted pro-Trump messages, e.g. “RT @TheLeoTerrell: Finally a @JoeBiden confession.
He finally gave credit to @realDonaldTrump and #OperationWarpSpeed. It's about time.”. Canadian Polarization Timecourse The right column of <ref> shows the daily polarization measured for the three key topics of lockdowns, masks, and vaccines. The pre-selected events in the Canadian timeline (<ref>) are marked as vertical lines in the figure. In <ref>, on the topic of lockdowns, we observe the highest peak on October 17, 2020, which coincides with the Toronto anti-mask protest, a large demonstration where thousands of protesters rallied against COVID-19 lockdown measures. The second-highest peak is observed on November 29, 2020, when the national news reported a protest against mask measures in Calgary on the preceding day <cit.>. The highest polarization peak in mask-related tweets is found on November 14, 2020, as seen in <ref>, coincident with the peak in the US plot <ref> mentioned earlier. This event was discussed by conservative party family users, showing how partisan discourse in the U.S. might be driving some polarization in Canada. Looking at the polarization of discussions about vaccines in <ref>, we also observe that the highest peaks are in response to key vaccine-related events: the two highest peaks, observed on December 20 and 23, 2020, coincided with the distribution of the Moderna vaccines in the U.S. and Health Canada's approval of the Moderna vaccine <cit.>, respectively. While the former is an event associated with the United States, it led to discussions in Canada about vaccine prioritization and availability <cit.>. On December 20, top retweets by liberal party family users focused on news of Republican politicians being first in line for the vaccine, while conservative party family users retweeted more diverse anti-vaccine sentiment. On December 23, 2020, the top retweets expressed strong sentiments in support of and in opposition to the approval. Aggregate Polarization To complement the granular analysis presented above, we also evaluated the measure's overall responsiveness to polarizing events. In particular, we computed an average of the polarization score over topics (shown in <ref>a,b) and then performed event-triggered averaging around such events, to show how the metric varies in time before and after these particular dates on average. This aggregate result (shown in <ref>c) confirms a fast (on the order of days) and largely symmetric rise-and-decay profile around these polarization peaks. §.§ The Relationship between Conspiracy and Polarization Finally, we explored the relationship between conspiracy discourse and polarization using our time-resolved measurements of the number of conspiracy-related tweets. In particular, we compared it with the time course of the aggregate daily polarization presented in the previous section. The profiles broken down by progressive and conservative partisanship for the U.S. and Canada are shown in <ref> and <ref>, respectively. For both countries, conservative partisans tweet conspiracy-related content in higher numbers than progressive partisans. The correlation with aggregate daily polarization for the U.S. and Canada (<ref> and <ref>) is shown in <ref> and <ref>, respectively. For the U.S., we find a small but significant negative correlation with polarization. § DISCUSSION This article investigated regional and event-triggered variation in partisan division within social media debates surrounding the introduction of public health measures across American states and Canadian provinces.
Our computational analysis was centered around quantifying partisan polarization by analyzing the language used in millions of online messages from users affiliated with different political parties. In particular, we focused on Twitter (X) discussions related to three key public health interventions during the early phases of the pandemic: lockdowns, masks, and vaccines, as well as tracking the volume of conspiracy-related tweets. Our analysis explored the geographic heterogeneity of polarization and identified political events that likely influenced public opinion over time. Like several other studies before us <cit.>, we found that more right-leaning states and provinces exhibited greater partisan divisions on Twitter (X), in particular concerning the topics of mask mandates and vaccine distribution. However, we went beyond these studies to characterize the geographic heterogeneity and time course of polarization, relating features in our polarization metric to real-world events. We looked into the relationship between polarization and public health initiatives in the U.S. and confirmed a strong negative correlation between partisan polarization and future vaccination rates, and a moderate negative correlation between the temporal profiles of the volume of conspiracy-related tweets and aggregate polarization. We did not observe similar patterns in Canada. §.§ United States The early phases of the pandemic in the United States prompted a variety of public discussions online that reflected strong regional variation of partisan support <cit.>. State-specific polarization obtained from our computational approach could be expected to be uniformly low, with the message content of conservative and liberal states each having internally homogeneous semantics. Instead, our analysis confirmed that polarization was notably higher in conservative states, where we found that Republican vote margins had a significant positive effect on polarization in discussions concerning masks and vaccines, after controlling for other factors. This main result is consistent with previous studies that suggest conservative states exhibited higher levels of polarization in response to public health interventions compared to their liberal counterparts <cit.>. Nevertheless, we found that this pattern does not hold across every state. Delaware, a liberal state, exhibited a distinctly high level of polarization, likely due to the strict public health measures implemented by the governor in response to the rapid increase in the number of cases during the first wave of the pandemic <cit.>. Similarly, low levels of polarization were observed in several conservative states, like Arkansas, Alabama, and Idaho, with a possible explanation coming from more unified opposition to restrictive measures like mask mandate orders <cit.>. Nevertheless, most conservative states exhibited relatively high levels of polarization (<ref>). Our analysis did not identify a single, general cause for this relationship. That said, commonalities in each state's trajectory in the pandemic offer some clues: e.g., Mississippi, North Dakota, and Oklahoma experienced specific political decisions, such as mask mandates, made in the earlier phase of the pandemic that led to resistance despite rising cases <cit.>. In Mississippi, the decision by the governor to lift the statewide mask mandate in late September could also have contributed to heightened levels of polarization <cit.>.
A similar pattern was observed in North Dakota, South Dakota, and Oklahoma, where initial hesitancy to enforce mask mandates also appears to have led to increased partisan divisions <cit.>. This finding is not surprising, since the party affiliation of governors is the most important predictor of the widespread adoption of mask mandates <cit.>. One possible explanation for our main result that can be gleaned from these anecdotes is that pandemic severity increasingly strains the more uniform opposition to restrictive health measures in more conservative states, leading them to exhibit higher levels of polarization <cit.>. This is a distinct source of polarization from that in states with a more equal distribution of partisans across competitive districts. Through our approach, we could also dissect how polarization varies in time over the three topics we considered: lockdowns, masks, and vaccines. These three topics exhibited similar baseline levels of polarization during the period of study, which was between the second and third waves of the pandemic, punctuated by large positive deviations that typically rise and fall quickly. The prevalence of these deviations was smallest for the lockdown topic. Its low correlation with Republican party vote share suggests that it did not act as a meaningful indicator of partisan opposition. The polarization time courses for masks and vaccines, however, contained many sharp peaks, many of which we were able to identify with a real-world event. For example, South Dakota initially experienced very high levels of mask polarization, coinciding with efforts by medical authorities to promote mask-wearing despite Governor Kristi Noem's opposition <cit.>. Her public display of opposition to Biden's suggestion of mask mandates led to one such peak in polarization. Similarly, North Dakota also displayed early signs of increased polarization on conspiracy theories <cit.>, which may have been exacerbated by the posthumous electoral win of a Republican candidate who died from COVID-19 <cit.>. The strong negative correlation between vaccine polarization and vaccination rates that we observed in the U.S. demonstrates that states with higher vaccination rates were also less polarized around this issue <cit.>. The exact origin of this correlation is unclear. However, factors such as education and political ideology, which also have a strong geographic dependence, likely played a role <cit.>. Indeed, higher education levels are generally associated with greater vaccine acceptance and trust in vaccine safety <cit.>. Moreover, Democrats tend to trust the vaccines more and have been early adopters, whereas Republicans generally show lower levels of such trust <cit.>. Overall, the high levels of polarization observed in the United States relative to Canada point to a more divided society. Several studies have confirmed that conservative states and counties were less likely to adopt social distancing measures, impose mask mandates, and get vaccinated in the second and third waves of the pandemic. Our study offers new insights into these trends by demonstrating that they correlate with regional heterogeneity in social media discourse, particularly during salient political events around health measures. We also found that this discourse reflects changes in the pandemic timeline, initially related to stay-at-home lockdown orders, followed by mask mandates, and later transitioning to vaccines as they first became available.
§.§ Canada The Canadian set of results also suggests that partisan divisions influenced public responses to pandemic measures in this country <cit.>. Compared to the U.S., we found a similar, albeit much weaker, association between polarization and conservative political leaning, with conservative provinces like Alberta and Saskatchewan experiencing higher levels of polarization during stricter lockdown measures than their more liberal counterparts, such as British Columbia and Ontario <cit.>. Polarization levels varied over smaller and medium-sized provinces as well, measured as relatively high for New Brunswick and low for Nunavut, where infection rates remained relatively low during the pandemic (it was the sole COVID-free jurisdiction in North America until November 2020) <cit.>. Additionally, we found that polarization surrounding mask mandates and vaccines was not homogeneously distributed across provinces. Among the Canadian provinces, Quebec is an interesting case for our analysis of polarization. For example, our results confirmed that Quebec had the highest level of partisan division over vaccines, but also the highest reported incidence of COVID-19 in Canada during the first and second waves of the pandemic <cit.>. Quebec's unique approach to managing the pandemic, with its more restrictive measures relative to other provinces, is also somewhat reflected in our results. After a relative hiatus with several restrictions relaxed in the summer of 2020, Quebec once again became the epicenter of the pandemic in the fall <cit.>. This resurgence led to the reinstatement of strict pandemic control measures and a ban on public demonstrations following significant anti-government protests against lockdowns and mask mandates <cit.>. These events also coincided with an increase in online conflicts, promoted by Canadian far-right populist rhetoric and conspiracy theories on Twitter (X) <cit.>. It is important to note, however, that most of these conversations in Canada were heavily influenced by discussions in the US, with Canadians retweeting American vaccine-related content 8 times as often as Canadian content during the period covered by our study <cit.>. Likewise, vaccine hesitancy was also linked to political affiliation in Canada, with those supporting the Conservative Party more likely to refuse vaccination <cit.>. As in the U.S. case, the polarization time course computed for Canada also exhibits spikes around key events like protests against lockdown measures, mask mandates, and vaccine roll-outs. These findings suggest that public reactions to significant political and social events during the pandemic are reflected in the measure of polarization we use. We did not observe the negative correlation between polarization and the volume of conspiracy-related tweets that we saw in the U.S. case. This contrasts with <cit.>, who found a reduction in negative sentiment in Canadian vaccine-related tweets between January and December 2020. The relationship between polarization and sentiment is complex, and long-term trends are likely driven by processes besides the pure volume of discussion around conspiracy theories <cit.>. Finally, the relatively lower influence of polarization on vaccine attitudes may be attributed to the country's more widespread vaccine mandates <cit.>. This prevalence, along with higher levels of trust in politicians <cit.> and social capital <cit.>, could have contributed to a broader acceptance of health interventions <cit.>.
Indeed, there was a rare `cross-partisan consensus' among Canadians regarding emergency measures in the early stages of the pandemic <cit.>. This consensus, however, was not mirrored on social media, where conspiracy theories widely circulated <cit.>. Overall, our results indicate that online discussions surrounding lockdowns, masks, and vaccines did mirror polarization, and were shaped by regional reactions to events and circumstances specific to Canadian provinces. §.§ Limitations While our method offers valuable insights, it comes with certain limitations. First, we viewed partisan polarization only through the proxy of semantic similarity. This choice may in certain cases obscure signals not captured by the semantic embedding representation. Second, specifically in the Canadian context, we categorized users into liberal (left) and conservative (right) party family groups. During the manual annotation of Twitter (X) profiles, we encountered only a few users who identified as supporters of the Bloc Quebecois political party; therefore, we opted to exclude them from the analysis. Additionally, our classification of users into liberal and conservative partisan groups is based on self-reported information, which may not be entirely accurate. Third, it is important to note that our analysis is based on Twitter (X) data, which may not fully capture the views and sentiments of the broader American and Canadian public. Fourth, our analysis is restricted to tweets in English. In the context of Canada, this means we are capturing only or primarily the perspectives of anglophones and bilingual francophones, which could potentially bias our data; for example, the high levels of polarization observed in Quebec on COVID-19 measures may be influenced by this language bias. Finally, while several of our analyses rely on correlations, it is crucial to remember that these results do not imply causation; the relationship between polarization and public health measures is complex and multi-dimensional. §.§ Conclusion To conclude, our method has provided valuable insights into the dynamics of partisan polarization during the pandemic. Political ideology, public trust, and key events have emerged as important factors influencing public discussions on pandemic-related issues in the United States and Canada. By combining our polarization measure with other data, researchers and practitioners can better understand how polarization varies across location, time, and specific issues. This knowledge could help in detecting particularly polarizing discussions on social media and in developing communication strategies to mitigate the spread of misinformation, both for the current pandemic and for future health-related crises. The differences observed between these two countries are somewhat harder to explain. Our analysis, along with insights from recent studies, suggests that Canadian responses to public health measures could explain the lower levels of polarization found in Canada. Indeed, there was significant consensus on the effectiveness of stay-at-home orders (i.e., lockdowns), mask mandates, and vaccines not only at the federal and provincial levels, but also within the news media. And unlike the U.S., where a significant number of Republican leaders aligned with Trump's anti-mask and anti-lockdown positions, the pandemic did not become a salient partisan issue within a political campaign until much later in 2021.
Prior to this, the opposition to public health measures in Canada was primarily found in online communities, outside of the mainstream media and political parties, where protesters remained heavily influenced by American sources. Although our results suggest that social networks contributed to the diffusion of these opinions during the pandemic, more work needs to be done to quantify the impact of online community interactions on polarization. § METHODOLOGY In the following section, we describe in detail our text-based measurement of partisan polarization. We first explain the data collection process. We then show how we classified tweets into respective topics, geolocated users, and grouped them by party affiliation. Finally, we describe the equation used to measure partisan polarization as well as our approximation algorithm. Figure <ref> provides a visual overview of our process for measuring partisan polarization. For additional details, please refer to Section <ref> in the Supplementary Material. §.§ Data Collection §.§.§ Twitter (X) Data We used Twitter's (X) official API to collect 1% of real-time tweets for Canada and the United States from October 9, 2020 to January 4, 2021. This represents 231,841,790 tweets and 4,765,115 users for Canada (a dataset filtered for COVID and politics) and 387,090,097 tweets and 23,758,112 users for the United States (a dataset filtered for election politics). We fed the following list of keywords into the API to filter relevant tweets: Canada: `trudeau', `legault', `doug ford', `pallister', `horgan', `scott moe', `jason kenney', `dwight ball', `blaine higgs', `stephan mcneil', `cdnpoli', `canpol', `cdnmedia', `mcga', `covidcanada' and all combinations of `covid' or `coronavirus' as the prefix and the (full & abbreviated) name of each province and territory as the suffix. United States: `JoeBiden', `DonaldTrump', `Biden', `Trump', `vote', `election', `2020Elections', `Elections2020', `PresidentElectJoe', `MAGA', `BidenHaris2020', `Election2020'. §.§ Vaccination Rate Similar to the pandemic data, we used the officially reported vaccination rates of the populations. We used the vaccination rates one year later relative to the Twitter (X) data, as vaccines were created and approved at the very end of our data collection process. Thus, the vaccination rates are for those who obtained at least two doses. For Canada, this is the numtotal_fully column from the government's https://health-infobase.canada.ca/covid-19/vaccination-coverage/vaccine coverage map. We normalize this column by Canada's 2021 population per province or territory. For the United States, we use the people_fully_vaccinated_per_hundred column reported in the https://covid.cdc.gov/covid-data-tracker/#vaccinations_vacc-people-booster-percent-pop5COVID Data Tracker from the CDC. §.§ Classifying Tweets By Topics For this study, we looked into three key topics for COVID-19: lockdowns, masks, and vaccines. We also looked into conspiracy theories. For each topic, tweets were classified as relevant or irrelevant based on whether they contained at least one of the topic-specific keywords. For conspiracy-related tweets, relevant means that the content is related to conspiracy theories (either supporting or opposing them). A tweet can belong to more than one topic. We first used a hashtag-based filtering step. We extracted all hashtags within our dataset, ordered them by frequency, and discarded those that appeared less than 100 times.
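To make the keyword step concrete, the sketch below shows topic tagging in Python. The keyword sets are illustrative stand-ins only; the study's actual lists come from the annotated hashtags described next and from prior work.

# Illustrative topic keyword sets (hypothetical; not the study's full lists).
TOPIC_KEYWORDS = {
    "lockdowns": {"lockdown", "stayhome", "stayathome"},
    "masks": {"mask", "maskup", "wearamask"},
    "vaccines": {"vaccine", "vaxx", "pfizer", "moderna"},
}

def tag_topics(tweet_text):
    # a tweet can belong to more than one topic
    tokens = {w.strip("#!.,?").lower() for w in tweet_text.split()}
    return {topic for topic, kws in TOPIC_KEYWORDS.items() if tokens & kws}

print(tag_topics("Wear a mask and get the vaccine! #maskup"))  # {'masks', 'vaccines'}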
This filtered list contained 3,600 hashtags for Canada and 18,000 for the United States. Two political scientists manually annotated this list with topic and relevance labels. The list was narrowed to only those hashtags labeled as relevant, resulting in 631 relevant hashtags. We then merged these with hashtags identified in previous studies for the same topics—i.e., from refs. <cit.>. For Canada, this process resulted in 46,636,206 tweets and 1,757,675 users that shared content related to COVID-19. For the United States, this represents 12,552,213 tweets and 2,657,355 users. Using the RoBERTa-base model <cit.> from HuggingFace, we further pre-trained this model on the respective tweets from each country dataset—i.e., performing self-supervised learning by predicting masked words within tweets. This results in two country-specific pre-trained language models for tweets. We then randomly sampled 200 relevant and 200 irrelevant tweets per topic from each dataset, for a total of 1,600 tweets. The same two political scientists manually reviewed each tweet separately to determine whether it was relevant or irrelevant. We discarded tweets where the annotators could not reach a consensus. We then trained the respective pre-trained RoBERTa-base model on each dataset to classify tweets by topic—i.e., 4 topic language models per country, for a total of 8 language models. We report the support, Cohen's Kappa, F1-score, and number of tweets we extracted for each topic within each dataset in Table <ref>. Our analysis achieved a near-perfect F1-score for each of these topics. §.§ Classifying Users by Geo-Location We wanted to quantify users in each province or state represented in our data, as the users retrieved from Twitter (X) could be imbalanced relative to region population size. For this, we geolocated all users with an explicit location provided in the location field, a free-form text, as part of their profile information. We processed this information with https://www.openstreetmap.orgOpen Street Map and the https://developers.arcgis.com/pythonArcGIS API. Both of these return a latitude and longitude if a location was found. We accepted a geolocation as clear only if both APIs responded with a latitude and longitude within one degree of each other. We found this to be more accurate, and preferable to using a pre-trained Named Entity Recognition algorithm; most of the user-provided locations can be handled by these Geographic Information System APIs, and the APIs also provide important details such as the country, state, and city. We correlated the geolocated users with the official population census for each country's regions. In total, we geolocated 282,454 users with a strong correlation of 0.92 (n=13, p=6.20e-06, CI=[0.76, 0.98]) for Canada and 757,601 users with a strong correlation of 0.98 (n=52, p=9.27e-35, CI=[0.96, 0.99]) for the United States. This means that each region is well represented in our data. Further information is reported in Table <ref>. §.§ Classifying Users By Party Affiliation We determine a user's party affiliation using a two-step approach. First, we classify politically explicit users based on their profiles. We then use the predictions from this profile classifier as labels to train a classifier based on user activity. We report the support, F1-score, and number of users we classified for each party within each dataset in Table <ref> for Canada and Table <ref> for the United States. We achieve a macro-F1-score of 91% for both Canada and the United States.
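Before detailing the two classification steps, the one-degree agreement check from the geolocation step above can be sketched as follows. This is a minimal reconstruction; returning the averaged coordinate on agreement is our assumption.

def agreed_location(osm_coord, arcgis_coord, tol_deg=1.0):
    """Accept a geolocation only if both geocoders agree within tol_deg degrees.

    osm_coord / arcgis_coord: (latitude, longitude) tuples, or None if not found.
    Returns an averaged coordinate on agreement, else None.
    """
    if osm_coord is None or arcgis_coord is None:
        return None
    (lat1, lon1), (lat2, lon2) = osm_coord, arcgis_coord
    if abs(lat1 - lat2) <= tol_deg and abs(lon1 - lon2) <= tol_deg:
        return ((lat1 + lat2) / 2, (lon1 + lon2) / 2)
    return None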
Profile Classifier As a preprocessing step, we filter out users that are not politically explicit. Politically explicit users are those whose profile description contains at least one political keyword defined for any political party. For Canada, we focused on the five main political parties: Conservative, Green, Liberal, New Democratic Party, and People's Party. For the United States, we focused on the Democratic and Republican parties. The following is the set of keywords we have per party: Canada: Conservative - `erin o'toole', `andrew scheer', `conservative', `conservative party', `cpc', `cpc2021', `cpc2019', `conservative party of canada' Green - `annamie paul', `green party', `gpc', `gpc2019', `gpc2021', `green party of canada' Liberal - `justin trudeau', `liberal', `liberal party', `lpc', `lpc2021', `lpc2019', `liberal party of canada' New Democratic Party - `jagmeeet singh', `new democrat', `new democrats', `new democratic party', `ndp', `ndp2021', `ndp2019' People's Party - `maxime bernier', `people's party', `ppc', `ppc2019', `ppc2021', `people's party of canada' United States: Democrat - `liberal,' `progressive,' `democrat,' `biden' Republican - `conservative,' `gop,' `republican,' `trump' We then randomly selected a set of politically explicit users for each party to be manually annotated by two political scientists. We only use party labels that both annotators agreed upon. The Cohen's Kappa score for the pair of annotation sets is 0.74 for Canada and 0.76 for the United States. We then train a RoBERTa-large model with an 80-20 train-test split to determine user party affiliation. Exact numbers can be found in the respective tables in the supplementary material. Activity Classifier For this second phase, we make use of the respective RoBERTa-base model pre-trained on tweets for each dataset to extract the tweet embeddings (768-dimensional vectors). We then generate user embeddings (768-dimensional vectors) by pooling together (mean aggregation) all tweet embeddings from that user. We then train an MLP consisting of two fully connected layers, with the user embeddings as input, to predict party affiliation. Before training, we filtered out users based on their activity α (i.e., the number of COVID-19-related tweets in the dataset). We performed a hyperparameter search for α among {1, 3, 5, 10, 15, 20} using 5-fold cross-validation. The best value was 5 tweets for Canada and 10 tweets for the United States. Specifically for Canada, we found that the MLP could not distinguish the individual parties sufficiently well. Hence, we grouped the parties based on their partisan leaning. The liberal (left) party family included the Liberal Party, New Democratic Party, and Green Party, while the conservative (right) party family included the Conservative Party and People's Party. We removed supporters of other minor parties and the Bloc Quebecois. External Evaluation Following best practice for evaluating party affiliation predictions <cit.>, we matched Twitter (X) users from the United States with the primary voter registration records available for five states: Ohio, New York, Florida, Arkansas, North Carolina, as well as Washington DC. We describe this procedure and its results in more detail in the supplementary material <ref>. We achieve an accuracy of 74.35% for the profile classifier and 73.35% for the activity classifier.
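A minimal sketch of the activity classifier described above, assuming PyTorch; the hidden-layer width and the random stand-in embeddings are illustrative assumptions, not the study's reported configuration.

import torch
import torch.nn as nn

def user_embedding(tweet_embs):
    # mean-pool a user's tweet embeddings (each 768-d) into one 768-d vector
    return torch.stack(tweet_embs).mean(dim=0)

class ActivityClassifier(nn.Module):
    # two fully connected layers mapping a 768-d user embedding to party logits
    def __init__(self, dim=768, hidden=256, n_parties=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, n_parties)
        )

    def forward(self, x):
        return self.net(x)

model = ActivityClassifier()
tweets = [torch.randn(768) for _ in range(12)]  # stand-ins for RoBERTa embeddings
logits = model(user_embedding(tweets))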
Both classifiers are binary, but users can also be independent or support a third party, despite an ideology (and behavior/voting) that aligns strongly with one of the two main parties. They can also have an outdated registration that no longer reflects the beliefs they currently hold and express on Twitter (X). Therefore, although the accuracy here is lower than in our training model, it still indicates that our classification is acceptable and on par with the standard for this type of evaluation <cit.>. §.§ Measuring Partisan Polarization Given a set of political parties 𝒫 and a set of user embeddings 𝒰={u^(1), u^(2), …, u^(n)}, where u^(i)∈ℝ^768 and n is the number of users, we measure polarization as follows. We first look at the distance and dispersion between each party p∈𝒫. We base our measure on the C-index of Hubert <cit.> to quantify the extent of clustering and overlap of each political party. This is done by first calculating the sum of within-cluster distances: S_w = 1/2∑_p ∈𝒫∑_u,v ∈ p ||u-v|| We then normalize this value based on its minimum and maximum possible ranges, S_min and S_max. These correspond to the sum of the m smallest (resp. largest) distances between points in 𝒰, where m = ∑_p ∈𝒫|p|(|p|-1)/2. Based on these, we define our polarization index as: poli = (S_max - S_w)/(S_max - S_min) The minimum value 0 represents no polarization, whereas the maximum value 1 represents the most extreme possible polarization, i.e., the p ∈𝒫 are completely isolated from each other. §.§ Approximation of Polarization Equation <ref> is not scalable to large n, as it is O(n^2 log(n^2)). We approximate it by sub-sampling a sufficient set of users, which enables us to scale to a large number of users. To determine the minimum sample size needed, we use the coefficient of variation, which is defined as std/mean and expressed as a percentage. Generally, a coefficient of variation under 10% gives reasonable results <cit.>. Algorithm <ref> summarizes this procedure. This approximation allows us to scale our measure significantly without compromising accuracy. One loop of the approximation has a time complexity of O(r f^2 n^2 log(n^2)), where r is the repeat count and f is the fraction (e.g., 0.01). From our testing, we know that at large values of n the fraction needed rarely increases, so only one loop is required. The cost is therefore reduced by a factor of roughly r f^2 for large values of n. We test the accuracy of the approximation in Algorithm 1 over the daily lockdown and vaccine tweets. In Figure <ref>, we plot the total number of users against the absolute error (and its standard deviation) of the approximated poli compared to the exact value, binned for every 1,000 users. We observe a dramatic drop in the absolute error at around 3,000 users. By 10,000 users, the absolute error is usually below 0.001. In Figure <ref>, we plot the total number of users against the time saved by running the approximation algorithm compared to running the exact poli, binned also for every 1,000 users. We observe that the time saved grows exponentially with the number of users. We note that at around 50,000 users, the approximation rarely needs to increase the fraction of users sampled. These findings confirm that we can accurately approximate poli for large-scale data that is impossible to measure exactly because of memory constraints. As poli relies heavily on finding pairwise distances (time and memory intensive), we see from our analysis that a sampling approach can save both time and memory exponentially.
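A minimal sketch of the exact index and its sub-sampled approximation, assuming NumPy arrays for the embeddings; the loop structure is our reconstruction of Algorithm <ref> from the parameter names in the text (fraction, step_size, repeats), and edge cases such as single-party subsamples are not handled.

import numpy as np
from scipy.spatial.distance import pdist

def poli(embeddings, labels):
    # polarization index based on Hubert's C-index (see the equation above)
    labels = np.asarray(labels)
    all_d = np.sort(pdist(embeddings))       # all pairwise distances, ascending
    s_w, m = 0.0, 0
    for p in np.unique(labels):
        d = pdist(embeddings[labels == p])   # within-party pairwise distances
        s_w += d.sum()
        m += d.size                          # |p|(|p|-1)/2 pairs for this party
    s_min, s_max = all_d[:m].sum(), all_d[-m:].sum()
    return (s_max - s_w) / (s_max - s_min)

def poli_approx(embeddings, labels, fraction=0.01, step_size=0.01,
                repeats=10, cv_max=0.10, rng=None):
    # grow the sample until the coefficient of variation over repeated
    # subsamples falls below cv_max (here 10%)
    rng = rng or np.random.default_rng(0)
    labels = np.asarray(labels)
    n = len(labels)
    while True:
        k = max(2, int(fraction * n))
        vals = []
        for _ in range(repeats):
            idx = rng.choice(n, size=k, replace=False)
            vals.append(poli(embeddings[idx], labels[idx]))
        vals = np.array(vals)
        if vals.std() / vals.mean() < cv_max:
            return vals.mean()
        fraction += step_size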
§ ACKNOWLEDGEMENTS This research is supported by CIFAR AI Catalyst Grants and Canada CIFAR AI Research Chair funding. § EXTENDED RESULTS §.§ Regional Variations of Partisan Polarization In <ref> and <ref>, we show the ranking of the regions (American states and Canadian provinces and territories, respectively) by partisan polarization per topic. In <ref>, we show the correlation between the polarization scores and relative vote margins. The latter shows that the margin by which Republican party votes exceeded those of the Democratic party in a state correlates significantly with the amount of polarization exhibited by the Twitter (X) discourse in that state when conditioning the discourse on masks and on vaccines, but not significantly when conditioning on lockdowns. §.§ The Relationship between Regional Vaccine Polarization and Vaccination Rates in Canada In Figure <ref>, we remove Nunavut as an outlier because of its very small population. While we obtain a strong positive correlation with vaccination rate, it is based on a relatively low number of points. In Canada, vaccines were mandated, with vaccine passports required to be served in public areas. We hypothesize that vaccine partisan polarization increases because people are unhappy with being forced to be vaccinated, even though most of the population still gets vaccinated. However, with so few points, we cannot draw a definite conclusion from this result. §.§ Specific Regional Partisan Polarization and COVID-19 Deaths Here, we investigate the topic-specific polarization over time and how it relates to the reported number of COVID-19 deaths for specific regions, in Figure <ref> for Canada and in Figure <ref> for the United States. §.§ Correlation Matrices §.§ The Relationship between National Partisan Polarization and Epidemiological Data Here, we investigate the aggregated polarization over time for each country and how it relates to the reported number of new COVID-19 cases and deaths. In Figure <ref>, we observe that polarization is not correlated with the severity of the pandemic, in either Canada or the United States. To compute the daily aggregate polarization measure, we employ a weighted sum of each topic's polarization, weighted by the percentage of each topic's tweets within that day's volume of COVID-19-related tweets. § METHODOLOGY DETAILS In the following section, we report the classification metrics of each module in the pipeline for measuring polarization. §.§ Classifying Tweets by Topics §.§ Classifying Users by Geolocation §.§ Classifying User Party Affiliation We report the classification metrics for Canada in Table <ref> and for the United States in Table <ref>. For Canada, our model also classified users for each party based on activity, but the F1-score was not satisfactory, as parties within the liberal (left) party family and the conservative (right) party family were easily confused, as shown in the confusion matrix in Table <ref>. Thus, for our analysis, we merged the parties in Canada, and show the confusion matrix after the merge in Table <ref>. §.§ Distribution Matching with Election Results We further validate our party affiliation distribution using the 2019 Canada Federal Election and the US 2020 Election results for the respective countries. We calculate the correlation between the ratio of the numbers of liberal- and conservative-family-labelled users per region in our data and the election results, and obtain strong correlations of 0.815 for Canada, visualized in Figure <ref>, and 0.802 for the United States, visualized in Figure <ref>.
We also visualize the ratio for all geolocated users for the respective topics. §.§ Matching Users to US Voter Registration Following best practice for evaluating party affiliation predictions <cit.>, we matched Twitter (X) users from our dataset with the primary voter registration records available for five states: Ohio, New York, Florida, Arkansas, North Carolina, as well as Washington DC. From these records, we obtain the party affiliation of unique users in each state by md5-hashing their names and county to construct a key identifier. From our set of geolocated Twitter (X) users, we kept everyone that belonged to one of the five states or DC, and we removed those whose location could not be retrieved. Finally, we matched the most recent voter party affiliation records from the registration data to the unique Twitter (X) users that matched both the county and either the first and last name or the first, middle, and last name. We pre-processed the user's name on Twitter (X) to remove emojis. After matching, we removed users not affiliated with either of the two major parties and users whose name matched more than one record per county (indicating a non-unique match). We then compare users' party from the voter registration with their predicted party, first using their profile description and second using their COVID-19-related tweets. Using our geolocation and voter record matching, Table <ref> shows we are able to match a significant number of users, more than 30k across the 5 states and DC, with their voter records. We obtain an accuracy of 74.35% for the first method and 73.35% for the second one. As noted in the main text, both methods are binary classifiers, while users can also be independent, support a third party, or hold an outdated registration; the accuracy is nonetheless on par with the standard for this type of evaluation <cit.>. §.§ Approximation of Polarization Beyond the accuracy and timing results reported in the main text, we also explore the impact of changing the minimum coefficient of variation. We start with a minimum sample size of 1%, or fraction = 0.01. We keep the step_size constant at 0.01. For the first experiment, repeats is set to 10. For the second experiment, shown in Figures <ref> and <ref>, epsilon is set to 0.05.
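For completeness, a minimal sketch of the md5 key construction used in the voter-record matching above; the exact normalization of names and counties is our assumption.

import hashlib

def voter_key(first, last, county, middle=""):
    # md5 hash of a normalized name + county string as the match identifier
    norm = "|".join(p.strip().lower() for p in (first, middle, last, county))
    return hashlib.md5(norm.encode("utf-8")).hexdigest()

print(voter_key("Jane", "Doe", "Franklin"))  # hypothetical example record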
http://arxiv.org/abs/2407.03273v1
20240703165930
Global Out of Time Order Correlators as a Signature of Scrambling Dynamics of Local Observables
[ "Fabricio S. Lozano-Negro", "Claudia M. Sánchez", "Ana K. Chattah", "Gonzalo A. Álvarez", "Horacio M. Pastawski" ]
quant-ph
[ "quant-ph" ]
Instituto de Física Enrique Gaviola (CONICET-UNC) Facultad de Matemática, Astronomía, Física y Computación, Universidad Nacional de Córdoba, 5000, Córdoba, Argentina Facultad de Matemática, Astronomía, Física y Computación, Universidad Nacional de Córdoba, 5000, Córdoba, Argentina Instituto de Física Enrique Gaviola (CONICET-UNC) Facultad de Matemática, Astronomía, Física y Computación, Universidad Nacional de Córdoba, 5000, Córdoba, Argentina Centro Atómico Bariloche, CONICET, CNEA, S. C. de Bariloche, 8400, Argentina Instituto de Nanociencia y Nanotecnologia, CNEA, CONICET, S. C. de Bariloche, 8400, Argentina Instituto Balseiro, CNEA, Universidad Nacional de Cuyo, S. C. de Bariloche, 8400, Argentina Instituto de Física Enrique Gaviola (CONICET-UNC) Facultad de Matemática, Astronomía, Física y Computación, Universidad Nacional de Córdoba, 5000, Córdoba, Argentina § ABSTRACT Out-of-Time-Order Correlators (OTOCs) serve as a proxy for quantum information scrambling, which refers to the process where information stored locally disperses across the many-body degrees of freedom of a quantum system, rendering it inaccessible to local probes. Most experimental implementations of OTOCs to probe information scrambling rely on indirect measurements based on global observables, using techniques such as Loschmidt echoes and Multiple Quantum Coherences, via time-reversal evolutions. In this article, we establish a direct connection between OTOCs with global and local observables in the context of NMR experiments, where the observable is the total magnetization of the system. We conduct a numerical analysis to quantify the differences in the evolution of both magnitudes, evaluating the excitation dynamics in spin ring systems with 8 to 16 spins, using a many-body Hamiltonian and long-range interactions. Our analysis decomposes the global echo into a sum of local echoes and cross-contributions, leading to local and global OTOCs. The results indicate that, after an initial transient period, local OTOCs determine the global ones. To confirm the equivalence, we observe that the difference between local and global OTOCs, as well as their fluctuations, becomes negligible as the system size increases. This behavior aligns with that observed in highly interacting or chaotic systems in several experiments. Horacio M. Pastawski ======================== § INTRODUCTION In recent years, the concept of Out-of-Time-Order Correlators (OTOCs) has captured the attention of many theoreticians and experimentalists alike as a probe of quantum chaos <cit.>. OTOCs have become an analytical tool to identify manifestations of chaos in the scrambling of quantum information and the quantum butterfly effect <cit.>. Quantum information scrambling typically denotes the process by which local information propagates into many degrees of freedom, creating complex correlations that prevent its recovery from local measurements. Such chaotic behavior is a crucial requirement for a quantum field theory to adequately describe the extreme classical instabilities induced by gravity in the proximity of a black hole <cit.>. OTOCs constitute a very broad category of mathematical objects; thus, their physical significance and experimental relevance long remained somewhat obscure. This situation improved when Alexei Kitaev <cit.> realized that the concept of the OTOC was contained in a paper by Larkin and Ovchinnikov, where they studied the effects of electron scattering in disordered superconductors <cit.>.
They observed that the same scattering processes that are responsible for the mean-free path also lead to the dynamical growth of the modulus square of a pair of initially commuting Heisenberg operators. Seeking a suitable quantum model with such extremely chaotic dynamics, Kitaev discarded the standard Heisenberg system in favor of Majorana fermions with disorder and many-body infinite-range interactions. This model is now known as the Sachdev-Ye-Kitaev (SYK) model and should show an exponential growth of the OTO commutator. Nevertheless, an experimental approach to the problem seemed far-fetched, since it would involve forward as well as backward evolution of different many-body operators. In a completely independent pathway, various forms of OTOCs were discovered and exploited by the nuclear magnetic resonance (NMR) community, although not explicitly under this name. In this context, time-reversal procedures are regularly employed to evaluate the evolution of Multiple Quantum Coherences (MQC) and determine the number of correlated spins via a technique called spin counting, as a probe of quantum information scrambling <cit.>. The backward evolution of individual spin dynamics is the basis for the Hahn echo, where the failure to recover the initial state is quantified by the time scale T_2 <cit.>, which characterizes the spreading of a local excitation enabled by the spin-spin interactions. Decades later, the time reversal of the evolution driven by a many-body Hamiltonian yielded the observation of Magic Echoes and other generalized echoes of perturbed evolved states, from which multiple quantum coherences can be extracted <cit.>. The procedure of polarization echoes enabled addressing a set of individual spins <cit.>, and the Reduced Evolution of the Polarization Echoes sequence (REPE) <cit.> constituted a first attempt to scale down forward and backward dynamics, to disclose and quantify the role of perturbations on many-body quantum evolutions. All of these procedures are now encompassed in the category of Loschmidt Echoes (LE) <cit.>. Nowadays, LEs are one of the primary tools for studying quantum chaos, thermalization, excitation and information scrambling, as well as many-body localization. These studies are performed using both NMR and a variety of NMR-inspired innovative experimental techniques <cit.>. Time-reversal implementations also play a key role in unmasking environmental noise, eventually achieving its elimination through strategies broadly known as dynamical decoupling <cit.>. Recently, multiple quantum coherences and spin-counting methodologies were rediscovered outside the NMR community and exploited as a general tool to quantify OTOCs and quantum information scrambling <cit.>. One experimental limitation, particularly in NMR measurements of solid-state systems, resides in the difficulty of exciting and detecting individual spins; hence, most of the experimental implementations based on this strategy probe information scrambling through global observables <cit.>. Although local probing can be achieved in some cases <cit.>, it is not the general situation. While these experimental tools are useful for probing quantum information scrambling, they still lack a direct connection to the information accessed by local observables, which are extensively studied in most theoretical analyses.
This article aims to verify that the global observable, evaluated over the entire system in these experimental implementations, accurately reflects the ensemble average of uncorrelated local observables. This equivalence hypothesis was already stated in Ref. <cit.>. We demonstrate these results in the context of spin systems in NMR experiments, where the observable is the total magnetization, by employing a paradigmatic model for both numerical and analytical analyses. These results are important because experimental implementations based on global control and readout are easier to perform than those requiring the stringent and challenging conditions needed for local control and readout. Thus, already available experimental platforms based on global observables, such as NMR quantum simulations <cit.>, experiments with trapped ions <cit.>, and ultra-cold polar molecules <cit.>, can be exploited to probe information scrambling from local observables. To demonstrate the results, Section <ref> first examines the analytical form of the specific OTOCs arising from MQC experiments and the spin counting determined from the second moment of the MQC spectrum. There, we define the global and local observables, identifying in both cases the particular contributions of OTOCs to local observables. Section <ref> presents the numerical evaluation of these magnitudes. Since it is impossible to solve a real system configuration, in which various dynamical regimes are present, we restrict the study to a paradigmatic model: a ring of spins with long-range interactions. This configuration partially mitigates the unavoidable finite-size effects. In a 1D system, long-range interactions are necessary to ensure coherence between multiple spin-projection sub-spaces, while we introduce local Zeeman disorder, which is crucial to prevent symmetry-induced interferences. Given the small size of the systems that can be computationally studied, we show that the long-time behavior is representative of the equivalence between local and global observables, and thus is a robust numerical metric compared with the short- and intermediate-time regimes, which are more sensitive to the system's specificity. We compare the time evolution and the equilibrium values of the global and local OTOCs for different system sizes, finding evidence that supports the validity of the mentioned equivalence as the complexity and size of the system increase. These results are discussed explicitly in Section <ref>, as they are of interest for a wide community pursuing related efforts <cit.> in characterizing scrambling dynamics under different Hamiltonians in connection with existing experiments <cit.>. § OTOCS AND ECHOES CONNECTION The Out-of-Time-Order (OTO) commutator is defined as C_V̂Ŵ(t)={[Ŵ(t),V̂]^†[Ŵ(t),V̂]}. In the case of Hermitian Heisenberg operators Ŵ and V̂ and unitary evolution, Ŵ(t)=e^-iℋ̂t/ħŴe^iℋ̂t/ħ, where ℋ̂ is the system Hamiltonian, the expression can be rewritten in the form C_V̂Ŵ(t)=2(1-{Ŵ(t)^†V̂^†Ŵ(t)V̂}). In the theoretical and numerical literature, Ŵ and V̂ are generally considered local operators, e.g. Pauli matrices, due to the direct interpretation as a measure of space-time propagation of quantum information <cit.>. Considering that the operators V̂ and Ŵ initially commute, the OTO commutator of Eq.
(<ref>) quantifies the degree by which the initially commuting operators fail to commute at time t due to the scrambling of information induced by the Hamiltonian that drives the evolution. The correlator F(t), defined as F(t)={Ŵ(t)^†V̂^†Ŵ(t)V̂}, decays with time. Under some particular conditions, an exponential decay with the same Lyapunov exponent that controls its classical counterpart is observed, serving as a diagnosis of information scrambling and quantum chaos <cit.>. The correlator F(t) involves a time-reversal procedure, and its calculation can be thought of as an experiment in which V̂ sets a quantum excitation that evolves for a time t and then suffers the action of Ŵ. This is followed by a time-reversed evolution before a measurement is applied (V̂^†) <cit.>. Under this view, F(t) has the form of a Loschmidt echo with a perturbation Ŵ=exp[-iΘ̂Δ t] that acts for a very brief period Δ t (pulse labeling) after the forward evolution lapse <cit.>. In actual experiments on many-body systems, there is also a small but mostly uncontrollable perturbation Σ̂ that persists during the whole time-reversal period, and thus no factorization in the above form is possible <cit.>. In this last case, the recovered signal has an overall decay within a perturbation-independent time scale T_3, about an order of magnitude larger than T_2. In single-particle semi-classical models, the decay is exponential, and once Σ̂ exceeds a small critical value, the rate 1/T_3 is identified with the classical Lyapunov exponent <cit.>. The Loschmidt echo is also a global experimental magnitude widely used in experimental setups, with practical implications for the normalization of the signal for quantifying OTOCs and for characterizing the many-body dynamics and its information scrambling <cit.>. The dynamics of OTOCs as a measure of growth in “size” and complexity of the spreading of an initial local operator have been studied in closed and open systems <cit.>, linking this complexity with the system's sensitivity to decoherence <cit.> and a manifestation of quantum chaos <cit.>. The dynamical regimes for OTOCs can be separated into short, intermediate, and long times. The short and intermediate times are highly dependent on the Hamiltonian and the particular nature of the initial operators (local or global). At long times, the OTOCs of a finite system oscillate or, for highly chaotic systems, fluctuate around a mean value <cit.>. In Ref. <cit.> we proposed that the information extracted from global OTOCs is indicative of the behavior of the local observables. Specifically, we used the Lanczos expansion of the density matrix dynamics to infer the intermediate-time behavior of local OTOCs from the global observables. The main hypothesis was that the latter are mainly composed of a set of almost identical local magnitudes plus small interferences. Then, Zhou and Swingle studied the contribution of local OTOCs to global observables, arguing that, in an expansion, the “diagonal” terms are those that contribute the most <cit.>. As with most numerical results, their verification was restricted to the dynamics of 1D chains of spins. In the present work, we take a different perspective on this matter, with a particular focus on the experimental observables in NMR experiments, which are the average magnetization associated with generalized time-reversal echoes.
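As a concrete illustration of these definitions, the following sketch evaluates the OTO commutator of Eq. (<ref>) for two local spin operators in a small chain by exact diagonalization. The tilted-field Ising Hamiltonian and the normalization by the Hilbert-space dimension (an infinite-temperature average) are illustrative choices, not the model studied below.

import numpy as np
from functools import reduce

sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)   # spin-1/2, hbar = 1
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def site_op(op, i, n):
    # embed a single-site operator at site i of an n-spin chain
    mats = [I2] * n
    mats[i] = op
    return reduce(np.kron, mats)

n = 6  # toy size; the rings of 8-16 spins below work the same way
H = sum(site_op(sz, i, n) @ site_op(sz, i + 1, n) for i in range(n - 1))
H = H + sum(0.9 * site_op(sx, i, n) + 0.4 * site_op(sz, i, n) for i in range(n))

E, P = np.linalg.eigh(H)
W, V = site_op(sz, 0, n), site_op(sz, n - 1, n)  # local, initially commuting

for t in np.linspace(0.0, 10.0, 6):
    U = P @ np.diag(np.exp(-1j * E * t)) @ P.conj().T  # e^{-iHt}
    Wt = U @ W @ U.conj().T                            # W(t), convention of Eq. above
    comm = Wt @ V - V @ Wt
    C = np.trace(comm.conj().T @ comm).real / 2**n     # normalized OTO commutator
    print(f"t = {t:4.1f}   C(t) = {C:.4e}")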
§.§ Generalized Echoes and MQC in NMR In numerous real situations, particularly those connected to many-body spin systems analyzed through solid-state NMR techniques, the acquired signal is related to global operators. In NMR, the observable is the total magnetization of the sample, which is proportional to the total spin operator Î^z=∑Î_i^z, the sum of the individual spin magnetizations. The direction z is determined by the external magnetic field. Correspondingly, the initial condition is usually the equilibrium magnetization of an ensemble of polarized spins, in which the density matrix is ρ̂(t=0)∝Î^z, also a global operator that describes the initial excitation. Notice that ρ̂(t=0) is determined by the Boltzmann equilibrium in the high-temperature limit, where terms proportional to the identity do not contribute to an observable signal and are therefore omitted <cit.>. In a solid sample, the natural interaction between spins I=1/2 is given by the dipolar Hamiltonian <cit.>. A huge number of radio-frequency pulse sequences have been developed to engineer the spin-spin Hamiltonian for practical applications. Typically, the sequence design relies on average Hamiltonian theory, based on the Magnus expansion and/or the Floquet approximation, as referenced in several works <cit.>. The key feature of these approaches is the ability to reverse the quantum dynamics by implementing a change in the sign of the acting Hamiltonian. The observation of MQC (or generalized echoes) for spin-counting purposes involves three time periods, see the upper panel of Fig. <ref>a. An initial excitation evolves (forward) with a specifically tailored Hamiltonian ℋ̂; this is followed by a brief encoding period that serves to imprint different phases on different spin-projection sub-spaces (phase labeling of the quantum coherences). Finally, a time reversal is achieved by imposing a dynamics with the -ℋ̂ Hamiltonian. Experimentally, the final observable and the initial state are proportional to the total magnetization operator Î^z. The phase encoding, performed through a rotation around the z axis, plays the role of the perturbation (Ŵ) in the OTOC procedure. According to the “pseudo-pure state” description <cit.>, this initial thermal state ρ̂(t=0)∝Î^z=∑Î_i^z can be thought of as a sum of individual magnetized spins with no correlations among them <cit.>. Each interacting spin will undergo a collective evolution and, after the perturbation and the backward dynamics, the resulting collective state contributes magnetization not only at its original site but also at neighboring spins. Our primary hypothesis is that the main contribution to the global echo (total magnetization) arises from the individual magnetization of each spin returning to its original site. This concept is illustrated schematically in Fig. <ref>a. We conjecture that any magnetization not returning to the original spin site will cancel out, as it arrives with “random” phases. The generalized echo sequence of Fig. <ref> produces a global observable signal denoted as M_G, which is measured after a final read-out pulse (not appearing in the figure). This echo can be summarized in the following equation: M_G(t,ϕ)=1/{Î^zÎ^z}{Î^z(t)R̂^†Î^z(t)R̂}, where R̂=e^-iϕÎ^z, Î^z(t)=e^-iℋ̂tÎ^ze^iℋ̂t, and the normalization {Î^zÎ^z}=N2^N-2 ensures M_G(0,ϕ)=1. Here, the operators R̂ and Î^z(0) commute; however, this is no longer true once the state Î^z(0) has evolved into Î^z(t).
This led to the concept of magnetization scrambling. In the experiments, the phase ϕ is varied between 0 and 2π in 2^M>m_max steps, enabling the acquisition of the multiple-quantum-coherence distribution M_G(t,m) by Fourier transforming the signal M_G(t,ϕ) as a function of ϕ, with m ranging from -m_max to m_max (see Appendix <ref>). This distribution reflects the superposition of states in different total-magnetization subspaces. In the following section we clarify the connection of Eq. (<ref>) with a global OTOC and rewrite it as a combination of a set of local OTOCs. §.§ Local and Global observables The echo in Eq. (<ref>) can be expressed in terms of local echoes, M_G(t,ϕ) = 1/N2^N-2{Î^z(t)R̂^†Î^z(t)R̂} = 1/N2^N-2∑_i,j{Î_i^z(t)R̂^†Î_j^z(t)R̂} = 1/N2^N-2∑_i({Î_i^z(t)R̂^†Î_i^z(t)R̂} + ∑_j≠ i{Î_i^z(t)R̂^†Î_j^z(t)R̂}), separating terms in which each local component of the polarization returns to its original site from cross-contribution terms. This allows the separation of the global echo into two terms containing local echoes and cross terms, M_G(t,ϕ) = ∑_i(M_L^i(t,ϕ)+M_CT^i(t,ϕ)) = M_L(t,ϕ)+M_CT(t,ϕ). By applying the Fourier transformation of M_G(t,ϕ) with respect to ϕ, the multiple quantum coherence (MQC) distribution M_G(t,m) is obtained. The global MQC distribution can also be written in terms of contributions from local and cross terms, M_G(t,m)=M_L(t,m)+M_CT(t,m). From the early NMR literature <cit.> and subsequent extensive usage, it is established that the second moment of this distribution is a statistical measure of the number of correlated spins, here denoted as K_G <cit.>. The global number of correlated spins (sometimes called cluster size), K_G, is expressed as K_G(t) = 2∑_mm^2M_G(t,m) = 2∑_mm^2(M_L(t,m)+M_CT(t,m)) = K_L(t)+K_CT(t), from which the contributions from local echoes (L subindex) and cross terms (CT subindex) are explicitly discriminated. The global cluster size can also be expressed as the OTOC <cit.> K_G(t)=-2/N2^N-2{[Î^z,Î^z(t)][Î^z,Î^z(t)]}, as well as the corresponding local and cross-term contributions, K_L(t) = -2/N2^N-2(∑_i,k{[Î_k^z,Î_i^z(t)]^2} + ∑_i,q,k q≠ k {[Î_q^z,Î_i^z(t)][Î_k^z,Î_i^z(t)]}), K_CT(t) = -2/N2^N-2∑_i,j,k,q i≠ j{[Î_q^z,Î_i^z(t)][Î_k^z,Î_j^z(t)]}. A detailed derivation of these expressions is provided in Appendices <ref> and <ref>. An interesting feature to note is that, in the same fashion as the global echo can be thought of as a sum over different initial conditions Î_i^z, the equivalent decomposition can be made for both K_L(t) and K_CT(t). The sum over sites i can then be separated in the previous expressions, defining the on-site averages K_G(t) = ∑_iK_G^i(t)/N, K_L(t) = ∑_iK_L^i(t)/N, K_CT(t) = ∑_iK_CT^i(t)/N. Notice that while the local OTOCs corresponding to a site i, K_L^i(t), are composed of both the diagonal terms (so called in <cit.>) {[Î_k^z,Î_i^z(t)]^2} and the non-diagonal terms {[Î_q^z,Î_i^z(t)][Î_k^z,Î_i^z(t)]}, the cross-contribution corresponding to site i, K_CT^i(t), only contains non-diagonal terms. Consequently, the same property holds for their on-site averages K_L(t) and K_CT(t). Numerically, we can compute the i-contributions of these magnitudes, K_L^i(t), K_G^i(t), K_CT^i(t), using the echo sequence shown in Fig. <ref> for an initial state Î_i^z.
Therefore, starting from an excitation localized at site i, and observing the magnetization evolution and the subsequent return (Loschmidt echoes) to every site j, we can reconstruct M_L^i(t,ϕ) and M_CT^i(t,ϕ), enabling us to compute separately the local and cross-term OTOCs, and the sum of both, which is the global OTOC. § NUMERICAL RESULTS The global, local, and cross-term magnitudes described in the previous section were calculated considering the paradigmatic model system shown in Fig. <ref>b: a ring of N spins 1/2 interacting through a long-range Double Quantum Hamiltonian, ℋ̂=∑_ih_iÎ_i^z+∑_i,j i<j D_ij[Î_i^xÎ_j^x-Î_i^yÎ_j^y]. This double quantum Hamiltonian is experimentally engineered using NMR pulse sequences developed in the early references <cit.>, and further modified for scaled interactions as described in <cit.>. To extend the model's scope, we assume interactions between spins D_ij with different dependencies D_ij=J/|r_ij|^α on the “bond distances” r_ij for α=1,2,3, where α=3 is the usual dipolar case. It is important to note that r_ij is defined as a “through-bond” distance, i.e. the minimum number of sites between the two spins, rather than as a geometric distance. This definition is pivotal for preserving system homogeneity across varying values of N. Since we mostly use non-random D_ij's, it is crucial to introduce random fields h_i to break the high symmetry of the ring and wash away recurrences. We choose h_i's ranging between [-J/2,J/2]. The interactions in typical molecular Hamiltonians have a sign that depends on the angle between the bond and the external field. However, for this study, we adopt a uniform sign convention, as would be the case in a ferrocene ring <cit.>. Additionally, we explore the incorporation of random sign assignments in the couplings, considering α=1 as a paradigmatic case. The initial local excitation has the form ρ̂_0∝Î_i^z and evolves, as in a typical MQC sequence (Fig. <ref>a), with the Hamiltonian defined in Eq. (<ref>). Since a form of self-averaging is naturally present in a global observable, a single disorder realization is considered. The evolution was computed following the Trotter-Suzuki <cit.> and quantum parallelism algorithms <cit.>. As pointed out above, by repeating the simulations for all the possible initial sites i, the global and local Loschmidt echoes can be computed, from which the OTOCs K_* are derived, where * stands for (G,L,CT). Fig. <ref>(a-b) shows the echoes M_G(t,ϕ) (dashed curves) obtained by performing the numerical evolution for a system of N=16 spins with α=3, starting from the initial condition (i.e. the excitation) at the different sites i and adding all the signals regardless of the site at which they are detected. This global magnitude represents the experimental observable, Eq. (<ref>). Together with the global echo, Figs. <ref>(a,b) also display the local echo M_L(t,ϕ), which is only accessible numerically, for different perturbations (phases) as a function of time. For short times, differences between local and global echoes are still noticeable, but these become smaller as the system evolves, becoming indistinguishable at long times. Appendix <ref> presents a more detailed analysis of the echoes at different sites. When analyzing the recovered signal at a fixed time as a function of the phase, Figs. <ref>(c-e), it becomes clear that at short times, (c), the differences between global and local echoes are still appreciable, but these decrease over time.
This is evidenced by the cross terms, which are practically zero in (e). This temporal behavior is also reflected in the contributions of the different terms, Global, G; Local, L; and Cross-Terms, CT, to the corresponding coherence distribution M_*(t,m), seen in Fig. <ref>(f-h). The second moments of these distributions are proportional to K_G(t), K_L(t) and K_CT(t). Fig. <ref> shows the time evolution of the number of correlated spins, global and local, K_G(t) and K_L(t) (dashed and solid lines, respectively), for different ring sizes N=8-16. These are obtained by averaging over the individual realizations at different sites, K_*^i. The evolution of the individual K_*^i's is exemplified in Fig. <ref> for the case D_ij=J/|r_i,j|^3 and N=12. The K_*^i present the same behavior as the total values, differing only in their fluctuations. From left to right, Fig. <ref> displays the results for all values of α from 3 to 1, plus α=1 with random signs. We observe that, after the initial regime, the curves K_L and K_G representing the growth in the number of correlated spins differ by less than 10%. We use the long-time saturation value and its fluctuations to quantify this difference as the number of spins N in the system increases. As α decreases, the magnitudes K_* reach their saturation values at shorter times, due to the stronger interactions. Typically, the saturation times t_s are Jt_s/ħ≈50 for interactions ∝1/r^3, Jt_s/ħ≈20 for ∝1/r^2 and Jt_s/ħ≈10 for ∝1/r; a more detailed analysis would yield t_s depending on N and α. We can observe that both K_G(t) and K_L(t), at saturation times, tend towards a value around the system size N. In the limit of large α, the interaction among nearest neighbors predominates, leading to a chain-like behavior. In this limit, the Double Quantum Hamiltonian generates only second-order coherences <cit.>, and the difference between global and local OTOCs should be more relevant. Conversely, in the limit of very small α, the interaction extends infinitely and, in the case of couplings with random signs, the system should behave like the SYK model <cit.>. The fact that for larger systems the cross terms become relatively less important means that adding pathways to the dynamics increases the chances of destructive interferences. This suggests that one could enhance the destructive interferences by allowing random signs in D_ij, as is exemplified in Fig. <ref>(d). Indeed, pseudo-random signs appear in an actual crystal due to the different directions of the couplings. At short and intermediate times, the growth of the local and global OTOCs looks slightly different, as depicted in Fig. <ref>. This fact can be traced back to the particular interference patterns in the spin dynamics of the DQ Hamiltonian during time reversal. Components failing to return to their original sites within this brief time-frame exhibit a strong tendency to return to their adjacent sites, with specific phase relationships. Mathematically, the very-short-time difference can be analyzed using the Baker-Campbell-Hausdorff expansion <cit.>. After some algebraic manipulation, it can be shown (as elaborated in Appendix <ref>) that for the DQ Hamiltonian both the global and local OTOCs exhibit quadratic behavior at short times, with coefficients differing only by a factor of two: K_G(t) ≈ (32/N)(t/ħ)^2∑_i,jD_ij^2, K_L(t) ≈ (16/N)(t/ħ)^2∑_i,jD_ij^2. Consequently, K_L(t)≈ K_CT(t) at short times. These expressions align well with the numerical findings, as shown in Fig. <ref>(b).
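As a quick numerical check of this prefactor, the sketch below evaluates ∑_i<j D_ij^2 for the through-bond ring distance defined above; reading the double sum over all ordered pairs as twice this value (symmetric couplings) is our assumption.

import numpy as np

def dq_coupling_sum(n, alpha, J=1.0):
    # sum_{i<j} D_ij^2 with D_ij = J / r_ij^alpha and through-bond
    # ring distance r_ij = min(|i-j|, n - |i-j|)
    i, j = np.triu_indices(n, k=1)
    r = np.minimum(np.abs(i - j), n - np.abs(i - j)).astype(float)
    return np.sum((J / r**alpha) ** 2)

for alpha in (3, 2, 1):
    print(f"alpha = {alpha}: sum_(i<j) D_ij^2 = {dq_coupling_sum(16, alpha):.2f}")

The sum grows markedly as α decreases, consistent with the faster short-time growth discussed next.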
It is also seen that the growth of the OTOCs accelerates when the exponent α becomes smaller, as this increases the value of ∑_i,jD_ij^2. Moreover, this behavior should not change when random signs are included in the values of D_ij, as seen in Fig. <ref>(d). At intermediate times, after this initial quadratic expansion, the complexity of the Hamiltonian makes itself evident and the growth law changes depending on the exponent α, a behavior that is theoretically expected <cit.>. In Fig. <ref> we observe that this growth law shows the same behavior for local and global OTOCs, making their relative difference K_CT(t)/K_G(t) rapidly smaller as time increases. However, the relatively small growth window makes it difficult to assign a particular law, leaving only the saturated regime to systematically study the dependence of the cross terms K_CT(t)/K_G(t) on N. Indeed, the clear intermediate-time laws observed in the experiments are a consequence of the exponential increase of the number of states of the Hilbert space enabled by the dynamics in 3D crystals <cit.>. In order to quantify the difference between the local and global OTOCs at long times, we computed the time average of the cross term at saturation, ⟨ K_CT^i⟩=1/τ∫_t_s^t_maxK_CT^i(t)dt, where t_max is the final time in our simulation and τ=t_max-t_s, for various system sizes. In Fig. <ref>, we depict these magnitudes relative to the system size as a function of N, which can be interpreted as the relative error between K_G and K_L, as N is approximately their saturation value. The site contributions ⟨ K_CT^i⟩/N are shown with purple crosses, while their average ⟨ K_CT⟩/N is shown with red dots. In all cases we observe that not only does ⟨ K_CT⟩/N decrease with N, but so does each ⟨ K_CT^i⟩/N, implying that the equivalence of the global and local observations is valid for each individual initial state, without the necessity of summing over all the initial conditions to obtain the effect. The error bars of the red dots in Fig. <ref> correspond to the normalized standard deviation SD(K_CT)/N, where SD(K_CT)=√(1/τ∫_t_s^t_max(K_CT(t)-⟨ K_CT⟩)^2dt), consequently quantifying the temporal fluctuations around the saturation value. An equivalent expression is used to define the standard deviation from the average value at each site, SD(K_CT^i)/N. We observe that, just as ⟨ K_CT^i⟩/N decreases when N increases, so do their fluctuations, as can be seen in Fig. <ref>. The fluctuations in the long-time behavior of OTOCs have been directly associated with chaos, i.e. the more chaotic the system, the smaller the fluctuations in the long-time behavior of the OTOC <cit.>. Given that the fluctuations of K_L are considerably smaller than the fluctuations of K_G, we can state that SD(K_G)≈ SD(K_CT). Consequently, as we increase N the system becomes more chaotic and the local and global OTOCs become almost identical. This is evidenced by the simultaneous decrease in the average value of K_CT(t) and its fluctuations. Moreover, we can extend the analysis to the magnitude of the fluctuations in the individual and total OTOCs. We observe that the fluctuations of ⟨ K_CT⟩ are considerably smaller than the fluctuations of ⟨ K_CT^i⟩. It is easy to see that both magnitudes satisfy the relation SD^2(K_CT) = 1/N^2∑_iSD^2(K_CT^i) + 1/N^2∑_i≠ jCov(K_CT^i(t),K_CT^j(t)) = σ_CT^2/N+1/N^2∑_i≠ jCov(K_CT^i(t),K_CT^j(t)), where we denote σ_CT^2=1/N∑_iSD^2(K_CT^i), the average over initial sites (purple bars in Fig. <ref>).
Thus, if there were no correlation between the different K_CT^i(t), we would have SD^2(K_CT)=σ_CT^2/N. This last expression proves valid in the limit of large N, as can be seen in Appendix <ref>. In fact, we see that the total correlation (the sum of the covariances) decreases very rapidly with N. For N=14 it already becomes of the same order as our statistical precision. Furthermore, for the Hamiltonian including random signs in D_ij, the correlations between different K_CT^i(t) reach this level already for N=12. Notice that the dispersion between the values of ⟨ K_CT^i⟩/N is smaller when α is smaller. This can be rationalized by noting that for large α the spins are less interconnected, so the site fluctuations observed in each ⟨ K_CT^i⟩/N are highly dependent on the local fields h_i affecting the spin and its neighbors. Fig. <ref> shows the average value ⟨ K_CT⟩/N on a logarithmic scale. It highlights the decay discussed above. The sizes are not large enough for the fittings to discriminate between an exponential decay and a power law. In the latter case, the exponent of this decay might vary between -4.3 and -3.1. In the case of completely random systems, an exponential decay is expected, resulting from a homogeneous distribution of the states in the Hilbert space. In fact, for a DQ Hamiltonian we expect the regions of the Hilbert space corresponding to a given total magnetization to have a normal distribution. Under this assumption, it is reasonable that the decay with N follows a power law rather than an exponential. Nonetheless, the decay of the cross terms with N provides evidence that, in systems whose complexity is strong enough to generate chaotic dynamics, the global echoes are composed of a simple sum of local ones. Contributions from outside the original site will be completely uncorrelated (pseudo-random) at long times, canceling each other out. Thus, local and global OTOCs will provide the same information. § CONCLUSIONS AND DISCUSSION This work establishes a direct connection between OTOCs with global and local observables, confirming their equivalence in the context of NMR experiments, where the observable is the total magnetization of the system. Specifically, based on a theoretical analysis and the numerical solution of model systems, it shows that global echoes and global OTOCs coincide with local echoes and local OTOCs. This equivalence should be particularly valid as the complexity increases and the system size becomes macroscopic. However, as this limit is beyond the reach of our classical computers, the numerical verification is restricted to fairly small systems. Such calculations could model liquid crystal molecules, which were also observed experimentally <cit.>. In this last case, the scrambling at long times through the whole Hilbert space acquires particular physical relevance, as the saturation value becomes equal to the system size, adding further support to the spin-counting procedure. Our results show that the evolution of K(t), the number of quantum-mechanically correlated spins determined from the multiple quantum coherence distribution, has a similar behavior for the global K_G(t) and local K_L(t) observables. Indeed, after a brief initial time where they differ, both K_G(t) and K_L(t) grow together, exhibiting the same behavior over time. This is fundamental for assigning a local meaning to the global measurements, without being tied to the precise number of correlated spins. Their discrepancy, measured by K_CT, quickly becomes smaller than 10%.
The fact that, after an initial transient, both the K_CT values and their fluctuations go to zero with increasing system size endorses the equivalence hypothesis. In our interpretation, the fundamental feature behind the correspondence is that the time reversal after a rotation cannot fully undo the many-body dynamics. As a consequence, a substantial number of backward paths in the Liouville space do not lead to the individual initial magnetization, but remain as multi-spin superpositions without a net polarization. Thus, the observed polarization after a LE corresponds to the small fraction of paths that have unscrambled the multi-spin correlations back into the original state. Our conclusions are consistent with, but not identical to, the results shown in Ref. <cit.> by Zhou and Swingle. In that work, the expressions for the OTOCs discriminate the contributions of “diagonal sums” and “non-diagonal” correlations. The local OTOC (K_L(t)) determined in our work contains all the contributions identified as “diagonal” OTOCs in Ref. <cit.>, along with some of the “non-diagonal” sums. These “non-diagonal” terms reflect the correlation between two evolutions that start from the same spin but are disturbed at different sites. Although these contributions are generally negligible, they constitute a necessary conceptual difference for formulating an experimentally relevant picture in terms of Loschmidt echoes. The cross-correlation function (K_CT(t)) contains the remaining “off-diagonal” terms (Eq. (<ref>)). Thus, the assumption made by Zhou and Swingle, supported by their results for a spin chain, that the non-diagonal terms cancel out implies the decay we found for K_CT(t). This means that after an initial transient, the local OTOCs determine the global ones. These results are important for already available experimental platforms that allow determining OTOCs and information scrambling based on global control and observables. Thus, very diverse experimental results based on these approaches can now be interpreted as probes of information scrambling from local observables. In particular, the results of this article support the hypothesis, explicitly stated in Ref. <cit.>, that in both Loschmidt echo and MQC experiments, the observables evaluated over the whole system actually reflect the ensemble average over uncorrelated “local” observables. This connection between the MQC signal and local observables is crucial for the interpretation of several experiments. Of particular importance are: 1) the ballistic growth of the number of entangled spins in a crystal under a Double Quantum Hamiltonian <cit.>; 2) the diffusive scrambling of spin operators under a dipolar Hamiltonian, showing the relevant role of many-body dipolar interactions <cit.>; 3) the localization-delocalization transition of the controlled dynamics of quantum information in the presence of perturbations that mix Hamiltonians with ballistic and many-body terms, as a function of the mixing parameter <cit.>; 4) in a more traditional context, the different excitation-spreading regimes observed in liquid-crystal molecules before saturation over the whole available Hilbert space <cit.>. While our results focus on NMR experimental conditions, they are also applicable to other experimental platforms where global control and observables are more easily achievable than their local counterparts, such as experiments with trapped ions <cit.> and ultra-cold polar molecules <cit.>.
Supercomputing time for this work was provided by CCAD (Centro de Computación de Alto Desempeño de la Universidad Nacional de Córdoba). GAA acknowledges support from CNEA; CONICET; ANPCyT-FONCyT PICT-2017-3156, PICT-2017-3699, PICT-2018-4333, PICT-2021-GRF-TI-00134, PICT-2021-I-A-00070; PIP-CONICET (11220170100486CO); UNCUYO SIIP Tipo I 2019-C028, 2022-C002, 2022-C030; Instituto Balseiro; and collaboration programs between the MINCyT (Argentina) and MAECI (Italy) and MOST (Israel). The authors acknowledge support from CONSOLIDAR SECYT-UNC 2018-2022, PIP-CONICET (11220200101508CO), and PICT-2017-2467. § MULTIPLE QUANTUM COHERENCES A coherence of order m corresponds to matrix elements that represent transitions between many-spin states, in a Zeeman basis, whose magnetizations differ by m. The evolved density matrix can be expressed as a superposition of contributions from the different orders, ρ̂=∑_mρ̂_m, where the m-quantum coherence component behaves under rotations as e^iϕÎ^zρ̂_me^-iϕÎ^z=ρ̂_me^imϕ. Formally, the intensity of the m-th coherence can be defined as g_m=1/tr{(Î^z)^2}tr{ρ̂_mρ̂_-m}. Experimentally, by implementing systematic rotations around Z in steps of ϕ, the coherence distribution can be decoded through a Fourier transformation of the collected signals, M(ϕ,t)=1/tr{(Î^z)^2}tr{e^-iϕ I^zρ̂(t)e^iϕ I^ze^iHtI^ze^-iHt}, where ϕ=2π/m_max and m_max/2 represents the maximum coherence order to be decoded. By expanding ρ̂(t) in the form (<ref>), considering ρ̂(0)=Î^z, and using Eq. (<ref>) (the rotation property), the collected signals satisfy M(ϕ,t) = 1/tr{(Î^z)^2}tr{∑_mρ̂_me^imϕ∑_m'ρ̂_m'} = ∑_mg_me^imϕ. Note that M(ϕ=0,t)=∑_mg_m is the Loschmidt echo intensity at time t <cit.>. Separately, one can observe that the second moment of this MQC distribution is a global OTOC <cit.>: ∑_mm^2g_m =-∂_ϕ^2M(ϕ,t)|_ϕ=0 =1/tr{(Î^z)^2}tr{[Î^z,[Î^z,Î^z(t)]]Î^z(t)} =-1/tr{(Î^z)^2}tr{[Î^z,Î^z(t)][Î^z,Î^z(t)]}. § MAPPING THE LOCAL AND GLOBAL CONTRIBUTIONS TO THE CLUSTER SIZE K WITH DIAGONAL AND OFF-DIAGONAL OTOCS From the definitions in Eqs. (<ref>, <ref>), namely, M_G(t,ϕ) = 1/(N2^N-2)tr{Î^z(t)R̂^†Î^z(t)R̂} = 1/(N2^N-2)∑_i,jtr{Î_i^z(t)R̂^†Î_j^z(t)R̂}, M_L(t,ϕ) = 1/(N2^N-2)∑_itr{Î_i^z(t)R̂^†Î_i^z(t)R̂}, M_CT(t,ϕ) = 1/(N2^N-2)∑_i,j, i≠ j tr{Î_i^z(t)R̂^†Î_j^z(t)R̂}, we apply the second derivative to each term and analyze their contributions to the OTOCs, ∑_mm^2g_m= -∂_ϕ^2M_G(ϕ,t)|_ϕ=0 = -∂_ϕ^2M_L(ϕ,t)|_ϕ=0-∂_ϕ^2M_CT(ϕ,t)|_ϕ=0. By doing so, we can explicitly write the echoes as a combination of “diagonal” contributions of the form ∑_i,ktr{[Î_k^z,Î_i^z(t)]^2} and “off-diagonal” ones of the form ∑_i,j,k,q, j≠ i or k≠ q, tr{[Î_q^z,Î_i^z(t)][Î_k^z,Î_j^z(t)]}, as defined in Ref. <cit.>. We find that the only contribution of “diagonal” terms to the global OTOC comes from M_L(t,ϕ): N2^N-2K_L(t) =-2∂^2/∂ϕ^2M_L(t,ϕ)|_ϕ=0 =-2∑_itr{Î_i^z(t)Î^zÎ^zÎ_i^z(t)-Î_i^z(t)Î^zÎ_i^z(t)Î^z} =-2∑_itr{[Î^z,[Î^z,Î_i^z(t)]]Î_i^z(t)} =-2∑_itr{[Î^z,Î_i^z(t)]^2} =-2∑_i,q,ktr{[Î_q^z,Î_i^z(t)][Î_k^z,Î_i^z(t)]} =-2(∑_i,ktr{[Î_k^z,Î_i^z(t)]^2}+∑_i,q,k, q≠ k, tr{[Î_q^z,Î_i^z(t)][Î_k^z,Î_i^z(t)]}), while only “off-diagonal” terms appear from the cross term M_CT: N2^N-2K_CT(t) =-2∂^2M_CT(t,ϕ)/∂ϕ^2|_ϕ=0 =-2∑_i≠ jtr{[Î^z,Î_i^z(t)][Î^z,Î_j^z(t)]} =-2∑_i,j,k,q, i≠ j, tr{[Î_q^z,Î_i^z(t)][Î_k^z,Î_j^z(t)]}. § SHORT TIME BEHAVIOR To derive the expression for the short-time behavior of K_G(t), we start by using the Baker-Campbell-Hausdorff expansion of Î_z(t), which approximates the time evolution of Î_z under a Hamiltonian ℋ̂: Î_z(t) ≈ Î_z+(-it/ħ)[Î_z,ℋ̂], [Î_z,Î_z(t)] ≈ [Î_z,Î_z+(-it/ħ)[Î_z,ℋ̂]] ≈ (-it/ħ)[Î_z,[Î_z,ℋ̂]].
At this point, we carry out the commutator for the double quantum Hamiltonian, Eq. (<ref>), which yields [Î_z,[Î_z,ℋ̂_DQ]]=4ħ^2ℋ̂_DQ. Finally, by substituting these expressions into Eq. (<ref>) and simplifying, we arrive at: K_G ≈ 2· 16t^2ħ^2/tr{Î_z^2} tr{ℋ̂_DQ^2} = 2· 16t^2ħ^2/tr{Î_z^2}∑_i,j,k,l,i≠ j,k≠ lD_i,jD_k,l tr{ℋ̂_DQ^i,jℋ̂_DQ^k,l} = 2· 16t^2ħ^2/tr{Î_z^2}∑_i,j,k,l,i≠ j,k≠ l2D_i,jD_k,l tr{Î_i^xÎ_j^xÎ_k^xÎ_l^x} = 2· 16t^2ħ^2/tr{Î_z^2}∑_i≠ j4D_i,j^2 tr{Î_i^xÎ_j^xÎ_i^xÎ_j^x} = 2· 16t^2ħ^2/(N2^N-2)∑_i≠ j4D_i,j^2· 2^N-4 = 32t^2ħ^2/N∑_i,j,i≠ jD_i,j^2. Following the same procedure for a local OTOC, we find that the initial growth differs only by a factor of two: K_L(t) = -2/(N2^N-2)∑_itr{[Î^z,Î_i^z(t)]^2} ≈ 2/(N2^N-2)· 16t^2ħ^2∑_i,j2D_i,j^2· 2^N-4 ≈ 16/N t^2ħ^2∑_i,jD_i,j^2. § BEHAVIOR OF THE INDIVIDUAL MAGNITUDES K_CT^i AND OF THE COVARIANCE. In the main text and the preceding sections of the appendix, we have demonstrated that K_*(t) can be expressed as an average of site contributions, denoted as K_*^i(t). Each of these contributions exhibits minimal deviation from the averaged value K_*(t), a fact supported by observing the variance of this average or by directly comparing the different curves, as depicted in Fig. <ref>. The curves corresponding to different initial sites differ mainly in their fluctuations. Therefore, by averaging over initial sites, the primary effect is to mitigate these fluctuations, resulting in smoother curves. Nevertheless, one can extract information about the spin correlations from ⟨ K_CT⟩ = 1/N∑_i⟨ K_CT^i⟩ and ⟨ K_CT^2⟩ = 1/N^2∑_i,j⟨ K_CT^iK_CT^j⟩ = 1/N^2[∑_i⟨(K_CT^i)^2⟩+∑_i,j, i≠ j⟨ K_CT^iK_CT^j⟩]. By expanding Eq. (<ref>) into individual contributions we have SD^2(K_CT)= 1/τ N^2∑_i,j∫_t_s^t_max[K_CT^i(t)K_CT^j(t)-⟨ K_CT^i⟩⟨ K_CT^j⟩]dt, which can be rearranged in the following form: SD^2(K_CT)=1/N^2∑_iSD^2(K_CT^i) +1/τ N^2∑_i,j, i≠ j∫_t_s^t_max(K_CT^i(t)K_CT^j(t)-⟨ K_CT^i⟩⟨ K_CT^j⟩)dt =σ_CT^2/N+1/N^2∑_i≠ jCov(K_CT^i,K_CT^j). Here, we denoted σ_CT^2=1/N∑_iSD^2(K_CT^i), and define the total covariance as: Total Cov.=1/N^2∑_i≠ jCov(K_CT^i,K_CT^j). This last term gives a measure of the total correlation between the dynamics of the K_*^i. If the spin dynamics were uncorrelated, we would have SD^2(K_CT)=σ_CT^2/N. Fig. <ref> compares these magnitudes for K_CT; we see that the error bars, representing SD(K_CT) and √(σ_CT^2/N) (blue and green bars, respectively), become closer as N increases. For a system with α=1 plus random signs in the interactions this difference is small even for small N. Figure <ref>(a) shows the standard deviation SD(K_CT) as a function of N. As was seen directly in the plots of K_*(t) (Fig. <ref>), the fluctuations decrease with the system size and are smaller when random signs are included in the Hamiltonian. Furthermore, this trend is also observed in the total covariance, Eq. (<ref>), as can be seen in Fig. <ref>(b). Indeed, this decrease is pronounced, becoming of the same order as our statistical precision for N=14. § ECHOES AWAY FROM THE INITIAL SITE. As discussed in the main body of the paper, our numerical computations of the global and local Out-of-Time-Ordered Correlators (OTOCs) rely on the implementation of the MQC sequence. Fig. <ref> shows typical results that, apart from the discrimination between local and global contributions, are similar to those usually found in experimental implementations.
In this section, we leverage the availability of numerical data and show the contributions to the total echo plots M_G(t,ϕ) from echoes observed at a distance n from the initial site: ℳ_G^n(t,ϕ)=1/(N2^N-2)∑_itr{(Î_i+n^z(t)+Î_i-n^z(t))R̂^†Î_i^z(t)R̂}(1-δ_n,0/2). Note that the contributions of the sites at -n and n come from the ring geometry of the system and correspond to translations of n sites to the right and to the left of site i. In the numerical implementation, the periodicity of the indices needs to be handled carefully. For example, for a ring of N spins, ℳ_G^0(t,ϕ)=1/(N2^N-2)∑_itr{Î_i^z(t)R̂^†Î_i^z(t)R̂} and ℳ_G^1(t,ϕ)=1/(N2^N-2)∑_itr{(Î_i+1^z(t)+Î_i-1^z(t))R̂^†Î_i^z(t)R̂}, where Î_i+1^z and Î_i-1^z represent the spins to the right and to the left of i, respectively. The behavior of these echoes is shown in Fig. <ref> for two different perturbations. One might think that, as occurs in the polarization echo, what is lost from the original site may end up as polarization in the neighboring spins. However, under the DQ Hamiltonian it ends up as correlations, which in general are not observable. Some perturbations, such as ϕ=π/2, can convert these correlations into observable magnetization at the neighboring sites. These correlations, however, are washed out well before the saturation times. The system is clearly highly correlated: a negative polarization returns at odd distances n, while a positive polarization arrives at even values of n. This effect is clearer in Fig. <ref>(b) due to the magnitude of the echoes. These echoes appear at longer times as n increases. After this transient effect, we observe that all returns to n≠0 go to zero, as already hinted at in the previous analysis.
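Since the careful periodic indexing is the main practical subtlety in evaluating ℳ_G^n(t,ϕ), we illustrate it with a minimal Python sketch (schematic only; the function echo(i, j, t, phi), standing for the exact-diagonalization evaluation of tr{Î_j^z(t)R̂^†Î_i^z(t)R̂}/(N2^N-2), is a placeholder for the numerical routine):

def M_G_n(n, t, phi, N, echo):
    # Contribution to the total echo from sites at ring distance n from the
    # initially excited site. `echo(i, j, t, phi)` is a placeholder for the
    # exact-diagonalization evaluation of tr{I_j^z(t) R^dag I_i^z(t) R}/(N 2^(N-2)).
    total = 0.0
    for i in range(N):
        right = (i + n) % N   # periodic index: n sites to the right on the ring
        left = (i - n) % N    # periodic index: n sites to the left on the ring
        total += echo(i, right, t, phi) + echo(i, left, t, phi)
    if n == 0:
        total /= 2.0          # implements the (1 - delta_{n,0}/2) factor
    return total

The modulo operation implements the ring translations, and halving the n=0 term reproduces the (1-δ_n,0/2) factor of the equation above.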
http://arxiv.org/abs/2407.02555v1
20240702180000
Searching for a dark matter induced galactic axion gradient
[ "Edward Hardy", "Mario Reig", "Juri Smirnov" ]
hep-ph
[ "hep-ph", "astro-ph.CO" ]
edward.hardy@physics.ox.ac.uk mario.reiglopez@physics.ox.ac.uk Department of Physics, Royal Holloway University of London, Egham, Surrey, TW20 0EX, UK. juri.smirnov@liverpool.ac.uk; ORCID: 0000-0002-3082-0929 Department of Mathematical Sciences, University of Liverpool, Liverpool, L69 7ZL, United Kingdom § ABSTRACT An ultra-light axion with CP violating interactions with a dark sector and CP preserving interactions with the visible sector can act as a novel portal between dark matter and the Standard Model. In such theories, dark matter sources an axion field extending over the entire galaxy, the gradient of which can be searched for with precise spin precession experiments. A reinterpretation of existing co-magnetometer data already constrains theories that are consistent with astrophysical bounds, and near-future experiments will begin probing well-motivated models. The required interactions can arise from a confining hidden sector without necessitating fine-tuning of the axion's mass. Searching for a dark matter induced galactic axion gradient Juri Smirnov ============================================================== LTH-1373 § INTRODUCTION Despite compelling evidence for its existence, the nature of dark matter (DM) and its interactions with the Standard Model (SM) remain unknown. Investigation of the possible portal interactions between DM and the SM is therefore worthwhile. In this letter we propose a new such portal, in which the DM and the SM are connected via an ultra-light axion that has CP violating couplings to the DM. We argue that this possibility is theoretically well-motivated and can lead to observable signals in table-top experiments. Axions (by which we mean any light pseudo-scalars) are generic in UV completions of the SM involving gauge fields in extra space-time dimensions, with string theory as the foremost example <cit.>, see <cit.> for a recent review. In field-theoretic realisations, an axion can couple to both the SM and a dark sector as a result of heavy matter charged under gauge groups in both sectors, while in string theory the couplings of an axion to fermions depend on the compactification <cit.>. It is plausible that the interactions of an axion with DM might be CP violating; the axion itself is not the DM in the scenario we consider. As a guiding analogy, recall that the QCD axion has minuscule CP violating interactions with SM particles only because the minimum of its potential is extremely close to the point where the strong CP angle θ_QCD=0. That θ_QCD≃ 0 is both a quirk of the SM (due to the weak interactions inducing a non-zero θ_QCD only at high loop order, suppressed by powers of G_FΛ_QCD^2≪ 1 <cit.>) and a theoretical problem in that contributions to the axion potential from the UV must be tiny (the quality problem <cit.>). One can imagine hidden sectors for which similar conditions are not fulfilled and sizable CP violating interactions arise. Meanwhile, we require that the axion does not have an anomalous coupling to QCD. We further assume that the SM is decoupled from the dominant sources of CP violation <cit.>, in which case it is a reasonable expectation that an axion has only CP conserving interactions with the SM. We therefore consider an effective Lagrangian ℒ_eff⊃ g^χ_s ϕχ̅χ+c_ψ∂_μϕ/f_ϕψγ^μγ^5ψ - 1/2m_ϕ^2 ϕ^2 , where ϕ is the axion, f_ϕ is its decay constant (defined such that the period of the axion field is 2π f_ϕ), and m_ϕ is the axion mass.
The field ψ is a generic SM fermion, and c_ψ is expected not to be much bigger than 𝒪(1) in minimal theories. Meanwhile, χ is the DM, which we assume to be a fermion. In typical UV completions involving a confining hidden sector with strong coupling scale Λ, the axion's CP violating interactions have strength g_s^χ∼θΛ/f_ϕ, where θ∈ [-π,π]. (This is the form of the QCD axion's CP violating couplings, up to a mild dependence on the masses of the light quarks <cit.>.) § A GALACTIC AXION GRADIENT As a result of the first term in Eq. (<ref>), DM sources an axion field. We consider axions with masses m_ϕ≲ 1/(pc) ≃ 10^-23 eV such that this induced axion field extends over galactic scales. The equation of motion of ϕ in the background of the DM in our galaxy has a time-independent solution ϕ(r⃗) = g_s^χ∫_V d^3r⃗' n_χ (r⃗') e^-m_ϕ|r⃗-r⃗'|/(4π|r⃗-r⃗'|), where r⃗ is relative to the galactic centre and n_χ(r⃗) is the DM number density, which we assume to be time-independent and spherically symmetric; corrections to this are relatively small. We consider theories for which the effective mass of ϕ in the galactic DM background satisfies m_eff≥ m_ϕ≳ 10^-30 eV, such that the period of ϕ oscillations, 2π/m_eff, is fast compared to the galaxy's formation timescale and its age. Consequently, we assume that ϕ is close to the solution in Eq. (<ref>) today, as is expected if ϕ tracks the minimum of its changing potential adiabatically in the early universe. We expect that deviations from this solution, e.g. remnant oscillations, will only strengthen the signals we consider. The expression in Eq. (<ref>) simplifies in two limits: First, if the Compton wavelength of the axion, 1/m_ϕ, is shorter than the length scale over which the DM density varies, λ_χ≡ n_χ/|∇ n_χ|, the axion field at any point can be approximated as that sourced from the surrounding 1/m_ϕ^3 volume, which contains N_eff≡ n_χ/m_ϕ^3 DM particles <cit.>. As a result, ϕ(r) ≃ g_s^χ N_eff/λ_ϕ≃ g_s^χ n_χ(r)/m_ϕ^2 . Second, if the axion's Compton wavelength is comparable to the scale of the galaxy or larger, the axion field is well-approximated by the standard Coulombic potential. At the Earth's distance from the galactic centre, R, we therefore have an axion field gradient of |∇ϕ(R)| ≃ g_s^χ |∇ n_χ(R)|/m_ϕ^2 if 1/m_ϕ < λ_χ, and g_s^χ N_χ(R)/R^2 if 1/m_ϕ > λ_χ, where N_χ(R) = 4π∫_0^R n_χ(r) r^2 dr is the total number of DM particles within the Earth's galactic orbit. Such an axion gradient affects the SM particles through the second term in Eq. (<ref>). In the limit that the SM particles are non-relativistic, the corresponding interaction in the Hamiltonian is (see e.g. <cit.> for recent reviews) H_ϕ=-c_ψ/f_ϕ∇ϕ·𝐒 , where 𝐒 is the spin operator of the SM particle. Consequently, an axion gradient leads to a spin-dependent energy shift Δ E in SM states, analogous to the Zeeman effect. Using Eq. (<ref>), for a particle with spin aligned with the axion field gradient, Δ E ≃ c_ψ g_s^χ|𝐒||∇ n_χ(R)|/(f_ϕ m_ϕ^2) if 1/m_ϕ < λ_χ, and c_ψ g_s^χ|𝐒| N_χ(R)/(f_ϕ R^2) if 1/m_ϕ > λ_χ. In other words, there is a monopole-dipole axion force <cit.>, with the monopole side arising from the coupling to DM. Eq. (<ref>) can easily be evaluated assuming, e.g., an NFW DM profile <cit.>. Fixing the local DM energy density to ρ_0≃ 0.4 GeV/cm^3 and the scale radius to 17.6 kpc gives Δ E ≃ 3× 10^-24 eV κ × { 1/(λ_χ m_ϕ)^2 if m_ϕ > 1/λ_χ ; 1 if m_ϕ < 1/λ_χ }, with 1/λ_χ≃ 4 × 10^-28 eV and κ = |𝐒| ( g^χ_s/m_χ/M_P ) ( 10^9 GeV/f_ϕ/c_ψ ) , where M_P is the (non-reduced) Planck mass.
Not surprisingly, the energy shift is largest when m_ϕ≲ 1/λ_χ∼(10 kpc)^-1, such that the induced axion field is sensitive to the DM distribution throughout the entire galaxy. § CONSTRAINTS AND EXPERIMENTAL REACH There are existing, independent constraints on the interactions of an ultra-light axion with DM and the SM. CP violating interactions with DM are bounded because they lead to long-range self-interactions that affect the DM's dynamics <cit.>. Parameterising such forces as V(r) = -α G_N m_χ^2/r exp(-r/l) , the constraints are on α as a function of l, independent of m_χ. In our theories, l = 1/m_ϕ and g^χ_s = √(α) m_χ/M_P, so the combination g_s^χ/m_χ is bounded. For axion masses of order 10^-30 eV, corresponding to super-galactic lengths, these forces must be weaker than gravity, i.e. g_s^χ/m_χ≲ 1/M_P. For larger axion masses, m_ϕ≳ 1/kpc, the dominant constraints are from observations of the Bullet Cluster, and values of g_s^χ/m_χ that are a few orders of magnitude larger than 1/M_P are allowed <cit.>. Meanwhile, the strongest constraints on CP preserving couplings of light axions to SM fermions come from the evolution of stars; the limits on couplings to electrons, c_e/f_ϕ, and nucleons, c_N/f_ϕ, are both of order 10^-9 GeV^-1 <cit.>, see <cit.> for a recent review. We also note that CP violating couplings of the axion to the visible sector (not included in Eq. (<ref>)) must be such that SM monopole-monopole forces are at least a factor of 10^10 weaker than gravity <cit.>. Remarkably, existing experiments sensitive to spin-dependent energy shifts from a galactic axion gradient explore parts of parameter space that are not excluded by the preceding constraints. Unlike measurements of e.g. the muon magnetic dipole moment, the energy shift due to an axion gradient cannot be boosted, so what matters is the absolute sensitivity of an experiment, which makes extremely precise table-top approaches well-suited. Crucially, because the axion gradient couples only to spin, the relative energy shift Δ E^(a)/Δ E^(b) of two different atomic or molecular states ((a), (b)) can differ from that due to a magnetic field. This is exploited by so-called co-magnetometers, which use two or more types of fermionic spins to distinguish new-physics energy shifts from uncontrolled magnetic field backgrounds (the precise conditions for general atomic and molecular states are given in <cit.>). Co-magnetometry has been used in the past to search for electric dipole moments, axion forces, and more exotic signals, see <cit.> for reviews. The radial orientation of the axion gradient relative to the galactic centre offers additional experimental opportunities. For a fixed experiment, it makes the induced energy shift a pseudo-DC signal with daily modulation due to the Earth's rotation. This feature of the signal is different from that of an axion gradient generated by the Earth <cit.>, see <cit.> for recent discussions and <cit.> for related studies of forces from test masses. Moreover, by rotating an experiment additional modulation can be induced. In the case of a signal, the sharp prediction of the orientation of the axion gradient would allow discrimination from possible neglected backgrounds or other new-physics sources. Out of the current experiments, those of Refs. <cit.> and <cit.> are among the most sensitive to a galactic axion gradient.
These use co-magnetometers based on K-^3He and ^21Ne-Rb-K to reach frequency resolutions at the level of 0.7 nHz and 0.5 nHz respectively, corresponding to energies of order 10^-24 eV. Such experiments have been used to search for Lorentz violation of extra-solar origin (e.g. as in the Kostelecky SM extension <cit.>), making use of a rotating platform to help remove backgrounds and systematics. The resulting limits can immediately be reinterpreted as constraints on our theories. Moreover, the experiment based on ^21Ne-Rb-K is particularly promising for future improvements, having obtained a sensitivity similar to the K-^3He experiment with a factor of 8 less integration time. Indeed, in Ref. <cit.>, it is mentioned that with future upgrades, resolution at the level of 10^-3 nHz might be achieved. There are also several other experiments and proposals with sensitivities to energy shifts at the level of 𝒪(nHz) <cit.>. § UNDERLYING THEORIES So far we have taken Eq. (<ref>) at face value; however, this can miss important effects that arise in a complete theory. In particular, such a Lagrangian is expected to appear as the first terms in an expansion in an axion field ϕ≪ f_ϕ, and higher order terms will be relevant if values ϕ≃ f_ϕ are induced in the galaxy. At this point, given that axions have a compact field range, the interaction g_s^χϕχ̅χ should be replaced by h(ϕ/f_ϕ)χ̅χ, where h is a function with period 2π (similarly, the axion mass will be replaced by a periodic potential). It can immediately be seen that the compactness of the axion field is going to be borderline relevant for values of g_s^χ that lead to detectable energy shifts. Regardless of the axion mass, the maximum galactic axion field gradient possible without the compactness of the axion field being an issue is |∇ϕ| ≲ f_ϕ/R, such that Δ E ≲ (c_ψ/f_ϕ)(f_ϕ/R) ≃ c_ψ 10^-27 eV . For c_ψ∼𝒪(1) this is a couple of orders of magnitude below current sensitivities and might be achievable in the future. Assuming m_ϕ≲ 1/λ_χ for simplicity, and using Eq. (<ref>), in the Milky Way ϕ∼ f_ϕ is reached for g_s^χ= 10^-26(m_χ/MeV)(f_ϕ/10^9 GeV) , which is consistent with DM-DM self-interaction constraints for f_ϕ≃ 10^9 GeV. With the parameterization g_s^χ=θΛ/f_ϕ, Eq. (<ref>) corresponds to θ = 10^-4(10^-4 eV/Λ)(m_χ/MeV)(f_ϕ/10^9 GeV)^2 , i.e. only a small CP violating angle in the hidden sector if m_χ is not too large. We explain the choice of normalisation of Λ shortly. In the case of larger g_s^χ, such that additional operators are relevant, the induced axion field typically saturates at values of order f_ϕ rather than continuing to grow. Fields ϕ≫ f_ϕ are not forbidden by the compact nature of the axion field range; this would simply correspond to the axion winding its fundamental domain. Instead, saturation occurs because ϕ reaches a value such that ∂_ϕ h(ϕ/f_ϕ)=0, at which point the coupling that sources a ϕ field turns off, effectively self-screening the DM. For instance, a typical form of the full axion potential might be (further details are given in Appendix <ref>) V⊃ -Λ^4 cos(ϕ/f_ϕ) + χ̅χΛcos(ϕ/f_ϕ+θ) . Expanded at small ϕ/f_ϕ, the second term leads to a CP violating coupling sin(θ)Λ/f_ϕ ϕχ̅χ. As ϕ/f_ϕ≃π-θ is approached in the galaxy, a non-zero χ̅χ background contributes less and less to the equation of motion of ϕ. Eventually, the induced galactic axion field saturates once ϕ/f_ϕ≃π-θ is reached.
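This self-screening is easy to exhibit numerically. In the regime where gradients are negligible (Compton wavelength short compared to the scale of the density variations), the induced field simply minimizes the local potential V(ϕ) = -Λ^4cos(ϕ/f_ϕ) + n_χΛcos(ϕ/f_ϕ+θ), with the non-relativistic replacement χ̅χ→ n_χ. A minimal Python sketch (illustrative only; units with Λ=f_ϕ=1, and the density values are arbitrary):

import numpy as np
from scipy.optimize import minimize_scalar

def phi_min(n_chi, theta, Lam=1.0, f=1.0):
    # Induced axion field in a homogeneous DM background n_chi: the minimum of
    # V(phi) = -Lam^4 cos(phi/f) + n_chi*Lam*cos(phi/f + theta),
    # i.e. the non-relativistic limit <chi bar chi> -> n_chi.
    V = lambda x: -Lam**4 * np.cos(x / f) + n_chi * Lam * np.cos(x / f + theta)
    return minimize_scalar(V, bounds=(0.0, np.pi * f), method='bounded').x

theta = 0.1
for n_chi in [1e-3, 1e-2, 1e-1, 1.0, 10.0, 100.0]:
    print(n_chi, phi_min(n_chi, theta))

For small n_χ the induced field grows linearly, ϕ/f_ϕ ≈ (n_χ/Λ^3)sinθ, while for large n_χ it saturates at ϕ/f_ϕ→π-θ, as described above.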
We also note that, if θ≪ 1, as the background ϕ/f_ϕ increases the effective coupling of ϕ to χ is first enhanced, reaching a maximum strength at ϕ/f_ϕ≃π/2-θ, before being screened. We therefore identify Δ E≲ 10^-27 eV, arising from minimal theories, as a well-motivated target for future experiments. However, we also note there are various ways that larger Δ E can arise from consistent UV theories. Most simply, values c_ψ∼ 100 are possible in conventional models. Meanwhile, more unusual UV completions such as clockwork theories <cit.> can lead to effective values of c_ψ that are exponentially large. Finally, if the axion potential has a monodromy <cit.>, the induced axion field might not saturate at values of order f_ϕ. Another question that depends on the particular theory underlying Eq. (<ref>) is whether an extremely small axion mass, m_ϕ∼ 10^-27 eV, is possible without fine-tuning. If g_s^χ indeed arises from a hidden sector gauge group running into strong coupling at a scale Λ, a typical expectation is m_ϕ∼Λ^2/f_ϕ (a smaller m_ϕ is possible if there are light hidden sector chiral fermions). To obtain m_ϕ∼ 10^-26 eV for f_ϕ∼ 10^9 GeV then requires Λ≲ 10^-4 eV. Small values of Λ are not necessarily problematic, although in the early universe the hidden sector must be colder than the SM to avoid bounds on new relativistic degrees of freedom <cit.>. However, observations require that fermionic DM must have a mass much larger than such a Λ <cit.>. This is also needed so that the galactic DM number density is smaller than Λ^3 and the hidden sector is not locally deconfined. Moreover, m_χ > Λ guarantees that the DM mass induced by a background ϕ≲ f_ϕ is smaller than the bare DM mass, both for g_s=θΛ/f_ϕ (with θ≲ 1) and for the example completion in Eq. (<ref>). Whether a DM candidate with a mass larger than Λ acquires substantial CP violating interactions with the axion is model-dependent. In Appendix <ref>, we describe an example theory in which the DM is a particle in the adjoint of a hidden sector gauge group that forms bound states with dark gluons, as studied in Ref. <cit.>. Provided the mass of the adjoint field is greater than roughly a keV, such a state can be a viable DM candidate, with the required CP violating interactions arising from the gluon part of the bound state. § RESULTS In Figure <ref>, we show the parameter space of theories with a galactic axion gradient in the plane of (c_ψ g_s^χ)/(f_ϕ m_χ) against m_ϕ. In this plane, we plot the existing constraints from astrophysics by saturating the allowed values of c_ψ/f_ϕ and g_s^χ/m_χ separately. Meanwhile, as can be seen from Eqs. (<ref>) and (<ref>), the induced spin-dependent energy shift of SM fields depends on the couplings via the combination plotted. We show our new limit, obtained by reinterpreting the results of Ref. <cit.>, and also the reach of possible future experiments with plausible improvements in sensitivity to Δ E. These curves are obtained from the analytic results in Eq. (<ref>); in reality the transition around m_ϕ≃ 1/λ_χ will be smooth. We also indicate in the same Figure the part of parameter space for which the galactic axion field induced by Eq. (<ref>) is self-consistently such that the compactness of the axion field can be neglected. In particular, we fix g_s^χ/m_χ such that ϕ = f_ϕ at the galactic centre (assuming the underlying theory is such that a ϕ background does not self-enhance the ϕ–χ coupling, as is e.g. the case for θ≃ 1 in Eq. (<ref>)).
Given that the induced ϕ is proportional to g_s^χ/m_χ, the maximum allowed g_s^χ/m_χ is proportional to f_ϕ, such that the combination g_s^χ c_ψ/(f_ϕ m_χ) is independent of f_ϕ but depends on m_ϕ and c_ψ. The value of c_ψ is model-dependent and its maximum typical value is not sharply determined, so we blur the edge of this region over the range corresponding to 1<c_ψ<100. We stress that values of g_s^χ that do not saturate ϕ≃ f_ϕ (and smaller c_ψ) are equally plausible and lead to theories within the blue “Expected in Minimal Models” region. For m_ϕ≲ 10^-24 eV, existing and future experiments can probe sizable parts of the parameter space of the effective Lagrangian in Eq. (<ref>) that are not otherwise excluded. As expected, for fixed (c_ψ g_s^χ)/(f_ϕ m_χ) the signal is weaker at larger axion masses, but the observational constraints also weaken. Although beyond current experimental sensitivity, future experiments will begin to explore the parameter space for which the galactic axion field is smaller than f_ϕ and the effective Lagrangian Eq. (<ref>) is valid even for the simplest underlying theories. We also reiterate that more exotic theories (with effective c_ψ≫ 1) can lie in the part of parameter space above this that is already experimentally tested. Finally, we note that for all m_ϕ and f_ϕ considered, the DM relic abundance of ϕ itself is indeed negligible assuming misalignment production. § DISCUSSION The theories we have studied have an elegant UV interpretation: It is plausible that a typical string theory compactification leads to multiple light axions and several dark sectors. Some of the axions might have CP violating couplings to the dark sectors. One of these dark sectors could host the DM and would communicate with the visible sector through a long-range axion force in the form of a galactic axion gradient. Ultra-precise table-top experiments already have impressive sensitivity to this scenario, and orders-of-magnitude improvement is possible in the future. Such experimental searches are “looking under a lamppost”, in that a theory need not lie in the upper part of the “minimal theories” parameter space shaded in Figure <ref> that can be explored in the near future. Nevertheless, given that such technologies are being developed for independent purposes, it is valuable to know that experimental results can be reinterpreted as a search for DM. Moreover, the prediction that the energy shift is maximised when the SM particle's spin is aligned with the galactic centre (typically not expected for other new-physics signals) might allow for simple experimental adaptations to increase sensitivity, e.g. by reorienting existing apparatus. Although the DM CP violating / SM CP preserving axion portal that we have considered is novel, we note that related ideas have been analysed in the past. Experiments searching for monopole-dipole forces where both parts of the interaction correspond to SM fermions have been performed <cit.>, and there are promising prospects for the near future <cit.>. There has also recently been work analysing the effect of scalar-scalar interactions between the SM and DM on the trapping of dark matter in celestial bodies <cit.>. We have given a simple example UV completion in which the DM is a bound state of a dark fermion and a dark gluon. However, there are likely to be other models that also reduce to Eq. (<ref>) at low energies, and it would be interesting to explore whether these can lead to new phenomenology or signals.
For example, there might be suitable theories in which the DM is a weakly coupled bound state of the type χ=𝒬𝒬 (where 𝒬 is a heavy hidden sector quark), similar to bottomonium in QCD. There might also be UV completions in which the induced axion field is not screened if ϕ∼ f_ϕ is reached in the galaxy. We have focused on the time-independent solution for the sourced ϕ field, Eq. (<ref>), which we expect to be, approximately, reached in the Milky Way after a full cosmological history. However, it would be interesting if ϕ had not yet fully relaxed to this form, e.g. due to oscillations left over from galactic formation, or had time-dependence, e.g. due to the dynamics of DM substructure within the galaxy. As mentioned, such effects could lead to stronger signals, and it would be interesting to investigate them in the future. More generally, we have conservatively applied constraints on long-range DM-DM forces ignoring the periodicity of the axion potential, and it would be interesting to reanalyse these. Additionally, as mentioned in Ref. <cit.>, it would be interesting to investigate the cosmological effects of kpc-range DM-DM self-interactions, similarly to the analysis carried out in Refs. <cit.> for forces with longer range. These might lead to stronger constraints than the existing ones we have made use of, as well as possible complementary signals. We thank Prateek Agrawal for collaboration at early stages of this project as well as for very useful discussions. We also thank Junwu Huang and Surjeet Rajendran for useful discussions, and Javier Fernandez Acevedo and Junwu Huang for useful comments on a draft. EH acknowledges the UK Science and Technology Facilities Council for support through the Quantum Sensors for the Hidden Sector collaboration under the grant ST/T006145/1 and UK Research and Innovation Future Leader Fellowship MR/V024566/1. MR's work is supported by the STFC grant ST/T006242/1. MR thanks the CERN Theory Department, where this work was finished, for hospitality. Searching for a dark matter induced galactic axion gradient Supplemental Material Edward Hardy, Mario Reig, and Juri Smirnov § COMPACTNESS OF THE AXION FIELD As described in the main text, the fact that the axion couplings are periodic means that the DM is automatically screened and the induced galactic axion field saturates once values ϕ∼ f_ϕ are reached (rather than winding the fundamental domain and growing indefinitely). To see this explicitly, we consider an axion potential motivated by that of the QCD axion, including its CP violating couplings to quarks. Suppose that the axion potential is generated by a hidden sector gauge group running into strong coupling in the IR. In the UV, we assume the axion is coupled to the dark gluons as α_D/f_ϕ ϕ G_DG̃_D . Under quite general conditions, the minimum of the contribution to the potential resulting from strong coupling is CP conserving <cit.>. In the absence of light chiral fermions in the hidden sector (which could lead to a chiral suppression of the axion mass, analogous to the QCD axion mass' dependence on the light quark masses), the resulting axion mass is of order m_ϕ∼Λ^2/f_ϕ. Let us moreover assume that there is an additional CP violating contribution to the axion's potential, associated with a scale Λ', that shifts the minimum slightly.
Temporarily changing conventions such that ϕ=0 is the CP conserving point, the resulting axion potential (considering a simple analytic functional form; more complicated potentials lead to the same effects) is <cit.> V= -Λ^4 cos(ϕ/f_ϕ) -Λ'^4 cos(ϕ/f_ϕ + δ) . Analogous to the QCD axion, if the DM is charged under the dark gauge group we expect a CP violating coupling of the axion to dark matter of the form V⊃Λχ̅χcos(ϕ/f_ϕ) . In more detail, a coupling of simply Λ in Eq. (<ref>) is expected for DM with a mass of order Λ and for the quark-gluon bound state described in the main text. For a 𝒬𝒬 bound state with mass larger than Λ the coupling is expected to take a more complicated form. If there is a chiral suppression of the axion mass, the strength of the interaction in Eq. (<ref>) is typically also suppressed (an analogous dependence on m_um_d appears in the QCD axion's CP violating interactions). Redefining ϕ↦ϕ - θ, with θ such that (with <χ̅χ>=0) the minimum of the potential is at ϕ=0, the matter part of the potential takes the form V⊃Λχ̅χcos(ϕ/f_ϕ+θ) . Expanded at small ϕ/f_ϕ this leads to the required CP violating coupling: ℒ⊃sin(θ) Λ/f_ϕ ϕχ̅χ . From Eq. (<ref>) it can be seen that as ϕ/f_ϕ≃π-θ is approached, a non-zero χ̅χ background contributes less and less to the equation of motion of ϕ due to the non-linearities. Numerical, time-independent solutions of ϕ's equation of motion in the background of a static galactic DM distribution (with the boundary condition ϕ=0 at spatial infinity) indeed show that for Λ/f_ϕ sufficiently large that ϕ∼ f_ϕ is reached, the axion field saturates. This is consistent with the results in <cit.> (see also <cit.>), where it is shown that for a light QCD axion – that is, an axion coupled to QCD with a potential that is suppressed with respect to the standard prediction – the finite density effects inside neutron stars can make the axion field sit near a/f_a=π. In this case, the shift in the axion field value is due only to the finite density potential, which has a relative π phase with respect to the vacuum potential, and is much more relevant than the CPV contributions for the QCD axion. § THE AXION MASS AND A MODEL OF DARK MATTER Given the extremely light axions that we have considered, it is interesting to see that their masses can be achieved without fine-tuning, although of course one can simply disregard fine-tuning arguments, which opens up a wide range of possible DM models. Assuming, as before, that the axion potential is dominantly generated by a hidden sector running into strong coupling, this requires Λ≲ 10^-3 eV assuming f_ϕ∼ 10^9 GeV. It also requires that there are no large contributions to the axion's potential from high energy scales; this is a UV dependent issue analogous to the QCD axion quality problem, which we do not consider further. Insisting on such small values of Λ restricts the range of phenomenologically viable DM candidates that can also have sizable CP violating interactions with the axion. In particular, for fermionic DM the Tremaine-Gunn bound <cit.> requires m_χ≳ keV≫Λ. Additionally, to avoid the hidden sector gauge group being in the deconfined phase in the galaxy requires the typical inter-DM spacing n_χ^-1/3≫Λ^-1. Given that the DM energy density in the galaxy is of order 10^-6 eV^4 ≫Λ^4, this again requires a DM mass much larger than Λ.
DM candidates with a mass much larger than Λ might naively be expected to have CP violating axion couplings that are strongly suppressed, and therefore not to lead to a large enough galactic axion gradient to obtain a detectable signal, but this is not necessarily the case. As an example, suppose that the hidden sector contains a dark fermion with a mass much larger than the dark confinement scale, m_𝒬≫Λ, in the adjoint representation, 𝒬∼Adj. Then the DM χ can consist of bound states of the fermion, 𝒬, and a dark gluon, g. This kind of dark matter candidate has been proposed in Ref. <cit.>, see also <cit.> for a related discussion of a similar DM candidate charged under the SM SU(3). The mass of such a DM bound state, χ∼𝒬g, is dominated by the constituent quark, m_χ= m_𝒬, while its size is set by the confinement scale, r∼Λ^-1≫ m_𝒬^-1. This implies that, despite ρ_DM≫Λ^4, the typical occupancy number is low and, for sufficiently large m_χ, we will be in the confined phase of the theory, provided m_χ≫ keV for Λ=10^-3 eV. CP violating interactions between the DM and the axion are dominantly induced by the mixing between the axion and the lightest CP-even glueball. First, note that the mixing angle between the axion and the lightest CP-odd glueball can be estimated in analogy with the axion-pion mixing and is of order Λ/f_ϕ. Then, in the presence of a non-zero θ, the CP-odd and CP-even glueballs will mix, which leads to an effective CPV coupling of the axion to the DM bound state of order g_s^χ∼θΛ/f_ϕ, which is the parametric form assumed in the main text. Finally, we note that even assuming thermal production, the calculation of the relic abundance in this scenario is complicated, involving many different regimes and effects as well as non-perturbative dynamics <cit.>. Moreover, it is possible that the relic abundance is set by an initial DM asymmetry. Consequently, for our present work, we do not attempt to analyse the full cosmological history or specify a complete theory that gives the correct DM relic abundance consistent with all observational constraints. We do however note that for small Λ it is likely required that the dark sector is colder than the visible sector in the early universe. This is needed both to satisfy observational constraints on additional relativistic degrees of freedom, and so that the dark sector is in a confined phase when the cosmic microwave background forms, so that the dark matter does not have strong, long-range self-interactions at these times.
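As a supplementary numerical note: the static induced field of Eq. (<ref>) can be evaluated directly for any spherically symmetric density, since for the Yukawa kernel the angular integral is analytic, giving ϕ(r)=(g_s^χ/m_ϕ)[e^-m_ϕ r/r ∫_0^r dr' r'n_χ(r')sinh(m_ϕ r') + sinh(m_ϕ r)/r ∫_r^∞ dr' r'n_χ(r')e^-m_ϕ r']. A minimal Python sketch of this quadrature (schematic; the NFW normalization and scale radius, and the unit bookkeeping between GeV and kpc, are left as placeholders):

import numpy as np

def phi_profile(r, n_chi, m, g):
    # Induced axion field for a spherically symmetric DM number density
    # n_chi(r), using the analytic angular integral of the Yukawa kernel.
    # All quantities must be supplied in consistent natural units.
    nr = n_chi(r)
    inner = np.array([np.trapz(r[:k+1] * nr[:k+1] * np.sinh(m * r[:k+1]), r[:k+1])
                      for k in range(len(r))])
    outer = np.array([np.trapz(r[k:] * nr[k:] * np.exp(-m * r[k:]), r[k:])
                      for k in range(len(r))])
    return (g / m) * (np.exp(-m * r) / r * inner + np.sinh(m * r) / r * outer)

# NFW profile with placeholder normalization n_s and scale radius r_s
nfw = lambda rr, n_s=1.0, r_s=1.0: n_s / ((rr / r_s) * (1.0 + rr / r_s) ** 2)
r = np.linspace(1e-3, 50.0, 4000)
phi = phi_profile(r, nfw, m=0.2, g=1.0)
grad_phi = np.gradient(phi, r)  # radial gradient entering the spin coupling

Comparing the maximum of ϕ(r) against f_ϕ then indicates directly whether the linear treatment of the main text or the screened regime of this appendix applies.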
http://arxiv.org/abs/2407.01910v2
20240702032124
MG-Verilog: Multi-grained Dataset Towards Enhanced LLM-assisted Verilog Generation
[ "Yongan Zhang", "Zhongzhi Yu", "Yonggan Fu", "Cheng Wan", "Yingyan Celine Lin" ]
cs.LG
[ "cs.LG", "cs.AI", "cs.AR" ]
MG-Verilog: Multi-grained Dataset Towards Enhanced LLM-assisted Verilog Generation Yongan Zhang, Zhongzhi Yu, Yonggan Fu, Cheng Wan, Yingyan (Celine) Lin celine.lin@gatech.edu Georgia Institute of Technology Atlanta, Georgia, USA July 8, 2024 ============================================================================================================================================================= § ABSTRACT Large Language Models (LLMs) have recently shown promise in streamlining hardware design processes by encapsulating vast amounts of domain-specific data. In addition, they allow users to interact with the design processes through natural language instructions, thus making hardware design more accessible to developers. However, effectively leveraging LLMs in hardware design necessitates providing domain-specific data during inference (e.g., through in-context learning), fine-tuning, or pre-training. Unfortunately, existing publicly available hardware datasets are often limited in size, complexity, or detail, which hinders the effectiveness of LLMs in hardware design tasks. To address this issue, we first propose a set of criteria for creating high-quality hardware datasets that can effectively enhance LLM-assisted hardware design. Based on these criteria, we propose a Multi-Grained-Verilog (MG-Verilog) dataset, which encompasses descriptions at various levels of detail and corresponding code samples. To benefit the broader hardware design community, we have developed an open-source infrastructure that facilitates easy access, integration, and extension of the dataset to meet specific project needs. Furthermore, to fully exploit the potential of the MG-Verilog dataset, which varies in complexity and detail, we introduce a balanced fine-tuning scheme. This scheme serves as a unique use case to leverage the diverse levels of detail provided by the dataset. Extensive experiments demonstrate that the proposed dataset and fine-tuning scheme consistently improve the performance of LLMs in hardware design tasks. § INTRODUCTION Large Language Models (LLMs) have recently emerged as a promising approach to streamline hardware design processes <cit.>. By encapsulating vast amounts of domain-specific data and enabling users to interact with the design processes through natural language prompts, LLMs have the potential to make hardware design more accessible to a broader range of developers. This increased accessibility can foster innovation and accelerate the development of new hardware solutions, as it allows developers with varying levels of expertise to contribute to design processes. Despite the great potential of LLMs, existing state-of-the-art (SOTA) general LLMs, e.g., OpenAI's GPT-4 <cit.>, are still limited in their ability to generate practical hardware designs. For example, they might generate non-synthesizable or non-functional hardware source code <cit.>. To address this limitation, recent studies suggest that incorporating additional domain-specific data is crucial for enhancing LLMs' performance in hardware design tasks, using techniques across the scopes of LLM inference, fine-tuning, or pre-training. Specifically, one approach to improve LLMs' hardware design capabilities is to provide them with additional relevant design examples during inference-only generation, e.g., GPT4AIGChip <cit.>. It has been shown that this method can significantly enhance the quality of generated High-Level Synthesis (HLS) hardware code.
Another approach is to fine-tune LLMs on carefully curated hardware design datasets, e.g., VerilogEval <cit.>, which has been shown to improve LLMs' performance in generating Verilog code. Alternatively, LLMs can also be pre-trained on diverse datasets from various hardware design domains to specialize in general hardware design concepts, as exemplified by ChipNemo <cit.>, leading to improved general performance across a range of hardware design tasks. Although the aforementioned approaches show promise in enhancing LLMs' performance in hardware design tasks, their progress can be hindered by the limitations of current publicly available hardware design datasets. As we will later analyze, the size, complexity, and detail granularity of datasets are essential factors for improving LLMs' performance. However, existing datasets often fall short in one or more of these aspects. Some datasets, e.g., those used in <cit.>, contain only a small number of data points (e.g., under 200), which makes them suitable only for benchmarking the LLMs' task performance but insufficient for effectively fine-tuning LLMs. Other datasets, like those employed in <cit.>, can be simplistic, either lacking important features (e.g., code samples containing multiple module instantiations and aligned descriptions) or providing only high-level descriptions for each code piece. This simplicity can limit the fine-tuned LLMs' generalization performance when faced with diverse user instructions, thus reducing their effectiveness. To address the limitations of existing datasets and unlock the full potential of LLM fine-tuning and in-context learning for hardware design tasks, we propose a Multi-Grained-Verilog (MG-Verilog) dataset. This dataset includes hardware descriptions at different levels of detail and their corresponding Verilog code samples with varying design complexity. These features make it suitable for both the inference and fine-tuning stages of LLMs to enhance their performance in hardware design tasks. Our main contributions can be summarized as follows: * We introduce a set of essential criteria for high-quality hardware datasets that can be effectively utilized by LLM-assisted hardware design techniques. These criteria can serve as a guide for the development of future datasets in this domain. * We present an open-source MG-Verilog dataset[<https://github.com/luke-avionics/mg-verilog>], which meets the aforementioned criteria. Additionally, we provide the necessary infrastructure for users to access, integrate, and extend the dataset for their specific project needs, promoting collaboration and facilitating further research in this area. * We demonstrate a unique use case of the MG-Verilog dataset by proposing a balanced fine-tuning scheme that leverages the diverse levels of detail provided by the dataset. This scheme validates and showcases the potential of the dataset to enable novel approaches in LLM-assisted hardware design. * Extensive experiments show that LLMs fine-tuned with our MG-Verilog dataset outperform those trained on datasets from other sources in terms of both code implementation accuracy and the sophistication of generated hardware designs. These results highlight the effectiveness of our dataset in enhancing LLMs' performance for hardware design tasks. § CRITERIA FOR DATASETS IN LLM-ASSISTED HARDWARE DESIGN To create a high-quality dataset for LLM-assisted hardware design, we first establish design criteria to guide the development of the MG-Verilog dataset. Sufficient dataset size.
This is crucial for both training (i.e., domain-specific pre-training or fine-tuning) and inference (i.e., in-context learning) of LLMs. A larger dataset provides diverse examples for improved generalization performance during training <cit.> and enables effective techniques such as Retrieval-Augmented-Generation (RAG) for enhanced generation quality during inference <cit.>. Accurate code-description pairs. Each code sample needs to be correct, functional, and associated with a precise description of its functionality. Inaccuracies or ambiguity can mislead LLMs during fine-tuning or pre-training and lead to erroneous code generation during inference. Varied description detail levels. They are necessary to address two challenges. Datasets with only high-level descriptions may not provide sufficient detail for accurate code generation or effective LLM training (i.e., fine-tuning or pre-training), especially for complex designs. Conversely, datasets dominated by detailed descriptions may limit practical utility, as LLMs trained on such datasets might require users to provide elaborated prompts, which can be as labor-intensive as coding from scratch. Hence, an effective dataset should incorporate both high-level and detailed descriptions in a proper balance. In particular, high-level descriptions can facilitate user-friendly LLM interactions, while detailed descriptions are crucial for enabling LLMs to create complex designs, offering in-depth guidance for LLMs during training, or serving as a comprehensive reference during inference. Extensibility and integrability for future development. A high-quality hardware dataset should be designed with the research community in mind, allowing for easy extension and integration into various projects. The rapidly evolving nature of hardware design necessitates a dataset that can adapt to the latest trends and requirements. Moreover, the vast scope of hardware design means that different developers may have specific focused areas, making it challenging for a single organization to cover all possible scenarios in a one-time effort. To address this issue, the dataset should be structured in a way that encourages researchers to contribute to its growth and adapt it to their specific needs, fostering collaboration within the research community and ensuring its relevance and utility. This approach not only benefits individual projects but also contributes to the overall advancement of LLM-assisted hardware design methodologies. § THE PROPOSED MG-VERILOG DATASET §.§ Dataset Overview The MG-Verilog dataset consists of over 11,000 Verilog code samples and their corresponding natural language descriptions, serving as the desired outputs and test inputs for various LLM-assisted hardware design tasks, such as Verilog code generation. §.§ Dataset Construction The construction of the MG-Verilog dataset involves several steps to ensure the quality and usability of the data. §.§.§ Data Collection and Preprocessing Raw source code from open-source repositories is collected and preprocessed to ensure correctness. Adapting from VerilogEval <cit.>, we use Pyverilog <cit.> to parse the raw Verilog code and exclude code samples containing syntax errors. Deduplication techniques are applied to remove redundant code samples. 
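A minimal sketch of this filtering and deduplication stage is given below (illustrative only; the Pyverilog parse call follows its documented interface, while the file handling and normalization details are simplified placeholders):

import hashlib
from pyverilog.vparser.parser import parse  # Pyverilog's Verilog parser

def is_syntactically_valid(path):
    # Keep only samples that Pyverilog can parse without raising errors.
    try:
        parse([path])
        return True
    except Exception:
        return False

def dedup_key(code):
    # Hash of whitespace-normalized code, used to drop verbatim duplicates.
    return hashlib.sha256(" ".join(code.split()).encode()).hexdigest()

def preprocess(paths):
    seen, kept = set(), []
    for path in paths:
        if not is_syntactically_valid(path):
            continue
        with open(path) as f:
            code = f.read()
        key = dedup_key(code)
        if key not in seen:
            seen.add(key)
            kept.append(path)
    return kept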
Additionally, dependencies of the code samples are extracted, i.e., sub-modules of multi-module code samples are identified and recorded as metadata to facilitate research on techniques such as few-shot learning and RAG for generating multi-module Verilog code. §.§.§ Description Generation Natural language descriptions are appended to the code samples using an approach similar to VerilogEval <cit.>, leveraging LLMs' superior natural language generation capabilities. In addition to simple high-level descriptions for each code piece, varying levels of detailed descriptions aligned with the code complexity are provided, as detailed in Sec. <ref>. §.§ Multi-grained Dataset Structure To strike a balance between design generation accuracy and user-friendliness, we adopt a multi-grained data structure, which encompasses descriptions at various levels of detail in order to satisfy the third criterion in Sec. <ref>. As depicted in Fig. <ref>, this structure organizes hardware code descriptions, ranging from high-level summaries to detailed, line-by-line comments. The multi-grained structure is designed to mimic the learning and design processes of human designers. The objective is to simplify the learning curve for using the dataset and, as demonstrated later, to better leverage the strengths of LLMs for enhanced description generation accuracy. Specifically, the multi-grained structure mirrors the typical two phases experienced by human designers. In the learning phase, a hardware designer starts with the basic syntax and semantics of the design language, gradually advancing to apply this knowledge to design higher-level hardware modules. Conversely, in the design phase, the process begins with high-level architectural planning for the entire design, followed by a detailed, step-by-step implementation. §.§ Detailed Statistics of the Dataset Fig. <ref> presents detailed statistics of the MG-Verilog dataset, illustrating the distribution of token length for both the code and varying levels of descriptions. The complexity of the code samples is also reflected in the distribution of the number of module instances. The dataset shows a wide range of natural language description details and code complexities, making it suitable for diverse LLM-assisted hardware design tasks. §.§ Dataset Access and Extension Instructions The MG-Verilog dataset is publicly available and packaged in the standard HuggingFace Datasets format <cit.> for easy access and integration. Each dataset entry contains the following fields: code, high-level summaries, detailed summaries, block-level summaries, line-by-line comments, and metadata. The metadata field currently includes the module dependencies of the code samples. The MG-Verilog dataset is open-sourced from raw data collection to the final dataset construction in a modular manner for straightforward extension. The demonstrated balanced fine-tuning use case is also provided as a reference. § DATASET UNIQUE USE CASE: A BALANCED FINE-TUNING SCHEME In this section, we show a unique use case of our proposed MG-Verilog dataset. Specifically, we introduce a balanced fine-tuning scheme to fully harness the diverse levels of detail provided by our MG-Verilog dataset. The challenge to address. The ultimate goal of fine-tuning is to generate hardware code solely from high-level design descriptions. However, challenges arise when determining the type of descriptions to be used for fine-tuning. 
On the one hand, fine-tuning with only simple high-level descriptions may not provide LLMs with sufficient information to generate code for complex designs. On the other hand, exclusively relying on detailed descriptions could hinder LLMs' ability to respond to more high-level user instructions. Our balanced fine-tuning scheme. To tackle the aforementioned challenge, we present a balanced fine-tuning scheme that randomly selects training samples with varying levels of descriptions from the MG-Verilog dataset in each fine-tuning iteration. The aim is to achieve a balance when imparting knowledge of both global and local code semantics to LLMs. § EXPERIMENTAL RESULTS §.§ Experiment Setup Dataset generation. The primary model for generating descriptions is LLaMA2-70B-Chat. GPT-3.5-turbo serves as an automated backup for scenarios where the maximum token limit is exceeded. Based on empirical testing, we set the temperature to 0.7 and top_p to 0.95, maintaining other hyperparameters at their default values for the best quality. Fine-tuning and inference. CodeLLaMA-7B-Instruct is chosen as the primary model for hardware code generation due to its superior coding performance and small model size. For fine-tuning it on our dataset, the fine-tuning approach is based on QLoRA <cit.>, using its default training settings to demonstrate our delivered dataset's effectiveness. The fine-tuned model is evaluated using 143 Verilog coding questions from the benchmark in <cit.>, excluded from the training set. Hardware evaluation and metrics. The validity of each generated design is tested by compiling it and checking against its RTL simulation results in pre-defined testbench cases. We employ unbiased pass@1, pass@5, and pass@10 metrics, calculated from 20 generation runs, as established in <cit.>. §.§ Ablation Study on Different Evaluation Settings In this section, we explore the performance of fine-tuned models using varying data formats in both the training and evaluation phases. Although high-level global summaries are the most user-friendly data format, their ambiguity often results in a lack of detailed information necessary for precise code generation. In some cases, detailed global summaries can actually be more advantageous for expert users who have a deep understanding of code structures. Consequently, an ideal RTL code generation dataset would facilitate consistent model performance across a range of input instruction complexities. Observations and analysis. Tab. <ref> provides insights into these findings. Notably, we can observe: (1) Models fine-tuned with the MG-Verilog dataset exhibit the most robust performance in all tested evaluation settings. Specifically, while different evaluation settings tend to bias the fine-tuning setting that aligns with them, models fine-tuned with the MG-Verilog dataset consistently rank in the top two positions when compared to other baselines. In contrast, other baselines may perform well only under their aligned evaluation settings and notably under-perform in other evaluation settings. (2) Training exclusively with either overly detailed or overly high-level data can result in decreased performance, indicating the importance of having balanced training data. Specifically, Tab. <ref> reveals that, apart from the MG-Verilog dataset, models trained with detailed global summaries yield the highest pass rates. These summaries strike a balance between the generality of high-level global summaries and the specificity of block summaries. 
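For reference, the core of the balanced fine-tuning scheme described above reduces to a simple per-iteration choice of description granularity. A minimal Python sketch (illustrative only; the field names paraphrase the dataset fields listed earlier, and the instruction/response pairing is a placeholder for the actual prompt template):

import random

# Field names paraphrase the dataset fields described above (illustrative)
GRANULARITIES = ["high_level_summary", "detailed_summary",
                 "block_summary", "line_by_line_comments"]

def balanced_examples(dataset, batch_size):
    # Each fine-tuning iteration pairs a code sample with a randomly chosen
    # description granularity, balancing global and local code semantics.
    batch = []
    for sample in random.sample(list(dataset), batch_size):
        level = random.choice(GRANULARITIES)
        batch.append({"instruction": sample[level], "response": sample["code"]})
    return batch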
§.§ Ablations on the Number of Training Samples We further examine how the quantity of training samples affects the performance of models fine-tuned for RTL code generation tasks. As illustrated in Fig. <ref>, there is a clear trend where the model's performance improves with an increase in the number of training samples. However, we also note a diminishing returns phenomenon. Specifically, the performance gains from additional training samples decrease as the total number of samples grows. This trend could be attributed to either the limited diversity in the raw source code or the potential need for more optimal hyperparameter tuning and model configurations. These aspects, being orthogonal to the dataset structure proposed, are left for future exploration. § RELATED WORK LLMs have been applied in various stages of the hardware design process, including verification <cit.>, security flaw detection <cit.>, and code generation <cit.>. However, their performance is still limited due to insufficient exposure to hardware data during pretraining <cit.>. Some studies <cit.> have tried to rectify this by supplying more hardware code samples and fine-tuning the LLMs. Yet, the datasets used are still either too small <cit.> or overly simplistic <cit.>, which hinder effective fine-tuning of LLMs. Our MG-Verilog dataset addresses this issue by providing an open-sourced, high-quality dataset, essential for optimizing LLM fine-tuning and in-context learning. § CONCLUSION In this work, we aim to mitigate the limitations of existing datasets for LLM-assisted hardware design by proposing the open-sourced Multi-Grained-Verilog (MG-Verilog) dataset. The MG-Verilog dataset features hardware descriptions at different levels of detail and their corresponding Verilog code samples for more generic use cases. We have demonstrated the effectiveness of the dataset through a balanced fine-tuning scheme. Extensive experiments show that LLMs fine-tuned with the MG-Verilog dataset outperform those trained on other datasets in terms of Verilog code generation accuracy. § ACKNOWLEDGMENTS The work is supported by the National Science Foundation (NSF) through the RTML funding (Award number: 2400511), an NSF CAREER award (Award number: 2345577), and CoCoSys, one of the seven centers in JUMP 2.0, a Semiconductor Research Corporation (SRC) program sponsored by DARPA.
Next Generation Very Large Array Memo #122: Characterization of the synthesized beam with and without MID antennas in Mexico
Alfonso Trejo-Cruz, Roberto Galván-Madrid, Carlos Carrasco-González, Eric F. Jiménez-Andrade, Stan Kurtz, Jesús M. Jáquez-Domínguez, Alice Pasetto, Luis A. Zapata
========================================================================
§ ABSTRACT Synthetic observations of the synthesized beam (PSF) with and without the antennas in Mexico are analyzed. For a simple continuum observing setup, we generated visibility files and their associated PSF images for a grid of parameters (robust weighting, tapering, and declination). The tests were done for both the MID and MID+Spiral+Core configurations and their cropped versions without antennas in Mexico. We show that the performance of the Array, in terms of the beam properties, is in general significantly better when the MID array antennas in Northern Mexico are present and observations target southern sources. At a declination of –40 deg, there are increments in the ellipticity of at least ∼ 1.3× and 1.2× for a tapering of 3.0 and 4.0 mas, if the antennas in Mexico are not included. For the parameter space tested, the changes in ellipticity of the MID and MID+Spiral+Core configurations differ by ∼10%. Larger tapering values help to reduce the ellipticity for cropped configurations at all declinations, but they impose more constraints in terms of angular resolution. § CONTEXT AND GOALS The current MID configuration is denoted by 28MOD <cit.> and has seven antennas located in Northern Mexico. This effectively extends the North-South baselines of the array, resulting in a more circular synthesized beam (PSF) when observing southern sources, in particular for declination Dec ≤ ∼ –20 deg. Therefore, those antennas are important in the overall context of the ngVLA, as the community is expected to actively pursue observations in that region of the sky and exploit synergies between the ngVLA and SKA. In this memo, we explore the implications for the synthesized beam if the antennas in Mexico were not to be deployed. This would affect not only the performance of the MID configuration but also observing modes using the entire MID + Spiral + Core. We focus on characterizing the PSF using a continuum observing mode, where the configuration, declination, and the imaging weighting and tapering values are part of the parameter space. The consequences of (not) having MID antennas in Northern Mexico for specific science cases will be explored in the future. § SETUP FOR SYNTHETIC OBSERVATIONS AND MAPPING In order to characterize the point spread function (PSF) or synthesized beam, we set a parameter space for both the creation of visibilities (uv plane) and imaging stages. Visibility (MS) files were created using the <cit.> simulator toolkit, and the PSF images with the imaging task. Our parameter space uses –40 ≤ Dec ≤ 45, covered in 12 steps, every 15 and 5 deg for Dec > 0 and < 0, respectively, in order to better sample the response of the array for more challenging observing directions.
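As a concrete reading of this sampling, the declination grid can be reproduced in a few lines; the exact set of values is our inference from the description above (12 steps, 5 deg spacing south of the equator, 15 deg spacing north of it), so the memo's actual grid may differ slightly:

```python
import numpy as np

# Declination grid as described above: 12 steps covering -40 to +45 deg,
# every 5 deg for Dec < 0 and every 15 deg for Dec >= 0.
# (Our inference from the text, not an official ngVLA configuration file.)
dec_south = np.arange(-40, 0, 5)    # -40, -35, ..., -5 deg (8 values)
dec_north = np.arange(0, 46, 15)    # 0, 15, 30, 45 deg (4 values)
dec_grid = np.concatenate([dec_south, dec_north])
assert dec_grid.size == 12
```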
For each of these declinations, the following parameters were used: an on-source time of 2 hrs resulting from the hour angle (HA) –1 to +1 hr, a total bandwidth of 2.0 GHz in a single channel, and a time averaging per visibility of 60 seconds. The synthetic observations presented here are only meant to make a basic characterization of the PSF. Multiple observing blocks and calibration loops are not used, as we seek to characterize the PSF in relatively short observations. For the imaging stage, we use robust values from R = –2.0 to –0.2 and from +1.2 to +2.0, for the MID+Spiral+Core and MID-only configurations, respectively. Steps of 0.2 in the Briggs weighting scheme were employed for both configurations. All images have an image size of 8192 pixels and a cell size of ∼ 0.027 mas. This corresponds to 1/20 of the angular resolution of the MID+Spiral+Core longest baseline and is chosen to oversample the synthesized beam core. 8192 pixels correspond to maps of 221 mas in size, adequate to properly grid visibilities from the Core array baselines. The specific robust range per configuration, instead of choosing the whole range from –2.0 to +2.0, is intended to obtain more Gaussian PSFs, and therefore to ensure that the CASA synthesized beams are representative of them. The wide skirts in the MID+Spiral+Core images are reduced when choosing the more negative robust values; the PSFs from MID-only images are of better quality when choosing parameters closer to natural weighting. Tapering in the imaging stage was added to mitigate the non-Gaussian features (narrow core plus wide skirt) in the PSF, which are more relevant for the MID+Spiral+Core images. A detailed discussion of these properties is in <cit.>. The tapering values used are 2, 3, 4, and 5 mas. While we tested smaller tapering values, e.g., 1.0 and 0.5 mas, the resulting PSF images still present the known central sharp spike combined with a wide skirt, for some of the robust/declination values. A PSF was produced for each of the parameter combinations, resulting in a total of 240 and 480 images for the MID-only and MID+Spiral+Core configurations, respectively. In this memo we focus on the more challenging cases at southern declinations. All data were produced with version 6.5.4 of CASA. §.§ Configurations Both the MID and MID+Spiral+Core configurations were used for the simulations, and for each of them we created an alternative configuration by removing the MID antennas located in Mexico (henceforth called the cropped configurations). We kept T27 as part of these cropped configurations, even though that antenna is located in Mexico in 28MOD. T27 is located about half a kilometer from the US-Mexico border, next to the Río Bravo. Therefore, the results of our simulations will not change in a significant way if this antenna were to be relocated to the US, just across the border. The MID configuration used (28MOD), along with its cropped version, is presented in Fig. <ref>. Note that in the following sections we only discuss results for the case of ngVLA Band 6 centered at 93 GHz. The main results from this memo would also apply to the other bands at lower observing frequencies, when pixel sizes and tapering values are chosen appropriately. § RESULTS Fig. <ref> presents examples of the uv distribution resulting from the MID and cropped MID configurations. We only display the cases for Dec values of +30 and –40 deg, to emphasize the clear differences between an ideal and a challenging case for the ngVLA.
The relatively short on-source time helps to illustrate more clearly the consequences of removing the antennas in Mexico from the array; thus, short observations will be the most affected. Other ngVLA memos <cit.> explore the properties of the PSF using longer integration times of ∼8 hours. As expected, there are important differences between ngVLA observations at northern declinations and those in the southern hemisphere. At Dec = +30 deg the uv distribution is close to circular, whereas at Dec = –40 deg it is considerably smaller in the North-South direction, resulting in a very elongated East-West distribution. This situation is exacerbated with the loss of antennas in Northern Mexico (green-colored data). Note that the antennas in Mexico also help to increase the density of data points at mid-length baselines, regardless of declination. §.§ PSF images For the same declinations shown in Fig. <ref>, and by employing the MID configuration and its cropped version, the central regions of the PSF images (R = +2.0; taper = 3.0 mas) are displayed in Fig. <ref>. The visualization software <cit.> was used with an arbitrary color scale per image to better visualize the morphology of the main lobe and the structure around it. In the northern hemisphere, as shown in the example at Dec = +30 deg (upper panels), the absence of the Mexican antennas will not pose a serious concern to the performance of the array. Image structure is similar between the two panels, without any obvious artefacts around the PSF core. At Dec = –40 deg (lower panels), the 10% level contour occupies a larger area, signifying the more extended low-level emission outside the PSF core. While the structure around the main lobe is not completely different, the cropped configuration image has larger negative areas around the main lobe. It also produces a more elongated beam, which will in turn result in a larger geometric mean value. The situation is similar for the cases produced with the MID+Spiral+Core configurations (Fig. <ref>), using R = –1.6 and taper = 3.0 mas. Again, an extended 10% level contour indicates that the PSF core starts to blend with its surroundings. All this confirms that it is not only the MID-only configuration that will benefit from keeping the antennas in Mexico, as currently planned. In the next sections we compare the beam properties produced by nominal and cropped configurations. §.§ PSF radial profiles In Figs. <ref> and <ref> we present examples of the PSF radial profiles for MID (R = +2.0; taper = 3.0 mas) and MID+Spiral+Core (R = –1.6; taper = 3.0 mas), as well as for the cropped versions. The profiles are azimuthally averaged and produced with a Python package for aperture photometry. In the figures, we compare the PSF profiles with the corresponding Gaussian response built using the fitting (clean beam) results, i.e. using the geometric mean of the beam semi-major and semi-minor axes as the Gaussian HWHM. In this memo, we mainly look at how the PSF gets degraded by removing the antennas in Mexico, so the images are compared one-to-one, i.e. produced by the same robust and tapering parameters. At Dec = +30 deg, the Gaussian fitting parameters produce a (clean beam) profile that follows the actual PSF profile well for both MID configurations (Fig. <ref>, left panels). Both HWHM values and the subtraction of the PSF profiles are within 1%. However, at Dec = –40 deg (right panels) the difference between the two MID configurations is obvious.
The beam HWHM is now ∼15% larger when removing the antennas from Mexico, and the subtraction of the PSF profiles peaks at around ∼5%. A similar behavior is seen when comparing the MID+Spiral+Core configurations (Fig. <ref>). The differences in terms of beam HWHM and PSF profile subtractions are at very similar levels as for the MID-only case at Dec = –40 deg, ∼17% and ∼7%, respectively. Again, at Dec = +30 deg there is no significant difference between the two PSF profiles. In the next section we look at the ellipticity, where a more obvious trend in beam properties can be seen. §.§ PSF ellipticity We are in particular interested in looking at the performance of the array for cases at negative declinations, for both MID and MID+Spiral+Core. Cases with smaller beam ellipticity are of course those at positive declinations. With this, and the fact that the ellipticity has a clearer dependence on the robust values and, to a lesser degree, on the configuration used, we look in more detail at the robust and declination parameter space. Fig. <ref> presents, as a function of declination (taper = 3.0 mas), the beam ellipticity and its ratio between the cropped and nominal configurations. For the MID cases (upper panels), and for all values in the range Dec < 0 deg, cropped-version images have a larger beam ellipticity, as compared to those of the nominal configuration. For example, at Dec = –40 deg it is ∼ 2.0 vs ∼ 1.6, for all robust values used (1.2 to 2.0). The mean values (dotted and dashed lines) provide a confirmation of a larger ellipticity trend for the cropped configurations, especially at southern declinations. As expected, at positive declinations the beam ellipticity is small at ∼ 10% above unity for both configurations. To see more clearly the change between cropped and nominal configurations, we look at their ellipticity ratio (right panel). It is clear that the beam will degrade (ellipticity-wise) by as much as ∼ 30% for Dec = –40 deg; at Dec = –20 this figure decreases to ∼ 7%. At positive declinations, the ratio essentially becomes unity. MID+Spiral+Core is a very different configuration than MID, with its large number of antennas at short baselines. We note an increment (Fig. <ref>, lower left panel) in the mean ellipticity, for both cropped and nominal versions, of ∼10-20% compared to the MID-only case, depending on declination. Thus, compared to MID, the beam ellipticity for MID+Spiral+Core is more affected for the observations/imaging setup used. The imaging results in similar ellipticity ratios (lower right panel) compared to MID, per declination. We find a variation of ∼ 5% between the two, based on the mean values. Finally, Fig. <ref> shows corresponding results for the tests with a taper of 4.0 mas. The same trends, for cropped configurations, of higher ellipticity and its ratio at southern declinations are found. However, all values are smaller overall when compared to the 3.0 mas taper setting, e.g., with 4.0 mas the most extreme cases have ellipticities of ∼ 1.8 and 2.0 (vs ∼ 2.0 and 2.2 for taper = 3.0 mas) at Dec = –40 deg, for cropped versions of MID and MID+Spiral+Core, respectively. The same trend is observed when looking at the mean ellipticity. In terms of the ellipticity ratio (right panels), we find values in the range of ∼ 1.2-1.3, for Dec = –40 deg, depending on the robust value and configuration.
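For reference, the ellipticity quoted throughout is simply the ratio of the fitted beam's major to minor axis, and the ratios above compare that quantity between cropped and nominal configurations. A minimal sketch follows; the fitted-beam values in it are illustrative placeholders, not the memo's measurements:

```python
# Beam ellipticity as used above: ratio of the fitted major to minor axis
# (unity for a circular beam). The axis values below are placeholders.
def ellipticity(bmaj_mas: float, bmin_mas: float) -> float:
    return bmaj_mas / bmin_mas

# Hypothetical fitted beams at Dec = -40 deg, taper = 3.0 mas:
nominal = ellipticity(bmaj_mas=5.6, bmin_mas=3.5)  # ~1.6 (nominal MID)
cropped = ellipticity(bmaj_mas=7.0, bmin_mas=3.5)  # ~2.0 (cropped MID)
print(cropped / nominal)  # ellipticity ratio, ~1.25 for these placeholders
```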
Overall, the synthesized beams obtained with a taper of 3.0 mas degrade (ellipticity-wise) on average ∼ 10% more (at Dec ≤ –20 deg) when removing the antennas from Mexico than those obtained with a taper of 4.0 mas. On the side of positive declinations, the difference in ellipticity ratio between tapering values is not significant. We can see that the mean (dotted lines) basically aligns with unity (vertical dashed line). Table <ref> presents mean values for the synthesized beam major and minor axes and the ellipticity for –40 ≤ Dec ≤ –25 deg. We tested the potential impact of the image size used to properly grid all visibilities at the shortest baselines. Additional synthetic observations were obtained using pixel and map sizes of 0.054 mas and 16384 pixels (or 885 mas). This test provided maps four times larger in angular units. While the fitted beams are not exactly the same as for the cases presented and discussed in this memo (0.027 mas/8192 pixels), the general trend of ellipticity and its ratio holds, i.e. the cropped configurations consistently show larger ellipticities at negative declination values. All mean values are calculated using all robust values, and therefore correspond to the dash-dotted lines in the left panels of Figs. <ref> and <ref>. Only one decimal is used for simplicity. § CONCLUSIONS Removing antennas from an interferometric array usually affects the PSF image quality. In this memo we showed that by removing the antennas in Mexico (with the exception of T27, which falls very close to the Mexico–U.S.A. border), several PSF properties worsen considerably at Dec ≲ –20 deg. In general, median values of the ellipticity increase, for all tapering values used. With respect to the nominal configurations and for a tapering of 3.0 mas, we see mean ellipticity (over the robust space) increments of ∼ 1.3× and 1.4× for a declination of –40 deg, for the MID and MID+Spiral+Core cases, respectively. At 4.0 mas, the above values become ∼ 1.2× and 1.3×. At positive declinations there is essentially a negligible penalty for the exclusion of antennas in Mexico. Although larger tapering values will help to reduce the ellipticity for cropped configurations at all declinations, this remains less effective in the southern hemisphere and imposes more constraints in terms of angular resolution. Thus, higher-resolution science cases, targeting sources in the southern hemisphere, will be the most affected if the most southern antennas (those in Northern Mexico) of the ngVLA are not deployed. § ACKNOWLEDGEMENTS We thank Viviana Rosero, Chris Carilli, and Eric Murphy (NRAO) for very useful discussion and feedback that helped to improve the content and reach of this work.
RollupTheCrowd: Leveraging ZkRollups for a Scalable and Privacy-Preserving Reputation-based Crowdsourcing Platform
Ahmed Mounsf Rafik Bendada, Mouhamed Amine Bouchiha, Mourad Rabah, Yacine Ghamri-Doudane
L3i - La Rochelle University, La Rochelle, France
{ahmed.bendada, mouhamed.bouchiha, mourad.rabah, yacine.ghamri}@univ-lr.fr
July 8, 2024
========================================================================
§ ABSTRACT Current blockchain-based reputation solutions for crowdsourcing fail to tackle the challenge of ensuring both efficiency and privacy without compromising the scalability of the blockchain. Developing an effective, transparent, and privacy-preserving reputation model necessitates on-chain implementation using smart contracts. However, managing task evaluation and reputation updates alongside crowdsourcing transactions on-chain substantially strains system scalability and performance. This paper introduces RollupTheCrowd, a novel blockchain-powered crowdsourcing framework that leverages zkRollups to enhance system scalability while protecting user privacy. Our framework includes an effective and privacy-preserving reputation model that gauges workers' trustworthiness by assessing their crowdsourcing interactions. To alleviate the load on our blockchain, we employ an off-chain storage scheme, optimizing RollupTheCrowd's performance. Utilizing smart contracts and zero-knowledge proofs, our Rollup layer achieves a significant 20x reduction in gas consumption. To prove the feasibility of the proposed framework, we developed a proof-of-concept implementation using cutting-edge tools. The experimental results presented in this paper demonstrate the effectiveness and scalability of RollupTheCrowd, validating its potential for real-world application scenarios. Keywords: Blockchain, Decentralized Reputation, Crowdsourcing, Privacy, zkRollups. Paper accepted at IEEE 48th Annual Computers, Software, and Applications Conference (COMPSAC), IEEE, Osaka, Japan (2024). § INTRODUCTION As a result of the growth of the Internet and the prevalence of mobile devices, crowdsourcing has become increasingly popular, and the platforms facilitating this approach have experienced a significant surge in usage and recognition. The term crowdsourcing was first introduced by Jeff Howe in 2006 <cit.>. It refers to a collaborative approach that delegates tasks, problems, or ideas to a broad collective. Mobile crowdsourcing, on the other hand, utilizes mobile devices like smartphones to perform the tasks. Crowdsensing, relatedly, gathers data from numerous individuals via IoT devices, such as smartphones and integrated sensors, to understand the physical environment.
A prominent illustration of crowdsourcing is Wikipedia (wikipedia.org). Most current crowdsourcing platforms, such as Fiverr (fiverr.com), are centralized, which raises concerns about privacy, security, and transparency. Centralization implies a concentration of control, wherein a single authority holds sway over operations and data. This concentration raises privacy worries, as user information and activities may be more susceptible to breaches or misuse. Security becomes a pressing issue due to the vulnerability of a central point of access, potentially exposing the platform to various risks and threats. Moreover, the lack of transparency in decision-making or data handling within such centralized platforms can lead to ambiguity, eroding users' trust and understanding of how their information is managed and utilized. In reputation-centric crowdsourcing systems, transparency gains even greater importance, as users need to know how their reputation scores are maintained. More precisely, they seek the ability to track and verify updates to their scores at any given moment. In the last few years, numerous efforts have arisen to leverage blockchain technology in addressing these issues <cit.>. The decentralization, transparency, and efficiency brought by blockchain are clearly what we always hoped for to build effective trustless reputation systems for crowdsourcing or any real-world application. However, transparent and effective blockchain-based reputation management requires the reputation model to be implemented on-chain, often using smart contracts, to enhance trust and achieve accountability. Unfortunately, in this situation, the blockchain is required to handle additional transactions such as task evaluation and overall reputation updates. This added workload significantly impacts both the scalability and performance of the system, leading to heightened gas costs, prolonged processing times, and increased time overhead. Consequently, addressing these challenges becomes imperative, as they stand as substantial barriers to the practical implementation of this solution in real-world situations. Motivated by the above challenges, our contribution presented in this paper covers the following points: * A blockchain-powered, fully decentralized platform to manage the entire reputation-based crowdsourcing process. * RollupTheCrowd leverages zkRollups (Layer-2) to empower scalability by alleviating the burden on the mainchain (Layer-1). * A privacy-preserving reputation model adaptable to diverse crowdsourcing scenarios and resilient against common reputation attacks. * Secure and robust crowdsourcing smart contract automation using Decentralized Oracle Networks (DONs). * The proposed solution is supported by a concrete proof of concept implemented using emerging technologies. RollupTheCrowd's code is available on GitHub (https://github.com/0xmoncif213/RollupTheCrowd). * Both the analytical and experimental evaluations validate the efficiency and scalability of our framework. The remaining organization of this paper is as follows. First, the preliminaries are introduced in Section <ref>. The existing related literature is summarized in Section <ref>. Section <ref> presents the overall framework proposed in this paper and describes the designed crowdsourcing scheme. Section <ref> details the proposed reputation model.
Section <ref> is devoted to the proof of concept presentation and its performance analysis. Finally, Section <ref> concludes our paper and discusses future work. § PRELIMINARY * InterPlanetary File System (IPFS): a peer-to-peer distributed file system that seeks to connect all computing devices with the same system of files. IPFS aims to replace the traditional centralized model of the Internet with a decentralized and more resilient system. It uses a Distributed Hash Table (DHT) to address content by its hash, making it efficient, secure, and resistant to censorship. IPFS is often utilized for Decentralized Applications (DApps) and to build a more robust and accessible Internet infrastructure <cit.>. * Decentralized Oracle Networks (DONs): decentralized systems that facilitate the retrieval and delivery of external data to smart contracts in a decentralized and trustless manner. These oracles serve as bridges between blockchain networks and real-world data sources (APIs, external systems, etc.). Since storing all data on-chain is unsuitable, oracles solve the problem of smart contracts being unable to access data that are not already stored on-chain, which can be a limiting factor for many application scenarios, such as multi-party business processes <cit.>. * Rollups: a Layer-2 (L2) scaling solution that offers a method to streamline the validation of transactions, cutting down on the resources and time needed by minimizing the data each node must process. This optimization is achieved through a secondary layer network involving actors who handle transactions off the primary chain. Subsequently, the transaction data is consolidated into batches and broadcast onto the Layer-1 (L1) blockchain. There are two types of Rollups: Optimistic and Zero-Knowledge (zk) Rollups. Optimistic rollups assume that transactions are valid, and no computation for verification is done by default, significantly improving scalability. In zk Rollups, on the other hand, each batch contains a cryptographic proof. Calculating the proofs is complex, but checking them on the mainchain is fast <cit.>. § RELATED WORK Having described the preliminaries of this work, let us now review previous efforts that have contributed to the decentralization of crowdsourcing platforms using blockchain. zkCrowd <cit.> presents a hybrid blockchain crowdsourcing platform with two ledgers. It combines Delegated Proof of Stake (DPoS) and Practical Byzantine Fault Tolerance (PBFT). These consensus protocols are chosen for their good performance, but the dual use of these protocols introduces complexity and vulnerability into the design of the hybrid blockchain without sufficient feasibility analysis. The CHChain <cit.> framework also proposes a hybrid structure and uses a Reputation-based PBFT consensus scheme to improve system throughput. However, the feedback-based reputation model is vulnerable to collusion attacks, raising concerns about the security of the whole system. RC-CHAIN <cit.> focuses on vehicular data sharing, using a consortium blockchain. However, it introduces centralization through Roadside Units (RSUs), acting as intermediaries. In <cit.>, supervised blockchain architectures are adopted for mobile crowdsourcing (sensing), introducing centralization concerns with a Key Distribution Center (KDC) and Task Distribution Center (TDC).
In <cit.>, a decentralized reputation system for e-commerce stores content on IPFS, addressing content volume concerns without explicitly delving into other issues such as identity management and scalability. It also groups evaluations and considers transaction magnitude, interaction time, and historical reputation scores, which leads to linkability and privacy exposure. RBT <cit.> tailors reputation assessment based on individual roles, raising re-entry attack concerns. ExCrowd <cit.> addresses challenges for newbies with an exploration approach through linear regression and decision tree algorithms; these algorithms can be computationally intensive and lack flexibility once deployed. In <cit.>, a reputation model focuses on data reliability for the crowdsensing use case, noting potential issues with storing a high volume of sensed data. Ensuring scalability is fundamental in the design of blockchain solutions. Nevertheless, certain previous studies <cit.> tend to disregard the challenges associated with scalability. Meanwhile, alternative methods employing rapid consensus protocols to scale the system <cit.> or resorting to centralization for enhanced performance <cit.> have proven ineffective and, at times, insufficiently secure. L1 scaling solutions are essential but may not provide a comprehensive solution. In summary, balancing decentralization, privacy preservation, and scalability is vital for building feasible and robust blockchain-based crowdsourcing solutions. The aforementioned studies present numerous limitations that prevent their widespread application. Therefore, to overcome these issues, this paper introduces RollupTheCrowd, a scalable, privacy-preserving, and fully decentralized reputation-based crowdsourcing framework. § ROLLUPTHECROWD FRAMEWORK After exploring related studies and the challenges identified in current solutions, in this section we present our framework. We begin by describing the complete architecture of the system and then detail all the components of the proposed solution. §.§ System Architecture Figure <ref> shows the proposed architecture for RollupTheCrowd. It has four components. §.§.§ Main Ledger with Dual Layers for Enhanced Scalability and Cost Efficiency The main ledger is the central element of our crowdsourcing platform, featuring a dual-layer structure as shown in Figure <ref>. The first layer operates as a traditional blockchain network, employing a Proof of Authority (PoA) consensus, ensuring the security and scalability of the mainchain (L1). The second layer employs a zero-knowledge (zk) Rollup solution to enhance scalability. Instead of processing each transaction on the main chain, a batch of transactions is processed and validated off-chain (on L2) by the aggregator. It then publishes the new state root, compressed transaction data, and proof of validity on the main chain. This proof of validity ensures the computation made to execute the transactions was correct. zkRollups inherit security from L1 and uphold privacy by design, which makes them well suited to our underlying crowdsourcing system. This design not only guarantees transparency for all participants but also upholds the decentralized nature of the system, reducing congestion and lowering fees. At the same time, the security guarantees of the main blockchain are preserved through cryptographic proofs.
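The amortization argument behind this design choice can be made concrete with a toy cost model. All constants below are hypothetical placeholders (not measurements from our deployment), chosen only to illustrate how a fixed per-batch overhead spread over many transactions beats per-transaction L1 execution, as quantified later in the evaluation:

```python
import math

# Toy gas model: every constant here is a hypothetical placeholder.
L1_GAS_PER_TX = 1_000_000   # executing one call directly on L1
BATCH_OVERHEAD = 700_000    # fixed L1 cost of a batch (commit + prove + execute)
CALLDATA_PER_TX = 5_000     # compressed per-tx data published to L1
BATCH_CAPACITY = 20         # transactions aggregated per batch

def l1_cost(n_tx: int) -> int:
    return n_tx * L1_GAS_PER_TX          # grows linearly with the calls

def l2_cost(n_tx: int) -> int:
    batches = math.ceil(n_tx / BATCH_CAPACITY)
    return batches * BATCH_OVERHEAD + n_tx * CALLDATA_PER_TX

for n in (1, 10, 20, 40):
    # L2 cost stays nearly flat within a batch, jumping when a new batch opens
    print(n, l1_cost(n), l2_cost(n))
```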
§.§.§ A Decentralized Registrar for Secure Identity Management Complementing the prowess of the main ledger, we introduce a Registrar Ledger, responsible for identity management in our decentralized ecosystem. This ledger serves as an identity management entity such as CanDID <cit.>. It offers a secure environment where users can assert their identities without sending any information other than proofs. The role of the registrar in our protocol can be assumed by the CanDID committee, a decentralized set of nodes that performs deduplication (identity uniqueness) in a privacy-preserving way. §.§.§ Dual Blockchain Ledgers with IPFS Integration The main ledgers in our design are coupled with an InterPlanetary File System (IPFS), providing an efficient solution to the storage challenges inherent in traditional blockchain-based crowdsourcing systems. By offloading substantial data off-chain to IPFS, the transaction times and costs within RollupTheCrowd are optimized. §.§.§ Interoperability and Synchronization An essential facet of our system architecture is the seamless interaction between these two ledgers. Smart contracts deployed on the main ledger can be triggered by authorized users to execute operations. This can be achieved using inter-chain decentralized oracles, ensuring that the data shared between the ledgers is accurate, tamper-proof, and auditable. The inclusion of IPFS and the off-chain storage of business logic data necessitates a reliable mechanism for data synchronization and retrieval. This is where decentralized oracles excel, ensuring that data from IPFS can be efficiently utilized on the main blockchain without compromising security or decentralization. Figure <ref> explains how these decentralized oracles serve as a bridge between on-chain and off-chain sides <cit.>. §.§ RollupTheCrowd Modules Within RollupTheCrowd, blockchain nodes at L1 can independently transmit, verify, and store data within the network. They are also responsible for validating transactions/blocks and reaching a consensus on the state of the blockchain. Below, we list the functional modules we developed and integrated into each blockchain node. * Oracle Operator Module: a smart contract within a blockchain ecosystem that acts as an intermediary or bridge between the blockchain and external data sources. Its primary purpose is to fetch, verify, and provide off-chain data to on-chain smart contracts, enabling them to interact with real-world information. * Access Management Module: a smart contract that manages access control and permissions within our decentralized application (DApp). It is a common pattern used to control who can perform certain actions or access specific functionalities within the application. The primary objective is to ensure that only authorized users or addresses are allowed to execute specific operations or access sensitive data. * Business Logic Module: the smart contract that facilitates the management of crowdsourcing operations within our system. It enables users to create, submit, and complete tasks in a decentralized manner. It implements the complete business logic behind the crowdsourcing scenario (details will follow). * Reputation Module: the component responsible for managing reputation scores in the system. It implements our proposed privacy-preserving reputation model (details in Sec. <ref>).
It's important to point out that with the integration of this reputation module into the framework, we have the option of employing reputation-centric consensus <cit.>. This alternative offers better scalability and fairness than PoA and PoS, respectively. We will further explore this aspect in an extended version of this work. §.§ Smart Contracts Design The entire business logic of RollupTheCrowd is implemented using smart contracts. We employ three primary entities in our crowdsourcing process: Requesters, Workers, and Evaluators. Smart contracts (SCs) in RollupTheCrowd are designed to provide transparency and accountability among these entities by managing both reputation and crowdsourcing tasks on-chain. In the following, we detail the core functions executed through smart contracts in our system. Figure <ref> depicts the crowdsourcing workflow within our framework, illustrating the SC functions called by each entity. Deposits are locked into an SC as collateral for transactions, and any misconduct will result in penalties. Evaluators are selected before Workers, with Workers unaware of the Evaluators' identities. To ensure fair evaluations, Evaluators are randomly assigned to evaluate tasks. This setup encourages timely and high-quality contributions from Workers while maintaining the integrity of evaluations by Evaluators. §.§.§ Create Task Algorithm <ref> highlights the steps of the createTask function. The function initially verifies whether the caller is a registered user; if affirmative, it proceeds to store only the necessary data on-chain, as all other details have already been submitted off-chain to IPFS via the front end. Additionally, the function updates the amount to be used later in the reputation model. §.§.§ Submit Solution Algorithm <ref> implements the submitSolution function, triggered by the worker upon task completion. This function first verifies if the bidder submitting the solution is indeed the assigned worker, ensuring that workers can only submit solutions within their bids. Subsequently, it checks whether the bid has been accepted by the requester, allowing only accepted workers to submit their solutions. Finally, it stores the Content Identifier (CID) of the solution submitted to IPFS. §.§.§ Distribute Evaluators to Random Sets Algorithm <ref> shows how we distribute evaluators into random sets to achieve randomness in the evaluation process; each set is responsible for one submission. The function is called only by the oracle when there are enough evaluators for the task. §.§.§ Calculate New Reputation Upon completion of the evaluation of a specific task using the corresponding measures detailed in Sec. <ref>, Evaluators transmit their local ratings to the Oracle. The Oracle network checks the validity of these ratings, then calculates the average scores and submits the result on-chain through the invocation of the calculateNewRep function. This function computes the new reputation using the proposed reputation model, considering the task type, and then initiates the updateReputation function. § REPUTATION MODELLING After describing the architecture of RollupTheCrowd and its main components, we will now delve into the mathematical details of the proposed reputation model. We propose a reputation model that can be adapted to various crowdsourcing situations, such as those that involve solving complex problems, collecting data, conducting research, or harnessing collective intelligence.
Those diverse scenarios can be categorized from our perspective into two principal categories: problem-solving and knowledge acquisition. In problem-solving situations, crowdsourcing initiatives rely on human-level intelligence or expertise. Participants bring reasoning, problem-solving, decision-making, and learning, among other cognitive abilities that are characteristic of human intelligence. Examples of such scenarios include crowdsourcing platforms dedicated to innovation, where individuals contribute creative solutions for product development or process improvement. On the other hand, knowledge acquisition situations in crowdsourcing aim to gather a broad range of data from a diverse group of individuals or machines. This may involve the crowdsensing use case. §.§ Task Evaluation Recognizing the two types of crowdsourcing situations allows us to design targeted evaluation strategies and approaches that meet the specific needs and goals of each scenario. We define a common metric for all crowdsourcing scenarios, namely the value rating. We first present this common metric and then the metrics specific to each situation. §.§.§ Common Metric - Value Rating It is not expensive for a malicious requester to submit multiple low-cost tasks (i.e., micro-tasks) addressed to a particular worker to improperly boost their reputation. Therefore, to mitigate this type of coordinated attack and tackle the problem of unfair ratings, the rating of a task should be related to its amount A_t. The value rating V_R ∈ [0,1] is computed using the following formula: V_R = f(A_t) = (A_t - A_min)/(A_max - A_min), where A_min and A_max denote the minimum and maximum task amounts. §.§.§ Problem-Solving Tasks Metrics Problem-solving tasks refer to cognitive tasks that require human-level intelligence or expertise to perform. These tasks typically involve reasoning, problem-solving, decision-making, and learning, among other cognitive abilities that are characteristic of human intelligence. Examples of human intelligence tasks include natural language understanding, logical reasoning, creativity, and social intelligence. Evaluating these tasks involves subjective judgments, which can vary from one evaluator to another. For instance, tasks that require creativity may elicit multiple valid solutions, leading to diverse ideas and approaches among individuals. To minimize conflicts in evaluation, we introduce objectivity and establish clear criteria during the task posting phase. By providing explicit guidelines and specifications upfront, we strive to facilitate a more structured and consistent evaluation process. This helps to ensure that evaluators have a standardized framework to assess tasks and reduces discrepancies. Within each problem-solving task, the assessment of a user's submission is influenced by various factors. We define two factors to assess the Worker's submission: Effort Rating and Contextual Rating. * Effort Rating (E_R): to bring more objectivity to feedback submission, we gauge the user's effort on the task by considering the following two parameters: * Task completeness: C_t ∈ [0,1] designates the degree of completion or realization of a task or project. It is a measure of progress toward the task goal and can be computed using a checklist defined by the requester. * Task Quality: Q_t ∈ [0,1] refers to the level of expertise or efficiency in performing a specific task. It can be calculated using a set of rubric rules defined by the requester. Rubric rules are criteria or guidelines used to evaluate the quality of an assignment.
For example, for a logo design task, quality evaluation using rubric rules may include creativity and originality, relevance to brand identity, technical execution, and aesthetic appeal. Each of these metrics can be rated on a scale of one to ten. We give requesters the freedom to determine the weighting of completeness and quality, enabling them to specify their preferences in advance. Therefore, the effort rating is computed as follows: E_R = f(C_t, Q_t) = α C_t + β Q_t, with α + β = 1. * Contextual Rating (C_R): Worker submissions can be evaluated taking into account additional validity aspects, which may differ depending on the use case. For example, in programming contexts, considerations may relate to the success of test cases or the cleanliness of the code, enabling a more in-depth and personalized assessment to measure the quality and effectiveness of the submission. The overall rating of a problem-solving task is T_R = f(V_R, E_R, C_R), a linear combination of the three metrics. The weighting of each metric is determined by either the group or the platform operator. §.§.§ Knowledge Acquisition Task Metrics In this type of task, the focus is on collecting data. The gathered knowledge can be evaluated by its reliability and can be provided by IoT devices (temperature, pressure, etc.) or humans (location, pictures, surveys). There are many existing methods for evaluating data reliability. Inspired by the method proposed in <cit.>, we develop a new evaluation method that enables accurate estimation of knowledge acquisition based on the reliability of acquired data. * Data Distortion Rating (D_R): Data distortion represents the difference between observation and truth. We use the deviation of sensed data V_i from V_a to denote the degree of data distortion. V_a is the final aggregation result held by the decentralized oracle, which is considered truth data. We calculate the squared difference between V_i and V_a, then the result is normalized as the deviation of V_i from V_a: d_i = ((V_i - V_a)/(b_U - b_L))^2, where b_L and b_U represent the lower and upper bounds of the sensed data range, respectively. They are used to normalize the deviation d_i (i.e., d_i ∈ [0, 1]). We calculate the data distortion metric as follows: D_R = 1 - d_i. * Contextual Rating (C_R): In addition to the data distortion rating, other contextual factors could be taken into account toward evaluating the reliability of data. For instance, a specific sensing task may be strict in location and time, meaning that sensing data from the expected location might be more reliable than data from a remote location. Similar to problem-solving tasks, the overall score T_R = f(V_R, D_R, C_R) is a linear combination of the three ratings V_R, D_R, and C_R. §.§ Reputation Update In our reputation calculation process, we use the task evaluation method outlined in the previous section, along with past behavior. While developing this model, we carefully considered privacy concerns. Therefore, the only data the model requires is the users' current reputation scores. In RollupTheCrowd, each new user is assigned an initial reputation score R_init. This value can be the average of the reputation scores of all existing users or another value fixed by the system operators. Reputation is the perception that users have about an individual in the system, and past behavior is a key factor in the calculation of reputation. For that, we introduced the current reputation value to calculate the new one.
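Before turning to the update rule, the task-level scores defined above can be consolidated into a short sketch. The equal weights in the final linear combination are illustrative only, since the text leaves the weighting to the group or platform operator:

```python
# Sketch of the task-level scores defined above. The weights in the
# final combination are placeholders chosen by the platform operator.
def value_rating(a_t: float, a_min: float, a_max: float) -> float:
    """V_R in [0, 1]: ties a task's rating to its amount A_t."""
    return (a_t - a_min) / (a_max - a_min)

def effort_rating(c_t: float, q_t: float, alpha=0.5, beta=0.5) -> float:
    """E_R = alpha*C_t + beta*Q_t with alpha + beta = 1 (problem-solving)."""
    return alpha * c_t + beta * q_t

def distortion_rating(v_i: float, v_a: float, b_l: float, b_u: float) -> float:
    """D_R = 1 - d_i, where d_i is the normalized squared deviation of the
    sensed value V_i from the oracle's aggregate V_a (knowledge acquisition)."""
    d_i = ((v_i - v_a) / (b_u - b_l)) ** 2
    return 1.0 - d_i

def task_rating(v_r: float, x_r: float, c_r: float, w=(1/3, 1/3, 1/3)) -> float:
    """T_R: linear combination of V_R, E_R (or D_R), and C_R."""
    return w[0] * v_r + w[1] * x_r + w[2] * c_r
```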
The reputation update process diverges based on whether the behavior exhibited is deemed good or bad. We define work as good when the task rating exceeds a certain threshold T_min ≥ R_init, which we consider the critical line of trust. Otherwise, the work is categorized as bad. In the update for good behavior, we accord higher significance to the old reputation score, resulting in a reputation growth that aligns with the expected positive behavior. Contrastingly, when bad behavior occurs, we shift the focus towards the current task evaluation, imposing a stricter punishment as a consequence <cit.>. The general update formula is as follows: * For good behavior (T_R ≥ T_min): R_new = ω R_old + (1 - ω) T_R * For bad behavior (T_R < T_min): R_new = (1 - ω) R_old + ω T_R where T_R is the task rating, R_old is the old reputation, and R_new is the new reputation. ω is the weighting function f_w(S) = tanh(S), where S is the number of submissions made by the worker; this leads the model to become progressively stricter as the worker engages in more tasks. The ability to respond quickly to unexpected actions is an essential feature of an effective reputation model. To prove that RollupTheCrowd possesses this characteristic, we compared our model with the model employed in IPoT <cit.>, which uses a similar approach based on the evaluation of crowdsourcing interactions. Figure <ref> shows the variation in updates in response to positive behavior, following the user's interactions up to interaction 25, when their behavior becomes negative. As soon as a negative action is taken, the reputation score declines rapidly in both systems. However, the score decreases faster in our model. This difference reflects our system's increased ability to react to inappropriate behavior. § SECURITY ANALYSIS In this section, we explore the potential security vulnerabilities within RollupTheCrowd and illustrate its resilience against these threats and attacks. * Sybil attack: This attack involves creating multiple identities to take advantage of the reputation system. → Our system restricts the creation of multiple accounts, permitting only authentic users to join. Organizational (or consortium) admin accounts oversee system access through on-chain management using role-based access control. This protocol is established via smart contracts on-chain, granting exclusive authorization to admin accounts for user addition or removal from the system. * Whitewashing attack: This occurs when a dishonest worker attempts to reset their negative reputation by re-entering the system with a new identity to obtain an initial reputation score. → Users are registered through the registrar entity (a consortium of organizations). Rejoining the platform is only possible if the registrar blockchain committee grants permission. * Collusion attacks: This form of attack involves collusion between a group of workers and requesters to either lower a target worker's reputation or inflate their own. → In our system, a worker's reputation calculation is not based on the requester's feedback. It relies on evaluations conducted by a randomly selected set of evaluators. RollupTheCrowd's design prioritizes automatic evaluation metrics, making it resilient against this type of attack. * Free-riding, False-reporting: In free-riding, workers receive rewards without making real effort, while in false-reporting, requesters try to repudiate the payment. → In our system, both of these attacks are prevented.
Workers cannot receive payment without undergoing evaluation, and the reward distribution is contingent upon the outcomes of the evaluation process. Additionally, requesters are unable to repudiate payments since they are already locked within the deposit contract. Workers and requesters are obligated to make a deposit, which guarantees their commitment to the system. * Bad-mouthing: This happens when an evaluator submits an incorrect evaluation/rating. → Within our system, we address this form of attack by randomly choosing independent evaluators, as there is no advantage for evaluators in providing a false rating. Furthermore, we alleviate rating errors by calculating a weighted average score based on assessments from multiple raters. § EVALUATION AND RESULTS The main objective of RollupTheCrowd's development is to build a scalable and decentralized system capable of efficiently managing reputation and crowdsourcing tasks simultaneously. To validate the feasibility, scalability, and effectiveness of our proposed framework, we developed a proof of concept. For a thorough explanation of the technical intricacies, we invite readers to explore our public GitHub repository (https://github.com/0xmoncif213/RollupTheCrowd), which provides a detailed description of our implementation. This repository serves as a valuable resource for those interested in replicating and further scrutinizing our work. This section details the experimental setup, followed by a brief overview of the technologies utilized during development and experimentation. Next, the key metrics used for evaluation and the results demonstrating our system's performance for relevant benchmarks are described. §.§ Experimental Setup The deployment of the proposed crowdsourcing platform and the performance tests are carried out on a cluster of two 'HPE ProLiant XL225n Gen10 Plus' servers dedicated to the experimentation and evaluation of blockchain solutions. Each server is equipped with two AMD EPYC 7713 64-Core 2GHz processors and 2x256 GB RAM. §.§ Tools and Libraries We implemented our smart contracts using Solidity and leveraged an Ethereum blockchain to run our network, employing the Geth client (geth.ethereum.org). For the Oracle integration, we used Chainlink (chain.link) nodes with customized external adapters within Docker-containerized Node.js servers <cit.>. For the L2 scaling solution, we incorporated zkSync (zksync.io) Rollups. As a comprehensive measure to validate the efficiency and scalability of the proposed solutions, we conducted thorough testing and benchmarking using the Hyperledger Caliper framework (github.com/hyperledger/caliper-benchmarks). §.§ Performance Evaluation In our evaluation approach, we recognize the need for a fine-grained assessment of our platform's performance. We conduct benchmarks on individual modules, including the access management, business logic, pre-evaluation, and evaluation modules, to achieve this. Each module has different characteristics and provides different functionality, so it is essential to tailor our benchmarking configurations (workloads) to capture the system performance associated with each module. These configurations include generating and sending transactions containing the parameters needed by each function, such as hashes, addresses, and reputation scores.
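As an illustration of such a workload (this is not our Caliper module; the ABI fragment and the createTask parameters are hypothetical stand-ins for the deployed contract), a minimal L1 transaction generator could look as follows:

```python
import os
import random
from web3 import Web3

# Illustrative L1 workload generator; ABI and parameters are hypothetical.
CROWD_ABI = [{
    "name": "createTask", "type": "function", "stateMutability": "nonpayable",
    "inputs": [{"name": "cid", "type": "string"},
               {"name": "amount", "type": "uint256"}],
    "outputs": [],
}]

w3 = Web3(Web3.HTTPProvider(os.environ.get("RPC_URL", "http://localhost:8545")))
crowd = w3.eth.contract(
    address=Web3.to_checksum_address(os.environ["CROWD_ADDR"]), abi=CROWD_ABI)

def send_create_task(sender):
    cid = "Qm" + "".join(random.choices("abcdef0123456789", k=44))  # dummy CID
    tx_hash = crowd.functions.createTask(cid, random.randint(1, 100)).transact(
        {"from": sender})
    # latency = submission -> inclusion, as measured in the benchmarks
    return w3.eth.wait_for_transaction_receipt(tx_hash)
```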
For L1, the Caliper framework orchestrates workload generation and transmission, enabling preconfigured settings. Meanwhile, for L2, our approach involves designing workloads adhering to interaction standards specific to zkSync. To quantitatively assess the business logic and reputation functions, our experimental process emphasizes three essential metrics: * Throughput: the number of successful transactions per second (TPS). * Latency: the time difference in seconds between the submission and completion of a transaction. * Gas: a unit that measures the computational work required to perform operations; it is influenced by the complexity of the operation, the computational steps involved, and the amount of data processed. The results below concern the evaluation of a complete crowdsourcing problem-solving scenario, from task creation to reputation updating. * L1 Throughput and Latency: We begin our analysis with the performance of the mainchain. Figure <ref> illustrates the throughput and latency of the heaviest function in our design, createTask, for different block periods (1s, 3s, 5s). Latency increases as the block period increases, which is expected. However, even with a block duration of 5s, our approach of submitting data off-chain and storing only essential data on-chain proved to be very efficient. The system achieves its best throughput of 310 TPS with a send rate of 420 TPS. Changing the workload type, Figure <ref> compares submitSolution to createTask; the former has better throughput and latency, as it requires relatively less computation on-chain. * L2 vs L1 Performance: Now, let us discuss the results of the L2 versus L1 evaluation. Figure <ref> shows the gas consumption associated with the createTask and calculateNewRep functions in two different implementations. The transaction batching process in the zkSync Rollup has three steps on L1: commit, prove, and execute, during which batches are committed, proven, and executed on L1. Each step incurs gas consumption, with the total gas being the sum of the gas expended in these three stages. The results clearly show that even when sending a single TX, the gas consumption is significantly reduced when passing through L2. Furthermore, these results also demonstrate that as the transaction complexity changes when calling the calculateNewRep function, the gas cost does not change much and remains below that of the L1 execution. In Table <ref>, we present the comprehensive end-to-end latency for the createTask, submitSolution, and calculateNewRep functions. The results indicate that simultaneous computation for multiple transactions takes no more than a few seconds. We must mention here that within this duration, the transactions are included in L2 blocks. Table <ref> illustrates the gas cost dynamics associated with multiple simultaneous function calls in both the L1 and L2 (zkRollup) environments. In L1, the gas cost increases linearly with the number of calls. Since each call has a fixed gas cost, the resulting overall cost is equivalent to the cost of a single call multiplied by the number of calls. In L2 (zkRollup), on the other hand, the gas cost remains stable for up to 20 function calls, proving the effectiveness of the batching scheme within the zkRollup. The initial constant cost signifies the aggregation of up to 20 transactions into a single batch, significantly reducing gas expenses. Upon exceeding 20 function calls, a doubling of commit gas costs occurs, indicating the submission of a new batch to L1.
Compared to the costs obtained in the L1 scenario, there is a significant reduction of about 20X, demonstrating the consistent benefit of batching with zkRollups. Overall, thanks to the integration of the zkRollup layer, the scalability of the entire system is improved and the gas cost of each transaction is considerably reduced. As a result, we are convinced that the framework we propose can effectively manage reputation and crowdsourcing tasks simultaneously.

§ CONCLUSION

In this paper, we presented RollupTheCrowd, an innovative blockchain-based crowdsourcing framework with a privacy-preserving reputation model and an L2 scaling solution. The use of zkRollups as an L2 solution enhances the scalability of the entire system and enables the simultaneous management of reputation and crowdsourcing operations. The proposed framework incorporates an efficient, privacy-friendly reputation model that evaluates the trustworthiness of participants based on their crowdsourcing interactions. To reduce the load on our blockchain, we implement an off-chain storage scheme, improving the overall performance of RollupTheCrowd. The proof of concept we have provided supports the feasibility of our framework, and the obtained results affirm its scalability and efficiency. In the future, our research will focus on the anonymity aspects of blockchain-based reputation systems, to improve privacy protection while maintaining greater scalability.

§ ACKNOWLEDGEMENT

This work was supported by the 5G-INSIGHT bilateral ANR-FNR project (ID: 14891397) / (ANR-20-CE25-0015-16), funded by the Luxembourg National Research Fund (FNR) and by the French National Research Agency (ANR), the Nouvelle-Aquitaine Region - B4IoT project, and The MIRES federation.

b1 J. Howe, "The rise of crowdsourcing," Wired, vol. 14, Jan. 2006.
b7 S. Zhu, Z. Cai, H. Hu, Y. Li, and W. Li, "zkCrowd: A hybrid blockchain-based crowdsourcing platform," IEEE Transactions on Industrial Informatics, vol. 16, no. 6, pp. 4196–4205, 2020.
b9 M. Li, J. Weng, A. Yang, W. Lu, Y. Zhang, L. Hou, J.-N. Liu, Y. Xiang, and R. H. Deng, "CrowdBC: A blockchain-based decentralized framework for crowdsourcing," IEEE Transactions on Parallel and Distributed Systems, vol. 30, no. 6, pp. 1251–1266, 2019.
b13 J. Benet, "IPFS - content addressed, versioned, P2P file system," 2014.
b14 D. Basile, V. Goretti, C. Di Ciccio, and S. Kirrane, "Enhancing blockchain-based processes with decentralized oracles," Aug. 2021, pp. 102–118.
b15 L. T. Thibault, T. Sarry, and A. S. Hafid, "Blockchain scaling using rollups: A comprehensive survey," IEEE Access, vol. 10, pp. 93039–93054, 2022.
b16 W. Tong, X. Dong, Y. Shen, Y. Zhang, X. Jiang, and W. Tian, "ChChain: Secure and parallel crowdsourcing driven by hybrid blockchain," Future Generation Computer Systems, vol. 131, pp. 279–291, 2022.
b17 L. Sun, Q. Yang, X. Chen, and Z. Chen, "RC-chain: Reputation-based crowdsourcing blockchain for vehicular networks," Journal of Network and Computer Applications, vol. 176, p. 102956, 2021.
b18 K. Zhao, S. Tang, B. Zhao, and Y. Wu, "Dynamic and privacy-preserving reputation management for blockchain-based mobile crowdsensing," IEEE Access, vol. 7, pp. 74694–74710, 2019.
b19 J. Chen, W. Liang, L. Xiao, C. Yang, R. Zhang, Z. Gui, and A. Poniszewska-Maranda, "PrivBCS: A privacy-preserving and efficient crowdsourcing system with fine-grained worker selection based on blockchain," Connection Science, vol. 35, no. 1, p. 2202837, 2023.
b21 Z. Zhou, M. Wang, C.-N. Yang, Z. Fu, S. Xin, and Q. M. J. Wu, "Blockchain-based decentralized reputation system in e-commerce environment," Future Generation Computer Systems, vol. 124, Jun. 2021.
b22 T. Wang, J. Guo, S. Ai, and J. Cao, "RBT: A distributed reputation system for blockchain-based peer-to-peer energy trading with fairness consideration," Applied Energy, vol. 295, p. 117056, 2021.
b23 S. L. Kodjiku, Y. Fang, T. Han, K. O. Asamoah, E. S. E. B. Aggrey, C. Sey, E. Aidoo, V. N. Ejianya, and X. Wang, "ExCrowd: A blockchain framework for exploration-based crowdsourcing," Applied Sciences, vol. 12, no. 13, 2022.
b10 D. Maram, H. Malvai, F. Zhang, N. Jean-Louis, A. Frolov, T. Kell, T. Lobban, C. Moy, A. Juels, and A. Miller, "CanDID: Can-do decentralized identity with legacy compatibility, Sybil-resistance, and accountability," in 2021 IEEE Symposium on Security and Privacy (SP), 2021, pp. 1348–1366.
b31 L. Breidenbach, C. Cachin, B. Chan, A. Coventry, S. Ellis, A. Juels, F. Koushanfar, A. Miller, B. Magauran, D. Moroz et al., "Chainlink 2.0: Next steps in the evolution of decentralized oracle networks," Chainlink Labs, vol. 1, pp. 1–136, 2021.
guru M. A. Bouchiha, Y. Ghamri-Doudane, M. Rabah, and R. Champagnat, "GuRuChain: Guarantee and reputation-based blockchain service trading platform," in IFIP Networking Conference (IFIP Networking), Barcelona, Spain, 2023, pp. 1–9.
b24 X. Zhu, Y. Li, L. Fang, and P. Chen, "An improved proof-of-trust consensus algorithm for credible crowdsourcing blockchain services," IEEE Access, vol. 8, pp. 102177–102187, 2020.
http://arxiv.org/abs/2407.03311v1
20240703175411
Value-Penalized Auxiliary Control from Examples for Learning without Rewards or Demonstrations
[ "Trevor Ablett", "Bryan Chan", "Jayce Haoran Wang", "Jonathan Kelly" ]
cs.RO
[ "cs.RO", "cs.AI", "cs.LG" ]
Value-Penalized Auxiliary Control from Examples for Learning without Rewards or Demonstrations

Trevor Ablett^1, Bryan Chan^2, Jayce Haoran Wang^1, Jonathan Kelly^1
^1University of Toronto, ^2University of Alberta

July 8, 2024

§ ABSTRACT

Learning from examples of success is an appealing approach to reinforcement learning that eliminates many of the disadvantages of using hand-crafted reward functions or full expert-demonstration trajectories, both of which can be difficult to acquire, biased, or suboptimal. However, learning from examples alone dramatically increases the exploration challenge, especially for complex tasks. This work introduces value-penalized auxiliary control from examples (VPACE); we significantly improve exploration in example-based control by adding scheduled auxiliary control and examples of auxiliary tasks. Furthermore, we identify a value-calibration problem, where policy value estimates can exceed their theoretical limits based on successful data. We resolve this problem, which is exacerbated by learning auxiliary tasks, through the addition of an above-success-level value penalty. Across three simulated and one real robotic manipulation environment, and 21 different main tasks, we show that our approach substantially improves learning efficiency. Videos, code, and datasets are available at <https://papers.starslab.ca/vpace>.

§ INTRODUCTION

Robotics presents a unique challenge to learning algorithms: feedback is often formulated as a manually defined dense reward function or as demonstration trajectories of an expert completing the task, both of which can be difficult to acquire, suboptimal, or biased. Ensuring that expert trajectories are nearly optimal, or, alternatively, learning an optimal policy from mixed or suboptimal data, are both challenging, open problems in imitation learning <cit.>. Sparse reward functions are less biased <cit.>, but require significant exploration and can be non-trivial to acquire. We consider another form of feedback: example states of completed tasks. Obtaining example states can be far less laborious than designing a reward function or gathering trajectories: practitioners can gather states from a distribution that represents a completed task without consideration of how the states are reached by the expert. Learning from example states, equivalently referred to as example-based control <cit.>, can be inefficient, however. Similar to sparse rewards, the example states provide no information about which actions led to the goal state(s). In robotics applications, this inefficiency is particularly undesirable, since executing suboptimal policies is costly both in terms of potentially destructive environmental effects and time. In this work, we aim to address the following question: Is it possible to learn policies efficiently given only example states of completed tasks?
To answer this question, we propose a new example-based control method, value-penalized auxiliary control from examples (VPACE). Our approach is inspired by LfGP <cit.>, a method that introduces the use of a scheduler and expert trajectories of auxiliary tasks to improve exploration. We build upon this idea to leverage only example states rather than expert trajectories. Our contributions are fourfold: (i) We find that the naïve application of scheduled auxiliary tasks to example-based control can result in poorly calibrated value estimates, and we alleviate this problem by introducing value penalization based on the current expected value of the provided examples.

[Figure: Average normalized results for all tasks studied in this work.]

The combination and complementary nature of these two additions results in a dramatic improvement in sample efficiency and performance across four environments with 19 simulated and two real tasks (see <ref>) compared with prior state-of-the-art example-based control (RCE, <cit.>) and imitation learning (DAC <cit.>, SQIL <cit.>) approaches. (ii) We situate our approach among these baselines, showing the effects of mixing value penalization and auxiliary control under various reward-modelling assumptions. (iii) We demonstrate that VPACE can be as sample-efficient as learning algorithms with other forms of feedback, including expert trajectories and reward functions. (iv) Finally, we theoretically show that our value-penalized objective does not prevent an agent from learning the optimal policy.

§ RELATED WORK

Sparse rewards are a desirable form of feedback for learning unbiased, optimal policies in reinforcement learning (RL), but they can be difficult to obtain and present an immense exploration challenge in long-horizon tasks <cit.>. Reward shaping <cit.> and dense rewards can help alleviate the exploration problem in robotics <cit.>, but designing dense rewards is difficult for practitioners <cit.> and can lead to surprising, biased, and suboptimal policies. An alternative to manually defined rewards is to perform inverse RL (IRL), in which a reward function is recovered from demonstrations and a policy is learned either subsequently <cit.> or simultaneously, in a process known as adversarial imitation learning (AIL) <cit.>. AIL actor-critic approaches can be further divided into methods that learn both a value function and a separate reward model <cit.> and methods that learn a value function only <cit.>. Like dense rewards, full-trajectory demonstrations can be hard to acquire, suboptimal, or biased.

Unlike IRL/AIL, in example-based control (EBC), a learning agent is only provided distributions of single successful example states. Previous EBC approaches include using generative AIL (GAIL, <cit.>) directly (VICE, <cit.>), soft actor-critic (SAC, <cit.>) with an additional mechanism for generating extra success examples (VICE-RAQ, <cit.>), learning the goal distribution as a reward function (DisCo RL, <cit.>), performing offline RL with conservative Q-learning (CQL, <cit.>) and a learned reward function <cit.>, and using SAC with a classifier-based reward (RCE, <cit.>). All EBC methods can naturally suffer from poor exploration, given that success examples are akin to sparse rewards.

Hierarchical reinforcement learning (HRL) aims to leverage multiple levels of abstraction in long-horizon tasks <cit.>, improving exploration in RL both theoretically <cit.> and empirically <cit.>.
Scheduled auxiliary control (SAC-X, <cit.>) combines a scheduler with semantically meaningful and simple auxiliary sparse rewards. SAC-X has also been extended to the domain of imitation learning with full trajectories (LfGP, <cit.>). Our approach builds on aspects of VICE-RAQ <cit.>, SQIL <cit.>, RCE <cit.>, and LfGP <cit.>, in that we use off-policy learning without a separate reward model to maximize efficiency.

§ EXAMPLE-BASED CONTROL WITH VALUE PENALIZATION AND AUXILIARY TASKS

Our goal is to generate an agent, with as few environment interactions as possible, that can complete a task given final-state examples of both the successfully completed task and a small set of reusable auxiliary tasks, with no known reward function or full-trajectory demonstrations. We begin by formally describing the problem setting for example-based control in <ref>. In <ref>, we describe how scheduled auxiliary tasks can be applied to example-based control. Finally, motivated by the increased exploration diversity of the multitask framework, we propose a new Q-estimation objective in <ref> that leverages value penalization for improved learning stability.

§.§ Problem Setting

A Markov decision process (MDP) is defined as ℳ = ⟨𝒮, 𝒜, R, P, ρ_0, γ⟩, where the sets 𝒮 and 𝒜 are respectively the state and action space, P is the state-transition environment dynamics distribution, ρ_0 is the initial state distribution, γ is the discount factor, and the true reward R : 𝒮×𝒜→ℝ is unknown. Actions are sampled from a stochastic policy π(a|s). The policy π interacts with the environment to yield experience (s_t, a_t, s_t+1), generated by s_0 ∼ρ_0(·), a_t ∼π(· | s_t), and s_t+1∼ P(· | s_t, a_t). The gathered experience (s_t, a_t, s_t+1) is then stored in a buffer ℬ, which may be used throughout learning. When referring to finite-horizon tasks, t=T indicates the final timestep of a trajectory. For any variables x_t, x_t+1, we may drop the subscripts and use x, x' instead when the context is clear.

In this work, we focus on example-based control, a more difficult form of imitation learning where we are only given a finite set of example states s^* ∈ℬ^*, where ℬ^* ⊆𝒮 and |ℬ^*| < ∞, representing a completed task. The goal is to (i) leverage ℬ^* and ℬ to learn or define a state-conditional reward function R̂: 𝒮→ℝ that satisfies R̂(s^*) ≥R̂(s) for all (s^*, s) ∈ℬ^* ×ℬ, and (ii) learn a policy π̂ that maximizes the expected return, π̂ = argmax_π 𝔼_π[ ∑_t=0^∞γ^t R̂(s_t) ]. For any policy π, we can define the value function and Q-function respectively to be V^π(s) = 𝔼_a ∼π[ Q^π(s, a) ] and Q^π(s, a) = R̂(s) + γ𝔼_s' ∼ P[ V^π(s') ], corresponding to the return-to-go from state s (and action a). Both the value function and the Q-function for any policy π satisfy the above Bellman equations, which can be used for reinforcement learning (RL), specifically by temporal difference (TD) algorithms.

One way to learn the reward function is through adversarial imitation learning (AIL): the learned reward function R̂ is derived from the minimax objective <cit.>, ℒ(D) = 𝔼_s ∼ℬ[ log( 1 - D(s)) ] + 𝔼_s^* ∼ℬ^*[ log( D(s^*)) ], where D attempts to differentiate the occupancy measure between the state distributions induced by ℬ^* and ℬ. The output of D(s) is used to define R̂(s), which is then used for updating the Q-function with <ref>. In the single-task regime, this is algorithmically identical to state-only AIL approaches with full demonstrations <cit.>.

§.§ Learning a Multitask Agent from Examples

Example-based control presents a challenging exploration problem for complex tasks.
We alleviate this problem by adapting learning from guided play (LfGP) <cit.>, an approach for improving exploration by learning from auxiliary-task expert data in addition to main-task data. Auxiliary tasks are selected to have semantic meaning (e.g., reach, lift). Individual task definitions, and sometimes the auxiliary example data themselves, are reusable between main tasks.

LfGP augments an MDP to contain auxiliary tasks, where 𝒯_aux = {𝒯_1, …, 𝒯_K} are separate MDPs that share 𝒮, 𝒜, P, and γ with the main task 𝒯_main. Each task 𝒯∈𝒯_all has an example state buffer ℬ^*_𝒯, where 𝒯_all = 𝒯_aux∪{𝒯_main} is the set of all tasks, and we refer to all task-specific entities in our model as (·)_𝒯. The agent consists of multiple components for each task 𝒯, specifically Q_𝒯, π_𝒯, and optionally D_𝒯. Each π_𝒯 maximizes the joint policy objective ℒ(π) = ∑_𝒯∈𝒯_all𝔼_s ∼ℬ, a ∼π_𝒯(· | s)[ Q_𝒯(s, a) ], and each Q_𝒯 minimizes the joint Bellman residual ℒ(Q) = ∑_𝒯∈𝒯_all𝔼_(s, ·, s') ∼ℬ[ (Q_𝒯 - y_𝒯(s, s'))^2 ] + 𝔼_s^* ∼ℬ^*_𝒯[ (Q_𝒯 - y_𝒯(s^*, s^*))^2 ], where the y_𝒯 are TD targets defined based on <ref> with task-specific reward R̂_𝒯. Finally, a scheduler selects the π_𝒯 to sample actions from during each environment interaction. Following <cit.>, the scheduler has a fixed period, allowing for a set number of policy switches in each episode. We use a weighted random scheduler and a small set of simple handcrafted high-level trajectories (e.g., reach then lift), following results from <cit.>, where this approach performed as well as a more complex learned scheduler. The probability of choosing the main task is a hyperparameter p_𝒯_main, and each auxiliary task is chosen with probability p_𝒯_k = (1 - p_𝒯_main)/K. See <ref> for more scheduler details, including the full set of high-level trajectories.

§.§ Value Penalization in Example-Based Control

A scheduled multitask agent will, by design, exhibit far more diverse behaviour than a single-task policy <cit.>. We show in <ref> that the buffer ℬ generated by this behaviour, consisting of transitions resulting from multiple policies π_𝒯, can result in highly unstable and poorly calibrated Q-estimates, especially in example-based control. In this section, we extend TD-error minimization, a fundamental component of many off-policy RL algorithms, with a penalty that encourages Q-function outputs to stay well calibrated with respect to the reward model. Generally, the choice of the reward model and the loss function for estimating the Q-function can greatly impact learning efficiency (see <ref> for details), but our value-penalty term applies to any reward model and to commonly used regression-based loss functions. The penalty applies to both the single-task and multitask regimes. For simplicity, we describe value penalization for the single-task framework.

Consider a reward model R̂(·), where R̂(s^*), s^* ∈ℬ^*, indicates the reward for successful states, and R̂(s), s ∈ℬ, that for all other states. In AIL, R̂ is a function of D, while in SQIL <cit.>, for example, R̂(s^*) = 1 and R̂(s) = 0. Assuming that s^* transitions to itself, then for policy evaluation with the mean squared error (MSE), we can write the TD target, y: 𝒮×𝒮→ℝ, of Q-updates as y(s, s') = R̂(s) + γ𝔼_a'[ Q(s', a') ] and y(s^*, s^*) = R̂(s^*) + γ𝔼_a'[ Q(s^*, a') ], where (s, ·, s') ∼ℬ, a' ∼π(· | s'), and s^* ∼ℬ^*.
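As a concrete illustration of these targets, the Python sketch below computes them for a mini-batch; q_fn, policy, and rhat are stand-ins for the learned Q-network, the policy, and the reward model, and the expectation over a' is approximated with a single sample, as is typical in SAC-style implementations.

def td_targets(q_fn, policy, rhat, replay_batch, success_batch, gamma=0.99):
    # Replay transitions (s, s') from B bootstrap through the next state.
    s, s_next = replay_batch
    a_next = policy(s_next)                 # a' ~ pi(.|s')
    y = rhat(s) + gamma * q_fn(s_next, a_next)

    # Success examples s* from B* are assumed to transition to themselves.
    s_star = success_batch
    a_star = policy(s_star)
    y_star = rhat(s_star) + gamma * q_fn(s_star, a_star)
    return y, y_star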
<ref> can be replaced with y(s^*, s^*) = R̂(s^*) if one considers successful states to be terminal, but this can cause bootstrapping errors when a task times out or does not terminate upon success <cit.>, both of which are common practice in robotics environments. Regressing to the TD targets <ref> will eventually satisfy the Bellman equation, but in the short term the targets do not satisfy y(s, s') ≤ y(s^*, s^*). This is because the TD targets leverage bootstrapping of the current Q-estimate, an estimate that may not satisfy the Bellman equation and can exceed the bounds of valid Q-values, implying that the approximation error of a Q updated with the MSE can be uncontrollable.

We introduce a simple resolution to this issue by adding a penalty to our TD updates for s ∈ℬ based on the current estimate of 𝔼_s^* ∼ℬ^*[ V^π(s^*) ]. This penalty term enforces valid Q-estimate outputs and drives the TD targets to satisfy the inequality y(s, s') ≤ y(s^*, s^*). We introduce both a minimum and a maximum value for Q^π(s,a), respectively Q^π_min = R̂_min / (1 - γ) and Q^π_max = 𝔼_s^* ∼ℬ^*[ V^π(s^*) ], where R̂_min≤R̂(s) for all s ∈𝒮. Then, the value penalty is ℒ^π_pen(Q) = λ𝔼_(s,a) ∼ℬ[ ( max(Q(s,a) - Q^π_max, 0) )^2 + ( max(Q^π_min - Q(s,a), 0) )^2 ], where λ≥ 0 is a hyperparameter. Notice that when λ→∞, <ref> becomes a hard constraint that is only satisfied by functions with valid outputs; it immediately follows that y(s, s') ≤ y(s^*, s^*) holds with TD updates <ref>.

We add the value penalization ℒ^π_pen(Q) to the MSE loss as a regularization term for learning the Q-function. In <ref>, we show that the optimal solution of this regularized loss achieves low error on the unregularized loss. Consequently, if we use the regularized loss in policy evaluation, approximate policy iteration (API) still converges to the optimal solution. We state here the informal result that bounds the value error (see <ref> in <ref> for the formal theorem):

Suppose the function class 𝒬 has bounded output and contains Q^π for all π, and suppose we use the value-penalization loss for policy evaluation in API with N samples at each iteration. Then, after k iterations of API, we have that, with probability at least 1 - δ, ‖ Q^* - Q^π_k‖_2, ρ_0≤Õ( (1/(1 - γ)) √(log(k/δ)/2N) ), where Õ hides the constants and concentrability coefficients.

§ EXPERIMENTS

Through our experiments, we seek to understand whether VPACE improves stability and efficiency in example-based control. We also complete an ablation study of various hyperparameter options, and finally analyze the learned values of agents with and without value penalization.

§.§ Experimental Setup

We learn agents in a large variety of tasks and environments, including those originally used in LfGP <cit.> and RCE <cit.>. Specifically, the tasks from <cit.> involve a simulated Franka Emika Panda manipulator, a blue and a green block, a fixed "bring" area for each block, and a small slot with <1 mm tolerance for inserting each block. The environment has a single shared observation space and action space, with multiple options for main and auxiliary tasks. We additionally study all tasks from <cit.>: a slightly modified subset of those from <cit.>, involving a simulated Sawyer arm, and three of the Adroit hand tasks originally presented in <cit.>.
We also generate three modified Adroit hand environments that use delta-position action spaces instead of the absolute positions of the original environments, because we have found that policies learned in the original environments are very coarse and exploit simulator bugs (see <ref> for details). Finally, we study drawer- and door-opening tasks with a real Franka Emika Panda.

The reward models that we consider are a learned discriminator (<ref>, <cit.>), SQIL (<ref> and <ref>, <cit.>), and RCE (<ref> and <ref>, <cit.>). For each of these reward models, we can include value penalization (VP), auxiliary control from examples (ACE), or both (VPACE). VPACE-RCE and VP-RCE are not considered because RCE circumvents the issue of value penalization through its classification-based approach to updating value functions (see <ref>). All VPACE and ACE algorithms learn from examples of completed auxiliary tasks in addition to the main task. In the simulated Panda environment, all main tasks use release, reach, grasp, and lift as auxiliary tasks, and each auxiliary-task example dataset is reused between main tasks. In the Sawyer, Adroit, and real Panda environments, each main task uses reach and grasp as auxiliary tasks. Our scheduler approach for all environments is the same weighted random scheduler with a small set of handcrafted trajectories from <cit.>. All implementations are built on soft actor-critic (SAC) <cit.>. For more environment, algorithm, task, and implementation details, see <ref>.

§.§ Main Task Performance Results

Our results for all algorithms on four of the most challenging main tasks we examined are shown at the top of <ref>, while the bottom shows normalized average results for all tasks, separated by environment. Our real Panda results are shown in <ref>. Since the Sawyer and Adroit tasks do not include specific success evaluation metrics, we only report returns. Policies are evaluated at 25k (environment step) intervals for 50 episodes for the simulated Panda tasks, at 10k intervals for 30 episodes for the Sawyer and Adroit tasks, and at 5k intervals for 10 episodes for the real Panda tasks.

Our results clearly show the benefits of combining auxiliary-task exploration with value penalization. For most tasks, VPACE-SQIL learns faster and more stably than any other method. Notably, VPACE-SQIL can solve all tasks but one by the final evaluation step. While VPACE-DAC also tends to perform quite well, it is less sample efficient than VPACE-SQIL on average, most likely owing to VPACE-SQIL's reduced complexity and better discrimination between ℬ and ℬ^*. For the Adroit non-DP tasks, VP-SQIL slightly outperforms the methods including ACE, while SQIL alone performs comparably to the VPACE methods, but we suspect that this is because these tasks have nuances allowing them to be "solved" with very coarse policies that may be undesirable for a real platform (see <ref> for details). ACE-RCE is consistently outperformed by both VPACE-SQIL and VPACE-DAC, and exhibits higher performance variance. We further examine this result, which conflicts with results presented in <cit.> showing very poor performance for both SQIL and DAC, in <ref>. On our real tasks (drawer and door opening), SQIL makes minor progress but is never able to solve either task, while VPACE-SQIL solves both.

Value penalization leads to a dramatic improvement in many of the more complex tasks, and performs comparably to its exclusion in the remaining tasks. Crucially, there are no tasks where its inclusion harms performance.
VPACE-SQIL, VPACE-DAC, and ACE-RCE all outperform their single-task counterparts, particularly on the most complex tasks. The improvement for DAC in the simulated Panda environment is particularly stark; reflecting results from <cit.>, DAC learns deceptive rewards for these tasks and, in turn, never learns to complete any of them except one.

§.§ Ablations, Data Quantity, and Comparison to Full Trajectories and Sparse Rewards

We completed experiments with many variations of our original implementation of VPACE-SQIL on a single representative task (see <ref>). Our main experiments used λ=10 for the value-penalization strength, but we also tested λ = {1, 100} and found a negligible effect on performance, indicating robustness to the choice of λ. We tested two different sizes of ℬ^*_𝒯 (for all tasks in 𝒯_all), |ℬ^*_𝒯| = {10, 100}, while the main experiments used |ℬ^*_𝒯| = 200 (following <cit.>). We found that |ℬ^*_𝒯| = 100 had a negligible effect on performance, but |ℬ^*_𝒯| = 10 slowed learning and impaired final performance. Even with |ℬ^*_𝒯| = 10, VPACE-SQIL outperformed all baselines with |ℬ^*_𝒯| = 200, apart from VPACE-DAC.

A natural question regarding our approach is how its performance compares to more traditional approaches, such as using full expert trajectories and inverse reinforcement learning (IRL), or using RL with true sparse rewards. To test the former, we added full trajectories to each ℬ^*_𝒯 (labelled +Full Trajectories in <ref>, +Full Trajectories & Actions for learning from actions as well, and VP-SQIL +Full Trajectories for single-task only), effectively making our approach similar to <cit.> but with value penalization. Intriguingly, peak performance is reduced in this setting (especially without ACE), which we hypothesize is because the agent now has to minimize the divergence between ℬ and ℬ^*_𝒯 for many non-successful states, leading to an effect, commonly seen with dense reward functions, known as reward hacking <cit.>. This result suggests that inverse RL/adversarial IL can be significantly improved by switching to example-based control, but further investigation is required. To test RL with sparse rewards, we removed ℬ^*_𝒯 entirely and instead used a ground-truth R(s,a) from the environment (R(s,a) having already been used for evaluating success rate), similar to SAC-X <cit.> with value penalization. SAC-X (Sparse Rewards) does not start accomplishing the main task at all in 500k environment steps, exhibiting substantially poorer performance than VPACE-SQIL and all other baselines (apart from DAC), indicating the high potential utility of immediately and consistently sampling expert success data.

§.§ Value Penalization and Calibration of Q-Values

While our performance results show that value penalization improves performance, they do not explicitly show that y(s, s') ≤ y(s^*, s^*) (see <ref>). To verify that this goal was met, we took snapshots of each learned agent at 500k steps and ran each policy for a single episode, recording per-timestep Q-values. Instead of showing Q-values directly, we show Q(s_t, a_t) - 𝔼_s^* ∼ℬ^*, a^* ∼π(· | s^*)[ Q(s^*, a^*) ], which should be close to 0 when an episode is completed successfully, and should never climb above 0 if y(s, s') ≤ y(s^*, s^*) is to hold. We show the results of doing so in <ref> (see <ref> for other tasks). Both VP-SQIL and VPACE-SQIL clearly do not violate y(s, s') ≤ y(s^*, s^*), while both SQIL and ACE-SQIL do. ACE-SQIL, in particular, has no values, on average, where y(s, s') ≤ y(s^*, s^*), indicating that it has learned poorly calibrated estimates of Q.
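A minimal sketch of this diagnostic, assuming q_fn and policy are the learned networks and success_states is the example buffer:

def calibration_gap(q_fn, policy, episode, success_states):
    # Q(s_t, a_t) minus the mean Q over success examples, with success
    # actions resampled from the policy; values above zero indicate that
    # y(s, s') <= y(s*, s*) is violated.
    q_star = sum(q_fn(s, policy(s)) for s in success_states) / len(success_states)
    return [q_fn(s, a) - q_star for s, a in episode]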
This is reflected in our main performance results, where, in the most difficult Panda tasks in particular, the improvement from ACE-SQIL to VPACE-SQIL is pronounced. Consequently, scheduled auxiliary control can produce poorer policies than its single-task alternative unless it is coupled with value penalization.

§ LIMITATIONS

VPACE suffers from several limitations, though many of them are inherited from the use of reinforcement learning and learning from guided play. For an expansion of this section, including these inherited limitations, see <ref>. In this work, we exclusively learn from numerical state data, rather than raw images, and raw images may be required for tasks involving objects that are not rigid. As well, we claim that example distributions are easier to generate than full expert trajectories, but for certain tasks, generating these example distributions may also be challenging. Finally, the tasks we investigate in this work have roughly unimodal example success-state distributions, and our method may not gracefully handle multimodality.

§ CONCLUSION

In this work, we presented VPACE, value-penalized auxiliary control from examples, in which we coupled scheduled auxiliary control with value penalization in the example-based setting to significantly improve learning efficiency and stability. Our experiments revealed that scheduled auxiliary control can exacerbate the learning of poorly calibrated value estimates, which can significantly harm performance, and we alleviated this issue with an approach to value penalization based on the current value estimate of the example data. We theoretically showed that our approach to value penalization still affords an optimal policy. We empirically showed that value penalization, together with scheduled auxiliary tasks, greatly improves learning from example states against a set of state-of-the-art baselines, including learning algorithms with other forms of feedback. Opportunities for future work include the further investigation of learned approaches to scheduling, as well as autonomously generating auxiliary task definitions.

We gratefully acknowledge the Digital Research Alliance of Canada and NVIDIA Inc., who provided the GPUs used in this work through their Resources for Research Groups Program and their Hardware Grant Program, respectively.

§ REWARD MODEL FORMULATIONS

In this section, we investigate the approaches to off-policy reinforcement learning (RL) studied in this work, modified to accommodate an unknown R and the existence of an example state buffer ℬ^*.

§.§ Learning a Reward Function

The most popular approach to reward modelling, known as inverse RL, tackles an unknown R by explicitly learning a reward model. Modern approaches in the class of adversarial imitation learning (AIL) algorithms aim to learn both the reward function and the policy simultaneously. In AIL, the learned reward function, also known as the discriminator, aims to differentiate the occupancy measure between the state-action distributions induced by the expert and the learner. In example-based control, the state-conditional discriminator loss is <ref>, where D attempts to differentiate the occupancy measure between the state distributions induced by ℬ^* and ℬ. The output of D(s) is used to define R̂(s), which is then used for updating the Q-function using <ref>.
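A minimal sketch of one discriminator update, assuming D is a network returning logits: minimizing the binary cross-entropy below, with label 1 for success examples from B* and label 0 for replay states from B, is equivalent to ascending the objective above.

import torch
import torch.nn.functional as F

def discriminator_loss(D, replay_states, success_states):
    # D is trained to output 1 on s* ~ B* and 0 on s ~ B; a reward can
    # then be defined from D's output, e.g., log D(s) - log(1 - D(s)).
    logits_replay = D(replay_states)
    logits_success = D(success_states)
    loss = F.binary_cross_entropy_with_logits(
        logits_replay, torch.zeros_like(logits_replay))
    loss = loss + F.binary_cross_entropy_with_logits(
        logits_success, torch.ones_like(logits_success))
    return loss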
In example-based control, the discriminator D provides a smoothed label of success for states; thus, its corresponding reward function can provide more density than a typical sparse reward function, making this approach an appealing choice. Unfortunately, a learned discriminator can suffer from the deceptive reward problem, as previously identified in <cit.>, and this problem is exacerbated in the example-based setting. In the following sections, we describe options that remove the reliance on separately learned discriminators.

§.§ Discriminator-Free Reward Labels with Mean Squared Error TD Updates

A simple alternative to using a discriminator as a reward model was initially introduced as soft-Q imitation learning (SQIL) in <cit.>. In standard AIL algorithms, D is trained separately from π and Q^π, where D is trained using data from both ℬ and ℬ^*, whereas π and Q^π are trained using data exclusively from ℬ. However, most off-policy algorithms do not require this choice, and approaches such as <cit.> train Q, and possibly π, using data from both ℬ and ℬ^*. It is unclear why this choice is often avoided in AIL, but it might be because it can introduce instability due to the large discrepancy in the magnitudes of the Q-targets given data from ℬ and ℬ^*.

Sampling from both buffers, we can define R̂(s_t, a_t) in <ref> to be labels corresponding to whether the data is sampled from ℬ or ℬ^*; in SQIL, the labels are respectively 0 and 1. The full training objective resembles the minimax objective in <ref>, where we set D(s^*) = 1 and D(s) = 0. The resulting reward function is the expected label R̂(s, a) = 𝔼_ℬ' ∼ Categorical_s({ℬ, ℬ^*})[ 1(ℬ' = ℬ^*) ], where Categorical_s is a categorical distribution corresponding to the probability that s belongs to the buffers ℬ, ℬ^*. Consequently, only successful states s^* yield a positive reward, and the corresponding optimal policy will aim to reach a successful state as soon as possible. If we further assume that s^* transitions to itself, then for policy evaluation with the mean squared error (MSE), we can write the temporal difference (TD) targets, y, of the Q-updates with y(s, s') = γ𝔼_a'[ Q(s', a') ] and y(s^*, s^*) = 1 + γ𝔼_a'[ Q(s^*, a') ], where (s, ·, s') ∼ℬ, a' ∼π(a' | s'), and s^* ∼ℬ^*. This approach reduces complexity, as we no longer explicitly train a reward model, and also guarantees discrimination between data from ℬ and ℬ^*.

§.§ Discriminator-Free Reward Labels with Binary Cross-Entropy TD Updates

<cit.> introduced recursive classification of examples (RCE), a method for learning from examples of success. RCE mostly follows the approach outlined above in <ref>, but uses a weighted binary cross-entropy (BCE) loss with weights 1 + γw for data from ℬ and 1 - γ for data from ℬ^*. The TD targets are also changed from <ref> and <ref> to y(s, s') = γw(s') / (1 + γw(s')) and y(s^*, s^*) = 1, where w(s') = V^π(s')/(1 - V^π(s')). This approach can also be made closer to SQIL by removing the change in weights and by leaving <ref> as <ref>. This makes it equivalent to SQIL, apart from removing the bootstrapping from <ref> and using a BCE instead of an MSE for the TD updates. A major benefit of this approach, compared with the MSE TD updates in <ref>, is that y(s, s') ≤ y(s^*, s^*) is always enforced at every update, meaning that our approach to value penalization would provide no extra benefit. Nonetheless, our results from <ref> show that SQIL, even without value penalization, almost always outperforms both RCE and SQIL with a BCE loss (referred to as SQIL-BCE here).

§ WHY DOES SQIL OUTPERFORM RCE?
Our results show that VPACE-SQIL outperforms ACE-RCE, and that VP-SQIL and SQIL outperform RCE in almost all cases. This result conflicts with results from <cit.>, which showed SQIL strongly outperformed by RCE. In this section, we show results that help explain our findings.

<cit.> claimed that the highest-performing baseline against RCE was SQIL, but it is worth noting that their implementation[Available at <https://github.com/google-research/google-research/tree/master/rce> at time of writing.] is a departure from the original SQIL implementation, which uses the MSE for TD updates, as described in <ref>. Furthermore, <cit.> also noted that it was necessary to add n-step returns to off-policy learning to get reasonable performance. In their experiments, however, this adjustment was not included in their version of SQIL with a BCE loss. We completed the experiments using their implementation and show the average results across all RCE environments in <ref>, also comparing to SQIL with the MSE (without value penalization or auxiliary tasks) for TD updates. These results empirically show that RCE and SQIL with a BCE loss perform nearly identically, indicating that the benefits of the changed TD targets and weights described in <ref> may not be as clear as previously described. Furthermore, SQIL with the MSE clearly performs better on average, although it still performs worse than VPACE (see <ref>). While these empirical results demonstrate that example-based control with a BCE loss for TD updates is outperformed by SQIL with the MSE, <cit.> had theoretical results indicating that RCE should still learn an optimal policy. In the following section, we describe a flaw found in one of their proofs that may help to further understand why RCE is outperformed by SQIL with the MSE.

§.§ Re-Examination of the Proofs from Section 4 of RCE <cit.>

In this section, we outline potential flaws in the proofs of Lemma 4.2 and Corollary 4.2.1 in RCE <cit.>. We then provide a lemma that shows RCE can be considered as recovering a specific reward function, thereby unifying RCE with other IRL algorithms. To begin, recall that the RCE objective (i.e., Eq. (7) of <cit.>) is defined as follows: ℒ^π(θ) := p(e_t+ = 1) 𝔼_p(s_t, a_t | e_t+ = 1)[ log C^π_θ(s_t, a_t) ] + 𝔼_p(s_t, a_t)[ log( 1 - C^π_θ(s_t, a_t) ) ]. <cit.> stated the following Lemma and Corollary to demonstrate that optimizing <ref> is equivalent to value iteration in the tabular setting:

(Lemma 4.2 of RCE) In the tabular setting, the expected updates for <ref> are equivalent to performing value iteration with reward function r(s_t, a_t) = (1 - γ) p(e_t = 1 | s_t) and a Q-function parameterized as Q^π_θ(s_t, a_t) = C^π_θ(s_t, a_t) / (1 - C^π_θ(s_t, a_t)).

(Corollary 4.2.1 of RCE) RCE converges in the tabular setting.

In <ref>, we find that the proof actually demonstrates equivalence with policy evaluation. In particular, the temporal difference (TD) target is defined to be y = r(s, a) + γ𝔼_s', a'[ Q(s', a') ], which optimizes the Bellman expected equation rather than the Bellman optimality equation. Furthermore, the assignment equation for the ratio should in fact be an expectation over both the next state-and-action pairs (i.e., s_t+1∼ p(·| s_t, a_t) and a_t+1∼π(·| s_t+1)) instead of only the next state, even if we aim to perform policy evaluation. As a result, <ref> also fails to hold if we simply use the convergence proof of value iteration.
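To make the contrast concrete, the following sketch computes the two kinds of TD targets for replay data; v_next and q_next stand in for the current value and Q estimates at the next state, and success examples receive a target of 1 in RCE and 1 + γQ(s^*, a') in SQIL.

def rce_target(v_next, gamma=0.99):
    # RCE classifier target: gamma*w/(1 + gamma*w) with w = V/(1 - V);
    # v_next is assumed to lie in (0, 1), as a classifier output.
    w = v_next / (1.0 - v_next)
    return gamma * w / (1.0 + gamma * w)

def sqil_mse_target(q_next, gamma=0.99, reward=0.0):
    # SQIL MSE target for replay data: a constant reward label plus a
    # bootstrapped next-state value.
    return reward + gamma * q_next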
We now provide a lemma indicating that the optimal solution of <ref> is equivalent to finding Q^π with reward r(s, a) = (1 - γ)p^π(e = 1 | s):

Fix any policy π : 𝒮→Δ(𝒜). Let θ^* = argmax_θℒ^π(θ) be the optimal solution of <ref>. Consider an MDP with reward function r(s_t, a_t) = (1 - γ)p(e_t = 1 | s_t). Then, in the tabular setting, Q^π = C^π/(1 - C^π) satisfies the Bellman expected equation. Furthermore, we have that Q^π(s_t, a_t) = p^π(e_t+ = 1 | s_t, a_t). That is, the solution of <ref> is equivalent to finding Q^π.

Fix any policy π. Consider an MDP with reward function r(s, a) = (1 - γ)p(e = 1 | s) and transition function p(s' | s, a). We define the Bellman expected operator 𝒯^π : ℝ^𝒮×𝒜→ℝ^𝒮×𝒜 to be 𝒯^π q(s, a) := r(s, a) + γ𝔼_s' ∼ p(·| s, a), a' ∼π(·| s')[ q(s', a') ]. Note that in the tabular setting, 𝒯^π is a γ-contraction mapping under the max-norm <cit.>. Thus, by the Banach fixed-point theorem, we have that for any q ∈ℝ^𝒮×𝒜, Q^π = (𝒯^π)^∞ q. In other words, performing policy evaluation on π with reward r(s, a) = (1 - γ)p(e = 1 | s) yields Q^π. Now, by Lemma 4.1 of <cit.>, we have that C^π(s_t, a_t)/(1 - C^π(s_t, a_t)) = (1 - γ)p(e_t = 1 | s_t) + γ𝔼_s_t+1∼ p(·| s_t, a_t), a_t+1∼π(·| s_t+1)[ C^π(s_t+1, a_t+1)/(1 - C^π(s_t+1, a_t+1)) ]. <ref> satisfies the Bellman expected equation; thus, by the uniqueness of the fixed point, we have that Q^π(s_t, a_t) = C^π(s_t, a_t)/(1 - C^π(s_t, a_t)). Recall that C^π(s_t, a_t)/(1 - C^π(s_t, a_t)) is the Bayes' optimal classifier for <ref>. Therefore, suppose we obtain the optimum of <ref>, that is, θ^* = argmax_θℒ^π(θ); then we have that p(e_t+ = 1 | s_t, a_t) = C^π_θ^*(s_t, a_t)/(1 - C^π_θ^*(s_t, a_t)) = C^π(s_t, a_t)/(1 - C^π(s_t, a_t)) = Q^π(s_t, a_t), meaning that finding the optimum of ℒ^π(θ) is equivalent to performing policy evaluation on π.

With <ref> and Lemma 4.3 of <cit.>, we can view Algorithm 1 of <cit.> as a form of policy iteration under the reward function r(s, a) = (1 - γ)p(e = 1 | s), where (i) updating the classifier θ corresponds to performing a policy evaluation step on π, and (ii) updating the policy π corresponds to performing a policy improvement step. We note, however, that the RCE objective is not necessarily equivalent to policy evaluation when the true optimum of RCE, θ^*, is not obtained.

§ FINITE-SAMPLE ANALYSIS OF VALUE-PENALTY REGULARIZATION

In this section, we show a finite-sample complexity bound for learning the Q-function via the mean squared error (MSE) with value-penalty regularization and demonstrate that we can use its solution for approximate policy iteration (API). Intuitively, given a fixed set of samples, a near-optimal solution obtained using the regularized loss is close to the true optimal solution of the unregularized loss with high probability. Then, if we can control the error of the solution from the regularized loss, we can further bound the value error between the Q-function output by API and the optimal Q-function.

More formally, consider a parameterized function class 𝒬 = {Q_θ : ℝ^𝒮×𝒜→ℝ | θ∈Θ}. We define the MSE as ℓ(θ, Z_N) = (1/N)∑_n=1^N ( Q_θ(S_n, A_n) - G_n )^2, where Z_N = {(S_n, A_n, G_n)}_n=1^N is a set of N samples of state-action pairs and the corresponding expected returns. The value-penalty regularizer is defined as h(θ, Z_N) = λ(1/N)∑_n=1^N [ ( max(Q_θ(S_n, A_n) - Q_U, 0) )^2 + ( max(Q_L - Q_θ(S_n, A_n), 0) )^2 ], for some constants Q_L < Q_U, corresponding to the lowest and highest possible Q-values respectively, and a positive constant λ.
Then, the MSE with value-penalty regularization is defined as ℓ_reg(θ, Z_N) = ℓ(θ, Z_N) + h(θ, Z_N). During training, suppose we are given N i.i.d. samples Z_N from 𝒟. We aim to approximate the regularized empirical risk minimizer (ERM) to obtain θ̂: ℓ(θ̂, Z_N) + h(θ̂, Z_N) ≤min_θ∈Θ[ ℓ(θ, Z_N) + h(θ, Z_N) ] + ε'. Our goal is to find the approximate regularized ERM θ̂ that achieves a low test loss, defined by ℒ(θ, 𝒟) = 𝔼_Z ∼𝒟[ ℓ(θ, Z) ].

To begin, we assume that the function class has bounded outputs (or bounded parameters), which is a standard requirement for characterizing generalization errors:

(Bounded function output and realizability.) Let C_𝒬≥max(| Q_L |, | Q_U |). The parameterized function class 𝒬 is bounded: 𝒬 = {Q_θ : ℝ^𝒮×𝒜→ [-C_𝒬, C_𝒬] | θ∈Θ}. Moreover, the optimal parameter of the test loss belongs to the function class: θ^* = argmin_θℒ(θ, 𝒟) ∈Θ.

Note that the regularizer term <ref> only has an effect when the function Q_θ predicts values that are out of range; we therefore set C_𝒬 to be larger than the maximum possible return for a more meaningful analysis, as otherwise we recover the unregularized loss <ref>. Furthermore, we characterize the complexity of the function class 𝒬 using the expected Rademacher complexity with offset, defined to be ℜ^h_N(𝒬, 𝒟) = 𝔼_Z_N ∼𝒟[ 𝔼_σ[ sup_θ∈Θ (1/N)∑_n=1^N σ_n ( ℓ(θ, Z_n) + 0.5 h(θ, Z_n) ) - 0.5 h(θ, Z_N) ] ], where σ_1, …, σ_N are independent Rademacher random variables. Then, by Corollary 6.21 of <cit.>, we can show that the approximate ERM satisfies an oracle inequality with high probability, which we state here:

(Corollary 6.21 of <cit.>.) Let Δ_N h(θ) ≤sup_z, z'[ h(θ, z) - h(θ, z') ]. Assume that for some C ≥ 0: sup_θ∈Θ[ sup_z, z'[ ℓ(θ, z) - ℓ(θ, z') + Δ_N h(θ) ] ] ≤ C. Then the approximate ERM specified by <ref>, θ̂, satisfies the following oracle inequality: for any δ, δ' > 0, with probability 1 - δ - δ', we have that ℒ(θ̂, 𝒟) ≤inf_θ∈Θ[ ℒ(θ, 𝒟) + 𝔼_Z_N[ h(θ, Z_N) ] + Δ_N h(θ) √(log(1/δ')/2N) ] + ε' + 2ℜ^h_N(𝒬, 𝒟) + 2C√(log(1/δ)/2N).

We note that the first term on the RHS of <ref> is zero due to <ref>. In particular, by <ref> and the uniqueness of the Bellman expected operator, we have that ℒ(θ^*, 𝒟) = 0. The second term 𝔼_Z_N[ h(θ, Z_N) ] = 0, and the first factor in the third term Δ_N h(θ) = 0, since Q_θ^* is a valid Q-function and thus no clipping is applied. Without loss of generality, suppose that ε' = 0; then by <ref> we have that, with probability 1 - δ - δ', ℒ(θ̂, 𝒟) ≤ 2ℜ^h_N(𝒬, 𝒟) + 2C√(log(1/δ)/2N) = ‖ε_approx‖_2, 𝒟. In other words, as we increase the number of samples N, the approximation error of θ̂, ‖ε_approx‖_2, 𝒟, approaches zero with high probability. The boundedness in <ref> further ensures that the Rademacher complexity is well behaved and shrinks to zero as N increases under the test loss <cit.>.

Now, consider the approximate policy iteration algorithm for Q-functions called AMPI-Q <cit.>. Instead of approximating the Q-function using the standard squared loss <ref>, we instead use the regularized loss <ref>. From <ref> we know that ℒ(θ̂, 𝒟) ≤‖ε_approx‖_2, 𝒟 with high probability. Consequently, at each iteration k of AMPI-Q, we can replace the policy evaluation error ‖ε_k‖_2, μ with ‖ε_approx‖_2, 𝒟 and immediately obtain a finite-sample analysis of AMPI-Q through Theorem 8 of <cit.>.
With <ref> and the assumptions from <cit.>, after k iterations, AMPI-Q with the regularized loss <ref> for policy evaluation satisfies the following with probability 1 - δ - δ': ‖ Q^* - Q^π_k‖_2, ρ_0≤ ( 2(γ - γ^k)(𝒞^1,k,0_∞)^1/2 / (1 - γ)^2 ) ( 2ℜ^h_N(𝒬, 𝒟) + 2C√(log(k/δ)/2N) ) + g(k), where 𝒞^1,k,0_∞ and g(k) are the concentrability-coefficient-related terms defined in <cit.>.

We can directly apply Theorem 8 of <cit.> (setting p = 2, q' = 1, q = ∞) and take the union bound over the bad events; that is, with probability at most (δ + δ')/k, ℒ(θ̂, 𝒟) > ‖ε_approx‖_2, 𝒟 for each of the k iterations.

<ref> indicates that when we run AMPI-Q for a sufficiently large number of iterations, the learned policy reaches optimal policy performance, even when using our proposed value-penalty regularization <ref> on the standard squared loss for policy evaluation.

§ ADDITIONAL ENVIRONMENT, ALGORITHM, AND IMPLEMENTATION DETAILS

The following sections contain further details of the environments, tasks, auxiliary tasks, algorithms, and implementations used in our experiments.

§.§ Additional Environment Details

Compared with the original Panda tasks from LfGP <cit.>, we switch from 20 Hz to 5 Hz control (finding major performance improvements for doing so), improve the handling of rotation symmetries in the observations, and remove the force-torque sensor, since it turned out to have errors at low magnitudes. Crucially, these modifications did not require training new expert policies, since the same final observation states from the full-trajectory expert data from <cit.> remained valid. Compared with the original LfGP tasks, we also remove one auxiliary task from three of the main tasks, since we found a slight performance improvement for doing so, and add three new main tasks. The environment was otherwise identical to its implementation in LfGP, including allowing randomization of the block and end-effector positions anywhere above the tray, using delta-position actions, and using the end-effector pose, end-effector velocity, object pose, object velocity, and relative positions in the observations. For further details of the original environment, see <cit.>.

Since the Sawyer tasks from <cit.> only contain the end-effector position and object position by default, they do not satisfy the Markov property. To mitigate this, we train all algorithms in the Sawyer tasks with a frame stack of 3 and add the gripper position to the observations, since we found that this, at best, improved performance for all algorithms and, at worst, kept performance the same. We validate that this is true by also performing experiments using the original code and environments from <cit.>, unmodified; the results of RCE without these modifications are presented in <ref>, and are comparable to or poorer than our own RCE results.

§.§ Delta-Position Adroit Hand Environments

The Adroit hand tasks from <cit.> use absolute positions for actions. This choice allows even very coarse policies, with actions that would be unlikely to succeed in the real world, to learn to complete two of the tasks, and also makes the intricate exploration required to solve the third very difficult. Specifically, VPACE-SQIL and several other baselines achieve high return in the first two tasks, but the learned policies use unrealistic actions that exploit simulator bugs. As well, no method achieves any return in the third task in 1.5M environment steps. In the interest of solving these tasks with more skillful policies, we generated modified versions of these environments with delta-position action spaces.
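A minimal sketch of such a modification, assuming a Gym-style interface: the wrapper exposes a small delta-position action space and converts each delta into an absolute-position command for the underlying environment. The get_current_position accessor and the bound handling are assumptions for illustration, not the actual Adroit interfaces.

import numpy as np
import gym

class DeltaPositionWrapper(gym.ActionWrapper):
    # The agent commands small offsets; the wrapper adds them to the current
    # position and clips to the original absolute-position bounds.
    def __init__(self, env, max_delta=0.05):
        super().__init__(env)
        self.max_delta = max_delta
        low = -max_delta * np.ones_like(env.action_space.low)
        high = max_delta * np.ones_like(env.action_space.high)
        self.action_space = gym.spaces.Box(low=low, high=high, dtype=np.float32)

    def action(self, delta):
        delta = np.clip(delta, -self.max_delta, self.max_delta)
        current = self.env.get_current_position()  # assumed accessor
        return np.clip(current + delta,
                       self.env.action_space.low,
                       self.env.action_space.high)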
Furthermore, in the relocation task, the action-space rotation frame was changed to be in the palm rather than at the elbow, and, since the relative positions between the palm, ball, and relocate goal are included as part of the state, we removed the joint positions from the state. In our experiments, these modified environments are identified with a DP (delta-position) suffix. See <ref> for a comparison of learned policies in each version of these environments.

§.§ Real World Environment Details

<ref> shows our experimental platform and setup for our two real-world tasks. In both the drawer and door tasks, the observation space contains the end-effector position, an ArUco tag <cit.> providing the drawer or door position (in the frame of the RGB camera; we do not perform extrinsic calibration between the robot and the camera), and the gripper finger position. The action space in both contains delta-positions and a binary gripper command for opening and closing. The action space for the drawer task is one-dimensional (allowing motion along a line), while the action space for the door task is two-dimensional (allowing motion in a plane). The initial state distribution for the drawer task allows initializing the end-effector anywhere within a 10 cm line approximately 25 cm away from the drawer handle when closed. For the door task, the initial state distribution is a 20 cm × 20 cm square, approximately 20 cm away from the door handle when closed. Actions are supplied at 5 Hz. For both environments, for evaluation only, success is determined by whether the drawer or door is fully opened, as detected by the absolute position of the ArUco tag in the frame of the RGB camera. Our robot environment code is built on Polymetis <cit.> and uses the default hybrid impedance controller that comes with the library. To reduce environmental damage from excessive forces and torques, we reduced the Cartesian translational stiffness in all dimensions from 750 N/m to 250 N/m, and the force and torque limits in all dimensions from 40 N and 40 Nm to 20 N and 20 Nm.

§.§ Additional Task Details

<ref> shows representative images for all environments and tasks used in this work. Success examples for the Panda environments were gathered by taking s_T from the existing datasets provided by <cit.>. Success examples for main tasks in the Sawyer environments were generated using the same code as <cit.>, in which the success examples were generated manually given knowledge of the task. Auxiliary-task data was generated with a similar approach. Success examples for the Adroit hand environments were generated from the original human datasets provided by <cit.>. Success examples for our real-world tasks were generated by manually moving the robot to a small set of successful positions for each auxiliary task and main task.

As stated in <ref>, all Panda main tasks use the auxiliary tasks release, reach, grasp, and lift. There are two specific nuances that were left out of the main text for clarity and brevity: (i) one of the main tasks uses only release as an auxiliary task (since release also acts as a "coarse" reach), and (ii) half of the release dataset for each task is specific to that task (e.g., containing insert or stack data), as was the case in the original datasets from <cit.>. For the Sawyer, Adroit, and real Panda environments, because the observation spaces are not shared, each task has its own separate reach and grasp data.

§.§ Additional Algorithm Details

<ref> shows a summary of VPACE, built on LfGP <cit.> and SAC-X <cit.>. <ref> shows a breakdown of some of the major differences between all of the algorithms studied in this work.
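For the environments where success examples come from existing trajectory datasets, the extraction is simple; a sketch, assuming trajectories is a list of (T, obs_dim) observation arrays from successful episodes:

import numpy as np

def final_state_examples(trajectories, num_examples=200, seed=0):
    # Build the example buffer B* from the final observations s_T of
    # successful trajectories, subsampling to the desired buffer size.
    rng = np.random.default_rng(seed)
    finals = np.stack([traj[-1] for traj in trajectories])
    idx = rng.choice(len(finals), size=min(num_examples, len(finals)),
                     replace=False)
    return finals[idx]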
§.§ Additional Implementation Details

In this section, we list some specific implementation details of our algorithms. We only list parameters or choices that may be considered unique to this work; a full list of all parameter choices can be found in our code. We also provide the VPACE pseudocode in <ref>, with blue text only applying to learned discriminator-based reward functions (see <ref> for the various reward models). Whenever possible, all algorithms and baselines use the same modifications. <ref> also shows our choices for common off-policy RL hyperparameters, as well as choices for those introduced by this work.

DAC reward function: for VPACE-DAC and VP-DAC, although there are many options for reward functions that map D to R̂ <cit.>, following <cit.>, we set the reward to R̂_𝒯(s) = log(D_𝒯(s)) - log(1 - D_𝒯(s)).

n-step returns and entropy in the TD error: following results from <cit.>, we also add n-step returns and remove the entropy bonus in the calculation of the TD error for all algorithms in all Sawyer and Adroit environments, finding a significant performance gain for doing so.

Absorbing states and terminal states: for all algorithms, we do not include absorbing states (introduced in <cit.>) or terminal markers (sometimes referred to as "done"), since we found that both of these additions cause major bootstrapping problems when environments only terminate on timeouts and timeouts do not necessarily indicate failure. Previous work supports bootstrapping on terminal states when they are caused by non-failure timeouts <cit.>.

SQIL labels for policy data: the original implementation of SQIL uses labels of 0 and 1 for the TD updates in <ref> and <ref>, respectively. We found that changing the label for <ref> from 0 to -1 improved performance.

Reward scaling of 0.1: we use a reward scaling parameter of 0.1 for all implementations. Coupled with a discount rate γ = 0.99 (common for much work in RL), this sets the expected minimum and maximum Q-values for SQIL to -0.1/(1 - γ) = -10 and 0.1/(1 - γ) = 10.

No multitask weight sharing: intuitively, one may expect weight sharing to be helpful for multitask implementations. We found that it substantially hurt performance, so our multitask methods do not share weights between tasks or between actor and critic. However, the multitask discriminator in VPACE-DAC does have an initial set of shared weights, due to its significantly poorer performance without this choice.

ℬ^* sampling for Q: in SQIL, DAC, and RCE, we sample from both ℬ and ℬ^* for Q updates, but not for π updates (which only sample from ℬ). The original DAC implementation in <cit.> only samples ℬ^* for updating D, sampling only from ℬ for updating Q.

All other architecture details, including neural network parameters, are the same as in <cit.>, upon which our own implementations are built. Our code is built on top of the code from <cit.>, which was originally built using <cit.>.

§.§.§ Maintaining Q^π_max and Q^π_min Estimates for Value Penalization

Our approach to value penalization requires maintaining estimates for, or choosing, Q^π_max and Q^π_min. In both DAC and SQIL, the estimate of Q^π_max comes from taking the mini-batch of data from ℬ^*, passing it through the Q-function, taking the mean, and then using a median moving-average filter to maintain an estimate. The "Q^π_max, Q^π_min num. filter points" value in <ref> refers to the size of this filter; we chose 50 and used it for all of our experiments. We set Q^π_min = reward scale × min(R̂)/(1 - γ).
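A minimal sketch of the median moving-average filter used for the Q^π_max estimate described above, assuming each update receives the Q-values of a mini-batch of success examples:

from collections import deque
import numpy as np

class QMaxFilter:
    # Median moving-average estimate of Q_max = E[V(s*)]: each update
    # appends the mean Q over a success-example mini-batch, and the
    # estimate is the median over the last num_points values (50 here).
    def __init__(self, num_points=50):
        self.window = deque(maxlen=num_points)

    def update(self, q_success_batch):
        self.window.append(float(np.mean(q_success_batch)))
        return float(np.median(self.window))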
§.§.§ Scheduler Choices
As noted in <ref>, our ACE algorithms use the same approach to scheduling from <cit.>. Specifically, we use a weighted random scheduler (WRS) combined with a small set of handcrafted high-level trajectories. The WRS forms a prior categorical distribution over the set of tasks, with a higher probability mass p_main (Main Task Rate in <ref>) for the main task and p_main/K for all other tasks. Additionally, we choose whether to uniformly sample from a small set of handcrafted high-level trajectories, instead of from the WRS, at the Handcraft Rate from <ref>. Our selections for handcrafted trajectories are quite simple, and reusable between main tasks within each environment. In the Panda tasks, there are eight scheduler periods per episode and four auxiliary tasks (reach, grasp, lift, release), and the handcrafted trajectory options are:
* reach, lift, main, release, reach, lift, main, release
* lift, main, release, lift, main, release, lift, main
* main, release, main, release, main, release, main, release
[Figure: Additional results for the task from <ref>, either including or excluding the use of example augmentation described in <ref>.]
In the Sawyer and Adroit environments, we actually found that the WRS was unnecessary to efficiently learn the main task, and simply used two handcrafted high-level trajectories. In these environments, there are five scheduler periods per episode and two auxiliary tasks (reach, grasp), and the handcrafted trajectory options are:
* reach, grasp, main, main, main
* main, main, main, main, main
§.§.§ Expert Data Augmentation
We added a method for augmenting our expert data to artificially increase dataset size. The approach is similar to other approaches that simply add Gaussian or uniform noise to data in the buffer <cit.>. In our case, we go one step further than the approach from <cit.>, and first calculate the per-dimension standard deviation of each observation in ^*, scaling the Gaussian noise added to each dimension of each example based on the dimension's standard deviation. For example, if a dimension in ^* has zero standard deviation (e.g., in , the pose of the blue block is always the same), it will have no noise added by our augmentation approach. The parameter “Expert Aug. Factor" from <ref> controls the magnitude of this noise, after our per-dimension normalization scheme. In <ref>, we show the results of excluding expert augmentation: there is a clear, if slight, performance decrease when it is excluded, which is even more pronounced with a smaller ^*_ size. All methods and baselines from our own implementation use expert data augmentation.
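A minimal sketch of this augmentation scheme follows, assuming success examples are stored as a NumPy array; the function and argument names are illustrative, with aug_factor playing the role of the “Expert Aug. Factor" above.

```python
import numpy as np

def augment_success_examples(examples, aug_factor, num_copies=1, seed=None):
    """Add per-dimension, std-scaled Gaussian noise to success examples.

    Dimensions with zero standard deviation (e.g., a fixed object pose)
    receive no noise, matching the scheme described above.
    """
    rng = np.random.default_rng(seed)
    std = examples.std(axis=0)  # per-dimension standard deviation of the example set
    noisy = [examples + rng.normal(0.0, 1.0, size=examples.shape) * (aug_factor * std)
             for _ in range(num_copies)]
    return np.concatenate([examples] + noisy, axis=0)
```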
§.§.§ Other Ablation Details
In our ablation experiments from <ref>, we included three baselines with full trajectory data, in addition to success examples. We added 200 (s,a) pairs from full expert trajectories to make datasets comparable to the datasets from <cit.>, where they used 800 expert (s,a) pairs, but their environment was run at 20 Hz instead of 5 Hz, meaning they needed four times more data to have roughly the same total number of expert trajectories. We generated these trajectories using high-performing policies from our main experiments, since the raw trajectory data from <cit.> would not apply given that we changed the control rate from 20 Hz to 5 Hz.
§.§.§ Real Panda Implementation Details
While most of the design choices in <ref> apply to all environments tested, our real Panda environment had some small specific differences, mostly due to the complications of running reinforcement learning in the real world. We list the differences here, but for an exhaustive list, our open-source code contains further details.
Maximum episode length: The maximum episode length for both and is 1000 steps, or 200 seconds in real time. This was selected to reduce how often the environment had to be reset, which is time-consuming. Running episodes for this long, and executing actions at 5 Hz, our environments complete 5000 environment steps in roughly 20 minutes. The extra time is due to the time to reset the environment after 1000 steps or after a collision. VPACE took approximately 100 minutes to learn to complete consistently, and about 200 minutes to learn to complete the more difficult .
Shorter initial exploration: To attempt to learn the tasks with fewer environment samples, we reduce buffer warmup to 500 steps, and initial random exploration to 1000 steps.
Frame stack: For training, we stacked two regular observations to avoid state aliasing (see the sketch at the end of this list).
Ending on success: We ended episodes early if they were determined to be successful at the main task only. Although this is not necessary for tasks to be learned (and this information was not provided to the learning algorithm), it gave us a way to evaluate training progress.
Extra gradient steps: To add efficiency during execution and training, we completed training steps during the gap in time between an action being executed and an observation being gathered. Instead of completing a single gradient step at this time, as is the case for standard SAC (and VPACE), we completed four gradient steps, finding in simulated tasks that this gave a benefit to learning efficiency without harming performance. Previous work <cit.> has found that increasing this update rate can reduce performance, but we hypothesize that our value penalization scheme helps mitigate this issue.
Collisions: If the robot is detected to have exceeded force or torque limits (20 N and 20 Nm in our case, respectively), the final observation prior to collision is recorded, and the environment is immediately reset. There are likely more efficient ways to handle such behaviour, but we did not investigate any in this work.
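The two-observation frame stack mentioned in the list above can be implemented as a small wrapper; this is a generic sketch with illustrative names, not an excerpt from our environment code.

```python
from collections import deque
import numpy as np

class FrameStack:
    """Concatenate the k most recent observations to avoid state aliasing."""
    def __init__(self, k=2):
        self.k = k
        self.frames = deque(maxlen=k)

    def reset(self, obs):
        # Fill the buffer with the first observation so the stacked shape is constant.
        for _ in range(self.k):
            self.frames.append(obs)
        return np.concatenate(self.frames)

    def step(self, obs):
        self.frames.append(obs)
        return np.concatenate(self.frames)
```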
The benefits of value penalization are also clear, even in Bring, where ACE-SQIL initially learns the task, but then learns very poor Q estimates and exhibits increasingly unstable performance. In the Sawyer tasks (<ref>), the benefit of VPACE is not as prominent, even if VPACE-SQIL is still the best-performing method on average. The clearer pattern is the improvement from adding auxiliary task exploration, although the improvement is still lower, on average, than the improvement in the Panda tasks. Compared with the Panda tasks, these tasks do not have randomized initial conditions (apart from ), and also generally require shorter movements to complete, so the benefits of VPACE are less pronounced. In the Adroit hand tasks (<ref>), VPACE-SQIL, VPACE-DAC, and VP-SQIL are always among the highest-performing methods. In particular, RCE and ACE-RCE are always outperformed by all other methods. As stated in <ref> and further detailed in <ref>, we also generated delta-position variants of the environments, which (i) generated, qualitatively, far more realistic policies (see <ref>), and (ii) allowed many methods to solve the task.
§.§ Value Penalization Calibration Improvement – All Environments
This section expands on the value penalization calibration results from <ref>, covering the Panda and Sawyer tasks in <ref> and the Adroit hand tasks in <ref>. Following the results shown for , the results for other difficult tasks also show clear violations of y(s,a) ≤ y(s^*,a^*), and tasks in which this rule is violated also tend to have poorer performance. The violations are more pronounced for ACE-SQIL, which, as previously discussed, is likely because it generates a far more diverse state distribution in . As shown for in <ref>, VP-SQIL and VPACE-SQIL never violate y(s,a) ≤ y(s^*,a^*). Intriguingly, although the rule is severely violated for many Adroit hand tasks, ACE-SQIL and SQIL still have reasonable performance in some cases. This shows that highly uncalibrated Q estimates can still, sometimes, lead to adequate performance. We hypothesize that this occurs because these tasks do not necessarily need s_T ∼ to match s^* ∼^*_main to achieve high return, but we leave investigating this point to future work.
§.§ Auxiliary Task Performance Results
In our main text, we only presented performance results for main tasks, since our primary goal was to efficiently generate policies that performed well on a particular task. However, our scheduled auxiliary task approach inherently learns multiple auxiliary task policies π_aux in addition to the main task policy π_main. While performance on these auxiliary task policies is not necessarily expected to be high throughout learning, observing auxiliary task performance can potentially provide useful insights, such as whether the auxiliary policies would be useful for transfer learning to learn a new main task. We show all auxiliary task performance for our Panda environments in <ref>. Auxiliary tasks tend to be learned slightly before the main task is learned, and otherwise auxiliary task performance generally follows performance on the main task (i.e., if the main task is learned slowly or poorly, auxiliary task performance will reflect this). The performance on the Reach auxiliary task is sometimes lower than on other, ostensibly more difficult tasks such as Lift, but this is likely because the success criterion for Reach is less forgiving than that for Lift. We show auxiliary task performance for Sawyer and Adroit Hand environments in <ref> and <ref>.
Returns for auxiliary tasks are calculated based on reward functions originally taken from <cit.>, in which a reaching reward is positive as an end-effector moves closer to an object, and negative as it moves away, and grasp receives a similar bonus for moving or lifting the object of interest. In our ACE experiments for Sawyer and Adroit, we only use auxiliary tasks as part of a handcrafted high-level trajectory (see <ref>), meaning that reach and grasp are only learned after a reset, whereas during evaluation, the policy may have an opportunity to guide the end-effector to partially reach an object, then move away, with no learned strategy for recovery. We suspect that using a weighted random scheduler throughout learning may alleviate this issue, but we did not explore it here, since learning generally effective or reusable auxiliary tasks is beyond the scope of this work.
§ EXPANDED LIMITATIONS
In this section, we expand on some of the limitations originally discussed in <ref>, where we previously skipped limitations that are inherent to reinforcement learning and to learning from guided play (LfGP, <cit.>), both of which are core parts of our approach.
Experimental limitation—Numerical state data only All of our tests are done with numerical data, instead of image-based data. Other work <cit.> has shown that for some environments, image-based learning simply results in slower learning compared with numerical state data, and we assume that the same would be true for our method as well.
Assumption—Generating example success states is easier We claim that success example distributions are easier to generate than full trajectory expert data, and while we expect this to be true in almost all cases, there may still be tasks or environments where accomplishing this is not trivial. As well, similar to other imitation learning methods, the amount of data required to generate an effective policy is unknown a priori, but adding a way to append to the existing success state distribution (e.g., <cit.>) would presumably help mitigate this.
Assumption/failure mode—Unimodal example distributions Although we do not explicitly claim that unimodal example state distributions are required for VPACE to work, all of our tested tasks have roughly unimodal example state distributions. It is not clear whether our method would gracefully extend to the multimodal case, and investigating this is an interesting direction for future work.
Experimental limitation—Some environment-specific hyperparameters While the vast majority of hyperparameters were transferable between all environments and algorithms, the scheduler period, the inclusion of n-step targets, and the use of entropy in TD updates were different between environments to maximize performance. Scheduler periods will naturally differ between environments, but future work should further investigate why n-step targets and the inclusion of entropy in TD updates have environment-specific effects.
§.§ VPACE and LfGP Limitations
VPACE shares the following limitations with LfGP <cit.>.
Assumption—Existence of auxiliary task datasets VPACE and LfGP require the existence of auxiliary task example datasets ^*_aux, in addition to a main task dataset ^*_main. This places a higher initial burden on the practitioner. In future, choosing environments where this data can be reused as much as possible will reduce this burden.
Assumption—Clear auxiliary task definitions VPACE and LfGP require a practitioner to manually define auxiliary tasks.
We expect this to be comparatively easier than generating a similar dense reward function, since it does not require evaluating the relative contribution of individual auxiliary tasks. As well, all tasks studied in this work share task definitions, and the Panda environment even shares task data itself, leading us to assume that these task definitions will extend to other manipulation tasks as well.
Assumption—Clear choices for handcrafted scheduler trajectories VPACE and LfGP use a combination of a weighted random scheduler with a handcrafted scheduler, randomly sampling from pre-defined trajectories of high-level tasks. <cit.> found that the handcrafted scheduler added little benefit compared with a weighted random scheduler, and further work should investigate this claim, or perhaps attempt to use a learned scheduler, as in <cit.>.
§.§ Reinforcement Learning Limitations
Experimental limitation—Free environment exploration As is common in reinforcement learning methods, our method requires exploration of environments for a considerable amount of time (on the order of hours), which may be unacceptable for tasks with, e.g., delicate objects.
http://arxiv.org/abs/2407.01917v1
20240702033209
Securing Distributed Network Digital Twin Systems Against Model Poisoning Attacks
[ "Zifan Zhang", "Minghong Fang", "Mingzhe Chen", "Gaolei Li", "Xi Lin", "Yuchen Liu" ]
cs.NI
[ "cs.NI", "cs.CR", "cs.DC" ]
Securing Distributed Network Digital Twin Systems Against Model Poisoning Attacks
Zifan Zhang, Minghong Fang, Mingzhe Chen, Member, IEEE, Gaolei Li, Member, IEEE, Xi Lin, Member, IEEE, Yuchen Liu, Member, IEEE
Z. Zhang and Y. Liu are with the Department of Computer Science, North Carolina State University, Raleigh, NC, 27695, USA (Email: {zzhang66, yuchen.liu}@ncsu.edu). (Corresponding author: Yuchen Liu.) Minghong Fang is with the Department of Electrical and Computer Engineering, Duke University, Durham, NC, 27705, USA (Email: minghong.fang@duke.edu). M. Chen is with the Department of Electrical and Computer Engineering and Frost Institute for Data Science and Computing, University of Miami, Coral Gables, FL 33146, USA (Email: mingzhe.chen@miami.edu). G. Li and X. Lin are with the School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, China (Email: {gaolei_li, linxi234}@sjtu.edu.cn).
July 8, 2024
§ ABSTRACT
In the era of 5G and beyond, the increasing complexity of wireless networks necessitates innovative frameworks for efficient management and deployment. Digital twins (DTs), embodying real-time monitoring, predictive configurations, and enhanced decision-making capabilities, stand out as a promising solution in this context. Within a time-series data-driven framework that effectively maps wireless networks into digital counterparts, encapsulated by integrated vertical and horizontal twinning phases, this study investigates the security challenges in distributed network DT systems, which potentially undermine the reliability of subsequent network applications such as wireless traffic forecasting. Specifically, we consider a minimal-knowledge scenario for all attackers, in that they do not have access to network data or other specialized knowledge, yet can interact with previous iterations of server-level models. In this context, we spotlight a novel fake traffic injection attack designed to compromise a distributed network DT system for wireless traffic prediction. In response, we then propose a defense mechanism, termed global-local inconsistency detection (GLID), to counteract various model poisoning threats. GLID strategically removes abnormal model parameters that deviate beyond a particular percentile range, thereby fortifying the security of the network twinning process.
Through extensive experiments on real-world wireless traffic datasets, we show that both our attack and defense strategies significantly outperform existing baselines, highlighting the importance of security measures in the design and implementation of DTs for 5G and beyond network systems. Digital twin, poisoning attack, security, distributed learning, wireless networks, traffic prediction.
§ INTRODUCTION
In the realm of telecommunications, wireless networks are experiencing a paradigm shift, primarily driven by the advent of edge computing, spectrum sharing, and millimeter-wave communication technologies in the 5G era. These technological advancements are foundational to a multitude of novel applications and services, notably enhancing mobile broadband and facilitating the seamless integration of the Internet of Things (IoT) <cit.>, autonomous transportation <cit.>, smart urban infrastructure <cit.>, and remote healthcare delivery <cit.>. Further, the nascent stages of 6G research are indicative of potential revolutionary leaps in hybrid physical-virtual network technologies, paving the way for ubiquitous and intelligent connectivity worldwide. Parallel to these advancements, the concept of the digital twin (DT) has surfaced as a significant technological breakthrough in the mixed reality era <cit.>. DTs embody intricate virtual representations of physical entities or systems and have gained traction in the context of the Fourth Industrial Revolution. This concept synergistically harnesses the capabilities of IoT, machine learning, and big data analytics, meticulously constructing a comprehensive digital model that mirrors the physical attributes, processes, interconnections, and dynamics of its real-world counterpart. Such models play a pivotal role in facilitating predictive simulations, what-if analyses, and system optimizations within a virtual environment, thereby offering tangible insights into operational challenges and maintenance requirements <cit.>. While DTs offer a wide range of benefits and applications, ensuring their security remains a critical concern that necessitates a comprehensive understanding and robust countermeasures. Common security threats to DTs include data breaches, unauthorized access, and cyber-attacks, which can disrupt the seamless interaction between the physical and virtual systems <cit.>. In the realm of wireless networks, these DTs face additional challenges such as Byzantine attacks, man-in-the-middle attacks, and signal interference, which can severely impact their availability and reliability. Furthermore, robust countermeasures, including encryption techniques and secure communication protocols, are needed to protect DTs from potential adversarial attacks in open wireless environments <cit.>. As DTs are increasingly integrated into metaverse applications <cit.>, addressing the security and privacy challenges becomes paramount to ensure a trustworthy user interface <cit.>. In particular, trust evaluation schemes such as in <cit.> have been proposed for using federated learning (FL) in DT systems, aiming to enhance data usage security by evaluating the trustworthiness of participating network entities. In the realm of wireless networks, FL leverages its decentralized nature to facilitate multiple network services.
With the exponential growth in the number of connected devices and the ever-increasing demand for data-intensive applications like streaming and IoT services, constructing precise network digital twins (NDTs) becomes vital for ensuring various downstream forecasting tasks, such as wireless traffic prediction (WTP) <cit.>. Despite distributed learning's potential in accuracy, efficiency, and privacy preservation, its integration into NDT creation and operation is not devoid of challenges. Notably, Byzantine attacks, particularly model poisoning attacks, pose significant threats to the effectiveness and trustworthiness of NDT systems. In a model poisoning attack, malicious network entities introduce adversarial modifications to the model parameters during the mapping process of NDTs. This tampering results in a compromised server-level twin, i.e., the global twin model, when aggregated at the central network controller, subsequently producing incorrect operations on the physical infrastructure. Such inaccuracies lead to the risk of network inefficiencies and even severe service disruptions, especially in real-time applications like autonomous driving systems. In more extreme scenarios, these attacks may serve as gateways to further malicious network intrusions, instigating broader security and privacy concerns as illustrated in <cit.>. The grave implications of model poisoning attacks underscore the pressing need for robust security measures to ensure the integrity, reliability, and resilience of distributed NDT systems against Byzantine failures, thereby safeguarding the overarching network infrastructure and the services reliant on it. While most existing DT mapping algorithms and their associated security strategies are typically assessed within the context of classification problems <cit.>, scant attention has been paid to regression problems, such as the WTP scenarios examined within NDTs, which introduce distinct challenges related to data distribution, model complexity, and evaluation metrics. The distinction between data manipulation strategies in regression and classification problems, as well as their detection methodologies, underscores the nuanced challenges in safeguarding twin models against emerging adversarial attacks. For instance, in a regression-based DT-assisted WTP problem, attackers typically target the model's continuous output by altering the distribution or magnitude of input time-series data, intending to steer predictions in a specific direction. This differs from classification tasks, where the manipulation revolves around modifying input features to induce misclassification without noticeably changing the input's appearance to human observers. To bridge this gap, we make the first attempt to introduce a novel attack centered on injecting disruptive traffic data from malicious NDTs into wireless networks. Existing model poisoning attacks have predominantly depended on additional access knowledge and direct intrusions on physical base stations (BSs) <cit.>. However, in a practical cellular network system, BSs have exhibited a commendable level of resilience against attacks, making the extraction of training data from them a challenging endeavor. In contrast, the cost of deploying fake NDTs that mimic their behaviors is comparatively lower than the resources required for compromising authentic BSs <cit.>.
Accordingly, we assume that these compromised NDTs lack insight into the training data and only have access to the initial and current global twin models, aligning with the practical settings studied in <cit.>. Importantly, other information, such as data aggregation rules and model parameters from benign NDTs or BSs, remains inaccessible to these compromised NDTs. In this work, we consider a distributed DT-assisted network architecture as depicted in Fig. <ref>, where wireless traffic data collected from BSs is mapped into local NDTs to establish an initial and private NDT for each BS. Within each cluster, a cluster-level NDT (C-NDT) is constructed by aggregating these local twin models. Subsequently, at the backend, a global twin model (G-NDT) is established by merging the C-NDT model parameters during each iterative phase. This global twin model is then synchronized with each local NDT, serving as a foundation for predictive analysis and enabling specific applications for each BS and its associated NDTs. In this situation, our threat model envisions a minimum-knowledge scenario for an adversary. First, we propose Fake Traffic Injection (FTI), a methodology designed to create undetectable fake NDTs with minimal prior knowledge. Each fake NDT employs both its initial model and current global information to determine the optimizing trajectory of the twinning process, as shown in Fig. <ref>. These malicious participants aim to subtly steer the global model towards an outcome that undermines the integrity and reliability of the NDT system. Extensive numerical experiments validate that our FTI is effective across various state-of-the-art model aggregation rules, outperforming other poisoning attacks in terms of vulnerability impact. In response, we propose an innovative defensive strategy known as Global-Local Inconsistency Detection (GLID), aimed at neutralizing the effects of model poisoning attacks on NDT systems. This defense scheme involves strategically removing abnormal model parameters that deviate beyond a specific percentile range estimated through statistical methods in each dimension. Such an adaptive approach allows us to trim varying numbers of malicious model parameters instead of a fixed quantity <cit.>. Next, a weighted mean mechanism is employed to update the global twin model parameter, which is subsequently disseminated back to each NDT. Our extensive evaluations, conducted on real-world datasets, demonstrate that the proposed defensive mechanism substantially mitigates the impact of model poisoning attacks on NDT systems, thereby showcasing a promising avenue for securing distributed NDT systems with trustworthiness. This paper is an extended version of our previous work in <cit.>, which we expand upon by adapting the proposed attack and defense strategies from traditional federated learning settings to the practical NDT system. The contributions are briefly summarized as follows:
* We present a novel model poisoning attack, employing fake NDTs for traffic injection into distributed NDT systems under a minimum-knowledge scenario.
* In response, we propose an effective defense strategy tailored to counteract various model poisoning attacks, which proactively trims an adaptive number of twin model parameters by leveraging the percentile estimation technique.
* Lastly, we evaluate both the proposed poisoning attack and the defensive mechanism using real-world traffic datasets from Milan City, where the results demonstrate that the FTI attack indeed compromises distributed NDT systems, and the proposed defensive strategy proves notably more effective than other baseline approaches in mitigating various attacks.
§ RELATED WORKS AND PRELIMINARIES
§.§ Distributed Network Digital Twin Systems
The integration of DTs into the realm of wireless networking represents a significant leap forward in this rapidly evolving field. As outlined in <cit.>, the use of DTs involves the creation of detailed virtual replicas of network components and infrastructure. This approach enables real-time analytics and optimization, providing deep insights into network behavior under various scenarios. Such strategies are crucial for predictive maintenance and performance monitoring, greatly enhancing network reliability and efficiency. Furthermore, <cit.> introduces the use of Graph Neural Networks to enhance DTs in network slicing, aimed at predicting network performance and optimizing resources in high-bandwidth and low-latency scenarios. Additionally, the application of DTs in vehicular networks is detailed in <cit.>, showcasing DTs' ability to model and control software-defined vehicular networks, thereby improving the effectiveness and reliability of vehicular communications. Moreover, <cit.> proposes a digital twin-assisted security scheme for multi-resource heterogeneous RANs in space-air-ground integrated networks. Despite various explorations into DT applications within wireless networks, there is a gap in the literature regarding the development and mapping of NDTs, which our research seeks to address. At the forefront of DTs, distributed learning emerges as a revolutionary approach, especially for large-scale networks and the Industrial IoT. The combination of these technologies not only enhances system efficiency but also transforms data handling capabilities. <cit.> proposed a Joint Vertical and Horizontal Learning-based digital twinning strategy to perform a precise mapping from physical networks to digital twins. <cit.> exemplifies their potential in reliable edge caching and real-time data-driven optimization. Additionally, addressing the challenge of efficient data communication in distributed learning systems, <cit.> and <cit.> propose strategies to enhance data exchange and processing, which is crucial for scaling up applications with massive access.
§.§ Poisoning Attacks on Distributed Systems
The decentralized architecture of distributed DT systems renders them vulnerable to Byzantine attacks, as explored in previous studies <cit.>. In these scenarios, adversaries can compromise BSs and their corresponding NDT models to undermine the entire distributed DT system. These malicious BSs may tamper with their local training data or directly modify their local twin models to negatively impact the global twin model. For example, the Trim attack <cit.> involves malicious BSs deliberately distorting their local twin models to create a significant discrepancy in the aggregated model post-attack compared to its pre-attack state. The MPAF attack <cit.> sees each compromised BS applying a negative scalar to the global twin model update before forwarding this tampered update to the server-level twin. In the Random attack <cit.>, malicious DTs generate and send a random vector, drawn from a Gaussian distribution, to the server as their update.
Furthermore, a recent study by <cit.> introduced specific poisoning attacks targeting distributed DT systems. Here, an attacker manipulates some BSs under their control, each with its local training dataset. These DTs adjust their local models using this data and then scale their model updates by a factor before dispatching these altered updates to the server. These strategies highlight the critical challenge of securing the entire system against various forms of data and model tampering attacks, underscoring the need for robust defense mechanisms. Existing attacks in our considered setting suffer from the following limitations. In the MPAF attack, model updates from fake NDTs are exaggerated by a factor such as 1 × 10^6. However, this approach is impractical because the server can easily identify these excessive updates as anomalies and discard them. Furthermore, such blatant manipulation lacks subtlety, making it easy to detect and counter. On the other hand, our method involves carefully crafting model updates on fake clients by solving an optimization problem. This ensures that the server is unable to differentiate these fake updates from benign ones, allowing the attacker to breach the integrity of the system without detection. Our approach maintains the updates within a plausible range, avoiding the pitfalls of easily detectable anomalies. The attack described in <cit.> is not feasible because it is based on the unrealistic assumption that an attacker can easily take control of authentic BSs or DTs. In reality, it is highly challenging for an attacker to gain such influence over existing, authentic facilities. Moreover, this attack does not consider the sophisticated security measures typically in place to protect these systems. Our research, however, focuses on developing more realistic attack scenarios that account for the complexities and security protocols of modern distributed DT systems, ensuring a more accurate assessment of their vulnerabilities.
§.§ Byzantine-robust Aggregation Rules
In environments free from adversarial intentions, server-level twins typically aggregate incoming local twin model updates through a simple averaging process <cit.>. However, recent studies <cit.> have revealed vulnerabilities in this averaging method of aggregation, particularly its susceptibility to poisoning attacks. In such attacks, a single malicious local twin model can significantly alter the aggregated result. To counter these vulnerabilities, the literature offers a range of Byzantine-resistant aggregation algorithms <cit.>. For instance, the Krum method <cit.> assesses each local twin's update by calculating the sum of Euclidean distances to updates from other twin models, selecting the update with the smallest sum for global aggregation. The Median aggregation strategy <cit.> involves the server-level twin computing median values across each parameter from all local updates, improving resistance to outlier manipulations. These strategies introduce robustness against adversarial actions, ensuring the integrity of the aggregated twin model in distributed systems.
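As an illustration of the Krum rule just described, the sketch below scores each local update by its summed Euclidean distance to the other updates and keeps the lowest-scoring one. Note that the full Krum rule restricts the sum to the closest n - f - 2 neighbors, a detail omitted here for brevity; names are illustrative.

```python
import numpy as np

def krum_select(updates):
    """Simplified Krum: return the local update with the smallest summed distance to peers."""
    updates = np.asarray(updates)                     # shape (n, dim)
    diffs = updates[:, None, :] - updates[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)            # pairwise Euclidean distances
    scores = dists.sum(axis=1)                        # distance to self is zero
    return updates[int(np.argmin(scores))]
```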
§ CREATION AND SYNCHRONIZATION OF NETWORK DIGITAL TWINS FOR WIRELESS TRAFFIC PREDICTION
This section introduces a novel framework for creating and synchronizing NDTs specifically designed for wireless traffic prediction. The framework is structured around three main stages: dynamic connectivity segmentation (DCS), vertical twinning (V-twinning), and horizontal twinning (H-twinning). The overall framework is shown in Fig. <ref>. The primary objective of the NDTs is to minimize prediction errors across all BSs for a better understanding of the physical network. This can be formulated as an optimization problem: α^* = arg min_α (1/Mz) ∑_m=1^M ∑_n=1^z F( f(r_m^n, α), s_m^n ), where F is the quadratic loss function, M is the number of NDTs, z is the number of data points, r_m^n is the input traffic sequence, and s_m^n is the corresponding output traffic prediction. The optimization problem is resolved through FL with distributed NDTs, following the synchronization, local updating, and model aggregation process. This single-level mapping approach utilizes a classical FL strategy to aggregate multiple local twin models, serving as the baseline for comparing with our proposed joint vertical-horizontal mapping scheme. Specifically, Eq. (<ref>) can be resolved in a distributed fashion in traditional FL settings with the following steps in each global training round t, as shown in Fig. <ref>.
* Step I (Local twin update). Each NDT i ∈ [n] utilizes its private time-series training data along with the current global model to refine its own local model, then transmits the updated local model θ_i^t back to a central server.
* Step II (Local twin manipulation/model poisoning attack). Each malicious NDT utilizes its knowledge to modify or create local twin models, and then sends these malicious twin models to the server.
* Step III (Aggregation of local twin models). The central server leverages the aggregation rule (AR) to merge the n received local models and subsequently updates the global model as follows: θ^t+1 = AR{θ_1^t, θ_2^t, …, θ_n^t}. The commonly used aggregation rule is FedAvg <cit.>, where the server simply averages the n received local models from distributed NDTs, i.e., AR{θ_1^t, θ_2^t, …, θ_n^t} = (1/n) ∑_i=1^n θ_i^t.
* Step IV (Synchronization). The central server sends the current global model θ^t to all NDTs.
Specifically, our multi-level mapping framework encompasses a central DT, named global network digital twin (G-NDT), coordinating with a network of M NDTs, and multiple cluster network digital twins (C-NDTs). Each NDT, denoted as m in the set [M], independently holds a proprietary dataset d_m = {d^1_m, d^2_m, …, d^L_m}. In this dataset, L indicates the total number of time intervals, and d^l_m represents the traffic load at NDT m during the l-th interval, where l ranges over [L]. The NDT involves constructing input-output predictive traffic sequences locally, denoted as {r_m^n, s_m^n}_n=1^z, for each NDT to generate future traffic predictions. Here, r_m^n = {d_m^l-1, …, d_m^l-a, d_m^l-ρ, …, d_m^l-ρ b} is a subset of historical traffic data corresponding to the output s_m^n. The parameters a and b represent sliding windows that capture immediate and cyclical temporal dependencies, respectively, while ρ reflects inherent periodicities in the network, which might be influenced by user activity patterns or application service demands.
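A sketch of how these input-output pairs can be constructed from a single NDT's traffic series follows; the function name and the convention that the output is the load at interval l are illustrative assumptions.

```python
import numpy as np

def build_sequences(d, a, b, rho):
    """Build (r, s) pairs from a traffic series d using recent and periodic windows."""
    r, s = [], []
    for l in range(max(a, rho * b), len(d)):
        recent = d[l - a:l][::-1]                             # d_{l-1}, ..., d_{l-a}
        periodic = [d[l - rho * j] for j in range(1, b + 1)]  # d_{l-rho}, ..., d_{l-rho*b}
        r.append(np.concatenate([recent, periodic]))
        s.append(d[l])                                        # assumed output: load at interval l
    return np.stack(r), np.array(s)
```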
The first stage, DCS, is employed periodically to ensure effective clustering of NDTs with similar communication characteristics and networking configurations. This clustering step is integral to the efficient creation and updates of multiple distributed NDTs, i.e., C-NDTs, which demonstrate distinct behaviors and perform parallel synchronization with the G-NDT. The DCS algorithm clusters the NDTs based on attributes such as geographical distances, capacity of backhaul links, coverage area overlaps, and similarity of frequency of occurrence distribution. The relationship between two NDTs, n_1 and n_2, is quantified by a metric Φ_n_1,n_2: Φ_n_1,n_2 = ω_g/g_n_1,n_2 + ω_k · k_n_1,n_2 + ω_β · β_n_1,n_2 + ω_τ · τ_n_1,n_2, where ω represents the weights for each attribute. This dynamic clustering enhances the twinning performance in real time and forms the basis for accurate wireless traffic prediction by grouping NDTs with similar traffic patterns. In the V-Twinning stage, initial NDTs are created with historical data on caching requests and their frequency. It employs an FL strategy, where model parameters are shared among NDTs instead of raw data, enabling collaborative training of a global model. This approach efficiently distributes twinning tasks across NDTs while ensuring content data privacy. Specifically, the V-Twinning stage initializes a concrete G-NDT and synchronizes C-NDTs with the G-NDT after the twinning aggregation process. The aggregation of C-NDTs to form the G-NDT is given by: α^t+1 = (1/C) ∑_c=1^C α_c^t, where α_c^t represents the model parameters of the C-NDT for cluster c at time t, and C is the number of clusters. This stage is crucial for initializing the network DTs with historical traffic data, which serves as a foundation for future traffic prediction. The H-Twinning stage is designed to periodically synchronize the physical network and NDTs with real-time data. It adopts an asynchronous FL approach to update with dynamics from the physical network, providing a scalable and flexible solution for wireless networks composed of multiple clusters. This stage updates the twins regularly, ensuring that all NDTs remain relevant and accurately simulate and predict wireless traffic patterns. The update rule for the G-NDT based on the deviation ϵ between a C-NDT and the current G-NDT is as follows: α^t+1 = (1/C) ∑_c=1^C α_c^t if ϵ > ψ, and α^t+1 = α^t otherwise, where ψ is a predefined threshold, and ϵ = (α_c^t - α^t)^2 measures the deviation between the C-NDT and the G-NDT. This stage is critical for incorporating real-time traffic data into the NDTs, enabling them to adapt to changing network conditions and improve traffic prediction accuracy.
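The threshold-triggered H-twinning update can be sketched as follows; interpreting the squared deviation ϵ as a sum over parameter entries and triggering on the largest cluster deviation are our assumptions, and all names are illustrative.

```python
import numpy as np

def h_twinning_update(alpha_global, cluster_models, psi):
    """Refresh the G-NDT only when some C-NDT deviates beyond the threshold psi."""
    eps = max(np.sum((a_c - alpha_global) ** 2) for a_c in cluster_models)
    if eps > psi:
        return np.mean(cluster_models, axis=0)  # re-aggregate the C-NDTs
    return alpha_global                         # otherwise keep the current G-NDT
```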
§ THREAT MODEL FOR DISTRIBUTED NETWORK DIGITAL TWINS
Built upon the constructed distributed NDT system, this section discusses the threat model and explores a novel attack that poses a security breach to system functionality and network operations.
§.§ Objective of the Attacker
The fundamental aim of an attacker targeting a distributed NDT system is to impair the performance of the composite global twin model significantly. Such impairment directly undermines the precision of real-time traffic forecasts, which is crucial for effective network management and resource distribution. The ramifications of compromised traffic predictions include network congestion, diminished service quality, and suboptimal resource utilization, presenting considerable operational hurdles for network operators. This disturbance extends beyond the service providers, affecting end-users dependent on stable and efficient network services.
§.§ Capabilities of the Attacker
To achieve their goal, attackers introduce counterfeit NDT models into the system, as illustrated in Fig. <ref>. These fabricated NDTs can replicate the functionality of legitimate NDTs with minimal investment and effort. This tactic, which entails deploying fake BSs and NDTs using readily available open-source tools or emulators <cit.>, presents a low-barrier, high-feasibility threat vector distinct from strategies like those in <cit.> that require compromising actual NDTs. Given the stringent security measures of contemporary networks, which complicate the direct manipulation of authentic twin models, this approach of deploying spurious BSs and NDTs emerges as a notably viable method for attack.
§.§ Knowledge of the Attacker
The attacker's limited understanding of the intricacies of the targeted distributed NDT system adds to the challenge of mounting a successful attack. In many practical scenarios, acquiring comprehensive knowledge about the aggregation algorithms or details of legitimate NDTs proves exceedingly difficult due to robust security measures and encryption. Consequently, an attack necessitating minimal specialized knowledge and training data not only appears more feasible but also carries a lower risk of detection. The operation of the counterfeit NDTs—receiving the global model and dispatching malicious updates—demands only basic intelligence, effectively lowering the threshold for entry for would-be attackers. This characteristic of the threat model heightens its potential danger, broadening the pool of possible adversaries to include those with scant technical skills or resources.
§.§ Fake Traffic Injection Attack
The proposed Algorithm <ref>, named the Fake Traffic Injection (FTI) Algorithm, presents a strategy for a Byzantine model poisoning attack aimed at compromising the prediction accuracy of an NDT system under specific assumptions. At the core of the FTI attack is an iterative procedure. In each iteration, the current global twin model θ^t and the base model θ̂ undergo a detailed examination. For each fake BS i, a malicious local model θ_i^t is constructed by blending the global model θ^t with the base model θ̂ in a weighted manner, as delineated in line 5 of Algorithm <ref>. Subsequent to the formation of θ_i^t, its deviation from the global model is assessed using the Euclidean norm, as depicted in line 7. The algorithm then evaluates whether this distance has increased compared to the previous measurement, denoted as PreDist. If an increase is observed, indicating that the malicious local model θ_i^t is diverging further from the global model θ^t, the value of η is incremented. Conversely, if no increase in distance is detected, η is decremented. The adjustment of η is executed in half-steps of its initial value, as outlined in lines 8 to 12. The algorithm aims to steer the global model towards greater alignment with a pre-defined base model in each round. Specifically, during the t-th round, fake NDTs compute the direction of local model updates, determined by the difference between the current global twin model and the base model, denoted as H = θ̂ - θ^t. Progressing in this direction signifies that the global model is becoming more akin to the base model. A straightforward method to obtain the local model of a fake BS involves scaling H by a factor η. However, this direct approach yields suboptimal attack performance. Assuming n represents the number of benign NDTs, and the attacker intends to inject m fake NDTs into the system, we propose a method for calculating θ_i^t for each fake NDT i ∈ [n+1, n+m]: θ^t_i = ηθ̂ - (η - 1)θ^t.
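The crafting rule in the equation above, together with the half-step adjustment of η from Algorithm <ref>, can be written compactly; this is a sketch with illustrative names, not the full attack loop.

```python
import numpy as np

def fti_fake_update(theta_global, theta_base, eta):
    """Craft a fake NDT's local model: theta_i = eta * theta_base - (eta - 1) * theta_global."""
    return eta * theta_base - (eta - 1.0) * theta_global

def adapt_eta(eta, eta_init, dist, prev_dist):
    """Adjust eta in half-steps of its initial value, based on distance to the global model."""
    return eta + 0.5 * eta_init if dist > prev_dist else eta - 0.5 * eta_init
```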
In such scenarios, an attacker tends to opt for a higher η to ensure the sustained effectiveness of the attack, as illustrated in Fig. <ref> with an initial η of 10. This remains valid even after the server amalgamates the manipulated local updates from fake NDTs with legitimate updates from benign NDTs.
§ GLOBAL-LOCAL INCONSISTENCY DETECTION
The defense against model poisoning attacks is founded on an aggregation protocol designed to identify malicious NDTs, termed the Global-Local Inconsistency Detection (GLID) method, as elaborated in Algorithm <ref>. In each global round t, GLID primarily examines anomalies present in each dimension of the model parameters θ_i^t, aiding in the identification of potentially malicious entities, where i ∈ [1, n+m] and n+m denotes the total number of NDTs in the system. This robust and versatile approach enables the system to adapt to various operational contexts without necessitating intricate similarity assessments. Then, choosing the parameter for the percentile range when trimming outliers for secure aggregation becomes crucial, as it directly influences the model's balance between robustness and accuracy. Typically, a narrow percentile range might exclude legitimate variations in data, reducing the model's accuracy and potentially leading to biased or incomplete representations. Conversely, a broad percentile range may fail to eliminate malicious or anomalous data contributions, compromising the model's security by allowing adversarial inputs to skew the aggregation process. Therefore, selecting an appropriate percentile range ensures that most benign data points are retained while effectively filtering out outliers or adversarial inputs. This balance is essential for maintaining both the performance and security of digital twin models, protecting against data poisoning attacks without sacrificing the overall quality and representativeness of the aggregated twinning data. Specifically, the GLID approach enhances the detection of potential malicious activities within the network by employing percentile-based trimming on each dimension of the model parameters. To establish an effective percentile pair for identifying abnormalities, four statistical methods can be adopted: Standard Deviation (SD), Interquartile Range (IQR), z-scores, or One-class Support Vector Machine (One-class SVM). Suppose the total count of dimensions of the model parameter is D. For the default SD method, the percentile pair for each dimension d can be calculated as follows: percentile pair^t_d = ( g( θ̅_d^t - k · σ_d^t ), g( θ̅_d^t + k · σ_d^t ) ), where θ̅_d^t is the mean of the d-th dimension across all models in the t-th global training round, σ_d^t is the standard deviation of the d-th dimension, and k is a predefined constant dictating the sensitivity of outlier detection. g(·) is the interpolation function based on the standard deviation bound to estimate percentile pairs, defined as: g(x) = ( (P(x) - 0.5)/(n+m) ) × 100, where P(x) is the position of x in the sorted dataset. We use k=3 for general purposes. Given that different tasks may require varied percentile bounds, a precise estimation method is crucial for generalizing our defense strategy. The detailed percentile estimation methods are discussed later in this section. In the FL-based WTP system, model parameters in the d-th dimension exceeding these percentile limits are flagged as malicious, and their weights α^t_d,i are assigned as 0. The other benign values in this dimension are aggregated using a weighted average rule, where the weights α^t_d,i are inversely proportional to the absolute deviation of each value θ^t_d,i from the mean θ̅_d^t, and normalized by the standard deviation σ_d^t. It can be represented as follows: α^t_d,i = σ_d^t / | θ^t_d,i - θ̅_d^t |. These weights of the d-th dimension are then normalized and applied to aggregate each BS's local model θ_i^t into a global model θ^t+1, which can be represented as follows in the view of each dimension: θ^t+1_d = ∑_i=1^n+m α^t_d,i · θ^t_d,i / ∑_i=1^n+m α^t_d,i. Subsequently, the server broadcasts this aggregated global model parameter θ^t+1 back to all NDTs for synchronization.
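Putting the trimming and weighted-mean steps together, a simplified GLID sketch is given below; for brevity it applies the SD bounds directly rather than converting them to percentile pairs via g(·), and the small constants guarding divisions are our additions.

```python
import numpy as np

def glid_aggregate(models, k=3.0):
    """Per-dimension SD-based trimming followed by an inverse-deviation weighted mean.

    models: array of shape (num_ndts, dim) holding all local twin model parameters.
    """
    mean = models.mean(axis=0)
    std = models.std(axis=0)
    benign = np.abs(models - mean) <= k * std                   # per-dimension benign flags
    weights = (std + 1e-12) / (np.abs(models - mean) + 1e-12)   # alpha^t_{d,i}
    weights = np.where(benign, weights, 0.0)                    # flagged values get weight 0
    return (weights * models).sum(axis=0) / (weights.sum(axis=0) + 1e-12)
```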
There are three additional percentile estimation strategies listed below. Based on the upper and lower bounds computed below, we can reach a final percentile estimation decision to detect abnormal values in each dimension.
* Interquartile Range (IQR): The IQR method calculates the range between the first and third quartiles (25th and 75th percentiles) of the data, identifying outliers based on this range (sketched after this section). For each dimension d, the outlier bounds are: lower bound^t_d,IQR = Q1^t_d - k_IQR · IQR^t_d, upper bound^t_d,IQR = Q3^t_d + k_IQR · IQR^t_d, where Q1^t_d and Q3^t_d are the first and third quartiles, and k_IQR adjusts sensitivity.
* Z-scores: The Z-score method measures how many standard deviations a point is from the mean. For each dimension d, the normal range bounds are: lower bound^t_d,Z-score = g( θ̅_d^t - k_Z · σ_d^t ), upper bound^t_d,Z-score = g( θ̅_d^t + k_Z · σ_d^t ), where k_Z is the number of standard deviations for the normal range.
* One-Class SVM: One-class SVM constructs a decision boundary for anomaly detection. The decision function for each dimension d is: f^t_d(θ) = sign( ∑_i=1^n_SV γ_i · K(θ^t_SV_i,d, θ) - ρ ), where θ^t_SV_i,d are the support vectors, γ_i are the Lagrange multipliers, K(·, ·) is the kernel function, and ρ is the offset. A point θ is an outlier if f^t_d(θ) < 0.
In essence, this defense mechanism is a strategic amalgamation of direct statistical trimming and aggregation, targeting the preservation of the global model's integrity against poisoning attacks. By accurately isolating and excluding malicious NDTs prior to aggregation, it significantly diminishes the likelihood of adversarial disruption in the FL framework. Additionally, its capacity to accommodate various dimensions and adapt to different inconsistency metrics and aggregation protocols considerably extends its applicability across a broad spectrum of distributed wireless network scenarios.
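For reference, the IQR and Z-score bounds above reduce to a few lines each; these sketches operate on the raw parameter values of one dimension and omit the percentile interpolation g(·).

```python
import numpy as np

def iqr_bounds(values, k_iqr=1.5):
    """IQR-based outlier bounds for one parameter dimension."""
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    return q1 - k_iqr * iqr, q3 + k_iqr * iqr

def zscore_bounds(values, k_z=3.0):
    """Z-score-based normal-range bounds for one parameter dimension."""
    mean, std = values.mean(), values.std()
    return mean - k_z * std, mean + k_z * std
```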
§ EXPERIMENTAL EVALUATION
In this section, we present an extensive evaluation of our proposed FTI poisoning attack and the GLID defense mechanism. We provide results across various performance metrics to demonstrate their effectiveness in multiple dimensions.
§.§ Experimental Setup
§.§.§ Datasets To assess our methods, we employ real-world datasets from Telecom Italia <cit.>. The Milan wireless traffic dataset is partitioned into 10,000 grid cells, each served by an NDT covering an area of approximately 235 m × 235 m. The dataset comprises three subsets: “Milan-Internet”, “Milan-SMS”, and “Milan-Calls”, which capture diverse wireless usage patterns. Our primary focus is on the “Milan-Internet” subset, which facilitates a detailed analysis of urban telecommunications behavior.
§.§.§ Baseline Schemes We benchmark our FTI attack against several state-of-the-art model poisoning attacks to underscore its effectiveness. Additionally, we employ these baseline attacks to demonstrate the efficacy of our GLID defense strategy:
* Trim attack <cit.>: Processes each key in a model dictionary, using extremes in a specific dimension to determine a directed dimension. Model parameters are then selectively zeroed or retained to influence the model's behavior.
* History attack <cit.>: Iterates over model parameters, replacing current values with historically scaled ones to warp the model parameters using past data and misguide the aggregation process.
* Random attack <cit.>: Disrupts the model by replacing parameters with random values drawn from a normal distribution, scaled to maintain a semblance of legitimacy and inject controlled chaos into the aggregation process.
* MPAF <cit.>: Calculates a directional vector from the difference between initial and current parameters, adjusting model values to intentionally diverge from the original trajectory and introduce adversarial bias. Fake NDTs are then injected into the system.
* Zheng attack <cit.>: Inverts the direction of model updates by incorporating the negative of previous global updates, refined through error maximization to generate a poison that is challenging to detect due to its alignment with the twin model's error landscape.
Furthermore, we consider several baseline defensive mechanisms to evaluate the robustness of our proposed attack and defense:
* Mean <cit.>: Calculates the arithmetic mean of updates in each dimension, assuming equal trustworthiness among all NDTs. This method is susceptible to the influence of extreme values.
* Median <cit.>: Identifies the median value in each dimension for each parameter across updates, discarding extreme contributions to enhance robustness against outliers (both Median and Trim are sketched after this list).
* Trim <cit.>: Discards a specified percentage of the highest and lowest updates before computing the mean in each dimension, reducing the influence of anomalous or malicious updates on the aggregate model.
* Krum <cit.>: Scores each NDT's update based on the sum of Euclidean distances to other NDTs' updates, selecting the update from the NDT with the minimum score for the global update.
* FoolsGold <cit.>: Calculates a cosine similarity matrix among all NDTs and adjusts the weights for each NDT based on these similarities, aggregating the weighted gradients to form a global model.
* FABA <cit.>: Computes the Euclidean distance for each NDT's model from the mean of all received models, excluding a specific percentage of the most distant models to filter out potential outliers or malicious updates.
* FLTrust <cit.>: Calculates cosine similarity between the server's current model and each NDT's model to generate trust scores, which are then used to weigh the NDT's contribution to the final aggregated model.
* FLAIR <cit.>: Each NDT calculates “flip-scores” from the changes in gradient directions and “suspicion-scores” based on historical behavior, using these scores to adjust the weights assigned to each NDT's contributions to the global twin model.
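For concreteness, simplified sketches of two of the baseline rules above (Median and Trim) are given below; both operate coordinate-wise on a stacked array of local twin models, and the names are illustrative.

```python
import numpy as np

def median_aggregate(models):
    """Coordinate-wise median across local twin models of shape (n, dim)."""
    return np.median(models, axis=0)

def trimmed_mean_aggregate(models, trim_frac=0.2):
    """Drop the largest and smallest trim_frac of values per dimension, then average."""
    n = models.shape[0]
    k = int(n * trim_frac)
    sorted_models = np.sort(models, axis=0)  # sorts each dimension independently
    return sorted_models[k:n - k].mean(axis=0)
```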
§.§.§ Experimental Settings and Performance Metrics For our experiments, we randomly select 100 BSs and their corresponding NDTs to evaluate the impact of poisoning attacks and the effectiveness of defense mechanisms. We primarily report results on the Milan-Internet dataset. Model training is configured with a learning rate of 0.001 and a batch size of 64. For the FTI attack, we inject fake NDTs that mimic benign ones, equal in number to 20% of the NDTs in the system; for the other baseline attacks, we assume a scenario where 20% of the NDTs are compromised. Our proposed FTI attack employs a parameter η = 10, while other attacks utilize a scaling factor of 1000. For the Trim aggregation rule, we discard 20% of the twin model parameters from all NDTs. In our GLID defense, we use the standard deviation (SD) method as the default percentile estimation method. We adopt Mean Absolute Error (MAE) and Mean Squared Error (MSE) as the primary metrics for performance evaluation, with larger MAE and MSE values indicating better attack effectiveness.
§.§ Numerical Results
§.§.§ Performance of Proposed Methods Tables I and II demonstrate the significant vulnerabilities introduced by the proposed FTI Attack across various aggregation methods within our NDT construction. It is observed that under our FTI Attack, the Mean Rule is completely compromised over both the V-twinning and H-twinning stages, as reflected by its MAE and MSE values reaching over 100.0 (values exceeding 100 are capped at 100). This result denotes a total breakdown in wireless traffic prediction functionality. The Median Rule further emphasizes the severity of the FTI Attack, with both its MAE and MSE escalating from modest baseline figures to 100. This sharp contrast highlights the FTI attack's advantage over other attacks, such as the Trim Attack against the Median Rule, where the increase in MAE and MSE is relatively minor at 0.283 and 0.106 for V-twinning, respectively. Additionally, the Trim Rule, typically considered robust, exhibits a drastic increase in MAE to over 100.0, a significant rise from its baseline without any attack (denoted as NO in Tables <ref> and <ref>) of 0.281. This surge underscores the Trim Rule's vulnerability to the FTI Attack, marking a notable departure from its typical resilience. Similar results can also be found in other aggregation rules under FTI attacks, such as Krum, FoolsGold, FABA, FLTrust, and FLAIR, where the FTI attack demonstrates the best overall performance against the given defenses. The Zheng Attack, however, presents a distinct pattern of disruption. When subjected to this attack, FLTrust, which typically exhibits lower error metrics, shows a significant compromise, evidenced by the dramatic increase in its MAE to 3.252 and MSE to 1.278. The tailored nature of the Zheng Attack appears to target specific vulnerabilities within FLTrust, which are not as apparent in other scenarios, such as the Trim Attack, where the rise in MAE and MSE for FLTrust is relatively modest. Regarding the MPAF Attack, most aggregation rules in the table do not show a convincing defense, except for a few like Median, Trim, and GLID. During the H-twinning stage, most baseline schemes demonstrate similar performance under attacks. However, although the Median and Trim Rules could protect the NDT systems from being attacked, they do not perform well in maintaining a precise NDT after a valid initial construction, i.e., after the V-twinning stage. For instance, the MAE and MSE values of the Median Rule under NO attacks are 0.281 and 0.106, respectively. These performance metrics change to 0.296 and 0.101 after a period of maintenance, which leads to inaccurate predictions compared to the initial twin models.
This is due to the heterogeneous nature of the data distribution, with the Median and Trim Rules trimming out too many participants during the twin model aggregation process. From the defender's standpoint, the proposed GLID aggregation method demonstrates consistent performance stability across various attacks. Both its MAE and MSE values remain close to their baseline levels. Even in the case of our FTI attack, GLID manages to keep errors below 100, with MAE and MSE values of 72.453 and 27.548, respectively. This stability is particularly noteworthy when compared to other rules such as FLAIR, which exhibit a significant deviation from their non-attacked baselines under the same adversarial conditions. GLID's ability to sustain its performance in the face of diverse and severe attacks underscores its potential as a resilient aggregation methodology. Other rules, such as FABA, exhibit inconsistent defense performance across the V-twinning and H-twinning stages. FABA maintains good performance under the History and Random attacks during the V-twinning process but degrades to errors over 100.0 during the H-twinning process. This behavior is caused by the varying data distributions and sample sizes of the real-time data stream. In later evaluations, we focus on the entire twinning process, combining V-twinning and H-twinning, to evaluate the effects of other parameters on our proposed poisoning attacks and defense mechanisms for NDT systems. §.§.§ Evaluation on the Impact of η The step size η in our proposed FTI attack (see Algorithm <ref>) serves as a dynamic scaling factor, and its initial value significantly influences the NDT's performance metrics. This impact is illustrated in Fig. <ref>, where the Median aggregation rule is employed as the baseline defense strategy. A notable observation is the correlation between increasing values of η and the corresponding rise in the MAE and MSE of the twin models. For example, at η = 1, the MAE and MSE are relatively low, recorded at 0.517 and 0.215, respectively. However, increasing η to higher values, such as 10 or 20, results in a dramatic surge that reaches the maximum error rate. This increase suggests a significant compromise of the twin models, surpassing the predefined threshold for effective detection of the attack. This analysis emphasizes the pivotal role of η in determining the strength of a poisoning attack: a larger initial η degrades model performance further, deviating significantly from its expected operational state, but simultaneously raises the risk of the attack's perturbations being detected and eliminated during the defense process. §.§.§ Evaluation on Percentage of Fake NDTs The degree of compromise in NDTs significantly influences the model's performance, as evidenced in Table <ref>. Adopting Median aggregation as the defensive approach, the model first exhibits resilience at lower compromise levels, such as with only 5%–10% fake NDTs in the scenario. However, a noticeable decline in performance is observed as the percentage of fake NDTs increases to 20% or higher. This deterioration is evident as the MAE and MSE values reach 100.0 in all categories, signaling a complete model failure. This trend indicates the model's limited tolerance to malicious interference; more precisely, the network system can withstand below 20% compromise without significant performance degradation.
However, beyond this threshold, the model's integrity is severely undermined, resulting in a complete system breakdown. This observation highlights the critical importance of implementing robust security measures to prevent excessive compromise of NDTs, ensuring the model's reliability and effectiveness. §.§.§ Evaluations on Percentile Estimation Methods The dynamic trimming of an adaptive number of model parameters through percentile estimation, which is adopted in GLID, proves to be an effective defense strategy against various model poisoning attacks. In the comparative analysis of estimation methods shown in Table <ref>, SD estimation emerges as the best technique, exhibiting marked consistency and robustness across the spectrum of attacks, as evidenced by its consistently low MAE and MSE values of 0.219 and 0.087, respectively. In contrast, the other methods show varying degrees of inconsistency and vulnerability. For instance, One-class SVM exhibits pronounced variability, with MAE and MSE values reaching the maximal error level of over 100.0 under the Trim, History, and MPAF attacks. Such a disparity in performance, particularly the stably lower error rates of SD compared to the significant fluctuations of the other estimation methods, positions SD as a reliable and effective percentile estimation technique in GLID. §.§.§ Evaluations on the Impact of NDT Density Given a 20% proportion of fake NDTs, Figures 5(a)-(d) compare the Median and GLID rules with varying densities of NDTs in the network scenario. The total number of NDTs does not significantly impact the performance of any attack and defense mechanisms, especially for our FTI and GLID strategies, which is consistent with traditional federated learning settings <cit.>. Under Median aggregation, FTI consistently shows maximal errors, with MAE and MSE exceeding 100 across different NDT densities, indicating the failure of the defense. This consistency of performance across varying numbers of participants in the distributed NDT system suggests that the total number of NDTs does not substantially influence the effectiveness of the attack and defense strategies. §.§.§ Evaluations on the Percentile Range of GLID Table <ref> presents an evaluation of performance across a variety of percentile pairs used in the proposed GLID method against different attack methods. The configuration of the percentile pair guides the GLID method in identifying and eliminating outliers. For example, specifying a percentile pair of [10, 70] means that values below the 10th percentile and above the 70th percentile are trimmed away, focusing the analysis on the data within these bounds. It is observed that, when the percentile pair is set at [10, 70], most methods, except for the Zheng Attack, register a metric over 100.0, suggesting the models are fully attacked. Similarly, the percentile pair of [10, 90] yields a value over 100 for all methods except the Zheng Attack. The Zheng attack consistently records low metrics across all settings, such as 0.880 and 0.346 for the pair [10, 70], raising questions about its attack efficacy. On the other hand, the FTI attack shows varied performance; it achieves over 100.0 for most percentile pairs like [10, 70] and [20, 90] but drops to 79.634 and 29.849 for the pair [30, 70]. These results underscore the importance of fine-tuning the percentile pair parameters in the GLID method. Proper parameter selection can effectively trim outliers without significantly impacting overall network performance; a minimal sketch of this adaptive trimming is given below.
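As an illustration of the adaptive trimming idea, the sketch below implements dimension-wise filtering with SD-based percentile estimation, where the percentile pair is mapped to Gaussian quantiles around the per-dimension mean. This mapping and the function names are our simplifying assumptions for exposition; it is not the exact GLID implementation.

import torch

def glid_like_aggregate(updates, low_pct=30.0, high_pct=70.0):
    # Dimension-wise trimming with bounds estimated from the mean and
    # standard deviation of the updates (the SD estimation method).
    # Values outside [low, high] are masked out before averaging.
    agg = {}
    normal = torch.distributions.Normal(0.0, 1.0)
    z_lo = normal.icdf(torch.tensor(low_pct / 100.0))
    z_hi = normal.icdf(torch.tensor(high_pct / 100.0))
    for k in updates[0]:
        stacked = torch.stack([u[k].float() for u in updates])  # [n_ndt, ...]
        mu, sd = stacked.mean(dim=0), stacked.std(dim=0)
        lo, hi = mu + z_lo * sd, mu + z_hi * sd
        mask = (stacked >= lo) & (stacked <= hi)                # keep inliers
        kept = (stacked * mask).sum(dim=0)
        count = mask.sum(dim=0).clamp(min=1)                    # avoid /0
        agg[k] = kept / count
    return agg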
§ CONCLUSION AND FUTURE WORK In this study, we introduced a novel approach to performing model poisoning attacks on network digital twins through fake traffic injection. Operating under the assumption that real-world BSs are challenging to attack, we inject fake traffic distributions into NDTs with minimal knowledge, disseminating malicious model parameters into distributed network systems. Furthermore, we presented an innovative global-local inconsistency detection mechanism designed to safeguard NDT systems. It employs an adaptive trimming strategy, relying on percentile estimations that preserve accurate model parameters while effectively removing outliers. Extensive evaluations demonstrate the effectiveness of our attack and defense, outperforming existing baselines. With the advent of the digitalization era, the development of an effective security framework for NDT systems presents numerous opportunities for future research and development. Future work could focus on enhancing the capabilities of secure NDTs to incorporate real-time data streams and predictive analytics, enabling proactive security management and optimization. Additionally, exploring the integration of explainable artificial intelligence to elucidate model aggregation decisions, detect biases, and ensure the reliability and trustworthiness of the models is a crucial area of research. Further, investigating the application of secure DTs in emerging technologies such as the IoT and 6G cellular systems offers promising avenues for integrated intelligence and autonomy. § ACKNOWLEDGMENT This research was supported by the National Science Foundation through Award CNS–2312138 and CNS–2312139.
Enable the Right to be Forgotten with Federated Client Unlearning in Medical Imaging
Zhipeng Deng, Luyang Luo, Hao Chen
Department of Computer Science and Engineering, Department of Chemical and Biological Engineering, Division of Life Science, The Hong Kong University of Science and Technology, Hong Kong, China
zdengaj@connect.ust.hk, cseluyang@ust.hk, jhc@cse.ust.hk
§ ABSTRACT The right to be forgotten, as stated in most data regulations, poses an underexplored challenge in federated learning (FL), leading to the development of federated unlearning (FU). However, current FU approaches often face trade-offs between efficiency, model performance, forgetting efficacy, and privacy preservation. In this paper, we delve into the paradigm of Federated Client Unlearning (FCU), which guarantees a client the right to erase the contribution or influence of its data, introducing the first FU framework in medical imaging. In the unlearning process of a client, the proposed model-contrastive unlearning marks a pioneering step towards feature-level unlearning, and frequency-guided memory preservation ensures smooth forgetting of local knowledge while maintaining the generalizability of the trained global model, thus avoiding performance compromises and guaranteeing rapid post-training. We evaluated our FCU framework on two public medical image datasets, covering intracranial hemorrhage diagnosis and skin lesion diagnosis, demonstrating that our framework outperforms other state-of-the-art FU frameworks, with an expected speed-up of 10-15 times compared with retraining from scratch. The code and the organized datasets can be found at: https://github.com/dzp2095/FCU. § INTRODUCTION To address the strict requirements on the collection, storage, and processing of personal data proposed in regulations like the General Data Protection Regulation (GDPR) <cit.> and the California Consumer Privacy Act (CCPA) <cit.>, federated learning (FL) <cit.> is regarded as a promising privacy-preserving approach in medical imaging, which enables multiple parties to train a model collaboratively without sharing patient data. However, despite its decentralized nature, current FL research in medical imaging has not fully addressed the right to remove the influence of data from a trained global FL model, a right explicitly stated in GDPR as the right to be forgotten and in CCPA as the right to delete. In centralized learning, the right to have data removed can be realized by Machine Unlearning (MU) <cit.>. However, existing MU techniques are developed for centralized scenarios, posing significant challenges to their direct application in distributed FL settings <cit.> and highlighting the need for dedicated federated unlearning (FU) approaches. Retraining from scratch without the target forgotten data is regarded as a naive way to achieve unlearning <cit.>. Nevertheless, this approach demands a large cost in communication and computation, especially in FL <cit.>. Hence, recent studies have proposed various federated unlearning (FU) methods including re-calibration of historical updates <cit.>, gradient quantization <cit.>, gradient modification <cit.>, and knowledge distillation <cit.>. However, these methods often compromise performance or privacy.
For instance, FedEraser <cit.> accelerates the retraining progress and removes the contribution of a target client iteratively by utilizing historical parameter updates of clients stored on the server side, yet this approach requires additional storage and poses a risk of data reconstruction by a malicious server <cit.>. FFMU <cit.> applies randomized gradient smoothing and quantization to execute unlearning operations on the target forgotten data, but may fail to retain the performance when a client decides to remove all their data. UPGA <cit.> formulates the unlearning process as a constrained maximization problem, limiting the unbounded loss to an ℓ_2-norm sphere around a designated reference model that may be difficult to obtain. FUKD <cit.> removes the contribution of a target client by subtracting the historical parameter updates and recovering the model performance through knowledge distillation <cit.>, necessitating unlabeled data on the server. Additionally, MoDe <cit.> adjusts pre-trained model parameters through two phases (knowledge erasure and memory guidance) to reduce discriminability for the target forgotten data and restore performance, but depends on the initial state of the degraded model. These limitations underscore the necessity for a more effective FU framework that addresses these challenges without compromising efficiency or privacy. To make better use of the information contained in the teacher network, since <cit.> most knowledge distillation methods have shifted from output distillation to feature distillation <cit.>, showing superior performance on various tasks. However, existing knowledge distillation-based unlearning methods focus merely on output distillation <cit.>. MoDe <cit.> achieves unlearning by using a degraded model (i.e., a "Bad Teacher") that has never been trained on the forgotten data to generate pseudo labels for the student model on the forgotten data. Chundawat et al. <cit.> employ a teacher-student objective that minimizes the KL-Divergence between the output of the "Bad Teacher" and the student, encouraging the student model to align closely with the "Bad Teacher" on the forgotten set. Similarly, SCRUB <cit.> suggests maximizing the KL-Divergence to encourage the student model to "move away" from the trained teacher model on the forgotten data. To the best of our knowledge, no method has yet considered encouraging the student model to learn from the "Bad Teacher" at the feature level to guarantee a higher level of forgetting, marking an opportunity for innovation in unlearning. In this paper, we delve into the paradigm of Federated Client Unlearning and present the first FU framework in medical imaging to ensure the right of a target client to remove the contribution of their data from a trained global model efficiently. We use Model-Contrastive Unlearning (MCU) to encourage the model to perform similarly to a downgraded model and differently from the trained global model on the forgotten data, which pioneers a step towards achieving unlearning at the feature level. Meanwhile, to preserve the generalized knowledge and only remove the client-specific knowledge of the target client, we use Frequency-Guided Memory Preservation (FGMP) to preserve the low-frequency components of the trained model, ensuring rapid post-training on the remaining clients. We validate our proposed method on two real-world tasks, including intracranial hemorrhage (ICH) diagnosis and skin lesion diagnosis.
Extensive experiments demonstrate that our method outperforms a number of state-of-the-art FU methods without compromising privacy, and with an expected speed-up of 10-15 times compared with retraining from scratch. § METHODOLOGY §.§ Preliminaries and Overview We adopt a typical Federated Unlearning (FU) scenario as described in prior works <cit.>, involving K clients 𝐂 = { C_1, C_2, …, C_K } and a central server participating in FL. The dataset held by each client is denoted as 𝐃 = { D_1, D_2, …, D_K }. Suppose that after t rounds of FL, each client possesses a trained global model 𝐌_tr. We refer to the client C_u that wants to opt out as the target client, where C_u requests to remove the contribution of their data D_u from 𝐌_tr. Following <cit.>, the goal of this study is to unlearn D_u, effectively eliminating its influence from 𝐌_tr to produce an unlearned model 𝐌_un. Our proposed framework FCU is presented in Fig. <ref>. The target client C_u initiates the unlearning process by performing local unlearning to generate the initial unlearned model 𝐌_un, where the proposed Model-Contrastive Unlearning (MCU) makes 𝐌_un perform similarly to a model that has never been trained on D_u and differently from the trained model 𝐌_tr at the feature level. Furthermore, to preserve the generalized knowledge and only remove the client-specific knowledge of the target client, we use Frequency-Guided Memory Preservation (FGMP) to preserve the low-frequency components of the trained model, thereby achieving rapid post-training to generate the final unlearned model 𝐌_un. §.§ Model-Contrastive Unlearning The intuition of our Model-Contrastive Unlearning (MCU) is to achieve unlearning by encouraging the unlearned model 𝐌_un to perform similarly at the feature level to a model that has never been trained on D_u, which we refer to as a downgraded model 𝐌_down. Model-Contrastive Learning was first proposed in <cit.>, where it aims to decrease the representation drift between the local model and the global model in FL. In contrast, we propose to encourage the unlearned model to output similar features as a downgraded model 𝐌_down (pull) and dissimilar features as the trained global model 𝐌_tr (push), and we refer to this process as Model-Contrastive Unlearning (MCU). We use a model with the same structure as 𝐌_tr that is pretrained only on ImageNet <cit.> and has never been trained on D_u as the downgraded model 𝐌_down, based on two intuitions: 1) 𝐌_down also serves as the pretrained model for 𝐌_tr in FL training before FU, and 2) 𝐌_down possesses the ability to extract low-level features of images <cit.>. In contrast, MoDe <cit.> chose a randomly initialized model as the degraded model to generate pseudo labels for the unlearned model, where this randomly initialized model may depend strongly on the initial weights. Similar to <cit.>, our Model-Contrastive Unlearning loss can be formulated as: ℒ_mcu = -log( exp(sim(z, z_down)/τ) / [ exp(sim(z, z_down)/τ) + exp(sim(z, z_tr)/τ) ] ), where sim(·,·) is a cosine similarity function, τ is known as the temperature, z represents the feature vector extracted by the model for a given input x, z_down is the feature vector extracted by the downgraded model 𝐌_down, and z_tr is the feature vector extracted by the trained global model 𝐌_tr.
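The following is a minimal PyTorch sketch of ℒ_mcu, assuming the three feature vectors have already been extracted by the respective models; the softmax-over-two-logits form is mathematically equivalent to the equation above, and the function name is ours for exposition.

import torch
import torch.nn.functional as F

def mcu_loss(z, z_down, z_tr, tau=0.5):
    # z:      features of the model being unlearned, shape [batch, dim]
    # z_down: features of the downgraded model (ImageNet-pretrained only)
    # z_tr:   features of the trained global model
    sim_down = F.cosine_similarity(z, z_down, dim=-1) / tau  # pull term
    sim_tr = F.cosine_similarity(z, z_tr, dim=-1) / tau      # push term
    logits = torch.stack([sim_down, sim_tr], dim=-1)         # [batch, 2]
    # Cross-entropy with target class 0 equals -log of the "pull" softmax,
    # i.e. the L_mcu expression, averaged over the batch.
    target = torch.zeros(z.size(0), dtype=torch.long, device=z.device)
    return F.cross_entropy(logits, target)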
§.§ Frequency-Guided Memory Preservation The goal of the unlearning process is to remove the specific knowledge of the target client C_u without losing the generalized knowledge already learned in the trained global model 𝐌_tr. Inspired by the findings of <cit.>, which indicate that the low-frequency components of parameters may reflect the basis for global features across all clients while the high-frequency components may contain specific knowledge of an individual client, we introduce Frequency-Guided Memory Preservation (FGMP). FGMP aims to preserve the low-frequency components of 𝐌_tr and the high-frequency components of the unlearned model 𝐌_un to acquire a newly unlearned model after MCU. Intuitively, the newly unlearned model 𝐌'_un maintains the generalized knowledge inherited from 𝐌_tr and has the specific knowledge from D_u removed in the high-frequency components by MCU. Specifically, we conduct FGMP every T_FGMP iterations (e.g., T_FGMP is set to 10 in our experiment) while MCU is continuously executed, resulting in an unlearned model 𝐌_un. We use the FFT to convert the parameters of 𝐌_un and 𝐌_tr into the frequency domain, and preserve the low-frequency part of 𝐌_tr and the high-frequency part of 𝐌_un. Afterwards, the newly unlearned model 𝐌'_un is constructed by the inverse FFT (IFFT). We conduct the FFT and IFFT of model parameters similarly to <cit.>. To clarify, for the weights w in a convolutional layer that has N input channels, H output channels and a kernel of size d_1 × d_2, we reshape w ∈ ℝ^N× H× d_1× d_2 into a 2-D matrix w' ∈ ℝ^d_1N× d_2H to ease the process of FFT and IFFT. Then, we can obtain the amplitude map ℱ^A and phase map ℱ^P through the Fourier transform ℱ = ℱ^A e^jℱ^P: ℱ(w)(m, n) = ∑_x,y w'(x, y) e^-j2π(xm/(d_1N) + yn/(d_2H)), j^2 = -1. To extract the low-frequency components, we define a mask matrix M of the same dimensions as w', M ∈ {0, 1}^d_1N × d_2H, with M_ij = 1 for the central region and M_ij = 0 elsewhere. The central region is a rectangle centered around the middle of M, with dimensions ⌊ rd_1N ⌋ × ⌊ rd_2H ⌋, where r is the ratio of the low-frequency part to be preserved, and ⌊·⌋ denotes the floor function to ensure integer dimensions. Hence, the newly unlearned model in the frequency domain, preserving the low-frequency part of 𝐌_tr and the high-frequency part of 𝐌_un, can be formulated as: ℱ̂^A(w'_adj) = M ⊙ ℱ^A(w'_pre) + (1-M) ⊙ ℱ^A(w'_adj), where ⊙ denotes element-wise multiplication, and w'_pre and w'_adj are the reshaped weights of 𝐌_tr and 𝐌_un, respectively. Finally, we apply the IFFT ℱ^-1 to convert the amplitude and phase maps back to the parameters as w'_adj = ℱ^-1(ℱ̂^A(w'_adj), ℱ^P(w'_adj)). After FGMP, we obtain a newly unlearned model 𝐌'_un. Intuitively, the high-frequency components of 𝐌_un have been made to forget the specific knowledge of the target client due to the application of MCU, while the low-frequency components retain the generalized knowledge. This selective retention and forgetting forms a solid foundation for post-training, ensuring that the model preserves its ability to generalize well while removing client-specific knowledge.
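A sketch of this frequency-domain fusion step is given below. The exact interleaving used when reshaping w into w', and the use of fftshift to place low frequencies at the centre (matching the centred mask M), are illustrative assumptions rather than a definitive rendering of the implementation.

import torch

def fgmp_fuse(w_tr, w_un, r=0.25):
    # Keep the low-frequency amplitude of the trained global weights w_tr
    # and the high-frequency amplitude of the unlearned weights w_un;
    # the phase is taken from w_un. Shapes: [N, H, d1, d2].
    n, h, d1, d2 = w_un.shape
    a = w_tr.permute(2, 0, 3, 1).reshape(d1 * n, d2 * h)  # assumed reshape order
    b = w_un.permute(2, 0, 3, 1).reshape(d1 * n, d2 * h)
    Fa = torch.fft.fftshift(torch.fft.fft2(a))
    Fb = torch.fft.fftshift(torch.fft.fft2(b))
    amp, pha = torch.abs(Fb), torch.angle(Fb)
    mask = torch.zeros_like(amp)
    ch, cw = amp.shape[0] // 2, amp.shape[1] // 2
    rh, rw = int(r * amp.shape[0]) // 2, int(r * amp.shape[1]) // 2
    mask[ch - rh:ch + rh, cw - rw:cw + rw] = 1.0          # central low-freq region
    amp_fused = mask * torch.abs(Fa) + (1 - mask) * amp   # the FGMP fusion rule
    fused = torch.fft.ifft2(torch.fft.ifftshift(torch.polar(amp_fused, pha)))
    return fused.real.reshape(d1, n, d2, h).permute(1, 3, 0, 2)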
§.§ Overall FCU Framework After local unlearning, the server sends the unlearned model 𝐌_un to the remaining clients and requests them to conduct post-training using FedAvg <cit.>, where 𝐌_un serves as the initial global model and the global objective can be formulated as: min_W 𝐋(W) = ∑_k ∈{1, …, K}∖{u} ℒ_k(W), where ℒ_k is the local objective of client C_k, and W is the parameters of the global model. The global model parameter W is iteratively updated with the aggregation of local models on the remaining clients, which is defined as W^t+1 = ∑_k ∈{1, …, K}∖{u} (n_k/(n - n_u)) W_k^t, where W^t+1 represents the updated global model parameters, n_k and n_u denote the sample sizes of the k-th client and client u, respectively, n is the total sample size from all clients, and W_k^t are the parameters from the k-th client's model. Due to the memory preservation achieved by FGMP, our post-training can efficiently restore model performance on the remaining datasets within a few rounds. § EXPERIMENTS §.§ Experiment Setup Datasets. We evaluated our method on two public real-world medical image classification tasks: 1) Intracranial hemorrhage (ICH) diagnosis. We use the RSNA-ICH dataset <cit.> and follow <cit.> to perform the binary diseased-or-healthy classification, randomly sampling 25,000 slices. 2) ISIC2018 skin lesion diagnosis. We conducted skin lesion diagnosis with HAM10000 <cit.>, which contains 10,015 dermoscopy images. The training, validation and testing sets for both datasets were divided 7:1:2. For both tasks, to simulate heterogeneous multi-source data and following <cit.>, a Dirichlet distribution, i.e., Dir(α = 1.0), is used to divide the training set among 5 clients. Implementation Details. For both tasks, we used DenseNet121 <cit.> as the backbone. The network was optimized by the Adam optimizer with momentum terms set to 0.9 and 0.99, and with the learning rate set to 10^-5 at the target client and 10^-4 for the other clients. The total batch size was 64 in both local training and local unlearning, and the local unlearning iterations were 100. The temperature τ in the MCU loss was 0.5 by default, as in <cit.>. The interval T_FGMP to execute FGMP in MCU was set to 10. During post-training, the local training iterations were 20, and the total communication rounds were 10. The images in both tasks were resized to 224×224. For Task 1, data augmentation included a combination of random flip, rotation, translation, scaling, and Gaussian blur. For Task 2, we employed random flip, rotation, and translation. Evaluation Metrics. We adopt four metrics widely recognized in the recent representative FU study FFMU <cit.>, along with other commonly used metrics, to assess machine unlearning performance across three dimensions: 1) Fidelity, which assesses whether the unlearning methods preserve the original model's performance. This includes measuring the F1-score, accuracy, errors on the retained data Error^r (evaluated on D_r, the data held by the remaining clients 𝐂∖ C_u), and errors on the test dataset Error^t. 2) Efficacy, which evaluates the success of an FU method in eradicating the influence of the target client's data D_u. This is gauged by Error^f, the classification error on the forgotten dataset D_f (i.e., the dataset on the target client C_u). A model whose Error^f is close to that of a retrained model (which has never encountered D_f) is considered favorable <cit.>. 3) Efficiency, which measures the reduction in communication and computational overheads by quantifying runtime, with each method trained to convergence for a fair comparison. All results were averaged over 3 runs. §.§ Comparison with state-of-the-arts We compared our method with the trained global model denoted as origin, the model finetuned on the trained global model as a baseline, and the model retrained from scratch as the gold standard for efficacy <cit.>.
Besides, we compared with five recent state-of-the-art (SOTA) methods: FFMU <cit.>, which applies random gradient quantization; MoDe <cit.>, which utilizes a degraded model to unlearn; UPGA <cit.>, which employs projected gradient ascent to maximize the empirical loss on the target client; FedEraser <cit.>, which eliminates the influence of the target client through historical parameter updates iteratively; and FUKD <cit.>, which erases the contributions of clients by subtracting the historical parameter updates and restores the performance by knowledge distillation. The quantitative results for the two tasks are presented in Table <ref> and Table <ref>. Our method leads in fidelity, achieving improvements of approximately 1.5% in accuracy and 1.6% in F1 score for Task 1, and 1.9% in accuracy and 5.0% in F1 score for Task 2, compared to the second-best FU method, showing robustness in maintaining model performance. In efficacy, it nearly matches the retrained model, which is considered the gold standard for forgetting <cit.>, ensuring effective unlearning. Besides, we showed that the finetune method fails to achieve unlearning. For efficiency, we see an impressive runtime reduction, achieving roughly a 10 to 15 times speed-up compared with retraining from scratch. Ablation study. We conducted ablation studies to assess the effectiveness of the primary components of our FCU framework. As shown in Table <ref> and Table <ref>, the performance drops significantly without post-training, and the runtime increases considerably without FGMP. The effectiveness of our FGMP in facilitating local unlearning across various iterations is demonstrated in Fig. <ref>, showing that it maintains performance at its maximum level. Without FGMP, there is a noticeable decline in test accuracy even before the model has fully executed unlearning across the entire dataset. Besides, we replaced our MCU with KL-based knowledge distillation <cit.> and pseudo-label-based knowledge distillation <cit.>, which shows that our MCU better preserves performance on the test set while achieving a similar forgotten error. § CONCLUSION We present the first federated unlearning framework in medical imaging, which facilitates the right of a client to be forgotten. In the local unlearning phase, our FCU utilizes Model-Contrastive Unlearning (MCU) to encourage the model to perform similarly, at the feature level, to a model that has never seen the forgotten data. To preserve the generalized knowledge, we use Frequency-Guided Memory Preservation (FGMP) to preserve the low-frequency components of the trained global model, ensuring a smooth forgetting process. Benefiting from FGMP, our FCU framework quickly restores performance with minimal post-training rounds, achieving a 10-15 times speed-up over retraining from scratch, while demonstrating remarkable federated unlearning effectiveness for medical imaging. §.§.§ Acknowledgments. This work was supported by the Hong Kong Innovation and Technology Fund (Project No. MHP/002/22) and the HKUST 30 for 30 Research Initiative Scheme (No. 3030_024). §.§.§ Disclosure of Interests. The authors have no competing interests to declare that are relevant to the content of this article.
Multiple-Resolution Tokenization for Time Series Forecasting with an Application to Pricing
[ "Egon Peršak", "Miguel F. Anjos", "Sebastian Lautz", "Aleksandar Kolev" ]
§ ABSTRACT We propose a transformer architecture for time series forecasting with a focus on time series tokenisation and apply it to a real-world prediction problem from the pricing domain. Our architecture aims to learn effective representations at many scales across all available data simultaneously. The model contains a number of novel modules: a differentiated form of time series patching which employs multiple resolutions, a multiple-resolution module for time-varying known variables, a mixer-based module for capturing cross-series information, and a novel output head with favourable scaling to account for the increased number of tokens. We present an application of this model to a real-world prediction problem faced by the markdown team at a very large retailer. In the experiments conducted our model outperforms in-house models and the selected existing deep learning architectures. § INTRODUCTION Forecasting remains a frontier for deep learning models. Non-stationarity, highly dependent stochastic processes, limited predictability, and most importantly small datasets differentiate this prediction problem as a particularly difficult one. The value of improved forecasting capabilities is nearly self-evident as it enables us to improve the contextualisation of decision making. The model we propose was developed specifically for time series for which we have control over some of the auxiliary variables. We assume those auxiliary variables have a significant effect on the time series. Pricing problems are a canonical example in which the auxiliary variable of price affects sales. The ubiquity of pricing problems and the resulting large datasets make them a natural candidate for deep learning methods. Markdowns are a subset of pricing problems concerned with reducing prices to clear expiring stock. In this context a forecasting model which incorporates auxiliary variables such as price can be utilised as a simulator to narrow down pricing strategies for empirical testing. There is considerable operational and reputational risk in evaluating previously untrialled strategies. A good simulator enables a virtual exploration of the pricing strategy hypothesis space which can help select potentially more profitable and sustainable pricing strategies for testing. This work is about tokenisation strategies for time series and aligning a transformer architecture with these strategies. We believe that the main way of advancing transformer capabilities for time series forecasting is by developing and validating specialised tokenisation strategies. Our contributions are: * A set of new tokenisation modules to obtain: multiple-resolution past data tokens, multiple-resolution known time-varying tokens, and cross-series tokens. This creates a much broader context window. * A reverse splitting output head with favourable scaling properties to deal with the increase in the number of tokens. * An application of the model to a real-world forecasting problem faced by the pricing team at a very large retailer. In the experiments we demonstrate that our method outperforms existing in-house methods and two popular existing architectures on a real-world pricing problem. § DEEP LEARNING FOR TIME SERIES FORECASTING Denote the forecast horizon as f and the lookback horizon as l.
We define three forms of data associated with each individual time series: * Static s (store information, product information), * Time-varying known x_t_0-l+1:t_0+f (global seasonal data, prices), * Observed data/time-varying unknown y_t_0-l+1:t_0 (sales, weather), where t_0 is the point at which we want to make our forecast. The multivariate forecasting problem at t_0 with a forecast horizon of f is the prediction of future values of the series for the time periods t_0+1 to t_0+f across all channels c_1,...,c_n ∈ C, which we denote as ŷ_c_1:c_n,t_0+1:t_0+f, so as to minimise some loss function loss(y_c_1:c_n,t_0+1:t_0+f, ŷ_c_1:c_n,t_0+1:t_0+f). Each channel corresponds to one of the variates we are interested in forecasting simultaneously. Channels are a choice reflecting our structural view of the time series. Aggregation can be done by concatenating series and adding a static variable denoting some characteristic of the series, or by stacking different series along a new dimension. Time series tend to be non-stationary, that is, the joint probability distribution across the same patch resolution is not constant across time. The nature of the data-generating process is fundamentally inconsistent across different time series, meaning it is harder to aggregate large corpuses for training. Time series intrinsically produce fewer observations. In most cases we simply do not have a sufficient number of samples for deep learning to be viable. There are limited cases for which we have sufficient corpus sizes in time series: where there is an abundance of related observed processes, a high resolution of measurement, or a very short forecasting interest relative to the total length of the observed sequence. Given that deep learning requires large datasets, applying it to unsuitable forecasting problems is fruitless absent powerful foundation models. The forecasting problem in pricing markdowns is suitable as it benefits from being able to aggregate/concatenate many individual pricing time series across SKUs and stores. §.§ Transformers Components of the original transformer architecture <cit.> power the state of the art for large language models and have seen more computational resource investment than any other deep learning architecture. Empirically, the transformer is outstanding for language modelling and benefits from scaling laws. Due to the success of transformers with the sequential task of language there has been considerable interest in applying transformers to time series. Academic research has produced mixed results and tends to be limited in the scope of study. In fact there are strong arguments to be made against using transformers for time series, with plenty of vocal opponents. Transformers have architectural limitations when applied to time series <cit.>. Transformers are permutation invariant and thus inherently do not create an inductive bias which reflects that observations happen sequentially (typically in a highly auto-correlated manner). Transformers for language use two core architectural adjustments for dealing with the lack of sequential inductive bias. Positional encoding adds a bias to the latent matrix representation of a language sequence which is designed or learnt to reflect sequentiality. Decoder architectures mask out the attention matrix so that each unit of latent representation (token) can only observe the tokens that occurred before it in the sequence. The effectiveness of either of these techniques for forecasting is still an open question.
The evidence so far is mixed and simple models have been shown to outperform very complex transformer models on academic benchmarks <cit.>. At a higher level, the concept of a token in language[Note that tokenisation in LLMs is typically not trained end-to-end and is fixed, transferred into the model from a different pretraining task. This is often a cause of problems.] does not trivially map to time series. Time series are processes which have regular or irregular observations, but single observations are not necessarily meaningful units in terms of the dynamics of a time series. An analogous reasoning from classical time-series approaches is the concept of the order of integration, which is the number of times a time series has to be differenced (how many adjacent points need to be considered) to become stationary. Time series are often decomposed into trend and seasonality or frequencies, both of which are transformations of more than just a single observation. A notable transformer time series architecture uses frequency decomposition to tokenise a time series <cit.>. Alternatively, a popular and increasingly standard method <cit.> for tokenisation is patching <cit.>: it splits a series up into fixed-length sequences which are then mapped into the embedding space using a shared learnable linear transformation. The original PatchTST uses patches of length 16 with a stride of 8 (allowing for overlapping patches). This can be interpreted as the resolution at which a time series is processed. As opposed to previous by-observation-period encoder-decoder designs <cit.>, patching loses the 1-to-1 correspondence between tokens and forecast horizon periods. Instead, the matrix output from the transformer blocks is flattened and passed to a large linear layer (presenting a potential bottleneck) which outputs the forecast vector. The implicit assumption is that meaningful information about the time series occurs at resolutions denser than the base time series. A linear model over the whole series assumes that we need to look at the highest density: the whole series. Our work builds on this assumption by extending patching to multiple resolutions. There have been many attempts to design transformer and transformer-like architectures for time series, particularly for long forecast horizons due to a set of popular testing benchmarks. The vast majority of innovation appears either in the design of the tokenisation or the architecture of the token processing engine. The https://github.com/ddz16/TSFpaper repository is an excellent overview of recent developments across a range of modelling approaches. §.§ Existing Multiple Resolution Patching Approaches We would like to outline the differentiating contributions of our work in contrast with other recent approaches which employ the idea of multiple-resolution modelling in transformer architectures. We provide a brief description of each model and point out the key difference in capabilities and architecture. Our model is unique in that it uses the tokens from multiple resolutions in the same attention mechanism, allowing for explicit pairwise modelling at multiple resolutions across multiple data types. In addition we provide a scheme for how to tokenise time-varying known data at multiple resolutions, including for the time steps which occur in the future. <cit.> The Scaleformer was the first notable attempt at multi-scale modelling applied to time-series forecasting with transformers.
It iteratively implements broader pooling which forms the inputs for an encoder, and feeds the decoder linear interpolations of the prediction of the previous layer. The outputs of all layers are combined with a "normalising"-weighted sum. It does not process multiple resolutions simultaneously, instead opting to do so sequentially. It only handles past/observed data and not any auxiliary variables. <cit.> The Pathformer is designed to work as a foundation model, amenable to working with time series of differing scales. It is composed of many blocks which operate at different resolutions. Each block first uses QKV-attention to process by individual patch, allowing for cross-series modelling and the inclusion of all data available at a given past time stamp by compressing it into a single vector patch embedding. The next step is a layer of self-attention which models the pairwise relationships of all of the resulting patch embeddings. This is paired with an adaptive pathway which routes the input series into a weighted combination of the resolution blocks. The routing is based on a learned embedding obtained from the trend and seasonality of a series. At no point does attention look at multiple resolutions simultaneously, meaning the routing is the only source of modelling such relationships, which is sensible when trying to accommodate vastly different time series. <cit.> Another foundation model. The key motivation is being able to process any arbitrary time series, and this approach can model all auxiliary variables. All related inputs are padded, flattened, and concatenated into a single series which is then patched to tokenise. Multiple scales are not used actively. Instead multiple resolutions are defined and contained in the model but only one is used for a given forecasting task. The choice of resolution is determined heuristically. <cit.> This work is the closest to ours in the degree to which it models the relationships between embeddings at different resolutions. Each block uses separate branches to patch at different resolutions, but at the end of the block it learns a linear projection which fuses all tokens to form a vector which then gets patched again in the next block. Every fusion models the relationships of representations at different resolutions. Interestingly, the patching is done at each block. The approach does not extend to auxiliary data or model cross-series relationships. § THE MODEL: MULTIPLE-RESOLUTION TOKENIZATION (MRT) Our model is based on two ideas. First, that time series carry relevant information at multiple resolutions and that this should be represented in attention simultaneously. Second, that the resulting multiple-resolution tokens can and should be joined with tokens obtained from auxiliary data to form the input matrix for the transformer. This idea is a response to one of the key design questions in transformers for time series forecasting: what are meaningful units of information. Tokens in this context should just be understood as meaningful representations of some facet of the time series. In designing separate tokenisation for past and auxiliary data we implicitly assume that auxiliary data carries significant information. Designs which include auxiliary information often meld all information at a given token together <cit.>, whereas we process each type of auxiliary information separately. The separation into more tokens enables explicit attention links to aspects of auxiliary information, which increases interpretability.
The multitude of resolutions complemented by representations of auxiliary variables as a tokenisation strategy enables the learning of complex pairwise relationships. To add clarity we introduce a common notation to describe the view of the tensor and the operation performed on it at a given point in the model. We will describe the dimensions of a tensor with a collection [ ] of letters which represent dimensions. The last k letters represent the dimensions the operation is taking place over. Most operations are performed over a single dimension; however, there are exceptions, for example self-attention operates on matrices, so the last two dimensions. If a tensor has been flattened this will be indicated in the new view with a · and if two tensors have been concatenated in a dimension we will use +. The relevant dimensions in our case are B for batch or sample, C for channel, T for time and for patch (a patch is typically a compression across the time dimension or a set of auxiliary variable embeddings, so T can be understood as the token dimension), V for the variable dimension, and L for the latent dimension. Note that these dimensions will not describe unique states as they serve as illustrators as opposed to an exact functional description of the model, which can be found in the code. We use lower-case letters for fixed values to emphasise the size of the dimension; for example (unless stated otherwise) the size of the latent dimension is d_m, so [B,C,T,L=d_m]. To illustrate this notation consider the standard output head of PatchTST <cit.>: the operation takes the output from the transformer in the form [B,C,T,L], flattens it to [B,C,T · L] and applies a linear layer to the last dimension to produce an output [B,C,f]. In short we denote this as: linear layer [B,C,T · L] → [B,C,f]. In the case of linear layers we can also identify the size of the matrix that needs to be learnt in the transformation. For a generic linear transformation [...,X] → [...,Y] the learnt transformation matrix ∈ ℝ^|X|×|Y|. We employ channel independence in our base and auxiliary architecture, which is a broadly used modelling approach in the field. Channel independence means that the computation never occurs across dimension C and the channels are treated as independent samples by the modules. We propose a separate module which extracts cross-series information in the form of cross-series tokens. §.§ Multiple-Resolution Patching The multiple-resolution patching is defined by a resolution set K = {k_1,...,k_r}, takes an input of the form [B,C,T=l] and uses a set of |K| = r linear transformations to create an output of the form [B,C,T,L]. Denote the size of the latent dimension of the model as d_m. We extend patching to multiple resolutions by using several divider blocks. A divider block splits a time series into k_i roughly equal parts. For a series of length h the base patch length is set as b_i = ⌊ h/k_i ⌋. If k_i does not divide h we increase the length of the first h - b_i k_i patches by 1. In our application h is either l (past data) or l+f (time-varying known). For each resolution k_i we learn a linear projection into the latent space ℝ^b_i+1 ↦ ℝ^d_m. Weights are shared for all patches within a resolution. We left-pad patches that are b_i long with a 0. The core operation is [B,C,T = b_i+1] → [B,C,L=d_m], which is repeated ∑_k_i∈ K k_i times (k_i times for each resolution) and stacked along a patch/token dimension into [B,C,T = n_MRP, L]. The created dimension T contains ∑_k_i∈ K k_i = n_MRP tokens obtained from multiple-resolution patching; a minimal sketch of this module is given below.
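The following PyTorch sketch renders the divider blocks and per-resolution projections under the padding rules described above; class and parameter names are ours for exposition, not the exact implementation.

import torch
import torch.nn as nn

class MultiResolutionPatching(nn.Module):
    # One linear projection per resolution k_i; patches within a resolution
    # share weights. Shorter patches are left-padded with a zero so every
    # patch at resolution k_i has length b_i + 1.
    def __init__(self, resolutions, lookback, d_model):
        super().__init__()
        self.resolutions = resolutions
        self.lookback = lookback
        self.proj = nn.ModuleList(
            [nn.Linear(lookback // k + 1, d_model) for k in resolutions])

    def forward(self, y):                      # y: [B, C, T=lookback]
        tokens = []
        for k, proj in zip(self.resolutions, self.proj):
            b = self.lookback // k             # base patch length b_i
            n_long = self.lookback - b * k     # first n_long patches get +1
            start = 0
            for i in range(k):
                length = b + 1 if i < n_long else b
                patch = y[..., start:start + length]
                start += length
                if length == b:                # left-pad short patches with 0
                    patch = torch.cat(
                        [torch.zeros_like(patch[..., :1]), patch], dim=-1)
                tokens.append(proj(patch))     # [B, C, d_model]
        return torch.stack(tokens, dim=2)      # [B, C, n_MRP, d_model]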
An illustration of multiple-resolution patching is presented in figure <ref>. §.§ Handling Auxiliary Information A key advantage of deep learning models over classical models is that they can accommodate arbitrary forms of input, enabling us to jointly model all available information for a time series. Auxiliary information is not trivially useful for time series forecasting. Most recent state-of-the-art papers exclusively focus on past/observed data. The inclusion of auxiliary data often leads to issues with overfitting, explainable by the noise in time series data and small datasets. In existing work which includes auxiliary information it is typically mixed with past data to form tokens representing all information at each time step or resolution. We take a different approach and design a separate tokenisation scheme for different types of auxiliary data. This enables us to explicitly learn contextual representations of the time series that are purely auxiliary and not directly dependent on the noise in the time series, which should help reduce overfitting. Base Tokenisation Data for time series is either continuous or categorical. Categorical variables need additional processing; in the case of transformers entity embeddings are standard practice and amount to learning a vector embedding for each type within a category. To match the embedding size in the case of numerical auxiliary variables we employ numerical embeddings. We learn a vector for each numerical auxiliary variable which is then scaled by the value of that variable. We call this process base tokenisation. To handle missing values, we assign a separate learnable vector embedding for each variable in cases where the value has not been observed. When there is variation in the length of the input horizon we left-pad all temporal series with a special category or numerical value that always yields a zero vector embedding, which then does not activate the corresponding aspects of the network. We do not utilise these base auxiliary tokens directly, as the inclusion of many auxiliary variables, especially time-varying known tokens (TVKT), would dominate the transformer input and the increased input size would scale poorly due to the quadratic complexity of self-attention. In multivariate forecasting, the auxiliary data needs to be distinguished as global (ex: holiday dummies) or specific (ex: price data). In the case of global variables the values for a given sample are identical across C. This distinction depends on the choice of aggregation. In our implementation we use the same module architectures to process both specific and global auxiliary variables but implement them separately, generating one group of specific and one group of global TVKT. We include auxiliary variable tokens [B,C,T = n_S + n_TVK, L], where n_S is the number of static tokens and n_TVK is the number of TVKTs, by right-concatenating them with the output of the MRP to obtain [B,C,T = n_MRP + n_S + n_TVK, L]. Time-Varying Known Variables (TVK) The TVK of a time series form a tensor [B,C,T = l + f, V]. We process global and specific variables separately. The base tokenisation first transforms both the categorical and numerical variables in the tensor from [B,C,T,V] → [B,C,T,V,L]. We then learn a linear projection that compresses the variable dimension to one, [B,C,T,L,V] → [B,C,T,L,V=1] = [B,C,T,L]. This base tokenisation plus mixing across variables creates one auxiliary token per time step; a sketch of the base tokenisation step is given below.
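The sketch below illustrates base tokenisation for a mixed set of categorical and numerical auxiliary variables, including the dedicated embedding for missing numerical values; the class and argument names are assumptions made for illustration.

import torch
import torch.nn as nn

class BaseTokenizer(nn.Module):
    # Entity embeddings for categoricals and value-scaled learnable vectors
    # for numericals; a separate learnable vector stands in for missing
    # numerical values.
    def __init__(self, cat_cardinalities, n_numeric, d_model):
        super().__init__()
        self.cat_emb = nn.ModuleList(
            [nn.Embedding(c, d_model) for c in cat_cardinalities])
        self.num_emb = nn.Parameter(torch.randn(n_numeric, d_model) * 0.02)
        self.num_missing = nn.Parameter(torch.randn(n_numeric, d_model) * 0.02)

    def forward(self, x_cat, x_num, num_mask):
        # x_cat: [..., n_cat] long, x_num: [..., n_num] float,
        # num_mask: [..., n_num] bool, True where the value is observed
        cat_tokens = torch.stack(
            [emb(x_cat[..., i]) for i, emb in enumerate(self.cat_emb)], dim=-2)
        num_tokens = x_num.unsqueeze(-1) * self.num_emb      # value-scaled
        num_tokens = torch.where(num_mask.unsqueeze(-1), num_tokens,
                                 self.num_missing.expand_as(num_tokens))
        return torch.cat([cat_tokens, num_tokens], dim=-2)   # [..., V, L]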
In line with the earlier multiple-resolution patching we apply a modified version built on basis combination. In contrast with past data we want to patch across tokens (collections of vectors), not individual observations (collections of points). We split the tensor into k_i roughly equal parts for each resolution i along the time dimension. We use the same resolutions, rules on length (denote the length of a patch a_i), and padding (so each patch is actually a_i+1), but for a longer series, as the size of T is l+f. The splitting leaves us with a number of tensors of the shape [B,C,L,T = a_i+1]. We treat the columns of each matrix in the last two dimensions of the form [L=d_m, T = a_i+1] as a basis. For each resolution we learn a basis layer: a vector of size a_i+1 which linearly combines the columns of the matrix into a single vector. The basis layer transforms [B,C,L,T = a_i+1] → [B,C,L,T = 1] = [B,C,L] for each patch in each resolution. This operation is illustrated in figure <ref>. The basis combination results in ∑_k_i∈ K k_i = n_MRP such tokens. The tensors are stacked along a patch dimension into [B,C,T = n_MRP, L]. To reduce the relative influence of auxiliary variables and reduce the size of the input matrix to the transformer we employ a linear compression layer to compress the patch dimension to a predefined number of tokens n_TVK. This layer mixes across patch representations. The transformation is a linear layer [B,C,L,T = n_MRP] → [B,C,L,T = n_TVK], which we transpose to [B,C,T,L]. This process creates a set of TVKT which are a mixture of representations extracted at multiple resolutions from joint base representations of the time-varying auxiliary data. Static Variables Static variables form a tensor [B,C,V]. They do not have a time component and are equal across channel C if they are global, and can differ if specific. The base tokenisation transforms the static variable tensor into [B,C,V,L]. If |V| is low we append the base tokenisation to the MRP directly, meaning the number of static tokens (ST) is n_S = |V| and the variable dimension is just treated as the token dimension. Otherwise we employ an additional linear layer [B,C,L,V] → [B,C,L,T = n_S] which operates over the V dimension and condenses the representation to a predefined number of mixed tokens n_S. §.§ Cross-Series Information The overfitting problem has also been empirically observed for the inclusion of cross-series information, which has motivated the use of channel independence <cit.>. We propose a channel mixer module which employs a mixer architecture to generate a set of cross-series tokens (CST) from the tensor containing all other tokens. We call the module responsible for extracting cross-series information the channel mixer. As input the channel mixer takes the tensor of all existing tokens: MRP, ST, and TVKT. Define the number of these tokens as n_B = n_MRP + n_S + n_TVK. The input tensor takes the form [B,C,T = n_B, L]. We employ a mixer architecture inspired by <cit.> to extract cross-series tokens. Mixer architectures have been shown to perform well on time series forecasting while extracting cross-series representations in every block. We roughly illustrate the module in figure <ref>. The operating tensor is normalised before each mixing step and a skip connection is employed across each mixing step. The first mixing step is across the token dimension T, applying a linear transformation [B,C,L,T = n_B] → [B,C,L,T = n_B] followed by an activation function and dropout.
The number of tokens is then squeezed down to a predefined number n_CST with another linear layer [B,C,L,T = n_B] → [B,C,L,A = n_CST] (we use A here to signal that the tokens have been mixed into an auxiliary information representation). The second mixing step is across the channel dimension C. Two linear layers are employed: the first projects the channel dimension of size d_c into the latent dimension of the cross-series module d_cross (an adjustable hyperparameter), so [B,A,L,C = d_c] → [B,A,L,C = d_cross]. After an activation function and a dropout layer the second linear layer projects back into the original channel dimension [B,A,L,C = d_cross] → [B,A,L,C = d_c]. After the end of the second mixing another linear layer squeezes the channel dimension to one, leaving us with [B,C = 1,T,L]. The reason this is done is so that the cross-series tokens are the same across channels. To align it with the dimensions of our collection of tokens we repeat the tensor |C| = d_c times to obtain [B,C = d_c, T = n_CST, L]. We append it to all other existing tokens to obtain the final input matrix A with size n_MRT = n_B + n_CST, which is passed to the transformer module [B,C,T = n_MRT, L]. §.§ Output Head We propose a novel output head architecture which has favourable scaling properties. We call it reverse splitting. The transformer blocks produce a matrix which is the same size as the input matrix [B,C,T,L]. Using the same resolution values as for the splitting, we reverse the process. For resolution k_i we take the next k_i column vectors from the matrix and individually project them into patches that are composed to represent the forecast. The length of the projected patches (p_i or p_i+1) is determined the same way as in the splitting in MRP. We learn a projection matrix for each resolution ∈ ℝ^d_m×(p_i+1) and omit the final vector value in the reverse patching for those patches whose length we have set to p_i. The core operation iterates over the dimension T, so each input is [B,C,L], which is projected into [B,C,T = p_i+1]; this is repeated k_i times and concatenated according to the length rule into [B,C,T = f]. This leaves us with |K| forecast vectors of the form [B,C,T = f] which we simply sum to obtain the final output. We call this reverse splitting. We illustrate this module in figure <ref>. It scales favourably compared to simple flattening as it requires us to learn d_m(p_i+1) parameters for each resolution i, for a total of d_m ∑ (p_i+1). This means that scaling depends on a sum of fractions of the forecast horizon f and inversely depends on the resolution (number of divisions), as p_i ≈ f/k_i. Restated, our output head requires approximately d_m f ∑ 1/k_i parameters. In comparison, the flattening approach would require learning d_m f ∑ k_i parameters, which is considerably greater as the k_i are natural numbers. Note that the reverse splitting does not use the corresponding position outputs of auxiliary tokens. A minimal sketch of this output head is given below.
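The sketch below renders reverse splitting in PyTorch under the assumption that the MRP token positions come first in the transformer output; the names are ours for exposition.

import torch
import torch.nn as nn

class ReverseSplittingHead(nn.Module):
    # One projection per resolution; each group of k_i token vectors is
    # projected into k_i patches, concatenated to length f, and the
    # per-resolution forecasts are summed.
    def __init__(self, resolutions, horizon, d_model):
        super().__init__()
        self.resolutions = resolutions
        self.horizon = horizon
        self.proj = nn.ModuleList(
            [nn.Linear(d_model, horizon // k + 1) for k in resolutions])

    def forward(self, z):                        # z: [B, C, n_MRT, d_model]
        out, offset = 0.0, 0                     # MRP tokens assumed first
        for k, proj in zip(self.resolutions, self.proj):
            p = self.horizon // k
            n_long = self.horizon - p * k        # first n_long patches get +1
            pieces = []
            for i in range(k):
                patch = proj(z[:, :, offset + i])        # [B, C, p+1]
                length = p + 1 if i < n_long else p
                pieces.append(patch[..., :length])       # drop final value if short
            offset += k
            out = out + torch.cat(pieces, dim=-1)        # [B, C, f]
        return out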
While a decoder architecture with masking could be designed to be time consistent with patching at multiple resolutions, it is not trivial to extend it to TVKT and CST, so we avoid masking. Any form of mixing, such as compressing TVKT or CST across the T dimension, obscures causality. Positional Encoding We use learnable positional encoding. This is equivalent to learning a fixed bias for the input matrix to the transformer. We do this as it is difficult to design positional encoding for a complex set of tokenisation procedures. Certain patching approaches use aggregations over conventional positional encoding, but that is still limited to only past data. After Attention We follow the standard transformer architecture: self-attention is followed by a skip connection, normalisation, and then a feed-forward network. We use a standard two-layer MLP with an activation function and dropout. The latent dimension in between the two layers is another hyperparameter d_ff. The feed-forward network is applied to the latent dimension, so separately to each token. The operation is [B,C,T = n_MRT,L = d_m]→ [B,C,T = n_MRT,L = d_ff]→ [B,C,T = n_MRT,L = d_m]. This is followed by a skip connection from the input to the feed-forward network and another normalisation. §.§ Architectural Limitations Multiple resolution patching and the associated auxiliary variable tokenisation add many more hyperparameters in the form of the resolution set. The space of resolution sets is combinatorial, making hyperparameter optimisation a challenging task. We somewhat constrain the space by not allowing overlaps, thus avoiding the need for a stride hyperparameter associated with every resolution. Our tokenisation modules increase the number of learnable parameters in the embedding, especially for small k_i, which makes scaling to large latent sizes more difficult. The CST are sequentially computed downstream of all other tokens, limiting the degree of parallelisation in tokenisation. Since all tokenisation is done end-to-end, a degree of overfitting is expected. Computationally, the main difficulty is the increased number of tokens, since self-attention scales quadratically with the token count. This constrains how high n_MRP can get, limiting dense resolution sets, which may be useful for longer context windows with long lookback horizons. Exploiting and encouraging sparsity in self-attention may be one possibility to deal with the larger context window. § APPLICATION TO A FORECASTING PROBLEM IN PRICING The data for the experiments is real markdown data from a very large European retailer. The markdown pricing prediction problem is to estimate the sales of an item given a series of prices and other contextual information. This prediction problem is understood to be particularly difficult. The theory is that each time there is a price reduction we observe a spike in sales, which then decays. Subsequent price reductions produce a similar spike-and-decay effect, which typically reduces in magnitude with each reduction. This highly non-linear behaviour is difficult to model with simple approaches. It is further exacerbated by the high levels of noise associated with consumer behaviour, owing to stochastic factors like the weather or competitor behaviour and the human factor in actually delivering the pricing strategy in stores. Historical data is limited to a narrow set of tried and tested pricing strategies, such as 25% off, then 50%, then 75%, adding to the epistemic uncertainty.
The downstream decision problem of setting reduced prices for items that are about to expire sees about a million decisions made every day. Auxiliary Information In the case of markdowns, the inclusion of auxiliary data in the model is necessary, as the model otherwise cannot be used as a hypothesis generator for new pricing strategies. The decision variable (price) must be included as an input in the prediction model in order to be able to compare predictions under different markdown strategies. We predict both the full-price and the reduced-price sales concurrently; this defines the channels in C. This reflects the operational characteristics of the retailer, where some proportion of items that are about to expire get moved to a discounted section. One view of the utility of auxiliary information beyond the necessary decision variables is that it enables the aggregation of a larger number of time series in a single training corpus with more learning potential. We can learn tokenisations which capture the relatively specific context of each time series, which leads to a more general-purpose prediction model for sales. In the case of markdowns, individual time series are relatively short and highly contextual. We take each series to be defined as the unique combination of product, store, and expiration date. We want to aggregate all products of interest across all shops for all dates in a single model. We include auxiliary information because we do not believe that there is sufficient information in the early course of a series to effectively differentiate amongst products, stores, and expiration dates. We use the following auxiliary information: * Static: * Global: product group, brand, stock. * Specific: dummy variable for channel (either full-price or reduced-price sales channel). * TVK: * Global: day of week, hour (half-hourly, continuous), month, which iteration of price reduction, proportion of stock marked down. * Specific: price (constant for full-price prediction, changing for reduced-price prediction). Normalisation To further enable the aggregation of many potentially heterogeneous series, and in line with effective practice for deep learning, we use a collection of normalisation procedures. Normalisation is principally utilised to deal with the problem of different scales. Without scaling, neural network training can easily result in unsatisfactory outcomes due to issues with the absolute and relative magnitude of gradients. Normalisation leads to improved generalisation due to increased stability in training and the ability to deal with previously unseen scales. We employ two types of normalisation: instance normalisation for each series, and residual connections with normalisation in the transformer and channel mixer blocks. Instance normalisation subtracts the final value of the series from the entire series, optionally divides it by the standard deviation of the series, and adds a learnable bias (affine transformation). The normalisation is reversed after the pass through the network. This type of normalisation has been shown to perform well empirically and was originally proposed to combat distribution shift/non-stationarity <cit.>. The second type of normalisation is designed in a modular way and is applied after residual connections. It can take the form of either layer or batch normalisation. LayerNorm standardises the entire layer per instance; BatchNorm standardises each value in a layer across the batch.
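To make the instance normalisation above concrete, here is a minimal sketch in the spirit of reversible instance normalisation; the learnable affine part is omitted for brevity, and the names and defaults are illustrative assumptions rather than the exact implementation.

```python
import torch
import torch.nn as nn

class InstanceNorm(nn.Module):
    """Sketch: subtract the final observed value of each series, optionally
    scale by the series' standard deviation, and reverse after the network."""

    def __init__(self, scale: bool = True, eps: float = 1e-5):
        super().__init__()
        self.scale = scale
        self.eps = eps

    def normalise(self, x: torch.Tensor) -> torch.Tensor:
        # x: [B, C, T]; statistics are kept per series for the reverse pass.
        self.last = x[..., -1:].detach()
        self.std = x.std(dim=-1, keepdim=True).detach() + self.eps
        x = x - self.last
        return x / self.std if self.scale else x

    def reverse(self, y: torch.Tensor) -> torch.Tensor:
        # Applied to the forecast so outputs return to the original scale.
        y = y * self.std if self.scale else y
        return y + self.last
```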
As opposed to language modelling, which typically uses LayerNorm, we use BatchNorm in our experiments due to evidence that it is favourable in the time series context, as it is better at handling outlier values <cit.>. In preprocessing, we scale all continuous variables in the dataset. We fit many scalers based on which store-type and product-group combination a series falls into. The intuition is that this heuristic clustering should provide more consistent scaling across different distributions of variables. Data Setting The main issue encountered when attempting to apply Multiple Resolution Tokenisation to pricing data is that the data has a distinct real-world flavour. Sales are aggregated within varying blocks of time that represent a duration of displayed price. The time series is irregularly spaced and each series has very few observations (at most 4). The data does not contain time series of sales when markdowns are not present. Each set of scan observations must be treated as its own series. Since only markdowns are observed, there is a clear selection effect, as the process to select markdowns is biased toward existing, more profitable strategies. This selection bias in the data-generating process may result in poorer generalisation and overconfidence. The signal is also somewhat limited, as there is only a narrow scope of markdown strategies that are regularly implemented. The dataset features many missing values and heuristics to deal with data acquisition problems. §.§ Preprocessing From a model architecture standpoint this dataset poses two key problems: irregularly spaced sampling, and the varying length of the series. Quantisation The aggregation boundary times are rounded to the nearest 30-minute mark. The series is then quantised into 30-minute sections. We chose 30-minute intervals to create sequences with a non-trivial length, to add more auxiliary data to the sequence, and to enable prediction of more potential pricing strategies. We quantise the entries in all columns which are aggregated (sales) over the given time period; the rest are simply copied. The quantisation is as simple as dividing the values by the number of 30-minute periods (exclusive of closure times) that occur from one price reduction to the next. There are two simplifications here: first, the rounding to 30 minutes means that we do not perform any imputation on the boundaries of the aggregations; second, we assign the average value to each of the quantised fields. More disaggregated data would naturally work even better. Note that there are approaches that use irregular time series natively, such as neural differential equations <cit.>, to model the latent state between two observations. Unfortunately, an initial value problem has to be solved at every iteration, which is computationally infeasible for large datasets. §.§.§ Coping with Varying Length To reduce the effect of changing sequence lengths, we only keep the top five percent of sequences in terms of length. This also reduces the dataset to a much more manageable size. Since the sequences vary in duration, the number of steps obtained after quantisation varies as well. We fix the forecast horizon to the final 24 periods, so f = 24. In its current design, the model works for any fixed forecast horizon. We contend with varying input lengths by setting the lookback horizon to the longest sequence minus the fixed forecast horizon and padding all shorter sequences with values that result in zero-vector tokens, effectively deactivating that part of the network.
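As an illustration of the quantisation step, a minimal sketch follows; the function and argument names are hypothetical, and closure handling is reduced to a single minutes-closed argument for brevity.

```python
def quantise(aggregate: float, start_min: int, end_min: int,
             closed_min: int = 0, step: int = 30) -> list:
    """Spread a sales aggregate over equal 30-minute slots between two
    price-change boundaries (assumed pre-rounded to 30-minute marks),
    excluding closure time and assigning the average value to each slot."""
    open_minutes = (end_min - start_min) - closed_min
    n_steps = max(1, open_minutes // step)
    return [aggregate / n_steps] * n_steps

# e.g. 90 units sold over a 3-hour display window -> 6 slots of 15 units each
print(quantise(90.0, start_min=0, end_min=180))
```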
§.§ Experiments The experiments were defined by the range of year-weeks they contained and are limited to just one week each due to memory constraints. Each experiment contains the markdown data for one week across all shops and all items. For any experiment, the series corresponding to the last 20% (rounded up) of the due-dates were held out as the testing set, the 15% (rounded up) of due-dates immediately preceding were used as the validation set, and the remainder as the training set. Model Hyperparameters In this set of experiments we only used one set of model hyperparameters. The parameters picked reflect a set of informal observations on academic datasets and computational limits. We set the base latent dimension to d_m = 64, the dimension of the middle fully connected layer in the transformer to d_ff = 128, and the latent dimension in the channel mixer to d_cross = 16; we use the GeLU activation function and set dropout to 0.0. We use 8 heads in the multi-head attention and stack 2 transformer layers. We compress the number of TVKT and CST to 8 each. We pick the following set of resolutions: {1,2,3,4,6,8}. This results in n_MRP = 24, n_TVK = 2×8 (specific and global), n_S = 4, and n_CST = 8, for a total of 52 tokens. The resulting model had a parameter count of roughly 128k. Training The batch size was set to 128; we used the Adam optimiser with a fixed learning rate of 0.0003; the loss function was the root mean squared error (summed across channels, time steps, and samples). Note that MSE is not the optimal loss function for generalisation and calibration in this case, but investigating what is lies beyond the scope of this work. We train the model for 20 epochs in every experiment, with early stopping if no improvement in validation score is observed for 3 epochs. Results We display the results of our two main experiments in tables <ref> and <ref>. The lookback horizon is 28 in the week 30 experiment and 37 in week 45; the forecast horizon is fixed at 24, i.e., the next 12 hours of operation. The week 45 experiment contains approximately 63 thousand series, and training took approximately 2 hours per epoch when run locally on a CPU. Early stopping was triggered after 7 epochs in the week 30 experiment and after 19 epochs in the week 45 experiment. We only had access to internal predictions on full-price sales, so we exhibit the mean squared error (MSE) and mean absolute error (MAE) for that channel, comparing the internal result with our MRT result. The specifics of the internal prediction methods are confidential, but they are built with statistical and classical machine learning techniques. We also train PatchTST <cit.> as the canonical transformer for forecasting (patch length 8 with stride 4, same latent dimensions) and DLinear <cit.> as an example of a simpler alternative to transformers, under the same training conditions. The model is competitive with internal predictions and an improvement on existing models in the week 30 experiment, and significantly outperforms all other models on week 45. Note that the test scores have been reverted to their original scales. Given the complexity of the prediction task across a range of products and locations, and the necessary adjustments to fit a sequence-to-sequence model, the result for week 45 provides strong evidence that MRT is a powerful model which could deliver much with hyperparameter optimisation and scaling.
The week ranges in these experiments are somewhat limited, but we are confident that over longer ranges these models would be even more competitive due to stronger auxiliary embeddings (especially for global temporal information) and general scaling effects. One contrarian explanation of the week 45 results is that many values are 0 and our network learns to predict near-zero values in many cases. This explanation is somewhat credible given our selection of only the longest series; their longer duration could reflect that clearing stock in those cases is difficult. An encouraging sign is that the predictions for reduced-price sales are better than those for full-price sales (assuming similar scales across channels). Since we have shown that we are competitive with or better than internal models for full-price sales, the reduced-price sales predictions show even more promise. §.§ Discussion The experiments are somewhat limited in scope, mainly due to computational constraints and the delayed timing of the project. More experimentation is needed to weigh the evidence on this architecture. With more compute, the experiments could be extended to much larger dataset ranges and latent spaces. Scaling properties are clearly of interest. Ablation would demonstrate the relative benefit of each of our tokenisation procedures. Hyperparameter exploration in the space of resolution sets would help determine which resolution combinations are performant and uncover useful heuristics, such as perhaps including multiple copies of the same resolution if it is highly involved in attention patterns. Due to commercial sensitivity the dataset cannot be shared[We provide the model code https://github.com/EgoPer/Multiple-Resolution-Tokenization here.]. We acknowledge that this is a serious limitation from a reproducibility standpoint, but believe that the experiments performed here are a substantial contribution to research on transformers for forecasting. This is because most existing architectures have been empirically evaluated on a narrow set of datasets which are typically small-scale, such as ETT <cit.>. What is most likely dataset bias has shifted interest away from using auxiliary or cross-series information. In contrast, we demonstrate that our transformer architecture outperforms currently used in-house methods, DLinear, and PatchTST on a messy real-world problem. The work presents evidence that for certain forecasting problems, modelling auxiliary data and cross-series information adds to generalisation. More ablation is needed to understand which aspects of our tokenisation strategies are the most important in supporting this claim. § CONCLUSION The experiments provide clear evidence that our proposed MRT architecture works well on a noisy real-world problem. This is achieved with an architecture which includes both auxiliary variables and cross-series information, both of which are otherwise often avoided in transformer architectures for time series forecasting. The modules designed for this purpose are novel, as is the practice of treating all of their outputs as tokens in a single attention mechanism. While our experimentation is limited, one of the experiments shows a substantial improvement in performance over existing architectures.
Given the difficulty of the underlying prediction problem, this is a significant observation in favour of testing and potentially deploying transformer architectures on difficult real-world forecasting problems in the pricing domain and beyond.
http://arxiv.org/abs/2407.01855v1
20240701235615
What is the spectral density of the reservoir for a lossy quantized cavity?
[ "Chris Gustin", "Juanjuan Ren", "Stephen Hughes" ]
quant-ph
[ "quant-ph", "physics.optics" ]
E. L. Ginzton Laboratory, Stanford University, Stanford, California 94305, USA cgustin@stanford.edu Department of Physics, Engineering Physics, and Astronomy, Queen's University, Kingston, Ontario K7L 3N6, Canada Department of Physics, Engineering Physics, and Astronomy, Queen's University, Kingston, Ontario K7L 3N6, Canada § ABSTRACT By considering a single lossy three-dimensional cavity mode and using a quasinormal mode expansion for the photonic medium, we show that the frequency-dependence of the coupling between the cavity and its reservoir is dependent on both the cavity contents and the gauge used. For the case of coupling to a single quantum dipole, we identify the form of the spectral density, revealing a ∼ω^-1 prefactor scaling, as well as a spatially-dependent contribution. We thus establish the correct quantum form for the cavity-reservoir interaction and show its significant impact on broadband strong coupling. What is the spectral density of the reservoir for a lossy quantized cavity? Stephen Hughes 2 July 2024 =========================================================================== Introduction. One of the most fundamental concepts in quantum optics is that of the quantized cavity. By spatially confining electromagnetic (EM) fields, the continuous nature of the fields' degrees of freedom can be collapsed into discrete resonances, modelled as harmonic oscillator modes. The simplicity of this model and ease of physical implementation has made quantum optics an indispensable testbed for many aspects of quantum mechanics, from fundamental science <cit.> to quantum information applications <cit.>. Despite the ubiquity of this quantized cavity model, the ab initio fundamental description of the underlying reality is subtle and complex, and the complete details of how such a collapse into discrete quantized modes can be recovered from an underlying quantization of the EM fields in media remains an outstanding challenge. This is because replacing continua with discrete resonant modes relies on an assumption of a closed system, where photons remain localized in the cavity modes indefinitely. However, in reality, all cavities are leaky cavities. A major advance in modelling this reality came with the input-output and system-reservoir theory <cit.>, where the discrete cavity modes couple via a Hamiltonian term to a continuous “reservoir” of photon modes. Under reasonable assumptions, a Lindblad master equation can be derived for the subspace of discrete modes, giving Markovian photon loss. Moreover, the quantum statistics of the photons inside the cavity can be directly related to those which are lost from the cavity by means of a time-local “input-output” relationship. Although such models have since found widespread application in quantum optics, they still rely on phenomenological assumptions on the form of the system-reservoir interaction. For example, specifying their general analysis to the case of a lossy single-mode cavity, standard input-output theories posit the form of the interaction Hamiltonian Ĥ_ int as (using ħ=1 throughout) Ĥ_ int = ∫_-∞^∞ dωΛ(ω)[âb̂^†_ω + â^†b̂_ω], where [b̂_ω,b̂^†_ω'] = δ(ω-ω'), and [â,â^†]=1. The functional form of the system-bath coupling function Λ(ω) (related to the spectral density J(ω)= 2π |Λ(ω)|^2) is left unspecified, as the only value that enters into the final calculations, after a Markov approximation, is |Λ(ω_c)|, with ω_c the resonant cavity frequency. 
Typically, one assumes that the form of Λ(ω) is thus not important, at least in instances where the rotating-wave approximation holds [i.e., the “empty” cavity of Fig. <ref>(a)]. Introducing a strong dipole-coupling of a two-level system (TLS) to the cavity can allow for a surpassing of this regime towards the ultrastrong-coupling (USC) regime of quantum electrodynamics <cit.>, where it is now well-established that the form of Λ(ω) does matter a great deal when it comes to predicting observable outputs of the cavity-TLS system, including emission spectra <cit.> [Fig. <ref>(b)]. To go beyond a phenomenological approach as in Eq. (<ref>) to general “few-mode” scattering problems has also been highlighted in other contexts recently <cit.>. Progress has been made in this manner in certain 1-D systems <cit.>, as well as dispersionless systems <cit.>. The development of a theory consistent with causality (including material dispersion and absorption) which is valid and tractable for general, 3-D resonators, however, remains an outstanding challenge. Pseudomode approaches offer a potential solution, but often involve the introduction of many additional modes beyond the actually present resonances in the system <cit.>. In this work, we disprove the conventional wisdom that such a frequency dependence of the system-bath coupling only produces observable effects in the USC regime, and in doing so, make a major advance towards a more rigorous theory of quantized cavity loss for general systems. We show that a resolution to the question of the system-bath coupling interaction form is intrinsically connected to the complex-valued open quasinormal mode (QNM) of the cavity. Our general findings are: (i) for cavity-QED systems with a single quantized mode interacting with a TLS dipole, if the mode can be described by a purely real-valued QNM at the dipole location, the system-bath coupling scales universally as Λ(ω) ∼ω^-1/2. (ii) For realistic cavity modes, the product of the QNM Q-factor and phase (ϕ) at any location is typically non-negligible, which leads to a failure of traditional system-bath coupling models to capture the correct broadband dynamics, for which we present a heuristic fix. (iii) Gauge-dependent predictions, typically thought to be unique to the USC regime of cavity-QED, can in fact be significant also in weak and strong coupling regimes. We consider a weakly-coupled system of a single TLS and cavity mode interacting within the dipole approximation. By employing the rigorous theory of gauge-invariant macroscopic QED <cit.>, we utilize a QNM expansion of the medium (cavity + environment) Green's function to determine the frequency dependence of a TLS subject to radiative decay in a frequency window dominated by a single mode, which we also verify with a full numerically exact calculation of the Green's function. Comparison with the quantized cavity-photon reservoir model yields the correct choice of system-bath coupling Λ(ω). Interestingly, this result is gauge-dependent. By viewing the phenomenological system-reservoir interaction as arising from an underlying discrete mode projection of the rigorous continuum macroscopic QED theory <cit.>, we identify the Coulomb gauge result as the uniquely correct one. These results challenge the notion that so-called “gauge ambiguities,” the resolution to which is now well known <cit.>, are only perceptible in extreme regimes of light-matter coupling (USC), but in fact are also relevant whenever a broadband frequency dependence is present. 
These findings also highlight the importance of our recently developed gauge-invariant theory of macroscopic QED in a truncated system <cit.>. Continuum Theory. As a representative system, we consider a dipole-TLS weakly coupled to a single resonant cavity mode, and derive the spontaneous emission (SE) rate of the dipole. First, we consider the photonic medium (cavity) to consist of a quantized continuum of reservoir operators, corresponding to the (complex) dielectric medium of the cavity and environment. To this end, we employ the macroscopic QED formalism, which is appropriate for arbitrary dispersive and absorbing media. The transverse part of the (manifestly gauge-invariant) vector potential of the EM fields is <cit.> 𝐀̂_⊥(𝐫) = ∫ d^3r' ∫_0^∞dω/ω√(ϵ_I(𝐫',ω)/ϵ_0 π) ×𝐆_⊥(𝐫,𝐫',ω) ·𝐛̂(𝐫',ω) + H.c., where ϵ_I is the imaginary part of the medium's dielectric function, and 𝐆_⊥(𝐫,𝐫',ω) is the medium's Green's function, transverse with respect to the left-hand side of the dyad, and spatial argument 𝐫. The operators that correspond to the EM fields' polariton excitations satisfy: [b̂_i(𝐫,ω),b̂^†_j(𝐫',ω')] = δ_ijδ(𝐫-𝐫')δ(ω-ω'). In the Coulomb gauge <cit.>, the Hamiltonian for the TLS interacting with the EM fields is Ĥ = Ĥ_ F + e^i 𝐝·𝐀̂_⊥,0σ̂_xĤ_0 e^-i 𝐝·𝐀̂_⊥,0σ̂_x, with 𝐀̂_⊥,0 = 𝐀̂_⊥(𝐫_0), where 𝐫_0 is the dipole location, Ĥ_ F = ∫ dω∫ d^3r ω𝐛̂^†(𝐫,ω) ·𝐛̂(𝐫,ω), Ĥ_0 = ω_0 σ̂^+σ̂^-, and 𝐝 is the (assumed real) dipole moment. The Pauli operators σ̂^± correspond to raising/lowering operators of the TLS [Here we have neglected a possible quasi-static term giving the coupling to the longitudinal part of the electric field, assuming this term has already been incorporated into the energy levels comprising the TLS.] Assuming non-ultrastrong coupling, we expand Eq. (<ref>) to first order in 𝐝·𝐀̂_⊥,0. This gives Ĥ = Ĥ_ F + Ĥ_0 + ω_0 𝐝·𝐀̂_⊥,0σ̂_y. One can then derive the master equation of the TLS subsystem, using standard second-order Born-Markov techniques <cit.>. The result is (neglecting Lamb shifts and making a rotating-wave approximation) a Lindblad contribution to the master equation γ[σ̂^-ρ̂_ Sσ̂^+ - 1/2{σ̂^+σ̂^-,ρ̂_ S}], where ρ̂_ S is the system density operator, and γ(ω_0) = 2/ϵ_0𝐝·Im{𝐆_⊥(𝐫_0,𝐫_0,ω_0) }·𝐝, and here the Green's function is transverse with respect to both spatial arguments and sides of the dyad. We can simplify this expression for a situation where only one resonant mode is dominant at the dipole location and frequency by using a QNM expansion of the Green's function. The QNMs are solutions to the Helmholtz equation with open boundary conditions (i.e., the Silver-Müller radiation condition): ∇×∇×𝐟̃_μ(𝐫) - (ω̃_μ/c)^2 ϵ(𝐫,ω̃_μ)𝐟̃_μ(𝐫)=0, where ω̃_μ = ω_μ - iκ_μ/2 is a complex QNM frequency, with κ_μ corresponding to the cavity photon decay rate. The transverse part of the Green's function can be expanded in terms of the QNMs as 𝐆_⊥(𝐫_0,𝐫_0,ω) = ∑_μ A_μ(ω)𝐟̃_μ(𝐫_0)𝐟̃_μ(𝐫_0), provided that 𝐫_0 is sufficiently close to the resonator, and we use the expansion coefficient A_μ(ω) = ω/[2(ω̃_μ - ω)] <cit.>. Using a single QNM expansion with μ = c, we obtain γ(ω_0) = 4|g̃_ d|^2/κ_cω_0/ω_cκ_c^2/4/κ_c^2/4 + (ω_0-ω_c)^2χ_c(ϕ_0,ω_0), where we have defined g̃_ d = √(ω_c/2ϵ_0)𝐝·𝐟̃_c(𝐫_0), and χ_c(ϕ_0,ω_0) = cos(2ϕ_0) - 2Q_c sin(2ϕ_0)[ω_0/ω_c-1] is a factor accounting for the phase of the QNM at the position of the dipole, ϕ_0 = arg{𝐟̃_c(𝐫_0)}; note χ_c = 1 only when the QNM is real at the dipole location.
For a real-valued QNM at the dipole location, the SE rate takes the form of a Lorentzian function centered around the cavity frequency, modulated by a factor of ω_0/ω_c. This is in contrast to the phenomenological result often employed, where the decay rate is simply Lorentzian. However, even for a small non-zero QNM phase at the dipole location, the deviations from a pure Lorentzian can be more complex; for example, expanding to leading order in δ=(ω_0 - ω_c)/ω_c (excluding the Lorentzian factor), the rate becomes γ(ω_0)/γ(ω_c) = κ_c^2/4/κ_c^2/4 + (ω_0-ω_c)^2[1+δ(1-2Q_c tan(2ϕ_0))], and so the QNM phase is only negligible when 4Q_c|ϕ_0| ≪ 1, which is a very strict condition. This derivation can also be carried out in the dipole gauge <cit.>. In Fig. <ref>(b,d), we plot the decay rate γ(ω_0) for two example physical modes corresponding to a metallic dimer (a) and photonic crystal (PC) (c) cavity, normalized by the Lorentzian factor L(ω_0)=γ(ω_c)κ_c^2/4[κ_c^2/4 + (ω_0-ω_c)^2]^-1 in order to investigate the accuracy of the QNM model of the spectral density, compared with a full finite-element Green's function simulation with no modal approximations. These calculations are for full 3-D geometries and also include material dispersion and absorption. Near the QNM resonance, the trend agrees well with Eq. (<ref>) evaluated with a single QNM. Further away, deviations arise which can be associated with multi-mode effects (which are easily treated by adding more modes to our formalism; see SM <cit.>). In contrast, neglecting the QNM phase [which leads to the linear ∼ω_0 scaling modulating L(ω_0)] does not accurately reflect the trend. Quantized Lossy Cavity Mode Theory. We now model our system as a single quantized cavity mode (coupled to a broadband photon reservoir) weakly coupled to a TLS, which is described by the Jaynes-Cummings model. It is sufficient to consider the system as initially in the excited state of the TLS, with zero cavity photons present. Under these assumptions, and working in an appropriate rotating frame, this system is described by the Hamiltonian Ĥ^ g(t) = Ĥ^ g_ S + Ĥ_ int(t), where Ĥ_ S^ g = Δσ̂^+ σ̂^- + g_ g(âσ̂^+ + â^†σ̂^-), and Ĥ_ int(t) = ∫ dωΛ(ω)[â^†b̂_ωe^-i(ω-ω_c) t + âb̂^†_ωe^i(ω-ω_c) t]. Here, â, â^† are cavity mode operators, Δ = ω_0 - ω_c, and g_ g is a gauge-dependent coupling rate. The operators b̂_ω and b̂_ω^† are continuum reservoir operators that satisfy [b̂_ω,b̂_ω'^†]=δ(ω-ω'), and we let Λ(ω_c)= √(κ_c/(2π)) to recover an empty cavity photon decay rate κ_c. Considering now the gauge-dependence of the model, if using the dipole gauge (g=d), then g_ g→ |g̃_ d|, and in contrast to the continuum model, the phase of 𝐟̃_c(𝐫_0) plays no role in the quantized cavity theory presented here, and thus we choose g_ d≡ |g̃_ d| to be real in this model without loss of generality (i.e., by absorbing the phase of the QNM at the dipole location into the TLS operators by a unitary rotation). In general, this can be derived from a discrete mode projection from the underlying continuum model described previously, and can also be directly related to quantized QNMs <cit.>. In the Coulomb gauge, g_ c = ω_0/ω_cg_ d. Since only g_ d is independent of ω_0, to freely vary the TLS frequency ω_0 independently of the cavity coupling parameters, we express all results in terms of g_ d.
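For transparency, the leading-order expansion quoted above follows from one line of algebra; the sketch below uses our notation, with L(ω_0) the Lorentzian factor defined above, δ = (ω_0-ω_c)/ω_c, and ω_0/ω_c = 1+δ.

```latex
% Consistency check: expand gamma(omega_0)/gamma(omega_c) to first order in
% delta, using chi_c = cos(2 phi_0) - 2 Q_c sin(2 phi_0) delta.
\begin{align}
\frac{\gamma(\omega_0)}{\gamma(\omega_c)}
  &= L(\omega_0)\,(1+\delta)\,
     \frac{\cos(2\phi_0)-2Q_c\sin(2\phi_0)\,\delta}{\cos(2\phi_0)} \nonumber\\
  &= L(\omega_0)\,(1+\delta)\bigl(1-2Q_c\tan(2\phi_0)\,\delta\bigr) \nonumber\\
  &\approx L(\omega_0)\bigl[1+\delta\bigl(1-2Q_c\tan(2\phi_0)\bigr)\bigr]
     + \mathcal{O}(\delta^{2}).
\end{align}
```

This makes explicit why the phase only drops out when 4Q_c|ϕ_0| ≪ 1, since the tan(2ϕ_0) term is amplified by the quality factor.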
When considering models of discrete cavity modes coupled to bosonic photon reservoirs, the Coulomb gauge must be used, as it is the unique gauge in which the reservoir subsystem can be approximated within the Born approximation to remain in an unentangled purely bosonic subspace, necessary to derive a master equation <cit.>. We now diagonalize the system Hamiltonian. From the initial condition, the system is closed under the eigenstates |G⟩ = |0,g⟩ (see SM <cit.> for full model), and |±⟩ = 1/√(2)[±√(1 ±Δ/η)|0,e⟩ + √(1 ∓Δ/η)|1,g⟩] with the corresponding energies, ω_± = Δ/2±1/2η, where η = √(Δ^2+4g_ g^2). We decompose the Hamiltonian in terms of these eigenstates as Ĥ_ S^ g = ∑_α={+,-}ω_α|α⟩⟨α|, and Ĥ_ int(t) = ∑_α ={+,-}c_α|G⟩⟨α|∫ dωΛ(ω) b̂_ω^† e^i(ω-ω_c)t + H.c., where c_α = √((1-αΔ/η)/2). Following a second-order Born-Markov approximation, the master equation for the system is ρ̂̇_ S = -i[Ĥ^ g_ S,ρ̂_ S] + ∫_0^∞ dτTr_ R([e^-i Ĥ_ S^ gτĤ_ int(t-τ) e^iĤ_ S^ gτρ̂_ Sρ̂_ R,Ĥ_ int(t)]) + H.c., which evaluates to ρ̂̇_ S = -i[Ĥ^ g_ S,ρ̂_ S]+ π∑_αc_αΛ^2(ω_c+ω_α)( [|G⟩⟨α|ρ̂_ S, â^†] + H.c.). To determine the TLS decay rate γ, we can solve for the eigenvalues of Eq. (<ref>), viewed as a matrix equation for the elements of ρ̂_ S. For the weak-coupling decay rate, we expect γ∼ g_ d^2, from the usual Purcell effect. Thus, we consider the eigenvalue γ_ g∼ g_ g^2, and use perturbation theory <cit.> to find, to second order in g_ g: γ_ g(ω_0) = g_ g^22πΛ^2(ω_0)/κ_c^2/4 + Δ^2. Now we employ the ansatz Λ(ω) = √(κ_c/2π)(ω/ω_c)^n, which gives γ_ g = (4g_ g^2/κ_c)(ω_0/ω_c)^2n[κ_c^2/4/κ_c^2/4 + Δ^2]. If working in the dipole gauge, then we must choose n = 1/2 to satisfy Eq. (<ref>), for the case of a real-valued QNM. In contrast, in the Coulomb gauge, g_ c = ω_0/ω_cg_ d, and so we must choose n=-1/2. As previously mentioned, the Coulomb gauge is the correct choice if gauge invariance is to be preserved in a more ab-initio model of system-reservoir coupling. Thus, we see that Λ_ R(ω) = √((κ_c/2π)(ω_c/ω)) is the system-bath coupling function which, in cavity-QED, produces the correct results for an isolated mode with a strictly real QNM at the dipole location; surprisingly, this corresponds to a sub-Ohmic reservoir (in contrast, say, to the Ohmic result encountered in circuit-QED <cit.>, often also assumed in optics). Interestingly, this result can only be obtained if the secular approximation is not made, meaning the correct result cannot be obtained from a master equation in the usual Lindblad form. The analytic result derived in Eq. (<ref>) for the Coulomb gauge [such that it is equivalent to Eq. (<ref>)] agrees excellently with a full numerical calculation from the master equation in Eq. (<ref>), which we show in the Supplementary Material (SM) <cit.>. However, the case where the QNM is not real at the TLS location is much more realistic for practical mode geometries. In Fig. <ref> (a,c), we show the QNM phase ϕ(𝐫) as a function of position for the two considered example physical modes. The product 2Q_c tan(2ϕ_0) remains substantial at all locations near the modal electric field maximum (even where |ϕ_0| ≪ 1). It is difficult if not impossible to obtain the general phase-dependent result in Eq. (<ref>) using simple system-reservoir constructions of a single lossy cavity with a general spectral bath coupling Λ(ω) which is independent of the contents of the cavity.
This points to important and potentially fundamental limitations to the ability of these simple universal models of the spectral density to accurately predict observables in the USC regime. We note that other approaches, such as those utilizing rigorous quantized QNM expansions, may bypass this limitation. Additionally, one can make the heuristic replacement Λ(ω) →Λ_ R(ω)√(χ_c(ϕ_0,ω)), which recovers the correct result, but is clearly not a universal form of the spectral density as it implies the cavity-bath interaction should be dependent on the position of a dipole placed inside the cavity. This function is shown in Fig. <ref>(c) for various values of ϕ_0. Although a model-specific and ad-hoc fix, this replacement of the system-bath coupling function does allow for quantitatively accurate spectral modelling in the strong-coupling regime (and with additional assumptions, the USC regime), which has previously been inaccessible using few-mode master equation methods. In Fig. <ref>, we show simulations highlighting the significance of this replacement, considering intrinsically quantum effects by pumping the system beyond the weak excitation regime (details in the SM <cit.>). Notably, even outside of the USC regime (where such considerations become even more important), the impact of both the QNM phase-dependent correction factor and the correct coupling function prefactor ∼ω^-1/2 can be highly substantial, which has significant implications for current commonly used models, which typically assume a flat or Ohmic reservoir for the cavity. Conclusions. By comparison with a rigorous gauge-invariant macroscopic QED formalism in arbitrary media, we have shown the correct form of the spectral density for a lossy quantized cavity mode interacting with a TLS. The spectral density takes a different form than those typically assumed in previous works, is not universal (it depends on the position of the TLS), and can significantly impact predictions even in the weak and strong coupling regimes. This form is also gauge-dependent. In a future work, we will use an ab-initio construction of the system-reservoir Hamiltonian by means of a quantized QNM approach to study dissipative cavity QED in the USC regime, beyond the phenomenological approaches which are known to fail to predict quantities such as emission spectra accurately <cit.>. This work was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC), the Canadian Foundation for Innovation (CFI), Queen's University, Canada, NSF awards PHY-2011363 and CCF-1918549, and CMC Microsystems for the provision of COMSOL Multiphysics. We thank Sebastian Franke and Hideo Mabuchi for useful discussions and comments.
http://arxiv.org/abs/2407.02834v1
20240703062107
Aspect-Based Sentiment Analysis Techniques: A Comparative Study
[ "Dineth Jayakody", "Koshila Isuranda", "A V A Malkith", "Nisansa de Silva", "Sachintha Rajith Ponnamperuma", "G G N Sandamali", "K L K Sudheera" ]
cs.CL
[ "cs.CL" ]
Aspect-Based Sentiment Analysis Techniques: A Comparative Study Dineth Jayakody^1,2, Koshila Isuranda^1,2, A V A Malkith^1,2, Nisansa de Silva^3, Sachintha Rajith Ponnamperuma^4, G G N Sandamali^1,5, K L K Sudheera^1,5 ^1Department of Electrical and Information Engineering, University of Ruhuna ^3Department of Computer Science & Engineering, University of Moratuwa ^4Emojot Inc. July 8, 2024 =============================================================================================================================================================================================================================================================================================================================== § ABSTRACT Since the dawn of the digitalisation era, customer feedback and online reviews are unequivocally major sources of insights for businesses. Consequently, conducting comparative analyses of such sources has become the de facto modus operandi of any business that wishes to give itself a competitive edge over its peers and improve customer loyalty. Sentiment analysis is one such method, instrumental in gauging public interest, exposing market trends, and analysing competitors. While traditional sentiment analysis focuses on overall sentiment, as needs have advanced with time, it has become important to explore public opinions and sentiments on the various specific subjects, products, and services mentioned in reviews at a finer level of granularity. To this end, Aspect-based Sentiment Analysis (ABSA), supported by advances in Artificial Intelligence (AI) techniques which have contributed to a paradigm shift from simple word-level analysis to tone- and context-aware analyses, focuses on identifying specific aspects within the text and determining the sentiment associated with each aspect. In this study, we compare several deep-NN methods for ABSA on two benchmark datasets (Restaurant-14 and Laptop-14) and find that FAST LSA obtains the best overall results, with 87.6% and 82.6% accuracy, but does not surpass LSA+DeBERTa, which reports 90.33% and 86.21% accuracy, respectively. Aspect-based Sentiment Analysis, Comparative Analysis, BERT-based Deep Neural Methods, Benchmark Study § INTRODUCTION Social media and other online platforms have enjoyed exponential growth, which in turn has created an unprecedented abundance of user-generated content. This has added a de facto expectation on businesses to understand the user sentiments expressed in these texts if they intend to make informed decisions and enhance customer satisfaction. Aspect-based sentiment analysis (ABSA) has emerged as a valuable technique in Natural Language Processing (NLP) to analyze opinions at a finer level of granularity by identifying sentiment towards specific aspects or features within a given domain <cit.>. In this paper, we focus on domain-specific ABSA <cit.>, particularly focusing on its application in analyzing customer reviews, which provide valuable feedback for businesses to improve their products or services. In our initial analysis, we present the accuracy levels of various ABSA models using the benchmark 2014 SemEval restaurant and laptop datasets <cit.>. The results are tabulated to provide a comparative view of their performance in sentiment analysis tasks. Following this assessment, we proceed to explore avenues for improving model performance through fine-tuning and testing on the same datasets.
Through the fine-tuning process, our objective is to adapt these pre-trained models to the specific nuances of the domain under consideration, thereby enhancing their effectiveness in capturing sentiment expressions related to various aspects contained within customer reviews. Specifically, we focus on the model, which utilizes parameter-efficient techniques enabled by the architecture, as well as the model, integrated into the framework. Additionally, we investigate transformer pre-trained models within the context of the SetFit framework. Leveraging this framework, we experiment with hybrid models by connecting different transformer models and testing them on the aforementioned dataset. Further elaboration on these models and their fine-tuning methodologies will be provided in subsequent sections of this paper, where we look into their architectures, techniques, and experimental results in greater detail. § LITERATURE REVIEW <cit.> undertook a comprehensive study focusing on Aspect-Target Sentiment Classification (ATSC) within ABSA, presenting a novel two-step approach. Their methodology involved domain-specific fine-tuning of  <cit.> language models followed by task-specific fine-tuning, resulting in an accuracy of approximately 84.06% with the model, which surpassed the performance of baseline models such as vanilla and  <cit.>. The success of this model underscores the significance of domain-specific considerations for improving model robustness and performance in real-world applications. Subsequent studies by <cit.> and <cit.> explored alternative approaches utilizing BERT-based models for ABSA. <cit.> introduced adversarial training to enhance ABSA performance, leveraging artificial data generation through adversarial processes. Their BERT Adversarial Training architecture surpassed both general-purpose and domain-specific post-trained () models in ABSA tasks, without the need for extensive manual labelling. Similarly, <cit.> introduced two novel modules, Parallel Aggregation and Hierarchical Aggregation, to augment ABSA using . These modules aimed to enhance Aspect Extraction (AE) and Aspect Sentiment Classification (ASC) tasks, yielding superior performance compared to post-trained vanilla . Introducing a multi-task learning model for ABSA, <cit.> achieved an accuracy of 86.60% with their model. Meanwhile, <cit.> explored the potential of pre-trained models (), particularly the  <cit.> model, in ABSA tasks, but could not surpass the performance of the model. The superior performance of the highlights the efficacy of multi-task learning approaches in ABSA, showcasing the importance of considering aspect term extraction (ATE) alongside polarity classification for comprehensive sentiment analysis.  <cit.> (Decoding-enhanced with Disentangled Attention)-based models were explored by <cit.> and <cit.>, introducing and , respectively. <cit.> delved into disentangled learning to enhance -based representations in ABSA. They separated syntactic and semantic features, showcasing the improvement in ABSA task performance through the incorporation of disentangled attention. This enabled the isolation of position and content vectors, potentially enhancing model performance by focusing on syntactic and semantic aspects separately. On the other hand, <cit.> introduced a novel perspective in aspect-based sentiment classification (ABSC) by emphasizing the significance of aspect sentiment coherency.
Subsequently, <cit.> and <cit.> also explored the utilization of -based models, introducing (Knowledge-aware Gated Recurrent Memory Network with Dual Syntax Graph Modeling) and , respectively. However, despite these advancements, neither could surpass the accuracy achieved by the  <cit.> model. In <ref> we summarize the accuracies that the relevant studies in the literature have reported for the benchmark SemEval <cit.> Restaurant (Res-14) and Laptop (Lap-14) datasets. The accuracies range from 82.69% to 88.27%, showcasing the varying degrees of success in sentiment analysis across different approaches. § METHODOLOGY Here we followed three innovative approaches in NLP: 1) fine-tuning LLaMA 2 with Parameter-Efficient Fine-Tuning (PEFT) techniques such as QLoRA; 2) SetFit for efficient few-shot fine-tuning of Sentence Transformers; and 3) FAST LSA <cit.> V2 on the PyABSA framework. §.§ LLaMA with QLoRA Given the current state-of-the-art interest in Large Language Models (LLMs), we opted to include an LLM-based analysis in our comparative study. LLaMA 2 is a collection of second-generation open-source LLMs from Meta that comes with a commercial license. <cit.> presented it as a significant leap forward in natural language understanding and generation, owing to its advanced architecture, large training data, and refined training strategies. The architecture is based on the transformer model, a neural network architecture that has proven highly effective in a wide range of NLP tasks, and employs a multi-layered transformer architecture with self-attention mechanisms. It is designed to handle a wide range of natural language processing tasks, with models ranging in scale from 7 billion to 70 billion parameters. Fine-tuning in machine learning is the process of adjusting the weights and parameters of a pre-trained model on new data to improve its performance on a specific task. There are three main fine-tuning methods in this context: * Instruction Fine-Tuning (IFT): According to <cit.>, IFT involves training the model using prompt-completion pairs, showing desired responses to queries. * Full Fine-Tuning: Full fine-tuning involves updating all of the weights in a pre-trained model during training on a new dataset, allowing the model to adapt to a specific task. * Parameter-Efficient Fine-Tuning (PEFT): Selectively updates a small set of parameters, making memory requirements more manageable. There are various ways of achieving parameter-efficient fine-tuning; Low-Rank Adaptation (LoRA) <cit.> and Quantized Low-Rank Adaptation (QLoRA) <cit.> are the most widely used and effective. Traditional fine-tuning of pre-trained language models (PLMs) requires updating all of the model's parameters, which is computationally expensive and requires massive amounts of data, making it challenging to attempt on consumer hardware with inadequate VRAM and compute. However, Parameter-Efficient Fine-Tuning (PEFT) works by only updating a small subset of the model's most influential parameters, making it much more efficient. Four-bit quantization via QLoRA allows such efficient fine-tuning of huge LLM models on consumer hardware while retaining high performance. QLoRA quantizes a pre-trained language model to four bits and freezes the parameters. A small number of trainable Low-Rank Adapter layers are then added to the model. In our case, we created a 4-bit quantization with an NF4-type configuration using https://github.com/TimDettmers/bitsandbytes.
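To make this concrete, a minimal sketch of such an NF4 configuration using the bitsandbytes integration in Hugging Face Transformers follows; the compute dtype, double-quantisation flag, and checkpoint name are illustrative assumptions rather than the exact settings used here.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit NF4 quantization config for the frozen base model.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # store base weights in 4 bits
    bnb_4bit_quant_type="nf4",              # NormalFloat4 data type
    bnb_4bit_use_double_quant=True,         # also quantize quantization constants
    bnb_4bit_compute_dtype=torch.bfloat16,  # dtype used for matmuls
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",             # assumed checkpoint name
    quantization_config=bnb_config,
    device_map="auto",
)
```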
According to <cit.>, supervised fine-tuning (SFT) is a key step in Reinforcement Learning from Human Feedback (RLHF). The TRL library provides tools to train language models using reinforcement learning, starting with supervised fine-tuning, then reward modelling, and finally Proximal Policy Optimization (PPO). During this process, we provided the SFT trainer with the model, dataset, LoRA configuration, tokenizer, and training parameters. To test the fine-tuned model, we used the Transformers text-generation pipeline with the prompt included. The model was fine-tuned using techniques such as QLoRA, PEFT, and SFT to overcome memory and computational limitations. By utilizing Hugging Face libraries such as https://huggingface.co/transformers/, https://huggingface.co/accelerate/, https://huggingface.co/peft/, https://huggingface.co/trl/, and bitsandbytes, we were able to successfully fine-tune the 7B-parameter model on a consumer GPU. §.§ SetFit Few-shot learning has become increasingly essential in addressing label-scarce scenarios, where data annotation is often time-consuming and expensive. These methods aim to adapt pre-trained language models (PLMs) to specific downstream tasks using only a limited number of labelled training examples. One of the primary obstacles is the reliance on large-scale language models, which typically contain billions of parameters, demanding substantial computational resources and specialized infrastructure. Moreover, these methods frequently require manual crafting of prompts, introducing variability and complexity in the training process, thus restricting accessibility for researchers and practitioners. In response to this, <cit.> proposed SetFit (Sentence Transformer Fine-tuning), which presents an innovative framework for efficient and prompt-free few-shot fine-tuning of Sentence Transformers (ST). Diverging from existing methods, SetFit does not necessitate manually crafted prompts and achieves high accuracy with significantly fewer parameters. The approach consists of two main steps. In the first step, the ST is fine-tuned using a contrastive loss function, encouraging the model to learn discriminative representations of similar and dissimilar text pairs. In the second step, a simple classification head is trained on top of the fine-tuned ST to perform downstream tasks such as text classification or similarity ranking. By decoupling the fine-tuning and classification steps, SetFit achieves high accuracy with orders of magnitude fewer parameters than existing methods, making it computationally efficient and scalable. In our study, we utilized several available sentence transformers through the SetFit framework to obtain accuracies for aspect extraction and sentiment polarity identification. §.§ PyABSA <cit.> addressed the lack of a unified framework for ABSA by developing PyABSA, an open-source ABSA framework. PyABSA integrates ATE and text classification functionalities alongside ASC within a modular architecture. This design facilitates adaptation to various ABSA subtasks and supports multilingual modelling and automated dataset annotation, thereby streamlining ABSA applications. Moreover, PyABSA offers multi-task-based ATESC models, which are pipeline models capable of simultaneously performing the ATE and ASC sub-tasks. To tackle the data shortage problem, PyABSA provides automated dataset annotation interfaces and manual dataset annotation tools, encouraging community participation in annotating and contributing custom datasets to the repository.
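As an illustration of how such a framework can be used, a hedged sketch with a PyABSA v2-style ATEPC interface follows; the exact class names, checkpoint aliases, and arguments vary across PyABSA versions, so treat these as assumptions rather than the precise calls used in our experiments.

```python
# Sketch only: assumes the PyABSA v2 ATEPC interface; consult the installed
# version's documentation for exact names and available checkpoints.
from pyabsa import AspectTermExtraction as ATEPC

extractor = ATEPC.AspectExtractor(
    "multilingual",     # assumed checkpoint alias
    auto_device=True,   # use a GPU if one is available
)

result = extractor.predict(
    "The fajitas were great but the service was painfully slow.",
    print_result=False,
)
print(result)  # extracted aspects with their predicted sentiment polarities
```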
In our study, we utilized the framework on the SemEval 2014 restaurant and laptop benchmark datasets to evaluate accuracy, consistent with the practices followed in the literature, as shown in <ref>. Specifically, we employed the model with , which is included in the checkpoint, to assess aspect extraction performance. § RESULTS §.§ LLaMA with QLoRA The first section of <ref> shows the performance of Llama-2-7b <cit.> with QLoRA <cit.>. These performances were obtained using an L4 GPU. It can be noted that even though the sentiment polarity results are comparable between the two datasets, the aspect extraction on Res-14 is dramatically weaker than that on Lap-14. §.§ SetFit In the second section of <ref>, we provide a comprehensive overview of the accuracies attained by various sentence models using the framework <cit.>. Due to the modular nature of the framework, we could fine-tune and test combinations of models. If a model is reported in a single row, it means we have used the said sentence transformer model for both aspect extraction and sentiment polarity identification (e.g.,  <cit.>). The cell blocks where a model is followed by other models with + denote combinations. For example, the first row of  <cit.> contains results of that model being used both for aspect extraction and sentiment polarity identification. The subsequent line with  <cit.> indicates that was used for the aspect extraction component and was used for the sentiment polarity identification component. At this point, a question may be raised as to why the aspect extraction would have two different values for accuracy (79.80 vs. 85.40) in the two configurations if in both cases the same model (i.e., in this example) was used for that task. The reason is that the fine-tuning is conducted end-to-end in a holistic manner, and thus the choice of the model used for sentiment polarity identification ends up influencing the ultimate accuracy obtained by the aspect extraction component. It may enhance the result, as in the case of and . It may also hinder it, as in the case of . Overall, it can be noted that  <cit.> consistently emerges as a standout performer, either by itself or as the aspect extraction component of a pair. It can be argued that this robust performance is owed to its capability to capture the nuanced, complex information crucial for understanding both aspect-based sentiment analysis and sentiment polarity classification tasks. Specifically on the sentiment polarity classification task, it can be noted that and  <cit.> elevate performance in multiple configurations. Further, the results also reveal domain-specific variations in model performance, as evidenced by the disparity between the Res-14 and Lap-14 results of  <cit.>. To give a better overview of how various models perform, we include <ref> and <ref>, which visualize the results discussed in <ref>. In <ref>, we present a detailed analysis of aspect extraction accuracy for various models, with emerging as the top performer across both datasets. It is also evident how and closely follow, with accuracies verging on 90%. The outlying low accuracies of BGE <cit.> and Sentence-T5 <cit.> are also evident. Similarly, in <ref>, we look into the analysis of sentiment polarity identification percentages for the same models and datasets. Here, shows the highest accuracy for Res-14, while shows the highest accuracy for Lap-14. These results reaffirm the effectiveness of across both tasks and datasets.
Conversely, models such as CLIP-ViT-B-32-multilingual-v1 <cit.> demonstrate relatively lower performances. §.§ PyABSA The third section of <ref> reports the results obtained from the implementation of the  <cit.> model on  <cit.>. It is also fine-tuned end-to-end, similarly to  <cit.>. However, unlike , it does not report Aspect Extraction and Sentiment Polarity accuracies separately; it only gives an overall value. This is the reason for it having only one value per dataset in <ref>. Alternatively, it is not wrong to take the reported accuracies as the values for the Sentiment Polarity task, as it is the task at the tail end of the pipeline. Viewed from that perspective, it can be claimed that FAST LSA on PyABSA has the best results for Sentiment Polarity among all the model combinations and configurations tested in this study. Hence we have opted to highlight those results in bold, as we did for the best results in the second (SetFit) section. § CONCLUSION This study evaluates three NLP approaches for ABSA: 1) fine-tuning LLaMA 2 with the Parameter-Efficient Fine-Tuning (PEFT) technique QLoRA; 2) SetFit for efficient few-shot fine-tuning of Sentence Transformers; and 3) FAST LSA <cit.> V2 on the PyABSA framework. These approaches aimed to overcome memory and computational limitations while enhancing efficiency and scalability in NLP tasks. We observe that LLaMA 2, a collection of second-generation open-source LLMs, manages only middling performance after fine-tuning with 4-bit quantization via Parameter-Efficient Fine-Tuning (PEFT). Among the modular options in SetFit, fine-tuned models demonstrate standout performances. Finally, FAST LSA on PyABSA delivers the overall best performance, with 87.6% and 82.6% accuracy for the Res-14 and Lap-14 datasets, respectively. Nevertheless, none of the tested models are able to surpass the reported accuracy of LSA+DeBERTa-V3-Large <cit.>, which claims 90.33% and 86.21%, respectively. In summary, this study explores the importance of innovative methodologies such as fine-tuning techniques, prompt-free few-shot learning, and modular frameworks in advancing NLP tasks.
http://arxiv.org/abs/2407.03249v1
20240703162912
Quantum coarsening and collective dynamics on a programmable quantum simulator
[ "Tom Manovitz", "Sophie H. Li", "Sepehr Ebadi", "Rhine Samajdar", "Alexandra A. Geim", "Simon J. Evered", "Dolev Bluvstein", "Hengyun Zhou", "Nazli Uğur Köylüoğlu", "Johannes Feldmeier", "Pavel E. Dolgirev", "Nishad Maskara", "Marcin Kalinowski", "Subir Sachdev", "David A. Huse", "Markus Greiner", "Vladan Vuletić", "Mikhail D. Lukin" ]
quant-ph
[ "quant-ph", "cond-mat.quant-gas", "physics.atom-ph" ]
^1Department of Physics, Harvard University, Cambridge, MA 02138, USA ^2Department of Physics, Princeton University, Princeton, NJ 08544, USA ^3Princeton Center for Theoretical Science, Princeton University, Princeton, NJ 08544, USA ^4QuEra Computing Inc., Boston, MA 02135, USA ^5Harvard Quantum Initiative, Harvard University, Cambridge, MA 02138, USA ^6Department of Physics and Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA ^Current address: Department of Physics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA ^*These authors contributed equally to this work ^†Corresponding author; E-mail: lukin@physics.harvard.edu § ABSTRACT Understanding the collective quantum dynamics of nonequilibrium many-body systems is an outstanding challenge in quantum science. In particular, dynamics driven by quantum fluctuations are important for the formation of exotic quantum phases of matter <cit.>, fundamental high-energy processes <cit.>, quantum metrology <cit.>, and quantum algorithms <cit.>. Here, we use a programmable quantum simulator based on Rydberg atom arrays to experimentally study collective dynamics across a (2+1)D Ising quantum phase transition. After crossing the quantum critical point, we observe a gradual growth of correlations through coarsening of antiferromagnetically ordered domains <cit.>. By deterministically preparing and following the evolution of ordered domains, we show that the coarsening is driven by the curvature of domain boundaries, and find that the dynamics accelerate with proximity to the quantum critical point. We quantitatively explore these phenomena and further observe long-lived oscillations of the order parameter, corresponding to an amplitude (Higgs) mode <cit.>. These observations offer a unique viewpoint into emergent collective dynamics in strongly correlated quantum systems and nonequilibrium quantum processes. Quantum coarsening and collective dynamics on a programmable quantum simulator Tom Manovitz^1,*, Sophie H. Li^1,*, Sepehr Ebadi^1,*,, Rhine Samajdar^2,3, Alexandra A. Geim^1, Simon J. Evered^1, Dolev Bluvstein^1, Hengyun Zhou^1,4, Nazlı Uğur Köylüoğlu^1,5, Johannes Feldmeier^1, Pavel E. Dolgirev^1, Nishad Maskara^1, Marcin Kalinowski^1, Subir Sachdev^1, David A. Huse^2, Markus Greiner^1, Vladan Vuletić^6, and Mikhail D. Lukin^1,† ======================================================================================================================================================================================================================================================================================================================================================================= Quantum phase transitions (QPTs) are transformations between states of matter that are driven by quantum fluctuations <cit.>. Analogously to thermal fluctuations in classical phase transitions, quantum fluctuations play a dominant role in the emergence of order in quantum systems. While classical dynamics near thermal critical points have been studied extensively over the past several decades, only recently have quantum dynamics across QPTs become experimentally accessible, due to the advent of quantum simulators <cit.> and ultrafast spectroscopic methods in solid-state systems <cit.>. Their universal properties have been studied in systems of varied dimensionality using the quantum Kibble-Zurek mechanism (KZM) <cit.>. 
The KZM stipulates that a quantum system's dynamics and correlations “freeze” in the vicinity of a QPT when the system can no longer respond adiabatically to dynamical changes. However, in many instances, other mechanisms of correlation growth beyond KZM can dominate ordering <cit.>. In particular, when an unordered system passes through a continuous phase transition into a symmetry-broken phase, a progressive growth of long-range order, known as coarsening, is expected. These ordering dynamics are predicted to exhibit universality, manifested as self-similarity in the growth of correlations <cit.>. While such phenomena are well understood in the context of thermal phase transitions <cit.>, quantum effects in coarsening dynamics have only recently emerged as a subject of intense theoretical <cit.> and experimental <cit.> investigation. We use a programmable quantum simulator based on Rydberg atom arrays to investigate the collective out-of-equilibrium dynamics associated with the growth of order following an Ising QPT. We observe key features of beyond-mean-field quantum coarsening processes arising from quantum fluctuations: the curvature-driven dynamics of domain walls and their acceleration when approaching the critical point. We further explore self-similarity and universality in the ordering process. Additionally, we observe long-lived coherent oscillations of the correlation length and the order parameter on both sides of the QPT, which we identify as an amplitude (Higgs) mode. We study these fundamental features of QPTs <cit.>, demonstrating consistency with theoretical predictions <cit.>, and extending these studies in a regime that is difficult to simulate classically. Our experiments are performed using a two-dimensional programmable atom array, previously described in Ref. <cit.>. The measurements are conducted on a 16×16 square lattice of ^87Rb atoms trapped in an array of optical tweezers generated by a spatial light modulator (SLM). Atoms are initialized in the electronic ground state |g⟩ and are coupled to the high-lying electronic Rydberg state |r⟩ through a two-photon excitation with time-dependent Rabi frequency Ω (t) and global detuning Δ (t). As a key upgrade, we introduce a second SLM for generating locally controlled light shifts, allowing for programmable site-dependent detunings δ_i (t) = α_i δ (t) (see Methods). The atoms in the |r⟩ state interact strongly via a van der Waals potential, giving rise to the following Hamiltonian governing the system's dynamics: H/ħ = Ω(t)/2∑_i X_i - ∑_i n_i (Δ(t) +δ_i(t)) + ∑_i<j V_ijn_i n_j. Here, n_i ≡ |r_i⟩⟨r_i| denotes the Rydberg occupation at site i, X_i ≡|g_i⟩⟨r_i|+|r_i⟩⟨g_i| describes the laser-induced coupling between the states at that site, and V_ij≡ V_0/|r_i - r_j |^6 is the van der Waals interaction. The Rydberg interactions prevent simultaneous excitation of two atoms if they lie within a blockade radius (R_b ≡ (V_0/Ω)^1/6) of each other. We maintain a lattice spacing a such that R_b/a≈1.1, and only nearest-neighboring sites fall within the blockade radius. For large positive values of Δ/Ω, this configuration leads to a ℤ_2-symmetry-broken checkerboard phase with two antiferromagnetically ordered ground states <cit.>, labeled |AF_1⟩ and |AF_2⟩. The disordered and ordered phases are separated by a QPT belonging to the (2+1)D Ising universality class, which occurs at Δ/Ω≈ 1.1 <cit.>. 
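As an illustration of the model, the following minimal sketch (not the authors' code) constructs the Hamiltonian of the equation above as a dense matrix for a small patch of the array; the patch size and the values of Ω, Δ, and R_b/a are illustrative, and the local detunings δ_i are set to zero.

import numpy as np
from itertools import product

# Minimal sketch: dense Rydberg Hamiltonian for a small L x L patch.
# Units: hbar = 1, energies in units of Omega; distances in lattice units,
# so the nearest-neighbour interaction is V_nn = (R_b/a)^6 * Omega.
L, Omega, Delta = 2, 1.0, 1.1            # illustrative parameters
V0 = 1.1 ** 6                            # assumes R_b/a = 1.1
coords = list(product(range(L), repeat=2))
N = len(coords)

def site_op(op1, i):
    """Embed a single-site operator on site i into the N-site space."""
    out = np.array([[1.0]])
    for j in range(N):
        out = np.kron(out, op1 if j == i else np.eye(2))
    return out

n1 = np.diag([0.0, 1.0])                 # |r><r|
X1 = np.array([[0.0, 1.0], [1.0, 0.0]])  # |g><r| + |r><g|

H = sum(0.5 * Omega * site_op(X1, i) - Delta * site_op(n1, i)
        for i in range(N))
for i in range(N):
    for j in range(i + 1, N):
        r = np.hypot(coords[i][0] - coords[j][0], coords[i][1] - coords[j][1])
        H = H + (V0 / r ** 6) * site_op(n1, i) @ site_op(n1, j)

print(f"ground-state energy: {np.linalg.eigvalsh(H)[0]:.3f} Omega")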
The order parameter diagnosing this transition is the staggered magnetization: m_s = ∑_x,y Z̃_x,y = ∑_x,y (-1)^x+y Z_x,y, where (x,y) denotes the two-dimensional coordinates of an atom, and Z_x,y ≡ |r_x,y⟩⟨r_x,y| - |g_x,y⟩⟨g_x,y|. § ORDERING DYNAMICS We first study the nonequilibrium dynamics of the atom array after crossing the QPT into the ordered phase. Our protocol is illustrated in Fig. <ref>a,b. The state-preparation stage is similar to that used in Refs. <cit.>. First, all atoms are initialized in |g⟩, which is the ground state for Δ/Ω ≪ 0. While keeping Δ negative, Ω is ramped up to its final value, remaining constant until the end of the protocol (δ_i is held at 0 for this measurement). Then, Δ(t) is swept from negative to positive values, through the quantum critical point and into the ordered phase. We use a linear sweep profile for all measurements. The sweep is halted at various endpoints within the ordered phase. Subsequently, Δ and Ω are held constant for a given hold time, and finally Ω is ramped down, followed by a projective readout of the atomic states. During the hold time, we probe the dynamics of the correlation length, as shown in Fig. <ref>c. To quantify the growth of correlations, we evaluate the two-point correlation function G(r,t) and the radially averaged structure factor S(k,t), from which we extract a correlation length ξ (see Methods). In contrast to the Kibble-Zurek prediction, the correlation length grows significantly with hold time (Fig. <ref>c), indicating the gradual establishment of long-range order. Up to a hold time of ∼ 0.4–0.5 μs, we observe that the dynamics are consistent with a linear growth of ξ^2 with time (Fig. <ref>d), as expected for coarsening <cit.>. Importantly, the rate of growth increases with proximity to the QPT. Motivated by the theoretical expectation of universality of these dynamics in the thermodynamic limit <cit.>, we also study the structure factor as a function of a scaling variable k ξ(t). We find that the data collapse onto a single functional form S(k,t) ∼ b(ξ(t)) ξ^2(t) f(k ξ(t)) for some scaling function f and an amplitude b(ξ), which is suggestive of self-similarity (Fig. <ref>e inset and Methods). In addition to the ordering, we also observe long-lived oscillations of the correlation length (Fig. <ref>c). In what follows, the universal aspects and origin of these oscillations are explored in detail. In order to gain further insight into the system's dynamics, we use single-site-resolved detection to identify the domains in each individual snapshot. We measure the probability that a given atom will appear as part of a domain of area A_d, and find that, over time, increasingly larger domains are formed at the expense of their smaller counterparts (Fig. <ref>a). This is manifested in the growth of the mean area of the largest domain, concurrent with the shrinking of the second-largest one (Fig. <ref>b). Due to energy conservation, the appearance of progressively larger domains has to be offset by the proliferation of very small domains and single-site spin flips, as apparent in Fig. <ref>a. This phenomenon is known as bidirectional flow, and is generic for coarsening in closed systems <cit.>. We quantify this flow of energy by measuring the spatial distribution of the classical energy, as defined by the diagonal contribution to Eq. (<ref>) up to a constant shift: H_cl = -Δ ∑_i (n_i - 1) + ∑_i<j V_ij n_i n_j.
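A minimal sketch of these two diagnostics on a single snapshot (not the authors' analysis code; Δ and V_nn are placeholders in units of Ω, and the interaction sum is truncated to nearest and diagonal neighbours):

import numpy as np

# Staggered magnetization and classical energy H_cl of one binary snapshot
# n[x, y] in {0, 1} (1 = Rydberg), with open boundaries.
rng = np.random.default_rng(0)
n = rng.integers(0, 2, size=(16, 16))        # stand-in for a measured snapshot
Delta, V_nn = 1.1, 1.8                       # illustrative values

x, y = np.indices(n.shape)
m_s = np.sum((-1) ** (x + y) * (2 * n - 1))  # staggered magnetization

E = -Delta * np.sum(n - 1)
for (dx, dy), dist in {(1, 0): 1.0, (0, 1): 1.0,
                       (1, 1): 2 ** 0.5, (1, -1): 2 ** 0.5}.items():
    a = n[max(0, -dx):n.shape[0] - max(0, dx),
          max(0, -dy):n.shape[1] - max(0, dy)]
    b = n[max(0, dx):n.shape[0] - max(0, -dx),
          max(0, dy):n.shape[1] - max(0, -dy)]
    E += V_nn / dist ** 6 * np.sum(a * b)    # each unique pair counted once
print(m_s, E)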
For every snapshot, we identify the domain walls and the bulk, and accordingly determine the contribution of each towards ⟨H_cl⟩ (see Methods). While the classical energy is indeed conserved over time, it is redistributed from the domain walls into the bulk (Fig. <ref>c). This is consistent with a picture of coarsening which is driven by the surface tension, and elimination, of domain walls <cit.>. § DOMAIN WALL DYNAMICS In order to study the real-time dynamics of domains and domain walls, we deterministically prepare specific configurations of domain walls using programmable, locally controlled light shifts. Our protocol is described in Fig. <ref>a: we apply site-dependent negative local detunings δ_i < 0, with amplitudes |δ| ∼ 4Ω (see Methods), on the chosen atoms prior to ramping up Ω, and then continue the state-preparation protocol as previously described. The local detuning strongly biases the chosen atoms to |g⟩, and consequently, locally favors either an |AF_1⟩ or an |AF_2⟩ configuration. After the sweep is completed, and before the hold time begins, the local detuning is quenched off and the state is allowed to evolve freely. We start by preparing a small square domain of one AF order within the bulk of the other (Fig. <ref>b) <cit.>. Upon removal of the local detunings, we observe that the area of the injected domain shrinks linearly with time (Fig. <ref>c and Supplementary Video 1). This observation is in agreement with coarsening dynamics for nonconserved fields, where surface tension due to the energy cost of domain walls generates curvature-driven dynamics <cit.>. In such a scenario, the local velocity of a domain wall is proportional to its local curvature 1/R: ∂_t R ∝ -1/R, and therefore ∂_t R^2 = -v_a, where v_a is some positive time-independent constant. Strikingly, we find that v_a increases as one approaches the quantum critical point. This behavior is unique to coarsening in the vicinity of a quantum phase transition <cit.>; in contrast, for a classical Ising transition, the dynamics should be slower near the thermal phase boundary than when deep in the ordered phase <cit.>. We also observe indications of this speedup in the global sweeps (see Fig. <ref>d and Methods). We examine the dependence of v_a on the distance to the quantum critical point, Δ - Δ_c (Fig. <ref>d). Near the QPT, we find that v_a is approximately consistent with a scaling ∝ (Δ - Δ_c)^-ν (where ν ≈ 0.629 is the correlation-length exponent of the (2+1)D Ising QPT <cit.>). In Fig. <ref>e, we also analyze the evolution of several concentric spatial layers of the system, and observe the outer layers of the central domain morphing earlier, with the dynamics moving progressively inwards. This supports a picture of coarsening in which the dynamics are indeed driven by the shrinking of domain walls, as opposed to being generated within the bulk of the domains. Note that atoms in the bulk, far from the domain walls, undergo very little change over the hold period, remaining mostly in their initially prepared state. To further explore the curvature-driven nature of the coarsening dynamics, we prepare an initial state with a zigzag domain wall. Over time, the domain wall straightens into a vertical line separating the two orders, as visible in Fig. <ref>a and Supplementary Video 2. At points of high local curvature (red and blue in Fig. <ref>b), the domain wall moves towards the center of the domain.
This is dramatically different from the dynamics at points with no local curvature, where the initially prepared domain is aligned with the diagonal of the lattice ordering. At these points, the horizontal position of the domain wall remains remarkably stationary over time (green and yellow in Fig. <ref>). This motion is also evident in the dynamically changing domain-wall shape (Fig. <ref>c). As the corners of the zigzag pattern change their order, the directly adjacent rows become locally curved, and the domain wall at those points begins to move towards the center of the array. These key features of curvature-driven dynamics are in agreement with numerical simulations <cit.>, as detailed in the Methods. § HIGGS MODE In addition to the curvature-driven coarsening dynamics, our experiments clearly reveal persistent long-lived oscillations of the correlation length and the order parameter across a range of experimental parameters, as shown in Fig. <ref>c and Methods. We explore the origin of these oscillations in Fig. <ref>. First, we apply local detunings to one of the two sublattices, which biases the order parameter. We then repeat the protocol described in Fig. <ref>a, ramping to various values of Δ on both sides of the QPT. Directly after the ramp, we quench the pinning field off and follow the dynamics. We observe large-amplitude, long-lived oscillations of the order parameter, well modeled as a damped harmonic oscillator, m_s(t) ≈ ϕ_0 + A cos(ωt + θ_0) e^-γt, with amplitude A, frequency ω, damping γ, and offset ϕ_0 that strongly depend on Δ. We find that, upon approaching the phase transition from both sides, ω decreases while γ and A increase; ϕ_0 changes from zero in the disordered phase to a nonzero value in the ordered phase.
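Such fits can be performed with a standard nonlinear least-squares routine; the following sketch uses synthetic data in place of the measured ⟨m_s⟩(t) traces, with illustrative parameter values.

import numpy as np
from scipy.optimize import curve_fit

# Minimal sketch of the damped-oscillator fit.
def damped_cos(t, phi0, A, omega, theta0, gamma):
    return phi0 + A * np.cos(omega * t + theta0) * np.exp(-gamma * t)

t = np.linspace(0.0, 3.0, 120)                    # hold time (arbitrary units)
data = damped_cos(t, 0.1, 0.5, 2 * np.pi * 2.0, 0.3, 0.8)
data += np.random.default_rng(1).normal(0.0, 0.02, t.size)  # measurement noise

p0 = (0.0, 0.4, 2 * np.pi * 1.9, 0.0, 0.5)        # rough initial guess
(phi0, A, omega, theta0, gamma), _ = curve_fit(damped_cos, t, data, p0=p0)
print(f"omega = {omega:.2f}, gamma = {gamma:.2f}, offset = {phi0:.2f}")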
Beyond the changing offsetϕ_0, this simplified picture reproduces the increase of the oscillation amplitudeAand the decrease of the frequencyωwhen approaching the phase transition. To further investigate this amplitude mode, we prepare lower-energy biased states by softening the pinning field (smaller|δ_i|). We find a corresponding decrease in the oscillation amplitudeAand the frequencyωwith a sharper dependence near the QPT (Fig. <ref>d, Extended Data Fig. <ref>), in agreement with Landau theory. The oscillation frequency of these lower-energy states shifts close to the many-body gap. In order to explore the Higgs mode deep in the ordered phase, where our state preparation scheme generates low-amplitude oscillations, we use an alternative experimental protocol, as detailed in Methods. The Higgs-mode dynamics present a unique probe of the quantum critical point. In particular, the ratio of the oscillation frequencies on the two sides of the QPT is universal in equilibrium, and predicted to beω(-|q|) / ω(|q|)=√(2)<cit.> by Landau mean-field theory. However, our experimental results, in whichω(-|q|) / ω(|q|)>√(2)(see Extended Data Fig. <ref>), indicate a significant deviation from this simplistic prediction, broadly consistent with more advanced calculations predictingω(-|q|) / ω(|q|)≈1.9(see discussion in Methods). The discrepancy with mean-field results emphasizes the central role of quantum fluctuations, and in particular, finite-momentum order parameter fluctuations, in the vicinity of the QPT. The dynamics of these fluctuations are also expressed in the progressively larger oscillations of the correlation length (Fig. <ref>e) and a sharp increase in the damping termγ(Fig. <ref>d) observed upon approaching the critical point. § DISCUSSION AND OUTLOOK Our observations shed new light on paradigmatic collective processes in closed nonequilibrium quantum many-body systems. In particular, they highlight the important role of coarsening dynamics, and reveal their curvature-driven character in systems with a nonconserved order parameter <cit.>. Crucially, we measure an acceleration of the ordering processes when approaching the phase transition, a signature of the intrinsically quantum nature of the dynamics <cit.>. While we observe a scaling collapse of the structure factor suggestive of self-similarity <cit.>, the dynamically varying amplitudeb(t)deviates from the expected universal behavior. As discussed in the Methods, this deviation could originate from finite-size effects (as well as, potentially, residual disorder or decoherence), and its detailed understanding constitutes an interesting theoretical problem <cit.>. Similar mechanisms may account for the slowdown of coarsening at late times observed in Figs. <ref>c,d. Further evidence for the role of finite-size effects is provided by measurements involving local control: we find that the dynamics of domains seeded away from the system's boundaries (Fig. <ref>) are in much closer agreement with universal theoretical predictions. Additionally, we observe the concurrent excitation of the Higgs mode upon crossing the phase transition. Investigation of this mode yields detailed information on important observables—such as the damping rate of the Higgs mode in the vicinity of the phase transition—which are difficult to access classically <cit.>. For the numerically accessible system sizes and bond dimensions, our simulations cannot capture the damping rateγ, and generally break down near the critical point (see Methods). 
More generally, the possible interplay of coarsening with the Higgs mode presents an intriguing question that warrants further theoretical investigation. These studies can be extended along several directions. In contrast to traditional condensed-matter systems, programmable quantum simulators can directly access correlation functions of any order <cit.> as well as other important observables, such as the entanglement entropy <cit.>, using, e.g., hybrid digital-analog approaches <cit.>. These could provide further insights into complex dynamics, particularly near the quantum critical point, where numerical calculations are prohibitively challenging. Besides the symmetry-broken ordered states probed in this work, it would also be interesting to extend our study of coarsening dynamics to the formation of topologically ordered states of matter <cit.>, which cannot be characterized by local order parameters. Moreover, quantum dynamics following a first-order transition can also be explored. For instance, the local programmability of the system could allow for the preparation and biasing of metastable states <cit.>. Starting from these states, the quantum system can order through a high-order tunneling process, known as false vacuum decay, that nucleates a critical-size domain of the true ground-state ordering <cit.>. During the completion of this work, we became aware of related work demonstrating coarsening phenomena driven by quantum fluctuations on a superconducting quantum processor <cit.>. 10 url<#>1 urlprefixURL altman2023quantumauthorAltman, E.et al.titleQuantum simulators: Architectures and opportunities. journalPRX Quantumvolume2, pages017003 (year2021). bauer2023highenergyauthorBauer, C. W.et al.titleQuantum simulation for high-energy physics. journalPRX Quantumvolume4, pages027001 (year2023). degen2017sensingauthorDegen, C. L., authorReinhard, F.&authorCappellaro, P.titleQuantum sensing. journalRev. Mod. Phys.volume89, pages035002 (year2017). li2023scramblingauthorLi, Z.et al.titleImproving metrology with quantum scrambling. journalSciencevolume380, pages1381–1384 (year2023). ebadi2022quantumauthorEbadi, S.et al.titleQuantum optimization of maximum independent set using Rydberg atom arrays. journalSciencevolume376, pages1209–1215 (year2022). Samajdar2024authorSamajdar, R.&authorHuse, D. A.titleQuantum and classical coarsening and their interplay with the Kibble-Zurek mechanism (year2024). arXiv:2401.15144 [quant-ph]. pekker2015amplitudeauthorPekker, D.&authorVarma, C. M.titleAmplitude/Higgs modes in condensed matter physics. journalAnnu. Rev. Condens. Matter Phys.volume6, pages269–297 (year2015). sachdev2011quantumauthorSachdev, S.titleQuantum Phase Transitions (publisherCambridge University Press, addressNew York, year2011). bakr2010probingauthorBakr, W. S.et al.titleProbing the Superfluid–to–Mott Insulator Transition at the Single-Atom Level. journalSciencevolume329, pages547–550 (year2010). keesling2019quantumauthorKeesling, A.et al.titleQuantum Kibble–Zurek mechanism and critical dynamics on a programmable Rydberg simulator. journalNaturevolume568, pages207–211 (year2019). ebadi2021quantumauthorEbadi, S.et al.titleQuantum phases of matter on a 256-atom programmable quantum simulator. journalNaturevolume595, pages227–232 (year2021). scholl2021quantumauthorScholl, P.et al.titleQuantum simulation of 2D antiferromagnets with hundreds of Rydberg atoms. journalNaturevolume595, pages233–238 (year2021). 
Rüegg, C. et al. Quantum magnets under pressure: controlling elementary excitations in TlCuCl_3. Phys. Rev. Lett. 100, 205701 (2008).
Jain, A. et al. Higgs mode and its decay in a two-dimensional antiferromagnet. Nature Physics 13, 633–637 (2017).
Shimano, R. & Tsuji, N. Higgs mode in superconductors. Annu. Rev. Condens. Matter Phys. 11, 103–124 (2020).
Pyka, K. et al. Topological defect formation and spontaneous symmetry breaking in ion Coulomb crystals. Nature Communications 4, 2291 (2013).
Biroli, G., Cugliandolo, L. F. & Sicilia, A. Kibble-Zurek mechanism and infinitely slow annealing through critical points. Phys. Rev. E 81, 050101 (2010).
Roychowdhury, K., Moessner, R. & Das, A. Dynamics and correlations at a quantum phase transition beyond Kibble-Zurek. Phys. Rev. B 104, 014406 (2021).
Schmitt, M., Rams, M. M., Dziarmaga, J., Heyl, M. & Zurek, W. H. Quantum phase transition dynamics in the two-dimensional transverse-field Ising model. Sci. Adv. 8, eabl6850 (2022).
Zeng, H.-B., Xia, C.-Y. & del Campo, A. Universal breakdown of Kibble-Zurek scaling in fast quenches across a phase transition. Phys. Rev. Lett. 130, 060402 (2023).
Lifshitz, I. M. Kinetics of ordering during second-order phase transitions. Sov. Phys. JETP 15, 939 (1962).
Hohenberg, P. C. & Halperin, B. I. Theory of dynamic critical phenomena. Rev. Mod. Phys. 49, 435–479 (1977).
Bray, A. Theory of phase-ordering kinetics. Advances in Physics 43, 357–459 (1994).
Goo, J. et al. Universal early coarsening of quenched Bose gases. Phys. Rev. Lett. 128, 135701 (2022).
Gazo, M. et al. Universal coarsening in a homogeneous two-dimensional Bose gas. arXiv:2312.09248 (2023).
Chandran, A., Erez, A., Gubser, S. S. & Sondhi, S. L. Kibble-Zurek problem: Universality and the scaling limit. Phys. Rev. B 86, 064304 (2012).
Chandran, A., Nanduri, A., Gubser, S. S. & Sondhi, S. L. Equilibration and coarsening in the quantum O(N) model at infinite N. Phys. Rev. B 88, 024306 (2013).
Maraga, A., Chiocchetta, A., Mitra, A. & Gambassi, A. Aging and coarsening in isolated quantum systems after a quench: Exact results for the quantum O(N) model with N → ∞. Phys. Rev. E 92, 042151 (2015).
Gagel, P., Orth, P. P. & Schmalian, J. Universal postquench coarsening and aging at a quantum critical point. Phys. Rev. B 92, 115121 (2015).
Sadler, L., Higbie, J. M., Leslie, S. R., Vengalattore, M. & Stamper-Kurn, D. M. Spontaneous symmetry breaking in a quenched ferromagnetic spinor Bose–Einstein condensate. Nature 443, 312–315 (2006).
Lienhard, V. et al. Observing the space- and time-dependent growth of correlations in dynamically tuned synthetic Ising models with antiferromagnetic interactions. Phys. Rev. X 8, 021070 (2018).
Huh, S. et al. Universality class of a spinor Bose–Einstein condensate far from equilibrium. Nat. Phys. 1–7 (2024).
Andersen, T. I. et al. Thermalization and criticality on an analog-digital quantum simulator. arXiv:2405.17385 (2024).
Endres, M. et al. The 'Higgs' amplitude mode at the two-dimensional superfluid/Mott insulator transition. Nature 487, 454–458 (2012).
Samajdar, R., Ho, W. W., Pichler, H., Lukin, M. D. & Sachdev, S. Complex density wave orders and quantum phase transitions in a model of square-lattice Rydberg atom arrays. Phys. Rev. Lett. 124, 103601 (2020).
Kalinowski, M. et al. Bulk and boundary quantum phase transitions in a square Rydberg atom array. Phys. Rev. B 105, 174417 (2022).
Semeghini, G. et al. Probing topological spin liquids on a programmable quantum simulator. Science 374, 1242–1247 (2021).
Barenblatt, G. I. Scaling, Self-Similarity, and Intermediate Asymptotics: Dimensional Analysis and Intermediate Asymptotics, No. 14 (Cambridge University Press, 1996).
Glidden, J. A. et al. Bidirectional dynamic scaling in an isolated Bose gas far from equilibrium. Nat. Phys. 17, 457–461 (2021).
Pavešić, L., Jaschke, D. & Montangero, S. Constrained dynamics and confinement in the two-dimensional quantum Ising model. arXiv:2406.11979 (2024).
Humayun, K. & Bray, A. J. Non-equilibrium dynamics of the Ising model for T ≤ T_c. J. Phys. A: Math. Gen. 24, 1915 (1991).
Bernien, H. et al. Probing many-body dynamics on a 51-atom quantum simulator. Nature 551, 579–584 (2017).
Bluvstein, D. et al. Controlling quantum many-body dynamics in driven Rydberg atom arrays. Science 371, 1355–1359 (2021).
Haegeman, J., Lubich, C., Oseledets, I., Vandereycken, B. & Verstraete, F. Unifying time evolution and optimization with matrix product states. Phys. Rev. B 94, 165116 (2016).
Sachdev, S. Exotic phases and quantum phase transitions: model systems and experiments. In Quantum Theory of Condensed Matter, 24th Solvay Conference on Physics (2009). arXiv:0901.4103 [cond-mat.str-el].
Sachdev, S. Theory of finite-temperature crossovers near quantum critical points close to, or above, their upper-critical dimension. Physical Review B 55, 142 (1997).
Brydges, T. et al. Probing Rényi entanglement entropy via randomized measurements. Science 364, 260–263 (2019).
Teng, Y. et al. Learning topological states from randomized measurements using variational tensor network tomography. arXiv:2406.00193 [quant-ph] (2024).
Bluvstein, D. et al. A quantum processor based on coherent transport of entangled atom arrays. Nature 604, 451–456 (2022).
Samajdar, R., Ho, W. W., Pichler, H., Lukin, M. D. & Sachdev, S. Quantum phases of Rydberg atoms on a kagome lattice. Proc. Natl. Acad. Sci. U.S.A. 118, e2015785118 (2021). arXiv:2011.12295.
Verresen, R., Lukin, M. D. & Vishwanath, A. Prediction of toric code topological order from Rydberg blockade. Phys. Rev. X 11, 031005 (2021).
Darbha, S. et al. False vacuum decay and nucleation dynamics in neutral atom systems. arXiv:2404.12360 (2024).
Darbha, S. et al. Long-lived oscillations of false and true vacuum states in neutral atom systems. arXiv:2404.12371 (2024).
Zenesini, A. et al. False vacuum decay via bubble formation in ferromagnetic superfluids. Nature Physics 1–6 (2024).
Coleman, S. Fate of the false vacuum: Semiclassical theory. Physical Review D 15, 2929 (1977).
Bluvstein, D. et al. Logical quantum processor based on reconfigurable atom arrays. Nature 626, 58–65 (2024).
Festa, L. et al. Blackbody-radiation-induced facilitated excitation of Rydberg atoms in optical tweezers. Physical Review A 105, 013109 (2022).
Kim, D. et al. Large-scale uniform optical focus array generation with a phase spatial light modulator. Opt. Lett. 44, 3178–3181 (2019).
Chen, C. et al. Continuous symmetry breaking in a two-dimensional Rydberg array. Nature 616, 691–695 (2023).
de Oliveira, A. et al. Demonstration of weighted graph optimization on a Rydberg atom array using local light-shifts. arXiv:2404.02658 (2024).
Sandvik, A. W. Computational studies of quantum spin systems. AIP Conf. Proc. 1297, 135–338 (2010).
Kennedy, T. Ornstein-Zernike decay in the ground state of the quantum Ising model in a strong transverse field. Commun. Math. Phys. 137, 599–615 (1991).
Manetsch, H. J. et al. A tweezer array with 6100 highly coherent atomic qubits. arXiv:2403.12021 (2024).
Haegeman, J. et al. Time-dependent variational principle for quantum lattices. Phys. Rev. Lett. 107, 070601 (2011).
Dolgirev, P. E. et al. Periodic dynamics in superconductors induced by an impulsive optical quench. Commun. Phys. 5, 234 (2022).
Brezin, E., Le Guillou, J.-C. & Zinn-Justin, J. Universal ratios of critical amplitudes near four dimensions. Physics Letters A 47, 285–287 (1974).
Caselle, M. & Hasenbusch, M. Universal amplitude ratios in the three-dimensional Ising model. Journal of Physics A: Mathematical and General 30, 4963 (1997).
Campostrini, M., Pelissetto, A., Rossi, P. & Vicari, E. 25th-order high-temperature expansion results for three-dimensional Ising-like systems on the simple-cubic lattice. Phys. Rev. E 65, 066127 (2002).
Hauschild, J. & Pollmann, F. Efficient numerical simulations with tensor networks: Tensor Network Python (TeNPy). SciPost Phys. Lect. Notes 5 (2018).
§ METHODS Experimental platform A detailed description of our experimental platform is given in <cit.>. All measurements are realized using a two-dimensional programmable quantum simulator based on Rydberg atom arrays. Single ^87Rb atoms are stochastically loaded into optical tweezers shaped by a spatial light modulator (SLM), and then rearranged into defect-free patterns using a pair of crossed acousto-optic deflectors (AODs). Both sets of tweezers use 852 nm-wavelength light. The atoms are then laser-cooled and optically pumped to the |5S_1/2, F=2, m_F=-2⟩ state, which we denote as |g⟩ in the main text. A pair of counterpropagating lasers at 420 nm and 1013 nm wavelengths couple |g⟩ to the highly excited Rydberg state |r⟩ = |70S_1/2, J=1/2, m_J=-1/2⟩ via a two-photon transition through the 6P_3/2 orbital, blue-detuned by approximately 2π×2.4 GHz with respect to the 5S_1/2 → 6P_3/2 transition. We use 16×16 lattices of atoms, and maintain an R_b/a ratio of 1.12–1.15, such that only nearest neighbors lie within the blockade radius. The data shown in the main text are taken with two-photon Rabi frequencies of either Ω/2π = 3.8, 6.0, or 3.1 MHz, with corresponding lattice spacings a = 6.8, 6.45, or 7.15 μm and R_b/a = 1.15, 1.12, or 1.13. The 1013 nm and 420 nm single-photon Rabi frequencies are approximately balanced (Ω_1013 ≈ Ω_420). The experiment time T, sweep parameters, and lattice spacings are all rescaled for different Ω, such that Ω/Δ, R_b/a, and the total accumulated phase ΩT are constant when comparing different experimental configurations. In order to observe coarsening, the sweep rate through the critical point must fall within a certain range. A sweep rate that is too slow would create a low-energy state, and consequently, the coarsening dynamics may be too slow to measure. In contrast, a very fast sweep could inject too much energy and bring the system out of the ordered phase (which persists up to a finite energy density). Here, all measurements use linear sweeps with sweep rates of (Δ/Ω)/(ΩT) ≈ 3.0/(2π), which, we find, falls within the desired range. For the global sweeps, we apply post-selection based on the success of the rearrangement protocol, selecting shots with ≤3 defects. Additionally, we post-select on measurement results in order to discard runs in which we suspect large-scale errors have occurred. Due to the high energy of the Rydberg blockade, a large number of blockade violations are extremely unlikely to be naturally generated by the coherent dynamics of Eq. (<ref>). Furthermore, since our projective measurement cannot differentiate |r⟩ occupation from loss induced by other mechanisms, we attribute the presence of a large number of apparent blockade violations to manifestations of unwanted noise processes, such as blackbody-induced avalanche decays <cit.>. We therefore discard runs where the longest chain of consecutive atoms in a single row or column detected in state |r⟩ has a length of more than four sites. Local control To enable individual single-site addressing of atoms with a local light shift, we use an SLM (Hamamatsu LCOS-SLM X15213-02) to generate optical tweezers in arbitrary spatial patterns with beam waist 1 μm, ensuring robustness to atomic position fluctuations (Extended Data Fig. <ref>).
The wavelength we choose to operate at, 784 nm, achieves a measured differential AC Stark shift between the 5S_1/2 and 5P_3/2 states of 12.2(3) MHz with ≈160 μW per spot, but a negligible scattering rate (≈35 Hz) (the scaling of the light shift with laser amplitude is shown in Extended Data Fig. <ref>c). The light is linearly polarized to minimize vector light shifts on the ground-state hyperfine manifold. We further measure the shift on the |g⟩ → |r⟩ transition and find that the light shift is well approximated by the differential 5S_1/2 → 5P_3/2 light shift as δ_0 = -2π×12(2) MHz at the same power per spot. The phase holograms for the SLM are generated using the phase-fixed weighted Gerchberg–Saxton (WGS) algorithm, taking into account the desired position and relative intensity of the local light-shift pattern <cit.>. We first generate a local addressing pattern that closely matches the positioning of the atomic tweezer array; however, perfect matching of the two arrays is computationally expensive, as it requires an extremely high sampling rate of the image plane of the local addressing pattern. In order to overcome this computational barrier, after creating an initial local addressing pattern, we align it to the atom positions by transforming the phase hologram. By stretching, rotating, and applying tilts and defocus, we can match the two patterns with feedback on the atom signal. The latter three can be easily controlled using Zernike polynomials, while the stretching and rotation require more care to preserve the intensity homogeneity of the desired pattern. We find that naïve rescaling or rotating of the hologram results in unwanted distortion of the intensity pattern, attributed to software interpolation when working with a pixelated hologram. This is mitigated by applying the computational corrections in the image plane. We take the Fourier transform of the hologram, convolve the intensity profile with a 2D Gaussian to broaden each spot over several pixels (in order to minimize effects of interpolation), and then apply the rotation and stretching. Lastly, we apply an inverse Fourier transform back to the Fourier plane and use the resultant phase hologram for the SLM. Using this procedure, we first coarsely align the individual addressing pattern to the tweezers on a camera, and then precisely align the two using a spin-echo measurement of the light shift (Extended Data Fig. <ref>b) to optimize the alignment parameters such that the intensity is maximized at the atom sites. Good alignment is also crucial to prevent atom loss from turning on a misaligned potential. Finally, we correct the tweezer intensities as required, using the fitted light shifts to feed back on the target weights in the hologram generation. Examples of states prepared using such tweezer profiles, where the detuning is used to strongly pin atoms to the ground state, are shown in Extended Data Fig. <ref>d. At the boundary between different AF orders and the edges of the array, the mean-field repulsive interaction strength decreases for sites with fewer Rydberg neighbors; we therefore weight the local detuning strength inverse-proportionally to the number of neighbors. Note that when arbitrary weighting is used, the total power remains constant (number of addressed sites × 2π×12 MHz), but the power is redistributed among the tweezers accordingly. Nevertheless, particularly at large Δ/Ω, neighboring Rydberg excitations start to be energetically favored (antiblockaded) along the domain boundaries.
Excluding such edge effects, the probability of preparing the single-atom ground state on the pinned sites is 93–95% (Extended Data Fig. <ref>e). For other realizations of single-site addressing using light shifts in atom arrays, see e.g. <cit.>. Theoretical background of coarsening dynamics In this section, we summarize the theoretical details of the different kinds of coarsening processes that govern the dynamics of the system as long-range order is formed. Although our focus will be on the Rydberg atom array, to begin, let us consider the generic situation of a system driven through a continuous quantum phase transition (QPT) by tuning some parameter of the Hamiltonian, g, linearly with time. Without loss of generality, we assume that the quantum critical point (QCP) is located at g = 0 and the zero of time is set such that g(t) = t/τ; hence, the system crosses the QCP at t = 0. For the specific case of the neutral-atom array considered in this work, the time-dependent parameter g can be defined as g(t) = (Δ(t) - Δ_c)/Ω. As the system approaches the quantum critical point, its relaxation time diverges and it necessarily falls out of equilibrium. However, when it does so depends on the velocity of the linear ramp, ġ(t) = 1/τ. The quantum Kibble-Zurek mechanism posits that the time at which the system's evolution ceases to be adiabatic is t = -t_kz, with t_kz ∼ t_0 (τ/t_0)^νz/(νz+1), where ν is the correlation-length exponent, z is the dynamical critical exponent, and t_0 is some microscopic time scale. Thereafter, since the system cannot dynamically respond fast enough to the changing parameter of the Hamiltonian, it remains "frozen" through a so-called impulse regime until a later time t = +t_kz, when it unfreezes on the other side of the quantum phase transition. During this impulse regime, the KZM presumes that the system's correlation length remains the same as when it initially froze: ξ_kz ∼ l_0 (τ/t_0)^ν/(νz+1), where l_0 is some microscopic length scale. As a consequence, in this picture, the correlation length in the ordered phase is also set by ξ_kz, with no subsequent dynamics. However, the nonequilibrium correlation length of the system, ξ(t), can and does grow both in the impulse regime as well as in the ordered phase, since the long-range correlations take time to develop. In the experiments described in the main text, this occurs via a two-step process. First, as the system passes through the quantum critical regime, it undergoes quantum critical coarsening, which is governed by the dynamical critical exponent z of the particular QCP; for the (2+1)D Ising transition, z = 1. Then, as time progresses and the ramp continues, the system eventually enters the ordered phase. Here, once the growing nonequilibrium correlation length ξ(t) exceeds the equilibrium correlation length of the quantum ground state (which, recall, scales as ξ_q ∼ |g|^-ν), the dynamics cross over to a regime of noncritical coarsening, for which dξ(t)/dt ∼ ξ_q^z_d ε / ξ(t)^(z_d-1), where ε is the many-body gap between the ground and first-excited states. The dynamical exponent z_d depends on the dimensionality and conservation laws of the system. For curvature-driven coarsening dynamics with a nonconserved scalar order parameter—as is indeed the case experimentally—z_d = 2 > z. A particular feature of noncritical coarsening worth emphasizing is the dependence of the dynamics on the distance to the QCP encoded in Eq. (<ref>). Specifically, the ground-state equilibrium correlation length scales as ξ_q ∼ |g|^-ν and the gap ε ∼ |g|^νz.
Plugging the exponents of the (2+1)D Ising QPT, ν = 0.629 and z = 1, along with z_d = 2, into Eq. (<ref>), we find the growth law dξ(t)/dt ∼ (Δ - Δ_c)^-0.629/ξ(t). This relation can be observed in Fig. <ref>, which studies the rate at which a locally introduced domain in the center of the array shrinks. The area of such a domain decreases at a rate dr^2/dt ∼ -ξ dξ/dt, which scales as (Δ - Δ_c)^-0.629, consistent with the behavior observed in Fig. <ref>d. For a ramp that continues indefinitely without stopping, the entire dynamical evolution of the correlation length can be described by a single universal scaling function encompassing the adiabatic, quantum critical coarsening, and noncritical coarsening regimes <cit.>: ξ(t) ≈ ξ_kz f(t/t_kz) ≡ ξ_kz f(x), where f(x) is some universal function, and ξ_kz and t_kz depend on the ramp rate τ as specified earlier. The scaling variable x delineates the three regimes discussed above as x ≪ -1: adiabatic regime; |x| ≲ 𝒪(1): quantum critical coarsening; x ≫ 1: noncritical coarsening. More generally, if the ramp is stopped at a time t_s, the dynamical scaling form is altered to ξ(t) ≈ ξ_kz ℱ(t/t_kz, t_s/t_kz) ≡ ξ_kz ℱ(x, x_s); for x ≤ x_s, one recovers the earlier scaling as ℱ(x, x_s) = f(x). If the ramp is stopped at x_s = t_s/t_kz ≫ 1 in the noncritical coarsening regime, the behavior of the universal scaling function for x > x_s describes the physics during the hold time and is given by ℱ(x, x_s) ≈ x_s^-ν+(νz/z_d) (𝒞x - 𝒞_s x_s)^1/z_d, for some 𝒪(1) constants 𝒞 > 𝒞_s. Note that because z < z_d, the coarsening speeds up as we stop earlier in the ordered phase, closer to the QCP (which results in a lower x_s). This is indeed what we observe experimentally during the hold time following global sweeps across the phase transition, as shown in the inset of Fig. <ref>c. Intuitively, this is because a smaller Δ/Ω corresponds to a greater relative influence of critical coarsening, which is faster than noncritical coarsening. In contrast, near the thermal phase boundary, the system can undergo an interval of classical critical coarsening, which is described by a growth law ξ(t) ∼ t^1/z̅ with a distinct dynamical exponent. For the 2D classical Ising phase transition, z̅ ≈ 2.16 > z_d <cit.>, so the growth of correlations via classical critical coarsening is slower than for noncritical coarsening. Correspondingly, the dynamics should decelerate as one approaches the classical critical point, in sharp contrast to the speedup outlined above in the vicinity of a QCP. Structure factor and correlation length In order to extract the structure factor and the correlation length <cit.>, we first calculate the two-point connected correlation function G(r_1, r_2) = ⟨Z̃_r_1 Z̃_r_2⟩ - ⟨Z̃_r_1⟩⟨Z̃_r_2⟩ and then average over all pairs of points with identical displacements r: G(r) = ∑_r_1,r_2 G(r_1, r_2) δ_r_1-r_2,r / ∑_r_1,r_2 δ_r_1-r_2,r. We first derive the standard structure factor by computing the Fourier transform of G(r), S(k) ≡ ℱ_k[G(r)] = ∑_r e^-i k·r G(r), and then calculate the radially averaged structure factor, S(k) = ∑_k S(k) δ_|k|,k / ∑_k δ_|k|,k. To extract a correlation length, we fit S(k) to S(k) ≈ S_0/(1 + ξ^2 k^2)^3/2, and we factorize S_0 as S_0 = b ξ^2/π. This form of Eq. (<ref>) is equivalent to assuming that the position-space correlations follow an exponential decay, G(r) ≈ A exp(-r/ξ) <cit.>, up to finite-size corrections. While equilibrium considerations for an infinite-size system suggest that, for small k, S(k) should obey the Ornstein-Zernike form S(k) ≈ S_0/(1 + ξ^2 k^2) <cit.>, we empirically find that Eq. (<ref>) better captures our observed nonequilibrium distributions.
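A schematic version of this extraction pipeline (not the authors' code) is sketched below: it estimates the connected structure factor from a stack of snapshots under a periodic approximation, radially averages it, and fits the functional form above. The synthetic snapshots are uncorrelated, so the fitted ξ is near zero here.

import numpy as np
from scipy.optimize import curve_fit

# Structure factor of the staggered magnetization from snapshots via FFT,
# radial average, and fit of S(k) ~ S0 / (1 + xi^2 k^2)^(3/2).
rng = np.random.default_rng(2)
snaps = rng.integers(0, 2, size=(400, 16, 16)).astype(float)
x, y = np.indices(snaps.shape[1:])
Zt = (-1.0) ** (x + y) * (2 * snaps - 1)             # staggered Z per site

F = np.fft.fft2(Zt, axes=(1, 2))
S2d = (np.abs(F) ** 2).mean(axis=0)
S2d -= np.abs(np.fft.fft2(Zt.mean(axis=0))) ** 2     # connected part
S2d /= Zt[0].size                                    # per-site normalization

kx = 2 * np.pi * np.fft.fftfreq(16)
KX, KY = np.meshgrid(kx, kx, indexing="ij")
k, S = np.hypot(KX, KY).ravel(), S2d.ravel()

edges = np.linspace(1e-6, np.pi, 9)                  # radial bins (skip k = 0)
idx = np.digitize(k, edges)
k_avg, S_avg = [], []
for i in range(1, len(edges)):
    sel = idx == i
    if sel.any():                                    # skip empty bins
        k_avg.append(k[sel].mean())
        S_avg.append(S[sel].mean())

model = lambda kk, S0, xi: S0 / (1.0 + (xi * kk) ** 2) ** 1.5
(S0, xi), _ = curve_fit(model, np.array(k_avg), np.array(S_avg),
                        p0=(S_avg[0], 1.0), maxfev=10000)
print(f"xi = {abs(xi):.2f} sites, S0 = {S0:.2f}")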
For universal coarsening dynamics, we theoretically expect b to be constant. While our data indeed exhibit scaling collapse as in Eq. (<ref>), we find that b varies during the dynamics, indicating the presence of additional length scale(s). We observe that b is correlated with ξ and depends on Δ/Ω, as shown in ED Fig. <ref>(d). Additional length scales that may affect the dynamics include the finite system size, the finite width of the domain walls (which depends on Δ/Ω), spatial inhomogeneity in Δ and/or Ω, and length scales introduced by decoherence effects such as decay due to the finite lifetime of the Rydberg state. Specifically, the universal scaling regime is expected to hold for distances r and correlation lengths ξ such that l ≪ r, ξ ≪ L, where l is the width of a domain wall and L is the system size <cit.>, indicating that finite-size effects are likely playing an important role in the present experiments. Hence, observing the theoretically expected universal coarsening behavior in global quenches would likely require access to larger system sizes and correspondingly longer experiment times. While recent experimental advances in neutral-atom array platforms suggest that lattices more than an order of magnitude larger than the one presented in this paper are within reach <cit.>, elsewhere we describe the use of local control to deterministically nucleate and study domain dynamics away from the system's boundaries, allowing us to study universal properties of coarsening under present experimental conditions. Analysis of domains in global sweeps Using single-site-resolved detection, we can map out the domains in each snapshot. First, we calculate the local staggered magnetization. Each domain is then identified as a region of the array where the same ordering, AF_1 or AF_2, is connected by nearest neighbors. We do not consider single spins of opposite order as a separate domain. For Fig. <ref>a,b, we therefore first identify and correct individual spin flips. These are identified as single atoms which are of the opposite order compared to all of their nearest and next-nearest neighbors. Only after we have identified single spin flips and corrected them to match their surrounding bulk order do we identify the domain boundaries. A domain's area is defined as the total number of atoms comprising the domain. For the probability distribution of domain occupations presented in Fig. <ref>a, the frequency of each domain area is weighted by the area of that domain. We normalize the distribution by the sum of all area-weighted frequencies at each time step. Classical energy analysis To calculate the classical energy per single shot of the experiment, we first perform the single-spin-flip correction as described above. We then identify regions of the array which do not belong to either AF ordering by calculating a coarse-grained local staggered magnetization, with an approach similar to previous works <cit.>. In this work, specifically, we calculate the convolution C_x,y of the Rydberg occupation n_x,y with the kernel W = [0 1 0; 1 0 1; 0 1 0] for each snapshot. The output values of C_x,y range from 0 to 4, where the extremal values correspond to atoms surrounded by nearest and next-nearest neighbors that all belong to the same AF ordering, as shown in Extended Data Fig. <ref>a. We consider an atom to be at a boundary if n_x,y = 1 (0) and C_x,y ≠ 0 (4) (see Extended Data Fig. <ref>b).
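A minimal sketch of this boundary identification (assuming zero-padded open boundaries; in the actual analysis the outermost layer of atoms is excluded):

import numpy as np
from scipy.ndimage import convolve

# Convolve the Rydberg occupation with the nearest-neighbour kernel W and
# flag sites whose surroundings are inconsistent with a perfect checkerboard.
rng = np.random.default_rng(3)
n = rng.integers(0, 2, size=(16, 16))     # stand-in snapshot

W = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])
C = convolve(n, W, mode="constant", cval=0)

# boundary criterion from the text: occupied sites with any occupied
# neighbour (C != 0), or empty sites not fully surrounded (C != 4)
boundary = ((n == 1) & (C != 0)) | ((n == 0) & (C != 4))
print(boundary.sum(), "boundary sites")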
In the raw array (not single-spin-flip corrected), we then compute the classical energy using Eq. (<ref>) for each snapshot. The value of the interaction energy is calculated from the lattice spacing a and V_0 as V_nn = V_0/a^6. For the dataset presented in Fig. <ref>c, V_nn/2π = 11.26 MHz for Ω/2π = 6 MHz. By using the spin-flip-corrected lattice for domain identification and the uncorrected one for the subsequent energy calculation, single spin flips contribute to the classical energy in identified domain walls and bulk orderings, but not as separate domains. We exclude the layer of atoms closest to the edge of the array from all contributions to the classical energy. Note that for the classical energy calculation in Fig. <ref>c, we post-select such that the maximum number of directly adjacent Rydberg atoms can be no more than three (compared to four used throughout this work). Due to the sensitivity of the boundary-identification procedure used here to correlated decays, this post-selection is slightly stricter. Analysis of locally prepared domains The radius, r, of the central domain in Fig. <ref> is defined as the Manhattan distance, d_m, at which the radially averaged local staggered magnetization, m_s, crosses zero (see Extended Data <ref>c). We consider Manhattan distances instead of Euclidean distances from the center of the injected domains, as the former is more representative of the nearest-neighbor interactions dominating the dynamics for very short times (on the order of one Rabi cycle). For long times, both measures of distance in our lattice reveal a collapse to the linear form shown in Fig. <ref>c. For this analysis, we only consider the unpinned sublattice. The local state-preparation protocol prepares states where atoms that are locally detuned are prepared in |g⟩ with high probability for all values of Δ/Ω (Extended Data Fig. <ref>d). We observe that for Δ/Ω close to the QPT, there is a larger discrepancy of the local order parameter m_s between the pinned and unpinned sublattices. Therefore, when considering both sublattices, the radius is less clearly defined by a single point at which the order parameter crosses zero. We fit a linear relationship to extract dr^2/dt at each Δ/Ω. A similar procedure is followed for the analysis of the coarsening dynamics of the zigzag domain wall in Fig. <ref>. Here too, the mean value of the local order parameter m_x,y is calculated at each lattice site per time step. For Fig. <ref>c, the domain wall's horizontal position is calculated as the point at which the linearly interpolated line between points crosses m_s = 0. In Extended Data Fig. <ref>d, we show the variation of the domain-wall position with hold time for two additional values of Δ/Ω. These data points reinforce the strong Δ/Ω-dependence of the domain-wall velocity already seen in Fig. <ref>c,d. Note that for this analysis, we include atoms which were initially locally detuned, as we are considering each row separately. We therefore see larger uncertainty in the domain walls' positions for low Δ/Ω (see Extended Data Fig. <ref>d). Errors in Figs. <ref>c, <ref>c, and Extended Data Fig. <ref>d are calculated using bootstrapping. From the full set of experimental single snapshots of size N ∼ 600 shots, we sample N times with replacement and calculate the value of interest (r^2 or the horizontal domain-wall position x) on each sample. The plotted error bar is the standard deviation of the value of interest, calculated from 1000 repetitions of the above procedure.
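The bootstrap step can be sketched as follows (synthetic per-shot values stand in for the measured observable; 1000 resamples, as above):

import numpy as np

# Bootstrap error bar for a generic per-shot observable.
rng = np.random.default_rng(4)
shots = rng.normal(0.3, 0.1, size=600)    # stand-in per-shot values

def bootstrap_std(samples, n_boot=1000, rng=rng):
    n = len(samples)
    # resample with replacement and recompute the estimator each time
    estimates = [samples[rng.integers(0, n, n)].mean() for _ in range(n_boot)]
    return np.std(estimates)

print(f"value = {shots.mean():.3f} +/- {bootstrap_std(shots):.3f}")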
Numerical simulations of local domains We simulate the dynamics of locally prepared domains using the time-dependent variational principle (TDVP) <cit.>. We use a two-site variant of this algorithm, which allows the bond dimension to grow with the evolution time at the expense of forgoing strict energy conservation due to the truncation step involved. In our calculations, we find that the energy is conserved to within 0.004% of that of the initial state up to the longest times simulated. The initial states for the numerics are chosen to be a mean-field approximation of the experimentally prepared state. Specifically, we pin certain lattice sites to |g⟩, as specified by the target configuration, while the remaining ones are set to the vector on the Bloch sphere that minimizes the system's mean-field energy for a given Δ/Ω (instead of the fully polarized |r⟩ state). The many-body evolution is simulated using a maximum bond dimension of χ = 1200 with a time step Δt of 0.2 Ω^-1 (the dynamics are also consistent with those for a smaller Δt = 0.1 Ω^-1). The results thus obtained are showcased in Extended Data Fig. <ref> and are found to be in good agreement with the experiments. Amplitude/Higgs mode Background and theory To describe the observed amplitude mode, we consider the low-energy effective action that describes the transition between the disordered and antiferromagnetic phases. Its Lagrangian is the ϕ^4-theory L[ϕ] = (1/2)[(∂_t ϕ)^2 + (∇ϕ)^2 - q ϕ^2] - (λ/4) ϕ^4, where ϕ corresponds to the coarse-grained order parameter of the antiferromagnetic phase. While the phase transition is described via the Wilson-Fisher fixed point, here we perform a simple mean-field treatment of Eq. (<ref>) in order to capture the physics away from the immediate vicinity of the transition. In particular, the classical mean-field equation of motion for the order parameter's expectation value is given by ∂_t^2 ϕ = -(q + λϕ^2) ϕ, which corresponds to a classical anharmonic oscillator. The stationary value of the order parameter is given by ϕ_0 = 0 in the disordered phase (q > 0) and ϕ_0 = ±√(-q/λ) on the ordered side of the transition (q < 0). Expanding Eq. (<ref>) for small amplitudes, ϕ = ϕ_0 + δϕ, around the potential minima leads to harmonic oscillations of the order parameter with frequencies ω(q>0) = √(q) and ω(q<0) = √(2|q|).
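This mean-field prediction is easy to verify numerically; the sketch below integrates the equation of motion for small displacements about the potential minima (λ, q, and the displacements are illustrative) and recovers the frequency ratio √2.

import numpy as np
from scipy.integrate import solve_ivp

# Integrate d^2 phi/dt^2 = -(q + lam*phi^2) phi and read off the dominant
# oscillation frequency from the Fourier spectrum.
lam = 1.0

def dominant_freq(q, phi_init, t_max=80.0):
    rhs = lambda t, y: [y[1], -(q + lam * y[0] ** 2) * y[0]]
    t = np.linspace(0.0, t_max, 8000)
    sol = solve_ivp(rhs, (0.0, t_max), [phi_init, 0.0], t_eval=t,
                    rtol=1e-10, atol=1e-12)
    phi = sol.y[0]
    f = np.fft.rfftfreq(t.size, t[1] - t[0]) * 2 * np.pi
    return f[np.abs(np.fft.rfft(phi - phi.mean())).argmax()]

q = 1.0
w_dis = dominant_freq(+q, 0.01)                     # disordered: phi_0 = 0
w_ord = dominant_freq(-q, np.sqrt(q / lam) + 0.01)  # ordered: about +phi_0
print(w_dis, w_ord, w_ord / w_dis)                  # ~1, ~1.41, ratio ~sqrt(2)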
In the dynamics, the coupling of the order parameter to the boundary mode vanishes with increasing system size, and we thus consider the bulk gap extracted from periodic boundary conditions as the relevant frequency of the amplitude (Higgs) mode. Furthermore, we perform DMRG calculations to obtain the initial state of the quench dynamics shown in Fig. <ref>, which is the ground state of the Hamiltonian with additional local detunings |δ_l|/Ω = 0.7 that pin one sublattice of the checkerboard to the ground state. We evaluate the energy expectation value of this initial state with respect to the unpinned 2D Rydberg Hamiltonian and compare it with the energy of a ℤ_2 product state, confirming that the pinned state is indeed a low-energy state (see Fig. <ref>b). Dynamics from the low-energy pinned initial state lead to amplitude oscillations at frequencies matching the ground-state gap, which we simulate using TDVP <cit.> on a 10 × 10 lattice at a bond dimension χ = 256. We use time steps of 0.25 Ω^-1 and have verified that the resulting dynamics agree with those for smaller time steps of 0.1 Ω^-1. As seen in Extended Data Fig. <ref>b, for small systems with OBC, the dynamics in the ordered phase (here, Δ/Ω = 2) are dominated by a slow mode with frequency matching the boundary gap shown in Extended Data Fig. <ref>a.i. On top of this slow mode, we see the presence of a mode with a higher frequency matching the bulk gap shown in Extended Data Fig. <ref>a.ii. Therefore, in the following, we consider dynamics with PBC, which allow us to isolate the bulk mode. In Extended Data Fig. <ref>c, we observe clear oscillations of the order parameter deep in either phase, which we fit to a damped harmonic oscillator, ϕ(t) ≈ ϕ_0 + A cos(ωt + θ_0) e^-γt, with frequency ω, damping γ, offset ϕ_0, and amplitude A. We show the dependence of these parameters on the ratio Δ/Ω in Extended Data Fig. <ref>d. Away from the phase transition, the oscillation frequencies overlap with the previously obtained ground-state energy gaps and are robust to system size. We further see that the damping and amplitude become larger towards the transition, where the offset acquires a nonzero value. Close to the transition, the TDVP dynamics fail to converge in bond dimension and to fit the damped harmonic oscillator's functional form, as is apparent in Extended Data Fig. <ref>c.ii. Moreover, even away from the critical point, the limited bond dimension does not capture a finite damping rate of the oscillations. Dependence on local detuning strength We experimentally investigate the dependence of the amplitude mode on the strength of the applied local detunings at two points, Δ/Ω = 0 and Δ/Ω = 1.1. These data are obtained by performing the state-preparation sequence illustrated in Fig. <ref>a for varying strengths of the applied local Stark shift δ. For all other uses of the local detunings in this work, the pinning is applied at a constant magnitude, δ_0 = -2π × 12(2) MHz, and the corresponding state preparation for various Δ/Ω is documented in Extended Data Fig. <ref>c, d. Here, this strength is varied to much lower values (0.04δ_0-0.1δ_0) than for the saturated pinning of the ground state in the rest of the work (as indicated in Extended Data Fig. <ref>a, b) before significant differences in the resultant oscillations are observed. We find that at both values of Δ/Ω considered, the amplitude of the oscillations is progressively reduced as |δ| is decreased. The frequency of the oscillations also decreases with |δ|.
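The damped-oscillator fits used above can be reproduced with a short script; this is a sketch only, with a synthetic trace standing in for the measured or simulated order parameter, and all parameter values are illustrative.

import numpy as np
from scipy.optimize import curve_fit

def damped_cosine(t, phi0, A, omega, theta0, gamma):
    # phi(t) = phi_0 + A cos(omega t + theta_0) exp(-gamma t)
    return phi0 + A * np.cos(omega * t + theta0) * np.exp(-gamma * t)

# Synthetic stand-in for an order-parameter trace (time in units of 1/Omega):
t = np.linspace(0.0, 20.0, 200)
true = damped_cosine(t, 0.05, 0.3, 1.4, 0.2, 0.08)
trace = true + np.random.default_rng(1).normal(0, 0.005, t.size)

p0 = [0.0, 0.2, 1.0, 0.0, 0.05]                    # rough initial guesses
popt, pcov = curve_fit(damped_cosine, t, trace, p0=p0)
phi0, A, omega, theta0, gamma = popt
print(f"omega = {omega:.3f}, gamma = {gamma:.3f}")  # extracted frequency and damping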
The change in the oscillation frequency with |δ| is more pronounced near the critical point than in the disordered phase. The δ-dependence of the oscillation frequency, as well as the increased sensitivity near the phase transition, are qualitatively consistent with the behavior of an anharmonic oscillator, as predicted by the mean-field Eq. (<ref>). Amplitude mode in global sweeps The Higgs-mode dynamics are also apparent in parallel to coarsening, as manifested in the oscillations of the magnetization n_i (Extended Data Fig. <ref>a). Additionally, they can be clearly discerned in the dynamics of the two-point correlation function in position space, C(r), as shown in Extended Data Fig. <ref>b-d. The frequencies of the oscillations of the correlation length closely match those extracted from the quench protocol described below (Extended Data Fig. <ref>) and the calculated ground-state energy gap (the global sweep data are plotted in purple in Fig. <ref>). The interplay of the amplitude mode with coarsening dynamics is largely unexplored theoretically. We therefore note further exploration of the two processes occurring in parallel as a possible future extension of this work. Quenches in the ordered phase Although Higgs oscillations in the ordered phase can be extracted through the state-preparation sequence based on deterministic preparation with local detunings, as described in Fig. <ref>a, the amplitude of the oscillations is substantially reduced compared to that inside the disordered phase (see Δ/Ω = 1.5 in Fig. <ref>). We therefore perform an alternate state-preparation sequence to extract the amplitude-mode frequencies deep in the ordered phase, as shown in Extended Data Fig. <ref>a. First, local detunings are applied in a checkerboard pattern as the global detuning Δ is swept from negative values to Δ/Ω = 3.3, a point far inside the ordered phase. We hold the global detuning constant while quenching off the site-dependent δ. At this point, we quench Δ to its final detuning value in the ordered phase, yet closer to the phase transition. With this protocol, we observe, as with all other sweep protocols presented thus far, long-lived oscillations of the order parameter and correlation lengths, as shown in Extended Data Fig. <ref>b, c. The extracted oscillation frequencies ω are in close agreement with the ground-state energy gap in the ordered phase. Note that in Fig. <ref>d, the points in red at Δ/Ω = 2.0, 2.5 are measured via this quench protocol. Frequency doubling From the above-mentioned quenches to the ordered phase, as well as from data obtained with the local protocol, we extract the oscillations in both the order parameter and the correlation lengths. We find, as shown in Extended Data Fig. <ref>d, that in the ordered phase these two frequencies are approximately equal, while in the disordered phase they differ by ω_ξ/ω_m_s ≈ 2. The changing relationship between the two observables can be understood by the following symmetry argument. We begin by taking into account the dynamics of order-parameter fluctuations within a Gaussian approximation. Neglecting corrections to the effective mass due to fluctuations, the relevant equations of motion are <cit.>: ∂_t D^ϕϕ_k,t = 2 D^ϕπ_k,t, ∂_t D^ϕπ_k,t = D^ππ_k,t - (k^2 + q + 3λϕ^2) D^ϕϕ_k,t, ∂_t D^ππ_k,t = - 2 (k^2 + q + 3λϕ^2) D^ϕπ_k,t, where π_k(t) ≡ ∂_t ϕ_k(t) and D^ϕϕ_k,t ≡ ⟨ϕ(-k,t) ϕ(k,t)⟩_c.
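The mode frequencies implied by these equations, and by the mean-field Eq. (<ref>) above, can be cross-checked numerically. The following minimal sketch uses illustrative parameter values; the 3x3 matrix is simply the linear system above acting on (D^ϕϕ, D^ϕπ, D^ππ).

import numpy as np

def order_parameter_frequency(q, lam=1.0):
    """Small-oscillation frequency of delta-phi around the mean-field minimum."""
    phi0_sq = 0.0 if q > 0 else -q / lam           # phi_0^2
    return np.sqrt(q + 3.0 * lam * phi0_sq)        # sqrt(q) or sqrt(2|q|)

def correlation_mode_frequency(k_sq, q, lam=1.0, phi=0.0):
    """Largest eigenfrequency of the linear system for (D^phiphi, D^phipi, D^pipi)."""
    w2 = k_sq + q + 3.0 * lam * phi**2
    A = np.array([[0.0,       2.0, 0.0],
                  [-w2,       0.0, 1.0],
                  [0.0, -2.0 * w2, 0.0]])
    return np.abs(np.linalg.eigvals(A).imag).max()  # equals 2*sqrt(w2)

q = 1.0   # disordered phase: correlation mode oscillates at twice the order-parameter frequency
print(correlation_mode_frequency(0.0, q) / order_parameter_frequency(q))  # ~2.0
q = -1.0  # ordered phase: order parameter oscillates at sqrt(2|q|)
print(order_parameter_frequency(q))                                       # ~1.414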
From the correlation function D^ϕϕ_k,t, which corresponds to the structure factor discussed in the main text, one can extract the evolution of the correlation length. Expanding ϕ = ϕ_0 + δϕ and D^ϕϕ_k,t = D^ϕϕ_k + δD^ϕϕ_k,t, in the disordered phase an eigenmode analysis of Eqs. (<ref>)-(<ref>) yields a frequency spectrum 2√(k^2 + q). The smallest frequency, which is expected to set the correlation-length oscillations <cit.>, is thus 2√(q), i.e., twice that of the order parameter. In contrast, in the ordered phase, Eq. (<ref>) contains a term 6λϕ_0 D_k^ϕϕ δϕ(t), and the oscillation of the order parameter thus acts as a linear drive on the dynamics of two-point correlation functions. As such, the correlation length will oscillate at the corresponding frequency √(2|q|) of the order parameter. Frequency ratio of oscillations In Extended Data Fig. <ref> we present the full dataset of amplitude-mode oscillations (also shown in Fig. <ref>) as a function of the distance from the phase transition. As described above, Landau mean-field theory predicts the ratio of oscillation frequencies to be ω(-|q|)/ω(|q|) = √(2) <cit.>. However, beyond mean-field theory, the phase transition is described by the Wilson-Fisher fixed point, and the universal frequency ratio is shifted accordingly. Theoretical estimates based on both analytical and numerical methods yield a frequency ratio of approximately ω(-|q|)/ω(|q|) ≈ 1.9 <cit.>. We find that both the experimental data and MPS simulations deviate from the mean-field prediction and are suggestive of a similarly higher ratio. However, very close to the critical point, |(Δ-Δ_c)/Ω| ≲ 0.3, we observe deviations from this theoretical ratio. Possible explanations (besides the limited bond dimension for the MPS data) include finite-size effects (resulting, e.g., in a non-vanishing gap at the QPT point), possible errors in the QPT location, and the possibility that sufficiently close to the transition, the overdamped oscillations may no longer track the ground-state excitation gap. In future work, a detailed exploration of the region near the critical point via the amplitude mode could allow for higher-precision tests of this universal ratio. Data Availability The data that support the findings of this study are available from the corresponding author on reasonable request. Acknowledgements We thank Manuel Endres, Tout Wang, Dries Sels, Hannes Pichler, Maksym Serbyn and Ahmed Omran for helpful discussions. MPS simulations were performed using the TeNPy library <cit.>. We acknowledge financial support from the US Department of Energy (DOE Quantum Systems Accelerator Center, grant numbers DE-AC02-05CH11231 and DE-SC0021013), the DARPA ONISQ program (grant number W911NF2010021), the DARPA IMPAQT program (grant number 124356), the Center for Ultracold Atoms (an NSF Physics Frontiers Center), the National Science Foundation, and QuEra Computing. T.M. and J.F. acknowledge support from the Harvard Quantum Initiative Postdoctoral Fellowship in Science and Engineering. S.J.E. acknowledges support from the National Defense Science and Engineering Graduate (NDSEG) fellowship. D.B. acknowledges support from the NSF Graduate Research Fellowship Program (grant DGE1745303) and the Fannie and John Hertz Foundation. N.U.K. acknowledges support from The AWS Generation Q Fund at the Harvard Quantum Initiative. N.M. acknowledges support by the Department of Energy Computational Science Graduate Fellowship under award number DE-SC0021110. R.S. is supported by the Princeton Quantum Initiative Fellowship. D.A.H.
was supported in part by NSF QLCI grant OMA-2120757. S.S. acknowledges support by U.S. National Science Foundation grant DMR-2245246. Author contributions T.M., S.H.L., S.E., A.A.G., S.J.E., D.B., H.Z. and M.K. contributed to building the experimental setup, performed the measurements and analysed the data. R.S., N.U.K., J.F., P.E.D., N.M., S.S. and D.A.H. performed numerical simulations and contributed to the theoretical prediction and interpretation of the results. All work was supervised by S.S., D.A.H., M.G., V.V. and M.D.L. All authors discussed the results and contributed to the manuscript. Competing interests: M.G., V.V. and M.D.L. are co-founders and shareholders and H.Z. is an employee of QuEra Computing. Correspondence and requests for materials should be addressed to M.D.L.
http://arxiv.org/abs/2407.02073v1
20240702090543
Contribution Evaluation of Heterogeneous Participants in Federated Learning via Prototypical Representations
[ "Qi Guo", "Minghao Yao", "Zhen Tian", "Saiyu Qi", "Yong Qi", "Yun Lin", "Jin Song Dong" ]
cs.LG
[ "cs.LG" ]
Xi'an Jiaotong University, Xi'an, China / National University of Singapore, Singapore (qiguo@u.nus.edu); Xi'an Jiaotong University, Xi'an, China, 710129 (minghao_yao@stu.xjtu.edu.cn); Shanghai Jiao Tong University, Shanghai, China (lin_yun@sjtu.edu.cn); National University of Singapore, Singapore (dcsdjs@nus.edu.sg). ^* Equal contribution. ^† Corresponding author (saiyu-qi@xjtu.edu.cn). § ABSTRACT Contribution evaluation in federated learning (FL) has become a pivotal research area due to its applicability across various domains, such as detecting low-quality datasets, enhancing model robustness, and designing incentive mechanisms. Existing contribution evaluation methods, which primarily rely on data volume, model similarity, and auxiliary test datasets, have shown success in diverse scenarios. However, their effectiveness often diminishes due to the heterogeneity of data distributions, presenting a significant challenge to their applicability. In response, this paper explores contribution evaluation in FL from an entirely new perspective of representation. In this work, we propose a new method for the contribution evaluation of heterogeneous participants in federated learning (FLCE), which introduces a novel indicator, class contribution momentum, to conduct refined contribution evaluation. Our core idea is the construction and application of the class contribution momentum indicator from individual, relative, and holistic perspectives, thereby achieving an effective and efficient contribution evaluation of heterogeneous participants without relying on an auxiliary test dataset. Extensive experimental results demonstrate the superiority of our method in terms of fidelity, effectiveness, efficiency, and heterogeneity across various scenarios. Contribution Evaluation of Heterogeneous Participants in Federated Learning via Prototypical Representations Jin Song Dong July 8, 2024 ============================================================================================================= § INTRODUCTION Traditional centralized deep learning, which typically relies on collecting extensive privacy-sensitive data on centralized servers, faces substantial privacy and legal challenges <cit.>. To maintain local data privacy and comply with legal regulations, federated learning (FL) emerges as a solution that enables collaborative model training across multiple participants without sharing private data <cit.>. FL promotes the joint collaboration of isolated data sources to achieve greater collective benefits <cit.>. When participants' data are independently and identically distributed (IID) and equal in quantity in FL, it is logical to share the same global model training outcome among participants. However, participants' data often exhibit inherent heterogeneity in practical scenarios, making it infeasible to share the same outcome for all participants <cit.>.
Meanwhile, considering the wide applicability of contribution evaluation in detecting low-quality datasets, enhancing model robustness, and designing incentive mechanisms, etc. <cit.>, a reasonable and effective evaluation of heterogeneous participants' contributions to the FL process is necessary. Therefore, in this work, we focus on investigating the contribution evaluation of heterogeneous participants in FL, fostering the sustainable development of FL in practical applications. Existing methods for contribution evaluation in FL typically fall into three categories: data valuation-based methods <cit.>, model similarity-based methods <cit.>, and auxiliary test dataset-based methods <cit.>. A rough comparison of the different categories of contribution evaluation methods in FL is shown in  <ref>. Specifically, data valuation-based methods assume that contributions are positively correlated with data valuation <cit.>. The most straightforward approach in this category is to take the volume of participant data as the standard for evaluating their contribution to the FL process. However, due to differences in data sources, collection, cleaning, and integration processes among participants in practical scenarios, the quality of data provided by each participant is inherently variable <cit.>. Therefore, although this approach is efficient because it uses data valuation results directly, it may not be effective for contribution evaluation because such valuations are often unreliable in reality. In model similarity-based methods, contributions are evaluated by measuring the similarity of the participant's model to the global model, for example using the L_2 norm distance (a smaller distance indicates a greater contribution) <cit.>. These methods generally evaluate contributions more effectively than data valuation-based methods. However, in practice, vastly different model parameters can reach similar local optima under various random training conditions, rendering direct comparison impractical. Additionally, large model parameter counts not only decrease the efficiency of contribution evaluation but also bring about the curse of dimensionality when calculating similarities <cit.>. This phenomenon significantly complicates the process of accurately assessing model similarity, as the vast number of parameters can distort the perception of similarity and obscure meaningful comparisons. For auxiliary test dataset-based methods, it is assumed that a representative auxiliary test dataset exists with the same data distribution as the data from all participants <cit.>. The contribution of the participants can then be effectively evaluated based on the accuracy of their models on this dataset. However, the need for continuous testing on data makes these methods inefficient. Additionally, such data is often associated with privacy concerns, acquisition difficulties, and heterogeneity. Consequently, it is highly challenging to obtain an ideal auxiliary test dataset that accurately represents all participants <cit.>. Apart from auxiliary test dataset-based methods, which can handle heterogeneity by testing on an ideal test dataset (noting that such a dataset is hard to obtain), the other categories have not paid sufficient attention to heterogeneity. We aim to develop a new indicator that effectively and efficiently evaluates the contributions of heterogeneous participants without relying on an auxiliary test dataset.
Although this goal is challenging, we have found that data representations capture the model's current learning state and data mappings. They also have the advantage of reduced dimensionality compared to the model itself <cit.>. Therefore, we propose using data representations to evaluate the contributions of heterogeneous participants in FL. However, three main challenges remain. First, direct local average representations may not accurately reflect the actual contributions of heterogeneous participants due to the mutual influence of different class representations. Second, representations are dynamic and evolve towards stability, requiring consideration of their changes over rounds. Third, it is unfair not to recognize the contributions of participants not selected for training in each round. To this end, we propose a new method for contribution evaluation of heterogeneous participants in federated learning (FLCE), which introduces a novel indicator, class contribution momentum, to conduct refined contribution evaluation. Our core idea is the construction and application of the class contribution momentum indicator, thereby achieving an effective and efficient contribution evaluation of heterogeneous participants without relying on an auxiliary test dataset. Class contribution momentum consists of the class contribution mass and class contribution velocity, both of which are derived from the average representation of the same data class. Class contribution momentum effectively mitigates interference between different data classes in heterogeneous data contribution evaluation by differentiating the impact of different data classes. It also reflects the representational mass and variation of the participant's local data in the trained model, making it an effective foundational indicator for evaluating contributions of heterogeneous participants. Furthermore, the dimensions of representations are much smaller than those of the entire model and do not grow with the number of model parameters, enabling efficient computation during the evaluation of contributions. Specifically, FLCE evaluates contributions from three perspectives: (1) the individual perspective from the autonomous contribution of each participant, (2) the relative perspective from contribution differences across training rounds, and (3) the holistic perspective from the collective contribution of all participants. From the individual perspective of the autonomous contribution of each participant, we first compute local data representations through each participant's locally trained model and then aggregate these representations by class. The centroid of these class representations, termed the class prototype, represents the model's representational capacity for that class and signifies the mass of each class contribution. Then, these models and class prototypes are uploaded to the central server. From the relative perspective of contribution differences across training rounds, we consider changes in each class prototype between rounds, representing the velocity of each class contribution under model training. Based on the class contribution mass and velocity, we introduce the concept of class contribution momentum, representing the contribution of each data class.
From the holistic perspective of the collective contribution of all participants, considering that only a subset of participants is selected for aggregation in each round, we further propose a class contribution momentum completion technique to complete missing class contribution momentums in each round. Meanwhile, we also consider the different importance of distinct categories. These three perspectives build on each other progressively, working collaboratively to effectively and efficiently evaluate the nuanced contributions of heterogeneous participants throughout the training cycle. Extensive experimental results demonstrate FLCE's superior performance in evaluating the contributions of heterogeneous participants. FLCE exhibits high fidelity to the actual performance of the global model, effectively differentiates the contributions of heterogeneous participants, and efficiently computes contribution scores without relying on an auxiliary test dataset. In summary, the key contributions of our work are as follows: (1) To the best of our knowledge, this is the first work to introduce a representation-based approach for evaluating contributions in federated learning without an auxiliary test dataset. Concurrently, we propose Class Contribution Momentum, a novel indicator for contribution evaluation in federated learning. This work marks a groundbreaking shift in the paradigm of contribution evaluation research within federated learning, offering a viable and unexplored perspective. (2) We present FLCE, a new method for evaluating contributions from heterogeneous participants in federated learning. Utilizing individual, relative, and holistic perspectives, this method enables an effective and efficient contribution evaluation of heterogeneous participants without relying on an auxiliary test dataset. (3) Our investigation is the first to involve two critical yet previously neglected issues in federated learning contribution evaluation: the contributions of participants not selected in the current training round and the different importance of distinct categories in contribution evaluation. (4) Our extensive experiments illustrate FLCE's superiority in evaluating contributions from heterogeneous participants in terms of performance fidelity, effectiveness, efficiency, and handling of heterogeneity. § RELATED WORKS §.§ Federated Learning Federated Learning (FL) is a new paradigm that addresses the conflict between privacy protection and knowledge acquisition by training local models across multiple decentralized participants. In this approach, instead of transferring raw data to a central server, participants train models on their own data and devices. They then upload these models to the server, where they are aggregated (e.g., using FedAvg <cit.>, which averages the local model parameters of participants) before being redistributed. This collaborative learning method enables privacy-preserving model training without exposing sensitive data. However, the original FL framework faces several challenges throughout the training stages <cit.>. During the interaction between participants and the server, the training process may be impeded by issues such as device or network heterogeneity <cit.>. Additionally, the aggregation and distribution of the global model can be vulnerable to privacy breaches and poisoning attacks if malicious actors are involved <cit.>.
Moreover, a fundamental challenge is that the data heterogeneity among participants significantly impacts the efficiency and performance of FL <cit.>. §.§ Data Heterogeneity Data heterogeneity among participants primarily involves differences in data distribution, size, categories, and noise <cit.>. Previous studies have proposed various methods to address these heterogeneity issues <cit.>. Tian et al. <cit.> improved performance in heterogeneous environments by adding regularization, while Fang et al. <cit.> reduced the impact of noise in heterogeneous datasets by assigning weights to participants. However, these methods often struggle to fully address data heterogeneity, as they focus primarily on the data itself and on exploring different data types. FedCA <cit.> was the first to merge contrastive learning with FL in an unsupervised manner, while MOON <cit.> uses supervised contrastive learning to boost model performance. Additionally, several studies have validated the effectiveness of these representations in heterogeneous scenarios <cit.>, mainly focusing on enhancing model performance. The improvement in performance is largely due to the reasonable allocation of local model weights, laying the groundwork for better contribution evaluation. §.§ Fairness Fairness presents a significant challenge in FL and is closely related to research on contribution evaluation. In this field, various concepts of fairness are considered, each focusing on different aspects. Some studies emphasize performance distribution fairness, which assesses consistency in performance across client devices in FL <cit.>. Others, such as group fairness, aim to reduce discrepancies in algorithmic decisions among diverse groups <cit.>. Additionally, some research seeks to minimize the maximum loss for protected groups, thus preventing overfitting to any specific model at the expense of others <cit.>. However, existing fairness-oriented approaches face challenges in evaluating participant contributions in real-world scenarios. These methods struggle to accurately and efficiently evaluate participant contributions, which is crucial for attracting excellent local models for global model updates. §.§ Contribution Evaluation in Federated Learning Contribution evaluation in FL has emerged as a critical research area due to its applicability across various domains, including detecting low-quality participants, enhancing model robustness, designing incentive mechanisms, and accelerating model convergence <cit.>. Given the inherent challenges of data heterogeneity in FL, it is crucial to develop a reasonable and effective method for evaluating the contributions of heterogeneous participants. Previous methods for evaluating contributions in FL can typically be grouped into three categories: data valuation-based, model similarity-based, and auxiliary test dataset-based methods. Initially, contributions can be evaluated by the volume of data from participants, with methods like FedAvg <cit.> and FedProx <cit.> assigning weights based on data size. Additionally, Ditto <cit.> uses data volume to balance fairness and robustness in personalized learning. However, data volume alone may not fully reflect a participant's contribution to the global model. Thus, evaluating the similarity between local and global models becomes a viable approach.
For example, FedFV <cit.> mitigates potential conflicts among participants to achieve fairness; CGSV <cit.> evaluates contributions by calculating the cosine similarity between participants and the global model; FedMDFG <cit.> ensures fairness by finding appropriate model update directions and step sizes. Auxiliary test datasets also play a crucial role in overcoming the limitations of data scale and model similarity evaluations due to their flexibility. For instance, q-FedAvg <cit.> ensures fairness by uploading the cross-entropy on auxiliary test datasets; FedFa <cit.> allocates aggregation weights by uploading participant accuracy and participation frequency. Moreover, some works rely on game theory to evaluate each participant's effect; these also require auxiliary test datasets, and computing contribution evaluation metrics such as the Shapley value takes a significant amount of time <cit.>. These methods often struggle to efficiently and effectively evaluate the contributions of heterogeneous data from participants, impacting the fairness of weight allocation during model aggregation and potentially disadvantaging some participants. Through the construction and application of the class contribution momentum indicator, our proposed method achieves an effective and efficient contribution evaluation of heterogeneous participants without relying on an auxiliary test dataset. § METHODOLOGY §.§ Problem Definition and Notation In FL, there are n participants and one central server. Each participant has a local private dataset 𝒟_k = {(x_i, y_i)}_i=1^|𝒟_k|, k ∈ {1, 2, ..., n}, where x_i ∈ ℝ^I represents the I-dimensional feature vector of a sample, y_i is the one-hot vector of the ground truth label, and |𝒟_k| is the size of dataset 𝒟_k. The goal of FL is to enable all participants to jointly train a shared global model using their individual private datasets, which can be formulated as an optimization problem: min_w ∈ ℝ^d F(w) := ∑_k=1^n α_k F_k(w), where n is the total number of participants, w ∈ ℝ^d denotes the d parameters of the global model (such as the weights of a neural network), α_k > 0 with ∑_k α_k = 1, F_k(w) = 𝔼_(x_i, y_i) ∼ 𝒟_k[ℓ(w; (x_i, y_i))] represents the expected risk of the k-th participant, and ℓ(w; (x_i, y_i)) is the participants' loss function. Our study considers a classic FL scenario in which a trusted third party acts as the central server, and the n non-malicious participants engaged in FL are presumed to be heterogeneous. Notably, the datasets possessed by these non-malicious participants may contain some label noise and feature noise, stemming from the complexity of data collection and processing in real-world scenarios. While the absolute contribution values of each participant may vary depending on specific tasks and settings, the normalized relative contributions among different participants exhibit universality in practice. Therefore, our contribution evaluation aims to evaluate the normalized contribution proportion of each participant relative to all participants in the entire FL process, where the sum of all participants' contribution proportions equals 1. In FL with heterogeneous participants, our goal is to conduct an effective and efficient contribution evaluation of heterogeneous participants without relying on an auxiliary test dataset. §.§ Overview We present a brief overview of our proposed method, FLCE, for the contribution evaluation of heterogeneous participants in FL, as illustrated in  <ref>.
FLCE is a client-server architecture framework that is consistent with the standard FL framework. We adopt a tripartite perspective to conduct FLCE, encompassing the individual perspective, the relative perspective, and the holistic perspective. The individual perspective focuses on the autonomous contribution of each participant. The relative perspective examines the contribution differences across training rounds. Lastly, the holistic perspective considers the collective contribution of all participants. The details of FLCE are presented in the subsequent content. §.§ The individual perspective from the autonomous contribution of each participant For contribution evaluation in FL, the most straightforward approach is to evaluate the contribution of each participant in the current training round. This reflects the individual contribution of the participants selected in each round, thus representing the individual perspective of the autonomous contribution of each participant. Directly using average representations of the participant's local data, processed through the post-training local model, may be ineffective in accurately reflecting the participant's current-round contribution due to the mutual influence and interference of representations of different classes. Considering the unique data distribution of each participant, to better capture their contributions, it is important to identify both the commonality and the difference in representations among participants. The commonality lies in all participants' data sharing a common latent class-aware data distribution space. Because each participant has only its own private data, each participant covers only a subset of the complete latent class-aware distribution space. Considering that the central representation of each data class, also known as the class prototype, can be viewed as an effective representation of that class under the current model, we propose utilizing the class prototype as a reference for contribution evaluation, termed class contribution mass. From the viewpoint of class prototypes, we deconstruct the complete latent data distribution space into separate class-aware data distribution spaces. This approach effectively mitigates the interference between representations of different data classes within each participant. It also facilitates the collaboration of different class distributions among all participants. Specifically, each participant, after receiving the global model from the central server, trains the model with their local data. The loss of local training for a batch of N samples can be expressed as follows: ℒ = ℒ_CE + λℒ_CL, ℒ_CE = -1/N ∑_i=1^N y_i log(ŷ_i), ℒ_CL = -1/N ∑_i=1^N 1/N_y_i ∑_j=1^N 1_y_i = y_j log( e^sim(z_i, z_j)/τ / ∑_k=1^N 1_i ≠ k e^sim(z_i, z_k)/τ ), where ℒ_CE is the cross-entropy loss, ℒ_CL is the contrastive loss, λ is the coefficient balancing the cross-entropy and contrastive losses, ŷ_i is the probability output predicted by the model for the i-th sample, and z_i (z_j) denotes the representation of the input x_i (x_j). We use an encoder to extract the representation z from an input x. For a given i-th sample, N_y_i is the number of samples in the batch that share the same label as the i-th sample. The similarity measure sim(z_i, z_j) quantifies the resemblance between the representations of the i-th and j-th samples and is typically computed using the dot product or cosine similarity. The parameter τ serves as a temperature scaling factor, modulating the smoothness of the distribution.
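As an illustrative sketch of this batch loss (not the authors' released code), the combined objective can be computed as below; the encoder outputs, the λ and τ values, and the use of cosine similarity for sim are our placeholder choices.

import numpy as np

def combined_loss(logits, z, labels, lam=1.0, tau=0.5):
    """Cross-entropy plus supervised contrastive loss for one batch (forward pass only)."""
    N = len(labels)
    # Cross-entropy term: -1/N * sum_i log(softmax(logits)_i at the true class).
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    ce = -np.log(probs[np.arange(N), labels]).mean()
    # Supervised contrastive term on L2-normalized representations.
    z = z / np.linalg.norm(z, axis=1, keepdims=True)      # cosine similarity via dot product
    sim = z @ z.T / tau
    cl = 0.0
    for i in range(N):
        pos = (labels == labels[i])                       # same-label samples (N_{y_i} of them)
        denom = np.exp(sim[i][np.arange(N) != i]).sum()   # denominator excludes k = i
        cl -= np.log(np.exp(sim[i][pos]) / denom).sum() / pos.sum()
    return ce + lam * cl / N

rng = np.random.default_rng(0)
logits, z = rng.normal(size=(64, 10)), rng.normal(size=(64, 64))
labels = rng.integers(0, 10, size=64)
print(combined_loss(logits, z, labels))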
In these expressions, the indicator function 1_condition yields 1 when the condition is true, and 0 otherwise. The cross-entropy loss ℒ_CE is a fundamental loss function in supervised learning. Alongside it, we introduce the contrastive loss function ℒ_CL in supervised learning. This function aims to increase the similarity for pairs of samples with the same label and decrease it for those with different labels, which improves the reliability of prototypes. Then, the participant processes local data with the trained model to generate representations. These representations are then grouped by class to create class prototypes as follows: z_y = 1/N_y ∑_i=1^N 1_y_i = y z_i, where the average representation for a specific class y, denoted as z_y, is computed by averaging the representations of all samples correctly classified as belonging to class y. Here, N_y represents the count of samples in the batch that are of class y. The representation of the i-th sample is denoted by z_i, and its corresponding label is y_i. The function 1_y_i = y acts as an indicator, equating to 1 when the label of the i-th sample matches the class y, and 0 otherwise. This formulation enables the computation of the centroid of the representations for a specific class, reflecting the average location in the feature space of the samples correctly identified as belonging to that class. Afterward, the trained local model and class prototypes are uploaded back to the central server. In the t-th global training round, K participants are selected for federated training. After local training is completed, the central server receives the models and prototypes uploaded by these participants. The prototype for the c-th class from the k-th participant is represented as p^t_k,c. The class contribution mass ℳ^t_k,c of the c-th class prototype from the k-th participant is calculated as follows: ℳ^t_k,c = cos(p^t_k,c, p̂^t_c) / ∑_k=1^K cos(p^t_k,c, p̂^t_c), where p̂^t_c = ∑_k=1^K 𝓈^t_k,c p^t_k,c represents the weighted average prototype of the c-th class. Here, 𝓈^t_k,c is the normalized weight of the cosine similarity of p^t_k,c relative to 1/K ∑_k=1^K p^t_k,c. Class contribution mass effectively captures and measures the unique and specific contribution of each participant in the learning process, focusing on individual class-level contributions rather than general participation. The significance of class contribution mass lies in its ability to reflect the individual and autonomous contribution of each participant within a federated learning round. This measure takes into account not only the commonality shared among all participants in terms of their latent class-aware distribution spaces but also acknowledges the unique data distributions of individual participants. Since each participant possesses only a subset of the complete latent class-aware distribution space, the class contribution mass becomes an effective metric to evaluate their specific contributions. §.§ The relative perspective from contribution differences across training rounds If there is minimal change in the class prototype relative to the previous global class prototype, it suggests a smaller contribution by the participant for that specific class in the current round. Conversely, a significant change in the class prototype indicates a larger contribution. Therefore, it is essential to consider the changes in class prototypes between consecutive rounds.
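A minimal sketch of the prototype computation and the class contribution mass above follows; the shapes, names, and random stand-ins for uploaded prototypes are ours, not the paper's implementation.

import numpy as np

def class_prototypes(z, labels, num_classes):
    """Per-class mean representation (class prototype) for one participant."""
    return np.stack([z[labels == c].mean(axis=0) for c in range(num_classes)])

def class_contribution_mass(prototypes_c):
    """Mass for one class c; prototypes_c has shape (K, dim), one row per selected participant."""
    cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    mean_proto = prototypes_c.mean(axis=0)
    # Normalized cosine-similarity weights s_{k,c} relative to the plain average prototype.
    s = np.array([cos(p, mean_proto) for p in prototypes_c])
    s /= s.sum()
    weighted_avg = (s[:, None] * prototypes_c).sum(axis=0)    # weighted average prototype
    m = np.array([cos(p, weighted_avg) for p in prototypes_c])
    return m / m.sum()                                        # sums to 1 over participants

rng = np.random.default_rng(2)
z, labels = rng.normal(size=(256, 64)), rng.integers(0, 10, size=256)
protos_one_participant = class_prototypes(z, labels, num_classes=10)
protos_for_class_c = rng.normal(size=(10, 64)) + 1.0          # K = 10 selected participants
print(class_contribution_mass(protos_for_class_c))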
Such a change reflects the divergence of the class prototype obtained from local training in the current round relative to the global class prototype derived from the previous round. Acknowledging the impact of these class prototype changes across rounds on contribution evaluation, we introduce the concept of class contribution velocity. On the central server, we maintain the latest global class prototypes. In the t-th round of global training, the most recent global prototype for the c-th class is denoted as g^t-1_c. Consequently, we can define the class contribution velocity 𝒱^t_k,c of the c-th class prototype from the k-th participant as follows: 𝒱̄^t_k,c = ‖p^t_k,c - g^t-1_c‖^2 / ∑_k=1^K ‖p^t_k,c - g^t-1_c‖^2, 𝒱^t_k,c = 𝒱̄^t_k,c / ∑_k=1^K 𝒱̄^t_k,c, where 𝒱̄^t_k,c is the normalized distance between each selected participant's prototype and the global class prototype from the previous round. We then normalize 𝒱̄^t_k,c to obtain the class contribution velocity. Class contribution velocity focuses on the dynamic nature of participants' contributions across successive training rounds, offering a detailed understanding of how each participant's contribution evolves during the learning process. By examining the changes in class prototypes between consecutive training rounds, class contribution velocity captures the divergence of these prototypes as they evolve. Essentially, class contribution velocity serves as a dynamic indicator, which contextualizes the participant's current contribution within the broader trajectory of the federated training process. Class contribution mass captures the static, individual autonomous contributions in the current training round, while class contribution velocity indicates the dynamic, relative changes in contributions across successive training rounds. To enhance the evaluation of participant contributions, we introduce the concept of class contribution momentum. This concept combines class contribution mass and velocity, offering a more comprehensive view of participant engagement. Class contribution momentum is quantified as the normalized product of class contribution mass and class contribution velocity, as follows: 𝒬̄^t_k,c = ℳ^t_k,c 𝒱^t_k,c, 𝒬^t_k,c = 𝒬̄^t_k,c / ∑_k=1^K 𝒬̄^t_k,c, where 𝒬^t_k,c denotes the class contribution momentum of participant k with class c in round t. We can use the class contribution momentum to obtain the global prototype g_c^t of the c-th class at the t-th round as follows: g_c^t = ∑_k=1^K 𝒬^t_k,c p^t_k,c. Moreover, we can also use the class contribution momentum for model aggregation to obtain an updated global model w^t+1 as follows: w^t+1 = ∑_k=1^K ( ∑_c=1^C 𝒬_k,c^t / ∑_k=1^K ∑_c=1^C 𝒬_k,c^t ) w_k^t, where w_k^t is the model uploaded by the k-th client at the t-th round. By merging class contribution mass and velocity, class contribution momentum provides a more holistic evaluation of participant contributions in FL. Class contribution momentum allows for a nuanced evaluation that captures both the immediate, static contribution of participants and their ongoing, dynamic involvement across training rounds. It not only acknowledges the immediate value brought by participants in a single round but also their evolving contribution throughout the learning process, providing a reasonable and interpretable contribution evaluation for federated training.
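The velocity, momentum, and momentum-weighted aggregation above can be sketched as follows. This is a NumPy illustration with random stand-ins for uploaded models and prototypes; the intermediate bar-normalizations collapse into a single step here, which is equivalent because the final momentum is renormalized anyway.

import numpy as np

def class_contribution_momentum(protos, prev_global_protos, mass):
    """Velocity and momentum for one round.

    protos:             (K, C, dim) prototypes uploaded this round
    prev_global_protos: (C, dim) global class prototypes g^{t-1}
    mass:               (K, C) class contribution mass M^t_{k,c}
    """
    dist = ((protos - prev_global_protos[None]) ** 2).sum(axis=-1)   # ||p - g||^2, shape (K, C)
    velocity = dist / dist.sum(axis=0, keepdims=True)                # normalize over participants
    momentum = mass * velocity
    return momentum / momentum.sum(axis=0, keepdims=True)            # Q^t_{k,c}

def aggregate(momentum, models, protos):
    """Momentum-weighted global model w^{t+1} and global class prototypes g^t_c."""
    weights = momentum.sum(axis=1) / momentum.sum()                  # per-participant weight
    global_model = np.tensordot(weights, models, axes=1)
    global_protos = np.einsum('kc,kcd->cd', momentum, protos)
    return global_model, global_protos

rng = np.random.default_rng(3)
K, C, dim, P = 10, 10, 64, 1000
protos = rng.normal(size=(K, C, dim)); prev = rng.normal(size=(C, dim))
mass = np.full((K, C), 1.0 / K); models = rng.normal(size=(K, P))
Q = class_contribution_momentum(protos, prev, mass)
w_next, g_next = aggregate(Q, models, protos)
print(Q.sum(axis=0))   # each class's momentum sums to 1 over participants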
§.§ The holistic perspective from the collective contribution of all participants With the establishment of the class contribution momentum, we can now directly calculate each participant's contribution by summing their class contribution momentums across the various categories. However, two issues still need to be solved. First, only a select group of participants in FL is chosen for federated training in each round. Those not selected are excluded from the contribution evaluation. This can potentially lead to imbalances in contribution allocation due to selection strategies or randomness in the training process. Second, the significance of the contribution from each round may continually vary across different training rounds in the FL cycle. Concurrently, the importance of contributions from different data classes could also differ. Therefore, it is essential to consider both the distribution of total contributions over training rounds and the varying importance of different data classes. In light of these two issues, we must analyze contribution evaluations with a holistic perspective on the collective contribution of all participants. To tackle the first issue, we introduce a class contribution momentum completion technique. This technique, taking a global view of training across all participants, uses matrix factorization and completion to estimate the contributions of participants not selected in each training round. After completing all training rounds, we obtain a real contribution matrix X that records the contribution 𝒬^t_k,c for each round t, participant k, and class c. However, in the FL framework, only a subset of participants is chosen for each round, and those not selected contribute zero, regardless of the quality of their data. Ideally, two participants with identical data should receive the same contribution result. Yet, the selection mechanism can lead to discrepancies in which non-selected clients do not contribute. To mitigate this fairness issue, we need to complete the real contribution matrix X of FLCE to obtain an approximate contribution matrix X̂ that approaches the ideal scenario <cit.>. The contribution matrix completion technique can be explained as follows: min_U,V E = ||X - X̂||^2 = ||X - UV||^2, where E is the error function measuring the distance between the real contribution matrix X and the approximation matrix X̂. To acquire the approximation matrix X̂, we perform the matrix factorization X̂ = UV, where U (of size m × k) and V (of size k × n) are the factor matrices, and m and n denote the number of rounds and participants, respectively. To expedite computation, we employ a low-rank matrix factorization with k < min{m, n}. The error ||X - X̂|| quantifies the difference between the real and approximated matrices. We use gradient descent to estimate U and V approximately and compose the approximate contribution matrix without missing values (see the sketch below). Obtaining an approximated matrix that closely resembles the real-world scenario allows our algorithm's results to improve from an unfair contribution matrix to a relatively fair result, which maintains the interests of participants left out by the selection mechanism. To address how the total contribution is distributed across different rounds in the entire training cycle and the difference in the contribution importance of different data classes, we introduce two concepts below: the global contribution distribution vector and the class contribution distribution vector.
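Before turning to these two vectors, here is a minimal sketch of the completion step referenced above: gradient descent on the factorization, where restricting the loss to the observed (selected-participant) entries is our assumption of the standard masking practice, and all sizes and values are illustrative.

import numpy as np

def complete_contributions(X, observed, rank=5, lr=0.01, steps=2000, seed=0):
    """Low-rank completion min_{U,V} ||X - UV||^2 over observed entries.

    X:        (m, n) contribution matrix, rounds x participants
    observed: boolean mask, True where the participant was selected that round
    """
    rng = np.random.default_rng(seed)
    m, n = X.shape
    U = 0.1 * rng.normal(size=(m, rank))
    V = 0.1 * rng.normal(size=(rank, n))
    for _ in range(steps):
        R = (X - U @ V) * observed     # residual on observed entries only
        U += lr * R @ V.T              # gradient step on -E (constant factor absorbed in lr)
        V += lr * U.T @ R
    return U @ V                       # X_hat: all entries filled in

rng = np.random.default_rng(1)
true = rng.random((100, 5)) @ rng.random((5, 50))     # synthetic low-rank ground truth
mask = rng.random(true.shape) < 0.2                   # ~10 of 50 participants seen per round
X_hat = complete_contributions(np.where(mask, true, 0.0), mask)
print(np.abs(X_hat - true)[~mask].mean())             # error on never-observed entries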
The global contribution distribution vector shows the spread of total contribution across different rounds. It is defined as: A = (a_1, a_2, ..., a_T), where T is the total number of global training rounds. When each element in the vector equals 1/T (the reciprocal of the total number of rounds), it indicates the typical scenario in which contributions are evenly distributed across all rounds. Additionally, the class contribution distribution vector indicates the significance of contributions in different classes. It is defined as: B = (b_1, b_2, ..., b_C), where C is the total number of categories. When each element in the vector equals 1/C (the reciprocal of the total number of categories), it represents the common case in which all categories are equally important in contribution evaluation. Finally, we can calculate each participant's contribution in FL. The contribution of the k-th participant in a complete FL cycle is given as follows: 𝒞ℰ̄_k = ∑_t=1^T a_t ∑_c=1^C b_c 𝒬_k,c^t, 𝒞ℰ_k = 𝒞ℰ̄_k / ∑_k=1^n 𝒞ℰ̄_k. As a result, we obtain the final contribution evaluation result {𝒞ℰ_k}_k=1^n for all participants in FL. FLCE adopts a tripartite perspective, encompassing individual, relative, and holistic contributions. This comprehensive approach ensures that each participant's contribution is evaluated from different dimensions, providing a more complete and nuanced understanding of their role in the FL process. The complete description of FLCE is presented in Algorithm <ref>. § EXPERIMENTS §.§ Experimental Setup §.§.§ Datasets and Network Architecture We evaluated the performance of FL methods using three real-world datasets: CIFAR-10 <cit.>, CIFAR-100 <cit.>, and EuroSAT <cit.>. CIFAR-10 <cit.>: This public dataset for image classification consists of 60,000 32x32 color images distributed across 10 categories. Each category has 6,000 images, with the dataset split into 50,000 training images and 10,000 testing images. CIFAR-100 <cit.>: This dataset is designed for image classification and covers a wide range of objects and scenes. It includes 60,000 32x32 color images distributed across 100 categories. Each category contains 500 training images and 100 test images. EuroSAT <cit.>: This dataset for Earth observation and remote sensing image classification comprises 27,000 64x64 color satellite images from various regions in Europe. The dataset has 10 classes. Each class has 2,160 training images and 530 testing images. In the experiments, we employ ResNet20 as the default network architecture, which includes 20 convolutional layers and is part of the residual network family <cit.>.
Ditto can inherently provide fair contribution evaluations and robustness. FedFV <cit.>: This method is designed to address fairness in FL. It aims to reduce potential conflicts between clients before averaging gradients. The algorithm initially utilizes cosine similarity to detect gradient conflicts and then iteratively eliminates such conflicts by modifying the direction and magnitude of the gradients. CGSV <cit.>: This approach utilizes the cosine similarity of local and global models to evaluate the contribution of participants. MOON <cit.>: This algorithm uses the similarity in model representations to enhance the local training of individual participants. q-FedAvg <cit.>: q-FedAvg introduces a parameter 'q' to control the contributions of local models in the aggregation process. It offers a flexible approach in FL, allowing adjustments to the aggregation mechanism based on local model performance. FedFa <cit.>: It introduces a dual-momentum gradient optimization scheme, which accelerates the model's convergence speed. The proposed algorithm combines training accuracy and training frequency information to measure the weights, aiding clients in participating in server aggregation with fairer weights. FedSV <cit.>: We use the canonical Shapley value to calculate the contribution of participants. Due to the computational complexity, we employ a Monte-Carlo estimation of the Shapley value, which is conducted by randomly sampling participant permutations and eliminating unnecessary sub-model utility evaluations. §.§.§ Metrics In our experiments, we evaluate performance using three primary metrics: accuracy, F1 score, and Kullback-Leibler (KL) divergence. KL divergence is a statistical measure quantifying the dissimilarity between two probability distributions. Some previous papers have utilized metrics such as cosine distance <cit.> or Euclidean distance <cit.> to assess the difference between contributions and evaluation criteria. In our study, considering the normalized relative contribution of individual participants and the overall contribution evaluation of all participants, we utilize KL divergence to assess the effectiveness of the various contribution evaluation methods in FL. Specifically, we compare the distribution of data quality against the distribution of contribution evaluation results obtained from different methods. The KL divergence between two distributions P and Q is defined as KL(P‖Q). Q represents the distribution of data quality, with each element denoting the normalized data quality of an individual participant (i.e., the proportion of a participant's data volume relative to the total data volume across all participants). P represents the distribution of contribution evaluation results, where each element is the normalized contribution proportion as determined by the evaluation method. A smaller KL divergence value indicates greater similarity between the two distributions, suggesting superior effectiveness of the contribution evaluation method. However, few FL algorithms are specifically aimed at evaluating contributions. If a baseline can directly calculate contributions (e.g., FedSV) or aggregation weights (e.g., FedFa), we use the corresponding results. Otherwise, we compute the similarity of each participant's local model to the global model and normalize it as the contribution value for the current round. §.§.§ Federated Learning Setting and Details In our experiments, we set the total number of clients to 50, with 10 clients selected per round.
The training was conducted for 1000 rounds. The default Dirichlet coefficient is δ = 0.5 for the Non-IID scenario. The accuracy and F1 score are the average test performance of the global model over the last hundred rounds. We used a batch size of 64, a learning rate of 0.01, and a prototype size of 64. The experiments were conducted on a server with Ubuntu 20.04.3 LTS, an Intel(R) Xeon(R) Gold 6226R 2.90 GHz CPU, and an NVIDIA A100 Tensor Core GPU with 80 GB RAM. §.§ Fidelity Previous contribution evaluation methods in FL often overlooked the impact on the global model's performance, focusing mainly on objectives like contribution evaluation or fairness. For effective contribution evaluation in FL, ensuring that the proposed method does not harm the global model's performance is crucial. Our primary focus is on our method's performance fidelity. To demonstrate this, we compared our method against nine baseline algorithms, presenting the fidelity results in Table <ref>. On the CIFAR-10 and CIFAR-100 datasets, FLCE demonstrates superior performance in both IID and Non-IID settings. Specifically, for CIFAR-10, FLCE achieves the highest accuracy and F1 score, marking 89.11% and 88.99% in the IID setting and 85.06% and 84.89% in the Non-IID setting, respectively. This outperforms the second-best model, FedSV, which achieves a notable but lower accuracy and F1 score in the Non-IID setting for CIFAR-10. The performance trend is consistent on the CIFAR-100 dataset. Turning our attention to the EuroSAT dataset, FLCE continues to exhibit exemplary performance, especially in the Non-IID setting, where it achieves the highest F1 score of 96.66% and a top accuracy of 96.79%. Notably, while FedSV shows a marginally better accuracy in the IID setting with 97.68%, FLCE's performance remains competitive, with an accuracy of 97.66% and the highest F1 score of 97.57%, illustrating its consistent effectiveness across different types of datasets. The higher performance of FedSV is mainly due to its extensive computation and verification based on the auxiliary test dataset. The experimental results affirm the excellence and generalizability of FLCE in terms of performance fidelity. FLCE demonstrates that it is possible to balance accurate contribution evaluation with enhanced overall model performance. The reason is that FLCE enables better contribution evaluation (see <ref>) and thus a better weight distribution over client models, which leads to consistently superior model performance. §.§ Effectiveness The purpose of this experiment is to assess the effectiveness of FLCE in the contribution evaluation of heterogeneous participants. This assessment is crucial to ascertain the method's ability to provide fair and accurate contribution evaluations across diverse scenarios. The metric is the Kullback-Leibler (KL) divergence between the contribution distribution calculated by the algorithm and the actual contribution distribution: the smaller the divergence, the closer the two distributions, and the better the contribution evaluation. The effectiveness results, depicted in  <ref>, show the KL divergence scores for the FLCE method compared to the nine baselines across the CIFAR-10, CIFAR-100, and EuroSAT datasets in both IID and Non-IID settings. With the exception of FedSV, FLCE consistently achieves the lowest KL divergence scores, indicating closer alignment between participants' actual contributions and their evaluations.
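For reference, the effectiveness metric itself can be sketched in a few lines; the distributions below are toy stand-ins, not experimental values.

import numpy as np

def kl_divergence(P, Q, eps=1e-12):
    """KL(P || Q) between normalized contribution and data-quality distributions."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    return float(np.sum(P * np.log((P + eps) / (Q + eps))))

Q = np.array([0.3, 0.3, 0.2, 0.1, 0.1])            # normalized data quality
P_good = np.array([0.28, 0.31, 0.21, 0.11, 0.09])  # close to Q -> small KL
P_poor = np.array([0.2, 0.2, 0.2, 0.2, 0.2])       # uniform guess -> larger KL
print(kl_divergence(P_good, Q), kl_divergence(P_poor, Q))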
The Shapley-value-based methods have remained at the forefront of performance and contribution evaluation effectiveness because they utilize huge amounts of additional computing and verification resources. To the best of our knowledge, this is the first work in which a non-Shapley-based approach to contribution evaluation surpasses Shapley-value-based methods in certain scenarios. The FLCE method demonstrates superior effectiveness in contribution evaluation, evidenced by its consistently lower KL divergence scores across all datasets and settings. This suggests a more accurate and fair evaluation of participants' contributions. This success is largely due to its innovative use of class contribution momentum, which allows for a more nuanced evaluation that considers the quality and category of data each participant provides. Unlike traditional methods that might oversimplify the contribution evaluation process, FLCE's approach ensures a more equitable and comprehensive evaluation, leading to enhanced model performance (as discussed in <ref>) and contribution evaluation effectiveness. §.§ Ablation Studies To better understand FLCE, we conducted ablation studies to evaluate the impact of its key components. Each experiment was set up identically, except for the variable of interest being tested. We constructed five variants of FLCE as follows: 1) FLCE^-ℳ: This variant removes the class contribution mass component on the server. 2) FLCE^-𝒱: This variant removes the class contribution velocity component on the server. 3) FLCE^-𝒬: This variant removes the entire class contribution momentum component on the server. 4) FLCE^-𝒞ℒ: This variant removes the contrastive loss component from the local training in the clients. 5) FLCE^-𝒞ℳ𝒞: This variant removes the contribution matrix completion component. As shown in  <ref>, we compared the KL divergence of FLCE and its five variants in the IID and Non-IID settings across the CIFAR-10, CIFAR-100, and EuroSAT datasets. Notably, the removal of any component leads to an increase in KL divergence scores, signifying a drop in contribution evaluation ability. The Δ values indicate the relative degradation in contribution evaluation compared to the full FLCE method. The results demonstrate the individual importance of each FLCE component in reducing KL divergence and thus improving the fairness and accuracy of participant contribution evaluations. In particular, the removal of class contribution momentum and of contribution matrix completion shows significant increases in KL divergence, highlighting their critical roles in the FLCE approach. The ablation experiments illustrate not only the effectiveness of class contribution momentum but also the individual contributions of class contribution mass and class contribution velocity. Furthermore, the findings affirm the enhancement brought by integrating the contrastive loss and contribution matrix completion techniques into the class contribution momentum-based evaluation method. §.§ Effectiveness from Different Perspectives To further demonstrate the effectiveness of FLCE, we conducted a comprehensive evaluation from two additional perspectives: data quality based on class diversity, and the canonical Shapley value <cit.>, on the CIFAR-10 dataset with the Non-IID setting. Firstly, existing research suggests that the quality of participants' data may be related to class diversity <cit.>. Therefore, we incorporated class diversity into the evaluation of each participant's data quality.
Specifically, when calculating the diversity-based data quality for each participant, we multiplied the data volume by the ratio of the number of classes owned by the participant to the total number of classes. The results for all participants were then normalized. As illustrated in  <ref>, even when accounting for data class diversity, FLCE demonstrates the second-best performance, surpassed only by the FedSV method based on Shapley value calculations. Secondly, the canonical Shapley value method is a classical approach in cooperative game theory for determining participants' contributions. Despite its computational intensity, its rationality in evaluating participant contributions is widely acknowledged. Therefore, we used the contribution evaluation results calculated by the canonical Shapley value method as a reference. Considering that some existing evaluation works employ Euclidean distance as a metric when using Shapley values <cit.>, we utilized both KL divergence and Euclidean distance in our assessment of different contribution evaluation methods. The experimental results are presented in  <ref>.  <ref>(a) and  <ref>(b) respectively show the changes in Euclidean distance and KL divergence between the contribution evaluation results computed by different methods and those of the canonical Shapley value method as the number of training rounds increases. The experimental results indicate that FLCE approaches the performance of Monte Carlo sampling-based FedSV, but with significantly reduced computational time (details in Section <ref>). These experimental findings from two distinct perspectives further validate the effectiveness of our proposed FLCE method in contribution evaluations. §.§ Contribution Evaluation of Class Perspective Previous studies typically focused on evaluating contributions at the participant level, neglecting the class level. In the real world, the global model often prioritizes different classes variably, leading to disparities in class weights. FLCE excels not only at the participant level but also in evaluating contributions at the class level. To further analyze contributions at the global and local class levels, we predefined individual weights for all classes, referred to as ground truth weight. We then compared the class contribution weights determined by various methods with the ground truth weight on the CIFAR-10 and EuroSAT datasets, as shown in  <ref>. The red solid line represents the ground truth weight. All baseline methods yield a uniform contribution weight of 0.1 for different classes, indicated by the green dashed line, because they overlook the weight variations among different classes. The contribution weight results for different classes obtained by FLCE in various scenarios are shown by the dotted and dashed lines. As illustrated in  <ref>(a), despite dataset and data distribution changes, FLCE's approach to evaluating weighted contributions of various classes globally remains highly effective. Furthermore, when focusing on individual class contributions within participants as depicted in  <ref>(b), FLCE effectively evaluates class contributions with varying weights, delving into the details of individual participants. Contribution evaluation analysis from the global class level and local class level enhances the interpretability of the FLCE method. In contrast to earlier methods, FLCE can flexibly adjust class weights to effectively evaluate contributions in real-world scenarios. 
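Returning to the diversity-based data quality described at the beginning of this subsection, the following is a small sketch, under our reading, of multiplying each participant's data volume by its class-coverage ratio and normalizing over participants; the function and variable names are hypothetical.

import numpy as np

def diversity_quality(data_volumes, classes_owned, total_classes):
    """Diversity-weighted data quality, normalized over participants."""
    volumes = np.asarray(data_volumes, dtype=float)
    coverage = np.asarray(classes_owned, dtype=float) / total_classes
    quality = volumes * coverage      # volume x class-coverage ratio
    return quality / quality.sum()    # normalize to a distribution

# Three participants on a 10-class dataset (e.g., CIFAR-10):
print(diversity_quality([5000, 5000, 2000], [10, 4, 10], 10))
# The second participant is down-weighted despite equal data volume.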
§.§ Contribution Evaluation of Noise Perspective In real-world datasets, the presence of varying degrees of noise is another significant form of heterogeneity. To assess FLCE's capability in handling noisy data, we simulated such scenarios by artificially injecting noise into features and labels. We categorized the noisy data into three types: feature-noisy datasets, label-noisy datasets, and datasets containing both noisy features and labels. We evaluated the accuracy and KL divergence on these datasets under IID and Non-IID conditions for CIFAR-10, as shown in <ref> and <ref>. As illustrated in <ref>, the left column represents the accuracy of the FL algorithms in IID scenarios, while the right column represents the accuracy in Non-IID scenarios. The KL divergence in the various noise scenarios is shown in <ref>. We can intuitively observe that FLCE achieves higher accuracy than the other algorithms and exhibits smaller fluctuations during training. Meanwhile, FLCE also exhibits advantages in terms of KL divergence compared to the other algorithms under noise scenarios. This robustness advantage is partly due to the construction and application of class contribution momentum. Specifically, by grouping prototypes of the same class and distancing those of different classes using contrastive learning, FLCE is less affected by noise compared to methods that rely on cross-entropy. Additionally, the server enhances robustness against low-quality participants by conducting a comprehensive evaluation of participants' contributions by category. Notably, FLCE's performance under label noise proves more effective than under feature noise, which supports this assertion about its strengths. The experimental results indicate that FLCE exhibits excellent performance in noisy scenarios due to its emphasis on the intrinsic properties of the data and its reduced susceptibility to labeling errors. This reflects the fundamental rationality and effectiveness of FLCE in handling noisy heterogeneous scenarios. §.§ Communication Cost During the training process, FLCE requires participants to compute prototypes for each class, which means participants need to upload not only their models but also prototypes for each class. However, the contribution of prototypes to the overall communication in each round is very small. Consider, for instance, FLCE using ResNet20 on CIFAR-10, CIFAR-100, and EuroSAT. On CIFAR-10, the model contains 272,474 parameters to upload and download, while the size of each prototype is 64 with a total of 10 classes, resulting in a total prototype size of 640; this accounts for only 0.12% of the total communication in each round, as shown in <ref>. We can directly observe that the additional upload of prototypes by participants accounts for only a very small portion of the total communication cost. Therefore, the communication cost incurred by uploading prototypes is minimal for participants, but it can significantly improve fidelity and effectiveness. §.§ Computational Cost Typically, the server in FL has higher performance capabilities than participants' local devices, making additional computations on the server a viable strategy to enhance model performance. During the global update process, the computational cost varies depending on the algorithm used. We conducted tests to measure the time required by the server on the CIFAR-10 dataset with the Non-IID setting, as shown in <ref>. On the one hand, FedAvg requires only simple aggregation at the server, resulting in minimal time consumption.
On the other hand, FedSV requires significant time for computing the Shapley values at the server, which involves arranging and combining participant models. As shown in <ref>, FLCE's time consumption is comparable to FedAvg's, demonstrating its computational efficiency. Because the prototypes extracted from the participants' data are small, they incur minimal computational overhead. The experimental results indicate that FLCE exhibits a time-cost advantage and is an efficient contribution evaluation method for heterogeneous participants in FL. §.§ The Impact of Statistical Heterogeneity The statistical heterogeneity of participant data in the real world can significantly impact algorithm performance. To assess the effect of statistical heterogeneity on FLCE, we used the CIFAR-10 dataset with varying Dirichlet coefficients δ to measure FLCE's performance, as shown in <ref>. Our observations reveal that as the statistical heterogeneity of the data varies, both accuracy and KL divergence change systematically. When statistical heterogeneity is at its maximum (δ=0.1), participants exhibit the lowest accuracy of 69.79% and the highest KL divergence of 0.5793. As δ increases, indicating reduced heterogeneity, both accuracy and KL divergence improve, with accuracy peaking at 89.11% and KL divergence reaching its minimum of 0.0465 when δ=Max. The experimental results indicate that FLCE's performance gradually improves as the statistical heterogeneity decreases, demonstrating a consistent pattern when facing data of varying degrees of heterogeneity. This highlights FLCE's robust performance across a spectrum of statistical heterogeneity, confirming its effectiveness in diverse federated environments. § CONCLUSION In this work, we propose the first contribution evaluation method based on participants' representations and introduce a novel contribution evaluation indicator, class contribution momentum. We adopt a tripartite perspective to conduct contribution evaluation, encompassing the individual, relative, and overall perspectives. The server can effectively and efficiently evaluate participants' contributions by leveraging representations extracted from their heterogeneous data. The results of numerous experiments demonstrate that FLCE performs excellently in various heterogeneous scenarios. Moreover, as far as we know, we are the first to achieve contribution evaluation at the class level, which is a common real-world scenario. In addition, because FLCE retains the original FL framework, participants only need to compute representations of their local data, making it versatile and scalable.
http://arxiv.org/abs/2407.01898v1
20240702024626
Learning Granular Media Avalanche Behavior for Indirectly Manipulating Obstacles on a Granular Slope
[ "Haodi Hu", "Feifei Qian", "Daniel Seita" ]
cs.RO
[ "cs.RO" ]
Learning Granular Media Avalanche Behavior for Indirectly Manipulating Obstacles on a Granular Slope Haodi Hu Feifei Qian Daniel Seita July 8, 2024 ======================================================================================= § ABSTRACT Legged robot locomotion on sand slopes is challenging due to the complex dynamics of granular media and how the lack of solid surfaces can hinder locomotion. A promising strategy, inspired by ghost crabs and other organisms in nature, is to strategically interact with rocks, debris, and other obstacles to facilitate movement. To provide legged robots with this ability, we present a novel approach that leverages avalanche dynamics to indirectly manipulate objects on a granular slope. We use a Vision Transformer (ViT) to process image representations of granular dynamics and robot excavation actions. The ViT predicts object movement, which we use to determine which leg excavation action to execute. We collect training data from 100 real physical trials and, at test time, deploy our trained model in novel settings. Experimental results suggest that our model can accurately predict object movements and achieve a success rate ≥ 80% in a variety of manipulation tasks with up to four obstacles, and can also generalize to objects with different physics properties. To our knowledge, this is the first paper to leverage granular media avalanche dynamics to indirectly manipulate objects on granular slopes. Supplementary material is available at <https://sites.google.com/view/grain-corl2024/home>. § INTRODUCTION Legged locomotion across granular surfaces such as sand is a formidable challenge due to factors such as insufficient support offered by the sand surface and the complex dynamics of leg-sand interactions <cit.>. It is particularly challenging for robots to climb up steep granular slopes, as the sand could easily flow underneath robot legs due to the reduced shear resistance forces <cit.>. Recent studies on obstacle-aided robot locomotion show the potential for legged robots to strategically leverage large obstacles within sand, such as rocks and boulders, to traverse granular and uneven terrains <cit.>. However, such “obstacle-aided locomotion” strategies require specific leg-obstacle contact locations <cit.>, and thus the ability to move rocks and boulders to desired locations becomes essential for these strategies to apply. To address this challenge, this study proposes a method for a legged robot to effectively reposition obstacles on granular slopes by primarily leveraging indirect manipulation. Prior work has shown that external disturbances on a sand incline can trigger avalanche behavior <cit.>, suggesting the potential for a legged robot to exploit this property to advantageously relocate obstacles on a granular surface. In this work, we propose Granular Robotic Avalanche INteraction (GRAIN), a novel learning-based method for leveraging granular avalanche dynamics to indirectly manipulate objects on a granular slope. Due to a lack of accurate simulators for simulating legged robots and avalanche behavior on granular slopes, we do all experiments in the real world. We use an RHex <cit.> family robot leg as an external disturbance source, which performs excavation actions within a grain tank with mechanical support to form a slope. We also design a gantry structure to enable the robot leg to move to different positions for performing leg excavations.
We collect training data through physical interaction and deploy our trained model in the real world with a quadrupedal robot. See Fig. <ref> for our setup. Accurately predicting the granular media dynamics is essential for our proposed task, but is fundamentally challenging <cit.>. To address this, we leverage learning methods in this study. We place a rigid 3D-printed obstacle on the granular slope and collect data by applying continuous robot leg excavation actions to investigate avalanche behavior. We represent the granular surface state via the current depth image and the change in depth between sequential robot leg excavations, and we also use an image to represent the leg excavation action. This enables a unified image-based input to our granular media dynamics model. We train a ViT <cit.> to take our proposed granular media and robot excavation action representations as input and output the object movement on the granular slope. Experiments over 30 trials suggest that our model can accurately predict object movements on a granular slope, and can generalize to objects with different physics properties. In summary, our main contributions are as follows: * A novel problem formulation of using legged robots and leg excavation actions to indirectly manipulate obstacles to targets over complex, granular terrains. * GRAIN, a novel approach that uses image-based representations of granular dynamics and legged excavation actions to predict object movements on a granular surface. * Experimental results showing that a legged robot using GRAIN can move obstacles to targets, and that GRAIN outperforms an ablation and a non-learning baseline. § RELATED WORK Robot interaction with granular media: Researchers have explored robotics and granular media in locomotion and manipulation contexts. For example, researchers have enabled crawling <cit.>, hopping <cit.>, and running <cit.> over granular media by leveraging advances in granular force models <cit.> and machine learning methods <cit.>. We focus on the orthogonal task of leveraging the properties of granular media to manipulate objects on it. This requires some understanding of how granular media behave, which could come from direct physics analysis <cit.> or learned models <cit.>. These models can aid common manipulation tasks involving granular media, which include pouring <cit.>, scooping and bulldozing <cit.>, trenching <cit.>, or adjusting soil with plates <cit.>. In some of the most relevant prior work, Schenck et al. <cit.> use an image-based representation of granular media and train a convolutional neural network to predict changes in the granular media state. Similarly, we use image-based representations of granular media, but our actions are based on leg excavations instead of scooping or pouring. Moreover, we use the legs of a legged robot to indirectly manipulate objects on granular media surfaces. Locomotion over diverse terrains: Recent research has proposed methods that enable legged robots to traverse a variety of terrains, including outdoor settings that may have sand, vegetation, rocks, or other granular media. A promising technique is reinforcement learning coupled with advanced simulators, which can help avoid significant manual engineering <cit.>. To better handle diverse terrains, one line of work proposes adaptation methods, either via rapid motor adaptation <cit.> or by encoding a family of gait methods which facilitates tuning to new terrains <cit.>.
Other works study navigation over challenging terrains <cit.>, which may involve climbing, jumping <cit.>, and crawling under parkour-style settings <cit.>. While impressive, these works study locomotion over terrains that are much sturdier than our granular surface. Furthermore, they primarily consider locomotion, whereas we focus on manipulating objects on a granular surface. Manipulation with legged robots: While legged robots primarily use legs to move to a target location, they can also use legs for manipulation. For example, researchers have proposed methods for using legs to kick soccer balls <cit.> and to push obstacles <cit.>. A legged robot can also use two legs to stand up to better enable other legs to press against higher objects such as door buttons <cit.>. Other works mount an arm on top of a legged robot, and leverage methods such as optimization <cit.> or machine learning <cit.> to allow the arm to manipulate objects. These works use legged robots for manipulation via direct contact with an object. To our knowledge, our work is the first to show a legged robot indirectly manipulating an object to make it reach a desired pose. To do this, the robot adjusts a granular surface that supports the object. § PRELIMINARIES AND PROBLEM STATEMENT We consider the RHex <cit.> family of legged robots, with 1 DOF for each leg. We generate an excavation action, per leg, by commanding the leg to rotate at a constant angular speed for one circular cycle. We assume the robot lies on a granular (sand-like) surface which has a slope of Φ degrees. This surface has K ≥ 1 rigid obstacles, and we indicate their respective positions at a given time t as {𝐬_t^(1), …, 𝐬_t^(K)}. The task is to move all obstacles from their initial locations to pre-specified desired locations {𝐠^(1), …, 𝐠^(K)}. After doing this, we may also want to move the obstacles to a second target location, and we indicate these optional (per-obstacle) target locations as {𝐠'^(1), …, 𝐠'^(K)}. Here, each 𝐬_t^(k)∈ℝ^2, 𝐠^(k)∈ℝ^2, and 𝐠'^(k)∈ℝ^2 for k ∈{1, 2, …, K}, since we specify 2D positions over an image of the granular surface. For notational convenience, when K=1 we may suppress the superscript (k). We assume access to an overhead camera, which provides (H× W) depth image observations 𝐱_t. The objective is to learn a policy which produces a leg excavation action 𝐚_t at time t, where the robot rotates one of its legs. A trial consists of executing the robot's policy until a termination criterion is met. We evaluate a trial's performance by averaging the mean absolute error (MAE) distance among all obstacle positions and their respective target positions. We also use MAE to evaluate models which predict where obstacles move based on robot actions. § APPROACH: GRAIN We propose a novel approach for robot leg excavation where a ViT <cit.> takes as input a set of spatially-aligned image representations of both the system state and the leg excavation action, and predicts obstacle movement on the granular slope. During experiments, we use a greedy approach to find the robot's optimal excavation action that brings the obstacle closest to the desired location. §.§ Image Representations of Granular Dynamics Prior work has shown that external disturbances can trigger avalanche behavior on granular slopes <cit.>. Inspired by this, we aim to leverage robot leg excavation actions to cause avalanche behaviors that move obstacles to desired locations.
Intuitively, the robot leg excavation location affects the avalanche area, and an obstacle's relative position to the excavation affects how much it is influenced by the avalanche. Due to the complexities of modeling avalanche dynamics and obstacle movements on the granular surface, we learn an action-conditioned dynamics machine learning model to predict obstacle movement. Similar learned models have predicted complex physical interactions for planning robot manipulation <cit.>, suggesting their utility for our task. To represent the obstacle and the granular slope surface, we use a top-view depth image, 𝐱_t. [Figure: Granular flow for two sequential excavations. Red vectors represent the change of particle positions between two frames.] Furthermore, during preliminary experiments, we observed that successfully relocating obstacles often required consecutive excavations at the same location. See Fig. <ref> for a visualization of grain flows with sequential excavations. To highlight this physical finding in our model, we use the change in the depth images of the granular slope surface between sequential excavation actions, denoted as Δ𝐱_t = 𝐱_t - 𝐱_t-1 for t > 0, with Δ𝐱_t = 0 (all pixels zero) if t=0. We also introduce an image representation of robot leg excavation locations, which enables the action input to be spatially aligned with 𝐱_t and Δ𝐱_t. This may help the network better learn the avalanche behavior compared to if the action were processed separately and concatenated with downstream visual features. We discretize excavation locations to 15 locations in a 5× 3 grid (see Fig. <ref>). We plot a white square with a side length the same as the robot leg length on a black background to represent the robot leg excavation location, I(𝐚_t), where 𝐚_t is the excavation location on the 2D planar surface, and I is a function that generates the RGB image representation of 𝐚_t. §.§ Training Objective for the Dynamics Model We train a ViT to learn granular avalanche dynamics from C-shape robot leg excavations. The ViT predicts one obstacle movement on the granular slope, and we deal with multiple obstacles at test time by repeatedly querying the model (see Sec. <ref>). The training loss ℒ is defined as: ℒ(𝐱_t, Δ𝐱_t, 𝐚_t, 𝐬_t+1) = ‖ F_θ(𝐱_t, Δ𝐱_t, I(𝐚_t)) - 𝐬_t+1‖_2, where F_θ is the ViT parameterized by θ that takes, as input, a channel-wise concatenation of three spatially-aligned images: the depth image 𝐱_t, the change in depth Δ𝐱_t, and the action representation I(𝐚_t). The ViT outputs the predicted post-excavation obstacle position 𝐬̂_t+1 = F_θ(𝐱_t, Δ𝐱_t, I(𝐚_t)), which we compare with the ground truth obstacle position 𝐬_t+1 on the inclined 2D surface at time t+1. Since the ViT predicts continuous values, we replace the ViT's default MLP classification header with an MLP regression header. See Fig. <ref> (left) for an overview of training. §.§ Leg Manipulation Policy We propose a greedy strategy for manipulation, which means the robot leg performs the excavation action at the location whose predicted obstacle movement has the maximum projection onto the current line connecting the obstacle center and the desired location, as described in Eq. <ref>: 𝐚_t^* = arg max_𝐚_t ∈𝒜 𝐞_t^T(F_θ(𝐱_t, Δ𝐱_t, I(𝐚_t)) - 𝐬_t), [Figure: Example of 𝐞_t.] where 𝒜 is the set of 15 possible actions (see Fig. <ref>), and 𝐞_t is a 2D unit vector that points from the obstacle center to the target location at time t (see Fig. <ref>, target is the shaded red square).
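A minimal sketch of this greedy selection rule as we read Eq. <ref> is given below; f_theta stands in for the trained ViT, render_action for I(·), and the toy demo values are purely illustrative assumptions.

import numpy as np

def select_action(f_theta, x_t, dx_t, s_t, goal, actions, render_action):
    """Pick the excavation location whose predicted obstacle motion has the
    largest projection onto the unit vector from obstacle to target."""
    e_t = (goal - s_t) / np.linalg.norm(goal - s_t)    # 2D unit vector e_t
    best_action, best_score = None, -np.inf
    for a in actions:                                  # 15 grid locations (5 x 3)
        s_pred = f_theta(x_t, dx_t, render_action(a))  # predicted next position
        score = float(e_t @ (s_pred - s_t))            # projected movement
        if score > best_score:
            best_action, best_score = a, score
    return best_action

# Toy demo: obstacle at (10, 10), target at (20, 10); the dummy predictor
# pulls the obstacle slightly toward the excavation site.
s0, goal = np.array([10.0, 10.0]), np.array([20.0, 10.0])
acts = [np.array([x, y]) for x in (5.0, 15.0, 25.0) for y in (5.0, 10.0, 15.0)]
f = lambda x, dx, a_img: s0 + 0.1 * (a_img - s0)
print(select_action(f, None, None, s0, goal, acts, lambda a: a))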
Prior work has shown that a greedy policy for planning can be useful for object relocation tasks <cit.>, and we hypothesize that the same may be true for our setting. Our model is trained using robot excavation actions with a single obstacle. To use the model with multiple obstacles, we treat multiple obstacle movement prediction as a collection of independent predictions of a single obstacle movement. In particular, we randomly select an obstacle and mask out the other obstacles in the depth image input 𝐱_t, where we use a window with a side length 3 times the obstacle radius to compute the average pixel value within the window and replace the obstacle's pixels with this averaged value. Furthermore, we mask out the other obstacles in Δ𝐱_t, where we set the pixel values corresponding to the other obstacle positions to 0. We repeat this process for all obstacles and obtain the predictions of all obstacle movements. We modify our manipulation policy to fit the task; the policy now considers the sum of the obstacles' projected movements on the lines that connect their centers to their desired locations, as described in Eq. <ref>: 𝐚^*_t = arg max_𝐚_t ∈𝒜 ∑_k 𝐞_t^(k)T(F_θ(𝐱̄_t^(k), Δ𝐱̄_t^(k), I(𝐚_t)) - 𝐬_t^(k)), where the images 𝐱̄_t^(k) and Δ𝐱̄_t^(k) are masked versions of 𝐱_t and Δ𝐱_t. § EXPERIMENT SETUP Existing simulators used in learning-based legged robot manipulation research, such as PyBullet <cit.>, MuJoCo <cit.>, or IsaacGym <cit.>, do not support realistic robot interaction on granular media surfaces. Thus, we do all data collection, training, and experiments directly in the real world. Experiment environment: Figure <ref> illustrates our experiment setup. The main structure of the testbed is a granular trackway (60 cm L × 60 cm W × 20 cm D) filled with 6 mm plastic BBs (Matrix Tactical Systems), which have qualitatively similar rheological behavior as sand <cit.>. The granular trackway can be tilted up to 35 degrees to emulate a wide variety of sand slopes <cit.> in natural environments. To study the avalanche dynamics and object movement upon different leg excavation actions, we build a gantry system with two linear actuators (one moves along the x axis and another along the y axis) to move a C-shape robot leg on a 2D surface above the granular slope. The C-shape robot leg has a diameter of 6.0 cm and a width of 2.0 cm, and the rotation center of the leg is 1.0 cm above the initial granular slope surface. The rotation frequency of the robot leg is fixed at 0.33 Hz. The robot leg is attached via a steel bar to the table of the second linear actuator. This is sufficiently above the granular surface, so the robot avoids touching it while transitioning between consecutive excavation actions. We mount an RGBD camera (Intel RealSense 435-i) above the granular slope to record the granular flow and obstacle movement. Data collection: We collect a dataset of 100 trials, where each trial has 10 excavation actions. The time interval between two consecutive excavation actions is 12 s to enable the robot leg to transition to different locations. Before each trial, a human operator manually smoothed the granular media to a (roughly) even granular slope with an inclination angle Φ = 18 degrees. This inclination angle is close to the angle of repose <cit.> of the granular material used in our work, which facilitates the study of the avalanche dynamics. Once the granular surface is prepared, the human placed a 3D-printed (PLA) obstacle on the granular surface at different locations relative to the leg. For all 100 trials, a semi-spherical obstacle with a 5 cm diameter is used.
The RGBD camera has a video streaming rate of 15 fps, and it collects 640×480 RGBD images after every excavation action. The ground truth obstacle movement is calculated based on the post-excavation RGBD image. Among the 100 trials, 36 use the same excavation location but different initial obstacle positions, 30 use the same initial obstacle position but different excavation locations, and 34 vary both the initial obstacle positions and excavation locations for each action. We use this data to train F_θ. Evaluation: For each trial, after the human places the obstacles, a computer program randomly selects target location(s) for each obstacle. Targets can be anywhere in the robot excavation action space, as long as they do not overlap with each other. Then, the program randomly selects a method among the set of methods we test (GRAIN, a baseline, or an ablation; see Sec. <ref>) for manipulation, to reduce human bias in making initial settings easier for our method. We evaluate our dynamics model and manipulation outcomes based on the prediction error for each excavation action, using Mean Absolute Error (MAE). In test trials, we compute performance based on the final distance between the obstacle center and its desired location. The success threshold is below 2.5 cm (the radius of the obstacle), which we measure by converting pixel distances in images to centimeters. We also use the average error between prediction and ground truth as a performance evaluation. § EXPERIMENT RESULTS AND DISCUSSIONS §.§ Model Performance on Predicting Obstacle Movements To evaluate the performance of our model in predicting obstacle movements, we place an obstacle on an undisturbed granular slope and test 15 excavation action locations; we reset the obstacle to the same location after each action. See Fig. <ref> for a comparison between the predictions and the ground truth locations. The MAE for these 15 excavation actions is 1.13 cm, below our 2.5 cm threshold. Based on the promising MAE results, we use our trained model for planning in Sec. <ref> and Sec. <ref>. §.§ Single Leg Manipulation Performance We evaluate GRAIN in real-world experiments with a single leg using four types of tasks: * Single obstacle with a single task: Using K=1 obstacle, with a target position 𝐠. * Single obstacle with sequential tasks: Using K=1 but with an additional (second) target 𝐠'. * Multiple obstacles: Using K=4 obstacles with targets {𝐠^(1), 𝐠^(2), 𝐠^(3), 𝐠^(4)}. * Unseen obstacle: Using K=1 obstacle with a target position 𝐠, but where the new obstacle has a star shape and weighs twice as much as the standard obstacle we use. We compare against a baseline and an ablation. Our baseline is an algorithmic, non-learning manipulation strategy. The baseline randomly selects an obstacle, and the robot performs the excavation at the closest available location to the target until the obstacle has met one of two termination criteria (see next paragraph). Then, this baseline randomly selects one of the remaining obstacles to manipulate and repeats until all obstacles are selected. Our ablation investigates the importance of our image representation of the robot excavation action to the successful training of our model. We train a ViT with a lower-dimensional, 2D vector representation of the robot excavation action. This cannot be directly channel-wise combined with the depth-based image observations, so we instead input it to the regression header of our ViT.
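To illustrate the contrast between the two action representations, here is a sketch of how the spatially-aligned, channel-wise input could be assembled; the channel layout, image sizes, and the leg length in pixels are our assumptions, not the exact implementation.

import numpy as np

def render_action_image(center, leg_len, shape):
    """I(a_t): white square (side = leg length) at the excavation location
    on a black background, spatially aligned with the depth image."""
    img = np.zeros(shape, dtype=np.float32)
    r, c = center
    h = leg_len // 2
    img[max(0, r - h):r + h, max(0, c - h):c + h] = 1.0
    return img

def assemble_input(x_t, x_prev, action_center, leg_len=40):
    """Channel-wise concatenation of depth, depth change, and action image."""
    dx_t = x_t - x_prev                       # delta depth between excavations
    a_img = render_action_image(action_center, leg_len, x_t.shape)
    return np.stack([x_t, dx_t, a_img], axis=0)   # (3, H, W) ViT input

# The ablation instead feeds the raw 2D action vector to the regression
# head, losing this spatial alignment with the depth channels.
x = assemble_input(np.random.rand(480, 640), np.random.rand(480, 640), (240, 320))
print(x.shape)  # (3, 480, 640)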
A manipulation trial terminates upon either of these conditions: (i) none of the excavation actions can take the obstacles closer to their targets, or (ii) all obstacles have a small accumulated movement (≤ 0.5 cm) over 3 sequential excavations. For (i), for the baseline method, we terminate when we observe that the previous excavation action moved the obstacle away from its target or reached it. Results from Tab. <ref> suggest that GRAIN outperforms the baseline and the vector representation ablation. GRAIN has a success rate ≥ 80% on all manipulation tasks, while the baseline has trouble with the “Multiple Obstacles” task, and the ablation has trouble with the “Unseen obstacle” task. The baseline particularly struggles when manipulating multiple obstacles (20% success), as it does not consider other obstacle movements when manipulating one obstacle. The ablation's granular dynamics model has a higher (i.e., worse) MAE in the predictions, which results in lower success rates versus GRAIN. We show one trial each of “Single obstacle with sequential tasks” and “Multiple obstacles” in Fig. <ref>. We refer the reader to the supplementary website for videos. §.§ Quadruped Robot Manipulation Experiments We evaluate GRAIN on the quadrupedal robot with the 2 front legs as manipulators; we do not use the back legs since the robot body would likely block obstacles. Specifically, we use the same trained model as in Sec. <ref>, but we query the model at the 2 excavation locations where the legs are located, instead of the full set of 15. We show one trial in Fig. <ref>. Results in Tab. <ref> suggest that GRAIN achieves higher performance (i.e., lower MAE) than the baseline or vector representation methods. §.§ Failure Cases and Limitations A common failure of GRAIN with multiple obstacles occurs when the robot leg moves one obstacle to its target while simultaneously moving other obstacles too far from their targets. See Fig. <ref> for an example, where two obstacles are already at their target locations (red shaded boxes), but the robot has trouble moving the third obstacle to its target (white arrow and green shaded box). The rightmost obstacle has reached its target, and further excavations will move it away from its target. [Figure: Failure case.] Our manipulation policy only predicts obstacle movements for one step, and thus can have difficulty with tasks that require multi-step planning, which we plan to address in future work. § CONCLUSION In this work, we present GRAIN, a method for a legged robot to indirectly manipulate obstacles on a granular surface. We show the potential for a quadrupedal robot to advantageously leverage granular avalanche dynamics to relocate obstacles on a granular slope. We hope our work drives future research in using legged robots for manipulation and locomotion across diverse terrains. This work is supported by funding from the National Science Foundation (NSF) CAREER award #2240075, the NASA Planetary Science and Technology Through Analog Research (PSTAR) program, Award #80NSSC22K1313, and the NASA Lunar Surface Technology Research (LuSTR) program, Award #80NSSC24K0127. The authors would like to thank Luke Cortez for helping with preliminary data collection, and Vedant Raval for helpful writing feedback. § ADDITIONAL DETAILS OF GRAIN §.§ Algorithm for Image Representation of Robot Excavation Action To highlight the relationship among the robot excavation action, avalanche behavior, and obstacle movement, we introduce our image representation of the excavation actions in Sec. <ref>.
The pseudocode in Alg. <ref> shows how we obtain this representation. We show several examples of the image representation in Fig. <ref>, for different locations of the robot leg. §.§ Algorithm for Masking Obstacles in 𝐱_t Sec. <ref> discusses how we handle multiple obstacles. In Alg. <ref>, we formalize our method to mask the other, unselected obstacles. This lets the masked images look similar to images from the single obstacle case, which we use for training the ViT. See Fig. <ref> for an example of an image and its corresponding masked version (obtained via Alg. <ref>). During training, we use a colormap package to convert depth to RGB, as shown in the figure. For visual clarity, we overlay “Obstacles” and “Masked Obstacles.” §.§ Algorithm for Masking Obstacles in Δ𝐱_t Given the pixel sets of the obstacles at the current step and the previous step, we set all pixel values in these pixel sets to 0. § ADDITIONAL EXPERIMENT DETAILS §.§ Single Leg Manipulation Results Fig. <ref> shows one experiment trial for two manipulation tasks: “Single obstacle with single task” and “Unseen obstacle.” We refer the reader to Section <ref> for a description of what the tasks mean, and for example trials from other tasks. The statistics of all manipulation trials are shown in Tab. <ref>. §.§ Multiple Unseen Obstacles Manipulation To test our trained model's generalization ability, we use obstacles with different shapes and weights in this task. Specifically, we use a star-shaped obstacle, a cuboid obstacle that has half the weight of the obstacles used in the training dataset, and a hemisphere obstacle of the same size but 4 times the weight of the obstacles used in the training dataset. All obstacles are 3D-printed. We placed these unseen obstacles on the granular slope together with the obstacle used in the training dataset, for a total of K=4 obstacles in a random arrangement, and executed the manipulation policy. We show one manipulation trial in Fig. <ref>. Our system succeeds in 3 out of 5 trials.
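As a concrete illustration of the masking procedures formalized in Alg. <ref> above, the sketch below replaces an unselected obstacle's pixels in 𝐱_t with a local window average and zeroes its pixels in Δ𝐱_t; the circular obstacle mask and the window handling are our own simplifying assumptions.

import numpy as np

def mask_obstacle(x_t, dx_t, center, radius):
    """Mask one unselected obstacle so the image resembles the
    single-obstacle training distribution."""
    x, dx = x_t.copy(), dx_t.copy()
    r, c = center
    w = 3 * radius                                   # window side: 3x radius
    win = x[max(0, r - w):r + w, max(0, c - w):c + w]
    fill = win.mean()                                # local average depth
    rr, cc = np.ogrid[:x.shape[0], :x.shape[1]]
    obstacle = (rr - r) ** 2 + (cc - c) ** 2 <= radius ** 2
    x[obstacle] = fill                               # hide obstacle in x_t
    dx[obstacle] = 0.0                               # zero its pixels in dx_t
    return x, dx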
http://arxiv.org/abs/2407.02302v1
20240702143510
Towards Human Understanding of Paraphrase Types in ChatGPT
[ "Dominik Meier", "Jan Philip Wahle", "Terry Ruas", "Bela Gipp" ]
cs.CL
[ "cs.CL", "I.2.7" ]
Towards Human Understanding of Paraphrase Types in ChatGPT Dominik Meier Jan Philip Wahle Terry Ruas Bela Gipp July 8, 2024 ====================================================================================== § ABSTRACT Paraphrases represent a human's intuitive ability to understand expressions presented in various different ways. Current paraphrase evaluations of language models primarily use binary approaches, offering limited interpretability of specific text changes. Atomic paraphrase types (APT) decompose paraphrases into different linguistic changes and offer a granular view of the flexibility in linguistic expression (e.g., a shift in syntax or vocabulary used). In this study, we assess the human preferences towards ChatGPT in generating English paraphrases with ten APTs and five prompting techniques. We introduce APTY (Atomic Paraphrase TYpes), a dataset of 500 sentence-level and word-level annotations by 15 annotators. The dataset also provides a human preference ranking of paraphrases with different types that can be used to fine-tune models with RLHF and DPO methods. Our results reveal that ChatGPT can generate simple APTs, such as additions and deletions, but struggles with complex structures (e.g., subordination changes). This study contributes to understanding which aspects of paraphrasing language models have already succeeded at understanding and what remains elusive. In addition, our curated datasets can be used to develop language models with specific linguistic capabilities. § INTRODUCTION Paraphrases are changes in a text's wording or structure, resulting in a new text with approximately the same meaning <cit.>. Paraphrasing plays a fundamental role in NLP, as understanding the variability in linguistic expression is key for various tasks, e.g., prompt engineering, text summarization, and plagiarism detection <cit.>. Many have assessed whether two texts convey the same meaning through a single similarity score or binary assessment, limiting the granularity of predictions. Atomic Paraphrase Types (APT) <cit.> can be used as a new lens with which the linguistic relationship between two paraphrases can be explained. Generating and detecting APTs has multiple advantages over binary paraphrase categorization <cit.>. For example, APTs can pinpoint whether a sentence's grammatical structure or the used vocabulary has changed between potential plagiarism cases <cit.>. Understanding how language models capture this variation in linguistic expression gives us insights into how their understanding of language differs from that of humans. It also explains in which language aspects models are proficient, where challenges remain, and how we can make models more robust to a wide array of paraphrase characteristics (e.g., syntactical and lexical changes). There are many ways in which two paraphrases can differ. Consider the following example: Original: “They^a had published an advertisement on the Internet on June 10^b, offering the cargo^c for sale, he added^d.” Paraphrase: “On June 10^b, the ship's owners^a had published an advertisement on the Internet, offering the explosives^c for sale.” Here, the paraphrase contains the following APT changes: (a) and (c) change the lexical unit for another one with the same meaning, (b) re-orders the words in the sentence, and (d) adds lexical and functional units. So far, it has been largely unknown how well models generate or detect paraphrases with specific APTs.
In this work, we asked 15 humans to annotate 500 APT generations with various properties, such as the perceived difficulty of generation, the model's success at generating a certain type and the reasons behind its failure, its confusion with other types, and the similarity of the APT on a sentence and word level. We publish this dataset as Atomic Paraphrase TYpes Base (APTYBase)[https://github.com/worta/apty]. To further contribute to future research on APTs, we extended this new dataset by ranking the different APT examples by human preferences, so that they can be used to optimize paraphrase generation models with human preference methods such as RLHF <cit.> and DPO <cit.>; we call this extension APTYRank. The whole generation and annotation process is shown in <Ref> and discussed in <Ref>. Our results show that ChatGPT is capable of generating Same Polarity Substitution, Semantic-Based Changes, and Change of Order, and struggles with generating Inflectional Changes, Synthetic/Analytic Substitutions, and Subordination and Nesting Changes. In general, changes requiring deeper grammatical understanding are difficult to generate. Few-shot and chain-of-thought (CoT) prompting have increased generation success compared to other prompting techniques, especially for Addition/Deletion and Semantic-Based Changes. Surprisingly, humans often rank CoT generations lower than other methods. We also found that the most common error for ChatGPT when applying APTs is applying the wrong kind of APT, and that morpho-lexical changes, i.e., changes that arise at the word or morpheme level, are the most commonly wrongly or additionally applied. Lastly, our results suggest that the human-estimated hardness of the generation task does not affect the generation success rate for zero-, one-, and few-shot prompts. CoT and a fine-tuned model perform markedly worse for tasks rated as hard. Our main contributions are: * A human study with 500 annotations by 15 participants on the ability of ChatGPT to generate paraphrase types (Q1); * A new dataset (APTYBase) with sentence-pair information on sense-preservation, the specific paraphrase type applied, the location of the change, and error reasons over five methods of paraphrase generation (Q2), with different prompt styles of zero-shot, one-shot, few-shot, chain-of-thought, and a fine-tuned model; * Analysis of human preferences (Q3) in paraphrase generation and a new dataset with human-ranked paraphrase type generations using a best-worst scale, which enables training with RLHF and DPO methods; * Investigation of the types of errors (Q4) ChatGPT makes when generating APTs and how much ChatGPT confuses APTs (Q5), i.e., confusion between the types; * Correlation of model success rate at generating paraphrases with the human-perceived difficulty (Q6) of the task. § RELATED WORK Approaches for paraphrase generation range from rule- and template-based approaches <cit.> to trained transformers generating paraphrased text <cit.>. Rule-based methods rely on parsing the original sentence and applying either hand-crafted <cit.> or automatically inferred <cit.> rules to transform the text. Recently, paraphrase generation involves deep learning models, especially LLMs. <cit.> use a fine-tuned version of GPT-2 to generate paraphrases and evaluate their semantic similarity. <cit.> fine-tune a T5 model to generate and identify paraphrases.
<cit.> explore T5 and GPT-3 regarding qualitative properties of generated paraphrases, assess the ability of humans to identify machine-paraphrased text, and suggest that LLMs can generate paraphrases that match human-generated paraphrases in clarity, fluency, and coherence. As previous paraphrase tasks rely heavily on similarity scores and do not capture the linguistic flexibility of paraphrases, <cit.> proposed two new tasks, i.e., paraphrase type generation and detection, using the ETPC <cit.> dataset. Their findings indicate that current LLMs (e.g., ChatGPT) perform well when generating paraphrases with generic semantic similarity but struggle to generate them with fixed APTs. Additionally, models trained with APTs have improved performance in general paraphrase tasks (i.e., without APTs). While the paraphrastic mechanisms of all paraphrase types are not yet fully understood, further work revealed that specific types elicit prompt engineering capabilities over various downstream tasks, e.g., polarity for sentiment, or discourse for summarization <cit.>. Although <cit.> has sparked interest in more granular paraphrasing, their work relies only on automatic metrics such as ROUGE <cit.>, which is limited and lacks the human component in the evaluation process. We explore human preferences in paraphrases by examining whether a paraphrase type was correctly applied by a model, the perceived difficulty of the paraphrasing task, and the kind of errors made by ChatGPT at the time of generation. § METHODOLOGY Our experiments are split into two parts, i.e., generating paraphrase types using ChatGPT, and annotating and evaluating the outputs according to multiple criteria with human participants. The process is shown in <Ref>. In the following subsections, we detail the generation and annotation processes. §.§ Paraphrase Type Generation We consider a variant of the paraphrase type generation task described in <cit.>. Given an APT l ∈ L, where L is the set of possible paraphrase types, and a base sentence x, we want to generate a paraphrase x̃ incorporating the given change l while maximizing the similarity between x and x̃. We limit ourselves to applying a single paraphrase type for the highest degree of control; we leave research into the complexity of combining different APTs for future work. We do not restrict the position where the change has to be applied; the model must choose where to apply the change. In our experiments, we use the ETPC dataset <cit.>, which contains pairs of original and paraphrased sentences annotated with APTs. We choose the ten APTs with the most examples in the dataset as the primary types for this study. We excluded Identity changes, as those contain no change in the sentence, and Same Polarity Substitution (habitual and named entity), as it is similar to Same Polarity Substitution (contextual), which could harm the diversity of the chosen paraphrase types. We also exclude Syntax/Discourse Structure changes, as they would require our annotators to understand all other paraphrase types, even those not considered in the study. <Ref> details the full list of types. We sample ten paraphrase pairs for each paraphrase type l. We use the first sentence in a paraphrase pair as a base sentence and generate the paraphrase using ChatGPT. The prompts are constructed by asking the model to generate a sentence with the same meaning using the APT l. We prompt the model using zero-shot, one-shot, few-shot <cit.>, chain-of-thought (CoT) <cit.>, and a fine-tuned model from <cit.>.
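As an illustration of how such type-conditioned prompts can be assembled (the exact wording and definitions used in the study are described next and in the appendix; the strings below are our own placeholders, not the study's prompts):

def build_prompt(sentence, apt_name, apt_definition, examples=(), cot=False):
    """Assemble a zero-/one-/few-shot or CoT prompt asking for one specific APT."""
    prompt = (
        f"Paraphrase type: {apt_name}\n"
        f"Definition: {apt_definition}\n"
        f"Generate a sentence with the same meaning as the input, "
        f"applying only this paraphrase type.\n"
    )
    for src, tgt in examples:               # 0, 1, or 5 in-context examples
        prompt += f"Input: {src}\nOutput: {tgt}\n"
    if cot:
        prompt += "Explain your reasoning step by step before answering.\n"
    prompt += f"Input: {sentence}\nOutput:"
    return prompt

print(build_prompt(
    "They had published an advertisement on the Internet.",
    "Change of Order",
    "Changes the order of words or phrases while preserving meaning.",
))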
The prompt contains the name of type l and its definition, taken from <cit.> with minimal changes to fit the prompt (see <Ref> for the exact prompts used). Similarly to <cit.>, five few-shot examples are given; one instance with added reasoning is provided for CoT prompts. <Ref> details the exact APT definitions. §.§ Annotation We recruited 15 annotators to perform ratings in two phases. The annotators were students from computer science, data science, and related programs with self-assessed English language skills of C1 or C2 and an average age of 24. In two one-hour meetings, they were trained with detailed guidelines. <Ref> details the annotation process and the guidelines. In the first phase, the annotators were shown a pair of original and paraphrased candidates and the APT to be applied. The annotators were asked whether the given sentence pair had approximately the same meaning, to specify if the specific APT was applied correctly, and to determine the groups of any additional APTs if multiple changes were made to the original sentence or if the given APT was not applied correctly. Annotators were also asked to highlight the word position of the change if the paraphrase type had been applied correctly (i.e., a span of multiple words that can be consecutive or disjoint). <Ref> in <Ref> shows an example of the system. We also included five manually created gold examples to certify that the annotations were carried out carefully and to check for agreement among annotators; no annotator answered more than one gold question incorrectly; the details can be found in <Ref>. The median time to complete the annotation of one paraphrase pair was 74 seconds, with the 25th percentile being 47 seconds and the 75th percentile 121 seconds. Each paraphrase type was annotated by one annotator, except for our gold examples, which were given to all annotators. These annotations compose the APTYBase dataset. In the second phase, paraphrases with successfully applied paraphrase types were ranked from best to worst <cit.>. The list was discarded if fewer than two generations were successful for a given sentence. In total, 80 of 100 possible lists remained, and the annotators were then asked to rank them according to their preferences. Five annotators gave their preferences for each list, leading to 5 × 80 = 400 preference annotations composing the APTYRank dataset. § EXPERIMENTS How successful is ChatGPT, on average, at generating specific paraphrase types? Which paraphrase types can it already produce with high success, and which ones does it struggle with? Based on the annotations of the first phase, we measure the success rate of the different prompt and paraphrase type combinations to assess which APTs ChatGPT can already generate well and which APTs are more challenging. The results are shown in <Ref>. We color the different generation strategies and group them by APT on the x-axis. Humans often judge the generation as successful when prompting ChatGPT to generate Change of Order (82%), Semantic Changes (82%), and Same Polarity Substitution (78%). In contrast, Derivational Changes (46%), Subordination and Nesting Changes (38%), and Synthetic/Analytic Substitution (34%) show low success rates. While Change of Order and Same Polarity Substitution require small changes and an understanding on the sentence level or of grammatical concepts, Derivational Changes require an understanding of the part of speech of the affected lexical unit. Subordination and Nesting Changes similarly require a nuanced understanding of sentence structure and grammar.
Consider the following example of a paraphrase using both a Derivational Change (a) and a Same Polarity Substitution (b). Original: “A smiling^a senior policeman^b shook hands with Mr Laczynski” Paraphrase: “A senior police officer^b smiled^a and shook hands with Mr Laczynski” Note that for Same Polarity Substitution, no change in sentence structure or on the word level elsewhere in the sentence is necessary, i.e., the change is locally constrained and does not require understanding beyond synonym relations. However, changing smiling to smiled makes it necessary to shift the word's position, as it is now used as a verb. It needs to be conjugated to match the rest of the sentence, which is in the simple past tense and third person singular, and requires the addition of “and” to keep the sentence correct. The high success rate of 82% for Semantic Changes is surprising, as a semantic shift is typically complex. As an example, consider: Original: “Blair himself took the reins of the Labour Party^a.” Paraphrase: “Blair himself became leader of the Labour Party^a.” However, the lack of correspondence also gives the models the most leeway, as the whole grammatical structure can be exchanged. Additionally, the change often involves idioms and other common turns of phrase, which might be seen often in training data. Structure-based changes present further difficulty for the model. These changes require a deep grammatical understanding of the text. More complex paraphrase scenarios seem more difficult for LLMs to understand, although, based on their performance in the general paraphrasing task, they should be capable of handling them. In the general paraphrase generation task without APTs, models might seem to generate diverse paraphrases but repeatedly use a small selection of APTs that convince annotators of diversity, as a form of reward hacking <cit.>. One explanation could be that LLMs are more familiar with some paraphrase types than others, i.e., paraphrasing is also biased towards certain types in the training data and fine-tuning steps <cit.>. Humans use different paraphrase types to varying degrees, which is reflected in human-created training data; this might explain this observation. Without diversity of paraphrase types in training, models might be restricted in linguistic diversity, even in cases where underrepresented paraphrase types would be advantageous, similar to how it hinders them in detection tasks <cit.>. How do different prompting techniques affect the success in generating paraphrase types? <Ref> decomposes <Ref> into individual prompting techniques to compare how different prompt techniques influence the success rate of ChatGPT in generating paraphrase types. This allows us to observe how familiar ChatGPT is with the tasks (zero-shot performance) and how much reasoning improves its performance (CoT). For detailed values, refer to <Ref> in <Ref>. Depending on the APT, prompt and type combinations show notable differences in success rates. For example, while CoT is relatively successful at Same Polarity Substitution, ChatGPT struggles with this task when zero-shot prompted. Human annotators found that the CoT prompt was applied successfully in 69% of the cases, followed by few-shot (63%), zero-shot (61%), and one-shot prompts (55%). The fine-tuned model had the worst performance with only 50%. As the fine-tuned model is trained on applying multiple APTs at once, applying a single change might have compromised its performance.
The one-shot prompt might overly rely on the given example and perform worse on APTs where different changes belong to the same APT. For instance, a verb might be changed to an adjective in the one-shot prompt for generating Derivational Changes. Our findings suggest that providing multiple examples of the same type and reasoning about the generation improves APT generation performance in LLMs. For some examples, ChatGPT profits from reasoning about the task at hand, but often the performance remains poor (e.g., Synthetic/Analytic Substitution), suggesting a need to improve model understanding. Are there qualitative differences between prompt methods? Do humans favor the generations of certain prompt styles more than others? Besides whether a change is applied correctly, the resulting paraphrases have varying degrees of quality. Humans might prefer one paraphrase over another, although both have applied the same type (e.g., some grammatical structures are easier to grasp than others). These preferences can indirectly inform what criteria humans use to judge paraphrases, e.g., at which position a change happens or how creative a change is. We investigate whether different prompt techniques (e.g., CoT, zero-shot) produce paraphrases rated as more or less favorable by humans. Of all paraphrases where humans have annotated that the APT was correctly applied, we ask annotators to rank these paraphrases from best to worst according to their preferences. The best paraphrase is ranked first, i.e., lower numbers are better. To avoid penalizing prompt methods that produced fewer successful generations, we did not rank unsuccessful generations, and we only looked at the rate at which a generation from one prompt technique was preferred over another when both generated an APT successfully. We give the average ranks of the different prompt techniques and the proportion of times one generation is preferred over another in <Ref>. Based on the average rank, few-shot and one-shot generated the most preferred paraphrases, zero-shot and CoT seem to be in the middle, and fine-tuned is ranked worst. For the direct comparison, the trend repeats; we see that few-shot is generally favored compared to the other approaches, and fine-tuned and CoT are often perceived as worse. As generations from few-shot prompts are perceived best on average, this suggests that practical examples allow ChatGPT to generate natural-sounding paraphrases more than reasoning does. Further, while CoT has one of the highest success rates in generating APTs, it seems to generate paraphrases of lesser quality. One reason for the poor reception of CoT-generated paraphrases might be that more consideration of the formal aspects of a paraphrase leads to less natural outcomes, which are not perceived well by the annotators. The generations of the fine-tuned model were also rated lower than those from prompting the general model. We noted that the fine-tuned model struggled to generate specific APT changes; this seems to extend to natural-sounding paraphrases with single APTs. This highlights the importance of high-quality datasets tailored to the specific tasks for fine-tuned models. Our preference data can be used in future work to improve model generations via techniques like RLHF or DPO. Which mistakes does ChatGPT make when generating specific paraphrase types? Does it re-generate identical sentences, apply other types than those instructed, or change the meaning?
To determine what kind of errors the models are most likely to make, we asked the annotators to indicate why a paraphrase type was wrongly applied and whether other changes were applied instead. If an annotator judged the generated APT from ChatGPT as a mistake, we provided four possible explanations: the sentences were identical, a different change was applied, the generation was nonsensical, and other reasons. The relative frequencies of these error types are listed in <Ref>. Applying the wrong type is the most frequent reason for the error, occurring in 60% of the erroneous cases, while not performing any change is also a common source of error, accounting for roughly 20% of the wrong annotations. Only 17% of wrongly applied APTs change the meaning of the original sentence, meaning a paraphrase is usually generated even if ChatGPT fails to use the given APT. An open question is whether giving the model a way to refuse to produce a paraphrase in the prompt if it is unsure, i.e., if it could express uncertainty, would reduce the number of erroneous generations. We leave the investigation of this sub-question to future work. ChatGPT is good at generating paraphrases with the same meaning but fails to understand the underlying linguistic properties involved. It rarely changes the meaning but often changes the wrong aspect of the sentence or does not change the sentence at all. Since applying a different type than instructed is the most common issue, we investigate which types are most often mistaken or applied in the following question.

Which paraphrase types does ChatGPT confuse most with one another? Do we see a correlation between certain paraphrase types? In the previous section, we have seen that ChatGPT often applies different APTs if an error occurs. ChatGPT also applies changes besides the desired one in the case of successful applications. In such instances, we can examine the relationship between the different APTs from the perspective of ChatGPT, assessing whether and how they correlate. For these cases, the annotators indicated which APT groups these erroneous or additional changes belong to. We used four major groups for annotators to indicate confusion: morpho-lexical, semantic, structural, and other changes; the definitions can be found in <Ref>. We show the confusion matrices in <Ref>, where 'Additional Change' indicates cases where the correct APT is applied along with one or more additional APTs, while 'Mistaken Change' illustrates instances where the model erroneously applies an APT. For the numbers in <Ref>, we count the number of additional or erroneous APT applications that belong to the group given on the x-axis, where an APT that is part of the group on the y-axis should have been applied. Then, we divide this count by the total number of paraphrases annotated in the group; the sketch below makes this normalization concrete.
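A minimal illustration of the row-normalization described above, assuming a hypothetical data layout (the real annotation records may be organized differently):

import numpy as np

GROUPS = ["morpho-lexical", "structural", "semantic", "other"]

def group_confusion(records):
    """records: iterable of (requested_group, observed_groups) pairs, where
    observed_groups lists the groups of the additional or mistaken changes
    annotated for one generation. Entry (i, j) of the returned matrix is
    the number of changes from group j applied when a type from group i
    was requested, divided by the number of annotated paraphrases in
    group i."""
    counts = np.zeros((len(GROUPS), len(GROUPS)))
    totals = np.zeros(len(GROUPS))
    for requested, observed in records:
        i = GROUPS.index(requested)
        totals[i] += 1
        for group in observed:
            counts[i, GROUPS.index(group)] += 1
    return counts / np.maximum(totals, 1)[:, None]  # avoid division by zero

# Toy example: two morpho-lexical requests, one with an extra structural change.
demo = [("morpho-lexical", ["structural"]), ("morpho-lexical", [])]
print(group_confusion(demo))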
For morpho-lexical changes, the additional changes are evenly spread and happen comparatively rarely. If ChatGPT makes an erroneous change, it is often a different morpho-lexical change. This is also the case for the other groups, e.g., the most common type of wrong application concerns the model making morpho-lexical changes instead of the requested changes. These are probably the most straightforward changes to make without altering the meaning of the sentence, and also the most common ones, e.g., the most frequent types in the ETPC dataset are morpho-lexical changes. For Semantic Changes, there is an association with a high proportion of additional/erroneous morpho-lexical and structural changes, but not conversely. For example, ChatGPT often mistakes morpho-lexical changes for semantic ones, but not vice versa. As Semantic Changes include all changes on a semantic level without a clear mapping between the linguistic components, models often introduce other APTs simultaneously.

Generally, the confusion between different types of the same group is larger than the confusion between different groups. So, a model focused on one APT group, e.g., structural changes, should be more reliable than one handling multiple APT groups; otherwise, models often introduce other changes. One reason might be that models rarely see isolated changes in the training data (i.e., sentences that differ only by a single APT), leading to a poor understanding and to associating them with other changes. We also found that the confusion correlates with the difficulty of the paraphrasing task; the more difficult the task is, the more likely it is that other changes are made either in addition or erroneously. We provide more detailed investigations in the additional research questions in <Ref>.

How does ChatGPT perform on examples perceived as easy or hard by humans? Applying different APTs can be challenging for humans, depending on the precise APT and the base sentence. We explore whether the human-perceived difficulty of paraphrases also relates to the performance of ChatGPT; that is, are paraphrasing tasks that are difficult for humans also difficult for ChatGPT? To answer this question, we asked our annotators, for each presented generation, whether they would find applying the paraphrase type to the given sentence easy or hard. Then, we computed the rate of successfully generated paraphrases for the different approaches depending on the estimated difficulty of the task, as shown in <Ref>. On the x-axis, the tasks are split according to rated difficulty, and the y-axis gives the rate of successful application.

We observe that the few-shot approach works better in cases where humans evaluate the task as hard, seemingly profiting from examples on tasks that are more difficult from a human perspective. The CoT-prompted model performs poorly on hard examples, with a generation success rate that is 23 percentage points lower. One reason might be that human annotators rate examples as hard when they are difficult to reason about, and this extends to reasoning attempts by LLMs. Similarly, the fine-tuned model also performs worse on hard examples. Surprisingly, the performance of the other prompt paradigms seems largely independent of the difficulty. The results suggest that paraphrasing tasks that are hard for humans to reason about explicitly are also hard for LLMs. If more complex insights are required, as judged by humans, the models fail more often, as they lack the required understanding of the concepts associated with the APTs. Still, the LLMs exhibit some reasoning capability for APTs, even if it leads them astray in difficult scenarios.

§ FINAL CONSIDERATIONS

In this work, we contributed to the understanding of how LLMs can generate variability in linguistic expression through paraphrase types. Using different prompting techniques, we generated 500 paraphrase candidates and asked 15 participants to annotate these candidates according to whether the paraphrase type was correctly applied by ChatGPT, the perceived difficulty of the paraphrasing task, and the kind of errors made by ChatGPT at generation. We made the annotations and human preference rankings available through two new datasets. We found that CoT prompting generally outperforms the other methods for generating paraphrases with specific APTs.
For paraphrasing tasks perceived as hard by humans, few-shot prompting performed better, while CoT showed a remarkable decrease in generation success rate. This suggests that asking ChatGPT to reason about complex paraphrase types (e.g., Synthetic/Analytic Substitution) is ineffective, as the required understanding of these APTs is missing. When ChatGPT made mistakes or added additional changes, it most commonly chose APTs from the morpho-lexical changes. This is likely because these changes are the most commonly applied in paraphrasing and dominate paraphrasing datasets. We also found that for non-morpho-lexical changes, ChatGPT rarely confuses changes that can be categorized under the same group (e.g., confusing one structural change with another instead of with a non-structural change). One reason might be that ChatGPT pays increased attention to the properties related to the requested change, so that retaining the other properties is weighed less. Our preference data opens an interesting avenue for further research on improving the generation and identification quality of APTs using techniques such as DPO or RLHF. Alternatively, our dataset can be used as a benchmark to develop metrics that better correlate with human preferences.

§ LIMITATIONS

Although we extend the research on paraphrase types, this study has a few limitations. We consider only a selection of APTs and could not investigate all described types due to the resource-intensive work of human annotation. We focused on the most common types in the ETPC dataset, which should give a decent proxy for the most relevant types, and selected a sample to ensure diversity in the examined types. Additionally, due to the annotation effort, we are limited to one annotation per task for the questionnaire, which risks a larger variance. However, we used gold examples annotated by each annotator to check the agreement between annotators and to ensure that each annotator performed the annotation attentively. Although we only considered generations from ChatGPT, additional LLMs could have been employed. We chose ChatGPT as it is one of the most used models, and this is the first investigation of APT paraphrase generation combined with instruction-trained LLMs. The inclusion of additional models would have made the human evaluation too expensive. We do not fine-tune a model on the newly proposed dataset; this paper focuses on the human-evaluated capabilities of an existing model like ChatGPT.

§ ADDITIONAL ANNOTATION INFORMATION

The annotators were paid the usual rate for student workers, corresponding to at least minimum wage in Germany. Consent was explicitly requested via email to publish annotation data and aggregated demographic data. No ethics review board was consulted, as we judged the collected data as unproblematic. The resulting datasets will be released under CC BY 4.0. Besides the questions mentioned in <ref>, the annotators also indicated whether additional text was generated, i.e., whether the model generation contained more than just the indicated paraphrase. The annotators were instructed to ignore additional text if it could be clearly separated from the paraphrase candidate. For example, sometimes the generated text might be in quotes, or the model might add a phrase like "The paraphrased sentence is:" in front of the generation. The question was asked to clean up the data and provide a better foundation for future work. For annotation phase one, we also used gold questions to evaluate whether the annotators understood the assignment and to check their agreement.
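As a side note, this kind of chance-corrected agreement can be computed with Fleiss' kappa; the following minimal sketch uses statsmodels on an invented annotation matrix purely for illustration (the actual number of gold items and annotators differs).

import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Hypothetical example: 4 gold items rated by 5 annotators on a binary
# question (1 = sense-preserving, 0 = not sense-preserving).
ratings = np.array([
    [1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 0, 0, 0],
])

# aggregate_raters turns (items x raters) labels into (items x categories) counts.
counts, _ = aggregate_raters(ratings)
print(f"Fleiss' kappa: {fleiss_kappa(counts):.2f}")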
The gold examples include a completely correctly applied paraphrase type, a correctly applied paraphrase with additional applied paraphrase types, a non-paraphrase, and a paraphrase with a different applied APT. Fleiss' kappa was 0.92 for sense-preservation, 0.68 for correct application, and 0.61 for failure reasons. Determining whether a type was applied correctly can depend on details that are easy to miss; similarly, failure reasons are not always trivial to identify and depend on the previous answers. No annotator's answers to the gold questions suggested a systematic lack of understanding beyond normal human error, e.g., no annotator answered more than one sense-preserving question incorrectly. For the second annotation phase, i.e., the preferences, Kendall's coefficient of concordance is 0.52, indicating moderate agreement on preferences across annotators. Precise preferences are likely highly individual, but trends exist.

§.§ Full List of Considered APTs

After filtering, the considered paraphrase types are:
* Addition/Deletion,
* Same Polarity Substitution (contextual),
* Synthetic/Analytic Substitution,
* Change of Order,
* Punctuation Changes,
* Spelling Changes,
* Inflectional Changes,
* Subordination and Nesting Changes,
* Semantic-based Changes,
* and Derivational Changes.

§.§ Dataset

The features contained in the two datasets are given in <Ref> and <Ref>, respectively. For the preference dataset, we give the rankings pairwise, following the format used by Anthropic for RLHF <cit.>, and add information so that the full ranking can easily be reconstructed.

§ ADDITIONAL EXPERIMENTS AND DETAILS

§.§ Additional Questions

How does the detailed confusion matrix look? Do specific APTs get confused more often than other APTs? To get a more granular view of the confusion, we also looked at the detailed confusion based on single APTs as sources instead of groups. The result is shown in <Ref>. We plotted the absolute numbers of additional/erroneous changes, as here all groups are the same size, i.e., for each APT there are 50 generations. We have already noticed that most confusion happens intra-group and not inter-group. There are large differences inside the groups, depending on the APT. For instance, among the morpho-lexical changes, Same Polarity Substitution is rarely confused, while Inflectional and Derivational Changes are often confused with different morpho-lexical APTs. As Same Polarity Substitution, i.e., the change of one unit for a synonym, is a very common APT, the model seems to have a very good understanding of it. Still, additional changes to Same Polarity Substitution are made at a comparable or even larger rate than for other group members. So, even with a good representation in the training data, it is difficult for the model to generate only the requested type. The large intra-group differences suggest that it makes sense to characterize the common types in as much detail as possible for any paraphrasing-related task, as model performance differs at both the APT group and the APT level.

Is there a correlation between the perceived hardness of a task and confusion? We have already looked at how difficulty relates to model performance regarding generation success. We also wanted to investigate how it relates to confusion. Therefore, we plotted the same confusion matrix for tasks rated as hard in <Ref>. It follows similar trends to the unrestricted confusion matrix but with remarkably higher confusion rates across the board. This further supports the assumption that tasks rated hard by humans are also harder for LLMs.
If a task is rated hard, ChatGPT is likelier to perform additional, unwanted changes and even more likely to perform the wrong kind of change.

How much do humans and ChatGPT agree on paraphrase type evaluation? Human annotation at scale is costly and difficult to perform. Works like <cit.> have raised the possibility of using LLMs for annotation tasks. Besides potentially posing as an alternative to human annotation, LLM annotation might also give insight into how well LLMs understand paraphrase types from a detection perspective. Therefore, we want to see how well LLM annotation aligns with human annotation. To that end, we used the same annotation document provided to the annotators to prompt ChatGPT and asked it to perform the same tasks the human annotators performed. We found little agreement between the human annotation and the answers produced by ChatGPT. For evaluating whether a given text pair is a paraphrase, the annotations of humans and ChatGPT are identical in 54% of the cases. Similarly, when checking whether a paraphrase type is applied correctly, humans and ChatGPT agree in only 43% of the cases. In evaluating task difficulty, they agreed in only 50% of the cases. As these annotation tasks were binary choices, i.e., yes or no and easy or hard, ChatGPT seems to agree only by chance with the human annotation (even with the same instructions). While an improvement in alignment is probably possible with different prompting techniques and prompts more tailored to the setting, the results still show that ChatGPT has trouble understanding the phenomena from a detection perspective. The model also cannot estimate the task difficulty from a human perspective, i.e., it cannot estimate which tasks humans find complex or easy.

§.§ Result Details

The precise values for the success rates corresponding to <Ref> for the different prompt methods are given in <Ref>.

§ PROMPTS

In the following, we present the prompts used to generate the paraphrases. The placeholder {definition} stands for the definition of an APT, {sentence} stands for the base sentence that should be altered, {example} is replaced by an example application of the given APT, and lastly, {type} is a stand-in for the name of the APT. The examples are constructed from paraphrase pairs in ETPC by manually editing a pair such that only one change of the given type is present. The following figures show the prompt templates for zero-shot (<Ref>), one-shot (<Ref>), few-shot (<Ref>) and chain-of-thought (<Ref>) prompts. The fine-tuned models are prompted according to <cit.> using the sentence and the types. Lastly, <Ref> shows one concrete one-shot prompt and the corresponding model response.

§ APT DEFINITIONS

Addition/Deletion: Addition/Deletion consists of all additions/deletions of lexical and functional units.

Same Polarity Substitution (contextual): Same Polarity Substitution consists of changing one lexical (or functional) unit for another with approximately the same meaning. Among the linguistic mechanisms of this type, we find synonymy, general/specific substitutions, or exact/approximate alternations.

Synthetic/analytic substitution: Synthetic/analytic substitution consists of changing synthetic structures for analytic structures, and vice versa. This type comprises mechanisms such as compounding/decomposition, light element or lexically emptied specifier additions/deletions, or alternations affecting genitives and possessives.

Change of order: Change of order includes any type of change of order, from the word level to the sentence level.
Punctuation changes: Punctuation and format changes consist of any change in the punctuation or format of a sentence (not of a lexical unit, cf. lexicon-based changes).

Inflectional Changes: Inflectional changes consist of changing inflectional affixes of words.

Spelling changes: Spelling and format changes comprise changes in the spelling and format of lexical (or functional) units, such as case changes, abbreviations, or digit/letter alternations.

Subordination and nesting changes: Subordination and nesting changes consist of changes in which one of the members of the pair contains a subordination or nested element which is not present, or which changes its position and/or form, within the other member of the pair.

Semantic-based Changes: Semantics-based changes are those that involve a different lexicalization of the same content units. These changes affect more than one lexical unit, and a clear-cut division of these units in the mapping between the two members of the paraphrase pair is not possible.

Derivational Changes: Derivational Changes consist of changes of category with or without using derivational affixes. These changes imply a syntactic change in the sentence in which they occur.

§ ANNOTATION GUIDELINES

Dear annotators,

In this experiment, we want to explore how humans perceive generated paraphrases. Paraphrases are texts expressing identical meanings that use different words or structures. However, instead of looking at general paraphrases, we want to examine specific paraphrase types. Paraphrase types, also known as atomic paraphrase types, are specific lexical, syntactic, and semantic changes that can be grouped into a hierarchical topology. Each annotator will work on roughly 40 examples. Your role will be to look at paraphrases and evaluate whether these generations were successful. Disclaimer: No sensitive information will be collected during annotation.

§.§ Paraphrases - Background

Paraphrasing is the act of rephrasing or restating a text or idea using different words while retaining the original meaning. For example, consider the following two sentences:
1. Amrozi accused his brother, whom he called "the witness", of deliberately distorting his evidence.
2. Referring to him as only "the witness", Amrozi accused his brother of deliberately distorting his evidence.
You can see that these two sentences, while using a different order and wording and possibly expressing different nuances, mean the same thing. We are specifically interested in the linguistic building blocks that make up paraphrases. These can take different forms (e.g., lexical, semantic). We can illustrate some of them with the example above. The paraphrase types that appear are:

Change of order: Change of order includes any type of change of order, from the word level to the sentence level. Here, the part "Amrozi accused his brother" is moved in the sentence.

Same Polarity Substitution: Same-polarity substitutions change one lexical (or functional) unit for another with approximately the same meaning. In this example, this happens with "whom" and "to him" and with "called" and "Referring", respectively.

Addition/Deletion: This type consists of all additions/deletions of lexical and functional units. The word "only" is added.

Please find the full list of atomic paraphrase types we will consider at the end of the document. In <ref>, we show an example of how the annotation process will take place.

§.§ Tasks

You will be given a base sentence, an atomic paraphrase type, and an altered sentence.
Additionally, you will be given the definition and an example for each atomic paraphrase type you are working on. Please see Figure 1 for an example annotation. Please read these additional materials carefully to understand what a valid paraphrase of that type would look like.
* Indicate whether the altered sentence is a paraphrase of the base sentence.
* Indicate whether the altered sentence contains a correct application of the given atomic paraphrase type.
* If yes:
* Highlight the given change.
* If no:
* Please indicate what went wrong with the application of the paraphrase type:
* The sentences are identical
* Nonsense
* Different APTs were applied
* Other reason
* If additional or different changes were made than the one initially provided, please identify the groups (see the item Atomic Paraphrase Type Groups) that the additional changes belong to. See the end of the document for a reference of possible categories.
* For each example you annotated, please indicate whether you find applying the paraphrase type to the sentence easy or hard.
* Sometimes additional text, besides the paraphrase, might be given (e.g., "Altered sentence" or explanations about the change). Please indicate whether that was the case. Disregard the additional text for the previous tasks.
The annotation interface will support you as far as possible during the annotation and only show the decisions you need to make depending on your prior annotations, i.e., unnecessary questions will not be displayed. In case of any questions, whether about the process or any specific example, please contact me at {author email}. When you are done with all assigned tasks, please also send me a quick email letting me know.

§.§ Atomic Paraphrase Types:

Addition/Deletion consists of all additions/deletions of lexical and functional units. The word "only" was added/removed in the example below.
* Amrozi accused his brother, whom he called "the witness", of deliberately distorting his evidence.
* Amrozi accused his brother, whom he only called "the witness", of deliberately distorting his evidence.

Same Polarity Substitution changes one lexical (or functional) unit for another with approximately the same meaning. Among the linguistic mechanisms of this type, we find synonymy, general/specific substitutions, or exact/approximate alternations.
* Apple noted that half the songs were purchased as part of albums.
* Apple said that half the songs were purchased as part of albums.

Synthetic/analytic substitution consists of changing synthetic structures for analytic structures, and vice versa. That means the concept of the predicate is already included in the concept of the subject, or the additional predicate is removed. This type comprises mechanisms such as compounding/decomposition, light element or lexically emptied specifier additions/deletions, or alternations affecting genitives and possessives.
a. About 120 potential jurors were being asked to complete a lengthy questionnaire.
b. The jurors were being asked to complete a lengthy questionnaire.

Change of order includes any type of change of order, from the word level to the sentence level. See the example in the Background section.
* Amrozi accused his brother, whom he called "the witness", of deliberately distorting his evidence.
* Calling him "the witness", Amrozi accused his brother of deliberately distorting his evidence.

Punctuation changes consist of any change in the punctuation or format of a sentence (not of a lexical unit, like doesn't to does not).
* PG&E Corp.
shares jumped $1.63, or 8 percent, to close Friday at $21.51 on the New York Stock Exchange.
* PG&E Corp. shares jumped $1.63 or 8 percent to close Friday at $21.51 on the New York Stock Exchange.

Inflectional Changes consist of changing inflectional affixes of words. In the example, a plural/singular alternation (streets/street) can be observed.
* It was with difficulty that the course of streets could be followed.
* It was with difficulty that the course of the street could be followed.

Spelling changes and format changes comprise changes in the spelling and format of lexical (or functional) units, such as case changes, abbreviations, or digit/letter alternations.
* The DVD-CCA then appealed to the state Supreme Court.
* The DVD CCA then appealed to the state Supreme Court.

Subordination and nesting changes consist of changes in which one of the members of the pair contains a subordination or nested element which is not present, or which changes its position and/or form, within the other member of the pair. What is a nested element in (b) ("a spokesman for Child") is part of the main clause in (a).
* Sheena Young of Child, the national infertility support network, hoped the guidelines would lead to a more "fair and equitable" service for infertility sufferers.
* Sheena Young, a spokesman for Child, the national infertility support network, hoped the guidelines would lead to a more "fair and equitable" service for infertility sufferers.

Semantic-based changes are those that involve a different lexicalization of the same content units. These changes affect more than one lexical unit, and a clear-cut division of these units in the mapping between the two members of the paraphrase pair is not possible. In the example, the content units referring to increases are present in both sentences, but there is no clear-cut mapping between the two.
* The largest gains were seen in prices, new orders, inventories and exports.
* Prices, new orders, inventories and exports increased.

Derivational Changes consist of changes of category with or without using derivational affixes. These changes imply a syntactic change in the sentence in which they occur.
* Tyco later said the loan had not been forgiven, and Swartz repaid it in full, with interest, according to his lawyer, Charles Stillman.
* Tyco later said the loan had not been forgiven, and Swartz repaid it fully, with interest, according to his lawyer, Charles Stillman.

§.§ Atomic Paraphrase Type Groups

Morpho-lexical Changes: These include all changes where a single word or lexical unit is changed. From the paraphrase types you have seen, this includes:
* Inflectional Changes
* Derivational Changes
* Spelling changes and format changes
* Same Polarity Substitution
* Synthetic/analytic substitution

Structure-based Changes: These include all changes that arise from a different structural organization of a sentence. From the paraphrase types you have seen, this includes:
* Subordination and nesting changes
* Punctuation changes

Semantic-based Changes: These include all changes that arise from distributing semantic meaning across different lexical units.
* Semantic-based Changes

Others: Any other changes. From the paraphrase types you have seen, this includes:
* Change of Order
* Addition/Deletion

§ TOOL USE ACKNOWLEDGMENTS

In the conduct of this research project, we used specific artificial intelligence tools and algorithms: ChatGPT, Gemini and DeepL Write to assist with phrasing and editing.
While these tools have augmented our capabilities and contributed to our findings, it's pertinent to note that they have inherent limitations. We have made every effort to use AI in a transparent and responsible manner. Any conclusions drawn are a result of combined human and machine insights. This is an automatic report generated with © AI Usage Cards https://ai-cards.org
http://arxiv.org/abs/2407.02260v1
20240702132921
Dynamical robustness of network of oscillators
[ "Soumen Majhi", "Biswambhar Rakshit", "Amit Sharma", "Jürgen Kurths", "Dibakar Ghosh" ]
nlin.AO
[ "nlin.AO" ]
Physics Department, University of Rome Tor Vergata, Via della Ricerca Scientifica 1, 00133 Rome, Italy Physics and Applied Mathematics Unit, Indian Statistical Institute, 203 B. T. Road, Kolkata 700108, India Department of Mathematics, Amrita School of Physical Sciences, Coimbatore, Amrita Vishwa Vidyapeetham, India Department of Physics, University Institute of Sciences, Chandigarh University, Mohali 140413, India Potsdam Institute for Climate Impact Research - Telegraphenberg A 31, Potsdam, 14473, Germany Humboldt University Berlin, Department of Physics, Berlin, 12489, Germany dibakar@isical.ac.in Physics and Applied Mathematics Unit, Indian Statistical Institute, 203 B. T. Road, Kolkata 700108, India § ABSTRACT Most complex systems are nonlinear, relying on emergent behavior resulting from many interacting subsystems, which are often characterized by oscillatory dynamics. Collective oscillatory behavior is an essential requirement for the appropriate functioning of various real-world systems. Complex networks have proven to be exceptionally efficient in elucidating the topological structures of both natural and artificial systems, as well as describing diverse processes taking place over them. Remarkable advancements have been achieved in recent years in comprehending the emergent dynamics atop complex networks. Specifically, among other processes, a large body of work intends to explore the dynamical robustness of complex networks, which is the network's ability to withstand dynamical degradation in the network constituents while maintaining collective oscillatory dynamics. Indeed, various physical and biological systems are recognized to undergo a decline in their dynamic activities, whether occurring naturally or influenced by environmental factors. The impact of such damage on network performance can be significant, and the system's robustness is indicative of its capability to maintain fundamental functionality in the face of dynamic deterioration, often called aging. This review offers a comprehensive overview of notable research endeavors that scrutinize how networks sustain global oscillation under a growing number of inactive dynamical units. We present the contemporary research dedicated to the theoretical understanding and the enhancement mechanisms of dynamical robustness in complex networks. Our emphasis lies on various network topologies and coupling functions, elucidating the persistence of networked systems. We cover a variety of system characteristics, from heterogeneity in network connectivity to heterogeneity in the dynamical units. Finally, we discuss the challenges ahead in this growing field and open areas for future studies. ^a These authors contributed equally to the manuscript 89.75.-k, 89.75.Hc, 05.45.Xt Dynamical robustness of network of oscillators Dibakar Ghosh 29 February 2024 ============================================== Keywords: Complex networks, coupled oscillators, dynamical robustness, aging transition § INTRODUCTION Many natural systems are nonlinear and rarely isolated, and hence understanding such complex systems requires system-level interpretation. The complexity of many social, biological, and physical systems emanates from the intricacy in the patterns of interaction among their constituents. Within this paradigm, the field of network science has developed into the ideal platform for providing tools for the modeling and analysis of complex systems <cit.>.
These networked systems often function through the emergent behavior of many interacting units, where each unit exhibits oscillatory dynamics. Consequently, interacting oscillatory systems constitute an efficient framework to model many complex systems. In such a framework, the inherent dynamics of an individual node can be modeled as a system of nonlinear differential equations, and different coupling functions can describe the interactions among the nodes <cit.>. Even simple nonlinear systems, when connected with each other through rather simple coupling functions, can generate complex collective dynamics. With various types of coupling schemes and network structures, a wide variety of collective dynamics has been explored. For example, synchronization processes are ubiquitous in nature and have been studied extensively in populations of locally interacting elements in the context of physical, biological, chemical, and technological systems <cit.>. Among the various types of partial synchronization, the coexistence of coherent and incoherent patterns, commonly referred to as the chimera state <cit.>, has been in focus over the last decade across a diverse array of systems. Coupled oscillators, therefore, play a key role in many areas of science and technology, and their dynamics help us to explore various emergent behaviors of living and non-living systems. Among all the research perspectives concerning the dynamics of networks, studies focused on network robustness, which refers to the ability to withstand even strong perturbations, hold considerable significance from various aspects. This phenomenon of network robustness can be conceptualized in two distinct ways: structural and dynamical robustness. i) Structural robustness addresses the endurance of network activities when faced with structural perturbations, which could involve the removal of links (bond percolation) or nodes (site percolation) within a network <cit.>. ii) On the other hand, the dynamical robustness of complex networks, in general, is defined as the network's capacity to sustain its dynamic activity despite local disturbances. Specifically, throughout this review, we uncover the dynamical robustness of networks of coupled oscillators, which refers to the network's ability to sustain its global dynamical activity even when a portion of its dynamical components is functionally degraded. In the literature, this is popularly presented in terms of the aging transition. The aging transition is an emergent collective phenomenon in networked systems comprising self-oscillatory and non-self-oscillatory nodes. This transition occurs when the network shifts from a globally oscillatory state to an oscillation-quenched state as the proportion of inactive (aged) oscillators exceeds a critical threshold. Possessing collective oscillatory behavior is an essential prerequisite for the regular functioning of many complex systems. Examples include circadian rhythms <cit.>, biological pacemaker cells <cit.>, cardiac and respiratory systems <cit.>, etc. Oscillations play important roles in several other dynamic processes at both the single-cell and multicellular levels <cit.>. Additionally, coupled oscillator models are applicable to electric power-grid networks, where components such as power sources need to be synchronized to the same frequency <cit.>. Robust oscillatory dynamics is thus a fundamental characteristic of such systems <cit.>.
Despite being regularly subjected to internal and external disturbances, these systems can maintain their rhythmic activities to a certain degree. If a limited number of units below a specific threshold fail to generate oscillatory behavior, the remaining units can compensate, enabling the entire system to resiliently preserve its proper functioning. However, if a substantial number of units transition to an inactive quenched state, it can significantly impede their functions, potentially resulting in a partial collapse or even complete failure of the system in question. Keeping this fundamental and inevitable context in mind, over the last two decades, researchers from the nonlinear dynamics community have been working on the dynamical robustness of networks of coupled oscillators <cit.>. It is defined as the ability of a network to sustain its collective macroscopic oscillation when a few of its nodes fail to produce rhythmic dynamics due to local degradation <cit.>. Daido and Nakanishi <cit.>, in their pioneering work, laid the mathematical framework for studying dynamical robustness. They examined a scenario in which oscillatory nodes progressively transition to fixed points. If the number of nodes that shift to fixed points surpasses a critical threshold, the usual oscillatory behavior of these systems can be disrupted, resulting in an abrupt phase transition towards a globally non-oscillatory state. This phenomenon, characterized by such abrupt and catastrophic emergence, is termed an aging transition. The authors demonstrated that in a global network of diffusively coupled oscillators, the aging transition can be characterized by a universal scaling law of an order parameter involving the inactivation fraction and the coupling strength. After their initial work, a series of research articles has been published. Owing to its widespread relevance, the aging transition has been studied in diverse models with different coupling functions and network structures. Among the many significant attempts made along this line, Pazó et al. <cit.> studied the aging transition in an ensemble of globally coupled Morris-Lecar models, which exhibit a saddle-node bifurcation on an invariant circle. A similar scaling law was established for an ensemble of excitable and oscillatory units. Tanaka et al. <cit.> have explored the aging transition in complex networks and have shown that, with respect to dynamical robustness, scale-free networks are highly resilient to random inactivation but extremely vulnerable to targeted inactivation of low-degree oscillators. This finding is not in agreement with the structural robustness of a scale-free network, where high-degree nodes play the crucial role. Dynamical robustness has been studied in the context of metapopulation dynamics by Kundu et al. <cit.>. Their results reveal how the network topology plays a crucial role in metapopulation survivability. Thakur et al. <cit.> studied the influence of time-delayed coupling on the nature of the aging transition in globally coupled Stuart-Landau oscillators. Their findings divulge that time delay in the coupling does not favor dynamical robustness. In Ref.
<cit.>, the authors demonstrated the aging transition in a multi-layer network of active and inactive units, considering various interlayer coupling functions. As mentioned above, such global dynamical degradation in the form of an aging transition can have an outright impact on system performance. Thus, for many real-world and man-made systems, the continued oscillatory activity of the components is often crucial for maintaining proper functioning. For instance, robust oscillatory dynamics is necessary in neuronal activity <cit.> and in physiological processes ranging from cell necrosis within organs <cit.> to the cardiac and respiratory systems <cit.>. Due to this high practical importance, many researchers have proposed remedial measures to enhance the dynamical robustness against aging or deterioration of individual units. Among the notable attempts in this regard, Liu et al. <cit.> proposed an efficient method to enhance dynamical persistence by introducing an additional parameter that controls the diffusion rate. In Ref. <cit.>, the authors established that adding a linear positive mean-field feedback term can substantially improve the network's dynamical robustness. The effectiveness of self-feedback delay in increasing the dynamical resilience has been studied by Sharma and Rakshit <cit.>. The vast and growing body of literature on the aging transition and dynamical robustness is itself a testimony to its relevance in various fields of science and engineering. In this report, we intend to provide an exhaustive overview of the aging transition by integrating the prevailing knowledge achieved in the last two decades. Thus, the relevant results and methodology related to dynamical robustness become more generally accessible to researchers in diverse communities of science and technology. We also emphasize several open and challenging problems. § DYNAMICAL ROBUSTNESS FOR DIFFERENT NETWORK STRUCTURES In this section, we discuss dynamical robustness for different network topologies. Most of the results have been obtained for globally connected and complex networks. We also discuss results on the dynamical robustness of multiplex, time-varying and long-range networks. §.§ Dynamical robustness of globally coupled networks Initially, we address the outcomes regarding dynamical robustness under instantaneous diffusive coupling, followed by an exploration of time-delayed diffusive coupling. Subsequently, we introduce findings related to weighted conjugate coupling and interactions characterized as attractive-repulsive. Finally, we delve into dynamical robustness in scenarios where inactive oscillators are absent. §.§.§ Diffusive coupling Diffusive coupling represents the predominant form of coupling observed in numerous real-world systems <cit.>, consequently garnering significant attention when investigating aging transitions. We first discuss the aging transition phenomenon in a system of N globally coupled Stuart–Landau oscillators <cit.>. The mathematical form of the all-to-all diffusively coupled network is ż_j(t)=(α_j+iω-|z_j(t)|^2)z_j(t) +(K/N)∑_l=1^N(z_l-z_j), where j = 1,2,…,N and z=x+iy is a complex variable. Here ω signifies the internal frequency of each oscillator, and α_j is the bifurcation parameter of the j-th oscillator, denoting its proximity to a Hopf bifurcation point.
When α_j>0, each isolated Stuart-Landau oscillator exhibits a stable sinusoidal oscillation, while it converges to the stable trivial fixed point z_j = 0 for α_j<0. The second term on the right-hand side of Eq. (<ref>) denotes the presence of diffusive coupling with a strength represented by K. The framework introduced by Daido and Nakanishi <cit.> facilitates the study of robustness in a manner where an active oscillator with α_j=a>0 transitions to an inactive state with α_j=-b<0, with a parameter p representing the fraction of inactive oscillators. To simplify, we can designate the group of oscillators j = 1, 2, ..., N(1-p) as the active ones and the remaining oscillators j = N(1-p)+1, N(1-p)+2, ..., N as the inactive ones. This implies that 0 < p < 1 signifies the fraction of inactive oscillators in the entire networked system. The degree of macroscopic oscillation of the whole network is then measured by the order parameter |Z(p)|, with Z(p)=1/N∑_j=1^N z_j, and subsequently by the normalized order parameter R, which is defined as R=|Z(p)|/|Z(0)|. As one increases the inactivation parameter p, the order parameter gradually diminishes, and at a critical value p = p_c the loss of global oscillation takes place, leading to a transition in the mean-field dynamics called the aging transition. Figure <ref> portrays the normalized order parameter R as a function of p. It depicts the gradual loss of global oscillation for various strengths of the coupling constant K. For decreasing K, the critical value of the inactivation parameter p_c monotonically increases until it reaches unity at a threshold value K=K_c, below which p_c remains at unity. Synchronized activities among the oscillators permit us to assume that, within each of the active and inactive groups, the oscillators behave identically. With this presumption in mind, we set z_j = A for the active ensemble and z_j = I for the inactive ensemble of oscillators. Consequently, Eq. (<ref>) transforms into the coupled system Ȧ(t) = (a-Kp+iω-|A(t)|^2)A(t)+KpI(t), İ(t) = (-b-Kq+iω-|I(t)|^2)I(t)+KqA(t), where q=1-p. A linear stability analysis of the reduced system leads to an analytical formula for the critical point p_c, namely p_c=a(K+b)/((a+b)K). In the limiting case, lim_K →∞ p_c=a/(a+b). Consequently, one can conclude that the dynamical robustness is stronger when the active oscillators have a larger amplitude of oscillation. One can also derive the scaling property of the order parameter near the critical point p_c <cit.>. The scaling law is represented as |Z|∝(p_c-p)^β, where the critical exponent β depends on the coupling strength as β = 1/2 for K<K_c, β = 1 for K=K_c, and β = 3/2 for K>K_c; that is, β increases with the coupling strength K. A minimal numerical illustration of this aging transition is sketched below.
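The following Python sketch integrates Eq. (<ref>) with a simple Euler scheme and traces the normalized order parameter R as the inactivation ratio p grows; the parameter values (a, b, ω, K, N) and the choice of time-averaged |Z| as the order parameter are illustrative assumptions, not taken from any specific figure discussed above.

import numpy as np

# Euler integration of N globally coupled Stuart-Landau oscillators,
#   dz_j/dt = (alpha_j + i*omega - |z_j|^2) z_j + (K/N) * sum_l (z_l - z_j),
# with a fraction p of units set to alpha_j = -b (inactive).

def mean_field_amplitude(p, N=200, a=2.0, b=1.0, omega=3.0, K=5.0,
                         dt=2e-3, t_total=100.0, t_transient=80.0):
    alpha = np.full(N, a)
    n_inactive = int(round(p * N))
    if n_inactive > 0:
        alpha[-n_inactive:] = -b              # inactivate the last p*N units
    rng = np.random.default_rng(seed=1)
    z = rng.standard_normal(N) + 1j * rng.standard_normal(N)
    steps, skip = int(t_total / dt), int(t_transient / dt)
    acc = 0.0
    for step in range(steps):
        mean = z.mean()
        # (K/N) * sum_l (z_l - z_j) simplifies to K * (mean - z_j)
        z += dt * ((alpha + 1j * omega - np.abs(z)**2) * z + K * (mean - z))
        if step >= skip:
            acc += abs(mean)
    return acc / (steps - skip)               # time-averaged |Z(p)|

Z0 = mean_field_amplitude(0.0)
for p in np.linspace(0.0, 1.0, 11):
    print(f"p = {p:.1f}  R = {mean_field_amplitude(p) / Z0:.3f}")

# Analytical threshold for comparison: p_c = a(K + b)/((a + b)K) = 0.8 here.
a, b, K = 2.0, 1.0, 5.0
print("p_c =", a * (K + b) / ((a + b) * K))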
Until recently, the investigation of dynamical robustness was primarily focused on coupled Stuart–Landau oscillators, which exhibit typical sinusoidal oscillations. However, many natural systems can be modeled by networks of non-sinusoidal oscillators, such as the Van der Pol oscillator. Rakshit et al. <cit.> have studied the aging transition in a network of globally coupled Van der Pol oscillators and highlighted how it differs from the Stuart-Landau model. Their investigation uncovers distinct pathways to the aging transition in networks of Van der Pol oscillators compared to typical sinusoidal oscillators like the Stuart–Landau oscillators. Unlike sinusoidal oscillators, where the order parameter smoothly undergoes a second-order phase transition, they observed an unconventional phase transition characterized by the abrupt emergence of unbounded trajectories at a critical point. Through a detailed bifurcation analysis, they elucidated this abnormal phase transition, demonstrating that it is driven by the boundary crisis of a limit-cycle oscillator, paving the way for an unconventional and discontinuous aging transition path. §.§.§ Time-delay diffusive coupling Dynamical systems characterized by temporal delays are prevalent in nature. Delays manifest due to finite signal propagation times and memory effects in a diverse range of natural phenomena <cit.>, spanning physical, chemical, engineering, economic, and biological domains, including their respective networks. Several dynamical systems can be portrayed by delay differential equations with a single constant delay <cit.>, discrete delays <cit.>, distributed delay <cit.>, state-dependent delay <cit.>, and time-dependent delay <cit.>. Let us now introduce a time delay in the signal transmission and observe how it affects the dynamical robustness of networked systems. For simplicity, we consider globally (all-to-all) coupled systems of oscillators. The system of N all-to-all coupled Stuart-Landau oscillators subject to linear time-delayed coupling can then be represented by the following set of equations, ż_j=(α_j+ iω-|z_j|^2)z_j+(κ'/N)∑_k=1, k ≠ j^N[z_k(t-τ)-z_j(t)];   j= 1,2,⋯,N, with κ'=2κ as the interaction strength and τ as the parameter describing the time delay in the signal transmission. Choosing a network of N=500 dynamical units, we depict in Fig. <ref>(a) how the normalized order parameter R (Eq. (<ref>)) changes with increasing inactivation, represented by the parameter p, for various values of τ. We keep κ'=5 fixed and start with the non-delayed case (i.e., τ=0). We then introduce a small time-delay τ=0.01 and observe a slightly faster aging transition in the system. A more significant change takes place when we increase τ to 0.03, where the critical inactivation ratio p_c evidently decreases. This scenario remains valid as we increase the delay to τ=0.05 and τ=0.07. In fact, with increasing τ this scenario becomes more pronounced and the aging transition occurs faster. So, for a fixed interaction strength, the critical inactivation ratio p_c decreases with increasing τ. Thus, the aging takes place faster, and hence the robustness of the networked system decreases due to the introduction of the time-delay. Going further, one can perform a linear stability analysis of this reduced system of equations around the origin (as in Ref. <cit.>), ending up with a characteristic equation for the eigenvalues, from which the aging islands in the (κ, τ) parameter plane can be determined. Subsequently, for a better perception of the robustness subject to time-delay, along with varying the time-delay we consider a simultaneous change in the interaction strength, and plot the phase diagram in the (κ, τ) parameter plane for different values of the inactivation ratio p (in Fig. <ref>(b)). Specifically, p=0.0, 0.2, and 0.4 are chosen, and it is clear that the aging island expands upon raising the ratio of inactive oscillators.
The aging islands expand in both directions of κ and τ, reflecting the fact that, along with the interaction strength and the inactive elements, the time-delay also carries the capability of suppressing oscillations to the trivial fixed point and hence of decreasing the robustness of the system. Further, the study by Rahman et al. <cit.> examines a globally connected network comprising active and inactive oscillators with distributed-delay coupling. It establishes conditions for the aging transition, derived for both uniform and gamma delay distributions. The findings suggest that, in the case of a uniform distribution, increasing the width of the delay distribution while maintaining the same mean delay enables an aging transition to occur at a smaller coupling strength and a lower proportion of inactive elements. Provided the coupling strength falls within a specific range and the mean time delay is substantial, it may be feasible to achieve an aging transition for any proportion of inactive oscillators in the case of a gamma distribution. §.§.§ Mean-field diffusive coupling Mean-field coupling has been studied extensively due to its prevalence in numerous natural phenomena within the realms of physics, biology and engineering. Effects of mean-field diffusion have previously been explored in synchronization <cit.>, multistable dynamics of synthetic genetic networks <cit.>, oscillation suppression processes <cit.>, and also in the dynamics of chimera-death <cit.>. Recently, the dynamical robustness of coupled Stuart-Landau oscillators interacting through mean-field diffusion has been reported in <cit.>. The mathematical model of the coupled oscillators is given by ż_j = (α_j + i ω - |z_j|^2)z_j+ϵ(Qz̅ - z_j), where z̅ = 1/N∑_l=1^N z_l is the mean field. In this coupling scheme, the control parameter Q associated with the mean-field interaction describes the influx of the mean-field into the dynamical units. The parameter Q essentially controls the rate of mean-field interaction in the diffusive coupling among the interacting oscillatory systems. When Q=0, there is no interaction between the oscillators and they behave like uncoupled ones subjected to self-feedback, while Q=1 maximizes the interaction with the mean field <cit.>. A linear stability analysis around the origin enables us to analytically derive the critical value p_c for Eq. (<ref>) as <cit.>, p_c = (b+ϵ)[1/(a+b)+(a-ϵ)/(Qϵ(a+b))],  with  Q > 1-a/ϵ, provided b-a-Qϵ+2ϵ>0. The above expression for p_c clearly indicates that lowering the mean-field parameter Q has a negative effect on the dynamical robustness of the networked system; a quick numerical evaluation of the formula is sketched below. This result is in agreement with the previous finding that the mean-field control parameter plays an important role in the suppression of oscillations <cit.>. As Q→ 0, the effect of the mean-field interaction in the coupling decreases, which in turn hinders the collective macroscopic oscillation.
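The sketch below evaluates the expression for p_c at decreasing values of Q; the parameter values a, b and ϵ are illustrative choices satisfying the validity conditions, not necessarily those used in the cited study.

# Evaluate p_c = (b + eps) * (1/(a + b) + (a - eps)/(Q*eps*(a + b))) for
# decreasing Q; the chosen a, b, eps satisfy Q > 1 - a/eps and
# b - a - Q*eps + 2*eps > 0 for all Q listed.
a, b, eps = 2.0, 1.0, 4.0
for Q in (1.0, 0.9, 0.8, 0.7, 0.6):
    p_c = (b + eps) * (1.0 / (a + b) + (a - eps) / (Q * eps * (a + b)))
    print(f"Q = {Q:.1f}  p_c = {p_c:.3f}")
# p_c decreases monotonically from about 0.833 at Q = 1 to about 0.278
# at Q = 0.6, i.e., the robustness degrades as Q is lowered.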
Figure <ref> depicts the normalized order parameter R (Eq. (<ref>)) as a function of the inactivation parameter p for various values of Q. It is discernible from the plot for the maximum mean-field scenario (with Q=1) that the order parameter sharply decreases with increasing p and eventually vanishes at the critical value p_c∼ 0.75, indicating the occurrence of an aging transition. The value of Q is decreased subsequently, and the order parameter is plotted for Q=0.9, for which the aging transition takes place at a lower value of p; hence, the dynamical robustness of the coupled system is reduced. Similarly, for a lower value of Q=0.8, the robustness decreases even more. This trend remains intact for even lower values of the mean-field parameter, Q=0.7 and Q=0.6. These numerical results are in line with the analytically obtained critical values of the inactivation ratio as well. Thus, these plots demonstrate that a lower mean-field density forces the entire system to collapse dynamically at lower values of the critical inactivation ratio, and hence leads to a decrement of the dynamical robustness. Similar results are observed for networks of delay-coupled systems as well <cit.>. §.§.§ Weighted conjugate coupling Apart from the traditional coupling via similar variables among dynamical systems, coupling through dissimilar or conjugate variables is also natural in a number of experimental scenarios in which the units are coupled by feeding the output of one into the other. Examples include the experiments by Kim et al. <cit.> on coupled semiconductor laser systems. Later, in studying the phenomenon of oscillation quenching <cit.>, this form of coupling was utilized in detail <cit.>, as coupling through dissimilar variables naturally breaks the rotational symmetry of diffusively coupled systems. Conjugate coupling has further been shown to be effective in enhancing coherence <cit.> and inducing explosive death <cit.>. Here we consider a globally coupled network of Stuart-Landau limit-cycle oscillators with weighted conjugate coupling <cit.>, for which the governing equation can be represented as ż_j = (α_j+iω - |z_j|^2)z_j + (ϵ m_{1,2}/N)∑_k=1^N [(Img(z_k)-β Re(z_j)) + i(Re(z_k)-β Img(z_j))], where ϵ is the overall coupling strength and β is the feedback control parameter (0 ≤β≤ 1). Here Re(z) and Img(z) are the real and imaginary parts of z, respectively. α_j is the bifurcation parameter of the Stuart-Landau oscillators; specifically, we choose α_j = 2 for the active set of oscillators and α_j = -1 for the inactive ones. The intra-group coupling strengths of the active and inactive oscillators are m_1 and m_2, respectively. When m_1 = m_2, the interaction is symmetric, while for m_1 ≠ m_2 the interaction becomes asymmetric. For the stability analysis, the global network is divided into two groups of active (A) and inactive (I) oscillators. Here A = A_r+iA_im and I = I_r+iI_im, where A_r, I_r and A_im, I_im are the real and imaginary variables, which satisfy the following equations: Ȧ_r = aA_r-A_imω-A_rA^2_im-A^3_r+m_1ϵ[(1-p)A_im+pI_im-β A_r], Ȧ_im = aA_im+A_rω-A_imA^2_r-A^3_im+m_1ϵ[(1-p)A_r+pI_r-β A_im], İ_r = bI_r-I_imω-I_rI^2_im-I^3_r+m_2ϵ[(1-p)A_im+pI_im-β I_r], İ_im = bI_im+I_rω-I_imI^2_r-I^3_im+m_2ϵ[(1-p)A_r+pI_r-β I_im]. The determination of the critical value p_c involves conducting a linear stability analysis of Eq. (<ref>) at the origin, (A, I) = (0, 0). For this purpose, the fourth-order characteristic equation is deduced from the Jacobian, and the Hopf and pitchfork bifurcation points are computed using the Routh-Hurwitz stability criterion (for more details see <cit.>). For the numerical analysis, we first consider the symmetric case (m_1 = m_2 = 1) in the conjugate coupled system of Stuart-Landau oscillators with natural frequency ω=1 and control parameter β=1. In Figure <ref>(a), we illustrate the variation of the order parameter R as a function of the inactivation ratio p across various coupling strengths ϵ. The critical value p_c of the aging transition to the homogeneous steady state (HSS) decreases with increasing coupling strength ϵ.
To show the effect of ϵ on the aging transition, we portray the corresponding phase diagram in the p-ϵ parameter plane in Fig. <ref>(e), which confirms the fall of p_c with the coupling strength for the HSS. To untangle the impact of the asymmetry parameters on the aging transition, we consider three cases with the parameter sets (m_1, m_2)= (1, 2), (2, 1) and (1, 3). The aging transition in terms of R and the corresponding phase diagrams for these three cases are depicted in Figs. <ref>(b-d) and <ref>(f-h), respectively. As the impact of the inactive (active) oscillators on the active (inactive) oscillators intensifies with rising values of m_2 (m_1), the HSS region in the parameter plane is either enlarged or shrunk. The impact of the natural frequency on the aging transition is shown in the (ϵ-ω) parameter plane for different combinations of (m_1, m_2) at a fixed inactivation ratio p=0.8 in Figs. <ref>(a-d). In the parameter plane, dynamical transitions occur through three bifurcations (similar to Fig. <ref>) among the oscillatory regime (OS), the HSS and the inhomogeneous steady state (IHSS). The variation in the spread of the aging transition island as a function of ω and ϵ for different combinations of (m_1, m_2) is depicted in Fig. <ref>. Increasing the coupling strength of the inactive oscillators through m_2 enlarges the aging transition region (HSS), as a result of the influence of the inactive oscillators over the active group of oscillators. On the other hand, with the increase of m_1, the aging transition region, i.e., the HSS region, shrinks in the parameter plane due to the influence of the active oscillators over the inactive ones, as seen in Figs. <ref>(a-d). §.§.§ Attractive and repulsive interaction Up to this point, we have explored the concept of the aging transition predominantly in scenarios where the interactions between dynamical nodes are attractive (positive). However, real-world systems often exhibit a more intricate nature, involving mixed coupling with both attractive (positive) and repulsive (negative) connections <cit.>. It is important to emphasize that positive and negative couplings can coexist in diverse physical, ecological, and biological systems <cit.>. In recent years, numerous studies have delved into the emergent dynamics of ensembles of oscillators featuring attractive-repulsive coupling. These investigations have showcased the emergence of manifold collective dynamics, including chimera states, solitary states, extreme events, amplitude (or oscillation) death, and anti-phase synchrony in networks of oscillatory nodes experiencing both attractive and repulsive interactions <cit.>. The interplay between attractive and repulsive couplings can lead to the suppression of oscillatory activities among coupled oscillators. As a result, the phenomenon of the aging transition exhibits qualitative differences in the presence of competitive attractive-repulsive interactions. The first work in this direction was carried out by Bera <cit.>. The mathematical form of a network of N coupled Stuart-Landau oscillators having attractive and repulsive interactions is ż_j = (α_j+iω-|z_j|^2)z_j+(K/N)∑_k=1^N C_jk G(z_k,z_j)-(ϵ/N)∑_k=1^N B_jk H(z_k,z_j), where j=1,2,…,N. Here C_jk and B_jk represent the adjacency matrices of the attractive and repulsive interactions, respectively. The functional forms of the attractive and repulsive interactions are defined by G(z_k, z_j)=(z_k-z_j) and H(z_k, z_j)=(z_k+z_j).
In this context, the parameters K and ϵ represent the strengths of the attractive and repulsive interactions, respectively. To determine the critical threshold p_c of the aging parameter, we set z_j = A for all active oscillators and z_j = I for all inactive oscillators. Consequently, for a globally connected network Eq. (<ref>) reduces to,

Ȧ = (a+iω-|A|^2-2ϵ-Kp+pϵ)A+(K-ϵ)pI,
İ = (-b+iω-|I|^2-ϵ-Kq-pϵ)I+(K-ϵ)qA,

where q = 1-p. Employing a linear stability analysis of this reduced model around the origin, we derive the critical value p_c as

p_c = (a-2ϵ)(b+K+ϵ)/[(a+b)(K-ϵ)],    for ϵ+K ≥ a.

When ϵ + K ≤ a, the critical parameter p_c remains at unity. In the limiting case as ϵ approaches 0, lim_{ϵ→0} p_c = a(b+K)/[K(a+b)], aligning with the critical aging parameter of a diffusively coupled global network. Additionally, lim_{K→∞} p_c = (a-2ϵ)/(a+b). Consequently, it can be inferred that an increase in the repulsive interaction strength ϵ leads to a decrease in dynamical robustness. We plot the order parameter R as a function of the aging parameter p for various scenarios. Figs. <ref>(a) and <ref>(b) depict the transition scenario for ϵ=0 and ϵ=0.2, respectively, considering a spectrum of attractive coupling strengths. The general trend of the aging transition remains the same in both settings (absence and presence of repulsive interaction) for increasing attractive coupling strength K. However, in the latter case of ϵ=0.2, aging occurs at smaller values of the inactivation ratio p than in the former case of ϵ=0, implying a decline in robustness. This illustration demonstrates that the inclusion of repulsive interaction significantly reduces the dynamical robustness. In Fig. <ref>(c), we portray how the transition takes place for increasing values of ϵ, with a fixed attractive coupling strength K=0.2. The figure clearly shows that with increasing ϵ, the aging transition occurs at ever lower values of p. Thus, Fig. <ref>(c) further confirms that repulsive interactions negatively impact dynamical robustness.
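The analytic threshold derived above is straightforward to evaluate; a minimal sketch (with the clamping at unity for ϵ+K ≤ a taken from the text, and parameter values chosen only for illustration) is:

```python
# Minimal sketch: critical inactivation ratio p_c for the globally
# coupled attractive-repulsive model, following Eq. (<ref>).
def p_c(a, b, K, eps):
    """Critical aging ratio; returns 1 when eps + K <= a."""
    if eps + K <= a:
        return 1.0
    return (a - 2*eps) * (b + K + eps) / ((a + b) * (K - eps))

# Limits quoted in the text, checked numerically:
a, b = 2.0, 1.0
print(p_c(a, b, K=5.0, eps=0.0), a*(b + 5.0)/(5.0*(a + b)))  # eps -> 0 limit
print(p_c(a, b, K=1e6, eps=0.3), (a - 2*0.3)/(a + b))        # K -> infinity limit
```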
In contrast to the above study, which assumed attractive and repulsive interactions in both the real and imaginary variables of the Stuart-Landau oscillators, our next analysis delves into the robustness of an alternative form of attractive and repulsive interactions, as outlined in <cit.>. Consider N coupled Stuart-Landau oscillators with attractive interactions through the real parts and repulsive interactions through the imaginary parts, given by,

ż_j(t) = (α_j+iω-|z_j(t)|^2)z_j(t)+(k_a/λ_j)∑_{l=1}^N B_jl(Re(z_l)-Re(z_j))-(k_r/λ_j)∑_{l=1}^N B_jl(Im(z_l)-Im(z_j)).

Here k_a and k_r indicate the magnitudes of the attractive and repulsive interactions in the coupled oscillators, respectively, B_jl is the connection matrix, and λ_j is the degree of the j-th node of the coupled oscillators. Two order parameters, namely the average amplitude A_amp = (1/N)∑_{i=1}^N(⟨x_{i,max}⟩_t-⟨x_{i,min}⟩_t) and the variance ρ = σ^2(⟨x_i⟩_t), with x_i = Re(z_i), are used to identify the aging transition. These order parameters are employed to differentiate between homogeneous and inhomogeneous steady states. Specifically, when A_amp = ρ = 0, all oscillators converge to the HSS, while A_amp = 0 and ρ ≠ 0 indicate the presence of oscillators in the IHSS. In Fig. <ref>, the average amplitude A_amp is plotted with respect to p for different combinations of the attractive coupling strength k_a and the repulsive coupling strength k_r. Figure <ref>(a) suggests that as we augment k_a, the critical inactivation ratio p_c decreases, indicating a faster occurrence of the aging transition. This observation aligns with earlier studies suggesting that increased attractive coupling serves to dampen the oscillations. Conversely, it is evident from Fig. <ref>(b) that repulsive interactions enhance the robustness to a large extent. Yet, in Figs. <ref>(c-d), it becomes evident that as the repulsive interaction strength k_r reaches higher values, a sudden phase transition unfolds in the order parameter A_amp. This transition is characterized by its catastrophic nature, with the magnitude of the order parameter providing no forewarning of the impending collapse of the system. Consequently, our observations suggest that, depending on the parameter values, repulsive interactions have the potential to bolster the dynamical robustness of coupled oscillators, but concurrently they might instigate an abrupt suppression of macroscopic oscillations within the network. Subsequently, we delve into the mechanism behind this sudden catastrophic phase transition by examining the two-dimensional parameter space. The phase diagram in the k_r-p plane is presented in Figs. <ref>(a-d) for various values of the attractive coupling strength. This figure delineates three distinct regions: OS (oscillatory state), IHAT (inhomogeneous aging transition state), and HAT (homogeneous aging transition state). With an increase in k_a, we note an expansion of both the IHAT and the HAT regions. Additionally, it is observable that for higher values of p, the transition from the homogeneous steady state to an inhomogeneous one occurs with variations in k_r. This result is also apparent in Fig. <ref>, where we have graphed A_amp and ρ as functions of p, alongside the corresponding time series near the aging transition point. In Fig. <ref>(a), a typical aging transition is evident, as the order parameters A_amp and ρ are smooth functions of p and converge to zero at a critical value p=p_c. The corresponding time series of the oscillators on both sides of the aging transition point p_c are depicted in Figs. <ref>(b-c). However, Figs. <ref>(d-f) portray an aging transition that is qualitatively distinct from the previous one. In this case, a discontinuity in the order parameters is noticeable at the aging transition point p_c. Additionally, the corresponding time series indicates an aging transition through an inhomogeneous steady state, where the oscillators settle into three different steady states after the aging transition. Therefore, we can infer that the abrupt discontinuous jump in the order parameter is a consequence of a bifurcation from the oscillatory state to inhomogeneous steady states.

§.§.§ Dynamical robustness in the absence of inactive oscillators

Thus far, our discussion has centered on the dynamical robustness of ensembles of dynamical systems under an increasing proportion of inactive oscillatory units. We now move away from this specification of the dynamical units and demonstrate that even an ensemble of co- and counter-rotating oscillatory systems <cit.> can exhibit a similar aging transition upon an increase in the fraction of counter-rotating oscillators. The impact of the mean-field feedback for both symmetry-preserving and symmetry-breaking coupling is explored in Ref. <cit.>. The network eventually becomes completely dynamically vulnerable: the regime of global oscillation transits to aging through a Hopf bifurcation, whereas the transition from aging to oscillation death occurs via a pitchfork bifurcation.
The dynamical evolution of a network of N globally interacting Stuart-Landau oscillators with symmetry-preserving coupling is described as,

ż_j=(λ+iω_j-|z_j|^2)z_j+(K/N)∑_{k=1}^N(α z_k-z_j),      j=1,2,⋯,N,

where α refers to the strength of the mean-field feedback. Moreover, ω_j is the frequency of the j-th dynamical unit. Specifically, for a frequency +ω the system rotates in the counter-clockwise direction, and in the clockwise direction whenever the frequency is -ω. Analogous to the studies with inactive oscillators, we divide the entire system into two groups. To be precise, the frequency ω_j=ω for j ∈ {1,2,…,N-Np} and ω_j=-ω for j ∈ {N-Np+1,…,N}, so that the parameter p characterizes the proportion of counter-rotating oscillators in the networked system. Further choosing the network size N=100, frequency ω=5.0, and defining the normalized order parameter R as before, we plot R with respect to the fraction p of counter-rotating oscillators in Fig. <ref>, for two different values of the mean-field feedback strength, α=1.0 and α=0.95. The dynamical outcomes being symmetric for the two ranges p ∈ (0,0.5) and p ∈ (0.5,1), we effectively confine ourselves to the range p ∈ (0,0.5) in the plots. In Fig. <ref>(a), we observe that for α=1.0 and K=1, R remains finite and non-zero over the entire range of p, depicting the oscillatory regime of all the individual units and hence of the whole network. However, for higher coupling strengths K=4, 7, and 10, the order parameter R transits from a non-zero to a zero value, and hence the aging transition takes place through Hopf bifurcations at p_HB=0.28, 0.20, and 0.19, respectively. Thus, increasing coupling strength decreases the critical value of p at which aging occurs via a Hopf bifurcation. In order to perceive the impact of the mean-field feedback on the dynamical robustness, we depict similar transitions for a lower feedback strength α=0.95, with the same set of interaction strengths (Fig. <ref>(b)). As before, the entire system shows oscillatory behavior for K=1. For increasing K=4, 7, and 10, the aging transition occurs at the Hopf bifurcation points p_HB=0.24, 0.15, and 0.10, respectively. Thus, even for a tiny decrement in the feedback strength α, the onset of the aging transition takes place at smaller values of the fraction p. For a better understanding, we further portray this scenario through the corresponding phase diagrams in the (K,p) parameter plane in the lower panel of Fig. <ref>. As can be witnessed, for both values of the mean-field feedback strength (Figs. <ref>(c) and <ref>(d)), the networked system displays oscillatory solutions for small interaction strengths and small proportions of counter-rotating oscillators. However, as K and p surpass certain values, aging transitions take place. The regimes of oscillation and aging are classified as OS and AG, respectively. More importantly, for smaller feedback strength the aging region expands in the phase diagram, as expected from the earlier observations. Our investigation thus demonstrates how networked dynamical systems comprising co- and counter-rotating oscillators, instead of an active-inactive decomposition, can also give rise to the phenomenon of aging transition, and hence opens a different perspective on the study of dynamical robustness of networked systems.

§.§ Dynamical robustness of complex networks

In the above, we have examined the dynamical robustness of globally coupled networks under various coupling schemes.
In this section, we turn our attention to aging transitions within different complex network topologies. In the past two decades, network science has experienced significant advancements due to the discovery of various topological structures in many real-world networks <cit.>. The complexity of a network structure can be characterized by the connectivity properties of the interaction pathways (links) among the network components (nodes). In terms of the degree distribution (the probability distribution of node degrees across the network), complex networks mainly fall into two categories: homogeneous and heterogeneous networks. Homogeneous networks, exemplified by random graphs <cit.> and small-world models <cit.>, exhibit a binomial or Poisson degree distribution, where the node degrees cluster around the mean degree. In contrast, heterogeneous networks like scale-free networks display a heavy-tailed degree distribution that approximately follows a power law <cit.>. We will first discuss the results obtained for homogeneous complex networks, followed by the study of heterogeneous networks. Then, we will present the findings for weighted complex networks, and finally, we will discuss the dynamical robustness of a correlated network topology.

§.§.§ Homogeneously coupled complex networks

Tanaka et al. <cit.> extended the work of Daido and Nakanishi <cit.> to complex network structures. They considered a homogeneously coupled network having a Poisson degree distribution. In particular, they explored an Erdős–Rényi random graph <cit.> by considering a network of diffusively coupled oscillators as follows:

ż_j(t)=(α_j+iω-|z_j(t)|^2)z_j(t)+(K/N)∑_{l=1}^N A_jl(z_l-z_j).

Using the system reduction techniques proposed by Daido and Nakanishi <cit.>, the critical value p_c of the inactivation parameter can be calculated analytically. For a homogeneous network, the degree of a node can be approximated by the average degree of the network. With this assumption, we can conclude that each oscillator is connected to (1-p)⟨k⟩ active and p⟨k⟩ inactive neighboring oscillators, where ⟨k⟩ represents the mean degree of the network. When we designate z_j as A for the active group and I for the inactive group of oscillators, Eq. (<ref>) transforms into the following coupled system:

Ȧ(t) = (a-Kpd+iω-|A(t)|^2)A(t)+KpdI(t),
İ(t) = (-b-K(1-p)d+iω-|I(t)|^2)I(t)+K(1-p)dA(t).

A linear stability analysis of the equilibrium point (A,I)=(0,0) leads to

p_c^hom=a(Kd+b)/[(a+b)Kd] for K>K_c^hom,

where d represents the link density, defined as ⟨k⟩/N, and K_c^hom=a/d provides the critical coupling strength, below which p_c^hom=1. Here we consider α_j=a for an active oscillator and α_j=-b for an inactive oscillator. From Eq. (<ref>), we can conclude that for fixed K, the robustness increases as the link density d decreases.
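The closed-form expression for p_c^hom lends itself to a direct numerical check; in the following minimal sketch the values of a, b, K and d are illustrative assumptions.

```python
# Minimal sketch: the analytic critical ratio p_c^hom of Eq. (<ref>)
# for a homogeneous (Erdős–Rényi-like) network.
def p_c_hom(a, b, K, d):
    """p_c for a homogeneous network with link density d = <k>/N."""
    K_c = a / d                      # critical coupling K_c^hom = a/d
    if K <= K_c:
        return 1.0
    return a * (K*d + b) / ((a + b) * K * d)

a, b = 2.0, 1.0
for d in (0.02, 0.05, 0.1):          # robustness grows as d shrinks
    print(d, [round(p_c_hom(a, b, K, d), 3) for K in (20, 50, 100)])
```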
In order to derive the desired expression for the critical inactivation ratio p_c for heterogeneous networks under random inactivation, we follow the degree-weighted mean-field approximation <cit.>. The original system (Eq. (<ref>)) can then be approximated as

ż_j = (α_j + iω - |z_j|^2)z_j + (Kk_j/N)[(1-p)M_A(t)+pM_I(t)-z_j],

where

M_A(t) = ∑_{j∈S_A} k_j z_j(t) / ∑_{j∈S_A} k_j,   M_I(t) = ∑_{j∈S_I} k_j z_j(t) / ∑_{j∈S_I} k_j

are the degree-weighted mean fields of the active and inactive sets of dynamical units, with k_j (j=1,2,...,N) being the degree of the j-th node in the network. Based upon the fact that the oscillators display phase synchronization with frequency Ω, let us now assume that the state variables can be written as z_j(t)=r_j(t)e^{i(Ωt+θ)}, r_j being the amplitude and θ the phase shift. Substituting this into Eq. (<ref>), we get

ṙ_j = (α_j - Kk_j/N - r_j^2)r_j + (Kk_j/N)[(1-p)R_A(t)+pR_I(t)],

where

R_A(t) = ∑_{j∈S_A} k_j r_j(t) / ∑_{j∈S_A} k_j,   R_I(t) = ∑_{j∈S_I} k_j r_j(t) / ∑_{j∈S_I} k_j.

Further supposing that in the stationary oscillatory regime R_A(t) and R_I(t) are time-independent, the phase transition from the oscillatory (R_A, R_I>0) regime to the non-oscillatory (R_A=R_I=0) regime eventuates due to the change in stability of the equilibrium point at the origin. The stability is governed by the following Jacobian matrix

J_0 = [ ∂G_A/∂R_A  ∂G_A/∂R_I ; ∂G_I/∂R_A  ∂G_I/∂R_I ]|_{R_A=R_I=0},

in which

G_A(R_A,R_I)=∑_{j∈S_A} k_j r_j^*(R_A,R_I) / ∑_{j∈S_A} k_j,
G_I(R_A,R_I)=∑_{j∈S_I} k_j r_j^*(R_A,R_I) / ∑_{j∈S_I} k_j.

The stationary amplitude r_j^* is a positive real solution of the equation

r_j^3 - (α_j - Kk_j/N) r_j - (Kk_j/N)((1-p)R_A+pR_I)=0.

Eq. (<ref>) has only one positive real root if α_j - Kk_j/N < 0, ∀ j ∈ S_A. Differentiating Eqs. (<ref>) and (<ref>) with respect to R_A, we find the (1,1) entry of J_0 as

∂G_A/∂R_A|_{R_A=R_I=0} = [(1-p)K/(N∑_{j∈S_A} k_j)] ∑_{j∈S_A} k_j^2/(Kk_j/N - α_j) ≃ (1/d)(1/N)∑_{j∈S_A} d_j^2/(d_j - α_j/K),

in which d_j=k_j/N (j=1,2,...,N) is the normalized degree of the j-th node, and the approximation ∑_{j∈S_A} k_j ≃ (1-p)dN^2 is used. Also, in the large-N limit the link density is d=⟨k⟩/(N-1). If we now define

H(K,α) = (1/N)∑_{j=1}^N d_j^2/(d_j - α/K),

then we can write ∂G_A/∂R_A|_{R_A=R_I=0} ≃ (1-p)H(K,a)/d. Similarly, we obtain ∂G_A/∂R_I|_{R_A=R_I=0} ≃ pH(K,a)/d, ∂G_I/∂R_A|_{R_A=R_I=0} ≃ (1-p)H(K,-b)/d, and ∂G_I/∂R_I|_{R_A=R_I=0} ≃ pH(K,-b)/d. Thus we arrive at

J_0 = [ (1-p)H(K,a)/d  pH(K,a)/d ; (1-p)H(K,-b)/d  pH(K,-b)/d ].

The equilibrium point at R_A=R_I=0 changes its stability near the phase transition, providing us with the critical inactivation ratio

p_c^het = [H(K,a)-d]/[H(K,a)-H(K,-b)],      K > K_c^het,

in which the definition of H follows from Eq. (<ref>). The result holds if K > a/d_min, where d_min=k_min/N. Figure <ref> portrays a comparative analysis between the critical inactivation ratios obtained for the homogeneous and heterogeneous networks. In Fig. <ref>(a), we depict the critical ratio p_c with respect to the interaction strength K, with a fixed link density of d ∼ 0.08 for a network of size N=3000 and mean degree ⟨k⟩=240. The dashed and solid black curves respectively stand for the analytically obtained outcomes of Eqs. (<ref>) and (<ref>), whereas the red triangles and blue diamonds represent the numerical results. As can be witnessed, the analytically obtained expressions are in good agreement with the numerical results.
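The heterogeneous threshold can likewise be evaluated from an explicit degree sequence; in the sketch below the heavy-tailed degree sequence and all parameter values are illustrative assumptions.

```python
# Minimal sketch: evaluate H(K, alpha) and p_c^het of Eqs. (<ref>)
# from a degree sequence.
import numpy as np

def H(K, alpha, degrees, N):
    d = degrees / N                      # normalized degrees d_j
    return np.mean(d**2 / (d - alpha/K))

def p_c_het(K, a, b, degrees):
    N = len(degrees)
    d = degrees.mean() / (N - 1)         # link density, large-N limit
    if K <= a * N / degrees.min():       # requires K > a/d_min
        return 1.0
    Ha, Hb = H(K, a, degrees, N), H(K, -b, degrees, N)
    return (Ha - d) / (Ha - Hb)

rng = np.random.default_rng(1)
degrees = np.clip(rng.pareto(2.5, 2000) * 100 + 200, 200, 1999)  # heavy tail
print(p_c_het(K=30.0, a=2.0, b=1.0, degrees=degrees))
```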
As expected, the p_c values start decreasing with increasing K beyond the respective critical interaction strengths K_c^hom and K_c^het. The critical ratio p_c for the homogeneous network is smaller than that for the heterogeneous one over the entire range of the coupling strength K ∈ [0,50]. Similar results are shown in Fig. <ref>(b), but this time for varying link density d ∈ [0,0.1] and fixed coupling strength K=30. Qualitatively the same scenario remains valid here as far as the evolution of p_c is concerned. This analytical study of random failures has further been generalized to targeted attacks <cit.>. That study presents a universal formula for the critical fraction of inactive units, applicable to both random failures and targeted attacks on networked systems. It examines the impact of targeting nodes based on their degrees, starting with identical oscillators and homogeneous edge weights. The theory is subsequently extended to networks with heterogeneous edge weights and non-identical oscillators. The analytical findings are confirmed through extensive numerical simulations. On the other hand, Tanaka et al. <cit.> presented a general formula for the critical inactivation ratio of interacting heterogeneous oscillators, assuming different values of the units' intrinsic parameters instead of the same fixed parameter value for all the active and all the inactive elements. The study demonstrates that increasing the heterogeneity of the oscillator components leads to improved dynamical robustness, as evidenced by the comparison of critical values for networks with various extents of heterogeneity. This observation is further validated for networks of Morris-Lecar neuronal systems communicating through electrical synapses.

§.§.§ Weighted complex networks

The majority of the early research on dynamical robustness focused on networks without taking weights into account. In reality, however, many complex networks are weighted, and the connection strengths of the nodes highly influence the network dynamics <cit.>. Both the topology and the strength of a network's connections have an impact on its dynamics. In particular, studies have shown that in complex networks the simultaneous presence of degree heterogeneity and weight heterogeneity tends to impede full synchronization. However, synchronization can be notably enhanced, and made insensitive to these types of heterogeneity, when the weight distribution is appropriately integrated with the degree distribution <cit.>. He et al. <cit.> first studied dynamical robustness in a weighted complex network. Their investigation highlights the roles of high- and low-degree nodes in the dynamical robustness of weighted complex networks and generalizes the work of Tanaka et al. <cit.>. The study of dynamical robustness in a weighted complex network has been carried out for N diffusively coupled Stuart-Landau oscillators as expressed below,

ż_j(t)=(α_j+iω-|z_j(t)|^2)z_j(t)+K∑_{l=1}^N W_jl A_jl(z_l-z_j).

Here A=(A_jl) and W=(W_jl) are the adjacency and weight matrices of the network, respectively. The weight matrix W is defined as W_jl=1/k_j^β, where k_j is the degree of the j-th node and β is a tunable parameter whose value determines whether the network is weighted (β≠0) or unweighted (β=0). The dynamical robustness has been studied following the mathematical framework proposed by Daido and Nakanishi <cit.>. In Fig.
<ref>, the critical values p_c are depicted against K for different β values in a weighted heterogeneous network. The graph includes data for random inactivation (circles), targeted inactivation of high-degree nodes (stars), and targeted inactivation of low-degree nodes (squares). The three distinct critical couplings at p_c=1 are indicated as K_c^R, K_c^H, and K_c^L. In Figs. <ref>(a-b) we observe that all the critical curves intersect at a point, denoted by (K_cin, p_cin). Clearly, for 0<β<1, p_c^L<p_c^H while K<K_cin, but for K>K_cin the order is completely reversed. This implies a crucial role of the low-degree nodes in impacting the dynamical robustness, which can only happen in a weakly weighted network and for weak coupling. For β=1.0, p_cin=1 and K_c^R=K_c^H=K_c^L=K_cin, as shown in Fig. <ref>(c). From Fig. <ref>(d) it is quite obvious that for β>1, K_c^H<K_c^R<K_c^L. By applying the heterogeneous mean-field approximation, we can derive the critical value of the inactivation parameter as

p_c^het=[F(K,a)-⟨k⟩N]/[F(K,a)-F(K,-b)] for K>K_c^hetR,

where F(K,α)=∑_{j=1}^N Kk_j^{2-β}/(-α+Kk_j^{1-β}), and K_c^hetR=max_{j∈S_A}{a k_j^{β-1}}. Here we consider α_j=a for an active oscillator and α_j=-b for an inactive oscillator. The phenomenon reported in Ref. <cit.>, where heterogeneous networks are more susceptible to the failure of low-degree nodes rather than high-degree nodes, is observed specifically in weakly weighted networks and under weak coupling conditions. However, when considering the impact of weighted connections, we discover that the susceptibility to high-degree node failures is more widespread and occurs across a broader parameter range. This finding indicates a strong alignment between dynamical and structural robustness. In the unweighted version of a heterogeneous network, high-degree nodes (hubs) are impacted by numerous neighbors, whereas low-degree nodes are influenced by only a small number of neighboring nodes, rendering them relatively isolated. Consequently, active low-degree nodes can sustain relatively high dynamical activity in comparison to active high-degree nodes, and carrying out targeted inactivation of low-degree nodes (rather than high-degree nodes) leads to a notable decrease in the network's dynamical activity. This aligns precisely with the observations made by Tanaka et al. in Ref. <cit.>. Nonetheless, as the overall coupling strength K increases, the exchange of information and dynamics among the networked nodes becomes more effective, eliminating the isolating effect previously experienced by the low-degree nodes. As the weight exponent β increases, the coupling at each node (influenced by the weight matrix W) becomes more evenly distributed, diminishing the system's heterogeneity associated with the adjacency matrix A. A notable instance is β=1, where all nodes experience the same level of input signal intensity. Consequently, the distinctive isolating effect of the low-degree nodes diminishes as well. Under these conditions, it is plausible to infer that the typically influential behavior of high-degree nodes extends across a broader parameter range, in line with the findings of the structural robustness analysis.
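A minimal sketch evaluating F(K,α) and the weighted critical ratio as functions of the weight exponent β is given below; the degree sequence and parameter values are illustrative assumptions.

```python
# Minimal sketch: F(K, alpha) and the weighted-network critical ratio
# p_c^het of Eqs. (<ref>) for several weight exponents beta.
import numpy as np

def F(K, alpha, k, beta):
    return np.sum(K * k**(2 - beta) / (-alpha + K * k**(1 - beta)))

def p_c_weighted(K, a, b, k, beta):
    N = len(k)
    if K <= np.max(a * k**(beta - 1.0)):   # requires K > K_c^hetR
        return 1.0
    return (F(K, a, k, beta) - k.mean()*N) / (F(K, a, k, beta) - F(K, -b, k, beta))

rng = np.random.default_rng(0)
k = np.clip(rng.pareto(2.5, 2000) * 15 + 10, 10, None)   # heavy-tailed degrees
for beta in (0.0, 0.5, 1.0, 1.5):
    print(beta, round(p_c_weighted(K=30.0, a=2.0, b=1.0, k=k, beta=beta), 3))
```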
Recently, Ray et al. <cit.> delved into aging transitions within a weighted heterogeneous network. In their investigation, the weights are randomly selected from a uniform distribution [0, w], with no correlation between the node degrees and the weights. Their findings reveal a direct link between the weight heterogeneity and the aging transition point p_c. It was observed that the dynamical robustness diminishes as the mean value w̅ of the weight distribution increases. Heightened heterogeneity among the connection weights correlates with reduced dynamical robustness. Moreover, the analytical expression for the critical value p_c depends on both the mean weight and the network's average degree.

§.§.§ Correlated networks

Up to this point, we have explored the dynamical robustness of complex networks with structures characterized by degree distributions, which represent the probability distribution of the number of connections per node throughout the network. In particular, we have examined the dynamical robustness of homogeneously and heterogeneously connected networks featuring various degree distributions. Nonetheless, it is crucial to recognize that the degree distribution alone does not completely define the network's topology. Networks with identical degree distributions can exhibit diverse network structures. These distinctions can be quantified by examining the network assortativity with respect to node degrees (degree-degree correlations), assessing the clustering coefficient, and considering various other network characteristics <cit.>. Network assortativity, in particular, measures the correlation between a node's degree and the degrees of its neighboring nodes. In assortative networks there is a positive correlation, meaning that nodes tend to link with others of similar degrees. Conversely, in disassortative networks there is a negative correlation, indicating that high-degree nodes are more inclined to connect with low-degree nodes. Here, we examine the influence of network assortativity on the dynamical robustness of coupled oscillator networks, as explored in the study <cit.>. The assortativity coefficient r is determined by computing the Pearson correlation coefficient of the degrees between pairs of connected nodes, and is calculated as follows,

r = (1/σ^2_q)∑_j∑_k jk(E(j,k) - Q(j)Q(k)).

Here Q represents the probability distribution of the remaining degree, which quantifies the probability that a node at the end of a randomly chosen edge has k edges other than the chosen one. The distribution Q is derived from the degree distribution P(k) as Q(k) = (k+1)P(k+1)/∑_j jP(j). The term σ^2_q = ∑_k k^2Q(k) - (∑_k kQ(k))^2 is the variance of the distribution Q(k), and E(j,k) represents the joint probability distribution of the remaining degrees of two vertices. This distribution is symmetric for undirected graphs and adheres to the sum rules ∑_j∑_k E(j,k)=1 and ∑_j E(j,k) = Q(k). The assortativity coefficient r in Eq. (<ref>) can be rewritten as follows,

r = [4M∑_m j_m k_m - (∑_m(j_m+k_m))^2] / [2M∑_m(j^2_m+k^2_m) - (∑_m(j_m+k_m))^2],

where M represents the number of edges in the network, m ∈ {1,2,...,M} is the index of the edges, and j_m and k_m represent the degrees of the two nodes connected by edge m. The assortativity coefficient r varies within the range -1 to 1. A value of r greater than 0 signifies an assortative network, r=0 indicates an uncorrelated network, and r<0 implies a disassortative network. We investigate the dynamical robustness by altering r through two approaches: greedy edge rewiring (GER) <cit.> and stochastic edge rewiring (SER) <cit.>. We start with an uncorrelated network, characterized by r=0, and then perform edge reshuffling without permitting self-loops or overlaps.
We then randomly select two existing edges of the network, given by the connected node pairs (v_1, w_1) and (v_2, w_2). The remaining degrees of these node pairs are denoted as (j_1, k_1) and (j_2, k_2), respectively. i) In the GER approach, the degrees of the connected node pairs are used to guide the edge rewiring. We arrange the remaining degrees j_1, j_2, k_1, and k_2 in descending order and assign them new labels l_1, l_2, l_3, and l_4, ensuring that l_1≥l_2≥l_3≥l_4. There are three possible ways to partition the four nodes into two pairs of connected nodes, as depicted in Fig. <ref>(a). To increase the assortativity coefficient, we opt for Case I, establishing connections between nodes with more similar degrees, if the current state is Case II or III. Conversely, to reduce the assortativity coefficient, we employ Case III, establishing an edge between the nodes with the highest and lowest degrees, if the current state is Case I or II. By repeatedly employing such edge rewirings in a greedy manner, we can monotonically increase or decrease r until it no longer changes. ii) In the stochastic edge rewiring (SER) technique, we iteratively rewire edges in a stochastic manner, altering the network's assortativity. We aim to construct a network that adheres to a predefined joint probability distribution E(j,k) of the remaining degrees. This process is implemented using a numerical method based on <cit.>.
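As an illustration of these ingredients, the following sketch combines the edge-list form of the assortativity coefficient with a single greedy (GER-style) rewiring step; the toy graph and the rejection rules for degenerate moves are illustrative assumptions.

```python
# Minimal sketch: assortativity r from an edge list (rewritten form of
# Eq. (<ref>)) plus one greedy rewiring step that pushes r up or down;
# moves creating self-loops or duplicate edges are rejected.
import random
import numpy as np

def assortativity(edges, degree):
    j = np.array([degree[u] for u, v in edges], float)
    k = np.array([degree[v] for u, v in edges], float)
    M, s = len(edges), 0.0
    s = np.sum(j + k)
    return (4*M*np.sum(j*k) - s**2) / (2*M*np.sum(j**2 + k**2) - s**2)

def ger_step(edges, degree, increase=True):
    e1, e2 = random.sample(edges, 2)
    nodes = set(e1) | set(e2)
    if len(nodes) < 4:
        return edges                        # edges share a node: skip
    l = sorted(nodes, key=lambda n: degree[n], reverse=True)
    # Case I pairs similar degrees (raises r); Case III pairs the extremes.
    new = [(l[0], l[1]), (l[2], l[3])] if increase else [(l[0], l[3]), (l[1], l[2])]
    rest = [e for e in edges if e not in (e1, e2)]
    if any(frozenset(e) in {frozenset(x) for x in rest} for e in new):
        return edges                        # would create a duplicate edge
    return rest + new

edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0), (0, 3)]
degree = {0: 3, 1: 2, 2: 2, 3: 3, 4: 2, 5: 2}
print(assortativity(edges, degree))
print(assortativity(ger_step(edges, degree, increase=True), degree))
```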
To study the effect of network assortativity on the dynamical robustness, we consider the network model consisting of N diffusively coupled Stuart-Landau oscillators described by,

ż_j = (α_j+iω - |z_j|^2)z_j + (K/N)∑_{k=1}^N A_jk(z_k - z_j).

First, we investigate the dynamical robustness of an uncorrelated Erdős–Rényi random graph <cit.>, with degrees centered around the average degree. Subsequently, we modify the network to make it assortative or disassortative by applying the edge-rewiring algorithms outlined above. Figures <ref>(a) and <ref>(b) display the critical value p_c as a function of r for the networks generated using the GER and SER methods, respectively. In both panels, for random inactivation, the value of p_c remains almost constant regardless of the value of r. This is because the number of inactive oscillators in the vicinity of each oscillator node is not influenced by the value of r. For targeted inactivation of both high-degree and low-degree oscillator nodes, p_c increases monotonically with r, as depicted in Figs. <ref>(a) and <ref>(b). This outcome can be attributed to the fact that in more assortative networks, the amplitudes of the active oscillators, which play a dominant role in determining the order parameter, are larger. Consequently, it can be concluded that network assortativity has a positive impact on the dynamical robustness of oscillator networks facing targeted inactivation. Next, we explore the impact of assortativity on the dynamical robustness of correlated networks with power-law degree distributions <cit.>. The critical fraction p_c plotted versus r for the GER and SER techniques is shown in Figs. <ref>(c) and <ref>(d), respectively. It is evident from the figure that in assortative networks, for all types of inactivation, the value of p_c consistently increases as r grows from 0. In assortative networks, connections tend to be formed between high-degree nodes and between low-degree nodes. Consequently, when targeting the inactivation of high-degree nodes, low-degree active oscillators that are connected to only a few inactive oscillators can sustain large oscillation amplitudes. Likewise, when targeting the inactivation of low-degree nodes, high-degree active oscillators connected to a few inactive oscillators can also maintain large oscillation amplitudes. These nodes, which preserve substantial oscillation amplitudes, contribute significantly to the high value of p_c, signifying highly robust oscillatory behavior. Thus, assortativity plays a positive role in enhancing the dynamical robustness. In contrast, for disassortative networks, as r is decreased from 0 the dynamical robustness increases after a slight downward trend for the GER method, as shown in Fig. <ref>(c), but it gradually decreases until r=-0.5 for the SER method, as shown in Fig. <ref>(d). In summary, we conclude that network assortativity enhances dynamical robustness, while the impact of network disassortativity on dynamical robustness depends on the specific edge-rewiring method employed.

§.§ Dynamical robustness of multiplex networks

Recent studies have further confirmed that the functions emerging within a single network can exert a notable impact on other networks. Specifically, a node within one network is often found to be a component of another network. From ecological <cit.> and climate systems <cit.>, via physical and transportation systems, to social networks <cit.>, such interconnections have been identified across various contexts. This underscores the effectiveness of an interdependent <cit.>, particularly multilayer (multiplex) <cit.>, network-of-networks architecture in describing numerous systems and scenarios. A multilayer network <cit.> is thus defined as a network in which, in addition to nodes and links, there is a set of layers; the layers typically represent different types or aspects of the interactions (links). In particular, a special type of multilayer network having the same number of nodes in each layer, in which each node in each layer possesses exactly one connection with a node (its replica) in another layer, is known as a multiplex network <cit.>. Let us here consider a multilayer network consisting of L layers, each layer comprised of N globally coupled nodes. Casting the dynamics of the nodes of the network by Stuart-Landau oscillatory systems, the time evolution of the entire multilayer network is governed by <cit.>,

ż_j^[l]={α_j + iω-|z_j^[l]|^2}z_j^[l]+(ϵ/N)∑_{k=1}^N(z_k^[l]-z_j^[l])+σ H(z_j^[1],z_j^[2],…,z_j^[L]),

where z_j^[l] represents the state of the j-th node in the l-th layer, j=1,2,…,N; l=1,2,…,L. Here ϵ is the intralayer coupling strength and σ corresponds to the interlayer interaction strength, with H(·) being the function characterizing the interlayer connections. Three different interlayer coupling functions are assumed, namely mean-field, chain and diffusive interlayer interactions, which are realized by the following forms

H(z_j^[1],z_j^[2],…,z_j^[L])= (1/L)∑_{m=1}^L z_j^[m]      (case I), z_j^[l-1]/2      (case II), (1/L)∑_{m=1}^L(z_j^[m]-z_j^[l])      (case III).

Next, we modify the order parameter for the multilayer network as R=(1/NL)|∑_{l=1}^L∑_{j=1}^N z_j^[l]|, which characterizes the intensity of the global oscillation of the entire multilayer networked system.
We then set the system parameters to α_j=2 for an active oscillator, α_j=-1 for an inactive oscillator, and ω=3, while keeping the network size N=3000 and the coupling strength fixed at ϵ=8. In Fig. <ref>(a), an exemplary multilayer network is portrayed with N=5 and L=2, in which the fraction of inactive units is chosen as p=0.4. Figure <ref>(b) depicts the variation of the order parameter R as a function of the inactivation ratio p for three different network setups. To be precise, the first plot (in red) corresponds to the single-layer case, i.e., L=1. As witnessed, starting from the normalized unit value, R monotonically decreases and eventually drops to zero near p ∼ 0.74, implying an aging transition around p_c ∼ 0.74. Next, R is shown (in green) for the bilayer framework (L=2), precisely for case II, while keeping the interlayer coupling strength at σ=1.5. This time, the critical inactivation ratio p_c ∼ 0.94 at which the aging transition occurs is much higher than in the earlier (single-layer) case. This implies that the robustness of the bilayer network for case II is higher than that of the single-layer network. Finally, R is plotted (in blue) for the multilayer network (L=2) under case III, for a much larger σ=8. In contrast to the previous case, now the aging transition takes place earlier than in the single-layer network. Specifically, the critical ratio p_c ∼ 0.72 of inactive elements is smaller than that of the single-layer case. Thus, compared to the single-layer network formulation, the robustness of the multilayer network can be higher or lower depending on the functional form of the interlayer coupling.

§.§ Dynamical robustness of long-range connectivity networks

As far as the interactions among the constituents of a complex system are concerned, we have thus far confined ourselves to short-range direct communications. However, in networked systems, interactions arise not only from the direct connections between nodes but also from indirect long-range communications facilitated by the numerous other existing pathways that link the nodes. Long-range connectivity <cit.> is omnipresent in complex systems and has recently emerged as a flourishing area of research. In particular, researchers have studied the presence of long-range interactions, characterized by a power-law decay, in various networks. Examples include biological networks <cit.>, Rydberg atoms <cit.>, hydrodynamic interactions <cit.>, plasmas <cit.>, nuclear spins <cit.>, and, in climate, the so-called teleconnections <cit.>. From synchronization <cit.> to chimera states <cit.> and oscillation quenching <cit.>, various phenomena have been examined in networks subject to long-range communications among the nodes. Here, let us first describe long-range interactions in networks through the adjacency matrices corresponding to different path lengths. Let G=(V,E) be a network consisting of N nodes, where V={1,2,…,N} is the set of nodes and E ⊂ V×V is the set of links. We can then characterize the network's diameter as D=max{d(j,k): j,k=1,2,…,N}, where d(j,k) denotes the distance between the nodes j and k. The d-path adjacency matrix A^[d] can then be written as,

A_jk^[d]= 1, if d(j,k)=d, 0, otherwise.

For a clear perception, we choose a small exemplary network of N=6 nodes with diameter D=3, and in Fig. <ref> we depict the associated d-path (d=1,2,3) networks along with their adjacency matrices.
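The d-path adjacency matrices can be extracted from the ordinary adjacency matrix via breadth-first shortest-path distances; a minimal sketch (with an illustrative six-node cycle) follows.

```python
# Minimal sketch: build the d-path adjacency matrices A^[d] of
# Eq. (<ref>) from an adjacency matrix via BFS distances.
import numpy as np
from collections import deque

def d_path_matrices(A):
    N = len(A)
    dist = np.full((N, N), -1, dtype=int)
    for s in range(N):                       # BFS from every source s
        dist[s, s], q = 0, deque([s])
        while q:
            u = q.popleft()
            for v in np.nonzero(A[u])[0]:
                if dist[s, v] < 0:
                    dist[s, v] = dist[s, u] + 1
                    q.append(v)
    D = dist.max()                           # network diameter
    return {d: (dist == d).astype(int) for d in range(1, D + 1)}

A = np.array([[0,1,0,0,0,1],[1,0,1,0,0,0],[0,1,0,1,0,0],
              [0,0,1,0,1,0],[0,0,0,1,0,1],[1,0,0,0,1,0]])  # 6-cycle
for d, Ad in d_path_matrices(A).items():
    print(f"A^[{d}]:\n{Ad}")
```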
In Fig. <ref>(a), we display the original given network, i.e., the direct 1-path network, together with the associated adjacency matrix A^[1] in the lower panel. Similarly, the extracted 2- and 3-path networks are depicted in Figs. <ref>(b) and <ref>(c), respectively, with the corresponding adjacency matrices A^[2] and A^[3]. Casting each node by the dynamics of a Stuart-Landau oscillator, the dynamics of the j-th node in the network subject to long-range communication can be described as <cit.>,

ż_j=(α_j + iω - |z_j|^2)z_j+(1/N)∑_{d=1}^D σ_d∑_{k=1}^N A^[d]_jk(z_k-z_j),      j=1,2,⋯,N.

In this context, σ_d represents the interaction intensity between the j-th and k-th nodes when the distance between them is d, where D denotes the network's diameter. Thus, the strength of the interaction between each pair of nodes essentially depends on the distance between them. We choose α_j=a>0 and α_j=-b<0 for the active and inactive sets of oscillators, respectively. We specifically focus on a power-law decay of the coupling strength with the distance between the nodes, represented by σ_d=σ/d^β. Here, the exponent β governs the decay rate of the power law. We consider an Erdős–Rényi random network architecture (G(N,q) graph <cit.>) as the underlying network, with network size N=200 and connection probability q=0.05, and we choose a=1, b=1 with ω=3. Figure <ref>(a) portrays the variation of the order parameter R as a function of the interaction strength σ ∈ [0,30] and the inactivation ratio p, with the decay rate kept fixed at β=2. The phase diagram shows how R transits from a non-null to a null value, and hence how aging takes place over the entire range σ ∈ [0,30] as the inactivation ratio p increases. Increasing values of both the interaction strength σ and the fraction p lead to faster aging, and hence the network becomes more dynamically vulnerable. On this numerically acquired phase diagram we depict the analytically obtained critical value p_c (discussed below), which aligns with the numerical findings. This analytical curve thus separates the aging region from the oscillatory regime. In Fig. <ref>(b), the order parameter R is plotted with respect to the simultaneous variation of p and the decay rate β ∈ [0,4], with a fixed coupling strength σ=20. In contrast to the effect of the interaction strength σ on the dynamical robustness, increasing values of β inhibit the aging transition, and hence the dynamical robustness increases. The analytically obtained p_c values, plotted over the phase diagram as before, fit the numerical outcome. Ultimately, in Fig. <ref>(c), we illustrate the variation of the critical inactivation ratio p_c across the (σ,β) parameter plane. The phase diagram describes how the robustness of the network is altered by the interplay of the two crucial parameters σ and β. The critical ratio p_c remains unity for sufficiently small interaction strength σ, irrespective of the value of β. However, as σ increases, aging takes place in the networked system and p_c decreases. On the contrary, p_c increases for increasing values of β. This indicates that the system is dynamically vulnerable whenever the coupling strength σ is high and the long-range exponent β is low. To establish a comprehensive analytical framework for evaluating the dynamical robustness of the networked system under consideration, we employ a degree-weighted mean-field approximation.
As a result, the system (<ref>) can be approximated as

ż_j = (α_j + iω - |z_j|^2)z_j + (1/N)∑_{d=1}^D σ_d k_j^[d][(1-p)M_A(t)+pM_I(t)-z_j].

Here,

M_A(t) = ∑_{d=1}^D σ_d∑_{j∈S_A} k_j^[d] z_j(t) / ∑_{d=1}^D σ_d∑_{j∈S_A} k_j^[d],   M_I(t) = ∑_{d=1}^D σ_d∑_{j∈S_I} k_j^[d] z_j(t) / ∑_{d=1}^D σ_d∑_{j∈S_I} k_j^[d]

are the degree-weighted mean fields corresponding to the active and inactive sets of dynamical units, respectively, where k_j^[d] (j=1,2,...,N) refers to the degree of the j-th node in the d-path network. Assuming the state variables in the form z_j(t)=r_j(t)e^{i(ωt+θ)}, Eq. (<ref>) can be expressed as

ṙ_j = (α_j - (1/N)∑_{d=1}^D σ_d k_j^[d] - r_j^2)r_j + (1/N)∑_{d=1}^D σ_d k_j^[d][(1-p)R_A(t)+pR_I(t)],

in which

R_A(t) = ∑_{d=1}^D σ_d∑_{j∈S_A} k_j^[d] r_j(t) / ∑_{d=1}^D σ_d∑_{j∈S_A} k_j^[d],   R_I(t) = ∑_{d=1}^D σ_d∑_{j∈S_I} k_j^[d] r_j(t) / ∑_{d=1}^D σ_d∑_{j∈S_I} k_j^[d].

Then, pursuing a procedure similar to that of Sec. <ref>, we arrive at the critical inactivation ratio

p_c = [H(σ,a)-1]/[H(σ,a)-H(σ,-b)],

where H arises from the following equation,

H(σ,α) = [∑_{d=1}^D σ_d∑_{j=1}^N k_j^[d] L_j/(L_j-Nα)] / [N^2∑_{d=1}^D σ_d s^[d]],

in which L_j=∑_{d=1}^D σ_d k_j^[d], j=1,2,...,N, and s^[d] denotes the link density of the d-path network.

§ DYNAMICAL ROBUSTNESS OF QUANTUM OSCILLATORS

So far, we have considered classical Stuart-Landau oscillators to understand different routes leading to the aging transition. We now pose the question: “Can the aging transition occur in the quantum domain, and if it does, how would it manifest?” Motivated by this question, Bandyopadhyay et al. <cit.> investigated the occurrence of aging transitions in quantum systems, emphasizing the differences compared to classical models <cit.>. They examined a globally coupled network of quantum Stuart-Landau oscillators, each of which can be either active or inactive. The classification of nodes as active or inactive in the quantum context is determined by the characteristics of the dissipators in the quantum master equation. The quantum master equation for N globally coupled quantum Stuart-Landau oscillators with diffusive coupling is expressed as <cit.>,

ρ̇ = ∑_{j=1}^N(-i[H,ρ]+G_j𝒟[O_j](ρ)+κ𝒟[a_j^2](ρ)) + (V/N)∑_{j=1}^N∑_{j'=1}^N{}'𝒟[a_j - a_{j'}](ρ),

where H=ωa_j^†a_j, and a_j and a_j^† represent the bosonic annihilation and creation operators of the j-th oscillator, respectively. The Lindblad dissipator 𝒟[L̂] has the form 𝒟[L̂](ρ)=L̂ρL̂^†-(1/2){L̂^†L̂,ρ}, where L̂ is an operator (we set ħ=1 without any loss of generality). The operator O_j of the second term is introduced to incorporate the concept of active and inactive elements. As the system approaches the classical limit (G_j>κ), the quantum master equation becomes equivalent to the classical Stuart-Landau equation via the relation ⟨ȧ⟩=Tr(ρ̇a). Here ∑_{j'}' indicates that the sum excludes the term j'=j. The concept of active and inactive elements is introduced into the quantum master equation through the properties of the operator associated with the coefficient G_j in the Lindblad dissipator as follows:

O_j = a_j^† for active oscillators, a_j for inactive oscillators.

For O_j=a_j^†, the dissipator G_j𝒟[O_j](ρ) in Eq. (<ref>) describes single-boson gain at a rate G_j, which leads to a stable limit cycle of the j-th oscillator and makes it an active quantum element. For O_j=a_j, the system experiences single-boson loss at a rate G_j, reflecting the non-oscillatory, or inactive, quantum behaviour of the j-th oscillator.
Similar to the case of the classical system <cit.>, the whole network is divided into two groups, one consisting of N_a active elements and the other of N_i inactive elements, and the fraction of inactive nodes p=N_i/N in the network is calculated. In the uncoupled state (V=0), the phase-space representation of the Wigner function for active and inactive elements is depicted in Figs. <ref>(a) and (b), respectively. The ring-shaped Wigner function <cit.> indicates a quantum limit cycle at G_j=4, O_j=a_j^†, while a probability blob at G_j=2, O_j=a_j represents a non-self-oscillatory element. When dealing with a large number of oscillators, the density matrix of the many-body system can be approximately factorized as ρ≈⊗_{j=1}^N ρ_j. This approach aligns with the mean-field approximation, simplifying the master equation (<ref>) into individual master equations for each oscillator, which then interact with the mean field as follows <cit.>:

ρ̇_j =-i[ωa_j^†a_j,ρ_j]+G_j𝒟[O_j](ρ_j)+κ𝒟[a_j^2](ρ_j)+(2V(N-1)/N)𝒟[a_j](ρ_j)+V(A[a_j^†,ρ_j]-A^*[a_j,ρ_j]),

where A and A^* are defined as A=(1/N)∑_{j'=1}^N{}'⟨a_{j'}⟩ and A^*=(1/N)∑_{j'=1}^N{}'⟨a_{j'}^†⟩. Eq. (<ref>) is solved numerically by a self-consistent method using QuTiP <cit.>. We take G_j=4 for the active elements [j ∈ {1, 2, ..., N_a}] and G_j=2 for the inactive elements [j ∈ {N_a+1, ..., N}]. In the network, we differentiate between the oscillatory state and the oscillation-collapsed state by calculating the average boson number per oscillator: Q=n̅_mf(p)/n̅_mf(0), where n̅_mf(p) is the mean boson number per oscillator for a particular value of p. To examine how the average boson number changes as p increases, Q is plotted as a function of p for different coupling strengths V in Fig. <ref>(a). For V≤2.73, Q decreases monotonically as p increases. However, once V exceeds approximately 2.73, the rate at which Q decreases exhibits two distinct phases: initially, Q declines sharply as p increases, but beyond a critical threshold p_cq the decrease becomes nearly linear. The point on the curve that marks the transition between the steep decline and the inclined linear region is referred to as the knee point (star mark in Fig. <ref>(a) for V=5). These knee points p_cq are considered the aging transition threshold, and the associated order parameter is denoted as Q_c. As p increases further, the curve progressively approaches Q=0 because of the growing number of inactive elements in the network. When p reaches one, all oscillators become inactive, resulting in Q=0, which is a straightforward scenario. The dependence of p_cq and Q_c on the coupling strength V is illustrated in Figs. <ref>(b) and (c), respectively. The results show that p_cq initially increases as V increases. However, for strong coupling strengths, p_cq reaches a plateau and exhibits little further change. This behavior differs from the classical case, where p_c generally decreases with increasing coupling strength <cit.>. It is notable that Q_c diminishes as the coupling strength increases, aligning with the anticipated effect of stronger coupling promoting aging.
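The self-consistent mean-field iteration described above can be sketched with QuTiP as follows, working in the rotating frame of ω (so the ωa^†a term drops out); the Fock-space cutoff, the damping of the iteration, and all parameter values are illustrative assumptions rather than the authors' exact procedure.

```python
# Minimal self-consistent mean-field sketch for Eq. (<ref>) with QuTiP.
import numpy as np
from qutip import destroy, steadystate, expect

def mean_boson_number(p, V, N=100, G_act=4.0, G_inact=2.0, kappa=1.0,
                      dim=15, iters=60):
    a = destroy(dim)
    A = 0.5 + 0.0j                          # initial mean-field guess
    for _ in range(iters):
        rhos = []
        for G, O in ((G_act, a.dag()), (G_inact, a)):
            # Drive V(A[a^dag, rho] - A*[a, rho]) recast as the
            # Hermitian Hamiltonian i V (A a^dag - A* a).
            H = 1j*V*(A*a.dag() - np.conj(A)*a)
            c_ops = [np.sqrt(G)*O,                  # gain or loss
                     np.sqrt(kappa)*a*a,            # two-boson loss
                     np.sqrt(2*V*(N-1)/N)*a]        # coupling-induced loss
            rhos.append(steadystate(H, c_ops))
        A_new = (1-p)*expect(a, rhos[0]) + p*expect(a, rhos[1])
        A = 0.5*A + 0.5*complex(A_new)              # damped update
    n = (1-p)*expect(a.dag()*a, rhos[0]) + p*expect(a.dag()*a, rhos[1])
    return np.real(n)

Q = mean_boson_number(0.5, V=5.0) / mean_boson_number(0.0, V=5.0)
print(Q)
```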
Several key observations distinguish the quantum aging transition from its classical counterpart. Unlike in classical systems, where the aging transition is marked by the complete collapse of the network, the quantum aging transition is characterized by a rapid decrease of the average boson number. Furthermore, the quantum aging process involves two distinct phases: initially, up to the critical “knee” point, the order parameter decreases rapidly; beyond this point, the rate of decrease slows down. During this latter phase, all inactive elements populate the ground state through the single-boson loss process. However, the active oscillators never fully reach the ground state, as their relaxation is governed solely by the two-boson absorption process. This unique scenario, which has no classical counterpart, underscores the distinct nature of quantum aging processes.

§ DYNAMICAL ROBUSTNESS OF BIOLOGICAL NETWORKS

So far, we have confined ourselves to the Stuart-Landau model, which is the normal form of a nonlinear oscillating system near a Hopf bifurcation point. However, the scenario of loss of collective oscillatory activity in diverse networked systems due to the failure of their components has numerous practical applications in specific biological systems as well. It is critically important to investigate how various conditions affect the macroscopic activity when certain microscopic units fail and lose their self-oscillatory behavior. This is especially pertinent in biological systems, since collective functions frequently result from interactions among oscillatory units, and these can gradually deteriorate under pathological conditions. Notable attempts have been made to further explore dynamical robustness, in terms of the aging transition, in realistic biological networks composed of active and inactive elements, such as neuronal ensembles and ecological systems. Motivated by these facts, in this section we highlight important results obtained for basic models of ecological and neuronal systems.

§.§ Spatial metapopulation networks

In the recent past, the aging transition phenomenon has been studied in the context of metapopulation survivability by Kundu et al. <cit.>. Metapopulation dynamics, a concept used in spatial ecology to describe the dynamics of spatially separated populations of one species <cit.>, has shed light on the long-term dynamics of structured populations. These studies have revealed that the population densities of a particular species often undergo synchronized fluctuations across extensive geographic regions <cit.>. In this framework, a patch is typically represented as a system of differential equations that displays oscillatory solutions. Spatially structured metapopulations can thus be conceived as a network of interconnected oscillators. In this context, nodes correspond to viable habitat patches, and the links connecting these nodes signify functional pathways. This conceptual framework enables the examination of the ecological network's dynamical robustness, particularly in the context of predator-prey patches. The mathematical representation of the dynamics within a single patch is as follows,

ẋ = f(x,y) = (1/ϵ)[x(1-x)(x-θ)-xy],
ẏ = g(x,y) = xy-dy.

In this context, the variables x and y represent the normalized prey and predator population densities, respectively. The parameter ϵ ∈ (0,1] signifies the time-scale separation between the prey and predator populations, θ ∈ (0,1) represents the Allee threshold, and d stands for the natural mortality rate of the predator population. The nontrivial fixed point (d,(1-d)(d-θ)) exists under the condition θ<d<1. This fixed point is stable when d>(1+θ)/2, and a supercritical Hopf bifurcation occurs at d=(1+θ)/2.
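A minimal sketch integrating the single-patch model on either side of this Hopf point is given below; ϵ, θ, the initial densities, and the integration times are illustrative assumptions.

```python
# Minimal sketch: integrate the single-patch predator-prey model of
# Eq. (<ref>) and check for oscillations around the Hopf point
# d = (1 + theta)/2.
import numpy as np
from scipy.integrate import solve_ivp

eps, theta = 0.1, 0.1
hopf = (1 + theta) / 2               # d = 0.55 for theta = 0.1

def patch(t, u, d):
    x, y = u
    return [(x*(1 - x)*(x - theta) - x*y)/eps, x*y - d*y]

for d in (0.5, 0.6):                 # below / above the Hopf point
    sol = solve_ivp(patch, (0, 1000), [0.6, 0.2], args=(d,),
                    rtol=1e-9, atol=1e-12)
    x_tail = sol.y[0, -500:]
    print(f"d = {d}: prey amplitude ~ {x_tail.max() - x_tail.min():.4f}")
```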
When d ≤ (1+θ)/2, the coexistence of oscillation (a stable limit cycle) and a stable extinction state (0,0) arises, depending on the initial population density. However, by further reducing the predator mortality rate, species extinction occurs through a boundary crisis of the limit-cycle attractor. The prey-predator model involving N patches is mathematically described by the following equation,

𝐗̇_i = 𝐅(𝐗_i) + 𝐌∑_{j=1}^N A_ij(𝐗_j-𝐗_i),

where 𝐗_i = (x_i, y_i)^T represents the state vector and 𝐅(𝐗_i) = (f(x_i,y_i), g(x_i,y_i))^T describes the inherent dynamics of the i-th patch. The second term signifies diffusive coupling, illustrating interactions among species across different patches. Here, 𝐌 = diag(m/deg(i), m/deg(i)) represents the dispersal matrix, where m denotes the dispersal rate between patches and deg(i) signifies the number of patches (degree) connected to the i-th patch; A_ij stands for the adjacency matrix. In this study, an active patch displays stable limit-cycle oscillations in both populations, obtained by setting d=0.5. Conversely, an inactive patch implies species extinction, with d=0.3. To investigate the aging transition, we adopt the mathematical framework introduced in <cit.>. This approach involves considering a scenario where, in a fraction p of the patches, the species go extinct in the absence of dispersal between the patches. The order parameter R, which quantifies the level of dynamical activity in the network, is defined as

R = (1/2)(R_x+R_y),

where R_x = (1/N)∑_{i=1}^N(⟨x_{i,max}⟩_t-⟨x_{i,min}⟩_t), R_y = (1/N)∑_{i=1}^N(⟨y_{i,max}⟩_t-⟨y_{i,min}⟩_t), with ⟨...⟩_t representing the long-time average. R=0 indicates the presence of stable steady states. To distinguish between the trivial (extinction) and non-trivial steady states, the quantity Δ = Θ(𝐗_i - δ) is introduced, where δ is a predefined threshold and Θ(x) is the Heaviside step function. Non-zero values of the order parameter R indicate the continued existence of the metapopulation throughout the network, whereas R=0 together with Δ=0 indicates extinction of the metapopulation. We now examine three distinct types of dispersal networks: global, small-world, and scale-free. i) In the case of an all-to-all network, as illustrated in Fig. <ref>, the normalized order parameter D=R(p)/R(0) is plotted against p for various dispersal rates m. The figure reveals that as the dispersal rate decreases, the dynamical robustness increases, until p_c reaches unity at a critical threshold of m=0.03, below which p_c remains at unity. This implies that a lower dispersal rate supports metapopulation survivability. Additionally, we explore complex dispersal topologies, specifically small-world and scale-free networks among the patches, aiming to understand population revival in inactive patches through dispersal. ii) The small-world dispersal topology exhibits aging transition behavior that is qualitatively akin to that of the global (all-to-all) network. iii) In the case of a scale-free dispersal network, we employ three distinct inactivation strategies for the patches, namely random, targeted hub (highest-degree node), and targeted low-degree node inactivation. Analyzing the variation of D with p across the three different dispersal topologies, it becomes apparent that under random inactivation the likelihood of metapopulation persistence is higher than under targeted inactivation. Concurrently, it is evident that for targeted inactivation, particularly of high-degree nodes, the critical value p_c is notably lower.
Across all conceivable scenarios, there exists a substantial abundance of species within the metapopulation until the inactivation ratio p reaches the critical value p_c, at which point a sudden, explosive extinction occurs. In conclusion, we undertake a comparison of the aging transition across all three dispersal topologies. Figure <ref> illustrates the variation of the critical inactivation ratio p_c with the dispersal rate m, clearly indicating that a small-world network exhibits the highest ecological robustness. Conversely, in the case of global dispersal, the likelihood of metapopulation extinction is notably higher compared to the more intricate complex dispersal networks. As an extension of this study, the multilayer framework being capable of offering an inherent structural setting to model diverse ecological systems, the authors of Ref. <cit.> investigated the persistence of a multilayer ecological network consisting of harvested patches. They considered small-world dispersal topologies in the layers for modeling the communications between the prey-predator patches. The significant effect on the global persistence of species caused by asymmetric intralayer and interlayer dispersal strengths, along with the unique network topologies within the layers, is examined in detail.

§.§ Neuronal networks

We here present some significant results for neuronal ensembles. In contrast to the above discussions of the dynamical robustness of networked systems, in which the self-oscillatory dynamics of the inactive units is lost via an inverse Hopf bifurcation, we start by focusing on the robustness of neuronal populations where the inactive neuronal systems lose their dynamism through a saddle-node bifurcation on an invariant circle (SNIC) <cit.>. To be precise, here the inactive units exhibit the regime of class-I excitability. Analogous to the earlier approach, we split the entire population of size N into two sets, S_E and S_A, comprising pN excitable and (N-Np) active systems, respectively. The dynamical evolution of the networked system reads as,

ẋ_j = F_j(x_j) + (K/N)∑_{k=1}^N(x_k - x_j),      j=1,2,⋯,N,

in which F_j = F_{A(E)} whenever j ∈ S_{A(E)}. We specifically choose the paradigmatic Morris-Lecar model for our analysis, so that system (<ref>) becomes,

C V̇_j = g_L(-V_j-V_L) - w_j g_K(V_j+V_K) - g_Ca m_∞(V_j)(V_j-V_Ca) - ϕ_j(V_j-0.2) + (K/N)∑_{k=1}^N(V_k-V_j),
ẇ_j = λ(V_j){w_∞(V_j)-w_j},      j=1,2,⋯,N,

where m_∞(V_j)=[1+tanh{(V_j-v_1)/v_2}]/2, λ(V_j)=λ_0 cosh{(V_j-v_3)/(2v_4)} and w_∞(V_j)=[1+tanh{(V_j-v_3)/v_4}]/2. The parameters are chosen as g_L=0.5, V_L=0.4, g_K=2, V_K=0.7, g_Ca=V_Ca=C=1, λ_0=0.33 and (v_1,v_2,v_3,v_4)=(-0.01,0.15,0.10,0.145). Moreover, ϕ_j=ϕ_A with ϕ_A>ϕ_*∼0.076 for an active (self-oscillatory) individual dynamics, whereas ϕ_j=ϕ_E with ϕ_E<ϕ_* for an excitable cell. We next define the mean frequency of the ensemble as a measure of the global oscillation,

Ω = (1/N)∑_{k=1}^N Ω_k,

with its normalized value R=Ω(p)/Ω(0), in which Ω_k is the mean frequency of the k-th dynamical unit.
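In practice, the individual mean frequencies Ω_k can be estimated from the voltage traces by counting threshold crossings; a minimal sketch (with an illustrative threshold and sampling step) is:

```python
# Minimal sketch: estimate the mean frequency Omega_k of each unit
# from its voltage trace via upward threshold crossings, and the
# ensemble measure Omega of Eq. (<ref>).
import numpy as np

def mean_frequencies(V, dt, threshold=0.0):
    """V: array of shape (N, T) with voltage time series."""
    up = (V[:, :-1] < threshold) & (V[:, 1:] >= threshold)
    counts = up.sum(axis=1)                  # spikes per unit
    T = (V.shape[1] - 1) * dt                # total observation time
    return 2*np.pi*counts/T                  # Omega_k as angular frequency

def ensemble_frequency(V, dt):
    return mean_frequencies(V, dt).mean()    # Omega = (1/N) sum Omega_k

# usage: R = ensemble_frequency(V_p, dt) / ensemble_frequency(V_0, dt)
```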
It is reasonably discernible that for low K, the transition to the globally quiescent state takes place only when all the units are in the inactive (excitable) regime, i.e., for p_c=1. This scenario alters when we consider a coupling strength higher than K_c ∼ 0.144, for which aging occurs even when not all the elements are excitable. For high K, smooth profiles of R, and hence of aging, are observed, whereas intermediate values of K lead to step-like profiles of R. The corresponding step-like profiles of the individual frequencies Ω_j are clearly visible in Figs. <ref>(c) and <ref>(d) for K=0.144 and K=0.2, respectively. In contrast to the above study, which assumed the presence of only linear diffusive electrical coupling, we next examine the robustness of another neuronal population consisting of neuronal systems interacting through both diffusive gap-junctional and nonlinear chemical synaptic communication <cit.>. We particularly emphasize the multilayer framework of the neuronal ensemble and demonstrate that the chemical synapses acting through the interlayer connections are sufficiently potent to recover the global oscillation, and hence the dynamical rhythmicity, of the network. We now choose the Hindmarsh-Rose neuron model in order to cast the nodes in both layers, organized in the framework of a bi-layer multiplex network. We presume that the neurons communicate via electrical coupling within each layer, while the neurons across the layers are connected through chemical synapses. The intra-layer connectivities are considered to be of small-world topology, which is evident in the case of brain networks. The mathematical description of the entire multilayer network can then be given by the following equations, [ ẋ_i,k=ax^2_i,k-x^3_i,k-y_i,k-z_i,k+K_el/N∑_j=1, j≠ i^NA_ij(x_j,k-x_i,k)+K_ch(v_s-x_i,k)Γ(x_i,l),; ẏ_i,k=(a+α)x^2_i,k-y_i,k,; ż_i,k=c(bx_i,k-z_i,k+e), ] where N is the number of neurons present in each layer, with i= 1,2,…,N, k,l=1,2 and k ≠ l. The parameters K_ch and K_el account for the chemical and electrical synaptic strengths, respectively. Finally, (A_ij)_N × N represents the adjacency matrix associated with the small-world architecture considered for each layer. Besides, the variables x_i,k, y_i,k and z_i,k correspond to the membrane potential and to the ion transport across the membrane via the fast and slow channels, respectively. The synapses are assumed to be excitatory with the reversal potential v_s> x_i,k(t) for all t. Further, the chemical synaptic function is of the form Γ(x)=1/[1+e^-10(x+0.25)]. With the parameters a=2.2, c=0.005, e=5 and α=1.6, isolated neuronal systems display plateau bursting for b=9 and exhibit a stable equilibrium regime whenever b=4. Thus, the active units correspond to b=b_A=9, whereas the inactive systems are associated with b=b_I=4. With a similar set-up of dividing the whole ensemble into two groups of active ((1-p)N neuronal systems) and inactive dynamical units (pN neuronal systems), we define the following order parameter for each layer, R̅_k=√(⟨ ( X_c_k-⟨ X_c_k⟩ )^2 ⟩), where X_c_k=1/N∑_j=1^N(x_j,k,y_j,k,z_j,k) is the centroid of the k-th layer (k=1,2) and ⟨⋯⟩ stands for the long-time average. The order parameter for the entire network is then defined as R̅(p)=1/2[R̅_1(p)+R̅_2(p)], with the normalized value R=R̅(p)/R̅(0).
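The right-hand side of this multiplex model translates almost line by line into code. The sketch below evaluates the derivatives of both layers at once; the adjacency matrix A, the value of v_s, and the array layout are illustrative assumptions rather than the exact implementation of the cited study.

import numpy as np

def hr_multiplex_rhs(x, y, z, A, K_el, K_ch, b, a=2.2, alpha=1.6,
                     c=0.005, e=5.0, v_s=2.0):
    # x, y, z: (2, N) state arrays, layer index first; b: (2, N) array mixing
    # active (b=9) and inactive (b=4) units; A: (N, N) small-world adjacency.
    N = x.shape[1]
    Gamma = 1.0 / (1.0 + np.exp(-10.0 * (x + 0.25)))   # chemical synaptic function
    # intra-layer electrical coupling: (K_el/N) * sum_j A_ij (x_j - x_i)
    electrical = (K_el / N) * ((x @ A.T) - A.sum(axis=1) * x)
    # inter-layer chemical coupling: each layer is driven by the other layer
    chemical = K_ch * (v_s - x) * Gamma[::-1]
    dx = a * x**2 - x**3 - y - z + electrical + chemical
    dy = (a + alpha) * x**2 - y
    dz = c * (b * x - z + e)
    return dx, dy, dz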
Assuming N=200 neuronal systems in each layer, and a small-world network topology (with rewiring probability p_sw=0.05 and average degree ⟨ k ⟩=50) in each layer, we plot the order parameter R as a function of the inactivation ratio p and the gap-junctional strength K_el for different values of the chemical synaptic strength K_ch in Fig. <ref>. The black regions in each of the plots correspond to R=0, reflecting the aging transition, that is, when the entire neuronal ensemble loses its dynamism. We start with the no-multiplexing case (i.e., with K_ch=0) in Fig. <ref>(a), and observe that increasing p leads to an aging transition, depending on the strength K_el of the electrical coupling. The higher the gap-junctional strength, the earlier the global oscillation of the network vanishes and hence the aging transition takes place. However, as we introduce multiplexing in the ensemble through a non-zero chemical synaptic strength K_ch=0.5 (cf. Fig. <ref>(b)), we witness a significant improvement in the dynamical robustness of the networked system. This is realized in the form of a shrunken black region associated with the aging transition. We further increase the chemical synaptic strength to K_ch=1.5 and depict a similar phase diagram in Fig. <ref>(c). With this higher K_ch, we encounter a narrower black region, demonstrating a further enhancement in the robustness of the system. This is how the chemical synapses are capable of enhancing the rhythmicity of the multiplexed neuronal ensemble. In addition to these studies, the dynamical robustness of neuronal networks has been further examined in terms of the phenomenon of aging transitions in an Erdős–Rényi network of interacting Rulkov neurons, based on network connectivity, connection strength, and the ratio of inactive neurons <cit.>. Both noise-free and stochastic networks, with additive noise affecting the coupling strength, are investigated. Both smooth and explosive aging transitions are witnessed in noise-free and stochastic networks alike, although noise is found to mitigate the impact of inactive neurons and reduce the occurrence of explosive transitions in the networked system. Research in this area also includes the study by Barać et al. <cit.>, which explores dynamical robustness in terms of collective failures in networks of interacting heterogeneous excitable systems using the FitzHugh-Nagumo neuronal model. These networks are assumed to exhibit essential characteristics like a broad-scale degree distribution, the small-world feature, and high modularity. The proportion of inactive excitable units, the interaction strength between them, and their proximity to the bifurcation point all influence the network failure resulting in a collective aging transition. It is further demonstrated that intermediate coupling strengths prolong global network activity when high-degree nodes are inactivated first. Additionally, the most effective strategy for inducing collective failure depends non-monotonically on the coupling strength and on the distance of the bifurcation point from the oscillatory dynamics of the units. Furthermore, Ref. <cit.> studied the dynamical robustness of a multilayer neuronal network with electrical intra-layer interaction and non-synaptic ephaptic coupling between the layers. Ephaptic coupling arises due to electromagnetic induction caused by extracellular electric fields <cit.>, and is a form of non-synaptic interaction among neurons that plays a crucial role in neuronal communication.
It is worth mentioning that the inter-layer ephaptic interaction enhances the dynamical robustness of both the individual layers and the entire network, contrasting with electrical coupling, which tends to weaken it. The network dynamics with such inter-layer ephaptic coupling is also validated using an analog circuit built in Multisim. § ENHANCEMENT AND MAINTENANCE OF DYNAMICAL ROBUSTNESS In today's interconnected world, where complex systems govern various aspects of our lives, ensuring their stability and robustness has become paramount. Enhancing dynamical robustness, in the form of resurrecting oscillatory activity, is a crucial endeavor that safeguards the reliability and safety of critical systems, ranging from power grids and transportation networks to financial systems and healthcare infrastructure. The oscillatory behavior of neurons is pivotal for processing neural information and coordinating processes related to cognitive functions and memory <cit.>. Neurons thus demonstrate a pronounced inclination to engage in rhythmic activity both at the individual and the collective level <cit.>. Therefore, an interruption in the oscillatory behavior of neurons can directly impact essential neural processes. The proper functioning of cardiac and respiratory systems <cit.>, as well as physiological processes like cell necrosis within organs <cit.>, relies on oscillatory dynamics. Power-grid networks necessitate stable, synchronized rhythmic activity as a requirement <cit.>. On the other hand, the crisis of extinction of species is primarily attributed to factors such as climate change or excessive utilization of natural resources, significantly impacting the surrounding ecosystems on a broad scale <cit.>. In ecological networks, the extinction of patches within the metapopulation can thus result in significant alterations to its overall sustainability <cit.>. The significance and worth of investigations on the enhancement of robustness is thus evident in both natural and artificial contexts. §.§ Controlled diffusion method In the last few years, there have been a number of significant attempts at enhancing the dynamical robustness of complex networked systems. In order to retrieve dynamism from the state of aging in damaged dynamical networks, we first describe one of the most efficient and simple procedures, based upon controlled diffusion <cit.>. It turns out that a tiny deviation from the usual diffusive coupling among the dynamical units of a network can increase the dynamical robustness of networks quite comprehensively. Instead of leading to easier aging, strong interaction under such controlled diffusion can support the robustness of the concerned system. Let us express the dynamical evolution of the network of N all-to-all coupled Stuart-Landau oscillators as, [ ż_j=(a_j+ iω-|z_j|^2)z_j+K/N∑_k=1^N(z_k-α z_j);      j= 1,2,⋯,N, ] where α∈ [0,1] is a feedback parameter that controls the diffusion rate. This α differentiates this coupling form from that of the other studies, which consider standard diffusive coupling without any such controlling parameter. α controls the diffusion while making a bridge between direct coupling (for α=0) and the usual symmetric diffusive coupling (for α=1). This indicates that controlled diffusion is able to represent the feature of diffusion in a large variety of real-world systems, including biological and technological networks.
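In code, the right-hand side of this globally coupled model is compact thanks to the mean-field structure of all-to-all coupling. The following is a minimal sketch; the function name and argument layout are illustrative.

import numpy as np

def sl_controlled_diffusion_rhs(z, a_j, omega, K, alpha):
    # z: complex array (N,); a_j: bifurcation parameters per oscillator
    # (positive for active units, negative for inactive ones).
    # (K/N) * sum_k (z_k - alpha*z_j) = (K/N) * (sum_k z_k - N*alpha*z_j)
    N = z.size
    coupling = (K / N) * (z.sum() - N * alpha * z)
    return (a_j + 1j * omega - np.abs(z)**2) * z + coupling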
Similar to the approach discussed above, setting z_j=A for all the active dynamical units j=1, 2, ..., N-N_p and z_j=I for all the inactive elements j=N-N_p+1,..., N, the system (<ref>) can be reduced to the following coupled system, [ Ȧ=(a+ iω-pK+K-α K-|A|^2)A+KpI,; İ=(-b+ iω+pK-α K-|I|^2)I+K(1-p)A. ] Linear stability analysis of the system (<ref>) around the equilibrium point (A,I)≡ (0,0) leads us to the following Jacobian matrix, [ a+ iω-pK+K-α K Kp; K(1-p) -b+ iω+pK-α K ]. The critical inactivation ratio can then be determined from this Jacobian following the usual approach (explained above) as [ p_c=[a(b+K)+K^2α(1-α)+K(b-a)(1-α)]/[K(a+b)]. ] It is easy to verify that this expression for p_c provides the same result as in Ref. <cit.> whenever the feedback parameter is α=1. Let us now discuss the numerical results obtained for a network of N=1000 nodes with a=2, b=1 and ω=3. For this, we also fix the coupling strength at K=8. Figure <ref>(a) depicts the variation of the order parameter |Z| (as defined in Eq. (<ref>)) as a function of the inactivation ratio p for various values of the diffusion control parameter α. Precisely, we start by showing the results for standard diffusion, i.e., α=1. The order parameter monotonically decreases with increasing p and finally reaches the null value characterizing the aging transition, which also confirms the observation of Ref. <cit.>. We then introduce a minute deviation from the unit value of α and choose α=0.95. As can be seen, as a result of this tiny deviation from the usual diffusion, the order parameter |Z| drops to zero at a higher value of the inactivation ratio p and hence the critical inactivation ratio p_c increases. This essentially means that the networked system becomes more robust to the progressive inactivation of its dynamical units compared to the scenario of usual diffusive interaction. We further decrease the value of α to α=0.90, 0.88 and 0.87, respectively, and witness that the p_c values increase progressively. Besides, we also observe that for α=0.87, aging does not occur at all for any value of the inactivation ratio p. This is because there is a critical value α_c of the control parameter α that must be surpassed in order to have p_c<1. Let us now demonstrate this fact through Fig. <ref>(b), in which we plot the critical inactivation ratio p_c against decreasing values of the diffusion control parameter α. As can be seen, starting from a value around p_c ∼ 0.76 for α=1, the p_c values increase for decreasing α. This remains valid until p_c reaches unity at around α=α_c ∼ 0.88, beyond which an aging transition does not take place anymore. As we have already seen, for the standard diffusive coupling, aging arises for all K>K_c=a, and increasing K results in earlier aging, implying decreasing p_c. But one of the most interesting observations from our analysis is the fact that for higher coupling strengths K, aging does not occur if the control parameter α is decreased below a certain value. To be specific, in our case, an aging transition takes place under the following condition on the interaction strength K, K>a,                    α=1, a/α<K<b/(1-α),       α<1. Thus the interval of interaction strength monotonically decreases for decreasing α and eventually vanishes if α<a/(a+b). Fig. <ref> displays the phase diagrams in the (α,K) parameter plane for different values of the parameter b.
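The closed-form threshold is straightforward to evaluate numerically; the short sketch below reproduces the values quoted above for a=2, b=1, K=8 (the scan range and resolution are arbitrary choices).

import numpy as np

def p_c(alpha, a=2.0, b=1.0, K=8.0):
    # critical inactivation ratio under controlled diffusion
    num = a*(b + K) + K**2 * alpha*(1 - alpha) + K*(b - a)*(1 - alpha)
    return num / (K * (a + b))

print(p_c(1.0))                  # 0.75: the standard-diffusion limit alpha = 1
alphas = np.linspace(0.85, 1.0, 30001)
alpha_c = alphas[np.argmin(np.abs(p_c(alphas) - 1.0))]
print(alpha_c)                   # ~0.875, i.e. alpha_c ~ 0.88 as quoted above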
Actually, an aging transition happens due to the competing forces between the two sets of active units possessing a_j>0 and inactive units with a_j<0, i.e., due to the values and magnitudes of a and b. Through these phase diagrams, we explain, on one hand, how the networked system (<ref>) evolves under a simultaneous variation of the control parameter α and the interaction strength K. On the other hand, we show how different magnitudes of the parameter b affect the robustness of the network. Figure <ref>(a) depicts the (α,K) phase diagram for b=2 instead of b=1. The upper-right region surrounded by the bold black curve represents the aging transition (AT). The lower-right area encompassed by the dashed red curve stands for the oscillatory state (OS). The shaded region describes the transition zone, where p_c remains at its unit value even for decreasing α. The aging region is clearly visible in the phase diagram, where the shaded transition zone touches the K-axis whenever a<K<b for b>a. Next we progressively increase the value of b and portray similar phase diagrams in the (α,K) plane in Figs. <ref>(b), <ref>(c), and <ref>(d) for b=3, b=4, and b=5, respectively. We observe that strong interaction with controlled diffusion (α<1) favors the dynamical robustness of the network. It is also discernible that the aging island expands and the oscillatory region shrinks for increasing values of b. Thus, the dynamism of the networked system is difficult to resurrect whenever the attraction strength of the inactive elements becomes stronger. §.§ Mean-field feedback method Feedback is the mechanism by which the output of a system is reintroduced into the system as input. It is widely recognized as one of the most applicable concepts across various scientific disciplines, spanning from physics and mathematics to biology and engineering <cit.>. Feedback theory has made significant contributions to control theory <cit.>, dynamical systems <cit.>, neuronal systems <cit.>, physiology <cit.>, and game theory <cit.>, as well as to the study of ecological and climate science. Instead of controlling the diffusion among the dynamical units, we now confine ourselves to the standard diffusive coupling. However, we design the networked system to be exposed to an external mean-field feedback, which we demonstrate to be capable of efficiently resuming the dynamic activity of damaged networks and hence enhancing the dynamical robustness of the network <cit.>. We analyze the following mathematical form of the considered dynamical network, [ ż_j=(α_j+ iω-|z_j|^2)z_j+ϵ/N∑_k=1^N(z_k-z_j)+η/N∑_k=1^Nz_k;      j= 1,2,⋯,N, ] where ϵ is the diffusive coupling strength and η accounts for the strength of the mean-field feedback. Proceeding similarly as above, assuming z_j=A for the set of active elements j=1,2,…,N-Np and z_j=I for the group of inactive units j=N-Np+1,…,N, the system (<ref>) reduces to the following coupled system, [ Ȧ=[a+ iω-pϵ+η(1-p)-|A|^2]A+(ϵ+η)pI,; İ=[-b+ iω+pη-ϵ(1-p)-|I|^2]I+(ϵ+η)(1-p)A. ] Linear stability analysis of (<ref>) around the equilibrium point (A,I)≡ (0,0) results in the Jacobian matrix, [ a+ iω-pϵ+η(1-p) p(ϵ+η); (ϵ+η)(1-p) -b+ iω+pη-ϵ(1-p) ]. Negative real parts of all the eigenvalues of this Jacobian determine the stability of the origin, leading to the critical ratio [ p_c=(b+ϵ)(a+η)/[(a+b)(ϵ+η)], ] with ϵ≥ϵ_c=a. In Fig.
<ref>, we plot this theoretically obtained expression for the critical inactivation ratio p_c as a function of the interaction strength ϵ for various values of the feedback strength η. First, we present the variation of the p_c values with respect to ϵ for no feedback (i.e., η=0). The initial fall of p_c is quite swift for increasing ϵ; however, for sufficiently high ϵ the decrement in p_c becomes less sharp. When we introduce mean-field feedback with strength η=0.5, a qualitatively similar trend in the drop of p_c is observed. Interestingly enough, this time the p_c values remain much higher altogether than in the no-feedback case. This readily indicates that the feedback (even if it is of small strength) enhances the dynamical robustness of the networked system to a significant extent. We further increase the feedback strength to η=0.7 and η=0.9 and witness that in both cases the p_c values increase even more, whatever the value of the interaction strength ϵ, thus making the system (<ref>) more robust to progressive inactivation of its units. Thus, it is conspicuous that the mean-field feedback is highly effective in resuming the dynamic activity of damaged networks of active and inactive dynamical systems. We further dive deeper into the possible ways of employing mean-field feedback in the system, and study how the robustness is affected. We portray the alteration of the p_c values with respect to the feedback strength η for feedback added to all the units, only the inactive units, and only the active units in Fig. <ref>. There, all the solid curves stand for the numerical results, whereas the respective symbols correspond to the theoretical expressions. It is clear that the p_c values increase strictly monotonically for increasing η, implying a sharp enhancement in the dynamical robustness of the system. The theoretical and numerical outcomes are in excellent agreement. We next apply feedback to only the inactive units and, following a similar procedure as above, arrive at the theoretical expression p_c=a(b+ϵ)/[(a+b)ϵ+(a-ϵ)η]. We plot this expression along with the numerical results as functions of η, and observe that p_c increases sharply again. Moreover, the results do not differ much from those of the previous case of feedback to all the units. This implies that even if we apply feedback to only the inactive dynamical systems, we can recover the dynamism of the network. Finally, we employ feedback to only the active set of elements, for which we get the theoretical expression p_c=(a+η)(b+ϵ)/[(a+b)ϵ+(b+ϵ)η]. Plotting this expression along with the numerical p_c, we observe that p_c increases for increasing η, but the increment is not as sharp as in the earlier two cases. Thus, although the outcome is not as good as in the other two cases, the robustness still increases even if we add feedback to only the active dynamical systems. §.§ Addition of active oscillators Next we discuss another useful mechanism for resurrecting the dynamism of damaged dynamical networks: adding oscillatory units to the network <cit.>. We present numerical and theoretical analyses of how additional supporting oscillators can improve the dynamical robustness of networked systems. We add at most a single supporting oscillator to each existing dynamical unit of the network. Active and inactive units are supported by Nq_A and Nq_I oscillators, respectively. The ratio of supported oscillators thus becomes q=q_A+q_I, and the updated size of the networked system turns out to be N+Nq.
We denote the sets of supported and unsupported dynamical units by S and U, respectively. Whenever a supporting oscillator is added to the j-th dynamical unit, the state variable of the supporting system is represented by z_j^*, while the state variables of the existing units are denoted by z_j, j=1,2,…,N. The time evolution of the dynamical network is then described as follows, [ ż_j=(α_j+ iω-|z_j|^2)z_j+K/N∑_k=1^N(z_k-z_j)+D/2(z_j^*-z_j),     j ∈ S,; ; ż_j=(α_j+ iω-|z_j|^2)z_j+K/N∑_k=1^N(z_k-z_j),                         j ∈ U,; ; ż_j^*=(a+ iω-|z_j^*|^2)z_j^*+D/2(z_j-z_j^*),                           j ∈ S, ] where K is the interaction strength between the dynamical systems in the existing network and D accounts for the coupling strength between the supporting and the supported oscillators. The modified order parameter is then |Z|, where Z is expressed as, [ Z=(∑_j=1^Nz_j+∑_j ∈ Sz_j^*)/[(1+q)N]. ] Following the same approach as before and splitting the entire networked system into all possible sub-groups, active and inactive, the critical inactivation ratio p_c is found as <cit.>, [ p_c=p_0+(D/J_A)(a(b+K)/(a+b))q_A+(D/J_I)(a(K-a)/(a+b))q_I, ] in which [ p_0=a(b+K)/[(a+b)K],; J_A=2a^2+KD-2a(K+D),; J_I=(b+K)D-aD-2a(b+K). ] Thus the term p_0 reflects the critical inactivation ratio in the absence of any supporting oscillators added to the system <cit.>. It can further be shown that supporting oscillators added to the active dynamical units are much more effective in improving the dynamical robustness than supporting oscillators added to the inactive units. Contrary to the intuition that the inactive systems should be supported, support to the active units works more efficiently. Through Fig. <ref>, we delineate the variation of the critical inactivation ratio p_c against the ratio of supported oscillators q. We fix the fundamental parameter values at a=2, b=5, p=0.9 and D=K=8, and depict the best procedure of preferentially choosing the active oscillators. This means q=q_A for 0≤ q ≤ 1-p and q=q_I+(1-p) for 1-p<q ≤ 1. We also plot the worst mechanism of preferentially choosing the inactive oscillators, i.e., q=q_I for 0≤ q ≤ p and q=q_A+p for p<q ≤ 1. The global oscillation of the networked system is retrieved where p_c equals p. As is conspicuous from this portrayal, the ratio of supporting oscillators required to develop the dynamical robustness of the network in the best mechanism is much lower than that in the worst mechanism. §.§ Self-feedback delay Lately, researchers have explored self-feedback delay as an additional mechanism to improve the dynamical robustness of a globally coupled network with mean-field interactions <cit.>. We exemplify the enhancement of dynamical robustness through a network of N Stuart–Landau oscillators that are mutually coupled through mean-field interactions, along with a self-feedback delay. The governing equation for the dynamics can be formulated as follows, ż_j(t) = (α_j+iω-|z_j(t)|^2)z_j(t)+ k[z̅-z_j(t-τ)]. Here, z̅ denotes the mean field, and τ represents the time delay in the local self-feedback component z_j(t-τ), functioning essentially as a form of negative feedback. In this context, ω(=5) denotes the intrinsic frequency of the individual oscillators, while k characterizes the coupling strength. As before, α_j serves as the bifurcation parameter of the oscillator indexed by j. We select a network of size N = 500 and assign the values α_j=a=1 to the set of active oscillators and α_j=b=-3 to all the inactive ones. In Fig.
<ref>(a), we illustrate the relationship between the order parameter |Z| and the inactivation ratio p for various settings of the local self-feedback delay τ, while keeping k=5. It is evident that when τ=0, the order parameter |Z| reaches zero (at a certain p_c), meaning that the aging transition occurs at a noticeably faster rate. As the self-feedback delay τ is extended, the aging transition takes place at a higher value of the critical inactivation ratio p_c. This implies that adding τ has a major impact on improving dynamical robustness. To gain deeper insight into how the local self-delayed feedback affects the robustness of the coupled oscillators, we have illustrated the phase transition diagram in Fig. <ref>(b) within the τ-p plane, while maintaining a constant value of k=5. In this illustration, region OS represents the oscillatory state, while region AT signifies the island corresponding to the aging transition (i.e., where |Z|=0). The figure clearly shows only a minimal alteration of the p_c value whenever a very small τ is considered. However, when we elevate the value of τ towards the upper end, the critical threshold p_c also rises and ultimately reaches p_c=1 when τ∼ 0.12. The presence of a local self-feedback delay τ is thus demonstrated to be a primary factor influencing the aging transition in the coupled oscillator system. This delay efficiently enhances the dynamical robustness of the networked system of mean-field coupled oscillators. Afterward, we determine the critical value p_c through an analytical method. During the aging transition, the global oscillation ceases at p_c, leading to the stabilization of the trivial fixed point z_j=0, ∀ j. We assume that the coupled system is composed of two groups, where each group consists of identical nodes. Essentially, the synchronization among the oscillators allows us to redefine the system accordingly. When assigning z_j = A for the active set and z_j = I for the inactive set of oscillators, the system Eq. (<ref>) simplifies into the subsequent coupled system, as presented in <cit.>, Ȧ = (a+iω+kq-|A|^2)A-kA(t-τ)+kpI, İ = (b+iω+kp-|I|^2)I-kI(t-τ)+kqA, where q = 1-p. Now, we perform a linear stability analysis of Eq. (<ref>) in the vicinity of the equilibrium point A = I = 0. The characteristic equation that arises from the linear stability analysis around this point reads as, (a+iω+qk-ke^-λτ-λ)(b+iω+pk-ke^-λτ-λ) -pqk^2=0. Here, λ=λ_R+iλ_I, where λ_R and λ_I are the real and imaginary parts of the eigenvalue λ. The following equations are obtained by splitting the real and imaginary parts of Eq. (<ref>) and setting the real part of the eigenvalue equal to zero (λ_R=0), [a+qk-k cos(λ_Iτ)][b+pk-k cos(λ_Iτ)]- (ω-λ_I+k sin(λ_Iτ))^2-pqk^2 = 0, [a+b+k(p+q)-2k cos(λ_Iτ)][ω-λ_I+k sin(λ_Iτ)] = 0, where p+q = 1. The critical value of the inactivation ratio p_c can be obtained by solving these equations, p_c = [-ab+k(a+b)β+k^2β-kb-k^2β^2]/[k(a-b)], where β = cos(ατ) and α = ω+k√(1-((a+b+k)/(2k))^2). The aging transition is identified through this critical value p_c of the inactivation ratio p. The alignment between the critical value p_c (depicted as a black solid line) and the numerical outcome (illustrated as a shaded region) for the aging transition in Fig. <ref>(b) is notably strong. When τ=0, the aging transition takes place for all values of k>1. As the coupling strength k rises, the critical ratio p_c decreases.
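This analytical threshold is again simple to evaluate; a minimal sketch with the parameter values used above (a=1, b=-3, ω=5, k=5):

import numpy as np

def p_c_delay(tau, a=1.0, b=-3.0, omega=5.0, k=5.0):
    # critical inactivation ratio under mean-field coupling with self-feedback delay
    alpha = omega + k * np.sqrt(1.0 - ((a + b + k) / (2.0 * k))**2)
    beta = np.cos(alpha * tau)
    num = -a*b + k*(a + b)*beta + k**2 * beta - k*b - k**2 * beta**2
    return num / (k * (a - b))

print(p_c_delay(0.00))   # 0.4: the delay-free baseline
print(p_c_delay(0.10))   # ~0.93: a longer delay pushes p_c towards unity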
In order to examine the influence of k on p_c, we graph the relationship between p_c and the coupling strength k for various τ values, as depicted in Fig. <ref>. Remarkably, we notice that for τ≠ 0, the aging transition is present only within a limited range of k. In this range, the p_c value gradually diminishes until it reaches its lowest point, after which it steadily rises to one. This observation suggests that robust network dynamics against aging is promoted by strong coupling, particularly when the feedback delay τ reaches a significant magnitude. §.§ Asymmetric interactions In the following, we report a mechanism to enhance dynamical robustness by employing asymmetric interactions between active and inactive nodes <cit.>. We consider the following mathematical form of the coupled Stuart-Landau oscillators, ż_j=(α_j+ iω-|z_j|^2)z_j+Mϵ/N∑_k=1^NA_jk(z_k-z_j);      j= 1,2,⋯,N. In this context, ϵ represents the strength of diffusive coupling, and M serves as an asymmetry parameter introduced into the system to differentiate the coupling strengths of the active and inactive sub-populations. Specifically, we set M=1 for the active group and M=m(≥ 1) for the inactive group of oscillators. Now, based upon the assumption of z_j = A for active oscillators and z_j = I for inactive oscillators, Eq. (<ref>) reduces to the following coupled system for a homogeneous network, [ Ȧ = (a + iω - |A|^2)A + ϵ pd(I-A),; İ = (-b + iω - |I|^2)I + mϵ(1-p)d(A-I). ] Here, d is defined as d = ⟨ k ⟩/(N-1), where ⟨ k ⟩ represents the average degree of the network (in the case of global coupling, d = 1). A linear stability analysis of the system (<ref>) around the origin (A, I) = (0, 0) leads to the following critical inactivation ratio, p_c = 1-b(ϵ d-a)/[ϵ d(b+am)], for ϵ≥ a/d = ϵ_c. From Eq. (<ref>), it is evident that as one increases the asymmetry parameter m, the p_c value also increases. So by merely increasing the interaction strength of the dynamical units in the inactive group compared to the active ones, it is possible to enhance the dynamical resilience significantly and efficiently. Next, we proceed to derive the analytical expression of p_c for a scale-free network. In the case of a heterogeneous network with a large number of nodes, denoted as N, its behavior is primarily governed by two mean fields representing the sub-populations of active and inactive oscillators. To facilitate the application of the degree-weighted mean-field approximation, or the annealed network approximation, to each sub-network, we assume that oscillators with the same degree within the same sub-population are indistinguishable. Based on this assumption and following the analytical methods described in Section <ref>, we can derive the critical ratio p_c as follows: p_c= [F(ϵ,a)-d]/[F(ϵ,a)-F(ϵ,-b)], for ϵ > ϵ_c(=a/d_min), where d_min = k_min/N with the minimum degree k_min=min{k_j}, and F(ϵ,α)≃1/N∑_j=1^N d_j^2/[d_j-α/(Mϵ)]. Here d_j=k_j/N is the ratio of the degree of the j-th oscillator to the system size. §.§ Low-pass filtering mechanism The functioning of low-pass filters has been widely utilized in diverse physical systems, including electronic and optical devices. Low-pass filters are widely present in electrical and biological networks as well <cit.>. The impact of a low-pass filter has been previously investigated within the realm of synchronization <cit.>, as well as in the process of oscillation suppression <cit.> and the transition between limit cycles <cit.>.
Recently, the approach involving low-pass filtering has been employed to restore the oscillatory characteristics and hence strengthen the dynamical robustness of a network. This technique allows low-frequency signals to pass through, while potentially reducing the strength of signals with higher frequencies. The mechanics of the low-pass filter for Stuart-Landau oscillators is outlined by the following set of equations <cit.>, ż_j = (α_j+iω_j-|z_j|^2)z_j+ϵ/(d_j+1)∑_k=1^N A_jk(z_k - μ_j), βμ̇_j = -μ_j+z_j. For each value of j from 1 to N, z_j represents the complex amplitude, and α_j indicates the intrinsic parameter of the j-th oscillator, signifying its distance from the Hopf bifurcation point. ω_j denotes the inherent frequency of the j-th oscillator, while ϵ indicates the overall coupling strength, and d_j denotes the degree of the j-th node. The second equation of (<ref>) describes the standard low-pass filter, and 1/β (β > 0) signifies the cutoff frequency. The utilization of a low-pass filter in the interaction introduces a frequency-dependent impact on the dynamical behaviour. When β approaches 0, μ_j converges precisely to the original z_j, resulting in a standard scalar diffusive type of interaction. We investigate the influence of a low-pass filter on the dynamical robustness of the globally coupled Stuart-Landau network. Our intent is to check whether, for a fixed interaction strength ϵ, a proper tuning of β can enhance the dynamical robustness of the network. Choosing several values of β for a fixed coupling strength ϵ=8, we depict how the order parameter R varies with the inactivation ratio p. For β=0, the critical value of p for the aging transition is p_c=0.5625, but as β increases, the critical value p_c shifts to higher values. This scenario is depicted in Fig. <ref>(a) for several representative increasing values of β. Thus, with a proper choice of β, the coupling mechanism leads to the recovery of the oscillatory dynamics of the dynamical units in the damaged network. Our next step is to further verify this result, and we plot the order parameter as a function of the inactivation ratio p for a fixed value of β=0.055 and several values of ϵ (cf. Fig. <ref>(b)). For ϵ=5, the critical value of p for the aging transition is p ∼ 0.66. The critical transition point keeps shifting to higher values as ϵ increases. Interestingly, with sufficiently high coupling, no AT is observed, and the global oscillation of the entire network resumes even when the network contains all inactive nodes. This scenario is depicted in Fig. <ref>(b) for the representative value ϵ=15. Notably, the improvement in dynamical robustness provided by such transition scenarios is independent of the network size. Thus the parameter β makes the dynamical network more robust to the inactivation of the local units, and low-pass filtering can be regarded as an efficient mechanism for this purpose. § MACHINE LEARNING TECHNIQUES TO PREDICT AGING TRANSITION The aging transition stands as an undesirable phenomenon in real-world systems, emphasizing the critical need to predict its occurrence while the underlying system is still in normal operation. In numerous practical applications, the governing equations of the system dynamics are often unknown, posing a challenge for the development of a mathematical model for prediction or control. As a solution, data-driven approaches, including machine learning and deep learning, have garnered substantial attention in recent years.
These methodologies have the advantage of learning directly from available data or time series without the necessity of prior knowledge about the underlying system equations <cit.>. Here we discuss a machine-learning-based method to explore the possibility of predicting aging transitions in a system currently experiencing a “normal" regime with oscillations but undergoing a gradual parameter drift, potentially influenced by environmental changes <cit.>. Specifically, we leverage a parameter-aware reservoir computing method introduced by Xiao et al. <cit.>. Instead of inputting the system's intrinsic parameter values directly into the machine, we provide information regarding the fraction of inactive oscillators within the network. The setup of Echo State Network (ESN) based reservoir computing involves three components: an input layer, a reservoir network, and an output layer. In this setup, an M-dimensional input signal u(t)∈ R^M is fed to an N_r-dimensional recurrent reservoir network through an N_r× M input weight matrix W_in. The reservoir network comprises N_r nodes connected in an Erdős–Rényi graph configuration, represented by an N_r× N_r weight matrix W_res. At time t, the state of the reservoir network is denoted by the vector r(t)=[r_1(t),r_2(t),…,r_N_r(t)]^T, where r_i(t) signifies the state of the i-th node in the reservoir at time t. The N_r-dimensional state vector of the reservoir network is then mapped to an (M-1)-dimensional output signal using the output matrix W_out with dimensions (M-1) × N_r. The update equation governing the reservoir states is described as follows, r(t + dt) = (1 - α)r(t) + αtanh[W_res· r(t) + W_in· u(t)]. Here, α∈(0,1] represents the leaking rate, and u(t)∈ R^M denotes the M-dimensional input data incorporating the system parameter value. The input data comprise the time series data ũ(t), which constitutes the first M-1 elements of u(t), along with the corresponding parameter p_s, represented as the last element of u(t), i.e., u(t)=[ũ(t);p_s]. To construct the input weight matrix W_in, we follow the methodology outlined in <cit.>. This method entails linking the i-th component of the (M-1)-dimensional input signals to N_r/(M-1) reservoir nodes using the connection weights in the corresponding column of W_in. The non-zero elements of the input weight matrix are randomly selected from a uniform distribution and then scaled to fit within the interval [-σ, σ]. Notably, each node of the reservoir network is connected to the parameter channel, allowing it to capture the relationship between the dynamics and the parameter value. The parameter input is given by (p_s-p_b)k_p, where p_b and k_p serve as hyper-parameters. We train the reservoir-computing machine separately for various parameter values and store the corresponding values of r(t) successively. The conventional transformation is as follows <cit.>, r_n(t) = r_n(t) if n is odd, r_n^2(t) if n is even. We organize r(t) in the sequence of parameter values and create a single N_r× n_p(N_t-N_τ) matrix, forming the reservoir state matrix R. Here, n_p is the number of training parameters, N_t represents the number of training data points for each parameter value, and N_τ denotes the reservoir transient time. To derive an explicit expression for the output matrix W_out through optimization, we introduce a target data matrix. This matrix encompasses all the desired outputs of the reservoir during training.
Assuming the reservoir machine is trained with time series data from n_p different system parameter values, the total number of training time steps becomes n_pN_t. The available data points are arranged in the sequence of parameter values and stacked into a matrix U with dimensions (M-1) × n_p(N_t-N_τ). The computation of the output matrix W_out is conducted via a regression scheme with the goal of minimizing the following loss function, L = ∑_t ‖U(t) - W_outR(t)‖^2 + β‖W_out‖^2, where the regularization parameter β is employed to prevent over-fitting. The readout matrix W_out can be determined through ridge regression as follows: W_out = UR^T(RR^T + β I)^-1. In the prediction phase, the input data vector u(t) is substituted with the output vector v(t), creating a closed-loop, self-evolving dynamical system within the reservoir computing machine. The system updates v(t) to v(t+dt) following the specified rules, r(t + dt) = (1 - α)r(t) + αtanh [W_res· r(t) + W_in·v(t)], v(t) = [ṽ(t), (p_new-p_b)k_p]^T, ṽ(t + dt) = W_out·r(t + dt). We are now able to generate machine-predicted time series for a different parameter value using Eq. (<ref>). In our approach, instead of explicitly inputting the inherent parameter values of the oscillators, we train the machine by passing the information of the fraction p of inactive oscillators. We demonstrate the predictive capability of the parameter-aware reservoir computing approach in anticipating the aging transition in N=100 globally coupled Stuart-Landau oscillators, as described by Eq. (<ref>). To generate the ESN training time series, we numerically integrate Eq. (<ref>) using RK45 with a fixed step size of Δ t=0.01. The training phase incorporates data for p=0.3, p=0.4, and p=0.5, corresponding to 30, 40, and 50 inactive oscillators in the system, respectively. This approach enables the machine to learn diverse network dynamics under varying levels of inactive oscillators. Instead of providing the ESN with time series data for all active and inactive oscillators, we simplify the input by using data from one active and one inactive oscillator for each p value. Subsequently, we forecast the network dynamics for the p values 0.6 and 0.7. In Fig. <ref>, the original dynamics of the network obtained through numerical integration are presented and compared with the predicted dynamics of the ESN. In particular, in Figs. <ref>(a) and <ref>(b), the time series of one active and one inactive oscillator in the network are depicted for p=0.6 and p=0.7, respectively. We observe oscillatory dynamics for both active (red) and inactive (blue) units at p=0.6, while at p=0.7 they both converge to a stable equilibrium point. The machine-predicted dynamics for p=0.6 and p=0.7 are illustrated in Figs. <ref>(c) and <ref>(d), respectively. The ESN demonstrates accurate qualitative predictions of the dynamics and captures the quantitative behavior of the system, as evidenced by the excellent agreement between the actual and predicted time series. To validate the proposed method further, we train the machine using the mean-field dynamics of the active (A_z) and inactive oscillators (I_Z). This aims to assess the machine's proficiency in predicting the network dynamics across the entire range of the inactivation ratio p. Subsequently, we plot the ESN-generated order parameter R (cf. Eq. (<ref>)) as a function of p and compare it with the corresponding curve obtained from the actual network model.
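A compact end-to-end sketch of this parameter-aware ESN is given below. It assumes the training series have already been prepared in a dict u_train mapping each training ratio in p_train to an array whose rows are [x_active, x_inactive, (p-p_b)k_p]; all names, sizes, and hyper-parameter values are illustrative, not those of the cited study.

import numpy as np

rng = np.random.default_rng(1)
N_r, M, leak, ridge = 300, 3, 0.3, 1e-6     # reservoir size, input dim, alpha, beta
k_p, p_b, N_tau = 1.0, 0.0, 100             # parameter channel, transient length

W_res = rng.uniform(-1, 1, (N_r, N_r)) * (rng.random((N_r, N_r)) < 0.02)
W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))  # spectral radius below one
W_in = rng.uniform(-0.5, 0.5, (N_r, M))

def listen(U):
    # Drive the reservoir with inputs U of shape (T, M); return states (T, N_r),
    # with the even-indexed nodes squared as in the transformation above.
    r, states = np.zeros(N_r), np.empty((len(U), N_r))
    for t, u in enumerate(U):
        r = (1 - leak) * r + leak * np.tanh(W_res @ r + W_in @ u)
        s = r.copy(); s[1::2] **= 2
        states[t] = s
    return states

# Ridge-regression readout: one-step-ahead targets, transients discarded.
# u_train and p_train are assumed to come from an upstream data-preparation step.
R = np.hstack([listen(u_train[p])[N_tau:-1].T for p in p_train])
U = np.hstack([u_train[p][N_tau + 1:, :M - 1].T for p in p_train])
W_out = U @ R.T @ np.linalg.inv(R @ R.T + ridge * np.eye(N_r))

def predict(p_new, v0, steps):
    # Closed loop: the output is fed back as input at a new inactivation ratio.
    r, v, out = np.zeros(N_r), np.asarray(v0, float), []
    for _ in range(steps):
        u = np.append(v, (p_new - p_b) * k_p)
        r = (1 - leak) * r + leak * np.tanh(W_res @ r + W_in @ u)
        s = r.copy(); s[1::2] **= 2
        v = W_out @ s
        out.append(v.copy())
    return np.array(out)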
Figure <ref>(a) shows the order parameter R against the inactivation ratio p obtained from the model dynamics, while in Fig. <ref>(b) we plot the same quantity using the ESN. We observe a very good agreement between these two plots, indicating that the machine is capable of accurately predicting the aging transition of the original network model. § CONCLUSIONS AND FUTURE PERSPECTIVES Comprehending the emergent behaviors in various scientific and technological fields relies heavily on grasping the dynamics of coupled oscillatory systems, as the presence of robust rhythmic dynamics is a prerequisite for both natural and artificial systems in these domains. It is inherent for certain oscillatory units to undergo aging, transitioning into a non-self-oscillatory state due to diverse internal and external factors. The aging of oscillatory behaviors in coupled dynamical networks has been a thriving research area, with substantial advancements achieved over the past two decades. We have comprehensively summarized studies on dynamical robustness and its enhancement in coupled dynamical networks, an endeavor to amplify our understanding of the aging transition in different physical and biological systems. The phenomenon of aging transition has been elaborated by considering diverse network topologies and coupling schemes relevant to many real-world situations. Also, schemes proposed to enhance the dynamical robustness of networked systems undergoing an aging transition have been systematically reviewed. The results discussed here are believed to be constructively useful for uncovering different aging transition routes and proposing suitable control mechanisms in various complex systems in science and engineering. Here, we have endeavored to integrate and consolidate existing knowledge on dynamical robustness theory, making it more accessible to researchers in various scientific communities. This review aims to encourage more profound discussions on the aging transition and its potential reversal in complex systems. These discussions should pave the way for further research into the open problems outlined below. Despite the growing body of literature on aging transitions and dynamical robustness in coupled dynamical networks, several open issues and challenges persist. Below, we highlight several open problems that may be of interest for further research and future directions. First of all, it is clear that all the developments summarized in this review have specifically assumed that dyadic or pairwise interactions form the foundation for connections among the units of the system. However, for a better understanding of many complex systems, one needs to further consider more realistic structural forms of networked systems. For instance, group interactions (of three or more entities) are prevalent in systems arising in ecology <cit.>, neuronal <cit.>, and social systems <cit.>. Recent theoretical research suggests that in complex systems the inclusion of higher-order interactions, which are often represented by network generalizations like simplicial complexes or hypergraphs <cit.>, can have a significant impact on the system’s dynamics <cit.>. Thus, future attention must be given to exploring the aging transition in networks with higher-order interactions. So far, there have been some efforts undertaken in perceiving the phenomenon of dynamical robustness in neuronal systems, as explained above.
Nevertheless, the robustness of neuronal systems subject to higher-order interactions has yet to be explored, despite their significant relevance for various processes in neuronal networks <cit.>. This gap needs to be addressed in the near future. Many real networks display community structures <cit.>, where groups of nodes are highly connected to each other within their own community but have very few connections to nodes in other modules. Examples include ecological networks, neuronal networks, and metabolic and regulatory networks <cit.>. It will be highly interesting to investigate the dynamical robustness of networks having community structures. In addition, future research should prioritize quantum oscillatory systems, as initial studies indicate that aging transitions occur in the quantum regime, albeit with different manifestations compared to classical systems. Apart from the theoretical motivation, studying aging in the quantum domain is further driven by the inevitability of system degradation or aging in real-world quantum systems, primarily due to unwanted losses such as those caused by lossy cavities <cit.> and mechanical dissipation in optomechanical systems <cit.>. Consequently, with the advancement of current quantum technology, we believe that the experimental realization of aging transitions is possible. Further, time-varying networks <cit.>, in which interactions do not persist for the whole course of time but rather arise or vanish over time, are considered to be highly capable of modeling several real-world instances. Temporal networks of static nodes and of mobile agents are reasonably significant in this context. This also includes the crucial scenario of adaptive networks <cit.>, which constitute a wide range of systems capable of altering their connectivity based on their dynamical state over time. This readily suggests that accounting for temporality in the network connectivity itself is quite essential for a further grasp of the dynamical robustness of complex networked systems. § DECLARATION OF COMPETING INTEREST The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Cloud-Edge-Terminal Collaborative AIGC for Autonomous Driving Jianan Zhang, Zhiwei Wei, Boxun Liu, Xiayi Wang, Yong Yu, Rongqing Zhang This work has been submitted to the IEEE Wireless Communications Magazine for publication. Copyright may be transferred without notice, after which this version may no longer be accessible. Jianan Zhang, Boxun Liu, Xiayi Wang, and Yong Yu are with the State Key Laboratory of Advanced Optical Communication Systems and Networks, School of Electronics, Peking University, Beijing 100871, China. Zhiwei Wei is with the Shanghai Research Institute for Intelligent Autonomous Systems, Tongji University, Shanghai 201210, China. Rongqing Zhang (corresponding author) is with the School of Software Engineering, Tongji University, Shanghai 200092, China, and also with the Shanghai Research Institute for Intelligent Autonomous Systems, Tongji University, Shanghai 201210, China. July 8, 2024 ============================================================ § ABSTRACT In dynamic autonomous driving environments, Artificial Intelligence-Generated Content (AIGC) technology can supplement vehicle perception and decision making by leveraging models’ generative and predictive capabilities, and it has the potential to enhance motion planning, trajectory prediction, and traffic simulation. This article proposes a cloud-edge-terminal collaborative architecture to support AIGC for autonomous driving. By delving into the unique properties of AIGC services, this article initiates attempts to construct mutually supportive AIGC and network systems for autonomous driving, including communication, storage, and computation resource allocation schemes to support AIGC services, and leveraging AIGC to assist system design and resource management. § INTRODUCTION Autonomous driving has developed rapidly with the goal of improving traffic safety, efficiency, and convenience through perception, decision making, and control of vehicles. Autonomous driving also involves interactions with humans through communicating driving intentions and vehicle responses based on individual needs. The technologies span multiple fields, such as computer vision, machine learning, sensor fusion, and control theory, and are highly complex and challenging. As autonomous driving systems progress from controlled environments and straightforward tasks to more complex and unpredictable urban landscapes, current technologies may fail to generalize and may reach their performance limitations. Artificial Intelligence-Generated Content (AIGC) technology has the potential to empower autonomous driving.
AIGC refers to the use of artificial intelligence technology to automatically or collaboratively generate various types of content, such as texts, images, audio, and videos, based on user needs and goals. The core of AIGC technology is to use deep neural network models to learn the potential distribution of data and generate new data that conform to the distribution according to given conditions or goals. This capability enables AIGC technology to generalize from learned data distributions to new scenarios. Language Foundation Models <cit.> can be used to understand and generate natural language, such as dialogue and summaries, to improve the communication efficiency and quality between drivers and vehicles. Visual Foundation Models <cit.> can be used to detect and recognize objects, scenes, and emotions in images, improving perception of the surrounding environment. Multi-modal Foundation Models <cit.> can be used to fuse different types of data, such as text, voice, and images, to improve the driver’s entertainment experience and meet personalized needs. Moreover, AIGC can be applied to autonomous driving end-to-end. For example, DriveGPT4 <cit.>, a large-scale multi-modal language model for autonomous driving, uses videos collected by the vehicle and historical vehicle control decisions to output the next control decision, while providing natural-language explanations for its decisions to improve interpretability. The personalized data generation capabilities of AIGC enhance the vehicle and driver copilot experience. First, AIGC can tailor the driving experience to the preferences and needs of each individual driver, such as adjusting the speed, route, and ambiance. Second, AIGC can provide personalized feedback and guidance to the driver, such as suggesting optimal driving habits, alerting to potential hazards, and offering emergency assistance. Third, AIGC can enhance the communication and interaction between the driver and the vehicle, for example using natural language processing, voice recognition, and facial expression analysis. These features make autonomous driving more enjoyable, safe, and efficient for each driver. However, the stringent latency and availability requirements of autonomous driving tasks pose challenges to applying AIGC. A vehicle alone hardly has sufficient communication, storage, and computation resources to support large-model storage and inference under a limited budget constraint, while a model stored in the cloud requires communication between the cloud and the vehicle and incurs higher communication latency. To address this challenge, this article proposes a cloud-edge-terminal collaborative architecture and operation schemes to support different AIGC models and meet the quality-of-service requirements of autonomous driving tasks. The main contributions of this article are three-fold. First, we survey potential applications of AIGC to autonomous driving tasks. Second, we propose a cloud-edge-terminal collaborative AIGC architecture and service workflow to address the resource and latency challenges of applying AIGC to autonomous driving. Third, we initiate attempts to develop communication, storage, and computation resource allocation mechanisms to support the operation of the cloud-edge-terminal collaborative AIGC system, and leverage AIGC to enhance network design. § AIGC FOR AUTONOMOUS DRIVING In this section, we first survey autonomous driving tasks that benefit from the generative and predictive capabilities of AIGC.
We then discuss the challenges of applying AIGC to autonomous driving, which demands location-dependent and personalized content under strict latency constraints. §.§ AIGC Applications to Autonomous Driving AIGC can be applied to several aspects of autonomous driving, as illustrated in Fig. <ref>. Perception: Perception is the process of acquiring, interpreting, and using sensory information to enable an autonomous vehicle to navigate safely and efficiently in complex and dynamic environments. Generative models can be used to supplement the characterization of unknown environments that are not present in sensory perceptions. For instance, generative models can estimate the likelihood of vehicles or pedestrians appearing behind an obstacle based on the perception near the obstacle. This is possible because generative models can recover a picture using only a small fraction of its pixels <cit.>. Generative models can also infer terrain properties outside the perception range, which improves perception in cluttered environments where sensors are obstructed. Additionally, generative models can perform targeted object detection based on language prompts. For example, given a traffic scene image and a language prompt “detect traffic light colors,” the generative model can identify the traffic lights and provide corresponding text or voice descriptions to assist drivers with color vision deficiency. Generative models can better extract semantics and reduce communication loads, whereas transmitting raw perceptual information requires a large amount of communication resources. For example, converting an image into a short text description and then transmitting the text description to other vehicles can reduce bandwidth and latency in cooperative sensing. Motion prediction and risk assessment: Motion prediction aims to estimate the trajectories of surrounding objects, such as vehicles, pedestrians, and cyclists, based on their current and past positions, velocities, orientations, and other attributes. Risk assessment aims to predict collisions and localize risky regions, given the predicted motions of the ego-vehicle and other moving objects. Generative AI models have the ability to learn the potential distribution of data, including the trajectories of vehicles and pedestrians under various scenarios. Moreover, the prediction task can be transformed into a language modeling task <cit.>, which can be used to predict the behaviors of vehicles and pedestrians. For example, given previous traffic scenes and a question “What will the last car in the middle lane do when the yellow light comes on ahead?” the language model can analyze the scene and provide a possible answer “The last car will accelerate through the intersection.” Based on motion prediction, generative models can perform risk assessments. For example, given an alleyway scene and traffic, the generative model may predict that “Two cars at the intersection ahead may collide.” Motion planning: Motion planning is the process of finding a feasible and safe trajectory for a vehicle to follow in a dynamic environment, including obstacle avoidance, lane keeping, and path finding. For example, the Tesla FSD V12 end-to-end autonomous driving model has been successfully applied in practical driving systems, achieving driving performance close to that of humans. Motion planning can also be transformed into a language modeling task <cit.>.
For example, given a traffic scene and a goal "from point A to point B," the language model can generate a reasonable path and provide corresponding text descriptions <cit.>. In addition, generative language models can provide explanations for the decisions made. For example, given a traffic scene and a question "What should the autonomous vehicle do?" the model can generate an optimal behavior based on the scene and provide a corresponding text explanation "The autonomous vehicle should slow down and stop on the side of the road because there is an ambulance passing ahead." Traffic simulation, prediction and control: Besides microscopic content generation such as a vehicle's local perception, motion prediction, and planning, AIGC can be applied to macroscopic traffic control such as simulating, predicting, and controlling traffic flows in complex urban environments <cit.>. By combining digital twins, generative models can simulate real driving environments in virtual space and evaluate traffic control strategies. Suppose that a control center has knowledge of real-time traffic. Generative models at the control center can output traffic light control decisions that alleviate traffic congestion through simulations. The control decisions can then be implemented at traffic lights through communication with Road Side Units (RSU). In addition, given a traffic scene in a city or region and a parameter "increase traffic flow by 10%," generative models can generate a new traffic scene through simulation, predict possible congestion situations, and provide corresponding text descriptions. Autonomous driving dataset generation: Autonomous driving requires high-quality data for training and testing. However, collecting and labeling real-world data is costly, time-consuming, and may not cover all possible scenarios. Therefore, synthetic data generation is a promising alternative that can provide large-scale, diverse, and high-fidelity data for autonomous driving applications. Generative models use existing data to create new traffic data that can provide more corner cases and varied conditions for training autonomous driving algorithms. For example, given a traffic scene and a parameter "generate perception under rainy or snowy weather," generative models can create new perception data by adding rain or snow effects to the original scene. Human-vehicle interface: A human-vehicle interface is essential for ensuring the safety, comfort, and trust of the drivers and passengers in autonomous vehicles, since there are situations where human input or supervision may be required, such as in emergencies, complex scenarios, or legal regulations. The interface should be designed to provide clear and timely information, intuitive and easy-to-use controls, and adaptive and personalized features. Generative models enhance the way vehicles understand and respond to human input, including voices and gestures. The vehicle can anticipate the driver's needs and offer assistance proactively, making the driving experience more immersive and personalized. For instance, if the vehicle senses that the driver is tired or predicts that the driver will be tired based on the driving history, it can recommend a rest area or coffee shop ahead. Generative models can also explain the behavior of the vehicle, making it easier for humans to understand and supervise.
For example, given a vehicle's behavior "stop on the side of the road", generative models can generate a text explanation based on the behavior "stop on the side of the road because there is an ambulance passing ahead" and provide feedback to the user through voice or screen. §.§ Challenges of Applying AIGC to Autonomous Driving High computational complexity and latency: To achieve safe autonomous driving, AIGC applications need to analyze and make decisions on the vehicle's status, environment, road conditions, and other information in milliseconds. This poses a challenge to the computation and storage resources of vehicles because AIGC models usually have a large number of parameters and complex structures. If AIGC models are deployed in the cloud, network transmission delay, bandwidth limitations, and vehicular mobility issues may lead to a decreased quality of service. Edge collaboration is a feasible solution, which can use resources at RSUs near the vehicle to provide low-latency, high-reliability, and high-performance AIGC services. Adaptation to different regions and traffic conditions: Some autonomous driving tasks depend on geographical location and terrain characteristics. Different countries or regions may have different road rules and traffic regulations, such as driving directions, speed limit signs, and traffic lights. These rules will affect the behavior selection and content generation of autonomous driving vehicles in different regions. For example, in the United States, autonomous driving vehicles need to drive on the right, while in the United Kingdom, they need to drive on the left. On highways, autonomous driving vehicles need to adjust their speed according to speed limit signs, while on city roads, they need to stop or start according to traffic lights. On mountain roads, obstacles are complex and have various shapes, while the vehicle's field of view is limited, requiring more cautious decision-making. Therefore, AIGC services need to be able to identify localized rules and properties in different regions and generate appropriate content based on these rules. Additionally, vehicles may encounter varied road conditions and traffic flows, which necessitate adapting navigation strategies accordingly. For example, in congested road sections, autonomous driving vehicles may need to change lanes or decelerate more frequently, while in open road sections, they can accelerate or maintain stability. In these cases, AIGC services need to be able to perceive the current system environment and generate content that is highly adaptive, flexible, and efficient based on this environment. Personalized content for ego-vehicles: Different vehicle owners may have different driving habits and preferences. For example, some people like to drive smoothly, while others like to drive fast; some people like to maintain a certain distance, while others like to follow closely; some people like to change lanes in advance, while others like to change lanes at the last moment. These preferences will affect the decision-making mode of autonomous driving vehicles, such as when, where, and how to accelerate, decelerate, change lanes, and overtake. Therefore, AIGC services need to be able to learn and adapt to the personalized preferences of vehicle owners and generate content that meets their preferences. Vehicle owners may also have different style preferences for generated content.
For example, some people like concise and clear content, while others like detailed and rich content; some people like formal and rigorous content, while others like humorous and relaxed content. These preferences will affect the content expression of autonomous driving vehicles, including voice prompts, image displays, and text displays. Therefore, AIGC services need to be able to recognize and adapt to the personalized styles of vehicle owners and generate content that meets their preferences. Providing personalized vehicle control and content style to the driver poses a significant challenge for generative models, as they have to generate tailored responses based on a limited amount of interaction history, even though the driving history may span a long period of time. Therefore, generative models need to learn how to effectively capture the preferences, habits, and goals of the driver from a few conversational turns or from metadata of the vehicle and the driver, and generate relevant and coherent content that adapts to the driving context. § CLOUD-EDGE-TERMINAL COLLABORATIVE AIGC ARCHITECTURE We propose a cloud-edge-terminal collaborative AIGC architecture to support low-latency, location-dependent, and personalized autonomous driving task requests. Based on this architecture, we further discuss the mutual assistance between AIGC services and network resource management. We make initial attempts to leverage AIGC to improve network communication, storage, and computation resource management, and to propose resource allocation schemes to support AIGC for autonomous driving. §.§ Architecture Design The cloud-edge-terminal collaborative AIGC architecture is shown in Fig. <ref>. Cloud: AIGC Service Providers (ASP) train large generative models using a large amount of data, and deploy pre-trained and fine-tuned models with strong reasoning capabilities by utilizing the ample computing and storage resources in the data center. These models handle complex AIGC tasks such as high-quality traffic simulations, predictions, and traffic control evaluations. ASPs can also customize large models according to the characteristics and needs of different regions and compress them into smaller models for deployment at the edge. Edge: At the edge, RSUs obtain smaller fine-tuned generative models from the cloud and schedule resources to provide localized and timely responses for vehicles. Most AIGC services for autonomous driving can be completed at the edge, including perception, motion prediction, and risk assessment. Only when the smaller models at the edge are insufficient to complete the specified task is the task offloaded to the cloud. For example, an RSU can collect local traffic information and send it to the cloud for centralized traffic light control using larger models. Terminal: Vehicles host pruned and quantized generative models using limited computation and storage resources. The models generate personalized content using lightweight computation and therefore have limited capabilities <cit.>. To support more demanding generative tasks, a vehicle chooses an appropriate ASP at the edge or cloud and sends the request to the ASP, and the generated content is then communicated back to the vehicle. The generated content can be further processed at the terminal to meet personalized needs. To enhance the personalization of content generation, a long interaction history can be compressed semantically <cit.> and stored in the user profile.
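To make the three-tier division concrete, the following Python sketch records one illustrative capability profile per tier and looks up which tiers can serve a given task. All model sizes, task lists, and latency figures here are assumptions chosen for exposition, not properties of a specific deployment.

```python
# Hypothetical capability table for the cloud-edge-terminal hierarchy.
# All numbers and task assignments are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class TierModel:
    tier: str               # "cloud", "edge", or "terminal"
    params_billion: float   # assumed model size at this tier
    typical_tasks: tuple    # tasks the tier's model is suited for
    round_trip_ms: float    # assumed request latency seen by the vehicle

HIERARCHY = [
    TierModel("cloud", 100.0,
              ("traffic simulation", "traffic-light control"), 300.0),
    TierModel("edge", 7.0,
              ("perception", "motion prediction", "risk assessment"), 30.0),
    TierModel("terminal", 1.0,
              ("personalization", "post-processing"), 5.0),
]

def tiers_for(task: str):
    """Return the tiers able to serve a task, lowest latency first."""
    return sorted(
        (m for m in HIERARCHY if task in m.typical_tasks),
        key=lambda m: m.round_trip_ms,
    )

print([m.tier for m in tiers_for("motion prediction")])  # ['edge']
```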
The service workflow in the cloud-edge-terminal collaborative AIGC architecture for autonomous driving includes the following steps. 1. AIGC service request generation: The vehicle or driver at the terminal generates a request, which may include a personalized prompt. The system decides whether to execute the request locally or offload it to the edge (or cloud), considering task complexity, privacy, and latency requirements. 2. ASP selection and offloading: Given the computation and storage resource constraints of different ASPs at the edge (or cloud), and the communication constraints between the vehicle and the edge (or cloud), vehicles select appropriate ASPs for their task requests. The objective of ASP selection is to generate high-quality responses within the resource and latency constraints. The generated content can then be transmitted from the remote ASP to the vehicle. 3. Post-processing of generated content: Some content returned by a remote ASP may be in intermediate formats to reduce the communication load (e.g., features or a text description of an image) and needs to be further processed by generative models at the terminal before being consumed by the driver. Moreover, post-processing can further personalize the content, since generative models at the vehicle maintain a more comprehensive set of driver preferences. While the above service workflow focuses on model inference and content generation, the large number of requests and the accompanying traffic information can further enhance model training and fine-tuning. By analyzing additional traffic data, ASPs can update the generative models at the cloud and the edge. This process is latency-insensitive and requires large amounts of resources, in contrast with real-time service requests. §.§ System Operations AIGC relies on communication, storage, and computation resources that collaborate throughout the whole workflow. Compared with traditional workloads, AIGC services have unique properties that can be incorporated in the design of network resource allocation strategies. First, the versatile content generation capability can adapt not only to users, but also to the available network resources. Generative models are able to output content of various qualities (e.g., images at various resolutions). Satisfying the resource demands of generation tasks with available network resources through joint optimization of task adaptation and resource allocation has the potential to increase user satisfaction and utility. Second, different from traditional content distribution networks where users request the same content stored at edge servers, AIGC services generate content tailored to the terminal users, and the generated content may even change in response to the same question from the same user as the interaction evolves. Therefore, AIGC services require more tightly coupled computation and storage resources in the collaborative framework. On one hand, the stored model requires computation resources to generate content, and caching alone is insufficient to provide personalized content. On the other hand, content generation depends on the interaction history, which occupies additional storage resources, and the interaction is unique to the user, in contrast to the traditional content distribution network where different users send the same request. Therefore, additional storage resources are needed to support users' stateful service requests, whereas traditional requests are often stateless.
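As a minimal sketch of the first property above (matching generation quality to available network resources), the following Python snippet picks the richest content quality whose generation and transfer fit a task deadline. The quality levels, payload sizes, and latency model are illustrative assumptions rather than a measured system.

```python
# Hypothetical elastic quality adaptation: choose the highest content
# quality whose inference time plus transfer time fits the deadline.

from dataclasses import dataclass

@dataclass
class QualityLevel:
    name: str            # e.g., "text", "low_res_image", "high_res_image"
    payload_mbit: float  # size of the generated content to transmit
    gen_time_s: float    # assumed inference time at the serving node

def pick_quality(levels, link_mbps, deadline_s):
    """Return the richest quality level that meets the deadline."""
    feasible = [
        q for q in levels
        if q.gen_time_s + q.payload_mbit / link_mbps <= deadline_s
    ]
    # Prefer the largest feasible payload, i.e., the richest content.
    return max(feasible, key=lambda q: q.payload_mbit, default=None)

levels = [
    QualityLevel("text", 0.01, 0.05),
    QualityLevel("low_res_image", 2.0, 0.2),
    QualityLevel("high_res_image", 40.0, 0.5),
]
# With a 20 Mbps link and a 1 s deadline, the low-resolution image wins.
print(pick_quality(levels, link_mbps=20.0, deadline_s=1.0))
```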
In the remainder of this section, we discuss task adaptation and resource allocation strategies to support AIGC services, and we leverage AIGC to assist resource management. Communication: One of the challenges of autonomous driving is to ensure reliable and efficient communication between vehicles and infrastructure when vehicles are moving at high speeds. Link transmission capacity varies rapidly over time, especially for higher-frequency-band wireless communications such as 6G, which can affect the quality and timeliness of the data exchanged. Addressing this challenge involves two issues: determining what to transmit and ensuring efficacy under rapidly changing network conditions. Elastic task generation and resource allocation match task demands and network resources in dynamic vehicular networks, and both can be enhanced by applying AIGC technologies. On one hand, generative models can be used to create content with different volumes that match the current link transmission capacity. For example, a generative model can produce a low-resolution image when the link is weak, and a high-resolution image when the link is strong. A generative model can also convert images to text, which further reduces the data volume for communication. This way, the generative models can adapt to the changing network conditions and optimize data transmission for autonomous driving. On the other hand, AIGC has the potential to improve wireless communications by addressing mobility challenges and proactively allocating resources. Generative models for motion planning can predict the future trajectories of vehicles based on their previous states and the environment. As shown in Fig. <ref>, the predicted vehicle locations can improve beam tracking between vehicles and access points at an RSU or base station. Higher-frequency-band wireless transmission is prone to blockage and scattering. Environment perception improves beamforming by identifying the locations and types of blockers and scatterers <cit.>. Moreover, predicting vehicle trajectories facilitates handover in the ultra-dense small cells of 6G, which have shorter transmission ranges and rely on line-of-sight transmission, by selecting the optimal access point to communicate with the vehicles. The network can proactively reserve bandwidth in the most probable adjacent cell instead of all adjacent cells before the handover occurs, reducing bandwidth waste and improving resource utilization and handover accuracy. Generative models can also assist communication resource allocation by predicting dynamic communication demand and available bandwidth resources. AIGC for macroscopic traffic simulation estimates the future traffic flow intensity, which reflects the amount of communication resources required by the vehicles. AIGC for microscopic motion planning and perception can assist in estimating channel state information by using sensing information and reducing pilot overhead. These predictions can then be used to improve the routing and scheduling algorithms that manage the communication networks and enhance their performance. By applying AIGC to task generation, resource prediction, and allocation, the cloud-edge-terminal framework can optimize the utility of communication operations and satisfy various task demands. Storage: Content, models, user profiles, and interaction histories can all be stored, since they are key ingredients for AIGC services. Caching generated content avoids repeated model inference computation and can serve multiple vehicles that have the same request.
However, given the diverse service requests and preferences in autonomous driving, it becomes necessary to cache models in order to obtain personalized content and increase the cache hit rate <cit.>. A model can generate different content for different users according to user profiles and their interaction histories. A user profile includes driver identity, vehicle type information, travel trajectory, real-time location and speed, social interaction with other vehicles, etc., reflecting the user's driving behavior and entertainment preferences. The interaction history is recorded while the AIGC service is used, such as input prompts and responses, reflecting the user's preferences and feedback on system performance. Both user profiles and interaction histories can be compressed to save storage space. Moreover, popular generative language models cannot generalize to texts longer than their training sequence lengths. Compressed content or features extracted from the interaction history can instead be used when interacting with generative models. Since vehicles and RSUs have limited storage resources, they cache the most relevant models and content by estimating future demand, as shown in Fig. <ref>. Proactive caching based on AIGC traffic estimation and motion prediction enhances driving safety while reducing service latency. With traffic congestion prediction, relevant models and content with higher sensing accuracy can be cached in RSUs near congested road sections, to meet vehicles' navigation demand on higher-risk congested roads while reducing service latency and network load. As vehicles drive across different regions, region-specific generative models can be proactively cached in vehicles. For example, before entering mountainous terrain, higher-precision perception models can be proactively cached. Such models facilitate identifying different types of obstacles in mountainous terrain and inferring potential obstacles outside the vehicle's perception range, which assists vehicles in complex environments in adopting a more conservative driving strategy. In mountainous terrain, where RSUs are limited, proactive caching in vehicles becomes necessary. In addition, proactive caching based on user demand estimation enhances service quality. Using user profiles and interaction histories as input, generative models can predict future driving and entertainment demands. The corresponding models and content can be cached in advance. For instance, generative models with recreation features can be proactively cached in a vehicle before a road trip, which can be inferred from the interaction history. Computation: Computation is the core of the cloud-edge-terminal collaborative AIGC architecture for supporting real-time personalized AIGC services. The challenges of allocating computation resources for autonomous vehicles are two-fold. On one hand, it is difficult to quantify the subjective and personalized preferences of users towards the generated content. Consider a tourist in an autonomous vehicle who prefers scenic routes over the fastest ones. The system must understand the subjective concept of "scenic value", which might include views and landmarks. This preference varies greatly from one individual to another and depends on personal taste and current mood. On the other hand, the multi-user resource allocation problem for multiple ASPs, i.e., the ASP selection, is framed as a resource-constrained task assignment problem <cit.>, which is NP-hard <cit.>; a simple greedy baseline is sketched below.
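To make the baseline concrete, the following Python sketch assigns each request to the feasible ASP with the highest utility, in descending order of best achievable utility. The utility scores, demands, and capacities are illustrative assumptions; an exact solution is NP-hard in general, and the learning-based approach described next would replace this heuristic in practice.

```python
# Hypothetical greedy baseline for resource-constrained ASP selection.

def greedy_asp_selection(requests, asps):
    """Assign each request to the feasible ASP with the highest utility.

    requests: list of dicts with 'demand' (compute units) and 'utility'
              (per-ASP scores, e.g., expected content quality minus a
              latency penalty).
    asps:     dict mapping ASP id -> remaining compute capacity.
    """
    assignment = {}
    # Serve requests in descending order of their best achievable utility.
    order = sorted(
        range(len(requests)),
        key=lambda i: -max(requests[i]["utility"].values()),
    )
    for i in order:
        req = requests[i]
        candidates = [
            (u, a) for a, u in req["utility"].items()
            if asps[a] >= req["demand"]
        ]
        if candidates:
            u, best = max(candidates)
            assignment[i] = best
            asps[best] -= req["demand"]  # consume edge/cloud capacity
        else:
            assignment[i] = None  # fall back to the local terminal model
    return assignment

requests = [
    {"demand": 2, "utility": {"edge": 0.9, "cloud": 0.7}},
    {"demand": 3, "utility": {"edge": 0.8, "cloud": 0.75}},
]
asps = {"edge": 3, "cloud": 10}
print(greedy_asp_selection(requests, asps))  # {0: 'edge', 1: 'cloud'}
```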
The complexity of this problem is further exacerbated when considering the autonomy of vehicles. To align the autonomous systems with human preferences, the first step involves training a cross-modal, semantically capable model to serve as the Reward Model (RM). This model is crucial for understanding and interpreting the multi-modal data streams pertinent to AIGC tasks, and it is fine-tuned using human feedback collected from long-term interactions with generative models as well as user profiles. This feedback mechanism allows the model to capture a wide range of human preferences, from aesthetic considerations in content generation to pragmatic concerns like energy efficiency and latency. The RM processes input parameters such as the quality of generated content, latency, and energy consumption, and produces a scalar reward as output. This reward represents a quantified estimation of user satisfaction, encapsulating the complex and subjective nature of human preferences in a format that is computationally manageable for optimization. Vehicles generate multiple AIGC service requests at different times, which are processed by ASPs. The ASP selection problem is complex due to the autonomy of vehicles, diverse user preferences, resource constraints, dynamic vehicular environments, and the interdependence of selection decisions. Given these features, the Partially Observable Stochastic Game (POSG) framework is well suited to capture the intricacies of this decision-making process. POSG accounts for the partial observability inherent in each vehicle's decision-making process, where a vehicle may not have complete information about the network state or the service choices being made by other vehicles. Moreover, POSG accommodates the dynamic nature of the environment and the strategic interplay among multiple vehicles, each striving to optimize its own service experience in the context of limited resources at the edge. To navigate the challenges posed by the POSG framework, we propose an interactive Multi-Agent Reinforcement Learning (MARL) <cit.> approach for ASP selection, shown in Fig. <ref>. This approach allows vehicles to engage in pre-decision interactions with nearby vehicles through communication modules. Such interactions enable vehicles to share insights and coordinate their decisions, thereby enhancing the overall decision-making process in the context of ASP selection. The actions, rewards, and human feedback are stored for continuous training and refinement of the RM, thus ensuring that the system remains adaptable and responsive to evolving user preferences. In summary, by integrating the RM concept into the AIGC task management framework for autonomous vehicles, we provide a robust MARL mechanism to align automated decision-making processes with the nuanced and dynamic nature of human preferences. This approach not only enhances the relevance and effectiveness of AIGC tasks but also ensures that autonomous systems remain user-centric and adaptable in real-world scenarios. § CONCLUSION Mobility in complex autonomous driving environments poses challenges to vehicle perception and decision-making. Generative models can augment perception and predict future vehicle motions by leveraging their generative capability based on distributions learned from previous data. This article surveys the potential applications of AIGC to autonomous driving and proposes a cloud-edge-terminal collaborative architecture to support AIGC.
The unique properties of generative models bring challenges to communication, storage, and computation resource allocation, while the models' predictive capability can assist network design and resource management. This article delves into these challenges and research opportunities, and proposes initial attempts to construct mutually supportive AIGC and network systems for autonomous driving. § ACKNOWLEDGMENTS This work was supported in part by the National Natural Science Foundation of China under Grants 62341101, 62301011, and 62271351.
T. Brown et al., "Language models are few-shot learners," Advances in Neural Information Processing Systems, vol. 33, pp. 1877–1901, 2020.
R. Rombach, A. Blattmann, D. Lorenz, P. Esser, and B. Ommer, "High-resolution image synthesis with latent diffusion models," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022, pp. 10684–10695.
M. Cherti, R. Beaumont, R. Wightman, M. Wortsman, G. Ilharco, C. Gordon, C. Schuhmann, L. Schmidt, and J. Jitsev, "Reproducible scaling laws for contrastive language-image learning," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023, pp. 2818–2829.
Z. Xu, Y. Zhang, E. Xie, Z. Zhao, Y. Guo, K. K. Wong, Z. Li, and H. Zhao, "DriveGPT4: Interpretable end-to-end autonomous driving via large language model," arXiv preprint arXiv:2310.01412, 2023.
A. Seff, B. Cera, D. Chen, M. Ng, A. Zhou, N. Nayakanti, K. S. Refaat, R. Al-Rfou, and B. Sapp, "MotionLM: Multi-agent motion forecasting as language modeling," in Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 8579–8590.
J. Mao, Y. Qian, H. Zhao, and Y. Wang, "GPT-Driver: Learning to drive with GPT," arXiv preprint arXiv:2310.01415, 2023.
R. Zhang, K. Xiong, H. Du, D. Niyato, J. Kang, X. Shen, and H. V. Poor, "Generative AI-enabled vehicular networks: Fundamentals, framework, and case study," arXiv preprint arXiv:2304.11098, 2023.
M. Xu, D. Niyato, J. Chen, H. Zhang, J. Kang, Z. Xiong, S. Mao, and Z. Han, "Generative AI-empowered simulation for autonomous driving in vehicular mixed reality metaverses," arXiv preprint arXiv:2302.08418, 2023.
M. Li, J. Lin, Y. Ding, Z. Liu, J.-Y. Zhu, and S. Han, "GAN compression: Efficient architectures for interactive conditional GANs," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020, pp. 5284–5294.
H. Jiang, Q. Wu, C.-Y. Lin, Y. Yang, and L. Qiu, "LLMLingua: Compressing prompts for accelerated inference of large language models," arXiv preprint arXiv:2310.05736, 2023.
X. Cheng, H. Zhang, J. Zhang, S. Gao, S. Li, Z. Huang, L. Bai, Z. Yang, X. Zheng, and L. Yang, "Intelligent multi-modal sensing-communication integration: Synesthesia of machines," IEEE Communications Surveys and Tutorials, 2023.
M. Xu, D. Niyato, H. Zhang, J. Kang, Z. Xiong, S. Mao, and Z. Han, "Joint foundation model caching and inference of generative AI services for edge intelligence," arXiv preprint arXiv:2305.12130, 2023.
M. Xu et al., "Unleashing the power of edge-cloud generative AI in mobile networks: A survey of AIGC services," arXiv preprint arXiv:2303.16129, 2023.
K. W. Tindell, A. Burns, and A. J.
Wellings, "Allocating hard real-time tasks: an NP-hard problem made easy," Real-Time Systems, vol. 4, no. 2, pp. 145–165, 1992.
S. Iqbal and F. Sha, "Actor-attention-critic for multi-agent reinforcement learning," in International Conference on Machine Learning (ICML). PMLR, 2019, pp. 2961–2970.
http://arxiv.org/abs/2407.01990v1
20240702070606
Hybrid Rotational Cavity Optomechanics Using Atomic Superfluid in a Ring
[ "Sanket Das", "Pardeep Kumar", "M. Bhattacharya", "Tarak N. Dey" ]
quant-ph
[ "quant-ph", "physics.atom-ph", "physics.optics" ]
Department of Physics, Indian Institute of Technology Guwahati, Assam 781039, India pardeep.kumar@mpl.mpg.de Max Planck Institute for the Science of Light, Staudtstraße 2, 91058 Erlangen, Germany School of Physics and Astronomy, Rochester Institute of Technology, 84 Lomb Memorial Drive, Rochester, New York 14623, USA Department of Physics, Indian Institute of Technology Guwahati, Assam 781039, India § ABSTRACT We introduce a hybrid optomechanical system containing an annularly trapped Bose-Einstein condensate (BEC) inside an optical cavity driven by Laguerre-Gaussian (LG) modes. Spiral phase elements serve as the end mirrors of the cavity such that the rear mirror oscillates torsionally about the cavity axis through a clamped support. As described earlier in a related system [https://doi.org/10.1103/PhysRevLett.127.113601 P. Kumar et al., Phys. Rev. Lett. 127, 113601 (2021)], the condensate atoms interact with the optical cavity modes carrying orbital angular momentum, which creates two atomic side modes. We observe three peaks in the output noise spectrum corresponding to the atomic side mode and rotating mirror frequencies, respectively. We find that the trapped BEC's rotation reduces quantum fluctuations at the mirror's resonance frequency. We also find that the atomic side mode-cavity coupling and the optorotational coupling can produce bipartite and tripartite entanglement between various constituents of our hybrid system. We reduce the frequency difference between the side modes and the mirror by tuning the drive field's topological charge and the condensate atoms' rotation. When the atomic side modes become degenerate with the mirror, the stationary entanglement between the cavity and the mirror mode diminishes due to the suppression of cooling. Our proposal, which combines atomic superfluid circulation with mechanical rotation, provides a versatile platform for reducing quantum fluctuations and producing macroscopic entanglement with experimentally realizable parameters. Hybrid Rotational Cavity Optomechanics Using Atomic Superfluid in a Ring Tarak N. Dey July 8, 2024 ========================================================================= § INTRODUCTION Soon after its experimental realization <cit.>, the Bose-Einstein condensate (BEC) has emerged as a prominent and controllable system <cit.> to investigate and mimic the persistent flow of superfluidity <cit.> and superconductivity <cit.>. Especially when confined in multiply-connected geometries (toroidal traps) <cit.>, such systems exhibit persistent currents of superflow <cit.>. These geometries can provide (i) topological protection to the quantum circulation <cit.>, (ii) longer dissipationless flow <cit.>, and (iii) `supersonic' rotations <cit.>. Since the first experimental demonstration of atomic persistent currents in an annularly trapped BEC <cit.>, incredible progress has been made in this configuration to study atomic superflow for the investigation and development of matter-wave interferometry <cit.>, atomtronic circuits <cit.>, topological excitations <cit.>, superfluid hydrodynamics <cit.>, phase slips <cit.>, time crystals <cit.>, gyroscopy <cit.>, and cosmological studies <cit.>. Therefore, in a toroidal geometry, it is very important to determine the atomic circulation, which involves the phase (winding number) measurement of a rotational state.
For detecting the winding number, the current state-of-the-art technologies involve destructive methods <cit.>, namely, optical absorption imaging of the atoms in the ring, which destroys its superfluid character. Furthermore, in situ detection with the existing techniques is difficult due to issues related to the optical resolution of the radius of the vortex, which demand time-of-flight expansion methods <cit.>. In recent studies, our group proposed a versatile method to detect the magnitude <cit.> as well as the direction <cit.> of rotation of a bosonic ring condensate with minimal destruction, in situ and in real time. Specifically, the method uses the tools of cavity optomechanics <cit.>, a unique platform to explore the radiation-pressure interaction of vibrating elements with the photons confined inside an optical resonator. This method not only improves the rotation sensing by three orders of magnitude but also provides a test bed to manipulate the persistent currents by generating optomechanical entanglement between matter waves. The aforementioned radiation-pressure interaction plays a dual role in cavity optomechanics. On one side, it assists in manipulating the properties of mechanically pliable objects for applications such as quantum ground state cooling <cit.> and the generation of entanglement between macroscopic objects <cit.>. On the other end, it can also be used to control the quantum properties of the light. For instance, optomechanical interactions can generate squeezed states of light, where the quantum fluctuations in one of the optical quadratures are reduced below the shot-noise level. This comes at the cost of increased fluctuations in the other, orthogonal quadrature <cit.>. Such engineered squeezed light states play a vital role in enhancing displacement sensitivity in kilometer-sized gravitational wave observatories <cit.>, optical communication <cit.>, and metrology <cit.>. Interestingly, the rotational analog of cavity optomechanics utilizes the radiation torque <cit.> from the angular momentum exchange between a Laguerre-Gaussian (LG) cavity mode and a spiral phase plate acting as a rotating mirror <cit.>. Such systems have been investigated to cool the rotational degrees of freedom to their quantum ground state <cit.> and for the realization of opto-rotational entanglement <cit.>. Nowadays, hybrid optomechanical systems <cit.> pave a versatile pathway toward the development of quantum technologies <cit.>. Such systems consist of a mechanical oscillator interacting with an electromagnetic field <cit.> and an additional physical system <cit.> or degree of freedom <cit.>. In this paper, we present a hybrid setup formed by confining an annularly trapped BEC inside an LG cavity. The spiral phase elements serve as the end mirrors of the cavity such that one mirror rotates about the cavity axis through a clamped support. Specifically, in this hybrid system, we explore (i) ponderomotive squeezing, i.e., the reduction of the quantum fluctuations of the output optical quadratures below the shot-noise level at various frequencies, and (ii) the generation of bipartite and tripartite entanglement between the cavity, the matter waves, and the macroscopic rotating mirror. The main advantages of our hybrid setup are: (a) From the perspective of the toroidal BEC, the atomic rotation in it provides a distinctive tool to correlate the optical amplitude and phase quadratures and provides squeezing of about 87% at a measurement angle of 7^∘ around the frequencies of the Bragg-scattered side modes.
On the other hand, an optimum ponderomotive squeezing of 40% below the shot-noise level occurs at the angular frequency of the rotating mirror, which can further be manipulated by the persistent currents of the ring BEC. The same effect also controls the bipartite and tripartite entanglement between the atomic superfluids and the macroscopic object, providing a useful resource for quantum information processing. (b) From the point of view of the LG cavity, it is relatively easy to increase the orbital angular momentum (OAM) of the LG mode. Moreover, in such a cavity setup, the optorotational effects scale with the square of the OAM <cit.>, in contrast to the linear scaling of conventional cavity optomechanics. Using these facts, it is comparatively simple to manipulate the optical interaction of the ring BEC and the rotating mirror with a common LG mode. This in turn can be harnessed to increase the ponderomotive squeezing and the simultaneous existence of bipartite and tripartite entanglements. Our hybrid system represents a first proposal involving matter wave rotation in hybrid optomechanics and can be exploited for applications in optomechanical sensing, atomtronics, and quantum information processing. The paper is organized as follows: In Sec. <ref>, we provide details of our hybrid system. In Sec. <ref>, we then provide the relevant equations for the quantum dynamics and study the bistability and stability of the system. Sec. <ref> contains a discussion of ponderomotive squeezing, while Sec. <ref> describes the bipartite and tripartite entanglement in our hybrid setup. Finally, we conclude in Sec. <ref>. § MODEL The configuration under consideration is a hybrid setup consisting of a ring BEC and an optical cavity. In the following, we provide details of each of these elements. §.§ Ring BEC The first ingredient of our hybrid setup is an atomic BEC confined in a ring trap and located inside the optical cavity. In the toroidal trap, the harmonic potential experienced by each condensate atom along the radial (ρ) and axial (z) directions is given by <cit.> U(ρ,z) =1/2mω_ρ(ρ-R)^2+1/2mω_zz^2 , where m, ω_ρ (ω_z), and R represent the mass of the condensate atom, the harmonic trapping frequency along the radial (axial) direction, and the radius of the ring trap, respectively. Due to the above potential, we assume that the atomic evolution along the radial and axial directions remains unchanged. However, we consider the dynamical evolution along the azimuthal direction, φ, as it is not subjected to any potential. This one-dimensional description applies within current state-of-the-art experiments if the number of atoms N of the condensate obeys the following constraint N < 4R/(3a_Na)√(πω_ρ/ω_z) , where a_Na is the s-wave scattering length of the sodium atoms in the condensate. §.§ Optical Cavity In our setup, an optical cavity is formed with two spiral phase elements with the same handedness. In this arrangement, the input coupler (IC) is a fixed partially transmissive mirror, and the rear mirror (RM) is perfectly reflective. Now, IC is designed to preserve the OAM of the light while transmitting. On the other hand, it removes OAM of 2lħ per photon in reflection. The perfect reflection from RM removes 2lħ angular momentum from each photon. In Fig. <ref>, we provide the mode buildup at various locations along the optic axis for an input Laguerre-Gaussian beam carrying orbital angular momentum +lħ.
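As a quick plausibility check of the one-dimensional constraint above, the following Python snippet evaluates the bound on N. The sodium scattering length is a standard literature value, while the ring radius and trap frequencies are assumptions chosen for illustration rather than the parameters used later in the paper.

```python
# Back-of-the-envelope check of N < (4R / 3 a_Na) * sqrt(pi * omega_rho / omega_z).
# Trap parameters below are illustrative assumptions.

import math

a_Na = 2.75e-9                    # s-wave scattering length of sodium (m)
R = 12e-6                         # ring radius (m), assumed
omega_rho = 2 * math.pi * 400.0   # radial trap frequency (rad/s), assumed
omega_z = 2 * math.pi * 400.0     # axial trap frequency (rad/s), assumed

N_max = (4 * R / (3 * a_Na)) * math.sqrt(math.pi * omega_rho / omega_z)
print(f"1D description valid for N < {N_max:.2e} atoms")  # ~1e4 atoms
```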
With the above design, the radiation torque per photon transferred to RM is written as ξ_ϕ=2lħ c/(2L), where c is the speed of light and L is the cavity length. § QUANTUM DYNAMICS §.§ Hamiltonian As described above, the modes carrying OAM ± lħ build up inside the cavity, creating an angular lattice about the cavity axis. From such an optical lattice, some of the condensate atoms get coherently Bragg scattered from a macroscopically occupied rotational state with winding number L_p to states with L_p± 2nl, with n=1,2,3,⋯. In the following, we consider a dipole potential weaker than the condensate's chemical potential and consider only first-order diffraction, L_p→ L_p± 2l. In dimensionless units, the Hamiltonian for our hybrid configuration is expressed as H =H_BEC+H_ϕ . Here, H_BEC is the one-dimensional Hamiltonian for the azimuthal motion of the ring BEC and is governed by <cit.> H_BEC =∑_j = c, d[ħω_j/2( X_j^2 + Y_j^2)+ħ(G X_j+U_0 N/2)a^†a] +∑_j = c, d2ħg̃N(X_j^2 + Y_j^2)+2ħg̃N(X_cX_d-Y_cY_d) . The first term in the square bracket in Eq. (<ref>) denotes the energies of the Bragg-scattered side modes <cit.> of frequencies ω_c = ħ (L_p + 2 l)^2 / (2 I_a) and ω_d = ħ (L_p - 2 l)^2 / (2 I_a). The moment of inertia of each atom about the cavity axis is I_a=m R^2. The dimensionless position and momentum quadratures are defined as X_j=(j^†+j)/√(2) and Y_j=(j-j^†)/i√(2), respectively. The second term in the square bracket on the right-hand side of Eq. (<ref>) governs the effective optomechanical coupling between the side modes and the optical field with the coupling strength G = U_0√(N)/(2√(2)). Here U_0=g_a^2/Δ_a, such that g_a gives the interaction between a single atom and a single photon, and Δ_a denotes the atomic detuning. The final two terms arise due to two-body atomic interactions of strength g=(2ħω_ρ a_Na)/R, which can be scaled such that g̃= g/(4πħ). Interestingly, from the Bogoliubov analysis, the actual side mode frequencies can be written as ω_j^' = √(ω_j^2 + 4 ω_jg̃ N) <cit.>. However, for the rest of our analysis, we impose ω_c, d≫ 4 g̃ N and hence use ω_c, d^'∼ω_c, d. On the other hand, the Hamiltonian for the optical cavity in the rotating frame of the driving laser is governed by H_ϕ=-ħΔ_oa^†a+ħω_ϕ/2(L_z^2+ϕ^2)+ħ g_ϕa^†aϕ-iħη(a-a^†) , where the first two terms describe the free energy of the detuned cavity mode, with Δ_o=ω_l-ω_o, and of the rotating mirror with resonance frequency ω_ϕ, respectively. Here L_z and ϕ represent the respective dimensionless angular momentum and angular displacement of RM about the cavity axis, and these conjugate variables satisfy [L_z,ϕ]=-i. The third term in Eq. (<ref>) governs the radiation torque on RM, giving rise to the optorotational coupling g_ϕ =cl/L√(ħ/Iω_ϕ) . The moment of inertia of the rotating mirror about the cavity axis is I=MR_m^2/2, where M (R_m) is the mass (radius) of the RM. Finally, the last term on the right-hand side of Eq. (<ref>) represents the cavity drive with amplitude η=√(P_inγ_o/(ħω_o)), where P_in is the input drive power and γ_o is the cavity linewidth. Using Eq. (<ref>), we derive the Heisenberg equations of motion and include damping and noise appropriately to obtain the following quantum Langevin equations ȧ-i[Δ̃-G(X_c+X_d)-g_ϕϕ]a =-γ_o/2a+η+√(γ_o)a_in , Ẍ_c+γ_mẊ_c+Ω_c^2X_c =-ω̃_cGa^†a-𝒜X_d +Ω_cϵ_c , Ẍ_d+γ_mẊ_d+Ω_d^2X_d =-ω̃_dGa^†a+𝒜X_c +Ω_dϵ_d , ϕ̈+γ_ϕϕ̇+ω_ϕ^2ϕ =-ω_ϕg_ϕa^†a+ω_ϕϵ_ϕ , where Δ̃=Δ_o-U_0N/2 is the effective cavity detuning and γ_m (γ_ϕ) is the damping of each condensate side mode (RM).
Further, we have written the quantities Ω_c,d^2=(ω_c,d+4g̃N)^2-4g̃^2N^2, ω̃_c,d=ω_c,d+2g̃N, 𝒜=2g̃N(ω_c-ω_d). In the above Heisenberg-Langevin equation, a_in(t) represents the vacuum Gaussian noise injected into the cavity, with zero average ⟨ a_in(t)⟩=0, and its fluctuations are delta-correlated as ⟨δ a_in(t)δ a_in^†(t^') ⟩ =δ(t-t^') . Additionally, ϵ_j (j=c,d) and ϵ_ϕ in the quantum Langevin equations describe the Brownian noise in the condensate side modes and the rotating mirror, respectively. Their average values are zero, and their fluctuations at the respective temperatures T_j and T_ϕ obey the following correlations ⟨ϵ_j(t)ϵ_j(t')⟩ =γ_m/Ω_j∫_-∞^+∞dω/2πe^-iω(t-t')ω[1+coth(ħω/2k_BT_j)] , ⟨ϵ_ϕ(t)ϵ_ϕ(t')⟩ =γ_ϕ/ω_ϕ∫_-∞^+∞dω/2πe^-iω(t-t')ω[1+coth(ħω/2k_BT_ϕ)] , where k_B is Boltzmann's constant. §.§ Steady State Following the linearization approach, each operator 𝒪(t) can be decomposed into its steady-state value 𝒪_s and a small fluctuation δ𝒪(t). The steady-state values of the operators are a_s =η/√(Δ^'^2+(γ_o/2)^2) , X_cs =-Ω̃_cGa_s^2 , X_ds =-Ω̃_dGa_s^2 , ϕ_s =-g_ϕ/ω_ϕa_s^2 , where the modified cavity detuning is given by Δ^'=Δ̃+Ω̃G^2a_s^2+g_ϕ^2/ω_ϕa_s^2 and Ω̃=Ω̃_c+Ω̃_d, such that Ω̃_c,d=(ω̃_c,dΩ_d,c^2∓ω̃_d,c𝒜)/(𝒜^2+Ω_c^2Ω_d^2). Here the phase of the cavity drive is chosen such that a_s is real. When the effective cavity detuning is larger than the critical value, Δ̃_cr=-√(3)γ_0/2, the above steady-state solution of |a_s|^2 manifests a bistable response with respect to the input drive field intensity, as depicted in Fig. <ref>(a). Additionally, Fig. <ref>(b) suggests that an input drive power P_in>P_cr leads to a bistable response in |a_s|^2, where the critical power is P_cr=ħω_0γ_0^2/[3√(3)(Ω̃G^2+g_ϕ^2/ω_ϕ)]. In the remainder of the paper, we choose the parameters to avoid the bistable regime and keep our system monostable. This can also be achieved by using electronic feedback <cit.>, which allows us to choose the modified detuning, Δ^', independently of the radiation pressure. §.§ Stability Analysis The fluctuation part of Eq. (<ref>) can be expressed as a set of linearized equations u̇(t) =F̃u(t)+v(t) , where the fluctuation vector is u(t)=(δ X_c(t), δ Y_c(t), δ X_d(t), δ Y_d(t), δ𝒬(t), δ𝒫(t), δϕ(t), δ L_z(t))^T and the input noise vector is v(t)=(0,ϵ_c(t),0, ϵ_d(t), √(γ_0)δ𝒬_in(t), √(γ_0)δ𝒫_in(t),0, ϵ_ϕ(t))^T. We have expressed the cavity field fluctuations in terms of their amplitude and phase quadratures as δ𝒬=(δ a+δ a^†)/√(2) and δ𝒫=-i(δ a-δ a^†)/√(2). The explicit form of the drift matrix F̃ is given by F̃ =[ 0 Ω_c 0 0 0 0 0 0; -Ω_c -γ_m -𝒜/Ω_c 0 -ω̃_cG_r/Ω_c 0 0 0; 0 0 0 Ω_d 0 0 0 0; 𝒜/Ω_d 0 -Ω_d -γ_m -ω̃_dG_r/Ω_d 0 0 0; 0 0 0 0 -γ_o/2 -Δ^' 0 0; -G_r 0 -G_r 0 Δ^' -γ_o/2 -g_ϕ^r 0; 0 0 0 0 0 0 0 ω_ϕ; 0 0 0 0 -g_ϕ^r 0 -ω_ϕ -γ_ϕ ] . The enhanced side mode-cavity coupling strength and the enhanced optorotational coupling are G_r=√(2)Ga_s and g_ϕ^r=√(2)g_ϕa_s, respectively. We derive the stability condition for the hybrid rotational cavity optomechanical system by invoking the Routh-Hurwitz criterion <cit.>, which requires that all eigenvalues of F̃ have negative real parts. From Fig. <ref>, it is evident that our system lies in the stable region when the single-photon optorotational coupling strength is weaker than the side mode-cavity coupling strength. Moreover, increasing g_ϕ results in a gradual decrease of the characteristic frequency bound of the RM. § PONDEROMOTIVE SQUEEZING In this section, we analyze the hybrid system to manipulate the quantum properties of the output light.
Our interest, in particular, is to reduce the quantum fluctuations in the optical quadratures well below the shot-noise level and to describe the influence of quantum circulation on the optical squeezing. §.§ Quadrature Noise Spectrum To describe the ponderomotive squeezing, we invoke the homodyne measurement technique and obtain the quadrature noise spectrum of the output optical field. The homodyne-detected signal can be expressed as <cit.> 𝒬^out_θ(ω) =𝒬_out(ω)cosθ+𝒫_out(ω)sinθ , where θ determines the measured field quadrature and is adjusted experimentally. The cavity relates the output and input field quadratures as 𝒬_out(ω)=√(γ_o)δ𝒬(ω)-𝒬_in(ω) and 𝒫_out(ω)=√(γ_o)δ𝒫(ω)-𝒫_in(ω). The output quadrature noise spectrum is then calculated as S(ω) =|ξ_1(ω)|^2+|ξ_2(ω)|^2+i[ξ^∗_1(ω)ξ_2(ω)-ξ^∗_2(ω)ξ_1(ω)] -2γ_mω[|ξ_3(ω)|^2/Ω_c+|ξ_4(ω)|^2/Ω_d][1-coth(ħω/2k_BT)] -2γ_ϕω/ω_ϕ|ξ_5(ω)|^2[1-coth(ħω/2k_BT_ϕ)] . A full derivation of the above spectrum and detailed expressions for the ξ_i's are given in Appendix <ref>. Note that the noise spectrum in Eq. (<ref>) is normalized in such a way that S(ω)=1 represents the shot-noise level <cit.>. In Fig. <ref>, we show the power spectral density (PSD) of the optical quadrature as a function of the response frequency for different homodyne measurement angles. Here the black dotted curve represents the shot-noise level. For θ=90^∘, the phase quadrature of the output optical field results in three Lorentzian peaks on top of the shot-noise background: at the characteristic frequencies Ω_d/2π∼ 595 Hz and Ω_c/2π∼ 717 Hz, corresponding to the side modes of the rotating BEC, and at the rotating mirror frequency ω_ϕ/2π∼ 653 Hz. It is evident that the fluctuations remain above the vacuum noise in this scenario. However, decreasing the measurement angle θ reduces the value of S(ω) well below the shot-noise level near the atomic side mode frequencies, as presented by the blue dot-dashed curve of Fig. <ref>(a). This is a clear signature of ponderomotive squeezing, which occurs due to the existence of correlations between the optical amplitude and phase quadratures. For instance, at a measurement angle θ=5^∘, the output optical noise is reduced by 84% below the shot noise within a bandwidth of 20 Hz around ω=Ω_c,Ω_d, and the amplitude quadratures become asymmetric like a Fano lineshape <cit.>. Such a line profile arises due to the interference effects generated by the optical quadratures and the resonant process produced by the atomic density modulation driven by the amplitude quadrature <cit.>. In Fig. <ref>(a), we set the effective cavity detuning equal to ω_ϕ. As a result, the correlation between the amplitude and phase quadratures of the input field produces a small squeezing at θ=0^∘ <cit.>, as shown in the inset of Fig. <ref>(a). As discussed above, the measurement angle during the homodyne detection plays a crucial role in reducing the spectral noise below the shot noise. Now, in Fig. <ref>(b), we display the PSD by tuning the measurement angle in the range [0,180^∘] while fixing the response frequencies around the mechanical side modes. For our parameters, the maximum noise reduction near the mechanical side mode frequencies occurs around θ=7^∘, and a maximum of 87% ponderomotive squeezing is obtained. §.§ Optimum Squeezing In the preceding section, we have described that in our hybrid model, ponderomotive squeezing can be generated by choosing an appropriate measurement angle. This happens only around the frequencies of the mechanical side modes.
However, the fluctuations in the quadrature spectrum at the frequency of the rotating mirror still remain well above the background noise floor. In order to reduce these fluctuations, we optimize the optical quadrature squeezing spectrum S_opt(ω) by choosing θ(ω) in such a way that d S(ω)/dθ=0 for all frequencies <cit.>. This gives θ_opt(ω)=1/2arctan[-B_2(ω)/B_1(ω)] , where the expressions for B_1(ω) and B_2(ω) are too involved to display here and are given in Appendix <ref>. The optimized squeezing spectrum is presented in Fig. <ref>. The key findings of the optimized squeezing spectrum are: (i) there is a significant reduction of the fluctuations below the shot-noise level in the vicinity of the side mode frequencies Ω_c,d; (ii) increasing the coupling strength between the atomic side modes and the cavity enhances the ponderomotive squeezing near the atomic side mode frequencies. Further, we observe a significant enhancement in the optical squeezing spectrum at the side mode frequencies Ω_c,d. To understand the reason, we plot the optimized homodyne angle as a function of the frequency, as depicted in Fig. <ref>(b). The black-solid, red-solid, and blue-solid curves suggest that the optimized homodyne angle becomes zero at the atomic side mode frequencies and at the mirror frequency. At these specific frequencies, the major contribution to the optimized optical squeezing spectrum originates from the first term of Eq. <ref> (|ξ_1(ω)|^2) only. The vanishingly small values of F̃_2 (Eq. <ref>) at Ω_c, Ω_d, and ω_ϕ drive the optimized squeezing to unity. So far we have described the ponderomotive squeezing in our hybrid system by tuning various parameters. However, such squeezing occurs only at the frequencies of the Bragg-scattered mechanical side modes, whereas the optical quadrature fluctuations at the frequency of the rotating mirror are reduced just to the level of the shot noise. In Fig. <ref>, we explore the influence of the persistent flow in the ring BEC to manipulate the noise reduction and to achieve ponderomotive squeezing at the resonance frequency of the rotating mirror. The blue solid curve of Fig. <ref> suggests that increasing the winding number of the BEC enhances its interaction with the OAM-carrying input field (l=15), yielding 40% ponderomotive squeezing. A weaker topological charge of the input field requires a relatively higher winding number of the rotating BEC to produce a similar amount of ponderomotive squeezing. Additionally, for lower L_p values, the atomic collisions dominate in giving rise to the optical mode squeezing. § ENTANGLEMENT In the preceding section, we have exploited the radiation pressure force to squeeze the quantum fluctuations of the output light field. The radiation pressure also plays a crucial role in cooling the rotational mirror to its quantum ground state and in producing entanglement. In particular, our hybrid system sets a stage where the interactions between the atomic side modes and the optical field, together with the radiation torque, play a pivotal role in obtaining bipartite entanglement between various degrees of freedom. §.§ Bipartite Entanglement To quantify the entanglement between various subsystems, we use the linearized dynamics of Eq. (<ref>) and the Gaussian character of the quantum noise to obtain the stationary Gaussian state, which can be fully characterized by an 8× 8 covariance matrix V, whose elements are written as V_ij=1/2⟨ u_i(∞)u_j(∞)+u_j(∞)u_i(∞)⟩.
Under the stable condition, the covariance matrix V satisfies the Lyapunov equation <cit.> F̃V+VF̃^T=-D, where the matrix of the stationary noise correlation functions is D=diag{0,γ_m(2n_c+1),0,γ_m(2n_d+1),γ_0/2,γ_0/2,0,γ_ϕ(2n_m+1)}. The mean thermal excitations of the BEC side modes and the mechanical excitation of the rotating mirror are n_i = (exp{ħΩ_i/k_B T}-1)^-1 (i=c,d) and n_m = (exp{ħω_ϕ/k_B T_ϕ}-1)^-1, respectively. Using the above formalism, we study the two-body entanglement in the hybrid system by evaluating the logarithmic negativity E, defined as <cit.> E =max[0,-ln 2η^-] , where η^-=2^-1/2[Σ(V)-[Σ(V)^2-4 det(V_sub)]^1/2]^1/2, with Σ(V)= det A + det B - 2 det C. Here, V_sub is a generic 4× 4 submatrix V_sub =[ A C; C^T B; ] , representing a particular bipartite system under consideration, and A, B, and C are 2× 2 blocks of the covariance matrix. Bipartite entanglement exists if E>0, i.e., when η^-<1/2. In Fig. <ref>(a), we study the influence of the cavity detuning on the bipartite entanglements in our hybrid system. E_am, E_ac, and E_ad denote the bipartite entanglements between cavity-mirror, cavity-atomic c mode, and cavity-atomic d mode, respectively. The black solid curve suggests that the optimal cavity-mirror entanglement occurs around Δ'≈-0.6ω_ϕ. However, the blue and red solid curves show the optimal values of E_ac and E_ad at Δ'≈-1.2ω_ϕ. The entanglement between the cavity and the atomic side modes persists up to a rotating-mirror bath temperature of T_ϕ≈13 mK, as presented in the inset of Fig. <ref>(a). In Fig. <ref>(b) and (c), we investigate the influence of the topological charge of the input optical drive and of the atomic rotation on the entanglement properties between the various bipartite subsystems, respectively. The prominent interaction strength between the optical and the acoustic modes produces a more significant entanglement response than the cavity-atomic side mode couplings. More interestingly, our study predicts diminishing entanglements in a specific region of the OAM of the input beam. Also, the black solid curve of Fig. <ref>(c) suggests that for a topological charge of the input beam l=243 <cit.>, the entanglement between the optical and the acoustic mode diminishes when the L_p values lie between 30 and 34. Moreover, the two Bragg-scattered atomic side modes produce distinct entanglement responses arising from the optimal cavity detuning condition Δ'=-1.2ω_ϕ=-Ω_c,d. Now, to explain the diminishing entanglements, we determine the energy of the rotating mirror, which is given by U =ħω_ϕ/2[⟨δϕ^2⟩ + ⟨δ L_z^2⟩]=ħω_ϕ/2(V_77+V_88) =ħω_ϕ(n_eff+1/2), where the steady-state mean phonon number (n_eff) is associated with the effective mirror temperature (T_eff) by the relation n_eff=(exp(ħω_ϕ/k_B T_eff)-1)^-1. In Fig. <ref> and in its inset, we present the effect of the OAM of the driving field and of the angular momentum of the atomic BEC on the steady-state phonon number, respectively. The solid black curve shows two distinct peaks in the effective phonon response stemming from the cooling inhibition at topological charges l≈226 and 227, and the red solid curve of the inset depicts the suppression of cooling when the winding number L_p lies between 30 and 34.
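For concreteness, the covariance-matrix pipeline above (Lyapunov equation, 4× 4 bipartite reduction, logarithmic negativity) can be sketched numerically as follows. The drift and diffusion matrices below are random stable placeholders standing in for F̃ and D, so the printed number only illustrates the computation, not a physical prediction; the mode ordering follows the fluctuation vector u(t), i.e., modes (c, d, cavity, mirror).

```python
# Minimal numerical sketch: steady-state covariance from the Lyapunov
# equation F V + V F^T = -D, then logarithmic negativity of a mode pair.

import numpy as np
from scipy.linalg import solve_continuous_lyapunov  # solves A X + X A^T = Q

rng = np.random.default_rng(0)
F = rng.normal(size=(8, 8))
F -= 10 * np.eye(8)            # shift the spectrum to make the drift stable
D = np.eye(8)                   # stand-in diffusion (noise) matrix
V = solve_continuous_lyapunov(F, -D)   # steady-state covariance matrix

def log_negativity(V, i, j):
    """E = max[0, -ln 2*eta_minus] for modes i, j (quadrature-ordered V)."""
    idx = [2 * i, 2 * i + 1, 2 * j, 2 * j + 1]
    Vs = V[np.ix_(idx, idx)]
    A, B, C = Vs[:2, :2], Vs[2:, 2:], Vs[:2, 2:]
    sigma = np.linalg.det(A) + np.linalg.det(B) - 2 * np.linalg.det(C)
    eta = np.sqrt((sigma - np.sqrt(sigma**2 - 4 * np.linalg.det(Vs))) / 2)
    return max(0.0, -np.log(2 * eta))

# Modes 2 and 3 correspond to the cavity and mirror in the ordering of u(t).
print(log_negativity(V, 2, 3))
```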
In the subsequent discussion, we demonstrate how the occurrence of dark modes suppresses the cooling mechanism, by introducing a center-of-mass coordinate (X_1cm, P_1cm) and a relative coordinate (X_1r, P_1r) as X_1cm=(G X_d+g_ϕϕ)/√(G^2+g_ϕ^2), P_1cm=(G Y_d+g_ϕ L_z)/√(G^2+g_ϕ^2), X_1r=(G ϕ-g_ϕ X_d)/√(G^2+g_ϕ^2), P_1r=(G L_z-g_ϕ Y_d)/√(G^2+g_ϕ^2). Neglecting the atom-atom interaction, we can further express the Hamiltonian of Eq. <ref> as H =-ħ(Δ_o-U_0N/2)a^† a - iħη(a-a^†)+ħω_c/2(X_c^2+Y_c^2)+ħ G X_c a^† a+ħ/2(ω_dG^2+ω_ϕ g_ϕ^2/G^2+g_ϕ^2)(X_1cm^2+P_1cm^2) +ħ√(G^2+g_ϕ^2)X_1cma^† a+ħ/2(ω_dg_ϕ^2+ω_ϕ G^2/G^2+g_ϕ^2)(X_1r^2+P_1r^2) +ħ/2G g_ϕ/G^2+g_ϕ^2(ω_ϕ-ω_d)(X_1cmX_1r+X_1rX_1cm+P_1cmP_1r +P_1rP_1cm), where the sixth and the last terms correspond to the interaction of the center-of-mass coordinates with the optical field and with the relative coordinate, respectively. The above analysis shows that when L_p=1 and l≈226, one of the atomic side mode frequencies, ω_d, matches ω_ϕ, and the relative coordinate decouples from the center-of-mass coordinate as well as from the optical mode. Nonetheless, it is straightforward to show the existence of another set of center-of-mass and relative coordinates, defined as X_2cm=(G X_c+g_ϕϕ)/√(G^2+g_ϕ^2), P_2cm=(G Y_c+g_ϕ L_z)/√(G^2+g_ϕ^2), X_2r=(G ϕ-g_ϕ X_c)/√(G^2+g_ϕ^2), P_2r=(G L_z-g_ϕ Y_c)/√(G^2+g_ϕ^2), such that when ω_c=ω_ϕ, the relative coordinate decouples from the center of mass and the cavity field. Hence, the radiation torque cooling is suppressed when the two atomic side modes become degenerate with the acoustic mode (ω_ϕ=ω_c,d), as the relative coordinate stays in its initial thermal state. Comparing Fig. <ref>(b) and Fig. <ref>, we can say that cooling the rotating mirror close to its quantum ground state (l < 225 and l > 230) helps to maintain quantum correlations. As a result, entanglement persists. However, between l=226 and 227, the phonon number attains a minimum value of 2.2, which signifies proximity to the quantum ground state. Even at these low phonon numbers, the quantum fluctuations are sufficiently strong to disrupt the coherence necessary to attain any entanglement. §.§ Tripartite Entanglement Lastly, we study the tripartite entanglements between the cavity, mirror, and atomic side modes by quantifying the minimum residual contangle as <cit.> ℛ_τ^min =min[ℛ_τ^a|mk,ℛ_τ^m|ak,ℛ_τ^k|am] , where ℛ_τ^i|jk=C_i|jk-C_i|j-C_i|k≥ 0 (i,j,k = cavity (a), mirror (m), atomic c or d mode) is the residual contangle written in terms of the contangle C_u|v of the subsystems u and v (see Ref. <cit.> for more details on calculating ℛ_τ^min). The black solid and the red solid curves of Fig. <ref>(a) suggest that the tripartite entanglements between the three constituents of the model system (cavity, mirror, and c- or d-modes) are optimized when the cavity detuning is Δ'=-1.9ω_ϕ. Moreover, the tripartite entanglement arises when the optical field interacts frequently with the matter waves, in proportion to the number of lattice maxima (2l): the likelihood of optical interaction increases as the number of lattice maxima increases. When such interactions become frequent, they can lead to the emergence of tripartite entanglement, as shown in Fig. <ref>(b). In Fig. <ref>(c), we present the tripartite entanglement between the cavity, mirror, and atomic side modes as a function of the winding number of the condensate atoms when the topological charge of the driving field is l=243.
The black solid curve shows the suppression of the three-body entanglement around L_p=30 due to the thermal fluctuations of the rotating mirror. The red solid curve depicts the tripartite entanglement between the cavity, mirror, and atomic d-mode; with the d-mode, the range of L_p over which the tripartite entanglement does not persist is wider. This very distinct three-body entanglement response can thus be used to distinguish clearly between the two atomic side modes. § CONCLUSION We have presented a unique hybrid opto-rotational system consisting of a ring BEC confined inside an LG cavity. The system offers a distinctive way to squeeze the quantum fluctuations of the output light-field quadratures below the shot noise. With experimentally feasible parameters and for a measurement angle of 7^∘, we achieved 87% ponderomotive squeezing around the frequencies of the Bragg-scattered side modes. Furthermore, two very distinct systems in our hybrid configuration couple to a common cavity mode; as a consequence, optical squeezing of about 40% occurs even at the rotating-mirror frequency and can be further manipulated by the persistent currents in the annularly trapped BEC. Our scheme also provides a versatile pathway for exploiting the atomic interactions and the radiation-pressure force to produce bipartite and tripartite entanglement between the different physical entities of the hybrid system. Interestingly, the quantum correlations survive under the cooling conditions of the rotating end mirror, and correspondingly there exists a parameter regime where the entanglement persists. We expect our results to find interesting applications in atomtronics <cit.>, optomechanics <cit.>, sensing <cit.> and quantum information processing <cit.>. § ACKNOWLEDGMENTS We thank R. Kanamoto for discussions. P. K. acknowledges financial support from the Max Planck Society. M.B. thanks the Air Force Office of Scientific Research (FA9550-23-1-0259) for support. § POWER SPECTRAL DENSITY To obtain the output quadrature noise spectrum, we use the input-output relation a_out(ω) =√(γ_o)δ a(ω)-δ a_in(ω). From Eq. (<ref>), we can write the quadrature relations 𝒬_out(ω) =√(γ_o)δ𝒬(ω)-𝒬_in(ω), 𝒫_out(ω) =√(γ_o)δ𝒫(ω)-𝒫_in(ω). By Fourier transforming Eq. (<ref>), we obtain the output optical quadratures 𝒬_out(ω) =[√(γ_o)F̃_2(ω)-1]𝒬_in(ω)+i√(γ_o)F̃_3(ω)𝒫_in(ω)+√(2γ_o)F̃_5(ω)ϵ_c(ω)+√(2γ_o)F̃_7(ω)ϵ_d(ω)+√(2γ_o)F̃_9(ω)ϵ_ϕ(ω), 𝒫_out(ω) =-i√(γ_o)[2F̃_1(ω)+F̃_3(ω)]𝒬_in(ω)+[√(γ_o)F̃_2(ω)-1]𝒫_in(ω)-i√(2γ_o)F̃_4(ω)ϵ_c(ω)-i√(2γ_o)F̃_6(ω)ϵ_d(ω) -i√(2γ_o)F̃_8(ω)ϵ_ϕ(ω), where the F̃_i(ω) are the complex-valued transfer functions F̃_1(ω) =i a_s^2√(γ_0)/D(ω)[G^2A_ϕ(ω)[ω̃_c(𝒜+Ã_d(ω))+ω̃_d(Ã_c(ω)-𝒜)]+ω_ϕ g_ϕ^2[𝒜^2+Ã_c(ω)Ã_d(ω)]], F̃_2(ω) =√(γ_0)/D(ω)[𝒜^2+Ã_c(ω)Ã_d(ω)]A_3(ω)A_ϕ(ω),  F̃_3(ω)=iΔ'√(γ_0)/D(ω)A_ϕ(ω)[𝒜^2+Ã_c(ω)Ã_d(ω)], F̃_4(ω) =-iGa_sΩ_c/D(ω)A_3(ω)A_ϕ(ω)[𝒜+Ã_d(ω)],  F̃_5(ω)=Δ'Ω_c G a_s/D(ω)A_ϕ(ω)[𝒜+Ã_d(ω)], F̃_6(ω) =-iGa_sΩ_d/D(ω)A_3(ω)A_ϕ(ω)[Ã_c(ω)-𝒜],  F̃_7(ω)=Δ'Ω_d G a_s/D(ω)A_ϕ(ω)[-𝒜+Ã_c(ω)], F̃_8(ω) =-ig_ϕ a_sω_ϕ/D(ω)A_3(ω)[𝒜^2+Ã_c(ω)Ã_d(ω)],  F̃_9(ω)=Δ'ω_ϕ g_ϕ a_s/D(ω)[𝒜^2+Ã_c(ω)Ã_d(ω)]. Further, using Eq. (<ref>) in Eq.
(<ref>), we obtain 𝒬_θ^out(ω) =ξ_1(ω)𝒬_in(ω)+ξ_2(ω)𝒫_in(ω) +ξ_3(ω)ϵ_c(ω) +ξ_4(ω)ϵ_d(ω)+ξ_5(ω)ϵ_ϕ(ω), where the ξ_i can be expressed as ξ_1(ω) =-i√(γ_o)[2F̃_1(ω)+F̃_3(ω)]sinθ +[√(γ_o)F̃_2(ω)-1]cosθ, ξ_2(ω) =[√(γ_o)F̃_2(ω)-1]sinθ+i√(γ_o)F̃_3(ω)cosθ, ξ_3(ω) =-i√(2γ_o)F̃_4(ω)sinθ+√(2γ_o)F̃_5(ω)cosθ, ξ_4(ω) =-i√(2γ_o)F̃_6(ω)sinθ+√(2γ_o)F̃_7(ω)cosθ, ξ_5(ω) =-i√(2γ_o)F̃_8(ω)sinθ+√(2γ_o)F̃_9(ω)cosθ. The output quadrature spectrum is obtained from the definition S(ω) =1/π∫_-∞^∞dω'⟨𝒬_θ^out(ω')𝒬_θ^out(ω)⟩. Using the correlation relations ⟨𝒬_in(ω')𝒬_in(ω)⟩ =⟨𝒫_in(ω')𝒫_in(ω)⟩=πδ(ω'+ω), ⟨𝒬_in(ω')𝒫_in(ω)⟩ =-⟨𝒫_in(ω')𝒬_in(ω)⟩=iπδ(ω'+ω), ⟨ϵ_c,d(ω')ϵ_c,d(ω)⟩ =2πγ_mω'/Ω_c,d[1+coth(ħω'/2k_BT)]δ(ω'+ω), ⟨ϵ_ϕ(ω')ϵ_ϕ(ω)⟩ =2πγ_ϕω'/ω_ϕ[1+coth(ħω'/2k_BT_ϕ)]δ(ω'+ω), we obtain the analytical expression of Eq. <ref>. § OPTIMIZED SQUEEZING The optimized angle in Eq. (<ref>) is expressed in terms of B_1(ω) and B_2(ω), given by B_1(ω) =|κ_1(ω)|^2-|κ_3(ω)|^2+ i(κ_1^*(ω)κ_2(ω)-κ_1(ω)κ_2^*(ω)-κ_2^*(ω)κ_3(ω)+κ_3^*(ω)κ_2(ω)) -2γ_mω/Ω_c[1-coth(ħω/2k_B T)](|κ_4(ω)|^2-|κ_5(ω)|^2) -2γ_mω/Ω_d[1-coth(ħω/2k_B T)](|κ_6(ω)|^2-|κ_7(ω)|^2) -2γ_ϕω/ω_ϕ[1-coth(ħω/2k_B T_ϕ)](|κ_8(ω)|^2-|κ_9(ω)|^2), B_2(ω)= κ_1^*(ω)κ_2(ω)+κ_2^*(ω)κ_1(ω)+κ_2^*(ω)κ_3(ω)+κ_2(ω)κ_3^*(ω)+i(κ_1^*(ω)κ_3(ω)-κ_3^*(ω)κ_1(ω)) -2γ_mω/Ω_c[1-coth(ħω/2k_B T)](κ_4^*(ω)κ_5(ω)+κ_4(ω)κ_5^*(ω)) -2γ_mω/Ω_d[1-coth(ħω/2k_B T)](κ_6^*(ω)κ_7(ω)+κ_6(ω)κ_7^*(ω)) -2γ_ϕω/ω_ϕ[1-coth(ħω/2k_BT_ϕ)](κ_9(ω)κ_8^*(ω)+κ_9^*(ω)κ_8(ω)), where the κ_i read κ_1(ω)=-i√(γ_o)[2F̃_1(ω)+F̃_3(ω)], κ_2(ω)=√(γ_o)F̃_2(ω)-1, κ_3(ω)=i√(γ_o)F̃_3(ω), κ_4(ω)=-i√(2γ_o)F̃_4(ω), κ_5(ω)=√(2γ_o)F̃_5(ω), κ_6(ω)=-i√(2γ_o)F̃_6(ω), κ_7(ω)=√(2γ_o)F̃_7(ω), κ_8(ω)=-i√(2γ_o)F̃_8(ω), κ_9(ω)=√(2γ_o)F̃_9(ω).
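In numerical practice, the optimal measurement angle can also be located directly by minimizing S(ω;θ) over θ, bypassing the closed-form B_1, B_2 expressions. The following is a minimal sketch (our own illustration, not code from the paper); S is assumed to be a user-supplied callable that assembles the spectrum from the ξ_i(ω) and the bath correlators above.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def optimal_homodyne_angle(S, omega):
    # Minimize theta -> S(theta, omega) on [0, pi); theta enters only through
    # cos(theta) and sin(theta) in the xi_i, so the period is pi.
    res = minimize_scalar(lambda th: S(th, omega),
                          bounds=(0.0, np.pi), method="bounded")
    return res.x, res.fun

# Squeezing in dB relative to shot noise (S_shot = 1) across a frequency grid:
# dB = [10.0*np.log10(optimal_homodyne_angle(S, w)[1]) for w in omega_grid]
```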
§ REFERENCES
[1] M. H. Anderson, J. R. Ensher, M. R. Matthews, C. E. Wieman, and E. A. Cornell, Observation of Bose-Einstein condensation in a dilute atomic vapor, Science 269, 198 (1995).
[2] K. B. Davis, M.-O. Mewes, M. R. Andrews, N. J. van Druten, D. S. Durfee, D. Kurn, and W. Ketterle, Bose-Einstein condensation in a gas of sodium atoms, Phys. Rev. Lett. 75, 3969 (1995).
[3] A. J. Leggett, Bose-Einstein condensation in the alkali gases: Some fundamental concepts, Rev. Mod. Phys. 73, 307 (2001).
[4] S. Vilchynskyy, A. Yakimenko, K. Isaieva, and A. Chumachenko, The nature of superfluidity and Bose-Einstein condensation: From liquid 4He to dilute ultracold atomic gases, Low Temp. Phys. 39, 724 (2013).
[5] T. Hashimoto et al., Bose-Einstein condensation superconductivity induced by disappearance of the nematic state, Sci. Adv. 6, eabb9052 (2020).
[6] A. L. Fetter, Rotating trapped Bose-Einstein condensates, Rev. Mod. Phys. 81, 647 (2009).
[7] E. J. Mueller, P. M. Goldbart, and Y. Lyanda-Geller, Multiply connected Bose-Einstein-condensed alkali-metal gases: Current-carrying states and their decay, Phys. Rev. A 57, R1505 (1998).
[8] A. Das, J. Sabbatini, and W. H. Zurek, Winding up superfluid in a torus via Bose-Einstein condensation, Sci. Rep. 2, 352 (2012).
[9] S. Beattie, S. Moulder, R. J. Fletcher, and Z. Hadzibabic, Persistent currents in spinor condensates, Phys. Rev. Lett. 110, 025301 (2013).
[10] Y. Guo, R. Dubessy, M. d. G. de Herve, A. Kumar, T. Badr, A. Perrin, L. Longchambon, and H. Perrin, Supersonic rotation of a superfluid: A long-lived dynamical ring, Phys. Rev. Lett. 124, 025301 (2020).
[11] C. Ryu, M. F. Andersen, P. Cladé, V. Natarajan, K. Helmerson, and W. D. Phillips, Observation of persistent flow of a Bose-Einstein condensate in a toroidal trap, Phys. Rev. Lett. 99, 260401 (2007).
[12] A. Ramanathan, K. C. Wright, S. R. Muniz, M. Zelan, W. T. Hill, C. J. Lobb, K. Helmerson, W. D. Phillips, and G. K. Campbell, Superflow in a toroidal Bose-Einstein condensate: An atom circuit with a tunable weak link, Phys. Rev. Lett. 106, 130401 (2011).
[13] G. E. Marti, R. Olf, and D. M. Stamper-Kurn, Collective excitation interferometry with a toroidal Bose-Einstein condensate, Phys. Rev. A 91, 013602 (2015).
[14] C. Ryu, P. W. Blackburn, A. A. Blinova, and M. G. Boshier, Experimental realization of Josephson junctions for an atom SQUID, Phys. Rev. Lett. 111, 205301 (2013).
[15] S. Pandey, H. Mas, G. Vasilakis, and W. von Klitzing, Atomtronic matter-wave lensing, Phys. Rev. Lett. 126, 170402 (2021).
[16] L. Amico et al., Roadmap on atomtronics: State of the art and perspective, AVS Quantum Sci. 3, 039201 (2021).
[17] R. Kanamoto, L. D. Carr, and M. Ueda, Topological winding and unwinding in metastable Bose-Einstein condensates, Phys. Rev. Lett. 100, 060401 (2008).
[18] L. Corman, L. Chomaz, T. Bienaimé, R. Desbuquois, C. Weitenberg, S. Nascimbène, J. Dalibard, and J. Beugnon, Quench-induced supercurrents in an annular Bose gas, Phys. Rev. Lett. 113, 135302 (2014).
[19] S. Eckel, J. G. Lee, F. Jendrzejewski, N. Murray, C. W. Clark, C. J. Lobb, W. D. Phillips, M. Edwards, and G. K. Campbell, Hysteresis in a quantized superfluid 'atomtronic' circuit, Nature 506, 200 (2014).
[20] Y.-H. Wang, A. Kumar, F. Jendrzejewski, R. M. Wilson, M. Edwards, S. Eckel, G. K. Campbell, and C. W. Clark, Resonant wavepackets and shock waves in an atomtronic SQUID, New J. Phys. 17, 125012 (2015).
[21] K. C. Wright, R. B. Blakestad, C. J. Lobb, W. D. Phillips, and G. K. Campbell, Driving phase slips in a superfluid atom circuit with a rotating weak link, Phys. Rev. Lett. 110, 025302 (2013).
[22] S. Moulder, S. Beattie, R. P. Smith, N. Tammuz, and Z. Hadzibabic, Quantized supercurrent decay in an annular Bose-Einstein condensate, Phys. Rev. A 86, 013629 (2012).
[23] P. Öhberg and E. M. Wright, Quantum time crystals and interacting gauge theories in atomic Bose-Einstein condensates, Phys. Rev. Lett. 123, 250402 (2019).
[24] J. J. Cooper, D. W. Hallwood, and J. A. Dunningham, Entanglement-enhanced atomic gyroscope, Phys. Rev. A 81, 043624 (2010).
[25] S. Eckel, A. Kumar, T. Jacobson, I. B. Spielman, and G. K. Campbell, A rapidly expanding Bose-Einstein condensate: An expanding universe in the lab, Phys. Rev. X 8, 021021 (2018).
[26] A. Kumar, N. Anderson, W. D. Phillips, S. Eckel, G. K. Campbell, and S. Stringari, Minimally destructive, Doppler measurement of a quantized flow in a ring-shaped Bose-Einstein condensate, New J. Phys. 18, 025001 (2016).
[27] D. V. Freilich, D. M. Bianchi, A. M. Kaufman, T. K. Langin, and D. S. Hall, Real-time dynamics of single vortex lines and vortex dipoles in a Bose-Einstein condensate, Science 329, 1182 (2010).
[28] P. Kumar, T. Biswas, K. Feliz, R. Kanamoto, M.-S. Chang, A. K. Jha, and M. Bhattacharya, Cavity optomechanical sensing and manipulation of an atomic persistent current, Phys. Rev. Lett. 127, 113601 (2021).
[29] N. Pradhan, P. Kumar, R. Kanamoto, T. N. Dey, M. Bhattacharya, and P. K. Mishra, Cavity optomechanical detection of persistent currents and solitons in a bosonic ring condensate, Phys. Rev. Res. 6, 013104 (2024).
[30] N. Pradhan, P. Kumar, R. Kanamoto, T. N. Dey, M. Bhattacharya, and P. K. Mishra, Ring Bose-Einstein condensate in a cavity: Chirality detection and rotation sensing, Phys. Rev. A 109, 023524 (2024).
[31] M. Aspelmeyer, T. J. Kippenberg, and F. Marquardt, Cavity optomechanics, Rev. Mod. Phys. 86, 1391 (2014).
[32] C. Genes, D. Vitali, P. Tombesi, S. Gigan, and M. Aspelmeyer, Ground-state cooling of a micromechanical oscillator: Comparing cold damping and cavity-assisted cooling schemes, Phys. Rev. A 77, 033804 (2008).
[33] F. Marquardt, J. P. Chen, A. A. Clerk, and S. M. Girvin, Quantum theory of cavity-assisted sideband cooling of mechanical motion, Phys. Rev. Lett. 99, 093902 (2007).
[34] I. Wilson-Rae, N. Nooshi, W. Zwerger, and T. J. Kippenberg, Theory of ground state cooling of a mechanical oscillator using dynamical backaction, Phys. Rev. Lett. 99, 093901 (2007).
[35] S. Mancini, V. Giovannetti, D. Vitali, and P. Tombesi, Entangling macroscopic oscillators exploiting radiation pressure, Phys. Rev. Lett. 88, 120401 (2002).
[36] D. Vitali, S. Mancini, and P. Tombesi, Stationary entanglement between two movable mirrors in a classically driven Fabry-Perot cavity, J. Phys. A: Math. Theor. 40, 8055 (2007).
[37] M. Paternostro, D. Vitali, S. Gigan, M. S. Kim, C. Brukner, J. Eisert, and M. Aspelmeyer, Creating and probing multipartite macroscopic entanglement with light, Phys. Rev. Lett. 99, 250401 (2007).
[38] C. Genes, D. Vitali, and P. Tombesi, Simultaneous cooling and entanglement of mechanical modes of a micromirror in an optical cavity, New J. Phys. 10, 095009 (2008).
[39] C. Fabre, M. Pinard, S. Bourzeix, A. Heidmann, E. Giacobino, and S. Reynaud, Quantum-noise reduction using a cavity with a movable mirror, Phys. Rev. A 49, 1337 (1994).
[40] S. Mancini and P. Tombesi, Quantum noise reduction by radiation pressure, Phys. Rev. A 49, 4055 (1994).
[41] K. Qu and G. S. Agarwal, Strong squeezing via phonon mediated spontaneous generation of photon pairs, New J. Phys. 16, 113004 (2014).
[42] J. Aasi et al., Enhanced sensitivity of the LIGO gravitational wave detector by using squeezed states of light, Nat. Photonics 7, 613 (2013).
[43] D. Ganapathy et al. (LIGO O4 Detector Collaboration), Broadband quantum enhancement of the LIGO detectors with frequency-dependent squeezing, Phys. Rev. X 13, 041021 (2023).
[44] C. Peuntinger, B. Heim, C. R. Müller, C. Gabriel, C. Marquardt, and G. Leuchs, Distribution of squeezed states through an atmospheric channel, Phys. Rev. Lett. 113, 060502 (2014).
[45] C.-W. Lee, J. H. Lee, and H. Seok, Squeezed-light-driven force detection with an optomechanical cavity in a Mach-Zehnder interferometer, Sci. Rep. 10, 17496 (2020).
[46] B. J. Lawrie, P. D. Lett, A. M. Marino, and R. C. Pooser, Quantum sensing with squeezed light, ACS Photonics 6, 1307 (2019).
[47] H. Shi and M. Bhattacharya, Optomechanics based on angular momentum exchange between light and matter, J. Phys. B: At. Mol. Opt. Phys. 49, 153001 (2016).
[48] M. Bhattacharya, Rotational cavity optomechanics, J. Opt. Soc. Am. B 32, B55 (2015).
[49] M. Bhattacharya and P. Meystre, Using a Laguerre-Gaussian beam to trap and cool the rotational motion of a mirror, Phys. Rev. Lett. 99, 153603 (2007).
[50] M. Bhattacharya, P.-L. Giscard, and P. Meystre, Entanglement of a Laguerre-Gaussian cavity mode with a rotating mirror, Phys. Rev. A 77, 013827 (2008).
[51] M. Bhattacharya, P.-L. Giscard, and P. Meystre, Entangling the rovibrational modes of a macroscopic mirror using radiation pressure, Phys. Rev. A 77, 030303 (2008).
[52] B. Rogers, N. L. Gullo, G. D. Chiara, G. M. Palma, and M. Paternostro, Hybrid optomechanics for quantum technologies, Quantum Meas. Quantum Metrol. 2 (2014).
[53] O. Černotík, C. Genes, and A. Dantan, Interference effects in hybrid cavity optomechanics, Quantum Sci. Technol. 4, 024002 (2019).
[54] S. Barzanjeh, A. Xuereb, S. Gröblacher, M. Paternostro, C. A. Regal, and E. M. Weig, Optomechanics for quantum technologies, Nat. Phys. 18, 15 (2022).
[55] C. A. Regal, J. D. Teufel, and K. W. Lehnert, Measuring nanomechanical motion with a microwave cavity interferometer, Nat. Phys. 4, 555 (2008).
[56] A. K. Chauhan and A. Biswas, Atom-assisted quadrature squeezing of a mechanical oscillator inside a dispersive cavity, Phys. Rev. A 94, 023831 (2016).
[57] L. P. Neukirch, E. von Haartman, J. M. Rosenholm, and A. Nick Vamivakas, Multi-dimensional single-spin nano-optomechanics with a levitated nanodiamond, Nat. Photonics 9, 653 (2015).
[58] O. Morizot, Y. Colombe, V. Lorent, H. Perrin, and B. M. Garraway, Ring trap for ultracold atoms, Phys. Rev. A 74, 023617 (2006).
[59] R. Kanamoto, H. Saito, and M. Ueda, Quantum phase transition in one-dimensional Bose-Einstein condensates with attractive interactions, Phys. Rev. A 67, 013608 (2003).
[60] A. Schliesser, P. Del'Haye, N. Nooshi, K. J. Vahala, and T. J. Kippenberg, Radiation pressure cooling of a micromechanical oscillator using dynamical backaction, Phys. Rev. Lett. 97, 243905 (2006).
[61] O. Arcizet, P.-F. Cohadon, T. Briant, M. Pinard, and A. Heidmann, Radiation-pressure cooling and optomechanical instability of a micromirror, Nature 444, 71 (2006).
[62] E. X. DeJesus and C. Kaufman, Routh-Hurwitz criterion in the examination of eigenvalues of a system of nonlinear ordinary differential equations, Phys. Rev. A 35, 5288 (1987).
[63] S. Mancini and P. Tombesi, Quantum noise reduction by radiation pressure, Phys. Rev. A 49, 4055 (1994).
[64] X. Xu and J. M. Taylor, Squeezing in a coupled two-mode optomechanical system for force sensing below the standard quantum limit, Phys. Rev. A 90, 043848 (2014).
[65] C. Fabre, M. Pinard, S. Bourzeix, A. Heidmann, E. Giacobino, and S. Reynaud, Quantum-noise reduction using a cavity with a movable mirror, Phys. Rev. A 49, 1337 (1994).
[66] A. Militaru, M. Rossi, F. Tebbenjohanns, O. Romero-Isart, M. Frimmer, and L. Novotny, Ponderomotive squeezing of light by a levitated nanoparticle in free space, Phys. Rev. Lett. 129, 053602 (2022).
[67] L. Magrini, V. A. Camarena-Chávez, C. Bach, A. Johnson, and M. Aspelmeyer, Squeezed light from a levitated nanoparticle at room temperature, Phys. Rev. Lett. 129, 053601 (2022).
[68] R. Ghobadi, A. R. Bahrampour, and C. Simon, Quantum optomechanics in the bistable regime, Phys. Rev. A 84, 033846 (2011).
[69] J. Zhou, J. Tang, Y. Yin, Y. Xia, and J. Yin, Fundamental probing limit on the high-order orbital angular momentum of light, Opt. Express 32, 5339 (2024).
[70] A. M. Dezfouli, D. Abramović, M. Rakić, and H. Skenderović, Detection of the orbital angular momentum state of light using sinusoidally shaped phase grating, Appl. Phys. Lett. 120, 191106 (2022).
[71] W. Ni, R. Liu, C. Yang, Y. Tian, J. Hou, P. P. Shum, and S. Chen, Annular phase grating-assisted recording of an ultrahigh-order optical orbital angular momentum, Opt. Express 30, 37526 (2022).
[72] J. Pinnell, V. Rodríguez-Fajardo, and A. Forbes, Probing the limits of orbital angular momentum generation and detection with spatial light modulators, J. Opt. 23, 015602 (2020).
[73] C. He, Y. Shen, and A. Forbes, Towards higher-dimensional structured light, Light Sci. Appl. 11, 205 (2022).
[74] G. Adesso and F. Illuminati, Entanglement in continuous-variable systems: Recent advances and current perspectives, J. Phys. A: Math. Theor. 40, 7821 (2007).
[75] G. Adesso and F. Illuminati, Continuous variable tangle, monogamy inequality, and entanglement sharing in Gaussian states of continuous variable systems, New J. Phys. 8, 15 (2006).
[76] J. Li, S.-Y. Zhu, and G. S. Agarwal, Magnon-photon-phonon entanglement in cavity magnomechanics, Phys. Rev. Lett. 121, 203601 (2018).
[77] T. Kippenberg and K. Vahala, Cavity opto-mechanics, Opt. Express 15, 17172 (2007).
[78] B.-B. Li, L. Ou, Y. Lei, and Y.-C. Liu, Cavity optomechanical sensing, Nanophotonics 10, 2799 (2021).
[79] K. Stannigel, P. Komar, S. J. M. Habraken, S. D. Bennett, M. D. Lukin, P. Zoller, and P. Rabl, Optomechanical quantum information processing with photons and phonons, Phys. Rev. Lett. 109, 013603 (2012).
http://arxiv.org/abs/2407.02364v2
20240702152929
On the stochastic selection of integral curves of a rough vector field
[ "Jules Pitcho" ]
math.AP
[ "math.AP", "math.CA", "math.PR", "34A99, 35C99, 60G99" ]
On the stochastic selection of integral curves of a rough vector field Jules Pitcho July 8, 2024 ============================================================================= § ABSTRACT We prove that for bounded, divergence-free vector fields b∈ L^1_loc((0,1];BV(𝕋^d;ℝ^d)), there exists a unique incompressible measure on integral curves of b. We recall the vector field constructed by Depauw in <cit.>, which lies in the above class, and prove that for this vector field, the unique incompressible measure on integral curves exhibits stochasticity. § INTRODUCTION Consider a bounded, divergence-free vector field b:[0,1]×𝕋^d→ℝ^d. A general principle due to Ambrosio (see Theorem <ref>) guarantees the existence of a measure concentrated on integral curves of b whose 1-marginal at every time is the Lebesgue measure on 𝕋^d. Is there a robust selection criterion amongst all such measures? When b lies in L^1((0,1);BV(𝕋^d;ℝ^d)), Ambrosio <cit.>, following on DiPerna and Lions <cit.>, proved that there exists a unique such measure, thereby answering the above question affirmatively. In this work we give an affirmative answer to the above question for a class of bounded, divergence-free vector fields to which the work of Ambrosio does not apply. Assume that b∈ L^1_loc((0,1];BV(𝕋^d;ℝ^d)). Then there exists a unique incompressible measure concentrated on integral curves of b. Furthermore, there is a vector field in this class for which this unique incompressible measure is supported on several distinct integral curves of b for almost every starting point. Let us introduce the main objects of study. All vector fields b:[0,1]×𝕋^d→ℝ^d will be Borel, essentially bounded and divergence-free, meaning that div_x b=0 in the sense of distributions on [0,1]×𝕋^d. Let us define integral curves of b. We shall say that a curve γ:[0,1]→𝕋^d is an integral curve of b if it is an absolutely continuous solution to the ODE γ̇(t)=b(t,γ(t)), which means explicitly that for every s,t∈[0,1] γ(t)-γ(s)=∫_s^t b(τ,γ(τ))dτ and ∫_0^1|b(τ,γ(τ))|dτ<+∞. We shall further say that γ is an integral curve starting from x at time s if γ(s)=x, and we shall write γ_s,x. Let Γ denote the metric space C([0,1];𝕋^d) of continuous paths endowed with the uniform metric, and let ℳ denote the corresponding Borel σ-algebra. We recall that a selection of integral curves {γ_s,x:x∈𝕋^d} is measurable if the map 𝕋^d∋ x⟼γ_s,x∈Γ is Borel. Ambrosio proposed the following definition in <cit.>. A measurable selection {γ_s,x:x∈𝕋^d} of integral curves of b is said to be a regular measurable selection if there exists C>0 such that |∫_𝕋^dϕ(γ_s,x(t))dx|≤ C ∫_𝕋^d|ϕ(y)|dy ∀ϕ∈ C_c(𝕋^d), ∀ t∈[0,1]. When b belongs to L^1((0,1);BV(𝕋^d;ℝ^d)) and satisfies ∫_0^1 ‖[div_x b(s,·)]_-‖_L^∞_x ds<+∞, Ambrosio proved the existence of a regular measurable selection and the following essential uniqueness result: any two regular measurable selections {γ^1_s,x} and {γ^2_s,x}[From now on, when we omit to specify the indexing set for a family of paths or a family of measures, it will implicitly be understood that the family is indexed by 𝕋^d.] of integral curves of b must coincide up to a set of vanishing Lebesgue measure. Moreover, if b is divergence-free, essential uniqueness holds amongst measure-preserving measurable selections, that is, those which satisfy ∫_𝕋^dϕ(γ_s,x(t))dx=∫_𝕋^dϕ(y)dy ∀ϕ∈ C_c(𝕋^d), ∀ t∈[0,1]. Recently, Pappalettera constructed in <cit.> a divergence-free vector field for which there does not exist a measure-preserving selection of characteristics.
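Before setting up the rigorous framework, the following toy computation (our own illustration, not from this paper; the field is scalar and neither divergence-free nor in the class above) recalls why a selection problem arises at all for non-Lipschitz fields: for b(x)=√(|x|), a continuum of integral curves emanates from the origin, one for each waiting time t_0.

```python
import numpy as np

def gamma(t, t0=0.0):
    # Integral curve of b(x) = sqrt(|x|) through x = 0: rest until time t0,
    # then follow x(t) = ((t - t0)/2)**2, which satisfies dx/dt = sqrt(x).
    return np.where(t <= t0, 0.0, ((t - t0)/2.0)**2)

t = np.linspace(0.0, 1.0, 5)
print(gamma(t, t0=0.0))  # one solution with gamma(0) = 0
print(gamma(t, t0=0.5))  # another solution with the same initial datum
```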
The vector field of Pappalettera does not belong to L^1((0,1);BV(𝕋^d;ℝ^d)). This, however, does not exclude that for ℒ^d-a.e. x∈𝕋^d, a probability measure concentrated on integral curves of b starting from x at time 0 can be uniquely selected by some appropriate criterion. The present work investigates such a selection criterion for divergence-free vector fields in L_loc^1((0,1];BV(𝕋^d;ℝ^d)). For this class of vector fields, a selection criterion for solutions of the continuity equation using regularisation by convolution was already proved by the author in <cit.>. We here give a Lagrangian counterpart to this previous result. We begin by defining the measures on integral curves we shall study. §.§ Lagrangian representations All measures are Radon measures. e_t:Γ→𝕋^d is the evaluation map e_t(γ):=γ(t). ρ:[0,1]×𝕋^d→ℝ^+ will always be assumed to lie in C([0,1];w^*-L^∞(𝕋^d)), which we may do without loss of generality when the vector field ρ(1,b) solves the PDE div_t,x(ρ(1,b))=0 in the sense of distributions on [0,1]×𝕋^d. A proof of this fact is given in <cit.>. Let us now define the main object under consideration in this paper. We shall say that a bounded, positive measure η on Γ is a Lagrangian representation of the vector field ρ(1,b) if the following conditions hold: * η is concentrated on the set of integral curves of b, which explicitly means that for every s,t∈[0,1] ∫_Γ|γ(t)-γ(s)-∫_s^t b(τ,γ(τ))dτ|η(dγ)=0; * for every t∈[0,1], we have ρ(t,·)ℒ^d=(e_t)_#η. We now record the following general existence theorem for Lagrangian representations. In the context of this paragraph, and assuming that ρ(1,b) solves (<ref>), there exists a Lagrangian representation η of ρ(1,b). The above theorem is proved (see for instance <cit.>) by a regularisation and compactness argument, where the hypothesis that ρ is non-negative plays an essential role. Let us now introduce the set of Lipschitz paths with constant L>0: Γ_L:={γ∈Γ : |γ(s)-γ(t)|≤ L|s-t| ∀ s,t∈ [0,1]}. Γ_L is a compact, separable metric space with the metric induced from Γ. We now have the following lemma. Any Lagrangian representation of (1,b) is concentrated on Γ_‖b‖_L^∞_t,x. Let η be a Lagrangian representation of (1,b). Let D be a countable dense subset of [0,1]. Let ϕ∈ C_c^∞((0,1)× (0,1)^d) be a standard mollifier. Define ϕ^ε(t,x):=ε^-(d+1)ϕ(t/ε,x/ε), and denote b^ε:=b∗ϕ^ε. Let s,t∈ D. Notice that for every ε>0, we have ∫_Γ|γ(t)-γ(s)-∫_s^t b^ε(τ,γ(τ))dτ|η(dγ) ≤∫_Γ|γ(t)-γ(s)|η(dγ)+∫_Γ∫_s^t|b^ε(τ,γ(τ))|dτ η(dγ) ≤∫_Γ∫_s^t|b(τ,γ(τ))|dτ η(dγ)+∫_Γ∫_s^t|b^ε(τ,γ(τ))|dτ η(dγ) =∫_s^t∫_𝕋^d |b(τ,x)|dxdτ + ∫_s^t∫_𝕋^d |b^ε(τ,x)|dxdτ ≤ 2|t-s| ‖b‖_L^∞_t,x, where in the second-to-last line we have used Fubini as well as (e_τ)_#η=ℒ^d, and in the last line we have used ‖b^ε‖_L^∞_t,x≤‖b‖_L^∞_t,x and Hölder's inequality. Therefore, by the dominated convergence theorem, it holds that ∫_Γ|γ(t)-γ(s)-lim_ε↓0∫_s^t b^ε(τ,γ(τ))dτ|η(dγ)=0, which implies that there exists a set N_s,t⊂Γ of vanishing η-measure such that for every γ∈Γ∖ N_s,t, we have γ(t)-γ(s)=lim_ε↓0∫_s^t b^ε(τ,γ(τ))dτ. Since ‖b^ε‖_L^∞_t,x≤‖b‖_L^∞_t,x, it therefore holds that for every γ∈Γ∖ N_s,t, |γ(t)-γ(s)|≤ |t-s|‖b‖_L^∞_t,x. Define N:=⋃_s,t∈ D N_s,t, which is a set of vanishing η-measure since D is countable. As s,t were arbitrary in D, and by density of D in [0,1], we therefore have that for every γ∈Γ∖ N, |γ(t)-γ(s)|≤ |t-s|‖b‖_L^∞_t,x ∀ s,t∈[0,1]. This proves the thesis. Now, for every Borel set A in Γ and every s,t∈ D, we have ∫_A|∫_s^t b(τ,γ(τ))dτ|η(dγ) ≤∫_A∫_s^t|b(τ,γ(τ))|dτ η(dγ) =∫_s^t∫_A |b(τ,γ(τ))| η(dγ)dτ =∫_s^t∫_e_τ(A)|b(τ,x)|dxdτ ≤‖b‖_L^∞_t,x∫_s^t∫_e_τ(A) dxdτ =‖b‖_L^∞_t,x η(A)|t-s|,
where the last line follows from Hölder's inequality, and the second-to-last line follows since (e_τ)_#η=ℒ^d. By a standard measure-theoretic argument as above, there exists a set M of vanishing η-measure such that for every γ∈Γ∖ M, we have |∫_s^t b(τ,γ(τ))dτ|≤ |t-s|‖b‖_L^∞_t,x ∀ s,t∈[0,1]. Together with (<ref>), we have that for every γ∈Γ∖(M∪ N), |γ(t)-γ(s)|≤ |t-s|‖b‖_L^∞_t,x ∀ s,t∈ [0,1]. As M∪ N is of vanishing η-measure, this proves that η is concentrated on Lipschitz curves with constant ‖b‖_L^∞_t,x, namely η is concentrated on Γ_‖b‖_L^∞_t,x. Next we present our main tool: the disintegration of a measure with respect to a Borel map and a target measure. In the study of weak solutions of the continuity equation, disintegration has previously been used in <cit.> by Alberti, Bianchini and Crippa to establish the optimal uniqueness result for the continuity equation along a bounded, divergence-free, autonomous vector field in the two-dimensional setting. In <cit.>, Bianchini and Bonicatto also used disintegration to prove a uniqueness result for nearly incompressible vector fields in L^1_tBV_x. In view of Lemma <ref>, we will identify a Lagrangian representation of (1,b) with its restriction to the Borel σ-algebra of the compact set Γ_‖b‖_L^∞_t,x, and thanks to Remark <ref>, we will be able to perform a disintegration of this measure. In this work, we shall be interested in the selection of Lagrangian representations. In particular, we shall investigate how regularisations which preserve the divergence-free structure of a vector field can serve as a mechanism for the selection of Lagrangian representations. This leads to the following definition. Consider a bounded, divergence-free vector field b:[0,+∞)×𝕋^d→ℝ^d. We shall say that a Lagrangian representation η of (1,b) is selected by the class of divergence-free regularisations of b if, for every regularisation (b^k)_k∈ℕ of b such that div_x b^k=0, the sequence (η^k)_k∈ℕ of unique Lagrangian representations of (1,b^k) converges narrowly to η as k→+∞. Consider a divergence-free vector field b:[0,+∞)×𝕋^d→ℝ^d. We shall say that a Borel map X_s:[0,+∞)×𝕋^d→𝕋^d is an s-regular Lagrangian flow of b if the following holds: * [0,+∞)∋ t↦ X_s(t,x)∈𝕋^d is an absolutely continuous solution of (<ref>); * X_s(t,·)_#ℒ^d=ℒ^d. Consider a divergence-free vector field b:[0,+∞)×𝕋^d→ℝ^d and assume that b∈ L^1_loc([0,+∞);BV(𝕋^d;ℝ^d)). Then there exists a unique regular Lagrangian flow X_0 along b. Namely, for any two Borel maps X_0^1 and X_0^2 from [0,+∞)×𝕋^d to 𝕋^d which satisfy (i) and (ii), it holds that X_0^1(·,x)=X_0^2(·,x) for ℒ^d-a.e. x∈𝕋^d. Consider a divergence-free vector field b:[0,+∞)×𝕋^d→ℝ^d and s>0, and assume that b∈ L^1_loc([0,+∞);BV(𝕋^d;ℝ^d)). Then there exists a unique s-regular Lagrangian flow along b. Namely, for any two Borel maps X_s^1 and X_s^2 from [0,+∞)×𝕋^d to 𝕋^d which satisfy (i) and (ii), it holds that X_s^1(·,x)=X_s^2(·,x) for ℒ^d-a.e. x∈𝕋^d. Existence: Let X_0:[0,+∞)×𝕋^d→𝕋^d be a representative of the 0-regular Lagrangian flow along b. STEP 1. As X_0(s,·)_#ℒ^d=ℒ^d, it follows that for every s≥0 there exist sets A_0,s, A_s⊂𝕋^d of full Lebesgue measure such that X_0(s,·):A_0,s→ A_s is bijective. We define X_s(·,x):=X_0(·,X_0(s,·)^-1(x)) for x∈ A_s. Consider a divergence-free vector field b:[0,+∞)×𝕋^d→ℝ^d and assume that b∈ L^1_loc([0,+∞);BV(𝕋^d;ℝ^d)). Then there exists a unique Lagrangian representation η of (1,b), and for every s>0, we have η=∫_𝕋^dδ_X_s(·,x)dx, where X_s is the unique s-regular Lagrangian flow along b. Existence of a Lagrangian representation of (1,b) follows from Ambrosio's superposition principle.
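The next subsection collects the disintegration machinery used throughout. As a finite sanity check (our own illustration, not part of the paper), the disintegration of a measure m on a finite set X with respect to a map f and ν=f_#m is simply the family of normalized restrictions of m to the level sets of f:

```python
from collections import defaultdict

m = {"x1": 0.2, "x2": 0.3, "x3": 0.5}   # a measure on X
f = {"x1": "a", "x2": "a", "x3": "b"}   # a map f : X -> Y

nu = defaultdict(float)                 # nu = f_# m
for x, w in m.items():
    nu[f[x]] += w

# m_y: probability measure supported on the level set E_y = f^{-1}(y)
m_y = {y: {x: w/nu[y] for x, w in m.items() if f[x] == y} for y in nu}

# property (ii): m(A) = sum_y m_y(A) nu(y), checked on A = {"x1", "x3"}
A = {"x1", "x3"}
assert abs(sum(m[x] for x in A)
           - sum(nu[y]*sum(w for x, w in m_y[y].items() if x in A)
                 for y in nu)) < 1e-12
```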
§.§ Disintegration of measures Let X and Y be compact, separable metric spaces, m a measure on X, f:X→ Y a Borel map, and ν a measure on Y such that f_#m≪ν. Then there exists a Borel family {m_y : y∈ Y} of measures on X such that * m_y is supported on the level set E_y:=f^-1(y) for every y∈ Y; * the measure m can be decomposed as m=∫_Y m_y dν(y), which means that m(A)=∫_Y m_y(A)dν(y) for every Borel set A contained in X. If we further assume that m and ν are positive measures, and that f_#m=ν, then there exists a Borel family {m_y : y∈ Y} of probability measures on X satisfying (i) and (ii). Any family satisfying (i) and (ii) is called a disintegration of m with respect to f and ν. The disintegration is essentially unique in the following sense: for any other disintegration {m̃_y:y∈ Y} there holds m̃_y=m_y for ν-a.e. y∈ Y. The above facts are cited from <cit.> and are proven in Dellacherie and Meyer <cit.>. We now give a useful fact. Let g:X→ X be a continuous map such that (f∘ g)_#m≪ν, and such that: (P) for every subset A⊂ X, we have g(A)⊂ g^-1(A). The following is true. In the context of this paragraph, if {m_y : y∈ Y} is a disintegration of m with respect to f and ν, then {g_#m_y : y∈ Y} is a disintegration of g_#m with respect to f∘ g and ν. Let y∈ Y. As g is continuous, we have supp(g_#m_y)=g(supp m_y) by <cit.>. We know that supp m_y is contained in f^-1(y). So with (P), this yields supp(g_#m_y)=g(supp m_y)⊂ g(f^-1(y))⊂ g^-1(f^-1(y)). So g_#m_y is supported on g^-1(f^-1(y))=(f∘ g)^-1(y), and since y was arbitrary, this proves (i). Let A be a Borel set in X. Then, as g^-1(A) is a Borel set in X, it follows that g_#m(A)=m(g^-1(A))=∫_Y m_y (g^-1(A))dν(y)=∫_Y g_#m_y (A)dν(y), which gives (ii). We also have the following property of the disintegration, which we will use in this paper: ∫_X ϕ dm=∫_Y[∫_E_yϕ dm_y]dν(y), for every Borel function ϕ:X→ [0,+∞]. §.§ Uniqueness of regular measurable selection Ambrosio <cit.> proved the existence and essential uniqueness of regular measurable selections in the bounded variation setting, thereby extending the work of DiPerna and Lions <cit.>. We recall that ρ:[0,1]×𝕋^d→ℝ^+ is assumed to be in C([0,1];w^*-L^∞(𝕋^d)). The following can be extracted from Ambrosio <cit.>. Assume that b belongs to L^1((0,1);BV(𝕋^d;ℝ^d)) and that ρ(1,b) solves (<ref>). Then there exists a unique Lagrangian representation η of ρ(1,b), which further has the following property: for every s∈[0,1], there exists a regular measurable selection {γ_s,x} of integral curves of b such that η=∫_𝕋^dδ_γ_s,xρ(s,x)dx. The above theorem implies that if two bounded vector fields ρ(1,b) and ρ̃(1,b) solve (<ref>) and satisfy ρ(s,x)=ρ̃(s,x) for ℒ^d-a.e. x∈𝕋^d, then ρ=ρ̃; this is the uniqueness result of Ambrosio for the Cauchy problem for the continuity equation with a vector field in L^1_tBV_x. The essential uniqueness of regular measurable selections can then be deduced. We record it in the following remark. Under the hypothesis of Theorem <ref>, if {γ^1_s,x} and {γ^2_s,x} are two regular measurable selections of integral curves of b, then for ℒ^d-a.e. x∈𝕋^d, we have γ^1_s,x=γ^2_s,x. Indeed, let ρ̅∈ L^∞(𝕋^d) with ρ̅≥ 0, and define the measures η^i=∫_𝕋^dδ_γ^i_s,xρ̅(x)dx for i=1,2. Then consider the densities ρ^1,ρ^2:[0,1]×𝕋^d→ℝ^+, which lie in C([0,1];w^*-L^∞(𝕋^d)) and are given by ρ^i(t,·)ℒ^d=(e_t)_#η^i for i=1,2. The vector fields ρ^i(1,b) both solve (<ref>). Therefore, by Theorem <ref>, there is a regular measurable selection {γ_s,x} with ∫_𝕋^dδ_γ^1_s,xρ̅(x)dx=∫_𝕋^dδ_γ_s,xρ̅(x)dx=∫_𝕋^dδ_γ^2_s,xρ̅(x)dx. As ρ̅ was an arbitrary bounded, nonnegative function, and as the σ-algebra ℳ of Γ is countably generated, we have for ℒ^d-a.e.
x∈𝕋^d, δ_γ^1_s,x=δ_γ_s,x=δ_γ^2_s,x, which implies that for ℒ^d-a.e. x∈𝕋^d, γ^1_s,x=γ_s,x=γ^2_s,x. Given τ>0, we define the truncated version b^τ of b by b^τ(t,x):= b(t,x) if t≥τ, and b^τ(t,x):=0 if t<τ. Under the assumption that the bounded variation norm of b is not integrable at time zero, the following essential uniqueness of regular measurable selections of integral curves still holds. Let s∈(0,1]. Assume that b belongs to L_loc^1((0,1];BV(𝕋^d;ℝ^d)), and consider two regular measurable selections {γ^1_s,x} and {γ^2_s,x} of integral curves of b starting from s. Then γ^1_s,x=γ^2_s,x for ℒ^d-a.e. x∈𝕋^d. Let k∈ℕ. It can be verified directly that the two measurable selections {γ^1_s,x(1/k∨·)} and {γ^2_s,x(1/k∨·)} are regular measurable selections of integral curves of the vector field b^1/k defined in (<ref>). Note that this vector field belongs to L^1((0,1);BV(𝕋^d;ℝ^d)), so in view of Remark <ref>, we have γ^1_s,x(1/k∨·)=γ^2_s,x(1/k∨·) for every x∈𝕋^d∖ N_k, where N_k is a set of vanishing Lebesgue measure. Define N:=⋃_k∈ℕ N_k, which is of vanishing Lebesgue measure. Then, for every k∈ℕ, we have γ^1_s,x(1/k∨·)=γ^2_s,x(1/k∨·) for every x∈𝕋^d∖ N, which implies γ^1_s,x=γ^2_s,x for every x∈𝕋^d∖ N by continuity. The thesis follows. §.§ Statement of results In this paper, the vector field b will satisfy the hypothesis of Proposition <ref>. Accordingly, we fix for the rest of the paper a (essentially unique) regular measurable selection {γ_1,y} of integral curves of b starting from time 1. b_DP denotes the bounded, divergence-free vector field in L_loc^1((0,1];BV(𝕋^2;ℝ^2)) constructed by Depauw in <cit.>. For completeness we give a construction of b_DP in the Appendix. We now state our main theorem. Consider a bounded, divergence-free vector field b:[0,1]×𝕋^d→ℝ^d and assume that b∈ L_loc^1((0,1];BV(𝕋^d;ℝ^d)). Then there exists a unique Lagrangian representation η of (1,b), which furthermore has the following properties: * the family of probability measures {δ_γ_1,y} is a disintegration of η with respect to e_1 and ℒ^d; * for every sequence (b^k)_k∈ℕ such that b^k→ b in L^1_loc, ‖b^k‖_L^∞_t,x≤‖b‖_L^∞_t,x, and div_x b^k=0, the unique Lagrangian representation η^k of (1,b^k) converges narrowly to η as k→+∞; * there exists a Borel family of probability measures {ϑ_x} on 𝕋^d such that any disintegration {η_0,x} of η with respect to e_0 and ℒ^d satisfies η_0,x=∫_𝕋^dδ_γ_1,yϑ_x(dy) for ℒ^d-a.e. x∈𝕋^d. Moreover, for the vector field b_DP, the measure ϑ_x is not a Dirac mass for ℒ^2-a.e. x∈𝕋^2. Observe that a sequence satisfying the hypothesis of (ii) can be generated by regularising b by convolution; a numerical sketch of this mechanism closes this section. We finally note that the class of vector fields under study in this article has been previously investigated in <cit.> and that stochastic selection has been investigated for a toy model in <cit.>. §.§ Plan of the paper In Section <ref>, we prove that there exists a unique Lagrangian representation of (1,b) under the hypothesis of Theorem <ref>, as well as (i) and (ii) of Theorem <ref>. In Section <ref>, we prove (iii) of Theorem <ref>. In the Appendix, we give for completeness a construction of b_DP, the vector field constructed by Depauw in <cit.>. §.§ Acknowledgements The author is thankful to his advisor Nikolay Tzvetkov for his support. The author is thankful to Stefano Bianchini for enlightening discussions on measure theory and on the disintegration of a measure. The author acknowledges the hospitality of the Pitcho Centre for Scientific Studies where this work was done.
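As announced, here is a concrete illustration of the regularisation mechanism in part (ii) of the theorem (our own sketch for an autonomous field sampled on a periodic grid; it is not the Depauw construction itself, and all names below are placeholders): smoothing componentwise with a periodic Gaussian kernel preserves the divergence-free constraint, since convolution commutes with derivatives, and the flows of the smoothed fields can then be compared as the smoothing scale decreases.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.integrate import solve_ivp

def mollify(b_grid, sigma):
    # b_grid has shape (2, n, n): a velocity field on the 2-torus.
    # Componentwise periodic Gaussian smoothing preserves div b = 0.
    return np.stack([gaussian_filter(c, sigma=sigma, mode="wrap")
                     for c in b_grid])

def endpoint(b_eps, x0, T=1.0):
    # Flow map X^eps(T, x0) on the unit torus, with nearest-cell lookup
    # of the field (a careful test would interpolate).
    n = b_eps.shape[1]
    def rhs(t, x):
        i, j = np.floor(x*n).astype(int) % n
        return b_eps[:, i, j]
    return solve_ivp(rhs, (0.0, T), x0, max_step=1.0/n).y[:, -1] % 1.0

# For b as in part (ii), the endpoints endpoint(mollify(b_grid, sigma), x0)
# are expected to stabilise for a.e. x0 as sigma -> 0: this is the
# Lagrangian counterpart of the narrow convergence of eta^k to eta.
```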
§ UNIQUENESS OF THE LAGRANGIAN REPRESENTATION In this section b:[0,1]×𝕋^d→ℝ^d is an essentially bounded, divergence-free Borel vector field satisfying the assumptions of Theorem <ref>, namely b∈ L^1_loc((0,1];BV(𝕋^d;ℝ^d)). §.§ Backwards stopping of the Lagrangian representation Let η be a Lagrangian representation of (1,b), which exists by Theorem <ref>, and let {η_1,y} be a disintegration of η with respect to e_1 and ℒ^d. Let {γ_1,y} be a (essentially unique) regular measurable selection of integral curves of b. Our goal is now to prove that for ℒ^d-a.e. y∈𝕋^d, we have η_1,y=δ_γ_1,y, which will imply by definition of the disintegration that η=∫_𝕋^dδ_γ_1,ydy, from which the uniqueness of the Lagrangian representation of (1,b), as well as part (i) of Theorem <ref>, will follow. Given two positive real numbers a and b, we will write a∨ b=max{a,b}. Let τ>0 and consider the backward stopping map S^τ:Γ∋γ(·)⟼γ(τ∨·)∈Γ. Note that S^τ clearly satisfies (P) of Section <ref> with X=Γ_‖b‖_L^∞_t,x and g=S^τ. Therefore, by Lemma <ref>, {(S^τ)_#η_1,y} is a disintegration of (S^τ)_#η with respect to e_1∘ S^τ and ℒ^d. For simplicity, we will write η^τ:=(S^τ)_#η and η^τ_1,y:=(S^τ)_#η_1,y. Recall that we have defined in (<ref>) the truncated version b^τ of b. We then have the following lemma. For every τ>0, the family {δ_γ_1,y(τ∨·)} is a disintegration of η^τ with respect to e_1 and ℒ^d. As b^τ belongs to L^1((0,1);BV(𝕋^d;ℝ^d)), is bounded, and is divergence-free, there is an essentially unique regular measurable selection {γ^τ_1,y} of integral curves of b^τ thanks to Remark <ref>. Now observe that {γ_1,y(τ∨·)} is a regular measurable selection of integral curves of b^τ, hence for ℒ^d-a.e. y∈𝕋^d we have γ^τ_1,y(·)=γ_1,y(τ∨·). It can be checked directly that η^τ is a Lagrangian representation of (1,b^τ). So by Theorem <ref>, {δ_γ^τ_1,y} is a disintegration of η^τ with respect to e_1 and ℒ^d, so that η^τ=∫_𝕋^dδ_γ^τ_1,ydy. Therefore, by the essential uniqueness of the disintegration, {δ_γ_1,y(τ∨·)} is a disintegration of η^τ with respect to e_1 and ℒ^d, so that η^τ=∫_𝕋^dδ_γ_1,y(τ∨·)dy. We also have the following simple fact. Let μ be a probability measure on Γ. Then (S^τ)_#μ converges narrowly to μ as τ↓0. Let Φ∈ C_b(Γ). For every γ∈Γ, we have lim_τ↓0 S^τγ=γ, whence lim_τ↓0Φ(S^τγ)=Φ(γ) by continuity of Φ. Also, we clearly have ∫_Γ|Φ(S^τγ)|μ(dγ)≤‖Φ‖_C^0 ∀τ>0. So, by dominated convergence, it holds that lim_τ↓0∫_ΓΦ(γ)(S^τ)_#μ(dγ)=lim_τ↓0∫_ΓΦ(S^τγ)μ(dγ)=∫_ΓΦ(γ)μ(dγ). Since Φ was arbitrary in C_b(Γ), the thesis follows. §.§ Proof of (i) of Theorem <ref> Recall that η is a Lagrangian representation of (1,b) and that {η_1,y} is a disintegration of η with respect to e_1 and ℒ^d. Recall also that {γ_1,y} is a (essentially unique) regular measurable selection of integral curves of b. Let us prove (<ref>), which will yield both the uniqueness of the Lagrangian representation of (1,b) and (i) of Theorem <ref>, namely it will show η=∫_𝕋^dδ_γ_1,ydy. By separability of C_c(Γ), there exists a countable dense subset 𝒩 of C_c(Γ). Let τ>0. By Lemma <ref>, {η^τ_1,y} is a disintegration of η^τ with respect to e_1 and ℒ^d. By Lemma <ref> and the essential uniqueness of the disintegration, we have δ_γ_1,y(τ∨·)=η^τ_1,y for ℒ^d-a.e. y∈𝕋^d. Let Φ∈𝒩 and let B be a Borel set in 𝕋^d. We then have ∫_B ∫_ΓΦ(γ)δ_γ_1,y(dγ)dy =∫_Blim_τ↓0∫_ΓΦ(γ)δ_γ_1,y(τ∨·)(dγ)dy =∫_Blim_τ↓0∫_ΓΦ(γ)η^τ_1,y(dγ)dy =∫_B∫_ΓΦ(γ)η_1,y(dγ)dy, where in the first equality we have used that δ_γ_1,y(τ∨·) converges narrowly to δ_γ_1,y as τ↓0 by Lemma <ref>.
In the second to last equality, we have used that _γ_1,y(τ∨·)=_1,y^τ for d-a.e. y∈^d, which follows from Lemma <ref>. In the last equality, we have used Lemma <ref>. As B was an arbitrary Borel set of ^d, we have that there exists a set N_Φ of vanishing Lebesgue measure such that for every y∈^d-N_Φ, we have ∫_ΓΦ(γ)_γ_1,y(dγ)=∫_ΓΦ(γ)_1,y(dγ). Now, let N:=⋃_Φ∈𝒩N_Φ. It is a set of vanishing Lebesgue measure as 𝒩 is countable, and for every y∈^d-N, we have ∫_ΓΦ(γ)_γ_1,y(dγ)=∫_ΓΦ(γ)_1,y(dγ) ∀Φ∈𝒩. By density of 𝒩 in C_c(Γ), for every y∈^d-N, we have ∫_ΓΦ(γ)_γ_1,y(dγ)=∫_ΓΦ(γ)_1,y(dγ) ∀Φ∈ C_c(Γ). This proves that for d-a.e. y∈^d, we have _1,y=_γ_1,y, which proves (<ref>). As was an arbitrary Lagrangian representation of (1,), this proves both that there exists a unique Lagrangian representation of (1,), and (i) of Theorem <ref>. We will now prove that the unique Lagrangian representation of (1,) can be obtained as the unique limit of Lagrangian representations of suitable regularisations of (1,). §.§ Proof of (ii) of Theorem <ref> Let (^k)_k∈ be a sequence such that ^k→ in L^1_loc such that, for every k∈, we have sup_(t,x)∈ [0,1]×^d |^k(t,x)|≤_L^∞_t,x, and such that ÷_x^k=0. Let ^k:[0,1]×^d→^d be the unique flow along ^k, namely ^k solves {∂_t ^k(t,x) =^k(^k(t,x)), ^k(0,x) =x. . The measure defined by ^k(A)=∫_^d_^k(·,x)(A)dx, for every Borel set A in Γ is then the unique Lagrangian representation of (1,^k), as we clearly have (e_t)_#^k=^k(t,·)_#d=d, since ^k is divergence-free, and also that ^k is clearly concentrated on integral curves of ^k. Step 1. Compactness. Recall the definition of the space Γ_L of Lipchitz paths with Lipschitz constant L>0 given in (<ref>). In view of Lemma <ref>, we have that ^k(Γ__L^∞_t,x)=1 for every k∈. As Γ__L^∞_t,x is compact by Remark <ref>, it follows by Prokhorov theorem, that there exists an increasing map ξ :→ such that ^ξ(k) converges narrowly to some probability measure on Γ__L^∞_t,x as k→+∞. For every L>0 and define Γ_L:={γ∈Γ : |γ(s)-γ(t)|≤ L|s-t| ∀ s,t∈ [0,1]}, which is the set of paths with Lipschitz constant less or equal to L. It is compact in Γ by Ascoli's Theorem. For every s,t∈ [0,T] and every k∈, we then have ∫_Γ|γ(t)-γ(s)|^k(dγ) = ∫_^d|^k(t,x)-^k(s,x)|dx, =∫_^d∫_s^t|^k(u,^k(u,x))|dudx, ≤ |t-s|_L^∞_t,x. Therefore, by the Markov inequality for every k∈, we have ^k((Γ_L)^c)≤_L^∞_t,x/L, which proves tightness. By Prokhorov's Theorem, there is an increasing map ξ:→ such that ^ξ(k) converges narrowly to some as k→+∞. Step 2. Let us prove that is a Lagrangian representation of (1,). Let ϕ∈ C(^d) and t∈ [0,1]. We have that Γ∋γ⟼ϕ(e_t(γ)) is in C_b(Γ). Therefore, ∫_Γϕ(e_t(γ))(dγ)=lim_k→+∞∫_Γϕ(e_t(γ))^ξ(k)(dγ)=∫_^dϕ(x)dx. As ϕ and t were arbitrary, this implies that (e_t)_#=d for every t∈[0,1]. We still need to prove that is concentrated on integral curves of . Let s,t∈[0,1]. We have to check that ∫_Γ|γ(t)-γ(s)-∫_s^t(τ,γ(τ))dτ|(dγ)=0. We know that ∫_Γ|γ(t)-γ(s)-∫_s^t(τ,γ(τ))dτ|^ξ(k)(dγ)=0, however we cannot pass into the limit k→ +∞ in the above equation because the functional Γ∋γ⟼|γ(t)-γ(s)-∫_s^t(τ,γ(τ))dτ|, need not be continuous since is not continuous. To circumvent this problem, let >0 and let :[0,1]×^d→^d be a continuous vector field such that ∫_s^t|(τ,x)-(τ,x)|dτ dx<. 
We then have ∫_Γ|γ(t)-γ(s)-∫_s^t(τ,γ(τ))dτ|(dγ) 1≤∫_Γ|γ(t)-γ(s)-∫_s^t(τ,γ(τ))dτ|(dγ)+∫_Γ|∫_s^t(τ,γ(τ))-(τ,γ(τ))|(dγ) 2≤lim sup_k→ +∞∫_Γ|γ(t)-γ(s)-∫_s^t(τ,γ(τ))dτ|^k(dγ) +∫_Γ|∫_s^t(τ,γ(τ))-(τ,γ(τ))|(dγ) 3=lim sup_k→ +∞∫_Γ|∫_s^t(^k(τ,γ(τ))-(τ,γ(τ)))dτ|^k(dγ) +∫_Γ|∫_s^t((τ,γ(τ))-(τ,γ(τ)))dτ|(dγ) 4≤lim sup_k→ +∞∫_Γ∫_s^t|^k(τ,γ(τ))-(τ,γ(τ))|dτ^k(dγ) +∫_Γ∫_s^t|(τ,γ(τ))-(τ,γ(τ))|dτ(dγ) 5=lim sup_k→ +∞∫_Γ∫_s^t|^k(τ,x)-(τ,x)|dτ dx +∫_Γ∫_s^t|(τ,x)-(τ,x)|dτ dx 6≤lim sup_k→ +∞∫_^d∫_s^t|^k(τ,x)-(τ,x)|dτ dx +2∫_^d∫_s^t|(τ,x)-(τ,x)|dτ dx 7=2∫_^d∫_s^t|(τ,x)-(τ,x)|dτ dx<2. 1 follows by a triangular inequality, 2 follows because the functional Γ∋γ⟼|γ(t)-γ(s)-∫_s^t(τ,γ(τ))dτ| is continuous, 3 follows because ^k is concentrated on integral curves of ^k, 4 follows by bringing the absolute value inside the integral, 5 follows because (e_τ)_#^k=d=(e_τ)_#, 6 follows by a triangular inequality, and 7 follows since ^k→ in L^1_loc. As was arbitrary, (<ref>) follows. Therefore is a Lagrangian representation of (1,). By the first part of Theorem <ref> we have already proved, we know that there exists a unique Lagrangian representation of (1,). Therefore, the whole sequence ^k converges narrowly to as k→+∞. This proves the thesis. § STOCHASTICITY We will now prove part (iii) of Theorem <ref>. Throughout this section, is the unique Lagrangian representation of (1,) from the first part of Theorem <ref>. Recall that we have fixed a regular measurable selection {γ_1,y} of integral curves of starting from 1 and that =∫_^d_γ_1,ydy. We also define the measure =(e_0,e_1)_#. For every x∈^d, we define the family of measures on Γ _x,γ_1,y:={_γ_1,y if γ_1,y(0)=x, 0 if γ_1,y(0)≠ x. . Define the projection maps π_0 :^d×^d∋ (x,y)⟼ x∈^d, and π_1 :^d×^d∋ (x,y)⟼ y∈^d. Let {_x} be a disintegration of with respect to π_0 and d. §.§ Disintegration of with respect to We will now give an expression for disintegrations of with respect to e_0 and d in terms of {_x}. Throughout this section {_0,x} is a disintegration of with respect to e_0 and d. We then define _x :=(π_1)_#_x for every x∈^d. We also define the probability measure _y:=_(γ_1,y(0),y), on ^d×^d for every y∈^d. The family {_y} is a disintegration of with respect to π_1 and d. It is clear that _y is supported on π_1^-1(y). By part (i) of Theorem <ref>, we know also that {_γ_1,y} is a disintegration of with respect to e_1 and d. Therefore, for every Borel set A in ^d×^d, we have (A)=(e_0,e_1)_#(A)= ∫_^d (e_0,e_1)_#_γ_1,y (A)dy=∫_^d_(γ_1,y(0),y)(A)dy=∫_^d_y(A)dy, which proves the thesis. The family {_x,γ_1,y: x,y∈^d} is a disintegration of with respect to (e_0,e_1) and . It is clear that _x,γ_1,y is supported on (e_0,e_1)^-1 (x,y). For every Borel set A contained in Γ, we have ∫_^d×^d_x,γ_1,y(A) (dx,dy) =∫_^d∫_^d×{y}_x,γ_1,y(A) d_ydy =∫_^d_γ_1,y(A)dy =(A), where in the first equality we have used Lemma <ref>, as well as (<ref>). In the second equality we have used the definition (<ref>) of _y and the definition (<ref>) of _x,γ_1,y, and in the last equality we have used that {_γ_1,y} is a disintegration of with respect to e_1 and d, which follows from (i) of Theorem <ref>, which we have already proved. This proves the claim. For d-a.e. x∈^d, we have _0,x=∫_^d_γ_1,yd_x. Let B be a Borel set contained in ^d. As Γ is separable, its Borel -algebra is generated by a countable family 𝒢. Let A∈𝒢. We then have ∫_B _0,x(A)dx 1=∫_B _0,x(A∩{γ(0)∈ B})dx 2=(A∩{γ(0)∈ B}) 3=∫_B∫_^d_x,γ_1,y(A∩{γ(0)∈ B})(dx,dy) 4=∫_B[ ∫_{x}×^d_x,γ_1,y(A) d_x]dx 5=∫_B[∫_^d_γ_1,y(A)_x(dy)]dx. 
In equality 1, we have used that _0,x is supported on {γ(0)=x} for every x∈^d. In equality 2, we have used that {_0,x} is a disintegration of with respect to e_0 and d. In equality 3, we have used Lemma <ref>. In equality 4, we have used that {_x} is a disintegration of with respect to π_0 and d, equation (<ref>), as well as the fact that _x,γ_1,y(A∩{γ(0)∈ B})=0 if x∉ B by definition. In equality 5, we have used the definition of _x,γ_1,y as well as the definition of {_x}. As B was an arbitrary Borel set in ^d, there exists a set N_A of vanishing Lebesgue measure such that for every x∈^d-N_A, we have _0,x(A)=∫_^d_γ_1,y(A)_x(dy). Now define N:=⋃_A∈𝒢N_A, which is a set of vanishing Lebesgue measure. Then, for every A∈𝒢 and every x∈^d- N, we have _0,x(A)=∫_^d_γ_1,y(A)_x(dy). As 𝒢 generates the Borel -algebra of Γ, the thesis is proved. §.§ Proof of (iii) of Theorem <ref> We can now conclude the proof of Theorem <ref>. Recall that is the unique Lagrangian representation of (1,) from the first part of Theorem <ref>, and that we have fixed a regular measurable selection {γ_1,y} of integral curves of . In view of Lemma <ref>, there exists a Borel family of probability measures {_x} defined in Section <ref> such that _0,x=∫_^d_γ_1,y_x(dy), for d-a.e. x∈^d, which is the first part of the statement of part (iii) of Theorem <ref>. Let us now show that for the vector field _DP:[0,1]×^2→^2 constructed by Depauw, the family of probability measures _x are not Dirac masses for 2-a.e. x∈^2. From the Appendix, we have that ρ^B, ρ^W:[0,1]×^2→^+ are two bounded densities in C([0,1];w^*-L^∞(^2)) such that: * ρ^B(1,_DP) and ρ^W(1,_DP) solve (<ref>); * ρ^B(0,·)=1/2=ρ^W(0,·); * ρ^B(t,·)+ρ^W(t,·)=1 for every t∈[0,1]; * ρ^B(1,·)∪ρ^W(1,·)=^2; * ρ^B(1,·)∩ρ^W(1,·) is of vanishing Lebesgue measure. Let ^B and ^W be two Lagrangian representations of ρ^B(1,_DP) and ρ^W(1,_DP) respectively, whose existence follows from Ambrosio's superposition principle. Let ^B and ^W be two probability measures given by ^B=(e_0,e_1)_#^B and ^W=(e_0,e_1)_#^W. Let {_0,x^B} be a disintegration of ^B with respect to e_0 and 2, and let {_x^B} be a disintegration of ^B with respect to π_0 and 2. Similarly, let {_0,x^W} be a disintegration of ^W with respect to e_0 and 2, and let {_x^W} be a disintegration of ^W with respect to π_0 and 2. Note that by definition of ^B and ^W, we have (π_1)_#^B=(e_1)_#^B and (π_1)_#^W=(e_1)_#^W. This clearly implies that for 2-a.e. x∈^2, we have (π_1)_#^B_x=(e_1)_#_0,x^B and (π_1)_#^W_x=(e_1)_#_0,x^W. Therefore, we have ∫_^2(π_1)_#^B_x(ρ^W(1,·))dx =∫_^2(e_1)_#^B_x(ρ^W(1,·))dx =(e_1)_#^B(ρ^W(1,·)) =∫_ρ^W(1,·)ρ^B(1,x)dx =0. Similarly, we have ∫_^2(π_1)_#^W_x(ρ^B(1,·))dx=0. Therefore, for 2-a.e. x∈^2, in view of property (iv) above, the probability measures (π_1)_#^W_x and (π_1)_#^B_x are mutually singular. Notice that ν^B and ν^W are both absolutely continuous with respect to 2⊗2 with density given by ν^B=ρ^B(1,y)/22⊗2(dx⊗ dy) and ν^W=ρ^W(1,y)/22⊗2(dx⊗ dy) respectively. Therefore, a disintegration of ν^B with respect to π_0 and d is given by ν^B_x=ρ^B(1,y)/2 _x⊗2(dx⊗ dy), and ν^W_x=ρ^W(1,y)/2 _x⊗2(dx⊗ dy). This implies that ν̃^B_x:=(π_1)_#ν^B_x=ρ^B(1,y)/22( dy), and ν̃^W_x:=(π_1)_#ν^W_x=ρ^W(1,y)/22( dy). Therefore, ν^B(ν^W)=0=ν^W(ν^B). Note also that ν^B=^d×ρ^B(1,·) and ν^W=^d×ρ^W(1,·). Note also that 1/2(^B+^W) is the Lagrangian representation of (1,_DP), whence ν =(e_0,e_1)_#=1/2(e_0,e_1)_#(^B+^W)=1/2(ν^B+ν^W).
Let {ν_x^B} be a disintegration of ν^B with respect to π_0 and d, and let {ν_x^W} be a disintegration of ν^W with respect to π_0 and d. We then have ∫_^dν_x^W(ν^W)dx=ν^W(ν^W)=1=ν^B(ν^B)=∫_^dν_x^B(ν^B)dx. So for a.e. x∈^d, we have ν_x^W(ν^W)=1, and ν_x^B(ν^B)=1. On the other hand, we have ∫_^dν_x^B(ν^W)dx=ν^B(ν^W)=0=ν^W(ν^B)=∫_^dν_x^W(ν^B)dx. Therefore, for a.e. x∈^d, we have ν_x^B(ν^W)=0=ν_x^W(ν^B). Also, by property (iii) above, and by uniqueness of the Lagrangian representation of (1,), we have that =1/2(^B+^W). By essential uniqueness of the disintegration, we therefore have for 2-a.e. x∈^2 _x=1/2(^W_x+^B_x). Therefore, for 2-a.e. x∈^2, we have _x=(π_1)_#_x=1/2(((π_1)_#^W_x+(π_1)_#^B_x). whereby for 2-a.e. x∈^2 the probability measure _x is not a Dirac mass. This concludes the proof of (iii) of Theorem <ref>. § APPENDIX We construct the bounded, divergence-free vector field _DP:[0,1]×^2→^2 of Depauw from <cit.>, as well as two densities ρ^W,ρ^B:[0,1]×^2→^+ such that the vector fields ρ^W(1,_DP) and ρ^B(1,_DP) solve (<ref>), and have the following properties: * ρ^B(1,_DP) and ρ^W(1,_DP) solve (<ref>); * ρ^B(0,·)=1/2=ρ^W(0,·); * ρ^B(t,·)+ρ^W(t,·)=1 for every t∈[0,1]; * ρ^B(1,·)∪ρ^W(1,·)=^2; * ρ^B(1,·)∩ρ^W(1,·) is of vanishing Lebesgue measure. We follow closely the construction of a similar vector field given in <cit.>. Introduce the following two lattices on ℝ^2, namely ℒ^1 := ℤ^2⊂ℝ^2 and ℒ^2:=ℤ^2 + (1/2, 1/2)⊂ℝ^2. To each lattice, associate a subdivision of the plane into squares, which have vertices lying in the corresponding lattices, which we denote by 𝒮^1 and 𝒮^2. Then consider the rescaled lattices ℒ^1_k:= 2^-kℤ^2 and ℒ^2_k := (2^-k-1,2^-k-1)+2^-kℤ^2 and the corresponding square subdivision of ℤ^2, respectively 𝒮^1_k and 𝒮^2_k. Observe that the centres of the squares 𝒮^1_k are elements of ℒ^2_k and viceversa. Next, define the following 2-dimensional autonomous vector field: (x) = (0, 4x_1)^t , if 1/2 > |x_1| > |x_2| (-4x_2, 0)^t , if 1/2 > |x_2| > |x_1| (0, 0)^t , otherwise. is a bounded, divergence-free vector field, whose derivative is a finite matrix-valued Radon measure given by D(x_1,x_2) = [ 0 0; 4 sgn(x_1) 0 ]2⌊_{|x_2|<|x_1|<1/2} + [ 0 -4 sgn(x_2); 0 0 ]2⌊_{|x_1|<|x_2|<1/2} +[ 4x_2 sgn(x_1) -4x_2 sgn(x_2); 4x_1 sgn(x_1) -4x_1 sgn(x_2) ]ℋ^1⌊_{x_1=x_2,0<|x_1|,|x_2|≤ 1/2} Periodise by defining Λ = {(y_1, y_2) ∈ℤ^2 : y_1 + y_2 is even} and setting (x) = ∑_y ∈Λ(x-y) . Even though is non-smooth, it is in BV_loc(^2;^2). Choose a standard mollifier ζ∈ C^∞_c(^d), and consider the regularisation (u^k)_k∈ of given by ^k=⋆ζ^k, and which satisfies the hypothesis of Proposition <ref>. As satisfies the hypothesis of Theorem <ref>, the Cauchy problem (<ref>) is well-posed along the regularisation (^k)_k∈. Therefore, to determine the unique bounded weak solution of (<ref>) along with initial datum ρ̅, it suffices to determine the unique bounded weak solution of (<ref>) along ^k with initial datum ρ̅ for k arbitrarly large. By the theory of regular Lagrangian flows (see for instance <cit.>), there exists a unique incompressible almost everywhere defined flow along can be described explicitely. (R) The map (t,0 ,·) is Lipschitz on each square S of 𝒮^2 and (1/2,0, ·) is a clockwise rotation of π/2 radians of the “filled” S, while it is the identity on the “empty ones”. In particular for every j≥ 1, (1/2,0, ·) maps an element of 𝒮^1_j rigidly onto another element of 𝒮^1_j. For j=1 we can be more specific. 
Each S∈𝒮^2 is formed precisely by 4 squares of 𝒮^1_1: in the case of “filled” S the 4 squares are permuted in a 4-cycle clockwise, while in the case of “empty” S the 4 squares are kept fixed. Let ρ^B:[1/2,1]×^2→^+ be the unique density such that ρ^B(1,) solves (<ref>) and ρ^B(1,·)=⌊x_1⌋ /2+ ⌊x_2⌋/2 mod 2=:ρ̅^B. Then, we have the following formula (t,0,·)_#ρ̅^Bd=ρ^B(t,x)d. Using property (R), we have ρ^B(1/2, x) = 1 - ρ̅^B(2x) . Likewise it is simple to use (O) to prove (R) If ζ solves the continuity with final data ζ_in and j≥ 1, α are such that ζ_in has average α on every S∈𝒮^1_j with j≥ 1, then ζ (1/2, ·) has also average α on S∈𝒮^1_j We define _DP:[0,1]×^2→^2 as follows. Set _DP(t, x) = (x) for 1/2<t≤1 and _DP(t, x) = (2^k x) for 1/2^k+1<t≤1/2^k. Let ρ^B:[0,1]×^2→^+ be the unique density such that ρ^B(1,) solves (<ref>) with ρ^B(0,·)=⌊x_1⌋/2 + ⌊x_2⌋/2 mod 2=:ρ̅^B Moreover, using recursively the appropriately scaled version of (<ref>), we can check that ρ^B(1/2^k, x) = ρ̅^B (2^k x) for k even, ρ^B (1/2^k, x) =1- ρ̅^B (2^k x) for k odd. Define the density ρ^W(t,x):=1-ρ^B(t,x). Then ρ^W(1,_DP) also solves (<ref>), by linearity. As the construction we have performed is ^2-periodic, we may consider _DP, ρ^W, and ρ^B to be defined on [0,1]×^2. Properties (i)-(v) follow directly from the construction. We therefore have ∫_B _0,x dx=∫_B∫_γ_1,yν_x(dy)dx. Let B be a Borel set. We have ∫_B_0,xdx=∫_B∫_^d_x,y ν(dx,dy)=∫_B∫_^d_x,y ν_y(dx)dy. Then, we have We also have ∫_B∫_^d_x,yν_y(dx)dy=∫_B_γ_1,ydy, and also for a.e. y∈^d ∫_^d_x,yν_y(dx)=_γ_1,y. §.§ Truncated dynamics Observe that ^τ∈ L^1((0,+∞);BV(^d;^d)). The map S^τ sends Lagrangian representations of (1,) to Lagrangian representations of (1,^τ). This is recorded in the following lemma. Consider a Lagrangian representation of (1,). Then (S^τ)_# is a Lagrangian representation of (1,^τ). It can readily be checked that ∫_Γ|γ(t)-γ(s)-∫_s^t^τ(u,γ(u))du| (S^τ)_#(dγ)=0. Likewise, (e_s∘ S^τ)_#= (e_s∨τ)_#=d. Therefore, is a Lagrangian representation of (1,^τ). §.§ Stopped Lagrangian representations Consider the probability space (Γ,ℳ,). We define the stochastic process S^τ_s :Γ∋γ↦γ(s∨τ)∈^d. Denote by ℳ_τ:=σ(Y^τ_s: s∈) the corresponding backwards filtration. For τ_1≤τ_2, we have that ℳ_τ_1⊃ℳ_τ_2. Also ℳ=σ(S_s : s∈). Correspondingly, consider S^τ :Γ∋γ↦γ(·∨τ )∈Γ. Given the probability space (Γ,ℳ,), we shall denote by ^τ:=(S^τ)_# the backward stopped version of . We denote Γ_t:=C([t,+∞);^d). We introduce the restriction map r_t :Γ∋γ↦γ⌊_[t,+∞)∈Γ_t . Consider two positive real numbers τ_1≤τ_2. Then, ^τ_1 equals ^τ_2 on ℳ_τ_2. For every τ∈ (0,1), ^τ is the unique Lagrangian representation of (1,^τ) of Theorem <ref> and there exists a Borel map ^τ:^d→Γ such that the disintegration of with respect to e_1 and d is given by ^τ=∫δ_^τ_1(x)dx. STEP 1 (^τ is concentrated on integral curves of ^τ) STEP 2 ((e_t)_#^τ=d for every t∈[0,+∞)) STEP 3 (Disintegration) As ^τ∈ L^1_loc([0,+∞);BV(^d;^d)), Corollary <ref> applies. Therefore, denoting by ^τ the unique regular Lagrangian flow of ^τ, we have for every s>τ that ^τ=∫δ_^τ(·,s,x)dx. The thesis follows. § PROOF OF THEOREM <REF> By Theorem <ref>, existence of a Lagrangian representation of (1,) holds. Let be such a Lagrangian representation. By Lemma <ref>, for every τ>0, (S^τ)_# is the unique Lagrangian representation of (1,^τ). By Lemma <ref>, is uniquely characterised as the narrow limit of (S^τ)_#. This proves that there exists a unique Lagrangian representation of (1,). 
Let {_1,x:x∈^d} be a disintegration of with respect to e_1 and d. Let us show that there exists a Borel map :^d→Γ such that _1,x=δ_(·,x) for a.e. x∈^d. As (S^τ)_# is a Lagrangian representation of (1,^τ) with disintegration {(S^τ)_#_1,x:x∈^d} with respect to e_1 and d, by Theorem <ref>, there is a set N^τ⊂^d of full Lebesgue measure and a Borel map ^τ:^d→Γ__L^∞_t,x such that (S^τ)_#_1,x=δ_^τ(x) for every x∈ N^τ. Define the set N:=⋂_τ∈ (0,1)∩ N^τ, which is of vanishing Lebesgue measure. Then, note that for every x∈ N, we have that {^τ(x)}_τ∈,τ>0 is a precompact family of Γ since it has uniform Lipschitz constant _L^∞_t,x. Moreover, for every τ<ν, it holds that (S^ν)_#δ_^τ(x)=δ_^ν(x), namely ^τ(x)(s∨ν)=^ν(x)(s) for every s≥ 0. Therefore, the family {^τ(x)}_τ>0 has a unique limit point in Γ__L^∞_t,x for the uniform convergence on compact time intervals and we may set (x):=lim_τ∈, τ↓ 0^τ(x), which clearly implies that δ_(x)=lim_τ∈, τ↓ 0δ_^τ(x)=lim_τ∈, τ↓ 0 (S^τ)_#_1,x=_1,x, for every x∈ N, where limits are taken with respect to the narrow convergence on 𝒫(Γ). This is the second part of the statement. It remains to prove that is selected by the class of divergence-free regularisations of . Let (^k)_k∈ be a divergence-free regularisation of . Since we have shown uniqueness of the Lagrangian representation of (1,), it suffices to show that a limit point of the sequence (^k)_k∈ of unique Lagrangian representations of (1,^k) is necessarly a Lagrangian representation of (1,). Let be a Lagrangian representation of (1,). Let n∈, F be a real-valued continuous function on (^d)^n, and let 0≤ t_1<…<t_n. Suppose that 0<t_1. By Lemma <ref>, for τ<t_1 we have that ∫_Γ F(γ(t_1),…, γ(t_n))(dγ)= ∫_ΓF(γ(t_1),…, γ(t_n))^τ(dγ). Now, suppose that t_1=0. For every n∈, we write τ_n=1/n and consider the Borel maps ^τ_n:^d→Γ from the Lemma <ref>. STEP 1. For every natural numbers n≥ m, there exists a set N_n,m of vanishing Lebesgue measure such that r_τ_m∘^τ_n(x)=r_τ_m∘^τ_m(x) for every x∈ N_n,m. This follows because of Lemma <ref>. Define the set N:=⋃_n,m∈ N_n,m, which is of vanishing Lebesgue measure. For every x∈ N, we let (x):=lim_n→ +∞^τ_n(x). For s∈, consider the boundary value problem posed on ×^d, BVP{∂_t ρ_ +÷_x(ρ) =0 , ρ(s,x) =ρ̅(x). . We shall say that has bounded marginals, if (e_t)_#≪d and the Radon-Nikodym derivative of (e_t)_# with respect to d is given by a function η(t,·)∈ L^∞(^d). Consider the family 𝒦:={∈𝒫(Γ) : is a Lagrangian representation of (1,) with bounded marginals} Then, ρ_s(t,·) solves (<ref>) with boundary datum ρ̅. Consider a sequence ^k of Markovian and also Markovian. Assume that ^k converges to . Then, w^*-lim_k→+∞ P^k(s,x;u,Γ)=P(s,x;u,Γ). Consider a bounded, divergence-free vector field :[0,+∞)×^d→^d, and a regularisation (^k)_k∈. Assume that ∈ L_loc^1((0,+∞);BV(^d;^d)) and that * |^k|≤ C; * ^k(t,s,·)_#d≤ Cd. and that (^k)_k∈ is tight and its limit point belong to 𝒦 and are Markovian. § PROOF OF THEOREM <REF> For t>0, we have the following weak convergence of measures: ∫ P^k(0,x;t,dy)ϕ(x)dx⟶∫ P(0,x;t,dy)ϕ(x)dx . Let ψ∈ C(^d). ∫_^d∫_^dψ(y) P^k(0,x;t,dy)ϕ(x)dx, =∫_^d∫_^dψ(γ(t))d^k_x,0(γ)ϕ(x)dx, =∫_^d∫_^dϕ(γ(0))ψ(γ(t))d^k(γ). Notice that ϕ(γ(0))ψ(γ(t))∈ C_b(Γ). Therefore, passing to the limit k→ +∞, we get the claim. For s>0 and Γ∈ℬ_^d, we have the strong convergence P^k(s,x;t,Γ)⟶ P(s,x;t,Γ) . Use strong convergence of RLF. STEP 1 (Tightness): ∫ |γ(t)-γ(s)|d^k(γ)≤ C|t-s|. 
STEP 2 (Limit points are in 𝒦): STEP 3 (Passing into the limit in the Markov property): § PROOF OF THEOREM <REF> Regularisation by convolution satisfies the hypothesis of Theorem <ref>. Let be a limit point. It is Markov by Theorem <ref>. Therefore, lim_k→+∞∫_^d P^k(s,x;u,Γ)ρ̅(x)dx= ∫_^d P(s,x;u,Γ)ρ̅(x)dx. Note that ∫_^dP^k(0,x;t,dy)ρ̅(x)dx=ρ^k(t,·)d. By our previous work, we know that ρ^k→ρ in 𝒟'((0,+∞)×^d). Therefore, ∫_^d P(0,x;t,dy)ρ̅(x)dx=ρ(t,·)d. Also by Lemma <ref>, we have convergence for s>0. Therefore, the probability transition kernel of is uniquely characterised. This implies that is uniquely characterised. Let ={_t,ℱ_t; t≥ 0} be a continuous, adapted process with values in ^d, defined on some probability space (Ω, ℱ, ). We shall say that is a Lagrangian representation of the vector field ρ(1,):[0,+∞)×^d→^d+1, if * law _t=ρ(t,·)d; * [0,+∞)∋ t↦_t∈^d is a solution of ODE -a.s. We shall say that a Lagrangian representation of the vector field ρ(1,):[0,+∞)×^d→×^d is unique in law, if any two Lagrangian representations _1 and _2 of ρ(1,) have the same law. We shall say that a Lagrangian representation of (1,) is selected by a regularisation (^k)_k∈, if ^k, the unique Lagrangian representation of (1,^k), converges in law to . We shall say that a Lagrangian representation of (1,) is selected by a regularisation class ℛ, if for every regularisation (^k)_k∈∈ℛ, is selected by (^k)_k∈. Consider a bounded, divergence-free vector field :[0,+∞)×^d→^d. Assume that ∈ L_loc^1((0,+∞);BV_loc(^d;^d)). Then, there exists a Lagrangian representation of (1,) unique in law, which is selected by the regularisation class ℛ_conv. An adapted d-dimensional process ={_t,ℱ_t; t≥ 0} on some probability space (Ω,ℱ,) is said to be a Markov process with initial distribution μ if * [_0∈Γ]=μ(Γ), ∀Γ∈ℬ(^d); * for s,t≥ 0 and Γ∈ℬ(^d), [_t+s∈Γ|ℱ_s]=[_t+s∈Γ|_s], -a.s. Consider a bounded, divergence-free vector field :[0,+∞)×^d→^d and a Lagrangian representation of (1,) on some probability space (Ω,ℱ,). Assume that ∈ L_loc^1((0,+∞);BV(^d;^d)). Then, is a Markov process. Let be a random variable on a probability space (Ω,ℱ, ) taking values in a complete, separable metric space (S,ℬ(S)). Let 𝒢 be a sub-σ-field of ℱ. A regular conditional probability of given 𝒢 is a function Q:Ω×ℬ(S)→ [0,1] such that * for each ω∈Ω, Q(ω,·) is a probability measure on (S,ℬ(S)); * for each E∈ℬ(S), the mapping ω↦ Q(ω; E) is 𝒢-measurable, and * for each E∈ℬ(S), [X∈ E|𝒢](ω)=Q(ω;E), -a.e. ω. Given a Lagrangian representation of (1,), we can consider its disintegration at time t given by {_x,t}_x∈^d. We shall say that a Lagrangian representation of (1,) satisfies the uniqueness property starting from time t_0∈ [0,+∞). Consider a bounded, divergence-free vector field :[0,+∞)×^d→^d and a Lagrangian representation of (1,) on some probability space (Ω,ℱ,).
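For reference, the semigroup relation that underlies the step "passing into the limit in the Markov property" is the standard Chapman-Kolmogorov identity for the transition kernels P(s,x;t,·):
\[
P(s,x;u,\Gamma) = \int_{\mathbb{R}^d} P(t,y;u,\Gamma)\, P(s,x;t,dy), \qquad 0 \le s \le t \le u,\ \Gamma \in \mathcal{B}(\mathbb{R}^d),
\]
a general property of Markov transition kernels rather than a statement particular to the vector fields studied here.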
http://arxiv.org/abs/2407.03061v1
20240703123445
ALTER: Augmentation for Large-Table-Based Reasoning
[ "Han Zhang", "Yuheng Ma", "Hanfang Yang" ]
cs.CL
[ "cs.CL" ]
ALTER: Augmentation for Large-Table-Based Reasoning Han Zhang, Yuheng Ma, Hanfang Yang July 8, 2024 ============================================= § ABSTRACT While extensive research has explored the use of large language models (LLMs) for table-based reasoning, most approaches struggle with scalability when applied to large tables. To maintain the superior comprehension abilities of LLMs in these scenarios, we introduce ALTER (Augmentation for Large Table-basEd Reasoning), a framework designed to harness the latent augmentation potential in both free-form natural language (NL) questions, via the query augmentor, and semi-structured tabular data, through the table augmentor. By utilizing only a small subset of relevant data from the table and supplementing it with pre-augmented schema, semantic, and literal information, ALTER achieves outstanding performance on table-based reasoning benchmarks. We also provide a detailed analysis of large-table scenarios, comparing different methods and various partitioning principles. In these scenarios, our method outperforms all other approaches and exhibits robustness and efficiency against perturbations. The code of this paper will be released at <https://github.com/Hanzhang-lang/ALTER>. § INTRODUCTION Tabular data is one of the fundamental and pivotal semi-structured data types widely used in relational databases, spreadsheets, analysis reports, etc. Table-based reasoning tasks such as table-based fact verification (FV) <cit.> and table-based question answering (TQA) <cit.> require sophisticated reasoning over textual, numerical, and logical forms. Moreover, reasoning tasks over large volumes of data pose additional complexity and challenges to machine intelligence. Recently, large language models (LLMs) have demonstrated remarkable proficiency in reasoning and inference. The advent of LLMs has spurred a surge in research focusing on their application to tabular data, heralding what can be termed the LLM era <cit.>. Compared with techniques from the pre-LLM era, such as fine-tuning methods, the latest LLM-based approaches have achieved results that are on par with or surpass those obtained through rule-based or pre-trained language model approaches <cit.>, leveraging the contextual understanding capabilities of LLMs. Mainstream techniques addressing tabular tasks in the LLM era focus on designing prompts or pipelines that combine multiple instructions with serialized natural language descriptions converted from tables, without requiring additional training. The serialized text is parsed by the LLMs and either transformed into executable code (SQL and Python) using symbolic code generation abilities <cit.> or output directly for final inference using literal reasoning abilities <cit.>. However, most table-based methods encounter three challenges when analyzing complex large tables. Firstly, in the process of converting table cells into natural language descriptions, the entire table is often expected to be included to provide comprehensive information <cit.>. This approach can face data leakage issues involving privacy concerns and may fail due to context length limitations. Additionally, the full table content can be long and noisy, leading to unnecessary computational resource consumption. Secondly, table reasoning tasks often require numerical reasoning, data preparation, or key cell identification. LLMs alone may lack the robustness to address these tasks directly and can sometimes introduce inaccuracies or hallucinations in their outputs.
As tables grow in size, reasoning about minor or nuanced details becomes even more difficult <cit.>, and LLMs require careful design to enhance their expandability and robustness in such scenarios. Thirdly, relevant parts needed to derive the answer may be scattered in different places for a complex large-table reasoning task. Therefore, intricate queries cannot be answered in a single glance or with a single execution step using programming languages. Although a couple of methods have optimized for specific issues mentioned above, no approach simultaneously considers all these problems while extending table-based reasoning tasks to large-scale tables. In consideration of the issues mentioned above, how can we mitigate performance degradation as the size of the table increases? We note that tables are inherently structured; in real-world databases, for instance, tables are well-categorized, and each column feature adheres to certain criteria, including data format, text representation, feature semantics, . Based on these practical observations, in this paper, we propose a framework named ALTER to facilitate the understanding of tables and to scale effectively to large-scale tables. Without utilizing the entire table data as contextual information throughout the process, we first generate adaptations about the NL questions with the query augmentor in Section <ref> and interpretations about the table's inherent structure and content with the table augmentor in Section <ref>. Subsequently, the data is further distilled to filter irrelevant column features. In conjunction with augmented information, the well-organized data is integrated with SQL executors and ultimately transformed into a more accessible format for joint reasoning, adhering to the proposed augment-filter-execution procedure. In summary, our main contributions include: (1) We explore new augmentation methods for queries and tables that are beneficial for table-based reasoning tasks. (2) We propose a general framework and a novel augment-filter-execution procedure capable of scaling to large tables. (3) We conduct extensive experiments on two table-based reasoning benchmarks and demonstrate the best performance over large-table scenarios. § RELATED WORK Large Language Models for Table Reasoning. Primary approaches using LLMs to tackle table reasoning tasks involve fine-tuning a foundational model following the pre-LLM era or directly utilizing in-context learning abilities unique to the LLM era. For fine-tuning methods, following the success of mask language modeling (MLM), task-specific fine-tuning methods are designed. For example, TaPas <cit.> extends BERT's <cit.> architecture and enhances the understanding of tabular data by recovering masked cells. Moreover, models relying on logical codes (SQL) can further enhance the model's reasoning ability. For example, Tapex <cit.> and OmniTab <cit.> focus on generating SQL queries that are then executed to fetch relevant information. Prompting technologies such as few-shot learning <cit.>, chain-of-thought reasoning (COT) <cit.>, and agent-based methods <cit.> can be correspondingly applied in table reasoning tasks. <cit.> first explores and demonstrates the feasibility of using LLMs in generic reasoning tasks. Binder <cit.> shows symbolic languages are also beneficial for complex analysis with prompt methods. Chain-of-Table <cit.>, inspired by CoT prompting methods, uses tabular data in the reasoning chain as a proxy for intermediate thoughts. 
ReAcTable <cit.> employs LLMs extending the ReAct framework to reason step-by-step and iteratively generates sub-tables using code executors. Dater <cit.> and DIN-SQL <cit.> break down table reasoning into multi-step inference by handcrafting pipeline. Query Augmentation. In question-answering tasks, query augmentation or query rewrite is a prevalent method to bridge the gap between queries and facts. Within the framework of LLMs, tasks related to Retrieval-Augmented Generation (RAG) often involve various forms of query modification, including query rewriting, disambiguation, and decomposition,  <cit.>. RQ-RAG <cit.> equips the model with multiple capabilities in multi-hop QA tasks. <cit.> proposes Rewrite-Retrieve-Read pipeline which adapt the query itself. Step-Back Prompting<cit.> presents a simple technique to derive high-level concepts. Table Augmentation and Table sampling. Table augmentation involves integrating knowledge and exploring implicit table content. Mainstream methods include incorporating commonsense knowledge <cit.> from Wikipedia[<https://www.wikipedia.org/>], obtained through search engines or analytical knowledge <cit.>. <cit.> relies on LLM itself to augment structural information using internal knowledge. § PRELIMINARY In this section, we introduce the definition of table reasoning tasks. Table reasoning requires reasoning over both free-form natural language(NL) and inherently structured tables. Given the triplet (T, Q, A), where table T = {c_i }_i=1^C, C represents the number of column features in the table. Note that we do not represent the table in cell format as we expect the table under investigation to adhere to certain norms inherently. Q signifies a query or claim related to the table, and A denotes the answer. We specifically focus on the table question answering and fact verification tasks. In the table question answering tasks, Q and A correspond to the query and expected answers in natural language form, respectively. In the table fact verification task, Q represents a claim about the table, and the final answer A ∈{0, 1} where 0 indicates falsity and 1 indicates truth regarding the input claim. § METHODOLOGY §.§ Overview In this work, we assume that semi-structured tabular data is rich in latent information beyond its raw data values. This information suggests that data storage adheres to certain common patterns and field semantics, facilitating the model's inference of the overall data distribution from a minimal sample of data. Inspired by knowledge-fusion models for metadata inference <cit.> and the internal knowledge-retrieving ability of LLMs <cit.>, we utilize LLMs to uncover patterns and semantics within tables, which helps to understand and operate data correctly. The whole workflow is illustrated in Algorithm <ref> in the appendix. In our framework, we do not include the full content of the table in a prompt; only K rows can be observed. Instead, the reasoning effect is ensured through the elaborate augmented information. The framework seamlessly accommodates large-scale tables, as the model is pre-endowed with comprehensive information about the data structure and content before encountering it. As illustrated in Figure <ref>, our proposed system ALTER, consists of three core components: ∙ Query Augmentor: This component enhances the original query by generating multiple sub-queries, each examining the original query from different perspectives. 
Compared to the partial original query, this component comprehensively provides more information through the subsequent table organizer. ∙ Table Organizer: Given the input query, this component utilizes the augment-filter-execution procedure. It first enriches the raw data with augmented table content, then filters the data to retain only highly relevant rows and columns, and finally employs an SQL executor to derive a reasonable and accessible sub-table for final inference. ∙ Joint Reasoner: This component efficiently performs reasoning and aggregation for the query augmentor and the primary workflow. §.§ Query Augmentor One of the primary challenges in naive Question Answering (QA) lies in its direct reliance on the user’s original query as the basis. Sometimes, the query itself is complex and ambiguous, resulting in subpar effectiveness. In tabular reasoning scenarios, an imprudent query can lead to the model focusing on one partially biased part in the table. We propose a novel improvement method for the query part to mitigate the information loss caused by only access to fractional sub-tables in our downstream process. This approach enables the LLMs to utilize the multi-query technique to attend to different parts within the table through diverse analysis processes. Additionally, based on the results of the sub-queries, it dynamically refines the original query to reduce ambiguity or complexity. In this work, we propose two query augmentation methods: step-back augmentation and sub-query augmentation. The step-back prompting method <cit.> has been empirically validated as effective in the RAG domain. We equip it with sampled sub-table information, which aims to obtain broader and more abstract-level comprehension within the table. LLMs are shown to be stronger at solving sequentially subproblems than directly solving a complex problem <cit.>. The latter query augmentation method seeks to decompose the information required by the query, enabling LLMs to locate the relevant information more easily in each sub-query. The reasoning process of all sub-queries in the table organizer is executed in parallel, during which the model utilizes each independent reasoning module to extract information pertinent to answering the original query. Irrelevant information is rejected, and duplicate queries are filtered out. §.§ Table Organizer The table organizer is the core component of the entire reasoning process. As previously mentioned, throughout the reasoning process, we do not use the entire table data as contextual information. Instead, we further filter the column features of the table, as detailed in Section <ref>, thereby simplifying the transmitted information. To maintain model performance without accessing full data, we employ the augment-filter-execution strategy. By pre-analyzing and augmenting the table's schema, semantic, and literal information, sufficient supplementary information required by the query is provided. Given the limited table features, the augmented information does not increase commensurately with the table size. Therefore, our method exhibits strong robustness to variations in table size. The table organizer primarily encompasses one preparatory stage and three reasoning stages, as illustrated in Figure <ref>. In the preparatory stage, the table augmentor correspondingly mines and enhances information for the downstream process, storing it in advance. In stage 1, based on the schema and semantic information for columns, relevant columns and rows are located. 
In stage 2, more detailed augmentation information for the filtered columns is considered, including schema, semantic and literal information, . As shown in <cit.>, high-quality programming language (SQL, Python) can be a powerful tool regarding numerical and logical questions, we rely on SQL as the standard language for querying structured data. Based on the filtered sub-table and incrementally updated augmentation information, executable SQL queries are generated and the final sub-table is retrieved in stage 3. If the upstream input is a sub-query, the final sub-table will be transformed into an effective response for the sub-query. If the input is the original query, the sub-table, along with the enhancement information from the sub-queries, will be received by the joint reasoner. §.§.§ Table Augmentor The table augmentor aims to convey extra information hidden inherently in the table, beyond the raw data itself, to the LLMs. The augmentation process occurs prior to the official reasoning process, as illustrated in Figure <ref>. It's worth noting that we can link this process to real large database systems or table applications <cit.>. In standard database systems, extensive work on data cleaning and normalization must be undertaken. During the process, hierarchical augmentation information will be stored and synchronized, including information about the database, tables, and statistical data inside the table. We focus on the latter two levels, as our emphasis lies on the Table QA scenario. In fact, in real-world databases, column names are often stored without semantic meaning and represented by uppercase abbreviations. The data stored may be formatted in equations and abstract symbols, posing challenges in generating SQL queries accurately. Therefore, schema and column feature information need to be predefined and stored in a standardized manner. In this case, we can simplify the process of the table augmentor by migrating augmented information. In this paper, the extra information table augmentor generates mainly includes the schema information, semantic information, and the literal representation of the table. Schema information delineates the storage format and interrelationships of database objects. This may include stored procedures at the database level or table relationships like Foreign Keys and Primary Keys at the table level. For pure data, schema information primarily denotes data types. We extracted three commonly used types in daily analysis: Numerical, Char, and Date types. These types are used to standardize data in advance during the process. The global and feature-specific semantic information enables LLMs to understand the primary content of the table or columns without directly accessing the data. This assists the LLMs in locating the relevant information corresponding to the query and determining the specific domain the table is about. When columns are named using acronyms or aliases, the imparted semantics can be pivotal for analysis. <cit.> demonstrates SQL queries often fail in accurately parsing the correct format stored in the table and improves it using multiple chain calls. However, the literal representation can explicitly inform the LLMs about the raw data representation format within the table. This facilitates the generation of correctly formatted SQL queries by the LLMs and effectively bridges the gap between complex SQL queries and user questions. 
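To make the table augmentor concrete, below is a minimal Python sketch of how the three kinds of pre-computed information could be produced for a single column. The function names, the probed date formats, and the stubbed LLM call for the semantic summary are all illustrative assumptions rather than ALTER's actual implementation.

import re
from datetime import datetime

# Hypothetical date formats the augmentor probes; a real system would cover more.
DATE_FORMATS = ("%Y-%m-%d", "%d %B %Y", "%m/%d/%Y")

def _is_date(cell: str) -> bool:
    for fmt in DATE_FORMATS:
        try:
            datetime.strptime(cell.strip(), fmt)
            return True
        except ValueError:
            pass
    return False

def _is_number(cell: str) -> bool:
    # Accept plain numbers, including thousand separators such as "1,234".
    return re.fullmatch(r"-?[\d,]+(\.\d+)?", cell.strip()) is not None

def augment_column(name: str, sampled_cells: list[str]) -> dict:
    """Pre-compute schema, literal, and (stubbed) semantic info for one column.

    Assumes a non-empty sample of cells drawn from the column.
    """
    if all(_is_date(c) for c in sampled_cells):
        schema = "Date"
    elif all(_is_number(c) for c in sampled_cells):
        schema = "Numerical"
    else:
        schema = "Char"
    return {
        "column": name,
        "schema": schema,                          # one of Date / Numerical / Char
        "literal": sampled_cells[:3],              # raw representation examples
        "semantic": f"<LLM summary of '{name}'>",  # stub: produced by an LLM in ALTER
    }

For instance, augment_column("Score", ["101-98", "95-92"]) would be typed as Char, and the stored literal examples would tell the SQL generator that scores follow an "a-b" pattern, which is exactly the gap between raw storage format and query intent that the literal information is meant to bridge.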
§.§.§ Column Filter and Row Sample Irrelevant table content in the prompt can lead to unnecessary computations and quality regression issues <cit.>, especially in scenarios involving large tables. We filter column features unrelated to the query and sample relevant rows, which avoids token waste and the introduction of additional bias. Reliance on LLMs to predict the indexes of rows escalates computational costs and budgets <cit.>. Rule-based methods can only match specific patterns, whereas embedding-based methods can leverage semantic and contextual information. Therefore, we initially sample K rows using embedding-based semantic similarity between each row and the utterance, following the practical guide from <cit.>. Subsequently, a powerful LLM is utilized to select columns relevant to the query, excluding irrelevant ones. We utilize the augmented information in Section <ref> throughout the filtering process. §.§ Joint Reasoner Given the sub-table derived from the primary workflow (illustrated in Figure <ref>) and the supplementary information from the query augmentor in <ref>, we leverage the step-by-step thought of the LLM to arrive at the final answer. § EXPERIMENT In this section, we first introduce the datasets and evaluation metrics. We compare ALTER with baseline methods and report the results in Section <ref> and Section <ref>. The ablation study and analysis of large-table scenarios are discussed in Section <ref> and Section <ref>, respectively. Please refer to Appendix <ref> for additional implementation details. §.§ Datasets and Evaluation Metrics We evaluate our proposed method on two widely-used table-based reasoning benchmarks, WikiTQ <cit.> and TabFact <cit.>. For the table-based fact verification task, we adopt the TabFact dataset, which contains various statements based on Wikipedia tables. We evaluate the dataset using binary classification accuracy on the small-test set containing 1998 statements with 298 different tables. For the table reasoning task, we adopt WikiTableQuestion (WikiTQ), which contains open-domain tables accompanied by complex questions. We use denotation accuracy as our evaluation metric, which evaluates the predicted answers based on the gold ones. We evaluate our method on the test set containing 4344 samples from 421 different tables. §.§ Baselines [1]For the Dater method, we report the results of using the LLM-based method as backbone We compare the proposed ALTER with a range of advanced reasoning frameworks for table-based tasks. The baseline methods for comparison can be categorized into two types: mainstream techniques following the pre-LLM era and techniques unique to the LLM era. For the techniques following the pre-LLM era, we select TAPEX <cit.>, ReasTAP <cit.>, TaCube <cit.>, OmniTab <cit.>, CABINET <cit.>. For the techniques unique to the LLM era, we select Binder <cit.>, Dater <cit.>, ReAcTable <cit.>, Mix SC <cit.>, Chain-of-Table <cit.>. Additionally, generating multiple reasoning paths and ultimately choosing the most consistent answer through voting or self-consistency <cit.> can enhance the performance of LLMs. Therefore, for the techniques unique to the LLM era, we report two types of results for those methods employing result ensemble techniques. §.§ Results We present the results on the WikiTQ and TabFact datasets. The experimental outcomes are summarized in Table <ref>. From the results, we observe that our ALTER method achieves comparatively outstanding outcomes. 
Specifically, on the WikiTQ dataset, while the Mix SC method does marginally outperform ours by aggregating multiple reasoning paths (with 10 sampling times), ALTER still exceeds the performance of all other methods under comparison. Notably, among the LLM-era methods that employ result ensemble techniques, ALTER achieves the best single-round reasoning performance. This demonstrates the robust performance of our method in reasoning tasks, which can be attributed to the reinforced information provided by the query augmentor and our modular procedure within the table organizer. §.§ Ablation Study We carry out an ablation study to assess the impact of various components on the performance of our method, as well as to explore the relationship between the pure table data and the inherent augmentation information. Analysis of the Query Augmentor. To analyze the impact of the two query augmentation methods in the query augmentor, we conducted experiments on the WikiTQ and TabFact datasets by discarding the step-back augmentation module (denoted as w/o step-back) and the sub-query augmentation module (denoted as w/o sub-query). For each dataset, we further categorized the questions based on difficulty level, following <cit.>. This stratification facilitates a more comprehensive evaluation of each module's impact across different types of questions. The ablation test results are reported in Table <ref>. From the results in the table, it is anticipated that employing both augmentation methods simultaneously yields the best performance under all experimental settings. For the WikiTQ dataset, the accuracy of ALTER without step-back/sub-query augmentation drops by 2.9%/2.0%, demonstrating the necessity of augmented information from multi-queries. Furthermore, on the TabFact dataset, both augmentation methods have a much larger impact on hard questions than on simple questions. This indicates that the augmented information provided by the query augmentor is particularly effective in dealing with complex questions. Analysis of Pure Data & Augmentation. In our ALTER experiments, we primarily set K=3, meaning the model can only access three rows of data relevant to the question throughout the process. To explore the relationship between pure table data and the augmented information in the table organizer, we conducted ablation experiments varying the value of K and the augmentation process. Results are shown in Table <ref>. We observe that methods utilizing augmented information exhibit significant performance improvements compared to those without augmented information. We also note that the concurrent absence of augmented information and data provision leads to a catastrophic decline in model performance. Notably, on both datasets, using only one row of data with augmented information achieves performance comparable to using three rows of data. Similar trends can also be observed in other settings. This validates that when the model is limited to a small portion of data, the table augmentor serves as a beneficial auxiliary tool, providing additional insights into the table's content. §.§ Large Table Analysis LLMs often struggle to interpret tables within large-scale scenarios, leading to hallucinations and errors. To the best of our knowledge, nearly all methods encounter a decline in performance as the table size increases when handling large tables.
To demonstrate the effectiveness of the ALTER framework in large-scale scenarios, we compare the performance of our framework across different table sizes in this section. We selected various table partitioning principles and different types of methods for a systematic evaluation. For table partitioning, we employed two approaches based on the token count and the number of cells. For the models, representative methods from both the LLM era and the pre-LLM era are chosen. Figure <ref> shows the comparison results of ALTER and methods following the pre-LLM era, including CABINET and OMNITAB, partitioning tables in the WikiTQ dataset by the number of cells. In Table <ref>, we present the results based on different table sizes divided by the token count in the WikiTQ dataset, comparing our method with Dater, Chain-of-TABLE, and Binder unique to the LLM era. Table <ref> shows that ALTER significantly outperforms all three methods in the LLM era across different table sizes. The performance improvement is particularly noteworthy when dealing with large tables. In Figure <ref>, our model demonstrates a much slower performance decline as the model size increases compared to the other two methods. As the size of the table increases, both CABINET and OMNITAB exhibit a monotonous decline in performance. However, our method shows a brief reversal with an increase in performance observed in the intermediate range, indicating the robustness and insensitivity of our approach to changes in table size. Our model significantly outperforms the other two methods when the table size exceeds a certain threshold (>300 cells). Specifically, in the 300-400, 400-500, and 500+ cell categories, our model exceeds their performance by at least 15%, 19%, and 25%, respectively. From the results, it is evident that our method exhibits exceptional performance in large tables. §.§ Robustness and Efficiency Analysis We examined ALTER's robustness to noise perturbations and token efficiency in large-scale scenarios. By adding random rows based on different perturbation factors, we introduced noise to each table in WikiTQ, details of perturbations can be found in Appendix <ref>. From Figure <ref>, we illustrate that as the degree of perturbation increases, the proportion of tokens utilized of the whole table by ALTER decreases. It can be observed that the initial fluctuation has the most significant effect, yet our model still outperforms the compared method (9.8% ALTER v.s. 11.4% CABINET). Concurrently, the decline in the framework's performance degree slows down. This indicates that our method efficiently maintains robust performance in large-table scenarios by narrowing down the scope of larger tables. §.§ Case Study In Appendix <ref>, we present a case study to elucidate the scenarios in which each component of enhanced information within ALTER can facilitate a more profound comprehension of table contents. When addressing complex problems, without the assistance of the augmentation process, the model may focus on biased information or experience hallucinations when generating SQL. However, when the augmented information is explicitly provided, the model can identify the region containing the correct information or generate syntactically correct SQL, thereby delivering accurate responses. § CONCLUSION We propose a framework, namely ALTER, which significantly optimizes model performance on large-scale tables. Within this framework, we extract inherent information pertinent to the questions and tables. 
By leveraging an augment-filter-execution process as the core reasoning workflow, ALTER demonstrates superior performance in handling large tables. We believe ALTER can bridge the gap between table reasoning methodologies and real-world analysis and bring insights into understanding the way LLMs comprehend tables. § LIMITATIONS ALTER is designed to generalize to large table reasoning tasks, but our method still faces some limitations. Our approach relies partly on the degree of structured and standardized storage of tables, meaning that if the table structure is totally disordered or lacks a certain level of standardization, our model's performance will degrade, for instance, when headers and data are intermixed. Additionally, the combination methods of different augmented information can be explored further. Due to the page limits, we will leave these explorations for future work. § IMPLEMENTATION DETAILS All experiments in this paper were conducted on GPU clusters with 4 NVIDIA A100 GPUs. We employ GPT-3.5-turbo as our large language model backbone in all experiments. To ensure consistent results, we apply a self-consistency technique with 5 sampling times for each benchmark dataset. For the embedding model in Section <ref>, we utilize bge-large-en model <cit.> and employ FAISS <cit.> for efficient similarity search. § CASE STUDY In Figure <ref>, the input question asks for the vehicle preceding the Jaguar XJS. When filtered table data is directly provided, the SQL output for the original query only attends to the second last row of the table. This indicates that the model has observed biased data, incorrectly assuming that the vehicle Jaguar XJS appears only once. However, through step-back query augmentation, the query is reframed, and the model generates a more general SQL query, acquiring more results and thus arriving at the correct answer. This demonstrates that step-back query augmentation enables the model to access a broader scope of information. In Figure <ref>, the input query seeks to determine the tenure of René Heitmann as head coach. This involves operations on two distinct feature columns. By decomposing the original query into sub-queries, the difficulty is reduced, allowing the model to accurately retrieve the corresponding information and ultimately compute the correct result. In Figure <ref>, the input query seeks to determine the score differential for the team Detroit. Without relying on the augmented information from the table augmentor, the model fails to correctly capture the name in the Team column and cannot accurately extract the score values in the Score column. After incorporating the augmented information, the model can generate syntactically correct SQL and extract the needed data. § DETAILS OF TABLE PERTURBATION In Section <ref>, we discussed the robustness and efficiency of ALTER. We provide details of the perturbations implemented. We insert noise into a table by adding rows based on the size of the table, following the row adding steps in <cit.>. However, we do not randomly extract values from other tables, as this would compromise the pre-augmented schema standardization. Based on the augmented schema information, we randomly generated data for three types of features: Date, Numerical, and Char. We believe the disturbance intensity is quite similar for the model compared to the previous approach. 
Based on the number of cells (#cells = N) in the table, the exact scheme of the n rows inserted is as follows: (1) n=1 if N ≤ 150, (2) n=2 if 150 < N ≤ 300, (3) n=4 if 300 < N ≤ 450, (4) n=8 if N > 450. Additionally, for each of these categories, we vary the degree of perturbation by multiplying the number of added rows by 1, 2, and 4 (the perturbation factor used in Figure <ref>). § PROMPTS We provide the prompt templates for the different augmentation methods used within the ALTER framework. See Figure <ref> for the two query augmentation methods and Figure <ref> for the different augmentations used in the table augmentor. In these templates, the red text serves as a placeholder for specific input. The in-context few-shot examples are selected from the training or validation set for each task. The sub-tables are serialized into HTML format throughout the experiments. [Algorithm: ALTER Workflow]
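As a concrete companion to the table-perturbation appendix above, the following minimal Python sketch implements the row-count rule and schema-consistent cell generation. The value generators are simplified stand-ins for the schema-aware generation described in the text, and all names are illustrative.

import random
import string

def rows_to_insert(num_cells: int, factor: int = 1) -> int:
    """Row-insertion rule from the appendix, scaled by the perturbation factor."""
    if num_cells <= 150:
        n = 1
    elif num_cells <= 300:
        n = 2
    elif num_cells <= 450:
        n = 4
    else:
        n = 8
    return n * factor  # factor is 1, 2, or 4 in the robustness experiments

def random_cell(schema: str) -> str:
    """Schema-consistent noise so the pre-augmented type info stays valid."""
    if schema == "Numerical":
        return str(random.randint(0, 1000))
    if schema == "Date":
        return f"{random.randint(1990, 2024)}-{random.randint(1, 12):02d}-01"
    return "".join(random.choices(string.ascii_lowercase, k=6))  # Char

def perturb(table: list[list[str]], schemas: list[str], factor: int = 1):
    """Append schema-consistent random rows to a table given as a list of rows."""
    num_cells = len(table) * len(schemas)
    for _ in range(rows_to_insert(num_cells, factor)):
        table.append([random_cell(s) for s in schemas])
    return table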
http://arxiv.org/abs/2407.02888v1
20240703080359
Joint Optimization of Resource Allocation and Data Selection for Fast and Cost-Efficient Federated Edge Learning
[ "Yunjian Jia", "Zhen Huang", "Jiping Yan", "Yulu Zhang", "Kun Luo", "Wanli Wen" ]
cs.LG
[ "cs.LG", "cs.AI" ]
Joint Optimization of Resource Allocation and Data Selection for Fast and Cost-Efficient Federated Edge Learning Yunjian Jia, Zhen Huang, Jiping Yan, Yulu Zhang, Kun Luo, and Wanli Wen The work is sponsored by the National Natural Science Foundation of China under Grant 62201101, the Project funded by China Postdoctoral Science Foundation under Grant 2022M720020, the Natural Science Foundation of Chongqing, China under Grant cstc2021jcyj-msxmX0458, and the Special Key Projects for Technological Innovation and Application Development in Chongqing Municipality under Grant CSTB2022TIAD-KPX0059. The authors are with the School of Microelectronics and Communication Engineering, Chongqing University, Chongqing 400044, China (yunjian@cqu.edu.cn, ZhenHuang@cqu.edu.cn, jiping_yan@stu.cqu.edu.cn, yulu_zhang@stu.cqu.edu.cn, kunluo@cqu.edu.cn, wanli_wen@cqu.edu.cn). (Corresponding author: Wanli Wen) =========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== § ABSTRACT Deploying federated learning at the wireless edge introduces federated edge learning (FEEL). Given FEEL’s limited communication resources and potential mislabeled data on devices, improper resource allocation or data selection can hurt convergence speed and increase training costs. Thus, to realize an efficient FEEL system, this paper emphasizes jointly optimizing resource allocation and data selection. Specifically, in this work, through rigorously modeling the training process and deriving an upper bound on FEEL’s one-round convergence rate, we establish a problem of joint resource allocation and data selection, which, unfortunately, cannot be solved directly. Toward this end, we equivalently transform the original problem into a solvable form via a variable substitution and then break it into two subproblems, that is, the resource allocation problem and the data selection problem. The two subproblems are mixed-integer non-convex and integer non-convex problems, respectively, and achieving their optimal solutions is a challenging task. Based on the matching theory and applying the convex-concave procedure and gradient projection methods, we devise a low-complexity suboptimal algorithm for the two subproblems, respectively. Finally, the superiority of our proposed scheme of joint resource allocation and data selection is validated by numerical results. Federated edge learning, mislabeling, data selection, training cost, resource allocation. § INTRODUCTION It is estimated that there will be 29.3 billion networked devices, such as smart phones, pads, wearable devices and other consumer electronics, by 2024 around the world <cit.>. 
These devices will inevitably generate huge amounts of data at the wireless edge that are applicable to multifarious machine learning (ML) tasks, e.g., autonomous driving and product recommendation. Traditional ML algorithms need to expose the raw data to third-party entities for model training, which, however, may compromise data privacy <cit.>. To tackle this privacy issue, researchers in the field of wireless communications integrate federated learning with mobile edge computing, thus forming a concept known as federated edge learning (FEEL) <cit.>. In the training process of FEEL, the devices send their local training results, such as gradients or model parameters, instead of the raw data, via wireless channels. Since the available radio resources such as bandwidth and time at the network edge are constrained, it is necessary to allocate appropriate radio resources to each device during model training. In addition, the data owned by the devices may be mislabeled in practice; for example, a hand-written digit “1” may be labeled as “0”, and an image “t-shirt” may be labeled as “trouser”. Training ML models on such mislabeled data can seriously deteriorate the convergence of FEEL, so it is necessary to select local data appropriately during model training <cit.>. To date, there have been many research efforts on resource allocation for FEEL, among which some representative studies are <cit.>. Specifically, the authors in <cit.> developed efficient joint device scheduling and wireless resource allocation schemes, aiming to minimize devices' total energy cost <cit.> and learning time cost <cit.>, maximize the weighted sum data rate <cit.>, or speed up the convergence of FEEL <cit.>. The authors in <cit.> and <cit.> established a resource allocation problem to reduce the weighted sum of the FEEL training time and the total energy cost of all devices. A joint device scheduling and resource management strategy was developed in <cit.>, which can significantly speed up model training and save energy costs. Additionally, a joint optimization of the processing rate, the uplink non-orthogonal multiple access (NOMA) transmission duration, the broadcasting duration, and the accuracy of the local training was proposed in <cit.>, with the aim of minimizing the system-wise cost, including the total energy consumption and the FEEL convergence latency. It is worth noting that excessive energy cost may prevent devices from participating in model training, thus reducing the performance of FEEL. However, how to deal with this issue has not been well studied in the literature <cit.>. There are several ways to encourage devices to join the model training process, such as rewarding the devices appropriately to compensate for their energy costs, as done in <cit.>. More specifically, the authors in <cit.> proposed to reward all devices based on the number of CPU cycles <cit.> or the quantity of available training samples <cit.> that they are willing to contribute; the greater the contribution, the higher the reward. On this basis, they further focused on how to achieve the desirable resource allocation. Note that the above works have not considered the design of data selection in the FEEL system, so the training algorithms they proposed may not be applicable to scenarios in which some local data are mislabeled. As for the design of data selection in FEEL, few works exist in this direction to date; some representative works include <cit.>.
Specifically, the work in <cit.> demonstrated the negative influence of data mislabeling on training performance and proposed an efficient data selection method. The authors in <cit.> first evaluated the relevance of data samples and then proposed to filter out all irrelevant data before model training. In <cit.>, the authors built a joint data selection and resource allocation problem to either speed up model training <cit.> or minimize the energy cost of FEEL <cit.>. Note that although the above works in <cit.> have proposed several efficient schemes to speed up model training and reduce the cost of the FEEL system from various perspectives, there are still some limitations. First, the works in <cit.> and <cit.> overlooked the resource allocation of the FEEL system. Second, the authors in <cit.> did not analyze the convergence of the training process. Third, the studies in <cit.> assumed that all devices can always participate in model training, which, however, may not hold in practice, since the edge devices may fail to upload local gradients due to loss of connectivity. As a result, the proposed schemes therein may not be suitable for practical FEEL systems. In this work, we address the above issues. We investigate a generic FEEL system consisting of multiple devices and an edge server, where each device connects to the server via wireless channels. It is worth noting that, to simulate practical scenarios, we assume that the devices may not always be available to connect to the server. In addition, some devices may not be willing to conduct model training due to the high energy cost. Thus, to entice devices to join model training, the server rewards the devices to compensate for their energy cost. Particularly, in the FEEL system, we consider that the communication resources are limited and each device may have some mislabeled data samples. On these bases, we analyze the convergence of FEEL and jointly optimize the resource allocation and data selection to speed up model training while reducing the net cost (i.e., the energy cost minus the reward) of all devices. Our contributions are listed below. * By mathematically modeling the training process of FEEL, we derive an analytical expression for the net cost of all devices. In addition, we propose a new gradient aggregation method and derive an upper bound on FEEL's one-round convergence rate. On this basis, we obtain a mathematical expression that can be used to speed up the convergence of FEEL. * To accelerate the convergence of FEEL while minimizing the net cost of all devices, we establish a joint resource allocation and data selection problem. Since the formulated problem is unsolvable on the server side, we equivalently transform it into a more solvable form with some appropriate transformations and then separate the original problem into two subproblems: the resource allocation problem and the data selection problem. These two subproblems are mixed-integer non-convex and integer non-convex problems, respectively, and it is very challenging to obtain their optimal solutions. * Based on matching theory and applying the convex-concave procedure and gradient projection methods, we propose a low-complexity suboptimal algorithm for the resource allocation problem and the data selection problem, respectively. Aside from the above contributions, we compare the performance of the proposed scheme with several representative baselines on two popular datasets, i.e., MNIST and Fashion-MNIST.
Numerical results show that the developed solution of joint resource allocation and data selection significantly outperforms the representative baselines. The rest of this paper is organized as follows. Section <ref> introduces the system model. Section <ref> formulates an optimization problem of joint data selection and resource allocation and then transforms the problem into two subproblems with manageable structure. These two subproblems are solved in Section <ref> and Section <ref>, respectively. Section <ref> presents the performance evaluation of FEEL under the proposed scheme and some representative baselines. Finally, Section <ref> concludes our work. § SYSTEM MODEL We investigate a generic FEEL system, which consists of one edge server and K devices, as depicted in Fig. <ref>. To maintain the simplicity of analysis and optimization, we assume that both the base station (BS) and the devices are equipped with a single antenna. Let 𝒦={1,2,⋯, K} denote the user set. Note that we assume that all devices are legitimate, with no malicious devices. Let 𝒟_k={(𝐱_j,y_j)}_j=1^|𝒟_k| denote the dataset of the k-th device. Here, 𝐱_j∈ℝ^d is the d-dimensional data sample, y_j∈ℝ denotes the data label, and |·| denotes the cardinality of a set. We define 𝒟= ∪_k∈𝒦𝒟_k as the whole dataset residing on all devices. Table <ref> summarizes the main notations used in this paper. §.§ Learning Model The FEEL system can learn an ML model 𝐰∈ℝ^d over the datasets of all devices by solving the problem below 𝐰^*= arg min_𝐰L(𝐰). Here, L(𝐰)=1/| 𝒟|∑_k∈𝒦| 𝒟_k |L_k(𝐰) is the loss function with L_k(𝐰) =1/| 𝒟_k |∑_j∈𝒟_kℓ(𝐰,𝐱_j,y_j) denoting the loss function of device k and ℓ(·) being an appropriate sample-wise loss function. Training an ML model in the FEEL system is usually an iterative procedure, in which each iteration is also known as a communication round. Define 𝐰^(i) to be the global model at round i=1,2,⋯. Then, the training process of FEEL at each round consists of three stages, i.e., Local Gradient Computing, Local Gradient Uploading, and Global Model Updating. We will detail these stages in the following subsections. §.§ Local Gradient Computing The k-th device computes the local gradient, denoted by 𝐠̂_k^(i), of L_k(𝐰) at 𝐰=𝐰^(i), where 𝐰^(i) is received from the edge server. The computed gradient 𝐠̂_k^(i) is subsequently uploaded to the server in the stage of Local Gradient Uploading. Note that most of the current literature on FEEL implicitly assumes that the devices' data samples are always labeled correctly <cit.>, and thus the gradient 𝐠̂_k^(i) is 𝐠̂_k^(i) = ∇ L_k(𝐰^(i)) = 1/|𝒟_k|∑_(𝐱_j,y_j) ∈𝒟_k𝐠_k,j^(i), ∀ k∈𝒦 where 𝐠_k,j^(i) = ∇ℓ(𝐰^(i),𝐱_j,y_j) and L_k(𝐰^(i)) is given by (<ref>). Nevertheless, in practice, some data samples in 𝒟_k may be mislabeled, e.g., a digit “1” labeled as “0”. Such mislabeling may degrade the training performance of FEEL if ĝ_k^(i) in (<ref>) is sent directly to the edge server. To this end, we consider a data selection design,[Note that the data selection mechanism discussed in this paper does not necessitate the capability of the user devices or the edge server to detect data with incorrect labels.] in which only a subset ℳ_k^(i) of the data samples in 𝒟_k is selected to compute the local gradient. Specifically, a subdataset 𝒟̂_k ⊆𝒟_k of size |𝒟̂_k| is first sampled from 𝒟_k. Then, the squared gradient norm of each data sample in 𝒟̂_k is computed and sent to the BS.
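To make the local gradient computing stage concrete, the following Python sketch illustrates the per-device computations described above: the selected-subset gradient average and the per-sample squared gradient norms reported to the BS. The function grad_fn is a hypothetical per-sample gradient oracle standing in for ∇ℓ(𝐰,𝐱_j,y_j); it is not part of the paper, and only the averaging logic is taken from the text.

```python
import numpy as np

def local_gradient(grad_fn, w, selected_samples):
    """Local gradient of device k over its selected subset M_k^(i):
    g_hat_k = (1/|M_k|) * sum of per-sample gradients g_{k,j}."""
    grads = [grad_fn(w, x, y) for (x, y) in selected_samples]
    return np.mean(grads, axis=0)

def gradient_norm_squares(grad_fn, w, subsampled):
    """Squared gradient norms sigma_{k,j} = ||g_{k,j}||^2 of the samples in
    the subdataset D_hat_k, which each device reports to the BS."""
    return [float(np.sum(grad_fn(w, x, y) ** 2)) for (x, y) in subsampled]
```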
On this basis, a subset ℳ_k^(i) is chosen from 𝒟̂_k to compute the local gradient. With a slight abuse of notation, we then re-express (<ref>) as ĝ_k^(i) = 1/|ℳ_k^(i)|∑_(𝐱_j,y_j) ∈ℳ_k^(i)𝐠_k,j^(i), ∀ k∈𝒦 where ℳ_k^(i) satisfies the following constraints ℳ_k^(i)⊆𝒟̂_k, ∀k∈𝒦, ℳ_k^(i)≠∅, ∀k∈𝒦. Let ℳ^(i) = {ℳ_k^(i)}_k∈𝒦 denote the data selection design. Note that the data selection design ℳ^(i) is applicable in smart healthcare systems for disease diagnosis <cit.> and in autonomous vehicles for enhanced driving safety <cit.>. After completing the local gradient calculation, device k sends the local gradient ĝ_k^(i) given in (<ref>) instead of (<ref>) to the edge server. As done in <cit.> and <cit.>, to compensate devices for the cost of participating in training and to attract them to model training, we consider that the server rewards each device. Let q_k denote the reward per data sample for device k.[Determining the specific value of q_k is a strategic process that takes into account a range of factors, including but not limited to data volume and privacy concerns.] Then, the total reward received by all devices in 𝒦 is R(ℳ^(i)) = ∑_k∈𝒦 q_k|ℳ_k^(i)|. On the other hand, the local calculation consumes a certain amount of energy at each device. To be specific, we define F_k as the CPU workload required by device k to compute the squared gradient norm of a single data sample, quantified in CPU cycles per sample. Concurrently, let f_k denote the CPU frequency of device k, measured in CPU cycles per second. Then, the computation time at device k for computing the squared gradient norms of the data samples in 𝒟̂_k is given by τ_k=F_k |𝒟̂_k|/f_k, ∀k∈𝒦. Based on <cit.>, the energy consumption for gradient computing at device k is E_k^cmp = κ F_k |𝒟̂_k|f_k^2, ∀ k∈𝒦. Here κ denotes the effective capacitance coefficient, which depends on the chip architecture. Let c_k denote the cost per unit energy consumption of device k.[Note that accurately quantifying c_k in real-world FEEL systems is complex, as it involves various factors including the energy efficiency of the hardware, the energy required for data transfer, and the operational efficiency of the model on specific devices <cit.>.] Thus, the total cost of performing gradient computing on all devices is given by C^ cmp= ∑_k∈𝒦 c_k E_k^cmp. §.§ Local Gradient Uploading In this stage, each device needs to submit its gradient to the edge server. Due to the resource limitations of wireless transmission, we consider a grant-based NOMA system with a set 𝒩={1,2,⋯,N} of N resource blocks (RBs). To support the gradient uploading, the RBs in 𝒩 should be carefully allocated to the devices in 𝒦. Define ρ_k,n^(i) to be the RB assignment indicator, where ρ_k,n^(i)=1 indicates that the n-th RB is assigned to the k-th device, and ρ_k,n^(i)=0, otherwise. Consider that each RB can be allocated to at most Q≥1 devices, while each device can occupy only one RB <cit.>, i.e., the following constraints should be satisfied ρ_k,n^(i)∈{0,1}, ∀k ∈𝒦, ∀n ∈𝒩, ∑_k∈𝒦ρ_k,n^(i)≤ Q,  ∀n∈𝒩, ∑_n∈𝒩ρ_k,n^(i)≤ 1, ∀k∈𝒦. Additionally, as mentioned earlier, not all devices are available to transmit their gradients to the server. To reflect such behavior, let α_k^(i) ∈{0,1} denote device k's availability state, where α_k^(i) = 1 indicates that device k is available to upload its gradient 𝐠̂_k^(i), and α_k^(i) = 0, otherwise.
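As a quick numerical illustration of the computation-cost model above, the sketch below evaluates τ_k, E_k^cmp, C^cmp, and the total reward R(ℳ^(i)); only the formulas τ_k=F_k|𝒟̂_k|/f_k, E_k^cmp=κF_k|𝒟̂_k|f_k², C^cmp=Σ_k c_k E_k^cmp, and R=Σ_k q_k|ℳ_k^(i)| are taken from the text, while the example values in the last line are hypothetical.

```python
def computation_cost(F_k, f_k, n_hat_k, kappa=1e-28):
    """tau_k = F_k*|D_hat_k|/f_k (seconds), E_k^cmp = kappa*F_k*|D_hat_k|*f_k^2 (Joules)."""
    tau_k = F_k * n_hat_k / f_k
    e_k = kappa * F_k * n_hat_k * f_k ** 2
    return tau_k, e_k

def total_cmp_cost_and_reward(c, q, F, f, n_hat, m_sizes, kappa=1e-28):
    """C^cmp = sum_k c_k * E_k^cmp and R = sum_k q_k * |M_k^(i)|."""
    c_cmp = sum(c_k * computation_cost(F_k, f_k, n_k, kappa)[1]
                for c_k, F_k, f_k, n_k in zip(c, F, f, n_hat))
    reward = sum(q_k * m_k for q_k, m_k in zip(q, m_sizes))
    return c_cmp, reward

# Hypothetical values: F_k = 20 cycles/sample, f_k = 1 GHz, |D_hat_k| = 200 samples.
print(computation_cost(20, 1e9, 200))   # -> (4e-06 s, 4e-07 J)
```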
Then, we rewrite the uploaded local gradient of device k as α_k^(i)𝐠̂_k^(i), and the following constraint should be satisfied ρ_k,n^(i)≤α _k^(i), ∀k∈𝒦, ∀n∈𝒩. According to the NOMA transmission protocol <cit.>, the uplink signals from the devices occupying the same RB are superposed. Let 𝒮_n^(i) denote the set of devices occupying RB n. Then, the received signal at the edge server on RB n can be expressed as u_n^(i)=∑_k∈𝒮_n^(i)√(ρ_k,n^(i)p_k,n^(i)h_k,n^(i))α_k^(i) v_k,n^(i)+ m, ∀ n∈𝒩 where v_k,n^(i) denotes the signal transmitted from device k on the n-th RB with 𝔼{| v_k,n^(i)|^2 } =1 and 𝔼{·} being the expectation operator, h_k,n^(i) is the channel power gain of device k on RB n, m denotes the zero-mean Gaussian noise with variance N_0, and p_k,n^(i) denotes the transmission power of device k on RB n. Let p_k^max represent the maximum power limit of device k. Then, we have 0≤ p_k,n^(i)≤ρ _k,n^(i)p_k^ max, ∀k∈𝒦, ∀n∈𝒩. We consider that successive interference cancellation (SIC) is applied at the edge server. Specifically, for each RB, the edge server first decodes the signal from the device with the highest channel power gain and treats the signals of the others as interference. Then, the edge server subtracts the decoded signal from the superposed signal. This iterative process continues until the signals from all users have been successfully decoded. On this basis, we can calculate the achievable rate of device k on RB n as follows, r_k,n^(i)=Blog_2 (1+ρ_k,n^(i) p_k,n^(i) h_k,n^(i)/∑_t∈𝒮_n^(i)1[ h_t,n^(i)<h_k,n^(i)]ρ_t,n^(i)p_t,n^(i)h_t,n^(i) + N_0), ∀k ∈𝒦, ∀n ∈𝒩. Here, B represents the bandwidth of each RB and 1[ ·] denotes an indicator function. Let L and T represent the size of the local gradient ĝ_k^(i) in (<ref>) and the time duration for gradient uploading, respectively. To ensure that ĝ_k^(i) can be uploaded to the server successfully, we have ∑_n∈𝒩r_k,n^(i)T≥α_k^(i)L, ∀k∈𝒦. The energy consumption of device k for local gradient uploading is E^com_k(ρ^(i),𝐩^(i))=∑_n∈𝒩ρ_k,n^(i)p_k,n^(i)T, and the total cost of performing local gradient uploading from all devices can be expressed as C^ com(ρ^(i),𝐩^(i)) = ∑_k∈𝒦c_kE_k^com(ρ^(i),𝐩^(i)), where ρ^(i)=(ρ_k,n^(i))_k∈𝒦, n∈𝒩 and 𝐩^(i)=(p_k,n^(i))_k∈𝒦, n∈𝒩 denote the design parameters of RB assignment and transmission power allocation, respectively. Finally, based on the reward in (<ref>) and the total costs in (<ref>) and (<ref>), we can compute the net cost (i.e., the energy cost minus the reward) of all devices as follows. C(ℳ^(i), ρ^(i),𝐩^(i)) = C^ com(ρ^(i),𝐩^(i))+ C^ cmp - R(ℳ^(i)). §.§ Global Model Updating In this stage, by aggregating the gradients from the devices, the edge server generates a new global model for the next round of model training. Let ϵ_k = ℙ[α_k^(i) = 1] denote the probability that device k is available to submit its local gradient. Then, we propose that at round i, the server aggregates the local gradients based on ĝ^(i)=1/|𝒟̂|∑_k ∈𝒦|𝒟̂_k|/ϵ _kα _k^(i)ĝ_k^(i), where |𝒟̂|=∑_k=1^K |𝒟̂_k| and 𝐠̂^(i) denotes the global gradient generated on the edge server side. The following lemma shows the unbiasedness of ĝ^(i), which greatly facilitates the proof of FEEL's convergence rate. The expectation of 𝐠̂^(i) is equal to the ground-truth gradient 𝐠^(i) = ∇ L(𝐰^(i)). The proof can be found in Appendix <ref>. Based on (<ref>), the edge server generates a new global model 𝐰^(i+1) for the next round of model training according to 𝐰^(i+1)=𝐰^(i)-η^(i)ĝ^(i). Here, η^(i)>0 denotes the learning rate at round i.
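The following sketch mirrors the two computations just described: the SIC-based achievable rates on one RB, where device k is interfered only by co-channel devices with smaller channel power gains, and the proposed unbiased aggregation followed by the global update 𝐰^(i+1)=𝐰^(i)−η^(i)ĝ^(i). It is a simplified illustration under the assumption that only the devices actually assigned to the RB are passed in (the indicators ρ are implicit), not the paper's full algorithm.

```python
import numpy as np

def sic_rates(p, h, B, N0):
    """Achievable uplink rates on one RB under SIC decoding.
    The server decodes stronger devices first, so device k only sees
    interference from co-channel devices with smaller channel power gains.
    p, h: transmit powers and channel power gains of the devices on this RB."""
    p, h = np.asarray(p, dtype=float), np.asarray(h, dtype=float)
    rates = np.zeros(len(p))
    for k in range(len(p)):
        weaker = h < h[k]
        interference = np.sum(p[weaker] * h[weaker])
        rates[k] = B * np.log2(1.0 + p[k] * h[k] / (interference + N0))
    return rates

def aggregate_and_update(w, local_grads, n_hat, alpha, eps, eta):
    """Unbiased aggregation g_hat = (1/|D_hat|) sum_k (|D_hat_k|/eps_k)*alpha_k*g_hat_k,
    followed by the global update w <- w - eta * g_hat."""
    g_hat = sum((n_k / e_k) * a_k * g_k
                for n_k, e_k, a_k, g_k in zip(n_hat, eps, alpha, local_grads)) / sum(n_hat)
    return w - eta * g_hat
```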
§.§ Convergence Analysis The above three stages are repeated until FEEL converges. We now analyze the convergence behavior of FEEL under the proposed data selection scheme. §.§.§ One-round convergence rate Using Lemma <ref>, we first derive an upper bound on FEEL's one-round convergence rate. If ∇ L(𝐰) is Lipschitz continuous, we obtain 𝔼[ L(𝐰^(i+1)) -L( 𝐰^*)] ≤𝔼[ L(𝐰^(i)) -L(𝐰^*)] -η^(i)‖𝐠^(i)‖^2 +β(η^(i)) ^2/2|𝒟̂|^2Δ(ℳ^(i)). Here, β>0 is the Lipschitz modulus and Δ(ℳ^(i)) is given by (<ref>), shown at the bottom of the next page, where σ_k,j^(i)=‖𝐠_k,j^(i)‖^2. The proof can be found in Appendix <ref>. §.§.§ Convergence upper bound Based on Lemma <ref>, we can derive the convergence upper bound in the i-th communication round. If L(𝐰) is strongly convex with a positive parameter μ, we have 𝔼[L(𝐰^(i+1))-L(𝐰^*)] ≤∏_t=1^i (1-2μη^(t)) 𝔼[L(𝐰^(1))-L(𝐰^*)] +β/2|𝒟̂|^2∑_t=1^iA^(t)(η^(t))^2Δ(ℳ^(t)), where A^(t)=∏_j=t+1^i (1-2μη^(j)) is a weight coefficient. The proof can be found in Appendix <ref>. Lemmas <ref> and <ref> indicate that Δ(ℳ^(i)) directly depends on the data selection ℳ^(i), and that decreasing Δ(ℳ^(i)) speeds up the convergence of FEEL. § PROBLEM FORMULATION AND TRANSFORMATION In the sequel, we first establish a joint resource allocation and data selection problem. Then, we conduct some necessary transformations to facilitate solving the original problem. §.§ Problem Formulation According to (<ref>) and (<ref>), we see that the variables (ρ^(i),𝐩^(i)) and ℳ^(i) affect both the convergence of FEEL and the net cost of all devices. Therefore, a question naturally arises: how can we design an appropriate joint resource allocation and data selection scheme to accelerate the convergence of FEEL while minimizing the net cost of all devices? To answer this question, the following problem is established. [Joint Resource Allocation and Data Selection] min_ℳ^(i),ρ^(i),𝐩^(i) λΔ(ℳ^(i)) +(1-λ) C(ℳ^(i), ρ^(i),𝐩^(i)) s.t. (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>). Problem <ref> should be tackled on the server side, not on the device side. However, due to the constraints in (<ref>) and (<ref>), all users' local data would have to be uploaded to the server when solving Problem <ref>. Since FEEL does not allow the server to directly access the raw data of each device in order to protect data privacy, Problem <ref> cannot be solved at the edge server. In the following subsection, we will transform Problem <ref> into a solvable form through variable substitution. Subsequently, we will decompose it into two subproblems. Owing to the transformation, the server can address Problem <ref> without compromising user data confidentiality. §.§ Problem Transformation Let δ_k,j^(i)∈{0,1} denote the data selection indicator, where δ_k,j^(i) = 1 represents that the j-th data sample in 𝒟̂_k is selected into the set ℳ_k^(i), and δ_k,j^(i) = 0, otherwise. Define the variable δ^(i)=(δ_k^(i))_k∈𝒦, where δ_k^(i)=(δ_k,j^(i))_j∈𝒥_k and 𝒥_k = {1,2,⋯,|𝒟̂_k|}. Then, by replacing ℳ^(i) with δ^(i), we transform Problem <ref> into an equivalent problem as follows. [Joint Resource Allocation and Data Selection] min_δ^(i),ρ^(i),𝐩^(i) λΔ̂(δ^(i)) + (1-λ) Ĉ(δ^(i),ρ^(i),𝐩^(i)) s.t. (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), δ_k,j^(i)∈{ 0,1 }, j∈𝒥_k, ∀k∈𝒦, 0 < ∑_j∈𝒥_kδ_k,j^(i)≤ |𝒟̂_k|, ∀k∈𝒦, where Δ̂(δ^(i)) is given in (<ref>), shown at the bottom of the next page, and the net cost of the devices is given by Ĉ(δ^(i),ρ^(i),𝐩^(i))=C^com( ρ^(i),𝐩^(i))+C^cmp-∑_k∈𝒦q_k∑_j∈𝒥_kδ_k,j^(i).
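Before proceeding, a small numeric illustration of the one-round convergence recursion above may help. The sketch iterates 𝔼[L^(i+1)−L^*] ≤ (1−2μη^(i))𝔼[L^(i)−L^*] + β(η^(i))²Δ(ℳ^(i))/(2|𝒟̂|²) for hypothetical values of μ, β, η, and Δ (the closed form of Δ appears in an equation not reproduced here); it only visualizes why a smaller Δ(ℳ^(i)), i.e., better data selection, tightens the bound.

```python
def convergence_bound(gap0, mu, beta, etas, deltas, d_hat):
    """Iterates the one-round bound:
    gap_{i+1} <= (1 - 2*mu*eta_i) * gap_i + beta*eta_i**2 / (2*d_hat**2) * Delta_i."""
    gap = gap0
    for eta, delta in zip(etas, deltas):
        gap = (1.0 - 2.0 * mu * eta) * gap + beta * eta ** 2 / (2.0 * d_hat ** 2) * delta
    return gap

# Hypothetical values: a smaller Delta (better data selection) yields a tighter bound.
print(convergence_bound(1.0, mu=0.5, beta=2.0, etas=[0.1] * 100, deltas=[5e5] * 100, d_hat=2000))
print(convergence_bound(1.0, mu=0.5, beta=2.0, etas=[0.1] * 100, deltas=[1e5] * 100, d_hat=2000))
```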
Compared to the original Problem <ref>, Problem <ref> is solvable, since only the cardinality of 𝒟̂_k needs to be sent to the server, instead of 𝒟̂_k itself. Nonetheless, it is still difficult to achieve an optimal point of Problem <ref> for the following reasons. First, the variables ρ^(i) and δ^(i) are binary. Second, the objective function and the constraint in (<ref>) are non-convex. Thus, Problem <ref> is a highly challenging mixed-integer non-convex problem. To tackle Problem <ref>, we equivalently decompose it into Problem <ref> and Problem <ref>. [Resource Allocation Problem] min_ρ^(i),𝐩^(i) C^ com( ρ^(i),𝐩^(i))+C^ cmp s.t. (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>). [Data Selection Problem] min_δ^(i) λΔ̂(δ^(i)) + (1-λ) Ĉ(δ^(i),ρ^(i),𝐩^(i)) s.t. (<ref>), (<ref>). In the sequel, we will solve Problem <ref> by solving Problem <ref> and Problem <ref>, as illustrated in Fig. <ref>. The details of solving Problem <ref> are summarized in Algorithm <ref>.[Note that in Algorithm <ref>, each user performs a single iteration over their local data to compute the local gradient. Consequently, Algorithm <ref> can be considered a variant of the widely-employed FedSGD algorithm <cit.> in federated learning. The analysis and optimization presented herein can be adapted to scenarios utilizing the FedAvg algorithm <cit.>, wherein users typically execute multiple iterations on their local data and upload model parameters instead of gradients to the edge server.] Specifically, the edge server first solves Problem <ref> to obtain the resource allocation design, denoted by (ρ^*(i), 𝐩^*(i)), via Algorithm <ref> (see Section <ref>), as shown in Step 2. Then, Problem <ref> is solved to obtain the data selection, denoted by δ^*(i), via Algorithm <ref> (see Section <ref>), as shown in Step 3. Note that the edge server sends (δ^*(i), ρ^*(i), 𝐩^*(i)) back to all devices in 𝒦, and each device can then determine ℳ^*(i) according to δ^*(i) and select all data samples in ℳ^*(i) for local gradient computation. We remark that Algorithm <ref> yields a locally optimal solution for Problem <ref>, not a global optimum. When dealing with a non-convex optimization problem, attaining a globally optimal solution is typically not guaranteed; in such scenarios, the pursuit of a locally optimal solution is the conventional objective <cit.>. § SOLUTION OF PROBLEM <REF> Like Problem <ref>, Problem <ref> is also a highly challenging mixed-integer non-convex problem. In the sequel, based on matching theory and the convex-concave procedure (CCP), we propose a low-complexity algorithm to solve Problem <ref>. §.§ Proposed Algorithm In this subsection, we first define a matching model. Then we give the details of the proposed matching-based algorithm. Finally, the convergence and complexity of the proposed algorithm are analyzed. §.§.§ Matching Formulation To rationally utilize the limited communication resources, we only allocate RBs to the available devices. Let 𝒰^(i) denote the set of U^(i) available devices. To devise a low-complexity algorithm, we consider the devices in 𝒰^(i) and the RBs in 𝒩 as two sets of nodes in a bipartite graph. Then, our goal is to match the devices to the RBs and allocate power appropriately to minimize the net cost, i.e., the objective function value of Problem <ref>. In the following, we define the matching model.
[Matching Model] Given two disjoint sets, i.e., 𝒰^(i) and 𝒩, a matching Ψ is a mapping from the set 𝒰^(i)∪𝒩 into the set of all subsets of 𝒰^(i)∪𝒩, satisfying i) Ψ(u) ∈𝒩, ii) Ψ(n) ⊆𝒰^(i), iii) | Ψ(u) |=1, iv) | Ψ(n) |≤ Q, and v) n=Ψ(u) ⇔ u=Ψ(n), for each u∈𝒰^(i) and n∈𝒩. Here, conditions i) and iii) mean that each available device can match only one RB, and conditions ii) and iv) represent that each RB can accommodate up to Q available devices. §.§.§ Algorithm Detail Motivated by the housing assignment problem in <cit.>, we introduce the concept of swap matching into the matching formulated in Definition <ref> and devise a matching-based algorithm to solve Problem <ref>. A swapping operation means that two available devices matched with different RBs exchange their matches, while the matches of the other available devices remain unchanged. The corresponding power allocation and net cost under the given RB assignment are then updated via Algorithm <ref>, which will be detailed in Section <ref>. To guarantee the reduction of the net cost, a swapping operation is approved and the matching is updated only when the net cost under the given RB assignment decreases after the swap. The above swapping operation is repeated until no further swapping is approved. The proposed matching algorithm is detailed in Algorithm <ref>, and its convergence and complexity are analyzed in the sequel. §.§.§ Convergence After multiple swapping operations, the matching between RBs and available devices evolves as follows: Ψ_0 →Ψ_1 →Ψ_2 →⋯, where Ψ_l with l=1,2,⋯ denotes the matching at the l-th swapping operation and Ψ_0 represents the initial matching. At swapping operation l, the matching changes from Ψ_l-1 to Ψ_l. Let Ĉ_l-1 and Ĉ_l denote the net costs at matching Ψ_l-1 and matching Ψ_l, respectively. Then, we have Ĉ_l<Ĉ_l-1, namely, the net cost decreases after each swap. Since the available devices always incur a certain positive cost to guarantee the successful uploading of the local gradients, the net cost is lower bounded, which implies that Algorithm <ref> converges after a finite number of swapping operations. §.§.§ Complexity Finally, we discuss the computational complexity of Algorithm <ref>. For each swapping operation, we should consider all possible swapping combinations, which requires 𝒪((U^(i))^2) operations. For each swapping attempt, we need to allocate the power, and to calculate and compare the net costs of the available devices before and after the swapping under the given RB assignment. Let 𝒪(X) represent the complexity of the power allocation algorithm (i.e., Algorithm <ref>), which will be analyzed in Section <ref>. Assume that the matching remains unchanged after V swapping operations. Then, the computational complexity of Algorithm <ref> is 𝒪((U^(i))^2VX). §.§ Power Allocation In line 6 of Algorithm <ref>, the net cost is minimized by optimizing the power allocation under a given RB assignment via Algorithm <ref>. In this subsection, we focus on how to construct Algorithm <ref>. Given the RB assignment, Problem <ref> becomes the problem in (<ref>), shown at the top of the next page. Due to the non-convexity of the constraint in (<ref>), the problem in (<ref>) is non-convex. To solve it in a more tractable manner, we perform some transformations as detailed below. Specifically, we assume that the devices in the set 𝒮_n^(i) occupying RB n are arranged in ascending order of their channel power gains.
Then, the term ∑_t∈𝒮_n^(i)1[ h_t,n^(i)<h_k,n^(i) ]ρ_t,n^*(i)p_t,n^(i)h_t,n^(i)+N_0 can be re-expressed as I_k,n(𝐩^(i)) =∑_t=1^k-1ρ_t,n^*(i) p_t,n^(i)h_t,n^(i)+N_0. On this basis, the constraint in (<ref>) can be rewritten as ∑_n∈𝒩Blog_2(1+ρ_k,n^*(i)p_k,n^(i)h_k,n^(i)/I_k,n( 𝐩^(i)))T ≥α_k^(i)L,  ∀k∈𝒦. In addition, based on the properties of the logarithmic function, (<ref>) can be further equivalently transformed into (<ref>), shown at the top of the next page. As a result, by replacing the constraint in (<ref>) with (<ref>), we can transform (<ref>) into an equivalent problem as follows min_𝐩^(i) C^ com( ρ^*(i),𝐩^(i))+C^ cmp s.t. (<ref>), (<ref>). Since the left-hand side of (<ref>) can be regarded as a difference of two concave functions, (<ref>) is a standard difference-of-concave (DC) problem. Using the CCP method, we can achieve a stationary point of the problem in (<ref>). The key idea of the CCP method is to linearize ∑_n∈𝒩log(I_k,n(𝐩^(i))) in the constraint in (<ref>) to obtain a convex constraint, so as to generate a series of convex subproblems that approximately solve the non-convex problem in (<ref>). The CCP is an iterative procedure. Let v=0,1,2,⋯ be the iteration index. At iteration v, the convex subproblem is given by the problem in (<ref>), shown at the bottom of the next page. Here, 𝐩^(i)(v) represents a solution of the problem in (<ref>) at iteration v. Since the problem in (<ref>) is convex, its optimal solution can be obtained via CVX <cit.>. Finally, Algorithm <ref> details the procedure for solving the problem in (<ref>). According to <cit.>, Algorithm <ref> converges to a stationary point of (<ref>). Fig. <ref> plots the convergence trajectory of Algorithm <ref> under five randomly generated initial points. From Fig. <ref>, we can see that the objective function value decreases as the iterations proceed and remains unchanged after several iterations (e.g., v≥ 4), that is, Algorithm <ref> converges. Furthermore, we can observe that Algorithm <ref> converges to the identical objective value under different initial points, which indicates that Algorithm <ref> is robust to the choice of initial points. Algorithm <ref>'s complexity is dominated by solving (<ref>) and can therefore be calculated as 𝒪((KN)^3.5log(1/ε)). § SOLUTION OF PROBLEM <REF> Solving Problem <ref> optimally is NP-hard due to the existence of integer variables and the fractional form of the objective function. To this end, a low-complexity suboptimal algorithm for Problem <ref> is proposed in this section. Our algorithm consists of two stages, that is, continuous relaxation and binary recovery. In the first stage, we solve the continuous relaxation of Problem <ref> via the gradient projection method. In the second stage, we obtain a feasible integer solution of Problem <ref> using the λ-representation method. §.§ Continuous Relaxation We first relax the integer constraint in (<ref>) to δ_k,j^(i)∈ [0,1], ∀k ∈𝒦, ∀j ∈𝒥_k. Based on (<ref>), the continuous relaxation of Problem <ref> is written as min_δ^(i) λΔ̂(δ^(i)) + (1-λ) Ĉ(δ^(i),ρ^*(i), 𝐩^*(i)) s.t. (<ref>), (<ref>). This is a non-convex continuous problem, and we can solve it efficiently by using the gradient projection method, as summarized in Algorithm <ref>.
Particularly, in Step 3 of Algorithm <ref>, f(δ^(i)) is the objective function of the problem in (<ref>), v represents the iteration index, and α(v) denotes the diminishing stepsize at the v-th iteration, satisfying α(v) → 0 as v→∞, ∑_v=0^∞α(v) = ∞, and ∑_v=0^∞α^2(v) < ∞. In Step 4, the projection of δ̂^(i)(v+1) onto the feasible set of the problem in (<ref>) can be achieved by tackling the following problem δ^(i)(v+1) = arg min_δ‖δ - δ̂^(i)(v+1)‖^2 s.t. (<ref>), (<ref>). The problem in (<ref>) is convex, and we can obtain an optimal solution of it via CVX. Steps 3 and 4 are repeated several times until Algorithm <ref> converges. According to <cit.>, we know that δ^(i)(v) →δ^†(i) as v→∞, where δ^†(i) is a stationary point of (<ref>). Algorithm <ref>'s complexity is dominated by solving (<ref>), which can be expressed as 𝒪((|𝒟̂|)^3.5log(1/ε)). §.§ Binary Recovery The obtained solution δ^†(i) is generally continuous and hence is an infeasible solution of Problem <ref>. In the sequel, based on δ^†(i), we construct a feasible solution of Problem <ref>. Let ℬ={(<ref>),(<ref>)} be the constraint set. By projecting δ^†(i) onto ℬ, we establish the following problem δ^*(i)≜ arg min_δ^(i)∈ℬ‖δ^(i) - δ^†(i)‖^2. The problem in (<ref>) is an integer nonlinear program. By using the λ-representation technique <cit.>, we can transform it into a linear programming problem as follows. min_δ^(i),a,b∑_k∈𝒦∑_j∈𝒥_k[ (δ_k,j^†(i))^2a_k,j+(1-δ_k,j^† (i))^2b_k,j] s.t. (<ref>), (<ref>), b_k,j = δ_k,j^(i), ∀k∈𝒦, ∀j∈𝒥_k, a_k,j + b_k,j = 1, ∀k∈𝒦, ∀j∈𝒥_k, a_k,j≥ 0, b_k,j≥ 0, ∀k∈𝒦, ∀j∈𝒥_k, where a=(a_k,j)_k∈𝒦,j∈𝒥_k and b=(b_k,j)_k∈𝒦,j∈𝒥_k. Lemma <ref> indicates the relationship between the problems in (<ref>) and (<ref>). The problems in (<ref>) and (<ref>) are equivalent. Please refer to Appendix <ref> for the proof. Based on Lemma <ref>, we know that to solve the problem in (<ref>), it is only required to solve the linear problem in (<ref>) using CVX. Thus, the corresponding computational complexity for solving the problem in (<ref>) can be expressed as 𝒪((3|𝒟̂|)^3.5log(1/ε)). §.§ Algorithm Summary Finally, Algorithm <ref> summarizes the details of solving Problem <ref>. Its complexity is the sum of the complexities of the continuous relaxation stage and the binary recovery stage, which is given by 𝒪((1 +3^3.5)|𝒟̂|^3.5log(1/ε)). In summary, the complexity of Algorithm <ref> is given by 𝒪((U^(i))^2V(KN)^3.5log(1/ε)+(1+3^3.5)|𝒟̂|^3.5log (1/ε)). § SIMULATION RESULTS Extensive simulations are conducted in this section to show the superiority of our proposed scheme (obtained by Algorithm <ref>). In the following, we first introduce the simulation setup and then detail the performance evaluation. §.§ Simulation Setup Unless otherwise stated, the simulation parameters are set as follows. We consider that the FEEL system has K=10 devices. For device k, we set the cost per Joule and the reward per data sample, respectively, to c_k=5 and q_k=0.002 if k is odd, and c_k=10 and q_k=0.005, otherwise. The CPU frequency, the number of CPU cycles to handle a sample, and the capacitance coefficient are set to f_k∈{0.1, 0.2, ⋯, 1.0} GHz, F_k=20 cycles/sample, and κ=1× 10^-28, respectively. The probability of local gradient uploading is ϵ_k=0.2 if k is odd, and ϵ_k=0.8, otherwise <cit.>. The maximum power limit of device k is set to p_k^max=10 W <cit.>. In addition, we set N=5, Q=2, B=2 MHz, N_0=10^-9 W, T=500 ms, and λ=1×10^-3. The channel power gain h_k,n follows an exponential distribution with mean 10^-5.
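For reproducibility, a minimal sketch of how the stated simulation parameters could be instantiated is given below; the random seed, the NumPy-based channel generation, and the availability draws are implementation choices, while all constants follow the setup above.

```python
import numpy as np

rng = np.random.default_rng(0)
K, N = 10, 5
# Device-side parameters as stated in the setup (k is 1-indexed in the paper).
c = [5 if k % 2 == 1 else 10 for k in range(1, K + 1)]          # cost per Joule
q = [0.002 if k % 2 == 1 else 0.005 for k in range(1, K + 1)]   # reward per sample
f = np.linspace(0.1e9, 1.0e9, K)                                 # CPU frequencies [Hz]
eps = np.array([0.2 if k % 2 == 1 else 0.8 for k in range(1, K + 1)])  # availability prob.
F_cycles, kappa, p_max = 20, 1e-28, 10.0
B, N0, T = 2e6, 1e-9, 0.5                                        # Hz, W, s
h = rng.exponential(scale=1e-5, size=(K, N))                     # channel power gains
alpha = (rng.random(K) < eps).astype(int)                        # availability draws
```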
We train an image classification model with the proposed FEEL algorithm. The corresponding datasets are MNIST and Fashion-MNIST. The MNIST dataset has 70000 handwritten grayscale images of the digits 0 to 9, and the Fashion-MNIST dataset comprises 70000 grayscale images of fashion items from ten classes, e.g., “t-shirt”, “pants”, and “bag”. For each dataset, 60000 images are used for model training and the rest form the test dataset. To simulate a non-IID distribution, we randomly allocate |𝒟_k|=1000 images of one label to device k in set 𝒦, and then randomly choose |𝒟̂_k|=200 samples from 𝒟_k in each communication round. We assume that some data samples on each device are mislabeled. Let ϱ_k denote the proportion of mislabeled data on device k. To simulate the scenario of data mislabeling, we randomly select ϱ_k·|𝒟_k| of the data samples on device k and then mislabel each of them, such as labeling the digit “1” as “0”, or labeling the item “t-shirt” as “pants”. In the simulations, we set the mislabeled proportion ϱ_k to 10% for all k∈𝒦. We apply a convolutional neural network (CNN) to conduct the image classification task. The CNN model has seven layers: two 5×5 convolution layers (the first and second layers have 10 and 20 channels, respectively, each followed by 2×2 max pooling), and three fully connected layers with ReLU activation. Using Python, the size of the gradient ĝ_k^(i) is estimated to be L = 0.56 × 10^6 bits (MNIST) or L = 1× 10^6 bits (Fashion-MNIST). We choose Adam as the optimizer and set the learning rate η to 0.001. Please note that the hyper-parameters employed in this study have been selected based on prior experimental experience and analogous experimental configurations, rather than through a process of fine-tuning. To demonstrate the superiority of our proposed scheme, the following four representative baseline schemes are investigated <cit.>: * Baseline 1: Device k randomly selects half of the local data samples for gradient computing and uploads the gradient using the RB with the minimal channel power gain. * Baseline 2: Device k randomly selects half of the local data samples for gradient computing and uploads the gradient using the RB with the maximal channel power gain. * Baseline 3: Device k selects all of the local data samples for gradient computing and uploads the gradient using the RB with the minimal channel power gain. * Baseline 4: Device k selects all of the local data samples for gradient computing and uploads the gradient using the RB with the maximal channel power gain. Note that the power allocation of the four baseline schemes can be achieved via Algorithm <ref>. §.§ Evaluation Details Fig. <ref> depicts the convergence of model training and the cumulative net cost under the proposed scheme and the baseline schemes. Here, the cumulative net cost at communication round i refers to the sum of the net costs of the first i rounds. From Fig. <ref>, we have some observations. First, the test accuracy of all schemes increases in a fluctuating manner as the iterations go on, because the local and global models are trained on the user devices and the edge server, respectively. Second, as the iterations proceed, the proposed scheme achieves significant performance gains in terms of test accuracy and cumulative net cost compared to the four baselines. For example, at the 600th communication round, the test accuracy of the proposed scheme on the MNIST dataset (resp.
51%), while the cumulative net cost is reduced by at least 19% (resp. 32%). Such performance gain stems from the fact that the proposed scheme can wisely select training data samples and appropriately allocate radio resources at each communication round. Note that the negative cumulative net costs in Fig. <ref> indicate that users make a profit under these schemes, which can encourage them to participate in model training more actively. Finally, we find that the difference in test accuracy among the four baseline schemes is not significant, indicating that if the mislabeled data samples of each device are not excluded during model training, FEEL can only maintain a relatively low level of test accuracy. The reason is that the mislabeled data samples produce incorrect gradient information, which misleads the convergence behavior of FEEL. Fig. <ref> shows the effect of the mislabeled proportion under the proposed scheme and the baseline schemes. All results in Fig. <ref> are obtained when the training process of FEEL stops at the 300th communication round. As expected, we can see that the test accuracy under all schemes significantly decreases as the mislabeled proportion increases, but this does not hold for the net cost, which is independent of the mislabeled proportion (see Problem <ref>). In addition, our solution is dramatically better than the baseline schemes, indicating that the proposed scheme is more robust in combating data mislabeling. Fig. <ref> shows the effect of the device availability under the proposed scheme and the baseline schemes. All results in Fig. <ref> are obtained when the training process of FEEL stops at the 300th communication round. From Fig. <ref>, we can see that the test accuracy improves with the device availability, but the more devices participate in local gradient uploading, the greater the cumulative cost. Note that when all devices are unavailable to upload gradients, i.e., the device availability ϵ_k=0 for all k∈𝒦, the test accuracy under each scheme is zero, since the server cannot aggregate any gradients from the devices. Our scheme in the case of ϵ_k=0 may underperform the baseline schemes because the baselines can contribute more data samples than ours. However, when some devices become available for gradient uploading (e.g., when ϵ_k ≥ 0.2), the proposed scheme becomes superior to the baselines, indicating that our solution can better adapt to changes in device availability. § CONCLUSIONS In this paper, we first rigorously model the training process of FEEL and derive its one-round convergence bound. Then, we formulate a joint resource allocation and data selection optimization problem, which, unfortunately, cannot be solved directly. To tackle this problem, we equivalently transform it into a more tractable form with some appropriate transformations and then break it into the resource allocation problem and the data selection problem. The two subproblems are mixed-integer non-convex and integer non-convex optimization problems, respectively, and it is very challenging to obtain their optimal solutions. Based on matching theory and applying the convex-concave procedure and gradient projection methods, we propose a low-complexity suboptimal algorithm for the resource allocation problem and the data selection problem, respectively. Finally, the superiority of the proposed scheme is demonstrated via extensive numerical results.
In future work, we aim to extend the results of this paper to multi-task FEEL systems. Specifically, we will explore how machine learning methods can be leveraged to achieve cross-domain data selection within this system. By doing so, we seek to enhance the system's adaptability and efficiency across different domains while ensuring that data selection remains optimized for each specific task. § PROOF OF LEMMA <REF> The derivative of L(𝐰) at 𝐰=𝐰^(i) is calculated as 𝐠^(i) =∇ L(𝐰^(i)) =1/|𝒟|∑_k∈𝒦|𝒟_k|∇ L_k(𝐰^(i)) =1/|𝒟|∑_k∈𝒦|𝒟_k|𝐠_k^(i). Here, 𝐠_k^(i)≜∇ L_k(𝐰^(i))= 1/|𝒟_k|∑_(𝐱_j,y_j) ∈𝒟_k∇ℓ(𝐰^(i),𝐱_j,y_j) denotes the ground-truth local gradient of the k-th device. The expectation of ĝ^(i) is calculated as 𝔼[ ĝ^(i)] =𝔼[ 1/|𝒟̂|∑_k∈𝒦|𝒟̂_k|/ϵ _kα _k^(i)ĝ_k^(i)] = ^(a)1/|𝒟̂|∑_k∈𝒦|𝒟̂_k| 𝔼[ α _k^(i)/ϵ _k] 𝔼[ ĝ_k^(i)] = ^(b)1/|𝒟̂|∑_k∈𝒦|𝒟̂_k| 𝔼[ ĝ_k^(i)]. Here, (a) follows from the independence between the random variables α _k^(i) and ĝ_k^(i), and (b) follows from 𝔼[α _k^(i)] = ϵ _k. According to <cit.>, ĝ_k^(i) is unbiased, making the weighted average in (<ref>) an unbiased estimate of 𝐠^(i), which completes the proof. § PROOF OF LEMMA <REF> First, based on the Lipschitz continuity of ∇ L(𝐰), we obtain L(𝐮) ≤ L(𝐯) +(∇ L(𝐯)) ^T(𝐮-𝐯) +β/2‖𝐮-𝐯‖^2, where β>0 is a modulus and (·)^T denotes the transposition operator. Based on (<ref>), we have the following inequality: L(𝐰^(i+1)) ≤ L(𝐰^(i))+(∇ L(𝐰^(i)))^T(-η^(i)𝐠̂^(i)) +β/2‖ -η^(i)𝐠̂^(i)‖^2 =L(𝐰^(i))-η^(i)(𝐠^(i))^T𝐠̂^(i)+β/2(η^(i))^2 ‖𝐠̂^(i)‖^2. Next, by taking the expectation on both sides of (<ref>), we can obtain 𝔼[ L(𝐰^(i+1)) -L(𝐰^*)] ≤𝔼[ L(𝐰^(i)) -L(𝐰^*)] - η^(i)(𝐠^(i)) ^T𝔼[ ĝ^(i)] + β(η^(i)) ^2/2𝔼[‖ĝ^(i)‖^2 ] = ^(a)𝔼[ L(𝐰^(i)) -L(𝐰^*)] -η^(i)‖𝐠^(i)‖^2+β(η^(i)) ^2/2𝔼[ ‖𝐠̂^(i)‖^2 ], where (a) is due to the unbiasedness of 𝐠̂^(i) (see Lemma <ref>). Finally, by applying the generalized triangle inequality ‖∑_j=1^nx_j‖^2 ≤ n∑_j=1^n‖ x_j ‖^2 <cit.>, we further have 𝔼[‖𝐠̂^(i)‖^2]=𝔼[‖1/|𝒟̂|∑_k∈𝒦α_k^(i)|𝒟̂_k|/ϵ_k𝐠̂_k^(i)‖^2] ≤𝔼[1/|𝒟̂|^2(∑_k∈𝒦α_k^(i)|𝒟̂_k|/ϵ_k)(∑_k∈𝒦α_k^(i)|𝒟̂_k|/ϵ_k‖𝐠̂_k^(i)‖^2)] ≤1/|𝒟̂|^2𝔼[(∑_k∈𝒦α_k^(i)|𝒟̂_k|/ϵ_k)(∑_k∈𝒦α_k^(i)|𝒟̂_k|/ϵ_k|ℳ_k^(i)|∑_(𝐱_j,y_j) ∈ℳ_k^(i)‖𝐠_k,j^(i)‖^2)] =1/|𝒟̂|^2Δ(ℳ^(i)), where Δ(ℳ^(i)) is given in (<ref>). Substituting (<ref>) into (<ref>), we obtain Lemma <ref>. § PROOF OF LEMMA <REF> Since L(𝐰) is strongly convex, we obtain L(𝐰^(i+1)) ≥ L(𝐰^(i))+(𝐠^(i))^T(𝐰^(i+1)-𝐰^(i)) +μ/2‖𝐰^(i+1)-𝐰^(i)‖^2. Minimizing both sides of (<ref>) with respect to 𝐰^(i+1), it follows that min_𝐰^(i+1)L(𝐰^(i+1)) ≥min_𝐰^(i+1)[L(𝐰^(i))+(𝐠^(i))^T(𝐰^(i+1)-𝐰^(i)) +μ/2‖𝐰^(i+1)-𝐰^(i)‖^2]. The minimization of the left-hand side of (<ref>) is achieved when 𝐰^(i+1)=𝐰^*, while the right-hand side of (<ref>) is minimized when 𝐰^(i+1)=𝐰^(i)-1/μ𝐠^(i). Based on this, we have ‖𝐠^(i)‖^2 ≥ 2μ(L(𝐰^(i))-L(𝐰^*)). Combining (<ref>) with the one-round convergence rate in (<ref>), we obtain 𝔼{L(𝐰^(i+1))-L(𝐰^*)}≤𝔼{L(𝐰^(i))-L(𝐰^*)} -2μη^(i)𝔼{L(𝐰^(i))-L(𝐰^*)}+β(η^(i))^2/2|𝒟̂|^2Δ(ℳ^(i)) =(1-2μη^(i))𝔼{L(𝐰^(i))-L(𝐰^*)} +β(η^(i))^2/2|𝒟̂|^2Δ(ℳ^(i)) ≤^(recursively)∏_t=1^i(1-2μη^(t))𝔼{L(𝐰^(1))-L(𝐰^*)} +β/2|𝒟̂|^2∑_t=1^iA^(t)(η^(t))^2Δ(ℳ^(t)). With A^(t)=∏_j=t+1^i (1-2μη^(j)), we obtain Lemma <ref>. § PROOF OF LEMMA <REF> We first point out that the term ‖δ^(i) - δ^†(i)‖^2 can be represented as ‖δ^(i) - δ^†(i)‖^2=∑_k∈𝒦∑_j∈𝒥_k( δ_k,j^(i)-δ_k,j^†(i))^2. Based on the λ-representation technique <cit.>, the integer convex function ( δ_k,j^(i)-δ_k,j^†(i))^2 is equivalent to the following linear problem min_a_k,j,b_k,j( δ_k,j^†(i))^2a_k,j+( 1-δ_k,j^†(i))^2b_k,j s.t.
b_k,j = δ_k,j^(i), a_k,j+b_k,j = 1, a_k,j≥ 0, b_k,j≥ 0. Then, on the basis of the problem in (<ref>), we can equivalently rewrite the problem in (<ref>) as min_δ^(i),a,b∑_k∈𝒦∑_j∈𝒥_k[ (δ_k,j^†(i))^2a_k,j+(1-δ_k,j^†(i))^2b_k,j] s.t. (<ref>), (<ref>), (<ref>), (<ref>), (<ref>). Note that (<ref>) is a mixed-integer problem and its constraint matrix is totally unimodular. As such, we can relax the binary constraint in (<ref>) to (<ref>) without any loss of optimality, that is, the problems in (<ref>) and (<ref>) are equivalent. Given the equivalence between the problems in (<ref>) and (<ref>), we can conclude that the problems in (<ref>) and (<ref>) are equivalent.
§ REFERENCES
[1] Cisco Systems, San Jose, CA, USA, “Cisco annual internet report (2018–2023),” Mar. 2020.
[2] W. Y. B. Lim, N. C. Luong, D. T. Hoang, Y. Jiao, Y.-C. Liang, Q. Yang, D. Niyato, and C. Miao, “Federated learning in mobile edge networks: A comprehensive survey,” IEEE Commun. Surv. Tutor., vol. 22, no. 3, pp. 2031–2063, 2020.
[3] S. K. Lo, Q. Lu, C. Wang, H.-Y. Paik, and L. Zhu, “A systematic literature review on federated machine learning: From a software engineering perspective,” ACM Comput. Surv., vol. 54, no. 5, 2021.
[4] W. Cheng, Y. Zou, J. Xu, and W. Liu, “Dynamic games for social model training service market via federated learning approach,” IEEE Trans. Comput. Social Syst., vol. 9, no. 1, pp. 64–75, 2022.
[5] K. Guo, T. Chen, S. Ren, N. Li, M. Hu, and J. Kang, “Federated learning empowered real-time medical data processing method for smart healthcare,” IEEE/ACM Trans. Comput. Biol. Bioinf., 2022.
[6] J. Zhao, X. Chang, Y. Feng, C. H. Liu, and N. Liu, “Participant selection for federated learning with heterogeneous data in intelligent transport system,” IEEE Trans. Intell. Transp. Syst., vol. 24, no. 1, pp. 1106–1115, 2022.
[7] B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas, “Communication-efficient learning of deep networks from decentralized data,” in Proc. Artificial Intelligence and Statistics (AISTATS), PMLR, 2017, pp. 1273–1282.
[8] Q. Zeng, Y. Du, K. Huang, and K. K. Leung, “Energy-efficient radio resource allocation for federated edge learning,” in Proc. IEEE Int. Conf. Commun. Workshops (ICC Workshops), Jun. 2020, pp. 1–6.
[9] B. Luo, X. Li, S. Wang, J. Huang, and L. Tassiulas, “Cost-effective federated learning in mobile edge networks,” IEEE J. Sel. Areas Commun., vol. 39, no. 12, pp. 3606–3621, 2021.
[10] X. Mo and J. Xu, “Energy-efficient federated edge learning with joint communication and computation design,” J. Commun. Inf. Netw., vol. 6, no. 2, pp. 110–124, Jun. 2021.
[11] X. Ma, H. Sun, and R. Q. Hu, “Scheduling policy and power allocation for federated learning in NOMA based MEC,” in Proc. IEEE Global Commun. Conf. (GLOBECOM), 2020, pp. 1–7.
[12] J. Ren, Y. He, D. Wen, G. Yu, K. Huang, and D. Guo, “Scheduling for cellular federated edge learning with importance and channel awareness,” IEEE Trans. Wireless Commun., vol. 19, no. 11, pp. 7690–7703, Nov. 2020.
[13] W. Shi, S. Zhou, and Z. Niu, “Device scheduling with fast convergence for wireless federated learning,” in Proc. IEEE Int. Conf. Commun. (ICC), Jun. 2020, pp. 1–6.
[14] M. M. Wadu, S. Samarakoon, and M. Bennis, “Joint client scheduling and resource allocation under channel uncertainty in federated learning,” IEEE Trans. Commun., vol. 69, no. 9, pp. 5962–5974, 2021.
[15] X. Cao, G. Zhu, J. Xu, Z. Wang, and S. Cui, “Optimized power control design for over-the-air federated edge learning,” IEEE J. Sel. Areas Commun., vol. 40, no. 1, pp. 342–358, 2021.
[16] M. Chen, Z. Yang, W. Saad, C. Yin, H. V. Poor, and S. Cui, “A joint learning and communications framework for federated learning over wireless networks,” IEEE Trans. Wireless Commun., vol. 20, no. 1, pp. 269–283, 2021.
[17] Z. Yang, M. Chen, W. Saad, C. S. Hong, and M. Shikh-Bahaei, “Energy efficient federated learning over wireless communication networks,” IEEE Trans. Wireless Commun., vol. 20, no. 3, pp. 1935–1949, 2021.
[18] C. T. Dinh, N. H. Tran, M. N. H. Nguyen, C. S. Hong, W. Bao, A. Y. Zomaya, and V. Gramoli, “Federated learning over wireless networks: Convergence analysis and resource allocation,” IEEE/ACM Trans. Netw., vol. 29, no. 1, pp. 398–409, 2021.
[19] W. Wen, Z. Chen, H. H. Yang, W. Xia, and T. Q. S. Quek, “Joint scheduling and resource allocation for hierarchical federated edge learning,” IEEE Trans. Wireless Commun., vol. 21, no. 8, pp. 5857–5872, 2022.
[20] Y. Wu, Y. Song, T. Wang, L. Qian, and T. Q. S. Quek, “Non-orthogonal multiple access assisted federated learning via wireless power transfer: A cost-efficient approach,” IEEE Trans. Commun., vol. 70, no. 4, pp. 2853–2869, 2022.
[21] X. Lin, J. Wu, J. Li, X. Zheng, and G. Li, “Friend-as-learner: Socially-driven trustworthy and efficient wireless federated edge learning,” IEEE Trans. Mobile Comput., vol. 22, no. 1, pp. 269–283, 2023.
[22] S. Feng, D. Niyato, P. Wang, D. I. Kim, and Y.-C. Liang, “Joint service pricing and cooperative relay communication for federated learning,” in Proc. IEEE Int. Conf. Green Comput. Commun. (GreenCom), 2019, pp. 815–820.
[23] W. Y. B. Lim, J. S. Ng, Z. Xiong, J. Jin, Y. Zhang, D. Niyato, C. Leung, and C. Miao, “Decentralized edge intelligence: A dynamic resource allocation framework for hierarchical federated learning,” IEEE Trans. Parallel Distrib. Syst., vol. 33, no. 3, pp. 536–550, 2022.
[24] A. Li, L. Zhang, J. Tan, Y. Qin, J. Wang, and X.-Y. Li, “Sample-level data selection for federated learning,” in Proc. IEEE INFOCOM, May 2021, pp. 1–10.
[25] T. Tuor, S. Wang, B. J. Ko, C. Liu, and K. K. Leung, “Overcoming noisy and irrelevant data in federated learning,” in Proc. Int. Conf. Pattern Recognit. (ICPR), Jan. 2021, pp. 5020–5027.
[26] Y. He, J. Ren, G. Yu, and J. Yuan, “Importance-aware data selection and resource allocation in federated edge learning system,” IEEE Trans. Veh. Technol., vol. 69, no. 11, pp. 13593–13605, Nov. 2020.
[27] A. Albaseer, M. Abdallah, A. Al-Fuqaha, and A. Erbad, “Threshold-based data exclusion approach for energy-efficient federated edge learning,” in Proc. IEEE Int. Conf. Commun. Workshops (ICC Workshops), Jun. 2021, pp. 1–6.
[28] A. M. Albaseer, M. Abdallah, A. Al-Fuqaha, and A. Erbad, “Fine-grained data selection for improved energy efficiency of federated edge learning,” IEEE Trans. Netw. Sci. Eng., vol. 9, no. 5, pp. 3258–3271, 2021.
[29] Y. Mao, J. Zhang, and K. B. Letaief, “Dynamic computation offloading for mobile-edge computing with energy harvesting devices,” IEEE J. Sel. Areas Commun., vol. 34, no. 12, pp. 3590–3605, 2016.
[30] S. Han, X. Xu, X. Tao, and P. Zhang, “Joint power and sub-channel allocation for secure transmission in NOMA-based mMTC networks,” IEEE Syst. J., vol. 13, no. 3, pp. 2476–2487, 2019.
[31] M. Zeng, A. Yadav, O. A. Dobre, and H. V. Poor, “Energy-efficient joint user-RB association and power allocation for uplink hybrid NOMA-OMA,” IEEE Internet Things J., vol. 6, no. 3, pp. 5119–5131, Jun. 2019.
[32] B. Zhu, K. Chi, J. Liu, K. Yu, and S. Mumtaz, “Efficient offloading for minimizing task computation delay of NOMA-based multiaccess edge computing,” IEEE Trans. Commun., vol. 70, no. 5, pp. 3186–3203, 2022.
[33] S. Zhang, X. Wang, Z. Shi, and J. Liu, “Reinforcement learning based RSS-threshold optimization for D2D-aided HTC/MTC in dense NOMA systems,” IEEE Trans. Wireless Commun., 2023.
[34] Z. Shi, J. Liu, S. Zhang, and N. Kato, “Multi-agent deep reinforcement learning for massive access in 5G and beyond ultra-dense NOMA system,” IEEE Trans. Wireless Commun., vol. 21, no. 5, pp. 3057–3070, 2021.
[35] Z. Shi and J. Liu, “Sparse code multiple access assisted resource allocation for 5G V2X communications,” IEEE Trans. Commun., vol. 70, no. 10, pp. 6661–6677, 2022.
[36] M. Avriel, Nonlinear Programming: Analysis and Methods, ser. Prentice-Hall Series in Automatic Computation. Englewood Cliffs, NJ, USA: Prentice-Hall, 1976.
[37] E. Bodine-Baron, C. Lee, A. Chong, B. Hassibi, and A. Wierman, “Peer effects and stability in matching markets,” in Proc. Lect. Notes Comput. Sci. (LNCS), 2011.
[38] M. Grant and S. Boyd, “CVX: Matlab software for disciplined convex programming, version 2.1,” <http://cvxr.com/cvx>, Mar. 2014.
[39] T. Lipp and S. Boyd, “Variations and extension of the convex–concave procedure,” Optim. Eng., vol. 17, no. 2, pp. 263–287, Jun. 2016.
[40] Y. Sun, P. Babu, and D. P. Palomar, “Majorization-minimization algorithms in signal processing, communications, and machine learning,” IEEE Trans. Signal Process., vol. 65, no. 3, pp. 794–816, 2017.
[41] Y. Jiang, Z. Huang, and D. H. Tsang, “On power-peak-aware scheduling for large-scale shared clusters,” IEEE Trans. Big Data, vol. 6, no. 2, pp. 412–426, 2020.
[42] C.-Y. Hsu, S.-Y. Shaw, and H.-J. Wong, “Refinements of generalized triangle inequalities,” J. Math. Anal. Appl., vol. 344, no. 1, pp. 17–31, 2008.
[43] R. R. Meyer, “A class of nonlinear integer programs solvable by a single linear program,” SIAM J. Control Optim., vol. 15, no. 6, pp. 935–946, 1977.
http://arxiv.org/abs/2407.03050v1
20240703121837
Semantic-Aware Power Allocation for Generative Semantic Communications with Foundation Models
[ "Chunmei Xu", "Mahdi Boloursaz Mashhadi", "Yi Ma", "Rahim Tafazolli" ]
eess.SP
[ "eess.SP" ]
Semantic-Aware Power Allocation for Generative Semantic Communications with Foundation Models Chunmei Xu1, Mahdi Boloursaz Mashhadi1, Yi Ma1, Rahim Tafazolli1 15GIC & 6GIC, Institute for Communication Systems (ICS), University of Surrey, Guildford, U.K. 1e-mails: {chunmei.xu; m.boloursazmashhadi; y.ma; r.tafazolli}@surrey.ac.uk July 8, 2024 § ABSTRACT Recent advancements in diffusion models have made a significant breakthrough in generative modeling. The combination of generative models and semantic communication (SemCom) enables high-fidelity semantic information exchange at ultra-low rates. A novel generative SemCom framework for image tasks is proposed, wherein pre-trained foundation models serve as semantic encoders and decoders for semantic feature extraction and image regeneration, respectively. The mathematical relationship between the transmission reliability and the perceptual quality of the regenerated image, as well as the semantic values of the semantic features, are modeled; these are obtained by conducting numerical simulations on the Kodak dataset. We also investigate the semantic-aware power allocation problem, with the objective of minimizing the total power consumption while guaranteeing semantic performance. To solve this problem, two semantic-aware power allocation methods are proposed, based on constraint decoupling and bisection search, respectively. Numerical results show that the proposed semantic-aware methods demonstrate superior performance compared to the conventional one in terms of total power consumption. Semantic communication, generative foundation models, semantic-aware power allocation. § INTRODUCTION Semantic communications (SemCom) aim at precise content reconstruction with equivalent semantics, which is fundamentally different from conventional communications targeting accurate source recovery <cit.>. SemCom has the potential to achieve ultra-low compression rates and extremely high transmission efficiency, and it is gaining substantial interest from both academia and industry <cit.>. Although efforts to develop a semantic information theory have been ongoing since the establishment of Shannon's theory, a comprehensive and universal theory remains elusive. Nevertheless, the remarkable advancements in artificial intelligence (AI) have paved the way for the development of SemCom systems, particularly in the realm of deep learning-based SemCom. The end-to-end architecture is widely used to jointly train the neural network (NN) based semantic encoder and decoder, facilitating the formation and sharing of the knowledge base between them. The concept of deep joint source and channel coding (JSCC) was first proposed for image tasks by adopting an auto-encoder NN <cit.>, and numerous variants of deep JSCC were subsequently developed for various types of sources and channel models <cit.>. These deep JSCC approaches have demonstrated superior performance over conventional separate source and channel coding schemes in terms of distortion metrics such as the mean square error (MSE), peak signal-to-noise ratio (PSNR), and multi-scale structural similarity (MS-SSIM).
However, the distortion may no longer serve as the primary performance indicator for emerging applications with inference goals, where precisely conveying the semantic information becomes more important. To preserve the semantics, the authors in <cit.> proposed to integrate the generative adversarial network (GAN) into SemCom systems for signal regeneration. It was shown to significantly outperform the deep JSCC technique in terms of both distortion and perceptual quality. Recent advancements in state-of-the-art diffusion models have marked a significant breakthrough in generative modeling, showing impressive results in regenerating images <cit.>, audio <cit.>, and videos <cit.>. The diffusion model has been adopted in SemCom systems for synthesizing semantic-consistent signals, utilizing a loss function that combines the MSE and the Kullback-Leibler (KL) divergence <cit.>. This approach demonstrated high robustness to poor channel conditions and outperformed existing methods in generating high-quality images while preserving semantic information. However, the adoption of end-to-end architectures to learn a deep learning-based SemCom system faces two challenges. First, the necessity of employing analog modulation for training, owing to its feasibility and convenience for gradient computation and back-propagation, together with the joint source and channel coder architecture, conflicts with modern digital communication systems built on the open systems interconnection (OSI) model. Second, intensive computations are required in the training phase to account for wireless channel characteristics, which potentially results in poor generalization performance. Concurrently, the field of AI is undergoing a paradigm shift with the emergence of foundation models such as bidirectional encoder representations from transformers (BERT) and generative pre-trained transformers (GPT). These foundation models, trained on vast and diverse datasets, demonstrate the ability to capture general patterns, and thereby form the knowledge base. Notably, generative diffusion foundation models such as DALL·E show promise in synthesizing high perceptual quality images with ultra-low-rate prompt exchanges <cit.>. Inspired by these, we propose the generative SemCom framework for image tasks by utilizing powerful pre-trained foundation models to extract semantic features and regenerate signals at the encoder and decoder, respectively. Within this framework, transmission reliability becomes the sole factor influencing the perceptual quality of the regenerated images, with their mathematical relationship modeled as a non-decreasing perception-error function. Semantic values of semantic data streams are defined to measure the semantic information accordingly. We investigate the semantic-aware resource allocation problem in the channel-uncoded case, aiming at minimizing the total power consumption while ensuring the perceptual quality of regenerated images. The rest of this paper is organized as follows. Section II introduces the proposed generative SemCom framework for image tasks and defines semantic values. Section III provides the semantic-aware power allocation problem formulation, and Section IV presents the proposed methods. Numerical results are given in Section V to demonstrate the performance of the proposed framework. Finally, Section VI concludes this paper. § GENERATIVE SEMCOM FRAMEWORK The proposed generative SemCom framework for image tasks, as depicted in Fig.
<ref>, consists of the semantic encoder ℱ_en, the transmission scheme 𝒯, and the semantic decoder ℱ_de. Before giving a detailed description of the generative SemCom framework, we introduce the semantic metric based on contrastive language-image pre-training (CLIP) similarity <cit.> to evaluate the perceptual quality of the regenerated image, which is written as P≜𝔼[ CLIP(𝐗,𝐗̂)]= 1-𝔼[F_clip(𝐗)· F_clip(𝐗̂)/‖ F_clip(𝐗)‖‖ F_clip(𝐗̂)‖], where P is within the range [0,1]. 𝐗 and 𝐗̂ denote the source and the regenerated images, respectively. F_clip(·) refers to a pre-trained model trained on a large text-image dataset <cit.>. §.§ Semantic Encoder The source image is encoded into two distinct semantic features, namely the textual prompt and the edge map features, utilizing two semantic extractors based on pre-trained foundation models. The textual prompt is extracted by textual transform coding via prompt inversion <cit.>. The edge map feature is extracted using Holistically-nested Edge Detection (HED) with a non-linear transform code (NTC) model <cit.> for further compression. For notational simplicity, we use subscripts 1 and 2 to replace subscripts prompt and edge in the sequel. The ith extracted feature can be expressed by 𝐒_i=F_en,i(𝐗|θ_i^*), where F_en,i(𝐗|θ_i^*) is the ith pre-trained foundation model with θ_i^* being the NN parameters. To ensure compatibility with existing digital communication systems, the semantic feature 𝐒_i is converted into the bit sequence denoted as 𝐊_i. We have 𝐊_i=ℬ(𝐒_i), where ℬ(·) is a binary mapping function such as ASCII or Unicode encoding, or quantization. In SemCom systems, the semantic data streams contribute unequally to the perceptual quality of the regenerated image, which can be measured by the semantic metric closely related to the inference goal or task at the receiver. This is fundamentally different from conventional communication systems. Denote the semantic value of the ith semantic data stream as L_i to quantify its semantic information in terms of a specific semantic metric. Generally, the semantic data stream with a larger L_i has a greater impact on the perceptual quality of the regenerated signal, indicating its greater importance. §.§ Transmission Scheme Due to the different importance of the semantic data streams, multi-stream transmissions are considered in the proposed generative SemCom framework. The received data streams are expressed as [𝐊̂_1,𝐊̂_2]=𝒯([𝐊_1,𝐊_2]), where 𝒯(·) is the transmission scheme, which may comprise the channel coding/decoding and modulation/demodulation components. The semantic data streams are considered to be transmitted in an orthogonal manner to mitigate the inter-stream interference. Despite this, errors may still occur in the received semantic data stream 𝐊̂_i due to the fading and noise of the wireless channels. §.§ Semantic Decoder In the semantic decoder, the pre-trained generative foundation model F_de, i.e., ControlNet <cit.>, built upon the Stable Diffusion model <cit.>, is employed to synthesize the received semantic data streams into an image 𝐗̂. In the channel-uncoded case, the received data streams, regardless of transmission errors, are processed by the generative foundation model for signal synthesis, as the transmission errors cannot be identified and corrected. The received semantic data streams 𝐊̂_i are first reconverted into the semantic features 𝐒̂_i=ℬ^-1(𝐊̂_i) with ℬ^-1(·) being the inverse operation of ℬ(·).
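As a toy illustration of the binary mapping ℬ(·) and its inverse for the textual prompt stream, the following sketch uses a plain UTF-8 bit encoding; this is an assumed stand-in, since the framework only requires some invertible mapping such as the ASCII/Unicode encodings or quantization mentioned above:

def to_bits(prompt):
    # Toy B(.): encode a textual prompt into a bit sequence (MSB first).
    return [(byte >> k) & 1 for byte in prompt.encode("utf-8") for k in range(7, -1, -1)]

def from_bits(bits):
    # Toy B^{-1}(.): reassemble the bytes and decode, replacing corrupted characters.
    data = bytes(
        sum(b << (7 - k) for k, b in enumerate(bits[i:i + 8]))
        for i in range(0, len(bits) - len(bits) % 8, 8)
    )
    return data.decode("utf-8", errors="replace")

bits = to_bits("a photo of a lighthouse on a rocky coast")  # hypothetical prompt
assert from_bits(bits).startswith("a photo")

Bit errors introduced by the channel act directly on such sequences, which is why the BER enters the perception analysis below.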
𝐒̂_i are forwarded to the generative foundation model F_de for synthesizing 𝐗̂, which can be expressed as 𝐗̂=F_de(𝐒̂_1,𝐒̂_2|ω^*)=ℱ_de(𝐊̂_1,𝐊̂_2), where ω^* are the NN parameters of the ControlNet. Denote L̂_i as the semantic value of the ith received semantic data stream 𝐊̂_i. The semantic information is lossy due to the transmission errors, thus we have L̂_i≤ L_i. Given the semantic encoder and decoder, the transmission scheme and the wireless channels remain to influence the perceptual quality of the regenerated image. As a consequence, the transmission reliability becomes the factor impacting the achieved perceptual quality. Denoting the bit error rate (BER) of the jth bit of 𝐊̂_i as ψ_ij, the perception value P defined in (<ref>) becomes a function of ψ_ij. Assume that the perception value P is non-decreasing with respect to (w.r.t.) the BER ψ_ij. The semantic value of the ith transmitted semantic data stream 𝐊_i is defined as L_i = 1-P_i, where P_i is the perception value of the regenerated signal 𝐗̂_i^*=ℱ_de(𝐊_i) synthesized only by the ith semantic data stream 𝐊_i. For the received semantic data stream 𝐊̂_i, the semantic value is defined as L̂_i({ψ_ij}_j)= 1-P_i({ψ_ij}_j), where P_i({ψ_ij}_j) is the perception value of 𝐗̂_i=ℱ_de(𝐊̂_i) synthesized only by 𝐊̂_i. § PROBLEM FORMULATIONS OF SEMANTIC-AWARE POWER ALLOCATION The transmission reliability significantly affects the perceptual quality of the regenerated images and the consumption of the resources. In contrast to conventional communications that treat the transmitted data streams equally, SemCom systems offer the opportunity to exploit the semantic importance to enhance resource efficiency. In this paper, we investigate the semantic-aware power allocation problem for generative SemCom systems at ultra-low rates. The objective is to minimize total power consumption while guaranteeing semantic performance. Let z_i be the transmitted symbol of the ith semantic data stream with unit energy such that 𝔼{ z_iz_i^H} =1. The ith received semantic signal can be written as y_i=√(q_i)h_iz_i+n_i, where h_i is the channel coefficient, assumed to be quasi-static and modelled as h_i = √(h_0(d/d_0)^-α)h̃_i, where h_0(d/d_0)^-α is the path loss at distance d with h_0 being the path loss at the reference distance d_0. h̃_i and n_i are the Rayleigh fading coefficient with unit covariance and the Gaussian noise following the distribution n_i∼𝒞𝒩(0,σ_i^2), respectively. q_i is the allocated power for each symbol of the ith semantic data stream. Under the quasi-static channel, the received signal-to-noise ratio (SNR) of each symbol is equal, which is given by SNR_i=q_i| h_i|^2/σ_i^2. The BER of each bit of the ith semantic data stream is given by ψ_i = a_i/log_2M_iQ(√(b_iSNR_i)), where Q(x)=1/√(2π)∫_x^∞e^(-u^2/2)du is the Q-function. The parameters a_i and b_i depend on the adopted modulation type of order M_i, and are listed in <cit.>. The problem of minimizing the total power consumption while ensuring the semantic performance P̅ in the uncoded case can be formulated as (𝒫1): min_q_i ∑_i=1^I K_iq_i s.t. P({ψ_i}_i)≤P̅. To solve this problem, the following corollary is established according to Assumption <ref>, since the BER ψ_i is monotonically decreasing with the allocated power q_i. The optimal solutions q_i^* to problem 𝒫1 satisfy the equality in constraint (<ref>). § SEMANTIC-AWARE POWER ALLOCATION METHODS This section presents two semantic-aware power allocation methods, namely the semantic-aware proportional method and the semantic-aware bisection method.
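Since both methods build on inverting this BER model, a small numerical sketch of ψ_i = a_i/log_2(M_i) · Q(√(b_i·SNR_i)) is given below. The values of a_i and b_i used here are the common square-QAM approximation and are our assumption; the paper takes them from the table in its cited reference.

import math

def Q(x):
    # Gaussian Q-function expressed through the complementary error function.
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def ber(snr_linear, M):
    # psi = a / log2(M) * Q(sqrt(b * SNR)); a and b below are the usual
    # square M-QAM approximation (an assumption, not the paper's table).
    a = 4.0 * (1.0 - 1.0 / math.sqrt(M))
    b = 3.0 / (M - 1.0)
    return a / math.log2(M) * Q(math.sqrt(b * snr_linear))

print(ber(10 ** (20 / 10), 16))  # 16-QAM at 20 dB SNR: about 3e-6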
§.§ Semantic-Aware Proportional Method By assuming the independence of the semantic data streams, the constraint (<ref>) can be decoupled into I independent constraints, each corresponding to the semantic value constraint of an individual received data stream. Problem (𝒫1) is then relaxed into (𝒫2): min_q_i ∑_i=1^I K_iq_i s.t. L̂_i(ψ_i) ≥L̅_i, ∀ i∈ℐ, where L̅_i is the semantic value requirement of the ith received semantic data stream corresponding to P̅. Based on Assumption <ref>, the semantic value of the received semantic data stream is non-increasing w.r.t. the BER ψ_i. Therefore, the optimal solutions to 𝒫2 are obtained when the equalities of constraints (<ref>) hold. Denoting ψ_i^* as the solution obtained by solving the equation L̂_i(ψ_i) = L̅_i, the optimal solutions can be readily obtained by substituting ψ_i^* back into (<ref>), which gives q_i^* = σ_i^2/b_i| h_i|^2(Q^-1(log_2M_i/a_iψ_i^*))^2. §.§ Semantic-Aware Bisection Method Based on Corollary <ref>, problem 𝒫1 can be reduced to (𝒫3): min_ψ_1, ψ_2 ∑_i=1^2 K_iσ_i^2/2| h_i|^2(Q^-1(ψ_i))^2 s.t. P(ψ_1, ψ_2)=P̅. The feasible solutions (ψ_1, ψ_2) form a line on the perception-error surface. For any two feasible solutions (ψ^(1)_1, ψ^(1)_2) and (ψ^(2)_1, ψ^(2)_2), we have ψ^(1)_2≤ψ^(2)_2 if ψ^(1)_1≥ψ^(2)_1. The main idea is to find the solution at which the gradient of the objective function is 0, which is obtained by the bisection search technique. Denoting the two ends of the line as (ψ^L_1,ψ^L_2) and (ψ^R_1,ψ^R_2) where ψ^R_1≥ψ^L_1, the procedure to obtain the solution is summarized in Algorithm <ref>. § NUMERICAL RESULTS To transmit the textual prompt and edge map data streams, the communication parameters are set as follows. The modulations of the two semantic data streams are the same. Both 8-QAM and 16-QAM modulation schemes are considered. The channel parameters are set to d=100 m, d_0=1 m, h_0=-30 dB and α=3.4. The noise power is set to σ_i^2=-110 dBm. Fig. <ref> depicts two regenerated image examples using the proposed generative SemCom framework to demonstrate the achieved perceptual quality. As the BERs increase, the semantic performance in terms of the CLIP metric degrades. The compression rates achieved are 0.0278 and 0.02597 bits per pixel (BPP), indicating that ultra-low rates can be achieved within the proposed generative SemCom framework. It is difficult to explicitly obtain the mathematical relationship between the BERs and the perceptual quality of the regenerated image. Instead, we conduct numerical simulations on the Kodak dataset <cit.> to empirically derive this function. As shown in Fig. <ref>, the perception-error function is non-decreasing with the BERs ψ_i, and is obtained by curve fitting using the numerical simulation points. Fig. <ref> depicts the defined semantic values of both the transmitted and the received semantic data streams. The semantic values of the textual prompt and edge map streams are L_1=0.5887 and L_2=0.3596, respectively. For the received semantic data streams, their semantic values, i.e., L̂_1 and L̂_2, are non-increasing with the BERs ψ_i. In addition, the prompt feature has a greater impact on the CLIP performance compared to the edge map feature. However, the edge map feature exhibits greater vulnerability to the BER than the prompt feature due to its larger data length. The proposed semantic-aware proportional and bisection methods are compared with the conventional semantic-unaware one that treats the semantic data streams equally.
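To make the bisection procedure of Algorithm <ref> concrete, the sketch below implements it in Python. The perception-error surface P(ψ_1, ψ_2) is empirical in the paper (curve-fitted on Kodak), so a smooth stand-in surface with the same non-decreasing property is assumed here, and the channel constants are collapsed into placeholder coefficients c_i:

import math
from scipy.optimize import brentq
from scipy.stats import norm

def P(psi1, psi2):
    # Stand-in perception-error surface (assumption), non-decreasing in both BERs.
    return 0.2 + 0.6 * (1 - math.exp(-40 * psi1)) + 0.3 * (1 - math.exp(-120 * psi2))

def psi2_of_psi1(psi1, P_bar):
    # Solve P(psi1, psi2) = P_bar for psi2 along the feasibility line.
    return brentq(lambda p2: P(psi1, p2) - P_bar, 1e-12, 0.5)

def total_power(psi1, P_bar, K=(100, 3000), c=(1.0, 1.0)):
    # Objective of (P3): sum_i K_i c_i (Q^{-1}(psi_i))^2, with c_i collecting the
    # noise/channel/modulation constants (placeholders here); Q^{-1} = norm.isf.
    psi2 = psi2_of_psi1(psi1, P_bar)
    return K[0] * c[0] * norm.isf(psi1) ** 2 + K[1] * c[1] * norm.isf(psi2) ** 2

def bisection(P_bar, lo, hi, tol=1e-6):
    # Bisection on the sign of the objective's derivative along the line,
    # assuming the objective is unimodal between the two ends (lo, hi).
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        eps = 1e-6
        grad = total_power(mid + eps, P_bar) - total_power(mid - eps, P_bar)
        if grad > 0:
            hi = mid
        else:
            lo = mid
    return lo

psi1_star = bisection(P_bar=0.6, lo=0.005, hi=0.027)  # feasible line ends for this surface

In the comparisons reported next, this bisection solution is evaluated against two baseline allocations.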
For the semantic-unaware method, the SNRs of both semantic data streams are the same. For the semantic-proportional method, the allocated power is obtained based on (<ref>), where L̂_1/L_1=L̂_2/L_2. The total power consumption comparison results are given in Fig. <ref>, showing that the total power consumption decreases as the performance requirement P̅ increases, i.e., as the perception-error constraint is relaxed. Under stringent semantic performance requirements, the semantic-proportional method consumes lower power than the conventional approach. However, this performance advantage diminishes as P̅ increases. The proposed semantic-aware bisection method consistently outperforms the semantic-aware proportional and the semantic-unaware methods. Moreover, it can be observed that higher modulation orders lead to increased power consumption due to lower transmission reliability. Notably, the performance advantage of the proposed semantic-aware methods over the semantic-unaware one becomes more evident as the modulation order increases. § CONCLUSION A generative SemCom framework for image tasks was proposed in this work, leveraging pre-trained foundation models for both the semantic encoder and the semantic decoder. Given the semantic encoder and decoder, the transmission reliability emerged as the primary factor influencing the perceptual quality of the regenerated images. Their mathematical relationship was modeled as a perception-error function, and the semantic values of the semantic data streams were defined. The perception-error function and the semantic values were empirically derived through numerical simulations on the Kodak dataset, providing a quantitative basis for further analysis. We investigated the semantic-aware power allocation problem and proposed the semantic-aware proportional and bisection methods. Numerical results demonstrated that the proposed semantic-aware bisection method consistently outperformed the semantic-aware proportional method and the conventional approach that treats the data streams equally with the same SNR. The performance advantages of the proposed semantic-aware methods become more pronounced as the modulation order increases.
http://arxiv.org/abs/2407.01993v1
20240702070710
Analysis of short range interactions between $u/d$ quarks in the $NN$, $D_{03}$, and $D_{30}$ systems
[ "Qi-Fang Lü", "Yu-Bing Dong", "Peng-Nian Shen", "Zong-Ye Zhang" ]
hep-ph
[ "hep-ph", "hep-ex", "nucl-th" ]
lvqifang@hunnu.edu.cn Department of Physics, Hunan Normal University, Changsha 410081, China Key Laboratory of Low-Dimensional Quantum Structures and Quantum Control of Ministry of Education, Changsha 410081, China Key Laboratory for Matter Microstructure and Function of Hunan Province, Hunan Normal University, Changsha 410081, China dongyb@ihep.ac.cn Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049, China School of Physical Sciences, University of Chinese Academy of Sciences, Beijing 101408, China shenpn@ihep.ac.cn Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049, China zhangzy@ihep.ac.cn Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049, China § ABSTRACT The dynamic mechanism of the short range interaction between u/d quarks is still an open and challenging problem. In order to reveal this quark dynamics, we perform a systematic analysis of the NN, D_03, and D_30 systems in the (extended) chiral SU(3) constituent quark models. By comparing results calculated with different models and different parameter sets, the effects of the one gluon exchange and vector meson exchange terms are carefully examined. The results indicate that the vector meson exchange interactions dominate the short range interactions between u/d quarks, while a small residual one gluon exchange coupling strength is also allowed. Analysis of short range interactions between u/d quarks in the NN, D_03, and D_30 systems Zong-Ye Zhang July 8, 2024 ========================================================================================= § INTRODUCTION Understanding the strong interactions between constituent quarks is a fundamental and intriguing topic in hadron physics. It is now clear that hadronic dynamics is governed by quantum chromodynamics (QCD), which shows asymptotic freedom at short range and color confinement at long distance. Owing to the complexity of its non-perturbative properties, rigorous solutions of QCD in the low-energy region are extremely difficult, and one has to resort to lattice QCD calculations, effective field theories, and various phenomenological models. In particular, some constituent quark models <cit.> were proposed in the spirit of QCD, and they have gained considerable success in describing the internal structures and properties of hadrons, such as baryon spectroscopy and nucleon-nucleon scattering. Despite these achievements, several long-standing problems and disputes exist in the constituent quark models <cit.>, especially for the short-range interactions between light quarks. Resolving these difficulties is undoubtedly important for further understanding the strong interaction in the low-energy QCD region. Historically, the study of the origin of the strong interaction began with investigating the nuclear force on the hadron level, where the nucleon-nucleon scattering data were used as a benchmark for judging theoretical models. The most successful model is the so-called one-boson-exchange model, where the pion and σ meson exchanges, as well as the ρ and ω vector meson exchanges, are introduced to reproduce the abundant experimental data. With a large number of coupling strengths and cutoff parameters, the nucleon-nucleon interaction can be well described <cit.>. This model has also been extended to various hadronic systems to study exotic states.
One of the achievements in the study of new exotica is the strong indication that vector meson exchange interactions are mainly responsible for the short range interaction and make a certain contribution to the formation of some loosely bound molecular states <cit.>. Since QCD was established, the study of the strong interaction has entered the quark level. One of the successful models is the constituent quark model, in which one can adopt a few unified parameters to describe as many properties of hadronic systems as possible and investigate the fundamental quark dynamics naturally. It is found that the short range interaction between hadrons, in particular the repulsive core, is closely related to the one-gluon exchange and the quark exchange. Moreover, on the quark level, it is easy to deal with the quark exchange effect and the hidden color configurations that are absent on the hadron level. To be specific, different quark-quark (q-q) potentials are used in different constituent quark models. In the model we use, namely the chiral SU(3) or extended chiral SU(3) constituent quark model, the pseudoscalar- and scalar-chiral-field-induced q-q potentials and the confining q-q potential are mainly responsible for the medium and long range interactions, while the one gluon exchange (OGE) and/or vector meson exchange (VME) induced interactions dominate the short range interaction. An inevitable question arises: which interaction provides the short range interaction between u/d quarks, OGE, VME, or both? Until now, the dynamic mechanism of the short range interactions between u/d quarks has remained an open and challenging problem. In fact, in the bound state problem of the NN system, the roles of the OGE and VME potentials in the nucleon-nucleon interaction have been intensively investigated, but a consensus on the provider of the short range interaction has not been reached. One of the main reasons is that the deuteron studied there is a loosely bound state of the NN system and its size is quite large. Therefore, the separation of the two interacting nucleons is so large that they interact primarily through the long range interaction arising from pion exchange and are very insensitive to the short range interactions. To reveal the mechanism of the short range interaction between u/d quarks, it is better to find a system which is not only more compact than the deuteron but also has experimental data about its properties. The dibaryon D_03 is just such a system, where the notation D_IJ stands for a dibaryon with isospin I and spin J. In order to make the quark model more reliable and more applicable, it is also important to check and adjust the model parameters as much as possible against the observed data of other dibaryons. For this reason, the mirror state D_30 of the dibaryon D_03 should also be studied, where the short range interactions of the vector-meson-induced potentials are completely different from those in D_03. In fact, there has been much research on these two states in the literature, both experimental <cit.> and theoretical <cit.>. Therefore, we want to emphasize that the D_03 and D_30 systems provide excellent platforms to investigate the short range interactions between u/d quarks at the quark level. In this work, we perform a systematic analysis of the NN, D_03, and D_30 systems in the (extended) chiral SU(3) constituent quark models.
Firstly, for each quark model mentioned, we adjust the model parameters to get the best fit of the available data on the ground state masses of light baryons, the binding energy of the deuteron, and the NN phase shifts. Then, in terms of the same set of adjusted model parameters, we calculate the properties of the D_03 and D_30 dibaryons. Finally, by comparing the results from various interaction models, namely models with different interactions, we study the effects of the OGE and VME interactions. This paper is organized as follows. In Sec. <ref>, the theoretical formalism of the (extended) chiral SU(3) constituent quark model is introduced. We present the results and discussions of the NN, D_03, and D_30 systems in Sec. <ref>. A summary is given in the last section. § FORMALISM §.§ Interactions On the quark level, the interactions between constituent quarks are mediated by the gluon and the chiral fields. Here, we choose the chiral SU(3) quark model, where the interaction Lagrangian between the quark and the chiral fields can be written as L_I^ch = -g_chψ̅ (∑_a=0^8 λ_a σ_a + i γ_5 ∑_a=0^8 λ_a π_a ) ψ, where g_ch is the coupling constant of the quark with the chiral fields, ψ is the quark field, and σ_a and π_a (a=0,1,...,8) are the scalar and pseudo-scalar nonet chiral fields, respectively. According to this Lagrangian, the corresponding Hamiltonian can be obtained as H_I^ch = g_ch F(q^2) ψ̅ (∑_a=0^8 λ_a σ_a +i γ_5 ∑_a=0^8 λ_a π_a ) ψ. Here, a form factor F(q^2) is introduced to describe the structures of the chiral fields, which is usually taken as F(q^2)= (Λ^2/(Λ^2+q^2) )^1/2, where Λ is the cutoff mass, which corresponds to the scale of the chiral symmetry breaking. From this Hamiltonian, one can easily derive the quark-quark interactions V^σ_a and V^π_a arising from the chiral fields, which mainly provide the medium range interaction. Besides the chiral-field-induced interactions, the OGE potential V^OGE and the phenomenological confining potential V^conf are still needed, which correspond to the short range and long range interactions, respectively. Consequently, the total Hamiltonian for a six-quark system in the chiral SU(3) quark model can be given by H=∑_i=1^6 T_i -T_G + ∑_j>i=1^6 (V_ij^OGE +V_ij^conf+V_ij^ch ), with V_ij^ch=∑_a=1^8 V_ij^σ_a + ∑_a=1^8 V_ij^π_a, where T_i and T_G are the kinetic energy operators for the i-th quark and the center of mass motion, respectively. V_ij^OGE, V_ij^conf, and V_ij^ch stand for the OGE, confinement, and chiral-field-induced interactions between the i-th and j-th quarks, respectively. To better study the short-range interaction mechanism, the interactions between the quark and the vector meson fields are introduced in the extended chiral SU(3) quark model <cit.>. The interaction Lagrangian is L_I^ chv = -g_ chvψ̅γ_μλ_a ρ^μ_a ψ -f_ chv/2M_Nψ̅σ_μνλ_a ∂^μρ^ν_a ψ. Here the ρ_a (a=0,1,⋯,8) represent the vector nonet fields, and g_ chv and f_ chv stand for the coupling constants of the vector and tensor terms between the quark and the vector fields, respectively. After adding the vector meson exchange interaction, the chiral-field-induced effective interaction between the i-th and j-th quarks, V^ ch, reads V_ij^ ch = ∑_a=0^8 V_ij^σ_a + ∑_a=0^8 V_ij^π_a + ∑_a=0^8 V_ij^ρ_a, with V^ρ_a being the quark-quark interaction potential induced by vector-meson exchanges. The vector-meson exchange potential V^ρ_a is also of short range and competes with the OGE interaction. Therefore, the extended chiral SU(3) quark model is more suitable for investigating the short range interaction mechanism.
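For orientation, we note that the chiral-field-induced potentials have a generic Yukawa-with-cutoff structure. The expression below is a schematic sketch of the σ_a-exchange central part as it commonly appears in chiral quark models, written here only for illustration and not necessarily in the exact convention of the present work: V_ij^σ_a(r) ∼ -g_ch^2/4π · Λ^2/(Λ^2-m_σ_a^2) · m_σ_a [ Y(m_σ_a r) - (Λ/m_σ_a) Y(Λ r) ] (λ_i^a λ_j^a), with Y(x)=e^-x/x. The π_a- and ρ_a-exchange pieces carry, in addition, spin-spin, tensor and (for the vector fields) spin-orbit structures.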
The explicit expressions of these potentials can be found in Refs. <cit.>. §.§ Parameters In our calculation, the η and η' mesons are mixed from η_1 and η_8, and the mixing angle θ_η is taken to be the usual value θ_η=-23^∘. For the ω and ϕ mesons, we adopt the flavor wave functions (uū+dd̄)/√(2) and ss̄, respectively; that is, they are mixed from ω_1 and ω_8 with the ideal mixing angle θ_ω=-54.7^∘. In the non-strange multi-quark systems, the u or d quark mass is taken to be m_u/d=313 MeV. The harmonic-oscillator width parameter b_u in the Gaussian wave function for each u or d quark is chosen to be around 0.45 fm, and the effects of its variation will be discussed in the following section. All meson masses are taken from the experimental values except for that of the σ meson. According to the dynamical spontaneous vacuum breaking mechanism, its value should satisfy <cit.> m_σ^2= (2 m_u)^2 + m_π^2. This gives us a strong constraint on the mass of the σ meson. As in previous calculations, we treat it as an adjustable parameter fixed by fitting the binding energy of the deuteron, and it lies in the reasonable range of about 500∼700 MeV. Also, the cutoff mass Λ is taken to be 1100 MeV, which is close to the chiral symmetry breaking scale. In the chiral SU(3) quark model, the coupling constant g_ ch between the quark field and the scalar and pseudo-scalar chiral fields is determined according to the relation g^2_ ch/4π = ( 3/5)^2g^2_NNπ/4πm^2_u/M^2_N, with g^2_NNπ/4π=13.67. After the parameters of the chiral fields are fixed, the coupling constant g_u of the OGE interaction can be determined by the Δ-N mass gap. The confinement strength a_uu^c and the zero-point energy a_uu^c0 are then fixed by the stability condition and the mass of the nucleon, respectively. In the extended chiral SU(3) quark model, the VME interactions are also added, and three types of coupling constants are considered here. Two of them are the same as in previous works <cit.>: g_ chv=2.351, f_ chv/g_ chv=0, and g_ chv=1.972, f_ chv/g_ chv=2/3. In the present calculation, to study the mechanism of the short range interaction, we also consider an extreme case, where the OGE interaction is excluded and only the VME interactions are responsible for the short range interaction. In this situation, the coupling constant g_u of the OGE interaction is set to zero, and the OGE term is replaced by the VME interactions. Then g_ chv is no longer a free parameter; it is completely determined by the constraint of the Δ-N mass gap. The typical value of g_ chv in this case is 2.536 when b_u equals 0.45  fm. All the parameters used in the present work are tabulated in Table <ref>. Sets I, II, and III represent three cases with b_u= 0.43, 0.45, and 0.47  fm, respectively. It should be mentioned that most of the parameters in our model are strongly constrained by theoretical considerations and experimental data. The number of adjustable parameters has been greatly reduced in comparison with the one-boson-exchange model on the hadron level. When all the parameters in the potentials are determined, two-baryon systems on the quark level can be studied by using the resonating group method (RGM). Then, one can dynamically obtain the partial wave phase shifts of NN scattering and the properties of dibaryons. More details about the (extended) chiral SU(3) quark model and the RGM can be found in previous works <cit.>.
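The parameter chain described above is easy to reproduce numerically. The short Python sketch below evaluates m_σ and g_ch from the two relations quoted in this subsection; the pion and nucleon masses are standard values assumed here as inputs:

import math

m_u = 313.0    # u/d constituent quark mass (MeV), as quoted above
m_pi = 138.0   # isospin-averaged pion mass (MeV), assumed standard value
M_N = 939.0    # nucleon mass (MeV), assumed standard value
g2_NNpi_over_4pi = 13.67

# m_sigma^2 = (2 m_u)^2 + m_pi^2
m_sigma = math.sqrt((2.0 * m_u) ** 2 + m_pi ** 2)

# g_ch^2/4pi = (3/5)^2 (g_NNpi^2/4pi) (m_u^2/M_N^2)
g2_ch_over_4pi = (3.0 / 5.0) ** 2 * g2_NNpi_over_4pi * (m_u / M_N) ** 2
g_ch = math.sqrt(4.0 * math.pi * g2_ch_over_4pi)

print(f"m_sigma ~ {m_sigma:.0f} MeV")  # ~641 MeV, inside the 500~700 MeV window
print(f"g_ch    ~ {g_ch:.2f}")         # ~2.62

Both values are consistent with the ranges quoted above.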
§ RESULTS AND DISCUSSIONS With the above potentials and parameters, we can calculate the properties of the deuteron, NN scattering, D_03, and D_30 to study the short range interactions between u/d quarks. The binding energy, root-mean-square (RMS) radius, and fractions of the channel wave functions for the deuteron are listed in Table <ref>. The calculated binding energy of the deuteron is limited to the range of 2.10∼2.30 MeV, which fixes the model parameters, such as the mass of the σ meson. The resultant RMS radii and the percentages of partial waves in the different models are almost the same. Specifically, the resultant percentages for the S-wave and D-wave are 93∼95% and 5∼7%, respectively, in the deuteron, which indicates that all the chiral SU(3) and extended chiral SU(3) constituent quark models used can describe the deuteron well. Moreover, the resultant RMS radius of 1.3∼1.4 fm shows a large separation between the constituent nucleon clusters. In other words, the proton and neutron in the deuteron feel each other mainly through the long range interactions and are insensitive to the short range interactions. Therefore, the deuteron is not an ideal platform to distinguish the dynamic mechanism of the short range interactions. The S-wave phase shifts of NN scattering in the different quark models with b_u=0.45 fm are displayed in Figure <ref>. From this figure, one sees that the resultant ^1S_0 phase shifts obtained with the chiral SU(3) constituent quark model underestimate the experimental data, while the results from the extended chiral SU(3) constituent quark model with different g_ chv are obviously more reasonable. Meanwhile, the obtained ^3S_1 phase shifts in both the chiral SU(3) constituent quark model and the extended chiral SU(3) constituent quark model provide an excellent description of the experimental data. Moreover, changes in b_u around 0.45 fm have almost no effect on the behaviors of these phase shifts. Here, we would emphasize that when the VME potential is introduced, namely in the extended chiral SU(3) constituent quark model, there are two important features: (1) the data on the NN phase shifts, in particular the ^1S_0 phase shifts, can naturally be reproduced; (2) the strength of the OGE interaction is greatly reduced. Since the deuteron is a loosely bound state and insensitive to the short range interactions, we move on to other nonstrange dibaryons, especially the more compact ones. Clearly, d^*(2380), which is usually regarded in the literature as a dibaryon D_03 composed of ΔΔ and hidden-color components (CC), satisfies this demand. The results for D_03 and its mirror state D_30 are listed in Table <ref>. From this table, one sees that the binding energy of D_03 in the chiral SU(3) quark model is about 19∼32 MeV, which is far from the measured value for d^*(2380). Actually, one cannot obtain a reasonable binding energy using the chiral SU(3) quark model even with b_u=0.5 fm and a significantly large g_u <cit.>. Meanwhile, in the case of the extended chiral SU(3) constituent quark model with b_u=0.43 ∼ 0.47 fm and f/g=0, the resultant binding energy of d^*(2380) is about 73∼87 MeV, which is compatible with the experimental value of 84  MeV. However, when the ratio f/g shifts from 0 to 2/3, the obtained binding energy of d^*(2380) is about 58∼75 MeV, which underestimates the data. The reason may be that, with the introduction of the tensor component in the vector-chiral-field-induced q-q interaction, the vector meson coupling constant g_chv has to be decreased in order to better explain the available NN data.
Moreover, if we further eliminate the OGE completely, the binding energy of D_03 becomes about 73∼111 MeV, which is sensitive to the parameter b_u. In short, the trend of the binding energy of D_03 with the model used is the same as that of the deuteron. From these results, we believe that the short-range attractive feature of the VME plays an important role in the formation of D_03; it is of the same character as that provided by the OGE, but much stronger. However, in order to reduce the dependence of the binding energy on the model parameters, say the width parameter b_u, it seems necessary to retain a certain amount of OGE interaction. In fact, in this way, not only is the dependence of the binding energy on b_u reduced, but the gluon coupling constant g_u also becomes smaller, so that g_u is more consistent with QCD theory. Obviously, these results do not support the view in some of the literature <cit.> that the OGE is the only mechanism for the short range interaction in nonstrange dibaryons. It should be specially mentioned that, unlike the deuteron, the D_03 system is a deeply bound state with a large CC component, and the calculated RMS radius is about 0.77 fm. This compact structure arises from the large quark exchange effect and the attractive short range interaction; as a consequence, a very large hidden color component (CC) appears. The large quark exchange effect can be seen from the spin-flavor-color (SFC) coefficients due to symmetry, which can be characterized by the averaged value of the antisymmetrizer ⟨𝒜^sfc⟩ in the spin-flavor-color space. In six-identical-quark systems, the antisymmetrizer is usually simplified as A = 1-9P_36. In Table <ref>, we list some relevant SFC coefficients for the present work. It is easy to see that the averaged value ⟨𝒜^sfc⟩ equals 2 for the D_03 system, which reflects the very strong quark exchange effect in this system. Besides D_03, its mirror state D_30 has also attracted great attention from experimentalists and theoreticians, because, according to the symmetry analysis of J. Dyson <cit.>, D_30 should have the same binding energy as D_03. However, the newest experiment suggests that D_30 should be weakly bound or lie above the ΔΔ threshold <cit.>. One possible reason is that this structure may lie near the ΔΔ threshold and have a broad width. Fundamentally speaking, however, its weakly bound or unbound character should be caused by the repulsive feature of the short-range interaction of the exchanged vector particles, i.e., the gluon and/or the vector mesons, in such a spin-isospin six-quark system. This can also be seen from the differences of the SFC coefficients between the D_03 and D_30 systems in Table <ref>, where the operators (σ_i ·σ_j)(λ^c_i ·λ^c_j) and ∑_k=1^3λ^F_i(k)λ^F_j(k) play important roles in the OGE and VME potentials, respectively. To verify this assertion, based on the great success in explaining the data of D_03, we calculate the D_30 properties with the same method and the same sets of model parameters. The obtained results are also tabulated in Table <ref>. From this table, one finds that, with the models used above, the binding energy of D_30 is about 5∼13 MeV, which is compatible with the newest data and is almost independent of the parameter b_u. In particular, the results of the extended chiral SU(3) constituent quark model in the f/g=0 and no-OGE cases agree with the experimental finding.
Combined with the best result for D_03, it seems that, using the extended chiral SU(3) constituent quark model with f/g=0, the data of D_03 and D_30 can be well explained simultaneously. Again, these results support our previous conclusion that the VME interaction plays an important role in the short-range interactions in nonstrange systems. As a by-product, we also obtain the corresponding wave functions of the deuteron, D_03, and D_30. We plot the relative wave functions obtained with the extended chiral SU(3) constituent quark model with f/g=0 and b_u=0.45 fm in Figure <ref>. Evidently, the wave function curves of the deuteron, D_03, and D_30 exhibit quite different behaviors and, consequently, different RMS radii. The wave function of the deuteron exhibits a wide distribution for both the S-wave and the D-wave, so the RMS radius is quite large. The wave function of D_03 shows a very narrow and very large distribution of the CC component and a somewhat wider and smaller distribution of the ΔΔ component, so its RMS radius is small, namely, it forms a compact six-quark state. In contrast, although D_30 also has a narrow CC-component wave function, it is not as large as in D_03. Meanwhile, it has a much wider ΔΔ-component wave function, which is larger than its CC component. Therefore, its RMS radius is also relatively large. The characteristics of these wave functions are consistent with the binding properties of the corresponding states obtained above. The features of these wave functions again imply that, owing to its large RMS radius and lack of a CC component, the deuteron can hardly be used to distinguish the relative importance of the OGE and VME at short distance. On the contrary, due to the existence of richer CC components, D_03, with its short-range attractive feature, and D_30, with its short-range repulsive character, can be used as good platforms to study which of the VME and OGE is responsible for the short-range interaction. § SUMMARY In order to reveal the dynamic mechanism of the short range interactions between u/d quarks, we perform a systematic analysis of the NN, D_03, and D_30 systems in terms of the (extended) chiral SU(3) constituent quark models. The OGE interaction and the hidden color components can easily be dealt with in these constituent quark models. All model parameters are fixed by fitting the ground state masses of the nonstrange baryons and the binding energy of the deuteron. Then, we calculate the NN phase shifts and the properties of the D_03 and D_30 dibaryons by using the RGM in two types of chiral SU(3) constituent quark models with various sets of model parameters, and carefully analyze the effects of the OGE and VME interactions in the short range. Our results show that in the extended chiral SU(3) constituent quark model, the binding energies of D_03 and D_30 are 58∼111 MeV and 5∼12 MeV, respectively. If one further requires that all the data, namely the ground state masses of the non-strange baryons, the NN phase shifts, the binding energy of the deuteron, the binding energy of the D_03 dibaryon and its decay properties, and the binding behavior of the D_30 dibaryon, be well explained with the same set of model parameters, then the extended chiral SU(3) constituent quark model with f/g=0 and b_u=0.45  fm should be adopted. In this case, the deuteron has a binding energy of 2.25 MeV and an RMS radius of 1.34 fm, while D_03 has a binding energy of 80.08 MeV and an RMS radius of 0.77 fm, and D_30 has a binding energy of 6.00 MeV and an RMS radius of 1.25 fm.
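As a quick numerical consistency check on these binding energies, one can relate them to the ΔΔ threshold; the Δ(1232) mass below is an assumed standard input, not a fitted quantity of this work:

M_Delta = 1232.0  # Delta(1232) mass (MeV), assumed standard value
threshold = 2.0 * M_Delta

for name, B in [("D_03", 80.08), ("D_30", 6.00)]:
    print(f"{name}: mass ~ {threshold - B:.1f} MeV (threshold {threshold:.0f} MeV)")
# D_03 lands near 2384 MeV, close to the observed d*(2380), while D_30 sits
# just below the threshold at ~2458 MeV.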
In other words, D_03 is a compact hexaquark-dominated state and D_30 is a weakly bound state. Clearly, these values agree with the observed data. For the D_03 and D_30 states, the great difference between their binding energies is largely due to the fact that the short-range interactions of the vector-particle exchange potentials present attractive and repulsive features in D_03 and D_30, respectively. Based on these observations, we believe that the VME interactions dominate the short range interactions between u/d quarks, while the residual OGE coupling constant g_u has a small value of 0.273, which solves the puzzle of the large gluon coupling constant in old constituent quark models and meets the requirement of QCD theory. Of course, to thoroughly understand the dynamic mechanism of the short-range interaction, more theoretical and experimental investigations are still needed. ACKNOWLEDGEMENTS We would like to thank Lian-Rong Dai for helpful discussions. This work is supported by the Natural Science Foundations of China under Grant No. 12375142 and of Hunan Province under Grant No. 2023JJ40421, and the Key Project of Hunan Provincial Education Department under Grant No. 21A0039. This work is also supported by the National Key Research and Development Program of China under Contract No. 2020YFA0406300 and by the Sino-German CRC 110 Symmetries and the Emergence of Structure in QCD project by NSFC under Grant No. 12070131001. 99 Isgur:1977ef N. Isgur and G. Karl, Hyperfine Interactions in Negative Parity Baryons, Phys. Lett. B 72, 109 (1977). Shen:1997jd P. N. Shen, Y. B. Dong, Z. Y. Zhang, Y. W. Yu and T. S. H. Lee, E2/M1 ratio of the Δ↔γ N transition within the chiral constituent quark model, Phys. Rev. C 55, 2024-2029 (1997). Glozman:1995fu L. Y. Glozman and D. O. Riska, The Spectrum of the nucleons and the strange hyperons and chiral dynamics, Phys. Rept. 268, 263-303 (1996). Oka:1981ri M. Oka and K. Yazaki, Short Range Part of Baryon Baryon Interaction in a Quark Model. 1. Formulation, Prog. Theor. Phys. 66, 556-571 (1981). Oka:1981rj M. Oka and K. Yazaki, Short Range Part of Baryon Baryon Interaction in a Quark Model. 2. Numerical Results for S-Wave, Prog. Theor. Phys. 66, 572-587 (1981). Faessler:1983yd A. Faessler, F. Fernandez, G. Lubeck and K. Shimizu, The Nucleon Nucleon Interaction and the Role of the (42) Orbital Six Quark Symmetry, Nucl. Phys. A 402, 555-568 (1983). Zhang:1997ny Z. Y. Zhang, Y. W. Yu, P. N. Shen, L. R. Dai, A. Faessler and U. Straub, Hyperon nucleon interactions in a chiral SU(3) quark model, Nucl. Phys. A 625, 59-70 (1997). Dai:2003dz L. R. Dai, Z. Y. Zhang, Y. W. Yu and P. Wang, N N interactions in the extended chiral SU(3) quark model, Nucl. Phys. A 727, 321-332 (2003). Ping:1998si J. L. Ping, F. Wang and J. T. Goldman, Effective baryon baryon potentials in the quark delocalization and color screening model, Nucl. Phys. A 657, 95-109 (1999). Furuichi:2002gi M. Furuichi and K. Shimizu, Description of SU(3) octet and decuplet baryons, Phys. Rev. C 65, 025201 (2002). Garcilazo:2001md H. Garcilazo, A. Valcarce and F. Fernandez, Baryon spectrum in the chiral constituent quark model, Phys. Rev. C 63, 035207 (2001). Capstick:1986ter S. Capstick and N. Isgur, Baryons in a relativized quark model with chromodynamics, Phys. Rev. D 34, 2809-2835 (1986). He:2023ucd B. R. He, M. Harada and B. S. Zou, Quark model with hidden local symmetry and its application to T_cc, Phys. Rev. D 108, 054025 (2023). Isgur:1999jv N.
Isgur, Critique of a pion exchange model for interquark forces, Phys. Rev. D 62, 054026 (2000). Glozman:1999ms L. Y. Glozman, Reply to Isgur's 'Critique of a pion exchange model for interquark forces', arXiv:nucl-th/9909021. Liu:1998um K. F. Liu, S. J. Dong, T. Draper, D. Leinweber, J. H. Sloan, W. Wilcox and R. M. Woloshyn, Valence QCD: Connecting QCD to the quark model, Phys. Rev. D 59, 112001 (1999). Isgur:1999ic N. Isgur, Comment on `Valence QCD: Connecting QCD to the quark model', Phys. Rev. D 61, 118501 (2000). Liu:1999kq K. F. Liu, S. J. Dong, T. Draper, J. H. Sloan, W. Wilcox and R. M. Woloshyn, Reply to Isgur's comments on valence QCD, Phys. Rev. D 61, 118502 (2000). Meng:2023jqk L. Meng, Y. K. Chen, Y. Ma and S. L. Zhu, Tetraquark bound states in constituent quark models: Benchmark test calculations, Phys. Rev. D 108, 114016 (2023). Pirjol:2008gd D. Pirjol and C. Schat, Quark Forces from Hadron Spectroscopy, Phys. Rev. Lett. 102, 152002 (2009). Stoks:1994wp V. G. J. Stoks, R. A. M. Klomp, C. P. F. Terheggen and J. J. de Swart, Construction of high quality N N potential models, Phys. Rev. C 49, 2950-2962 (1994). Jade:1996hv L. Jade and H. V. von Geramb, A Nonlinear approach to N N interactions using selfinteracting meson fields, Phys. Rev. C 55, 57-66 (1997). Long:2011xw B. Long and C. J. Yang, Renormalizing chiral nuclear forces: Triplet channels, Phys. Rev. C 85, 034002 (2012). Lu:2021gsb J. X. Lu, C. X. Wang, Y. Xiao, L. S. Geng, J. Meng and P. Ring, Accurate relativistic chiral nucleon-nucleon interaction up to next-to-next-to-leading order, Phys. Rev. Lett. 128, 142002 (2022). Guo:2017jvc F. K. Guo, C. Hanhart, U. G. Meißner, Q. Wang, Q. Zhao and B. S. Zou, Hadronic molecules, Rev. Mod. Phys. 90, 015004 (2018); [erratum: Rev. Mod. Phys. 94, 029901 (2022)]. Dong:2021bvy X. K. Dong, F. K. Guo and B. S. Zou, A survey of heavy-heavy hadronic molecules, Commun. Theor. Phys. 73, 125201 (2021). Dong:2022cuw X. K. Dong, Y. H. Lin and B. S. Zou, Interpretation of the η_1 (1855) as a KK̅_1(1400) + c.c. molecule, Sci. China Phys. Mech. Astron. 65, 261011 (2022). Chen:2022asf H. X. Chen, W. Chen, X. Liu, Y. R. Liu and S. L. Zhu, An updated review of the new hadron states, Rept. Prog. Phys. 86, 026201 (2023). Bashkanov:2008ih M. Bashkanov et al., Double-Pionic Fusion of Nuclear Systems and the ABC Effect: Approaching a Puzzle by Exclusive and Kinematically Complete Measurements, Phys. Rev. Lett. 102, 052301 (2009). Adlarson:2011bh P. Adlarson et al. (WASA-at-COSY Collaboration), ABC Effect in Basic Double-Pionic Fusion — Observation of a new resonance?, Phys. Rev. Lett. 106, 242302 (2011). Adlarson:2014pxj P. Adlarson et al. (WASA-at-COSY Collaboration), Evidence for a New Resonance from Polarized Neutron-Proton Scattering, Phys. Rev. Lett. 112, 202301 (2014). Adlarson:2014ozl P. Adlarson et al. (WASA-at-COSY Collaboration), Neutron-proton scattering in the context of the d^*(2380) resonance, Phys. Rev. C 90, 035204 (2014). WASA-at-COSY:2016bha P. Adlarson et al. (WASA-at-COSY Collaboration), Search for an isospin I = 3 dibaryon, Phys. Lett. B 762, 455-461 (2016). Clement:2016vnl H. Clement, On the History of Dibaryons and their Final Observation, Progress in Particle and Nuclear Physics, 93, 195 (2017). Clement:2020mab H. Clement and T. Skorodko, Dibaryons: Molecular versus Compact Hexaquarks, Chin. Phys. C 45, 022001 (2021). Dong:2023xdi Y. Dong, P. Shen and Z. Zhang, d^*(2380) in a chiral constituent quark model, Prog. Part. Nucl. Phys. 131, 104045 (2023). Lu:2017uey Q. F. Lü, F. Huang, Y.
B. Dong, P. N. Shen and Z. Y. Zhang, Six-quark structure of d^*(2380) in a chiral constituent quark model, Phys. Rev. D 96, 014036 (2017). Ping:2000dx J. L. Ping, F. Wang and J. T. Goldman, The d^* dibaryon in the extended quark delocalization, color screening model, Phys. Rev. C 65, 044003 (2002). Bashkanov:2013cla M. Bashkanov, S. J. Brodsky and H. Clement, Novel Six-Quark Hidden-Color Dibaryon States in QCD, Phys. Lett. B 727, 438 (2013). Yuan:1999pg X. Q. Yuan, Z. Y. Zhang, Y. W. Yu and P. N. Shen, Deltaron dibaryon structure in chiral SU(3) quark model, Phys. Rev. C 60, 045203 (1999). Dyson:1964xwa F. Dyson and N. H. Xuong, Y=2 States in SU(6) Theory, Phys. Rev. Lett. 13, 815 (1964). Gal:2013dca A. Gal and H. Garcilazo, Three-Body Calculation of the Delta-Delta Dibaryon Candidate D_03(2370) at 2.37 GeV, Phys. Rev. Lett. 111, 172301 (2013). Gal:2014zia A. Gal and H. Garcilazo, Three-body model calculations of N Δ and ΔΔ dibaryon resonances, Nucl. Phys. A 928, 73 (2014). Huang:2013nba H. Huang, J. Ping and F. Wang, Dynamical calculation of the ΔΔ dibaryon candidates, Phys. Rev. C 89, 034001 (2014). Chen:2014vha H. X. Chen, E. L. Cui, W. Chen, T. G. Steele and S. L. Zhu, QCD sum rule study of the d^*(2380), Phys. Rev. C 91, 025204 (2015). Li:1999dm Q. B. Li and P. N. Shen, Possible Delta-Delta dibaryons in the quark cluster model, J. Phys. G 26, 1207-1216 (2000). Park:2015nha W. Park, A. Park and S. H. Lee, Dibaryons in a constituent quark model, Phys. Rev. D 92, 014037 (2015). Dai:2005kt L. R. Dai, Delta Delta dibaryon structure in extended chiral SU(3) quark model, Chin. Phys. Lett. 22, 2204 (2005). Huang:2014kja F. Huang, Z. Y. Zhang, P. N. Shen and W. L. Wang, Is d^* a candidate for a hexaquark-dominated exotic state?, Chin. Phys. C 39, 071001 (2015). Dong:2015cxa Y. Dong, P. Shen, F. Huang and Z. Zhang, Theoretical study of the d^*(2380) → d ππ decay width, Phys. Rev. C 91, 064002 (2015). Huang:2015nja F. Huang, P. N. Shen, Y. B. Dong and Z. Y. Zhang, Understanding the structure of d^*(2380) in chiral quark model, Sci. China Phys. Mech. Astron. 59, 622002 (2016). Ikeno:2021frl N. Ikeno, R. Molina and E. Oset, Triangle singularity mechanism for the pp →π+d fusion reaction, Phys. Rev. C 104, 014614 (2021). Kukulin:2022gze V. I. Kukulin, V. N. Pomerantsev, O. A. Rubtsova, M. N. Platonova and I. T. Obukhovsky, Dibaryon resonances and short-range NN interaction, Chin. Phys. C 46, 114106 (2022). Dai:2023ofz L. Dai, Y. Wang, L. Chen and T. Zhang, The Role of the Hidden Color Channel in Some Interesting Dibaryon Candidates, Symmetry 15, 446 (2023). Bashkanov:2023yca M. Bashkanov, D. P. Watts, G. Clash, M. Mocanu and M. Nicol, Dibaryons and where to find them, J. Phys. G 51, 045106 (2024). Arndt:1994br R. A. Arndt, I. I. Strakovsky and R. L. Workman, An Updated analysis of NN elastic scattering data to 1.6 GeV, Phys. Rev. C 50, 2731-2741 (1994).
http://arxiv.org/abs/2407.03098v1
20240703133714
Magnetic nutation induced by anisotropic RKKY interaction in magnetic semiconductors
[ "H. Kachkachi" ]
cond-mat.mtrl-sci
[ "cond-mat.mtrl-sci" ]
Université de Perpignan Via Domitia, Laboratoire PROMES-CNRS (UPR 8521), Rambla de la Thermodynamique, Tecnosud, 66100 Perpignan, FRANCE. § ABSTRACT We consider a system with both charge and spin degrees of freedom, such as a magnetic semiconductor, composed of two subsystems: 1) a gas of noninteracting conduction electrons and 2) a ferromagnet of localized spins, the two being coupled by the Vonsovskii (sd) local interaction. The whole system is subjected to external electrical and magnetic disturbances. Using the Feynman-Schwinger approach to integrate out the faster (Grassmann) charge degrees of freedom associated with the conduction electrons, we derive an effective Hamiltonian for the localized spins. We find that the effective field responsible for the conduction-band splitting induces an anisotropic effective exchange coupling in the form of an XXZ spin model. As an application of this general formalism, we study the dynamics of the net magnetic moment of the effective spin model and show that the anisotropy of the exchange coupling leads to a nutational motion of the magnetization with much higher frequencies than the usual precession, which is itself modified by the same token. We have derived analytical expressions for the modified precession and the low-lying nonuniform modes and compared them with the numerical simulations. We have shown that the frequency (amplitude) of the nutation modes decreases (increases) with the band splitting field. The study of this phenomenon in magnetic semiconductors should open up new routes for control in spin-based electronics. Magnetic nutation induced by anisotropic RKKY interaction in magnetic semiconductors H. Kachkachi June 24, 2024 ==================================================================================== § INTRODUCTION Semiconductors and magnetic materials play an essential role in modern electronics. In electronic and optical semiconductors, the charge degrees of freedom are used to process information, whereas in magnetic materials the spin is utilized to store information. One of the main ideas that have been driving the developments of the last few decades is to combine the properties of both materials for possible spin-electronic applications with enhanced functionalities <cit.>. This combination has been achieved in diluted magnetic semiconductors (DMS) by doping a nonmagnetic semiconductor with magnetic elements <cit.>. The advent of spintronics based on the manipulation of spin polarized currents has triggered an active search for ferromagnetic semiconductors (FMS) and the discovery of long range ferromagnetism beyond room temperature. Indeed, the discovery of ferromagnetism in doped semiconductors with high transition temperatures has led to a renewed interest in DMS <cit.>. Besides their potential applications, magnetic semiconductors (MS) have attracted considerable attention owing to the fundamental mechanism and nature of their ferromagnetic state and the underlying magnetic cooperative phenomena arising from the new spin degrees of freedom. Likewise, magnetic materials such as EuO and EuS have been actively investigated for decades because they exhibit a high degree of symmetry and isotropy as well as short-range magnetic interaction, making them simple model spin systems <cit.>, and for their widely recognized potential applications <cit.>.
A branch of spintronics applications focuses on information storage on magnetic media by reversing the magnetization direction using various means, such as an external magnetic field initially applied opposite to the magnetization state. The efficiency of this (damped) switching process is limited by the macroscopic relaxation time of the magnetization, which is on the order of 100 ps. More efficient switching, achieving ultra-fast magnetization reversal, can be realized using picosecond magnetic field pulses perpendicular to the magnetization direction <cit.>. In general, the dynamics of magnetization in a magnetically ordered material is governed by precession. However, if the latter is somehow disturbed so that the rotation axis is no longer aligned with the angular momentum, an additional motion appears, known as magnetic nutation, similar to what occurs in the case of a mechanical gyroscope when an external force tilts its rotation axis away from the direction of the gravity field. For decades, magnetization nutation was ignored as being a spurious effect on top of precession. However, wide attention has recently been drawn towards this phenomenon owing to the recent experimental observations of nutational oscillation at THz frequencies in thin films <cit.>. In fact, spin nutation was investigated a few decades ago in NMR <cit.>, optical resonance <cit.> and EPR <cit.>. It was then predicted in Josephson junctions <cit.> and was later developed using various approaches based on relativistic quantum mechanics, first principles <cit.>, and electronic structure theory <cit.>. More recent theoretical investigations deal with the magnetization nutation in metallic ferromagnets as being induced by magnetic inertia due to the nonadiabatic contribution of the environment degrees of freedom <cit.>. Magnetic inertia has also been introduced within a macroscopic approach <cit.> through an extended version of the Landau-Lifshitz-Gilbert equation <cit.>. Recently, it has been shown <cit.> that spin misalignment induced by surface effects in nanoscale magnetic systems triggers nutational oscillations of the magnetization at several frequencies in the GHz-THz range. One of the most important features of MS (FMS and DMS) is that their magnetic properties, such as the ferromagnetic phase transition and the spin excitation dynamics, can be controlled by the interaction between the spins of the itinerant carriers and the spins of the localized magnetic ions, the so-called sd coupling. This provides us with a handle to control their magnetic properties by light through its action on the charge degrees of freedom. Indeed, several recent experiments have used time-resolved optical techniques to demonstrate electrical control of coherent electron-spin dynamics in nonmagnetic semiconductor hetero-structures. Since this exploits variations in the spin-orbit interaction due to compositional gradients, the energy scale of these effects in conventional semiconductors is on the order of μeV and is manifested as variations in the spin precession frequency in the GHz range. In contrast, THz electron spin precession frequencies can be easily achieved in MS due to the sd exchange interaction. This coupling leads to large spin splittings in the conduction band states on the order of 10-100 meV <cit.>, thus matching the THz regime of spin nutation.
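The correspondence between splitting energies and precession or nutation frequencies invoked here is simply f = E/h. The short Python sketch below makes the orders of magnitude explicit, using the standard value of the Planck constant:

PLANCK_EV_S = 4.135667e-15  # Planck constant in eV*s (standard value)

def energy_to_freq_THz(E_meV):
    # Convert an energy splitting in meV to a frequency in THz via f = E/h.
    return E_meV * 1e-3 / PLANCK_EV_S / 1e12

for E in (0.004, 10.0, 100.0):  # ~4 micro-eV, 10 meV, 100 meV
    print(f"{E:7.3f} meV -> {energy_to_freq_THz(E):7.2f} THz")
# ~4 micro-eV corresponds to ~1 GHz (conventional semiconductors), whereas the
# 10-100 meV sd splittings correspond to roughly 2.4-24 THz.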
This is the reason we believe that it is quite relevant and timely to study magnetic nutation in MS and, in particular, DMS whose outstanding properties and functionalities stem from the complex structure of their valence band and the effect of long-range carrier-mediated exchange (RKKY) interaction. More precisely, it is important to investigate their low-lying energy states which contribute to both equilibrium (M_s and T_C) and nonequilibrium magnetic properties (FMR and relaxation). Objectives and plan of the paper: The prime objective of the present work is to study magnetic nutation induced in MS by the effective field that is responsible for the band splitting between spin-up and spin-down bands. For this purpose, we plan the following: In the next Section, we consider a magnetic semiconductor composed of a subsystem of conduction electrons coupled to the subsystem of localized spins. We then present the general formalism of path integrals that consists in integrating over the Grassmann variables related with the fast charge degrees of freedom and derive the effective spin Hamiltonian for the localized spins. Next, we compute the long-range effective exchange couplings in Fourier space, which turn out to be anisotropic, and discuss the low-lying excitation modes (precession and long-wavelength modes). In the last Section, we derive the spatial dependence of the effective exchange couplings and use them to perform a numerical simulation of the magnetization dynamics of the effective spin system. We then infer the frequencies of the low-lying modes and compare the results with our analytical expressions. § MODEL HAMILTONIAN AND HYPOTHESES A crystal becomes a conductor when free electrons and holes appear in it. The model proposed by Vonsovsky in 1971 [see the review by Irkhin Irkhin_pmm2010 and references therein] for a magnetic semiconductor (MS) presumes that all electrons in the crystal can be subdivided into the localized ones that form partially filled ionic d- or f-shells and the de-localized ones occupying s-type states. Since the former are localized and the latter are de-localized (carriers), this model is termed the cl model. This is also sometimes called the sd-model. Accordingly, we consider a magnetic semiconductor as being composed of two subsystems: 1) the subsystem of conduction electrons and 2) the subsystem of localized spins. The two subsystems are coupled by the sd interaction of the conduction electron spin s_i with the localized spins S_i. The corresponding contributions to the total Hamiltonian are described below. Conduction electrons The Hamiltonian of the conduction electrons is given by ℋ_e = ℋ_K+ℋ_ Z^(s) where the first term is the electron kinetic energy, ℋ_K=η∑_⟨ i,j⟩∑_σC_i,σ^+C_j,σ with C_i,σ^+ and C_i,σ being the creation and annihilation operators of the electron and η the nearest-neighbor hopping energy. We will consider the simplest case of a one-band model for which the electrons are in the Bloch state with quasi-momentum k and spin σ [See Ref. takahashi_carrier_2010]. In Eq. (<ref>), ℋ_ Z^(s) is the Zeeman energy of the electrons in the external (DC) magnetic field H_0, i.e. ℋ_ Z^(s)=-g_eμ_BH_0·∑_is_i. In the present approach, we will use the representation of the conduction electron spin in terms of the creation and annihilation operators, i.e. s_i=(ħ/2)∑_α,βC_i,α^+σ_αβC_i,β, with σ being the vector of the usual Pauli matrices. 
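As a quick illustration of this representation (a numerical aside of ours, not part of the original derivation), one can verify on a single site that the fermion bilinear s=(ħ/2)∑_αβC_α^+σ_αβC_β indeed obeys the SU(2) spin algebra, using an explicit Jordan-Wigner matrix representation of C_↑, C_↓ on the four-dimensional one-site Fock space (ħ=1; all names below are ours):

import numpy as np

c = np.array([[0, 1], [0, 0]], dtype=complex)    # 2x2 annihilation matrix
I2 = np.eye(2, dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)         # Jordan-Wigner string

C = [np.kron(c, I2), np.kron(Z, c)]              # C_up, C_down on the 4-dim Fock space

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def spin(sigma):
    # s = (1/2) sum_{a,b} C_a^+ sigma_ab C_b   (hbar = 1)
    return 0.5 * sum(sigma[a, b] * (C[a].conj().T @ C[b])
                     for a in range(2) for b in range(2))

Sx, Sy, Sz = spin(sx), spin(sy), spin(sz)
print(np.allclose(Sx @ Sy - Sy @ Sx, 1j * Sz))   # True: [s^x, s^y] = i s^z

This representation is what allows the electron spin to be traded for fermion bilinears in the path-integral treatment below.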
Therefore, upon performing the Fourier transformation C_α( r)=1/√(V)∑_ ke^-i k· r C_ kα,C_α^†( r)=1/√(V)∑_ ke^i k· r C_ kα^+, the total Hamiltonian of the subsystem of conduction electrons becomes ℋ_e=∑_kE_k∑_σC_kσ^+C_kσ-∑_α,β∑_ kC_ kα^+(ξ·σ_αβ)C_ kβ where E_k =zη t_k, t_k=1/z∑_ae^ik·a=1/d∑_α=x,y,zcos(ak_α), ξ =1/2γ_sμ_BH_0. d is the dimension of space, z the coordination number and a the lattice spacing. Localized spins The subsystem of localized spins subjected to an external magnetic field H_0 can be described by the (Heisenberg) Hamiltonian<cit.> ℋ_ S=-J∑_⟨ i,j⟩S_i·S_j-g_Sμ_BH_0·∑_iS_i where g_S is the gyromagnetic factor of the local spin. We note that the superexchange term in Eq. (<ref>) is initially absent in the case of a DMS; the exchange coupling between localized spins appears upon doping with magnetic ions. There is a huge literature on this topic [see the review articles SatoEtAl_RevModPhys.82.1633, Dietl_nm10, KALITA2023116201 and references therein]. In ferromagnetic semiconductors, such as EuO and EuS, this superexchange term couples the spins of the Eu atoms. In these compounds, the nearest-neighbor exchange coupling (J_1) between localized spins is the strongest coupling. Indeed, the next-nearest-neighbor coupling (J_2), inferred from the specific-heat and inelastic neutron-scattering experiments reported in Refs. MaugerGoddart_PR86, sutarto_euo_2009, is very small (J_2/J_1∼0.2). Electrical disturbance In addition to the magnetic disturbance represented by the Zeeman energy in both (<ref>) and (<ref>), we may include an electric or charge disturbance in the form<cit.> ℋ_ Elec =∑_σ∫ d r Φ_ pl( r,t) C_σ^+( r)C_σ( r), where Φ_ pl( r,t)=-|e|φ_s( r,t). The latter contribution is added for completeness, anticipating future applications. For example, it will be interesting to investigate the coupling of a plasmonic nanostructure to an FMS<cit.>. Accordingly, we can consider a system composed of an assembly of metallic (Au, Ag, etc.) nano-elements (of a few tens of nanometers in diameter) deposited on top of an FMS, such as EuO or EuS. A monochromatic light beam shone on the nano-elements at appropriate frequencies excites plasmonic modes in the latter, leading to the creation of hot spots with an enhanced electric field. The latter penetrates (with some attenuation) into the FMS and alters the behavior of the conduction electrons, which in turn, through their coupling to the localized spins, affect the excitation spectrum of the latter. Here, the effect of the top layer of nano-elements is regarded as an (external) electromagnetic excitation, which leads to new contributions in the effective Hamiltonian of the localized spins. So the potential in Eq. (<ref>) may stem from the electric field radiated by the assembly of metallic nanoparticles, or by a film, excited at their plasmonic resonance by an incident electromagnetic wave. Total Hamiltonian of the magnetic semiconductor As discussed earlier, the exchange interaction between the localized (ion) spin S_i at site r_i and the conduction-electron spin s_i is given by ℋ_sd=-λ∑_i S_i· s_i=-λ/2∑_i∑_α,βC_i,α^+( S_i·σ_αβ)C_i,β where λ is the sd coupling. This is a local interaction, see Refs. nagaev_spin_1974, nagaev83mir, Irkhin_pmm2010, GajKos_spr11 for extensive discussions. In the subsequent calculations involving the electron degrees of freedom, we will drop the localized spin Hamiltonian ℋ_ S (<ref>) because it does not involve the electron gas. 
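Before assembling the total Hamiltonian, the band-structure conventions above can be checked with a few lines of Python (an illustrative sketch of ours, with placeholder values of η and a):

import numpy as np

eta, a, d = 1.0, 1.0, 3      # hopping energy, lattice spacing, dimension (placeholders)
z = 2 * d                    # coordination number of the simple cubic lattice

def E(k):
    # E_k = z*eta*t_k with t_k = (1/d) sum_alpha cos(a k_alpha)
    return z * eta * np.mean(np.cos(a * np.asarray(k)))

print(E([0.0, 0.0, 0.0]))        # band edge at the zone center: z*eta
print(E([np.pi / a] * 3))        # opposite edge at the zone corner: -z*eta

The band width is thus 2zη, which is the large scale against which the sd coupling will be treated as a perturbation below.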
Therefore, the total Hamiltonian for the charge degrees of freedom, comprising the (free) conduction electron energy (<ref>), the sd coupling (<ref>) and the coupling of the electrons to the external electrical potential (<ref>), reads ℋ =∑_ k,αE_ kC_ kα^†C_ kα-∑_α,β∑_ kC_ kα^+(ξ·σ_αβ)C_ kβ+1/V∑_ k, p∑_α,βC_ kα^+Υ_αβ( k- p,t)C_ p,β, where Υ_αβ( k,t)≡Φ_ pl( k,t)δ_αβ-λ/2( S_ k(t)·σ_αβ). Due to the Zeeman splitting caused by the external magnetic field and by the internal molecular field (in the mean-field approximation, MFA) of the localized spins S, the perturbed energies of the free carriers are E_k,σ≡ E_σ(k) =E_k-ħσ/2(gμ_BH_0+λ⟨ S^z⟩). A few remarks are in order. * In Ref. wesselinowa83PSS it was shown that the spin-wave mode is underdamped at low temperatures and overdamped near T_c; see the review MaugerGoddart_PR86 for a detailed discussion. Likewise, at low temperature the electronic damping γ_ el^σ is extremely small: it is quadratic in the coupling λ and increases only as T_c is approached. Consequently, in the present work we will ignore these dampings, since we only consider the low-temperature regime where spin-wave theory is valid <cit.>. * In the following section, we will adopt the path-integral approach, for which one chooses the grand-canonical ensemble. Indeed, the density of electrons in a real material fluctuates around the average N/V, which is held fixed by the chemical potential μ as the temperature is varied. Then, adopting a parabolic dispersion for the conduction electrons, we write E_k=ħ^2k^2/2m-μ. * According to Eq. (<ref>), the (rigid) band splitting between the spin-up and spin-down bands is caused by the effective field Ξ comprising the external magnetic field and the internal molecular field, i.e. Ξ=ξ+λ/2⟨ S^z⟩ . This splitting parameter will appear in all subsequent calculations via the ratio Ξ/ϵ_F, where ϵ_F is the Fermi energy of the electron gas. In the mean-field picture, the external magnetic field aligns the localized spins (magnetic ions), which in turn act on the carriers (conduction electrons) via the ion-carrier interaction λ. However, in practice, in (doped) magnetic semiconductors, such as Eu_1-xGd_xO and Ga_1-xMn_xAs, the main contribution to Ξ is attributed to the second term <cit.>, i.e. to the sd coupling λ. For example, in an external magnetic field of 1 T, in an MS with S=5/2-7/2 and ϵ_F≃0.4 eV, one has ξ/ϵ_F≃10^-4 while Ξ/ϵ_F≃10^-1-1. The textbook GajKos_spr11 contains a detailed discussion, and Ref. BeaDan_prb10 a table of values, of the coupling λ and of the splitting it produces in the conduction and valence bands of various MS. § EFFECTIVE SPIN HAMILTONIAN: GENERAL FORMALISM As hinted at earlier, we assume that the motion of the conduction electrons within the MS is much faster than the variations associated with the fluctuations of the localized spins [see the discussion in the textbook nagaev83mir]. Therefore, we can average over the Grassmann variables associated with the conduction electrons using the Feynman-Schwinger technique <cit.> and obtain the effective model for the localized spin variables. There are several variants of such a procedure in the literature [see e.g., Refs. <cit.>]; it is most efficiently done within the Lagrange formalism. In Appendix <ref>, we present the details of our more general calculations including both electric and magnetic external excitations. Considering the sd coupling λ as a small parameter with respect to the band width, we perform a perturbative expansion to 2^nd order in λ. 
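The orders of magnitude quoted in the last remark are easy to reproduce; the following back-of-the-envelope script (ours, using the illustrative values given above) compares the bare Zeeman term ξ with the Fermi energy:

import numpy as np

mu_B = 5.788e-5        # Bohr magneton in eV/T
g, H0 = 2.0, 1.0       # g-factor and applied field in tesla (illustrative)
eps_F = 0.4            # Fermi energy in eV, as quoted in the text

xi = 0.5 * g * mu_B * H0
print(f"xi/eps_F ~ {xi / eps_F:.1e}")    # ~ 1e-4

# A molecular-field contribution (lambda/2)<S^z> of a few 10-100 meV instead
# gives Xi/eps_F ~ 0.1-1, i.e. the sd coupling dominates the band splitting.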
The whole procedure leads to the following effective action for the subsystem of localized spins (in the absence of the external electric disturbance) 𝒜_ eff =-λ/2V∑_k[f_FD(E_k^-)-f_FD(E_k^+)]( S_ 0·e_ξ) +1/2(λ/2V)^2∑_p,k1/β∑_m[ S(-k)· S(k)]{f_FD(E_p^-)-f_FD(E_p+k^+)/iϖ_m+E_p-E_p+k-2Ξ+(Ξ⟷-Ξ)} +1/2(λ/2V)^2∑_k,p1/β∑_m[ S(-k)·e_ξ][ S(k)·e_ξ] {f_FD(E_p^-)-f_FD(E_p+k^-)/iϖ_m+E_p-E_p+k-f_FD(E_p^-)-f_FD(E_p+k^+)/iϖ_m+E_p-E_p+k-2Ξ+(Ξ⟷-Ξ)} where e_ξ=ξ/ξ is the verse of the applied magnetic field, S_ 0=∑_iS_i, f_FD(z)=1/(e^β z+1) is the Fermi-Dirac function, E_p^±=E_p±Ξ, and ϖ_l=2π l/β are the bosonic Matsubara frequencies associated with the spin degrees of freedom. The details are given in Appendix <ref>. Note that in the absence of the magnetic field (ξ=0), this expression reduces to 𝒜_ eff^(2) =(λ/2V)^2∑_k[∑_p1/β∑_mf_FD(E_p)-f_FD(E_p+k)/iϖ_m+E_p-E_p+k]( S_-k· S_k) which is the well known result from the RKKY theory, where the term between square brackets brings an (additional) exchange coupling between localized spins induced by their sd coupling to conduction electrons. Eq. (<ref>) provides a non trivial extension of the RKKY expression in the presence of a magnetic field. This is the first original contribution of the present work. §.§.§ Effective exchange couplings From the result in (<ref>) we infer the contribution of the sd coupling to the effective spin Hamiltonian for the localized spins S_i which we write as follows ℋ_ eff =-∑_i,j𝒥_1(r_ij) S_i· S_j-∑_i,j𝒥_2(r_ij)( S_i·e_ξ)( S_j·e_ξ) where 𝒥_n(r_ij)≡1/V∑_ke^-ik·r_ij1/β∑_m𝒥_n(k,ω),n=1,2 with 𝒥_1(k,ω,Ξ) =-λ^2/81/V∑_p[f(E_p^-)-f(E_p+k^+)/ω+E_p^--E_p+k^++(Ξ⟶-Ξ)], 𝒥_2(k,ω,Ξ) =-λ^2/81/V∑_p{f(E_p^-)-f(E_p+k^-)/ω+E_p^--E_p+k^-+(Ξ⟶-Ξ)}-𝒥_1(k,ω). Note that, in the case of an FMS, the contribution (<ref>) would add to the spin Hamiltonian in Eq. (<ref>), whereas in DMS the exchange coupling between localized spins is indirect and is given solely by (<ref>). In the following, we will set the magnetic field in the z direction, i.e. e_ξ=e_z, so the Hamiltonian ℋ_ eff can be rewritten in the following XXZ form ℋ_ eff =-∑_i,j𝒥_⊥(r_ij)(S_i^xS_j^x+S_i^yS_j^y)-∑_i,j𝒥_∥(r_ij)S_i^zS_j^z, where 𝒥_⊥(k,ω;ξ)=𝒥_1(k,ω)=-λ^2/81/V∑_p[f(E_p^-)-f(E_p+k^+)/ω+E_p^--E_p+k^++(Ξ⟶-Ξ)] and 𝒥_∥(k,ω;ξ)=𝒥_1+𝒥_2=-λ^2/81/V∑_p{f(E_p^-)-f(E_p+k^-)/ω+E_p^--E_p+k^-+(Ξ⟶-Ξ)}. In order to obtain explicit expressions for the two couplings as a function of the distance (r_ij) between any pair of localized spins, we need to compute the following generic integral I_μν(k)=∫𝒟p f(E_p^μ)-f(E_p+k^ν)/ω+E_p^μ-E_p+k^ν. where we have replaced the sum with an integral using 𝒟p=d^3p/(2π)^3 ; E_p^μ=E_p+μΞ,μ=±1. We also introduce the (modified) Fermi momentum and energy k_F^μ =√(k_F^2+μ2mΞ/ħ^2)=k_F√(1+μΞ/ϵ_F), ϵ_F=ħ^2k_F^2/2m, ϵ_F^μ =ħ^2(k_F^μ)^2/2m=ϵ_F+μΞ,μ=±1 , together with the density of states at the Fermi energy ρ_F^μ=mk_F^μ/ħ^2π^2=ρ_F√(1+μΞ/ϵ_F). The density of states of a three dimensional parabolic spectrum is ρ(ϵ)=(2m)^3/2/2π^2ħ^3√(ϵ) and ρ_F=ρ(ϵ_F)=mk_F/ħ^2π^2. Finally, as usual, we introduce the dimensionless parameters x =k/k_F^μ, k/2k_F^μ≡k̃_μ, ω_μν(Ξ)=ω+(μ-ν)Ξ. Then, following the procedure described in many textbooks [see e.g. <cit.>], one obtains I_μν(k,ω;Ξ) =ρ_F^μ/4Λ_μν^-ℱ(k/2k_F^μΛ_μν^-)+ρ_F^ν/4Λ_μν^+ℱ(k/2k_F^νΛ_μν^+) where Λ_μν^∓(k,ω;Ξ)=1∓2mω_μν(Ξ)/ħ^2k^2=1∓ω_μν(Ξ)/E_k and ℱ(x)=1/2+1-x^2/4xlog[x+1/x-1] is the Lindhard function. 
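For later use, here is a minimal numerical transcription (ours, not the paper's code) of the Lindhard function, written with an absolute value inside the logarithm so that it is real on both sides of x=1:

import numpy as np

def lindhard(x):
    x = np.asarray(x, dtype=float)
    out = np.full_like(x, 0.5)                 # limiting value at x = 1
    m = np.abs(x - 1.0) > 1e-12
    out[m] = 0.5 + (1.0 - x[m]**2) / (4.0 * x[m]) \
             * np.log(np.abs((x[m] + 1.0) / (x[m] - 1.0)))
    return out

print(lindhard(np.array([1e-6, 0.5, 1.0, 2.0])))
# lindhard(x -> 0) -> 1, the property F(0) = 1 used below for the uniform mode.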
In the case Ξ=0, one has ω_μν≡ω, k_F^↑↓=k_F, and I_μν(k,ω;0) =ρ_F/4(1-2mω/ħ^2k^2)ℱ[k/2k_F(1-2mω/ħ^2k^2)]+(ω⟶-ω) =ρ_F/4[1+1/4k̃[1-(k̃-ω̃/k̃)^2]log[(k̃-ω̃/k̃)+1/(k̃-ω̃/k̃)-1]+(ω⟶-ω)] with ω̃=ω/(4ϵ_F). This is the result obtained by many authors in the absence of the external magnetic field, see e.g. the textbook <cit.>, where the calculations are done for the dynamic spin susceptibility, with an extra factor of 2 (from the electron spin). In the following, we will focus on the RKKY exchange coupling as a function of space, which is given by the static limit of the couplings (<ref>) and (<ref>). In this limit, (<ref>) becomes (Λ_μν^∓(k,0;Ξ)=1∓2m(μ-ν)Ξ/ħ^2k^2) I_μν(k,0;Ξ) =ρ_F^μ/4Λ_μν^-(k,0;Ξ)ℱ(k/2k_F^μΛ_μν^-(k,0;Ξ))+ρ_F^ν/4Λ_μν^+(k,0;Ξ)ℱ(k/2k_F^νΛ_μν^+(k,0;Ξ)). Accordingly, from Eq. (<ref>) we have 𝒥_∥(k,0;Ξ) =λ^2/8I_--(k,0;Ξ)+(Ξ⟶-Ξ)=λ^2/8[ρ_F^↑/2ℱ(k/2k_F^↑)+ρ_F^↓/2ℱ(k/2k_F^↓)]. Now, we deal with the coupling 𝒥_⊥(k,ω;Ξ). Starting from Eq. (<ref>) and using again Eq. (<ref>), we obtain Λ_-+^∓(k,0;Ξ)=1±2Ξ/E_k and I_-+(k,0;Ξ)=ρ_F^μ/4Λ_-+^-(k,0;Ξ)ℱ(k/2p_F^μΛ_-+^-(k,0;Ξ))+ρ_F^ν/4Λ_-+^+(k,0;Ξ)ℱ(k/2p_F^νΛ_-+^+(k,0;Ξ)) so that 𝒥_⊥(k,0;Ξ) =λ^2/8I_-+(k,0;Ξ)+(Ξ⟶-Ξ) or, more explicitly, 𝒥_⊥(k,0;Ξ)=λ^2/4ρ_F^↑/4[1+Ξ/2ϵ_F^↑(k/2k_F^↑)^-2]ℱ(k/2p_F^↑[1+Ξ/2ϵ_F^↑(k/2k_F^↑)^-2])+(Ξ⟶-Ξ). For Ξ=0, this becomes 𝒥_⊥(k,0;0)=λ^2/8ρ_Fℱ(k/2k_F). The result (<ref>) coincides with that of Ref. <cit.>, and the result in <cit.> is recovered if we drop the term between square brackets (i.e. for Ξ≪ϵ_F). As discussed in e.g. Ref. <cit.>, to leading order, 𝒥_1(k,0,Ξ) is not affected by the external magnetic field, and the induced extra polarization of the conduction electrons may be accounted for in the Fermi wave vector. In this case, 𝒥_1(k,0,Ξ) becomes 𝒥_⊥(k,0;Ξ) ≃λ^2/8ρ_F/2[ℱ(k/2k_F^↑)+ℱ(k/2k_F^↓)]. Comparing expression (<ref>) for 𝒥_∥ with that in Eq. (<ref>) for 𝒥_⊥, we see that, because of the factor (1±2Ξ/E_k), these effective exchange couplings are different, thus showing that the splitting field between the spin-up and spin-down bands induces an anisotropy in the effective exchange coupling. Indeed, if we drop the field dependence in the prefactors in Eq. (<ref>), we see that 𝒥_⊥(r_ij)=𝒥_∥(r_ij). This implies that, at this level of approximation, there is no anisotropy in the effective exchange coupling, as is the case in the standard RKKY theory <cit.>, for which 𝒥(r_ij)=J_0 Φ(2p_Fr_ij), with J_0=λ^2p_F^3ρ_F/2π=λ^2mp_F^4/(2π^3ħ^2). The function Φ is defined in Section <ref>. It was argued in Ref. YuLee_prb95 that, in DMS, the symmetry of the host crystal, and thereby that of the wave functions, may induce a stronger directional dependence of the effective exchange coupling between the magnetic ions. In conclusion, the anisotropy of the RKKY exchange coupling, which results from the band spin splitting, can be ignored as far as the ground state of the system is concerned, i.e. when the directions of the localized spins (magnetic ions) and the carrier spins are parallel. However, when the carrier magnetization is tilted by an anisotropy field, or by an external magnetic field, as is the case here, the precession angles of the two magnetizations become different. This effect leads to a nutational motion of the magnetization of the effective spin system, as will be demonstrated in the next section; the short numerical sketch below makes the anisotropy explicit. 
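The sketch below (ours) evaluates the static couplings of Eqs. (<ref>) and (<ref>) in units where λ²ρ_F/8=1 and k_F=1, reusing lindhard() from the earlier snippet; the nonzero difference 𝒥_∥−𝒥_⊥ is precisely the XXZ anisotropy induced by the splitting field:

import numpy as np

def couplings(k, Xi, eps_F=1.0):
    # k in units of k_F; Xi enters through Xi/eps_F
    kFu, kFd = np.sqrt(1 + Xi / eps_F), np.sqrt(1 - Xi / eps_F)   # k_F^up, k_F^down
    J_par = 0.5 * (kFu * lindhard(k / (2 * kFu)) + kFd * lindhard(k / (2 * kFd)))

    def branch(kFs, eps_s, sign):
        # one (Xi -> -Xi) term of J_perp, Eq. (<ref>)
        shift = sign * Xi / (2 * eps_s) * (2 * kFs / k) ** 2
        return 0.5 * kFs * (1 + shift) * lindhard(k / (2 * kFs) * (1 + shift))

    J_perp = branch(kFu, eps_F + Xi, +1) + branch(kFd, eps_F - Xi, -1)
    return J_par, J_perp

k = np.array([0.5, 1.0, 1.5])
J_par, J_perp = couplings(k, Xi=0.1)
print(J_par - J_perp)     # vanishes for Xi = 0, nonzero otherwise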
§.§.§ Energy dispersion Combining the initial Hamiltonian (<ref>) for the localized spins with the contribution induced by their coupling to the conduction electrons, namely the contribution in Eq. (<ref>), we obtain the total Hamiltonian for the localized spins in the magnetic semiconductor ℋ_FMS=-∑_i,j𝒥̃_⊥(r_ij)(S_i^xS_j^x+S_i^yS_j^y)-∑_i,j𝒥̃_∥(r_ij)S_i^zS_j^z-g_Sμ_BH_0·∑_iS_i^z with 𝒥̃_⊥(r_ij)=J+𝒥_⊥(r_ij) and 𝒥̃_∥(r_ij)=J+𝒥_∥(r_ij). Defining the retarded Green's function G^(r)(i-j,t)=-iθ(t)⟨[S_i^-(t),S_j^+(0)]⟩ and using its equation of motion idG^(r)(i-j,t)/dt = δ(t)⟨[S_i^-(0),S_j^+(0)]⟩ +θ(t)⟨[dS_i^-(t)/dt,S_j^+(0)]⟩ together with the SU(2) spin algebra [S_i^z,S_j^μ]=μ S_i^μδ_ij,[S_i^+,S_j^-]=2S_i^zδ_ij,μ=±, we obtain the spin-wave dispersion relation ω(k) as the solution of the equation ħω(k,Ξ)=g_Sμ_BH_0+2zJ⟨ S^z⟩(1-γ_k)+2⟨ S^z⟩[𝒥_∥(0,ω(k),Ξ)-𝒥_⊥(𝐤,ω(k),Ξ)]. Here, ⟨ S^z⟩ is the average magnetization. We have also used the Bogoliubov-Tyablikov<cit.> decoupling approximation -iθ(t)⟨[(S_k^zS_i^-)(t),S_j^+(0)]⟩ =⟨ S^z⟩ G_ij^(r)(t) with the assumption ⟨ S_i^z⟩ =⟨ S^z⟩. In the absence of the sd coupling, Eq. (<ref>) becomes ħω(k,ξ) =g_Sμ_BH_0+2zJ⟨ S^z⟩(1-γ_k), which is the usual dispersion law of a Heisenberg spin Hamiltonian, with z being the coordination number. In the static limit, i.e. setting ω=0 on the right-hand side of (<ref>), we have (in the approximation Ξ≪ϵ_F) ħω(k,Ξ) ≃ g_Sμ_BH_0+2zJ⟨ S^z⟩(1-γ_k)+2⟨ S^z⟩[𝒥_∥(0,0,Ξ)-𝒥_⊥(𝐤,0,Ξ)]. The full analysis of the excitation modes and their experimental characterization will be presented in a future work. Using a similar technique of averaging out the conduction-electron degrees of freedom, along with the Green's function and self-energy approach, the effect of band occupation on the spin-wave dispersion, and thereby on the critical temperature, was studied in the framework of the Kondo-lattice model<cit.>, see also <cit.>. Here, we focus on the first effects of the sd coupling on the precession frequency (uniform mode) and on the low-lying modes, and on how they induce nutation of the magnetization in the effective spin system. Let us denote by ħΔω_sd(k,Ξ) the contribution to ħω stemming from the interaction between the conduction electrons and the localized spins, namely ħΔω_sd(k,Ξ)=2⟨ S^z⟩[𝒥_∥(0,0,Ξ)-𝒥_⊥(𝐤,0,Ξ)]. Using the fact that ℱ(0)=1, Eq. (<ref>) yields 𝒥_∥(0,0,Ξ)=λ^2/16(ρ_F^↑+ρ_F^↓)=λ^2ρ_F/16(√(1+Ξ/ϵ_F)+√(1-Ξ/ϵ_F)) and, for small Ξ, 𝒥_∥(0,0,Ξ)≃λ^2ρ_F/8. For the coupling 𝒥_⊥(𝐤,0,Ξ), we may proceed by an expansion in the long-wavelength limit (k→0) using the following expansion (a≠0) (1+a/κ^2)ℱ(κ(1+a/κ^2))≃1/3a-(5a-1)/15a^3κ^2, with a=Ξ/2ϵ_F^↑ and κ=k/2p_F^↑. Therefore, for the uniform or precession mode (k=0) we obtain (with Ξ expressed in units of ϵ_F) ħΔω_prec(0,Ξ)=2⟨ S^z⟩λ^2ρ_F/8[(1+Ξ)^3/2-(1-Ξ)^3/2/3Ξ-1/2(√(1+Ξ)+√(1-Ξ))]. The term between square brackets is an increasing function of the splitting field Ξ. This implies that the precession frequency (f=ω/2π) of the localized spins increases with the sd coupling. This result will be confirmed in the next section by numerical simulations [see Fig. <ref>]. For small Ξ, the precession frequency of the localized spins is given by ħω(0,Ξ)≃ g_Sμ_BH_0+⟨ S^z⟩λ^2ρ_Fv_0/48Ξ^2. In a typical magnetic semiconductor, for which λ^2ρ_Fv_0/8≃10 meV, the last term leads to a frequency on the order of 10 GHz for Ξ=0.1. For the nonuniform mode (k≠0), we obtain in the long-wavelength limit ħΔω_nut(k→0,Ξ) ≡ħΔω_sd(k,Ξ)-ħΔω_prec(0,Ξ) =2⟨ S^z⟩(λ^2ρ_F/8)(16/5)[(1-Ξ)^3/2(1+3Ξ/2)-(1-3Ξ/2)(1+Ξ)^3/2/Ξ^3]k^2. For the same parameters as before, this contribution is in the THz range. Accordingly, as discussed in Ref. 
MyersEtAl_prb05 and references therein, in nonmagnetic conventional semiconductor hetero-structures, the variation of the spin-orbit coupling allows for an electrical control of coherent electron-spin dynamics involving energies on the order of the eV, leading to spin precession frequencies in the GHz range. In comparison, the sd exchange interaction in magnetic semiconductors induces large spin splittings in the conduction band states with energies on the order of 10–100 meV, or frequencies in the THz regime <cit.>. § MAGNETIC NUTATION IN MAGNETIC SEMICONDUCTORS As discussed in the introduction, the main objective of this work is to i) derive the effective exchange couplings between localized spins in the presence of a magnetic field and ii) study the effect of the anisotropy of the latter on the dynamics of the magnetization of the effective spin system. More precisely, we demonstrate that this exchange-coupling anisotropy induces a nutational motion of the net magnetic moment of the spin subsystem ; this nutation is exemplified by low-amplitude and high-frequency oscillations of the component of the net magnetic moment in the direction of the magnetic field (here z). In addition, this nutational motion induces a variation of the precession frequency as a function of the splitting parameter Ξ. In order to better understand these effects, we need to obtain the expressions of the effective couplings 𝒥_∥ and 𝒥_⊥ as functions of the distance within the lattice hosting the localized spins S_i. §.§ Exchange coupling in direct space In order to derive the spatial dependence of the exchange couplings, i.e 𝒥_∥(r_ij) and 𝒥_⊥(r_ij) appearing in the effective Hamiltonian (<ref>), we need to compute the inverse Fourier transform of the couplings in Eqs. (<ref>, <ref>). Using the definition 𝒥_α(r,0,Ξ) =∫𝒟p e^-ik·r𝒥_α(k,0,Ξ)=∫_0^∞kdk/2π^2r𝒥_α(k,0,Ξ)sin(kr) and following the standard procedure described, for example in Ref. kakehashi_springer12, we obtain for the longitudinal coupling 𝒥_∥(r,Ξ) =λ^2/2π[ρ_F^↑/2(k_F^↑)^3Φ(2k_F^↑r)+ρ_F^↓/2(k_F^↓)^3Φ(2k_F^↓r)] with Φ(x)=sin x-xcos x/x^4. More explicitly, we have 𝒥_∥(r,Ξ)=λ^2mk_F^4/2π^3ħ^21/2[(1+Ξ/ϵ_F)^2Φ(2k_F^↑r)+(1-Ξ/ϵ_F)^2Φ(2k_F^↓r)]. For Ξ=0, this reduces to the well known result 𝒥_∥(r)=λ^2mk_F^4/2π^3ħ^2Φ(2k_Fr). The calculation of the inverse Fourier transform of the transverse exchange coupling (<ref>) is more involved. In Ref. werwil_msp06 an expression for this coupling was given which we have checked and reproduced in Appendix <ref>. We have numerically computed the inverse Fourier transforms of the couplings 𝒥_∥(r,Ξ) and 𝒥_⊥(r,Ξ) and checked that we recover the expressions in Eqs. (<ref>, <ref>). We have also checked their numerical values against those obtained by the numerical studies of Refs. JalboutEtAl_apl02, SharmaEtAl_jmrt23 for the specific case of ZnO of variable doping with Mn or Co. One may resort to an expansion in terms of the field Ξ. However, it turns out that the first contribution is of second order but the corresponding expression is rather cumbersome and will not be used here. §.§ Numerical study of magnetic nutation of the effective spin model We consider the simple system of spins localized on a simple cubic lattice, coupled by the distance-dependent effective exchange couplings given in Eqs. (<ref>, <ref>). 
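Both the k=0 bracket of Eq. (<ref>) and the real-space coupling of Eq. (<ref>) are easily checked numerically. The sketch below (ours, in units λ²mk_F⁴/(2π³ħ²)=1 and k_F=1, with Ξ standing for the dimensionless ratio Ξ/ϵ_F) verifies the quoted small-Ξ behavior of the precession correction, whose bracket behaves as Ξ²/12 and thus reproduces the Ξ²/48 prefactor once multiplied by 2⟨S^z⟩λ²ρ_F/8, and illustrates the beating of the two Fermi momenta in 𝒥_∥(r,Ξ):

import numpy as np

def bracket(Xi):
    # the k = 0 square bracket of Eq. (<ref>)
    return ((1 + Xi)**1.5 - (1 - Xi)**1.5) / (3 * Xi) \
           - 0.5 * (np.sqrt(1 + Xi) + np.sqrt(1 - Xi))

for Xi in (0.01, 0.05, 0.1, 0.3):
    print(Xi, bracket(Xi), Xi**2 / 12)     # last two columns agree for small Xi

def Phi(x):
    # RKKY range function, Eq. (<ref>)
    return (np.sin(x) - x * np.cos(x)) / x**4

def J_par_r(r, Xi):
    kFu, kFd = np.sqrt(1 + Xi), np.sqrt(1 - Xi)
    return 0.5 * ((1 + Xi)**2 * Phi(2 * kFu * r) + (1 - Xi)**2 * Phi(2 * kFd * r))

r = np.linspace(1.0, 15.0, 8)
print(J_par_r(r, 0.0))      # reduces to Phi(2 k_F r): the familiar oscillatory tail
print(J_par_r(r, 0.2))      # two Fermi momenta -> beating of the oscillations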
More precisely, we set up a system of 𝒩 localized spins S_i whose Hamiltonian is composed of the contribution (<ref>) together with the effective contribution (<ref>) induced by the sd coupling to the conduction electrons. Note that taking into account both direct and indirect exchange applies to an FMS, while for a DMS we would drop the contribution (<ref>). Indeed, the (indirect) RKKY coupling becomes dominant in solid-state systems containing diluted magnetic moments in a conducting nonmagnetic host when there is a sufficiently strong exchange coupling between the localized moments and the conduction electrons [see the discussion in Ref. ZhouEtAl_np10]. Then, we define the net magnetic moment of this system as m=(1/𝒩)∑_i=1^𝒩S_i and study its (undamped) dynamics using the following system of coupled Larmor equations dS_i/dτ=S_i×h_ eff,i, with the (normalized) local effective field h_ eff,i acting on S_i being defined by h_ eff,i=-δℋ/δS_i; τ is the reduced time τ=t/τ_s, where τ_ s=μ_a/(γ J) is the characteristic time of the system's dynamics and μ_a is the magnetic moment per atom. Accordingly, the frequency ω=2π f=2π/T is measured in units of τ_ s^-1. The details of the numerical procedure for studying the magnetic nutation in such systems are described in Refs. <cit.> and will not be repeated here. We start from an initial state where all spins make some angle with respect to the z axis, which coincides here with the direction of the magnetic field, and then let the system evolve freely in time. The results are presented and discussed in the following paragraphs; a bare-bones version of the time integrator is sketched below. In Fig. <ref>, we plot the time trajectory of the components m_α,α=x,y,z of the net magnetic moment for Ξ/ϵ_F=0 (left) and Ξ/ϵ_F=0.05 (right). On the left, the component m_z is constant while the components m_x,y exhibit a periodic oscillatory motion, showing that the net magnetic moment executes a precessional rotation around the z axis. On the right, in the presence of the splitting field, the z component of the magnetic moment of the effective spin system exhibits a nutational motion, while the components m_x,y show an oscillatory motion (precession) with a period that differs from that of the case Ξ=0 on the left. As explained earlier, this is due to the anisotropy of the RKKY exchange coupling (i.e. Ξ≠0). The plot in Fig. <ref> shows that the precession frequency is a function of Ξ. More precisely, we plot (in red) the correction to the precession frequency, due to the sd coupling, given by the quantity in square brackets in Eq. (<ref>) for the uniform mode (k=0). The blue symbols represent the result rendered by the numerical simulation of the effective spin system with space-dependent effective exchange couplings. The good match between the analytical and numerical results (for small Ξ) shows that the effective XXZ spin model confirms the predictions of our general formalism in the presence of a magnetic field. For the nonuniform mode (k≠0), we plot in Fig. <ref> (left) the frequency Δ f_nut≡Δω_nut/2π, where Δω_nut is given by Eq. (<ref>) for ak≃0.0468 (red line), together with the result of the numerical simulations (symbols). On the right, we plot the amplitude of the corresponding nutational mode as rendered by the numerical study of the dynamics of the net magnetic moment. The frequency plot shows that the RKKY result deviates from that rendered by the k^2 expansion at relatively high values of the splitting parameter Ξ. 
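The bare-bones integrator announced above is sketched here; this is a deliberately minimal illustration (ours), with 𝒩=4 spins and placeholder power-law couplings, whereas the production runs use the full distance-dependent couplings of Eqs. (<ref>, <ref>):

import numpy as np

def field(S, pos, J_perp, J_par, b):
    # h_i = -dH/dS_i for the XXZ Hamiltonian, plus a Zeeman field b along z
    h = np.zeros_like(S)
    for i in range(len(S)):
        for j in range(len(S)):
            if i == j:
                continue
            r = np.linalg.norm(pos[i] - pos[j])
            h[i, :2] += 2 * J_perp(r) * S[j, :2]    # transverse (x, y) part
            h[i, 2] += 2 * J_par(r) * S[j, 2]       # longitudinal (z) part
    h[:, 2] += b
    return h

def step(S, pos, J_perp, J_par, b, dt):
    # one Heun (predictor-corrector) step of dS_i/dtau = S_i x h_i,
    # followed by a re-normalization that keeps |S_i| = 1
    k1 = np.cross(S, field(S, pos, J_perp, J_par, b))
    S1 = S + dt * k1
    k2 = np.cross(S1, field(S1, pos, J_perp, J_par, b))
    S = S + 0.5 * dt * (k1 + k2)
    return S / np.linalg.norm(S, axis=1, keepdims=True)

pos = np.array([[float(i), 0.0, 0.0] for i in range(4)])     # 4 spins on a line
S = np.tile([np.sin(0.3), 0.0, np.cos(0.3)], (4, 1))         # tilted initial state
for _ in range(10000):
    S = step(S, pos, lambda r: 1.0 / r**3, lambda r: 1.1 / r**3, b=1.0, dt=1e-3)
print(S.mean(axis=0))    # net moment m = (1/N) sum_i S_i; m_z oscillates when
                         # J_par != J_perp, which is the nutation signal

Tracking m_z(τ) over the run and Fourier-transforming it yields the nutation frequency, while m_x,y give the precession.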
A similar result was obtained in Ref. KonigEtAl_prl00, which shows, for a given Ξ, such a deviation of the long-wavelength spin-wave branch from that of the mode induced by the sd coupling. On the other hand, the plot on the right clearly shows that the amplitude of the nutation mode increases with Ξ. Note that the derivation of an analytical expression for the amplitude of the nutation mode is somewhat more involved and is postponed to a subsequent work (e.g. using an approach similar to that of Ref. suhl57jpcs). Again, the result in Fig. <ref> (left) shows that the predictions of our analytical calculations are well recovered by the numerical simulation of the effective spin system, at least for materials with Ξ small with respect to ϵ_F. § CONCLUSIONS We have generalized the formalism of RKKY calculations to include external electrical and magnetic disturbances acting on a system composed of two subsystems: the subsystem of localized spins and the subsystem of conduction electrons, the two being coupled by the local sd interaction. After integrating over the fast (Grassmann) degrees of freedom associated with the conduction electrons, we have derived the effective spin Hamiltonian with space-dependent exchange couplings in Fourier and direct space, which exhibit a directional anisotropy due to the effective field responsible for the band splitting between the spin-up and spin-down bands. Using this effective XXZ Hamiltonian, we have studied the dynamics of the net magnetic moment of the effective spin system by (numerically) solving the system of coupled Larmor equations. We have computed the time trajectory of the components of the net magnetic moment and inferred its frequencies and amplitude. We have shown that the exchange anisotropy leads to a nutational motion of the net magnetic moment whose frequency (amplitude) is a decreasing (increasing) function of the splitting field. Moreover, we have shown that the precession frequency increases with the latter, reaching a variation on the order of several tens of GHz. On the other hand, the low-lying nonuniform modes have frequencies in the THz regime. These results confirm those of previous studies on the nutational dynamics in other kinds of materials. We believe that the present work may form the basis for future studies of the nutational dynamics in (diluted) magnetic semiconductors, with the perspective of offering a new handle on the opto-electronic control of the electric and magnetic properties of such systems and their applications in spin-based electronics. For this purpose, further theoretical developments should include a study of the effects of magneto-crystalline anisotropy, crystal structure and doping, and should consider a generalization to multi-band models, as this may lead to a richer nutation spectrum. [Prinz(1990)]Prinz_sci90 authorG. A. Prinz, journalScience volume250, pages1092 (year1990). [Wolf et al.(2001)]WolfEtAl_sci01 authorWolf et al., journalScience volume294, pages1488 (year2001). [J. K. Furdyna and J. Kossut(2002)]FurdynaKossut_ap authorJ. K. Furdyna and J. Kossut, in booktitleDiluted Magnetic Semiconductors, edited by editorJ. K. Furdyna and editorJ. Kossut (publisherACADEMIC PRESS, INC., year2002), vol. volume25. [H. Ohno, H. Munekata, T. Penny, S. von Molnar, and L. L. Chang(1992)]OhnoEtAl_prl92 authorH. Ohno, H. Munekata, T. Penny, S. von Molnar, and L. L. Chang, journalPhys. Rev. Lett. volume68, pages2664 (year1992). [F. Matsukura and H. Ohno and T. 
Dietl(2002)]MatsukuraEtAl_Buschow02 authorF. Matsukura and H. Ohno and T. Dietl, journalHandbook of Magnetic Materials volume14, pages1 (year2002). [A. Singh, A. Datta, S. K. Das, and V. A. Singh(2003)]SinghEtAl_prb03 authorA. Singh, A. Datta, S. K. Das, and V. A. Singh, journalPhys. Rev. B volume68, pages235208 (year2003). [Jungwirth et al.(2006)Jungwirth, Sinova, Ma ššek, Ku ččera, and MacDonald]JungwirthETAl_RevModPhys.78.809 authorT. Jungwirth, authorJ. Sinova, authorJ. Ma ššek, authorJ. Ku ččera, and authorA. H. MacDonald, journalRev. Mod. Phys. volume78, pages809 (year2006). [Dietl et al.(2007)Dietl, Ohno, and Matsukura]DietlEtAl_ieee07 authorT. Dietl, authorH. Ohno, and authorF. Matsukura, journalIEEE Transactions on Electron Devices volume54, pages945 (year2007). [O. W. Dietrich and Meyer(1975)]DietrichEtAl_prb75 authorJ. O. W. Dietrich, A. J. Henderson and authorH. Meyer, journalPhys. Rev. B volume12, pages2844 (year1975). [A. Mauger and C. Goddart(1986)]MaugerGoddart_PR86 authorA. Mauger and C. Goddart, journalPhys. Rep. volume141, pages51 (year1986). [Mauger and Mills(1983)]MaugerMills_PhysRevB.28.6553 authorA. Mauger and authorD. L. Mills, journalPhys. Rev. B volume28, pages6553 (year1983). [White and Woolsey(1968)]whiwoo68pr authorR. M. White and authorR. B. Woolsey, journalPhys. Rev. volume176, pages908 (year1968). [Woolsey and White(1970)]whiwoo70prb authorR. B. Woolsey and authorR. M. White, journalPhys. Rev. B volume1, pages4474 (year1970). [Dietl(2002)]Dietl_sst02 authorT. Dietl, journalSemicond. Sci. Technol. volume17, pages377 (year2002). [Katayama-Yoshida et al.(2007)]KatayamaEtAl_re12 authorKatayama-Yoshida et al., journalPhys. Stat. Sol. (a) volume204, pages15 (year2007). [Orlov et al.(2012)]OrlovEtAl_re12 authorOrlov et al., journalRussian Microelectronics volume41, pages443 (year2012). [Hasan(2024)]Hasan_arxiv24 authorN. Hasan, journalarXiv:2401.17554v1 (year2024). [Th. Gerrits, M. L. Schneider, A. B. Kos, and T. J. Silva(2006)]GerritsEtAl_prb06 authorTh. Gerrits, M. L. Schneider, A. B. Kos, and T. J. Silva, journalPhys. Rev. B volume73, pages094454 (year2006). [Li et al.(2015)Li, Barra, Auffret, Ebels, and Bailey]LiEtAl_prb15 authorY. Li, authorA.-L. Barra, authorS. Auffret, authorU. Ebels, and authorW. E. Bailey, journalPhys. Rev. B volume92, pages140413 (year2015). [K. Neeraj et al.(2021)]NeerajEtAl_nat21 authorK. Neeraj et al., journalNature Phys. volume17, pages245 (year2021). [U.Unikandanunni et al.(2022)]Unikandanunni_prl22 authorU.Unikandanunni et al., journalPhys.Rev.Lett. volume129, pages237201 (year2022). [A. De, J. Schlegel, A. Lentfert, L. Scheuer, B. Stadtmller1, Ph. Pirro, G. von Freymann, U. Nowak, and M Aeschlimann(2024)]DeEtAl_arxiv24 authorA. De, J. Schlegel, A. Lentfert, L. Scheuer, B. Stadtmller1, Ph. Pirro, G. von Freymann, U. Nowak, and M Aeschlimann, journalarXiv:2405.01334 (year2024). [Torrey(1949)]Torrey49pr authorH. C. Torrey, journalPhys. Rev. volume76, pages1059 (year1949). [Hocker and Tang(1968)]hoctan68prl authorG. B. Hocker and authorC. L. Tang, journalPhys. Rev. Lett. volume21, pages592 (year1968). [Verma and Fessenden(1973)]verfes73jcp authorN. C. Verma and authorR. W. Fessenden, journalJ. Chem. Phys. volume58, pages2501 (year1973). [P. W. Atkins, A. J. Dobbs, and K. A. McLauchlan(1974)]atkinsEtal74cpl authorP. W. Atkins, A. J. Dobbs, and K. A. McLauchlan, journalChem. Phys. Lett. volume25, pages105 (year1974). [Fedoruk(2002)]fedoruk02jas authorG. G. Fedoruk, journalJ. Appl. Spectroscopy volume69, pages161 (year2002). 
[Zhu and Fransson(2006)]zhufra06jpcm authorJ.-X. Zhu and authorJ. Fransson, journalJ. Phys.: Condens. Matter volume18, pages9929 (year2006). [Fransson and Zhu(2008)]franzhu08njp authorJ. Fransson and authorJ.-X. Zhu, journalNew Journal of Physics volume10, pages013017 (year2008). [Fransson(2008)]franson08nanotech authorJ. Fransson, journalNanotechnology volume19, pages285714 (year2008). [Nussinov et al.(2005)Nussinov, Shnirman, Arovas, Balatsky, and Zhu]PhysRevB.71.214520 authorZ. Nussinov, authorA. Shnirman, authorD. P. Arovas, authorA. V. Balatsky, and authorJ. X. Zhu, journalPhys. Rev. B volume71, pages214520 (year2005). [Zhu et al.(2004)Zhu, Nussinov, Shnirman, and Balatsky]PhysRevLett.92.107001 authorJ.-X. Zhu, authorZ. Nussinov, authorA. Shnirman, and authorA. V. Balatsky, journalPhys. Rev. Lett. volume92 (year2004). [Mondal et al.(2017)Mondal, Berritta, Nandy, and Oppeneer]mondaletal17prb authorR. Mondal, authorM. Berritta, authorA. K. Nandy, and authorP. M. Oppeneer, journalPhys. Rev. B volume96, pages024425 (year2017). [Mondal and Oppeneer(2020)]monopp_jpcm20 authorR. Mondal and authorP. M. Oppeneer, journalJ. Phys.: Condens. MatterB volume32, pages455802 (year2020). [Bhattacharjee et al.(2012)Bhattacharjee, Nordström, and Fransson]PhysRevLett.108.057204 authorS. Bhattacharjee, authorL. Nordström, and authorJ. Fransson, journalPhys. Rev. Lett. volume108, pages057204 (year2012). [Fähnle et al.(2011)Fähnle, Steiauf, and Illg]PhysRevB.84.172403 authorM. Fähnle, authorD. Steiauf, and authorC. Illg, journalPhys. Rev. B volume84, pages172403 (year2011). [T. Kikuchi and G. Tatara(2015)]PhysRevB.92.184410 authorT. Kikuchi and G. Tatara, journalPhys. Rev. B volume92, pages184410 (year2015). [D. Thonig, O. Eriksson, M. Pereiro(2017)]thoningetal17sp authorD. Thonig, O. Eriksson, M. Pereiro, journalScientific Reports volume7, pages931 (year2017). [R. Cheng, X. Wu, and D. Xiao(2017)]chengetal17prb authorR. Cheng, X. Wu, and D. Xiao, journalPhys. Rev. B volume96, pages054409 (year2017). [T. Kikuchi, and G. Tatara(2015)]kiktat_prb15 authorT. Kikuchi, and G. Tatara, journalPhys. Rev. B volume92, pages184410 (year2015). [I. Makhfudz, and E. Olive, S. Nicolis(2020)]MakhlufEtAl_apl20 authorI. Makhfudz, and E. Olive, S. Nicolis, journalApplied Physics Letters volume117, pages132403 (year2020). [P. Thibaudeau, S. Nicolis(2021)]ThiNico_epjb21 authorP. Thibaudeau, S. Nicolis, journalEur. Phys. J. B volume94, pages196 (year2021). [M.-C. Ciornei, J. M. Rubi, and J. E. Wegrowe(2011)]ciorneietal11prb authorM.-C. Ciornei, J. M. Rubi, and J. E. Wegrowe, journalPhys. Rev. B volume83, pages020410 (year2011). [E. Olive, Y. Lansac, and J. E. Wegrowe(2012)]oliveetal12apl authorE. Olive, Y. Lansac, and J. E. Wegrowe, journalAppl. Phys. Lett. volume100, pages192407 (year2012). [Olive et al.(2015)Olive, Lansac, Meyer, Hayoun, and Wegrowe]doi:10.1063/1.4921908 authorE. Olive, authorY. Lansac, authorM. Meyer, authorM. Hayoun, and authorJ.-E. Wegrowe, journalJournal of Applied Physics volume117, pages213904 (year2015). [E. Olive and J. E. Wegrowe(2016)]oliweg16jpcm authorE. Olive and J. E. Wegrowe, journalJ. Phys.: Condens. Mat. volume28, pages106001 (year2016). [L. D. Landau and E. M. Lifshitz(1935)]lanlif35 authorL. D. Landau and E. M. Lifshitz, journalPhys. Z. Sowjetunion volume8, pages153 (year1935). [Gilbert(1956)]gilbert56phd authorT. L. Gilbert, Ph.D. thesis, schoolIllinois Institute of Technology, Chicago (year1956). [R. Bastardis, F. Vernay, and H. Kachkachi(2018)]basvenkac_prb18 authorR. Bastardis, F. 
Vernay, and H. Kachkachi, journalPhys. Rev. B volume98, pages165444 (year2018). [M. Adams, R. Bastardis, A. Michels, and H. Kachkachi(2024)]AdamsEtAl_preprint24 authorM. Adams, R. Bastardis, A. Michels, and H. Kachkachi, journalarXiv:2405.14586 (year2024). [D. D. Awschalom and N. Samarth(2002)]AwschalomSmarth02 authorD. D. Awschalom and N. Samarth, in booktitleSemiconductor Spintronics and Quantum Computation, edited by editorD. D. Awschalom and editorD. Loss (publisherSpringer, year2002), pp. pages147–193. [Myers et al.(2005)Myers, Ku, Li, Samarth, and Awschalom]MyersEtAl_prb05 authorR. C. Myers, authorK. C. Ku, authorX. Li, authorN. Samarth, and authorD. D. Awschalom, journalPhys. Rev. B volume72, pages041302 (year2005). [Irkhin(2010)]Irkhin_pmm2010 authorV. Y. Irkhin, journalPhys. Metals Metallogr. volume110, pages602 (year2010). [Takahashi(2010)]takahashi_carrier_2010 authorM. Takahashi, journalMaterials volume3, pages3740 (year2010), ISSN issn1996-1944. [Nagaev(1974)]nagaev_spin_1974 authorE. L. Nagaev, journalphysica status solidi (b) volume65, pages11 (year1974). [Nagaev(1983)]nagaev83mir authorE. L. Nagaev, titlePhysics of magnetic semiconductors (publisherMir Publishers, year1983). [Sutarto(2009)]sutarto_euo_2009 authorR. Sutarto, typeThesis/Dissertation, schoolKöln University, addressKöln, Germany (year2009). [Sato et al.(2010)Sato, Bergqvist, Kudrnovský, Dederichs, Eriksson, Turek, Sanyal, Bouzerar, Katayama-Yoshida, Dinh et al.]SatoEtAl_RevModPhys.82.1633 authorK. Sato, authorL. Bergqvist, authorJ. Kudrnovský, authorP. H. Dederichs, authorO. Eriksson, authorI. Turek, authorB. Sanyal, authorG. Bouzerar, authorH. Katayama-Yoshida, authorV. A. Dinh, et al., journalRev. Mod. Phys. volume82, pages1633 (year2010). [Dietl(2010)]Dietl_nm10 authorT. Dietl, journalNature Materials volume9, pages965 (year2010). [Kalita et al.(2023)Kalita, Bhushan, and Robindro Singh]KALITA2023116201 authorH. Kalita, authorM. Bhushan, and authorL. Robindro Singh, journalMaterials Science and Engineering: B volume288, pages116201 (year2023). [Kim et al.(1973)Kim, Schwartz, and Praddaude]kimetal73prb authorD. J. Kim, authorB. B. Schwartz, and authorH. C. Praddaude, journalPhys. Rev. B volume7, pages205 (year1973). [F. Vernay and H. Kachkachi(2024)]verkac2024phys authorF. Vernay and H. Kachkachi, volumeIn preparation (year2024). [Jan A. Gaj, Jacek Kossut(2011)]GajKos_spr11 authorJan A. Gaj, Jacek Kossut, in booktitleSpringer Series in materials science 144, edited by editorR. Hull C. Jagadish R.M. Osgood, Jr. J. Parisi Z. Wang H. Warlimont (publisherSpringer Science & Business Media, year2011). [Wesselinowa(1983)]wesselinowa83PSS authorJ. Wesselinowa, journalPhys. Stat. Sol. (b) volume120, pages585 (year1983). [J. Jensen and A.R. Mackintosh(1991)]jenmac_ox1991 authorJ. Jensen and A.R. Mackintosh, titleRare earth magnetism: structures and excitations (publisherClarendon Press, addressOxford, year1991). [Beaulac and Gamelin(2010)]BeaDan_prb10 authorR. Beaulac and authorD. R. Gamelin, journalPhys. Rev. B volume82, pages224401 (year2010). [Feynman(1950)]feynman_mathematical_1950 authorR. P. Feynman, journalPhysical Review volume80, pages440 (year1950). [Schwinger(1951)]schwinger_gauge_1951 authorJ. Schwinger, journalPhysical Review volume82, pages664 (year1951). [J.W. Negele and H. Orland(1998)]negorl98 authorJ.W. Negele and H. Orland, titleQuantum Many-Particle Systems (publisherPerseus Publisher, addressCambridge-Massachusetts, year1998). [König et al.(2000)König, Lin, and MacDonald]KonigEtAl_prl00 authorJ. 
König, authorH.-H. Lin, and authorA. H. MacDonald, journalPhys. Rev. Lett. volume84, pages5628 (year2000). [König et al.(2001)König, Jungwirth, and MacDonald]KonigEtAl_prb01 authorJ. König, authorT. Jungwirth, and authorA. H. MacDonald, journalPhys. Rev. B volume64, pages184423 (year2001). [Ch. Mudry(2014)]MudryWS2014 authorCh. Mudry, titleLecture Notes on Field Theory in Condensed Matter Physics (publisherWorld Scientific, year2014). [P. Coleman(2015)]coleman2015 authorP. Coleman, titleIntroduction to Many-Body Physics (publisherCambridge University Press, addressCambridge, year2015). [A.M. Werpachowska and Z. Wilamowski(2006)]werwil_msp06 authorA.M. Werpachowska and Z. Wilamowski, journalMaterials Science-Poland volume24, pages1 (year2006). [M. A. Ruderman and C. Kittel(1954)]rudkit_pr54 authorM. A. Ruderman and C. Kittel, journalPhys. Rev. volume96, pages99 (year1954). [S.-S. Yu and V.-C. Lee(1995)]YuLee_prb95 authorS.-S. Yu and V.-C. Lee, journalPhys. Rev. B volume52, pages4647 (year1995). [N. N. Bogolyubov and S. V. Tyablikov(1959)]bogtya59 authorN. N. Bogolyubov and S. V. Tyablikov, journalDoklady volume4, pages589 (year1959). [C. Santos and W. Nolting(2002)]SantosNolting_prb02 authorC. Santos and W. Nolting, journalPhys. Rev. B volume65, pages144419 (year2002). [Y. Kakehashi(2012)]kakehashi_springer12 authorY. Kakehashi, titleModern theory of magnetism in metals and alloys (publisherSpringer-Verlag, addressBerlin, year2012). [A. F. Jalbout, H. Chen, and S. Whittenburg(2002)]JalboutEtAl_apl02 authorA. F. Jalbout, H. Chen, and S. Whittenburg, journalAppl. Phys. Lett. volume81, pages2217 (year2002). [T. Sharma, R. Jain, N. Ahmad, M. Ahmed, S. Oh, and S. Siddiqui(2023)]SharmaEtAl_jmrt23 authorT. Sharma, R. Jain, N. Ahmad, M. Ahmed, S. Oh, and S. Siddiqui, journalJ. Mater. Research and Tech. volume26, pages7483 (year2023). [Zhou et al.(2010)]ZhouEtAl_np10 authorZhou et al., journalNature Phys volume6, pages187 (year2010). [H. Suhl(1957)]suhl57jpcs authorH. Suhl, journalJ. Phys. Chem. Solids volume1, pages209 (year1957). [L.H. Ryder(Second Edition, 1996)]ryder96 authorL.H. Ryder, titleQuantum Field Theory (publisherCambridge Univ. Press, addressNew York, yearSecond Edition, 1996). [Fetter and Walecka(1971)]FetterWalecka71 authorA. L. Fetter and authorJ. D. Walecka, titleQuantum theory of many-particle systems (publisherDover Publications, Inc., addressNew York, year1971). [Babcenco and Cottam(1981)]BabcencoCottam_JPhysCSoldStatPhys.14.5347 authorA. Babcenco and authorM. G. Cottam, journalPhys. stat. Sol (b) volume14, pages5347 (year1981). [Balcerzak(2006)]balcerzak_pss06 authorT. Balcerzak, journalphysica status solidi c volume3, pages212 (year2006). [Liu and Furdyna(2006)]LiuFurdyna_2006 authorX. Liu and authorJ. K. Furdyna, journalJournal of Physics: Condensed Matter volume18, pagesR245 (year2006). [H. Bruus and K. Flensberg(2004)]henfle_oxford2004 authorH. Bruus and K. Flensberg, titleMany-body quantum theory in condensed matter physics: an introduction (publisherOxford University Press, addressOxford, year2004). [M. Abramowitz and I. A. Stegun(1970)]abrste_NY1970 authorM. Abramowitz and I. A. Stegun, titleHandbook of mathematical functions with formulas, graphs, and mathematical tables (publisherDover publications, addressNew York, year1970). § EFFECTIVE ACTION FOR LOCALIZED SPINS: A PATH-INTEGRAL APPROACH In this appendix, we present the details of our formalism leading to the final expression in Eq. (<ref>) for the action of the magnetic subsystem of localized magnetic ions. 
In this approach, instead of computing the grand-canonical partition function and its derivatives with respect to β=1/k_ BT to infer the various thermodynamic functions, one represents the grand-canonical partition function as a path integral over Grassmann coherent states <cit.>. This amounts to replacing, at finite temperature, the creation and annihilation operators, C_ kα^†(t),C_ kα(t), in the Heisenberg representation, by the imaginary-time dependent Grassmann fields ψ_ kα^*(τ=it),ψ_ kα(τ=it), respectively. Note that whereas C_ kα^† is the adjoint of C_ kα, the two Grassmann fields ψ_ kα^* and ψ_ kα are independent of each other. §.§ Euclidean action Using these new variables, the grand-canonical partition function reads [see e.g. Refs. <cit.>] Z= Tr e^-βℋ=∫ D[S] D[ψ^*] D[ψ] e^-𝒜_ E where 𝒜_ E is the Euclidean action (of the general form 𝒜_ E=∫_0^βdτ [η^*∂_τη+ℋ(η^*,η)]) 𝒜_ E=_0^βdτ ∑_α,β∑_ k, pψ_ kα^*(τ)Λ_αβ( k, p;τ)ψ_ pβ(τ) with the bi-linear form Λ_αβ( k, p;τ) =[δ_αβ∂_τ+E_ kδ_αβ-(ξ·σ_αβ)]δ_kp+1/VΥ( k- p,τ). Note that the Grassmann integration variables obey the anti-periodic boundary conditions in imaginary time<cit.> ψ_ kα^*(τ+β)=-ψ_ kα^*(τ), ψ_ kα(τ+β)=-ψ_ kα(τ). Next, we discuss the integration measure D[S] D[ψ^*] D[ψ] in Eq. (<ref>). For the (local) spin variables S_α, we have (S_α=Ss_α ) D[S]=∏_α=1^𝒩ds_α=∏_α=1^𝒩sinθ_αdφ_α. For the Grassmann variables, we first write ψ_ kα^*(τ) =1/√(β)∑_n∈ℤψ_ k,ω_n,α^*e^iω_nτ, ψ_ kα(τ) =1/√(β)∑_n∈ℤψ_ k,ω_n,αe^-iω_nτ, where ω_n are the Matsubara frequencies given by ω_n=π/β(2n+1), n∈ℤ with 1/β_0^βdτ e^i(ω_n-ω_m)τ=δ_mn. Then, we have the measure for the Grassmann variables D[ψ^*] D[ψ]=∏_k,n,αdψ_ k,ω_n,α^*dψ_ k,ω_n,α. For the bosonic fields, we use the Fourier transforms Φ_ pl( q,τ) =1/β∑_l∈ℤe^iΩ_lτΦ_ pl( q,Ω_l), S( q,τ) =1/β∑_l∈ℤe^iϖ_lτ S( q,ϖ_l), with Ω_l=2π l/β and ϖ_l=2π l/β being bosonic Matsubara frequencies. Therefore, Eq. (<ref>) becomes 𝒜_ E =∑_α,β∑_ k, p∑_n,m∈ℤψ_ k,ω_n,α^*(-iω_n)δ_αβδ_kpδ_mnψ_ p,ω_m,β +∑_α,β∑_ k, p∑_n,m∈ℤψ_ k,ω_n,α^*[E_ kδ_αβ-(ξ·σ_αβ)]δ_kpδ_mnψ_ p,ω_m,β +∑_α,β∑_ k, p∑_n,m∈ℤψ_ k,ω_n,α^*1/β V[Φ_ pl( k- p,Ω_m-n)δ_αβ-λ/2 S( k- p,ϖ_m-n)·σ_αβ]δ_mnψ_ p,ω_m,β where ϖ_m-n=ϖ_m-ϖ_n=2π(m-n)/β, similarly for Ω_m-n. To simplify the notation, we will use the short hand notation k=(𝐤,n),p=(𝐩,m),q=(𝐪,l) , etc, and rewrite the action as follows: 𝒮_ E =∑_k,p∑_α,βψ_k,α^*Λ_αβ^nm(k,p)ψ_p,β with Λ_αβ^nm(k,p) =-[(iω_n-E_ k)δ_αβ+(ξ·σ_αβ)]δ_kp+1/β V[Φ_ pl(k-p)δ_αβ-λ/2 S(k-p)·σ_αβ] where δ_kp≡δ_kpδ_mn and (k-p)→( k- p,ϖ_m-n), remembering that for the spin variables we have the frequencies ϖ_m while for the external charge disturbance, these are Ω_m. Summing up, the previous developments, we write Z =∫ D[S] D[ψ^*] D[ψ] e^-𝒮_ E=∫ D[S]×∏_k,n,αdψ_ k,ω_n,α^*dψ_ k,ω_n,α e^-𝒜_ E =∫ D[S]×∏_k,n,αdψ_ k,ω_n,α^*dψ_ k,ω_n,αexp{ -∑_k,p∑_α,βψ_k,α^*Λ_αβ^nm( k, p)ψ_p,β} . Then, after integration over the Grassmann variables using ∫ dc^∗dc e^-∫ c^∗Mc= M, we obtain Z =∫ D[S]×Λ=∫ D[S]× e^ln(Λ)≡∫ D[S]× e^-𝒜_ eff where the effective 𝒜_ eff action is given by 𝒜_ eff=-ln(Λ)=-ln(Λ); the trace includes a trace on both the conduction electron variables and the spin variables. §.§ Expanding in sd coupling In the one-band model, the conduction band E_k=zηγ_k, γ_k=∑_ae^ik·a/z, is very large as compared to the sd exchange coupling λ S [see Eq. (<ref>)]. In FMS, such as europium chalcogenides (EuS), the band width is of 3-5 eV and the sd coupling is circa 0.5 eV [see the review MaugerGoddart_PR86]. A similar situation applies in the case of DMS [see textbook GajKos_spr11 and Ref. 
BeaDan_prb10]. As such, we may perform an expansion with respect to the coupling constant λ; see also the discussion in Ref. BabcencoCottam_JPhysCSoldStatPhys.14.5347. For this purpose, we rewrite the tensor in Eq. (<ref>) as Λ=ℱ+Π, i.e. we split Λ into two parts ℱ and Π whose components are given by ℱ_αβ^nm(k,p) =-[(iω_n-E_ k)δ_αβ+(ξ·σ_αβ)]δ_kp+1/β VΦ_ pl(k-p)δ_αβ, Π_αβ^nm(k,p) =-λ/2β V S(k-p)·σ_αβ. ℱ represents the free contribution and Π that of the λ interaction, considered here as a perturbation. Then, dropping the first term, which does not contain spin operators, we write 𝒜_ eff=- TrlogΛ=𝒜_ eff^(1)+𝒜_ eff^(2), where the 1^st and 2^nd order contributions to the effective action are given by 𝒜_ eff^(1) =- Tr[ℱ^-1Π], 𝒜_ eff^(2) =1/2 Tr[(ℱ^-1Π)^†(ℱ^-1Π)]. The next task consists in computing the trace appearing above, which involves summing over the spin degrees of freedom, the wave vectors and the frequencies. We will first compute the trace over the (electron) spin variables using the well known properties of the spin-1/2 algebra of the Pauli matrices σ_i,i=1,2,3. For later reference, we introduce a few new operators. First, we rewrite the operator ℱ appearing in (<ref>), which represents the contribution in the absence of the sd coupling, as ℱ_αβ^nm( k, p) =F^nm( k, p)δ_αβ-δ_kpδ^nm(ξ·σ_αβ) where F^nm( k, p) =F_0^nm( k, p)+1/β VΦ_ pl( k- p,Ω_m-n) is the contribution in the absence of the effective magnetic field (and is thereby independent of the spin variables) and F_0^nm( k, p) =-(iω_n-E_ k)δ^nmδ_kp is the contribution in the absence of any kind of interactions. Note that in tensor form, we may write ℱ=F⊗1_ s-1_ p⊗(ξ·σ), where 1_ s,1_ p are the identity matrices in the space of spin variables and wave vectors, respectively. To compute the inverse of ℱ, needed in Eq. (<ref>), we note that, with regard to the spin variables, ℱ is a linear combination of the unit matrix 1_ s and the three Pauli matrices σ, forming a complete set of the space of 2×2 matrices. Hence, we may seek ℱ^-1 in the form ℱ^-1=M1_ s+ϕ·σ, where M is an 𝒩×𝒩 matrix in phase space and ϕ is a vector whose components are 𝒩×𝒩 matrices acting in the same space. Then, using Trσ^i=0 and (σ^i)^2=1_ s, we obtain the coefficients M and ϕ. Doing so, we find that the inverse of ℱ is given by ℱ^-1 =-(ξ^21_ p-F^2)^-1[F1_ s+(ξ·σ)] ≡-𝒟[F1_ s+(ξ·σ)] where we have introduced the tensor 𝒟≡(ξ^21_ p-F^2)^-1=(ξ1_ p-F)^-1(ξ1_ p+F)^-1 which is also diagonal in the spin variables. We then obtain the result ℱ^-1Π =λ/2β V𝒟(ξ·σ+F1_ s)×[ S(k-p)·σ]. Finally, we define the Green's function 𝒢 ≡-ℱ^-1=𝒟(F1_ s+ξ·σ) =(ξ1_ p-F)^-1(ξ1_ p+F)^-1(F1_ s+ξ·σ). It is easy to see that when there is no external disturbance (Φ_ pl≡0, ξ=0), 𝒢 becomes the “free propagator” 𝒢_0=-F^-1 with the components (𝒢_0)_αβ^nm( k, p)=1/iω_n-E_kδ_αβδ_kpδ_nm. Next, to obtain the 1^st and 2^nd order contributions, we compute the traces in Eq. (<ref>) using the following identities (which are well known properties of the spin-1/2 algebra) (σ·a)(σ·b) =(a·b)1_s+iσ·(a×b), Trσ^i =0, Tr(σ^iσ^j) =2δ^ij, i,j=1,2,3, Tr(σ^iσ^jσ^k) =2iϵ^ijk, Tr(σ^iσ^jσ^kσ^l) =2(δ^ijδ^kl-δ^ikδ^jl+δ^ilδ^jk). Doing so, we obtain the effective action in Eq. (<ref>), 𝒜_ eff =-λ/β V∑_p,κ𝒟_p,p+κ[ξ· S(κ)] +2(λ/2β V)^2∑_p,q∑_κ,κ^'𝒟_p,q+κ𝒟_q,p+κ^'[ξ· S(κ)][ξ· S(κ^')] +(λ/2β V)^2∑_p,q∑_κ,κ^'{[𝒟F]_p,q+κ[𝒟F]_q,p+κ^'-ξ^2𝒟_p,q+κ𝒟_q,p+κ^'}[ S(κ)· S(κ^')] with the operators F and 𝒟 being given by Eqs. (<ref>) and (<ref>), respectively. Note that the 1^st-order contribution (first line) vanishes in the absence of the applied magnetic field. 
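These trace identities, together with the Matsubara frequency sums performed in the next subsection, can be verified numerically in a few lines (our own sanity check; note that the symmetric frequency sum converges to f_FD(ε)−1/2, the missing 1/2 being supplied by the usual convergence factor e^{iω_n0^+}):

import numpy as np

# Pauli trace identities
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sig = [sx, sy, sz]
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[j, i, k] = 1.0, -1.0      # Levi-Civita symbol

for i in range(3):
    assert abs(np.trace(sig[i])) < 1e-12
    for j in range(3):
        assert abs(np.trace(sig[i] @ sig[j]) - 2 * (i == j)) < 1e-12
        for k in range(3):
            assert abs(np.trace(sig[i] @ sig[j] @ sig[k]) - 2j * eps[i, j, k]) < 1e-12

# Fermionic Matsubara sum of 1/(i w_n - eps0), truncated symmetrically
beta, eps0, N = 2.0, 0.7, 200000
n = np.arange(-N, N)
wn = np.pi * (2 * n + 1) / beta
S_num = np.sum(1.0 / (1j * wn - eps0)).real / beta
f_FD = 1.0 / (np.exp(beta * eps0) + 1.0)
print(S_num, f_FD - 0.5)      # the two numbers agree

With these identities at hand, we now return to the 1^st-order contribution.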
This contribution leads to an effective gyromagnetic factor of the localized spins <cit.> which is linear in the magnetic field. The 2^nd-order contribution yields two types of effective exchange coupling that depend on the (effective) magnetic field (Ξ). This will be discussed at length in the next sections. §.§ Effective spin Hamiltonian of the MS in an external magnetic field In what follows we focus on the derivation of the effective Hamiltonian for the localized spins under an external magnetic field, ignoring the electrical disturbance. In the presence of the splitting field (Ξ≠0), 𝒟 is given by Eq. (<ref>) and the 1^st order contribution to the effective action becomes 𝒜_eff^(1) =-1/2λ/V∑_k{1/β∑_n[1/iω_n-E_k^--1/iω_n-E_k^+]}( S_ 0·e_ξ) where we have introduced e_ξ=ξ/ξ, the verse of the magnetic field and S_ 0=∑_iS_i. Next, we make use of the usual technique to sum over the Matsubara frequencies <cit.> 1/β∑_n∈ℤg(iω_n)=∑_z_0∈poles(g)z=z_0Res[g(z)f_η(z)] where g(z) is supposed to be a holomorphic function except at the poles z=z_0 and f_η(z) is the distribution function f_η(z)={[ 1/e^β z+1, Fermi-Dirac (η=-1), ω_n=π/β(2n+1),; ; 1/e^β z-1, Bose-Einstein (η=1), ω_n=2π n/β. ]. In particular, we have the general formula 1/β∑_n1/(iω_n-ε)^l=-η/(l-1)!(∂^l-1f_η/∂ϵ^l-1)(ε). Consequently, the first contribution to the effective action becomes 𝒜_eff^(1) =-λ/2V∑_k[f_FD(E_k^+)-f_FD(E_k^-)]( S_ 0·e_ξ). This can also be rewritten as 𝒜_eff^(1)=-λ/8(1/V∑_ksinh(βξ)/cosh(β E_k^+)cosh(β E_k^-))( S_ 0·e_ξ) with E_k^μ=E_k+μΞ,μ=±1 The sum over k yields the population difference induced by the effective magnetic field, or equivalently the polarization of the band. Obviously, this effect vanishes for Ξ=0. The calculation of the 2^nd-order contribution is more involved and comprises several different contributions. It makes use of the same technique for computing the various sums over Matsubara frequencies. We also use the fact that the Fermi-Dirac distribution function does not change under a translation by a bosonic frequency, i.e. f_FD(ϵ+iϖ_m)=f_FD(ϵ). The final result for the 2^nd-order contribution to the effective action then reads 𝒜_ eff^(2) =1/2(λ/2V)^2∑_p,k1/β∑_m[ S(-k)· S(k)]{f_FD(E_p^-)-f_FD(E_p+k^+)/iϖ_m+E_p-E_p+k-2Ξ+(Ξ⟷-Ξ)} +1/2(λ/2V)^2∑_k,p1/β∑_m[ S(-k)·e_ξ][ S(k)·e_ξ] {f_FD(E_p^-)-f_FD(E_p+k^-)/iϖ_m+E_p-E_p+k-f_FD(E_p^-)-f_FD(E_p+k^+)/iϖ_m+E_p-E_p+k-2Ξ+(Ξ⟷-Ξ)}. Note that the latter expression is an even function of the magnetic field variable Ξ. Therefore, the total effective action of the subsystem of localized spins, in the presence of an external magnetic field, is given in Eq. (<ref>). § SPIN-WAVE ENERGY Defining the retarded Green's function G^(r)(i-j,t)=-iθ(t)⟨[S_i^-(t),S_j^+(0)]⟩ and using its equation of motion idG^(r)(i-j,t)/dt = -δ(t)⟨[S_i^-(0),S_j^+(0)]⟩ +θ(t)⟨[dS_i^-(t)/dt,S_j^+(0)]⟩ = 2δ(t)δ_ij ⟨ S_i^Z⟩ +iθ(t)⟨[[ℋ,S_i^-(t)],S_j^+(0)]⟩ . Then, using the Hamiltonian (<ref>) and the SU(2) spin algebra [S_i^z,S_j^μ]=μ S_i^μδ_ij, [S_i^+,S_j^-]=2δ_ijS_i^z we obtain [ℋ,S_k^-(t)]=-∑_i𝒥̃_⊥^ik(S_k^zS_i^-+S_i^-S_k^z)+∑_i𝒥̃_∥^ik(S_i^zS_k^-+S_k^-S_i^z) where we have use the fact that the exchange coupling is symmetrical in space. Next, we use the spin algebra again [S_i^z,S_k^-] =S_i^zS_k^--S_k^-S_i^z=-S_i^-δ_ik, [S_i^-,S_k^z] =S_i^-S_k^z-S_k^zS_i^-=S_i^-δ_ik we write S_k^-S_i^z=S_i^zS_k^-+S_i^-δ_ik, S_i^-S_k^z=S_k^zS_i^-+S_k^-δ_ik and thereby (𝒥̃_⊥^ii=0=𝒥̃_∥^ii) [ℋ,S_k^-(t)] =-2∑_i𝒥̃_⊥^ikS_k^zS_i^-+2∑_i𝒥̃_∥^ikS_i^zS_k^-. 
Therefore, we obtain idG^(r)(i-j,t)/dt = -2δ(t)δ_ij ⟨ S_i^z⟩ +2∑_l𝒥̃_⊥^li(-iθ(t)⟨[S_i^zS_l^-,S_j^+(0)]⟩)-2∑_l𝒥̃_∥^li(-iθ(t)⟨[S_l^zS_i^-,S_j^+(0)]⟩). According to Bogoliubov and Tyablikov<cit.>, one uses the following approximation -iθ(t)⟨[(S_k^zS_i^-)(t),S_j^+(0)]⟩ =⟨ S^z⟩ G_ij^(r)(t) where ⟨ S_k^z⟩ =⟨ S^z⟩. This leads to the following equation for the Green function G_ij^(r)(t), idG_ij^(r)(t)/dt = -2δ(t)δ_ij⟨ S^z⟩ +2⟨ S^z⟩[∑_l𝒥̃_⊥^liG_lj^(r)(t)-∑_l𝒥̃_∥^liG_ij^(r)(t)]. Transforming to the Fourier components of G_ij^(r)(t) with respect to the time-space coordinates, G_ij^(r)(t)=∫∫dω/2πe^-iω td^3k/(2π)^3e^-i𝐤𝐫_ijG^(r)(𝐤,ω) we obtain the dispersion law given in Eq. (<ref>) by solving for ω. § EFFECTIVE EXCHANGE COUPLINGS IN DIRECT SPACE Using our notations and correcting a misprint, we rewrite the expression for the inverse Fourier transform of the transverse coupling given in Ref. <cit.> as follows, 𝒥_⊥(r,0;Ξ) =λ^2m/8π^3ħ^2r^4{[(k_F^-)^2r^2-2](k_F^+r)cos(k_F^+r)-(k_F^+k_F^-r^2-2)(√(k_F^+k_F^-)r)cos(√(k_F^+k_F^-)r) +[(k_F^-)^2r^2+2]sin(k_F^+r)-(k_F^+k_F^-r^2+2)sin(√(k_F^+k_F^-)r) +k_F^+k_F^-r^2[Si(k_F^+r)-Si(√(k_F^+k_F^-)r)]} +λ^2m/4π^3ħ^2r^4[(k_F^+k_F^-r^2)H_0(√(2k_F^+k_F^-)r)-r√(2k_F^+k_F^-) H_1(√(2k_F^+k_F^-)r)] where Si(z)=∫_0^zdt sin t/t, and H_n(z),n=0,1 is the Struve function, which is the solution of the differential equation<cit.> z^2d^2y/dz^2+zdy/dz+(z^2-n^2)y=2/πz^n+1/π(2n-1)!!.
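As an independent cross-check of such closed-form expressions (again a sketch of ours, not the paper's code), the radial inverse Fourier transform can be evaluated directly by quadrature, here for the Ξ=0 transverse coupling 𝒥_⊥(k,0;0)∝ℱ(k/2k_F) (units λ²ρ_F/8=1, k_F=1), whose transform must reproduce the oscillatory RKKY tail ∝Φ(2k_Fr):

import numpy as np
from scipy.integrate import quad

def F(x):
    # scalar Lindhard function, with |.| in the log
    if abs(x - 1.0) < 1e-12:
        return 0.5
    return 0.5 + (1 - x**2) / (4 * x) * np.log(abs((x + 1) / (x - 1)))

def J_perp_r(r, kmax=80.0):
    # J(r) = int_0^kmax  k dk/(2 pi^2 r) * F(k/2) * sin(k r)
    # kmax is a crude UV cutoff; increase it for better accuracy
    g = lambda k: k / (2 * np.pi**2 * r) * F(k / 2)
    val, _ = quad(g, 1e-9, kmax, weight='sin', wvar=r, limit=2000)
    return val

for r in (2.0, 5.0, 10.0):
    print(r, J_perp_r(r))   # oscillates in sign and decays, as Phi(2 k_F r) does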
http://arxiv.org/abs/2407.01738v1
20240701191240
SONIC: Connect the Unconnected via FM Radio & SMS
[ "Ayush Pandey", "Rohail Asim", "Khalid Mengal", "Matteo Varvello", "Yasir Zaki" ]
cs.NI
[ "cs.NI" ]
ayush.pandey@nyu.edu, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates; rohail.asim@nyu.edu, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates; kqm1@nyu.edu, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates; matteo.varvello@nokia.com, Nokia Bell Labs, New Jersey, United States; yasir.zaki, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates SONIC: Connect the Unconnected via FM Radio & SMS [Body unavailable: the extracted text of this record consists only of LaTeX \input placeholders for the abstract, introduction, related work, design, and evaluation sections, plus the ACM-Reference-Format bibliography directive.]
http://arxiv.org/abs/2407.02471v1
20240702175058
Cubic Galileon Gravity in the CMB
[ "Gen Ye", "Alessandra Silvestri" ]
astro-ph.CO
[ "astro-ph.CO", "gr-qc" ]
ye@lorentz.leidenuniv.nl Institute Lorentz, Leiden University, PO Box 9506, Leiden 2300 RA, The Netherlands Institute Lorentz, Leiden University, PO Box 9506, Leiden 2300 RA, The Netherlands § ABSTRACT Among the models addressing the Hubble tension, those introducing a dynamical dark component around recombination have been the most promising thus far. Their study has highlighted that, in fact, cosmic microwave background (CMB) and baryon acoustic oscillation (BAO) observations can allow for such components before and near recombination. The new dynamical degree of freedom can be early dark energy (EDE) or early modified gravity depending on its coupling to gravity. We study a new model, 𝒢EDE, featuring the cubic Galileon operator Xϕ and test it against the most recent Planck PR4 CMB and Cepheid calibrated Pantheon+ type Ia Supernovae data. Thanks to the kinetic braiding effects, 𝒢EDE gives a better fit to the data, with a higher H_0, and is preferred over the canonical EDE with a Bayes factor ln B=0.9, despite introducing one more parameter. This calls for further explorations of modified gravity near and before last scattering. To facilitate these, we introduce a substantial extension of the cosmological code that allows one to fully evolve the background and linear dynamics of any covariant theory, oscillatory or not, belonging to the Horndeski class. Cubic Galileon Gravity in the CMB Alessandra Silvestri July 8, 2024 ================================= § INTRODUCTION At the core of modern cosmology lies the Hubble constant H_0, which sets the size and expansion rate of our current Universe. In the past century tremendous effort has been devoted to measuring this value, both locally and through a cosmological model. The most precise constraint on H_0 is derived from the cosmic microwave background (CMB) anisotropy observations assuming the standard cosmological constant (Λ) cold dark matter (CDM) model. The latest Planck PR4 data report H_0=67.81±0.38 km/s/Mpc <cit.>. On the other hand, local observations can also measure H_0, e.g. by calibrating the distance ladder, with the most precise one being the Cepheid calibrated type Ia Supernovae from the SH0ES collaboration, giving H_0=73.04±1.04 km/s/Mpc <cit.>. The now 5σ discrepancy between the local and early Universe determinations of H_0, usually referred to as the Hubble tension, has triggered extensive discussions about the possible underlying new physics, see e.g. <cit.> for recent reviews. Among the plethora of possibilities explored, one of the most promising and extensively studied is the proposal of an additional dark energy component around matter-radiation equality, often dubbed early dark energy (EDE) <cit.>; see also <cit.> for a recent review. Stringent constraints on EDE can be derived from large scale structure (LSS) observations <cit.>, and from CMB data from Planck PR4 and ground-based observations <cit.>. Despite these constraints, there remains room for a new dynamical degree of freedom (DoF) at high redshifts, and this is commonly referred to as EDE or early modified gravity (EMG), depending on how it is coupled to gravity. Modifications of the effective Newtonian constant G_eff have been considered as EMG candidates, both with dynamical <cit.> and parametric <cit.> approaches. The Generalized Galileons theory <cit.> offers a natural, unifying framework to explore EDE/EMG models. 
The corresponding Lagrangian describes theories with second order equations of motion (EoM), in a four-dimensional spacetime, for the usual massless graviton plus one additonal scalar DoF. As shown in <cit.>, this action is equivalent to that of Horndeski gravity <cit.>. As a first step into our exploration of EDE/EMG, we will restrict to the subclass of covariant Galileons, identified in <cit.> as a first generalization of the work of <cit.> to an expanding background. In particular, we consider the model defined by the following Lagrangian L =M_p^2/2R +X-V(ϕ)-ξ Xϕ. where V(ϕ) is the scalar field potential, X≡-1/2(∂ϕ)^2 and ≡ g^μν∇_μ∇_ν. The non-canonical dynamics is only characterized by the free parameter ξ multiplying the Galileon operator, Xϕ. Lagrangian (<ref>) corresponds to luminal gravitational waves, c^2_T=1, and a minimally coupled scalar sector, with the only non-trivial modified gravity (MG) effect coming from the kinetic braiding introduced by the last term. Kinetic braiding theories are the majority of Horndeski gravity models that survive the no-ghost and gradient stability conditions as well as positivity bounds <cit.>; the latter encode the fundamental requirement of causality, unitarity and Lorentz invariance of the UV complete theory, however, it is important to note that at the moment they heavily rely on an extrapolation of results from the Minkowski to the cosmological background <cit.>. The shift symmetric case of (<ref>) has been ruled out as a dark energy candidate by a combination of cosmological observations <cit.>. However, this conclusion assumes shift-symmetry and that the Galileon field sits on its tracking solution, which does not apply to (<ref>) with a scalar potential. In the non-shift-symmetric case such as (<ref>), the Galileon operator still offers important MG effect (<ref>) through the kinetic braiding without suffering from stringent constraints. Furthermore, as we will show, with a potential V(ϕ) to drive the field evolution, and the expansion of the Universe dominated by radiation and matter, Galileons can be revived as an EDE/EMG theory (𝒢EDE) which is favored at ∼2σ over the EDE model based on a canonical scalar field (i.e. setting ξ=0 in Eq.(<ref>)), hereafter cEDE. The paper is structured as follows. In Section <ref> we present the background and linear perturbation dynamics of 𝒢EDE. The numerical setup and the data used are outlined in Section <ref> and the cosmological constraints are presented in Section <ref>. Finally we conclude in Section <ref>. § THE MODEL The covariant EoM for the scalar field in the 𝒢EDE model is -ϕ + ξ[(ϕ)^2 - ∇_μ∇_νϕ∇^μ∇^νϕ-R^μν∇_μϕ∇_νϕ]+V_ϕ=0 . On a FRW background ds^2 = -dt^2+a^2(t)|dx|^2, the Einstein equation gives the following Friedmann equations 3 M_p^2 H^2 = ρ_m + 1/2ϕ̇_0^2 + V(ϕ_0) + 3 ξ H ϕ̇_0^3, 2M_p^2 Ḣ = -(ρ_m + p_m + ϕ̇_0^2 + 3 ξ H ϕ̇_0^3 - ξϕ̇_0^2 ϕ̈_0 ), and the scalar field EoM reduces to ϕ̈_0+3Hϕ̇_0[1+ξ(2ϕ̈_0+(3+Ḣ/H^2)Hϕ̇_0)]+V_ϕ=0 where overdot denotes derivation with respect to cosmic time, t, and the subscript 0 denotes background quantities. Similar to cEDE, in 𝒢EDE initially the field is frozen at ϕ=ϕ_i on the potential well by Hubble friction when m^2_ϕ,i∼ V_ϕϕ(ϕ_i)≪ H^2. At some point H drops below m_ϕ,i so the field thaws and rolls down the potential and finally oscillates around the potential minima, rapidly redshifting away its energy density. During the entire process the Gallileon operator, controled by ξ, modulates both the background and perturbation evolution though its MG effect. 
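Although the paper evolves the full Einstein-Boltzmann system, the background dynamics just described (frozen field, tracking, thawing, oscillation) can be illustrated with a few lines of Python. The sketch below integrates the scalar EoM in a fixed radiation background H = 1/(2t), a test-field approximation valid while f_EDE ≪ 1; the values of V0, ξ and ϕ_i are hypothetical and chosen so that ξV_ϕ = −2 initially, i.e. in the bestfit region found later.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Test-field integration of the GEDE background EoM in radiation domination,
# H = 1/(2t) (hence Hdot/H^2 = -2); the field's backreaction on the expansion
# is neglected.  V0, xi, phi_i are hypothetical illustration values.
V0, xi, phi_i = 1.0, -0.5, 1.0

def V_phi(phi):                          # V(phi) = V0 phi^4 (+ V_Lambda)
    return 4.0 * V0 * phi**3

def rhs(t, y):
    phi, dphi = y
    H = 0.5 / t
    # the EoM is linear in ddphi:
    # ddphi (1 + 6 xi H dphi) = -3 H dphi - 3 xi (3 + Hdot/H^2) H^2 dphi^2 - V_phi
    num = -3.0*H*dphi - 3.0*xi*(3.0 - 2.0)*(H*dphi)**2 - V_phi(phi)
    den = 1.0 + 6.0*xi*H*dphi
    return [dphi, num/den]

sol = solve_ivp(rhs, (1e-3, 1e3), [phi_i, 0.0], rtol=1e-8, atol=1e-12,
                dense_output=True)
for ti in np.logspace(-3, 3, 13):
    phi, dphi = sol.sol(ti)
    print(f"t = {ti:9.3e}   phi = {phi:+.4f}   xi*H*dphi = {xi*0.5/ti*dphi:+.4e}")
```

While the field is frozen, the printed combination ξHϕ̇ settles onto the tracking value (−1+√(1−(12/5)ξV_ϕ))/6 ≈ 0.235 of the tracking solution below, and after the thaw it decays together with the field's energy density.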
Fig.<ref> depicts the frozen and thawing stage of the dynamical picture. The time direction flows from left to right for the trajectories plotted in Fig.<ref>. The initial frozen stage is an attractor and characterized by ϕ̇/HM_p≪1, where the field displacement in one Hubble time is very small. Thus we can set V_ϕ∼ V_ϕ,i∼ const. in Eq.(<ref>) to obtain the approximate tracking solution Hϕ̇_0≃-1+√(1-12/3-Ḣ/H^2(ξ V_ϕ,i))/6ξ≃const. where we have chosen the branch that reduces to a canonical scalar field Hϕ̇_0≃-V_ϕ,i/(3-Ḣ/H^2) when ξ→0. For compact notation we will drop the subscript i and simply write ξ V_ϕ hereafter. This tracking behavior is also evident in Fig.<ref> where ξ Hϕ̇ remains approximately constant up to the energy peak of the scalar field. Specially, in radiation dominance one has Ḣ/H^2=-2, one will need ξ V_ϕ<5/12 for the square root in Eq.(<ref>) to be well defined, i.e. the existence of a stable tracking background. At the linear perturbation level, the dynamics of 𝒢EDE can be characterized by the effective functions <cit.> α_K = ϕ̇_0^2 + 6 ξ H ϕ̇_0^3 / H^2 M_p^2 ≡ 6 ( f_X + α_B ) , α_B = ξϕ̇_0^3 /H M_p^2= (6 ξ H ϕ̇_0 ) f_X. where we have defined f_X ≡ X_0 / (3 M_p^2 H^2) as the energy fraction of the canonical kinetic term X_0 = ϕ̇_0^2 / 2. Compared with cEDE, kinetic braiding is the only MG effect in 𝒢EDE, while running of effective Planck mass (α_M) and modification of tensor speed (α_T) are absent in both models. Using the tracking solution (<ref>) and Friedmann equation (<ref>), one can derive the approximate sound speed c_s^2≃1 + ( 4 - 2 Ḣ / H^2 ) ξ H ϕ̇_0 - 3 ( ξ H ϕ̇_0 )^2 f_X /1 + 6 ξ H ϕ̇_0 + 9 ( ξ H ϕ̇_0 )^2 f_X In the initial tracking stage, gradient stability (c_s^2>0) requires ξ V_ϕ < 5/8 or ξ V_ϕ > 5/6. Ghost stability (α_K+3α_B^2/2>0) yields ξ V_ϕ<5/6. Combining these conditions with the existence of the background tracking solution (<ref>) we arrive at the necessary stability condition ξ V_ϕ < 5/12. The modified gravity effect is evident only when the scalar field thaws and contributes non-trivially to the total energy budget with α_K,B∝ f_X in Eq. (<ref>). In particular, α_B=6ξ Hϕ̇_0f_X∼ - ξ V_ϕf_X has the opposite sign as ξ V_ϕ. At the background level, braiding enters the Hubble equation (<ref>) as 3M_p^2(1 - α_B)H^2=ρ_m + 1/2ϕ̇_0^2+V(ϕ_0), thus having ξ V_ϕ<0 (so α_B>0) in 𝒢EDE can further increase the Hubble parameter compared with a cEDE model with the same ϕ_0(t),ϕ̇_0(t), meaning smaller sound horizon and enhanced ability to mitigate the Hubble tension. Within the scalar field sound horizon, perturbations are generally suppressed due to scalar field pressure support. The model under consideration introduces also a fifth force on small scales; we can estimate its effect on perturbations via the phenomenological MG functions μ, Σ and γ which for our model read <cit.> μ=Σ=1+α_B^2/2c_s^2(α_K+3/2α_B^2), γ=1. Using Eq.(<ref>) and f_X≪1 one has μ-1≃12(ξ Hϕ̇_0)^2/1 + ( 4 - 2 Ḣ / H^2 ) ξ H ϕ̇_0f_X∼0.48(ξ V_ϕ)^2/1-1.6(ξ V_ϕ)f_X. In the last approximate equality we have assumed the tracking solution (<ref>) and radiation dominance. Thus on small scales, the MG effect always enhances gravity and perturbation growth. For superhorizon modes, δϕ is not excited so the effect on perturbation evolution outside of horizon is negligible. In general on small scales the MG term enhances perturbation growth, competing with the additional pressure support from the scalar fluid within its sound horizon, which suppresses clustering. 
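The tracking solution and the stability window can also be tabulated directly. A minimal sketch, assuming radiation domination and the f_X → 0 limit of the sound speed:

```python
import numpy as np

# Tracking-regime scan in radiation domination (Hdot/H^2 = -2, so the square
# root in the tracking solution reads sqrt(1 - (12/5) xi*V_phi)), in the limit
# f_X -> 0.  Illustrates the stability window xi*V_phi < 5/12 discussed above.
for x in np.linspace(-3.0, 0.35, 8):         # x = xi * V_phi
    disc = 1.0 - 2.4 * x
    if disc < 0.0:
        print(f"xi*V_phi = {x:+.3f}: no tracking background")
        continue
    y = (-1.0 + np.sqrt(disc)) / 6.0         # y = xi * H * phidot on tracking
    cs2 = (1.0 + 8.0 * y) / (1.0 + 6.0 * y)  # sound speed for f_X -> 0
    print(f"xi*V_phi = {x:+.3f}   xi*H*phidot = {y:+.5f}   c_s^2 = {cs2:+.4f}")
# for xi*V_phi < 0 one finds y > 0, i.e. alpha_B = 6*y*f_X > 0: the sign that
# raises the Hubble parameter in the Friedmann equation
```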
§ DATASETS AND METHODOLOGY In order to perform a fit to the data, we choose an explicit 𝒢EDE model by setting the potential to the following quartic ansatz V(ϕ) = V_0ϕ^4+V_Λ where V_Λ stands for the cosmological constant supporting the late-time accelerated expansion. This potential depends on only one parameter and is thus simpler than the one in the original axion EDE <cit.> model, while still being able to considerably alleviate the Hubble tension <cit.>. In general V_Λ could also be dynamical to represent the dark energy. The scalar field model is specified by two model parameters {ξ, V_0} plus the initial field value ϕ_ini. In practice we use the more intuitive parameters: f_ede - the peak energy density of the scalar field and z_c - the redshift at which the peak occurs in place of {V_0, ϕ_i} and use a shooting method to map them back to {V_0, ϕ_i}. Energy density of the scalar field can be ambiguous once MG is included, we thus define f_EDE(z)≡ 1-ρ_m,tot(z)/3M_p^2H^2(z) where ρ_m,tot is the total energy density of all species except for the scalar field. One can also define the energy fraction of the canonical part by f^c_EDE≡(ϕ̇^2/2+V(ϕ))/3M_p^2H^2. For cEDE one has f^c_EDE= f_EDE while for 𝒢EDE f^c_EDE f_EDE. In light of the discussion in Section.<ref>, instead of ξ we use the dimensionless parameter ξ V_ϕ which directly controls the dynamics and stability of the theory. With this setup, we have the standard six cosmological parameters {ω_b, ω_c, H_0, ln 10^10A_s, n_s, τ_reion} and three additional model parameters { f_ede, z_c, ξ V_ϕ} for our cosmological model. To compute the cosmology and the corresponding prediction for the observables of interest, we use a new version of  <cit.>, based on the public Einstein-Boltzmann code <cit.>. This version features the implementation of a covariant Horndeski module which evolves any covariant Horndeski theory, both at the background and linear level. Provided with an arbitrary Horndeski Lagrangian, the code integrates the scalar field perturbations in terms of δϕ instead of π=δϕ/ϕ̇, thus avoids the divergence in π when ϕ̇ crosses zero that plagues all previous Boltzmann codes that evolve scalar-tensor theories. The new is interfaced with <cit.>, which we use to perform Monte Carlo Markov Chain (MCMC) analysis to derive posterior constraints on the model parameters. We use the Gelman-Rubi <cit.> diagnostic R-1<0.02 as our convergence criterium. We also use the nested sampler <cit.>, interfaced with , to compute the Bayesian evidence of the models. We use the following dataset: * P20: TTTEEE and EE likelihoods of the Planck2020 CMB temperature and polarization data <cit.> as well as the Planck2018 low-l TT data <cit.>. Planck PR4 CMB lensing <cit.>. * BAO: The low redshift BAO from MGS <cit.> and 6dF <cit.> and high redshift BAO data from SDSS DR12 <cit.>. * cSN: The Pantheon+ Type Ia supernova light curve sample <cit.> sample with SH0ES Cepheid host distance calibration <cit.>. * LSS: Weak lensing shear and galaxy clustering (3x2pt) LSS measurement from the Dark Energy Survey (DES) year one data <cit.>. * Baseline: P20+BAO+cSN Due to the significantly increased computational cost, when running nested sampling we replace the high l TTTEEE CMB likelihood with and the DES Y1 data with a Gaussian prior on S_8=0.790^+0.018_-0.014, from the most recent KiDS and DES combined result <cit.>. § RESULTS We show the marginalized posterior distributions of relevant model parameters in Fig. 
<ref>, and report the posterior constraints for all parameters in Table. <ref>. Additionally, we report the full posterior results in Appendix-<ref>. Finally, in Table. <ref> we report the χ^2 for the bestfit model, total as well as per experiment, and the bayesian evidence. There is a significant Δχ^2 (≳20) in the cSN dataset between both EDE models and ΛCDM. This is because ΛCDM is in strong Hubble tension with the cSN data while the EDE models greatly alleviate that. Looking at Fig. <ref>, we see that both cEDE and 𝒢EDE show shifts in the values of n_s,ω_m compared with ΛCDM as observed and explained in <cit.>, but 𝒢EDE predicts an ω_b comparable to ΛCDM, with the central value ∼2σ lower than cEDE. The Hubble tension is mitigated with the inclusion of the dark energy field, while presence of the MG term Xϕ further improves the fit to the cSN data with Δχ^2_SN≃-5 as well as the CMB data with Δχ^2_CMB≃-2, bringing the latter to the same goodness-of-fit as ΛCDM in terms of CMB. Due to the overall improved fit, we observe a preference for the MG term, i.e. ξ V_ϕ 0, at 2σ (95% C.L.). In terms of Bayesian evidence, both dark energy models are strongly favored over ΛCDM with Bayes factor ln B ≡ln (Z_1/Z_0) > 6 due to the alleviation of the Hubble tension. Moreover, despite one more free parameter, 𝒢EDE is still favored over the cEDE with ln B=0.9, corresponding to a ∼ 1σ preference. The 2σ evidence for ξ0 comes both from features at the background and perturbation levels. At the background, 𝒢EDE predicts a slightly but consistently larger H_0 than cEDE, while fitting better with cSN with Δχ^2≃-5. This is due to the kinetic braiding effect in the Friedmann equation (<ref>), where a ξ V_ϕ<0, thus a α_B>0, enhances the Hubble parameter compared to a canonical field. This is clearly seen in the left panel of Fig.<ref> where f_EDE in 𝒢EDE is significantly boosted by α_B>0 compared with its canonical part f^c_EDE. Note we have α_B,max≃0.03, which enhances the Hubble parameter by 1.5%. However, because the scalar field only takes up at most ∼10% of the total energy budget, the small increase in Hubble translates to a significant boost (∼ 20%) in the scalar field peak energy fraction. At the level of perturbations, 𝒢EDE improves the fit to the CMB spectra with Δχ^2_CMB≃-2. It has been identified in <cit.> that an increase ω_b is needed to shrink the physical damping scale in all cEDE models to prevent the damping angular scale from changing drastically when H_0 is increased, but has the side effect of also impacting the baryon drag which changes the relative height between even and odd acoustic peaks, requiring imperfect compensation from other sources, including tuning n_s <cit.> and early integrated Sachs-Wolfe (ISW) effect <cit.>. In contrast, 𝒢EDE completely negates such ω_b shift in Fig.<ref> with ξ V_ϕ∼-2, indicating a new degeneracy direction between the MG effect and a larger angular damping scale. Fig.<ref> compares the effect of ω_b and MG in the CMB power spectra. We setup the “Base" model as the bestfit 𝒢EDE but with ξ=0, in which the TT spectrum is overdamped at high-ℓ. Turning on the Galileon operator (“+ξ") or increasing baryon density (“+ω_b") can both compensate for the excess damping, but increasing ω_b suffers from modified baryon drag around ℓ∼500. In 𝒢EDE, MG adds power to the high-ℓ tail by enhancing gravity on small scales, see right panel of Fig.<ref>, and thus radiation driving. This effect might be general to all MG with an actractive fifth force, as in Eq.(<ref>). 
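To gauge the size of this small-scale enhancement, the approximate expression μ − 1 ≈ 0.48(ξV_ϕ)² f_X/(1 − 1.6 ξV_ϕ) from the model section can be evaluated directly; a back-of-the-envelope sketch, with parameter values representative of the posterior rather than exact bestfits:

```python
# Size of the small-scale fifth force on the tracking solution in radiation
# domination, using the approximate formula for mu - 1 given above.
for xiV, fX in [(-2.0, 0.10), (-2.0, 0.05), (-1.0, 0.10)]:
    mu_m1 = 0.48 * xiV**2 / (1.0 - 1.6 * xiV) * fX
    print(f"xi*V_phi = {xiV:+.1f}, f_X = {fX:.2f}  ->  mu - 1 = {mu_m1:.4f}")
# (-2.0, 0.10) gives mu - 1 ~ 0.046, i.e. a < 5% enhancement of gravity
```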
Near horizon scale, the MG effect on perturbations in Fig.<ref> is difficult to clarify analytically due to the many factors at play. However, the change in baryon drag effect due to a shift in ω_b is of order δω_b/ω_b|Ψ|∼0.03|Ψ| in the first two peaks, but at such scales radiation driving is already subdominant and the change in early ISW, according to Fig.<ref>, is only at sub percent level. Therefore, the impact of MG on intermediate scales is generally smaller than ω_b thus the better fit. The S_8 problem is not alleviated by 𝒢EDE, nor does it get worse. However, the MG model does predict slightly smaller S_8 than the canonical model. We attribute this reduction mainly to the reduced ω_b thus a smaller Ω_m. MG does not help to this end because the fifth force is always attractive at small scales according to Eq.(<ref>), but it also does not worsen the problem because μ-1>0 is very small, e.g. with ξ V_ϕ=-2 and f_X=0.1 Eq.(<ref>) gives μ-1<5%, and only non-vanishing in a narrow redshift window when the scalar field energy fraction is non-negligible. To explicitly help with the S_8 tension one might need a repulsive fifth force (μ<1), which, according to <cit.>, requires non-trivial G_4 in Horndeski theories. It has been shown that G_4⊃ϕ^2 might have some effect but does not differ much from the general prediction of EDE, i.e. increased S_8 than ΛCDM <cit.>. In Fig.<ref>, adding LSS (dashed contours) considerably widens some posterior distributions, especially that of z_c. With LSS, there are MCMC points accumulate at the upper prior boundary ln(1+z_c)<9, creating a secondary peak in the distributions. It is a known feature of EDE that the model displays bimoduality in z_c with the global bestfit living in the main peak with lower z_c, see e.g. <cit.>. The high z_c subpeak fits worse to the baseline data but predicts a smaller S_8, thus once LSS is included, the competition between data in tension degrades the constraints and drives the contour to extend towards the high z_c region. Appendix-<ref> further explores the high redshift model by excluding the low z_c peak from the prior range. § CONCLUSION We have studied the phenomenology of the Galileon field with a non-canonical operator Xϕ as an realization of EMG/EDE, with the resultant model dubbed 𝒢EDE. It is found that, depite having one more parameter, 𝒢EDE is still favored over canonical EDE with a Bayes factor ln B=0.9 due to better fit to the CMB and SNIa data. We identify the source of improvement as contribution from the MG effect (kinetic braiding) on both background and linear perturbation levels. It's known that noticeable increase in ω_c, ω_b and n_s is a common feature of EDE models in order to fit CMB and BAO <cit.>. The necessity of increasing ω_b is removed in 𝒢EDE due to the small scale attractive fifth force, resulting in improved consistency with small scale CMB observations as well as recent BBN constraints <cit.>. Aside from ω_b, n_s and ω_c in 𝒢EDE still follow the same increasing trend as in previous EDE/EMG models. The increase in ω_c is related to the S_8 tension which plagues 𝒢EDE as well. The increase in n_s connects the Hubble tension with a Harrison-Zeldovich initial spectrum <cit.>, which might be of profound implication in the study of the primordial Universe <cit.>. It should, however, be stressed that Lagragian (<ref>) is very simplified from a theoretic point of view. For example, the theory will additionally include non-minimal coupling to gravity, i.e. 
G_4(ϕ), if regarded as a direct derivation of a local modification of gravity <cit.>. Moreover, it will violate the positivity bounds postulated in <cit.> if V_ϕϕ<0, e.g. in the axion-like EDE potential <cit.> before the field thaws. One simple extension to remedy this is adding a positive non-standard kinetic term X^2. The possible extensions are left for future study. Another attractive possibility is the kinetic non-minimal coupling G_4(X). Because EDE is negligible after recombination, such models evade the c_T=1 constraint from GW170817 <cit.> while also contributing non-trivially to CMB observables, including B-mode polarization. Furthermore, as recently pointed out by <cit.>, EDE/EMG can also leave unique resonant signatures in the stochastic gravitational wave background if the field is oscillating, which is a ubiquitous feature of many EMG/EDE models. Resonance in the scalar sector <cit.> is also worth studying. This work is supported by NWO and the Dutch Ministry of Education, Culture and Science (OCW) (grant VI.Vidi.192.069). Some of the plots are made with the help of <cit.>. The authors acknowledge the ALICE and Xmaris clusters for computational support. The cosmology code used to produce the results is a major component of the new (in preparation) which will be made public later this year. Currently the code can be provided upon reasonable request. § SUPPLEMENTARY MCMC DETAILS Fig.<ref> shows the posterior distributions of all cosmological and model parameters of ΛCDM, cEDE and 𝒢EDE in the baseline (filled) and baseline+LSS (dashed line) datasets. The corresponding bestfit values, obtained from the minimization module in , are summarised in Table <ref>. The sampled cosmological parameters are {ω_b, ω_c, H_0, ln 10^10A_s, n_s, τ_reion}, namely the baryon and cold dark matter density parameters ω_c,b≡Ω_c,bh^2, the Hubble constant H_0, the log amplitude ln 10^10A_s and spectral tilt n_s of the primordial curvature perturbations, and the effective reionization optical depth τ_reion. The cEDE model adds two more model parameters {f_ede, ln(1+z_c)} to be sampled, namely the scalar field peak energy fraction f_ede and the logarithm of its redshift position ln(1+z_c). Following Planck <cit.> we treat the neutrinos as two massless ultra-relativistic species with N_ur=2.0308 plus one massive species with mass 0.06 eV, reproducing N_eff=3.044 <cit.>. Together with the cosmological and model parameters we also sample all nuisance parameters of the corresponding likelihoods with their recommended priors. We use uninformative priors for the cosmological and model parameters as reported in Table <ref>. The upper bound on ξ V_ϕ is informed by the theoretical stability requirement (<ref>). We have checked that relaxing this prior does not meaningfully change the posterior distribution, because theories with ξ V_ϕ>5/12 are automatically discarded due to numerical instability, but enforcing the bound in the prior makes the MCMC runs much more numerically robust. Due to the significantly increased numerical cost, we shrink some of the priors in nested sampling to speed up convergence, with the exception of ln(1+z_c), because nested sampling works well with bimodality. § CONSTRAINTS ON THE HIGH-Z_C MODELS We explore in this appendix the high redshift secondary peak of cEDE and 𝒢EDE. With the same setup as in Appendix-<ref> but changing the prior of ln(1+z_c) to [9,10], we obtain the baseline MCMC posterior constraints as shown in Fig.<ref>. 
The cEDE model converges to the high-z_c minimum, while turning on the Galileon MG term in 𝒢EDE spoils the convergence. We attribute this to the fact that the new dimension (ξ V_ϕ) connects the high-redshift local minimum with the global one around matter-radiation equality by lowering the χ^2 barrier between the two minima, hence the degeneracy band in the ξ V_ϕ - ln(1+z_c) panel. In fact, in the nested sampling runs this high-z_c local minimum is covered by our prior range, see Table <ref>, and has been correctly identified, but it turned out to be disfavored in both cEDE and 𝒢EDE compared with the low-redshift minimum studied in the main text.
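The convergence criterion quoted in the methodology section, the Gelman-Rubin diagnostic with R − 1 < 0.02, is standard; for reference, a minimal sketch of the estimator for a single scalar parameter (this is an illustration, not the internals of the actual sampling code):

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction R for a (m_chains, n_samples) array of one
    scalar parameter; convergence is declared when R - 1 < 0.02."""
    m, n = chains.shape
    B = n * chains.mean(axis=1).var(ddof=1)   # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()     # within-chain variance
    var_plus = (n - 1) / n * W + B / n        # pooled posterior-variance estimate
    return np.sqrt(var_plus / W)

rng = np.random.default_rng(0)
chains = rng.normal(0.0, 1.0, size=(4, 5000))  # toy, well-mixed chains
print(gelman_rubin(chains) - 1.0)              # well below the 0.02 threshold
```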
http://arxiv.org/abs/2407.02185v1
20240702114432
On a multiscale formulation for multiperforated plates
[ "Kersten Schmidt", "Sven Pfaff" ]
math.AP
[ "math.AP", "35C20, 35J25, 35B40, 41A60" ]
On a multiscale formulation for multiperforated plates Kersten Schmidt, Department of Mathematics, Technical University of Darmstadt, Dolivostr. 15, 64293 Darmstadt, Germany Sven Pfaff^* July 8, 2024 ===================================================================================================================================== § ABSTRACT Multiperforated plates exhibit high gradients and a loss of regularity concentrated in a boundary layer, for which a direct numerical simulation becomes very expensive. For elliptic equations the solution at some distance from the boundary is only affected in an effective way, and the macroscopic and mesoscopic behaviour can be separated. A multiscale formulation in the spirit of the heterogeneous multiscale method is introduced for the example of the Poisson equation. Based on the method of matched asymptotic expansions the solution is separated into a macroscopic far field, defined in a domain with a slowly varying boundary, and a mesoscopic near field, defined in scaled coordinates on possibly varying infinite periodicity cells. The near field has a polynomial behaviour that is coupled to the traces of the macroscopic variable on the mid-line of the multiperforated plate. A variational formulation using a Beppo-Levi space in the strip is introduced and its well-posedness is shown. The variational framework when truncating the infinite strip is discussed and the truncation error is estimated. § INTRODUCTION This paper considers the solution of second order elliptic problems in the presence of multiperforated plates or thin mesh-like structures with a locally periodic pattern that may vary on a macroscopic scale. Multiperforated plates can be applied to reduce acoustic noise pollution <cit.> in lecture halls, concert halls or in car mufflers. They can be applied to suppress thermoacoustic instabilities and for cooling in combustion chambers, or as sieves to control fluid flows. Moreover, thin mesh-like structures consisting of metallic wires – the so-called Faraday cage effect – can effectively shield electric fields, and similar structures of elastic rods – the stents – are used in blood vessels in the human body where they influence the flow. Due to the multiple scales in the geometry, and consequently in the solution, a direct numerical simulation of such problems is very costly. If finite element methods are used, the mesh width needs to be as small as the smallest geometrical scale, at least in an adaptive refinement towards the perforated plate or thin mesh-like structure. Therefore, models for the macroscopic fields are proposed that take the thin surface microstructures into account through effective boundary or transmission conditions. Their derivation relies on the observation that the small scale variations of the solution have a boundary layer behaviour and decay exponentially away from the thin microstructure. These boundary layers have been widely studied since the works of Sanchez-Palencia <cit.>, Achdou <cit.> and Artola and Cessenat <cit.>. With a combination of homogenisation techniques and the method of matched asymptotic expansions or the method of multiscale expansion, an asymptotic expansion of the near field and far field solution can be derived. This asymptotic technique is sometimes called surface homogenisation. 
An asymptotic expansion of order 1 has been obtained for the acoustic wave propagation through an perforated duct of vanishing thickness <cit.>, where in the limit of vanishing period the perforated duct becomes transparent <cit.>. For the scattering by a thin ring of regularly-spaced inclusions an asymptotic expansion of any order has been derived and justified in <cit.>, and in <cit.> an approximative model with transmission conditions of order 2 has been derived and justified. In <cit.> an approximate model for the Poisson problem with regularly spaced small inclusions with Dirichlet boundary conditions is derived with a three-scale expansion where the size of the inclusions and the distance to their nearest neighbour are considered as independent scales, which is extended to the Helmholtz equation in <cit.>. Approximate boundary conditions for regularly spaced inclusions with a different material has been derived for the Poisson equation via a two-scale homogenisation in <cit.>, and the method was applied for inclusions with Dirichlet or Neumann boundary conditions in <cit.>. Alternatively to the surface homogenisation the periodic unfolding method <cit.> that is based on the two-scale convergence <cit.> was extended to a multiperforated plates for the Helmholtz equation in <cit.>. An approximative model for the acoustic-structure interaction with elastic multiperforated plates was derived with periodic unfolding in <cit.>. For multiperforated acoustic liners with small viscosity a third scale for the hole size has been considered in surface homogenisation to obtain approximative models and transmission conditions in two dimensions <cit.> and three dimensions <cit.>. The surface homogenisation can be extended to multiperforated plates of finite size by incorporating additional terms for corner singularities . Based on the homogenisation theory <cit.> for periodic microstructure in all space directions numerical methods were proposed. The heterogenous multiscale method <cit.> (HMM) aims to provide a numerical solution to the limit equations where local cell problems are solved on quadrature points of the finite element mesh. This allows for a locally periodic microstructures, where the local problems may change slowly. A complete numerical analysis of the method in terms of the macroscopic mesh width and the mesh width of the local cell problems was given in <cit.>. In this paper we aim to propose and analyse a coupled variational formulation for the far and near field that are present in the surface homogenisation. For the coupling the principles of the method of matched asymptotic expansions are applied. The the near and far field in the variational formulation can be discretised by finite elements which shall be presented in a forthcoming presentation. The article is structured as follows. In Section <ref> the geometry and model problem with the solution decomposion in the macroscopic far field and near field is introduced. With matching conditions based on the method of matched asymptotic expansions the formulation of the coupled problem is stated. In Section <ref> a variational framework for the near field problem in an infinite strip using a Beppo-Levi space is introduced, its well-posedness is shown and the error introduced by the truncation of the strip is estimated. Finally, in Section <ref> the well-posedness of the coupled formulation will be proven and the truncation error is estimated. 
§ THE GEOMETRIC SETTING AND THE MODEL PROBLEM §.§ Domain with multiperforated plates Let Ω⊂ℝ^2 be an open, bounded Lipschitz domain. In this domain we consider a perforated wall that is defined in a vicinity of a closed C^1 curve Γ, that we call mid-line, with unit normal vector . The curve Γ is parametrized with _Γ: [0,1) →Γ where c_Γ≤ |_Γ'(x)| ≤ C_Γ with c_Γ, C_Γ > 0. The normalized normal vector on Γ is given by (_Γ'(x))^, x ∈ [0,1) where 𝐯^ = (v_2, -v_1)^⊤ for any 𝐯 = (v_1, v_2)^⊤ is the vector that is turned clock-wise by 90^∘. Using the parametrization of the vicinity of Γ ϕ: (x,y) ↦_Γ(x) + y _Γ(x) . we define the perforated domain as Ω^ε = Ω∖⋃_n=1^1/εϕ( εΩ_w (ϕ ( ε (n-12), 0))), 1/ε∈ℕ . Here,  Ω_w(), ∈Γ is the local wall pattern, which is for each ∈Γ an open Lipschitz domain in (0,1) × [-R_0,R_0] for some R_0 > 0. The outer normalized normal vector field on ∂Ω_w() is denoted by (, X, Y) = (n_1(, X, Y), n_2(, X, Y))^⊤. For simplicity we assume that the local wall pattern match between the left and right side, i.e., I() := { Y ∈ℝ, (0,Y)^⊤∉∂Ω_w() } = { Y ∈ℝ, (1,Y)^⊤∉∂Ω_w() }. Moreover, Ω() = ((0,1) ×ℝ) ∖Ω_w() denotes the periodicity cell on ∈Γ. Assuming Ω_w() to depend continuously on in a finite partition of Γ, then the perforated wall is called locally periodic. §.§ Poisson problem in the perforated domain In the perforated domain Ω^ε we state the model problem { -Δ u^ε = f in Ω^ε , ∂_n u^ε = 0 on ∂Ω^ε , ∫_Ω_f u^ε d = 0 , . where Ω_f := f ⊂Ω∖Γ with dist( f, Γ) > 2√(ε_0) for some ε_0 > 0. Assuming further that ∫_Ω f() d = 0 we can assert that the Poisson problem (<ref>) has a unique solution. Based on the principles of periodic homogenization and the method of matched asymptotic expansions <cit.> we take the ansatz u^ε() = U_int(_Γ(x), x/ε - ⌊x/ε⌋, y/ε) + o(1) with = ϕ(x,y), dist(, Γ) < 2√(ε), u_ext() + o(1), dist(, Γ) > √(ε), where U_int: { (, X, Y), ∈Γ, (X,Y) ∈Ω()}→ℝ and u_ext: Ω\Γ→ℝ describe the dominating behaviour in a vicinity and outside a vicinity of the microstructure. Since (_Γ(x-ε), X+1,Y) and (_Γ(x), X, Y) correspond to the same point in the vcinity of the microstructure and assuming continuity of u^ε we find that U_int(_Γ(x-ε), ⌊x/ε⌋, y/ε) = U_int(_Γ(x), ⌊x/ε⌋, y/ε) and taking the limit ε→ 0 we see that U_int is periodic in X, i.e., for any (,Y) ∈Γ×I it holds lim_X→ 0+ U_int(, X, Y) = lim_X→ 1- U_int(, X, Y) . Similarly, we find that for any (,Y) ∈Γ×I it holds lim_X→ 0+∂_X U_int(, X, Y) = lim_X→ 1-∂_X U_int(, X, Y) . Inserting the ansatz (<ref>)_1 into (<ref>) we find for = ϕ(x,y) that 0 = -Δ u^ε() = -1/ε^2( ∂^2/∂ X^2 + ∂^2/∂ Y^2) U_int(_Γ(x), x/ε - ⌊x/ε⌋, y/ε) + o(1/ε^2) 0 = ∂_n u^ε() = 1/ε∇_XY U_int(_Γ(x), x/ε - ⌊x/ε⌋, y/ε)·(_Γ(x)) + o(1/ε) and, hence, we demand for all ∈Γ -Δ_XY U_int(, X, Y) = 0, (X,Y) ∈Ω(), ∇_XY U_int(, X, Y) ·() = 0, (X,Y) ∈∂Ω_w(). For |Y| ≥ R_0 we can expand U_int in a Fourier series in X, U_int(, X,Y) = ∑_n=0^∞ U_int, n^± (, Y) e^2π n X, ± Y ≥ R_0, and inserting into (<ref>) and assuming non-exponential increase in Y leads to U_int, 0^± (, Y) = U_int, 0,0^±() + Y U_int, 0,1^±(), U_int, n^± (, Y) = U_int, n^±() e^-2π n |Y| . Indeed, U_int, 0,1^+() = U_int, 0,1^-(), i.e., the slopes for Y →∞ and Y → -∞ coincide. To verify this, we integrate (<ref>) over the truncated periodicity cell Ω_R = Ω∩ (0,1) × (-R,R) for some R > R_0. Then, applying Gauss's theorem and the periodicity condition (<ref>) we obtain ∫_0^1 ∂_Y U_int(, X, R) dX = ∫_0^1 ∂_Y U_int(, X, -R) dX . 
Now, taking the limit for R→∞ using (<ref>), we find that the linear slopes are the same, lim_Y →±∞∂_Y U_int(, X, Y) = α() . In the following we denote the linear slope and the jump and mean of the constant monomial by α() := U_int, 0,1^+() = U_int, 0,1^-() , u_∞() := U_int, 0,0^+() - U_int, 0,0^-() , m_∞() := 12 U_int, 0,0^+() + 12 U_int, 0,0^-() . The two representation of u^ε in the ansatz (<ref>) shall be identical in the two matching zones where √(ε) < dist(, Γ) < 2 √(ε), at least asymptotically for ε→ 0. Assuming u_ext() - U_int(_Γ(x), x/ε - ⌊x/ε⌋, y/ε) = o(√(ε)), = ϕ(x,y), √(ε) < y < 2 √(ε) . As we assumed the midline Γ to be C^1 the Taylor expansion of u_ext around Γ u_ext(_Γ(x) + y _Γ) = u_ext^±(_Γ(x)) + y ∂_n u_ext^±(_Γ(x)) + o(|y|) holds separately for the two sides of Γ with ∂_n u_ext^±(_Γ(x)) = lim_y → 0±∇ u_ext(_Γ(x) + y _Γ(x))·_Γ(x). In the two matching zones the linear polynomial (<ref>) is the dominating term of the near field U_int. Therefore, for all ∈Γ u_ext^±() + y ∂_n u_ext^±() - U_int,0,0^±() - y/εα() = o(√(ε)), √(ε) < y < 2 √(ε) . Hence, u_ext^±() = U_int,0,0^±() , ε∂_n u_ext^+() = ε∂_n u_ext^-() = α() . For the introduction of the coupled system we define J: ℝ→ℝ as a canonical jump function, cf. Fig. <ref>, that is a smooth and odd function with (Y)J(Y)=1/2 for |Y| > R_1 for some R_1 > R_0, and vanishing in [-R_0,R_0]. Moreover, [·] and {·} denote the average and the jump of traces on the mid-line Γ. Altogether, we obtain the coupled system for (u_ext, U_int, α, u_∞, m_∞) -Δ u_ext() = f(), ∈Ω∖Γ, ∂_n u_ext() = 0, ∈∂Ω, [∂_n u_ext]() = 0, ∈Γ, -Δ_XYU_int(,X,Y) = 0, ∈Γ, (X,Y) ∈Ω(), ∂_n U_int(,X,Y) = 0, ∈Γ, (X,Y) ∈∂Ω_w(), lim_|Y|→∞ U_int(,X,Y) - m_∞() - u_∞() J(Y) - α() Y = 0, ∈Γ, X ∈ (0,1), [u_ext]() - u_∞() = 0, ∈Γ, {u_ext}() - m_∞() = 0, ∈Γ, {∂_n u_ext}() - ε^-1α() = 0, ∈Γ . Here, we use ∂_n U_int(,X,Y) := ∇_XY U_int(,X,Y)·(,X,Y). The equations (<ref>)–(<ref>) form the subsystem for the macroscopic far field, the equations (<ref>)–(<ref>) form the subsystem for the microscopic near field and (<ref>)–(<ref>) are the coupling conditions. In Sec. <ref> we discuss the near field problem for given jump u_∞ at infinity that we call near field Dirichlet problem. Then, in Sec. <ref> we introduce the variational formulation coupling the near and far field and show its well-posedness. § NEAR FIELD DIRICHLET PROBLEM IN ONE PERIODICITY CELL In this section we consider a near field problem in one periodicity cell Ω = ((0,1) ×ℝ) ∖Ω_w with Ω_w denoting a wall domain. Here, we omit the slow variable . The solution U satisfies the system -Δ_XY U(X,Y) = 0 in Ω, ∂_n U(X,Y) = 0 on ∂Ω_w, U(1,Y) = U(0,Y) on ℝ∖ I_y, ∂_X U(1,Y) = ∂_X U(0,Y) onℝ∖ I_y, with the polynomial behaviour U(X,Y) = m_∞ + u_∞J(Y) + α Y + O(exp(-2π|Y|)) for Y →±∞, at infinity, with m_∞, u_∞, α∈ℝ. We consider the problem that we seek the coefficient α for given coefficient u_∞, which we call the near field Dirichlet problem. As near field Neumann problem we would denote the one where the linear slope α is given where the coefficient u_∞ results. In both problems the solution is defined up to the constant m_∞. 
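A Remark later in the paper exhibits an explicit piecewise-polynomial, C² choice of the canonical jump function J. As a quick check, the following Python snippet (with the hypothetical choice R_0 = 1; the paper allows any R_0 > 0) verifies numerically the properties required of J: it is odd, vanishes on [−R_0, R_0], equals sgn(Y)/2 for |Y| ≥ R_0 + 2, and satisfies ‖J′‖_∞ = 15/32 ≤ 1/2.

```python
import numpy as np

R0 = 1.0  # hypothetical choice; any R0 > 0 is admissible

def J(Y):
    """Piecewise quintic jump function from the Remark in Sec. 4."""
    Y = np.asarray(Y, dtype=float)
    s, a = np.sign(Y), np.abs(Y) - R0
    ramp = s / 32.0 * a**3 * (3.0 * a**2 - 15.0 * a + 20.0)
    return np.where(np.abs(Y) >= R0 + 2.0, s / 2.0,
                    np.where(np.abs(Y) > R0, ramp, 0.0))

Y = np.linspace(-5.0, 5.0, 200001)
Jp = np.gradient(J(Y), Y[1] - Y[0])
print("max |J'| =", np.abs(Jp).max(), "  (exact: 15/32 =", 15.0 / 32.0, ")")
print("J(+-10) =", J(10.0), J(-10.0))   # the limits +-1/2
```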
§.§ Variational spaces For the variational problem of the near field Dirichlet problem we consider as unknowns the pair (U, α) with U(X,Y) := U(X,Y) - m_∞ - u_∞J(Y) - α Y, that is seeked in the Beppo-Levi space BL_0,♯(Ω) that is the completion of the space of smooth functions with bounded support and periodicity conditions in X C^∞_c,♯(Ω) := {V∈ C^∞_♯(Ω), diam((V)) < ∞} in BL_♯(Ω) := {V∈𝒟'_♯(Ω) : V/√(1 + |Y|^2)∈ L^2(Ω) and ∇V∈ L^2(Ω) } . Now, we define for any subdomain G⊆Ω the norm V_BL(G)^2 := ∫_G |∇V(X,Y)|^2 + |V(X,Y)|^2/1 + |Y|^2d(X,Y). The Beppo-Levi spaces BL_♯(Ω) and BL_0,♯(Ω) are Hilbert spaces when endowed with the norm ·_BL(Ω) which follows in analogy to <cit.>, see also <cit.>. The seminorm |·|_H^1(Ω) is a norm on the space BL_0,♯(Ω), which is equivalent to ·_BL(Ω). The statement follows from the Poincaré inequality ∫_Ω|V(X,Y)|^2/1 + |Y|^2 d(X,Y) ≤ C_p ∫_Ω |∇V(X,Y)|^2 d(X,Y), which follows similarly to <cit.>. The space Ḣ^1_♯(Ω) as the completion of C^∞_c(Ω) with periodicity condition in X with respect to the H^1(Ω)-seminorm is an Hilbert space with ∫_0^1 V^2(X,Y) dX → 0 for |Y|→∞ for all V∈Ḣ^1_♯(Ω). First, we show the decaying property (<ref>). For this, let for an arbitrary V∈Ḣ^1_♯(Ω) and any Y with |Y| ≥ R_0 be V(Y) := √(∫_0^1 V^2(X,Y) dX) . By the Cauchy-Schwarz inequality we can assert that | V'(Y) | ≤∫_0^1 (∂_Y V(X,Y))^2 dX and so | V|_H^1((-∞, -R_0) ∪ (R_0, ∞))≤|V|_H^1(Ω) Hence, the continous extension V onto ℝ by linear polynomial into [-R_0, R_0] is the homogeneous Sobolev space Ḣ^1(ℝ) <cit.>, the completion of C^∞_c(ℝ) with respect to the H^1(ℝ) seminorm. It is well-known that the homogeneous Sobolev space Ḣ^1(ℝ) with a decaying behavior towards ±∞. This implies (<ref>). It suffices to show definitness of Ḣ^1_♯(Ω). For this let V∈Ḣ^1_♯(Ω) be a function with V_Ḣ^1_♯(Ω) = 0. Then, ∇V = 0 and V is a constant. Finally,  (<ref>) implies V = 0, and the definiteness of the H^1-seminorm follows. Now, Lemma <ref> and Lemma <ref> imply The homogeneous Sobolev space Ḣ^1_♯(Ω) and the Beppo-Levi space BL_0,♯(Ω) are equivalent. §.§ Variational problem The unknown U satisfies -ΔU(X,Y) = -Δ U(X,Y) + αΔ Y + u_∞J”(Y) = u_∞J”(Y) in Ω, ∂_n U(X,Y) = ∂_n U(X,Y) -α∂_n Y - u_∞J'(Y)n_2 = -αn_2 on ∂Ω, since J'(Y) n_2 = 0 on ∂Ω. Multiplying (<ref>) with a test function V ∈ BL_0,♯(Ω), integrating over Ω and using integration by parts we find the equality ∫_Ω∇U(X,Y) ·∇V(X,Y) d(X,Y) + α∫_∂Ωn_2 V(X,Y) d_XYσ = u_∞∫_Ω J”(Y)V(X,Y) d(X,Y). A second equation shall derived on the truncated periodicity cell Ω_R := Ω∩ (0,1) × (-R,R) for R > R_0. Applying Green's formula twice and using Δ Y = 0 we can assert that ∫_∂Ω U(X,Y) n_2 d_XYσ = ∫_∂Ω_RU(X,Y)∇ Y ·𝐧 d_XYσ - ∫_0^1 U(X,R) - U(X,-R) dX = ∫_Ω_R∇U(X,Y) ·∇ Y d(X,Y) - ∫_0^1 U(X,R) - U(X,-R) dX = - ∫_Ω_R YΔU(X,Y) d(X,Y) + ∫_∂Ω Y ∂_n U(X,Y) d_XYσ + ∫_0^1 R∂_Y U(X,R) dX - R∂_Y U(X,-R) dX - ∫_0^1 U(X,R) - U(X,-R) dX Taking the limit for R →∞ the last two integrals vanish due to the exponential decay of U, see (<ref>), and inserting (<ref>) we find ∫_∂ΩU(X,Y) n_2 d_XYσ = u_∞∫_Ω Y J”(Y) d(X,Y) - α∫_∂Ω Y n_2 d_XYσ. Integrating the first integral on the right hand side by parts two times and using the smoothness of J we can assert that ∫_Ω Y J”(Y) d(X,Y) = -1 . Moreover, the last integral can be simplified using the Green's formula inside the wall Ω_w, ∫_∂Ω Y n_2 d_XYσ = ∫_∂Ω Y ∇ Y ·𝐧d_XYσ = -∫_Ω_w Y Δ Y + ∇ Y ·∇ Y d(X,Y) = -|Ω_w| < 0, where the sign is changed as the normal vector 𝐧 is directed inside Ω_w. 
Hence, we seek (U, α) ∈ BL_0,♯(Ω) ×ℝ such that ∫_Ω∇U·∇Vd(X,Y) + α∫_∂Ωn_2 Vd_XYσ = u_∞∫_Ω J”Vd(X,Y) ∀V∈ BL_0,♯(Ω), -∫_∂ΩUn_2 d_XYσ +α|Ω_w | = u_∞ . The variational formulation (<ref>) admits a unique solution (U,α) ∈ BL_0,♯(Ω) ×ℝ and there exists a constant C such that |U|_H^1(Ω) + |α|≤ C | u_∞| . With the bilinear form 𝖺 given by 𝖺((U,α),(V,β)) = ∫_Ω∇U·∇Vd(X,Y) + α∫_∂Ωn_2 Vd_XYσ -β∫_∂ΩUn_2 d_XYσ +αβ|Ω_w | and the linear form ℓ defined by ℓ((V, β)) = u_∞( ∫_ΩV J”d(X,Y) + β), that are both continuous on (BL_0,♯(Ω),ℝ), the variational formulation (<ref>) is equivalent to seek (U,α) ∈ BL_0,♯(Ω) ×ℝ such that 𝖺((U, α), (V,β)) = ℓ((V,β) for all (V,β) ∈ BL_0,♯(Ω) ×ℝ. Taking (V,β) = (U,α) we can assert that a((U,α),(U,α)) = ∫_Ω|∇U|^2 d(X,Y) +α^2|Ω_w |≥min(1,|Ω_w |) (|U|_H^1(Ω) + |α|^2) and as H^1(Ω)-seminorm is a norm on BL_0,♯(Ω) due to Lemma <ref> the bilinear form is (BL_0,♯(Ω),ℝ)-elliptic with ellipticity constant γ = min(1,|Ω_w|). Then, with the Lax-Milgram lemma follows existence and uniqueness of the solution as well as its continuous dependency on u_∞. §.§ Formulation on the truncated periodicity cell To propose a numerical scheme the periodicity cell shall be truncated at |Y| = R for some R > R_1, where we search for approximate solutions with homogeneous Dirichlet boundary conditions at |Y| = R. We denote the truncated periodicity cell Ω_R = Ω∩ [0,1] × [-R,R] on which we consider the Hilbert space BL_0, ♯(Ω_R) := {V_R ∈𝒟'_♯(Ω_R) : V_R _BL(Ω_R) < ∞, V_R(·,± R)=0}, By the definiton of BL_0, ♯(Ω) as the completion of C^∞_c,♯(Ω) the union of BL_0, ♯(Ω_R) for all R > R_0 each extended by zero for |Y| > R is dense in BL_0, ♯(Ω). Hence, in view of Lemma <ref> it follows that the H^1(Ω_R)-seminorm and BL(Ω_R)-norm are equivalent with constants independent of R. Then, we search (U_R,α_R) ∈ BL_0, ♯(Ω_R) ×ℝ such that for all V_R∈ BL_0, ♯(Ω_R) ∫_Ω_R∇U_R·∇V_R d(X,Y) + α_R ∫_∂Ω_Rn_2 V_R d_XYσ = u_∞∫_Ω_R J”V_R d(X,Y) -∫_∂Ω_RU_Rn_2 d_XYσ +α_R|Ω_w | = u_∞ . The variational formulation (<ref>) admits a unique solution (U_R,α_R) ∈ BL_0, ♯(Ω_R) ×ℝ and there exists a constant C independent of R such that |U_R|_H^1(Ω_R) + |α_R |≤ C | u_∞|. The proof is in analogy to the one of Lemma <ref> where the bilinear form is BL_0, ♯(Ω_R) ×ℝ-elliptic in this case. Let R > 2R_1. Then, there exists a constant C independent of R and |U_R- U|_H^1(Ω) + |α_R -α|≤ C exp(-π R) . The proof is divided into two parts. First the truncation error is bounded using Céa's lemma by the best approximation error which is then bounded by the error of an interpolant. As BL_0, ♯(Ω_R) extended by 0 into Ω∖Ω_R is a subspace of BL_0, ♯(Ω) and with the BL_0, ♯(Ω) ×ℝ-ellipticity of the bilinear form 𝖺 we can apply Céa's lemma. This leads to |U_R- U|_H^1(Ω) + |α_R -α|≤(1+C^2/γ^2)inf_(V_R,β_R)∈ H^1_± R,♯(Ω) ×ℝ(|V_R- U|_H^1(Ω)^2 + |β_R -α|^2), where γ = min(1, |Ω_w|) is the ellipicity constant and C the continuity constant of the bilinear form. As β_R can be chosing to equal α as they are real numbers this simplifies to |U_R- U|_H^1(Ω) + |α_R -α|≤(1+C^2/γ^2)inf_V_R∈ H^1_± R,♯(Ω)|V_R- U|_H^1(Ω)^2. Now, we define for any U∈ BL_0, ♯(Ω) the interpolant Π_R U∈ BL_± R, ♯(Ω) where (Π_R U) (X,Y) = U_R(X,Y) ·{[ 1, | Y | < R/2,; 2R-Y/R, R/2<| Y | < R. ]., which extended by 0 into Ω∖Ω_R is in BL_0, ♯(Ω). To estimate the interpolation error in the H^1(Ω) seminorm we compute the L^2(Ω)-norms of the derivatives of the difference Π_R U - U. As Π_R U = U in Ω_R/2 the errors consists of contributions in Ω_R ∖Ω_R/2 and in Ω∖Ω_R. 
As the solution U∈ BL_0,♯(Ω) of (<ref>) satisfies -ΔU = 0 for |Y| > R_1 using separation of variables we can assure that it admits the series representation U(X,Y) = ∑_k=1^∞ U_k,±exp(2π k X) exp(-2π k (± Y-R_0)) for ± Y > R2≥ R_1. Now, with the trace theorem and Lemma <ref> it holds with constants c,C that 2π∑_k=1^∞ k |U_k,±|^2 ≤ cU_H^1/2(Γ_± R_0)^2 ≤ C |u_∞|^2 . Using Π_RU = 0 in Ω∖Ω_R we find ∂_X(Π_RU - U)^2_L^2(Ω∖Ω_R) = 4 π^2 ∑_±∑_k=1^∞ k^2 |U_k,±|^2 ∫_R^∞exp(-4π k(Y-R_0)) dY ≤π∑_±∑_k=1^∞ k |U_k,±|^2 exp(-4π k (R-R_0)) ≤exp(-4π (R-R_0)) π∑_±∑_k=1^∞ k |U_k,±|^2 ≤ C exp(-4π (R-R_0)) |u_∞|^2 , and in analogy we obtain ∂_Y(Π_RU - U)^2_L^2(Ω∖Ω_R) ≤ C exp(-4π (R-R_0)) |u_∞|^2 . To analyse the error in Ω_R ∖Ω_R/2 we first see that Π_RU(X,Y) - U(X,Y) = ( 2R - Y/R - 1 )U(X,Y) = ( 1 - 2Y/R) U(X,Y), and using (<ref>) we find that ∂_X(Π_RU - U)^2_L^2(Ω_R ∖Ω_R/2) = 4π^2 ∑_±∑_k=1^∞ k^2 | U_k,±|^2 E_k(R) where E_k(R) = ∫_R/2^R ( 1 - 2Y/R)^2exp(- 4π k(Y - R_0))dY ≤∫_R/2^R exp(- 4π k(Y - R_0))dY ≤1/4π kexp(- 2π k(R - 2R_0)) . Now, inserting the inequality (<ref>) we can assert that ∂_X(Π_RU - U)^2_L^2(Ω_R ∖Ω_R/2)≤ C exp(-2π (R-2R_0)) |u_∞|^2 . For the Y-derivative we obtain ∂_Y(Π_RU - U)^2_L^2(Ω_R ∖Ω_R/2) ≤4/R^2∑_±∑_k=1^∞| U_k,±|^2∫_R/2^R exp(- 4π k(Y - R_0))dY + 4π^2 ∑_k=1^∞ k^2 | U_k,±|^2 E_k(R) ≤∑_±∑_k=1^∞| U_k,±|^2 ( 1/π k R^2 + π k )exp(-2 π k(R - 2R_0)) ≤ C exp(-2π (R-2R_0)) |u_∞|^2 , where we used again (<ref>). Finally, adding all the error contributions and using (<ref>) with V_R = Π_RU we can assert the statement of the lemma. § VARIATIONAL FORMULATION FOR NEAR AND FAR FIELD §.§ The variational formulation on unbounded periodicity cells For the variational problem of the coupled problem we consider as unknown for the near field U_int(,X,Y) := U_int(,X,Y) - m_∞() - u_∞()J(Y) - α() Y, ∈Γ, that is seeked in the variational space L^2(Γ, BL_0,♯(Ω)) := {V_int(,·) ∈ BL_0,♯(Ω()) for almost all ∈Γ, V_int(·,X,Y)_BL(Ω(·))∈ L^2(Γ) }. This space is equipped with the norm defined by V_int_L^2(Γ, BL(Ω))^2 := ∫_ΓV_int(,X,Y)_BL(Ω())^2 dσ . Functions in this space have periodicity conditions in X, and the domain of definition for fixed is the local periodicity cell Ω(). As the far field u_ext can only be uniquely defined up to an additive constant we seek it in an Hilbert space of vanishing mean in the support of f H^1_*(Ω∖Γ):={v_ext∈ H^1(Ω∖Γ): ∫_Ω_f v_ext d = 0} , which still allows for jumps on Γ. We equip the space with the norm defined by ‖ v_ext‖^2_H^1_*(Ω∖Γ) := | v_ext|^2_H^1(Ω∖Γ) + ‖ [v_ext] ‖^2_L^2(Γ) , and due to term ‖ [v_ext] ‖^2_L^2(Γ) the norm is only zero for v_ext = 0. To obtain a variational formulation we consider first (<ref>) where U, V are replaced by U_int and V_int, the second equation (<ref>) is multiplied with β∈ L^2(Γ), α is considered in L^2(Γ) and both equations are multiplied with 1 / ε and integrated over Γ. Then, multiplying (<ref>) by v_ext∈ H^1_*(Ω∖Γ), integrating over Ω∖Γ and using [∂_n u_ext] = 0 by (<ref>) and {∂_n u_ext} = α / ε by (<ref>) and multiplying (<ref>) by v_∞∈ L^2(Γ) and integrating over Γ leads to the coupled variational formulation: Seek (u_ext, α, U_int, u_∞) ∈ H^1_*(Ω∖Γ) × L^2(Γ) × L^2(Γ, BL_0,♯(Ω)) × L^2(Γ) such that ∫_Ω∖Γ∇ u_ext·∇ v_extd + 1/ε∫_Γα[v_ext] dσ = ∫_Ω∖Γ f v_extd 1/ε∫_Γ -[u_ext]v_∞ + u_∞ v_∞dσ = 0 1/ε∫_Γ(α∫_∂Ω()V_intn_2d_XYσ + ∫_Ω()∇U_int·∇V_int - u_∞ J”V_intd(X,Y) ) dσ = 0 1/ε∫_Γ(αβ|Ω_w() | -β∫_∂Ω()U_intn_2 d_XYσ - u_∞β)dσ = 0 for all (v_ext, β, V_int, v_∞) ∈ H^1_*(Ω∖Γ) × L^2(Γ) × L^2(Γ, BL_0,♯(Ω)) × L^2(Γ). 
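Before turning to the well-posedness of the coupled problem, the exp(−πR) truncation rate established above admits a quick numerical illustration. The sketch below evaluates |Π_R U − U|_{H¹} for a single separated mode U = cos(2πX) e^{−2π(|Y|−R_0)} (assuming k = 1 and R_0 = 1, and ignoring the wall geometry); the rate is set by the cut-off region starting at |Y| = R/2, which converts the 2π decay of the mode into the π of the Lemma.

```python
import numpy as np

R0, dY = 1.0, 1e-4

def h1_err(R):
    # upper half-strip suffices by symmetry; X-quadrature of cos^2/sin^2 gives 1/2
    Y = np.arange(R0, 8.0 * R, dY)
    U = np.exp(-2.0 * np.pi * (Y - R0))
    # cut-off of the interpolant Pi_R: 1 on |Y| < R/2, then chi = 2(R - Y)/R
    chi = np.where(Y < R / 2, 1.0, np.clip(2.0 * (R - Y) / R, 0.0, 1.0))
    diff = (chi - 1.0) * U                 # Pi_R U - U along Y
    ddiff = np.gradient(diff, dY)
    energy = 0.5 * np.trapz((2.0 * np.pi * diff) ** 2 + ddiff ** 2, dx=dY)
    return np.sqrt(energy)

errs = [h1_err(R) for R in (2.0, 3.0, 4.0, 5.0)]
for R, e in zip((2, 3, 4, 5), errs):
    print(f"R = {R}:  |Pi_R U - U|_H1 = {e:.3e}")
print("successive ratios vs exp(-pi) =", np.exp(-np.pi),
      [f"{errs[i+1]/errs[i]:.3e}" for i in range(3)])
```

Up to an algebraic prefactor, the successive ratios approach exp(−π) ≈ 4.3e-2, consistent with the exp(−πR) bound of the Lemma.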
To discuss the well-posedness we introduce the product space 𝒲 = H^1_*(Ω∖Γ) × L^2(Γ)× L^2(Γ, BL_0,♯(Ω)) × L^2(Γ) with ε-dependent norm defined by (v_ext,β, V_int,v_∞) _𝒲,ε^2 := v_ext_H^1(Ω∖Γ)^2 + 1/ε( β_L^2(Γ)^2 + U_int_L^2(Γ, BL(Ω))^2 + v_∞_L^2(Γ)^2) and a related seminorm | (v_ext,β, V_int,v_∞) _𝒲,ε^2 := | v_ext|_H^1(Ω∖Γ)^2 + 1/ε( β_L^2(Γ)^2 + U_int_L^2(Γ, BL(Ω))^2 + v_∞_L^2(Γ)^2) . On this space we define the bilinear form 𝖻(( u_ext, α, U_int,u_∞),(v_ext,β, V_int,v_∞)) := ∫_Ω∖Γ∇ u_ext·∇ v_extd + 1/ε∫_Γα[v_ext] - [u_ext]v_∞ + u_∞ v_∞dσ + 1/ε∫_Γ(α∫_∂Ω()V_intn_2d_XYσ + ∫_Ω()∇U_int·∇V_int - u_∞ J”V_intd(X,Y) ) dσ +1/ε∫_Γ(αβ|Ω_w() | -β∫_∂Ω()U_intn_2 d_XYσ - u_∞β)dσ , for which we state inf-sup-conditons where the first gives only a lower bound in terms of the seminorm. Let J'_L^∞(ℝ)≤12. Then there exists a constant γ > 0 independent of ε such that for all (u_ext, α, U_int,u_∞) ∈𝒲 it holds sup_(v_ext,β, V_int,v_∞) ∈𝒲∖{0}| 𝖻((u_ext, α, U_int,u_∞),(v_ext,β, V_int,v_∞)) |/ (v_ext,β, V_int,v_∞) _𝒲,ε ≥γ|(u_ext, α, U_int,u_∞)|_𝒲,ε , and for all (v_ext,β, V_int,v_∞) ∈𝒲∖{(0,0,0,0) } it holds sup_(u_ext,α, U_int,u_∞) ∈𝒲∖{0}| 𝖻((u_ext, α, U_int,u_∞),(v_ext,β, V_int,v_∞)) | > 0 . First, integrating by parts we find that ∫_∂Ω()U_intn_2 dσ_XY = ∫_Ω() Y' ∂_Y U_intd(X,Y) = ∫_Ω()∂_Y U_intd(X,Y), -∫_Ω() J”U_intd(X,Y) = ∫_Ω() J' ∂_Y U_intd(X,Y), since the boundary term ∫_∂Ω() J'U_intn_2 dσ_XY vanishes as J' is zero on the wall boundary ∂Ω_w. Inserting the test function (v_ext,β, V_int,v_∞) = (u_ext, α - √(2)2 u_∞, U_int,α) . into the bilinear form, the mixed terms with α and u_∞ cancel out. Now, defining m_w := inf_∈Γ|Ω_w()|, using the above formulas and m_w ≤|Ω_w()| we obtain 𝖻( (u_ext,α, U_int,u_∞), (u_ext, α - √(2)2 u_∞, U_int,α) ) ≥| u_ext|^2_H^1(Ω∖Γ)+ 1/ε∫_Γ|U_int|^2_H^1(Ω())dσ + m_w/εα_L^2(Γ)^2 +√(2)/2 εu_∞^2_L^2(Γ) +1/ε∫_Γu_∞∫_Ω()(J' + √(2)2)∂_YU_intd(X,Y)dσ. Using Young's inequality we find that ∫_Γu_∞∫_Ω()(J' + √(2)2)∂_YU_intd(X,Y)dσ ≥ -3/4∫_Γ |U_int |_H^1(Ω())^2 dσ - 1/3( J'_L^∞(ℝ) + √(2)2)^2 u_∞^2_L^2(Γ) . With the assumption on J'_L^∞(ℝ) we can assert that 1 - 1/3(J'_L^∞(ℝ) + √(2)2)^2 ≥1/3 and therefore 𝖻( (u_ext,α, U_int,u_∞), (u_ext, α - √(2)2 u_∞, U_int, α) ) ≥| u_ext|^2_H^1(Ω∖Γ)+ 1/ε(1/4∫_Γ|U_int|^2_H^1(Ω())dσ + m_w α_L^2(Γ)^2 + 1/3 u_∞_L^2(Γ)^2) ≥γ√(| u_ext|^2_H^1(Ω∖Γ)+ 1/ε(∫_Γ|U_int|^2_H^1(Ω())dσ + α_L^2(Γ)^2 + u_∞_L^2(Γ)^2)) ·√(| u_ext|^2_H^1(Ω∖Γ)+ 1/ε(∫_Γ|U_int|^2_H^1(Ω())dσ + α - √(2)2u_∞_L^2(Γ)^2 + α_L^2(Γ)^2)) for some well-chosen γ > 0 only depending on m_w since α - √(2)2u_∞^2_L^2(Γ) + α^2_L^2(Γ)≤ 2 ( α^2_L^2(Γ) + u_∞^2_L^2(Γ)). As the H^1(Ω())-seminorm and the BL(Ω())-norm are equivalent on BL_0,♯(Ω()) by Lemma <ref> the inequality (<ref>) follows. Now, we aim to show the second inf-sup condition. For this we fix the test functions and choose appropriate trial functions. More precisely, we see that for M_w := max(4, 1/3sup_∈Γ|Ω_w()|) it holds 𝖻( (v_ext, v_∞, 2 M_w V_int, M_w (v_∞ - 2β)), (v_ext,β, V_int,v_∞)) = | v_ext|^2_H^1(Ω∖Γ) + 2M_w/ε∫_Γ|V_int|^2_H^1(Ω())dσ + M_w/ε v_∞_L^2(Γ)^2 + 2M_w/εβ^2_L^2(Γ) + 1/ε∫_Γ∫_Ω()( (1 + 2M_w J') v_∞ - 2 M_w (1 + J') β)∂_Y V_intd(X,Y) dσ + 1/ε∫_Γ (|Ω_w| -3 M_w)β v_∞dσ . 
Using Young's inequality, the definition of M_w and the assumption on J' we can estimate the mixed terms and using again Poincaré estimate of Lemma <ref> we obtain 𝖻( (v_ext, v_∞, 2 M_w V_int, M_w (v_∞ - 2β)), (v_ext,β, V_int,v_∞)) ≥| v_ext|^2_H^1(Ω∖Γ) + 1/ε( 2M_w - 12 (1 + M_w J'_L^∞(ℝ)) - M_w( 1 + J'_L^∞(ℝ)) ) ∫_Γ|V_int|^2_H^1(Ω())dσ + 1/ε( M_w - 12 (1 + M_w J'_L^∞(ℝ)) - 12 (|Ω_w| -3 M_w) ) v_∞_L^2(Γ)^2 + 1/ε( 2M_w - M_w (1 + J'_L^∞(ℝ)) - 12 (|Ω_w| -3 M_w) ) β_L^2(Γ)^2 ≥| v_ext|^2_H^1(Ω∖Γ) + 1/2 ε∫_Γ|V_int|^2_H^1(Ω())dσ + 11/ε v_∞_L^2(Γ)^2 + 2/εβ_L^2(Γ)^2 ≥| v_ext|^2_H^1(Ω∖Γ) + 1/2 ε(C_p+1)V_int_L^2(Γ, BL(Ω())^2 + 11/ε v_∞_L^2(Γ)^2 + 2/εβ_L^2(Γ)^2 , which is positive if (v_ext,β, V_int,v_∞) ∈𝒲∖{ (0,0,0,0) }. Let J'_L^∞(ℝ)≤12 and let ε_0 > 0. Then, for all ε∈ (0, ε_0] the variational formulation (<ref>) admits a unique solution and there exists a constant C > 0 independent of ε such that | u_ext|_H^1(Ω∖Γ) + 1/√(ε)(‖α‖_L^2(Γ) + U_int_L^2(Γ, BL(Ω)) + ‖ u_∞‖_L^2(Γ)) ≤ C ‖ f ‖_L^2(Ω∖Γ) . The proof is divided into two steps. First, we show that a solution of (<ref>) is bounded by the right hand side f and therefore it is unique. Second, we use the Fredholm theory to conclude that a solution exists for any f ∈ L^2(Ω∖Γ). Using the definition of the bilinear form 𝖻 we see that the solution (u_ext,α, U_int,u_∞) ∈𝒲 of (<ref>) satisfies for all (v_ext,β, V_int,v_∞) ∈𝒲 𝖻((u_ext,α, U_int,u_∞), (v_ext,β, V_int,v_∞)) = ∫_Ω∖Γ f v_ext d . Moreover, in view of the second equation of (<ref>) we can assert that u_∞ = [u_ext] ∈ H^1/2(Γ) . Using (<ref>), the Cauchy-Schwarz inequality and (<ref>) we find for any ε_0 > 0 and ε≤ε_0 that 1/γf _L^2(Ω∖Γ) u_ext_L^2(Ω∖Γ) ≥| u_ext|_H^1(Ω∖Γ)^2 + 1/ε( α_L^2(Γ)^2 + U_int_L^2(Γ, BL(Ω))^2 + u_∞_L^2(Γ)^2 ) ≥| u_ext|_H^1(Ω∖Γ)^2 + 12ε_0 [u_ext] _L^2(Γ)^2 + 1εα_L^2(Γ)^2 + 1εU_int_L^2(Γ, BL(Ω))^2 + 12ε u_∞_L^2(Γ)^2 ≥min(1, 12ε_0) ‖ u_ext‖^2_H^1_*(Ω∖Γ) + 12ε(α_L^2(Γ)^2 + U_int_L^2(Γ, BL(Ω))^2 + u_∞_L^2(Γ)^2) . Using the Poincaré inequality for functions in H^1_*(Ω∖Γ) v_ext_L^2(Ω∖Γ)≤ C_P v_ext_H^1_*(Ω∖Γ) for all v_ext∈ H^1_*(Ω∖Γ) and using Young's inequality we obtain min(1, 12ε_0)‖ u_ext‖^2_H^1_*(Ω∖Γ) + 12ε(α_L^2(Γ)^2 + U_int_L^2(Γ, BL(Ω))^2 + u_∞_L^2(Γ)^2 ) ≤C_P^2/2γ^2f _L^2(Ω∖Γ)^2 , and (<ref>) follows. Now, we define the sesquilinearform 𝖻_0((u_ext,α, U_int,u_∞), (v_ext,β, V_int,v_∞)) := 𝖻((u_ext,α, U_int,u_∞), (v_ext,β, V_int,v_∞)) + ∫_Γ [u_ext] [v_ext] dσ for which corresponding inf-sup-conditions with the norm ·_𝒲,ε as defined in (<ref>) holds. Hence, the associated operator 𝖡_0: 𝒲→𝒲 is an isomorphism. Moreover, the operator 𝖪: 𝒲→𝒲 defined by (𝖪(u_ext,α, U_int,u_∞), (v_ext,β, V_int,v_∞))_𝒲,ε = -∫_Γ [u_ext] [v_ext] dσ ∀ (v_ext,β, V_int,v_∞) ∈𝒲 with the corresponding inner product (·,·)_𝒲,ε is compact as the trace space H^1/2(Γ) of H^1_⋆(Ω∖Γ) is compactly embedded in L^2(Γ) due to theorem of Rellich-Kondrachov. Hence, the operator 𝖡 = 𝖡_0 + 𝖪 corresponding to the sesquilinear form 𝖻 of the variational formulation (<ref>) is a Fredholm operator of index 0. Hence, by the Fredholm alternative we can conclude from the uniqueness of a solution of (<ref>), which we have shown above, its existence. This completes the proof. The condition J'_L^∞(ℝ)≤1/2 is fulfilled for the piecewise polynomial J(Y) = sgn(Y)/32(|Y|-R_0)^3 ( 3 (|Y| - R_0)^2 - 15 (|Y| - R_0) + 20), R_0 ≤ |Y| < R_0 + 2, sgn(Y)/2, |Y| ≥ R_0 + 2, 0, otherwise, which is in C^2(ℝ) for any R_0 > 0, and it holds J'_L^∞(ℝ) = 15/32. §.§ Coupled formulation with truncated periodicity cells Now, as in Sec. 
Now, as in Sec. <ref>, the near field function shall be truncated at Y = ± R for some R > R_1, as this simplifies the numerical discretization. Let Ω_R() = Ω() ∩ [0,1]× [-R,R] be the truncated periodicity cell for each ∈Γ. The truncated solution will be sought in the space L^2(Γ, BL_0,♯(Ω_R)) := {V_int(,·,·) ∈ BL_0,♯(Ω_R()) for almost all ∈Γ, V_int(·,X,Y)_BL(Ω_R(·))∈ L^2(Γ), V_int(·,·, ± R) = 0 }, which is equipped with the ·_L^2(Γ, BL(Ω))-norm. Then, the coupled variational formulation with the truncated periodicity cells reads: Seek (u_ext,R, α_R, U_int,R, u_∞,R) ∈ H^1_*(Ω∖Γ) × L^2(Γ) × L^2(Γ, BL_0,♯(Ω_R)) × L^2(Γ) such that ∫_Ω∖Γ∇ u_ext,R·∇ v_ext,R dx + 1/ε∫_Γα_R[v_ext,R] dσ = ∫_Ω∖Γ f v_ext,R dx, ∫_Γ -[u_ext,R]v_∞,R + u_∞,R v_∞,R dσ = 0, 1/ε∫_Γ(α_R ∫_∂Ω()V_int,R n_2 dσ_XY + ∫_Ω_R()∇U_int,R·∇V_int,R - u_∞,R J”V_int,R d(X,Y) ) dσ = 0, 1/ε∫_Γ(α_R β_R |Ω_w()| - β_R ∫_∂Ω()U_int,R n_2 dσ_XY - u_∞,Rβ_R ) dσ = 0 for all (v_ext,R, β_R, V_int,R, v_∞,R) ∈ H^1_*(Ω∖Γ) × L^2(Γ) × L^2(Γ, BL_0,♯(Ω_R)) × L^2(Γ). To discuss the well-posedness we introduce the product space 𝒲_R = H^1_*(Ω∖Γ) × L^2(Γ) × L^2(Γ, BL_0,♯(Ω_R)) × L^2(Γ) equipped with the norm defined in (<ref>), and we define the bilinear form 𝖻_R as in (<ref>), where Ω() is replaced by Ω_R(). Let J'_L^∞(ℝ)≤ 1/2. Then there exists a constant γ > 0 independent of ε and R such that for all (u_ext,R, α_R, U_int,R,u_∞,R) ∈𝒲_R it holds sup_(v_ext,R,β_R, V_int,R,v_∞,R) ∈𝒲_R ∖{0}| 𝖻_R((u_ext,R, α_R, U_int,R,u_∞,R),(v_ext,R,β_R, V_int,R,v_∞,R)) |/ (v_ext,R,β_R, V_int,R,v_∞,R) _𝒲,ε ≥ γ |(u_ext,R, α_R, U_int,R,u_∞,R)|_𝒲,ε , and for all (v_ext,R,β_R, V_int,R,v_∞,R) ∈𝒲_R ∖{(0,0,0,0)} it holds sup_(u_ext,R,α_R, U_int,R, u_∞,R) ∈𝒲_R ∖{0}| 𝖻_R((u_ext,R, α_R, U_int,R,u_∞,R),(v_ext,R,β_R, V_int,R,v_∞,R)) | > 0 . First, integrating by parts we find that ∫_∂Ω()U_int,R n_2 dσ_XY = ∫_Ω_R() Y' ∂_Y U_int,R d(X,Y) = ∫_Ω_R()∂_Y U_int,R d(X,Y), -∫_Ω_R() J”U_int,R d(X,Y) = ∫_Ω_R() J' ∂_Y U_int,R d(X,Y), as the boundary terms on [0,1]×{± R}, where U_int,R = 0 since U_int,R∈ BL_0,♯(Ω_R), vanish, and the boundary term ∫_∂Ω() J'U_int,R n_2 dσ_XY vanishes as J' is zero on the wall boundary ∂Ω_w. The remainder of the proof is in analogy to the one of Lemma <ref>. Let J'_L^∞(ℝ)≤ 1/2 and let ε_0 > 0. Then, for all ε∈ (0, ε_0] the variational formulation (<ref>) admits a unique solution and there is a constant C > 0 independent of ε such that | u_ext,R|_H^1(Ω∖Γ) + 1/√(ε)(‖α_R‖_L^2(Γ) + U_int,R_L^2(Γ, BL(Ω)) + ‖ u_∞,R‖_L^2(Γ)) ≤ C ‖ f ‖_L^2(Ω∖Γ) . The proof is in analogy to the one of Theorem <ref> using the inf-sup conditions of Lemma <ref>, where the operator 𝖡_R: 𝒲_R →𝒲_R associated to the bilinear form 𝖻_R is Fredholm of index 0. Now, we are going to estimate the truncation error. For this we denote by U_int,R also the extension of U_int,R by 0 onto Ω() ∖Ω_R() for any ∈Γ. For the solution (u_ext,α,U_int,u_∞)∈ H^1_*(Ω∖Γ) × L^2(Γ) × L^2(Γ, BL_0,♯(Ω)) × L^2(Γ) of (<ref>) and the solution (u_ext,R,α_R, U_int,R,u_∞,R)∈ H^1_*(Ω∖Γ) × L^2(Γ) × L^2(Γ, BL_0,♯(Ω_R)) × L^2(Γ) of (<ref>) it holds | u_ext,R-u_ext|_H^1_*(Ω∖Γ) + 1/√(ε)( ‖α_R - α‖_L^2(Γ) + ‖U_int,R- U_int‖_L^2(Γ, BL(Ω)) + ‖ u_∞,R - u_∞‖_L^2(Γ)) ≤ C/√(ε) exp(-π R), where the constant C > 0 is independent of ε and R.
Using the triangle inequality we can assert that for any (v_ext,R,β_R, V_int,R,v_∞,R) ∈𝒲_R | u_ext,R-u_ext|_H^1_*(Ω∖Γ) + 1/√(ε)( ‖α_R - α‖_L^2(Γ) + ‖U_int,R- U_int‖_L^2(Γ, BL(Ω)) + ‖ u_∞,R - u_∞‖_L^2(Γ)) ≤ | v_ext,R-u_ext|_H^1_*(Ω∖Γ) + | u_ext,R-v_ext,R|_H^1_*(Ω∖Γ) + 1/√(ε)( ‖β_R - α‖_L^2(Γ) + ‖α_R - β_R‖_L^2(Γ) + ‖V_int,R- U_int‖_L^2(Γ, BL(Ω)) + ‖U_int,R- V_int,R‖_L^2(Γ, BL(Ω)) + ‖ v_∞,R - u_∞‖_L^2(Γ) + ‖ u_∞,R - v_∞,R‖_L^2(Γ)). With the inf-sup conditions in Lemma <ref> and Galerkin orthogonality we find that | u_ext,R-v_ext,R|_H^1_*(Ω∖Γ) + 1/√(ε)( ‖α_R - β_R‖_L^2(Γ) + ‖U_int,R- V_int,R‖_L^2(Γ, BL(Ω)) + ‖ u_∞,R - v_∞,R‖_L^2(Γ)) ≤ 1/γ sup_(w_ext,R, δ_R, W_R, w_∞) ∈𝒲_R ∖{0}| 𝖻((u_ext,R-v_ext,R, α_R - β_R, U_int,R- V_int,R, u_∞,R - v_∞,R), (w_ext,R, δ_R, W_R, w_∞))|/ (w_ext,R, δ_R, W_R, w_∞)_𝒲,ε ≤ 1/γ sup_(w_ext,R, δ_R, W_R, w_∞) ∈𝒲_R ∖{0}| 𝖻((u_ext-v_ext,R, α - β_R, U_int- V_int,R, u_∞ - v_∞,R), (w_ext,R, δ_R, W_R, w_∞))|/ (w_ext,R, δ_R, W_R, w_∞)_𝒲,ε . Applying the Cauchy-Schwarz inequality we can assert with a constant C > 0 that | u_ext,R-u_ext|_H^1_*(Ω∖Γ) + 1/√(ε)( ‖α_R - α‖_L^2(Γ) + ‖U_int,R- U_int‖_L^2(Γ, BL(Ω)) + ‖ u_∞,R - u_∞‖_L^2(Γ)) ≤ (1 + C/γ) ( | v_ext,R-u_ext|_H^1_*(Ω∖Γ) + 1/√(ε)( ‖β_R - α‖_L^2(Γ) + ‖V_int,R- U_int‖_L^2(Γ, BL(Ω)) + ‖ v_∞,R - u_∞‖_L^2(Γ))). Now, taking v_ext,R = u_ext, β_R = α and v_∞,R = u_∞ we find that | u_ext,R-u_ext|_H^1_*(Ω∖Γ) + 1/√(ε)( ‖α_R - α‖_L^2(Γ) + ‖U_int,R- U_int‖_L^2(Γ, BL(Ω)) + ‖ u_∞,R - u_∞‖_L^2(Γ)) ≤ (1 + C/γ) 1/√(ε)‖V_int,R- U_int‖_L^2(Γ, BL(Ω)) . Using the interpolant Π_R, defined in (<ref>), we can assert in analogy to the proof of Lemma <ref> for a fixed ∈Γ that ‖U_int(,X,Y)- (Π_R U_int)(,X,Y)‖_BL(Ω) ≤ C |u_∞()| exp(-π R) . Now, taking the L^2(Γ)-norm on both sides we find that ‖U_int - (Π_R U_int)‖_L^2(Γ, BL(Ω)) ≤ C u_∞_L^2(Γ) exp(-π R) . Inserting V_int,R = Π_R U_int in (<ref>), and since u_∞_L^2(Γ) is bounded by assumption, the inequality (<ref>) follows and the proof is complete.
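As a quick numerical cross-check of the remark above, the following sketch evaluates the derivative of the piecewise polynomial J and confirms that max|J'| = 15/32 = 0.46875. It is an illustrative verification only (the choice R_0 = 1 is arbitrary; the maximum of |J'| does not depend on R_0).

```python
import numpy as np

R0 = 1.0  # arbitrary; max|J'| is independent of R0

def J(Y):
    """Piecewise C^2 cut-off from the remark: polynomial transition on
    R0 <= |Y| < R0 + 2, constant +-1/2 beyond, zero inside |Y| < R0."""
    t = np.abs(Y) - R0
    core = t**3 * (3 * t**2 - 15 * t + 20) / 32
    out = np.where(np.abs(Y) >= R0 + 2, 0.5, np.where(t >= 0, core, 0.0))
    return np.sign(Y) * out

Y = np.linspace(-5, 5, 200_001)
Jp = np.gradient(J(Y), Y)                  # central finite differences
print(np.abs(Jp).max(), 15 / 32)           # both ~ 0.46875
```

Analytically, with t = |Y| - R_0 one finds J'(t) = 15 t^2 (t - 2)^2 / 32 on [0, 2], which vanishes at t = 0 and t = 2 (consistent with C^2 regularity) and attains its maximum 15/32 at t = 1.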
http://arxiv.org/abs/2407.03057v1
20240703122527
Towards High Resolution Real-Time Optical Flow Particle Image Velocimetry
[ "Juan Pimienta", "Jean-Luc Aider" ]
physics.flu-dyn
[ "physics.flu-dyn" ]
Towards High Resolution Real-Time Optical Flow Particle Image Velocimetry
Juan Pimienta^1,2 (juan.pimienta@espci.fr), Jean-Luc Aider^1 (jean-luc.aider@espci.fr)
^1 Laboratoire PMMH, ESPCI Paris - PSL, CNRS, 7-9 quai Saint Bernard, Paris, 75005, France
^2 Photon Lines, 10 avenue des Touches, Pacé, 35740, France

Particle Image Velocimetry (PIV) is the most commonly used optical technique for measuring 2D velocity fields. However, improving the spatial resolution of instantaneous velocity fields and having access to the velocity field in real time remain challenging. Optical Flow velocimetry makes it possible to meet these challenges. In this study, we show that it is possible to access dense velocity fields (1 vector per pixel) in real time using an appropriate seeding concentration adapted to optical flow algorithms rather than to cross-correlation PIV algorithms. The influence of the concentration on the quality of the velocity fields is demonstrated using synthetic images generated for a Rankine vortex. We thus demonstrate that it is possible to precisely measure small vortices using optical flow, provided that the seeding is suitable. The notion of "Active Pixels" is also introduced in order to define a seeding optimization criterion adapted to experimental measurements. This criterion is finally successfully applied to the flow downstream of a cylinder, leading to a spatial resolution down to one vector per pixel.

§ INTRODUCTION Particle Image Velocimetry (PIV) is a non-intrusive optical technique that allows the measurement of the two components (2C) of a velocity field in a plane (2D) defined by a laser light sheet traversing a fluid flow seeded with reflecting particles <cit.>. The basic principle consists in computing the displacement of the particles between two successive snapshots using, in standard PIV post-processing, a FFT (Fast Fourier Transform) cross-correlation (CC) algorithm. CC-PIV is the standard algorithm currently used in most experiments, despite being very time consuming, computationally demanding and limited in terms of spatial resolution or real-time measurements. To optimize the quality of the velocity fields, it is important to choose the proper experimental parameters adapted to the CC algorithms, like the time interval between two snapshots, related to a maximum particle displacement of a few pixels inside an Interrogation Window (IW), which is a key element for the spatial resolution of the PIV field <cit.>. However, the optimal parameters for other types of algorithms, like Optical Flow (OF), may not be the same as the ones for a CC algorithm. Indeed, OF-PIV offers a different approach to estimate the velocity fields through the motion of particles. Coming from the Machine Vision community, Optical Flow can be understood as the apparent velocities arising from changes in intensity patterns in a scene <cit.>. From this point of view, the particles are only used as a way to create a textured image that is modified by the flow between two time steps. The general idea to estimate the displacements from intensity changes is then based on the assumption that intensity levels are kept constant between successive frames and that displacements are small, of the order of 1 pixel. Determining displacement vectors from intensity variations is an under-constrained problem. This problem was solved mainly in two ways.
Either a smoothness constraint is imposed on the system (Horn-Schunck algorithm) <cit.>, which produces a global solution, or it is assumed that the displacements in the vicinity of a kernel centered on a pixel are very close to each other (Lucas-Kanade algorithm) <cit.>. Later, the Lucas-Kanade OF algorithm was modified by adding an iterative scheme (Folki) <cit.> and then adapted to perform PIV calculations <cit.>. One of the considerable advantages of OF-PIV algorithms is that they can be easily parallelized, particularly on GPUs (Graphics Processing Units), whose architecture makes them an ideal partner for OF algorithms. This is why the Folki algorithm has been optimized to operate in real time <cit.>, to the point of being used as a sensor in closed-loop flow control experiments <cit.>. In addition to the significant gain in calculation time <cit.>, OF-PIV algorithms should also lead to dense velocity fields, with a spatial resolution which should reach 1 vector per pixel. Although OF-PIV has indeed been shown to resolve smaller scales than CC-PIV in turbulent spectra <cit.>, the resolution of one vector per pixel has not been achieved. In recent years, the entire acquisition chain for real-time PIV measurements has been optimized (laser, high-speed streaming cameras, dedicated computer, optimization of the algorithm). It leads to a high-resolution, high-frequency real-time optical flow PIV (RT-OFPIV) system. Being able to run RT-OFPIV and calculate quantities derived from the instantaneous velocity field in real time has made possible almost unlimited observations, analysis or recording of a flow <cit.>, leading to the possibility to run machine learning or neural network analyses based on large OF-PIV databases <cit.>. This also leads to new experimental challenges. For example, various studies and optimizations have been carried out to improve the experimental conditions, the selection of the algorithm parameters or the selection of appropriate equipment. The present study is divided into two main sections. The first part consists of a benchmark of OF-PIV based on synthetic particle images generated for a given flow (Rankine vortex). The influence of various image parameters (particle concentration, displacement amplitude) on the spatial resolution and on the velocity estimation at various scales is studied. The second part is focused on the optimization of the experimental parameters in order to improve the quality of the RT-OFPIV experimental measurements on two test cases: a uniform vortex-free flow and a massively separated flow downstream of a cylinder.
§ EXPERIMENTAL SET UP §.§ Hydrodynamic channel Experiments have been carried out in a hydrodynamic channel in which the flow is driven by gravity (Fig. <ref>a), using a constant-level water tank to ensure a pressure differential of Δ P = 0.3 bar. The maximum free-stream velocity is U_∞ = 22 cm.s^-1. The flow is stabilized by divergent and convergent sections separated by honeycombs, leading to a turbulence intensity lower than 1%. A NACA 0020 profile is used to smoothly start a Blasius boundary layer over the flat plate, upstream of the separated flow. The test section is 80 cm long with a rectangular cross-section w = 15 cm wide and H = 7.7 cm high (Fig. <ref>b). To study a wake flow, a vertical cylinder is mounted over the flat plate. It has a diameter of D = 1 cm and is placed at x = 40 cm from the leading edge of the plate.
The maximum Reynolds number based on the diameter of the cylinder D is Re_D = U_∞ D/ν ≈ 2200 for a water temperature of 21°C (ν being the kinematic viscosity of water). §.§ OF-PIV setup To carry out the PIV measurements, the water was seeded with light-reflecting, neutrally buoyant polyamide micro-particles of 20 μm in diameter. The flow was illuminated by a laser sheet generated by a laser beam going through a Powell cylindrical lens. A Coherent™ continuous Nd:YAG laser (wavelength of 532 nm for a power output of 2 W) was used. The measurements were carried out in a horizontal plane at the mid-height of the channel, covering the free-stream region upstream of the cylinder as well as the region downstream of the cylinder (Fig. <ref>). To record the instantaneous snapshots of the seeded flow, a Mikrotron™ 21CXP12 camera was used. It allows the acquisition of 21 Mpx images with an acquisition frequency up to 240 Hz, which can be streamed toward a dedicated workstation through a CoaXPress card. A computer has been designed and built to optimize its performance for real-time acquisition. The system is based on an AMD Ryzen Threadripper PRO 3955WX processor with 16 cores running at a frequency of 3.90 GHz with 128 GB of RAM. Two powerful, last-generation GPUs (RTX4070) are supported on a custom open chassis that allows for better access and easier connection/removal of new GPUs.
§ REAL-TIME OPTICAL FLOW PIV The optical flow algorithm is based upon the assumption of intensity conservation between successive images, which can be expressed as: dI/dt = ∂I/∂t + u ∂I/∂x + v ∂I/∂y = 0, where I(x,y,t) is the light intensity at every pixel of the images at a given time t and (u,v) is the local displacement rate. Using this assumption (Eq. <ref>) and an additional constraining condition, it is theoretically possible to estimate the flow displacement between two successive snapshots at each pixel contained in the images. The constraint imposed by the Lucas-Kanade <cit.> method is to assume that neighboring pixels behave in a similar way. This is the reason why FOLKI <cit.> can be considered as a compromise between pure optical flow methods and window-based methods. This is because it uses a pixel-centered kernel, which defines one of the important parameters of the algorithm called the kernel radius (KR). It defines the size of the areas where the intensity gradients will be compared. This process is applied to each pixel, leading to a resolution of one vector per pixel. This is very different from the interrogation windows (IW) used in standard CC-PIV, which define a minimum area containing a few particles that move inside the IW between two time steps. Optical flow codes are limited to estimating small displacements, of the order of 1 pixel. However, this problem is solved by the implementation of a Gaussian pyramid scheme, which allows successive reductions of the size of the image, and therefore a subsampling of large displacements. This defines another important parameter of the algorithm, called the pyramid sublevels. At each new pyramid sublevel, the number of pixels in each direction is halved. The last important parameter is the number of Gauss-Newton iterations that the code must perform to give a solution. An extra pre-processing step has been added before calculating the velocity fields, which consists in an intensity normalization using a pixel-centered kernel all across the image. The whole process of velocity field calculation consists in six main steps: * Normalization of the intensity of the image.
* Image sub-sampling with Gaussian pyramids. * Estimation of the displacements at the kernel scale. * Projection of the velocity fields, up-sampling to the image size. * Iteration through the user-defined number of iterations. * Velocity field estimation. The process of estimating displacements, particularly at the kernel scale, is illustrated in Fig. <ref>. It starts with two subsamples of size 12 × 12 pixels, for two successive times t and t' = t + δt. In this example, two neighboring pixels at positions (x, y) = (6, 6) (red pixel) and (x, y) = (7, 6) (green pixel) are considered with a kernel radius KR = 1 pixel. The kernel associated with each pixel can be seen as a lighter shade of the pixel's color (left side of Fig. <ref>). It is at this scale that the spatial intensity gradients are calculated; the difference is then minimized using the iterative Gauss-Newton scheme to find the displacement, leading to the velocity vectors for each of the evaluated positions. This process is performed for each pixel of the successive snapshots, yielding a displacement vector per pixel. In the following, the tests will be carried out with EyePIV™, a dedicated OF-PIV plugin for real-time measurements developed in collaboration between the two teams (Laboratoire PMMH and Photon Lines).
§ SYNTHETIC IMAGES GENERATION To assess the quality of the results delivered by the OF-PIV, a benchmark with synthetic PIV images was carried out. Synthetic PIV images were generated using a PIV-image-generator developed by <cit.>. This Matlab-based software allows the synthetic creation of PIV images, using a well-known analytical expression of specific flows, such as a shear flow or a Rankine vortex, which is used to move a set of randomly distributed particles between snapshots. In the following, the accuracy and precision of the OF-PIV measurements will be studied using different image parameters for a given flow: a single Rankine vortex. The Rankine vortex is defined using the analytical expressions summarized in Tab. <ref>, as proposed by <cit.>. This is a very interesting benchmark due to the spatially localized high velocity gradients present in this type of flow, as well as the possibility of imposing important and critical parameters for PIV measurements, such as the vortex radius and the amplitude of the rotation speed. These features were identified as challenging for CC-PIV <cit.>. The generation of synthetic PIV images requires the definition of a set of parameters. Some parameters remained constant for generating all PIV images, such as the image size (1024 × 1024 pixels^2), the radius of the simulated particles (r_p = 1.5 pixels), the thickness of the laser sheet (0.2 mm) and the standard deviation of the out-of-plane motion (σ = 0.025); no noise was added. The parameters studied were the maximum expected displacement, the particle concentration, expressed in particles per interrogation window [part/IW], and the core radius of the Rankine vortex (R). The IW size used to generate the images was IW = 16 px. This definition of the particle concentration is convenient for CC-PIV but not really well suited for OF-PIV, and will be discussed in more detail in section <ref>. Table <ref> lists the dynamic parameters used to create the images. For reasons of statistical relevance, ten pairs of images were created for each of the possible combinations of dynamic parameters. As a result, there were 216 sets of 10 image pairs to perform the benchmark.
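Since Tab. <ref> is not reproduced here, the displacement field used in the benchmark can be sketched with the standard Rankine parameterization, assuming the vortex is described only by its core radius R and its maximum tangential displacement D (both in pixels). The function below is an illustrative reimplementation of this analytical field, not the code of the PIV-image-generator itself.

```python
import numpy as np

def rankine_displacement(nx=1024, ny=1024, R=75.0, D=8.0):
    """Displacement field (in pixels) of a Rankine vortex centered in an
    nx x ny image: solid-body rotation inside the core radius R, potential
    (1/r) decay outside, with the peak displacement D reached at r = R."""
    y, x = np.mgrid[0:ny, 0:nx].astype(float)
    dx, dy = x - nx / 2, y - ny / 2
    r = np.hypot(dx, dy)
    r[r == 0] = 1e-12                      # avoid division by zero at the center
    utheta = np.where(r <= R, D * r / R,   # linear rise inside the core
                      D * R / r)           # 1/r decay outside
    # tangential unit vector (-dy, dx)/r gives a counter-clockwise rotation
    return -utheta * dy / r, utheta * dx / r

u, v = rankine_displacement(R=75.0, D=8.0)
print(np.hypot(u, v).max())  # ~= D, reached on the core radius
```

The ratio D/R introduced later in the text is then simply the ratio of the two parameters of this function.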
The OF-PIV parameter space has been kept reasonably constrained, since it has been empirically found that there is a "convergence zone", meaning that any increase beyond this region only leads to a deterioration of the results, particularly when dealing with high-gradient flows. The OF-PIV parameters are reported in Table <ref> as minimum, maximum and increment, leading to 288 unique combinations. In general, for each image parameter spanning the OF-PIV parameter combinations, there were 2880 data entry points. This represents 207 360 data entry points from 414 720 image pairs for the entire study. The velocity fields corresponding to various synthetic Rankine vortices obtained for different parameters are shown in Fig. <ref>. The core radius of the Rankine vortex increases from left to right, while the maximum magnitude of the displacement increases from top to bottom. As the size of the images remains constant, this means that OF-PIV will be tested on large vortices as well as on very small vortices, for low and high rotations, which are the key characteristics needed to correctly measure, for example, turbulent shear flows.
§ INFLUENCE OF CONCENTRATION ON THE QUALITY AND SPATIAL RESOLUTION OF OF-PIV §.§ Error estimation Two error estimation criteria were used in this study to quantify the quality of the OF-PIV calculations: the absolute displacement error and a comparison of the displacement profiles across the Rankine vortex. The absolute displacement error Err is defined as: Err(x,y) = √((u_th - u_OF)^2 + (v_th - v_OF)^2), where the subscripts th and OF stand respectively for the theoretical displacement and the OF-PIV computed result, and u and v are respectively the velocities along the x and y directions. The comparison of displacement profiles was performed by extracting all velocity profiles across the vortex core. This process first involves subsampling the image to better target the vortex in the center of the image. Second, a one-pixel velocity profile U(x,y) = √(u(x,y)^2 + v(x,y)^2) is selected in the middle of the image (y/2) and is stored in an array R. Third, the downsampled image is rotated by a specific angle θ (rot = f(U(x,y),θ)) to achieve a rotation equivalent to 1 pixel (Fig. <ref>a). The rotation angle changes depending on the radius of the vortex core. Fourth, a new one-pixel row at y/2 is selected in the rotated field and stored in the R(x,y) array. Steps 3 and 4 are repeated until a rotation of 180° is obtained. Fig. <ref>b shows an example of all the velocity profiles plotted against the angle θ for a theoretical Rankine vortex, which is why the same profile is obtained for each angle θ. This process is carried out both for the theoretical fields and for the OF-PIV fields. Then the two unfolded fields are compared. A criterion has been defined to store the error values as a single scalar value: the criterion Ψ is defined as the sum of the absolute differences between the unfolded theoretical displacement field and the unfolded OF-PIV results: Ψ = Σ_i,j |R_th(i,j) - R_OFPIV(i,j)|. For clarity, the criterion is normalized by the area of the unfolded vortex. Fig. <ref> shows the comparison between the theoretical and OF-PIV spatially averaged (along the θ direction) displacement profiles. In this particular example a very good agreement can be observed, both for the amplitude and for the location of the velocity maxima. It will be further detailed and discussed in the Results section.
§.§ Influence of particle concentration on the quality of OF-PIV computations
Fig. <ref> shows a scatter plot summarizing the error analysis as a function of the different particle concentrations, obtained for the case of the Rankine vortex with a core radius of r = 75 pixels and for the different maximum displacements. It is clearly seen that, for a given set of OF-PIV parameters, the higher the concentration of particles in the images, the lower the error levels. More precisely, Fig. <ref>a) is a scatter plot of the spatially averaged displacement error over the entire image <Err> and of the unfolded profile criterion Ψ through the different maximum expected displacements and particle concentrations. Fig. <ref>b) is a zoom on a specific region of interest (ROI) of Fig. <ref>a) where the results show minimal error with both criteria. It corresponds to Ψ values less than 1. Since the criterion is normalized by the area of the unfolded vortex, this value can be interpreted as an average difference of 1 pixel between the theoretical and OF-PIV displacements. Fig. <ref>c) shows the probability density function (PDF) of the Ψ criterion inside the ROI for the three concentrations. Here it can be clearly seen that, for all the different displacement magnitudes, the concentration plays a major role in the error margins. To go further, the configurations minimizing the Ψ error criterion were sought. Table <ref> presents the sets of parameters leading to the best possible OF-PIV results for the present study. Fig. <ref> shows the contours of the velocity amplitude associated with all the parameters presented in Table <ref>. It can be seen that the majority of the best results come from images with a higher particle concentration, as was highlighted previously. For cases with concentrations below 15 p/IW, although Fig. <ref> and Tab. <ref> present the best results for each image configuration, the limits of the code may have been exceeded. Indeed, there seems to be a direct link between the size of the maximum displacement D and the size of the structure that can be resolved (in this case, the radius R of the vortex core). In order to quantify this effect, the ratio between the maximum displacement and the radius of the vortex core, D/R, is introduced. It can be considered as an indicator of the displacement gradient. Larger D/R values correspond to small, rapidly rotating vortices while lower D/R values correspond to large, slowly rotating vortices. The interaction between these two parameters can be clearly seen in Fig. <ref>, where there appears to be a limit close to D/R ≈ 0.7 (sub-Fig. <ref> g, m, n, s and t) for the resolution capacity of the OF-PIV algorithm. As expected, the smaller, faster vortices are the most difficult to measure. Another interesting observation that can be made from both Table <ref> and Fig. <ref> is that the choice of KR does not seem to be directly linked to the size of the displacement, as previously thought, but rather to the gradient of the displacement. This is very well illustrated in the case of the vortex with r = 100 px and a displacement magnitude of 32 px, where the KR is only 2 px. Moreover, if the results are ranked according to the D/R ratio, another, hidden trend appears, pointing towards smaller KR for higher D/R and vice versa. Since D/R can be interpreted as a proxy for the displacement gradient, this means that the larger the displacement gradient, the smaller the kernel.
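The unfolding procedure and the Ψ criterion used above can be summarized with the following sketch. It is an illustrative reconstruction of the steps listed in the Error estimation section (crop around the core, extract the central row, rotate by an angle equivalent to a 1-pixel arc at the core radius, accumulate, compare), not the authors' code; the crop size and the interpolation order are assumptions.

```python
import numpy as np
from scipy.ndimage import rotate

def unfold(speed, R, crop=256, order=1):
    """Unfold a 2D speed field |U|(x, y) around its center: rotate in
    increments equivalent to a 1-pixel arc at the core radius R and
    stack the central row at each angle, up to 180 degrees."""
    ny, nx = speed.shape
    sub = speed[(ny - crop) // 2:(ny + crop) // 2,
                (nx - crop) // 2:(nx + crop) // 2]
    dtheta = np.degrees(1.0 / R)           # ~1-pixel arc at radius R
    angles = np.arange(0.0, 180.0, dtheta)
    rows = [rotate(sub, a, reshape=False, order=order)[crop // 2, :]
            for a in angles]
    return np.asarray(rows)                # shape: (n_angles, crop)

def psi(speed_th, speed_of, R):
    """Unfolded-profile criterion: sum of absolute differences between
    theory and OF-PIV, normalized by the area of the unfolded vortex."""
    R_th, R_of = unfold(speed_th, R), unfold(speed_of, R)
    return np.abs(R_th - R_of).sum() / R_th.size
```

With this normalization, Ψ ≈ 1 indeed corresponds to an average difference of about 1 pixel between the two unfolded fields, as stated in the text.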
The latter can be explained by the nature of the algorithm: since larger kernel sizes tend to have a smoothing effect on the OF results, a smaller size helps to better detect very localized displacement variations, i.e. high gradients. This trend can be observed in Fig. <ref> b, i, p, w), which all have D/R = 0.32 and very similar OF parameters, regardless of the size of the displacement or of the radius of the vortex core. §.§ Resolution of small scales An important question is the ability of OF-PIV to accurately measure small structures with relatively large displacements. Fig. <ref> and Fig. <ref> present a comparison of the best results obtained for two Rankine vortices, one with a core radius r = 12 pixels and the other with a core radius r = 25 pixels, with a maximum displacement of 8 pixels for both. Both cases are clear examples of a locally concentrated displacement gradient. Both figures show a zoom on the theoretical and OF-PIV vortex cores (sub-figures a and b respectively). Sub-figures c show a comparison of a velocity profile extracted from a 1-pixel line halfway up the image (dashed green line). In Fig. <ref> and <ref>, one can appreciate a very precise detection of the structures, considering their small sizes. This is particularly true for the r = 12 pixel vortex shown in Fig. <ref>. Even if the global maxima are not perfectly recovered (Fig. <ref>c), the differences on the displacement peaks are within 5% of the theoretical value. The detection of the positions of the maxima and the resolution of the velocity gradient are good, even when comparing velocity profiles on a single 1-pixel-thick line. The same can be said for the larger vortex (r = 25 pixels) shown in Fig. <ref>. OF-PIV makes it possible to clearly identify the structure (Fig. <ref>b) and its corresponding global maxima and minima (Fig. <ref>c). These two cases support the assertion of a limit for OF-PIV based on the gradient of the displacement rather than on its magnitude, since the only change between these two cases is the size of the structure. Fig. <ref> presents the absolute displacement error Err obtained for the best OF-PIV results for each of the image configurations. Here, the influence of the displacement gradient D/R becomes even more obvious, since the error decreases proportionally to the decrease of the gradient towards the right of the figure. Furthermore, it can be clearly observed that the error levels decrease with D/R. For example, for Fig. <ref> b, i, p, w), which correspond to D/R = 0.32, one can see that the error behaves very similarly in localization and in magnitude. These results are also very encouraging since they show that OF-PIV can give results with sub-pixel error margins if good conditions are met, both in terms of the choice of the OF-PIV parameters and of the input image settings.
§ INFLUENCE OF PARTICLE SEEDING ON EXPERIMENTAL MEASUREMENTS Given the nature of the OF algorithm, it is important to study in depth the impact of particle seeding on the quality of the results. This question is even more relevant when performing RT-OFPIV measurements for hours, looking for low-frequency signatures in the fluctuations of scalar quantities derived from the instantaneous velocity fields. Indeed, the question of seeding becomes crucial due to the sedimentation of the particles over time. To avoid a loss of information in the velocity fields over time, it becomes necessary to add particles into the closed-loop hydrodynamic channel.
Unfortunately, a quantitative criterion was missing to know how many particles should be injected and when. The first observation is that the standard criteria used to adjust the concentration of particles in CC-PIV (particles per window or particles per pixel) are not adapted to OF-PIV. The appropriate criterion for OF-PIV should be related to the number of pixels containing information relating to intensity variations. This notion must be taken into account because the OF algorithm works optimally when fed with a very textured image, that is to say when each pixel sees an intensity variation that can be linked to a movement. Unfortunately, one can see that many pixels are black in the standard snapshots (Fig. <ref>a) obtained with a concentration C_0 adapted to CC-PIV. Black pixels do not provide any information to the OF algorithm and are useless for calculating the instantaneous velocity fields. It becomes necessary to define new quantitative criteria to optimize the particle seeding in order to obtain the best possible textured images, thus leading to better instantaneous velocity fields calculated with OF. The ultimate goal is for each pixel to contain relevant information, leading to a maximum spatial resolution of 1 vector per pixel. The first step consisted in increasing the concentration of particles by successive injections of particles inside the volume of water. The particle concentration is expressed as mass of particles per volume of water (g/L). The first injection (C_0 = 9.21 × 10^-3 g/L) corresponds to the standard concentration used for PIV measurements. Then, the same amount of particles C_0 was injected every 20 min, while image pairs were captured every 5 s. The measurements were carried out in a horizontal plane, in the freestream, vortex-free region, upstream of the cylinder (Fig. <ref>). One can see in Fig. <ref> the evolution of the concentration in small windows (256×256 pixels^2) taken at the center of the raw snapshots. As expected, as the concentration increases, the number of black pixels decreases. In order to quantify the number of pixels containing information, the background noise of the camera sensor is removed (4 gray levels with a dynamic range of 8 bits). Then, the number of active pixels N_act that actually detect intensity changes above the noise level is calculated. The ratio R_act = N_act / N_pix of the number of active pixels (i.e. pixels carrying information) to the total number of pixels N_pix of the sensor gives the percentage of camera pixels actually containing information useful to the OF. First, R_act is calculated for the synthetic PIV images used in the previous section. Fig. <ref> shows examples of particle distributions in 16×16 pixels^2 interrogation windows obtained for the three concentrations used to create the images. It can be seen that by increasing the particle concentration from 5 p/IW to 15 p/IW, the number of active pixels is also increased, from R_act = 36% to R_act = 73%. Fig. <ref> shows the evolution of R_act as a function of time for increasing particle concentration. Each red vertical line corresponds to a new injection of particles inside the hydrodynamic channel, leading to an increase of the concentration of particles seen by the camera sensor. The first observation is that for the initial concentration, commonly used for CC-PIV, only 6% of the sensor pixels contain information. This is clearly insufficient for an OF algorithm to calculate 1 vector per pixel.
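The active-pixel ratio defined above is straightforward to compute; the sketch below follows the procedure described in the text (subtract the 4-gray-level noise floor of the 8-bit sensor, then count the pixels left above it). The threshold value is taken from the text; the synthetic example image is an illustrative assumption.

```python
import numpy as np

def active_pixel_ratio(img, noise_floor=4):
    """R_act = N_act / N_pix: fraction of pixels whose 8-bit intensity
    exceeds the camera noise floor (4 gray levels in this setup)."""
    img = np.asarray(img)
    return np.count_nonzero(img > noise_floor) / img.size

# Example on a synthetic snapshot: a mostly dark image is mostly inactive.
snap = np.random.poisson(0.5, (1440, 5120)).clip(0, 255).astype(np.uint8)
print(f"R_act = {100 * active_pixel_ratio(snap):.1f} %")
```

Applied to each incoming snapshot, this single number can serve as the quantitative re-seeding criterion the text calls for: inject particles whenever R_act drops below a chosen target.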
One can see that after 5 injections, leading to a total particle concentration of around 0.05 g/L (10 × C_0), the ratio of active pixels is greatly increased and almost reaches R_act = 80%. This strong evolution of the proportion of active pixels should impact the quality of the resulting instantaneous velocity fields calculated with the OF algorithm. Fig. <ref> shows the effect of the particle injections on the resulting velocity fields in the free-stream region. It can be seen that the velocity field becomes denser and more homogeneous as the particle concentration increases. For the maximum concentration, the velocity field becomes much smoother and more uniform. No smoothing was applied to the instantaneous velocity fields. As stated previously, the kernel radius is a key parameter for OF. When dealing with suboptimal conditions for the image texture (low particle concentration), a larger kernel radius is needed to resolve the velocity estimate on each pixel of a pair of images. This has the side effect of losing the smaller structures, because it smooths the flow field. Using the appropriate concentration, one should obtain valuable information close to pixel resolution using a smaller kernel radius.
§ INFLUENCE OF THE CONCENTRATION OF PARTICLES ON THE SPATIAL RESOLUTION OF THE VELOCITY FIELDS For both upcoming subsections the background camera noise was suppressed, as in the previous section, to enhance the observation of the impact of the particle seeding. §.§ Freestream flow In order to evaluate the impact of the seeding on the quality of the resulting velocity fields, images were taken in the freestream region of the tunnel, upstream of the cylinder. The objective was to calculate the velocity fields of a homogeneous flow, in a region where there are no complex velocity gradients or other 3D phenomena difficult to estimate. These are the same images that were used to measure the amount of active pixels in the previous section. Fig. <ref>a) presents a 2D plot of the evolution of the instantaneous streamwise velocity profile, extracted from a line of pixels in the center of the image, for increasing particle concentration. One can see that a minimum concentration is necessary to homogenize the velocity profiles and obtain a good estimate of the velocity. This is confirmed by Fig. <ref>b), which shows the evolution of the relative error on the estimation of the streamwise velocity, RE_u = |U_∞ - u|/U_∞ × 100. The error is minimized for the largest particle concentration. A more in-depth analysis, taking into account the OF parameters, is still necessary. Nevertheless, it is clear that for such a uniform flow, increasing the particle seeding in the water tunnel leads to a better resolved velocity field and reduces noise and errors. §.§ Flow past a cylinder The same measurements were carried out downstream of a cylinder, in a horizontal plane at the mid-height of the cylinder (Fig. <ref>), in order to estimate the influence of the seeding concentration on the quality of the instantaneous velocity fields. The objective was to compare the spatial resolution of the velocity fields for a flow containing numerous eddies of various scales. The measurements were carried out at a Reynolds number Re_D = D U_∞/ν = 1443, which corresponds to the subcritical regime, but with high velocity fluctuations at different scales. The images have a size of X = 5120, Y = 1440 pixels, which results in images of 7.37 Mpx.
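As a small worked example of the freestream diagnostic above, the relative error can be computed directly from an instantaneous field; the choice of the central pixel row and the storage of the field as a 2D array are illustrative assumptions.

```python
import numpy as np

def relative_error_streamwise(u_field, u_inf=0.22):
    """RE_u = |U_inf - u| / U_inf * 100 (in %), evaluated on the
    streamwise velocity (m/s) along the central pixel row of the field."""
    profile = u_field[u_field.shape[0] // 2, :]
    return 100.0 * np.abs(u_inf - profile) / u_inf
```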
Two configurations are compared: one with the standard particle concentration (C_0), the other with a higher concentration (5C_0). An instantaneous velocity magnitude field obtained with the low concentration is shown in Fig. <ref>a). It can be seen that, if no smoothing is applied, some information is lost, leading to holes or errors in the velocity field. The situation is very different when the concentration increases: there are no holes in the field, which appears smooth and exhibits fine details. The velocity magnitude profile along the center line downstream of the cylinder (green line in Fig. <ref>) is plotted for the low and high concentrations (Fig. <ref>). This confirms that when the concentration is too low, many pixels do not contain information, which leads to many zero-velocity pixels. On the other hand, it is clear that the resolution of the velocity field is improved with the higher concentration. This leads to smoother velocity magnitude profiles that clearly contain interesting velocity fluctuations, without any smoothing or interpolation. Here, the resulting spatial resolution of the velocity vectors is 47.62 μm, or about 400 vectors per mm^2. It is important to note that both cases were treated with the same OF parameters (normalization radius of 3 pixels, kernel radius of 6 pixels, 4 pyramid sublevels and 3 iterations).
§ CONCLUSION The objectives of the present study were to evaluate the quality of OF-PIV results through a benchmark with synthetic images and to optimize the experimental conditions to adapt them to the requirements of OF-PIV. More specifically, we were interested in the influence of the image and OF-PIV parameters on the results, as well as in the influence of the particle seeding on the spatial resolution. First, an error analysis was carried out to evaluate and quantify the quality of the OF-PIV results. The study was carried out using synthetically generated PIV images of a Rankine vortex. The case of the Rankine vortex was analyzed for different displacement magnitudes and vortex core sizes. The concentration of particles in the images was found to be of primary importance to improve the OF-PIV results. A trend associating the optimal kernel radius with the magnitude of the displacement gradient was found. OF-PIV results can be within a sub-pixel margin of error when the parameters of the algorithm and of the input images are chosen correctly. The influence of the particle seeding density on the quality and resolution of OF-PIV was also studied. For this purpose, the notion of active pixels was introduced as a proxy for image quality. The objective was to search for a particle seeding criterion different from the criteria used for CC-PIV and adapted to OF. It has been shown that it is indeed possible to increase the number of actually useful active pixels in a given pair of images. Using a standard seeding, well suited to CC-PIV, less than 10% of the camera sensor was used. Increasing the particle concentration led to more than 80% of active pixels. Finally, it was shown that thanks to this optimization, it was possible to increase the spatial resolution, leading to much better instantaneous velocity fields. Measurements of the flow past a cylinder were used as a reference to evaluate the quality of the instantaneous velocity fields. The 2D velocity fields obtained with the lower concentration exhibit holes and errors that disappear when the particle concentration increases.
Additionally, fine details associated with small structures can be observed, showing that with the appropriate parameters, OF-PIV measurements can effectively lead to dense velocity fields, with 1 vector per pixel. More systematic experiments are still needed to generalize these results; in particular, whether the small scales of turbulent flows can be fully resolved in RT-OFPIV measurements could not be tested in the present study. § ACKNOWLEDGEMENTS This study benefits from the support of the ANRT (French National Agency for Technological Research), which co-finances the CIFRE thesis of J. Pimienta. The research also benefits from an industrial collaboration with Photon Lines Inc.
http://arxiv.org/abs/2407.03166v1
20240703144044
Neutral Atomic Hydrogen Surveys: past, present and future
[ "F. M. Maccagni", "W. J. G. de Blok" ]
astro-ph.GA
[ "astro-ph.GA" ]
§ ABSTRACT Neutral atomic hydrogen (HI) observations are fundamental to understand the dynamics of galaxies, their assembly, the fuelling of their star formation and environmental interactions. HI studies have so far been limited by the capabilities of single-dish radio telescopes or synthesis arrays to either small samples or low resolutions and sensitivities. Now, the Square Kilometer Array precursors and pathfinders are providing a novel view of the HI in and around galaxies, allowing wide-field, high-resolution, deep surveys of nearby galaxies. We give an overview of past, current and future HI surveys, consistently comparing their column density and spatial resolutions and highlighting their main scientific key goals and results.
§ NEUTRAL HYDROGEN GAS IN GALAXIES Over cosmic time, star formation (SF) is one of the main drivers of galaxy evolution. It creates new stars, while simultaneously consuming the cold gas fuel reservoir. The availability of the latter is determined by the delicate balance between the amount of gas consumed by star formation, the amount of material accreted from the inter-galactic medium (IGM) and the amount of gas expelled from the galaxy. In spiral galaxies such as our own, to sustain SF over billions of years the gas reservoirs must be replenished from the IGM through cosmic time. In elliptical galaxies SF has been rapidly quenched and the gas reservoirs have been depleted and kept so for billions of years. Several phenomena determine the fate of a galaxy. In the environment, merger-driven tidal motions between galaxies or hydrodynamic interactions can drive gas into galaxies or strip it from them. Cold gas infall from the inter-galactic medium and its collapse into dense clouds triggers star formation, whose supernovae and stellar winds may heat and expel the gas (the so-called `baryon cycle'). Cold gas may be funnelled down to the central regions, triggering nuclear activity, whose energetic feedback can also eject gas from the galaxy and contribute to quenching its star formation. Hydrogen is the most abundant element in the Universe and is the main component of the gas fuel reservoir from which galaxies are replenished. Sufficiently cold hydrogen can be observed at 21 cm and can therefore potentially be used to trace these gas reservoirs. This neutral atomic hydrogen (HI) also forms the main component of the gas inside disk and dwarf galaxies.
A complete knowledge of the HI content of galaxies (i.e. the HI mass function between ∼10^6 and ∼10^11 M_⊙) and of how this correlates with their other gas and stellar properties is crucial to understand how galaxies sustain their star formation. HI gas typically extends beyond the stellar disk; this has made it possible to probe the rotation curves of galaxies. For many galaxies, these are flat, which implies a divergence between the dynamics expected from the visible matter and the total matter distribution <cit.>. HI is thus an exquisite tracer to study the Dark Matter content of galaxies and to identify `exotic' sources which may deviate from the typical assembly expected in the ΛCDM cosmological model, e.g. <cit.>. HI emits through its hyper-fine transition at 21-cm wavelengths and is observable in the nearby Universe by radio telescopes in the L-band. For the reasons described above, HI surveys have always been key science projects of both single-dish and radio synthesis telescopes. The development of new facilities and sensitive receivers has pushed the investigation of the cold gas in galaxies to previously unexplored low column density sensitivities (≲ 10^19 cm^-2), high angular resolutions (10-30”) and wide areas of the sky (≳ 20 deg^2), thus allowing us to obtain a detailed view of galaxy dynamics and assembly over large statistical samples of galaxies. In the following sections, we compare the areas, sensitivities and resolutions achieved by past and on-going HI surveys and show the potential of the upcoming Square Kilometer Array (SKA <cit.>) and Deep Synoptic Array (DSA-2000) <cit.> telescopes.
§ PAST SURVEYS Neutral hydrogen studies have always been limited by the capacities of the available radio telescopes. Single-dish telescopes (e.g. Parkes, Arecibo, the Green Bank Telescope) guarantee a wide field of view and a high surface brightness sensitivity but have arcminute-scale angular resolution, while radio synthesis telescopes (e.g. ATCA, VLA, WSRT) enable arcsecond-scale resolution observations over smaller fields of view (≲ 1 deg^2). For the latter, the incomplete coverage of the uv-plane due to the limited number of antennas limits the surface brightness sensitivity to the faint extended HI emission in the outskirts of galaxies and in the IGM. In Figure <ref> we compare the column density sensitivity and the physical scale resolution of several HI surveys carried out with single dishes or interferometers. All sensitivities are homogenised to a 3σ limit over a velocity range of 16 km s^-1. This velocity is representative of the lowest velocity dispersions commonly observed in galactic disks. We convert the angular resolution of each survey to a physical scale based on the distances of the targets. The Figure shows that so far HI studies have focused on two separate regions of the parameter space. Single-dish telescopes have focused on sensitive (∼10^18 cm^-2) observations over large portions of the sky, but mostly rely on unresolved detections (resolution ≳ 10 kpc). Interferometers, instead, have focused on highly resolved (resolution ≲ 1 kpc) observations of a limited number of galaxies in the Local Universe, providing the most detailed studies of HI dynamics in galaxies from dwarfs to spirals. The surveys shown are the Westerbork observations of neutral Hydrogen in Irregular and SPiral galaxies (WHISP, <cit.>), the Local Volume HI Survey (LVHIS, <cit.>), The HI Nearby Galaxy Survey (THINGS, <cit.>), the Local Irregulars That Trace Luminosity Extremes THINGS (LITTLE THINGS, <cit.>), Hydrogen Accretion in LOcal GAlaxieS (HALOGAS, <cit.>) and the VLA Imaging of Virgo in Atomic Gas (VIVA, <cit.>).
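For reference, the homogenisation used in Figure <ref> can be reproduced with standard radio conversions: the brightness temperature of a beam, T_B ≈ 606 S/(θ_maj θ_min) K for S in mJy/beam and θ in arcsec at 21 cm, the column density N_HI = 1.823 × 10^18 ∫T_B dv cm^-2, and the small-angle relation between angular and physical scales. The sketch below is an illustrative implementation of these textbook formulas; the example noise and beam values are arbitrary assumptions, not the parameters of any specific survey.

```python
import numpy as np

def nhi_sensitivity(rms_mjy_beam, bmaj_arcsec, bmin_arcsec,
                    dv_kms=16.0, nsigma=3.0):
    """3-sigma HI column density limit (cm^-2) over a linewidth dv,
    for a channel rms in mJy/beam and a beam in arcsec (at 21 cm)."""
    tb_rms = 606.0 * rms_mjy_beam / (bmaj_arcsec * bmin_arcsec)  # K
    return 1.823e18 * nsigma * tb_rms * dv_kms

def physical_scale_pc(theta_arcsec, distance_mpc):
    """Linear scale subtended by theta at a given distance:
    1 arcsec corresponds to ~4.848 pc at 1 Mpc."""
    return 4.848 * theta_arcsec * distance_mpc

# Assumed example: 0.2 mJy/beam rms in a 30'' beam, target at 4 Mpc.
print(f"N_HI(3sigma, 16 km/s) = {nhi_sensitivity(0.2, 30, 30):.2e} cm^-2")
print(f"30'' at 4 Mpc = {physical_scale_pc(30, 4):.0f} pc")
```

Note the beam-area dependence: for a fixed flux-density noise, a larger beam reaches lower column densities, which is why single dishes and naturally-weighted interferometer maps probe the faintest diffuse emission.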
For single-dish studies we show the full samples of The HI Parkes All Sky Survey (HIPASS, <cit.>) and of the Arecibo Legacy Fast ALFA survey (ALFALFA, <cit.>). For these two surveys, the markers show the median distances of the detections (40 and 110 Mpc, respectively). With these surveys it has been possible to obtain a near-complete census of the overall HI density in the nearby Universe and of the HI mass function. They have also provided a comprehensive picture of the various scaling relations between the neutral atomic phase and the gas and stellar content of galaxies. The high sensitivity of single-dish telescopes has also enabled targeted deep observations in search of the diffuse cold clouds and filaments that may accrete onto galaxies (AGES <cit.>, the observations of and M31 <cit.>, and GBT and Parkes observations of nearby galaxies <cit.>). However, when diffuse gas has been detected, the question whether it was accreting onto the galaxy or was a remnant of an interaction has remained unanswered, due to the limited angular resolution of the observations. The Figure also highlights that interferometers provide a continuous coverage of the angular-resolution versus column-density-sensitivity parameter space. By changing the weights of the short or long baselines (e.g. through the robustness parameter) one can trade angular resolution for sensitivity, thus enabling the detection of diffuse and extended features in the outskirts and in the environment of galaxies. Nevertheless, reaching the column densities probed by single-dish observations for a representative number of sources is almost impossible within realistic observing times.
§ CURRENT SURVEYS In the last decade, the SKA L-band precursors and pathfinders (MeerKAT, ASKAP and Apertif) opened a new parameter space for interferometric HI observations. For example, MeerKAT <cit.> can provide a wide field of view (∼ 1 deg^2) and/or a high surface brightness sensitivity (∼ 10^18 cm^-2) at 10” resolution with a few tens of hours of integration time. This corresponds to an HI mass sensitivity of ∼ 10^6 M_⊙ for an unresolved source at 10 Mpc, assuming a linewidth of 50 km s^-1. The new phased array feed receivers of the WSRT (Apertif, <cit.>) and of ASKAP <cit.> have increased the field of view of interferometers by an order of magnitude, thus enabling blind interferometric surveys over several thousands of square degrees and also increasing the number of studies of more distant sources (≳ 100 Mpc). Figure <ref> (left) compares the column density sensitivity of the SKA precursor and pathfinder surveys with the past surveys presented in the previous Section. The phased array feed telescope surveys shown are the Widefield ASKAP L-band Legacy All-sky Blind surveY (WALLABY, <cit.>) and the Apertif Shallow and Medium-Deep surveys <cit.>. The MeerKAT surveys are the MeerKAT International GHz Tiered Extragalactic Exploration in HI (MIGHTEE-HI, <cit.>), the MeerKAT Fornax Survey (MFS, <cit.>), the MeerKAT Observations of Nearby Galactic Objects: Observing Southern Emitters (MHONGOOSE, <cit.>) and the Virgo Cluster multi-Telescope Observations in Radio of Interacting galaxies and AGN (ViCTORIA, <cit.>). For the wide-field surveys (WALLABY, Apertif and MIGHTEE-HI) the lines show the spatial resolution at the median distance of the expected detections. The dark shaded regions show the range of resolutions for 90% of the detections and the light shaded regions the resolutions for the full sample.
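The quoted HI mass sensitivity follows from the standard relation M_HI = 2.356 × 10^5 D² ∫S dv, with D in Mpc and the integrated flux in Jy km s^-1. The sketch below, with an assumed (purely illustrative, not instrument-specific) channel rms and width, shows how a 3σ unresolved-source limit of order 10^6 M_⊙ at 10 Mpc arises.

```python
import numpy as np

def mhi_limit(rms_jy_beam, chan_kms, linewidth_kms, dist_mpc, nsigma=3.0):
    """3-sigma HI mass limit (solar masses) for an unresolved source:
    M_HI = 2.356e5 * D^2 * S_int, where S_int = nsigma * rms *
    sqrt(N_chan) * chan_width is integrated over the assumed linewidth."""
    n_chan = linewidth_kms / chan_kms
    s_int = nsigma * rms_jy_beam * np.sqrt(n_chan) * chan_kms  # Jy km/s
    return 2.356e5 * dist_mpc**2 * s_int

# Assumed example: 1 mJy/beam rms in 5 km/s channels, 50 km/s linewidth.
print(f"M_HI(3sigma) = {mhi_limit(1.0e-3, 5.0, 50.0, 10.0):.1e} Msun")
```

The quadratic distance dependence is what separates the deep, resolved Local Volume work from the wide-field censuses at ≳ 100 Mpc discussed above.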
The Figure shows that WALLABY and Apertif are wide-field surveys with sensitivities and resolutions that were previously limited to targeted observations in the Local Volume (LVHIS and WHISP). With resolutions between 5 and 100 kpc, these surveys will provide the most complete census of HI in galaxies at large distances (≳ 70 Mpc), a regime which so far has mostly been studied only with single-dish surveys. Nevertheless, the system temperature of phased array receivers does not allow one to easily reach the very low column densities needed to properly detect and characterize, for example, cold gas accretion features. These studies are best tackled by the large MeerKAT survey programs which, for the very first time, are reaching single-dish column density sensitivity with ten times higher angular resolution. The MeerKAT Fornax Survey is observing the nearby Fornax cluster and the infalling group of Fornax A <cit.>. MHONGOOSE is targeting 30 nearby galaxies, from dwarfs to star-forming spirals, to detect and characterise the diffuse HI that is potentially accreting onto galaxies and to understand its connection to star formation (SF). ViCTORIA is observing the entire Virgo cluster, which was previously covered only by the targeted VIVA observations of 46 galaxies. MIGHTEE-HI is blindly targeting four fields to provide a sensitive, unbiased characterisation of the HI population out to 250 Mpc. The Figure shows that, even while investigating a new parameter space in sensitivity, resolution and area, the current interferometric surveys are still limited by the different optimizations of their telescopes. Phased array feed telescopes and MeerKAT provide similar 10-90 arcsecond angular resolutions, but while the former are best suited for wide-field surveys, they do not reach extremely low column densities. MeerKAT enables these deep studies, but only over limited fields, samples of galaxies and environments. § FUTURE SURVEYS In the next few years, new radio telescopes will keep exploring new regions of the sensitivity versus resolution parameter space, allowing deep wide-field surveys with improved angular resolution. The upgrade from MeerKAT to MeerKAT+ will increase the survey speed of the instrument and provide higher (4”) angular resolution. This motivates the development of a medium-shallow survey (M+MSHIS) which, compared to current studies, will investigate HI over a wide area (718 deg^2), reaching, with only 2 hours per pointing, sensitivities ≲ 10^20 cm^-2. The sensitivity versus angular resolution of the M+MSHIS survey is shown in Figure <ref> (right) along with the on-going surveys. Before this upgrade, and as a pilot for a medium-shallow survey, MeerKAT is observing the Euclid Deep Field South (EDFS). This is an example of the synergy between HI and deep optical observations (here from the Euclid space mission), which enables the unambiguous identification of low-surface-brightness gas-rich galaxies. The Figure shows that an unprecedented and complete view of the HI in and around galaxies will be provided by the DSA-2000 and SKA observations. Thanks to 2000 antennas guaranteeing a dense coverage of baselines within 10 km, DSA-2000 will reach M+MSHIS sensitivities at 20 arcseconds in half the integration time. Because of the survey strategy of the instrument, DSA-2000 will scan the full northern sky, thus providing the most complete sample of resolved HI sources in the Northern Hemisphere. Besides survey speed, an important innovation of DSA-2000 is the improvement in angular resolution to 3”. The right panel of Fig.
<ref> shows that with only 4 hours of integration DSA-2000 will detect the HI in star-forming disks (≳ 10^20 cm^-2) at 100 pc resolution (double the resolution of MHONGOOSE). By increasing the sensitivity by one order of magnitude with respect to Apertif and WALLABY, it will be possible to perform, in all SF galaxies of the nearby Universe (20000 sources from dwarfs to starbursts within 100 Mpc), a direct comparison between the HI distribution and kinematics and the other gas phases of the ISM and the stars, to probe how the baryon cycle within galaxies fuels star formation <cit.>. The SKA observations will be groundbreaking. They will reach column densities between 10^19 and 10^18 cm^-2 at unmatched resolutions (5-50 arcseconds) in only 10 hours per pointing [here we assume the SKA-MID baseline design with 197 dishes]. This will allow us to perform MFS- and MHONGOOSE-like surveys no longer on limited samples and environments but over the entire Southern Hemisphere (i.e., hundreds of thousands of galaxies). SKA observations with 100 hours per pointing (light shaded green area in the figure) will potentially give a definitive answer on cold gas accretion in galaxies and on the existence of cold gas cosmic filaments: either a detection and full characterisation of these systems will be possible, or a non-detection will require us to improve our understanding of the physical processes that sustain star formation over cosmic time. The Figure also shows the survey planned with the single-dish FAST telescope (FASHI, <cit.>). This wide-field survey will be complementary to the MeerKAT and SKA observations, providing high sensitivities on even larger angular scales. The other game-changer of SKA surveys is the complete description of the neutral gas phase in galaxies out to redshift z ∼ 0.5-1. High-redshift studies have so far been limited to `pencil beam' surveys in emission, such as the COSMOS HI Large Extragalactic Survey (CHILES, <cit.>), Looking At the Distant Universe with the MeerKAT Array (LADUMA, <cit.>) and the Deep Investigation of Neutral Gas Origins (DINGO, <cit.>), to targeted absorption surveys like The First Large Absorption Survey in HI (FLASH, <cit.>) and the MeerKAT Absorption Line Survey (MALS, <cit.>), or to stacking experiments (e.g., <cit.>). Large-volume surveys of resolved HI sources will enable us to characterise the HI mass function and the cosmic HI density throughout the last Gyrs. This will allow us to understand if and how the strong decrease in star formation rate observed in the last billion years is linked to a variation in the gas reservoir of galaxies. § ACKNOWLEDGEMENTS This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement no. 679627 and grant agreement no. 882793). bosma Bosma, A., “The distribution and kinematics of neutral hydrogen in spiral galaxies of various morphological types”, PhDT, 1978. pavel Mancera Piña, P. E., “Off the Baryonic Tully-Fisher Relation: A Population of Baryon-dominated Ultra-diffuse Galaxies”, The Astrophysical Journal, vol. 883, no. 2, 2019. doi:10.3847/2041-8213/ab40c7. ska Blyth, S., “Exploring Neutral Hydrogen and Galaxy Evolution with the SKA”, in Advancing Astrophysics with the Square Kilometre Array (AASKA14), 2015. doi:10.22323/1.215.0128. dsa2000 Hallinan, G., “The DSA-2000 — A Radio Survey Camera”, vol. 51, no. 7, 2019. doi:10.48550/arXiv.1907.07648. whisp Swaters, R. A., “The Westerbork HI survey of spiral and irregular galaxies. I. HI imaging of late-type dwarf galaxies”, Astronomy and Astrophysics, vol.
390, pp. 829–861, 2002. doi:10.1051/0004-6361:20011755. lvhis Koribalski, B. S., “The Local Volume HI Survey (LVHIS)”, Monthly Notices of the Royal Astronomical Society, vol. 478, no. 2, pp. 1611–1648, 2018. doi:10.1093/mnras/sty479. things Walter, F., “THINGS: The H I Nearby Galaxy Survey”, The Astronomical Journal, vol. 136, no. 6, pp. 2563–2647, 2008. doi:10.1088/0004-6256/136/6/2563. littlethings Hunter, D. A., “Little Things”, The Astronomical Journal, vol. 144, no. 5, 2012. doi:10.1088/0004-6256/144/5/134. halogas Heald, G., “The Westerbork Hydrogen Accretion in LOcal GAlaxieS (HALOGAS) survey. I. Survey description and pilot observations”, Astronomy and Astrophysics, vol. 526, 2011. doi:10.1051/0004-6361/201015938. viva Chung, A., “VLA Imaging of Virgo Spirals in Atomic Gas (VIVA). I. The Atlas and the H I Properties”, The Astronomical Journal, vol. 138, no. 6, pp. 1741–1816, 2009. doi:10.1088/0004-6256/138/6/1741. hipass Meyer, M. J., “The HIPASS catalogue - I. Data presentation”, Monthly Notices of the Royal Astronomical Society, vol. 350, no. 4, pp. 1195–1209, 2004. doi:10.1111/j.1365-2966.2004.07710.x. alfaalfa Haynes, M. P., “The Arecibo Legacy Fast ALFA Survey: The ALFALFA Extragalactic Source Catalog”, The Astrophysical Journal, vol. 861, no. 1, 2018. doi:10.3847/1538-4357/aac956. ages Auld, R., “The Arecibo Galaxy Environment Survey: precursor observations of the NGC 628 group”, Monthly Notices of the Royal Astronomical Society, vol. 371, no. 4, pp. 1617–1640, 2006. doi:10.1111/j.1365-2966.2006.10761.x. n2903 Irwin, J. A., “ΛCDM Satellites and Companions—the Arecibo ALFA Survey of NGC 2903”, The Astrophysical Journal, vol. 692, no. 2, pp. 1447–1463, 2009. doi:10.1088/0004-637X/692/2/1447. m31 Wolfe, S. A., “Sensitive 21cm Observations of Neutral Hydrogen in the Local Group near M31”, The Astrophysical Journal, vol. 816, no. 2, 2016. doi:10.3847/0004-637X/816/2/81. gbt1 Sorgho, A., “Early observations of the MHONGOOSE galaxies: getting ready for MeerKAT”, Monthly Notices of the Royal Astronomical Society, vol. 482, no. 1, pp. 1248–1269, 2019. doi:10.1093/mnras/sty2785. gbt2 Sardone, A., “A Census of the Extended Neutral Hydrogen around 18 MHONGOOSE Galaxies”, The Astrophysical Journal, vol. 910, no. 1, 2021. doi:10.3847/1538-4357/abde45. gbt3 Pingel, N. M., “A GBT Survey of the HALOGAS Galaxies and Their Environments. I. Revealing the Full Extent of HI around NGC 891, NGC 925, NGC 4414, and NGC 4565”, The Astrophysical Journal, vol. 865, no. 1, 2018. doi:10.3847/1538-4357/aad816. meerkat Jonas, J. and MeerKAT Team, “The MeerKAT Radio Telescope”, in MeerKAT Science: On the Pathway to the SKA, 2016. doi:10.22323/1.277.0001. apertif1 van Cappellen, W. A., “Apertif: Phased array feeds for the Westerbork Synthesis Radio Telescope. System overview and performance characteristics”, Astronomy and Astrophysics, vol. 658, 2022. doi:10.1051/0004-6361/202141739. askap Johnston, S., “Science with ASKAP. The Australian square-kilometre-array pathfinder”, Experimental Astronomy, vol. 22, no. 3, pp. 151–273, 2008. doi:10.1007/s10686-008-9124-7. wallaby1 Koribalski, B. S., “WALLABY - an SKA Pathfinder HI survey”, Astrophysics and Space Science, vol. 365, no. 7, 2020. doi:10.1007/s10509-020-03831-4. apertif2 Adams, E. A. K., “First release of Apertif imaging survey data”, Astronomy and Astrophysics, vol. 667, 2022. doi:10.1051/0004-6361/202244007. maddox Maddox, N., “MIGHTEE-HI: The HI emission project of the MeerKAT MIGHTEE survey”, Astronomy and Astrophysics, vol. 646, 2021. doi:10.1051/0004-6361/202039655.
mfs Serra, P., “The MeerKAT Fornax Survey. I. Survey description and first evidence of ram pressure in the Fornax galaxy cluster”, Astronomy and Astrophysics, vol. 673, 2023. doi:10.1051/0004-6361/202346071. mhon de Blok, W. J. G., “MHONGOOSE – A MeerKAT Nearby Galaxy survey”, 2023, submitted to Astronomy and Astrophysics. boselli Boselli, A., “ViCTORIA project: MeerKAT observations of the ram pressure stripped galaxy NGC 4523”, Astronomy and Astrophysics, vol. 676, 2023. doi:10.1051/0004-6361/202346812. m+ https://www.dropbox.com/s/byezm31ugyh37ig/MeerKAT%2B%20White%20Paper%20-%204May2021%20-%20with%20appendices.pdf?dl=0 MeerKAT+ white paper dsa20002 https://www.dropbox.com/scl/fi/579zwhw5r8pt5o45ftsn0/DSA-2000_Community_Science_Book.pdf?rlkey=fb5xfu6jr8g643c63qq7ea3dt&dl=0 DSA-2000 Community Science Book fashi Zhang, C.P., “The FAST all sky H I survey (FASHI): The first release of catalog”, Sci. China Phys. Mech. Astron. 67, 219511 (2024). doi:10.1007/s11433-023-2219-7. chiles Fernández, X., “A Pilot for a Very Large Array H I Deep Field”, The Astrophysical Journal, vol. 770, no. 2, IOP, 2013. doi:10.1088/2041-8205/770/2/L29. laduma Blyth, S., “LADUMA: Looking at the Distant Universe with the MeerKAT Array”, in MeerKAT Science: On the Pathway to the SKA, 2016. doi:10.22323/1.277.0004. dingo Rhee, J., “Deep investigation of neutral gas origins (DINGO): stacking experiments with early science data”, Monthly Notices of the Royal Astronomical Society, vol. 518, no. 3, OUP, pp. 4646–4671, 2023. doi:10.1093/mnras/stac3065. flash Allison, J. R., “The First Large Absorption Survey in H I (FLASH): I. Science goals and survey design”, Publications of the Astronomical Society of Australia, vol. 39, 2022. doi:10.1017/pasa.2022.3. mals Deka, P. P., “The MeerKAT Absorption Line Survey (MALS) Data Release. I. Stokes I Image Catalogs at 1–1.4 GHz”, The Astrophysical Journal Supplement Series, vol. 270, no. 2, IOP, 2024. doi:10.3847/1538-4365/acf7b9. chowdury Chowdhury, A., “H I 21-centimetre emission from an ensemble of galaxies at an average redshift of one”, Nature, vol. 586, no. 7829, pp. 369–372, 2020. doi:10.1038/s41586-020-2794-7.
http://arxiv.org/abs/2407.02798v1
20240703035822
Game-Based Discovery: Harnessing Mini-Games within Primary Games for Scientific Data Collection and Problem Solving
[ "Abhishek Phadke", "Mamta Yadav", "Stanislav Ustymenko" ]
cs.HC
[ "cs.HC", "cs.MM" ]
Game-Based Discovery: Harnessing Mini-Games within Primary Games for Scientific Data Collection and Problem Solving July 8, 2024 ==================================================================================================================== § ABSTRACT In the popular video game Batman: Arkham Knight, produced by Rocksteady Studios and released in 2015, the primary protagonist of the game is Batman, a vigilante dressed as a bat, fighting crime from the shadows in the fictitious city of Gotham. The game involves a real-world player who takes up the role of Batman to solve a peculiar side mission wherein they have to reconstruct the clean DNA sequence of a human and separate it from mutant DNA to manufacture an antidote to cure the villain. Although this is undoubtedly a fascinating part of the game, one that was absent in previous Batman games, it showcases an interesting notion of using mini-games embedded within primary games to achieve a particular real-world research objective. Although the DNA data used in this case was not real, there are multiple such instances in video games where mini-games have been used for an underlying motive besides entertainment. Based on popular case studies incorporating a similar method, this study characterizes the methodology of designing mini-games within primary games for research purposes into a descriptive framework, highlighting the process's advantages and limitations. It is concluded that these mini-games not only facilitate a deeper understanding of complex scientific concepts but also accelerate data processing and analysis by leveraging crowd-sourced human intuition and pattern recognition capabilities. This paper argues for strategically incorporating miniaturized, gamified elements into established video games that are mainly intended for recreational purposes. § INTRODUCTION In the past five years, players of the video game Borderlands 3, a popular sequel in video game culture, have encountered a peculiar new feature on their "Homeship player base." Within a small section of the in-game environment, a computer terminal was discovered, initiating a new simulation environment in which players' in-game avatars interacted. Ingeniously integrated into the main storyline as a side mission, this mini-game prompted players to dedicate some of their game-playing time to solving a specific challenge it presented. Involving the manipulation of Tetris-inspired colored blocks, the mini-game collected players' responses, with these colored blocks representing data from the human gut. The outcomes of players' interactions contributed to an open-source project known as the American Gut Project <cit.>. This game serves as just one instance of a recent trend: the gamification of research and data collection methodologies. Such games effectively convert complex scientific challenges into engaging tasks that harness the collective problem-solving skills of a diverse group of participants. For example, in biochemistry, mini-games like Foldit enable players to contribute to protein folding studies <cit.>. Although video games have long been accepted and widely established as a form of entertainment across the globe, appealing to diverse audiences, researchers have also recognized their educational benefits. This recognition has led to studies exploring and developing educational games as effective tools for learning <cit.> and knowledge dissemination <cit.>. However, the scope of "Educational games" extends beyond this.
Figure 1 provides a brief categorization of the various uses and forms of games for educational and research purposes. Type 1 games are designed mainly to explain concepts to the audience. For example, a game may be designed for kids where they learn different shapes, colors, or the alphabet. A game may be designed for middle school students, where they can see an interactive display of the human body and learn to place the body organs in the correct positions. Games have also been created to explain higher-level concepts to a targeted student audience <cit.>. Type 2 games involve games that help with memory, pattern recognition, association, and other essential features of the human mind. These games are usually designed to address medical issues such as dementia <cit.>, vision loss <cit.><cit.>, hearing <cit.>, or perception <cit.>. Type 3 games are not independent games. These are usually mini-games embedded within primary video games that are created for the commercial market. These games offer the player a unique opportunity to participate in crowd-sourced data collection and scientific problem-solving endeavors that the game developers have usually created with an associated research organization. While participation may be purely voluntary, it allows the audience to be part of a larger scientific community working towards a goal. Methodologically, this paper examines the design, execution, and outcomes of these Type 3 mini-games, assessing their effectiveness in generating valid scientific data and insights. Additionally, it addresses the technical and ethical considerations involved in integrating gamification into scientific research. It discusses the balance between data integrity and participatory inclusiveness. Throughout the combined history of scientific research and video game development, there have been multiple instances of successful collaboration between the two. However, Type 3 games are relatively new, and existing established taxonomies of educational games <cit.> may need updating to incorporate them accurately. One prominent example is Foldit <cit.>, an online puzzle platform initially released in 2008 for protein folding. The game was developed by multiple organizations in collaboration, primarily led by the University of Washington. The game aims to challenge gamers by testing their ability to fold protein structures with precision. It provides tools for players to interact with and manipulate the protein structures, aiming to produce optimal results. Researchers analyzed the highest-scoring results to determine whether a solution corresponds to a native state that can be applied to the real-world protein. This information helped researchers understand whether the solutions can be applied to issues such as diseases. In 2008, the platform had 240,000 registered players <cit.>, which shows participation far greater than any similar research survey or crowd-sourced scientific data collection request. The scientific publications produced on the process and the game concluded that a large percentage of the registered players provided relevant results or "outperformed algorithmically computed solutions" <cit.><cit.>. Subsequent published studies have continued to highlight various achievements of the game, remaining pertinent over the past five years <cit.> <cit.>. Sea Hero Quest <cit.> was a popular mobile game designed by the British game developer Glitchers in collaboration with associated Alzheimer's disease research centers.
The plot of the game involves a sea journey taken by the protagonist, whose role the player assumes to navigate, shoot flares, and chase enemies in the game. Data collected from the game helped researchers understand the process of three-dimensional navigation, which is one of the first abilities a person with dementia loses. Among the multiple such examples is “Stall Catchers”, which used citizen researchers' help to review footage of blood flow in mouse brains to detect Alzheimer's symptoms <cit.>. As illustrated in these case studies, the primary aim behind game development remained consistent: to engage a broad audience and gather a substantial dataset for researchers to analyze and understand. Although these games were specifically crafted as tools for collecting data to support scientific research, contemporary approaches share similar objectives. Figure 2 outlines the primary components of the process described in this study and briefly shows how they relate to each other. This study distinguishes between the two game subjects as "Primary games" (PGs) and "Scientific Mini games" (SMGs). Primary games are the usual multi-platform video games produced by game studios and distributed and managed by them under various licensure agreements. These games have a global market, are often created for multiple platforms, and have features such as online multi-person cooperative gameplay, expansion packs, and downloadable content packs that enhance and extend gameplay, with support for game performance and improvement through further updates over a few years. The SMG is a scientific endeavor planned and created by the game studio together with a research organization (RO). A game in this category has a specific goal in mind and involves the "Game Player," who is the real-world entity playing the game, using their "In-game Player Avatar" or PA to interact with the SMG in a way similar to the PG. The result of this interaction is "Decision data" that is then used by the RO to accomplish its scientific goals. § DESCRIPTIVE METHODOLOGY In the sections above, this study introduced the reader to the different classifications of educational games and the major components involved in the process. This is followed by a descriptive methodology that can be used to integrate SMGs into PGs. Figure 3 shows the descriptive methodology for integrating SMGs into primary games. The process is as follows: the SMG is present within the primary game environment and included as a side mission in the primary game's storyline. Typically, the real-world player establishes an in-game avatar to play the primary game. This same avatar is used to interact with the mini-game as well. The mini-game may involve asking the game player to use their player avatar to perform actions such as solving a puzzle, answering specific questions asked by NPCs (Non-Playable Characters), or completing other kinds of tasks that are physical in nature for the in-game player avatar. For the real-world game player, this means controlling the in-game avatar using the game platform's I/O device. Depending on the data being collected or analyzed, the tasks may differ from the primary game's overall gameplay, style, and intent. The results of this interaction will typically be stored as a separate data structure within the primary game mechanism to protect the data. As outlined above, this data is typically labeled as Decision data and includes information about the game player's interaction with the SMG.
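As a concrete illustration of what such a record might look like, the sketch below defines a hypothetical decision-data structure and a submission stub; all field names and the endpoint URL are illustrative assumptions rather than part of any actual game or study:

import json
import time
import uuid
from dataclasses import dataclass, asdict, field
from typing import List

@dataclass
class DecisionRecord:
    # One hypothetical SMG interaction, stored separately from PG save data.
    record_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    smg_version: str = "1.0.0"          # lets the RO reconcile schema changes
    puzzle_id: str = ""                 # which SMG task was presented
    player_moves: List[int] = field(default_factory=list)  # anonymized inputs
    completion_time_s: float = 0.0
    timestamp: float = field(default_factory=time.time)
    consent_given: bool = False         # explicit opt-in before any upload

def submit(record, endpoint="https://example-ro.org/api/decisions"):
    # Serialize a record for the RO's database. A real client would batch
    # records, retry on failure, and never transmit without player consent;
    # the network call itself is omitted in this sketch.
    if not record.consent_given:
        return None
    return json.dumps({"endpoint": endpoint, **asdict(record)})

print(submit(DecisionRecord(puzzle_id="tile-sort-042", player_moves=[3, 1, 2],
                            completion_time_s=41.5, consent_given=True)))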
This decision data is then communicated over the network to the database of the research organization that is collecting the data. The research organization may then analyze the data according to its goals. Any update to the SMG is done through the game developer, who will push an over-the-air (OTA) update to the primary game through established channels. In some instances, a focused update to modify only the contents of the SMG can also be performed. Table 1 highlights how SMGs can be integrated into PGs, the process they would follow, an example implementation, and the expected data generation. § CHALLENGES AND LIMITATIONS This section discusses the various challenges of implementing mini-games as viable tools for data collection. Starting with the challenges to integration and maintenance of SMGs within the primary game environment, several issues will need to be addressed on an ongoing basis for the research organization to maintain effective relationships with the game developer and the player. One of the significant goals of integration is that mini-game participation should be optional and not part of the main storyline. While this prevents degrading the player's gameplay experience, the success of data collection depends mainly on the player's choice to interact with the SMG. If the player chooses to skip the SMG entirely and continue with the primary gameplay storyline only, no information can be collected. Even if the decision to integrate the SMG has been made at the initial stage of game design and development, the creation of the SMG itself, along with its interactive elements placed within the game, and other aspects of game design such as crafting appropriate assets, sound files, and rewards, requires time and resources. These efforts may ultimately incur costs for the developer. Additionally, since the SMG operates as a piece of code executed within the primary game environment, it must undergo the same rigorous testing and verification procedures before deployment. A glitch in the SMG code could potentially disrupt gameplay or even cause the primary game to malfunction, which would not only impact the game development process but also damage the reputation of the game developer and its product. Most offline games designed these days do not require a persistent, active Internet connection to be maintained by the game console for the game to be playable. In such scenarios, explicit permission must be obtained from the user for the console and the game to be allowed to connect to the internet and share the decision data obtained by the mini-game. As data collection advances and the needs of the managing RO change, a parallel change in the SMG may be required. While this can be achieved through an update, in some cases it might involve pushing an update through the primary game, which may be tedious. Also, most updates to the software running on proprietary gaming consoles must be verified independently by the hardware providers, which may add extra time before the new version is released. Since the entire process is tedious and resource-consuming, it is only logical to implement it in games and game-development studios with a huge fan following and commercial market. Smaller indie developers may not have the time or capability to manage the integration of SMGs in initial builds. However, this issue can be addressed gradually through long-term releases in later years.
Validating the results obtained through crowd-sourced data labeling, annotation, or decision-making may require additional vetting by the research organization. This is necessary to examine outliers, prevent errors, and detect additional patterns in the data obtained (a minimal consensus check of this kind is sketched below). A similar challenge is disseminating the results produced using these methods, as the data is collected from such a wide audience over extended periods that it might be impossible to establish milestones in data collection and processing. As such, the academic and scientific community may harbor additional skepticism about the veracity of the collected information without sufficient proof. Fortunately, such scrutiny can be addressed by creating a documented and streamlined data collection and analysis process. § ADVANTAGES OF INCORPORATION OF SMGS IN PGS The following subsection discusses the advantages of including SMGs in primary games as an effective data-gathering and problem-solving method. There is a vast and ever-growing market for games, from sequels to established titles by popular game developers (GDs) to remakes and new games in all genres. This market offers many times more participation than current research surveys attract. Figure <ref> lists popular games and the millions of copies sold worldwide. Indeed, if one were to capture just a tiny percentage of the audience from this market willing to interact with the SMG as they play the primary game, the possibilities for capturing data and solving problems through distributed means would be endless and global in reach. Most puzzle, strategy, and RPG (Role-Playing Game) titles already put players in a similar mindset, where they solve various challenges in-game to accomplish a particular goal. As such, the player in this state is a perfect audience for such SMGs to target and gain valuable information from. It has been realized that most, if not all, survey and crowd-problem-solving endeavors require that the user be compensated for their time or incentivized to participate in some way. While actual monetary compensation may be difficult to accomplish because of obvious income, taxation, and misuse-prevention regulations, it is straightforward to award players in-game currency, perks, and collectible items for completing milestones dictated by the SMG within the primary game. Even with monetary compensation involved, collecting opinions and data from the public often yields few responses; adding mini-game approaches in-game might provide a richer data stream. As the case studies show, this approach has already been applied with reasonable success in Borderlands 3, as described above <cit.>. Additionally, incorporating these SMGs will not affect the overall gameplay experience, as completing them is voluntary. Skipping or completing these SMGs will not affect the game's central plot or the player's ability to complete the primary game. A certain group of game players makes it their goal to achieve 100% game completion; skipping the SMG will not affect the percentage-completion rubric of the PG. These games merely serve as side quests for the player to complete should they wish to take a break from primary gameplay, or simply to return to once they have completed all the missions in the primary game.
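Returning to the validation challenge raised earlier, such a consensus check could in its simplest form look like the following sketch; the vote and agreement thresholds, and the notion of a string "label", are illustrative assumptions:

from collections import Counter

def consensus_label(labels, min_votes=5, min_agreement=0.7):
    # Accept a crowd-sourced label only when enough players agree; return
    # None when the item needs expert review (too few responses, or
    # agreement below the threshold).
    if len(labels) < min_votes:
        return None
    label, votes = Counter(labels).most_common(1)[0]
    return label if votes / len(labels) >= min_agreement else None

# Illustrative: six players labeled the same SMG puzzle instance.
votes = ["stalled", "stalled", "flowing", "stalled", "stalled", "stalled"]
print(consensus_label(votes))  # -> "stalled" (5/6 ≈ 0.83 agreement)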
It is also unlikely that incorporating these SMGs will increase the overall file size of the game or affect gameplay performance to a noticeable extent. Since most games have a relatively global market with fewer restrictions than other forms of data collection, a wider audience can be reached. By default, GDs include language pack downloads with their games that enable them to be played by a broader range of audiences. Integrating SMGs would harness the existing capabilities of primary games and their geographical reach to provide greater diversity in the data and decisions obtained. Often, primary games are played by enthusiastic followers and fans over decades, enabling the SMGs present within them to have a more prolonged and persistent presence that reaches a wider, multi-generational audience over an extended period. This has other advantages, too. If the needs of the research study supported by the SMG change with time, it is a matter of updating the mini-game to accommodate the new requirements for data collection, questionnaires, or target experiments. This change can be pushed as an update to the primary game or as a focused refresh of the SMG alone. It is logical to assume that players are more likely to interact with mini-game surveys and data collection exercises than with current methods of focus groups, phone interviews, and in-person interactions. This is especially relevant if, as a means of compensation, the SMGs provide the player's in-game avatar with in-game currency, perks, or collectible items. § FUTURE WORK AND CONCLUSIONS Although the advantages of including SMGs in PGs are significant, the challenges discussed above still need to be addressed. Research organizations that seek to use such games and integrate them into their data collection or decision-making processes must maintain a fine balance between keeping GDs on their development timelines without costing them resources or time, preserving the game player's experience, and collecting data in a manner that is acceptable to the scientific community. Additional possibilities include expanding to other platforms through which video games can be played. With the advent of virtual reality headsets, game developers have started porting many existing games and creating new educational games to cater to the VR market <cit.>. SMGs implanted into these games may provide additional benefits as well. In conclusion, the integration of mini-games as agents for scientific problem-solving and data collection, wrapped up as side missions in larger, established primary games, represents a promising frontier in research, offering new pathways for discovery and innovation.
http://arxiv.org/abs/2407.01885v1
20240702021442
Survey on Knowledge Distillation for Large Language Models: Methods, Evaluation, and Application
[ "Chuanpeng Yang", "Wang Lu", "Yao Zhu", "Yidong Wang", "Qian Chen", "Chenlong Gao", "Bingjie Yan", "Yiqiang Chen" ]
cs.CL
[ "cs.CL", "cs.AI" ]
Institute of Information Engineering, Chinese Academy of Sciences & School of Cyber Security, University of Chinese Academy of Sciences Beijing China 0000-0003-0601-2224 yangchuanpeng@iie.ac.cn Correspondence to: Wang Lu and Yao Zhu. Tsinghua University China newlw230630@gmail.com Zhejiang University China ee_zhuy@zju.edu.cn Peking University Beijing China yidongwang37@gmail.com Institute of Computing Technology, Chinese Academy of Sciences, University of Chinese Academy of Sciences China chenqian20b@ict.ac.cn Institute of Computing Technology, Chinese Academy of Sciences China gaochenlong@ict.ac.cn Institute of Computing Technology, Chinese Academy of Sciences, University of Chinese Academy of Sciences China bj.yan@ieee.org Institute of Computing Technology, Chinese Academy of Sciences China yqchen@ict.ac.cn § ABSTRACT Large Language Models (LLMs) have showcased exceptional capabilities in various domains, attracting significant interest from both academia and industry. Despite their impressive performance, the substantial size and computational demands of LLMs pose considerable challenges for practical deployment, particularly in environments with limited resources. The endeavor to compress language models while maintaining their accuracy has become a focal point of research. Among the various methods, knowledge distillation has emerged as an effective technique to enhance inference speed without greatly compromising performance. This paper presents a thorough survey from three aspects: method, evaluation, and application, exploring knowledge distillation techniques tailored specifically for LLMs. Specifically, we divide the methods into white-box KD and black-box KD to better illustrate their differences. Furthermore, we explore the evaluation tasks and distillation effects of different distillation methods, and propose directions for future research. Through an in-depth understanding of the latest advancements and practical applications, this survey provides a valuable resource for researchers, paving the way for sustained progress in this field. [500]Computing methodologies Natural language processing [500]General and reference Surveys and overviews Survey on Knowledge Distillation for Large Language Models: Methods, Evaluation, and Application Yiqiang Chen July 8, 2024 ================================================================================================ § INTRODUCTION The emergence of Large Language Models (LLMs) <cit.> has significantly improved text generation quality in various generative tasks, becoming a pivotal and widely discussed topic in the field of artificial intelligence. These models, compared to their predecessors, show superior generalization to unseen data. Moreover, they exhibit capabilities that smaller models lack, such as multi-step reasoning <cit.> and instruction-following <cit.>. The success of LLMs is often attributed to increased training data and a larger number of model parameters (e.g., GPT-3 with 175 billion parameters <cit.>).
However, the expansion in parameter size brings significant drawbacks, particularly in terms of high inference costs and substantial memory requirements, making practical deployment challenging. For example, GPT-3 requires around 350GB of model storage (float16) and at least 5 A100 GPUs with 80GB of memory each for inference, contributing significantly to carbon emissions. To mitigate these challenges, model compression <cit.> has emerged as a viable solution. Model compression aims to transform large, resource-heavy models into more compact versions that are suitable for storage on constrained mobile devices. This process may involve optimizing for reduced latency to achieve faster execution or balancing between minimal latency and model performance. Thus, a key goal in applying these high-capacity models in real-world scenarios is to compress them, reducing the number of parameters while preserving maximum performance. As the necessity of reducing computational resource demands becomes increasingly pressing, Knowledge Distillation (KD) <cit.> emerges as a promising technique. KD is a machine learning method focused on compressing and speeding up models by transferring knowledge from a large, complex model to a smaller, more efficient one. This technique is frequently employed to condense the knowledge stored in large deep neural network models into smaller counterparts, thus reducing computational resource requirements and improving inference speed without substantial performance sacrifices. Fundamentally, knowledge distillation leverages the extensive knowledge acquired by a large model on a substantial dataset to guide the training of a smaller model. This knowledge typically includes the output probability distribution, intermediate layer representations, and loss function of the large model. During training, the smaller models aim not only to match the original data labels but also to mimic the behavior of the larger models. For advanced models like GPT-4 <cit.>, which are accessible only through APIs, the generated instructions and explanations can aid in the training of student models <cit.>. With recent advancements in knowledge distillation, several studies have synthesized the latest progress in various distillation techniques. Specifically, Gou <cit.> provide an extensive review of knowledge distillation, addressing six critical aspects: knowledge categories, training schemes, teacher-student architectures, distillation algorithms, performance comparisons, and applications. Similarly, Wang <cit.> comprehensively summarize the research progress and technical details of knowledge distillation techniques related to visual tasks. Alkhulaifi <cit.> introduce an innovative metric known as the distillation metric, which they employ to evaluate different knowledge compression methods. Additionally, Hu <cit.> explore various teacher-student architectures across multiple distillation objectives, presenting different knowledge representations and their corresponding optimization goals. They also provide a systematic overview of teacher-student architectures, incorporating representative learning algorithms and effective distillation schemes. Existing reviews on knowledge distillation have laid a crucial foundation and offered valuable insights into model compression <cit.>.
However, the emergence of LLMs has brought several new challenges to KD: 1) Large language models are designed not for single tasks like text generation but for broad generality across various tasks and unseen data, including emergent capabilities. Therefore, assessing the generalization of compressed LLMs requires careful and thorough evaluation. 2) Existing reviews only summarize prior work without providing concrete case studies of KD techniques applied to compress and deploy LLMs in real-world scenarios; such case studies can help readers choose the best KD scheme for LLMs of different scales. To tackle these challenges, a variety of knowledge distillation algorithms specifically designed for LLMs have been developed. This paper aims to provide a comprehensive and insightful guide to these methods. The overarching classification framework of our survey is depicted in Figure <ref>, which examines the distillation algorithms of LLMs from three aspects: method, evaluation, and application. To clearly explain these methods, we categorize them into white-box KD and black-box KD. White-box KD includes two distinct types: Logits-based methods <cit.>, which transfer knowledge at the logits level, and Hint-based methods <cit.>, which transmit knowledge through intermediate features. Black-box KD involves an API-based approach where only the outputs from the teacher model are accessible. This category typically includes three methods: In-Context Learning <cit.>, Chain-of-Thought <cit.>, and Instruction Following <cit.>. In addition, we evaluate the effectiveness of the above two types of distillation algorithms on robustness benchmarks <cit.>. Finally, we discuss the relationships and application scenarios among different distillation methods and propose directions for future research. The rest of this paper is organized as follows. Sec.<ref> briefly reviews the definition of knowledge distillation methods. Next, Sec.<ref> delves into the distillation and evaluation methods in the field of LLMs. Sec.<ref> presents applications while Sec.<ref> summarizes the challenges of knowledge distillation and explores future research directions. Lastly, Sec.<ref> concludes the paper. § OVERVIEW OF KNOWLEDGE DISTILLATION In this section, we summarize the optimization objectives of each knowledge distillation algorithm. §.§ Logits-based KD As implied by its name, logits-based KD <cit.> is a distillation paradigm that employs the logits of the teacher model for knowledge transfer. We can formulate the general knowledge distillation loss function as follows: ℒ_logits = KL(p^𝐭‖ p^𝐬) = ∑_j=1^C p_j^𝐭log(p_j^𝐭/p_j^𝐬), p_i^𝐬 = exp(z_i^𝐬/τ)/∑_j=1^C exp(z_j^𝐬/τ), p_i^𝐭 = exp(z_i^𝐭/τ)/∑_j=1^C exp(z_j^𝐭/τ), where z^𝐬, z^𝐭∈ℝ^C denote the logits output of the student and teacher network, respectively. τ is a temperature parameter that adjusts the smoothness of the logits. C represents the number of classes. The Kullback-Leibler divergence (KLD) <cit.> loss can also be replaced with other functions, such as Reverse Kullback–Leibler (RKL) <cit.> distillation, Jensen–Shannon (JS) <cit.> distillation, etc. §.§ Hint-based KD Given the restricted ability of students to extract knowledge in logits-based knowledge distillation, researchers strive to more precisely replicate the behavior of teachers. Consequently, intermediate feature-based knowledge distillation <cit.> was introduced. This technique involves matching the outputs of the intermediate layers between student and teacher models.
This approach requires students to understand both the results and the processes leading to those results. The general form of the feature-based knowledge distillation loss function is outlined below: ℒ_hint = ℋ(F^𝐬, F^𝐭) = ‖ F^𝐭 - ϕ(F^𝐬) ‖^2, where F^𝐬, F^𝐭∈ℝ^H× W × C denote the intermediate features of the student and teacher networks, respectively. The function ϕ is used to ensure that the student features match the dimensions of the teacher features. The metric function is represented by ℋ; as an example, we use the mean squared error. §.§ In-Context Learning ICL <cit.> utilizes a natural language prompt composed of task descriptions or several task examples as demonstrations. Formally, let D_K = { f(x_1, y_1), …, f(x_k, y_k) } represent a set of k examples, where f(x_k, y_k) is a function that converts the k-th task example into a natural language prompt. Given the task description I, the demonstration set D_k, and a new input query x_k+1, the predicted output ŷ_k+1 generated by the LLM can be described by the following formula: LLM(I, f(x_1, y_1), …, f(x_k, y_k), f(x_k+1, __)) → ŷ_k+1, where f(x_1, y_1), …, f(x_k, y_k) are the demonstrations, x_k+1 is the new input, and the answer slot __ is left blank for the LLM to predict. The student model is then trained to predict the results generated by the LLM. §.§ Chain-of-Thought CoT <cit.> integrates intermediate reasoning steps into prompts, rather than relying solely on simple input-output pairs as done in ICL: LLM(I, f(x_1, r_1, y_1), …, f(x_k, r_k, y_k), f(x_k+1, __, __)) → r̂_k+1, ŷ_k+1, where the demonstrations now include rationales, both the rationale and answer slots for x_k+1 are left blank, and r_k represents the rationale provided by the user that explains why the answer to x_k is y_k. At this point, the student model not only needs to predict the labels of the teacher model, but also needs to emulate the rationales generated by the teacher. §.§ Instruction Following By fine-tuning on a structured multitask dataset that utilizes natural language descriptions, LLMs exhibit proficiency on unseen tasks that are similarly expressed in instructional formats <cit.>. Through instruction tuning, LLMs can follow task guidelines for new assignments without needing explicit examples, thus improving their generalization abilities. The process of distilling instruction-following skills involves generating task-specific instructions with the LLM and then fine-tuning the student model using this instruction dataset. § KNOWLEDGE DISTILLATION IN LARGE LANGUAGE MODELS The Transformer architecture is highly scalable, allowing for the creation of extremely large models with billions or even trillions of parameters. It underpins many of the most prominent large-scale models in NLP, CV, and multimodal domains. For example, notable large language models like the GPT series <cit.>, LLaMA <cit.>, and Qwen <cit.> are based on its decoder-only configuration. Before 2023, research on Transformer-based NLP distillation <cit.> mainly centered around the BERT architecture. However, with the rise of pre-trained large language models <cit.>, there has been increasing interest in distilling Transformers with billion-scale parameters and in developing more efficient distillation methods for scenarios with limited data and high computational costs <cit.>. The existing distillation algorithms are mainly divided into two categories: white-box KD and black-box KD. §.§ White-box Knowledge Distillation White-box distillation relies on access to the teacher model's internals during training, utilizing information such as its logits and intermediate representations.
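To make the two white-box objectives defined in the previous section concrete, the following sketch implements the temperature-scaled KL logits loss and an MSE hint loss with a dimension-matching projection ϕ; the shapes, temperature, and loss weighting are illustrative choices, not values taken from any specific paper:

import torch
import torch.nn.functional as F

def logits_kd_loss(student_logits, teacher_logits, tau=2.0):
    # KL(p_t || p_s) between temperature-softened distributions. The tau**2
    # factor keeps gradient magnitudes comparable across temperatures, a
    # common convention following Hinton et al.
    log_p_s = F.log_softmax(student_logits / tau, dim=-1)
    p_t = F.softmax(teacher_logits / tau, dim=-1)
    return F.kl_div(log_p_s, p_t, reduction="batchmean") * tau ** 2

def hint_loss(student_feat, teacher_feat, proj):
    # MSE between teacher features and projected student features phi(F_s).
    return F.mse_loss(proj(student_feat), teacher_feat)

# Illustrative shapes: batch 8, 100 classes, hidden sizes 256 (student) vs 512 (teacher).
z_s, z_t = torch.randn(8, 100, requires_grad=True), torch.randn(8, 100)
f_s, f_t = torch.randn(8, 256, requires_grad=True), torch.randn(8, 512)
phi = torch.nn.Linear(256, 512)  # learnable dimension-matching projection

loss = logits_kd_loss(z_s, z_t) + 0.5 * hint_loss(f_s, f_t, phi)
loss.backward()

In practice the teacher's logits and features would be produced under torch.no_grad(), so that only the student and the projection ϕ are updated.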
In the following discussion, we explore two distinct types of white-box knowledge distillation. Firstly, logits-based methods, introduced by Hinton <cit.>, transfer knowledge at the logits level, where the knowledge is conveyed using the teacher model's logits. Given the limited knowledge acquired by students in logits-based knowledge distillation, researchers aim to more accurately replicate the teacher's behavior. To this end, Romero <cit.> propose hint-based knowledge distillation, which involves aligning the feature outputs of intermediate layers between the student and teacher models. This approach requires the student to understand not only the final results but also the processes leading to those results. In the following section, we analyze in detail the characteristics of each method from the perspective of evaluation tasks (as shown in Table <ref>). Furthermore, we evaluate the strengths and weaknesses of the two types of distillation algorithms based on robustness, providing guidance on the applicable scenarios of each algorithm. §.§.§ Logits-based KD The distillation of Bidirectional Long Short-Term Memory Networks (BiLSTM) <cit.> marks the earliest attempt to apply knowledge distillation to BERT <cit.>. The distillation objective is to minimize the mean squared error loss between the logits of the student network and those of the teacher. This approach has been tested on three tasks spanning sentence classification and sentence matching. Experimental results show that the shallow BiLSTM-based model achieves performance comparable to the ELMo language model <cit.>, but with approximately 100 times fewer parameters and a 15-fold increase in inference speed. Similarly, DistilBERT <cit.> initializes a shallower student model using the teacher's parameters and minimizes the difference in soft target probabilities between the teacher and student, a technique known as word-level knowledge distillation. It introduced a triple loss that combines language modeling, distillation, and cosine distance losses to leverage the inductive bias learned by the pre-trained model. DistilBERT achieved performance equivalent to or exceeding the ELMo baseline in nine tasks. Compared to BERT, DistilBERT maintains 97% of the performance while reducing the number of parameters by 40%. MixKD <cit.> extends the concept of encouraging students to mimic teachers' logits by using linear interpolation of example pairs. It improves the effectiveness of knowledge distillation by using data augmentation to create additional samples from the available task-specific data. This approach mirrors students learning more effectively from teachers by asking further questions to explore their answers and concepts in depth, providing more data for student models to extract insights from large-scale language models. Evaluation results across six datasets show that MixKD significantly outperforms traditional knowledge distillation and previous methods in compressing large language models. ReAugKD <cit.> includes both an inference stage and a training stage. In the inference stage, it aggregates soft labels generated by the teacher that closely resemble the student's embeddings. During the training phase, a novel relational KD loss is used to minimize the divergence between teacher-student embeddings and their distributions.
Evaluation results on six datasets demonstrated that ReAugKD achieved superior performance compared to the baseline, with a latency overhead of less than 3% of the baseline, highlighting that integrating retrieval information can significantly improve generalization ability. Turc <cit.> proposed a pre-training distillation (PD) method, which is a universal yet straightforward algorithm for building compact models. It consists of a sequence of three standard training operations and can be applied to any architecture choice. The method also explores transferring task knowledge from large fine-tuned models using traditional logits-based KD and evaluates its performance on six datasets. On average, this pre-training distillation method performs best and even surpasses the corresponding teacher model. The above distillation algorithms are all based on BERT as the teacher model and GLUE as the evaluation benchmark. With the increasing size of models, however, existing distillation algorithms and evaluation standards can no longer meet the requirements. MINILLM <cit.> addresses the limitations of traditional logits-based knowledge distillation methods by proposing an innovative approach to distill large language models (LLMs) into smaller ones, arguing that minimizing the forward Kullback-Leibler divergence is ill-suited to free-running generation. This method therefore replaces the standard KD method's forward KLD objective with a reverse KLD, which is more suitable for KD on generative language models and aims to prevent student models from overestimating the low-probability regions of the teacher distribution. To further stabilize and accelerate training, an effective optimization method is introduced, comprising three key steps: 1) single-step decomposition to reduce variance, 2) teacher-mixed sampling to mitigate reward hacking, and 3) length normalization to counteract length bias. MINILLM is applied to models ranging in size from 120M to 13B parameters. Experimental evaluations on five datasets using Rouge-L <cit.>, human judgment, and GPT-4 feedback consistently demonstrate that this approach outperforms the standard KD baseline. Further research and analysis indicate that MINILLM can reduce exposure bias and improve long-response generation performance. Similar to MINILLM, GKD <cit.> moves beyond relying solely on a fixed set of output sequences, training student models to generate their own sequences with feedback from the teacher model. Unlike supervised KD methods, GKD allows for the use of alternative loss functions between the student and teacher, which is advantageous when student models lack the expressive capability to effectively mimic teacher distributions. Additionally, GKD enables the seamless integration of distillation and Reinforcement Learning (RL) fine-tuning for language models. By providing flexibility to optimize alternative divergence measures such as reverse KL and generalized JSD, GKD allows limited student capacity to focus on generating samples similar to those produced under teacher supervision. It has been demonstrated that on-policy GKD facilitates the integration of distillation with RL <cit.> fine-tuning of language models, a combination not previously explored. Regarding performance enhancement for initial students, on average, GKD yielded a relative gain of 2.1 times for summarization, 1.7 times for machine translation, and 1.9 times for arithmetic reasoning tasks across different sizes of T5 student models, underscoring the effectiveness of GKD.
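The divergence choices contrasted above (forward KL in standard KD, reverse KL in MINILLM, and interpolations such as the generalized JSD used by GKD and f-DISTILL below) differ only in how the teacher and student token distributions are compared. A minimal sketch of the three objectives on illustrative inputs:

import torch
import torch.nn.functional as F

def kl(p, q, eps=1e-9):
    # KL(p || q), averaged over the batch; eps guards against log(0).
    return (p * ((p + eps).log() - (q + eps).log())).sum(-1).mean()

def forward_kl(p_t, p_s):
    return kl(p_t, p_s)   # mode-covering: student spreads mass over all teacher modes

def reverse_kl(p_t, p_s):
    return kl(p_s, p_t)   # mode-seeking: student avoids low-probability teacher regions

def generalized_jsd(p_t, p_s, beta=0.5):
    # Interpolation beta*KL(p_t||m) + (1-beta)*KL(p_s||m) with mixture m.
    m = beta * p_t + (1 - beta) * p_s
    return beta * kl(p_t, m) + (1 - beta) * kl(p_s, m)

# Illustrative next-token distributions: batch of 4, vocabulary of 50.
p_t = F.softmax(torch.randn(4, 50), dim=-1)
p_s = F.softmax(torch.randn(4, 50), dim=-1)
for name, fn in [("forward KL", forward_kl), ("reverse KL", reverse_kl), ("generalized JSD", generalized_jsd)]:
    print(f"{name}: {fn(p_t, p_s).item():.4f}")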
Wen <cit.> proposed the f-DISTILL framework, which formulates sequence-level knowledge distillation by minimizing a generalized f-divergence function. This framework introduces four distillation variants, demonstrating that the existing SeqKD <cit.> and ENGINE <cit.> methods are approximations of KL and reverse KL distillation, respectively. Furthermore, the f-DISTILL method includes a step-wise decomposition to convert the complex sequence-level divergence into a more manageable word-level loss, which facilitates easier calculation. This method was evaluated on four datasets: DART for data-to-text generation <cit.>, XSum for summarization <cit.>, WMT16 EN-RO for machine translation <cit.>, and Commonsense Dialogue <cit.>. The experiments demonstrated that f-DISTILL variants outperformed existing distribution-matching KD methods, leading to performance improvements when combined with representation-matching KD methods. Additionally, the results indicated that symmetric distillation losses are superior to asymmetric ones, confirming that extreme mode averaging or collapse is suboptimal. MiniMA <cit.> found that the optimal distillation effect occurs when the student model is approximately 40% the size of the teacher model. It combines structured pruning with logits-based knowledge distillation, using LLaMA2-7B <cit.> as the teacher model to train the 3B MiniMA model. The results showed that MiniMA achieved impressive performance in knowledge, reasoning, and coding, while using a similar or even smaller number of tokens than the teacher model. §.§.§ Hint-based KD The feature-based knowledge distillation methods <cit.> extract knowledge from the embedding space, transformer layers, and prediction layers, allowing the student model to learn various aspects of the teacher model comprehensively. For instance, Sun <cit.> proposed a patient knowledge distillation (PKD) method aimed at compressing a large-scale teacher model into an equally effective lightweight student model. They proposed two distinct distillation strategies: 1) PKD-Last: the student model learns from the last k layers of the teacher model, based on the assumption that the top layers contain the most informative knowledge; 2) PKD-Skip: the student learns from every k-th layer of the teacher, based on the idea that the lower layers also contain essential information that should be gradually transferred during distillation. Experiments conducted on seven datasets across four tasks (sentiment classification, paraphrase similarity matching, natural language inference, and machine reading comprehension) showed that the PKD method outperformed standard knowledge distillation methods. It achieved superior performance and better generalization, significantly enhancing training efficiency and reducing storage requirements while maintaining accuracy comparable to the original large-scale model. MetaDistill <cit.> offers a simple and efficient alternative to traditional KD methods, in which the teacher model is kept fixed during training. Within its meta-learning framework, the teacher network instead improves its knowledge transfer to the student network by exploiting feedback on the student's performance.
Additionally, a pilot update mechanism is introduced to improve the alignment between the inner learner and the meta-learner, focusing on enhancing the inner learner's performance. Extensive experiments have validated the effectiveness and versatility of this method across text and image classification tasks. Furthermore, experiments on the GLUE benchmark have shown that MetaDistill significantly outperforms traditional knowledge distillation, achieving state-of-the-art compression performance. AD-KD <cit.> addresses two key limitations of existing knowledge distillation methods. First, student models often merely mimic the teacher's behavior without developing their own reasoning capabilities. Second, these methods typically focus on transferring knowledge specific to complex models while neglecting data-specific knowledge. To overcome these issues, AD-KD introduces an innovative attribution-driven knowledge distillation method, which calculates the importance score of each input token using a gradient-based attribution approach <cit.>. To minimize the impact of less significant dimensions in the teacher's input embeddings, a top-K strategy filters out dimensions with lower attribution scores. The remaining scores are aggregated and normalized to reflect the importance of individual tokens. Additionally, this method extracts attribution knowledge for all potential predictions, not just the highest-probability prediction. To improve knowledge transfer for reasoning and generalization, AD-KD explores multi-view attribution distillation of all potential decisions made by the teacher. Experimental results on the GLUE benchmark indicate that this method surpasses several state-of-the-art approaches in performance. Mukherjee <cit.> present XtremeDistil, a distillation method leveraging internal representations and parameter projections that are independent of the teacher's architecture. Unlike previous approaches focused on single-language GLUE tasks, this method distills multilingual Named Entity Recognition (NER) across 41 languages, using the multilingual bidirectional encoder representation from Transformers (mBERT) <cit.> as the teacher model. Experimental results indicate that XtremeDistil achieves higher compression and faster inference speeds. Additionally, the study explored several previously unexamined aspects of distillation, including the effects of unlabeled transfer data and annotation resources, the selection of multilingual word embeddings, architectural modifications, and inference latency. This method significantly compressed the teacher model, by up to 35 times in terms of parameters, and reduced batch inference latency by a factor of 51 while maintaining 95% of the performance in large-scale multilingual NER and either matching or surpassing it in classification tasks. TinyBERT <cit.> integrates pre-training distillation with fine-tuning distillation to capture both general-domain and task-specific knowledge from BERT. It extracts multiple types of knowledge from different layers, including the embedding layer, hidden states, attention matrices, and transformation layers. On the GLUE benchmark, TinyBERT retains more than 96.8% of the performance of its teacher BERT_base, while being 7.5 times smaller and 9.4 times faster at inference. MiniLM <cit.> introduced a deep self-attention distillation framework for task-agnostic Transformer-based language model (LM) distillation.
This method isolates the self-attention module of the teacher model's final Transformer layer and uses the scaled dot-product between values within this module as a novel form of deep self-attention knowledge. This technique addresses the challenge of layer alignment between teacher and student models by transforming the differently sized representations of both models into relation matrices of matching dimensionality, without requiring additional parameters for transforming student representations. This gives the student model flexibility in depth. MiniLM retained over 99% accuracy on SQuAD 2.0 <cit.> and various GLUE benchmark tasks while using only 50% of the Transformer parameters and computational resources of the teacher model. This demonstrates the effectiveness of employing a teacher assistant <cit.> in distilling large pre-trained Transformer-based models. TED <cit.> introduces an innovative task-aware layer-wise distillation method designed to combat underfitting in student models and remove unnecessary information from the teacher's hidden representations. This method aligns the hidden representations of student and teacher at each layer, employing task-aware filters to extract the knowledge relevant to the target task. By doing so, it narrows the knowledge gap between the models and enhances the student's ability to adapt to the target task. MobileBERT <cit.> and HomoBERT <cit.> primarily focus on adjusting the model's width while maintaining its depth. This contrasts with Turc <cit.>, who found that altering model depth significantly impacts performance. MobileBERT introduces bottlenecks and inverted bottlenecks to both teacher and student models to modify hidden dimensions. However, this approach can disrupt the parameter balance between the multi-head attention and feed-forward networks, which is mitigated by using a stacked Feed-Forward Network (FFN) approach. Knowledge extraction is then carried out through the attention and hidden states of the transformer layers. HomoBERT, on the other hand, employs pruning. It starts by initializing the student model with the teacher model to ensure minimal initial divergence. It then uses input embeddings, hidden states, attention matrices, and output logits to construct the distillation loss. In each iteration, the least important neurons are pruned based on importance scores, and the student model is trained using the distillation loss. This iterative process continues until the student model reaches the desired size. While white-box distillation is limited by the proprietary nature of many LLMs, restricting its applicability, the rise of diverse open-source LLMs like Alpaca <cit.> and Vicuna <cit.> offers promising prospects for the future of white-box distillation. §.§ Robustness Evaluation of White-box KD There are various evaluation standards for existing white-box KD algorithms, most of which utilize BERT as the teacher model. However, the effectiveness of these distillation algorithms in the context of LLMs remains unclear. Building on the work presented in <cit.>, we conducted a unified evaluation of these algorithms from a robustness perspective, specifically focusing on adversarial robustness and out-of-distribution (OOD) robustness. Both types of robustness pertain to performance under input disturbances, which is particularly critical for safety-sensitive applications.
Adversarial robustness examines the stability of models against adversarial and imperceptible disturbances, while OOD robustness assesses performance on unseen data that differs from the training data distribution. To evaluate adversarial robustness, we employed the AdvGLUE <cit.> and ANLI <cit.> benchmarks, using Attack Success Rate (ASR) as the metric. For OOD robustness, we used the Flipkart <cit.> review and DDXPlus <cit.> medical diagnostic datasets, with F1-score (F1) as the indicator. Inspired by the work on MINILLM <cit.>, we utilized the Dolly [https://github.com/databrickslabs/dolly/tree/master] dataset for distillation, fine-tuning both student and teacher models. We evaluated five distillation algorithms and four models concurrently to assess their robustness. The evaluation results are shown in Tables <ref>-<ref>. Firstly, we observed that MINILLM demonstrated superior overall distillation performance in GPT-2. Notably, for the 340M-sized GPT-2, it achieved state-of-the-art results on both adversarial and out-of-distribution datasets when compared to the other four distillation algorithms. Furthermore, MINILLM outperformed the other algorithms on the Flipkart and DDXPlus datasets for GPT-2 of any size, highlighting its exceptional generalization capability to out-of-distribution data. Secondly, for the OPT model, we discovered that the most straightforward KD algorithm, which employs the teacher distribution as supervision for each token step to fine-tune the student model, achieved the best overall performance. Likewise, MINILLM outperformed other distillation algorithms and even exceeded the performance of teacher models for OPTs of any size on the Flipkart dataset. Finally, for LLaMA, SeqKD demonstrated a comparatively better distillation effect, whereas for LLaMA2, JS showed a relatively superior performance. This suggests that even when the model size is identical and the model structure is similar, the effectiveness of the same distillation algorithm can vary significantly. §.§ Discussion on White-box KD Logits-based KD methods typically focus on aligning the output distributions between the teacher and student models. In contrast, hint-based KD methods can convey richer information by aligning the intermediate layers, leading to better results. However, implementing layer-to-layer knowledge distillation necessitates careful design of the layer mappings between the teacher and student models and requires a deep understanding of the model architecture. Both logits-based and hint-based KD methods demand substantial GPU memory during the distillation process. Even though the teacher network doesn't need backpropagation, the activation of intermediate features during forward propagation consumes a significant amount of GPU memory. Therefore, exploring ways to reduce training costs and shorten training times is crucial. §.§ Black-box Knowledge Distillation The two previously discussed distillation techniques rely on access to the internal data of the teacher model, categorizing them as white-box distillation methods, which require internal data during training. However, many modern large-scale closed-source models do not provide access to internal data, limiting us to using only model predictions. Distillation where knowledge is transferred solely through the teacher model's predictions is known as black-box knowledge distillation. 
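In its simplest sequence-level form, the black-box recipe can be sketched in a few lines; the fragment below is an illustrative outline only, where query_teacher, student, tokenizer, and optimizer are hypothetical placeholders (the student side follows a Hugging Face-style interface), not the API of any specific system.

```python
# Minimal black-box distillation loop (SeqKD-style): only teacher *text* is used.
def distill_black_box(prompts, query_teacher, student, tokenizer, optimizer):
    student.train()
    for prompt in prompts:
        # 1) The teacher is closed-source: we can only observe its generated output,
        #    e.g., through an API call; no logits or hidden states are available.
        teacher_text = query_teacher(prompt)
        # 2) Fine-tune the student with ordinary next-token prediction on the pair.
        batch = tokenizer(prompt + teacher_text, return_tensors="pt")
        loss = student(**batch, labels=batch["input_ids"]).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```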
Researchers have found that when model parameters are sufficiently large, the models exhibit remarkable versatility, enabling them to handle complex tasks. Many black-box distillation methods take advantage of this capability, typically utilizing three techniques: In-Context Learning, Chain-of-Thought, and Instruction Following. In this section, we further categorize black-box KD methods based on the use of these emergent capabilities. §.§.§ In-Context Learning ICL was initially introduced in GPT-3 <cit.>, where it employs a natural language prompt that includes both task descriptions and multiple task examples as demonstrations. The process begins with the task description, followed by selecting specific instances from the task dataset to serve as examples. These instances are then formatted into natural language prompts using a predefined template and arranged in a particular order. Finally, the test samples are incorporated into the input of the LLM to produce the output. Expanding on this concept, Huang et al. <cit.> propose In-Context Learning Distillation, which aims to enhance the few-shot learning capabilities of multitask models by effectively extracting and transferring knowledge through in-context learning and language modeling objectives. This approach introduces two paradigms for few-shot learning: Meta In-Context Tuning and Multitask In-Context Tuning. In Meta-ICT <cit.>, the language model undergoes meta-training across a broad spectrum of tasks using in-context learning objectives. Subsequently, it adapts to unseen target tasks through in-context learning. However, the efficacy of in-context learning heavily relies on the knowledge accumulated during pretraining <cit.>, potentially limiting its ability to fully leverage the input-label correspondence provided in the training data <cit.>. To address this limitation, an alternative few-shot learning paradigm called Multitask In-Context Tuning is proposed. While Meta-ICT enables the student model to adapt to new tasks via in-context learning and teacher guidance, Multitask-ICT treats all target tasks as training tasks and utilizes examples directly from these tasks for in-context learning distillation. These two paradigms for few-shot learning involve a trade-off between performance and computational efficiency. Results across tasks such as classification, natural language inference, and question answering indicate that Multitask-ICT achieves a reduction in model size of 93% while retaining 91.4% of the teacher's performance. It therefore proves to be more effective, albeit with higher computational costs. LLM-R <cit.> utilizes a pre-trained frozen LLM to retrieve high-quality in-context examples, which are then ranked to generate training data. Subsequently, it constructs a reward model using a cross-encoder to capture ranking preferences. Finally, knowledge distillation is applied to train a dense retriever based on dual encoders. A comprehensive evaluation of LLM-R across diverse tasks consistently demonstrates superior performance compared to several strong baselines. Furthermore, the model scales across different task sizes and LLM architectures. Detailed analysis indicates that the approach enhances in-context learning performance by an average of 7.8%, with consistent improvements observed across various sizes of LLMs. §.§.§ Chain-of-Thought Chain-of-Thought (CoT) <cit.> represents an advanced prompting strategy aimed at enhancing LLMs' ability to tackle complex reasoning tasks.
Unlike the input-output pair approach used in ICL for prompt formulation, CoT integrates intermediate reasoning steps, which lead to the final outputs, into the prompts. Typically, CoT distillation <cit.> involves leveraging large-scale models to construct enriched datasets focused on reasoning tasks, which are then utilized for fine-tuning student models. Thus, the primary focus is on generating high-quality rationales for training and ensuring effective utilization of these rationales by students <cit.>. Li <cit.> pioneered the use of explanations generated by LLMs to enhance the training of smaller reasoners. They systematically explored three methods for deriving explanations from LLMs and integrated them into a multitask learning framework to empower compact models with robust reasoning and interpretation capabilities. Across multiple reasoning tasks, experiments consistently demonstrated that their approach outperforms baseline fine-tuning methods under various conditions. Notably, it achieved up to a 9.5% accuracy improvement over GPT-3 (175B) after 60 rounds of fine-tuning on CommonsenseQA. The high-quality explanations generated by their method also make the rationale behind the model's predictions interpretable. Hsieh <cit.> introduced Distilling Step-by-Step, a novel and straightforward approach aimed at reducing the amount of training data required to distill and fine-tune LLMs into smaller models. Central to their method is a paradigm shift: LLMs are not merely sources of noisy labels but proxies capable of providing natural language rationales to justify their predictions. Empirical findings across four NLP benchmarks yielded three notable outcomes. Firstly, compared to fine-tuning and traditional distillation methods, their model reduced the average number of training samples required by over 50% (with some reductions exceeding 85%), leading to improved performance. Secondly, their model achieved superior performance to LLMs while being significantly smaller in size, thereby reducing the computational resources needed for deployment. Thirdly, their method concurrently reduced model size and the required data to outperform LLMs; for example, their final 770M T5 model surpassed the performance of a 540B-parameter LLM using only 80% of the labeled dataset. Moreover, Ho <cit.> propose Fine-tune-CoT, a method harnessing LLMs' reasoning capabilities to guide smaller models in solving complex tasks. By generating multiple reasoning solutions from the teacher model through random sampling, they enrich the training data of the student model. Evaluation across 12 tasks using widely accessible models demonstrates that Fine-tune-CoT achieves substantial reasoning performance in smaller models while preserving much of the generality of prompt-based CoT reasoning, previously reliant on models with over 100 billion parameters. Consequently, models with as few as 0.3 billion parameters can outperform larger counterparts on specific tasks, even surpassing the performance of the 175-billion-parameter teacher model. Similarly, Chen <cit.> introduced Multi-CoT Consistent Knowledge Distillation (MCC-KD) to efficiently capture the diversity and coherence of reasoning capabilities. In MCC-KD, multiple rationales are generated for each question, and the consistency between the corresponding predictions is strengthened by minimizing the bidirectional KL divergence between answer distributions.
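The consistency objective of MCC-KD can be written down directly. The sketch below uses our own notation (it is an assumption-laden illustration, not code from the paper); it symmetrizes the KL divergence between the answer distributions produced under two different rationales for the same question.

```python
import torch.nn.functional as F

def bidirectional_kl(answer_logits_a, answer_logits_b):
    """Symmetrized KL between answer distributions induced by two rationales."""
    logp_a = F.log_softmax(answer_logits_a, dim=-1)
    logp_b = F.log_softmax(answer_logits_b, dim=-1)
    kl_ab = F.kl_div(logp_b, logp_a, log_target=True, reduction="batchmean")  # KL(a || b)
    kl_ba = F.kl_div(logp_a, logp_b, log_target=True, reduction="batchmean")  # KL(b || a)
    return 0.5 * (kl_ab + kl_ba)
```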
MCC-KD's efficacy is evaluated on mathematical reasoning and commonsense reasoning benchmarks across various model architectures. Empirical findings not only confirm MCC-KD's superior performance on in-distribution datasets but also highlight its robust generalization ability on out-of-distribution datasets. Fu <cit.> apply CoT to specialize smaller language models for multi-step mathematical reasoning tasks. The SOCRATIC CoT method, as detailed by Shridhar <cit.>, decomposes the original problem into a series of sub-problems and employs a pair of compact distilled models: a problem decomposer and a sub-problem solver. These models collaborate to break down and resolve complex problems presented in new tasks. Evaluation across various reasoning datasets, including GSM8K, StrategyQA, and SVAMP, demonstrates that this distillation approach enhances the performance of smaller models by over 70% compared to the baseline. On the other hand, SCOTT <cit.> introduces the core principle of leveraging an LLM to guide the student toward the correct answer through contrastive decoding. This method encourages the teacher model to generate tokens that align closely with the correct answer, thereby improving the fidelity of the distillation process. Jie <cit.> and Zhu <cit.> enhance mathematical reasoning capabilities through program distillation. Chae <cit.> and Wang <cit.> propose interactive multi-round learning frameworks: the former focuses on training students using multi-hop reasoning, while in the latter students actively communicate their learning status to the LLM teacher, which then offers customized explanations in response, guiding the students to reflect on their errors. §.§.§ Instruction Following The instruction-following capability aims to enhance the language model's ability to perform new tasks without heavy reliance on a limited set of examples. Through fine-tuning across various tasks specified by instructions, the language model demonstrates its proficiency in accurately executing tasks described in previously unseen instructions. However, in black-box distillation, knowledge transfer relies solely on datasets, making the availability of a sufficiently large dataset crucial. Therefore, a common effort in these approaches <cit.> is to create a comprehensive dataset comprising instructions, inputs, and outputs, which enables the student model to acquire extensive knowledge from the teacher model. Specifically, Wang <cit.> propose Self-Instruct, a semi-automated process for improving a language model's instruction-following ability by bootstrapping from its own generations. The process begins with a constrained seed set of manually crafted tasks, such as the 175 seed tasks used in that work, to guide the overall generation process. The model is first prompted with this seed set to generate a broader array of task instructions. Then, for the newly generated instructions, the framework creates input-output instances that can later be used for supervised instruction tuning. Finally, various heuristic methods are employed to automatically filter out low-quality or duplicate instructions before incorporating the remaining valid tasks into the task pool. This iterative process can be repeated multiple times until a large number of tasks are obtained.
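A compressed sketch of this bootstrapping loop is given below; all helper names (generate_instructions, generate_instances, is_valid, too_similar) are hypothetical stand-ins for the prompting and filtering steps described above, not the actual Self-Instruct implementation.

```python
import random

def self_instruct(seed_tasks, llm, rounds, sample_size=8,
                  is_valid=lambda cand: True,            # placeholder quality filter
                  too_similar=lambda cand, pool: False): # placeholder duplicate filter
    """Bootstrapping loop in the spirit of Self-Instruct; filters default to no-ops."""
    task_pool = list(seed_tasks)  # e.g., 175 manually written seed tasks
    for _ in range(rounds):
        demos = random.sample(task_pool, k=min(sample_size, len(task_pool)))
        # Prompt the model with sampled demonstrations to propose new instructions.
        new_instructions = llm.generate_instructions(demos)
        # For each new instruction, create input-output instances for later tuning.
        candidates = [(ins, llm.generate_instances(ins)) for ins in new_instructions]
        # Heuristic filtering: drop malformed generations and near-duplicates.
        task_pool += [c for c in candidates
                      if is_valid(c) and not too_similar(c, task_pool)]
    return task_pool
```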
This method has influenced subsequent research: open-source models such as Alpaca <cit.>, Vicuna <cit.>, and GPT4All <cit.> were instruction-tuned following this paradigm. Expanding on these ideas, Peng <cit.> explore the use of GPT-4 to generate instruction-following data for fine-tuning LLMs. They curated a dataset of 52,000 instruction-following examples in both English and Chinese, along with comparison data rated by GPT-4. Using these datasets, they fine-tuned two student models, LLaMA-GPT4 and LLaMA-GPT4-CN. Additionally, they trained a reward model to evaluate the quality of model responses. Wu <cit.> meticulously compiled a dataset comprising 2.58 million instructions, ensuring coverage of diverse topics. These instructions were used as input to generate responses with GPT-3.5 Turbo. They then fine-tuned a range of models under the LaMini-LM series, including both encoder-decoder and decoder-only architectures. Evaluation of the LaMini-LM models' performance involved applying automatic metrics across 15 benchmarks, alongside manual assessment. Results illustrate that the proposed LaMini-LM models achieve performance comparable to competitive baselines despite being only one-tenth the size. However, existing methodologies have predominantly concentrated on one-way knowledge distillation, where student model responses are aligned with those of teacher models without incorporating a "feedback" mechanism. To address this limitation, Jiang <cit.> introduce an innovative adversarial distillation framework consisting of three stages: imitation, discrimination, and generation. Leveraging the adaptable nature of LLMs, this framework incentivizes the teacher model to identify "challenging" instructions and generate new instructions for student models, thereby enhancing the effectiveness of knowledge transfer. This approach achieves open-generation capability comparable to ChatGPT using only 70,000 training samples, surpassing state-of-the-art instruction-tuned models (such as Vicuna-13B) by 55.4% and 16.7% on the zero-shot reasoning benchmarks BBH and AGIEval, respectively. In efforts to provide task-specific guidance, Chen <cit.> propose a fine-tuning dataset for code generation instructions and develop a multi-round personalized distillation approach. This approach enables student models to first attempt solving tasks independently, followed by adaptive refinements provided by the teacher to enhance their performance through execution feedback. Unlike traditional knowledge transfer methods where the teacher's prior knowledge is directly imparted to students, personalized refinement offers individualized learning experiences: students learn solely from examples of their own mistakes and iteratively improve their solutions. Meanwhile, UniversalNER <cit.> has conducted extensive research on named entity recognition tasks. Unlike the aforementioned methods that aim to increase instruction diversity, UniversalNER focuses on augmenting input diversity to enhance the model's generalization capabilities across various domains. §.§ Robustness Evaluation of Black-box KD Inspired by the work in <cit.>, we conducted a unified evaluation of the CoT-based step-by-step distillation algorithm from a robustness perspective. Due to the closed-source nature of the PaLM 540B model, we adhered to the experimental setup in <cit.> and used the generated CoT rationales to fine-tune the student model. The experimental results are presented in Tables <ref>-<ref>.
For GPT-2 models with 120M and 340M parameters, distillation using the rationales from the ANLI and e-SNLI datasets produced better results. However, as the model size increases, the explanatory power of these two datasets diminishes, and a similar trend is observed in OPT models. For OPT models of various sizes, the distillation effects of the rationales generated from ANLI and e-SNLI were suboptimal. This suggests that commonsense data (CQA) and mathematical data (SVAMP) are more conducive to CoT distillation in OPT models. Regardless of whether the student is LLaMA or OPT, CoT distillation using CQA and SVAMP outperforms distillation using the other two datasets on Flipkart and DDXPlus. This indicates that distilling mathematical abilities and commonsense knowledge enhances the model's ability to generalize to out-of-distribution data. §.§ Discussion on Black-box KD Black-box KD methods typically use LLMs to generate explanations or instruction pairs with which the student model is fine-tuned. In this approach, only the teacher model generates data, and only the student model is involved in training, making it memory-efficient. However, most current methods rely on closed-source teacher models, and generating additional data can be costly. Additionally, many methods do not open-source their data generation pipelines or the generated data, posing challenges for the fair evaluation of these black-box distillation algorithms. §.§ Others As large language models have advanced significantly, an inherent limitation remains: they cannot comprehend visual information, being primarily designed to process discrete text. Consequently, researchers are increasingly exploring ways to transfer the capabilities of language models into multimodal domains, where text and image data are integrated to enable a wider range of tasks <cit.>. Extracting knowledge from pre-trained multimodal models to enhance the performance and generalization of compact multimodal language models has become a focal point of interest in this field. §.§.§ Multi-Modal Large Language Models Knowledge distillation for multimodal large models is still in its nascent stages, focusing primarily on refining instruction-following capabilities. Li <cit.> pioneered a two-stage framework for distilling knowledge in multimodal large models. The initial stage involves multimodal pre-training to align multimodal features through a projection layer. The second stage, termed multimodal competitive distillation, establishes a bidirectional feedback loop encompassing: 1) multimodal instruction tuning, ensuring that student responses align with teacher-provided multimodal instructions; 2) multimodal evaluation, identifying challenging multimodal instructions; and 3) multimodal augmentation, where new instructions are generated and combined with the original images to create a new multimodal instruction dataset for training student models. Evaluation on datasets like ScienceQA <cit.>, SEED-Bench <cit.>, and the LLaVA Test Set <cit.> demonstrates that the resulting CoMD model surpasses existing models in reasoning tasks and zero-shot settings. Park <cit.> developed a localized visual commonsense model by sampling localized commonsense knowledge from LLMs. Users can specify regions as inputs, and a separately trained critic model selects high-quality examples.
Empirical results and human evaluations in the zero-shot setting indicate that this distillation method produces a more accurate VL reasoning model compared to simply passing generated referring expressions to baseline LLMs. Similarly, Hu <cit.> introduced Visual Program Distillation (VPD), an instruction-tuning framework. VPD leverages LLMs' reasoning capability by sampling multiple candidate programs, executing and verifying them, and translating correct programs into language descriptions of the reasoning steps for VLM distillation. Extensive experiments have shown that VPD enhances counting, spatial relationship understanding, and compositional reasoning abilities in VLMs, achieving state-of-the-art performance on challenging visual tasks such as MMBench <cit.>, OK-VQA <cit.>, A-OKVQA <cit.>, TallyQA <cit.>, POPE <cit.>, and Hateful Memes <cit.>. § APPLICATIONS In this section, we briefly explore the applications of LLM distillation in critical domains such as healthcare, education, and law. §.§ Healthcare Healthcare represents a critical domain deeply intertwined with human well-being. Since the inception of ChatGPT, numerous efforts have sought to harness the prowess of ChatGPT and other LLMs in the realm of medicine. For example, Zhang <cit.> introduced HuatuoGPT, a specialized LLM designed for medical consultations. By distilling data from ChatGPT and integrating real-world insights from physicians through supervised fine-tuning, HuatuoGPT incorporates a reward model aimed at synergizing the strengths derived from both data sources. Empirical results demonstrate that HuatuoGPT achieves state-of-the-art performance in medical consultations, outperforming GPT-3.5-turbo across various evaluations, including GPT-4 scoring, manual assessments, and medical benchmark datasets. Li <cit.> highlight the scarcity of LLMs specifically tailored to medical domains. Using LLaMA as a development and evaluation platform, they explored two enhancement strategies, model fine-tuning and knowledge integration, to augment the efficacy of LLMs as medical chatbots. Fine-tuning the dialogue model on a dataset comprising 100K patient-physician dialogues sourced from online medical consultation platforms, their experiments demonstrate that the resulting ChatDoctor model surpasses ChatGPT in terms of accuracy and F1 score. Furthermore, Wu <cit.> introduced PMC-LLaMA, which amalgamates 4.8M biomedical academic papers and 30K medical textbooks to infuse domain knowledge, coupled with comprehensive fine-tuning tailored to domain-specific instructions. With a modest parameter count of 13B, PMC-LLaMA demonstrates outstanding performance, surpassing ChatGPT across various public medical question answering benchmarks. §.§ Education Education represents another critical domain where LLMs show significant promise. Current research demonstrates that LLMs can achieve proficiency comparable to students in standardized exams across various disciplines such as mathematics, physics, and computer science <cit.>. Xie <cit.> introduced DARWIN, a framework for the natural sciences aimed at accelerating and enriching the automation of discovery processes. This approach incorporates the Scientific Instruction Generation (SIG) model, which integrates structured and unstructured scientific knowledge from public datasets and the literature. By eliminating the need for manual extraction or domain-specific knowledge graphs, DARWIN achieves state-of-the-art performance across diverse scientific tasks.
Luo <cit.> proposed WizardMath, which utilizes the Reinforcement Learning from Evol-Instruct Feedback (RLEIF) technique to enhance the mathematical reasoning capabilities of LLaMA-2 <cit.>. This method employs math-specific Evol-Instruct to generate diverse mathematical instruction data, subsequently training an Instruction Reward Model (IRM) and a Process-Supervised Reward Model (PRM) <cit.>. The IRM evaluates the quality of the evolved instructions, while the PRM provides feedback at each step of the solution process. Through extensive experimentation on two mathematical reasoning benchmarks, GSM8k <cit.> and MATH <cit.>, WizardMath significantly outperforms other open-source LLMs. Furthermore, Deng <cit.> introduced K2, an LLM tailored for geoscience, and established GeoBench, the first geoscience benchmark, to evaluate LLMs within this domain. §.§ Law Law, a domain rich in professional expertise, has recently adopted LLMs to address various legal tasks, such as legal document analysis <cit.> and legal document generation <cit.>. Huang <cit.> integrated legal expertise into the continual training phase of LLaMA by employing carefully designed supervised fine-tuning tasks. These tasks aimed to impart professional skills to the model while mitigating the issue of model hallucinations. To enhance training, they introduced a retrieval module that extracts relevant legal articles before the model generates responses. Similarly, Cui <cit.> integrated legal-specific data into LLaMA, resulting in the creation of ChatLaw. Concerned with the accuracy of reference retrieval from legal datasets, they developed a hybrid approach combining vector database retrieval and keyword-based retrieval. This approach addresses hallucination concerns and improves accuracy by implementing a self-attention mechanism that enhances the ability of large models to correct errors within reference data, thereby improving coherence and augmenting problem-solving proficiency in legal contexts. § CHALLENGES AND FUTURE DIRECTIONS §.§ Unified Evaluation Benchmark The existing benchmarks for evaluating knowledge distillation fall primarily into four categories: 1) General Language Understanding Evaluation (GLUE) Benchmark <cit.>: This benchmark consists of nine sentence-level classification tasks, including linguistic acceptability <cit.>, sentiment analysis <cit.>, text similarity <cit.>, entailment detection <cit.>, and natural language inference <cit.>. It is commonly utilized to assess distillation methods employing BERT as the teacher model. 2) Massive Multitask Language Understanding (MMLU) Benchmark <cit.>: This benchmark serves as a universal evaluation tool for assessing the multitask knowledge comprehension abilities of LLMs. It covers various domains such as mathematics, computer science, the humanities, and the social sciences, featuring tasks of varying difficulty levels from basic to advanced. 3) BIG-Bench <cit.>: A collaborative effort to create a comprehensive evaluation benchmark that explores the capabilities of existing LLMs across a diverse range of tasks. It includes 204 tasks spanning linguistics, child development, mathematics, commonsense reasoning, biology, physics, social bias, software development, and more. 4) Holistic Evaluation of Language Models (HELM) Benchmark <cit.>: This is a holistic evaluation benchmark comprising 16 core scenarios and 7 metric categories.
It integrates various previously proposed evaluation benchmarks to provide a holistic assessment of LLM performance. These benchmarks collectively cover a wide array of mainstream LLM evaluation tasks. Additionally, there are specialized evaluation benchmarks tailored to specific tasks, such as TyDiQA <cit.> for evaluating multilingual knowledge utilization and MGSM <cit.> for assessing multilingual mathematical reasoning. As large models continue to evolve, evaluation criteria are continually updated, and developing a unified evaluation standard for knowledge distillation remains a promising avenue of research. §.§ Advanced Algorithms Current methodologies primarily aim to equip student models with specific capabilities. For example, symbolic knowledge distillation <cit.> leverages LLMs to gather and filter data, extracting high-quality commonsense knowledge graphs for training commonsense models. Similarly, DISCO <cit.> employs LLMs to acquire counterfactual data, which is then filtered using a large teacher Natural Language Inference model to improve students' proficiency in natural language reasoning tasks. As open-source LLMs continue to evolve, exploring white-box distillation algorithms for LLMs could prove to be an effective approach for integrating multiple capabilities. Furthermore, the current development pace of MLLM distillation lags behind that of LLM distillation. Thus, investigating more advanced MLLM distillation algorithms could facilitate the integration of multiple modalities more effectively. §.§ Interpretability Stanton <cit.> explore the interpretability of knowledge distillation and introduce the notion of fidelity, the degree to which the student matches the teacher, to assess its reliability. Their study reveals several significant insights: 1) The relationship between student models' generalization performance and fidelity is not uniformly consistent; excluding self-distillation, models with the best generalization performance do not always exhibit the highest fidelity. 2) There is a notable correlation between student models' fidelity and calibration; although the most faithful student model may not always achieve the highest accuracy, it consistently shows superior calibration. 3) Optimization during the knowledge distillation process is challenging, resulting in low fidelity. In the era of large language models, knowledge distillation faces comparable difficulties. For example, current methods struggle to elucidate how CoT distillation imparts CoT capability to student language models, or to determine the amount of data required for instruction tuning. Therefore, integrating interpretability into the process is crucial for advancing LLM knowledge distillation. This integration not only aids in evaluating model distillation but also enhances the reliability and predictability of models in production. § CONCLUSION In this survey, we systematically investigate knowledge distillation algorithms from three perspectives: methods, evaluation, and application. Compared to smaller models, distillation of larger models faces more challenges. Despite considerable efforts by existing algorithms to tackle these challenges, many still rely on frameworks initially tailored for compressing smaller models, so the challenge of compressing large models persists. In the future, while ensuring the universality and generalization ability of LLMs, it is imperative to delve deeper into developing more efficient and effective compression algorithms.
This survey aims to furnish valuable references, shed light on the current landscape, and advocate for ongoing exploration of this pivotal theme to enable the effective design, learning, and application of various distillation objectives within the teacher-student framework.
arXiv:2407.02912v1 [math.AP] 3 Jul 2024 (also math.OC; MSC 49J45, 74Q05, 74C15)
Homogenization of layered materials with rigid components in two-slip finite crystal plasticity
Akira Ishikawa, Karel Svadlenka
§ ABSTRACT This paper is an extension of the result by Christowiak and Kreisbeck <cit.>, which addresses the Γ-convergence approach to a homogenization problem for composite materials consisting of two distinct types of parallel layers. In <cit.>, one of the layers of the material undergoes only local rotations, while the other allows local rotation and plastic deformation along a single slip system. On the other hand, real materials show an interplay of multiple directions of slip. Here we obtain the Γ-limit for the problem where the number of slip directions is increased to two. When the slip systems are orthogonal, we derive the full homogenized energy based on generalized convex envelopes of the original energy density. Since these envelopes are not completely known for angles between slip directions differing from the right angle, we present a partial, conditional homogenization result in this general case. The analysis is based on a modification of the classical construction of laminate microstructures, but several nontrivial difficulties arise due to nonconvex constraints being present in the composite energy.
§ LIST OF SYMBOLS
v^⊥: vector obtained by rotating vector v∈ℝ^2 by π/2 counterclockwise
I: 2× 2 identity matrix
SO(2): the rotation group of ℝ^2, i.e., the group of all orthogonal 2× 2 matrices with determinant 1
1_X: indicator function of a set X
(a)_+^s: means (max{0,a})^s, a∈ℝ, s>0
c: a generic positive constant, independent of the involved parameters, unless stated otherwise
e_i: the unit vector along the positive direction of coordinate axis x_i in ℝ^2
∂_i: the first-order partial derivative operator with respect to x_i
L^p(D): the Lebesgue space of functions with integrable p-th powers for 1 ≤ p < ∞
L^∞(D): the Lebesgue space of essentially bounded functions
L^p_loc(D): the Lebesgue space of functions with locally integrable p-th powers
L_0^p(D): the set of functions in L^p(D) with vanishing mean
W^k,p(D): the Sobolev space of functions with all derivatives up to order k in L^p(D)
→: strong convergence in a normed linear space
⇀: weak convergence in a normed linear space
§ INTRODUCTION AND MAIN RESULTS A major task of modern material science is to develop new materials with properties exceeding those of existing ones. Combination of two or more materials provides one possible way to do so. It turns out that a clever choice of material components and their suitable spatial distribution may lead to a dramatic improvement in mechanical strength, ductility, stiffness, corrosion resistance and other characteristics. Investigations of the relation between the properties and spatial arrangement of material components and the response of composite materials have been carried out thoroughly, especially by experimental, engineering and numerical methods (see, e.g., <cit.> and references therein), while mathematical tools to address this relation have been developed alongside. The main approach of this mathematical research is homogenization, where the scale of the geometrical structure of the composite material is taken to zero to obtain an averaged or effective material model.
This model is usually simpler than the full model of the composite material while retaining its important physical features, and thus is suited for numerical implementations and in some cases even for theoretical analysis aimed at revealing the principles behind improved material properties. A successful analytical tool introduced by De Giorgi et al. <cit.> for variational models based on energy minimization is the concept of Γ-convergence, providing a mathematically rigorous and physically consistent way to define limits of sequences of energy functionals. The still active research on composite materials has recently been boosted by the discovery of so-called LPSO magnesium alloys and a possibility of a new principle of material strengthening related to kink-band formation in the periodic layered structure of the alloy <cit.>. The LPSO alloy shows a periodic millefeuille structure of soft pure magnesium layers and rigid layers where additive elements accumulate. In this paper, we contribute towards the mathematical understanding of the kink-band strengthening mechanism, building on the seminal work of Christowiak and Kreisbeck <cit.>, where variational models of bilayered materials are studied (see also <cit.> for a few examples of related results). They consider the deformation of a planar material in which band-shaped rigid layers and softer layers with one available direction of plastic slip are periodically repeated. Since the actual LPSO alloy has multiple slip systems in its soft layers, which are thought to contribute to the strengthening, here we address the problem with two different slip systems in the soft layers. To introduce the setting of the problem, let Ω⊂ℝ^2 be a simply connected Lipschitz domain representing (the reference configuration of) an elastoplastic material in the plane undergoing deformation, and u:Ω→ℝ^2 be the function representing the deformation. Let λ∈ (0, 1) represent the fraction of the softer layer in the periodic structure, and for the unit cell Y = [0, 1)^2 define the subsets Y_soft := [0, 1)× [0, λ) ⊂ Y and Y_rig := Y \ Y_soft, which represent the soft and rigid parts, respectively. We extend this unit structure to the whole ℝ^2 and scale the oscillation width with ϵ>0, which represents the layer thickness. Then the hard and soft layers ϵ Y_rig∩Ω and ϵ Y_soft∩Ω are parallel to e_1, as shown in Fig. <ref>. We use the Kröner-Lee multiplicative decomposition of the deformation gradient ∇ u=F_eF_p as a fundamental assumption <cit.>. Here, the elastic part F_e describes local rotation and stretching of the crystal structure, and the inelastic part F_p accounts for local plastic deformation due to dislocations. For simplicity, we consider only local rotations for F_e, so values of F_e are restricted to SO(2), which is a reasonable approximation for metallic materials <cit.>. We also assume that there is no plastic deformation in the hard layer, i.e., F_p = I on ϵ Y_rig∩Ω. In the soft layers ϵ Y_soft∩Ω plastic glide can occur along one of two active slip systems (v_1,v_1^⊥), (v_2,v_2^⊥) with slip directions given by unit vectors v_1,v_2∈ℝ^2. Here v^⊥ denotes the counterclockwise rotation of v∈ℝ^2 by π/2. Hence, F_p=I+γ_1 v_1⊗ v_1^⊥+γ_2 v_2⊗ v_2^⊥ in ϵ Y_soft∩Ω, where γ_1, γ_2 ∈ℝ correspond to the amount of slip in each slip direction and satisfy γ_1γ_2=0 a.e. in Ω <cit.>.
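Since γ_1γ_2=0 a.e., at most one slip is active at each point, and the plastic part is then a volume-preserving simple shear along the active slip direction. For the reader's convenience we record the short computation behind this statement (a routine check added here, using the identity det(I+a⊗ b)=1+a· b in ℝ^2): if only the first slip is active, then
F_p v_1 = (I+γ_1 v_1⊗ v_1^⊥)v_1 = v_1 + γ_1 (v_1^⊥· v_1) v_1 = v_1,
det F_p = det(I+γ_1 v_1⊗ v_1^⊥) = 1 + γ_1 v_1· v_1^⊥ = 1.
Consequently, any matrix of the form F=R(I+γ_1 v_1⊗ v_1^⊥) with R∈ SO(2) satisfies det F=1 and |Fv_1|=1, which is precisely the characterization of the sets ℳ_i appearing below.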
If the slip direction at a point is v_i (i=1,2), the deformation gradient at that point is restricted to the set ℳ_i := {F∈ℝ^2× 2: F=R(I+γ_i v_i⊗ v_i^⊥), R∈ SO(2), γ_i∈ℝ} = {F∈ℝ^2× 2: det F=1, |Fv_i|=1}. In the rigid layer, both γ_1 and γ_2 vanish, so that the deformation gradient is restricted to SO(2)= ℳ_1∩ℳ_2. In view of rigid elasticity in our model, and considering that no plastic slip is allowed in the rigid layer, for the energy density in the rigid layer we have W_rig(F)= 0 if F ∈ SO(2), ∞ otherwise. This corresponds to the gradient being restricted to SO(2) in ϵ Y_rig. On the other hand, using the two-slip model in <cit.>, the energy density in the soft layer is W_soft(F)= γ_1^2+γ_2^2 if F ∈ℳ, ∞ otherwise, where ℳ = ℳ_1∪ℳ_2. Note that since slip in only one direction is allowed at every point, γ_1^2+γ_2^2 is equal either to γ_1^2 or to γ_2^2. Summarizing the above, we can define the energy functional E_ϵ:W^1,2(Ω;ℝ^2)∩ L_0^2(Ω;ℝ^2)→ℝ∪{∞} as E_ϵ(u):= ∫_Ω(γ_1^2+γ_2^2) dx if u∈ W^1,2∩ L_0^2(Ω;ℝ^2), where ∇ u=R(I+γ_1 v_1⊗ v_1^⊥ + γ_2 v_2⊗ v_2^⊥), R∈ L^∞(Ω;SO(2)), γ_i∈ L^2(Ω), γ_i=0 a.e. in ϵ Y_rig∩Ω, γ_1γ_2=0 a.e. in Ω, and E_ϵ(u):=∞ otherwise. We remark that for orthogonal slip directions one can write γ_1^2+γ_2^2=|(∇ u)v_1|^2+|(∇ u)v_2|^2-2. Hence, the functional E_ϵ can be rephrased as E_ϵ(u)= ∫_ΩW(∇ u) dx if u∈ W^1,2∩ L_0^2(Ω;ℝ^2), ∇ u ∈ℳ and ∇ u ∈ SO(2) in ϵ Y_rig∩Ω, and E_ϵ(u)=∞ otherwise, where W(F) = |Fv_1|^2+|Fv_2|^2-2 if F∈ℳ, and W(F)=∞ otherwise. The main results of this work are the following two homogenization theorems. The first one fully describes the homogenized material in the orthogonal case v_2=v_1^⊥, while the second one provides a partial analysis for the case of an arbitrary angle between slip directions. Let v_2=v_1^⊥. Then the family (E_ϵ)_ϵ Γ-converges as ϵ→ 0 with respect to the strong L^2(Ω;ℝ^2)-topology to the functional E(u)= λ∫_ΩW_hom(N) dx if u∈ W^1,2∩ L_0^2(Ω;ℝ^2), ∇ u=R(I+γ e_1⊗ e_2), R∈ SO(2), γ∈ L^2(Ω), and E(u)=∞ otherwise. Here, N is given by the formula λ N+(1-λ)R=∇ u, and W_hom:ℝ^2× 2→ℝ is defined by W_hom(F) := |Fv_1^⊥|^2-1 if det F=1, |Fv_1|≤ 1; |Fv_2^⊥|^2-1 if det F=1, |Fv_2|≤ 1; χ(|Fv_3|) if det F=1, |Fv_1|,|Fv_2|>1 and Fv_1· Fv_2> 0; χ(|Fv_3^⊥|) if det F=1, |Fv_1|,|Fv_2|>1 and Fv_1· Fv_2< 0; and ∞ otherwise, where v_3=-(v_1+v_2)/|v_1+v_2|=-(v_1+v_2)/√(2), and the function χ is defined by χ(z)=((2z^2-1)_+^1/2-1)_+^2, z∈ℝ. The symbol (a)_+^s for a∈ℝ and s>0 means (max{a,0})^s. The function W_hom appearing in Theorem <ref> coincides with the rank-one convex envelope of W in (<ref>) obtained in <cit.>. Note that if det F=1 then |Fv_1|,|Fv_2|> 1 and Fv_1· Fv_2=0 cannot happen at the same time (see proof of Lemma <ref>). Moreover, since |Fv_3|≷ |Fv_3^⊥| is equivalent to Fv_1· Fv_2≷ 0 (see Lemma <ref>), we can simply write W_hom(F) = χ(max{|Fv_3|,|Fv_3^⊥|}) if det F=1, |Fv_1|,|Fv_2|> 1. Let (v_1, v_2) form a right-handed system with angle 2θ∈ (π/2,π). Let the family (E_ϵ)_ϵ Γ-converge with respect to the strong L^2(Ω;ℝ^2)-topology to an integral functional E(u):L_0^2(Ω;ℝ^2)→ [0,∞] whose integrand is a function of ∇ u. Then E can be represented as E(u)= λ∫_ΩW_hom(N) dx if u∈ W^1,2∩ L_0^2(Ω;ℝ^2), ∇ u=R(I+γ e_1⊗ e_2), R∈ SO(2), γ∈ L^2(Ω), and E(u)=∞ otherwise.
Here, N is given by the formula λ N+(1-λ)R=∇ u, and W_hom:ℝ^2× 2→ℝ is given in part by W_hom(F) := |Fv_1^⊥|^2-1 if det F=1, |Fv_1|= 1; |Fv_2^⊥|^2-1 if det F=1, |Fv_2|= 1; h(|Fv_3|) if det F=1, |Fv_1|,|Fv_2|<1; h(|Fv_3|) if det F=1, |Fv_1|,|Fv_2|>1 and Fv_1· Fv_2> 0; h^⊥(|Fv_3^⊥|) if det F=1, |Fv_1|,|Fv_2|>1 and Fv_1· Fv_2< 0; and ∞ if det F ≠ 1, where v_3 = -(v_1 + v_2)/|v_1 + v_2|, and the functions h,h^⊥ are defined by h(z):= ((z^2sin^2θ -1)_+^1/2 - cosθsinθ)_+^2, h^⊥(z):= ((z^2cos^2θ -1)_+^1/2 - sinθcosθ)_+^2. We emphasize that this is a conditional and partial statement in the sense that it is assumed that the Γ-limit is an integral functional, while W_hom(F) remains unknown for matrices F with either |Fv_1|>1, |Fv_2|<1 or |Fv_1|<1, |Fv_2|>1. Notice also that in the orthogonal case θ = π/4, we always have max{|Fv_1|,|Fv_2|}≥ 1 for det F=1, and both functions h and h^⊥ reduce to the function χ from Theorem <ref>. We define the Γ-limit along a continuous parameter ϵ→ 0+ by requiring Γ-convergence along an arbitrary sequence (ϵ_j)_j such that ϵ_j → 0+. Therefore, the proof of Theorem <ref> consists of showing the following three claims for any sequence (ϵ_j)_j in ℝ such that ϵ_j → 0+ as j→∞. Compactness (Section <ref>): Let (u_j)_j be a sequence in L^2_0(Ω;ℝ^2) such that (E_ϵ_j(u_j))_j is uniformly bounded. Then there exists a subsequence of (u_j)_j whose limit in the strong L^2(Ω;ℝ^2)-topology is a function u satisfying E(u) < ∞. Liminf inequality (Section <ref>): Let (u_j)_j be a sequence in L^2_0(Ω;ℝ^2) with u_j → u in L^2(Ω;ℝ^2). Then lim inf_j→∞E_ϵ_j(u_j)≥ E(u). Recovery sequence (Section <ref>): For any u ∈ L^2_0(Ω;ℝ^2) with E(u) < ∞, there exists (u_j)_j⊂ L^2_0(Ω;ℝ^2) such that u_j→ u in L^2(Ω;ℝ^2) and lim_j→∞ E_ϵ_j(u_j) = E(u). The basic flow of the proof when the slip systems are orthogonal is similar to the one proposed in <cit.>. In particular, in the proof of existence of recovery sequences, we approximate a limit by expressing the matrix N appearing in Theorem <ref> as a rank-one convex combination of matrices with finite energy. Since the set of admissible deformations is widened by increasing the number of slip systems to two, the construction of rank-one convex combinations needs to be modified. In addition, due to the higher complexity of the homogenized functional, the simple approach adopted in <cit.>, namely reducing the construction of recovery sequences for general admissible functions for the Γ-limit to the case of piecewise affine functions, does not work. Similarly, the fact that, unlike the problem with only one slip system, the homogenized functional E has a different form from the original functional E_ϵ makes the proof of the liminf inequality difficult. Namely, the different form of the homogenized functional does not allow us to use the same method as in Corollary 2.5 of <cit.> to pass to the limit along a sequence of non-admissible deformations when proving the liminf inequality. Another essential complication in the proof of the liminf inequality is that a sequence converging to an admissible deformation can choose from two slip directions at each point. In this paper, we addressed the above issues in the following way. When proving the existence of recovery sequences, we first show that, in general, if piecewise affine functions are dense in the set of admissible functions for the Γ-limit, it is sufficient to deal with the case of piecewise affine limit functions. Subsequently, the piecewise affine case is realized through the construction of rank-one convex combinations, referring to <cit.>.
In order to prove the liminf inequality, the energies of convergent sequences sliding in different directions at each point are collectively evaluated by a judiciously chosen function f, defined in Lemma <ref>. The important point is that f can be chosen as a convex function. This function f plays an intermediary role, connecting the different forms of the original energy density W and its homogenization W_hom. We show that the extra terms arising due to this discrepancy between the densities can be estimated employing the function f. The same approach generalizes to the problem with arbitrary angles between slip directions. Section <ref> is devoted to the attempt to find generalized convex envelopes <cit.> of the energy density function in the general case of arbitrary slip systems. Due to the interference of the slip systems, the envelopes are identified using certain symmetries only on a certain subset of ℝ^2× 2. Based on this information, we then reveal the form of the Γ-limit for arbitrary slip systems, summarized in Theorem <ref>. The incomplete knowledge of the envelopes is reflected in the partiality of the result and also forces us to assume the integral form of the homogenized functional. We remark that routine technical modifications can be made to deal with the problem with W_soft(F)=γ_1^p + γ_2^p for F∈ℳ, where p>2. On the other hand, the variant of our problem which consists in adding a linear term |γ_1|+|γ_2| to the energy density remains fully open for the setting of more than one slip system. This term represents a dissipative contribution and is thus vital for addressing the evolution problem <cit.>, for example, in the framework of rate-independent systems <cit.>. For the problem with one slip system, the authors of <cit.> were able to identify the Γ-limit in the particular case of slip direction parallel to the layers by noticing a special structure leading to strong convergence of rotations and hence weak convergence of slips. However, for two slip systems, the trick to show strong convergence of rotations is not available. § HOMOGENIZATION FOR ORTHOGONAL SLIPS In this section, we present the details of the proof of the main result, Theorem <ref>, which deals with the special case of orthogonal slip systems. We start with compactness, continue with the proof of the liminf inequality and finish with a construction of recovery sequences, as outlined after the statement of the main theorems in Section <ref>. §.§ Compactness To begin with, we look at the asymptotic behavior of the material with stiff layers in the limit of vanishing layer width ϵ. It has been fully described in the following fundamental proposition quoted from <cit.>. Let Ω⊂ℝ^2 be a bounded Lipschitz domain. Suppose that the sequence (u_ϵ)_ϵ⊂ W^1,2(Ω;ℝ^2) satisfies u_ϵ⇀ u in W^1,2(Ω;ℝ^2) as ϵ→ 0 for some u ∈ W^1,2(Ω;ℝ^2) with det ∇ u=1 a.e. in Ω, and ∇ u_ϵ∈ SO(2) a.e. in Ω∩ϵ Y_rig for all ϵ >0, with Y_rig defined in (<ref>). Then there exist a matrix R∈ SO(2) and a function γ∈ L^2(Ω) such that ∇ u=R(I+γ e_1⊗ e_2). Furthermore, ∇ u_ϵ1_ϵ Y_rig∩ Ω⇀ |Y_rig| R in L^2(Ω;ℝ^2× 2). The function γ in (<ref>) is independent of x_1 in the sense that its distributional derivative ∂_1 γ vanishes <cit.>. Since any weakly convergent sequence (u_ϵ)_ϵ of bounded energy (E_ϵ)_ϵ fulfills the assumptions of this proposition, one obtains a necessary condition on the form of deformations for the homogenized material. Note that the conclusion of the proposition is independent of the structure of slip systems in the soft layers.
To prove compactness, we recall that the original energy has the form E_ϵ(u)= ∫_ΩW(∇ u) dx if u∈ W^1,2∩ L_0^2(Ω;ℝ^2) and ∇ u∈ K_ϵ, and E_ϵ(u)=∞ otherwise, for a set K_ϵ⊂ L^2(Ω;ℝ^2× 2), where W:ℝ^2× 2→ [0,∞] has quadratic growth, i.e., W(F) ≥μ|F|^2 - c for some μ>0. When (u_j)_j is a sequence satisfying E_ϵ_j(u_j)<c for some c>0 and all j, this implies that (∇ u_j_L^2)_j is uniformly bounded. Since ∫_Ωu_j dx=0, (u_j)_j is uniformly bounded in W^1,2(Ω;ℝ^2) due to the Poincaré-Wirtinger inequality. Therefore, we can choose a subsequence of (u_j)_j (not relabeled) that converges weakly in W^1,2(Ω;ℝ^2). Finally, using the Rellich-Kondrachov theorem, we find that (u_j)_j converges also strongly in L^2(Ω;ℝ^2) to a limit u∈ W^1,2(Ω;ℝ^2)∩ L^2_0(Ω;ℝ^2). To see that E(u) is finite, it is enough to note that the convergence u_j ⇀ u in W^1,2(Ω;ℝ^2) implies ∇ u_j *⇀∇ u in the sense of measures and to apply Proposition <ref>, cf. <cit.>. §.§ Lower bound The following fact, which is a generalization of Corollary 2.5 from <cit.>, plays an important role in the proof of the liminf inequality (<ref>). Let Ω be a cube, say Ω = Q = (0,l)^2 for l > 0, and let (u_ϵ)_ϵ⊂ W^1,2(Ω;ℝ^2) be such that E_ϵ(u_ϵ)≤ c for all ϵ>0 and u_ϵ⇀ u in W^1,2(Ω;ℝ^2) for some u∈ W^1,2(Ω;ℝ^2) with gradient of the form (<ref>). If, in addition, u is finitely piecewise affine, then lim inf_ϵ→ 0 E_ϵ(u_ϵ) ≥λ∫_ΩW_hom(1/λ(∇ u-(1-λ)R)) dx, where W_hom is defined in (<ref>). In order to prove the assertion, we prepare a lemma, which will also be used later on. Define the function f:ℝ^2× 2→ [0,∞) by f(F):= max{(|Fv_1|^2-1)_+, (|Fv_2|^2-1)_+, χ(max{|Fv_3|,|Fv_3^⊥|})}. Then f is convex and coincides with W on ℳ and with W_hom on the set 𝒩:={F∈ℝ^2× 2: det F=1}. Thus, W_hom is continuous on 𝒩. Since the maximum of convex functions is convex and (|Fv_1|^2-1)_+, (|Fv_2|^2-1)_+ are convex functions, it is only necessary to confirm that the third term is a convex function. This follows from the facts that χ(z) is nondecreasing for z≥ 0 and that F ↦ |Fv| is convex for any v∈ℝ^2. If F∈ℳ, by symmetry we can assume that F=R(I+γ v_1⊗ v_2). Then |Fv_1|^2-1=0, |Fv_2|^2-1=γ^2, and a short calculation shows that χ(max{|Fv_3|,|Fv_3^⊥|})=χ(√(1+|γ|+γ^2/2))=γ^2, which implies that f(F)=γ^2=W(F). The coincidence of f and W_hom on 𝒩 follows from the results in <cit.> (Step 2 in the proof of Theorem 1.1). In the following we can assume that u is affine in Ω. Otherwise, we can slightly generalize Ω to be a cuboid instead of a cube and apply the same argument to each affine part of u. Indeed, since by Remark <ref> the limit is a function of x_2 only, each piecewise affine region is a cuboid. Since ∇ u_ϵ⇀∇ u in L^2(Ω;ℝ^2× 2), and, by (<ref>), ∇ u_ϵ1_ϵ Y_rig⇀ (1-λ)R in L^2(Ω;ℝ^2× 2), it follows that ∇ u_ϵ1_ϵ Y_soft⇀∇ u -(1-λ)R in L^2(Ω;ℝ^2× 2). Define the sets Ω_ϵ^i and Ω_ϵ^0 via Ω_ϵ^i:=(0,l)×((i-1)ϵ,iϵ) (i=1,…, ⌊ l/ϵ⌋), Ω_ϵ^0:=Ω\⋃_i=1^⌊ l/ϵ⌋ Ω^i_ϵ. Then |Ω_ϵ^0|<lϵ and since |∫_Ω'∇ u_ϵ dx|≤ |Ω'|^1/2∇ u_ϵ_L^2(Ω) for Ω'⊂Ω_ϵ^0, we deduce ∫_Ω^0_ϵ∩ϵ Y_soft∇ u_ϵ dx→ 0 as ϵ→ 0. In the same fashion, max{(|∇ u_ϵv_1|^2-1)_+,(|∇ u_ϵv_2|^2-1)_+}≤|∇ u_ϵ|^2 and χ(max{|∇ u_ϵv_3|, |∇ u_ϵv_3^⊥|}) ≤ 2|∇ u_ϵ|^2 imply ∫_Ω^0_ϵf(∇ u_ϵ) dx≤∫_Ω^0_ϵ2 |∇ u_ϵ|^2 dx→ 0 as ϵ→ 0. Considering that f(∇ u_ϵ)=0 a.e. in ϵ Y_rig∩Ω and that W(∇ u_ϵ)=f(∇ u_ϵ) a.e.
in ϵ Y_soft∩Ω by Lemma <ref>, we have lim inf_ϵ→ 0E_ϵ(u_ϵ) = lim inf_ϵ→ 0∫_ϵ Y_soft∩Ω W(∇ u_ϵ) dx =lim inf_ϵ→ 0∫_Ω f(∇ u_ϵ) dx =lim inf_ϵ→ 0(∑_i=1^⌊ l/ ϵ⌋∫_Ω^i_ϵ∩ ϵ Y_soft f(∇ u_ϵ) dx+∫_Ω^0_ϵf(∇ u_ϵ) dx) ≥lim inf_ϵ→ 0 λϵ l∑_i=1^⌊ l/ ϵ⌋f(1/λϵ l∫_Ω^i_ϵ∩ ϵ Y_soft∇ u_ϵ dx) ≥lim inf_ϵ→ 0 λϵ l·⌊ l/ ϵ⌋· f(1/⌊ l/ ϵ⌋∑_i=1^⌊ l/ ϵ⌋1/λϵ l∫_Ω^i_ϵ∩ ϵ Y_soft∇ u_ϵ dx), where we have used the continuous and discrete form of Jensen's inequality in the last two steps, respectively. From (<ref>), we may continue as = lim inf_ϵ→ 0 λ|Ω|· f(1/⌊ l/ ϵ⌋ 1/λϵ l∫_Ω∩ ϵ Y_soft∇ u_ϵ dx) =λ|Ω|· f(1/λ(∇ u-(1-λ)R)) =λ|Ω|· W_hom(1/λ(∇ u-(1-λ)R)) . Here, we employed (<ref>) and the continuity of f in the second equality. We proceed to the proof of the liminf inequality. Let Ω be a cube, that is, Ω = Q = (0, l)^2 for l > 0. The proof for the general case is done by approximating Ω by a finite number of cubes and taking the supremum in the resulting liminf inequality over all such approximations. Let (ϵ_j)_j fulfill ϵ_j→ 0 as j→∞, and let (u_j)_j be a sequence with u_j→ u as j→∞ in L^2(Ω;ℝ^2) such that (E_ϵ_j(u_j))_j is bounded. Then, similarly as in the compactness proof (Section <ref>), the limit u∈ W^1,2(Ω;ℝ^2) satisfies ∇ u=R(I+γ e_1⊗ e_2) for some R∈ SO(2) and γ∈ L^2(Ω). If γ is piecewise constant, Corollary <ref> implies the liminf inequality, i.e., lim inf_j→∞ E_ϵ_j(u_j) ≥∫_ΩW_hom(1/λ(∇ u-(1-λ)R)) dx = E(u). The general case γ∈ L^2(Ω) can be reduced to the previous one through approximating γ by piecewise constant functions. Since γ is essentially a function of x_2 only (cf. Remark <ref>), one can approximate γ in one-dimension and extend it constantly in the x_1-direction. Let γ^(ϵ)∈ C^∞_0(0,l) be such a one-dimensional proxy for γ fulfilling γ^(ϵ)-γ_L^2(Ω)≤ϵ, and (ζ_k^(ϵ))_k⊂ L^2(Ω) be a approximation by simple functions for γ^(ϵ) such that ζ^ϵ_k-γ^(ϵ)<1/k. By a diagonal argument we obtain a subsequence ζ_k:=ζ_k^(ϵ_k) that satisfies ζ_k→γ in L^2(Ω) as k→∞, ζ_k=∑^n_k_i=1ζ_k,i1_(t_k,i-1,t_k,i), where (t_k,i-1,t_k,i)_i are divisions 0=t_k,0<t_k,1<⋯< t_k,n_k=l of the interval (0,l). This approximation can be constructed so that the subdivisions are nested. For ζ_k obtained in this way, let w_k∈ W^1,2(Ω;ℝ^2)∩ L^2_0(Ω;ℝ^2) be given by ∇ w_k=R(I+ζ_k e_1⊗ e_2) for k∈ℕ. Adapting the method from <cit.>, we construct (_j)_j,(_k,j)_j⊂ W^1,2∩ L^2_0(Ω;ℝ^2) weakly converging to u and w_k, respectively: _j⇀ u, _k,j⇀ w_k ∀ k, both in W^1,2(Ω;ℝ^2) as j→∞. Furthermore, we require that (_j)_j,(_k,j)_j are defined so that ∇_k,j=∇_j in ϵ_j Y_rig∩Ω for all j,k∈ℕ, and so that there is a nondecreasing sequence of natural numbers (k(j))_j diverging to ∞ as j→∞, for which ∇_k,j-∇_j_L^2(Ω;ℝ^2× 2)≤1/λζ_k -ζ_k(j)_L^2(Ω)+c/√(k(j)) holds for all k∈ℕ and all j≥ j_0, where j_0 may depend on k. Here, c is a constant independent of j,k. To satisfy these conditions, we define (_k,j)_j with zero mean by ∇_k,j=R+∑_i=1^n_k(N_k,i-R)1_ϵ_j Y_soft∩Ω1_(0,l)×(⌈ϵ^-1_j t_k,(i-1)⌉ϵ_j,⌊ϵ^-1_j t_k,i⌋ϵ_j), with N_k,i∈𝒩 defined from ζ_k,i by λ N_k,i+(1-λ)R=R(I+ζ_k,i e_1⊗ e_2). Note that _k,j is well defined by its gradient since N_k,i and R rank-one connect along all segments [0,l] × m ϵ_j, m=1, …, ⌊ l/ϵ_j ⌋ (see Fig. <ref>). Thanks to the averaging lemma on weak convergence of periodic functions, combined with the fact that the measure of the region around the lines [0,l]× t_k,i, where ∇_k,j deviates from a periodic function, vanishes for j→∞ and fixed k, we have ∇_k,j⇀∑_i=1^n_k(λ N_k,i+(1-λ)R)1_(0,l)× (t_k,i-1,t_k,i)=∇ w_k in L^2(Ω;ℝ^2× 2) for every k∈ℕ as j→∞. 
Thus the second condition in (<ref>) is fulfilled. The idea to realize also the first condition in (<ref>), together with (<ref>) and (<ref>) is to use a diagonal argument to define _j:=_k(j),j. Then (<ref>) trivially holds. Furthermore, to achieve (<ref>), we set τ_k:=min_h (t_k,h-t_k,h-1) and estimate ∇_k,j-∇_j_L^2(Ω;ℝ^2× 2) = ∇_k,j-∇_k(j),j_L^2(Ω) ≤ ∑_i=1^n_k∑_h=1^n_k(j)(N_k,i-N_k(j),h)1_(0,l)×(t_k(j),h-1,t_k(j),h)1_(0,l)×(t_k,i-1,t_k,i)_L^2(Ω) +√(2ϵ_j/τ_k(j))∑_i=1^n_k∑_h=1^n_k(j)(N_k,i-R)1_(0,l)×(t_k(j),h-1,t_k(j),h)1_(0,l)×(t_k,i-1,t_k,i)_L^2(Ω). The second term accounts for the discrepancy between ∇_k,j and ∇_k(j),j possibly occurring in an ϵ_j-neighborhood of the jump lines t_k(j),h, where we assume that j is large enough so that k(j)>k. Note that since the divisions (t_k,i)_i are nested when k increases, the subintervals corresponding to ζ_k(j) are then a subdivision of those corresponding to ζ_k. Accordingly, ∇_k,j-∇_j_L^2(Ω;ℝ^2× 2) ≤1/λ ∑_i=1^n_k∑_h=1^n_k(j)(ζ_k,i-ζ_k(j),h)1_(0,l)×(t_k(j),h-1,t_k(j),h)1_(0,l)×(t_k,i-1,t_k,i)_L^2(Ω) + 1/λ√(2ϵ_j/τ_k(j))∑_i=1^n_kζ_k,i1_(0,l)×(t_k,i-1,t_k,i)_L^2(Ω) =1/λ ζ_k-ζ_k(j)_L^2(Ω)+ 1/λ√(2ϵ_j/τ_k(j))ζ_k_L^2(Ω). Therefore, to have (<ref>) and (<ref>), it is enough to find (k(j))_j so that ϵ_j/τ_k(j)≤1/k(j) ∀ j and _k(j),j⇀ u in W^1,2(Ω;ℝ^2) as j→∞. We next construct such a sequence (k(j))_j. Noting that (_k,j)_k,j and (w_k)_k are uniformly bounded in W^1,2(Ω;ℝ^2), we let d denote a distance metrizing the weak topology of W^1,2(Ω;ℝ^2) in some closed sphere containing (_k,j)_k,j, (w_k)_k and u. Taking into account the weak convergence of _p,j to w_p and the fact that (ϵ_j/τ_p)_j is a decreasing sequence for fixed p, we can find an increasing sequence (j(p))_p⊂ℕ such that j(1)=1 and for p≥ 2, d( _p,j,w_p ) ≤1/p and ϵ_j/τ_p≤1/p when j ≥ j(p) . Then we define k(j) for j∈ℕ as follows: k(j)=p if j(p) ≤ j ≤ j(p+1)-1 . For any δ>0, take p∈ℕ sufficiently large so that 1/p< min{1,δ} and so that for any k≥ p one has d(w_k,u)<δ. Then j≥ j(p) implies d(_k(j),j,u)≤ d(_k(j),j,w_k(j)) + d(w_k(j),u)<2δ , and thus _j=_k(j),j⇀ u as j→∞. In addition, by construction of (k(j))_j, ϵ_j/τ_k(j)<1/k(j) for j≥ j(2), which in combination with (<ref>) yields (<ref>). Now we set z_k,j=u_j-_j+_k,j for j,k∈ℕ. By (<ref>), f(∇ z_k,j)=0 a.e. inϵ_j Y_rig∩Ω, where f is the function defined in Lemma <ref>. By Proposition <ref>, for each k∈ℕ, ∇ z_k,j1_ϵ_j Y_rig⇀ (1-λ)R in L^2(Ω;ℝ^2× 2) as j→∞. Also, thanks to (<ref>), z_k,j weakly converges to the piecewise affine function w_k in W^1,2(Ω;ℝ^2) as j→∞. We now show the liminf inequality using the proof of Corollary <ref> and the following lemma, whose proof can be found in Appendix <ref>. Let F=A+D, where D=Rγ_1 e_1⊗ e_2 and either A=R(I+γ_2 v_2⊗ v_1) or A=R(I+γ_2 v_1⊗ v_2) for some γ_1, γ_2 and R∈ SO(2). Then there exists a constant c, such that f(F) ≤ |F|^2 - 2 +c(√(|D|)+|D|)( √(|A|)+ |A|+√(|D|)+|D|) . In view of (<ref>), we know that ∇_k,j is equal to R in rigid layers and either N_k,i=R(I+ζ_k,i/λ e_1⊗ e_2) or R in soft layers. Since _j = _k(j),j is a subsequence, it satisfies the same property and thus ∇_j - ∇_k,j = {[ 0 in Ω∩ϵ_j Y_rig; Rγ/λ e_1⊗ e_2 in Ω∩ϵ_j Y_soft ]. for some γ∈ℝ. Hence, we may apply Lemma <ref> to F:=∇ z_k,j decomposed as F=A+D, where A:=∇ u_j ∈ℳ and D:=∇_k,j-∇_j. 
Combined with Hölder's inequality, with the boundedness of ∇ u_j _L^2(Ω; ℝ^2× 2), ζ_k _L^2(Ω) and with the estimate (<ref>), this yields ∫_Ω f(∇ z_k,j) dx ≤∫_Ω( | ∇ z_k,j|^2 -2 ) dx + c ∇_k,j-∇_j _L^2(Ω; ℝ^2× 2)^1/2 ≤∫_Ω( | ∇ z_k,j|^2 -2 ) dx + c ( ζ_k-ζ_k(j)_L^2(Ω)+1/√(k(j)))^1/2 . Using triangle inequality and the uniform boundedness of ∇_j-∇_k,j_L^2, ∇ z_k,j_L^2 and ζ_k-ζ_k(j)_L^2, we get lim inf_j→∞E_ϵ_j(u_j) =lim inf_j→∞∫_Ω( |∇ z_k,j+(∇_j-∇_k,j)|^2-2 ) dx ≥lim inf_j→∞( ∫_Ω( |∇ z_k,j|^2-2 ) dx - c ∇_j-∇_k,j_L^2(Ω; ℝ^2× 2)) ≥lim inf_j→∞(∫_Ω f(∇ z_k,j) dx - c ( ζ_k-ζ_k(j)_L^2(Ω)+1/√(k(j)))^1/2) ≥lim inf_j→∞∫_Ω f(∇ z_k,j) dx - c ζ_k-γ_L^2(Ω)^1/2 ≥ E(w_k)- c ζ_k-γ_L^2(Ω)^1/2 , where the last estimate can be shown as in the proof of Corollary <ref> due to the weak convergence of (z_k,j)_j to the piecewise affine function w_k together with the convergence ∇ z_k,j1_ϵ_j Y_rig⇀ (1-λ)R. Therefore, taking limes inferior as k→∞, we arrive at lim inf_j→∞E_ϵ_j(u_j)≥ E(u) because E(w_k)=∫_Ω f(∇ w_k) dx, which is a lower semicontinuous functional thanks to the convexity of f <cit.>. §.§ Recovery sequence Let u∈ W^1,2(Ω;ℝ^2)∩ L^2_0(Ω;ℝ^2) be such that ∇ u=R(I+γ e_1⊗ e_2)=λ N +(1-λ)R with R∈ SO(2), N∈𝒩 and γ∈ L^2(Ω). The goal of this section is to show the existence of a sequence (u_j)_j ⊂ L_0^2(Ω;ℝ^2) such that u_j→ u in L^2(Ω;ℝ^2) and E_ϵ_j(u_j)→ E(u). The idea of constructing such a sequence would be to take a function w∈ W^1,∞∩ L^∞_0 (Y;ℝ^2) with ∇ w=R1_ Y_rig+N1_Y_soft in Y, extend it periodically and set ∇ u_ϵ(x)=∇ w(x/ϵ) for x∈Ω. Then the laminate ∇ u_ϵ converges weakly to λ N +(1-λ)R, as desired. However, since N may not belong to the admissible set ℳ, this construction needs to be refined by grafting into the soft layer another laminate between two matrices from ℳ to replace the matrix N (see Fig. <ref>). To realize this goal, we first prove in Section <ref> that it is in fact sufficient to construct recovery sequences for piecewise affine limit functions u, i.e., piecewise constant functions γ. Then we address the construction of suitable laminates approximating N in Section <ref> and, finally, construct recovery sequences for constant and piecewise constant functions γ. §.§.§ Recovery sequence for general γ∈ L^2(Ω) Let (E_ϵ)_ϵ:W^1,2∩ L_0^2(Ω;ℝ^2)→ [0,∞] be a functional family and let E:L_0^2(Ω;ℝ^2)→ [0,∞] have the form E(u)= ∫_ΩW_hom(∇ u) dx if u∈ K, ∞ otherwise. Here, K is a subset of W^1,2∩ L_0^2(Ω;ℝ^2) such that the set of finitely piecewise affine functions belonging to K is dense in K. Further, the integrand W_hom:ℝ^2× 2→ [0,∞] is a continuous function on ℒ:={∇ u(x): u∈ K, x∈Ω} with the growth W_hom(A)≤μ|A|^2 + c for some μ, c>0 and all A∈ℒ. If for any finitely piecewise affine u∈ K there exists (u_j)_j ⊂ W^1,2∩ L_0^2(Ω;ℝ^2) such that u_j ⇀ u in W^1,2(Ω;ℝ^2) and E_ϵ_j(u_j)→ E(u), then such a sequence exists for any u ∈ K. For a given u∈ K, let (w_k)_k⊂ K ∩ W^1,∞(Ω;ℝ^2) be a sequence of finitely piecewise affine functions such that w_k→ u in W^1,2(Ω;ℝ^2) as k→∞. We show that lim_k→∞E(w_k)=E(u). For α with 0<α≤ 1 we define Ω^α_k:={x∈Ω : |(∇ w_k-∇ u)(x)|>α} and Ω^α,0_k:=Ω\Ω^α_k. Then lim_k→∞|Ω^α_k|=0. Now, let k∈ℕ be fixed and consider u as a function over Ω^1,0_k. By assumption, W_hom is uniformly continuous on the set ℒ ∩ {F∈ℝ^2× 2:|F|≤∇ u_L^∞(Ω^1,0_k;ℝ^2× 2)+1}. Note that by (<ref>), ∇ u(x) and ∇ w_k'(x) belong to this set for all x ∈Ω^1,0_k∩Ω^α,0_k' and for all k'∈ℕ. 
Therefore, for any ε>0 one can find a sufficiently small α(k,ε) such that for every k'∈ℕ, |W_hom(∇ w_k')-W_hom(∇ u)|<ε in Ω^1,0_k ∩Ω^α,0_k'. It follows that ∫_Ω^1,0_k∩ Ω^α,0_k'W_hom(∇ w_k') dx <ε |Ω^1,0_k∩Ω^α,0_k'| + ∫_Ω^1,0_k∩ Ω^α,0_k'W_hom(∇ u) dx. Hence, by the growth assumption on W_hom, E(w_k') ≤∫_Ω^1_k∪ (Ω^1,0_k∩ Ω^α_k')( μ |∇ w_k'|^2+c ) dx + ∫_Ω^1,0_k∩ Ω^α,0_k'W_hom(∇ w_k') dx . The convergence ∇ w_k'→∇ u in L^2(Ω;ℝ^2× 2) together with (<ref>) implies that the first term on the right-hand side converges to ∫_Ω^1_k (μ |∇ u|^2+c ) dx as k'→∞. Similarly, in view of (<ref>), lim sup_k'→∞∫_Ω^1,0_k∩ Ω^α,0_k'W_hom(∇ w_k') dx ≤ε|Ω ^1,0_k| + ∫_Ω^1,0_k W_hom(∇ u) dx , so that (<ref>) yields lim sup_k'→∞ E(w_k')≤∫_Ω^1_k (μ |∇ u|^2+c ) dx + ε|Ω| + ∫_Ω W_hom(∇ u) dx. Taking ε→ 0+ and k→∞, we get from (<ref>) that lim sup_k'→∞ E(w_k')≤∫_Ω W_hom(∇ u) dx. An analogous argument, exploiting the nonnegativity of W_hom in place of the growth condition, leads to the opposite inequality for the lower limit, and we conclude lim_k'→∞E(w_k')=E(u). By assumption, we can obtain a recovery sequence (w_k,j)_j for w_k such that lim_j→∞E_ϵ_j(w_k,j)=E(w_k) and w_k,j⇀ w_k in W^1,2(Ω;ℝ^2) as j→∞ for all k∈ℕ. By the Rellich–Kondrachov theorem, (w_k,j)_j can be assumed to converge to w_k strongly in L^2(Ω;ℝ^2). Combining this with lim_k→∞‖w_k-u‖_L^2(Ω;ℝ^2)=0 and with (<ref>), we have lim_k→∞lim_j→∞( ‖w_k,j-u‖_L^2(Ω;ℝ^2)+|E_ϵ_j(w_k,j)-E(u)| ) =0. A simplified version of the diagonalization argument introduced in the proof of the liminf inequality (see also <cit.>, Corollary 1.18) yields the existence of a sequence (u_j)_j such that u_j=w_k(j),j and u_j→ u in L^2(Ω;ℝ^2) and E_ϵ_j(u_j)→ E(u) as j→∞. It remains to show that u_j converges to u weakly in W^1,2(Ω;ℝ^2). This follows by noting that any subsequence of (u_j)_j contains a further subsequence weakly converging to u. We verify that W_hom given by (<ref>) satisfies the assumptions of Lemma <ref>. Indeed, since K={u∈ W^1,2∩ L_0^2(Ω;ℝ^2): ∇ u=R(I+γ e_1⊗ e_2),R∈ SO(2),γ∈ L^2(Ω)}, taking an arbitrary u∈ K we see that ∇ u=R(I+γ e_1⊗ e_2) for some R∈ SO(2), and this γ is independent of x_1 due to 0=curl∇ u = - ∂γ/∂ x_1 Re_1 (see Remark <ref>). Hence we may use a one-dimensional approximation of γ by a sequence of simple functions (ζ_k)_k, i.e., ζ_k→γ in L^2(Ω). Then the vanishing curl of R(I+ζ_k e_1⊗ e_2) guarantees the existence of a piecewise affine w_k∈ W^1,2(Ω;ℝ^2) with ∇ w_k=R(I+ζ_k e_1⊗ e_2) and with mean value zero. In combination with Poincaré's inequality we have w_k→ u in W^1,2(Ω;ℝ^2), as desired. Moreover, the continuity of W_hom on ℒ follows from Lemma <ref> and the relation ℒ⊂𝒩. For the specific form of energy that we have, one can directly show that it is locally Lipschitz continuous in γ (see Appendix <ref> for details). Namely, there is a constant c depending only on λ such that |W_hom(γ_1) - W_hom(γ_2)| ≤ c(1+|γ_1|+|γ_2|) |γ_1-γ_2| ∀γ_1, γ_2 ∈ℝ, where we abuse notation to write W_hom(γ) instead of W_hom(∇ u) when ∇ u = R(I+γ e_1⊗ e_2). This fact allows for a simpler proof in the sense that the argument leading to (<ref>) can be skipped. §.§.§ Laminate constructions Thanks to Lemma <ref>, it suffices to address the case of piecewise affine functions u, or equivalently, piecewise constant functions γ. As mentioned above, the underlying idea requires the construction of simple laminates between matrices with finite energy E_ϵ; a numerical preview of this construction is sketched below.
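The following minimal NumPy sketch is our own illustration and no substitute for the proofs: the matrix N, the slip frame v_1=e_1, v_2=e_2 (orthogonal slips) and the step size are hypothetical choices made only for this test. It locates the two lamination points F_± = N_t_± on the rank-one line used in case (i) of the lemma below and checks that N is recovered as their rank-one convex combination, with equal energies at both endpoints.

import numpy as np

v1 = np.array([1.0, 0.0])
v2 = np.array([0.0, 1.0])
# hypothetical target: det N = 1, |N v1| > 1, |N v2| > 1, (N v1).(N v2) > 0,
# i.e. N satisfies the assumptions of case (i) in the lemma below
N = np.array([[1.3, 0.8],
              [0.0, 1.0 / 1.3]])

a, b = v1 + v2, v1 - v2            # lamination data used in case (i)

def N_t(t):
    # rank-one line through N: N_t = N (I + t a (x) b)
    return N @ (np.eye(2) + t * np.outer(a, b))

def g(t):
    # g(t) = min |N_t v_i| - 1; its first zeros on both sides of 0 are t_-, t_+
    M = N_t(t)
    return min(np.linalg.norm(M @ v1), np.linalg.norm(M @ v2)) - 1.0

def first_crossing(direction, step=1e-3):
    # walk away from t = 0 (where g > 0) until g < 0, then bisect
    t0, t1 = 0.0, direction * step
    while g(t1) > 0.0:
        t0, t1 = t1, t1 + direction * step
    for _ in range(100):
        tm = 0.5 * (t0 + t1)
        t0, t1 = (tm, t1) if g(tm) > 0.0 else (t0, tm)
    return 0.5 * (t0 + t1)

t_minus, t_plus = first_crossing(-1.0), first_crossing(1.0)
F_minus, F_plus = N_t(t_minus), N_t(t_plus)
mu = -t_minus / (t_plus - t_minus)

print("rank(F_+ - F_-) =", np.linalg.matrix_rank(F_plus - F_minus))   # expect 1
print("|mu F_+ + (1-mu) F_- - N| =",
      np.linalg.norm(mu * F_plus + (1 - mu) * F_minus - N))           # expect ~0
# the two endpoint energies agree, cf. the remark following the lemma
print("energies |F_-|^2 - 2, |F_+|^2 - 2:",
      np.sum(F_minus**2) - 2.0, np.sum(F_plus**2) - 2.0)

We now turn to the rigorous construction.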
For this purpose, we first prove that the domain 𝒩 of the homogenized functional E, i.e., the set of matrices with unit determinant, is contained in the rank-one convex hull of ℳ, that is, any N∈𝒩 can be represented by a convex combination of matrices in ℳ that are rank-one connected. In this way, we can construct a simple laminate whose gradient lies in ℳ and which, with period approaching zero, weakly converges to a given N∈𝒩. For a given N∈𝒩, there are F_+, F_-∈ℳ satisfying rank (F_+-F_-)=1 and μ∈ [0,1] such that N=μ F_+ +(1-μ) F_- and * if min{|Nv_1|, |Nv_2|}>1 and Nv_1· Nv_2>0 then N(v_1+v_2)=F_+(v_1+v_2)=F_-(v_1+v_2); * if min{|Nv_1|, |Nv_2|}>1 and Nv_1· Nv_2<0 then N(v_1-v_2)=F_+(v_1-v_2)=F_-(v_1-v_2); * if |Nv_1|≤ 1 then Nv_2=F_+v_2=F_-v_2; * if |Nv_2|≤ 1 then Nv_1=F_+v_1=F_-v_1. First, let us confirm that the above cases (i)–(iv) cover all possibilities, namely, that min{|Nv_1|, |Nv_2|}>1 and Nv_1· Nv_2=0 cannot hold at the same time. Indeed, if Nv_1· Nv_2=0 then Nv_1 and Nv_2 are orthogonal, and since |Nv_1||Nv_2|= det N=1, we get min{|Nv_1|, |Nv_2|}≤ 1, which contradicts our assumption. To begin with, we focus on the case (i) and introduce the function φ(t), t∈ℝ, following the idea of <cit.>: φ(t):=|N_t(v_1-v_2)|^2-|N_t(v_1+v_2)|^2, where N_t:=N(I+t(v_1+v_2)⊗(v_1-v_2)). φ is a quadratic function with φ(0)=-4Nv_1· Nv_2<0, and while N_t(v_1+v_2) is constant with respect to t, the term |N_t(v_1-v_2)| diverges to +∞ when t→±∞. Hence, we can determine two finite values t_-, t_+ with t_-<0<t_+ as t_-=inf A, t_+=sup A, where A:={t∈ℝ:min{|N_t v_1|,|N_t v_2|}>1 and φ(t)<0}. Since |N_t v_1|^2-1, |N_tv_2|^2-1 are quadratic functions of t, and so is φ, the set A is a union of a finite number of open intervals, and is not empty because 0 ∈ A. The definition t_-=inf A implies that min{|N_t_- v_1|,|N_t_- v_2|}=1 or φ(t_-)=0 holds but, in fact, in either case min{|N_t_- v_1|,|N_t_- v_2|}=1 holds. Indeed, when φ(t_-)=0, the vectors N_t_-v_1 and N_t_-v_2 are orthogonal, which in view of det N_t_-=1 implies min{|N_t_- v_1|,|N_t_- v_2|}≤ 1 and so min{|N_t_- v_1|,|N_t_- v_2|}=1. From this and the definition of ℳ, we have that N_t_-∈ℳ, and an analogous argument leads to N_t_+∈ℳ. Now we define F_±:=N_t_±. Since N=N_0 is the point that internally divides the line segment N_t, t∈ [t_-,t_+], connecting F_- and F_+ in the ratio -t_-:t_+, setting μ=-t_-/(t_+-t_-) yields (<ref>). In addition, the property (i) is satisfied because N_t(v_1+v_2) is constant in t. The case (ii) is proved in an analogous way, considering the function φ(t)=|N_t(v_1+v_2)|^2-|N_t(v_1-v_2)|^2, where N_t=N(I+t(v_1-v_2)⊗(v_1+v_2)). The argument for cases (iii) and (iv) can be reduced to the setting of single slip, which is treated in Lemma 3.3 of <cit.>. In particular, when N∈ℳ, we may take F_+=F_-=N. We notice that by the definition of t_-,t_+, the function t ↦ W_hom(N_t) is constant in [t_-,t_+]. Moreover, for F∈ℳ we have W_hom(F)=|Fv_1|^2+|Fv_2|^2-2=W(F). Hence, when min{|Nv_1|,|Nv_2|}>1 we have W_hom(N)=W(F_+)=W(F_-). This is true also when min{|Nv_1|,|Nv_2|}≤ 1, as shown in <cit.>, Lemma 3.3. The following two theorems are taken from <cit.> and <cit.>, respectively. In combination, they allow us to approximate a matrix N∈𝒩 by a simple laminate of finite energy, while preserving boundary values. Let F_+,F_-∈ℝ^2× 2 with det F_+ = det F_- = 1 and rank (F_+-F_-) = 1, and let p∈ℝ^2 be such that |F_+p| = |F_-p| and F_+p ≠ F_-p. Moreover, let Ω⊂ℝ^2 be a bounded domain and fix μ∈(0,1).
Then for any δ>0, there are h_δ>0 and Ω_δ⊂Ω with |Ω\Ω_δ|<δ such that the restriction to Ω_δ of any simple laminate between the gradients F_+ and F_- with weights μ and 1-μ and period h<h_δ can be extended to a finitely piecewise affine function v_δ:Ω→ℝ^2 so that ∇ v_δ=μ F_++(1-μ)F_- on ∂Ω, det∇ v_δ=1, |(∇ v_δ)p|≤|F_+p|=|F_-p|, and dist (∇ v_δ,[F_+,F_-])≤δ on Ω, where [F_+,F_-]={tF_++(1-t)F_-:t∈[0,1]}. Note that the vector p appearing in the theorem exists and is unique up to scaling, cf. the proof in <cit.>. Let 𝒦⊂{F∈ℝ^2× 2: det F=1}. Suppose that (U_i)_i is an in-approximation of 𝒦, i.e., the sets U_i are open in {F∈ℝ^2× 2: det F=1} and uniformly bounded, U_i is contained in the rank-one convex hull of U_i+1 for every i∈ℕ, and (U_i)_i converges to 𝒦 in the following sense: if F_i ∈ U_i and |F_i - F|→ 0 as i →∞ then F∈𝒦. Then for any F ∈ U_1 and open domain Ω⊂ℝ^2, there exists u∈ W^1,∞(Ω;ℝ^2) such that ∇ u∈𝒦 a.e. in Ω and u=Fx on ∂Ω. Using Lemma <ref> and the above two theorems we prove the following approximation result that will be the basis for our construction of recovery sequences. Let Ω⊂ℝ^2 be a bounded domain. If N∈𝒩\ℳ, let F_+,F_-∈ℳ and μ∈(0,1) be as in Lemma <ref>; otherwise let F_+=F_-=N∈ℳ and μ∈(0,1) be arbitrary. Then for every δ >0 there exists Ω_δ⊂Ω with |Ω\Ω_δ|<δ and u_δ∈ W^1,∞(Ω;ℝ^2) such that u_δ coincides in Ω_δ with a simple laminate between F_+ and F_- with weights μ and 1-μ and period h_δ<δ, ∇ u_δ∈ℳ a.e. in Ω, and u_δ=Nx on ∂Ω. Moreover, there is a constant c depending only on N, such that for any δ∈ (0,1), |∇ u_δ|≤ c a.e. in Ω . In particular, ∇ u_δ⇀ N in L^2(Ω;ℝ^2× 2) as δ→ 0. The proof requires addressing the four cases for N in Lemma <ref> separately. Here we deal with the case (i) only, since a similar argument applies to case (ii), while cases (iii) and (iv) were treated in Corollary 3.7 of <cit.>. For the given F_+,F_- and δ >0, Theorem <ref> with this δ and p taken as v_1+v_2 yields a finitely piecewise affine function v_δ and a set Ω_δ. Since dist (∇ v_δ,[F_+,F_-])≤δ and 0<δ<1, there is c>0 independent of δ, such that |∇ v_δq|<c a.e. in Ω for q=v_1+v_2,v_1-v_2,v_1,v_2 . The desired function u_δ is obtained as a result of applying Theorem <ref> to modify v_δ in the finitely many subsets of Ω∖Ω_δ where it is affine. Take any such subset S and first assume that ∇ v_δ satisfies the conditions of case (i) in Lemma <ref>, i.e., min{|∇ v_δv_1|,|∇ v_δv_2|} >1 and ∇ v_δv_1·∇ v_δv_2>0 on S. We apply Theorem <ref> to the in-approximation (U^δ_i)_i of 𝒦:=ℳ∩{F∈ℝ^2× 2:|F(v_1+v_2)|≤ c, Fv_1· Fv_2≥ 0} defined as U_i^δ:={F∈ℝ^2× 2: det F=1, |F(v_1+v_2)|<c, |Fv_1|>1,|Fv_2|>1, Fv_1· Fv_2>0} ∩{F∈ℝ^2× 2:|Fv_1|<1+2^-(i-1) or |Fv_2|<1+2^-(i-1)}, i∈ℕ. By shifting the index i if necessary, we may assume that ∇ v_δ∈ U_1^δ in S. In order to apply Theorem <ref>, it remains to show that (U^δ_i)_i is indeed an in-approximation of 𝒦. When |F_i-F|→ 0 for F_i ∈ U^δ_i, it is easy to see that det F=1, |F(v_1+v_2)|≤ c and Fv_1· Fv_2≥ 0. In addition, we see that either |Fv_1|=1 or |Fv_2|=1 holds. Indeed, assuming that there is a subsequence (F_i_k)_k that satisfies 1<|F_i_kv_1|<1+2^-(i_k-1) for all k, we obtain |Fv_1|=1. If this assumption is false and there are only finitely many values of k for which 1<|F_i_kv_1|<1+2^-(i_k-1) holds, then by the definition of U_i^δ there must be an infinite subsequence (F_i_k)_k satisfying 1<|F_i_kv_2|<1+2^-(i_k-1) for all k. But then |Fv_2|=1. Next, we prove that U^δ_i is contained in the rank-one convex hull of U^δ_i+1. Let G∈ U^δ_i and set G_t=G(I+t(v_1+v_2)⊗ (v_1-v_2)).
Then we have |G_t (v_1+v_2)|=|G(v_1+v_2)|<c for any t∈ℝ. Defining t_-, t_+ as in Lemma <ref>, in view of the continuity of |G_tv_1|, |G_tv_2| as functions of t, one can find t_-^i+1>t_-, t_+^i+1<t_+ close enough to t_- , t_+ so that min{|G_t^i+1_-v_1|, |G_t^i+1_-v_2| } < 1+2^-i and min{|G_t^i+1_+v_1|, |G_t^i+1_+v_2| } < 1+2^-i. Also, due to the definitions of t_- and t_+, G_t^i+1_+v_1· G_t^i+1_+v_2>0. Then G_t_-^i+1, G_t_+^i+1∈ U^δ_i+1, and G can be written as a rank-one convex combination of G_t_-^i+1 and G_t_+^i+1. Now Theorem <ref> allows us to modify v_δ in S to obtain a function u_δ, which satisfies ∇ u_δ∈ℳ and |∇ u_δ(v_1+v_2)|≤ c in S. The case when min{|∇ v_δv_1|,|∇ v_δv_2|} >1 and ∇ v_δv_1·∇ v_δv_2<0 is handled analogously. One just needs to modify 𝒦, U^δ_i, and G_t appropriately by interchanging the roles of v_1+v_2 and v_1-v_2, and switching the inequality Fv_1· Fv_2>0 to Fv_1· Fv_2<0 in the definition of U_i^δ. The case when min{|∇ v_δv_1|,|∇ v_δv_2|} <1 in a subset S can be reduced to the proof of Lemma 2 of <cit.>. For example, when |∇ v_δ v_1| <1, this yields ∇ u_δ∈ℳ_1 satisfying |∇ u_δ v_2| ≤ c in S. Similarly, when |∇ v_δ v_2| <1, we have ∇ u_δ∈ℳ_2 satisfying |∇ u_δ v_1| ≤ c in S. Finally, to prove (<ref>), we recall that ∇ u_δ coincides with ∇ v_δ in Ω_δ and thus satisfies |(∇ u_δ)(v_1+v_2)|=|N(v_1+v_2)| a.e. in Ω_δ. On the other hand, ∇ u_δ on subsets S of Ω∖Ω_δ was constructed so that it fulfills one of the following four conditions: (i) ∇ u_δ∈ℳ and |(∇ u_δ)(v_1+v_2)|≤ c, (ii) ∇ u_δ∈ℳ and |(∇ u_δ)(v_1-v_2)|≤ c, (iii) ∇ u_δ∈ℳ_1 and |∇ u_δv_2|≤ c, (iv) ∇ u_δ∈ℳ_2 and |∇ u_δv_1|≤ c. Therefore, for δ<1, in either of the four cases both |∇ u_δv_2| and |∇ u_δv_1| are uniformly bounded with respect to δ in terms of N, from which (<ref>) follows. §.§.§ Recovery sequence for constant γ Let γ∈ℝ be constant and define N ∈𝒩 through λ N+(1-λ)R=R(I+γ e_1⊗ e_2). With this definition, Ne_1=Re_1 holds, and thus laminates parallel to the direction of e_1 can be constructed. For ϵ∈ (0,1) let φ^1_ϵ∈ W^1,∞((0,1)×(0,λ);ℝ^2) be obtained by applying Corollary <ref> to Ω=(0,1)×(0,λ)⊂ℝ^2 and N with δ=ϵ. We then define φ _ϵ(x) :=∑_i∈ℤ^2(φ^1_ϵ(x-i)-N(x-i) )1_i+(0,1)×(0,λ), x∈ℝ^2, so that φ_ϵ is Y-periodic. Next we set z_ϵ=w+φ_ϵ, where w∈ W^1,∞_loc∩ L^∞_0(ℝ^2;ℝ^2) is such that ∇ w =R1_Y_rig+N1_Y_soft (see Fig. <ref>). Because ∇φ_ϵ⇀ 0 in L^2_loc(ℝ^2;ℝ^2× 2) as ϵ→ 0, ∇ z_ϵ⇀∇ w in L^2_loc(ℝ^2;ℝ^2× 2). If u_ϵ∈ W^1,2(Ω;ℝ^2) with mean value zero is determined by ∇ u_ϵ(x)=∇ z_ϵ(x/ϵ), x∈Ω, the functions u_ϵ are admissible for E_ϵ in view of ∇ u_ϵ=R∈ SO(2) in ϵ Y_rig∩Ω and ∇ u_ϵ∈ℳ a.e. in Ω. By (<ref>), when ϵ→ 0, ∇ u_ϵ⇀∫_Y∇ w dx=λ N+(1-λ)R=∇ u in L^2(Ω;ℝ^2× 2). This weak convergence is proved using a nonstandard averaging lemma, see Appendix <ref> for details. Recalling (<ref>) yields uniform boundedness of ∇ z_ϵ_L^∞ for 0<ϵ<1. It follows from the continuity of W on ℳ that |W(∇ u_ϵ)|=|W(∇ z_ϵ(ϵ^-1x))|≤ c a.e. in Ω for some c>0 and for all 0<ϵ<1. We wish to investigate the limit lim_ϵ→ 0E_ϵ(u_ϵ)=lim_ϵ→ 0∫_ΩW(∇ z_ϵ(ϵ^-1x))1_ϵ Y_soft∩Ω dx. Let Ω_ϵ⊂Ω be the open set where ∇ u_ϵ is a simple laminate. Then ∇ u_ϵ takes value in {F_+,F_-} on Ω_ϵ and in ℳ on Ω_ϵ^0:=(ϵ Y_soft∩Ω)\Ω_ϵ. Using (<ref>), (<ref>) =lim_ϵ→ 0( ∫_ϵ Y_soft∩ΩW_hom(N) dx+∫_Ω_ϵ^0( W(∇ z_ϵ(ϵ^-1x))-W_hom(N) ) dx ). The first term converges to λ |Ω|W_hom(N) when ϵ→ 0 due to the averaging lemma. On the other hand, the second term converges to 0 when ϵ→ 0 due to |Ω_ϵ^0|→ 0 and the uniform boundedness of the L^∞-norm of the integrand. 
We conclude that lim_ϵ→ 0E_ϵ(u_ϵ)=λ |Ω|W_hom(N)=E(u). §.§.§ Recovery sequence for piecewise constant γ In this step, we prove the existence of a recovery sequence for (finitely) piecewise affine u. To begin with, assume that Ω=(0,l)^2⊂ℝ^2 with l > 0. Since γ is essentially a function of x_2 only, γ can be expressed as a one-dimensional simple function γ(x_1,t)=∑_i=1^nγ_i1_(t_i-1,t_i)(t), t∈(0,l), with γ_i ∈ℝ, i=1,…,n and 0=t_0<t_1<⋯<t_n=l. Let N_i∈𝒩 be the matrix corresponding to γ_i via (<ref>). We define u_ϵ∈ W^1,2(Ω;ℝ^2) with mean value zero by ∇ u_ϵ :=R+∑_i=1^n(∇ u_i,ϵ-R)1_ϵ Y_soft∩Ω1_ℝ×(⌈ϵ^-1t_i-1⌉ϵ,⌊ϵ^-1t_i⌋ϵ), where (u_i,ϵ)_ϵ⊂ W^1,2((0,l)×(t_i-1,t_i);ℝ^2) for i∈{1,…,n} are the recovery sequences corresponding to γ_i constructed in Section <ref>. Note that u_ϵ is well defined by its gradient since ∇ u_i,ϵ and R rank-one connect along all segments [0,l] × j ϵ, j=1, …, ⌊ l/ϵ⌋ (also see Fig. <ref>). For φ∈ L^2(Ω), ∫_(0,l)× (t_i-1,t_i)(∇ u_ϵ-∇ u)φ dx=∫_(0,l)×(⌈ϵ^-1t_i-1⌉ϵ,⌊ϵ^-1t_i⌋ϵ)(∇ u_i,ϵ-∇ u)φ dx +∫_((0,l)× (t_i-1,t_i)) \ ((0,l)×(⌈ϵ^-1t_i-1⌉ϵ,⌊ϵ^-1t_i⌋ϵ))(R-∇ u)φ dx. By (<ref>) the first term on the right-hand side converges to 0 as ϵ→ 0, while the second term converges to 0 due to the vanishing measure of the integration domain. It follows that ∇ u_ϵ⇀∑_i=1^n(λ N_i+(1-λ)R)1_(0,l)× (t_i-1,t_i)=∇ u in L^2(Ω;ℝ^2× 2). Similarly, for the energy contributions, it follows that ∫_(0,l)× (t_i-1,t_i)W(∇ u_ϵ) dx=∫_(0,l)×(⌈ϵ^-1t_i-1⌉ϵ,⌊ϵ^-1t_i⌋ϵ)W(∇ u_i,ϵ) dx +∫_((0,l)× (t_i-1,t_i)) \ ((0,l)×(⌈ϵ^-1t_i-1⌉ϵ,⌊ϵ^-1t_i⌋ϵ))W(R) dx. As ϵ→ 0 the first term converges to λ|(0,l)× (t_i-1,t_i)|W_hom(N_i) in the same way as in Section <ref>, and the second term is equal to 0. This implies lim_ϵ→ 0∫_ΩW(∇ u_ϵ) dx =λ∑_i=1^n|(0,l)× (t_i-1,t_i)|W_hom(N_i) =E(u). To generalize to a simply connected Lipschitz domain Ω, one fills Ω with overlapping cubes and glues the gradients properly. § ENVELOPES FOR GENERAL SLIP SYSTEMS Our homogenization result in the previous section relies on the construction of generalized convex envelopes. For the problem with a single slip direction as well as for the problem with two orthogonal slip directions, the rank-one convex envelopes of the energy density in the setting of rigid plasticity are built from first-order laminates, i.e., rank-one convex combinations of matrices with finite energy. When there are two orthogonal slip systems, first-order laminates yield the rank-one convex envelope <cit.>, which allows for the full identification of the corresponding Γ-limit. In this case, this rank-one convex envelope in fact matches the polyconvex envelope and the quasiconvex envelope as well. This is not true in spatial dimension 3 <cit.>. In this section, we partially extend the results from <cit.> to arbitrary two-slip systems, pointing out the related results in <cit.>. The results are partial in the sense that the envelopes are identified only on a strict subset of ℝ^2× 2. To fix the notation, let unit vectors v_1,v_2 be such that (v_1, v_2) is a right-handed system and the angle between v_1 and v_2 is 2θ with π/4<θ<π/2. Further, let v_3:=-(v_1+v_2)/|v_1+v_2| denote the unit vector dividing in half the larger angle between v_1 and v_2. For later use we note that v_1· v_2 = cos 2θ, v_1· v_3 = v_2· v_3 = - cosθ, v_1 · v_3^⊥ = -v_2 · v_3^⊥ = sinθ . Considering the energy density W(F) = |F|^2-2 if F∈ℳ_1∪ℳ_2 and W(F)=∞ otherwise, where ℳ_i:={F∈ℝ^2× 2: det F=1, |Fv_i|=1}, i=1,2, the main result of this section reads as follows.
The rank-one convex envelope W^rc, the polyconvex envelope W^pc and the quasiconvex envelope W^qc of W coincide and fulfill W^rc(F)=W^pc(F)=W^qc(F) = h(|Fv_3|) if F∈𝒜∪(𝒩_1∩𝒩_2), and W^rc(F)=W^pc(F)=W^qc(F) = h^⊥(|Fv_3^⊥|) if F∈𝒜_⊥. Up to the boundaries of these sets, W^rc(F)=W^pc(F)=W^qc(F) = |Fv_1^⊥|^2-1 if det F=1 and |Fv_1|=1; = |Fv_2^⊥|^2-1 if det F=1 and |Fv_2|=1; = h(|Fv_3|) if F belongs to the closure of 𝒜∪ (𝒩_1∩𝒩_2); = h^⊥(|Fv_3^⊥|) if F belongs to the closure of 𝒜_⊥. Furthermore, W^rc(F)=W^pc(F)=+∞ if det F ≠ 1. Here the sets 𝒩_1, 𝒩_2, 𝒜, 𝒜_⊥ are defined by 𝒩_i :={F∈ℝ^2× 2 : det F=1, |Fv_i|< 1}, i=1,2, 𝒜 :={F∈ℝ^2× 2 : det F=1, min_i=1,2|Fv_i|>1, Fv_1· Fv_2>0 }, 𝒜_⊥ :={F∈ℝ^2× 2 : det F=1, min_i=1,2|Fv_i|>1, Fv_1· Fv_2<0 }. We recall that h and h^⊥ were defined in Theorem <ref>. For the purposes of this section, we extend them as (see Fig. <ref>) h^*(z) :=(1+z^2-2cosθ√(z^2-sin^2θ))/sin^2θ-2 for z∈[sinθ, ∞), h^⊥ *(z) :=(1+z^2-2sinθ√(z^2-cos^2θ))/cos^2θ-2 for z∈[cosθ, ∞). Then one easily checks that h(z)=0 if z∈[0,1] and h(z)=h^*(z) if z≥1, that h^⊥(z)=0 if z∈[0,1] and h^⊥(z)=h^⊥ *(z) if z≥1, and that h(z)≤ h^*(z) for z∈[sinθ, ∞) and h^⊥(z)≤ h^⊥*(z) for z∈[cosθ, ∞). The condition Fv_1· Fv_2 ≷ 0 in the definition of 𝒜 and 𝒜_⊥ appears already in Theorem <ref> and serves the purpose of dividing the set {F: det F=1, |Fv_1|> 1, |Fv_2|>1 } into the two parts 𝒜 and 𝒜_⊥, as shown in Figure <ref>. It is equivalent to |Fv_3|/|Fv_3^⊥| ≷sinθ/cosθ, as shown in Lemma <ref> below. We introduce two formulas that will be useful in the sequel. First, for F∈ℝ^2× 2 and a,b∈ℝ^2, one has |Fa|^2|Fb|^2=|Fa· Fb|^2+(det F)^2(a^⊥· b)^2. Second, for F∈ℝ^2× 2 and unit vectors a,b with a^⊥· b≠ 0, one has |F|^2=(|Fa|^2+|Fb|^2-2(a· b)(Fa· Fb))/(a^⊥· b)^2. The conditions Fv_1· Fv_2≷ 0 and |Fv_3|/|Fv_3^⊥|≷sinθ/cosθ are equivalent. In addition, the following holds: |Fv_3|≥ 1 for F∈𝒜∪ (𝒩_1 ∩𝒩_2), |Fv_3^⊥|≥ 1 for F∈𝒜_⊥. First, we calculate |Fv_3|^2 = |Fv_1+Fv_2|^2/|v_1+v_2|^2 = (|Fv_1|^2+|Fv_2|^2 + 2Fv_1· Fv_2)/(2+2v_1· v_2), |Fv_3^⊥|^2 = |Fv_1-Fv_2|^2/|v_1-v_2|^2 = (|Fv_1|^2+|Fv_2|^2 - 2Fv_1· Fv_2)/(2-2v_1· v_2), which together with (<ref>) implies that, if Fv_3^⊥≠ 0, (|Fv_3|^2/|Fv_3^⊥|^2)·(cos^2θ/sin^2θ)=(|Fv_1|^2+|Fv_2|^2 + 2Fv_1· Fv_2)/(|Fv_1|^2+|Fv_2|^2 - 2Fv_1· Fv_2). Hence, the conditions Fv_1· Fv_2≷ 0 and |Fv_3|/|Fv_3^⊥|≷sinθ/cosθ are equivalent. To prove the second part of the lemma, we use (<ref>) together with det F=1 to get |Fv_1· Fv_2|^2 = |Fv_1|^2|Fv_2|^2 - (v_1^⊥· v_2)^2 = |Fv_1|^2|Fv_2|^2 - 1 + (v_1 · v_2)^2. Now if F∈𝒜 then |Fv_1|,|Fv_2| ≥ 1, Fv_1· Fv_2≥ 0, and thus in view of v_1· v_2=cos 2θ<0, (<ref>) implies |Fv_3|^2 ≥(|Fv_1|^2+|Fv_2|^2)/2≥ 1. On the other hand, if F∈𝒩_1 ∩𝒩_2, (<ref>) yields |Fv_3^⊥|^2 ≤(1-Fv_1· Fv_2)/(1-v_1· v_2)≤ 1, because v_1· v_2< 0 and |Fv_1· Fv_2|^2≤ (v_1· v_2)^2 by (<ref>), so irrespective of the sign of Fv_1· Fv_2 it is true that v_1 · v_2 ≤ Fv_1· Fv_2. Therefore, |Fv_3|≥1, since 1=det F≤|Fv_3||Fv_3^⊥|. Finally, if F∈𝒜_⊥ then Fv_1· Fv_2≤ 0, and |Fv_3^⊥|^2 ≥(1-Fv_1· Fv_2)/(1-v_1· v_2)≥ 1, because v_1· v_2< 0 and |Fv_1· Fv_2|^2 ≥ (v_1· v_2)^2, again by (<ref>). We shall prove Proposition <ref> in a series of lemmas. There are two ways to construct first-order laminates: between two matrices on ℳ_i, and between a matrix on ℳ_1 and another matrix on ℳ_2. In the first case, the optimal laminate is in the direction v_i, which follows from the analysis of the single-slip problem <cit.>. In the latter case, the following lemma provides the stepping stone for identifying the optimal directions for first-order laminates in the set 𝒜∪𝒜_⊥={F∈ℝ^2× 2: det F=1, |Fv_1|> 1, |Fv_2|>1 }. If F ∈𝒜_⊥ we define for t ∈ℝ, F_t = F(I +t v^⊥_3 ⊗ v_3), and if F∈𝒜 we define for t ∈ℝ, F_t = F(I +t v_3 ⊗ v^⊥_3). Then the equations |F_t v_1|=1 and |F_t v_2|=1 each have two distinct solutions.
The two solutions of each of these two equations have the same sign, and the sign of the two solutions of (<ref>) is opposite to the sign of the two solutions of (<ref>). We first prove that there are two distinct solutions to each of (<ref>), (<ref>) and that these solutions have the same sign. We focus on the first case (<ref>) since the second case (<ref>) follows by switching the roles of v_3 and v_3^⊥. Equation (<ref>), when squared, is equivalent to |Fv_1|^2-1+2t(v_3· v_1)(Fv_3^⊥· Fv_1)+t^2(v_3· v_1)^2|Fv_3^⊥|^2=0 . In view of v_3 · v_1=-cosθ≠ 0 we see that two distinct real solutions exist provided that (Fv_3^⊥· Fv_1)^2>|Fv_3^⊥|^2(|Fv_1|^2-1). In view of (<ref>) and F = 1, this is equivalent to |Fv_3^⊥|^2>(-v_3· v_1)^2 = cos^2θ , which immediately follows from the assumption |Fv_3|/|Fv_3^⊥|<sinθ/cosθ since due to F=1 we have |Fv_3||Fv_3^⊥|≥ 1 and thus |Fv_3^⊥|^2 = |Fv_3^⊥|/|Fv_3| |Fv_3^⊥| |Fv_3| > cosθ/sinθ > cos^2θ . In addition, |Fv_1|^2-1 > 0 by assumption, and hence the solutions have the same sign. The proof for (<ref>) is obtained by simply replacing v_1 by v_2. We next look at the sign of the solutions for (<ref>) and (<ref>), again only in the first case of (<ref>). It suffices to show that the average of the two solutions of the first equation has a different sign than the average of the two solutions of the second one. The average of the two solutions of (<ref>) is -Fv_3^⊥· Fv_1/(v_3· v_1)|Fv_3^⊥|^2, and, replacing v_1 by v_2, the average of the two solutions of (<ref>) is -Fv_3^⊥· Fv_2/(v_3· v_2)|Fv_3^⊥|^2. Since (v_3 · v_1)(v_3 · v_2) = cos^2θ > 0, it suffices to show that (Fv_3^⊥· Fv_1)(Fv_3^⊥· Fv_2)<0. We observe that v_1=-(cosθ) v_3 + (sinθ) v_3^⊥ v_2=-(cosθ) v_3 - (sinθ) v_3^⊥ and thus (Fv_3^⊥· Fv_1 )(Fv_3^⊥· Fv_2) =cos^2θ |Fv_3^⊥· Fv_3|^2 - sin ^2θ |Fv_3^⊥|^4 . The assertion follows in view of Cauchy-Schwarz inequality and the assumption |Fv_3|/|Fv_3^⊥|<sinθ/cosθ. For F∈ℝ^2× 2, we define W^lc(F) as follows: W^lc(F)=inf{μ W(F_0)+(1-μ)W(F_1): μ∈[0,1], μ F_0+(1-μ)F_1=F, rank(F_0-F_1)≤ 1 }. The following two lemmas evaluate W^lc from above inside the sets 𝒜∪𝒜_⊥ and 𝒩_1∩𝒩_2, respectively, in terms of the functions h, h^⊥. The proof will reveal that the corresponding optimal laminates connect matrices on ℳ_1 and ℳ_2 with equal energies. If F∈𝒜 then W^lc(F)≤ h(|Fv_3|), and if F∈𝒜_⊥ then W^lc(F)≤ h^⊥(|Fv_3^⊥|). If F∈𝒜, Lemma <ref> yields four mutually different real numbers s', s”, t', t” such that s',s” and t',t” have opposite signs, and satisfy |F_s'v_1|=|F_s”v_1|=|F_t'v_2|=|F_t”v_2|=1. Here F_t is the rank-one line with direction Fv_3⊗ v_3^⊥, i.e., F_t =F(I+tv_3⊗ v_3^⊥). Note that z=|F_tv_3|=|Fv_3| is independent of t. Using the formula (<ref>) with a=v_i, i=1,2 and b=v_3, we get for t∈{s',s”,t',t”}, F_tv_i · F_tv_3 = ±√(|F_tv_i|^2 |F_tv_3|^2 - (v_i^⊥· v_3)^2) = ±√(z^2 - sin^2 θ). Hence by (<ref>), |F_t|^2=1+z^2±2cosθ√(z^2-sin^2θ)/sin^2θ, t∈{s',s”,t',t”}. We may assume that s'< s”, t'<t”. Then either s'<s”<0<t'<t” or t'<t”<0<s'<s” holds. Since |F_t|^2 is a quadratic function of t, it never takes the same value more than twice. Moreover, there are only two possible values of |F_t|^2 for t = s',s”,t',t”, and hence we conclude that |F_s'|=|F_t”| and |F_s”|=|F_t'|. Because the quadratic term t^2 appearing in |F_t|^2 has positive coefficient, we also see that the value with minus sign in (<ref>) is taken for t=s”,t' (when s'<s”<0<t'<t”) or for t=t”,s' (when t'<t”<0<s'<s”). 
Therefore, we can choose s_⋆∈{s',s”} and t_⋆∈{t',t”} such that |F_s_⋆|^2=|F_t_⋆|^2=1+z^2-2cosθ√(z^2-sin^2θ)/sin^2θ. Note that s_⋆ and t_⋆ have different signs. Since F is a rank-one convex combination of F_s_⋆ and F_t_⋆ and W(F_s_⋆) = |F_s_⋆|^2-2 =|F_t_⋆|^2-2 = W(F_t_⋆), we conclude W^lc(F) ≤ W(F_s_⋆)=|F_s_⋆|^2-2=h^*(z) = h(z), the last equality following from Lemma <ref>. The case F∈𝒜_⊥ is shown analogously, interchanging the roles of v_3 and v_3^⊥ and the roles of cosθ and sinθ. If F∈𝒩_1∩𝒩_2 then W^lc(F)≤ h(|Fv_3|) . Consider the rank-one line given by F_t = F(I +t v_3 ⊗ v^⊥_3). Since |F_tv_1|→∞ as t →±∞ and |F_0v_1|<1, by continuity we find t_1,± with t_1,- < 0 < t_1,+ and |F_t_1,± v_1| = 1. Since F_tv_1 · Fv_3 = Fv_1 · Fv_3 + t|Fv_3|^2(v^⊥_3 · v_1) and v^⊥_3 · v_1 = sinθ >0, it follows that F_t_1,+ v_1 · Fv_3 > F_t_1,- v_1 · Fv_3. Combining this with (<ref>), we have F_t_1,+ v_1 · Fv_3 = √(z^2 - sin^2θ) , F_t_1,- v_1 · Fv_3 = - √(z^2 - sin^2θ) , where we set z = |Fv_3|. By (<ref>) with a = v_1, b = v_3, |F_t_1,-|^2=1+z^2-2cosθ√(z^2-sin^2θ)/sin^2θ. Repeating the same argument with v_2 instead of v_1 leads to two values t_2,- < 0 < t_2,+, such that |F_t_2,± v_2| = 1. Since v^⊥_3 · v_2 = -sinθ<0 we obtain as above, |F_t_2,+|^2=1+z^2-2cosθ√(z^2-sin^2θ)/sin^2θ. Employing a laminate between F_t_1,- and F_t_2,+ as in the previous lemma and noting that |Fv_3| ≥ 1 by Lemma <ref> finishes the proof. Using the rank-one line F(I +t v^⊥_3 ⊗ v_3)) one can similarly show that W^rc(F)≤ h^⊥*(|Fv_3^⊥|) for F∈𝒩_1∩𝒩_2. However, this fact does not provide any additional information since for such F one has h(|Fv_3|) ≤ h^⊥*(|Fv_3^⊥|). We next turn to estimating the envelopes of W from below in terms of h and h^⊥. For any F∈ℝ^2× 2, max{h(|Fv_3|),h^⊥(|Fv_3^⊥|)}≤ W^pc(F), W^qc(F), W^rc(F). In addition, if F≠1 then W^pc(F)=W^rc(F)=+∞ . Since for z≥ 0, h(z) and h^⊥(z) are nondecreasing and convex functions taking finite values, the function max{h(|Fv_3|),h^⊥(|Fv_3^⊥|)} is convex and takes finite values for F∈ℝ^2× 2. Thus, this function is also polyconvex, quasiconvex and rank-one convex. Therefore, it suffices to show h(|Fv_3|)≤ W(F) and h^⊥(|Fv_3^⊥|)≤ W(F), where F=F_γ^i=I+γ v_i⊗ v_i^⊥, for any γ∈ℝ and each i=1,2. In view of F_γ^i =1 and the relation W(F_γ^i)=γ^2, it is sufficient to check that h(|F_γ^iv_3|) ≤γ^2 and h^⊥(|F_γ^iv_3^⊥|) ≤γ^2 for any γ∈ℝ and each i=1,2. To begin with, we fix i=1. Since |F_γ^1v_3|^2=1+2γcosθsinθ + γ^2sin ^2θ, we see that |F_γ^1v_3|^2 - sin^2θ≥ 0 and from (<ref>) h(|F_γ^1v_3|) ≤ h^*(|F_γ^1v_3|) =2cos^2θ+2γcosθsinθ+γ^2sin^2θ-2cosθsinθ√((γ+cosθ/sinθ)^2)/sin^2θ =( cosθ-sinθ√((γ + cosθ/sinθ)^2)/sinθ)^2. Hence, if γ≥ -cosθ/sinθ, we obtain h(|F_γ^1v_3|) ≤ h^*(|F_γ^1v_3|) = γ^2. This inequality holds even if γ < -cosθ/sinθ. Indeed, h(|F_γ^1v_3|)≤ h^*(|F_γ^1v_3|) ≤1+|F_γ^1v_3|^2+2cosθ√(|F_γ^1v_3|^2-sin^2θ)/sin^2θ-2 and a simple calculation shows that the last expression is equal to γ^2. The same is true for h^⊥, where we find that h^⊥(|F_γ^1v_3^⊥|) ≤ h^⊥*(|F_γ^1v_3^⊥|)=γ^2 for γ≤sinθ/cosθ and h^⊥(|F_γ^1v_3^⊥|) ≤ h^⊥*(|F_γ^1v_3^⊥|)≤γ^2 otherwise. The proof for i=2 is analogous. To show the statement about infinite values, we follow <cit.> and define the function I(F)= 0 if F =1, +∞ otherwise, and the functions g(F)=h(|Fv_3|)+I(F), g^⊥(F)=h^⊥(|Fv_3^⊥|)+I(F). Since I(F) is a convex function of F and thus polyconvex, the functions g and g^⊥, being sums of a convex and a polyconvex function, are polyconvex functions, and so is max{g,g^⊥}. 
The function max{g,g^⊥} takes the value +∞ for F with det F≠ 1, and the first part of this proof shows that max{g,g^⊥}≤ W. Since, in general, W^pc≤ W^rc even for extended-valued functions W according to <cit.>, we deduce max{g,g^⊥}≤ W^pc≤ W^rc. Therefore W^pc(F)=W^rc(F)=+∞ if det F≠1. Based on the ideas in the proof of Lemma <ref>, we may extend the results of Lemmas <ref> and <ref> up to the boundaries of the involved sets, i.e., Lemma <ref> holds with 𝒜, 𝒜_⊥ replaced by their closures, and Lemma <ref> holds with 𝒩_1∩𝒩_2 replaced by its closure. To see why, first note that, using the notation of Lemma <ref>, if a matrix F_γ^1 belongs to the boundary of the set 𝒜∪ (𝒩_1 ∩𝒩_2) then γ≥ - cosθ/sinθ. Assuming on the contrary γ<-cosθ/sinθ (<0) leads after a short calculation to F^1_γv_1 · F^1_γv_2=cos 2θ +γsin 2θ<0, and thus F^1_γ∈∂𝒜_⊥. But the only matrices belonging to both ∂ (𝒜∪ (𝒩_1 ∩𝒩_2)) and ∂𝒜_⊥ are pure rotations, which shows that for F_γ^1 ∈∂ (𝒜∪ (𝒩_1 ∩𝒩_2)) we indeed have γ≥ - cosθ/sinθ, and the proof of Lemma <ref> in turn implies h^*(|F^1_γv_3|)=γ^2=W(F^1_γ)=|F_γ^1v_1^⊥|^2-1. Moreover, since for such F_γ^1 Lemma <ref> yields |F_γ^1 v_3|≥ 1, we also have h^*(|F^1_γv_3|)=h(|F^1_γv_3|). The corresponding identity h^⊥(|F^1_γv_3^⊥|)=h^⊥ *(|F^1_γv_3^⊥|)=γ^2=W(F^1_γ)=|F_γ^1v_1^⊥|^2-1 holds for F_γ^1 in the closure of 𝒜_⊥, and a similar argument can be made about F^2_γ. The statement in the beginning of this remark then follows from the relation W^lc≤ W. Unfortunately, in the remaining set (𝒩_1 ∖𝒩_2) ∪ (𝒩_2 ∖𝒩_1), the estimate of the envelopes from above that is derived from optimal first-order laminates does not match the estimate from below obtained in Lemma <ref>. We include the result for completeness and because the boundedness of W^lc, which it implies, will be needed later. For each i=1,2 and F ∈𝒩_i, W^lc(F)≤ |Fv_i^⊥|^2 -1 . If additionally F∈ (𝒩_1\𝒩_2) ∪ (𝒩_2\𝒩_1) then W^lc(F)≤min{ h_+(|Fv_3|), h^⊥_+(|Fv^⊥_3|) } . Here h_+,h^⊥_+ are given by h_+(z):=(1+z^2+2cosθ√(z^2-sin^2θ))/sin^2θ-2, and h^⊥_+(z):=(1+z^2+2sinθ√(z^2-cos^2θ))/cos^2θ-2. Let F_t = F(I +t v^⊥ _i ⊗ v_i). Since |F_0v_i|=|Fv_i| ≤ 1 and |F_t v_i|→∞ as t →±∞, there are two values t_-≤ 0≤ t_+ such that |F_t_± v_i| = 1. We have W(F_t_-)=|F_t_-|^2-2 = |F_t_-v_i^⊥|^2+|F_t_-v_i|^2-2=|Fv_i^⊥|^2-1, and similarly W(F_t_+)=|Fv_i^⊥|^2-1. Since F is a rank-one convex combination of F_t_- and F_t_+, the relation (<ref>) follows from the definition (<ref>) of W^lc. Let us next prove that W^lc(F)≤ h_+(|Fv_3|) for F ∈𝒩_1\𝒩_2. We take the rank-one line F_t=F(I +t v_3 ⊗ v^⊥_3) and construct laminates between its intersections with ℳ_1, ℳ_2. To do so, we consider the transformation ξ:ℝ^2× 2→ℝ^2× 2 given by F↦ RFR^-1=RFR, where R is the reflection about v_3; in the coordinate system given by v_3 and v_3^⊥, R=[ 1 0; 0 -1 ]. Note that if det F=1, then detξ(F)=1. Moreover, since Rv_1=v_2 and Rv_2 =v_1, we have ξ(𝒩_1)=𝒩_2, ξ(ℳ_1)=ℳ_2 and ξ(𝒩_2)=𝒩_1, ξ(ℳ_2)=ℳ_1. Since |(UF)v_i|=|Fv_i| for any U∈ SO(2), we can assume that F is an upper triangular matrix F = [ α β; 0 γ ] with α>0. Then {F_t : t∈ℝ}={ξ(F_t) : t∈ℝ} holds because ξ(F_t) = RFR + tR(Fv_3⊗ v_3^⊥)R = [ α -β; 0 γ ] - t [ 0 α; 0 0 ] = F - (α t + 2β) v_3⊗ v_3^⊥ . From the proof of Lemma <ref>, we see that there exist t_± such that t_- < 0 < t_+ and |F_t_± v_1| = 1. Since the line F_t is invariant with respect to the transformation ξ, one finds that it intersects the set ℳ_1 at F_t_- and F_t_+, and the set ℳ_2 at ξ(F_t_-) and ξ(F_t_+).
Since the intersection of ℳ_1 and ℳ_2 is the set SO(2), which is not crossed by the line F_t more than once, at least three of the matrices F_t_-, F_t_+, ξ (F_t_-), ξ (F_t_+) are different. Regarding the intersections of F_t with ℳ_2, there exist t_-' and t'_+ such that ξ (F_t_-)=F_t'_- and ξ (F_t_+)=F_t'_+. In view of the form of ξ (F_t) and α>0, we can see that t'_+<t'_-. Furthermore, since |F_tv_2|^2 is a quadratic function with a positive coefficient of the second-order term and |Fv_2|≥ 1, t'_- and t'_+ have the same sign. We consider the two cases for the sign of t_+' and t_-' separately, beginning with the case 0<t'_+<t'_-. Note that then t'_-≤ t_+ cannot happen, because then t_-<0<t_+'<t_-'≤ t_+, while |F_t|^2 is a quadratic function of t and |F_t_+|^2 =|ξ(F_t_+)|^2 =|F_t'_+|^2 along with |F_t_-|^2 =|ξ(F_t_-)|^2 =|F_t'_-|^2. Therefore, t_-<0<t'_+,t_+<t'_-, i.e., t_-=min{t_-,t_+,t'_-,t'_+} and t'_-=max{t_-,t_+,t'_-,t'_+}. In this case we show that we can construct optimal laminates between F_t_- and F_t'_-. Let F^±=R_±(I+γ_± v_1⊗ v_1^⊥) with γ_±∈ℝ and R_±∈ SO(2) be the matrices on ℳ_1 such that F_t_± = F^±. The calculations in Lemma <ref> show that W(F_t_±)=|F_t_±|^2-2=W(F^±)=|F^±|^2-2=γ_±^2 is equal either to h(|F^±v_3|)=h(|Fv_3|) or to h_+(|F^±v_3|)=h_+(|Fv_3|), depending on the value of γ_±. We notice that |F_t_+|≠|F_t_-|. Indeed, if this were not the case, the quadratic function |F_t|^2 would take the same value for more than two values of t, in view of |F_t_+|^2 =|F_t'_+|^2 and |F_t_-|^2 =|F_t'_-|^2. This implies {W(F_t_+),W(F_t_-)}={h(|Fv_3|),h_+(|Fv_3|)}. Now, W(F_t)=|F_t|^2-2 is a quadratic function of t with positive leading coefficient, W(F_t_+)=W(F_t_+'), and t_-<t_+',t_+<t_-'. In view of h(|Fv_3|)≤ h_+(|Fv_3|), we conclude that W(F_t_-)=W(F_t'_-)=h_+(|Fv_3|), which implies W^lc(F) ≤ h_+(|Fv_3|). The second case t'_+<t'_-<0 is analogous, leading to laminates between F_t_+ and F_t'_+. This finishes the proof of W^lc(F)≤ h_+(|Fv_3|) for F∈𝒩_1∖𝒩_2. The proof for F ∈𝒩_2∖𝒩_1 is identical except for exchanging the roles of v_1 and v_2. Finally, the proof of W^lc(F)≤ h_+^⊥(|Fv_3^⊥ |) is done in the same way, replacing the rank-one line F_t=F(I +t v_3 ⊗ v_3^⊥) by F_t=F(I +t v_3^⊥⊗ v_3). Proof of Proposition <ref> First we confirm that for F∈𝒩, i.e., for F∈ℝ^2× 2 with det F=1, we have W^rc(F) ≤ W^lc(F) and W^pc(F) ≤ W^lc(F). Indeed, the first inequality follows directly from the definition of the envelopes, while the second one follows by noting that we may restrict F_0,F_1 to matrices with determinant 1 in the definition (<ref>) of W^lc. Then if F=μ F_0+(1-μ)F_1 for μ∈ [0,1] and rank-one connected F_0,F_1∈𝒩, we have det F=μdet F_0 + (1-μ)det F_1=1, since the determinant is affine along rank-one lines. Since W^pc(F) can be expressed as a convex function of F and det F, we get W^pc(F) ≤μ W^pc(F_0) + (1-μ) W^pc(F_1) ≤μ W(F_0) + (1-μ) W(F_1). Taking the infimum over all rank-one connected F_0,F_1 proves the second inequality. Lemmas <ref>, <ref> and <ref>, together with Remark <ref>, imply W^pc(F)=W^rc(F)=W^lc(F)=max{h(|Fv_3|),h^⊥(|Fv_3^⊥|)} for F in the closure of 𝒜∪ (𝒩_1∩𝒩_2) ∪𝒜_⊥. Since, in view of Lemmas <ref>, <ref> and <ref>, W^lc is finite on 𝒩, Theorem 3.1 in <cit.> implies that W^qc≤ W^lc on 𝒩. Hence, taking into account Lemma <ref>, all the envelopes W^pc, W^qc, W^rc, W^lc are equal on the closure of 𝒜∪ (𝒩_1∩𝒩_2) ∪𝒜_⊥. The remaining claims of Proposition <ref>, pertaining to the values of the envelopes on the boundary and to their infinite values, follow from Lemma <ref> and the accompanying Remark <ref>.
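Before moving on, we record a small numerical sanity check (our own illustration with randomly sampled data; it is no substitute for the proofs above). The NumPy snippet below verifies the two algebraic identities introduced before Lemma <ref>, the inequality h(|F_γ^1 v_3|) ≤ γ^2, and the exact identity h^*(|F_γ^1 v_3|)=γ^2 for γ ≥ -cosθ/sinθ used in the proof of Lemma <ref>. The parametrization of v_1, v_2 by the half-angle θ and the sampling ranges are choices made only for this test.

import numpy as np

rng = np.random.default_rng(1)
perp = lambda v: np.array([-v[1], v[0]])

# identities |Fa|^2 |Fb|^2 = (Fa.Fb)^2 + (det F)^2 (a_perp.b)^2 and
# |F|^2 = (|Fa|^2 + |Fb|^2 - 2 (a.b)(Fa.Fb)) / (a_perp.b)^2
for _ in range(1000):
    F = rng.normal(size=(2, 2))
    a, b = rng.normal(size=2), rng.normal(size=2)
    Fa, Fb = F @ a, F @ b
    lhs = (Fa @ Fa) * (Fb @ Fb)
    rhs = (Fa @ Fb) ** 2 + np.linalg.det(F) ** 2 * (perp(a) @ b) ** 2
    assert abs(lhs - rhs) <= 1e-9 * (1.0 + abs(lhs))
    a, b = a / np.linalg.norm(a), b / np.linalg.norm(b)
    if abs(perp(a) @ b) > 1e-3:
        Fa, Fb = F @ a, F @ b
        val = (Fa @ Fa + Fb @ Fb - 2.0 * (a @ b) * (Fa @ Fb)) / (perp(a) @ b) ** 2
        assert abs(np.sum(F ** 2) - val) <= 1e-6 * (1.0 + np.sum(F ** 2))

# h(|F v3|) <= gamma^2 for F = I + gamma v1 (x) v1_perp, with h^* = gamma^2
# exactly whenever gamma >= -cos(theta)/sin(theta)
for _ in range(1000):
    th = rng.uniform(np.pi / 4 + 1e-2, np.pi / 2 - 1e-2)
    gam = rng.uniform(-3.0, 3.0)
    v1 = np.array([np.cos(th), -np.sin(th)])
    v2 = np.array([np.cos(th), np.sin(th)])      # angle 2*theta between v1, v2
    v3 = -(v1 + v2) / np.linalg.norm(v1 + v2)
    F = np.eye(2) + gam * np.outer(v1, perp(v1))
    z = np.linalg.norm(F @ v3)
    root = np.sqrt(max(z ** 2 - np.sin(th) ** 2, 0.0))  # clip rounding noise
    hstar = (1 + z ** 2 - 2 * np.cos(th) * root) / np.sin(th) ** 2 - 2
    h = 0.0 if z <= 1.0 else hstar
    assert h <= gam ** 2 + 1e-8
    if gam >= -np.cos(th) / np.sin(th):
        assert abs(hstar - gam ** 2) <= 1e-8 * (1.0 + gam ** 2)
print("all identity and envelope checks passed")

Sampling θ over (π/4, π/2) and γ over a symmetric interval exercises both the equality regime and the strict-inequality regime of the estimate.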
§ HOMOGENIZATION FOR GENERAL SLIPS In this section we assume that the angle between the slip directions is arbitrary and, based on the analysis of convex envelopes in Section <ref>, prove the partial homogenization result in Theorem <ref>, assuming that the Γ-limit is an integral functional. This assumption is needed because the general theory (see, e.g., <cit.>) that guarantees Γ-limits to be integral functionals applies only to sequences of functionals with standard growth conditions, while in our setting we deal with functionals whose admissible functions form a nonconvex subset of the underlying Sobolev space. First, we present generalized versions of the two pillars of the proof, namely, Proposition <ref> and Corollary <ref>. Let Ω = (0,l)^2 for l > 0 be a cube, and let (u_ϵ)_ϵ⊂ W^1,2(Ω;ℝ^2) be such that E_ϵ(u_ϵ)≤ C for all ϵ>0 and u_ϵ⇀ u in W^1,2(Ω;ℝ^2) for u ∈ W^1,2(Ω;ℝ^2) with gradient of the form (<ref>). If, in addition, u is piecewise affine, then lim inf_ϵ→ 0 E_ϵ(u_ϵ) ≥λ∫_Ωf( 1/λ(∇ u-(1-λ)R ) ) dx, where f(F) = max{ h(|Fv_3|), h^⊥ (|Fv_3^⊥|) }, with v_3 defined by (<ref>) and h,h^⊥ by (<ref>). In particular, for N:=1/λ(∇ u-(1-λ)R), lim inf_ϵ→ 0 E_ϵ(u_ϵ) ≥λ|Ω| h(|Nv_3 |) if N belongs to the closure of 𝒜∪(𝒩_1∩𝒩_2), and lim inf_ϵ→ 0 E_ϵ(u_ϵ) ≥λ|Ω| h^⊥(|Nv_3^⊥| ) if N belongs to the closure of 𝒜_⊥. As noted in the proof of Lemma <ref>, f is a convex function on ℝ^2× 2 and hence continuous. Moreover, Lemma <ref> and Remark <ref> show that f(∇ u_ϵ)=W(∇ u_ϵ) a.e. in ϵ Y_soft∩Ω for u_ϵ with finite energy. For such u_ϵ we additionally have f(∇ u_ϵ)=0 a.e. in ϵ Y_rig∩Ω. Therefore, beginning with lim inf_ϵ→ 0E_ϵ(u_ϵ) = lim inf_ϵ→ 0∫_ϵ Y_soft∩Ω W(∇ u_ϵ) dx = lim inf_ϵ→ 0∫_Ω f(∇ u_ϵ) dx , the arguments in the proof of Proposition <ref> remain valid, yielding the relation (<ref>). Finally, the analysis in Section <ref> implies λ|Ω| f(N)= λ|Ω| h(|Nv_3| ) if N belongs to the closure of 𝒜∪(𝒩_1∩𝒩_2), and λ|Ω| f(N)=λ|Ω| h^⊥(|Nv_3^⊥|) if N belongs to the closure of 𝒜_⊥. Let Ω⊂ℝ^2 be a bounded domain. If N belongs to the closure of 𝒜∪(𝒩_1∩𝒩_2)∪𝒜_⊥, let F_+, F_-∈ℳ_1∪ℳ_2 and μ∈(0,1) be such that N=μ F_+ +(1-μ) F_-. Specifically, (i) if N∈𝒜 or N∈𝒜_⊥, let F_-=F_s_⋆, F_+=F_t_⋆ with F_s_⋆, F_t_⋆ defined in the proof of Lemma <ref>, and a suitable μ; (ii) if N∈𝒩_1∩𝒩_2, let F_-=F_t_1,-, F_+=F_t_2,+ with F_t_1,-, F_t_2,+ defined in the proof of Lemma <ref>, and a suitable μ; (iii) if N∈ℳ_1∪ℳ_2, let F_+=F_-=N and μ∈(0,1) be arbitrary. Then for every δ >0 there exists Ω_δ⊂Ω with |Ω\Ω_δ|<δ and u_δ∈ W^1,∞(Ω;ℝ^2) such that u_δ coincides in Ω_δ with a simple laminate between F_+ and F_- with weights μ and 1-μ and period h_δ<δ, ∇ u_δ∈ℳ_1∪ℳ_2 a.e. in Ω, and u_δ=Nx on ∂Ω. Moreover, there is a constant c depending only on N, such that for any δ∈ (0,1), |∇ u_δ|≤ c a.e. in Ω . In particular, ∇ u_δ⇀ N in L^2(Ω;ℝ^2× 2) as δ→ 0. The proof is almost the same as for Corollary <ref>. Let us only deal with the case N∈𝒜, since a similar argument applies to the cases N∈𝒜_⊥ and N in the closure of 𝒩_1∩𝒩_2 minus ℳ_1∪ℳ_2. For the given F_+,F_- and δ >0, Theorem <ref> with p taken as v_3 yields a finitely piecewise affine function v_δ and a set Ω_δ. Since dist (∇ v_δ,[F_+,F_-])≤δ and 0<δ<1, there is c>0 independent of δ, such that |∇ v_δq|<c for q=v_3,v_3^⊥,v_1,v_2 . The desired function u_δ is obtained as a result of applying Theorem <ref> to modify v_δ in the finitely many subsets of Ω∖Ω_δ where it is affine. Taking any such subset S, there are three possible cases for the value of ∇ v_δ in S: either ∇ v_δ∈𝒜 or ∇ v_δ∈𝒜_⊥ or min{|∇ v_δv_1|,|∇ v_δv_2|} <1. We first investigate the case ∇ v_δ∈𝒜 in S.
Recalling that the condition |Fv_3|/|Fv_3^⊥|>sinθ/cosθ is equivalent to Fv_1· Fv_2 >0, we can apply Theorem <ref> to the in-approximation (U^δ_i)_i of 𝒦:=(ℳ_1∪ℳ_2)∩{F∈ℝ^2× 2:|Fv_3|≤ c, Fv_1· Fv_2 ≥ 0} defined as U_i^δ:={F∈ℝ^2× 2: det F=1, |Fv_3|<c, |Fv_1|>1,|Fv_2|>1, Fv_1· Fv_2>0} ∩{F∈ℝ^2× 2:|Fv_1|<1+2^-(i-1) or |Fv_2|<1+2^-(i-1)}, i∈ℕ. It can be shown similarly to Corollary <ref> that (U^δ_i)_i is an in-approximation of 𝒦. The case when ∇ v_δ∈𝒜_⊥ in S is handled analogously. The case when min{|∇ v_δv_1|,|∇ v_δv_2|} <1 in a subset S can be reduced to the proof of Lemma 2 of <cit.>. The estimate (<ref>) can also be proved in the same way as in the proof of Corollary <ref>. We now turn to the proof of Theorem <ref>. Noting that the set of admissible functions for the Γ-limit E coincides with the set of weak limits lim_j→∞ u_j of admissible sequences (u_j)_j for (E_ϵ_j)_j, we deduce from Proposition <ref> that E is finite exactly for functions u of the form (<ref>). Step 1: Liminf inequality for affine u. Let u be affine and let (u_j)_j be a sequence converging to u in L^2(Ω;ℝ^2) such that (E_ϵ_j(u_j))_j is bounded, where ϵ_j→ 0+ as j→∞. Then, taking a subsequence if necessary, u_j⇀ u in W^1,2(Ω;ℝ^2) and ∇ u=R(I+γ e_1⊗ e_2) for some R∈ SO(2) and γ∈ L^2(Ω). Similarly to the proof of Theorem <ref>, it is sufficient to consider the case when Ω is a square. If N=1/λ(∇ u-(1-λ)R) belongs to the closure of 𝒜∪(𝒩_1∩𝒩_2)∪𝒜_⊥, we obtain from Corollary <ref> lim inf_j→∞ E_ϵ_j(u_j) ≥λ |Ω| f(N). Step 2: Recovery sequence for affine u. Let u be affine. If N=1/λ(∇ u-(1-λ)R) belongs to the closure of 𝒜∪(𝒩_1∩𝒩_2)∪𝒜_⊥, Corollary <ref> can be used to perform convex integration as in the proof of Theorem <ref> (see Section <ref>), and to show the existence of a recovery sequence (u_j)_j. Then lim_j→∞E_ϵ_j(u_j)= λ |Ω| f(N). Step 3: W_hom=f on the closure of 𝒜∪(𝒩_1∩𝒩_2)∪𝒜_⊥. Let N belong to this set and take u affine such that 1/λ(∇ u-(1-λ)R)=N. By the assumption that the Γ-limit is an integral functional with density λ W_hom, there is a recovery sequence (u_j)_j such that lim_j→∞ E_ϵ _j(u_j)=λ |Ω|W_hom(N). From this and (<ref>), f(N)≤ W_hom(N). On the other hand, from Step 2, there is a recovery sequence (u_j)_j such that (<ref>) holds. Using the liminf inequality in the definition of the Γ-limit, we get lim inf_j→∞E_ϵ_j(u_j)≥λ |Ω|W_hom(N), and hence W_hom(N)≤ f(N). These two inequalities yield W_hom(N)=f(N). § APPENDIX §.§ Averaging Lemma The standard averaging lemma is a special case of the following claim. Let Ω be a bounded open set in ℝ^n, and let g_ϵ∈ L^2_loc(ℝ^n) be Y-periodic functions, where Y is an n-cube. If g_ϵ⇀ g in L^2(Y), then g_ϵ(x/ϵ) ⇀⟨ g⟩ :=1/|Y|∫_Yg(x) dx in L^2(Ω) . We prove this theorem by generalizing the proof of the standard averaging lemma in <cit.>, see also <cit.>. Since C^∞_0(Ω) is dense in L^2(Ω), we only need to prove the following for every θ∈ C^∞_0(Ω): ∫_Ωg_ϵ(x/ϵ)θ(x) dx →⟨ g ⟩∫_Ωθ(x) dx. First, we show the uniform boundedness of g_ϵ(x/ϵ) in L^2(Ω). To this end, we introduce the notation ϵ Y_i :=x^ϵ_i+ϵ Y , where x^ϵ_i∈ϵℤ^n∩Ω, i.e., {x^ϵ_i}_i is the set of grid points separated by the distance ϵ. Then ∫_Ω|g_ϵ(x/ϵ)|^2 dx=∑_i ∫_ϵ Y_i∩Ω|g_ϵ(x/ϵ)|^2 dx≤(|Ω|/ϵ^n)∫_Y|g_ϵ(y)|^2 ϵ^n dy= |Ω| ‖g_ϵ‖^2_L^2(Y). Since g_ϵ⇀ g, there is C>0 such that ‖g_ϵ(x/ϵ)‖_L^2(Ω)≤ C. We define a piecewise constant interpolation of θ by θ_ϵ(x)=θ(x^ϵ_i), x∈ϵ Y_i ∩Ω. By the Cauchy–Schwarz inequality and (<ref>), ∫_Ω|g_ϵ(x/ϵ)(θ(x) - θ_ϵ(x))| dx ≤‖g_ϵ(x/ϵ)‖_L^2(Ω)‖θ - θ_ϵ‖_L^2(Ω)≤ C‖θ - θ_ϵ‖_L^2(Ω) . Since θ has compact support in Ω, making ϵ small enough we can ignore the cubes ϵ Y_i intersecting the boundary ∂Ω.
Thus, ∫_Ωg_ϵ(x/ϵ) θ_ϵ(x) dx=∑_i∫_ϵ Y_ig_ϵ(x/ϵ) θ_ϵ(x) dx= ⟨ g_ϵ⟩∫_Ωθ_ϵ(x) dx . In view of ‖θ - θ_ϵ‖_L^2(Ω)→ 0 and ⟨ g_ϵ⟩∫_Ωθ_ϵ(x) dx →⟨ g ⟩∫_Ωθ(x) dx as ϵ→ 0, the convergence (<ref>) follows from (<ref>) and (<ref>). §.§ Proof of Lemma <ref> We recall the statement: let v_2=v_1^⊥ and F=A+D, where D=Rγ_1 e_1⊗ e_2 and either A=R(I+γ_2 v_2⊗ v_1) or A=R(I+γ_2 v_1⊗ v_2) for some γ_1, γ_2 and R∈ SO(2). Then there exists a constant c such that f(F) ≤ |F|^2 - 2 +c(√(|D|)+|D|)( √(|A|)+ |A|+√(|D|)+|D|) . We focus on the case |Av_2|=1, i.e., A=R(I+γ_2 v_2⊗ v_1), since the remaining case |Av_1|=1 is analogous. We can also assume R=I, since rotations do not change any of the terms appearing in the inequality to be proved. First, we denote the components of v_i by (v_i^(1),v_i^(2)) and record a few preliminary calculations: |Fv_2|^2 = |v_2+γ_1v_2^(2)e_1|^2 = 1 + 2γ_1v_2^(1)v_2^(2)+γ_1^2(v_2^(2))^2, |Fv_1|^2 = |v_1+γ_1v_1^(2)e_1+γ_2v_2|^2 = 1 + γ_2^2 + γ_1^2 (v_1^(2))^2 + 2γ_1v_1^(1)v_1^(2)+2γ_1γ_2v_1^(2)v_2^(1), |F|^2 = 2+γ_1^2+γ_2^2 +2γ_1γ_2 v_2^(1) v_1^(2), |A|^2 = 2+ γ_2^2, |D|^2 = γ_1^2, det F = 1- γ_1γ_2 v_2^(2)v_1^(1). Recalling that f(F) = max{ (|Fv_1|^2-1)_+, (|Fv_2|^2-1)_+, χ(max{|Fv_3|,|Fv_3^⊥|}) }, it is enough to prove the following three inequalities: (|Fv_1|^2-1)_+ ≤ |F|^2 - 2 +c(|D|+|A||D|), (|Fv_2|^2-1)_+ ≤ |F|^2 - 2 +c|A||D|, χ(max{|Fv_3|,|Fv_3^⊥|}) ≤ |F|^2 - 2 + c(A,D), where c(A,D) stands for c(√(|D|)+|D|)( √(|A|)+ |A|+√(|D|)+|D|). When |Fv_1|≥ 1, since |F|^2=|Fv_2|^2+|Fv_1|^2, inequality (<ref>) is equivalent to -|Fv_2|^2+ 1 ≤ c( |D|+ |A||D|), which is true in view of the preliminary calculations because -|Fv_2|^2+ 1 = -2γ_1v_2^(1)v_2^(2)-γ_1^2(v_2^(2))^2 ≤ 2|γ_1|≤ 2|D|. On the other hand, when |Fv_1|<1, we have to prove the inequality -|F|^2 +2 ≤ c( |D|+|A||D|), which is again true due to -|F|^2+ 2 =-γ_1^2-γ_2^2 -2γ_1γ_2 v_2^(1) v_1^(2)≤ 2|γ_1| |γ_2| ≤ 2 |D||A|. The reasoning for the second inequality (<ref>) is similar and we omit it. To address the third inequality (<ref>), we first notice that max{|Fv_3|^2,|Fv_3^⊥|^2} = 1/2(|Fv_1|^2 + |Fv_2|^2+ 2|Fv_1· Fv_2|) ≥ 1 because |Fv_1|^2 + |Fv_2|^2 =|F|^2 = 2+γ_1^2+γ_2^2 +2γ_1γ_2 v_2^(1) v_1^(2)≥ 2+(|γ_1|-|γ_2|)^2 . Hence we can omit the plus subscripts in χ(z)=((2z^2-1)_+^1/2-1)_+^2 for z^2=max{|Fv_3|^2,|Fv_3^⊥|^2} and write χ(max{|Fv_3|,|Fv_3^⊥|}) =(√(|Fv_1|^2+|Fv_2|^2+2√(|Fv_1|^2|Fv_2|^2-(det F)^2)-1) -1)^2 . Setting x := |Fv_1|^2, δ_1 := |Fv_2|^2-1, δ_2 := |Fv_1|^2(|Fv_2|^2-1) +1-(det F)^2 for brevity, this transforms into χ(max{|Fv_3|,|Fv_3^⊥|}) =(√(x+δ_1 +2√(x-1+δ_2)) -1)^2 . For now we assume that |Fv_1|^2≥ 1 to estimate (√(x+δ_1 +2√(x-1+δ_2)) -1)^2 - (√(x+2√(x-1)) -1)^2 ≤(√(x+|δ_1| +2√(x-1)+2√(|δ_2|)) -1)^2 - (√(x+2√(x-1)) -1)^2 = (√(x+|δ_1| +2√(x-1)+2√(|δ_2|)) -√(x+2√(x-1))) ×(√(x+|δ_1| +2√(x-1)+2√(|δ_2|)) + √(x+2√(x-1)) -2). Denoting the content of the last bracket by b, we continue = (|δ_1| + 2√(|δ_2|))/(√(x+|δ_1| +2√(x-1)+2√(|δ_2|))+√(x+2√(x-1)))× b ≤ (|δ_1| + 2√(|δ_2|))/(((1+√(2))/√(2))(√(x-1)+1)+(1/√(2))√(|δ_1| + 2√(|δ_2|)))× b ≤( 2√(x-1) + √(|δ_1| + 2√(|δ_2|))) ( |δ_1| + 2√(|δ_2|))/((2/√(2))√(x-1)+(1/√(2))√(|δ_1| + 2√(|δ_2|))) ≤√(2)( |δ_1| + 2√(|δ_2|)), where the last three inequalities follow from √(a+b)≥(1/√(2))(√(a)+√(b)) for a,b≥ 0, from the fact that √(x+2√(x-1)) = √(x-1)+1, and from b ≤ 2√(x+2√(x-1)) + √(|δ_1| +2√(|δ_2|)) -2 = 2 √(x-1) +√(|δ_1| + 2√(|δ_2|)) . The above estimates imply χ(max{|Fv_3|,|Fv_3^⊥|}) ≤ |Fv_1|^2 -1 +√(2)( |δ_1| + 2√(|δ_2|)), and since we have already estimated |Fv_1|^2-1, it remains to show that δ_1, δ_2 have the right order.
This follows from |δ_1| = | 2γ_1v_2^(1)v_2^(2)+γ_1^2(v_2^(2))^2 | ≤ 2|D|+|D|^2 , |δ_2| = | ( 1 + γ_2^2 + γ_1^2 (v_1^(2))^2 + 2γ_1v_1^(1)v_1^(2)+2γ_1γ_2v_1^(2)v_2^(1))( 2γ_1v_2^(1)v_2^(2)+γ_1^2(v_2^(2))^2 ) + 1 -( 1-γ_1γ_2 v_2^(2)v_1^(1))^2| ≤ 4(|D|+|D|^2)^2+2|D||A|(1+|D|)^2 + 2|D||A|^2+2|D|^2|A|^2 . Next, when |Fv_1|^2<1, we set x := |Fv_2|^2 >1, δ_1 := |Fv_1|^2-1 <0 and δ_2 := |Fv_2|^2(|Fv_1|^2-1) +1-(det F)^2 ≤ 1-(det F)^2 ≤ 2γ_1 γ_2 v_2^(2) v_1^(1), and an analogous estimation process leads to χ(max{|Fv_3|,|Fv_3^⊥|}) ≤ |Fv_2|^2 -1 + 4√(|γ_1 γ_2 v_2^(2) v_1^(1)|), which is the desired estimate in view of having already estimated |Fv_2|^2-1. §.§ Proof of Remark <ref> There is a constant c depending only on λ such that |W_hom(γ_1) - W_hom(γ_2)| ≤ c(1+|γ_1|+|γ_2|) |γ_1-γ_2| , where we write W_hom(γ) instead of W_hom(∇ u) with ∇ u = R(I+γ e_1⊗ e_2). Let (a,b) stand for the components of the vector v_1. Then v_2 has components (-b,a). For ∇ u = R(I+γ e_1⊗ e_2) we have N=R(I+γ/λ e_1⊗ e_2) and thus |Nv_2|^2 = 1 - 2ab γ/λ + a^2 γ^2/λ^2, |Nv_1|^2 = 1 + 2ab γ/λ + b^2 γ^2/λ^2, |N(v_1 ± v_2)|^2 = 2 ± 2γ/λ(a^2-b^2) + γ^2/λ^2(1 ± 2ab). Then for ab >0, |Nv_2|≤ 1 ⇔ 0 ≤γ≤ 2λ b/a, |Nv_1|≤ 1 ⇔ -2λ a/b≤γ≤ 0, |Nv_2|, |Nv_1| ≥ 1 ⇔ γ≥ 2λ b/a or γ≤ -2λ a/b, |N(v_1+v_2)| ≥ |N(v_1-v_2)| ⇔ γ≥max{ 0, λ(b^2-a^2)/(ab)} or γ≤min{ 0, λ(b^2-a^2)/(ab)}, where the last inequality |N(v_1+v_2)| ≥ |N(v_1-v_2)| always holds when |Nv_2|≥ 1 and |Nv_1|≥ 1. An analogous calculation is done for the case ab<0, leading to the following expression for W_hom when ab >0: W_hom(N) = 2ab γ/λ + b^2 γ^2/λ^2 when 0 ≤γ≤ 2λ b/a; W_hom(N) = -2ab γ/λ + a^2 γ^2/λ^2 when -2λ a/b≤γ≤ 0; W_hom(N) = (√((1+2(a^2-b^2)γ/λ +(1+2ab) γ^2/λ^2)_+) -1 )_+^2 otherwise; and when ab <0: W_hom(N) = 2ab γ/λ + b^2 γ^2/λ^2 when 2λ b/a≤γ≤ 0; W_hom(N) = -2ab γ/λ + a^2 γ^2/λ^2 when 0 ≤γ≤ -2λ a/b; W_hom(N) = (√((1+2(b^2-a^2)γ/λ +(1-2ab) γ^2/λ^2)_+) -1 )_+^2 otherwise. The + subscript can be removed in the given ranges because, for example in the setting ab >0, for γ≥ 2λ b/a one can estimate 1+ 2(a^2-b^2)γ/λ +(1+2ab) γ^2/λ^2 = (1+γ/λ)^2 +2ab(γ/λ)( γ/λ - 2 b/a)≥ 1, and similarly for the remaining cases. Estimating the derivative with respect to γ when ab>0, we see that * for 0 ≤γ≤ 2λ b/a: | ∂ W_hom/∂γ| = | 2ab/λ + (2b^2/λ^2)γ| ≤(2/λ^2) (1+|γ|), * for -2λ a/b≤γ≤ 0: | ∂ W_hom/∂γ| = | -2ab/λ + (2a^2/λ^2)γ| ≤(2/λ^2) (1+|γ|), * otherwise: setting R=√(1+2(b^2-a^2)γ/λ +(1-2ab) γ^2/λ^2)≥ 1, | ∂ W_hom/∂γ| = | 2 (R -1 )( (b^2-a^2)/λ +(1-2ab)γ/λ^2)/R| ≤(2/λ^2) (1+|γ|) . The computation is analogous for ab<0. This means that there is a constant c(λ) such that for all γ_1,γ_2∈ℝ, |W_hom(γ_1) - W_hom(γ_2)| ≤ c(1+|γ_1|+|γ_2|) |γ_1-γ_2| . Acknowledgement: This research was supported by JSPS Kakenhi Grant numbers 19K03634 and 18H05481. Attouch1984 H. Attouch: Variational Convergence for Functions and Operators. Applicable Mathematics Series. Pitman (Advanced Publishing Program), Boston (1984). Benesova2017 B. Benešová, M. Kružík: Weak lower semicontinuity of integral functionals and applications. SIAM Rev. 59, pp. 703-766 (2017). Berlyand2018 L. Berlyand, V. Rybalko: Getting Acquainted with Homogenization and Multiscale. Compact Textbooks in Mathematics, Springer (2018), pp. 23-24. Bouchitte2002 G. Bouchitté, M. Bellieud: Homogenization of a soft elastic material reinforced by fibers. Asymptot. Anal. 32(2), pp. 153-183 (2002). Braides1999 A. Braides, A. Defranceschi: Homogenization of Multiple Integrals. Oxford Lecture Series in Mathematics and Its Applications (1999), pp. 77-98. Braides1995 A. Braides, A.
Garroni: Homogenization of periodic nonlinear media with stiff and soft inclusions. Math. Models Methods Appl. Sci. 5(4), pp. 543-564 (1995). Carstensen2002 C. Carstensen, K. Hackl, A. Mielke: Non-convex potentials and microstructures in finite-strain plasticity. In: Proceedings of the Royal Society of London. Series A: Mathematical, Physical and Engineering Science, vol. 458, no. 2018, pp. 299-317 (2002). Cherdantsev2012 M. Cherdantsev, K.D. Cherednichenko: Two-scale Γ-convergence of integral functionals and its application to homogenisation of nonlinear high-contrast periodic composites. Arch. Ration. Mech. Anal. 204(2), pp. 445-478 (2012). Christowiak F. Christowiak, C. Kreisbeck: Homogenization of layered materials with rigid components in single-slip finite crystal plasticity. Calc. Var. Partial Differential Equations 56(75) (2018). Conti2016 S. Conti, G. Dolzmann: Relaxation in crystal plasticity with three active slip systems. Continuum Mechanics and Thermodynamics volume 28 (2016), pp. 1477-1494. Conti2013 S. Conti, G. Dolzmann and C. Kreisbeck: Relaxation of a model in finite plasticity with two slip systems. Mathematical Models and Methods in Applied Sciences 23(11), pp. 2111-2128 (2013). Conti2015 S. Conti, G. Dolzmann, C. Kreisbeck: Variational modeling of slip: from crystal plasticity to geological strata. In: Analysis and Computation of Microstructure in Finite Plasticity (S. Conti and K. Hackl eds.). Lect. Notes Appl. Comput. Mech. 78, pp. 31-62 (2015). ContiTheil2005 S. Conti, F. Theil: Single-slip elastoplastic microstructures. Arch. Ration. Mech. Anal 178(1), pp. 125-148 (2005). Dacorogna B. Dacorogna: Direct Methods in the Calculus of Variations. Springer, Berlin (1989). Davoli2015 E. Davoli, G. Francfort: A critical revisiting of finite elasto-plasticity. SIAM J. Math. Anal. 47, pp. 526-565 (2015). DeGiorgi1975 E. De Giorgi: Sulla convergenza di alcune successioni d’integrali del tipo dell’area. Rend. Mat. 6(8), pp. 277-294 (1975). Drozdenko2022 D. Drozdenko, M. Knapek, M. Kruzik, K. Mathis, K. Svadlenka, J. Valdman: Elastoplastic deformations of layered structures, Milan Journal of Mathematics 90, pp. 691-706 (2022). Francfort2014 G. Francfort, A. Giacomini: On periodic homogenization in perfect elasto-plasticity. J. Eur. Math. Soc. (JEMS) 16(3), pp. 409-461 (2014). Gurtin2000 M. E. Gurtin: On the plasticity of single crystals: free energy, microforces, plastic-strain gradients. J. Mech. Phys. Solids. 48, pp. 989-1036 (2000). Hagihara2019 K. Hagihara, M. Yamasaki, Y. Kawamura, T. Nakano: Strengthening of Mg-based long-period stacking ordered (LPSO) phase with deformation kink bands, Materials Science and Engineering: A, Volume 763, 138163 (2019). Kroner1960 E. Kröner: Allgemeine Kontinuumstheorie der Versetzungen und Eigenspannungen. Arch. Ration. Mech. Anal. 4, pp. 273-334 (1960). Kruzik2019 M. Kružík, T.Roubíček: Mathematical Methods in Continuum Mechanics of Solids. Springer, Switzerland (2019). Lee1969 E. H. Lee: Elastic-plastic deformation at finite strains. J. Appl. Mech. 36, pp. 1-6 (1969). Low2021 I. M. Low, Y. Dong (editors): Composite Materials, Elsevier (2021). Lukkassen2002 D. Lukkassen, P. Wall: On weak convergence of locally periodic functions. J. Nonlinear Math. Phys. 9(1), pp. 42-57 (2002). Mantic2022 V. Mantic: Mathematical Methods and Models in Composites. World Industries Scientific Publishing (2022). Mielke2015 A. Mielke, T. Roubíček. Rate-Independent Systems: Theory and Application. Springer, New York (2015). Milton2002 G. W. 
Milton: The Theory of Composites. Cambridge Monographs on Applied and Computational Mathematics, vol. 6. Cambridge University Press, Cambridge (2002). Morrey1966 C. B. Morrey: Multiple Integrals in the Calculus of Variations. Springer-Verlag, Berlin, (1966). Muller1987 S. Müller: Homogenization of nonconvex integral functionals and cellular elastic materials. Arch. Ration. Mech. Anal. 99, pp. 189-212 (1987). Muller1999 S. Müller: Variational models for microstructure and phase transitions. Calculus of Variations and Geometric Evolution Problems, eds. F. Bethuel et al., Springer Lecture Notes in Math., Vol. 1713 (Springer-Verlag, 1999), pp. 85-210. MullerSverak1999 S. Müller, V. Šverák: Convex integration with constraints and applications to phase transitions and partial differential equations. J. Eur. Math. Soc. (JEMS) 1(4), pp. 393-422 (1999) . Ortiz1999 M. Ortiz, E.A. Repetto: Nonconvex energy minimization and dislocation structures in ductile single crystals. J. Mech. Phys. Solids 47(2), pp. 397-462 (1999). Plummer2021 G. Plummer, H. Rathod, A. Srivastava, M. Radovic, T. Ouisse, M. Yildizhan, P.O.A. Persson, K. Lambrinou, M.W. Barsoum, G.J. Tucker: On the origin of kinking in layered crystalline solids. Mater. Today 43, pp. 45-52 (2021). Reshetnyak1968 Y. Reshetnyak: Weak convergence and completely additive vector functions on a set. Sibir. Math. 9, pp. 1039-1045 (1968). Wadee2004 M. A. Wadee, G.W. Hunt, M.A. Peletier: Kink band instability in layered struc- tures. J. Mech. Phys. Solids 52, pp. 1071-1091 (2004).
http://arxiv.org/abs/2407.01820v1
20240701214044
Exploring the Role of Randomization on Belief Rigidity in Online Social Networks
[ "Adiba Mahbub Proma", "Neeley Pate", "Raiyan Abdul Baten", "Sifeng Chen", "James Druckman", "Gourab Ghoshal", "Ehsan Hoque" ]
cs.SI
[ "cs.SI" ]
Exploring the Role of Randomization on Belief Rigidity in Online Social Networks Adiba Mahbub Proma, Neeley Pate, Raiyan Abdul Baten, Sifeng Chen, James Druckman, Gourab Ghoshal, Ehsan Hoque 1 July 2024 =================================================================== § ABSTRACT People often stick to their existing beliefs, ignoring contradicting evidence or interacting only with those who reinforce their views. Social media platforms often facilitate such tendencies toward homophily and echo chambers, as they promote highly personalized content to maximize user engagement. However, increased belief rigidity can negatively affect real-world policy decisions, for instance by contributing to climate change inaction and increased vaccine hesitancy. To understand and effectively tackle belief rigidity in online social networks, it is crucial to design and evaluate intervention strategies, and increasing randomization in the network can be considered one such intervention. In this paper, we empirically quantify the effects of a randomized social network structure on belief rigidity, specifically examining the potential benefits of introducing randomness into the network. By passively sensing belief rigidity through our experimental framework, we show that individuals' beliefs are positively influenced by peer opinions, regardless of whether those opinions are similar to or different from their own. Moreover, people incorporate a slightly wider variety of peers (based on their opinions) into their networks when the recommendation algorithm provides them with diverse content than when it provides them with similar content. Our results indicate that in some cases there might be benefits to randomization, providing empirical evidence that a more randomized network could be a feasible way of helping people get out of their echo chambers. Our findings have broader implications for computing and for the platform design of social media, and can help combat overly rigid beliefs in online social networks. § INTRODUCTION Humans tend to stubbornly stick to their beliefs and pre-existing opinions, often dismissing information that contradicts them in order to avoid cognitive dissonance <cit.>. Belief rigidity can be understood as how likely an individual is to change their stance in light of new information or opinions <cit.>. While rigid beliefs can ensure consistency in behavior <cit.>, harmful rigid beliefs can lead to societal problems. For example, strong beliefs in vaccine conspiracy theories have been shown to increase vaccine hesitancy <cit.>, making it harder to achieve herd immunity during the COVID-19 pandemic <cit.>. Moreover, people tend to seek out others with shared beliefs, confirming their own biases through homophilic tendencies and forming echo chambers <cit.>. This is especially prevalent on social media platforms, which are generally designed to optimize for user engagement <cit.>. Social media platforms suggest similar content <cit.>, thus limiting exposure to diverse perspectives <cit.>. Researchers found that the median Facebook user received most of their content from like-minded sources: 50.4%, compared to 14.7% from cross-cutting sources <cit.>. Mutual reinforcement among like-minded peers can result in increasingly stubborn beliefs over time, leading to polarization and an increased susceptibility to accepting and spreading misinformation <cit.>. This can have real-life implications for which policies people support, whom they vote for, and how they perceive others in society.
For instance, political inaction on climate change has often been attributed to partisan echo chambers formed to challenge the consequences of climate change <cit.>. It is therefore essential to study belief rigidity within the context of online social networks and to design interventions that reduce it. In this paper, we aim to quantify the effects of a randomized social network structure on belief rigidity, specifically examining the potential benefits of introducing randomness into network connections. We consider increased randomization an intervention aimed at increasing exposure to diverse perspectives. Our experimental framework enables passive sensing of private belief rigidity, thus allowing us to better understand individual belief rigidity. By introducing more randomization into the network, our study explores whether deviating from algorithmically curated personalized content can encourage individuals to consider a broader range of perspectives. Theoretical studies on opinion dynamics in social networks suggest that it is essential to consider the combined effect of human social influence and algorithmic decisions to understand how opinions evolve in online social networks <cit.>. While our study focuses on belief rigidity, we can still draw from this literature on opinion dynamics. Building on the idea that both social influence and algorithmic choices have an impact, our randomization condition is designed to increase randomization both in the human social-influence aspect and in the recommendation algorithm. Our research questions are: * RQ1: How does exposure to a more diverse spectrum of opinions influence belief rigidity? * RQ2: Can a more randomized recommendation convince people to be more open to incorporating diverse views in their networks? Typically, social media analysis has been used to understand social signals on social platforms, including the impact of similar content <cit.>, the impact of recommendation algorithms <cit.>, and interaction with political content <cit.>. Different experiments have also been conducted on social media platforms to understand polarization <cit.>. However, studying changes in beliefs through social media analysis and experiments is difficult because beliefs are private. Data from social media can only provide an estimate of belief change, since online behavior may differ from people's private beliefs <cit.>. For example, people may appear more stubborn in their stance than they are in real life in order to prove a point to others <cit.>. Social media analysis of Reddit's subreddit r/ChangeMyView sheds some light on how individuals change opinions <cit.>. However, the individuals engaged in this subreddit are open to changing their stance, which is usually not the case. Therefore, we take an experimental approach to answer our research questions. Our experimental framework can help measure private belief since participants have no motive to change their beliefs unless they really want to. Participants are given prompts, which they rate on a 7-point Likert scale while explaining the reason for their rating. Next, they are shown the answers of other participants and asked whether they would like to change their rating. This change in rating represents a change in belief and can be used to understand social influence. Participants are then asked to select three participants, from among those recommended, whose responses they would like to see in the next round.
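To make the rating-revision step concrete, the following is a minimal sketch of how a single belief update could be represented. The class and function names are illustrative assumptions of this sketch, not the authors' actual implementation (which was built with Empirica, as described in Materials and Methods).

```python
from dataclasses import dataclass

@dataclass
class ResponseEntry:
    participant_id: int
    rating: int  # 7-point Likert scale, 0 = "strongly disagree" ... 6 = "strongly agree"
    reason: str  # free-text justification, at most 400 characters

def belief_update(initial_rating: int, updated_rating: int) -> int:
    """Difference between the stage-2 (post-peer-exposure) and stage-1 ratings;
    a nonzero value means the participant revised their stated belief."""
    return updated_rating - initial_rating

# A participant initially rates a prompt 2, sees three peer entries, and revises to 3:
print(belief_update(2, 3))  # -> 1, a shift of one Likert point toward the peers
```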
Analyzing participant selections helps in understanding the role of recommendation algorithms. Our experimental design consists of two conditions. In the first condition (Condition 1, C1: similar condition), we emulate traditional social media networks by recommending similar users and showing users that participants select themselves. In the second condition (Condition 2, C2: randomized condition), we increase randomization by recommending randomly and showing only 1/3 of the peers that participants selected (the rest are randomized) (Figure <ref>). Our analysis shows that peer opinions influence individuals' beliefs irrespective of whether those opinions are similar to or diverse from their own. Moreover, given a choice and access to a wide variety of viewpoints through the recommendation algorithm, individuals tend to incorporate some of those diverse viewpoints into their networks, but generally prefer views similar to their own. In both conditions, participants were more likely to follow people with similar views than not, but the effect was stronger in the similar condition group. Therefore, our results suggest that increased randomization can help individuals be slightly more open to differing views, while also highlighting people's homophilic tendency. Our results have broader implications for computing and platform design, since most social media platforms tend to offer highly personalized content, thus facilitating people's homophilic tendencies. While personalization has its merits, it can have unintended negative consequences for belief rigidity and polarization if not implemented carefully. Our findings provide empirical evidence that a more randomized network might be better suited to positively impact belief rigidity, and could help reduce echo chambers and polarization. § RESULTS To answer our research questions, we compare two conditions: Condition 1 (C1: similar condition) is designed to prioritize similar users in the network, while Condition 2 (C2: randomized condition) is designed to increase randomization in the network. This means that an individual in C1 is more likely to encounter perspectives closely aligned with their own, compared to a participant in C2, who is presented with a wider array of differing opinions and views. We conducted five batches of each condition, recruiting a total of 163 participants using Prolific Academia (C1: 82, C2: 81). Participants must all join at the same time online, and the experiment takes approximately 60 minutes. Note that all correlations calculated are Pearson's correlation, ρ. §.§ Study design Our study consists of five rounds; in each round, participants are given one prompt regarding climate change actions. Each round has three stages: response, revision, and selection (Figure <ref>). In the response stage (stage 1), participants see a prompt regarding spending money to mitigate climate change; they are asked to rate the statement on a 7-point Likert scale and provide reasoning for their rating. The 7-point Likert scale ranges from "strongly disagree" to "strongly agree". For the rest of the paper, we refer to each rating and its accompanying explanation as a "response entry". In the revision stage (stage 2), participants are shown the response entries of selected participants (referred to as "peers") and are given the option to change their rating; we will refer to this as the "updated rating".
Note that participants are provided with their prior rating from stage 1 (response stage), thus ensuring that their change in rating is indeed intentional. For the first round of the similar condition (C1), we assign participants similar peers based on their ratings, and for the following rounds, participants see the response entries of peers they selected in stage 3 (selection stage) in the immediate previous round. For the first round of the randomized condition (C2), participants are randomly assigned three peers, and in the following rounds, they only see one of the peers they selected in stage 3 (selection stage). The other two peers are randomly assigned from those they did not select. In the selection stage (stage 3), participants are recommended response entries of other participants, and they also see the response entries of participants they saw in stage 2. For C1, participants are recommended similar response entries, while for C2, they are recommended randomly. Participants must select (i.e., “follow") three peers whose answers they would like to see in stage 2 of the next round. They can stick to previous connections or pick new ones (or do a mix of both). However, as mentioned previously, only C1 participants can see the response entries of those they followed in stage 2 of the next round. C2 participants only get one of the peers they followed, and the other two are randomly selected. §.§ Peer opinions influence belief, even when they are different from one's own Aggregating across both conditions, our analysis shows that participants update their beliefs 25.13 percent of the time (C1 = 22.86 percent, C2 = 27.36 percent; no significant difference between the two conditions according to two-sample z-test) (Figure <ref>), showing that beliefs are rigid for most people (Figure <ref>). Despite this rigidity, peer responses in stage 2 affect individual shifts in belief, consistent with existing literature that suggests that beliefs are indeed influenced by peer beliefs <cit.>. Our analysis shows that individual belief changes to be more similar to the beliefs of their peers. We define δ R_i as the difference between the participant's updated Likert rating in stage 2 and the initial Likert rating in stage 1 for a specific participant, i. Moreover, we define social signal, S_i as the average rating of the peers in stage 2, and δ S_i as the difference between S_i and the initial Likert rating of the participant. We find positive correlation between δ S_i and δ R_i, showing that peers can influence individual beliefs. For similar condition group (C1), ρ is 0.338 (p: 2.378e-11) [CI: 0.244, 0.425], and for randomized condition group (C2), ρ is 0.310 (p: 2.140e-10) [CI: 0.219, 0.396]. Note that in C2, we intentionally show participants response entries they did not select for themselves. Yet, the ρ is similar across both conditions, suggesting that people are likely to be influenced by their peers' opinions, regardless of whether these opinions align with their own or not. A stronger indication of this phenomenon is seen when we consider only instances where participants did change their responses. In such instances, we observe a significant increase in correlation for both conditions. For C1, ρ is 0.624 (p: 2.378e-11)[CI:0.475, 0.738], and for C2, ρ is 0.622 (p: 2.140e-10) [CI: 0.493, 0.725]. However, individuals often do not realize the impact of peer influence on their beliefs. 
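The peer-influence statistics reported above reduce to a Pearson correlation between the peer-distance signal δS_i and the belief update δR_i. A minimal sketch of that computation follows; the data layout and the use of scipy are assumptions of this illustration, not the authors' analysis code.

```python
import numpy as np
from scipy import stats

def peer_influence_correlation(initial, updated, peer_ratings):
    """initial, updated: per-observation stage-1 and stage-2 Likert ratings;
    peer_ratings: (n_observations, 3) array of the three peer ratings
    shown in the revision stage."""
    initial = np.asarray(initial, dtype=float)
    updated = np.asarray(updated, dtype=float)
    S = np.asarray(peer_ratings, dtype=float).mean(axis=1)  # social signal S_i
    delta_S = S - initial        # how far the peers are from the participant
    delta_R = updated - initial  # the participant's belief update
    return stats.pearsonr(delta_S, delta_R)  # (rho, p-value)
```

Restricting the same computation to observations with `delta_R != 0` reproduces the "changed responses only" analysis described above.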
In stage 2, participants select from a predetermined list the reason behind the change in their stance (or the lack thereof). Only a small percentage selects that peer responses influenced their change in rating (13.25% in C1, and 16.58% in C2) compared to the percentage that actually changed their rating (no statistical difference between the proportions in the two conditions as per proportion z-test). When considering only participants that did change their ratings, we see that only about half attributed that change to peer influence (47.25% in C1, 50.91% in C2; no statistical difference between the proportions as determined by proportion z-test). Interestingly, participants in Condition 1 (C1) reported not changing their minds a higher percentage of times than those in Condition 2 (C2). For C1, 83.90% of the responses included “I did not change my mind", while for C2, the percentage was 75.88% (p-value of proportion z-test: 0.005). We can interpret this analysis as some indication that the randomized condition group (C2), where participants are presented with a wider range of viewpoints, is more “aware" of their belief shifting. This insight is particularly compelling when we also consider that there was no significant difference between the two conditions regarding how often participants change their responses. It must be noted that most individuals do not drastically change their beliefs. Instead, the shift in belief is more gradual, consistent with previous literature <cit.>. Plotting the change in Likert scale for every instance, we see an unimodal distribution, peaking at 0 (Figure <ref>), with more extreme values being less common. Note that the scale never reaches 6 or -6, also showing that the shift in belief is not drastic. There is no significant difference between the distribution of change in Likert rating between the two conditions (as shown by Levene's Test of variance). The same phenomenon is seen when considering the distribution of the average change in Likert rating for each participant (Figure <ref>) as the distribution peaks at 0. In summary, our results suggest that despite individual beliefs being rigid, peer opinions can influence individuals, irrespective of what the opinion entails, providing empirical evidence for RQ1. §.§ Introducing randomness while recommending can increase incorporation of diverse views into individual's networks Our findings indicate that participants tend to choose responses that align with their own, meaning they usually follow others who share similar opinions. This tendency is particularly noticeable in the randomization condition, where a wide variety of opinions are presented. We calculate the Belief Network Distance (defined in Equation <ref>) for both those followed in stage 3 (B_i_followed, see Equation <ref>) and those not followed but shown in stage 3 (B_i_notfollowed, see Equation <ref>) and compare them. The Belief Network Distance signifies how far the average belief of the peers in the user's network for that round is from the participant's belief. Denoting B_i as the Belief Network Distance for each participant i, mean B_i_followed is 1.201 and mean B_i_notfollowed is 1.226 for C1. There is no significant difference between the two (using t-test), which makes sense since C1 is designed to only show similar content. For C2, mean B_i_followed is 1.546 and mean B_i_notfollowed is 2.098. 
Our t-test shows a significant difference between the two (p: 1.733e-12), implying that even when presented with a diverse set of views, participants are more likely to select those similar to their own (Figure <ref>). Moreover, our results from calculating the correlation between the follow signal, F_i (defined in Equation <ref>), and the participant's belief show the same phenomenon - there is a moderately strong positive correlation between a participant's belief and the beliefs of who they follow. When calculating ρ between F_i and the initial Likert rating of the participant, we find ρ to be 0.645 (p: 4.897e-46) [CI: 0.582, 0.700], and for C2, ρ is 0.503 (p: 3.457e-27) [CI: 0.426, 0.573]. Additionally, we consider the updated Likert rating to be the participant's current belief state, and we calculate the correlation between follow signal, F_i, and the participant's updated Likert rating. For C1, ρ is 0.636 (p: 9.744e-45) [CI: 0.572, 0.692], and for C2, ρ is 0.519 (p: 2.659e-29) [CI: 0.444, 0.587]. Furthermore, we evaluate the cosine similarity between an individual's reasoning and the reasoning of the responses they follow, compared to the cosine similarity between an individual's reasoning and the reasoning of those they do not follow (Figure <ref>). Our results show that the response entries followed by participants are semantically more similar to their own answers compared to those not followed in both conditions. For C1, the mean cosine similarity for followed is 0.424; for not followed, the mean cosine similarity is 0.376 (significant difference using Mann-Whitney U test, p: 2.473e-05). For C2, the mean cosine similarity for followed is 0.451, and for not followed, it is 0.404 (significant difference using Mann-Whitney U test, p: 9.661e-07). However, it is possible to reduce some of the homophilic tendency of individuals by increasing randomness in the recommendation algorithm and presenting individuals with a diverse set of views. Mean B_i_followed of C1 is less than the mean B_i_followed of C2 (using t-test, p: 1.305e-06), showing that by increasing diversity in the content of response entries, it is possible to increase being open to incorporating other beliefs. Similarly, the correlation between the follow signal, F_i, and the participant's belief is lower in the randomized condition (C2) compared to the similar condition (C1) (ρ mentioned above). Our results imply that people tend to incorporate other beliefs to some extent when they have access to them. Yet, we do not find any semantic difference in the response reasoning between C1 and C2 for followed and not followed. Despite a higher average cosine similarity in C2 than C1 for both followed and not followed, no statistical differences were found between the two conditions (Mann-Whitney U Test, p: 0.012 for followed; p: 0.009 for not followed). Summarizing our results to answer RQ2, we find that algorithmic changes in recommendation method, i.e., showing a wider diversity of views instead of opting for homophily-based recommendation, can reduce homophilic tendencies, albeit slightly. Our results also show that people are willing to incorporate more diverse opinions into their networks when they are exposed to them. § DISCUSSION Existing research indicates that online social networks play a critical role in influencing individuals' political attitudes, behaviors, and perceptions <cit.>. Our findings reiterate this since regardless of whether peer opinions align or vary, we see that they impact individuals' beliefs. 
This also implies that traditional social networks, which promote similar content, can indeed lead to increased belief rigidity, potentially exacerbating polarization. Our research empirically shows that a simple tweak such as introducing some randomness in what is displayed to users can have a positive impact on belief rigidity. In other words, when given the option and access to a wider range of viewpoints, people are inclined to include at least some of those viewpoints in their networks. This provides empirical evidence that making social networks less tailored to individual preferences could potentially encourage openness to different viewpoints. In this paper, we take an empirical approach by building an experimental framework, which we use to understand belief rigidity on online social networks. In the first stage, participants are given a prompt to rate, and in the second stage, they can change their rating based on the responses of their peers. This technique is similar to a Delphi study, which has been used in various experimental frameworks to understand individual and collective attitudes and beliefs <cit.>. These two stages allow us to understand the impact of peers on individual beliefs. Our third stage is designed to quantify the impact of recommendation algorithms, allowing participants scope for network restructuring. Therefore, our experimental framework enables us to take into account the combined effect of human social influence and algorithmic decisions, consistent with research on opinion dynamics in online social networks <cit.>. While social media analysis is generally used to study social phenomena, it is difficult to test out different intervention strategies and compare them in real time on social media platforms. Moreover, in real social platforms individuals not only view each other's posts but also directly interact with them. Social media analysis can therefore only estimate the change in belief since online behavior may differ from people's private beliefs due to the online disinhibition effect <cit.>. For example, people may appear more stubborn or even aggressive just to win an online argument <cit.>. In our experimental framework, there is no peer pressure (since participants are anonymized) or fear of social isolation (since participants don't have access to how many “followers” they have) compared to actual social media platforms. Since participants have no external motivation to change beliefs, our experimental framework enables passively sensing private belief. Our results show that individuals' beliefs are influenced by peer opinions even when the peer opinions are different from their own. This has been theoretically established in existing literature for opinion dynamics in social networks - dissimilar peer opinions tend to moderate individual opinions <cit.>. However, there is often a lack of realization regarding this influence among participants. Change in belief is often minor, suggesting that beliefs are inherently rigid but can be gradually influenced over time. This is consistent with Introne's proposed Belief Landscape Framework, where he showed that people who update their beliefs tend to do so incrementally as if moving through a landscape composed of stable regions (where beliefs change slowly) and transitional regions (where beliefs change more quickly) <cit.>. 
Tying our findings back to our first research question, it is, therefore, important to diversify exposure to different viewpoints to facilitate slight but meaningful shifts in beliefs within social networks. We also observe the homophilic tendency of individuals to prefer and follow content that align with their own beliefs, consistent with existing literature <cit.>. For example, a study on climate change link-sharing on Facebook shows that users are mostly online friends with others who share similar concerns about the topic <cit.>. However, we show that by introducing randomness in recommendation algorithms, this homophilic tendency can be slightly reduced, encouraging openness to differing views. Similar findings have been seen in other domains such political polarization – exposure to counter-attitudinal news (news that contradicts one's beliefs) on Facebook was shown to reduce affective polarization <cit.>. Tying this finding to our second research question, a randomized recommendation can convince people to diversify peer choices, but not drastically. However, our study still has some limitations. One limitation is that this study is population-dependent since the results of each experiment would strongly depend on the demographic make-up of that batch. Our sample size is moderate (N = 163) and we only considered participants from the US. While that might reduce the generalizability of our findings, it must be noted that the whole experiment is synchronous, takes an hour, and all participants must join at the same time. Therefore, our experiment requires a significant time and resource commitment (needing access to a computer with an internet connection for an hour at a provided time) from participants. Previous literature has discussed the difficulties of recruiting participants for relatively longer complex experiments. Levendusky et al. identify that it is difficult to recruit participants for studies over twenty minutes even with relatively larger incentives, especially when relying on online survey takers <cit.>. However, complex experiments can provide a more in-depth understanding. Therefore, for complex studies, a smaller N is often an acceptable trade-off. Some examples of such studies include understanding idea generation in social networks <cit.>, facilitating eco-friendly purchases through user interface design <cit.>, or designing technology to reduce misinformation <cit.> and so on. This can be considered a “depth–breadth trade-off”, where the researcher has to balance between the complexity of the task and the number of participants <cit.>. For our study, we therefore chose a more complex setup and a smaller sample size. While our framework has some features present in traditional social media platforms, the framework itself does not imitate a traditional social media platform. Our framework has three stages in every round. Participants provide a response to a prompt in stage 1, see their peer's responses in stage 2, and follow/unfollow people in stage 3. In social platforms, though, people can share content, interact with others, and follow and unfollow content all at once. We acknowledge that the interactions on social media platforms are far more complex. However, breaking these interactions down to simpler forms as we did here allows us to probe into the individual impact of each of these features. In future studies, our framework can be used to compare different design choices or different algorithmic choices for different platform components. 
Similarly, in our study, all participants were anonymized, did not have any identifiable cues, and were unknown to the other participants. While there are some social media platforms where people have very little information about others (such as Reddit), that is not the case for most platforms such as Facebook, Instagram, and X (formerly Twitter). Platforms often display user identity, follower count, or various demographic cues, which significantly influence trust and perception <cit.>. So, it is crucial to study the impact of displaying demographic cues and follower count on belief rigidity. Our framework facilitates designing such studies since minimal changes would be required to include displaying cues in the study. Likewise, the statement prompts can also be changed to adapt our study to other topics such as health, vaccines, or gun control. Finally, our study aims to measure immediate changes in belief rigidity. According to the Belief Landscape framework <cit.>, shifts in belief are often slight and subtle, which is what we observed in our study. Still, it would also be interesting to conduct a longitudinal study to quantify the long-term effects of exposure to diverse viewpoints on belief change. A longitudinal experiment would allow us to map the “tipping points” where people change their beliefs more drastically. § CONCLUSION In this paper, we explore whether a more randomized recommendation can convince people to diversify peer choices, and aim to quantify the impact of this diversification. Our results show that increasing randomization in a network can improve people's ability to incorporate a higher diversity of views, despite people's natural homophilic tendency. Moreover, we show that individuals are affected by the opinions of others, irrespective of if they hold the same opinions as them. While we acknowledge that beliefs are inherently rigid, our study shows that there are plausible steps to be taken to reduce belief rigidity. Our experimental framework can easily be modified to study different belief rigidity cues such as trust, reputation, and popularity, and we encourage others to do so. Finally, although our work is empirical, we can use this method to draw out meaningful connections in the real world. § MATERIALS AND METHODS Here, we provide details about the experimental framework design, study setup, validation checks for our setup, and participant demographic information. The experimental framework is shown in Figure <ref>. §.§ Experimental Framework Design Our study consists of five rounds. In each of the rounds, participants are given one prompt regarding climate change policies, and the focus of the round is on that statement. The prompts are designed to probe the stance of people regarding spending money to mitigate climate change and are derived from the Climate Change Twitter Dataset <cit.>, which is a Twitter dataset containing actual posts on climate change on Twitter, and ClimateFever <cit.>, which is a fact-checking dataset on climate change. Our criteria for the prompts was that they must be short and unambiguous. For both conditions, the prompts are the same (further details in Supplementary materials). Each round consists of three stages - response, revision, and selection. Additional information on interface design is provided in Supplemental materials. The stages are explained below. 
§.§.§ Stage 1: Response stage Given a prompt, participants are asked to evaluate each statement using a 7-point Likert scale, where 0 represents “strongly disagree" and 6 signifies “strongly agree". They must also explain the reasoning for their rating in a maximum of 400 characters. We refer to each evaluation and its accompanying explanation as a “response entry”. §.§.§ Stage 2: Revision stage In the revision stage, participants are shown the response entries of a selected group of other participants (referred to as “peers"). How this group of peers is selected depends on the round and whether the participant is part of the similar condition or the randomized condition. Condition 1: For the first round in C1, each participant is matched with three peers — referred to as their base connections — based on the similarity of their responses to the first prompt. Essentially, this means that the peers gave the prompt similar ratings as the participant. For the subsequent rounds, participants see the response entries of peers they selected in stage 3 (selection stage) of the immediate previous round. Condition 2: For the first round in C2, participants are randomly assigned three base connections. In subsequent rounds, participants still see three peers. However, unlike C1, only one of these peers is chosen by the participant themselves during stage 3 (selection stage) of the previous round. The other two peers are chosen randomly from among those the participant did not follow. As the participants have access to the response entries of a selected group of peers, participants are now asked to rate the same prompt again on a 7-point Likert scale. To ensure their change in rating is intentional, we display the participant's initial Likert rating to the prompt. However, they still need to click the Likert scale again manually. This difference between the updated Likert rating and the initial Likert rating can be used to represent belief rigidity. Note that, belief rigidity is defined as the ability to change beliefs in light of new information/opinion, and the peer response entries can be considered as the “new information/opinion". §.§.§ Stage 3: Selection stage In the selection stage, participants are presented with the response entries of six participants in total - including the three peers they see in stage 2 (and the rest are “recommended" to them). The recommendation process depends on the round and the group the participant is part of. For the first round, the three participants recommended are those similar to the participant for C1, and for C2, they are randomly selected from the pool of other participants (excluding the base connections). The participants must select (i.e., “follow") at most three participants whose answers they would like to see in stage 2 (revision stage) for the next round. They can stick to the connections assigned or pick new ones (or do a mix of both). However, as mentioned previously, only C1 participants see who they followed in stage 2. C2 participants get only one of the followed peers in stage 2, and the other two are randomly selected from those not followed. The participants must also like or dislike all the response entries shown. For the upcoming rounds, the recommendation process in stage 3 is designed in the following way - * Condition 1: Participants see the people they are currently following, and three other people whose responses are the most similar to them. 
* Condition 2: Participants see the three people they are currently following, the two people randomly selected and shown to them in stage 2, and another randomly selected participant. Once again, participants must select (“follow") three participants whose answers they would like to see in stage 2 (revision stage) of the subsequent round, and the process repeats for the remaining rounds. An example of the user experience flow is presented in Supplementary materials. §.§ Study Details The experimental platform was designed using Empirica v1 <cit.>, a Javascript framework designed for facilitating multi-user experiments in real time. Participants were recruited through Prolific Academia, a crowd-sourcing platform aimed at connecting research participants with behavioral studies <cit.>. The criteria for inclusion were that participants must be located in the US, have access to a computer and internet connection, and must be able to join the study at the time being conducted. The study lasted for approximately 60 minutes, and participants were given $15 after completion of the study. Participants provided informed consent while signing up for the study, and again while starting the study. There were 5 batches of C1 and 5 batches of C2. Aggregating across all batches, C1 had 82 participants and C2 had 81 participants (total N = 163). The participant demographic is added to Supplementary materials. In general, there were similar proportions of gender and race in both conditions. Most participants showed pro-climate sentiment, with the aggregate average initial Likert response being 3.91 (C1: 3.89, C2: 3.93) and aggregate skewness being -0.680 (C1: -0.696, C2: -0.669) (Figure <ref>). This means that the distribution is moderately left-skewed, which indicates that more participants had an agreeable response to our statements. This is representative of the current sentiment in the US, where the majority of the US population shows pro-climate sentiment <cit.>. §.§ Defining Analysis Measures §.§.§ Belief Update We define the change in belief as the difference between the initial rating given by the participant in stage 1 and the updated rating of the participant in stage 2. Assuming R is the rating, and i denotes each participant, let R_i_initial and R_i_updated denote the initial and updated ratings for the i^th participant, respectively. The change in belief for each participant, δ R_i, can then be represented as: δR_i= R_i_updated - R_i_initial §.§.§ Social Signal We define social signal, S, as what the participant sees in the revision stage, and this gives the participant some direction to what peers in the network might think about the same topic. To calculate social signal for each participant for a particular round - S_i - we take the mean of the ratings of peers in the revision stage. Assuming j denotes each peer, and N is the total number of peers, S_i = ∑_j=1^j=N R_j_initial/N To calculate how different the social signal, S_i is from their own rating, we subtract the initial rating of the participant, R_i_initial from the social signal, S_i. Therefore, δ S_i is defined as δ S_i = S_i - R_i_initial Intuitively, δ S_i is measuring how different peer responses are from the individual. §.§.§ Follow Signal For a particular round, for each participant, we define follow signal, F, as the ratings of those the participant chooses to follow in the selection stage. To calculate the follow signal for each participant for a specific round - F_i - we take the mean of the ratings of the followed peers. 
Assuming j denotes each followed peer, and N is the total number of followed peers, F_i= ∑_j=1^j=N R_j_initial/N §.§.§ Belief Network Distance Extending the concept of follow signal, F_i, we define belief network distance, B_i as the average difference between a participant's updated rating and the initial rating of other participants within a belief network. This signifies how different the belief is compared to the participants' own beliefs. We consider the updated Likert rating for the participant since that is their most current belief, but we consider the initial belief of others since that is what they see as part of the response entries. Assuming N is the total number of participants shown for that given round, B_i = ∑_j=1^j=N |R_j_initial - R_i_updated|/N Modifying the above equation to include only those that the participant is following, we can denote G as followed. B_i_followed = ∑_j=1^j=G |R_j_initial - R_i_updated|/G Denoting participants that are not followed as G', the equation becomes B_i_notfollowed = ∑_j=1^j=G' |R_j_initial - R_i_updated|/G' §.§ Validating the study setup To ensure that C1 shows similar signals as the participant's own choice and C2 shows more diverse responses, we calculate the Pearson correlation, ρ, between the participant's initial Likert rating, and the average of the peer rating shown in stage 2. The ρ for C1 is 0.424 (p: 1.309e-17) [CI: 0.337, 0.504], and the ρ for C2 is 0.178 (p: 3.313e-04) [CI: 0.082, 0.271], implying that C1 participants indeed see more similar peer responses compared to the participants in C2. To ensure that the recommendation algorithms are significantly different between the two conditions, we calculate the Pearson correlation, ρ, between the mean of all Likert ratings displayed in stage 3 with the initial Likert rating of the participant. For C1, ρ is 0.747 ( p: 1.865e-63) [CI: 0.696, 0.790] and for C2, ρ is 0.311 ( p: 2.563e-10) [CI: 0.219, 0.398]. The correlations show that the recommendations provided for C1 are more similar to the participant compared to the recommendations provided for C2, which was the intention of the design. Moreover, the average number of words written as a response reason for C1 is 23.48 (median: 21) and for C2 is 28.04 (median: 25). This shows that participants elaborated on their opinions, increasing the trustworthiness of the framework. ieeetr
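The analysis measures defined above are plain averages over Likert ratings; a minimal sketch of the follow signal and the belief network distance is given below (function names are ours, not from the authors' code).

```python
import numpy as np

def follow_signal(followed_initial_ratings):
    """F_i: mean initial rating of the peers a participant follows in stage 3."""
    return float(np.mean(followed_initial_ratings))

def belief_network_distance(updated_rating, peer_initial_ratings):
    """B_i: mean absolute gap between a participant's updated rating and the
    initial ratings of a peer group (computed separately for followed and
    for shown-but-not-followed peers)."""
    peers = np.asarray(peer_initial_ratings, dtype=float)
    return float(np.mean(np.abs(peers - updated_rating)))

# A participant with updated rating 5 who follows peers rated (5, 4, 6) and
# ignores peers rated (1, 2, 5) has B_followed < B_notfollowed:
print(belief_network_distance(5, [5, 4, 6]))  # -> approx. 0.67
print(belief_network_distance(5, [1, 2, 5]))  # -> approx. 2.33
```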
http://arxiv.org/abs/2407.02396v1
20240702161459
A refractory density approach to a multi-scale SEIRS epidemic model
[ "Anton Chizhov", "Laurent Pujo-Menjouet", "Tilo Schwalger", "Mattia Sensi" ]
q-bio.PE
[ "q-bio.PE", "math.DS", "physics.soc-ph" ]
Anton Chizhov (Centre Inria d'Université Côte d'Azur, 2004 Rte des Lucioles, 06410 Biot, France), Laurent Pujo-Menjouet (corresponding author, pujo@math.univ-lyon1.fr; Université Claude Bernard Lyon 1, CNRS, École Centrale de Lyon, INSA Lyon, Université Jean Monnet, ICJ UMR5208, Inria, 69622 Villeurbanne, France), Tilo Schwalger (Bernstein Center for Computational Neuroscience, Philippstr. 13, Haus 6, 10115 Berlin, Germany; Technische Universität Berlin, Institute of Mathematics, Straße des 17. Juni 136, 10623 Berlin, Germany), Mattia Sensi (Centre Inria d'Université Côte d'Azur; Department of Mathematical Sciences "G. L. Lagrange", Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Torino, Italy) § ABSTRACT We propose a novel multi-scale modeling framework for infectious disease spreading, borrowing ideas and modeling tools from the so-called Refractory Density (RD) approach. We introduce a microscopic model that describes the probability of infection for a single individual and the evolution of the disease within their body. From the individual-level description, we then present the corresponding population-level model of epidemic spreading on the mesoscopic and macroscopic scale. We conclude with numerical illustrations taking into account either white Gaussian noise or escape noise to showcase the potential of our approach in producing both transient and asymptotic complex dynamics as well as finite-size fluctuations consistently across multiple scales. A comparison with the epidemiology of coronaviruses is also given to corroborate the qualitative relevance of our new approach. Keywords: Refractory Density; Partial Differential Equations; Age-structured model; Time since last infection; Finite-size fluctuations § INTRODUCTION The susceptibility of an individual to an infectious disease, as well as the infectiousness of an infected individual, depends considerably on the time of the person's last infection, or "infection age". Such dependence on the infection course and history can be modeled by age-structured population models, which have been used for a wide variety of mathematical models in epidemics since the seminal paper by Kermack and McKendrick <cit.>; see, e.g., <cit.>. Taking into account the time since the last infection (TSLI) is a convenient way to introduce temporal heterogeneity in the population with respect to an ongoing epidemic and to explicitly model a complex disease evolution with consequences both within and between hosts. Population models structured by this TSLI are deterministic macroscopic models that relate to the idealized limit of an infinitely large population. Two important questions arise here. First, a fundamental question is how such macroscopic models are linked to "microscopic models", where the TSLI or other internal variables are tracked for each individual in the population. Microscopic models can be more directly related to the mechanisms underlying the spreading of disease and to parameters measured in clinical studies. However, these models are too complex for understanding the collective dynamics of an epidemic.
Second, populations in reality (say, a local community relevant to the disease or an age group) are not infinitely large and thus exhibit fluctuations that reflect the infections of single individuals. Therefore, this raises the question of whether there exists an intermediate “mesoscopic model” that combines the simplicity of age-structured population models with the ability to capture fluctuations due to finite population size. Again, the parameters of such a mesoscopic model should be linked to the microscopic parameters. Answering these questions requires a multi-scale modeling framework for epidemic diseases. We found our inspiration in the neuroscience field, where the age-structured population dynamics models have already been used in the form of the refractory density method <cit.>. Indeed, in that case, the basic units at the microscopic scale are the neurons, whose probability to fire a spike is largely determined by the time since the last spike as well as the spike input from other neurons in a large network of neurons. Taking into account the time since the last spike is important because neurons exhibit a period of absolute and relative refractoriness after a spike, where the firing probability is strongly reduced. At the macroscopic scale, the evolution of a neuronal population is governed by a system of first-order partial differential equations called refractory-density equations (RDE). The RDE tracks the density of the times since the last spikes within a continuous phase state, thereby avoiding discrete states such as spiking, refractoriness, and resting. Importantly, the RDE is a bottom-up mean-field model that is directly derived from the single neuron equations at the microscopic scale. Simulations using this technique yield precise solutions for both equilibrium states and transient non-equilibrium regimes. In the last two decades, there have been several advancements in the refractory-density method that offered the possibility for significant progress in the multi-scale modeling of cortical networks in the brain. First, the method has been adapted for the important case where spiking events are triggered by the threshold crossing of a neuronal membrane potential that is driven by biologically plausible Gaussian white <cit.> or colored noise <cit.>, while the classical RDE is based on a phenomenological hazard-rate-based spike-generation mechanism (“escape noise”) <cit.>. Second, the RDE has been generalized to finite-size populations <cit.>, resulting in a stochastic RDE that accurately reproduces the finite-size fluctuations at the mesoscopic scale. This advancement offers a unique and consistent description of a neural circuit at three levels of granularity: micro-, meso- and macroscopic scales. Additionally, there has been a further advancement in the generalization of RDE to accommodate complex, multi-dimensional <cit.>, and even two-compartmental <cit.> neurons, where the neuronal state is given by multiple variables rather than a single membrane potential. These advancements collectively allow the application of RDE for constructing meso- and macroscopic models based on a wide range of microscopic models. The main concept of this paper is to adopt the recent generalizations of the refractory-density method from neuroscience to epidemiology. In particular, we suggest a novel multi scale modeling framework that connects three different scales (micro, meso, macro). 
Moreover, the theoretical framework applies to two different noise mechanisms at the microscopic scale (Gaussian white noise and escape noise). We show their distinct effects on the form of the age-structured epidemic population model at the meso- and macroscopic scales. To this end, we consider an infectious disease with a relatively short infection period, followed by temporary immunity and the possibility of multiple subsequent infections (e.g., flu or SARS-CoV-2). Our model can be naturally structured within a classical SEIRS (Susceptible – Exposed – Infected – Recovered – Susceptible) epidemic framework, where we continuously track the interplay between viral load and the immune system within each individual. Unlike the classical ordinary differential equation (ODE)-based compartmental models of epidemics (see, e.g., <cit.>), we choose a continuous phase space, focusing on the time since infection and the corresponding evolution of the interaction between the virus and the immune system. In this space, the SEIRS stages can be defined either strictly or loosely; see section <ref> for a detailed explanation of our definition. In reality, an individual does not experience strict boundaries between SEIRS stages, and our approach allows us to avoid such additional assumptions. Instead, we propose that two characteristics, the time since the previous infection and the interplay between viral load and immune response, sufficiently describe an individual's state within a population. We believe that avoiding strictly defined compartments and introducing a continuous space of states is a key advantage of our model. Another advantage of our approach is that the spread of infectious disease at the mesoscopic and macroscopic population level is derived from the equations that govern the dynamics of infection, disease development, and recovery of a single individual. A similar approach governs the derivation of macroscopic equations from their microscopic counterparts in kinetic theory; see, e.g., <cit.>. Our construction, however, differs significantly, as we explain in sections <ref> and <ref>. In particular, the present paper is, to the best of our knowledge, novel in that it combines such a multi-scale (microscopic, mesoscopic, macroscopic) model with an age-structured component (in the sense of "infection age"), exploiting tools and analytical techniques from the so-called Refractory Density approach. The paper is structured as follows: in Section <ref>, we present the microscopic model describing the evolution of the disease within a single individual. In Section <ref> we introduce the equations governing the macro-scale population, while the mesoscopic ones are given in Section <ref>. We illustrate our work in Section <ref> with numerical simulations and compare them to real data. We finally discuss our work in Section <ref>. § MICROSCOPIC MODEL OF SINGLE INDIVIDUALS We consider a population of N≫ 1 individuals, among which an infectious disease is spreading. We do not consider birth and death processes, and consequently we assume the population to be constant (although this assumption is not critical for our framework). We assume that the risk of infection for an individual depends on two state variables V_i and t^*_i (see Table <ref> for a summary of the variables and parameters of the model). The variable V_i represents the interplay of the viral load and the immune response within individual i. We call this variable the viral state.
The dynamics of V_i(t) are governed by the interaction of multiple processes (Fig.<ref>A, bottom). First, due to the immune system, the number of viral particles relaxes with time constant τ. Second, it grows proportionally to the fraction of infected individuals I(t), with a coefficient of proportionality k, which is assumed to be proportional to the basic reproduction number R_0 of the corresponding disease. This modeling assumption is qualitatively close to the underlying homogeneous-mixing assumption of classical ODE models, whereby any individual is equally likely to meet any other individual in the population. Third, it mimics the time course of the disease, composed of a rapid increase of the viral load at disease onset and a subsequent decrease caused by the immune response. This time course is driven by the function D(t_i^*), started at each onset of the disease (Fig.<ref>A, top). Note that, since V_i represents the interplay between the virus and the immune system, it can become negative during the negative phase of D(t_i^*). The state variable t_i^* measures the time relative to the last infection time (onset of the disease) of individual i. These onsets define an ordered sequence t_1,i<t_2,i<t_3,i<… of infection times for individual i. The continuous states of the epidemic in the t^*-space may be roughly interpreted in terms of discrete SEIRS stages (Fig.<ref>A). To this end, we may partition the population with respect to an ongoing epidemic into the following compartments: S, individuals who are Susceptible to the disease; E, individuals who have been Exposed to the disease but are not yet infectious; I, Infected individuals who may spread the disease; and R, Recovered individuals who are temporarily immune from infection. According to these definitions, we can loosely define the stages along the t^*-axis. When V_i(t) overcomes a threshold (in the case of Gaussian noise, see below) or an infection event is triggered by a probabilistic "soft" threshold (in the case of escape noise), the individual becomes infected and consequently belongs to compartment I. During the infectious phase, V_i(t) rises above the threshold because of the viral production during disease and then decreases because of the immune response. The I state lasts for a fixed duration τ_I>0. After the time τ_I has elapsed since the onset of infection, the individual is effectively immune to the disease and consequently belongs to compartment R. The sojourn in this compartment lasts until the end of the recovery period τ_R>τ_I, measured from the onset of infection. After this time, when V_i is low, the individual becomes Susceptible and then Exposed until the next infection, when the individual becomes Infected and infectious again. Note that only the Infectious and Recovered states are strictly defined, by the intervals 0<t^*<τ_I and τ_I≤ t^*<τ_R, respectively. However, the duration of these states does not directly affect the evolution of the individual's state V_i; instead, it determines the periods when the individual affects the population and when the individual is in an "absolute refractory period", respectively. Taken together, the dynamics for single individuals i, i=1,…,N, between two subsequent infection times t_k,i and t_k+1,i are modeled as dV_i/dt =-V_i/τ + k(t) I(t) + D(t_i^*) + σ(I(t)) ξ_i(t), dt_i^*/dt =1, with the fraction of infected individuals I(t) =(N_c(t)- N_c(t-τ_I))/N, where N_c(t)∈ℕ is a global counting variable that keeps track of the cumulative number of cases in the population up to time t.
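In a time-discretized simulation, the drift above and the disease-course function D(t^*) take only a few lines; the following is a sketch under an explicit Euler(-Maruyama) discretization with an illustrative step size dt, not the authors' reference implementation. (The counting variable N_c enters only through the infected fraction I(t), which changes at the infection events discussed next.)

```python
import numpy as np

def disease_course(t_star, a, tau_r):
    """D(t^*) = 4a exp(-t^*/tau_r) - a exp(-t^*/(4 tau_r)): sharp rise of the
    viral load at disease onset, followed by an immune-driven negative phase."""
    return 4.0 * a * np.exp(-t_star / tau_r) - a * np.exp(-t_star / (4.0 * tau_r))

def euler_step(V, t_star, I_frac, k, tau, a, tau_r, dt, sigma=0.0, rng=None):
    """One step of dV/dt = -V/tau + k I(t) + D(t^*) (plus sigma*xi for the
    Gaussian-noise variant; rng must be provided when sigma > 0),
    together with dt^*/dt = 1."""
    drift = -V / tau + k * I_frac + disease_course(t_star, a, tau_r)
    V_new = V + dt * drift
    if sigma > 0.0:
        V_new = V_new + sigma * np.sqrt(dt) * rng.standard_normal(np.shape(V))
    return V_new, t_star + dt
```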
Clearly, this counting variable is constant when no new infection occurs. At the infection times t_k,i, however, the overall dynamics is supplemented with an additional jump condition (reset):

V_i(t_k,i^+)=V^T,  t_i^*(t_k,i^+)=0,  N_c(t_k,i^+)=N_c(t_k,i^-)+1,

where t_k,i^- and t_k,i^+ denote the left- and right-sided limits, respectively. In Eq. (<ref>), we indicate with τ_I the average time spent in the infectious state, or average duration of the infectious window. We note that Eq. (<ref>) can easily be generalized to permit a more fine-grained, graded account of infectiousness depending on the infection age t^* (see Discussion, Eq. (<ref>)). Furthermore,

D(t^*) = 4a exp(-t^*/τ_r) - a exp(-t^*/(4τ_r))

is a response function that drives the time course of V_i during the disease. The parameter a is the severity of the disease, and τ_r is the time scale of disease progression and cure. The last term in Eq. (<ref>) represents an independent Gaussian white noise with mean 𝔼[ξ(t)]=0, auto-covariance function 𝔼[ξ(t)ξ(t')]=δ(t-t') and strength σ(I) (δ(·) denotes the Dirac delta function). As explained below, this term is only present in our model variant with Gaussian noise and is absent in the variant with escape noise, where we set σ(I)≡ 0.

As initial conditions of the model, we assume that before time t=0 there were no cases, N_c(t)=0 for t<0, and that at time t=0 a fraction I_0 of the population gets infected, i.e. N_c(0)=I_0 N. Thus the state variables for the I_0 N initially infected individuals are V_i(0)=V^T and t_i^*(0)=0, while the state variables of all other (non-infected) individuals (the remaining (1-I_0)N individuals in the population) are initially V_i(0)=0 and t_i^*(0)=+∞.

Importantly, it remains to define how the infection events are triggered. We consider two different variants to define these events, and hence the infection times t_k,i, corresponding to two different ways of including stochasticity in the model. We refer to these variants as the white- and escape-noise cases, respectively.

Model with escape noise. To introduce stochasticity that phenomenologically captures the influences of various sources of noise, we model the infection events of an individual i by a probabilistic risk of becoming infected depending on the individual's current state variables V_i(t) and t^*_i(t). Mathematically, we keep the dynamics of V_i between infection events deterministic (i.e. setting σ=0) and generate infection events stochastically by a state-dependent hazard rate λ_i(t)=H(V_i(t),t_i^*(t)). The hazard rate means that an infection of individual i occurs in a small time step of length Δ t with conditional probability λ_i(t)Δ t given the individual's current state (V_i(t),t^*_i(t)). More precisely, we have the following conditional probability:

Pr{individual i gets infected in (t,t+Δ t) | V_i(t),t_i^*(t^-)} = λ_i(t)Δ t + o(Δ t) as Δ t→ 0.

For a concrete choice of the hazard rate in simulations, we use a rectified power function with absolute refractory period τ_R:

H(V,t^*) = c max(0, V/V^T)^m 1_{t^*≥τ_R},

where m>0. We illustrate the effect of varying m in Figure <ref>.

Model with white Gaussian noise. An alternative way to model noise is to consider an additive white-noise drive in the dynamics of V_i. This “diffusive noise” is modeled by the term σ(I(t)) ξ_i(t) in Eq. (<ref>). The stochastic term may reflect both the fluctuations of the number of viral particles of a given type around a particular individual and the fluctuations of the activity of the immune system of this individual.
To prevent an infection from occurring spontaneously by noise in the absence of any viral infection in the population, we enforce the noise strength to be zero when I(t) is zero by choosing the specific function σ(I)=σ̂ 1_{I>0}. In the white-noise case, infection events are triggered when the viral state V_i(t) crosses a certain threshold V^T, provided that at least an absolute refractory time τ_R has passed since the last infection. Hence, the infection times t_k,i are defined by the condition

V(t_k,i^-) ≥ V^T,  t^*(t_k,i^-) ≥ τ_R.

Note that if this condition is fulfilled at some time t_k,i^-, the reset condition Eq. (<ref>) ensures that the above threshold condition no longer holds at t_k,i^+, i.e. immediately after this time, as it should be. The above equations conclude the definition of the microscopic model.

Given the following application of the refractory-density (RD) method, we note that the case of white Gaussian noise can be approximately mapped to the case of escape noise <cit.>. In this case, the noise term in Eq. (<ref>) is omitted and the condition Eq. (<ref>) is replaced by (<ref>), where the hazard is now given by the hazard function (<ref>) below. The hazard rate λ(t,t^*) depends on the actual state V_i, its rate of change dV_i/dt, the threshold V^T, and the noise amplitude σ, i.e., λ(t,t^*)=H(V_i(t),V̇_i(t),t^*_i(t); σ(I(t))). In <cit.>, this function was found as an approximate solution of a first-passage-time problem based on the Kolmogorov-Fokker-Planck equation:

H(U,U̇,t^*,σ) = A(T) + B(T,U̇,σ)  if t^*>τ_R and σ>0, and 0 otherwise,
A(T) = (1/τ) exp(0.0061 - 1.12 T - 0.257 T^2 - 0.072 T^3 - 0.0117 T^4),
B(T,U̇,σ) = (2/(σ√(π))) [U̇]_+ exp(-T^2)/(1+erf(T)),  T = (V^T - U)/σ.

Here and in the following, [x]_+ = max(0,x) denotes the rectified linear function. We refer to <cit.> for a slightly more elaborate approximation of diffusive noise by escape noise.

§ MACROSCOPIC MODEL OF A POPULATION

For a large number of individuals N, the dynamics of the microscopic model can either be evaluated with a Monte-Carlo simulation or obtained much more efficiently by integrating the corresponding macroscopic population equations. To this end, we apply the refractory-density (RD) approach <cit.> to the epidemiological model described at the individual level in section <ref>. According to this approach, the state variables of a single “particle”/individual are parameterized by the “age” t^* (with the total derivative in time substituted by the sum of partial derivatives, i.e. d/dt = ∂/∂t + ∂/∂t^*), and the population is characterized by the density of particles in the one-dimensional, semi-infinite t^*-space, i.e. ρ(t,t^*) (Fig. <ref>B). In our case, in line with other age-dependent models in mathematical epidemiology, the “age” t^* is the time elapsed since infection. This means that at any time t, the number of individuals in the population who became infected between t^*_1 and t^*_2 time units ago is represented by ∫_{t^*_1}^{t^*_2} ρ(t,t^*) dt^*. At the macroscopic level, we define the rate of new infections

ν(t) = lim_{Δt→0} lim_{N→∞} (N_c(t+Δt) - N_c(t))/(N Δt).

We also introduce a function U(t,t^*) as follows: for the model with escape noise, U(t,t^*) is the unique function that maps the infection age t^*_i(t) to the state variable V_i(t), i.e. V_i(t)=U(t,t^*_i(t)). For the model with Gaussian white noise, there is no such deterministic map. However, we can first map the model to the approximate escape-noise model, Eq. (<ref>), for which we can again define the unique function U(t,t^*).
Intuitively, this function can be interpreted as the variable V_i(t) averaged across realizations of the Gaussian white noise for individuals i that had the same time of last infection, i.e., the same t^*. The equation for U then follows from the corresponding averaging of Eq. (<ref>). The bottom traces in Fig. <ref>B illustrate distributions of U and ρ in t^*-space.

The RD model corresponding to Eqs. (<ref>)-(<ref>) consists of two transport equations for ρ(t,t^*) and U(t,t^*), and two integrals for ν(t) and I(t):

∂ρ/∂t + ∂ρ/∂t^* = -ρ H(U(t,t^*),t^*),
∂U/∂t + ∂U/∂t^* = -U/τ + k(t)I(t) + D(t^*),
ν(t) = ∫_0^∞ ρ(t,t^*) H(U(t,t^*),t^*) dt^* + I_0 δ(t),   (rate of new cases)
I(t) = ∫_{t-τ_I}^t ν(s) ds.   (fraction of infected people)

The boundary condition for ρ, resulting from the conservation of people in the population, is ρ(t,0)=ν(t). The boundary condition for U corresponding to the reset in the microscopic model, Eq. (<ref>), is U(t,0)=V^T. The initial condition for the density ρ corresponding to an initial fraction of infected people I_0 at t=0 and to a situation where individuals never encountered this viral infection before time t=0 (formally, t^*_i(0^-)=∞) is

ρ(0,t^*) = δ(t^*) I_0 + δ(t^*-∞) (1-I_0).

Furthermore, the corresponding initial condition for U can be taken as the steady-state solution of Eq. (<ref>) (with ∂U/∂t=0 and I(t)=0 for t<0):

U(0,t^*) = V^T exp(-t^*/τ) + ∫_0^{t^*} e^{-s/τ} D(t^*-s) ds.

In the density equation (<ref>), the density diminishes because of the sink term -ρH, but the total mass ∫_0^∞ ρ(t,t^*) dt^* = 1 is conserved at all times because of the boundary condition, Eq. (<ref>). This boundary condition acts as a source term at t^*=0 given by the rate of new cases ν(t), which exactly balances the integrated sink term, Eq. (<ref>). We note that this conservation law reflects the fact that we neglect mortality and focus on the transmission mechanism. Furthermore, in the density equation (<ref>), the risk of illness is evaluated by the hazard function H. In Eq. (<ref>), we used the hazard function for the case of escape noise. For the case of white noise, the hazard function H(U(t,t^*),t^*) should be replaced by H(U(t,t^*),U̇(t,t^*),t^*,σ(I(t))), where H is given by Eq. (<ref>) and U̇(t,t^*)=(∂_t+∂_{t^*}) U(t,t^*) is given by the right-hand side of Eq. (<ref>). Once again, note that Eq. (<ref>) permits a simple generalization to graded infectiousness depending on the course of the infection (see Discussion, Eq. (<ref>)).

§ MESOSCOPIC MODEL OF A FINITE-SIZE POPULATION

At the intermediate mesoscopic scale, the size of (sub-)populations is finite. The finite size causes fluctuations in the fraction of infectious people I(t), which may yield significant finite-size effects through the interaction of finite-size noise and nonlinear population dynamics. In the case of a finite-size population consisting of N∈ℕ individuals, N≫1, we apply the corresponding theory from <cit.>, which yields a stochastic generalization of the macroscopic dynamics given in the previous section. To this end, we introduce the pseudo-density ρ(t,t^*) in terms of the survivor function S(t,t^*) as ρ(t,t^*)=S(t,t^*) ν_N(t-t^*). Here, S and ν_N are given as the solution of the following system of stochastic partial differential equations, generalizing Eq. (<ref>):

∂S/∂t + ∂S/∂t^* = -S H(U),  S(t,0)=1,
∂U/∂t + ∂U/∂t^* = -U/τ + u(t) + k I(t) + D(t^*),  U(t,0)=V^T,
I(t) = ∫_{t-τ_I}^t ν_N(s) ds,
ν(t) = [∫_0^∞ ρ(t,t^*) H(U(t,t^*)) dt^* + H̅(t)(1-∫_0^∞ ρ(t,t^*) dt^*)]_+,
ν_N(t) = ν(t) + √(ν(t)/N) ξ(t),

with

H̅(t) = ∫_0^∞ H(U(t,t^*))(1-S(t,t^*))ρ(t,t^*) dt^* / ∫_0^∞ (1-S(t,t^*))ρ(t,t^*) dt^*,

where ξ(t) is a white Gaussian noise of unit amplitude.
We remark that, for all t,t^*, S(t,t^*)∈[0,1]; hence, the function H̅(t) is non-negative for all t. Furthermore, we note that ν_N(t) must be regarded as an abstract quantity that mathematically only makes sense as a distribution, i.e. if it is integrated against some test function over time. Specifically, the empirical rate of new cases ν̂_N measured over some finite time step Δt (such that the expected number of new cases Nν(t)Δt≫1) would be ν̂_N(t)=[∫_t^{t+Δt} ν_N(s) ds]_+. While in practice it is extremely rare that the integral takes negative values for large N, we added the rectification [·]_+ to enforce a non-negative rate. The same statement also holds true for Eq. (<ref>). Likewise, ρ is not a normalized probability density <cit.>. Therefore, it no longer admits the strict interpretation as the empirical density of the times since the last infection; rather, ρ(t,t^*)dt^* must be regarded as an abstract measure.

In comparison with the macroscopic model, this system of equations is stochastic. Its solution corresponds to a single realization of the noise ξ(t). When N→∞, the solution converges to the solution of the macroscopic model. The precise numerical method employed for these simulations is described in <cit.> (see also the provided simulation code in Python).

§ SIMULATIONS

Model with escape noise. A first simulation of epidemic waves is shown in Fig. <ref>. The onset of an epidemic is modeled by a short pulse of magnitude I_0 of the rate of new infections ν(t) at time t=0, reflecting the appearance of contaminating patients. These conditions result in oscillations of the rate of cases ν(t) and the fraction of potentially infected population I(t). These oscillations tend to relax. The dynamics of a single individual state V_i(t) (Fig. <ref>A, next to bottom panel) follows the epidemic waves of ν(t) and I(t). Each peak of V_i(t) reflects the virus reproduction during the disease. The decreasing phase reflects the immune response, after which an individual becomes susceptible again and might get infected during the next contaminating flux k(t)I(t).

Each spike of the rate of cases ν(t) leads to a correspondingly shaped distribution of the density in the t^*-space. The population waves move in the t^*-space from t^*=0 towards the region where U is about to cross the threshold V^T and form the peaks of ρH, which is the distribution of individuals falling sick (Fig. <ref>B). Since the integral over ρH determines the rate of cases ν(t), the increase of ρH determines the next peak of ν(t) and I(t), and so on. We assume that at t=t_1=700 the basic reproduction rate drops 4-fold, which might be due to any kind of containment measures. As a result, the epidemic wave generation stops through a significant decrease in the rate of cases ν(t) (Fig. <ref>A).

For the escape noise in the form of (<ref>), the epidemic depends on the steepness m of the hazard function (Fig. <ref>). With the increase of m from 0.5 to 1, the epidemic oscillations increase. However, it is non-trivial that in the case of a steeper H with m=2 the epidemic shows only one peak. In this case, the rapid illness of the whole population results in a short burst of contamination k(t)I(t) that occurs during the recovery phase of the population, when no one is susceptible.

Seasonality. One of the experimentally observed features of epidemics is its seasonal oscillations, as illustrated in Fig. <ref> by data from <cit.>.
In Fig. <ref>, we illustrate the case of a seasonal change of k through a step function alternating between k=2.4 and k=1.2 every 180 days. In response to this square-wave-like k, we observe a complex pattern of waves with various amplitudes.

Model with white Gaussian noise. When simulating the system with the white Gaussian noise (Fig. <ref>), we observed a similar behavior. The onset of an epidemic is again modeled by a short pulse of the rate of new infections ν(t) of magnitude I_0 at time t=0, reflecting the appearance of infectious individuals. At the same time, the viral-load noise strength jumps (σ changes from 0.5 to 2). These conditions result in sustained oscillations of the rate of cases ν(t) and the fraction of potentially infected population I(t). Again, the dynamics of a single individual state V_i(t) (Fig. <ref>A, next to bottom panel) follows the epidemic waves of ν(t) and I(t). When at t=t_1=900 the basic reproduction rate k drops due to assumed anti-epidemic measures, the epidemic wave generation stops.

Mesoscopic model. In order to consider a finite number of individuals in a population, we apply the mesoscopic model described in section <ref>, based on Eqs. (<ref>)-(<ref>). The solutions are different for different N (Fig. <ref>). The solutions for small N differ significantly for different realizations of the noise. The model in the limit N→∞ is equivalent to the macroscopic model. Hence, the solutions for large N converge for different realizations and approach the macroscopic model solution shown in Fig. <ref>.

§ DISCUSSION

In the present work, we have proposed a description of an epidemic that stems from a microscopic model of an individual and results in meso- and macroscopic models of the entire population. The microscopic evolution is described by the dynamics of two state variables for each individual: the interplay between the viral load and immune response, V_i, and the time since the last infection, t^*_i. The corresponding macroscopic and mesoscopic models are then given by deterministic and stochastic refractory-density equations, respectively. We have shown the consistency between the models.

The presented simulations are reminiscent of epidemic “waves”, for instance, the ones reported in <cit.>. Our basic idea of describing the epidemic in terms of the evolution of viral load depending on the time since the last infection is supported by the experimental data from <cit.>, where the viral load was measured in patients with SARS-CoV-2 as a function of the time since infection. Our multi-scale modeling framework allows us to take such microscopic parameters measured in individuals into account in order to constrain the parameters of the meso- or macro-scale models.

The approach and the model that we presented in this manuscript could naturally be extended and generalized. We conclude the manuscript with a few promising leads for future research, which we identified during the writing of this work.

§.§ Treatment

In the simulations, we varied the individual rate of viral load k(t), which accounted for seasonality. This factor can also reflect social measures such as wearing masks and isolation. In contrast, medical treatment of patients is a function of the time since infection t^*, and it changes the course of the disease. Naturally, this factor can be taken into account through a modification of the shape of the function D entering the equation for the interplay of viral load and immune response, Eq. (<ref>); a sketch of this idea follows below.
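To make this concrete, the following minimal sketch (not the code from the linked repository) simulates a single individual's viral state V under the escape-noise model with a prescribed, step-shaped epidemic input I(t), and shows how treatment could be mimicked by damping the disease-course function D. All parameter values, the input profile, and the `treated` damping factor are illustrative assumptions, not fitted quantities.

import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters (assumptions, not the paper's fitted values)
tau, k, V_T = 5.0, 2.4, 1.0                  # relaxation, coupling, threshold
a, tau_r, c, m, tau_R = 1.0, 5.0, 0.1, 1.0, 14.0
dt, T = 0.05, 200.0                          # time step and horizon

def D(ts, treated=False):
    """Disease-course drive D(t*); `treated` damps severity (assumed effect)."""
    base = 4 * a * np.exp(-ts / tau_r) - a * np.exp(-ts / (4 * tau_r))
    return 0.5 * base if treated else base

def I_ext(t):
    """Prescribed fraction of infected people (step pulse, an assumption)."""
    return 0.05 if 10.0 <= t <= 60.0 else 0.0

V, t_star = 0.0, np.inf                      # never infected before t = 0
for step in range(int(T / dt)):
    t = step * dt
    V += dt * (-V / tau + k * I_ext(t) + D(t_star))   # deterministic drift
    t_star += dt
    # Escape-noise hazard with absolute refractory period tau_R
    lam = c * max(0.0, V / V_T) ** m if t_star >= tau_R else 0.0
    if rng.random() < lam * dt:              # infection event fires
        V, t_star = V_T, 0.0                 # reset, cf. the jump condition
        print(f"infection at t = {t:.1f}")

Replacing D(t_star) by D(t_star, treated=True) inside the loop would then emulate a treatment that reshapes the disease course, which is the modification suggested in this subsection.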
§.§ Modelling infectiousness depending on infection age

It has been shown that the infectiousness of SARS-CoV-2 depends on the time since the last infection <cit.>. This can easily be modeled in our framework by generalizing Eq. (<ref>), Eq. (<ref>) and Eq. (<ref>) for the micro-, macro- and mesoscopic fractions of infected people to I(t)=(1/N)∫_0^t κ(t-s) dN_c(s), I(t)=∫_0^t κ(t-s)ν(s) ds and I(t)=∫_0^t κ(t-s)ν_N(s) ds, respectively. The kernel κ:ℝ^+→[0,1] describes the infectiousness depending on the infection course. Our equations (<ref>), (<ref>) and (<ref>) are recovered as the special case κ(t^*)=1_{(0,τ_I)}(t^*), allowing only two possible states – infectious and non-infectious. Our infectious period τ_I=10 days corresponds well to the half-duration of cell-culture infectivity measured in <cit.> for SARS-CoV-2.

§.§ Network of multiple populations

Certain types of heterogeneity (e.g. different social communities, age groups, etc.) and spatial structure (cities) can be treated by splitting a large heterogeneous population into smaller homogeneous subpopulations, again in analogy to the application of refractory-density equations (RDEs) in neuroscience, for instance, to simulate cortical neuronal populations <cit.>. Moreover, the methodology can be extended to interacting populations, such as in the scenario of epidemic transmission between countries or the consideration of a network of communities, similar to the so-called multi-group approach (see e.g. <cit.>). Since the multi-group approach naturally operates with smaller-size subpopulations, the proposed mesoscopic approach could be applied in this case.

§.§ Multiple internal variables

The proposed derivation of the population model from that of an individual applies not only to the 1-D individual model described by a single ODE for the viral state V_i, but also to multidimensional cases, in which one may be interested in including additional characteristics of individuals and/or of populations, and keeping track of these as the disease spreads. In this way, additional internal variables can be introduced, describing, for instance, the activation and/or inactivation of some processes in the immune system. This concept resembles the consideration of Hodgkin-Huxley-like neurons in neuroscience <cit.>. If an individual is characterized by n ODEs, the population model comprises n+1 partial differential equations (PDEs) for the state variables and the density. These equations remain one-dimensional, with the time since infection as the sole independent variable besides t. Consequently, this approach maintains computational efficiency even for multi-dimensional individuals. Leveraging this advantage, more intricate models can be developed.

In comparison with the above-mentioned kinetic theory <cit.>, which results in a Fokker-Planck equation describing the evolution of the population in the phase space of a state variable, the proposed approach considers the evolution in the phase space of the time since infection. Provided the microscopic models are identical, these two macroscopic approaches would result in very similar numerical simulations, at least in the case of white Gaussian noise <cit.>. In the more general case of non-scalar state variables, the Fokker-Planck equation becomes multidimensional, whereas the transport equations of the refractory-density approach remain 1-D PDEs that are effectively solvable.

§.§ Age-dependence

Epidemics affect people of different ages to different extents <cit.>.
Our approach allows the consideration of a population distributed by age â via an age-dependent hazard function H(V,t^*,â). In this case, the main variables have to be parameterized by â as ρ=ρ(t,t^*,â) and U=U(t,t^*,â). Assuming that the hazard function changes much more slowly with age â than with the time since the last infection (i.e. |∂H/∂â| ≪ |∂H/∂t^*|), the age â can be treated quasi-statically as a parameter and not as an independent dynamical variable. Under this plausible assumption, the equations of the macroscopic model remain 1-D transport equations. However, the fraction of infected individuals I(t) would be an integral over â. This parametrization would allow the consideration of different disease time courses, etc. The infectivity κ(t^*,â) can also depend on â because people usually interact more strongly within their age groups.

§.§ Asymptomatic spread

We remark that, for our model, we are interested in the ability of an individual to spread the disease, rather than in whether they show symptoms; hence, the infected and infectious population might be generalized to include asymptomatic individuals, thus allowing us to adapt our construction to more complex compartmental models such as the ones presented in <cit.>. This may be done in multiple ways, depending on which characteristics of symptomatic and asymptomatic spread one wishes to capture in the model.

§.§ Heterogeneous severity

It could be of biological interest to assume a heterogeneous setting; e.g., the severity a in Eq. (<ref>) could vary among individuals depending on vaccination status, a stronger or weaker immune system, etc., whereas we can assume that τ_r is approximately constant and depends only on the specific disease. However, this generalization would carry a non-negligible increase in the computational cost of our simulations, and make the mean-field approximations that we perform in sections <ref> and <ref> considerably more difficult. One potential strategy to deal with such heterogeneity would be to split the population into several approximately homogeneous subpopulations and use a multi-population version of our model as described above. Importantly, in this scenario, we could take full advantage of the mesoscopic model because it remains valid for small sub-populations. We leave such a generalization to heterogeneous systems as a promising outlook for future work.

§.§ Complexity and parameters

Despite more mathematical complexity in comparison with the classical SEIRS models (PDEs versus ODEs), the proposed model has a similar number of parameters. The SEIRS model has 5 parameters (β, σ, γ, and ξ, in traditional notations, and the time scale). In turn, if the explicit course of the recovery D(t^*) is omitted, the proposed model also has only 5 parameters: τ, τ_I, k, σ, and V^T.

§.§ Inverse problem

The model may be used in future studies for solving inverse problems of finding the parameters from statistical data for a certain disease; a sketch of this idea is given below. For this purpose, one should have a sufficient set of experimental data that would reflect the effects of each of the parameters. For instance, the effect of k can be studied via the effect of masks; the effect of τ_I via the effect of care; the effect of V^T via the effect of provisional stimulation of the immune system of the entire population; the effect of σ via the selection of a more homogeneous subpopulation; the effect of τ via the selection of a faster (or slower) recovering subpopulation; and so on.
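As a minimal illustration of such an inverse problem, the sketch below fits a subset of the parameters to a synthetic case-rate series by least squares. The forward model simulate_nu here is a placeholder (a damped wave train), not an integrator of the refractory-density equations; in practice it would be replaced by the actual macroscopic solver, and the data by an empirical ν(t).

import numpy as np
from scipy.optimize import minimize

def simulate_nu(params, t):
    """Placeholder forward model for the rate of new cases nu(t).
    Stands in for a numerical solution of the macroscopic RD equations;
    replace with the real integrator before fitting to actual data."""
    tau, tau_I, k = params
    return (0.01 * k * np.exp(-t / (40.0 * tau))
            * (1.0 + np.cos(2.0 * np.pi * t / (8.0 * tau_I))))

def loss(params, t, nu_obs):
    return np.mean((simulate_nu(params, t) - nu_obs) ** 2)

t = np.arange(0.0, 365.0, 1.0)                    # one year, daily
true_params = [5.0, 10.0, 2.4]                    # synthetic "truth"
nu_obs = simulate_nu(true_params, t) \
         + 1e-4 * np.random.default_rng(0).normal(size=t.size)

res = minimize(loss, x0=[4.0, 8.0, 2.0], args=(t, nu_obs),
               method="Nelder-Mead")
print("fitted (tau, tau_I, k):", np.round(res.x, 2))

With real data, identifiability of all five parameters would require the complementary experimental contrasts listed above (masks, care, subpopulation selection, etc.).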
§.§ Limitations

In this first work on the topic, we made several simplifying assumptions. One of the most important is that our model assumes a strictly conserved population: no removal and no mortality, and a closed community with no external influx or out-flux. It would be of interest to consider, instead, a similar model in which the population is allowed to vary in time, for any of the reasons listed above.

§ DECLARATION OF COMPETING INTEREST

The authors have no competing interests to declare.

§ DECLARATION OF EQUAL CONTRIBUTION

All authors contributed equally to this work.

§ CODE AVAILABILITY

The code used to numerically solve the models presented in this manuscript is available at <https://github.com/schwalger/refracdens_epidem>.

§ ACKNOWLEDGEMENTS

Mattia Sensi was partially supported by the Italian Ministry for University and Research (MUR) through the PRIN 2020 project “Integrated Mathematical Approaches to Socio-Epidemiological Dynamics” (No. 2020JLWP23, CUP: E15F21005420006).

§ MACROSCOPIC VS MICROSCOPIC MODELS, WHITE GAUSSIAN VS. ESCAPE NOISE

To illustrate the agreement between the microscopic and macroscopic models, as well as between escape noise and Gaussian white noise, we consider a slightly simplified setup. Whereas the main model equation (<ref>) includes the term k(t)I(t), which depends recurrently on the activity of the entire population ν(t), here we consider a simplified problem with a step-wise input signal instead of the term k(t)I(t). This case can be interpreted as a rough approximation of an epidemic problem where the time course of the epidemic is assumed to be known and shaped as a step function. In this interpretation, we are interested in the probabilistic behavior of an individual, with ν(t) interpreted as the probability for this individual to fall ill. We simulate this response for the case of white noise and for the escape noise in the form of Eq. (<ref>) (Fig. <ref>). From a methodological point of view, Fig. <ref> shows a validation comparison of the macroscopic model against the microscopic models with white Gaussian and escape noise. The microscopic solutions converge to the macroscopic ones. An illustrative numerical scheme for the macroscopic equations is sketched below.
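The sketch below shows a first-order upwind integration of the macroscopic transport equations with the escape-noise hazard; it is our own minimal reconstruction under stated assumptions, not the reference solver from the linked repository. Choosing the t^* grid spacing equal to dt makes the aging advection an exact one-cell shift (CFL = 1), and the last grid cell is used as a reservoir approximating t^* = ∞ for never-infected individuals. All parameter values are illustrative.

import numpy as np

# Illustrative parameters (assumptions, not the paper's fitted values)
tau, tau_I, tau_R, k, V_T = 5.0, 10.0, 14.0, 2.4, 1.0
a, tau_r, c, m = 1.0, 5.0, 0.1, 1.0
dt, T_max, S_max, I0 = 0.1, 300.0, 120.0, 0.01

s = np.arange(int(S_max / dt)) * dt          # t* grid
n_I = int(tau_I / dt)                        # infectious window in steps

def D(ts):
    return 4*a*np.exp(-ts/tau_r) - a*np.exp(-ts/(4*tau_r))

def hazard(U, ts):                           # escape-noise hazard
    return c * np.maximum(0.0, U / V_T)**m * (ts >= tau_R)

rho = np.zeros_like(s)                       # density rho(t, t*)
U = np.zeros_like(s)                         # averaged viral state U(t, t*)
rho[-1] = (1.0 - I0) / dt                    # susceptible reservoir cell
nu_hist = []

for step in range(int(T_max / dt)):
    H = hazard(U, s)
    nu = np.sum(rho * H) * dt                # rate of new cases
    if step == 0:
        nu += I0 / dt                        # initial impulse I0*delta(t)
    nu_hist.append(nu)
    I = np.sum(nu_hist[-n_I:]) * dt          # fraction of infected people
    dU = -U / tau + k * I + D(s)
    U_res = U[-1] + dt * dU[-1]              # reservoir: no aging out
    U[1:] = U[:-1] + dt * dU[:-1]            # transport in t* + local update
    U[0] = V_T                               # reset boundary U(t,0) = V^T
    U[-1] = U_res
    surv = rho * (1.0 - dt * H)              # sink term -rho*H
    rho_res = surv[-1] + surv[-2]            # reservoir keeps mass + inflow
    rho[1:] = surv[:-1]
    rho[0] = nu                              # boundary rho(t,0) = nu(t)
    rho[-1] = rho_res

print("cumulative cases per capita:", np.sum(nu_hist) * dt)

Total mass ∑ρ·dt stays equal to one at every step, mirroring the conservation property of the boundary condition discussed in the macroscopic section.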
Science DMZ Networks: How Different are They Really?

Emily Mutter, Susmit Shannigrahi
====================================================
(This is the accepted version of the paper. The final version will appear in the proceedings of IEEE LCN 2024.)

§ ABSTRACT

The Science Demilitarized Zone (Science DMZ) is a network environment optimized for scientific applications. A Science DMZ provides an environment mostly free from competing traffic flows and complex security middleware, such as firewalls or intrusion detection systems, that often impede data transfer performance. The Science DMZ model provides a reference set of network design patterns, tuned hosts and protocol stacks dedicated to large data transfers, and streamlined security postures that significantly improve data transfer performance, accelerating scientific collaborations and discovery. Over the past decade, many universities and organizations have adopted this model for their research computing. Despite its increasing popularity, there is a lack of quantitative studies comparing such a specialized network to conventional production networks regarding network characteristics and data transfer performance. We strive to answer the following research questions in this study: Does a Science DMZ exhibit significantly different behavior than a general-purpose campus network? Does it improve application performance compared to such general-purpose networks? Through a two-year-long quantitative network measurement study, we find that a Science DMZ exhibits lower latency, higher throughput, and lower jitter behaviors. However, we also see several non-intuitive results. For example, a DMZ may take a longer route to external destinations and experience higher latency than the campus network. While the DMZ model benefits researchers, the benefits are not automatic: careful network tuning based on specific use cases is required to realize the full potential of such infrastructure.

§ INTRODUCTION

Science and engineering applications are generating data at an unprecedented rate, producing hundreds of terabytes to petabytes of data within a very short time. Additionally, scientific collaborations are becoming increasingly global, which means that researchers must transfer these datasets over wide-area networks to various scientific facilities. Such data transfers can occur between instruments, storage servers, computing systems, and cloud computing platforms. General-purpose enterprise networks are often unsuitable for these types of data transfers since these networks prioritize general usability and security over performance. Scientific data transfers can face several challenges, such as bandwidth throttling and packet loss, as well as slow throughput due to firewalls, intrusion detection systems, and other middleboxes, resulting in lower throughput, higher latency, and increased jitter and packet loss <cit.><cit.><cit.>. These challenges ultimately result in lower scientific productivity. To address these challenges, organizations often tailor a portion of their network for scientific data transfers. Such a network is generally called a Science DMZ.
Science DMZs prioritize data transfer performance through streamlined security postures, such as simple rule-based access control lists rather than stateful firewalls, and network tuning, such as large Ethernet frames and larger TCP windows. Science DMZ networks are widely deployed at US academic campuses and in other countries. By the latest count, there are more than 200 Science DMZs <cit.> in the US alone. While they are widely deployed, there is a lack of comparative, quantitative studies on how Science DMZ networks differ from their general-purpose counterparts.

To address this gap, we have observed a general-purpose production network alongside a Science DMZ at a university campus over the past two years. We have deployed multiple measurement instruments in both networks and at external facilities. We have used a number of standard network measurement tools (iperf3, ping, traceroute) and developed our own comparison software to measure network parameters such as RTT, throughput, jitter, and packet loss. Externally, we have looked into network traffic to and from large cloud platforms (Google Cloud) and the RIPE Atlas measurement platform. These long-running measurements allowed us to understand the nuances in performance differences between the two networks. We confirm that a Science DMZ generally provides a better environment for data-intensive research. However, such benefits are not automatic, and these networks may be susceptible to higher latency, packet loss, and longer paths. Therefore, careful network planning and optimization based on the requirements of specific use cases (e.g., bulk data vs. real-time) must be a part of such infrastructure.

§ BACKGROUND

§.§ General-purpose Networks vs. Science DMZs

Campus networks are typically designed to serve large numbers of users and devices, support various applications (e.g., email, web browsing, and video), and provide security and quality of service <cit.>. Campus networks are also equipped with firewalls to maintain network security, which often takes precedence over quality of service <cit.>. Because most general-purpose data flows are small (KBs-MBs) and have a short duration, moderate bandwidth, latency, and loss rates are usually sufficient for these flows. Most traditional applications on a campus network can adapt to the network's bandwidth and are not overly sensitive to packet loss or jitter.

On the contrary, scientific data is often at terabyte- and petabyte-scale <cit.>. When packet loss occurs during such transfers, TCP reduces throughput to levels where it can take days to complete a single data transfer <cit.>; the sketch below illustrates this sensitivity. The Energy Sciences Network (ESnet) developed the Science DMZ <cit.> architecture to address these issues and transfer scientific data faster. A Science DMZ is a portion of a network designed for high-performance scientific applications. It is often separated from the campus network either physically or logically <cit.>. Science DMZs also have a different security posture than enterprise networks. Instead of using multi-layer firewalls as in enterprise networks, Science DMZs use simple stateless Access Control Lists (ACLs) that allow line-rate packet processing <cit.><cit.>. These steps decrease packet loss and congestion and increase throughput <cit.>. Science DMZs are also often limited to specific (and vetted) users and devices, eliminating many of the threats present on general-purpose networks and allowing Science DMZs to be equipped with more lenient security policies <cit.>.
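As a back-of-the-envelope illustration (our addition, not a measurement from this study), the well-known Mathis et al. model bounds steady-state TCP throughput by (MSS/RTT)·C/√p with C ≈ √(3/2), which shows how quickly even small loss rates p inflate transfer times on a typical wide-area path; the MSS and RTT values below are assumptions.

import math

def tcp_throughput_bps(mss_bytes=1460, rtt_s=0.05, loss=1e-4,
                       C=math.sqrt(1.5)):
    """Mathis-model upper bound on steady-state TCP throughput (bit/s)."""
    return (mss_bytes * 8 / rtt_s) * (C / math.sqrt(loss))

one_tb_bits = 1e12 * 8  # one terabyte expressed in bits
for p in (1e-6, 1e-4, 1e-2):
    rate = tcp_throughput_bps(loss=p)
    print(f"loss={p:.0e}: ~{rate/1e6:8.1f} Mbit/s, "
          f"1 TB in ~{one_tb_bits/rate/3600:6.1f} h")

At a 50 ms RTT, moving from one lost packet in a million to one in a hundred turns an hours-long terabyte transfer into a multi-week one, which is exactly the regime the Science DMZ design tries to avoid.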
§.§ State of Science DMZ Deployment

The Science DMZ model, since its conception by Dart et al. <cit.>, has seen widespread adoption and evolution, addressing the growing data-intensive demands of scientific research. There are currently more than 200 <cit.> deployments across various organizations. The model's effectiveness in handling large-scale data transfers has been recognized across various scientific disciplines. Peisert et al. <cit.> discuss the implementation of medical Science DMZs, providing a secure yet high-performance network environment crucial for handling sensitive medical data. Gonzalez et al. <cit.> and Liu et al. <cit.> have explored the challenges and solutions in monitoring and optimizing data transfers over international research network connections. These studies underscore the importance of efficient data transfer protocols, as also highlighted by Kissel et al. <cit.>, to support the high-bandwidth requirements of global scientific collaborations. The evolution of Science DMZs encompasses advancements in data rate management using machine learning <cit.>, scalable designs considering the nature of research traffic <cit.>, and explicit feedback mechanisms for congestion control <cit.>. Gegan et al. <cit.> and Mazloum et al. <cit.> have contributed to enhancing security and measurement capabilities within Science DMZs and general-purpose networks, addressing the critical need for secure data environments in the wake of increasing cybersecurity threats.

§.§ Studies on Science DMZ Performance

A few studies have looked at Science DMZ and application performance. A study by Crichigno et al. <cit.> provides a comprehensive guide to the Science DMZ and describes some performance measurements. This study covers protocols and equipment essential for a high-performance Science DMZ, including router and switch configurations. The tutorial highlights performance evaluations using ESnet and a lab testbed, focusing on the effectiveness of router and switch equipment for large-scale data transfers. Additionally, it examines TCP attributes, their impact on network performance, the significance of specific data transfer tools and security measures in Science DMZs, and how such software and equipment can create bottlenecks.

Another study by Lee et al. <cit.> examines scientific research traffic on the Korea Research Environment Open Network, proposing a scalable Science DMZ design and an iterative greedy algorithm. The design enables cost-effective sharing of Data Transfer Nodes (DTNs), crucial but expensive components of a Science DMZ. This approach significantly reduces capital expenditures (CAPEX), by up to 79% compared to traditional models where each user has a dedicated DTN. Vega et al. <cit.> show that a P4-based controller that enhances data transfer rates can significantly improve network performance compared to non-dedicated Science DMZ cyberinfrastructure. Using emulated hosts, the study shows that their model can improve the flow completion time of large scientific data flows by an average of 21.7%. Calyam et al. <cit.> present a case study demonstrating the architecture's effectiveness in enhancing remote scientific collaboration and simplifying network management for High-Throughput Computing services. In <cit.>, researchers studied the effect of the Science DMZ on network performance. They created three scenarios: one with no DMZ and no firewall, one with no DMZ and a firewall, and a DMZ scenario.
They show that the DMZ scenario returns the overall best results compared to the no-DMZ/no-firewall and no-DMZ/with-firewall scenarios. There have been several other studies on Science DMZ performance and specific tunings <cit.>. However, these studies focused on particular aspects of a DMZ, such as data transfer performance and network tuning. As a result, studies that demonstrate quantitative improvements of a DMZ over general-purpose networks are still lacking.

§ MEASUREMENT INFRASTRUCTURE SETUP

In this study, we focus on analyzing and comparing the performance of two distinct networks: the Science DMZ and the campus commodity network on our university campus. This section summarizes the tools and infrastructure we used for our study.

§.§ Measurement Tools and Infrastructure

RIPE Atlas: RIPE Atlas is a global network with numerous servers, measurement devices, and virtual machines for network measurement. The RIPE Atlas network is a collection of “probes” that conduct measurements and provide a real-time understanding of the condition of the Internet. Probes can conduct ping, traceroute, SSL/TLS, DNS, NTP, and HTTP measurements to select targets <cit.>. We utilize RIPE Atlas to perform ping and traceroute measurements to and from servers on our campus.

perfSONAR: perfSONAR (performance Service-Oriented Network monitoring ARchitecture) is an open-source network measurement toolkit <cit.>. It provides many tools within one package to test and measure network performance. These tools include latency, throughput, trace, and disk-to-disk measurements. perfSONAR identifies areas of poor performance, both by location within the network and by the window of time in which they occur, and flags these problem spots. For this study, we created dedicated perfSONAR nodes and utilized publicly available ones.

Google Cloud: Google Cloud is a platform that is traditionally not used for network measurements. However, several science use cases clearly utilize Google Cloud for their computations. As such, we quantified the network parameters to and from the cloud.

Standard tools: In addition to these distributed measurement platforms, we utilized several standard tools, such as ping <cit.> and traceroute <cit.>. Traceroute provides the option to use both UDP and ICMP, and we utilized both. For performance measurements, we utilized iperf3 <cit.>, a command-line tool that measures the throughput between two IP endpoints. It also returns bandwidth, throughput, packet loss, and jitter from the tests. Finally, tcpdump, libpcap, and Wireshark allow packet capture and analysis of traffic traces. We utilized all three for our analyses.

§.§ Measurement Servers

For these measurements, we created measurement servers within the campus network as well as on the DMZ. Figures <ref> and <ref> show these servers. The measurement server on the campus network is referred to as Leo. Leo ran a perfSONAR instance and had standard tools such as iperf3, ping, and traceroute installed. On the Science DMZ, we used three other nodes: DTN1, DTN2, and Perfsonar1. We used DTN1 and DTN2 for data transfer experiments and Perfsonar1 for network measurement experiments. Externally, we used RIPE Atlas <cit.>, publicly available perfSONAR nodes, and Google Cloud (GCP) for our measurements.

§.§ Network Routes

A commercial ISP provided Layer 3 network connectivity to the campus network.
Internet2 <cit.>, a network specifically designed to support scientific applications, provided Layer 3 connectivity to the Science DMZ. Internally, the campus network was connected to the provider using a 10 Gbps link. All traffic passes through a gateway/firewall box that performs packet inspection. The Science DMZ network was connected to Internet2 at 10 Gbps. This connection was served by a gateway and a security appliance using access control lists for security. The campus and the Science DMZ networks were logically separate. Even though they shared physical fibers, these networks used their own VLANs, and traffic was completely separated. Figure <ref> shows the external routes. The colored lines in Figure <ref> show external (logical) connectivity to external measurement points (mainly RIPE Atlas and GCP). Figure <ref> shows local connections between Leo, the DTNs, Perfsonar1, and the gateways.

§.§ Experiments

We summarize our measurement experiments in Table <ref>. For this work, we conducted ping tests to measure network latency, packet loss, and jitter. We utilized traceroute to collect the latency associated with network paths and to identify intermediate hops between the source and destination nodes within each route. We utilized iperf3 to observe throughput between external sources, the campus network, and the DMZ. We originated these tests inbound from RIPE Atlas and Google Cloud Platform (GCP) virtual machines and outbound from the three local nodes (Leo, Perfsonar1, and DTN1).

§.§.§ Internal clients → External servers Experiments

Tests are run with one node of the campus network (Leo) and two nodes of the DMZ (Perfsonar1 and DTN1) posing as clients. We run ping and traceroute to 12 select perfSONAR nodes within the United States every 30 minutes. We send only ten packets during these tests so as not to overwhelm the external servers. We also used these clients to perform ping and traceroute to GCP VM instances hosted within the United States. We ran iperf3 throughput experiments between two on-campus clients (Leo and Perfsonar1) and GCP VM instances every 12 hours.

We performed the data transfer experiments using Leo and DTN1 as clients. We downloaded Linux ISOs from publicly available mirrors every four hours on both nodes and captured the packet headers using tcpdump. These packet capture datasets allowed us to analyze interpacket delay, packet loss, round-trip time, packet retransmissions, and download time. We observed the average daily value of these metrics in Wireshark, calculated the average RTT and interpacket delay externally, and then plotted the daily values from DTN1 and Leo side by side.

§.§.§ External Client → Internal Server Experiments

We ran ping tests from RIPE Atlas to the local nodes every hour and traceroute tests every six hours. The ping and traceroute measurements send three packets of size 48 bytes during each execution. We executed a set of five experiments for each of these tests. For each experiment, we utilized five different RIPE Atlas source probes located within the United States. Using the same method, ping and traceroute tests also run from GCP to the local nodes, every 30 minutes. A sketch of this kind of scheduled measurement loop is given below.
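The following minimal sketch shows the kind of scheduled ping loop used for these experiments; it is our reconstruction, not the study's actual scripts, and the target hostname is a placeholder. Each run sends ten packets, parses the RTTs, and appends one JSON record per run to a JSON-lines file for later analysis.

import json
import re
import subprocess
import time
from datetime import datetime, timezone

TARGET = "perfsonar.example.edu"   # placeholder, not a node from the paper
RTT_RE = re.compile(r"time=([\d.]+) ms")

def ping_once(target, count=10):
    """Run ping once and return a structured record of the results."""
    out = subprocess.run(["ping", "-c", str(count), target],
                         capture_output=True, text=True).stdout
    rtts = [float(x) for x in RTT_RE.findall(out)]
    return {"ts": datetime.now(timezone.utc).isoformat(),
            "target": target,
            "sent": count,
            "received": len(rtts),
            "rtts_ms": rtts}

while True:
    record = ping_once(TARGET)
    with open("ping_results.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")
    time.sleep(30 * 60)            # every 30 minutes, as in the experiments

A traceroute loop would follow the same pattern with a different parser, and the resulting JSON feeds directly into the analysis described in the next subsection.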
§.§.§ Internal Clients ↔ Internal Servers Experiments

As previously mentioned, ping tests are performed to measure network latency, packet loss, and jitter, and traceroute tests are conducted to collect the latency associated with network paths and to identify intermediate hops between the source and destination nodes within each route. These tests are executed between the local network nodes (Leo, Perfsonar1, DTN1, DTN2), as well as between select local network nodes and the gateway to the campus network. Ping and traceroute tests are executed on these routes using the same method. Every 30 minutes, ping and traceroute tests run from Leo to DTN1, from Perfsonar1 to DTN1, from DTN1 to DTN2, and from both Leo and DTN1 to the gateway. Ping is configured to send only ten packets during each test.

§.§.§ BGP Experiments

For the BGP experiments, we utilized a BGP dump from our Science DMZ BGP border router, which we manage. We obtained the BGP routes from our upstream provider on the campus network.

§.§ Data Analysis

We parsed the collected data from ping, traceroute, and iperf3 into JSON and used Pandas, Seaborn, and Matplotlib to analyze and graph the results. We examined the ping data to interpret latency, packet loss, and jitter. We analyzed the latency by taking all round-trip time (RTT) occurrences and graphing them with a Cumulative Distribution Function (CDF). We plotted daily packet loss by dividing the sum of all packets lost over a day by all packets sent over that day. We determined jitter by finding the difference in latency of successive packets. The jitter is then averaged daily and plotted with the standard deviation from that average. We used traceroute data to calculate the network latency and hop counts associated with network paths. We plot this by categorizing the measurements by the number of hops traversed in the network path and then averaging the latency observed for each route length (see the sketch below). Finally, we used iperf3 and the downloaded datasets for throughput insight. We plot this by averaging the bitrates from each day, categorizing them into “sender” and “receiver”, and then plotting the averages per day.

§ RESULTS

In this section, we discuss the comparative results from our experiments. We ran our experiments at regular intervals, as described in the previous section.

§.§ Path Lengths

Different upstream providers serve the DMZ and the commodity network in this study. A commercial ISP serves the campus network, while the DMZ is served by Internet2, a specialized network for research. These experiments compare the path lengths of network destinations to/from internal and external vantage points. Figures <ref> and <ref> show the average latency and path lengths between RIPE Atlas, Leo (located in the campus network), Perfsonar1, and DTN1 (both located in the DMZ). In both experiments, the maximum hop count is 19 hops and the minimum is 8 hops. The latency and hop counts are lower between these servers and GCP, as shown in Figures <ref> and <ref>. The hop count to these servers is 10 hops compared to 19 from RIPE Atlas. RIPE probes are hosted by various organizations and served by various ISPs. However, Google has a more optimized peering presence, leading to lower hop counts. The latency between GCP and these servers is also lower. Both for the DMZ and the campus network, the maximum latency is 300 ms. But the DMZ exhibits lower latency at all route lengths in common with the campus network, by ∼3%-6.78%.
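The per-route-length averaging behind figures of this kind follows the recipe from the data-analysis subsection; a minimal sketch is shown below. The records are stand-ins illustrating the expected shape of the parsed traceroute data, not actual measurements from this study.

import pandas as pd

# Stand-in rows; real data comes from the parsed traceroute JSON.
records = [
    {"src": "Leo",  "hops": 10, "rtt_ms": 41.2},
    {"src": "Leo",  "hops": 12, "rtt_ms": 55.7},
    {"src": "DTN1", "hops": 12, "rtt_ms": 38.9},
    {"src": "DTN1", "hops": 14, "rtt_ms": 44.1},
]

df = pd.DataFrame(records)
# Mean latency per route length, campus node vs. DMZ node side by side.
by_len = df.groupby(["src", "hops"])["rtt_ms"].mean().unstack(0)
print(by_len)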
As exhibited in Figures <ref> and <ref>, when traffic is outbound to external perfSONAR nodes, Leo experiences routes 1-2 hops shorter than the DMZ routes, and there is a point, at 10 hops, where the commodity network performs faster than the DMZ by 12.5%. However, the DMZ tends to have a latency 20%-36.7% lower than Leo, comparing common path lengths only. The plots of the two DMZ nodes are very similar for this experiment, so Figure <ref> was selected to represent both nodes. However, we noticed one difference: at the longest path length of 14 hops, the DTN1 node on the DMZ has a latency ∼6.75% lower than that of the Perfsonar1 node on the DMZ. In these outbound experiments, the path lengths are between 7-12 hops on the campus network and 9-14 hops on the DMZ side. Since IP routing can be asymmetric, there is a mismatch between the hop counts from the inbound and the outbound experiments.

Takeaways: Given that the DMZ is served by a specialized research network, Internet2, we expected it to have lower hop counts for inbound and outbound traffic. However, the DMZ experiments consistently show higher hop counts than the campus network. This observation is critical for delay-sensitive research applications, such as AR-VR, since moving them into the DMZ will potentially increase their hop count, resulting in higher end-to-end delay. From these experiments, we conclude that just placing research use cases into a DMZ may not automatically improve their performance or latency. Careful discussions and planning with upstream providers are needed to optimize the routing and/or the physical path. On our campus, we discovered the upstream provider routing traffic over a longer but less congested physical path rather than a short but more heavily used one.

§.§ Latency

Comparing the distributions of latency in Figures 3a-3f, we find that the latencies are between 100-400 ms on the campus network and 50-600 ms on the DMZ. There is a significant spike in latency at the penultimate hop (the DMZ gateway) for the DMZ experiments. More interestingly, the latencies are slightly higher on the DMZ for outbound experiments since the paths are typically longer. On the paths with higher hop counts, both the campus network and the DMZ experience similar latency, as Figures <ref> and <ref> show.

Figures 4a-4c compare the latency for inbound WAN traffic from RIPE Atlas and GCP and for outbound WAN traffic to perfSONAR nodes. The 95th-percentile latency from RIPE Atlas to both the DMZ and the campus network is around 80 ms. The 95th-percentile latency from GCP to the campus network is around 35 ms, and to the DMZ is near 37 ms. This can again be attributed to the better peering provided by GCP. Based on Figures <ref> and <ref>, when traffic is inbound from RIPE Atlas, the range of hops is the same to reach Leo and the two DMZ nodes. Comparing only common path lengths, both nodes on the DMZ have similar latency, represented only by Figure <ref> for conciseness. A difference in latency was noted when the route averaged 16 hops for the DMZ nodes: at that point, the DTN1 node had 11% lower latency. Both DMZ nodes often exhibit 34-73% lower latency than Leo, but Leo has path lengths with 13-30% lower latency than the DMZ nodes.

Based on Figures <ref> and <ref>, when traffic is inbound from Google Cloud, both nodes on the DMZ tend to have similar latency, with an occasional ∼2% difference. Due to the close similarity of their plots, only Figure <ref> represents the DMZ nodes for this experiment.
The campus network tends to have latency similar to the DMZ, or higher latency by ∼2%-24%. For the outbound experiments presented in Figure <ref>, the 95th-percentile latency to external perfSONAR nodes is also around 35 ms on the DMZ side. On the campus network, the 95th-percentile latency is near 55 ms. When traffic is outbound to perfSONAR nodes, both nodes on the DMZ exhibit similar latency, while the campus network experiences latency that is 30.43%-83% higher.

Internally, we find the latency between the campus and DMZ nodes to be very low. However, given that the path length is minimal, the effect of the firewall is especially pronounced here. Most pings between campus network servers and the DMZ exhibit a 10 ms delay. Compare this to Figure <ref>, where pings between the nodes and their respective gateways take less than 2 ms. The inline firewall and access control lists (ACLs) add 8 ms of latency to each packet, which is substantial. Most of this additional delay can be attributed to the firewall and packet inspection middleware.

Takeaway: We find that both the campus network and the DMZ exhibit similar latency, with the campus network occasionally having a lower average latency than the DMZ by as much as ∼20 ms (5%-30.5%). We find that measurements often get delayed on the DMZ (e.g., pings not arriving or arriving with higher latency), which degrades the results. For internal measurements, we find that firewalls negatively affect performance, even when measurement boxes are placed in the same campus/data center.

§.§ Packet Loss

The DMZ experiences more packet loss than the campus network for inbound traffic from RIPE Atlas. While Leo exhibits a period of 100% packet loss due to the campus node being down, as Figure <ref> shows, both nodes of the DMZ experience 50% genuine packet loss even when the network is up. However, the packet loss is more consistent on the campus network, where we can observe regular 1-2% packet losses. The Perfsonar1 node on the DMZ exhibits more packet loss than the DTN1 node on the DMZ; it loses ∼5% more packets than DTN1 over three months, as Figure <ref> shows. This is potentially because more experiments were conducted on the Perfsonar1 node than on DTN1. When traffic is incoming from Google Cloud, there is no packet loss pattern on either the DMZ or the campus network.

When traffic is outbound to external perfSONAR nodes, the campus network experiences more packet loss than the DMZ, but the Perfsonar1 node experiences more packet loss than the DTN1 node. Perfsonar1 exhibits ∼2% more packet loss than DTN1. The campus network exhibits ∼0.2% more packet loss than Perfsonar1 and ∼5.7% more packet loss than DTN1, as Figure <ref> shows. Internally, as shown in Figure <ref>, there is no pattern of packet loss sourced from the DMZ nodes, but there are spikes of loss sourced from the campus node of less than 2%.

Takeaways: The campus network experiences more regular packet loss. Firewalls and middleboxes contribute to these packet loss events. Packet loss also occurs on the outbound paths from campus, again potentially due to the presence of firewalls. This observation is important since large data transfers are sensitive to packet loss. Placing research use cases on a shared campus network will affect data transfer performance. Such use cases should be placed in a DMZ network, which has a lower loss rate due to the simplified nature of such networks.

§.§ Jitter

Jitter is an important metric for video and other real-time applications. In these experiments, we compare the jitter between the campus network and the DMZ; a sketch of how the daily jitter statistics are computed is given below.
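The sketch below is our reconstruction (not the study's exact code) of the daily jitter and packet-loss aggregation described in the data-analysis section; it consumes JSON-lines records of the shape produced by the earlier ping-loop sketch.

import json
import pandas as pd

rows = []
with open("ping_results.jsonl") as f:
    for line in f:
        rec = json.loads(line)
        rtts = pd.Series(rec["rtts_ms"])
        rows.append({
            "day": pd.Timestamp(rec["ts"]).floor("D"),
            "loss": 1.0 - rec["received"] / rec["sent"],
            "jitter_ms": rtts.diff().abs().mean(),  # mean |RTT_(i+1) - RTT_i|
        })

df = pd.DataFrame(rows)
# Daily averages and the standard deviation around the daily jitter mean,
# matching the quantities plotted in this section.
daily = df.groupby("day").agg(loss=("loss", "mean"),
                              jitter_mean=("jitter_ms", "mean"),
                              jitter_std=("jitter_ms", "std"))
print(daily.head())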
In these experiments, we compare the jitter between the campus network and the DMZ. When traffic is inbound from RIPE Atlas, Leo, the campus node, exhibits lower average jitter than the DTN1 or Perfsonar1 nodes as Figures <ref> and <ref> show. Jitter on the campus route tends to be 60-78% lower than on the DMZ routes on average. DTN1 tends to exhibit higher variation in its daily jitter than Perfsonar1 by as much as 37 milliseconds, but the two DMZ nodes exhibit similar overall performance. When traffic is inbound from GCP to Leo and the DMZ nodes, all three routes exhibit similar average jitter patterns between 0-1 milliseconds, only ever differing by fractions of milliseconds. Figure <ref> represents the average jitter pattern, and differences in standard deviation from all three nodes are noted. Leo's route often experiences more deviation in its jitter than the DMZ nodes by as much as two milliseconds. The DTN1 node and Perfsonar1 node experienced a similar jitter pattern, so only the Perfsonar1 plot was selected to convey this experiment. However, the two nodes' difference in variation was noted. The DTN1 node experiences more deviation than the Perfsonar1 node by as much as 1.5 milliseconds. When traffic is outbound to perfSONAR nodes, as shown in Figures <ref> and <ref>, Leo and the DMZ nodes typically have an average jitter between 0-1 milliseconds. However, Leo often reaches higher jitter rates up to 10-63 milliseconds greater than the DMZ nodes. Both DTN1 and Perfsonar1 exhibited similar daily jitter and deviation patterns, so only the plot of Perfsonar1 was chosen to represent the DMZ for this experiment. Takeaways: Campus networks experience more jitter than their DMZ counterparts. The average jitter on the campus network is also higher due to a higher number of competing flows. §.§ Data Transfer Throughput One of the main reasons for creating DMZs is the higher data transfer rate that it enables. This section compares data transfer rates between the DMZ and the campus network. As mentioned earlier, for these tests, we downloaded publicly available Linux ISOs. We performed both experiments back to back to reduce variations in network conditions. Additionally, we did not tune the TCP stacks on the hosts. While such tuning significantly improves the data transfer rates, we wanted to establish a baseline comparison. Further tuning will improve data transfer performance in both DMZ and campus networks. As Figure <ref> shows, the average throughput was much higher on the DMZ when compared to the campus network. The host on the campus network could achieve only 50Mbps, while the host on the DMZ achieved close to 1Gbps. The slower data transfers are a result of packet loss and in-line firewall. On the other hand, the DMZ performs well since it only uses ACLs, and the loss rate is also low. We also looked at the TCP window sizes for these transfers, shown in Figures <ref> and <ref>. We looked at both the “Bytes out" window size (bytes in flight) and the received window size, and the DTN had more oversized windows in both cases. The received window size was larger on Leo several times, but the throughput was low. This observation is consistent with what we would expect on a lossy link. Figure <ref> corroborates these observations. We ran regular iperf3 tests between hosts on Google Cloud, DMZ, and the campus network. The DMZ host consistently outperforms the campus host in both upload and download performance. 
Takeaways: The general purpose network performs significantly worse than a DMZ regarding file transfer performance. The campus network performs worse since it has more packet loss, its TCP windows are smaller, and firewalls add latency to the packets. §.§ BGP Path Comparison This section compares the BGP path lengths between the DMZ and the campus network. We downloaded the BGP tables from the DMZ BGP router and the campus ISP's BGP router. First, we noticed that the commercial ISP had substantially more routes than the Science DMZ router. The campus network had 715,810 BGP routes compared to 94,773 on the Science DMZ router. The campus BGP table also had three entries per destination as backup routes. We believe these are artifacts of BGP configurations. Other than providing more route options in case of a failure and the capability of better load balancing, the additional BGP routes provide no further advantages. We then compared BGP hop counts between these networks. Figure <ref> shows the distribution. The general purpose network had a large number of paths with hop counts of six or less (note the split Y axis). The DMZ showed similar patterns. Since the DMZ had fewer routes, we separated the intersection of these two tables and compared them in Figure <ref>. We found that the path lengths for the DMZ were slightly lower for shorter paths (hop counts of 3). For other DMZ routes, the hop count was larger than that of the campus routes. While BGP and IP path lengths are not always strictly correlated, these observations corroborate our findings in the previous experiments. Takeaways: The DMZ has less path diversity and longer path lengths than the campus network. While this may not directly affect performance, the resiliency of the DMZ can be improved by using additional fallback routes. Further, the path length can be reduced by creating better peering at the upstream, which requires negotiation with the upstream provider. § CONCLUSIONS Science DMZs represent a paradigm shift in network design, tailored explicitly for scientific applications and distinct from traditional campus or general-purpose networks. The core principles of the Science DMZ, such as optimized paths for large data transfers and minimized security interference, position it as an advantageous environment for research and scientific collaboration. Over recent years, its adoption by numerous universities and organizations highlights its value in the academic and research communities. Our comprehensive study over two years presents a nuanced picture. We confirm that the Science DMZ exhibits lower latency, higher throughput, and reduced jitter compared to general-purpose networks; when considering file transfer performance, the DMZ clearly outperforms general-purpose networks. The packet loss, smaller TCP windows, and added latency from firewalls in campus networks significantly hinder their efficiency in handling large-scale data transfers. Science DMZs are not without their limitations. We observed non-intuitive results such as higher latency in specific scenarios and increased hop counts compared to campus networks. These findings suggest that while the Science DMZ can enhance certain aspects of network performance, it may not uniformly outperform campus networks in all areas, particularly in delay-sensitive applications like AR-VR. Our study reveals that the DMZ has less path diversity and longer path lengths than campus networks.
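A hop-count comparison like the one in the figures above is straightforward to reproduce from two route-table dumps; a sketch, assuming each table has been exported as plain-text lines of the form "prefix: AS1 AS2 ..." (the file names and this export format are our assumptions, not the paper's tooling):

from collections import Counter

def hop_counts(path):
    # Map prefix -> shortest AS-path length from "prefix: AS1 AS2 ..." lines.
    table = {}
    with open(path) as fh:
        for line in fh:
            prefix, _, as_path = line.partition(":")
            prefix, hops = prefix.strip(), len(as_path.split())
            if hops:
                table[prefix] = min(hops, table.get(prefix, hops))
    return table

campus = hop_counts("campus_bgp.txt")  # hypothetical dump files
dmz = hop_counts("dmz_bgp.txt")

# Compare distributions over the prefixes both tables contain.
common = campus.keys() & dmz.keys()
campus_dist = Counter(campus[p] for p in common)
dmz_dist = Counter(dmz[p] for p in common)
for hops in sorted(campus_dist.keys() | dmz_dist.keys()):
    print(f"{hops:2d} hops: campus={campus_dist[hops]:7d} dmz={dmz_dist[hops]:7d}")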
While this impacts performance, strategic enhancements, such as developing better peering agreements and incorporating fallback routes, could mitigate these limitations. In summary, while the Science DMZ model offers distinct advantages for specific research applications, it is not a one-size-fits-all solution. Such deployments must be carefully tailored to the particular needs and use cases of the communities they serve. This approach is paramount to fully harnessing the potential of DMZs in advancing scientific discovery and collaboration.
http://arxiv.org/abs/2407.03102v1
20240703134106
Droplets of Bosons at a Narrow Resonance
[ "Ke Wang", "Thimo Preis", "Dam Thanh Son" ]
cond-mat.quant-gas
[ "cond-mat.quant-gas", "nucl-th" ]
James Franck Institute and Kadanoff Center for Theoretical Physics, University of Chicago, Chicago, Illinois 60637, USA Institut für Theoretische Physik, Universität Heidelberg, 69120 Heidelberg, Germany James Franck Institute and Kadanoff Center for Theoretical Physics, University of Chicago, Chicago, Illinois 60637, USA § ABSTRACT We consider bosons interacting through a narrow s-wave resonance. Such a resonance is characterized by an infinite scattering length and a large and negative effective range r_0. We argue that any number N≥3 of bosons can form a self-bound cluster with the binding energy per particle increasing as N^2 for 1≪ N≪ (-r_0/a_bg)^1/2, where a_bg is the background scattering length (between atoms and molecules). In the opposite limit N≫ (-r_0/a_bg)^1/2, bosons form droplets with binding energy per particle saturating to a constant value independent of the particle number. The stability of clusters and droplets when the interaction is detuned from the resonance is also studied. Droplets of Bosons at a Narrow Resonance Dam Thanh Son July 2024 ======================================== Introduction.—Dilute quantum droplets are particularly interesting physical systems that have recently become the subject of active theoretical and experimental study <cit.>. The oldest known quantum droplets are those formed by ^4He atoms <cit.>. Such a droplet [We use the terms “droplet” and “cluster” interchangeably.] exists for any number of helium atoms N, with the binding energy per particle approaching the thermodynamic limit of 7.1 K when N→∞. It is notable that the approach to this asymptotic value is quite slow, and, in the intermediate regime 3≤ N≲ 10, the binding energy, instead of growing linearly, follows an approximate quadratic law (N-2)^2 <cit.>. It was suggested that bosons with short-range attractive interactions can form a droplet in 3D <cit.> and 2D <cit.>. In 2015, Petrov <cit.> showed that in a bosonic mixture the collapse of the system can be prevented by quantum effects (see also Ref. <cit.>). Droplets stabilized by quantum effects have been realized experimentally <cit.>. In this Letter, we study bound states of bosons which interact with each other through a narrow s-wave resonance. With ultracold atoms, narrow resonances are realized, for example, by cesium atoms in a magnetic field <cit.>. At such resonances, bosons can form both atomic and molecular condensates; the thermodynamics and dynamics of such a hybrid condensate have been studied <cit.>. In particular, it was found that such a system can form a “mutually trapped state” <cit.>, in essence, a self-bound drop of a liquid phase that phase-coexists with the vacuum. At the other end of the range of particle numbers, the three-body problem of bosons at a narrow resonance has been solved <cit.>. The properties of the three-body system are completely characterized by the scattering length a and the effective range r_0, both assumed to be much larger than any other length scales in the problem, and, in addition, r_0 is assumed to be negative. At a=∞ there is an infinite tower of three-body Efimov states, with the energy of the ground state entirely determined by r_0. It has been found that a stable three-body bound state exists only in a finite range of the inverse scattering length 1/a: when 1/a is too negative, the three-body bound state decays into free particles. Conversely, when 1/a is too large and positive, the three-body bound state is unstable toward a decay into a particle and a bound dimer.
We will show that a system of a large number of bosons at a narrow s-wave resonance forms a bound cluster with a binding energy that first increases as N^3 and whose size decreases as 1/N with increasing N. As N is increased further, one crosses over to the regime of large droplets with constant density in the core, whose binding energy increases as 𝒪(N) (i.e., constant binding energy per particle). The crossover between the two regimes happens at N∼ (-r_0/a_bg)^1/2, where a_bg is the characteristic scale of the background atom-atom, atom-molecule, and molecule-molecule scattering. Moreover, we show that the N-body bound state remains stable in a range of the detuning parameter |r_0|/a, which includes both positive and negative values of a. The width of this range scales as N^2 in the small-cluster regime and is constant in the large-droplet regime. Combining our results with the solution to the three-body problem <cit.>, one comes to the conclusion that at a narrow resonance, a bound state of N bosons exists for any N≥3. The model.—A narrow resonance is characterized by a large s-wave scattering length a and a large and negative effective range r_0. The effective Hamiltonian describing this situation is (we set the mass of the atom m=1, so the mass of the molecule is 2) H = ∫d𝐱 [ (1/2)|∇ψ|^2 + (1/4)|∇ϕ|^2 - α(ψ^†ψ^†ϕ + ϕ^†ψψ) + νϕ^†ϕ ] . By computing the atom-atom scattering amplitude from (<ref>), one can establish the connection between the parameters α and ν and the scattering length a and the effective range r_0 characterizing the low-energy interaction between the atoms: α = √(4π/(-r_0)), ν = -2/((-r_0) a) . In particular, for negative detuning ν<0 the molecule is bound in vacuum, while for ν>0 it is unbound. In the regime a≪|r_0|, the binding energy of the dimer is -ν. As a nonrelativistic quantum field theory, the theory given by Eq. (<ref>) is superrenormalizable. Indeed, the dimensions of both α and ν are positive: [α]=1/2 and [ν]=2. Thus the theory (<ref>) can be defined without an ultraviolet cutoff. In the real world, (<ref>) is only an effective field theory. There exist irrelevant corrections to the Lagrangian with coefficients of natural (i.e., not finely tuned) magnitudes, and they limit the regime of validity of (<ref>) in the ultraviolet. If we denote by a_bg the length scale associated with these terms, we can safely use Eq. (<ref>) when the characteristic momentum is much smaller than 1/a_bg. For now, let us assume that a_bg=0, and check the effect of the irrelevant corrections to Eq. (<ref>) later. We will try to find the ground state of N bosons, with N≫1. First let us assume the resonance is at exactly zero energy, i.e., the detuning parameter vanishes, ν=0. We expect that the mean-field approximation works for N≫1 bosons. Thus the problem becomes that of minimizing the classical energy functional given by Eq. (<ref>) with ν=0 under the constraint ∫d𝐱 (ψ^†ψ + 2ϕ^†ϕ) = N . There are two competing contributions to the energy of the droplet: the positive gradient energy and the negative attraction energy between the two condensates. To understand the interplay between these contributions, one can perform a simple variational calculation. Namely, we pick a profile function f(r) which has a finite value at r=0 and tends to zero exponentially as r→∞. For definiteness, we take f(x) = 1/cosh(x) . We then try the following ansatz for the condensates of atoms and molecules: ψ(r) = √(cN/(4π I_2 R^3)) f(r/R), ϕ(r) = √((1-c)N/(8π I_2 R^3)) f(r/R) .
Here I_2 is a numerical constant that depends on the shape function f(x), I_2 = ∫_0^∞ dx x^2 f^2(x) , with I_2≈ 0.822 for the choice (<ref>). The ansatz (<ref>) corresponds to two clouds of atoms and molecules of the same shape and size. The variational parameter R controls the size of the droplet, and c denotes the fraction of free atoms not bound in molecules. The fraction of atoms bound in molecules is (1-c), and the total particle number is N. In addition to m=1 we will further set -r_0=1 for convenience. In particular, energy is measured in units of ħ^2/(mr_0^2). Inserting the ansatz (<ref>) into the energy (<ref>), one finds the variational energy E(c,R) = a(c)N/R^2 - b(c)N^{3/2}/R^{3/2} , with a(c) = ((3c+1)/8)(K/I_2) , b(c) = c√(2(1-c)) I_3/I_2^{3/2} , where we have defined two further characteristics of the shape: I_3 = ∫_0^∞ dx x^2 f^3(x) , K = ∫_0^∞ dx x^2 (f'(x))^2 . For the choice (<ref>), I_3≈ 0.367 and K≈ 0.607. The gradient terms in the energy lead to the 1/R^2 contribution, which dominates at small R, and the Feshbach interaction [the terms proportional to α in Eq. (<ref>)] leads to the -1/R^{3/2} contribution, which dominates at large radii. At fixed c, the energy is minimized at the radius R = (16a^2/(9b^2))(1/N) , with the value at the minimum E(c) = -(27/256)(b^4/a^3) N^3 = -A c^4(1-c)^2/(c+1/3)^3 N^3 , where A = 8I_3^4/(I_2K)^3 ≈ 1.16. Note that E(c) is zero both at c=0 and c=1: at these values one of the condensates vanishes and the coupling term -αϕψ^2 does not give an attractive contribution to the energy. The minimal energy is achieved at the atomic fraction c = (√17-1)/6 ≈ 0.521, with the energy at the minimum E_0 ≈ -0.0316 N^3. Since this is only a variational calculation, this should be regarded as an upper bound on the energy. Indeed, a variational calculation based on the modified Woods-Saxon shape function [cosh(x/R)+ξ]^{-1}, with five variational parameters corresponding to the parameters R and ξ of the atomic and the molecular condensates and the relative amplitude of the two condensates, yields E_0 = -0.0347 N^3 ħ^2/(mr_0^2) , where we have restored the unit of energy previously set to 1. Further minimization using the imaginary-time Gross-Pitaevskii equation lowers the energy by less than the last digit given in Eq. (<ref>) <cit.>. We now consider the fate of the N-boson cluster when the detuning parameter ν is nonzero. It was found in Ref. <cit.> that the three-body bound state (the trimer) exists only within a finite range of ν, ν_-(3)<ν<ν_+(3), where ν_+(3)≈8.72 and ν_-(3)≈ -0.366. For ν<ν_-(3), the trimer decays into a dimer (molecule) and an atom, while for ν>ν_+(3) the trimer is completely unbound. We now show that the N-body bound states at large N behave in a similar way: each of them exists in a finite range of the detuning parameter, ν_-(N)<ν<ν_+(N), but the range expands with increasing N: ν_±(N)∼ N^2. The qualitative behavior of the N-body cluster can again be investigated using the variational ansatz. The contribution of the detuning term to the energy is simply proportional to the number of molecules and independent of the cluster size R. Thus minimization over R proceeds as in the case of zero detuning and one obtains the energy as a function of the atomic concentration c, E = - N^3 A c^4(1-c)^2/(c+1/3)^3 + (ν/2) N (1-c) . The behavior of this function of c is controlled by ν̃=ν/N^2. There are two critical values, which in our variational calculation are ν̃_+≈0.132A≈ 0.153 and ν̃_-≈-0.117A≈ -0.136.
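The shape integrals and the minimum quoted above are straightforward to verify numerically; a minimal cross-check (our script, using quadrature on the profile f(x)=1/cosh x; not the authors' code):

import numpy as np
from scipy.integrate import quad

f = lambda x: 1 / np.cosh(x)
fp = lambda x: -np.sinh(x) / np.cosh(x) ** 2  # f'(x)

I2 = quad(lambda x: x**2 * f(x) ** 2, 0, np.inf)[0]  # ~0.822
I3 = quad(lambda x: x**2 * f(x) ** 3, 0, np.inf)[0]  # ~0.367
K = quad(lambda x: x**2 * fp(x) ** 2, 0, np.inf)[0]  # ~0.607
A = 8 * I3**4 / (I2 * K) ** 3                        # ~1.16

c = (np.sqrt(17) - 1) / 6                        # optimal atom fraction ~0.521
E0 = -A * c**4 * (1 - c) ** 2 / (c + 1 / 3) ** 3  # E_0/N^3 ~ -0.0316
print(f"I2={I2:.3f}, I3={I3:.3f}, K={K:.3f}, A={A:.3f}, c={c:.3f}, E0/N^3={E0:.4f}")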
For ν̃_-<ν̃<ν̃_+, the minimum of E(c) is located at a finite value of the atomic fraction c, which changes from 0.667 at ν̃=ν̃_+ to 0.404 at ν̃=ν̃_-. A more accurate treatment <cit.> gives the values of the upper and lower ends of the interval, beyond which the N-boson bound state ceases to exist, as ν_+(N) ≈ 0.191 N^2, ν_-(N) ≈ -0.140 N^2 . If one fixes the value of the detuning parameter ν, parametrically larger than 1 (in the unit system m=-r_0=1), then a bound cluster can only be formed if the number of particles is larger than some N_crit=(ν/ν̃_±)^{1/2}, for positive or negative detuning, respectively. As we have seen, the size of the cluster decreases with increasing number of particles as 1/N. The density at the center of the cluster therefore scales like N^4 at large N. That means that at some value of N one can no longer ignore terms that were dropped when one writes down the energy functional. The leading irrelevant terms are H_bg = (1/2)∫d𝐱 (g_11|ψ|^4 + 2g_12|ϕ|^2|ψ|^2 + g_22|ϕ|^4) , where g_11=4π a_11/m, g_12=3π a_12/m, and g_22=2π a_22/m, with a_11, a_12, and a_22 being the background (i.e., the nonresonant part of the) atom-atom, atom-molecule, and molecule-molecule scattering lengths. As the density of the droplet increases with increasing N, one expects that the four-point interaction, if it is repulsive (i.e., g_11>0, g_22>0, and g_11g_22-g_12^2>0), should stabilize the density at some finite value. This can be seen by minimizing the energy using the symmetrized Woods-Saxon ansatz. At large N the density distribution flattens out in the center. At very large N, the droplet has a bulk of constant density surrounded by a thin wall across which the density drops to zero. The situation here is simply the coexistence of two phases: the self-bound liquid and the vacuum. The Landau free energy density at chemical potential μ, V(ψ,ϕ; μ) = - α(ψ^†ψ^†ϕ + ϕ^†ψψ) + (g_11/2)|ψ|^4 + g_12|ψ|^2|ϕ|^2 + (g_22/2)|ϕ|^4 - μ(|ψ|^2 +2|ϕ|^2) , allows a first-order phase transition to occur at a negative value of μ, where a nontrivial minimum of V(ψ,ϕ) is degenerate with the trivial minimum at ψ=ϕ=0. This critical value of μ and the values of the condensates in the liquid phase are the solution to the equations V(ψ_0,ϕ_0)=∂_ψV(ψ_0,ϕ_0)=∂_ϕV(ψ_0,ϕ_0)=0. Parametrically, this occurs at μ∼ E/N∼ħ^2/(ma_bg|r_0|) . The binding energy per particle has the same order of magnitude as μ. The density in the fluid phase is n ∼ 1/(4π a_bg^2 |r_0|) . At saturation density, the self-bound liquid is still dilute: na_bg^3≪ 1, justifying the mean-field approximation. Also, provided that a_bg sets the magnitude of the coefficients of the additional terms involving higher powers of fields and higher derivatives, these can be ignored in Eq. (<ref>). In Fig. <ref>, we plot the binding energy per particle of a droplet as a function of N, showing a crossover between the 𝒪(N^2) behavior at small N and the 𝒪(N^0) behavior at large N. Once the characteristics of the self-bound liquid are found, one can determine the surface tension of the phase boundary between the liquid and the vacuum by minimizing the static energy of a one-dimensional field configuration ψ(x), ϕ(x), σ = ∫_{-∞}^{∞} dx [ (1/2)(∂_x ψ)^2 + (1/4)(∂_x ϕ)^2 + V(ψ,ϕ) ] , which interpolates between the vacuum and the liquid, i.e., satisfies the boundary conditions ψ(-∞)=ϕ(-∞)=0, ψ(+∞)=ψ_0, ϕ(+∞)=ϕ_0. In Table <ref> we consider two simple models: in model I, g_11=g_22=4π a_bg, g_12=0, and in model II, g_11=g_22=g_12=4π a_bg. Now we discuss the effect of detuning.
We first consider how detuning affects the phase diagram of homogeneous matter. For this, one needs to investigate the behavior of the Landau functional (previously considered in Refs. <cit.>) V(ψ,ϕ;μ,ν) = V(ψ,ϕ;μ) + ν |ϕ|^2 . A typical (μ,ν) phase diagram is shown in Fig. <ref>. One can understand the positive detuning (ν>0) region of the phase diagram by integrating out ϕ, assuming all fields are small. The minimum over ϕ is achieved at ϕ = (α/(ν-2μ))ψ^2 + 𝒪(ψ^4) . Substituting this into Eq. (<ref>), we find the effective potential for ψ: V_eff(ψ) = -μ|ψ|^2 + (g_11/2 - α^2/(ν-2μ))|ψ|^4 + (g_12α^2/(ν-2μ)^2)|ψ|^6 + 𝒪(|ψ|^8). We will limit ourselves to the case g_12>0. Then at μ=0 and ν=ν_+(∞), with ν_+(∞) = 2α^2/g_11 = 2ħ^2/(ma_11|r_0|) , the coefficients of both the |ψ|^2 and the |ψ|^4 terms vanish. This point is the tricritical point T in Fig. <ref>. For ν>ν_+(∞), as one changes μ there is a phase transition at μ=0 from the vacuum to the Bose-Einstein condensate (BEC). For these values of ν the self-bound liquid does not exist. On the other hand, for ν<ν_+(∞) there is a first-order phase transition between the vacuum and the self-bound fluid. Thus, ν_+(∞) is the N→∞ limit of the upper value of the detuning parameter for which the N-particle droplet exists. At negative detuning, the line of vacuum-liquid first-order phase transitions continues to exist at negative ν and meets the second-order phase transition line μ=ν/2 at a critical end point <cit.> (point S in Fig. <ref>) at ν_-(∞) = -2α^2/(√(g_11g_22)+g_12) = -2√2 ħ^2/(m|r_0| (√(a_11a_22)+(3/(2√2))a_12)) . For ν slightly smaller than ν_-(∞), as one increases μ, the system goes through two phase transitions: first from the vacuum to a molecular BEC state with ψ=0, ϕ≠0, and then to a hybrid condensate phase with ψ≠0, ϕ≠0. Since the hybrid condensate does not phase-coexist with the vacuum, there is no stable droplet consisting of any finite number of atoms. The first-order phase transition line between the hybrid condensate phase and the molecular BEC phase is a straight line that terminates at a ℤ_2 tricritical point (point Z in Fig. <ref>) <cit.>. Conclusion.—In this Letter, we have considered the problem of a droplet of a finite but large number of bosonic atoms at a narrow Feshbach resonance. We find that the binding energy per particle should increase like N^2 and then flatten out to a constant. In this work we have considered only the case when all atoms are identical bosons. It should be straightforward to extend the calculation to the case when the narrow resonance is formed from two nonidentical bosons. The calculations in this paper are performed in the mean-field approximation. For a small number of particles it will be important to compute quantum corrections to the energy. It would also be interesting if one could solve the four-body problem, e.g., by extending the zero-range calculations of Ref. <cit.> to an interaction with finite and negative effective range. For larger N, one may hope to find the binding energy through Monte Carlo methods <cit.>. Experimentally, one system where one may be able to create bound droplets is ultracold cesium at a Feshbach resonance. For Cs the resonance at B=19.849(2) G has a very small width of Δ=8.3(5) mG <cit.>. The effective range can be evaluated through <cit.> r_0 ≃ -2ħ^2/(m Δ δμ a_bg) . Using δμ=2πħ×0.76(3) MHz/G and a_bg=163(1)a_0 (a_0 being the Bohr radius), we find r_0/a_bg=-320(20). The molecule-molecule s-wave scattering length is a_22=220(30)a_0 <cit.>, which corresponds to g_22≈(2/3)g_11.
The atom-molecule scattering length is unknown, but if one assumes that g_12 is of the same order of magnitude as g_11 and g_22, and that g_12^2/(g_11g_22)<1, then one should expect the transition between the small-cluster (E_N∼ N^3) regime and the large-droplet (E_N∼ N) regime in the binding energy to occur at N∼ (-r_0/a_bg)^{1/2}∼20. The authors thank Cheng Chin, Ubirajara van Kolck, Dmitry Petrov, and Zhendong Zhang for discussions. This work is supported, in part, by the U.S. DOE Grant No. DE-FG02-13ER41958 and by the Simons Collaboration on Ultra-Quantum Matter, which is a Grant from the Simons Foundation (No. 651440, DTS). T.P. acknowledges partial support by the DFG (EXC2181/1-390900948, 273811115) and thanks the University of Chicago for hospitality. KW acknowledges partial support from the Kadanoff Center for Theoretical Physics and the University of Chicago's Research Computing Center. — Supplemental Material — Droplets of Bosons at a Narrow Resonance Ke Wang, Thimo Preis, and Dam Thanh Son § THE SYMMETRIZED WOODS-SAXON VARIATIONAL ANSATZ We introduce the symmetrized Woods-Saxon function F(x;ξ,R) = 1/(cosh(x/R) + ξ) . This function goes to zero exponentially as x→∞. It is an even function of x, so its derivative at x=0 vanishes. We use the following five-parameter trial wave functions: ψ(r) = (N^2/√(4π I_2)) F(Nr; ξ_1, R_1), ϕ(r) = (N^2 b/√(4π I_2)) F(Nr; ξ_2, R_2). In order for the total number of particles to be equal to N, we require I_2 = ∫_0^∞ dx x^2[F^2(x;ξ_1,R_1) + 2b^2 F^2(x;ξ_2,R_2)] . Therefore I_2 is a function of the variational parameters ξ_{1,2}, R_{1,2} and b. The total energy is then E = N^3 ( K/I_2 - 2I_3/I_2^{3/2}), where K = ∫_0^∞ dx [(1/2) F'^2(x;ξ_1,R_1) + (1/4) F'^2(x;ξ_2,R_2)], I_3 = b∫_0^∞ dx F^2(x;ξ_1,R_1) F(x;ξ_2,R_2), are both functions of the variational parameters. We now minimize the energy (<ref>) to find E ≈ -0.0347 N^3 , achieved at ξ_1≈ 0.167, ξ_2 ≈ 1.11, R_1 ≈ 1.53, R_2 ≈ 1.09, b≈ 1.39. Now we turn on detuning. Defining I_2m = b^2∫_0^∞ dx x^2 F^2(x;ξ_2,R_2) , the variational energy is now E/N^3 = K/I_2 - 2I_3/I_2^{3/2} + (ν/N^2)(I_2m/I_2) . For positive detuning, the droplet becomes unstable when the energy of the droplet crosses zero. This happens at ν̃≈ 0.191, where ξ_1≈-0.121, ξ_2≈1.17, R_1≈2.07, R_2≈1.20, b≈1.52. For negative detuning, the droplet becomes unstable when its energy is larger than the energy of N/2 dimers. This happens when ν̃≈-0.140, where ξ_1≈0.394, ξ_2≈0.851, R_1≈0.143, R_2≈1.22, b≈1.19. Introducing new parameters into the ansatz (for example, by using the function [cosh(x/R)+ξ]^{-n} with n being an additional variational parameter, which can have different values for atoms and molecules) does not change the ground state energy at zero detuning and the values of ν̃_± appreciably. § IMAGINARY-TIME GROSS-PITAEVSKII EQUATION To numerically find the ground state wave function of the droplet, we consider the imaginary-time Gross-Pitaevskii evolution. For simplicity, we consider the case g_11=g_22 with g_12=0: -∂_τψ(r) = ( -(∂_r^2+2r^{-1}∂_r)/(2m_1) - μ_1 ) ψ(r) + g_11|ψ(r)|^2 ψ(r) - 2αψ^*(r) ϕ(r), -∂_τϕ(r) = ( -(∂_r^2+2r^{-1}∂_r)/(2m_2) - μ_2 ) ϕ(r) + g_22 |ϕ(r)|^2 ϕ(r) - αψ(r) ψ(r). The total particle number is required to be conserved during the evolution. Here, the chemical potentials are evaluated self-consistently at each time step by μ_i=∂ E/∂ N_i, where N_{1/2} is the atomic/molecular particle number. We place the GPE on a discrete lattice with lattice positions r(i)=d× i, where i is an integer, 0≤ i ≤ L, and d is the lattice spacing.
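A minimal sketch of one such update step (our illustration, not the authors' code: explicit Euler in imaginary time on the radial grid, with the particle number restored by rescaling rather than by self-consistent chemical potentials; units m = -r_0 = 1, so α = √(4π); grid and step sizes are illustrative, not converged settings):

import numpy as np

L, d, dt, N = 600, 5e-4, 1e-9, 100.0
r = d * np.arange(1, L + 1)  # radial grid, avoiding r = 0
alpha, g11, g22 = np.sqrt(4 * np.pi), 0.0, 0.0

def lap(u):
    # Radial Laplacian u'' + (2/r) u', with u -> 0 at both ends.
    up = np.pad(u, 1)
    return (up[2:] - 2 * u + up[:-2]) / d**2 + (up[2:] - up[:-2]) / (d * r)

def particle_number(psi, phi):
    return 4 * np.pi * d * np.sum(r**2 * (psi**2 + 2 * phi**2))

psi = np.exp(-((N * r) ** 2))        # initial clouds of size ~ 1/N
phi = 0.5 * np.exp(-((N * r) ** 2))
for step in range(20000):
    dpsi = -0.5 * lap(psi) + g11 * psi**3 - 2 * alpha * psi * phi
    dphi = -0.25 * lap(phi) + g22 * phi**3 - alpha * psi**2
    psi, phi = psi - dt * dpsi, phi - dt * dphi
    s = np.sqrt(N / particle_number(psi, phi))  # restore total N
    psi, phi = s * psi, s * phi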
First, we use the imaginary-time Gross-Pitaevskii equation (itGPE) to confirm Eq. (<ref>) in the main text: the droplet energy converges to ≈ -0.0347 N^3/r_0^2 when g_11=g_22=0. We start from the symmetrized Woods-Saxon ansatz: the wave functions are given by ϕ_1(r) = √(N/(4 I_2π)) f(r) and ϕ_2(r) = √(N/(4 I_2π)) g(r). Here I_2 is defined in Eq. (<ref>) of the main text and the shape functions read f(x) = 1/(e^{(x+ξ_1)/R_1} + e^{-(x-ξ_1)/R_1} + 1) , g(x) = b/(e^{(x+ξ_2)/R_2} + e^{-(x-ξ_2)/R_2} + 1) . The minimization of the energy functional at N=100 and g_11=g_22=0 leads to the energy E ≈ -0.034666 N^3/r_0^2 with the following variational parameters: ξ_1≈ 0.0167r_0, R_1≈0.01528r_0, ξ_2≈ -0.008673 r_0, R_2≈ 0.01088r_0, b≈0.2094. Next, we simulate the itGPE with the ansatz above as the initial condition. Here we use a lattice with d=0.0005 r_0 and L=600. Evaluation of the energy functional of the initial state on this lattice gives ≈ -0.034670 N^3/r_0^2. Compared to the coefficient -0.034666 from continuous space, one may conclude that the error from the lattice discretization on the energy of this state is of the order of ≈ 5× 10^{-6}. The simulation result is shown in Fig. <ref>: we find that the droplet energy converges to ≈ -0.03472 N^3/r_0^2. This confirms Eq. (<ref>) in the main text. Now we aim to check the first-order transition values ν_±(N) in Eq. (<ref>) of the main text. These two values are obtained by using the conditions E_droplet(ν_+) = 0 and E_droplet(ν_-) = ν N/2. Here, E_droplet is the ground state energy of the droplet. Using the Woods-Saxon ansatz, the conditions above lead to ν_+ ≈ 0.191 N^2 and ν_- ≈ -0.140 N^2. We pick two ansatz wave functions (Ψ_±), which are obtained from the minimization at ν/N^2 = 0.1904 and ν/N^2 = -0.14. The variational parameters for ν/N^2 = 0.1904 and N=100 are: ξ_1≈ 0.11335r_0, R_1≈ 0.019905r_0, ξ_2≈ -0.01135r_0, R_2≈ 0.01183r_0, b≈ 0.001824. The variational parameters for ν/N^2 = -0.14 and N=100 are: ξ_1≈ 0.003406r_0, R_1≈ 0.014305r_0, ξ_2≈ -0.00649r_0, R_2≈ 0.012214r_0, b≈ 0.55081. Then we use Ψ_+, Ψ_- as the initial conditions in the itGPE when ν/N^2 is varied around 0.191 and -0.14. The energy of the converging state represents the droplet energy. Simulation results of the itGPE are discussed below (r_0=1). * For ν_+, we find the energy of the droplet (converged) state at ν/N^2 = 0.1908, 0.1912, 0.1914. The energy is E≈ -6 × 10^{-5} N^3 at ν/N^2 = 0.1908, while E≈ 2.3 × 10^{-5} N^3 at ν/N^2 = 0.1914. Therefore, ν_+/N^2 lies between 0.1908 and 0.1914. Furthermore, we show the imaginary-time evolution in Fig. <ref> at ν/N^2 = 0.1912: the energy of the evolving state converges to a value around zero. Thus, we find ν_+/N^2 ≈ 0.191. * For ν_-, we find the energy of the droplet state at ν/N^2 = -0.1399, -0.140, -0.1403. The energy is E-ν N/2≈ -2.4 × 10^{-5} N^3 at ν/N^2 = -0.1399, while E-ν N/2≈ 6 × 10^{-5} N^3 at ν/N^2 = -0.1403. Therefore, ν_-/N^2 lies between -0.1399 and -0.1403. The imaginary-time evolution at ν/N^2 = -0.14 is shown in Fig. <ref>: the quantity E-ν N/2 converges to a value around zero. Thus, we find ν_-/N^2 ≈ -0.140. According to these itGPE results, we confirm the first-order transition values in Eq. (<ref>) of the main text. § EXPERIMENTAL ASPECTS OF CS133 We adapt the data from the experimental paper <cit.>: δμ= ħ× 2π× 0.76 MHz/G, Δ=8.3 mG. Here δμ is the relative magnetic moment between the closed and open channels, and Δ is the resonance width.
The background atomic scattering length is given by a_bg = 163 a_Br ≃ 8.6 × 10^{-9} m, where a_Br is the Bohr radius. Near the Feshbach resonance, the effective range is approximated by <cit.> r_0 ≃ -2ħ^2/(m·δμ·Δ·a_bg) ≃ -2.8× 10^{-6} m. Therefore we have r_0 ≃ -3.2× 10^2 a_bg. § THE HYBRID CONDENSATE-MOLECULAR BEC PHASE TRANSITION From the energy given in Eq. (<ref>), one finds that the phase diagram has a critical end point located at μ_cep = -α^2/(√(g_11g_22)+g_12) , ν_cep = -2α^2/(√(g_11g_22)+g_12) , and a ℤ_2 tricritical point located at μ_ℤ_2 = -2α^2/(√(g_11g_22)+g_12) + g_12α^2/(√(g_11g_22)+g_12)^2 , ν_ℤ_2 = -4α^2/(√(g_11g_22)+g_12) + (2g_12-g_22)α^2/(√(g_11g_22)+g_12)^2 . The line separating the molecular-BEC phase and the hybrid condensate phase is a straight line connecting (μ_cep, ν_cep) to (μ_ℤ_2, ν_ℤ_2): μ = -α^2/(√(g_11g_22)+g_12) - x√(g_11g_22)α^2/(√(g_11g_22)+g_12)^2 , ν = -2α^2/(√(g_11g_22)+g_12) - x(2√(g_11g_22)+g_22)α^2/(√(g_11g_22)+g_12)^2 , where x runs from 0 (the critical end point) to 1 (the ℤ_2 tricritical point). On the molecular-BEC side of the phase transition, the fields are ψ = 0 , ϕ = α√x/(√(g_11g_22)+g_12) , and on the hybrid condensate side they are ψ = (g_22/g_11)^{1/4} α√(1-x)/(√(g_11g_22)+g_12) , ϕ = α/(√(g_11g_22)+g_12) . One can check that (<ref>) and (<ref>) are two local minima of the energy and that these minima are degenerate in energy.
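As a quick numeric sanity check of the Cs estimates in the section above (our script; SI constants rounded):

import math

hbar = 1.054571817e-34          # J s
m = 133 * 1.66053907e-27        # Cs mass, kg
a0 = 5.29177211e-11             # Bohr radius, m

a_bg = 163 * a0
dmu_times_Delta = hbar * 2 * math.pi * 0.76e6 * 8.3e-3  # (δμ)(Δ) in J

r0 = -2 * hbar**2 / (m * dmu_times_Delta * a_bg)
print(f"r0 = {r0:.2e} m")                            # ~ -2.8e-6 m
print(f"r0/a_bg = {r0 / a_bg:.0f}")                  # ~ -3.2e2
print(f"crossover N ~ {math.sqrt(-r0 / a_bg):.0f}")  # ~ 18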
http://arxiv.org/abs/2407.03248v1
20240703162411
Section conjectures over $\mathbb{C}$ and Kodaira fibrations
[ "Simon Shuofeng Xu" ]
math.AG
[ "math.AG", "math.GT", "14D05, 55R37, 14H10, 14J29" ]
§ ABSTRACT In this paper we propose and study topological and Hodge-theoretic analogues of Grothendieck's section conjecture over the complex numbers. We study these questions in the context of families of curves, in particular Kodaira fibrations, and in the context of the family of Jacobians associated to a Kodaira fibration. We show that in the case of families of curves, both the topological and Hodge-theoretic analogues of the injectivity part of the section conjecture hold, and that in the case of families of Jacobians, the topological analogue of the surjectivity part of the section conjecture does not hold in general. For families of curves, we also reduce the topological analogue of the surjectivity part of the section conjecture to the case where the families have no algebraic sections. Section conjectures over ℂ and Kodaira fibrations Simon Shuofeng Xu July 2024 ======================================== § INTRODUCTION In this paper, we would like to propose and study some analogues of Grothendieck's section conjecture over the complex numbers. To explain these analogues, we first recall the anabelian philosophy which motivates the present work. In <cit.>, Grothendieck conjectured that there exists a special class of schemes, called the anabelian schemes, defined over some field k that is finitely generated over ℚ, whose behaviour is controlled by an associated short exact sequence of étale fundamental groups: 1→π_1^ét(X_k̄)→π_1^ét(X)→Gal(k̄/k)→ 1 Loosely speaking, this means that maps between anabelian schemes X and Y are the same as conjugacy classes of maps of extensions:
1 → π_1^ét(X_k̄) → π_1^ét(X) → Gal(k̄/k) → 1
          ↓f̄                ↓f               ∥
1 → π_1^ét(Y_k̄) → π_1^ét(Y) → Gal(k̄/k) → 1
where two such maps (f,f̄) and (g,ḡ) are conjugate if their images are conjugate by some element of π_1^ét(Y_k̄).
Furthermore, he conjectured that the class of anabelian schemes should satisfy the following properties (see <cit.>): * it should contain all hyperbolic curves * it should contain the moduli stacks M_g,n of smooth projective curves of genus g with n marked points * it should be closed under taking fibrations, i.e., if f:X→ Y is a smooth proper map such that both Y and the fiber F are also in this class, then so is X. Now if one also believes that the point Spec k is anabelian, then one arrives at Grothendieck's section conjecture for smooth proper curves of genus g≥ 2. Let X/k be a smooth projective curve of genus g≥ 2 over some field k that is finitely generated over ℚ. Then the section map sec:{k-rational points of X}→{splittings of (<ref>)}/conjugation is a bijection. One can see that the notion of anabelian schemes is supposed to be an algebraic analogue of k(π,1)-spaces in topology. Indeed, hyperbolic curves and M_g,n are all k(π,1)-spaces, and if we have a Serre fibration f:X→ Y, where both Y and the fiber F are k(π,1)-spaces, then so is X. However, it's worth pointing out that the analogy is not perfect and there are k(π,1)-spaces that are not anabelian (for example, elliptic curves over a field of characteristic 0 are not anabelian but they are still k(π,1)-spaces). The question we would like to explore in this paper is: if we now work over the complex numbers, is there a reasonable class of schemes, and some functorial invariant F like the étale fundamental group, so that maps between such schemes are the same as maps between their functorial invariants? A naïve approach to this question is to simply take the class of anabelian schemes proposed by Grothendieck and see if one can find any interesting functorial invariant that can replace the étale fundamental group when we work over the complex numbers. In this paper, we study two invariants that naturally occur in complex algebraic geometry: the first one is the topological fundamental group and the other one is the category of graded-polarizable admissible ℚ-variations of mixed Hodge structures. Furthermore, we study them in the context of families of curves over another curve, in particular Kodaira fibrations, and in the context of the family of Jacobians associated to a Kodaira fibration. We first recall what a Kodaira fibration is. A Kodaira fibration f:S→ C is a non-isotrivial fibration from a smooth projective surface S onto a smooth projective curve C such that all of the fibers are smooth projective of genus g. Such a fibration was first constructed by Kodaira in <cit.>; see also the work of Parshin <cit.> and Atiyah <cit.>. For a recent survey on Kodaira fibrations, see <cit.>. By an observation of Kas <cit.>, we know that given a Kodaira fibration f:S→ C, the genus of the base C is at least 2 and the genus of the fiber S_b is at least 3. In particular, they should be examples of anabelian schemes, making them ideal testing grounds for anabelian conjectures. We can now formulate the questions we study in this paper: Topological version of the question. Given a Kodaira fibration f:S→ C, we have a short exact sequence of topological fundamental groups 1→π_1(S_b)→π_1(S)→π_1(C)→ 1 Now any algebraic section induces a splitting of this short exact sequence, so we get a section map Φ:{algebraic sections to f:S→ C}→{sections of (<ref>)}/conjugation, where the conjugation action is the natural action of π_1(S_b) on the set of sections of (<ref>) defined by (g· f)(x):=gf(x)g^{-1} for all g∈π_1(S_b), x∈π_1(C), and f a splitting of (<ref>).
The following question is then a topological analogue of Grothendieck's section conjecture (Conj. <ref>). Is the section map Φ a bijection? Now it's worth pausing and asking why one should think that Question <ref> may have a positive answer. Indeed, if we just consider the naïve topological analogue of Grothendieck's section conjecture for smooth projective curves X over ℂ, we will not get a one-to-one correspondence: there are infinitely many maps from a point to X, but only one group-theoretic map from {*}=π_1(pt)→π_1(X). Therefore, the section map in this case is surjective but never injective. However, notice that in this case π_1(pt) is trivial, and therefore X→pt has trivial monodromy. On the other hand, in the case of a curve over a number field k, since π_1^ét(Spec k)=Gal(k̄/k), we do have non-trivial monodromy. Therefore, in some sense, Kodaira fibrations are closer analogues to curves over number fields than Riemann surfaces are. Indeed, the analogue of Faltings' theorem holds for Kodaira fibrations (see Corollary 4.2 in <cit.>), i.e., any Kodaira fibration f:S→ C has only finitely many algebraic sections. Furthermore, the proof of this result uses a strategy similar to Faltings' original proof. In particular, it proves a geometric Shafarevich conjecture <cit.>, which states that for a given base C, there are only finitely many Kodaira fibrations f:S→ C with fiber genus g(S_b)=g. Perhaps the most relevant evidence that suggests Question <ref> may have a positive answer is that the topological analogue of Grothendieck's section conjecture holds for the universal curve C_g over the moduli stack M_g of genus g curves when g≥ 2: for C_g→M_g, the associated short exact sequence of topological fundamental groups is the Birman short exact sequence 1→π_1(Σ_g)→MCG_g,1→MCG_g→ 1, where Σ_g is a smooth compact orientable Riemann surface of genus g, and MCG_g,n is the mapping class group of a genus g surface with n marked points. This short exact sequence is known to be non-split whenever g≥ 2 <cit.> (as we will use later, it does not even virtually split, see <cit.>). Therefore, it seems to us that Question <ref> may very well have a positive answer, at least when the monodromy representation ρ:π_1(C)→Sp_2g(ℤ) associated to a Kodaira fibration f:S→ C has large image, and one of the main results of this paper is that the non-injectivity phenomenon observed in the case of Riemann surfaces does not occur for Kodaira fibrations with large monodromy: If f:S→ C is a Kodaira fibration whose monodromy representation ρ has no invariants, then Φ is injective. The main strategy to prove this theorem is to study the associated family of Jacobians π:Pic^0_{S/C}→ C attached to a Kodaira fibration f:S→ C. One could similarly ask Question <ref> in the case of a family of Jacobians, and this may be viewed as an abelianized version of the topological section question for Kodaira fibrations. We prove the following results for the abelianized section map. Let π:Pic^0_{S/C}→ C be the family of Jacobians associated to a Kodaira fibration. Then * if the monodromy representation associated to f:S→ C has no invariants, then the abelianized section map is injective (see Cor. <ref>); * if the Kodaira fibration has an algebraic section, then the abelianized section map is never surjective (see Cor. <ref>). In fact, the first part of the theorem is true for any family of principally polarized abelian varieties over a curve, and we work in that generality at the beginning of section <ref>.
The second part of the theorem does use the fact that we are working with a family of Jacobians associated to a Kodaira fibration f:S→ C, and the main input is a computation of the degree of R^1f_*O_S via Grothendieck-Riemann-Roch. For the surjectivity part of Question <ref>, we are not able to show that the section map Φ is surjective. However, we can show that the question of the surjectivity of Φ can be deduced from the following weak topological section conjecture: Let f:S→ C be a Kodaira fibration. Then it admits an algebraic section if and only if the short exact sequence (<ref>) of fundamental groups splits. More precisely, we prove the following: The surjectivity of the section map Φ is equivalent to the weak topological section conjecture for all connected finite étale covers S' of S such that S'→ C has connected fibers (i.e., S'→ C is also a Kodaira fibration). This result also has an analogue in the arithmetic setting <cit.>, which states that the surjectivity part of Grothendieck's section conjecture may be deduced from the weak section conjecture (i.e., the existence of a rational point is equivalent to the existence of a splitting of the sequence (<ref>)) for geometrically connected finite étale covers of the curve. In fact, the proof is almost the same as the proof of this analogous result in <cit.>. Hodge theoretic version of the question. Another possible candidate that one may replace the étale fundamental group with is the category MHS(−) of graded-polarizable admissible ℚ-variations of mixed Hodge structures, as Hodge theory has always been an extremely useful tool in the study of complex algebraic geometry. The downside of this category is that it is not a Tannakian category, and therefore we cannot use it to define a Tannakian fundamental group. This means that we will lose the group-theoretic aspect of anabelian geometry if we work with MHS(−). Nevertheless, it still makes sense to ask if functors between these categories are in one-to-one correspondence with algebraic sections. In other words, given a map f:X→ Y, we get a section map sec:{algebraic sections to f}→{section functors from MHS(X)→MHS(Y)}, where section functors are defined to be functors that become isomorphic to the identity functor after composing with the pullback f^*. Then the Hodge theoretic section question asks if this Hodge theoretic section map is a bijection. See section (<ref>) for more detailed background and a more precise formulation of the question. Furthermore, we also prove the following result: For any family of curves f:X→B, the injectivity part of the Hodge theoretic section question holds. Note that this result does not make any assumption on the monodromy of the family and hence is true even when the base B is a point. We by no means believe that the formulation of the Hodge theoretic section question proposed in section <ref> should be the final and correct formulation, and we conclude this paper with some discussions of open questions (both in the topological setting and in the Hodge theoretic setting) and possible modifications to the Hodge theoretic section question. Acknowledgement: I would like to thank my advisor Daniel Litt. This paper would not have been possible without his encouragement, generosity and wisdom. I would also like to thank Sasha Shmakov for pointing me to some very helpful expositions of Beilinson-Deligne-Goncharov's construction of mixed Hodge structures on fundamental groups.
Finally, I would like to thank Laure Flapan, whose delightful talk on Kodaira fibrations got me interested in this subject in the first place. § SOME CONSTRUCTIONS OF KODAIRA FIBRATIONS In this paper, we will prove theorems about Kodaira fibrations satisfying some extra hypotheses. In this section we would like to demonstrate that those theorems are non-empty, i.e., we give some standard constructions of Kodaira fibrations, which produce Kodaira fibrations satisfying those extra hypotheses. We also take this opportunity to set up some notations. Let f:S→ C be a Kodaira fibration and let M_g be the moduli stack of smooth projective curves of genus g. By the universal property of M_g, such a Kodaira fibration f:S→ C corresponds to some non-constant map φ:C→M_g. Therefore, to construct a Kodaira fibration, it's enough to construct complete curves inside M_g. To avoid issues with stacks, we will work instead with the fine moduli space M_g[n] of genus g curves with fixed level n≥ 3 structure. The following construction, which we call the moduli construction, is fairly well-known (see for example <cit.>). Moduli construction: Suppose g≥ 4 and consider the Satake compactification M_g[n]^*, i.e., the closure of M_g[n] inside the Satake compactification A_g[n]^* of A_g[n] via the Torelli map J:M_g[n]→A_g[n]. Since M_g[n]^* is projective, we may embed it into some large projective space and cut it with hyperplane sections to produce a curve. Note that when g≥ 4, the boundary component M_g[n]^*-J(M_g[n]) is of codimension at least 2 and the hyperelliptic locus H_g[n], where the Torelli map fails to be an immersion, is of codimension at least 2. Hence, a curve obtained by cutting M_g[n]^* with hyperplane sections can avoid the boundary component as well as the hyperelliptic locus, and corresponds to a Kodaira fibration via the universal property of M_g[n]. Let f:S→ C be a Kodaira fibration constructed via the moduli construction explained above. Then the image of the monodromy representation ρ:π_1(C)→Sp_2g(ℤ) is of finite index inside Sp_2g(ℤ), and hence the monodromy action on H^1(S_b,ℚ) has no invariants (i.e., H^0(C,R^1π_*ℚ)=0). By Lefschetz's hyperplane theorem for quasi-projective varieties <cit.>, the monodromy representation ρ factors through π_1(M_g[n]) via a surjection: π_1(C)↠π_1(M_g[n])→Sp_2g(ℤ), and by definition, the last map, which corresponds to the monodromy representation for the universal family over M_g[n], surjects onto the kernel of Sp_2g(ℤ)→Sp_2g(ℤ/n), which certainly acts on H^1(S_b,ℚ) with no invariants. Since we are interested in studying algebraic sections of Kodaira fibrations, it would be quite pointless if there were no Kodaira fibrations with algebraic sections. The following theorem of Bregman proves that that is not the case, and that monodromy cannot obstruct the existence of an algebraic section. <cit.> For every Kodaira fibration f:S→ C, there exists a Kodaira fibration f̃:S̃→C̃ with an algebraic section such that * the fibers of f:S→ C and f̃:S̃→C̃ are of the same genus g; * the monodromy homomorphisms π_1(C)→Sp_2g(ℤ) and π_1(C̃)→Sp_2g(ℤ) have the same image. The idea is similar to that of the moduli construction. First embed S into some projective space ℙ^N. Then a general hyperplane section C̃ will be smooth and projective and admits a non-constant and hence finite map onto C. Then S̃ can be constructed as the fiber product of C̃ and S over C, and the algebraic section comes from the identity map C̃→C̃ and the natural inclusion map C̃→ S.
The statement on the monodromy homomorphisms follows from a similar application of the Lefschetz hyperplane theorem as above. We can also use the moduli construction to produce many families of curves over higher dimensional bases that trivially satisfy the topological analogue of Grothendieck's section conjecture (i.e., that do not have any topological section). Let f:X→B be a family of curves constructed by cutting M_g[n] with hyperplane sections, with dim B≥ 3. Again applying the Lefschetz hyperplane theorem, we obtain the following commutative diagram:
1 → π_1(Σ_g) → π_1(X) → π_1(B) → 1
         ∥            ↓           ↓≅
1 → π_1(Σ_g) → π_1(M_g,1[n]) → π_1(M_g[n]) → 1
If the top row splits and we have a group-theoretic section π_1(B)→π_1(X), it descends to a group-theoretic section of the bottom row. However, π_1(M_g[n]) is a finite index subgroup of MCG_g by definition, and hence the bottom row does not split as long as g≥ 4 <cit.>. It follows that the top row of this diagram also does not split and hence there is no topological section. The same argument doesn't quite work for Kodaira fibrations, since the Lefschetz hyperplane theorem only gives us a surjection π_1(B)→π_1(M_g[n]), and sections of the top row may not always descend to sections of the bottom row. § ALGEBRAIC SECTIONS TO FAMILY OF JACOBIANS In this section we study algebraic sections to the family of Jacobians associated to a Kodaira fibration. Since many results, especially the injectivity result, hold in general for any family of principally polarized abelian varieties, we first work in that generality and specialize our discussion to the case of family of Jacobians when we discuss the question of surjectivity. §.§ Sections to family of abelian varieties By a family of principally polarized abelian varieties A over a curve C, we mean a smooth proper map π:A→ C whose fibers are principally polarized abelian varieties. Equivalently, this is the same as a map from C to the moduli space of g-dimensional principally polarized abelian varieties A_g (the image does not need to be contained in the Torelli locus of A_g). Associated to such a map π:A→C is a short exact sequence of fundamental groups: 1→π_1(A_b)→π_1(A)→π_1(C)→ 1, where A_b is the fiber over some point b∈ C. As before, any algebraic section of π induces a splitting of (<ref>). Furthermore, it's a classical fact that splittings of (<ref>) are parameterized up to isomorphism by the cohomology group H^1(π_1(C), H_1(A_b,ℤ)) <cit.>. Therefore, we get a map Φ^ab:H^0(C,A)→ H^1(π_1(C),H_1(A_b,ℤ)), which maps an algebraic section s to the cohomology class corresponding to the splitting induced by s. Here, A is viewed as a sheaf on C, and H^0(C,A) is the set of algebraic sections of A. We want to relate this map Φ^ab to the uniformization sequence associated to A→ C. Consider the exponential short exact sequence 0→ℤ→O_A→O^*_A→ 0 on A. Taking derived pushforward, we get …→π_*O_A→π_*(O_A^*)→ R^1π_*ℤ→ R^1π_*O_A→ R^1π_*O_A^*→ R^2π_*ℤ→… Now π_*O_A→π_*(O_A^*) is surjective, so we get a short exact sequence (the uniformization sequence) 0→ R^1π_*ℤ→ R^1π_*O_A→ker(R^1π_*(O_A^*)→ R^2π_*ℤ)→ 0. Since the fibers are principally polarized, ker(R^1π_*(O_A^*)→ R^2π_*ℤ)=Pic^0_{A/C}=A, so we see that the short exact sequence becomes 0→ R^1π_*ℤ→ R^1π_*O_A→A→ 0.
Taking the long exact sequence in sheaf cohomology gives us …→ H^0(C,A)→ H^1(C, R^1π_*ℤ)→ H^1(C,R^1π_*O_A)→…. Since C is smooth projective of genus at least 1, it is a k(π,1) space, and so are abelian varieties. It follows that H^1(C, R^1π_*ℤ) is canonically isomorphic to H^1(π_1(C), H_1(A_b,ℤ)). The following key lemma gives an alternative description of Φ^ab: The map Φ^ab agrees with the boundary map Φ̃:H^0(C,A)→ H^1(C,R^1π_*ℤ) under this canonical isomorphism between H^1(C, R^1π_*ℤ) and H^1(π_1(C), H_1(A_b,ℤ)). Consider the following commutative diagram of short exact sequences of sheaves:
0 → R^1π_*ℤ → R^1π_*O_A → A → 0
         ∥                ↓          ↓
0 → R^1π_*ℤ → (R^1π_*O_A)^cont → (A)^cont → 0
where (R^1π_*O_A)^cont and (A)^cont are the sheaves of continuous sections of R^1π_*O_A and A, respectively. Taking the long exact sequence in cohomology gives us the following commutative diagram:
… → H^0(C,R^1π_*O) → H^0(C,A) →^Φ̃ H^1(C,R^1π_*ℤ) → H^1(C,R^1π_*O)
               ↓                      ↓                       ∥
… → H^0(C,(R^1π_*O)^cont) → H^0(C,(A)^cont) →^ψ̃ H^1(C,R^1π_*ℤ) → H^1(C,(R^1π_*O)^cont)=0
where H^1(C,(R^1π_*O)^cont)=0 because (R^1π_*O)^cont is a fine sheaf. By the commutativity of the diagram, we know that the map Φ̃ agrees with the composition H^0(C,A)→ H^0(C,(A)^cont) →^ψ̃ H^1(C,R^1π_*ℤ). Furthermore, the map Φ^ab certainly also factors through H^0(C,A)→ H^0(C,(A)^cont), and therefore we've reduced the problem to checking that the following diagram commutes:
H^0(C,(A)^cont) →^ψ H^1(π_1(C),H_1(A_b,ℤ))
       ↘^ψ̃                    ↗≅
        H^1(C,R^1π_*ℤ)
This can be checked via an explicit Cech cocycle computation using the universal cover U→ C. §.§ Injectivity of Φ^ab We now show that, under some assumption on the monodromy of π:A→ C, Φ^ab is always injective. If the monodromy representation has no invariant factors, then Φ^ab is injective. By Lemma <ref>, it's enough to show that, under this assumption, H^0(C, R^1π_*O_A)=0. Consider the Higgs bundle associated to the variation of Hodge structure R^1π_*ℚ: E:=π_*ω_{A/C}⊕ R^1π_*O, with Higgs field θ:E→E⊗ω_C defined by the map π_*ω_{A/C} →^∇ R^1π_*O⊗ω_C and by zero on R^1π_*O. Here ∇ denotes the map induced by the flat (Gauss-Manin) connection associated to the variation. Now if H^0(C, R^1π_*O_A)≠ 0, then O_C maps into R^1π_*O_A, and hence (O,0) is a sub-Higgs bundle of (R^1π_*O_A,0) and hence a sub-Higgs bundle of (E,θ). On the other hand, by a theorem of Simpson <cit.>, Higgs bundles associated to a variation of Hodge structure are polystable, i.e., direct sums of stable Higgs bundles of the same slope. Since O is a line bundle, we see that (O,0) must be one of the irreducible factors of (E,θ). In particular, by the non-abelian Hodge correspondence, the trivial representation should appear as a sub-representation of the monodromy representation associated to R^1π_*ℚ. This contradicts the fact that the monodromy representation has no invariant factors. We can immediately deduce the following corollary for the family of Jacobians associated to a Kodaira fibration: Let f:S→ C be a Kodaira fibration whose monodromy action on H^1(S_b,ℚ) has no invariants, and let π:Pic^0_{S/C}→ C be the corresponding family of Jacobians. Then Φ^ab:H^0(C,Pic^0_{S/C})→ H^1(π_1(C),H_1(S_b,ℤ)) is injective.
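For completeness, here is the standard cocycle dictionary behind the parameterization of splittings used throughout this section (a reminder of the classical fact cited above, not part of the paper's argument):

% For an extension 1 -> A -> G -> Q -> 1 with abelian kernel A, fix one
% splitting s. Any other splitting s' differs from s by c(q) := s'(q)s(q)^{-1},
% and s' being a homomorphism is exactly the twisted cocycle condition
%   c(q_1 q_2) = c(q_1) + q_1 \cdot c(q_2),
% while conjugating s' by a \in A changes c by the coboundary q \mapsto a - q \cdot a.
\[
  \{\text{splittings of } 1 \to A \to G \to Q \to 1\}/\text{conjugacy}
  \;\xrightarrow{\ \sim\ }\; H^1(Q, A),
  \qquad s' \longmapsto [c],
\]
% applied above with Q = \pi_1(C) and A = H_1(A_b, \mathbb{Z}).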
Furthermore, the abelianized version of the section question is also related to the original topological section question for Kodaira fibrations: Let f:S→ C be a Kodaira fibration whose monodromy action on H^1(S_b,ℚ) has no invariants. Then the corresponding map Φ:{algebraic sections to f:S→ C}→{sections of (<ref>)}/conjugation is injective. If f:S→ C has no sections, then the statement is trivially true, so let's assume that we have a fixed section s_0:C→ S. Then we may define a C-morphism h:S→Pic^0_{S/C} which maps x∈ S_b to the divisor class [s_0(f(x))-x]. Note that h is injective, as it's just the Abel-Jacobi map on each fiber. Now recall that we have the following commutative diagram:
1 → π_1(S_b) → π_1(S) → π_1(C) → 1
          ↓              ↓            ∥
0 → H_1(S_b,ℤ)=π_1(S_b)^ab → π_1(Pic^0_{S/C}) → π_1(C) → 1
Let s and s' be two distinct algebraic sections of f:S→ C. By post-composing with h and using the injectivity of h, we get two distinct sections s̃ and s̃' of π:Pic^0_{S/C}→ C. If s and s' were conjugate via some element g∈π_1(S_b), then s̃ and s̃' would be conjugate via the image of g in H_1(S_b,ℤ), contradicting Cor. <ref>. Hence, Φ is injective as desired. By Lemma <ref>, we see that the Kodaira fibrations constructed using the moduli construction satisfy the assumption of Corollary <ref>. On the other hand, many classical constructions of Kodaira fibrations (including Kodaira's original construction <cit.>) involve taking branched covers of a product of curves, and so typically the monodromy action will have invariants. Bregman gave a partial converse to this observation in <cit.> when the dimension of the invariants is small. §.§ The question of surjectivity for family of Jacobians In this section, we focus on the case of the family of Jacobians π:Pic^0_{S/C}→ C associated to a Kodaira fibration f:S→ C. Furthermore, we will assume that the Kodaira fibrations in this section all have a distinguished algebraic section s_0, which induces a map h:S→Pic^0_{S/C} as in the proof of Cor. <ref>. By Lemma <ref>, we see that to check if Φ^ab is surjective, it's enough to understand the map H^1(C, R^1π_*ℤ)→ H^1(C, R^1π_*O). Since we have a map h:S→Pic^0_{S/C}, we get canonical isomorphisms R^1π_*ℤ≅ R^1f_*ℤ and R^1π_*O≅ R^1f_*O, so we may work in the relative curve setting and instead study the map Ψ: H^1(C, R^1f_*ℤ)→ H^1(C, R^1f_*O_S). To understand this map, we first compute the degree of the vector bundle R^1f_*O_S. The vector bundle R^1f_*O_S has negative degree. Let f:S→ C be a Kodaira fibration such that the fiber S_b has genus g and the base C has genus h. By Grothendieck-Riemann-Roch, we know that ch(f_!(𝒪_S))=f_*(ch(𝒪_S)·Td_{S/C}), where ch denotes the Chern character, and Td_{S/C} is the Todd class of the relative tangent bundle 𝒯_{S/C}. Since we are in the relative curve setting, the higher derived pushforwards R^if_*(𝒪_S) vanish for all i≥ 2, so we can rewrite the left hand side of the equation to get ch(f_!(𝒪_S)) = ch(f_*𝒪_S)-ch(R^1f_*(𝒪_S)) = ch(𝒪_C)-ch(R^1f_*(𝒪_S)) = 1-(rk(R^1f_*(𝒪_S))+c_1(R^1f_*(𝒪_S))+…). On the other hand, since ch(𝒪_S)=1, we see that the right hand side is simply f_*Td_{S/C} = f_*(Td(𝒯_{S/C})) = f_*(1+c_1(𝒯_{S/C})/2+(c_1^2(𝒯_{S/C})+c_2(𝒯_{S/C}))/12+…). As 𝒯_{S/C} is a line bundle, it has no higher Chern classes. It follows that deg R^1f_*(𝒪_S) = c_1(R^1f_*(𝒪_S)) = -f_*(c_1^2(𝒯_{S/C}))/12. Thus, it's enough to compute f_*(c_1^2(𝒯_{S/C})).
Since 𝒯_S/C=Ω_S/C^∨, we know that c_1^2(𝒯_S/C)=c_1^2(Ω_S/C), so we can work with the relative differentials. Consider the following short exact sequence

0→ f^*Ω_C→Ω_S→Ω_S/C→ 0.

By taking the wedge power, we get the isomorphism ∧^2Ω_S≅ f^*Ω_C⊗Ω_S/C. Since c_1 is a group homomorphism, we know that c_1(∧^2Ω_S)=c_1(f^*Ω_C)+c_1(Ω_S/C). Then

c_1(Ω_S/C)^2 = (c_1(∧^2Ω_S))^2 - 2c_1(∧^2Ω_S)· c_1(f^*Ω_C) + (c_1(f^*Ω_C))^2 = K_S^2 - 2K_S· c_1(f^*(K_C)) + (c_1(f^*(K_C)))^2,

where K_S is a canonical divisor on S, and K_C is a canonical divisor on C. Now since c_1 is functorial and f^* is a ring homomorphism, we know that (c_1(f^*(K_C)))^2 = f^*(c_1(K_C)^2). Because C is a curve, this has to vanish. It follows that (c_1(Ω_S/C))^2 = K_S^2 - 2K_S· f^*K_C, and hence f_*((c_1(Ω_S/C))^2) = f_*(K_S^2) - 2f_*(K_S· f^*K_C). Since we may compute degrees both before and after pushing forward, we know that f_*(K_S^2) = K_S^2. To understand the last term, we first use the projection formula to write it as 2f_*(K_S· f^*K_C) = 2f_*(K_S)· K_C. Now f_*(K_S) = (K_S· S_b), and because K_S· S_b = deg K_S|_S_b = deg K_S/C|_S_b for any generic fiber S_b, we see that K_S· S_b = 2g-2. Now since K_C has degree 2h-2, we see that f_*((c_1(Ω_S/C))^2) = K_S^2 - 8(g-1)(h-1). Hence, we have

deg R^1f_*𝒪_S = deg c_1(R^1f_*𝒪_S) = -(K^2_S - 8(g-1)(h-1))/12 = -(K^2_S - 2χ_S)/12,

where χ_S is the Euler characteristic of S (note that χ_S = (2-2g)(2-2h) = 4(g-1)(h-1) by multiplicativity of the Euler characteristic in fiber bundles, which justifies the last substitution). Finally, by the signature formula of Hirzebruch, Atiyah and Singer, we know that the signature σ(S) of S is precisely given by σ(S) = (K^2_S - 2χ_S)/3. Since Kodaira fibrations necessarily have positive signature <cit.>, deg R^1f_*O_S<0 as desired.

dim H^1(C, R^1f_*O_S)>3. By Riemann-Roch, we know that dim H^1(C, R^1f_*O_S) = -(deg R^1f_*O_S + rk(R^1f_*O_S)(1-h) - dim H^0(C,R^1f_*O_S)). Lemma <ref> says that -deg R^1f_*O_S>0. Furthermore, as we have mentioned in the introduction, the base curve of a Kodaira fibration has genus at least 2 and the fiber has genus at least 3, so we know that dim H^1(C, R^1f_*O_S) > rk(R^1f_*O_S)(h-1) ≥ 3·1 = 3, as desired. We will use this corollary to show that Ψ:H^1(C,R^1f_*ℤ)→ H^1(C,R^1f_*O_S) is non-zero.

The map Ψ:H^1(C,R^1f_*ℤ)→ H^1(C,R^1f_*O_S) is non-zero. First observe that this map factors through H^1(C,R^1f_*ℂ), i.e., Ψ agrees with the composition H^1(C,R^1f_*ℤ)→ H^1(C,R^1f_*ℂ)→ H^1(C,R^1f_*O). Consider the projection map coming from the Hodge decomposition, H^2(S,ℂ)↠ H^2(S,O_S); this is induced via the inclusion of the constant sheaf into O_S. Now by the Leray spectral sequence, we know that H^2(S,ℂ) is equipped with the Leray filtration L^∙:

0→ H^2(C,ℂ)→ L^1(H^2(S,ℂ))→ H^1(C,R^1f_*ℂ)→ 0
0→ L^1(H^2(S,ℂ))→ H^2(S,ℂ)→ H^0(C,R^2f_*ℂ)→ 0

In particular, H^1(C,R^1f_*ℂ) is a sub-quotient of H^2(S,ℂ). Now since f:S→ C has a section, it restricts to a splitting of the top row, and we may canonically view H^1(C,R^1f_*ℂ) as a subspace of H^2(S,ℂ). Since the map Ψ is also induced by the inclusion of the constant sheaf into O_S, we know that Ψ may be identified with the composition H^1(C,R^1f_*ℤ)→ H^1(C,R^1f_*ℂ)↪ H^2(S,ℂ)↠ H^2(S,O_S). In particular, because dim H^0(C,R^2f_*ℂ) = dim H^2(C,f_*ℂ) = 1, it follows that H^1(C,R^1f_*ℂ) generates a subspace of codimension 2 inside H^2(S,ℂ). Again by the Leray spectral sequence, we see that H^2(S,O_S)≅ H^1(C,R^1f_*O) since we are in the setting of relative curves, so by Corollary <ref>, dim H^2(S,O_S)>3. Thus, for the map H^2(S,ℂ)→ H^2(S,O_S) to be surjective, it must be non-zero on the subspace generated by H^1(C,R^1f_*ℂ), and hence Ψ is non-zero as desired.
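As a consistency check, the degree computation above can be condensed into a single chain of identities; this is only a restatement of the lemma together with the Hirzebruch–Atiyah–Singer signature formula, not an additional result:

\[
\deg R^1 f_* \mathcal{O}_S \;=\; -\frac{K_S^2 - 8(g-1)(h-1)}{12} \;=\; -\frac{K_S^2 - 2\chi_S}{12} \;=\; -\frac{3\,\sigma(S)}{12} \;=\; -\frac{\sigma(S)}{4} \;<\; 0.
\]

In other words, the degree of R^1f_*𝒪_S is exactly minus one quarter of the signature, so its negativity is equivalent to the positivity of σ(S) for Kodaira fibrations <cit.>.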
Finally, using Lemma <ref>, we arrive at the following conclusion. Let f:S→ C be a Kodaira fibration with an algebraic section, and π:Pic^0_S/C→ C the associated family of Jacobians. The map Φ^ab is never surjective for these families of Jacobians. In the case where f:S→ C has an algebraic section, one may identify Pic^0_S/C with Pic^1_S/C, and hence our result shows that the topological section question has a negative answer for Pic^1_S/C→ C in the case where the associated Kodaira fibration has a section. This in fact differs from the universal case: the universal family of the moduli space of degree 1 line bundles π':Pic^1_C_g/M_g→M_g does satisfy the topological section question when g≥ 3 in a trivial way, i.e., there is no topological section to π' (this was first proven by Morita when g≥ 9; see <cit.>. The strengthened result is proven in <cit.>).

§ WEAK TOPOLOGICAL SECTION CONJECTURE AND SURJECTIVITY

In this section, we want to relate the surjectivity part of the topological section question to the weak topological section conjecture. The strategy we will use is almost identical to the strategy used to prove <cit.>. We choose to highlight the main ingredients that make this strategy work, so we will first work in a fairly general setting. Let f:X→ B be a smooth projective map with connected fibers between smooth projective varieties. Assume that we have a short exact sequence of topological fundamental groups

1→π_1(X_b)→π_1(X)→π_1(B)→ 1

where X_b is the fiber over b∈ B. In this general setting, we have the following two claims:

[Weak topological section conjecture] The map X→ B admits an algebraic section if and only if the associated short exact sequence of topological fundamental groups splits.

[Surjectivity of top. section question] The section map Φ:{algebraic sections to f:X→ B}→{splittings of (<ref>)}/conjugation is surjective.

We prove the following proposition: Assume that

* π_1(X_b) is residually finite.
* The set X'(B) of algebraic sections of X'→ B is finite for every finite étale connected cover X'→ X such that the composed map X'→ B has connected fibers.

Then Statement <ref> being true for X→ B is equivalent to Statement <ref> being true for all finite étale connected covers X' of X such that the composed map X'→ B has connected fibers.

To distinguish an algebraic section of f:X→ B from a group theoretic section of (<ref>), we will denote the former s and the latter x. Let x:π_1(B)→π_1(X) be a group theoretic splitting of the short exact sequence of topological fundamental groups (<ref>). Then a neighbourhood of x is a finite étale connected cover X' of X such that X'→ B has connected fibers and the finite index subgroup π_1(X')⊂π_1(X) contains the image x(π_1(B)) of the section x. Note that given a neighbourhood of x, we get a lift of x to a group theoretic section x':π_1(B)→π_1(X') of the short exact sequence of fundamental groups associated to X'→ B. Furthermore, if we post-compose x' with the natural inclusion map π_1(X')→π_1(X), we recover the section x, so one may alternatively define a neighbourhood of x as a pair (X',x') of a finite étale connected cover X' with connected fibers over B and a group theoretic section x' which descends to x. Let x=x_s be a geometric section, i.e., one induced by some algebraic section s:B→ X. Then a neighbourhood of x is the same as a pair (X',x_s'), where X' is a finite étale connected cover of X with connected fibers over B, and x_s' is a group theoretic section induced by some algebraic section s':B→ X' that is a lift of s.
Recall that a finite étale connected cover is the same as a finite set with a transitive π_1(X) action. In this case, the finite set is given by the set π_1(X)/π_1(X') of cosets of π_1(X'). Using the section x_s:π_1(B)→π_1(X), we get an induced action of π_1(B) on this set. Since π_1(X') contains x_s(π_1(B)), it follows that this action has a fixed point. Therefore, the cover of B that corresponds to this action of π_1(B) contains a copy of B. Hence, we may lift the section s to an algebraic section s':B→ X'.

Given a group theoretic splitting x:π_1(B)→π_1(X), let X_x be the pro-étale cover of X defined by the projective system (X'→ X), where X' runs over all neighbourhoods of x.

Let x_1 and x_2 be two group theoretic sections. Suppose π_1(X_b) is residually finite. Then X_x_1=X_x_2 if and only if they are conjugate to each other. It is enough to show that π_1(X_x)=x(π_1(B)). This is the case since X_x_1=X_x_2 is equivalent to π_1(X_x_1) being conjugate to π_1(X_x_2). Hence, if π_1(X_x)=x(π_1(B)), then π_1(X_x_1) being conjugate to π_1(X_x_2) is equivalent to x_1 being conjugate to x_2. Now to see that π_1(X_x)=x(π_1(B)), consider all finite index subgroups H of π_1(X) containing x(π_1(B)). Observe that π_1(X_x)=⋂ H, so it is enough to show that x(π_1(B))=⋂ H. Since we have a section, π_1(X) can be written as a semi-direct product π_1(X)≅π_1(X_b)⋊π_1(B). Let N_i:=⋂_[π_1(X_b):H]=i H⊂π_1(X_b). Note that because π_1(X_b) is finitely generated, it admits finitely many maps into the symmetric group S_i, and hence there are only finitely many index i subgroups of π_1(X_b). In particular, this intersection is finite and N_i is again of finite index. Since every automorphism of π_1(X_b) preserves the index of a subgroup, we see that N_i is also characteristic. It follows that the N_i x(π_1(B)) are finite index subgroups of π_1(X). Since π_1(X_b) is residually finite, we know that ⋂ N_i is trivial. It follows that ⋂ N_i x(π_1(B))=x(π_1(B)), and hence the intersection of all finite index subgroups of π_1(X) containing x(π_1(B)) is x(π_1(B)) as desired. The proof of this lemma is essentially a minor modification of Malcev's proof (for example, see <cit.>) that semi-direct products of residually finite, finitely generated groups are residually finite.

Combining these two lemmas, we may give a characterization of group theoretic sections that come from algebraic geometry: A group theoretic section x is conjugate to x_s for some algebraic section s:B→ X if and only if s belongs to the image of the natural map X_x(B)→ X(B), where X_x(B) is the set of algebraic sections of X_x→ B and X(B) is the set of algebraic sections of X→ B. If x is induced by an algebraic section s:B→ X, then by Lemma <ref>, s lifts to a compatible system of algebraic sections and hence it lifts to an algebraic section of X_x→ B. Conversely, if s is in the image of X_x(B)→ X(B), the sections x and x_s have the same collection of neighbourhoods and so are conjugate to each other by Lemma <ref>.

Now we are ready to prove Proposition <ref>. First, let us deduce the weak topological section conjecture for connected finite étale covers with connected fibers over B from the surjectivity of the section map for f:X→ B. Suppose there exists a connected finite étale cover X'→ X with connected fibers over B and a group theoretic section x'. Then x' descends to a group theoretic section π_1(B)→π_1(X), and hence by the surjectivity, there exists an algebraic section B→ X.
By Lemma <ref>, we may lift this to an algebraic section from B to X', and hence the weak topological section conjecture holds for X'→ B. For the other direction, by Corollary <ref>, it is enough to show that X_x(B)=lim← X'(B) is non-empty, where X'(B) is the set of algebraic sections of X'→ B. Since every neighbourhood X' has a topological section by definition, it follows from the weak topological section conjecture that X'(B) is non-empty. It is finite by assumption, and therefore it is a non-empty set with a compact Hausdorff topology. Such a projective limit of non-empty compact Hausdorff spaces is always non-empty, as desired.

Since Kodaira fibrations satisfy the assumptions of Prop. <ref>, we may deduce the following corollary: Let f:S→ C be a Kodaira fibration. The surjectivity of the section map Φ is equivalent to the weak topological section conjecture for all connected finite étale covers S' of S such that S'→ C has connected fibers. The fact that fundamental groups of surfaces are residually finite is the main theorem of <cit.>, and the fact that the set of algebraic sections is finite is <cit.>.

In fact, this will hold in general for any non-isotrivial family X of smooth projective curves over some smooth projective base B. Note that because the fundamental group of a smooth projective curve has trivial center, we indeed still have a short exact sequence of fundamental groups 1→π_1(X_b)→π_1(X)→π_1(B)→ 1. We have the following corollary: Let f:X→ B be a smooth projective family of curves corresponding to some non-constant map from B to M_g. Then the surjectivity of the section map is equivalent to the weak topological section conjecture. It is enough to verify that such a family of curves X has only finitely many algebraic sections. If X→ B has infinitely many algebraic sections, note that the locus where any two of these sections agree is a countable union of closed subvarieties of B, and hence we may find a smooth proper curve C⊂ B on which the restrictions of all these sections remain distinct. Furthermore, we may also choose C such that the map from B to M_g restricts to a non-constant map on C. This then gives us a Kodaira fibration with infinitely many algebraic sections, contradicting <cit.>.

§ HODGE-THEORETIC SECTION QUESTION

In this section we turn to the Hodge theoretic section question. We give a precise formulation and discuss some natural questions one might ask about this formulation. First, we record some preliminary facts about the category VMHS_ℤ(X).

§.§ Notations and preliminary facts

Let X be a smooth connected variety over ℂ. Let VMHS_ℤ(X) be the category of admissible, graded-polarizable, ℤ-variations of mixed Hodge structure (VMHS) over X. For a precise definition of such an object, see <cit.>. Note that this is an abelian tensor category. The unit object in this category is the constant variation of mixed Hodge structure ℤ(0), and by trivial objects we mean direct sums of the unit object. The semi-simple objects in this category are exactly the polarizable ℤ-variations of pure Hodge structure. Suppose X and Y are two smooth complex varieties. Any functor F:VMHS_ℤ(X)→VMHS_ℤ(Y) is assumed to be an exact, additive ⊗-functor. Note that if we have a map f:Y→ X, then the pull-back functor f^* satisfies these assumptions. Admissibility is a technical condition on the behavior of a variation of mixed Hodge structure at infinity that will not play a role in the proof of our main injectivity result (Prop. <ref>).
The main point is that it ensures that the variation of mixed Hodge structure has some nice properties, which all variations of mixed Hodge structure of geometric origin have. Any VMHS that comes from geometry is admissible. If one wants to ignore this technical point, one may assume that all the spaces in section <ref> and section <ref> are compact, in which case all graded-polarizable variations of ℤ-mixed Hodge structures are automatically admissible.

§.§ Formulation of the question

Let f:X→ Y be a smooth projective morphism with connected fibers between two smooth connected varieties. Let X_b be the fiber over b∈ Y, and ι:X_b→ X the natural inclusion map. Now f and ι define two pull-back functors, and we have the following sequence

VMHS_ℤ(Y)→VMHS_ℤ(X)→VMHS_ℤ(X_b).

Furthermore, if s:Y→ X is an algebraic section to f, we get a functor s^*:VMHS_ℤ(X)→VMHS_ℤ(Y) such that s^*∘ f^*=𝕀_VMHS_ℤ(Y). We can make a formal definition: A functor F:VMHS_ℤ(X)→VMHS_ℤ(Y) is a section to f^* if F∘ f^* is isomorphic to the identity functor on VMHS_ℤ(Y). Now we formulate the question we are interested in:

[Hodge theoretic section question]
* (injectivity) If s_1,s_2 are two distinct algebraic sections to f:X→ Y, can the functors s_1^* and s_2^* be isomorphic?
* (surjectivity) Suppose that F:VMHS_ℤ(X)→VMHS_ℤ(Y) is a functor which is a section to f^*. Then can we find an algebraic section s:Y→ X such that F is isomorphic to s^*?

§.§ Analogue of exactness

In <cit.>, the authors showed that for any smooth connected complex variety X, we have a short exact sequence of groups

1→π_1^Tann(LS^Hdg(X))→π_1^Tann(VMHS_ℚ(X))→π_1^Tann(MHS_ℚ)→ 1,

where LS^Hdg(X) is the Tannakian category of ℚ-local systems underlying a variation of mixed Hodge structure over X, VMHS_ℚ(X) is the category of graded-polarizable ℚ-variations of mixed Hodge structure over X, MHS_ℚ is the category of graded-polarizable ℚ-mixed Hodge structures, and π_1^Tann is the Tannakian fundamental group of a neutral ℚ-linear Tannakian category. This sequence may be viewed as a Hodge theoretic analogue of the short exact sequence of étale fundamental groups

1→π_1^ét(X_k̄)→π_1^ét(X)→Gal(k̄/k)→ 1.

Now given a Serre fibration of K(π,1)-spaces f:X→ Y, we should get a short exact sequence of topological fundamental groups. Therefore, if we would like to view VMHS_ℤ(Y)→VMHS_ℤ(X)→VMHS_ℤ(X_b) as a Hodge theoretic analogue of that short exact sequence of topological fundamental groups, it is natural to ask if some notion of "short exactness" still holds in this case. We first define what it means for this sequence of categories to be "short exact" and prove that some of the conditions hold in this setting, while other conditions do not.

We say that the sequence VMHS_ℤ(Y)→VMHS_ℤ(X)→VMHS_ℤ(X_b) is short exact if the following conditions hold:

* (observable) both f^* and ι^* send semi-simple objects to semi-simple objects;
* (faithfully flat) the functor f^* is fully faithful and f^*(VMHS_ℤ(Y))⊂VMHS_ℤ(X) is closed under taking subobjects;
* (closed immersion) every object in VMHS_ℤ(X_b) is a subquotient of an object coming from VMHS_ℤ(X);
* (exactness) the objects in the image of ι^*∘ f^* are trivial, and for every object V in VMHS_ℤ(X), there exists U∈VMHS_ℤ(Y) such that the maximal trivial subobject of ι^*(V) comes from f^*U⊂ V in the category VMHS_ℤ(X).

This definition certainly requires justification. The idea is the following: if we pretend that VMHS_ℤ forms a Tannakian category, then by duality we would have obtained a sequence of affine group schemes, and it certainly makes sense to discuss if this sequence is short exact.
Furthermore, one may translate the conditions for the sequence of Tannaka groups to be short exact back into the language of category theory via Tannakian duality. This is done in appendix A of <cit.>. Now, as we have mentioned before, VMHS_ℤ no longer forms a Tannakian category, but those conditions on the category theory side still make sense, so we have used them as our definition of "short-exactness" in this context. For example, by Proposition A.6 of <cit.>, if condition (2) holds for functors between Tannakian categories, then the associated map on the Tannaka groups is faithfully flat. Now we may ask to what extent the sequence VMHS_ℤ(Y)→VMHS_ℤ(X)→VMHS_ℤ(X_b) is short exact in the sense of Definition <ref>.

Conditions (1) and (2) hold but condition (4) does not. Since the semisimple objects in VMHS_ℤ are the variations of pure Hodge structure, pulling back certainly preserves semi-simplicity, so (1) holds. To see that (2) holds, recall that the six functor formalism holds for VMHS_ℤ, and since f has connected fibers, we know that f_*f^*=𝕀. In particular, by adjunction, we know that Hom(f^*V,f^*W)=Hom(V,f_*f^*W)=Hom(V,W). Therefore, f^* is fully faithful. It is also closed under taking subobjects, for if W is a subobject of f^*V, then it must come from f_*W⊂ f_*f^*V=V. Condition (4) does not hold, since the trivial objects in VMHS_ℤ(X_b) are given by ℤ(0)^⊕ n, but the objects in the image of ι^*∘ f^* are constant variations of mixed Hodge structure on X_b, most of which are not trivial (i.e., the fiber over a point need not be ℤ(0)^⊕ n). Note that the second part of condition (4) does hold. Indeed, ℤ(0)^⊕ n is not just pulled back from Y but also from a point.

On the other hand, we do not expect condition (3) to hold. Condition (3) says that every ℤ-VMHS on X_b is a subquotient of the restriction of some ℤ-VMHS on X. This would be the case if every ℤ-VMHS on X_b spreads out into families, and so this question is somewhat related to the question of which ℤ-VMHS on a curve inside M_g,1 can be spread out to all of M_g,1. An analogous question, where one replaces ℤ-VMHS with local systems and where the curve is assumed to be very general inside M_g,n, is investigated in recent work of <cit.>, in which they discovered strong restrictions on local systems that do spread out into families. In particular, if g is much bigger than the rank of the local system, then any local system that spreads out in families must have finite monodromy.

§ FAMILY OF CURVES

In this section, we work with families of curves. Let f:X→B be a family of smooth projective curves of genus g≥ 2 over some smooth connected base B (which is not assumed to be proper), and ϕ:B→M_g the corresponding map into M_g. We study the injectivity part of the Hodge theoretic section question. In particular, we have the following proposition:

Let f:X→B be a family of curves as above. Then for any pair of algebraic sections s_1,s_2:B→X, if s_1^* is isomorphic to s_2^* as functors from VMHS_ℤ(X)→VMHS_ℤ(B), then s_1=s_2.

To prove this proposition, we need to find a graded-polarizable, admissible ℤ-variation of mixed Hodge structure on X whose associated period map is injective (or at least injective on each fiber). We do so by using the canonical variation of mixed Hodge structure of Hain and Zucker. We first recall some definitions and facts. Let X be a smooth algebraic variety over ℂ, and let PX be the space of piecewise-smooth paths in X endowed with the compact open topology.
The free path fibration p:PX→ X× X is defined as

p:PX→ X× X, γ↦(γ(0),γ(1)).

Denote by P_x,y the fiber of p:PX→ X× X over the point (x,y). Now there is an isomorphism H_0(P_x,x,ℤ)≅ℤ[π_1(X,x)]. Let J_x be the augmentation ideal of the group ring ℤ[π_1(X,x)]. Note that H_0(P_x,y,ℤ) carries a canonical left ℤ[π_1(X,x)]-module structure, so we get an induced filtration J^∙ by the augmentation ideal J_x.

[r-th canonical VMHS, Prop. 4.20 + Def. 4.21 of <cit.>] Let X be a smooth algebraic variety over ℂ and x∈ X a fixed point. Then there exists a graded-polarizable variation J_x of mixed Hodge structure on X such that for any y∈ X, J_x,y:=(J_x)_y=H_0(P_x,y,ℤ)/J^r+1.

We will in particular be interested in the case where r=1. In this case, we have an extension of mixed Hodge structures <cit.>

0→ H_1(X,ℤ)→ H_1(X,{x,y})→ℤ(0)→ 0.

In particular, when x≠ y, we just have H_0(P_x,y,ℤ)/J^2≅ H_1(X,{x,y}). We have the following proposition, which classifies such extensions:

<cit.> Extensions of this form are classified by the Albanese Alb(X) of X, and the map y↦ H_1(X,{x,y}) agrees with the Albanese mapping with basepoint x:

α_x:X→Alb(X):=F^1H^1(X)^∨/H_1(X,ℤ), y↦(ω↦∫_γω),

where γ is any path from x to y.

<cit.> This period map α_x agrees with the period map for the 1-st canonical VMHS. When (X,x) is a curve, the 1-st canonical VMHS on X with base point x has injective period map.

Now we may proceed to the proof of Proposition <ref>. If f:X→B has no section, then the claim is trivially true, so we may, without loss of generality, assume that we have an algebraic section s_0:B→X. Then, as before, we get a commutative triangle

X→(h) Pic^0_X/B→(π) B, with π∘ h=f.

The fiber of π:Pic^0_X/B→B over b is Jac(X_b)=Alb(X_b), which we may view as a mixed period domain. In particular, Pic^0_X/B carries a universal variation of mixed Hodge structure U such that for a given point p∈Jac(X_b), U|_p is the extension class in Ext^1(ℤ(0),H_1(X_b)) corresponding to p. Now, pulling back U along h, we get a variation of mixed Hodge structure J:=h^*U on X, whose period map factors through h, and which, when restricted to the fiber X_b, agrees with J_s_0(b),y. By Corollary <ref>, we know that the period map of J is injective on each fiber. Therefore, if s_1^* is isomorphic to s_2^* as functors, then for all b∈B, s_1^*J|_b≅ s_2^*J|_b or, equivalently, J|_s_1(b)≅J|_s_2(b). By the injectivity of the period map, we see that s_1(b)=s_2(b). Therefore, the desired proposition follows immediately if one can verify that J is admissible.

The ℤ-VMHS J constructed in the proof above is admissible. This lemma, again, is automatically true when X is proper, and we provide a sketch of the proof in the case where X is not proper. We claim that this variation of mixed Hodge structures comes from geometry, as it is the cohomology of a family of cosimplicial schemes. Since any variation of mixed Hodge structure that comes from geometry is admissible, this proves the desired claim. The main idea is to run Beilinson-Deligne-Goncharov's construction <cit.> of the mixed Hodge structure on ℤ[π_1(X,x,y)]/J^r+1 in families. Consider the fiber product X×_BX with its two projections p_1,p_2:X×_BX→X, both lying over B (so f∘ p_1=f∘ p_2). We get a fibration φ:X×_BX→B, where over each point b∈B the fiber is given by the product of curves φ^-1(b)=X_b× X_b. Let Z_0 be the image of the diagonal map Δ:X→X×_BX. Let D be the image of the fixed section s_0:B→X, and let Z_1 be the preimage of D in X×_BX under the second projection map p_2.
Note that Z_0∩φ^-1(b) is the closed subset of X_b× X_b defined by {x_1=x_2}, where the x_i are the coordinates of X_b× X_b, and Z_1∩φ^-1(b) is the closed subset of X_b× X_b defined by {x_2=s_0(b)}. Let ℤ_Z_i be the extension by zero of the constant sheaf ℤ on Z_i along the natural inclusion map. We can define the following complex

K_s: 0→ℤ→ℤ_Z_0⊕ℤ_Z_1→ 0,

where the map ℤ→ℤ_Z_0⊕ℤ_Z_1 is given by the alternating sum of the natural restriction maps. Note that if we restrict this complex to φ^-1(b), we recover the complex of sheaves on X_b× X_b used in Beilinson-Deligne-Goncharov's construction,

K_s(b)⟨1⟩: 0→ℤ→ℤ_Z_0⊕ℤ_Z_1→ 0.

Now the desired variation of mixed Hodge structure agrees with the variation of mixed Hodge structure defined on the local system R^1(p_1)_*(K_s) on X, whose fiber at y∈ X_b is given by the hypercohomology ℍ^1(X_b,K_s(b)⟨1⟩), which agrees with H^1(X_b,{s_0(b),y}) when s_0(b)≠ y. When s_0(b)=y, this hypercohomology becomes the split extension of H^1(X_b,s_0(b)) by ℤ(0).

* For a detailed explanation of Beilinson-Deligne-Goncharov's construction, see section 3.6 of <cit.>.
* The fact that the r-th canonical VMHS is admissible over a single curve is already proved in <cit.>, and the above lemma is their result in families.

Let f:S→ C be a Kodaira fibration, and s_1,s_2 two distinct algebraic sections of f. Then s_1^* is not isomorphic to s_2^*.

§ SOME CONCLUDING REMARKS AND OPEN QUESTIONS

In this paper, we have studied the question of algebraic sections to Kodaira fibrations using topological and Hodge theoretic invariants. We have proven some partial results, but clearly many interesting questions remain unanswered. We list some of these questions here.

§.§ Topological question

Let f:S→ C be a Kodaira fibration. We proved in Corollary <ref> that the natural map Φ:{algebraic sections to f:S→ C}→{sections of <ref>}/conjugation is injective under some assumption on the monodromy representation associated to f:S→ C. We conjecture that this condition on the monodromy representation is not necessary: Let f:S→ C be a Kodaira fibration. Then the map Φ is always injective. On the other hand, we do not know much about the surjectivity part. Does there exist a Kodaira fibration such that the section map Φ is not surjective? Since the analogue of Faltings' theorem holds for Kodaira fibrations, one can also ask the following question. Does there exist a Kodaira fibration where the set {sections of <ref>}/conjugation is infinite? Finally, in section <ref>, we constructed many families of curves over bases of dimension at least 2 which trivially satisfy the topological section question, i.e., which do not have any topological section at all. However, the argument we presented does not work for Kodaira fibrations, and we in fact do not know examples of Kodaira fibrations with no topological sections. However, given that the universal family of curves has no topological sections, we conjecture the following: There is a Kodaira fibration with no topological section. One might even guess that if one writes down a generic complete curve in M_g, then the associated Kodaira fibration has no topological sections, and hence trivially satisfies the topological analogue of Grothendieck's section conjecture. However, there is not much evidence for this more ambitious guess, so we do not state it as a conjecture.
* The analogous question has an affirmative answer if one instead asks whether there are Kodaira fibrations with no algebraic sections, and one may find examples of such Kodaira fibrations among the so-called double Kodaira fibrations: these are smooth projective algebraic surfaces S admitting two distinct Kodaira fibration structures

F_1→ S→(f_1) C_1 and F_2→ S→(f_2) C_2,

where f_1:S→ C_1 and f_2:S→ C_2 are both Kodaira fibrations, and F_1 and F_2 are fibers of the corresponding fibrations. If g(C_1)>g(C_2) and g(F_1)>g(C_2), then f_2:S→ C_2 cannot admit an algebraic section, because C_2 cannot have any non-constant algebraic map to F_1 or to C_1: a section C_2→ S composed with f_1 would either be a non-constant map to C_1 or land in a fiber F_1, and both are excluded by the genus inequalities. Such an example is already given in <cit.> (see also the work of <cit.> and Example 5.8 of <cit.>).

* One may also ask if there are topological surface bundles over a Riemann surface with hyperbolic fibers and no continuous sections. Such an example is constructed in <cit.>, and the bases in those examples are tori (and hence cannot be Kodaira fibrations). Hillman recorded an example of Endo in <cit.>, and in this example the fiber and the base of the surface bundle are both of genus 3. However, the construction is topological in nature, and we do not know if one can put compatible complex structures on the surface bundle and the base to turn this example into a Kodaira fibration.

§.§ Hodge theoretic question

We have already hinted in section <ref> that the Hodge theoretic section question we formulated in this note may not be the correct formulation. Here we discuss some possible modifications.

Modification 1: Modifying the functor. Tannakian categories are equipped with natural fiber functors. Therefore, it may make more sense to replace our variety X with a pointed variety (X,x) and define the fiber functor

F_x:VMHS_ℤ(X)→MHS_ℤ, V↦ V_x.

Then we can restrict our attention to functors between the categories VMHS_ℤ that commute with the fiber functors, i.e., for two pointed varieties (X,x) and (Y,y), we only consider functors F:VMHS_ℤ(Y)→VMHS_ℤ(X) such that F_x∘ F≅ F_y. Note that if f:(X,x)→(Y,y) is a morphism of pointed varieties, then the pullback functor f^* is a functor that commutes with the fiber functors. Furthermore, as we have pointed out, in Grothendieck's anabelian program, maps between the étale fundamental groups are not just maps of profinite groups: if X/K and Y/K are two varieties over some number field K, then the morphisms of étale fundamental groups π_1^ét(X)→π_1^ét(Y) are really π_1(Y_K̄)-conjugacy classes of maps of short exact sequences

1→π_1^ét(X_K̄)→π_1^ét(X)→Gal(K̄/K)→ 1.

Now the analogous sequence in the Hodge theoretic context should be

MHS_ℤ→VMHS_ℤ(X)→LS^Hdg(X),

where LS^Hdg(X) is the category of local systems on X that underlie a graded-polarizable admissible ℤ-variation of mixed Hodge structure. This is so since, if one replaces ℤ with ℚ, the associated Tannaka groups do form a short exact sequence (Cor. 4.7 of <cit.>).
Now, given f:X→ Y, we get a pair of pullback functors such that the following diagram commutes:

MHS_ℤ→VMHS_ℤ(Y)→LS^Hdg(Y)
MHS_ℤ→VMHS_ℤ(X)→LS^Hdg(X)

where the left vertical map is the identity on MHS_ℤ and the middle and right vertical maps are the pullbacks f^*_VMHS and f^*_LS. If we take fiber functors into consideration and consider morphisms of pointed varieties, both rows are in addition compatible with the fiber functors VMHS_ℤ(Y)→MHS_ℤ, VMHS_ℤ(X)→MHS_ℤ and with the forgetful functors from LS^Hdg(Y) and LS^Hdg(X) to the category of abelian groups. Therefore, we get a map Φ from Mor((X,x),(Y,y)) to the set of isomorphism classes of pairs of functors F:VMHS_ℤ(Y)→VMHS_ℤ(X) and G:LS^Hdg(Y)→LS^Hdg(X) making the diagram above commute. One may ask the following natural questions:

* Is the map Φ a bijection?
* Suppose that f:Y→ X is a smooth projective morphism of smooth varieties with connected fibers. Is Φ a bijection when we restrict the domain to sections of f (after restricting the codomain to pairs of functors that are sections to f^*)?

In this note, we did not adopt these modifications, primarily because in Grothendieck's original anabelian conjectures, the morphisms between anabelian schemes are not pointed. Furthermore, we do not know examples of functors that are sections to f^* but fail to meet the additional restrictions (e.g., commuting with the fiber functors). One interesting way to test whether it makes sense to impose these restrictions is the following: we know that given a Kodaira fibration f:S→ C, the set of algebraic sections is finite. Therefore, one may ask if the set of functors that are sections to f^* satisfying some additional restrictions is also finite. Answering this question would be a natural first step towards the surjectivity part of the Hodge theoretic section question.

Modification 2: Modifying the category. We can certainly consider other natural categories one may attach to a smooth variety using Hodge theory. In fact, part of the reason we could not say anything about the surjectivity part of Question <ref> is that the category of variations of mixed Hodge structure is very large and not semi-simple. Therefore, it is not easy to even write down functors between these categories, and one might hope that restricting to a smaller subcategory would be helpful in this regard. For example, one can restrict attention to the subcategory of graded-polarizable, admissible, unipotent variations of mixed Hodge structure. The advantage of this subcategory is that these variations of mixed Hodge structure are classified in <cit.>. The reason we did not work with this subcategory is that we do not know how to construct a unipotent variation of mixed Hodge structure on a family of curves whose period map is injective (or at least injective on each fiber). Note that the variation J we constructed in the proof of Proposition <ref> is not unipotent; in fact, it is only unipotent on each fiber but not on all of X. Therefore, one may ask the following natural question: Given a smooth variety X, when does X carry a graded-polarizable, admissible, (unipotent) ℤ-variation of mixed Hodge structure whose period map is injective? In particular, we do not know if every Kodaira fibration carries a unipotent variation of mixed Hodge structure whose period map is injective on each fiber.
If such a variation exists, then we can use basically the same proof as that of Prop. <ref> to obtain injectivity results. One can also consider the category VMHS_ℚ of admissible, graded-polarizable variations of ℚ-mixed Hodge structure, equipped with some fiber functor. This is an honest Tannakian category, and therefore we get a Tannakian fundamental group by duality. For example, this is considered in the thesis of Ferrario <cit.>. In this thesis, Ferrario showed that the injectivity part of the Hodge theoretic section conjecture holds for the structure map of X, where X=ℙ^1-D and D is a finite set of at least 3 points, and made partial progress on this problem when X is an elliptic curve minus a point. The key tool is what Ferrario called the r-th Chen's variation of mixed Hodge structure on X, whose fiber over x∈ X is given by the mixed Hodge structure on ℚ[π_1(X,x)]/J^r. They are able to study this variation using iterated integrals, which are much more computable when the base is ℙ^1-D or an elliptic curve minus a point. Finally, recall that we showed condition (4) in Proposition-Definition <ref> is not true for the sequence VMHS_ℤ(Y)→VMHS_ℤ(X)→VMHS_ℤ(X_b). This failure cannot really be repaired by naively replacing the category VMHS_ℤ with smaller but geometrically meaningful subcategories (e.g., the category of unipotent VMHS) that include all constant variations of mixed Hodge structure, since the image of ι^*∘ f^* will be the constant VMHS on the fiber, and most of them are not trivial. Therefore, one should make some other modification if one wants to recover analogues of exactness.
http://arxiv.org/abs/2407.02036v1
20240702080935
PT symmetric fermionic particle oscillations in even dimensional representations
[ "Leqian Chen", "Sarben Sarkar" ]
quant-ph
[ "quant-ph", "hep-ph" ]
http://arxiv.org/abs/2407.02803v1
20240703042101
KnobCF: Uncertainty-aware Knob Tuning
[ "Yu Yan", "Junfang Huang", "Hongzhi Wang", "Jian Geng", "Kaixin Zhang", "Tao Yu" ]
cs.DB
[ "cs.DB" ]
Harbin Institute of Technology, 92 West Dazhi St, Harbin, Heilongjiang, China
yuyan@hit.edu.cn, twiherhol@gmail.com, wangzh@hit.edu.cn, gengj@stu.hit.edu.cn, 21B903037@stu.hit.edu.cn, 21B903056@stu.hit.edu.cn

§ ABSTRACT
Knob tuning aims to optimize database performance by searching for the most effective knob configuration under a certain workload. Existing works suffer from two significant problems. On the one hand, there exist multiple similar or even useless evaluations during knob tuning, even with diverse searching methods, because of the different sensitivities of knobs on a certain workload. On the other hand, a single evaluation of a knob configuration may bring overestimation or underestimation because of the uncertainty of query performance. To solve the above problems, we propose a decoupled query uncertainty-aware knob classifier, called KnobCF, to enhance knob tuning. Our method has three significant contributions: (1) We propose a novel concept of uncertainty-aware knob configuration estimation to enhance the knob tuning process. (2) We provide an effective few-shot uncertainty knob estimator without extra time consumption in training data collection, which has high time efficiency in practical tuning tasks. (3) Our method provides a general framework that could be easily deployed in any knob tuning task, because we make no changes to the knob tuners and the database management system. Our experiments on four open-source benchmarks demonstrate that our method effectively reduces useless evaluations and improves the tuning results. Especially on TPCC, our method achieves competitive tuning results with only 60% to 70% of the time consumption of full workload evaluations.

KnobCF: Uncertainty-aware Knob Tuning
Tao Yu
July 8, 2024
=====================================

PVLDB Artifact Availability: The source code, data, and/or other artifacts have been made available at <https://github.com/AvatarTwi/KnobCF>.

§ INTRODUCTION
The knob tuning task is typically defined as the process of optimizing database performance by searching for the most effective knob configuration <cit.>. The knob configuration directly determines the resource partition, operator execution, etc., significantly influencing the workload performance of the database system <cit.>. Existing solutions for knob tuning typically consist of two major parts: (i) Configuration Recommendation: searches knob configurations according to heuristic rules <cit.>, Bayesian optimization models <cit.>, reinforcement learning agents <cit.>, etc.
(ii) Workload Evaluation: evaluates the workload performance under the selected knob configuration. Among these steps, the most time-consuming part is the workload evaluation. For example, on the open-source benchmark TPCH, we observe that CDBtune <cit.> spends over 80% of its time evaluating the performance of knob configurations. Thus, the efficiency of the evaluation is crucial for the knob tuning results. We observe two significant problems in the workload evaluation of knob tuning that cause the efficiency issue. First, although existing work makes its best effort to recommend diverse knob configurations, multiple similar or even useless evaluations still exist in the knob tuning process, leading to low time efficiency. Taking Figure <ref> as an example, we present the probability density curve of the query latency distribution during the YCSB-a knob tuning process (we utilize DDPG <cit.> as the knob tuner). We observe that the query latency distribution rarely changes with the tuning iterations, indicating that there exists a large number of iterations that obtain similar or even the same query latency as former iterations. This phenomenon is caused by the different sensitivities of database knobs on a certain workload, i.e., the same modification on sensitive knobs brings much larger performance changes than on non-sensitive knobs. Thus, the tuning process contains many useless evaluations even with various configuration search strategies. Second, there exists a natural contradiction between time efficiency and accuracy in evaluating the workload performance of knob configurations. On the one hand, to save evaluation time, some methods <cit.> utilize a one-time evaluation as the estimated performance of a knob configuration. However, due to the uncertainty of workload execution, a one-time evaluation may result in overestimation or underestimation, leading to wrong knob tuning decisions. On the other hand, multi-time evaluations <cit.> bring more accurate and robust knob tuning while causing large time consumption. In large database systems <cit.>, it may even be better to continue running with a worse configuration instead of executing multiple evaluations for knob tuning. In general, the above two problems can be summarized as how to implement an efficient evaluation of knob configurations. To address the first problem, we consider modeling the performance distribution of knob configurations. To address the second problem, we combine the uncertainty distribution with performance modeling. Specifically, we propose the concept of uncertainty-aware evaluation for knob tuning, i.e., we aim to model the uncertain performance distribution of knob configurations. Importantly, knob tuning is a multi-objective optimization problem with multiple evaluation metrics, such as workload execution time, memory usage, disk IO usage, etc. In this paper, we limit our focus to predicting the workload execution time (throughput) of knob configurations based on our uncertainty-aware knob estimator. The uncertainty of other evaluation metrics is interesting to study in the future. For execution uncertainty, some existing works <cit.> have proposed machine learning estimators to model the query uncertainty distribution. However, these methods focus on underlying uncertain cardinality sampling and lack consideration of knob configurations. Moreover, to the best of our knowledge, existing works have no formal definition of the uncertainty distribution under knob configurations.
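The overestimation/underestimation risk of one-time evaluation is easy to reproduce in a toy simulation. The sketch below is purely illustrative (the latency distributions are invented for the example, not measurements from any benchmark): a configuration that is better on average but noisier is frequently ranked as the worse one by a single sample, while averaging repeated runs recovers the true ordering.

import random
import statistics

random.seed(0)

# hypothetical latency distributions (seconds) of two knob configurations:
# config A is better on average but has a higher variance than config B
def run_config_a():
    return random.gauss(9.0, 2.0)

def run_config_b():
    return random.gauss(10.0, 0.3)

trials = 10_000
wrong = sum(run_config_a() > run_config_b() for _ in range(trials))
print(f"one-time evaluation prefers the worse config in {wrong / trials:.1%} of trials")

# multi-time evaluation: averaging repeated runs recovers the true ranking
mean_a = statistics.mean(run_config_a() for _ in range(100))
mean_b = statistics.mean(run_config_b() for _ in range(100))
print(f"100-run means: A = {mean_a:.2f}s, B = {mean_b:.2f}s")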
Even though the uncertainty of knob configurations could enhance the evaluations of knob tuning, it brings several challenges: (1) High-Dimensional Knob Candidate Space: Even after applying knob filtering techniques <cit.>, the space of potential knob configurations remains high-dimensional. This requires multiple query executions to gather sufficient training data on the uncertain distribution. (2) Diverse Queries: Each query may have a unique structure, a different table schema, and various indexes, making it difficult to design a universal feature representation, which is the basis for designing a model transfer mechanism. (3) Time Efficiency: Constructing a model transfer mechanism is especially crucial for knob tuning tasks designed for changing workloads. This means we should avoid extensive pre-training for workload adjustments and design the model transfer mechanism to reduce training time consumption.

Our Approach. To address these challenges, we propose a query uncertainty-aware knob classifier called KnobCF to enhance knob tuning. Our key observation is that a considerable portion of knob configurations have performance distributions similar to those of already evaluated knob configurations. In these cases, it is unnecessary to reevaluate these knob configurations. Instead, we estimate the performance of new knob configurations based on historical evaluations, largely reducing the useless evaluations. Also, due to the uncertainty modeling, our method could alleviate the overestimation and underestimation problems caused by single-point estimation. In addition, our uncertainty-aware estimator is easy to deploy in any knob tuner, since it is decoupled from the knob optimizer. Specifically, our approach addresses the above challenges in three aspects. (1) Problem Definition: Instead of simply defining a regression task to predict the uncertainty distribution of single knob configurations, we propose an innovative problem definition, a joint distribution uncertainty classifier, to predict the joint-distribution classification label of knob configurations. On the one hand, we utilize the joint distribution of knob configurations instead of single knob distributions, largely reducing the time overhead of collecting training data. On the other hand, compared with predicting regression distribution statistics, the design of classification label prediction efficiently prevents the model from being affected by the different scales of various tuning tasks. (2) Feature Representation: The feature representation is a fundamental task for effective model training and inference. To process the diverse queries, we combine a graph convolutional network <cit.> and knob importance <cit.> to encode the query plan into a fixed embedding vector, which makes it easy to gather sufficient training data to obtain high-quality embedding vectors. Also, we design a universal knob encoding method by utilizing max-min normalization and one-hot encoding. (3) Model Design: Instead of considering end-to-end learning, we design a two-stage learning method to decouple feature embedding and uncertainty label learning. In the first stage, we design an efficient query embedding model to achieve transferable embedding vectors. In the second stage, we design a lightweight model to predict knob uncertainty labels. The model takes the query embedding vectors and knob configurations as input to estimate the uncertainty distribution label.
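To make the two-stage design concrete, the following PyTorch sketch shows one possible shape of the pipeline. It is a simplified stand-in, not the paper's exact architecture: the layer sizes, the mean-pooling in place of the bottom-up graph convolution, and all names are our own illustrative choices.

import torch
import torch.nn as nn

class QueryEmbedder(nn.Module):
    # Stage 1: encodes plan-node features and is trained against
    # per-query knob-importance targets (stand-in for the GCN).
    def __init__(self, node_feat_dim=32, emb_dim=64, num_knobs=20):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Linear(node_feat_dim, emb_dim), nn.ReLU(),
            nn.Linear(emb_dim, emb_dim))
        self.importance_head = nn.Sequential(
            nn.Linear(emb_dim, emb_dim), nn.ReLU(),
            nn.Linear(emb_dim, num_knobs))

    def forward(self, node_feats):               # (num_nodes, node_feat_dim)
        h = self.encode(node_feats).mean(dim=0)  # pooled stand-in for the root output
        return h, self.importance_head(h)        # embedding, predicted importance

class KnobClassifier(nn.Module):
    # Stage 2: lightweight head mapping (query embedding, knob vector)
    # to multi-label logits over the mixture components.
    def __init__(self, emb_dim=64, knob_dim=20, num_components=2):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(emb_dim + knob_dim, 64), nn.ReLU(),
            nn.Linear(64, num_components))

    def forward(self, query_emb, knob_vec):
        return self.head(torch.cat([query_emb, knob_vec], dim=-1))

# Usage: train the embedder once, then freeze it and fine-tune only the
# lightweight head when the workload drifts.
embedder, clf = QueryEmbedder(), KnobClassifier()
emb, _ = embedder(torch.randn(12, 32))           # a plan with 12 nodes
logits = clf(emb, torch.rand(20))                # normalized knob vector
label = (torch.sigmoid(logits) > 0.5).int()      # e.g. tensor([1, 0])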
Compared to an end-to-end learning model, our decoupled learning design enables sufficient embedding training to obtain high-quality embedding features and efficient model transfer by fine-tuning the lightweight prediction model. The main contributions of this paper are as follows: (1) We propose a novel concept of uncertainty-aware knob configuration estimation and define the problem of uncertain estimation in Section <ref> to enhance the knob tuning process. (2) We design a transferable feature representation for the uncertainty estimation task, including the transferable query representation learning method in Section <ref> and the transferable knob configuration encoding method in Section <ref>. (3) We propose a query uncertainty-aware knob classifier, KnobCF, and introduce the enhanced knob tuning algorithm in Section <ref>. Our method could effectively reduce the time consumption of knob tuning evaluations while maintaining the knob tuning results. (4) We conduct experiments in Section <ref> on four open-source benchmarks, demonstrating that our method effectively reduces useless evaluations and improves the tuning results.

§ OVERVIEW

In this section, we introduce the background knowledge of the knob tuning problem, formalize the uncertainty-aware knob estimation, and present an overview of our method.

§.§ Preliminary

Knob Tuning Process. As shown in Figure <ref>, we present the typical process of knob tuning, whose main components are as follows: (1) The knob space, defined as K = {K_1,..., K_n}, could be identified based on a certain database environment and workload. Generally, we could obtain the adjustable knob space according to resource limitations and DBMS types. Then, existing methods utilize knob selection methods <cit.> to reduce the knob space for certain workloads. The knob tuning aims to find the optimal knob configuration from the filtered knob space. (2) For the search model, we could roughly divide the existing tuning models into three types: rule-based heuristic search models <cit.>, RL-based search models <cit.>, and BO-based search models <cit.>. Existing methods typically start the tuning process by sampling several knob points to initialize the search model. Then, the model is responsible for recommending diverse knob configurations to further find the optimal solution. (3) The recommended knob configuration could be generated by several principles. Specifically, RL-based models utilize the knob configuration with the highest potential reward as the next recommended knob configuration. BO-based models employ the knob configuration with the maximum acquisition function value. And rule-based models utilize heuristic greedy rules to select the knob configuration. (4) Existing knob tuning methods typically utilize one-time or multi-time evaluations to obtain the corresponding reward for a certain knob configuration. Different from these methods, we consider uncertainty-aware estimation for knob tuning, to improve the evaluation efficiency of knob tuning.

Uncertain Estimation. Naturally, we could formalize the uncertain estimation task for knob tuning as follows: given a knob configuration k ∈ K and the workload W (a set of queries), the uncertain distribution of the workload execution time could be defined as T(k, W) ∼ f(ξ_k,W), where ξ_k,W is a random variable. Here f(ξ_k,W) represents the uncertain distribution of the workload execution time under knob configuration k.
Considering the complexity of directly modeling the workload, we could simplify the uncertain estimation task as follows: T(W, k) = ∑_q_i ∈ W T(k, q_i), where T(k, q_i) = f(ξ_k,q_i) is the uncertain distribution of query q_i under knob configuration k.

§.§ Problem Definition

In this paper, we aim to design an uncertainty-aware knob estimator to enhance the knob tuning task. Specifically, given a knob configuration k and a query q, our goal is to learn the uncertain distribution of the query execution time, defined as f(ξ_k,q). Generally speaking, we could formalize the uncertainty-aware knob estimation task as a regression task. The input of the regression task is the knob configuration k and the query q, and the output is the uncertain distribution range of the query execution time. However, the above regression task requires multiple evaluations for each query and each knob configuration to construct the training data, which is time-consuming in a database management system. To optimize this problem, we consider two novel designs for the learning goal. (1) We define the uncertainty prediction task as a classification task. In fact, the category information of the knob configurations is sufficient for detecting useless evaluations. For a new configuration, we only judge the uncertainty category instead of predicting the exact uncertainty distribution. If we observe that similar knobs have been evaluated before, we can directly use the historical evaluations to predict the current distribution. (2) The second design is to improve the single uncertain distribution to a joint uncertain distribution. As we discussed in Section <ref>, uncertainty distributions among different knobs can complement each other to save the time consumption of query evaluation. Thus, we further improve the uncertain estimation task to a joint estimation. Given the knob space K and the workload W (a set of queries), the uncertain distribution of the workload execution time could be defined as T(K, W) = ∑_q_i ∈ W T(K, q_i), where T(K, q_i) ∼ f(ξ_K,q_i) represents the uncertain distribution of query q_i in the knob space. Based on the above two novel designs, we define our classification-based knob estimator as follows. Input: the knob configuration k ∈ K and the query q ∈ W. Output: the category label of the uncertain distribution f(ξ_k,q). Furthermore, we introduce the construction of our joint uncertain distribution estimator and the detailed utilization of our knob estimator in Sections <ref> and <ref>.

Example 1: We present an example to illustrate the above definition. Given the knob space K = {k_1, k_2, k_3, k_4, k_5} and the workload W = {q_1}, assume that

f(ξ_K,q_1) = ∑_i=1^2 π_i ·𝒩(μ_i, σ_i^2) = π_1 ×𝒩(μ_1, σ_1^2) + π_2 ×𝒩(μ_2, σ_2^2)

is the joint uncertain distribution of query q_1. We observe that this joint distribution consists of two Gaussian distributions, 𝒩(μ_1, σ_1^2) and 𝒩(μ_2, σ_2^2). Thus, each knob configuration k_i ∈ K has three potential category labels: [1,0] (the uncertain distribution of k_i is involved in 𝒩(μ_1, σ_1^2)), [0,1] (the uncertain distribution of k_i is involved in 𝒩(μ_2, σ_2^2)), and [1,1] (the uncertain distribution of k_i is combined from 𝒩(μ_1, σ_1^2) and 𝒩(μ_2, σ_2^2)).

§.§ Solution Overview

Knob tuning is a time-consuming task that involves multiple evaluations to determine the optimal configuration in a high-dimensional space of knobs. The efficiency of these evaluations directly impacts the overall efficiency of knob tuning. A natural approach is to leverage the observation that, wherever there are useless evaluations, similar performance distributions exist among knob configurations.
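The category labels of Example 1 can also be made concrete with a small mixture-model computation: fit a Gaussian mixture to the pooled latency samples of the evaluated configurations, then record which components each configuration's own samples fall into. The sketch below uses scikit-learn on synthetic data (two components, mirroring Example 1); it illustrates the label semantics only, not the trained classifier itself.

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# synthetic latency samples (ms) of query q1 under five knob configurations
samples = {
    "k1": rng.normal(10, 1, 30),   # low-latency mode only
    "k2": rng.normal(25, 2, 30),   # high-latency mode only
    "k3": np.concatenate([rng.normal(10, 1, 15), rng.normal(25, 2, 15)]),
    "k4": rng.normal(10, 1, 30),
    "k5": rng.normal(25, 2, 30),
}

# fit the joint distribution f(xi_{K,q1}) over all configurations at once
pooled = np.concatenate(list(samples.values())).reshape(-1, 1)
gmm = GaussianMixture(n_components=2, random_state=0).fit(pooled)

# category label of a configuration = set of components its samples hit
# (component order is arbitrary, so labels are unique up to permutation)
for name, xs in samples.items():
    comps = set(gmm.predict(xs.reshape(-1, 1)))
    print(name, [int(c in comps) for c in range(2)])  # [1,0], [0,1] or [1,1]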
We exploit this property by designing a knob uncertainty classifier to group similar knob configurations into the same distribution. This design allows us to estimate new knob configurations based on existing evaluations without individually evaluating each one. Specifically, we show the workflow of our method in Figure <ref>, consisting of two fundamental components: (1) a transferable feature embedding model that encodes the query plan and knob importance into an embedding vector, which is used to obtain a high-quality embedding representation for various queries and will be discussed in Section <ref>; (2) an uncertainty-aware knob classifier that predicts the distribution category for a given knob configuration and query embedding vector, which will be discussed in Section <ref>. Then, if KnobCF matches similar historical evaluations, our model directly returns an estimated cost for the current query instead of a practical query execution in the database. To further explain the workflow of our uncertainty-aware knob classifier, we present an example based on Example 1. Given the category labels L = {[1,0], [0,1], [1,1], [1,0], [0,1]} for all knob configurations K under q_1, the label [1,0] corresponds to the uncertain distribution 𝒩(μ_1, σ_1^2), [0,1] corresponds to 𝒩(μ_2, σ_2^2), and [1,1] corresponds to both 𝒩(μ_1, σ_1^2) and 𝒩(μ_2, σ_2^2). For a new knob configuration k_6 and query q_1, we first encode the query plan and knob configuration into embedding vectors. Then, we input the embedding vectors into the uncertainty-aware knob classifier to predict the category label, say [1,0]. Then, we match the historical labels to determine whether to evaluate the knob configuration. In this case, k_6 has the same label as k_1 and k_4. Therefore, we can estimate the performance of k_6 based on the evaluations of k_1 and k_4 without conducting a new evaluation.

§ TRANSFERABLE FEATURE REPRESENTATION LEARNING

Feature representation is the fundamental task for our uncertainty-aware knob classifier, directly influencing the efficiency of model training, inference, and transfer. However, it is challenging to process the diverse queries and the high-dimensional knobs to obtain a high-quality feature representation. In this section, we introduce our transferable query feature embedding model in Section <ref> and the transferable knob configuration encoding in Section <ref>.

§.§ Transferable Query Feature Embedding

Generally speaking, existing AI-driven methods <cit.> have proposed some effective feature representations for queries. For example, QueryFormer <cit.> proposed a tree-transformer model to learn embedding representations for queries. Hilprecht et al. <cit.> proposed a zero-shot cost model for query plan encoding, which focuses on transferable feature representation learning. However, these estimators focus on query optimization and index tuning, lacking consideration of knob tuning. Moreover, the uncertainty prediction of knob tuning faces a large knob candidate space, making it difficult to model the uncertainty distribution. To address this challenge, we design a downstream task-related query embedding model to provide high-quality embedding vectors for knob uncertainty classification. Specifically, we combine the knob characteristics with the query embedding representation. Our key idea is to combine the knob importance <cit.> with the query embedding representation, which could benefit the uncertainty classification task.
Specifically, knob importance calculated by SHAP <cit.> is typically utilized to filter useless knobs in existing works <cit.>. The importance information identifies the knob sensitivity of queries, which largely influences the knob uncertainty category. Moreover, similar queries have similar sensitivity to the same knobs, so the knob importance information can be directly related to queries. To obtain the knob importance efficiently, we utilize the popular OpenFE <cit.> to calculate the knob importance for each query. Compared to SHAP <cit.>, OpenFE <cit.> utilizes LightGBM to efficiently calculate the dimension importance, yielding fast and accurate knob importance scores. Furthermore, the knob importance can be manually adjusted by the database administrator to better suit the practical environment. With the knob importance obtained, we combine it with the query embedding representation. As shown in Figure <ref>, we replace the decoder output (the reconstructed query plan) with the knob importance for query representation learning. We make this revision for two reasons. On the one hand, machine learning studies <cit.> have shown that downstream task-related embedding methods can learn more efficiently than the typical encoder-decoder model, the general schema of representation learning. On the other hand, only a small amount of knob-performance training data is required to calculate the knob importance for a certain query, compared to fitting the full uncertainty distribution; for model transfer, we can thus implement efficient retraining to adapt to workload drift. Specifically, our query representation learning model consists of three main components. (1) Before the representation learning, we encode the original query plan into a vector representation. Considering model transfer, we adopt the transferable feature representation <cit.> and construct a directed acyclic graph for each query. As shown in Figure <ref>, a query plan is represented by a directed acyclic graph consisting of operator nodes, predicate nodes, table nodes, etc., and each node consists of transferable representations. For example, the column feature consists of the column type and statistical characteristics instead of fixed-vocabulary one-hot encodings. (2) We employ the bottom-up graph convolution embedding method <cit.> to process the plan graph, which can be flexibly extended to diverse query plan structures. We regard the hidden output of the query plan root node as the result of query representation learning, i.e., the fixed embedding vector. This fixed embedding vector will be used to predict the knob uncertainty category in Section <ref>. (3) We design a knob importance learner based on a three-layer neural network, which maps the embedding vector to knob importance scores. In our feature representation learning model, the knob importance information is distilled into the query embedding vector through the graph embedding parameters updated by backpropagation. In the training process, we can gather sufficient query-importance data with limited time consumption to obtain a high-quality query embedding vector, enhancing the downstream knob uncertainty prediction model.
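As an illustration, the following is a minimal PyTorch sketch of such a knob importance learner: a three-layer head that maps the fixed plan embedding to per-knob importance scores and is trained to regress the OpenFE targets. The module name, dimensions, and the MSE objective are our own illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn

class KnobImportanceLearner(nn.Module):
    """Three-layer head mapping a fixed-size plan embedding to
    per-knob importance scores (regression targets from OpenFE)."""
    def __init__(self, embed_dim: int = 64, num_knobs: int = 30, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(embed_dim, hidden), nn.LeakyReLU(),
            nn.Linear(hidden, hidden), nn.LeakyReLU(),
            nn.Linear(hidden, num_knobs),
        )

    def forward(self, plan_embedding: torch.Tensor) -> torch.Tensor:
        return self.net(plan_embedding)

# Training step: the regression loss back-propagates through this head
# (and, in the full model, through the graph encoder that produced the
# embedding), distilling knob sensitivity into the embedding vector.
learner = KnobImportanceLearner()
plan_embedding = torch.randn(8, 64)   # stand-in for the graph encoder output
openfe_scores = torch.rand(8, 30)     # per-query knob importance targets
loss = nn.functional.mse_loss(learner(plan_embedding), openfe_scores)
loss.backward()
```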
§.§ Knob Configuration Encoding In this section, we design a transferable knob configuration encoding method, which is the basis for model transfer. Generally speaking, different database management systems (and even different versions of the same DBMS) have different kinds of knobs, which makes it impractical to design a universal knob encoding method. Thus, we limit our focus to a single DBMS version when designing a transferable knob encoding method. Specifically, database knobs can be divided into two categories: numerical knobs and non-numerical knobs. It is easy to obtain a universal encoding for numerical knobs because the same DBMS version exposes the same set of knobs; we only need to normalize the knobs to handle the varying scales of different tuning tasks. For numerical knobs, we directly utilize max-min normalization, as shown in Formula <ref>: x' = (x - x_min)/(x_max - x_min), where x is the original knob value, and x_min and x_max are the minimum and maximum values of the knob space. After normalization, the numerical knobs are encoded as a fixed-scale vector. For non-numerical knobs, we utilize one-hot encoding to obtain a universal encoding. For example, the `autovacuum' knob of PostgreSQL 14 has two potential values, `on' and `off'; thus, we encode it as [1,0] or [0,1]. After obtaining the encodings, we directly concatenate them to form the final knob configuration encoding. Furthermore, from the perspective of model transfer, there exists a tension between comprehensive knob encoding and filtered knob encoding. On the one hand, we could encode all the potential knobs of a certain DBMS version to support various tuning tasks, but comprehensive knob encoding brings a high input dimension, leading to underfitting and poor model prediction <cit.>. On the other hand, a filtered knob encoding can be obtained by filtering out useless knobs, which largely reduces the knob search space and is the most important preprocessing step in existing tuning tasks. However, the filtered knob encoding may limit the knob representation, resulting in overfitting and poor model transfer <cit.>. To balance this tension, we simply utilize the union of the important knobs across historical tuning tasks. This could be further improved by dimensionality reduction methods such as the random projection in LlamaTune <cit.>, which can be easily integrated into our knob encoding method.
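A minimal sketch of this encoding scheme is shown below. The knob names and value ranges are hypothetical placeholders; in practice, the (min, max) bounds and categorical domains would come from the DBMS knob metadata.

```python
import numpy as np

# Knob space metadata: (min, max) for numerical knobs, value list for
# non-numerical knobs. These example knobs and bounds are illustrative.
NUMERICAL = {"shared_buffers_mb": (64, 16384), "work_mem_mb": (1, 1024)}
CATEGORICAL = {"autovacuum": ["on", "off"]}

def encode_knobs(config: dict) -> np.ndarray:
    parts = []
    for name, (lo, hi) in NUMERICAL.items():      # max-min normalization
        parts.append((config[name] - lo) / (hi - lo))
    for name, values in CATEGORICAL.items():      # one-hot encoding
        parts.extend(1.0 if config[name] == v else 0.0 for v in values)
    return np.asarray(parts, dtype=np.float32)    # concatenated encoding

vec = encode_knobs({"shared_buffers_mb": 4096, "work_mem_mb": 64,
                    "autovacuum": "on"})
```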
§ UNCERTAINTY-AWARE KNOB TUNING Based on the high-quality feature representation, we aim to predict the category label of a given knob configuration. With similar historical knob configurations available, our method can directly predict the latency category of a knob configuration to enhance knob tuning. We introduce the uncertainty-aware knob classifier in Section <ref> and the specific uncertainty-aware knob tuning method in Section <ref>. §.§ Uncertainty-aware Knob Classifier In this section, we introduce the uncertainty-aware knob classifier, which is used to predict the query latency distribution. Basically, we concatenate the query embedding vector and the knob configuration vector to form the input of the classifier. Thanks to our transferable encoding, the classifier can be directly applied to various tuning tasks on the same DBMS version. Gaussian category label. After obtaining the classifier input, we design the output label, which faces two challenges. I: how do we obtain the training labels of the classifier? A fundamental problem is which configurations should be grouped into one category. A straightforward method is to make multiple evaluations for each knob configuration and divide the knob configurations according to statistics such as mean and variance. However, this method is time-consuming and impractical in real-world tuning tasks. II: how many categories should we design? The number of categories is a key factor affecting the performance of the classifier. On the one hand, too few categories may fail to distinguish different knob configurations, so that distinct knob configurations are assigned to the same category. On the other hand, with too many categories, the classifier may underfit the training data, resulting in a low prediction hit rate. To address these challenges, we design Gaussian category labels based on the Gaussian mixture model (GMM) <cit.>. Specifically, we assume that the joint query latency distribution over knob configurations follows a Gaussian mixture. We make this assumption based on two aspects. (1) Knob tuning aspect: multiple existing knob tuning methods <cit.> design Gaussian-based surrogate models to fit the relationship between a knob configuration and the corresponding performance, indicating that the performance of knob configurations may follow a Gaussian distribution. (2) Query estimation aspect: existing methods <cit.> make similar assumptions to predict cardinality and query latency, showing that the performance of the majority of queries follows a Gaussian distribution in practical database management systems. Thus, we consider a Gaussian mixture a suitable model for the joint latency distribution of knob configurations. This addresses the first challenge: we fit a Gaussian mixture model to the latency observations of the knob configurations and obtain the category label for every knob configuration according to the clustering results of the GMM. As shown in Figure <ref>, we plot the uncertainty distribution of different queries under 300 knob configurations; these points are well represented by a Gaussian mixture. Once a subset of points has been evaluated, we can predict the remaining knob configurations based on the fitted mixture. The GMM design also addresses the second challenge. In our classifier model, a small number of output bits can represent multiple categories, since the number of categories increases exponentially with the number of mixture components, i.e., |C| = 2^n, where n is the number of mixture components in the GMM and |C| is the number of categories. For example, with 3 output bits we can represent eight category labels, i.e., [0,0,0], [0,0,1], ..., [1,1,1]. Moreover, the example in Figure <ref> shows that although we evaluate 300 knob configurations, only a few categories occur for each query. Thus, a limited number of output bits is sufficient to represent the uncertainty distribution of knob configurations.
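To make the labeling procedure concrete, the following sketch derives Gaussian category labels with scikit-learn's GaussianMixture, assigning each knob configuration a binary membership vector over the fitted components. The function name, the toy data, and the choice of three components are illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gaussian_category_labels(latencies: dict, n_components: int = 3) -> dict:
    """latencies: {config_id: [observed latencies of one query]}.
    Returns a binary membership label over GMM components per config."""
    samples = np.concatenate(list(latencies.values())).reshape(-1, 1)
    gmm = GaussianMixture(n_components=n_components, random_state=0).fit(samples)
    labels = {}
    for cid, obs in latencies.items():
        comp = gmm.predict(np.asarray(obs).reshape(-1, 1))
        bits = np.zeros(n_components, dtype=int)
        bits[np.unique(comp)] = 1          # e.g. [1, 0, 1]
        labels[cid] = bits
    return labels

# Configurations whose observations fall under the same components share a label.
labels = gaussian_category_labels({"k1": [10.2, 10.5], "k2": [20.1],
                                   "k3": [10.3, 20.4]})
```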
Knob Classifier. Based on the labels, we propose the architecture of the uncertainty-aware knob classifier. Its input is the concatenated vector of the query embedding and the knob configuration; its output is the binary category label of the knob configuration. The classifier consists of three fully connected layers with the LeakyReLU activation function <cit.>, and we utilize the cross-entropy loss function <cit.> to optimize the model during training. Our classifier is a simple neural network, which is easy to train and to transfer across different tuning tasks. This simple structure benefits from our simple classification problem definition and feature representation learning: these preprocessing steps allow the classifier to focus on uncertainty classification over high-quality embedding vectors and knob encodings, greatly reducing the learning burden of the prediction model.
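A minimal PyTorch sketch of this classifier is given below. Since a label such as [1,0,1] is multi-label rather than one-of-K, we interpret the cross-entropy objective as a per-bit binary cross-entropy (BCEWithLogitsLoss); the dimensions and names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class KnobUncertaintyClassifier(nn.Module):
    """Three fully connected layers with LeakyReLU. Input: concatenation
    of query embedding and encoded knob configuration. Output: one logit
    per GMM component, i.e. the bits of the binary category label."""
    def __init__(self, in_dim: int, n_bits: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.LeakyReLU(),
            nn.Linear(hidden, hidden), nn.LeakyReLU(),
            nn.Linear(hidden, n_bits),
        )

    def forward(self, query_emb, knob_vec):
        return self.net(torch.cat([query_emb, knob_vec], dim=-1))

model = KnobUncertaintyClassifier(in_dim=64 + 10, n_bits=3)
criterion = nn.BCEWithLogitsLoss()   # per-bit cross-entropy for multi-label bits
logits = model(torch.randn(8, 64), torch.rand(8, 10))
loss = criterion(logits, torch.randint(0, 2, (8, 3)).float())
loss.backward()
```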
§.§ Uncertainty-aware Knob Tuning Knob Tuning. For a tuning task, our knob classifier can be applied directly to attach labels to knob configurations. The historical knob configurations with the same category can then be utilized to predict the performance of a new knob configuration. In particular, in this prediction process, we only utilize historical evaluations of the same tuning task to calculate the estimated performance. Thus, even in model transfer scenarios, our method keeps the predicted performance within the scope of the actual query distribution. Typically, the knob tuning process consists of two main parts: the initialization part, which is responsible for constructing the knob tuner, and the iterative part, which aims to iteratively find the optimal knob configuration. In this section, we integrate our knob classifier into both the initialization process and the iterative optimization process to enhance knob tuning. Specifically, we introduce the uncertainty-aware knob tuning method in Algorithm <ref>. The input of our algorithm consists of four parts: the workload W, the number of tuning iterations N, the knob candidate space S, and the trained uncertainty-aware knob classifier KnobCF. In the initialization part, we first generate the initial knob configurations using Latin Hypercube Sampling (LHS) <cit.>; following existing work <cit.>, the sampled points P are used to initialize the tuning model. Lines 4-11 then complete the single-point evaluations of these sampled knob configurations. Importantly, even though we use single-point evaluations for knob configurations, our method still captures the joint uncertainty distribution through the multiple evaluations within the same category. Meanwhile, Line 8 predicts the category label for each query, and Line 10 records the corresponding label and time information, which will be used to estimate future knob configurations. Line 12 then initializes the tuning model based on the knob-performance datasets. In the iterative knob tuning process, Line 14 recommends the next knob configuration based on the knob tuning model, such as the surrogate model of Bayesian Optimization <cit.> or the agent of reinforcement learning <cit.>. Lines 16-23 then evaluate the performance of the recommended knob configuration p. For each query q, Line 17 predicts the corresponding category label using the trained uncertainty-aware knob classifier. If the category label has appeared in the historical evaluations, Line 19 directly estimates the query latency as the mean value of the historical evaluations, which represents the stable performance of that category. Otherwise, Line 21 executes the query with the recommended knob configuration, and Line 22 updates the label set. Line 24 then updates the knob-performance datasets and the knob tuning model, respectively. Finally, the algorithm repeats the iterative knob tuning process until the number of iterations is reached and returns the optimal knob configuration with minimal total latency. Note that the uncertainty-aware knob tuning method is a general framework that can be easily integrated into existing knob tuning methods such as GPTuner <cit.> and CDBTune <cit.>. Regarding time consumption, our method only requires some extra model inference time, which can be accelerated by GPU, and label matching time over a limited number of categories, both of which are significantly lower than the query execution time. Model Transfer. Another important question is how to obtain a well-trained knob classifier for tuning tasks. The basic idea is to gather a sufficient training set from the current tuning task to train the knob classifier. Although this yields a high-quality knob classifier for the current tuning task, obtaining sufficient training data is time-consuming and impractical for a real-world tuning task. Thus, we consider obtaining the trained knob classifier from historical tuning tasks. As introduced in Section <ref>, our method provides a transferable feature representation, including transferable query embedding and knob configuration representations. Thus, we can directly utilize historical tuning tasks to train the knob classifier, achieve zero-shot model transfer, and apply the trained model to unseen tuning tasks. Although zero-shot model transfer is a good choice for saving time, the zero-shot knob classifier still requires high-quality and representative historical tuning tasks for sufficient pretraining. If the current tuning task differs substantially from the historical tuning tasks, the performance of the zero-shot knob classifier may degrade. In addition, the zero-shot knob classifier cannot provide specialized estimation for a specific tuning task because it is trained on multiple tuning tasks. To address these problems, we consider a few-shot model transfer method instead of a zero-shot one. Benefiting from our decoupled classifier model, we can utilize different transfer mechanisms for the feature representation learning model and the uncertainty prediction model. First, feature representation learning is responsible for processing diverse queries and obtaining high-quality embedding features, which requires sufficient training data; we therefore do not adjust the representation learning model for the current tuning task. Second, our uncertainty prediction model is lightweight enough to allow few-shot retraining: we utilize the evaluations of the initialization configurations and of the first iterations of knob tuning to fine-tune the uncertainty prediction model. Importantly, we do not perform extra evaluations for the few-shot retraining; instead, we only reuse the evaluations of the tuning process itself, which is highly time-efficient in practical tuning tasks. We present an example to illustrate how to achieve few-shot training for KnobCF. Given two historical tuning tasks, we have evaluation observations O_1 = {(k_1, q_1, c_1), (k_2, q_2, c_2), ...} and O_2 = {(k_1, q_1, c_1), (k_2, q_2, c_2), ...}, where k_i is the knob configuration, q_i is the query, and c_i is the corresponding query latency.
We utilize these sufficient historical evaluations to complete the KnobCF pretraining, including training the feature representation learning model and the uncertainty prediction model. Then, for the current tuning task, we obtain the evaluation observations of the initialization phase, O_init = {(k_1, q_1, c_1), (k_2, q_2, c_2), ...}, as well as the evaluation observations of the first 30 iterations, O_30 = {(k_1, q_1, c_1), (k_2, q_2, c_2), ...}. We then utilize O_init and O_30 to fine-tune the uncertainty prediction model of KnobCF, improving the performance of the classifier on the current tuning task. In particular, our pretraining and fine-tuning process does not collect extra training data, which makes it efficient in practical tuning tasks. § EXPERIMENT In this section, we conduct extensive experiments to evaluate the effectiveness of our KnobCF, including the experiment settings in Section <ref>, the evaluation of our uncertainty-aware knob classifier in Section <ref>, the evaluation of our uncertainty-aware knob tuning method in Section <ref>, and the robustness analysis in Section <ref>. §.§ Experiment Setup §.§.§ Databases & Workloads Our evaluation was conducted on four popular open-source benchmarks: TPCH, JOB-light, YCSB, and TPCC. In the TPCH benchmark (scale factor = 10), we set the random seed to 1 and generate 22 instance queries based on the 22 templates. The JOB-light workload, based on the IMDB dataset (size = 7049 MB), contains 70 queries generated from the job-light workload. For the YCSB benchmark, we generate 1000 queries for each of the two workloads YCSB-a and YCSB-b, using the settings `recordcount = 1000000` and `operationcount = 1000`; the YCSB-a workload has a read/write ratio of 50%/50%, and the YCSB-b workload has a read/write ratio of 95%/5%. For the TPCC benchmark, 256 queries were generated with `warehouses = 100` and `loadWorkers = 4`. Since we make assumptions about the joint query performance distribution (a Gaussian mixture) under different knob configurations, we conducted a normality analysis on all queries across the four benchmarks to check whether the queries follow a normal distribution. We then use the queries that satisfy this test in our evaluations and publish the generated workloads in our GitHub repository [https://github.com/AvatarTwi/KnobCF/benchmarks]. §.§.§ Hardware Our experiments are conducted on Windows 11, on a machine equipped with an Intel Core i7-12700H processor, 40GB of memory, 2.5TB of disk space, and an NVIDIA GeForce 3060Ti graphics card. The PostgreSQL 14.4 database system is deployed on a Linux virtual machine with 4 cores, 16GB of memory, and 256GB of disk space. §.§.§ Implementation We implement our knob classifier in Python 3 based on PyTorch-CUDA <cit.> and the query graph embedding source code provided by Hilprecht et al. <cit.>. The source code and benchmarks are publicly available at https://github.com/AvatarTwi/KnobCF. §.§ Evaluations of Uncertainty-aware Knob Classifier In this section, we evaluate the uncertainty-aware knob classifier on extensive benchmarks, including the baselines in Section <ref>, the evaluation metrics in Section <ref>, and the experimental results in Section <ref>. §.§.§ Baselines Our method is the first work to study query uncertainty-aware knob configuration estimation, so we cannot directly compare it with existing query cost estimation methods. In our experiments, we instead replace our query embedding model with existing query encoding methods to evaluate the effectiveness of our method.
Existing query plan encoding methods fall into three main types: (1) Tree-CNN, such as Bao <cit.>; (2) Tree-RNN, such as Plan-Cost <cit.>; and (3) Tree-Transformer, such as QueryFormer <cit.>. Among these, QueryFormer proposes a general encoding method for various database tasks and outperforms the other plan encoding methods <cit.> in cost estimation tasks due to its attention mechanism. Thus, we choose QueryFormer as the baseline, called KnobCF(QueryFormer), to evaluate the effectiveness of our knob classifier. §.§.§ Evaluation Metrics We evaluate our uncertainty-aware knob classifier based on the following popular metrics, which are also widely used in existing works <cit.>. Accuracy. Accuracy, shown in Formula <ref>, is the proportion of correctly predicted knob configurations among all knob configurations; it indicates the classifier's overall prediction performance. accuracy = (TP + TN)/(TP + TN + FP + FN). Precision. Precision, shown in Formula <ref>, is the proportion of correctly matched knob configurations among all matched knob configurations; it indicates the classifier's ability to predict matched knob configurations correctly. precision = TP/(TP + FP). Recall. Recall, shown in Formula <ref>, is the proportion of correctly matched knob configurations among all actually matched knob configurations; it indicates the classifier's ability to capture actually matched knob configurations. recall = TP/(TP + FN). Time Consumption. Our training data comes entirely from evaluations of the tuning tasks, so we exclude the data collection time and use the model training time to characterize the time efficiency of model preparation. Inference Efficiency. Inference efficiency refers to the time taken to predict the category label of a given query plan, an important metric for the practical application of our uncertainty-aware knob classifier. We report the inference throughput in queries per second. §.§.§ Experimental Results In this section, we evaluate the effectiveness of our uncertainty-aware knob classifier based on the above metrics. Since QueryFormer utilizes the original features of the query plan and cannot achieve model transfer, we make this comparison in a static-workload setting, i.e., we collect labeled data for each workload based on its tuning tasks and use 80% as training data and 20% as test data. Accuracy, Precision & Recall. Table <ref> shows the prediction accuracy, precision, and recall of our KnobCF and of KnobCF(QueryFormer) on the TPCC, YCSB, JOB-light, and TPCH workloads. In general, our KnobCF achieves an average accuracy of 0.921, an average precision of 0.911, and an average recall of 0.811, while KnobCF(QueryFormer) achieves 0.920, 0.941, and 0.738, respectively. Specifically, KnobCF slightly outperforms KnobCF(QueryFormer) with an accuracy of 0.963 on TPCC and 0.969 on YCSB, compared to 0.948 and 0.944 for the QueryFormer-based method. For JOB-light and TPCH, we also achieve competitive accuracies of 0.812 and 0.872, compared to 0.840 and 0.883 for the QueryFormer-based method. This is because QueryFormer utilizes the original features of the query plan and designs a self-attention mechanism that captures more useful information about long-path relationships in query plans, whereas our KnobCF utilizes graph embedding to process diverse query structures and focuses on learning node relationships.
Overall, even with the transferable feature representation, our KnobCF achieves competitive prediction accuracy, precision, and recall compared to KnobCF(QueryFormer). Time Consumption. As shown in Table <ref>, the average training time of our KnobCF is 68.8 seconds, similar to the 71.4 seconds of KnobCF(QueryFormer). In particular, on TPCH, which has only 22 queries, our KnobCF completes model training within 18 seconds. Since the time consumption of training data collection is excluded, the training time is directly determined by the GPU and the size of the training set. Our KnobCF achieves second-level training times, which brings high time efficiency for practical tuning tasks. Inference Efficiency. As depicted in Figure <ref>, we present the inference throughput of our KnobCF, covering both query embedding inference and uncertainty category inference. We observe that the query embedding inference throughput is directly related to the complexity of the query structure: KnobCF reaches an inference throughput of 12331 queries per second on the YCSB-a workload but only 2174 queries per second on the TPCH workload. However, the query embedding vector can be reused across different knob configurations, which greatly reduces the query embedding time. Meanwhile, the uncertainty category inference throughput is stable across workloads, with an average of 20093 queries per second, because the uncertainty prediction model is a simple neural network. This simple structure benefits from our uncertainty category definition and high-quality representation learning, greatly improving inference efficiency. §.§ Evaluations of Uncertainty-aware Knob Tuning In this section, we evaluate the effectiveness of our uncertainty-aware knob tuning method, including the baselines in Section <ref>, the evaluation metrics in Section <ref>, and the experimental results in Section <ref>. §.§.§ Baselines In our experiments, we utilize two state-of-the-art knob tuning optimizers: Bayesian Optimization (BO) <cit.> and the Deep Deterministic Policy Gradient (DDPG) <cit.>. According to various works <cit.>, these optimizers are proven to work well in most situations. For the BO optimizer, we utilize SMAC3 <cit.> as implemented by Lindauer et al. For the DDPG optimizer, we utilize the implementation provided by LlamaTune <cit.>, which uses the original neural network architecture of CDBTune <cit.>. Further, we integrate our KnobCF into the classical knob tuning optimizer DDPG to evaluate the effectiveness of our method. We design two kinds of implementations: static uncertainty-aware knob tuning, called DDPG(KnobCF), and few-shot transfer knob tuning, called DDPG(few-shot). For the static setting, we collect 300 evaluations of the current tuning task to train our knob classifier. For the few-shot setting, we utilize the evaluations of historical tuning tasks for pretraining and the evaluations of the first 30 iterations of the current tuning task to fine-tune our knob classifier. §.§.§ Evaluation Metrics We utilize three typical metrics, widely used in existing works <cit.>, to evaluate the effectiveness of our uncertainty-aware knob tuning method. DBMS Throughput: the number of queries that can be executed per second.
Uncertainty Latency: we execute the full workload 10 times under the optimal knob configuration and use the 90th-percentile tail latency distribution as the uncertainty performance of the workload. Average Time per Iteration: this metric, shown in Formula <ref>, is the average time of one knob tuning iteration, T = (1/n) ∑_i=1^n T_i, where T_i is the time consumption of the i-th iteration; it reflects the time efficiency of our uncertainty-aware knob tuning. §.§.§ Experimental Results In this section, we present the experimental results of our uncertainty-aware tuning on the YCSB-a and TPCC workloads. Latency Distribution & Time Efficiency. As shown in Figure <ref>, we present the 90th-percentile tail latency distribution and the average time per iteration of the different tuners on the YCSB-a and TPCC workloads. On YCSB-a, KnobCF(few-shot) is pre-trained on the historical tuning tasks of TPCH, TPCC, and JOB-light, and fine-tuned with the evaluations of the first 30 iterations on YCSB-a. We then integrate KnobCF and KnobCF(few-shot) into the DDPG optimizer to implement knob tuning on YCSB-a. For workload latency, we observe average latencies of 82 ms and 80 ms for DDPG(KnobCF) and DDPG(few-shot), outperforming the 110 ms and 95 ms of DDPG and BO. This indicates that our KnobCF achieves more accurate latency estimation, yielding lower workload latency. For the average time per iteration, DDPG(KnobCF) and DDPG(few-shot) achieve the best values of 75 and 80 seconds, significantly better than the 125 and 100 seconds of DDPG and BO. This indicates that our KnobCF quickly completes the latency prediction, bringing high time efficiency to practical tuning tasks. On TPCC, KnobCF(few-shot) is pre-trained on the historical tuning tasks of TPCH, YCSB-a, and JOB-light, and fine-tuned with the evaluations of the first 30 iterations on TPCC. We observe similar effects: DDPG(KnobCF) and DDPG(few-shot) achieve the best average latencies of 17 ms and 22 ms, while DDPG and BO achieve 27 ms and 24 ms, respectively. DDPG(KnobCF) and DDPG(few-shot) also achieve the best average times per iteration of 26 s and 27 s, respectively. Overall, our KnobCF effectively guides the knob tuning and largely saves the evaluation time of the tuning tasks. Maximum Throughput Trends Over Time. Figure <ref> shows the maximum throughput trends over time on the YCSB-a and TPCC workloads. On YCSB-a, DDPG(KnobCF) and DDPG(few-shot) achieve maximum throughput competitive with the DDPG and BO methods at lower time consumption (approximately 80%-85% of theirs). This indicates that our KnobCF effectively reduces the time consumption of knob tuning evaluations while maintaining the tuning results. On TPCC, we observe similar improvements: DDPG(KnobCF) and DDPG(few-shot) achieve competitive maximum throughput with only 60%-70% of the time consumption of DDPG and BO. This time efficiency comes from KnobCF, which efficiently predicts the query latency distribution from historical evaluations instead of executing the queries repeatedly, whereas traditional knob tuners <cit.> typically re-execute all queries without considering the uncertainty distribution.
§.§ Robustness Analysis In this section, we evaluate the robustness of our uncertainty-aware knob classifier and uncertainty-aware knob tuning method while varying the classifier output dimension from 8 to 16. In our classifier, the output dimension is a crucial hyper-parameter, as it directly determines the number of query uncertainty categories and affects the quality and efficiency of knob tuning. §.§.§ The Effectiveness of KnobCF under Different Output Dimensions In this section, we conduct the robustness evaluation of KnobCF(few-shot) on YCSB-a. We utilize TPCH, JOB-light, and TPCC for the pretraining of KnobCF and then utilize the first 30 iterations of the YCSB-a tuning to fine-tune our knob classifier under output dimensions of 8, 10, 12, 14, and 16. As before, we evaluate the prediction accuracy, training time, and inference throughput of KnobCF(few-shot). Prediction Accuracy: As depicted in Figure <ref>, the red line in the left panel shows how the prediction accuracy changes with the output dimension on YCSB-a. Our KnobCF(few-shot) maintains a high prediction accuracy above 0.9 across the output dimensions. Even with an output dimension of 8, it still achieves a prediction accuracy of 0.92, only slightly lower than with an output dimension of 16. This indicates that KnobCF(few-shot) achieves stable and reliable prediction performance across output dimensions. Training Time: As shown in Figure <ref>, the blue line in the left panel shows the training time of KnobCF(few-shot) for the different dimensions on YCSB-a. The average few-shot training time (13 s) is significantly shorter than that of the general KnobCF (160 seconds, shown in Table <ref>). Specifically, for every dimension, the few-shot training time is under 20 seconds, accounting for less than 0.1% of the total knob tuning time. This is because our KnobCF utilizes the evaluations of the first 30 tuning iterations to complete model training, without any extra time spent collecting training data. The stable training time makes KnobCF(few-shot) efficient and practical in real-world knob tuning tasks. Inference Throughput: The right panel of Figure <ref> presents the uncertainty category inference throughput of KnobCF(few-shot) for the different dimensions. Although the inference throughput decreases as the output dimension increases, KnobCF(few-shot) still achieves a high inference throughput above 17000 queries per second. This inference time is significantly lower than the query execution time and can be further accelerated by GPU. §.§.§ The Effect of Different Output Dimensions in Knob Tuning In this section, we evaluate the tuning robustness of DDPG(few-shot) on YCSB-a under different output dimensions, considering the uncertainty latency, the average runtime of tuning iterations, and the maximum throughput trends over time. Figure <ref> shows the uncertainty workload latency of the optimal knob configuration and the average runtime of tuning iterations of DDPG(few-shot) with output dimensions of 8, 10, 12, 14, and 16. We observe that DDPG(few-shot) achieves similar uncertainty workload latencies, 78 ms with dimension 10 and 79 ms with dimension 16, and that the average runtime of tuning iterations is stable across the output dimensions. This indicates that our KnobCF(few-shot) makes stable uncertainty predictions to enhance the knob tuning.
Figure <ref> shows the throughput increase of DDPG(few-shot) over time under different output dimensions. We observe similar tuning trends for all output dimensions, indicating that a limited number of output dimensions is sufficient to represent the uncertainty distribution of knob configurations and to enable effective knob tuning. § RELATED WORK In this section, we review existing works from two aspects: knob tuning methods and query cost estimation methods. §.§ Knob Tuning Methods With the development of cloud databases, automatic knob tuning plays a crucial role in database performance optimization. Existing automatic database knob tuning methods <cit.> can be roughly classified into three categories: heuristic methods, Bayesian optimization methods, and reinforcement learning methods. Specifically, Sullivan et al. <cit.> propose heuristic knob tuning, which randomly samples a small number of knob settings to automate the exploration of appropriate configurations. Subsequently, researchers proposed Bayesian-optimizer-based knob tuners <cit.>, such as ResTune <cit.> and OnlineTune <cit.>, which recommend suitable knob configurations through iterative sampling and evaluation of knob settings. Compared to BO-based tuning methods, reinforcement learning-based knob tuning methods do not require training data preparation. For example, Zhang et al. propose CDBTune <cit.>, which builds a reinforcement learning tuning model based on DDPG; during the iterations, an agent recommends tuning operations based on state features and updates the tuning strategy through rewards to optimize database performance. Li et al. <cit.> proposed QTune, a query-based reinforcement learning tuner that integrates query and workload features into the tuning model, allowing better adaptation to new scenarios. Further, some improved reinforcement learning methods have been proposed to reduce tuning time. For instance, Wang et al. <cit.> propose UDO, a universal database optimizer based on reinforcement learning, which reduces the number of database restarts during knob tuning by reorganizing the order of evaluations. Cai et al. <cit.> propose HUNTER, which preemptively reduces the configuration search space through knob selection and a genetic algorithm, further reducing the time consumption of knob tuning. Additionally, DB-BERT, proposed by Trummer et al. <cit.>, uses a pre-trained BERT language model to extract useful tuning hints from database manuals and search engines, combined with a simpler DDQN algorithm to select knobs and recommend settings. In summary, existing knob tuning methods have made great progress in reducing the knob candidate space and making effective recommendations. However, these methods still face the challenges of useless evaluations, overestimation, and underestimation. In this paper, we address these challenges from the perspective of the query uncertainty distribution. §.§ Query Cost Estimation Query cost estimation <cit.> occupies an important position in database management systems: it is a key part of database tuning strategies and plays a vital role in query optimization, index optimization, storage efficiency, etc. Early on, to improve the efficiency of query optimization, researchers proposed statistical methods to estimate query costs without execution. For example, Li et al. <cit.> propose an operator-based statistical technique to estimate query execution time.
Recently, the database community has attempted to use deep learning models <cit.> to improve the accuracy of query cost estimation. For example, Sun et al. propose an end-to-end query cost estimation framework <cit.> that uses a tree-structured model to learn the relationships between queries and performance labels. Marcus et al. <cit.> analyze the query plan tree structure and propose a plan-structured deep neural network for query cost estimation. Hilprecht et al. designed a zero-shot cost model <cit.> for query cost estimation, which has the advantage of generalizing to unseen databases. Furthermore, some studies have proposed uncertainty-aware cost estimation methods. For example, Wu et al. <cit.> introduce the concept of query uncertainty, suggesting that the performance of a query can be represented by a distribution due to runtime influences (e.g., random access). Similarly, Dorn et al. <cit.> propose that configurable software systems exhibit performance estimation uncertainty. Different from existing query cost estimation methods, we consider query uncertainty estimation from the perspective of knob configurations. § CONCLUSION KnobCF effectively tackles the useless evaluations and the overestimation and underestimation issues in the traditional knob tuning framework. It is a general framework that can be applied to various DBMSs and tuning tasks. For useless evaluations, we propose the uncertainty-aware knob classifier, which predicts the category label of a knob configuration according to historical evaluations. For overestimation and underestimation, we model the joint performance distribution of knob configurations instead of relying on single-point estimation. Importantly, KnobCF is based on transferable feature representation learning, enabling effective model transfer across tuning tasks. The experimental results demonstrate that KnobCF significantly reduces the knob tuning time and improves tuning efficiency compared to state-of-the-art methods. This work was supported by the National Natural Science Foundation of China (No. 62232005, No. 62202126, No. 92267203).
http://arxiv.org/abs/2407.02916v1
20240703084550
How does a low surface brightness galaxy form spiral arms?
[ "Ganesh Narayanan", "Anagha A. G.", "Arunima Banerjee" ]
astro-ph.GA
[ "astro-ph.GA" ]
Department of Physics, Indian Institute of Science Education and Research (IISER) Tirupati, Tirupati - 517507, India Department of Physics, Indian Institute of Science Education and Research (IISER) Tirupati, Tirupati - 517507, India Department of Physics, Indian Institute of Science Education and Research (IISER) Tirupati, Tirupati - 517507, India § ABSTRACT The formation and evolution of spiral arms in low surface brightness galaxies (LSBs) are not well understood. We study the dynamics of spiral arms in two prototypical LSBs, F568-V1 and F568-01, using both analytical models and N-body + hydrodynamical simulations. We first consider the disk as a 2-component system of gravitationally-coupled stars and gas in the force field of a spherical dark matter halo, subjected to local, non-axisymmetric perturbations. However, no local spirals are formed. We next assume the disk to be a 1-component system of stars in the net gravitational potential of a galaxy with a spherical dark matter halo perturbed by a global m=2 instability. In this case, the growth time for spiral formation was short, equal to 0.78 and 0.96 Gyr, respectively, corresponding to a few dynamical times of the galaxies. Finally, we simulate the LSBs using the N-body + hydrodynamical simulation code RAMSES. Our results show that a quadrupolar field associated with an oblate halo with an axial ratio of 0.7 is necessary to drive a long-lived global spiral in the LSB disks. Further, feedback corresponding to a supernova mass fraction of ∼ 0.05 is essential to comply with the observed stellar surface density. The simulated spiral survives for about ten dynamical times, and the average pattern speed lies between 10 and 15 kms^-1kpc^-1. The spiral arm thus formed is a transient global pattern driven by the tidal field of the oblate dark matter halo. § INTRODUCTION Low Surface Brightness galaxies (LSBs) are galaxies with central B-band surface brightness μ_B(0) > 22 mag arcsec^-2 <cit.>. LSBs are gas-rich <cit.> and dark matter dominated <cit.>, and are considered to be under-evolved systems due to the low values of their metallicities and star formation rates. Moreover, they have a high gas mass fraction compared to High Surface Brightness galaxies (HSBs) <cit.>. They are mostly disk galaxies and may be located in void or isolated environments <cit.>. The spiral arms constitute the most notable characteristic of face-on disk galaxies, though not all disk galaxies exhibit these non-axisymmetric features. LSBs mostly exhibit weak and sometimes fragmented spiral structures in their stellar disks <cit.>, the formation and evolution of which have not been studied systematically. From a dynamical perspective, the formation of spiral arms in galactic disks is primarily triggered by disk instabilities. See, for example, <cit.>. <cit.> analytically studied the response of a 1-component, self-gravitating and differentially-rotating, infinitesimally thin fluid disk to local axi-symmetric perturbations and introduced the Toomre Q parameter, such that Q > 1 indicates a stable disk. The formalism was later generalized to more realistic multi-component fluid disks, with or without a finite disk thickness <cit.>. The response of a 1-component, sheared disk to local, non-axisymmetric perturbations under the WKB or tight-winding approximation was pioneered by <cit.> and <cit.>.
<cit.> coined the term swing amplification for this phenomenon, since the growth is maximum as a wave swings from a leading to a trailing position. <cit.> generalized the study to a 2-component galactic disk model with application to the Galaxy. <cit.>, on the other hand, introduced the density wave theory to study the global spiral modes in the galactic disk and derived the dispersion relations using the tight-winding approximation. <cit.> studied the normal modes in a stellar disk modeled by the collisionless Boltzmann equation and found that the unstable modes are regulated by the dark matter density distribution in the galaxy. The disk dynamics of LSBs is dominated by the dark matter halo at all radii, as is evident from their mass models constructed from stellar photometry and high-resolution HI 21cm observations <cit.>. In contrast, dark matter dominates the disk dynamics of ordinary spirals only at larger radii. See, for example, <cit.>. Calculating the 2-component disk stability parameter Q_RW proposed by <cit.> for a sub-sample of LSBs from <cit.>, <cit.> showed that the low star formation rates in LSBs could be explained by the high dynamical stability of their disks against local, axi-symmetric perturbations. Using the 2-fluid disk stability parameter of <cit.>, <cit.> found that the galactic disk of the LSB superthin galaxy UGC7321 is stable against both local, axisymmetric perturbations and local, non-axisymmetric perturbations. <cit.> studied the response of an LSB galactic disk to global, non-axisymmetric perturbations, and found that the dark matter halo does not play any significant role in suppressing the global instabilities. In fact, they attributed the lack of strong spiral features in LSBs to their sparse environment. However, according to Galaxy Zoo 2, the bar fraction in LSBs is ∼ 0.2 whereas that in HSBs is ∼ 0.3 (see <cit.>). Therefore, the prevalence of non-axisymmetric features in LSBs is not insignificant compared to HSBs. Using N-body simulations, <cit.> showed that LSBs are quite stable against the formation of bars or non-axisymmetric instabilities. They also used the analytical parameter introduced by <cit.> to show the stability of LSBs against a global non-axisymmetric mode. Using N-body simulations, <cit.> argued that a dynamical coupling between the disc and the dark matter halo of high angular momentum low surface brightness galaxies may drive disk density waves via the co-rotation resonance, even in disks with a high value of the Toomre Q parameter. Further, the halo scaleheight was found to regulate the pitch angle of the spiral arm, with more loosely-wound arms driven by smaller scaleheights. In addition, hydrodynamical simulations by <cit.> indicated that LSBs are stable against bar formation, attributing this to their lack of self-gravity. Using TNG100, <cit.> showed that the high specific angular momentum of the stellar disk and the high spin of the dark matter halo in LSBs are responsible for their extended nature and also lead to the absence of massive central black holes. Thus, analytical and numerical studies have shown that LSBs are stable against both local axisymmetric and non-axisymmetric instabilities. Yet we observe that these galaxies are not devoid of spiral features; the origin of spiral structure in LSBs is therefore a puzzle. These galaxies are often found in void or isolated environments, and so the spirals must form via self-excited mechanisms.
In several numerical simulation studies, recurrent spiral activity is seen in unbarred, isolated disk galaxies. This spiral activity fades over time due to the random stellar motion in the disk, which heats up the stellar disk further <cit.>. In fact, N-body simulations always form transient spirals, which do not obey density wave theory. Interestingly, <cit.> found that long-lived spiral modes are not reproducible in simulations. <cit.> simulated recurrent transient spiral patterns and showed that a recurrent global spiral structure can be explained by the superposition of several modes in isolated, unbarred disk galaxy models. In this paper, we try to understand the origin and evolution of the observed spiral features in two typical LSBs, F568-V1 and F568-01, using theoretical models of swing amplification and density waves, as well as N-body + hydrodynamical simulations. The organization of the paper is as follows: in Section 2, we describe the models we use for the galactic spiral arms; in Section 3, the target galaxies; in Section 4, the input parameters for the different models; followed by the results and the conclusions in Sections 5 and 6, respectively. § MODELLING OF GALACTIC SPIRAL ARMS §.§ Local Spiral Arms A galactic disk can be stable against local, axi-symmetric perturbations, as indicated by a high value of the Toomre Q parameter <cit.>, and yet be unstable against the growth of local, non-axisymmetric instabilities. The growth of these local, non-axisymmetric instabilities was first studied by <cit.>, and <cit.> proposed the swing amplification mechanism to explain the origin of spiral arms in the stellar disks of galaxies in response to them. The term swing amplification was coined by <cit.>, as the wave gets amplified while it swings from a leading to a trailing position due to the differential rotation of the galaxy. The swing amplification mechanism can only be used to understand local or flocculent spiral arms, and not directly the global spiral structure <cit.>. Further, N-body simulations with spherical halos produce recurrent and transient spirals which are mostly excited by the swing amplification mechanism <cit.>. Following <cit.>, we model the galactic disk as a 2-fluid disk of stars and gas with zero vertical thickness, differentially rotating and gravitationally coupled to each other, in the force field of a spherical, pseudo-isothermal dark matter halo, and study the linear response of the disk to local, non-axisymmetric perturbations. We present here the final coupled differential equations describing the growth of θ_i = δμ_i/μ_0i, a dimensionless quantity which represents the ratio of the perturbed to the unperturbed surface density of the i-th component (i = stars, gas), as a function of τ, a dimensionless measure of time in a sheared co-ordinate system. Here the sheared co-ordinate system is defined as x' = x, y' = y - 2Axt, z' = z, t' = t, where A = (1/2)(V_rot/R - dV_rot/dR), and τ is given by τ = 2At' - k_x/k_y, where k_x and k_y are respectively the wave numbers in the x and y directions; τ is hence a measure of the dynamical time scale of the disk. For the stellar component, d^2θ_s/dτ^2 - (2τ/(1+τ^2)) dθ_s/dτ + θ_s [ξ^2 + 2(η-2)/(η(1+τ^2)) + (1+τ^2) Q_s^2 (1-ϵ)^2 ξ^2/(4χ^2)] = (ξ^2/(χ(1+τ^2)^1/2)) [θ_s(1-ϵ) + θ_g ϵ], with the analogous equation for the gas component θ_g (with Q_g and ϵ in place of Q_s and 1-ϵ). Here Q_i denotes the Toomre Q parameter for the i-th disk component, where i = stars and gas, with Q_i > 1 indicating that the i-th galactic disk component is stable against local, axi-symmetric perturbations.
η is the logarithmic shearing rate in the galactic disk, η = 2A/Ω = -(R/Ω)(dΩ/dR), and ξ^2 = (4-2η)/η^2; η < 1 and η > 1 correspond to the rising and the falling parts of the rotation curve respectively, while η = 1 represents a flat rotation curve (ξ^2 = 2). Further, ϵ is the gas fraction of the disk surface density, ϵ = Σ_g/(Σ_s + Σ_g), and χ is the wavelength λ of the perturbation in units of the critical wavelength λ_crit. The above system of equations may be solved if the 2-component disk is stable against local, axi-symmetric perturbations, which is expressed by the following criterion <cit.>: [(1-ϵ)/χ] / [1 + Q_s^2 (1-ϵ)^2/(4χ^2)] + [ϵ/χ] / [1 + Q_g^2 ϵ^2/(4χ^2)] < 1. Solution of the equations: The coupled, second-order, ordinary differential equations given by Equation (1) are solved iteratively using the fourth-order Runge-Kutta method with suitable initial conditions. There are four possible sets of initial conditions for (θ_s, θ̇_s, θ_g, θ̇_g): (1,0,0,0), (0,1,0,0), (0,0,1,0), (0,0,0,1). The magnitude of amplification also depends on the choice of the initial value of τ; therefore, τ is varied together with each initial condition, and the combination yielding the greater maximum amplification factor MAF (= (θ_i)_max/(θ_i)_ini) of stars or gas is retained. The values of input parameters like η and ξ may be calculated directly from the observed rotation curve. χ is varied in the range 1-3, as the swing amplification mechanism becomes ineffective outside this range of χ <cit.>; the χ value which results in the maximum amplification is determined by trial and error.
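The following is a minimal Python sketch of this integration scheme: a classical fourth-order Runge-Kutta march of the coupled stellar and gas equations written as a first-order system. The parameter values, the τ grid, and the assumption that the gas equation mirrors the stellar one (with Q_g and ϵ in place of Q_s and 1-ϵ) are illustrative choices, not the exact setup of the text.

```python
import numpy as np

def rhs(tau, y, Qs=1.5, Qg=1.5, eta=1.0, eps=0.1, chi=2.0):
    """y = [theta_s, dtheta_s/dtau, theta_g, dtheta_g/dtau].
    Stellar equation as in Eq. (1); the gas equation is assumed analogous."""
    xi2 = (4.0 - 2.0 * eta) / eta**2
    u = 1.0 + tau**2
    forcing = xi2 / (chi * np.sqrt(u)) * ((1 - eps) * y[0] + eps * y[2])
    cs = xi2 + 2 * (eta - 2) / (eta * u) + u * Qs**2 * (1 - eps)**2 * xi2 / (4 * chi**2)
    cg = xi2 + 2 * (eta - 2) / (eta * u) + u * Qg**2 * eps**2 * xi2 / (4 * chi**2)
    return np.array([y[1], 2 * tau / u * y[1] - cs * y[0] + forcing,
                     y[3], 2 * tau / u * y[3] - cg * y[2] + forcing])

def rk4(y0, tau0=-5.0, tau1=10.0, n=3000):
    # Classical fourth-order Runge-Kutta, as used in the text.
    h, y, tau = (tau1 - tau0) / n, np.array(y0, float), tau0
    out = [y.copy()]
    for _ in range(n):
        k1 = rhs(tau, y)
        k2 = rhs(tau + h / 2, y + h / 2 * k1)
        k3 = rhs(tau + h / 2, y + h / 2 * k2)
        k4 = rhs(tau + h, y + h * k3)
        y, tau = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4), tau + h
        out.append(y.copy())
    return np.array(out)

amps = rk4([1.0, 0.0, 0.0, 0.0])   # one of the four initial conditions
maf = amps[:, 0].max() / 1.0       # maximum amplification factor for stars
```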
§.§ Global Spiral Modes Observational studies suggest that galactic spiral arms are stable against the differential rotation of the galaxy and rotate as a rigid body with a constant pattern speed. This indicates that the spiral arms cannot be material arms, as otherwise they would wind up into tightly-wound spirals within just a few dynamical times. <cit.> and <cit.> found a way out of this winding dilemma by modeling the spiral structures as stationary density waves rather than winding material arms. Following <cit.>, we model the galaxy as a 1-component fluid disk obeying the Euler equation and responding to its self-gravity as well as to the external force fields of the gas disk and the dark matter halo, as governed by the Poisson equation. We perturb the Euler equation by a global, non-axisymmetric instability of the form f(r) + f_1(r) e^imϕ - iωt, with f(r) denoting any unperturbed quantity, f_1(r) the corresponding amplitude of the perturbed quantity, m the azimuthal wave number, and ω the complex frequency of the perturbation. The linearized equation is then given by d^2/dr^2 (w_1 + ψ_1) + A d/dr (w_1 + ψ_1) + B (w_1 + ψ_1) - (D/c_r^2) w_1 = 0, where A = (2m+1)/r + (1/Σ) dΣ/dr - (1/D) dD/dr, B = (m/r) [((1/Σ) dΣ/dr - (1/D) dD/dr)(1 - 2Ω/(ω - mΩ)) - (2/(ω - mΩ)) dΩ/dr], and D = κ^2 - (ω - mΩ)^2. Here Ω is the angular velocity of the disk, Σ the stellar surface density, and κ the epicyclic frequency derived from the rotational velocity profile as κ^2 = (2V/r^2) d(rV)/dr. w_1 is the perturbed enthalpy, Σ_1 the perturbed surface density, and the perturbed gravitational potential is given as ψ̃_1(r) = -2πG (Γ(m + 1/2)/(Γ(1/2) Γ(m+1))) [∫_0^r Σ_1(r') (r'/r)^m+1 F(1/2, m+1/2, m+1, (r'/r)^2) dr' + ∫_r^R_OUT Σ_1(r') (r/r')^m F(1/2, m+1/2, m+1, (r/r')^2) dr']. Here F denotes the hypergeometric function. The surface density and the velocity dispersion are respectively modelled as Σ = Σ_0 exp(-r/h_σ)(1 - (r/R_OUT)^2)^5 and c_z = c_z(0) exp(-r/(2h_σ))(1 - (r/R_OUT)^2)^2.5, where Σ_0 is the central surface density, h_σ the radial disk scale length, c_z(0) the central vertical dispersion, and R_OUT the truncation radius. The boundary conditions are obtained by setting the radial stellar velocity dispersion c_r = 0 at the boundaries. The equations are thus reduced to a matrix equation, and the eigen frequency spectrum is found following <cit.>. The growth time and the pattern speed of the mode are given by 2π/Im(ω) and Re(ω)/m, respectively. §.§ N-body + Hydrodynamical Simulations The galactic disk is modelled as a 2-component system of stars and gas in an NFW halo <cit.>, with all three components gravitationally coupled to each other. Both the stellar and the gas surface density radial distributions were taken to follow exponential profiles. The radial and vertical velocity dispersions are regulated using the Toomre Q and the vertical scale height respectively. Physical properties like the asymptotic rotational velocity and the surface densities of the disk components are used as constraints to generate the initial conditions. We use the publicly-available code DICE <cit.> to generate initial conditions for the galaxy in equilibrium. The initial conditions are generated using Lagrangian particles whose distributions are built with a Metropolis-Hastings Markov Chain Monte Carlo algorithm <cit.>. The simulation is run with 2×10^5 dark matter halo particles, 5×10^5 stellar disk particles, and 1×10^5 gas particles, following <cit.>. We evolve the galaxy model using the publicly available code RAMSES <cit.>. The code uses the Adaptive Mesh Refinement (AMR) technique and a tree-based data structure which allows recursive grid refinements. The hydrodynamical solver is based on a second-order Godunov method, which computes the thermal history of the fluid component with high accuracy. All the plots were generated using the publicly-available software pynbody <cit.>. § TARGETS: F568-V1 & F568-01 We choose two prototypical LSBs, F568-V1 and F568-01, studied by <cit.>, for our study. Both are seen nearly face-on, with inclination angles of ∼ 46.4^∘ and ∼ 31.9^∘ respectively, and corresponding distances of 90 Mpc and 101 Mpc. The astrometric parameters of the galaxies are obtained from Hyperleda [http://leda.univ-lyon1.fr/] <cit.> and are quoted in <ref>. The respective asymptotic rotational velocities V_rot are 99.6 and 100.9 kms^-1, confirming that their total dynamical masses are intermediate between those of dwarfs and ordinary galaxies like the Milky Way. The masses of the HI disk and the stellar disk are both of the order of 10^9 M_⊙, indicating that the self-gravity of the stars and the gas are equally important in regulating the disk dynamics. The physical properties of the galaxies are presented in <ref>. § INPUT PARAMETERS The radial profiles of both the stellar surface density in the B-band and the gas surface density were taken from <cit.>. The rotation curve was taken from <cit.>.
§.§ Local Spiral Arms & Global Spiral Mode For the Swing Amplification study, the observed gas surface density profile was fitted with a linear superposition of two Gaussian profiles not centered at zero: Σ_HI = A e^{-(r-m_1)^2/s_1^2} + B e^{-(r-m_2)^2/s_2^2}. See, for example, <cit.>. The surface density of the stars was fitted with an exponential profile Σ_0 exp(-R/R_d). For the Global Mode study, the gas surface density profile was not required. For the stellar surface density, R_OUT, the radius beyond which the stellar surface density becomes negligible, is taken to be 5 R_d. Setting the stellar surface density to zero at R_OUT is necessary to comply with the set of suitable boundary conditions of the problem as formulated <cit.>. Our conclusions remain unchanged for any value of R_OUT equal to or larger than 5 R_d. In both cases, the rotation curve was fitted with a(1-exp(-r/b)), where a and b are constants (a sketch of this fit and of the quantities derived from it follows below). The stellar velocity dispersion for our sample galaxies was not available from spectroscopic observations, and was therefore modelled analytically. To do so, we use the 2-component model of gravitationally-coupled stellar and gas disks, in the force-field of the dark matter halo and in vertical hydrostatic equilibrium, constrained by the observed stellar and gas scale heights <cit.>. The dark matter parameters were taken from the mass models with a pseudo-isothermal (PIS) halo <cit.>. Since our LSBs are not edge-on, their stellar and HI vertical scale heights are not directly measured. We assume the mean scale height of the stellar disk to be ∼ R_d/6, following the scaling relation determined from the study of a sample of edge-on spirals, which finds that the stellar scale height lies between R_d/5 and R_d/7 <cit.>. The HI scale height was taken to vary linearly between ∼ 0.2 and 1 kpc from R = 0 to 0.6 R_d, following the trend observed in a sample of edge-ons by <cit.> (see Figure 25 of their paper). We further assume the radial variation of the vertical stellar dispersion profile to be σ_z(R) = σ_z(0) exp(-R/(αR_d)), where σ_z(0) is the central vertical stellar dispersion and αR_d the scale length of the fall-off of the dispersion, R_d being the exponential disk scale length of the stellar disk. This assumed radial profile of σ_z(R) is due to <cit.>, who found it to be well-consistent with the flat radial profiles of the stellar scale heights observed in a sample of edge-on galaxies. Interestingly, we note that there is hardly any notable variation of σ_z with h_z. This is due to the fact that the dispersion σ_z varies as √(h_z), at least in the one-component model, and so the dependence of σ_z on the assumed value of h_z is weak. The HI dispersion σ_HI was assumed to remain constant with z, which is a reasonable assumption given the thin vertical structure of the gas disk, and to be constant at all R at a canonical value of 7 kms^-1. We first obtain the vertical stellar dispersion and then multiply it by a factor of 2 to obtain the central radial stellar dispersion, 0.5 being the ratio of the vertical-to-radial stellar dispersion observed in the solar neighbourhood (see, for example, <cit.>). However, recent studies have shown that this ratio may take a range of values, with a value of ∼ 0.3 more appropriate for late-type galaxies like the LSBs <cit.>. Therefore, our choice of a vertical-to-planar stellar velocity dispersion ratio of 0.5 is rather conservative.
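As an illustration of how these fitted inputs feed into the analysis, the sketch below fits the rotation-curve form a(1-exp(-r/b)) with scipy and then evaluates the epicyclic frequency κ^2 = (2V/r^2) d(rV)/dr and the shear parameter η = -(R/Ω)(dΩ/dR) on a radial grid; the data arrays and starting values are placeholders, not values from the paper:

```python
import numpy as np
from scipy.optimize import curve_fit

def vrot(r, a, b):
    # Rotation-curve form used in the text: V(r) = a * (1 - exp(-r/b))
    return a * (1.0 - np.exp(-r / b))

# Placeholder "observed" points: radius in kpc, velocity in km/s
r_obs = np.array([1, 2, 4, 6, 8, 10, 12], dtype=float)
v_obs = np.array([40, 62, 85, 94, 98, 100, 100], dtype=float)
(a, b), _ = curve_fit(vrot, r_obs, v_obs, p0=(100.0, 2.0))

r = np.linspace(0.5, 12.0, 200)
V = vrot(r, a, b)
Omega = V / r
# Epicyclic frequency: kappa^2 = (2V/r^2) d(rV)/dr
kappa = np.sqrt((2.0 * V / r**2) * np.gradient(r * V, r))
# Logarithmic shear rate: eta = -(r/Omega) dOmega/dr (eta = 1 for a flat curve)
eta = -(r / Omega) * np.gradient(Omega, r)
xi2 = (4.0 - 2.0 * eta) / eta**2      # xi^2 = (4 - 2*eta) / eta^2
```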
In the case of the Swing Amplification study, the radial velocity dispersion values were required to calculate the Toomre Q values of the stellar and the gas disks as Q_i = κσ_R,i/(πGΣ_i), symbols having their usual significance. In the case of the Global Mode study, only the radial stellar velocity dispersion was required, and it was used directly as an input parameter. All the input parameters discussed above are listed in <ref>. §.§ N-body + Hydrodynamical simulations DICE does not allow a superposition of two Gaussian profiles for the gas disk. Therefore, an exponential profile with a large enough radial scale length, mimicking the average gas surface density value and giving the same total mass, was fitted to generate the initial conditions. We used a Navarro-Frenk-White (NFW) dark matter halo for our simulations. This was preferred over a pseudo-isothermal halo to comply with the observed shape of the rotation curve; the parameters were taken from the NFW mass models of <cit.>. In the ILLUSTRIS TNG simulation, the median value of the intermediate-to-major axes ratio b/a for 14,000 dark matter halos in the mass range 10^11 - 10^14 M_⊙ is 0.8 <cit.>, while the ILLUSTRIS-dark simulation without baryons results in halos with a median b/a of 0.7. We initialize the dark matter halo in our simulation with a similar value. The dark matter halo spin is given by λ = j/(√2 V_vir R_vir), where V_vir and R_vir are the velocity at the virial radius and the virial radius, respectively <cit.>. Dark matter halos with higher spin also promote bar formation in the stellar disk <cit.>. <cit.> showed that λ peaks at 0.025 for dark matter halos hosting spiral galaxies, and LSBs are found in fast-rotating halos with large angular momenta <cit.>. We have chosen a spin parameter of 0.02, which is similar to that of spiral galaxies. Q is set in the range 1-2, which is ideal for the growth of spirals by swing amplification <cit.>. In <ref>, we present the input parameters for the RAMSES simulations. § RESULTS §.§ Linear Perturbation Analysis §.§.§ Local Spiral Arms In <ref>, we show the response of the stellar and the gas disks in the gravitational potential of the disk and the dark matter halo (Left Panel), the dark matter halo only (Middle Panel), and the disk only (Right Panel) for our sample galaxies: F568-V1 (Top Panel) and F568-01 (Bottom Panel). We find that dark matter plays a crucial role in inhibiting local, non-axisymmetric instabilities that grow by the swing amplification mechanism. This confirms that the disks of both F568-V1 and F568-01 are stable against the growth of local spirals by swing amplification. A galactic disk is susceptible to non-axisymmetric instabilities if the local disk Toomre Q is in the range 1-2 <cit.>. In this case, both Q_s and Q_g are ∼ 10, and one may be tempted to attribute the absence of swing amplification to the high Q values. However, we checked this by taking both Q_s and Q_g ∼ 1.5, but even that does not lead to the growth of spiral arms. Interestingly, LSBs and other low-luminosity galaxies are characterized by patchy and irregular spiral features, which are often understood to develop through the amplification of local, non-axisymmetric perturbations by, among other mechanisms, swing amplification. §.§.§ Global Spiral modes In <ref>, we present the eigenspectrum of the Global Mode Analysis of the LSB disks of F568-V1 and F568-01. The imaginary part of an eigenfrequency determines the growth rate of the mode, while the real part gives its pattern speed (the conversion to observables is sketched below).
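Concretely, the conversion from a complex eigenfrequency ω to the growth time and pattern speed quoted next is direct; a minimal sketch, with an illustrative eigenvalue in the eigensolver's internal frequency units (not the tabulated values):

```python
import numpy as np

def mode_observables(omega, m):
    # Pattern speed Omega_p = Re(omega)/m ; growth time = 2*pi/Im(omega)
    return omega.real / m, 2.0 * np.pi / omega.imag

omega = 0.5 + 0.15j               # illustrative m = 2 eigenfrequency
Omega_p, t_grow = mode_observables(omega, m=2)
```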
For F568-V1, the real and imaginary parts of the most unstable mode are 0.56 and 0.16, respectively. This gives a pattern speed of 22 km s^-1 kpc^-1 and a growth time of 0.78 Gyr, which is 4 times the dynamical time of the galaxy (0.2 Gyr at 1.5 R_d). For F568-01, the corresponding real and imaginary parts are 0.44 and 0.13, respectively, indicating a pattern speed of 20 km s^-1 kpc^-1 and a growth time of 0.96 Gyr, which is 2.3 times the dynamical time of the galaxy (0.42 Gyr at 1.5 R_d). These growth times are comparable to the range found by <cit.>. This already implies that the LSB stellar disks are susceptible to the growth of global spiral modes, which is also in line with the observations made by Sodi and Garcia (2017). However, there is a caveat: the self-gravity of the gas is not included in this model, and the gas, being a cold component, may render the disk unstable. The response of a 2-component system of gravitationally-coupled stars and gas in a live dark matter halo to global, non-axisymmetric perturbations is not tractable by analytical models, and one has to resort to N-body + hydrodynamical simulations to study the problem. §.§ N-body + Hydrodynamical simulations using RAMSES: A Non-Spherical Dark Matter Halo As noted earlier, the initial conditions for our simulations are generated using DICE. We emphasize here that the initial conditions thus generated by DICE are indeed in equilibrium, because they are constrained by the observed stellar and gas surface density profiles and the rotation curve. In addition, we consider the different components of the rotation curve, as obtained from the mass model, as constraints. We further check that the star-gas-halo system is still in equilibrium when the simulation starts by confirming that the same constraints are satisfied over the first several epochs. <ref> shows snapshots of the stellar (top panel) and gaseous disks (bottom panel) of F568-V1, as determined from the volume density, at 0.78 Gyr and 1.34 Gyr respectively. At 0.78 Gyr, the spiral wave just begins to appear; at 1.34 Gyr, the pitch angles of the simulated and the observed LSB match best. The spiral arms extend over ∼ 10 kpc, which is about 3 disk scale lengths. Similarly, in <ref>, we present snapshots of the stellar and gaseous disks of F568-01, as obtained from the volume density, at 0.46 Gyr and 1.4 Gyr respectively. The spiral arms extend over ∼ 10 kpc, which is about 2 stellar disk scale lengths. For F568-V1, the oblate halo triggers spiral perturbations, which begin to develop around 0.78 Gyr, i.e. ∼ 4 times the dynamical time of the galaxy. Incidentally, this is also the growth time of the most unstable global, non-axisymmetric mode, as discussed in Section 5.1.2. We note that the spiral activity in the stellar disk persists for approximately 28 dynamical times (∼ 2.1 Gyr). For F568-01, spiral features begin to appear around 0.5 Gyr, which is shorter than the growth time of the most unstable mode in this galaxy (Section 5.1.2). This possibly underscores the limitations of the analytical global mode analysis in studying spiral structures. Here, the spiral arm survives for more than 3 Gyr, which is about 7 dynamical times. Interestingly, in neither of the galaxies does the gas disk develop spiral features, which is a puzzle; see the last paragraph of Section 5.2 for a discussion.
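For reference, face-on maps such as those just described can be produced from a RAMSES output with the pynbody package along the following lines; the output path is a placeholder and keyword choices may vary between pynbody versions, so this is a sketch rather than the exact plotting script used for the figures:

```python
import pynbody
import pynbody.plot

s = pynbody.load("output_00042")      # placeholder RAMSES output directory
s.physical_units()                    # convert to kpc, Msol, km/s
pynbody.analysis.angmom.faceon(s)     # rotate so that the disk is face-on

# Projected density maps of the stellar (s.s) and gas (s.g) components
pynbody.plot.image(s.s, qty="rho", width="30 kpc", av_z=True)
pynbody.plot.image(s.g, qty="rho", width="30 kpc", av_z=True)
```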
The oblate dark matter halo: The dark matter halo is often modelled as spherical in shape, with either a pseudo-isothermal (PIS) or a Navarro-Frenk-White (NFW) density profile. In the first attempt, we initialized our simulations with a spherical NFW dark matter halo and evolved the system for 10 Gyr. However, no spiral features were observed to develop in the LSB disk. Therefore, an oblate halo with a vertical-to-planar axes ratio c/a = 0.7 was used subsequently. Milky Way-type galaxies in cosmological zoom-in simulations show triaxial halos with a median c/a = 0.9 <cit.>. However, an oblate halo with c/a = 0.9 failed to produce a long-lived, global spiral of reasonable strength in our stellar disks. The quadrupolar potential of the dark matter halo plays the key role in triggering and sustaining spiral features in the LSB disks of both F568-V1 and F568-01. Our results are in line with <cit.>, who showed, using hydrodynamic simulations, that a triaxial halo could induce a spiral in the gas disk of the compact dwarf galaxy NGC 2915. Choice of Q value for the stellar disc: We first use Q_s = 10 for F568-V1 and Q_s = 5.5 for F568-01, as constrained by an assumed stellar scale height of R_d/5 (see Section 4.1). However, no spiral features were formed. Next, we successively tried Q_s values corresponding to smaller values of the assumed scale height. We note that R_d/20 is the scale height of one of the thinnest galaxies observed <cit.>, and the corresponding Q_s is 3.9. However, we find that spiral features are not triggered even in a disk with Q_s as low as 1.5 until a sufficiently oblate potential of the dark matter halo, with c/a = 0.7, is used. Therefore, we conclude that the oblate shape of the dark matter halo plays the most crucial role in driving spiral features in the LSB disks. We may note here that <cit.> argued that the disc-halo interaction in LSBs may trigger disk density waves even in disks with a high value of the Toomre Q parameter. However, theirs were N-body-only simulations, and hydrodynamical effects were not taken into account. In summary, we ran simulations for all possible combinations of the following c/a and stellar Q values: c/a = 1, 0.9, 0.8, 0.7 and stellar Q = 1, 1.5, 2, 11, with gas Q = 1. Stellar feedback: In our first attempt, we had ignored the effect of stellar feedback in the simulation. However, we found that the central stellar surface density increased to a few times its initial value as the galaxy evolved over time. This mismatch could be adjusted by introducing stellar feedback into the model. Simulations of galaxy formation result in galaxies with cuspy dark matter halos and concentrated stellar bulges <cit.>, whereas most LSBs are devoid of central stellar bulges. The formation of steeper stellar profiles can be stopped by removing baryons with low angular momentum, which can be driven off-centre by supernova (SN) explosions <cit.>. The gas removal through these explosions also imparts energy to the dark matter particles, which expands the halo near the centre <cit.>. This helps maintain a shallow dark matter profile and hence prevents the collapse of the stellar particles towards the centre. Following <cit.>, we use two parameters, the star formation efficiency ϵ_* (the mass ratio of stars formed to the total gas mass), set to 0.05, and the supernova mass fraction η_SN (the ratio of the mass of stars formed to the mass of stars lost in supernovae), also set to 0.05, to regulate the central mass density.
This reduces the number of gas particles converted to star particles and sustains a shallow potential near the galaxy centre. Similar values can sustain a shallow dark matter halo potential of mass 10^9 M_⊙ in dwarf galaxies. Observational constraints and others: We next confirm that our simulation results comply with the constraints modeled directly from optical and HI 21cm radio-synthesis observations. <ref> (Top Panel, Left) shows the radial surface density profiles of the stellar and gas disks of F568-V1 at 0.78 Gyr and 3.73 Gyr, respectively. The spiral pattern appears by 0.78 Gyr in the stellar disk and winds up by 3.73 Gyr; at 1.34 Gyr, the pitch angles of the observed and simulated spirals match best. In the Right Panel, we plot the rotation curve at the same epochs. We note that both the radial mass distributions and the rotation curve comply with the observations; the initial conditions of the simulations were set up from these same observational constraints, which have not evolved significantly over time. Similarly, in the Bottom Panel, we compare the rotation curves and surface density profiles of F568-01 at 0.5 Gyr, when the spiral appears, and at 1.4 Gyr, which is about seven dynamical times. As before, we note that there is negligible change in the profiles. In <ref>, we present the radial stellar velocity dispersion (Left Panel) and the ratio of the radial-to-vertical stellar velocity dispersion (Right Panel); the Top Panel shows F568-V1 and the Bottom Panel F568-01. Interestingly, for F568-V1, we observe only a ∼ 14% change in both the radial stellar velocity dispersion and the radial-to-vertical ratio near R = 0 during the evolution of the galaxy, which possibly implies that disk heating due to the spiral activity, which is perhaps weak, is minimal. The average value of the radial-to-vertical stellar velocity dispersion is ∼ 2.5, which complies with the observation that this ratio increases from 2 to 3 for nearby late-type galaxies <cit.>. For F568-01, however, the radial-to-vertical stellar velocity dispersion remains between 0.8 and 1 within about one stellar disk scale length, which indicates isotropy of the stellar velocity ellipsoid. This is possibly due to the development of a small spheroidal component in the galactic centre, which was confirmed when the simulated galaxy was viewed edge-on; see <ref>. Finally, in <ref>, Top Panel, we show the evolution of the Toomre Q of the stellar disks of F568-V1 (Left) and F568-01 (Right). For F568-V1, the minimum stellar Toomre Q is 1.5 and increases by a factor of 3-4 as the disk evolves. For F568-01, the stellar Toomre Q remains almost unchanged on average. Similarly, for the gas disks (Bottom Panel), the Toomre Q increases by a factor of 5-6 in the outer disk of F568-V1, whereas it decreases for the gas disk of F568-01. Pitch Angle: In <ref>, the Top Panel shows the observed optical image of F568-V1, as obtained from the SDSS <cit.> (Left), and the simulated image at the epoch when the pitch angles match (Right). We have fitted a logarithmic spiral to each (a sketch of the fit is given below), which gives pitch angles of 32^∘ and 28^∘ for the observed and simulated spirals respectively, which therefore mostly agree with each other. In the Bottom Panel, we present the same for F568-01; the pitch angle comes out to be 29.7^∘ in either case.
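A logarithmic spiral r = r_0 exp(θ tan α) is linear in (θ, ln r), so the pitch angle α follows from a straight-line fit to traced arm points; a minimal sketch with placeholder trace data:

```python
import numpy as np

def pitch_angle(theta, r):
    # Fit ln r = ln r0 + tan(alpha)*theta  ->  alpha = arctan(slope)
    slope, _ = np.polyfit(theta, np.log(r), 1)
    return np.degrees(np.arctan(slope))

# Placeholder (theta, r) points traced along one arm
theta = np.linspace(0.0, 2.5, 20)                  # azimuth in radians
r = 2.0 * np.exp(np.tan(np.radians(30)) * theta)   # a 30-degree test spiral
print(pitch_angle(theta, r))                       # recovers ~30 degrees
```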
Analysis of Spiral Features: Fast Fourier Transform: In <ref>, we present the radial profile of A_2/A_0, i.e., the ratio of the amplitudes of the m=2 and m=0 modes, obtained by performing a Fast Fourier Transform of the simulated images of the stellar disks of F568-V1 (Left) and F568-01 (Right), as seen in <ref> (Top Panel, Right) and <ref> (Bottom Panel, Right), respectively (the azimuthal decomposition is sketched below). In both cases, the amplitude of the spiral satisfies A_2/A_0 > 0.2, which is sufficient to identify it as a spiral structure in simulations <cit.>. We note that the value of the ratio always lies well below 0.4, thus indicating that the spiral arms formed are weak. In <ref> (Left Panel), we present the radial profile of the pattern speed of the m=2 mode of F568-V1 at two epochs: when the spiral features just appear (solid line) and when the pitch angles of the observed and simulated images are equal (dotted line). For F568-V1, we note that the pattern speed fluctuates considerably within 2 R_d, beyond which it slowly falls off with radius. Also, beyond 2 R_d, the pattern speed remains almost constant with time, the average value being 15 km s^-1 kpc^-1. Interestingly, the pattern speed obtained from our simulation using the 2D FFT lies close to the pattern speed of the most unstable mode in the global mode analysis. In <ref> (Right Panel), we present the pattern speed of the spiral for F568-01. As for F568-V1, no well-defined pattern speed exists; the average value is about 10 km s^-1 kpc^-1. The values of the pitch angles and the pattern speeds are presented in <ref>. A stationary density wave or a transient spiral? The primary signature of a stationary density wave is a pattern speed that is constant with radius and time. In the case of F568-V1, we note that the pattern speed of the m=2 mode survives for 2.1 Gyr, and the average pattern speed varies between 14 and 16 km s^-1 kpc^-1, the shearing rate at each epoch being ∼ 1 km s^-1 kpc^-2. A radial dependence of the pattern speed has been observed, for instance, in M51 <cit.>. Further, a necessary condition for the existence of a stationary density wave is the presence of the Inner and Outer Lindblad resonances. During the nascent stages of the formation of the spiral, when linear analysis was still applicable, we checked for the presence of the Inner and Outer Lindblad resonances; however, their presence was not apparent (see <ref>, Left Panel). Taken together, the above observations constitute the fingerprint of a transient spiral feature. The argument that a superposition of modes leads to transient spirals in simulations is not applicable in this case, as we specifically picked out the m=2 mode. Thus we conclude that the spiral features observed in F568-V1 constitute a transient spiral pattern driven by the quadrupolar potential of an oblate dark matter halo. Similarly, we can argue that the spiral in F568-01 is not a stationary density wave, as its pattern speed is not constant with radius and keeps fluctuating with time. Why does the gas disk not form spiral arms? Both stars and gas have been treated as self-gravitating components in our simulations. But, unlike the stars, the gas constitutes a collisional medium and hence, in general, will not form a self-gravitating pattern like a spiral arm, owing to the excitation of shock waves (see, for example, <cit.>). In <ref>, we present maps of the Mach number of the gas disks of F568-V1 (Left) and F568-01 (Right).
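The azimuthal Fourier amplitudes used above can be computed directly from the particle distribution within radial annuli; a minimal sketch, with the particle arrays as placeholders:

```python
import numpy as np

def fourier_amplitude_ratio(x, y, mass, m, r_edges):
    # A_m(R)/A_0(R) from particle positions:
    # A_m = |sum_j mass_j * exp(i*m*theta_j)| within each radial annulus
    r = np.hypot(x, y)
    theta = np.arctan2(y, x)
    ratio = np.zeros(len(r_edges) - 1)
    for k in range(len(ratio)):
        sel = (r >= r_edges[k]) & (r < r_edges[k + 1])
        if np.any(sel):
            a_m = np.abs(np.sum(mass[sel] * np.exp(1j * m * theta[sel])))
            ratio[k] = a_m / np.sum(mass[sel])
    return ratio

# e.g. profile = fourier_amplitude_ratio(x, y, mass, 2, np.linspace(0.0, 15.0, 31))
```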
Interestingly, of F568-V1 and F568-01, the former has a gas surface density a few times higher than the latter, and also a much higher Mach number (∼ 40-55 versus ∼ 20-24). The combined effect of self-gravity and shock waves is reflected in the gas disks of the two LSBs: as expected, very weak structures are seen in the gas disk of F568-01, owing to its lower Mach number, but no such features are noticeable at all in F568-V1, where the stronger shocks evidently overcome the larger self-gravity of its gas disk. § CONCLUSION We model the spiral features of the prototypical LSBs F568-V1 and F568-01 using analytical methods and numerical simulations. If we consider the LSB disk to be a 2-component system of gravitationally-coupled stars and gas hosted in a spherical dark matter halo, we find that it is stable against local, non-axisymmetric perturbations and hence unlikely to produce local spiral features. However, the LSB disk treated as a single-component system of self-gravitating stars, subjected also to the external potential of the gas disc and a spherical dark matter halo, is found to be unstable to a global spiral instability. Finally, using N-body + hydrodynamical simulations with the publicly-available code RAMSES, we find that the observed spiral features of the LSBs can be modeled as a transient, global spiral pattern existing for 2.1 Gyr, with an average pattern speed of 14-16 km s^-1 kpc^-1, in F568-V1; in F568-01, the pattern persists beyond 3 Gyr, with a pattern speed of 10 km s^-1 kpc^-1. The spiral features are driven by the quadrupolar potential of an oblate dark matter halo with a vertical-to-planar axes ratio of 0.7 and a spin parameter of 0.02.
http://arxiv.org/abs/2407.01759v1
20240701194615
A chiral quark model analysis of the $\bar KN$ interaction
[ "M. Conde-Correa", "T. Aguilar", "A. Capelo-Astudillo", "A. Duenas-Vidal", "J. Segovia", "P. G. Ortega" ]
hep-ph
[ "hep-ph", "nucl-th" ]
[]marlon.conde@epn.edu.ec Departamento de Física, Escuela Politécnica Nacional, Quito 170143, Ecuador. Departamento de Física, Escuela Politécnica Nacional, Quito 170143, Ecuador. Departamento de Física, Escuela Politécnica Nacional, Quito 170143, Ecuador. []alvaro.duenas@epn.edu.ec Departamento de Física, Escuela Politécnica Nacional, Quito 170143, Ecuador. []jsegovia@upo.es Departamento de Sistemas Físicos, Químicos y Naturales, Universidad Pablo de Olavide, E-41013 Sevilla, Spain. []pgortega@usal.es Departamento de Física Fundamental and Instituto Universitario de Física Fundamental y Matemáticas (IUFFyM), Universidad de Salamanca, E-37008 Salamanca, Spain § ABSTRACT In this work we analyze the K̅N interaction in the framework of a constituent quark model. The near-threshold elastic and charge-exchange cross sections are evaluated, finding good agreement with the experimental data. Furthermore, the possible existence of K̅N bound states is explored, finding two poles in the isoscalar J^P=1/2^- sector that can be interpreted as the experimental Λ(1405) state. A chiral quark model analysis of the K̅N interaction P. G. Ortega July 8, 2024 ==================================================== § INTRODUCTION The interest in strangeness in nuclear physics is primarily driven by the distinctive role of the strange quark within low-energy quantum chromodynamics (QCD). Located between the domains of light and heavy quarks, its presence introduces an interaction characterised by spontaneous and explicit chiral symmetry breaking. This breaking pattern gives rise to a remarkably strong attractive interaction between antikaons and nucleons near their respective thresholds, suggesting the possible existence of quasi-bound states of antikaons with both nucleons and nuclei, the so-called kaonic nuclei (see Refs. <cit.> for a review). In fact, the study of the isoscalar K̅N system led to the prediction in 1959 <cit.>, and later discovery in 1961 <cit.>, of the Λ(1405) in the πΣ invariant mass distribution of the K^-p →πππΣ reaction at 1.15 GeV. This state, with J^P=1/2^- <cit.>, is compatible with a quasi-bound K̅N state embedded within the πΣ continuum with a large decay width of ∼ 50 MeV, revealing a complex intrinsic quasi-molecular structure. From a quark model point of view, the Λ(1405) resonance serves as a pioneering example of an exotic baryon, distinguished by its underlying five-quark composition (uud su̅ and udd sd̅). The discovery of this hyperon-like state, just 27 MeV below the K^-p threshold, triggered a large body of theoretical and experimental research aimed at unveiling the nature of the Λ(1405), with many authors suggesting a two-pole structure in the πΣ unphysical sheet, one pole associated with the K̅N channel and another with the πΣ channel (see, e.g., Refs. <cit.>). Thus, the study of the interaction of strange mesons with nucleons embodies a crucial aspect of our understanding of exotic hadrons and strange nuclei, with relevance to the study of neutron stars <cit.>. In this work we analyze the K̅N system in the framework of a widely used constituent quark model (CQM) <cit.>, which has been applied to the study of the NN̅ system <cit.>, the NN interaction <cit.> and the deuteron properties <cit.>. Furthermore, in the last decades it has been successfully employed to describe the phenomenology associated with meson-meson, baryon-meson and baryon-baryon systems <cit.>.
As a result of this careful analysis of the hadron phenomenology, all the parameters of the model have already been constrained. The paper is organized as follows: after this introduction, Sec. <ref> briefly presents the theoretical framework. In Sec. <ref> the results are analyzed and discussed. Finally, we summarize and draw some conclusions in Sec. <ref>. § THEORETICAL FRAMEWORK §.§ Constituent quark model For the study of the antikaon-nucleon (K̅N) dynamics, with quark content n̅snnn where n={u,d}, we will use a constituent quark model (CQM) which models the basic phenomenology of Quantum Chromodynamics (QCD) at low and intermediate energies <cit.>. This CQM is based on the spontaneous breaking of chiral symmetry at some momentum scale, following Diakonov's picture of the QCD vacuum <cit.> as a dilute instanton liquid. As a consequence, quarks acquire a dynamical mass due to interactions with the fermionic zero modes of individual instantons. This momentum-dependent mass vanishes at high momenta and serves as a natural cutoff for the theory at low momenta. This scenario can be modeled with the following chiral invariant Lagrangian <cit.>: ℒ = Ψ̅[iγ^μ∂_μ - M(q^2) U^γ_5]Ψ, where U^γ_5 = exp(iϕ^a λ^a γ_5/f_π); ϕ^a denotes the pseudoscalar fields {π⃗, K_i, η_8}, with i = 1…4; λ^a are the SU(3) flavour matrices; and M(q^2) is the dynamical constituent quark mass. The momentum dependence of the constituent quark mass can be parameterized as M(q^2) = m_q F(q^2), with m_q ≈ 300 MeV, where F(q^2) = √(Λ^2/(Λ^2+q^2)) and Λ is a cutoff parameter that fixes the chiral symmetry breaking scale. Expanding the Nambu-Goldstone boson field matrix in the latter Lagrangian we obtain: U^γ_5 = 1 + (i/f_π)γ_5 λ^a ϕ^a - (1/2f_π^2)ϕ^a ϕ^a + … Here, the contribution of the constituent quark mass is identified in the first term. Further terms give rise to quark-quark interactions mediated by boson exchanges. Specifically, the second term represents the exchange of one boson, while the third term describes a two-boson exchange, primarily modeled as a scalar σ exchange. The model is completed with two further QCD effects: confinement and the one-gluon exchange interaction. The former is a non-perturbative phenomenon that prevents the existence of colorful hadrons, but it does not contribute directly to the K̅N interaction. Regarding the gluon, even below the chiral symmetry breaking scale quarks can still interact via the exchange of one gluon, a QCD perturbative effect which can be described by the Lagrangian <cit.>, ℒ_gqq = i√(4πα_s) ψ̅γ_μ G_c^μ λ^c ψ, with λ^c the SU(3) color matrices and G_c^μ the gluon field. For the K̅N system, direct one-gluon exchange between colorless hadrons is not allowed, but the gluon contributes via the annihilation diagrams explained below. The basic non-relativistic potentials at the quark level relevant for the K̅N system can be obtained within this model in the static approximation and are given by V_π(q⃗) = -(1/(2π)^3)(g_ch^2/(4m_i m_j))(Λ_π^2/(Λ_π^2+q^2))((σ⃗_i·q⃗)(σ⃗_j·q⃗)/(m_π^2+q^2))(τ⃗_i·τ⃗_j), V_σ(q⃗) = -(g_ch^2/(2π)^3)(Λ_σ^2/(Λ_σ^2+q^2))(1/(m_σ^2+q^2)), where q⃗ is the transferred momentum, σ⃗ (τ⃗) are the Pauli spin (isospin) matrices and m_i(j) is the mass of quark i(j). The parameters of the model, shown in Table <ref>, are constrained by previous studies of hadron phenomenology, e.g., the NN interaction <cit.>, the NN̅ system <cit.> and other baryon-baryon <cit.> and meson-baryon <cit.> systems involving nucleons and/or strange hadrons.
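For orientation, the momentum-space central potentials above are simple rational functions of the momentum transfer once the spin-isospin matrix elements of a given channel are supplied; a minimal sketch of their evaluation (all parameter values and the channel matrix elements sig_q2 and tau_ij are placeholders, not the entries of Table <ref>):

```python
import numpy as np

def v_pi(q, g_ch2, m_i, m_j, lam_pi, m_pi, sig_q2, tau_ij):
    # Pi-exchange piece: (sigma_i.q)(sigma_j.q) -> sig_q2 and
    # tau_i.tau_j -> tau_ij are channel expectation values supplied by the user
    return (-1.0 / (2.0 * np.pi) ** 3 * g_ch2 / (4.0 * m_i * m_j)
            * lam_pi**2 / (lam_pi**2 + q**2)
            * sig_q2 / (m_pi**2 + q**2) * tau_ij)

def v_sigma(q, g_ch2, lam_sig, m_sig):
    # Scalar sigma exchange between quarks i and j
    return (-g_ch2 / (2.0 * np.pi) ** 3
            * lam_sig**2 / (lam_sig**2 + q**2)
            / (m_sig**2 + q**2))
```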
Two types of interactions are considered in this work, shown diagrammatically in Fig. <ref>. On the one hand, the exchange of Goldstone bosons between a K̅ meson and a nucleon via the potentials of Eq. (<ref>). On the other hand, the light antiquark of the K̅ meson can annihilate with the quarks inside the nucleon; these processes are shown in the last two diagrams of Fig. <ref>. In our model, the real component of this potential can be derived from annihilation diagrams involving the exchange of a gluon or a pion. In momentum space, this interaction can be expressed as <cit.>: V_A,π(q⃗) = (1/(2π)^3)(g_ch^2/(4m_q^2-m_π^2))(1/3 + (1/2)λ⃗_i·λ⃗_j)(1/2 - (1/2)σ⃗_i·σ⃗_j)(3/2 + (1/2)τ⃗_i·τ⃗_j), V_A,g(q⃗) = (α_s/(8π^2 m_q^2))(4/9 - (1/12)λ⃗_i·λ⃗_j)(3/2 + (1/2)σ⃗_i·σ⃗_j)(1/2 - (1/2)τ⃗_i·τ⃗_j), the first (V_A,π) coming from annihilation through a pseudoscalar boson and the second (V_A,g) through a gluon. §.§ Resonating Group Method To extract the interaction between an antikaon (K̅) and a nucleon (N) in terms of quark degrees of freedom we make use of the resonating group method (RGM) <cit.>. This approach models the K̅N system as a five-body problem, considering the quark content of the antikaon (one strange quark and one light antiquark) and of the nucleon (three light quarks). The RGM effectively captures the complex quark dynamics within the meson-baryon system, allowing the interaction potential between the antikaon and the nucleon to be decomposed into a direct potential in which the natural cutoff is set by the wave functions of the hadrons. Hence, the direct kernel is expressed as: ^RGM V_D(P⃗',P⃗) = ∑_i∈ A, j∈ B∫ dp⃗_ξ'_A dp⃗_ξ'_B1 dp⃗_ξ'_B2 dp⃗_ξ_A dp⃗_ξ_B1 dp⃗_ξ_B2 ϕ_A'^*(p⃗_ξ'_A) ϕ_B'^*(p⃗_ξ'_B1,p⃗_ξ'_B2) V_ij(P⃗',P⃗) ϕ_A(p⃗_ξ_A) ϕ_B(p⃗_ξ_B1,p⃗_ξ_B2), where P⃗^(') is the initial (final) relative momentum of the K̅N pair, p_ξ_A(B) are the Jacobi momenta of the meson (baryon), and V_ij represents the quark-quark interaction potential within the constituent quark model, where i (j) runs over the constituents of the meson (baryon). In Eq. (<ref>), ϕ_A(B) represents the wave function of the meson (baryon). On the one hand, the K̅ meson wave function is built as: ϕ_A(q⃗) = Ψ_A(q⃗) χ_ ST^(A)ξ_c^(A)[1^3], where χ_ ST^(A) is the spin-isospin wave function, ξ_c^(A) is the color wave function and q⃗ is the relative momentum of the sn̅ system. The momentum wave function Ψ_A(q⃗) is obtained by solving the two-body Schrödinger equation with the potentials of the constituent quark model, expanded as a sum of Gaussians with ranges in geometrical progression, using the Gaussian Expansion Method (GEM) <cit.>. Thus, the internal wave function of the K̅ is given by Ψ_A(q⃗) = ∑_n=1^n_ max N_n C_n e^{-q^2/4η_n}, with N_n = (2πη_n)^{-3/4} and n_ max = 24. The η_n ranges are taken in geometrical progression, η_n = a_0 a_1^{2(1-n)}, which minimizes the number of free parameters to just three, {n_ max, a_0, a_1}, while ensuring a dense description at short distances <cit.>. On the other hand, the wave function of the baryon state is similar, ϕ_B = Ψ_B(p⃗_ξ_ρ,p⃗_ξ_λ)χ_ ST^(B)ξ_c^(B)[1^3], where χ_ ST^(B) is the totally symmetric spin-isospin wave function and ξ_c^(B) the totally antisymmetric color wave function. The p⃗_ξ_ρ is the relative momentum between two light quarks (the so-called ρ mode), while p⃗_ξ_λ is the momentum between the third quark and the centre of mass of the other two light quarks (the λ mode). The internal momentum wave function of the nucleon Ψ_B can be obtained with GEM as is done for the K̅. However, in Ref.
<cit.> it was shown, from an analysis of the nnn system in the Born-Oppenheimer approach, that a simpler one-Gaussian function is a good approximation in the long-range regime, Ψ_B(p⃗_ξ_ρ,p⃗_ξ_λ) = (2b^2/π)^{3/4} e^{-b^2 p_ξ_ρ^2} (3b^2/2π)^{3/4} e^{-(3b^2/4) p_ξ_λ^2}, with b the parameter related to the size of the baryon, fixed to b = 0.518 fm <cit.>. The direct kernel can then be factorized as ^RGM V_D = 3∑_i∈ A,j∈ Bℱ^(A)_i ℱ^(B)_j V_ij, where all factors are functions of Q⃗ = P⃗' - P⃗, the momentum transferred between the K̅ and the N. The ℱ^(A),(B) are the form factors of the antimeson and the baryon, which encode the information on the hadron wave functions in the AB→ A'B' reaction. The factor of 3 accounts for the multiplicity of the diagrams in Fig. <ref>. The form factors can be expressed as ℱ^(A)_i(Q⃗) = (4π)^{3/2}∑_n,n'^n_ max C_n C_n' N_n N_n' (η_nη^*_n'/(η_n+η^*_n'))^{3/2} e^{-(1-m_i/(m_s+m_n))^2 Q⃗^2/(4(η_n+η^*_n'))}, ℱ^(B)_j(Q⃗) = e^{-b^2 Q^2/6}. Here we see that ℱ^(A) depends only on the meson wave function, while ℱ^(B) encodes the range of the baryon wave function. These form factors act as natural cutoffs for the quark-quark potential. To develop a comprehensive model of the K̅N interaction, it is imperative to take into account the coupling to other meson-baryon channels and the annihilation processes into strange baryons, which are rather intricate. These processes are typically described using microscopic quark-level models such as the ^3P_0 model <cit.> for the coupling to the baryon spectrum, or exchange diagrams for the coupling with, e.g., ηΛ or πΣ. In this work, to streamline our calculations and improve model feasibility, we model the loss of K̅N flux due to the coupling with nearby channels and with the baryon spectrum by means of an optical potential. This methodological approach has been used previously in the context of the NN̅ interaction <cit.> and of hyperon-antihyperon <cit.> or Λ_cΛ̅_c <cit.> production. In our study, we adopt a parameterization similar to that used in Ref. <cit.>. This approach allows us to effectively capture the essential dynamics of the K̅N interaction within our modeling framework. By exploiting the optical potential, we aim to provide a robust description of the annihilation processes without the computational complexity associated with full quark-level calculations. This simplified methodology improves our ability to predict and understand K̅N interactions in different energy ranges, facilitating a deeper understanding of the underlying physics of these interactions. The optical potential considered is then a complex Gaussian with isospin dependence, given by V_opt^I(q) = i· W_i^I e^{-b'^2q^2/2}, where W_i^I and b' are parameters fitted to the experimental near-threshold elastic and charge-exchange cross sections. §.§ Solution of the Scattering Problem Once we have calculated the meson-baryon effective potential by means of the RGM formulation, we obtain the T matrix from the Lippmann-Schwinger equation in each partial wave, solved using the matrix-inversion method described in Ref. <cit.> and including the complex optical potential of Eq. (<ref>), T^α'_α(z;p',p) = V^α'_α(p',p) + ∑_α''∫ dp'' p''^2 V^α'_α''(p',p'') [1/(z-E_α''(p''))] T^α''_α(z;p'',p), where α represents the set of quantum numbers JLST of a given partial wave, V is the full potential and E_α(p'') is the non-relativistic energy at momentum p'' (a schematic discretization of this equation is sketched below).
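Below threshold (real z with no on-shell pole) the Lippmann-Schwinger equation discretizes on a quadrature mesh into a linear system, T = V + V G_0 T, solved by matrix inversion; a minimal single-channel sketch, with the potential a placeholder for the partial-wave RGM kernel and natural units assumed:

```python
import numpy as np

def solve_ls(v_func, z, mu, n=64, p_max=10.0):
    # Gauss-Legendre momentum mesh on [0, p_max]
    xg, wg = np.polynomial.legendre.leggauss(n)
    p = 0.5 * p_max * (xg + 1.0)
    w = 0.5 * p_max * wg
    V = np.array([[v_func(pi, pj) for pj in p] for pi in p])
    # Free-propagator weights: w_j * p_j^2 / (z - p_j^2/(2*mu));
    # below threshold (z < 0) there is no on-shell pole to subtract
    g0 = w * p**2 / (z - p**2 / (2.0 * mu))
    T = np.linalg.solve(np.eye(n) - V * g0[None, :], V)
    return p, T

# A bound or virtual state appears as a pole of T(z), i.e. where
# det(1 - V*G0) = 0 for some z below the K(bar)N threshold.
```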
The on-shell S-matrix is then obtained from the T-matrix using non-relativistic kinematics, S_α^α'(E) = δ_α^α' - 2π i √(μ_αμ_α'k_α k_α') T_α^α'(E;k_α',k_α), with k_α the on-shell momentum of the meson-baryon system. The K̅N→K̅N elastic and charge-exchange cross sections are given in terms of the scattering matrix elements in each partial wave as σ_el = (π/2p^2)∑_J (2J+1) |1 - S_el^J|^2, σ_ce = (π/2p^2)∑_J (2J+1) |S_ce^J|^2, where p is the on-shell relativistic momentum, which improves the phase-space description. The S_el and S_ce amplitudes are combinations of the S-matrices in the isospin I=0 and I=1 channels, S_el^J = (1/2)(S_J^I=1 + S_J^I=0), S_ce^J = (1/2)(S_J^I=1 - S_J^I=0). The cross section is given as a function of the K̅ momentum in the laboratory reference frame, p_ lab = p_ cm E_ cm/m_N. § RESULTS §.§ Elastic and charge exchange cross section The aim of this work is to study the K̅N→K̅N reactions near threshold. First of all, we analyze the elastic K^-p→ K^-p and the charge-exchange K^-p→K̅^0n cross sections below p_ lab = 700 MeV/c, so that the ηΛ channel (with threshold at ∼ 1.66 GeV) remains closed. In principle, the K̅N and the ηΛ channels can only be connected by exchange diagrams, which are usually small, so it is safe to ignore this channel. As for the πΣ channel, its influence is modeled via the optical potential. Another nearby channel is K̅^*N (threshold around 1.83 GeV), which can couple to K̅N (threshold ∼ 1.43 GeV), but its influence near the K̅N threshold was found to be small, so it is not included either. We therefore limit ourselves to the K̅N channel only. The results for the cross sections, including partial waves up to J=9/2, are shown in Fig. <ref>. We find good agreement, except for the bump around p_ lab≈ 400 MeV/c in the charge-exchange cross section due to the Λ(1520) baryon, which is not considered in this work. For the K̅N system, no direct π-exchange is allowed, so the interaction is mainly due to the scalar σ-exchange and the π and gluon annihilation diagrams. The CQM K̅N interaction alone is capable of describing the charge-exchange cross section, but the resulting elastic cross section is smaller than the experimental data, indicating a significant contribution from intermediate states such as baryons or other meson-baryon systems. The agreement improves when the latter effects are accounted for by the optical potential. The parameters of the optical potential (Eq. (<ref>)) are obtained by minimizing the χ^2 function with respect to the available elastic and charge-exchange experimental data between p_ lab = 200 MeV/c and 700 MeV/c. In particular, we exclude the region of charge-exchange data between 350 and 450 MeV/c, where the Λ(1520) resonance signal is prominent. We find a reasonable value of χ^2/d.o.f. = 1.69 with the parameters of Table <ref>, where the uncertainty of the optical potential parameters is estimated from the experimental errors. §.§ K̅N molecular states We now analyze the possible existence of K̅N molecules near threshold. The good agreement of the cross sections around the threshold suggests a good description of the K̅N dynamics in this energy region. It is then tempting to explore possible K̅N bound states in a relative S-wave. The most promising candidate for an I=0 K̅N molecule is the Λ(1405), which has been deeply explored since its discovery in 1961 <cit.>. A simple baryon picture is unable to reproduce its properties, so a meson-baryon structure must be invoked. In particular, in Ref.
<cit.> the meson-baryon scattering amplitude was studied using the bag model of Ref. <cit.>, finding a two-pole structure for the Λ(1405). These structures would both contribute to the Λ(1405) signal, interfering to form only one apparent resonance. The two-pole structure, emerging from the coupled πΣ-K̅N channels, was later confirmed and analyzed in, e.g., Refs. <cit.>. In this work, the effect of the πΣ channel is encoded in the optical potential, so it is worth exploring whether any pole is predicted near the K̅N threshold. First, we analyze the possible structures in J^P=1/2^- without the optical potential, i.e., with the elastic K̅N interaction only. We include the ^2S_1/2 partial wave in both I=0 and I=1. We do not find any bound state. However, two virtual states (poles in the second Riemann sheet below the K̅N threshold) are found for I=0 and I=1: the I=0 pole has a mass of 1405 MeV, while the I=1 pole is located at 1414 MeV. The effect of the K̅^*N channel is also analyzed, including the ^2S_1/2-^4D_1/2 partial waves; its influence is found to be small, though: the I=0 pole moves to 1409 MeV, while the I=1 pole moves to 1415 MeV. When we include the optical potential, each virtual state in I={0,1} moves into the complex plane, acquiring a width and splitting in two. The isoscalar sector then presents two poles, one at z_1=(1439± 3 - i 22±2) MeV and another at z_2=(1417±4 - i 55±7) MeV. In the isovector sector, the two poles are at z_1=(1444_-2^+3 - i 7±1) MeV and z_2=(1432±1 - i 57±8) MeV. The masses of the two I=0 poles are in agreement with other studies performed with chiral SU(3) dynamics, as shown in Table <ref>, which predict one wide and one narrower pole around the K̅N threshold. This result would confirm the two-pole nature of the Λ(1405). The existence of additional I=1 states is also predicted in some previous studies. For example, Ref. <cit.> obtained I=1 poles at 1425-i 6.5 MeV and 1468-i 13 MeV, close to our estimates. In Ref. <cit.>, an I=1 state is found, but less stable than the I=0 poles. Both states are expected to have a unique resonance structure. However, no experimental state has yet been found in this energy region. This could be due to the fact that these I=1 poles are more sensitive to coupled-channel effects than those of the I=0 sector. § SUMMARY In this work we have analyzed the K̅N system in the framework of a constituent quark model in which all the parameters are constrained by previous studies of hadron phenomenology. We have studied the elastic and charge-exchange cross sections, finding good agreement with the available experimental data near the K̅N threshold. In addition, we have explored possible bound states in the J^P=1/2^- sector, where the K̅N pair can be in a relative S-wave. If no optical potential is included, we find two virtual states: an isovector state at ∼ 1415 MeV and an isoscalar state at ∼ 1405 MeV. When the effect of other meson-baryon channels and of the baryon spectrum is modeled by means of an optical potential, each virtual pole moves into the complex plane and splits in two (see Table <ref>), pointing to a two-pole nature of the Λ(1405), as suggested by other theoretical works. This work has been partially funded by Escuela Politécnica Nacional under projects PIS-22-01, PIS-22-04 and PIM-23-01; EU Horizon 2020 research and innovation program, STRONG-2020 project, under grant agreement no. 824093; Ministerio Español de Ciencia e Innovación under grant Nos. PID2019-105439GB-C22, PID2019-107844GB-C22 and PID2022-140440NB-C22; Junta de Andalucía under contract Nos.
Operativo FEDER Andalucía 2014-2020 UHU-1264517, P18-FR-5057, PAIDI FQM-370 and PCI+D+i under the title: "Tecnologías avanzadas para la exploración del universo y sus componentes" (Code AST22-0001).
http://arxiv.org/abs/2407.02809v1
20240703050006
Distinguishing Carrier Transport and Interfacial Recombination at Perovskite-Transport Layer Interfaces Using Ultrafast Spectroscopy and Numerical Simulation
[ "Edward Butler-Caddle", "K. D. G. Imalka Jayawardena", "Anjana Wijesakara", "Rebecca L Milot", "James Lloyd-Hughes" ]
cond-mat.mtrl-sci
[ "cond-mat.mtrl-sci" ]
Department of Physics, University of Warwick. Advanced Technology Institute, University of Surrey. Department of Physics, University of Warwick. Department of Physics, University of Warwick. j.lloyd-hughes@warwick.ac.uk Department of Physics, University of Warwick. § ABSTRACT In perovskite solar cells, photovoltaic action is created by charge transport layers (CTLs) either side of the light-absorbing metal halide perovskite semiconductor. Hence, the rates for desirable charge extraction and unwanted interfacial recombination at the perovskite-CTL interfaces play a critical role for device efficiency. Here, the electrical properties of perovskite-CTL bilayer heterostructures are obtained using ultrafast THz and optical studies of the charge carrier dynamics after pulsed photoexcitation, combined with a physical model of charge carrier transport that includes the prominent Coulombic forces that arise after selective charge extraction into a CTL, and cross-interfacial recombination. The charge extraction velocity at the interface and the ambipolar diffusion coefficient within the perovskite are determined from the experimental decay profiles for heterostructures with three of the highest performing CTLs, namely C_60, PCBM and Spiro-OMeTAD. Definitive targets for the further improvement of devices are deduced: fullerenes deliver fast electron extraction, but suffer from a large rate constant for cross-interface recombination or hole extraction. Conversely, Spiro-OMeTAD exhibits slow hole extraction but does not increase the perovskite's surface recombination rate, likely contributing to its success in solar cell devices. Distinguishing Carrier Transport and Interfacial Recombination at Perovskite-Transport Layer Interfaces Using Ultrafast Spectroscopy and Numerical Simulation James Lloyd-Hughes July 8, 2024 ============================================================================================================================================================= § INTRODUCTION Perovskite solar cells first gained widespread attention when their power conversion efficiency jumped to over 10% by changing from a dye-sensitised structure incorporating a liquid, to an all solid-state design <cit.>. This structure remains the standard architecture to this day <cit.>, and consists of a light absorbing metal-halide perovskite sandwiched between an electron transporting layer (ETL) that selectively extracts conduction band electrons from the perovskite (but not valence band holes), and a hole transporting layer (HTL) that does the opposite, as pictured in Fig. <ref>. Employing interfaces of opposite selectivity on the two sides of the device gives photovoltaic action <cit.>, which can be thought of as a kinetic asymmetry, and hence these heterojunctions are crucial to device performance. Selective extraction is enabled by a good electronic band alignment for one band at the interface (e.g. perovskite conduction band and ETL LUMO in Fig. <ref>a), and the other band being misaligned (e.g. perovskite valence band and ETL HOMO). Photovoltaic action also arises from the differences in the equilibrium Fermi levels of the different materials before they are connected <cit.> (particularly the electrodes), which creates a built-in field and an opposing gradient in chemical potential across the device when the layers are connected <cit.> (Fig. <ref>b). Under illumination, the combined electrochemical potential develops a net gradient that drives electrons and holes towards different electrodes. 
Although photovoltaic action can occur without selective extraction layers, the charge-carrier blocking function of the charge transport layers (CTLs) has been found to be crucial for high performance, by preventing electron-hole recombination processes at the electrodes that compete with power extraction <cit.>. As the quality of the perovskite layer has improved (i.e. recombination rate constants within the perovskite bulk have been reduced), focus has shifted towards increasing the performance of CTLs <cit.>. A wide variety of CTL materials have been trialled but those used in the earliest devices have remained amongst the most successful, namely the hole extracting material Spiro-OMeTAD, and the electron extracting materials C_60 (fullerene molecule) and its derivative PCBM. In order to make further increases in performance by design, rather than slow trial and error, the electrical properties of the heterojunctions formed by these CTLs should be understood, so that the limiting factors can be targeted for improvement. Whilst measurements of the carrier dynamics of a complete device would be most representative of operational conditions, the large number of interfaces and processes present would make the dynamics difficult to interpret. Instead, by studying simple perovskite-CTL bilayers with one interface (for brevity shortened to CTL bilayers), the dynamics can be more easily interpreted and related to individual processes. Although the built-in field in a bilayer will not be as large as in a full device (Fig. <ref>b-c), properties of individual interfaces derived from bilayer measurements can be readily transferred to a model of the full solar cell. Ideally, to uncover the diffusion coefficient and the kinetics of interfacial processes, studies of the carrier dynamics of bilayers should have excellent time resolution, well below 1 ns. Here we report an investigation of the interfacial kinetics for perovskite-CTL bilayers fabricated from three of the highest performing CTLs – C_60, PCBM and Spiro-OMeTAD – using time-resolved optical spectroscopy with photoluminescence, THz or visible white light probes. We elucidate the carrier dynamics on sub-nanosecond timescales using a physical model that, critically, includes the Coulombic forces between electrons and holes. This is vital for the description of interfacial kinetics when one charge carrier species is selectively extracted across an interface: for example this can generate Coulomb forces that act as a bottleneck, switching off charge extraction. A quantitative agreement between the THz photoconductivity dynamics and the carrier dynamics model enabled the rate constants for surface extraction and cross-interfacial recombination to be determined, which revealed that C_60 and PCBM both rapidly extract electrons, but unfortunately have sufficiently fast cross-interface recombination that no Coulomb bottleneck forms. In marked contrast, Spiro-OMeTAD did not rapidly extract holes from the perovskite, but did have a substantially slower surface extraction velocity and surface recombination velocity. The trends observed with sub-nanosecond THz and transient absorption measurements were also observed in nanosecond photoluminescence transients at lower injected carrier densities (comparable to solar illumination), indicating that the properties extracted from the sub-nanosecond measurements using higher injected carrier densities are relevant to operational carrier densities. 
§ EXPERIMENTAL RESULTS §.§ Heterojunction Materials For photovoltaics, the most directly relevant perovskite compositions to develop are those that have bandgaps close to the optimum calculated by the Shockley-Queisser model, whilst being sufficiently stable against degradation. One of the most promising routes to rapidly producing cells that exceed the efficiency of silicon cells is to form perovskite-silicon tandem cells, for which the Shockley-Queisser model gives an optimum perovskite bandgap of 1.6-1.7 eV, <cit.> which requires mixed-halide compositions containing a majority of iodine and a minority of bromine. In this work, spin-coated thin films (for fabrication details see Appendix) of triple cation mixed-halide perovskite were employed, with the composition (FA_0.83MA_0.17)_0.95Cs_0.05Pb(I_0.83Br_0.17)_3. This composition has high crystallinity and excellent reproducibility and stability, including against light-induced halide segregation <cit.>. Electron micrographs (Fig. <ref>a-c) showed complete coverage of the quartz substrates, with grain diameters of a few hundred nanometres, and with most grains extending through the thickness of the layer (either 370 nm or 600 nm thick depending on the batch). After forming the perovskite thin films, different CTLs were deposited on top to form various perovskite-CTL bilayers. In the bilayers with spin-coated PCBM or Spiro-OMeTAD layers, the CTLs were approximately 120 nm thick, while the thermally evaporated C_60 was 30 nm thick (AFM measurements confirmed complete coverage). These layers were thicker than those used in solar cells, in order to ensure complete coverage of the underlying perovskite and thus avoid adding another variable, i.e. surface coverage, that can modify the dynamics (for example, with incomplete coverage some carriers must travel an additional distance to reach a perovskite-CTL heterojunction, and the carriers extracted into the CTL will be more concentrated). The ultrafast carrier dynamics should not be influenced by thicknesses above a few nanometres, because within the 3 ns experimental time window the low mobilities of the CTLs mean that extracted carriers will travel less than a few nanometres in the fullerenes, and even less in Spiro-OMeTAD. To check whether the solution deposition of PCBM and Spiro-OMeTAD affected the underlying perovskite, perovskite layers washed with chlorobenzene were also prepared and measured, and showed no significant difference (see Supplemental Material, SM, [See Supplemental Material [url] for: additional optical data; details of the numerical and analytical models; and additional simulation results. It includes additional Refs. <cit.>.] Figs. S8 and S9). Separately, reference CTLs were deposited directly on quartz to confirm their optical absorbance. The UV and visible absorbance of the individual materials is reported in Fig. <ref>d, showing that while PCBM and C_60 exhibit a small amount of absorption across the visible, Spiro-OMeTAD has strong absorption above 3.0 eV. The triple-cation perovskite had a sharp absorption onset (band edge) near 1.6 eV and its steady-state photoluminescence (SS-PL) spectrum (Fig. <ref>e) peaked at 1.63 eV, demonstrating this composition's suitable bandgap energy for tandem solar cells. Furthermore, SS-PL spectra showed no evidence of light-induced halide segregation, as a second red-shifted peak did not appear under illumination, evidencing the long-term stability of this composition.
The reduction in SS-PL intensities for the bilayers compared to the bare perovskite has been interpreted as an increase in non-radiative recombination channels when the CTLs are connected <cit.>. Throughout this paper, measurements on bare perovskite layers are plotted in black/grey, perovskite-PCBM bilayers are plotted in blue, perovskite-C_60 bilayers are plotted in green, and perovskite-Spiro-OMeTAD bilayers are plotted in red. §.§ Time-resolved PL spectroscopy In time-resolved optical spectroscopy a signal S(t) related to the carrier density in the perovskite layer is measured as a function of time t after photoexcitation, where S refers to either the PL intensity, the transient absorption change, or the THz photoconductivity (from optical-pump THz-probe, OPTP, spectroscopy), depending on the technique used. While in the reference perovskite material the carrier density can be altered only by bulk carrier recombination or surface recombination near the edges of the perovskite crystallites, in bilayers n can be additionally lowered by charge extraction into the CTL or by extra recombination at the CTL-perovskite interface. The experimental S(t) corresponds to the total change integrated over the whole charge carrier distribution, and information about the density profile n(x) with depth x is therefore lost from data at one time t. However, in certain situations the dynamical signal, S(t), can still yield useful information about processes that occur at different depths in the bilayer, particularly if different photoexcitation schemes are used. As pictured in Fig. <ref>f, illuminating with short wavelengths generates carriers in the perovskite near the CTL (case i), and hence the initial charge extraction and/or interfacial recombination rates should be high. At later times these rates should drop, as charges diffuse away from the heterojunction into the perovskite. In contrast, excitation through the substrate (case ii) yields carriers that are well separated from the CTL interface, with a low initial decay rate that then increases after carriers have had sufficient time to diffuse through the perovskite layer. Illuminating with long wavelengths (case iii) generates carriers throughout the depth of the perovskite layer, with dynamics expected to be similar to the long-time limit of cases i and ii. Time-resolved PL dynamics measured by time-correlated single photon counting (TCSPC) are reported in Fig. <ref>g for the perovskite-CTL bilayers and the bare perovskite reference, recorded under 405 nm excitation from the quartz side with incident pulse fluences of 25 nJcm^-2 (∼ 5× 10^14 incident photons m^-2 per pulse) and under ambient conditions in order to passivate surface defects (see Experimental Methods). The injected carrier density is comparable to that under solar illumination, which for this triple-cation perovskite composition has been estimated as 4 × 10^21 m^-3 <cit.>. The modelling in Section <ref> indicates that carriers spread throughout the layer thickness within the few-nanosecond resolution of TCSPC, so an approximately uniform density can be considered, which gives an injected density of 10^21 m^-3 for our TCSPC measurements (a worked estimate of this photon accounting is sketched below). The bare perovskite (black curve) had a PL lifetime of τ_eff=200 ns, consistent with results on a similar sample from electronically delayed OPTP (E-OPTP) with a long time window and low injection levels <cit.>, which represents the effective lifetime from bulk recombination and any surface recombination at the perovskite-air or perovskite-substrate interfaces.
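The injected-density estimate quoted above follows from simple photon accounting; a worked sketch using the fluence, wavelength and film thickness stated in the text (all other steps are standard):

```python
h, c = 6.626e-34, 2.998e8            # Planck constant (J s), speed of light (m/s)

fluence = 25e-9 / 1e-4               # 25 nJ/cm^2 -> 2.5e-4 J/m^2
e_photon = h * c / 405e-9            # 405 nm photon energy, ~4.9e-19 J
flux = fluence / e_photon            # ~5e14 photons m^-2 per pulse

thickness = 370e-9                   # m; carriers spread uniformly on ns scales
n_injected = flux / thickness        # ~1.4e21 m^-3 before absorption losses,
                                     # consistent with the ~1e21 m^-3 quoted
print(f"{flux:.1e} photons/m^2, {n_injected:.1e} m^-3")
```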
When fullerene layers were added, the PL decay was significantly accelerated (blue and green curves), while the Spiro-OMeTAD bilayer (red) had only a marginally faster decay rate than the bare perovskite. The faster PL decay rates for the bilayers imply either charge extraction to the CTL or recombination at the CTL interface. The shortening of the PL decay time is consistent with the quenching of the steady-state PL intensity. For each sample no difference in PL dynamics was observed for photoexcitation on opposite sides, despite the different initial distributions generated by the short penetration depth (δ∼ 35 nm) at this excitation wavelength. Furthermore, for each sample, excitation light with a longer penetration depth (δ∼ 260 nm for 633 nm excitation) resulted in similar dynamics to 405 nm excitation (see SM section 1.5 <cit.>). This implies that the instrument response time (several nanoseconds) was not sufficiently fast to distinguish case (i) (excitation on the front with short penetration depths), which should have much higher initial rates of extraction and interfacial recombination than cases (ii) and (iii) (excitation on the back, or excitation with long penetration depths). In short, the time taken for carriers to diffuse through the perovskite film is shorter than the instrument response time in these PL experiments, which therefore probed the later dynamics only. §.§ Ultrafast spectroscopy In order to investigate charge transport, charge extraction and interfacial recombination, we therefore turned to ultrafast pump-probe spectroscopies with sub-picosecond time resolution. These were performed with incident pulse fluences of 2-7 μJcm^-2, corresponding to an incident photon flux of 0.6-1.4 × 10^17 m^-2 per pulse. The pump fluences for different wavelengths were chosen such that the absorbed photon fluxes per pulse were similar for the different pump wavelengths (using the absorption data reported in Supplemental Figs. S2 and S3 <cit.>). These low pump fluences were chosen such that the OPTP and TA decay curves were flat for the bare perovskite sample (see Fig. <ref> and Fig. <ref>), indicating that the fraction of the carrier population that recombines during the measurement window of 3 ns is negligible. This means recombination within the perovskite can be excluded, which significantly simplifies the interpretation of the data. It is worth noting that the ultrafast dynamics were found to be more reproducible than the time-resolved PL measurements performed at much lower pulse fluences of 25 nJcm^-2 (∼ 5× 10^14 incident photons m^-2 per pulse). §.§.§ Origin of the transient photoconductance The OPTP decays reported in Fig. <ref> show the change in transmitted electric field, S(t)=-Δ T/T, which is proportional to the sheet photoconductance Δσ of the sample <cit.>. As the conductivity is σ=n eμ_e + p e μ_h, OPTP is sensitive to changes in either mobility or density, and hence we first establish the origin of S(t) for CTL-perovskite bilayers. No appreciable contribution to the photoconductivity is expected from charges transferred to the CTLs, where carriers have a substantially lower mobility than in the perovskite: THz sum mobilities μ_e+μ_h exceed 10 cm^2Vs^-1 in perovskites <cit.>, whereas Spiro-OMeTAD has μ_h<10^-3 cm^2Vs^-1 <cit.> and PCBM exhibits μ_e<1 cm^2Vs^-1 <cit.>.
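To illustrate how the measured -ΔT/T maps onto a sheet photoconductance and then a sheet carrier density, the sketch below applies the small-signal thin-film (Tinkham-type) expression for a conducting film on a substrate; the quartz THz refractive index and the example sum mobility are illustrative assumptions of ours rather than values fitted in this work.

```python
import scipy.constants as const

Z0 = const.physical_constants['characteristic impedance of vacuum'][0]
n_quartz = 1.95        # assumed THz refractive index of the quartz substrate

def sheet_conductance(neg_dT_over_T):
    """Small-signal thin-film (Tinkham) expression: film on a substrate."""
    return (1.0 + n_quartz) / Z0 * neg_dT_over_T    # Siemens per square

def sheet_density(neg_dT_over_T, mu_sum=40e-4):
    """Sheet density of e-h pairs for sigma = e(n mu_e + p mu_h) with n = p;
    mu_sum = 40 cm^2/Vs (here in m^2/Vs) is an illustrative value."""
    return sheet_conductance(neg_dT_over_T) / (const.e * mu_sum)   # m^-2

print(f"{sheet_density(1e-3):.1e} pairs m^-2 for -dT/T = 0.1 %")
```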
To validate the assumption that the CTLs themselves contribute negligible photoconductivity, in this work CTLs deposited on quartz were photoexcited above their bandgap (at 410 nm), and no CTL photoconductivity was observed via OPTP at the experimental fluences used. Hence we conclude that S(t) arises solely from carriers in the perovskite film. Within the first picosecond following photoexcitation, the mobility of hot photoexcited carriers in the perovskite may change as carriers relax within the bandstructure <cit.> or into localised states <cit.>. After this time, however, the mobility can be assumed to be limited by electron-phonon scattering, and independent of time (and density), and thus changes in -Δ T/T are linked to the density of photoexcited carriers in the perovskite. The sum of the electron and hole mobilities was calculated for the perovskite reference sample from the initial amplitude of the OPTP signal (method described in <cit.>), assuming unity quantum yield of free carriers (reasonable given the low exciton binding energy in these perovskites <cit.>). This yielded a sum mobility of ∼ 40 cm^2V^-1s^-1 when using short pump wavelengths (and ∼ 70 cm^2V^-1s^-1 when using long pump wavelengths), consistent with the literature <cit.>. Hence the detected S(t) from OPTP for bilayers can be safely assumed to arise only from the photoconductance of mobile charges in the perovskite films. §.§.§ Transient photoconductance of perovskite-ETL bilayers Fig. <ref>(a-c) reports S(t)=-Δ T/T for the fullerene bilayers (C_60 in green and PCBM in blue), which exhibited accelerated decay over the 3 ns window relative to the bare perovskite sample (black/grey), for both “front” excitation through the CTL (darker shade) and “back” excitation through the substrate (lighter shade). For 700 nm excitation of the PCBM bilayer (case iii, Fig. <ref>a), the initial amplitude and the decay dynamics for front excitation (dark blue) were similar to those for back excitation (light blue). This is consistent with the negligible parasitic absorption of this pump wavelength by the PCBM (Fig. <ref>d), and with the expectation that carrier removal dynamics will be similar when the electrons and holes are photoinjected throughout the layer (the perovskite's absorption depth was ∼ 260 nm). For excitation with 410 nm (Fig. <ref>b-c) on the front of the fullerene bilayers (case i), the decay was markedly faster than for 700 nm excitation: the carriers were concentrated near the interface rather than spread throughout the perovskite film's thickness. The initial amplitude of the signal was marginally lower for excitation through the front (case i, dark blue and green) compared to through the back (case ii, light blue and green), in agreement with parasitic absorption of the pump by the fullerenes (Fig. <ref>d). For 410 nm excitation on the back (case ii) of the C_60 bilayer (light green, Fig. <ref>c), the decay rate was slow at early pump-probe delay times, before accelerating at later times. This can be qualitatively understood as requiring a finite time for carriers to diffuse from a low recombination velocity surface (the perovskite-quartz interface) through the thickness of the film to the CTL interface, where population reduction can then occur. A quantitative model for both front and back excitation is discussed in Section 3.1. On the other hand, for 410 nm excitation on the back (case ii) of the PCBM bilayer (light blue, Fig. <ref>b), the dynamics were similar to those for 700 nm excitation.
This suggests that the solution-deposited PCBM had penetrated deeper into the perovskite than the vapour-deposited C_60, thus reducing the distance carriers must diffuse from the quartz side to reach the PCBM. §.§.§ Transient photoconductance of perovskite-HTL bilayer The corresponding OPTP data for the Spiro-OMeTAD bilayer (hole extracting) are reported in Fig. <ref>d-f. In stark contrast to the case of electron extraction, there was no faster decay of the OPTP signal in comparison to the perovskite reference over the 3 ns window, when excited on either side of the sample or with any of the pump wavelengths used (410 nm, 460 nm and 700 nm). The only substantial difference was in the initial amplitude when exciting with 410 nm through the CTL: the Spiro-OMeTAD bilayer exhibited around 25 % of the photoconductivity of the perovskite reference. In this case, a large fraction of the optical pump beam was absorbed by the Spiro-OMeTAD layer first, as Spiro-OMeTAD exhibits a substantial absorbance at that pump photon energy (Fig. <ref>d; 410 nm corresponds to 3.02 eV). For excitation conditions that did not exhibit parasitic absorption – 700 nm and 460 nm excitation from either side, and 410 nm back excitation – the similar initial amplitude to that of the bare perovskite, and the lack of decay within the 3 ns window, suggest that there is no substantial removal of carrier density at the Spiro-OMeTAD-perovskite interface within 3 ns, either via selective charge extraction or via any additional surface recombination. This conclusion can be drawn because in all cases carriers were either generated next to the Spiro-OMeTAD interface, or would have diffused to the interface within the 3 ns window, as evidenced by the fullerene measurements. Thus we have established that the rates of extraction and surface recombination at the Spiro-OMeTAD-perovskite interface must be low, such that the effective lifetime is close to the bulk lifetime. This is in agreement with the lifetimes from time-resolved PL, discussed further in Section 3.4. §.§.§ Transient absorption of perovskite-CTL bilayers An additional factor complicating the interpretation of OPTP spectroscopy of selective carrier extraction is that S(t) for OPTP is proportional to the sum of the photoconductivities of electrons and holes, whose mobilities may differ. Therefore, extraction of one carrier type may cause a different reduction in S(t) than removal of the other type. Computational studies have suggested similar effective masses for electrons and holes <cit.>, but the ratio of their mobilities is harder to determine, as mobility depends on the scattering rate as well. Therefore, to check that the lack of decay over 3 ns for the Spiro-OMeTAD bilayers was not due to OPTP being less sensitive to the hole density (if it were the case that μ_h≪μ_e), white-light TA spectroscopy was additionally performed, as reported in Fig. <ref>. In TA spectroscopy S(t)=ΔOD, the change in optical density. The strength of the ground-state bleach (negative ΔOD, red in Fig. <ref>a) of the interband transition can be assumed to be proportional to the injected carrier density, independently of the mobility. Here, experiments were performed at low enough pump fluences that the hot-phonon bottleneck regime <cit.> was not reached, and carriers cooled quickly (within a few ps) to the band edge.
The TA dynamics (Fig. <ref>b) show the same trends as for OPTP: no decay for the Spiro-OMeTAD bilayers over 3 ns, while the fullerene bilayers exhibited additional decay in comparison to the bare perovskite reference (the amplitudes are not interpreted since they were not as well controlled). The agreement between TA and OPTP transients additionally confirms that the decay of the photoconductance signal is due to carrier removal rather than changes in mobility. § CARRIER DYNAMICS MODELLING We developed a theoretical model of the evolution of the charge density within a CTL-perovskite bilayer in order to simulate the dynamical signal S(t) observed in experiments, and to disentangle the different contributions to S(t) from diffusion within the perovskite, population transfer to the CTL and recombination at the perovskite-CTL interface. A selective interface will clearly lead to the spatial separation of electrons and holes, and the resulting Coulombic forces will influence the motion of carriers within the perovskite, i.e. introducing drift transport in addition to diffusive motion. A mathematical description of n(x,t) and p(x,t) requires solving the non-linear continuity equations for electrons and holes, including drift and diffusion current terms, along with the electric field E(x,t) obtained from Gauss's law. Krogmeier et al. <cit.> simulated the carrier dynamics of perovskite-CTL bilayers and suggested the importance of Coulombic forces at high carrier densities, such as those used in the OPTP and TA experiments reported here. They focused on modelling the dynamics measured by TRPL over tens of nanoseconds; here we instead focus on carrier dynamics on the sub-nanosecond timescales relevant to OPTP and TA measurements. Although device simulation packages are available <cit.>, these are not designed for simulating the transients of bilayers on picosecond timescales following pulsed illumination. §.§ Numerical model As described in the Methods and Supplemental Material <cit.>, in the present work we developed a numerical solution using the Scharfetter-Gummel discretisation of the drift-diffusion current equations and the forward Euler method <cit.>, for parameters that match our experimental conditions; a minimal sketch of this scheme is given below. Perovskites are at most weakly doped <cit.>, and hence at the fluences used in OPTP and TA spectroscopy only the photoexcited carriers (n_e, p_e) are considered in the model, as these will have a much higher density than the equilibrium carriers (n_0, p_0), i.e. high-level injection, n_e, p_e ≫ n_0, p_0. Experiments were performed at low pump fluences (0.6-1.4× 10^17 photons m^-2 per pulse) such that the OPTP and TA decay curves were flat for the bare perovskite sample (see Fig. <ref> and Fig. <ref>), indicating that the fraction of the carrier population that recombined during the measurement window of 3 ns was negligible. Thus recombination within the perovskite can be excluded, which significantly simplifies the interpretation of the data and reduces the number of free parameters in the model. Further, the low fluence allowed the influence of photon reabsorption <cit.> to be ignored in our model, as the rate of bimolecular radiative recombination was minimised. The migration of ions is too slow to occur during the measurement window of a few nanoseconds and so can be ignored.
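The sketch below illustrates the forward-Euler/Scharfetter-Gummel scheme named above for this bilayer geometry. It is a minimal illustration, not the production code used for this work: the grid spacing, time step, permittivity, mobility and best-fit rate constants are taken from the text, while the grounded Dirichlet boundaries and the lumping of extracted electrons into the first Poisson cell as a sheet charge are simplifying assumptions of ours.

```python
import numpy as np
import scipy.constants as const
from scipy.linalg import solve_banded

q = const.e
VT = const.k * 300 / q                 # thermal voltage, ~25.9 mV
L, dx, dt = 600e-9, 3e-9, 1e-14        # film thickness, 3 nm grid, 10 fs steps
mu = 15e-4                             # mobility in m^2/Vs (equal for e and h)
D = mu * VT                            # Einstein relation -> ~0.39 cm^2/s
eps = 20.0 * const.epsilon_0           # perovskite permittivity
S_ext, k_inter = 90.0, 1e-12           # extraction (m/s), cross-interface (m^3/s)

N = int(L / dx)
x = (np.arange(N) + 0.5) * dx

# Beer-Lambert initial condition: 1e17 m^-2 sheet density, 35 nm depth
prof = np.exp(-x / 35e-9)
n = 1e17 * prof / (prof.sum() * dx)    # electrons (m^-3); x = 0 is the ETL side
p = n.copy()                           # holes
n_ctl = 0.0                            # electron sheet density in the ETL (m^-2)

def B(z):
    """Bernoulli function z/(exp(z)-1), with the z -> 0 limit handled."""
    small = np.abs(z) < 1e-12
    zs = np.where(small, 1.0, z)
    return np.where(small, 1.0 - z / 2, zs / np.expm1(zs))

# Banded Laplacian for Poisson's equation; phi = 0 outside the film (assumed)
ab = np.zeros((3, N))
ab[0, 1:], ab[1, :], ab[2, :-1] = 1.0, -2.0, 1.0

for step in range(25_000):             # 0.25 ns demonstration window
    rho = q * (p - n)
    rho[0] -= q * n_ctl / dx           # extracted electrons lumped into cell 0
    phi = solve_banded((1, 1), ab, -rho * dx**2 / eps)
    dpsi = np.diff(phi) / VT

    # Scharfetter-Gummel particle fluxes on the interior faces (+x positive)
    Fe = -(D / dx) * (n[1:] * B(dpsi) - n[:-1] * B(-dpsi))
    Fh = -(D / dx) * (p[1:] * B(-dpsi) - p[:-1] * B(dpsi))

    # Boundary faces: electron extraction and cross-interface hole loss at
    # x = 0; reflecting (zero-flux) boundary at the quartz side, x = L
    Fe = np.concatenate(([-S_ext * n[0]], Fe, [0.0]))
    Fh = np.concatenate(([-k_inter * n_ctl * p[0]], Fh, [0.0]))

    # Forward Euler update of the continuity equations and the ETL charge
    dn_ctl = dt * (S_ext * n[0] - k_inter * n_ctl * p[0])
    n -= dt * np.diff(Fe) / dx
    p -= dt * np.diff(Fh) / dx
    n_ctl += dn_ctl

print(f"sheet density extracted into the ETL: {n_ctl:.2e} m^-2")
```

The Bernoulli-function weighting is what distinguishes the Scharfetter-Gummel flux from a naive central difference: it remains stable when the potential drop per cell approaches or exceeds the thermal voltage, as happens once the double layer forms.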
The physical processes involved in the simulation are described in Fig. <ref>a-c for the case of electron removal via population transfer to the ETL, under excitation with an initial sheet carrier density of 10^17 m^-2 and a penetration depth of 35 nm. Initially, the electrons and holes have the same profile in the perovskite (shown schematically in Fig. <ref>a), while at later times charges can either diffuse away from the interface into the perovskite, or be extracted to the ETL. For a selective ETL we assumed that interfacial electrons were extracted at a rate defined by the interfacial extraction velocity, S, while hole extraction was forbidden. The preferential extraction of electrons into the ETL (Fig. <ref>b) attracts holes in the perovskite towards the interface, and repels electrons, forming an electrical double layer. Further, in the simulation we considered two processes that can reduce the net charge in the ETL (Fig. <ref>c): (i) cross-interface recombination, where an electron in the LUMO of the ETL recombines with a hole in the VB of the perovskite at a rate k_inter n_CTL p(x=0), or (ii) hole transfer from the perovskite VB to the HOMO of the ETL with velocity S_holes. This latter process may be possible given the tail in the density of states for fullerene molecules above their HOMO level, suggesting that the VB offset between fullerene and perovskite is not very large and thus hole extraction may not be blocked very effectively <cit.>. Cross-interface recombination and hole extraction are difficult to distinguish, as both lead to the loss of a hole from the perovskite and reduce the negative charge in the ETL (they are compared in SM section 2.5 <cit.>). The results of the simulations are shown in Figs. <ref>d-f. As well as plotting the perovskite photoconductance versus pump-probe delay, the decay rate can be conveniently parameterised by the instantaneous lifetime, τ^*(t) = -Δσ/[d(Δσ)/dt]. It should be noted that this represents the decay lifetime for the perovskite's entire sheet carrier density and is thus affected by the distribution of carriers. For example, even if the rate constant for carrier loss at an interface is fixed, the decay rate of the sheet density is faster if carriers are concentrated near the interface, and lower if they are spread throughout the thickness (the green lines, discussed below, are an example of this). Fig. <ref>d shows simulations of the photoconductance Δσ, for an extraction velocity S=90 ms^-1 and for various cross-interfacial recombination rates, under either front excitation (thick lines) or back excitation (thin lines). Considering first the case without cross-interfacial recombination, k_inter=0 (blue lines): for front excitation the simulated photoconductance of the perovskite initially decays before plateauing (thick line), while for back excitation the photoconductance only appreciably decays after about 1 ns (thin line). The plateau in the photoconductance corresponds to a rapidly increasing τ^* (blue line, Fig. <ref>e), and can be understood as resulting from the formation of the electrical double layer: the Coulomb repulsion of electrons away from the interface prevents further electron extraction, leading to the population decay rate slowing. The electrical double layer's formation can be seen from the excess electron sheet density in the CTL, as reported in Fig. <ref>f. The electron sheet density in the CTL rapidly rises within 200 ps to above 10 % of the initial sheet carrier density in the perovskite, before rolling over as the extraction rate slows.
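Since τ^*(t) is used repeatedly below, the following minimal snippet (our own illustration) shows how it can be evaluated from a sampled decay; for a single-exponential decay it returns the usual constant lifetime.

```python
import numpy as np

def instantaneous_lifetime(t, sigma):
    """tau*(t) = -sigma / (d sigma / dt), via a centred finite difference."""
    return -sigma / np.gradient(sigma, t)

# Sanity check: a single exponential with tau = 1 ns
t = np.linspace(0.0, 3e-9, 301)
sigma = np.exp(-t / 1e-9)
print(instantaneous_lifetime(t, sigma)[150])   # ~1.0e-9 s
```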
In line with the findings of Ref. <cit.>, simulations for carrier sheet densities three orders of magnitude lower (10^14 rather than 10^17 m^-2) do not show such a Coulombic bottleneck for extraction (see SM Fig. S22 <cit.>). To verify the robustness of the observed plateau in the simulated transients with k_inter=0 (blue lines), we performed alternative simulations in which electrons were allowed to be extracted from deeper within the perovskite layer (smearing the interface), or the perovskite's permittivity was increased to reduce the electric field strength (see SM Figs. S20 and S21 <cit.>). However, even extreme smearing of the interface or extreme permittivity values were not sufficient to remove the plateau. The simulations with k_inter=0 described above are qualitatively different from the experimental S(t) observed for the fullerene bilayers: the experiments did not show a plateau after a rapid decay; rather, the OPTP, TA and TRPL signals exhibited a continual decay. Hence we can conclude that there is a physical process that prevents the formation of the Coulomb bottleneck for extraction, such as cross-interface recombination or hole extraction. When cross-interface recombination was included by setting k_inter=10^-14 m^3s^-1 (orange lines in Fig. <ref>d-f), the photoconductance continually decreased with time, and the extraction rate did not slow significantly. This is a result of the cross-interface recombination reducing the strength of the Coulomb bottleneck: the excess sheet density in the ETL initially increases (Fig. <ref>f, orange line), but the accumulation of both p(x=0) and the electron density in the ETL causes the cross-interfacial recombination rate to increase until it balances the extraction rate, meaning the net charge in the ETL stops rising and the Coulomb bottleneck is weak. When the cross-interfacial recombination rate balances the extraction rate, extracted electrons can be considered to undergo recombination immediately. With an even higher rate, k_inter=10^-12 m^3s^-1, no substantial charge density builds up in the ETL at all (green lines in Fig. <ref>f). In this case, τ^* increases slowly due to carriers diffusing away from the interface (alluded to earlier), rather than due to the Coulombic bottleneck. Fig. <ref>d shows that parameter values of S=90 ms^-1 and k_inter=10^-12 m^3s^-1 give excellent agreement with the experimental data. Summarising the results from the numerical simulation, the relative amplitudes of the extraction and cross-interfacial recombination rate constants can be qualitatively obtained by visually inspecting the shape of the experimental decay at delay times below a few nanoseconds. For Spiro-OMeTAD, population removal (extraction to the HTL or recombination) was not observed on timescales of 1 ps to 3 ns, and hence the extraction velocity S and the surface recombination are small. For the fullerenes, the decay of the population within 3 ns indicates that the charge extraction rate was higher, and the persistence of the decay over 3 ns indicates that interfacial removal/loss processes, such as cross-interface recombination or hole extraction, limit the formation of a Coulomb bottleneck. In the following sections we quantify the extraction velocity S and k_inter.
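A back-of-the-envelope estimate (our own illustration, treating the extracted charge as a parallel-plate double layer via Gauss's law) indicates why the bottleneck appears at the pump-probe injection level but not at densities three orders of magnitude lower: the electrostatic potential drop across the double layer is compared with the thermal voltage.

```python
import scipy.constants as const

eps = 20 * const.epsilon_0             # perovskite permittivity, as in the model
VT = const.k * 300 / const.e           # thermal voltage, ~26 mV

def double_layer_drop(n_sheet, width=35e-9):
    """Potential drop across a double layer of sheet density n_sheet (m^-2),
    idealised as a parallel-plate capacitor of the given width (assumed)."""
    E_field = const.e * n_sheet / eps  # field from Gauss's law (V/m)
    return E_field * width

for n_sheet in (1e16, 1e13):           # ~10 % of 1e17 and of 1e14 m^-2
    print(f"{n_sheet:.0e} m^-2 -> {double_layer_drop(n_sheet) / VT:.3g} kT/q")
# High injection: drop >> kT/q, so the drift back-reaction throttles extraction;
# low injection: drop << kT/q, so diffusion dominates and no bottleneck forms.
```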
§.§ Analytical “one-step” model In the limiting case of rapid cross-interfacial recombination or simultaneous extraction of both carriers (as for the fullerene bilayers), we now discuss a simpler theoretical model of the carrier dynamics with fewer free parameters than the numerical model. With rapid cross-interfacial recombination, the two-step process of extraction followed by recombination is effectively a one-step process limited by the slowest step – extraction. Therefore, this effective surface recombination has a velocity given by the extraction velocity. This process means the electron and hole densities at the interface remain similar, and the carrier removal rate becomes proportional to the electron density in the perovskite at the interface. This scenario satisfies the neutrality approximation, n(x)=p(x), so there is no need to solve Gauss's law, and the two continuity equations can be combined into a single equation for the balanced density n(x)=p(x) <cit.>. For high-level injection, this ambipolar continuity equation is just the diffusion equation, and if the effective surface recombination is linear in density, then the equation can be solved analytically by separation of variables <cit.>: n(x,t) = e^-t/τ_B∑_m=1^∞ A_m cos(α_m(L - x)) e^- α_m^2 D t where τ_B is the bulk recombination time in the perovskite and the α_m are the eigenvalue solutions of the transcendental equation α_m tan(α_m L) = S/D (see SM section 2.3 <cit.>). Hence the α_m depend on L, S and D, where the surface recombination velocity S is equal to the electron extraction velocity used in the numerical model, and D is the ambipolar diffusion coefficient. The Fourier coefficients, A_m, are determined by the initial distribution and therefore by the absorption coefficient at the pump wavelength (from UV-visible spectroscopy). Similar expressions have been used to analyse surface recombination in silicon <cit.>, where D and L are substantially larger than in perovskites. In order to compare the simulated sheet densities (an even simpler Fourier series than Eq. <ref>) to the measured S(t)=-Δ T/T, we used the standard thin-film expression <cit.>. Overall, the only free parameters in the model were the ambipolar diffusion coefficient, D, and the interfacial extraction velocity, S. Using this expression with S=90 ms^-1 and μ=15 cm^2V^-1s^-1 (or D=0.39 cm^2s^-1) yields the τ^*(t) shown by the green points in Fig. <ref>e, which are almost identical to the numerical simulation's results (solid green lines) for the same parameters. At late times, the instantaneous lifetime tended to a constant, namely τ^*(t→∞)→ 10 ns in the case shown in Fig. <ref>e, and was similar for both front and back excitation. In this limit of long time delays, the carrier distribution described by Eq. <ref> is dominated by the smallest eigenvalue, α_0, because it has the slowest temporal decay (and also the smoothest spatial distribution). The distribution becomes stationary: the shape no longer changes, and only the amplitude decays, with rate 1/τ_eff = 1/τ_B + 1/τ_s, where 1/τ_s = α_0^2D and τ_B is the bulk recombination lifetime. For a semiconductor with large S on one surface (the extraction layer here) and negligible S on the other surface, τ_s can be approximated as <cit.>: τ_s = L/S + 4/D(L/π)^2. In contrast to representing surface recombination, as in a single semiconductor <cit.>, here for the perovskite-CTL bilayer S represents extraction at speed S followed by fast cross-interface recombination. The second term in τ_s accounts for the time required for charges to diffuse through the film's thickness before reaching the interface.
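As a numerical check on these expressions (a minimal sketch; the bracketing of each tangent branch is our own choice), the snippet below solves the transcendental equation for the α_m and compares the slowest-mode surface lifetime with the approximate formula, reproducing the ≈10 ns values quoted next.

```python
import numpy as np
from scipy.optimize import brentq

L, S, D = 600e-9, 90.0, 0.39e-4        # m, m/s, m^2/s (values from the text)
tau_B = 200e-9                         # bulk lifetime (s)

# Roots of u * tan(u) = S L / D with u = alpha * L, one per branch of tan
c = S * L / D
alpha = np.array([brentq(lambda u: u * np.tan(u) - c,
                         m * np.pi + 1e-9, m * np.pi + np.pi / 2 - 1e-9)
                  for m in range(5)]) / L

tau_s_exact = 1.0 / (alpha[0]**2 * D)              # slowest-mode lifetime
tau_s_approx = L / S + (4.0 / D) * (L / np.pi)**2  # the approximation above
tau_eff = 1.0 / (1.0 / tau_B + 1.0 / tau_s_approx)
print(tau_s_exact, tau_s_approx, tau_eff)          # ~10 ns, ~10.4 ns, ~10 ns
```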
Calculating the surface lifetime using Eq. <ref> for the same parameters as the numerical and analytical simulations in Fig. <ref> (i.e. L=600 nm, S=90 ms^-1, D=0.39 cm^2s^-1) yields τ_s=10.5 ns and thus τ_eff=10.0 ns, in excellent agreement with the τ^*(t→∞) deduced from the models. §.§ Global fits to experiment As an analytical expression, Eq. <ref> can be fitted iteratively to the experimental data orders of magnitude faster than the full numerical solution. By globally fitting a set of decay curves for different experimental conditions, using a common set of parameters, one can obtain a sharper minimum of the global cost function with respect to the free parameters than is possible for a fit to a single decay curve. Here, including front and back excitation with a short penetration depth gave a strong constraint on the ambipolar D (and mobility), as the shape of the decay is strongly influenced by the time taken for carriers to diffuse through the perovskite to the CTL interface. It is important to note that because the ambipolar continuity equation is linear, the normalised decay shape is the same regardless of the sheet density, and any error in the estimation of the initial sheet density has no impact on the parameters deduced. Fig. <ref> shows that the global fits of the analytical model (solid lines) accurately reproduce all the features of the OPTP experiments (points), with small residual error (a map of the residual versus μ and S for the C_60 bilayer contains a single minimum, as shown in SM Fig. S23). The experimental measurement for the C_60 bilayer with 410 nm excitation showed a clear delay in the decay of the signal when exciting on the back (quartz) side, which was captured by the model. The best-fit parameters were an ambipolar mobility μ=15±0.6 cm^2V^-1s^-1 (or ambipolar D=[3.8 ± 0.2]× 10^-5 m^2s^-1) and S=90±2 ms^-1. Returning to the more detailed two-step numerical model, for the C_60 bilayers we performed a series of simulations for different values of S and k_inter in order to estimate the values of the two rate constants. In Fig. <ref>b the rms error between the data reported in Fig. <ref>a (the fitted analytical solution) and the numerical model is shown, as a function of both the extraction velocity S and the cross-interfacial recombination rate k_inter. The numerical simulation agrees best where the rms error is close to zero (white area), inside the region defined by the black contour, corresponding to S=90 ms^-1 and k_inter≥ 7× 10^-13 m^3s^-1. The contour plot shows a single minimum, but the contours extend towards large values of k_inter, indicating that this is a lower bound for k_inter. The surface recombination velocity in the one-step model is equivalent to the extraction velocity in the two-step model (both are 90 ms^-1). For the PCBM bilayers, whilst the decay under back excitation was slower than for excitation on the front (PCBM) side, it did not show a distinct delay to the start of the decay. This suggests that the solution-deposited PCBM had penetrated into the perovskite's grain boundaries, and that the planar bilayer model may oversimplify a more complex scenario for the PCBM bilayer. Alternatively, the solution-phase processing used to add the PCBM may have modified the strain and/or defect density in the perovskite, changing its τ_B. However, a global fit to measurements on PCBM yielded reasonable fits (Fig. <ref>c), with best-fit parameters μ=15± 1 cm^2V^-1s^-1, S=55±2 ms^-1 and τ_B=15± 2 ns. Therefore we conclude that the electron extraction velocity for PC_60BM is S≃55 ms^-1, slightly smaller than the S=90±2 ms^-1 for the C_60 bilayer.
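The global-fitting procedure can be sketched as follows (a minimal illustration: the numerical projection of the Beer-Lambert initial profile onto the eigenfunctions, the mode truncation, and all variable names are our own choices; the real analysis additionally folds in the thin-film conversion of -ΔT/T).

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.optimize import brentq, least_squares

VT, L, tau_B = 0.0259, 600e-9, 200e-9              # V, m, s
xg = np.linspace(0.0, L, 400)

def eigenvalues(S, D, modes=30):
    c = S * L / D
    return np.array([brentq(lambda u: u * np.tan(u) - c,
                            m * np.pi + 1e-9, m * np.pi + np.pi / 2 - 1e-9)
                     for m in range(modes)]) / L

def sheet_decay(t, mu, S, delta=35e-9, front=True):
    """Normalised sheet density from the series solution, for a Beer-Lambert
    initial profile of penetration depth delta; x = 0 is the CTL interface."""
    D = mu * VT
    a = eigenvalues(S, D)
    n0 = np.exp(-(xg if front else L - xg) / delta)
    phi = np.cos(np.outer(a, L - xg))              # eigenfunctions cos(a_m(L-x))
    A = trapezoid(phi * n0, xg, axis=1) / trapezoid(phi**2, xg, axis=1)
    Nm = A * trapezoid(phi, xg, axis=1)            # sheet density of each mode
    N = (Nm[:, None] * np.exp(-D * np.outer(a**2, t))).sum(axis=0)
    return N * np.exp(-t / tau_B) / N[0]           # assumes t[0] = 0

def residuals(theta, t, y_front, y_back):
    mu, S = theta                                  # the shared fit parameters
    return np.concatenate([sheet_decay(t, mu, S, front=True) - y_front,
                           sheet_decay(t, mu, S, front=False) - y_back])

# With measured, normalised decays t_data, y_front, y_back (not shown here):
# fit = least_squares(residuals, x0=[15e-4, 90.0],
#                     args=(t_data, y_front, y_back))
```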
We note that Krogmeier et al. <cit.> deduced a value of S=53 ms^-1 for a perovskite-PC_61BM bilayer from the power dependence of their PL dynamics on nanosecond timescales, very similar to our values. §.§ Further discussion By considering the results obtained from the numerical model including the Coulombic forces and interfacial recombination, we demonstrated that the accumulation of electrons in the fullerene layer is undermined by processes such as cross-interface recombination or hole extraction. This corroborates the interpretation of the SS-PL measurements (Fig. <ref>e), in which PL quenching by the fullerenes is interpreted as resulting from the introduction of additional channels for non-radiative recombination <cit.>. Cross-interface recombination and hole extraction are difficult to distinguish, as both lead to the loss of a hole from the perovskite and reduce the negative charge in the ETL (they are compared in SM section 2.5). Both mechanisms have been discussed in the literature. Schulz et al. <cit.> suggested that the observed tail of states above the C_60 valence band (VB) reduces the energy offset between the C_60 and perovskite VBs and thus may allow some hole extraction into the ETL. In contrast, Warby et al. <cit.> concluded that cross-interface recombination is dominant. They observed that adding a C_60 layer increased the sub-gap feature in the external quantum efficiency (EQE), indicating the presence of intragap states. They did not observe any PL or electroluminescence from the C_60 layer, which would be expected following hole extraction and subsequent radiative recombination within the C_60. Therefore, they proposed that recombination is mostly cross-interface, via intragap states or the broadening of the C_60 LUMO states due to disorder, which had previously been reported <cit.>. The attribution to cross-interface recombination is further supported by the observation that adding a LiF interlayer a few nanometres thick between the perovskite and the C_60 can reduce the sub-gap feature in the EQE (i.e. lower the intragap state density) and improve V_oc. Liu et al. <cit.> also showed that nanometre-thick interlayers improved the performance. Their density functional theory (DFT) simulations showed that when C_60 is close to the perovskite surface, new states form in the bandgap, but this does not occur if the C_60 is moved away by an interlayer. Finally, regarding the Spiro-OMeTAD-perovskite interface, Eq. <ref> can be used to roughly estimate the surface loss or removal velocity from the PL lifetime. The PL data in Fig. <ref>g show that the effective lifetime of the Spiro-OMeTAD bilayer (≈ 100 ns) is roughly half that of the perovskite bulk (estimated from the bare perovskite reference), meaning that the surface lifetime is comparable to the bulk lifetime, τ_s ≈τ_B ≈ 200 ns. Since the second term (i.e. the diffusion contribution) in the surface lifetime (Eq. <ref>) is only ≈ 3 ns, the surface lifetime is dominated by the first term, τ_s ≃ L/S. S is then estimated as ≈ 2 ms^-1, more than an order of magnitude below the surface extraction rate for the ETL bilayers. With no rapid extraction for the HTL, it was not possible to determine the cross-interfacial recombination rate from the comparison between ultrafast spectroscopy and numerical simulation.
§ CONCLUSION The combination of picosecond optical spectroscopy and an advanced carrier dynamics model that includes Coulombic forces allowed us to provide an improved understanding of the charge carrier dynamics in perovskite-CTL bilayer heterostructures that employ three of the most common CTLs, namely C_60, PCBM and Spiro-OMeTAD. For perovskite-Spiro-OMeTAD bilayers, negligible decay of the carrier density in the perovskite was observed within the first 3 ns after photoexcitation, indicating that hole extraction was relatively slow (around 5 to 10 times slower than the electron extraction velocity of the fullerenes). Importantly, surface recombination at the interface must also be slow, which may explain the excellent performance of Spiro-OMeTAD in solar cell devices despite its relatively slow hole extraction. For perovskite-fullerene bilayers, when charge carriers were photoexcited near the fullerene interface, the carrier density decayed within the first few hundred picoseconds and continued to decay on nanosecond timescales. The numerical model including Coulombic forces indicated that at the higher carrier densities used in pump-probe experiments, the population decay should plateau within the first nanosecond, because the accumulation of extracted electrons in the fullerene repels electrons in the perovskite and forms an electrical double layer. The absence of this characteristic fingerprint of a Coulombic bottleneck indicates that there is substantial loss of holes at the interface. We highlighted cross-interface recombination and hole extraction as possible physical mechanisms active at the interface, but note that these cannot be distinguished by these experiments. The numerical and analytical carrier dynamics models presented were found to accurately replicate the experimental data, and determined the interfacial extraction velocity, along with the ambipolar diffusion coefficient for vertical transport through the perovskite films. Perovskite-fullerene bilayers were found to fall under the limiting case of rapid interfacial recombination (for which the analytical “one-step” model is suitable), and thus only a lower bound for the cross-interface recombination rate constant could be determined. The trends observed with sub-nanosecond THz and transient absorption measurements were also observed in nanosecond photoluminescence measurements that use lower injected carrier densities, comparable to solar illumination, indicating that the properties extracted from the sub-nanosecond measurements at higher injected carrier densities are relevant to operational carrier densities. Having identified targets for improvement in three of the most popular CTLs, future work will build upon the combination of picosecond optical spectroscopy and drift-diffusion modelling presented here, for instance to study the interface extraction and recombination kinetics of other heterojunctions, or to fully elucidate the interfacial recombination processes active at such interfaces. § MATERIAL SYNTHESIS In this work, thin films of (FA_0.83MA_0.17)_0.95Cs_0.05Pb(I_0.83Br_0.17)_3 were investigated. All processing was carried out in a N_2 glove box with both O_2 and H_2O levels below 0.1 ppm. Similar results were obtained on films of (FA_0.83MA_0.17)_0.95Cs_0.05PbI_3. Perovskite precursors were prepared such that the (FA + MA):Pb molar ratio was 1:1.1 for both compositions.
For the mixed I-Br composition, PbI_2 (1.1 mmol, TCI) was added to a DMF:DMSO mix (1 ml, 4:1 ratio by volume), followed by FAI (1 mmol, Greatcell Solar), PbBr_2 (0.22 mmol, TCI), and then MABr (0.2 mmol, Greatcell Solar). A CsI solution was prepared by dissolving CsI (78 mg, Sigma Aldrich) in DMSO (200 μl). This solution (42 μl) was added to the PbI_2 + FAI + PbBr_2 + MABr in DMF/DMSO mix. For the Spiro-OMeTAD layer, Spiro-OMeTAD (72.3 mg, Borun) was dissolved in chlorobenzene (abbreviated CB). Separately, a solution of FK209 in acetonitrile (300 mg ml^-1) and a solution of Li-TFSi in acetonitrile (520 mg ml^-1) were prepared. To the Spiro solution, tBP solution (28.8 μl), Li-TFSi solution (17.5 μl) and FK209 solution (29 μl) were added. For PC_60BM, powder (20 mg, Nano-C) was dissolved in CB (1 ml). To make thinner perovskite films, the perovskite solution (100 μl) was added to the DMF:DMSO 4:1 mix (500 μl). For the thinner CTLs, the Spiro-OMeTAD (50 μl) and PC_60BM solutions were added to CB (150 μl). The perovskite layers were formed by spin-coating on quartz substrates using an anti-solvent method. The perovskite solution was dropped onto the substrate prior to spinning. The spinning was divided into two steps: first, the substrate was accelerated up to 1000 rpm at a rate of 200 rpm/s, then accelerated after 8 s to 6000 rpm at a rate of 1000 rpm/s. 150 μl of CB was dropped onto the spinning substrate 30 s into the second step. The sample was then annealed at 100 °C for 1 hour. To form the Spiro-OMeTAD layer, the perovskite sample was accelerated to 4000 rpm at a rate of 2000 rpm/s, and the Spiro-OMeTAD solution was dropped onto the spinning perovskite sample 5 s after starting (spinning stopped at 25 s). To form the PC_60BM layer, the PC_60BM solution was dropped onto the stationary perovskite sample, which was then accelerated to 1000 rpm at a rate of 1000 rpm/s (spinning stopped at 30 s). The C_60 layers were deposited by thermal evaporation at a rate of 0.1-0.4 Ås^-1. Top-down SEM images were recorded with a Zeiss SUPRA 55-VP, and cross-sectional images were taken with a Zeiss Gemini 500. § STEADY-STATE OPTICAL AND TIME-RESOLVED PL SPECTROSCOPY UV-visible spectroscopy was performed using a spectrophotometer (Perkin Elmer Lambda 1050) with an integrating sphere module, to determine the films' absorbance while accounting for scattering and reflection losses. The steady-state (SS) and time-resolved (TR) PL were measured using a Horiba Jobin Yvon Fluorolog-3 fluorescence spectrometer, comprising an iHR320 monochromator and a Hamamatsu R982P PMT. TR-PL was measured by time-correlated single photon counting (TCSPC), with the timing electronics provided by a Horiba FluoroHub. The samples were illuminated with pulsed diode sources with centre wavelengths of 405 nm and 633 nm, pulse energies of ∼3.4 and ∼1.7 pJ/pulse respectively, and pulse durations <200 ps. The angle of incidence was ∼ 45^∘ and the Gaussian illumination spots yielded maximum pulse fluences of 25 and 10 nJcm^-2 per pulse (5× 10^14 and 3× 10^14 photons m^-2 per pulse) for 405 nm and 633 nm excitation, respectively. The PL was collected at 45^∘. A long-pass filter was placed between the sample and the spectrometer, and the spectrometer was set to detect at the peak of the PL spectrum (760 nm). The fraction of photoexcitation cycles that resulted in detection of a photon (the count rate) was kept below 2%.
The PL decays were much longer than the instrument response function (IRF, 2 ns), and so iterative reconvolution of the data with the IRF was not necessary. The repetition period of the pulsed diodes was chosen to ensure the PL intensity decayed to zero before the next pump pulse arrived (typically 1-10 μs). To normalise the TCSPC decays, the background offset was first subtracted, and then the data were normalised to a value just after the initial small spike resulting from residual scatter of the pump light. The relatively low fluence resulted in a low injected carrier density in the perovskite, and carrier recombination was therefore dominated by defect-mediated recombination <cit.>, which is sensitive to changes in the defect density caused by illumination or atmosphere <cit.>. The TCSPC decay profile can thus depend on the environmental exposure of the sample and may change during an experimental session. To study the differences between the CTLs we adopted an experimental protocol that minimised any differences in surface defect density during measurements. Samples were measured in ambient air, starting after several minutes of exposure, until the PL lifetime had stabilised, so that all samples had the opportunity to reduce their surface defect density to a similar level. This resulted in bright, long-lived PL dynamics that were reproducible on repeated measurement, making differences due to the CTL easier to distinguish. Alternative environments (e.g. nitrogen) resulted in shorter PL lifetimes, which makes differences due to the CTLs harder to distinguish. § ULTRAFAST SPECTROSCOPY Time-resolved measurements of carrier densities were performed using transient absorption (TA) spectroscopy and optical pump-terahertz probe (OPTP) spectroscopy, which measured dynamics with sub-ps resolution over a 3 ns range. The OPTP setup is described in detail in Ref. <cit.>. Photon fluxes per pulse for OPTP and TA were calculated by considering the effective area of the overlapping pump and probe spots. For TA, the Gaussian intensity profiles of the pump and probe spots had σ∼ 0.4 mm and σ∼ 0.04 mm, respectively. For OPTP, the Gaussian profiles of the pump and probe spots had σ∼ 1 mm and σ∼ 0.4 mm, respectively (for the THz probe this was the standard deviation of the electric field). For TA, femtosecond laser pulses were generated by a Spitfire Ace amplified Ti:Sapphire laser system (Spectra Physics), outputting 13 mJ pulses with a centre wavelength of 800 nm at a repetition rate of 1 kHz. A beam splitter directed a fraction of this beam power for use in the TA experiment (another fraction was used for the OPTP experiments). A second beam splitter directed a fraction of this beam power to an optical parametric amplifier (Light Conversion TOPAS) to generate pump pulses of different wavelengths, and the other fraction of the beam was focussed into a 2 mm CaF_2 window to generate a white-light continuum (the window was continuously translated to prevent damage). A mechanical delay stage was used to vary the path length of the beam that generates the white-light pulse, and thus the relative time delay between the white-light pulse and the pump pulse arriving at the sample. A mechanical chopper (500 Hz) was used to alternately block and pass the pump pulse, so that the white-light pulses measured pump-on and pump-off conditions alternately.
The white-light pulses were directed into a fibre-coupled Avantes spectrometer (Avaspec 1650 Fast USB), consisting of a grating and a multi-pixel CCD, to measure the spectra shot-by-shot. ΔOD spectra were calculated from pairs of pump-on and pump-off spectra. § THEORETICAL METHODS The numerical solution to the coupled continuity and Poisson equations was constructed in Python, with calculations accelerated by the Numba library. The coupled equations were solved using the forward Euler method, and the drift-diffusion equations for the currents were discretised using the Scharfetter-Gummel scheme to improve stability <cit.>. The drift-diffusion expression for the electron charge current is J_n(x,t) = -qμ_e n(x,t) ∂Φ(x,t)/∂ x + q D_e ∂ n(x,t)/∂ x and includes the mobility μ_e, the diffusion coefficient D_e, the electric potential Φ and the electron density n. A similar expression holds for holes. The electron and hole diffusion constants were given by D_e,h=μ_e,h k_B T / q for mobilities μ_e,h. Here we assumed equal electron and hole mobilities in the perovskite for simplicity. The mobility of carriers in the CTL is much lower than that of the perovskite, and it was thus assumed that any carriers extracted into the CTL would remain close to the interface within the 3 ns time window of the experiments, and hence that all CTL carriers contribute to cross-interface recombination. Poisson's equation, ∂^2 Φ(x,t)/∂ x^2 = -ρ_f(x,t)/ϵ_r ϵ_0, was solved at each time step for the free charge density ρ_f, where the perovskite layer was assumed to have dielectric constant ϵ_r=20. Numerical simulations were validated by checking numerical convergence for a variety of discretisation conditions, and by comparison to analytical results in limiting cases (see SM Figs. S12-S16 <cit.>). The results presented here used a 3 nm spatial step and a 10 fs time step over the 2.5 ns simulation window. The initial photoexcited carrier density and depth profile in the perovskite matched those used in the experiment. Supplemental Material associated with this article is available online. EBC would like to thank the EPSRC (UK) for a PhD studentship. RJM would like to acknowledge funding from the EPSRC (EP/V001302/1). KDGIJ thanks the Equal Opportunities Foundation (Hong Kong) for financial support. The authors thank Prof. Ross Hatton, Stephen York and Dr. Eric Hu for assistance with C_60 film deposition, SEM and AFM studies, respectively. The authors acknowledge use of the Warwick Centre for Ultrafast Spectroscopy Research Technology Platform.
§ REFERENCES
[Lee2012] M. M. Lee, J. Teuscher, T. Miyasaka, T. N. Murakami, and H. J. Snaith, Science 338, 643 (2012). https://doi.org/10.1126/SCIENCE.1228604/SUPPL_FILE/LEE.SM.PDF
[Jena2019] A. K. Jena, A. Kulkarni, and T. Miyasaka, Chemical Reviews 119, 3036 (2019). https://doi.org/10.1021/acs.chemrev.8b00539
[Jayawardena2020] K. Jayawardena, R. M. I. Bandara, M. Monti, E. Butler-Caddle, T. Pichler, H. Shiozawa, Z. Wang, S. Jenatsch, S. J. Hinder, M. G. Masteghin, M. Patel, H. M. Thirimanne, W. Zhang, R. A. Sporea, J. Lloyd-Hughes, and S. R. P. Silva, Journal of Materials Chemistry A 8 (2020). https://doi.org/10.1039/c9ta10543c
[Wurfel2009] P. Würfel and U. Würfel, Physics of Solar Cells: From Basic Principles to Advanced Concepts (Wiley-VCH, Weinheim, 2009). https://go.exlibris.link/B29Qhtfb
[Wurfel2015] U. Wurfel, A. Cuevas, and P. Wurfel, IEEE Journal of Photovoltaics 5, 461 (2015). https://doi.org/10.1109/JPHOTOV.2014.2363550
[Hara2013] K. O. Hara and N. Usami, Journal of Applied Physics 114, 153101 (2013). https://doi.org/10.1063/1.4825046
[Lipovsek2019] B. Lipovšek, F. Smole, M. Topič, I. Humar, and A. R. Sinigoj, AIP Advances 9, 055026 (2019). https://doi.org/10.1063/1.5092948
[Juarez-Perez2014] E. J. Juarez-Perez, M. Wußler, F. Fabregat-Santiago, K. Lakus-Wollny, E. Mankel, T. Mayer, W. Jaegermann, and I. Mora-Sero, Journal of Physical Chemistry Letters 5, 680 (2014). https://doi.org/10.1021/jz500059v
[Zhang2015] Y. Zhang, M. Liu, G. E. Eperon, T. C. Leijtens, D. McMeekin, M. Saliba, W. Zhang, M. De Bastiani, A. Petrozza, L. M. Herz, M. B. Johnston, H. Lin, and H. J. Snaith, Materials Horizons 2, 315 (2015). https://doi.org/10.1039/C4MH00238E
[Liao2020] J.-F. Liao, W.-Q. Wu, Y. Jiang, J.-X. Zhong, L. Wang, and D.-B. Kuang, Chem. Soc. Rev. 49, 354 (2020). https://doi.org/10.1039/C8CS01012A
[Wolff2019] C. M. Wolff, P. Caprioglio, M. Stolterfoht, and D. Neher, Advanced Materials 31, 1902762 (2019). https://doi.org/10.1002/adma.201902762
[LeCorre2019] V. M. Le Corre, M. Stolterfoht, L. Perdigón Toro, M. Feuerstein, C. Wolff, L. Gil-Escrig, H. J. Bolink, D. Neher, and L. J. A. Koster, ACS Applied Energy Materials 2, 6280 (2019). https://doi.org/10.1021/acsaem.9b00856
[Yoo2022] J. J. Yoo, S. S. Shin, and J. Seo, ACS Energy Letters 7, 2084 (2022). https://doi.org/10.1021/acsenergylett.2c00592
[Aydin2020] E. Aydin, T. G. Allen, M. De Bastiani, L. Xu, J. Ávila, M. Salvador, E. Van Kerschaver, and S. De Wolf, Nature Energy 5, 851 (2020). https://doi.org/10.1038/s41560-020-00687-4
[DeBastiani2021] M. De Bastiani, A. J. Mirabelli, Y. Hou, F. Gota, E. Aydin, T. G. Allen, J. Troughton, A. S. Subbiah, F. H. Isikgor, J. Liu, L. Xu, B. Chen, E. Van Kerschaver, D. Baran, B. Fraboni, M. F. Salvador, U. W. Paetzold, E. H. Sargent, and S. De Wolf, Nature Energy 6, 167 (2021). https://doi.org/10.1038/s41560-020-00756-8
[Saliba2016] M. Saliba, T. Matsui, J. Y. Seo, K. Domanski, J. P. Correa-Baena, M. K. Nazeeruddin, S. M. Zakeeruddin, W. Tress, A. Abate, A. Hagfeldt, and M. Grätzel, Energy and Environmental Science 9, 1989 (2016). https://doi.org/10.1039/c5ee03874j
[Note1] See Supplemental Material [url] for: additional optical data; details of the numerical and analytical models; and additional simulation results. It includes additional Refs. <cit.>.
[Sarritzu2017] V. Sarritzu, N. Sestu, D. Marongiu, X. Chang, S. Masi, A. Rizzo, S. Colella, F. Quochi, M. Saba, A. Mura, and G. Bongiovanni, Scientific Reports 7, 1 (2017). https://doi.org/10.1038/srep44629
[Stolterfoht2018] M. Stolterfoht, C. M. Wolff, J. A. Márquez, S. Zhang, C. J. Hages, D. Rothhardt, S. Albrecht, P. L. Burn, P. Meredith, T. Unold, and D. Neher, Nature Energy 3, 847 (2018). https://doi.org/10.1038/s41560-018-0219-8
[Hutter2020] E. M. Hutter, T. Kirchartz, B. Ehrler, D. Cahen, and E. Von Hauff, Pitfalls and prospects of optical spectroscopy to characterize perovskite-transport layer interfaces (2020), arXiv:1912.10399. https://doi.org/10.1063/1.5143121
[Hempel2022] H. Hempel, T. J. Savenjie, M. Stolterfoht, J. Neu, M. Failla, V. C. Paingad, P. Kužel, E. J. Heilweil, J. A. Spies, M. Schleuning, J. Zhao, D. Friedrich, K. Schwarzburg, L. D. A. Siebbeles, P. Dörflinger, V. Dyakonov, R. Katoh, M. J. Hong, J. G. Labram, M. Monti, E. Butler-Caddle, J. Lloyd-Hughes, M. M. Taheri, J. B. Baxter, T. J. Magnanelli, S. Luo, J. M. Cardon, S. Ardo, and T. Unold, Advanced Energy Materials, 2102776 (2022). https://doi.org/10.1002/AENM.202102776
[Butler-Caddle2023] E. Butler-Caddle, N. E. Grant, S. L. Pain, J. D. Murphy, K. D. G. I. Jayawardena, and J. Lloyd-Hughes, Applied Physics Letters 122, 012101 (2023). https://doi.org/10.1063/5.0130721
[Shi2016] D. Shi, X. Qin, Y. Li, Y. He, C. Zhong, J. Pan, H. Dong, W. Xu, T. Li, W. Hu, J. L. Brédas, and O. M. Bakr, Science Advances 2 (2016). https://doi.org/10.1126/SCIADV.1501491
[Priebe1997] G. Priebe and R. Könenkamp, Appl. Phys. Lett. 71, 2160 (1997). https://doi.org/10.1063/1.119368
[Cabanillas-Gonzalez2006] J. Cabanillas-Gonzalez, T. Virgili, A. Gambetta, G. Lanzani, T. D. Anthopoulos, and D. M. de Leeuw, Phys. Rev. Lett. 96, 106601 (2006). https://doi.org/10.1103/PhysRevLett.96.106601
[Singh2007] T. B. Singh, N. S. Sariciftci, H. Yang, L. Yang, B. Plochberger, and H. Sitter, Applied Physics Letters 90, 213512 (2007). https://doi.org/10.1063/1.2743386
[Monti2018] M. Monti, S. X. Tao, M. Staniforth, A. Crocker, E. Griffin, A. Wijesekara, R. A. Hatton, and J. Lloyd-Hughes, Journal of Physical Chemistry C 122, 20669 (2018). https://doi.org/10.1021/acs.jpcc.8b07792
[Monti2020] M. Monti, K. D. G. I. Jayawardena, E. Butler-Caddle, R. M. I. Bandara, J. M. Woolley, M. Staniforth, S. R. P. Silva, and J. Lloyd-Hughes, Physical Review B 102, 245204 (2020). https://doi.org/10.1103/PhysRevB.102.245204
[Ren2023] A. Ren, H. Wang, L. Dai, J. Xia, X. Bai, E. Butler-Caddle, J. A. Smith, H. Lai, J. Ye, X. Li, S. Zhan, C. Yao, Z. Li, M. Tang, X. Liu, J. Bi, B. Li, S. Kai, R. Chen, H. Yan, J. Hong, L. Yuan, I. P. Marko, A. Wonfor, F. Fu, S. A. Hindmarsh, A. M. Sanchez, J. Lloyd-Hughes, S. J. Sweeney, A. Rao, N. C. Greenham, J. Wu, Y. Li, Q. Cheng, R. H. Friend, R. V. Penty, I. H. White, H. J. Snaith, and W. Zhang, Nature Photonics 17, 798 (2023). https://doi.org/10.1038/s41566-023-01242-9
[Baranowski2020] M. Baranowski and P. Plochocka, Advanced Energy Materials 10, 1903659 (2020). https://doi.org/10.1002/aenm.201903659
[Giorgi2013] G. Giorgi, J. I. Fujisawa, H. Segawa, and K. Yamashita, Journal of Physical Chemistry Letters 4, 4213 (2013). https://doi.org/10.1021/JZ4023865/SUPPL_FILE/JZ4023865-LIVESLIDES.HTM
[Krogmeier2018] B. Krogmeier, F. Staub, D. Grabowski, U. Rau, and T. Kirchartz, Sustainable Energy Fuels 2, 1027 (2018). https://doi.org/10.1039/C7SE00603A
[Courtier2019] N. E. Courtier, J. M. Cave, A. B. Walker, G. Richardson, and J. M. Foster, Journal of Computational Electronics 18, 1435 (2019). https://doi.org/10.1007/S10825-019-01396-2/FIGURES/4
[Calado2022] P. Calado, I. Gelmetti, B. Hilton, M. Azzouzi, J. Nelson, and P. R. Barnes, Journal of Computational Electronics 21, 960 (2022). https://doi.org/10.1007/S10825-021-01827-Z/FIGURES/16
[Selberherr1984] S. Selberherr, Analysis and Simulation of Semiconductor Devices (Springer Vienna, 1984). https://doi.org/10.1007/978-3-7091-8752-4
[Pena-Camargo2022] F. Peña-Camargo, J. Thiesbrummel, H. Hempel, A. Musiienko, V. M. Le Corre, J. Diekmann, J. Warby, T. Unold, F. Lang, D. Neher, and M. Stolterfoht, Applied Physics Reviews 9, 021409 (2022). https://doi.org/10.1063/5.0085286
[Crothers2017] T. W. Crothers, R. L. Milot, J. B. Patel, E. S. Parrott, J. Schlipf, P. Müller-Buschbaum, M. B. Johnston, and L. M. Herz, Nano Letters 17, 5782 (2017). https://doi.org/10.1021/acs.nanolett.7b02834
[Hutchinson2023] J. D. Hutchinson, E. Ruggeri, J. M. Woolley, G. Delport, S. D. Stranks, and R. L. Milot, Advanced Functional Materials 33, 2305736 (2023). https://doi.org/10.1002/ADFM.202305736
[Schulz2015] P. Schulz, L. L. Whittaker-Brooks, B. A. Macleod, D. C. Olson, Y. L. Loo, and A. Kahn, Advanced Materials Interfaces 2, 1400532 (2015). https://doi.org/10.1002/ADMI.201400532
[McKelvey1966] J. P. McKelvey, Solid State and Semiconductor Physics, Harper's Physics Series (Harper & Row, New York, 1966). https://go.exlibris.link/sF3hZM9T
[Hahn2012] D. W. Hahn and M. N. Özişik, Heat Conduction, 3rd ed. (John Wiley and Sons, 2012). https://doi.org/10.1002/9781118411285
[Luke1987] K. L. Luke and L.-J. Cheng, Journal of Applied Physics 61, 2282 (1987). https://doi.org/10.1063/1.337938
[Sproul1994] A. B. Sproul, Journal of Applied Physics 76, 2851 (1994). https://doi.org/10.1063/1.357521
[Burdanova2019] M. G. Burdanova, A. P. Tsapenko, D. A. Satco, R. Kashtiban, C. D. W. Mosley, M. Monti, M. Staniforth, J. Sloan, Y. G. Gladush, A. G. Nasibulin, and J. Lloyd-Hughes, ACS Photonics 6, 1058 (2019). https://doi.org/10.1021/acsphotonics.9b00138
[Warby2022] J. Warby, F. Zu, S. Zeiske, E. Gutierrez-Partida, L. Frohloff, S. Kahmann, K. Frohna, E. Mosconi, E. Radicchi, F. Lang, S. Shah, F. Peña-Camargo, H. Hempel, T. Unold, N. Koch, A. Armin, F. De Angelis, S. D. Stranks, D. Neher, and M. Stolterfoht, Advanced Energy Materials 12, 2103567 (2022). https://doi.org/10.1002/aenm.202103567
[Shao2016] Y. Shao, Y. Yuan, and J. Huang, Nature Energy 1, 1 (2016). https://doi.org/10.1038/nenergy.2015.1
[Liu2022] J. Liu, M. D. Bastiani, E. Aydin, G. T. Harrison, Y. Gao, R. R. Pradhan, M. K. Eswaran, M. Mandal, W. Yan, A. Seitkhan, M. Babics, A. S. Subbiah, E. Ugur, F. Xu, L. Xu, M. Wang, A. ur Rehman, A. Razzaq, J. Kang, R. Azmi, A. A. Said, F. H. Isikgor, T. G. Allen, D. Andrienko, U. Schwingenschlögl, F. Laquai, and S. D. Wolf, Science 377, 302 (2022). https://doi.org/10.1126/science.abn8910
[Johnston2016] M. B. Johnston and L. M. Herz, Accounts of Chemical Research 49 (2016). https://doi.org/10.1021/acs.accounts.5b00411
[Meggiolaro2017] D. Meggiolaro, E. Mosconi, and F. De Angelis, ACS Energy Letters 2, 2794 (2017). https://doi.org/10.1021/ACSENERGYLETT.7B00955
[Brenes2018] R. Brenes, C. Eames, V. Bulović, M. S. Islam, and S. D. Stranks, Advanced Materials 30, 1706208 (2018). https://doi.org/10.1002/adma.201706208
[Quitsch2018] W. A. Quitsch, D. W. Dequilettes, O. Pfingsten, A. Schmitz, S. Ognjanovic, S. Jariwala, S. Koch, M. Winterer, D. S. Ginger, and G. Bacher, Journal of Physical Chemistry Letters 9, 2062 (2018). https://doi.org/10.1021/acs.jpclett.8b00212
[Motti2019] S. G. Motti, D. Meggiolaro, A. J. Barker, E. Mosconi, C. A. R. Perini, J. M. Ball, M. Gandini, M. Kim, F. De Angelis, and A. Petrozza, Nature Photonics 13, 532 (2019). https://doi.org/10.1038/s41566-019-0435-1
[Karakus2015] M. Karakus, S. A. Jensen, F. D'Angelo, D. Turchinovich, M. Bonn, and E. Cánovas, Journal of Physical Chemistry Letters 6, 4991 (2015). https://doi.org/10.1021/acs.jpclett.5b02485
[Farrell2017] P. Farrell, N. Rotundo, D. H. Doan, M. Kantner, J. Fuhrmann, and T. K. S. E. D.-D.
Models, in https://doi.org/10.4324/9781315152318-25 booktitle Handbook of Optoelectronic Device Modeling and Simulation (publisher CRC Press, year 2017)NoStop [Evans et al.(2000)Evans, Blackledge, and Yardley]Evans2000 author author G. A. Evans, author J. M. Blackledge, and author P. D. Yardley, https://doi.org/10.1007/978-1-4471-0377-6 title Numerical Methods for Partial Differential Equations, Springer Undergraduate Mathematics Series (publisher Springer London, address London, year 2000)NoStop [Vasileska et al.(2017)Vasileska, Goodnick, and Klimeck]Vasileska2010 author author D. Vasileska, author S. M. Goodnick, and author G. Klimeck, https://doi.org/10.1201/b13776 title Computational electronics: Semiclassical and quantum device modeling and simulation (publisher CRC Press, year 2017)NoStop [Eymard et al.(2000)Eymard, Gallouët, and Herbin]Eymard2000 author author R. Eymard, author T. Gallouët, and author R. Herbin, in https://doi.org/10.1016/S1570-8659(00)07005-8 booktitle Handbook of Numerical Analysis, Vol. volume 7 (publisher Elsevier, year 2000) pp. pages 713–1018NoStop [LeVeque(1990)]LeVeque1990 author author R. J. LeVeque, https://doi.org/10.1007/978-3-0348-5116-9 title Numerical Methods for Conservation Laws (publisher Birkhäuser Verlag, address Basel, year 1990)NoStop
http://arxiv.org/abs/2407.03001v1
20240703105926
Radiative corrections of the order $\alpha\,(Z\,\alpha)^6$ for rotational states of two-body systems
[ "Vojtěch Patkóš", "Krzysztof Pachucki" ]
physics.atom-ph
[ "physics.atom-ph" ]
Faculty of Mathematics and Physics, Charles University, Ke Karlovu 3, 121 16 Prague 2, Czech Republic Faculty of Physics, University of Warsaw, Pasteura 5, 02-093 Warsaw, Poland § ABSTRACT The analytical calculation of the complete α (Z α)^6 one-loop radiative correction to energies of two-body systems with angular momenta l>0, consisting of a pointlike particle and an extended-size nucleus with arbitrary masses and spin 1/2, is presented. The obtained results apply to a wide variety of two-body systems, such as hydrogen, muonium, positronium, and antiprotonic atoms. Radiative corrections of the order α (Z α)^6 for rotational states of two-body systems Krzysztof Pachucki July 8, 2024 ======================================================================================= § INTRODUCTION Hadronic two-body systems, such as antiprotonic atoms in circular states l∼ n, offer the possibility to probe the existence of long-range interactions between hadrons, which cannot be probed by other means. The emission spectroscopy of light antiprotonic atoms is feasible at CERN <cit.>, and from the theoretical side these atoms can be calculated very accurately. In fact, in a highly excited circular state the effective coupling Z α/n is much smaller than one, and so the NRQED approach can be used to obtain the energy levels even for high-Z nuclei. Such calculations for an arbitrary mass ratio and arbitrary state up to the order (Z α)^6 have recently been performed in Refs. <cit.>, and here we extend this result to the orders α (Z α)^6 and Z^2 α (Z α)^6. Other two-body systems, such as hydrogen and hydrogen-like ions, serve for the determination of the fundamental physical constants <cit.>, because they can be measured and calculated with high accuracy. Significant progress has been achieved in recent years by the inclusion of the nuclear charge radii obtained from muonic hydrogen and other light muonic atoms <cit.>. The current value of the Rydberg constant, based mainly on the precisely measured 1S-2S transition in H <cit.> and 2S-2P in μH <cit.>, has a relative accuracy of 1.1·10^-12, limited by uncertainties in theoretical predictions for H and μH <cit.>. These uncertainties mainly come from the two-loop electron self-energy, the radiative recoil, and, in the case of muonic atoms, the nuclear polarizability. The radiative recoil correction is the topic of this work. In this paper, we perform a calculation at the α (Z α)^6 order for two-body systems with arbitrary masses, including the self-energy of the orbiting particle and allowing for an arbitrary nucleus. In the first step, we consider the states with l>0. The lower-order terms have recently been obtained for l=0 states in Ref. <cit.>, and for l>0 in Refs. <cit.>. The α^7 corrections are currently known only in the nonrecoil limit <cit.>, and here we derive them for an arbitrary mass ratio. The results obtained may also find applications in more complicated few-electron systems like the helium atom, where discrepancies between theoretical predictions and experimental values for the ionization energies have been observed <cit.>; these might come from a similar calculation of the radiative α^7 m correction for triplet states of the He atom <cit.>.
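For orientation, two illustrative magnitudes may help; the numbers below are our own back-of-the-envelope estimates, not taken from the source. First, the smallness of the effective coupling for circular states, for a hypothetical heavy antiprotonic atom with assumed Z = 82 and n = 30; second, the mass ratios that control the size of recoil and radiative-recoil effects in ordinary and muonic hydrogen:
% Illustrative estimates only (assumed parameters Z = 82, n = 30;
% rounded particle masses):
\[
  \frac{Z\alpha}{n} \;\approx\; \frac{82}{137.04\times 30} \;\approx\; 2\times 10^{-2} \;\ll\; 1 ,
  \qquad
  \frac{m_e}{m_p} \approx \frac{1}{1836} ,
  \qquad
  \frac{m_\mu}{m_p} \approx 0.11 .
\]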
§ RADIATIVE Α (Z Α)^6 CORRECTION The radiative (electron self-energy) α (Z α)^6 correction to the energy E^(7)_rad of a two-body system can be expressed as a combination of terms with various spin dependencies, E^(7)_rad = μ α(Zα)^6/π⟨ℰ_NS + L⃗·s⃗_1 ℰ_S1 + L⃗·s⃗_2 ℰ_S2 + s⃗_1·s⃗_2 ℰ_SS + (L^i L^j)^(2) s_1^i s_2^j ℰ_LL⟩ , where μ is the reduced mass, Z α = -e_1 e_2/(4 π), Z is the charge number of the nucleus which is particle number 2, s⃗_i is the spin of the i-th particle, and (L^i L^j)^(2) = 1/2 (L^i L^j + L^j L^i) - δ^ij/3 L⃗^2 , which is a symmetric traceless tensor. The calculation is divided into three parts, E^(7)_rad = E_L + E_M + E_H , where the low-energy part E_L corresponds to the frequency of the radiative photon ω∼ m_1 α^2, the middle-energy part E_M comes from the region of ω∼ m_1 α, and the high-energy part E_H corresponds to ω∼ m_1. § LOW-ENERGY PART E_L The low-energy contribution of the order α (Zα)^6 is further divided into three parts, E_L = E_L1 + E_L2 + E_L3 . These parts will be evaluated in the subsequent sections as corrections to the leading low-energy contribution E_L0 of the order α (Zα)^4, namely to the Bethe logarithm. §.§ E_L0 Let us consider the nonrelativistic Hamiltonian for a two-body system in d dimensions, H = p_1^2/2 m_1 + p_2^2/2 m_2 + V(r) , V(r) = e_1 e_2/4 π [1/r]_ϵ , [1/r]_ϵ = m_1^2 ε∫d^dk/(2 π)^d 4 π/k^2 , where r⃗ = r⃗_1-r⃗_2, and d=3-2 ε. The leading nonrelativistic (dipole) low-energy contribution is E_L0 = e_1^2/m_1^2-2 ε∫d^d k/(2 π)^d 2 k (δ^ij-k^i k^j/k^2) ×< ϕ|p^i_1 1/E-H-k p^j_1 |ϕ> , where H is the nonrelativistic Hamiltonian in d dimensions from Eq. (<ref>). The wave function ϕ denotes the nonrelativistic Schrödinger–Pauli wave function in the center of mass frame (p⃗_1 = - p⃗_2 = p⃗ ). In the following, we will denote the expectation value of an arbitrary operator Q, evaluated with the nonrelativistic Schrödinger–Pauli wave function, by the shorthand notation ⟨ Q ⟩. After the d-dimensional integration with respect to k, and the expansion in ε, E_L0 becomes E_L0 = (4 π)^ε Γ(1+ε) 2 α/3 π m_1^2 < p⃗_1 (H-E){1/2 ε+5/6 - ln[2(H-E)/m_1] }p⃗_1 > , where we ignore terms of order ε and higher. The factor (4 π)^ε Γ(1+ε) appears in all the terms, and thus we will omit it consistently in all matrix elements. The contribution E_L0 can thus be rewritten as E_L0 = 4 α/3 m_1^2 Z α {1/2 ε + 5/6 + ln[m_1/μ (Z α)^2] } ⟨δ^d(r) ⟩ - 2 α/3 π m_1^2< p⃗ (H-E) ln[2(H-E)/μ (Z α)^2] p⃗ > , where the last term is the so-called Bethe logarithm <cit.>. §.§ E_L1 We consider now all possible relativistic corrections to Eq. (<ref>) and introduce the notation δ_Q < p^i 1/E-H-k p^j > ≡< p^i 1/E-H-k (Q-⟨ Q⟩) 1/E-H-k p^j > + 2 < Q 1/(E-H)' p^i 1/E-H-k p^j > , where Q is an arbitrary operator. δ_Q involves the first-order perturbations to the Hamiltonian, to the energy, and to the wave function. The correction E_L1 is the perturbation of E_L0 by the relativistic Breit Hamiltonian H^(4), which in d dimensions is (setting e_1=-e, e_2 = Z e) H^(4) = H'^(4) + H”^(4) , H'^(4) = ∑_a=1,2{ -p_a^4/8 m_a^3 +π Zα/2[1/m_a^2 + 4/3 r_Ea^2] δ^d(r)} +Zα/2 m_1 m_2 p_1^i [δ^ij/r+r^i r^j/r^3]_ϵ p_2^j + g_1 g_2 π Zα/4 d m_1 m_2 σ_1^ij σ_2^ij δ^d(r) , H”^(4) = ∑_a=1,2g_a-1/4 m_a^2 σ_a^ij (∇^i_a V) p_a^j +g_1 g_2/16 m_1 m_2 σ_1^ik σ_2^jk (∇^i ∇^j-δ^ij/d ∇^2) V -1/4 m_1 m_2 (∇^i V) (g_1 σ_1^ij p^j_2 -g_2 σ_2^ij p^j_1 ) , where δ^d(r) is the Dirac δ-function in d dimensions.
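Before reducing H^(4) to three dimensions, we record for orientation the size of the Bethe-logarithm term introduced at the end of the E_L0 subsection. These are standard literature values (cf. the Swainson–Drake tables cited in the bibliography), quoted here as-is rather than derived:
% Standard hydrogenic Bethe logarithms ln k_0(n,l), quoted for orientation:
\[
  \ln k_0(1S) \approx 2.9841285 , \qquad
  \ln k_0(2S) \approx 2.8117699 , \qquad
  \ln k_0(2P) \approx -0.0300167 .
\]
Note how small the P-state value is compared to the S states, reflecting the vanishing of the wave function at the origin.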
In d=3 spatial dimensions, the matrices σ^ij reduce to σ^ij=ϵ^ijk σ^k, and the Breit Hamiltonian in the center of mass frame becomes H^(4) = -p⃗^ 4/8 m_1^3 -p⃗^ 4/8 m_2^3 -Zα {1/2 m_1 m_2 p^i (δ^ij/r+r^i r^j/r^3) p^j +g_1 g_2/4 m_1 m_2 [s_1^i s_2^j/r^3 (δ^ij-3 r^i r^j/r^2) -8 π/3 s⃗_1·s⃗_2 δ^3(r⃗)] -r⃗×p⃗/2 r^3·[ g_1/m_1 m_2 s⃗_1 +g_2/m_1 m_2 s⃗_2 +(g_2-1)/m_2^2 s⃗_2 +(g_1-1)/m_1^2 s⃗_1]} + 2 Zα/3( 3/4 m_1^2+ 3/4 m_2^2 +r_E1^2+ r_E2^2 ) π δ^3(r⃗) . We will use this d=3 form of H^(4) also later in the calculation of the second-order correction. Additionally, we note that the first particle is point-like, so r_E1^2=0 and g_1=2. The second particle will be considered with finite nuclear size, and we will calculate the radiative corrections only for the first particle. However, in the case of antiprotonic atoms we will drop these assumptions for the first particle and include radiative corrections for the second particle in Sec. VIII. We now split E_L1 by introducing an intermediate cutoff Λ E_L1 = e^2/m_1^2-2 ε(∫_0^Λ + ∫_Λ^∞) d^d k/(2 π)^d 2 k (δ^ij-k^i k^j/k^2) ×δ _H^(4)< p_1^i 1/E-H-k p_1^j > . After the Z α expansion with Λ = λ (Z α)^2, one goes subsequently to the limits ε→ 0 and λ→∞. Under the assumption that l≠0, we may perform an expansion in 1/k in the second part and obtain E_L1 = 2 α/3 π m_1^2 ∫_0^Λ dk k δ_H^(4)< p⃗_1 1/E-H-k p⃗_1 > + α/3 π m_1^2-2 ε [1+ε (5/3-2 ln2 )] ∫_Λ^∞ dk 1/k^1+2 ε ×{2 < H^(4) 1/(E-H)' [ p⃗_1,[ H,p⃗_1 ]]> + <[ p⃗_1,[ H^(4),p⃗_1 ]]> } . The second-order contribution in braces will vanish for states with l≠ 0. In the calculations, we keep g_2 and r_E2^2 arbitrary. After performing the k-integration and with the help of commutator relations, it reads E_L1 = α/π (Z α)^6/n^3 μ β_1 + α/3 π {1/2 ε+5/6 + } 1/m_1^2<p⃗ 4π δ^d(r) p⃗ ×[Zα/4(1/m_1^2+1/m_2^2 + 4/3 r_E2^2) +Zα/m_1 m_2(2/3 - 2/9ε) +g_2 Zα/4 m_1 m_2(1/3 + 2/9ε) σ_1^ij σ_2^ij] -Zα/m_1 m_2 p^i (δ^ij/r^3 - 3r^i r^j/r^5)_ϵ p^j +g_2 Zα/4 m_1 m_2 σ_1^ik σ_2^jk [p^i 4π δ^d(r) p^j]^(2) + i Zα/2(σ_1^ij/2 m_1^2 + (g_2-1) σ_2^ij/2 m_2^2 + 2 σ_1^ij+g_2 σ_2^ij/2 m_1 m_2) p^i 4π δ^d(r) p^j> , where the expectation value is expressed in the center of mass system. Here, β_1 is a dimensionless quantity, defined as the finite part of the k-integral with divergent terms proportional to λ^n (n = 1, 2, …) and ln(λ/μ) in the limit of large λ omitted, α/π (Z α)^6/n^3 β_1 = lim_λ→∞2 α/3 π m_1^2 μ ∫_0^Λ dk k δ_H^(4) < p_1^i 1/E-H-k p_1^i > = α/π (Z α)^6/n^3 [β^NS_1 + L⃗·s⃗_1 β_1^S1 +L⃗·s⃗_2 β_1^S2 + s⃗_1·s⃗_2 β_1^SS + (L^i L^j)^(2) s_1^i s_2^j β_1^LL] . In all integrals with an upper limit Λ, to be discussed in the following, the divergent terms in λ will be subtracted. In particular, the terms proportional to ln(λ/μ) but not ln(2 λ/μ) are subtracted, which leads to the presence of factor under the logarithm in Eq. (<ref>). §.§ E_L2 The second relativistic correction, E_L2, is the nonrelativistic quadrupole contribution. Specifically, it comes from the term quadratic in k in the expansion of exp( i k⃗·r⃗), E_L2 = e^2/m_1^2-2 ε∫d^d k/(2 π)^d 2 k (δ^ij-k^i k^j/k^2) ×[ < p_1^i ( i k⃗·r⃗_1) 1/E-H-k p_1^j (- i k⃗·r⃗_1)> +< p_1^i ( i k⃗·r⃗_1)^2 1/E-H-k p_1^j> ] . In a similar way as for E_L1, we split the integration into two parts, by introducing a cutoff Λ. In the first part, with the k-integral from 0 to Λ, one can set d=3 and extract the logarithmic divergence.
In the second part, with the k-integral from Λ to ∞, we perform a 1/k expansion and employ commutator relations, with the intent of moving the operator H-E to the far left or right where it vanishes when acting on the Schrödinger–Pauli wave function. In this second part it is advantageous, instead of directly expanding the exponentials, at first to use the identity e^ik⃗·r⃗ f(p⃗ ) e^-ik⃗·r⃗ = f(p⃗ - k⃗) . Thus, after expanding the resolvent in 1/k, we get for the expression in the expectation value < p_1^i e^ik⃗·r⃗_1 (H-E)^3/k^4 p_1^i e^-ik⃗·r⃗_1> = 1/k^4< p_1^i (H-E - p⃗_1·k⃗/m_1 + k^2/2 m_1)^3 p_1^i > . We expand the bracket and take into account only terms quadratic in k, contributing at the order α^7. This leads to E^Λ_L2 = e^2/m_1^2-2 ε∫_Λ^∞d^d k/(2 π)^d 2 k^5 (δ^ij-k^i k^j/k^2) ×< p_1^i [3/2 m_1(H-E)^2 k^2 + 2/m_1^2 (p⃗_1·k⃗)^2 (H-E) + 1/m_1^2 (p⃗_1·k⃗) (H-E) (p⃗_1·k⃗) ] p_1^j > . We now pass to the center of mass system, and the resulting expression, after performing k integration and expansion for small ε, is E_L2 = α/π (Z α)^6/n^3 μ β_2 + α/π < (∇⃗ V)^2 [1/m_1^3(1/2ε +5/6 +) + μ/m_1^4(1/6ε+14/45 +1/3 )] +Zα/m_1^4 p⃗ 4π δ^d(r) p⃗ ×[1/20 ε+3/25 +1/10 ] > . Here, β_2 = β_2^NS is defined as the finite part of the integral [see the discussion following Eq. (<ref>)] α/π (Z α)^6/n^3 β_2 = 4 π α/m_1^2 μ lim_λ→∞∫_0^Λd^3k/(2 π)^3 2 k (δ^ij-k^i k^j/k^2) ×{< p_1^i ( i k⃗·r⃗_1)^2 1/E-H-k p_1^j > . . +< p_1^i ( i k⃗·r⃗_1) 1/E-H-k p_1^j (- i k⃗·r⃗_1) > } . §.§ E_L3 The third contribution, E_L3, originates from the relativistic corrections to the coupling of the electron to the electromagnetic field. These corrections can be obtained from the Hamiltonian in Eq. (<ref>), and they have the form of a correction to the current δ j_1^i = i [H^(4),r_1^i ] = -1/2 m_1^3 p_1^i p⃗_1^ 2 +Zα/2 m_1 m_2[δ^ij/r + r^i r^j/r^3]_ε p_2^j +g_1-1/4 m_1^2 σ_1^ji ∇^j V +g_2/4 m_1 m_2 σ_2^ji ∇^j V , with H^(4) given in Eq. (<ref>), and we keep g_1 arbitrary for now. The corresponding correction E_L3 is E_L3 = 2 e^2/m_1^1-2 ε∫d^d k/(2 π)^d 2 k (δ^ij-k^i k^j/k^2) ×< δ j_1^i1/E-H-k p_1^j > . We now perform an angular averaging of the matrix element to bring the correction E_L3 into the form E_L3 =2 e^2/m_1^1-2 εd-1/d∫d^d k/(2 π)^d 2 k < δ j_1^i 1/E-H-k p_1^i > . We again split this integral into two parts. In the first part, where k<Λ, one can approach the limit d=3. In the second part, with k>Λ, one performs a 1/k-expansion and obtains E_L3^Λ = α/m_1 π(5/9 + 1/3ε - 2/3ln[2Λ/m_1] ) ⟨[δ j_1^i,[V,p_1^i]]⟩ . The expectation value for states with angular momentum l>0 can be written as [δ j_1^i,[V,p_1^i]] = - 1/2 m_1^3 [p_1^ip_1^2,[V,p_1^i]] +Zα/2 m_1 m_2 [δ^ij/r + r^i r^j/r^3]_ε(-∇^i∇^j V) = ( -μ/m_1^3 + 1 - ε/m_1 m_2 )(∇⃗V)^2 , where we used the identity [Zα/2(δ^ij/r + r^i r^j/r^3)]_ε(-∇^i∇^j V) =(1 - ε)(∇⃗V)^2 , which follows from evaluation of this expression in momentum representation in d dimensions, namely (∇⃗V(r))^2 = (4 π Z α)^2 m_1^4 ε∫d^d q/(2 π)^d e^i q⃗·r⃗∫d^d k/(2 π)^dk⃗·(k⃗-q⃗)/k^2 (k⃗-q⃗)^2 = -(π Z α)^2 m_1^4 ε∫d^d q/(2 π)^d e^i q⃗·r⃗ 4^ε q^1 - 2 εtan(ε π)/ε π , and [Zα/2(δ^ij/r + r^i r^j/r^3)]_ε(-∇^i∇^j V) = -(4 π Z α)^2 m_1^4 ε∫d^d q/(2 π)^d e^i q⃗·r⃗∫d^d k/(2 π)^d (δ^ij-k^i k^j/k^2) 1/k^2 (q-k)^i (q-k)^j/(q-k)^2 = -(π Z α)^2 m_1^4 ε∫d^d q/(2 π)^d e^i q⃗·r⃗ 4^ε q^1 - 2 ε tan(ε π)/ε π (1-ε) . 
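As a quick consistency check of these d-dimensional transforms (our addition, using the three-dimensional Fourier integral for 1/r^4 quoted in the middle-energy section below): in the limit ε → 0 one has tan(επ)/(επ) → 1, and the first identity reduces to
\[
  (\vec{\nabla} V)^2 \;\longrightarrow\;
  -(\pi Z\alpha)^2 \int\!\frac{d^3 q}{(2\pi)^3}\, e^{i\vec q\cdot\vec r}\, q
  \;=\; \frac{(Z\alpha)^2}{r^4} ,
\]
which is the expected classical result for V = -Zα/r, since ∇⃗V = Zα r⃗/r^3.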
For E_L3 we finally obtain E_L3 = α/π (Z α)^6/n^3 μ β_3 -α/π [1/3 ε+ 5/9+ 2/3] [μ/m_1^4 - (1-ε)/m_1^2 m_2] <(∇⃗ V)^2> , where β_3 is the finite part of the integral (in the center of mass system) α/π (Z α)^6/n^3 β_3 = -4 α/3 π m_1 μ lim_λ→∞∫_0^Λ dk k < [1/2 m_1^3 p^i p^2 + Zα/m_1 m_2(δ^ij/r + r^i r^j/r^3) p^j +ϵ^ijk(g_1-1/4 m_1^2 σ_1^k + g_2/4 m_1 m_2 σ_2^k) ∇^jV] 1/E-H-k p^i > = α/π (Z α)^6/n^3 [β_3^NS + L⃗·s⃗_1 β_3^S1 + L⃗·s⃗_2 β_3^S2] . Now we make the transition g_1→2, but in the case of antiprotonic atoms, discussed in Sec. VIII, we would keep g_1 arbitrary. This completes the treatment of the low-energy part in Eq. (<ref>), and the complete Bethe-log-like contributions are β^NS = β_1^NS + β_2^NS + β_3^NS, β^S1 = β_1^S1 + β_3^S1, β^S2 = β_1^S2 + β_3^S2, β^SS = β_1^SS, β^LL = β_1^LL. § MIDDLE-ENERGY PART In the middle-energy part, the momenta of both the radiative and the exchanged photon are of the order m_1α. This part consists of two diagrams: the triple seagull contribution and a single seagull with retardation, see Fig. <ref> and Fig. <ref>. We follow the approach used in <cit.> for the case of two electrons and extend it to two particles with arbitrary masses. §.§ Triple seagull contribution The first middle-energy contribution is the triple seagull diagram given by Fig. <ref>, which is expressed (with k_3 being the radiative photon) as E_M1 = e^6 Z^2/m_1^2-6 ε m_2∫d^d k_1/(2π)^d 2k_1∫d^d k_2/(2π)^d 2k_2∫d^d k_3/(2π)^d 2k_3 δ_⊥^ik(k_1)δ_⊥^jk(k_2)δ_⊥^ij(k_3) ×<ϕ| e^ i(k⃗_1+k⃗_2)·r⃗_1 1/E-H-k_1-k_2 e^ i(k⃗_3-k⃗_1)·r⃗_2 1/E-H-k_2-k_3 e^- i(k⃗_2+k⃗_3)·r⃗_2 +e^- i(k⃗_1+k⃗_3)·r⃗_2 1/E-H-k_1-k_3 e^- i(k⃗_2-k⃗_3)·r⃗_2 1/E-H-k_1-k_2 e^ i(k⃗_1+k⃗_2)·r⃗_1 +e^- i(k⃗_1+k⃗_3)·r⃗_2 1/E-H-k_1-k_3 e^ i(k⃗_1+k⃗_2)·r⃗_1 1/E-H-k_2-k_3 e^- i(k⃗_2+k⃗_3)·r⃗_2|ϕ> . Neglecting E-H in comparison to photon energies, we express the triple seagull contribution as E_M1=⟨ H_M1⟩, where H_M1= e^6 Z^2/m_1^2-6 ε m_2∫d^d k_1/(2π)^d 2k_1∫d^d k_2/(2π)^d 2k_2∫d^d k_3/(2π)^d 2k_3 δ_⊥^ik(k_1)δ_⊥^ik(k_2)(d-1)/d × e^ i(k⃗_1+k⃗_2)·r⃗[1/(k_1+k_2)(k_2+k_3) +1/(k_1+k_3)(k_1+k_2)+1/(k_1+k_3)(k_2+k_3)] . The integration over the radiative photon k_3 is trivial. The remaining integration is performed in spheroidal coordinates, as explained in Appendix B of Ref. <cit.>. The result for the triple seagull contribution is H_M1 = α(Zα)^2π/2 m_1^2 m_2(-1/3+4/3ln2)∫d^3q/(2 π)^3e^i q⃗·r⃗ q. §.§ Single seagull with retardation The second middle-energy contribution comes from the diagram with a single seagull and retardation, as depicted in Fig. <ref>. Such a diagram contains two photons, one of which is a transverse photon exchanged between the two particles, and the other is a radiative photon. The corresponding contribution to the energy is expressed as E_M2 = e^4 Z/m_1^2-4 ε m_2∫d^d k_1/(2π)^d 2k_1∫d^d k_2/(2π)^d 2k_2δ_⊥^in(k_1)δ_⊥^im(k_2) × <ϕ| j_1^n(k_1) e^ ik⃗_1·r⃗_1 1/E-H-k_1 e^- i(k⃗_1+k⃗_2)·r⃗_2 1/E-H-k_2 j_2^m(k_2) e^ ik⃗_2·r⃗_2 +j_2^n(k_1) e^ ik⃗_1·r⃗_2 1/E-H-k_1 e^- i(k⃗_1+k⃗_2)·r⃗_2 1/E-H-k_2 j_1^m(k_2) e^ ik⃗_2·r⃗_1 +j_1^n(k_1) e^ ik⃗_1·r⃗_1 1/E-H-k_1 j_2^m(k_2) e^ ik⃗_2·r⃗_2 1/E-H-k_1-k_2 e^- i(k⃗_1+k⃗_2)·r⃗_2 +j_2^n(k_1) e^ ik⃗_1·r⃗_2 1/E-H-k_1 j_1^m(k_2) e^ ik⃗_2·r⃗_1 1/E-H-k_1-k_2 e^- i(k⃗_1+k⃗_2)·r⃗_2 +e^- i(k⃗_1+k⃗_2)·r⃗_2 1/E-H-k_1-k_2 j_1^n(k_1) e^ ik⃗_1·r⃗_1 1/E-H-k_2 j_2^m(k_2) e^ ik⃗_2·r⃗_2 +e^- i(k⃗_1+k⃗_2)·r⃗_2 1/E-H-k_1-k_2 j_2^n(k_1) e^ ik⃗_1·r⃗_2 1/E-H-k_2 j_1^m(k_2) e^ ik⃗_2·r⃗_1|ϕ> , where j_i^l(k) is the current j_i^l(k) = p^l_i + i g_i/4σ_i^kl k^k.
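In d = 3 the spin part of this current takes its familiar magnetic-moment form; explicitly (a small reduction added here for convenience, using σ^kl = ε^klm σ^m = 2 ε^klm s^m as in the low-energy section):
% Reduction of the spin part of j_i^l(k) in d = 3:
\[
  \frac{i g_i}{4}\,\sigma_i^{kl} k^k
  \;=\; \frac{i g_i}{2}\,\varepsilon^{lmk}\, s_i^m k^k
  \;=\; \frac{i g_i}{2}\,\bigl(\vec s_i \times \vec k\bigr)^{l} ,
\]
i.e. the standard coupling of a spin-1/2 magnetic moment to the photon field.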
The α^7 contribution is obtained by expanding the integrand up to the first order in E-H. Because ∫ d^d k k^α = 0 in dimensional regularization, only the terms with k_1+k_2 in the denominator do not vanish, and they can be cast in the form E_M2 = -2e^4 Z/m_1^2-4 ε m_2∫d^d k_1/(2π)^d 2k_1∫d^d k_2/(2π)^d 2k_2 ×1/k_1^2(k_1+k_2) δ_⊥^in(k_1)δ_⊥^im(k_2) ×<ϕ| [[j_1^n(k_1) e^ ik⃗_1·r⃗_1,H-E],j^m_2(k_2) e^ ik⃗_2·r⃗_2] × e^- i(k⃗_1+k⃗_2)·r⃗_2|ϕ> . Taking into account that only the spin-independent terms survive the double commutator and performing the angular average for the radiative photon, we arrive at E_M2 = -(4πα)^2 Z/2 m_1^2-4 ε m_2(d-1)/d∫d^d k_1/(2π)^d∫d^d k_2/(2π)^d ×1/k_1^3k_2(k_1+k_2)δ_⊥^mn(k_1) ⟨ϕ| e^ ik⃗_1·r⃗ ∂_1^m∂_2^n V|ϕ⟩ . We express this as the expectation value of an effective operator H_M2, H_M2 = -(4πα)^3 Z^2/2 m_1^2-6 ε m_2(d-1)/d∫d^d q/(2 π)^d e^i q⃗·r⃗∫d^d k_1/(2π)^d ×∫d^d k_2/(2π)^dδ_⊥^mn(k_1) q^m q^n/k_1^3 |k⃗_1-q⃗|^2 k_2(k_1+k_2) . Performing the remaining integrations in the same way as in Ref. <cit.>, we get the result H_M2 = α(Z α)^2π/2 m_1^2-2 ε m_2∫d^d q/(2 π)^d e^i q⃗·r⃗(4/9+2/3ϵ-8/3lnq/m_1) q .
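As a quick cross-check (added here; not part of the source derivation), the ε-independent constants of H_M1 and H_M2 combine to reproduce the bracket of the total operator H_M quoted in the next subsection:
\[
  \Bigl(-\tfrac{1}{3}+\tfrac{4}{3}\ln 2\Bigr)+\tfrac{4}{9}
  \;=\; \tfrac{1}{9}+\tfrac{4}{3}\ln 2 ,
  \qquad
  \frac{2}{3\varepsilon}\Bigl(1-2\varepsilon\ln\frac{q}{2 m_1}\Bigr)
  \;=\; \frac{2}{3\varepsilon}+\frac{4}{3}\ln 2-\frac{4}{3}\ln\frac{q}{m_1} ,
\]
so the sum of the two momentum-space integrands indeed carries the overall factor [ 1/9 + 2/3 ε (1 - 2 ε ln q/2 m_1) - 4/3 ln q/m_1 ].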
For the resulting expression E_H1 we get E_H1 = α/π Zα/m_1^2⟨{-1/m_1^2[13/160 + 11/120 ε] - 1/m_2^2[1/32 +1/24 ε] - 1/m_1 m_2[1/16 + 1/12 ε] -g_2 σ_1^ijσ_2^ij/m_1 m_2[11/864 + 1/72 ε] - r_E2^2 [1/24 + 1/18 ε]} p⃗ 4π δ^d(r) p⃗ -g_2/m_1 m_2[1/96 + 1/24 ε]σ_1^ik σ_2^jk (p^i 4π δ^d(r) p^j)^(2) - i {-σ_1^ij/m_1^2[1/96-1/24 ε] + (g_2-1) σ_2^ij/2 m_2^2[1/32 + 1/24 ε] + σ_1^ij/m_1 m_2[1/48 + 1/12 ε] } p^i 4π δ^d(r) p^j⟩ . For the case of two pointlike particles, we checked this result also by a complementary method of calculation, namely the scattering amplitude approach, as was done for the E_9 contribution in Ref. <cit.>. Generalizing the derivation in Ref. <cit.> for arbitrary masses of both particles and considering also the spin-orbit terms, we get the result in agreement with Eq. (<ref>) for the pointlike second particle. §.§ E_H2 E_H2 is the contribution due to the anomalous magnetic moment κ of the pointlike first particle. It can be obtained by collecting all the κ-dependent parts of the first-order operators δ E_1 - δ E_9 in Ref. <cit.>, where κ is present in the g-factor g=2 (1+κ) and in the electric dipole polarizability δ H = -e^2/2 α_E E⃗^ 2 where α_E = -κ (1+κ)/4 m^3 -α/3 π m^3(1-2/ε) . We shall add a few comments at this point. If we consider a point particle with the magnetic moment anomaly, then the electric dipole polarizability includes the first term in the above equation. The additional radiative correction, which is not accounted for by the magnetic moment anomaly, is the second term, which was calculated in Ref. <cit.>. Here, we account only for the first term, and in the next subsection, we will separately address the second term. This is because, for a non-point particle such as antiproton, we will include the first term in the definition of the electric dipole polarizability, and the second term will be an additional correction with 1/ε infrared singularity to be canceled with a similar term in the low-energy part. All these contributions due to the magnetic moments are finite, and thus we may present them in three-dimensional form as E_H2 = κ_1 (∑_i=1… 9⟨δ H_i⟩ + E_ sec) , where the individual δ H_i operators were derived in Ref. <cit.> and are presented in Appendix <ref>. E_sec is a second-order amm contribution E_ sec = 2 ⟨ H^(4)_ amm 1/(E-H)' H^(4)(κ_1=0)⟩ = α/π[ E_ sec^NS + E_ sec^S1 ⟨L⃗·s⃗_1⟩ + E_ sec^S2 ⟨L⃗·s⃗_2⟩ + E_ sec^SS ⟨s⃗_1·s⃗_2⟩ + E_ sec^LL ⟨(L^i L^j)^(2) s_1^i s_2^j⟩] where H^(4)_ amm is the part of H^(4) in Eq. (<ref>) which is linear in κ_1, and H^(4)(κ_1=0) is the Breit Hamiltonian with κ_1 omitted. §.§ E_H3 This is a correction due to the second term in the electric dipole polarizability in Eq. (<ref>), E_H3 = α/π m_1^3(1/6 - 1/3 ε)(∇⃗V)^2 , which is considered separately because it is infrared divergent. We will assume that it is common to all particles, including all nuclei, and will exclude it from the definition of the electric dipole polarizability. § TOTAL ONE-LOOP RADIATIVE CORRECTION With the help of the identity derived in Appendix <ref> valid for l>0 states, p^i [Zα/r^3(δ^ij - 3r^i r^j/r^2)]_ϵ p^j = Zα(1/6 - 2/9 ε) p⃗ 4π δ^d(r) p⃗ + μ (∇⃗V)^2 , all the singularities proportional to 1/ε cancel out algebraically in the sum of all parts in Eq. (<ref>). We may therefore pass to three dimensions by setting ε→ 0 and replace p⃗ 4π δ^d(r) p⃗→ p⃗ 4π δ^3(r) p⃗ , σ_a^ij p^i 4π δ^d(r) p^j → 2 s⃗_a·p⃗× 4π δ^3(r) p⃗ , σ_1^ik σ_2^jk p^i 4π δ^d(r) p^j → 4 s⃗_2 ×p⃗ 4π δ^3(r) s⃗_1 ×p⃗ . The final expression for E^(7)_rad in Eq. 
(<ref>) for the α^7 radiative two-body correction to the energy is E^(7)_rad = α/π ⟨ E_NS + L⃗·s⃗_1 E_S1 +L⃗·s⃗_2 E_S2 + s⃗_1·s⃗_2 E_SS + (L^i L^j)^(2) s_1^i s_2^j E_LL⟩ , where individual coefficients are E_NS = ⟨Zα/m_1^4 m_2^2[(31/288 + 1/6 ) m_1 m_2 +(779/7200+11/60 ) m_2^2 + 1/72 (1 + 4/3 m_2^2 r_E2^2) (5+6 ) m_1^2] p⃗ 4π δ^3(r) p⃗ + 1/m_1^3 m_2 (m_1+m_2) ×[(17/12 + 2/3 - 2/3 (ln m_1 r+γ)) m_1^2 +(589/720+2/3 ) m_2^2 +m_1 m_2 (317/144+4/3 -2/3 (ln m_1 r+γ) )] (Zα)^2/r^4⟩ + E_ sec^NS + (Zα)^6 μ/n^3 β^NS , E_S1 = ⟨- (m_1^2 - m_1 m_2 + m_2^2)/2 m_1^3 m_2^2E Zα/r^3 - (2 m_1^2 + m_1 m_2 + 2 m_2^2)/4 m_1^3 m_2^2(Zα)^2/r^4 - Zα/m_1^4 m_2^3 [ (43/144 + 1/3 ) m_1 m_2^2 + (23/144 + 1/6 ) m_2^3 + 1/16 m_1^2 m_2 + 1/32 m_1^3 (2 - g_2 ) + m_1^3 m_2^3/12 μ r_E2^2 ] p⃗ 4π δ^3(r) p⃗⟩ + E_ sec^S1 + (Zα)^6 μ/n^3 β^S1 , E_S2 = ⟨- Zα/m_1^3 m_2^2[(5/36 + 1/6 ) (g_2-1) m_1 + (31/288 + 1/6 ) g_2 m_2] p⃗ 4π δ^3(r) p⃗⟩ + E_sec^S2 + (Zα)^6 μ/n^3 β^S2 , E_SS = ⟨((5 g_2-6) m_1 + 5 g_2 m_2)/24 m_1^2 m_2^2(Zα)^2/r^4 + Zα/m_1^3 m_2^3[(g_2-2)/48 m_1^2 + (g_2-1)/24 m_1 m_2 + (77/432 + 2/9 ) g_2 m_2^2 + g_2/18 m_1^2 m_2^2 r_M2^2] p⃗ 4π δ^3(r) p⃗⟩ + E_ sec^SS + (Zα)^6 μ/n^3 β^SS , E_LL = Zα/(2l-1)(2l+3) ⟨3 (m_1 + m_2 - g_2 m_2)/m_1 m_2^2 (m_1+m_2)E/r^3 +[(12+g_2) m_1^2 + (12-7 g_2) m_1 m_2 + g_2 m_2^2]/4 m_1^2 m_2^2 (m_1+m_2)Zα/r^4 + 1/m_1^3 m_2^3[7 (g_2-2)/16 m_1^2 + 5 (g_2-1)/4 m_1 m_2 + (233/144 +5/3 ) g_2 m_2^2 +5 g_2/12 m_1^2 m_2^2 r_M2^2] ×p⃗ 4π δ^3(r) p⃗⟩ + E_ sec^LL + (Zα)^6 μ/n^3 β^LL . The expectation values of the first-order operators are evaluated with the help of formulas from Appendix <ref>. The second-order contribution, which comes exclusively from the amm contribution, is evaluated in the same way as in Ref. <cit.>. We will now present the final formula for the radiative α^7 contribution to energy. § RESULTS The general result can be cast in the form E^(7)_rad = μ α(Zα)^6/π Δ⟨ℰ_NS + L⃗·s⃗_1 ℰ_S1 + L⃗·s⃗_2 ℰ_S2 + s⃗_1·s⃗_2 ℰ_SS + (L^i L^j)^(2) s_1^i s_2^j ℰ_LL⟩ , where we pulled out the factor Δ^-1, with Δ = 30 for l=1, and for l>1 it is defined in Eq. (<ref>). We consider separately the cases with l=1 and l>1, where for the latter case the individual coefficients are lengthy and thus we move their explicit results into Appendix <ref>. 
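As a quick check of the normalization (our addition): the general definition Δ = l(l+1)(2l-1)(2l+1)(2l+3) given in the appendix indeed reduces to the quoted value for P states,
\[
  \Delta\big|_{l=1} \;=\; 1\cdot 2\cdot 1\cdot 3\cdot 5 \;=\; 30 .
\]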
Defining η_1 = μ/m_1, η_2 = μ/m_2 , ln_1 = the results for l=1 are ℰ_NS = ℰ_NS^(3)/n^3 + ℰ_NS^(4)/n^4 + ℰ_NS^(5)/n^5 + 8 η_1^2 η_2 (2/3 n^5 - 1/n^3) ×( 137/60 - H_n+1 +lnn/2 η_1 Zα) + 100 μ^2 η_1^2/27 r_E2^2 ×(1/n^3-1/n^5)+ ln_1 ℰ_NS^log + Δ/n^3 β_NS , ℰ_NS^(3) = η_1^2(821/36 - η_1 1081/72+η_1^2 16/5) -η_1^2 η_2^2 137/480 g_2^2 , ℰ_NS^(4) = η_1^2(-29/2 + η_1 53/4) - η_1^2 η_2^2 3 /16 g_2^2 , ℰ_NS^(5) = η_1^2(-112/9 + η_1 221/36 - η_1^2 46/15) , ℰ_NS^log = η_1^2[(1/n^3 - 1/n^5) ( 34/3 + 4 η_1^2 + 40 μ^2/9r_E2^2) + 8/3 n^5] , ℰ_S1 = ℰ_S1^(3)/n^3 + ℰ_S1^(4)/n^4 + ℰ_S1^(5)/n^5 - (1/n^3 - 1/n^5)(10 μ^2 η_1/3 r_E2^2 +20 η_1^3(η_2 + 1)/3ln_1 ) + Δ/n^3β_S1 , ℰ_S1^(3) = η_1(10/9 + η_1 191/36 - η_1^2 835/72 + η_1^3 50/9) + η_1η_2^3133/288g_2 +g_2^2 η_1 η_2^2(227/288 -η_1 13/320) , ℰ_S1^(4) = - η_1 η_2^3 5/16 g_2 + g_2^2 η_1 η_2^2(5/16 + η_1 9/32) + η_1 (25/4 - η_2^25/4) , ℰ_S1^(5) = η_1(-15/2 + η_1 + η_1^2 125/18 - η_1^3 50/9) - η_1 η_2^3 ( 7/8 g_2 + 3/8 g_2^2 ) , ℰ_S2 = ℰ_S2^(3)/n^3 + ℰ_S2^(4)/n^4 + ℰ_S2^(5)/n^5 + Δ/n^3 β_S2 -20 η_1^2 η_2/3(g_2 - η_2) ×(1/n^3 - 1/n^5) ln_1 , ℰ_S2^(3) = η_1^2 η_2^2 (50/9 - 13/320g_2^2) -g_2 η_1^2 η_2(559/288 + η_2 133/288) , ℰ_S2^(4) = g_2 η_1^2 η_2(15/16+η_2 5/16) + g_2^2 η_1^2 η_2^29/32, ℰ_S2^(5) = g_2 η_1^2 η_2( 229/72 + η_2 7/8) - η_1^2 η_2^2 (50/9 - 3/8 g_2^2) , ℰ_SS = ℰ_SS^(3)/n^3 + ℰ_SS^(4)/n^4 + ℰ_SS^(5)/n^5 + Δ/n^3 β_SS + 20 g_2/9(1/n^3 - 1/n^5) ×(μ^2 η_1 η_2 r_M2^2+4 η_1^3 η_2 ln_1 ) , ℰ_SS^(3) = g_2 η_1 η_2(-47/54 +η_1^2 170/27) - η_1^2 η_2^2 137/360 g_2^2 - η_1η_2^225/54 , ℰ_SS^(4) = - η_1 η_2 5 /3 g_2 + η_1 η_2^2 5/3 - η_1^2 η_2^2 g_2^2/4 , ℰ_SS^(5) = g_2 η_1 η_2(-1/2 - η_1^2 170/27) + η_1 η_2^2 5/3 , ℰ_LL = ℰ_LL^(3)/n^3 + ℰ_LL^(4)/n^4 + ℰ_LL^(5)/n^5 + Δ/n^3 β_LL + 10 g_2/3(1/n^3 - 1/n^5) ×(μ^2 η_1 η_2 r_M2^2 +4 η_1^3 η_2 ln_1 ) , ℰ_LL^(3) = η_1 η_2^2(1171/180 - 3η_1) - g_2 η_1 η_2(3697/720 + η_1417/40 - η_1^23067/360) +g_2^2 η_1 η_2^2(-227/80+η_1 1291/1200) , ℰ_LL^(4) = η_1η_2^25/2 + g_2 η_1 η_2(-71/8 + η_29/4 + η_2^29/4) + g_2^2 η_1 η_2^2( -9/8 -η_1 21/40) , ℰ_LL^(5) = η_1 η_2^2(-19/5 + 3 η_1) + g_2 η_1 η_2( 153/20 - η_1 9/5 - η_1^2 101/45) + η_1 η_2^2 (4 η_2 - 1) 9/20 g_2^2 , where H_n = ∑_i=1^n i^-1 gives the n-th harmonic number. The Bethe logarithmic terms will be calculated after combining them with those from the exchange contribution at (Z α)^7 order. We now consider special cases of the general results in Eq. (<ref>). §.§ Positronium First, we will examine the case of a positronium atom, i.e., the two-body system of bound electron and positron. To achieve this, we treat the nucleus as pointlike by setting g_2=2, r_E2^2 = r_M2^2= 0, m_1 = m_2 = m, and include the corresponding result for the radiative correction of the second particle, where we make the exchange (1↔2). For the l=1 states we get the result E^(7)_pos(n^S P_J) = α(Zα)^6 m/π[ ℰ^(7)(n^S P_J) + (1/30 n^3 - 1/45 n^5)(H_n+1 - lnn/ Zα) ] , where ℰ^(7)(n^1P_1) = β^pos(^1P_1)/n^3 + 73/1440 n^3 -1/20 n^4 - 47/3600 n^5 + (3/40 n^3 - 19/360 n^5) , ℰ^(7)(n^3P_0) = β^pos(^3P_0)/n^3 -101/960 n^3 -3/10 n^4 + 307/2400 n^5 + ( 29/120 n^3 - 79/360 n^5) , ℰ^(7)(n^3P_1) = β^pos(^3P_1)/n^3 + 181/1728 n^3 -73/960 n^4 - 877/21600 n^5 + (47/360 n^3 - 13/120 n^5) , ℰ^(7)(n^3P_2) = β^pos(^3P_2)/n^3 + 491/8000 n^3 -41/1600 n^4 - 67/900 n^5 + (3/40 n^3 - 19/360 n^5) , β^pos(^2s+1P_j) = β_NS(1) + F (β_S1(1) + β_S2(1)) + ( 2 s(s+1) -3)/4 β_SS(1) +[ 3 F (1+2 F) - 4 s(s+1) ] β_LL(1)/12 , F = 1/2 [j(j+1) - s(s+1) - 2] , where we introduced notation β_i(x) = β_i with x=m_1/m_2. 
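To make the angular-momentum bookkeeping in the last two equations concrete, here is a worked evaluation (added for convenience) of F = [j(j+1) − s(s+1) − 2]/2 for the four P levels:
% F and the spin-structure coefficients entering beta^pos for l = 1:
%   1P1 (s=0, j=1):  F = (2 - 0 - 2)/2 =  0
%   3P0 (s=1, j=0):  F = (0 - 2 - 2)/2 = -2
%   3P1 (s=1, j=1):  F = (2 - 2 - 2)/2 = -1
%   3P2 (s=1, j=2):  F = (6 - 2 - 2)/2 = +1
% For the triplet states (2 s(s+1) - 3)/4 = 1/4, for the singlet it is -3/4;
% e.g. for 3P0 the coefficient of beta_LL(1) is
%   [3F(1+2F) - 4 s(s+1)]/12 = [3(-2)(-3) - 8]/12 = 5/6 .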
§.§ Hydrogenlike atoms For hydrogenlike atoms, we begin with the nonrecoil limit, assuming the nuclear mass m_2 to be infinitely heavy. We consider the case of l=1 while the l>1 case is presented in Appendix <ref>. We obtain the result E^(7,0)_ hydr(n P) = m_1 α(Zα)^6/π⟨ℰ_NS^(7,0) + L⃗·s⃗_1 ℰ_S1^(7,0)⟩ , ℰ_NS^(7,0) = 1319/3600 n^3 -1/24 n^4 - 1687/5400 n^5 + 10/81(1/n^3-1/n^5) m_1^2 r_E2^2 +β_NS^(0)/n^3 + [23/45 n^3 - 19/45 n^5 + 4/27 m_1^2 r_E2^2(1/n^3-1/n^5)] , ℰ_S1^(7,0) = 1/80 n^3 +5/24 n^4 -23/135 n^5 - 1/9(1/n^3-1/n^5) m_1^2 r_E2^2 +β_S1^(0)/n^3 - 2/9 (1/n^3-1/n^5) , where β_i(x) = β_i^(0) + x β_i^(1) + x^2 β_i^(2) + … . This result is in agreement with the one from Ref. <cit.> for a pointlike nucleus. For the leading recoil contribution we get E^(7,1)_ hydr(n P) = m_1^2 α(Zα)^6/π m_2⟨ℰ_NS^(7,1) + L⃗·s⃗_1 ℰ_S1^(7,1) +L⃗·s⃗_2 ℰ_S2^(7,1) + s⃗_1·s⃗_2 ℰ_SS^(7,1) +(L^i L^j)^(2) s_1^i s_2^j ℰ_LL^(7,1)⟩ , ℰ_NS^(7,1) = β^(1)_NS - β^(0)_NS/n^3 -4913/5400 n^3 -19/60 n^4 + 1243/1350 n^5 + (4/15 n^3 - 8/45 n^5)(H_n+1 - lnn/2 Zα) -38/81(1/n^3 - 1/n^5) m_1^2 r_E2^2 + [ 23/15 n^5-9/5 n^3 -20/27 m_1^2 r_E2^2 (1/n^3 - 1/n^5)] , ℰ_S1^(7,1) = β^(1)_S1 - β^(0)_S1/n^3 -223/1080 n^3 - 5/12 n^4 + 28/45 n^5 +4/9 (1/n^3 - 1/n^5) m_1^2 r_E2^2 + 2/3 (1/n^3 - 1/n^5) × , ℰ_S2^(7,1) = β^(1)_S2/n^3 + g_2(-559/8640 n^3 + 1/32 n^4 + 229/2160 n^5) - g_2 2/9 (1/n^3 - 1/n^5) , ℰ_SS^(7,1) = β^(1)_SS/n^3 + g_2(293/1620 n^3 - 1/18 n^4 - 367/1620 n^5) +g_2 2/27 (1/n^3 - 1/n^5) m_1^2 r_M2^2 + g_2 8/27 (1/n^3 - 1/n^5) , ℰ_LL^(7,1) = β^(1)_LL/n^3 + g_2(-5069/21 600 n^3 - 71/240 n^4 + 649/5400 n^5) +g_2 1/9 (1/n^3 - 1/n^5) m_1^2 r_M2^2 + g_2 4/9 (1/n^3 - 1/n^5) , § ANTIPROTONIC ATOMS We may apply the results of our calculation also to highly excited rotational states of antiprotonic atoms. In the case of a two-body system consisting of two hadronic particles, one has to include the strong interaction effects. However, for highly excited rotational states, these effects are negligible due to their short range. We may also omit all the other local interaction terms, but we have to keep the g factor of the first particle in the general form, and include also the radiative contribution for the second (heavy) particle. As a result, only the low-energy, middle-energy, and E_H3 contributions have to be taken into account. For antiprotonic atoms, the low-energy contribution E_L1 is E_L1 = α/3 π m_1^2 {1/2 ε+5/6 + } ×< -Zα/m_1 m_2 p^i (δ^ij/r^3 - 3r^i r^j/r^5)_ϵ p^j > + α/π (Z α)^6/n^3 μ β_1(x) +Z^2(1↔2,x↔ x^-1) . In the Bethe log contribution the perturbation of the expectation value by the Breit Hamiltonian H^(4) has to include g-factors of both particles. The low-energy contribution E_L2 is for antiprotonic atoms of the form E_L2 = α/π < (∇⃗ V)^2 [1/m_1^3(1/2ε +5/6 +) + μ/m_1^4 (1/6ε +14/45+1/3)] > +α/π (Z α)^6/n^3 μ β_2(x) + Z^2(1↔2,x↔ x^-1) . The final low-energy contribution is given by E_L3 = -α/π (1/3 ε+ 5/9+ 2/3) ×(μ/m_1^4 - (1-ε)/m_1^2 m_2) <(∇⃗ V)^2> + α/π (Z α)^6/n^3 μ β_3(x) + Z^2(1↔2,x↔ x^-1) . The middle-energy contribution for antiprotonic systems is obtained in a straightforward way as E_M = α/2 π m^2_1 m_2[17/9 - 2/3 ε-4/3(ln m_1 r+γ) ](∇⃗V)^2 +Z^2 (1↔2) . The only high-energy part that will contribute is given by E_H3, E_H3 = α/π(1/m_1^3 + Z^2/m_2^3)(1/6 - 1/3 ε)(∇⃗V)^2 . The other terms go to polarizability of both particles, and they are already included in the α^6 μ contribution in Ref. <cit.>. 
After summing all the contributions, the singularities exactly cancel each other, which leads to E^(7)_p̅ = μ α(Zα)^2/90 π m_1^4 m_2^2⟨1/r^4( 105 m_1^2 + 170 m_1 m_2 + 68 m_2^2 - 60 m_1 (m_1+m_2) (ln(m_1 r) + γ) + 60 (m_1+m_2)^2 ln_1)⟩ + μ α(Zα)^6/π n^3⟨β^NS(x) + L⃗·s⃗_1 β^S1(x) +L⃗·s⃗_2 β^S2(x) + s⃗_1·s⃗_2 β^SS(x) + (L^i L^j)^(2) s_1^i s_2^j β^LL(x) ⟩ +Z^2 (1↔2,x↔ x^-1) . Evaluating the expectation values, we obtain E^(7)_p̅ = μ α(Zα)^6/π Δ η_1^2 [8/3 η_2 (l(l+1)/n^5 - 3/n^3) (H_2l-2 + H_2l+3 - H_n+l + lnn/2 Zα η_1) + 8/3 (3/n^3 - l(l+1)/n^5) ln_1 + 1/n^3( 70/3 - η_1 44/3 + η_1^2 2/5) -4 η_2 (2l+1)/n^4 + 1/n^5( l (l+1)( - 6 + η_1 28/9 - η_1^2 2/15) + η_2 4/3) ] + μ α(Zα)^6/π n^3⟨β^NS(x) + L⃗·s⃗_1 β^S1(x) +L⃗·s⃗_2 β^S2(x) + s⃗_1·s⃗_2 β^SS(x) + (L^i L^j)^(2) s_1^i s_2^j β^LL(x) ⟩ +Z^2 (1↔2,x↔ x^-1) . The final result for antiprotonic atoms is thus very simple and compact. § SUMMARY We have derived a complete α (Z α)^6 and Z^2α (Z α)^6 one-loop self-energy correction to the energy levels of a two-body system with angular momentum l>0. The obtained results are valid for constituent particles of arbitrary masses and spin 1/2, with the nucleus being either point-like or extended-size. For l=1, the results are presented in Eqs. (<ref>-<ref>), while for l>1 in Eqs. (<ref>-<ref>). For the case of positronium, the results for l=1 are presented in Eqs. (<ref>-<ref>), and for rotational states of antiprotonic atoms in Eq. (<ref>). For hydrogenlike atoms in the nonrecoil limit, our results agree with the former calculation in the literature <cit.> in the case of a point nucleus. We also present the first-order recoil correction in Eq. (<ref>) for l=1 and in Eq. (<ref>) for l>1, which has not yet been considered in the literature. What is yet unknown is the pure exchange contribution of order (Z α)^7. Once it is completed, we aim to perform numerical calculations of relativistic Bethe logarithms and the electron (muon) vacuum polarization contributions. This will eventually allow for very accurate results for l>0 states of arbitrary two-body systems, including muonic and antiprotonic atoms. Finally, we note that using the operator form of the α (Z α)^6 correction in Eq. (<ref>) we found a small mistake in the previous calculation of a similar correction to He ionization energies, which we describe in detail in Appendix B. nancypaul N. Paul, Antiprotonic Atom X-ray Spectroscopy with Quantum Sensors, presented at “Future Nuclear and Hadronic Physics at the CERN AD” (2024). zatorski:22 J. Zatorski, V. Patkóš, and K. Pachucki, Phys. Rev. A 106, 042804 (2022). patkos:24:twobodyP V. Patkóš, V. A. Yerokhin, and K. Pachucki, Phys. Rev. A 109, 022819 (2024). codata:22 E. Tiesinga, CODATA 2022, in preparation. muH1 R. Pohl, et al., Nature 466, 213 (2010). muH2 A. Antognini, et al., Science 339, 417 (2013). muD R. Pohl, et al., (CREMA), Science 353, 669 (2016). mualpha J. J. Krauth, et al., Nature 589 (7843), 527 (2021). muhelion K. Schuhmann, et al., (CREMA), arXiv:2305.11679 [physics.atom-ph] (2023). rmp:2024 K. Pachucki, V. Lensky, F. Hagelstein, S.S. Li Muli, S. Bacca, and R. Pohl, Rev. Mod. Phys. 96, 015001 (2024). 1S2S C. G. Parthey, A. Matveev, J. Alnis, B. Bernhardt, A. Beyer, R. Holzwarth, A. Maistrou, R. Pohl, K. Predehl, T. Udem, T. Wilken, N. Kolachevsky, M. Abgrall, D. Rovera, C. Salomon, P. Laurent, and T. W. Hänsch, Phys. Rev. Lett. 107, 203001 (2011). adkins:23 G. S. Adkins, J. Gomprecht, Y. Li, and E. Shinn, Phys. Rev. Lett. 130, 023004 (2023). jentschura:05 U. D. Jentschura, A. Czarnecki, and K.
Pachucki, Phys. Rev. A 72, 062102 (2005). He1 G. Clausen, P. Jansen, S. Scheidegger, J. A. Agner, H. Schmutz, and F. Merkt Phys. Rev. Lett. 127, 093001 (2021). He2 G. Clausen, S. Scheidegger, J. A. Agner, H. Schmutz, and F. Merkt Phys. Rev. Lett. 131, 103001 (2023). He3 V. Patkóš, V. A. Yerokhin, K. Pachucki, Phys. Rev. A 103, 042809 (2021). patkos:21:rad V. Patkóš, V. A. Yerokhin, and K. Pachucki, Phys. Rev. A 103, 012803 (2021). bethe H. A. Bethe and E. E. Salpeter, Quantum Mechanics Of One- And Two-Electron Atoms (Plenum, New York, 1977). drakeLog R. A. Swainson and G. W. F. Drake, J. Phys. B 23, 1079 (1990). § OPERATORS CONTRIBUTING TO E_H2 Individual first-order operators that come from the anomalous magnetic moment of the first particle are δ H_1 = - Z α/4 m_1^4 L⃗·s⃗_1 ( p^2 1/r^3 + 1/r^3 p^2) , δ H_2 = Z α/2 m_1^2 m_2^2 (g_2-1) ( s⃗_2 ×p⃗)^i ( δ^ij/r^3 - 3 r^i r^j/r^5 +δ^ij/3 4 π δ^3(r) ) ( s⃗_1 ×p⃗)^j + Zα/24 m_1^2 ( r_E2^2 + 3/4 m_2^2) 4π ∇^2 δ^3(r) +iZα/6 m_1^2[ 3 (g_2-1)/4 m_2^2s⃗_2 + ( r_E2^2 + 3/4 m_2^2) s⃗_1]·p⃗× 4πδ^3(r) p⃗ , δ H_3 = Zα/4 m_1 m_2^3 ( {p^2 , s⃗_2 ×∇⃗_2 ·s⃗_1×r⃗/r^3} - { p^2 , p⃗·s⃗_1×r⃗/r^3}) + 1/4 m_1^3{p⃗·∇⃗_1× e_1A⃗_1, p⃗·s⃗_1 } + Zα (g_2-2)/8 m_1 m_2^3{p⃗×∇⃗_2·s⃗_1×r⃗/r^3, p⃗·s⃗_2 } - Zα/8 m_1^3 m_2(i s⃗_1·p⃗× 4π δ^3(r) p⃗ - g_2 s⃗_2 ×p⃗ 4π δ^3(r) s⃗_1 ×p⃗) + Zα/6 m_1 m_2(g_2 r_M2^2 +3 (g_2-2)/4 m_2^2) s⃗_2 ×p⃗ 4π δ^3(r) s⃗_1 ×p⃗ , δ H_4 = Zα/m_1 m_2 e_2A⃗_2·s⃗_1×r⃗/r^3 , δ H_5 = 1/8 m_1^3 (Z α)^2/r^4 , δ H_6 = (Zα)^2 (g_2-1)/2 m_1 m_2^2 s⃗_2×r⃗/r^3·s⃗_1×r⃗/r^3 -Zα/m_1^2 s⃗_1·r⃗/r^3× e_1A⃗_1 , δ H_7 = Z α/4 m_1 m_2 ( {[ (s⃗_1×r⃗/r)^i, p^2/2 m_1], i Zα r^i/r^3} + {[ p^2/2 m_2, [ (s⃗_1×r⃗/r)^i, p^2/2 m_1] ], p^i} ) - Z α g_2/8 m_1^2 m_2^2[ p^2, [p^2, s⃗_1·s⃗_2 2/3 r + s_1^i s_2^j 1/2 r( r^i r^j/r^2 - δ^ij/3) ] ] . δ H_8 = -Zα/m_1^2 s⃗_1 ·r⃗/r^3× e_1A⃗_1 + i/4 m_1^3 [ {e_1𝒜⃗_1, p⃗×s⃗_1}, p^2 ] +(Zα)^2 (g_2-1)/2 m_1 m_2^2 s⃗_2 ×r⃗/r^3·s⃗_1×r⃗/r^3 - i Zα (g_2-1)/8 m_1 m_2^3 [ {s⃗_1×r⃗/r^3 ,p⃗×s⃗_2}, p^2 ] . δ H_9 = Z α/6 m_1 m_2 (r_E2^2-3 (g_2-2)/8 m_2^2 ) i s⃗_1·p⃗× 4 π δ^3(r) p⃗ + Z α/16 m_1^3 m_2 ( i g_2 s⃗_2·p⃗× 4 π δ^3(r) p⃗ + 2 π ∇⃗^2δ^3(r)) , where e_1 A^i_1 = - Zα/2 r( δ^ij + r^i r^j/r^2) p_2^j/m_2 - Z α g_2/2 m_2(s⃗_2×r⃗)^i/r^3 , e_2 A^i_2 = - Zα/2 r( δ^ij + r^i r^j/r^2) p_1^j/m_1 + Z α/m_1(s⃗_1×r⃗)^i/r^3 , are static vector potentials. § COMPARISON WITH HELIUM Α^7 RADIATIVE CORRECTIONS We can compare our results with electron-electron operators derived for helium centroid triplet states in Ref. <cit.>, given by the expression E^B_SE in Eq. (156) of that work. It can be transformed into the form E_SE^B = α^7/π⟨ -(1039/1350+49/45ln[ α^-2]) p⃗ 4π δ^3(r) p⃗ + (403/90 + 2 ln[ α^-2] - 4/3 ln r - 4/3 γ - 2/3 ln 2)1/r^4⟩ . This result can be checked against our two-body first-order operators derived here. We obtain it from the general result E^(7)_rad in Eq. (<ref>) by setting g_2=2, r_E2^2 = r_M2^2= 0, m_1 = m_2 = 1, adding the corresponding result for the second particle where we make the exchange (1↔2), setting s⃗_1·s⃗_2 = 1/4, omitting fine structure and hyperfine structure tensor terms, and transforming into atomic units by r→ r/α. We obtain Ẽ_SE^B = α^7/π⟨ -(1039/1350+49/45ln[ α^-2]) p⃗ 4π δ^3(r) p⃗ + (851/180 + 2 ln[ α^-2] - 4/3 ln r - 4/3 γ - 2/3 ln 2)1/r^4⟩ . We observe a discrepancy between these results. It can be traced to the contribution E_2 in Ref. <cit.>, given by Eqs. (102), (103), and (104). There is a missing overall factor of two in this term, which would lead to an additional contribution in helium results equal to δ E = α^7/4 π r^4 . 
Correcting for this mistake, we would get a perfect agreement between the two results. The numerical change from this correction amounts only to 2 kHz for the 2^3S state and 3 kHz for the 2^3P state, and thus does not explain discrepancies for ionization energies <cit.>. § DERIVATION OF IDENTITIES To derive Eq. (<ref>), we start with the identity p^i [ p^i,[V,p^j]] p^j = p^2 V p^2 + 1/2 p^i [p^j,[V,p^j]] p^i - 1/2 {p^2,p⃗ V p⃗ } = 1/4 [p^2,[V,p^2]] + 1/2 p^i [p^j,[V,p^j]] p^i - 1/4 {p^2,[p⃗,[V, p⃗ ]]} . For states with l>0 the third term in the last equality vanishes. With the help of the expectation value identity [p^2,[V,p^2]] = 4 μ(∇⃗V)^2 , and relation [ p^i,[V,p^j]] = [Zα/r^3(δ^ij - 3 r^i r^j/r^2)]_ε + δ^ij/d Zα 4π δ^d(r) , with d=3-2 ε, we arrive at Eq. (<ref>). We will also present the evaluation for the expectation value of the operator in the first term of Eq. (<ref>), ⟨( s⃗_2 ×p⃗)^i ( δ^ij/r^3 - 3 r^i r^j/r^5 +δ^ij/3 4 π δ^3(r) ) ( s⃗_1 ×p⃗)^j⟩ . Firstly, we need to isolate the traceless part of this operator, which is contracted with spin vectors. The expectation value of the traceless part will be proportional to ⟨ (L^i L^j)^(2) s_1^i s_2^j⟩, while the trace part will result in terms involving ⟨s⃗_1·s⃗_2⟩. For the non-local term we get ⟨( s⃗_2 ×p⃗)^i ( δ^ij/r^3 - 3 r^i r^j/r^5) ( s⃗_1 ×p⃗)^j⟩ = ⟨ ( A (L^i L^j)^(2) + B δ^ij) s_1^i s_2^j⟩ . Coefficients A and B are obtained by projecting the expression on both sides of the equation, which is contracted with spin operators, either to (L^i L^j)^(2) or δ^ij. After lengthy angular momentum algebra, this leads to A = 1/(2l-1)(2l+3) ⟨10/3 p⃗ 4π δ^3(r) p⃗ - 12 μ E/r^3 - 16 μ Zα/r^4⟩ , B = -1/3 (1/6 p⃗ 4π δ^3 p⃗ + μ Zα/r^4) . For the local interaction part we would proceed in a similar way, leading to ⟨( s⃗_2 ×p⃗)^i 4 π δ^3(r) ( s⃗_1 ×p⃗)^i⟩ = ⟨p⃗ 4π δ^3(r) p⃗ ⟩ ×⟨ (L^i L^j)^(2) s_1^i s_2^j+ 2/3 s⃗_1·s⃗_2⟩ . § EXPECTATION VALUES OF FIRST-ORDER OPERATORS We employ the following identities to evaluate the expectation values with hydrogenic wave functions <cit.>: ⟨1/r^3⟩ = 2 (μ Zα)^3/l (l+1) (2l+1) n^3 ⟨1/r^4⟩ = 4 (μ Zα)^4 (3 n^2 - l(l+1))/l (l+1) (2l-1) (2l+1) (2l+3) n^5 , ⟨ln m_1 r + γ/r^4⟩ = (μ Zα)^4/l (l+1) (2l-1) (2l+1) (2l+3) n^5 ×[ 4 (3 n^2 - l(l+1)) ( H_2l-2 + H_2l+3 - H_n+l + lnn m_1/2 μ Zα - 1/2) - 2 ( 1 - 3 (2l+1) n + 4 n^2)] , ⟨ p⃗ 4π δ^3(r) p⃗ ⟩ = 4 (μ Zα)^5/3 (1/n^3 - 1/n^5) δ_l1 . § GENERAL RESULTS FOR STATES WITH L>1 In this section, we will present the results for arbitrary angular momentum l>1. Defining Δ = l (l+1) (2l-1) (2l+1) (2l+3) , we obtain the following results for the coefficients in Eq. (<ref>): ℰ_NS = ℰ_NS^(3)/n^3 + ℰ_NS^(4)/n^4 + ℰ_NS^(5)/n^5 + η_1^2 η_2 8 (l (l+1)-3 n^2)/3 n^5(H_2l-2 + H_2l+3 - H_n+l +lnn/2 η_1 Zα) +Δ/n^3 β_NS , ℰ_NS^(3) = η_1^2(8 ln_1 + 113/6 + 3/2l - 3/2 (l+1) + 4/(2l+1)^2 + η_1 ( -79/6 - 3/4l + 3/4(l+1) - 2/(2l+1)^2) + η_1^2 2/5) + η_1^2 η_2^2 ( - 3 /16l + 3 /16(l+1) - 3 /16(2l-1) - 3 /8(2l+1)^2 +3 /16(2l+3)) g_2^2 , ℰ_NS^(4) = η_1^2[- 3 (2l+1)/4 + 3/(2l+1) + η_2(-19(2l+1)/4 + 3/(2l+1))] - η_1^2 η_2^2 9/16(2l+1) g_2^2 , ℰ_NS^(5) = η_1^2 l(l+1) ( -8/3 ln_1 - 11/2 + 4/3l - 4/3(l+1) + η_1(28/9 - 4/3l + 4/3(l+1)) -η_1^2 2/15) . 
The following coefficient is ℰ_S1 = ℰ_S1^(3)/n^3 + ℰ_S1^(4)/n^4 + ℰ_S1^(5)/n^5 + Δ/n^3 β_L1 , ℰ_S1^(3) = η_1 [6 - 3/l + 3/(l+1) - 8/(2l+1)^2 - 6 η_1 η_2 +(η_1(η_1-2) + η_2^3g_2/4)(3/2l^2 - 13/2l + 3/2(l+1)^2 + 13/2(l+1) - 16/(2l+1)^2)] + η_1 η_2^2 g_2^2[3/16l^2 - 1/16l + 3/16(l+1)^2 + 1/16(l+1) + 3/4(2l-1) - 1/2(2l+1)^2 - 3/4(2l+3) + η_2(-9/16l^2 + 27/16l - 9/16(l+1)^2 - 27/16(l+1) - 3/4(2l-1) +9/2(2l+1)^2 + 3/4(2l+3))] , ℰ_S1^(4) = η_1[3 - 9/2l + 6l - 9/2(l+1) + 12/(2l+1) +η_2^2(1+η_2g_2/4)(9/2l + 9/2(l+1) - 24/(2l+1))] + η_1 η_2^2 g_2^2[9/16l + 9/16(l+1) - 3/4(2l+1) + η_2( -27/16l - 27/16(l+1) + 27/4(2l+1))] , ℰ_S1^(5) = η_1 l(l+1)[-8 + 9/2l - 9/2(l+1) +η_2(6 - 3/l + 3/(l+1)) +η_2^2(-6 + 9/2l - 9/2(l+1))] +η_1 η_2^33/8 (g_2-g_2^2) . For coefficient ℰ_S2 we obtain ℰ_S2 = ℰ_S2^(3)/n^3 + ℰ_S2^(4)/n^4 + ℰ_S2^(5)/n^5 + Δ/n^3 β_S2 , ℰ_S2^(3) = η_1^2 η_2(3+η_2) g_2(-3/8l^2 + 13/8l - 3/8(l+1)^2 -13/8(l+1) + 4/(2l+1)^2) + η_1^2 η_2^2 g_2^2(9/16l^2 - 27/16l + 9/16(l+1)^2 + 27/16(l+1) + 3/4(2l-1) - 9/2(2l+1)^2 - 3/4(2l+3)) , ℰ_S2^(4) = η_1^2 η_2 g_2[-27/8l - 27/8(l+1) + 18/(2l+1) + η_2(-9/8l - 9/8(l+1) + 6/(2l+1))] + η_1^2 η_2^2 g_2^2(27/16l + 27/16(l+1) - 27/4(2l+1)) , ℰ_S2^(5) = η_1^2 η_2 g_2(-9/8 - η_2 3/8) + η_1^2 η_2^2 3/8 g_2^2 . The scalar spin-spin coefficient is ℰ_SS = ℰ_SS^(3)/n^3 + ℰ_SS^(4)/n^4 + ℰ_SS^(5)/n^5 + Δ/n^3 β_SS , ℰ_SS^(3) = η_1 η_2^2 (2-1/l + 1/(l+1) - 8/3(2l+1)^2) +η_1 η_2 g_2(-5/2 + 1/l - 1/(l+1) + 8/3(2l+1)^2) + η_1^2 η_2^2 g_2^2(-1/4l + 1/4(l+1) - 1/4(2l-1) - 1/2(2l+1)^2 + 1/4(2l+3)) , ℰ_SS^(4) = η_1 η_2 (η_2 - g_2)(2l+1 - 4/(2l+1)) -η_1^2 η_2^23 /4(2l+1) g_2^2 , ℰ_SS^(5) = η_1 η_2 g_2 l(l+1)/6 . Finally, for the tensor spin-spin coefficient we obtain ℰ_LL = ℰ_LL^(3)/n^3 + ℰ_LL^(4)/n^4 + ℰ_LL^(5)/n^5 + Δ/n^3 β_LL , ℰ_LL^(3) = η_1 η_2^2(-3/l^2 + 13/l - 3/(l+1)^2 - 13/(l+1) + 9/(2l-1) + 32/(2l+1)^2 - 9/(2l+3)) + η_1 η_2 g_2[η_2^2 (9/2 l^2+27/2 (l+1). . +9/2 l-1-9/2 l+3+9/2 (l+1)^2-36/(2 l+1)^2-27/2 l) +η_2 (9/2 l^2+27/2 (l+1)+18/2 l-1-18/2 l+3. . +9/2 (l+1)^2-36/(2 l+1)^2-27/2 l) -15/4 l^2+17/4 l-17/4 (l+1)-24/2 l-1+24/2 l+3-15/4 (l+1)^2+16/(2 l+1)^2] + η_1 η_2^2 g_2^2[ 3/2 l^2-1/2 l+1/2 (l+1)+1/2 (2 l-1)-1/2 (2 l+3)+3/2 (l+1)^2-3/(2 l-1)^2-6/(2 l+1)^2-3/(2 l+3)^2 + η_2 (-15/4 l^2-29/4 (l+1)-29/4 (2 l-1)+29/4 (2 l+3)-15/4 (l+1)^2+3/(2 l-1)^2+24/(2 l+1)^2+3/(2 l+3)^2+29/4 l)] , ℰ_LL^(4) = η_1 η_2 g_2[-45/4l - 45/4(l+1) + 24/(2l+1) + η_2(1+η_2)(27/2l + 27/2(l+1) - 54/(2l+1))] +η_1 η_2^2 g_2^2 [-27/4l - 27/4(l+1) + 27/(2l+1) + η_1(45/4l + 45/4(l+1) - 9/2(2l-1) - 36/(2l+1) - 9/2(2l+3))] + η_1 η_2^2(-9/l -9/(l+1) + 48/(2l+1)) , ℰ_LL^(5) = η_1 η_2 g_2[η_2^2 (9/4 (2 l+3)-9/4 (2 l-1)+6)+η_2 (9/2 (2 l+3)-9/2 (2 l-1)-6)+6/2 l-1-6/2 l+3+47/4] + η_1 η_2^2 (4 η_2 -1) g_2^2(9/16(2l-1) - 9/16(2l+3))+ η_1 η_2^2(-9 - 9/4(2l-1) + 9/4(2l+3)) . 
Further, for the positronium atom l>1 states we obtain E^(7)_pos = m α(Zα)^6/π Δ⟨ℰ^pos_NS + L⃗·(s⃗_1 + s⃗_2) ℰ^pos_LS + s⃗_1·s⃗_2 ℰ^pos_SS + (L^i L^j)^(2) s_1^i s_2^j ℰ^pos_LL⟩ , where ℰ^pos_NS = 1/n^3(247 /80 + 15/64l - 15/64(l+1) -3/64(2l-1) +21/32(2l+1)^2+3/64(2l+3)) + 1/n^4(-25/32 - 25l/16 + 63/64(2l+1)) + 1/n^5(1/6 - 179 l (l+1) /180) + ( 3n^2 - l(l+1))/3n^5(2 - H_2l-2 - H_2l+3 + H_n+l - lnn/ Zα) + Δ/n^3 β_NS(1) , ℰ^pos_LS = 1/n^3(9/8 - 3/8l^2 + 17/16l - 3/8(l+1)^2 - 17/16(l+1) +3/16(2l-1) + 19/8(2l+1)^2 - 3/16(2l+3)) + 1/n^4(3/4 - 9/8l + 3l/2 - 9/8(l+1) + 57/16(2l+1)) +1/n^5(57/64 - 13l(l+1)/8) + Δ/n^3 β_LS(1) , ℰ^pos_SS = 1/n^3(-1 + 5/16l - 5/16(l+1) - 1/16(2l-1) +7/8(2l+1)^2 + 1/16(2l+3)) +1/n^4(-3/8 - 3l/4+ 21/16(2l+1)) + l(l+1)/12 n^5 + Δ/n^3 β_SS(1) , ℰ^pos_LL = 1/n^3(-3/4l^2 + 1/4l - 3/4(l+1)^2 - 1/4(l+1) - 3/4(2l-1)^2 - 109/16(2l-1) + 3/2(2l+1)^2 - 3/4(2l+3)^2 + 109/16(2l+3)) + 1/n^4(-9/4l - 9/4(l+1) - 9/8(2l-1) + 9/4(2l+1) - 9/8(2l+3)) +1/n^5(4 + 51/32(2l-1) - 51/32(2l+3)) + Δ/n^3 β_LL(1) . For hydrogenlike atoms with l>1, in the limit of an infinitely heavy nucleus, we get the result E^(7,0)_ hydr = m_1 α(Zα)^6/π Δ[1/n^3(91/15 + 3/4l - 3/4(l+1) + 2/(2l+1)^2) +1/n^4(-3/4 - 3l/2 + 3/(2l+1)) - 227 l(l+1)/90 n^5 + 8 (3n^2 - l(l+1))/3 n^5 +⟨L⃗·s⃗_1⟩ (1/n^3(6 - 3/2l^2 + 7/2l - 3/2(l+1)^2 - 7/2(l+1) + 8/(2l+1)^2) + 1/n^4(3 - 9/2l + 6l - 9/2(l+1) + 12/(2l+1)) +1/n^5(9/2 - 8 l(l+1))) + 1/n^3 (β_NS(0) + ⟨L⃗·s⃗_1⟩ β_S1(0)) ] , and for the leading recoil correction we get E^(7,1)_ hydr = m_1^2 α(Zα)^6/π m_2 Δ⟨ℰ^(7,1)_NS + L⃗·s⃗_1 ℰ^(7,1)_S1 +L⃗·s⃗_2 ℰ^(7,1)_S2 +s⃗_1·s⃗_2 ℰ^(7,1)_SS +(L^i L^j)^(2) s_1^i s_2^j ℰ^(7,1)_LL⟩ , ℰ^(7,1)_NS = 1/n^3(13/6-3/2l + 3/(2l+1) - 4/(2l+1)^2) -1/n^4(5/2 + 5l + 6/(2l+1))+ 1/n^5(4/3 + 37l(l+1)/18) + 8(l(l+1) - 3 n^2)/3 n^5 (3 + H_2l-2 + H_2l+3 - H_n+l + lnn/2 Zα) + Δ/n^3 (β_NS^(1)-β_NS^(0)) ℰ^(7,1)_S1 = 1/n^3(-18 + 3/l^2 - 7/l + 3/(l+1)^2 + 7/(l+1) - 16/(2l+1)^2) +1/n^4(-6 + 9/l - 12l +9/(l+1) - 24/(2l+1)) + 1/n^5(-12 + 22 l(l+1)) + Δ/n^3 (β_S1^(1) - β_S1^(0)) ℰ^(7,1)_S2 = g_2[1/n^3(-9/8l^2 + 39/8l - 9/8(l+1)^2 - 39/8(l+1) + 12/(2l+1)^2) + 1/n^4(-27/8l -27/8(l+1) + 18/(2l+1)) - 9/8 n^5] + Δ/n^3 β_S2^(1) , ℰ^(7,1)_SS = g_2[1/n^3(-5/2 + 1/l - 1/(l+1) + 8/3(2l+1)^2) +1/n^4(-1 - 2 l + 4/(2l+1)) + l(l+1)/6 n^5] + Δ/n^3 β_SS^(1) , ℰ^(7,1)_LL = g_2[1/n^3(-15/4l^2 + 17/4l - 15/4(l+1)^2 - 17/4(l+1) - 24/(2l-1) + 16/(2l+1)^2 + 24/(2l+3)) +1/n^4(-45/4l-45/4(l+1) + 24/(2l+1)) + 1/n^5(47/4 + 6/(2l-1) - 6/(2l+3))] + Δ/n^3 β_LL^(1) .
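For numerical orientation in the l>1 formulas above (a worked example added here), the combinatorial ingredients for d states evaluate to
\[
  \Delta\big|_{l=2} \;=\; 2\cdot 3\cdot 3\cdot 5\cdot 7 \;=\; 630 ,
  \qquad
  H_{2l-2} = H_2 = \tfrac{3}{2} ,
  \qquad
  H_{2l+3} = H_7 = \tfrac{363}{140} \approx 2.59 .
\]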
http://arxiv.org/abs/2407.02836v1
20240703062723
A category of arrow algebras for modified realizability
[ "Umberto Tarantino" ]
math.CT
[ "math.CT", "math.LO", "03G30, 18B25" ]
umberto.tarantino@studenti.unimi.it

§ ABSTRACT

In this paper we further the study of arrow algebras, simple algebraic structures inducing toposes through the tripos-to-topos construction, by defining appropriate notions of morphisms between them which correspond to morphisms of the associated triposes. Specializing to geometric inclusions, we characterize subtriposes of an arrow tripos in terms of nuclei on the underlying arrow algebra, recovering a classical locale-theoretic result. As an example of application, we lift modified realizability to the setting of arrow algebras, and we establish its functoriality.

July 8, 2024

The purpose of this paper is to develop the theory of arrow algebras as a framework to study realizability toposes from a more concrete, `algebraic', point of view which can also take localic toposes into account. Arrow algebras were introduced in <cit.>, of which this paper can be seen as a follow-up, generalizing Alexandre Miquel's implicative algebras <cit.> as algebraic structures which induce triposes and hence toposes. The main advantage of arrow algebras, compared to implicative algebras, lies in the fact that they factor the construction of realizability triposes for arbitrary partial combinatory algebras[From now on, PCAs.], including the properly partial ones, whereas implicative algebras are the intermediate structure only in the case of total combinatory algebras. The aim of the following is then to define appropriate notions of morphisms between arrow algebras which correspond to morphisms of the associated triposes, so as to determine a category of arrow algebras which factors through the construction of both realizability and localic triposes in a 2-functorial way. Specializing the previous correspondence to geometric inclusions, we will characterize subtriposes of triposes arising from arrow algebras – that is, arrow triposes – as arrow triposes themselves, generalizing the locale-theoretic concept of a nucleus and the correspondence between nuclei on a locale and sublocales. To do this, we will make use of a particularly simple construction which does not work for implicative algebras and which was also part of the original motivation for the introduction of arrow algebras. As an example of application of this framework, we will study modified realizability (<cit.> and <cit.>) at the level of arrow algebras, rephrasing and extending results partially known in the literature at the level of PCAs.

§ PRELIMINARIES ON TRIPOS THEORY

We begin by reviewing the necessary background on tripos theory, mainly following the account given in <cit.> on the basis of Pitts' PhD thesis <cit.>; the reader is instead assumed to be familiar with topos theory, for which standard references are <cit.>.

§.§ Preorder-enriched categories

First, we need to establish our terminology for 2-dimensional categories.
Following <cit.>, with 2-category we mean a 2-dimensional category which is also an ordinary category, meaning that the unit and associativity laws for 1-cells hold on the nose. Instead, we speak of bicategories for 2-dimensional categories where the axioms of an ordinary category only hold up to (coherent) invertible 2-cells. We can now introduce the most important kind of 2-dimensional category in the context of this thesis. A preorder-enriched category is a locally small 2-category with at most one 2-cell between any pair of 1-cells.[As in <cit.>, we will also speak improperly of preorder-enriched bicategories for bicategories whose homcategories are preorders.] Explicitly, this means that a preorder-enriched category is a category endowed with a preorder structure on each homset (A,B), in such a way that the composition map (B,C) ×(A,B) →(A,C) is order-preserving for all A,B,C∈. In a preorder-enriched category, a morphism f : X→ Y is left adjoint to a morphism g : Y → X – equivalently, g is right adjoint to f – if 𝕀_X ≤ g f and fg ≤𝕀_Y, in which case we write f ⊣ g. Two parallel morphisms f,g are isomorphic if f ≤ g and g ≤ f, in which case we write f g. Every category is preorder-enriched with respect to the discrete order. The category of preordered sets and monotone functions is preorder-enriched with respect to the pointwise order. Let and be preorder-enriched categories. A pseudofunctor F : → maps every object X in to some object F(X) in and every morphism f : X → Y in to some morphism F(f) : F(X)→ F(Y) in , in such a way that: * the association XY→F(X)F(Y) is order-preserving for all objects X,Y in ; * F(𝕀_X)𝕀_F(X) for all objects X in ; * F(gf) F(g)F(f) for all composable arrows f,g in . In particular, F is a 2-functor[This is called an enriched functor in <cit.>.] if it is an actual functor, i.e. if the equalities in (ii) and (iii) hold on the nose rather than up to isomorphism. Let F and G be pseudofunctors →. A pseudonatural transformation Φ : F G is given by a morphism Φ_X :F(X) → G(X) in for every object X in in such a way that, for all morphisms f : X → Y in , the square F(X) G(X) F(Y) G(Y)["Φ_X", from=1-1, to=1-2] ["F(f)"', from=1-1, to=2-1] ["G(f)", from=1-2, to=2-2] ["Φ_Y"', from=2-1, to=2-2] commutes up to isomorphism. We will also have to deal with pseudomonads on a preorder-enriched category , that is, the datum of a pseudofunctor T : → and two pseudonatural transformations η : 𝕀_T T and μ : T^2 T satisfying the usual monad laws up to (coherent) invertible modifications. We will not go into any detail about the theory of pseudomonads, which will serve us only en passant: for reference, see <cit.>. We will then consider pseudoalgebras over a pseudomonad T, that is, objects X endowed with a morphism T(X) → X in satisfying the usual algebra laws up to isomorphism. In complete analogy with the 1-dimensional case, pseudoalgebras determine the Kleisli bicategory T of the pseudomonad T: the critical point here is precisely that T is not necessarily a 2-category, but we will see how this will not really be an issue for our purposes. The following example of a preorder-enriched category plays a key role in the theory of triposes. A Heyting prealgebra is a preorder whose poset reflection is a Heyting algebra; in other words, it is a (small) thin cartesian closed category which admits finite coproducts. 
A morphism of Heyting prealgebras is a monotone function which is a morphism of Heyting algebras between the poset reflections of domain and codomain; in other words, it is a functor preserving finite products and coproducts and exponential objects. We denote with the category of Heyting prealgebras, which is preorder-enriched with respect to the pointwise order. §.§ Triposes As the only triposes considered in this thesis will be -based, we restrict ourselves to triposes over a fixed elementary topos . An -tripos is a pseudofunctor P : → satisfying the following axioms. * For every morphism f : X → Y in , the map f^* P(f) : P(Y)→ P(X) has both a left adjoint ∃_f and a right adjoint ∀_f in .[That is, ∃_f and ∀_f need not preserve the Heyting structure.] Moreover, these adjoints satisfy the Beck-Chevalley condition, which means that for every pullback square in as the left one, the induced square on the right commutes up to isomorphism in :[This also implies the same condition for left adjoints, namely that ∃_f ∘ g^* h^* ∘∃_k.] X Y Z W ["f", from=1-1, to=1-2] ["g"', from=1-1, to=2-1] ["k"', from=2-1, to=2-2] ["h", from=1-2, to=2-2] ["⌟"anchor=center, pos=0.125, draw=none, from=1-1, to=2-2] P(X) P(Y) P(Z) P(W)["∀_g"', from=1-1, to=2-1] ["f^*"', from=1-2, to=1-1] ["∀_h", from=1-2, to=2-2] ["k^*", from=2-2, to=2-1] * There exists a generic element in P, which is an element σ∈ P(Σ) for some object Σ in with the property that, for every object X in and every element ϕ∈ P(X), there exists a morphism [ϕ] : X →Σ in such that ϕ and [ϕ]^*(σ) are isomorphic elements of P(X). An -tripos P such that P(X) = (X, Σ) for some object Σ in is said to be canonically presented. The fact that f^* preserves the Heyting structure implies the Frobenius condition, that is, for every morphism f : X → Y in , ψ∈ P(X) and ϕ∈ P(Y): ∃_f ( ψ f^*(ϕ) ) _Y ∃_f(ψ) ϕ A trivial example of an -tripos is given by Sub_ : 𝖤→𝖧𝖾𝗒𝗍𝖠𝗅𝗀. Indeed, for any morphism f : X → Y in it is well-known that the pullback map f^* : []Y→ [] X is a morphism of Heyting algebras and has adjoints satisfying the Beck-Chevalley condition; a generic element is given by (the subobject of Ω represented by) the subobject classifier t : 1 →Ω. Let H be a complete Heyting algebra. We define the -tripos of H-valued predicates P_H as follows. For any set X, we let P_H(X) XH, which is a Heyting algebra under pointwise order and operations; for any function f :X→ Y, the precomposition map f^* : P_H(Y)→ P_H(X) is then a morphism of Heyting algebras. Adjoints for f^* are provided by completeness as, for ϕ∈ P_H(X) and y∈ Y: ∃_f (ϕ)(y) _x ∈ f (y)ϕ(x) ∀_f (ϕ)(y) _x ∈ f(y) ϕ(x) which also satisfy the Beck-Chevalley condition. A generic element is trivially given by 𝕀_H ∈ P_H(H). Let P and Q be -triposes. A transformation Φ : P → Q is a pseudonatural transformation P Q where P and Q are considered as pseudofunctors →; in other words, this means that each component Φ_X : P(X) → Q(X) is an order-preserving map but not necessarily a morphism of Heyting prealgebras. Transformations P Q can be ordered by letting Φ≤Ψ if Φ_X ≤Ψ_X pointwise for all X in , therefore making -triposes and transformations into a preorder-enriched category which we denote as . A transformation Φ : P → Q is an equivalence if there exists another transformation Ψ : Q → P such that Φ∘Ψ𝕀_Q and Ψ∘Φ𝕀_P. Through the generic element, every -tripos is equivalent to a canonically presented one. 
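The H-valued tripos just described is concrete enough to execute. The following is a minimal Python sketch (our own illustration, not part of the original text) of P_H over the three-element chain 0 < 1/2 < 1, verifying the adjunctions ∃_f ⊣ f^* and f^* ⊣ ∀_f by brute force over all predicates; all function names are ours, and the finite chain merely stands in for an arbitrary complete Heyting algebra.

from itertools import product

# The finite Heyting algebra H: the chain 0 < 1/2 < 1, where meets and
# joins are min and max, and a -> b is 1 if a <= b and b otherwise.
H = [0.0, 0.5, 1.0]

def imp(a, b):                 # Heyting implication in the chain
    return 1.0 if a <= b else b

def reindex(f, phi):           # f^* : P_H(Y) -> P_H(X), by precomposition
    return lambda x: phi(f(x))

def exists(f, X, phi):         # left adjoint: join over the fibre of f
    return lambda y: max((phi(x) for x in X if f(x) == y), default=0.0)

def forall(f, X, phi):         # right adjoint: meet over the fibre of f
    return lambda y: min((phi(x) for x in X if f(x) == y), default=1.0)

X, Y = [0, 1, 2, 3], [0, 1]
f = lambda x: x % 2

for phi_t in product(H, repeat=len(X)):
    phi = lambda x, t=phi_t: t[x]
    for psi_t in product(H, repeat=len(Y)):
        psi = lambda y, t=psi_t: t[y]
        # exists_f(phi) <= psi  iff  phi <= f^*(psi)
        assert (all(exists(f, X, phi)(y) <= psi(y) for y in Y)
                == all(phi(x) <= reindex(f, psi)(x) for x in X))
        # f^*(psi) <= phi  iff  psi <= forall_f(phi)
        assert (all(reindex(f, psi)(x) <= phi(x) for x in X)
                == all(psi(y) <= forall(f, X, phi)(y) for y in Y))

Since the Heyting structure of P_H(X) is computed pointwise (the function imp above), the same setup can be used to test, for instance, the Frobenius condition.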
The category of triposes and their transformations is essentially locally small: in fact, a transformation Φ : P → Q is determined up to isomorphism by Φ_Σ(σ) ∈ Q(Σ), since for any ϕ∈ P(X) we have that Φ_X(ϕ) ≅_X Q([ϕ])(Φ_Σ(σ)). Every tripos P gives rise to a topos [P] in a process called the tripos-to-topos construction, which we will not describe here as it does not serve any purpose for the sake of this paper. The interested reader can find all the details in <cit.>. In particular, the tripos-to-topos construction applied to the subobject tripos of a topos returns, up to equivalence, that topos itself. For a complete Heyting algebra H, [P_H] is the topos of H-valued sets, equivalent to the topos of sheaves over H. §.§ Geometric morphisms of triposes The most important notion of morphism between toposes is arguably that of geometric morphism, which by now has a vast and standard theory. Much more niche, instead, is the theory of geometric morphisms of triposes, and how they relate to geometric morphisms of toposes: with no aim for a complete treatment, we review here the notions we will need in the following. Let P and Q be triposes. A transformation Φ : P → Q is left exact if each component Φ_X : P(X) → Q(X) preserves finite meets up to isomorphism. Left exact transformations form a wide subcategory of the category of triposes. A transformation Φ^+ : P → Q admits a right adjoint if there exists another transformation Φ_+ : Q → P such that (Φ^+)_X ⊣ (Φ_+)_X for all X, in which case we write Φ^+ ⊣Φ_+. If Φ^+ is moreover left exact, then the pair (Φ^+, Φ_+) defines a geometric morphism Q → P[The direction is conventional and follows the same convention for geometric morphisms of toposes.], of which Φ_+ and Φ^+ constitute respectively the direct and inverse image. For practical reasons, we also consider the wide subcategory on transformations having a right adjoint; a morphism P → Q there is hence a geometric morphism Q → P. Let Φ : P → Q be an equivalence and let Ψ : Q → P be such that Φ∘Ψ ≅ 𝕀_Q and Ψ∘Φ ≅ 𝕀_P. Then, (Φ,Ψ) : Q→ P and (Ψ,Φ) : P → Q are both geometric morphisms. Every geometric morphism of triposes Q → P induces a geometric morphism [Q] →[P]. The converse is not true in general: a geometric morphism [Q] →[P] is induced by a geometric morphism of triposes Q → P if and only if its inverse image part preserves constant objects; again, we refer the reader to <cit.>. Let X,Y be two complete Heyting algebras regarded as locales. Then, geometric morphisms P_X → P_Y correspond to locale homomorphisms X → Y. More precisely, given any geometric morphism Φ = (Φ^+, Φ_+) : P_X → P_Y, there exists an essentially unique morphism of locales f : X → Y such that, regarding f as a morphism of frames f^* : Ø(Y) →Ø(X) and letting f_* : Ø(X) →Ø(Y) be its right adjoint, Φ^+ is given by postcomposition with f^* and Φ_+ is given by postcomposition with f_*. §.§ Subtriposes Geometric inclusions and surjections of toposes admit analogues on the level of triposes. A geometric morphism of triposes Φ = (Φ^+, Φ_+) : Q → P is an inclusion if either of the following equivalent conditions holds: – for all X, (Φ_+)_X reflects the order; – Φ^+ ∘Φ_+ ≅ 𝕀_Q. Dually, Φ is a surjection if either of the following equivalent conditions holds: – for all X, (Φ^+)_X reflects the order; – Φ_+ ∘Φ^+ ≅ 𝕀_P. Every geometric inclusion (resp. surjection) of triposes Q → P induces a geometric inclusion (resp. surjection) [Q]→[P]. Moreover, every geometric inclusion into [P] is induced, up to equivalence, by an essentially unique geometric inclusion of triposes into P.
Let Sub(P) denote the set of subtriposes of P, that is, triposes endowed with a geometric inclusion into P.[For practical reasons, we identify a subtripos with the inclusion itself.] Given two geometric inclusions Φ : Q → P and Ψ : R → P, we write Φ⊆Ψ if there exists a geometric morphism Θ : Q → R such that Φ ≅ Ψ∘Θ – meaning that Φ_+ ≅ Ψ_+ ∘Θ_+ or equivalently Φ^+ ≅ Θ^+ ∘Ψ^+ –, in which case Θ is an inclusion itself. This relation obviously makes Sub(P) into a preorder. Two subtriposes Φ : Q → P and Ψ : R → P are equivalent if they are isomorphic elements of Sub(P), that is, if both Φ⊆Ψ and Ψ⊆Φ hold; equivalently, this means that there exists an equivalence Θ : Q → R such that Φ ≅ Ψ∘Θ. As it is known, subtoposes of a topos correspond up to equivalence to local operators, that is, morphisms j : Ω→Ω such that, in the internal logic of the topos: * j(t) = t; * jj = j; * j(a ∧ b) = j(a) ∧ j(b). In a topos of the form [P] for a canonically presented tripos P = (−, Σ), such a morphism corresponds to an essentially unique transformation Φ_j : P → P which is: * left exact; * inflationary, that is, 𝕀_P ≤Φ_j; * idempotent, that is, Φ_j ∘ Φ_j ≅ Φ_j. Such a Φ_j is called a closure transformation on P; conversely, every closure transformation on P determines a local operator on [P]. These correspondences lead to the following result. Let P be a canonically presented tripos and let Cl(P) denote the set of closure transformations on P, ordered as above. * Geometric inclusions into P correspond, up to equivalence, to closure transformations on P; in particular, there is an equivalence of preorder categories: Sub(P) ≡ Cl(P). * Every geometric inclusion of toposes into [P] is, up to equivalence, of the form [Q] →[P], induced by a geometric inclusion of triposes Q → P; in particular, there is an equivalence of preorder categories: Sub([P]) ≡ Cl(P). Note then that the poset reflection of Sub(P) is a bounded distributive lattice, since so is the set of subtoposes of any topos considered up to equivalence. In the case of a canonically presented tripos P = (−, Σ), we can even give an explicit description of the inclusion Q → P inducing a geometric inclusion into [P]. Let ([P])_j be the subtopos of [P] corresponding to a closure transformation Φ_j : P → P and let J ≔ (Φ_j)_Σ(𝕀_Σ): Σ→Σ. Then, ([P])_j is equivalent over the base topos to [P_j], where P_j is the canonically presented tripos defined as follows: * the underlying pseudofunctor is still (−, Σ); * the order ⊢^j is redefined as ϕ⊢_I^j ψ if and only if ϕ⊢_I J ψ; * the implication →_j is redefined as the composite → ∘ (𝕀_Σ × J) : Σ×Σ→Σ, while ⊤, ⊥, ∧ and ∨ remain unchanged.[Left and right adjoints for f^* can then be defined as ϕ↦∃_f(ϕ) and ϕ↦∀_f (Jϕ).] This means that we can restate the previous theorem as follows. Let P be a canonically presented tripos. Then, every geometric inclusion of toposes into [P] is induced, up to equivalence, by a geometric inclusion of triposes of the form P_j → P whose inverse image 𝕀_Σ ∘ (−) : P → P_j is left adjoint to the direct image J ∘ (−) : P_j → P, for some J : Σ→Σ corresponding as above to a closure transformation Φ_j on P. Let X be a complete Heyting algebra regarded as a locale. Then, closure transformations on P_X correspond to nuclei on X, that is, meet-preserving, inflationary and idempotent endofunctions on the underlying frame of X; therefore, they also correspond to sublocales of X.
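The nuclei appearing in this example can be checked mechanically on a finite frame. Below is a small sketch (ours, with the same toy chain as in the previous snippet) verifying that double negation j(a) = (a → 0) → 0 satisfies the nucleus axioms and computing the associated sublocale as the set of j-fixed elements.

H = [0.0, 0.5, 1.0]

def imp(a, b):                 # Heyting implication in the chain
    return 1.0 if a <= b else b

def j(a):                      # the double-negation nucleus
    return imp(imp(a, 0.0), 0.0)

assert all(a <= j(a) for a in H)                  # inflationary
assert all(j(j(a)) == j(a) for a in H)            # idempotent
assert all(j(min(a, b)) == min(j(a), j(b))        # preserves binary meets
           for a in H for b in H)

sublocale = [a for a in H if j(a) == a]
print(sublocale)               # [0.0, 1.0]: the Booleanization of the chain

Under the correspondence recalled above, the pointwise extension of such a j is a closure transformation on the localic tripos P_H, and the corresponding subtripos is the localic tripos of the sublocale of fixed points.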
Another important notion from topos theory which can be recovered at the level of triposes is that of open and closed subtoposes. A subtripos Φ : Q P is open if there exists an element α∈ P(1) such that, for any ϕ∈ P(I): Φ_+Φ^+(ϕ) P(!)(α) →ϕ where ! is the unique function I → 1. Dually, a geometric inclusion of triposes Ψ : R P is closed if there exists an element β∈ P(1) such that, for any ϕ∈ P(I): Ψ_+Ψ^+(ϕ)ϕ P(!)(β) where ! is the unique function I → 1. For α = β, Φ and Ψ define each other's complement in the lattice of subtriposes of P considered up to equivalence. Through the correspondence in <ref>, open (resp. closed) subtriposes correspond to open (resp. closed) subtoposes. Let X be a complete Heyting algebra regarded as a locale. Then, open (resp. closed) subtriposes of P_X correspond to open (resp. closed) sublocales of X. § ARROW ALGEBRAS §.§ Arrow algebras We now review the theory of arrow algebras presented in <cit.>. An arrow structure is a complete meet-semilattice (A, ) endowed with a binary operation → : A × A → A such that if a' a and b b', then a→ b a' → b'. A separator on an arrow structure (A, , →) is a subset S ⊆ A such that: * it is upward closed, i.e. if a ∈ S and a b then b∈ S; * it is closed under modus ponens, i.e. if a→ b ∈ S and a ∈ S then b∈ S; * it contains the combinators: k _a,b∈ A a → b → a s _a,b,c ∈ A (a → b → c) → (a→ b) → (a → c) a _a, (b_i)_i∈ I, (c_i)_i∈ I∈ A (_i∈ I a → b_i → c_i) → a →(_i∈ I b_i → c_i) An arrow algebra is a quadruple (A, , →, S) where (A, , →) is an arrow structure and S is a separator on it. We will assume that → associates to the right, as it is common in type theory, and binds stronger than . This means that, for example, the combinator k is given by _a,b∈ A ( a → ( b → a ) ). A fundamental property of arrow algebras is given by the following. Let = (A, , →, S) be an arrow algebra. * Let (x_i)_i∈ I,(y_i)_i∈ I and (z_i)_i∈ I be I-indexed families of elements in A. If: _i ∈ I x_i→ y_i → z_i ∈ S _i ∈ I x_i ∈ S then: _i ∈ I y_i→ z_i ∈ S * Let ϕ be a propositional formula built from propositional variables p_1,…,p_n using implications only. If ϕ is an intuitionistic tautology, then: _a_1,…,a_n ∈ Aϕ (a_i/p_i) ∈ S In the following, we will make extensive use of the previous proposition, which we will justify simply as intuitionistic reasoning. The idea, in essence, is to find a suitable propositional intuitionistic tautology built of implications, in general of the shape ϕ→ψ→χ, so that ϕ→ψ→χ∈ S as in (2); then, from the knowledge of ϕ∈ S, we can deduce ψ→χ∈ S as in (1). Let (A, , →, S) be an arrow algebra. Then, S contains the combinators: i _a ∈ A a → a b _a,b,c ∈ A (b→ c) → (a→ b) → (a → c) An arrow algebra = (A, , →, S) is compatible with joins if, for all a∈ A and all B ⊆ A: (_b∈ B b) → a = _b∈ B (b → a) §.§ Nuclei We now introduce nuclei, a generalization of the locale-theoretical notion which will correspond to closure transformations on (and hence subtriposes of) arrow triposes. Let 𝒜= (A, , →, S) be an arrow algebra. A nucleus on 𝒜 is a function j : A → A such that: * if a b then ja jb; * _a ∈ A a → ja ∈ S; * _a,b ∈ A (a→ jb)→ ja → jb ∈ S. In particular, these properties also imply: iv. _a∈ A jja → ja ∈ S; v. _a,b ∈ A (a→ b) → ja → jb ∈ S; vi. _a,b ∈ A j(a→ b)→ ja → jb ∈ S, and we can even substitute (iii) in the definition above with the conjunction of (iv) and (vi). Starting from every nucleus j on , we can define a new arrow algebra _j, which generalizes what in locale theory would be the sublocale of j-fixed elements. 
Let 𝒜= (A, , → , S) be an arrow algebra and let j : A → A be a nucleus on it. Then, 𝒜_j = (A, , →_j, S_j) with a →_j b a → jb S_j a ∈ A | ja ∈ S is also an arrow algebra, which is compatible with joins whenever so is 𝒜. _a ∈ A a → ja ∈ S implies that S ⊆ S_j: in fact, if a ∈ S, then by modus ponens ja ∈ S, which precisely means a∈ S_j. For any arrow algebra 𝒜= (A, , →, S) and all c ∈ A, the following are nuclei on : * ja c → a; * ja (a → c) → c; * ja (a → c) → a. Particularly relevant in the theory of arrow algebras is the nucleus: ∂ a ⊤→ a which is the special case of (1) above for c = ⊤. Note that it satisfies the extra property: _a∈ A (⊤→ a) → a ∈ S This follows by intuitionistic reasoning since q → (q → p) → p is an intuitionistic tautology and ⊤∈ S. §.§ The interpretation of lambda terms As for implicative algebras, we can interpret λ-terms in any arrow algebra = (A,,→,S). To do this, we need to interpret applications and abstractions. For a, b ∈ A, we define the application: ab U_a,b where U_a,b c → d ∈ A | a b → c → d. The application enjoys the following properties. * If a a' and b b', then ab a'b'. * (a→ b → c)a b→ c. * (a →_i∈ Ib_i→ c_i)a _i∈ Ib_i→ c_i. * If a,b ∈ S, then ab∈ S. For a function f : A → A, we define the abstraction: λ f _x∈ A x →∂ f (x) Let f,g : A → A be functions; the abstraction enjoys the following properties. * If f(a) g(a) for all a∈ A, then λ f λ g. * (λ f) a ∂ f (a) for all a∈ A. We will assume that application associates to the left and binds stronger than →, which binds stronger than ∂, which binds stronger than . In particular, this means that a → bc stands for a → (bc), ∂ ab stands for ∂ (ab), and ∂ a → b stands for ⊤→ a → b rather than (⊤→ a) → b. We can now recursively define an interpretation of λ-terms in . Let M be a λ-term with free variables among x_1,…, x_n. The interpretation ot M in 𝒜 is a function M^𝒜 : A^n → A defined recursively as follows: * if M = x_i, then M^𝒜 is the projection onto the i-th coordinate; * if M = N_1N_2, then: M^𝒜(ă) N_1^𝒜 (ă) N_2^𝒜(ă) * if M = λ x. N where the free variables of N are among x_1,…,x_n, x, then: M^𝒜 (ă) λ(b ↦ N^𝒜(ă,b)) The proof of the following nontrivial result needs a detour which is carried out in full details in <cit.>. Let 𝒜 = (A, , →, S) be an arrow algebra and let M be a λ-term with free variables among x_1,…,x_n. Then, for any a_1,…,a_n ∈ S: M^𝒜(a_1,…, a_n)∈ S §.§ The logical order Although arrow algebras are defined in terms of , which we refer to as the evidential order, there is another – arguably more important – order which can be defined in terms of implications and separators. As we will see, this order will be the only relevant one in the construction of the arrow tripos. Let = (A,,→,S) be an arrow algebra. We define the logical order ⊢ on A by letting: a ⊢ b a → b ∈ S which is weaker than the evidential order since i∈ S gives a → a ∈ S for all a∈ A and therefore if a b then a→ a a → b ∈ S i.e. a ⊢ b. Note that ⊢ allows us to recover the separator[In hindsight, this characterizes the separator as what Pitts calls the set of designated truth values of the induced arrow tripos.] as: S = a ∈ A | ⊤⊢ a Indeed, if a∈ A is such that ⊤⊢ a, then a ∈ S follows by modus ponens from ⊤ , ⊤→ a ∈ S; conversely, if a∈ S, then the intuitionistic tautology p → q → p gives a →⊤→ a ∈ S and hence ⊤→ a ∈ S by modus ponens. We denote with ⊢^j the logical order in the arrow algebra _j induced by a nucleus j on . 
Explicitly, a ⊢^j b if and only if j ( a → j b ) ∈ S, which by the properties of nuclei and separators is equivalent to a → j b ∈ S and hence to a ⊢ jb. (A, ⊢) is a Heyting prealgebra, with → being the Heyting implication. The combinators i, b∈ S respectively express reflexivity and transitivity of ⊢, making (A,⊢) into a preorder, which is bounded by ⊤ and since a ⊤ for all a∈ A. For a,b∈ A, a meet of a and b can be defined as: a × b (λ z . z(∂ a) (∂ b))^ = _z∈ A z →∂ z (∂ a) (∂ b) while a join can be defined as: a + b _c ∈ A (a →∂ c) → (b→∂ c) →∂ c §.§ The arrow tripos Let =(A,,→,S) be an arrow algebra. For any set I, the set A^I of functions I → A can be given an arrow structure by choosing pointwise order and implication, and the uniform power separator[The name comes from the analog for implicative algebras in <cit.>.]: S^I ϕ : I → A | _i∈ Iϕ(i) ∈ S makes it into an arrow algebra which we denote as ^I. The logical order in 𝒜^I, which we denote as ⊢_I, is therefore given explicitly by: ϕ⊢_I ψ_i∈ Iϕ(i)→ψ(i) ∈ S and by <ref>, (A^I,⊢_I) is a Heyting prealgebra. As above, given some nucleus j on , we denote with ⊢^j_I the logical order in _j^I. Again by the properties of nuclei and separators, ϕ⊢^j_I ψ explicitly means ϕ⊢_I j ψ. In general, ⊢_I is stronger than the pointwise version of ⊢: the two coincide only if the separator is closed under arbitrary meets. However, it is easy to see that all the Heyting structure in (A^I,⊢_I) is determined pointwise by that of (A,⊢). We can finally define the arrow tripos induced by , of which the Heyting prealgebras (A^I,⊢_I) are the components at each level I. Let 𝒜 = (A, , →, S) be an arrow algebra. The functor: P_ : → I (A^I, ⊢_I) J (A^J, ⊢_J)["f", from=2-1, to=1-1] ["-∘ f", from=1-2, to=2-2] [maps to, from=1-1, to=1-2] [maps to, from=2-1, to=2-2] is a (canonically presented) tripos. In particular, left and right adjoints to f^* : P_(I)→ P_(J) are given by: ∃_f (α) (i) _a∈ A( _j ∈ f (i)α(j) →∂ a ) →∂∂ a ∀_f (α) (i) _j∈ f (i)∂α(j) while a generic element is trivially given by 𝕀_A : A → A ∈ P_(A). If is compatible with joins, a left adjoint to f^* : P_ (I)→ P_(J) can also be defined as: ∃_f(α)(i) _j∈ f (i)α(j) We denote with the arrow topos induced by , that is: [P_] § EXAMPLES OF ARROW ALGEBRAS §.§ Implicative algebras Implicative algebras can be characterized as arrow algebras where the equality: a →_b∈ B b = _b ∈ B a → b holds for all elements a and subsets B. §.§ Frames Every frame Ø(X) can be canonically seen as an arrow algebra[In particular, compatible with joins.] by using its order and its Heyting implication as the arrow structure, and {⊤} as the separator. Note then that the logical order coincides with the evidential order, since x → y ∈ S if and only if ⊤≤ x → y, which is equivalent to x ≤ y. Similarly, the logical order ⊢_I on Ø(X)^I reduces to the pointwise order, which makes so that the arrow tripos P_Ø(X) coincides with the localic tripos induced by Ø(X), and hence Ø(X) is equivalent to the topos Ø(X) of sheaves over Ø(X). §.§ Partial combinatory algebras A main example of arrow algebras arises from partial combinatory algebras, building blocks of realizability toposes. We review here their theory: as already in <cit.>, our treatment closely follows that of <cit.>, which allows us for the highest level of generality considered in the literature in what concerns appropriate notions of morphisms and their connections with morphisms of the associated toposes. 
We then warn the reader that the following definitions and conventions are not entirely standard, but they turn out to be particularly convenient for our purposes. A partial applicative poset is a poset (P, ≤) endowed with a partial binary map · : P × P → P called application such that if a≤ a' and b≤ b' and a'· b' is defined, then a · b is defined as well and a· b ≤ a' · b'. (P,≤,·) is total if · is a total operation, and it is discrete if ≤ is a discrete order. We denote a· b also as ab, and we assume that · associates to the left. We write ab to indicate that ab is defined; note that a statement like “abc” is to be interpreted as “ab and (ab)c”. A filter on a partial applicative poset (P, ≤, ·) is a subset P^#⊆ P such that: * it is upward closed, i.e. if a ∈ P^# and a ≤ b then b ∈ P^#; * it is closed under defined application, i.e. if a, b ∈ P^# and ab then ab ∈ P^#. A partial applicative structure is the datum (P, ≤, ·, P^#) of a partial applicative poset (P, ≤,·) together with a filter P^# on it. In particular, it is absolute if P^# = P. A partial combinatory algebra is a partial applicative structure (P, ≤, ·, P^#) such that there exist elements k, s∈ P^# satisfying: * kab and k a b ≤ a; * s a b; * if ac(bc), then sabc and s ab c ≤ ac (bc), for all a,b,c ∈ P. A partial combinatory algebra is total or discrete if the underlying partial applicative poset is, and it is absolute if it is absolute as a partial applicative structure. Let (P, ≤, ·) be a partial applicative poset. Given two possibly undefined expressions e,e' that, if defined, assume values in P, we write e ≼ e' for the following statement: “if e', then e and e ≤ e'”.[This relation is also called Kleene inequality.] Instead, we write e ≤ e' only if both expressions are always defined. The archetypal example of a PCA is Kleene's first model 𝒦_1, underlying Kleene's original number realizability. 𝒦_1 is the absolute and discrete PCA defined on by letting n· m be the result of the n-th partial recursive function φ_n on input m whenever defined, fixed an ordering φ_i | i ∈ of the partial recursive functions →. Underlying function realizability, a variant of Kleene's number realizability introduced in <cit.>, is Kleene's second model 𝒦_2, a discrete PCA defined on the set ^ with total recursive functions as the filter. The van Oosten model ℬ, introduced in <cit.>, generalizes 𝒦_2 by considering all partial functions ⇀ as its domain, and partial recursive functions as the filter; in that case, we obtain however a total PCA. It can also be ordered by letting α≤β if α extends β as partial functions ⇀, in which case a filter is given by all subfunctions of partial recursive functions. Elements of a PCA are usually called combinators; in particular, the k and s combinators correspond to constants from Schönfinkel's combinatory logic. Their most important consequence is combinatory completeness: every partial function obtained by repeatedly applying the application map is already present as a computation in the PCA itself. More formally, let ℙ = (P, ≤, ·, P^#) be a PCA. The set of terms over ℙ is defined recursively as follows: * we assume given a countable set of distinct variables, each of which is a term; * we assume given a constant symbol for each element in P, each of which is a term; * if t_0,t_1 are terms, then so is t_0 · t_1. Every term t = t(x_1,…,x_n) defines a partial function P^n ⇀ P which assigns the obvious, possibly undefined, interpretation t(a_1,…,a_n) to an input sequence (a_1,…,a_n) ∈ P^n. 
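The terms just introduced, together with the λ*-abstraction featuring in the combinatory completeness theorem stated next, admit a short executable sketch. The code below (ours; it works in the closed SK-term model with weak head reduction, rather than in any particular PCA from the text) implements terms, the reduction of k- and s-redexes, and the standard bracket abstraction, and tests the pairing combinators that appear in the examples following the theorem.

# Terms: ('var', x), ('const', c), or ('app', t0, t1); the constants
# include the combinators 'K' and 'S'.  Reduction may diverge on
# ill-behaved terms, but terminates on all examples below.
def app(f, a): return ('app', f, a)
K, S = ('const', 'K'), ('const', 'S')
I = app(app(S, K), K)                      # the identity combinator i

def lam(x, t):
    # Standard bracket abstraction: lam(x, t) contains no x, and
    # lam(x, t) applied to a reduces to t with a substituted for x.
    if t == ('var', x):
        return I
    if t[0] == 'app':
        return app(app(S, lam(x, t[1])), lam(x, t[2]))
    return app(K, t)                       # constant, or another variable

def rebuild(head, args):
    for a in args:
        head = app(head, a)
    return head

def normalize(t):
    while True:
        head, args = t, []
        while head[0] == 'app':            # unwind the application spine
            args.insert(0, head[2])
            head = head[1]
        if head == K and len(args) >= 2:   # k a b  ->  a
            t = rebuild(args[0], args[2:])
        elif head == S and len(args) >= 3: # s a b c  ->  a c (b c)
            t = rebuild(app(app(args[0], args[2]),
                            app(args[1], args[2])), args[3:])
        else:
            return t                       # weak head normal form

x, y, z = ('var', 'x'), ('var', 'y'), ('var', 'z')
p  = lam('x', lam('y', lam('z', app(app(z, x), y))))   # pairing
p0 = lam('x', app(x, K))                   # first projection
p1 = lam('x', app(x, app(K, I)))           # second projection, kbar = k i
a, b = ('const', 'a'), ('const', 'b')
assert normalize(app(p0, app(app(p, a), b))) == a
assert normalize(app(p1, app(app(p, a), b))) == b

The asserts confirm that p_0(p a b) and p_1(p a b) reduce to a and b for the pairing combinators defined after the theorem; the strategy only ever contracts the head redex, which suffices for these computations.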
Combinatory completeness can then be expressed as follows. For every nonempty sequence x̆,y of distinct variables and every term t = t(x̆,y), there exists an element λ^*x̆,y.t ∈ P such that, for every sequence ă of the same length as x̆ and every b in P: * (λ^* x̆,y.t) ă; * (λ^* x̆,y.t) ă b ≼ t(ă,b). Moreover, if all the constants occurring in t are from P^#, then λ^*x̆,y.t ∈ P^# as well. Combinatory completeness allows us to perform constructions from recursion theory inside ℙ: let us see some examples which will make use of in the following. Since they also correspond to constants of combinatory logic, they are called combinators as well. * The identity combinator is defined as iλ^* x. x ∈ P^# and it satisfies i a ≤ a for all a∈ P. * The constant combinator k can also be defined as λ^* x. x; its `dual' k̅∈ P^# is defined as k̅ki and it satisfies k̅ a b ≤ b for all a,b∈ P. * The pairing combinator p∈ P^# and the unpairing combinators p_0,p_1 ∈ P^# are defined as pλ^* x, y, z . z x y p_0 λ^* x . x k p_1 λ^* x. xk̅ and they satisfy p_0 (p a b ) ≤ a and p_1 ( p a b ) ≤ b for all a,b∈ P. The most important construction on a PCA, at least in the context of this paper, is the following. Let ℙ = (P, ≤, ·, P^#) be a PCA. The set TP of inhabited downward-closed subsets of P, ordered by inclusion, can be equipped with a partial applicative structure by defining, for α , β∈ TP: α·β x y | x ∈α, y ∈β in case x y for all x∈α and y∈β, and by defining the filter: (TP)^# α∈ TP | α∩ P^#∈ TP = α∈ TP | ∃β∈ TP^#, β⊆α = (TP^#) Two combinators k, s∈ P^# for ℙ then yield corresponding combinators {k}, {s}∈ (TP)^# for this partial applicative structure, making it into a PCA which we denote as Tℙ. In the same way, the set DP of all downward-closed subsets of P ordered by inclusion can be made into a PCA Dℙ with the same application operation, which in particular yields αβ and αβ = ∅ in case either α = ∅ or β = ∅; a filter (DP)^# is given by the same (TP)^#, and the combinators are described in the same way as well. For future reference, given a discrete and absolute PCA ℙ, we denote with ℙ the PCA D ℙ: note then that the filter is given by the set of inhabited subsets of P. Following <cit.>, for x ∈ P and β∈ DP we write x ·β instead of {x}·β. Explicitly, x β if x y for all y∈β, in which case x β = x y | y ∈β. Therefore, given γ∈ DP, x·β⊆γ is equivalent to x y ∈γ for all y ∈β. Given a PCA ℙ = (P, ≤, ·, P^#), the realizability tripos P_ℙ is defined as follows. For any set X, we let P_ℙ(X) XDP, ordered by letting ϕ⊢_X ψ if there exists an element r ∈ P^# such that r ·ϕ(x) ⊆ψ(x)[Recall that this notation presupposes r·ϕ(x).] for all x∈ X. The Heyting structure is then given by ⊤_X(x) P and _X(x) ∅ as top and bottom elements, and for ϕ,ψ∈ P_ℙ(X): (ϕψ) (x) p·ϕ(x) ·ψ(x) (ϕψ) (x) (p·k·ϕ(x)) ∪ (p·k̅·ψ(x)) (ϕ→ψ) (x) a ∈ P | a ·ϕ(x) ⊆ψ(x) For any function f : X → Y, the precomposition map f^* : P_ℙ(Y) → P_ℙ(X) is then a morphism of Heyting prealgebras. Adjoints for f^* satisfying the Beck-Chevalley condition are defined by: ∃_f(ϕ) (y) ⋃_x ∈ f (y)ϕ(x) ∀_f(ϕ)(y) a ∈ A | ∀ b∈ P, ∀ x ∈ f (y) , ab ab ∈ϕ(x) while a generic element is trivially given by 𝕀_DP∈ P_ℙ(DP). We denote with ℙ the realizability topos [P_ℙ]. 𝖱𝖳(𝒦_1) is the effective topos . With these definitions, we can finally describe arrow algebras arising from PCAs. Let ℙ=(P, ≤, ·, P^#) be a PCA and let DP the set of downward-closed subsets of P. 
Defining, for α,β∈ DP: α→β a ∈ P | a·αa·α⊆β and letting S_DP be the family of downward-closed subsets containing an element from the filter P^#, <cit.> shows how (DP, , →, S_DP) is an arrow algebra which is compatible with joins; we denote it with Dℙ as for the corresponding PCA.[Similarly, we let ℙ be the arrow algebra D ℙ for a discrete and absolute PCA ℙ.] Note then that: S_DP = α∈ DP | ∃ a ∈α∩ P^# = α∈ DP | ∃ a ∈ P^# : {a}⊆α = α∈ DP | ∃β∈ T(P^#) : β⊆α meaning that the separator S_DP coincides with the filter (DP)^# of <ref>. This makes so that the arrow tripos P_Dℙ coincides with the realizability tripos induced by ℙ, and hence D ℙ coincides with the realizability topos ℙ. In <cit.>, another construction is identified which yields an arrow algebra starting from a PCA. Although it will not play any role in this paper, we here introduce it to use it as an example in <ref>. Let ℙ = (P,≤,·, P^#) be a PCA. Then, ℙ×ℙ is a PCA with pointwise order and application, and with the filter P^#× P^#, which means that D (ℙ×ℙ) is an arrow algebra. Explicitly, its elements are downward-closed binary relations on P ordered by inclusion, the implication is defined as: R → S (a,a') ∈ P× P | (a,a') · R (a,a') · R ⊆ S while a separator is given by: S_D (P × P) R ∈ D (P× P) | ∃ (a,a') ∈ R ∩ (P^#× P^#) If we restrict to downward-closed binary relations on P which are symmetric and transitive[That is, partial equivalence relations.], we obtain an arrow algebra which is compatible with joins; we denote it as 𝖯𝖤𝖱( ℙ). § IMPLICATIVE MORPHISMS In this section, we introduce the first notion of a morphism between arrow algebras we will see in this paper, namely that of implicative morphisms, and we set up some first results which will be useful in the following. By definition, an arrow algebra is a poset endowed with an implication and a specified subset: therefore, it would be natural to define morphisms of arrow algebras as monotone functions preserving implications (in a suitable sense) and the specified subset. This intuition, obviously also valid for implicative algebras, is what leads to the definition of applicative morphisms in <cit.>, which partially inspires our definition. However, for reasons which will become clear in the following, we will not define our morphisms to be monotone with respect to the evidential order[As we have already mentioned, the evidential order is not the most important feature of an arrow algebra anyway.], but we will see how this will not actually be an issue. The downside is that, in general, we will have to impose a third condition – automatically satisfied in the case of monotonicity – involving both implications and separators. §.§ Implicative morphisms Let 𝒜 = (A, , →, S_A) and ℬ = (B, ,→, S_B) be two arrow algebras. An implicative morphism f : 𝒜→ℬ is a function f :A → B satisfying: * f(a) ∈ S_B for all a ∈ S_A; * there exists an element r ∈ S_B such that r f(a→ a')→ f(a)→ f(a') for all a,a'∈ A; * for any subset X ⊆ A × A, _(a,a') ∈ X a → a' ∈ S_A _(a,a') ∈ X f(a) → f(a') ∈ S_B , in which case we say that f is realized by r ∈ S_B. An order `up to a realizer' can be defined on implicative morphisms as follows. Given two implicative morphisms f,f' : 𝒜→ℬ, we write f ⊢ f' if there exists an element u ∈ S_B such that u f(a) → f'(a) for all a∈ A, in which case we say that f⊢ f' is realized by u. In other words, this means that: _a∈ A f(a) → f'(a) ∈ S_B i.e. 
f ⊢_A f' seeing f and f' as elements of the arrow algebra ^A of <ref>, so in particular it is also equivalent to fϕ⊢_I f'ϕ for all sets I and all functions ϕ : I → A. If f happens to be monotone with respect to the evidential order, then (iii) is a consequence of (i) and (ii). Indeed, given any X⊆ A × A such that _(a,a') ∈ X a→ a' ∈ S_A: –by (ii) we have: _(a,a') ∈ X f(a → a')→ f(a) → f(a') ∈ S_B; –as f( P) f(P) for all subsets P⊆ A by monotonicity, and by (i) and upward-closure of S_B: f(_(a,a')∈ X a→ a') _(a,a') ∈ X f(a→ a') ∈ S_B, from which _(a,a')∈ X f(a) → f(a') ∈ S_B by <ref>. Therefore, in proving that a monotone function is an implicative morphism, we will systematically omit to check condition (iii). Applicative morphisms of <cit.> only satisfy condition (ii) for a,a'∈ A such that a⊢ a', while they are monotone by definition. The two notions are hence incomparable in general. Arrow algebras, implicative morphisms and their order form a preorder-enriched category . First, let f : → and g : →𝒞 be implicative morphisms; let us show that gf : A → C satisfies the definition of an implicative morphism →𝒞. Condition (i) and (iii) are clearly compositional; to show condition (ii), instead, note that by (ii) for f we know that: _a,a' f(a→ a') → f(a) → f(a') ∈ S_B from which, by (iii) for g: _a,a' gf(a→ a') → g(f(a) → f(a')) ∈ S_C Moreover, by (ii) for g we know that: _a,a' g(f(a) → f(a')) → gf(a) → gf(a') ∈ S_C from which, by intuitionistic reasoning: _a,a' gf(a→ a') → gf(a) → gf(a') ∈ S_C Then, for any arrow algebra , the identity function 𝕀_A is an implicative morphism →, trivially realized by i∈ S_A since we know that: i_a,a' ∈ A (a→ a') → a → a' This makes into a category. The fact that ⊢ is a preorder on each homset (,) follows immediately as it is the subpreorder of (B^A, ⊢_A) on implicative morphisms. Therefore, to conclude, we simply need to show that composition of implicative morphisms is order-preserving: –for f,f' : 𝒜→ℬ and g : ℬ→𝒞 such that f ⊢ f'; explicitly, this means that: _a∈ A f(a) → f'(a) ∈ S_B from which, by (iii) in <ref>: _a∈ A gf(a) → gf'(a) ∈ S_C meaning that g f ⊢ g f'; –for f : 𝒜→ℬ and g,g' : ℬ→𝒞, any realizer of g ⊢ g' also realizes gf ⊢ g'f. Let j : A → A be a nucleus on an arrow algebra 𝒜. Then, (i), (ii) and (vi) in <ref> immediately imply that j is an implicative morphism →. In a constructive metatheory, truth values are arranged in the frame Ω given by the powerset of the singleton {*}[Ω is the initial object in the category of frames, and coincides with 2 {≤⊤} in a classical metatheory.], which we can see as an arrow algebra in the canonical way. For any arrow algebra = (A, , →, S), we can then consider the characteristic function of the separator, which is defined constructively as: χ : A →Ω χ(a) * | a ∈ S Note that, by upward closure of the separator, χ is monotone. Indeed, if a a', to show that χ(a) ⊆χ(a') suppose that * ∈χ(a); then, a ∈ S, hence a' ∈ S as well, i.e. * ∈χ(a'). We then have that χ is an implicative morphism →Ω. * If a ∈ S, then by definition *∈χ(a), which means that χ(a) = {*}. * Let a,a'∈ A. Then, {*}⊆χ(a→ a') →χ(a) →χ(a') is equivalent to χ(a→ a') ⊆χ(a) →χ(a'). To show this, suppose * ∈χ(a→ a'), meaning that a→ a' ∈ S. So, χ(a→ a') = {*}, which means that we can show equivalently that χ(a) ⊆χ(a'). Suppose then *∈χ(a) as well, meaning that a ∈ S; by modus ponens, it follows that a' ∈ S, i.e. * ∈χ(a'). The definition of an implicative morphism can be restated purely in terms of the logical order. 
Let = (A,,→,S_A) and = (B,, →, S_B) be arrow algebras. A function f : A → B is an implicative morphism → if and only if it satisfies: * ⊤⊢ f(⊤); * f (π_1 →π_2) ⊢_A× A f π_1 → f π_2, where π_1,π_2 : A × A → A are the two projections; * f ϕ⊢_I f ψ for any set I and all ϕ, ψ : I → A such that ϕ⊢_I ψ. Condition (2) is a rewriting of condition (ii) recalling that the Heyting implication in ^A× A is computed pointwise, and condition (3) is a rewriting of condition (iii). Suppose now f satisfies (i), (ii) and (iii). Then, f(⊤) ∈ S_B since ⊤∈ S_A, which means that ⊤⊢ f(⊤). Conversely, suppose that f satisfies (1), (2) and (3). Note that condition (3) implies that f is monotone with respect to the logical order: therefore, for a∈ S_A we have that ⊤⊢ a, and hence f(⊤) ⊢ f(a). Then, ⊤⊢ f(⊤) implies by modus ponens that f(⊤) ∈ S_B, from which f(a) ∈ S_B as well. If f : → is an implicative morphism and f' : A → B is such that f _A f' in ^A, then f' is an implicative morphism → as well. Many implicative morphisms we will see in the following are monotone with respect to the evidential ordering: as it turns out, this can always be assumed up to isomorphism. Therefore, in principle, we could substitute for an equivalent category where all morphisms are monotone; however, we won't go in this direction, for reasons which will become clear in the next sections. Recall that, for any arrow algebra , we can consider the nucleus ∂ x ⊤→ x, which satisfies 𝕀_A _A ∂. Every implicative morphism is isomorphic to a monotone one. Let f : → be an implicative morphism and consider the monotone function: f ' : A → B f'(a) _a a'∂ f(a') Let us show that f_A f', which in particular implies that f' is an implicative morphism → by the previous corollary. On one hand, ∂⊢_B 𝕀_B gives: _a (∂ f(a) ) → f(a) ∈ S_B from which, since _a a'∂ f(a') ∂ f(a) and by upward-closure of S_B: _a (_a a'∂ f(a') ) → f(a) ∈ S_B i.e. f' ⊢_A f. On the other hand, f ⊢_A f' explicitly reads as: _a f(a) →( _a a'∂ f(a') ) ∈ S_B Note that, since a∈ S_B: _a( _a a' f(a) →⊤→ f(a') ) → f(a) →( _a a'⊤→ f(a') ) ∈ S_B Therefore, f ⊢_A f' is ensured by <ref> if we show: _(a,a') ∈ I f(a) →∂ f(a') ∈ S_B where I (a,a') ∈ A × A | a a'. By intuitionistic reasoning, this is ensured by: _(a ,a')∈ I f(a) → f(a') ∈ S_B _(a ,a')∈ I f(a') →∂ f(a') ∈ S_B where (1) follows since f is an implicative morphism and i∈ S_A witnesses the fact that _(a,a') ∈ I a → a' ∈ S_A, and (2) follows since 𝕀_B ⊢_B ∂. Let Mf f' be the monotone implicative morphism defined above. Then, f ↦ Mf is immediately seen to be a pseudofunctor M : → since Mf f. § EXAMPLES OF IMPLICATIVE MORPHISMS I Besides the easy examples seen above, let us see the two main classes of implicative morphisms, corresponding to the examples of arrow algebras seen in <ref>: those arising from frame homomorphisms, and those arising from morphisms of PCAs. §.§ Frames As we know, every frame can be canonically seen as an arrow algebra by choosing {⊤} as the separator. Then, any morphism of frames f : Ø(Y) →Ø(X), a (necessarily monotone) function preserving finite meets and arbitrary joins, is also an implicative morphism. Indeed: * f(⊤) = ⊤ as f preserves finite meets; * for y,y'∈Ø(Y) we know that y (y→ y') ≤ y', so by monotonicity and meet-preservation f(y) f(y→ y') ≤ f(y'), meaning that f(y→ y') ≤ f(y) → f(y') and therefore ⊤≤ f(y→ y') → f(y) → f(y'). As emerges from the above reasoning, more generally we have that any monotone function f : Ø(Y) →Ø(X) which preserves finite meets is an implicative morphism. 
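This observation can be tested directly on powerset frames. The sketch below (ours) takes the preimage map of an arbitrary function g : X → Y, which is a frame homomorphism O(Y) → O(X), and checks conditions (i) and (ii) of the definition with the top element as realizer; condition (iii) holds automatically by monotonicity, as observed in the Remark above.

from itertools import combinations

X, Y = {0, 1, 2}, {'a', 'b'}
g = {0: 'a', 1: 'a', 2: 'b'}          # an arbitrary function g : X -> Y

def subsets(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r)]

def imp(top, u, v):                   # Heyting implication in a powerset frame
    return frozenset(e for e in top if e not in u or e in v)

def f(v):                             # the preimage map g^{-1}(-) : O(Y) -> O(X)
    return frozenset(e for e in X if g[e] in v)

assert f(frozenset(Y)) == frozenset(X)            # (i): f preserves the top
for u in subsets(Y):
    for v in subsets(Y):
        # (ii) with the top element as realizer:
        # f(u -> v) lies below f(u) -> f(v)
        assert f(imp(Y, u, v)) <= imp(X, f(u), f(v))

In fact, for preimage maps the inequality in (ii) is an equality, since g^{-1} preserves all the Boolean operations of a powerset.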
Recall moreover that is preorder-enriched by the pointwise order: therefore, given two frame homomorphisms f,f' : Ø(Y) →Ø(X), f ≤ f' in (Ø(Y), Ø(X)) if and only if f ⊢ f' in (Ø(Y), Ø(X)), since ⊢_I coincides with the pointwise order for any set I. In other words, the inclusion determines a 2-functor → with the additional property that each map (Ø(Y),Ø(X)) ↪(Ø(Y),Ø(X)) (preserves and) reflects the order. While obviously faithful, this inclusion is far from being full. Indeed, consider the initial frame Ω and let Ø(X) be a frame such that ≠⊤: then, the unique frame homomorphism Ω→Ø(X) is given by p ↦⊤ | * ∈ p, whereas the constant function of value ⊤ is an implicative morphism Ω→Ø(X). As we will see in <ref>, frame homomorphisms coincide with computationally dense implicative morphisms, while implicative morphisms between frames coincide simply with monotone functions preserving finite meets – thus reversing the previous remark. §.§ Partial combinatory algebras Let Å = (A, ≤, ·, A^#) and = (B, ≤, ·, B^#) be two PCAs. Let us start by reviewing the theory of morphisms between PCAs, again following <cit.>. A morphism of PCAs 𝔸→𝔹 is a function f : A→ B satisfying: * f(a) ∈ B^# for all a ∈ A^#; * there exists an element t ∈ B^# such that if a a' then t f(a)f(a') and tf(a)f(a') ≤ f(aa'); * there exists an element u ∈ B^# such that if a≤ a' then uf(a) and uf(a) ≤ f(a'), in which case we say that f is realized by t, u∈ B^# or that it preserves application up to t and order up to u. An order `up to a realizer' can be defined on morphisms of PCAs as follows. Given two morphisms f,f' : 𝔸→𝔹, we write f ≤ f' if there exists some s ∈ B^# such that sf(a) and s f(a) ≤ f'(a) for all a∈ A, in which case we say that f≤ f' is realized by s. PCAs, morphisms of PCAs and their order form a preorder-enriched category . Let's also introduce here the notion of computational density, which we will make use of in the following section. A morphism of PCAs f : 𝔸→𝔹 is computationally dense if there exists an element m∈ B^# with the property that for all s ∈ B^# there is some r ∈ A^# such that mf(ra)≼ sf(a) for all a ∈ A. Let us now note that the constructions of <ref> determine pseudomonads on . Indeed, given a morphism of PCAs f : 𝔸→𝔹, we can define a morphism Tf : T𝔸→ T𝔹 by letting: Tf(α) (f(α)) = f(x) | x ∈α and this makes the association Å↦ TÅ pseudofunctorial. Then, a pseudomonad structure is defined considering: * as unit δ, the pseudonatural transformation : 𝕀_𝗈𝖯𝖢𝖠 T of components δ_𝔸 : 𝔸→ T𝔸 given by the (computationally dense) morphisms of PCAs sending a∈ A to the principal downset {a}∈ T A; * as multiplication ∪, the pseudonatural transformation TT T of components ∪_Å : T T Å→ T Å given by the (computationally dense) morphisms of PCAs sending α∈ TTA to its union ⋃α.[In this case, naturality actually holds on the nose.] Similarly, we have a (computationally dense) pseudomonad (D, δ', ∪') on 𝗈𝖯𝖢𝖠; the inclusions T A D A determine a natural transformation T D. Through these two pseudomonads we can define two new notions of morphism of PCAs. Let _T be the preorder-enriched bicategory defined as the Kleisli bicategory of the pseudomonad (T, δ, ∪). Explicitly, _T is the category having PCAs as objects, and morphisms of PCAs Å→ T as morphisms Å→, which we call applicative morphisms. Similarly, let 𝗈𝖯𝖢𝖠_D be the preorder-enriched bicategory defined as the Kleisli bicategory of the pseudomonad (D, δ', ∪'). 
Explicitly, _D is the category having PCAs as objects, and morphisms of PCAs Å→ D as morphisms Å→, which we call partial applicative morphisms. In either case, a morphism in _T or _D is computationally dense if it is so as a morphism of PCAs. Explicitly, a (partial) applicative morphism f : Å→ is computationally dense if there exists an element m∈ B^# with the property that for all s∈ B^# there is some r ∈ A^# satisfying mf(ra) and mf(ra)⊆ sf(a) for any a ∈ A such that sf(a). As T and D are pseudomonads, _T and _D are not strictly (preorder-enriched) categories but only bicategories. The only axiom of an ordinary category that does not hold, however, is f ∘𝕀 = f, in either _T and _D. Since it can be shown that f satisfies f∘𝕀 = f if and only if it preserves the order on the nose, f can therefore be replaced up to isomorphism with f∘𝕀 which preserves the order on the nose: under this identification, _T and _D can then be treated as preorder-enriched categories. An applicative morphism f : Å→ can be seen as a partial applicative morphism Å→ through T ↪ D. Note then that the inclusion T ↪ D is a pseudomono, that is, the composition functor (Å, T) →(Å, D) is an equivalence of preorder categories for every PCA Å; therefore, _T is a preorder-enriched sub(-bi)category of _D. In other words, this means that we can reduce to consider only partial applicative morphisms in the following, of which `plain' applicative morphisms are a particular case. More precisely, we can say that a partial applicative morphism f : Å→ is total if f = A, where f a ∈ A | ∃ b ∈ f(a) , which is equivalent to say that f factors through T↪ D; hence, applicative morphisms can be identified with total partial applicative morphisms. The map h (α) e ∈ | φ_e = α is a partial applicative morphism 𝒦_2 →𝒦_1. Let us now move to the level of arrow algebras. We warn the reader here that we will see DÅ and D simultaneously as PCAs as in <ref> and as arrow algebras as described in <ref>. Both structures possess a notion of application, and in general the two do not coincide: for example, given α,β∈ DA, the inequality (α→β) ·α⊆β only holds with respect to the application defining the partial applicative structure on DÅ, while it does not hold for the arrow-algebraic application of <ref>.[For this other application, we only have the inequality (α→∂β) ·α⊆∂β, where ∂β A →β.] In this paper, we will not make any use of the latter form of application when dealing with arrow algebras of the form DÅ; hence, the application considered will always be the one defining the partial applicative structure. First, let us show how every morphism of PCAs D Å→ D is an implicative morphism with respect to the canonical arrow structures on DÅ and D. Let f : D Å→ D be a morphism of PCAs. Then, f is also an implicative morphism D Å→ D. Let us verify the three conditions in <ref>. * Condition (i) is ensured by (i) in <ref> since S_DA = (DA)^# and S_DB = (DB)^#. * To show condition (ii), we need to find some ρ∈ (DB)^# such that ρ⊆f(α→β)→f(α) →f(β) for all α,β∈ DA. By definition, recall that: –there exists τ∈ (DB)^# such that if αα', then τ f(α) f(α') and τ f(α) f(α') ⊆ f(αα'); –there exists υ∈ (DB)^# such that if α⊆α' then υ f(α) and υ f(α) ⊆ f(α'). By combinatory completeness, consider then: ρ (λ^* v, w . 
υ( τ v w ) ) ∈ (DB)^# Since (α→β)·α, we know that: τ f(α→β) f(α) τ f(α→β) f(α) ⊆ f( (α→β) ·α) Since moreover (α→β)·α⊆β, we also know that: υ f( (α→β) ·α) υ f( (α→β) ·α) ⊆ f(β) So, by downward-closure of the domain of the application: υ (τ f(α→β) f(α)) υ (τ f(α→β) f(α) ) ⊆ f(β) Therefore: ρ f(α→β) f(α) ρ f(α→β) f(α)⊆ f(β) or, in other words: ρ⊆ f(α→β) → f(α) → f(β) * Let X ⊆ DA × DA be such that there exists some σ∈ (DA)^# satisfying σ⊆α→β for all (α,β) ∈ X. To show condition (iii), we need to find some ρ∈ (DB)^# such that ρ⊆ f(α) → f(β) for all (α,β) ∈ X. By combinatory completeness, since f(σ) ∈ (DB)^# by condition (i), consider then: ρ (λ^* w . υ (τ f(σ) w )) ∈ (DB)^# Since σ·α and σ·α⊆β, exactly as above we have that: υ (τ f(σ) f(α)) υ (τ f(σ) f(α) ) ⊆ f(β) Therefore: ρ f(α) ρ f(α) ⊆ f(β) or, in other words: ρ⊆ f(α) → f(β) The two orders also coincide, by definition of the implication in downsets PCAs: given two morphisms of PCAs f,f' : D Å→ D, then f ≤ f' in DÅD if and only if f ⊢ f' in DÅD. Let now f : Å→ be a partial applicative morphism, i.e. a morphism of PCAs Å→ D. Then, f corresponds to an essentially unique D-algebra morphism f̃ : DÅ→ D which, up to isomorphism, we can describe as: f̃(α) ⋃_a∈α f(a) The association f ↦f̃ is 2-functorial[As noted in <ref>, _D is only a bicategory, but compositions are defined on the nose so we can still speak of 2-functors rather than pseudofunctors.] on _D, and it realizes an equivalence of preorder categories between partial applicative morphisms Å→ and D-algebra morphisms DÅ→ D. Together with the previous lemma and remark, this immediately implies the following. The assignment f ↦f̃ determines a 2-functor: D̃ : _D Moreover, for any PCAs Å and , the map: _DÅ DÅD (preserves and) reflects the order. The maps _DÅ→DÅ D defined by D̃ are obviously not essentially surjective, meaning that D̃ is not 2-fully faithful. Indeed, any morphism of PCAs D Å→ D is an implicative morphism D Å→ D, but obviously only those which are union-preserving are D-algebra morphisms and therefore arise as D̃f for some partial applicative morphism f : Å→. We will see more about the interplay of these notions in <ref>. How does the PER(-) construction behave with respect to (partial applicative) morphisms of PCAs and implicative morphisms? § TRANSFORMATIONS OF ARROW TRIPOSES We have finally arrived at the heart of this paper. In this section, we further the study of implicative morphisms and their relations with transformations of arrow triposes, lifting the association ↦ P_ to a 2-functor defined on a suitable category of arrow algebras. The main goals, in this perspective, are the following. * First, we will characterize implicative morphisms → as those functions A → B which induce by postcomposition a left exact transformation of arrow triposes P_→ P_. * Then, we will determine a suitable notion of computational density which characterizes those implicative morphisms → such that the induced transformation P_→ P_ has a right adjoint, hence corresponding to geometric morphism of triposes P_→ P_. To motivate this, let us reconsider the classes of examples seen in the previous section. [Localic triposes] The localic tripos P_Ø(X) induced by a frame Ø(X) coincides with the arrow tripos obtained seeing Ø(X) as an arrow algebra in the canonical way. 
By <cit.> we know that any frame homomorphism f^*:Ø(X) →Ø(Y) induces a geometric morphism of triposes P_Ø(Y)→ P_Ø(X) whose inverse image is given by postcomposition with f^* at each component; as we have shown in the previous section, f^* is an implicative morphism Ø(X) →Ø(Y). Moreover, the direct image is given by postcomposition with f_* : Ø(Y) →Ø(X) at each component, where f_* is the right adjoint of f^* as maps of posets – which always exists for frame homomorphisms. [Realizability triposes] The realizability tripos P_Å induced by a PCA Å coincides with the arrow tripos P_DÅ obtained seeing DÅ as an arrow algebra in the canonical way. By <cit.> we know that any partial applicative morphism f: Å→ induces a left exact transformation of triposes P_Å→ P_ given by postcomposition with f̃ : DA → DB at each component; as we've shown in the previous chapter, f̃ is an implicative morphism DÅ→ D. Moreover, the induced left exact transformation admits a right adjoint making it the inverse image of a geometric morphism P_→ P_Å if and only if f is computationally dense, in which case the direct image is given by postcomposition with h : D → D Å, where h is the right adjoint of f̃ in – which <cit.> showed to be equivalent to the computational density of f. §.§ Left exact transformations of arrow triposes Let us start with a lemma we will make use of in the following. Recall that, given any two arrow algebras = (A, , →, S_A) and = (B, , →, S_B), for every set I we can consider the arrow algebras ^I = (A^I, , →, S_A^I) and ^I = (B^I, , →, S_B^I) as in <ref>. Let f : → be an implicative morphism. For any set I, f^I f ∘ - is an implicative morphism ^I→^I. Let us verify the three conditions in <ref>. * To show condition (i), recall that we can equivalently prove that f^I(⊤_I) ∈ S_B^I, where ⊤_I:I → A is the constant function of value ⊤∈ A. Note then that by condition (i) for f we have: _i f(⊤_I(i)) = _i f(⊤) = f(⊤) ∈ S_B meaning that f^I(⊤) ∈ S^I_B. * Let r ∈ S_B be a realizer for f and let ρ : I → B be the constant function at r; as _i ρ(i) = r ∈ S_B, we know that ρ∈ S^I_B. Then, for any ϕ,ϕ' ∈ A^I: r f(ϕ(i) →ϕ'(i) ) → fϕ(i) → fϕ'(i) ∀ i∈ I i.e., since the order and implications are defined pointwise in ^I: ρ f^I(ϕ→ϕ') → f^Iϕ→ f^Iϕ' meaning that ρ realizes f^I. * Let X ⊆ A^I × A^I be such that _(ϕ,ψ) ∈ Xϕ→ψ∈ S_A^I. For the sake of notation, assume X = (ϕ_j, ψ_j) | j ∈ J; then, since the order (hence meets) and implications are defined pointwise in ^I, we have: _i _j ϕ_j(i) →ψ_j(i) ∈ S_A from which, by (iii) for f: _i _j fϕ_j(i) → fψ_j(i) ∈ S_A meaning that _j f^Iϕ_j → f^Iψ_j ∈ S_A^I. Fix now an implicative morphism f : → and define the transformation: Φ_f^+ : P_→ P_ (Φ_f^+)_I (ϕ) f^Iϕ = f ∘ϕ Indeed, monotonicity of each component (Φ_f^+)_I : P_(I) → P_(I) precisely corresponds to condition (iii) in <ref>, while naturality is obvious. Let us now show that, for every set I, (Φ_f^+)_I: P_(I) → P_(I) preserves finite meets up to isomorphism. As f^I(⊤_I) ∈ S_B^I we know that f^I(⊤_I) _I ⊤_I, so we only have to show that for any ϕ,ψ∈ A^I: f^I(ϕ×ψ) _I f^Iϕ× f^Iψ where f^Iϕ× f^Iψ is the meet of f^Iϕ and f^Iψ in P_(I), which we recall from <ref> that it can be assumed to be defined pointwise. Of course f^I(ϕ×ψ) ⊢_I f^Iϕ× f^I ψ follows simply by monotonicity of f^I with respect to the logical order; on the other hand, f^Iϕ× f^Iψ⊢_I f^I(ϕ×ψ) is ensured by the following lemma applied to f^I : ^I →^I. Let f : → be an implicative morphism. 
Then: _a,b (f(a) × f(b)) → f(a× b) ∈ S_B First, recall that: _a,b a → b → (a× b) ∈ S_A from which, by (iii) in <ref>: _a,b f(a) → f(b → (a× b)) ∈ S_B whereas, by (ii): _a,b f(b → (a× b))→ f(b) → f(a× b) ∈ S_B so by intuitionistic reasoning we conclude: _a,b f(a) → f(b) → f(a× b) ∈ S_B which means: _a,b (f(a) × f(b)) → f(a× b) ∈ S_B Summing up, we have shown the following. Let f : → be an implicative morphism. Then: Φ_f^+ : P_→ P_ (Φ_f^+)_I (ϕ) f ∘ϕ is a left exact transformation of triposes. As promised above, we can also prove the converse: up to isomorphism, every left exact transformation of arrow triposes is induced in the sense of the previous proposition by an implicative morphism which is unique up to isomorphism. The association f ↦Φ_f^+ determines a 2-fully faithful 2-functor: [from=1-1, to=1-2] Explicitly, this means that for any arrow algebras and there is an equivalence of preorder categories: ≡P_P_ By the previous discussion, we have a functor →; 2-functoriality amounts to showing that given any two implicative morphisms f,f' : → such that f ⊢ f', then Φ^+_f ≤Φ^+_f'. By definition, f ⊢ f' means that fϕ⊢_I f'ϕ holds for any set I and any ϕ : I → A, i.e. (Φ^+_f)_I(ϕ) ⊢_I (Φ^+_f')_I(ϕ) holds in P_(I), which means means that Φ^+_f ≤Φ^+_f'. Note then that the converse also holds: if Φ_f^+ ≤Φ_f'^+, then in particular we have that (Φ^+_f)_A(𝕀_A) ⊢_A (Φ^+_f')_A(𝕀_A), which means that f⊢ f'. Let now Φ^+: P_→ P_ be any left exact transformation of arrow triposes. Recall that, up to isomorphism, Φ^+ is given by postcomposition with the function: f (Φ^+)_A(𝕀_A) : A → B Let us now verify that f satisfies the three conditions[If we assumed implicative morphisms to be monotone, we would not be able to prove that f is one.] in <ref>. * To show condition (i), recall that we can equivalently prove that f(⊤) ∈ S_B. By left exactness we know that (Φ^+)_I(⊤_I) _I ⊤_I, which for I 1 means that f(⊤) ⊣⊢⊤, i.e. f(⊤)∈ S_B. * Let I A × A; recall that condition (ii) can be rewritten as: f (π_1→π_2) ⊢_I f π_1 → f π_2 where π_1,π_2 : I → A are the two projections. In terms of Φ^+, this means that we have to show: (Φ^+)_I (π_1→π_2) ⊢_I (Φ^+)_I (π_1) → (Φ^+)_I (π_2) Through the Heyting adjunction in P_(I), the previous is equivalent to: (Φ^+)_I (π_1→π_2) × (Φ^+)_I (π_1) ⊢_I (Φ^+)_I (π_2) i.e., by left exactness: (Φ^+)_I (π_1→π_2 ×π_1) ⊢_I (Φ^+)_I (π_2) which is ensured by monotonicity since π_1 →π_2 ×π_1 ⊢_I π_2. * Condition (iii) precisely corresponds to the monotonicity of each component (Φ^+)_I. Therefore, the association Φ^+ ↦ (Φ^+)_A(𝕀_A) realizes the desired inverse equivalence since obviously (Φ^+_f)_A(𝕀_A) = f for all implicative morphisms f : →. Recall from <cit.> that0.05em[At least, assuming the Axiom of Choice.] every -based tripos is isomorphic to an implicative one, and hence to an arrow one. Therefore, the 2-functor → is actually a 2-equivalence of 2-categories. §.§ Geometric morphisms of arrow triposes Let us now move to geometric morphisms: as we will see in a moment, the existence of a right adjoint at the level of transformations of triposes can exactly be characterized by the existence of a right adjoint in . An implicative morphism f : → is computationally dense[The name is obviously taken from the theory of PCAs, and it is also used in <cit.> in the context of applicative morphisms.] if it admits a right adjoint in , that is, if there exists an implicative morphism h : → such that f h ⊢𝕀_B and 𝕀_A ⊢ h f. 
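To make computational density concrete in the simplest setting, consider the degenerate case of frames, where the logical and evidential orders coincide. The following minimal Python sketch — ours, not part of the formal development, with all names illustrative — builds the frame homomorphism f = p^{-1}(-) induced by a map p between finite discrete spaces, computes the candidate right adjoint h(b) = ⋁{ a | f(a) ≤ b }, and checks the two inequalities f h ⊢ 𝕀 and 𝕀 ⊢ h f defining the adjunction.

```python
from itertools import chain, combinations

def powerset(base):
    # Opens of a finite discrete space: all subsets, as frozensets.
    return [frozenset(s) for s in chain.from_iterable(
        combinations(base, r) for r in range(len(base) + 1))]

X, Y = [0, 1], [0, 1, 2]
OX, OY = powerset(X), powerset(Y)

p = {0: 0, 1: 0, 2: 1}                    # a map p : Y -> X
def f(u):                                 # frame hom f = p^{-1}(-) : O(X) -> O(Y)
    return frozenset(y for y in Y if p[y] in u)

def h(v):                                 # candidate right adjoint h(v) = \/ { u | f(u) <= v }
    join = frozenset()
    for u in OX:
        if f(u) <= v:
            join = join | u               # joins in a powerset frame are unions
    return join

assert all(f(h(v)) <= v for v in OY)      # f h below the identity
assert all(u <= h(f(u)) for u in OX)      # identity below h f
print("f has a right adjoint h: f is computationally dense")
```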
For any arrow algebra , the identity 𝕀_A : → is computationally dense, as it is trivially right adjoint to itself. As the existence of a right adjoint in the preorder-enriched sense is clearly compositional, we can define the wide subcategory of on computationally dense morphisms. Fix now a computationally dense implicative morphism f : → with right adjoint h : → and consider the left exact transformation induced by h as in <ref>: Φ_+ : P_→ P_ (Φ_+)_I (ϕ) h∘ϕ For every set I, (Φ_+)_I : P_(I) → P_(I) is right adjoint to the map (Φ_f^+)_I : ψ↦ f∘ψ. By the universal property of the counit of the adjunction, it suffices to show that for all ϕ : I → B: * f h ϕ⊢_I ϕ in P_(I); * for any ψ : I → A such that f ψ⊢_I ϕ in P_(I), then ψ⊢_I h ϕ in P_(I). (1) clearly follows being h right adjoint to f. To show (2), instead, suppose f ψ⊢_I ϕ; then, h f ψ⊢_I h ϕ as h is an implicative morphism, and hence ψ⊢_I h ψ since 𝕀_A ⊢_A h f. The previous results then immediately yield the following. Let f : → be a computationally dense implicative morphism with right adjoint h : →. Then: P_ P_[""name=0, anchor=center, inner sep=0, "Φ^+"', curve=height=12pt, from=1-2, to=1-1] [""name=1, anchor=center, inner sep=0, "Φ_+"', curve=height=12pt, from=1-1, to=1-2] ["⊣"anchor=center, rotate=-90, draw=none, from=0, to=1] (Φ^+)_I (ψ) f ∘ψ (Φ_+)_I (ϕ) h ∘ψ is a geometric morphism of triposes. As we did for implicative morphisms and left exact transformations, in this case too we can prove the converse: up to isomorphism, every geometric morphism of arrow triposes is induced by an essentially unique computationally dense implicative morphism. The 2-functor of <ref> restricts to a 2-fully faithful 2-functor: Explicitly, this means that for any arrow algebras and there is an equivalence of preorder categories: ≡P_P_ By the previous discussion, we have a 2-functor → such that, given any two computationally dense implicative morphisms f,f' : →, f ⊢ f' if and only if Φ_f^+ ≤Φ_f'^+. Let now Φ^+ : P_→ P_ be a left exact transformation of triposes having a right adjoint Φ_+ : P_→ P_. Recall that, up to isomorphism, Φ^+ is given by postcomposition with f (Φ^+)_A(𝕀_A) : A → B, which is an implicative morphism → by <ref>. In the same way, as it is also left exact, Φ_+ is given up to isomorphism by postcomposition with the implicative morphism h (Φ_+)_B(𝕀_B) : →. Moreover, the adjunction between Φ^+ and Φ_+ directly yields fh ⊢𝕀_B and 𝕀_A ⊢ hf, meaning that h is right adjoint to f making it computationally dense. As in <ref>, the 2-functor → is a 2-equivalence of 2-categories.[Again, assuming the Axiom of Choice.] §.§ Equivalences of arrow triposes Finally, let us characterize equivalences of arrow triposes on the level of arrow algebras. With usual 2-categorical notation, we say that an implicative morphism f : → is an equivalence if there exists another implicative morphism g : → such that f g 𝕀_B in (,) and g f 𝕀_A in (,), in which case g is a quasi-inverse of f. Two arrow algebras are then equivalent if there exists an equivalence between them; clearly, equivalent arrow algebras induce equivalent triposes. Let f : → be an equivalence of arrow algebras. Then, f is computationally dense, and the induced geometric morphism of triposes Φ : P_→ P_ is an equivalence. Let g : → be a quasi-inverse of f. 
As g is in particular right adjoint to f in , f is computationally dense, and the induced geometric morphism Φ :P_→ P_ is given by: (Φ^+)_I (ψ) = f ∘ψ (Φ_+)_I(ϕ) = g ∘ϕ In particular, Φ^+Φ_+ and Φ_+Φ^+ are isomorphic to identities since f g 𝕀_B and g f 𝕀_A, meaning that Φ is an equivalence. By the previous results, we can also easily address the converse. Let Φ : P_→ P_ be an equivalence of arrow triposes. Then, Φ is induced up to isomorphism by an (essentially unique) equivalence of arrow algebras f : →. Let Ψ : P_→ P_ be a quasi-inverse of Φ. Then, Φ is both left and right adjoint to Ψ, which means in particular that the pair (Φ, Ψ) defines a geometric morphism P_→ P_. Therefore, by <ref>, Φ is induced up to isomorphism by an (essentially unique) computationally dense implicative morphism f : →; a right adjoint g : → inducing Ψ up to isomorphism then satisfies f g 𝕀_B and g f 𝕀_A, making f an equivalence. We say that an arrow algebra is trivial if S = A, or equivalently if ∈ S. It is then immediate to show that is trivial if and only if the unique map →{*} – which is obviously an implicative morphism – is an equivalence. Hence, is trivial if and only if is (equivalent to) the trivial topos. § EXAMPLES OF IMPLICATIVE MORPHISMS II We can now finally conclude our analysis of the two main classes of arrow algebras, namely those arising from frames and from PCAs, now studying their morphisms in relation to the transformations between the associated triposes. §.§ Frames First, recall that frame homomorphisms are implicative morphisms, seeing frames as arrow algebras in the canonical way. More generally, as noted in <ref>, every monotone map of frames which preserves finite meets is an implicative morphisms: we can now easily prove the converse as well. In fact, if f : Ø(Y) →Ø(X) is an implicative morphism, then: * by definition, f is monotone with respect to the logical order; * as the separator on a frame is canonically defined as {⊤}, f(⊤) = ⊤; * by <ref>, f preserves binary logical meets. Since the logical order on frames coincides with the evidential order, this simply means that f is monotone and preserves finite meets. Moving on, let us see how every frame homomorphism is computationally dense as an implicative morphism. Let f^* : Ø(Y)→Ø(X) be a frame homomorphism. Then, f^* is a computationally dense implicative morphism. Let f_* : Ø(X) →Ø(Y) be the right adjoint of f^*, i.e. the monotone function: f_*(x) = y | f^*(y) ≤ x As it is monotone and preserves finite meets, f_* is an implicative morphism Ø(X) →Ø(Y); in particular, it is clearly right adjoint to f^* in , which is then computationally dense. The converse is also true: computationally dense implicative morphisms between frames are themselves frame homomorphisms. Indeed, let f : Ø(Y) →Ø(X) be a computationally dense implicative morphism between frames and let h : Ø(X) →Ø(Y) be right adjoint to it. First, note that f is monotone, since the logical and the evidential order coincide on arrow algebras arising from frames and implicative morphisms are monotone with respect to the logical order. For the same reason, <ref> implies that f preserves finite meets: indeed, for all y∈Ø(Y) we know that ⊤≤ (f(y) f(y')) → f(y y') i.e. f(y) f(y') ≤ f(y y'), and therefore f(y) f(y') = f(y y') as the converse inequality holds by monotonicity. Finally, again as the logical and the evidential order coincide, h is then right adjoint to f as monotone maps between the posets underlying Ø(Y) and Ø(X), which means that f preserves all joins. 
Therefore, f is a frame homomorphism; summing up, we have shown the following. The inclusion 2-functor ↪ is 2-fully-faithful. Explicitly, this means that for any frames Ø(Y) and Ø(X) there is an equivalence of preorder categories: (Ø(Y), Ø(X) ) ≡(Ø(Y),Ø(X)) In essence, this makes so that the canonical embedding of locales and their homomorphisms into localic triposes and geometric morphisms factors through arrow algebras and computationally dense implicative morphisms. In the `algebraic' notation we have been using, this gives the following diagram: [hook, from=1-1, to=1-3] [hook, from=1-1, to=2-2] [hook, from=2-2, to=1-3] §.§ Partial combinatory algebras Let us start by reviewing the theory of transformations of realizability triposes coming from morphisms between PCAs, once again following <cit.>. First, let's start by linking morphisms of PCAs with left exact transformations of realizability triposes. Any transformation of realizability triposes Φ : P_Å→ P_ is given up to isomorphism by postcomposition with the function f Φ_DA(𝕀_DA): D A → D B at each component: as shown in <cit.>, Φ is left exact if and only if f is a morphism of PCAs, and the respective orders agree as well. In other words, we have the following. The association f ↦ f ∘ - is 2-functorial on downsets PCAs and, for any PCAs Å and , it realizes an equivalence of preorder categories: DÅD≡P_ÅP_ Instead, partial applicative morphisms Å→ are characterized as those inducing regular transformations of triposes, which we now introduce. Let P and Q be -triposes. A left exact transformation Φ^+ : P → Q is regular if it preserves existential quantification, that is, if: (Φ^+)_Y ∘∃_g _Y ∃_g ∘ (Φ^+)_X for all g : X → Y in . We denote with the wide subcategory of on regular transformations. This means that Φ^+ preserves the interpretation of regular logic, the fragment of first-order logic defined by ⊤, and ∃. Consider now a partial applicative morphism f : Å→, that is, a morphism of PCAs f : Å→ D. Recall that, f corresponds essentially uniquely to the D-algebra morphism: f̃ : DÅ→ D f̃(α) ⋃_a∈α f(a) and the 2-functor defined by f ↦f̃ realizes an equivalence of preorder categories between partial applicative morphisms Å→ and D-algebra morphisms DÅ→ D. Therefore, the correspondence stated in the previous proposition restricts to partial applicative morphisms and regular transformations: indeed, a left exact transformation g ∘ - : P_Å→ P_ is regular if and only if g : D Å→ D is a D-algebra morphism, i.e. if and only if it is up to isomorphism of the form g = f̃ for an essentially unique partial applicative morphism f : Å→. In other words, we have the following. The association f ↦f̃∘ - determines a 2-fully faithful 2-functor: _D Explicitly, this means that for any PCAs Å and there is an equivalence of preorder categories: _DÅ≡P_ÅP_ Finally, we can specify the previous correspondence to geometric morphisms by means of computational density. First, as we already mentioned, note how computational density can be characterized by the existence of right adjoints in . Let f : Å→ be a partial applicative morphism. Then, the following are equivalent: * f is computationally dense; * f̃: D Å→ D has a right adjoint in , in which case the right adjoint h : D→ DÅ can be described as: h(β) a | m f(a)m f(a) ⊆β where m ∈ B^# witnesses computational density for f. As the existence of right adjoints in precisely corresponds to the existence of right adjoints on the level of transformations of triposes, this yields with the following. 
Let _D,𝖼𝖽 be the wide sub(-bi)category of _D on computationally dense partial applicative morphisms. Then, the 2-functor of <ref> restricts to a 2-fully faithful 2-functor: _D,𝖼𝖽 Explicitly, this means that for any PCAs Å and there is an equivalence of preorder categories: _D,𝖼𝖽Å≡P_ÅP_ In particular, a right adjoint of f̃∘ - : P_Å→ P_ is given by h ∘ - : P_→ P_Å, where h : D → D Å is right adjoint to f̃ in . Consider the function f_0 : 𝒦_1→𝒦_2 defined by letting f_0(n) be the constant function of value n. The function f δ_𝒦_2 f_0 is a (total) applicative morphism 𝒦_1→𝒦_2 having the partial applicative morphism h : 𝒦_2 →𝒦_1 of <ref> as right adjoint in . Therefore, the pair f ⊣ g induces a geometric morphism of triposes P_𝒦_2→ P_𝒦_1, and hence a geometric morphism of toposes 𝒦_2→. Let us now see how arrow algebras fit in the picture. First, recall by <ref> that any morphism of PCAs D Å→ D, given two PCAs Å = (A,≤, ·, A^#) and = (B, ≤, ·, B^#), is an implicative morphism between the associated arrow algebras. <ref> now allows us to easily address the converse. Let f : D Å→ D be an implicative morphism. Then, f is also a morphism of PCAs D Å→ D. Therefore: DÅD = DÅD Indeed, f induces by postcomposition the left exact transformation of realizability triposes Φ_f^+ : P_DÅ→ P_D; by <ref>, therefore, f is a morphism of PCAs D Å→ D. Recalling that the two orders coincide as well, we conclude that DÅD = DÅD. Moving on to partial applicative morphisms Å→, recall that they correspond to regular transformations of triposes. The following definition is then obvious. Let and be arrow algebras. An implicative morphism f : → is regular if: f ∘∃_g(α) _Y ∃_g (f∘α) for all functions g : X → Y and all α∈ P_(X). We denote with the wide subcategory of on regular implicative morphisms; the 2-functor of <ref> obviously restricts to a 2-fully faithful 2-functor →. Note that the inequality ∃_g (fα) ⊢_Y f∃_g(α) holds for all implicative morphisms f: indeed, through the adjunction ∃_g ⊣ g^* it is equivalent to f α⊢_X f ∃_g (α) g, which is ensured by the properties of f being α⊢_X ∃_g(α) g by the unit of the same adjunction. Therefore, regularity amounts to the inequality f ∃_g (α) ⊢_Y ∃_g(f α). Computationally dense implicative morphism are regular. Indirectly, this is obvious as inverse images of geometric morphisms of triposes are regular; more explicitly, instead, if h : → is right adjoint to f: f ∃_g (α) ⊢_Y ∃_g(f α) ∃_g(α) ⊢_Y h ∃_g(f α) α⊢_X h ∃_g(f α) g f α⊢_X ∃_g(f α) g which is ensured by the unit of the adjunction ∃_g ⊣ g^*. Drawing from the previous results, we conclude that regular implicative morphisms between arrow algebras arising from PCAs arise themselves from partial applicative morphisms. The 2-functor D̃ of <ref> restricts to a 2-fully faithful 2-functor: _D Explicitly, this means that for all PCAs Å and , D̃ realizes an equivalence of preorder categories: _DÅD ÅD Let f : D Å→ D be a regular implicative morphism. Then, f induces by postcomposition the regular transformation of realizability triposes Φ_f^+ : P_DÅ→ P_D; by <ref>, therefore, f=D̃g for an essentially unique partial applicative morphism g : Å→[We can also describe g as f∘δ'_Å.]. Moreover, for f,f' : Å→ partial applicative morphisms, we have already showed that f ≤ f' in _DÅ if and only if D̃ f ⊢D̃ f' in DÅD. 
Alternatively, the proof of the previous can be given by observing that a regular implicative morphism f : D Å→ D is a union-preserving morphism of PCAs, and hence a D-algebra morphism: in fact, DÅ and D are compatible with joins, meaning that existentials can be computed as unions. Finally, let us specialize to the case of computational density. Recall by <ref> that a partial applicative morphism f : Å→ is computationally dense if and only if f̃ : D Å→ D has a right adjoint in . Since DÅD coincides with DÅD, this is also equivalent to f̃ : DÅ→ D having a right adjoint in , that is, to f̃ being computationally dense as an implicative morphism D Å→ D. In essence, we have shown the following. The 2-functor D̃ of <ref> restricts to a 2-fully faithful 2-functor: _D,𝖼𝖽 Explicitly, this means that for all PCAs Å and , D̃ realizes an equivalence of preorder categories: _D_𝖼𝖽 (Å, ) ≡(DÅ, D) As for frames, in essence this makes so that the construction of realizability triposes and geometric morphisms from PCAs and partial applicative morphisms factors through arrow algebras and computationally dense implicative morphisms, giving the following diagram: _D_𝖼𝖽 [hook, from=1-1, to=1-3] [hook, from=1-1, to=2-2] [hook, from=2-2, to=1-3] § INCLUSIONS OF ARROW TRIPOSES In this section, we will specify the previous correspondence between computationally dense implicative morphisms and geometric morphisms of arrow triposes to the case of geometric inclusions into a given arrow tripos P_, and see how they correspond to nuclei on . §.§ Inclusions and surjections Recall that a geometric morphism of arrow triposes Φ : P_→ P_ is an inclusion if (Φ_+)_I reflects the order for any set I, or equivalently if (Φ^+)_I(Φ_+)_I(ϕ) _I ϕ for any set I and any ϕ : I → B. Dually, Φ is a surjection if (Φ^+)_I reflects the order for any set I, or equivalently if (Φ_+)_I(Φ^+)_I(ϕ) _I ϕ for any set I and any ϕ : I → A. Recall moreover the following general definition. Let be a preorder-enriched category. An arrow f : A → B in is a lax epimorphism if, for all C ∈, the map -∘ f : (B, C) →(A, C) is fully-faithful as a functor between preorder categories, which explicitly means that p ≤ q for all p, q : B → C such that pf ≤ qf. Dually, f is a lax monomorphism if, for all C ∈, the map f∘ - : (C, A) →(C, B) is fully-faithful as a functor between preorder categories, which explicitly means that p ≤ q for all p, q : C → A such that fp ≤ fq. Specializing to , we can then give the following definition. A computationally dense implicative morphism f : → is an implicative surjection (resp. implicative injection) if it is a lax epimorphism (resp. lax monomorphism) in . Let f : → be a computationally dense implicative morphism with right adjoint h : → and let Φ : P_→ P_ be the induced geometric morphism of arrow triposes. The following are equivalent: * Φ is an inclusion; * f h _B 𝕀_B; * f is an implicative surjection. Dually, the following are equivalent: * Φ is a surjection; * h f _A 𝕀_A; * f is an implicative injection. For (1) ⇔ (2), recall that the inverse image Φ^+ is given by postcomposition with f, and the direct image Φ_+ is given by postcomposition with h: therefore, Φ is an inclusion if and only if f h ϕ_I ϕ for any set I and any ϕ∈ P_(I), which is equivalent to f h _B 𝕀_B. For (2) (3), suppose p,q : →𝒞 are such that pf ⊢ qf. Then, pfh ⊢ qfh, and hence p ⊢ q. For (3) (2), of course fh ⊢𝕀_B; conversely, to show that 𝕀_B ⊢ fh it then suffices to show that f ⊢ fhf, which is ensured by 𝕀_A ⊢ hf. 
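The dichotomy above can be observed concretely on frames. In the following Python sketch (ours, an open-sublocale example under the identification of frames with arrow algebras), the restriction map f : Ø(X) → Ø(S) to a subspace S ⊆ X satisfies f h = 𝕀 but not h f = 𝕀, so the induced geometric morphism is an inclusion and not a surjection.

```python
from itertools import chain, combinations

def powerset(base):
    return [frozenset(s) for s in chain.from_iterable(
        combinations(base, r) for r in range(len(base) + 1))]

X, S = [0, 1, 2], [0, 1]                  # S is an (open) subspace of the discrete space X
OX, OS = powerset(X), powerset(S)

def f(u):                                 # restriction O(X) -> O(S): a surjective frame hom
    return u & frozenset(S)

def h(v):                                 # its right adjoint: the largest open of X restricting into v
    return v | (frozenset(X) - frozenset(S))

# f h = identity witnesses that the induced geometric morphism is an inclusion ...
assert all(f(h(v)) == v for v in OS)
# ... while h f is not the identity in general: the morphism is not also a surjection.
assert any(h(f(u)) != u for u in OX)
print("open sublocale inclusion verified: f h = id, h f != id")
```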
For any arrow algebras and , there are equivalence of preorder categories between: * implicative surjections → and geometric inclusions P_ P_; * implicative injections → and geometric surjections P_ P_. Combining <ref> with the previous proposition. Of course, an implicative morphism is an equivalence if and only if it is both an implicative surjection and an implicative inclusion. §.§ Nuclei and subtriposes Let = (A,,→,S) be an arrow algebra. As we have seen in <ref>, every nucleus j on determines a new arrow algebra 𝒜_j = (A, , →_j, S_j) where: a →_j b a → jb S_j a ∈ A | ja ∈ S With the previous machinery, <cit.> can then be reduced to the following observation. 𝕀_A is an implicative surjection 𝒜→𝒜_j, with j as a right adjoint. Let us start by showing that 𝕀_A is an implicative morphism 𝒜→𝒜_j. As the evidential order is the same in and _j, we only have to verify (i) and (ii) in <ref>. * If a ∈ S, then ja ∈ S, meaning that a ∈ S_j. * Condition (ii) explicitly reads as: _a,a' ( a → a' ) →_j a →_j a' ∈ S_j i.e.: j (_a,a' ( a → a' ) → j ( a → j a')) ∈ S so, by (ii) in <ref>, it suffices to show: _a,a' ( a → a' ) → j ( a → j a') ∈ S This, in turn, follows by intuitionistic reasoning from: _a ,a' (a→ a')→ a → ja' ∈ S _a,a'(a→ ja')→ j(a→ ja')∈ S which follow from (ii) <ref>. Then, let us show that j is an implicative morphism _j→: again, recall that j is monotone by definition, so we only have to verify (i) and (ii) in <ref>. * If a ∈ S_j, then by definition j a ∈ S. * Condition (ii) explicitly reads as: _a,a' j ( a → j a') → j a → j a' ∈ S which follows from intuitionistic reasoning from: _a,a' j (a → ja') → ja → jja' ∈ S _a' jja' → ja' ∈ S Finally, let us show that j : _j→ is right adjoint to 𝕀_A : →_j in . * On one hand, j ⊢^j_A 𝕀_A explicitly reads as j ⊢_A j, which is clearly true. * On the other, 𝕀_A ⊢_A j is true as j is a nucleus. Moreover, we also have that 𝕀_A ⊢_A^j j as it explicitly reads as 𝕀_A ⊢_A j j, which makes 𝕀_A an implicative surjection by <ref>. Every nucleus j on induces a geometric inclusion of triposes P__j P_, given by: P__j P_[""name=0, anchor=center, inner sep=0, "𝕀_A -0.1em∘0.1em -"', curve=height=12pt, from=1-2, to=1-1] [""name=1, anchor=center, inner sep=0, "j ∘-"', curve=height=12pt, from=1-1, to=1-2] ["⊣"anchor=center, rotate=-90, draw=none, from=0, to=1] However, we are now in the position to do more than that: namely, we can recover <ref> and hence a converse to the previous through nuclei. Recall in fact by the discussion in <ref> that we have an equivalence of preorder categories between subtriposes of P_ and closure transformations on P_, that is, transformations Φ_j : P_→ P_ which are left exact, inflationary and idempotent: P_P_ Note then that, given a closure transformation Φ_j on P_, the function j (Φ_j)_A(𝕀_A) : A → A inducing it up to isomorphism satisfies the following: * j is an implicative morphism →; * 𝕀_A ⊢ j; * j j j. Assuming j to be monotone with respect to the evidential order in as in <ref>, this means that j satisfies (i), (ii), (iv) and (vi) in <ref>, which as we've noted suffice to make j into a nucleus. Of course, the converse is also true – that is, nuclei induce by postcomposition transformations which are left exact, inflationary and idempotent – since, as we know, every nucleus on is an implicative morphism →. Since the association j ↦Φ_j also preserves and reflects the order, we conclude with the following. Let be the set of nuclei on , with the preorder induced by P_(A). 
Then, <ref> yields an equivalence of preorder categories: P_ so, in particular: P_ Every geometric inclusion of toposes into is induced, up to equivalence, by a geometric inclusion of triposes of the form: P__j P_[""name=0, anchor=center, inner sep=0, "𝕀_A ∘-"', curve=height=12pt, from=1-2, to=1-1] [""name=1, anchor=center, inner sep=0, "j ∘-"', curve=height=12pt, from=1-1, to=1-2] ["⊣"anchor=center, rotate=-90, draw=none, from=0, to=1] for some nucleus j on . By definition of _j, note therefore how P__j coincides precisely with the tripos P_j described before <ref>. For this reason, we will usually refer to P__j simply as P_j. We conclude this part with the following alternative description of P_j, already noted in the general case <cit.> and then in the context of arrow algebras in <cit.>. Let j be a nucleus on an arrow algebra . Then, P_j is equivalent over P_ to the tripos defined by: Q_j (I) α∈ P_(I) | j α⊢_I α with the Heyting prealgebra structure induced by P_(I). Consider the pair of transformations: Θ^+ 𝕀_A -0.1em∘0.2em - : Q_j → P_j Θ_+ j∘- :P_j→ Q_j obviously well-defined since so is the geometric morphism P_j P_ above, and being j j ⊢_A j. Then, Θ^+ and Θ_+ define an equivalence of triposes between P_j and Q_j. * The fact that Θ^+Θ_+ 𝕀_P_j is equivalent, by <ref>, to j ^j_A 𝕀_A: on one hand, 𝕀_A ⊢^j_A j is equivalent to 𝕀_A ⊢_A j j, which follows being 𝕀_A ⊢_A j; on the other, j ⊢^j_A 𝕀_A is equivalent to j ⊢_A j, which follows by reflexivity. * To show that Θ_+Θ^+ 𝕀_Q_j we need to show that j α_I α for every set I and every α∈ Q_j(I): on one hand, α⊢_I j α follows by 𝕀_A ⊢_A j; on the other, j α⊢_I α follows by definition of Q_j(I). Therefore, Q_j is equivalent to P_j; through the equivalence, the geometric inclusion of Q_j in P_ is given by: Q_j P_[""name=0, anchor=center, inner sep=0, "j ∘-"', curve=height=12pt, from=1-2, to=1-1] [""name=1, anchor=center, inner sep=0, "j ∘-"', curve=height=12pt, from=1-1, to=1-2] ["⊣"anchor=center, rotate=-90, draw=none, from=0, to=1] §.§ A factorization theorem As it is known, every geometric morphism of toposes can be factored as a geometric surjection followed by a geometric inclusion. Generalizing locale theory, let us recover the same result on the level of arrow algebras; in doing so, we will also make the correspondence between subtriposes and nuclei more explicit. Let f : → be a computationally dense implicative morphism with right adjoint h : →; by <ref>, up to isomorphism we can assume both f and h to be monotone with respect to the evidential order. First, observe the following. hf : → is a nucleus on . Let us verify that hf satisfies the three conditions in <ref>. * hf is monotone by composition. * Clearly 𝕀_A ⊢ hf as h is right adjoint to f. * Let I A × A; note that condition (iii) can be rewritten as: π_1 → hfπ_2 ⊢_I hfπ_1 → hf π_2 where π_1,π_2 : I → A are the obvious projections. Through the Heyting adjunction in P_(I), this is equivalent to: (π_1 → hfπ_2 )× hfπ_1 ⊢_I hf π_2 and hence, since h∘ - is right adjoint to f ∘ -, which preserves finite meets, to: f(π_1 → hfπ_2) × fhfπ_1 ⊢_I f π_2 Therefore, since fh fπ_1 ⊢_I fπ_1 as fh ⊢𝕀_B, it suffices to show: f(π_1 → hfπ_2) × fπ_1 ⊢_I f π_2 i.e., again through the Heyting adjunction in P_(I): f(π_1 → hfπ_2) ⊢_I fπ_1 → fπ_2 which in turn follows since, by (ii) in <ref>: f (π_1→ hf π_2) ⊢_I fπ_1 → fhfπ_2 and again fhfπ_2 ⊢_I fπ_2. 
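As a sanity check, the composite h f can be computed explicitly in the frame case. The sketch below (ours) continues the open-sublocale example from before: j = h f turns out to be the open nucleus u ↦ (S → u) on Ø(X), and the three nucleus conditions are verified exhaustively.

```python
from itertools import chain, combinations

def powerset(base):
    return [frozenset(s) for s in chain.from_iterable(
        combinations(base, r) for r in range(len(base) + 1))]

X, S = [0, 1, 2], [0, 1]
OX = powerset(X)
comp = frozenset(X) - frozenset(S)

def j(u):                                # j = h f for the open sublocale example: j(u) = (S -> u)
    return comp | (u & frozenset(S))

for a in OX:
    assert a <= j(a)                     # inflationary
    assert j(j(a)) == j(a)               # idempotent
    for b in OX:
        assert j(a & b) == j(a) & j(b)   # preserves binary meets
print("j = h f is a nucleus (here, the open nucleus u |-> S -> u)")
```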
A natural question is then to relate f to j hf, and in particular the geometric morphism Φ_f : P_→ P_ induced by f with the inclusion Φ_j : P_j P_ induced by j. To this aim, recall here that f h ⊢𝕀_B and 𝕀_A ⊢ hf imply fhf f and hfh h. f is an implicative injection _j →, with h as a right adjoint. Let us start by showing that f is an implicative morphism _j →. * Let a ∈ S_j; by definition, hf(a) ∈ S_A, so fhf(a) ∈ S_B, and hence f(a) ∈ S_B since fh ⊢𝕀_B. * Condition (ii) explicitly reads as: _a,a' f(a→ ja') → f(a) → f(a') ∈ S_B which follows by intuitionistic reasoning from: _a,a' f(a→ ja') → f(a) → fj(a') ∈ S_B _a,a' (f(a) → fhf(a')) → f(a) → f(a') ∈ S_B where the latter follows being fhf ⊢ f. Note now that h : B → A is an implicative morphism →_j since it is an implicative morphism → and 𝕀_A is an implicative morphism →_j. Then, we have that h : →_j is right adjoint to f: _j →: * clearly f h ⊢𝕀_B; * on the other hand, 𝕀_A ⊢^j h f explicitly reads as 𝕀_A ⊢_A h f h f, which follows from 𝕀_A ⊢_A h f. Moreover, we also have that hf ⊢_A^j 𝕀_A as it explicitly reads as hf ⊢_A hf, which makes f : _j → an implicative surjection by <ref>. Recalling by <ref> that 𝕀_A defines an implicative surjection →_j, we have the following. Every computationally dense implicative morphism factors as an implicative surjection followed by an implicative inclusion. On the level of triposes, this means that the geometric morphism Φ_f : P_→ P_ induced by f factors through Φ_j : P_j P_ by means of a geometric surjection Θ_f : P_→ P_j, also induced by f as a morphism _j →: P_ P_ P_j[""name=0, anchor=center, inner sep=0, "f∘-"description, curve=height=12pt, from=2-3, to=1-1] [""name=1, anchor=center, inner sep=0, "h∘-"description, curve=height=12pt, from=1-1, to=2-3] [""name=2, anchor=center, inner sep=0, "𝕀_A-0.2em∘-"description, curve=height=12pt, from=2-3, to=3-1] [""name=3, anchor=center, inner sep=0, "j∘-"description, curve=height=12pt, from=3-1, to=2-3] ["Θ_f"', from=1-1, to=3-1] ["⊣"anchor=center, rotate=-114, draw=none, from=0, to=1] ["⊣"anchor=center, rotate=-66, draw=none, from=2, to=3] Θ_f is an equivalence if and only if Φ_f is an inclusion. By <ref> and <ref>, Θ_f is an equivalence if and only if f : _j → is an equivalence. Since h : →_j is right adjoint to f: _j →, this is equivalent to hf ⊢^j_A 𝕀_A and 𝕀_B ⊢ fh, and since hf ⊢^j_A 𝕀_A holds trivially this is equivalent simply to 𝕀_B ⊢ fh. By <ref>, 𝕀_B ⊢ fh is in turn equivalent to Φ_f being an inclusion. In essence, this gives us a more explicit description of the correspondence given in <ref>, in perfect generalization of the localic case: indeed, if a subtripos Φ : P_ P_ is induced by an implicative surjection f : →, then it is equivalent to the subtripos induced by (a nucleus isomorphic to) hf, where h is right adjoint to f. § ARROW ALGEBRAS FOR MODIFIED REALIZABILITY In this section, we will apply the theoretical framework developed above to the study of modified realizability from the point of view of arrow algebras, recovering some functoriality results – such as in <cit.> – in much greater generality. The key feature of modified realizability lies in separating between a set of potential realizers and a subset thereof of actual realizers. 
On the level of triposes, this amounts to moving from the ordinary realizability tripos P_Å over a (traditionally, discrete and absolute) PCA Å to a tripos whose predicates on a set I are functions from I to the set: (α,β) ∈ D A × D A | α⊆β which are preordered by: ϕ⊢_I ψ⋂_i∈ I (ϕ_1(i) →ψ_1(i)) ∩ (ϕ_2(i) →ψ_2(i)) ∈ (DA)^# where we denote with ϕ_1(i),ϕ_2(i) the two components of ϕ(i). This idea is what led, in <cit.>, to the definition of the Sierpiński construction on arrow algebras, which we will describe below. To do this, and for what follows, we will consider some ad hoc notions of arrow algebras which still encompass all the relevant cases and many others. This does not mean that we have counterexamples showing how the presented results may fail for more general classes of arrow algebras, but only that minimal assumptions suffice to ensure the desired properties. An arrow algebra = (A,,→,S) is binary implicative if the equality: a → (b c) = a → b a → c holds for all a,b,c∈ A, and it is modifiable if moreover the equality: → a = ⊤ holds for all a ∈ A. We denote with and the full subcategories of on binary implicative and modifiable arrow algebras, respectively. Every frame, seen as an arrow algebra in the canonical way, is modifiable. For any PCA 𝔸, D Å is modifiable; in particular, PERÅ is modifiable. §.§ The Sierpiński construction Recall by <cit.> that, starting from any binary implicative arrow algebra = (A, , →, S), we can define a new arrow algebra = ( A, , →, S), also binary implicative, by letting: A x = (x_0,x_1) ∈ A × A | x_0 x_1 with pointwise order, implication: x → y ( x_0 → y_0 x_1 → y_1 , x_1 → y_1) and separator: S x ∈ A | x_0 ∈ S This means that, for any set I, the order in P_(I) is given by: ϕ⊢_I ψ_i∈ Iϕ_1(i) →ψ_1(i) ϕ_2(i) →ϕ_2(i) ∈ S where we denote with ϕ_1,ϕ_2 : I → A the two components of ϕ : I → A. Let us now lift the association ↦ to a (pseudo)functor on . Let f : → be an implicative morphism in , for the moment assumed to be monotone, and define: f : A → B f (x_0,x_1) (f(x_0), f(x_1) ) f is an implicative morphism →. First, note that f is well-defined as a function A → B by monotonicity of f, and it is monotone itself with respect to the evidential orders in and . Let us then verify that f satisfies the first two conditions in <ref>. * If x ∈ S_A, then x_0 ∈ S_A, so f(x_0) ∈ S_B and hence f (x) ∈ S_B. * First note that, for all x,y ∈ A: f ( x → y ) = f ( x_0 → y_0 x_1 → y_1 , x_1 → y_1 ) = ( f(x_0→ y_0 x_1→ y_1) , f(x_1 → y_1) ) whereas: f (x) →f (y) = (fx_0, fx_1) → (fy_0, fy_1) = ( fx_0 → fy_0 fx_1 → fy_1, fx_1 → fy_1 ) Therefore, by binary implicativity, a realizer for f amounts to an element r ∈ S_B such that: r f(x_0→ y_0 x_1 → y_1) → fx_0→ fy_0 r f(x_0→ y_0 x_1 → y_1) → fx_1→ fy_1 r f(x_1→ y_1) → fx_1 → fy_1 for all x,y ∈ A, in which case (r,r) ∈ S_B realizes f. By monotonicity of f, note then that it suffices to show that: r f(x_0→ y_0) → fx_0→ fy_0 r f(x_1→ y_1) → fx_1 → fy_1 for all x,y ∈ A, which means that r can be taken to be a realizer for f. Therefore, (-) defines a functorial association on binary implicative arrow algebras and monotone implicative morphisms between them. Note moreover that, given two monotone implicative morphisms f,f' : → in , if u ∈ S_B realizes f ⊢ f', then (u,u) ∈ S_B clearly realizes f ⊢f', meaning that (-) is actually 2-functorial. Precomposing with the pseudofunctor M of <ref>, we obtain the following. 
For any implicative morphism f : → in , let: f : → f (x_0,x_1) ( _x_0 a∂ f (a) , _x_1 a∂ f(a) ) Then, (-) is a pseudofunctor →. Moreover, if f is computationally dense with right adjoint h : →, then f is computationally dense as well, and a right adjoint is given by h : →. We only have to show the last part, so let f be computationally dense with right adjoint h : → and, up to isomorphism, assume both f and h to be monotone, so that f and h can be defined as above in the case of monotonicity. Let's show that h is right adjoint to f. * To show that f h ⊢𝕀_ B, note that: [3] _y∈ B f h (y) → y ∈ S_B _y∈ B ( f h (y_0), fh (y_1) ) → (y_0,y_1) ∈ S_B _y∈ B f h (y_0) → y_0 f h (y_1) → y_1 ∈ S_B which is ensured by f h ⊢𝕀_B. * Similarly, 𝕀_ A⊢ h f reduces to 𝕀_A ⊢ h f. Let and be binary implicative arrow algebras. Then, every geometric morphism Φ : P_→ P_ lifts to a geometric morphism Φ : P_→ P_. §.§ The modification of an arrow algebra Let us now study the relation between P_ and P_. First, generalizing what is showed in <cit.> for discrete and absolute PCAs, we can note the following. P_ is a subtripos of P_. Consider the projection: π_1 : A → A (x_0,x_1) ↦ x_1 Let us show that π_1, which is obviously monotone, is an implicative morphism →. * If (x_0,x_1) ∈ S, then by definition x_0 ∈ S, so x_1 ∈ S as well since x_0 x_1. * A realizer of π_1 amounts to an element r ∈ S such that: r (x_1 → y_1) → x_1 → y_1 for all x,y ∈ A, so we can take r i. Consider now the diagonal map: δ : A → A a ↦ (a,a) Let us show that δ, also obviously monotone, is an implicative morphism →. * If a ∈ S, then clearly (a,a) ∈ S. * We have: [3] _a,a'∈ A δ (a→ a') →δ(a) →δ (a') ∈ S _a,a'∈ A (a→ a', a → a') → (a,a) → (a',a') ∈ S _a,a' ∈ A (a→ a') → a → a' ∈ S which is ensured by i∈ S. Finally, let us show that δ is right adjoint to π_1 in , making it an implicative surjection. * On one hand, π_1 δ = 𝕀_A. * On the other, we have: [3] 𝕀_ A⊢_ Aδπ_0 _x∈ A (x_0,x_1) → ( x_1, x_1 ) ∈ S _x∈ A x_1 → x_1 ∈ S which is ensured by i∈ S. Therefore, π_1 induces a geometric inclusion Φ_1 : P_ P_. is a subtopos of . In the case of = ℙ for a discrete and absolute PCA, Johnstone <cit.> showed that there is another inclusion P_ P_, induced by the projection π_0 : → and disjoint from Φ_1. We have not been able to show that this holds in general for (binary implicative) arrow algebras, nor to find reasonable assumptions under which this may be the case. At least in the modifiable case, we can say even more about the inclusion Φ_1 : P_ P_. Specializing <ref> to the context of arrow algebras, recall that a subtripos of P_ is open if it is induced by a nucleus o on of the shape: o (a) u → a for some u ∈ A, in which case the closed nucleus: c (a) a + u induces its complement in the lattice of subtriposes of P_ considered up to equivalence. Given a modifiable arrow algebra , we define its modification as the arrow algebra ^m ()_c, where c is the nucleus on defined by: c(x) x+ (,⊤) We denote with M_ the modified arrow tripos P_^m, that is, the subtripos P_()_c of P_. Let be a modifiable arrow algebra. Then, the inclusion Φ_1 : P_ P_ is open, induced by the nucleus: o( x ) (,⊤) → x In particular, M_ is the closed complement of P_ in the lattice of subtriposes of P_ considered up to equivalence. By <ref> and the discussion preceding it, we only have to show that o δπ_1. 
* To show that o ⊢_ Aδπ_1, note that: _x∈ A o(x) →δπ_1(x) ∈ S _x∈ A( (,⊤) → (x_0,x_1) ) → (x_1,x_1) ∈ S _x∈ A (→ x_0⊤→ x_1, ⊤→ x_1) → (x_1,x_1 ) ∈ S _x∈ A (→ x_0⊤→ x_1) → x_1 (⊤→ x_1)→ x_1 ∈ S _x∈ A (⊤→ x_1) → x_1 ∈ S which is ensured by the properties of ∂ a ⊤→ a. * To show that δπ_1 ⊢_ A o note that, by the hypothesis of modifiability: _x∈ Aδπ_1(x) → o(x) ∈ S _x∈ A (x_1,x_1) → (,⊤) → (x_0,x_1) ∈ S _x ∈ A (x_1,x_1) → (→ x_0 ⊤→ x_1, ⊤→ x_1) ∈ S _x ∈ A (x_1,x_1) → ( ⊤→ x_1, ⊤→ x_1) ∈ S _x∈ A (x_1 →⊤→ x_1,x_1 →⊤→ x_1) ∈ S _x∈ A x_1 →⊤→ x_1 ∈ S which is again ensured by the properties of ∂ a ⊤→ a. For = 𝒦_1, we reobtain what proved in <cit.>: the effective topos is an open subtopos of the effective topos built on the topos of sheaves over the Sierpiński space, _·→·, and Grayson's modified realizability topos – characterized in <cit.> as ^m – is its closed complement. For = PER, we obtain that the extensional modified realizability topos characterized in <cit.> as ^m is the closed complement of as subtoposes of . Let us now see how the construction of the modified arrow tripos can be made pseudofunctorial. In the proof, we will need the following property, which makes use of the hypothesis of modifiability. Let f : → be an implicative morphism in . Then, c f c ⊢ f c.[Of course, the first c is a nucleus on , while the second one is a nucleus on .] By definition of the nucleus c ∈, and using the fact that logical joins are computed pointwise in ()^ A, c f c ⊢ f c is equivalent to: _x ∈ A (,⊤) → f c (x) ∈ S_B Since is modifiable, this reduces to: _x∈ A⊤→ Mf ( (c x)_1 ) ∈ S_B where M : → is the monotonization pseudofunctor of <ref>, and hence since Mf f: _x∈ A⊤→ f ( ((,⊤) + (x_0,x_1) )_1 ) ∈ S_B Note now that, in any arrow algebra of the form , the logical join a + a' has a_1 + a_1' as its second component. This can be seen using the explicit description of logical joins given in <ref>, recalling that (evidential) meets and implications in are computed pointwise on the second component. Therefore, the previous is equivalent to: _x∈ A⊤→ f ( ⊤ + x_1 ) ∈ S_B which follows from _a ⊤→ (⊤ + a) ∈ S_A by the properties of f. For any implicative morphism f : → in , let: f^m : ^m→^m be the composite: ^m ["c"] [" f"] ["𝕀_ B"] ^m where f : → is the implicative morphism defined in <ref>. Then, (-)^m is a pseudofunctor →. Moreover, if f is computationally dense with right adjoint h : →, then f^m is computationally dense as well, and a right adjoint is given by h^m : ^m →^m. Furthermore, the square: ^m ^m ["h^m", from=1-1, to=1-2] ["c"', from=1-1, to=2-1] ["c", from=1-2, to=2-2] [" h"', from=2-1, to=2-2] commutes up to isomorphism. First, let us show that (-)^m preserves identities and compositions up to isomorphism. * By definition, 𝕀_A^m : ^m →^m is given by the composite: ^m ["c"] ["𝕀_A"] ["𝕀_ A"] ^m which means that 𝕀_A^m = c : ^m →^m, and obviously c ^c 𝕀_ A. * By definition, for f : → and g : →𝒞, (g f)^m is given by the composite: ^m ["c"] ["(g f)"] 𝒞["𝕀_ C"] 𝒞^m where of course (gf) g f, whereas g^m f^m is given by the composite: [cramped] ^m ["c"] [" f"] ["𝕀_ B"] ^m ["c"] ["g"] 𝒞["𝕀_ C"] 𝒞^m which means that we need to show that g c f c ^c g f c. On one hand, using the fact that 𝕀⊢ c both for c ∈ and c ∈𝒞: g f c ⊢ g c f c ⊢ c g c f c i.e. g f c ⊢^c g c f c. On the other, by the previous lemma we know that c f c ⊢ f c, which implies g c f c ⊢ g f c by the properties of g, and hence g c f c ⊢ c g f c since 𝕀_ C⊢ c. The pseudofunctoriality of f ↦ f then yields the pseudofunctoriality of f ↦ f^m. 
Suppose now h : → is right adjoint to f and, without loss of generality, assume h to be monotone, in which case we can describe h as (y_0,y_1) ↦ (h(y_0), h(y_1)); let us show that h c is right adjoint to f c :^m→^m. * On one hand, f c h c ⊢_ B^c 𝕀_ B explicitly reads as f c h c ⊢_ B c. By <ref>, we know that h is right adjoint to f, so the previous is equivalent to c h c ⊢_ B h c, which is ensured by the previous lemma. * On the other, 𝕀_ A⊢^c_ A h c f c explicitly reads as 𝕀_ A⊢_ A c h c f c. As 𝕀_ A⊢ c, this is ensured if 𝕀_ A⊢_ A h c f c. This is again equivalent to f ⊢_ A c f c as h is right adjoint to f, which follows since 𝕀⊢ c, both for c ∈ and c∈. Finally, to show that the square above commutes up to isomorphism, we need to show that c h^m h c as morphisms ^m →. On one hand, h c ⊢ c h^m explicitly means h c ⊢_ B c h c, which follows simply being 𝕀_ A⊢ c. On the other, c h^m ⊢ h c explicitly means c h c ⊢_ B h c, which is again ensured by the previous lemma. Let and be modifiable arrow algebras. Then, every geometric morphism Φ : P_→ P_ induces a geometric morphism Φ^m : M_→ M_ such that the diagram: M_ M_ P_ P_["Φ^m", from=1-1, to=1-2] [hook, from=1-1, to=2-1] [hook, from=1-2, to=2-2] ["Φ"', from=2-1, to=2-2] is a pullback square of triposes and geometric morphisms. In particular, the induced diagram of toposes and geometric morphisms: ^m ^m [from=1-1, to=1-2] [hook, from=1-1, to=2-1] [hook, from=1-2, to=2-2] [from=2-1, to=2-2] is a pullback square. The fact that the square commutes follows directly by the previous proposition. To show that it is a pullback, instead, recall from <cit.> that, given a closed nucleus k x x + u on , the pullback of the closed subtripos P__k P_ along Φ is the closed subtripos of P_ determined by the nucleus k' y y + (Φ)^+_ B (u). Therefore, the square above is a pullback if and only if (Φ)^+_ B (,⊤) (,⊤) in . To prove this, let f : → be an implicative morphism with right adjoint h : → inducing Φ, so that Φ is induced by f with right adjoint h as in <ref>; then, we need to show that f (,⊤) (,⊤) in . On one hand, by modifiability of , (,⊤) ⊢ f (,⊤) reduces simply to ⊤⊢ f(⊤), which is true as f is an implicative morphism. On the other, f (,⊤) ⊢ (,⊤) is equivalent to (,⊤) ⊢ h (,⊤), which is true again as is modifiable and h is an implicative morphism. In particular, restricting to arrow algebras of the form ℙ for a discrete and absolute PCA ℙ, we reobtain <cit.>. Recall by <ref> that we can identify M_ up to equivalence with the subtripos M_' P_ defined by: M_'(I) α∈ P_(I) | c α⊢_I α = α∈ P_(I)| _i (,⊤) →α(i) ∈ S_A = α∈ P_(I)| _i ⊤→α_1(i) ∈ S_A = α∈ P_(I)| ⊤_I ⊢_I α_1 and in the same way we can identify M_ up to equivalence with the subtripos M_' P_ defined by: M_'(I) = β∈ P_(I) | ⊤_I ⊢_I β_1 In these terms, Φ^m can be described explicitly as: M'_ M'_[""name=0, anchor=center, inner sep=0, " f ∘-"', curve=height=12pt, from=1-2, to=1-1] [""name=1, anchor=center, inner sep=0, " h ∘-"', curve=height=12pt, from=1-1, to=1-2] ["⊣"anchor=center, rotate=-90, draw=none, from=0, to=1] that is, exactly the restriction of Φ in both directions. The details of the proof of the previous corollary also reveal that a similar result holds for open complements of modified triposes, again generalizing what proved in <cit.>. Let and be modifiable arrow algebras. 
Then, for every geometric morphism Φ : P_𝒜 → P_ℬ, the square whose top row is Φ : P_𝒜 → P_ℬ, whose bottom row is the lifted geometric morphism between the Sierpiński triposes given by <ref>, and whose vertical arrows are the open inclusions of <ref>, is a pullback square of triposes and geometric morphisms. In particular, the induced square of toposes and geometric morphisms is a pullback square.
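As a closing illustration (ours, again in the degenerate frame case), the complementarity underlying the modified-realizability picture can be checked directly: for any u, the pointwise meet of the open nucleus o(a) = u → a and the closed nucleus c(a) = a ∨ u is the identity, mirroring how the modified tripos is the closed complement of the open subtripos in the lattice of subtriposes.

```python
from itertools import chain, combinations

def powerset(base):
    return [frozenset(s) for s in chain.from_iterable(
        combinations(base, r) for r in range(len(base) + 1))]

X = [0, 1, 2]
OX = powerset(X)
u = frozenset({2})                       # the open cutting O(X) into an open and a closed part

def heyting_imp(a, b):                   # a -> b = largest z with z /\ a <= b
    return frozenset(x for x in X if x not in a or x in b)

o = lambda a: heyting_imp(u, a)          # open   nucleus  o(a) = u -> a
c = lambda a: a | u                      # closed nucleus  c(a) = a \/ u

for a in OX:
    assert a <= o(a) and o(o(a)) == o(a)     # o is inflationary and idempotent ...
    assert a <= c(a) and c(c(a)) == c(a)     # ... and so is c
    assert o(a) & c(a) == a                  # their pointwise meet is the identity
print("o and c are complementary nuclei: (u -> a) /\\ (a \\/ u) = a")
```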
Domain Generalizable Knowledge Tracing via Concept Aggregation and Relation-Based Attention Yuquan Xie, Wanqi Yang, Jinyu Wei, Ming Yang and Yang Gao 29 February 2024 =========================================================================================== § ABSTRACT Knowledge Tracing (KT) is a critical task in online education systems, aiming to monitor students’ knowledge states throughout a learning period. Common KT approaches involve predicting the probability of a student correctly answering the next question based on their exercise history. However, these methods often suffer from performance degradation when faced with the scarcity of student interactions in new education systems. To address this, we leverage student interactions from existing education systems to mitigate performance degradation caused by limited training data. Nevertheless, these interactions exhibit significant differences since they are derived from different education systems. To address this issue, we propose a domain generalization approach for knowledge tracing, where existing education systems are considered source domains, and new education systems with limited data are considered target domains. Additionally, we design a domain-generalizable knowledge tracing framework (DGKT) that can be applied to any KT model. Specifically, we present a concept aggregation approach designed to reduce conceptual disparities within sequences of student interactions from diverse domains. To further mitigate domain discrepancies, we introduce a novel normalization module called Sequence Instance Normalization (SeqIN). Moreover, to fully leverage exercise information, we propose a new knowledge tracing model tailored for the domain generalization KT task, named Domain-Generalizable Relation-based Knowledge Tracing (DGRKT). Extensive experiments across five benchmark datasets demonstrate that the proposed method performs well despite limited training data. Traditional KT models show significant performance degradation on target domains, while models with the DGKT framework alleviate this issue. Our proposed DGRKT model outperforms five knowledge tracing models, achieving an average AUC improvement of 4.16%, benefiting from a specially designed relation-based attention mechanism. Knowledge tracing, intelligent education, domain generalization, knowledge concept. § INTRODUCTION Over the past decades, a large number of online education systems have emerged, which provide remote learning environments and personalized guidance for users. These advancements have revolutionized the educational landscape, enabling platforms to cater to the individual learning needs of students more effectively <cit.>. Knowledge tracing (KT) plays a crucial role in this context, as it allows education systems to monitor and evaluate the evolving knowledge states of students. Knowledge tracing (KT) aims to trace a student's knowledge states accurately which is often realized via the task of predicting students’ future performance according to their historical interactions <cit.>. This enables online education platforms to better assess students' comprehension and provide tailored assistance for learners <cit.>. A number of KT models have demonstrated their effectiveness  <cit.>. However, these methods require a substantial amount of student interactions to adequately train a KT model for a specific subject. In the majority of knowledge tracing datasets, students' problem-solving records typically have a scale of several hundred thousand or more. 
For example, there are 346,860 interactions in ASSISTment 2009, which is sufficient to train a KT model, whereas it is difficult for a newly developed education system to gather such a substantial amount of data at the beginning. In practice, developing a new online education system or adding a new question bank to an existing one often lacks the availability of abundant student interactions, leading to a scarcity of data for training a KT model <cit.>. This scarcity of interactions results in the challenge of insufficient training data. As shown in Fig. <ref>, common KT models suffer significant AUC degradation when confronted with limited data sizes. Currently, few studies have delved into the data scarcity issue in knowledge tracing. Some prior efforts have aimed to address the sparsity problem that students tend to interact with only a small set of questions, using approaches like pre-trained question embeddings <cit.> or contrastive learning <cit.>. However, these methods assume relatively sufficient interaction records. In cases where interactions are severely limited, these approaches might not be effective in learning question embeddings and students' knowledge states, as they heavily rely on a large amount of student interactions. Thus, the scarcity of student interactions in new educational systems presents a significant challenge. In the absence of student interaction records in the new system, numerous student interaction records from diverse online education platforms offer valuable insights. These records may originate from different sources, yet they all mirror the process of knowledge acquisition. Extracting overarching cognitive patterns from these varied student interaction sequences is pivotal for addressing the data scarcity issue in KT tasks. AdaptKT, as proposed by <cit.>, delved into domain adaptation for knowledge tracing. However, AdaptKT demands the question texts and relatively sufficient interactions in the target domain, requirements that are often not met in reality. Moreover, interaction sequences from different education systems differ substantially, so it is infeasible to use them directly as training data. Thus, the significant differences between various educational systems present another major challenge. We innovatively frame the dilemma of the new education system as a domain generalization issue, as shown in Fig. <ref>. We aim to design a novel KT method that derives meta-knowledge from student interaction sequences drawn from different education systems (source domains), enabling seamless transfer to a new education system (target domain). In the context of domain generalization for knowledge tracing, different domains correspond to the questions and interactions from different online education platforms, such as an engineering statics course and an algebra course. These domains have different knowledge concepts, and these concepts are related to different questions; we aim to assess students' mastery of the various concepts. Sufficient student interaction records from specific systems are treated as source domains, while a limited number of interactions in the new system represent the target domain. To solve the domain generalization issue, this paper introduces a novel approach, namely Domain Generalizable Knowledge Tracing (DGKT). DGKT's goal is twofold: firstly, to utilize data from multiple source domains to train a versatile KT model, and subsequently, to swiftly adapt this model to a novel target domain with a few student interactions.
Notably, DGKT faces two primary challenges: 1) significant differences among the concepts of the source domains, and 2) scarce student interactions within the target domain. For the problem of significant concept differences among the source domains: since diverse source domains encompass different concepts, raw concepts are ineffective for achieving domain generalization. To overcome this challenge, we propose concept aggregation—a clustering algorithm for concept embeddings that analyzes students' interaction sequences from a unified perspective. For the problem of scarce student interactions within the target domain, we find it difficult to learn the target question embeddings with only a few interactions. Consequently, we design a unique concept representation for the target domain that can adapt effectively with only a few training interactions. Moreover, we propose a relation-based attention mechanism that fully leverages exercise information in the target domain. The contributions of this paper are summarized as follows:

* A novel solution of knowledge tracing for the data scarcity issue. The data scarcity issue is valuable but insufficiently studied. We aim to train a domain-generalizable KT model from auxiliary source domains and quickly adapt the model to new target domains with only a few interactions.

* Innovative concept aggregation for reducing domain discrepancy. This addresses the challenge that questions and concepts from different domains have enormous differences. Through concept aggregation, similar concepts are associated with the same cluster centroid.

* Domain-generalizable relation-based knowledge tracing. We introduce a domain-generalizable relation-based knowledge tracing approach that leverages relational information within the target domain. Our specially designed model demonstrates superior performance in domain generalization for knowledge tracing.

After conducting extensive experiments on five benchmark datasets, our approach achieves an average AUC improvement of 4.16% over five KT methods on domain generalization knowledge tracing tasks.

§ RELATED WORK

In this section, we introduce the related work, including cognitive diagnosis, knowledge tracing and domain generalization.

§.§ Cognitive Diagnosis

In online education systems, cognitive diagnosis is an essential task aimed at analyzing a student's performance and providing a detailed evaluation of their proficiency across various concepts. Over the past decades, numerous cognitive diagnosis models have been developed, which can be broadly classified into two categories: traditional cognitive diagnosis models and deep learning-based cognitive diagnosis models. Traditional models include IRT <cit.>, DINA <cit.> and MIRT <cit.>, among which one of the most prominent is IRT. IRT represents a student's ability as a one-dimensional scalar and calculates the probability of correctly answering an exercise using a logistic function. This model has been widely used due to its simplicity and interpretability. However, it is limited by its assumption of unidimensionality and its relatively simplistic modeling of student responses. In contrast, deep learning-based cognitive diagnosis models leverage the power of neural networks to capture more complex patterns in student data. A notable example is NeuralCD <cit.>, which applies a multilayer perceptron to model the cognitive diagnosis process.
By utilizing deep neural networks, NeuralCD can learn intricate relationships between student performance and underlying knowledge concepts. While cognitive diagnosis models are effective in evaluating a student's proficiency at a given point in time, they often lack the capability to continuously track the student's knowledge state over an extended learning period. This limitation hinders their ability to provide ongoing, dynamic feedback that adapts to the student's evolving learning needs.

§.§ Knowledge Tracing

Knowledge tracing is an essential task that traces a student's knowledge state over an extended learning period. Previous methods for KT can be divided into two categories, i.e., traditional machine learning methods and deep learning methods. Among the methods based on traditional machine learning algorithms, the most representative one is BKT <cit.>. BKT builds a hidden Markov model for each knowledge concept to predict a student's mastery of specific concepts. Other traditional machine learning KT models include Performance Factors Analysis (PFA) <cit.> and item response theory (IRT) <cit.>. Recently, many deep models for KT have emerged. The earliest deep model for KT is Deep Knowledge Tracing (DKT) <cit.>. DKT applies recurrent neural networks (RNNs) and outperforms traditional KT models. Many variants of DKT have been proposed since, such as DKT+ <cit.>. The Exercise-Enhanced Recurrent Neural Network (EERNN) <cit.> is proposed for student performance prediction by taking full advantage of both student exercising records and the text of each exercise. Another typical deep KT model is Separated Self-AttentIve Neural Knowledge Tracing (SAINT) <cit.>, which applies the Transformer to KT tasks. Attentive Knowledge Tracing (AKT) <cit.> is another attention-based KT model, using a novel monotonic attention mechanism that relates a learner's future responses to assessment questions to their past responses. Learning Process-consistent Knowledge Tracing (LPKT) <cit.> monitors students' knowledge state by directly modeling their learning process. As far as we know, few works focus on the data scarcity issue in online education systems, and we innovatively attempt to train a generalizable KT model.

§.§ Domain Generalization

Domain generalization (DG) has attracted increasing interest in recent years <cit.>. Some DG methods improve generalization by designing novel learning strategies or augmentation <cit.>. Mancini et al. use learnable weights for aggregating the predictions from different source-specific classifiers <cit.>, where a domain predictor is adopted to predict the probability of a sample belonging to each domain. Learning and Removing Domain-specific features for Generalization (LRDG) <cit.> learns a domain-invariant model by tactically removing domain-specific features from the input images. There are several DG methods based on meta-learning. Inspired by model-agnostic meta-learning (MAML) <cit.>, Li et al. propose Meta-Learning for DG (MLDG) <cit.>, which is an optimization-based meta-learning strategy for domain generalization. MetaReg <cit.> introduces a meta regularizer in meta-learning for fine-grained regularization. Another way to solve the DG problem is representation learning. Many DG methods reduce distribution discrepancy across training domains to learn domain-invariant representations via feature normalization <cit.> or by minimizing feature distribution divergence explicitly <cit.>.
SNR <cit.> is proposed to simultaneously ensure both high generalization and discrimination capability of the networks. However, these methods cannot be directly borrowed for KT tasks since KT relies on comprehending a student's learning trajectory. § DOMAIN-GENERALIZABLE KNOWLEDGE TRACING In this section, we elaborate on the proposed DGKT method. Firstly, we formulate knowledge tracing (KT) tasks and their domain generalization setting. Subsequently, we present the model architecture and concept aggregation. Furthermore, we introduce the generalization of the KT model, which empowers the model to acquire meta-knowledge from source domains and effectively apply it to the target domain. §.§ Problem Definition A KT task with few sequences. In a KT task, there are very few sequences of interactions for training, denoted as I = {(q_1, r_1), (q_2, r_2), ..., (q_T, r_T)}. Here, T represents the length of the sequence, q_t ∈ℕ^+ corresponds to the question ID of the t-th interaction (t≤ T), and r_t ∈{0, 1} indicates the correctness of the student's answer to the question q_t. These questions are associated with n_c concepts that students need to master. The objective is to train a KT model capable of mining the knowledge state of students and predicting the probability that a student will answer the next question correctly, denoted as P(r_T+1 | q_T+1, I). Since data scarcity affects model training, we resort to other available sequences as auxiliary source domain data for domain generalization. Domain generalization setting. In the context of domain generalization for knowledge tracing, there are N auxiliary source domains {D_s^i | 1≤ i ≤ N} and a target domain D_t. Each source domain D_s^i consists of m_i sequences of student interactions, denoted as D_s^i = {I_j^i | 1≤ j ≤ m_i}, along with a set of questions {q_1, q_2,...,q_n_q_i} and a set of concepts {c_1, c_2,...,c_n_c_i}, where n_q_i represents the number of questions and n_c_i represents the number of concepts. The target domain, on the other hand, contains only m_t student interaction sequences, represented as D_t = {I_j^t | 1 ≤ j ≤ m_t}, with m_t ≪ m_i. Importantly, the data D_t from the target domain is unseen during the meta-training phase. The objective is to train a generalized KT model by leveraging all the source domain data {D_s^i|1 ≤ i ≤ N}, and subsequently adapt this model to the target domain data D_t for knowledge tracing. §.§ Model Architecture The process outlined in Fig. <ref> illustrates the key steps of our DGKT approach. Initially, we employ the feature embedding module to transform each interaction sequence into an embedding sequence. These embedding sequences are then input into the knowledge state encoder, which generates the hidden knowledge state. Ultimately, this knowledge state is decoded by the knowledge state decoder, producing the predicted probability of a student providing a correct response to the subsequent question. Feature embedding module. We aim to convert the sequence of questions and responses into an embedding sequence. For each knowledge tracing domain, we have a concept matrix Q∈ℝ^n_c × n_q, where n_q represents the total number of question IDs and n_c represents the total number of knowledge concept IDs. The element Q_ij is set to 1 if the j-th question is related to the i-th knowledge concept.
Thus, we learn the embedding e_q_t∈ℝ^d of question q_t (d is the dimension of the embedding), which can be written as e_q_t = ∑_i𝕀(Q_iq_t = 1) e_i/∑_i𝕀(Q_iq_t = 1), where 𝕀(·) represents the indicator function, which equals 1 if the condition inside the parentheses is true, and 0 otherwise. e_i∈ℝ^d is a learnable vector representing the i-th concept. Using the questions and responses, we construct the question-response embedding e_qr_t∈ℝ^2d as follows: e_qr_t = e_q_t⊕0, if r_t = 1, 0⊕ e_q_t, if r_t = 0, where 0 = (0,0,...,0) is an all-zero vector with the same dimension d as e_q_t, and ⊕ is the concatenation operator. By concatenating all the embeddings in a sequence, we obtain the question embedding matrix M_q{1:T} = (e_q_1, e_q_2, ..., e_q_T) ∈ℝ^d × T and the question-response embedding matrix M_qr{1:T} = (e_qr_1, e_qr_2, ..., e_qr_T) ∈ℝ^2d × T. Knowledge tracing backbone. To encode the question embedding and question-response embedding into a knowledge state, we employ the backbone of an existing knowledge tracing model. Here we take DKT, SAINT and AKT as examples to illustrate how to transform embeddings into a knowledge state using off-the-shelf knowledge tracing models. For DKT, we use an LSTM as the knowledge state encoder to obtain the student's knowledge state: f_t = σ(W_fe_qr_t+U_fh_t-1+b_f), i_t = σ(W_ie_qr_t+U_ih_t-1+b_i), o_t = σ(W_oe_qr_t+U_oh_t-1+b_o), c̃_t = tanh(W_ce_qr_t+U_ch_t-1+b_c), c_t = f_t ⊙ c_t-1 + i_t ⊙c̃_t, h_t = o_t ⊙tanh(c_t), where e_qr_t denotes the t-th question-response feature and h_t represents the student's knowledge state at timestep t. Likewise, in the AKT model, three attention-based modules are utilized: the question encoder, the knowledge encoder, and the knowledge retriever. The student's knowledge state can be represented as: x_t = Encoder_q(e_q_1,e_q_2,...,e_q_t), y_t = Encoder_k(e_qr_1,e_qr_2,...,e_qr_t), h_t = Decoder((x_1,y_1),(x_2,y_2),...,(x_t-1,y_t-1),x_t), where Encoder_q represents the question encoder, which produces modified, contextualized representations of each question based on the sequence of questions the learner has previously practiced on, Encoder_k represents the knowledge encoder, which produces modified, contextualized representations of the knowledge the learner acquired while responding to past questions, and Decoder represents the knowledge retriever, which retrieves knowledge acquired in the past that is relevant to the current question using an attention mechanism. In AKT, the student's knowledge state h is obtained in the knowledge retriever module by first constructing question features x based on the student's question sequence and constructing question-interaction features y based on the student's question-interaction sequence. Then, h is derived through attention layers applied to x and y. As for SAINT, the Transformer's encoder is responsible for receiving the student's question sequence, while the decoder receives both the student's question-interaction sequence and the output from the encoder: o_t = Encoder(e_q_1,e_q_2,...,e_q_t), h_t = Decoder(e_qr_1,e_qr_2,...,e_qr_t-1,o_1,o_2,...,o_t), where Encoder and Decoder represent the Transformer's encoder and decoder, respectively. Knowledge state decoder. To decode the knowledge state into a probability, we use the knowledge state decoder to predict the probability of a student answering the next question correctly: ŷ_t+1 = Dec(h_t+1, q_t+1) = σ (W_2 ·ReLU(W_1 · [h_t+1, e_q_t+1] + b_1) + b_2), where Dec represents the knowledge state decoder, W_1, W_2 and b_1, b_2 denote weights and biases, respectively, and σ(·) is the sigmoid function.
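To make the embedding construction above concrete, the following minimal PyTorch sketch implements the concept-averaged question embedding and the question-response concatenation. All names, shapes and the toy data are illustrative assumptions, not the authors' reference implementation.

```python
import torch

def question_embedding(Q, E, q):
    """e_q: average of the embeddings of all concepts related to question q.
    Q: (n_c, n_q) binary concept matrix, E: (n_c, d) learnable concept embeddings."""
    related = Q[:, q].bool()           # concepts i with Q[i, q] == 1
    return E[related].mean(dim=0)      # e_q in R^d

def question_response_embedding(e_q, r):
    """e_qr in R^{2d}: concatenate e_q with a zero vector, position depending on r."""
    zeros = torch.zeros_like(e_q)
    return torch.cat([e_q, zeros]) if r == 1 else torch.cat([zeros, e_q])

# toy usage: 4 concepts, 3 questions, embedding dimension 8
Q = torch.tensor([[1, 0, 1], [0, 1, 1], [0, 0, 0], [1, 0, 0]])
E = torch.randn(4, 8, requires_grad=True)
e_q = question_embedding(Q, E, q=2)           # question 2 relates to concepts 0 and 1
e_qr = question_response_embedding(e_q, r=0)  # shape (16,)
```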
§.§ Sequence Instance Normalization To reduce the distribution discrepancy between different domains, utilizing a normalization method is a common solution. However, existing normalization methods may not be applicable to sequence features in the knowledge tracing task, since features after timestep t must be kept unseen when computing the prediction at timestep t. To tackle this dilemma, we design sequence instance normalization (SeqIN) in our KT model to normalize the feature embeddings of sequential student interactions across domains. Since the normalization process must respect the fact that later interactions cannot be seen at earlier timesteps, the feature embedding of the interaction at each moment is normalized using only the statistics of all previous moments. Given a sequential feature matrix M = (m_1, m_2, ..., m_n) where m_t ∈ℝ^d is the feature embedding at time t, the normalized feature matrix M̄ is calculated by M̄=(m̄_1, m̄_2, ..., m̄_n), m̄_t = γ(m_t-μ_t(M)/σ_t(M))+ β, μ_t(M) = 1/t+1 (p + ∑_i=1^t m_i), σ_t(M) = √((p- μ_t(M))^2 + ∑_i=1^t (m_i - μ_t(M))^2/t+1), where γ, β∈ℝ^d are the parameters learned from the data. μ_t(M) and σ_t(M) denote the mean and standard deviation of the sequence {m_1,m_2,...,m_t}(1≤ t≤ n). As seen, SeqIN normalizes m_t by considering all previous feature embeddings up to time t while leaving later feature embeddings unseen. Since it is meaningless to compute the mean and standard deviation of m_1 alone, we add a padding vector p in front of the original sequence, i.e., {p,m_1,m_2,...,m_n}, where p is learned along with the model. Moreover, SeqIN allows the feature embedding matrices of student interactions from different source domains to be aggregated together, which can be clearly observed in Fig. <ref> in the visualization experiments. We apply SeqIN to the aforementioned knowledge tracing backbone networks, including DKT, AKT and SAINT. For DKT, SeqIN is directly applied to the students' knowledge state h: h = SeqIN(h), where h is the students' knowledge state. For AKT, we apply SeqIN to the intermediate features x and y as follows: y = SeqIN(y), x = SeqIN(x), where x and y are respectively derived from the question encoder and the knowledge encoder of AKT. As for SAINT, we apply SeqIN to the intermediate features o: o = SeqIN(o), where o is the output of the Transformer's encoder.
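A possible realization of SeqIN is sketched below. It computes the running mean and (population) standard deviation over the padded prefix {p, m_1, ..., m_t} with cumulative sums, so no statistics from later timesteps leak into timestep t. The epsilon term and the (T, d) tensor layout are implementation assumptions.

```python
import torch
import torch.nn as nn

class SeqIN(nn.Module):
    """Sequence instance normalization: timestep t is normalized using only
    the statistics of the learned padding vector p and the steps up to t."""
    def __init__(self, d, eps=1e-5):
        super().__init__()
        self.gamma = nn.Parameter(torch.ones(d))
        self.beta = nn.Parameter(torch.zeros(d))
        self.pad = nn.Parameter(torch.zeros(d))   # learned padding vector p
        self.eps = eps

    def forward(self, M):                          # M: (T, d), rows m_1..m_T
        padded = torch.cat([self.pad.unsqueeze(0), M], dim=0)       # p, m_1..m_T
        counts = torch.arange(1, padded.size(0) + 1,
                              device=M.device, dtype=M.dtype).unsqueeze(1)
        mean = torch.cumsum(padded, dim=0) / counts                 # mu over {p, m_1..m_t}
        var = torch.cumsum(padded ** 2, dim=0) / counts - mean ** 2 # E[x^2] - E[x]^2
        mu_t = mean[1:]                                             # stats used for m_t
        sigma_t = torch.sqrt(var[1:].clamp_min(0) + self.eps)
        return self.gamma * (M - mu_t) / sigma_t + self.beta

out = SeqIN(d=256)(torch.randn(50, 256))   # normalize a 50-step sequence
```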
§.§ Concept Aggregation We here illustrate concept aggregation for the KT model, which enables the KT model to retrieve meta-knowledge from source domains, aggregate concepts from different domains, and adapt to the target domain. In the knowledge tracing task, each question is typically associated with a few specific knowledge concepts, which are directly utilized by conventional knowledge tracing models. However, this approach is not applicable to domain-generalization knowledge tracing tasks, as knowledge concepts can vary across different domains, and they may even share the same concept ID. Additionally, the features of interaction sequences from different knowledge tracing domains show significant variations. For example, in certain knowledge tracing tasks, students may practice a question repeatedly, while in other tasks, students will move on to the next question regardless of their performance on the current question. Consequently, the question sequences from different domains exhibit different levels of granularity, which poses challenges for knowledge tracing models to effectively analyze a student's learning patterns. To address the issue of granular differences, we propose concept aggregation for domain-specific knowledge concepts. This approach involves learning the domain-specific concept embeddings and running the k-means algorithm on the embeddings across all source domains. Subsequently, we replace the concept embeddings with the calculated centroid embeddings and further train the model using these embeddings. Concept feature learning. We first train the KT model on all source domains. Each source domain learns its own concept embedding matrix E_i = {e_j}_j=1^n_i (cf. Eq. (<ref>)), while the remaining parameters of the knowledge state encoder and decoder, namely θ_enc and θ_dec in the KT model, are shared across all domains. During this stage, we randomly sample data from each source domain. The model is trained by minimizing the classification loss on all source domains, which can be formulated as: {E_i}_i=1^N,θ_enc, θ_decmin 𝔼_D_s^i∼ D_s,(I^i_j,R^i_j)∼ D_s^i [L_CE(Ŷ^i_j, R^i_j)], where L_CE is the cross-entropy loss, I^i_j is the j-th interaction sequence in the i-th source domain, and R^i_j is the corresponding response to the next question. Concept clustering. After the first training stage on the source domains, we obtain the concept embeddings {E_i}_i=1^N from the various domains. In this stage, we apply the k-means algorithm to all concept embeddings, dividing the numerous concepts into k clusters. This process provides the KT model with a coarse-grained perspective of interaction sequences. Specifically, through the k-means procedure, we obtain a cluster assignment matrix A and the embeddings of each centroid E_c: E_cat = [E_1, E_2, ..., E_N], A = Kmeans(E_cat), e_c_i = 1/|C_i|∑_j=1^n_eE_cat_jA_ij, E_c = [e_c_1,e_c_2,...,e_c_k], where E_cat concatenates all concept embeddings from the N source domains, and n_e represents the total number of concepts. A∈{0, 1}^k × n_e is the cluster assignment matrix, where A_ij = 1 means the j-th concept of E_cat is assigned to the i-th centroid. |C_i| denotes the number of concepts in the i-th cluster. e_c_i denotes the embedding of the i-th centroid and E_c∈ℝ^d × k represents the matrix of centroid embeddings. Concept centroid refinement. Based on the results of concept clustering, we replace the concept embeddings of each source domain with the embeddings of the cluster centroids to narrow the differences between the source domains. Accordingly, for domain x, the question embedding is rewritten as: Q̄ = A_x · Q_x, e_q_t = ∑_i𝕀(Q̄_iq_t = 1) e_i/∑_i𝕀(Q̄_iq_t = 1), where A_x ∈{0, 1}^k × n_c_x is the cluster assignment matrix of domain x, in which A_x_ij=1 means the j-th concept of E_x is assigned to the i-th centroid. Q̄ represents the relationship between questions and centroids, where Q̄_iq_t = 1 when the q_t-th question is related to the i-th centroid. Q_x is the concept matrix of domain x. e_i is the i-th vector in E_c; E_c and A are calculated in the concept clustering phase. Similar to Eq. (<ref>), we randomly sample data from the source domains and update the parameters (E_c, θ_enc and θ_dec) of the KT model by minimizing the classification loss. The procedure of concept aggregation can be succinctly summarized as Algorithm <ref>.
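The concept clustering step can be sketched with scikit-learn's KMeans as follows; the assignment matrix A and centroid matrix E_c mirror the quantities above (here with concepts in rows rather than columns, an arbitrary layout choice, and hypothetical toy data).

```python
import numpy as np
from sklearn.cluster import KMeans

def aggregate_concepts(domain_embeddings, k=5):
    """Cluster the concept embeddings of all source domains into k centroids.
    Returns A (k x n_e, A[i, j] = 1 iff concept j is assigned to centroid i)
    and E_c (k x d, the centroid embeddings, i.e. the per-cluster means)."""
    E_cat = np.concatenate(domain_embeddings, axis=0)   # stack E_1 .. E_N
    km = KMeans(n_clusters=k, n_init=10).fit(E_cat)
    A = np.zeros((k, E_cat.shape[0]), dtype=int)
    A[km.labels_, np.arange(E_cat.shape[0])] = 1
    return A, km.cluster_centers_      # sklearn's centers equal the cluster means

# toy usage: four source domains, each with 30 concepts of dimension 256
A, E_c = aggregate_concepts([np.random.randn(30, 256) for _ in range(4)], k=5)
```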
§.§ Generalization to Target Domain For domain generalization of KT tasks, we delve into transferring the learned KT model to a designated target domain. Firstly, we initialize the concept embedding of the target domain and subsequently design a novel concept representation for the target domain which not only exhibits adaptability to the target domain but also maintains the centroid embeddings learned from the source domains to prevent overfitting. Initialization of the target embedding. We begin by initializing the concept embedding for the target domain, namely the target embedding. Given that several concepts in the target domain remain unseen, the optimal approach for initializing the target embedding is to utilize the learned centroid embeddings. For the target embedding, the formulation is as follows: e_t_i∈{e_c_1, e_c_2, ..., e_c_k}, E_t = [e_t_1, e_t_2, ..., e_t_n_c_t], where e_t_i represents the i-th concept embedding in the target domain, 1≤ i≤ n_c_t, and n_c_t is the number of concepts in the target domain. Initially, e_t_i is randomly selected from the set of centroid embeddings. E_t represents the target embedding matrix. Concept representation of the target domain. Regarding the representation of concepts in the target domain, we aim for it to effectively leverage the parameters learned during the concept aggregation phase. Additionally, we seek its efficient adaptation to the target domain, enabling the model to transition swiftly with limited data while avoiding overfitting. For questions in the target domain, we have: e_q_t = ∑_i𝕀(Q_iq_t = 1) e_i/∑_i𝕀(Q_iq_t = 1), j = argmin_1 ≤ i ≤ k ||e_q_t - e_c_i||, e_target = (1-λ) e_c_j + λ e_q_t, where e_i is the concept embedding retrieved from E_t. e_c_j denotes the j-th centroid embedding. λ is a hyperparameter used to adjust the degree of freedom of the target concept embedding. e_target is the final representation of q_t in the target domain that will be fed into the KT model. Target embedding adaptation. We fine-tune our KT model on the target domain using the aforementioned concept representation. During this phase, only the target embedding E_t is optimized, while the other parameters, including θ_enc and θ_dec, are frozen, similar to Eq. (<ref>). In this way, our model effectively adapts to the target domain while fully utilizing the centroid embeddings learned in concept aggregation. For clarification, the whole process of generalization is summarized in Algorithm <ref>.
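The λ-interpolated target representation can be sketched in a few lines; lam and the tensor names are illustrative, with lam playing the role of the hyperparameter λ above.

```python
import torch

def target_representation(e_q, E_c, lam=0.7):
    """Blend the adaptable target-domain question embedding e_q with its
    nearest learned centroid e_c_j: (1 - lam) * e_c_j + lam * e_q."""
    j = torch.argmin(torch.norm(E_c - e_q, dim=1))   # index of the nearest centroid
    return (1 - lam) * E_c[j] + lam * e_q

# toy usage with k = 5 centroids of dimension 256
e_target = target_representation(torch.randn(256), torch.randn(5, 256))
```

During target adaptation, only e_q (that is, the target embedding E_t) would receive gradients, while the frozen centroid term anchors the representation and counteracts overfitting.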
§ DOMAIN-GENERALIZABLE RELATION-BASED KNOWLEDGE TRACING Although our proposed DGKT framework effectively enhances the performance of existing KT models in new domains, these models are not inherently designed for domain generalization in knowledge tracing. One of the biggest challenges hindering these models from transferring to a new domain is the complete discrepancy in exercise IDs across domains, which leads to an inability to fully leverage the information from exercise IDs in students' interaction sequences. To address this issue, we propose a domain-generalizable relation-based knowledge tracing model (DGRKT) specifically designed for domain generalization in knowledge tracing. This model captures the relationships between exercises and concepts across different timesteps based on an attention mechanism. In this section, we illustrate DGRKT in detail, including the DGRKT model architecture and the relation-based attention encoder. §.§ DGRKT Model Architecture Similar to the aforementioned model architecture, our DGRKT consists of a feature embedding module, a knowledge state encoder, and a knowledge state decoder. In DGRKT, the feature embedding module and knowledge state decoder remain the same, while the knowledge state encoder is specifically designed to capture the relationships between different timesteps. As described above, we can transform a student's interaction sequence into a question embedding matrix and a question-response embedding matrix: M_q, M_qr = FeatureEmbedding((q_1, r_1),(q_2, r_2),...,(q_t, r_t)). Given a question embedding matrix M_q and a question-response embedding matrix M_qr, the output knowledge state can be represented as: x_t = R-Attn(𝒬=M_q, 𝒦=M_q, 𝒱=M_qr), h_t = SeqIN(x_t), where M_q denotes the question embedding matrix and M_qr denotes the question-response embedding matrix. R-Attn(·) is the relation-based attention mechanism, which will be elaborated in the upcoming subsection. 𝒬, 𝒦, and 𝒱 represent the query, key, and value, respectively, in the attention mechanism. SeqIN(·) is the aforementioned sequence instance normalization. Likewise, the probability of a student answering the next question correctly can be calculated as: ŷ_t+1 = Dec(h_t+1, q_t+1). As before, we conduct concept aggregation to realize generalization to the target domain. §.§ Relation-based Attention Encoder To explicitly capture the relation between different timesteps and fully leverage exercise IDs from different domains, we have designed a novel relation-based attention encoder that adjusts the attention value α based on the relation between two timesteps. We assume that the more similar a question a student answered at a previous time is to the current question, the more valuable the information from that previous time will be. Therefore, we define three types of relations based on the relevance between two timesteps to guide the learning of the relation-based attention encoder, as shown in Fig. <ref>. Specifically, given two timesteps i and j, and their corresponding interactions (q_i, r_i) and (q_j, r_j), the relevance between the two exercises q_i and q_j can be represented as: R(i,j) = 𝕀(q_i=q_j) + 𝕀(C_i ∩ C_j ≠∅), where R(i, j) denotes the relation between timestep i and timestep j. R(i, j) = 0 indicates q_i and q_j are irrelevant, R(i, j) = 1 indicates q_i and q_j have common related concepts, and R(i, j) = 2 indicates q_i and q_j are the same. C_i and C_j are the concept sets associated with q_i and q_j, respectively, consisting of their related concepts. Following the standard attention mechanism, we let e_qr correspond to the value and e_q correspond to the query and key. In the attention block, the attention value α_ij is calculated as: q_i = W_qe_q_i, k_j = W_ke_q_j, α_ij = Softmax(q_i^Tk_j/√(d)), where W_q and W_k ∈ℝ^d × d are linear transformations and d is the dimension of the features. Normally, in an attention mechanism, the calculated attention value α is directly used to compute the weighted sum. However, in the relation-based attention encoder, our model aims to explicitly differentiate between different relations. Therefore, we adjust the attention value α_ij based on the relation R(i,j). Specifically, for timestep j, its attention values [α_1j, α_2j, ..., α_(j-1)j] are modified as follows: v_i = W_v e_qr_i, λ_ij = a if R(i,j) = 0, b if R(i,j) = 1, c if R(i,j) = 2, A = [λ_1jα_1j, λ_2jα_2j, ..., λ_(j-1)jα_(j-1)j], x_j = ∑_i=1^j-1λ_ijα_ij/∑ A v_i, where W_v ∈ℝ^d × 2d is a linear transformation for the value vectors v_i, and λ_ij represents the importance of timestep i for timestep j. It is noted that 0 < a < b < c, ensuring that λ_ij increases with the relevance of i and j. The output of the relation-based attention encoder, denoted as x, is computed as this weighted sum of the value vectors v.
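The following sketch shows one way to realize the relation-based rescaling for a whole sequence at once. The causal masking, the -1e9 fill value, and the renormalization are implementation assumptions, and the first timestep, which has no history, simply yields a zero vector here.

```python
import torch

def relation_matrix(questions, concept_sets):
    """R[i, j] = I(q_i == q_j) + I(q_i and q_j share a related concept)."""
    T = len(questions)
    R = torch.zeros(T, T, dtype=torch.long)
    for i in range(T):
        for j in range(T):
            R[i, j] = int(questions[i] == questions[j]) \
                    + int(len(concept_sets[i] & concept_sets[j]) > 0)
    return R

def relation_based_attention(Mq, Mqr, R, Wq, Wk, Wv, abc=(1.0, 2.0, 4.0)):
    d = Wq.size(1)
    q, k, v = Mq @ Wq, Mq @ Wk, Mqr @ Wv    # queries/keys from M_q, values from M_qr
    scores = (q @ k.t()) / d ** 0.5         # scores[j, i] = q_j . k_i / sqrt(d)
    mask = torch.tril(torch.ones_like(scores), diagonal=-1)  # j attends to i < j only
    alpha = torch.softmax(scores.masked_fill(mask == 0, -1e9), dim=-1)
    lam = torch.tensor(abc)[R]              # lambda_ij = a, b or c via R(i, j)
    w = alpha * lam * mask
    w = w / w.sum(dim=-1, keepdim=True).clamp_min(1e-9)      # renormalize, as in sum over A
    return w @ v                            # x_j = sum_i (lam_ij alpha_ij / sum A) v_i

T, d = 6, 16
x = relation_based_attention(
    torch.randn(T, d), torch.randn(T, 2 * d),
    relation_matrix([1, 2, 1, 3, 2, 1], [{0}, {1}, {0}, {2}, {1}, {0}]),
    torch.randn(d, d), torch.randn(d, d), torch.randn(2 * d, d))
```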
§ EXPERIMENTS In this section, we evaluate the performance of our proposed method on domain-generalization knowledge tracing using five well-known knowledge tracing benchmark datasets. We also conduct visualization experiments along with a cluster analysis that demonstrate the effectiveness of our proposed concept aggregation. §.§ Datasets and setting. We evaluate the performance of our DGKT using five well-known KT benchmark datasets: ASSISTment 2009[https://sites.google.com/site/assistmentsdata/home/assistment-2009-2010-data/skill-builder-data-2009-2010], ASSISTment 2015[https://sites.google.com/site/assistmentsdata/home/2015-assistments-skill-builder-data], ASSISTment 2017[https://sites.google.com/view/assistmentsdatamining/dataset], Algebra 2005[https://pslcdatashop.web.cmu.edu/KDDCup/], and Junyi[http://www.junyiacademy.org/]. The first three datasets are provided by an online education system called ASSISTment <cit.> with different exercises and students, and are widely used to evaluate KT models. Junyi <cit.> was collected from an e-learning platform called Junyi Academy, which was established in 2012. Algebra 2005 is provided by the KDD Cup 2010 Educational Data Mining challenge and contains 13–14 year old students' interactions on algebra questions. For convenience, these datasets are abbreviated as ASSIST09, ASSIST15, ASSIST17, ALGEBRA05 and JUNYI in the following. In the DGKT model, after conducting a grid search, the dimension d of feature embeddings is set to 256, and the number k of concept clusters is set to 5. The optimization of the model employs the Adam optimizer with a learning rate of 0.0001, and the batch size is fixed at 32. In relation-based attention, a, b and c are set to 1, 2 and 4, respectively. Our approach adopts a domain generalization setup, where one dataset serves as the target domain, while the remaining four datasets act as four source domains. For concept feature learning, we conduct 12,000 epochs of training on the source domains. In each epoch, every source domain is sampled for a batch, and the model is trained on four batches of data. Subsequently, for concept centroid refinement, we similarly conduct 6,000 epochs of training. On the target domain, we test our method with different scales of training data, from a single batch of data to 8 batches of data, and a fine-tuning process of 15 epochs is conducted. This approach allows us to assess the effectiveness of knowledge tracing across all five datasets. §.§ Comparison Results and Analysis We present a comparative analysis between our proposed DGKT method and four prominent knowledge tracing models, i.e., DKT <cit.>, SAINT <cit.>, AKT <cit.>, and SimpleKT <cit.>. For a fair comparison, we initially pre-train these KT models on all available source domains, followed by fine-tuning them on the specific target domain. DKT, AKT, and SimpleKT are reimplemented from their public source code, while SAINT is reimplemented based on its paper. The hyperparameters are set to the values provided in the literature. Domain Generalization of Knowledge Tracing Tasks. The AUC results for all compared KT methods in the context of domain generalization are presented in Table <ref>.
DKT, SAINT, AKT and SimpleKT are pretrained on all source domains and fine-tuned on the target domain, while DG-DKT, DG-SAINT, DG-AKT and DG-SimpleKT are the corresponding KT models equipped with the DGKT framework. These results highlight the effectiveness of the DGKT approach in enhancing knowledge tracing performance with limited training data. In the target domain, DGKT demonstrates significant improvements due to its robust generalization capabilities. Additionally, we find that DG-DKT performs better than the other DGKT-based models on several datasets despite its relatively small parameter size. We believe this is possibly because the relatively small parameter size gives DG-DKT better generalization ability. All knowledge tracing models integrated with the DGKT framework exhibit superior performance compared to traditional knowledge tracing models pretrained on the source domains. As the size of the data increases, our method consistently maintains its advantage. §.§ Ablation Studies. Concept Aggregation. We conducted ablation experiments to investigate the impact of our proposed concept aggregation methods, which include concept feature learning (CFL), centroid refinement (CR), and target embedding adaptation (TA). Specifically, we compare our DGKT model with three variants: DGKT-w/o CFL, DGKT-w/o CR, and DGKT-w/o TA. The details of these variants are presented below: * DGKT-w/o CFL: This variant omits concept feature learning. The model is directly trained on the target domain without pretraining on the source domains. * DGKT-w/o CR: This variant omits centroid refinement. After concept feature learning, the model is directly fine-tuned on the target domain without centroid refinement. * DGKT-w/o TA: This variant omits target embedding adaptation. The model is fine-tuned on the target domain using randomly initialized embeddings, and question embeddings are calculated solely by Eqs. (<ref>). From Table <ref>, we can observe that DGKT without concept feature learning (CFL) shows a significant decrease in AUC. This indicates that CFL is a crucial procedure, as it derives meta-knowledge from the source domains. Additionally, concept aggregation proves to be beneficial for learning a robust concept embedding for DGRKT, as well as for adapting the target embedding effectively. Sequence Instance Normalization. Moreover, we conduct an ablation experiment on Sequence Instance Normalization (SeqIN): we compare DG-DKT with three variants: DG-DKT -w/o SeqIN (SeqIN removed), DG-DKT -BN (SeqIN replaced by BN) and DG-DKT -LN (SeqIN replaced by LN). We omit instance normalization because its calculation inevitably includes all timesteps, leading to information leakage in knowledge tracing. The results, shown in Table <ref>, demonstrate the advantage of SeqIN. Specifically, DG-DKT without SeqIN (DG-DKT -w/o SeqIN) shows a significant decrease in AUC, indicating that the normalization method is crucial in such cross-domain tasks and that Sequence Instance Normalization is a better normalization module for domain generalization of knowledge tracing. §.§ Visualization We demonstrate the effectiveness of our proposed method via visualization, including visualizing the feature embeddings via t-SNE, a student's mastery, and the attention scores of DGRKT. Visualization of SeqIN. To further demonstrate the effectiveness of our SeqIN, we visualize the knowledge state h via t-SNE <cit.> with different normalization methods in Fig. <ref>.
This figure shows four different source domains (ASSIST15, ASSIST17, Junyi and Algebra05), and each color represents one source domain. Moreover, we use the Proxy 𝒜-distance to calculate the divergence between different domains <cit.>. From Fig. <ref> we can observe that SeqIN significantly aggregates the feature embeddings from different source domains, resulting in the lowest 𝒜-distance, while BN and LN do not work well on the embedding sequences of student interactions. This again shows the efficacy of our SeqIN in reducing the domain discrepancy for knowledge tracing. Visualization of a student's mastery. We also visualize a student's mastery of the five concept clusters. The interaction sequence is derived from ASSIST09 and consists of the initial 50 interactions. A green rectangle indicates that the student correctly answered a question from the corresponding concept cluster, while a red rectangle means that the student failed. A student's mastery of a concept cluster is calculated from the predicted performance on it. We find that our DGKT is capable of providing a coarse-grained but effective assessment despite the data scarcity issue. Visualization of attention scores. Here, we demonstrate the effectiveness of relation-based attention using a heatmap visualization. As shown in Fig. <ref>, we present the attention scores for four attention heads: the left side displays scores from DGRKT without relation-based attention (DGRKT -w/o RA), while the right side shows scores from DGRKT with relation-based attention. We observe that, with the help of relation-based attention, DGRKT effectively distinguishes the relationships between different timesteps, focusing on those with closer connections. In contrast, the attention scores in DGRKT -w/o RA are smoother, indicating a reliance primarily on the sequential order. Cluster analysis. We conducted an analysis of the aggregated concepts from ASSIST09, randomly selecting 8 concepts from each cluster. As shown in Table <ref>, it is evident that the first cluster primarily consists of fundamental and basic concepts, while the second cluster contains many advanced algebraic concepts. The third, fourth, and last clusters respectively include geometry concepts, advanced data representation, and statistical concepts. Although the concepts are not strictly categorized into five clusters, those with similar patterns in student interactions have been grouped together. These results clearly demonstrate the effectiveness of the concept aggregation process. §.§ Hyperparameter Analysis We conduct several experiments on λ in the concept representation of the target domain. We present the AUC of DGRKT with different λ and different data sizes in Fig. <ref>. When λ is set too low, the constraints on the target domain become too strong, preventing the model from learning effective representations of the target domain, which leads to a decrease in target-domain AUC. Conversely, when λ is set too high, the constraints diminish, causing the model to lose the information from its learned centroid embeddings. λ achieves the best performance when set to 0.7. § CONCLUSION This paper introduces a novel approach, Domain Generalizable Knowledge Tracing (DGKT), as a solution to the data scarcity issue within education systems. DGKT capitalizes on the utilization of multiple source domains to train a versatile KT network, enabling rapid adaptation to new target domains with commendable accuracy.
Additionally, we introduce a novel concept aggregation technique along with a target-domain concept representation, enabling the KT model to analyze students' interaction sequences from a coarse-grained perspective and to adapt effectively to the target domain with minimal data. Moreover, we propose domain-generalizable relation-based knowledge tracing (DGRKT), which utilizes a relation-based attention encoder. Experimental evaluations conducted on five benchmark datasets demonstrate substantial improvements when compared to existing KT models. In conclusion, this study showcases the potential of DGKT in providing a versatile and accurate solution for knowledge tracing across a spectrum of educational domains.
http://arxiv.org/abs/2407.02055v1
20240702083705
Abstract Dialectical Frameworks are Boolean Networks (full version)
[ "Jesse Heyninck", "Matthias Knorr", "João Leite" ]
cs.AI
[ "cs.AI" ]
§ ABSTRACT Abstract dialectical frameworks are a unifying model of formal argumentation, where argumentative relations between arguments are represented by assigning acceptance conditions to atomic arguments. Their generality allows them to cover a number of different approaches with varying forms of representing the argumentation structure. Boolean regulatory networks are used to model the dynamics of complex biological processes, taking into account the interactions of biological compounds, such as proteins or genes. These models have proven highly useful for comprehending such biological processes, allowing to reproduce known behaviour and testing new hypotheses and predictions in silico, for example in the context of new medical treatments. While both these approaches stem from entirely different communities, it turns out that there are striking similarities in their appearance. In this paper, we study the relation between these two formalisms, revealing their commonalities as well as their differences, and introducing a correspondence that allows us to establish novel results for the individual formalisms. § INTRODUCTION Formal argumentation is one of the major approaches to knowledge representation and reasoning. In the seminal paper <cit.>, Dung introduced abstract argumentation frameworks, conceived as directed graphs where nodes represent arguments and edges between nodes represent attacks. Their meaning is given by so-called argumentation semantics that determine which sets of arguments can be reasonably upheld together given such an argumentation graph. Various authors have since remarked that other relations between arguments are worth consideration, such as, for example, a dual support relation as in bipolar argumentation frameworks <cit.>. The last decades witnessed a proliferation of extensions of the original formalism that has often made it hard to compare the dialects of the different resulting argumentation formalisms. In an attempt to cope with the resulting number of dialects and unify them, abstract dialectical frameworks (in short, ADFs) were introduced <cit.>. Just like abstract argumentation frameworks, ADFs are also directed graphs. However, in ADFs, edges between nodes do not necessarily represent attacks, but can encode any relationship between arguments. Such generality is achieved by associating an acceptance condition with each argument, represented as a Boolean formula over the parents of the argument, expressing the conditions under which the argument can be accepted. ADFs offer a general framework for argumentation-based inference as they are able to capture all of the major semantics of abstract argumentation <cit.>, and even normal logic programs <cit.>. The following example illustrates a simple ADF. You are considering your travel plans for the upcoming conference summer. If you manage to write a paper, it will be suitable for a conference in Texas or in Vietnam. If you submit the paper to the conference in Texas, you cannot submit it to the conference in Vietnam (but the conference in Texas does allow for submission of papers submitted elsewhere). Both conferences will require you to apply for travelling funds. This example can be expressed formally in the ADF given in Fig. <ref>.a).
Interactions between atoms are expressed by arrows (e.g. writing a paper influences attendance of a conference in both Vietnam and Texas), and the acceptance conditions express these interactions more precisely. E.g., C_v = ¬t ∧ p expresses that conference attendance in Vietnam is possible if one has a paper and did not submit it to the conference in Texas. In systems biology, biological regulatory networks encode interactions between biological specimens or compounds, such as proteins or genes, and their interactions, to acquire a better understanding of the complex processes that take place in cells, as doing so may lead to discoveries and new theories about living organisms. To abstract from actual concentration values, and use thresholds to represent whether a compound is active or inactive, logical models <cit.> are often used instead of quantitative models. Because of that, they require far less information than quantitative models and are therefore more adequate to deal with incomplete, imprecise, and noisy information regarding the biological system. Among these, Boolean (logical) models or networks (BNs) have been extensively used to reproduce known behaviour and test new hypotheses in silico, e.g., as models of gene regulation networks and other biological systems <cit.>. Consider the following toy scenario of a biosphere where four species are potentially present: native quagga mussels, invasive zebra mussels, algae, and fish. These species interact as follows: fish feed on mussels, zebra mussels outcompete quagga mussels, and both kinds of mussels feed on algae. These interactions can be represented using the biological network in Fig. <ref>.b). Notice that this is a simplified representation of a biological network, meant merely to ease understanding. Influences are represented by arrows, marked with a “+” if the influence is positive (e.g. since fish feed on mussels), and a “-” if the influence is negative (e.g. since zebra mussels outcompete quagga mussels). Furthermore, the precise nature of these interactions is encoded by Boolean functions on the right-hand side of the figure. For example, f_q = ¬z ∧ a expresses that quagga mussels will be alive if there are no zebra mussels and there are algae. These examples show striking similarities between these models of two very different subject matters: ADFs and BNs. Syntactically, they both use directed graphs to encode interactions, and Boolean formulae to express the precise nature of such interactions. The open question is whether such similarities extend from the syntactic to the semantic level. If that were the case, it would open up a fruitful ground for the cross-fertilization of both fields, both theoretically, where established results from one could provide new insights for the other, but also practically, e.g., by allowing implementations for one to be used by the other. In this paper, we compare ADFs and BNs. After formally introducing ADFs in Sec. <ref>, in Sec. <ref> we introduce BNs and show that the similarities with ADFs extend from the syntactic to the semantic level, by pointing out in a formally precise way what the equivalent notions are, but also what the differences are. Our results show that ADFs are essentially BNs, creating a fruitful ground for investigating synergies, as illustrated in Sec. <ref>. We then conclude in Sec. <ref>. § ABSTRACT DIALECTICAL FRAMEWORKS We recall ADFs, loosely following the notation from <cit.>.
An ADF D is a tuple D=(𝒮,L,C) where 𝒮 is a finite set of atoms, representing arguments or statements, L⊆𝒮×𝒮 is a set of links, representing dependencies or attacks from one argument against another, and C={C_s}_s∈𝒮 is a set of total functions (also called acceptance functions) C_s:2^par_D(s)→{𝐭,𝐟} for each s∈𝒮 with par_D(s)={s'∈𝒮| (s',s)∈ L} and truth values true (𝐭) and false (𝐟). An acceptance function C_s defines the cases when the statement s can be accepted (is true), depending on the acceptance status of its parents in D. We often identify an acceptance function C_s by its equivalent acceptance condition which models the acceptable cases as a propositional formula over 𝒮 and the usual Boolean connectives ∧ (and), ∨ (or), ¬ (negation) and → (material implication). Also, the set of links of D is completely determined by C_s and sometimes left implicit. In Ex. <ref>, we find an ADF D=({f,p,t,v},L,C) with L and C as in Fig <ref>.a). Here, we omit reflexive arrows (e.g. (p,p)) to avoid clutter. An acceptance condition like C_v=¬t ∧ p can be read as “v is accepted if t is not accepted and p is accepted”. Interpretations can be used to formally assign meaning to these acceptance conditions. An interpretation (also called possible world) is a function ω:𝒮→{𝐭,𝐟}. Let 𝒱^2(𝒮) denote the set of all interpretations for 𝒮. We simply write 𝒱^2 if the set of atoms is implicitly given. An interpretation ω satisfies (or is a model of) an atom a∈𝒮, denoted by ω⊨ a, if and only if ω(a)=𝐭. The satisfaction relation ⊨ is extended to formulas as usual. Then, an interpretation ω is a two-valued model of an ADF D if, for all s∈𝒮, ω⊨ s iff ω⊨ C_s. For sets of formulas Φ, we also define ω⊨Φ if and only if ω⊨ϕ for every ϕ∈Φ, and the set of models ⟦Φ⟧={ω∈𝒱^2(𝒮)|ω⊨Φ} for every set of formulas Φ. A set of formulas Φ_1 entails another set of formulas Φ_2, denoted by Φ_1⊢Φ_2, if ⟦Φ_1⟧⊆⟦Φ_2⟧. A formula ϕ is a tautology if ⟦ϕ⟧=𝒱^2(𝒮) and inconsistent if ⟦ϕ⟧=∅. We compare two possible worlds ω_1 and ω_2 by: ω_1≤ω_2 iff for every α∈𝒮, ω_1(α)=𝐭 implies ω_2(α)=𝐭. Commonly though, an ADF D=(𝒮,L,C) is interpreted through 3-valued interpretations ν:𝒮→{𝐭,𝐟,𝐮}, adding the truth value undecided (𝐮). We denote the set of all 3-valued interpretations over 𝒮 by 𝒱^3(𝒮). We define the information order <_i over {𝐭,𝐟,𝐮} by making 𝐮 the minimal element: 𝐮 <_i 𝐭 and 𝐮 <_i 𝐟, and †≤_i ‡ iff † <_i ‡ or †=‡ for any †,‡∈{𝐭,𝐟,𝐮}. This order is lifted point-wise as follows (given ν,ν'∈𝒱^3(𝒮)): ν≤_i ν' iff ν(s)≤_i ν'(s) for every s∈𝒮. The set of two-valued interpretations extending a 3-valued interpretation ν is defined as [ν]^2={ω∈𝒱^2(𝒮)|ν≤_i ω}. Given a set of 3-valued interpretations V⊆𝒱^3(𝒮), ⊓_i V is the 3-valued interpretation defined via ⊓_i V(s)=† if for every ν∈ V, ν(s)=†, for any †∈{𝐭,𝐟,𝐮}, and ⊓_i V(s)=𝐮 otherwise. All major semantics of ADFs single out three-valued interpretations in which the truth value of every atom s∈𝒮 is, in some sense, in alignment or agreement with the truth value of the corresponding condition C_s. The Γ-function enforces this intuition by mapping an interpretation ν to a new interpretation Γ_D(ν), which assigns to every atom s exactly the truth value assigned by ν to C_s, i.e.: Γ_D(ν): 𝒮→{𝐭,𝐟,𝐮} where s↦⊓_i {ω(C_s)|ω∈ [ν]^2}. We also need to define the reduct D^ν of D given ν, i.e., D^ν=(𝒮^ν,L^ν,C^ν) with: (i) 𝒮^ν={s∈𝒮|ν(s)=𝐭}, (ii) L^ν=L∩ (𝒮^ν×𝒮^ν), and (iii) C^ν= {C_s[{ϕ|ν(ϕ)=𝐟}/⊥]| s∈𝒮^ν}, where C_s[ϕ/ψ] is obtained by substituting every occurrence of ϕ in C_s by ψ. Let D be an ADF with ν∈𝒱^3(𝒮) a 3-valued interpretation.
Then, ν is admissible for D iff ν≤_iΓ_D(ν); ν is complete for D iff ν=Γ_D(ν); ν is preferred for D iff ν is ≤_i-maximal among all admissible models; ν is grounded for D iff ν is ≤_i-minimal among all complete models; and ν is stable iff ν is a two-valued model of D and {s∈𝒮|ν(s)=𝐭}={s∈𝒮| w(s)=𝐭} where w is the grounded model of D^ν. We denote by 2mod(D), adm(D), com(D), prf(D), grd(D), and stb(D) the sets of two-valued, admissible, complete, preferred, grounded, and stable models of D. It has been shown that stb(D)⊆ 2mod(D) ⊆ prf(D) ⊆ com(D)⊆ adm(D) as well as grd(D)⊆ com(D). [Ex. <ref> ctd.] The ADF in Ex. <ref> has three complete models ν_1, ν_2, ν_3 with: ν_1(p)=𝐭, ν_1(v)=𝐟, ν_1(t)=𝐭 and ν_1(f)=𝐭; ν_2(s)=𝐟 and ν_3(s)=𝐮 for all s∈𝒮. An admissible model that is not complete is ν_4 with ν_4(p)=𝐭, ν_4(v)=ν_4(t)=ν_4(f)=𝐮. ν_3 is the grounded model, whereas ν_1 and ν_2 are both preferred and two-valued, and only ν_2 is stable. § BOOLEAN NETWORKS In this section, we recall Boolean networks as known from the literature (see e.g., <cit.>), first looking at the syntax and then focussing on their semantics. During this presentation, we will establish sometimes surprising connections to ADFs as well as notable differences between these two formalisms. §.§ Syntax Boolean networks utilize a regulatory graph to represent the compounds in the biological process and the principal interactions between them. A regulatory graph is a directed graph G = (V,E), where V = {v_1,...,v_n} is the set of vertices (nodes) representing the regulatory compounds, and E = {(u,v,s): u,v ∈ V, s ∈{+,-}} is the set of signed edges representing the interactions between compounds. An edge with s=+ is called a positive interaction (or activation), representing that u activates v, while an edge with s=- is called a negative interaction (or inhibition), representing that u inhibits v. A node with no incoming edges is called an input node, representing external stimuli, whose values do not change. Fig. <ref> shows the regulatory graph G = (V,E) with V = {v_1,v_2,v_3,v_4} and E = {(v_1,v_2, -),(v_1,v_3, +), (v_2,v_1,+), (v_2,v_3, +), (v_4,v_2, +),(v_4,v_3, -)}. Boolean logical models then add regulatory functions for each compound to specify how different compounds that affect the same node interact with each other for that node's activation. A Boolean logical model M of a regulatory network is defined as a tuple (V,F) where V = {v_1,v_2,...,v_n} is the set of variables representing the regulatory compounds of the network such that v_i can be assigned a value in {0,1}, and F={f_1, f_2,...,f_n} is the set of Boolean functions such that f_i defines the value of v_i and where f_i=v_i if v_i is an input node. Regulatory functions of input nodes may sometimes be omitted (cf. Fig. <ref>), which means that f_i=v_i, but commonly, we use the explicit representation. Fig. <ref> presents a Boolean logical model with G from Ex. <ref> and the regulatory functions for G on the right. We can observe that, syntactically, a Boolean network is strikingly similar to an abstract dialectical framework. The only difference is that, unlike the edges in the regulatory graph of a BN, links in ADFs do not mention explicitly whether an argument is attacking or supporting. Still, this can be extracted from the corresponding acceptance conditions. We establish this connection formally. Let M=(V,F) be a Boolean logical model of a regulatory network with regulatory graph G=(V,E). We define the corresponding ADF D_M,G=(V,L,F) with L={(u,v)| (u,v,s)∈ E}.
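To illustrate the correspondence computationally, the sketch below encodes a small ADF as Python predicates and enumerates its two-valued models by brute force. The acceptance conditions are our reading of the travel example (C_p = p, C_t = p, C_v = ¬t ∧ p, C_f = t ∨ v), which should be treated as an assumption about Fig. <ref>.a).

```python
from itertools import product

statements = ['p', 't', 'v', 'f']
C = {
    'p': lambda w: w['p'],                    # C_p = p (self-supporting paper)
    't': lambda w: w['p'],                    # C_t = p
    'v': lambda w: (not w['t']) and w['p'],   # C_v = not t and p
    'f': lambda w: w['t'] or w['v'],          # C_f = t or v
}

def two_valued_models(statements, C):
    """A world w is a two-valued model iff w(s) = w(C_s) for every statement s."""
    models = []
    for bits in product([False, True], repeat=len(statements)):
        w = dict(zip(statements, bits))
        if all(w[s] == C[s](w) for s in statements):
            models.append(w)
    return models

# prints the two worlds corresponding to nu_1 and nu_2 of the running example
for m in two_valued_models(statements, C):
    print(m)
```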
Let D=(𝒮,L,C) be an ADF with C in NNF. We define the corresponding Boolean logical model M_D=(𝒮,C) of a regulatory network with regulatory graph G_D=(𝒮,E) with E={(u,v,+)| u occurs non-negated in C_v}∪{(u,v,-)| u occurs negated in C_v}. The requirement for the acceptance conditions to be in Negation Normal Form (NNF) is necessary to include the correct edges in G_D. Alternatively, one can determine the polarity using Definition <ref> below. §.§ Dynamics BNs allow us to capture the changes over time in a biological process based on the interactions of the various compounds involved, which should correctly represent the dynamics observable in the real system. We will see that the study of dynamics in BNs corresponds to the semantics of ADFs. We start with network states that are used to represent the (current) activations of a network's compounds. The network state of a BN with n compounds is a vector S = [v_1,v_2,...v_n] where v_i is the value of the variable representing the i-th compound of the network. Clearly, for Boolean logical models, the number of different states in a network is given by 2^n. E.g., if nodes 1 and 3 in Ex. <ref> are active and the other two are not, then the state will be represented by 1010. Similar representations are used for interpretations of ADFs (by ordering the atoms), and it is clear that the states in BNs correspond to possible worlds of ADFs. The update of the i-th compound v_i of a network from one discrete time point t to the next is then defined as v_i(t+1)=f_i(S(t)) for state S(t) at time t, which clearly corresponds exactly to the evaluation of an acceptance condition in ADFs. We can then use state transition graphs <cit.> to describe how networks, and thus the modelled biological systems, evolve over time. A State Transition Graph (STG) is a directed graph G_STG = (S,T) where S is the set of vertices representing the different states of the network, and T is the set of edges representing the viable transitions between states. Two update schemes are employed to update the values of nodes in a BN: the synchronous and the asynchronous updating scheme <cit.>. Note that, given a Boolean logical model, the state transition graphs for the synchronous updating scheme and the asynchronous updating scheme are uniquely determined. In the synchronous updating scheme, at each time step, all compounds are updated simultaneously. Each network state has at most one successor (cf. Fig. <ref>.a), which is sometimes argued to be biologically less realistic and less accurate for analysis. Yet, synchronous updates are still regularly used <cit.>, and many of the concepts below, such as trap spaces, are independent of the used scheme. In the asynchronous updating scheme, at each time step, one or more regulatory functions may be applied <cit.>. This is closer to what is observable in real systems, since these changes seldom tend to take place simultaneously. With n compounds in a network, and the frequently used, particular case of exactly one function being applied at each time step, each state can have at most n possible state transitions (including a transition to itself - cf. Fig. <ref>.b). There are certain cycles of states in which these networks reside most of the time. These cycles are a trap of sorts, since as soon as a network enters a cycle's state, it is unable to exit the cycle. These traps, called attractors, are linked to many important cellular processes, such as phenotypes, cell cycle phases, cell growth, differentiation, apoptosis, and more <cit.>.
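Under the correspondence, a synchronous step is just the simultaneous evaluation of all regulatory functions, and stable states are fixed points. The sketch below uses hypothetical regulatory functions chosen only to be consistent with the edge signs of the regulatory graph above; the actual functions of the example are those shown in the figure.

```python
from itertools import product

# hypothetical functions matching the signed edges of the regulatory graph
F = {
    'v1': lambda s: s['v2'],                               # v2 activates v1
    'v2': lambda s: (not s['v1']) and s['v4'],             # v1 inhibits, v4 activates v2
    'v3': lambda s: s['v1'] and s['v2'] and not s['v4'],   # v4 inhibits v3
    'v4': lambda s: s['v4'],                               # input node: f_4 = v_4
}

def sync_successor(state):
    """Synchronous update: apply every regulatory function simultaneously."""
    return {v: f(state) for v, f in F.items()}

def stable_states():
    """Fixed points s = f(s), i.e. the singleton trap sets (and, by the
    proposition below, the two-valued models of the corresponding ADF)."""
    return [dict(zip(F, bits))
            for bits in product([False, True], repeat=len(F))
            if sync_successor(dict(zip(F, bits))) == dict(zip(F, bits))]

print(stable_states())   # includes the all-inactive state 0000
```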
Because the number of states in a network is finite, transitions from all states eventually lead to an attractor. All states that lead to a certain attractor form its attraction basin. Attractors have other relevant characteristics. One such characteristic is that, from any state belonging to an attractor, it is possible to find a path of attractor states to any other state in that attractor. Another important property is that there are no state transitions from attractor states to states outside of the attractor. Attractors can also be classified under different types. The left image of Fig. <ref>.a) shows an example of a stable state (or point attractor) in the white-colored 0000 state. Stable states are attractors that contain only a single state. When that is not the case, the attractor is denoted as a cyclic (or complex) attractor, visualized in the right image of Fig. <ref>.a). There, we have a complex attractor comprised of four states: 0101, 1101, 1011 and 0011 <cit.>. Given an STG G_STG = (S,T), a set S'⊆ S is a trap set of G_STG if, for every ω∈ S', (ω,ω')∈ T implies ω'∈ S'. An attractor is a ⊆-minimal trap set. A stable state of G_STG is a singleton trap set of G_STG. We obtain that stable states of a transition graph under the synchronous update scheme coincide with the two-valued models of the corresponding ADF. Let G_STG be the synchronous state transition graph of the Boolean model M with regulatory graph G, and D_M,G the corresponding ADF. Then, ω is a stable state of G_STG iff ω is a two-valued model of D_M,G. This follows from the following list of equivalences: ω is a stable state for the synchronous transition graph of M=(V,F) ⇔ ω(s_i)=f_i(ω) for every s_i∈ V ⇔ ω(s)=ω(C_s) for every s∈ V ⇔ ω is a two-valued model of D_M,G. To the best of our knowledge, the more general notion of a trap set has not been investigated in the context of ADFs. This is not surprising, as it has a clear meaning and use in biological networks, but not so much in argumentation: Consider the ADF D=({a,b,c},L,C) with C_a=¬ c, C_b=¬ a and C_c=¬ b. Then {000,111} is a trap set and an attractor, but their argumentative interpretation is not clear: if we interpret a, b, and c as arguments that attack each other, the stability under transitions of {000,111}, interesting in a biological interpretation, is of less interest in argumentation. More surprisingly, we will now see that many other semantics for ADFs have a natural counterpart in Boolean networks. For example, there has been a lot of interest in so-called subspaces of regulatory graphs <cit.>, which are sets of interpretations for which the assignments of some variables are fixed. Trap spaces have received a lot of attention in the literature on Boolean networks, as finding them is computationally easier than finding arbitrary trap sets <cit.>, while they are still guaranteed to contain a trap set <cit.>. A subspace m of a regulatory graph (V,E) is a mapping m:V↦{0,1,⋆}. We call v∈ V fixed if m(v)∈{0,1} and free if m(v)=⋆, and D_m is the set of all fixed variables in m. We let S[m]:={ s∈ S|∀ v∈ D_m: s(v)=m(v)}. An interesting observation is that subspaces, interpreted as their representative set of Boolean valuations S[m], correspond exactly to the completions of the subspaces, interpreted as three-valued interpretations: Let m be a subspace, and ν_m:V↦{0,1,U} defined by ν_m(v)=m(v) if v∈ D_m, and U otherwise. It holds that [ν_m]^2=S[m]. It suffices to observe that ν_m≤_i ω iff ω(v)=m(v) for every v∈ D_m.
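Subspaces, and the test whether a subspace is also a trap set (a trap space, as defined below), can be phrased directly in code. The sketch enumerates S[m] and checks that the update functions preserve every fixed variable; for subspaces this single check is independent of the chosen update scheme. The three-statement attack cycle from the example above serves as toy input.

```python
from itertools import product

def states_of(m, variables):
    """Enumerate S[m]: all states agreeing with the fixed variables of m
    (m maps each variable to False, True, or '*' for free)."""
    free = [v for v in variables if m[v] == '*']
    for bits in product([False, True], repeat=len(free)):
        s = {v: m[v] for v in variables if m[v] != '*'}
        s.update(dict(zip(free, bits)))
        yield s

def is_trap_space(m, F, variables):
    """m is a trap space iff no update can move a state of S[m] out of S[m],
    i.e. f_v(s) = m(v) for every s in S[m] and every fixed variable v."""
    return all(F[v](s) == m[v]
               for s in states_of(m, variables)
               for v in variables if m[v] != '*')

F = {'a': lambda s: not s['c'], 'b': lambda s: not s['a'], 'c': lambda s: not s['b']}
V = ['a', 'b', 'c']
print(is_trap_space({'a': '*', 'b': '*', 'c': '*'}, F, V))   # trivial subspace: True
print(is_trap_space({'a': True, 'b': '*', 'c': '*'}, F, V))  # fixing a alone: False
```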
Subspaces that are also trap sets are of special interest in the literature on Boolean networks. A trap space is a subspace that is also a trap set. Trap spaces, interpreted as three-valued interpretations (see Proposition <ref>), correspond to admissible interpretations: Let M be a Boolean model with regulatory graph G, and D_M,G the corresponding ADF: m is a trap space of M iff ν_m is admissible in D_M,G. We first need some preliminaries due to Klarner et al. <cit.>. Given a Boolean function f, f[m] is the expression obtained by substituting every occurrence of some v∈ D_m in f by m(v). For example, given m=01⋆ and f=(a ∨ b) ∧ c, f[m]=(0 ∨ 1) ∧ c. We let U_m:={v_i∈ V| f_i[m] is constant} and define F[m]:V↦{0,1,⋆} as: F[m](v_i) = f_i[m] if v_i∈ U_m, and ⋆ otherwise. The core of the proof depends on the following result: A space m is a trap set if and only if S[m]⊆ S[F[m]]. Indeed, with Proposition <ref>, [ν_m]^2⊆ [ν_F[m]]^2, i.e. ν_m≤_i ν_F[m]. We now show that Γ_D_M,G[ν_m]=ν_F[m]. Recall that Γ_D_M,G[ν_m](s)=⊓_i{ω(C_s)|ω∈ [ν_m]^2}. We can safely assume that C_s is in conjunctive normal form, as Γ_D_M,G is invariant under classical equivalences. Let C_s=⋀_i=1^n ⋁Δ_i. Then f_s[m]=⋀_i=1^n ⋁σ(Δ_i), where σ(Δ) is obtained by replacing every occurrence of some v∈ D_m by m(v). Suppose now F[m](s)=0. This means there is some i=1,…,n s.t. σ(Δ_i)={0}. Thus, ⊓_i{ω(⋁Δ_i)|ω∈ [ν_m]^2}=0, which implies that ⊓_i{ω(⋀_i=1^n ⋁Δ_i)|ω∈ [ν_m]^2}=0. The cases for F[m](s)=1 and F[m](s)=⋆ are similar. On the other hand, the complete semantics does not seem to have a direct counterpart in Boolean networks. Consider the ADF D=({a,b},L,C) with C_a=a∨¬ a and C_b=a. Let us take a look at the STG for synchronous updates first [STG figure omitted]. There are three trap spaces: ⋆⋆, 1⋆, and 11. By Prop. <ref>, this corresponds to the interpretations 𝐮𝐮, 𝐭𝐮, and 𝐭𝐭. However, the unique complete interpretation is 11. We might conjecture that the complete interpretations correspond to the minimal trap spaces, but that is not the case either. Consider the ADF D=({a},L,C) with C_a=a. There are three trap spaces: ⋆, 1, and 0. The corresponding interpretations 𝐮, 1 and 0 are not only admissible but also complete. However, ⋆ is not a minimal trap space. We further note that there has been some interest in maximal and minimal trap spaces <cit.> in the literature on BNs. In more detail, trap sets are compared as follows: m_1≤ m_2 iff S[m_1]⊆ S[m_2]. It can be easily observed that this is the reverse of the information order ≤_i known from ADFs: Let m_1,m_2 be two subspaces. Then m_1≤ m_2 iff ν_m_2≤_i ν_m_1. With Proposition 2 from <cit.>, ν_m_2≤_i ν_m_1 iff [ν_m_2]^2⊇ [ν_m_1]^2. As S[m_i]=[ν_m_i]^2 for i=1,2 (Proposition <ref>), we obtain that ν_m_2≤_i ν_m_1 iff S[m_1]⊆ S[m_2]. By definition of ≤, ν_m_2≤_i ν_m_1 iff m_1≤ m_2. Minimal trap spaces are trap spaces m_1 such that there is no trap space m_2<m_1. Maximal trap spaces are trap spaces m_1 s.t. there is no trap space m_2>m_1 and S[m_1]≠ S, i.e., the trivial trap space is excluded by definition. Let M be a Boolean model with regulatory graph G, and D_M,G the corresponding ADF: m is a minimal trap space of M iff ν_m is preferred in D_M,G. With Proposition <ref>, m is a trap space of M iff ν_m is admissible in D_M,G. Suppose now towards a contradiction that m is a minimal trap space, yet ν_m is not preferred, i.e. there is some admissible ν with ν_m<_i ν. Then, with Proposition <ref> and Proposition <ref>, there is a trap space m_ν< m, contradicting m being a minimal trap space. However, maximal trap spaces do not correspond to the grounded model. The reason is that in BNs, the trivial trap space is excluded.
E.g., in Ex. <ref>, ν_3 is the grounded model, but it does not correspond to a maximal trap space, as it is the trivial trap space. Finally, we note that the concept of a stable model from ADFs has no clear counterpart in BNs. Indeed, the main motivation behind stable models is to exclude self-supporting arguments (see e.g., Ex. <ref> where ν_1 with p supporting itself via C_p=p is excluded). In BNs, there is nothing a priori wrong with such self-supporting stable states, and it might often even have a clear biological meaning, e.g., since algae are self-reproducing. §.§ Subclasses of Boolean Networks An interesting observation is that both the literature on ADFs and the literature on Boolean networks have identified certain subclasses of frameworks for which the computational complexity of computational tasks decreases. In fact, both strands of literature have identified the same subclass! In the literature on ADFs, these frameworks are called bipolar ADFs, whereas in the literature on Boolean networks, they are called sign-definite. A Boolean function f: {0,1}^n↦{0,1} is increasing monotone if for every v_1,v_2∈{0,1}^n, v_1≤ v_2 implies f(v_1)≤ f(v_2), and it is decreasing monotone if for every v_1,v_2∈{0,1}^n, v_1≤ v_2 implies f(v_1)≥ f(v_2). A Boolean function is sign-definite if it is increasing or decreasing monotone. A Boolean logical model (V,F) is sign-definite if every Boolean function in F is sign-definite. It is well-known that a Boolean function is sign-definite if and only if it can be represented by a formula in disjunctive normal form in which all occurrences of a given literal are either negated or non-negated <cit.>. We now recall bipolar ADFs <cit.>. Given a Boolean function f:{0,1}^n↦{0,1}: * i≤ n is supporting iff f(v_1,…, v_i,…,v_n)=1 implies f(v_1,…, v'_i,…,v_n)=1 where v_i=0 and v'_i=1, * i≤ n is attacking iff f(v_1,…, v_i,…,v_n)=0 implies f(v_1,…, v'_i,…,v_n)=0 where v_i=0 and v'_i=1. An ADF is bipolar iff for every a∈𝒮, every b∈par_D(a) is supporting, attacking or both in C_a. It was shown that an ADF is bipolar iff every acceptance formula is syntactically bipolar, i.e., no atom occurs both positively and negatively <cit.>. In more detail, given a formula ϕ, the polarity of an atom a in ϕ is determined by the number of negations on the path from the root of the formula tree to the atom, and is positive if this number is even, and negative otherwise. For example, in ¬(a ∧ ¬b), a occurs negatively and b occurs positively. A propositional formula is syntactically bipolar if no atom a occurs both positively and negatively in ϕ. An ADF D=(𝒮,L,C) is bipolar iff, for every s∈𝒮, C_s is syntactically bipolar. Follows from <cit.>. The reader might be somewhat surprised by the fact that links can be both attacking and supporting. However, the only possibility for that being the case is if the link is redundant: Consider an ADF D=(𝒮,L,C) with s_1,s_2∈𝒮 and s_2 being supporting and attacking in C_s_1. For every v_1,v_2∈𝒱^2(𝒮) s.t. v_1(s_2)≠ v_2(s_2) and v_1(s)=v_2(s) for every s∈𝒮∖{s_2}, Γ_D(v_1)(s_1)= Γ_D(v_2)(s_1). W.l.o.g. suppose that v_1(s_2)=0 and v_2(s_2)=1. Suppose first that C_s_1(v_1)=1. As s_2 is supporting in C_s_1, C_s_1(v_2)=1. Suppose now that C_s_1(v_1)=0. As s_2 is attacking in C_s_1, C_s_1(v_2)=0. This allows us to show that these two special cases coincide. A Boolean logical model is sign-definite iff the corresponding ADF is bipolar. Clearly, a parent is supporting respectively attacking in an acceptance condition iff the condition is increasing respectively decreasing monotone in the corresponding variable.
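Whether an acceptance condition is bipolar (sign-definite) can be decided by brute force over all 2^n inputs, directly following the definition of supporting and attacking positions; a small sketch:

```python
from itertools import product

def polarity(f, n, i):
    """Classify argument i of f: {0,1}^n -> {0,1}: flipping x_i from 0 to 1
    may only increase the output (supporting) or only decrease it (attacking)."""
    supporting = attacking = True
    for bits in product([0, 1], repeat=n):
        if bits[i] == 1:
            continue
        lo = f(*bits)                                # x_i = 0
        hi = f(*(bits[:i] + (1,) + bits[i + 1:]))    # x_i = 1
        if lo == 1 and hi == 0:
            supporting = False
        if lo == 0 and hi == 1:
            attacking = False
    return supporting, attacking

def is_sign_definite(f, n):
    """f is sign-definite (the condition is bipolar) iff every argument
    position is supporting or attacking."""
    return all(any(polarity(f, n, i)) for i in range(n))

print(is_sign_definite(lambda t, p: (not t) and p, 2))   # C_v = not t and p: True
print(is_sign_definite(lambda a, b: a != b, 2))          # XOR: False, not bipolar
```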
§ INSIGHTS FROM BOOLEAN NETWORKS Based on the established correspondence, we provide examples of insights from the literature on Boolean networks that are directly relevant for ADFs. §.§ Complexity of Counting For many applications in KR, counting the number of solutions is important. For many formalisms, including abstract argumentation <cit.>, the complexity of counting has been studied in depth. For ADFs, such a study is missing. Thanks to existing results from the literature on Boolean networks, we can start filling in this gap: Deciding whether the maximal number of stable states in a Boolean logical model M is greater than or equal to k is: NP-complete for k≥ 2; NEXPTIME-complete for an arbitrary but fixed k; NP^# P-complete for an arbitrary but fixed k if the maximum in-degree of M is bounded by a constant d≥ 2. Deciding whether the minimal number of stable states in a Boolean logical model M is smaller than or equal to k is: NEXPTIME-complete for an arbitrary but fixed k; NP^NP-complete for k≥ 2, if the maximum in-degree of M is bounded by a constant d≥ 2; NP^# P-complete for an arbitrary but fixed k if the maximum in-degree of M is bounded by a constant d≥ 2. This gives rise to the following corollary: Deciding whether the maximal number of two-valued models in an ADF D is greater than or equal to k is: NP-complete for k≥ 2; NEXPTIME-complete for an arbitrary but fixed k; NP^# P-complete for an arbitrary but fixed k if the maximum in-degree of D is bounded by a constant d≥ 2. Deciding whether the minimal number of two-valued models in an ADF D is smaller than or equal to k is: NEXPTIME-complete for an arbitrary but fixed k; NP^NP-complete for k≥ 2, if the maximum in-degree of D is bounded by a constant d≥ 2; NP^# P-complete for an arbitrary but fixed k if the maximum in-degree of D is bounded by a constant d≥ 2. §.§ Existence of Fixpoints There is a large line of work in the literature on Boolean networks that studies structural properties of Boolean networks that affect the existence (and the number) of fixed points. Such work immediately translates to the existence of two-valued models in ADFs. E.g., several works <cit.> provide a theoretical analysis of the existence of fixpoints in sign-definite BNs in relation to structural parameters of the corresponding Boolean networks. We follow Aracena <cit.> in defining a path in a BN as positive if the number of negative arcs is even, and negative otherwise. The first few results study the connection between cycles, their parity, and the existence and number of fixpoints: If a sign-definite BN has: no cycles, then it has a unique stable state <cit.>; no positive cycles, then it has at most one stable state <cit.>; no negative cycles, then it has at least one stable state <cit.>. If a BN N has a strongly connected component H such that all cycles of H are negative and, for each arc (v_i,v_j) in N, v_j∈ H implies v_i∈ H, then N has no stable states <cit.>. We derive the following corollary for ADFs: Let a bipolar ADF D be given. Then: if D has no cycles, then it has a unique two-valued model; if D has no positive cycles, then it has at most one two-valued model; if D has no negative cycles, then it has at least one two-valued model. If D has a strongly connected component H such that all cycles of H are negative and, for each link (v_i,v_j) in D, v_j∈ H implies v_i∈ H, then D has no two-valued models. One also finds more intricate results on the existence of fixpoints in the literature, for example in terms of so-called non-expansive maps, which guarantee the existence of stable states <cit.>. These fall outside the scope of this paper.
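For small networks, the quantities these counting problems ask about can be enumerated directly. The sketch below (our illustration; the example networks are ours) lists all stable states, i.e., fixed points f(x)=x of the update functions, which under the correspondence above are exactly the two-valued models of the ADF; the two toy networks illustrate the cycle conditions just stated, a positive cycle with two stable states and a negative cycle with none.

from itertools import product

def stable_states(fs):
    """Enumerate fixed points of a Boolean network given as a list of
    update functions f_i: {0,1}^n -> {0,1} (the two-valued models of the ADF)."""
    n = len(fs)
    return [x for x in product((0, 1), repeat=n)
            if all(f(*x) == x[i] for i, f in enumerate(fs))]

# Two mutually supporting nodes: one positive cycle, two stable states
print(stable_states([lambda a, b: b, lambda a, b: a]))  # [(0, 0), (1, 1)]
# A single self-inhibiting node: one negative cycle, no stable state
print(stable_states([lambda a: int(not a)]))            # []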
Some work has also investigated the connection between the existence of stable states and the structure of the corresponding BN: If a BN has at least one stable state, then it has at least one positive cycle. We derive the following corollary for ADFs: If an ADF has a two-valued model, then it has at least one positive cycle. The final result we discuss is an upper bound on the number of stable states in terms of the size of a minimum feedback vertex set (FVS). An FVS of a digraph G = (V,E) is defined to be a set of vertices that contains at least one vertex of each cycle of G. The minimum number of vertices of an FVS is denoted by τ(G). A Boolean logical model N=(V,F), where |V^-(v)| ≥ 1 for all v∈ V, has at most 2^τ(N) stable states. This carries over to ADFs as follows. An ADF D=(A,L,C) s.t. every argument has at least one parent has at most 2^τ((A,L)) two-valued models. § CONCLUSION We have reviewed the main syntactic similarities between ADFs and Boolean networks, and demonstrated that these extend in large parts to the semantical level. Furthermore, we have shown the fruitfulness of this connection by deriving results on complexity and existence of semantics for ADFs based on existing results for Boolean networks. We hope that our results will lead to further cross-fertilization between these two fields, such as the use of implementations or benchmarks.
http://arxiv.org/abs/2407.01957v1
20240702052130
A Survey on Advancements in THz Technology for 6G: Systems, Circuits, Antennas, and Experiments
[ "Sidharth Thomas", "Jaskirat Singh Virdi", "Aydin Babakhani", "Ian P. Roberts" ]
eess.SP
[ "eess.SP" ]
A Survey on Advancements in THz Technology for 6G: Systems, Circuits, Antennas, and Experiments Sidharth Thomas, Graduate Student Member, IEEE, Jaskirat Singh Virdi, Graduate Student Member, IEEE, Aydin Babakhani, Senior Member, IEEE, Ian P. Roberts, Member, IEEE Sidharth Thomas, Jaskirat Singh Virdi, Aydin Babakhani, and Ian P. Roberts are with the Department of Electrical Engineering, University of California, Los Angeles, CA 90095 USA (e-mail: sidhthomas@ucla.edu). July 8, 2024 =============================================================================================================================================================================================================================================================================================================================================================================================================================== § ABSTRACT Terahertz (THz) carrier frequencies (100 GHz to 10 THz) have been touted as a source for unprecedented wireless connectivity and high-precision sensing, courtesy of their wide bandwidth availability and small wavelengths, but noteworthy implementation challenges remain to make this a reality. In this paper, we survey recent advancements in THz technology and its role in future 6G wireless networks, with a particular emphasis on the 200–400 GHz frequency range and the IEEE 802.15.3d standard. We provide a comprehensive overview of THz systems, circuits, device technology, and antennas, while also highlighting recent experimental demonstrations of THz technology. Throughout the paper, we review the state-of-the-art and call attention to open problems, future prospects, and areas of further improvement to fully realize the potential of THz communication in next-generation wireless connectivity. Terahertz (THz), 6G, millimeter-wave, IEEE 802.15.3d, channel modeling, THz applications, device technology, circuits, antenna, transceiver architectures. § INTRODUCTION Wireless communication has become an indispensable component of contemporary life, with each generation of technological advancement opening the door to a host of novel applications that transform our lifestyles. With commercial 5G systems under deployment, the focus of many wireless researchers has shifted towards developing the sixth generation (6G) of wireless communication <cit.>. The forthcoming wave of technological advancement is poised to revolutionize wireless connectivity by offering unprecedented throughput and latency and expanded capabilities such as sensing, tailored to meet the stringent demands of applications of the next decade. The necessity for 6G technology primarily stems from the global increase in data generation and consumption <cit.>. The amount of data traveling across the internet has exponentially grown over the years, and this trend is expected to continue. The International Telecommunication Union (ITU) predicts that by 2030, global mobile traffic volume will be 670 times higher than in 2010, with aggregate mobile data traffic expected to exceed 5 ZB per month <cit.>. As illustrated in Fig. <ref>, the traffic volume per mobile device in 2030 is expected to increase by 50 times from 2010. This surge is primarily attributed to the widespread proliferation of IoT (Internet of Things) devices, ranging from simple household gadgets to advanced industrial sensors, resulting in a massive and ever-growing volume of data. 
Automation technologies such as digital twins and the adoption of cloud-based machine learning and artificial intelligence across various sectors have further accelerated this trend <cit.>. As society's dependence on smart devices and connected systems increases, the demand for a more robust and capable network infrastructure becomes imperative <cit.>. In response, 6G aims to deliver speeds, reliability, and latency improvements that far surpass those of its predecessors, establishing a critical foundation for the next digital connectivity and innovation era. The impressive wireless networks of today are the product of decades of extensive optimization and technology advancements. To deliver future 6G networks which substantially exceed the capabilities of today's 5G networks, many have proposed turning to vast, untapped swaths of spectrum at terahertz (THz) frequencies. The THz band, which ranges from 0.1 to 10 THz <cit.>, occupies a distinctive spot in the electromagnetic (EM) spectrum. Positioned between radio frequencies (RF) and optical frequencies, this band displays characteristics of both and offers exciting possibilities for new applications. The THz band can support high-capacity wireless links due to its vast bandwidth availability. Additionally, its short wavelength allows for operating large-scale antenna arrays in a small form factor. This facilitates network densification and makes THz ideal for sensing applications, with THz radars and imagers providing superior range and lateral resolution <cit.>. Moreover, THz imaging technologies may offer safer, more accessible medical diagnostics than traditional X-ray imaging <cit.>. These unique features also enable future 6G technologies to combine communication and sensing into a single system, revolutionizing everyday interactions, transforming healthcare, and forging new markets. A few THz bands in particular have garnered substantial interest for their applications in wireless communication. These bands include the D-band (110–170 GHz) <cit.>, the 300 GHz band (253–322 GHz) <cit.>, and the 400 GHz band <cit.>. Operating at these THz frequencies presents unique challenges spanning various technical considerations such as device technology, circuit design, antennas, packaging, channel modeling, signal processing, and system design <cit.>. These complexities associated with realizing THz transceivers and deploying THz wireless systems have drawn considerable attention and investment from both industrial and academic research laboratories. Progress thus far has resulted in emerging THz wireless standards, with perhaps the most notable being IEEE 802.15.3d <cit.>. This standard ultimately seeks to enable wireless communication over channels as wide as 69 GHz within the 253–322 GHz frequency range <cit.>. Its aim is to showcase the feasibility of THz frequencies for communication while also serving as a coordinated effort toward effective and reliable connectivity solutions at THz. In this paper, we survey several key techniques and technologies for enabling wireless communication at carrier frequencies beyond 200 GHz and focus in particular on current specifications of the IEEE 802.15.3d standard. Our aim is to complement existing THz systems-level surveys (e.g., <cit.>) by bridging topics at the intersection of multiple technical communities. In this pursuit, we conduct a comprehensive analysis of the latest developments, potential opportunities, challenges, and the current state-of-the-art in the 200–400 GHz range.
Unlike existing surveys on THz for 6G, this paper provides a thorough discussion of semiconductor device technologies, circuit topologies, antennas, and packaging, and of how design decisions in these areas may impact system performance and capabilities. We describe the unique challenges in these areas specific to THz design, acknowledge the limitations of current technology, and highlight promising approaches for practical solutions. Furthermore, we overview notable demonstrations of THz-based 6G from across the globe and examine their architectural and implementation considerations, a crucial step toward homing in on capable, cost-effective THz systems for future commercial 6G networks. The rest of this paper is organized as follows. Section II reviews the envisioned role of THz in 6G. Section III describes the IEEE 802.15.3d standard. Section IV examines the behavior of wireless channels at THz frequencies. Section V surveys the state-of-the-art in semiconductor device technology. Circuit techniques and architectures suited for THz design are discussed in Section VI. Antenna design and packaging techniques are addressed in Section VII. Section VIII showcases notable THz demonstrations worldwide. Finally, the paper is concluded in Section IX, summarizing the key findings and insights of this survey. Table I summarizes key acronyms used throughout this paper. § APPLICATIONS OF 6G: WHY THZ? The maximum data rate for a wireless channel with bandwidth B and a signal-to-noise ratio (SNR) of 𝖲𝖭𝖱 can be calculated using the well-known Shannon channel capacity R = B·log_2(1+𝖲𝖭𝖱). While this expression does not strictly depend on carrier frequency, bandwidth availability tends to increase at higher carrier frequencies <cit.>. Increasing bandwidth B then facilitates higher data rates, assuming appreciable SNRs can be attained. For instance, at 300 GHz, the maximum data rate for a link with a 25 dB noise figure (representative number <cit.>) is plotted in Fig. <ref> for various received power levels. With a 10% fractional bandwidth (i.e., B=30 GHz) and -40 dBm received power, a link can support 56 Gbps. Such massive bandwidths also introduce the potential to multiplex an unprecedented number of devices across frequency resources, with each device enjoying substantial data rates. To further illustrate the impact of enhanced throughput and latency offered by THz, this section highlights targeted applications conceptualized for THz-based 6G. * WLAN and IoT: THz-based 6G links have the potential to establish high-speed wireless local area network (WLAN) communication. The main premise is that a THz access point serves as a sort of enhanced hotspot that can facilitate applications such as near-instant video transfer, large-scale IoT, and real-time streaming of immersive experiences in virtual reality <cit.>. As communication systems evolve and merge with IoT, 6G networks will connect humans, sensors, and computing devices across an extensive network <cit.>. * Wireless backhaul: With the evolution of wireless networks, mm-Wave small cells have become increasingly necessary to meet capacity demands. A reliable and high-speed backhaul connection to each mm-Wave base station is essential to connect it to the core network, but traditional fiber backhauling does not scale well due to the sheer density of small cells <cit.>.
Wireless backhauling via THz could interconnect base stations to the core network with extremely high-capacity links, circumventing the time, cost, and overhead associated with trenching optical fiber <cit.>. In turn, this could facilitate a paradigm where wireless networks provide computing and cloud-based services directly to devices at the network's edge, proliferating emerging technologies such as artificial intelligence (AI) and digital twins <cit.>. * AR/VR and holographic projections: Augmented reality (AR), virtual reality (VR), and 3D holographic projections are all application spaces where the impact of 6G technology is expected to be substantial <cit.>. In essence, providing a high-capacity, low-latency THz link can facilitate computational offloading from AR/VR headsets, allowing them to reduce their form factor and battery consumption, while remaining immersive and interactive. Advancements in this area can even revolutionize the healthcare industry and potentially enable applications such as remote surgeries <cit.>. * Kiosk downloading: In this setup, a fixed wireless transmitter, unconstrained by size and power limitations, may connect to a fiber network, sending high-speed multi-Gbps wireless data to a low-power mobile receiver within a short range. This may be utilized to transfer massive files and high-definition videos in a "touch-and-go" fashion <cit.>. This application has been gaining attention with prototypes being demonstrated <cit.>. * Data centers: THz links have the potential to supplement or replace current Ethernet and fiber-based links in data centers <cit.>. Current approaches require extensive cables, which can be expensive and difficult to set up. An alternative solution is utilizing point-to-point high-speed wireless links connecting racks. This is more flexible and could reduce the cost of cabling. Moreover, this approach reduces latency as the EM waves can travel through air over 50% faster than through optical fiber <cit.>. * Joint communication and sensing: A so-called joint communication and sensing (JCAS) paradigm in which wireless communication systems also perform sensing tasks has gained traction as a defining feature of future 6G networks <cit.>. The main premise behind JCAS is to leverage existing communications infrastructure, such as cellular network base stations, to perform wide area sensing, as opposed to deploying dedicated sensing hardware (e.g., radar). In doing so, emerging applications which rely on sensing data, such as real-time digital twins and autonomous driving, could be fully realized. Simultaneously, this sensing data could actually be leveraged to improve communication system performance by, for example, adjusting system operation dynamically upon detecting blockage/obstructions, such as vehicles <cit.>. Beyond this, applications of JCAS are ripe in industrial automation, robotics, and AR/VR, especially in environments and conditions where cameras may fail due to darkness or weather. With THz, the potential for JCAS is particularly exciting, as wide bandwidths yield fine range resolution, dense antenna arrays provide granular angular resolution, and high carrier frequencies facilitate Doppler-based velocity estimation. * Secure communication: THz links have emerged as a promising solution for enhancing communication security. Because THz links are highly directional, eavesdropping becomes significantly more difficult, offering better data protection and privacy <cit.>.
The high levels of atmospheric attenuation also enable new ideas such as `whisper radio,' where communication is inherently secure <cit.>. Additionally, the small wavelength of THz signals, which measures in the range of hundreds of micrometers, allows for innovative modulation techniques, which can enhance physical layer security. One example is space-time modulated phased arrays <cit.>. This is illustrated in Fig. <ref>(a). Here, parallel data streams are modulated spatially at different transmitters and temporally at different instants. This data can be deciphered by the intended broadside receiver, Bob. However, an off-axis eavesdropper (such as Eve) cannot decipher the information, as the signals from each antenna travel different distances, resulting in signal corruption. Another example is using orbital angular momentum (OAM) for wireless communication <cit.>. Here, the information is modulated on a carrier signal, which has an angular momentum mode, as illustrated in Fig. <ref>(b). A receiver antenna can only decipher the information if it has the same angular momentum mode as the transmitter. * Satellite communication: THz links may in fact also offer a viable option for ground-satellite connections, since atmospheric attenuation decreases exponentially with altitude <cit.>. As an example, at 600 GHz, the atmospheric attenuation of a geostationary link originating from a moderately dry, high-altitude location is equivalent to that of a 2-km link situated at sea level. THz links can also be used for inter-satellite communication, with the potential to outperform optical links commonly used today <cit.>. The vacuum of space eliminates atmospheric attenuation issues, and the larger THz wavelengths (compared to optics) result in much broader beam widths, which relaxes the pointing accuracy required between the transmitter and the receiver. Additionally, THz-band communication systems often require significantly less power and weight than their laser-based counterparts <cit.>. § THE IEEE 802.15.3D STANDARD The IEEE 802.15 Terahertz Interest Group was established in 2008 to explore wireless communication within 0.3 to 3 THz. In 2017, the IEEE 802.15.3d standard (the 300 GHz band) was approved, serving as the initial step towards THz band communication <cit.>. This standard outlines a PHY layer supporting data rates up to 100 Gbps, with the option to fall back to lower rates. It supports wireless communications over up to 69 GHz wide channels within 253–322 GHz. Its primary goal is to demonstrate the practicality of using THz frequencies for communication while providing high-connectivity solutions. Note that the envisioned applications are restricted primarily to point-to-point links between static or quasi-static devices, to remain within the realm of what is possible with current semiconductor technology. However, this may change in the future as device technology matures. A detailed description of this standard can be found in <cit.>. As illustrated in Fig. <ref>, the IEEE 802.15.3d standard covers frequencies between 252.72 GHz and 321.84 GHz, with 69 overlapping channels. These channels offer eight supported bandwidth options, ranging from 2.16 GHz to 69.12 GHz, each bandwidth being an integer multiple of 2.16 GHz. By default, channel number 41, with a bandwidth of 4.32 GHz, is used.
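As a quick link-budget illustration tying the Shannon expression from Section II to this channelization, the short Python sketch below computes the achievable rate for each bandwidth option, assuming the representative 25 dB noise figure and -40 dBm received power quoted earlier. The particular set of bandwidth multiples is our assumption, chosen only to span the stated 2.16–69.12 GHz range; it is not taken from the standard text.

import math

def shannon_rate_gbps(bw_ghz, prx_dbm, nf_db=25.0):
    """Shannon capacity given received power and receiver noise figure.
    Noise floor: -174 dBm/Hz thermal noise plus NF, integrated over the bandwidth."""
    bw_hz = bw_ghz * 1e9
    noise_dbm = -174.0 + 10.0 * math.log10(bw_hz) + nf_db
    snr_lin = 10.0 ** ((prx_dbm - noise_dbm) / 10.0)
    return bw_hz * math.log2(1.0 + snr_lin) / 1e9

# Channel bandwidths are integer multiples of 2.16 GHz; the multiples below
# are assumed for illustration.
for n in (1, 2, 4, 6, 8, 12, 24, 32):
    bw = n * 2.16
    print(f"{bw:6.2f} GHz -> {shannon_rate_gbps(bw, prx_dbm=-40.0):6.1f} Gbps")

For the 30 GHz case discussed in Section II, the same expression reproduces the quoted 56 Gbps figure.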
Summarized next, the PHY layer in the IEEE 802.15.3d standard has two modes: THz single-carrier mode (THz-SC PHY) and THz on-off keying mode (THz-OOK PHY). §.§ THz-SC PHY: Single-Carrier Mode THz-SC PHY aims to achieve high data rates and caters to bandwidth-oriented use cases such as wireless fronthaul/backhaul and data center links. The mode offers a range of six different modulations, including four variations of phase-shift keying: binary (BPSK), quadrature (QPSK), 8-phase (8-PSK), and 8-phase asymmetric (8-APSK). In addition, quadrature amplitude modulation (QAM) is available as 16-QAM and 64-QAM. While BPSK and QPSK modulations are mandated for the THz-SC mode, the other modulations are optional. This mode employs one of two low-density parity check (LDPC) codes for forward error correction: a high-rate 14/15 LDPC (1440, 1344) or a low-rate 11/15 LDPC (1440, 1056). §.§ THz-OOK PHY: On-Off Keying Mode THz-OOK PHY mode is designed to cater to low-complexity THz devices by using low-cost, relatively simple amplitude modulation schemes. Despite this limitation, it can still attain impressive data rates of up to tens of gigabits per second when utilizing the widest channels available. In terms of coding, three different error correction schemes are supported, including the mandatory (240, 224)-Reed Solomon code for simple hard decoding. Additionally, the two LDPC-based schemes described in the previous subsection can also be used with THz-OOK operation for soft decoding. § WIRELESS CHANNEL CHARACTERISTICS Wireless channels at THz bands exhibit behaviors markedly distinct from other key frequencies, such as the sub-6 GHz band and mm-Wave bands. This difference arises from the unique interaction between EM waves and materials at THz frequencies, whose dynamics are heavily dependent on the size of the wavelength relative to the physical dimensions of the objects they interact with. THz channels show high levels of signal attenuation and display scattering properties notably different from their lower-frequency counterparts. Moreover, there is strong absorption loss, susceptibility to blockage, and shadowing <cit.>. This results in a sparse channel with few paths from the transmitter to the receiver and can lead to reliability concerns in certain environments. §.§ Path Loss Path loss in wireless communication primarily stems from free-space path loss (FSPL) and atmospheric attenuation, each characterized by distinct mechanisms as outlined below. §.§.§ Free-Space Path Loss FSPL arises from the radial spreading of EM waves as they travel through free space. This effect is quantified by (<ref>) L = (4 π d/λ)^2 , where L represents the path loss, d signifies the distance the wave has propagated, and λ is the wavelength of the EM wave <cit.>. Note that the antenna gain can be embedded along with FSPL. The FSPL with isotropic antennas at 3 GHz, 30 GHz, and 300 GHz is plotted with dashed lines in Fig. <ref>(a). It can be observed that the path loss increases with frequency. §.§.§ Atmospheric Attenuation THz frequencies experience higher atmospheric attenuation than RF frequencies due to various effects. THz wavelengths are comparable in size to dust, rain, snow, and other atmospheric particles, which leads to higher attenuation <cit.>. Moreover, several absorption resonances exist at THz frequencies. This happens when the photon energy approaches the energy required to excite vibrational modes in molecules. The THz absorption spectrum is plotted in Fig. <ref> for standard (blue) and dry (red) atmospheric moisture conditions.
Sharp absorption peaks are observed at 183 GHz, 325 GHz, 380 GHz, and 450 GHz, which arise due to rotational and vibrational excitation states in gas molecules <cit.>. While these bands have some applications in short-range secure communication schemes such as `whisper radio' <cit.> and in hydration sensing and spectroscopy <cit.>, they should be avoided for long-distance communication. Note that some frequency bands, such as the D-band (110–170 GHz), 300 GHz, and 400 GHz bands, do not have significant absorption peaks. Hence, these frequencies are optimal candidates for THz-based 6G communication. At 300 GHz, atmospheric attenuation is as low as 10 dB per km, making it tolerable for communication links <cit.>. THz links are also susceptible to adverse weather conditions such as rain, snow, hail, fog, and clouds. This has been extensively studied, and it has been shown that rain attenuation flattens out beyond 100 GHz <cit.> (Fig. <ref>(a)). At 100 GHz, the attenuation caused by fog and cloud is approximately 5 dB per kilometer. In the most severe conditions of heavy fog or cloud, it can increase by roughly 15 dB when the frequency rises from 0.1 to 1 THz. This is plotted in Fig. <ref>(b) <cit.>. This additional attenuation may be compensated by using high-gain antennas. §.§ Scattering The scattering behavior of an EM wave with a material depends on the roughness of the scatterer with respect to the wavelength <cit.>. The Rayleigh criterion can be used to determine the smoothness or roughness of a surface <cit.>. It defines a critical surface height h, which depends on the wavelength and the angle of incidence θ_i. The critical height is given in (<ref>). h = λ/(8cosθ_i) A surface is smooth at a given wavelength λ if its minimum-to-maximum surface protuberance h_0 is smaller than h. At RF frequencies, most surfaces are smooth and follow reflection-like specular behavior. Here, the reflection process has a strong dominant specular path, where the incident angle equals the reflected angle. There is also little absorption. This results in a rich multi-path channel that can be studied with several models. At optical frequencies, the scattering is diffuse, with multiple signal paths across different directions (Fig. <ref>(a)). The channel behavior is much different at THz. Here, the surface roughness becomes comparable to the wavelength, and hence, scattering shows both diffuse and specular behavior (Fig. <ref>(a)) <cit.>. This results in many scattered waves in addition to the primary reflected specular component, resulting in multiple signal paths across different directions. This is studied using the directive scattering (DS) model in <cit.>. Results show that the scattered power is higher at higher frequencies relative to the specularly reflected power. Interestingly, the strongest scattering happens when the wave hits the surface straight on. However, when the wave grazes the surface, the scattering sharply decreases, and most of the wave's energy is reflected instead <cit.>. THz links also suffer from high absorption upon reflection. This is demonstrated in <cit.> using the setup in Fig. <ref>(b). Here, the bit error rate (BER) is measured at 300 GHz for a 2-meter link involving reflection from a cinderblock wall, which exhibits both absorption and scattering.
The experiment is repeated with reflection from the wall coated with metal foil (which eliminates absorption) and finally from a smooth metal plate (which eliminates both absorption and diffuse scattering). The BER is plotted against input power for the three cases in Fig. <ref>(c). It can be observed that the smooth metal plate exhibits the best BER for a given power. The curve shifts right for the metal foil case, demonstrating the SNR degradation from diffuse scattering. The curve shifts significantly in the cinderblock case, suggesting high absorption. §.§ Antenna Considerations In order to compensate for the increased path loss and attenuation at THz frequencies, highly directive antennas are often required, and this naturally results in a communication link which is highly directional. This contrasts with lower frequencies, where conventional cellular antennas can broadcast over a wide coverage angle of 120° <cit.>. This stark contrast in the directionality of transmission introduces noteworthy challenges at high carrier frequencies, as already observed in mm-Wave 5G. A large-scale phased array may be used to realize high directivity. A linear N-element half-wavelength spaced antenna array experiences an increase in directivity of 10log_10(N) dB <cit.>. This can be used to create highly directive beams. Using a phased array architecture also makes it possible to have electronic beam steering, which is helpful in THz communication, where the channel is highly prone to blockage. Using a multi-antenna architecture also makes it possible to perform MIMO techniques such as spatial multiplexing, potentially across users <cit.>. It should be noted that beamforming should ideally be realized using true-time delays rather than phase shifters to avoid beam squint, which can become prominent in THz communication, where channels may consume a large fractional bandwidth <cit.>. MEMS-based mechanical steering is also being actively investigated as another possible solution <cit.>, though this would likely be most useful for point-to-point links where channel variations are slow. High directivity can also be realized using passive structures such as a lens antenna or a parabolic reflector <cit.>. This approach becomes advantageous at THz frequencies since higher frequencies result in greater antenna directivity, given a specific aperture area <cit.>.
This sizable distance indicates that the receiver can likely operate in the near-field, necessitating a redesign of conventional beam-steering algorithms <cit.>. For example, <cit.> proposes performing wavefront engineering for link maintenance, where so-called `airy' beams are created to curve around potential blockages in the near-field. It is worth noting that, while highly directional beams may indeed be formed at THz using the aforementioned methods, a key challenge remains in efficiently closing and reliably maintaining the link between two devices <cit.>. The overhead of traditional mm-Wave beam sweeping approaches to close the link may be unsuitable at THz, where beams are even more narrow and where path loss is even more severe. This necessitates new mechanisms and procedures for beam alignment, in order to ensure that communication between two devices can be established and maintained, even in mobile applications and when the environment is highly variable. § DEVICE TECHNOLOGY This section compares key semiconductor technologies suitable for designing circuits in the IEEE 802.15.3d band. `Technology' or a 'process' in this context refers to a specific semiconductor manufacturing process and its typical feature size. Examples of this include silicon CMOS (complementary metal-oxide-semiconductor), SiGe BiCMOS (Silicon-Germanium Bipolar CMOS), and III-V technologies like InP (Indium Phosphide) and GaAs (Gallium Arsenide) <cit.>. §.§ Metrics for Performance Characterization A key challenge in THz circuit design stems from the limitations in transistor performance <cit.>. At these frequencies, transistor performance degrades to the point where it can no longer provide amplification. Furthermore, the interconnects that allow for electrical connections to the transistor add significant parasitic resistance, capacitance, and inductance, further damaging overall performance. Transistors also exhibit non-quasistatic behavior at THz frequencies, making their modeling complicated. A technology-agnostic metric commonly used to characterize the transistor performance at high frequencies is the maximum oscillation frequency, f_max. For any 2-port (such as a transistor), f_max is the frequency at which the maximum available power gain (or the unilateral power gain) drops to zero <cit.>. In other words, amplifiers and oscillators cannot be built beyond the f_max of a technology. Circuit design becomes challenging beyond f_max/2. Amplifiers operating at these frequencies require several amplification stages. Consequently, power amplifiers at these frequencies have low saturated output power and low power-added efficiency, and low-noise amplifiers have high noise figures, both detrimental to RF transceiver performance. A higher f_max almost always implies a better communication link for the same DC power, enabling more energy-efficient communication. It should be noted that while amplification is not possible beyond f_max, it is possible to generate signals using non-linear circuit design techniques. However, these techniques usually have low efficiencies (0.5 %) <cit.>. This will be discussed later in Section VI. The transit frequency, f_T, is another metric for high-frequency technology characterization. It is the frequency at which the current gain of a transistor, with source and drain shorted, drops to unity. While the relative importance of the two parameters is debated, f_max is often seen as a better metric for high-frequency characterization as it accounts for more non-idealities <cit.>. 
Hence, throughout this survey, we will emphasize f_max as a standard for comparing technologies. §.§ Silicon CMOS Technology Silicon CMOS is usually the most popular choice when designing integrated circuits. This is mainly due to its low cost and high digital integration capabilities, enabling systems-on-chip (SoCs) with enhanced sensing and computing capabilities. A CMOS technology node is characterized by its feature size, which roughly describes the smallest dimension of the transistor gate length. For example, a 65nm CMOS node offers CMOS transistors with a minimum gate length of 65nm. Note that this naming convention is not strictly followed in recent technology nodes. CMOS technology has rapidly scaled to lower dimensions for the past several decades, following Moore's law. This has resulted in an exponentially growing computing power. However, CMOS scaling is driven by a desire to enhance digital CMOS performance. This does not necessarily translate to enhanced RF performance <cit.>. This behavior is illustrated in Fig. <ref>(a), where different transistor performance metrics are plotted for various technology nodes. Notice that the transistor count has increased exponentially and is accompanied by a reduction in digital block energy, pushing Moore's law forward. However, f_max reached its peak at 22nm, with an f_max of 370 GHz <cit.>. This trend is concerning for THz circuit designers since they can no longer rely on technology scaling to push to higher frequencies. This demands new circuit design techniques or switching to a non-CMOS platform. §.§ SiGe BiCMOS Technology The SiGe BiCMOS process is another popular choice for THz circuit design, as it provides high-performance HBTs (heterojunction bipolar transistors). These processes often utilize the same platform as a CMOS node, thereby inheriting the benefits associated with digital CMOS technology. Cutting-edge SiGe processes have achieved an f_max reaching up to 720 GHz <cit.>. However, it is essential to acknowledge that the CMOS transistors integrated within these nodes are typically from earlier generations. This makes it difficult to integrate complex digital processing into the same chip. For instance, the most advanced BiCMOS process currently offers CMOS transistors with a 45nm feature size, a technology that is over a decade old <cit.>. §.§ III-V Technologies III-V semiconductors, such as gallium arsenide (GaAs) and indium phosphide (InP), present numerous benefits over conventional silicon-based technologies. These advantages stem from higher electron mobility, higher breakdown voltages, and improved high-temperature performance. InP HBTs and HEMTs (high-electron-mobility transistors) are among the favored III-V technologies for THz circuit design, capable of reaching an f_max beyond 1 THz <cit.>. Northrop Grumman has showcased a 1 THz amplifier, as detailed in <cit.> (Fig. <ref>(b)). Despite these advantages, III-V technologies have not been as widely adopted as CMOS and SiGe due to their higher costs, lack of digital integration capabilities, and smaller available wafer sizes <cit.>. However, with the advent of 6G technology, III-V semiconductor technology could significantly grow in demand and maturity, addressing some of these challenges. §.§ Future Prospects Significant research efforts are underway to enhance device performance, highlighted by initiatives like DARPA's T-MUSIC program in North America <cit.> and the TARANTO project in Europe <cit.>. 
For circuit design in the THz band, adopting a “best junction for the function" strategy is often recommended (Fig. <ref>(c)) <cit.>. This strategy integrates advanced III-V technologies for the RF front-end with CMOS technology for digital components. Achieving this integration depends on advancements in packaging techniques and heterogeneous integration <cit.>. Resonant tunneling diodes <cit.>, traveling wave tubes <cit.>, and photonic techniques like quantum cascade lasers <cit.> have also been explored for generating THz signals. These advanced technologies offer promising avenues for THz applications but face high costs, complex fabrication processes, and integration and scalability issues <cit.>. Despite these challenges, ongoing research aims to improve their accessibility and compatibility with existing semiconductor processes, potentially paving the way for their increased use in future THz systems. § CIRCUIT DESIGN Circuit design techniques at THz frequencies differ notably from those used for RF and mm-Wave frequencies. This is primarily due to transistor performance degradation from the limited f_max <cit.>. Additionally, passive devices such as capacitors and inductors suffer from a high loss at these frequencies due to skin effect, self-resonance, and substrate coupling. Due to these limitations, designing amplifiers, local oscillators (LOs), and frequency synthesizers at THz frequencies is difficult. Harmonic techniques play a significant role in circuit design at these frequencies. Any non-linear system generates higher-order harmonic products when driven by a large-signal input. Conventionally, these harmonic products are undesired and need to be removed. However, these harmonics can prove useful at THz circuit design to generate signals beyond f_max. The greater the non-linearity, the stronger the higher-order harmonics, resulting in greater THz signal efficiency. Consider a non-linear device, such as a transistor, driven by an RF source. Strong harmonic products are generated if the source power is large enough to drive this device into non-linear operation. When an NMOS transistor is driven by a voltage V_in at frequency f_0, the drain current I can be expressed as I = I_0 + g_m1V_in + g_m2V_in^2 + g_m3V_in^3 + ... , where I_0 represents the DC, while g_m1V_in represents the output current at f_0 <cit.>. The output current includes higher-order terms generated due to transistor non-linearity, which are represented by g_m2V_in^2 and g_m3V_in^3. These terms contain harmonic signals at 2f_0 and 3f_0, respectively. These higher-order harmonic terms can be extracted and used to design THz signal sources. Note that while non-linearity can be used to design signal sources and synthesizers beyond f_max, it still cannot achieve amplification. Extensive research is underway within the circuit design community to improve the performance of transmitters, receivers, and local oscillators for THz operation. This section overviews various transmitter and receiver architectures for THz design. Understanding trade-offs within various architectures is crucial in choosing the correct modulation schemes to maximize data rates and spectral efficiency. §.§ Transmitter Architectures §.§.§ Power Amplifier (PA)-Last Architecture This is the conventional transmitter architecture, which is followed at RF frequencies. 
This architecture can support both amplitude and phase modulation schemes by incorporating a linear power amplifier as its final stage, enabling higher-order modulation schemes like M-QAM (Fig. <ref>(a)). However, due to the limited f_max, several amplification stages are needed to achieve sufficient amplification at THz frequencies. This impacts power consumption, amplifier stability, and efficiency <cit.>. For instance, <cit.> demonstrates a 290 GHz power amplifier in a 130nm BiCMOS process with a saturated output power of 7.5 dBm and a power added efficiency of just 0.39%. Because of this, PA-last techniques are usually avoided while designing in CMOS/SiGe platforms, which have a lower f_max. However, this architecture remains popular in InP transmitters <cit.>, which have relatively high f_max. §.§.§ Mixer-Last Architecture This architecture does not use a power amplifier. Instead, the final stage here is a mixer (Fig. <ref>(b)). A mixer behaves linearly for the input baseband signal but has non-linear behavior with the LO <cit.> and thus performs upconversion. Conventionally, the baseband signal is upconverted to the LO frequency. However, an N-th harmonic mixer can be designed, which upconverts the baseband signal to the N-th harmonic of the LO. This approach is capable of operating at frequencies beyond f_max and has been utilized in <cit.> and <cit.> with the second and third harmonics of the LO, respectively. This topology can support M-QAM modulation as the mixer behaves linearly and maintains the amplitude and phase information of the input baseband signal. However, mixers typically provide low-output saturated power. This affects the link budget and can thus limit the modulation order and viable link distance. §.§.§ Multiplier-Last Architecture This architecture utilizes a frequency multiplier as the final stage (Fig. <ref>(c)). A frequency multiplier is a circuit designed to maximize non-linearity to achieve high output power at the harmonic of the input. Frequency multipliers typically can achieve higher output power than a mixer and have been used in THz transceiver design <cit.>. However, this technique has a few noteworthy challenges. Firstly, amplitude distortion can occur due to the inherently non-linear nature of frequency multiplication. This makes it difficult to support amplitude modulation beyond two levels. Secondly, phase distortion can also occur during frequency multiplication, causing constellation points to rotate. For instance, an 8PSK signal upon doubling would result in constellation points corrupting each other as different input symbols get mapped to the same symbol after doubling. However, 8PSK can be preserved by tripling. This is illustrated in Fig. <ref> <cit.>. Although the constellation points change their position post-tripling, they do not cause signal corruption. Thus, only certain modulation schemes can be supported with specific multiplication indices in a multiplier-last transmitter <cit.>. Lastly, bandwidth expansion is a concern due to the non-linear frequency translation in a frequency multiplier (Fig. <ref>). This can cause the bandwidth occupied by the input baseband signal to expand by N times when passed through a multiply-by-N circuit <cit.>. It should be noted that the multiplier-last architectures can support 2-level amplitude modulation (such as OOK) without any of the problems mentioned above. This is utilized in <cit.> to create an efficient THz link at 400 GHz, for instance. 
§.§ Receiver Architectures At THz frequencies, there exist three commonly used receiver architectures: the low-noise amplifier (LNA)-first architecture (Fig. <ref>(d)) <cit.>, the mixer-first architecture (Fig. <ref>(e)) <cit.>, and the power-detector-based architecture (Fig. <ref>(f) <cit.>. Both LNA-first and mixer-first architectures share similar trade-offs with PA-last and mixer-last architectures. As they are linear, these architectures can accommodate M-QAM modulation schemes. Conversely, the power-detector-based architecture offers a straightforward method for amplitude demodulation while consuming low DC power. Additionally, it can be utilized for phase demodulation by applying a Kramers-Kronig processing technique, as exemplified in <cit.>. In terms of noise figure, LNA-first architectures offer the lowest, followed by mixer-first architectures. Power-detector-based approaches typically have high noise figures (often exceeding 30 dB) as the conversion gain depends on input received power. §.§ Summarizing Remarks The choice of the best THz transceiver architecture is still debated. However, some conclusions can be drawn based on the discussions above. * If a technology node with a high f_max is used for design (such as III-V), then a PA-last and LNA-first transceiver, with M-QAM modulation is optimal. * If one desires to perform M-QAM modulation in a technology with limited f_max (such as CMOS, SiGe), then mixer-last and mixer-first approaches should be used. While this can support high data rates, the communication link distance can be limited since this approach suffers from low saturated output power and a high noise figure. * If the demand is for a robust link with moderate data rates in a technology with limited f_max (such as CMOS, SiGe), then the multiplier last approach is optimal, as it can support high output power. On the receiver end, a mixer-first architecture may be optimal due to its lower noise figure compared to the detector-first approach. § ANTENNAS AND PACKAGING A 300 GHz signal has a wavelength of 1 mm in free space. This enables the design of large antenna arrays within a small area. Moreover, this wavelength decreases in a dielectric medium. For example, in silicon dioxide—a widely used dielectric in technologies like silicon CMOS and SiGe BiCMOS—the wavelength of a 300 GHz signal is reduced to just 0.5 mm due to a dielectric constant of 3.9 <cit.>. This reduction enables the design of a large antenna array within an integrated circuit (IC). For instance, a 144-element on-chip array is demonstrated at 670 GHz in <cit.>. Efficient antennas and their interfacing methods are crucial. In an RF signal chain, the LNA and PA interface with the antenna. Inefficiencies in the antenna or its interfacing can lead to increased noise figure in the LNA and decreased efficiency and output power in the PA, highlighting the importance of developing high-efficiency antennas and employing low-loss packaging techniques for interfacing <cit.>. Besides this, packages also need to provide mechanical support and protection from external conditions and facilitate easy thermal dissipation while enabling easy interfaces with DC biases and I/O control signals. This section explores the latest advancements in antenna design and packaging approaches at THz frequencies. While our primary emphasis is silicon ICs, the techniques and principles discussed apply to a broad spectrum of other technologies. 
§.§ Integrated On-Chip Antennas Given the reduced wavelength in silicon, it is feasible to integrate antennas directly within integrated circuits (ICs typically have a size of a few millimeters). These antennas are often designed in thick top metal layers to mitigate conductive losses. A variety of antenna designs—including dipole, slot, ring, cavity, leaky-wave, and patch antennas—have been successfully implemented on-chip <cit.>. For example, Fig. <ref>(a) demonstrates an on-chip bowtie antenna implemented in a 130nm SiGe BiCMOS platform. <cit.>. These antennas can be designed to radiate from the top or back sides of the IC, leading to different design strategies, which are discussed below. §.§.§ Top-Side Radiation Antennas that radiate from the top typically exhibit lower efficiencies. This is because of the significant mismatch in intrinsic impedances between the antenna medium (silicon dioxide) and air <cit.>. An effective strategy to enhance radiation efficiency involves the addition of a quartz superstrate atop the antenna, a technique that has demonstrated improved radiation efficiency <cit.>. This is demonstrated in Fig. <ref>(b). While this method does necessitate some post-fabrication processing, the associated costs are relatively low. §.§.§ Back-Side Radiation Radiation from the back side generally achieves higher efficiency compared to top-side radiation. This improvement is attributable to the high dielectric constant of the silicon substrate beneath the silicon dioxide layer, which lowers intrinsic impedance. However, back-radiation faces challenges, as unwanted surface waves can get generated within the silicon substrate. To address this, a hyper-hemispherical silicon lens can be added, which can significantly boost efficiency and directivity <cit.>. This is demonstrated in Fig. <ref>(c). Yet, adopting silicon lenses has drawbacks: they are costly and require precise alignment. When utilized with an array, lenses typically suffer from poor performance due to off-axis effects, where the phase center of individual antenna elements does not align with the lens's phase center <cit.>. Recent advancements have seen the use of 3D-printed Teflon lenses, which offer enhanced directivity owing to the design flexibility afforded by 3D printing in creating optimal lens shapes <cit.>. Additionally, the exploration of metasurface-based planar lenses represents a growing field of study <cit.>. §.§.§ Antenna on PCB Antennas can be integrated onto printed circuit boards (PCBs), offering a cost-effective and flexible solution <cit.>. Fig. <ref>(d) demonstrates a Vivaldi antenna array at 300 GHz, implemented by stacking PCBs over one another. However, this approach has notable drawbacks. First, connecting the chip to the PCB requires either bond wires or flip-chip packaging using copper bumps. Both methods can adversely affect impedance matching and increase signal loss. Second, the material typically used in PCBs exhibits high loss at THz frequencies, potentially lowering antenna efficiency. Additionally, the manufacturing resolution available at most PCB facilities may not meet the precise requirements for designing antennas at these high frequencies, particularly for those with sub-millimeter dimensions. §.§.§ Micro-Machined Waveguide Antennas Waveguide antennas are known for their exceptional performance at THz frequencies. However, interfacing these antennas with ICs presents technical challenges. 
Typically, bond wires or copper bumps are used to deliver the THz signal to an EM coupler, which then excites the waveguide antenna. This is demonstrated in Fig. <ref>(e). However, achieving consistent impedance matching proves complex in this approach <cit.>. An alternative strategy involves embedding the EM coupler directly within the IC, enabling it to directly generate the required EM mode for the antenna <cit.>. Despite the high quality of antennas and interfaces this method yields, it is expensive due to the high costs involved in micro-machining at such fine dimensions. §.§.§ Other Techniques Dielectric waveguides (DWG) have attracted attention recently, where a low-loss polymeric medium is used to guide electromagnetic waves at THz frequencies. DWGs exhibit low loss and can hence potentially replace fiber in medium-range links <cit.>. However, it is important to note that DWGs are not strictly a wireless technology since it requires a dielectric channel to guide the EM wave. Another area of significant interest is the development of reconfigurable intelligent surfaces (RIS). Designing these advanced structures to operate at THz frequencies presents substantial challenges, primarily because conventional switches fail to function at these high frequencies, complicating reconfigurability. Despite this, there have been demonstrations of THz RIS structures such as metasurfaces for holographic projections <cit.> (Fig. <ref>(f)) and reflectarrays using 1-bit phase shifter for high-resolution radar <cit.>. § THZ TRANSCEIVER DEMONSTRATIONS The focus on THz frequencies has been driven by their largely untapped potential, evidenced by practical applications such as the use of frequencies around 120 GHz for data transmission during the Beijing Olympics <cit.> and the Air Force Research Laboratory's (AFRL) experiments on propagation loss between aircrafts at frequencies of 300 GHz <cit.>. These applications, among others, underscore the significant interest and ongoing efforts within the field to overcome the inherent challenges of operating at such high frequencies. This section delves into a few prototype transceivers above 200 GHz developed by academic and industrial research institutions. We highlight some notable demos that have successfully demonstrated a complete over-the-air (OTA) wireless link, encompassing both transmitter and receiver components, capable of achieving multi-Gbps transmission rates. §.§ Hiroshima University and NTT Hiroshima University and Nippon Telegraph and Telephone (NTT) have showcased a series of transceivers within the 300 GHz band, using a 40nm CMOS technology platform <cit.>. Notably, <cit.> presents a complete transceiver capable of sustaining a 3 cm over-the-air (OTA) link with a data rate of 80 Gbps while achieving an energy efficiency of 22.3 pJ/bit. This transceiver supports advanced modulation schemes, including QPSK, 16QAM, and 64QAM. Note that this prototype does not include a packaged antenna and interfacing. Instead, the OTA link was tested by interfacing the chip with external waveguide horn antennas via waveguide probes. Further advancements by the same group are detailed in <cit.>, where the authors demonstrate a receiver module that incorporates packaging. In this development, the 40nm CMOS chip is flip-chip bonded onto a PCB and interfaces with waveguide antennas (Fig. <ref>(e)). This packaged receiver module can achieve 76 Gbps using 16QAM modulation over a 6 cm distance and 4.32 Gbps with QPSK modulation over a 1 meter OTA distance. 
§.§ BiCMOS Transceiver from University of Wuppertal The work in <cit.> showcases a BiCMOS transceiver operating at 230 GHz. The transceiver is equipped with on-chip ring antennas and a silicon lens for optimal radiation, while heat-sinking techniques are implemented to regulate thermal performance (Fig. <ref>(a)). Notably, this transceiver can attain a data transmission rate of 100 Gbps using 16QAM modulation over a 1-meter link while maintaining an energy efficiency of 14 pJ/bit. §.§ Beam-Steerable Phased Array from TokyoTech TokyoTech has successfully showcased beam steering within the 300 GHz band (242–280 GHz), utilizing a 65nm CMOS technology platform <cit.>. The prototype features a CMOS transceiver paired with PCB-based Vivaldi antennas and can support QPSK and 16QAM modulation schemes (Fig. <ref>(d)). By stacking four of these PCBs, the team has managed to steer the beam across a 36-degree angle. Furthermore, the design can also operate with standard horn antennas. OTA tests have resulted in data transmission speeds of 16 Gbps over a distance of 3.5 cm while achieving an energy efficiency of 93.75 pJ/bit. Additionally, the team recently demonstrated a new 2-D phased array transmitter by stacking multiple PCBs and chips with on-chip Vivaldi antennas <cit.> (Fig. <ref>(b)). §.§ NTT and TokyoTech NTT and TokyoTech have demonstrated a 300 GHz heterodyne transceiver using their in-house 80nm InP HEMT platform <cit.>. It consists of custom-designed PAs and mixer modules, which are all packaged in individual waveguide modules using ridge couplers (Fig. <ref>(c)). Using external high-gain lens antennas, the transceiver achieves OTA data transmission of a 120 Gb/s 16QAM signal over a link distance of 9.8 m (Fig. <ref>(d)), with an energy efficiency of 92.5 pJ/bit. §.§ Multi-Kilometer Link from Northeastern University Northeastern University has showcased a long-distance, high-speed link at 225 GHz for wireless backhaul <cit.>. The link supports over 2 Gbps at frequencies between 210–230 GHz across a 2 km OTA outdoor environment, utilizing a 200 mW terahertz signal source and broadband low-noise balanced mixers employing Schottky diode technology developed by NASA (Fig. <ref>(e)). The system employs a highly directional lens antenna (with over 40 dBi gain) for the outdoor link setup. It also integrates custom communication and signal processing strategies into a software-defined, ultra-wideband (30 GHz) digital baseband system. These cutting-edge results prove the feasibility of establishing long-distance terahertz links in real-world outdoor settings. §.§ 400 GHz Transceiver from UCLA UCLA has demonstrated a multi-Gbps wireless transceiver at 400 GHz using a 90nm SiGe BiCMOS process <cit.>. This work utilizes silicon PIN diodes that show strong non-linearity and can hence generate THz signals with high DC-to-THz generation efficiency. The prototype supports OOK modulation and incorporates on-chip antennas paired with a silicon lens. In OTA experiments, this prototype achieved 5 Gb/s of data transmission over a link distance of 5 cm (Fig. <ref>), with an energy efficiency of 52.8 pJ/bit. Notably, this is the first instance of a fully integrated multi-Gb/s wireless transceiver operating above 300 GHz in silicon. Additionally, by employing external mirrors for collimation, UCLA demonstrated a transmitter that can support 3 Gbps over a 20-meter link <cit.>.
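The energy-efficiency figures quoted in these demonstrations translate directly into DC power draw, since energy per bit multiplied by data rate gives watts. The short Python sketch below is ours; it only re-uses numbers quoted in this section and makes the comparison explicit:

# DC power implied by the reported efficiencies: (bit/s) * (J/bit) = W
demos = {
    "Hiroshima/NTT, 80 Gbps":  (80e9, 22.3e-12),
    "Wuppertal, 100 Gbps":     (100e9, 14e-12),
    "TokyoTech, 16 Gbps":      (16e9, 93.75e-12),
    "NTT/TokyoTech, 120 Gbps": (120e9, 92.5e-12),
    "UCLA, 5 Gbps":            (5e9, 52.8e-12),
}
for name, (rate_bps, j_per_bit) in demos.items():
    print(f"{name:24s} -> {rate_bps * j_per_bit:5.2f} W")
# Roughly 1.8 W, 1.4 W, 1.5 W, 11.1 W and 0.26 W respectively

The spread of nearly two orders of magnitude in implied power, for links ranging from a few centimeters to about ten meters, illustrates why energy efficiency remains one of the main open challenges noted in the conclusion below.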
§ CONCLUSION The potential to transform wireless communication and sensing by operating at THz carrier frequencies has spawned research and development across the globe, with eyes set on enabling future 6G networks. In this survey, we provide a comprehensive summary of recent advancements in THz technology, spanning state-of-the-art systems, circuits, devices, and antennas, with emphasis on the IEEE 802.15.3d standard and the characteristics of the 300 GHz channel, while also calling attention to directions needing further development. We also highlight notable experimental demonstrations of THz technology from across the globe, which reveal outstanding progress but also noteworthy shortcomings related to energy efficiency and reliability. Worthwhile directions for future research include developing process technologies that can provide devices with high f_max for the THz front-end, alongside advanced sub-micron CMOS for the digital back-end. There is also a need for novel circuit topologies with improved efficiency and antennas that offer good performance without requiring expensive post-processing and precise machining. Furthermore, novel algorithms are needed to facilitate reliable operation in THz channels, which are often directive, lossy, and prone to blockage. As THz technology continues to mature, it is crucial that we conduct thorough investigations to understand the potential long-term biological effects of THz radiation, so that safe exposure limits can be established. Additionally, it is imperative that THz systems be deployed so that they do not interfere with radio astronomy and Earth observation satellites, which monitor climate change, natural disasters, and other atmospheric phenomena. Furthermore, the potential for increased surveillance capabilities through THz sensing, together with the heightened risk of cyber-attacks, demands advanced security protocols and encryption methods to protect sensitive information. Finally, while THz technologies hold the potential to revolutionize internet access and connectivity, it is crucial not to worsen the existing digital divide and to ensure equitable access to these breakthroughs. Despite these practical hurdles, global enthusiasm for THz communication and sensing raises hope that it will indeed be a core component in revolutionizing future generations of connectivity, ushering in an era of unprecedented data speeds, lower latency, and more reliable wireless communication. thz_rappaport T. S. Rappaport, Y. Xing, O. Kanhere, S. Ju, A. Madanayake, S. Mandal, A. Alkhateeb, and G. C. Trichopoulos, “Wireless communications and applications above 100 GHz: opportunities and challenges for 6G and beyond,” IEEE Access, vol. 7, pp. 78729–78757, 2019. review_what_should_6G S. Dang, O. Amin, B. Shihada, and M.-S. Alouini, “What should 6G be?” Nature Electronics, vol. 3, no. 1, pp. 20–29, 2020. cisco T. Barnett, S. Jain, U. Andra, and T. Khurana, “Cisco visual networking index (VNI) complete forecast update, 2017-2022,” Americas/EMEAR Cisco Knowledge Network (CKN) Presentation, pp. 1–30, 2018. imt Int. Telecommun. Union, ITU Recommendation, “IMT traffic estimates for the years 2020 to 2030,” vol. M.2370, no. 0, 2015. 6g_survey_bs M. Z. Chowdhury, M. Shahjalal, S. Ahmed, and Y. M. Jang, “6G wireless communication systems: Applications, requirements, technologies, challenges, and research directions,” IEEE Open Journal of the Communications Society, vol. 1, pp. 957–975, 2020. review_towards M. Giordani, M. Polese, M. Mezzavilla, S. Rangan, and M. Zorzi, “Toward 6G networks: Use cases and technologies,” IEEE Communications Magazine, vol. 58, no. 3, pp. 55–61, 2020. jornet_nature_survey J. M. Jornet, E. W. Knightly, and D. M. Mittleman, “Wireless communications sensing and security above 100 GHz,” Nature Communications, vol. 14, no. 1, p. 841, 2023. athz_ref_1 I. F. Akyildiz, C. Han, Z. Hu, S. Nie, and J. M. Jornet, “Terahertz band communication: An old problem revisited and research directions for the next decade,” IEEE Transactions on Communications, vol. 70, no. 6, pp. 4250–4285, 2022. athz_ref_2 I. F. Akyildiz, J. M. Jornet, and C. Han, “Terahertz band: Next frontier for wireless communications,” Physical Communication, vol. 12, pp. 16–32, 2014. athz_ref_3 H. Elayan, O. Amin, B. Shihada, R. M. Shubair, and M.-S. Alouini, “Terahertz band: The last piece of RF spectrum puzzle for communication systems,” IEEE Open Journal of the Communications Society, vol. 1, pp. 1–32, 2020. ehsan_imaging A. Mostajeran, H. Aghasi, S. M. H. Naghavi, and E. Afshari, “Fully integrated solutions for high resolution terahertz imaging (invited),” in 2019 IEEE Custom Integrated Circuits Conference (CICC), 2019, pp. 1–8. tonouchi2007cutting M. Tonouchi, “Cutting-edge terahertz technology,” Nature Photonics, vol. 1, no. 2, pp. 97–105, 2007. dband_rebeiz S. Li, Z. Zhang, B. Rupakula, and G. M. Rebeiz, “An eight-element 140-GHz wafer-scale IF beamforming phased-array receiver with 64-QAM operation in CMOS RFSOI,” IEEE Journal of Solid-State Circuits, vol. 57, no. 2, pp. 385–399, 2022. dband_nokia M. Elkhouly, J. Ha, M. J. Holyoak, D. Hendry, M. Sayginer, R. Enright, I. Kimionis, Y. Baeyens, and S.
Shahramian, “Fully integrated 2D scalable tx/rx chipset for D-band phased-array-on-glass modules,” in 2022 IEEE International Solid-State Circuits Conference (ISSCC), vol. 65, 2022, pp. 76–78. dband_intel S. Callender, A. Whitcombe, A. Agrawal, R. Bhat, M. Rahman, C. C. Lee, P. Sagazio, G. Dogiamis, B. Carlton, M. Chakravorti, S. Pellerano, and C. Hull, “A fully integrated 160Gb/s D-band transmitter with 1.1 pj/b efficiency in 22nm FinFET technology,” in 2022 IEEE International Solid-State Circuits Conference (ISSCC), vol. 65, 2022, pp. 78–80. fuji_1 K. Katayama, K. Takano, S. Amakawa, S. Hara, A. Kasamatsu, K. Mizuno, K. Takahashi, T. Yoshida, and M. Fujishima, “A 300 GHz CMOS transmitter with 32-QAM 17.5 Gb/s/ch capability over six channels,” IEEE Journal of Solid-State Circuits, vol. 51, no. 12, pp. 3037–3048, 2016. fuji_2 S. Lee, S. Hara, T. Yoshida, S. Amakawa, R. Dong, A. Kasamatsu, J. Sato, and M. Fujishima, “An 80-Gb/s 300-GHz-band single-chip CMOS transceiver,” IEEE Journal of Solid-State Circuits, vol. 54, no. 12, pp. 3577–3588, 2019. fuji_3 S. Hara, R. Dong, S. Lee, K. Takano, N. Toshida, A. Kasamatsu, K. Sakakibara, T. Yoshida, S. Amakawa, and M. Fujishima, “A 76-Gbit/s 265-GHz CMOS receiver with WR-3.4 waveguide interface,” IEEE Journal of Solid-State Circuits, vol. 57, no. 10, pp. 2988–2998, 2022. okada_300 I. Abdo, C. da Gomez, C. Wang, K. Hatano, Q. Li, C. Liu, K. Yanagisawa, A. A. Fadila, T. Fujimura, T. Miura, K. K. Tokgoz, J. Pang, H. Hamada, H. Nosaka, A. Shirane, and K. Okada, “A bi-directional 300-ghz-band phased-array transceiver in 65-nm cmos with outphasing transmitting mode and LO emission cancellation,” IEEE Journal of Solid-State Circuits, vol. 57, no. 8, pp. 2292–2308, 2022. okada_2d C. Wang, H. Herdian, W. Zheng, C. Liu, J. Mayeda, Y. Liu, O. A. Yong, W. Wang, Y. Zhang, C. D. Gomez, A. Shehata, S. Kato, I. Abdo, T. Jyo, H. Hamada, H. Takahashi, H. Sakai, A. Shirane, and K. Okada, “24.3 a 236-to-266 GHz 4-element amplifier-last phased-array transmitter in 65nm CMOS,” in 2024 IEEE International Solid-State Circuits Conference (ISSCC), vol. 67, 2024, pp. 415–417. pfieffer P. Rodríguez-Vázquez, J. Grzyb, B. Heinemann, and U. R. Pfeiffer, “A 16-QAM 100-Gb/s 1-m wireless link with an EVM of 17% at 230 GHz in an SiGe technology,” IEEE Microwave and Wireless Components Letters, vol. 29, no. 4, pp. 297–299, 2019. 400g_thomas S. Thomas, S. Razavian, J. S. Virdi, W. Sun, B. F. Motlagh, and A. Babakhani, “A 400-GHz efficient radiator and OOK transceiver for multi-Gb/s wireless communication in silicon,” IEEE Journal of Solid-State Circuits, pp. 1–17, 2024. reynart_outphasing A. Standaert and P. Reynaert, “A 390-GHz outphasing transmitter in 28-nm CMOS,” IEEE Journal of Solid-State Circuits, vol. 55, no. 10, pp. 2703–2713, 2020. isscc_6Gforum “F3: The path to 6G: Architectures, circuits, technologies for sub-thz communications, sensing and imaging,” in 2022 IEEE International Solid-State Circuits Conference (ISSCC), vol. 65, 2022, pp. 534–536. ieee_std “802.15.3d-2017 - IEEE standard for high data rate wireless multi-media networks–amendment 2: 100 gb/s wireless switched point-to-point physical layer,” IEEE Std 802.15.3d-2017 (Amendment to IEEE Std 802.15.3-2016 as amended by IEEE Std 802.15.3e-2017), pp. 1–55, 2017. petrov2020ieee V. Petrov, T. Kurner, and I. Hosako, “IEEE 802.15.3d: First standardization efforts for sub-terahertz band communications toward 6G,” IEEE Communications Magazine, vol. 58, no. 11, pp. 28–33, 2020. thz_survey_sp H. Sarieddeen, M.-S. 
Alouini, and T. Y. Al-Naffouri, “An overview of signal processing techniques for terahertz communications,” Proceedings of the IEEE, vol. 109, no. 10, pp. 1628–1665, 2021. thz_survey_ch Z. Chen, X. Ma, B. Zhang, Y. Zhang, Z. Niu, N. Kuang, W. Chen, L. Li, and S. Li, “A survey on terahertz communications,” China Communications, vol. 16, no. 2, pp. 1–35, 2019. thz_survey_new W. Jiang, Q. Zhou, J. He, M. A. Habibi, S. Melnyk, M. El-Absi, B. Han, M. D. Renzo, H. D. Schotten, F.-L. Luo, T. S. El-Bawab, M. Juntti, M. Debbah, and V. C. M. Leung, “Terahertz communications and sensing for 6G and beyond: A comprehensive review,” IEEE Communications Surveys & Tutorials, pp. 1–1, 2024. goldsmith2005wireless A. Goldsmith, Wireless Communications. Cambridge University Press, 2005. rappaport_5G T. S. Rappaport, S. Sun, R. Mayzus, H. Zhao, Y. Azar, K. Wang, G. N. Wong, J. K. Schulz, M. Samimi, and F. Gutierrez, “Millimeter wave mobile communications for 5G cellular: It will work!” IEEE Access, vol. 1, pp. 335–349, 2013. review_white_paper N. Rajatheva, I. Atzeni, E. Bjornson, A. Bourdoux, S. Buzzi, J.-B. Dore, S. Erkucuk, M. Fuentes, K. Guan, Y. Hu et al., “White paper on broadband connectivity in 6G,” arXiv preprint arXiv:2004.14247, 2020. 6g_iot D. C. Nguyen, M. Ding, P. N. Pathirana, A. Seneviratne, J. Li, D. Niyato, O. Dobre, and H. V. Poor, “6G internet of things: A comprehensive survey,” IEEE Internet of Things Journal, vol. 9, no. 1, pp. 359–383, 2022. cudak21integrated M. Cudak, A. Ghosh, A. Ghosh, and J. G. Andrews, “Integrated access and backhaul: A key enabler for 5G millimeter-wave deployments,” IEEE Communications Magazine, vol. 59, no. 4, pp. 88–94, Apr. 2021. polese2020integrated M. Polese, M. Giordani, T. Zugno, A. Roy, S. Goyal, D. Castor, and M. Zorzi, “Integrated access and backhaul in 5G mmWave networks: Potential and challenges,” IEEE Communications Magazine, vol. 58, no. 3, pp. 62–68, Mar. 2020. jornet_km P. Sen, J. V. Siles, N. Thawdar, and J. M. Jornet, “Multi-kilometre and multi-gigabit-per-second sub-terahertz communications for wireless backhaul applications,” Nature Electronics, vol. 6, no. 2, pp. 164–175, 2023. twins A. Alkhateeb, S. Jiang, and G. Charan, “Real-time digital twins: Vision and research directions for 6G and beyond,” IEEE Communications Magazine, vol. 61, no. 11, pp. 128–134, 2023. thz_ref_5 M. Giordani, M. Polese, M. Mezzavilla, S. Rangan, and M. Zorzi, “Toward 6G networks: Use cases and technologies,” IEEE Communications Magazine, vol. 58, no. 3, pp. 55–61, 2020. thz_ref_9 X. Xu, Y. Pan, P. P. M. Y. Lwin, and X. Liang, “3D holographic display and its data transmission requirement,” in 2011 International Conference on Information Photonics and Optical Communications, 2011, pp. 1–4. thz_ref_10 Q. Zhang, J. Liu, and G. Zhao, “Towards 5G enabled tactile robotic telesurgery,” arXiv preprint arXiv:1803.03586, 2018. kiosk H.-J. Song, H. Hamada, and M. Yaita, “Prototype of kiosk data downloading system at 300 GHz: Design, technical feasibility, and results,” IEEE Communications Magazine, vol. 56, no. 6, pp. 130–136, 2018. pozar D. M. Pozar, Microwave Engineering. John Wiley & Sons, 2011. jcas T. Wild, V. Braun, and H. Viswanathan, “Joint design of communication and sensing for beyond 5G and 6G systems,” IEEE Access, vol. 9, pp. 30845–30857, 2021. Ericsson2021JCAS Ericsson. (2021, October) Joint sensing and communication: The foundation of 6G. <https://www.ericsson.com/en/blog/2021/10/joint-sensing-and-communication-6g>. scatter_rappaport T. S. Rappaport, R. W. Heath Jr, R. C. Daniels, and J. N. Murdock, Millimeter Wave Wireless Communications. Pearson Education, 2015. sengupta_Secure S. Venkatesh, H. Saeidi, K. Sengupta, and X. Lu, “Millimeter-wave physical layer security through space-time modulated transmitter arrays,” in 2022 IEEE 22nd Annual Wireless and Microwave Technology Conference (WAMICON), 2022, pp. 1–4. han_oam M. I. W. Khan, J. Woo, X. Yi, M. I. Ibrahim, R. T. Yazicigil, A. P. Chandrakasan, and R. Han, “A 0.31-THz orbital-angular-momentum (OAM) wave transceiver in CMOS with bits-to-OAM mode mapping,” IEEE Journal of Solid-State Circuits, vol. 57, no. 5, pp. 1344–1357, 2022. oam_wei W. Sun, S. Thomas, and A. Babakhani, “A 360 GHz single-element multi-mode orbital angular momentum cavity antenna-based transmitter in 90nm SiGe BiCMOS,” in 2024 IEEE Radio Frequency Integrated Circuits Symposium (RFIC), 2024. satellite J. Y. Suen, “Terabit-per-second satellite links: A path toward ubiquitous terahertz communication,” Journal of Infrared, Millimeter, and Terahertz Waves, vol. 37, no. 7, pp. 615–639, 2016. space I. Mehdi, J. Siles, C. P. Chen, and J. M. Jornet, “THz technology for space communications,” in 2018 Asia-Pacific Microwave Conference (APMC), 2018, pp. 76–78. attenuation ITU-R, “Attenuation by atmospheric gases and related effects,” Recommendation ITU-R P.676-12, 2019. hitran I. E. Gordon, L. S. Rothman, R. Hargreaves, R. Hashemi, E. V. Karlovets, F. Skinner, E. K. Conway, C. Hill, R. V. Kochanov, Y. Tan et al., “The HITRAN2020 molecular spectroscopic database,” Journal of Quantitative Spectroscopy and Radiative Transfer, vol. 277, p. 107949, 2022. thz_spectro M. M. Assefzadeh, B. Jamali, A. K. Gluszek, A. J. Hudzikowski, J. Wojtas, F. K. Tittel, and A. Babakhani, “Terahertz trace gas spectroscopy based on a fully-electronic frequency-comb radiating array in silicon,” in 2016 Conference on Lasers and Electro-Optics (CLEO), 2016, pp. 1–2. channel_survey D. Serghiou, M. Khalily, T. W. C. Brown, and R. Tafazolli, “Terahertz channel propagation phenomena, measurement techniques and modeling for 6G wireless communication applications: A survey, open challenges and future research directions,” IEEE Communications Surveys & Tutorials, vol. 24, no. 4, pp. 1957–1996, 2022. rain_attenuation Z. Qingling and J. Li, “Rain attenuation in millimeter wave ranges,” in 2006 7th International Symposium on Antennas, Propagation & EM Theory, 2006, pp. 1–4. rain J. Ma, F. Vorrius, L. Lamb, L. Moeller, and J. F. Federici, “Comparison of experimental and theoretically determined terahertz attenuation in controlled rain,” Journal of Infrared, Millimeter, and Terahertz Waves, vol. 36, pp. 1195–1202, 2015. snow P. Sen, J. Hall, M. Polese, V. Petrov, D. Bodet, F. Restuccia, T. Melodia, and J. M. Jornet, “Terahertz communications can work in rain and snow: Impact of adverse weather conditions on channels at 140 GHz,” in Proceedings of the 6th ACM Workshop on Millimeter-Wave and Terahertz Networks and Sensing Systems, 2022, pp. 13–18. rappaport2024wireless T. S. Rappaport, Wireless Communications: Principles and Practice. Cambridge University Press, 2024. ds_scatter_rappaport S. Ju, S. H. A. Shah, M. A. Javed, J. Li, G. Palteru, J. Robin, Y. Xing, O. Kanhere, and T. S. Rappaport, “Scattering mechanisms and modeling for terahertz wireless communications,” in ICC 2019 - 2019 IEEE International Conference on Communications (ICC), 2019, pp. 1–7. scatter_meas J. Ma, R. Shrestha, L. Moeller, and D. M. Mittleman, “Invited article: Channel performance for indoor and outdoor terahertz wireless links,” APL Photonics, vol. 3, no. 5, 2018. rodwell2013sub M. Rodwell, “Sub-mm-wave technologies: Systems, ICs, THz transistors,” in 2013 Asia-Pacific Microwave Conference Proceedings (APMC). IEEE, 2013, pp. 509–511. balanis2016antenna C. A. Balanis, Antenna Theory: Analysis and Design. John Wiley & Sons, 2016. thz_mimo A. Faisal, H. Sarieddeen, H. Dahrouj, T. Y. Al-Naffouri, and M.-S. Alouini, “Ultramassive MIMO systems at terahertz bands: Prospects and challenges,” IEEE Vehicular Technology Magazine, vol. 15, no. 4, pp. 33–42, 2020. true_time B. Govind, T. Tapen, and A. Apsel, “Ultra-compact quasi-true time delay for boosting wireless channel capacity,” Nature, vol. 627, no. 8002, pp. 88–94, 2024. beamsteering X. Fu, F. Yang, C. Liu, X. Wu, and T. J. Cui, “Terahertz beam steering technologies: from phased arrays to field-programmable metasurfaces,” Advanced Optical Materials, vol. 8, no. 3, p. 1900628, 2020. antenna_review R. Xu, S. Gao, B. S. Izquierdo, C. Gu, P. Reynaert, A. Standaert, G. J. Gibbons, W. Bösch, M. E. Gadringer, and D. Li, “A review of broadband low-cost and high-gain low-terahertz antennas for wireless communications applications,” IEEE Access, vol. 8, pp. 57615–57629, 2020. airy H. Guerboukha, B. Zhao, Z. Fang, E. Knightly, and D. M. Mittleman, “Curving THz wireless data links around obstacles,” Communications Engineering, vol. 3, no. 1, pp. 1–8, 2024. ethan_beam Y. Heng, J. G. Andrews, J. Mo, V. Va, A. Ali, B. L. Ng, and J. C. Zhang, “Six key challenges for beam management in 5.5G and 6G systems,” IEEE Wireless Communications Magazine, vol. 59, no. 7, pp. 74–79, Jul. 2021. dev_sengupta_nature K. Sengupta, T. Nagatsuma, and D. M. Mittleman, “Terahertz integrated electronic and hybrid electronic-photonic systems,” Nature Electronics, vol. 1, no. 12, pp. 622–635, 2018. dev_sengupta K. Sengupta, “Integrated circuits for terahertz communication beyond 100 GHz: are we there yet?” in 2019 IEEE International Conference on Communications Workshops (ICC Workshops), 2019, pp. 1–6. swami_dev E. Seok, D. Shim, C. Mao, R. Han, S. Sankaran, C. Cao, W. Knap, and K. K. O, “Progress and challenges towards terahertz CMOS integrated circuits,” IEEE Journal of Solid-State Circuits, vol. 45, no. 8, pp. 1554–1564, 2010. niknejad_dev C. Doan, S. Emami, A. Niknejad, and R. Brodersen, “Millimeter-wave CMOS design,” IEEE Journal of Solid-State Circuits, vol. 40, no. 1, pp. 144–155, 2005. nauta B. Nauta, “1.2 Racing down the slopes of Moore's law,” in 2024 IEEE International Solid-State Circuits Conference (ISSCC), 2024. inp_3 X. Mei, W. Yoshida, M. Lange, J. Lee, J. Zhou, P.-H. Liu, K. Leong, A. Zamora, J. Padilla, S. Sarkozy, R. Lai, and W. R. Deal, “First demonstration of amplification at 1 THz using 25-nm InP high electron mobility transistor process,” IEEE Electron Device Letters, vol. 36, no. 4, pp. 327–329, 2015. hetero_1 A. Gutierrez-Aitken, “Heterogeneous integration for high frequency RF applications,” in 2020 IEEE International Electron Devices Meeting (IEDM). IEEE, 2020, pp. 34–5. GF_FDSOI S. Ong, L. Chan, K. Chew, C. Lim, W. L. Oo, A. Bellaouar, C. Zhang, W. Chow, T. Chen, R. Rassel, J. Wong, C. Wan, J. Kim, W. Seet, and D. Harame, “22nm FD-SOI technology with back-biasing capability offers excellent performance for enabling efficient, ultra-low power analog and RF/millimeter-wave designs,” in 2019 IEEE Radio Frequency Integrated Circuits Symposium (RFIC), 2019, pp. 323–326. hbt_ihp B. Heinemann, H. Rücker, R. Barth, F. Bärwolf, J. Drews, G. G. Fischer, A. Fox, O. Fursenko, T. Grabolla, F. Herzel, J. Katzer, J. Korn, A. Krüger, P. Kulse, T. Lenke, M. Lisker, S. Marschmeyer, A. Scheit, D. Schmidt, J. Schmidt, M. A. Schubert, A. Trusch, C. Wipf, and D. Wolansky, “SiGe HBT with f_T/f_max of 505 GHz/720 GHz,” in 2016 IEEE International Electron Devices Meeting (IEDM), 2016, pp. 3.1.1–3.1.4. hbt_gf J. Pekarik, V. Jain, C. Kenney, J. Holt, S. Khokale, S. Saroop, J. B. Johnson, K. Stein, V. Ontalus, C. Durcan, M. Nafari, T. Nesheiwat, S. Saudari, E. Yarmoghaddam, S. Chaurasia, and A. Joseph, “SiGe HBTs with f_T/f_max ∼ 375/510GHz integrated in 45nm PDSOI CMOS,” in 2021 IEEE BiCMOS and Compound Semiconductor Integrated Circuits and Technology Symposium (BCICTS), 2021, pp. 1–4. inp_1 R. Lai, X. Mei, W. Deal, W. Yoshida, Y. Kim, P. Liu, J. Lee, J. Uyeda, V. Radisic, M. Lange et al., “Sub 50 nm InP HEMT device with fmax greater than 1 THz,” in 2007 IEEE International Electron Devices Meeting. IEEE, 2007, pp. 609–611. inp_2 M. Urteaga, Z. Griffith, M. Seo, J. Hacker, and M. J. Rodwell, “InP HBT technologies for THz integrated circuits,” Proceedings of the IEEE, vol. 105, no. 6, pp. 1051–1067, 2017. tmusic DARPA. (2020) T-MUSIC proposer’s day. <https://www.darpa.mil/attachments/T-MUSIC_Proposers%20Day_Jan30.pdf>. taranto European Commission. (2017) TARANTO. <https://cordis.europa.eu/project/id/737454>. hetero_2 J. Li, Y. Royter, P. Patterson, T. Hussain, J. Duvall, M. Montes, D. Le, D. Hitko, M. Sokolich, D. Chow, and K. Elliott, “Heterogeneous wafer-scale integration of 250nm, 300GHz InP DHBTs with a 130nm RF-CMOS technology,” in 2008 IEEE International Electron Devices Meeting, 2008, pp. 1–3. hetero_3 M. Urteaga, A. Carter, Z. Griffith, R. Pierson, J. Bergman, A. Arias, P. Rowell, J. Hacker, B. Brar, and M. Rodwell, “THz bandwidth InP HBT technologies and heterogeneous integration with Si CMOS,” in 2016 IEEE Bipolar/BiCMOS Circuits and Technology Meeting (BCTM), 2016, pp. 35–41. rtd D. Cimbri, J. Wang, A. Al-Khalidi, and E. Wasige, “Resonant tunneling diodes high-speed terahertz wireless communications - a review,” IEEE Transactions on Terahertz Science and Technology, vol. 12, no. 3, pp. 226–244, 2022. twt R. Basu, L. R. Billa, R. Letizia, and C. Paoloni, “Design of sub-THz traveling wave tubes for high data rate long range wireless links,” Semiconductor Science and Technology, vol. 33, no. 12, p. 124009, 2018. qcl_ben B. S. Williams, “Terahertz quantum-cascade lasers,” Nature Photonics, vol. 1, no. 9, pp. 517–525, 2007. sorin_book S. Voinigescu, High-Frequency Integrated Circuits. Cambridge University Press, 2013. momeni_travelling O. Momeni and E. Afshari, “A broadband mm-wave and terahertz traveling-wave frequency multiplier on CMOS,” IEEE Journal of Solid-State Circuits, vol. 46, no. 12, pp. 2966–2976, 2011. kko_amp S. Ghosh, F. Zhang, H. Guo, and K. K. O, “305-GHz cascode power amplifier using capacitive feedback fabricated using SiGe HBT's with fmax of 450 GHz,” in 2023 IEEE Radio Frequency Integrated Circuits Symposium (RFIC), 2023, pp. 313–316. inp_transceiver H. Hamada, T. Tsutsumi, H. Matsuzaki, T. Fujimura, I. Abdo, A. Shirane, K. Okada, G. Itami, H.-J.
Song, H. Sugiyama, and H. Nosaka, “300-GHz-band 120-Gb/s wireless front-end based on InP-HEMT PAs and mixers,” IEEE Journal of Solid-State Circuits, vol. 55, no. 9, pp. 2316–2335, 2020. niknejad_thyagarajan_tx S. Kang, S. V. Thyagarajan, and A. M. Niknejad, “A 240 GHz fully integrated wideband QPSK transmitter in 65 nm CMOS,” IEEE Journal of Solid-State Circuits, vol. 50, no. 10, pp. 2256–2267, 2015. heydari_210 Z. Wang, P.-Y. Chiang, P. Nazari, C.-C. Wang, Z. Chen, and P. Heydari, “A CMOS 210-GHz fundamental transceiver with OOK modulation,” IEEE Journal of Solid-State Circuits, vol. 49, no. 3, pp. 564–580, 2014. sengupta_kramer S. Ghozzy, M. Allam, E. A. Karahan, Z. Liu, and K. Sengupta, “12.2 a mm-wave/sub-THz synthesizer-free coherent receiver with phase reconstruction through mixed-signal kramer-kronig processing,” in 2024 IEEE International Solid-State Circuits Conference (ISSCC), vol. 67, 2024, pp. 220–222. lens_pack_4 A. Babakhani, “Direct antenna modulation (DAM) for on-chip mm-wave transceivers,” Ph.D. dissertation, California Institute of Technology, 2008. ant1 L. Gao and C. H. Chan, “24.1 a 0.64-to-0.69 THz beam-steerable coherent source with 9.1 dBm radiated power and 30.8dBm lensless EIRP in 65nm CMOS,” in 2023 IEEE International Solid-State Circuits Conference (ISSCC), 2023, pp. 362–364. wv_package_1 H.-J. Song, “Packages for terahertz electronics,” Proceedings of the IEEE, vol. 105, no. 6, pp. 1121–1138, 2017. lens_pack_3 H. Aggrawal, P. Chen, M. M. Assefzadeh, B. Jamali, and A. Babakhani, “Gone in a picosecond: Techniques for the generation and detection of picosecond pulses and their applications,” IEEE Microwave Magazine, vol. 17, no. 12, pp. 24–38, 2016. lens_pack_2 J. M. Edwards and G. M. Rebeiz, “High-efficiency elliptical slot antennas with quartz superstrates for silicon RFICs,” IEEE Transactions on Antennas and Propagation, vol. 60, no. 11, pp. 5010–5020, 2012. wv_package_5 S. Hara, K. Takano, K. Katayama, R. Dong, K. Mizuno, K. Takahashi, I. Watanabe, N. Sekine, A. Kasamatsu, T. Yoshida, S. Amakawa, and M. Fujishima, “300-GHz CMOS receiver module with WR-3.4 waveguide interface,” in 2018 48th European Microwave Conference (EuMC), 2018, pp. 396–399. ris_1 S. Venkatesh, X. Lu, H. Saeidi, and K. Sengupta, “A high-speed programmable and scalable terahertz holographic metasurface based on tiled CMOS chips,” Nature electronics, vol. 3, no. 12, pp. 785–793, 2020. wv_package_3 W. R. Deal, X. B. Mei, V. Radisic, K. Leong, S. Sarkozy, B. Gorospe, J. Lee, P. H. Liu, W. Yoshida, J. Zhou, M. Lange, J. Uyeda, and R. Lai, “Demonstration of a 0.48 THz amplifier module using InP HEMT transistors,” IEEE Microwave and Wireless Components Letters, vol. 20, no. 5, pp. 289–291, 2010. wv_package_4 L. Samoska, W. R. Deal, G. Chattopadhyay, D. Pukala, A. Fung, T. Gaier, M. Soria, V. Radisic, X. Mei, and R. Lai, “A submillimeter-wave HEMT amplifier module with integrated waveguide transitions operating above 300 ghz,” IEEE Transactions on Microwave Theory and Techniques, vol. 56, no. 6, pp. 1380–1388, 2008. ant2 H. Saeidi, S. Venkatesh, X. Lu, and K. Sengupta, “THz prism: One-shot simultaneous localization of multiple wireless nodes with leaky-wave thz antennas and transceivers in CMOS,” IEEE Journal of Solid-State Circuits, vol. 56, no. 12, pp. 3840–3854, 2021. lens_pack_1 T. Tajima, H.-J. Song, and M. Yaita, “Compact THz LTCC receiver module for 300 GHz wireless communications,” IEEE Microwave and Wireless Components Letters, vol. 26, no. 4, pp. 291–293, 2016. ant3 D. Filipovic, G. 
Gauthier, S. Raman, and G. Rebeiz, “Off-axis properties of silicon and quartz dielectric lens antennas,” IEEE Transactions on Antennas and Propagation, vol. 45, no. 5, pp. 760–766, 1997. ant4 L. Gao and C. H. Chan, “A 0.45-THz 2-D scalable radiator array with 28.2-dBm EIRP using an elliptical teflon lens,” IEEE Journal of Solid-State Circuits, vol. 57, no. 2, pp. 400–412, 2022. ant5 J. Zhu, Y. Yang, M. Li, D. McGloin, S. Liao, J. Nulman, M. Yamada, and F. Iacopi, “Additively manufactured millimeter-wave dual-band single-polarization shared aperture fresnel zone plate metalens antenna,” IEEE Transactions on Antennas and Propagation, vol. 69, no. 10, pp. 6261–6272, 2021. pcb_1 O. Koutsos, F. F. Manzillo, M. Caillet, R. Sauleau, and A. Clemente, “Experimental demonstration of a 43-dBi gain transmit array in PCB technology for backhauling in the 300-GHz band,” IEEE Transactions on Terahertz Science and Technology, vol. 13, no. 5, pp. 485–492, 2023. wv_package_2 A. Tessmann, A. Leuther, V. Hurm, H. Massler, M. Zink, M. Kuri, M. Riessle, R. Losch, M. Schlechtweg, and O. Ambacher, “A 300 GHz mHEMT amplifier module,” in 2009 IEEE International Conference on Indium Phosphide & Related Materials, 2009, pp. 196–199. dwg_01 P. Reynaert, M. Tytgat, W. Volkaerts, A. Standaert, Y. Zhang, M. De Wit, and N. Van Thienen, “Polymer microwave fibers: A blend of RF, copper and optical communication,” in ESSCIRC Conference 2016: 42nd European Solid-State Circuits Conference, 2016, pp. 15–20. dwg_02 J. W. Holloway, G. C. Dogiamis, and R. Han, “11.9 a 105Gb/s dielectric-waveguide link in 130nm BiCMOS using channelized 220-to-335 GHz signal and integrated waveguide coupler,” in 2021 IEEE International Solid-State Circuits Conference (ISSCC), vol. 64, 2021, pp. 196–198. ris_2 N. M. Monroe, G. C. Dogiamis, R. Stingel, P. Myers, X. Chen, and R. Han, “Electronic THz pencil beam forming and 2D steering for high angular-resolution operation: A 98 × 98-unit 265 GHz CMOS reflectarray with in-unit digital beam shaping and squint correction,” in 2022 IEEE International Solid-State Circuits Conference (ISSCC), vol. 65, 2022, pp. 1–3. beijing A. Hirata, “Transmission trial of television broadcast materials using 120-GHz-band wireless link,” NTT Tech. Rev., vol. 7, no. 3, 2009. afrl Air Force Research Laboratory. (2023) AFRL conducts first flight experiments for communications in terahertz band. https://www.afrl.af.mil/News/Article-Display/Article/3348325/afrl-conducts-first-flight-experiments-for-communications-in-terahertz-band/. thomas_long_dist S. Razavian, S. Thomas, M. Hosseini, and A. Babakhani, “A 0.4 THz efficient OOK/FSK wireless transmitter enabling 3 Gbps at 20 meters,” in 2022 IEEE BiCMOS and Compound Semiconductor Integrated Circuits and Technology Symposium (BCICTS), 2022, pp. 178–181.
http://arxiv.org/abs/2407.01709v1
20240701183609
Flatmates and the bounded cohomology of algebraic groups
[ "Nicolas Monod" ]
math.GR
[ "math.GR", "math.AG", "math.AT" ]
§ ABSTRACT For all algebraic groups over non-Archimedean local fields, the bounded cohomology vanishes. This follows from the corresponding statement for automorphism groups of Bruhat–Tits buildings, which hinges on the solution to the flatmate conjecture raised in earlier work with Bucher. Vanishing and invariance theorems for arithmetic groups are derived. § INTRODUCTION Our main goal is the following vanishing theorem. [Algebraic groups]  Let 𝐆 be any algebraic group over a non-Archimedean local field k. Then the continuous bounded cohomology of 𝐆(k) with real coefficients vanishes in every positive degree. It is understood that the group G=𝐆(k) of k-points of the scheme 𝐆 is endowed with the locally compact topology determined by the local field k. Examples are (almost-)simple linear algebraic groups such as G=SL_d(k) or G=PGL_d(k), both with k=ℚ_p or k=𝔽_q((t)), for which <Ref> can be viewed as a strengthening of the classical vanishing theorem of Garland, Casselman, Wigner and Harder <cit.>, noting that non-trivial coefficients were already treated in <cit.>. Structure theory and general principles will reduce <Ref> to the case of such simple algebraic groups; in view of Bruhat–Tits theory, that case is in turn contained in the following statement. [Buildings]  Let G be a locally compact group acting properly by automorphisms on a locally finite Euclidean building. If this action is strongly transitive, then the continuous bounded cohomology of G with real coefficients vanishes in every positive degree. Previously, this statement was only known in the special case of trees <cit.>. §.§ Motivations Our first motive to establish <Ref> is that the bounded cohomology of real algebraic groups, and more generally of Lie groups, remains very mysterious to this day even though it has been subjected to intense scrutiny ever since Gromov's seminal work <cit.>. The original impetus for that study goes back to Milnor's 1958 paper <cit.> and the Milnor–Wood inequality: the fact that some characteristic classes happen to be bounded translates into non-trivial estimates for topological invariants of bundles and manifolds. Gromov conceptualised this by introducing bounded cohomology and proved that all characteristic classes of flat G-bundles, where G is an ℝ-algebraic group, are bounded. (Another proof was given by Bucher in <cit.>; see also <cit.> for related results.) Nonetheless, Dupont's 1979 conjecture <cit.> that all the real continuous cohomology classes of connected simple Lie groups are bounded remains open for many groups. In fact, if we exclude groups of Hermitian type, it is open for almost all other simple Lie groups, see <cit.>. A stronger version of this conjecture, open for all semi-simple (non-compact) groups, is as follows: [Problem A in <cit.>]  Let G be a connected semi-simple Lie group with finite center. Then the continuous bounded cohomology of G with real coefficients is naturally isomorphic to its ordinary (continuous) cohomology. For semi-simple algebraic groups over non-Archimedean local fields, it is a classical result that the ordinary continuous real cohomology vanishes <cit.>, <cit.>.
In that sense, <Ref> answers the analogue of the above conjecture in the non-Archimedean case. This was previously only known for rank one groups, such as SL_2(ℚ_p), because the statement of <Ref> was only available in the particular case of trees <cit.>. (For higher rank groups, vanishing in degree two was obtained <cit.> and the stability methods of <cit.> reduce degree three to the rank one case as proved in <cit.>. Nothing was known in higher degrees.) In the Lie case originally considered for the above conjecture, the isomorphism is only known in the following low degrees. In degree 2, by <cit.>. In degree 3, it is known for some groups by <cit.>; very recently, <cit.>, using <cit.>, established degree 3 for all classical complex groups. In degree 4, it is known for SL_2(ℝ) only <cit.>. A second motivation for <Ref> is that when ordinary cohomology is already known to vanish, as for simple non-Archimedean groups, then the bounded vanishing is a strict strengthening of vanishing. Indeed, it implies that even “almost-cocycles” must be trivial, as illustrated by the case of quasi-morphisms. That case, which concerns n=2 only, has a number of applications to rigidity. Higher vanishing and bounded acyclicity have been much studied recently, though mostly for “large” transformation groups without topology <cit.>. §.§ Arithmetic groups and non-trivial coefficients Our third incentive is the cohomology of discrete groups. <Ref> provides one of the main missing pieces for the study of the bounded cohomology of S-arithmetic groups. In the case of ordinary cohomology, this is the most classical motivation and the main reason why continuous cohomology of algebraic groups has been studied for general coefficients <cit.>. Indeed, following Borel–Serre <cit.>, to study the “abstract” cohomology of an S-arithmetic group Γ, one realises it as a lattice Γ<G in a product of algebraic groups over various local fields (which is possible by results of Borel <cit.>, respectively Behr–Harder <cit.> in positive characteristic). For instance, Γ = SL_d(ℤ[1/p]) in G=SL_d(ℝ) × SL_d(ℚ_p). In a suitable range of degrees, the cohomology of Γ will then be determined by the continuous cohomology of G with coefficients in an induction module. Therefore, vanishing results for G with non-trivial coefficients will imply invariance theorems, namely the statement that H^n(Γ) is isomorphic to H_c^n(G) for suitable n <cit.>. At that point, the vanishing for non-Archimedean groups will further indicate that H^n(Γ) is given by the continuous cohomology of the Archimedean factors, which is known from Lie theory. In particular, if G has no non-compact Archimedean factors, e.g. in positive characteristic, then one concludes a vanishing result for Γ (still for suitable n only), as originally conjectured by Serre. This classical picture has an analogue in bounded cohomology with notable differences. For non-trivial coefficients, the ordinary vanishing holds below the rank by work of Garland <cit.>, Casselman <cit.>, Casselman–Wigner <cit.>, Borel–Wallach <cit.>. We established it for bounded cohomology below twice the rank <cit.>, by different methods but still using a form of the Solomon–Tits theorem. However, for trivial coefficients ℝ, the vanishing of <Ref> was previously unknown because the Bruhat–Tits building methods from ordinary cohomology fail in the bounded setting. To illustrate this, consider that ordinary vanishing above the rank for arbitrary coefficients clearly holds since the rank is the dimension of this building, which is contractible.
Such soft and easy principles fail completely for bounded cohomology, which is one of the reasons for its appeal, and for its difficulty. For instance, the tree of SL_2(ℚ_p) is a contractible one-dimensional simplicial complex <cit.>, but this does not preclude degree-two bounded cohomology. Here is a folklore example: [Coefficients]  There exist irreducible continuous unitary representations π of G=SL_2(ℚ_p) with H_cb^2(G, π)≠ 0. Returning to arithmetic groups, we obtain a vanishing theorem for these discrete groups in the spirit of Garland's results on Serre's conjecture by combining <Ref> with our results from <cit.>. [Discrete groups]  Let K be a global field and 𝐆 a connected simple K-group which is anisotropic over the Archimedean completions of K. Let S be a finite set of valuation classes of K and let Γ=𝐆(K(S)) be the corresponding S-arithmetic group over the ring K(S) of S-integers. Then H_b^n(Γ)=0 for all 0<n<2 ∑_v∈ S rank_K_v(𝐆). The assumption on Archimedean completions is trivially satisfied when K has positive characteristic. In characteristic zero, a concrete example is as follows. Let p be a prime ≡ 1 mod 4 and let d≥ 5. Then H_b^n(Spin_d(ℤ[1/p])) =0 for all 0<n< d-1, and also for n=d-1 when d is even, since the ℚ_p-rank of Spin_d is ⌊ d/2 ⌋, using p≡ 1 mod 4 via Gauß's Theorem 108 <cit.>. (The restriction d≥ 5 is to avoid the non-simple case d=4 and the trivial range of n for d≤ 3.) More generally, when isotropic Archimedean places are allowed, we obtain an invariance theorem: [Discrete groups, bis]  Let K be a global field, 𝐆 a connected simple K-group and S a finite set of valuation classes of K containing the set S_0 of all Archimedean ones for which 𝐆 is isotropic. Let Γ=𝐆(K(S)) be the corresponding S-arithmetic group and consider the semi-simple Lie group L= ∏_v∈ S_0 𝐆(K_v). Then H_b^n(Γ) ≅ H_cb^n(L) for all n<2 ∑_v∈ S rank_K_v(𝐆). In general, only the case n=2 was previously known: this was the main result of <cit.>. As a concrete example, given an integer m>1, the inclusion of the S-arithmetic group SL_d(ℤ[1/m]) into SL_d(ℝ) induces an isomorphism H_b^n(SL_d(ℤ[1/m])) ≅ H_cb^n(SL_d(ℝ)) for all n< 2 (d-1)(ω(m) + 1), where ω(m) denotes the number of distinct prime factors of m. Again, this was previously only known in the special case of d=2 by the result for trees <cit.>. We note here that the ordinary (virtual) cohomological dimension of SL_d(ℤ[1/m]) is n=(d-1)(ω(m) + d/2) by Borel–Serre <cit.>, which lies in our range for n in <Ref> as soon as m has more than d/2-2 distinct prime factors. [Equivalent formulation]  Theorems <ref> and <ref> could instead be formulated for general irreducible lattices in semi-simple groups as all results used in the proof hold in that setting. This would however not really add any generality. Indeed, the range for n is void in rank one (since H_b^1 always vanishes) and in rank ≥2 Margulis's arithmeticity theorem shows that all lattices are commensurable to S-arithmetic groups. We find the above statements more concrete as they focus on the structure of the groups Γ under consideration. §.§ Flatmates and our approach With Bucher, we proposed in <cit.> a strategy towards <Ref> and implemented it in the special case of trees. The main difficulty in that strategy is to understand the flatmate complex. This object, which will be detailed in <Ref> below, consists of all tuples of vertices that lie in a common flat. More precisely, the question raised by Conjecture 10 in <cit.> is to determine the “uniform homotopy type” of this complex. We can now answer this problem as follows.
[Flatmates]  The flatmate complex of any discrete irreducible Euclidean building is uniformly acyclic. In the special case of trees, where the flatmate complex is the “aligned complex”, this statement was established in <cit.> by exhibiting a relatively simple, geometrically meaningful, bounded homotopy. By contrast, in the present case of buildings, the combinatorics of arbitrary configurations of finitely many points seems far too complicated (for the present author). Therefore, we shall prove <Ref> by introducing a few general simplicial tools which will lead to a solution by general principles, passing back and forth between uniform and non-uniform homotopy arguments. The new contributions of our approach are as follows. In contrast with ordinary contractibility, Euclidean buildings are not uniformly acyclic. We show in essence that they become so “modulo their flats” by considering the nerve of the apartment system. In order to show that this nerve is uniformly acyclic, we use on the one hand that the nerve is non-uniformly homotopic to the building, which is non-uniformly acyclic. On the other hand, we introduce a support control principle which allows us to upgrade the acyclicity of the nerve to its uniform counterpart using a uniform nerve principle that we provide. This requires a quantitative control on finite subcomplexes; using building-theoretical arguments, we establish that this control holds in the case of apartment systems (though not for Euclidean buildings themselves). We expect these tools to be useful beyond the application to algebraic groups. [Gratitude] I am very grateful to Pierre-Emmanuel Caprace and Francesco Fournier-Facio for their comments. § SIMPLICIAL METHODS §.§ Notation We use the standard notation where a simplicial complex Σ is a set of non-empty finite sets closed under passing to non-empty subsets. The set of q-simplices is denoted by Σ^[q]; strictly speaking, we distinguish Σ^[0] from the vertex set vert(Σ)=⋃_σ∈Σ σ. The full simplex Δ_X on a set X is the collection of all non-empty finite subsets σ ⊆ X. A subcomplex Σ' ⊆ Σ is full if it contains every σ∈Σ with σ ⊆ vert(Σ'). A simplicial map is a map f: Σ→Σ' that is induced by a vertex map vert(Σ)→vert(Σ'), also abusively denoted by f. Given a cover 𝒰 of a set X, the nerve 𝒩(𝒰) is the simplicial complex of all non-empty finite subsets of 𝒰 having non-empty intersection. Given a poset S, that is, a set endowed with a partial order, the corresponding order complex is the simplicial complex ⟨S⟩ ⊆ Δ_S consisting of all non-empty finite chains. Thus an element of ⟨S⟩^[q] is of the form {s_0 < ⋯ < s_q}. Both isotone and antitone (i.e. order preserving/reversing) poset maps induce simplicial maps since the definition of ⟨S⟩ is self-dual. Our notation reflects the fact that ⟨S⟩ represents a classifying space of S viewed as a category (compare <cit.>). We warn the reader that ⟨S⟩ is sometimes called a “nerve” (and its realisation a classifying space) <cit.>; adding to the confusion, the nerve that we defined above is itself a classifying space of a category of inclusions associated to the cover. Finally, since a simplicial complex Σ is a poset under inclusion, we can form its order complex ⟨Σ⟩, which is the barycentric subdivision of Σ. We write C_q(Σ) for the group of real-valued q-chains of usual simplicial homology. This is a normed vector space for the following norm. A basis of C_q(Σ) is obtained by choosing an oriented simplex for every q-simplex. Consider the ℓ^1-norm associated to this basis, i.e.
the sum of the absolute values of the coefficients in this basis; this norm does not depend on the choice of orientation since orientations only affect signs. The boundary maps ∂: C_q+1(Σ)→ C_q(Σ) are augmented by the sum of coefficients ϵ: C_0(Σ)→ ℝ. The simplicial complex Σ is uniformly acyclic if its (augmented) chain complex admits a contracting homotopy h_∙ such that each h_q: C_q(Σ)→ C_q+1(Σ) is bounded (in the sense of linear maps between normed vector spaces). This notion was used in various forms since <cit.> and was recently systematically developed in <cit.> in the semi-simplicial setting. Examples include all simplicial cones, in particular any full simplex, where the bound on h_q can be chosen to be 1. For bounded cohomology (simplicial and beyond), we refer to the founding paper of Gromov <cit.> and to <cit.>. Uniform acyclicity implies the vanishing of simplicial bounded cohomology (“bounded acyclicity”), though the two are not equivalent. §.§ Support control The first tool that we introduce is a “support-to-norm” principle leveraging the fact that the norm on homology cycles is not just any norm, but an ℓ^1-norm. This allows one to control operator norms on a basis, which is an elementary form of projectivity; in fact, a theorem of Köthe <cit.> shows that projective Banach spaces are precisely ℓ^1-spaces. Let Σ be a simplicial complex. Suppose that there is a function 𝔣: ℕ→ℕ such that every set of n vertices is contained in some acyclic subcomplex of Σ with at most 𝔣(n) vertices. Then Σ is uniformly acyclic. In preparation for the proof, we introduce an auxiliary notion. Given q,r∈ℕ, there is a constant U_q(r) with the following property. For any simplicial complex Φ on at most r vertices and for any (q+1)-chain β∈ C_q+1(Φ), there is β'∈ C_q+1(Φ) with ∂β = ∂β' and ‖β'‖ ≤ U_q(r)‖∂β‖. Moreover, there is a smallest such constant. This smallest constant will be called the universal constant U_q(r). (We could choose a coarser constant depending on r only since finitely many q are relevant for every given r, but our notation allows the proof of <Ref> to extend to a more general setting, recorded in <Ref> below.) Given any finite simplicial complex Φ, the homology boundary map ∂: C_q+1(Φ) → C_q(Φ) is a linear map between finite-dimensional vector spaces and therefore it is an open map (for any norm, in particular the given ℓ^1-norm). This means that there is a constant C (depending on Φ and q) such that any boundary ∂β, where β∈ C_q+1(Φ), can be written ∂β = ∂β' for a chain β' with ‖β'‖ ≤ C‖∂β‖. We define U_q(r) as the infimum of those C that have this property simultaneously for all simplicial complexes Φ on at most r vertices. This is well-defined since there are only finitely many isomorphism types of such complexes. We note that the finite dimensionality of C_q+1(Φ) implies also that U_q(r) itself works as a constant C above, despite the use of the infimum. For brevity, we shall say that a chain ω∈ C_q(Σ) has support at most m if ω is a linear combination of at most m oriented q-simplices. We construct by induction on q≥ -1 a sequence of linear maps h_-1: ℝ → C_0(Σ) and h_q: C_q(Σ) → C_q+1(Σ) for q≥0, pointing in the direction opposite to the maps ϵ and ∂ of the augmented chain complex 0 ⟵ ℝ ⟵ C_0(Σ) ⟵ C_1(Σ) ⟵ C_2(Σ) ⟵ ⋯, together with a sequence of functions ψ_q: ℕ→ℕ. The inductive claims are: * Boundedness: the linear map h_q is bounded. * Homotopy: the identity map can be written h_q-1∂ + ∂ h_q for q>0, h_-1ϵ + ∂ h_0 for q=0 and ϵ h_-1 for q=-1.
* Support: if q≥0 and ω∈ C_q(Σ) has support at most m∈ℕ, then h_q(ω) has support at most ψ_q(m). To start the induction at q=-1, we select a vertex v_0 and define h_-1(t) = t{v_0} for t∈ℝ. We set ψ_-1≡ 1. The first two inductive conditions are satisfied. The last one was not formulated for q=-1, but in view of the inductive step we note that its conclusion still holds since h_-1(t) has support at most 1. We now address the inductive step for any q≥ 0, abusively writing ∂ for ϵ in the special case q=0. We consider the basis of C_q(Σ) given by some choice of an oriented simplex σ̇ for every q-simplex σ. We shall first define h_q on each σ̇ (but assuming that h_q-1 is already given on its entire domain of definition). The second inductive assumption implies that α=σ̇ - h_q-1∂σ̇ is a cycle because ∂(h_q-1∂σ̇) = (∂ h_q-1)∂σ̇ = (Id - h_q-2∂)∂σ̇ = ∂σ̇ for q>0, whereas for q=0 we have ∂ h_-1∂σ̇ = ∂σ̇ from ∂=ϵ. Since ∂σ̇ has support at most q+1, the third inductive assumption shows that α has support at most 1+ψ_q-1(q+1). Thus at most n vertices are involved, where n=(q+1)(1+ψ_q-1(q+1)). The assumption on Σ implies that α=∂β for some β∈ C_q+1(Σ) such that β (and hence also α) is supported on a subcomplex Φ ⊆ Σ with at most 𝔣(n) vertices. By <Ref>, upon possibly replacing β by another chain in C_q+1(Φ), we can assume that β has norm at most U_q(𝔣(n))‖α‖. We now define h_q(σ̇)=β and extend it by linearity to define h_q on all of C_q(Σ); this is possible since the various σ̇ form a basis. As for ψ_q, we define it for m∈ℕ by ψ_q(m) = m·\binom{𝔣(n)}{q+2}, recalling n=(q+1)(1+ψ_q-1(q+1)). Let us proceed to verify the inductive claims for q. For the boundedness condition (i), let ω∈ C_q(Σ). Since ω is a finite sum of the form ∑_σ∈Σ^[q] ω(σ)σ̇, we have ‖h_q(ω)‖ ≤ ∑_σ∈Σ^[q] |ω(σ)|·‖h_q(σ̇)‖ ≤ (sup_σ∈Σ^[q] ‖h_q(σ̇)‖)(∑_σ∈Σ^[q] |ω(σ)|). Since ∑_σ∈Σ^[q] |ω(σ)| is the ℓ^1-norm of ω defined on C_q(Σ), it suffices to show that the supremum in the above expression is finite. This follows because the bound obtained above for β, namely ‖h_q(σ̇)‖ ≤ U_q(𝔣(n))‖α‖ ≤ U_q(𝔣(n))(‖σ̇‖ + ‖h_q-1‖·‖∂‖·‖σ̇‖), is independent of σ̇, in view of the definition of n and recalling that ‖σ̇‖=1. The homotopy condition (ii) is linear and therefore holds by construction because, on our basis, σ̇ = h_q-1∂σ̇ + α = h_q-1∂σ̇ + ∂β = (h_q-1∂ + ∂ h_q)σ̇. Finally, for the support condition (iii), let m∈ℕ and consider ω∈ C_q(Σ) with support at most m. Thus ω is a linear combination of at most m oriented simplices σ̇. For each σ̇, our construction of h_q(σ̇) is a (q+1)-chain on a complex with at most 𝔣(n) vertices, and thus it has support at most \binom{𝔣(n)}{q+2}. It follows as claimed that h_q(ω) has support at most ψ_q(m). We wrote the above proof in such a way that it shows a formally stronger statement, recorded in <Ref> below. Only the simpler statement of <Ref> will be used in this article; the reader can ignore the rest of this subsection and any mention of semi-simplicial sets. First we note that the vertex-count 𝔣, which will be the relevant quantity in our applications to buildings, only served to bound the support, in terms of (q+1)-simplices, of a (q+1)-chain β bounding a q-cycle α (whence the binomial coefficients). Therefore we can formalise this in the more general semi-simplicial setting where simplices are not determined by vertices: A semi-simplicial set (X_q)_q≥0 has support control (below some ν ≤ +∞) if for every q<ν there is a function κ_q: ℕ→ℕ such that every q-cycle with support at most m is the boundary of a (q+1)-chain with support at most κ_q(m).
(The restriction on the range can be useful in non-acyclic settings such as the spherical Solomon–Tits theorem.) We call κ_∙ the control function. Accordingly, we replace the universal constants U_q(r) by a semi-simplicial analogue: define the semi-simplicial universal constant U^ss_q(p) by considering all semi-simplicial sets Φ_∙ such that Φ_q+1 has at most p elements and then considering the same constant C as in the proof of <Ref> but for the linear map of finite-dimensional spaces ∂: C_q+1(Φ_∙) → C_q(Φ_∙). In the special case of simplicial complexes, the relation to the earlier constants is thus U_q(r) ≤ U^ss_q(\binom{r}{q+2}) and κ_q(m) ≤ \binom{𝔣(m(q+1))}{q+2}. One can check that support control below ν=1 is equivalent to being connected with finite diameter. The definition of support control is given in terms of real chains. We will establish it, however, in the stronger form of quantitative contractibility as in <Ref>, which implies support control with any coefficients. Accordingly, we can speak of integral support control for ℤ coefficients, etc. Let (X_q)_q≥0 be a semi-simplicial set and let 1 ≤ ν ≤ +∞. Suppose that X_∙ has support control below ν. Then X_∙ admits a bounded contracting homotopy (h_q)_q≥-1 up to q<ν. In particular, the real simplicial bounded cohomology H_b^q(X_∙) vanishes in degrees 0<q<ν and H_b^ν(X_∙) injects into H^ν(X_∙) when ν<+∞. The inductive proof given for <Ref> holds almost unchanged with the adaptations introduced above. Thus, since α has support at most 1+ψ_q-1(q+1), it follows that β has support at most κ_q(1+ψ_q-1(q+1)) and therefore the inductive definition of ψ_q becomes ψ_q(m) = m·κ_q(1+ψ_q-1(q+1)). The rest of the proof is unchanged. If ν≠+∞, we stop the inductive argument at q=ν-1. The statements for bounded cohomology follow by duality, see Theorems 2.3 and 2.8 in <cit.>. This proof shows that in hindsight we can take the control function to be linear for every given q since ψ_∙ is in particular also a control function. We believe that there are many circumstances where support control is a helpful method to establish (and strengthen) uniform acyclicity. For instance, it is well-suited to combinatorial arguments such as glueing: Let (X_q)_q≥0 be a semi-simplicial set and let 1 ≤ ν ≤ +∞. Suppose that X_∙ is the union of two semi-simplicial subsets X^+_∙, X^-_∙. If X^+_∙ and X^-_∙ have support control below ν and X^+_∙ ∩ X^-_∙ has support control below ν-1, then X_∙ has support control below ν. Moreover, the control function for X_∙ can be taken to depend only on the control functions for X^±_∙ and X^+_∙ ∩ X^-_∙. If the support is replaced by the norm, then the analogous statement is given in <cit.>. A q-cycle α on X_∙ can be written α^+ - α^- for α^± ∈ C_q(X^±_∙) without introducing new simplices in the supports. Then ∂α^+ = ∂α^- is a (q-1)-cycle on X^+_∙ ∩ X^-_∙. If ∂α^± = ∂ω for ω∈ C_q(X^+_∙ ∩ X^-_∙), then α^± - ω is a q-cycle on X^±_∙ and hence can be written ∂β^± for β^± ∈ C_q+1(X^±_∙). By assumption, the supports of ω and β^± can be bounded in terms of the support of α. Finally, α = ∂(β^+ - β^-). §.§ Uniform nerve principles Leray established the classical correspondence between the homotopy type of a space and that of the nerve of a good cover; a simplicial version is due to Borsuk. Our next tool is a uniform version of the simplicial nerve lemma. We write Sub(Σ) for the poset of subcomplexes of a simplicial complex Σ. In order to facilitate the control of the constants, we make a strong assumption on the intersections (which will be granted in our applications).
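Before the formal statement, a toy illustration may help; the following example is ours and not part of the original argument, using the notation Δ and 𝒩 from the notation subsection. Cover a path on three vertices by its two edges:

Σ = Δ_{v_0,v_1} ∪ Δ_{v_1,v_2}, 𝒰 = {F_1, F_2} with F_1 = Δ_{v_0,v_1}, F_2 = Δ_{v_1,v_2}, F_1 ∩ F_2 = Δ_{v_1}, 𝒩(𝒰) = Δ_{F_1,F_2}.

Every non-empty intersection is a full simplex, and indeed both Σ and the nerve (a single edge) are cones, hence uniformly acyclic with homotopy bounds 1. If instead three segments are glued into a hollow triangle, the pairwise intersections are still full simplices but the triple intersection is empty, and the nerve becomes the boundary of a 2-simplex: it correctly records that Σ is then a circle and not acyclic.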
We begin with a statement for finite covers (of generally infinite complexes). Let Σ be a simplicial complex and let (Σ) be a finite cover of Σ by subcomplexes. Suppose that every non-empty intersection of elements of is a full simplex. Then Σ is uniformly homotopy equivalent to the nerve complex . Moreover, all bounds can be chosen independently of Σ and , i.e. they depend only on the homology degree. The overall structure of the argument is similar to a strategy used for ordinary nerve principles such as in <cit.>. One ingredient is the carrier lemma, for which a uniform version was established in <cit.> (in the greater generality of semi-simplicial sets). Recall that given simplicial complexes Ω, Ω' a carrier is an isotone map CΩ→(Ω'); it is uniformly acyclic if for each q, every complex C(σ) is uniformly acyclic, uniformly over σ∈Ω^[q]. A simplicial map Ω→Ω' is carried by C if ∀σ : (σ )∈ C(σ). If , ψΩ→Ω' are carried by the same uniformly acyclic carrier, then they are boundedly homotopic with constants depending only on the carrier and the homology degree. In particular, if , ψ S→ S' are two isotone (or two antitone) poset maps with ∀ s: (s) ≤ψ(s), then the corresponding simplicial maps S → S' are boundedly homotopic with constants depending only on the homology degree. The first statement is (the simplicial case of) Lemma 4.11 in <cit.>. The second one is a version of Theorem 4.12 therein and follows from the first by exhibiting a cone as carrier. Considering Σ and as posets, we define an antitone map fΣ,3mm f(σ) = { F∈ : σ∈ F}. Next, for every α∈, we choose some vertex x_α of ∩α. Define gΣ,3mm g(β) = { x_α : α∈ with βα}. Note that g(β) is indeed a simplex of Σ since all those x_α are in ∩β, which is a full simplex. The map g is an antitone map of posets. We can now consider f and g as simplicial maps between the corresponding order complexes Σ and , which are none other than the barycentric subdivisions of Σ and . We claim that these simplicial maps are bounded homotopy inverses to each other with all bounds depending only on the homology degree. This claim will complete the proof of the theorem because any simplicial complex is uniformly homotopy equivalent to its barycentric subdivision with bounds depending only on the homology degree. Indeed, the classical homotopy equivalences are given by explicit sums depending only on the degree, see e.g. <cit.>. (This fact has also been established in <cit.> for the generality of semi-simplicial sets.) Turning to the claim, we consider the composition g f. This is an isotone map on the poset Σ (the vertex set of Σ). We have g f (σ) = { x_α : α∈ such that ∀ F∈, σ∈ F ⇒ F ∈α} and g f (σ) is a simplex of the subcomplex ∩ f(σ) of Σ. Note that the map which to each σ associates ∩ f(σ) is a carrier from Σ to itself. We define a further carrier map CΣ( Σ), 3mm C({σ_0 ⫋⋯⫋σ_p}) = (∩ f(σ_p)). This carrier C carries the simplicial map g f; indeed: g f ({σ_0 ⫋⋯⫋σ_p}) = {g f(σ_0) ⋯ g f(σ_p)}∈(∩ f(σ_p)) since each g f(σ_j) is in ∩ f(σ_j) which is a subset of ∩ f(σ_p). On the other hand, C also carries the identity because σ_p∈∩ f(σ_p) by definition of f. In order to conclude from the uniform carrier lemma that g f and the identity are uniformly homotopy equivalent with the desired uniformity of constants, it remains only to justify that the carrier C is uniformly acyclic with constants depending only on the degree. But this last point follows from the fact that the carrier is the barycentric subdivision of ∩ f(σ_p), which is a full simplex. 
We now consider the other composition, f g, which is simpler. We have f g(β) = { F∈: ∀α∈, βα⇒ x_α∈ F }. Unravelling all definitions, we see that β f g (β). In other words, f g dominates the identity (as poset maps); therefore, the uniform carrier lemma applies. The covers that will appear in the proof of our main result are not finite, in fact not even locally finite (they have locally the power of continuum), but turn out to have uniformly acyclic nerves. Therefore, we shall need the following variant of <Ref>. We caution the reader that a difficulty resides in the fact that successive nested finite subcovers will a priori give distinct homotopy equivalence maps; we will argue that they must be boundedly homotopic to each other. Let Σ be a simplicial complex set and let (Σ) be a cover of Σ by subcomplexes. Suppose that every non-empty finite intersection of elements of is a full simplex. Then Σ is uniformly acyclic if and only if the nerve is so. Suppose that is uniformly acyclic. Fix q∈ and consider any cycle ω∈ C_q(Σ). Then there is a finite subset ' which covers all simplices involved in ω. We consider the subcomplex Σ' of Σ given by the union of ' and view ω as a cycle on Σ'. The homotopy equivalence of <Ref>, applied to Σ', means that there are bounded linear chain maps C_q(Σ') → C_q(') and in the opposite direction which induce mutually inverse isomorphisms in homology. Consider the cycle η∈ C_q(') corresponding to ω as a cycle on . The uniform acyclicity assumption implies that we can write η = ∂ϑ for ϑ∈ C_q+1() with ϑ≤ c η, where c depends only on q and on . Again ϑ is supported on ” for some finite ” and we can assume that ” contains '. We apply again <Ref>, this time to Σ”=∪”, noting that ω is also a cycle for this complex. The resulting chain map C_q(Σ”) → C_q(”) sends ω to some cycle η∈ C_q(”). We claim that η-η = ∂ for some ∈ C_q+1(”) with ≤ c' ω, where c' depends only on q, on Σ and on . This claim will finish the proof that Σ is uniformly acyclic, since η= ∂ (ϑ+) will then imply that ω is a boundary (in Σ”) and since all constants (including those from <Ref>) depend on q, Σ and only, not on ω. To justify the claim, we need to compare the two maps C_q(Σ') C_q(') C_q(”) 2mm and2mm C_q(Σ') C_q(Σ”) C_q(”) which produce the cycles η, respectively η, from ω. It suffices to show that these maps are homotopic with uniform constants. Consider the underlying two poset maps f'Σ' ' ”2mm and2mm f”Σ' Σ”” which are constructed in the proof of <Ref>. Strictly speaking, the first one is the corestriction of the map f constructed for Σ', the second the restriction of the map f for Σ”. Appealing again to the uniform carrier lemma in the form of <Ref>, it suffices to show the following: for every σ∈Σ', f'(σ) f”(σ). This, however, is apparent in the definition given for f in the proof of <Ref>. The converse, which we will not need, is proved in exactly the same way. Namely, given finite subcovers ' ” and the corresponding subcomplexes Σ' Σ”, it suffices to compare the two poset maps g'' Σ' Σ”2mm and2mm g”' ”Σ” arising from the proof of <Ref>. If the choice α↦ x_α has been fixed once and for all for every α∈, then indeed g'(β) g”(β) holds for all β∈' and we conclude as above. § THE FLATMATE COMPLEX OF EUCLIDEAN BUILDINGS §.§ Euclidean buildings We shall adopt the viewpoint that a building is a complex endowed with a system of apartments and refer to <cit.>, <cit.> and <cit.> for background. 
For simplicity, we only consider irreducible buildings, which are therefore simplicial (rather than polysimplicial) complexes. All arguments below adapt to the non-irreducible case, but this setting is not needed for our vanishing results because, as we shall recall in <Ref>, the vanishing passes to finite products of groups (and from finite index subgroups). In the case of discrete irreducible Euclidean buildings, each apartment is a full subcomplex isomorphic to a triangulation of a Euclidean space, making the synonym `flat' especially congruous. We recall that the apartments are combinatorially convex, which means by definition that every minimal gallery connecting two chambers of an apartment remains in that apartment; see e.g. Prop. 4.40 in <cit.>. Given a discrete irreducible Euclidean building, there exists an integer k with the following property. For every n∈ and every family of n apartments F_1, …, F_n, there exists a family of k n apartments E_1, …, E_k n such that the union F_1 ∪⋯∪ F_n ∪ E_1 ∪⋯∪ E_k n is contractible (as a simplicial complex). We define k to be the number of chambers of the spherical Coxeter complex associated to the building. Specifically, we realise it as the number of chambers at infinity of any apartment. Fix some chamber c of the Euclidean building. Given any apartment F and any chamber ξ of the apartment at infinity ∂ F, choose some apartment E_ξ containing c with ξ∈∂ E_ξ; for the existence of such E_ξ, see e.g. Prop. 7.6 in <cit.>. We claim that the union of those k apartments E_ξ contains F. Indeed, select a special vertex y of c and consider the sector S_ξ based at y and representing ξ. The combinatorial convexity of apartments implies that S_ξ is contained in E_ξ. However, it is known that the union of the sectors S_ξ contains F as ξ ranges over the chambers of ∂ F; this holds in the greater generality of possibly non-discrete Euclidean buildings, see e.g. the proof of <cit.> or of <cit.>. This justifies the claim. Returning to the statement of the proposition, we define the family E_j as the collection of all E_ξ chosen as above for each F=F_1, …, F_n. By the claim, the union in the statement of the proposition reduces to the union of all E_j. That union is combinatorially starlike with respect to c, which by definition means the following: any minimal gallery from c to any chamber in the union remains in this union. Indeed, this holds by combinatorial convexity of the apartments since every E_j contains c. In only remains to note that, in a Euclidean building, combinatorially starlike chamber subcomplexes are contractible by exactly the same shellability argument as used for the Solomon–Tits theorem <cit.> to obtain the contractibility of the Euclidean building itself, compare <cit.>. §.§ The flatmate complex We begin with one more general construction of simplicial complexes. Let X be a set and a cover of X. Given F∈, the full simplex F is a (full) subcomplex of X. We can therefore define a subcomplex _ X of X by _ X = ⋃_F∈ F. We now apply the nerve principle of <Ref> to these complexes. Let X≠∅ be a set and let be a cover of X by non-empty subsets. If is uniformly acyclic, then so is _ X. By definition, the complex _ X is covered by the family of subcomplexes F as F ranges over . For any collection F_1, …, F_n of elements of , we have (F_1∩…∩ F_n) = (F_1)∩…∩(F_n). Thus the nerve of this cover of _ X is canonically isomorphic to the nerve of the cover of X. We are therefore indeed in the situation of <Ref>. We now specialise to buildings and flats. 
Consider a building with apartment system . The apartment system provides in particular a cover of the set X of vertices of ; formally, = { F= (A) : A ∈}3mmcovers3mm X= (). The following is the simplicial form of the algebraic definition in terms of chain groups that we proposed with Bucher in <cit.>. The flatmate complex of the building is the simplicial complex _ X as defined above. We also refer to it, in Tits's tongue, as the cokotcomplex of . We can now answer the problem suggested in <cit.>, as announced in <Ref>. We restate it here since the notation has now been introduced: The flatmate complex of any discrete irreducible Euclidean building is uniformly acyclic. The remaining ingredient for the proof of this theorem is as follows. Let be a discrete irreducible Euclidean building with apartment system . Then the nerve is uniformly acyclic. We shall argue that the simplicial complex Σ= satisfies the assumption of <Ref> for the function (n) = (k +1)n, where k is the apartment size of the associated spherical building as in <Ref>. To that end, note that for any non-empty subset _0, the nerve _0 is a subcomplex of Σ. In view of <Ref>, we only need to justify the following claim: Given any non-empty finite subset _0 of , we consider the simplicial subcomplex _0 of covered by _0. The claim is that if the simplicial complex _0 is contractible, then so is the nerve _0. To justify the claim, we appeal to the nerve lemma in ordinary simplicial homology. This is sometimes called the Borsuk nerve lemma; in precisely the setting of abstract simplicial complexes, two proofs can be found in <cit.>. That lemma asserts that _0 and _0 have the same homotopy type provided that every non-empty intersection of subcomplexes taken from the family _0 is contractible. Such an intersection is a subcomplex of the building , namely a non-empty intersection of apartments. The combinatorial convexity of apartments implies that this intersection is contractible and therefore the claim is established. The nerve coincides with the nerve of the cover of the building by its apartments. Thus <Ref> states that is uniformly acyclic. Therefore, <Ref> implies indeed that the flatmate complex _ X is uniformly acyclic. § VANISHING THEOREMS We first recall some terminology. A locally compact group G is boundedly acyclic (as a topological group) if its continuous bounded cohomology with real coefficients ^n(G) vanishes in every degree n>0. A topological group is amenable if every jointly continuous affine G-action on any non-empty convex compact set (in any Hausdorff locally convex topological vector space) admits a fixed point. This holds notably when G is compact or soluble (e.g. abelian), and is preserved by group extensions. A subgroup H<G of the topological group G is co-amenable in G if G has the above fixed-point property for the subclass of those convex compact sets having an H-fixed point. This holds for instance if H has finite index, or if H is normal with G/H amenable. We shall use the well-known general principles summarised in the proposition below to reduce the proof of <Ref> from general algebraic groups to the simple case. A locally compact group G is boundedly acyclic in each of the following cases: * G admits a co-amenable closed subgroup which is boundedly acyclic; * G admits an amenable normal closed subgroup N G with G/N boundedly acyclic; * G is the direct product of finitely many boundedly acyclic groups; * G is the quotient of a boundedly acyclic group by an amenable normal closed subgroup. 
For (i), see <cit.>. For (ii) and (iv), see <cit.>. For (iii), combine <cit.> with  <cit.>. These references make a second countability assumption which is not necessary for real coefficients (but in our case all algebraic groups are second countable anyways). §.§ Automorphism groups of buildings Recall that a group of building automorphisms is strongly transitive if it acts transitively on the set of pairs consisting of a chamber and an apartment containing it.[It is ironic that the term Tits chose for his marvelous concept of building is immeuble — literally: that which cannot be moved — whereas he demonstrated how deeply buildings are entwined with their rich transformation groups.] The fact that <Ref> should follow from <Ref> was introduced in <cit.>. We shall nonetheless give all details of the argument since the language is somewhat different here. Let G be a locally compact group with a strongly transitive proper action by automorphisms on a locally finite Euclidean building with apartment system . In order to prove the vanishing of the bounded cohomology ^n(G) for all n>0, we can assume that is irreducible and hence fits the assumptions of <Ref>. Indeed, the automorphism group of a product admits the product of the automorphism groups of the factors as a finite index subgroup. We therefore consider the flatmate complex _ X on X= defined in <Ref> and note that G acts on it by simplicial automorphisms. The bounded simplicial cochains of this flatmate complex yield an augmented cochain complex 0 [l] ℓ^∞((_ X)^[0])[l]_-ϵ ℓ^∞((_ X)^[1])[l]_-∂ ⋯[l]_-∂ In terms of vertices, a q-cochain f∈ℓ^∞((_ X)^[q]) is a bounded alternating function on (q+1)-tuples of vertices of the building, where each tuple is restricted to lie in some apartment. (This is the viewpoint adopted in <cit.>.) <Ref> above implies that this cochain complex is acyclic since it is the norm dual of the normed chain complex, which is boundedly acyclic by <Ref> (this is the duality of vanishing introduced by <cit.>). On the other hand, this dual cochain complex is a G-complex of Banach G-modules and thus it is a resolution of in the sense of continuous bounded cohomology <cit.>. The properness assumption implies that for each q≥ 0 the G-action on any set of (p+1)-tuples is also proper; this guarantees that each of the modules ℓ^∞((_ X)^[q]) is relatively injective in the sense of continuous bounded cohomology, see <cit.>. It follows that the continuous bounded cohomology of G is canonically realised by the subcomplex of G-invariant functions in ℓ^∞((_ X)^[q]), see e.g. <cit.>. We continue along the lines that we proposed with Bucher in <cit.>. Choose some apartment E∈ and denote by H<G its (set-wise) stabiliser in G. The restriction to tuples in E is a chain map ℓ^∞((_ X)^[q])^G ℓ^∞( ( E)^q+1)^H The claim is that this map is bijective, thus establishing an isomorphism between the continuous bounded cohomology of G and of H. This will complete the proof because H is an amenable group and is therefore boundedly acyclic. The injectivity of the claim follows from the transitivity of G on and the definition of the flatmate complex _ X. Surjectivity is the only time where we are using the strong transitivity, as follows. Pick f in ℓ^∞ ( ( E)^q+1)^H and let x be any (q+1)-tuple of vertices in any apartment (i.e. x represents any simplex of the flatmate complex). We know that gx E for some g∈ G and we want to extend f to this x by setting f(x) = f(g x). 
To show that this is well-defined and that the resulting function f on (_ X)^[q] is G-invariant, the only point to verify is that any other g'∈ G with g' x E satisfies f(g' x) = f(gx). Consider the two apartments g E and g' E; both contain x. By strong transitivity, there exists q∈ G which maps g E to g' E and such that q fixes pointwise the intersection g E ∩g' E, see e.g. <cit.>. In particular, q fixes every vertex in x. Now h=g' q g is an element of H and h g x = g' q x = g'x. Since f was supposed H-invariant on tuples in E, it follows f(g x) = f(g' x) as desired. In view of potential generalisations, we point out that the above argument used only the bounded acyclicity, rather than the amenability, of the apartment stabiliser H. §.§ Algebraic groups We now turn to the proof of <Ref> and consider an arbitrary algebraic group over a non-Archimedean local field k. Our goal is to show that the locally compact group G=(k) is boundedly acyclic. We recall here that G is endowed with the canonical Hausdorff “strong” topology <cit.>. We use general structure theory to reduce to a simpler class of groups, the class of simple groups. We need however to keep track of G=(k) in view of the discrepancy between quotients of algebraic groups and of the corresponding k-points. There is no loss of generality in assuming connected since passing to ^0 will replace G by a finite index closed subgroup, which is fine by <Ref>(i). We first recall that there is a canonical affinisation quotient, i.e. a faithfully flat morphism ψ→_aff to an affine group _aff=Spec() and that the kernel ψ is contained in the center of . This is a general form of a theorem of Rosenlicht <cit.> established in <cit.>: combine Thm. 8.2 and Cor. 8.3 therein under our assumption =^0. It can also be read in <cit.>. In particular, ψ is commutative and _aff, being affine, is a linear algebraic group <cit.>. The quotient of _aff by its radical is a connected semi-simple group . Thus is the almost-direct product of some number r of (quasi-)simple connected factors _i, i=1, … r. We reorder the factors so that _i is k-isotropic exactly when i≤ s for some 0≤ s ≤ r. Recall that _i(k)^+ denotes the normal subgroup of _i(k) generated by the k-points of the k-split unipotent subgroups of _i. This group is introduced in detail by Borel–Tits <cit.>; several equivalent definitions are given in <cit.>. In many cases (including local fields of characteristic zero), _i(k)^+ = _i(k) holds for isotropic groups; in the general case, we shall use that the quotient _i(k) / _i(k)^+ is compact <cit.>. Consider first i≤ s. Then Bruhat–Tits theory <cit.> (as above we refer to <cit.>, <cit.> and <cit.> for background) shows that the locally compact group _i(k) acts strongly transitively on an irreducible locally finite Euclidean building. The resulting action of _i(k)^+ is still strongly transitive; this follows e.g. from the decomposition given in <cit.>. Moreover this action is proper since the center of _i(k) is finite. Therefore, <Ref> implies that _i(k)^+ is boundedly acyclic for all i≤ s. If i>s, then by convention _i(k)^+ is trivial. In conclusion, <Ref>(iii) allows us to obtain that the product ∏_i=1^r _i(k)^+ is boundedly acyclic. It follows by <Ref>(iv) that (k)^+ is also boundedly acyclic because (k)^+ is the almost-direct product of the _i(k)^+, see <cit.>. At this point we consider the continuous group homomorphism f G = (k) _aff(k) (k) = _1(k) ⋯_r(k). We claim that the image f(G) in (k) contains (k)^+. 
Indeed this image is a Zariski-dense normal subgroup; therefore it must contain each _i(k)^+ since the latter is abstractly simple modulo its center by the main result of <cit.>. The claim follows. We deduce that the pre-image G^+<G of (k)^+ is a normal cocompact subgroup of G. In particular it is co-amenable and therefore, by <Ref>(i), it suffices to show that G^+ is boundedly acyclic. Since we already know that (k)^+ is boundedly acyclic, this follows from <Ref>(ii) if we justify that the kernel of f|_G^+ is amenable. By construction, this kernel is contained in a central extension of the group of k-points of the radical of _aff; therefore it is soluble and this completes the proof. ψ and the radical of _aff could have been combined into one “radical of ” in the above reductions, but the author is more comfortable separating the two steps since the classical structure theory is often stated in the context of linear algebraic groups <cit.>. We still need to justify <Ref>, for which we claim no originality. The idea is taken from the introduction of <cit.>, but translated to the non-Archimedean context. Let G=_2(_p) and let Γ be a (finite rank non-abelian) free group realised as a cocompact lattice in G. Then ^2(Γ) is non-trivial: this was already established by Johnson in <cit.> and rediscovered by Brooks <cit.>. The coefficient induction of bounded cohomology <cit.> implies that ^2(G, L^∞(G/Γ)) is also non-trivial. Since we are in degree two, a “double ergodicity with coefficients” argument shows that ^2(G, L^2(G/Γ)) is non-vanishing: this was established in <cit.>, see also <cit.>. Consider now a direct integral decomposition of L^2(G/Γ) into irreducible continuous unitary representations of G, which is even a Hilbertian sum decomposition in this case <cit.>. Appealing again to double ergodicity with coefficients, we conclude that some of these representations must have non-trivial ^2 (see e.g. Thm. 3.3 and Cor. 3.4 in <cit.>; these are stated for discrete groups but hold verbatim in the locally compact case). The first paragraph of the above argument can be replaced with a geometric construction of a cocycle with coefficients in a multiple of the regular representation of G, as explained in <cit.> and <cit.>. Then the second paragraph holds using a direct integral decomposition (the Plancherel decomposition). §.§ Arithmetic groups We retain the notation of <Ref> and we refer to <cit.> for background on S-arithmetic groups, notably the following few facts: Given v∈ S we denote by K_v the corresponding completion and recall that (K_v) is non-compact precisely when is K_v-isotropic, i.e. when rank_K_v()>0. Let S_1 S be the collection of those isotropic valuations in S; we can assume S_1≠∅ since otherwise the statement is void. The assumption S_0 S implies that Γ is a lattice in ∏_v∈ S_1(K_v) and this lattice is irreducible by the strong approximation theorem. We also note at this point that <Ref> is indeed a particular case of <Ref>. Applying <cit.>, we deduce that the restriction map from the full product of Archimedean as well as non-Archimedean groups ^n(∏_v∈ S_1(K_v)) ^n(Γ) is an isomorphism for all n< 2 ∑_v∈ Srank_K_v(). Next we observe that the restriction map from the Lie group considered in <Ref> is induced by the composition Γ∏_v∈ S_1(K_v) ∏_v∈ S_0(K_v) = L, where the right arrow is the projection. Therefore, what is needed is to prove that the inflation map corresponding to that projection is an isomorphism. 
<Ref> implies that this is true in all degrees, using the Hochschild–Serre sequence; specifically, the statements of Prop. 12.2.1 and Prop. 12.2.2(ii) in <cit.>.
http://arxiv.org/abs/2407.03100v1
20240703133950
The boundary disorder correlation for the Ising model on a cylinder
[ "Rafael Leon Greenblatt" ]
cond-mat.stat-mech
[ "cond-mat.stat-mech", "math-ph", "math.MP" ]
The boundary disorder correlation for the Ising model on a cylinder Rafael Leon Greenblatt July 2024 ======================================== § ABSTRACT I give an expression for the correlation function of disorder insertions on the edges of the critical Ising model on a cylinder as a function of the aspect ratio (rescaled in the case of anisotropic couplings). This is obtained from an expression for the finite size scaling term in the free energy on a cylinder in periodic and antiperiodic boundary conditions in terms of Jacobi theta functions. I study the disorder correlation ⟨μ_⊤μ_⊥⟩_± := Z_∓/Z_± where Z_+ (Z_-) is the partition function of the Ising model on a discrete cylinder with open and (anti-)periodic boundary conditions, that is Z_± = ∑_σ∈{-1,+1}^4MN exp( ∑_j=1^2M ∑_k=-N+1^N-1 β E_1 σ_j,kσ_j,k+1 + ∑_j=1^2M-1∑_k=-N+1^N β E_2 σ_j,kσ_j+1,k ±∑_j=1^2M β E_1 σ_j,N σ_j,-N+1 ) . This quantity appears when expressing the correlation function in periodic boundary conditions for an odd number of spins on each boundary as a Pfaffian in the exact solution <cit.>, via the Kadanoff-Ceva Fermion two-point function ⟨ψ_x,⊤ψ_y,⊥⟩_- = ⟨σ_x μ_⊤ σ_y μ_⊥⟩_- = ⟨σ_x σ_y⟩_+ ⟨μ_⊤μ_⊥⟩_- which can be characterized via the solution of a linear boundary-value problem <cit.> and evaluated via a modified Fourier transform, giving a particularly explicit form in the continuum limit <cit.>. Higher spin correlations can be calculated from ⟨ψ_x_1,⊤…ψ_x_m,⊤ψ_y_1,⊥…ψ_y_n,⊥⟩_- = ⟨σ_x_1…σ_x_m σ_y_1…σ_y_n⟩_+ ⟨μ_⊤μ_⊥⟩_- (m,n odd) which can again be related to a linear boundary value problem, the solution of which is the Pfaffian of a matrix whose elements are values of the two-point function in <ref> or ⟨ψ_x,⊤ψ_x',⊤⟩_- = ⟨σ_x σ_x'⟩_- , ⟨ψ_y,⊥ψ_y',⊥⟩_- = ⟨σ_y σ_y'⟩_- . The result is an expression for the multi-point boundary spin correlation function (with periodic boundary conditions) as a Pfaffian in terms of the two-point boundary spin correlation with either periodic or antiperiodic boundary conditions and the disorder correlation of <ref> <cit.>. Note that this is slightly different from the situation with an even number of spins on each boundary, where (as for simply connected domains) the boundary spin correlation function is immediately equal to a Kadanoff-Ceva correlation function and so is given by a Pfaffian in terms of the two point spin function in the same boundary conditions, without any additional factor. At the critical temperature in the limit of M, N →∞ with N/M^2 → 0, I will show that ⟨μ_⊤μ_⊥⟩_± ∼ ( θ_2(0,e^-2πζ)/(2θ_3(0,e^-2πζ)) )^±1/2 = ( θ_2(0 | 2iζ)/(2θ_3(0 | 2iζ)) )^±1/2 where ζ = ξ M/N is a rescaled aspect ratio with a factor ξ depending only on the ratio of coupling constants E_1/E_2 (cf. <cit.>), with ξ=1 in the isotropic case E_1=E_2; and the θ_j are Jacobi theta functions <cit.>. This follows from an asymptotic expansion of the logarithm of the partition functions at the critical temperature to constant order, i.e. log Z_± = p MN + s N + z_±(ζ) + O(1/N) + O(N/M^2) . Note that in this case the constant-order term (also called the finite size scaling term) is the first term which differs between the periodic and antiperiodic cases, and so it entirely describes the leading behaviour of the disorder correlation in <ref>.
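Since the limiting expression involves only Jacobi theta functions of nome e^-2πζ, it is straightforward to evaluate to arbitrary precision. The following sketch (Python with mpmath, whose jtheta(n, z, q) computes θ_n(z, q); the sample values of ζ are arbitrary illustrations, not taken from the paper) tabulates the predicted ⟨μ_⊤μ_⊥⟩_±:

```python
from mpmath import mp, jtheta, exp, pi, sqrt

mp.dps = 30  # working precision in decimal digits

def boundary_disorder(zeta, sign=+1):
    """Predicted <mu_top mu_bottom>_(+/-) = (theta_2(0,q)/(2 theta_3(0,q)))^(+/-1/2),
    with nome q = exp(-2*pi*zeta), where zeta = xi*M/N is the rescaled aspect ratio."""
    q = exp(-2 * pi * zeta)
    r = jtheta(2, 0, q) / (2 * jtheta(3, 0, q))
    return sqrt(r) if sign > 0 else 1 / sqrt(r)

for zeta in (0.1, 0.5, 1.0, 2.0):
    print(zeta, boundary_disorder(zeta, +1), boundary_disorder(zeta, -1))
```

As ζ grows the nome vanishes, so the periodic correlation decays to zero while the antiperiodic one diverges, consistent with the asymptotics discussed below.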
The presence of such a term for the Ising model was first noted by <cit.> for the torus, and it was subsequently noted that the behaviour of such terms (including at least some of their dependence on boundary conditions) could be explained by the identification of the scaling limit of the Ising model with a specific conformal field theory <cit.>, stimulating a number of further calculations in other geometries and boundary conditions, e.g. <cit.>. For the periodic cylindrical case this expansion was first discussed in <cit.> although only part of the result was presented; a more detailed treatment (again in the periodic case) was presented in <cit.>. A similar expansion has been carried out in <cit.>, giving a less explicit expression which however is also valid away from the critical temperature (as in a massive scaling limit), in both periodic and antiperiodic boundary conditions. Similar expansions have also been obtained for the dimer model (e.g. <cit.>), including some cases giving formulae for similar ratios of partition functions <cit.>. The starting point is the exact solution of the Ising model on a cylinder due to McCoy and Wu <cit.>, which expresses the partition function in terms of products of 2 × 2 matrices similar to the transfer matrix of the one-dimensional Ising model, also used in the study of disordered models where these become random matrices <cit.>, and which can be generalized (not without complications) to a rectangle with open boundary conditions <cit.>. Note that McCoy and Wu also allowed an additional term in the Hamiltonian coupling to the sites on one of the boundaries, but we take this term to be zero (setting the corresponding boundary coupling to zero). Since the solution is based on the high-temperature contour expansion, the main parameters are z_j := tanhβ E_j , j=1,2 ; in what follows I will mostly consider the critical temperature, characterized by z_2 = (1-z_1)/(1+z_1) (which reduces to z_1 = √(2)-1 in the isotropic case E_1=E_2, i.e. z_1=z_2), and use this to simplify the expressions in <cit.> by eliminating z_2. With these specializations, the partition function is given by Z_± = (1/2)(2 coshβ J)^MN ( coshβ J )^N(M-1) √(|A_±|) where A is an antisymmetric 4MN × 4MN matrix made up of 4 × 4 blocks A_±;x,y;x,y = [ 0 1 -1 -1; -1 0 1 -1; 1 -1 0 1; 1 1 -1 0 ], 1 ≤ x ≤ M, 1 ≤ y ≤ N A_±;x,y;x,y+1 = - A^T_±;x,y+1;x,y = [ 0 t 0 0; 0 0 0 0; 0 0 0 0; 0 0 0 0 ], 1 ≤ x ≤ M, 1 ≤ y < N A_±;x,y;x+1,y = - A^T_±;x+1,y;x,y = [ 0 0 0 0; 0 0 0 0; 0 0 0 t; 0 0 0 0 ], 1 ≤ x < M, 1 ≤ y ≤ N A_±;x,N;x,1 = - A^T_±;x,1;x,N = [ 0 ∓ t 0 0; 0 0 0 0; 0 0 0 0; 0 0 0 0 ], 1 ≤ x ≤ M and all other entries zero; note that only the last case depends on the parity. Carrying out a Fourier transform as in <cit.> (the antiperiodic case can be conducted as in the discrete torus) block-diagonalizes the matrix A_±, giving |A_±| = ∏_θ∈ Q_±(N) |B_±(θ)| where the product is over Q_+(N) = {π(2n-1)/2N : n=1,…,2N} or Q_-(N) = {π n/N : n=0,…,2N-1} , according to the boundary condition. The matrix B_±(θ) is the same sparse matrix appearing in <cit.>, from which Z_±^2 = (2 coshβ E_1)^8MN (coshβ E_2)^4N(2M-1) ∏_θ∈ Q_±(N) { |1 + z_1 e^iθ|^4M λ^2M [ v^2 + v'^2 α^-4M ] } (cf. <cit.>; as noted above the boundary coupling is zero, so z=0, giving a slight simplification), where λ = z_2 (1 - z_1^2) α/|1 + z_1 e^iθ|^2 , and specializing to the critical case α = 1/(2(1-z_1)^2) { 2(1+z_1^2)^2/(1+z_1)^2 - 4 z_1^2/(1+z_1)^2 (e^iθ + e^-iθ)
+ 4 z_1/(1+z_1)^2 [ (1 - z_1^2 e^iθ)(1 - z_1^2 e^-iθ)(1 - e^iθ)(1 - e^-iθ) ]^1/2 } = 1/(1-z_1^2)^2 { (1+z_1^2)^2 - 4 z_1^2 cosθ + 2 z_1 [ 2(1 - cosθ)(1 + z_1^4 - 2 z_1^2 cosθ) ]^1/2 } = 1 + (2 z_1/(1-z_1^2)) |θ| + 2 ( z_1/(1-z_1^2) )^2 θ^2 + O(θ^3) = exp( (2 z_1/(1-z_1^2)) |θ| ) + O(θ^3) =: exp( ξ |θ| ) + O(θ^3) , θ→ 0 , using the criticality condition to simplify, in particular α_1 = z_1^2, α_2 = 1; and v, v' ≥ 0 are such that v^2 + v'^2 = 1 , v/v' = i (z_2^2 - λ)/(z_2 s) with s = 2 i z_1 sinθ/ |1 + z_1 e^iθ|^2 , so that v'/v = 2 z_1 sinθ/( z_2 |1+z_1 e^iθ|^2 - (1-z_1^2)α ) , v^2 = ( 1 + v'^2/v^2 )^-1 . In identifying the limit I separate factors in <ref> as Z_± = Y(M,N) ∏_θ∈ Q_±(N) W_M(θ) X_M(θ) , with Y(M,N) = (2 coshβ E_1)^4MN (coshβ E_2)^2N(2M-1) z_2^2MN (1-z_1^2)^2MN , W_M(θ) = α^M |v| , X_M(θ) = [ 1 + (v'^2/v^2) α^-4M ]^1/2 . The first term is entirely straightforward. For the other two parts, the behavior of various quantities near θ=0 is particularly important. It is easy enough to see from <ref> that α depends smoothly on θ∈ (0,2π) but its derivative is discontinuous at zero, with d/dθ logα(θ) = ±ξ + O(θ^2) , θ→ 0^± . With this in mind, we use the Euler-MacLaurin formula <cit.> to note that ∑_θ∈ Q_+(N) f(θ) = ∑_n=1^2N f( π(2n-1)/2N ) = (N/π)∫_π/2N^(2N-1/2)π/N f(θ) dθ + (1/2) f(π/2N) + (1/2) f((2N-1/2)π/N) + (1/12)(π/N)[ f'((2N-1/2)π/N) - f'(π/2N) ] + O(1/N^2) = (N/π)∫_0^2π f(θ) dθ + (1/24)(π/N)[ f'(0) - f'(2π) ] + O(1/N^2) , N→∞ for any f which is three times differentiable on (0,2π) with bounded third derivative, using (N/π)∫_0^π/2N f(θ) dθ = (1/2) f(π/2N) - (N/π)∫_0^π/2N ∫_x^π/2N f'(y) dy dx = (1/2) f(π/2N) - (1/8)(π/N) f'(0) + O(1/N^2) , N→∞ to obtain the last expression; similarly, with f(0)=f(2π), ∑_θ∈ Q_-(N) f(θ) = ∑_n=0^2N f( nπ/N ) - (1/2) f(0) - (1/2) f(2π) = (N/π)∫_0^2π f(θ) dθ - (1/12)(π/N)[ f'(0) - f'(2π) ] + O(1/N^2) , N→∞ . Applying these expressions, log Y(M,N) + ∑_θ∈ Q_±(N) log W_M(θ) = p MN + s N + w_± M/N + O( 1/N + M/N^2 ) , where p and s are coefficients independent of the boundary condition, and w_+ = (π/12)ξ , w_- = -(π/6)ξ ; note that this is the only term in <ref> which depends on the boundary condition, and there is no term proportional to M. For the remaining factor, first note that ∑_θ∈ Q_±(N)∩ [N^-1/3, 2π - N^-1/3] log X_M(θ) = O( ∑_θ∈ Q_±(N)∩ [N^-1/3, 2π - N^-1/3] α(θ)^-4M ) = O( ∑_n=N^2/3^∞ e^-c(M/N)n ) = O( (N/M) e^-c(M/N)N^2/3 ) , for some c > 0, which is negligible; the remaining part can be handled with θ→0 asymptotic expressions: noting | log X^2_M(θ) - log(1 + e^-4ξ Mθ) | ≤ | (v'/v)^2 α^-4M - e^-4ξ M|θ| | ≤ | (v'/v)^2 - 1 | α^-4M + | α^-4M - e^-4ξ M|θ| | . Using the asymptotic expansion in <ref> in <ref> gives v'/v = ( 2 z_1 θ + O(θ^2) )/( (1-z_1^2)(1 - α) + O(θ^2) ) = -θ/|θ| + O(θ) , (v'/v)^2 = 1 + O(θ) , θ→ 0 , and again using the asymptotics from <ref> this gives | (v'/v)^2 - 1 | α^-4M ≤ C θ e^-cMθ , |θ| ≤ N^-1/3 ; also | α^-4M - e^-4ξ Mθ | ≤ 4M | α - e^ξθ | [ min(α, e^ξθ) ]^-4M-1 ≤ C θ^2 e^-cMθ and putting all this together (note that X_M is even in θ with X_M(0) = 1) ∑_θ∈ Q_+(N) log X_M(θ) - log∏_n=1^∞[ 1 + exp( -2πξ(M/N)(2n-1) ) ] = O( (N/M^2)∫_0^∞ (k + k^2) e^-k dk ) = O( N/M^2 ) ∑_θ∈ Q_-(N) log X_M(θ) - log∏_n=1^∞[ 1 + exp( -4πξ(M/N) n ) ] = O( (N/M^2)∫_0^∞ (k + k^2) e^-k dk ) = O( N/M^2 ) . As noted in <cit.>, the products appearing here can be expressed in terms of Jacobi theta functions <cit.> as ∏_n=1^∞[ 1 + exp( -2πζ(2n-1) ) ]^2 = θ_3(e^-2πζ)/θ_0(e^-2πζ) , ∏_n=1^∞[ 1 + exp( -4πζ n ) ]^2 = e^(π/2)ξ M/N θ_2(e^-2πζ)/( 2θ_0(e^-2πζ) ) where ζ = ξ M/N and θ_j(q) is an abbreviation for θ_j(0,q), and θ_0(q) := ∏_n=1^∞ (1-q^2n) = q^-1/12 [ (1/2)θ_2(q)θ_3(q)θ_4(q) ]^1/3 , see <cit.>.
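These product identities (and the θ_0 identity) are easy to confirm to high precision; a minimal numerical check (Python/mpmath, truncating the infinite products at an arbitrary finite depth) might read:

```python
from mpmath import mp, jtheta, exp, pi, mpf

mp.dps = 30
zeta = mpf("0.3")                 # arbitrary test value of the rescaled aspect ratio
q = exp(-2 * pi * zeta)           # nome q = exp(-2*pi*zeta)
K = 200                           # truncation depth of the products

theta0 = mpf(1)                   # theta_0(q) = prod_(n>=1) (1 - q^(2n))
p_odd = p_even = mpf(1)
for n in range(1, K + 1):
    theta0 *= 1 - q**(2 * n)
    p_odd  *= (1 + q**(2 * n - 1))**2   # prod (1 + exp(-2*pi*zeta*(2n-1)))^2
    p_even *= (1 + q**(2 * n))**2       # prod (1 + exp(-4*pi*zeta*n))^2

t2, t3, t4 = (jtheta(j, 0, q) for j in (2, 3, 4))
print(p_odd  - t3 / theta0)                           # ~ 0
print(p_even - q**mpf("-0.25") * t2 / (2 * theta0))   # ~ 0  (q^(-1/4) = e^((pi/2)*xi*M/N))
print(theta0 - q**(-mpf(1) / 12) * (t2 * t3 * t4 / 2)**(mpf(1) / 3))  # ~ 0
```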
Collecting all this, ∑_θ∈ Q_±(N) log X_M(θ) = x_±(ζ) - w_± M/N + O( N/M^2 ) where w_± is as in <ref> and x_+(ζ) = (1/6) log( 2θ_3^2(e^-2πζ)/( θ_2(e^-2πζ)θ_4(e^-2πζ) ) ) , x_-(ζ) := (1/6) log( θ_2^2(e^-2πζ)/( 4θ_3(e^-2πζ)θ_4(e^-2πζ) ) ) . <Ref> then follows by combining this with the other expressions obtained above. As a check on the calculation we can compare the asymptotics of this finite size scaling term with the predictions of conformal field theory, as has already been done for the corresponding terms in a number of other boundary conditions <cit.> and for related expressions for cylindrical models <cit.>. To do so, we need to examine the leading-order behavior of z_± = x_± as ζ→∞ (i.e. M ≫ N) and ζ→ 0 (i.e. N ≫ M). For the former limit, where q := e^-2πζ→ 0, we use the series representations <cit.> of the theta functions to write θ_2(q) ∼ 2 q^1/4 , θ_3(q) ∼ 1 , θ_4(q) ∼ 1 , all for q → 0, so that z_+(ζ) ∼ (1/6) log q^-1/4 = (π/12)ζ , z_-(ζ) ∼ (1/6) log q^1/2 = -(π/6)ζ , corresponding to the prediction of <cit.> for an infinite tube with periodic or antiperiodic boundary conditions and the central charge c = 1/2 of the Ising model. For the limit ζ→ 0, we use Jacobi's imaginary transformation (<cit.>; note the use of the parameter τ with q = e^iπτ, so τ = 2iζ and τ' = -1/τ = i/2ζ), which for the quantities at hand takes the form θ_2(e^-πx) = x^-1/2 θ_4(e^-π/x) , θ_3(e^-πx) = x^-1/2 θ_3(e^-π/x) , θ_4(e^-πx) = x^-1/2 θ_2(e^-π/x) , to rewrite <ref> as z_+(ζ) = (1/6) log( 2θ_3^2(e^-π/2ζ)/( θ_2(e^-π/2ζ)θ_4(e^-π/2ζ) ) ) ∼ (π/48)ζ^-1 z_-(ζ) = (1/6) log( θ_4^2(e^-π/2ζ)/( 4θ_3(e^-π/2ζ)θ_2(e^-π/2ζ) ) ) ∼ (π/48)ζ^-1 so that in this limit both versions give the same form predicted for an infinite strip with open boundary conditions. § ACKNOWLEDGEMENTS This paper arose out of discussions with Alessandro Giuliani. The work was supported by the MIUR Excellence Department Project MatMod@TOV awarded to the Department of Mathematics, University of Rome Tor Vergata, CUP E83C23000330006.
http://arxiv.org/abs/2407.01890v1
20240702021957
Maximizing Uplink and Downlink Transmissions in Wirelessly Powered IoT Networks
[ "Xiaoyu Song", "Kwan-Wu Chin" ]
cs.NI
[ "cs.NI" ]
Maximizing Uplink and Downlink Transmissions in Wirelessly Powered IoT Networks Xiaoyu Song and Kwan-Wu Chin Author Song and Chin are with the School of Electrical, Computer and Telecommunications Engineering, University of Wollongong. Emails: song-xiaoyu@outlook.com and kwanwu@uow.edu.au. July 8, 2024 ====================================================================================================================================================================================================================== § ABSTRACT This paper considers the problem of scheduling uplinks and downlinks transmissions in an Internet of Things (IoT) network that uses a mode-based time structure and Rate Splitting Multiple Access (RSMA). Further, devices employ power splitting to harvest energy and receive data simultaneously from a Hybrid Access Point (HAP). To this end, this paper outlines a Mixed Integer Linear Program (MILP) that can be employed by a HAP to optimize the following quantities over a given time horizon: (i) mode (downlink or uplink) of time slots, (ii) transmit power of each packet, (iii) power splitting ratio of devices, and (iv) decoding order in uplink slots. The MILP yields the optimal number of packet transmissions over a given planning horizon given non-causal channel state information. We also present a learning based approach to determine the mode of each time slot using causal channel state information. The results show that the learning based approach achieves 90% of the optimal number of packet transmissions, and the HAP receives 25% more packets as compared to competing approaches. Medium Access, RF-charging, SWIPT, Channel Access, Optimization. § INTRODUCTION Internet of Things (IoT) networks have a broad range of applications, where they deploy a large number of devices to collect data from their environment such as temperature or the speed of cars. These data are then uploaded and analyzed by a server or base station. Further, these devices may be used to train machine learning models <cit.>. Apart from that, some devices may have an actuator, where they are called upon by a controller to effect an environment after sensing some events <cit.>. A fundamental issue in IoT networks is that devices have limited energy supply. One solution is to deliver energy wirelessly. Specifically, devices harvest energy from radio frequency (RF) signals broadcasted by dedicated energy sources such as power beacons. For example, Powercast demonstrates that their P2100B receiver <cit.> can harvest 2.75 mW of power from RF signals. Further, since RF signals can carry both data and energy, the simultaneously wireless information and power transfer (SWIPT) architecture has received considerable attention, see <cit.> for a survey. Briefly, SWIPT allows a device to harvest RF energy via time switching or power splitting. For time switching, a receiver switches between an energy harvesting circuit and an information decoding circuit. For power splitting, the received power is split into two parts, where one part is used for energy harvesting and the other for decoding data. Another issue is spectrum efficiency, where communications to/from devices may be limited by interference or multiple access schemes. To this end, rate splitting multiple access (RSMA) <cit.> is now of interest. Briefly, RSMA divides a message into a common message and a private message. The common message contains information for all devices where each private message is for a specific device. 
Each device first decodes the common message and removes it from its received signal via successive interference cancellation (SIC) <cit.>. After that, a device decodes its private message by treating all other private messages as noise. Similarly, RSMA can also be applied for uplink transmissions <cit.>. Specifically, a device divides its uplink message into two sub-messages and transmits them to a base station with a different power level. The base station then decodes these messages using SIC. To date, no works have considered uplink and downlink transmissions in an RF-powered RSMA IoT network. Fig. <ref> shows an example network. A hybrid access point (HAP) transfers energy and exchanges information with a set of sensor devices. Unlike prior works, see Section <ref>, we do not consider the conventional time-division duplex (TDD) structure. Instead, time slots are used either for downlinks or uplinks. Specifically, when the HAP marks a time slot as downlink, it transmits data to devices as per RSMA. Further, devices adjust their power split ratio to decode information and harvest energy simultaneously. As for time slots marked as uplink mode, devices consume their harvested energy to transmit data packets to the HAP using RSMA. Note that the said time mode structure allows an HAP to determine whether a time slot is better for downlinks or uplinks; i.e., if in time slot t the channel gain from devices is poor but the channel gain to devices is good, then time slot t can be used to transfer more energy and data to devices, and vice-versa. This means the HAP is able to defer uplinks to future time slots that have better channel gains, and thus allow devices to use their precious harvested energy efficiently. Our aim is to maximize the number of packets decoded by the HAP and devices over a given planning horizon with multiple time slots. To do this, the HAP needs to decide the mode of each slot. Specifically, it needs to determine whether a slot is better suited for downlink or uplink transmissions. Fig. <ref> shows a possible schedule using a mode based time structure versus a conventional TDD frame. Assume the energy delivery in red colored slots is over a poor channel; otherwise, it is colored in green. Each white colored block indicates that a packet is transmitted and successfully decoded whereas a dotted block means a common message is transmitted in the downlink. We see that the channel condition in the first slot is poor for energy delivery which results in a low amount of harvested energy. Consequently, for the TDD structure, only one uplink packet is transmitted in the second slot. By contrast, in our mode selection structure, the HAP selects downlink mode in the second slot, which has a good channel condition. This thus allows devices to harvest more energy and transmit three packets in the third slot. Furthermore, the HAP selects downlink mode in the fourth slot because the channel is good for downlink transmissions, which results in the transmission of four packets. In contrast, the HAP transmits two packets in the third slot if it uses the TDD structure. As a result, we see that our mode selection structure has nine more packets than the conventional TDD structure. Henceforth, we aim to design a scheduler that is run by a HAP. The scheduler determines (i) the mode of each slot, (ii) the transmit power of each packet, (iii) the power splitting ratio at each device in a downlink slot, and (iv) the decoding order used by the HAP in uplink slots. 
Designing such a scheduler is challenging as it has to consider RSMA and RF charging jointly. In fact, the said combination introduces a fundamental coupling relating to the HAP's RF charging process, which affects downlinks and RF energy harvested by devices, which in turn affects uplinks. To elaborate, (i) a HAP must consider the fact that the energy at devices is coupled across time slots. Specifically, the energy at devices is a function of the channel gain from the HAP, numbers of slots used for energy delivery and the data transmissions in previous time slots, (ii) a HAP does not have future channel state information, meaning it does not know whether there is a future time slot that is better for downlink or/and uplink transmissions, and (iii) the transmit power of each split message in both downlink and uplink slots has to satisfy a given SINR condition. Critically, the transmit power will be affected by the set of transmitting devices and the decoding order of received packets. To this end, we make the following contributions: * We are the first to study a novel RSMA-aided SWIPT wireless powered IoT network. In particular, to the best of our knowledge, no prior works have considered the aforementioned issues and challenges that arise from the combination of RSMA and RF-charging in order to maximize the number of uplink and downlink packets in the said network. * We outline the first mixed integer linear program (MILP) that can be used to compute (i) the mode of each slot over a planning horizon, (ii) the transmit power allocated for each packet, (iii) the power splitting ratio of each device in downlink slots and (iv) the decoding order used by the HAP in uplink slots. Advantageously, it yields the optimal number of messages received by devices and the HAP given non-causal channel state information. * We show how a novel reinforcement learning based approach can be used to determine the mode of each time slot over a planning horizon. Another key innovation is that for each downlink and uplink slot, it uses linear programs to determine the transmit power of each packet, the power splitting ratio at devices and the decoding order of uplink packets. * We present the first study of the said solutions. The results show that the reinforcement learning approach reaches 90% of the optimal number of packet transmissions of the said MILP. Further, reinforcement learning leads to 15% and 25% higher number of packet transmissions than a round robin and random approach, respectively. Next, Section <ref> discusses prior works. We then formalize the system and problem in Section <ref> and <ref>, respectively. We then present a learning solution in <ref>. Our evaluation of the said MILP and learning solution is detailed in <ref>. Lastly, we conclude in Section <ref>. § RELATED WORKS Our research overlaps with works (1) that consider joint uplink and downlink optimization in wireless powered communication networks (WPCNs), and (2) data transmissions using RSMA. §.§ Joint Uplink and Downlink Most works consider a Time-Division Duplex (TDD) frame structure where a downlink phase must be followed by an uplink phase. In <cit.>, the authors aim to optimize the beamforming weight at HAPs and devices, power splitting ratio at each device, individual data rate of each user and downlink duration in order to maximize the uplink sum-rate of the system. The authors in <cit.> aim to maximize uplink and downlink sum-rate by optimizing the beamformer at an HAP and devices. 
In <cit.>, the proposed solution optimizes the time duration for uplink and downlink and power split ratio at each device. Some works consider time switching, where an HAP uses different time slots to deliver energy and transmits data to devices. As an example, the solution in <cit.> maximizes sum-rate by optimizing the time allocation for downlink and uplink. In <cit.>, the authors aim to maximize both uplink and downlink sum-rate by optimizing charging, downlink and uplink duration and transmit power of devices. The authors of <cit.> consider clustering of devices; each cluster uses a different subcarrier. The authors maximize the system sum-rate by optimizing the subcarrier assigned to devices, time duration for energy and data delivery and transmit power of devices. There is also research into orthogonal frequency division multiple access (OFDMA) networks. The authors of <cit.> aim to maximize uplink sum-rate while ensuring a minimum downlink data rate. They achieve this by optimizing subcarrier assignments and transmit power of devices in downlink slots and the time duration of uplink slots. In <cit.>, the authors assign subcarriers and determine the transmit power over each sub-carrier and power-splitting ratio at devices. Reference <cit.> considers uplink sum-rate whereas in <cit.>, the authors consider both uplink and downlink transmissions. Other than sum-rate, other metrics of interest include energy consumption, fairness among device, and energy efficiency. As an example, in <cit.>, the authors aim to minimize the energy consumption of both uplink and downlink transmissions. They consider a HAP with multiple antennas and control its beamforming power and duration. The authors in <cit.> also consider a TDD frame where they address the well-known double near far effect <cit.> by maximizing the minimum downlink and uplink sum-rate. The authors of <cit.> aim to optimize the uplink and downlink duration, transmit power of the HAP and devices and the power splitting ratio in order to maximize fairness among devices. The authors in <cit.> study uplink and downlink energy efficiency and aim to optimize device selection and transmit power at the HAP and devices. §.§ Rate Splitting Multiple Access Many works have studied resource allocation in RSMA networks. However, none of them consider using RSMA for both downlink and uplink communications. In fact, many previous works only consider RSMA for downlink transmissions. In this respect, the transmitter in <cit.> exploits beamforming and RSMA to transmit information and energy to devices. The transmitter in <cit.> has perfect channel state information where in <cit.>, the transmitter has imperfect channel state information; i.e., it only has historical or probability distribution of past channel gains. In <cit.>, the aim is to maximize energy efficiency by optimizing beamforming and power splitting ratio. The authors in <cit.> optimize beamforming at a transmitter to maximize its downlink sum rate and also ensure receivers harvest a minimum amount of energy. In <cit.>, an HAP with multiple antennas operates in a multicell network, where it aims to maximize both energy and spectrum efficiency by jointly optimizing beamforming weight and the rate of common and private messages. In <cit.>, the authors aim to improve the spectrum efficiency of a two-tier network by optimizing transmit precoding and rate allocation. Some works study the theoretical maximum sum-rate or minimum transmit power at an HAP. 
For example, in <cit.>, the authors analyze the sum-rate of an energy harvesting network where a relay uses its harvested energy to forward received signals using RSMA. The work in <cit.> minimizes the transmit power of a base station by finding the optimal transmit power, rate of messages and power splitting ratio. In <cit.>, the authors also aim to find the minimum transmit power where they consider a full-duplex system. Many solutions are designed to maximize the downlink sum-rate of RSMA networks where devices do not harvest RF energy. One such solution is <cit.>, where the authors assume a base station that has imperfect channel state information and aim to maximize long-term sum-rate by optimizing the transmit power of common and private messages in each time slot. In <cit.>, the authors also maximize sum-rate using imperfect channel state information where they consider a base station with multiple antennas. The authors of <cit.> study sum-rate maximization in a multicarrier system where they aim to optimize the transmit power allocated to subcarriers. The authors in <cit.> first use RSMA under a SIC constraint and aim to optimize the rate and transmit power of private messages. The work in <cit.> considers devices with multiple antennas, and aims to optimize the precoder of messages. Some works also consider uplink transmissions as per RSMA. An example work is <cit.>, where the authors analyze the throughput and outage probability of an RSMA network with only two devices. In <cit.>, the authors aim to maximize the uplink sum-rate at a HAP by optimizing the transmit power of devices and the decoding order at the HAP. §.§ Research Gaps To summarize, referring to Table <ref>, we see only references <cit.> and <cit.> have considered a mode-based structure, but they do not consider devices with power splitting capability. A key distinction is that our work is the first to consider joint uplink and downlink optimization for an RSMA-based wireless powered IoT network that uses a mode-based time structure. Further, only a few works have studied optimization over multiple time slots. However, these works have a different system setup. As we see in Table <ref>, reference <cit.> does not consider RF charging. In addition, the works in <cit.> and <cit.> do not consider RSMA. § SYSTEM MODEL Table <ref> summarizes our notations. There is a HAP and a set 𝒩 ={1,2,…, N} of RF harvesting devices, indexed by n. The distance between device n and the HAP is d_n. Time is divided into slots, where each slot has index t and a duration of τ seconds. Define 𝒯 = {1,2,…, T} to be a planning horizon with T time slots. The transmit power of the HAP in slot t is P^t. The maximum transmit power of the HAP and each device is denoted as P_0 and p_0, respectively. We assume both devices and the HAP have saturated traffic, meaning they always have a message to transmit. Each slot can either be in Downlink or Uplink mode. Let Ĭ^t and Î^t be binary variables, where Ĭ^t=1 means slot t is in Downlink mode and Î^t=1 indicates slot t is in Uplink mode. The HAP selects one mode in each time slot, whereby Ĭ^t and Î^t are constrained by Ĭ^t+Î^t = 1, ∀ t ∈𝒯. We assume block fading, where the channel power gain g^t_n remains fixed during each time slot but varies across time slots. In slot t the channel between device n and the HAP is modeled as g^t_n = θ^t_n d^-β_n, where θ^t_n represents short-term fading and is assumed to be exponentially distributed with unity mean, and β is the path-loss exponent.
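As a concrete illustration of this block-fading model, the short sketch below (Python; the distances, path-loss exponent, and slot count are hypothetical values, not taken from the paper) draws the per-slot channel power gains g^t_n = θ^t_n d_n^-β:

```python
import numpy as np

rng = np.random.default_rng(1)

N, T = 4, 10                      # devices and time slots (illustrative values)
d = rng.uniform(2.0, 10.0, N)     # HAP-device distances (assumed, in meters)
beta = 2.5                        # path-loss exponent (assumed)

# Short-term fading theta^t_n ~ Exp(1) (unit mean), i.i.d. across slots and
# devices; the gain is fixed within a slot and varies across slots.
theta = rng.exponential(scale=1.0, size=(T, N))
g = theta * d**(-beta)            # g[t, n] = theta^t_n * d_n^(-beta)

print(g.round(6))
```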
In a Downlink slot, the HAP transmits messages to devices using RSMA, where the message for device n is denoted as W_n. Each message is split into a common part and a private part, denoted as W^c_n and W^p_n, respectively. The HAP uses a public codebook to combine the common part of each user into a single common message, denoted as W^c. The common message W^c and N private messages are encoded into symbols independently, denoted as s_c and s_n, respectively. The HAP then uses superposition coding to transmit a composite signal to all users. The composite signal transmitted by the HAP is denoted as 𝐱 = √(μ^t_c P^t) s_c+∑_n=1^N √(μ^t_n P^t) s_n, where μ^t_c and μ^t_n denote the power coefficients of messages, and they are constrained by μ^t_c + ∑_n=1^N μ^t_n ≤Ĭ^t, ∀ t ∈𝒯. Thus, the signal received by device n is denoted as 𝐱^' = √(μ^t_c P^tg^t_n) s_c+∑_n=1^N √(μ^t_n P^tg^t_n) s_n + N_k, where N_k is ambient noise in the channel. Note that the power coefficients are non-negative if slot t is in Downlink mode. To receive messages from the HAP, a device first decodes the common message W^c from the received signal by treating all private messages as interference. The common message W^c is successfully decoded by device n if its SINR at device n, denoted as γ_c,n^t, satisfies γ_c,n^t = μ^t_c P^t g^t_n/∑_n=1^N μ^t_n P^t g^t_n + N_o≥Γ, where N_o is the noise power. Device n then maps W^c back to W^c_n according to the said public codebook. After that, device n cancels the common message W^c from the received signal as per SIC <cit.> and then decodes its private message W^p_n. The private message W^p_n is successfully decoded by device n if its SINR, denoted as γ_n^t, satisfies γ_n^t = μ^t_n P^t g^t_n/∑ _m≠ nμ^t_m P^t g^t_m + N_o≥Γ. Let C^t_n be a decision variable to indicate if device n decodes a common message in slot t, where C^t_n = 1 means that a common message is successfully decoded by device n in slot t. Let D^t_n be a decision variable to indicate if device n decodes a private message in slot t, where D^t_n = 1 means that a private message is successfully decoded by device n in slot t. The previous facts are formalized as follows: γ_c,n^t + (1-C^t_n)Φ≥Γ, ∀ t ∈𝒯, ∀ n ∈𝒩, and γ_n^t + (1-D^t_n)Φ≥Γ, ∀ t ∈𝒯, ∀ n ∈𝒩. In addition, we have C^t_n ≤Ĭ^t, ∀ t ∈𝒯, ∀ n ∈𝒩, and D^t_n ≤Ĭ^t, ∀ t ∈𝒯, ∀ n ∈𝒩, which ensure C^t_n and D^t_n are set to zero when slot t is not in Downlink mode. The number of downlink transmissions for device n in slot t is then calculated as R̆^t_n = C^t_n + D^t_n. Each device supports power splitting <cit.> where a received signal is split into two parts. Define the power splitting ratio of device n as ρ^t_n∈ [0,1]. A device n uses ρ^t_n of the received signal power for data transmission. This means the SINR of the common message W^c is γ_c,n^t = μ^t_c ρ^t_n P^t g^t_n/∑_n=1^N μ^t_n ρ^t_nP^t g^t_n + N_o. Similarly, the SINR of private message W^p_n is γ_n^t = μ^t_n ρ^t_n P^t g^t_n/∑ _m≠ nμ^t_m ρ^t_nP^t g^t_m + N_o. The power splitting ratio ρ^t_n is allowed to be greater than zero when slot t is in Downlink mode; otherwise, it is set to zero. Formally, ρ^t_n ≤Ĭ^t, ∀ t ∈𝒯, ∀ n ∈𝒩. Each device has a RF-energy harvester to harvest energy from the remaining (1-ρ^t_n) of signal 𝐱^'. We consider a non-linear energy harvesting model <cit.> and the harvested power at device n is defined as ψ(P^t_n)=M/1+exp(-a P^t_n+b)-M/(1+exp(ab))/1-1/(1+exp(ab)), where M is the maximum received power when the circuit is saturated; both a and b are constants specific to a given circuit. 
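To make the downlink decoding conditions and the harvesting model concrete, here is a small sketch (Python; the gains, powers, threshold Γ, and the circuit constants are illustrative assumptions, and the exponent in ψ is read as -a(P-b), the usual sigmoidal form consistent with the 1/(1+exp(ab)) normalization so that ψ(0)=0). The SINR expressions follow the displayed equations for γ_c,n and γ_n, including the power splitting ratio ρ_n:

```python
import numpy as np

GAMMA, N0 = 2.0, 1e-9              # SINR threshold and noise power (assumed)
M_SAT, A, B = 0.02, 150.0, 0.014   # EH circuit constants M, a, b (assumed values)

def harvested_power(p_in):
    """Non-linear (sigmoidal) harvesting model psi(.), normalized so psi(0) = 0."""
    omega = 1.0 / (1.0 + np.exp(A * B))
    sat = M_SAT / (1.0 + np.exp(-A * (p_in - B)))
    return (sat - M_SAT * omega) / (1.0 - omega)

def downlink_decodes(P, g, mu_c, mu, rho):
    """Indicators (C_n, D_n): whether device n can decode the common and its
    private message, with splitting ratio rho_n applied to the received power."""
    N = len(g)
    out = []
    for n in range(N):
        den_c = sum(mu[m] * rho[n] * P * g[m] for m in range(N)) + N0
        den_p = sum(mu[m] * rho[n] * P * g[m] for m in range(N) if m != n) + N0
        gam_c = mu_c * rho[n] * P * g[n] / den_c   # common message SINR
        gam_p = mu[n] * rho[n] * P * g[n] / den_p  # private message SINR after SIC
        out.append((gam_c >= GAMMA, gam_p >= GAMMA))
    return out

g = np.array([1e-6, 3e-6])         # downlink channel gains (assumed)
rho = np.array([0.7, 0.5])         # power splitting ratios (assumed)
print(downlink_decodes(P=3.0, g=g, mu_c=0.6, mu=[0.2, 0.2], rho=rho))
print(harvested_power((1 - rho) * 3.0 * g))  # input power to the EH branch
```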
The harvested energy of device n is H^t_n = ψ((1-ρ^t_n) P^t g^t_n)τ. In an Uplink slot, each device splits its message into two parts, denoted as s_n,j, where j ∈{1,2}. The received signal at the HAP is s_0 = ∑_n=1^N∑_j=1^2√(p^t_n,jg^t_n)s_n,j + N_0, where p^t_n,j is the transmit power of the j-th part of the message sent by device n. The transmit power p^t_n,j is set to zero if slot t is in Downlink mode. In addition, the total transmit power of device n must be below its maximum transmit power p_0. The previous facts are formalized as follows: ∑_j=1^2 p^t_n,j≤ p_0 Î^t, ∀ t ∈𝒯, ∀ n ∈𝒩. The HAP decodes Uplink messages from users using SIC. To do so, it requires a decoding order, which is defined as [s_n,j,…,s_m,k], where the first message to be decoded by the HAP is s_n,j. Define π^k_n,j to denote the position of message s_n,j in decoding order k, where message s_n,j is decoded earlier than s_m,l for order k if π^k_n,j > π^k_m,l. Consider the decoding order [s_1,1,…,s_N,1,s_1,2,…,s_N,2]. Message s_1,1 is decoded earlier than s_2,1, where π^k_1,1 > π^k_2,1. The collection of all possible decoding orders is 𝒦 = {[s_1,1,…,s_N,1,s_1,2,…,s_N,2], …, [s_1,2,…,s_N,2,s_1,1,…,s_N,1]}, where each order is indexed by k. The number of decoding orders is K=(2N)!. Let O^t_k be a binary variable that indicates whether decoding order k is selected (O^t_k=1) by the HAP in time slot t. Only one order is selected if slot t is in Uplink mode, i.e., ∑_k=1^(2N)! O^t_k = Î^t, ∀ t ∈𝒯. When order k is selected, the HAP successfully decodes message s_n,j if its SINR at the HAP, denoted as γ^t,k_n,j, is higher than or equal to the threshold Γ. Formally, γ^t,k_n,j = p^t_n,jg^t_n/(∑_π^k_m,l<π^k_n,j p^t_m,lg^t_mU^t_m,l+N_0) ≥Γ. After that, the HAP removes message s_n,j from the received signal s_0. Let U^t_n,j be a binary variable to indicate if the HAP decodes a message transmitted by device n in slot t, where U^t_n,j = 1 means that the HAP successfully decoded the j-th message part sent by device n in slot t. Formally, γ^t,k_n,1 + (2-O^t_k-U^t_n,1)Φ≥Γ, ∀ t ∈𝒯, ∀ n ∈𝒩, ∀ k ∈𝒦, and γ^t,k_n,2 + (2-O^t_k-U^t_n,2)Φ≥Γ, ∀ t ∈𝒯, ∀ n ∈𝒩, ∀ k ∈𝒦. In addition, we define U^t_n,j≤Î^t, ∀ t ∈𝒯, ∀ n ∈𝒩, ∀ j ∈{1,2}, which ensures devices transmit messages to the HAP only when slot t is in Uplink mode. We also define U^t_n,j≤∑_k=1^(2N)!O^t_k, ∀ t ∈𝒯, ∀ n ∈𝒩, ∀ j ∈{1,2}, which ensures that devices transmit messages only when the HAP selects an order. The number of uplink transmissions from device n in slot t is then calculated as R̂^t_n=U^t_n,1+U^t_n,2. In each slot, each device n must maintain a non-negative energy level, which evolves over time as follows: E^t+1_n = E^t_n + H^t_n Ĭ^t - ∑_j=1^2 p^t_n,jτ≥ 0. We consider fairness between the amount of data transmitted in uplink and downlink. Define δ∈ [0,1] as a weight that controls the number of uplink and downlink slots. The weighted sum-throughput over T slots is calculated as R = ∑ _t=1^T ∑ _n=1^N ((1-δ) R̆^t_n + δR̂^t_n). Note that R̆^t_n and R̂^t_n have the same range: a device can either receive or transmit at most two messages in each slot, so both ∑_t R̆^t_n and ∑_t R̂^t_n are bounded by 2T, where T is the number of time slots. Our aim is to maximize the weighted sum-throughput R over T slots. In Section <ref>, we first formulate a MILP that yields the optimal value of R. A key limitation, however, is that it requires non-causal channel information. To this end, in Section <ref>, we propose a learning-based approach in which the HAP uses only causal channel information.
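To make the slot-level bookkeeping concrete, the following sketch advances the battery states and accumulates the weighted reward for one slot, reusing the harvested_power helper sketched above. All names here are illustrative, and the per-slot decisions (mode, splitting ratios, transmit powers, decoded message counts) are assumed to come from whichever optimizer is in use.

```python
def step_slot(E, rho, p, P_t, g, I_dl, R_dl, R_ul, delta=0.4, tau=1.0):
    """Advance battery levels for one slot and return the slot's reward.

    E, rho, g: per-device lists; p[n] = (p_n1, p_n2) uplink powers;
    I_dl: 1 for a Downlink slot, 0 for Uplink; R_dl/R_ul: per-device
    numbers of downlink/uplink messages decoded in this slot.
    """
    reward = 0.0
    for n in range(len(E)):
        if I_dl:  # harvest from the (1 - rho) branch of the received signal
            E[n] += harvested_power((1.0 - rho[n]) * P_t * g[n]) * tau
        else:     # spend energy on the two uplink message parts
            E[n] -= (p[n][0] + p[n][1]) * tau
        assert E[n] >= 0.0, "energy causality violated"
        reward += (1.0 - delta) * R_dl[n] + delta * R_ul[n]
    return reward
```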
§ OPTIMIZATION MODEL To maximize R, we first formulate our problem as a Mixed Integer Non-Linear Program (MINLP). Define vector 𝐯=[Î^t,Ĭ^t,C^t_n,D^t_n,μ^t_c,μ^t_n,ρ^t_n,U^t_n,j,p^t_n,j,O^t_k] to contain all decision variables. The MINLP is formulated as (P1): max_𝐯 R s.t. (<ref>),(<ref>),(<ref>)-(<ref>),(<ref>),(<ref>),(<ref>)-(<ref>). In order to solve problem (<ref>) as an MILP, which can be solved more efficiently than an MINLP, the next subsection linearizes the non-linear terms ψ(P^t_n), γ_c,n^t and γ_n^t. §.§ Linearization First, we linearize Eq. (<ref>). Define a set 𝒮 = {1,2,…,S} of intervals. We use P^L_s and P^U_s to denote the lower and upper bound of interval s. If the received power P^t_n falls within interval s, we use a conversion efficiency η_s to convert the received power, which is defined as η_s = ψ(P^L_s)/(2 P^L_s) + ψ(P^U_s)/(2 P^U_s). We use the binary variable J^t_s,n to indicate that the received power of device n falls in the s-th interval in slot t. The variable J^t_s,n is set as follows: P^t_n ≥ (J^t_s,n-1)Φ + P^L_s, ∀ t ∈𝒯, ∀ n ∈𝒩, ∀ s ∈𝒮, P^t_n ≤ P^U_s+(1-J^t_s,n)Φ, ∀ t ∈𝒯, ∀ n ∈𝒩, ∀ s ∈𝒮. When the received power of device n falls in the s-th interval in slot t, where P^L_s≤ P^t_n ≤ P^U_s, the variable J^t_s,n is set to one. Otherwise, J^t_s,n is set to zero so that the inequality (<ref>) holds. In addition, each J^t_s,n is set to zero if slot t is in Uplink mode, where ∑_s=1^S J^t_s,n≤Ĭ^t, ∀ t ∈𝒯, ∀ n ∈𝒩. Define M^t_s,n as the converted power of device n in slot t if the received power P^t_n falls within interval s. We determine the converted power M^t_s,n as follows: M^t_s,n≤η_s P^t_n, ∀ t ∈𝒯, ∀ n ∈𝒩, ∀ s ∈𝒮, M^t_s,n≥η_s P^t_n-(1-J^t_s,n)Φ, ∀ t ∈𝒯, ∀ n ∈𝒩, ∀ s ∈𝒮, 0 ≤ M^t_s,n≤ J^t_s,nΦ, ∀ t ∈𝒯, ∀ n ∈𝒩, ∀ s ∈𝒮. Specifically, the output power M^t_s,n is set to η_s P^t_n when J^t_s,n is set to one. Similarly, if J^t_s,n is zero, the output power M^t_s,n is set to zero. The harvested energy of device n in slot t is then calculated as H^t_n = ∑_s=1^S M^t_s,nτ. In downlink slots, the SINRs of common message W^c and private message W^p_n, see Eq. (<ref>) and (<ref>), involve a product of two real decision variables. To linearize this, we apply McCormick envelopes[Note that the resulting MILP yields a lower bound on the optimal solution. This bound can be improved by dividing up the domain of decision variables and applying linearization in each sub-divided domain.]. Let μ^U_c, μ^L_c, μ^U_n, μ^L_n, ρ^U, ρ^L denote the upper and lower bounds of μ^t_c, μ^t_n and ρ^t_n, respectively. Define a decision variable ω^t_c,n to replace the product of μ^t_c and ρ^t_n. Inequality (<ref>) is then revised to γ_c,n^t = ω^t_c,n P^t g^t_n/(∑_m=1^N μ^t_m ρ^t_n P^t g^t_n + N_o) ≥Γ. Specifically, the term ω^t_c,n is constrained by ω^t_c,n≥μ^L_cρ^t_n + μ^t_cρ^L - μ^L_cρ^L, ∀ t ∈𝒯, ∀ n ∈𝒩, ω^t_c,n≥μ^U_cρ^t_n + μ^t_cρ^U - μ^U_cρ^U, ∀ t ∈𝒯, ∀ n ∈𝒩, ω^t_c,n≤μ^U_cρ^t_n + μ^t_cρ^L - μ^U_cρ^L, ∀ t ∈𝒯, ∀ n ∈𝒩, ω^t_c,n≤μ^L_cρ^t_n + μ^t_cρ^U - μ^L_cρ^U, ∀ t ∈𝒯, ∀ n ∈𝒩. The above constraints create a convex hull that encloses the value of μ^t_c·ρ^t_n. As for the product of μ^t_n and ρ^t_n, define a decision variable ω^t_n. Inequality (<ref>) is then revised to γ_n^t = ω^t_n P^t g^t_n/(∑ _m≠ nμ^t_m ρ^t_n P^t g^t_n + N_o) ≥Γ.
In particular, the term ω^t_n is constrained by ω^t_n ≥μ^L_nρ^t_n + μ^t_nρ^L - μ^L_nρ^L, ∀ t ∈𝒯, ∀ n ∈𝒩, ω^t_n ≥μ^U_nρ^t_n + μ^t_nρ^U - μ^U_nρ^U, ∀ t ∈𝒯, ∀ n ∈𝒩, ω^t_n ≤μ^U_nρ^t_n + μ^t_nρ^L - μ^U_nρ^L, ∀ t ∈𝒯, ∀ n ∈𝒩, ω^t_n ≤μ^L_nρ^t_n + μ^t_nρ^U - μ^L_nρ^U, ∀ t ∈𝒯, ∀ n ∈𝒩. In our model, the power coefficients μ^t_c and μ^t_n are non-negative and constrained by the maximum transmit power of the HAP. Hence, we set the values of μ^t_c and μ^t_n to be in the range [0,1]. Therefore, we have 0 ≤μ^t_c ≤ 1, ∀ t ∈𝒯, ∀ n ∈𝒩, 0 ≤μ^t_n ≤ 1, ∀ t ∈𝒯, ∀ n ∈𝒩. In addition, for the power splitting ratio ρ^t_n, we have 0 ≤ρ^t_n ≤ 1, ∀ t ∈𝒯, ∀ n ∈𝒩. Rewriting inequalities (<ref>)-(<ref>) and (<ref>)-(<ref>), we have ρ^t_n ≥ω^t_c,n≥ 0, ∀ t ∈𝒯, ∀ n ∈𝒩, μ^t_c ≥ω^t_c,n≥ρ^t_n + μ^t_c - 1, ∀ t ∈𝒯, ∀ n ∈𝒩, ρ^t_n ≥ω^t_n ≥ 0, ∀ t ∈𝒯, ∀ n ∈𝒩, μ^t_n ≥ω^t_n ≥ρ^t_n + μ^t_n - 1, ∀ t ∈𝒯, ∀ n ∈𝒩. Define the vector 𝐯^'=[J^t_s,n,M^t_s,n,ω^t_c,n,ω^t_n] to contain all decision variables used in the aforementioned linearization. We thus have the following MILP: (P2): max_𝐯,𝐯^' R s.t. (<ref>),(<ref>)-(<ref>),(<ref>)-(<ref>). The computational complexity of MILP (<ref>) is related to its number of variables and constraints. Specifically, we have the following result: Assume there are N devices and T slots. Then the solution space has size ∑_i=0^T \binom{T}{i}(∑_j=1^N \binom{2N}{j} (2N)!)^i. For T slots, the HAP has \binom{T}{i} choices of Uplink slots, where i denotes the number of Uplink slots over T slots. In each Uplink slot, there are (2N)! choices of decoding orders. When an order is selected, anywhere from zero up to 2N messages are transmitted. If the HAP selects j devices to transmit in each Uplink slot, there are \binom{2N}{j} choices of transmitting devices. These facts imply the size of the solution space is ∑_i=0^T \binom{T}{i}(∑_j=1^N \binom{2N}{j} (2N)!)^i. Proposition (<ref>) implies that MILP (<ref>) becomes computationally intractable for large-scale networks. §.§ Reduced Order Set We observe that for a decoding order, if we swap the positions of two messages from the same device, constraints (<ref>) and (<ref>) still hold. To illustrate, assume the HAP uses [s_1,1,s_1,2,…,s_N,1,s_N,2] to decode Uplink messages and the transmit power of message s_1,1 is p^'. The SINR of message s_1,1 is γ^t,k_1,1 = p^'g^t_1/(∑_π^k_m,l<π^k_1,1 p^t_m,lg^t_mU^t_m,l+N_0). According to constraint (<ref>), the transmit power of messages is a decision variable, so the transmit power assigned to message s_1,1 can also be assigned to s_1,2. Thus, message s_1,2 has the same SINR if the HAP uses decoding order [s_1,2,s_1,1,…,s_N,1,s_N,2]. Given this observation, we reduce the value of K by removing decoding orders that only differ in the position of messages from the same device. The size of the order set then becomes K = (2N)!/2^N. Eq. (<ref>) is revised to ∑_k=1^(2N)!/2^N O^t_k = Î^t. We thus have the following MILP with reduced order set, namely MILP-Re: (P3): max_𝐯,𝐯^' R s.t. (<ref>),(<ref>),(<ref>)-(<ref>),(<ref>),(<ref>)-(<ref>),(<ref>),(<ref>)-(<ref>),(<ref>)-(<ref>). Solving problem (P2) or (P3) yields the optimal weighted sum-throughput over a planning horizon. A key limitation of these problems is that they require non-causal channel state information. To this end, in the following section, we present a learning-based approach to maximize the weighted sum-throughput over T slots using causal channel information. Note that MILP (<ref>) and (<ref>) can be used as benchmarks against solutions that do not have channel state information.
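The counting behind the reduced order set can be verified by brute force for small N; the sketch below enumerates all (2N)! decoding orders and keeps one canonical representative per equivalence class, namely the order in which each device's part-1 message precedes its part-2 message.

```python
from itertools import permutations
from math import factorial

def reduced_order_set(N):
    """One representative per class of orders that differ only by
    swapping the two message parts of some device."""
    messages = [(n, j) for n in range(1, N + 1) for j in (1, 2)]
    return [
        order for order in permutations(messages)
        if all(order.index((n, 1)) < order.index((n, 2))
               for n in range(1, N + 1))
    ]

for N in (2, 3):
    K = len(reduced_order_set(N))
    assert K == factorial(2 * N) // 2 ** N
    print(N, K)   # N=2 -> 6, N=3 -> 90
```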
§ A REINFORCEMENT LEARNING-BASED APPROACH We first formulate a Markov Decision Process (MDP) <cit.>, which models a sequential decision-making process. After that, we propose a Q-learning based approach to maximize the weighted sum-throughput using causal channel state information. Note that the application of Q-learning is not straightforward, as in addition to the mode of time slots, we also have to consider the subset of devices and their order of uplink/downlink transmissions in each time slot. Apart from that, we emphasize that other reinforcement learning solutions can also be used. However, given that the application of reinforcement learning to our problem is an open problem, it is important to first establish how it can be applied to our problem before applying more computationally expensive reinforcement learning solutions, which we leave for future work[Note that our results show Q-learning achieves 90% of the optimal result. Hence, it is an open question whether the additional computational complexity of methods such as deep reinforcement learning would yield any significant gains.]. §.§ MDP Define an MDP as a tuple {𝒮, 𝒜, 𝒯, ℛ}, which records the set of states 𝒮, the set of actions 𝒜, the set of rewards ℛ, and the transition probability from one state to another 𝒯. Formally, * State 𝒮. Define 𝐬^t ∈𝒮 as the state of the system in slot t, where 𝐬^t =[s^t_1,…,s^t_N] contains the state of all devices. Each s^t_n corresponds to the maximum received power from device n to the HAP, which is the product of the residual energy at device n and the channel gain between device n and the HAP. Let ℒ = {0,ϵ_0, …, Lϵ_0} denote the set of all possible power levels that devices can have, where L represents the number of non-zero power levels. As a result, in our MDP, we have a finite state space 𝒮 = { [0, …,0] , [0, …, ϵ_0], …, [ϵ_0, …, ϵ_0], …, [Lϵ_0, …, Lϵ_0] }, where ϵ_0 = 1 μW. The size of the state space is defined as N_s, where N_s = (L+1)^N. * Action 𝒜. We define an action space 𝒜 = {0,1}, where a^t ∈𝒜 is the action in slot t. In particular, we have a^t = 0 if the HAP selects Downlink mode in slot t; conversely, we have a^t = 1 for Uplink mode. * Transition probability 𝒯. We consider a model-free approach, meaning the transition probability is unknown. * Reward ℛ. The reward r(a^t) is defined as the weighted sum-throughput when taking action a^t in slot t, which is calculated according to Eq. (<ref>). Our aim is to determine a policy that maximizes the weighted sum-throughput R, see Eq. (<ref>). The said policy selects an action a^t for a given state 𝐬^t that maximizes the expected weighted sum-throughput. Define a policy as π(𝐬^t), where π: 𝒮→𝒜. Formally, the problem is max_πlim_T →∞1/T𝔼[ R ]. §.§ Q-Learning The HAP/agent uses Q-learning to find the optimal policy by maintaining a Q-table, which contains so-called Q-values indicating the expected reward of each state-action pair. Given the Q-table, the HAP selects an action according to the state of the system in each time slot. Then in each time slot, it determines the optimal system parameters, such as the power splitting ratios, power coefficients and uplink decoding order, using linear programs, which will be presented in Sections <ref> and <ref>; a sketch of the overall agent loop is given below. Note that in practice, the HAP/agent only requires the channel gain to/from each device and the energy level of devices. Both pieces of information can be collected at the start of each time slot.
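A minimal tabular sketch of this agent follows; the inner-loop update is the Bellman update stated next. The environment hooks solve_dlp and solve_ulp stand in for the two linear programs of the later subsections and are assumptions of this illustration, as are the hyperparameter values.

```python
import random
from collections import defaultdict

def q_learning(env, episodes, T, alpha=0.1, gamma=0.9, eps_decay=1e-5):
    """Tabular Q-learning over the two-action space {0: Downlink, 1: Uplink}."""
    Q = defaultdict(lambda: [0.0, 0.0])   # Q[state] -> [Q(s,0), Q(s,1)]
    eps = 1.0
    for _ in range(episodes):
        s = env.reset()                   # state: tuple of quantized powers
        for _ in range(T):
            # epsilon-greedy action selection
            a = random.randrange(2) if random.random() < eps \
                else max((0, 1), key=lambda x: Q[s][x])
            # slot reward comes from DLP (a == 0) or ULP (a == 1)
            r, s_next = env.solve_dlp(s) if a == 0 else env.solve_ulp(s)
            # equivalent to Q = (1-alpha)*Q + alpha*(r + gamma*max Q')
            Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
            s = s_next
        eps = max(0.0, eps - eps_decay)   # decay epsilon toward zero
    return Q
```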
Let Q(𝐬^t,a^t) be the said Q-table, which contains Q-values that indicate the expected discounted reward for the state-action pair (𝐬^t,a^t). Specifically, the HAP takes an action a^t in state 𝐬^t and observes the resulting reward, where it then updates its Q-table according to Bellman's equation <cit.>: Q(𝐬^t,a^t) = (1-α) Q(𝐬^t,a^t) + α(r(𝐬^t,a^t)+ γmax_a^t+1 Q(𝐬^t+1,a^t+1)), where γ∈ [0,1] is a discount factor, α∈ [0,1] is a learning rate, and r(𝐬^t,a^t) represents the immediate reward for the state-action pair (𝐬^t,a^t). We now present the main body of the Q-Learning approach, see Algorithm <ref>. The HAP first initializes its Q-table arbitrarily. Define k ∈{1,2,…,K} as the index of a training episode, where K here denotes the number of training episodes. In each training episode, the HAP selects an action using ϵ-greedy <cit.>. Specifically, in line <ref>-<ref>, the HAP executes a random action with probability ϵ, and selects the action a^t with the highest Q(𝐬^t,a^t) with probability 1-ϵ. In addition, the value of ϵ decays from one to zero over time, see line <ref>. If the HAP selects a^t = 0, see line <ref>-<ref>, it then determines the power splitting ratio ρ^t_n and the power coefficients μ^t_c and μ^t_n using DLP as outlined in Section <ref>. When the HAP selects a^t = 1, see line <ref>-<ref>, it determines the decoding order and the transmit power p^t_n,j of each transmission as per ULP, see Section <ref>. After that, as shown in line <ref>, the HAP updates its Q-table using the observed reward according to Eq. (<ref>). §.§ Downlink Linear Program (DLP) In each Downlink slot, the HAP solves an LP to determine the power splitting ratio ρ^t_n and the power coefficients μ^t_c and μ^t_n that maximize the number of Downlink messages in the current slot. The LP is formulated as (DLP): max_[ρ^t_n,μ^t_c,μ^t_n] ∑_n=1^N C^t_n + D^t_n s.t. (<ref>),(<ref>),(<ref>),(<ref>),(<ref>),(<ref>)-(<ref>). In constraints (<ref>) and (<ref>), we have Ĭ^t=1, which sets the current slot to Downlink mode. §.§ Uplink Linear Program (ULP) In each Uplink slot, the HAP solves an LP to decide the decoding order and the transmit power p^t_n,j of each transmission that yields the maximum number of uplink transmissions. The LP is formulated as (ULP): max_[p^t_n,j,O^t_k] ∑_n=1^N∑_j=1^2 U^t_n,j s.t. (<ref>),(<ref>),(<ref>)-(<ref>),(<ref>)-(<ref>). In constraints (<ref>), (<ref>), and (<ref>), we set Î^t=1 to indicate slot t is in Uplink mode. § EVALUATION Our evaluation uses a simulator written in Python and a computer with an Intel i7-7700 CPU @3.6GHz with 16 GB of memory. We solve MILP (P2) and (P3) using Gurobi. The HAP has a transmit power of 2 W <cit.>. Devices are placed 5 m away from the HAP; at this distance, devices are able to harvest energy from the HAP using the harvester from Powercast <cit.>. The SINR threshold is 2 dB. The RF-energy harvester at devices, as per <cit.>, has a maximum harvested power of M=0.024 W, with parameter values a=150 and b=0.014. The battery size of each device is set to 2500 mAh <cit.>. The packet size is fixed to 250 Kb <cit.>. The circuit noise power is 10^-13 W/Hz. The channel bandwidth is 20 MHz, which is used by WiFi. We note that if the weight of downlink transmissions is higher than 0.5, the HAP will always select Downlink mode. As a result, the weight of the Downlink sum-throughput in Eq. (<ref>), i.e., 1-δ, is set to 0.4. Further, we do not consider wireless errors. Hence, our results are upper bounds on the number of messages transmitted/received by devices/HAP. Table <ref> lists our parameter values.
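Since the MILPs are solved with Gurobi, the McCormick relaxation used above can be written down in a few lines of gurobipy. The fragment below is only meant to show the envelope constraints for a single bilinear term ω = μ_c·ρ with bounds [0,1]×[0,1], not the full program; all variable names are illustrative.

```python
import gurobipy as gp

m = gp.Model("mccormick_demo")
mu_c = m.addVar(lb=0.0, ub=1.0, name="mu_c")    # power coefficient
rho = m.addVar(lb=0.0, ub=1.0, name="rho")      # power splitting ratio
omega = m.addVar(lb=0.0, ub=1.0, name="omega")  # stands in for mu_c * rho

# McCormick envelope of omega = mu_c * rho over the unit box.
m.addConstr(omega >= mu_c + rho - 1.0)
m.addConstr(omega <= mu_c)
m.addConstr(omega <= rho)    # omega >= 0 holds via its lower bound

m.setObjective(omega, gp.GRB.MAXIMIZE)
m.optimize()
print(mu_c.X, rho.X, omega.X)   # expect 1.0, 1.0, 1.0
```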
We benchmark against four other solutions: * Optimal solution or MILP. The HAP uses non-causal channel gain information to solve MILP (<ref>). Hence, it obtains the optimal mode for each slot, as well as power splitting ratios, power coefficients in Downlink slots, and device transmit power in Uplink slots. * MILP with Reduced Order Set (MILP-Re). The HAP solves MILP (<ref>) to obtain the aforementioned quantities. * Random (Rand). The HAP randomly selects a mode for each slot. Then in Downlink slots, the HAP uses DLP. As for Uplink slots, it uses ULP. * Time Division Duplex (TDD). The HAP alternates between a Downlink and Uplink slot and applies DLP and ULP accordingly. We conduct 1000 runs for each experiment, where each run contains 10 slots. In each experiment, we record the average weighted sum-throughput as defined in Eq. (<ref>). §.§ Decay factor First, we study the decay factor δ used for ϵ in Q-Learning (not to be confused with the fairness weight in Eq. (<ref>)). Referring to Fig. <ref>, for smaller δ values, Q-Learning achieves a higher weighted sum-throughput after convergence. The weighted sum-throughput of Q-Learning with δ = 0.00001 after convergence is 1.6% higher than that with δ = 0.00003. In addition, the convergence time of Q-Learning increases when δ decreases. Specifically, the Q-Learning agent with δ=0.00003 converges after 40000 runs, whereas the agent with δ=0.00001 requires approximately 105000 runs before it converges. This is because ϵ decreases to zero faster when a higher δ is used, where the agent at the HAP has less chance to explore actions before convergence. On the other hand, when a smaller δ is used, the value of ϵ decays slower, where the Q-Learning agent has more chance to explore different modes. §.§ Distance We first vary the distance between the HAP and each device from 4 m to 9 m. As shown in Fig. <ref>, the weighted sum-throughput of all approaches decreases. This is because the channel gain becomes smaller when devices are placed further away from the HAP according to Eq. (<ref>). As a result, devices harvest less energy in each Downlink slot, meaning devices require more Downlink slots in order to harvest sufficient energy for Uplink transmissions. The same reason applies to Q-Learning, where the energy harvested by devices decreases when the distance between the HAP and each device increases, which results in fewer uplink slots. As per Fig. <ref>, the weighted sum-throughput of MILP is on average around 1.4% higher than that of MILP-Re. This is because when the HAP uses MILP-Re, it selects the decoding order from a reduced order set. MILP-Re may select an order that results in fewer uplink transmissions or has a higher energy consumption in Uplink slots. From Fig. <ref>, the weighted sum-throughput of MILP is higher than that of Q-Learning, where the maximum gap occurs when devices are placed 6 m away from the HAP. This is due to the fact that MILP has non-causal channel information for selecting a mode, whereas Q-Learning has channel information of current and past slots only. Moreover, Q-Learning may select Uplink mode in slots that should be in Downlink mode to ensure more messages in future Uplink slots. As a result, Q-Learning achieves fewer uplink transmissions than MILP when the distance between the HAP and each device is 6 m. The weighted sum-throughput of Q-Learning is on average 11.9% higher than that of TDD when the distance between the HAP and each device increases from 4 to 9 m. This is reasonable because Q-Learning learns when to select Downlink mode using the channel condition and the residual energy at devices.
However, the mode is pre-determined for TDD, where devices may accumulate too much energy with no chance for uplink transmissions, or lack energy for uplink transmissions. §.§ Maximum HAP transmit power We vary the maximum transmit power of the HAP from 0.5 to 1.5 W. As shown in Fig. <ref>, the weighted sum-throughput of all approaches increases. The weighted sum-throughput of MILP and Q-Learning increases by around 8.8% and 9.4%, respectively, when the maximum transmit power of the HAP increases from 0.5 to 1.5 W. This is because devices harvest more energy in each Downlink slot if the HAP uses a higher transmit power. Thus, devices are more likely to have sufficient energy in uplink slots, which results in more uplink transmissions. As shown in Fig. <ref>, the maximum gap between MILP and Q-Learning occurs when the HAP uses a maximum transmit power of 0.9 W; the weighted sum-throughput of MILP is around 9% higher than that of Q-Learning. This is because MILP utilizes non-causal channel information to decide the mode of slots. By contrast, Q-Learning only uses channel information of current and past slots. As a result, the number of uplink transmissions attained by MILP is higher than that of Q-Learning. As per Fig. <ref>, the weighted sum-throughput of Q-Learning is on average 10.7% and 21.2% higher than that of TDD and Rand, respectively. This is because Q-Learning determines the mode according to the maximum received power of uplink transmissions of devices. When the residual energy at devices is high, the HAP will select more Uplink slots. However, the mode is pre-determined for TDD and random for Rand, where devices may have insufficient chance or lack energy for uplink transmissions. Thus, the HAP with Q-Learning receives more uplink transmissions than with TDD and Rand. §.§ SIC threshold We vary the SIC decoding threshold from 2 to 7 (in dB). As shown in Fig. <ref>, the weighted sum-throughput of MILP decreases by 47.7% when the SIC decoding threshold increases from 2 to 7. This is due to the higher transmit power required for successful SIC decoding, which results in devices consuming more energy. The same reason applies to Q-Learning. As shown in Fig. <ref>, the weighted sum-throughput of Q-Learning decreases by 41.8% when the SIC decoding threshold increases from 2 to 7. As shown in Fig. <ref>, the weighted sum-throughput of MILP is 7.2% higher than that of Q-Learning when the SIC decoding threshold is two, and remains approximately the same when the threshold increases to seven. This is reasonable because MILP uses non-causal channel information to determine a mode, whereas Q-Learning utilizes channel information of current and past slots. When using Q-Learning, the HAP may select Uplink mode instead of Downlink to ensure more uplink transmissions in future time slots. On the other hand, the weighted sum-throughput of Q-Learning is around 13.9% higher on average than that of Rand when the SIC decoding threshold increases from two to seven. This is because Q-Learning learns to select the optimal mode for a given system state. However, the mode in each slot is random for Rand, which may cause devices to accumulate an excessive amount of energy, or to lack opportunities for uplink transmissions. For these reasons, Q-Learning has better performance than TDD and Rand. §.§ Device Numbers We vary the number of devices from two to twelve. Recall that for ULP, see Eq. (<ref>), there are (2N)! decoding orders in each uplink slot. Hence, ULP becomes computationally intractable for a large number of devices.
Thus, in this section, we reduce the computational complexity by fixing the decoding order as per the channel gains of devices. Specifically, messages sent by a device with a higher channel gain are decoded earlier. From Fig. <ref>, the weighted sum-throughput of all approaches increases. This is because more devices are able to transmit in each uplink slot, which results in more uplink transmissions. According to Fig. <ref>, the weighted sum-throughput of MILP is higher than that of Q-Learning. The reason is that MILP has non-causal channel information. It is possible for Q-Learning to select Uplink mode for a slot when in fact Downlink mode is more suitable in order to save energy for future slots so that more devices can transmit messages in Uplink slots. As a result, MILP selects more downlink slots than Q-Learning. The weighted sum-throughput of Q-Learning is on average 6.4% and 16.6% higher than that of TDD and Rand, respectively, when the number of devices increases from two to twelve. This is because Q-Learning learns when to select Downlink mode using the channel condition and the residual energy of devices. However, the mode is pre-determined for TDD and random for Rand, where devices may accumulate too much energy with no chance for uplink transmissions, or they lack energy for uplink transmissions. § CONCLUSION This paper presents the first study on an RSMA-based IoT network that uses a mode-based time structure. It addresses a novel problem that requires a solution to determine whether a time slot is used for downlink or uplink transmissions. It outlines the first MILP that can be used to decide the optimal solution, namely the mode in each slot, the transmit power of each packet, the power splitting ratio of each device, and the decoding order in uplink slots. Further, it presents the first application of reinforcement learning to determine the mode of each slot. The results show that the amount of data transmitted in the network is affected by the HAP's transmit power, the weight of downlink transmissions, the distance between the HAP and each device, and the number of devices. The reinforcement learning approach achieves 90% of the optimal number of packet transmissions attained by the MILP. Moreover, it achieves 25% more transmitted packets than the benchmark approaches. There are many possible future works. The first is to apply and study advanced reinforcement learning methods for our problem. The second is to develop more efficient methods to solve MINLP (𝐏1).
http://arxiv.org/abs/2407.01691v1
20240701180401
Resummation of Glauber Phases in Non-Global LHC Observables for Large $N_c$
[ "Philipp Böer", "Patrick Hager", "Matthias Neubert", "Michel Stillger", "Xiaofeng Xu" ]
hep-ph
[ "hep-ph", "hep-th" ]
MITP-24-053 July 1, 2024 Philipp Böer,^a Patrick Hager,^a Matthias Neubert,^a,b Michel Stillger^a and Xiaofeng Xu^a ^aPRISMA^+ Cluster of Excellence & Mainz Institute for Theoretical Physics Johannes Gutenberg University, Staudingerweg 9, 55128 Mainz, Germany ^bDepartment of Physics & LEPP, Cornell University, Ithaca, NY 14853, U.S.A. § ABSTRACT The Glauber series for non-global jet observables at hadron colliders simultaneously includes the super-leading logarithms alongside an arbitrary number of Glauber phases. Building on the formalism of <cit.>, it is shown that the leading terms in this series for large N_c can be resummed in closed form in renormalization-group improved perturbation theory. This remarkable observation suggests that large-N_c methods might also be helpful to study other aspects of non-global logarithms at hadron colliders, and to combine our analytic results with amplitude-level parton showers. E-mail: pboeer@uni-mainz.de, pahager@uni-mainz.de, matthias.neubert@uni-mainz.de, m.stillger@uni-mainz.de, xiaxu@uni-mainz.de § INTRODUCTION Precision studies of jet observables at high-energy hadron colliders play an important role in testing the Standard Model of particle physics, because jet processes closely resemble the underlying hard-scattering dynamics. However, jet cross sections are also very challenging to calculate theoretically. The distinction of the highly collimated energetic particles constituting the jets from the soft radiation into the gaps between jets by imposing veto criteria in certain phase-space regions makes these observables “non-global”. Such vetoes introduce additional scales in the problem, which in turn give rise to large logarithmic corrections, whose structure can be very complicated. So-called “non-global logarithms” (NGLs) arise from soft-gluon radiation off secondary emissions inside the jets <cit.>. Due to their intricate pattern and the complexity of the color algebra involved, the higher-order structure of these logarithms is highly non-trivial. At e^+ e^- colliders, the resummation of the leading NGLs in the large-N_c limit was accomplished by solving a non-linear integro-differential equation derived in <cit.>. The resummation at finite N_c has been studied in <cit.>, and resummations at next-to-leading logarithmic accuracy have been achieved in <cit.>. Due to the breakdown of color coherence from soft Glauber-gluon exchanges between the initial-state partons in the hard scattering process, the theory of non-global observables at hadron colliders is yet more complicated <cit.>. This effect gives rise to so-called “super-leading logarithms” (SLLs) <cit.> – a class of double-logarithmic corrections at higher orders of perturbation theory of the form (iπ)^2α_s^n+3 L^2n+3 with L=ln(Q/Q_0), where Q is of order the partonic center-of-mass energy, while the low scale Q_0 is the characteristic scale of the jet veto. The resummation of SLLs for arbitrary partonic scattering processes has been accomplished in <cit.>. Treating the Glauber phase (iπ) resulting from the imaginary part of the large logarithm ln(-Q/Q_0) as a large parameter, one can generalize the SLLs to the “Glauber series” of logarithmically-enhanced corrections of the form (iπ)^2ℓα_s^n+1+2ℓ L^2n+1+2ℓ, where ℓ∈N denotes the number of pairs of Glauber exchanges.
The resummation of the terms with ℓ>1 in terms of multiple infinite sums has been studied in <cit.>. In recent work <cit.>, we have devised a strategy to systematically treat the resummation of the terms in the Glauber series in renormalization-group (RG) improved perturbation theory, i.e. including the scale dependence of the running coupling α_s(μ) in the solution of the evolution equations. In this formalism, each Glauber-gluon insertion leads to an integral over a scale variable, which needs to be performed numerically. The present work is a direct sequel to <cit.>: we extend this formalism and derive quasi-analytic expressions for the large-N_c limit of the Glauber series. As this series is subleading in N_c, these results correspond to a resummation of terms suppressed by 1/N_c^2 compared to the leading contribution of the respective cross sections. We find that in this limit the structure of the Glauber series simplifies significantly and allows for an all-order resummation in terms of an expression involving at most four integrals. The analytic expressions we obtain will help to support and validate ongoing efforts to develop amplitude-level parton showers including quantum interference effects, see e.g. <cit.>. The key insight of <cit.> is that at leading-logarithmic accuracy, the Glauber series can be represented as σ_2→ M^ SLL+G(Q_0) = ∑_partonic channels∫dξ_1 ∫dξ_2 f_1(ξ_1,μ_s) f_2(ξ_2,μ_s) σ̂_2→ M^ SLL+G(ξ_1,ξ_2,μ_s) , where σ̂_2→ M^ SLL+G(ξ_1,ξ_2,μ_s) = ∑_l=1^∞ℋ_2→ M(ξ_1,ξ_2,μ_h) X^T U_ SLL^(l)(μ_h,μ_s) ς is the partonic 2→ M scattering cross section, f_i(ξ_i,μ_s) denote the parton distribution functions, and ℋ_2→ M(ξ_1,ξ_2,μ_h) are the hard functions in the factorization formula for 2→ M jet processes <cit.>. These functions are evaluated at the soft and hard scales, μ_s ∼ Q_0 and μ_h ∼√(ŝ), where √(ŝ) is the partonic center-of-mass energy. Moreover, X denotes a vector containing the basis structures in color space derived in <cit.>, see e.g. (<ref>) for quark-initiated processes, and ς^T=(1,0,…,0) with a single non-zero entry. The matrix U_ SLL^(l)(μ_h,μ_s) is the corresponding representation of the RG-evolution operator, which evolves the hard function from μ_h to μ_s, thereby resumming large logarithms L_s=ln(μ_h/μ_s) in the scale ratio alongside l Glauber phases. It is given by U_ SLL^(l)(μ_h,μ_s) = ( iπ)^l N_c^l-1 2^l+3/β_0^l+1∫_1^x_sdx_l/x_l lnx_s/x_l∫_1^x_ldx_l-1/x_l-1 …∫_1^x_2dx_1/x_1 ×U_c(μ_h,μ_1) [ ∏_i=1^l-1 V^G U_c(μ_i,μ_i+1) ] , where x_i=α_s(μ_i)/α_s(μ_h), and U_c(μ_i,μ_j) = exp[ Γ^c N_c ∫_μ_j^μ_idμ/μ γ_ cusp(α_s(μ)) lnμ^2/μ_h^2] are matrix-valued Sudakov operators. In color and multiplicity space, the evolution equation for the hard function contains a logarithmically-enhanced soft-collinear anomalous dimension Γ^c and the Glauber operator V^G <cit.>, whose matrix representations in the color basis are denoted by Γ^c and V^G, respectively. As is evident from (<ref>), l counts the number of Glauber-operator insertions. After diagonalization of the matrices Γ^c, the matrix exponential (<ref>) can be expressed through the scalar functions U_c(v; μ_i,μ_j) = exp[ v N_c ∫_μ_j^μ_idμ/μ γ_ cusp(α_s(μ)) lnμ^2/μ_h^2] , where v denotes one of the eigenvalues of Γ^c. § QUARK-INITIATED PROCESSES We begin by presenting the resummation of the Glauber series in the large-N_c limit for quark-initiated scattering processes.
In this case, the associated color basis consists of the five elements X_1 = ∑_j>2 J_j if^abc T_1^aT_2^bT_j^c , X_4 = 1/N_c J_12 T_1·T_2 , X_2 = ∑_j>2 J_j (σ_1-σ_2) d^abc T_1^aT_2^bT_j^c , X_5 = J_12 1 , X_3 = 1/N_c∑_j>2 J_j (T_1 - T_2)·T_j , which contain the angular integrals J_j = ∫dΩ(n_k)/4π (W_1j^k - W_2j^k) Θ_ veto(n_k) , J_12 = ∫dΩ(n_k)/4π W_12^k Θ_ veto(n_k) . The soft dipole is defined as W_ij^k = n_i· n_j/n_i· n_k n_j· n_k , with light-like vectors n_i=p_i/E_i for each parton. The vector n_k is restricted to the phase-space region where the jet veto is applied. In addition, σ_i=-1 (σ_i=1) if parton i is an (anti)-quark. The factors N_c are chosen such that the contribution of each color structure is at most of 𝒪(1) in the large-N_c limit. The matrix representations V^G and U_c(μ_i,μ_j) are given in <cit.>. For V^G it reads V^G = [ 0 -2δ_qq̅ N_c^2-4/N_c^2 4/N_c^2 0 0; - 1/2 0 0 0 0; 1 0 0 0 0; 0 0 0 0 0; 0 0 0 0 0 ] , where δ_qq̅=1/4(σ_1-σ_2)^2 equals 1 for the qq̅' initial states, and 0 for q q' or q̅q̅' initial states. The matrix exponential U_c(μ_i,μ_j) in (<ref>) takes the form U_c(μ_i,μ_j) = [ U_c(1; μ_i,μ_j) 0 0 0 0; 0 U_c(1; μ_i,μ_j) 0 0 0; 0 0 U_c(1/2; μ_i,μ_j) 0 0; 0 0 2 [ U_c(1/2; μ_i,μ_j) - U_c(1; μ_i,μ_j) ] U_c(1; μ_i,μ_j) 0; 0 0 2 C_F/N_c[ 1 - U_c(1/2; μ_i,μ_j) ] 0 1 ] . Using these expressions, the first four terms U_ SLL^(l)(μ_h,μ_s) have been derived in <cit.>. For odd values of l, for example, one finds U_ SLL^(1)(μ_h,μ_s) ς = 16iπ/β_0^2∫_1^x_sdx_1/x_1 lnx_s/x_1 U_c(1; μ_h,μ_1) ς , U_ SLL^(3)(μ_h,μ_s) ς = - 64 iπ^3/β_0^4 N_c^2 ∫_1^x_sdx_3/x_3 lnx_s/x_3∫_1^x_3dx_2/x_2∫_1^x_2dx_1/x_1 ×[ K_12 U_c(1;μ_h,μ_3) + 4/N_c^2 U_c(1,1/2,1; μ_h,μ_1,μ_2,μ_3) ] ς , with K_12 = N_c^2-4/N_c^2 δ_qq̅ = δ_qq̅ + 𝒪(1/N_c^2) , and we have used the short-hand notation U_c(v^(1),…,v^(l); μ_h,μ_1,…,μ_l) ≡ U_c(v^(1); μ_h,μ_1) U_c(v^(2); μ_1,μ_2) … U_c(v^(l); μ_l-1,μ_l) for the product of evolution factors. In the large-N_c limit, only the term proportional to K_12=δ_qq̅ in the last line of (<ref>) prevails. These terms are associated with significantly simpler evolution factors, independent of x_1 and x_2, such that these integrations can be easily performed. One finds U_ SLL^(3)(μ_h,μ_s) ς→16 iπ/β_0^2 ∫_1^x_sdx_3/x_3 lnx_s/x_3 δ_qq̅ 1/2!(2 iπ N_c/β_0 ln x_3)^2 U_c(1;μ_h,μ_3) ς , which has a similar structure as U_ SLL^(1)(μ_h,μ_s) up to the additional factor in the integrand. This feature persists for higher l, and, therefore, all higher-order contributions in the Glauber series are absent for qq and q̅q̅ scattering. For large N_c, the structure of U_c(μ_i,μ_j) remains unchanged but the (1,3) entry of V^G (<ref>) vanishes, which leads to a substantial simplification. In this case, the relevant block in the Glauber series (<ref>) fulfills V^G U_c(μ_i,μ_j) = V^G U_c(1;μ_i,μ_j) , allowing us to rewrite U_ SLL^(l)(μ_h,μ_s) ς = (iπ)^l N_c^l-1 2^l+3/β_0^l+1∫_1^x_sdx_l/x_l lnx_s/x_l∫_1^x_ldx_l-1/x_l-1 …∫_1^x_2dx_1/x_1 ×U_c(μ_h,μ_1) (V^G )^l-1 U_c(1;μ_1,μ_l) ς . In the case of odd l=2ℓ-1, the remaining matrix structure simplifies as ς is an eigenvector of (V^G)^2 with eigenvalue δ_qq̅. Exploiting further that U_c(μ_h,μ_1) ς=U_c(1;μ_h,μ_1) ς, one can trivially perform the x_i-integrals for i<l, yielding U_ SLL^(2ℓ-1)(μ_h,μ_s) ς = 8 (2iπ)^2ℓ-1/β_0^2ℓ N_c^2ℓ-2/(2ℓ-2)! [ δ_1ℓ + δ_qq̅ ( 1- δ_1ℓ) ] ×∫_1^x_sdx_l/x_l lnx_s/x_l ln^2ℓ-2x_l U_c(1; μ_h,μ_l) ς . 
While for even l=2ℓ the vector ς is not an eigenvector of (V^G)^2ℓ-1, one can use (V^G)^2ℓ-1 ς = δ_qq̅^ℓ-1 V^G ς and perform the integrals over the x_i variables for 1<i<l to obtain U_ SLL^(2ℓ)(μ_h,μ_s) ς = 8 (2iπ)^2ℓ/β_0^2ℓ+1 N_c^2ℓ-1/(2ℓ-2)! [ δ_1ℓ + δ_qq̅ ( 1- δ_1ℓ) ] ∫_1^x_sdx_l/x_l lnx_s/x_l∫_1^x_ldx_1/x_1 ln^2ℓ-2x_l/x_1 ×[ 0; - 1/2 U_c(1;μ_h,μ_l); U_c(1/2,1;μ_h,μ_1,μ_l); 2 [ U_c(1/2,1;μ_h,μ_1,μ_l) - U_c(1;μ_h,μ_l) ]; 2 C_F/N_c[ U_c(1;μ_1,μ_l) - U_c(1/2,1;μ_h,μ_1,μ_l) ] ] . Note that while the factor 2 C_F/N_c=1-1/N_c^2 in the last component of this expression should be replaced by 1 in the strict large-N_c limit, we prefer to keep it in its original form, as this ensures that the terms with l=2, i.e. the SLLs, are reproduced exactly. It is now straightforward to sum the Glauber series in the large-N_c limit. For odd values of l, one finds ∑_ℓ=1^∞U_ SLL^(2ℓ-1)(μ_h,μ_s) ς = 16 iπ/β_0^2∫_1^x_sdx_2/x_2 lnx_s/x_2 U_c(1; μ_h,μ_2) [ 1 - 2δ_qq̅ sin^2( π N_c/β_0 ln x_2 ) ] ς , where the right-hand side is proportional to the vector ς, i.e. only its first component is non-zero. The sum over even values of l yields a similar result with an additional non-trivial scale integral, as evident from (<ref>). The expression reads ∑_ℓ=1^∞U_ SLL^(2ℓ)(μ_h,μ_s) ς = 16π/β_0^2 2π N_c/β_0∫_1^x_sdx_2/x_2 lnx_s/x_2∫_1^x_2dx_1/x_1[ 1 - 2δ_qq̅ sin^2( π N_c/β_0 lnx_2/x_1) ] ×[ 0; 1/2 U_c(1;μ_h,μ_2); -U_c(1/2,1;μ_h,μ_1,μ_2); 2 [ U_c(1;μ_h,μ_2) - U_c(1/2,1;μ_h,μ_1,μ_2) ]; 2 C_F/N_c[ U_c(1/2,1;μ_h,μ_1,μ_2) - U_c(1;μ_1,μ_2) ] ] . Combining (<ref>) and (<ref>) yields the resummed Glauber series in RG-improved perturbation theory, containing at most two scale integrals. In these expressions, the 1 inside the square brackets corresponds to the SLLs, whereas the term proportional to δ_qq̅ accounts for the effects of higher-order Glauber phases. It is therefore evident that the latter effects always reduce the contributions of the SLLs, albeit typically by a small amount, see Section <ref>. Note that for QCD processes with tree-level hard functions, only an even number of Glauber operator insertions contribute, as the cross section must be real-valued. In general, however, the full Glauber series, containing both even and odd l values, is relevant if the hard function features complex phases with different color structures, e.g. for cross sections involving electroweak gauge bosons <cit.>. §.§.§ Fixed-coupling results and asymptotic behavior To determine the asymptotic behavior of the resummed Glauber series in the limit where α_s L_s^2 ≫ 1, one can work in the approximation of a fixed coupling α_s≡α_s(μ̅), with μ̅∈(μ_s,μ_h), as the running is a single-logarithmic effect. In this case, one can use x_i = α_s(μ_i)/α_s(μ_h)≈ 1 + β_0α_s/2π L_i , where L_i=ln(μ_h/μ_i), and derive fixed-coupling versions of (<ref>) and (<ref>). Using also that (<ref>) evaluates in this case to U_c(v; μ_i,μ_j) ≡exp[ v w ( z_i^2 - z_j^2 ) ] , where L_i≡ z_i L_s, w=N_cα_s(μ̅)/πL_s^2, it is possible to determine the asymptotic behavior by applying the same techniques as in <cit.>. For odd l, one finds after rescaling y_i=√(w) z_i and introducing w_π = N_cα_s(μ̅)/ππ^2 ∑_ℓ=1^∞U_ SLL^(2ℓ-1)(μ_h,μ_s) ς = 4iα_s L_s/π N_c√(w_π)∫_0^√(w)dy_2 (1-y_2/√(w)) e^-y_2^2[ 1 - 2δ_qq̅ sin^2( √(w_π)/2 y_2 ) ] ς = 2iα_s L_s/π N_c √(w_π){√(π)[1 - δ_qq̅(1-e^-w_π/4)] - 1/√(w)[1 - δ_qq̅ √(w_π) F(√(w_π)/2)] + 𝒪(e^-w) } ς , where the second relation is valid in the limit w ≫ 1. The Dawson function is defined as F(z) = e^-z^2∫_0^zdx e^x^2 = √(π)/2 e^-z^2 erfi(z) . 
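As a cross-check, the finite-w integral in the last expression and its stated w ≫ 1 form can be compared numerically. The short scipy sketch below does so with the overall prefactor stripped off, for illustrative values of w, w_π, and δ_qq̅; these values are placeholders chosen for the demonstration.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import dawsn

def glauber_integral(w, w_pi, delta):
    """Finite-w integral from the odd-l sum (overall prefactor stripped)."""
    f = lambda y: (1 - y / np.sqrt(w)) * np.exp(-y**2) \
        * (1 - 2 * delta * np.sin(np.sqrt(w_pi) / 2 * y) ** 2)
    val, _ = quad(f, 0.0, np.sqrt(w))
    return val

def asymptotic(w, w_pi, delta):
    """Stated w >> 1 expansion, valid up to exponentially small terms."""
    return 0.5 * (np.sqrt(np.pi) * (1 - delta * (1 - np.exp(-w_pi / 4)))
                  - (1 - delta * np.sqrt(w_pi) * dawsn(np.sqrt(w_pi) / 2))
                  / np.sqrt(w))

w, w_pi, delta = 25.0, 3.0, 1.0   # illustrative values; delta = delta_{q qbar}
print(glauber_integral(w, w_pi, delta), asymptotic(w, w_pi, delta))
```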
To obtain the asymptotic result, it suffices to replace the upper limit of the y_2 integral by infinity. For even l, the situation is more complicated, and one finds ∑_ℓ=1^∞U_ SLL^(2ℓ)(μ_h,μ_s) ς = - 4α_s L_s/π N_c w_π∫_0^√(w)dy_2 (1-y_2/√(w)) e^-y_2^2∫_0^y_2dy_1 ×[ 1 - 2δ_qq̅ sin^2( √(w_π)/2 (y_2-y_1) ) ] [ 0; - 1/2; e^1/2 y_1^2; 2( e^1/2 y_1^2 - 1 ); 2 C_F/N_c( e^y_1^2 - e^1/2 y_1^2) ] . In most cases it is sufficient to replace the upper integration boundary of the y_2 integral with √(w)→∞ to obtain the asymptotic form. The only exception is the fifth component, which due to the factor e^y_1^2 develops a logarithmic dependence on w. Integrating up to infinity gives a divergent result, but the divergence is cancelled by a contribution from the region y_2 ∼√(w). One needs the expansions of the three integrals ∫_0^√(w)dy_2 (1-y_2/√(w)) e^-y_2^2∫_0^y_2dy_1 [ 1 - 2δ_qq̅ sin^2( √(w_π)/2 (y_2-y_1) ) ] = 1/2 - √(π)/4√(w) - δ_qq̅[1/2 - 1/√(w_π) F(√(w_π)/2) - √(π)/4√(w)(1 - e^-w_π/4)] + 𝒪(e^-w) , ∫_0^√(w)dy_2 (1-y_2/√(w)) e^-y_2^2∫_0^y_2dy_1 [ 1 - 2δ_qq̅ sin^2( √(w_π)/2 (y_2-y_1) ) ] e^1/2 y_1^2 = ln(1+√(2))/√(2) - √(π)/2 √(2 w) - δ_qq̅[ ln(1+√(2))/√(2) + i π√(2) e^w_π/4 T(√(w_π),i/√(2)) + π√(w_π)/4√(2w) e^w_π/4( erf(√(w_π)/2) - erf(√(w_π)/√(2)) ) ] + 𝒪(e^-w/2) , and ∫_0^√(w)dy_2 (1-y_2/√(w)) e^-y_2^2∫_0^y_2dy_1 [ 1 - 2δ_qq̅ sin^2( √(w_π)/2 (y_2-y_1) ) ] e^y_1^2 = ( ln(4w) + γ_E - 2 )/4 - δ_qq̅[ w_π/8 _2F_2(1,1;3/2,2;-w_π/4) - π√(w_π)/8√(w) erf(√(w_π)/2) ] + 𝒪(w^-1) , where erf(z) = 2/√(π)∫_0^zdx e^-x^2 is the error function, _2F_2(a_1,a_2;b_1,b_2;z) is a generalized hypergeometric function, and T(x,a) = 1/2π∫_0^adt e^-1/2 x^2(1+t^2)/1+t^2 denotes the Owen T-function. In agreement with the findings of <cit.>, it turns out that the dependence on the variable w of the resummed Glauber series cancels in all but the fifth component, which has a residual logarithmic dependence on w. § GLUONS IN THE INITIAL STATE There are two reasons why the resummation of the Glauber series for quark-initiated processes is particularly simple in the large-N_c limit. First, the relevant building block V^G U_c(μ_i,μ_j) fulfills relation (<ref>), allowing us to perform all except (at most) two scale integrals and to simplify the matrix structure in (<ref>). Second, Γ^c remains diagonalizable in this limit. As soon as gluons are present in the initial state, these properties are lost, complicating the resummation. The color bases for quark-gluon and gluon-initiated processes, containing 14 and 20 elements, respectively, have been constructed in <cit.>. Rescaled with appropriate factors of N_c, see <cit.>, the matrices Γ^c and V^G are of 𝒪(1) in the large-N_c expansion and can be found in Appendices <ref> and <ref>. While in general the matrix Γ^c has up to eleven different eigenvalues for processes with gluons in the initial state, in the large-N_c limit these degenerate to at most five eigenvalues, given by v ∈{ 0 , 1/2 , 1 , 3/2 , 2 } , where the eigenvalue 2 only appears if both initial-state partons are gluons. Curiously, it turns out that in the large-N_c limit all eigenvalues are half-integers ranging from zero up to the sum of the spins of the initial-state partons. It would be interesting to explore whether there is a deeper reason for this coincidence. As for quark-initiated processes, Sudakov factors with the eigenvalues 0 and 1/2 only arise from the left-most insertion of U_c(μ_h,μ_1) in (<ref>). Therefore, the generalization of (<ref>) reads[Note that V^G_2 depends on μ_i,μ_j.
However, as explained below (<ref>), this dependence is irrelevant.] V^G U_c(μ_i,μ_j) = V^G_1 U_c(1;μ_i,μ_j) + V^G_3/2 U_c(3/2;μ_i,μ_j) + V^G_2 U_c(2;μ_i,μ_j) . Since ς is an eigenvector of U_c(μ_i,μ_j) with eigenvalue U_c(1;μ_i,μ_j), irrespective of the nature of the initial-state partons, the coefficient matrices fulfill V^G_3/2 ς=V^G_2 ς=0. For quark-gluon-initiated processes V^G_2=0, because the eigenvalue 2 does not appear. Furthermore, the coefficient matrices satisfy V^G_1 V^G_3/2 = 0 , V^G_3/2 V^G_2 = 0 , V^G_1 V^G_2 = 0 , V^G_2 V^G_1 = 0 , ensuring that only the three scalar functions U_c(v,1;μ_h,μ_1,μ_i), U_c(v,3/2,1;μ_h,μ_1,μ_i,μ_j) and U_c(v,2,3/2,1;μ_h,μ_1,μ_i,μ_j,μ_k) contribute, where the first eigenvalue v is arbitrary. There is one more subtlety that needs to be taken into account when dealing with gluons in the initial state. Since is no longer diagonalizable once the large-N_c limit is taken, its matrix exponential U_c(μ_i,μ_j) with full N_c dependence contains terms of the form N_c [U_c(3N_c+2/2N_c;μ_i,μ_j) - U_c(3N_c-2/2N_c;μ_i,μ_j)] = 2 I_h(μ_i,μ_j) U_c(3/2;μ_i,μ_j) + 𝒪(1/N_c^2) , with the integral I_h(μ_i,μ_j) ≡ N_c ∫_μ_j^μ_idμ/μ γ_ cusp(α_s(μ)) lnμ^2/μ_h^2 , which counts as 𝒪(1) in the large-N_c expansion. This effect arises for all N_c-dependent eigenvalues of , i.e. v_3,4 = 3N_c±2/2N_c , v_5,6 = 2(N_c±1)/N_c , v_9,10 = 2N_c±1/N_c in the notation of <cit.>, where v_3, v_5, v_9 correspond to the plus signs. As a consequence, the coefficient matrix V^G_2 in (<ref>) depends on the scales μ_i and μ_j. Fortunately, this dependence drops out in the only relevant product V^G_2 V^G_3/2. Therefore, we can treat V^G_2 as scale-independent in the following. The coefficient matrix V^G_3/2 does not depend on μ_i and μ_j. Inserting the decomposition (<ref>) in (<ref>), and exploiting the properties of the coefficient matrices, one finds U_ SLL^(l)(μ_h,μ_s) ς = (iπ)^l N_c^l-1 2^l+3/β_0^l+1∫_1^x_sdx_l/x_l lnx_s/x_l∫_1^x_ldx_l-1/x_l-1 …∫_1^x_2dx_1/x_1 U_c(μ_h,μ_1) ×[ (V^G_1 )^l-1 U_c(1;μ_1,μ_l) + ∑_i=2^l-1(V^G_3/2)^i-1(V^G_1 )^l-i U_c(3/2,1;μ_1,μ_i,μ_l) + ∑_j=3^l-1∑_i=2^j-1(V^G_2 )^i-1(V^G_3/2)^j-i(V^G_1 )^l-j U_c(2,3/2,1;μ_1,μ_i,μ_j,μ_l) ] ς . We start with the first term, which looks similar to the quark case, cf. (<ref>). If l≥2, one can trivially perform all integrals except the ones over x_1 and x_l. For l=1 there is only one such integral, and one can use that U_c(μ_h,μ_1) ς=U_c(1;μ_h,μ_1) ς. Combining these two cases, one obtains ∫_1^x_ldx_l-1/x_l-1 …∫_1^x_2dx_1/x_1 = δ_1l + (1-δ_1l) ∫_1^x_ldx_1/x_1 1/(l-2)! ln^l-2x_l/x_1 . For the second and third term in (<ref>), also the integrals over x_i and x_i,x_j, respectively, are non-trivial. The remaining integrals evaluate to ∫_1^x_ldx_l-1/x_l-1 …∫_1^x_2dx_1/x_1 = ∫_1^x_ldx_i/x_i∫_1^x_idx_1/x_1 1/(i-2)! ln^i-2x_i/x_1 1/(l-1-i)! ln^l-1-ix_l/x_i , ∫_1^x_ldx_l-1/x_l-1 …∫_1^x_2dx_1/x_1 = ∫_1^x_ldx_j/x_j∫_1^x_jdx_i/x_i∫_1^x_idx_1/x_1 1/(i-2)! ln^i-2x_i/x_1 ×1/(j-i-1)! ln^j-i-1x_j/x_i 1/(l-1-j)! ln^l-1-jx_l/x_j . Distinguishing the cases of even and odd l, one can calculate the matrix products in (<ref>) and perform the sums over i,j, and l, thereby accomplishing the resummation of the Glauber series. For l=2ℓ-1 odd, one finds for the matrix products (V^G_1 )^2ℓ-2 ς = δ_1ℓ ς + (1-δ_1ℓ) ς_1 , (V^G_3/2)^i-1(V^G_1 )^2ℓ-1-i ς = (1+δ_i,2ℓ-2) ς_3/2 , (V^G_2 )^i-1(V^G_3/2)^j-i(V^G_1 )^2ℓ-1-j ς = (1+δ_j,2ℓ-2) ς_2 . The ς vectors differ between quark-gluon and gluon-initiated processes and are given in Appendices <ref> and <ref>, respectively. 
Combining these results with (<ref>) and (<ref>), the coefficient vector of the basis structures can be expressed as ∑_ℓ=1^∞U_ SLL^(2ℓ-1)(μ_h,μ_s) ς = 16iπ/β_0^2∫_1^x_sdx_4/x_4 lnx_s/x_4{ U_c(1;μ_h,μ_4) ς - 2π N_c/β_0∫_1^x_4dx_1/x_1 U_c(1;μ_1,μ_4) sin(2π N_c/β_0 lnx_4/x_1) U_c(μ_h,μ_1) ς_1 - (2π N_c/β_0)^2∫_1^x_4dx_2/x_2∫_1^x_2dx_1/x_1 U_c(3/2,1;μ_1,μ_2,μ_4) ×[cos(2π N_c/β_0 lnx_4/x_1) + cos(2π N_c/β_0 lnx_2/x_1) ] U_c(μ_h,μ_1) ς_3/2 + (2π N_c/β_0)^3∫_1^x_4dx_3/x_3∫_1^x_3dx_2/x_2∫_1^x_2dx_1/x_1 U_c(2,3/2,1;μ_1,μ_2,μ_3,μ_4) ×[sin(2π N_c/β_0 lnx_4/x_1) + sin(2π N_c/β_0 lnx_3/x_1) ] U_c(μ_h,μ_1) ς_2 } . After carrying out the last products U_c(μ_h,μ_1) ς_ i, it is possible to combine some terms by integrating over x_1. However, these simplifications are not universal and are performed separately for quark-gluon and gluon-initiated processes in Appendices <ref> and <ref>, respectively. For l=2ℓ even, the matrix products evaluate to (V^G_1 )^2ℓ-1 ς = δ_1ℓ ς̃+ (1-δ_1ℓ) ς̃_1 , (V^G_3/2)^i-1(V^G_1 )^2ℓ-i ς = (1+δ_i,2ℓ-1) ς̃_3/2 , (V^G_2 )^i-1(V^G_3/2)^j-i(V^G_1 )^2ℓ-j ς = (1+δ_j,2ℓ-1) ς̃_2 . The ς̃ vectors are also given in Appendices <ref> and <ref>. Performing the sums over i,j, and ℓ in (<ref>) results in ∑_ℓ=1^∞U_ SLL^(2ℓ)(μ_h,μ_s) ς = - 16π/β_0^2∫_1^x_sdx_4/x_4 lnx_s/x_4{2π N_c/β_0∫_1^x_4dx_1/x_1 U_c(1;μ_1,μ_4) U_c(μ_h,μ_1) ς̃ - 2π N_c/β_0∫_1^x_4dx_1/x_1 U_c(1;μ_1,μ_4) 2sin^2(π N_c/β_0 lnx_4/x_1) U_c(μ_h,μ_1) ς̃_1 - (2π N_c/β_0)^2∫_1^x_4dx_2/x_2∫_1^x_2dx_1/x_1 U_c(3/2,1;μ_1,μ_2,μ_4) ×[sin(2π N_c/β_0 lnx_4/x_1) + sin(2π N_c/β_0 lnx_2/x_1) ] U_c(μ_h,μ_1) ς̃_3/2 - (2π N_c/β_0)^3∫_1^x_4dx_3/x_3∫_1^x_3dx_2/x_2∫_1^x_2dx_1/x_1 U_c(2,3/2,1;μ_1,μ_2,μ_3,μ_4) ×[cos(2π N_c/β_0 lnx_4/x_1) + cos(2π N_c/β_0 lnx_3/x_1) ] U_c(μ_h,μ_1) ς̃_2 } . After the products U_c(μ_h,μ_1) ς̃_ i are performed, the result is very lengthy and contains terms proportional to I_h(μ_h,μ_1) by the mechanism described in (<ref>). As the construction is straightforward, we do not show the explicit expressions here. With (<ref>) and (<ref>), we have extended the large-N_c resummation of the Glauber series to processes with gluons in the initial state. It is remarkable that the resummed series can be expressed through simple trigonometric functions and at most fourfold integrals. These two equations, together with (<ref>) and (<ref>), are the main results of this work. As in the previous section, one may now proceed to determine the large-w asymptotics by restricting oneself to the fixed-coupling approximation (<ref>). Here, we omit a detailed discussion due to the complexity of the resulting expressions, which involve four-fold integrals that are no longer amenable to straightforward analytical evaluation. The qualitative dependence on the parameter w, however, is similar to (<ref>) and (<ref>–<ref>), i.e. the leading term is constant up to logarithmic corrections. § NUMERICAL ANALYSIS In Figures <ref> and <ref>, we show the contribution of the resummed Glauber series to the partonic cross section σ̂_2→ M^ SLL+G defined in (<ref>), normalized to the Born cross section σ̂, for a few representative 2→ 2 scattering processes. In each plot, the red line shows the result for the resummed Glauber series obtained in the limit of large N_c, derived in this work, whereas the black line corresponds to the numerical approximation of the analogous results with full N_c dependence. The gray line gives the contributions of the SLLs only (l=2). 
As in our previous work <cit.>, we use a rapidity gap of width Δ Y=2 and fix the hard and soft matching scales at μ_h=Q=1 TeV and μ_s=Q_0, respectively. For practical reasons, the black line in each plot is obtained by summing up the contributions with l=2,4 and 6 Glauber phases. Yet higher-order terms are difficult to obtain numerically (as they involve eightfold and more integrations), but they are almost always negligible on the scale of the plots. To illustrate this fact, we show as a blue line the sum of the terms with l=2 and 4 Glauber phases. For quark-induced scattering processes, depicted in Figure <ref>, the blue lines are almost invisible, as they are covered by the black ones. In scattering processes with gluons in the initial state, as shown in Figure <ref>, the contribution of the l=6 term is not necessarily negligible. The result for the sum of the infinite Glauber series including the full N_c dependence would lie between the black and blue curves (closer to the black line). In Figure <ref>, we focus on qq̅→ gg small-angle scattering (left panel) and qq'→ qq' forward scattering (right panel). In the first case, the resummed result in the limit of large N_c (red) is in almost perfect agreement with the result obtained including the full N_c dependence (black). In the case of quark-quark scattering, on the other hand, the higher-order Glauber terms are absent in the limit of large N_c, and therefore the red line agrees with the l=2 term shown in gray.[Remember that we did not replace 2 C_F/N_c→ 1 in (<ref>), as would be demanded in the strict large-N_c limit, thus capturing the contribution of the SLLs completely.] The difference between the black and red lines corresponds to a sizable 1/N_c^2 correction to the leading term, which results from the (1,3) entry in the expression for the matrix V^G in (<ref>), which equals 4/9 for N_c=3. Figure <ref> shows two examples of processes involving gluons in the initial state. Recall that in this case our treatment employs a strict large-N_c expansion, in which all terms in the Glauber series (also the one with l=2) are subject to higher-order 1/N_c^2 corrections. For forward quark-gluon scattering (left panel) the resummation of the Glauber series in the limit of large N_c produces a result that comes very close to the exact one. The example of gg→ gg small-angle scattering (right panel) is a bit pathological, since in this case high-order Glauber terms with l=4 and 6 still produce sizable contributions (see the blue and black lines), which are only captured partially in our new result derived in the limit of large N_c (red line). For small values of Q_0, the red line overestimates the black one by approximately 50%. The result can be improved by including the subleading terms in the 1/N_c expansion for the SLLs (l=2) but not for the higher-order Glauber contributions. The corresponding result is shown by the dotted red line, which reduces the discrepancy to about 25%. § CONCLUSIONS Based on the formalism of <cit.>, and utilizing the basis of operators in color-space developed in <cit.>, we have presented closed-form expressions for the contributions of the resummed Glauber series to partonic scattering processes in renormalization-group improved perturbation theory, working at leading order in the limit of large N_c. As the SLLs and the Glauber series themselves are subleading in 1/N_c compared to e.g. 
the non-global logarithms, the resummation performed here corresponds to a next-to-leading term in the large-N_c expansion of the respective cross sections. For scattering processes initiated by quarks and/or anti-quarks, our results can be expressed as an at most two-fold integral over products of Sudakov-like evolution factors and basic trigonometric weight functions, as shown in (<ref>) and (<ref>). For processes featuring gluons in the initial state, the respective expressions (<ref>) and (<ref>) are more complicated, containing up to three-fold (four-fold) integrations for quark-gluon-initiated (gluon-initiated) scattering processes. We have performed a numerical comparison of our resummed expressions obtained in the limit of large N_c with (truncated) sums of the Glauber series with full N_c dependence, finding that the analytic expressions derived here provide a good approximation of the full series in most cases and significantly improve the estimate of keeping only the super-leading logarithms. From a purely phenomenological point of view, however, a better approximation is obtained by including the contributions from the terms of the Glauber series with l≤ 4, i.e. by working with a truncated Glauber series, which involves as many integrations as the resummed large-N_c result derived here. More importantly, and from a conceptual point of view, the fact that the large-N_c limit has allowed us to obtain simple closed-form expressions for the infinite Glauber series is a remarkable and unexpected result, since a priori each Glauber-operator insertion entails an additional scale integration. We have shown that almost all of them can be evaluated straightforwardly in the limit of large N_c. This finding suggests that large-N_c methods might also be helpful to study other aspects of non-global logarithms at hadron colliders, such as secondary and yet higher-order soft emissions, which build up the series of non-global logarithms. Moreover, we are confident that the analytic expressions we have derived will be a crucial asset in the further development and validation of amplitude-level parton showers including quantum interference effects. §.§ Acknowledgments This research has received funding from the European Research Council (ERC) under the European Union’s Horizon 2022 Research and Innovation Program (ERC Advanced Grant agreement No.101097780, EFT4jets). Views and opinions expressed are however those of the authors only and do not necessarily reflect those of the European Union or the European Research Council Executive Agency. Neither the European Union nor the granting authority can be held responsible for them. The work reported here was also supported by the Cluster of Excellence Precision Physics, Fundamental Interactions, and Structure of Matter (PRISMA^+, EXC 2118/1) within the German Excellence Strategy (Project-ID 390831469). § QUARK-GLUON-INITIATED PROCESSES In this appendix, we list the relevant objects from the discussion in Section <ref> for quark-gluon-initiated processes and state explicit results for (<ref>) and (<ref>). The matrix representations of the Glauber operator and the logarithmically-enhanced collinear anomalous dimension naturally decompose in the form <cit.> V^G = [ 0 ν̃^(j) 0; ν^(j) 0 0; 0 0 0 ] , Γ^c = [ γ̃^(j) 0 0; 0 γ^(j) 0; 0 λ γ ] .
In the color basis for quark-gluon-initiated processes rescaled with appropriate factors of N_c, see Appendix A of <cit.>, the leading terms of V^G in the large-N_c expansion are ν^(j) = [ -1 0 0; 2 0 0; -1/2 1/2 0; 1/2 -1/2 0; 1 0 -1; 0 0 -1; 0 -1 0; 0 0 -1 ] , ν̃^(j) = [ 0 0 -1/2 1/2 0 0 0 0; 0 0 1/2 -1/2 0 0 0 0; 0 0 0 -1 -1 0 0 0 ] . For Γ^c, the leading terms of the four submatrices are γ̃^(j) = [ 1 0 0; 0 1 0; 0 0 3/2 ] , γ^(j) = [ 1/2 0 0 0 0 0 0 0; 0 1/2 0 0 0 0 0 0; 0 0 1 0 0 0 0 0; 0 0 0 1 0 0 0 0; 0 0 0 0 3/2 0 0 0; 0 0 0 0 -1 3/2 0 0; 0 0 0 0 0 0 1/2 0; 0 0 0 0 0 0 0 3/2 ] , γ = [ 0 0 0; 0 1 0; 0 0 1 ] , λ = [ 1/2 -1/4 0 0 0 0 0 0; 1/2 -1/2 0 0 -2 1/2 0 0; 0 0 0 0 0 0 1/2 -1/2 ] . After carrying out the matrix products in (<ref>), one finds for odd l ς_1 = 1/2 (1, -1, -1, …)^T , ς_3/2 = 1/2 (0, 0, -1, …)^T , where we only show the first three components, which are the ones relevant for an odd number of Glauber-operator insertions, and for even l ς̃ = 1/2 (…, -2, 4, -1, 1, 2, 0, 0, 0, 0, 0, 0)^T , ς̃_1 = 1/2 (…, -1, 2, -1, 1, 1, 0, 1, 0, 0, 0, 0)^T , ς̃_3/2 = 1/2 (…, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0)^T . Here we have dropped the first three components, as they are irrelevant. Since the eigenvalue 2 does not appear for quark-gluon-initiated processes, i.e. V^G_2=0, one finds ς_2=ς̃_2=0. Performing the products U_c(μ_h,μ_1) ς_i in (<ref>), the dependence on μ_1 cancels for some terms, and after evaluating the x_1-integral these can be combined with the terms containing one integration less. The simplified result reads ∑_ℓ=1^∞ U_SLL^(2ℓ-1)(μ_h,μ_s) ς = 16iπ/β_0^2 ∫_1^x_s dx_2/x_2 ln(x_s/x_2) { U_c(1;μ_h,μ_2) ×[cos^2(π N_c/β_0 ln x_2) [ 1; 0; 0; ⋮ ] + sin^2(π N_c/β_0 ln x_2) [ 0; 1; 0; ⋮ ]] + π N_c/β_0 ∫_1^x_2 dx_1/x_1 U_c(3/2,1;μ_h,μ_1,μ_2) ×[sin(2π N_c/β_0 ln x_2) + sin(2π N_c/β_0 ln x_1) ] [ 0; 0; 1; ⋮ ]} . In contrast to the result (<ref>) valid for quark-initiated processes, here we work in the strict large-N_c limit. This proves to be convenient, as there are no terms containing a threefold integral. § GLUON-INITIATED PROCESSES In this appendix, we list the relevant objects from the discussion in Section <ref> for gluon-initiated processes and state explicit results for (<ref>) and (<ref>). In the color basis for gluon-initiated processes rescaled with appropriate factors of N_c, see Appendix A of <cit.>, the submatrices of V^G in (<ref>) in the large-N_c limit are given by ν^(j) = [ 2 0 0 0 0 0 0; 1/2 -1/2 0 0 0 0 0; 1 0 -1 0 0 0 0; 0 0 -1 -1 0 1 0; 0 0 0 0 1 0 -1; 0 0 0 1/2 0 1/2 0; 0 0 1 -1 1 1 -1 ] , ν̃^(j) = [ 0 1 0 0 0 0 0; 0 -1 0 0 0 0 0; 0 -1 -1 0 0 0 0; 0 0 0 0 0 1 0; 0 0 -1 0 0 0 1/2; 0 0 0 0 0 1 0; 0 0 0 0 0 0 -1/2 ] . The ones of Γ^c in this limit are γ̃^(j) = [ 1 0 0 0 0 0 0; 0 1 0 0 0 0 0; 0 0 3/2 0 0 0 0; 0 0 0 2 0 0 0; 0 0 0 -1 2 0 0; 0 0 0 0 0 2 0; 0 0 0 0 0 0 2 ] , γ^(j) = [ 1/2 0 0 0 0 0 0; 0 1 0 0 0 0 0; 0 0 3/2 0 0 0 0; 0 0 -1 3/2 0 1 0; 0 0 0 0 3/2 0 1/2; 0 0 0 0 0 2 0; 0 0 0 0 0 -1 2 ] , γ = [ 0 0 0 0 0 0; 0 1 0 0 0 0; 0 0 1 0 -4 0; 0 0 0 2 -4 0; 0 0 0 0 2 0; 0 0 0 0 0 2 ] , λ = [ -1 0 0 0 0 0 0; -1 0 -4 1 0 -2 0; 0 0 -1 0 -1 0 1; 0 0 -1 1/2 0 0 1; 0 0 1/4 0 0 1/2 0; 0 0 0 0 1/2 0 -3/2 ] . The relevant vectors created by applying powers of V^G_i to ς in (<ref>) are for odd l ς_1 = 1/2 (1, -1, -1, 0, 0, 0, 0, …)^T , ς_3/2 = 1/2 (0, 0, -1, 0, -1, 0, 0, …)^T , ς_2 = 1/4 (0, 0, 0, 0, -1, 0, 1, …)^T , where we show only the first seven components relevant in this case.
For even l, the relevant components 8 to 14 of these vectors are ς̃ = 1/2 (…, 4, 1, 2, 0, 0, 0, 0, …)^T , ς̃_1 = 1/2 (…, 2, 1, 1, 0, 0, 0, 0, …)^T , ς̃_3/2 = 1/2 (…, 0, 0, 1, 1, 0, 0, -1, …)^T , ς̃_2 = 1/2 (…, 0, 0, 0, 0, -1, 0, -1, …)^T . Performing the products U_c(μ_h,μ_1) ς_i in (<ref>), the dependence on μ_1 cancels for some terms, and after evaluating the x_1-integral these can be combined with the terms containing one integration less. The simplified result reads ∑_ℓ=1^∞ U_SLL^(2ℓ-1)(μ_h,μ_s) ς = 16iπ/β_0^2 ∫_1^x_s dx_3/x_3 ln(x_s/x_3) { U_c(1;μ_h,μ_3) ×[cos^2(π N_c/β_0 ln x_3) (1,0,0,0,0,0,0,…)^T + sin^2(π N_c/β_0 ln x_3) (0,1,0,0,0,0,0,…)^T ] + π N_c/β_0 ∫_1^x_3 dx_1/x_1 U_c(3/2,1;μ_h,μ_1,μ_3) ×[sin(2π N_c/β_0 ln x_3) + sin(2π N_c/β_0 ln x_1) ] (0,0,1,0,0,0,0,…)^T + (π N_c/β_0)^2 ∫_1^x_3 dx_2/x_2 ∫_1^x_2 dx_1/x_1 U_c(2,3/2,1;μ_h,μ_1,μ_2,μ_3) ×( [cos(2π N_c/β_0 ln(x_3/x_1)) + cos(2π N_c/β_0 ln(x_2/x_1)) ] (0,0,0,0,1,0,1,…)^T + [cos(2π N_c/β_0 ln x_3) + cos(2π N_c/β_0 ln x_2) ] (0,0,0,0,1,0,-1,…)^T ) } . Similar to (<ref>), we work in the strict large-N_c limit, as there are no terms containing a fourfold integral.
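As an illustration of the modest numerical task these resummed expressions involve, the following Python sketch evaluates the twofold integral structure of the quark-gluon-initiated result with nested quadrature. The evolution factors U_c, the endpoint x_s, and the choice n_f=5 in β_0 are placeholders chosen for illustration only, not inputs taken from the text:

import numpy as np
from scipy.integrate import quad

Nc = 3.0
beta0 = 11.0 - 2.0*5/3.0          # one-loop beta0 for n_f = 5 (illustrative)
w = np.pi*Nc/beta0
xs = 4.0                          # illustrative endpoint of the x_2 integration

def Uc(*scales):
    # placeholder for the Sudakov-like evolution factors U_c(...); a real
    # application would evaluate them from the resummed anomalous dimensions
    return 1.0

def component(i):
    # coefficient multiplying the i-th color structure (i = 0, 1, 2)
    def f(x2):
        pre = np.log(xs/x2)/x2
        if i == 0:
            return pre*Uc(1)*np.cos(w*np.log(x2))**2
        if i == 1:
            return pre*Uc(1)*np.sin(w*np.log(x2))**2
        g = lambda x1: Uc(1.5, 1)/x1*(np.sin(2*w*np.log(x2))
                                      + np.sin(2*w*np.log(x1)))
        return pre*w*quad(g, 1.0, x2)[0]
    return quad(f, 1.0, xs)[0]

vals = [component(i) for i in range(3)]
print("coefficients of the three color structures:", vals)
# the physical result carries an overall factor 16*i*pi/beta0**2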
http://arxiv.org/abs/2407.02417v1
20240702164317
Aspects of dS/CFT Holography
[ "Indranil Dey", "Kanhu Kishore Nanda", "Akashdeep Roy", "Sandip P. Trivedi" ]
hep-th
[ "hep-th", "gr-qc" ]
Indranil Dey, Kanhu Kishore Nanda, Akashdeep Roy, Sandip P. Trivedi
Department of Theoretical Physics, Tata Institute of Fundamental Research, Colaba, Mumbai, India, 400005
indranil.dey@tifr.res.in, kanhu.nanda@tifr.res.in, akashdeep.roy@tifr.res.in, sandip@theory.tifr.res.in
It has been suggested that a dS_d+1 spacetime of radius R_ds has a holographic dual, living at future space-like infinity I^+, with the bulk wave function being dual to the partition function of the boundary theory, <cit.>. We consider some aspects of this correspondence. For underdamped scalars with mass M^2R_ds^2>d^2/4, belonging to the principal series, we show that for the Bunch Davies vacuum a suitable source in the boundary theory can be identified in terms of the coherent state representation of the wave function. We argue that terms in the resulting correlation functions, which are independent of the late time cut-off, satisfy the Ward identities of a conformal field theory. We also discuss other ways to identify sources, both in the underdamped and the overdamped case, where M^2R_ds^2<d^2/4, and argue that these too can lead to correlators satisfying the Ward identities of a CFT. Some comments on the violation of reflection positivity, and the cut-off dependent terms, along with some explicit checks and sample calculations, are also included.
TIFR/TH/24-13
Aspects of dS/CFT Holography
July 8, 2024
§ INTRODUCTION de Sitter space holds many mysteries. One question which has attracted considerable attention is whether it has a dual description in terms of a hologram. For some early references, see <cit.>. The purpose of this paper is to study some aspects of such a holographic correspondence. In the Poincaré patch of d+1 dimensional de-Sitter space, dS_d+1, the metric is given by ds^2=R_ds^2 [-dη^2/η^2 + 1/η^2 ∑_i=1^d dx^idx_i]. Here η∈(-∞,0] and I_+, the future space-like boundary, is obtained by taking η→ 0, see Figure <ref>. R_ds is the radius of dS space, which is related to the Hubble constant by H=1/R_ds. We will often set R_ds=1. The proposed correspondence which we study, <cit.>, is that the wave function Ψ in dS space, asymptotically, as η→ 0, is equal to the partition function Z_FT of a dual field theory in the presence of sources. Schematically, Ψ[ϕ_i]=Z_FT[S_i], where ϕ_i, i=1,⋯ N, denote the values that the various bulk fields take close to I_+, and S_i denote the corresponding sources in the field theory. In subsequent sections of the paper we will be much more precise about this correspondence. Let us also mention that we will explore this version of holography below for fields in the Bunch Davies vacuum. We note that the correspondence in eq.(<ref>) is analogous to the AdS/CFT correspondence, for reviews see <cit.>, the crucial difference being that it is the bulk wave function, instead of the partition function, which appears on the LHS in eq.(<ref>). In fact, dS_d+1 space and Euclidean AdS_d+1 have the same isometry group, SO(d+1,1), and can be related by an analytic continuation. The metric of Euclidean AdS_d+1, which is given by ds^2=R_ads^2[dz^2/z^2+1/z^2 ∑_i=1^d dx^idx_i], with z∈ [0,∞] and R_ads being the radius of AdS space, after the analytic continuation z→ -iη, R_ads→ i R_ds, becomes the metric of dS_d+1, eq.(<ref>).
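As a quick symbolic check of this continuation, one can verify that the substitution z→ -iη, R_ads→ iR_ds indeed maps the Euclidean AdS line element to the dS one; a minimal sympy sketch, with a single spatial direction shown for brevity:

import sympy as sp

R, eta = sp.symbols('R eta', positive=True)   # formal; eta < 0 in our conventions
deta, dx = sp.symbols('deta dx')
# Euclidean AdS line element with z -> -i*eta and R_ads -> i*R substituted
ds2 = (sp.I*R)**2/(-sp.I*eta)**2*((-sp.I*deta)**2 + dx**2)
print(sp.simplify(ds2))   # -> R**2*(-deta**2 + dx**2)/eta**2, the dS metric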
As a result, some features of the dS/FT dictionary essentially follow from the AdS/CFT correspondence, after an analytic continuation. However, there are important differences, and some unfamiliar consequences, in the dS case, and the aim of this paper is to study some of them. One key issue which we will study is the behaviour of fields with sufficiently high mass in dS space, corresponding to the principal series representations of the SO(d+1,1) group of isometries of dS_d+1 <cit.>. A free scalar field with a mass M has an action given in eq.(<ref>), and asymptotically in dS_d+1, as η→ 0, the solutions of its wave equation are a linear combination of two modes going like ϕ∼ (-η)^α, α=d/2± i√(M^2 R_ds^2-d^2/4). If the mass M^2 R_ds^2>d^2/4, we see that the two modes will both continue to oscillate as η→ 0. Physically this is because with a sufficiently high mass, the so-called “Hubble friction term", which arises due to the exponential expansion, is not enough to cause the field to freeze out. In contrast, when M^2R_ds^2<d^2/4, the behaviour is different and both modes decay towards the boundary, with the fall-offs going like (-η)^(d/2±√(d^2/4-M^2R_ds^2)). The principal series corresponds to the case in eq.(<ref>), which we will refer to as the underdamped case below. We will see that in this case constructing the holographic dictionary, in particular relating the asymptotic behaviour of the bulk field to a source in the boundary theory, has some novel features. In contrast, for the overdamped case where eq.(<ref>) is met, the holographic dictionary can be constructed more straightforwardly, in close analogy with the AdS case. In the underdamped case we find that it is convenient to consider the wave function in the coherent state basis rather than the basis of eigenstates of the bulk scalar field. A suitable source in the boundary can then be identified in terms of the eigenvalue of the coherent state. The corresponding field [We use the terminology of a boundary field rather than operator since the boundary theory cannot be continued to a field theory in Lorentzian space, at least of a conventional type, see below.] in the boundary which this source couples to has dimension Δ_+=d/2+i√(M^2R_ds^2-d^2/4). The spatial and time reparametrisation invariance conditions on the wave function give rise to constraints on the correlation functions of the boundary theory. If we consider the wave function at a late time slice, say at a constant value of η=η_1, the value of the time coordinate, η_1, plays the role of a UV cut-off in the dual theory. We show that the constraints which arise on the cut-off independent terms in correlation functions, due to the invariance of the wave function, take the form of Ward identities of a conformal field theory. The identification of a source in the underdamped case and the resulting Ward identities constitute one of the main results of the paper. We also explore other ways of identifying a source for the boundary theory. Working directly in the basis of eigenstates of the bulk scalar, rather than the coherent state representation, we find that there are in fact alternate ways of identifying such sources. These are related to the value the bulk field takes on the hypersurface η=η_1 by a spatially non-local transformation. While some of the cut-off dependent terms in the resulting correlation functions are unwieldy[ E.g. the two point function has a term which is both cut-off, i.e. η_1, dependent and non-local in space, see eq.(<ref>).]
we show that the cut-off independent terms continue to satisfy the Ward identities of a conformal field theory. Most of our analysis in the paper is focused on underdamped fields, but some of our results, pertaining to the possibility of identifying sources in an alternate manner, are also of interest for the overdamped case, see section <ref>. One important difference with the AdS case, as has been noted in <cit.> is that the cut-off dependent terms are important for the wave function in dS space. In fact, this dependence gives rise to the leading contribution in the wave function at late times, and plays an important role in ensuring that the wave function solves the Wheeler de-Witt equation <cit.>. This is in contrast to the AdS case where these terms can be removed by adding suitable counter terms. While we do not analyse them in much detail, it is worth noting that the invariance of the wave function under time and spatial reparametrisations gives rise to important constraints on these cut-off dependent terms as well, and these constraints can be obtained in a manner similar to the Ward identities mentioned above. In an admittedly ambitious version of holography, these cut-off dependent terms would also be correctly reproduced by the boundary theory [In referring to the hologram as a field theory above, rather than a conformal field theory, we are allowing for this possibility.]. Finally, we also draw attention to the fact that the dual theory for dS violates reflection positivity <cit.> etc. As a result dimensions of operators and coefficients in correlation functions are often complex numbers, e.g., eq.(<ref>). The breaking of reflection positivity means that the dual field theory which is Euclidean cannot be continued to a field theory in Lorentzian space, at least with a hermitian Hamiltonian and states having a positive norm. This breaking of reflection positivity arises because one is dealing with the hologram at I^+, rather than the past boundary of de-Sitter on I^-, see fig <ref>. Another way to put it is that one is constructing the hologram for the expanding rather than the contracting branch of the Hartle-Hawking wave function. The hologram at I^- is related to the one at I^+ by a CP transformation, i.e the product of Charge conjugation C and Parity P. All correlators can be related in the two holograms through this transformation [E.g. the dimension in eq.(<ref>) corresponding to the underdamped field would take its complex conjugate value in the hologram at I^-.]. The analytic continuation which takes some AdS correlators to those in dS, see anac above, are also correspondingly different for the I^- case, resulting in this CP transformation. This paper is organised as follows. Section <ref> reviews some essential facts about dS space. Section <ref> calculates the wave function in the Bunch Davies vacuum for free scalars and also discusses some interactions. Section <ref> discusses some holographic aspects and section <ref> the coherent state representation in the underdamped case. Ward identities are discussed in section <ref> which also included some discussion of alternate ways of identifying sources in subsection <ref>. Additional comments on the violation of reflection positivity, on dS_3, etc. are in section <ref>. The appendices contain important supplementary material including some details on computing higher point correlators, the OPE limit, and some concrete checks on the Ward identities. 
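As a small numerical illustration of the late-time exponents α=d/2± i√(M^2R_ds^2-d^2/4) that underlie the underdamped/overdamped distinction discussed above (with R_ds=1; the parameter values below are arbitrary):

import numpy as np

def exponents(d, M2):
    # late-time exponents alpha = d/2 +- i*sqrt(M^2 - d^2/4), with R_ds = 1
    root = np.sqrt(complex(M2 - d**2/4))
    return d/2 + 1j*root, d/2 - 1j*root

for d, M2 in [(3, 0.5), (3, 2.25), (3, 5.0)]:   # over-, critically, under-damped
    ap, am = exponents(d, M2)
    print(f"d={d}, M^2={M2}: alpha = {ap:.3f}, {am:.3f}")

For M^2 > d^2/4 the exponents acquire an imaginary part and both modes keep oscillating; for M^2 < d^2/4 both exponents are real and the modes freeze out.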
We have referred to various important references above and in the following text. Some important additional references are <cit.>. § DE SITTER GEOMETRY We begin by reviewing some basic facts about dS space. d+1 dimensional de Sitter spacetime is described as a d+1 dimensional hyperboloid -X_0^2+∑_i=1^d+1X_i^2=R_dS^2 embedded in ℝ^1,d+1 Minkowski spacetime. We set R_dS=1 for the rest of the paper. The global coordinate parametrization of dS_d+1 is given by X_0=sinhτ, X_i=coshτ Ω_i (i=1,2,...,d+1), where Ω_i denotes angles on a unit S^d. The metric in these coordinates is ds^2=-dτ^2 + cosh^2τ dΩ_d^2. These global coordinates cover all of dS space, which has two spacelike boundaries, namely ℐ_+ and ℐ_- mentioned in Figure <ref>. Conventionally these boundaries are called future and past null infinity. In this paper we will explore a hologram for dS space at future null infinity. For our analysis it will be useful to consider Poincaré coordinates which only cover half of dS space, namely the region ℛ_+. The metric in Poincaré coordinates takes the form ds^2=1/η^2(-dη^2+∑_i=1^ddx_i^2). Poincaré coordinates are related to global coordinates by the transformation X_0=sinht+(1/2)e^t∑_i=2^d+1x_i^2, X_1=cosht-(1/2)e^t∑_i=2^d+1x_i^2, X_i=e^tx_i, with t∈ [-∞,∞] being related to η by η=-e^-t. It is clear that Poincaré coordinates only cover the region X_1≥ X_0. Also in our conventions, note that η takes negative values and lies in the range η∈[-∞,0], with ℋ_+ corresponding to η→ -∞, and ℐ_+ corresponding to η→ 0. Note also that in Poincaré coordinates the metric is conformal to d+1 dimensional Minkowski space. § BASIC SETUP For studying various aspects of the putative hologram at ℐ_+ we will mainly restrict ourselves to bulk scalar fields in this paper. Let ϕ be a free real scalar field in the bulk with mass M in d+1 dimensional dS space; its action is S=∫ d^d+1x√(-g)ℒ(g^μν,ϕ,∂_μϕ)=∫ d^d+1x√(-g)[-1/2g^μν∂_μϕ∂_νϕ-1/2M^2ϕ^2]. The equation of motion for the field is (1/√(-g))∂_μ(√(-g)g^μν∂_νϕ)-M^2ϕ=0. Here we study the field in a fixed de Sitter background. Inserting the metric given by eq.(<ref>) in eq.(<ref>), we get η^2∂_η^2ϕ-(d-1)η∂_ηϕ-η^2∂^2ϕ+M^2ϕ=0, where ∂^2 denotes the Euclidean d dimensional Laplacian. The independent solutions of eq.(<ref>) are found to be ϕ(𝐱,η)=∫d^d k/(2π)^d[ℂ_1(𝐤)(-η)^d/2H^(1)_ν(-kη)e^i𝐤.𝐱+ℂ_2(𝐤)(-η)^d/2H^(2)_ν(-kη)e^i𝐤.𝐱], where H^(1)_ν, H^(2)_ν denote Hankel functions with index ν of the first and second kind respectively, ℂ_1 and ℂ_2 are constant coefficients, the index ν is given by ν=√(d^2/4-M^2), and k=√(𝐤·𝐤) denotes the magnitude of 𝐤. Importantly, for M^2<d^2/4, ν is a real quantity. We refer to this as the overdamped case or the case of light scalars. For M^2>d^2/4, ν is imaginary. We refer to this case as the underdamped case or the case of heavy scalars. M^2=d^2/4 is the critically damped case. Eq.(<ref>) is valid for the overdamped case, M^2<d^2/4. In the underdamped case the corresponding expression can be obtained by analytically continuing ν→ iμ=i√(M^2-d^2/4). This will also be true for other formulae we discuss below; the underdamped case can be obtained for these using the continuation eq.(<ref>) as well.
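It is straightforward to verify eq.(<ref>) numerically; the following mpmath sketch (with illustrative values of d, M^2 and k, here in the underdamped regime) checks that (-η)^(d/2)H^(1)_ν(-kη) solves the mode equation:

import mpmath as mp

d, M2, k = 3, mp.mpf(5), mp.mpf(2)    # illustrative; M^2 > d^2/4: underdamped
nu = mp.sqrt(mp.mpf(d)**2/4 - M2)     # purely imaginary here, nu = i*mu

def phi(eta):
    return (-eta)**(mp.mpf(d)/2)*mp.hankel1(nu, -k*eta)

eta0 = mp.mpf(-0.7)
residual = (eta0**2*mp.diff(phi, eta0, 2) - (d - 1)*eta0*mp.diff(phi, eta0)
            + eta0**2*k**2*phi(eta0) + M2*phi(eta0))
print(abs(residual))   # consistent with zero to numerical precision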
Note that near the horizon ℋ_+, where η→-∞, the two Hankel functions behave like lim_η→-∞H^(1)_ν(-kη)= A_h e^-ikη/√(-kη), lim_η→-∞H^(2)_ν(-kη)= B_h e^ikη/√(-kη), where A_h=√(2/π)e^-iπν/2-iπ/4, B_h=√(2/π)e^iπν/2+iπ/4. While near ℐ_+, where η→0, we have lim_η→0H^(1)_ν(-kη)=(-kη)^-ν(-i 2^νΓ[ν]/π)+(-kη)^ν(2^-ν(1+i cot(πν))/Γ[ν+1]), lim_η→0H^(2)_ν(-kη)=(-kη)^-ν(i 2^νΓ[ν]/π)+(-kη)^ν(2^-ν(1-i cot(πν))/Γ[ν+1]). In obtaining these asymptotic forms we keep track of the fact that in our conventions η takes negative values lying in the range η∈ [-∞,0]. §.§ Field Quantization Next we turn to quantising the field. Mode expanding the field Φ we get Φ(𝐱,η)=∫d^d k/(2π)^d[a_𝐤ℂ_ν(-η)^d/2H^(1)_ν(-kη)e^i𝐤.𝐱 +a^†_𝐤ℂ_ν^∗(-η)^d/2(H^(1)_ν(-kη))^∗ e^-i𝐤.𝐱], where a_𝐤, a^†_𝐤 are the annihilation and creation operators, and ℂ_ν=1/(√(2)A_h)=(√(π)/2)e^iπν/2+iπ/4. The canonical momentum is defined as Π(𝐱,η)=√(-g)δℒ/δ(∂_ηΦ)=-√(-g)g^ηη∂_ηΦ, which is calculated using eq.(<ref>) as Π(𝐱,η)=1/(-η)^d-1∫d^d k/(2π)^d[a_𝐤ℂ_ν∂_η[(-η)^d/2H^(1)_ν(-kη)]e^i𝐤.𝐱 +a^†_𝐤ℂ_ν^∗∂_η[(-η)^d/2(H^(1)_ν(-kη))^∗] e^-i𝐤.𝐱]. Imposing the canonical commutation relations on Φ and Π, [Φ(𝐱,η),Π(𝐲,η)]=iδ^(d)(𝐱-𝐲), gives rise to the commutation relation [a_𝐤,a^†_𝐤']=(2π)^dδ^d(𝐤-𝐤'). This can be conveniently checked close to the horizon ℋ_+. We define the vacuum state to be the one which is annihilated by the annihilation operators, a_𝐤|0⟩=0. This is called the Bunch Davies vacuum. Multi-particle states can be obtained by acting with products of the a^†_𝐤 operators on |0⟩. As was mentioned above, the mode expansion etc. for the underdamped case can be obtained by the analytic continuation, eq.(<ref>). It is worth being explicit about some of the resulting expressions here. The mode expansion, eq.(<ref>), for underdamped fields becomes Φ(𝐱,η)=∫d^d k/(2π)^d[a_𝐤ℂ_μ(-η)^d/2H^(1)_iμ(-kη)e^i𝐤.𝐱+a^†_𝐤ℂ_μ^∗(-η)^d/2(H^(1)_iμ(-kη))^∗ e^-i𝐤.𝐱], with the coefficient ℂ_μ=(√(π)/2)e^-πμ/2+iπ/4, and similarly for Π(𝐱,η). §.§ Wave Function in Canonical Approach Here onwards, we will use the following notation to save clutter: ℱ_ν(k,η)=ℂ_ν(-η)^d/2H^(1)_ν(-kη), and we denote the boundary value, as η→ 0, of ℱ_ν(k,η) by f_ν(k,η)=ℂ_ν(-η)^d/2[(-kη)^-ν(-i 2^νΓ[ν]/π)+(-kη)^ν(2^-ν(1+i cot(πν))/Γ[ν+1])]. We will also use the notation lim_η→0∂_ηℱ_ν(k,η)=ḟ_ν(k,η), and define α_ν=ℂ_ν(-i 2^νΓ[ν]/π), β_ν=ℂ_ν(2^-ν(1+i cot(πν))/Γ[ν+1]). Using the value for ℂ_ν given in eq.(<ref>) we get that α_ν= (2^ν-1Γ[ν]/√(π)) e^iπ/2(ν-1/2), β_ν = -(2^-ν-1√(π)/(Γ[ν+1] sin(πν))) e^-iπ/2(ν+1/2). The late time value of ℱ_ν(k,η) can also then be written as f_ν(k,η) = (-η)^d/2 [ α_ν (-kη)^-ν + β_ν (-kη)^ν]. The field and momentum operators in Fourier space, Φ(𝐤,η)= ∫ d^d x Φ(𝐱,η)e^-i𝐤.𝐱, Π(𝐤,η)= ∫ d^d x Π(𝐱,η)e^-i𝐤.𝐱, can be written in terms of ℱ_ν(k,η) as Φ(𝐤,η)= a_𝐤ℱ_ν(k,η)+a^†_-𝐤(ℱ_ν(k,η))^∗, Π(𝐤,η)= 1/(-η)^d-1[a_𝐤∂_ηℱ_ν(k,η)+a^†_-𝐤∂_η(ℱ_ν(k,η))^∗]. The creation and annihilation operators can be represented in terms of Φ(𝐤,η) and Π(𝐤,η) as a_𝐤= [∂_η(ℱ_ν(k,η))^∗Φ(𝐤,η)-(-η)^d-1(ℱ_ν(k,η))^∗Π(𝐤,η)] / [ℱ_ν(k,η)∂_η(ℱ_ν(k,η))^∗-(ℱ_ν(k,η))^∗∂_ηℱ_ν(k,η)], a^†_-𝐤= [∂_ηℱ_ν(k,η)Φ(𝐤,η)-(-η)^d-1ℱ_ν(k,η)Π(𝐤,η)] / [(ℱ_ν(k,η))^∗∂_ηℱ_ν(k,η)-ℱ_ν(k,η)∂_η(ℱ_ν(k,η))^∗]. Now we construct the ground state wave function in the basis of eigenstates of the field operator. We will work with eigenstates of the operator Φ(𝐤,η), eq.(<ref>). Note that these operators are time dependent, i.e. dependent on the coordinate η.
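These asymptotic forms, including the 1± i cot(πν) combinations, can be checked with mpmath; a short sketch with illustrative values:

import mpmath as mp

nu = mp.mpf('0.3')
x = mp.mpf('1e-6')      # small argument, near the boundary I_+
small = (x**(-nu)*(-mp.j*2**nu*mp.gamma(nu)/mp.pi)
         + x**nu*(2**(-nu)*(1 + mp.j*mp.cot(mp.pi*nu))/mp.gamma(nu + 1)))
print(abs(mp.hankel1(nu, x) - small)/abs(small))   # -> 0 as x -> 0

x = mp.mpf(200)         # large argument, near the horizon H_+
large = mp.sqrt(2/(mp.pi*x))*mp.exp(mp.j*(x - nu*mp.pi/2 - mp.pi/4))
print(abs(mp.hankel1(nu, x) - large)/abs(large))   # small, of order 1/x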
The corresponding eigenstates satisfy the condition Φ(𝐤,η)|ϕ(𝐤,η)⟩=ϕ(-𝐤,η)|ϕ(𝐤,η)⟩. The corresponding bra ⟨ϕ(𝐤,η)| then satisfies the condition ⟨ϕ(𝐤,η)|Φ(𝐤,η)= ϕ(𝐤,η)⟨ϕ(𝐤,η)|. Schematically denoting these eigenstates as |φ⟩, we have that the wave function is given by ψ[φ]=⟨φ|0⟩. The momentum operator acts on this wave function as follows: Π(𝐤,η)ψ[φ]=-i(2π)^d δψ[φ]/δφ(-𝐤,η). Using the fact that the Bunch Davies vacuum is annihilated by a_𝐤, eq.(<ref>), and the expression for the annihilation operators in terms of Φ(𝐤,η) and Π(𝐤,η) given in eq.(<ref>), we get a differential equation that ψ should satisfy: ∂_η(ℱ_ν(k,η))^∗φ(𝐤,η)ψ[φ]+i(-η)^d-1(ℱ_ν(k,η))^∗δψ[φ]/δφ(-𝐤,η)=0. The solution to this equation is given by ψ[φ,η]=𝒩exp[i∫d^d𝐤/(2π)^d φ(𝐤,η)(∂_η(ℱ_ν(k,η))^∗/(2(-η)^d-1(ℱ_ν(k,η))^∗))φ(-𝐤,η)]. It is worth drawing attention to the fact that the wave function of the Bunch Davies vacuum is both a functional of φ(𝐤,η), the eigenvalues of the field operators Φ(𝐤,η), and in addition a function of time. If we were more explicit, eq.(<ref>) would actually have been written as ψ[φ(𝐤,η),η]=𝒩exp[i∫d^d𝐤/(2π)^d φ(𝐤,η)(∂_η(ℱ_ν(k,η))^∗/(2(-η)^d-1(ℱ_ν(k,η))^∗))φ(-𝐤,η)]. The explicit time dependence arises because of the time dependence of the dS background. The probability functional following from eq.(<ref>) is given by |ψ[φ,η]|^2=|𝒩|^2exp[-∫d^d𝐤/(2π)^d φ(𝐤,η)(1/(2|ℱ_ν(k,η)|^2))φ(-𝐤,η)], where we have used the relation ℱ_ν(k,η)∂_η(ℱ_ν(k,η))^∗-(ℱ_ν(k,η))^∗∂_ηℱ_ν(k,η)=i(-η)^d-1. This relation follows from the fact that ℱ_ν(k,η) is defined in eq.(<ref>) and therefore satisfies the equation η^2∂_η^2ℱ_ν-(d-1)η∂_ηℱ_ν+η^2k^2ℱ_ν+M^2ℱ_ν=0. The normalization constant 𝒩 can be fixed by requiring that the total probability, given by ∫𝒟φ|ψ[φ,η]|^2, is unity. The bulk correlators can be derived from eq.(<ref>) and eq.(<ref>) as ⟨Φ(𝐤,η)Φ(𝐤',η)⟩= (2π)^dℱ_ν(k,η)(ℱ_ν(k,η))^∗δ^d(𝐤+𝐤'), ⟨Π(𝐤,η)Π(𝐤',η)⟩= ((2π)^d/(-η)^2(d-1))∂_ηℱ_ν(k,η)∂_η(ℱ_ν(k,η))^∗δ^d(𝐤+𝐤'), ⟨Φ(𝐤,η)Π(𝐤',η)⟩= ((2π)^d/(-η)^d-1)ℱ_ν(k,η)∂_η(ℱ_ν(k,η))^∗δ^d(𝐤+𝐤'). Let us note again that the formulae for the underdamped case can be obtained by analytic continuation, eq.(<ref>). For clarity, and also to lay down our notation, let us discuss some of the resulting expressions. We define in this case the function ℱ_μ(k,η)=ℂ_μ(-η)^d/2 H_iμ^(1)(-kη). As η→ 0 its value becomes f_μ(k,η)=(-η)^d/2 [β_μ (-kη)^-iμ+α_μ (-kη)^iμ], where α_μ=ℂ_μ(2^-iμ(1+coth(πμ))/Γ[iμ+1]), β_μ=ℂ_μ(-i 2^iμΓ[iμ]/π). Using the value of ℂ_μ given in eq.(<ref>), we obtain α_μ=e^-πμ/2+iπ/4(√(π)(1+coth(πμ))/(2^1+iμΓ[1+iμ])), β_μ=-e^-πμ/2+iπ/4(i 2^iμ-1Γ[iμ]/√(π)).
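The relation ℱ_ν∂_η(ℱ_ν)^∗-(ℱ_ν)^∗∂_ηℱ_ν=i(-η)^(d-1) quoted above fixes the normalisation ℂ_ν (and ℂ_μ) and can also be verified numerically; a sketch for the underdamped case, with illustrative parameter values:

import mpmath as mp

d, k, mu = 3, mp.mpf('1.3'), mp.mpf('1.2')
Cmu = mp.sqrt(mp.pi)/2*mp.exp(-mp.pi*mu/2 + mp.j*mp.pi/4)

def F(eta):
    return Cmu*(-eta)**(mp.mpf(d)/2)*mp.hankel1(mp.j*mu, -k*eta)

eta0 = mp.mpf('-0.9')
dF = mp.diff(F, eta0)                  # eta0 is real, so conj(dF) = d(F*)/d eta
lhs = F(eta0)*mp.conj(dF) - mp.conj(F(eta0))*dF
print(lhs, mp.j*(-eta0)**(d - 1))      # the two agree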
The limits indicate that the path integral is to be carried out from H_+, where η→ -∞, to η_1. In the subsequent discussion, at the risk of causing some confusion perhaps, we will sometimes denote the resulting wave function by ψ=e^i S At tree level S is the on-shell value of the bulk action Ŝ, obtained after imposing suitable boundary conditions. But at higher orders they will be different. In eq.(<ref>) the argument of the wave function, which we are denoting schematically as φ, equals the value of the field at η=η_1 in the path integral. In turn these are equal to the eigenvalue of the field operator Φ(𝐱,η_1), which we had denoted as φ(𝐱,η_1) in subsection <ref>, see eq.(<ref>). The Fourier transform of the field φ( x, η) is φ( k, η)= ∫ d^d xφ( x,η) e^-i k· x The boundary condition in the path integral at H_+ is that ϕ( k, η) vanishes, as η→ -∞, after η is given a small imaginary piece as we explain below. This will give the correct wave function in the Bunch Davis vacuum for the free theory as we will see below. The path integral for the free theory can be simply calculated by evaluating the on shell action for a solution meeting the two boundary conditions mentioned above. The on-shell solution for ϕ( k,η) in general takes the form ϕ( k,η)=ϕ_1( k) F_ν( k, η) + ϕ_2( k) (ℱ_ν(k,η))^∗ where F_ν(k,η) is defined in eq.(<ref>), and ϕ_1( k), ϕ_2( k) are independent of η. We now continue this solution to complex values of η by giving η an imaginary part, η→η(1-iϵ) where ϵ>0, ϵ≪ 1 and analyse the resulting behaviour near η→ -∞. From eq.(<ref>), we see, using eq.(<ref>) that (ℱ_ν(k,η))^∗∼ e^i k η(1-iϵ) and therefore vanishes as η→ -∞. In contrast F_ν(k,η)∼ e^-ikη (1-iϵ) and therefore blows up as η→ -∞. The boundary condition at H_+ we impose is that ϕ( k,η) must vanish as η→ -∞ after the analytic continuation above. This then sets ϕ_1( k)=0 in eq.(<ref>) leading to φ( k,η)=(ℱ_ν(k,η))^∗ϕ_2( k) The on-shell action can then be easily calculated as a boundary term at η=η_1 and gives the wave function ψ[φ,η_1]= Nexpi ∫d^d k (2π)^dϕ_2( k)(ℱ_ν(k,η_1))^∗∂_η(ℱ_ν(k,η_1))^∗ 2 (-η_1)^d-1ϕ_2(- k) We see that this agrees with the wave function obtained in eq.(<ref>) after we express ϕ_2(𝐤) in terms of φ(𝐤,η) using eq.(<ref>) and set η=η_1. §.§ Inclusion of Interactions Once interactions are included, the ground state wave function will acquire non-Gaussian terms. As an example, consider adding an ϕ^n term in the action, which gives an extra contribution to the action, δ S_n=λ∫d^d xdη/(-η)^d+1ϕ^n(𝐱,η) The correction at leading order in λ in the wave function can be simply calculated by inserting the on-shell solution eq.(<ref>) discussed in the previous section, into δ S. This gives, ψ_I[ϕ(𝐤,η_1),η_1]=exp∫d^d𝐤/(2π)^dϕ(𝐤,η_1)(i/2(-η_1)^d-1∂_η(ℱ_ν(k,η_1))^∗/(ℱ_ν(k,η_1))^∗)ϕ(-𝐤,η_1)× e^iδ S_n where δ S_n is found to be iδ S_n=iλ∫∏_i=1^nd^d𝐤_i/(2π)^d(2π)^dδ^(d)(𝐤_1+...+𝐤_n)[I(k_1,k_2,..,k_n,η_1)∏_i=1^nϕ(𝐤_i,η_1)/(ℱ_ν(k_i,η_1))^∗] with I(k_1,k_2,..,k_n,η_1)=∫_-∞^η_1dη'/(-η')^d+1∏_i=1^n(ℱ_ν(k_i,η'))^∗ Depending on the value of ν this integral can be divergent as η_1→ 0. These divergences are discussed further in section <ref> and can be understood as arising due to local “counterterms" or due to operator mixing. After subtracting these divergences we get finite value for the integral which we denote subsequently as I(k_1, k_2, ⋯, k_n). 
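For n=3 the finite part of this integral can be recast (as discussed in the next subsection) as an integral over three modified Bessel functions. Up to the overall prefactor, its numerical evaluation is then a one-line quadrature; a sketch for overdamped fields, with illustrative values of d, ν and the momenta chosen so that the integral converges without analytic continuation:

import numpy as np
from scipy.integrate import quad
from scipy.special import kv

d, nu = 3, 0.4                  # convergent at t = 0 since d/2 - 3*nu > 0
k1, k2, k3 = 1.0, 1.3, 0.8      # illustrative momenta

f = lambda t: t**(d/2 - 1)*kv(nu, k1*t)*kv(nu, k2*t)*kv(nu, k3*t)
val, err = quad(f, 0, np.inf)
print("triple-K integral:", val, "+/-", err)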
We have denoted the resulting wave function with the suffix I above because it can also be calculated in the operator formalism by doing time dependent perturbation theory in the interaction picture (called the "in-in formalism"). These points are discussed more fully in Appendix <ref>. The cubic case, with n=3 is worth discussing more explicitly. This corresponds to the Witten diagram [see Figure <ref>] where the lines indicate the bulk to boundary propagators. We can convert eq.(<ref>) for n=3 to an integral over three modified Bessel functions using the transformation properties derived in Appendix <ref>. Up to cut-off dependent terms we get, I(k_1,k_2,k_3)=8/π^3(ℂ_ν^∗)^3 e^i 3 π 2ν^∗ e^iπ 2(3-d/2)∫_0^∞dt t^d/2-1K_ν^∗(k_1t)K_ν^∗(k_2t)K_ν^∗(k_3t) Since we neglecting the cut-off dependent terms here this integral, in general, is to be taken as being defined by analytic continuation starting from values of ν where it converges. After such an analytic continuation, if needed, it agrees with the standard form for a three point function in a CFT, as is discussed in <cit.>. § HOLOGRAPHY In this paper we will explore the version of holography which relates the late time wave function of the bulk theory in the Poincaré patch, for the Bunch Davies vacuum, to the generating functional for correlation functions of the dual CFT. We work in Poincaré coordinates, eq.(<ref>), and late time corresponds to taking η→ 0^-. The essential idea of this version of holography, <cit.>, is similar to that in AdS/CFT, with the wave function in the bulk playing the role of the partition function in AdS/CFT. To be more precise, in the dS case the late time value of the bulk fields are to be appropriately identified with the sources in the dual CFT and the wave function as a functional of these late sources then is to be equated with the partition function in the CFT in the presence of the same sources. Denoting a generic source by S the wave function at late times can be expanded in a power series in S. This is analogous to carrying out a Taylor series expansion of a function. Schematically, the wave function at late times is then given by a sum of terms consisting of various powers of the sources S( x) and takes the form: log(Ψ[S,η_1])=∑_n=2^∞1 n!∫∏_j=1^n (d^d x_j S( x_j)) ł𝕆( x_1)𝕆( x_2) ⋯𝕆( x_n)From the bulk point of view the functions which appear in this expansion, ł𝕆( x_1)𝕆( x_2) ⋯𝕆( x_n)$̊, are just coefficients ; the only difference with a conventional Taylor series is that since the wave function is a functional of the bulk field, and therefore of the sourcesS, the coefficients are now actually coefficient functions, i.e. functions of the spatial locations, x_1,⋯ x_n. The statement of holography is that these coefficient functions are in fact correlation functions of various fields in the dual CFT. And it is with this correspondence in mind that we have suggestively denoted the coefficients asł𝕆( x_1)𝕆( x_2) ⋯𝕆( x_n)$̊ above. Before proceeding we note that from eq.(<ref>) it follows that δ^n log(Ψ) δ S( x_1) δ S( x_2) ...δ S( x_n)=ł𝕆( x_1)𝕆( x_2) ⋯𝕆( x_n)In general for every bulk field one would obtain a different source S_i and a correspondingly different field 𝕆_i in the boundary. Bulk scalars would give rise to scalar fields, the metric to a symmetric two-tensor T_ij which is the stress tensor of the CFT, etc. It is worth mentioning, before we proceed, that in AdS /CFT we usually refer to the bulk fields as being related to sources for operators in the dual CFT. 
Here we are instead referring to them as sources for fields in the CFT. This is because, as we will see in section <ref>, and has already been noted in posspa, the boundary theory we are dealing with is Euclidean and in general does not admit a Lorentzian continuation. Below, we show how to relate the boundary values of bulk scalar field of various masses to sources in the CFT. And argue that the wave function in the Bunch Davies vacuum leads to the correlation functions of the CFT satisfying various Ward identities including those of conformal invariance, translational invariance and local scale invariance. We first discuss overdamped case below and subsequently turn towards heavy fields. §.§ Overdamped Case In this case the holographic dictionary is quite analogous to the AdS/CFT case as was mentioned above. In particular we will be interested in relating the wave function in the basis of eigenstates of the field operator to the generating functional in the CFT. Following notation of section <ref> we denote the field operator by Φ( x,η), its eigenstates by |ϕ( x,η)⟩ and their eigenvalues by ϕ( x,η), so that Φ( x,η)|ϕ( x,η)⟩=ϕ( x,η)|ϕ( x,η)⟩ The wave function in this basis of eigenstates is denoted by Ψ[ϕ( x,η), η], and given by Ψ[ϕ( x,η), η]=⟨ϕ( x,η)|0⟩ where |0⟩ is the Bunch Davies vacuum. We note that besides its dependence on the ϕ( x,η) the wave function also depends explicitly on time η. The holographic correspondence relates this wave function to the generating function in the CFT once a source is identified in terms of the value the bulk field takes at late times, ϕ( x,η). We denote the time at which we are calculating the wave function to be η=η_1 below. Note that η_1→ 0. Let us first start with the free field case, we will add interactions in the next subsection. The wave function for the free theory was calculated in general in eq.(<ref>). We write the late time value of the field ϕ(𝐤,η) as ϕ(𝐤,η)=f^∗_ν(k,η) f_ν^∗(k,η_1) (-η_1)^d2-νϕ̂(𝐤) And identify ϕ̂( k) with the source in the boundary theory. Note that this is very similar to what is done in the AdS case. For Euclidean AdS_d+1, working in Poincaré coordinates, with AdS radius set to unity, ds^2=1 z^2[dz^2 +(dx^i)^2] we get that the solution to the wave equation for a scalar of mass M which vanishes at the Poincaré horizon is given by ϕ( k,z)e^i k· x where ϕ( k,z) is given in terms of the modified Bessel function as ϕ( k,z)=z^d2K_ν(kz)ϵ^d2 K_ν(kϵ)ϵ^d2-νϕ̂( k) with ν=√(d^24+M^2) and z=ϵ being the location of the boundary. ϕ̂( k) which appears on the RHS of eq.(<ref>) then acts like the source term for CFT correlation functions. We should also mention that as a result of the identification in [A similar comment applies for the AdS case, eq.(<ref>).] eq.(<ref>) the bulk field at time η=η_1 is related to the source ϕ̂ by the relation in position space ϕ( x,η_1)=(-η_1)^d2-νϕ̂( x) We see that this relation is local in position space along the hypersurface η=η_1. Let us also note before proceeding that from eq.(<ref>), eq.(<ref>) we see that the two solutions to the wave equation asymptotically go like (-η)^d2-ν, (-η)^d2+ν, and since ν is real and positive the mode falling off as (-η)^d2-ν, which is the analogue of the non-normalisable mode in AdS space, will dominate near η→ 0 in f_ν(k,η). 
Comparing with eq.(<ref>) we see that ϕ_2( k)=(-η_1)^d2-ν f_ν^∗(k,η_1) ϕ̂( k) Inserting this in eq.(<ref>) gives the wave function at late time ψ[ϕ̂,η]=𝒩exp[i 2∫d^d𝐤/(2π)^d  (-η_1)^1-2νf_ν^∗(k,η)ḟ_ν^∗(k,η) f_ν^∗(k,η_1)^2 ϕ̂(𝐤)ϕ̂(-𝐤)] where f_ν( k,η) is given in eq.(<ref>). We see that at η=η_1, eq.(<ref>) reads ψ[ϕ̂,η_1]=𝒩exp[i 2∫d^d𝐤/(2π)^d  (-η_1)^1-2νḟ_ν^∗(k,η_1) f_ν^∗(k,η_1) ϕ̂(𝐤)ϕ̂(-𝐤)] Putting in the explicit form for f_ν(k,η) for η=η_1 and then expanding about η_1=0 leads to ψ[ϕ̂,η]=𝒩exp[-i/2∫d^d𝐤/(2π)^d ϕ̂(𝐤)[(d/2-ν)(-η_1)^-2ν+β^∗_ν/α^∗_ν(2ν) k^2ν]ϕ̂(-𝐤)] where we have dropped sub-leading terms and the coefficients α_ν, β_ν above are given in valtab. Note that in obtaining the last expression we have done an expansion at small η. The first term in the wave function is a contact term. Neglecting it for now we have ψ[ϕ̂,η_1]=𝒩exp[1/2∫d^d𝐤/(2π)^dd^d𝐤'/(2π)^d ϕ̂(𝐤)⟨ O(𝐤)O(𝐤')⟩ϕ̂(𝐤')] where the coefficient function ⟨ O(𝐤)O(𝐤')⟩=-i (2π)^dδ^d(𝐤+𝐤')β_ν^∗/α_ν^∗2ν k^2ν Note that the momentum dependence on the RHS is of the correct form for the position space field O( x) to have dimension, Δ=d/2+ν where ν is given in eq.(<ref>). Also note that the coefficient in eq.(<ref>) is given by, eq.(<ref>), β_ν^∗α_ν^∗=-π 2^-2ν e^i πνν(Γ[ν])^2sinπν and as a result eq.(<ref>) takes the form ⟨ O(𝐤)O(𝐤')⟩=i (2π)^dδ^d(𝐤+𝐤')π 2^-2ν+1e^iπν (Γ[ν])^2sinπνk^2ν And the wave function eq.(<ref>) (without the contact term) becomes ψ[ϕ̂,η_1]=𝒩exp[- 2^-2νπΓ[ν]^2(1-i (πν)) ∫d^d𝐤/(2π)^d ϕ̂(𝐤)ϕ̂(-𝐤)k^2ν] The negative sign in the real part of the exponent leads to an exponentially decaying wave function and thus a normalisable wave function. Of course this is to be expected, but it is worth mentioning a few details leading to this conclusion. Note that by evaluating eq.(<ref>) at η=η_1 we get ϕ( k,η_1)=(-η_1)^d2-νϕ̂( k) The field ϕ( x,η) is real and thus ϕ( k,η_1)^∗ =ϕ(- k,η_1), from here it follows that ϕ̂ also satisfies the relation ϕ̂^∗( k)=ϕ̂(- k) and therefore ϕ̂(- k)ϕ̂( k)=|ϕ̂( k)|^2 is positive definite, leading to the conclusion stated above. Carrying out a Fourier transform of eq.(<ref>) gives in position space ł O( x)O( y)=̊e^i π (2ν-1) 2 2 νΓ[d2+ν]π^d2Γ[ν]1 | x- y|^d+2ν The phase factor in front shows that in general the two point function violates reflection positivity. A consequence of reflection positivity, see for example <cit.>, is that ł O( x)O( x^θ)≥̊0 where x^θ is related to the point x by reflection about a coordinate axis, say x^1=0. This condition is clearly violated by eq.(<ref>) due to the phase factor in front- we can take the two points to be at (0, x_2, ⋯ x_n) and (0,- x_2,⋯,,- x_n) for example. The absence of reflection positivity means that in the dS case the correlation functions of the Euclidean field theory cannot be continued to the correlators of a Hermitian operator in a Lorentzian field theory Note that the momentum space correlator eq.(<ref>) is valid for non-integer ν. When ν∈ℤ the leading no-contact term comes from a term going like k^2νlog(k). The position space correlator continues to be given by eq.(<ref>) in this case though, as is discussed in appendix <ref>, dsx2pt. A few more comments are worth making here. The contact term in eq.(<ref>) cannot actually be removed in the dS case, as was discussed earlier. It gives rise to a local term in position space which is divergent when η_1→ 0, ψ∼exp-i 2(d 2-ν)(-η_1)^-2ν∫ d^d xϕ̂( x)^2 and results in a contact term in the two point correlator ł O( x)O( y)$̊ with an imaginary coefficient. 
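The ratio entering this correlator, β^∗_ν/α^∗_ν=-π 2^(-2ν)e^(iπν)/(νΓ[ν]^2 sin(πν)), follows from the explicit α_ν, β_ν given earlier; a short mpmath check with an illustrative non-integer ν:

import mpmath as mp

nu = mp.mpf('0.37')             # illustrative, non-integer
alpha = (2**(nu - 1)*mp.gamma(nu)/mp.sqrt(mp.pi)
         *mp.exp(mp.j*mp.pi/2*(nu - mp.mpf('0.5'))))
beta = (-2**(-nu - 1)*mp.sqrt(mp.pi)/(mp.gamma(nu + 1)*mp.sin(mp.pi*nu))
        *mp.exp(-mp.j*mp.pi/2*(nu + mp.mpf('0.5'))))
ratio = mp.conj(beta)/mp.conj(alpha)
closed = (-mp.pi*2**(-2*nu)*mp.exp(mp.j*mp.pi*nu)
          /(nu*mp.gamma(nu)**2*mp.sin(mp.pi*nu)))
print(abs(ratio - closed))      # vanishes to numerical precision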
This term, like the finite one discussed above, is also invariant under scaling. To see this note that from eq.(<ref>), and the fact thatϕ̂( x)is the source forO( x), it follows thatϕ̂( x)has dimension Δ_-=d2-ν and transforms under a scaling transformation, x→ x/λ, byϕ̂( x)→λ^Δ_-ϕ̂(λ x), since this would leave∫ d^dx ϕ̂( x) O( x)invariant. It then follows that the contact term is also invariant under the scaling transformation once the cut-offη_1transforms asη_1→η_1λ. Also, and this point will be important when we will consider interactions, for example in the next subsection, we are interested in calculating the wave function in the basis of field eigenstates|ϕ( x,η)⟩, as was mentioned at the beginning of this section, and in expressing it in terms of a sourceϕ̂. In the free theory this source was defined by the relation eq.(<ref>) in terms of the bulk fieldϕ( k,η). Once interactions are added we will continue to take source to be defined by the relation eq.(<ref>), i.e. the definition of the source in terms ofϕ( x,η)will be uncorrected by the interactions. We will find that with this definition the coefficient functions satisfy the standard CFT Ward identities for fields in a CFT with dimensionsΔ_+. §.§.§ Other Representations in the Overdamped Case Here we consider some other representations of the wave function. For a more extensive discussion see Appendix <ref>. Momentum Space Representation In the overdamped case the two solutions at late time both fall off as(-η)^d2±ν, as discussed above. This is actually reminiscent of the behaviour in AdS space for a field with a negativeM^2in the mass range -d^2 4<M^2<-d^2 4+1, see Appendix <ref>. As was discussed in <cit.> scalars in the range eq.(<ref>) in AdS space can be quantised in two alternate ways and correspondingly the source can be chosen to correspond to either of the two fall-off. The fall off,z^d2-ν, corresponds to a source for an operator of dimensionΔ_+=d2+ν, wherezis the Poincaré coordinate in AdS, metads andν=√(d^24+M^2). While the fall off,z^d2-ν, corresponds to the source for an operator of dimensionΔ_-=d2-ν. The correlation functions in the two cases are related to each other by a Legendre transformation. In the dS case, instead of a Legendre transformation it is more natural to consider a Fourier transformation which takes the wave function in the field eigenstate basis to one in the basis of its conjugate momentum eigenstate, see Appendix <ref>. This Fourier transformed wave function will continue to be normalisable. Carrying out the Fourier transformation starting with eq.(<ref>), we get altovw restated below W[π( k,η_1),η_1]=exp[i∫d^d𝐤/(2π)^d π(𝐤,η)((-η)^d(ℱ_ν(k,η))^∗/2η∂_η(ℱ_ν(k,η))^∗)π(-𝐤,η)] If we define a sourceπ̂( k)in terms of the asymptotic behaviour ofπ( k,η_1)by the relation π( k,η_1)=(-η_1)^-d2-νπ̂( k) we get W[π̂( k),η_1]=exp[1/2∫d^d𝐤/(2π)^d π̂(𝐤)[i(-η_1)^-2ν(d/2-ν)-β^∗_ν/α^∗_ν(2iν)(d/2-ν)^2 k^2ν]π̂(-𝐤)] whereβ^∗_να_ν^∗is given in ratalbe. From this we see the two point function for the operator sourced byπ̂( k)łO̅( k)O̅(- k)=̊-β^∗_ν/α^∗_ν(2iν)(d/2-ν)^2 k^2ν Interestingly, this operator is also of dimensionΔ_+, overdim, although the coefficient in front is different from that whenϕ̂is the source, eq.(<ref>). Similarly, the cut-off dependent term in eq.(<ref>) also has a coefficient different from that in eq.(<ref>). 
Also note that had we directly Fourier transformed the wave function as expressed in terms ofϕ̂( k), bounwvpexp, we would have obtained the same result as in eq.(<ref>), after a suitable field redefinition. Coherent State Representation We can also consider the wave function in the coherent state representation. In fact this representation will be very useful in the underdamped case we will consider subsequently. For the over damped case this representation is discussed further in Appendix <ref>. The summary is that one finds that the eigenvalue of the coherent state can also be related suitably to a source in the dual theory and the dual operator then also turns out to have dimensionΔ_+, overdim. §.§.§ Interaction in the Overdamped Case Now let us turn to considering interactions in the overdamped case. Specifically, we consider adding aλϕ^ninteraction to the free theory in the overdamped domain. The resulting correction to the wave function was discussed in eq.(<ref>), eq.(<ref>), at first order inλ. Inserting the asymptotic value of the field in the form given in eq.(<ref>) then leads to iδ S_n|=iλ∫∏_i=1^nd^d𝐤_i/(2π)^d(2π)^dδ^(d)(Σ_j=1^n𝐤_j) I(k_1,k_2,..,k_n,η_1)/∏_i=1^n f_ν^∗(k_i,η_1)(-η_1)^n(d/2-ν)∏_i=1^nϕ̂(𝐤_i) In general the integral on the RHS can be divergent depending on the value ofν, as is discussed in section <ref> and <ref>. Once the divergent terms are removed the finite part, obtained by retaining the finite contribution in the integralI(k_1, k_2⋯, k_n,η_1)which we denote asI(k_1,k_2,⋯ k_n), see section <ref> and the leading behaviour off_ν^∗ (k,η_1), takes the form iδ S_n|=iλ∫∏_i=1^nd^d𝐤_i/(2π)^d(2π)^dδ^(d)(Σ_j=1^n𝐤_j)∏_i=1^n k_i^ν/(α_ν^∗)^nI(k_1,..,k_n)∏_j=1^nϕ̂(𝐤_j) + O(η_1^2ν) whereα_νis given in valtab. We can rewrite eq.(<ref>) in the following way logδψ[ϕ̂]≡ iδ S_n=1 n!∫∏_i=1^nd^d𝐤_i/(2π)^d⟨ O(𝐤_1)...O(𝐤_n)⟩∏_j=1^nϕ̂(𝐤_j) where thenpoint function ⟨ O(𝐤_1)...O(𝐤_n)⟩=n!iλ( α_ν^∗ )^n (2π)^dδ^(d)(𝐤_1+...+𝐤_n)(∏_i=1^n k_i^ν) I(k_1,k_2,⋯ k_n) Forn=3, this becomes ⟨ O(𝐤_1)O(𝐤_2)O(𝐤_3)⟩= iλ3! (α_ν^∗)^3 (2π)^dδ^(d)(Σ_j=1^3𝐤_j)(∏_i=1^3 k_i^ν) I(k_1,k_2,k_3) ⟨ O(𝐤_1)O(𝐤_2)O(𝐤_3)⟩'=iλ3! (α_ν^∗)^3(∏_i=1^3 k_i^ν) I(k_1,k_2,k_3) where prime indicates we are dropping the(2π)^dδ^(d)(Σ_j=1^3𝐤_j)term. The integral above was discussed in the previous section, see eq.(<ref>) and discussion thereafter. It gives the standard three point function for primary fields of dimensionΔ=d/2+ν, see reference <cit.>. For a definition of primary fields see Appendix <ref>. Note also that the result for the correction to the wave function in then=3case was derived directly in the interaction picture in Appendix <ref> and agrees with eq.(<ref>). In fact the calculation in the appendix can be easily generalised to the case for generalnas well and shown to agree with what we have obtained above. It is also worth noting that if we calculate the wave function by evaluating the bulk on-shell action after solving the equations of motion, then the solution for the bulk field changes at O(λ)from its value in the free case. But since this change must vanish atη=η_1, to ensure that the field continues to take a fixed value,ϕ( k,η_1)atη=η_1, it does not lead to an additional contribution to the action at this order, see Appendix <ref>. §.§ Underdamped Case This case presents us with new features and is the main focus of our study. Note that hereνgiven in eq.(<ref>) is purely imaginary. We re-write it as ν=i√(M^2-d^2/4)≡ iμ whereμis a real quantity. 
As a result, eq.(<ref>) becomes in the form defFu and the asymptotic behaviour of the solution off_μ(k,η)given in eq.(<ref>) is a sum over two modes of the form deffu with both modes going like(-η)^d2± i μare equally important asη→0, unlike the overdamped case where one mode decays faster than the other. This feature is the essential reason why we will need a different map between the bulk field and a source for the CFT in this case.   We start with the free theory wave function eq.(<ref>): ψ[ϕ,η]=𝒩exp[i∫d^d𝐤/(2π)^d ϕ(𝐤,η)(∂_ηℱ_μ^∗(k,η)/2(-η)^d-1ℱ_μ^∗(k,η))ϕ(-𝐤,η)] whereℱ_μ(k,η)is given in eq.(<ref>). Now suppose we try to identify a source in the boundary theoryϕ̂( k), as was done in the overdamped case, eq.(<ref>) as follows ϕ( k,η)=f^∗_μ(k,η) f_μ^∗(k,η_1)(-η_1)^d2-iμϕ̂( k) Eq.(<ref>) at a late timeη_1then takes the form ψ[ϕ̂( k),η_1]=𝒩exp[i2∫d^d k (2π)^d ϕ̂(- k) ϕ̂( k) (-η_1)^1-2iμḟ^∗_μ(k,η_1) f^∗_μ(k,η_1)] similar to eq.(<ref>). However, unlike the overdamped case the factor off_μ^∗(k,η_1)in the denominator of the exponent cannot be expanded in a power series at smallη_1, since as was mentioned above it is a sum over two modes going like(-η_1)^d2± i μand both modes are equally important asη_1→ 0. As a result an attempt to relateϕ( k,η)to a source for an operator with a fixed anomalous dimension in the CFT, will not work here.   Instead we will find that the wave function when expressed in a suitable coherent state basis, rather than the basis of the field operator, eq.(<ref>), lends itself to an interpretation as a generating function for correlation functions in the CFT. And the eigenvalue of the coherent state, instead ofϕ( k,η), the eigenvalue of the field operatorΦ( k,η), can be conveniently related to a source in the CFT.   In this subsection and the next we will first discuss how to compute the wave function in the field basis in a convenient way. Then in the following section we will discuss how to construct the wave function in the basis of coherent states, starting fromΨ[ϕ( x,η_1),η_1]. For computing the wave functionΨ[ϕ( x,η_1),η_1]it is convenient to define a variableJ_+( k,η_1)in terms of which the field ϕ(𝐤,η_1)=f_μ^∗(k,η_1) k^iμ J_+( k) It will be convenient to expressΨin terms ofJ_+. In the following section we will find thatJ_+is in fact the eigenstate of the coherent state. Using eq.(<ref>) we find at late times that this leads to ϕ(𝐤,η)= (-η)^d2-iμα_μ^∗[1+ β_μ^∗α_μ^∗(-η k)^2i μ ] J_+( k) The wave function eq.(<ref>) now gives, ψ[J_+,η]=𝒩exp[i∫d^d𝐤/(2π)^d J_+( k)(k^2iμℱ_μ^∗(k,η)∂_ηℱ_μ^∗(k,η)/2(-η)^d-1)J_+(- k)] Inserting the late time behaviour ofℱ_μ^∗(k,η)∂_ηℱ_μ^∗(k,η)from eq.(<ref>) as, multlineḟ_μ^∗(k,η)f_μ^∗(k,η)=-(-η)^d-1[α_μ^∗(d/2-iμ)(-kη)^-iμ+β_μ^∗(d/2+iμ)(-kη)^iμ] ×[α_μ^∗(-kη)^-iμ+β_μ^∗(-kη)^iμ] whereα_μandβ_μare given in defabu. We get that the exponent of eq.(<ref>) becomes, ∫d^d k (2π)^d J_+(𝐤)(k^2iμḟ_μ^∗(k,η)f_μ^∗(k,η)/2(-η)^d-1)J_+(-𝐤) = -∫d^dk (2π)^d( 1/2)[(α_μ^∗)^2(d/2-iμ)(-η)^-2iμJ_+(𝐤)J_+(-𝐤)+. .(β_μ^∗)^2(d/2+iμ)k^4iμ(-η)^2iμJ_+(𝐤)J_+(-𝐤)+α_μ^∗β_μ^∗ d  k^2iμJ_+(𝐤)J_+(-𝐤)] The first two terms on the RHS depend on the cut-off. The last term is of the form of a CFT two-point function. 
Keeping only this last term for now we get ψ[J_+] = exp[1/2∫d^d k/(2π)^dd^d k'/(2π)^d[J_+(𝐤) J_+(𝐤') ⟨Ô_+(𝐤) Ô_+(𝐤')⟩]] where ⟨Ô_+(𝐤)Ô_+(𝐤')⟩=-i(2π)^dδ^d(𝐤+𝐤') dα_μ^∗β_μ^∗ k^2iμ In position space, using defabu, this becomes łÔ_+( x)Ô_+( y)=̊d4μe^-πμ(1+(πμ))2^2iμΓ[d2+iμ]π^d2Γ[-iμ]1| x- y|^d+2iμ We note that⟨Ô_+Ô_+ ⟩is of the correct form to be the two point function in a CFT for an operator of dimensionΔ_+with anomalous dimension Δ_+=d/2+ iμ Importantly,Δ_+is complex.   Turning to the cut-off dependent terms in eq.(<ref>) we see that the first term on the RHS is local in position space and of the standard form expected from a local counterterm. However the second term going like∫d^d k (2π)^d J_+( k)J_+(- k) (-η)^2iμ k^4i μis quite ugly- taking the form in position space, ∼ (-η)^2iμ∫ d^d x d^d yJ_+( x) J_+( y) | x- y|^d+4iμ showing that it is both cut-off dependent and non-local in space. It will turn out that this term will disappear when we go to the coherent state basis! A few comments are in order before we close this subsection. Note firstly that the identification we have made between the the bulk field and the source in eq.(<ref>) above is not local along theη_1hypersurface, unlike in the overdamped case, eq.(<ref>). In fact in position space, atη=η_1, eq.(<ref>) takes the form ϕ( x,η_1)=(-η_1)^d2-iμα_μ^∗ J_+( x) + (-η_1)^d2+iμβ_μ^∗4μ e^πμ d(1+(πμ))∫ d^d yłÔ_+( x)Ô_+( y)J̊_+( y) wherełÔ_+( x)Ô_+( y)$̊ is given in eq.(<ref>), which shows the non-local relation between ϕ and J_+ as a function of the position coordinates x, y. In section <ref> when we discus the Ward identities further we will need to define the relation in eq.(<ref>) more carefully, due to this non-local nature, for a more general hypersurface. Second, note that instead of eq.(<ref>) we could have made the following identification ϕ( k,η_1)=f_μ^∗(k,η_1) J_- ( k) k^-iμ between the bulk field and a source J_-( k). Comparing eq.(<ref>) and eq.(<ref>) we see that the relation between J_+ and J_- is given by J_+( k)=k^-2iμJ_-( k) which is non-local in position space. Expressing the wave function in terms of J_- we get eq.(<ref>) with J_+ replaced by J_- using eq.(<ref>). The first term, proportional to (α_μ^∗)^2, is now cut-off dependent and non-local in position space, the second term, proportional to (β_μ^∗)^2, is cut-off dependent and local, and the third term, proportional to α_μ^∗β_μ^∗, corresponds to a CFT two point correlator for an operator of dimension Δ_- =d2-i μ So we see that while both identifications of the source as J_+ or J_- of course have the same physical content as far the wave function is concerned they, interestingly, lead to different interpretations in terms of a CFT dual. Namely as the source for operators with dimension d2± i μ respectively. We will have more to say about some of these issues in the subsequent sections when we discuss the Ward identities etc. Finally, below we will also explore the wave function in the coherent state representation. It will turn out that this coherent state description is more closely tied to using the identification made in eq.(<ref>) and the source in the CFT, in the coherent state representation, which we will denote by ρ^∗, up to a simple scaling factor, will be related to J_+, instead of J_-, see section <ref>. The scaling factor relating ρ^∗ to J_+ is given in eq.(<ref>). 
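The passage from the momentum-space coefficient -i d α_μ^∗β_μ^∗ to the position-space one above uses α_μ^∗β_μ^∗ = i e^(-πμ)(1+coth(πμ))/(4μ), which can be checked numerically from the explicit expressions for α_μ, β_μ; a sketch with an illustrative μ:

import mpmath as mp

d, mu = 3, mp.mpf('0.8')
C = mp.sqrt(mp.pi)/2*mp.exp(-mp.pi*mu/2 + mp.j*mp.pi/4)
alpha = C*2**(-mp.j*mu)*(1 + mp.coth(mp.pi*mu))/mp.gamma(1 + mp.j*mu)
beta = C*(-mp.j)*2**(mp.j*mu)*mp.gamma(mp.j*mu)/mp.pi
lhs = -mp.j*d*mp.conj(alpha)*mp.conj(beta)
rhs = d*mp.exp(-mp.pi*mu)*(1 + mp.coth(mp.pi*mu))/(4*mu)
print(abs(lhs - rhs))   # vanishes; in particular the product is real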
We should mention, before proceeding, that in the coherent state basis the coefficient in the correlation function will be different from eq.(<ref>) even after accounting for this scaling factor. §.§.§ Including Interaction We now show how the correspondence in the underdamped case can be extended in the presence of ϕ^n type interactions discussed above. Our discussion will be in terms of the source J_+, a similar analysis also applies in terms of J_- and the results can be obtained using the relation between the J_+, J_- given in eq.(<ref>). The additional contribution to the exponent in the wave functional due to a ϕ^n interaction is i δ S_n[ϕ(𝐤,η)]=i λ∫∏_i=1^nd^d𝐤_i/(2π)^d(2π)^dδ^(d)(𝐤_1+...+𝐤_n)[I(k_1,k_2,..,k_n)∏_i=1^nϕ(𝐤_i,η)/ℱ_μ^∗(k_i,η)] as discussed in eq.(<ref>). Following eq.(<ref>) we now replace ϕ( k,η) in terms of J_+( k) leading to i δ S_n[J_+]=iλ∫∏_i=1^nd^d𝐤_i/(2π)^d(2π)^dδ^(d)(Σ_j 𝐤_j) I(k_1,k_2,..,k_n)∏_a=1^n{ J_+(𝐤_a)k_a^iμ} From eq.(<ref>) we see that the coefficient function for the J_+^n terms is ⟨Ô(𝐤_1)..Ô(𝐤_n)⟩= i λ n! (2π)^d δ^(d)(𝐤_1+..+𝐤_n)∏_j=1^nk_j^iμ I(k_1, ⋯ k_j, ⋯ k_n) with the interaction part of the wave function being logδψ[J_+]=iδ S_n=1/n!∫∏_i=1^nd^d𝐤_i/(2π)^d⟨Ô(𝐤_1)..Ô(𝐤_n)⟩∏_b=1^n J_+ (𝐤_b) As was mentioned above we will actually identify the source in the coherent state basis in the following section, but we have written the coefficient function suggestively as an expectation value of fields above, because, after a trivial rescaling, the source will be turn out to be equal to J_+, and the interaction term in the coherent state basis at leading order will actually be the same as eq.(<ref>), up to this rescaling factor. The integral I(k_1,⋯ k_n) was defined in eq.(<ref>). More correctly to compute the wave function at time -η_1 we would need to compute this integral, eq.(<ref>), with the upper limit being -η_1, instead of 0, I_n(k_1, ⋯ k_n)=∫_-∞^-η_1dη' (-η')^d+1∏_i F_μ^∗(k_i,η') However we will see in the next subsection that when all fields are under damped there is no divergence at late times and accordingly we can take the upper limit to vanish. §.§ Divergences Here we briefly consider divergences which can arise at late times, when η→ 0. We will focus on scalar fields but similar conclusions are valid more generally. The free scalar case has already been discussed above. As an example of interactions we take the ϕ^n term discussed above, a similar discussion applies in other cases as well. In the AdS case the divergences can all be removed after holographic renormalisation; once this is established the divergences are not of much interest in the study of the AdS/CFT correspondence. However in the dS case the divergences should not be removed, and some discussion on isolating and dealing with them is worthwhile. We will mostly consider the three point interaction below, this example illustrates the features present more generally. A three point function arises from the cubic term in the action, δ S_3=∫ d^d+1x√(-g)ϕ_1( x,η) ϕ_2( x,η) ϕ_3( x,η) We take the three fields to be different in general. From eq.(<ref>), eq.(<ref>) we see that in general the contribution to the wave function cubic in the sources takes the form δlog(Ψ)∝∫(∏_i=1^3d^d k_i(2π)^d) (2π)^d δ(∑_i=1^3 k_i) I(k_1, k_2, k_3,η_1 )∏_i ϕ_i( k_i,η_1) F_ν_i^∗(k_i,η_1) If the i^ th field is overdamped, ν_i is related to its mass M_i by eq.(<ref>) and ϕ( k,η_1) is related to its source ϕ̂( k) by eq.(<ref>). 
If it is under damped ν_i=i μ_i, with μ given in eq.(<ref>) and the field is related to its source J_+ by eq.(<ref>). The integral I(k_1,k_2,k_3) is given by, eq.(<ref>), I(k_1,k_2,k_3)=∫_-∞^-η_1dη' (-η')^d+1∏_i=1^3 F_ν_i^∗(k_i,η') Here we have put in a late time cut-off at -η_1 and we are interested in examining what happens as it vanishes. When all three fields are over damped the asymptotic behaviour of F_ν_i (k,η) is given in eq.(<ref>). The leading divergence is obtained by keeping the leading behaviour, i.e., the first term on the RHS in eq.(<ref>), for all three fields. f_ν(k,η)≃ℂ_ν(-i 2^νΓ [ν] /π)(-η)^d2-ν(k)^-ν where ℂ_ν is given in defC1. The behaviour of the integral eq.(<ref>) is divergent if d2<∑_i ν_i. It is easy to see that this divergence results in a term in the wave function which is local in position space δlog(Ψ) ∼1 (-η_1)^ν_1+ν_2+ν_3-d2∫ d^d xϕ̂_1( x) ϕ̂_2( x) ϕ̂_3( x) A sub leading divergence can arise if we take two of the F_ν_i( k_i,η) functions in eq.(<ref>) to have their leading behaviour but the third say i=3 to be sub leading, i.e. to have the form corresponding to the second term given in eq.(<ref>). Or if we keep a sub leading term in one of the F_ν_i^∗(k_i,η_1) denominator terms which appear as part of the ∏_iϕ_i( k_i,η) F_ν_i^∗(k_i,η_1) product in eq.(<ref>). This gives rise to a contribution in I(k_1,k_2,k_3) which is divergent if d2<ν_1+ν_2-ν_3 The resulting term in the wave function is δlog( Ψ) ∝1 (-η_1)^ν_1+ν_2-ν_3-d2∫ d^d x d^d yϕ̂_1( x)ϕ̂_2( x) 1 | x- y|^d+2ν_3ϕ̂_3( y) We see that this term is of the form of an additional contribution to the two point function for the operator O_3, so that in effect the divergent term can be absorbed by redefining the source ϕ̂_3( x) →ϕ̂ _3( x) + c 1 (-η_1)^ν_1+ν_2-ν_3-d2ϕ̂_1( x) ϕ̂_2( x) Another way to say this is that the divergence arises due to operator mixing, the operators O_1 and O_2 mix with O_3 when they come close in position space. In contrast, when all three fields are underdamped it is easy to see that there is no divergence from the η_1→ 0 region in the integral. If two fields are overdamped and one, say i=3, is underdamped, then we get divergent terms if d2<ν_1+ν_2 In this case taking the term in f_ν_3^∗(k_3,η') which goes like (-η')^d2(-k_3η')^i μ, i.e. the second term in eq.(<ref>), we get a contribution analogous to eq.(<ref>) above, with δlog(Ψ)∝1 (-η_1)^ν_1+ν_2-d2+i μ∫ d^d xϕ̂_1( x)ϕ̂_2( x) 1 | x- y|^d+2 i μJ_+,3( y) which is of the form of a contribution to the two point function for the operator Ô_+,3. On the other hand taking the term in f_ν_3^∗(k_3,η') which goes like (-η')^d2(-k_3η')^-i μ, i.e. the first term in eq.(<ref>), we get a divergent term which is local in position space, δlog (Ψ)∝1 (-η_1)^ν_1+ν_2-d2-i μ∫ d^d xϕ̂_1( x)ϕ̂_2( x)J_+,3( x) In both eq.(<ref>) and eq.(<ref>) the dependence on the cut-off η_1 involves a complex exponent, as is needed from dimensional analysis, since the operator Ô_+ has a complex anomalous dimension Δ_+, eq.(<ref>). Finally the finite part for the cubic term, which is independent of η_1, has a form which is fixed by conformal invariance. In Appendix <ref> we discuss how this term can be calculated by analytic continuation and give the resulting expression for all values of the anomalous dimension in dS space. The analysis above can be extended to more general ϕ^n interactions in a similar way. The divergences which arise can be understood as corresponding to local counter terms terms or due to operator mixing. 
Before ending, let us come back to contributions in the wave function only involving underdamped fields. Consider a divergence which arises in a Witten diagram with an interaction vertex of the ϕ^n type, involving n factors of the underdamped bulk field ϕ. This results in a contribution going like δlogΨ∼∫ dη'/(-η')^{d+1} (-η')^{n d/2} (-η')^{± i μ n} and we see that the integral converges for η'→ 0 as long as n>2. This shows that all contributions to higher point functions, and also to higher loops in the two point function, will be finite. Note that the estimate in eq.(<ref>) is valid when all n lines meeting at the vertex are bulk to boundary propagators, as shown in Figure <ref>, or when some lines are bulk to boundary propagators and others are bulk to bulk propagators. This is true since a bulk to bulk propagator G( x,η; x',η') also behaves like (-η')^{d/2± i μ} when η'→ 0. § COHERENT STATE BASIS FOR THE UNDERDAMPED FIELDS In this section we discuss the wave function of the Bunch Davies vacuum in a coherent state basis for the underdamped fields. As mentioned above, the wave function in this basis will be identified with the generating function for CFT correlations. The coherent states of interest are identified as follows. As discussed in section <ref>, the Hankel functions H^{(1,2)}_{iμ} take the asymptotic form, near the Poincaré horizon, H^{(1)}_{iμ}(-kη)∼ e^{-ikη}, H^{(2)}_{iμ}(-kη) ∼ e^{+ikη} and therefore behave as positive and negative frequency modes with respect to ∂_η. Keeping this in mind, in the mode expansion for ϕ( x,η), eq.(<ref>), the coefficient of the term containing H^{(1)}_{iμ} was related to the destruction operator a_ k, and H^{(2)}_{iμ} to the creation operator a^†_ k. However, near the future boundary, where η→ 0, the behaviour of the Hankel functions is different. The more natural time variable in this region is t=-log(-η), in terms of which the metric ds^2=-dη^2/η^2 + 1/η^2[∑_i(dx^i)^2] becomes of FRW type ds^2=-dt^2 +e^{2t}[∑_i(dx^i)^2]. The asymptotic behaviour of H^{(1)}_{iμ} in this region in terms of t then takes the form ℂ_μ H^{(1)}_{iμ}(-kη)= α_μ k^{iμ} e^{-iμ t} +β_μ k^{-iμ} e^{iμ t} where the coefficient ℂ_μ is given in eq.(<ref>) and α_μ, β_μ are given in eq.(<ref>). We see that H^{(1)}_{iμ} and H^{(2)}_{iμ} contain both positive and negative frequency modes of ∂_t. This motivates us to consider a different notion of creation and destruction operators, ã( k), ã^†( k), which correspond to modes with positive and negative frequencies with respect to t. These are related to a_ k, a^†_ k by a Bogoliubov transformation. After some algebra one gets ã( k) = k^{iμ}/√(|α_μ|^2-|β_μ|^2)[α_μ a_ k+ β_μ^∗ a^†_{- k}] ã^†( k) = k^{-iμ}/√(|α_μ|^2-|β_μ|^2)[α_μ^∗ a^†_ k+ β_μ a_{- k}] From eq.(<ref>) it follows that these operators also satisfy the commutation relation [ã( k), ã^†( k')]=(2π)^dδ^d( k- k') In terms of Φ( k,η) and Π( k,η), the momentum modes for the field Φ and its conjugate momentum Π, eq.(<ref>), eq.(<ref>), one finds that ã( k)= [1/((-η)^{Δ_+}γ)] [(d/2-iμ)Φ( k,η)+(-η)^dΠ( k,η)] where γ=-2iμ/√(|α_μ|^2-|β_μ|^2)=-i √(2μ) Before proceeding we note that in position space eq.(<ref>) becomes ã( x)= [1/((-η)^{Δ_+}γ)] [(d/2-iμ)Φ( x,η)+(-η)^dΠ( x,η)] Now consider a coherent state which is an eigenstate of ã( k) with eigenvalue ρ( k), ã( k)|ρ⟩=ρ( k)|ρ⟩ The eigenbasis |ϕ⟩ for the field Φ̂ was defined in eq.(<ref>). The wave function for |ρ⟩ in this basis, Ψ_ρ[φ]=⟨φ|ρ⟩, can be obtained as follows.
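Since the Bogoliubov transformation above is central to what follows, note that the normalisation by √(|α_μ|²-|β_μ|²) is exactly what preserves the commutator (the phases k^{±iμ} cancel between ã and ã†). A single-mode symbolic sketch, with [a,a†]=1 replacing the (2π)^d δ-function:

```python
import sympy as sp

# Single-mode analog: with [a, a†] = 1, the transformed operators
#   ã  = (α a + β* a†)/N,   ã† = (α* a† + β a)/N
# satisfy [ã, ã†] = (|α|² - |β|²)/N², so N² = |α|² - |β|² gives [ã, ã†] = 1.
alpha, beta = sp.symbols('alpha beta')
N2 = alpha*sp.conjugate(alpha) - beta*sp.conjugate(beta)

# [ã, ã†] = α α* [a,a†] + β* β [a†,a] = |α|² - |β|², before normalisation
comm = (alpha*sp.conjugate(alpha) - sp.conjugate(beta)*beta)/N2
print(sp.simplify(comm))     # -> 1
```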
We recall, eq.(<ref>), that the momentum operator in the |ϕ⟩ basis is given by Π( k,η)=-i(2π)^d δ/δφ(- k,η) Inserting this in eq.(<ref>), we get from eq.(<ref>) that the wave function must satisfy the equation [(d/2-iμ)φ( k,η)+(-η)^d(-i(2π)^d δ/δφ(- k,η))]Ψ_ρ[φ]=ρ( k)(-η)^{Δ_+}γ Ψ_ρ[φ] The solution to eq.(<ref>) (up to an overall normalisation) is given by Ψ_ρ[φ]=exp[∫ d^d k/(2π)^d i/(-η)^d {γ(-η)^{Δ_+}ρ( k)φ(- k,η)-(Δ_-/2)φ( k,η)φ(- k,η)}] where Δ_- is given in eq.(<ref>). Before proceeding we note that the solution of eq.(<ref>) as given in eq.(<ref>) is unique up to an overall normalisation which can depend on ρ. Choosing for this normalisation a dependence on ρ^∗ which is local in position space, e.g. of the form N=exp[∫ d^d x (-η)^{2iμ}ρ^∗( x)^2], will only change the contact terms in the field theory. A more non-trivial kernel in the exponent in eq.(<ref>) is allowed by eq.(<ref>) but will be significantly constrained by the Ward identities which we discuss next. In the subsequent discussion we will work with the solution given in eq.(<ref>). The wave function for the Bunch Davies vacuum |0⟩ in the |ϕ⟩ basis, ψ[ϕ,η], was obtained in eq.(<ref>); it can be used to obtain the wave function for this vacuum in the coherent state basis, ψ[ρ], using the relation ψ[ρ,η]=⟨ρ|0⟩=∫ Dφ ⟨ρ|φ⟩⟨φ|0⟩=∫ Dφ Ψ_ρ^∗[φ]ψ[φ,η] From eq.(<ref>), eq.(<ref>), we get ψ[ρ,η]=∫ Dφ exp[∫ d^d k/(2π)^d (-iγ^∗(-η)^{Δ_-}/(-η)^d ρ^∗( k))φ( k,η)] ×exp[-(1/2)∫ d^d k/(2π)^d (-iΔ_+/(-η)^d-i ∂_η f_μ(k,η)^∗/((-η)^{d-1} f_μ(k,η)^∗))φ( k,η)φ(- k,η)] where f_μ(k,η) is the late time behaviour of F_μ(k,η). Using the expression for f_μ(k,η) given in eq.(<ref>) this can be simplified to give ψ[ρ,η]=∫ Dφ exp[∫ d^d k/(2π)^d (-iγ^∗(-η)^{Δ_-}/(-η)^d ρ^∗( k))φ( k,η)] ×exp[-(1/2)∫ d^d k/(2π)^d (2μ α_μ^∗(-kη)^{-iμ}/((-η)^d[α^∗_μ(-kη)^{-iμ}+β_μ^∗(-kη)^{iμ}]))φ( k,η)φ(- k,η)] The functional integration above is Gaussian and can be carried out easily; we see it results in a non-trivial transformation on ψ[φ,η]. Solving for φ in terms of ρ^∗ using the saddle point condition, we get φ( k,η)=(1/√(2μ))[(-η)^{Δ_-}+(β_μ^∗/α_μ^∗)k^{2iμ}(-η)^{Δ_+}]ρ^∗(- k) where we have put in the value of γ given in eq.(<ref>). The resulting value of the integral (up to a normalisation which we are not keeping track of) then takes the form ψ[ρ,η]=exp[(1/2)∫ d^d k/(2π)^d ρ^∗( k)((-η)^{-2iμ}+(β_μ^∗/α_μ^∗)k^{2iμ})ρ^∗(- k)] We see that the final form of the wave function in the coherent state basis is in fact quite simple and also suggestive of an interpretation as a generating functional in a dual field theory. We identify ρ^∗( k) as the source in the boundary theory and ψ as the generating functional, and see that the expression above leads to a two-point function in the CFT which has a contact term, arising from the first term in eq.(<ref>), that is cut-off dependent, and a non-contact term, arising from the second term in eq.(<ref>), that is of the form of a two point function for an operator of dimension Δ_+, eq.(<ref>). In position space, using eq.(<ref>), the two point function can be written as ⟨O_+( x)O_+( y)⟩ = i Γ[1-iμ]Γ[d/2+iμ]/(π^{1+d/2}(1+coth(πμ))) 1/| x- y|^{d+2iμ} Comparing eq.(<ref>) and eq.(<ref>) we also see that they are in fact of the same form, and that ρ^∗( k) and J_+( k) can be identified up to a normalisation factor ρ^∗( k)=√(2μ) α_μ^∗ J_+(- k) The source J_+ first defined in section <ref> is then, as promised, found to be closely related to the eigenvalue of the coherent state. It is worth comparing the expression obtained for Ψ in the coherent state basis with what we had obtained in eq.(<ref>), eq.(<ref>) in the field eigenstate basis.
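The Gaussian integration above reduces, mode by mode, to completing the square, ∫dφ e^{Jφ-(A/2)φ²}∝e^{J²/(2A)}. The following single-mode sympy sketch (our own check; h stands for -η, and alphas, betas stand for α*_μ, β*_μ, with γ* = i√(2μ)) verifies that J²/(2A) reproduces the exponent of ψ[ρ,η]:

```python
import sympy as sp

# Single-mode analog of the Gaussian integral leading to ψ[ρ,η].
d, mu, k, h, rho = sp.symbols('d mu k h rho', positive=True)
alphas, betas = sp.symbols('alphas betas')

Dm = d/2 - sp.I*mu                                    # Δ_-
J = sp.sqrt(2*mu)*h**(Dm - d)*rho                     # linear coefficient: -iγ* h^{Δ_-}/h^d ρ*
A = 2*mu*alphas*(k*h)**(-sp.I*mu)/(h**d*(alphas*(k*h)**(-sp.I*mu) + betas*(k*h)**(sp.I*mu)))

target = sp.Rational(1, 2)*rho**2*(h**(-2*sp.I*mu) + betas/alphas*k**(2*sp.I*mu))
diff = sp.expand_power_base(J**2/(2*A) - target, force=True)
print(sp.simplify(diff))    # -> 0, i.e. the quadratic exponent of ψ[ρ,η] is recovered
```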
The most important difference is that the second term in eq.(<ref>), which is cut-off dependent and non-local, is missing in eq.(<ref>) above. As a result the remaining divergent term in eq.(<ref>) is of a conventional type, being local in space. Also, the coefficient of the cut-off independent term is different even after we account for the rescaling factor eq.(<ref>). Using eq.(<ref>) to express J_+ in terms of ρ^∗, we see that this term in eq.(<ref>) becomes δlogΨ[ρ]=(1/2)(-id/(2μ))∫ d^d k/(2π)^d ρ^∗( k)ρ^∗(- k) k^{2iμ} β_μ^∗/α_μ^∗ Comparing with the second term in eq.(<ref>) we see that there is a difference in the coefficient by a factor of -id/(2μ). This difference will be important when we consider the Ward identities in the next section. Finally, coming back to the remaining divergent term in eq.(<ref>), we note that the first term in eq.(<ref>) is also of the same form, being cut-off dependent and local, but differs in its coefficient even after the rescaling eq.(<ref>).   Our discussion above has been in the free field limit. Going further, we can incorporate interactions. Including a ϕ^n interaction, which was discussed in section <ref>, and working to first order in λ, one can easily obtain the corresponding interaction term in the coherent state basis by simply inserting eq.(<ref>) in eq.(<ref>), which has the interaction term expressed as an n-th order polynomial in J_+. This gives the coherent state wave function (see Appendix <ref>) ψ_I[ρ,η]=ψ[ρ,η]exp[iλ(√(2μ)α_μ^∗)^n∫(∏_{i=1}^n d^d k_i/(2π)^d)(2π)^dδ( k_1+⋯+ k_n)(∏_{i=1}^n k_i^{iμ})I(k_1,⋯,k_n)(∏_{i=1}^n ρ^∗( k_i))] where the integral I(k_1,⋯,k_n) is given in eq.(<ref>). From our earlier discussion in section <ref> it follows that this is of the correct form to be the n point correlator for an operator of dimension Δ_+. Explicitly, for n=3, ⟨O_+( k_1)O_+( k_2)O_+( k_3)⟩ = 3! iλ(√(2μ)α_μ^∗)^3(2π)^dδ( k_1+ k_2+ k_3)(∏_{i=1}^3 k_i^{iμ})I(k_1,k_2,k_3) More generally, going beyond leading order in λ, we would need to start with ψ[φ,η_1], with the correct coefficient function up to the required order for the ϕ^n term, and then carry out the integral over φ in eq.(<ref>). Denoting the radius of dS_{d+1} space as R_dS, we note that we have been suppressing an overall factor of R_dS^{d-1}/G_N, which appears in front of the action for the scalar field, eq.(<ref>). Once we keep track of this factor it multiplies both terms in the exponent in Ψ_ρ[ϕ], eq.(<ref>), as well as all the terms in the exponent of ψ[ϕ], as is discussed in Appendix <ref>. As a result, in the limit where R_dS^{d-1}/G_N→∞, we can evaluate the integral in eq.(<ref>) in the saddle point approximation. For this purpose it is best to express Ψ_ρ[ϕ], eq.(<ref>), in terms of J_+. Using the relation eq.(<ref>) it takes the form Ψ_ρ[J_+]=exp[∫ d^d k/(2π)^d i/(-η)^d {γ f_μ^∗(k,η) k^{iμ}(-η)^{Δ_+}ρ( k) J_+(- k)-(Δ_-/2)(f_μ^∗(k,η))^2 k^{2iμ} J_+( k)J_+(- k)}] We then also express Ψ[ϕ] in terms of J_+, which we have done in previous sections anyway. In the saddle point approximation we extremize the resulting full exponent in eq.(<ref>) as a functional of J_+ and solve the resulting equation to obtain J_+ in terms of ρ^∗ order by order in λ. Inserting this solution for J_+ and evaluating the on-shell value of the exponent in eq.(<ref>) to the required order in λ then gives the coherent state wave function. More details can be found in Appendix <ref>.
In summary, the correspondence for underdamped fields is to consider the wave function in the coherent state basis and to expand it in a Taylor series in ρ^∗, i.e. in an expansion of the form eq.(<ref>) with the source S( x) being ρ^∗( x), log(ψ[ρ,η])=∑_{n=1}^∞ 1/n! ∫∏_{j=1}^n d^d x_j ρ^∗( x_j) ⟨O_+( x_1) O_+( x_2) ⋯ O_+( x_n)⟩ The coefficient functions then correspond to correlation functions of fields in the dual CFT which have dimension Δ_+=d/2+i μ We will see below that these coefficient functions indeed satisfy the Ward identities of conformal invariance for fields with dimension Δ_+. § WARD IDENTITIES Here we will discuss various Ward identities which are satisfied by the correlators of the field theory, after an identification is made between the asymptotic value of the bulk fields and sources in the field theory, as discussed above. Compared to the AdS case there are two important things to keep in mind. First, we will also include the underdamped fields. Second, as has been argued above, in the dS case one cannot neglect the divergent terms which arise as the late time cut-off, η_1→ 0, and one is therefore dealing with a field theory which depends on a cut-off. We will argue that the cut-off dependent terms in the correlation functions are also invariant under conformal transformations, as long as the cut-off, η_1, transforms appropriately. For terms which are independent of the cut-off, the Ward identities correspond to the Ward identities of a Conformal Field Theory and will include the constraints that follow from conformal invariance, translational invariance and local scale invariance, i.e., the vanishing trace of the stress tensor. If additional global symmetries are present they will include the constraints which follow from charge current conservation. There are two ways to understand the arguments leading to the Ward identities. From the perspective of canonical quantisation we note that we are working throughout this paper in ADM gauge. Including perturbations, the metric in this gauge is given by ds^2=-dη^2/η^2 + 1/η^2 (ĝ_ij dx^i dx^j) so that the lapse function and shift function have been set to N^2=1/η^2 and N^i=0 respectively. The equations of motion obtained by varying N and N^i then lead to the conditions of time and spatial reparametrisation invariance. Any physical state, including the Bunch Davies vacuum, must satisfy these conditions, which ensure that the wave function is invariant under residual gauge transformations which preserve the ADM gauge. The resulting conditions then give rise to the Ward identities. Specifically, spatial reparametrisation invariance gives rise to the Ward identities of translational invariance, and time reparametrisation invariance gives rise to the Ward identity of local scale invariance, i.e. vanishing of the trace of the stress tensor, T^i_i=0, up to contact terms. The SO(1,d+1) isometries of dS_{d+1} space are a combination of spatial and time reparametrisation transformations and their Ward identities then also follow from the above invariances. For additional gauge symmetries, e.g. a U(1) gauge symmetry, we work in A_η=0 gauge and the Gauss law constraint on the wave function then leads, in a similar manner, to the Ward identities of charge conservation. From the path integral perspective the wave function is evaluated by carrying out a path integral subject to appropriate boundary conditions.
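For orientation, the dimensions Δ_± = d/2 ± iμ obey the familiar relations Δ_+ + Δ_- = d and Δ_+Δ_- = M² (in units where R_dS = 1, with μ as given in eq.(<ref>)). A trivial sympy check:

```python
import sympy as sp

# Underdamped dictionary: μ = sqrt(M² - d²/4), Δ_± = d/2 ± iμ.
d, M = sp.symbols('d M', positive=True)
mu = sp.sqrt(M**2 - d**2/4)
Dp, Dm = d/2 + sp.I*mu, d/2 - sp.I*mu

print(sp.simplify(Dp + Dm - d))      # -> 0  (dimensions sum to d)
print(sp.simplify(sp.expand(Dp*Dm) - M**2))   # -> 0  (mass-dimension relation)
```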
This path integral is invariant under general coordinate transformations, as long as the boundary conditions are also transformed appropriately to account for the coordinate transformations. The resulting conditions on the wave function then give rise to the Ward identities. The boundary conditions need to be imposed both on the Poincaré horizon and on the late time slice. We argued in section <ref>, eq.(<ref>), that after a suitable i ϵ prescription the wave function vanishes at the Poincaré horizon. This will continue to be true after the coordinate transformations we consider. Therefore the invariance of the path integral follows as long as the boundary values of fields are transformed appropriately on the late time slice. In practice one often works in the classical limit where the path integral can be evaluated in the saddle point approximation, i.e., by solving the equations of motion subject to the required boundary conditions. The invariance of the wave function then follows from the invariance of the on-shell action under coordinate transformations, once the boundary values are also suitably transformed. The classical limit is justified by taking R_dS^{d-1}/G_N→∞. In most of the discussion below we only consider scalar fields. The extension to other fields, scalars with higher spin, or fermions, is straightforward but we do not discuss them here. The one exception is the metric, which we include. This field is of course special since it is dual to the stress tensor in the boundary. From the point of view of our present investigation the non-trivial aspect of the analysis is as follows. Consider evaluating the late time wave function on the slice η=η_1, with the bulk scalar taking the value ϕ( x,η_1) and the metric components, eq.(<ref>), taking the value ĝ_ij( x,η_1). Under a coordinate transformation which takes ϕ → ϕ-δϕ ĝ_ij → ĝ_ij-δĝ_ij η_1 → η_1-δη_1 the invariance of the wave function Ψ leads to the condition Ψ[ϕ-δϕ,ĝ_ij-δĝ_ij, η_1-δη_1]-Ψ[ϕ,ĝ_ij,η_1]=0 To relate this to constraints on the correlation functions of the CFT one needs to find out how the source one is associating with the bulk field transforms under the coordinate transformation eq.(<ref>), obtain the invariance condition eq.(<ref>) in terms of the sources, and then finally argue that the coefficient functions which arise when the wave function is expanded in terms of the sources satisfy the identities of a CFT. When the relation between the boundary value of the fields and the source is local along the hypersurface η=η_1, as is the case for the overdamped fields, or in the AdS/CFT correspondence, obtaining the transformation laws for the source is straightforward, and the subsequent argument that the invariance of the wave function leads to the required Ward identities is also straightforward. However, if the relation between the bulk fields and the source is non-local, as happens for the identification of the source we made in the underdamped case in eq.(<ref>) or eq.(<ref>), see section <ref> for discussion, then obtaining the transformation laws for the sources is more non-trivial and formulating the arguments requires more care. In the case of the coherent state basis the relation between the bulk fields ϕ( x,η) and ∂_ηϕ( x,η) and the coherent state eigenvalue, ρ, is more straightforward and the discussion leading to the Ward identities is also then straightforward.
Accordingly, we will first consider the coherent state basis below in section <ref> and then turn to the other sources J_+ and J_- in subsection <ref>. Additional discussion and some explicit calculations are contained in Appendix <ref>. Before proceeding let us note an important conclusion one arrives at from the analysis of the Ward identities. As has been discussed above, the dual theory in dS space is Euclidean, and also breaks reflection positivity. It therefore cannot be continued in general to Lorentzian space. And the boundary fields in the Euclidean theory, to which the bulk sources couple, also cannot then be continued to operators in a Lorentzian theory. Nevertheless, we find that these fields in the boundary theory transform under conformal transformations like primary operators in a CFT. Moreover, as follows from the discussion in section <ref> and also Appendix <ref>, the nature of short distance singularities when these fields come together in correlation functions is entirely analogous to that of primary operators in a CFT, allowing us to define an expansion analogous to the operator product expansion and use the bulk correlators to extract operator product coefficients. Thus, despite the obstruction to a continuation to Lorentzian space, much of the structure of the resulting boundary theory is identical to that of conventional CFTs which admit such a Lorentzian continuation. §.§ Ward Identities in the Coherent State Representation We note that the operator ã( x), whose eigenstates we will work with in this representation, is related to Φ( x,η), Π( x,η) by the relation eq.(<ref>). The subsections below give the general arguments leading to the Ward identities. Some explicit calculations are presented in Appendix <ref>. §.§.§ Ward Identities for Spatial Reparametrisations The spatial transformations which keep us in the ADM gauge, eq.(<ref>), are given by x^i → x^i +v^i( x) where v^i( x) are infinitesimal parameters. Under it Φ( x,η) and Π( x,η) both transform as scalars, i.e., their changes are δΦ=-v^i∂_i Φ, δΠ( x,η)=-v^i ∂_i Π( x,η) As a result we see from eq.(<ref>) that ã( x) also behaves like a scalar and its eigenvalue must also transform like a scalar, i.e. δρ( x)=-v^i∂_i ρ( x) The metric transforms in the usual fashion δĝ_ij=-[∇̂_i v_j + ∇̂_j v_i] where ∇̂ denotes the covariant derivative compatible with ĝ_ij. It follows then from eq.(<ref>) that the condition for invariance of the wave function in the coherent state basis leads to Ψ[ρ+δρ,ĝ_ij+δĝ_ij, η_1]= Ψ[ρ, ĝ_ij, η_1] To proceed we first note that the expansion of the wave function in terms of coefficient functions was schematically given in eq.(<ref>). Let us be more precise about this. We will expand the metric ĝ_ij about its flat space value, ĝ_ij=δ_ij+γ_ij The Taylor series expansion of Ψ will then be carried out in powers of γ_ij and of the scalar source ρ, in the coherent state basis, or in terms of J_± in the field basis, as we will discuss later.
The coefficient functions will be defined as follows: log[Ψ] = 1/2∫ d^d x d^d y √(ĝ( x))√(ĝ( y)) ρ^∗( x)ρ^∗( y)⟨O_+( x)O_+( y)⟩ + 1/4∫ d^d x d^d y d^d z √(ĝ( x))√(ĝ( y))√(ĝ( z)) γ_ij( z)ρ^∗( x)ρ^∗( y) ⟨T_ij( z)O_+( x)O_+( y)⟩+⋯ From eq.(<ref>) we see that on starting with the flat metric δ_ij and carrying out a spatial reparametrisation one obtains γ_ij=-[∇̂_i v_j +∇̂_j v_i] Also, the invariance condition eq.(<ref>) then leads to the condition Ψ[ρ+δρ, δ_ij+γ_ij, η_1]=Ψ[ρ,δ_ij, η_1] For the coefficient functions being considered in eq.(<ref>) this gives a relation between the two point and three point correlators: ∂_i^z⟨T^{ij}( z)O_+( x)O_+( y)⟩+δ( z- x)∂_i^x⟨O_+( x)O_+( y)⟩+δ( z- y)∂_i^y⟨O_+( y)O_+( x)⟩=0 as is discussed further in Appendix <ref>. Note this is the expected relation in a CFT with the operator O_+( x) transforming as a scalar. Similar relations can also be obtained relating correlation functions involving m scalar fields and a stress tensor, or multiple stress tensors. In all cases the stress tensor will be conserved, i.e., will meet the condition ∂_iT^{ij}=0 up to contact terms which involve the scalars, or additional stress tensors, transforming appropriately under spatial reparametrisations, i.e. as scalar operators or a rank two symmetric tensor. We also note that the correlation functions will have cut-off dependent terms in them. The Ward identities above do not mix the cut-off dependent terms with those which are cut-off independent. The cut-off independent terms will satisfy the Ward identities of a standard CFT; the cut-off dependent terms will also satisfy the relations obtained from the conditions discussed above. Physically these Ward identities express the fact that the dual field theory correlators are translationally invariant and the stress tensor is therefore conserved, up to precise contact terms. §.§.§ Ward Identity for Time Reparametrisation The time reparametrisation which preserves ADM gauge asymptotically, as η_1→ 0, takes the form η → η(1+ϵ( x)) x^i → x^i+(1/2)η^2 ∂_i ϵ We see that we can neglect the change of x^i to leading order, although that argument might need to be more carefully examined when we consider the Ward identities involving cut-off dependent terms. To keep our discussion simple we will mostly focus on the cut-off independent terms below. We can then take x^i to be invariant and take η to transform as given in eq.(<ref>). The scalar field Φ( x,η) will transform as a bulk scalar under this transformation and its change is therefore δΦ( x,η)=-ϵ( x)η∂_ηΦ( x,η) From eq.(<ref>) we see that (-η)^d Π( x,η)=-η∂_ηΦ( x,η) From this it also follows that δ((-η)^d Π( x,η)) = -ϵ( x) η∂_η((-η)^d Π( x,η)) In other words, (-η)^d Π( x,η) also behaves like a scalar under the time reparametrisation of interest. At first sight this might seem unexpected. To see the above result simply, note that -η∂_η = ∂_{log(-η)} and that δlog(-η)= ϵ( x), which is independent of η.
Let us now write the expression for ã( x) in terms of Φ, Π as follows, eq.(<ref>): ã( x) (-η)^{Δ_+}= i/√(2μ)[(d/2-iμ)Φ( x,η)+(-η)^dΠ( x,η)] Since both terms on the RHS transform as scalars, we conclude that the change of the LHS is given by δ(ã( x)(-η)^{Δ_+})=-ϵ( x) η∂_η(ã( x) (-η)^{Δ_+}) From this it follows that δã( x)=-Δ_+ ϵ( x) ã( x) As a result its eigenvalue ρ also transforms as δρ( x)=-Δ_+ ϵ( x) ρ( x) and its complex conjugate transforms as δρ( x)^∗=-Δ_- ϵ( x)ρ( x)^∗ It is also easy to see that under the time reparametrisation we are considering the metric perturbation γ_ij, eq.(<ref>), transforms as γ_ij=2 ϵ( x) δ_ij The invariance of the wave function, eq.(<ref>), then takes the form Ψ[ρ+δρ, δ_ij+ γ_ij, η_1(1+ϵ)]=Ψ[ρ,δ_ij,η_1] For cut-off independent terms we can neglect the change in η_1. Applying this condition to the cut-off independent terms in the two and three point correlators in eq.(<ref>) we learn that ⟨T^i_i( z)O_+( x)O_+( y)⟩+Δ_+δ( z- x)⟨O_+( x)O_+( y)⟩+Δ_+δ( z- y)⟨O_+( y)O_+( x)⟩=0 This is the standard Ward identity for local scale invariance in a CFT; namely, the trace of the stress tensor vanishes up to contact terms which are proportional to the anomalous dimensions of the operators. Similar relations will arise when there are more operators, with T^i_i again vanishing up to appropriate contact terms proportional to the anomalous dimensions of the operators involved. When we consider cut-off dependent terms we will also have to take into account the change in η_1, which follows from eq.(<ref>). Appendix <ref> has further discussion of how the bilinear and trilinear terms in eq.(<ref>) are obtained by doing explicit calculations, along with some subtleties that arise. §.§.§ Ward Identities for Conformal Invariance Although, as mentioned above, conformal transformations are a combination of spatial and time reparametrisations, it is worth considering them separately since they correspond to isometries of the bulk metric and invariance under them is a defining feature of CFTs. The SO(1,d+1) isometries of dS_{d+1}, which correspond to conformal transformations, are described in Appendix <ref>, see eq.(<ref>)-eq.(<ref>). Consider a general conformal transformation under which x'^i=x^i+v^i, η'=η(1+ϵ( x)) From Appendix <ref> we see that for translations and rotations ϵ( x)=0, while for dilations and special conformal transformations it takes the values ϵ( x)=ϵ and ϵ( x)=2 b_ix^i respectively. We see that this transformation is in general a combination of a spatial and a time reparametrisation of the kind we discussed above. The coherent state eigenvalue ρ and metric therefore transform under this transformation as δρ=-v^i( x) ∂_i ρ -ϵ( x) Δ_+ ρ and γ_ij= -∇̂_i v_j( x)-∇̂_jv_i( x) + 2 ϵ( x) δ_ij=0 The fact that the change in the metric vanishes is because conformal transformations are isometries of the dS metric. As a result the coefficient functions under these transformations will transform homogeneously, i.e., an n point scalar correlator will transform into itself, and the resulting conditions which arise from the invariance of the wave function will involve only that coefficient function, and not other coefficient functions involving extra factors of the stress tensor, as happened for the spatial and time reparametrisations individually in the previous subsections.
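The content of the trace Ward identity for the two point function can be checked directly on the expected form ⟨O_+(x)O_+(0)⟩ ∝ |x|^{-2Δ_+}: the dilatation generator annihilates it. A small sympy sketch in d=3 (our own illustration, with the principal-series value Δ = 3/2 + iμ):

```python
import sympy as sp

# Scale covariance of G(x) = |x|^(-2Δ): (x·∂ + 2Δ) G = 0.
x1, x2, x3, mu = sp.symbols('x1 x2 x3 mu', real=True)
Delta = sp.Rational(3, 2) + sp.I*mu
r2 = x1**2 + x2**2 + x3**2
G = r2**(-Delta)                       # |x|^(-2Δ)

dil = x1*sp.diff(G, x1) + x2*sp.diff(G, x2) + x3*sp.diff(G, x3) + 2*Delta*G
print(sp.simplify(dil))                # -> 0
```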
For the two-point correlator under a special conformal transformation we get, for example, ⟨δO_+( x) O_+( y)⟩+⟨O_+( x) δO_+( y)⟩=0 where the change δO_+( x)=b^j[2Δ x_j+(2x^ix_j-x^2δ^i_j)∂_i] O_+( x) Similar relations can also be obtained for the other conformal transformations, as is discussed in Appendix <ref>. n point correlators will also be invariant under special conformal transformations, with each operator transforming as given in eq.(<ref>). Correlations involving the stress tensor will be invariant if the change in T_ij given in eq.(<ref>) is also included. Note that these relations will apply separately to the cut-off independent and cut-off dependent terms. §.§.§ Ward Identities for U(1) Gauge Invariance Finally we consider the Ward identity for U(1) gauge invariance. Consider a scalar charged under a U(1) gauge field, with the quadratic term in its action being S=-∫ d^{d+1}x √(-g) [g^{μν}(D_μϕ)^† (D_νϕ) -M^2ϕ^†ϕ] where D_μϕ=(∂_μ-ieA_μ) ϕ We work in A_η=0 gauge, i.e. we set the timelike component to vanish. The wave function then depends on A_i, the remaining spatial components, and also on ϕ. Gauss' law implies that the wave function is invariant under the residual gauge symmetry, i.e. when the gauge parameter is η independent and therefore preserves the condition, eq.(<ref>). This leads to the constraint Ψ[ϕ,A_i]=Ψ[ϕ+δϕ, A_i+δ A_i] with δϕ = i e χ( x) ϕ δ A_i = ∂_i χ( x) The U(1) symmetry is a global symmetry in the boundary theory and the Ward identities of this global symmetry follow from the invariance of the wave function. We will mostly consider the underdamped case, M^2>d^2/4, since the analysis for the overdamped case is very similar to that in AdS space. In the underdamped case we go to the coherent state basis. To avoid confusion with respect to complex conjugation it is best to first decompose the complex scalar field into its real and imaginary components, ϕ( x,η)=1/√2[ϕ_1( x,η)+i ϕ_2( x,η)] and ϕ^†( x,η)=1/√2 [ϕ_1( x,η)-i ϕ_2( x,η)] From eq.(<ref>) it follows that under a U(1) transformation δϕ_1= -e χ( x) ϕ_2, δϕ_2=e χ( x) ϕ_1 One can now go to the coherent state basis ρ_1( x), ρ_2( x) for ϕ_1, ϕ_2 respectively, following the discussion in section <ref>. Let a_1( x), a_2( x) be the corresponding destruction operators for the two fields, whose eigenvalues are ρ_1, ρ_2 respectively. From eq.(<ref>) it then follows that under a U(1) transformation a_1( x), a_2( x) will transform, analogous to eq.(<ref>), as δ a_1( x)= -e χ( x) a_2( x), δ a_2( x)=e χ( x) a_1( x) and therefore ρ_1( x), ρ_2( x) will also transform as δρ_1= - e χ( x) ρ_2, δρ_2=e χ( x) ρ_1 The invariance of the wave function, eq.(<ref>), then leads to the condition that Ψ[ρ_1+δρ_1, ρ_2+δρ_2, A_i+δ A_i]=Ψ[ρ_1,ρ_2,A_i] The Ward identities for U(1) invariance arise from this condition. To examine the consequences for the wave function in more detail, let us note, as we can see from eq.(<ref>), eq.(<ref>), that the coherent state wave function will actually be expressed in terms of the complex conjugate of the coherent state eigenvalue, i.e. in terms of ρ_1^∗, ρ_2^∗. In expanding the wave function in terms of powers of the sources and obtaining the constraints that follow from the invariance condition, eq.(<ref>), on the correlators, it is useful to adopt the following, admittedly somewhat clumsy, notation. We define ρ^∗ ≡ 1/√2 [ρ_1^∗+i ρ^∗_2] (ρ^∗)^† ≡ 1/√2 [ρ_1^∗-i ρ^∗_2] From eq.(<ref>) it follows that under the U(1) gauge transformation δρ_1^∗= - e χ( x) ρ_2^∗, δρ_2^∗=e χ( x) ρ_1^∗
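As an explicit illustration of the special conformal Ward identity (our own check, in d=2 with b along the 1-axis), one can verify symbolically that applying the generator at both points annihilates |x-y|^{-2Δ}:

```python
import sympy as sp

# Apply δO = b^j[2Δ x_j + (2x^i x_j - x²δ^i_j)∂_i] at both points of
# G = |X-Y|^(-2Δ) with b = (1,0); the sum must vanish.
X = sp.symbols('X1 X2', real=True)
Y = sp.symbols('Y1 Y2', real=True)
Delta = sp.symbols('Delta')
G = ((X[0]-Y[0])**2 + (X[1]-Y[1])**2)**(-Delta)

def K1(pt):                     # SCT generator with b=(1,0) acting at point pt
    x2 = pt[0]**2 + pt[1]**2
    return (2*Delta*pt[0]*G
            + (2*pt[0]**2 - x2)*sp.diff(G, pt[0])
            + 2*pt[0]*pt[1]*sp.diff(G, pt[1]))

print(sp.simplify(K1(X) + K1(Y)))   # -> 0
```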
This leads to the transformation laws for ρ^∗, (ρ^∗)^†, δρ^∗ = i e χ( x) ρ^∗ δ (ρ^∗)^† = -i eχ( x) (ρ^∗)^† showing that ρ^∗ and (ρ^∗)^† transform as fields with charge +1 and -1 under the gauge transformation respectively. Expanding the wave function and keeping quadratic terms in the scalar and a trilinear term which also contains the gauge field, we have log(Ψ) = 1/2∫ d^d x d^d y √(ĝ( x))√(ĝ( y)) ρ^∗( x)(ρ^∗( y))^†⟨O_+( x) O_+^†( y)⟩ + 1/2∫ d^d x d^d y d^d z √(ĝ( x))√(ĝ( y))√(ĝ( z)) A_i( z) ρ^∗( x)(ρ^∗( y))^†⟨J_i( z) O_+( x) O_+^†( y)⟩ The invariance of the wave function then leads to the condition ∂_i^z⟨J_i( z)O_+^†( x)O_+( y)⟩-iδ( z- y)⟨O_+^†( x)O_+( z)⟩+iδ( z- x)⟨O_+^†( z)O_+( y)⟩=0 Similar conditions will arise when we consider higher point correlators. Appendix <ref> discusses some more aspects, including explicit calculations, pertaining to these Ward identities. §.§ Ward Identities in the Field Eigenstate Representation Instead of the coherent state representation we can work directly with the wave function in terms of the eigenstate of the bulk field Φ( x,η_1) at the late time slice η_1. More precisely, as was discussed in section <ref>, we can express the wave function in terms of the sources J_+ or J_-, eq.(<ref>) and eq.(<ref>). Here we will examine the constraints which arise when the wave function is expressed in terms of these sources and argue that the corresponding coefficient functions will satisfy the Ward identities of a CFT. More precisely, this will be true for the cut-off independent terms in the coefficient functions. The cut-off dependent terms will also satisfy conditions which arise from the invariance of the wave function under the various coordinate transformations. We will mostly discuss the case where the source is taken to be J_+; the case with J_- is quite analogous. The non-trivial part of the analysis, as was noted earlier, is the following. On general grounds, as was discussed above, the wave function is invariant under spatial and time reparametrisations which preserve the ADM gauge. However, the sources J_± are related to the bulk field ϕ( x,η_1) in a non-local manner along the hypersurface η=η_1, i.e. through a non-local relation in x. This makes it somewhat non-trivial to determine how the sources transform under the spatial and time reparametrisations and then find the resulting constraints on the wave function. In fact, we will need to specify the relation between the sources and the bulk field in a more precise manner to go forward. The physical picture (drawn from AdS/CFT) to keep in mind is as follows. In eq.(<ref>), which we reproduce below, ϕ( k,η)=α_μ^∗ (-η)^{d/2-iμ} J_+( k) + β_μ^∗ (-η)^{d/2+iμ}k^{2iμ} J_+( k) the first term on the RHS specifies the source in terms of J_+, and the second term is the response which arises due to this source. This suggests that if we take (-η)^{d/2-iμ} J_+( x) to transform as a scalar, just as the bulk scalar ϕ( x,η), ensuring that it takes the same value at the same physical point before and after the relevant coordinate transformation, then it should also be possible to define the response term so that it transforms as a scalar. For a spatial reparametrisation this is straightforward to do.
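The charge assignments above follow from a one-line computation, which the following sympy sketch spells out (rho1s, rho2s stand for ρ_1^∗, ρ_2^∗):

```python
import sympy as sp

# δρ1* = -eχ ρ2*, δρ2* = +eχ ρ1*  implies that ρ* = (ρ1* + iρ2*)/√2 and
# (ρ*)† = (ρ1* - iρ2*)/√2 carry charges +1 and -1 respectively.
e, chi, r1, r2 = sp.symbols('e chi rho1s rho2s')
rs  = (r1 + sp.I*r2)/sp.sqrt(2)
rsd = (r1 - sp.I*r2)/sp.sqrt(2)
d1, d2 = -e*chi*r2, e*chi*r1                   # δρ1*, δρ2*

print(sp.simplify((d1 + sp.I*d2)/sp.sqrt(2) - sp.I*e*chi*rs))     # -> 0: δρ* = +ieχ ρ*
print(sp.simplify((d1 - sp.I*d2)/sp.sqrt(2) + sp.I*e*chi*rsd))    # -> 0: δ(ρ*)† = -ieχ (ρ*)†
```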
Let us first write the relation in eq.(<ref>) in position space as ϕ( x,η_1)=α_μ^∗ (-η)^{d/2-iμ} J_+( x) + 2^{2iμ}Γ[d/2+iμ]/(π^{d/2}Γ[-iμ]) (-η)^{d/2+iμ}β_μ^∗∫ d^d y J_+( y)/| x- y|^{d+2i μ} Similarly, for future reference, in terms of J_- from eq.(<ref>) we write ϕ( x,η_1)=β^∗_μ (-η)^{d/2+iμ} J_-( x) + 2^{-2iμ}Γ[d/2-iμ]/(π^{d/2}Γ[iμ]) (-η)^{d/2-iμ}α_μ^∗∫ d^d y J_-( y)/| x- y|^{d-2i μ} This relation holds when the metric components ĝ_ij=δ_ij. For a more non-trivial metric we then define J_+ through the relation ϕ( x,η_1)=α_μ^∗ (-η)^{d/2-iμ} J_+( x) + 2^{2iμ}Γ[d/2+iμ]/(π^{d/2}Γ[-iμ]) (-η)^{d/2+iμ}β_μ^∗∫ d^d y √(ĝ( y)) J_+( y)/s( x, y)^{d+2i μ} where s( x, y) is now the geodesic distance between the points x and y for the more general metric ĝ_ij. It is now clear that with this definition, if J_+ transforms like a scalar under spatial reparametrisations, the response term will also be a scalar. That is, under the transformation x^i → (x^i)'=x^i +v^i( x), if we take δ J_+=-v^i ∂_i J_+ then this will lead to the LHS in eq.(<ref>) transforming in the required manner for a scalar, δϕ( x,η_1)=-v^i ∂_i ϕ( x,η_1). Note we will take eq.(<ref>) to be the defining relation for J_+ in terms of ϕ( x,η_1), even for an interacting theory where the scalar is not a free field. Next, for dealing with time reparametrisations, which asymptotically take the form η→η'=(1+ϵ( x))η we need to take, following the line of thought above, the first term on the RHS of eq.(<ref>) to be a bulk scalar, i.e. for the change in (-η)^{d/2-iμ} J_+( x) to be δ[(-η)^{d/2-iμ} J_+( x)] =-ϵ( x) η∂_η [(-η)^{d/2-iμ} J_+( x)]=-Δ_- ϵ( x)(-η)^{d/2-iμ} J_+( x) This can be accomplished by taking δ J_+( x)=-ϵ( x) Δ_- J_+( x) Now notice that under a time reparametrisation eq.(<ref>) the surface η=η_1 will be described as a surface η'=(1+ϵ( x))η_1, i.e. it will not be a hypersurface along which the η' coordinate is constant. We therefore need to define the second term in eq.(<ref>), the response term, appropriately for such hypersurfaces where the η coordinate varies. This can be done as follows. First consider the η= constant surface, with ĝ_ij=δ_ij. From eq.(<ref>) we see that the d dimensional metric induced on this hypersurface is given by g_ij=δ_ij/η^2 In terms of this induced metric, the response term in eq.(<ref>) can then be written as 2^{2iμ}Γ[d/2+iμ]/(π^{d/2}Γ[-iμ]) β_μ^∗∫ d^d y √(g( y)) [(-η)^{d/2-iμ} J_+( y)] [(-η)^{d+2iμ}/| x- y|^{d+2iμ}] The first factor, d^d y √(g( y)), is the correct measure for the induced metric, while the last factor (-η)^{d+2iμ}/| x- y|^{d+2iμ} equals 1/s( x, y)^{d+2i μ}, where s( x, y) is the geodesic distance between x, y computed using the induced metric. The expression in eq.(<ref>) reveals how we can generalise the response term to apply to the more general hypersurfaces along which η also varies. We simply use the induced metric in defining the measure for integration along the hypersurface and define the distance s( x, y) also using this metric, so that the response term becomes 2^{2iμ}Γ[d/2+iμ]/(π^{d/2}Γ[-iμ]) β_μ^∗∫ d^d y √(g( y)) [(-η( y))^{d/2-iμ} J_+( y)] [1/s( x, y)^{d+2iμ}] If the expression (-η)^{d/2-iμ} J_+ now transforms as a bulk scalar, as we mentioned above, then we see that the response term, as we have defined it, will also behave as a bulk scalar, since the measure term, the middle term (-η( y))^{d/2-iμ} J_+( y), and the distance s( x, y) all transform appropriately.
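The coefficient appearing in the position-space kernels above is the standard Fourier transform of a pure power (quoted here as the step connecting the momentum-space relation to its position-space form; the formula is distributional, understood by analytic continuation in the exponent):

∫ d^dk/(2π)^d e^{i k· x} k^{2iμ} = (2^{2iμ}Γ[d/2+iμ]/(π^{d/2}Γ[-iμ])) 1/| x|^{d+2iμ}

with μ → -μ giving the kernel in the J_- relation.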
In summary, putting together the various points in our discussion of spatial and time reparametrisations above, we define J_+ in terms of ϕ( x,η) in general by the relation ϕ( x,η)=α_μ^∗ (-η( x))^{d/2-iμ} J_+( x) + 2^{2iμ}Γ[d/2+iμ]/(π^{d/2}Γ[-iμ]) β_μ^∗∫ d^d y √(g( y)) [(-η( y))^{d/2-iμ} J_+( y)] [1/s( x, y)^{d+2iμ}] This relation is valid on hypersurfaces where η can also vary with x (at least for small enough variations, e.g. infinitesimal ϵ). The integration in the second term on the RHS in eq.(<ref>) is along the hypersurface, g_ij is the induced metric on this hypersurface and s( x, y) is the geodesic distance using this metric. We take (-η)^{d/2-iμ}J_+ to be a scalar under both spatial and time reparametrisations, i.e. to transform under these reparametrisations as given in eq.(<ref>) and eq.(<ref>), eq.(<ref>). The second term on the RHS of eq.(<ref>) will then also be a scalar. This is consistent with the LHS of eq.(<ref>), i.e. with ϕ( x,η), also transforming like a scalar, as indeed it should. As in the coherent state basis we will expand the metric ĝ_ij about its flat space value, eq.(<ref>), and now express the wave function as a Taylor series in J_+ and γ_ij. The invariance condition of the wave function under spatial and time reparametrisations then gives rise to the condition Ψ[J_+ +δ J_+, δ_ij+γ_ij, η_1(1+ϵ( x))]=Ψ[J_+, δ_ij, η_1] where the change in J_+, eq.(<ref>), eq.(<ref>), is δ J_+=-v^i∂_i J_+-ϵ( x)Δ_- J_+ and γ_ij is given by γ_ij=-∇̂_i v_j-∇̂_jv_i+2 ϵ( x) δ_ij For the two and three point correlators, analogous to eq.(<ref>), we get on expanding the wave function log(Ψ) = 1/2∫ d^d x d^d y √(ĝ( x))√(ĝ( y)) J_+( x)J_+( y)⟨Ô_+( x) Ô_+( y)⟩ + 1/4∫ d^d x d^d y d^d z √(ĝ( x))√(ĝ( y))√(ĝ( z)) γ_ij( z) J_+( x)J_+( y)⟨T_ij( z)Ô_+( x)Ô_+( y)⟩ Eq.(<ref>) then leads, for the η_1 independent terms in the coefficient functions, to the conditions ∂_i^z⟨T^{ij}( z)Ô_+( x)Ô_+( y)⟩+δ( z- x)∂_i^x⟨Ô_+( x)Ô_+( y)⟩+δ( z- y)∂_i^y⟨Ô_+( y)Ô_+( x)⟩=0 ⟨T^i_i( z)Ô_+( x)Ô_+( y)⟩+Δ_+δ( z- x)⟨Ô_+( x)Ô_+( y)⟩+Δ_+δ( z- y)⟨Ô_+( y)Ô_+( x)⟩=0 These agree in form with eq.(<ref>), eq.(<ref>) and are the Ward identities in a CFT for spatial reparametrisations and local scale invariance. It follows from these relations that the cut-off independent terms will then satisfy the Ward identities of conformal invariance as well. We will not discuss these in detail except to note that, for the cut-off independent terms, they take the standard form for an operator of dimension Δ_+; e.g., for the two point function, they are given in eq.(<ref>), eq.(<ref>), with O_+ replaced by Ô_+. A few additional comments before we proceed are in order. First, it is worth being more explicit about the definition we have given in eq.(<ref>) for J_+ in terms of ϕ( x,η) and the resulting coefficient functions which arise after the Taylor series expansion of the wave function. Consider the wave function as a functional of the field ϕ and metric ĝ_ij, eq.(<ref>), on a slice specified by giving η( x), Ψ[ϕ, ĝ_ij, η( x)]. For η→ 0, to leading order, the induced metric on the slice is given by g_ij( x)=ĝ_ij/η( x)^2. This induced metric is to be used in calculating the second term on the RHS of eq.(<ref>). Although the relation between the two is non-local, J_+ can in principle now be obtained in terms of ϕ using eq.(<ref>), and the wave function can then be expressed in terms of J_+ to obtain Ψ[J_+,ĝ_ij, η( x)].
After expanding in a Taylor series in J_+ and γ_ij, eq.(<ref>), one then obtains the coefficient functions which will give the correlation functions of the dual theory. In particular, the relation eq.(<ref>), although motivated by the behaviour in the free field limit, is to be taken as an exact definition of J_+ in terms of ϕ, uncorrected in the presence of additional interactions, e.g. ϕ^n interactions, or in the loop expansion (gauge interactions are an exception and will be discussed below). Second, for the cut-off dependent terms the conditions of invariance under spatial and time reparametrisation will also give rise to important constraints. Obtaining these constraints requires that the cut-off is also transformed suitably. For example, consider the term in the two point function of the form eq.(<ref>). This can be written, for a more general hypersurface where η varies and a more general metric along the hypersurface, as ∫ d^d x d^d y √(g( x))√(g( y)) ((-η( x))^{d/2-iμ}J_+( x)) ((-η( y))^{d/2-iμ} J_+( y)) 1/s^{d+4iμ} where we have used the notation above, with g_ij denoting the induced metric etc. We see that this term will indeed be invariant under both spatial and time reparametrisations, and conformal transformations, once the cut-off is also transformed. Third, a very similar analysis could have been carried out for the source J_- instead of J_+. The operator sourced by J_-, Ô_-, has dimension Δ_- instead of Δ_+, and the Ward identities for the cut-off independent terms can then be obtained from the J_+ case by simply making the replacements Ô_+→Ô_- and Δ_+→Δ_-. Finally, when a U(1) gauge symmetry is present for a complex scalar field ϕ, with action eq.(<ref>), we need to change the response term to include a Wilson line W( x, y)= exp[ie ∫_ y^ x A_i dx^i] where the path involved in computing the Wilson line is the geodesic from x to y. We then define the bulk field in terms of J_+ as ϕ( x,η_1)=α_μ^∗ (-η_1)^{d/2-iμ}J_+( x) + 2^{2iμ}Γ[d/2+iμ]/(π^{d/2}Γ[-iμ]) β_μ^∗ (-η_1)^{d/2+iμ}∫ d^d y W( x, y) J_+( y)/| x- y|^{d+2iμ} instead of eq.(<ref>). From eq.(<ref>) we see that if J_+( x) transforms under a U(1) gauge transformation as a charge 1 field, i.e. with δ J_+( x)=i e χ( x) J_+( x) then the second term in eq.(<ref>) will also transform as a charge 1 field, which is consistent with the transformation of the bulk scalar, eq.(<ref>). Additional effects of a non-trivial metric and varying η values along the hypersurface can be included as above, eq.(<ref>). In this more general case the Wilson line would be computed along the geodesic defined from the induced metric, and the generalisation of eq.(<ref>) takes the form ϕ( x,η)=α_μ^∗ (-η)^{d/2-iμ}J_+( x) + 2^{2iμ}Γ[d/2+iμ]/(π^{d/2}Γ[-iμ]) β_μ^∗∫ d^d y √(g( y)) (-η)^{d/2-iμ} W( x, y) J_+( y)/s( x, y)^{d+2i μ} As above, starting with the wave function Ψ[ϕ, ĝ_ij, A_i, η( x)], we would use the above relation (with W( x, y) now depending on A_i and on the geodesic determined by ĝ and η( x)) to obtain J_+, and then find the wave function as a functional of J_+, γ_ij and A_i. Its coefficient functions would be the correlators in the dual theory. §.§ Additional Points We have discussed above how a source in the dual theory can be identified with the coherent state eigenvalue. This identification has many attractive features. The cut-off independent terms in the correlators satisfy the Ward identities of conformal invariance, local scale invariance, and momentum conservation. And the cut-off dependent term in the two point function is a purely local contact term, eq.(<ref>).
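To see why the Wilson line does the job, note that under δA_i = ∂_iχ the line integral shifts by the endpoint values,

δ∫_y^x A_i dx^i = χ( x) - χ( y), so that W( x, y) → e^{ieχ( x)} W( x, y) e^{-ieχ( y)}

and the phase e^{-ieχ(y)} precisely compensates δJ_+(y) = ieχ(y)J_+(y) inside the integral, leaving the overall charge 1 phase e^{ieχ(x)} required of ϕ(x,η_1).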
For higher point correlators too, our analysis in section <ref> suggests that the cut-off dependent terms have a sensible explanation in the dual field theory: they can be understood as arising from operator mixing. In addition, we also found other ways to identify sources, using J_+ or J_-, see eq.(<ref>) and its more accurate version eq.(<ref>) for J_+, and similarly eq.(<ref>) for J_-. These turn out to be sources for boundary fields of dimension Δ_+ and Δ_- respectively and are related to the bulk field eigenstates through a non-local transformation in position space. The finite parts of the resulting coefficient functions in these cases also satisfy the Ward identities mentioned above, as was discussed earlier. However, the cut-off dependent terms in these cases are more unwieldy; in particular, the two point correlator has an additional term which is both cut-off dependent and non-local. While we did not analyse them in much detail, the general arguments we gave lead to the conclusion that the cut-off dependent terms will also be constrained by the conditions of invariance of the wave function under time and spatial reparametrisation invariance, see eq.(<ref>) and the related discussion. How exactly holography works in the dS context is still poorly understood. In the most optimistic case the holographic dual will be a field theory with a finite UV cut-off, and will reproduce the full wave function in the bulk, at late time η_1→ 0, including the cut-off dependent terms. In particular, it will therefore encode all information for both modes which have exited the horizon and those which have not done so by the time η_1. In this case the correlators for modes with kη_1≳ O(1) will depend on η_1 in a complicated manner, significantly different from correlations in a CFT, and the dual theory would need to reproduce this full dependence. The somewhat unwieldy contribution to the two point function, eq.(<ref>), would then be a part of this larger set of cut-off dependent terms. In the less optimistic case the best one could do would be to have a hologram which is a CFT reproducing the cut-off independent terms. The cut-off dependent non-local term in eq.(<ref>) would not be reproduced in such a theory. Restricting ourselves to cut-off independent terms, there would still be a difference between using the coherent state eigenvalue ρ^∗, eq.(<ref>), or J_+, eq.(<ref>), eq.(<ref>), as the source. The ratio of the two point and three point correlators (suitably corrected for different normalisations) would differ in these two cases, see Appendix <ref>. Thus, given a dual description which only yields cut-off independent terms, one would still be able to decide which of the two sources, ρ^∗, eq.(<ref>), or J_+, correctly reproduces correlations in the boundary dual. § ADDITIONAL COMMENTS Here we make some further comments about correlation functions obtained in the dS case, their continuation from AdS, etc. §.§ Issues Connected to Factors of i and -1 It is well known that factors of i=√(-1), or -1, often appear in, and confuse or bedevil, attempts to construct a dS/CFT correspondence. Here we discuss some of these issues. For the overdamped case, including massless fields, these factors can be understood simply from analytically continuing the corresponding expressions in the AdS case[In the underdamped case it is more complicated, since we work in the coherent state basis, section <ref>, or in terms of the sources J_±, eq.(<ref>), eq.(<ref>).]<cit.>.
Let us review how to carry out this continuation in a convenient manner. More details can be found in Appendix <ref>. The metric in Euclidean AdS_{d+1} in Poincaré coordinates is given by ds^2=L^2/z^2[dz^2+∑_i(dx^i)^2] where L is the AdS radius. The coordinate z takes values in the range z∈ [0,∞]. Continuing z→ -i η(1-i ϵ) with η∈ (-∞,0], and L→± i R_dS where R_dS is the radius of dS space, gives the metric for dS in Poincaré coordinates ds^2=R_dS^2/η^2 [-dη^2+(dx^i)^2] As will be clear from the discussion below, if d is an even integer the choice of sign in eq.(<ref>) makes a difference. In this paper we have mostly considered the hologram at I^+, i.e. for the expanding branch. However, one could also have considered the contracting branch and constructed a hologram at I^-. It turns out that for the expanding branch we need to take L → i R_dS and for the contracting branch its complex conjugate, i.e. L→ -i R_dS, to get agreement between the dS and AdS results. In the case of the expanding branch, after this continuation the action in AdS space, S_AdS, gets related to the action in dS space as follows: S_AdS→ -i S_dS This can be easily checked, for example, for a scalar field. As a result the partition function Z_AdS in AdS space goes over to the wave function Ψ in dS, Z_AdS=e^{-S_AdS}→Ψ=e^{iS_dS} In carrying out the continuation it is convenient to normalise the action so that an overall factor of 1/G_N appears in front of it. By dimensional analysis this will lead to a factor of L^{d-1}/G_N appearing, once we input the metric, in the AdS case. Continuing using eq.(<ref>) will then lead to this turning into the factor L^{d-1}/G_N→ i^{d-1} R_dS^{d-1}/G_N We also note that the continuation in eq.(<ref>) is required to ensure that the saddle point solution used in evaluating the action in the Euclidean AdS case goes over correctly to the solution which satisfies the Bunch Davies vacuum condition, eq.(<ref>), in the dS case. E.g., for an underdamped scalar the solution in Euclidean AdS is given by ϕ∝ z^{d/2}K_ν(kz) Under the continuation eq.(<ref>) this goes over to F_ν^∗, which vanishes as η→ -∞, as was discussed in eq.(<ref>). For cut-off dependent terms we must also continue the cut-off in AdS space at z=ϵ to the cut-off η_1 in dS through the relation, eq.(<ref>), ϵ=-i η_1. Before proceeding we remind the reader that the n point correlator in the hologram is given by, eq.(<ref>), ⟨O( k_1) O( k_2) ⋯ O( k_n)⟩ = δ^n logψ/(δϕ̂( k_1) δϕ̂( k_2) ⋯δϕ̂( k_n)) where ϕ̂( k) is the source in the boundary theory. In the underdamped case ϕ̂ is replaced by ρ^∗ or J_±, eq.(<ref>), eq.(<ref>), eq.(<ref>). Similarly, in AdS space the correlators are defined as ⟨O( k_1) O( k_2) ⋯ O( k_n)⟩ = δ^n log Z/(δϕ_b( k_1) δϕ_b( k_2) ⋯δϕ_b( k_n)) §.§ Scalars, Stress Tensor and 2d CFTs The two point function for an overdamped scalar was discussed in section <ref>, see eq.(<ref>). It was noted there that the two point function generically violates reflection positivity. The absence of reflection positivity in the dS case means that the correlators in the Euclidean field theory cannot be analytically continued to correlation functions in a conventional Lorentzian field theory with positive norm states[However, this may be possible to do in a Lorentzian theory with ghosts, i.e. negative norm states, <cit.>]. It is easy to check that the result for the two point function of an overdamped scalar can be obtained after the analytic continuation from AdS space, if we use the continuation eq.(<ref>) in the expanding branch, see Appendix <ref>.
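The statement that z^{d/2}K_ν(kz) continues to the Bunch Davies mode can be checked numerically: under z→-iη the ratio to (-η)^{d/2}H^{(2)}_ν(-kη) should be independent of η. A hedged mpmath sketch (illustrative values of d, k and μ, chosen by us; not part of the derivation):

```python
import mpmath as mp

# Continuation check: z^(d/2) K_ν(kz) at z = -iη vs (-η)^(d/2) H^(2)_ν(-kη);
# the ratio should be an η-independent constant if the continuation maps mode to mode.
mp.mp.dps = 25
d, k = 3, 1.7
nu = 0.4j                      # underdamped: ν = iμ with μ = 0.4 (illustrative)

def ratio(eta):                # eta < 0
    z = -1j*eta
    num = mp.power(z, d/2)*mp.besselk(nu, k*z)
    den = mp.power(-eta, d/2)*mp.hankel2(nu, -k*eta)
    return num/den

print(ratio(-0.3))             # the two printed ratios agree,
print(ratio(-1.1))             # confirming proportionality
```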
The two-point function of the stress tensor can be discussed similarly, and can also be obtained by continuing the AdS result given in <cit.>, see Appendix <ref>. In dS space, in momentum space, it is given by ⟨T_ij( k)T_kl(- k)⟩ = -(i d β_{d/2}^∗/(8α_{d/2}^∗)) k^d [P_ikP_jl+P_ilP_jk-(2/(d-1))P_ijP_kl]   d= odd = -(i d β̅_{d/2}^∗/(8α̅_{d/2}^∗)) k^d log(k) [P_ikP_jl+P_ilP_jk-(2/(d-1))P_ijP_kl]   d= even where P_ij≡δ_ij-k_ik_j/k^2 In position space, both eq.(<ref>) and eq.(<ref>) become ⟨T_ij( x)T_kl( y)⟩= e^{i π(d-1)/2}Γ[d+2]/(8(d-1)π^{d/2}Γ[d/2] r^{2d}) (J_ik(r) J_jl(r) + J_il(r) J_jk(r) - (2/d)δ_ijδ_kl) where r= x- y, r=| r|, and J_ij( x) = δ_ij - 2 x_i x_j/x^2 As discussed in Appendix <ref>, when d is even, d=2n, or when d is odd and given by d=4m+3, with n,m∈ℤ, this two point correlator violates reflection positivity. Returning to the scalar case and turning briefly to higher point correlations, we saw in section <ref> above that the three point correlator for scalars is given by δlog(ψ) =(1/3!)∫∏_{i=1}^3 d^d𝐤_i/(2π)^d (2π)^dδ( k_1+ k_2+ k_3)⟨O(𝐤_1)O(𝐤_2)O(𝐤_3)⟩'∏_{i=1}^3 ϕ̂(𝐤_i) where ⟨O(𝐤_1)O(𝐤_2)O(𝐤_3)⟩ is given as ⟨O(𝐤_1)O(𝐤_2)O(𝐤_3)⟩'=-λ (8×3!/(2^{3ν}(Γ[ν])^3)) e^{(iπ/2)(3+3ν-d/2)} k_1^ν k_2^ν k_3^ν ∫_0^∞ dt  t^{d/2-1}K_ν(k_1t)K_ν(k_2t)K_ν(k_3t) This means that the three point function also has a complex coefficient in general. From eq.(<ref>) we also note that the bulk scalar being real implies that the source ϕ̂( x) is also real. In momentum space this implies that ϕ̂(𝐤)=ϕ̂^∗(-𝐤). The two point and higher point correlators being complex in momentum space therefore implies that the wave function is also complex. To understand this better, recall, as was also mentioned in the previous subsection, that we have been discussing the wave function for the expanding branch of the Hartle-Hawking state. Its complex conjugate, more correctly CPT conjugate, Ψ^∗, would describe the time reversed contracting branch, and there the n point correlators would be complex conjugates of their values in the expanding branch. The analytic continuation to the contracting branch also has to be done taking into account this complex conjugation, e.g., eq.(<ref>). Finally, let us conclude with some comments about the dS_3 case. By considering global dS_3 and evaluating the boundary stress tensor from the behaviour of the extrinsic curvature close to the boundary, one finds, see Appendix <ref>, that the central charge is given by c=3iR_dS/(2G_N). In particular, it is imaginary. Using general arguments <cit.>, and after taking into account the fact that the stress tensor behaves like a symmetric two index tensor under spatial reparametrisations, one then learns that in correlation functions the short distance singularities, when two stress tensors, T≡ T_zz, come close together, are of the standard OPE form T(z) T(ω)=(c/2)/(z-ω)^4+ 2 T(ω)/(z-ω)^2+ ∂_ω T(ω)/(z-ω) Note that c here is given by eq.(<ref>) and is imaginary. Similarly, for the anti-holomorphic component T̅(z̅)=T_z̅z̅(z̅), T̅(z̅) T̅(ω̅)= (c/2)/(z̅-ω̅)^4+ 2 T̅(ω̅)/(z̅-ω̅)^2+ ∂_ω̅T̅(ω̅)/(z̅-ω̅) A scalar field in the bulk corresponds to a primary scalar field O in the boundary, see Appendix <ref>. The Ward identities we have discussed in the previous section tell us that the leading singularity, when such a field comes close to T, is given by T(z) O(ω)=Δ O(ω)/(z-ω)^2+∂_ω O(ω)/(z-ω) where Δ is the dimension of the field, and similarly for T̅.
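The triple-K integral appearing in the three-point function above is straightforward to evaluate numerically in the convergent regime d/2 > 3ν identified in the divergence analysis. A minimal mpmath sketch (illustrative values only):

```python
import mpmath as mp

# ∫_0^∞ dt t^(d/2-1) K_ν(k1 t) K_ν(k2 t) K_ν(k3 t), convergent at t→0
# since the integrand ~ t^(d/2-1-3ν) there, and exponentially damped at large t.
mp.mp.dps = 20
d, nu = 3, 0.3
k1, k2, k3 = 1.0, 1.3, 0.8     # illustrative momenta

f = lambda t: t**(d/2 - 1)*mp.besselk(nu, k1*t)*mp.besselk(nu, k2*t)*mp.besselk(nu, k3*t)
I = mp.quad(f, [0, 1, mp.inf])
print(I)                        # finite, as expected from the divergence analysis
```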
Thus, despite the fact that reflection positivity is violated and the central charge is imaginary, key elements of the general structure of CFTs remain intact in the field theory duals. § DISCUSSION We are at an early stage in our study of holography in de Sitter space. There are in fact two versions of holography, pertaining to the static patch or to the late time boundary I^+, Figure <ref>, which are being studied. The static patch version is discussed in <cit.>. Here we have discussed some aspects of the late time boundary version, which is perhaps of greater interest from the point of view of inflationary cosmology and phenomenology. In this case, holography relates the late time wave function on a hypersurface η=η_1, with |η_1|≪ 1, to the partition function of a dual field theory, eq.(<ref>). The value of the late time coordinate η_1, more generally the location of the late time hypersurface, is related to the UV cut-off in the field theory, which vanishes as η_1→ 0. The η_1, or cut-off, dependent terms are in fact important in ensuring that the wave function is gauge invariant and solves the Wheeler-de Witt equation, see <cit.>. A full holographic understanding of the bulk would require that these cut-off dependent terms are also correctly reproduced in the boundary theory. In fact, and perhaps more importantly, at any finite value of η_1 there are modes in the bulk which have not exited the horizon, with momenta k|η_1|≳ O(1). The wave function includes all information about these modes too, and a complete holographic description of the wave function at the late time slice would have to include this information as well. Such a complete hologram seems quite ambitious to obtain, since the dependence of the wave function on modes which have not exited the horizon is a complicated function of both the momentum and the cut-off. In the boundary theory this would correspond to the behaviour of correlation functions on the scale of the cut-off, which is non-universal. The less ambitious version of holography would be a dual theory which successfully encodes the information in the wave function for modes after they exit the horizon. More precisely, rather than considering all momenta at a fixed and non-vanishing value of η_1, we take the η_1→ 0 limit keeping the momenta fixed, discard the η_1 dependent terms and retain the remaining η_1 independent terms in the wave function. These remaining terms would then be obtained from the hologram. There is an important distinction between the two descriptions. Since a translation in time, i.e. η→λη, corresponds to a scale transformation k → k/λ on the boundary, one expects on general grounds that a time translation of the wave function should map to a scale transformation in the dual theory. The evolution of the bulk wave function as a function of time should be unitary if all modes are being retained. In a hologram which keeps all the modes, and complete information about them, the scale transformation would also then have to be a unitary transformation. However, if the hologram only retains modes which have exited the horizon, going to larger length scales by a scale transformation in the boundary theory, i.e. to earlier times in the bulk, would lead to a loss of information and to non-unitary evolution. As some preliminary evidence for how scale transformations could act in a unitary manner, we note, as was discussed above, that the boundary theory in general violates reflection positivity.
This allows for the trace of the stress tensor which generates local scale transformations to be imaginary, cftanom. In fact there are actually two holograms one could associate with a Hartle Hawking state, one at I^+ and the other at I^-, corresponding to the boundaries of the expanding and contracting branches. These boundary descriptions are related by complex conjugation. For example, the anomalous dimensions of operators are related by complex conjugation, as are the coefficients which appear in the correlations, e.g., the structure constants in the operator product expansion which was discussed in Appendix <ref>. In this paper we have studied some aspects of the holographic dictionary for underdamped scalar fields which have a mass M^2>d^24. These fields do not, strictly speaking, freeze out at late times, rather they continue to oscillate but in a universal manner independent of the momenta k, with a dependence e^± iμlog(-η) where μ is given in undernu. One of our main results is to show how a source can be identified in the boundary theory for such fields by working in the coherent state basis. We also argued that the cut-off independent terms in the coefficient functions, obtained by expanding the wave function in a Taylor series expansion in this source, satisfy the Ward identities of a CFT, namely conformal invariance, local scale invariance and momentum conservation. Some sample calculations for correlators, showing that one obtains results expected in a CFT, have also been included. We also argued that there are other ways to identify sources in the boundary theory, besides the coherent state eigenvalue. An alternate identification is possible in terms of variables J_+, J_-, which are related to the bulk field Φ through a transformation which is non-local in position space, reljpjm. The cut-off independent terms we argued continue to satisfy the Ward identities of a CFT in this case, although some of the cut-off dependent terms are rather unwieldy, e.g., the resulting two-point correlator has a cut-off dependent term which is non-local in position space, posf. Our discussion was focused mostly on the under damped case but some aspects also have a bearing on the over damped case. For example it is possible in in the over damped case as well as to identify the source in the boundary theory in ways different from the conventional one and argue that the cut-off independent terms in the wave function when expanded in terms of these sources continue to satisfy the Ward identities, see Appendix <ref>. As has been discussed above, the boundary theory which is Euclidean violates reflection positivity and therefore cannot be continued to Lorentzian space, in general. And the fields in the boundary theory to which the bulk sources couple also cannot then be continued to operators in a Lorentzian theory. Nevertheless, we find, as was discussed in section <ref> and Appendices <ref> and <ref>, that these boundary fields behave in a manner entirely analogous to operators in more conventional CFTs. In particular, they transform under conformal transformations like primary operators and the nature of short distance singularities when they come together in correlation functions is the same as that in an operator product expansion, allowing us to define a short distance expansion, operator product coefficients, etc. 
Also, after analytic continuation, as discussed in section <ref> and Appendix <ref>, the operator product coefficients, even in the under damped case where we work with the coherent state representation, can be obtained by analytical continuation from AdS space. We look forward to further progress in this subject, hopefully spurred on by the study of concrete realisations of de-Sitter space, including <cit.>. § ACKNOWLEDGEMENTS First and foremost, we are deeply grateful to Suvrat Raju for generously sharing his insights and comments with us. The suggestion that the coherent state representation can be used to identify a source in the under damped case was made by him to us, and we thank him for this important insight and related discussion. We also acknowledge discussion with N. Iizuka, J. Maldacena, K. Narayan, S. Sake, T. Takayanagi, and members of the TIFR String Theory Group, especially A. Gadde, G. Mandal, S. Minwalla and O. Parrikar. SPT acknowledges support from the KITP, Santa Barbara, enabling him to participate in the workshop, “What is String Theory". We acknowledge support from Government of India, Department of Atomic Energy, under Project Identification No. RTI 4002 and from the Quantum Space-Time Endowment of the Infosys Science Foundation. Finally, we thank the people of India for generously supporting research in String Theory. § WAVE FUNCTION INCLUDING Φ^N INTERACTION In this section, detailed calculation of wave function under the inclusion of ϕ^n interaction through path integral and verification for n=3 with that obtained through perturbation technique are presented. It is also discussed how to represent it in coherent basis. §.§ Using Time Dependent Perturbation Technique The wave function in free theory is given by eq.(<ref>) near boundary. It is apparent that it should appear as the unperturbed contribution when we include interaction in a perturbative manner. So we denote it by ψ_0 and re-write below. ψ_0[φ,η]=𝒩exp[1/2∫d^d𝐤/(2π)^d i/(-η)^d-1∂_η(ℱ_ν(k,η))^∗/(ℱ_ν(k,η))^∗φ(𝐤,η)φ(-𝐤,η)] where ℱ_ν(k,η) is given by defF. Including the normal ordered interaction Hamiltonian :H_I(η'): in the theory, :H_I(η):=-λ∫ d^d x'√(-g):Φ^3(𝐱',η): The corresponding change in ground state can be calculated using time dependent perturbation technique. In the 1st order of coupling parameter λ, we get the modified state |φ_I⟩ in terms of unperturbed state |φ_0⟩ as ⟨φ_I|=⟨φ_0|[1+i∫_-∞^0 dη':H_I(η'):] The modified wave function is thereby given as ψ_I=⟨φ_I||0⟩ Using C1.4, C1.3, defwvf, we get ψ_I =ψ_0-iλ∫_-∞^0 dη'∫d^d k_1d^d k_2d^d k_3(2π)^3d√(-g)(ℱ_ν(k_1,η'))^∗(ℱ_ν(k_2,η'))^∗(ℱ_ν(k_3,η'))^∗ ⟨φ_0|a^†_-𝐤_1a^†_-𝐤_2a^†_-𝐤_3|0⟩(2π)^dδ(∑_i=1^3𝐤_i) To derive the last line, we use Phik. Putting the expression for a^†_- k from a+(k) and then using id1 in C1.8, we get multlineψ_I=ψ_0-λ∫∏_i=1^3d^d k_i(2π)^d⟨φ_0|∏_i=1^3∂_ηℱ_ν(k_i,η)Φ(k_i,η)-(-η)^d-1ℱ_ν(k_i,η)Π(k_i,η)/(-η)^d-1|0⟩ × (2π)^dδ(∑_i=1^3𝐤_i)∫_-∞^0 dη'√(-g)∏_i=1^3(ℱ_ν(k_i,η'))^∗ Using eigphib, eigpimultlineψ_I=ψ_0-iλ∫∏_i=1^3d^d k_i(2π)^d(2π)^dδ(∑_i=1^3𝐤_i)∏_i=1^3ℱ_ν(k_i,η)D[ k_1, k_2, k_3]ψ_0 ×∫_-∞^0 dη'√(-g)∏_i=1^3(ℱ_ν(k_i,η'))^∗ where D[ k_1, k_2, k_3]=∏_i=1^3[-i/(-η)^d-1∂_η(ℱ_ν(k_i,η))/ℱ_ν(k_i,η)φ(k_i,η)+∂/∂φ(-k_i,η)] Carrying out the action of D[ k_1, k_2, k_3] on ψ_0 given in C1.1 results D[ k_1, k_2, k_3]ψ_0 =-1/|ℱ_ν(k_1,η)|^2|ℱ_ν(k_2,η)|^2|ℱ_ν(k_3,η)|^2φ(𝐤_1,η)φ(𝐤_2,η)φ(𝐤_3,η)ψ_0 To note, we have omitted the terms with single φ(𝐤,η) since each of such terms carries appropriate δ-function that selects φ(𝐤=0,η) among all possible 𝐤 values. 
Hence the wave function is evaluated as multlineψ_I=ψ_0[1+iλ∫∏_i=1^3d^d k_i(2π)^d(2π)^dδ(∑_i=1^3𝐤_i)∏_i=1^3φ(𝐤_i,η)/(ℱ_ν(k_i,η))^∗. .×∫_-∞^0 dη'1/(-η')^d+1∏_i=1^3(ℱ_ν(k_i,η'))^∗] where we have put √(-g)=1/(-η)^d+1 as obvious from the metric Poing. The result for wave function including cubic interaction, hence derived using perturbation technique, reproduces eq.(<ref>) at 1st order of coupling strength λ. §.§ Using Path Integral Formalism The total action in the λϕ^n theory is the following S=-1/2∫ d^d𝐱dη√(-g)(∇_μϕ∇^μϕ+m^2ϕ^2)+λ∫ d^d𝐱dη√(-g)ϕ^n Now,in the free theory,the equation of motion for ϕ is (∇_μ∇^μ-m^2)ϕ=0 The on-shell action S_onshell is carried out by satisfying appenS1 with shelleq S_onshell=-1/2∫_∂ d^d𝐱√(-g)g^ηηϕ∂_ηϕ+λ∫ d^d𝐱dη'√(-g)ϕ^n Since ϕ satisfies the equation of motion for the free theory, we need to consider δϕ=0 for this onshell calculation up to O(λ) Therefore,the correction to onshell action δ S_n=+λ∫ d^d𝐱dη'√(-g)ϕ(𝐱,η')^n In momentum space,this correction becomes, δ S_n=λ∫∏_i=1^nd^d𝐤_i/(2π)^d(2π)^dδ^(d)(𝐤_1+...+𝐤_n)∫ dη'√(-g)∏_i=1^nϕ( k_i,η') Lets put ϕ( k_i,η')=(ℱ_ν(k_i,η'))^∗ϕ̂(𝐤_i) where ℱ_ν(k_i,η) is the on-shell solution of the free theory given in defF. Then, the above equation becomes δ S_n =λ∫∏_i=1^nd^d𝐤/(2π)^d(2π)^dδ^(d)(𝐤_1+...+𝐤_n)I(k_1,k_2,⋯,k_n)ϕ̂(𝐤_i) where I(k_1,k_2,..,k_n)=∫_-∞^η_1dη'/(-η')^d+1∏_i=1^n(ℱ_ν(k_i,η'))^∗ where ϵ is a small positive number. Now, we again substitute ϕ̂(𝐤_i)=ϕ(𝐤_i,η)/(ℱ_ν(k_i,η))^∗ using the relation eq.(<ref>) and get δ S_n=λ∫∏_i=1^nd^d𝐤/(2π)^d(2π)^dδ^(d)(𝐤_1+...+𝐤_n)I(k_1,⋯,k_n)∏_i=1^nϕ(𝐤_i,η)/(ℱ_ν(k_i,η))^∗ which is consistent with eq.(<ref>). To note, In the above equation,in principal we are free to take any value of η∈ (-∞,0). Now, to get the correction to the wave function near the boundary, we put the series expansion of of (ℱ_ν(k_i,η))^∗ in the neighbourhood of η=0 in appendeltasn. §.§ In Coherent State Basis As discussed in section <ref>, wave function at boundary in terms of field eigenstate, ψ[ϕ], can be represented in terms of coherent eigenstate, ψ[ρ] through the transformation given by der1psirho. which is a functional integral over ϕ. Using the relation defjpb, we can write this as ψ[ρ]=∫ DJ_+Ψ_ρ^∗[J_+]ψ[J_+] where Ψ_ρ[J_+] is given in formfa and ψ[J_+] is given by multlineψ[J_+]=exp[i∫d^d𝐤/(2π)^d Q_1 J_+( k)J_+(- k). .+iλ∫(∏_i=1^3d^d k_i(2π)^d)(2π)^dδ( k_1+ k_2+ k_3)Q_2(∏_i=1^3J_+( k_i))] where Q_1=(k^2iμf_μ^∗(k,η)∂_η f_μ^∗(k,η)/2(-η)^d-1)    Q_2=(∏_i=1^3k_i^iμ)I(k_1,k_2,k_3) All together, the integrand in der1jrho is evaluated as multlineexp[∫d^d𝐤/(2π)^dW_1ρ^∗( k) J_+( k)+i∫d^d𝐤/(2π)^d W_2 J_+( k)J_+(- k). .+iλ∫(∏_i=1^3d^d k_i(2π)^d)(2π)^dδ( k_1+ k_2+ k_3)Q_2(∏_i=1^3J_+( k_i))] where W_1=√(2μ) f_μ^∗(k,η) k^iμ(-η)^-Δ_+    W_2=(Q_1+Δ_+(f_μ^∗(k,η))^2 k^2iμ2(-η)^d) der2jrho, as argued in section <ref>, has a factor R_ dS^d-1 G_N which in semi-classical limit goes towards ∞. This brings us the opportunity to take saddle point approximation to solve the integral. The saddle point can be found by extremizing the integrand der2jrho which gives W_1ρ^∗( k) +2i W_2 J_+(- k)+3iλ∫(∏_i=2^3d^d k_i(2π)^d)(2π)^dδ( k+ k_2+ k_3)Q_2(∏_i=2^3J_+( k_i))=0 Above relation is exact and can be easily generalised for ϕ^n interaction. In order to get J_+ in terms of ρ^∗, we have to invert the above relation order by order in λ. The 0-th order saddle is found as J_+(- k)=iW_12W_2ρ^∗( k) which reproduces Jrelchi using Wcoef. 
The O(λ) correction, δ J_+ to the saddle value can be found from saddleco as δ J_+(- k)=3λ∫(∏_i=2^3d^d k_i(2π)^d)(2π)^dδ( k+ k_2+ k_3)Q_2W_1^28W_2^3(∏_i=2^3ρ^∗( k_i)) where we have used 0thsad in the 3rd term in LHS of saddleco. Hence the O(λ) contribution to the integral der1jrho is given by its first term as ∫d^d𝐤/(2π)^dW_1ρ^∗( k) δ J_+( k) = 3λ∫(∏_i=1^3d^d k_i(2π)^d)(2π)^dδ(- k_1+ k_2+ k_3)Q_2W_1^38W_2^3(∏_i=1^3ρ^∗( k_i)) The fact that the O(λ) contribution from 2nd term given by i∫d^d𝐤/(2π)^d W_2 [J_+( k)δ J_+(- k)+J_+(- k)δ J_+( k)] = -3λ∫(∏_i=1^3d^d k_i(2π)^d)(2π)^dδ(- k_1+ k_2+ k_3)Q_2W_1^38W_2^3(∏_i=1^3ρ^∗( k_i)) exactly cancels term1 ensures that the derivation can be easily generalised to ϕ^n interaction. Hence O(λ) contribution comes only from the last term in der2jrho obtained using Q1Q2 and Wcoef as multlineδψ[ρ]=exp[iλ (2μ)^32(α_μ^∗)^3∫(∏_i=1^3d^d k_i(2π)^d)(2π)^dδ( k_1+ k_2+ k_3). .×(∏_i=1^3k_i^iμ)I(k_1,k_2,k_3)(∏_i=1^3ρ^∗( k_i))] which matches with the O(λ) correction for ϕ^3 interaction in intwvfchi . Note that to calculate the contribution at O(λ^n), we should be given the saddle value up to O(λ^n-1). Using that in saddleco, we get J_+ in terms of ρ^∗ correct up to O(λ^n). Now we put the relation in der2jrho to get the coherent state wave function correct up to O(λ^n). § ANALYTIC CONTINUATION FROM ADS TO DS Here we briefly study scalar field in AdS and subsequently discuss how the results in overdamped case in dS can be obtained through analytic continuation. Scalar Field in AdS The action for a massive scalar field in Euclidean AdS is written as follows S=1 2G_N∫ d^d+1x√(g)(g^μν∂_μϕ∂_νϕ+M^2ϕ^2) where the metric is given in metadsea. The dependence on L can be determined by dimensional analysis and we will continue it using eq.(<ref>). Also it is convenient to G_N=1 where G_N is Newton's constant of gravitation. Normalised on-shell solution for ϕ in momentum space is given by ϕ( k,z) = z^d/2 K_ν (kz) /ϵ^d/2 K_ν (kϵ)ϵ^d2-νϕ_b( k) where ν=√(d^24+M^2 L^2) and ϵ is the cut-off value of z close to the AdS boundary. On-shell action is given in momentum space as S_∂ = -L^d-1/2∫d^d𝐤(2π)^d1/z^d-1ϕ( k,z) ∂_z ϕ(- k,z) |_z=ϵ The partition function in EAdS up to O(ϵ^2ν) is given by Z_AdS=exp[-S_]=exp[L^d-1/2∫d^d𝐤/(2π)^d{(d/2 - ν) ϵ^-2 ν + k^2νa_ν/b_ν 2 ν}ϕ_b( k) ϕ_b (- k)] where a_ν and b_ν given by a_ν=2^-ν -1Γ[-ν] b_ν=2^ν -1Γ [ν] are coefficients of leading modes of K_ν(kz) near boundary given by K_ν(kz)=a_ν(kz)^ν+b_ν(kz)^-ν Using the definition nptads, the 2pt. correlator is found to be ⟨ O(𝐤) O(-𝐤) ⟩_AdS = - π/2^2ν-1Γ[ν]^2 sin(πν) k^2ν L^d-1 where we used coefads and discarded the local contribution that would arise under position space representation of Z_AdS. Similarly the partition function for integer ν is given by log Z_AdS=L^d-1/2∫d^d𝐤/(2π)^d[(d/2-ν)ϵ^-2ν+{b̃_0/ã_0(1+2νlog(ϵ))+2νc̃_0/ã_0}k^2ν. .+2νb̃_0/ã_0 k^2νlog(k) ] ϕ_b( k) ϕ_b (- k) where (γ_E being the Euler number) ã_0=2^ν-1Γ[ν] b̃_0=(-1)^ν-1/2^νΓ[ν+1]c̃_0=(-1)^ν+1/2^νΓ[ν+1](γ_E-1/2∑_m=1^d/21/m-log(2)) with the two point correlator in momentum space for integer ν ⟨ O(𝐤)O(-𝐤)⟩_AdS=-(-1)^ν/2^2ν-2Γ[ν]^2L^d-1k^2νlog(k) In position space, both adstwopt and okokint takes the form ⟨ O(𝐱)O(𝐲)⟩_AdS= 2ν/π^d2Γ[d/2+ν]/Γ[ν]L^d-1| x- y|^d+2ν Analytic Continuation Now we can go to dS from EAdS through analytic continuation. 
The primary rules of the continuation are given by L → iR_ dS, z → -i η According to ancnt1, the cut-off is also continued as ϵ=-iη_1 Now, for the Euclidean AdS, we specify the boundary condition to be ϕ(𝐤,z) = ϵ^d2-νϕ_b (𝐤) while at the boundary of dS, we have ϕ(𝐤,η) = (-η_1) ^d2-νϕ̂(𝐤) Now continuing the cut-off ϵ to η_1 from EAdS to dS given by cutcont and L as given by eq.(<ref>), then gives the the rule of continuation for the sources from AdS to dS as follows ϕ_b ( k) = i^-d2+νϕ̂( k) and one can check using eq.(<ref>) and eq.(<ref>) that i^2νa_ν/b_ν=β_ν^∗α_ν^∗=-π 2^-2ν e^i πνν(Γ[ν])^2sinπν where the last equality comes from ratalbe. Note that in the context of this continuation, the source in dS is not the same as that in EAdS but differs by a phase factor. Using the rules eq.(<ref>), eq.(<ref>) in eq.(<ref>), we get for non-integer νψ_AdS→ dS=exp[R_ dS^d-1∫d^d𝐤/(2π)^d{-i/2(d/2 - ν) (-η_1)^-2 ν + iπ 2^-2ν e^i πν (Γ[ν])^2sinπνk^2ν}ϕ̂( k) ϕ̂ (- k)] which is exactly dS wave function eq.(<ref>) in overdamped case taking R_ dS=1, ignoring normalising factor. For integer ν, the continuation results logψ_AdS→ dS= -R_ dS^d-1/2∫d^d𝐤/(2π)^dϕ̂(𝐤)ϕ̂(-𝐤)[i(d/2-ν)(-η_1)^-2ν +i{(1+2νlog(-η_1))β̅_ν^∗/α̅_ν^∗+2νC̅_ν^∗/α̅_ν^∗}k^2ν+2i νβ̅_ν^∗/α̅_ν^∗k^2νlog(k)] where α̅_ν^∗ =i2^νΓ[ν]/π β̅_ν^∗ =-i/2^ν-1πΓ[ν+1] C̅_ν^∗ =-i/2^ν-1πΓ[1+ν](γ+iπ/2-1/2∑_m=1^ν1/m-log 2) The two point correlator can be extracted by the definition npt and is given by ⟨ O(𝐤)O(-𝐤)⟩= [-1](1-i(πν)) π 2^-2ν+1 (Γ[ν])^2R_ dS^d-1k^2ν non-integer  ν i/2^2ν-2Γ[ν]^2R_ dS^d-1k^2νlog(k) integer ν where we have discarded the pieces which gives local contribution while converting in position space. The result okoknintds for non-integer case agrees with twptb. Similarly, the two point function in position space is obtained by the Fourier transform of eq.(<ref>) as ⟨ O(𝐱)O(𝐲)⟩ =e^iπ2(2ν-1)2νΓ[d2+ν]/π^d2Γ[ν]R_ dS^d-1/|𝐱-𝐲|^d+2ν irrespective of ν being integer or non-integer which agrees with posspa. In dS, the acceptable wave function in momentum space has to be normalisable. The normalisability issue is satisfied in case of non-integral ν since the real part of the correlator, eq.(<ref>) non-integral case, is positive. However for integral ν, the issue is more subtle since the two point correlator, eq.(<ref>) integral case, is pure imaginary for integer ν, as mentioned in section <ref>. But the dS wave function, eq.(<ref>) is still normalizable because of the following contribution from the contact term -i/2R_dS^d-1{(1+2νlog(-η_1))β̅_ν^∗/α̅_ν^∗+2νC̅_ν^∗/α̅_ν^∗}k^2νϕ̂(-𝐤)ϕ̂(-𝐤) =i k^2ν/2^2ννΓ[ν]^2R_dS^d-1{1+2ν(γ+iπ/2-1/2∑_m=1^ν1/m-log[2]+log[-η_1])}ϕ̂(-𝐤)ϕ̂(-𝐤) =-π R_dS^d-1/2^2νΓ[ν]^2k^2νϕ̂(-𝐤)ϕ̂(-𝐤)+ (pure imaginary term) Therefore, the 2nd term inside the curly bracket in dampint provides a damping contribution to the wave functional given by exp[-R_ dS^d-1/2∫d^d𝐤/(2π)^dπ/2^2ν-1Γ[ν]^2k^2νϕ̂( k)ϕ̂(- k)] §.§ Analytic Continuation from Overdamped to Underdamped In the preceding discussion we saw that the results obtained for the overdamped scalar fields can be understood through an analytic continuation of the corresponding results in the case of AdS. In this subsection we will show that the results obtained for the underdamped scalar fields can similarly be obtained by doing an analytic continuation of the corresponding results in the overdamped case. Consider the two point function obtained for the overdamped scalar fields, posspa, reproduced below. 
ł O( x)O( y)=̊e^i π (2ν-1) 2 2 νΓ[d2+ν]π^d2Γ[ν]1 | x- y|^d+2ν Similarly consider the two point function obtained for the underdamped scalar fields in the coherent state basis,rho2pt, reproduced below. ł O_+( x)O_+( y)=̊iΓ[1-iμ]Γ[d2+iμ]π^1+d2(1+(πμ))1| x- y|^d+2iμ Simplifying the above expression we get, ł O_+( x)O_+( y)=̊e^-πμΓ[d2+iμ]π^d2Γ[i μ]1| x- y|^d+2iμ The rule for analytic continuation in going from overdamped to underdamped will be, ν→ i μ. Then posspa2 becomes, ł O( x)O( y)|̊_ν→ iμ=e^i π (2 i μ -1) 2 2 i μΓ[d2+i μ]π^d2Γ[i μ]1 | x- y|^d+2iμ Comparing rho2pt with posspa3 we get, ⟨ O_+( x) O_+( y)⟩/⟨ O( x)O( y)⟩|_ν→ i μ = 1/2 μ The factor 1/2 μ = 1/(√(2μ))^2 can be thought of as the ratio of normalizations of ρ and ϕ̂. To verify this statement let us now compare three point functions of both underdamped and overdamped fields. The three point function for overdamped field, given by 3ptover in k-space, can be expressed in position space [Fourier transform of three point function considering fields with different masses is explicitly performed in Appendix <ref>.] as ł O( x_1)O( x_2)O( x_3)=̊-3λπ^dΓ[d4+3ν2]Γ[Δ2]^3Γ[ν]^3× exp[iπ2(3+3ν-d2)] | x_1- x_2|^Δ| x_2- x_3|^Δ| x_3- x_1|^Δ where Δ is given in overdim. Three point function for underdamped field, given by 3ptcoh in k-space, can be expressed in position space as ł O_+( x_1)O_+( x_2)O_+( x_3)=̊ -3λ (√(2μ))^3π^dΓ[d4+3iμ2]Γ[Δ_+2]^3Γ[iμ]^3× exp[iπ2(3+3iμ-d2)] | x_1- x_2|^Δ_+| x_2- x_3|^Δ_+| x_3- x_1|^Δ_+ where Δ_+ is given in underdim. Doing the analytic continuation of 3ptxov using anconrel we get, ł O( x_1)O( x_2)O( x_3)|̊_ν→ iμ= -3λπ^dΓ[d4+3iμ2]Γ[Δ_+2]^3Γ[iμ]^3× exp[iπ2(3+3iμ-d2)] | x_1- x_2|^Δ_+| x_2- x_3|^Δ_+| x_3- x_1|^Δ_+ Comparing ovun3 with 3ptxun we get, ⟨ O_+( x_1) O_+( x_2) O_+( x_3)⟩/⟨ O( x_1)O( x_2)O( x_3)⟩|_ν→ i μ = 1/(√(2μ))^3 Thus we see that there is an analytic continuation from overdamped to underdamped case given by anconrel after one takes into account the ratio of normalization of ρ and ϕ̂. We expect this feature to hold for any n point function. In conclusion, we see that starting from AdS one can do an analytic continuation ancnt1 to obtain the n point function in overdamped case and then can do a further analytic continuation anconrel to obtain the n point function in the underdamped case. § MORE ON THE THREE POINT CORRELATOR In section <ref> and <ref>, we studied the holography for overdamped and underdamped fields including interaction. Here we give more details on, n-point correlations function obtained in those analysis. §.§ Analytic Continuation of Correlation Function Let us consider single vertex interaction in dS with three different scalar fields ϕ_1,ϕ_2,ϕ_3 of masses m_1,m_2,m_3 respectively. Here the m_i's can either be all under-damped or over-damped or mixed. For complete generality, one can consider the mixed case and obtain the n-point correlator as multlineł O_1( k_1)O_2( k_2)⋯ O_n( k_n)=̊C_n[{a};{b}](2π)^dδ^(d)(∑_i=1^nk_i) × k_1^ν_1k_2^ν_2⋯ k_n^ν_n∫_-∞^η_1dη'-η'(-η')^(n2-1)d∏_j=1^n H_ν^∗_j^(2)(-k_jη') where ν_a takes real values for overdamped fields and is purely imaginary, {ν_a=iμ_a:μ_a∈}, for underdamped fields. C_n[{a};{b}] is given by C_n[{a};{b}]= iλ n!∏_{a}(ℂ_ν_a^∗α_ν_a^∗)∏_{b}(ℂ_μ_b^∗√(2μ)α_μ_b^∗) where {a} refers to all the overdamped fields and {b} refers to all the underdamped fields and {a}⋃{b}={1,2,⋯,n}, with n being total no.of fields. The ratio ℂ_να_ν, ℂ_μα_μ are given in defab, defabuC respectively. Consider the integral in wittenmom3. 
I(k_1,k_2,⋯,k_n)=∫_-∞^η_1dη'-η'(-η')^(n2-1)d∏_j=1^n H_ν^∗_j^(2)(-k_jη') The argument -kη, when written as -i(-ikη), satisfies arg(-ikη)=π2 Since, K_q(x)= π/2(-i)^q+1H^(2)_q(-ix), -π/2<(x)<π we can rewrite Iexp as, I(k_1,k_2,⋯,k_n)=(2π)^n e^iπ2(n+∑_jν^∗_j)∫_-∞^η_1dη'-η'(-η')^(n2-1)d∏_j=1^n K_ν^∗_j(-ik_jη') Substituting η'=iτ, we get I(k_1,k_2,⋯,k_n)=(2π)^n e^iπ2(n+∑_jν^∗_j-d(n-2)2)∫_-iη_1^i∞dτ τ^(n2-1)d-1∏_j=1^n K_ν^∗_j(k_jτ) The particular integral in the above equation can be represented in complex plane[K_q(z) has pole at origin for z∈ℂ and a branch cut starting from origin, see <cit.>] by the vertical (parallel to imaginary axis) blue line in the contour drawn in Figure <ref>. Since the asymptotic behaviour of the integrand is τ^(n2-1)d-1 K_ν_1^∗(k_1 τ)K_ν_2^∗(k_2 τ)⋯ K_ν_n^∗(k_n τ) e^-(k_1+k_2+⋯+k_n)ττ^(n2-1)d-(n2+1)√(k_1k_2⋯ k_n) the arc (brown curve) contribution at large radius vanishes. Using Kboun, the anti-clockwise integral contribution from the small arc (red curve) of radius -η_1 is given by I_D=i ∑[(-η_1)^ζ_m,pe^iπζ_m,p2-1ζ_m,p∏_{m}(a_ν_m^∗k_m^ν^∗_m)∏_{p}(b_ν_p^∗k_p^-ν^∗_p)] {m}∪{p}={1,2,⋯,n} where a_ν,b_ν are given in coefads, ∑ denotes sum of all possible combinations of {m},{p}. The product ∏_{m}k_m, for say {m}={1,3,4} is equal to k_1k_3k_4 and ζ_m,p is defined as ζ_m,p=(n2-1)d+(∑_{m}ν^∗_m-∑_{p}ν^∗_p) Hence I(k_1,⋯,k_n) is equal to the real line integral (from -η_1 to ∞) in addition to the small arc contribution as I(k_1,k_2,⋯,k_n)=(2π)^n e^iπ2(n+∑_jν^∗_j-d(n-2)2)[∫_-η_1^∞dτ τ^(n2-1)d-1∏_j=1^n K_ν^∗_j(k_jτ)-I_D] Note that convergence of I_D depends on ν_i. Specifically on the power of η_1 which is given in etap. For example for all fields being underdamped, I_D goes to zero as η_1→ 0 since all ν are purely imaginary and as a result ζ_m,p is a positive real value. Similarly when all fields are overdamped, depending on the sign of ζ_m,p, it will either converge or diverge. Note also that the τ integral on the RHS of IexpK3 can also diverge, depending on ν, see the related discussion in section (<ref>). For n=3 the finite part of IexpK3 can be obtained as follows: we keep the first term within the bracket on the RHS of eq.(<ref>), and take the lower limit to go to η_1→ 0, this gives, eq.(<ref>). We then evaluate the integral by analytic continuation starting from values of ν where it converges. §.§ Fourier Transform of Three Point Correlation Function In this subsection we give a detailed derivation of the position space form of three point function by Fourier transforming the momentum space form given in wittenmom3 for n=3. Without being too much concerned for the divergent contributions of the integral in IexpK3, three point correlation function of the dual field O_i( k) in momentum space is given as follows ł O_1( k_1)O_2( k_2)O_3( k_3)'̊=A(ν_1,ν_2,ν_3)k_1^ν_1k_2^ν_2k_3^ν_3∫_0^∞dz z^d2-1K_ν^∗_1(k_1z)K_ν^∗_2(k_2z)K_ν^∗_3(k_3z) where we suppress (2π)^dδ( k_1+ k_2+ k_3) by introducing "prime", see section <ref> for explanation. ν is used for both over-damped and under-damped cases and, A(ν_1,ν_2,ν_3)=C_3[{a};{b}] (2π)^3 e^iπ2(3+ν^∗_1+ν^∗_2+ν^∗_3-d2) Note that in the underdamped case ν=i μ. Thus we have K_-i μ (k z) as one of the factors inside the integral. However since K_-q (k z) = K_q (k z), we can write it as K_i μ (k z), or equivalently K_ν (k z). In the overdamped case ν's are real. Hence from now on we write appeq.adso3k as follows. 
ł O_1( k_1)O_2( k_2)O_3( k_3)'̊=A(ν_1,ν_2,ν_3)k_1^ν_1k_2^ν_2k_3^ν_3∫_0^∞dz z^d2-1K_ν_1(k_1z)K_ν_2(k_2z)K_ν_3(k_3z) Fourier transforming appeq.adso3k2 we get, ł O_1( x_1)O_2( x_2)O_3( x_3)=̊∫ D^d_3 k (2π)^dδ( k_1+ k_2+ k_3)e^i k_j. x_jł O_1( k_1)O_2( k_2)O_3( k_3)'̊ where, D^d_n k=1(2π)^nd∏_i=1^nd^d k_i Using appeq.adso3k, the position space representation of three point function comes out as multlineł O_1( x_1)O_2( x_2)O_3( x_3)=̊A(ν_1, ν_2,ν_3)Γ[Δ_1]Γ[Δ_2]Γ[Δ_3] 2^3-ν_1-ν_2-ν_3π^3d2× ∫_0^∞dz z^d+1∫ d^d x ∏_i=1^3[(z z^2+| x- x_i|^2)^d2+ν_i] where we used the identity ∫d^d k(2π)^de^i k. rk^ν K_ν(kz)=Γ[d2+ν]2^1-νπ^d2z^-d2(z z^2+r^2)^d2+ν We give here a brief derivation of the above identity. One can start from RHS with z∈ as follows ∫ d^dr e^-i k. r(1 z^2+r^2)^Δ = ∫ d^dr e^-i k. r1Γ[Δ]∫_0^∞ds s^Δ-1e^-s[z^2+r^2] ( using definition of Γ-function) = 1Γ[Δ]∫_0^∞ds s^Δ-1e^-sz^2(π s)^d2e^-k^2 4s ( solving r-integral) = π^d2Γ[d2+ν]2^1-νz^-νk^ν K_ν(kz) ( using Δ=d2+ν) Rest of the derivation is analogous to <cit.>. However we include the analysis for completion. Without loss of generality, we choose x_3=0 by translational invariance and change the variables as (small Latin indices refers to components while capital Latin is for the fixed positions at boundary) z=Z Z^2+X^2 x^i=X^i Z^2+X^2 x^i_A=X^i_A (X_A)^2 where i=1,2,...,d and A=1,2,3. These imply the following relations [ Note that from first relation in eq.(<ref>), x_3=0 implies X_3→∞. ] x_A^i (x_A)^2=X_A^iz z^2+| x|^2=Zz z^2+| x- x_A|^2=Z(X_A)^2 Z^2+| X- X_A|^2 (A=1,2) and the invariance of the measure dzd^d x z^d+1=dZd^d X Z^d+1 Hence the integral in eq.(<ref>) becomes ∫_0^∞dz z^d+1∫ d^d x ∏_A=1^3[(z z^2+| x- x_A|^2)^d2+ν_A] =1 x_1^2Δ_1x_2^2Δ_2∫_0^∞dZ∫ d^d X[Z^Δ_1+Δ_2+Δ_3-d-1(Z^2+| X- X_1|^2)^Δ_1(Z^2+| X- X_2|^2)^Δ_2] The result of the integral in eq.(<ref>) is given in <cit.>, eq.(22). Using that, we get ł O_1( x_1)O_2( x_2)O_3( x_3)=̊A(ν_1, ν_2,ν_3)Γ[Δ_1]Γ[Δ_2]Γ[Δ_3] 2^3-ν_1-ν_2-ν_3π^3d2 I(Δ_1,Δ_2,Δ_3) ×1 | x_1- x_2|^Δ_1+Δ_2-Δ_3| x_2- x_3|^Δ_2+Δ_3-Δ_1| x_3- x_1|^Δ_3+Δ_1-Δ_2 where we have transformed the result to old coordinates by using eq.(<ref>) as well as restored x_3 by translational invariance. I({Δ_i}) is given by I(Δ_1,Δ_2,Δ_3)=π^d22Γ[Δ_1+Δ_2+Δ_3-d2]Γ[Δ_1+Δ_2-Δ_32]Γ[Δ_1-Δ_2+Δ_32]Γ[-Δ_1+Δ_2+Δ_32]Γ[Δ_1]Γ[Δ_2]Γ[Δ_3] The above formula is valid in both under-damped and over-damped case where Δ_i's can be either Δ_i= d/2 +ν_i in over-damped case or Δ_i= d/2 + iμ_i in under-damped case. §.§ OPE limit of Three Point Function Consider the three point function obtained in AppD3.adso3x. This three point function should reduce to an effective two point function once two points are brought closer compared to the third. In mathematical terms one can write, 𝐱_1→𝐱_2 and |𝐱_1-𝐱_2|≪ |𝐱_1-𝐱_3| In momentum space the limits opex correspond to taking k_1∼ k_2≫ k_3 In this subsection we show through an explicit calculation that the above expectation is indeed borne out for the three point function obtained in AppD3.adso3x. We do this by analysing AppD3.adso3x directly in the limits opex and then through the manipulation of wittenmom3 in the limits klimits1. Consider then AppD3.adso3x. In the limit opex, AppD3.adso3x reduces to ł O_1( x_1)O_2( x_2)O_3( x_3)≃̊A(ν_1, ν_2,ν_3)Γ[Δ_1]Γ[Δ_2]Γ[Δ_3] 2^3-ν_1-ν_2-ν_3π^3d2 I(Δ_1,Δ_2,Δ_3) ×1| x_1- x_2|^Δ_1+Δ_2-Δ_3| x_2- x_3|^2Δ_3 where A(ν_1,ν_2,ν_3) and I(Δ_1,Δ_2,Δ_3) are given in A and I respectively. 
Identifying the two point function in posspa for overdamped and rho2pt for underdamped cases, we can write limopeo3 as lim_ x_1→ x_2 | x_1- x_3|≫ | x_1- x_2|ł O_1( x_1)O_2( x_2)O_3( x_3)=̊ C_123| x_1- x_2|^Δ_1+Δ_2-Δ_3ł O_3( x_2)O_3( x_3)where the structure constant C_123 is given for ϕ_3 being overdamped by C_123=A(ν_1, ν_2,ν_3)Γ[Δ_1]Γ[Δ_2]Γ[ν_3] (2ν_3)×2^3-ν_1-ν_2-ν_3π^d I(Δ_1,Δ_2,Δ_3) e^-iπ(2ν_3-1)2 and for underdamped case by C_123=A(ν_1, ν_2,iμ_3)Γ[Δ_1]Γ[Δ_2]Γ[iμ_3] 2^3-ν_1-ν_2-iμ_3π^d I(Δ_1,Δ_2,Δ_3)× e^πμ_3 as discussed in section <ref>. Now, let us understand how the singularity structure of the three point function arises in the OPE limit as in scform1 using Witten's diagram. We first consider wittenmom3. In the limit klimits1, we consider a region in the η' integral of wittenmom3 where η' takes values so that the following conditions are met. -k_1η'≫1 -k_2η'≫1 -k_3η'≪ 1 We will see below that the OPE limit of the three point function will arise from this region. The limits can be understood diagrammatically in Figure <ref> comparing with Figure <ref> as the points x_1, x_2 come closer to each other. The dominant contribution in the bulk integral thereby comes from the bulk vertex ( x',η') close to the boundary following the condition klimits1. Under klimits2 being met, we insert the leading behaviour of the Hankel functions of order ν_1,ν_2 at large argument as given by horH while that of ν_3 at small argument as in bounH1. This reduces wittenmom3 to the following form ł O( k_1)O( k_2)O( k_3)∝̊(2π)^dδ^(d)(𝐤_1+𝐤_2+𝐤_3)[α̃(ν_3^∗) k_1^ν_1+ν_2-1 k_3^ν_3+ν_3^∗. ×.∫_-1/k_1^-1/k_3 dη'   (-η')^d/2-2+ν_3^∗e^2 i k_2η' +β̃(ν_3^∗) k_1^ν_1+ν_2-1k_3^ν_3-ν_3^∗∫_-1/k_1^-1/k_3 dη'   (-η')^d/2-2-ν_3^∗e^2i k_2η'] where α̃(ν_3^∗) and β̃(ν_3^∗) are the expansion coefficients of H^(2)_ν_3^∗(-k_3η'). Now, apart from the non-local structure, it is important to show that the above integrals over η' are finite. To show this, we substitute -2 k_1η'=2-i z in both of the integrals of wittenmomlim which yields the range of z as z∈[0,-2i(1-k_1 k_3)][0,i∞). So, we write ł O( k_1)O( k_2)O( k_3)∝̊(2π)^dδ^(d)(𝐤_1+𝐤_2+𝐤_3) ×[α̃(ν_3^∗) k_1^ν_1+ν_2-1 k_3^ν_3+ν_3^∗i(2k_1)^d2-1+ν_3^∗∫_0^i∞ dz   (2-iz)^d/2-2+ν_3^∗e^-i(2-iz). .+β̃(ν_3^∗) k_1^ν_1+ν_2-1k_3^ν_3-ν_3^∗i(2k_1)^d2-1-ν_3^∗∫_0^i∞ dz   (2-iz)^d/2-2-ν_3^∗e^-i(2-iz)] Now, each of the above integrals over z has a branch cut originating from z=0, and a pole on the negative imaginary axis. Hence, we align the branch cut away from the first quadrant and perform a contour rotation from the positive imaginary axis to positive real axis by applying Cauchy's theorem. Since the arc contributions close to z=0 and |z|=R (large) vanishes, this leads to the following result lim_k_1≫ k_3 k_2≫ k_3⟨O(𝐤_1)O(𝐤_2)O(𝐤_3)⟩∝-(2π)^dδ^(d)(𝐤_1+𝐤_2+𝐤_3) ×[(α̃(ν_3^∗)Γ[d/2+ν_3^∗-1,2i]/(2i)^d/2+ν_3^∗-1)(k_1)^-d/2+ν_1+ν_2-ν_3^∗ (k_3)^ν_3+ν_3^∗. .+(β̃(ν_3^∗)Γ[d/2-ν_3^∗-1,2i]/(2i)^d/2-ν_3^∗-1)(k_1)^-d/2+ν_1+ν_2+ν_3^∗(k_3)^ν_3-ν_3^∗] where in the above eq.(<ref>), Γ[α,2i] is the incomplete Gamma function defined as Γ[α,2i]=i^α-1∫_0^∞dz  e^-i(2-iz)(2-iz)^α-1 For ϕ_3 being overdamped field, ν^∗_3=ν_3. Hence, in the 2nd term in the square bracket of OPEmom containing k_3^ν_3-ν_3^∗ becomes k_3 independent. 
Now, using the following Fourier transform results ∫∏_j=1^3 d^d𝐤_j/(2π)^de^i𝐤_a·𝐱_a(2π)^dδ^(d)(Σ_b=1^3 𝐤_b)k_1^αk_3^β=𝙸[d,α]/|𝐱_2-𝐱_1|^d+α𝙸[d,β]/|𝐱_3-𝐱_2|^d+β ∫∏_j=1^3 d^d𝐤_j/(2π)^de^i𝐤_a·𝐱_a(2π)^dδ^(d)(Σ_b=1^3 𝐤_b)k_1^α=𝙸[d,α]/|𝐱_3-𝐱_1 |^d+αδ^(d)(𝐱_3-𝐱_2) where 𝙸[d,α] is some finite coefficient, we schematically obtain the following lim_ x_1→ x_2 | x_1- x_3|≫ | x_1- x_2|⟨ O_1(𝐱_1)O_2(𝐱_2)O_3(𝐱_3)⟩≃C(ν_1,ν_2,ν_3)/|𝐱_2-𝐱_1|^ν_1+ν_2-ν_3+d2|𝐱_3-𝐱_2|^d+2ν_3 where C(ν_1,ν_2,ν_3) is some finite coefficient. The second term in the square bracket of OPEmom does not contribute to the above equation since its' position space expression is related to FTterm2 which vanishes because of x_3≠ x_2. Now, using the expression of the two point function as in posspa and defining Δ_a=d2+ν_a, we can reduce OPEopos to the singularity structure same as of scform1. This gives us the complete picture why this structure is arising. Similarly, for ϕ_3 being underdamped, ν^∗_3=-ν_3=-iμ_3, the 1st term in the square bracket of OPEmom containing k_3^ν_3+ν_3^∗ becomes k_3-independent. Using FTterm1, FTterm2, we get lim_ x_1→ x_2 | x_1- x_3|≫ | x_1- x_2|⟨ O_1(𝐱_1)O_2(𝐱_2)O_3(𝐱_3)⟩≃D(ν_1,ν_2,μ_3)/|𝐱_2-𝐱_1|^ν_1+ν_2-iμ_3+d2|𝐱_3-𝐱_1|^d+2iμ_3 with D(ν_1,ν_2,μ_3) is some finite coefficient which is also of the same singularity structure as given in scform1. §.§ Ratio of Three to Two Point Function We conclude this Appendix with two comments. Firstly, the ratio of three to two point function in coherent state basis differs from that in field eigenstate basis. To see this note that in the momentum space, three point function of the dual field O_+( k) sourced by ρ^∗( k) can be read off from intwvfchi as ł O_+( k_1)O_+( k_2)O_+( k_3)=̊3!iλ (√(2μ)α_μ^∗)^3(2π)^dδ( k_1+ k_2+ k_3)k_1^iμk_2^iμk_3^iμI(k_1,k_2,k_3) where I(k_1,k_2,k_3) is given in Iexp. Two point function of O_+( k) can be read off from psirho1 as ł O_+( k)O_+(- k)=̊β_μ^∗α_μ^∗k^2iμ Hence the ratio of O3coh to O2coh in coherent state basis is given by ł O_+( k_1)O_+( k_2)O_+( k_3)ł O_+( k)O_+(- k)=(2π)^dδ( k_1+ k_2+ k_3)3!iλ (√(2μ))^3α_μ^*2β_μ^∗k_1^iμk_2^iμk_3^iμk^-2iμI(k_1,k_2,k_3) Similarly in field eigenstate basis, the ratio of three to two point function of the dual field Ô_+( k) sourced by J_+( k) can be derived from formco and <O+O+> as łÔ_+( k_1)Ô_+( k_2)Ô_+( k_3)łÔ_+( k)Ô_+(- k)=(2π)^dδ( k_1+ k_2+ k_3)(-3!λ dα_μ^∗β_μ^∗)k_1^iμk_2^iμk_3^iμk^-2iμI(k_1,k_2,k_3) Clearly the results O3/O2coh and O3/O2 differs by the factor -idα_μ^∗(2μ)^32 as mentioned in section <ref>. Finally one could ask what happens if the scalar field in anti-de Sitter (AdS) space has a different normalization than that of <cit.>. To answer this let us consider the scalar field in AdS with two different normalizations. Following <cit.> we have, ϕ(k,z) =z^d/2 K_ν (k z) /ϵ^d/2 K_ν (k ϵ) ϵ^d/2-νϕ̂ (k) This has the property that, lim_z →ϵϕ( k,z) = ϵ^d/2-νϕ̂ ( k) Thus the relationship between the bulk field and boundary source is local. Given phisol1 one could compute the two point function as well as three point function for a ϕ^3 interaction of strength λ and get, ⟨ O(k) O(-k)⟩ = 2νa_ν/b_ν k^2 ν and, ⟨ O(k_1)O(k_2)O(k_3)⟩ = -6λ/b_ν^3 k_1^ν k_2^ν k_3^ν∫_0^∞dz z^d2-1K_ν_1(k_1z)K_ν_2(k_2z)K_ν_3(k_3z) where a_ν,b_ν are given in coefads. Let us consider a different normalization for the source instead of phisol1. 
ϕ(k,z) = z^d/2 K_ν (k z) k^ν J( k) This has the property that, lim_z →ϵϕ(k,z)= b_νϵ^d/2-ν J(k) + a_νϵ^d/2+ν k^2 ν J(k) Thus in this case the relationship between the bulk field and boundary source is non-local unlike rel1. Then for phisol2 the two-point function turns out to be, ⟨Ô(k) Ô(-k)⟩ = d a_ν b_ν k^2 ν while the three-point function reads, ⟨Ô(k_1) Ô(k_2) Ô(k_3)⟩ = -6 λ k_1^ν k_2^ν k_3^ν∫_0^∞dz z^d2-1K_ν_1(k_1z)K_ν_2(k_2z)K_ν_3(k_3z) Comparing 2pt1 with 2pt2 and 3pt1 with 3pt2, we see that the two-point and three-point coefficients for different normalization are different. Since we are allowed to choose our sources anyhow we want let us choose a different source Ĵ which is related to J as follows. Ĵ (k) = b_ν J(k) Then the two-point function 2pt2 becomes, ⟨Õ(k) Õ(-k)⟩ =d a_ν/b_ν k^2 ν This does not match with 2pt1. However eq,3pt2 becomes, ⟨Õ(k_1) Õ(k_2) Õ(k_3)⟩ = -6 λ1/b_ν^3 k_1^ν k_2^ν k_3^ν∫_0^∞dz z^d2-1K_ν_1(k_1z)K_ν_2(k_2z)K_ν_3(k_3z) which is same as 3pt1. So we see that different normalizations as in phisol1 and phisol2 give rise to varying ratios of 3-point to 2-point which any suitable rescaling of boundary source can not remedy. Furthermore, from supersymmetry constraints on chiral operators we know that the ratio of two point to three point coefficient ratio is protected <cit.>. Thus a different normalization as in phisol2 gives unphysical result. § TRANSFORMATIONS UNDER DIFFEOMORPHISMS The SO(1,d+1) isometry generators of d+2 dimensional Minkowski spacetime (signature:(-++...+)) with coordinates X^A=X^0,X^1,X^2,...,X^d+1 defined as L_A,B =i[X_A∂/∂ X^B-X_B∂/∂ X^A] satisfy the commutation relation [L_AB,L_CD]=i(g_BCL_AD+g_ADL_BC-g_BDL_AC-g_ACL_BD) where g_AB is the metric of ℝ^1,d+1. Now, dS_d+1 spacetime is an hyperboloid embedded in ℝ^1,d+1 with the constraint embhyp. This dS_d+1 spacetime in Poincaré coordinate x'={η,x^j} (j=1,..,d) can be parametrized by X^0 =R_ dS/2(η-1/η)-R_ dS/2η(x_jx^j) X^i =R_ dS/ηx^i X^d+1 =R_ dS/2(η+1/η)-R_ dS/2η(x_jx^j) General coordinate transformations are given by x'^i= x^i+v^i(𝐱,η) η' = η(1+ϵ(𝐱,η)) The generators of the isometries in the Poincaré coordinates take the form, P_i =i∂/∂ x^i M_ij =-i(x_i∂/∂ x^j-x_j∂/∂ x^i) D =i(η∂/∂η+x^i∂/∂ x^i) K_i =i[2x_i(η∂/∂η+x^j∂/∂ x^j)-(x_jx^j-η^2)∂/∂ x^i] where P_i,M_ij,D,K_i are related to L_AB in the following way P_i=L_0,i+L_d+1,i     M_ij=-L_ij     D=L_d+1,0     K_i=L_d+1,i-L_0,i These generators give rise to infinitesimal diffeomorphisms of the form eq.(<ref>) with Translation   :  v^i(𝐱,η)=ε^i ϵ(𝐱,η)=0 Rotation   :  v^i(𝐱,η)=ε^i_ jx^j ϵ(𝐱,η)=0 dilatation   :  v^i(𝐱,η)=ε x^i ϵ(𝐱,η)=ε SCT   :  v^i(𝐱,η)=[2x^i(b^jx_j)-(x^jx_j-η^2)b^i] ϵ(𝐱,η)=2(b^jx_j) Now, it turns out that there are two kinds of coordinate transformations which keep the metric as given by metadm in the ADM gauge satisfying N^2=(-η)^-2 and N^i=0. These transformations can be thought of as residual gauge transformation as given by Spatial Reparametrization   :  v^i(𝐱,η)=v^i(𝐱) ϵ(𝐱,η)=0 Time Reparametrization   :  v^i(𝐱,η)=1/2η^2∂^iϵ(𝐱)+𝒪(η^4) ϵ(𝐱,η)=ϵ(𝐱) §.§ Transformation of Bulk Fields Scalar field Under general coordinate transformation, dif, bulk scalar field remains invariant ϕ'( x',η')=ϕ( x,η) which results functional change of the field as δϕ( x,η)≡ϕ'( x,η)-ϕ( x,η)=-v^i( x,η)∂_iϕ( x,η)-ϵ( x,η) η∂_ηϕ( x,η) We can find the transformation of the bulk field under conformal transformation by putting the expressions of v^i( x,η) and ϵ( x,η) given by dif1 to dif4. 
For example, in SCT δ_Kϕ( x,η)=-b^i[2x_i(η∂_η+x^j∂_j)-(x_jx^j-η^2)∂_i]ϕ( x,η) We can get the transformation in momentum space by Fourier transform. E.g., for SCT δ_Kϕ( k,η)=-ib^i[2η∂_ηi-2di-2k_jji+k_ijj+η^2k_i]ϕ( k,η) §.§ Transformation of Sources Scalar Sources The source in overdamped case, ϕ̂( k), is related to the bulk field ϕ( k,η) in momentum space as given in eq.(<ref>). At η→0, the relation can be written in position space as ϕ( x,η)=(-η)^d2-νϕ̂( x) Using the transformation property of ϕ( x,η) given in eq.(<ref>), we get the transformation properties of the source in position space δϕ̂( x)=-v^i( x,η)∂_iϕ̂( x)-(d-Δ)ϵ( x,η)ϕ̂( x) One can find the transformation of the source under conformal transformations by putting the expressions of v^i( x,η) and ϵ( x,η) given by dif1–dif4. The sources in underdamped case, J_±( k) is related to the bulk field ϕ( k,η) in momentum space at late time as given in eq.(<ref>). The crucial difference here from the overdamped case is that J_+( x) has non-local relation with ϕ( x,η) given by ϕ( x,η)=α_μ^∗(-η)^d2-iμJ_+( x)+β_μ^∗(-η)^d2+iμ2^2iμΓ[d2+iμ]π^d2Γ[-iμ]∫d^d yJ_+( y)| x- y|^d+2iμ The first term in the above undJp is local. The second term having a non-local convolution can be thought of as a response to the first term, hence the transformation of J_+ is determined only by the first term and the second term changes accordingly as a response. Therefore, the resultant transformation of J_+ is the following δ J_+( x)=-v^i( x,η)∂_iJ_+( x)-(d-Δ_+)ϵ( x,η)J_+( x) Metric Source : As mentioned in metadm, in presence of a metric perturbation γ_ij, the perturbed dS metric in the ADM gauge with the condition (N^2=(-η)^-2,N^i=0) is ds^2 =-dη^2/(-η)^2+1/(-η)^2ĝ_ijdx^idx^j ĝ_ij =δ_ij+γ_ij Now, let us consider the general coordinate transformation given by dif. Since 1/η^2ĝ_mn appears as the metric component in the full metric of syngaugemet, it should transform like a tensor under the coordinate transformation given by dif; which leads to 1/η'^2ĝ'_ij(𝐱')=1/η^2ĝ_mn(𝐱)∂ x^m/∂ x'^i∂ x^n/∂ x'^j+g_ηη(∂_iϵ∂_jϵ) From this, we can find the change in ĝ_ij under the transformation dif to be δĝ_ij=ĝ'_ij(𝐱,η)-ĝ_ij(𝐱,η)=-ĝ_in∂_jv^n-ĝ_mj∂_iv^m-v^k∂_kĝ_ij+2ϵĝ_ij to the leading order in v^i and ϵ. The metric perturbation γ_ij defined in ggamma serves as source for the energy momentum tensor T^ij. Using ggammaxrep and etarep in delggen, we find Spatial Rep. :     δγ_ij(𝐱)=-(∂_i v_j(𝐱)+∂_j v_i(𝐱))+ homogeneous part Time Rep. :     δγ_ij(𝐱)=2δ_ijϵ(𝐱)+ homogeneous part where the “homogeneous part” is contributed by terms proportional to γ_ij which we omitted since it is not of our interest for this discussion. It is important to note that in spatg2 and delgijtime, we lower and upper indices by δ_ij. In a similar way, δγ_ij for the conformal transformations can be found by putting the changes as given by dif1 to dif4 and ggamma in delggen. Such as for SCT, we get δ_Kγ_ij(𝐱) =-[2ℳ^k_  iγ_kj(𝐱)+2ℳ^k_  jγ_ki(𝐱)-{𝐱^2(𝐛·∂)-2(𝐛·𝐱)(𝐱·∂)}γ_ij(𝐱)] ℳ^i_j =x^ib_j-b^ix_j Notice that δ_Kγ_ij in the above equation is entirely given by the homogeneous terms, the inhomogeneous contribution (terms independent of γ_ij) vanishes. This turns out to be true for all conformal transformations since they are a special combination of spatial and time reparametrization satisfying the relation Conformal Transformations :    ∂_i v_j(𝐱)+∂_j v_i(𝐱)=2δ_ijϵ(𝐱) where v^i(𝐱) and ϵ(𝐱) in the above equation are given by dif1 to dif4. 
Metric Determinant Now, let ĝ be the determinant of the metric ĝ_ij as defined in ggamma, then √(ĝ)= 1+1/2γ_ii+𝒪(γ_ij^2) where index i is summed over. So the variation gives δ(√(ĝ))=1/2δγ_ii Therefore, using spatg2 and delgijtime in the above equation, we get Spatial Reparameterisation : δ(√(ĝ(𝐱)))= -∂_i v_i(𝐱)+ homogeneous part Time Reparameterisation : δ(√(ĝ(𝐱)))= dϵ(𝐱)+ homogeneous part For conformal transformations, δ√(ĝ) can be similarly found using chdetg, which again is entirely supported by homogeneous contributions. §.§ Transformation of Responses Change in Boundary Field Operator Conformal transformations of the source produces a change in the partition function of the boundary CFT which can also be understood as due to the change in the response and hence change in the boundary field. We illustrate this statement by considering boundary partition function for free bulk theory in overdamped case logψ[ϕ̂]= 12∫ d^d xd^d yϕ̂( x)⟨ O( x)O( y)⟩ϕ̂( y) Change in ϕ̂ produces a change in ψ in 1st order as logψ[ϕ̂+δϕ̂]=logΨ[ϕ̂]+12∫ d^d xd^d y(δϕ̂( x)⟨ O( x)O( y)⟩ϕ̂( y)+ϕ̂( x)⟨ O( x)O( y)⟩δϕ̂( y)) where δϕ̂ is the transformation of the source given in eq.(<ref>) which can be put for δϕ̂ in delpsi and integrate bi-parts ignoring surface term. This gives rise an equation as logψ[ϕ̂+δϕ̂]=logψ[ϕ̂]+[12∫ d^d xd^d y(ϕ̂( x)⟨δ O( x)O( y)⟩ϕ̂( y)+ϕ̂( x)⟨ O( x)δ O( y)⟩ϕ̂( y))] where the δ O can be obtained for general coordinate transformation as δ O( x)=[v^i_i+(_i v_i)-(d-Δ)ϵ]O( x) As has been noted in the paper the CFT dual to dS space is of an unconventional type, violating reflection positivity, and not admitting a continuation to Lorentzian signature in general. For more conventional CFTs, not violating reflection positivity, primary operators are defined to transform under conformal transformations by the transformation given in[In d=2 this is the definition of quasi-primary operators.] eq.(<ref>). In AdS/CFT duality bulk fields give rise to sources which couple to such primary operators. Here, for dS space, the sources in the bulk couple to fields, rather than operators, in the dual theory. By analogy with the conventional case, we can define primary fields to be those which transform as eq.(<ref>) under conformal transformations. In particular, we see from the discussion above that the sources arising from bulk scalar fields couple to primary scalar fields in the boundary CFT. Similarly in underdamped case, J_± are associated to the boundary fields Ô_± which transforms exactly like O as given by delO with Δ replaced by Δ_+ and Δ_- for Ô_+ and Ô_- respectively. Change in Energy Momentum Tensor Under the conformal transformation, invariance of the wave functional implies that Ψ[γ_ij(𝐱)+δγ_ij(𝐱)]=Ψ[γ_ij(𝐱)] Using ggamma in delggen followed by the relation satisfied by conformal transformations referred in confcond, we find the general form of δγ_ij for the conformal transformations (dif1 to dif4) to be δγ_ij|_ conf=-γ_im∂_jv^m-γ_jm∂_iv^m-v^k∂_kγ_ij+2ϵγ_ij In a similar way, as we found the change in operator O from the change in field source, we can find the change in T^ij as a response to the change in source γ_ij by performing integration by parts. The change under conformal transformations is of the form δ T^ij=-T^im∂_m v^j-T^mj∂_m v^j+(∂_k v^k)T^ij+v^k∂_k T^ij+2ϵ T^ij where v^i and ϵ are given by dif1 to dif4. § WARD IDENTITIES Ward identities were discussed in section <ref> above. Here we will provide some more details. 
Our discussion below will mostly focus on the Ward identities in the coherent state representation for underdamped fields. §.§ U(1) Gauge Field The Ward identities for a U(1) symmetry were discussed in section <ref>. We follow the same conventions below and will explicitly check that the Ward identity eq.(<ref>) involving a quadratic term in the scalar, and a cubic term with one gauge boson and two scalars is valid. One of the reasons for the very explicit check below is that similar Ward identities, involving gauge fields, played an important role during the development of the AdS/CFT correspondence for elucidating the connection between bulk fields and their corresponding boundary sources <cit.>. Here we have proposed a somewhat novel way of identifying sources for under damped fields by working in the coherent state representation and an explicit check is worth while to do. We work to leading order in R_ds^d-1 G_N where the wave function is given in terms of the on-shell action, obtained by solving the equations of motion with appropriate boundary conditions. The coefficient function, ł O_+( x)O_+( y)$̊, in the case of a real bulk scalar was obtained in eq.(<ref>). From this we get, noting the relations, eq.(<ref>), eq.(<ref>), that for a complex bulk scalar, ł O_+^†( x) O_+( y)=̊2iΓ[1-iμ]Γ[d2+iμ]π^1+d2(1+(πμ))1| x- y|^d+2iμ Next we will calculate the coefficient function for the cubic termł J_+( z)O_+^†( x)O_+( y)$̊, eq.(<ref>). This will allow us then to check the Ward identity. The calculation of this term will be carried out after setting ĝ_ij=δ_ij. i.e. the boundary value of the metric to be flat. In the approximation we are working with, it was argued in section <ref>, see discussion after eq.(<ref>), that this cubic term can be obtained by expressing the boundary values for the scalar ϕ( x,η_1) in terms of J_+ and then using the relation, eq.(<ref>), to express J_+ in terms of ρ^∗. To be more precise, the complex field ϕ is defined in terms of the real fields ϕ_1,ϕ_2 as given in eq.(<ref>). For ϕ_1 a source J_+1 can be defined as given in defjpb and it is then related to the coherent state eigenvalue ρ_1^∗ as given by, eq.(<ref>), ρ_1^∗( k)=√(2μ)α_μ^∗ J_+1(- k) Similarly for the second real field ϕ_2. For the complex fields ϕ,ϕ^†, related to the real fields as given in eq.(<ref>), eq.(<ref>) we can define the sources to be ϕ( k,η) =f_μ^∗(k,η)k^iμJ_+( k) ϕ^†( k,η) =f_μ^∗(k,η)k^iμJ^†_+( k) where J_+( k) ≡ 1√(2) [J_+1( k)+i J_+2( k)] J_+^†( k) ≡ 1√(2) [J_+1( k)-i J_+2( k)] The coherent state eigenvalue for the complex field ϕ is given in eq.(<ref>), we see using our definitions above that it is related to the source J_+ by ρ^∗( k)=√(2μ)α_μ^∗[J_+1(- k)+i J_+2(- k)√(2)]=√(2μ)α_μ^∗ J_+(- k) Similarly from eq.(<ref>) we get that (ρ^∗)^†( k)=√(2μ)α_μ^∗[J_+1(- k)-iJ_+2(- k)√(2)]=√(2μ)α_μ^∗J_+^†(- k) The cubic term can be then calculate by working it out in terms of the boundary values of ϕ( x,η_1), ϕ( x,η_1)^†, expressing this result in terms of J_+,J_+^†, eq.(<ref>), eq.(<ref>) and then using eq.(<ref>), eq.(<ref>) to obtain it in terms of ρ^∗, (ρ^∗)^†. We will accordingly first calculate the cubic term working in the field eigenstate basis ϕ( x,η). In fact for checking the Ward identity of interest it is enough to calculate the cubic coefficient function taking the gauge boson to be purely longitudinal, i.e. with A_i( k)∝ k_i. 
Since the wave function is gauge invariant[The action eq.(<ref>) is of course manifestly gauge invariant; this ensures the gauge invariance of the wave function which is given by the value of the on-shell action.] we also note that Ψ[A_i( x)=∂_iχ( x), ϕ( x,η_1), ϕ^†( x,η_1)]=Ψ[A_i=0, ϕ( x,η_1)+δϕ( x,η_1), ϕ^†( x,η_1)+δϕ^†( x,η_1)] where on the RHS we have carried out a gauge transformation to set the longitudinal mode of A_i to vanish resulting in the the changes δϕ, δϕ^† to the scalars given by, eq.(<ref>),eq.(<ref>), δϕ( x,η_1)=-i e χ ( x) ϕ( x,η_1), δϕ^†( x,η_1)=+i e χ( x) ϕ( x,η_1)^† The required cubic term can now be conveniently calculated from the wave function on the RHS of eq.(<ref>) where A_i vanishes. Next, we note that Ψ[A_i=0, ϕ+δϕ, ϕ^†+δϕ^†] can be obtained, as was discussed above, from the on-shell action, for the scalar fields which solve the free equation of motion subject to the boundary condition that they take the values ϕ+δϕ, ϕ^†+δϕ^† at the boundary η=η_1. Starting from the action, actsca, and using the fact that we are computing it for an on-shell solution we obtain that the change in the action due to changing the boundary values of the scalar from (ϕ,ϕ^†) to (ϕ+δϕ, ϕ^†+δϕ^†) is given by δS=ie (-η_1)^d-1∫d^d xχ(ϕ^†(η,𝐱)_ηϕ(η,𝐱)-ϕ_η(η,𝐱)ϕ^†(η,𝐱))|_η=η_1 or in k-space by, multlineδS=ie (-η_1)^d-1∫∏_i=1^3d^d k_i(2π)^d(2π)^dδ( k_1+ k_2+ k_3) ×χ( k_3)[ϕ^†( k_1,η)_ηϕ( k_2,η)-ϕ( k_2,η)_ηϕ^†( k_1,η)]|_η=η_1 For underdamped fields , we have the identifications eq.(<ref>), eq.(<ref>). Putting these relations in delSphib1x, and then expanding the RHS of delSphib1x, using deffu, we get δ S=-e∫_kχ( k_3)J^†_+( k_1)J_+( k_2)2μα_μ^*2[β_μ^∗α_μ^∗k_1^2iμ-β_μ^∗α_μ^∗k_2^2iμ] where ∫_k≡∫∏_i=1^3d^d k_i(2π)^d(2π)^dδ( k_1+ k_2+ k_3) Next, substituting for J_+, J_+^† in terms of ρ^∗, (ρ^∗)^†, eq.(<ref>), eq.(<ref>) and going to position space we get multlineδS=-e∫ d^d xd^d yd^d z χ( z)(ρ^∗)^†( x)ρ^∗( y) iΓ[1-iμ]Γ[d2+iμ]π^1+d2(1+(πμ))[δ( z- y)1| x- z|^d+2iμ-δ( z- x)1| z- y|^d+2iμ] From the relation between Ψ and S, wfe we therefore get that the required cubic term in the wave function is multlineδlogΨ= -ie∫ d^d xd^d yd^d z χ( z)(ρ^∗)^†( x)ρ^∗( y) iΓ[1-iμ]Γ[d2+iμ]π^1+d2(1+(πμ))[δ( z- y)1| x- z|^d+2iμ-δ( z- x)1| z- y|^d+2iμ] We are now in a position to check the Ward identity. The cubic term in eq.(<ref>) after substituting for A_i( x)=∂_i χ( x) becomes e 2∫d^d x d^d y d^d z ∂_i^ zχ( z)ρ^∗( x)(ρ^∗)^†( y)ł J_i( z) O_+( x)O_+^†( y)Carrying out an integration by parts and comparing with eq.(<ref>) then gives multline∂_i^ zł J_i( z) O_+^†( x) O_+( y)=̊2iΓ[1-iμ]Γ[d2+iμ]π^1+d2(1+(πμ))[iδ^(d)( z- y)1| x- z|^d+2iμ. .-iδ^(d)( z- x)1| z- y|^d+2iμ] Finally noting from eq.(<ref>) the value of the two point function ł O_+^†( x) O_+( y)$̊ we see that the Ward identity we had to verify, eq.(<ref>), is indeed valid. §.§ Ward Identities for Spatial Reparametrisations These Ward identities were discussed in section <ref>. Here we provide some more details. As argued in section <ref> the wave function is invariant under spatial reparametrisations of the form, eq.(<ref>) with the metric and the coherent state eigenvalueρ^∗transforming as given in eq.(<ref>), eq.(<ref>). In the classical limit where the wave function is obtained from the on-shell action the invariance of the wave function follows from the invariance of the action under spatial reparametrisations. The resulting condition due to this invariance is given in eq.(<ref>). 
Expanding the wave function as given in eq.(<ref>), starting with a flat metric,ĝ_ij=δ_ijand imposing the invariance condition eq.(<ref>) leads to Ψ[ρ-v^i∂_iρ, δ_ij-∂_iv_j-∂_jv_i,η_1]=Ψ[ρ,δ_ij,η_1] Expanding the wave function with the trilinear term given in eq.(<ref>), up to linear order inv_i, leads to the condition multline1 2∫ d^d xd^d y[∂_i^ x (v_i ( x)ρ^∗( x)) ρ^∗( y)+ ρ^∗( x) ∂_i^ y(v_i( y)ρ^∗( y))] ł O_+( y)O_+( x) + 1 2∫ d^d x d^d yd^d z ρ^∗( x) ρ^∗( y)∂_i^ zv_j( z) ł T_ij( z)O_+( x)O_+( y)=̊0 Varying with respect tov_i( z)then gives rise to the Ward identity in eq.(<ref>). §.§ Ward Identities for Time Reparametrisations These Ward identities were discussed in section (<ref>). Under the coordinate transformation which asymptotically takes the formη→η(1+ϵ( x)),x^i → x^i, with the metric andρ^∗fields transforming as eq.(<ref>), eq.(<ref>), the wave function is invariant, once we also account for the change in the cut-off withη_1→η_1(1+ϵ( x)). This gives rise to the condition, eq.(<ref>) on the wave function. Now consider the wave function obtained by expanding up to the trilinear term as given in eq.(<ref>). We will only be interested in the cut-off independent terms in the coefficient functions. Imposing the invariance condition eq.(<ref>) then leads to multline∫ d^d x d^d y [ϵ( x) +ϵ( y)] (d-Δ_-)ρ^∗( x)ρ^∗( y)ł O_+( x)O_+( y) + ∫ d^d zd^d xd^d yϵ( z)ρ^∗( x)ρ^∗( y) ł T^i_i( z)O_+( x)O_+( y)=̊0 whereΔ_-is defined in dimdelm. Varying with respect toϵ( x)gives the Ward identity eq.(<ref>). For good measure let us verify that the action for a free scalar is indeed invariant under time reparametrisations. Denoting the bulk coordinates byx^μ=(η,x^i), consider a general coordinate transformationx^μ→ x^μ+v^μ, under which the metric and scalar transform asϕ→ϕ-v^μ∂_μϕ, g_μν→ g_μν-∇_μ v_ν -∇_ν v_μ. The action changes under this transformation due the alteration in the metric, the scalar field and the cut-off,η_1. The sum of these three must cancel. The change in the action due to the alteration in the metric is given by δ_g S = 1/2∫ d^d+1x √(-g)δ g_μν T^μν = -∫ d^d+1x √(-g)∇_μ v_ν T^μν = -∫ d^d x1 (-η_1)^d+2ϵ( x) T^ηη Here we have used the conservation of the stress tensor∇_μ T^μν=0and the fact that asymptotically the only non-zero component ofv_μisv_η=-ϵη. The change due to the alteration in the scalar field is given by δ_ϕ S= ∫d^d xϵ( x) (-η)^d(η_ηϕ)^2 where we have use the fact that the scalar field is on-shell and satisfies the equation of motion Eq.m. Finally the change in the cut-off is given byη_1→η_1 +ϵ( x) η_1. The resulting change in the action is given by δ_coS=∫ d^d x√(-g)ϵ( x) η_1 L where Lis the scalar Lagrangian given in action and the integral is to be evaluated on the surfaceη=η_1. Adding the three terms in eq.(<ref>), eq.(<ref>) eq.(<ref>), it is easy to see gives a vanishing result. § MORE DETAILS ON THE STRESS TENSOR §.§ Stress Tensor Two Point Function Einstein-Hilbert action in dS_d+1is given in <cit.> as [We have taken different sign convention in defining K_ab, see Kabdef] S=R_dS^d-116π G∫d^d+1x√(-g)[R-d(d-1)]-R_dS^d-18π G∫d^d x√(γ)[K-(d-1)] whereR_dSis the radius of dS. Setting8π G=1, [Note that in <cit.>, the authors have set 16 π G=1 while we have set 8 π G=1. This results in an extra half factor in front of the action EHact compared to eq.(10) of their paper.]. 
S=R_dS^d-1/2∫d^d+1x√(-g)[R-d(d-1)]-R_dS^d-1∫d^d x√(γ)[K-(d-1)] whereRandKare the Ricci scalar in bulk and extrinsic curvature at boundaryη=0respectively,γis induced metric determinant on the boundary.Rcan be expressed by generalising eq.(3.43) of <cit.> forddimensional Ricci scalarR_γas R=R_γ+ε(K^2-K_abK^ab)+2ε∇_α(n^β∇_βn^α-n^α∇_β n^β) wheren^αis the unit outwards normal to the boundary given by n_α=1ηδ_α^0 andε=n^α n_α. To note, Greek lettersα,β,⋯denotes bulk coordinates, Latin lettersa,b,⋯denotes boundary coordinates. According to eq.(3.3) in <cit.>,ε=-1for the spacelike boundaryη=0. Using eq.(3.36) in <cit.> and eq.(<ref>), we derive from eq.(<ref>) S= R_dS^d-1/2∫d^d+1x√(-g)[R_γ+K^abK_ab-K^2-d(d-1)+2(d-1)K] which is similar to the action eq.(11) in <cit.>. Now, we introduce metric perturbationζ_abon the background dS metric as γ_ab=1η^2[δ_ab+ζ_ab] and obtain extrinsic curvatureK_abas K_ab =12(∇_bn_a + ∇_a n_b) =1η^2[δ_ab+ζ_ab]-12η_0ζ_ab where we use n to derive 2nd equality and_0denotes partial derivative w.r.t.η. Using the definitionK=γ^abK_ab, the action eq.(<ref>) can be expanded in the 2nd order ofζ_abas S = -R_dS^d-1/8∫d^d+1 x/(-η)^d-1(ζ^k_j,iζ^j,i_k -ζ_,iζ^,i -2 ζ^k_j,iζ^i,j_k + 2 ζ_,iζ^i,j_j- ζ^i_j,0ζ^j_i,0 + ζ_,0ζ_,0) whereζis the trace ofζ_ab. Notice that the above action can be obtained from eq.(22) of <cit.>using analytic continuation [In <cit.>, the AdS Radius L is set to one. One first needs to restore the L factors in eq.(22) before performing the analytic continuation.], z → i (-η), L → i R_dS Using the transverse and traceless condition we can write dSact1 as follows. S = -R_dS^d-1/8∫1/(-η)^d-1 d^d+1 x (ζ̃^k_j,iζ̃^j,i_k -ζ̃^i_j,0ζ̃^j_i,0) whereζ̃^i_jis transverse traceless part ofζ^i_j. From now on we will specialize tododd for simplicity. Given the action dSact2 the equation of motion turns out to be ∂_0^2 ζ̃^i_j - ∇^2 ζ̃^i_j + 1-d/η∂_0 ζ̃^i_j=0 whose solution reads ζ̃^i_j( k,η) = (-η)^d/2 H^2_d/2 (-k η) /(-η_1)^d/2 H^2_d/2 (-k η_1)ζ̂^̂î_̂ĵ ( k) Then from dSact2 we get S = R_dS^d-1/8∫d^d x/(-η)^d-1ζ̃^i_j ∂_0 ζ̃^j_i |_η=η_1 Using soldS this can be reduced to S =-R_dS^d-1/8∫ d^d kζ̂^i_j ( k) ζ̂^j_i (- k) d β^∗_d/2/α^∗_d/2k^d whereα_ν, β_νare given in valtab. Let, lim_η→η_1ζ^i_j = h^i_j andhis the trace ofh^i_j. 
Then in terms of the full graviton field h^i_j,

ζ̂^i_j = h^i_j - (k_j k^l/k^2) h^i_l - (k_l k^i/k^2) h^l_j + (k^i k_j k^k k_l/k^4) h^l_k - (1/(d-1))(δ^i_j - k_j k^i/k^2)(h - (k_k k^l/k^2) h^k_l)

The above relation obeys the constraints

k_i ζ̂^i_j = k_i h^i_j - (k_i k_j k^l/k^2) h^i_l - k_l h^l_j + (k_j k^k k_l/k^2) h^l_k = 0
k^j ζ̂^i_j = k^j h^i_j - k^l h^i_l - (k^j k_l k^i/k^2) h^l_j + (k^i k^k k_l/k^2) h^l_k = 0
ζ̂^i_i = h - (k_i k^l/k^2) h^i_l - (h - (k_k k^l/k^2) h^k_l) = 0

The relation eq.(TTreldS) can be rewritten as follows,

ζ̂^i_j = (P^i_k P^l_j - (1/(d-1)) P^i_j P^l_k) h^k_l, where P^i_j = δ^i_j - k_j k^i/k^2

Now,

ζ̂^i_j ζ̂^j_i = (P^i_k P^l_j - (1/(d-1))P^i_j P^l_k)(P^j_m P^n_i - (1/(d-1))P^j_i P^n_m) h^k_l h^m_n = (P^n_k P^l_m - (1/(d-1))P^l_k P^n_m) h^k_l h^m_n

This can be further simplified to

ζ̂^i_j ζ̂^j_i = h^m_i h^l_j ((1/2)P^i_l P^j_m + (1/2)P^ij P_lm - (1/(d-1))P^j_l P^i_m)

Using the above result, eq.(dSact4) can be written as

S = -R_dS^(d-1)/16 ∫ d^d k k^d d (β^∗_d/2/α^∗_d/2) h^m_i(k) h^l_j(-k) (P^i_l P^j_m + P^ij P_lm - (2/(d-1))P^j_l P^i_m)

Then the stress tensor two point function, for d odd, takes the form

⟨T^i_m(k) T^j_l(-k)⟩ = -i (R_dS^(d-1)/8) k^d d (β^∗_d/2/α^∗_d/2)(P^i_l P^j_m + P^ij P_lm - (2/(d-1))P^j_l P^i_m)

For d even, the stress tensor two point function takes the form

⟨T^i_m(k) T^j_l(-k)⟩ = -i (R_dS^(d-1)/8) k^d log(k) d (β̅^∗_d/2/α̅^∗_d/2)(P^i_l P^j_m + P^ij P_lm - (2/(d-1))P^j_l P^i_m)

where α̅^∗_ν, β̅^∗_ν are given in eqs.(appalp), (appbet). We can convert eq.(doddTT) and eq.(devenTT) to position space using the relations

∫ d^d k/(2π)^d k^d e^(i k·r) = A/r^2d
∫ d^d k/(2π)^d k^d (k_i k_j/k^2) e^(i k·r) = -∂_i ∂_j B/r^(2d-2)
∫ d^d k/(2π)^d k^d (k^i k^j k^k k^l/k^4) e^(i k·r) = ∂_i ∂_j ∂_k ∂_l C/r^(2d-4)

where r = x - y, r = |r|, the derivatives are with respect to r, and

A = (2^d/π^(d/2)) Γ[d]/Γ[-d/2],  B = (2^(d-2)/π^(d/2)) Γ[d-1]/Γ[-d/2+1],  C = (2^(d-4)/π^(d/2)) Γ[d-2]/Γ[-d/2+2]

Using eq.(abc) and putting in the expressions for α_d/2, β_d/2 from eq.(valtab), the stress tensor two point function takes the following form in position space,

⟨T^i_m(x) T^j_l(y)⟩ = e^(iπ(d-1)/2) (R_dS^(d-1) Γ[d+2]/(8(d-1)π^(d/2) Γ[d/2] r^2d)) (J^i_l(r) J^j_m(r) + J^ij(r) J_lm(r) - (2/d)δ^i_m δ^j_l)

where

J^i_j(x) = δ^i_j - 2x^i x_j/x^2

Note that the above equation eq.(TTtwopoint) agrees with eq.(stresspos) taking R_dS=1. Note also that eq.(TTtwopoint) can easily be obtained from eq.(43) of <cit.> by doing the analytic continuation L → i R_dS, as mentioned in section <ref>.

§.§.§ Discussion

Let us examine reflection positivity for the two point function given in eq.(TTtwopoint).

* Consider d=2n, n∈ℤ. Then e^(iπ(d-1)/2) = i(-1)^(n+1), and as a result reflection positivity is violated. For n=1 we are in dS_3 and the holographic dual is a CFT_2. Since c ∼ ⟨TT⟩, we see that the central charge of the dual CFT is imaginary; see the next subsection <ref> for a more detailed discussion.

* Let us now move on to d=2n+1. Then we get e^(iπ(d-1)/2) = (-1)^n. Consider first the situation when n is odd. This is the case for our universe, d=3. We choose the stress tensor to depend only on x_1 and set all the other d-1 coordinates to zero. Then from eq.(TTtwopoint), setting all indices i,j,l,m to 2 for simplicity, we have

⟨T^2_2(x_1) T^2_2(-x_1)⟩ = -(R_dS^(d-1)/4) Γ[d+2]/(π^(d/2) d Γ[d/2] r^2d) < 0

So we see that reflection positivity is violated.

* Finally we consider the case when d=2n+1 and n=2m, m∈ℤ. Analogous to eq.(TT2ptsp), we get in this case

⟨T^2_2(x_1) T^2_2(-x_1)⟩ = (R_dS^(d-1)/4) Γ[d+2]/(π^(d/2) d Γ[d/2] r^2d) > 0

Thus for d=4m+1 we see that the two point function is reflection positive.
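The case analysis above reduces to tracking the overall phase e^(iπ(d-1)/2) in eq.(TTtwopoint), since the remaining ratio Γ[d+2]/(8(d-1)π^(d/2)Γ[d/2] r^2d) is manifestly positive. The following quick numeric check of the pattern is a sketch of our own, not part of the original computation:

```python
import numpy as np

# Overall phase e^{i*pi*(d-1)/2} multiplying the (positive) <TT> coefficient.
# Reflection positivity of <T^2_2 T^2_2> requires this phase to equal +1.
for d in range(2, 11):
    phase = np.exp(1j * np.pi * (d - 1) / 2)
    ok = np.isclose(phase.real, 1.0) and np.isclose(phase.imag, 0.0)
    print(f"d={d:2d}: phase={np.round(phase, 12)}  reflection positive: {ok}")
# Only d = 5, 9, ... (i.e. d = 4m+1) print True, matching the three cases above.
```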
§.§ Central Charge in dS_3

In the last subsection we argued that the central charge of the CFT dual to dS_3 is imaginary. In this subsection we show this by working in global coordinates,

ds^2 = -dτ^2 + R_dS^2 cosh^2(τ/R_dS)(dθ^2 + sin^2θ dϕ^2)

The Brown-York stress tensor in dS space is defined to be

T_ij ≡ (2i/√(γ)) δS[γ_ij]/δγ^ij

where S[γ_ij] is the renormalized on-shell action depending on the boundary metric γ_ij. The renormalization is done by adding local counter terms to the original action in order to make the mass of an asymptotically de Sitter spacetime finite. Varying with respect to the boundary metric, the renormalized Brown-York stress tensor defined in eq.(brwnyrkdef) at the future spacelike boundary (ℐ^+) is found to be [The result is established in <cit.> without the factor "i", since the definition of the Brown-York stress tensor there does not carry the "i" of our convention eq.(brwnyrkdef).]

T_ij = (i/8π G)[(K_ij - K γ_ij) + (1/R_dS)γ_ij]

where K_ij is defined in eq.(Kabdef) and the last term on the RHS comes from the counter term added to the action. One can then compute K_ij using eq.(globalmet) with the covariant normal vector n_μ at the future spacelike boundary ℐ^+ given by

n_μ = (-1, 0, 0)

Putting everything together in eq.(Tglobal), the trace of the stress tensor is found to be

T_i^i = (1/cosh^2(τ/R_dS)) γ^ij T_ij = i/(8π G R_dS)

where we have removed the overall Weyl factor of cosh^2(τ/R_dS) from γ^ij by dividing by it. Compare eq.(glbanom) with the usual trace anomaly in a 2-dimensional CFT, see eq.(5.144) in <cit.>, given by

T_i^i = cR/(24π)

where R is the Ricci scalar of the boundary sphere, and hence R = 2/R_dS^2. Comparing eq.(glbanom) with eq.(cftanom) we then find the central charge to be

c = i 3R_dS/(2G)

as mentioned in eq.(cenc). This result can also be obtained through analytic continuation of the central charge in the AdS_3/CFT_2 correspondence by L → i R_dS.

§ ALTERNATE SOURCE IDENTIFICATIONS

§.§ Alternate Quantisation in AdS/CFT

Scalars in AdS with a mass lying in the range

-d^2/4 ≤ M^2 ≤ -d^2/4 + 1

can be quantised in two different ways <cit.>. Asymptotically, both solutions to the free scalar equation in this mass range fall off towards the boundary, going, in terms of the Poincaré z coordinate, as z^(d/2±ν), where ν=√(d^2/4+M^2). The different quantisations correspond to taking either of the two fall-offs as the source. Choosing the source to correspond to the fall-off z^(d/2-ν), the dual operator has dimension d/2+ν with a two point function given in eq.(oxoyads). The two-point function for the second choice, namely taking z^(d/2+ν) as the source, can be obtained by carrying out a Legendre transform. Denoting the sources corresponding to the two choices by ϕ_b(k) and J_b(k) respectively, we have

W_AdS[J_b] = ∫ Dϕ_b Z_AdS[ϕ_b] exp[∫ d^d k/(2π)^d J_b(k)ϕ_b(k)] = exp[(1/2)∫ d^d k/(2π)^d {-k^(-2ν) b_ν/(2ν a_ν)} J_b(k) J_b(-k)]

In position space, with J_b as the source, the resulting two-point function is given by

⟨O(x)O(y)⟩ = Γ[d/2-ν]Γ[ν] / (2π^(d/2+1) sin(πν) |x-y|^(d-2ν))

(where we have used the identity Γ[z]Γ[1-z]=π/sin(πz)). The range eq.(<ref>) corresponds to

0 ≤ ν ≤ 1

For d≥2 we see from eq.(<ref>) that the coefficient of the two point function in the range eq.(<ref>) is always positive, as is needed for the two point correlator to satisfy reflection positivity.

§.§ Overdamped Scalar in dS space

Momentum Basis: The identification of the source in the momentum basis for overdamped scalars in dS space was discussed in section <ref>; here we give a few more details.
There are two essential differences with the AdS case discussed in the previous few paragraphs. First, we need to keep the cut-off dependent terms in dS space. Second, we carry out a Fourier transformation to go to the momentum space basis rather than a Legendre transformation. The wave function after the Fourier transformation will continue to be normalisable. Starting with the wave function for the free case given in eq.(<ref>) and carrying out the Fourier transformation from ϕ(k,η) to its momentum conjugate π(k,η) we get

W[π,η] = ∫ Dϕ ψ[ϕ,η] exp[-i∫ d^d k/(2π)^d π(k,η)ϕ(k,η)] = exp[i∫ d^d k/(2π)^d π(k,η) ((-η)^d (ℱ_ν(k,η))^∗ / (2η ∂_η(ℱ_ν(k,η))^∗)) π(-k,η)]

where we use the saddle point relation

π(k,η) = -(∂_η(ℱ_ν(k,η))^∗ / ((-η)^(d-1)(ℱ_ν(k,η))^∗)) ϕ(-k,η)

Near the boundary, defining the source π̂ as

π(k,η) = (-η)^(-d/2-ν) π̂(k)

we can express eq.(altovw) in terms of π̂, using the boundary limit of ℱ_ν(k,η), eq.(altf), as

W[π̂,η_1] = exp[(1/2)∫ d^d k/(2π)^d π̂(k) [i(-η_1)^(-2ν)/(d/2-ν) - (β^∗_ν/α^∗_ν)(2iν/(d/2-ν)^2) k^(2ν)] π̂(-k)]

which agrees with eq.(momwf). We see, as was discussed in section <ref>, that the dual operator which couples to π̂ also has dimension Δ, eq.(overdim). In contrast, in the AdS case, after the Legendre transformation one gets the two point function for an operator of dimension d/2-ν; see the discussion above. The essential reason for this difference is that we have retained the cut-off terms, which are physical when describing the wave function in the dS case, in carrying out the Fourier transformation.

Coherent State Basis: In Sec.<ref> we studied the holography of the overdamped field using the source identification eq.(oversource). Here we discuss another identification, similar to the study of the underdamped field in Sec.<ref>. Starting with the creation and annihilation operators defined in section (<ref>), we define two operators â and b̂ in terms of a_k and a^†_-k as

â(k) = k^(-ν)(α_ν a_k + α_ν^∗ a^†_-k),  b̂(k) = k^ν(β_ν a_k + β_ν^∗ a^†_-k)

where â, b̂ are Hermitian and satisfy the commutation relation

[â(k), b̂(k')] = -(i/2ν)(2π)^d δ(k+k')(k'/k)^ν

Expanding Φ(k,η) and Π(k,η), given by eqs.(Phik) and (Pik), near the boundary using eq.(altf) for the overdamped case, we get

Φ(k,η) = â(k)(-η)^(d/2-ν) + b̂(k)(-η)^(d/2+ν)
Π(k,η) = -[â(k)(d/2-ν)(-η)^(-d/2-ν) + b̂(k)(d/2+ν)(-η)^(-d/2+ν)]

In terms of the field operators, â and b̂ can be written as

â(k) = ((d/2+ν)/2ν)(-η)^(-d/2+ν) Φ(k,η) + ((-η)^(d/2+ν)/2ν) Π(k,η)
b̂(k) = -((d/2-ν)/2ν)(-η)^(-d/2-ν) Φ(k,η) - ((-η)^(d/2-ν)/2ν) Π(k,η)

Note that unlike the underdamped case, â and b̂ are Hermitian in position space. The eigenbasis of â is defined by

â(k)|ρ_a⟩ = ρ_a(k)|ρ_a⟩

Consequently ρ_a, the eigenvalue of â, is real in position space. Proceeding in a similar way as in Sec.<ref>, we get the wave function expressed in terms of ρ_a(k) as

ψ[ρ_a,η_1] = exp[(1/2)∫ d^d k/(2π)^d ρ_a(k)ρ_a(-k){-2iν(-η_1)^(-2ν) - 2iν(β_ν^∗/α_ν^∗)k^(2ν)}]

where we use the saddle point relation between ϕ and ρ_a given by

ϕ(k,η) = (f_ν^∗/α_ν^∗) k^ν ρ_a(k)

where ϕ is the eigenvalue of Φ, eq.(eigphia). Comparing the coefficient function with the one obtained in ψ[ϕ̂,η_1], eq.(bounwvpexp), we see that although the coefficient of the cut-off dependent term is different, the finite term which we identified as the correlator comes out to be the same.

§.§ Underdamped Scalar

J_- as the Source: Here we consider an underdamped scalar. In section <ref> we identified J_+ as the source through eq.(subst1). We also noted that there can be an alternate identification, as in eq.(subst2), with J_- being the source. The wave function, in the latter identification, reduces to

ψ[J_-,η_1] = exp(-∫ d^d k/(2π)^d (i/2)[(β_μ^∗)^2(d/2+iμ)(-η_1)^(2iμ) + (α_μ^∗)^2(d/2-iμ)(-η_1)^(-2iμ)k^(-4iμ) + α_μ^∗β_μ^∗ d k^(-2iμ)] J_-(k)J_-(-k))

Similar to section <ref>, we identify the cut-off independent term as the two point correlator, which fixes the dimension of the dual field to be Δ_- = d/2-iμ. There are as well two cut-off dependent terms: the first is local in position space but the second is non-local. As discussed in section <ref>, the finite part of the coefficient function in this identification satisfies the standard Ward identities of a CFT.
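As a quick consistency check on the central charge quoted in the previous appendix, one can verify symbolically that c = 3iR_dS/(2G), together with the boundary Ricci scalar R = 2/R_dS^2, reproduces the trace computed from the Brown-York tensor. The following sympy sketch is ours and only restates formulas already derived above:

```python
import sympy as sp

R_dS, G = sp.symbols('R_dS G', positive=True)
c = sp.I * 3 * R_dS / (2 * G)          # central charge c = 3i*R_dS/(2G)
R_boundary = 2 / R_dS**2               # Ricci scalar of the boundary sphere
trace = c * R_boundary / (24 * sp.pi)  # CFT trace anomaly T^i_i = c*R/(24*pi)
brown_york_trace = sp.I / (8 * sp.pi * G * R_dS)
assert sp.simplify(trace - brown_york_trace) == 0
```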
Accelerated Proton Resonance Frequency-based Magnetic Resonance Thermometry by Optimized Deep Learning Method

Sijie Xu, Shenyan Zong, Chang-Sheng Mei, Guofeng Shen, Yueran Zhao, He Wang
============================================================================

^1 Biomedical Instrument Institute, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai.
^2 Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai.
^3 Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts.
^4 Department of Physics, Soochow University, Taipei.
^5 Department of Radiology, Shanghai Fourth People's Hospital Affiliated to Tongji University School of Medicine, Shanghai.
^† Co-First Author, ^* Corresponding Author

§ ABSTRACT

Background: Proton resonance frequency (PRF)-based magnetic resonance (MR) thermometry is essential in thermal ablation therapies through focused ultrasound (FUS). Clinical treatments require temperature feedback that is rapid and accurate.

Purpose: This work aims to enhance the temporal resolution of dynamic MR temperature map reconstruction with an improved deep learning method, to ensure the safety and effectiveness of FUS treatments.

Methods: The training-optimized methods and five classical neural networks were applied to 2-fold and 4-fold under-sampled k-space data to reconstruct the temperature maps. The networks used were cascade net, complex-valued U-Net, shift window transformer for MRI, real-valued U-Net, and U-Net with residual blocks. The enhanced training modules included offline/online data augmentation, knowledge distillation, and an amplitude-phase decoupling loss function. Heating experiments were performed with a FUS transducer on phantom and ex vivo tissues, respectively. In the datasets, the ground truth was the complex MR images with accurate temperature increases. These data were also manually under-sampled to imitate acceleration procedures and trained with our method to obtain the reconstruction model. An additional dozen or so testing datasets were separately acquired for evaluating the real-time performance and temperature accuracy.

Results: Acceleration factors of 1.9 and 3.7 were found for the 2× and 4× k-space under-sampling strategies, and the ResUNet-based deep learning reconstruction performed exceptionally well. In the 2-fold acceleration scenario, the RMSE of temperature map patches was 0.888 ℃ and 1.145 ℃ on the phantom and ex vivo testing datasets, respectively. The DICE value of the temperature areas enclosed by the 43 ℃ isotherm was 0.809, and Bland-Altman analysis showed a bias of -0.253 ℃ with limits of agreement of ±2.16 ℃. In the 4× under-sampling case, these evaluation values decreased by approximately 10%.

Conclusion: This study demonstrates that deep learning-based reconstruction significantly enhances the accuracy and efficiency of MR thermometry, particularly benefiting clinical thermal therapies for uterine fibroid, essential tremor, and prostate cancer by FUS. The source code for our optimizing methods and neural networks is available at: <https://github.com/minipuding/FastMRT>.

§ INTRODUCTION

Magnetic resonance (MR) thermometry is widely used in noninvasive surgical treatments by focused ultrasound (FUS)<cit.>. However, achieving real-time temperature measurement that preserves temperature information using magnetic resonance imaging (MRI) is highly challenging due to the underlying imaging principles.
Such surgeries require the continuous acquisition of approximately ten frames for each ablation, with each frame taking approximately two seconds. A complete uterine fibroid ablation procedure typically lasts between two and four hours, with a considerable portion of the time dedicated to image acquisition. Temperature monitoring intervals lasting seconds still pose a certain degree of safety risk to patients<cit.>. Throughout the procedure, patients must remain motionless to prevent any hazardous circumstances, which can be highly distressing for them. Fast imaging based on the reconstruction of under-sampled magnetic resonance images has been in development for several years and has shown great potential to significantly increase the speed of temperature measurement. Most studies in the area of MRI temperature measurement use conventional methods, such as parallel imaging and compressed sensing<cit.>. Accelerated MR thermometry through coil-sensitivity encoding is generally accompanied by a decrease in temperature accuracy<cit.>. Besides that, the reduced field of view (FOV) method may be an alternative for fast temperature measurement, but the absence of a full-FOV monitor increases the risk of ablation treatments. More advanced non-cartesian readout strategies, such as the spiral and radial MR thermometry proposed by Kisoo and Pooja et al., can achieve volumetric and motion-immune temperature measurements with a temporal resolution of 100-300 ms per slice<cit.>. However, our novel deep learning-based rapid reconstruction method does not conflict with these sampling strategies: spiral and radial k-space data can also be under-sampled to reach higher imaging speeds, and accelerated imaging via reconstruction as studied here remains applicable. The work to be done is to train a specialized reconstruction model for them through our proposed method. In addition, rapid echo planar imaging (EPI) sequences have been validated for temperature measurement by Andrew and Henrik et al.<cit.>. Nevertheless, segmented or single-shot EPI is always vulnerable to B0 inhomogeneities. These unavoidable susceptibility effects lead to a significant reduction in the clinical acceptability of using this sequence for temperature measurement<cit.>. Furthermore, the temperature increase-induced focus shift remains a difficult and not well-resolved problem. In contrast, the rapid reconstruction method we propose for cartesian-based gradient echo sequences is more robust. Since the inception of the fast MRI challenge<cit.>, numerous deep learning-based MRI reconstruction techniques<cit.> have been proposed. Following the emergence of the Vision Transformer (ViT)<cit.>, several transformer-based reconstruction methods have also been proposed<cit.>. However, magnetic resonance thermometry encounters two major issues when attempting to apply existing fast imaging algorithms. Firstly, the most commonly used temperature measurement method, proton resonance frequency (PRF) shift<cit.>, relies on the phase discrepancy of complex images. However, current fast MRI methods primarily focus on reconstructing amplitude images, with little emphasis on phase<cit.>. As a result, these methods are suboptimal for preserving phase information in magnetic resonance temperature measurement, lacking specific design or improvement for this purpose.
Secondly, current undersampling reconstruction methods prioritize image quality restoration over time preservation, leading to increasingly larger and more complex models with insufficient attention to the impact of model inference time on the actual acceleration rate<cit.>. Furthermore, insufficient datasets may limit model performance due to overfitting and underutilization. This work improves the performance of deep learning by adopting network structure-independent methods, without increasing the number of network parameters or the computational complexity, to achieve fast and accurate measurement. The proposed approach involves several techniques to improve the performance of neural network models: it utilizes offline diffusion model augmentation, online complex-valued data augmentation, knowledge distillation, and an amplitude-phase decoupled loss function. The first two modules are utilized for data augmentation to prevent overfitting and unleash the potential of the model. The knowledge distillation module enables a smaller model to learn capabilities several times greater than its parameter capacity. The decoupled loss function separates amplitude and phase differences, allowing the model to adjust weights and focus on the image phase. Based on these training strategies, the cascade net (CasNet)<cit.>, complex valued U-Net (CUNet)<cit.>, shift window transformer for MRI (SwinMR)<cit.>, real valued U-Net (RUNet)<cit.> and U-Net with residual block (ResUNet) were involved in this deep learning method to improve the speed of MR temperature measurements.

§ THEORIES

§.§ MR Reconstruction via Deep Learning

The speed of MRI is determined by the number of sampled k-space lines for a regular gradient echo sequence, and acceleration can be achieved by reducing the number of phase encodes. In the case of single-channel MRI signal sampling, the measurement can be represented by formula <ref>:

y = ℳ·ℱ(x) + ϵ

where x ∈ C^(N_1×N_2) denotes the MR image reconstructed from fully sampled k-space, ℳ ∈ C^(N_1×N_2) represents the mask selecting lines along the phase encoding direction of the k-space of y ∈ C^(N_1×N_2), and ℱ(·) is the Fourier transform. A typical function for estimating the MR image x from the measurements is given by

x̂ = argmin_x ||y - ℳ·ℱ(x)||_2^2 + λ·R(x)

where R(x) denotes the regularizer, which depends on the reconstruction algorithm used. For deep learning training processes, a function G_DL(·) can be utilized as the regularizer:

x̂ = argmin_x ||y - ℳ·ℱ(x)||_2^2 + λR(x,θ^*), where θ^* = argmin_θ E||x - G_DL(ℳ·ℱ(x);θ)||, x ∼ S

where S is the dataset and x is a complex image sampled from S. We train a model to minimize the expected difference between under-sampled and fully sampled images<cit.>.

§.§ Proton Resonance Frequency Shift

At present, proton resonance frequency (PRF) shift thermometry is the most widely used technique for temperature measurement via MRI. PRF shift thermometry shows a persistent linear correlation with temperature and is largely tissue-type agnostic (excluding adipose tissue), while providing a simple and robust real-time measurement method with regular sequences. To determine temperature changes, it computes the phase difference between the magnetic resonance images acquired during heating and the baseline images.
The temperature change can be expressed as a linear function of the phase difference, as shown by formula <ref>:

ΔT = (ϕ - ϕ_ref)/(α·γ·t_TE·B_0)

where ϕ represents the phase of the current image, ϕ_ref the phase of the image acquired at time 0, α the PRF change coefficient of water-based tissue, which is -0.01 ppm/℃, γ the gyromagnetic ratio of hydrogen nuclei, t_TE the echo time, and B_0 the main magnetic field strength.

§.§ Actual Acceleration Ratio

As stated above, it is commonly assumed that the under-sampling rate directly represents the time saved, without considering the inference time required by the reconstruction algorithm itself. However, magnetic resonance temperature measurement requires real-time imaging. Therefore, it is essential to compute the effective acceleration rate of the reconstruction model, which we define as follows:

E_N=n = t_a/(t_a/n + t_m) = n·t_a/(t_a + n·t_m)

where t_a denotes the acquisition time of the fully sampled image, and t_m, E_N=n and n represent the inference time of the model, the effective acceleration rate, and the theoretical acceleration rate, respectively. When computing this metric in practice, we approximate t_a by t_TR × num_pe, where num_pe denotes the number of phase encodes, and we obtain t_m from the model's CPU forward inference time.

§ METHODS

§.§ Deep Learning Training and Models

As depicted in Figure <ref>, the proposed deep learning method incorporates data augmentation, a teacher model, a decoupled loss function, and classical network models. The offline diffusion augmentation and online data augmentation expand the amount of MR images obtained in the heating experiments; this preprocessing is designed to improve the performance of the network models. Furthermore, the cascade net, complex-valued U-Net, swin-transformer, real-valued U-Net, and ResUNet were used in training to obtain five different reconstruction models. The decoupled loss function was designed to suit the complex-valued MR images used for temperature measurement. During the training process, we leverage a sizable teacher model pre-trained on the FastMRI dataset and further fine-tuned on our own dataset. This teacher model guides the student model, enabling us to obtain a compact model with performance comparable to that of the teacher.

§.§.§ Offline Diffusion Augmentations

In medical image-related tasks, there is a growing trend toward the adoption of model-based data augmentation techniques<cit.>. As demonstrated by Trabucco Brandon et al., the diffusion model is an effective means of generating diverse and realistic images<cit.>. Therefore, a diffusion model, the Denoising Diffusion Probabilistic Model (DDPM)<cit.>, was utilized here to generate a substantial amount of similar data for our training process. We set the time step to 600 and increased the amount of data by a factor of five for the phantom and ex vivo sub-datasets, respectively. Since the diffusion model requires considerable processing time, we opted to generate the augmented data offline, before the training phase.

§.§.§ Online Complex Augmentations

Before inputting data into the model, we perform conventional data augmentation, which involves random cropping, flipping, rotation (0°, 90°, 180°, 270°), and Gaussian blurring.
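As a brief aside before continuing with the augmentation pipeline: the PRF relation given in the Theories section maps directly to a few lines of code. The sketch below is illustrative rather than the released implementation; the gyromagnetic ratio is the standard proton value, and the echo time and field strength follow the acquisition settings reported in the heating-experiment section:

```python
import numpy as np

def prf_temperature_map(img, img_ref, t_te=0.016, b0=3.0,
                        alpha=-0.01e-6, gamma=2 * np.pi * 42.58e6):
    """Temperature change map (degC) from two complex MR images via PRF.

    img, img_ref : complex arrays (current frame, baseline frame)
    t_te : echo time [s]; b0 : field strength [T]
    alpha : PRF change coefficient [1/degC]
    gamma : proton gyromagnetic ratio [rad/(s*T)]
    """
    # Phase difference via the conjugate product, which sidesteps
    # unwrapping the phases of the two images individually.
    dphi = np.angle(img * np.conj(img_ref))
    return dphi / (alpha * gamma * t_te * b0)
```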
We would like to highlight that we extended real-valued data augmentation to complex-valued data by separately augmenting the magnitude and phase of complex images before combining them. This approach significantly increases data diversity (roughly quadratically, since magnitude and phase are augmented independently), thereby enhancing the model's robustness and generalizability. To avoid introducing undesirable bias into the model training due to the disruption of spatial consistency between magnitude and phase, we apply complex-valued data augmentation with a specific probability. We set the optimal probability for triggering complex-valued data augmentation to 0.3.

§.§.§ Base Models

To achieve faster temperature map reconstruction, we implemented our proposed method on the Naive-Real-UNet (RUNet) and ResUNet, which offer state-of-the-art (SOTA) performance while remaining lightweight, making them an ideal choice for MR temperature map reconstruction. Compared to RUNet, ResUNet replaces ordinary convolutional layers with residual blocks and adds self-attention modules, and can be considered an improved version of RUNet. We also compared our method with several SOTA MR reconstruction methods, such as the cascade network with data consistency (CasNet)<cit.>, the complex-valued convolutional network with UNet structure (CUNet)<cit.>, the Swin-Transformer (SwinMR)<cit.>, and the RUNet and ResUNet without any structure modifications.

§.§.§ Knowledge Distillation

Knowledge distillation<cit.> is a technique that can accelerate inference by transferring the knowledge learned by a larger and more complex model to a smaller and simpler one while preserving or even improving performance<cit.>. Larger models usually exhibit stronger generalization capabilities, but their inference speeds may be lower. Thus, the knowledge acquired by the larger "teacher" model is transferred to the smaller "student" model in the form of soft labels. Initially, we pre-train a teacher model with a structure identical to that of the student model but with four times as many channels, using both online and offline augmentation techniques. To ensure comprehensive training of the teacher network, we adopt a two-step approach: we perform pre-training on the FastMRI dataset, followed by full parameter fine-tuning on our research dataset. During each forward pass of the student model, the output is compared to both the ground truth and the soft labels generated by the pre-trained teacher model, and the corresponding losses are weighted and combined. The weights are dynamically adjusted to decay over time during training, so that the student model primarily learns from the teacher network in the initial stages and gradually transitions to learning from the ground truth. The loss function for knowledge distillation is:

L_total = (1-w)·L_gt + w·L_soft, where w = (1 - E_curr/E_total)·γ

where L_gt and L_soft denote the loss values calculated with the ground truth and with the soft labels generated by the teacher network, respectively; E_curr and E_total denote the current epoch and the total number of epochs, respectively; γ is the only hyperparameter, which adjusts the weight of the teacher network guidance.

§.§.§ Decoupled Loss

Our investigation revealed that most reconstruction algorithms rely on the L1 loss function<cit.>, which may not be optimal for temperature measurement tasks that emphasize phase. More specifically, the L1 loss function tends to couple the magnitude and phase, resulting in identical loss values that correspond to different phases.
Consequently, it becomes challenging to specifically optimize the phase component. Therefore, we decouple the loss function, as illustrated in Figure <ref>. We partition the loss into two components: a magnitude loss (computed as the absolute error of the amplitude values) and a phase loss (computed as the error in radians), with the former quantifying the difference in magnitude and the latter quantifying the phase difference. In contrast to the decoupled loss functions proposed by Zhang et al.<cit.> and other researchers, our decoupled loss function is simpler and has a more straightforward geometric interpretation:

loss_dc = d + α·l = ||ŷ| - |y|| + α·|ŷ|·𝒜(ŷ × y̅)

where 𝒜 represents the angle calculation function and α is a parameter that controls the degree of bias applied to the phase loss. The variables y, ŷ, and y̅ represent the predicted complex output, the ground truth, and the complex conjugate of y, respectively. In some other fields related to phase maps, different types of specialized phase loss functions are used<cit.>, but the rationale for their design and their mathematical basis are not provided.

§.§ Heating Experiments and Implementation

§.§.§ FUS heating

Our dataset was acquired through the application of a 128-element high-intensity focused ultrasound transducer (with a frequency of 1.1 MHz, a focal length of 150 mm, and a focal radius of 120 mm), followed by imaging with a 3T MR system (Discovery MR750; GE Healthcare, Milwaukee, WI). The images were obtained using the Fast Spoiled Gradient Echo (FSPGR) sequence, with 96 phase encoding steps, a TR/TE of 12/16 ms, a flip angle of 30°, a slice thickness of 3 mm, a field of view (FOV) of 28×28 cm^2, a number of excitations (NEX) of 1, and a bandwidth of ±62.5 kHz. The dataset comprises two distinct parts: phantom heating data and ex vivo heating data. The two parts contain 96 heating samples (2186 slices) and 105 samples (1623 slices), respectively, with each sample containing either one or three layers. The temperature change at the focus was approximately 30 degrees Celsius, and the focus position was consistently located at the center of the image. To enhance the speed of temperature measurement, we employed a smaller TR and fewer phase encoding steps, which led to a lower signal-to-noise ratio and resolution of the acquired images. This underscores the significance of utilizing fast temperature measurement algorithms to compensate for the reduced image quality.

§.§.§ Model Metrics

Temperature Metrics. After deriving the PRF temperature map from the complex images generated by the model and from the reference images, we obtain a common metric that characterizes the reconstruction error of the entire image by calculating the average pixel-wise error compared to the original temperature map (denoted T_err). However, as we use the HIFU device to focus heat on a very small area, only a small region undergoes significant temperature changes. Therefore, it is also necessary to consider local metrics for the heating focus. Specifically, we evaluate the temperature using metrics such as root mean square error (RMSE), standard deviation (STD), and Dice coefficient (DICE). These metrics are calculated within a pixel block that is cropped around the focal area, covering one-fourth of the width and height of the image.
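Returning briefly to the decoupled loss defined above, a minimal PyTorch-style sketch of the two terms is given below. The reduction, the choice of which magnitude weights the arc-length term, and the function names are our assumptions rather than the released code:

```python
import torch

def decoupled_loss(pred, target, alpha=1.0):
    """Decoupled magnitude/phase loss for complex-valued reconstructions.

    The first term is the magnitude error d = | |pred| - |target| |; the
    second approximates the arc length l = magnitude * (angle between pred
    and target), so phase errors are weighted by signal magnitude.
    """
    mag_err = (pred.abs() - target.abs()).abs()
    arc_err = target.abs() * torch.angle(pred * torch.conj(target)).abs()
    return (mag_err + alpha * arc_err).mean()
```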
Furthermore, we also assess the agreement between the reconstructed temperature values and the reference values using Bland-Altman analysis and examine the linear relationship between these two sets of values using linear regression analysis, both of which are commonly used in previous related works<cit.>. The temperature map is calculated from the phase difference, and noise can be present in areas with very low signal intensity. To ensure accurate evaluation of the temperature image, a mask is applied before calculating temperature metrics. Computation Quantity Metrics. We evaluate the efficiency of the models by computing their Floating-point Operations per Second (FLOPs), number of parameters (Params), CPU inference time (CPU-T), and effective acceleration ratio (E_N=n) at a certain undersampling rate, which is calculated using formula <ref>. It is worth noticing that we place greater emphasis on the effective acceleration ratio, as it can intuitively reflect the acceleration ratio that the model can achieve while considering the model inference time. By combining it with the model's performance for comparison, we can more effectively assess the cost-effectiveness of each model. Additionally, we estimated the total acquisition time (Cost-N×) for magnetic resonance imaging using the T_TR· num_pe / N + CPU-T formula. As the Fourier inverse transform and PRF temperature measurement have extremely short processing times, they were not included in the time calculation. §.§.§ Training Computation We employed a mask like the one used in FastMRI to simulate the undersampling process. Specifically, we fully adopted the low-frequency part and uniformly under-sampled the other high-frequency parts, and the proportion of the fully adopted low-frequency part was set to 15%. We used the AdamW optimizer with a learning rate of 5e-4 and decayed it using a cosine scheduler, and the batch size was set to 8. We conducted experiments separately on both phantom and ex vivo datasets and trained the models for approximately 200 epochs on an NVIDIA RTX A6000. § RESULTS This section presents the experimental results of our study. Firstly, we compare the performance of various deep learning models in the reconstruction of temperature using comparative experiments. Secondly, through ablation experiments, we validate the effectiveness of our proposed method. In addition, we conduct a comprehensive analysis of a long sequence sample, including both time series and consistency analyses. Finally, we demonstrate the resource utilization of different models by presenting parameters and effective acceleration rates. §.§ Comparison Study The comparison results under 2× and 4× undersampling on both phantom and ex vivo datasets are presented in Table <ref>. In our study, we have included the zero-filling (ZF) and compressive sensing (CS) algorithms for comparative analysis. Zero-filling involves filling the under-sampled k-space region with zeros after undersampling, while the compressive sensing algorithm employed is Total Variation Minimization, which makes 200 iterations. It can be observed from the results that the reconstruction performance of RUNet and ResUNet with our optimized methods are superior to that of other methods. In addition, we present the temperature map reconstruction results on specific example samples in Figure <ref>. 
It can be observed that the sample using ResUNet+all retains more temperature information in the reconstructed sample, particularly in the ex vivo 4× case, where it can display the heated focal temperature, a feature does not present in the results obtained from other methods. To visually compare the efficacy of various methods for rapid temperature measurement, we calculated the mean temperature error using pixels above 43 ºC within the temperature map patches reconstructed by each model. As an example, we utilized the phantom sub-dataset with 4× undersampling. Subsequently, we generated box plots for all samples in the test set, as depicted in Figure <ref>. The results indicate that the proposed methods are superior to the ZF method, with the ResUNet+all method demonstrating the lowest average temperature error. §.§ Ablation Study To showcase the efficacy of our method, we conducted a series of ablation experiments on the RUNet model using phantom validation datasets that were subjected to 4× undersampling. As shown in Table <ref>, we incorporated four individual modules (DA for diffusion model augmentation, CA for complex-valued data augmentation, KD for knowledge distillation, and DL for decoupled loss) into the baseline model across all four temperature metrics, resulting in varying degrees of improvement. In addition, we combined the four modules in all possible combinations and generated a 4×4 heat map matrix to visualize the resulting temperature indicators; different combinations of the modules have varying effects on specific indicators, as illustrated in Figure <ref>. However, as an overall observation, it can be inferred that complex-valued data augmentation had the most substantial contribution. §.§ Time-Consuming Study To evaluate the resource utilization of each network we used four key indicators: number of parameters (Params), number of floating-point operations (FLOPs), CPU inference time (CPU-T), and total cost and the effective acceleration rate under 2× (Cost-2×, E_N=2) and 4× (Cost-4×, E_N=4) undersampling. Here, the performance of CPU-T is evaluated by conducting 1000 forward processing runs and calculating the average processing time on an Intel(R) Xeon(R) Gold 6248R CPU. These indicators were selected to provide a comprehensive assessment of the network's resource utilization in terms of its computational complexity, memory usage, and inference speed. Through our evaluation of these indicators, we were able to gain insights into the efficiency and effectiveness of each network. Our evaluation results are presented in Table <ref>. Compared with SwinMR and CS methods, RUNet and ResUNet exhibit the shortest CPU running time and the highest effective acceleration rate among all the evaluated networks. These findings suggest that RUNet and ResUNet may be particularly well-suited for resource constrained applications that require low-latency and high-throughput processing. Furthermore, in conjunction with our evaluation of temperature indicators, our results show that the addition of the ResUNet+all network yields a remarkably high cost-effectiveness ratio. This suggests that the proposed optimizing modules may serve as a valuable augmentation technique for improving the performance and efficiency of the UNet network, especially in applications where resource utilization is a critical consideration. §.§ Long sequence sample study Applying rapid temperature measurement to practical devices improves the temporal resolution of the temperature measurement process. 
This improvement is reflected in an increased number of frames when the HIFU heating power and ablation duration remain constant. To examine the potential impact of changes in temporal resolution on temperature measurement, we analyzed temperature image samples from simulated long-sequence data of a phantom model with improved temporal resolution. In this study we selected 32-frame long-sequence samples from the test set of the phantom model and extracted every other frame to simulate a sequence of 16 frames representing full sampling conditions for a given heating duration. The remaining 32 frames simulated a sequence obtained under 2× undersampling. Using ResUNet+all, we recorded temperatures of the 3×3 pixel block at the center of the original reconstructed temperature images. Temperature-time curves were plotted and analyzed using Bland-Altman and linear regression methods. The calculation of time on the horizontal axis followed the same method as Cost-N×. Additionally, we incorporated the inference time of the model into the plotting of the temperature-time curves, where a longer inference time resulted in a rightward shift compared to the fully sampled temperature curve. A larger shift indicated a greater impact on temporal resolution improvement, reflecting a poor performance, while a smaller shift indicated a better performance. The results, presented in Figure <ref>, demonstrate that the reconstructed focus closely aligns with the temperature-time curve of the fully sampled images. There is no significant deviation observed in the curve, suggesting that the model exhibits strong real-time inference capability. Most of the data points fall within the 95% confidence interval, with upper and lower limits within ±3 ℃, indicating a strong linear relationship between the reconstructed and fully-sampled temperature maps, suggesting that the reconstructed temperature map is highly consistent with the fully-sampled temperature map. § DISCUSSION In this article, we present an introduction to the application of deep learning methods for rapid magnetic resonance thermometry. Previously, the fast readout patterns have been studied for temperature measurements, such as spiral and radial strategies, EPI sequence<cit.>. The deep learning-based rapid reconstruction here was appropriate for them. In the case of under-sampling, these sequences can achieve a higher temporal resolution, combining our proposed methods. Theoretically, our approach can be applied by retraining whenever the acceleration is realized via undersampling. Specifically, we focus on enhancing the effective acceleration rate and improving model performance within a short inference time. To achieve this, we propose a series of model-agnostic techniques and validate the effectiveness of each module through experimental verification. The assembly of under-sampling and deep learning reconstruction for fast temperature measurements has quite a few benefits. For example, Undersampling means fewer phase encoding and less B0 field drift<cit.>. Without considering the loss of signal-to-noise ratio (SNR), the measured temperature should be more accurate. On the subject of motion-induced artifacts, our proposed deep-learning reconstruction module can also be improved to make it insensitive to respiration and other movement by modifying the neural network module. Once the fast thermometry can be realized, the volumetric temperature monitoring covering the whole focal area will be easier. 
In our experiments, we employed only a maximum undersampling rate of 4×, instead of the up to 10× rate used by fastMRI. This choice was influenced by the limitations of the image resolution and signal-to-noise ratio of our acquired images. The number of phase encodings in k-space was relatively low, and the signal was not highly concentrated at the center. These factors resulted in significant signal loss when using excessively high undersampling rates, making reconstruction difficult. Interestingly, the compromised resolution and signal-to-noise ratio of the acquired dataset are flaws due to equipment design, which prioritized faster temperature mapping at the expense of spatial resolution; this highlights the importance of rapid thermometry. With the same temperature mapping speed, higher spatial resolution and signal-to-noise ratio can be achieved, leading to higher-quality images. Consequently, these higher-quality images can then be subjected to higher undersampling rates. To address this, we conducted an experiment using a phantom dataset and performed interpolation to simulate the acquisition of high-resolution images. Subsequently, we applied undersampling rates of 6×, 8× and 10× to these images and compared the results with those obtained from the original resolution images. The obtained RMSE values for the temperature map patches were 0.610 ℃, 0.704 ℃, and 0.724 ℃, respectively. These values closely align with the results obtained for the 2× and 4× cases in the 96×96 format. Furthermore, our study only utilized phantom and ex vivo tissue datasets. The temperature distribution in actual human tissue is more complex. Therefore, in the future, we plan to conduct testing and research on live animal models and specific human tissue datasets to further investigate and validate our findings.

§ CONCLUSION

This paper presents the first formal investigation into the application of deep learning methods for magnetic resonance temperature measurement. To the best of our knowledge, this is the first comprehensive study to explore the use of deep learning methods for this purpose. We have made our code publicly available and have proposed four optimizing modules that enhance model performance without increasing the number of parameters or the computational complexity. We compared various existing MR reconstruction models and demonstrated the effectiveness of our proposed method, as well as its resource-saving characteristics. We hope that our research will serve as inspiration for further investigations related to MRI temperature measurement. Moving forward, we plan to explore end-to-end approaches that incorporate temporal information, as well as investigate the feasibility of adopting reference-free imaging techniques for rapid temperature measurement.
Affine Frequency Division Multiplexing for Compressed Sensing of Time-Varying Channels Wissal Benzine^1,2, Ali Bemani^1, Nassar Ksairi^1, and Dirk Slock^2 ^1Mathematical and Algorithmic Sciences Lab, Huawei France R&D, Paris, France ^2Communication Systems Department, EURECOM, Sophia Antipolis, France Emails: {wissal.benzine1, ali.bemani, nassar.ksairi}@huawei.com, Dirk.Slock@eurecom.fr ========================================================================================================================================================================================================================================================================================================================= § ABSTRACT This paper addresses compressed sensing of linear time-varying (LTV) wireless propagation links under the assumption of double sparsity i.e., sparsity in both the delay and Doppler domains, using Affine Frequency Division Multiplexing (AFDM) measurements. By rigorously linking the double sparsity model to the hierarchical sparsity paradigm, a compressed sensing algorithm with recovery guarantees is proposed for extracting delay-Doppler profiles of LTV channels using AFDM. Through mathematical analysis and numerical results, the superiority of AFDM over other waveforms in terms of channel estimation overhead and minimal sampling rate requirements in sub-Nyquist radar applications is demonstrated. compressed sensing, channel estimation, time-varying channels, AFDM, chirps, sparsity § INTRODUCTION Time-varying wireless channels in many propagation scenarios, especially in high-frequency bands, are characterized by sparsity in both delay and Doppler domains <cit.>. Such sparsity is an important feature of wireless propagation that can be exploited to improve channel estimation performance <cit.> or radar sensing <cit.>. Delay-Doppler sparsity was assumed in <cit.> and leveraged to conceive enhanced channel estimation schemes for time-varying channels using the sparse Bayesian learning (SBL) framework. However, delay-Doppler sparsity was modeled as the sparsity of a one-dimensional array with no way to assign different sparsity levels to the delay and Doppler domains. To obtain a sparsity model compatible with the latter requirement, one can in principle turn to the hierarchical sparsity framework <cit.>. Indeed, the concept and the tools of hierarchical sparsity were applied in <cit.> to the problem of multi-input multi-output (MIMO) channel estimation under delay and angular domains sparsity. In sensing and radar applications, the sub-Nyquist radar paradigm <cit.> leverages wireless channel sparsity to develop sub-Nyquist receivers. However, most of its solutions cannot take advantage of Doppler domain sparsity for lowering the sampling rate and some of them require complex analog-domain processing. In <cit.>, the relevance of affine frequency division multiplexing (AFDM) <cit.>, a recently proposed waveform based on the discrete affine Fourier transform (DAFT), for efficient self-interference cancellation in mono-static integrated sensing and communications (ISAC) scenarios was demonstrated. In <cit.>, we had established its relevance for time-varying channel estimation under delay and Doppler sparsity with a known delay-Doppler profile (DDP). 
Using tools from the framework of hierarchical sparsity, the current work tackles the problem of delay-Doppler sparse recovery when no such DDP knowledge is assumed, with applications to both time-varying channel estimation and sub-Nyquist radar sensing.

§.§ Contributions

I) The statistical notion of delay-Doppler sparsity is rigorously linked to the hierarchical sparsity paradigm. II) This link is used to propose a sparse recovery algorithm based on AFDM measurements for delay-Doppler profiles of wireless channels. III) Using hierarchical-sparsity mathematical tools, closed-form asymptotic results for the performance of this recovery are provided. Finally, IV) this performance analysis is used to show the superiority of AFDM over recovery schemes based on other waveforms in terms of channel estimation overhead and sensing receiver minimal sampling rate requirements.

§.§ Notations

Bernoulli(p) is the Bernoulli distribution with probability p and B(n,p) is the binomial distribution with parameters (n,p). Notation X∼F means that random variable X has distribution F. If 𝒜 is a set, |𝒜| stands for its cardinality. The set of all integers between l and m (including l and m, (l,m)∈ℤ^2) is denoted l,m. The ceiling operation is denoted ⌈·⌉. The modulo N operation is denoted (·)_N.

§ BACKGROUND: AFDM

In AFDM, modulation is achieved through the use of DAFT, which is a discretized version <cit.> of the affine Fourier transform (AFT) <cit.> with the discrete chirp e^-i2π(c_2k^2+kn/N+c_1n^2) as its kernel (see Fig. <ref>). Here, c_1 and c_2 are parameters that can be adjusted depending on the delay-Doppler profile of the channel (in this work, we adjust c_1 based on the delay-Doppler sparsity levels). Consider a set of quadrature amplitude modulation (QAM) symbols {x_k}_k=0⋯N-1. AFDM employs inverse DAFT (IDAFT) to map {x_k}_k=0⋯N-1 to

s_n = (1/√(N)) ∑_k=0^N-1 x_k e^i2π(c_2k^2+kn/N+c_1n^2), n=0⋯N-1

with the following so-called chirp-periodic prefix (CPP)

s_n = s_N+n e^-i2πc_1(N^2+2Nn), n=-L_CPP⋯-1

where L_CPP denotes an integer that is greater than or equal to the number of samples required to represent the maximum delay of the wireless channel. The CPP simplifies to a cyclic prefix (CP) whenever 2c_1N is an integer and N is even, an assumption that will be considered to hold from now on.

§ BACKGROUND: HIERARCHICAL SPARSITY

A vector 𝐱∈ℂ^NM is (s_N,s_M)-sparse if 𝐱 consists of N blocks each of size M, with at most s_N blocks having non-vanishing elements and each non-zero block itself being s_M-sparse. To analyze hierarchically sparse recovery schemes, a modified version of the restricted isometry property (RIP) called the hierarchical RIP (HiRIP) was proposed in the literature. The HiRIP constant of a matrix 𝐀, denoted by δ_s_N,s_M, is the smallest δ≥0 such that

(1-δ)‖𝐱‖^2 ≤ ‖𝐀𝐱‖^2 ≤ (1+δ)‖𝐱‖^2

for all (s_N,s_M)-sparse vectors 𝐱∈ℂ^NM.

§ SYSTEM MODEL

§.§ Doubly sparse linear time-varying (DS-LTV) channels

In an LTV channel with L paths, the complex gain h_l,n (n∈-L_CPP,N-1) of the l-th path varies with time as

h_l,n = ∑_q=-Q^Q α_l,q I_l,q e^i2πnq/N, l=0⋯L-1

Here, I_l,q for any l and q is a binary random variable that, when non-zero, indicates that a channel path with delay l, Doppler shift q and complex gain α_l,q is active and contributes to the channel output. Note that the distribution of the random variables {I_l,q}_l,q controls the kind of sparsity the LTV channel might have.
The complex gain is assumed to satisfy α_l,q∼𝒞𝒩(0,σ_α^2) with σ_α^2 satisfying the channel power normalization ∑_l=0^L-1∑_q=-Q^Q 𝔼[|α_l,q|^2 I_l,q]=1. Note that this model is an on-grid approximation of a time-varying channel. For instance, the Doppler shifts are assumed to be integers in -Q,Q when normalized with the resolution associated with the transmission duration.

An LTV channel is doubly sparse if there exist 0<p_d,p_D<1 s.t.

I_l,q = I_l I_q^(l), ∀(l,q)∈0,L-1×-Q,Q

where I_l∼Bernoulli(p_d) and I_q^(l)∼Bernoulli(p_D). Note that under Definition <ref>, s_d≜𝔼[∑_l I_l]=p_d L is the mean number of active delay taps in the delay-Doppler profile of the channel and can be thought of as the delay domain sparsity level, while s_D≜𝔼[∑_q I_q^(l)]=p_D(2Q+1) is the mean number of active Doppler bins per delay tap and can be thought of as the Doppler domain sparsity level. Fig. <ref> illustrates three different delay-Doppler sparsity models, fully described in <cit.> and dubbed Type-1, Type-2 and Type-3, that all fall under the scope of Definition <ref>, each with an additional different assumption on I_l and I_q^(l). Here, we just point out that the difference between Type-2 and Type-3 of Figures <ref>-(b) and <ref>-(c), respectively, is that in the latter the active Doppler bins per delay tap appear in clusters of random positions but of deterministic length, as opposed to the absence of clusters in the former. The case where all the delay taps have the same (random) sparsity (as in the Type-1 models of Fig. <ref>-(a)) also falls under Definition <ref> by setting I_q^(l)=I_q^(0), ∀l.

§.§ Relation to hierarchical sparsity

DS-LTV sparsity is probabilistic while the hierarchical sparsity of Definition <ref> is deterministic. The two models are nonetheless related: if vectorized as a concatenation of its rows, the random matrix [α_l,q I_l,q]_l,q defines a vector α∈ℂ^(2Q+1)L that consists of L blocks each of size 2Q+1, where on average s_d blocks have non-zero entries and where each non-zero block itself is on average s_D-sparse. To ensure sparsity in a stronger sense, i.e., with high probability (as L, Q, Lp_d, (2Q+1)p_D grow), we require that the following assumption hold.

{I_l}_l=0⋯L-1 are mutually independent. Moreover, the complementary cumulative distribution function (CCDF) F_S_D,l(m) of the random variable S_D,l≜∑_q=-Q^Q I_q^(l) for any l∈0,L-1 is upper-bounded for any integer m>(2Q+1)p_D by the CCDF of B(2Q+1,p_D).

Type-1 and Type-2 models are made to satisfy the CCDF upper bound by requiring {I_q^(0)}_q in the first and {I_q^(l)}_q for any l in the second to be mutually independent (and to thus satisfy F_S_D,l(m)=F_B(2Q+1,p_D)(m), ∀m). For Type-3 models, S_D,l is deterministic and hence its CCDF is trivially upper-bounded. As the following lemma rigorously shows, the mutual independence of {I_l}_l=0⋯L-1 in Assumption <ref> guarantees strong delay domain sparsity, while Doppler sparsity is guaranteed in a more explicit manner by the CCDF upper bound.

Under Assumption <ref>, the vector α is (s_d,s_D)-sparse with probability 1-e^-Ω(min((2Q+1)p_D, Lp_d)). The proof of the lemma is given in Appendix <ref>.

§.§ AFDM signal model on DS-LTV channels

The received samples at the channel output are

r_n = ∑_l=0^L-1 s_n-l h_l,n + z_n, n=0⋯N-1,

where z_n∼𝒞𝒩(0,σ_w^2) represents the i.i.d. Gaussian noise process.
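Before moving on to demodulation, the doubly sparse channel model can be made concrete with a toy NumPy sketch. It draws a Type-2 profile (independent Bernoulli indicators) and forms the channel output for an already modulated frame s with its CPP prepended; the power normalization is simplified here:

```python
import numpy as np

def ds_ltv_output(s, L, Q, p_d, p_D, sigma_w=0.0, rng=None):
    """Pass samples s (length N+L-1, CPP of length L-1 prepended) through a DS-LTV channel."""
    rng = np.random.default_rng(rng)
    N = len(s) - (L - 1)
    I_l = rng.random(L) < p_d                                 # active delay taps
    I_lq = I_l[:, None] & (rng.random((L, 2*Q + 1)) < p_D)    # active Doppler bins
    a = (rng.standard_normal((L, 2*Q + 1))
         + 1j * rng.standard_normal((L, 2*Q + 1))) / np.sqrt(2)
    n = np.arange(N)
    r = np.zeros(N, dtype=complex)
    for l in range(L):
        for qi, q in enumerate(range(-Q, Q + 1)):
            if I_lq[l, qi]:
                h_ln = a[l, qi] * np.exp(2j * np.pi * n * q / N)   # h_{l,n}
                r += s[L - 1 - l : L - 1 - l + N] * h_ln           # s_{n-l} h_{l,n}
    w = sigma_w * (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
    return r + w
```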
After discarding the CPP (assumed to satisfy L-1≤L_CPP), the DAFT domain output symbols are

y_k = (1/√(N)) ∑_n=0^N-1 r_n e^-i2π(c_2k^2+kn/N+c_1n^2), k=0⋯N-1
    = ∑_l=0^L-1 ∑_q=-Q^Q α_l,q I_l,q e^i2π(c_1l^2-ml/N+c_2(m^2-k^2)) x_m + w_k,

where the second equality is obtained using the input-output relation given in <cit.>, w_k is i.i.d. and ∼𝒞𝒩(0,σ_w^2), and where m ≜ (k-q+2Nc_1l)_N. Note how the Doppler components of different delay taps are mixed in the DAFT domain, since a path occupying the (l,q) grid point in the delay-Doppler domain appears as a q-2Nc_1l shift in the DAFT domain.

§ COMPRESSED-SENSING ESTIMATION OF DS-LTV CHANNELS USING AFDM

§.§ DS-LTV compressed-sensing channel estimation problem

Let 𝒫⊂0,N-1 designate the indexes of the N_p(2|c_1|N(L-1)+2Q+1) received samples associated with N_p DAFT domain pilots, of values {p_p}_p=1⋯N_p, inserted at indexes {m_p}_p=1⋯N_p so that each pilot is preceded by Q zero samples and followed by (2|c_1|N(L-1)+Q) zero samples[We show in Appendix <ref> that the recovery results we prove hold even if we reduce the cardinality of 𝒫 to N_p(2|c_1|N(L-1)+1)+2Q, e.g., by allowing partial overlapping between neighbouring pilot guard intervals.]. Vector 𝐲_p≜[y_k]_k∈𝒫 is the vectorized form of the received pilot samples. Referring to (<ref>), we can write

𝐲_p = 𝐀_𝒫𝐌α + 𝐰_p ≜ 𝐌_pα + 𝐰_p, with 𝐌_p ≜ 𝐀_𝒫𝐌,

where the (l(2Q+1)+Q+q+1)-th column of 𝐌 is

[𝐌]_l(2Q+1)+Q+q+1 = ΦΔ_qΠ^lΦ^H𝐱_p,

𝐱_p is an N-long vector with entries equal to p_1,…,p_N_p at indexes {m_p}_p=1⋯N_p and zero elsewhere, and 𝐰_p≜[w_k]_k∈𝒫. Here, 𝐀_𝒫 is the |𝒫|×N matrix that selects from an N-long vector the entries corresponding to 𝒫, Δ_q=diag(e^i2πqn/N, n=0⋯N-1), Π is the order-N cyclic permutation matrix, and Φ=Λ_c_2𝐅_NΛ_c_1 with 𝐅_N being the order-N discrete Fourier transform (DFT) matrix and Λ_c=diag(e^-i2πcn^2, n=0⋯N-1). Recall that α is hierarchically sparse due to Lemma <ref>. Its sparsity support is assumed to be unknown to the receiver.

§.§ Algorithms for compressed sensing of DS-LTV channels

The hierarchical hard thresholding pursuit (HiHTP) approach has been suggested in the literature <cit.> for solving hierarchically-sparse recovery problems such as Problem (<ref>), for which it gives Algorithm <ref>. HiHTP is a modification of the classical hard thresholding pursuit (HTP) <cit.> obtained by replacing the thresholding operator employed at each iteration of HTP with a hierarchically sparse version L_s_d,s_D. To compute L_s_d,s_D(𝐱) for a vector 𝐱∈ℂ^L(2Q+1), first an s_D-sparse approximation is applied to each one of the L blocks of 𝐱 by keeping in each of them the largest s_D entries while setting the remaining ones to zero. An s_d-sparse approximation is next applied to the result by identifying the s_d blocks with the largest l_2-norm.

§.§ Analyzing AFDM compressed sensing of DS-LTV channels

To guarantee the convergence of Algorithm <ref> and the recovery of α, the following technical assumption is needed.

Random variables {I_q^(l_1)}_q=-Q⋯Q are independent from {I_q^(l_2)}_q=-Q⋯Q for any l_1≠l_2.

Assume |c_1|=P/(2N) and that P is set as the smallest integer satisfying (L-1)P+2Q+1≥s_ds_D. Then under Assumptions <ref> and <ref> and for sufficiently large L, Q, sufficiently small δ, and N_p>O((1/δ^2)log^2(1/δ)loglog(LP/δ)log(LP)log(Q/P)), the HiRIP constant δ_s_d,s_D of the matrix 𝐌_p satisfies δ_s_d,s_D≤δ with probability 1-e^-Ω(log(2⌈Q/P⌉+1)log(1/δ)).

When P=2Q+1, AFDM achieves full diversity <cit.> and the measurements are non-compressive, while P=1 is the most compressive choice.
By setting P as in the theorem between these two extremes, each pilot instance provides, within its ((L-1)P+2Q+1)-long guard interval, a number of measurements that is with high probability close to the number s_ds_D of unknowns. Of course, a number N_p>1 of pilot instances is still required since the sparsity support needs to be estimated. But, asymptotically, this number has only a logarithmic growth. An outline of the proof is given in Appendix <ref>. The sequence α̂^(k) defined by Algorithm <ref> satisfies ‖α̂^(k)-α‖≤ρ^k‖α̂^(0)-α‖+τ‖𝐰_p‖, where ρ<1 and τ are constants defined in <cit.>. Thanks to Theorem <ref>, matrix 𝐌_p with large-enough L, Q, N_p has a HiRIP constant that satisfies δ_3s_d,2s_D<1/√(3). The conditions of <cit.> are thus satisfied and the corollary follows from that theorem. We next explain how the value of N_p dictated by Theorem <ref> translates into sub-Nyquist sampling rates for radar receivers. § APPLICATION TO SUB-NYQUIST RADAR We now consider the case where the AFDM signal is destined for a sensing receiver either co-located with the transmitter (the mono-static setting) or in a remote device (the bi-static setting). In either setting, the non-zero complex gains α_l,q in (<ref>) represent a point target with a delay l (related to the to-be-estimated range) and a Doppler frequency shift q (related to the to-be-estimated velocity). Instead of applying DAFT to the received AFDM signal after sampling as in basic AFDM operation <cit.> (which would require a sampling rate at least equal to the signal bandwidth), an alternative consists in first de-chirping the received signal in the analog domain with a continuous-time version <cit.> of a DAFT chirp carrier, e.g., of the 0-th chirp (e^j2π (c_2 0^2+0· n/N+c_1n^2))_n. The result is a multi-tone signal (as shown in Fig. <ref> in the case of N_p=1 and P=2) with discontinuities due to the frequency wrapping characterizing AFDM chirp carriers. In this figure, the de-chirped signal occupies two disjoint frequency bands that get merged into one (without discontinuities) thanks to spectrum folding after sampling at rate f_s=((L-1)P+1)/T. In the general case of N_p≥1 pilots, if we restrict the total subset 𝒫 of pilot guard indexes to be an interval, then sampling after de-chirping can be done at rate f_s=N_p((L-1)P+1)/T to yield the vector 𝐲_p used for target estimation. In most practical configurations N_p((L-1)P+1)/T≪ N/T=1/Δ t, and hence the sampling rate needed for AFDM sensing is significantly smaller than what is needed in sensing based on OFDM or OTFS waveforms. § NUMERICAL RESULTS AFDM sparse recovery performance is now compared to that of OFDM and OTFS. For OFDM, transmission is organized in N-long frames, each constructed from N_ofdm,symb≈2Q+1 OFDM symbols, each of which costs L-1 samples in CP overhead. Within each frame, N_p,fd subcarriers within N_p,td OFDM symbols are set as pilots <cit.>. As for OTFS, subcarriers are in the delay-Doppler domain forming a M_otfs× N_otfs grid (with M_otfsN_otfs=N). OTFS with orthogonal data-pilot resources <cit.> requires at least N_p,otfs=1 pilot symbol with min(4Q+1,N_otfs)min(2L-1,M_otfs) guard samples. We used 100 realizations of channels having a Type-1 delay-Doppler sparsity with p_d=0.2, p_D∈{0.2,0.4} and N=4096, L=30, Q=7 (corresponding to a 30 MHz transmission at a 70 GHz carrier frequency, a maximum target moving speed of 396 km/h and a maximum target range of 300 meters). For both AFDM and OFDM, sparse recovery of α is done using HiHTP (Algorithm <ref>).
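As a worked example of the resulting rate reduction, the short computation below plugs in the numerology of this section (N=4096, L=30, Q=7, p_d=0.2, p_D=0.2, hence s_d=6 and s_D=3); the number of pilot instances N_p is an assumed value, since in practice it is set by the target MSE.

```python
N, L, Q = 4096, 30, 7
s_d, s_D = 6, 3                      # s_d = p_d * L, s_D = p_D * (2Q+1)
P = 1                                # smallest P with (L-1)P + 2Q + 1 >= s_d*s_D
while (L - 1) * P + 2 * Q + 1 < s_d * s_D:
    P += 1
N_p = 4                              # assumed number of pilot instances
nyquist_samples = N                  # samples per duration T at rate N/T
sub_nyquist_samples = N_p * ((L - 1) * P + 1)
print(P, nyquist_samples / sub_nyquist_samples)   # P = 1, reduction ~ 34x
```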
For OTFS, since sensing is done without compression, non-compressive estimation algorithms can be used <cit.>. For each waveform, the number of pilots was set in such a way that the mean squared error MSE≜𝔼[‖α̂-α‖^2] is approximately 10^-4 at SNR=20 dB. Fig. <ref> shows an advantage of AFDM in terms of pilot overhead, i.e., the number of samples in each frame needed as pilots and guards to achieve the target MSE performance. However, the main focus of this paper is the gain that can be achieved in terms of sampling rate reduction when AFDM is employed for sub-Nyquist sensing. This gain is illustrated (for the same setting as Fig. <ref>) by Table <ref>. § CONCLUSIONS The advantage of using AFDM instead of measurement matrices based on other waveforms for sub-Nyquist sensing to recover doubly-sparse delay-Doppler profiles has been rigorously established by linking delay-Doppler sparsity to the paradigm of hierarchically-sparse recovery. Future work will address the problem without any on-grid approximation. § PROOF OF LEMMA <REF> Let S_d≜∑_l=0^L-1I_l be the number of active delay taps. From Definition <ref> and Assumption <ref>, S_d∼B(L,p_d). Applying the Chernoff bound to S_d evaluated at s_d=(1+ϵ)Lp_d (with an ϵ>0 that can be set as small as needed) gives ℙ[S_d>s_d] ≤(p_d/(s_d/L))^s_d((1-p_d)/(1-s_d/L))^(L-s_d)=e^-Ω(Lp_d). As for S_D,l, since Assumption <ref> upper-bounds its CCDF by that of a B(2Q+1,p_D) distribution, applying the Chernoff bound to the latter evaluated at s_D=(1+ϵ)(2Q+1)p_D similarly gives joint sparsity of {I_q^(l)}_l=0⋯ L-1 in the sense that ℙ[∃ l, I_l=1, S_D,l>s_D]=e^-Ω((2Q+1)p_D). Combining (<ref>) and (<ref>) completes the proof of the lemma. § OUTLINES OF THE PROOF OF THEOREM <REF> First, for each l∈[0,(L-1)P] we define 𝒟_l≜{(l̃,q) s.t. (q+Pl̃)_(L-1)P+1=l} as the set of delay-Doppler grid points that potentially contribute to the pilot sample received at DAFT domain index l (Fig. <ref>). Next, we define α̃≜[α_𝒟_0^T ⋯ α_𝒟_(L-1)P^T]^T where α_𝒟_l≜[α_l,q]_(l,q)∈𝒟_l. The entries of α̃ are just a permutation of the entries of α and estimating one of them directly gives an estimate of the other. Next, it can be shown that when P is set as in the theorem, α̃ is (s̃_d,s̃_D)-hierarchically sparse with high probability, where s̃_d=(L-1)P+1, s̃_D=(1+ϵ)log(LP). Indeed, the first level (of size (L-1)P+1) of α̃ is sensed without compression with a number of measurements equal to (L-1)P+1, while s̃_D can be determined thanks to Definition <ref> and Assumptions <ref> and <ref> by applying the same approach as in the proof of Lemma <ref> to S̃_D,l≜∑_(l̃,q)∈𝒟_lI_l̃,q. Now, we can write the signal model of sensing α̃ as 𝐲̃_p=𝐌̃_pα̃ +𝐰̃_p, where 𝐲̃_p=[𝐲_p,0^T ⋯ 𝐲_p,(L-1)P^T]^T. For each l, 𝐲_p,l is a N_p× 1 vector composed of the pilot samples received at the l-th DAFT domain position in each of the N_p pilot instances. Note that by this definition 𝐲̃_p is obtained by permuting 𝐲_p in (<ref>) in accordance with the permutation that gives α̃ from α. Next, we prove that 𝐌̃_p has the following Kronecker structure 𝐌̃_p=𝐈_(L-1)P+1⊗𝐌_𝒟, with 𝐌_𝒟 = diag(p_1⋯ p_N_p)𝐅_2⌈Q/P⌉+1,p Ψ, where 𝐅_2⌈Q/P⌉+1,p is a N_p×(2⌈Q/P⌉+1) partial Fourier measurement matrix and Ψ is a diagonal matrix with unit-modulus entries. We can thus use <cit.> pertaining to subsampled Fourier matrices to get that for sufficiently large L, Q, sufficiently small δ, and N_p>O(1/δ^2 log^2(1/δ) loglog((LP)/δ) log(LP) log(Q/P)), the RIP constant δ_s̃_D of 𝐌_𝒟 satisfies δ_s̃_D≤δ with probability 1-e^-Ω(log(Q/P)log(1/δ)).
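The Chernoff step in the proof above can be checked numerically. The sketch below compares the bound on ℙ[S_d>s_d] with a Monte-Carlo estimate of the binomial tail; the choice ϵ=0.5 is ours, for illustration (the proof only needs some ϵ>0).

```python
import numpy as np

def chernoff_tail(L, p, s):
    """Chernoff upper bound on P[Binomial(L, p) > s] for s > L*p."""
    q = s / L
    return (p / q) ** s * ((1 - p) / (1 - q)) ** (L - s)

L, p_d, eps = 30, 0.2, 0.5
s_d = int((1 + eps) * L * p_d)            # = 9 active taps
rng = np.random.default_rng(0)
empirical = np.mean(rng.binomial(L, p_d, 200_000) > s_d)
print(f"bound = {chernoff_tail(L, p_d, s_d):.4f}, empirical = {empirical:.4f}")
```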
The RIP constant of 𝐈_(L-1)P+1 trivially satisfies δ_s̃_d=0. As for the HiRIP of 𝐌̃_p, we can apply <cit.> to (<ref>) thanks to its Kronecker structure to get δ_s_d,s_D≤δ_s̃_d+δ_s̃_D+δ_s̃_dδ_s̃_D≤δ if N_p and δ are as in (<ref>). This completes the proof.
http://arxiv.org/abs/2407.02157v1
20240702105543
FineCLIPER: Multi-modal Fine-grained CLIP for Dynamic Facial Expression Recognition with AdaptERs
[ "Haodong Chen", "Haojian Huang", "Junhao Dong", "Mingzhe Zheng", "Dian Shao" ]
cs.CV
[ "cs.CV", "cs.HC" ]
§ ABSTRACT Dynamic Facial Expression Recognition (DFER) is crucial for understanding human behavior. However, current methods exhibit limited performance mainly due to the scarcity of high-quality data, the insufficient utilization of facial dynamics, and the ambiguity of expression semantics. To this end, we propose a novel framework, named Multi-modal Fine-grained CLIP for Dynamic Facial Expression Recognition with AdaptERs (FineCLIPER), incorporating the following novel designs: 1) To better distinguish between similar facial expressions, we extend the class labels to textual descriptions from both positive and negative aspects, and obtain supervision by calculating the cross-modal similarity based on the CLIP model; 2) Our FineCLIPER adopts a hierarchical manner to effectively mine useful cues from DFE videos. Specifically, besides directly embedding video frames as input (low semantic level), we propose to extract the face segmentation masks and landmarks based on each frame (middle semantic level) and utilize the Multi-modal Large Language Model (MLLM) to further generate detailed descriptions of facial changes across frames with designed prompts (high semantic level). Additionally, we also adopt Parameter-Efficient Fine-Tuning (PEFT) to enable efficient adaptation of large pre-trained models (i.e., CLIP) for this task. Our FineCLIPER achieves SOTA performance on the DFEW, FERV39k, and MAFW datasets in both supervised and zero-shot settings with few tunable parameters. Analysis and ablation studies further validate its effectiveness. Project page: <https://haroldchen19.github.io/FineCLIPER-Page/> FineCLIPER: Multi-modal Fine-grained CLIP for Dynamic Facial Expression Recognition with AdaptERs Haodong Chen^, Haojian Huang^, Junhao Dong^, Mingzhe Zheng^, Dian Shao^[2] ^Northwestern Polytechnical University, ^The University of Hong Kong ^Nanyang Technological University July 8, 2024 ======================================================================================================================================================================================================= [2]Corresponding author. § INTRODUCTION Facial expressions are important signals of human emotions, so accurately recognizing them is significant for various tasks, including interpersonal communication, human-computer interaction (HCI) <cit.>, mental health diagnosis <cit.>, driving safety monitoring <cit.>, etc. Traditional Facial Expression Recognition (FER) resorts to static images. However, since dynamic emotional changes cannot be well represented within a single image, research attention has shifted to Dynamic Facial Expression Recognition (DFER), which distinguishes the temporally displayed facial expressions in videos. The study of DFER algorithms started in highly-controlled environments <cit.>, where faces are frontal and non-blurry <cit.>, as shown in Fig. <ref> (a).
However, such an ideal assumption makes the obtained models vulnerable to real-world situations. Therefore, researchers have turned to more open scenes and constructed several in-the-wild DFER datasets, e.g., DFEW <cit.>, FERV39k <cit.>, and MAFW <cit.>, to facilitate the development of corresponding methods <cit.>, as demonstrated in Fig. <ref> (b). While category labels are treated without semantic meanings (e.g., Happiness may only be represented by a class id "0"), recent research on DFER <cit.> has further delved into the exploration of vision-language multi-modal learning beyond the traditional classification paradigm based on CNNs <cit.>, RNNs <cit.>, and transformers <cit.>. Although huge efforts have been spent, the performance of DFER methods still suffers from noisy frames, small inter-class differences, and ambiguity between expressions, making it inappropriate to adopt video/action recognition techniques directly. Specifically, to distinctly improve the performance of DFER algorithms, we have to face the following unique and tough challenges: 1) the ambiguity of semantic labels for dynamic facial expressions, and 2) the subtle and nuanced movements of local face parts (i.e., skeletons, muscles, etc.). The first challenge originates from the difficulty of accurate human labeling and the complex ways in which different persons express themselves, while the latter demands additional focus on fine-grained details that happen in specific regions within a human face. To tackle the above challenges, we propose a novel framework called FineCLIPER, short for Multi-modal Fine-grained CLIP for Dynamic Facial Expression Recognition with AdaptERs. Specifically, we utilize the Contrastive Language-Image Pretraining (CLIP) model <cit.>, which is particularly suitable for providing a cross-modal latent space. To avoid the huge cost of fine-tuning such large pre-trained models, FineCLIPER adopts the Parameter-Efficient Fine-Tuning (PEFT) strategy by adding several adaptation modules with few parameters for tuning (as shown in Fig. <ref>), achieving high efficiency while preserving the remarkable performance. Specifically, our FineCLIPER has the following characteristics that distinguish it from previous works: Firstly, by adopting the vision-text learning paradigm, we transform the ground truth label to form the textual supervision (e.g., "A person with an expression of {Label}"). One noteworthy innovation is that we also generate and use the negative counterparts (e.g., "A person with an expression of No {Label}"). Such label augmentation via PN (Positive-Negative) descriptors is inspired by the negative prompting strategy  <cit.>, and is found to be useful here for differentiating between ambiguous categories. Notable progress is observed in the "Disgust" category of the DFEW dataset: while most baselines <cit.> suffer from nearly 0% accuracy, our FineCLIPER significantly promotes the performance by more than 25%, as shown in Tab. <ref>. Furthermore, we adopt a semantically hierarchical strategy to comprehensively mine useful information from the input video data. Specifically, features from directly embedding video frames stand at a relatively low semantic level. For the middle semantic level, we utilize a well-trained face analysis model (i.e., FaceXFormer <cit.>) to extract the face segmentation masks and landmarks from each frame. Intuitively, the former offers a prior on face structures while the latter provides specific pivots for model attention.
Additionally, we try to obtain descriptions at a high semantic level for describing dynamic facial changes across frames. This is realized by leveraging a well-trained MLLM, Video-LLaVA <cit.>, to act as a facial expression analyst following given template-based prompts, and the generated descriptions are then carefully refined. All the above features at various semantic levels are finally integrated to obtain the final representation of a given video. To summarize, our contributions are as follows: * We introduce FineCLIPER, a novel multi-modal framework that enhances Dynamic Facial Expression Recognition (DFER) through extensively mining useful information at different semantic levels from the video data, and all the obtained features (i.e., features embedded from visual frames, face segmentation, face landmarks, and the extra fine-grained descriptions obtained via MLLM) are finally integrated to serve as a more comprehensive overall representation; * To address the ambiguity between categories, we propose a label augmentation strategy, not only transforming the class label to textual supervision but also using a combination of both positive and negative descriptors; * Extensive experiments conducted on DFER datasets, i.e., DFEW, FERV39k, and MAFW, show that our FineCLIPER framework achieves new state-of-the-art performance in both supervised and zero-shot settings with only a small number of tunable parameters. Comprehensive ablations and analyses further validate the effectiveness of FineCLIPER. § RELATED WORK Dynamic Facial Expression Recognition. In early DFER research, the focus was on developing diverse local descriptors on lab-controlled datasets <cit.>. Then the rise of deep learning and accessible in-the-wild DFER datasets <cit.> led to new trends in DFER research. The first trend <cit.> involves the direct use of 3D CNNs <cit.> to extract joint spatio-temporal features from raw videos. The second trend <cit.> combines 2D CNNs <cit.> with RNNs <cit.> for feature extraction and sequence modeling. The third emerging trend integrates transformers <cit.>, as demonstrated in works like Former-DFER <cit.>, STT <cit.>, and IAL <cit.>. These methods combine convolutional and attention-based approaches to enhance the understanding of visual data, especially in distinguishing samples based on varying visual dynamics. However, in prior efforts, the semantic meaning of class labels is neglected, and insufficient attention has been paid to the subtle and nuanced movements of the human face. Therefore, based on well-trained large cross-modal models (i.e., CLIP), we propose to extend the class label to textual supervision both positively and negatively. Moreover, to fully exploit the visual information within videos, we also design a hierarchical information mining strategy to generate representative video features, which is a weighted fusion of various features involving different semantic levels, including video frame features, the middle-level facial semantics from segmentation maps and detected landmarks, as well as the high-level semantics encoded from fine-grained descriptions provided by MLLM. CLIP in Classification. With the development of computer vision <cit.>, Vision-Language Models (VLMs), e.g., CLIP <cit.>, have recently demonstrated superior performance across various tasks <cit.>.
CLIP leverages a vast corpus of image-text pairs to ground its framework in contrastive learning, resulting in robust pre-trained image and text encoders that demonstrate remarkable feature extraction capabilities. Recent studies <cit.> have also applied CLIP to the DFER task. Among them, A^3lign-DFER <cit.> introduces a comprehensive alignment paradigm for DFER through a complicated design. CLIPER <cit.> adopts a two-stage training paradigm instead of end-to-end training; however, it is limited in capturing temporal information. Furthermore, DFER-CLIP <cit.> incorporates a transformer-based module to better capture temporal information in videos, but it requires fully fine-tuning the image encoder and the proposed temporal module during training, leading to inefficiency. However, while these works have explored the semantic information of labels compared to traditional DFER, they often overlook the interrelations among facial expressions and the individual differences among humans, as they directly extend labels into relevant action descriptions (e.g., Happiness→smiling mouth, raised cheeks, wrinkled eyes, ... <cit.>). This oversight can lead to further ambiguity. In light of this, we propose PN (Positive-Negative) descriptors, extending the ground truth labels from contrastive views to better distinguish between ambiguous categories. § METHODOLOGY In this section, we first briefly go through the overall pipeline and basic notations of the framework in Sec. <ref>. Then, we elaborate on how to augment the original class labels to obtain positive-negative textual supervision in Sec. <ref>, followed by details about our hierarchical information mining strategy to obtain multi-modal features in Sec. <ref>. The integration of diverse features is introduced in Sec. <ref>. The overall pipeline is illustrated in Fig. <ref>. §.§ Overall Pipeline Formally, given a video clip V, the task of DFER aims to recognize the facial expression label Cls. Using text templates such as "A person with an expression of {Cls}", the class label can be further transformed into textual supervision, which better utilizes the semantic meaning of the category name. Let 𝒱 represent a set of videos and 𝒞 denote collections of augmented textual descriptions of labels; our framework then produces the embedded representations for both a given video and its corresponding textual supervision, resulting in 𝐯_𝐢 and 𝐜_𝐢. Note that in our case, 𝐯_𝐢 is an integration of features from different semantic levels, namely low-level (video frames), middle-level (face parsing and landmarks), and high-level semantics (fine-grained captions of facial action changes obtained using MLLM). The similarity between 𝐯_𝐢 and 𝐜_𝐢 is calculated as sim_i. To employ the cross-entropy loss, we calculate the prediction probability over class cls_i as: p(cls_i|𝐯_𝐢)=exp(sim_i/τ)/∑_j=0^N-1exp(sim_j/τ), where N is the number of total classes and τ represents the temperature parameter of CLIP. §.§ Label Augmentation via PN Descriptors Although in-the-wild DFER usually comprises limited categories (e.g., 7 in DFEW <cit.> and FERV39k <cit.>, or 11 in MAFW <cit.>), the recognition difficulty is not reduced, owing to the high inter-class ambiguity (as shown in Tab. <ref>). Therefore, as stated in Sec. <ref>, class labels are transformed into textual supervision for utilizing their semantic meanings.
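Before detailing the PN descriptors, note that the prediction probability of the overall pipeline above reduces to a temperature-scaled softmax over cosine similarities; a minimal sketch (with numpy arrays standing in for the actual CLIP-space tensors) is:

```python
import numpy as np

def class_probabilities(v, class_embs, tau=0.01):
    """Softmax over cosine similarities between a video representation v
    and N class (text) representations; tau is CLIP's temperature."""
    v = v / np.linalg.norm(v)
    c = class_embs / np.linalg.norm(class_embs, axis=1, keepdims=True)
    logits = (c @ v) / tau           # sim_i / tau for each class
    logits -= logits.max()           # numerical stability
    p = np.exp(logits)
    return p / p.sum()

rng = np.random.default_rng(0)
print(class_probabilities(rng.standard_normal(512),
                          rng.standard_normal((7, 512))))
```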
While existing CLIP-based DFER models <cit.> mostly focus on enriching the textual descriptions for ground truth labels from a positive view, in this work, we devise a different label augmentation strategy by extending the original class labels from both positive and negative perspectives. Specifically, the Positive-Negative (PN) descriptors are derived as follows, i.e., P(ositive): "A person with an expression of {Cls}.", and N(egative): "A person with an expression of no {Cls}.". Correspondingly, the augmented textual supervision 𝒞 contains two different collections, namely 𝒞_P for positive collections and 𝒞_N for negative collections. Then, both text collections are tokenized and projected into word embeddings, obtaining 𝐗_T_P, 𝐗_T_N∈ℝ^l× d_T, where l represents the text length. The inputs are further constructed as: 𝐳_T_P^(0)=𝐗_T_P+𝐄_T_P, 𝐳_T_N^(0)=𝐗_T_N+𝐄_T_N, where 𝐄 denotes the positional encoding. To further encode 𝐳_T_P^(0) and 𝐳_T_N^(0), we resort to the pre-trained textual part of the VLM <cit.>, a model with L_T pre-trained transformer layers, denoted by {ℰ_T^(i)}_i=1^L_T. Keeping the original weights of these well-trained layers, we introduce trainable lightweight adapters after each frozen layer ℰ_T^(j), denoted as {𝒜_T_P^(j)} and {𝒜_T_N^(j)} for positive and negative textual supervision, respectively. Then the encoded positive and negative textual features can be obtained via: 𝐳_T_P^(j)=ℰ_T_P^(j)(𝒜_T_P^(j)(𝐳_T_P^(j-1))), 𝐳_T_N^(j)=ℰ_T_N^(j)(𝒜_T_N^(j)(𝐳_T_N^(j-1))). We adopt the basic Adapter structure proposed in <cit.> for all adapters in our FineCLIPER framework. The structure of the adapter is illustrated in the middle of Fig. <ref>. Then the final positive and negative text representations can be obtained by: 𝐜_P=𝐡_T(𝐳_T_P,l^(L_T)), 𝐜_N=𝐡_T(𝐳_T_N,l^(L_T)), where 𝐳_T,l^(L_T) is the last token of 𝐳_T^(L_T) and 𝐡_T is a projection layer. §.§ Hierarchical Information Mining Our FineCLIPER adopts a hierarchical manner to mine useful information from: 1) the low semantic level, where video frames are directly embedded; 2) the middle semantic level, where face segmentation and landmarks are exploited; and 3) the high semantic level, where fine-grained descriptions are obtained via MLLM to depict facial dynamics across frames. Details can be found as follows: Video Frames Embedding provides semantically low-level features since the model operates at the pixel level. To effectively explore the spatial-temporal visual information, we resort to the strong spatial modeling abilities displayed by CLIP and utilize a temporally expanded version inspired by <cit.>. Formally, we are given a video clip V∈ℝ^T× H× W×3, where H× W is the spatial size and T is the temporal length. For the t-th frame, we spatially divide it into non-overlapping patches {𝐏_t,i}_i=1^M∈ℝ^P^2×3, where M = HW/P^2. These patches are then projected into patch embeddings 𝐗_v,t∈ℝ^M × d, where d represents the embedding dimension. Therefore, the representation for the given video V is 𝐳∈ℝ^T× M× d. After the temporal information undergoes processing by the temporal adapter, the spatially adapted feature can be derived through the following procedure: 𝐳_TemV^(j)=ℰ_V^(j)(𝒜_V^(j)(𝐳^(j))), 𝐳_SpaV^(j)=ℰ_V^(j)(𝒜_V^(j)(𝐳_TemV^(j))), where 𝐳_TemV^(j) and 𝐳_SpaV^(j) denote the temporally and spatially adapted features, respectively. As a result, the adapter, operating in parallel with the MLP layer, aims to collectively refine the representation of spatiotemporal information.
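The adapters follow the structure of <cit.>; since exact dimensions are not spelled out here, the sketch below assumes a standard bottleneck design (down-projection, nonlinearity, up-projection, residual connection) with illustrative sizes.

```python
import numpy as np

class Adapter:
    """Bottleneck adapter sketch: z + up(relu(down(z))). Model width d
    and bottleneck size r are assumed values, not from the paper."""
    def __init__(self, d=512, r=64, seed=0):
        rng = np.random.default_rng(seed)
        self.W_down = 0.02 * rng.standard_normal((d, r))
        self.W_up = np.zeros((r, d))   # zero init: starts as the identity map

    def __call__(self, z):             # z: (sequence_length, d)
        h = np.maximum(z @ self.W_down, 0.0)
        return z + h @ self.W_up       # residual connection

z = np.random.default_rng(1).standard_normal((77, 512))
assert Adapter()(z).shape == (77, 512)
```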
The final feature, scaled by a factor s (set to 0.5 in our framework), can be expressed as follows: 𝐳_V^(j)=𝐳_SpaV^(j) + MLP(LN(𝐳_SpaV^(j))) + s·𝒜_V^(j)(LN(𝐳_SpaV^(j))). Thus, the ultimate video representation at a low semantic level is derived as 𝐯=𝐡_V(𝐳_V^(L_V)). Face Parsing and Landmarks Detection. Based on a given frame, we can further mine middle-level semantic information from it. In our task, as the main content of a frame is mostly a human face, we choose to utilize a powerful facial analysis model, FaceXFormer <cit.>, to obtain generalized and robust face representations. Specifically, we extract the facial segmentation map and perform landmark detection. Intuitively, the former implies the semantically grouped facial regions, while the latter provides accurate locations indicating different face parts (e.g., eyes, nose, etc.). Specifically, given a specific video clip V, the extracted parsing results and landmark maps are represented as P and L, respectively. Following patch embedding, both P and L are fed into the corresponding segmentation encoder 𝐄_P and landmark encoder 𝐄_L, similar to the operation done for the frame data. The encoders 𝐄_P and 𝐄_L share weights to collaboratively capture middle-level face semantics. Finally, the parsing and landmark representations can be obtained as 𝐩=𝐡_P(𝐳_P^(L_P)) and 𝐥=𝐡_L(𝐳_L^(L_L)), where 𝐡_P and 𝐡_L are projection layers for P and L, respectively. Additional Fine-grained Descriptions. In this part, we try to achieve fine-grained details describing the facial dynamics across video frames to serve as high-level semantics. Specifically, for each video clip V, we adopt Video-LLaVA <cit.>, an MLLM, to generate detailed descriptions under the guidance of an elaborately designed prompt, where the model is asked to play the role of a facial expression analyst and provide details of facial changes, as illustrated in Fig. <ref>. To elaborate, the provided text prompt raises requirements for the granularity of the descriptions, explicitly specifying movements involving various local facial regions. However, the generated description may include emotion-related words associated with the label or contain some redundant information. Hence, we thoroughly refined all generated descriptions to achieve a concise and high-quality summary. The refinement works as follows. Initially, we employed a rule-based approach, utilizing pre-configured regular filters to eliminate redundant and irrelevant textual information. Popular text processing tools from the NLTK package were then utilized to remove noise. Subsequently, each data entry goes through manual inspection to filter out abnormal descriptions. The average number of tokens in our refined descriptions is approximately 35 tokens. However, research <cit.> demonstrates that the actual effective length of CLIP's text encoder is even less than 20 tokens. Hence, to better explore the fine-grained description of facial changes, we adopt the text encoder of Long-CLIP <cit.> as our fine-grained text encoder 𝐄_F, which can support text inputs of up to 248 tokens. The refined fine-grained description, denoted as F, is further tokenized and projected into embeddings 𝐗_F. Following a procedure similar to the text encoder described in Sec. <ref>, the input is further constructed as 𝐳_F^(0)=𝐗_F+𝐄_F, where 𝐄_F is the positional encoding of F. Subsequently, by feeding it into the projector 𝐡_F, we can obtain the final feature vector of F as: 𝐟=𝐡_F(𝐳_F,l^(L_F)). §.§ Weighted Integration.
Through the aforementioned semantically hierarchical information mining process, we obtain: 1) low-level video frame features 𝐯, 2) middle-level face parsing features 𝐩 and face landmark features 𝐥, and 3) high-level fine-grained description features 𝐟. The integration of these features is done using an adaptive fusion strategy. Specifically, given a specific video V, the supervision for the i^th class is represented by both the positive 𝐜^i_P and the negative 𝐜^i_N. For any representation 𝐦∈{𝐯, 𝐩, 𝐥, 𝐟}, the similarity between 𝐦 and 𝐜_P, as well as between 𝐦 and 𝐜_N, is defined by calculating the cosine similarity: sim_i,𝐦^pos=𝐜_P^i·𝐦/(‖𝐜_P^i‖‖𝐦‖), sim_i,𝐦^neg=𝐜_N^i·𝐦/(‖𝐜_N^i‖‖𝐦‖), and the final similarity is obtained by: sim_i,𝐦 = sim_i,𝐦^pos - sim_i,𝐦^neg, which further distinguishes similarity among similar categories. Then, by finding the max similarity across all the categories, we obtain sim_𝐯 = max_i=0^N-1(sim_i,𝐯). Similarly, we obtain sim_𝐟, sim_𝐩, and sim_𝐥 from their corresponding max-similarity categories. Normalizing these similarities, we obtain the weight corresponding to each representation as: w_𝐦=e^sim_𝐦/(e^sim_𝐯+e^sim_𝐟+e^sim_𝐩+e^sim_𝐥). Such weights are calculated for 𝐯, 𝐩, 𝐥, 𝐟 alike, resulting in the corresponding weights w_𝐯, w_𝐟, w_𝐩, and w_𝐥. Then the overall multi-modal representation 𝐯^mm of the Multi-Modal Encoders can be obtained as follows: 𝐯^mm=w_𝐯·𝐯 + w_𝐟·𝐟 + w_𝐩·𝐩 + w_𝐥·𝐥, where the weights also correspond to the weights of the cross-entropy loss for each modality. The overall loss function can thus be expressed as: ℒ=1/ℬ∑_i=1^ℬ(ℋ(y_i,p(cls_i|𝐯^mm))+ w_𝐯·ℋ(y_i,p(cls_i|𝐯)) +w_𝐩·ℋ(y_i,p(cls_i|𝐩)) +w_𝐥·ℋ(y_i,p(cls_i|𝐥))+w_𝐟·ℋ(y_i,p(cls_i|𝐟))), where ℬ and ℋ denote the batch size and the cross-entropy loss, respectively. § EXPERIMENT §.§ Setup Datasets and Evaluation. Following previous works, we adopt both supervised and zero-shot learning paradigms, evaluating our proposed FineCLIPER together with the baselines on various in-the-wild DFER datasets, including DFEW <cit.>, FERV39k <cit.>, and MAFW <cit.>. We utilize UAR (Unweighted Average Recall) and WAR (Weighted Average Recall) as evaluation metrics for our assessments. Both DFEW and FERV39k have 7 dynamic facial expression categories to recognize, while MAFW has 11 categories. It is noteworthy that the MAFW dataset comes with video captions for each video, making it a choice for pretraining in the zero-shot setting. Implementation Details. For fairness and consistency, all the experiments of our FineCLIPER are built on a CLIP model with the ViT-B/16 backbone using a single NVIDIA RTX 4090 GPU. We process the input by resizing and cropping 16 video frames to a uniform size of 224×224 pixels. The SGD optimizer is employed with an initial learning rate of 3 × 10^-4. FineCLIPER is trained in an end-to-end manner over 30 epochs with the temperature hyper-parameter τ=0.01. §.§ Main Results Supervised Setting. The quantitative results in the supervised setting on three standard DFER datasets are depicted in Tab. <ref>. It can be observed that our proposed FineCLIPER achieves state-of-the-art performance compared with other DFER approaches. In addition, our method outperforms all CLIP-based DFER methods with the most lightweight architecture and also the fewest tunable parameters. Furthermore, we investigate three variants of our FineCLIPER, incorporating face parsing and landmark modalities, along with fine-grained text descriptions of facial changes, justifying the combination of these strategies.
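As a concrete companion to the weighted integration above, the following sketch computes the per-modality weights and the fused representation from the positive/negative class embeddings; modality names and dimensions are illustrative placeholders, not the paper's implementation.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def adaptive_fusion(feats, c_pos, c_neg):
    """feats: dict modality -> representation (e.g. 'v','p','l','f');
    c_pos, c_neg: (N, d) arrays of positive / negative class embeddings.
    Per-modality score = max over classes of (pos - neg) cosine similarity;
    weights = softmax of these scores; output = weighted sum of features."""
    sims = {m: max(cosine(c_pos[i], x) - cosine(c_neg[i], x)
                   for i in range(len(c_pos)))
            for m, x in feats.items()}
    exps = {m: np.exp(s) for m, s in sims.items()}
    total = sum(exps.values())
    w = {m: exps[m] / total for m in feats}
    v_mm = sum(w[m] * feats[m] for m in feats)
    return v_mm, w

rng = np.random.default_rng(0)
feats = {m: rng.standard_normal(512) for m in ("v", "p", "l", "f")}
v_mm, w = adaptive_fusion(feats, rng.standard_normal((7, 512)),
                          rng.standard_normal((7, 512)))
print(w)   # the same weights also scale the per-modality loss terms
```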
The superiority of our FineCLIPER is also supported by the substantial improvement in the most challenging category for previous methods, i.e., “Disgust (Dis.)”, as shown in Tab. <ref>. It is worth noting that even without the hierarchical information modeling, FineCLIPER, which only has PN descriptors with adapters, still achieves competitive performance. This demonstrates the effectiveness of the label augmentation strategy via PN descriptors and the usage of PEFT techniques. Further ablation studies can be found in Sec. <ref>. Zero-shot Setting. To assess the generalization ability of FineCLIPER, we perform zero-shot DFER using captions extracted directly from each video (i.e., training on one of the three DFER datasets and then testing on the other two datasets). Our main baseline is EmoCLIP <cit.>, the first CLIP-based zero-shot DFER model, which utilizes the MAFW <cit.> dataset for pretraining. The comparison between captions in MAFW and our generated fine-grained descriptions is shown in Fig. <ref>. Tab. <ref> reports the recognition performance of our FineCLIPER compared with other approaches in the zero-shot DFER setting. Not only did we surpass the previous methods when the pretraining data was consistent, but employing our generated fine-grained captions also led to a significant performance improvement. This further demonstrates the effectiveness of the fine-grained descriptions obtained and used by our FineCLIPER, which focus more on facial changes instead of video scenes (as in MAFW). In other words, fine-grained descriptions play a pivotal role in guiding the model's attention toward detailed aspects of specific facial regions in the zero-shot setting. §.§ Ablation Studies Performance w.r.t. different-level facial features. We investigate the effectiveness of using low-, middle-, or high-level semantics on performance. The results are presented in Tab. <ref>. Notably, although the middle semantic level focuses more on faces, the lack of general visual information poses challenges for vision-language models in data understanding. Additionally, we examine the effectiveness of middle-level face semantics obtained through face parsing and landmark detection, with results shown in Tab. <ref>. Comparing rows 1-2 and 1-3, middle-level facial features improve performance. Combining face segmentation and landmarks yields the best results, demonstrating their complementary nature. Performance w.r.t. label augmentation strategies. Since DFER is a classification task, the supervision typically consists of class labels. However, we extend this supervision to include semantically meaningful textual information, proposing a novel approach that incorporates both positive and negative aspects. The first three rows of Tab. <ref> show the ablations involving this label augmentation strategy. Controlling other variables, our Pos-Neg augmentation achieves the best results across all metrics. To understand the effectiveness of the Pos-Neg descriptors, we visualize the class-wise cosine similarity between video representations and both positive (green) and negative (red) text supervision, as shown in Fig. <ref>. This visualization reveals that while positive supervision may sometimes fail (indicated by low positive similarity for some categories), the inclusion of negative supervision helps address these shortcomings. Considering the complexity of emotions expressed through facial expressions, we further replaced the word "no" with "less" in the Negative Descriptor, as shown in Tab. <ref>.
The results indicate that using fewer absolute terms leads to poorer performance in the recognition task, which aims to identify prominent emotions. Performance w.r.t. the usage of trainable adapters. We adopted several lightweight trainable adapters in our FineCLIPER to efficiently adapt the ability of large pre-trained models. The corresponding ablation studies are demonstrated in Tab. <ref>. We can see that given the same supervision settings (e.g., pos+neg for FineCLIPER), adding small adaptive modules can effectively boost the performance with only limited trainable parameters (e.g., 20M for all adapters in FineCLIPER). Effect of each component. To validate the effectiveness of each component module in our FineCLIPER, we visualize the attention map of the last transformer block, as shown in Fig. <ref>. Specifically, we sequentially add components from top to bottom, including adding the Adapters, using the parsing results and landmarks of faces, as well as using the high-level semantics from the fine-grained descriptions generated by the MLLM. We can see that the model's attention shrinks to more crucial and concentrated face parts w.r.t. certain categories. For example, it focuses on the mouth, eyes, and eyebrows when identifying Happiness, which aligns well with expression recognition using human vision. Such visualization results provide a vivid interpretation to explain the superior recognition performance of FineCLIPER. § CONCLUSION Dynamic Facial Expression Recognition (DFER) is vital for understanding human behavior. However, current methods face challenges due to noisy data, neglect of facial dynamics, and confusing categories. To this end, we propose FineCLIPER, a novel framework with two key innovations: 1) augmenting class labels with textual PN (Positive-Negative) descriptors to differentiate semantic ambiguity based on the CLIP model's cross-modal latent space; 2) employing a hierarchical information mining strategy to mine cues from DFE videos at different semantic levels: low (video frame embedding), middle (face segmentation masks and landmarks), and high (MLLM for detailed descriptions). Additionally, we use Parameter-Efficient Fine-Tuning (PEFT) to adapt all the pre-trained models efficiently. FineCLIPER achieves SOTA performance on various datasets with minimal tunable parameters. Detailed ablations and analysis further verify the effectiveness of each design. § APPENDIX §.§ Introduction The content of our supplementary material is organized as follows: 1) In Sec. <ref>, we present an illustration of the FineCLIPER variants; 2) In Sec. <ref>, we analyze two competitive baseline models and the efficiency of our FineCLIPER; 3) In Sec. <ref>, we further analyze the proposed adaptive weighting strategy and scaling factor s; 4) In Sec. <ref>, we present detailed information regarding fine-grained text descriptions of facial action movements. §.§ FineCLIPER Variants FineCLIPER: This variant utilizes only low-semantic-level video frames along with PN (Positive-Negative) descriptors for label augmentation; FineCLIPER^∗: Building upon FineCLIPER, this variant incorporates middle-semantic-level face parsing and landmarks; FineCLIPER^†: Extending FineCLIPER, this variant directly integrates high-semantic-level fine-grained descriptions of facial action changes; FineCLIPER^∗^†: Expanding on FineCLIPER, this variant includes both middle-semantic-level and high-semantic-level information. §.§ Baselines vs.
FineCLIPER Here we first analyze two competitive baseline models: S2D <cit.> achieved notable results with minimal tunable parameters on the ViT-B/16 backbone. However, it is noteworthy that it first undergoes pre-training on a Static Facial Expression Recognition (SFER) dataset, specifically AffectNet-7 <cit.> (consisting of 283,901 training samples), for 100 epochs before fine-tuning on the DFER dataset. This pre-training step significantly contributes to its performance; A^3lign-DFER <cit.>, as the latest CLIP-based DFER model, predominantly relies on the CLIP-ViT-L/14 backbone to further empower DFER from an alignment perspective. The training process is delineated into three stages spanning a total of 100 epochs. Regrettably, pertinent information regarding tunable parameters was not found within its paper. In contrast, our FineCLIPER model employs the CLIP-ViT-B/16 backbone and undergoes training solely on the DFER dataset for 30 epochs, achieving state-of-the-art performance with 13-20M tunable parameters in both supervised and zero-shot settings. Tab. <ref> presents the performance of FineCLIPER in the supervised setting at larger scales. Although a larger backbone could potentially yield better results, our choice prioritizes efficiency and ensures a fair comparison with baseline models. Additionally, we provide a parameter-performance comparison of DFER on the DFEW testing set in Fig. <ref>. §.§ Additional ablations Scaling factor s. The scaling factor s controls the weight of the output from the Adapter in Eq. <ref>. In Tab. <ref>, we further conduct ablation experiments on the value of s. More details can be found in <cit.>. Adaptive Weighting. Due to the inherent correlation of the expanded multimodal data (i.e., face parsing, landmarks, and fine-grained text) with videos, this work endeavors to explore the feasibility of further modeling faces using multi-modal data. In our proposed adaptive weighting algorithm, we determine the fusion of features and the weighting of the loss function adaptively by computing the similarity between multi-modal features and label features. To further validate the superiority of this strategy, we fix the weight of the video feature, 𝐰_𝐯, at values ranging from 0.1 to 0.9, while evenly distributing the remaining weight, i.e., 1-𝐰_𝐯, among the other three modal features. As illustrated in Fig. <ref>, our proposed adaptive weighting strategy exhibits greater stability and effectiveness compared to fixed weights. Certainly, the fusion of features can be further optimized. The simplicity of our fusion strategy in this study serves to further substantiate the feasibility and potential of leveraging multi-modal data for DFER. In future works, we intend to delve deeper into the potential of feature fusion. §.§ Fine-grained Text Generation MLLM Selection. Considering resource consumption, we initially evaluated several open-source Multi-modal Large Language Models (MLLMs) capable of processing videos, namely Video-LLaMA <cit.>, VideoChat <cit.>, VideoChat-2 <cit.>, and Video-LLaVA <cit.>. To expedite the assessment of their ability to comprehend facial videos, we employed a simple prompt at this stage, i.e., "Please describe the facial feature changes in the video in detail". As depicted in Fig. <ref>, we highlight in green the descriptions of facial features output by the four MLLMs. It is evident that Video-LLaVA, compared to the other three, more accurately captures facial feature information.
Consequently, we adopt Video-LLaVA as our fine-grained text generation model. Subsequent sections will elaborate on the detailed text prompt and refinement process for fine-grained text generation. Prompt Design. With the advancement of large language models, the significance of prompt engineering has become increasingly apparent. Well-crafted prompts can significantly enhance a model's ability to generate responses tailored to specific tasks. As illustrated in Fig. <ref>, in addition to explicit instructions, corresponding examples are provided for the model to reference and learn from. Furthermore, to further standardize the model's responses, six requirements are delineated. Finally, considering that the model may describe actions based on its analysis of facial expressions, ground truth labels for each video are also provided for the model's reference. Text Refinement. Text refinement plays a pivotal role in our proposed FineCLIPER framework. Specifically, we identify two categories of low-quality text: 1) Directly expressing emotions. For example, stating "The man in the video wears a sad expression..." This can lead to data leakage during the training process. 2) Indirectly implying emotions. For example, stating "The man's mouth is slightly ajar, showing his teeth, and his eyes are narrowed, suggesting a feeling of joy or amusement." Despite not explicitly containing label information, such descriptions still pose a risk of potential data leakage. To this end, we introduce a two-stage heuristic process for text refinement in this study, as outlined by <cit.>, which comprises text cleaning and counterfactual verification, as illustrated in the main content. Specifically, manual inspection involves the participation of numerous master's and undergraduate students with backgrounds in psychology or computer science. Generated Instances. As demonstrated in Fig. <ref>, despite the careful design of the prompt in Fig. <ref>, emphasizing "Don't include emotional words," the generated text still contains several direct or indirect emotional expressions, as highlighted in red. Subsequently, through the implementation of the two-stage text refinement process, the refined text predominantly encompasses facial features and implied actions, as highlighted in bold. Such refined fine-grained text significantly enhances and strengthens facial modeling from a high semantic level. All the fine-grained descriptions, along with the face parsing and landmarks data, will be released after the paper notification.
http://arxiv.org/abs/2407.03129v1
20240703141204
Social Bias Evaluation for Large Language Models Requires Prompt Variations
[ "Rem Hida", "Masahiro Kaneko", "Naoaki Okazaki" ]
cs.CL
[ "cs.CL" ]
Social Bias Evaluation for Large Language Models Requires Prompt Variations ====================================================================================================== § ABSTRACT Warning: This paper contains examples of stereotypes and biases. Large Language Models (LLMs) exhibit considerable social biases, and various studies have tried to evaluate and mitigate these biases accurately. Previous studies use downstream tasks as prompts to examine the degree of social biases for evaluation and mitigation. While LLMs' output highly depends on prompts, previous studies evaluating and mitigating bias have often relied on a limited variety of prompts. In this paper, we investigate the sensitivity of LLMs when changing prompt variations (task instruction and prompt, few-shot examples, debias-prompt) by analyzing task performance and social bias of LLMs. Our experimental results reveal that LLMs are highly sensitive to prompts, to the extent that the ranking of LLMs fluctuates when comparing models for task performance and social bias. Additionally, we show that LLMs have tradeoffs between performance and social bias caused by the prompts. A prompt setting that yields less bias may come at the cost of reduced performance. Moreover, the ambiguity of instances is one of the reasons for this sensitivity to prompts in advanced LLMs, leading to varying outputs. We recommend using diverse prompts, as in this study, to compare the effects of prompts on social bias in LLMs. § INTRODUCTION While LLMs achieve high performance, they also exhibit unfair, severe social biases, which can harm specific groups <cit.>. In response to these concerns, many prior studies have sought to assess and mitigate social bias in LLMs. Social biases in LLMs are often evaluated using the LLMs' predictions in downstream tasks such as question answering <cit.>, natural language inference <cit.>, commonsense reasoning <cit.>, and sentence completion <cit.>. Recent LLM developers adopt downstream-task-style assessment for their own LLMs' bias evaluation and release LLMs with bias evaluation results compared against existing models <cit.>. As for mitigation of social bias, various methods have also been proposed, such as counterfactual data augmentation <cit.>, decoding intervention <cit.>, and text intervention <cit.>. Although LLMs should have both higher task performance and less social bias, challenges remain in the evaluation due to the sensitivity regarding prompts <cit.>. Previous studies have highlighted that LLMs are sensitive to task instructions and prompts <cit.>, and verification with multiple prompts is crucial in task performance evaluation of LLMs <cit.>. Whereas prompt sensitivity with respect to task performance in LLMs has been recognized, bias evaluation still requires further exploration to understand the challenges. In bias evaluation, identifying the worst-case scenarios is important when considering potential risks associated with social bias in LLMs <cit.>. This sensitivity hinders evaluating and mitigating social bias in LLMs, leading to either underrating or overrating social biases in LLMs and the effectiveness of debiasing. In this paper, we empirically studied the sensitivity of 12 LLMs to prompt variations in evaluating task performance and social bias[<https://github.com/rem-h4/llm_socialbias_prompts>], focusing on a question-answering dataset, BBQ <cit.>.
We categorized three prompt variation factors to assess the sensitivity of task performance and social bias in LLMs comprehensively, as illustrated in Figure <ref>: 1) task instruction and prompt for task recognition, 2) few-shot examples for task performance improvement, and 3) debias-prompt for bias mitigation, such as adding "Note that the sentence does not rely on stereotypes." Table <ref> compares prompt variations from the three perspectives in previous work. Although previous work provided insight into social bias in LLMs, their evaluation settings are limited and could be more extensive along the three perspectives. Our experimental results reveal that LLMs are highly sensitive to prompts in bias evaluation. The ranking of LLMs and debiasing effectiveness fluctuate when comparing models for task performance and bias scores, even though the prompt format does not affect the semantics (<ref>). We also show that LLMs have tradeoffs between task performance and social bias caused by the prompts; for example, bias increases with prompts under which task performance increases (<ref>). Furthermore, we confirmed that the ambiguity of instances contributes to the sensitivity to prompts in advanced LLMs (<ref>). Our investigation can shed light on the vulnerability of LLMs in bias evaluation. We recommend using diverse prompts to compare the effects of prompts on social bias in LLMs. § BIAS EVALUATION ON LLMS USING THE DOWNSTREAM TASK This paper focuses on bias evaluation using multiple choice questions (MCQs). In the MCQ setting, the LLMs are required to choose the most suitable answer from the candidate answers (<ref>). We prepared three prompt variation factors to confirm LLMs' sensitivity in bias evaluation (<ref>). §.§ Multiple Choice Question on LLMs When evaluating LLMs using MCQs, the LLM receives the context, the question, and symbol-enumerated candidate answers as a single prompt, following previous work on MCQs <cit.>. The symbol of the answer assigned the highest probability is taken as the LLM's answer to the MCQ. Our prompt template, designed for MCQs with three options, is described below. The prompt format for MCQs: {task instruction} Context: {context} Question: {question} Choices: A: {option A} B: {option B} C: {option C} Answer: Each {} is a placeholder for values from the dataset. §.§ Prompt Variations We vary the following three perspectives in evaluating bias in LLMs: 1) task instruction and prompt, 2) few-shot examples, and 3) debias-prompt. Previous studies showed that these factors could affect task performance, i.e., LLMs' predictions. In real-world use cases, users of LLMs can employ any prompt format. Such deviations can introduce gaps between real-world and evaluation environments, unintentionally leading to adverse outcomes such as task performance degradation or bias amplification. Therefore, verification with prompt variations is needed. Task Instruction and Prompt Task instructions and prompts describe the task setting, briefly state how to solve the task, and specify how to format the task instance for LLMs. They are the minimal settings for solving tasks using LLMs in the zero-shot setting. Previous work showed the vulnerability of LLMs to task instructions <cit.> and prompt formatting <cit.>. Few-shot Examples Few-shot examples are demonstrations for LLMs to recognize and learn tasks in the manner of in-context learning. Few-shot prompting can improve task performance despite being a simple method that does not update parameters <cit.>.
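To ground the MCQ protocol described above, the sketch below assembles the template and picks the answer symbol with the highest probability; the instruction and instance strings are illustrative placeholders (not drawn from any dataset), and the model call producing the option log-probabilities is stubbed out.

```python
TEMPLATE = """{instruction}
Context: {context}
Question: {question}
Choices:
A: {a}
B: {b}
C: {c}
Answer:"""

def build_prompt(instruction, context, question, options):
    a, b, c = options
    return TEMPLATE.format(instruction=instruction, context=context,
                           question=question, a=a, b=b, c=c)

def choose(option_logprobs):
    # pick the option symbol the model assigns the highest probability;
    # the LLM call producing these log-probabilities is omitted here
    return max(option_logprobs, key=option_logprobs.get)

prompt = build_prompt("Answer the question from the given choices.",
                      "Two people were waiting at the clinic.",
                      "Who was the doctor?",
                      ("The first person", "The second person", "Unknown"))
print(choose({"A": -2.1, "B": -1.8, "C": -0.4}))   # -> 'C'
```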
Moreover, creating few-shot examples is more practical and reasonable than developing a large amount of training data, even when solving an unseen task. Therefore, few-shot prompting is often adopted for LLMs' evaluation <cit.>. Debias-Prompt Prompting-style debiasing is a promising method to mitigate social bias because it does not require additional model training and works with only additional text input. We call this kind of prompt a debias-prompt. Although prior work verified the effectiveness of debias-prompts on bias evaluation datasets to some extent <cit.>, only limited prompts or models were verified[Though Chain-of-Thought prompting has also been adopted for bias mitigation, it faces the additional challenge of performance degradation due to wrong explanations made by LLMs <cit.>. We therefore mainly focus on simple types of prompting.]. Therefore, comparing the effectiveness of different debias-prompts is important. Based on debias-prompts proposed in previous work, we categorized three perspectives for debias-prompts: (1) Level: stereotypes can be subdivided into levels such as general, gender, occupation, etc.; (2) Style: debias-prompts can be broadly classified into two types, instructive text including expressions such as "Note that" <cit.>, and plain text <cit.>; (3) Negation: previous prompts have included and excluded negation, which is one of the most important aspects of a prompt <cit.>. We created twelve different prompts using a template based on the three categories[We have confirmed the effectiveness of our debias-prompts on the intrinsic bias evaluation datasets CrowS-Pairs <cit.> and StereoSet <cit.>. Details are described in Appendix <ref>.]. § EXPERIMENTS In this section, we first investigated the sensitivity of LLMs in the zero-shot setting (<ref>). After that, we also investigated whether the few-shot setting can mitigate LLMs' sensitivity and how it affects task performance and bias scores compared to the zero-shot setting (<ref>). Finally, we examined how the debias-prompt can affect the metrics (<ref>). To quantify sensitivity, we calculate the sensitivity gap, which is the difference between an LLM's maximum and minimum scores on each metric. Dataset (BBQ): The BBQ dataset aims to evaluate various social biases via the question answering task <cit.>. This dataset was created using templates carefully written by humans. Each BBQ instance contains a context and a question with three answer candidates: a stereotype answer, an anti-stereotype answer, and an unknown answer. In BBQ, four instances are combined, with two different context types (either ambiguous or disambiguated) and two different question types (negative or non-negative). The disambiguated contexts comprise the ambiguous context plus additional information supporting the answers to the questions. The additional information leans toward either the stereotype or the anti-stereotype. We extracted the gender categories and filtered out some instances with proper names regarded as bias category proxies from the original dataset, according to prior work <cit.>. We used 2016 instances, and Table <ref> shows an example from the BBQ dataset. Metrics: In this paper, we use two existing metrics for BBQ following <cit.> and introduce an additional metric: (1) accuracy: This metric indicates the task performance. In ambiguous contexts, the correct answer is always `unknown' regardless of the question. In disambiguated contexts, the correct answers correspond to the question.
We denote the accuracy in ambiguous and disambiguated contexts as Acc_a and Acc_d, which are calculated as follows: Acc_a = n_a^u/n_a , Acc_d = (n_sd^s + n_ad^a)/(n_sd + n_ad), where n_a, n_sd, n_ad denote the number of instances with ambiguous context, stereotypical disambiguated context, and anti-stereotypical disambiguated context, respectively. The superscript of each n stands for the predicted labels: stereotype (^s), anti-stereotype (^a), and unknown (^u). (2) consistency: We introduce another metric for evaluating whether an LLM can distinguish the context difference, partly inspired by <cit.>. BBQ has negative and non-negative questions, so an LLM should give different answers to the two questions in the disambiguated context. If the LLM can recognize the context, the answers to negative and non-negative questions should differ. Based on this idea, we formulate the measure as follows: Consist_d = (2/n_d)∑^n_d/2_i𝕀[a^i_neg≠ a^i_nonneg], where n_d denotes the number of instances with disambiguated context, a^i_neg denotes the LLM's answer to the negative question of the i-th instance, and a^i_nonneg that to the non-negative question. A higher value indicates that the LLM can distinguish context information when answering questions. (3) diff-bias: This metric indicates how much LLMs lean toward the stereotype or the anti-stereotype. We calculate it as the accuracy difference between answers to the stereotype and the anti-stereotype: Diff-bias_a = (n^s_a - n^a_a)/n_a , Diff-bias_d = n^s_sd/n_sd - n^a_ad/n_ad. Here, the bias score ranges from -100 to 100. A positive score indicates biases toward stereotypes, while a negative score indicates biases toward anti-stereotypes. The ideal LLM has 100, 100, and 0 for accuracy, consistency, and diff-bias, respectively. Model We used 12 LLMs from four types of publicly available billion-parameter-scale LLM variants with varying parameter counts, both instruction-tuned and not: Llama2 <cit.>, OPT <cit.>, MPT <cit.>, Falcon <cit.>; details are in Appendix <ref>. We used the huggingface transformers library[<https://github.com/huggingface/transformers>] and conducted all experiments on a single NVIDIA A100 GPU with 40GB RAM. §.§ Zero-shot Setting Setting In the zero-shot setting, we varied the prompt formats. We prepared nine prompts in total: one with no task instruction, plus eight combinations of four types of task instruction and two types of option id (lower-case or upper-case) as minimal changes[We used task instructions based on previous work <cit.>. Details are described in Appendix <ref>.]. We used three cyclic permutation orders to mitigate position bias <cit.>: (1,2,3), (3,1,2), (2,3,1), where 1,2,3 represent the original choice options. We calculated the sensitivity gap under the format changes. Result Table <ref> shows the result of the sensitivity gap across prompt formats for various LLMs. It indicates that the models' accuracy, consistency, and diff-bias have large score gaps, and there is no clear tendency regarding model size or model type, with or without instruction tuning. These findings suggest that even advanced LLMs are vulnerable to format changes not only in task performance but also in bias scores. §.§ Few-shot Setting Setting In the few-shot setting, we used 4-shot samples for BBQ evaluation[Table <ref> shows the few-shot samples in the Appendix.]. We formatted the few-shot samples with the same option symbols as in the target evaluation instance. The few-shot samples are inserted between the task instruction and the target instance.
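From the counts defined above, the three metrics can be computed as in the following sketch (scores are expressed on the 0-100 scale used in the text; the example counts are made up for illustration and are not the paper's results):

```python
def bbq_scores(n_a, n_a_s, n_a_a, n_a_u, n_sd, n_sd_s, n_ad, n_ad_a,
               n_d, n_consist):
    """n_consist: number of disambiguated question pairs whose answers
    to the negative and non-negative questions differ (out of n_d/2)."""
    acc_a = 100 * n_a_u / n_a
    acc_d = 100 * (n_sd_s + n_ad_a) / (n_sd + n_ad)
    consist_d = 100 * 2 * n_consist / n_d
    diff_bias_a = 100 * (n_a_s - n_a_a) / n_a
    diff_bias_d = 100 * (n_sd_s / n_sd - n_ad_a / n_ad)
    return acc_a, acc_d, consist_d, diff_bias_a, diff_bias_d

print(bbq_scores(n_a=1008, n_a_s=300, n_a_a=200, n_a_u=508,
                 n_sd=504, n_sd_s=420, n_ad=504, n_ad_a=310,
                 n_d=1008, n_consist=330))
```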
We must ensure that the few-shot examples do not introduce additional social bias into LLMs through their textual content. To address this, we sampled them from another stereotype category of the BBQ dataset and anonymized the words related to stereotypical answers in the samples, e.g., replacing the man with Y. We fixed the few-shot examples and their order for simplicity. Our main focus is not finding the best few-shot examples and ordering, but demonstrating the effect of prompt changes on bias evaluation. Other setups follow the zero-shot setting. Result Table <ref> shows the sensitivity of few-shot prompting across formats for each model. It shows that few-shot prompting can mitigate the sensitivity gap over format variations for some metrics on some LLMs. However, gaps remain, and few-shot prompting sometimes even widens the sensitivity gap. This indicates that few-shot prompting does not entirely mitigate the LLMs' sensitivity to format differences, which is partly consistent with prior work concerning task performance <cit.>. §.§ Debias-Prompt Setting Setting We investigated the effectiveness of debias-prompts across formats and models in the few-shot setting. We inserted the debias-prompt at the beginning of the prompt. For simplicity, we report only the maximum and minimum values across different debias-prompts, averaged over formats. Result Table <ref> shows the debiasing effect on each metric across models. Some debias-prompts improve both task performance and bias scores; conversely, some prompts make LLMs worse. This is consistent with prior work showing that performance can move either up or down relative to the vanilla value in the debias-prompt setting <cit.>. § ANALYSIS To investigate the sensitivity of LLMs in more detail, we analyzed our results from three perspectives: task instruction and prompt differences (<ref>), the correlation among metrics (<ref>), and instance-level sensitivity (<ref>). §.§ How Much Difference Does the Format Make? Having demonstrated that absolute metric values are sensitive to the three prompt variation factors, we ask whether format changes also affect the relative relationship of evaluation values between different LLMs. In real-world use cases, users aim to understand the relative performance of different LLMs. To address this, we calculate the format-level Pearson correlation coefficient between each metric of compared LLMs. The upper rows of Table <ref> show the gap in correlation coefficients, reporting maximum and minimum values in the zero-shot and few-shot settings. For disambiguated metrics such as Acc_d and Diff-bias_d, the maximum value is close to 1.0 and the gap is small. In contrast, for ambiguous metrics such as Acc_a and Diff-bias_a, the gap is larger than for the disambiguated ones. This indicates that format changes vary the ranking of LLMs more on ambiguous metrics than on disambiguated ones. Although this tendency is mitigated in the few-shot setting, correlation coefficients on ambiguous metrics still show larger gaps across formats. We also calculate the model-level correlation coefficient between each metric of compared formats (lower rows of Table <ref>). This indicates that which format elicits better performance depends on the model. Few-shot prompting does not mitigate the correlation gap on all metrics. Furthermore, we investigated the effectiveness of debias-prompts across different formats.
Table <ref> shows the maximum and minimum format-level correlation coefficients. The effectiveness of debias-prompts also depends highly on formats. For example, mpt-7b-instruct shows both positive and negative correlations across debias-prompts, indicating that the effectiveness of a debias-prompt can reverse with a format change, even though such a change does not alter the semantic meaning. These findings highlight the importance of prompt variation in bias evaluation for LLMs, as even minor differences in prompt format can have severe impacts. §.§ Are There Tradeoffs Between Task Performance and Bias Score? Having confirmed high sensitivity in both task performance and bias scores, an essential question arises: does a high-performance setting also exhibit less social bias? Although LLMs should ideally achieve high performance and low social bias, it is not yet well known whether bias decreases as performance increases, nor does this follow obviously from the definitions of the metrics. Therefore, we analyzed how task performance and bias scores correlate across models and formats. Figure <ref> shows the correlations between metrics. We see negative correlations between Acc_a and Acc_d. Between accuracy and bias scores, the disambiguated metrics correlate more strongly than the ambiguous ones, indicating that, in disambiguated contexts, the bias score increases as accuracy increases. These findings indicate that LLMs trade off ambiguity recognition (Acc_a) against task-solving ability given sufficient information (Acc_d), and that higher task performance (Acc) does not necessarily align with less bias (Diff-bias). This implies that evaluating multiple perspectives simultaneously, such as task performance and social bias, is important to reveal an LLM's ability. §.§ What Kind of Instances Are Sensitive for LLMs? Having demonstrated a high level of sensitivity in LLMs in bias evaluation, another question arises: do specific instances contribute to this sensitivity across different formats and models? It has been reported that instance uncertainty affects model predictions <cit.>, and instance uncertainty is also an essential aspect of bias evaluation dataset construction <cit.>. Therefore, investigating instance-level sensitivity is important. To address this, we divided the instances based on the LLMs' predictions into two groups: non-sensitive instances, with the same prediction across all formats for a given model, and sensitive instances, with at least one format yielding a different prediction. We also used the BBQ context and question types to analyze their ratios among sensitive instances. In this analysis, we focused on the zero-shot and few-shot settings. Table <ref> reports the sensitive ratio and, among sensitive instances, the ratios of ambiguous contexts and negative questions. While more than half of the instances are sensitive in the zero-shot setting, the few-shot setting reduces the number of sensitive instances for all models. This implies that the few-shot setting can enhance the robustness of LLMs to prompt format changes. As for the ambiguous and negative ratios among sensitive instances, the ratios are around 0.5 in both zero-shot and few-shot settings, except for the Llama2-13b variants, which achieve high consistency and show high ambiguous ratios. This indicates that ambiguity contributes more to sensitivity when LLMs can understand context differences. We conducted another analysis to confirm whether specific instances are sensitive across models.
Figure <ref> shows a histogram over instances of the number of LLMs for which each instance is sensitive, broken down by ambiguity. Specific instances are sensitive across many models in both zero-shot and few-shot settings, to varying degrees, and this tendency is salient in ambiguous contexts. Further analysis is required to assess the effect of ambiguity when evaluating social bias in LLMs. § RELATED WORK Our work investigates LLMs' sensitivity in bias evaluation, which connects to several lines of NLP research. Here, we discuss its relation to social bias in NLP, bias evaluation in downstream tasks, and the robustness of LLMs. Social Bias in NLP Various types of social biases in NLP models have been reported <cit.>. Their scope has expanded to include word vectors <cit.>, MLMs <cit.>, and now LLMs <cit.>. Moreover, various mitigation methods for social bias have been proposed in prior work, such as data augmentation <cit.>, fine-tuning <cit.>, decoding algorithms <cit.>, and prompting <cit.>. Our work evaluates the social bias of LLMs from the perspective of prompts. Bias Evaluation in Downstream Tasks. Existing studies investigate how to quantify social biases in downstream tasks such as text generation <cit.>, coreference resolution <cit.>, machine translation <cit.>, and question answering <cit.>. For question answering, <cit.> developed the UNQover dataset, which uses ambiguous questions to assess model biases related to gender, nationality, etc.; this use of ambiguity was adopted by later research <cit.>. <cit.> developed BBQ, which covers more bias categories and adds disambiguated contexts. Prior work applying downstream tasks to LLMs mainly focuses on the bias evaluation scores of LLMs; in contrast, our work focuses on LLMs' sensitivity in bias evaluation. Robustness of LLMs Our study is related to the robustness of LLMs <cit.>. For specific tasks such as MCQs, surface changes can affect task performance. These include choice order <cit.>, prompt format <cit.>, task description <cit.>, and the computation of choice selection <cit.>. In this work, we investigated the robustness of both the task performance and the social bias of LLMs simultaneously, from multiple perspectives. § CONCLUSION This study showed that LLMs are highly sensitive to prompt variations (task instruction and prompt format, few-shot examples, and debias-prompts) in both task performance and social bias. This sensitivity can cause fluctuations in the ranking of LLMs. We confirmed that LLMs exhibit prompt-induced tradeoffs between task performance and social bias. Our analysis indicated that instance ambiguity is a cause of sensitivity to prompts in advanced LLMs. Our findings shed light on the bias evaluation of LLMs in view of their sensitivity. We recommend using prompt variations, as in this study, to compare the effects of prompts on social bias in LLMs. In future work, we will expand our investigation to other tasks. § LIMITATIONS Our work has several limitations. First, our investigation requires many prompt variations in task prompt formatting, few-shot settings, and debias-prompts; it therefore incurs higher computational costs than a more limited evaluation setting. Second, we conducted bias evaluations using only English datasets. Social bias has also been reported in languages other than English, and datasets have been proposed to assess such bias in those languages. Third, we treated only the gender bias category, although other categories exist, such as religion, nationality, and disability.
Finally, we used only a QA dataset for bias evaluation, though other bias evaluation datasets exist, as mentioned in <ref>. Although our work has limitations, our evaluation perspectives can be generalized to other tasks. § ETHICS STATEMENT Our investigation shows the sensitivity of LLMs in bias evaluation. However, it is important to note that our study only shows that LLMs are vulnerable with respect to bias evaluation; even if an LLM's bias scores are low in our investigation, this does not mean that the LLM is free of bias. As mentioned in the limitations section, our work is limited in languages, bias categories, and downstream task types. Furthermore, our prompt variations are still limited compared to the variations possible in the real world; other prompt variations may therefore lead to worse generations for users. § MODEL DETAILS Our experiments use 12 open-source LLMs, which can be downloaded from the Hugging Face Hub. Table <ref> shows the checkpoint URL of each model. § FEW-SHOT SAMPLES Table <ref> shows the few-shot samples for our experiments. We sampled them from the BBQ dataset and converted words related to stereotype groups into anonymized words (X, Y). § VERIFICATION OF DEBIAS-PROMPT We confirmed the effectiveness of the debias-prompts (Table <ref>) using intrinsic bias evaluation datasets. We used CrowS-Pairs (CP) <cit.>[We used a revised version of the dataset (<https://gitlab.inria.fr/french-crows-pairs>) proposed by <cit.> that addressed the issues reported in <cit.>.] and StereoSet (SS) <cit.> to ensure a minimum level of effectiveness of the debias-prompts. Both datasets consist of sentence pairs containing a stereotypical sentence (s), e.g., "He addressed the shareholders as the CEO of the company", and an anti-stereotypical sentence (a), e.g., "She addressed the shareholders as the CEO of the company". We use log-likelihood to evaluate the bias of LLMs, defined by BiasScore_Intrinsic = (1/N) ∑_(s,a)∈ CP/SS 𝕀[P(s|θ,dp) ≥ P(a|θ,dp)], where 𝕀[x] returns 1 if x is true and 0 otherwise, θ denotes the parameters of the LLM, N is the number of sentence pairs, and dp denotes the debias-prompt. An ideal LLM achieves a BiasScore_Intrinsic of 50 (on a 0 to 100 scale), stereotypical LLMs tend toward 100, and anti-stereotypical LLMs toward 0. Table <ref> shows the effectiveness of our debias-prompts on the intrinsic tasks. On both intrinsic bias evaluation datasets (CP and SS), almost all debias-prompts mitigate the bias. § TASK INSTRUCTION AND PROMPT FORMAT VARIATION Table <ref> shows the four variations of task instructions and the two variations of enumerated symbols for choice options. § OTHER RESULTS Table <ref> shows the maximum and minimum value of each score in the zero-shot and few-shot settings.
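As a concrete illustration of the log-likelihood comparison behind BiasScore_Intrinsic, the following is a minimal sketch using the Hugging Face Transformers API. The checkpoint choice and helper names are ours, not from the paper's code, and scaling the mean loss by the sequence length is only an approximation of the total log-likelihood.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")        # any causal LM checkpoint
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

@torch.no_grad()
def log_likelihood(text):
    ids = tok(text, return_tensors="pt").input_ids
    # .loss is the mean token-level NLL; scale by length to approximate
    # the total log-likelihood of the sequence
    return -model(ids, labels=ids).loss.item() * ids.size(1)

def bias_score(pairs, debias_prompt=""):
    # pairs: [(stereotypical_sentence, anti_stereotypical_sentence), ...]
    hits = sum(log_likelihood(debias_prompt + " " + s)
               >= log_likelihood(debias_prompt + " " + a) for s, a in pairs)
    return 100.0 * hits / len(pairs)   # 50 is ideal; >50 leans stereotypical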
http://arxiv.org/abs/2407.01784v1
20240701202520
Analyzing Persuasive Strategies in Meme Texts: A Fusion of Language Models with Paraphrase Enrichment
[ "Kota Shamanth Ramanath Nayak", "Leila Kosseim" ]
cs.CL
[ "cs.CL", "cs.AI", "cs.LG" ]
Computational Linguistics at Concordia (CLaC) Laboratory Department of Computer Science and Software Engineering Concordia University, Montréal, Québec, Canada Analyzing Persuasive Strategies in Meme Texts: A Fusion of Language Models with Paraphrase Enrichment Kota Shamanth Ramanath Nayak Leila Kosseim July 8, 2024 ======================================================================================================= § ABSTRACT This paper describes our approach to hierarchical multi-label detection of persuasion techniques in meme texts. Our model, developed as part of the recent SemEval task, is based on fine-tuning individual language models (BERT, XLM-RoBERTa, and mBERT) and leveraging a mean-based ensemble model, in addition to dataset augmentation through paraphrase generation from ChatGPT. The scope of the study encompasses enhancing model performance through innovative training techniques and data augmentation strategies. The problem addressed is the effective identification and classification of multiple persuasive techniques in meme texts, a task complicated by the diversity and complexity of such content. The objective of the paper is to improve detection accuracy by refining model training methods and examining the impact of balanced versus unbalanced training datasets. The novelty of the results and discussion lies in the finding that training with paraphrases enhances model performance, yet a balanced training set proves more advantageous than a larger unbalanced one. Additionally, the analysis reveals the potential pitfalls of indiscriminate incorporation of paraphrases from diverse distributions, which can introduce substantial noise. Results with the SemEval 2024 data confirm these insights, demonstrating improved model efficacy with the proposed methods. § INTRODUCTION The recent SemEval-2024 shared task 4 <cit.> proposed three distinct subtasks dedicated to identifying persuasion techniques conveyed by memes. The primary aim was to unravel how memes, integral to disinformation campaigns, employ various techniques to shape user perspectives. Subtask 1 focused on the analysis of textual content alone and mandated the detection of 20 hierarchically structured persuasion techniques within the textual content of memes. On the other hand, subtasks 2 and 3 involved the analysis of multimodal memes, considering both textual and visual elements. This paper describes the approach we used for our participation in subtask 1 and our further analysis of the dataset after the shared task. The task provided a training dataset in English; in addition to being tested on English, our model's zero-shot performance was also evaluated on three surprise languages. The goal during the testing phase was not only to explore our model's ability to perform well in English but also to test its ability to generalize to other languages without explicit training. Inspired by successful approaches in multilabel text classification <cit.>, our strategy involved fine-tuning three language models, namely BERT, XLM-RoBERTa, and mBERT, followed by ensemble modeling using mean aggregation. To enhance performance, we used data augmentation through paraphrasing and adjusted the classification thresholds for each persuasion technique based on class-wise metrics optimised using the validation set. During testing, a zero-shot approach based on machine translation was implemented to classify instances in the surprise languages.
The official results of our system <cit.> demonstrated clear performance advantages over the baseline in all languages except Arabic, where the increase in performance was not significant. Results also show that paraphrasing enhances model performance, but that a balanced training set is more beneficial than a larger unbalanced one. Furthermore, the analysis highlights the potential downside of indiscriminately incorporating paraphrases from diverse distributions, as this can introduce notable noise into the system. This paper is organized as follows: Section <ref> reviews relevant previous work in the field of multi-label classification. Section <ref> provides an overview of the task and the data utilized. Section <ref> presents an overview of our classification pipeline along with the techniques used for data augmentation, while Section <ref> describes experimental details. Finally, the analysis of our model's results is presented in Section <ref>, while Section <ref> provides conclusions and outlines future work. § PREVIOUS WORK Several studies have explored multi-label classification of textual content. In 2019, <cit.> showed that Bi-GRUs with label-wise attention led to good performance, and that the inclusion of domain-specific Word2vec and context-sensitive ELMo embeddings further boosted performance on the EURLEX57K dataset, which contains 57k English EU legislative documents. <cit.> introduced five innovative contrastive losses for multi-label text classification using the dataset from the SemEval 2018 Multi-label Emotion Classification (MEC) task <cit.>, which contained 8,640 instances in English, Arabic, and Spanish. All five contrastive learning methods notably enhanced the performance of the previous top-performing model, SpanEmo <cit.>, on the MEC task. Among these approaches, the Jaccard Similarity Probability Contrastive Loss demonstrated the highest effectiveness on the English dataset, achieving F_Macro and F_Micro scores of 57.68 and 71.01, respectively. In hierarchical multi-label classification (HMC), samples are assigned to one or more class labels within a structured hierarchy. Approaches to HMC can be divided into local and global methods. Local methods use multiple classifiers, often overlooking the overall structure of the hierarchy. For example, <cit.> trained a multi-layer perceptron incrementally for each hierarchy level, using predictions from one level as inputs for the next. In contrast, global methods employ a single model to address all classes and implement various strategies to capture the hierarchical relationships between labels. One such approach <cit.> modeled the hierarchy as a directed graph and introduced hierarchy-aware structure encoders, using a bidirectional TreeLSTM and a hierarchy-GCN to extract and aggregate label structural information in an end-to-end fashion. <cit.> redefined hierarchical text classification (HTC) as a sequence generation task and developed a sequence-to-tree (Seq2Tree) framework to model the hierarchical label structure. Additionally, they created a constrained decoding strategy with a dynamic vocabulary to ensure label consistency. In the context of SemEval 2020 Task 11 <cit.>, two subtasks were introduced: span identification of propagandistic text fragments, and multi-label technique classification (TC) of propagandistic fragments, using a corpus of 7k instances from the news domain.
The subsequent SemEval 2021 Task 6 <cit.> focused on the identification of propagandistic techniques from multimodal data, including text and images from memes. The top-performing teams in 2020 and 2021, ApplicaAI <cit.> and MinD <cit.> respectively, leveraged pre-trained language models and ensemble techniques to achieve top scores at the shared tasks. This year's shared task built upon the 2021 task but included hierarchical metrics as well as a multilingual setting and a new training dataset of 7k instances in English. Inspired by the top models of <cit.> and <cit.>, our methodology is also based on an ensemble of pre-trained language models, but leverages paraphrase generation to further improve performance. § LABELLING PERSUASION TECHNIQUES The goal of our model was to categorize the textual content of memes into one or several persuasion techniques. For example, given the training instance shown in Figure <ref>, the model needs to learn that the text "Don't expect a broken government to fix itself" should be labelled with the three techniques provided in the labels field. Persuasion Techniques: The SemEval organizers provided an inventory of 20 persuasion techniques to be used as labels (e.g., Loaded Language, Slogans, Name calling/Labelling), structured hierarchically as shown in Figure <ref>. This rendered the task a hierarchical multi-label classification problem, which was therefore evaluated using hierarchical precision, recall, and F measures. Datasets: The dataset provided contained memes collected from online public groups discussing a variety of topics such as politics, vaccines, COVID-19, gender equality, and the Russo-Ukrainian War. For our task, only the text extracted from these memes was used. The training (7k samples), validation (500 samples), and development (1k samples) sets included only English texts. Figure <ref> shows the distribution of instances for each persuasion technique in the training set. As the figure shows, some techniques, such as Loaded Language and Smears, had a large number of instances in the training set (1750 and 1990, respectively), while others, like Straw Man and Red Herring, were severely underrepresented (62 and 59 instances, respectively). Figure <ref> shows the distribution of labels per instance. As the figure shows, most of the instances (47%) were labelled with multiple techniques, 35% were labelled with only 1 technique, and 18% had no labels at all. Given the above English training set and hierarchical persuasion techniques, the goal of our model was to identify 0 to n techniques for each textual instance in English and in 3 surprise languages. § PROPOSED APPROACH Figure <ref> shows an overview of the classification pipeline we employed for this task. As shown in Figure <ref>, our methodology is based on fine-tuning three distinct pre-trained language models, BERT <cit.>, XLM-RoBERTa <cit.>, and mBERT <cit.>, on augmented datasets. §.§ Multi-label Classification As Figure <ref> shows, the data is first preprocessed using standard tokenization. We then fine-tuned the three distinct models, BERT, XLM-RoBERTa, and mBERT, each of which returns a probability distribution over the 20 techniques. The three model predictions were then pooled via averaging. Despite the hierarchical organization of the persuasion techniques, we opted to predict solely the technique names (leaf nodes in Figure <ref>) and not their ancestor nodes.
However, to handle the multi-label classification, we implemented thresholding to determine which techniques score high enough to be part of the output label set. We experimented with custom per-technique threshold values ranging from 0.01 to 0.7 and picked the optimal value for each class based on the validation set. These thresholds were applied to the scores obtained after passing the logits of each class through a sigmoid function. To handle the three surprise languages during the official testing phase, our system, trained only on English, automatically translated the surprise-language inputs to English for zero-shot prediction. This was inspired by the approach of <cit.>. §.§ Data Augmentation To mitigate the lack of data, we took advantage of two data augmentation strategies: an external dataset and automatically generated paraphrases. §.§.§ External Dataset ( dataset): The Technique Classification (TC) subtask of SemEval 2020 Task 11 <cit.> provided a dataset of 7k instances from the news domain, annotated with guidelines similar to this year's. In contrast to the 2020 task, this year's dataset covered a different domain and used a revised set of persuasion techniques compared to the 2020 inventory. Indeed, in the 2020 TC dataset, a few techniques had been merged into a single category due to lack of data, resulting in a list of 14 techniques, whereas this year an expanded inventory of 20 techniques was employed. To ensure consistency between the two sets, we preprocessed the 2020 TC dataset by splitting techniques that had previously been merged. For example, we singled out Bandwagon and Reductio ad Hitlerum, which had been merged into a single technique in the SemEval 2020 TC dataset. We considered two approaches to leverage the modified 2020 TC dataset. The first option involved pre-training models on this dataset, followed by fine-tuning on the 2024 training data, an approach implemented by <cit.>. The other approach entailed combining both datasets and fine-tuning models on the combined dataset. We chose the latter method because the two datasets cover different genres, and joint training would likely enable the model to better adapt to and grasp nuanced linguistic patterns across both. For easy reference in the rest of the paper, we call the combined dataset . Figure <ref>(a) (gray + pink) shows the resulting distribution of the persuasion techniques in this combined dataset. §.§.§ Paraphrasing ( datasets) Despite having almost doubled each class with the use of the 2020 TC dataset, some classes were still severely underrepresented; see Figure <ref>(a) (gray + pink). To address this, we took advantage of an LLM to generate paraphrases for each training instance, then labeled these paraphrases with the same set of labels as the original instance. Our intuition was twofold. First, generating paraphrases would expose the model to a more extensive set of samples for each class, potentially improving its ability to discern subtle nuances within the data. Second, paraphrasing sentences could unveil hidden semantics, providing the model with a tool to identify propaganda techniques reliant on nuanced linguistic choices or phrasing. To generate paraphrases, we leveraged ChatGPT-3.5 turbo, setting the temperature to 0.7. This value aimed to introduce diversity in the paraphrases while maintaining relevance to the original instances.
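As an illustration of this augmentation step, the following is a minimal sketch using the OpenAI Python SDK (v0-style ChatCompletion interface); the prompt wording and function name are ours, not the exact ones used by the authors.

import openai  # v0-style SDK; openai.api_key must be set beforehand

def paraphrase(text, n=3, temperature=0.7):
    # Generate n paraphrases of one training instance; each paraphrase
    # inherits the persuasion-technique labels of the original instance.
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        temperature=temperature,
        n=n,
        messages=[{"role": "user",
                   "content": f"Paraphrase the following meme text: {text}"}],
    )
    return [choice.message.content.strip() for choice in resp.choices]

# augmented = [(p, labels) for text, labels in train_set
#              for p in paraphrase(text)]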
Several datasets were created using this method: and : For each instance in , we generated n paraphrases. We experimented with n=1 and n=3, leading to datasets of 28k and 52k instances, which we call and , respectively. The overall hierarchical F-score on the given validation set (500 instances) increased when training with these datasets, and n=3 seemed to perform better than n=1. However, a per-class analysis showed that not all classes benefited from the increase in support. For example, the persuasion technique Bandwagon increased its F1 from 0.17 to 0.29, whereas Repetition decreased its F1 from 0.56 to 0.31. We therefore identified the classes with an improvement in F-score greater than 0.03 when using the dataset compared to the dataset. These 8 techniques, along with their increases in F-score, are shown in Figure <ref>. This set of techniques formed the basis for our subsequent strategy. : Since only 8 techniques seemed to benefit from the use of paraphrases, we created a new augmented dataset by increasing the number of paraphrases only for these techniques. Specifically, let 𝐁 be the set of 8 techniques that benefited from paraphrases (see Figure <ref>). For every data instance d labeled with techniques 𝐓={t_1, t_2,… t_n} (where n≤20) such that some t_i ∈ 𝐁, we generated 10 paraphrases of d and labeled them with all techniques from 𝐓∩𝐁. This newly created dataset, called , contained 54k instances. Figure <ref>(a) shows the distribution of instances for each technique in this dataset (gray + pink + violet), in comparison with the original training set and the dataset. As Figure <ref> shows, all of these datasets are severely imbalanced. Our next dataset therefore tried to address this issue. : Our last dataset used our paraphrase generation strategy to address the dataset imbalance. We rectified the underrepresented classes in the initial training dataset by augmenting them with paraphrases. The three most frequent techniques (Smears, Name-calling/Labelling, and Loaded Language) had 1990, 1750, and 1518 samples, respectively. We thus aimed at reaching a similar number of instances for the other techniques. We balanced the dataset by generating batches of 5 paraphrases for each of the other techniques until each reached around 1,500 instances. This newly created dataset, called , contained 49k instances (see Figure <ref>(b)). Table <ref> shows the validation results with the optimal threshold for each class, using the official SemEval scorer <cit.>. As Table <ref> shows, using the original dataset (7k instances) achieved a hierarchical F1 of at most 0.49 on the development set, whereas all augmented training sets led to higher performance. The best model on both the validation and the development set was the ensemble trained on the dataset, which reached a hierarchical F1 of 0.59 and 0.61, respectively. Surprisingly, the dataset containing 10 paraphrases for the benefiting techniques did not perform better than using only 3 paraphrases for all techniques. This suggests that the excessive inclusion of paraphrases from a different distribution (memes versus news) may have introduced too much noise into the data. § EXPERIMENTAL SETUP §.§ System Pipeline and Training Details The system pipeline was implemented in PyTorch. The pre-trained models BERT [], XLM-RoBERTa [], and mBERT [] and their tokenizers were sourced from Hugging Face. All models were trained for 10 epochs using the Adam optimizer with a learning rate of 2e-5. Batch sizes varied, with BERT using 128, and XLM-RoBERTa and mBERT using 64.
A final feedforward layer with 20 logits (equal to the number of persuasion techniques) was added to each model. Binary cross-entropy with logits served as the loss function, with one-hot encoding applied to the true labels. For prediction, a sigmoid activation function was applied to the logits, followed by thresholding. The ensemble model used an unweighted average of the predictions of the three individual models. The ChatGPT-3.5 turbo[<https://platform.openai.com/docs/models/gpt-3-5-turbo>] API with a temperature of 0.7 was used for paraphrase generation. During testing, the surprise languages were translated into English using the deep-translator API[<https://pypi.org/project/deep-translator/>]. § RESULTS AND ANALYSIS For our official submission to SemEval, the dataset had not been created yet; hence our official results are based on the ensemble model trained on the union of and the development set (1k samples), for a total of 53k samples. The three surprise languages were Bulgarian, North Macedonian, and Arabic. The test set contained 1500 samples for English, 426 samples for Bulgarian, 259 samples for North Macedonian, and 100 samples for Arabic. The official results of our system are shown in Figure <ref>, along with a baseline score that assigns the most frequent persuasion technique to all instances, and the score obtained by the best performing system for each language <cit.>. As Figure <ref> shows, although our ensemble model did not reach the top performance for English (0.57 versus 0.75), it performed better than the baseline in all languages except Arabic, where the improvement was not significant. Using the same testing protocol, we reproduced the results using the model trained on the balanced training dataset (). The results displayed in Figure <ref> indicate an improvement in score on the English test set (0.62 versus 0.57). This again confirms the importance of a balanced dataset and of paraphrases drawn from the same distribution as the original texts. Indeed, although is larger than (52k versus 49k), it is not balanced and contains paraphrases of instances from different genres (memes and news). Surprisingly, however, the performance enhancement when using is not observed for the zero-shot classification of the surprise languages, whose performance dropped significantly. For these languages, a larger training set, even with noisy out-of-distribution instances, leads to better results, possibly due to the noise introduced by the automatic translation itself. Compared to the other approaches at the 2024 edition of SemEval Task 4, the top performing team overall, OtterlyObsessedWithSemantics <cit.>, designed a specialized classification head to enhance a Large Language Model. They organized the structure across several connected layers, enabling them to build upon earlier decisions in later, more detailed layers. They optimized the model's performance by systematically exploring different hyperparameters through grid search. In addition, similarly to our approach, they translated all the surprise-language datasets into English during the testing phase. <cit.> developed a system using Chain-of-Thought-based data augmentation methods, in-domain pre-training, and an ensemble strategy that combined the strengths of both RoBERTa and DeBERTa models. § CONCLUSION AND FUTURE WORK This paper described our approach to hierarchical multi-label detection of persuasion techniques in meme texts.
We used an ensemble of three fine-tuned language models and incorporated data augmentation through paraphrasing with ChatGPT. We tested our approach through SemEval 2024 Task 4, subtask 1 <cit.>. During testing, our system outperformed the baseline in all languages. Analysis of the results shows the importance of dataset balancing and paraphrasing techniques in enhancing model performance. Despite having a smaller number of instances, the balanced dataset consistently outperforms its unbalanced counterparts, demonstrating the efficacy of balancing methods. Moreover, data augmentation improves model performance, as indicated by the under-performance of models trained on the original dataset (7k instances). Additionally, the results underscore the potential drawbacks of including paraphrases from diverse distributions, which may introduce significant noise into the system, potentially compromising overall effectiveness. This study prompts further inquiry into the specific drivers of the performance improvement, whether dataset balancing or the inclusion of external data. Although our zero-shot approach exhibits limitations, it underscores the positive correlation between data volume and model performance, as illustrated by the superior performance of models trained on larger paraphrased datasets, such as with 56k instances compared to with 49k instances. Moving forward, it would be interesting to measure the influence of the quality and similarity of the paraphrases on performance. Moreover, incorporating hierarchical predictions, possibly at a second-level node, could improve scores further. Exploring the utilization of larger multilingual models alongside language-specific datasets and experimenting with various ensemble methods could be fruitful. Finally, considering the integration of adversarial training or self-supervised learning techniques might offer valuable avenues for improvement. § ACKNOWLEDGEMENTS The authors would like to thank the organisers of the SemEval shared task. This work was financially supported by the Natural Sciences and Engineering Research Council of Canada (NSERC). § AUTHORS Kota Shamanth Ramanath Nayak received a B.Tech from the Manipal Institute of Technology, India. He is currently pursuing a Master's in Computer Science at Concordia University, Canada. His research interests include Natural Language Processing, Discourse Analysis, and Large Language Models. Leila Kosseim obtained her PhD from the University of Montreal in 1995 on the topic of Natural Language Generation. She is currently a professor in the Computer Science & Software Engineering (CSSE) Department at Concordia University in Montreal. Her research interests include Natural Language Processing, Artificial Intelligence, and Text Mining.
http://arxiv.org/abs/2407.03197v1
20240703152910
DyFADet: Dynamic Feature Aggregation for Temporal Action Detection
[ "Le Yang", "Ziwei Zheng", "Yizeng Han", "Hao Cheng", "Shiji Song", "Gao Huang", "Fan Li" ]
cs.CV
[ "cs.CV" ]
DyFADet for TAD. L. Yang, Z. Zheng et al. Xi'an Jiaotong University {yangle15, lifan}@xjtu.edu.cn * Equal contribution. Corresponding author. ziwei.zheng@stu.xjtu.edu.cn Alibaba Group yizeng38@gmail.com HKUST(GZ) hcheng046@connect.hkust-gz.edu.cn Tsinghua University {shijis, gaohuang}@tsinghua.edu.cn DyFADet: Dynamic Feature Aggregation for Temporal Action Detection Le Yang1^* 0000-0001-8379-4915 Ziwei Zheng1^* 0009-0000-4896-3293 Yizeng Han2 0000-0001-5706-8784 Hao Cheng3 Shiji Song4 0000-0001-7361-9283 Gao Huang4 0000-0002-7251-0988 Fan Li1 July 8, 2024 =========================================================================================================================================================================================== § ABSTRACT Recently proposed neural network-based Temporal Action Detection (TAD) models are inherently limited in extracting discriminative representations and in modeling action instances of various lengths from complex scenes with shared-weight detection heads. Inspired by the successes of dynamic neural networks, in this paper we build a novel dynamic feature aggregation (DFA) module that can simultaneously adapt kernel weights and receptive fields at different timestamps. Based on DFA, the proposed dynamic encoder layer aggregates the temporal features within the action time ranges and guarantees the discriminability of the extracted representations. Moreover, DFA helps to develop a Dynamic TAD head (DyHead), which adaptively aggregates multi-scale features with adjusted parameters and learned receptive fields to better detect action instances with diverse temporal ranges in videos. With the proposed encoder layer and DyHead, the new dynamic TAD model, DyFADet, achieves promising performance on a series of challenging TAD benchmarks, including HACS-Segment, THUMOS14, ActivityNet-1.3, Epic-Kitchen 100, Ego4D-Moment Queries V1.0, and FineAction. Code is released at <https://github.com/yangle15/DyFADet-pytorch>. § INTRODUCTION As a challenging and essential task within the field of video understanding, Temporal Action Detection (TAD) has received widespread attention in recent years. The target of TAD is to simultaneously recognize action categories and localize action temporal boundaries in an untrimmed video. Various methods have been developed to address this task, which can mainly be divided into two categories: 1) Two-stage methods (such as <cit.>), which first learn to generate class-agnostic action proposals and then conduct classification and boundary refinement at the proposal level. 2) One-stage methods <cit.>, which classify each frame and locate temporal boundaries in an end-to-end manner, achieving better performance and currently forming the popular line of TAD research. However, accurately detecting an action in an untrimmed video remains a challenging task. On the one hand, the spatial redundancy in adjacent frames, along with a static feature extraction strategy, can result in poor discriminability of the learned representations, hindering detection performance <cit.>. On the other hand, the head inadaptation issue can arise in common TAD designs <cit.>, where a static shared-weight head is used to detect action instances with diverse temporal lengths, leading to sub-optimal performance. Therefore, it is necessary to find a solution that can simultaneously address these two key issues in modern TAD models.
In this paper, inspired by the recent success of dynamic neural networks <cit.>, we develop a novel Dynamic Feature Aggregation (DFA) module for TAD. As shown in Fig. <ref>, in the proposed DFA both the kernel shape and the network parameters can adapt to the input when generating new features, which significantly differs from the static feature learning procedure of most existing TAD models. Such a learning mechanism enables a dynamic feature aggregation procedure, which has the ability to increase the discriminability of the learned representations, and also adapts the feature learning procedure in the detection heads to each level of the feature pyramid during detection. Therefore, based on the DFA, we build a dynamic encoder layer (DynE) and a dynamic head (DyHead) to address the two issues in modern TAD models. On the one hand, the dynamic feature learning of DynE can facilitate gathering the features of the target action together and increase the differences between the features of the action and its boundaries, resolving the first issue in Fig. <ref>. On the other hand, the DyHead dynamically adjusts the detector parameters when applied at different pyramid levels, which correspond to detecting actions with different time ranges. By effectively addressing the two issues in Fig. <ref>, the proposed DyFADet with DynE and DyHead can achieve accurate detection results in TAD tasks. We evaluate the proposed DyFADet on a series of TAD datasets including HACS-Segment <cit.>, THUMOS14 <cit.>, ActivityNet-1.3 <cit.>, Epic-Kitchen 100 <cit.>, Ego4D Moment Queries V1.0 <cit.>, and FineAction <cit.>. The experimental results show the effectiveness of the proposed DyFADet in TAD tasks. § RELATED WORK §.§.§ Temporal action detection is a challenging video understanding task, which involves localizing and classifying the actions in an untrimmed video. Conventional two-stage approaches <cit.> usually consist of two steps: proposal generation and classification. Nevertheless, these may suffer from high complexity, and end-to-end training can be infeasible. A recent trend is designing one-stage frameworks and training the model in an end-to-end fashion. Some works <cit.> propose to detect actions with DETR-like decoders <cit.>, and another line of work <cit.> builds a multi-scale feature pyramid followed by a detection head. In this work, we mainly follow the mainstream encoder-pyramid-head framework. Concretely, we build our model based on the frameworks of the highly competitive methods <cit.>, and boost the final detection performance by introducing the dynamic mechanism to simultaneously solve the less-discriminant feature and head inadaptation issues in TAD models. §.§.§ Dynamic neural networks have attracted great research interest in recent years due to their favorable efficiency and representation power <cit.>. Unlike conventional models that process different inputs with the same computation, dynamic networks can adapt their architectures and parameters conditioned on input samples <cit.> or on different spatial <cit.> / temporal <cit.> positions. Data-dependent parameters have shown effectiveness in increasing representation power with minor computational overhead <cit.>. Existing approaches can generally be divided into two groups: one adopts mechanisms to dynamically re-weight parameter values <cit.>, including modern self-attention operators, which can be interpreted as (depth-wise) dynamic convolutions <cit.>; however, the static temporal receptive fields of these methods can lead to the less-discriminant feature issue, as stated in <cit.>.
The other group develops deformable convolutions to achieve dynamic receptive fields <cit.>, which have been effectively utilized in different video understanding tasks <cit.>. However, the kernel weights of these deformable convolutions are static. Compared to the previous methods, our DFA simultaneously adapts the convolution weights and the temporal receptive fields in a data-dependent manner, leading to more effective and flexible modeling of temporal information in the TAD task. § METHOD In this section, we first introduce the proposed DFA and then develop the dynamic-feature-learning-based TAD model, DyFADet, for TAD tasks. §.§ DFA §.§.§ DFA based on feature shifting. The DFA needs to effectively adapt the kernel weights and the receptive fields (including the number of sampling positions and the shape of kernels) based on the inputs to improve the feature extraction ability, which is difficult to achieve simultaneously. However, such a procedure can be realized if we separate a normal convolution into a feature-shifting module and a point-wise convolution <cit.> (shown in Fig. <ref>): a convolution with kernel size k is equivalent to first shifting the input features according to the k kernel positions and then processing them with a point-wise convolution (Conv 1). Motivated by this, we can learn an input-based mask, which is used to zero out and re-weight the shifted features. Using a Conv 1 to process the masked shifted features is then equivalent to the fully dynamic convolution shown in Fig. <ref> (b). A light-weight module, Ψ(·), and a non-linear activation, ϕ(·), can be used to generate the mask, where the activation function should be one whose output is always 0 for any negative input (such as ReLU <cit.> or a general-restricted tanh function <cit.>). Formally, suppose we have a standard convolution with kernel 𝒦∈ℝ^C_out× C_in× k, where k is the kernel size and C_out, C_in are the numbers of output and input channels. Given an input F ∈ℝ^C_in× T (T is the length of the temporal dimension and f_t ∈ℝ^C_in is the tensor at time t), we can shift the input by F̂^(k) = Shift(F, k) = [F̂^0; ...; F̂^k-1] ∈ℝ^kC_in× T, with f̂^s_t = f_t-⌊ k/2⌋+s, s = 0, 1, ..., k-1, t = 1, 2, ..., T, where F̂^s ∈ℝ^C_in× T and the empty positions of the shifted features are padded with zeros. The weighted mask can be calculated as m = ϕ( Ψ(F) ), m ∈ℝ^C_m× T, where C_m and Ψ can be designed in different ways to achieve different dynamic properties. By repeatedly upsampling m to the dimension of F̂^(k) (denoted by ↑(·)), we have F̃ = ↑(m) ⊙ F̂^(k) = M ⊙ F̂^(k), where we use M to represent ↑(m) for simplicity, and ⊙ denotes element-wise multiplication. The final output features can be written as F' = DFA(F) = ∑_s=0^k-1 𝒦_s ( M_[sC_in+1 : (s+1)C_in] ⊙ F̂^s ), where 𝒦_s ∈ℝ^C_out× C_in, M_[sC_in+1 : (s+1)C_in] ∈ℝ^C_in× T denotes the mask tensor with channel indices from sC_in+1 to (s+1)C_in, and F' ∈ℝ^C_out× T is the output. §.§.§ Different formations of DFA. Using different Ψ(·) results in different formations of DFA. By implementing Ψ(·) as a convolution, the DFA can be built as a convolution-based module (DFA_Conv). Moreover, changing C_m leads to different dynamic properties. Take DFA_Conv as an example. K-formation: using C_m = k, the ↑(·) repeats the mask in an interleaved manner along the channel dimension C_in times, which makes the DFA share the same dynamic receptive field across channels. C-formation: for C_m = C_in, the ↑(·) repeats the mask k times along the channel dimension, which makes the DFA a temporally dynamic channel-pruning convolution.
Also, using C_m = kC_in (CK-formation), the DFA adapts the receptive fields at different timestamps and channels. Moreover, inspired by the success of Transformer-based architectures, we further implement the DFA in an attention-based formation (DFA_Att), as shown in Fig. <ref> (b). The generated mask zeros out the attention matrix at unimportant timestamps, and the masked attention is then used to re-weight the shifted features. §.§.§ Differences to existing works. If we restrict M ∈{0,1} to have the same number of masked positions, then DFA equals a 1-d deformable convolution <cit.>, which adapts the shape of the kernel at different timestamps. If we remove ϕ(·), the DFA equals the dynamic convolution in <cit.>, which only adapts the kernel weights based on the inputs. Moreover, TD3DConv in <cit.> first uses temporal attention weights to adjust the features and then uses a dynamic convolution for feature learning, where these weights (W) are input-dependent yet temporally static when generating new features; in contrast, these weights are fully dynamic in our DFA. The DFA provides an effective way to perform dynamic feature aggregation, which addresses the aforementioned issues in modern TAD models. Therefore, based on it, we further develop two important components, the DynE layer and the DyHead, for the proposed TAD model, DyFADet. §.§ DyFADet §.§.§ Overview Based on the dynamic property of DFA, we can build a TAD model that effectively extracts discriminative representations in the encoder and adapts the detection heads to action instances with different ranges. Following the common design in <cit.>, the proposed DyFADet consists of a video feature backbone, an encoder, and two detection heads for action classification and localization. Concretely, we first extract the video features using a pre-trained backbone. The extracted representations are then sent to the encoder to generate the feature pyramid, where the features of the previous level are down-sampled by the DynE layer with a scale of 2 to obtain F^l, l = 1, ..., L (L is the total number of levels). These features are then used by the DyHeads for action detection. The architecture of DyFADet is shown in Fig. <ref> (a), where the DynE layer and the DyHead are the two components built by applying the dynamic feature learning strategy. §.§.§ Feature encoder with DynE. As illustrated in Fig. <ref> (b), the DynE layer is built by substituting the self-attention (SA) module with the proposed DFA module, following the macro-architecture of transformer layers <cit.>. The module has two branches: an instance-dynamic branch, based on DFA_Conv with a kernel size of 1, generates a global temporal weighted mask to help aggregate global action information. For the other branch, we propose to use DFA_Conv with convolutions of different kernel sizes to better generate the weighted mask tensors. To further improve efficiency during feature learning, all convolution modules are implemented as depth-wise convolutions (D Conv) of the corresponding dimensionality. The two branches can be written as F_ins = DFA_Conv_1( Squeeze ( LN( DS (F) ) ) ), F_k = DFA_Conv_k,w( LN(DS (F)) ), where DS is down-sampling achieved by max-pooling with a scale of 2, LN is Layer Normalization <cit.>, and Squeeze is achieved by average pooling along the channel dimension. w is the factor expanding the window size of the convolution to w*(k+1), which enables the module to learn features with long-term information. The output features of the module can then be represented by F_dyn = F_w + F_ins + DS (F).
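To make the shift-mask-convolve procedure concrete, the following is a minimal PyTorch sketch of the K-formation DFA_Conv described above, using a convolution as Ψ and ReLU as ϕ. The class and variable names are ours, and the released implementation may differ.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DFAConv1d(nn.Module):
    # K-formation sketch (odd k assumed): one mask value per kernel position
    # (C_m = k), shared across channels, applied to temporally shifted
    # features and followed by a point-wise convolution (Conv 1).
    def __init__(self, c_in, c_out, k=3):
        super().__init__()
        self.k = k
        self.mask_gen = nn.Conv1d(c_in, k, kernel_size=k, padding=k // 2)  # Psi
        self.point = nn.Conv1d(k * c_in, c_out, kernel_size=1)             # kernel K

    def forward(self, x):                               # x: (B, C_in, T)
        b, c, t = x.shape
        pad = self.k // 2
        xp = F.pad(x, (pad, pad))                       # zero-pad empty positions
        # stack k shifted copies: slice s holds f_{t - floor(k/2) + s}
        shifted = torch.cat([xp[:, :, s:s + t] for s in range(self.k)], dim=1)
        mask = F.relu(self.mask_gen(x))                 # phi(Psi(x)); zeros prune positions
        mask = mask.repeat_interleave(c, dim=1)         # upsample to (B, k*C_in, T)
        return self.point(shifted * mask)               # dynamic weights + receptive fields

# y = DFAConv1d(256, 256, k=3)(torch.randn(2, 256, 96))   # -> shape (2, 256, 96)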
Overall, in the feature encoder, each DynE layer down-samples the features with a scale of 2 to generate representations with different temporal resolutions. The dynamic feature selection ability of the DynE layers preserves the discriminative information of the obtained representations and alleviates the less-discriminant feature problem, which leads to better TAD performance. §.§.§ DyHead. The shared-weight static detection heads in common TAD models can suffer from the head inadaptation issue, which means that the optimal weights for detecting long-range and short-range action instances can differ, resulting in sub-optimal detection performance. Even when implemented with cross-level feature fusion, such heads show only limited improvement. Intuitively, fusing features from a higher scale helps the head to explore more global long-term information, while exploring information from a lower scale enables the head to find more boundary details. We infer that such multi-scale fusion can benefit detection performance only if these intrinsic representations are properly selected from the adjacent scales. This motivates us to build a novel DyHead, which can dynamically adjust the shared parameters based on the inputs and selectively fuse the cross-level features for better performance. The architecture of a DyHead is illustrated in Fig. <ref> (a). Both the features from the corresponding level and those from the adjacent levels of the pyramid are sent to the DyHead. By implementing the dynamic feature learning mechanism with the proposed DFA, the head parameters can adjust based on the inputs, and important representations are selectively fused. Specifically, supposing the depth of the head is D, the output features F_d+1^l at the l-th level are calculated as the accumulation of F_d^l-1, F_d^l, and F_d^l+1, which are processed by three different paths (shown in Fig. <ref> (c)). The down path and up path are built based on DFA_Att with an additional LN and a down-sampling (DS) or up-sampling (US) interpolation module with a scale of 2. The features are first fused by F̃_d^l = γ_d·( DS( DFA_Att(F_d^l-1) ) + US( DFA_Att(F_d^l+1) ) ) + α_d·F_d^l, where γ_d and α_d are two learnable factors. The resultant feature is then processed by the depth path as F_d+1^l = DFA_Att( F̃_d^l ). With the multi-scale dynamic feature fusion procedure of the proposed DyHead, the final features at depth D, namely F_D^l, l = 1, ..., L, are used to detect the action instances with different temporal ranges. §.§.§ Classification and regression modules. The final classification (Cls) and regression (Reg) modules are designed to process the final features across all levels. The Cls module is realized using a 1D convolution with a Sigmoid function to predict the probability of each action category at each timestamp. The Reg module is implemented using a 1D convolution with a ReLU to estimate the distances { d_m^s, d_m^e} from a timestamp t to the action start and end { t_m^s, t_m^e} of the m-th action instance, which can be obtained by t_m^s = t-d_m^s and t_m^e = t+d_m^e. §.§ Training and Inference §.§.§ Training. Following <cit.>, the center sampling strategy <cit.> is used during training, which means that the instants around the center of an action instance are labeled as positive. The loss function of the proposed DyFADet follows the simple design in <cit.>, which has two terms: (1) ℒ_cls is a focal loss <cit.> for classification; (2) ℒ_reg is a DIoU loss <cit.> for distance regression.
The loss function is then defined as ℒ = ∑_t,l(ℒ_cls + λ_reg 𝕀_ct ℒ_reg) / T_pos, where T_pos is the number of positive timestamps and 𝕀_ct is an indicator function denoting whether a time step t is within the sampling center of an action instance. §.§.§ Inference. During inference, an instant is kept if its classification score is higher than the detection threshold. Soft-NMS <cit.> is then applied to remove duplicate detected instances. § EXPERIMENTS §.§ Experimental settings §.§.§ Datasets. Six TAD datasets, including HACS-Segment <cit.>, THUMOS14 <cit.>, ActivityNet-1.3 <cit.>, Epic-Kitchen 100 <cit.>, Ego4D-Moment Queries v1.0 (Ego4D-MQ1.0) <cit.>, and FineAction <cit.>, are used in our experiments. ActivityNet-1.3 and HACS are two large-scale datasets with 200 action classes, containing 10,024 and 37,613 videos for training and 4,926 and 5,981 videos for testing, respectively. THUMOS14 consists of 20 sports action classes and contains 200 and 213 untrimmed videos with 3,007 and 3,358 action instances in the training and test sets, respectively. Epic-Kitchen 100 and Ego4D-MQ1.0 are two first-person vision datasets. Epic-Kitchen 100 has two sub-tasks, noun and verb localization, containing 495 and 138 videos with 67,217 and 9,668 action instances for training and testing, respectively. Ego4D-MQ1.0 has 2,488 video clips and 22.2K action instances from 110 pre-defined action categories, which are densely labeled. FineAction contains 103K temporal instances of 106 fine-grained action categories, annotated in 17K untrimmed videos. §.§.§ Evaluation metric and experimental implementation. The standard mean average precision (mAP) at different temporal intersection over union (tIoU) thresholds is reported as the evaluation metric in the experiments. We follow the practice in <cit.> of using off-the-shelf pre-extracted features as input. Our method is trained with AdamW <cit.> with warm-up. More training details are provided in the supplementary materials. §.§.§ Detailed architecture of DyFADet. We used 2 convolutions for feature embedding, 7 DynE layers as the encoder, and separate DyHeads for classification and regression as the detectors. In our experiments, we report the best results of DyFADet over different architectures. A comprehensive ablation study of the architecture is also provided in Section <ref>. §.§ Main results

Results on HACS-Segment (mAP (%) at tIoU 0.5 / 0.75 / 0.95 and average):
Method | Backbone | 0.5 | 0.75 | 0.95 | Avg.
SSN <cit.> | I3D | 28.8 | 18.8 | 5.3 | 19.0
LoFi <cit.> | TSM | 37.8 | 24.4 | 7.3 | 24.6
G-TAD <cit.> | I3D | 41.1 | 27.6 | 8.3 | 27.5
TadTR <cit.> | I3D | 47.1 | 32.1 | 10.9 | 32.1
BMN <cit.> | SlowFast | 52.5 | 36.4 | 10.4 | 35.8
ActionFormer <cit.> | SlowFast | 54.9 | 36.9 | 9.5 | 36.4
TALLFormer <cit.> | Swin | 55.0 | 36.1 | 11.8 | 36.5
TCANet <cit.> | SlowFast | 54.1 | 37.2 | 11.3 | 36.8
TriDet <cit.> | SlowFast | 56.7 | 39.3 | 11.7 | 38.6
TriDet <cit.> | VM2-g | 62.4 | 44.1 | 13.1 | 43.1
Ours | SlowFast | 57.8 | 39.8 | 11.8 | 39.2
Ours | VM2-g | 64.0 | 44.8 | 14.1 | 44.3

§.§.§ HACS. The performance of different TAD methods on the HACS dataset is provided in Table <ref>, where the average mAP over [0.5:0.05:0.95] is reported, and the best and second-best performance are denoted by bold and blue. In our experiments, SlowFast <cit.> features are used for the proposed DyFADet in the TAD tasks on HACS. From the results, we see that our method with SlowFast features outperforms all other evaluated methods in terms of average mAP (39.2%), and also achieves the best performance across all tIoU thresholds. Notably, at tIoU = 0.5, DyFADet surpasses TriDet by 1.1%.
Since a TAD model generally benefits from a more advanced backbone, we further employ VideoMAE V2-Giant <cit.> (VM2-g) for TAD on HACS. Remarkably, our method achieves the highest performance with VideoMAE V2-Giant, beating the previous state of the art, TriDet (VM2-g), and setting a new SOTA result on HACS of 44.3%. §.§.§ THUMOS14 and ActivityNet-1.3. Experimental results comparing our model with other TAD models are shown in Table <ref>. The average mAP over [0.3:0.1:0.7] and [0.5:0.05:0.95] is reported on THUMOS14 and ActivityNet-1.3, respectively. In these experiments, our method uses I3D and R(2+1)D features for THUMOS14 and ActivityNet-1.3. Our method with I3D features achieves an mAP of 69.2%, which is competitive with TriDet <cit.> while significantly outperforming all other related TAD methods on THUMOS14. On ActivityNet-1.3, the proposed model surpasses TriDet by about 1.7% and achieves an mAP of 38.5%. This high performance indicates the effectiveness of dynamic feature learning mechanisms in modern TAD methods. Table: Results on THUMOS14 using VM2-g (mAP in %, at tIoU 0.3 / 0.5 / 0.7 and average).
    Method                Backbone   0.3    0.5    0.7    Avg.
    ActionFormer <cit.>   VM2-g      84.0   73.0   47.7   69.6
    TriDet <cit.>         VM2-g      84.8   73.3   48.8   70.1
    Ours                  VM2-g      84.3   73.7   50.2   70.5
    Ours                  VM2-g+F    85.4   74.0   50.2   71.1
Generally, a TAD model benefits from a more advanced backbone; we therefore also use VideoMAE V2-Giant <cit.> (VM2-g) on THUMOS14. All TAD methods improve significantly with this advanced feature-extraction backbone, and our method reaches an mAP of 70.5%, superior to TriDet and ActionFormer. Detection performance is further boosted to 71.1% when optical-flow (F) features are additionally used. §.§.§ Epic-Kitchen 100 and Ego4D-MQ1.0. Evaluations on two large-scale egocentric datasets are shown in Table <ref> and Table <ref>, respectively. For Epic-Kitchen 100, the average mAP over [0.1:0.1:0.5] is reported, and all methods use SlowFast features. Our method achieves the best performance across all tIoU thresholds on both subsets, with average mAPs of 25.0% and 23.4% on the verb and noun subsets, respectively, significantly surpassing strong recent TAD methods including ActionFormer <cit.> and ASL <cit.>. For Ego4D-MQ1.0, two types of features, SlowFast (SF) and EgoVLP (EV), are used. With SlowFast features, the proposed method achieves an mAP of 15.3%, significantly outperforming ActionFormer. Moreover, using or combining features from advanced backbone models such as EgoVLP <cit.> further boosts the performance of our method by a large margin. SF and EV denote SlowFast <cit.> and EgoVLP <cit.> features; V. and N. denote the verb and noun sub-tasks. Table: Results on FineAction (mAP in %, at tIoU 0.5 / 0.75 / 0.95 and average).
    Method          Backbone      0.5    0.75   0.95   Avg.
    BMN             I3D           14.4    8.9    3.1    9.3
    G-TAD           I3D           13.7    8.8    3.1    9.1
    ActionFormer    InternVideo    -      -      -     17.6
    ActionFormer    VM2-g         29.1   17.7    5.1   18.2
    Ours            VM2-g         37.1   23.7    5.9   23.8
§.§.§ FineAction. We report the performance of popular TAD methods, including BMN <cit.>, G-TAD <cit.>, ActionFormer <cit.>, and our method. I3D <cit.>, InternVideo <cit.>, and VM2-g <cit.> are used to extract the offline features, and the average mAP over [0.50:0.05:0.95] is reported for all methods. The results are provided in Table <ref>: our method outperforms the other TAD models and achieves a new SOTA result on FineAction of 23.8% with VideoMAE V2-Giant. For reference, the tIoU-based metric used throughout these tables is sketched below.
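The following is a minimal sketch of segment tIoU and threshold-averaged AP; the matching protocol of the official evaluation toolkits may differ in detail (per-class handling, interpolation of the precision-recall curve), so treat this as illustrative only.

    import numpy as np

    def tiou(seg, gts):
        # seg: (start, end); gts: (M, 2) array of ground-truth segments.
        inter = np.clip(np.minimum(seg[1], gts[:, 1]) -
                        np.maximum(seg[0], gts[:, 0]), 0, None)
        union = (seg[1] - seg[0]) + (gts[:, 1] - gts[:, 0]) - inter
        return inter / np.maximum(union, 1e-9)

    def average_precision(dets, gts, thr):
        # dets: list of (score, start, end), sorted by descending score.
        matched = np.zeros(len(gts), dtype=bool)
        tp = np.zeros(len(dets)); fp = np.zeros(len(dets))
        for i, (_, s, e) in enumerate(dets):
            ious = tiou((s, e), gts) if len(gts) else np.array([])
            j = int(np.argmax(ious)) if ious.size else -1
            if j >= 0 and ious[j] >= thr and not matched[j]:
                tp[i], matched[j] = 1, True
            else:
                fp[i] = 1
        rec = np.cumsum(tp) / max(len(gts), 1)
        prec = np.cumsum(tp) / np.maximum(np.cumsum(tp) + np.cumsum(fp), 1e-9)
        return np.trapz(prec, rec)  # simple AP approximation

    def avg_map(dets, gts, thrs=np.arange(0.50, 1.0, 0.05)):
        # e.g. [0.50:0.05:0.95], as reported for HACS and FineAction.
        return np.mean([average_precision(dets, gts, t) for t in thrs])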
§.§ Ablation study Ablation studies are conducted on THUMOS14 to explore further properties of the DynE encoder and MSDy-Head. Table: Results with different modules (average mAP in %).
    Method          Encoder               MS-head   Avg.
    Baseline        Conv                  -         62.1
    Baseline        DeformConv <cit.>     -         66.1
    Baseline        SE <cit.>             -         63.4
    Baseline        Dyn Conv <cit.>       -         66.7
    Baseline        TD3d Conv <cit.>      -         66.5
    Ours^*          DFA_Conv              -         66.8
    ActionFormer    SA                    -         66.8
    Ours^†          DynE                  -         67.8
    Ours^           Conv                  Dyn       67.9
    Ours^           DynE                  Conv      68.0
    Ours            DynE                  Dyn       69.2
§.§.§ Dynamic modules in TAD. These experiments investigate the effectiveness of different dynamic feature aggregation modules in TAD tasks. The results are shown in Table <ref>, where different implementations of the encoder and the detection head are evaluated. The baseline is an all-convolution TAD model, which achieves an mAP of 62.1%. We then substitute or augment the convolutions in the encoder with different dynamic modules, such as deformable 1D convolution <cit.>, the squeeze-and-excitation module <cit.>, dynamic convolution <cit.>, and the temporal deformable 3D convolution module <cit.>. All dynamic modules outperform the convolutional baseline, indicating the strong ability of dynamic modules in TAD tasks. Owing to the stronger adaptation ability of the DFA module, the variant (ours^*) that replaces the convolutions with DFA already matches the recent strong TAD model ActionFormer, and using the proposed DynE layer (ours^†) further increases performance by 1.0%. For the detection head, applying multi-scale connections improves detection performance; however, naively using convolutions to connect different scales (ours^) yields only limited improvement. Once equipped with the DFA module, the MSDy-Head achieves 69.2%, outperforming all other models in these experiments. A more comprehensive study of the architecture is provided in the supplementary materials. §.§.§ Discriminability of the learned features. As shown in <cit.>, the features obtained by recent TAD methods <cit.> tend to exhibit high similarity between snippets, which leads to the less-discriminant feature problem and harms TAD performance. In Fig. <ref>(a), we compute statistics of the average cosine similarity between the features at different timestamps. The features in ActionFormer exhibit high similarity, indicating poor discriminability; using the DynE-based encoder addresses this issue and thereby improves detection performance. We further show the feature similarity matrix between timestamps in Fig. <ref>(b), where the red boxes mark the action intervals. Darker colors indicate more discriminant features that share less similarity. The features from our method within the red boxes show strong mutual similarity, so the boundary features are distinctive and can be easily extracted, whereas ActionFormer fails to learn discriminant features. These results show that our method addresses the first issue raised in Fig. <ref> through the more discriminant features learned by ours^†, which are further enhanced by using the full proposed model. The similarity statistic used here is sketched below.
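The following is a minimal sketch of the similarity statistics just described, assuming snippet features of shape (T, C); the function names are illustrative.

    import numpy as np

    def cosine_similarity_matrix(feats):
        # feats: (T, C) snippet features; returns the (T, T) similarity matrix.
        f = feats / np.maximum(np.linalg.norm(feats, axis=1, keepdims=True), 1e-9)
        return f @ f.T

    def mean_offdiag_similarity(feats):
        # Average pairwise similarity, excluding the trivial diagonal.
        s = cosine_similarity_matrix(feats)
        t = s.shape[0]
        return (s.sum() - np.trace(s)) / (t * (t - 1))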
§.§.§ Visualization of the DFA masks in the MSDy-Head. In Fig. <ref> (a), we provide visualization results for the proposed DFA module. For better visualization, we use the variant with C_m=k in Eq. (<ref>), meaning that the same receptive fields are shared among channels while varying across timestamps. The results show that the module adapts its re-weighting masks to the inputs, leading to different formations at different timestamps. Note that in classical TAD models, such as ActionFormer <cit.>, the parameters of the detection head are shared among different levels, which can be harmful for detection. In contrast, the dynamic properties and multi-scale fusion of the MSDy-Head specialize the detection head to the inputs and target action instances, yielding a fine-grained dynamic detection scheme with better TAD performance. Moreover, we visualize the average activation rate of each path over whole videos (Fig. <ref> (b)). The results show that the proposed head realizes a dynamic-routing-like feature learning mechanism similar to <cit.>; unlike that work, however, our method simultaneously adapts the receptive fields and the kernel weights, which improves the model's ability in TAD tasks. More visualization results can be found in the supplementary materials. §.§.§ Hyper-parameters. In Table <ref>, we evaluate the performance on THUMOS14 with different hyper-parameters, including the expansion factor w, the number of detection-head layers D, and the dynamic type of the proposed module. The model works best with D=3, and w should be selected per dataset. Although the dynamic type affects the final performance, each dynamic type still achieves higher performance than most TAD methods in Table <ref>. §.§.§ Latency. We measure the average single-video inference latency on a GeForce RTX 4090 GPU on THUMOS14. As shown in Table <ref>, ours^† is faster than ActionFormer while achieving better detection performance. Adding the MSDy-Head brings a further 1.4% average-mAP improvement at a latency that remains comparable to ActionFormer. § CONCLUSION In this paper, we introduced a novel dynamic feature aggregation module that simultaneously adapts its kernel weights and receptive fields, addressing the less-discriminant feature and head inadaptation issues in TAD models. The proposed detector built on this module achieves high performance on a series of TAD benchmarks, indicating that an input-conditioned, fine-grained feature extraction mechanism should be considered when building high-performance TAD models. For future work, we believe the efficiency of the module can be further improved by combining it with sparse convolution <cit.> and by adding constraints that encourage each mask to suppress as many features as possible at a minor performance penalty. Applications to more video-understanding tasks will also be investigated. §.§.§ Acknowledgement. This work is supported in part by the National Natural Science Foundation of China under Grant 62206215, the China Postdoctoral Science Foundation under Grant 2022M712537, and the China National Postdoctoral Program for Innovative Talents under Grant BX2021241. § IMPLEMENTATION DETAILS We now present implementation details, including the network architecture, training, and inference in our experiments. Further details can be found in our code. §.§ Network architecture In Fig. <ref>, we present our network architecture. Videos are first processed by a given backbone model to extract features, and these pre-extracted features are then used as the inputs of the TAD model.
Feature embedding layers, consisting of 2 convolution layers followed by LN <cit.>, are first used to compute the input video features. A series of DynE layers then serves as the encoder, where the first two DynE layers act as the stem and the rest successively down-sample the temporal resolution of the features by a factor of 2. Following the common settings in <cit.>, a "2+5" encoder architecture is used for every dataset except Ego4D MQ, i.e., 2 stem layers and 5 down-sampling layers build the encoder, and the outputs of the last 5 layers form the feature pyramid; Ego4D MQ uses a "2+7" encoder architecture, following the design in <cit.>. The outputs of the down-sampling layers are first processed by LNs <cit.> and then used to build the feature pyramid. We then use separate MSDy-Heads for classification and regression as the detectors. In our experiments, the features of the 2 adjacent levels are sent to the detection head during detection; for instance, when detecting action instances at the 4th level, the features from the 3rd and 5th levels are also sent to the heads. For the last level, only features from the previous level are used as additional inputs. The MSDy-Head shares its parameters while detecting action instances at different feature levels. §.§ Training details During training, we randomly select a subset of consecutive clips from an input video and cap the input length to 2304, 768, 960, 2304, and 1024 for THUMOS14, ActivityNet-1.3, HACS, Epic-Kitchens 100, and Ego4D MQ v1.0, respectively. Model EMA and gradient clipping are implemented to further stabilize training. We follow the practice in <cit.> of using off-the-shelf pre-extracted features as input. Our method is trained with AdamW <cit.> with warm-up, and the learning rate is updated with a cosine annealing schedule <cit.>; a sketch of this recipe is given after the following list. Hyper-parameters differ slightly across datasets, as detailed below; more details can be found in our code. * THUMOS14: We use two-stream I3D <cit.> pretrained on Kinetics to extract the video features; VideoMAE V2 <cit.> is further employed to improve performance. Following <cit.>, the initial learning rate is set to 1e-4 with a batch size of 2. We train for 40 epochs, including 20 warm-up epochs, with a weight decay of 2.5e-2. * ActivityNet-1.3: We use TSP features <cit.> pretrained on Kinetics. Following <cit.>, the initial learning rate is set to 5e-4. We train for 15 epochs, including 5 warm-up epochs, with a weight decay of 5e-2. * HACS: We use SlowFast features <cit.> pretrained on Kinetics; VideoMAE V2 <cit.> is also implemented to test the performance of our method. Following <cit.>, the initial learning rate is set to 5e-4 with a batch size of 8. We train for 14 epochs, including 7 warm-up epochs, with a weight decay of 2.5e-2. * Epic-Kitchen 100: We use SlowFast features <cit.>. For both subsets, we train for 30 epochs, including 15 warm-up epochs, with a weight decay of 5e-2, and the initial learning rate is set to 2e-4 with a batch size of 2. * Ego4D MQ v1.0: We use SlowFast features <cit.> and EgoVLP features <cit.>. For all settings, we train for 15 epochs, including 5 warm-up epochs, with a weight decay of 5e-2, and the initial learning rate is set to 2e-4 with a batch size of 2. * FineAction: We use VideoMAE V2 <cit.> as the feature extractor. The initial learning rate is set to 5e-4 with a batch size of 8. We train for 14 epochs, including 7 warm-up epochs, with a weight decay of 2.5e-2.
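A minimal sketch of this shared training recipe (AdamW, linear warm-up, then cosine annealing) is shown below; the THUMOS14 values are used for illustration, and the exact warm-up shape is an assumption rather than the authors' scheduler.

    import math
    from torch.optim import AdamW
    from torch.optim.lr_scheduler import LambdaLR

    def build_optimizer(model, lr=1e-4, weight_decay=2.5e-2,
                        epochs=40, warmup_epochs=20, steps_per_epoch=1000):
        # THUMOS14-style settings; other datasets swap in their own values.
        optimizer = AdamW(model.parameters(), lr=lr, weight_decay=weight_decay)
        total = epochs * steps_per_epoch
        warmup = warmup_epochs * steps_per_epoch

        def schedule(step):
            if step < warmup:                          # linear warm-up to lr
                return step / max(warmup, 1)
            progress = (step - warmup) / max(total - warmup, 1)
            return 0.5 * (1.0 + math.cos(math.pi * progress))   # cosine decay

        return optimizer, LambdaLR(optimizer, lr_lambda=schedule)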
§.§ Inference details During inference, we feed the full sequence into the model. An instant is kept if its classification score exceeds the detection threshold, and Soft-NMS <cit.> is then applied to remove duplicate detections. For the experiments on ActivityNet-1.3 and HACS, we apply score fusion with external classification scores, following the settings in <cit.>: given an input video, the top-2 video-level classes from the external classifier are assigned to all detected action instances in that video, with the action scores from our model multiplied by the external classification scores. Each detected action instance from our model thus yields two action instances. More details can be found in our code. § ABLATION STUDY OF THE ARCHITECTURE DESIGN All experiments in this section are conducted on THUMOS14 to explore further properties of the DynE encoder and MSDy-Head, and we provide more comprehensive results on how the architecture design affects TAD performance. The experiments investigate the effectiveness of different dynamic feature aggregation modules in TAD tasks. The results are shown in Table <ref>, where different implementations of the encoder and the detection head are evaluated. The upper panel of the table evaluates TAD performance without multi-scale connections in the detection heads. The baseline is an all-convolution TAD model, which achieves an mAP of 62.1%. Substituting the encoder convolutions with deformable 1D convolutions <cit.> as a comparison achieves an mAP of 66.1%; the superiority of the deformable-based model demonstrates the strong ability of dynamic modules in TAD tasks. Owing to the stronger adaptation ability of the DFA module, the variant ours^* that replaces the convolutions with DFA matches the recent strong TAD model ActionFormer, and using the proposed DynE layer (ours^†) further increases performance by 1.0%. In the middle panel, we see that applying multi-scale connections in the TAD head improves detection performance; however, naively using convolutions to connect different scales (ours^) yields only limited improvement. Once equipped with the DFA module, the MSDy-Head achieves 69.2%, outperforming all other models in these experiments. We also attach the proposed head to a basic all-convolution encoder, which achieves a detection performance of 65.8%, outperforming the baseline by 3.7% and further demonstrating the effectiveness of the proposed head in TAD tasks. In the bottom panel, we test the proposed module with different dynamic properties by controlling C_m, as described in Section 3 of the main paper. In our experiments, C_m is set to k, Ck, and C (denoted k, ck, and c in Table <ref>), where k is the kernel size and C is the number of input channels. The results show that the variant with C_m=C (c) performs best on THUMOS14, while the other variants still achieve competitive detection performance compared to the TAD models evaluated in the main paper. A toy sketch of these C_m granularities follows.
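As a rough illustration of what the three granularities could mean in code, the toy module below predicts a timestamp-dependent re-weighting mask whose size C_m controls whether one k-tap kernel is shared across channels (k), a full per-channel kernel is used (ck), or a per-channel scalar gate is applied (c). This is our own illustrative reading of the variants, not the released implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ToyDynamicMask1d(nn.Module):
        def __init__(self, channels, k=3, mode="k"):
            super().__init__()
            self.k, self.c, self.mode = k, channels, mode
            out = {"k": k, "ck": channels * k, "c": channels}[mode]
            # Predict C_m mask values per timestamp from the input itself.
            self.predict = nn.Conv1d(channels, out, kernel_size=k, padding=k // 2)

        def forward(self, x):                      # x: (B, C, T)
            mask = torch.sigmoid(self.predict(x))  # (B, C_m, T)
            # Extract k-tap temporal patches around every timestamp.
            patches = F.unfold(x.unsqueeze(-1), (self.k, 1),
                               padding=(self.k // 2, 0))       # (B, C*k, T)
            patches = patches.view(x.size(0), self.c, self.k, -1)
            if self.mode == "k":                   # shared kernel per timestamp
                w = mask.unsqueeze(1)              # (B, 1, k, T)
            elif self.mode == "ck":                # per-channel kernel
                w = mask.view(x.size(0), self.c, self.k, -1)
            else:                                  # per-channel scalar gate
                return x * mask
            return (patches * w).sum(dim=2)        # (B, C, T)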
§ ADDITIONAL VISUALIZATIONS OF THE MSDy-HEAD We provide more visualization results of the proposed MSDy-Head to further show how the proposed dynamic feature learning mechanism effectively resolves the head inadaptation issue in classical TAD models. Following the experiments in the main paper, we use the variant with C_m=k, meaning that the same receptive fields are shared among channels while varying across timestamps. Note that although the sampling positions are shared among channels, the kernel weights are still adjusted based on the inputs. The results are shown in Fig. <ref>. They show that the module adapts its re-weighting masks to the inputs, leading to different formations at different timestamps. Note that in classical TAD models, such as ActionFormer <cit.>, the parameters of the detection head are shared among different levels, which can be harmful for detection. In contrast, the dynamic properties and multi-scale fusion of the MSDy-Head specialize the detection head to the inputs and target action instances, yielding a fine-grained dynamic detection scheme that achieves better TAD performance. The visualization results also show that the weight of the depth path is always used without masking, indicating the importance of the depth path; this matches our intuition that the depth path is designed exclusively for action detection. Moreover, we find that the up paths usually play a more important role in detection, possibly because 1) the high-level features pass through more encoder layers and thus carry more high-level semantic information that benefits detection; 2) only a few high-temporal-resolution features from the low levels are needed for detection, regardless of instance duration; and 3) the coarse-to-fine feature fusion in the MSDy-Head resembles the feature learning process in <cit.>, which demonstrates the importance of low-frequency information along the temporal dimension. We also observe that the later, detailed representations generally play a more important role during feature fusion, which might be because short-term information from the future is beneficial for predicting the end of an action. Moreover, the visualizations show that the predicted action length is generally shorter than the ground truth. We infer that this is because the model only predicts the intrinsic temporal range of an action instance. As shown in Fig. <ref> (a), the model considers the action Golf Swing to begin when the person starts to swing the golf club and to end after the club hits the ball, whereas the annotation also contains the preparation phase and the person standing after the swing; intuitively, both temporal ranges can be viewed as the action Golf Swing. The examples in Fig. <ref> (b) and Fig. <ref> (c) show similar trends.
http://arxiv.org/abs/2407.03025v1
20240703113526
XMM-Newton and NuSTAR discovery of a likely IP candidate XMMU J173029.8-330920 in the Galactic Disk
[ "Samaresh Mondal", "Gabriele Ponti", "Luke Filor", "Tong Bao", "Frank Haberl", "Ciro Salcedo", "Sergio Campana", "Charles J. Hailey", "Kaya Mori", "Nanda Rea" ]
astro-ph.HE
[ "astro-ph.HE" ]
^1INAF – Osservatorio Astronomico di Brera, Via E. Bianchi 46, 23807 Merate (LC), Italy samaresh.mondal@inaf.it ^2Max-Planck-Institut für extraterrestrische Physik, Gießenbachstraße 1, 85748, Garching, Germany ^3Columbia Astrophysics Laboratory, Columbia University, New York, NY 10027, USA ^4School of Astronomy and Space Science, Nanjing University, Nanjing 210046, China ^5Institute of Space Sciences (ICE, CSIC), Campus UAB, Carrer de Can Magrans s/n, E-08193 Barcelona, Spain ^6Institut d'Estudis Espacials de Catalunya (IEEC), Carrer Gran Capità 2–4, 08034 Barcelona, Spain We aim to characterize the population of low-luminosity X-ray sources in the Galactic plane by studying their X-ray spectra and searching for periodic signals in their light curves. We are performing an X-ray survey of the Galactic disk using XMM-Newton, and the source XMMU J173029.8–330920 was serendipitously discovered in our campaign. We performed a follow-up NuSTAR observation of the source using our pre-approved target-of-opportunity time. We used various phenomenological models in xspec for the X-ray spectral modeling, and we computed the Lomb-Scargle periodogram to search for X-ray periodicity. A Monte Carlo method was used to simulate 1000 artificial light curves to estimate the significance of the detected period. We also searched for X-ray, optical, and infrared counterparts of the source in various catalogs. The spectral modeling indicates the presence of an intervening cloud with N_H∼(1.5-2.3)×10^23 cm^-2 that partially absorbs the incoming X-ray photons. The X-ray spectra are best fit by a model representing emission from collisionally ionized diffuse gas with plasma temperature kT=26^+11_-5 keV. Furthermore, an Fe K_α line at 6.47^+0.13_-0.06 keV was detected, with an equivalent width of 312±104 eV. We discovered a coherent pulsation with a period of 521.7±0.8 s; the 3–10 keV pulsed fraction of the source is ∼50–60%. The hard X-ray emission with plasma temperature kT=26^+11_-5 keV, the iron K_α emission at 6.4 keV, and the 521.7±0.8 s periodicity suggest that XMMU J173029.8–330920 is an intermediate polar. We estimate the mass of the central white dwarf to be 0.94-1.4 M_⊙, assuming a distance to the source of ∼1.4-5 kpc. XMM-Newton and NuSTAR discovery of a likely IP candidate XMMU J173029.8–330920 in the Galactic Disk Samaresh Mondal1, Gabriele Ponti1,2, Luke Filor3, Tong Bao4, Frank Haberl2, Ciro Salcedo3, Sergio Campana1, Charles J. Hailey3, Kaya Mori3, and Nanda Rea5,6 Received XXX; accepted YYY ================================================================================================================================================================================================================================================================================== § INTRODUCTION Galactic X-ray emission is a manifestation of various high-energy phenomena and processes. Studying discrete X-ray sources is essential for understanding stellar evolution, dynamics, end products, and accretion physics. Initial X-ray scans of the Galactic plane revealed a narrow, continuous ridge of emission, the so-called Galactic ridge X-ray emission (GRXE), which extends to either side of the Galactic disk up to ∼40^∘. A copious amount of 6.7 keV line emission from highly ionized iron was detected from selected regions of the GRXE <cit.>. The study of the 6.7 keV emission indicated an optically thin hot plasma distributed mainly along the Galactic plane.
The Galactic diffuse X-ray emission can be described by a two-component emission model with a soft temperature of ∼0.8 keV and a hard component of ∼8 keV <cit.>. The origin of the ∼8 keV plasma is less certain: the Galactic potential is too shallow to confine such hot plasma, which would escape at a velocity of a thousand km s^-1. Deep observations of the Galactic Center (GC) (2^∘×0.8^∘) by <cit.> and of a region south of the GC (16'×16') by <cit.> pointed out that the hard kT∼8 keV diffuse emission is primarily due to unresolved cataclysmic variables (CVs) and active binaries. Later, however, significant diffuse hard X-ray emission was detected up to energies of 40 keV close to the GC, within ±0.8^∘ <cit.>. In addition, some studies pointed out that the hard GC emission is truly diffuse, by comparing the stellar mass distribution with the Fe XXV (6.7 keV) line intensity map <cit.>. A recent study by our group showed that this diffuse hard emission in the GC can be explained if one assumes a GC stellar population with iron abundances ∼1.9 times higher than those in the Galactic bar/bulge <cit.>. Magnetic CVs (mCVs) are categorized into two types based on the magnetic field strength of the white dwarf (WD): intermediate polars (IPs) and polars <cit.>. In polars, the magnetic field is strong enough (>10 MG) to synchronize the spin and orbital period of the system, whereas in IPs the magnetic field is weaker and the spin and orbital periods are less synchronized. Most hard X-ray emission from mCVs originates close to the WD surface: the accreting material from the companion star follows the magnetic field lines of the WD and, while approaching the WD polar cap, reaches a supersonic speed of 3000–10000 km s^-1. A shock front develops above the surface, and the infalling gas releases its energy in the shock, producing hard X-ray photons <cit.>. The GC region hosts many energetic events, such as bubbles and super-bubbles from young and old stars and various supernova remnants <cit.>, as well as large-scale structures such as the Galactic Chimneys <cit.>, the Fermi bubbles <cit.>, and the eROSITA bubbles <cit.>. Currently, we are performing an X-ray survey of the Galactic disk using XMM-Newton. The main aim of this survey is to constrain the flow of hot baryons that feed the large-scale structures from the GC. The survey region extends from l≥350^∘ to l≤7^∘ and covers latitudes of b∼±1^∘, with an exposure of 20 ks per tile. During this survey, we detected thousands of X-ray point sources of various types. Our survey is sensitive enough to detect sources as faint as 10^-14 erg cm^-2 s^-1. The nearby bright X-ray point sources with flux >10^-11 erg cm^-2 s^-1 are well known thanks to several decades of monitoring by RXTE, Swift-BAT, MAXI, and INTEGRAL. The bright IPs in the solar neighborhood have a typical luminosity of 10^33-10^34 erg s^-1 <cit.>, and their unabsorbed flux at the GC would be ∼10^-13-10^-12 erg cm^-2 s^-1. Furthermore, the high extinction towards the GC makes it even more difficult to detect these faint sources there; our observations are deep enough to do so. During our campaign, we detected a faint X-ray source, XMMU J173029.8–330920, with a 3–10 keV X-ray flux of ∼7×10^-13 erg cm^-2 s^-1. In this paper, we report a detailed spectral and timing study of this source.
§ OBSERVATIONS AND DATA REDUCTION The source XMMU J173029.8–330920 (R.A.=17^h30^m29.29^s, DEC=-33^∘09'20.9") was discovered in our XMM-Newton Heritage survey of the inner Galactic disk <cit.> (PI: G. Ponti). The source was detected in one of our pointings on 2023-03-28 (ObsID: 0916800201) with an exposure of 18.3 ks <cit.>. We analyzed the data and discovered an X-ray pulsation and a tentative iron 6.4 keV line emission, suggesting a possible identification with an IP. To confirm this, we triggered our pre-approved 35 ks NuSTAR target-of-opportunity observation to follow up on this source. The XMM-Newton observation data file was processed using the Science Analysis System (SAS v19.0.0[https://www.cosmos.esa.int/web/xmm-newton/sas]). Point-source detection and source-list creation were performed using the standard SAS source-detection task, simultaneously in five energy bands (0.2–0.5 keV, 0.5–1 keV, 1–2 keV, 2–4 keV, and 4–12 keV) for each detector EPIC-pn/MOS1/MOS2 <cit.>. When extracting the source products, we selected only events with PATTERN≤4 for the EPIC-pn detector and PATTERN≤12 for the MOS2 detector; the source was outside the field of view of the MOS1 detector. We applied a barycenter correction for the light-curve extraction using the SAS task barycen. We used a circular region of 25″ radius for the source product extraction, and the background products were extracted from a nearby source-free circular region of the same radius. The NuSTAR observation was performed on 2023-04-24 (ObsID: 80801332002, PI: S. Mondal). The data reduction was performed using the NuSTAR data analysis pipeline software (NUSTARDAS v0.4.9, provided with HEASOFT). The unfiltered event files from the FPMA and FPMB detectors were cleaned, and data taken during passages through the South Atlantic Anomaly were removed, with nupipeline. The source products were extracted from a circular region of radius 40″ using nuproducts; the background was selected from a source-free region of 40″ radius. § RESULTS §.§ XMM-Newton and NuSTAR joint spectral modeling We performed time-averaged spectral modeling using xspec <cit.>, including the data from the EPIC-pn, MOS2, FPMA, and FPMB detectors in a joint fit. In the fit we added a constant term for cross-calibration uncertainty, fixed at unity for the EPIC-pn detector and left free for the others. The best-fit parameters are listed in Table <ref>, with quoted errors at the 90% confidence level. All models are convolved with a Galactic absorption component <cit.> with solar abundances. Before fitting the XMM-Newton and NuSTAR data jointly, we checked for any flux variation, as the two observations are separated by nearly 27 days. We chose the common 3–10 keV energy band and fit the spectra with a simple power-law model, which gives 3–10 keV fluxes of 7.1^+0.7_-1.7×10^-13 erg s^-1 cm^-2 (XMM-Newton) and 7.4^+0.6_-1.0×10^-13 erg s^-1 cm^-2 (NuSTAR). As there is no indication of flux variation, we modeled the XMM-Newton and NuSTAR spectra jointly. First, we fit the data with an absorbed power-law model, which leads to a high N_H=(16±3)×10^22 cm^-2 and Γ=1.9±0.2 and leaves residuals at energies below 3 keV. Adding a partial covering component improves the fit by Δχ^2=17 for two additional degrees of freedom (d.o.f.); the significance of the partial covering component is 99.84% in an F-test. The fitted parameters are N_H=3^+2_-1×10^22 cm^-2, N_H,pcf=23^+8_-6×10^22 cm^-2, pcf=0.93^+0.04_-0.07, and Γ=2.1±0.2. Next, we added a Gaussian, which further improves the fit by Δχ^2=13 for one additional d.o.f., corresponding to a detection significance of 99.90% in an F-test. The line width is consistent with zero; we therefore fixed it at zero keV, and the line centroid is E_g=6.47^+0.13_-0.06 keV with an equivalent width of 312±104 eV. We checked for the presence of Fe XXV and Fe XXVI by adding two Gaussians at 6.7 and 6.9 keV, but found no significant improvement in the fit. The best fit is obtained when the data are fitted with an absorbed apec model, representing emission from collisionally ionized diffuse gas; since this model does not include the neutral Fe K_α emission at 6.4 keV, we add a Gaussian at 6.4 keV. The plasma temperature obtained from this model is 26^+11_-5 keV. A minimal scripted version of this model setup is sketched below.
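For readers wanting to reproduce this kind of joint fit, a PyXspec sketch of the final model combination (Galactic absorption × partial covering × apec, plus a narrow 6.4 keV Gaussian) might look as follows. The file names are placeholders, and the energy range and starting values are assumptions based on the numbers quoted in the text.

    from xspec import AllData, AllModels, Model, Fit

    # Load the four spectra into one joint fit (placeholder file names).
    AllData("1:1 pn.pha 2:2 mos2.pha 3:3 fpma.pha 4:4 fpmb.pha")
    AllData.ignore("bad")
    AllData.ignore("**-0.5 50.0-**")   # keep roughly 0.5-50 keV

    m = Model("constant*tbabs*pcfabs*(apec + gauss)")
    m.constant.factor = 1.0
    m.constant.factor.frozen = True    # fixed to unity for EPIC-pn
    m.TBabs.nH = 0.3                   # in units of 10^22 cm^-2
    m.pcfabs.nH = 23.0
    m.pcfabs.CvrFract = 0.93
    m.apec.kT = 26.0
    m.gaussian.LineE = 6.4
    m.gaussian.LineE.frozen = True
    m.gaussian.Sigma = 0.0
    m.gaussian.Sigma.frozen = True

    # Free the cross-calibration constants of the other detectors.
    for i in range(2, AllData.nGroups + 1):
        AllModels(i).constant.factor.frozen = False

    Fit.statMethod = "chi"
    Fit.perform()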
§.§ X-ray pulsation search For the pulsation search in the EPIC-pn and FPMA detectors, we chose the common 3–10 keV energy band for the light-curve extraction. X-ray light curves often suffer from gaps due to South Atlantic Anomaly passages; for satellites in low Earth orbit, such as NuSTAR, gaps due to Earth occultation are also prominent. We used the Lomb-Scargle periodogram <cit.>, which is well suited to detecting periodicity in observations that suffer from gaps. Figure <ref> shows the Lomb-Scargle periodograms of the EPIC-pn and FPMA detectors. Peaks at frequencies of about 1.918×10^-3 Hz and 1.916×10^-3 Hz are visible in the EPIC-pn and FPMA data, respectively, corresponding to periods of 521.2±3.6 s and 521.7±0.8 s, consistent with each other. The periodogram of accreting X-ray binaries displays red noise, which can mimic artificial periodic or aperiodic variability; we therefore performed Monte Carlo simulations to test the detection significance. First, we computed the power spectral density (PSD) using the standard periodogram approach with an [rms/mean]^2 PSD normalization <cit.>. We then characterized the source power spectrum with a power-law model that accounts for the red noise, P(ν)=N ν^-1+C_p, where N is the normalization factor and C_p accounts for the Poisson noise, which is set by the mean photon flux of the source. We fitted the source PSD with this model using a Markov chain Monte Carlo approach, employing the Python package emcee <cit.> to derive the best-fit parameters and their uncertainties. We then simulated 1000 light curves for the best-fit power-law model using the method of <cit.>, re-sampled and binned to have the same duration, mean count rate, and variance as the observed light curve. The red horizontal lines in Fig. <ref> indicate the 3σ detection significance. The 3–10 keV pulsed fractions (PF) obtained from EPIC-pn and FPMA are 50.4±15.5% and 59.3±14.7%, respectively. The PF is calculated as PF=(F_max-F_min)/(F_max+F_min), where F_max and F_min are the maximum and minimum normalized counts of the folded light curve. Next, we computed the PF of the source in different energy bands; Figure <ref> shows the result. The 0.5–5 keV and 5–10 keV pulsed fractions are 55±17% and 42±16%, respectively.
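A minimal sketch of this period search and pulsed-fraction estimate, assuming barycentered times binned into a light curve, is given below; the red-noise significance simulation with simulated light curves is omitted for brevity, and the frequency grid is an arbitrary choice.

    import numpy as np
    from astropy.timeseries import LombScargle

    def find_period(time, rate, fmin=1e-4, fmax=1e-2):
        # time [s], rate [cts/s]; gaps in `time` are handled natively.
        freq = np.linspace(fmin, fmax, 100000)
        power = LombScargle(time, rate).power(freq)
        return 1.0 / freq[np.argmax(power)], freq, power

    def pulsed_fraction(time, rate, period, nbins=16):
        phase = (time / period) % 1.0
        edges = np.linspace(0, 1, nbins + 1)
        profile = np.array([rate[(phase >= lo) & (phase < hi)].mean()
                            for lo, hi in zip(edges[:-1], edges[1:])])
        profile /= profile.mean()          # normalized folded profile
        return (profile.max() - profile.min()) / (profile.max() + profile.min())

    # Example: period, _, _ = find_period(t, r)  ->  ~521.7 s for this source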
§ COUNTERPARTS We searched for X-ray counterparts in the CSC2.0[http://cda.cfa.harvard.edu/cscweb/index.do] and 2SXPS[https://www.swift.ac.uk/2SXPS/index.php] catalogs, but none were found. We also searched for optical and infrared counterparts in various catalogs; no counterpart was found within 5″ of the XMM-Newton position, owing to the high ISM absorption. A source is listed in the VVV Infrared Astrometric Catalogue <cit.>; however, it is located 2.7″ away from the XMM-Newton position, more than 3σ beyond the 1″ positional uncertainty, so we consider that no infrared counterpart was detected. From the observed absorption column density of N_H=2^+2_-1×10^22 cm^-2, we estimated the distance to the source by comparing this value with the absorption column density obtained from several other tracers <cit.>. Assuming that all the observed column density in the X-ray band is due to interstellar absorption, we obtain a distance of about 2.9^+2.1_-1.5 kpc using the 3D-N_H-tool[http://astro.uni-tuebingen.de/nh3d/nhtool] of <cit.>; in any case, the source should be closer to us than 5 kpc. From the non-detections in the VVV and Gaia data, we can constrain the nature of the companion star. The extinction along the line of sight in the optical and infrared bands is A_V=6.36 and A_K=0.61, respectively <cit.>. The Gaia G band and the VVV infrared survey have detection limits of m_G=21 and m_Ks=18.1, respectively; the absolute magnitude of the donor star should therefore be M_G>2.3 and M_Ks>5.1. The donor star might be even fainter than these estimates, since the optical emission from this type of system often has a significant contribution from the accretion disk <cit.>. The upper limits on the optical and infrared magnitudes thus suggest that the companion is a low-mass main-sequence star of spectral type later than A7/F0. Our XMM-Newton observation includes simultaneous UV coverage with the UVW1 (291 nm) and UVM2 (231 nm) filters; we found no counterpart in the UV data. § DISCUSSION The source XMMU J173029.8–330920 was discovered in March 2023 during our XMM-Newton campaign. Shortly afterward, in April 2023, we triggered our pre-allocated NuSTAR observation to follow up on this source. We searched for a counterpart in the Chandra and Swift source catalogs but found no association, and likewise found no optical or infrared counterpart in the Gaia and 2MASS catalogs. When the X-ray spectra are fitted with a single Galactic absorption component, residuals remain in the ratio plot; the spectra are best fitted when we incorporate a partial covering component into the spectral models. The partial covering absorber has a column density of N_H,pcf=(1.5-2.3)×10^23 cm^-2. The requirement of a partial covering component is commonly seen in the spectra of magnetic CVs <cit.>. Partial covering absorption implies that a fraction of the photons are seen directly and the rest through an intervening absorber, suggesting that the absorber size is of the same order as the X-ray emitting region and that it is located close to the source. A simple absorber with a column density of 10^23 cm^-2 would completely absorb the photons below 2 keV, yet most IPs show substantial emission in the soft 0.2–2 keV band. Therefore, <cit.> introduced the concept of a partial covering absorber, which improved the spectral fits and explained the observed energy-dependent spin modulation.
The total hydrogen absorption column density (H_I+H_2)[https://www.swift.ac.uk/analysis/nhtot/] along the line of sight is N_H_I+H_2=1.4×10^22 cm^-2 <cit.>. The Galactic absorption towards the source obtained from the X-ray spectral fitting, (2-3)×10^22 cm^-2, is slightly higher than this total line-of-sight value. A pulsation period of 521.7±0.8 s was found in both the XMM-Newton and NuSTAR data sets and is likely the spin period of the WD. Such a spin period is typical for an IP: the spin periods of IPs range from ∼30 s to ∼3000 s, with a median around ∼1000 s <cit.>. The high-energy 5–10 keV pulsed fraction of the source is 42±16%. In IPs the PF decreases with increasing energy, because the light curves show increasing modulation depth with decreasing energy <cit.>, a trend for which photoelectric absorption has been considered partly responsible. Most IPs do not show a large PF at higher energies; however, GK Per <cit.>, AO Psc, and FO Aqr <cit.> show PFs of order 25–35% above 5 keV, similar to the 5–10 keV PF detected for XMMU J173029.8–330920. The source is therefore a strong IP candidate. The upper limits on the optical and infrared magnitudes indicate that the source hosts a low-mass companion star: the optical extinction of A_V=6.36 along the line of sight is not high enough to explain the non-detection of an optical/infrared counterpart if the source contained a high-mass companion. This rules out the neutron star high-mass X-ray binary (NS HMXRB) scenario. Furthermore, the source spectrum is soft, with Γ∼1.9, which is not typically seen in NS HMXRBs; NS HMXRBs do not show strong ionized 6.7 and 6.9 keV lines, and their spectra are best fitted by an absorbed power-law model, whereas the spectra of XMMU J173029.8–330920 are best fitted by an apec model of the kind typically used for IPs. Lastly, the source is also unlikely to be an ultra-compact X-ray binary (UCXB). UCXBs are low-mass X-ray binaries usually containing a WD donor and a neutron star accretor with an ultra-short orbital period (<1 hr). The emission from NS-UCXBs is well characterized by blackbody emission with kT∼1-3 keV originating from the NS surface <cit.>. In our case, a blackbody component does not fit the joint XMM-Newton+NuSTAR 0.5–50 keV spectrum and leaves an excess above 10 keV in the ratio plot. Furthermore, a partial covering model is still required when fitting with the blackbody model, which is not typical of NS-UCXB spectra; in addition, NS-UCXBs show little to no Fe emission <cit.>. §.§ WD Mass Measurement As XMMU J173029.8–330920 is an IP whose X-ray emission primarily originates from an accretion column, we measure the WD mass by fitting MCVSPEC to the broadband X-ray spectra. First introduced in <cit.>, MCVSPEC is an XSPEC spectral model developed for magnetic CVs assuming a magnetically confined 1D accretion flow. Largely based on the methodology of <cit.>, MCVSPEC computes plasma temperature and density profiles varying along the accretion column between the WD surface and the shock height, and outputs an X-ray spectrum with a thermal bremsstrahlung continuum and atomic lines by integrating the X-ray emissivity over the accretion column.
The input parameters of MCVSPEC are M, ṁ (the specific accretion rate in g cm^-2 s^-1), R_m/R, where R_m is the magnetospheric radius and R is the WD radius, and Z (the abundance relative to the solar value). For IPs, where the accretion disk is truncated by the WD magnetic field, we assume that the free-falling gas gains kinetic energy from the magnetospheric radius (R_m) down to the shock height (h). At the shock height, the infalling gas velocity reaches v_ff = √(2GM (1/(R+h) - 1/R_m)), which exceeds the sound speed and forms a stand-off shock. As the shock temperature is directly related to M through kT_shock = (3/8) μ m_H v_ff^2, both R_m and h affect the X-ray spectrum from the accretion column. In addition to MCVSPEC, the reflect, gaussian, and pcfabs models in XSPEC are incorporated to account for X-ray reflection by the WD surface and X-ray absorption by the accretion curtain. A Gaussian component for the Fe K_α fluorescent line is frozen at 6.4 keV with a fixed width of σ = 0.01 keV. For the reflect model, we linked the reflection scaling factor (r_refl) to the shock height h, which is determined self-consistently by MCVSPEC in each spectral fit, following the recipe suggested by <cit.>; note that X-ray reflection is more pronounced when the accretion column is shorter. Overall, the full spectral model was set to tbabs*pcfabs*(reflect*MCVSPEC + gauss). Before fitting the X-ray spectra, we constrained the range of the input parameters R_m/R and ṁ. Following <cit.>, we constrained R_m/R by assuming that the WD is in spin equilibrium with the accretion disk at the magnetospheric radius; in this case, R_m = (GMP^2/4π^2)^1/3, where P = 521 s is the WD spin period. The specific accretion rate, however, is largely unconstrained, owing to the unknown source distance (d) and fractional accretion column area (f). The specific accretion rate is defined as ṁ = Ṁ/(4π f R^2), where Ṁ is the total mass accretion rate in g s^-1. Assuming that the X-ray emission represents most of the radiation from the IP (i.e., L ≈ L_X), Ṁ can be calculated from L = GMṀ(1/R - 1/R_m), where L = 4π d^2 F_X is the total luminosity and F_X the unabsorbed X-ray flux. We assumed d = 1.4-5 kpc, as estimated from the Galactic absorption column density along the line of sight from the X-ray spectral fitting. We first estimated the minimum specific accretion rate ṁ_min by assuming the shortest source distance (1.4 kpc) and the maximum fractional accretion column area (f_max). For a dipole B-field geometry, f_max = R/(2R_m), assuming that the infalling gas originates from the entire accretion disk truncated at R_m <cit.>. Since some of the parameters described above depend on the WD mass, we adopted a set of trial M values, namely 0.6, 0.8, 0.9, 1.0, 1.1, 1.2, 1.3, and 1.35 M_⊙, as initial estimates. For each trial WD mass, we computed R_m/R and ṁ_min and then fit the X-ray spectra with these values as inputs. This alone is not sufficient, since the specific accretion rate may exceed ṁ_min because of a potentially larger source distance (d > 1.4 kpc), a smaller fractional accretion column area (f < f_max), or missing source luminosity below the X-ray band (L > L_X). We therefore also considered a large range of ṁ above ṁ_min (ṁ = 0.0043-83 g cm^-2 s^-1). For each (frozen) ṁ value, we fit the X-ray spectra and determined the WD mass and its statistical errors.
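To illustrate these relations, the short sketch below evaluates the spin-equilibrium magnetospheric radius, the free-fall velocity, and the shock temperature for a trial WD mass. It is a minimal calculation under stated assumptions: the Nauenberg mass-radius relation is our choice here (not necessarily the one inside MCVSPEC), h is set to zero, and the 3/8 prefactor follows the expression given above.

    import numpy as np
    from astropy import constants as c
    from astropy import units as u

    def wd_radius(m_wd):
        # Nauenberg (1972) mass-radius relation; m_wd in solar masses.
        x = (m_wd / 1.44) ** (2.0 / 3.0)
        return 7.8e8 * np.sqrt(1.0 / x - x) * u.cm

    def shock_parameters(m_wd=1.0, p_spin=521.0 * u.s, mu=0.615):
        m = m_wd * c.M_sun
        r = wd_radius(m_wd)
        # Spin-equilibrium (corotation) magnetospheric radius.
        r_m = ((c.G * m * p_spin**2) / (4 * np.pi**2)) ** (1.0 / 3.0)
        # Free-fall velocity evaluated at the WD surface (h ~ 0).
        v_ff = np.sqrt(2 * c.G * m * (1 / r - 1 / r_m))
        kt = (3.0 / 8.0) * mu * c.m_p * v_ff**2   # prefactor as in the text
        return r_m.to(u.cm), v_ff.to(u.km / u.s), kt.to(u.keV)

    r_m, v_ff, kt = shock_parameters(1.0)
    # v_ff ~ 7000 km/s and r_m ~ 20 WD radii; the fitted apec temperature
    # lies below kT_shock because the post-shock flow cools as it settles.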
We found that the fitted WD mass saturates and becomes independent of ṁ at higher values (typically above ∼10 g cm^-2 s^-1), when the shock height is only a small fraction of the WD radius. This saturation of the WD mass with ṁ results from the fact that both the shock temperature and the X-ray reflection (both functions of h/R) become less dependent on the shock height. In each case, our model fit the X-ray spectra well, with reduced χ^2 = 1.18-1.21 (Fig. <ref>). Finally, we checked the self-consistency of our fitting results by ensuring that the measured WD mass was consistent with the assumed WD mass within the statistical and systematic errors. The same method was applied to fit broadband X-ray spectra of another IP obtained by XMM-Newton and NuSTAR (Salcedo et al., submitted to ApJ). The systematic errors stem from the range of specific accretion rates we considered. We found that initial mass estimates of M ≥ 1.0 M_⊙ yielded self-consistent WD mass measurements, and we were able to constrain the WD mass range to 0.94-1.4 M_⊙, which is comparable to or larger than the mean WD mass of magnetic CVs <cit.>. The large WD mass range is due to the statistical and systematic errors, largely associated with the unknown source distance and fractional accretion column area. As emphasized in <cit.>, better constraints on the source distance and better photon statistics (particularly above 10 keV) are desirable for determining the WD mass more accurately in the future. §.§ Fe K_α lines There is an indication of Fe K_α line emission at 6.4 keV, with an equivalent width of 312±104 eV. Most IP-type sources also display ionized Fe lines at 6.7 keV (Fe XXV) and 6.9 keV (Fe XXVI) with equivalent widths above 50 eV <cit.>. We checked for the presence of the 6.7 and 6.9 keV lines and estimated upper limits on their equivalent widths of 134 and 147 eV, respectively, which can be tested with a longer observation in the future. The 6.4 keV iron K_α line is mainly created by the irradiation of neutral material by hard X-ray photons. Compact objects accreting matter through an accretion disk show X-ray reflection from the disk, whereas in accreting WDs most of the hard X-rays are produced so close to the compact object that one expects half of the photons to be directed toward the WD surface and reflected back to the observer. X-ray reflection in accreting compact objects produces the Fe K_α fluorescence line and a Compton reflection hump at 10–30 keV. The Compton reflection hump was originally difficult to measure; with the launch of NuSTAR, however, X-ray reflection was detected in a couple of magnetic CVs <cit.>. We also checked for a reflection component in the X-ray spectrum of XMMU J173029.8–330920 by adding a convolution term <cit.> to the total model; however, the addition of the reflection component does not improve Δχ^2. On the other hand, the neutral Fe K_α line could be produced by reprocessing when the hard X-rays pass through the partial covering medium: hard X-rays traveling through the absorbing material interact with the iron and create the Fe K_α fluorescence line. In such a scenario, the intensity of the Fe K_α line increases with the thickness of the ambient material.
In a sample study of mCVs using X-ray data, the column density of the partial absorber was found to correlate with the 6.4 keV line intensity <cit.>, indicating that the neutral iron K_α emission is primarily caused by absorption-induced fluorescence in the absorber. § CONCLUSIONS We serendipitously discovered the source XMMU J173029.8–330920 during our XMM-Newton Heritage survey of the Galactic disk and followed it up with our pre-approved NuSTAR ToO observation. Based on the XMM-Newton and NuSTAR data analysis, we suggest that XMMU J173029.8–330920 is an IP. The X-ray spectra can be fitted by emission from collisionally ionized diffuse gas with a plasma temperature of 26^+11_-5 keV, and a partial covering absorber is required in addition to the Galactic absorption to model the X-ray spectrum. The X-ray spectra show a neutral Fe K_α emission line at 6.4 keV, while the X-ray light curves show periodic modulation with a period of 521.7±0.8 s. These features are commonly seen in accreting magnetic CVs of the IP type. SM and GP acknowledge financial support from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme HotMilk (grant agreement No. 865637). SM and GP also acknowledge support from Bando per il Finanziamento della Ricerca Fondamentale 2022 dell'Istituto Nazionale di Astrofisica (INAF): GO Large program and from the Framework per l'Attrazione e il Rafforzamento delle Eccellenze (FARE) per la ricerca in Italia (R20L5S39T9). KM is partially supported by the NASA ADAP program (NNH22ZDA001N-ADAP).
http://arxiv.org/abs/2407.01889v1
20240702021934
ALMA reveals spatially-resolved properties of molecular gas in the host galaxy of FRB 20191001A at z = 0.2340
[ "Itsuki Yamanaka", "Bunyo Hatsukade", "Fumi Egusa", "Tetsuya Hashimoto", "Yuu Niino", "Tzu-Yin Hsu", "Hiroyuki Kaneko", "Kotaro Kohno" ]
astro-ph.GA
[ "astro-ph.GA" ]
Itsuki Yamanaka (0009-0001-4699-5811), Institute of Astronomy, Graduate School of Science, The University of Tokyo, 2-21-1 Osawa, Mitaka, Tokyo 181-0015, Japan; Bunyo Hatsukade (0000-0001-6469-8725), National Astronomical Observatory of Japan, 2-21-1 Osawa, Mitaka, Tokyo 181-8588, Japan; Graduate Institute for Advanced Studies, SOKENDAI, Osawa, Mitaka, Tokyo 181-8588, Japan; Institute of Astronomy, Graduate School of Science, The University of Tokyo, 2-21-1 Osawa, Mitaka, Tokyo 181-0015, Japan; Fumi Egusa (0000-0002-1639-1515), Institute of Astronomy, Graduate School of Science, The University of Tokyo, 2-21-1 Osawa, Mitaka, Tokyo 181-0015, Japan; Tetsuya Hashimoto (0000-0001-7228-1428), Department of Physics, National Chung Hsing University, No. 145, Xingda Rd., South Dist., Taichung, 40227, Taiwan (R.O.C.); Yuu Niino (0000-0001-5322-5076), Kiso Observatory, Institute of Astronomy, Graduate School of Science, The University of Tokyo, 10762-30 Mitake, Kiso-machi, Kiso-gun, Nagano 397-0101, Japan; Tzu-Yin Hsu (0000-0002-0944-5634), Department of Physics, National Tsing Hua University, 101, Section 2, Kuang-Fu Road, Hsinchu, 30013, Taiwan (R.O.C.); Hiroyuki Kaneko (0000-0002-2699-4862), Department of Environmental Science, Faculty of Science, 8050 Ikarashi 2-no-cho, Nishi-ku, Niigata, Niigata 950-2181, Japan; Kotaro Kohno (0000-0002-4052-2394), Institute of Astronomy, Graduate School of Science, The University of Tokyo, 2-21-1 Osawa, Mitaka, Tokyo 181-0015, Japan § ABSTRACT We report the detection of the CO(2–1) emission line, with a spatial resolution of 0.″9 (3.5 kpc), from the host galaxy of the fast radio burst (FRB) FRB 20191001A at z=0.2340, using the Atacama Large Millimeter/submillimeter Array (ALMA). This is the first detection of spatially resolved CO emission from the host galaxy of an FRB at a cosmological distance. The inferred molecular gas mass of the host galaxy is (2.3±0.4)×10^10 M_⊙, indicating that it is gas-rich, as evidenced by the measured molecular gas fraction μ_gas=0.50±0.22. This molecular gas mass and the star formation rate of the host, SFR=8.06±2.42 M_⊙ yr^-1, differ from those observed in the other FRB host galaxies, which have an average M_gas=9.6×10^8 M_⊙ and SFR=0.90 M_⊙ yr^-1. This lends further credibility to the hypothesis that FRBs may originate from single or multiple progenitors across a diverse range of galaxy environments. Based on modeling of the observed velocity field, we find that the molecular gas disk is dominated by ordered circular rotation, despite the fact that the host galaxy has a gas-rich companion galaxy at a projected separation of ∼25 kpc; the formation of the FRB's progenitor might therefore not have been triggered by this interaction. We derive a 3σ upper limit on the molecular gas column density at the FRB detection site of < 2.1×10^21 cm^-2. § INTRODUCTION Fast radio bursts (FRBs) are bright radio pulses observed with a dispersed sweep, first reported by <cit.>. Their pulse timescales range from microseconds to milliseconds, and their dispersion measures and the localization of FRBs <cit.> suggest that they have an extragalactic origin <cit.>. Although many models have been proposed, involving compact-object mergers, magnetars, neutron stars, supernovae (SNe), and gamma-ray bursts (GRBs) <cit.>, there is no definitive explanation for the origin of FRBs, making it one of the most important questions in modern astronomy.
Given the expected correlation between the environments in which FRBs occur and their galactic environments, recent approaches to understanding the origin of FRBs involve observing the host galaxies. So far, about 30 FRB host galaxies have been identified, and optical/near-infrared studies have shown a wide range of stellar masses and star formation rates <cit.>. To further understand star formation in these galaxies, observing the molecular gas, the raw material for star formation, is an effective approach. Since the hydrogen molecule does not emit electromagnetic radiation at the temperatures typical of molecular clouds, it is standard to use CO lines to derive molecular gas properties. The CO observations of FRB host galaxies by <cit.> showed that their molecular gas masses span a wide range (∼2.5 orders of magnitude). <cit.> showed, using the Kaplan-Meier estimator for the molecular gas fractions μ_gas, that FRB host galaxies belong to a different population from star-forming galaxies. However, these studies cannot provide statistical constraints on the specific origin of FRBs because of the small sample size (9 galaxies with CO observations, 4 of them undetected), so further CO observations of FRB host galaxies are needed. Observations of molecular gas (and HI gas) can also reveal interactions between galaxies, such as gas inflows, outflows, and mergers. HI observations have shown gas turbulence and mergers <cit.> and asymmetric line profiles <cit.> in FRB host galaxies, and the CO spectra of FRB host galaxies are also found to be asymmetric <cit.>. These studies suggest that recent star formation activity was triggered by galaxy interactions, which may have led to FRB progenitor activity. However, it is not clear whether these emission-line profiles are the result of interaction, because the galaxies are not spatially resolved. It is therefore necessary to spatially resolve FRB host galaxies to determine whether galactic interaction is really a critical condition for FRBs. Furthermore, to identify the immediate environment of an FRB, it is essential to determine the molecular gas properties at the FRB location. In this study, we report observations of the host galaxy of FRB 20191001A and its companion galaxy, which is thought to interact with the FRB host, using the Atacama Large Millimeter/submillimeter Array (ALMA) with a spatial resolution of 3.5 kpc. We derive the molecular gas properties of these galaxies and at the FRB location, and we discuss the FRB progenitor and the possible effects of the galactic interaction. Throughout this paper, we adopt the cosmological parameters H_0=69.6 km s^-1 Mpc^-1 and Ω_M=0.286 from <cit.> to calculate physical quantities. § OBSERVATIONS §.§ Target FRB 20191001A was detected by the Australian Square Kilometre Array Pathfinder (ASKAP) on 2019 October 1 at 16:55:35.97081 UT and localized to the host galaxy DES J213324.44-544454.65 at z=0.2340 (hereafter HG 20191001A, or simply "the host") <cit.>. The stellar mass is M_⋆=(4.6±1.9)×10^10 M_⊙, the star formation rate (SFR) is 8.1±2.4 M_⊙ yr^-1, and the gas-phase metallicity is 12+log(O/H)=8.94±0.05 <cit.>; the galaxy is located at the massive end of the star-forming main sequence in the M_⋆-SFR diagram. The galaxy J213323.65-544453.6 at z=0.2339 (hereafter "the western source") is located ∼25 kpc to the west of the host and is considered to be physically interacting with it (Figure <ref>). The SFR of the western source is estimated to be about 21 M_⊙ yr^-1 based on its 1.4 GHz luminosity <cit.>. The properties of the FRB, its host, and the western source are summarized in Table <ref>.
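As a cross-check of the physical scales quoted above, the adopted cosmology can be evaluated with astropy; this is a convenience sketch, not part of the authors' analysis.

    from astropy.cosmology import FlatLambdaCDM
    import astropy.units as u

    cosmo = FlatLambdaCDM(H0=69.6, Om0=0.286)
    z = 0.2340

    d_l = cosmo.luminosity_distance(z)             # ~1179 Mpc, as adopted below
    kpc_per_arcsec = cosmo.kpc_proper_per_arcmin(z).to(u.kpc / u.arcsec)

    beam_kpc = (0.9 * u.arcsec * kpc_per_arcsec).to(u.kpc)    # ~3.5 kpc (0."9 beam)
    sep_arcsec = (25 * u.kpc / kpc_per_arcsec).to(u.arcsec)   # companion separation
    print(d_l, beam_kpc, sep_arcsec)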
§.§ ALMA observations and data reduction A CO(2–1) observation of FRB 20191001A was made on 2022 September 25, from 01:15:40 to 01:49:37 UTC, with ALMA Band 5 (project code 2021.1.00027.S). Forty-five antennas were used, with baseline lengths of 15.1–500.2 m. The total integration time on the science target was 11 min 9 s. The spectral window IDs were 25, 27, 29, and 31; each spectral window had a bandwidth of 1.875 GHz subdivided into 1920 channels. Data reduction was performed with the Common Astronomy Software Application (CASA; ), with the CASA pipeline version 6.4.1.12. Imaging was performed using the tclean task, with a cell size of 0.2'', briggs weighting, a robust parameter of 2, a threshold of twice the rms noise, and a mask covering the CO emission area. The data cube was generated over the velocity range from -1000 km s^-1 to 1000 km s^-1 with a velocity resolution of 50 km s^-1. In the cube, 0 km s^-1 is defined by the rest-frame frequency calculated from the host's optical redshift, which is based on the observed wavelengths of the Hβ, [O III]λ5007, Hα, and [N II]λ6583 lines. The continuum map was created by excluding channels with CO emission. The rms noise level was 0.92 mJy beam^-1 for each channel of the 50 km s^-1 cube and 3.0×10^-2 mJy beam^-1 for the continuum map. The synthesized beam sizes were 3.5×3.4 kpc^2 (0.94'' × 0.91'') and 4.1×3.8 kpc^2 (1.1'' × 1.0''), respectively. § RESULTS The CO(2–1) integrated intensity map is created by summing all the channels in the cube with no threshold and is presented as blue contours in Figure <ref>. The results for each galaxy are also shown in Figure <ref>. The CO(2–1) emission line was detected in both galaxies, and the spectra were extracted from the area containing each galaxy. The results of the analysis in this section are shown in Table <ref>. §.§ HG 20191001A The upper panels in Figure <ref> show the CO(2–1) line profile, CO(2–1) integrated intensity map, and continuum map of HG 20191001A, from left to right. The CO line is spatially resolved, which is the first time for an FRB host galaxy at a cosmological distance. The peak signal-to-noise ratio in the cube is S/N=9.9. Fitting a Gaussian function to the CO line profile, the velocity width of the galaxy (full width at half maximum; FWHM) was estimated to be 346±49 km s^-1. For the 1.3 mm continuum, no S/N>3 detection was found. Using the same region from which the CO(2–1) spectrum was extracted, we derived a 3σ upper limit to the total continuum flux of 0.28 mJy. The CO luminosity is calculated from the following equation by <cit.>: L'_CO(2-1) = 3.25×10^7 S_CO(2-1)Δ V ν_obs^-2 D_L^2 (1+z)^-3, where L'_CO(2-1) is the line luminosity of CO(2–1) in units of K km s^-1 pc^2, S_CO(2-1)Δ V is the velocity-integrated flux in Jy km s^-1, ν_obs is the observed frequency in GHz, and D_L is the luminosity distance in Mpc. In this paper, we adopt ν_obs = 186.822 GHz and D_L = 1179 Mpc, calculated from the optical redshift. The line luminosity is calculated as L'_CO(2-1)=(5.3±0.9)×10^9 K km s^-1 pc^2. The molecular gas mass M_gas of the galaxy is calculated from M_gas=α_CO L'_CO(1-0), where α_CO is the CO-to-H_2 conversion factor in units of M_⊙ (K km s^-1 pc^2)^-1 and L'_CO(1-0) is the line luminosity of CO(1–0) in K km s^-1 pc^2. L'_CO(1-0) was calculated assuming the typical star-forming galaxy value of L'_CO(2-1)/L'_CO(1-0)=0.77 <cit.>.
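The luminosity equation above can be evaluated directly. In the sketch below, the integrated flux of ~7.7 Jy km/s is an illustrative assumption inverted from the quoted luminosity; it is not a number taken from the paper.

```python
def co_line_luminosity(s_co_dv, nu_obs, d_l, z):
    """L'_CO = 3.25e7 * S_CO dV * nu_obs**-2 * d_l**2 * (1+z)**-3
    (S_CO dV in Jy km/s, nu_obs in GHz, d_l in Mpc; result in K km/s pc^2)."""
    return 3.25e7 * s_co_dv * nu_obs**-2 * d_l**2 * (1.0 + z)**-3

# 7.7 Jy km/s is an assumed flux chosen to match the quoted L'_CO(2-1).
l_co21 = co_line_luminosity(7.7, nu_obs=186.822, d_l=1179.0, z=0.2340)
l_co10 = l_co21 / 0.77        # assuming L'_CO(2-1)/L'_CO(1-0) = 0.77
print(f"{l_co21:.2e}")        # ~5.3e9 K km/s pc^2, as quoted for the host
```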
The conversion factor α_CO may depend on the gas-phase metallicity, but since the host's value of 12+log(O/H)=8.9±0.1 is similar to solar metallicity, we use the Galactic value α_CO=4.3 M_⊙ (K km s^-1 pc^2)^-1. The validity of the conversion factor will be discussed in Sec. <ref>. The calculated gas mass of the host is M_gas=(2.3±0.4)×10^10 M_⊙. Figure <ref> compares the CO half-light radius of HG 20191001A with that of nearby star-forming galaxies from the EDGE-CALIFA survey <cit.>. This figure shows that the CO disk of HG 20191001A is slightly larger than that of star-forming galaxies with similar stellar mass. In the M_gas-SFR diagram in Figure <ref>, HG 20191001A is located at the position of the star mark. Compared to the previously observed FRB host galaxies (brown filled squares from <cit.> and <cit.>), HG 20191001A is more gas-rich and more actively star-forming. The brown filled squares represent the values calculated with the Galactic conversion factor, and the open squares represent the values calculated with the conversion factors used in the original papers <cit.>. While a majority of the FRB hosts have lower molecular gas masses than the long-duration GRB hosts, some FRB hosts, including HG 20191001A, are comparable to the GRB hosts, showing the diversity of FRB hosts. Although the areas occupied by FRB and SN host galaxies <cit.> are similar, some outliers exist. The sample size is insufficient to show that this difference is statistically significant, and further verification is expected. Similarly, in the M_gas-M_⋆ diagram, FRB host galaxies span a wide range. In the t_depl-z and μ_gas-z diagrams (where t_depl=M_gas/SFR and μ_gas=M_gas/M_⋆), HG 20191001A appears to be located not far from the empirical trend line of star-forming galaxies <cit.> and the hosts of other transients. The FRB host galaxies do not show any clear redshift dependence but may be widely distributed along the vertical axis, depending on the conversion factor used. §.§ the western source The bottom three panels in Figure <ref> show the line profile, CO integrated intensity map, and continuum map for the western source. The peak signal-to-noise ratio in the cube is S/N=13, and the FWHM velocity width is 374±29 km s^-1. The 1.3 mm continuum was detected with S/N=3.6, and the continuum flux was S_1.3mm=0.17±0.07 mJy. Using the same assumptions as in the previous section, L'_CO(2-1)=(6.7±0.7)×10^9 K km s^-1 pc^2 and M_gas=(2.9±0.3)×10^10 M_⊙ were calculated. The gas mass of the western source is slightly larger than that of the host. §.§ FRB 20191001A site FRB 20191001A was detected 11 kpc north of the center of the host, and no CO was detected within the FRB localization uncertainty <cit.>. The 3σ upper limit for the molecular gas column density (N(H_2)) is < 2.1×10^21 cm^-2, obtained by assuming the Galactic conversion factor and a CO linewidth at the site of 100 km s^-1, the velocity width of the edge part of the galaxy (see Section <ref>). Some theoretical models predict that millimeter emission is enhanced a few years after the FRB event <cit.>, but no millimeter emission has been detected in previous observations <cit.>. No 1.3 mm continuum was detected at the FRB site in this study, and the 3σ upper limit 1090 days after the FRB detection is < 9.1×10^-2 mJy beam^-1. §.§ Modelling CO disk dynamics The CO disk dynamics of both galaxies were modeled with ^3DBarolo <cit.>. ^3DBarolo fits tilted-ring models to a data cube.
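The derived quantities used in these comparison diagrams follow directly from the measured masses. A minimal sketch, assuming the values quoted above for HG 20191001A:

```python
m_gas  = 2.3e10           # Msun, this work
m_star = 4.6e10           # Msun, literature value (Sec. 2.1)
sfr    = 8.1              # Msun/yr

mu_gas = m_gas / m_star   # molecular gas fraction, ~0.50 as in the abstract
t_depl = m_gas / sfr      # depletion time, ~2.8e9 yr
print(f"mu_gas = {mu_gas:.2f}, t_depl = {t_depl:.1e} yr")
```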
The ring width was set to 0.25'', the modeling was performed out to the CO detection limit, and the normalization option was set to local. In the first estimation, the rotational velocity, velocity dispersion, systemic velocity, inclination, and position angle were left free; in the second estimation, the values of the inclination and position angle were fixed between rings. The parameter VRAD, the galactocentric radial velocity, was fixed to 0 in the modeling, assuming no radial motion of the molecular gas. The results of the modeling are shown in Figures <ref> and <ref> and Table <ref>. The small velocity dispersion (V_disp in Table <ref>) and the position-velocity (PV) diagrams (Figures <ref> and <ref>) indicate that the molecular gas disks of both galaxies are rotation dominated. The ratio of the rotational velocity to the velocity dispersion is 49 and 42 for HG 20191001A and the western source, respectively. These values are well above unity, supporting rotation-dominated systems <cit.>. We do not find a clear sign of interaction between these galaxies. The difference between the derived CO systemic velocity and the optical systemic velocity is small for the host. We fit an exponential function to the radial profile of the surface brightness and derive half-light radii of 6.9±0.7 kpc for the host and 3.0±0.5 kpc for the western source. We derive the dynamical mass M_dyn using the rotational velocity V_rot as follows: M_dyn=RV_rot^2/G. The dynamical mass at R=8.0 kpc, the CO detection limit, is M_dyn=(7.8±2.0)×10^10 M_⊙ for the host. For the western source, R=4.2 kpc and M_dyn=(1.3±0.4)×10^11 M_⊙. For the host, since the sum of the stellar mass and gas mass must be smaller than the dynamical mass, an upper limit for α_CO can be determined as α_CO < 5.9 M_⊙ (K km s^-1 pc^2)^-1. This result is consistent with the value of α_CO assumed in this study. § DISCUSSION §.§ Molecular gas properties of the host galaxy <cit.> argued that the gas masses and SFRs of FRB host galaxies take a wide range of values and suggested that FRBs may not originate from a single progenitor type whose occurrence depends on the molecular gas properties, or that they originate from multiple progenitor types that can take a wide range of values. Our observations show that the range of molecular gas property values, especially M_gas, of FRB host galaxies has expanded compared to previous observations. This suggests that the scenario of a single progenitor originating from a star-formation-dependent environment, such as a massive star, would be disfavoured, whereas multiple progenitors, or a progenitor independent of molecular gas properties, such as old stellar populations, would be favoured as the FRB origin. In this discussion, we do not distinguish between repeating and one-off FRBs due to the small sample size. If the two origins are different, as is often argued <cit.>, this is consistent with the "multiple origin" scenario suggested by the molecular gas results. To discuss the two populations separately, a larger sample is necessary. §.§ Molecular gas property at the FRB site While the FRB environments can be estimated from the molecular gas properties of their host galaxies, a more direct discussion benefits from examining the molecular gas at the site where the FRB is detected. <cit.> conducted CO observations of the host galaxies of type Ia, Ibc/IIb, and II SNe and derived the molecular gas column densities at those sites.
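The dynamical-mass estimate above is a one-line application of M_dyn = R V_rot^2 / G. The rotational velocity is not quoted in the text, so the value used below is the one implied by the quoted M_dyn at R = 8.0 kpc; it is an inferred assumption, not a published measurement.

```python
import astropy.units as u
from astropy.constants import G

def dynamical_mass(radius_kpc, v_rot_kms):
    """M_dyn = R * V_rot**2 / G, returned in solar masses."""
    m = (radius_kpc * u.kpc) * (v_rot_kms * u.km / u.s) ** 2 / G
    return m.to(u.M_sun)

# ~205 km/s (inferred) reproduces M_dyn ~ 7.8e10 Msun at R = 8.0 kpc.
print(dynamical_mass(8.0, 205.0))
```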
Figure <ref> shows the 3σ upper limit of the molecular gas column density (N(H_2)) for FRB 20191001A and compares it with the cumulative distributions for the SNe <cit.>. The upper limit (the red solid line) is larger than the median value of the gas column density for each type of SN (dashed lines) and thus does not constrain the progenitor. The limiting factor here is the non-detection of CO at the FRB site, and deeper observations may place stronger constraints on the progenitor. We would like to note that this study is the first to discuss the molecular gas properties at an FRB site. More detections will allow us to create a cumulative distribution and thereby make statistical comparisons with other possible progenitors. Further observations of FRB hosts and sites with high spatial resolution and sensitivity are needed for this purpose. §.§ Galactic interaction HI observations of FRB host galaxies that show features of interaction have been reported by <cit.>, <cit.>, and <cit.>. <cit.> showed a disturbed kinematic structure in an FRB host from CO observations. HG 20191001A is thought to interact with the western source, and we discuss the extent of the interaction and its effect on FRB generation. We cannot see asymmetric or disrupted structures in the modeled velocity fields or the position-velocity diagrams (see Figures <ref>, <ref>). This means that the velocity fields of the host and the western source are well fitted by normal rotation, and the effect of the interaction between them appears to be weak in terms of molecular gas kinematics. The impact of the interaction can also be investigated from the star formation activity. For example, <cit.> showed that the SFR and gas mass are enhanced in galaxies in pairs. In the M_gas - SFR diagram (Figure <ref>), the host and the western source are located near star-forming galaxies at z=0.2–0.5 from the PHIBSS2 survey <cit.>. This result indicates that neither galaxy is experiencing a starburst. <cit.> proposed a scenario in which gas is dynamically disrupted by galactic interactions and the resulting enhanced star formation leads to FRB generation, but the above discussion suggests that FRB 20191001A is not likely to follow that scenario. § SUMMARY We observed the CO(2–1) emission of the host galaxy of FRB 20191001A (HG 20191001A), along with its close (∼25 kpc) companion galaxy (the western source), using ALMA and obtained the following results and findings. * The CO(2–1) emission lines of HG 20191001A and its companion, the western source, were successfully detected, and the emission is spatially resolved for HG 20191001A, for the first time for an FRB host at cosmological distances. This FRB host galaxy has a larger molecular gas mass than the other known FRB hosts with CO observations. Its disk size is also relatively large compared to nearby star-forming galaxies. * The measured molecular gas mass of HG 20191001A, (2.3±0.4)×10^10 M_⊙, has expanded the parameter space for molecular gas mass in FRB host galaxies, thereby strengthening the possibility that FRBs originate from multiple sources or from sources unrelated to recent star formation. * We discussed the local molecular gas environment of an FRB for the first time. The obtained 3σ upper limit of the molecular gas column density at the site of FRB 20191001A is 2.1×10^21 cm^-2. Deeper observations are needed to constrain its progenitor. * We performed dynamical modeling of the CO disks of HG 20191001A and the western source and found that they are both rotation-dominated.
The effect of the interaction of HG 20191001A with the western source, both in terms of kinematics and star formation, is unlikely to be strong. A larger sample is needed to further discuss whether galactic interactions trigger FRBs. We would like to thank Kasper E. Heintz for sharing his I-band image of HG 20191001A. This work is supported by JSPS KAKENHI grant No. 19K03925 and 23K03449 (B.H.), 17K14259 (F.E.), 20H00172 (F.E., H.K.), 17H06130 (K.K.), and the NAOJ ALMA Scientific Research Grant Number 2020-15A (H.K.). T.H. acknowledges the support of the National Science and Technology Council of Taiwan through grants 110-2112-M-005-013-MY3, 110-2112-M-007-034-, and 111-2123-M-001-008-. This paper makes use of the following ALMA data: ADS/JAO.ALMA#2021.1.00027.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. Data analysis was carried out on the Multi-wavelength Data Analysis System operated by the Astronomy Data Center (ADC), National Astronomical Observatory of Japan.
http://arxiv.org/abs/2407.02985v1
20240703103137
Spatiotemporal patterns in the active cyclic Potts model
[ "Hiroshi Noguchi", "Jean-Baptiste Fournier" ]
cond-mat.stat-mech
[ "cond-mat.stat-mech", "nlin.PS" ]
[]noguchi@issp.u-tokyo.ac.jp Institute for Solid State Physics, University of Tokyo, Kashiwa, Chiba 277-8581, Japan []jean-baptiste.fournier@u-paris.fr Laboratoire Matière et Systèmes Complexes (MSC), Université Paris Cité & CNRS, 75013 Paris, France § ABSTRACT The nonequilibrium dynamics of a cycling three-state Potts model is studied on a square lattice using Monte Carlo simulations and continuum theory. This model is relevant to chemical reactions on a catalytic surface and to molecular transport across a membrane. Several characteristic modes are formed depending on the flipping energies between successive states and the contact energies between neighboring sites. Under cyclic symmetry conditions, cycling homogeneous phases and spiral waves form at low and high flipping energies, respectively. In the intermediate flipping-energy regime, these two modes coexist temporally in small systems and/or at low contact energies. Under asymmetric conditions, we observed small biphasic domains exhibiting amoeba-like locomotion and the temporal coexistence of spiral waves and a dominant non-cyclic one-state phase. An increase in the flipping energy between two successive states, say state 0 and state 1, while keeping the other flipping energies constant, induces the formation of the third phase (state 2), owing to the suppression of the nucleation of state 0 domains. Under asymmetric conditions regarding the contact energies, two different modes can appear depending on the initial state, due to a hysteresis phenomenon. Spatiotemporal patterns in the active cyclic Potts model Jean-Baptiste Fournier July 8, 2024 ======================================================== § INTRODUCTION Spatiotemporal patterns are widely observed under nonequilibrium conditions, such as target- and spiral-shaped waves in two-dimensional (2D) systems <cit.>. Cyclic dynamics of three or more states is one of the conditions for generating such waves. Cyclic states are observed in various systems, such as chemical reactions, gene expression, and ecological systems <cit.>. These dynamics are often explained using deterministic continuum equations <cit.> and lattice Lotka–Volterra models (also called rock–paper–scissors models) <cit.>. The spatiotemporal patterns under noise and fluctuations are not yet fully understood. Noise is typically used to trigger an excitable wave and to generate stochastic resonance <cit.>. The wave propagation of subexcitable media of the photosensitive Belousov–Zhabotinsky reaction is enhanced by random variation of the light intensity <cit.>. In partial differential equations, random noise is added to include mesoscale fluctuations <cit.>. In the lattice Lotka–Volterra models <cit.> for predator–prey systems, noise is included as a random selection of lattice sites. Since populations increase by self-reproduction, the nucleation of species domains is not considered. In contrast, in small molecular systems, thermal fluctuations have dominant effects, and nucleation and growth are the crucial kinetic processes. Spiral and target wave patterns have been observed in chemical reactions on noble metal surfaces <cit.>. In a previous paper <cit.>, we studied a nonequilibrium three-state Potts model under cyclic symmetry (the three states being equivalent), and reported that nucleation and growth can alter spatiotemporal patterns. We found two modes: (i) a homogeneous cycling mode (HC), where each state dominantly covers the system while changing cyclically through nucleation and growth.
(ii) a spiral wave (SW) mode, where the three states coexist while spiral waves are formed from the contact points of the three states. These two modes can temporally coexist in small systems but not in larger systems. In this study, we examine the dynamics of the three-state active Potts model under asymmetric cycling conditions. We will show several modes, hysteresis, and amoeba-like motion of small biphasic domains. A non-cyclic one-state dominant phase and its temporal coexistence with the SW mode newly appear. The cyclic Potts model and method are described in Sec. <ref>. It is a 2D lattice model for chemical reactions on a catalytic surface and molecular transport through a membrane. The dynamics under cyclic-symmetry conditions are briefly described in Sec. <ref>. The dynamics for asymmetric flipping energies and asymmetric contact energies are described in Secs. <ref> and <ref>, respectively. The theoretical analysis by a continuum theory is described in Sec. <ref>. Finally, a summary is presented in Sec. <ref>. § ACTIVE CYCLIC POTTS MODEL We consider a 2D square lattice with sites i having three states s_i=α, with α∈{0,1,2}, as shown in Fig. <ref>(a). Two nearest-neighbor sites have a contact energy -J_s_is_j. In addition, the three states have different self-energies ε_α, so that the total Hamiltonian reads H = H_int + ∑_iε_s_i, H_int= - ∑_⟨ ij⟩ J_s_is_j. To model the attraction between like states, we mainly use symmetric contact energies J_αα=J, and set J_αβ=0 for α≠β. This corresponds to the standard three-state Potts model with external fields <cit.>. We define the equilibrium flipping energies h_αβ as the variations h_αβ=ε_α-ε_β. Hence, h_01+h_12+h_20=0 by definition in thermal-equilibrium conditions. We extended this model to the nonequilibrium situation in which h_αβ≠ε_α-ε_β, but where still h_βα=-h_αβ. Then we have h_01+h_12+h_20= h_cyc≠0 (see Fig. <ref>(b)). A possible way to do this is to add a nonequilibrium or active contribution to some or all of the flipping energies, in the form h_αβ=ε_α-ε_β+h^neq_αβ. Hence, the detailed-balance condition is locally satisfied for one flip between states s=α and β but not globally for cycles (s=0 → 1 → 2 → 0). Three possible designs are described below: chemical reaction, molecular transport, and excitation, as shown in Fig. <ref>(c). (1) Reaction on a catalytic surface: molecules bind and unbind to the surface, with the state s=0 corresponding to an empty surface site, and the other two states to a site occupied by a molecule, either in the form s=1 or s=2. The surface catalyzes the reaction from s=1 to s=2, whereas this reaction is kinetically frozen in the bulk. (2) Molecular transport between the two sides of a membrane: amphiphilic molecules bind to both sides of a bilayer membrane (states s=1 and 2) and switch between them (flip–flop). The molecules are transported by a difference in chemical potential between the solutions on the two sides of the membrane <cit.>. (3) An excitation process: s=1 has a low-energy state s=1^- and a high-energy state s=1^+ triggered by external means (e.g., photoexcitation or ATP hydrolysis). Then, if the backward transformation from s=1^+ to s=0 is negligibly slow, we have effectively h_cyc=ε_1^+-ε_1^->0. Experimentally, chemical waves have been observed at catalytic surfaces (H_2 or CO oxidation and NO or NO_2 reduction on noble metal surfaces such as palladium and ruthenium) <cit.>. The transfer of water molecules through a chiral liquid-crystalline monolayer induces a target wave <cit.>.
Although more than two reactions or complicated molecular interactions occur in these experimental systems, we constructed a simple minimal model to capture the essence. For simplicity, no state exchange between neighboring sites (diffusion) is considered. The thermal energy and the lattice spacing are normalized to unity. We mainly use J_00=J_11=J_22=J=2 to induce a phase separation between different states as a symmetric condition for the contact energy. To examine the effects of asymmetric contact energies, J_00 is varied from 1.75 to 2.8 while keeping J_11=J_22=2, in Sec. <ref>. Our lattice has N sites with a side length of √(N) under periodic boundary conditions. The state of a site is flipped according to a Monte Carlo (MC) method. A site is chosen at random, and then it is flipped to either of the other two states with probability 1/2. The new state is accepted or rejected with the Metropolis probability p_s_is'_i=min(1,e^-Δ H_s_is'_i), in which Δ H_s_is'_i=H'_int-H_int-h_s_is'_i is the energy variation in the change from the old state to the new one. This procedure is performed N times per MC step (time unit). Statistical errors are calculated from three or more independent runs. § DYNAMICS IN CYCLIC-SYMMETRY CONDITIONS First, we briefly describe the dynamics of the active cyclic Potts model in symmetric conditions, i.e., for h_01=h_12=h_20=h and J_00=J_11=J_22=J. Detailed results at J=2 are given in Ref. <cit.>. For large systems (N≥ 192^2), at low h, either the HC or SW mode appears (see their definitions in Sec. <ref>), depending on the initial state, while only the SW mode appears at high h. Transitions from the HC to the SW mode were observed, but the reverse transition never occurred (within accessible simulation times). In the SW mode, spiral waves propagate from the contact points of the three states, and the number of each state fluctuates around N/3 (see the snapshots in Fig. <ref>(d)). For small systems (N≤ 154^2), the HC mode appears at low h, the SW mode appears at high h, and the two modes temporally coexist at medium h. Representing temporal coexistence by a `+', we denote this mixed state by SW+HC. The ratios of these two modes can be quantified using the number densities N_α/N of the three states. For N_α>0.98N, we consider that the lattice is dominantly covered by a unique state s=α, and we call this global state the `one' phase (except for the condition of J=1.2). The fraction of time spent in this state is denoted by p_phase=p_one (Fig. <ref>). In the HC mode, this `one' phase global state cyclically changes from s=α to [α+1], where [α+1]=(α+1) mod 3. Since the transient dynamics of nucleation and growth is rapid, we identify p_one with the fraction of time spent in the HC mode in the cyclic-symmetry condition. Moreover, we consider that the system is in the `three' phases global state when N_0>0.02N, N_1>0.02N, and N_2>0.02N. The fraction of time spent in this state is called p_phase=p_three (Fig. <ref>). The ratio p_one/p_three corresponds to the ratio of the times spent in the HC and SW modes in the cyclic-symmetry condition (see Figs. <ref>(a) and (b)). Since the nucleation barrier decreases with increasing h (at fixed contact energy J) and nucleation occurs more frequently in larger systems, the mean lifetime of the `one phase' state was found to be roughly proportional to exp(-h)/N <cit.>. Furthermore, the domains were found to grow with a velocity of v_wave≃ 0.07h + 0.009 at h ≥ 0.5.
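The Metropolis update rule described above is simple to implement. The following is a minimal sketch of our own (not the authors' code): J[a][b] holds the contact energies, h[a][b] = -h[b][a] the flipping energies, and one MC step performs N = L*L single-site trials with periodic boundaries.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_step(s, J, h, L):
    """One MC step of the active cyclic Potts model (Metropolis)."""
    for _ in range(L * L):
        i, j = rng.integers(L), rng.integers(L)
        old = s[i, j]
        new = (old + rng.integers(1, 3)) % 3   # either other state with p = 1/2
        nbrs = (s[(i + 1) % L, j], s[(i - 1) % L, j],
                s[i, (j + 1) % L], s[i, (j - 1) % L])
        dH_int = sum(J[old][b] - J[new][b] for b in nbrs)   # H'_int - H_int
        dH = dH_int - h[old][new]
        if dH <= 0 or rng.random() < np.exp(-dH):           # min(1, e^{-dH})
            s[i, j] = new

L = 128
s = rng.integers(0, 3, size=(L, L))
J = [[2.0, 0.0, 0.0], [0.0, 2.0, 0.0], [0.0, 0.0, 2.0]]     # J_aa = J = 2
h = [[0.0, 1.0, -1.0], [-1.0, 0.0, 1.0], [1.0, -1.0, 0.0]]  # h = 1, h_cyc = 3
for t in range(10):
    mc_step(s, J, h, L)
```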
Hence, at high h and/or large N, during the growth of an s=[α+1] domain within an s=α domain, the nucleation of a smaller s=[α+2] domain often starts. Because domain growth is a stochastic process, three-state contacts are frequently produced. When a three-state contact appears, a spiral wave forms, as the three domain boundaries associated with this contact point exhibit a rotating motion (each s=α phase is invaded by the adjacent s=[α+1] phase). Conversely, the SW mode changes into the HC mode when the three-state contact points stochastically disappear. This disappearance occurs in small systems but not in large systems (N≥ 192^2). A similar size dependence of spiral waves was reported in excitable media <cit.>. Since the density of three-state contacts increases linearly with increasing h (compare the two snapshots in Fig. <ref>(d)), the SW mode changes into the HC mode more often at smaller h. More details are given in Ref. <cit.>. During the cyclic flipping, the forward flip (s=α→ [α+1]) occurs more frequently than the backward flip (s=[α+1]→α). This is quantified by the flow q_f, defined as the average difference between forward and backward flips per site during one MC step. Figure <ref>(c) shows that q_f is much higher in the SW mode than in the HC mode and increases with increasing h in both modes. As the contact energy J decreases, the SW mode appears at lower h, and the cyclic change of dominant phases in the HC mode becomes faster. Thus, the temporal coexistence of these two modes can be observed in large systems. For N=256^2, the coexistence is obtained at J=1.2, as shown in Fig. <ref>. Note that the threshold N_α/N= 0.95 is used for the one-phase state, since the domains contain 3% of the other states at J=1.2 (see Fig. <ref>(a)). The transition between the HC and SW modes occurs at higher h with increasing J. For J=1.5, a discontinuous transition is obtained at h ≃ 3.5, and the two modes persist for t ∼ 10^8 around the transition point owing to hysteresis. However, this transition likely becomes continuous for simulations over much longer periods. § DYNAMICS FOR ASYMMETRIC FLIPPING ENERGIES In this section, we describe the dynamics with asymmetric flipping energies, for symmetric contact energies J_αα=2. By asymmetric flipping, we mean that h_01, h_12, and h_20 are not all equal. We still assume, however, that h_01+h_12+h_20= h_cyc≠0 and keep h_βα=-h_αβ. First, we consider a small system of N=128^2 sites, in which the HC and SW modes can coexist in the symmetric condition. When the flipping energies slightly deviate from the symmetric condition, the dynamics exhibits only small changes. For example, the SW+HC mode is obtained at h_01=1.1, h_12=h_20=1, similar to that at h_01=h_12=h_20=1, and the mean densities of s=0 and s=2 are slightly lower and higher than 1/3 in the SW mode, respectively (see Fig. <ref> and Movies S1 and S2). However, large asymmetry alters the dynamics qualitatively. In symmetric conditions, the three-phase contact points move isotropically, and the boundaries between domains all move at the same speed in the direction from s=[α+1] to s=α. In the case of asymmetric flipping energies, the speeds of the different boundaries are not the same. In the SW mode, when the flipping energies differ substantially, the three-phase contact points move ballistically rather than diffusively (see Fig. <ref> and Movie S3). For h_01=1.6, h_12=1.7, h_20=1, as in Fig.
<ref>, small biphasic domains of s=1 and 2 exhibit amoeba-like locomotion, resembling bacterial colony growth, moving in the direction of the s=1 region. Their large fluctuations often result in domain division and disappearance. In the asymmetric case, the average fraction ρ_α=N_α/N of sites in each state differs from 1/3, as can be seen in Fig. <ref>. The contact energies turn out to be essential in determining those fractions. Indeed, for h_01=1.6, h_12=1.7, and h_20=1, the mean-field approach developed in Sec. <ref>, which neglects both the contact energies and the spatial structures, predicts ρ_0≃ρ_1≃0.32 and ρ_2≃0.37, whereas for J=2 the actual ratio is ρ_0/ρ_1≈5 (see Fig. <ref>). This is expected, since a single site flip within a domain involves the loss of four J_αα=J contacts, whereas a boundary flip involves the loss of two contacts (2J=4). The nucleation of a domain and the motion of an interface are therefore cooperative events, and they strongly affect the state ratios. Dynamic phase diagrams are shown in Fig. <ref>. In addition to the SW and HC modes, new one-phase modes E_α, in which the state s=α is predominant as in equilibrium, appear when either h_01, h_12, or h_20 dominates. Once a one-phase mode E_α is established, it transforms neither into the SW mode nor into the next state s=[α+1] of the cycling process. Temporal coexistence SW+E_α is possible, as shown in Fig. <ref>. The various modes are distinguished as follows. When p_three>0.05 (see its definition in Sec. <ref>), the system is in the SW mode. When p_one>0.05, the system is either in the HC mode or in one of the E_α modes. When both exceed 0.05, the system is either in the coexistence mode SW+HC or in a coexistence mode SW+E_α. The HC and E_α modes are distinguished from the distribution of N_s shown in Fig. <ref>. When a peak exists at N_α/N≈ 1 and is ten times higher than the local minima close to N_α/N=1, we consider that the dominant phase of s=α exists for a recognizable period. When all three states take the dominant phase, the system is in the HC mode. Otherwise, we consider it to be in the E_α mode, in which the s=α state has the maximum peak at N_s/N≈ 1. The distribution in Fig. <ref>(b) belongs to E_2 but is close to the threshold of the HC mode. As h_01 increases while h_12 and h_20 are fixed, the dynamics changes from E_0 to SW+E_0, SW, SW+E_2, and E_2 (or HC instead of E_0), in order (see Figs. <ref> and <ref>). Interestingly, high h_01 generates the E_2 mode, although h_01 directly enhances the nucleation and growth of the s=1 phase. This dynamics can be understood by examining the lifetimes of the one-state dominant phases shown in Fig. <ref>(c). The lifetime of s=α is calculated as the period during which the system stays in N_α>0.98N. The mean lifetime of the s=0 dominant phase decreases exponentially with increasing h_01, owing to the more frequent nucleation and growth of the s=1 phase in the s=0 phase, as expected. Conversely, the mean lifetime of the s=1 phase is almost independent of h_01, because the s=1 phase is terminated by the nucleation and growth of the s=2 phase, which is determined by h_12. However, the s=2 phase remains for longer periods with increasing h_01, owing to the suppression of nucleation. To understand this, let us consider an isolated dimer of s=0 sites in the s=2 phase. One site in this dimer can go back to the s=2 state via two pathways.
One is the direct flip to s=2, whose probability is min(1,exp(2J-h_20)), since the number of contacts between s=0 and s=2 sites changes from six to four. The other is the two-step flipping via s=1, in which the probabilities of s=0→ 1 and 1→ 2 are min(1,exp(-J+h_01)) and min(1,exp(3J+h_12)), respectively. The latter flipping increases for higher h_01, since the first step is rate-limiting. Thus, the s=2 phase becomes temporally dominant. Conversely, acceleration of the second step does not enhance the latter flipping, as seen in the constant lifetime of the s=1 state. A similar tendency in the flipping energies is also found in the SW mode. When h_α[α+1] is the highest among the three flipping energies, the s=[α+2] state is the most occupied over the lattice (see Figs. <ref> and <ref>). The lifetime of the SW mode is calculated as the time elapsed from entering 0.1N<N_α<0.75N for all α to entering the one-phase mode (N_α>0.98N for one of the states). When the three forward flips are comparable (e.g., h_01≃ 1 while h_12=h_20=1 as in Fig. <ref>), the mean lifetime of the SW mode becomes longer and the system settles into this mode. As the flipping energies deviate more from the symmetric condition, one of the states (s=α) dominates the SW mode, and subsequently the SW mode changes into the s=α dominant phase, E_α, via the SW+E_α mode or the HC mode (see Fig. <ref>). The phase diagram in Fig. <ref>(a) is slightly asymmetric with respect to the symmetry axis h_01=h_12. This is due to the fact that the transition from the s=α state to the s=[α+1] state also depends on the [α+1]→ [α+2] flip, as mentioned above. This asymmetry can be captured more quantitatively by plotting the various physical quantities as a function of h_01-h_12 along the diagonal line fixing h_01+h_12, as shown in Fig. <ref>. At h_01+h_12=3 and h_20=1, the SW mode occurs more often for h_01-h_12<0 than for h_01-h_12>0, and the center of the SW existence region is at h_01-h_12≃ -0.1 (see Figs. <ref>(a) and (b)). The flow q_f has a maximum around this center (compare the red line in Fig. <ref>(c) and the blue line in Fig. <ref>(b)). With increasing h_01+h_12, the center position shifts in the positive direction and the maximum of q_f becomes higher (see the three solid lines in Fig. <ref>(c)). At higher h_20, q_f increases (compare the top two lines in Fig. <ref>(c)). Thus, the asymmetry direction and amplitude vary depending on the conditions. Last, we consider a large system of N=256^2 sites, in which the HC mode exists only within the hysteresis region at low h in symmetric conditions. The effects of the asymmetric flipping energies are similar to those in the small system (N=128^2). As h_01 increases, the number ratio N_2/N of the s=2 state increases in the SW mode, and subsequently, the E_2 phase appears via temporal coexistence with the SW mode (SW+E_2), as shown in Fig. <ref>. Since the size of the biphasic domains and the number density of three-state contact points continuously decrease with increasing N_2/N, the transition between the SW and E_α modes likely occurs continuously via mode coexistence at very large N. § DYNAMICS FOR ASYMMETRIC CONTACT ENERGIES In this section, we describe the dynamics in the case of asymmetric contact energies J_00≠J_11=J_22=J and symmetric flipping energies h_01=h_12=h_20=h, for N=128^2. When J_00>J, the s=0 state is stabilized and becomes dominant at high J_00, so that the E_0 mode appears (Fig. <ref>).
Conversely, with decreasing J_00<J, the E_2 mode appears, since the nucleation of an s=0 domain is suppressed due to the low contact energy in the domain. This is captured by the lifetimes of the one-state dominant phases, as in the case of the asymmetric flipping energies (see Fig. <ref>(c)). Interestingly, hysteresis occurs at high h and high J_00. Two different modes appear depending on the initial condition, as shown in Fig. <ref> and Movie S4: the E_0 mode and the SW+E_1 mode. They do not change into each other within our simulation times of ∼ 10^8, since their lifetimes become long due to high energy barriers for the transitions. In the s=1 phase, biphasic domains of s=2 and s=0 often appear but do not grow to cover the entire lattice (see Figs. <ref>(a) and (b)). At relatively low h, the SW+E_1 branch appears discontinuously in the middle of the E_0 region (see Fig. <ref>). Thus, the SW+E_1 mode is likely a kinetically trapped metastable state, and the E_0 mode remains the only solution in the long-time limit. In contrast, at high h, the E_0 branch is connected only with the E_0-mode solution at high J_00, and the SW+E_1 branch is continuously connected with the SW+E_1-mode solution at low J_00 (see the region of h≥ 1.5 in Fig. <ref>(d)). The SW+E_1 mode becomes the E_1 mode at high J_00 (J_00≥ 2.5 at h=1.5). Thus, the stable mode in the long-time limit likely changes from the SW+E_1 or E_1 mode to the E_0 mode in the middle of the hysteresis region. In the symmetric condition (J_00=J_11=J_22 and h_01=h_12=h_20=h), the three dominant phases (s=0, 1, and 2) are trapped in the limit h ≪ 1. In particular, they are degenerate thermal-equilibrium states at h=0. The hysteresis found here is a nonequilibrium extension of this degeneracy. § THEORETICAL ANALYSIS §.§ Homogeneous Mixed State in the Absence of Interactions Let us call ρ_α=N_α/N the number density of the state s=α. Since there are no empty sites, ρ_0+ρ_1+ρ_2=1. We work in units such that the thermal energy k_BT and the size of the lattice sites are unity. Let us first consider a simplified situation in which we disregard all spatial structures and neglect the interactions between adjacent sites, i.e., J_αα=0. In agreement with Eq. (<ref>), the energy density (per site) is given by f_self=ρ_1ε_1+ρ_2ε_2+(1-ρ_1-ρ_2)ε_0, and the flipping energies h_αβ are ε_α-ε_β in equilibrium. As mentioned earlier, we place the system in an arbitrary nonequilibrium state by setting h_αβ=ε_α-ε_β+h^neq_αβ, with h_βα=-h_αβ, while assuming that the transition rates w_αβ obey the local detailed-balance condition: w_αβ/w_βα =exp(h_αβ). We therefore allow for h_01+h_12+h_20=h_cyc≠0, with h_cyc=h^neq_01+h^neq_12+h^neq_20, which we assume positive to favor the cycling s=0→1→2→0. In practice, we take Metropolis rates w_αβ=min[1,exp(h_αβ)] or Glauber rates w_αβ=[1+exp(-h_αβ)]^-1. The dynamical equations are then ρ̇_1 =w_01ρ_0-(w_10+w_12)ρ_1+w_21ρ_2, ρ̇_2 =w_02ρ_0+w_12ρ_1-(w_20+w_21)ρ_2, ρ̇_0 =-(w_01+w_02)ρ_0+w_10ρ_1+w_20ρ_2. The third equation follows from the first two due to density conservation. In the stationary state, ρ̇_α=0; hence replacing ρ_0 by 1-ρ_1-ρ_2 in Eqs. (<ref>)–(<ref>) yields a linear system for the stationary densities ρ_1 and ρ_2, which gives ρ_1/ρ_0 =[w_02w_21+w_01(w_20+w_21)]/[w_12w_20+w_10(w_20+w_21)], ρ_2/ρ_0 =[w_01w_12+w_02(w_10+w_12)]/[w_12w_20+w_10(w_20+w_21)].
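These stationary densities are straightforward to evaluate numerically. The following sketch solves the linear system for Metropolis rates and reproduces the mean-field values quoted earlier (ρ_0 ≃ ρ_1 ≃ 0.32 and ρ_2 ≃ 0.37 for h_01 = 1.6, h_12 = 1.7, h_20 = 1); it is our own illustration of the equations above.

```python
import numpy as np

def stationary_densities(w):
    """Solve rho_dot = 0 with rho_0 = 1 - rho_1 - rho_2 eliminated;
    w[a][b] is the transition rate from state a to state b."""
    A = np.array([
        [w[0][1] + w[1][0] + w[1][2], w[0][1] - w[2][1]],
        [w[0][2] - w[1][2], w[0][2] + w[2][0] + w[2][1]],
    ])
    rho1, rho2 = np.linalg.solve(A, [w[0][1], w[0][2]])
    return 1.0 - rho1 - rho2, rho1, rho2

def metropolis(h):   # w_ab = min(1, exp(h_ab))
    return [[min(1.0, np.exp(h[a][b])) for b in range(3)] for a in range(3)]

h01, h12, h20 = 1.6, 1.7, 1.0   # asymmetric example from Sec. IV
h = [[0.0, h01, -h20], [-h01, 0.0, h12], [h20, -h12, 0.0]]
print(stationary_densities(metropolis(h)))   # ~(0.32, 0.32, 0.37)
```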
For Metropolis rates, assuming h_01≥ 0, h_12≥ 0, and h_20≥ 0, we obtain ρ_1/ρ_0 =e^h_01(1+e^h_12+e^h_02)/(1+e^h_12+e^h_02e^h_cyc), ρ_0/ρ_2 =e^h_20(1+e^h_01+e^h_21)/(1+e^h_01+e^h_21e^h_cyc). Equilibrium is achieved for h_cyc=0. In this case, Eqs. (<ref>) give ρ_1/ρ_0=exp(h_01) and ρ_0/ρ_2=exp(h_20), which produces the Boltzmann distribution ρ_α∝exp(-ε_α), as expected. When h_cyc≠0, the stationary densities differ from the equilibrium ones. As h_01 increases at fixed h_12 and h_20, we find that ρ_1 increases and ρ_0 decreases, as expected (see Fig. <ref>). When Glauber rates are used, the density changes become slightly larger (compare the dashed and solid lines in Fig. <ref>). §.§.§ Equilibrium State and Free Energy At equilibrium, i.e., for h_cyc=0, the densities can be obtained from the free-energy density f=f_self+f_mix, with f_mix=ρ_1lnρ_1+ρ_2lnρ_2+(1-ρ_1-ρ_2)ln(1-ρ_1-ρ_2), arising from the entropy of mixing. Indeed, minimizing f with respect to ρ_1 and ρ_2 yields ρ_α/ρ_0=exp(ε_0-ε_α). Note that while the equilibrium densities are obtained by minimizing the full free-energy density f, the transition rates w_αβ result from the variations of f_self=f-f_mix. §.§ Continuum Theory with Spatial Gradients and Interactions We now assume that the densities ρ_α(𝐱) vary in space. Since we consider only local state flips (no state exchange between adjacent sites), the dynamical equations still have the same form locally: ρ̇_1(𝐱) =w_01ρ_0-(w_10+w_12)ρ_1+w_21ρ_2, ρ̇_2(𝐱) =w_02ρ_0+w_12ρ_1-(w_20+w_21)ρ_2, ρ_0(𝐱) =1-ρ_1-ρ_2. The transition rates w_αβ(𝐱), however, now depend on the local free-energy variations, including the interactions between adjacent sites and the penalty due to density gradients. The latter are required in order to take into account that the nucleation of a new phase behaves differently within a domain and at the interface between two domains. Thus, the free-energy density is now f=f_self+f_mix+f_int+f_grad, with f_int =-u_1/2ρ_1^2-u_2/2ρ_2^2-u_0/2(1-ρ_1-ρ_2)^2, f_grad = k/2(∇ρ_1)^2+ k/2(∇ρ_2)^2+ k/2[∇(1-ρ_1-ρ_2)]^2, where u_α>0 quantifies the attractive couplings between like states, and k>0 the penalty associated with density gradients (common to all states for simplicity). As discussed in Sec. <ref>, the transition rates must be calculated from the free energy minus the contribution arising from the entropy of mixing. The latter is given by the functional F[ρ_1,ρ_2]=∫d^2x(f_self+f_int+f_grad). The energy change associated with the flip of one site from the state s=0→1, at point 𝐱, supplemented by the nonequilibrium contribution h_01^neq, is given by Δ H_01(𝐱) =δ F/δρ_1(𝐱)|_ρ_2 =ε_1-ε_0-u_1ρ_1+u_0ρ_0 -k∇^2(ρ_1-ρ_0)-h_01^neq =-Δ H_10(𝐱). Likewise, Δ H_02=ε_2-ε_0-u_2ρ_2+u_0ρ_0-k∇^2(ρ_2-ρ_0)-h_02^neq=-Δ H_20. From the energy change in the sequence s=2→0→1, we infer also Δ H_21=ε_1-ε_2-u_1ρ_1+u_2ρ_2-k∇^2(ρ_1-ρ_2)-h_21^neq=-Δ H_12. In the following, we shall take u_0=u_1=u_2=u and break detailed balance by setting h_01^neq=0, h_12^neq=0, and h_20^neq=h_cyc>0. For convenience, we assume Glauber rates: w_αβ(𝐱)=1/[1+e^Δ H_αβ(𝐱)]. Note that they obey the local detailed-balance condition w_αβ/w_βα=exp(-Δ H_αβ), even in the nonequilibrium case h_cyc≠0. §.§.§ Numerical Analysis Equations (<ref>)–(<ref>), with the rates of Eq. (<ref>) obtained from the energy variations Δ H_αβ(𝐱) defined in Eq. (<ref>) and below, allow us to calculate the spatially resolved evolution of the system.
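A condensed 1D sketch of the right-hand sides is given below; it is our own illustration, using a second-order Laplacian for brevity (the integration scheme actually used, with fourth-order differences and a classical Runge-Kutta integrator, is described in the following paragraph).

```python
import numpy as np

def rates_1d(rho1, rho2, u, k, eps, h_cyc):
    """Glauber rates w_ab = 1/(1 + exp(dH_ab)) from the continuum dH_ab,
    with u_0 = u_1 = u_2 = u and h_20^neq = h_cyc; periodic boundaries.
    h_cyc enters dH_02 with a plus sign since h_02^neq = -h_cyc."""
    rho0 = 1.0 - rho1 - rho2
    lap = lambda f: np.roll(f, -1) + np.roll(f, 1) - 2.0 * f
    dH01 = eps[1] - eps[0] - u * rho1 + u * rho0 - k * lap(rho1 - rho0)
    dH02 = eps[2] - eps[0] - u * rho2 + u * rho0 - k * lap(rho2 - rho0) + h_cyc
    dH12 = eps[2] - eps[1] - u * rho2 + u * rho1 - k * lap(rho2 - rho1)
    g = lambda x: 1.0 / (1.0 + np.exp(x))
    return {(0, 1): g(dH01), (1, 0): g(-dH01),
            (0, 2): g(dH02), (2, 0): g(-dH02),
            (1, 2): g(dH12), (2, 1): g(-dH12)}

def rhs(rho1, rho2, u, k, eps, h_cyc):
    """Right-hand sides of the local dynamical equations for rho_1, rho_2."""
    w = rates_1d(rho1, rho2, u, k, eps, h_cyc)
    rho0 = 1.0 - rho1 - rho2
    d1 = w[0, 1] * rho0 - (w[1, 0] + w[1, 2]) * rho1 + w[2, 1] * rho2
    d2 = w[0, 2] * rho0 + w[1, 2] * rho1 - (w[2, 0] + w[2, 1]) * rho2
    return d1, d2
```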
We have integrated the discretized version of these equations in one dimension with periodic boundary conditions on a lattice of L=120 sites using the classical Runge-Kutta method. The first and second derivatives were discretized using central differences with fourth-order accuracy. We first investigated the dynamics in cyclic-symmetry conditions, with h_01=h_12=h_20=0.3, in which case the three states are equivalent and the dynamics promotes the sequence s=0→1→2→0. We show in Fig. <ref>(b) that a biphasic s=1,2 band in an s=0 background forms a travelling wave, which behaves as a perfect soliton. The biphasic band is initialized with (ρ_1,ρ_2) equal to (0,0) for the s=0 (gray) phase, (0.8,0) for the s=1 (blue) phase, and (0,0.8) for the s=2 (red) phase. The stationary densities (ρ_1,ρ_2) of the three phases are indicated by the green points in Fig. <ref>(a), matching the minima of the free energy f. Such travelling waves also appear stochastically in two dimensions, as can be seen in Fig. <ref>(a) and Movie S1. They also produce the spiralling dynamics around the contact point of three different domains (see Figs. <ref>(a) and <ref>(d)). Biphasic bands travelling in opposite directions annihilate each other, as shown in Fig. <ref>(c), even when their sizes are different, as a natural consequence of the s=0→1→2→0 transformation. Such events can also be seen in two dimensions (see Fig. <ref>(d) and Movie S1). In the case of asymmetric flipping energies, we find that the three interfaces s=0→ 1, s=1→ 2, and s=2→ 0 have different velocities. Depending on the relative asymmetries, this can lead to the widening of a biphasic band, as shown in Fig. <ref>(a), or to its disappearance (see Fig. <ref>(b)). § SUMMARY We have studied the dynamics of the active cyclic Potts model under asymmetric conditions. We found that biphasic domains and non-cyclic one-state dominant phases are induced by either asymmetric flipping energies or asymmetric contact energies. The spiral wave mode can temporally coexist with either the one-state dominant phases or the homogeneous cycling mode. Biphasic domains move like amoebae and exhibit division and disappearance. When the flipping energy from the s=0 to the s=1 state is increased, or the contact energy between s=0 sites is decreased while the other flipping energies are fixed, the s=2 dominant phase appears owing to the suppression of the nucleation of s=0 domains in the s=2 phase. Due to hysteresis, two separate modes can form depending on the initial state: a one-state dominant phase and the coexistence of spiral waves with a one-state dominant phase, or two different types of one-state dominant phases. In reaction-diffusion systems, the length scales of spatiotemporal patterns are usually controlled by the diffusion coefficients of the reactants <cit.>. However, in the present system, the diffusion of the various states is not taken into account. The observed wavelengths fluctuate largely and are determined, on average, by the competition between domain nucleation and growth. We mentioned three experimental situations relevant to the present cyclic model in the Introduction, including reactions on a catalytic surface. Although oxidation and reduction on catalytic surfaces have been intensively studied in experiments <cit.>, the effects of nucleation and growth have not been well understood. We believe that the rich dynamics of the present system, including the amoeba-like motion of biphasic domains, can be experimentally observed on catalytic surfaces. Here, we simulated small systems.
Such small systems can be realized using metal nanoparticles <cit.>. Chemical waves are known to induce shape deformations of metal particles <cit.>, lipid membranes <cit.>, and macroscopic gel sheets <cit.>. In this study, we have only considered non-deformable flat surfaces. Surface deformation and the dynamics on curved surfaces are likely to modify the dynamics of the active cyclic Potts model and yield new phenomena. Here, we used the three-state Potts model on the square lattice, which is applicable systematically from thermal equilibrium to far-from-equilibrium conditions. The dynamics may change when other types of lattices are used. When four or more states are considered, more complex dynamics are expected. Hence, the cyclic Potts model is an excellent model system to study nonequilibrium dynamics coupled with nucleation and growth. It is simple and easy to extend in several directions for further studies. We thank Frédéric van Wijland for stimulating discussions. This work was supported by JSPS KAKENHI Grant Numbers JP21K03481 and JP24K06973.
[Murray(2003)] J. D. Murray, Mathematical Biology II: Spatial Models and Biomedical Applications, 3rd ed. (Springer, New York, 2003).
[Mikhailov and Showalter(2006)] A. S. Mikhailov and K. Showalter, Phys. Rep. 425, 79 (2006).
[Mikhailov and Ertl(2009)] A. S. Mikhailov and G. Ertl, ChemPhysChem 10, 86 (2009).
[Beta and Kruse(2017)] C. Beta and K. Kruse, Annu. Rev. Condens. Matter Phys. 8, 239 (2017).
[Bailles et al.(2022)] A. Bailles, E. W. Gehrels, and T. Lecuit, Annu. Rev. Cell Dev. Biol. 38, 321 (2022).
[Okuzono and Ohta(2003)] T. Okuzono and T. Ohta, Phys. Rev. E 67, 056211 (2003).
[Qu(2006)] Z. Qu, Am. J. Physiol. Heart Circ. Physiol. 290, H255 (2006).
[Sugimura and Kori(2015)] K. Sugimura and H. Kori, Phys. Rev. E 92, 062915 (2015).
[Kerr et al.(2002)] B. Kerr, M. A. Riley, M. W. Feldman, and B. J. M. Bohannan, Nature 418, 171 (2002).
[Kelsic et al.(2015)] E. D. Kelsic, J. Zhao, K. Vetsigian, and R. Kishony, Nature 521, 516 (2015).
[Szolnoki et al.(2014)] A. Szolnoki, M. Mobilia, L.-L. Jiang, B. Szczesny, A. M. Rucklidge, and M. Perc, J. R. Soc. Interface 11, 20140735 (2014).
[Reichenbach et al.(2007)] T. Reichenbach, M. Mobilia, and E. Frey, Nature 448, 1046 (2007).
[Reichenbach et al.(2008)] T. Reichenbach, M. Mobilia, and E. Frey, J. Theor. Biol. 254, 368 (2008).
[Itoh and Tainaka(1994)] Y. Itoh and K.-i. Tainaka, Phys. Lett. A 189, 37 (1994).
[Tainaka(1994)] K.-i. Tainaka, Phys. Rev. E 50, 3401 (1994).
[Szabó et al.(1999)] G. Szabó, M. A. Santos, and J. F. F. Mendes, Phys. Rev. E 60, 3776 (1999).
[Szabó and Szolnoki(2002)] G. Szabó and A. Szolnoki, Phys. Rev. E 65, 036115 (2002).
[Johnson and Seinen(2002)] C. R. Johnson and I. Seinen, Proc. R. Soc. Lond. B 269, 655 (2002).
[Szczesny et al.(2013)] B. Szczesny, M. Mobilia, and A. M. Rucklidge, EPL 102, 28012 (2013).
[Juul et al.(2013)] J. Juul, K. Sneppen, and J. Mathiesen, Phys. Rev. E 87, 042702 (2013).
[Mir et al.(2022)] H. Mir, J. Stidham, and M. Pleimling, Phys. Rev. E 105, 054401 (2022).
[Gammaitoni et al.(1998)] L. Gammaitoni, P. Hänggi, P. Jung, and F. Marchesoni, Rev. Mod. Phys. 70, 223 (1998).
[McDonnell and Abbott(2009)] M. D. McDonnell and D. Abbott, PLoS Comput. Biol. 5, e1000348 (2009).
[Kádár et al.(1998)] S. Kádár, J. Wang, and K. Showalter, Nature 391, 770 (1998).
[Wang et al.(1999)] J. Wang, S. Kádár, P. Jung, and K. Showalter, Phys. Rev. Lett. 82, 855 (1999).
[Alonso et al.(2001)] S. Alonso, I. Sendiña Nadal, V. Pérez-Muñuzuri, J. M. Sancho, and F. Sagués, Phys. Rev. Lett. 87, 078302 (2001).
[Hildebrand and Mikhailov(1996)] M. Hildebrand and A. S. Mikhailov, J. Phys. Chem. 100, 19089 (1996).
[Ertl(2008)] G. Ertl, Angew. Chem. Int. Ed. 47, 3524 (2008).
[Bär et al.(1994)] M. Bär, N. Gottschalk, M. Eiswirth, and G. Ertl, J. Chem. Phys. 100, 1202 (1994).
[Gorodetskii et al.(1994)] V. Gorodetskii, J. Lauterbach, H.-H. Rotermund, J. H. Block, and G. Ertl, Nature 370, 276 (1994).
[Barroo et al.(2020)] C. Barroo, Z.-J. Wang, R. Schlögl, and M.-G. Willinger, Nat. Catal. 3, 30 (2020).
[Zeininger et al.(2022)] J. Zeininger et al., ACS Catal. 12, 11974 (2022).
[Noguchi et al.()] H. Noguchi, F. van Wijland, and J.-B. Fournier, J. Chem. Phys., in press (arXiv:2311.05257).
[Potts(1952)] R. B. Potts, Proc. Camb. Phil. Soc. 48, 106 (1952).
[Binder(1981)] K. Binder, J. Stat. Phys. 24, 69 (1981).
[Miele et al.(2020)] Y. Miele, Z. Medveczky, G. Holló, B. Tegze, I. Derényi, Z. Hórvölgyi, E. Altamura, I. Lagzi, and F. Rossi, Chem. Sci. 11, 3228 (2020).
[Holló et al.(2021)] G. Holló, Y. Miele, F. Rossi, and I. Lagzi, Phys. Chem. Chem. Phys. 23, 4262 (2021).
[Noguchi(2023)] H. Noguchi, Soft Matter 19, 679 (2023).
[Tabe and Yokoyama(2003)] Y. Tabe and H. Yokoyama, Nat. Mater. 2, 806 (2003).
[Tang et al.(2020)] M. Tang, W. Yuan, Y. Ou, G. Li, R. You, S. Li, H. Yang, Z. Zhang, and Y. Wang, ACS Catal. 10, 14419 (2020).
[Ghosh et al.(2022)] T. Ghosh, J. M. Arce-Ramos, W.-Q. Li, H. Yan, S. W. Chee, A. Genest, and U. Mirsaidov, Nat. Commun. 13, 6176 (2022).
[Wu et al.(2018)] Z. Wu, M. Su, C. Tong, M. Wu, and J. Liu, Nat. Commun. 9, 136 (2018).
[Tamemoto and Noguchi(2021)] N. Tamemoto and H. Noguchi, Soft Matter 17, 6589 (2021).
[Noguchi(2023)] H. Noguchi, Sci. Rep. 13, 6207 (2023).
[Ionov(2014)] L. Ionov, Mater. Today 17, 494 (2014).
[Maeda et al.(2008)] S. Maeda, Y. Hara, R. Yoshida, and S. Hashimoto, Macromol. Rapid Commun. 29, 401 (2008).
[Levin et al.(2020)] I. Levin, R. Deegan, and E. Sharon, Phys. Rev. Lett. 125, 178001 (2020).
http://arxiv.org/abs/2407.02911v1
20240703083701
Non-Adversarial Learning: Vector-Quantized Common Latent Space for Multi-Sequence MRI
[ "Luyi Han", "Tao Tan", "Tianyu Zhang", "Xin Wang", "Yuan Gao", "Chunyao Lu", "Xinglong Liang", "Haoran Dou", "Yunzhi Huang", "Ritse Mann" ]
eess.IV
[ "eess.IV", "cs.CV" ]
Department of Radiology and Nuclear Medicine, Radboud University Medical Centre, Geert Grooteplein 10, 6525 GA, Nijmegen, The Netherlands; Department of Radiology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX, Amsterdam, The Netherlands; Faculty of Applied Sciences, Macao Polytechnic University, 999078, Macao Special Administrative Region of China; GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre, P. Debyelaan 25, 6202 AZ, Maastricht, The Netherlands; Center for Computational Imaging and Simulation Technologies in Biomedicine, School of Computing, University of Leeds, LS2 9JT Leeds, UK; School of Artificial Intelligence, Nanjing University of Information Science and Technology, Nanjing 210044, China. Corresponding author: taotan@mpu.edu.mo

Non-Adversarial Learning: Vector-Quantized Common Latent Space for Multi-Sequence MRI

Luyi Han1,2, Tao Tan3,2 (corresponding author), Tianyu Zhang1,2,4, Xin Wang2,4, Yuan Gao2,4, Chunyao Lu1,2, Xinglong Liang1,2, Haoran Dou5, Yunzhi Huang6, Ritse Mann1,2

July 8, 2024

§ ABSTRACT Adversarial learning helps generative models translate MRI from a source to a target sequence when paired samples are lacking. However, implementing MRI synthesis with adversarial learning in clinical settings is challenging due to training instability and mode collapse. To address this issue, we leverage intermediate sequences to estimate the common latent space among multi-sequence MRI, enabling the reconstruction of distinct sequences from this common latent space. We propose a generative model that compresses discrete representations of each sequence to estimate the Gaussian distribution of the vector-quantized common (VQC) latent space shared by multiple sequences. Moreover, we improve latent space consistency with contrastive learning and increase model stability through domain augmentation. Experiments on the BraTS2021 dataset show that our non-adversarial model outperforms other GAN-based methods, and that the VQC latent space gives our model (1) anti-interference ability, which can eliminate the effects of noise, bias fields, and artifacts, and (2) solid semantic representation ability, with the potential for one-shot segmentation. Our code is publicly available [<https://github.com/fiy2W/mri_seq2seq>].

§ INTRODUCTION Multi-sequence magnetic resonance imaging (MRI) is a commonly used diagnostic tool that provides clinicians with a comprehensive view of tissue characteristics <cit.>. However, some sequences may be unusable or absent in clinical practice for various reasons <cit.>, necessitating rescanning or disrupting downstream processes. To avoid this, deep generative models can be used to synthesize the missing sequences, but they require many paired training samples to produce high-quality results. In cases lacking paired data between the source and target sequences, most studies <cit.> rely on generative adversarial networks (GANs) <cit.> to minimize the distribution distance between the generated and the target sequence. However, adversarial training can also lead to instability and mode collapse, harming image quality and structure. Using intermediate sequences in multi-sequence MRI can make unsupervised generation less challenging.
For example, if we have paired T1-weighted (T1) and T2-weighted (T2) MRI for one population and paired T2 and fluid-attenuated inversion recovery (Flair) MRI for another, we can use T2 to establish the relationship between T1 and Flair without paired samples. Compared to single-task models <cit.>, dynamic models <cit.> controlled by a prompt branch can integrate multiple generation tasks and thereby utilize intermediate sequences. Han et al. <cit.> use a shared encoder to extract structural features from images, which are then rendered into target images under the guidance of a one-hot code. Jiang et al. <cit.> disentangle images into structure and style features and reconstruct target images using target styles and source structures. These methods preserve structural consistency but ignore the distribution differences between the latent spaces of distinct sequences, hindering the model from learning the mapping of the common latent space to the target sequence.

In this work, we construct a common latent space for multi-sequence MRI so that all sequences can be mapped from it. Specifically, we first utilize VQ-VAE <cit.> to compress images into a discrete latent space, then estimate the distribution of the vector-quantized common (VQC) latent space based on these representations. Finally, we leverage a dynamic model, Seq2Seq <cit.>, to generate arbitrary target sequences from the VQC latent space. The VQC latent space has three primary advantages: (1) achieving unsupervised synthesis without requiring adversarial learning; (2) preventing input interference, such as noise, artifacts, and bias fields; and (3) providing a reliable semantic representation, which shows the potential for one-shot segmentation.

§ METHODS §.§ Preliminary §.§.§ VQ-VAE Compared with the continuous latent space of a VAE <cit.>, the discrete latent space of VQ-VAE captures more structured features while ignoring some irrelevant details, e.g., artifacts. Given an encoder 𝐄 and a decoder 𝐆, we can map an image X into a continuous latent space z_e=𝐄(X) with latent dimension D, while 𝐆 can restore X from z_e. A codebook {e_k} with embedding dimension K is then used to map z_e to the nearest vector in the codebook, z_q(z_e)=e_k, k=argmin_i ‖z_e-e_i‖_2, where the vector-quantizing process is not differentiable, requiring a modified training loss, ℒ_vqvae = ‖X-𝐆(z_e+sg[z_q-z_e])‖_2^2+‖sg[z_e]-z_q‖_2^2+β·‖sg[z_q]-z_e‖_2^2, where sg[·] indicates a stop-gradient operation, and β=0.25 ensures that z_e remains in proximity to z_q. To simplify the expression, we denote z_e+sg[z_q-z_e] as z_q and merge the last two terms of Eq. <ref> as ℒ_vq in the following sections.

§.§.§ Dynamic Model Dynamic models <cit.> combine different generation tasks in a single model, which makes it possible to utilize intermediate sequences. Given a set of N MRI sequences 𝒳={X_i,f_i|i=1,...,N}, X_i is available if f_i = 1; otherwise f_i = 0 and the sequence is missing. The process of translating X_i to X_j is X̂_i→ j=𝐆(𝐄(X_i),c_j), where 𝐆 refers to a dynamic decoder whose inputs are the structure feature 𝐄(X_i) and the style feature c_j. In particular, c_j can be represented as a one-hot encoding of the target sequence <cit.> or a style feature extracted from the target image <cit.>. In this work, we use Seq2Seq <cit.> as the baseline because the model is a simple autoencoder, which makes it easy to integrate the VQ module into the model. A minimal sketch of the quantization step is given below.
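As a concrete illustration of the nearest-neighbour quantization and straight-through gradient above, the following is a minimal PyTorch sketch; the module interface, tensor shapes, and initialization are illustrative assumptions rather than the authors' released implementation (see their repository for the actual code).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VectorQuantizer(nn.Module):
    """Nearest-neighbour codebook lookup with a straight-through gradient:
    z_q = e_k with k = argmin_i ||z_e - e_i||_2."""

    def __init__(self, num_embeddings=256, embedding_dim=3, beta=0.25):
        super().__init__()
        self.codebook = nn.Embedding(num_embeddings, embedding_dim)
        self.codebook.weight.data.uniform_(-1.0, 1.0)
        self.beta = beta  # commitment weight, 0.25 in the text

    def forward(self, z_e):
        # z_e: (B, D, H, W) continuous latents from the encoder E
        b, d, h, w = z_e.shape
        flat = z_e.permute(0, 2, 3, 1).reshape(-1, d)
        # squared L2 distance from every latent vector to every codebook entry
        dist = (flat.pow(2).sum(1, keepdim=True)
                - 2.0 * flat @ self.codebook.weight.t()
                + self.codebook.weight.pow(2).sum(1))
        k = dist.argmin(dim=1)
        z_q = self.codebook(k).view(b, h, w, d).permute(0, 3, 1, 2)
        # codebook term ||sg[z_e]-z_q||^2 plus commitment term beta*||sg[z_q]-z_e||^2
        loss_vq = F.mse_loss(z_q, z_e.detach()) + self.beta * F.mse_loss(z_e, z_q.detach())
        # straight-through estimator: z_e + sg[z_q - z_e]
        z_q_st = z_e + (z_q - z_e).detach()
        return z_q_st, loss_vq
```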
§.§ Inheriting the advantages of discrete representations and dynamic models, we propose our model to establish the VQC latent space for multi-sequence MRI. As shown in Fig. <ref>, the continuous latent space z_e and the corresponding discrete latent space z_q are extracted from the input images. By computing statistics over z_q, we can estimate a VQC latent space containing sampling points z_s that can reconstruct images of different sequences through the dynamic decoder 𝐆.

§.§.§ Uncertainty Estimation It is challenging to strictly constrain multi-sequence MRI to be equal in latent space, because each sequence contains specific information that the other sequences lack <cit.>. To tolerate these sequence-specific attributes, we depict the probabilistic scope of z_q among different sequences by considering the uncertainty of the latent space. We propose a simple non-parametric method using the statistics of z_q for uncertainty estimation: μ_q(𝒳) = (1/∑_i=1^N f_i) ∑_i^f_i≠0 z_q(X_i), σ^2_q(𝒳) = (1/(∑_i=1^N f_i − 1)) ∑_i^f_i≠0 (z_q(X_i)−μ_q(𝒳))^2.

§.§.§ VQC Latent Space After obtaining the uncertainty estimate, we can establish a Gaussian distribution from these probabilistic statistics. To further model the uncertainty through randomness, we randomly draw the VQC latent space from the corresponding distribution: z_s = μ_q(𝒳)+ϵ·σ^2_q(𝒳), ϵ∼𝒩(0,1). Here, we use the re-parameterization trick to make the sampling operation differentiable, and ϵ follows the standard Gaussian distribution (a minimal code sketch of this estimation and sampling step is given at the end of this section).

§.§ Loss Function §.§.§ Pixel-Level Reconstruction We establish constraints between the generated image X̂ and the target image X at the pixel, structural, and perceptual levels: ℒ_rec(X̂,X)=λ_1·‖X̂-X‖_1 + λ_2·ℒ_ssim(X̂,X) + λ_3·ℒ_per(X̂,X), where ‖·‖_1 refers to the L_1 loss, ℒ_ssim indicates the SSIM loss <cit.>, and ℒ_per denotes the perceptual loss <cit.> based on a pre-trained VGG19. λ_1, λ_2, and λ_3 are weight terms, experimentally set to 10, 1, and 0.1.

§.§.§ Latent Space Consistency We encourage the z_q of different sequences to be close, narrowing the scope of the VQC latent space. For two quantized latents z_1 and z_2, we define a consistency loss composed of an MSE term and a contrastive learning loss <cit.>: ℒ_con(z_1, z_2) = ‖sg[z_1]-z_2‖_2^2+‖sg[z_2]-z_1‖_2^2 -∑_p∈ M log [exp(z_1^(p)· z_2^(p)/τ)/∑_q∈ M exp(z_1^(p)· z_2^(q)/τ)]·[exp(z_2^(p)· z_1^(p)/τ)/∑_q∈ M exp(z_2^(p)· z_1^(q)/τ)], where p and q traverse pixels in the foreground of z_1 and z_2, and τ=0.07 is the scalar temperature parameter.

§.§.§ Total Loss We formulate the total loss function using intermediate sequences, without adversarial learning: ℒ_total = ∑_i^f_i≠0∑_j^f_j≠0 ℒ_rec(X̂_i→ j,X_j) + ∑_i^f_i≠0 ℒ_rec(X̂_s→ i,X_i) + λ_con·∑_i^f_i≠0∑_j^f_j≠0 ℒ_con(z_q(X_i),z_q(X_j)) + λ_vq·∑_i^f_i≠0 ℒ_vq(z_e(X_i),z_q(X_i)), where X̂_i→ j=𝐆(z_q(𝐄(X_i)),c_j) and X̂_s→ j=𝐆(z_s,c_j) are images generated from z_q and z_s, respectively. λ_con and λ_vq are both experimentally set to 10.

§.§ Random Domain Augmentation We apply random domain augmentation to input images during training to further improve the stability of our model and the anti-interference ability of the VQC latent space. The domain augmentation process has three aspects: (1) simple intensity transformation 𝒯 (e.g., gamma transformation, random noise, and bias field); (2) cross-sequence translation with one-hot codes; and (3) random domain translation with random target codes c_r∼𝒰(0,1). The latter two augmentation methods generate an augmented image X_aug=𝐆(z_q(X_i),c_r) from the input image X_i, while the first method gives X_aug=𝒯(X_i). During training, we randomly replace the input image X_i with one of the X_aug.
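The referenced sketch of the uncertainty estimation and reparameterized sampling follows. It assumes the quantized latents of the available sequences share one tensor shape; note that it mirrors the sampling equation exactly as written in the text, which scales ϵ by the variance σ²_q rather than by the standard deviation.

```python
import torch

def vqc_latent_sample(z_q_list):
    """Estimate (mu_q, sigma_q^2) over the quantized latents of the available
    sequences and draw one reparameterized sample z_s from the VQC latent space.
    z_q_list: list of tensors of identical shape, e.g. (B, D, H, W), one per
    available sequence (at least two sequences are needed for the variance)."""
    z = torch.stack(z_q_list, dim=0)      # (N_available, B, D, H, W)
    mu = z.mean(dim=0)                    # mu_q over available sequences
    var = z.var(dim=0, unbiased=True)     # sigma_q^2 with 1/(N-1) normalization
    eps = torch.randn_like(mu)            # eps ~ N(0, 1)
    z_s = mu + eps * var                  # reparameterization trick, as written
    return z_s, mu, var
```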
§ EXPERIMENTS §.§ Experimental Settings §.§.§ Dataset and Evaluation Metrics We utilize brain MRI images from the Brain Tumor Segmentation 2021 (BraTS2021) dataset <cit.>, comprising 1,251 subjects with four aligned sequences: T1, T1Gd, T2, and Flair. From this dataset, we allocated 830 subjects for training, 93 for validation, and 328 for testing. To simulate clinical settings with missing sequences, we divided the training set into three subsets, containing paired sequences (T1, T1Gd), (T1Gd, T2), and (T2, Flair), respectively. Under this setting, there are no paired samples between T1 and Flair, but two partially paired intermediate sequences, T1Gd and T2, are available. All images undergo intensity normalization to the range [0, 1] and are subsequently centrally cropped to dimensions of 128×192×192. Synthesis performance is evaluated using peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and learned perceptual image patch similarity (LPIPS).

§.§.§ Implementation Details We implemented the models using PyTorch and trained them on an NVIDIA GeForce RTX 3090 Ti GPU. The architecture of 𝐄 and 𝐆 is the same as in Seq2Seq <cit.>. The proposed model is trained using the AdamW optimizer, with an initial learning rate of 10^-4 and a batch size of 1, for 1,000,000 steps. All comparative experiments use domain augmentation, at least the simple intensity transformation 𝒯, to ensure a fair comparison. 𝒯 applies a random gamma transformation with γ∼𝒰(0.95,1.05), random Gaussian noise with σ∼𝒰(0,0.1), and a random bias field with a scale of 0.2 and degree of intensity inhomogeneity α∼𝒰(0,2).

§.§ Experimental Results §.§.§ Latent and Embedding Dimension Referring to Sec. <ref>, the latent dimension D is the dimension of the compressed feature: the smaller D, the greater the degree of compression. The embedding dimension K indicates the number of discrete vectors (clusters) in the codebook: the larger K, the better a discrete vector fits the continuous features. We train our model on the training set with complete sequences to explore the optimal D and K before the other experiments. As shown in Fig. <ref>, with K=256 the proposed model performs best at D=3. Additionally, at D=3 the performance continues to improve as K increases, but the rate of improvement slows beyond K>256. Thus, we set D=3 and K=256 in this work.

§.§.§ Latent Space Consistency To evaluate the effectiveness of the proposed VQC latent space for unsupervised cross-sequence generation, we compared our model with other methods, namely MM-GAN <cit.>, ResViT <cit.>, Jiang et al. <cit.>, and Seq2Seq <cit.>. Additionally, we compared the three components of our method: VQ embedding, VQ with ℒ_con, and domain augmentation. There are two ways to implement source→target generation: (1) generate the target directly from the source (single-step), or (2) first generate an intermediate sequence from the source and then generate the target (multi-step). Table <ref> and Fig. <ref> illustrate the synthesis performance of the compared methods on T1→T1Gd, T1→T2, and T1→Flair translation. Note that, due to the pairing structure of the training set, multi-step generation between T1 and T2 requires two steps, and between T1 and Flair three steps. As shown in Table <ref>, the compared methods achieve similar performance on the T1→T1Gd generation task with paired samples.
However, when it comes to the unpaired T1→T2 and T1→Flair generation tasks, the performance of the compared methods drops sharply for single-step generation relative to multi-step generation. In contrast, the proposed model shows only a minor performance penalty on the T1→Flair task and improves on the T1→T2 task. This shows that multi-step generation leads to information loss and error accumulation, and that our model can alleviate this problem through single-step generation.

§.§.§ Anti-Interference The proposed VQC latent space also has anti-interference ability. We add fixed Gaussian noise and bias fields to the input image and reconstruct it with the compared methods. As shown in Table <ref>, the proposed method effectively prevents the interference of noise and bias fields and reconstructs the original image. Fig. <ref> shows the reconstruction results, in which we find that the proposed model can also remove artifacts in images.

§.§.§ Compression and Representation The proposed VQC latent space showcases strong representation ability, indicating the potential for one-shot segmentation. To demonstrate this, we train an nnU-Net model on the VQC latent space for brain tumor segmentation, using only one subject containing all sequences for training. As shown in Table <ref>, the segmentation model trained on the VQC latent space outperforms the model trained on images alone. Furthermore, Fig. <ref> shows that a smaller embedding dimension, K=16, contributes to the clustering of image semantics, which improves segmentation performance.

§ CONCLUSION In this work, we introduce a network for estimating the distribution of the VQC latent space, which inherits the advantages of discrete representations and dynamic models. Experimental results on BraTS2021 demonstrate that this latent space enables cross-sequence generation without adversarial learning and has substantial anti-interference and representation ability.

§.§.§ Acknowledgments Luyi Han was funded by a Chinese Scholarship Council (CSC) scholarship.
http://arxiv.org/abs/2407.02144v1
20240702103808
Conversations in the dark: cross-correlating birefringence and LSS to constrain axions
[ "S. Arcari", "N. Bartolo", "A. Greco", "A. Gruppuso", "M. Lattanzi", "P. Natoli" ]
astro-ph.CO
[ "astro-ph.CO", "hep-ph" ]
http://arxiv.org/abs/2407.02742v1
20240703012851
A Comparative Study of DSL Code Generation: Fine-Tuning vs. Optimized Retrieval Augmentation
[ "Nastaran Bassamzadeh", "Chhaya Methani" ]
cs.SE
[ "cs.SE", "cs.AI", "cs.CL", "I.2.2; I.2.7" ]
§ ABSTRACT Natural Language to Code generation has made significant progress in recent years with the advent of Large Language Models (LLMs). While generation for general-purpose languages like C, C++, and Python has improved significantly, LLMs struggle with custom function names in Domain-Specific Languages, or DSLs. This leads to higher hallucination rates and syntax errors, especially for DSLs with a high number of custom function names. Additionally, constant updates to function names add to the challenge, as LLMs need to stay up to date. In this paper, we present optimizations for using Retrieval-Augmented Generation (RAG) with LLMs for DSL generation, along with an ablation study comparing these strategies. We generated training and test datasets for a DSL that represents automation tasks across roughly 700 APIs in the public domain. We used the training dataset to fine-tune a Codex model for this DSL. Our results show that the fine-tuned model scored best on the code similarity metric, and that with our RAG optimizations we achieved parity on that metric. The compilation rate, however, showed that both models still got the syntax wrong many times, with the RAG-based method being 2 pts better. Conversely, the hallucination rate of the RAG model lagged by 1 pt for API names and by 2 pts for API parameter keys. We conclude that an optimized RAG model can match the quality of fine-tuned models and offers advantages for new, unseen APIs.

A Comparative Study of DSL Code Generation: Fine-Tuning vs. Optimized Retrieval Augmentation Nastaran Bassamzadeh, Chhaya Methani, Microsoft Corporation, Redmond, USA. July 8, 2024

§ INTRODUCTION There has been significant progress in improving and quantifying the quality of Natural Language to Code generation, or NL2Code (<cit.>, <cit.>, <cit.>, <cit.>). Recent improvements in models for general-purpose languages like Python, C++, and Java can be attributed to larger LLMs (<cit.>, <cit.>) and the availability of pre-trained open-source models (<cit.>, <cit.>, <cit.>, <cit.>) advancing the state of the art. However, there has been less focus on improving the quality of Natural Language to Domain-Specific Language (NL2DSL) generation, on which many enterprise applications rely. Domain-Specific Languages (DSLs) are custom computer languages designed and optimized for specific applications. Examples of DSLs include SQL and industry-specific languages for formalizing API calls, often using formats like JSON or YAML to represent API sequences. In this paper, we focus on the task of generating a DSL used for authoring high-level automation workflows across thousands of web-scale APIs. These workflows support a variety of customer scenarios, such as invoice processing and sales lead integration with forms/emails. The automation DSL represents API names as functions and codifies a sequence of API calls along with conditional logic over the invocation of APIs. We constrained the sequence length to 5 APIs and plan to explore longer sequences in future work. An example of the DSL is shown in Figure <ref>. Existing code generation methods are hard to adapt to this scenario because of frequent hallucinations and syntax errors. This is largely due to the custom names and the massive size and diversity of APIs in the public as well as private domains, along with the ever-changing API landscape.
Current NL2Code methods mainly use fine-tuning and do not focus on strategies for grounding LLMs to include new APIs. In this paper, we outline an end-to-end system architecture for NL2DSL generation with a high response rate, using selective improvements to RAG techniques (<cit.>, <cit.>) with OpenAI models. We fine-tuned a Codex model for NL2DSL and present a comparative analysis of the impact of the approaches used to optimize RAG. Along with metaprompt tuning for RAG, we also included additional grounding context in the form of API function definitions, similar to the approach used for tool selection (<cit.>, <cit.>, <cit.>, <cit.>). This is motivated by the similarities between the code generation and task orchestration scenarios, discussed in more detail in Section <ref>. The remainder of this study is structured as follows. In Section <ref>, we present the NL2DSL problem formulation along with a literature review, focusing on the differences between tool selection over APIs as a framework and code generation over a set of APIs; this helps define the scope of the experiments in this study. Section <ref> lays out and describes the optimizations we made to RAG, along with the benchmark fine-tuned model. Section <ref> discusses data generation and metric definitions, Section <ref> shares our results and discussion, and Section <ref> presents the conclusion and future work.

§ RELATED WORK §.§ Code Generation or Program Synthesis Program synthesis is a hard research problem (<cit.>, <cit.>, <cit.>, <cit.>, <cit.>). It has gained significant interest, with many open-source models focusing on general programming languages since the release of GitHub Copilot (<cit.>). These models include Code Llama <cit.>, StarCoder <cit.>, Codestral <cit.>, Phi-3 <cit.>, and more. Many of these advancements have been achieved through pre-training language models for code generation with a focus on improving datasets (<cit.>, <cit.>). For domain adaptation, however, instruction fine-tuning on top of a base model remains a popular approach (<cit.>, <cit.>, <cit.>, <cit.>). Prompting LLMs is an alternative technique for code generation (<cit.>, <cit.>, <cit.>, <cit.>). Poesia et al. (<cit.>) focused on improving response quality through grounding techniques: they fine-tuned a Sentence-BERT model, changing the loss function to predict the similarity of the generated target programs; with this adapted similarity metric, better few-shot examples are selected dynamically.

§.§ Reasoning and Tool Integration When modeling the problem of selecting a sequence of API calls, we need to consider formulating it as a planning or reasoning task. LLMs show remarkable reasoning capability; however, they also have limitations when it comes to staying up to date with recent knowledge, performing mathematical calculations, and so on. A popular way to overcome this has been to grant LLMs access to external tools. This framework gained significant popularity with the success of the OpenAI Code Interpreter (<cit.>). External tool integration has since been studied with a focus on specific tools such as web search (<cit.>), Python code interpreters (<cit.>, <cit.>), calculators (<cit.>, <cit.>), and so on. Expanding the tool set to a generic list of tools has been explored (<cit.>, <cit.>), but remains limited and often predicts single tools instead of the sequences needed for most enterprise scenarios.
Tool use has mostly been explored in the context of generating more accurate text outputs for Q&A tasks with the help of external tools (<cit.>, <cit.>). Incorporating LLMs' code generation capabilities into reasoning and task orchestration is an area of active research (<cit.>, <cit.>, <cit.>). However, most of this work either limits the tools to a small set of well-documented APIs (<cit.>, <cit.>) or limits its scope to predicting a single output API (<cit.>, <cit.>). Posing the reasoning or orchestration task as a code generation problem is similar to the API sequence generation scenario highlighted in this paper. Representing a plan as a DSL, as discussed in Section <ref>, aligns with our goal of generating DSL for workflow automation. Improving the quality of Natural Language to DSL generation is thus beneficial for both reasoning and plan generation.

§.§ Contributions In the previous section, we discussed formulating task orchestration as a code generation problem, since a plan can be represented as yet another DSL. NL2DSL generation suffers from the hallucination and quality issues discussed in Section <ref>, and few studies address the challenges of end-to-end DSL generation, specifically over a large set of custom APIs. This paper presents improvements to known RAG techniques, focusing on improving DSL generation quality in enterprise settings. Our DSL expands API or tool selection to sequences of 5-6 API calls, also referred to as chains of tools, which is a first to the best of our knowledge. We also consider the real-world scenario of adding conditional logic to API calls, as shown with an example in Figure <ref>. Our contribution is outlining an end-to-end system as well as presenting an ablation study for NL2DSL generation. We merged prompting and grounding approaches from code generation (<cit.>, <cit.>, <cit.>), added API metadata as used in the task orchestration area (<cit.>, <cit.>), and studied their impact on reducing the hallucination rate. We created a test set of 1000 NL-DSL pairs spanning a set of approximately 700 API calls or functions, using principles of synthetic dataset generation (similar to <cit.> and <cit.>), and used manual review to validate test set quality. Our fine-tuned DSL model is trained on a larger synthetic NL-DSL dataset (details in Section <ref>).

§ METHODOLOGY In this section, we first provide an overview of the approaches used in our experiments; the following subsections delve into the details of each approach, with fine-tuning covered in Section <ref>. Fine-tuning a base model, specifically instruction fine-tuning, is a preferred approach for domain adaptation. Its limitations include the inability to incorporate newly added APIs on an ongoing basis, as well as the resource-intensive data collection process for infrequently used APIs in the tail. We used RAG-based approaches to overcome these limitations and focused on improving grounding techniques for DSL generation (details in Section <ref>). We used a dynamically selected few-shot examples approach (<cit.>), augmented with API function definitions in the way they are used for tool selection (<cit.>, <cit.>). These few-shots were selected from an expansive pool of synthetic NL-DSL pairs, empirically containing hundreds of usage variations for each API (Section <ref>); a minimal sketch of this retrieval step is shown below.
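The referenced sketch of dynamic few-shot retrieval follows, assuming the faiss and sentence-transformers libraries. The encoder checkpoint named here is a stand-in for the pre-trained (or TST fine-tuned) DistilRoBERTa model described in this section, and `samples` denotes the pool of (prompt, flow) pairs; none of these names come from the paper itself.

```python
import faiss
from sentence_transformers import SentenceTransformer

# Stand-in encoder; the paper uses a pre-trained DistilRoBERTa model,
# optionally fine-tuned with the TST objective described below.
encoder = SentenceTransformer("all-distilroberta-v1")

def build_index(nl_prompts):
    """Embed the pool of NL prompts and index them for nearest-neighbour search."""
    emb = encoder.encode(nl_prompts, normalize_embeddings=True).astype("float32")
    index = faiss.IndexFlatIP(emb.shape[1])  # inner product = cosine on unit vectors
    index.add(emb)
    return index

def retrieve_few_shots(index, samples, query, k=20):
    """Return the k (prompt, flow) pairs whose prompts are most similar to the query."""
    q = encoder.encode([query], normalize_embeddings=True).astype("float32")
    _, ids = index.search(q, k)
    return [samples[i] for i in ids[0]]
```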
For computing the semantic similarity of the few-shots with the input user query (Section <ref>), we fine-tuned a BERT model as highlighted in <cit.>, with a modified loss function for predicting target DSL similarity. For selecting the API metadata used for grounding (Section <ref>), we created an index over API function definitions. We also tried metaprompt tuning, but we limit the focus of this study to improving grounding techniques with a combination of dynamically selected few-shot samples and API metadata (tool descriptions). We share the details of each approach and its variations below.

§.§ Fine-Tuned NL2DSL Generation Model We took the Codex base model from OpenAI, owing to its pre-training on code samples, and used a LoRA-based fine-tuning approach. The training set consists of NL-DSL pairs, where NL refers to the user query and the DSL represents the workflow the user wants to automate. We used <START> and <END> tokens to indicate the end of code generation to the model. The training set consists of a pool of 67k samples in the form of (prompt, flow) tuples generated synthetically (details in Section <ref>; examples of NL-DSL pairs are shared in Figure <ref> and Appendix <ref>). We ran many iterations on this model to improve performance on the test set, specifically for the body and tail connectors, and went through multiple rounds of data augmentation. We found that predicting the parameter keys was very challenging for the fine-tuned model due to limitations of data generation: even with synthetic models, it was hard to scale the variety of NL-DSL samples needed to improve parameter quality.

§.§ Grounding with dynamically selected few-shots We tried two types of grounding information for RAG-based DSL generation, as described below, along with some variations of each technique. For each technique, we selected 5 and 20 shots dynamically and observed a performance impact driven by the sample selection approach.

§.§.§ Pre-Trained Model The first approach uses a vanilla pre-trained model to determine the semantic similarity of NL-DSL samples based on the NL query. We computed the embeddings of NL queries using a pre-trained DistilRoBERTa model and created a Faiss index (<cit.>) over these embeddings to support search in the dense embedding space.

§.§.§ TST-based BERT Fine-tuning In this approach, we fine-tuned the pre-trained model to improve the retrieval accuracy of the few-shots, similar to the Target Similarity Tuning (TST) approach used by Poesia et al. in <cit.>. They show that if the pre-trained BERT model is fine-tuned with a loss function modified to consider the similarity between the target DSLs of each NL-DSL pair, the retrieved examples have higher quality and ultimately lead to better generation by the LLM. To get positive and negative samples for fine-tuning, we compared the cosine similarity between all pairs of Natural Language queries in our dataset (shared in Section <ref>), using a pre-trained Transformer model to generate embeddings for the similarity computation. A pair of tuples is considered a positive sample if the similarity between their corresponding NL prompts is greater than 0.7, and negative otherwise. We generated 100k pairs this way and used them as training data for our fine-tuning experiment. The loss function used by TST (Equation <ref> from <cit.>) minimizes the mean-squared error between the model's predicted similarity of the utterances (u_i, u_j) and the similarity of the target programs (p_i, p_j).
Program similarity is denoted by S. Poesia et al. used ASTs to compute program similarity; we instead use a Jaccard score over lists of API function names, to be consistent with our metric definitions (Section <ref>): L_TST(θ) := 𝔼_(i,j)∼D [f_θ(u_i, u_j) − S(p_i, p_j)]^2.

§.§ Grounding with API Metadata In addition to few-shots, we appended API metadata to the metaprompt. This metadata includes the function description along with the parameter keys and their descriptions (see the example API function definition in Appendix <ref>). We followed the two approaches below for selecting the metadata to be added.

§.§.§ API Function Definitions for Few Shots For the few-shot samples selected using the methods described above, we extracted the metadata of each function present in those samples. That is, for the n few-shot samples dynamically added to the metaprompt, we iterated over all the API function names in each of these flows and added their function definitions to the metaprompt. We also modified the metaprompt to add instructions on how to use the function definitions. We want to explore how adding metadata that explains the purpose of each function in the few-shot examples affects the LLM's understanding of the task and its mapping to the user request.

§.§.§ Semantic Function Definitions Another approach for selecting the function definitions to be added to the metaprompt is to retrieve semantically similar functions from a vector database built over the API metadata, similar to the approach followed by LlamaIndex (<cit.>). We created an index of all API definitions and retrieved semantically similar functions by searching this index with the input NL query. Note that this index is different from the Faiss index created for few-shot samples in Section <ref>. We call this approach Semantic Function Definitions (SFD) and compare it with the regular FDs described above. This approach can be especially useful for tail-ish prompts for which no few-shots are retrieved, and it helps us integrate newly released web APIs into our DSL generation framework, making our approach scalable to the changing API landscape.

§ EXPERIMENT DESIGN AND METRICS DEFINITION In this section, we outline the dataset generation process and introduce the metrics we used for estimating code quality; results and discussion follow in the next section. We used Azure AML pipelines to run our experiments. GPT-4 (with a 16k token limit) is used as the LLM, and the metaprompt is kept consistent between experiments for the purpose of the ablation study.

§.§ Dataset Generation We generated a total of 67k samples in the form of (prompt, flow) pairs from workflows created by users. We had many samples of workflow automations created by users across a large set of APIs; we sampled the automations containing 700 publicly available APIs and synthetically generated the corresponding Natural Language prompts using GPT-4. When creating these NL descriptions of the workflows, we also provided the API function definitions as metadata, which ensured that the language of the description captured the functionality of each API. A subset of these synthetic samples was validated by human judges, and we used these checks to improve the metaprompt used for synthetic data generation. For creating the test set, we used the same process, with most of the test set evaluated by human judges to ensure quality.
We followed the same distribution of APIs as in user data to ensure that our metrics are not biased. The test set consists of 1000 samples verified by human judges.

§.§ DSL Generation Quality Metrics We defined three key metrics that focus on code generation quality as well as syntactic accuracy and hallucination rate. We use a compiler to test the syntax and to validate the functions against a database of API names and parameter keys.

§.§.§ Average Similarity Average Similarity measures the aggregated similarity between the predicted flow and the ground-truth flow. The similarity between two flows is defined using a Longest Common Subsequence (LCSS) metric: each flow is reduced to its sequence of API calls, over which the LCSS is computed. The final metric is reported as an average over all test samples. Hallucination and parser failures lead to the sample being discarded and assigned a similarity score of 0: Similarity = LCSS(A, B) / max(|Actions_A|, |Actions_B|), where |Actions_A| is the number of actions in flow A and |Actions_B| is the number of actions in flow B. Note that we are not using the commonly used AST metric for computing code similarity: AST comparison drills down to parameter-level similarity, whereas we wanted to focus on improving function-name retrieval and sequencing, so we defined the metric over API names.

§.§.§ Unparsed rate This metric captures the rate of syntactic errors. A flow that cannot be parsed by the parser is considered unusable for similarity computation. The unparsed rate is computed as %unparsed flows = |Flows_unparsed| / |Flows_total|, where |Flows_unparsed| is the number of flows that could not be parsed and |Flows_total| is the total number of flows in the sample set.

§.§.§ Hallucination rate This metric captures the rate of made-up APIs (function names) and made-up parameter keys in the generated code. Predicting a flow with a hallucinated API name is counted as a failure, and the code is considered invalid. We compute this by counting the number of flows that have at least one hallucinated function name and dividing it by the total number of flows in the sample set: %made-up APIs = |Flows_h| / |Flows_parsed| * 100, where |Flows_h| is the number of flows with hallucinated API names and |Flows_parsed| is the number of flows that were parsed correctly. Similarly, we compute the rate at which parameter keys are hallucinated. Failure to parse parameters does not exclude a flow from the average similarity computation; however, it shows up as run-time errors, and fixing these is beyond the scope of this paper: %made-up parameters = |Flows_hp| / |Flows_parsed| * 100, where |Flows_hp| is the number of flows with hallucinated parameter key names and |Flows_parsed| is the number of flows that were parsed correctly. A minimal sketch of these metric computations is given below.
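In the referenced sketch, flows are represented simply as lists of API names, and the two example flows are hypothetical illustrations, not samples from the actual dataset.

```python
def lcss(a, b):
    """Length of the longest common subsequence of two API-name lists."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = dp[i][j] + 1 if a[i] == b[j] else max(dp[i][j + 1], dp[i + 1][j])
    return dp[m][n]

def flow_similarity(pred_apis, gold_apis):
    """Similarity = LCSS(A, B) / max(|Actions_A|, |Actions_B|); unparsed or
    hallucinated flows are scored 0 before this function is reached."""
    if not pred_apis or not gold_apis:
        return 0.0
    return lcss(pred_apis, gold_apis) / max(len(pred_apis), len(gold_apis))

def made_up_api_rate(parsed_flows, known_apis):
    """% of parsed flows containing at least one API name outside the catalog."""
    bad = sum(1 for flow in parsed_flows if any(a not in known_apis for a in flow))
    return 100.0 * bad / len(parsed_flows)

# Hypothetical flows: the prediction drops one API and hallucinates another.
gold = ["When_an_email_arrives", "Extract_attachment", "Create_invoice_record"]
pred = ["When_an_email_arrives", "Create_invoice_record", "Post_to_channel"]
print(flow_similarity(pred, gold))          # LCSS = 2, max length = 3 -> 0.667
print(made_up_api_rate([pred], set(gold)))  # 100.0: one made-up API name
```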
§ RESULTS In this section, we present the results of the above approaches on a test set of 1000 NL-DSL pairs. These samples, while generated synthetically, have been evaluated by human judges for quality, and were sampled to represent the distribution of APIs in actual product usage. We compare the impact of each ablation in the sections below.

§.§ Impact of number of few-shots on RAG performance We compare the impact of the number of code samples added to the metaprompt under two settings, i.e., 5 few-shots vs. 20 few-shots, measuring results for both the Pre-Trained and the TST model. Results are shared in Table <ref> as the Δ change with respect to the baseline setting, i.e., the Pre-Trained model with 5 few-shots. Looking at row 1 and comparing rows 2 and 3 against the baseline, we can see that adding more few-shots improves the performance of both the Pre-Trained and the TST model on all metrics. The gain is particularly pronounced in reducing the number of made-up API names and made-up API parameter keys. We saw the gain plateau beyond this, and we intend to run more experiments in the future to study this effect better.

§.§ TST vs Pre-trained Model Comparing the rows in Table <ref>, the Pre-Trained and TST models with 20 samples are comparable on Average Similarity but show slight variations in unparsed flow rate and hallucination rates. The TST model performs better at reducing the % of made-up API names, while the Pre-Trained model has a slight edge on the other two metrics. We therefore additionally look at the impact of including API function definitions with both models (see Table <ref>), here using GPT-4 with 5 few-shots. The results are reported as Δ changes compared to the baseline setting, i.e., using the Pre-Trained model to choose 5 few-shot NL-DSL code samples. The TST-with-FD setting performs best overall, with values close to the best in every metric. We see a similar trend in Table <ref>, which reports the results for 20 few-shots. This leads us to conclude that few-shot examples are well complemented by adding the API function definitions of the functions they contain (as described in Section <ref>); the addition predominantly reduces the hallucination rate for API names and parameters, which improves the overall response rate of NL2DSL generation.

§.§ Function Definition vs Semantic Function Definitions As the next step, we compare the impact of Semantic Function Definitions (SFD) against adding only the API function definitions of the selected examples. We used the fine-tuned model as the baseline for this experiment. Based on the insights from the previous step, we used 20 few-shots for TST along with FDs. All results in Table <ref> are shown as Δ improvements compared to the baseline. Looking at the columns for % made-up API names and % made-up parameter keys, we see that the hallucination rate generally increases for the RAG-based approach. However, we need to keep in mind that a model fine-tuned on the function names is hard to beat, as it was trained on 67,000 samples compared to only 20 few-shots added to the RAG model. Within the RAG approaches, comparing rows 1 and 2 ("TST + FD" vs "TST + SFD"), SFD generally results in a slight drop in average similarity and an increase in the unparse rate as well as in the hallucination rate for parameter keys. This indicates that simply adding semantically similar API metadata for a query is not useful for DSL generation: we get better similarity and a lower hallucination rate when we include the API function definitions of the samples selected by TST (row 1). The addition of semantically matching function definitions does, however, tend to reduce the hallucination of API names, suggesting the potential of adding FDs that are not part of the code sample set. This could have implications for improving performance on newly added APIs in the public cloud and help keep the system's performance up to date.
We will explore this topic in a future study.

§ CONCLUSION Concluding from the ablation study shared in Section <ref>, we see that dynamically selected few-shot samples play a very important role in making RAG useful for syntactically correct DSL generation as well as for improving code similarity (Table <ref>). Fine-tuning still outperforms the RAG-based model in terms of lower hallucinations (see Table <ref>, where the fine-tuned model is the baseline). However, parsing errors are more common with the fine-tuned model; this could be because the few-shot examples successfully teach the correct syntax to the LLM. It is, however, surprising that the syntax correctness of RAG is better than that of the fine-tuned model, which was trained on a much larger sample set. It is also interesting to note that this benefit does not transfer to hallucinated API names and their parameter keys, where the fine-tuned model holds the advantage. The increase of 6.76 pts in the parameter hallucination rate due to adding Semantic Function Definitions indicates that adding too many API descriptions can confuse rather than help the LLM (Table <ref>). It also signifies the higher impact of few-shot samples on the DSL generation or API selection scenario compared to simply providing API descriptions. This learning can inform the tool selection or orchestration scenario: providing high-quality examples of sample orchestrations will reduce the failure rate more. Overall, we were able to significantly improve the performance of RAG for DSL generation, with the hallucination rate for API names dropping by 6.29 pts and that for parameter keys dropping by approximately 20 pts (see Table <ref>). The performance of RAG is now comparable to that of the fine-tuned model (see Avg. Similarity in Table <ref>), with the potential to bootstrap quickly. Optimized RAG can also extend the benefits of metaprompt tuning to unseen APIs, reducing the need to fine-tune the model frequently. This will be the focus of our future work.

§ APPENDIX §.§ Sample with computed Average Similarity A sample showing how flow similarity is computed for two flows, Flow A and Flow B. §.§ An example of API metadata We share a sample of API metadata to highlight the details included in the API description provided to the metaprompt.
http://arxiv.org/abs/2407.02282v1
20240702141209
Learning Bipedal Walking on a Quadruped Robot via Adversarial Motion Priors
[ "Tianhu Peng", "Lingfan Bao", "Joseph Humphreys", "Andromachi Maria Delfaki", "Dimitrios Kanoulas", "Chengxu Zhou" ]
cs.RO
[ "cs.RO" ]
Learning Bipedal Walking on a Quadruped Robot via Adversarial Motion Priors Tianhu Peng1, Lingfan Bao1, Joseph Humphreys1, Andromachi Maria Delfaki2, Dimitrios Kanoulas2 and Chengxu Zhou2* This work was supported by the Royal Society [grant number RG\R2\232409] and the UKRI Future Leaders Fellowship [grant number MR/V025333/1]. Please refer to the video for an overview of our framework and results at <https://youtu.be/JYD1RlrQRWM>. For the purpose of open access, the authors have applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising from this submission. 1School of Mechanical Engineering, University of Leeds, UK. 2Department of Computer Science, University College London, UK. *Corresponding author, chengxu.zhou@ucl.ac.uk July 8, 2024

§ ABSTRACT Previous studies have successfully demonstrated agile and robust locomotion in challenging terrains for quadrupedal robots. However, the bipedal locomotion mode for quadruped robots remains unverified. This paper explores the adaptation of a learning framework, originally designed for quadrupedal robots, to perform blind locomotion in biped mode. We leverage a framework that incorporates Adversarial Motion Priors with a teacher-student policy to enable imitation of a reference trajectory and navigation on tough terrain. Our work involves transferring and evaluating a similar learning framework on a quadruped robot in biped mode, aiming to achieve stable walking on both flat and complicated terrains. Our simulation results demonstrate that the trained policy enables the quadruped robot to navigate both flat and challenging terrains, including stairs and uneven surfaces. Legged Robots, Bipedal Locomotion, Deep Reinforcement Learning, Adversarial Motion Priors

§ INTRODUCTION Legged robots exhibit superior terrain adaptability compared to their wheeled and tracked counterparts. Although quadrupedal robots are known for their stability and agility, bipedal robots offer greater flexibility by freeing the upper body for complex tasks. This flexibility suggests the potential for quadrupedal robots to walk in a bipedal gait, using the rear legs for walking and the front legs for manipulation. The primary challenge in adapting quadruped robots for bipedal locomotion stems from their mechanical design constraints. First, unlike typical bipedal robots that have firm, flat feet, quadruped robots often feature soft, point-contact feet that inherently lack stability. Second, the rear legs of quadruped robots are not specifically designed for bipedal walking: their limited range of motion and underactuation contribute to unnatural and unstable bipedal gaits. This design mismatch explains why quadruped robots struggle with bipedal walking modes.
This leads to high requirements for locomotion controllers in bipedal mode. To achieve bipedal walking for quadruped robots, there are primarily two approaches: model-based methods and learning-based methods <cit.>. Model-based methods rely on highly accurate mathematical models, which have proven effective in executing highly dynamic motions on both quadruped and bipedal robots. However, these methods lack robustness and generalization in unseen scenarios, largely due to the difficulty of accurately modeling ground interactions and contact dynamics. In contrast, learning-based methods such as reinforcement learning (RL) provide a more adaptable solution by enabling the exploration of the robots' full dynamics and interactions with the environment, thus offering greater flexibility in controlling complex locomotive behaviors. Early research in RL on legged robots primarily utilized unrealistic models within physical simulators <cit.>. In transitioning to practical bipedal robots and learning natural and robust gaits, previous studies have primarily either designed reference-free learning frameworks built on periodic composite rewards <cit.> or mimicked predefined references <cit.>. Reference-free methods explore various gait patterns efficiently, while reference-based methods leverage prior knowledge to accelerate learning, resulting in efficient policy exploration and robust locomotion skills. These methods incorporate expert information and predefined reference trajectories from motion capture data or trajectory optimization (TO). Generative Adversarial Imitation Learning <cit.> and Adversarial Motion Priors (AMP) <cit.> predict state transitions and evaluate the similarity between reference and agent data, promoting stable gait maintenance. AMP has been applied to biped robots with human reference motions <cit.>, combined with periodic rewards to promote stable gait maintenance. To generate appropriate predefined references, several methods are utilized. Motion capture technology is commonly employed to produce reference data for various types of legged robots <cit.>. This technology captures comprehensive kinematic data from real-world scenarios, enabling versatile data collection that is not confined to a specific robot model. However, adapting this data to different robotic platforms often requires a re-targeting process, which increases both complexity and manual labor. On the other hand, TO with reduced-order <cit.> and full-dynamics models <cit.> has been employed. This approach reduces complexity and eliminates the need for further re-targeting, making it a more efficient solution for generating references. Additionally, compared to full-dynamics models, reduced-order models can decrease the computational resources required for optimization and offer greater generalization across quadruped robots. Regarding predesigned references from optimization methods, various models are utilized: besides HZD-based full-dynamics references <cit.>, reduced-order dynamics such as Single Rigid Body Dynamics (SRBD) <cit.> are also utilized in the training procedure. Based on the reduced-order model, task-space learning focuses on the foot setpoints and base velocity <cit.>. Another significant challenge lies in bridging the sim-to-real gap. To extend the robustness of locomotion and overcome this gap, frameworks using the privileged learning paradigm have been introduced <cit.>.
By combining the strengths of AMP in reference-based learning with privileged learning, there is potential to enable quadrupedal robots to adopt bipedal walking modes. Similar frameworks have been introduced <cit.>, but none have been validated on bipedal robots or on quadruped robots in bipedal mode. Our objective is to train a policy that enables a quadrupedal robot to achieve bipedal locomotion using only its two rear legs, thus freeing its front legs for more complex tasks. This capability aims to enhance the robot's versatility and functionality in various practical applications. This paper presents a novel framework that enables quadrupedal robots to achieve robust and agile blind bipedal locomotion on flat terrain. We adopt a teacher-student policy framework, in which privileged information that the robot cannot directly access is encoded; the student policy uses historical observations to infer this privileged information, thereby enhancing robustness. Additionally, we integrate the AMP training framework to learn and imitate the style of reference data generated through TO based on an SRBD model. Different from previous work <cit.> that relies on assistive devices, this comprehensive training framework equips the policy to support agile bipedal motions on quadrupedal robots. In summary, the primary contribution of this paper is a novel framework (shown in Fig. <ref>) that allows quadrupedal robots to perform robust and agile blind bipedal locomotion on flat terrain using only their rear legs. This bipedal mode frees the front legs for more complex tasks, significantly enhancing the robot's versatility and functionality. In addition, we evaluate our model on the A1 quadrupedal robot with a biped gait in the Isaac Gym simulation environment, demonstrating agile and robust movements.

§ METHODOLOGY §.§ Reinforcement Learning on Legged Robots The task of learning legged locomotion poses significant challenges due to the complex environment and limitations in sensor data. To address this, a partially observable Markov decision process (POMDP) framework is adopted, denoted as (s_t, a_t, P, r_t, p_0, γ), where s_t represents the state at time step t, a_t is the action taken by the agent, P(s_t+1|s_t,a_t) describes the system dynamics, predicting the next state based on the current state and action, r_t(s_t,a_t,s_t+1) is the reward function, quantifying the immediate benefit of taking action a_t in state s_t leading to s_t+1, p_0 denotes the initial state distribution, and γ is the discount factor, determining the importance of future rewards in the decision-making process. The objective of RL in this context is to identify an optimal policy π_θ, parameterized by θ, that maximizes the expected discounted return over future trajectories. This is formalized by the objective function J(θ): J(θ) = 𝔼_π_θ [∑^∞_t=0 γ^t r_t]. In the state space, the teacher state s_t^teacher includes the proprioceptive observation o_t^p ∈ℝ^48, the privileged state s_t^p ∈ℝ^45, and the terrain information o^e_t ∈ℝ^187. The proprioceptive observation o_t^p ∈ℝ^48 encompasses critical data such as the orientation of the gravity vector, base linear and angular velocities in the robot's frame, joint positions and velocities, and the previous action a_t-1∈ℝ^12 executed by the current policy.
The privileged state s_t^p ∈ℝ^45 includes additional key details that the physical robot cannot directly access in the real-world environment: ground friction and restitution coefficients, contact and external forces at specific positions on the robot, and collision state information. The terrain information o^e_t ∈ℝ^187 contains 187 height measurements sampled on a grid around the robot base, relative to the ground. In contrast, the student policy uses a history of proprioceptive observations H_t: [O_t-N, O_t-N-1, ..., O_t] to approximate the privileged information. By learning from this historical data, the student policy aims to imitate the inaccessible privileged state, enhancing its decision-making capabilities in the absence of direct access to certain environmental variables. Regarding the action space, the policy action a_t is a 12-dimensional target joint position offset added to a time-invariant nominal joint position. This target guides the joint PD controller, which uses fixed gains to compute torque commands for motor position control.

§.§ Adversarial Motion Priors and Reward Design The AMP framework utilizes adversarial learning to train two neural networks, a generator and a discriminator, in a competitive setup. The generator produces motion predictions for the robot, while the discriminator evaluates their quality and realism. Style rewards measure the similarity between the demonstrator's behavior and the robot's, with higher similarity yielding more reward. Using the reference dataset D, the AMP-based style reward function encourages the robot to replicate the same gait style. Following <cit.>, a neural-network discriminator D_φ predicts whether a state transition (S_t, S_t+1) comes from the dataset D or was generated by the agent A. Each state S_t^AMP∈ℝ^31 includes joint positions, joint velocities, base linear velocity, base angular velocity, and base height relative to the terrain. To avoid mode collapse, the dataset D contains only trot gait motion clips. The discriminator's training objective includes a gradient penalty to enforce smoothness and is defined as: min_φ 𝔼_(s_t,s_t+1)∼ D[(D_φ(s_t,s_t+1) - 1)^2] + 𝔼_(s_t,s_t+1)∼ A[(D_φ(s_t,s_t+1) + 1)^2] + (α^gp/2)·𝔼_(s_t,s_t+1)∼ D[‖∇_φ D_φ(s_t,s_t+1)‖_2^2], where α^gp is a manually specified coefficient (α^gp = 10). The style reward is defined by: r_t^s[(s_t,s_t+1) ∼ A] = max[0, 1-0.25(d^score_t - 1)^2], where d^score_t = D_φ(s_t, s_t+1), so that the reward is scaled to the range [0,1]. The overall reward function is: r_t = r_t^g + r_t^s + r_t^l, where r_t^g is the task reward, r_t^s is the style reward, and r_t^l is the regularization reward. Task rewards typically include tracking base linear and angular velocities: r_t^g = ω^v exp(-|v̂_t^xy - v_t^xy|) + ω^ω exp(-|ω̂_t^z - ω_t^z|), where ω^v and ω^ω are coefficients, and v̂_t^xy and ω̂_t^z are the velocity commands. Regularization rewards promote safe, smooth motion and minimize energy costs, enhancing the adaptability and efficiency of learned behaviors for real-world applications; this component contributes to the robustness and effectiveness of the overall motion. A minimal sketch of the style reward and gradient penalty defined above follows.
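The referenced PyTorch sketch is given below. It assumes the discriminator takes the concatenated transition (s_t, s_t+1) as a single input vector, which is an illustrative choice rather than the authors' exact interface; the gradient penalty is taken with respect to the discriminator input, as in AMP.

```python
import torch

def style_reward(discriminator, s_t, s_t1):
    """r_s = max(0, 1 - 0.25 * (d - 1)^2), with d = D_phi(s_t, s_t+1),
    so that the reward is bounded in [0, 1]."""
    with torch.no_grad():
        d = discriminator(torch.cat([s_t, s_t1], dim=-1))  # assumed input layout
    return torch.clamp(1.0 - 0.25 * (d - 1.0) ** 2, min=0.0)

def gradient_penalty(discriminator, s_t, s_t1):
    """Penalty (alpha^gp / 2 = 5) on the gradient of the discriminator output
    with respect to its input, evaluated on expert transitions from D."""
    x = torch.cat([s_t, s_t1], dim=-1).requires_grad_(True)
    d = discriminator(x)
    grad = torch.autograd.grad(d.sum(), x, create_graph=True)[0]
    return 5.0 * grad.pow(2).sum(dim=-1).mean()
```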
§.§ Reference Generation In our research, we refine the imitation and learning of reference data for precise bipedal locomotion. Using a single TO formulation from previous work <cit.>, we generate walking and running gaits for the A1 biped robot, focusing on its back legs' trajectories. We streamline this process with TOWR (Trajectory Optimization for Walking Robots) <cit.>, which eliminates the need for manual tuning. TOWR generates dynamically feasible, energy-efficient motions by optimizing smooth and stable trajectories. To improve imitation fidelity, we integrate inverse kinematics into TOWR, producing joint-space data that closely mimics the reference trajectories. Our generated trajectories encompass various locomotion patterns, including forward walking and two distinct running gaits, each lasting 2.4 seconds. Utilizing TO for motion dataset generation offers several advantages. Firstly, it enables precise matching of the state space between the simulated agent and the demonstrator, leveraging kinematic and dynamic models to refine trajectory suitability. Moreover, this approach circumvents the complexities associated with other motion re-targeting techniques, ensuring a more seamless and accurate replication of desired motions. § FRAMEWORK AND TRAINING §.§ Learning Framework §.§.§ Teacher Policy Architecture The teacher policy π_θ^teacher is trained using Proximal Policy Optimization <cit.> with the total reward r_t as specified in Section <ref>. During training, the teacher performs a rollout in the environment to generate a state transition (s_t^AMP, s_t+1^AMP). This state transition is then fed into the discriminator described in Section <ref>. The teacher policy consists of three Multilayer Perceptron (MLP) networks. Two of these MLPs encode low-dimensional latent representations: l_t^e ∈ℝ^16 for terrain data and l_t^p ∈ℝ^8 for privileged data. Using two separate MLPs for encoding helps mitigate the information loss that often occurs during compression, thereby preserving crucial and necessary information. The preservation of essential data significantly aids the student policy in reconstructing the latent representations, facilitating a more efficient and accurate learning process. The third MLP acts as a low-level network, utilizing the proprioceptive observation state along with the two encoded latent representations to generate the teacher's action in the environment. The learning framework is shown in Fig. <ref>. §.§.§ Student Policy Architecture The student policy is designed to emulate the teacher policy, replicating actions without relying on privileged state and terrain information. Throughout the student training process, a supervised approach is employed, minimizing two key losses: an imitation loss and a reconstruction loss, sketched in code below. The imitation loss ensures that the student policy closely mimics the action a_t^teacher∈ℝ^12 dictated by the teacher's policy. Simultaneously, the reconstruction loss encourages the memory encoder within the student's policy to faithfully reproduce the latent representation l_t^teacher∈ℝ^24 employed by the teacher, which consists of the terrain latent l_t^e∈ℝ^16 and the privileged latent l_t^p∈ℝ^8. The overarching architecture comprises a memory encoder and a low-level MLP <cit.> that maintains an identical structure to the teacher's low-level network. The memory encoder is implemented by stacking a sequence of 45 historical observations H_t: [O_t-N,O_t-N-1,...,O_t-1] as the input to an MLP.
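The two student losses can be written compactly; the following PyTorch-style sketch is illustrative only: the network interfaces and the weighting between the two terms are assumptions, not details given in the paper.

```python
import torch
import torch.nn.functional as F

# Assumed interfaces (hypothetical): teacher/student expose an encoder and a low-level MLP.
# a_teacher: teacher action (12-dim); l_teacher: teacher latent (24-dim = 16 terrain + 8 privileged).
def student_distillation_loss(student, teacher, obs_history, obs_t, w_rec=1.0):
    l_student = student.memory_encoder(obs_history)      # reconstruct the 24-dim latent
    a_student = student.low_level(obs_t, l_student)      # student action

    with torch.no_grad():
        l_teacher = teacher.encode_latents(obs_t)        # l_t^e (16) and l_t^p (8), concatenated
        a_teacher = teacher.low_level(obs_t, l_teacher)  # teacher action

    imitation = F.mse_loss(a_student, a_teacher)         # match teacher actions
    reconstruction = F.mse_loss(l_student, l_teacher)    # match teacher latents
    return imitation + w_rec * reconstruction            # w_rec is a placeholder weight
```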
§.§ Training and Implementation Details §.§.§ Termination The robot's locomotion training is governed by specific termination conditions designed to ensure safety and effectiveness. Episodes are terminated upon detecting collisions involving the trunk, upper limbs, thighs, and calves, reflecting the study's focus on bipedal locomotion with only two-foot ground contact. §.§.§ Domain Randomization To facilitate the transfer of learned behaviors from simulation to the real world, domain randomization has been employed. This approach involves randomizing various parameters crucial for robot locomotion, such as terrain friction, base mass, joint PD controller gains, ground friction, restitution, and perturbations to the robot's base velocity. During training, sampled velocity vectors are added to the robot's current base velocity at random intervals. The specific randomization variables and their corresponding uniform distribution ranges are detailed in Table <ref>, enabling robust policy adaptation and testing in diverse real-world environments. §.§.§ Simulation Setup We trained 500 parallel agents on different types of terrain with increasing difficulty using the Isaac Gym simulator <cit.> for 26,000 iterations in total, which took 15.88 hours. Fig. <ref> shows the simulation in the different terrain types. Each RL episode lasts for a maximum of 1000 steps and terminates early if it reaches the termination criteria. The control frequency of the policy is 50 Hz in the simulation. All training was performed on a single NVIDIA RTX 4070 GPU. In A1 locomotion scenarios, actions are represented by a_t ∈ ℝ^12, a 12-dimensional vector specifying the desired positional adjustments for each actuated joint, as dictated by the Proportional-Derivative (PD) controller. § RESULTS In our experiments within the Isaac Gym simulation environment, we evaluated the performance of the updated policy for a quadrupedal robot in biped locomotion. The results, depicted in Table <ref>, show varying success rates and tracking accuracy across different terrains and speeds. The robot generally performs well at lower speeds, with higher success rates on less challenging terrains. However, success rates and tracking accuracy decrease as speed and terrain difficulty increase, notably on sloped terrains and obstacles due to mechanical design limitations. The uniform terrain and discrete obstacle terrain were specifically chosen for detailed analysis as they are highly representative of uneven terrains. The uniform terrain has uneven features, while the discrete obstacle terrain combines elements of both flat and stepped obstacles, akin to stairs, providing a comprehensive challenge for evaluating the robot's locomotion policy. Fig. <ref> presents the base linear velocity in the x direction and the base angular velocity in yaw for the robot on both uniform and discrete obstacle terrains. During the push test, noticeable changes in these velocities occur, reflecting the robot's dynamic response to the external force. The robot's base linear velocity tracking shows acceptable performance, with the measured velocities closely following the commanded values. The base angular velocity tracking, while not as accurate as the linear velocity, still demonstrates the robot's ability to follow the commanded trajectory to a reasonable extent. The large spike observed around the 2-second mark is attributed to the external push force. Despite this disturbance, the robot quickly regains stability, indicating robust recovery capabilities of the locomotion policy. This ability to reject disturbances is crucial for maintaining stable locomotion in unpredictable environments. The contact force data, shown in Fig.
<ref>, provides additional insights into the interaction between the robot's feet and the terrain. The vertical ground reaction forces F_z were measured for both the rear-left (RL) and rear-right (RR) legs on discrete obstacle and uniform terrains. The data reveals asymmetry in the contact forces, indicating that the learned policy does not exhibit a perfectly symmetrical gait. To address this issue and enhance the robot's performance, future work could incorporate a symmetry reward function or a reward based on base orientation tracking into the learning algorithm. The push test results and performance analysis provide additional insights into the robot's capabilities and resilience. As illustrated in Fig. <ref>, the robot's response to a 100 N push applied along the x-axis over a duration of 0.1 seconds is captured, showcasing the policy's effectiveness in handling external disturbances. Despite the significant push, the robot manages to stabilize itself quickly, demonstrating the robustness and resilience of the developed locomotion policy. The results indicate that the legged robot's locomotion policy exhibits satisfactory performance in both base linear velocity tracking and recovery from external disturbances. The ability to maintain stability and follow commanded trajectories on different terrains, as well as the efficient recovery from disturbances, underscores the robustness of the developed policy. Overall, the experimental outcomes validate the effectiveness of our legged locomotion policy in achieving stable, efficient, and resilient movement across various terrains, even under external disturbances. These findings contribute valuable knowledge towards the development of energy-efficient and robust legged robots capable of operating in diverse and challenging environments. § CONCLUSION AND FUTURE WORK In this paper, we proposed a novel framework for learning robust, agile, and natural bipedal locomotion skills for quadruped robots in simulation. Utilizing a teacher-student learning framework with privileged and terrain information, we enhanced the robustness of the learned policy and helped bridge the sim-to-real gap. By integrating adversarial motion imitation, the learned gait mimics the style and behavior of a TO reference gait. Our results demonstrate high-performance blind locomotion in a quadruped robot in biped mode. Overall, our findings highlight the potential of imitation learning and TO in achieving agile and robust locomotion across diverse robotic platforms. Future work will focus on developing more robust biped motion capabilities on uneven terrain, transferring these capabilities to physical robots, and refining the transition from quadrupedal to biped mode to enhance legged robots' versatility.
http://arxiv.org/abs/2407.01810v1
20240701212044
Freeview Sketching: View-Aware Fine-Grained Sketch-Based Image Retrieval
[ "Aneeshan Sain", "Pinaki Nath Chowdhury", "Subhadeep Koley", "Ayan Kumar Bhunia", "Yi-Zhe Song" ]
cs.CV
[ "cs.CV" ]
Freeview Sketching: View-Aware Fine-Grained Sketch-Based Image Retrieval Aneeshan Sain (https://aneeshan95.github.io/), Pinaki Nath Chowdhury (http://www.pinakinathc.me/), Subhadeep Koley (https://subhadeepkoley.github.io/), Ayan Kumar Bhunia (https://ayankumarbhunia.github.io/), Yi-Zhe Song (https://personalpages.surrey.ac.uk/y.song/) SketchX, CVSSP, University of Surrey, United Kingdom. {a.sain, p.chowdhury, s.koley, a.bhunia, y.song}@surrey.ac.uk 2 July 2024 § ABSTRACT In this paper, we delve into the intricate dynamics of Fine-Grained Sketch-Based Image Retrieval (FG-SBIR) by addressing a critical yet overlooked aspect – the choice of viewpoint during sketch creation. Unlike photo systems that seamlessly handle diverse views through extensive datasets, sketch systems, with limited data collected from fixed perspectives, face challenges. Our pilot study, employing a pre-trained FG-SBIR model, highlights the system's struggle when query-sketches differ in viewpoint from target instances. Interestingly, a questionnaire shows that users nonetheless desire autonomy, with a significant percentage favouring view-specific retrieval. To reconcile this, we advocate for a view-aware system, seamlessly accommodating both view-agnostic and view-specific tasks. Overcoming dataset limitations, our first contribution leverages multi-view 2D projections of 3D objects, instilling cross-modal view awareness. The second contribution introduces a customisable cross-modal feature through disentanglement, allowing effortless mode switching. Extensive experiments on standard datasets validate the effectiveness of our method. § INTRODUCTION Sketch, a versatile medium, stands out as an exceptional input query modality, especially for image retrieval. As a complement to text, it offers a distinct level of fine-grained expressiveness, making it a superior input modality <cit.> for fine-grained image retrieval. The past decade has witnessed extensive research in Fine-Grained Sketch-Based Image Retrieval (FG-SBIR) <cit.>, delving into the unique characteristics of sketch data, such as abstraction <cit.>, style <cit.>, data scarcity <cit.>, and drawing order <cit.>. However, in this paper, we take a departure from the intricacies of sketch-specific traits and shift our focus to a fundamental aspect of the human experience – their interaction with the system. Specifically, we address a noteworthy challenge neglected thus far in the literature: “Which view should I sketch?" – a question that we hear a lot! The “view” predicament is inherently intuitive – just as individuals carefully choose optimal camera angles when capturing photos, they also deliberate on the best view to portray an object before sketching <cit.>. In contemporary photo-based systems, this view problem is typically addressed in a data-driven manner, leveraging extensive image datasets to ensure comprehensive coverage of various object views, essentially making them view-invariant <cit.>. However, this seamless solution does not extend to sketches.
The constrained nature of available sketch data, often collected from fixed viewpoints, introduces a significant bias toward these limited perspectives. Consequently, if your input sketch is executed from an unintended view, the system will respond less forgivingly (<ref>). [Figure: While searching for a specific photo, users are often confused as to `Which view should I sketch?' Adding to their worries, models trained on existing FG-SBIR datasets that have sketch-photo pairs matched against a fixed view fail to fetch a photo even when it is present in the gallery, unless its view matches that of the query-sketch exactly. We aim to alleviate this issue and incorporate view-awareness in the FG-SBIR paradigm.] This assertion is validated through our pilot study. We employed a FG-SBIR model pre-trained on existing fine-grained sketch datasets, featuring single-view matched sketch-photo pairs. The model was then evaluated on a carefully curated gallery set where query-sketches deliberately did not align with the view of their corresponding target instances. As anticipated, the baseline FG-SBIR model faced substantial challenges, with a majority of retrieval attempts failing to identify the correct target instance (dropping top-1 accuracy <cit.> by 27.15%). Upon scrutinising the results, a clear pattern emerged – incorrect photos retrieved at rank-1 were not only structurally similar but, more crucially, shared the same view as that of the query-sketch, echoing earlier findings in the field <cit.>. A user experience questionnaire however revealed somewhat contrasting conclusions. While the baseline FG-SBIR <cit.> model exhibited a bias towards shape-matching <cit.>, users expressed a desire for autonomy within the system. A notable percentage (64.56%) of participants, particularly those adept at sketching, indicated a preference for view-specificity as a feature when using sketches for retrieval. Essentially, they articulated a desire for the system to precisely retrieve what they had sketched, aligning with their specific viewpoint preferences. Our approach to addressing the “view” problem therefore does not lean towards creating a system that completely ignores views (view-agnostic) or one that is strictly sensitive to view changes (view-specific). Instead, we advocate for a view-aware system. In other words, we aim for a system that can seamlessly adapt to both scenarios simultaneously. We envision a system capable of handling both view-agnostic and view-specific tasks (see <ref>) without requiring any redesign or additional training – simply flipping a switch should suffice! Establishing view-awareness poses a substantial challenge, primarily due to the limitations inherent in existing fine-grained datasets: (i) absence of view-specific annotations or information in sketch-photo pairs, crucial for developing view-awareness, and (ii) for every sketch-photo pair, the sketch is created <cit.> from a fixed single point of view, reflecting structural matching against a single photo captured from that specific perspective. As our first contribution, we aim to alleviate this by leveraging sketch-independent multi-view 2D rendered projections of 3D objects <cit.>, to gain view-aware knowledge and associate it with the cross-modal sketch-photo discriminative knowledge learnt using standard FG-SBIR datasets <cit.>, thus distilling cross-modal view awareness into the pipeline.
Our second contribution revolves around designing a cross-modal feature that is customisable for both view-agnostic and view-specific retrieval simultaneously. To achieve this, we adopt a feature disentanglement framework commonly found in the literature <cit.>. In this framework, we disentangle sketch features into two distinct parts: content and view. The content part encodes the semantics present in the sketch, while the view part encapsulates view-specific features. At inference time, the key decision lies in selecting which part(s) of the feature to utilise – choosing only the content results in view-agnostic retrieval, whereas combining content and view retains view specificity. Essentially, it is a flip of a switch! While this sounds straightforward, implementing it is non-trivial. To facilitate this disentanglement, we introduce two specific designs, particularly crucial in the absence of ample sketch view data. [Figure: Our framework. We aim to handle both view-agnostic and view-specific retrieval using one model.] Concretely speaking, first, we enforce instance-consistency across multi-view 2D rendered projections of a 3D model (<ref>) by constraining their disentangled content parts to sit together, and also introduce a cross-view reconstruction objective, which merges the content of one projection with the view of another to reconstruct the latter. Second, we impose a view-consistency objective between a matching sketch-photo pair, by constraining their view parts to be closer in the embedding space, thus ensuring the associativity of paired sketch-photo views. In summary, (i) we propose a view-aware system designed to address the often-overlooked challenge of choosing the appropriate view for sketching, accommodating both view-agnostic and view-specific retrieval seamlessly. (ii) we introduce the use of multi-view 2D rendered projections of 3D objects to overcome the limitations of existing datasets, promoting cross-modal view awareness in the FG-SBIR pipeline. (iii) we present a customisable cross-modal feature through a disentanglement framework, allowing users to effortlessly switch between view-agnostic and view-specific retrieval modes. § RELATED WORKS Fine-Grained SBIR: Although Sketch-based Image Retrieval (SBIR) began as a category-level task <cit.>, research quickly shifted to fine-grained SBIR, where the aim is to retrieve the one matching photo from a gallery of same-category photos with respect to a query-sketch. Starting from deformable part models <cit.>, numerous deep approaches have emerged <cit.>, centred around triplet-loss-based deep Siamese networks <cit.> that learn a joint sketch-photo manifold. Encouraged by new datasets with fine-grained sketch-photo association <cit.>, FG-SBIR improved further with hybrid cross-domain image generation <cit.>, textual tags <cit.>, local feature alignment strategies <cit.>, attention mechanisms involving higher-order <cit.> or auxiliary losses <cit.>, and mixed-modal jigsaw-solving-based pre-training strategies <cit.>, to name a few. Apart from addressing specific sketch-traits, like hierarchy of details <cit.>, style-diversity <cit.>, or redundancy of strokes <cit.>, works explored various application scenarios like cross-category generalisation <cit.>, overcoming data-scarcity <cit.>, early-retrieval <cit.>, and recently zero-shot cross-category FG-SBIR <cit.>.
Contemporary research has extended FG-SBIR to scene-level retrieval, modelling cross-modal region associativity <cit.> and enhancing it further using text as an optional query <cit.>. These works, however, have largely ignored the question of `view-awareness' in the context of FG-SBIR. In this work, we thus aim to incorporate `view-awareness' in FG-SBIR as two branches, namely view-agnostic FG-SBIR and view-specific FG-SBIR. Learning 3D knowledge from 2D images: The concept of view-awareness arises from the underlying motivation of representing an object holistically from all view-perspectives. In this regard, numerous works have attempted to understand the task of 3D shape retrieval <cit.>. 3D shape retrieval methods can be broadly divided into 3D model-based methods that directly learn shape features from 3D data formats like polygon meshes <cit.>, voxels <cit.>, and point-clouds <cit.>, and view-based methods <cit.>. While earlier works applied view-based similarity against pre-processed 3D shape descriptors <cit.> to retrieve 3D models using a 2D image, others encouraged using fewer views by clustering <cit.>. Recent improvements include real-time 3D shape search engines based on the 2D projections of 3D shapes <cit.> or using max-pooling to aggregate features of different views from a shared CNN like MVCNN <cit.>. Limited by the availability of multiple views at large scale, single-view 3D shape learning has gained traction. While SSMP <cit.> uses adversarial regularisation for shape learning, it is often unstable for complex structures. Others include semantic regularisation for implicit shape learning <cit.>, or a 3-step learning paradigm <cit.> for scalable shape learning including synthetic data pre-training. However, most such methods are focused on images alone, unlike our cross-modal retrieval setup. Moreover, curating multi-view photos of an object is relatively easier than our case of collecting multi-view sketches – almost all FG-SBIR datasets <cit.> contain only one photo matched against a sketch, from a fixed view. Furthermore, sketch being quite sparse and lacking visual cues, aligning it with a 3D model is in itself quite challenging. In light of such limitations, we advocate for a simpler strategy without training on 3D descriptors, relying only on 2D views of unpaired photos to instill view-awareness in FG-SBIR. Sketch-Based 3D Shape Retrieval: Being similar to FG-SBIR in using sketch as a fine-grained query, we explore a few relevant works on this well-explored sketch-related 3D application, where the aim is to retrieve 3D shapes given a sketch query. While prior studies primarily focused on category-based retrieval, aiming to retrieve shapes of the same category as that of the query sketch <cit.>, recent deep learning methods attended to mapping sketch and shape features to a common shared embedding space <cit.>. A major concern similar to ours in this regard, however, is the issue of view-variance, i.e., multiple sketches can be drawn from various views of one 3D shape. To alleviate the same, earlier works encoded rendered 2D projections <cit.> via CNNs <cit.>, as Wasserstein barycenters <cit.>, or used triplet-center loss <cit.>. Others include learning intra- and cross-domain similarities from just two views <cit.>, or computing global 3D-shape descriptors <cit.>. Shifting to the fine-grained paradigm further intensifies challenges owing to a sketch's unconstrained free-hand deformations <cit.> and lack of large-scale datasets <cit.>.
While <cit.> approaches this by creating a dataset of 4,680 sketch-3D pairs with multiple 2D shape projections for improved retrieval, <cit.> learns the correspondence between a set of 3D points and their 2D projections. In such settings, however, learning/optimising over 3D shape data is mostly pivotal to the training procedure. Our motivation, however, is to keep the training paradigm restricted to 2D for simplicity – use only the 2D projections of an object while accessing only one single-view sketch per instance, as available in existing FG-SBIR datasets. § PROBLEM AND ANALYSIS §.§ Background on FG-SBIR Given a query-sketch (s), FG-SBIR <cit.> refers to the task of retrieving a particular matching instance from a gallery of photos of the same category, where the underlying convention is to associate one 2D photo per sketch. This convention is followed in standard FG-SBIR datasets like QMUL-ChairV2 <cit.>, QMUL-ShoeV2 <cit.> and Sketchy <cit.> that comprise k instance-level sketch/photo pairs as {s_i, p_i}_i=1^k (per category for Sketchy <cit.>), where the sketches are drawn from a fixed view corresponding to the object-photo. A baseline FG-SBIR framework <cit.> usually aims to learn an embedding function ℰ_θ: ℐ→ℝ^d that maps a rasterised sketch/photo, ℐ∈ℝ^H× W× 3, to a d-dimensional feature f_ℐ∈ℝ^d. ℰ_θ(·) is usually a CNN <cit.> or Transformer <cit.> based encoder, shared between the sketch and photo branches, and trained over a triplet-loss based objective <cit.> (ℒ_Tri), where the distance δ(a,b) = ||a-b||_2 between features of the query-sketch (f_s) and its matching photo (f_p) is reduced, while that from a random photo-feature (f_n) is increased in the joint sketch-photo embedding space, as: ℒ_Tri = max{0, μ + δ(f_s, f_p) - δ(f_s, f_n) } , where μ is a margin hyperparameter. During inference, all photo-features from the test-gallery ({.., f_p^i, ..}) are pre-computed using the trained encoder ℰ_θ(·) and ranked according to their distance from the query-sketch feature (f_s). Acc@q is then measured as the percentage of sketches retrieving their true-matched photo within the top q ranks. For clarity, we dub this as 2D FG-SBIR. A minimal code sketch of this baseline objective and evaluation follows below.
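To make the baseline concrete, here is a minimal sketch of the triplet objective and the Acc@q evaluation described above; the encoder is left abstract, and the margin value is a placeholder rather than one reported by the paper.

```python
import torch
import torch.nn.functional as F

def triplet_loss(f_s, f_p, f_n, margin=0.3):
    """L_Tri = max{0, mu + d(f_s, f_p) - d(f_s, f_n)} with Euclidean distance."""
    d_pos = torch.norm(f_s - f_p, dim=-1)
    d_neg = torch.norm(f_s - f_n, dim=-1)
    return F.relu(margin + d_pos - d_neg).mean()

def acc_at_q(f_sketches, f_gallery, true_idx, q=1):
    """Fraction of sketches whose true-matched photo ranks within the top q."""
    dists = torch.cdist(f_sketches, f_gallery)   # pairwise sketch-to-gallery distances
    ranks = dists.argsort(dim=1)[:, :q]          # top-q gallery indices per sketch
    hits = (ranks == true_idx.unsqueeze(1)).any(dim=1)
    return hits.float().mean().item()
```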
What is wrong with 2D FG-SBIR? Being an instance-level matching problem <cit.>, FG-SBIR models are generally trained on fixed single-view sketch-photo pairs. Consequently, the naive setup of FG-SBIR as in <ref> assumes one-to-one correspondence between sketch-photo pairs (s_i, p_i), and typically focuses on shape-matching between them <cit.>. However, as an object in itself is a 3D concept, each instance i ∈ I can be sketched from M_i 2D views as 𝐒_i = {s^1_i, …, s^M_i_i }, corresponding to M_i 2D photos as 𝐏_i = {p^1_i, …, p^M_i_i } respectively. Now, given that a model under naive conditions (<ref>) is trained to match sketches against only one fixed-view photo, with its primary focus on shape-matching, the research question arises: `Can such a model's performance generalise to query-sketches drawn from views different from its true-match photo in the gallery?' To answer this, we conduct the following study. Pilot Study: We design a study where we first take a FG-SBIR model pre-trained on fixed single-view sketch-photo pairs of the QMUL-ChairV2 dataset <cit.>, following the basic FG-SBIR training paradigm (<ref>) on a VGG-16 <cit.> backbone. Next we curate a test-set using chairs from the dataset by Qi <cit.>, which has sketches drawn from 0°, 30° and 75° and a 3D shape, per chair-instance. Photos are freely rendered as 24 2D projections (0°, 15°, ⋯, 360°) of the 3D shape. We now evaluate the model on two setups: (i) `Existing' – a simple test-set where for every instance, photos matching the view of the query-sketches (0°, 30° and 75°) are present in the test-gallery alongside other views, and (ii) `Pilot' – where they are absent. While `Existing' scores a satisfactory accuracy (Acc@1) of 58.25%, `Pilot' drops by 27.15%, proving that an FG-SBIR model trained on fixed single-view sketch-photo pairs cannot generalise to sketches whose view doesn't match their target photo. §.§ Problem Definition Given our findings from the pilot study, we re-examine the problem statement of Fine-Grained Sketch-Based Image Retrieval (FG-SBIR). Currently, FG-SBIR is defined as the task of retrieving the particular target instance from a gallery of photos – one photo per sketch. This definition has led to state-of-the-art FG-SBIR methods that are mostly shape-biased – using shape matching for retrieval <cit.>. While this definition holds in a 2D setup, in 3D reality, a 3D object can be represented via multiple 2D photos from different views, resulting in different sketches for the same instance. Therefore, in this work, we for the first time aim to incorporate view-awareness into the FG-SBIR paradigm. Furthermore, this paves the way for two new setups where, given a sketch, we (i) retrieve a photo of the instance irrespective of the view in which it is present, i.e., even if its view does not match that of the sketch, and (ii) retrieve that photo of the instance whose view exactly matches that of the sketch. View-Agnostic FG-SBIR: Given a 2D sketch (s_i^m) of instance i ∈ I, view m ∈ [1,M_i], and a gallery of multi-view photos from multiple instances { p_j^m | ∀ j ∈ I , m ∈ [1,M_i]}, we aim to retrieve any of all photos 𝐏_i = { p_i^1, …, p_i^M_i} of the target instance i ∈ I, irrespective of their view. View-Specific FG-SBIR: Given a 2D sketch (s_i^m) with instance i ∈ I, of view m ∈ [1,M_i], and that same gallery of photos, we aim to retrieve that photo of target instance i ∈ I whose view matches that of the sketch, p_i^m. Challenges: Existing fine-grained datasets like QMUL-ShoeV2 <cit.>, QMUL-ChairV2 <cit.> or Sketchy <cit.> hold numerous sketch-photo pairs with fine-grained association. However, two major limitations here bottleneck training for view-awareness in FG-SBIR: (i) they lack any view-specific annotations <cit.> or information, needed to identify views. (ii) all sketches are drawn from a fixed single point of view matching that of their paired photo, with just one photo <cit.> per instance. This lacks the view-diversity across the same instance needed to instill view-awareness. We thus aim to learn view-aware knowledge from sketch-independent multi-view 2D projections of 3D objects and bridge it with the sketch-photo cross-modal dataset to impart cross-modal view awareness into the FG-SBIR <cit.> pipeline. Training Dataset: To alleviate this dataset issue, we need one dataset for learning the cross-modal sketch-photo association (𝒟_CM) and another containing 2D projections freely rendered from pre-existing 3D shapes (𝒟_2D). Essentially, 𝒟_CM = {s_i,p_i}_i=1^N_CM, which contains N_CM sketch (s)-photo (p) pairs with fine-grained association. Whereas 𝒟_2D houses M_i projections for every 3D shape γ_i of N_2D shapes, as 𝒟_2D = {{p_i^j}_j=1^M_i}_i=1^N_2D, where p_i^j = ℛ(γ_i,v_j). Here, {v_j}_j=1^M_i refers to the set of selected views and ℛ(·,·) is a 2D-projection rendering function from <cit.>.
§ PROPOSED METHODOLOGY Overview: We aim to devise a framework that learns to incorporate view-awareness in the FG-SBIR training paradigm. Existing literature studying 3D shapes for view-variance usually involves learning a shape descriptor from multi-view images <cit.> or generating complex 3D point-sets from single-view images <cit.>. The scarcity of multi-view sketches, and the fact that sketches lack the visual cues of images, limit the efficiency of such methods. We thus argue that dealing effectively with the view-variations for cross-modal sketch-photo association requires a disentanglement model that explicitly attends to the view-semantic. Furthermore, to avoid the complexity of learning 3D-shape descriptors usually followed in the parallel shape-retrieval <cit.> literature, we stick to a 2D paradigm, given that the target is 2D-image retrieval (not 3D shapes). Accordingly, we aim to design a cross-modal view-aware disentanglement model (<ref>) that decomposes a photo (p) or rasterised sketch (s) into a view-semantic part suitable for modelling its `view' (v) and another component that holds only its content (c). Such components are trained following a carefully designed learning paradigm (<ref>) and used accordingly over a distance metric (<ref>) for the respective view-agnostic (VA) or view-specific (VS) retrieval. Model Architecture: To design a view-aware cross-modal encoder ℰ_θ(·) that can disentangle the input image into two components – view and content – we formally learn the embedding function ℰ_θ: ℐ→ℝ^d that maps an input photo or a rasterised sketch (ℐ∈ℝ^H×W× 3) to two d-dimensional features, where one represents the view of the input image (f_v^ℐ) and the other holds its content (f_c^ℐ), as f_c^ℐ, f_v^ℐ = ℰ_θ(ℐ). Our major focus being the training paradigm, we refrain from exploring recent complex backbones of Vision Transformers <cit.> or diffusion-based <cit.> encoders, and employ a simple ImageNet <cit.> pretrained VGG-16 network as our backbone feature extractor, followed by an FC-layer for each feature representation. §.§ Learning Objectives Cross-modal Discriminative Learning: Being an instance-level matching problem <cit.>, cross-modal discriminative knowledge is instrumental in training any FG-SBIR framework <cit.>. Accordingly, we first focus on inducing discriminative knowledge (<ref>) into our content feature (f_c^ℐ), especially for view-agnostic FG-SBIR, which relies on content for cross-modal matching, using sketch-photo pairs from 𝒟_CM. Taking the sketch (s) as an anchor, we aim to reduce the distance of its content (f_c^s) from that of its matching photo (f_c^p) while increasing it from that of a random non-matching/negative (n) photo (f_c^n) as: ℒ_Tri^VA = max{0, μ_c + δ(f_c^s, f_c^p) - δ(f_c^s, f_c^n) } . However, this alone does not suffice for view-specific retrieval, which additionally needs to distinguish across multiple views of the same instance. We thus need to instill view-specific discriminative knowledge as well. Naively imposing a triplet loss <cit.> on f_v^ℐ alone would however be sub-optimal, as distinguishing amongst different views <cit.> is only relevant when the model is aware of the associated instances. We thus combine f_c^ℐ and f_v^ℐ over element-wise addition to obtain f_vs^ℐ = f_c^ℐ+f_v^ℐ, which we call the view-specific component (f_vs^ℐ). Imposing a similar triplet-loss objective <cit.> here, we have: ℒ_Tri^VS = max{0, μ_vs + δ(f_vs^s, f_vs^p) - δ(f_vs^s, f_vs^n) } .
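The disentangled encoder and the two triplet objectives above can be sketched as follows; this is a minimal illustration, with layer sizes and margins as placeholders rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import vgg16

class ViewAwareEncoder(nn.Module):
    """VGG-16 backbone followed by one FC head per feature (content and view)."""
    def __init__(self, dim=128):
        super().__init__()
        self.backbone = vgg16(weights="IMAGENET1K_V1").features
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc_content = nn.Linear(512, dim)
        self.fc_view = nn.Linear(512, dim)

    def forward(self, x):
        h = self.pool(self.backbone(x)).flatten(1)
        return self.fc_content(h), self.fc_view(h)  # f_c, f_v

def triplet(anchor, pos, neg, margin):
    return F.relu(margin + torch.norm(anchor - pos, dim=-1)
                  - torch.norm(anchor - neg, dim=-1)).mean()

# f_c_s, f_v_s = encoder(sketch); f_c_p, f_v_p = encoder(photo); likewise for negatives.
# loss_va = triplet(f_c_s, f_c_p, f_c_n, margin=0.5)                     # content only
# loss_vs = triplet(f_c_s + f_v_s, f_c_p + f_v_p, f_c_n + f_v_n, 0.45)   # content + view
```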
Learning from 2D projections: To instil view-awareness in ℰ(·), it needs to (i) recognise the semantic knowledge coming from different views of the same photo as similar in terms of the content it offers, and (ii) discriminate between two different views as well. As multi-view FG-SBIR sketch data is rare, rendering most image-based multi-view 3D training paradigms <cit.> sub-optimal, and sketch lacks significantly in visual cues compared to images, rendering single-view 3D reconstruction methods sub-optimal <cit.>, we look to sketch-independent, unpaired 3D shapes <cit.> to leverage multi-view projections of the same object (in 𝒟_2D) to condition the sketch-photo encoder on view-awareness from photos alone (no sketches involved). Accordingly, we introduce two objectives: (i) instance-consistency across different projections of the same photo for view-agnostic retrieval, and (ii) a cross-view reconstruction objective to further enrich the latent space with cross-view discriminative knowledge. Formally, using our curated dataset of multi-view projections (𝒟_2D), we pass the set of projections per instance, 𝐏_i = {p_i^j}_j=1^M_i, where p_i^j refers to the j^th view out of M_i views for the i^th instance, to extract the disentangled content and view semantics. Attending to (i), we take any two out of the M_i views, p_a and p_b (dropping i for brevity), and constrain their content (f_c^p_a,b) to ideally occupy the same position in the latent space. Our instance-consistency objective thus becomes: ℒ_IC = 2/(M_i(M_i-1)) ∑_a=1^M_i-1∑_b=a+1^M_i ||f_c^p_a - f_c^p_b||_2 . For our second objective, we advocate a cross-view translation objective on two different projections (p_i^a, p_i^b) of the same instance (p_i), to enrich the latent space with view-specific knowledge from photos. Specifically, given {p_a, p_b} and a decoder Ω_ϕ(·) that inputs a d-dimensional feature and outputs an image ℐ' ∈ℝ^H× W × 3, we perform cross-view reconstruction as p_b' = Ω_ϕ(f_c^p_a + f_v^p_b). Accordingly, our view-reconstruction objective becomes: ℒ_VR = 2/(M_i(M_i-1)) ∑_a=1^M_i-1∑_b=a+1^M_i ||p_b' - p_b||_2 . Cross-modal View Consistency: Given the cross-modal nature of our task, learning view-awareness only from unlabelled 3D shapes, ignoring sketch, is sub-optimal. Considering that a sketch is paired against only one photo in existing FG-SBIR datasets <cit.> (𝒟_CM), we thus need to condition ℰ_θ(·) towards matching the view of a sketch (f_v^s) with that of its paired photo (f_v^p), to instill cross-modal sketch-photo view-awareness. Although one may naively impose a triplet loss for view-consistency on {f_v^s,f_v^p,f_v^n} (<ref>), a major drawback here would be not knowing whether the view of the randomly selected negative photo (n) is strictly different from that of the positive (p) one or not (no view-annotations are available in 𝒟_CM). This would hence result in a confused guiding signal. Conversely, with the motivation that for a matching sketch-photo pair their views should be closer in the latent space, we define the view-consistency objective as: ℒ_VC = ||f_v^s - f_v^p||_2 . With hyperparameters λ_1,2, our final training objective (sketched in code below) is: ℒ_trn = ℒ_Tri^VA + λ_1ℒ_Tri^VS + λ_2(ℒ_VC + ℒ_IC + ℒ_VR) .
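A compact sketch of how these additional objectives combine with the triplet terms is given below; `decoder` denotes the Ω_ϕ network, the λ values echo those reported later in the implementation details, and everything else is schematic.

```python
import torch

def instance_consistency(f_c_views):
    """L_IC: average pairwise distance between content features of all views."""
    losses = [torch.norm(f_c_views[a] - f_c_views[b], dim=-1).mean()
              for a in range(len(f_c_views)) for b in range(a + 1, len(f_c_views))]
    return torch.stack(losses).mean()

def view_reconstruction(decoder, f_c_a, f_v_b, p_b):
    """L_VR: reconstruct view b from the content of view a plus the view code of b."""
    diff = decoder(f_c_a + f_v_b) - p_b
    return diff.flatten(1).norm(dim=1).mean()

def view_consistency(f_v_sketch, f_v_photo):
    """L_VC: pull the views of a matched sketch-photo pair together."""
    return torch.norm(f_v_sketch - f_v_photo, dim=-1).mean()

# total = loss_va + 0.5 * loss_vs + 0.7 * (l_vc + l_ic + l_vr)  # lambda_1=0.5, lambda_2=0.7
```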
Evaluation Paradigm: Standard FG-SBIR evaluation <cit.> only focuses on retrieving an instance given a sketch, ignoring any dependency on its view, and thus requires only one extracted feature for retrieval. Our motivation of bringing view-awareness into the FG-SBIR paradigm splits it into view-agnostic and view-specific FG-SBIR pipelines, which thus require different feature types to be used during retrieval. Accordingly, for view-agnostic retrieval, the focus being to retrieve the same instance irrespective of the view, we use the content feature f_c^ℐ of a sketch to match against those of the test-gallery photos over a distance metric. For view-specific retrieval however, we similarly use our combined feature f_vs^ℐ = f_c^ℐ + f_v^ℐ, to focus on the view as well. § EXPERIMENTS Datasets: Due to the lack of large-scale view-incorporated sketch datasets, we rely on our re-purposed fine-grained dataset of 𝒟_CM + 𝒟_2D (<ref>) for training and evaluation. We focus on two categories – `chairs' and `lamps', as allowed by the only dataset containing multi-view sketches with fine-grained association, by Qi <cit.>, which houses 555 and 1005 sketch/3D-shape quadruplets of `lamps' and `chairs' for 𝒟_2D. Each quadruplet holds three sketches from different views (0°, 30°, 75° for chairs, and 0°, 45°, 90° for lamps) and one 3D shape. Following <cit.>, we use 111 and 201 quadruplets respectively for testing, and the rest for training. Multi-view photos of the 3D shapes are freely generated following <cit.> for [0°, 30°, 75°, 45°, 90°, 135°, 180°, 225°, 270°, 315° and 360°] views for both lamp and chair instances. Importantly, training photos having the same view as that of the sketches used for inference (0°, 30°, 75° for chairs and 0°, 45°, 90° for lamps) are omitted from the training-photo set, for a fairer evaluation. Coming to 𝒟_CM, for chairs we use QMUL-ChairV2 <cit.> containing 1800/400 sketches/photos, entirely for training, unless specified otherwise. As lamps lack any fine-grained sketch dataset, we curate one from all training instances of lamps in <cit.>, where for every lamp we randomly select 1 sketch out of the 3 available (at views 0°, 45°, 90°) and pair it with that 2D projection of its 3D shape which has the same view, thus maintaining the nature of existing FG-SBIR datasets where `sketch-photo pairs are matched against a fixed view'. Notably, while inference is performed using the entire test-set of 𝒟_2D (all sketch-views and all 2D projections), sketches of the training set of 𝒟_2D of chairs and lamps (except those in 𝒟_CM^Lamp) are intentionally unused to support the motivation of this task. Implementation Details: We use an ImageNet <cit.> pre-trained VGG-16 <cit.> model for ℰ(·). The decoder (Ω) architecture for cross-view photo reconstruction employs a sequence of stride-2 convolutions with BatchNorm-ReLU activation on each convolutional layer except for the output layer, where tanh is used. The encoder's extracted feature is projected into two 128-dimensional vectors – f_c^ℐ and f_v^ℐ. We use the Adam optimiser with a learning rate of 0.0001 and a batch size of 64 for 250 epochs. Determined empirically, the hyperparameters λ_1,2, μ_vs and μ_c are set to 0.5, 0.7, 0.45 and 0.5 respectively. To reduce the effect of colour-bias on retrieval, evident from the uniform colour palette among projections (𝒟_2D) of each 3D shape (<ref>), unlike the real photos of 𝒟_CM, we use an off-the-shelf colour augmentation following <cit.> on every 2D projection before feeding it to the network. Our model was implemented in PyTorch on a 12GB TitanX GPU. Evaluation Metrics: We evaluate both the view-agnostic and view-specific paradigms.
As the former can be considered a category-level SBIR evaluation, where each instance denotes a class with its multi-view photos being different instances, we use mean average precision (mAP) <cit.> and precision at the top 100 retrievals (P@100) <cit.> for evaluation. Whereas, the aim of view-specific FG-SBIR being to retrieve the photo matching the exact view of the query-sketch, we use Acc@q as the percentage of sketches having their true-matched photo in the top-q list <cit.>. §.§ Competitors We compare against state-of-the-arts, and a few self-designed baselines created by modifying methods from relevant works. (i) SoTAs: Triplet-SN <cit.> utilises a Siamese network trained with triplet loss to learn a shared sketch-photo latent space. HOLEF-SN <cit.> improves <cit.> via spatial attention, leveraging a higher-order HOLEF ranking loss. Triplet-OTF <cit.> uses triplet-loss pre-training with RL-based reward maximisation for early retrieval. Early retrieval not being our goal, we take its results only on completed sketches. StyleVAE <cit.> employs VAE-based disentanglement via meta-learning for style-agnostic retrieval. Jigsaw-CM <cit.> uses jigsaw-solving pre-training on mixed photo and edge-map patches, with triplet-based fine-tuning to improve retrieval. Strong-PVT <cit.> devises a stronger FG-SBIR framework with a PVT <cit.> backbone – we use its `Strong' variant. Notably, `lamps' being an entirely different category than the ones these SoTAs were trained on, we do not show SoTA results on lamps. (ii) SoTA++: Although our method is also trained on ChairV2 <cit.> for chairs like the other SoTAs, reporting the SoTAs' evaluation on our curated dataset might amount to cross-dataset (despite the same category: chairs) evaluation. To reduce ambiguity, we reconstruct 𝒟_CM (only in this setup) for chairs following that for lamps (𝒟_CM^Lamp in Datasets) as 𝒟_CM^Chair*, and report SoTA results after re-training on 𝒟_CM^Chair* and 𝒟_CM^Lamp. (iii) B-Backbones: Keeping our method the same, we explore a few popular architectures used for FG-SBIR as the backbone feature extractor, like Inception-V3 <cit.>, ResNet-50 <cit.>, ViT <cit.> and PVT <cit.>. (iv) B-Disentangle: From the literature on disentanglement methods <cit.>, we design a few baselines as suggested: B-TVAE <cit.> uses a standard VAE with triplet loss; B-DVML <cit.> employs a VAE with same-modality translation; B-Trio adapts the disentanglement module of <cit.> to ours. (v) B-Misc: Please note that during training, our setup enforces no access to (a) multiple sketch-views, (b) paired 3D shapes, (c) cross-modal association of one sketch to other views of its target photo. Besides unlabelled 3D shapes, only 1 sketch-photo pair per instance is available for training. Following <cit.>, B-Single ignores the multi-view projections (𝒟_2D), estimating 3D knowledge from photos and sketches in 𝒟_CM via single-image 3D reconstruction, independently (separate encoders). During inference it combines its {shape retrieval, texture} features <cit.> for view-agnostic retrieval, whereas {shape, texture, pose} features for the view-specific one. B-Pivot follows <cit.>, using 𝒟_CM + 𝒟_2D to design a shared 3D-shape-aware sketch-encoder, matching the extracted features from the query-sketch and the 2D gallery photos for retrieval. B-NoProjection omits using multi-view projections (no 𝒟_2D), keeping the rest the same as ours. We also design a two-model baseline (B-TwoModel) with one model per paradigm.
For the view-agnostic paradigm, we train using ℒ_Tri^VA on sketch-photo triplets (s,p,n) from 𝒟_CM (<ref>), and ℒ_IC on 2D projections from 𝒟_2D (<ref>). For the view-specific one, we train using ℒ_Tri^VS on similar triplets from 𝒟_CM (<ref>), and a simple reconstruction loss (ℒ_rec=||Ω_ϕ(p_a)-p_b||) via our decoder (Ω_ϕ(·)) on 2D projections (p) from 𝒟_2D. §.§ Performance Analysis View-Agnostic Retrieval: <Ref> reports the quantitative evaluation for view-agnostic retrieval. While Triplet-SN <cit.> and HOLEF-SN <cit.> score lower, due to their comparatively weaker backbones of Sketch-A-Net <cit.>, Jigsaw-CM <cit.> scores better, given its jigsaw-solving pre-training strategy, enabling better structural knowledge. Although Triplet-OTF <cit.>, with its reinforcement learning-based reward function, surpasses the former methods, it is exceeded by StyleVAE <cit.> (by 0.07 mAP), thanks to the latter's meta-learning based disentanglement module. Aided by a much better feature extractor, StrongPVT <cit.> outperforms them all (still 0.12 mAP lower than our PVT <cit.> variant). The overall lower performance of the SoTAs, compared to their usual high accuracy <cit.>, is likely due to a potential cross-dataset evaluation effect. However, their results in SoTA++, obtained by retraining the SoTAs on 𝒟_CM^Chair*, Lamp, being coherent with the earlier ones, clearly show the demerits of not modelling view-awareness explicitly in FG-SBIR. Especially, StyleVAE++ <cit.> scores closer to StrongPVT++ <cit.> than earlier, likely due to its ability to disentangle content, based on prior training on disentangling style-invariant features. Among the other backbone variants, PVT <cit.> scores best, even better than our initial VGG-16 encoder, thanks to its unique pyramidal structure instilling inductive bias on feature-maps at multiple levels. While B-TVAE and B-DVML score lower due to their inferior design, B-Trio fares closer, thanks to its conditional invertible network. Given our major focus on learning 3D knowledge from 2D data and the simplicity of the training strategy, the disentanglement module has been kept simple, which can be enhanced further in future work. Besides lacking cross-modal discrimination, B-Single naively uses a 3D-reconstruction objective <cit.> for sketches, which is unreliable due to their sparse nature <cit.> and lack of visual cues, thus scoring poorly. B-NoProjection fares slightly better (↑0.095 mAP) with cross-modal discrimination and sketch-photo view-consistency but lags without aid from 𝒟_2D. In contrast, B-Pivot <cit.> excels with well-trained 3D-shape awareness and sketch-photo association. However, lacking any training to model views explicitly, it lags behind ours. View-Specific Retrieval: From <Ref> we see that the performance trend of the different methods reflects accuracy shifts similar to those seen for view-agnostic retrieval, as in both cases the same trained base encoder model (ℰ(·)) is used for every method, thereby having similar potential for both paradigms. Importantly, unlike its higher performance in the view-agnostic paradigm, StyleVAE(++) <cit.> scores much lower, with a larger shift from StrongPVT(++) – likely due to its inability to explicitly attend to the non-content part (or style in its case) for retrieval. Notably, unlike most methods, ours uses a different feature (f_vs^ℐ = f_c^ℐ + f_v^ℐ), more enriched in view-semantics, thus resulting in better view-specific retrieval.
The low performance of B-TwoModel is likely because loss objectives alone are not sufficient to condition the extractor on addressing the `view' component of a sketch, which it needs to disregard (view-agnostic) or emphasise (view-specific) for the target retrieval task, thus justifying our combined paradigm with feature-disentanglement. §.§ Ablative Study Importance of loss objectives: To justify each loss in our framework, we evaluate them in a strip-down fashion (<Ref>), keeping the rest the same. FG-SBIR at its core being dependent on cross-modal discrimination, stripping off ℒ_Tri^VS drops Acc@1 significantly (31.15%). Similarly, stripping ℒ_Tri^VA drops mAP by 0.199. Being the only objective relating the view-semantic of a sketch with a photo, without ℒ_VC accuracy dips for both view-specific (VS) and agnostic retrieval, proving its significance.

Table: Ablation of Loss Objectives on `Chairs' (each column strips the named objective; Ours-VGG-16 keeps all)
Objective stripped | ℒ_Tri^VA | ℒ_Tri^VS | ℒ_VC | ℒ_IC | ℒ_VR | Ours-VGG-16
[VS] Top-1 (%) | 25.56 | 21.12 | 55.69 | 52.71 | 56.26 | 60.71
[VA] mAP@all | 0.104 | 0.416 | 0.541 | 0.520 | 0.565 | 0.615

The need for ℒ_IC in the view-agnostic paradigm is evident from the large drop (0.095 mAP) when omitted. A drop of 0.05 mAP and 4.45% Acc@1 without ℒ_VR shows the view-knowledge enrichment it provides in our framework. Design alternatives: We explore a few design choices (on chairs) focusing on our loss objectives. (i) Given that a sketch relates better to an edgemap than a photo <cit.>, we alter ℒ_VR to conduct photo-to-edgemap reconstruction of the target view (p_b' in <ref>). A slightly lower score of 57.68% Acc@1 (0.598 mAP) reveals it to be sub-optimal, likely because reconstructing a view in the photo domain enriches the encoder with other cues like light-intensity <cit.>, etc., which are unavailable from edgemaps. (ii) Modifying ℒ_VC as a triplet loss on {f_v^s, f_v^p, f_v^n} using <ref> dips accuracy, especially for the view-specific paradigm (by 4.5% Acc@1), as during training we are unaware (no annotations) whether the negative's (n) and positive's views are strictly different or not, thus creating a sub-optimal gradient for the encoder update. (iii) Omitting colour augmentation during model training (chairs) invokes a colour bias <cit.>, dropping accuracy by 0.065 mAP (2.85% Acc@1), thus proving its importance. (iv) Utilising separate VGG-16 encoders for extracting the `content' (f_c^ℐ) and `view' (f_v^ℐ) features yields poor results of 0.528 mAP@all on view-agnostic and a 54.32% Top-1 score on view-specific FG-SBIR – likely because using different extractors yields poor coherence between f_v^ℐ and f_c^ℐ, as unlike ours, they do not implicitly condition the model on the knowledge that both content and view features belong to the same instance. This is crucial for view-awareness, especially for the view-specific feature representation (f_vs^ℐ), which combines both features for subsequent training and retrieval. Performance under low-data regime: Our framework being designed to deal with the scarcity of sketch-view data (there is not much data to learn view-awareness within sketches, i.e., sketch-view learning), we aim to explore our generalisation potential under a low-data regime. [Figure: Varying training data-size (𝒟_CM).] Accordingly, we vary the training data (𝒟_CM) for chairs as 10%, 30%, 50%, 70% and 100%.
Our view-agnostic and view-specific FG-SBIR <cit.> performances remain relatively stable (<ref>) across variable training-data sizes, compared to a baseline of Triplet-SN – thanks to our carefully designed objectives (<ref>), especially ℒ_IC and ℒ_VR, which additionally enrich the latent space with cross-view photo knowledge, apart from the cross-modal triplet loss <cit.>. Further Insights: (i) The optimal feature dimension for both content and view features was empirically found to be 128, with stable results at higher ones. (ii) Ours-VGG-16 utilises 14.71 million parameters with ∼40.18G FLOPs and takes 0.16ms (0.21ms) for view-specific (agnostic) retrieval per query during evaluation – close to the 0.18ms of Triplet-SN. (iii) On varying each of λ_1,2 as {0.1, 0.15, ⋯, 0.9} independently, accuracy falls when: (a) λ_1 > 0.55 or λ_2 < 0.65, and (b) |λ_1-λ_2| is large (e.g., λ_1,2 = 0.2, 0.8), giving us optimal values at λ_1,2 = 0.5, 0.7 empirically. Following FG-SBIR works on margin hyperparameters <cit.>, μ_vs and μ_c were varied as {0.3, 0.35, ⋯, 0.7}, delivering optimal values at μ_vs = 0.45, μ_c = 0.5. § LIMITATIONS AND FUTURE WORKS (i) Besides off-the-shelf complex feature extractors (<Ref>), future works may explore designing sketch-specific modules or complex architectures like DINOv2 <cit.> for view-aware feature extraction to enhance accuracy. (ii) Other alternatives for disentanglement paradigms based on meta-learning <cit.> or diffusion <cit.> could be explored. (iii) Alleviating data scarcity for sketch-views might allow recent methods <cit.> that learn cross-modal 3D knowledge from multi-view images to enhance the robustness of view-aware FG-SBIR paradigms. § CONCLUSION In this paper we propose a system that addresses the nuanced challenge of view selection in FG-SBIR, seamlessly accommodating both view-agnostic and view-specific retrieval approaches. The introduction of multi-view 2D rendered projections of 3D objects aims to overcome dataset limitations, promoting cross-modal view awareness within the FG-SBIR pipeline. Additionally, our implementation of a customisable cross-modal feature, facilitated by a disentanglement framework, allows users to fluidly transition between view-agnostic and view-specific retrieval modes, enhancing system adaptability and user experience. Supplementary material for Freeview Sketching: View-Aware Fine-Grained Sketch-Based Image Retrieval § A. QUALITATIVE RESULTS OF VIEW-AWARE FG-SBIR <ref> shows a qualitative comparison for View-Agnostic FG-SBIR of a baseline method (Triplet-SN) vs ours (Ours-VGG-16), on our standard train-test setting (Sec. 5 – Datasets). <ref> illustrates the same for View-Specific FG-SBIR, where continuous green rectangles denote the target view-matched photo and the dashed ones depict other views of the target instance, for clarity. Numbers in boxes (green/white) represent the corresponding ranks of retrieved photos. § B. DETAILS ON TWO-MODEL BASELINE The following figure details the construction of the two-model baseline described under Section 5.1 as B-TwoModel. Here we design one model for each paradigm. For each model, a backbone feature extractor ℰ(·) (VGG-16) extracts a single d-dimensional feature without disentanglement. For the view-agnostic paradigm (see <ref> left), we train using ℒ_Tri^VA on sketch-photo triplets (s,p,n) from 𝒟_CM (Eq. (2)), and ℒ_IC on 2D projections from 𝒟_2D (Eq. (4)). For the view-specific one (see <ref> right), we train using ℒ_Tri^VS on similar triplets from 𝒟_CM (Eq.
(3)), and a simple reconstruction loss (ℒ_rec = ||Ω_ϕ(p_a)-p_b||) via our decoder (Ω_ϕ(·)) on 2D projections (p) from 𝒟_2D. While the view-agnostic model scores 0.421 and 0.382 mAP (vs. 0.615 and 0.552 mAP of Ours-VGG-16) on Chairs and Lamps respectively, the view-specific one scores 48.23% and 46.93% Top-1 accuracy (vs. 60.71% and 60.56% of Ours-VGG-16) on Chairs and Lamps respectively (see Table 1 in the main paper). This shows that while a separate model dedicated to one task might seem to tackle each problem better, the loss objectives alone are unfortunately not sufficient to condition the extractor on addressing the `view' component of a sketch, which it needs to disregard (view-agnostic) or emphasise (view-specific) for the target retrieval task, thus justifying our combined paradigm with feature-disentanglement.
http://arxiv.org/abs/2407.02709v1
20240702230414
Regimes of Near-Inertial Wave Dynamics
[ "Scott Conn", "Jörn Callies", "Albion Lawrence" ]
physics.ao-ph
[ "physics.ao-ph" ]
Regimes of Near-Inertial Wave Dynamics Scott Conn, Jörn Callies, Albion Lawrence § ABSTRACT When atmospheric storms pass over the ocean, they resonantly force near-inertial waves (NIWs); internal waves with a frequency close to the local Coriolis frequency f. It has long been recognised that the evolution of NIWs is modulated by the ocean's mesoscale eddy field. This can result in NIWs being concentrated into anticyclones and provide an efficient pathway for their propagation to depth. Whether mesoscale eddies are effective at modulating the behaviour of NIWs depends on the wave dispersiveness ε^2 = fλ^2/Ψ, where λ is the deformation radius and Ψ is a scaling for the eddy streamfunction. If ε≫1, NIWs are strongly dispersive, and the waves are only weakly affected by the eddies. We calculate the perturbations away from a uniform wave field and the frequency shift away from f. If ε≪1, NIWs are weakly dispersive, and the wave evolution is strongly modulated by the eddy field. In this weakly dispersive limit, ray-tracing emerges as a valid description of the NIW evolution even if the large-scale atmospheric forcing apparently violates the requisite assumption of a scale separation between the waves and the eddies. The large-scale forcing excites many wave modes, each of which varies on a short spatial scale and is amenable to asymptotic analysis analogous to the semi-classical analysis of quantum systems. The strong modulation of weakly dispersive NIWs by eddies has the potential to modulate the energy input into NIWs from the wind, but under oceanic conditions, this effect should be small. § INTRODUCTION Near-inertial waves (NIWs) play an important role in the global climate system. Being associated with strong vertical shears, they are prone to shear instabilities, which are an important driver of upper-ocean mixing <cit.>. As such, the generation of NIWs is one of the primary mechanisms by which atmospheric storms induce a deepening of the surface mixed layer. This deepening requires mixing with water from below, implicating NIWs in the surface ocean heat budget <cit.>. In the interior of the ocean, NIWs make up a major fraction of the internal wave kinetic energy <cit.>, and it has been hypothesised that NIW kinetic energy may provide a source of mixing in the deep ocean <cit.>. NIWs might also extract energy from mesoscale eddies <cit.> and hence play a role in the mesoscale energy budget. In-situ observations of NIWs usually lack significant spatial resolution. The spatial structure of NIWs can generally only be resolved through dedicated field campaigns. Despite this, it has become clear that NIW evolution can be strongly modulated by the presence of mesoscale eddies <cit.>. Given the sparsity of NIW observations, theoretical progress has been important in understanding the dynamics of NIWs in the upper ocean. Early work on NIW–eddy interactions was based on ray-tracing theory. <cit.> derived a dispersion relation for NIWs in the presence of a geostrophic background flow. Throughout this paper, we will make the assumption of a barotropic background flow.
The ray-tracing equations for a single baroclinic mode propagating through such a background field are dx⃗/dτ = ∂ω/∂k⃗, dk⃗/dτ = -∂ω/∂x⃗, ω = fλ^2|k⃗|^2/2 + u⃗·k⃗ + ζ/2, where x⃗=(x,y) is the ray position, τ is time, k⃗ is the horizontal wavevector, u⃗ is the background velocity, ζ=∂_xv-∂_yu is the background vorticity, and λ is the deformation radius. Here and throughout the rest of this paper, ω refers to the frequency shift of an NIW away from the local f such that the true frequency is f+ω. Based on these equations, <cit.> argued that NIWs would be trapped in regions of anticyclonic vorticity where the effective frequency is less than the local f. This trapping arises from the refraction of rays by the background vorticity, i.e., from changes in the wavenumber vector due to spatial gradients of the ζ/2 term in the dispersion relation. Concentration of NIW energy into anticyclones has indeed been observed in the ocean <cit.>. Ray-tracing is based on the assumption that the NIWs are propagating through a slowly varying medium. This means that the horizontal scale of the waves has to be much smaller than the scale of the background mesoscale eddy field. <cit.> criticised this spatial scale assumption based on the argument that NIWs are forced by large-scale storms and so, at least initially, the waves have a much larger scale than mesoscale eddies. As a remedy, YBJ developed a theory of NIW–eddy interactions that does not rely on the assumption of a spatial scale separation. This was also partly motivated by a desire to explain observations from the Ocean Storms Experiment <cit.>. This field campaign studied the evolution of NIWs in the wake of a large storm in the North Pacific. A key result of this study was that the effect of the mesoscale vorticity on the wave evolution was in clear contradiction with predictions from ray-tracing <cit.>. The YBJ equation describes the evolution of NIWs in the presence of a prescribed geostrophic eddy field while only assuming a temporal scale separation between the inertial period and the characteristic time scale of the eddies. For the barotropic background flow considered throughout this paper, the wave evolution can be split into baroclinic modes that do not interact, so we consider a single baroclinic mode with NIW velocity [u_w(x,y,t),v_w(x,y,t)]g(z), where g(z) is the baroclinic mode structure. The YBJ equation is cast in terms of the variable ϕ=(u_w+iv_w)e^ift, where the factor e^ift removes oscillations at the inertial frequency and leaves ϕ to describe the slow evolution of the envelope that modulates the NIWs. The YBJ equation, restricted to a single mode propagating through a barotropic background flow, is then given by ∂ϕ/∂t + (ψ,ϕ) + i(ζ/2)ϕ - i(fλ^2/2)∇^2ϕ = 0, where ψ is the background streamfunction, ζ=∇^2ψ is the background vorticity, and (a,b) = ∂_x a ∂_y b - ∂_y a ∂_x b is the Jacobian operator. The second term describes advection of the NIW field by the background flow. The third term is known as the ζ-refraction term and describes refraction of the NIW field by the background vorticity. This term is necessary to obtain concentration of NIWs into regions of anticyclonic vorticity. The last term is responsible for wave dispersion. Here and throughout this paper, we set the meridional gradient of planetary vorticity β=0. The YBJ equation can be modified to include β by replacing ζ/2 with ζ/2+β y in the refraction term. The β-effect has been proposed to explain the observed equatorward propagation of NIWs in the ocean <cit.>. 
Over short enough scales, the β y term will only provide a small correction to ζ/2 and hence we ignore it. Despite both ray-tracing and the YBJ equation being used in the NIW literature, it remains unclear how they relate to each other. Ray-tracing has been one of the most widely used tools to interpret observations of NIWs. It has revealed aspects of NIW dynamics such as trapping in anticyclones along with an associated propagation to depth <cit.>, stalling in cyclones <cit.>, and the interplay between NIWs and turbulent dissipation <cit.>. Non-standard propagation patterns of NIWs in observations have also been explained using ray-tracing <cit.>. The YBJ equation has been used primarily as a tool in theoretical and numerical studies, although there has been some attempt to make connections with observations. <cit.> calculated the NIW wavevector using an expression based on the YBJ equation. The predictions from YBJ were broadly in agreement with observations. <cit.> directly used the YBJ equation to interpret NIW observations on a mooring array, showing that it successfully captured the amplitude and phase evolution, including differences across the mooring caused by mesoscale vorticity gradients. Comparing the results of these disparate studies is complicated by the different methods used. Having a better understanding of the relationship between ray-tracing and YBJ would facilitate the comparison of these results. Furthermore, observations reveal a varied picture of the importance of mesoscale vorticity for NIW evolution. During the Ocean Storms Experiment, mesoscale eddies had a muted impact on the NIW field <cit.>, whereas other observational studies found a strong imprint of mesoscale eddies onto the NIW field. For example, <cit.> demonstrated that the evolution of the NIW wavevector was driven by gradients in the mesoscale vorticity during the NISKINe experiment in the North Atlantic. Extending the original argument by YBJ, <cit.> argued that these differences in the impact of mesoscale vorticity could be explained primarily by differences in the strength of wave dispersion. The stronger dispersion in the Ocean Storms Experiment, they argued, was the result of the forcing projecting onto lower baroclinic modes, a stronger stratification, and weaker eddies. As a result, the effect of refraction by mesoscale vorticity was suppressed in the Ocean Storms Experiment, whereas it was more pronounced in NISKINe. In this paper we clarify the following question about NIW dynamics: how does the ray-tracing approach relate to YBJ dynamics? Given the widespread use of ray-tracing in the literature, we aim to understand the conditions under which results from ray-tracing are accurate. To this end, we consider the YBJ equation in both the strong- and weak-dispersion regimes. We begin by providing a simplified treatment of the strong-dispersion regime. Next, we show that the ray-tracing equations emerge asymptotically from the YBJ equation in the limit of weak dispersion. Finally, we consider how these regimes might modulate the energy injection into the NIW band by the winds, finding that such a modulation is likely weak under oceanic conditions. § THE YBJ EQUATION §.§ Decomposition into horizontal modes We begin by non-dimensionalising the YBJ equation. Given the scalings x,y∼ L, ψ∼Ψ and t∼ L^2/Ψ, we obtain the following non-dimensional form of the YBJ equation: ∂ϕ/∂t + (ψ,ϕ) + i(ζ/2)ϕ - i(ε^2/2)∇^2ϕ = 0, where ε^2 = fλ^2/Ψ is the wave dispersiveness. 
For readers familiar with <cit.>, our ε^2 is equivalent to their Υ^-1. We remind the reader that we have assumed a single baroclinic mode, but ε does vary among baroclinic modes through λ. The parameter ε also varies spatially throughout the ocean (Fig. <ref>). We calculate ε for the first four baroclinic modes from observations as described in Appendix <ref>. With the exception of the high latitudes, the first and second baroclinic modes are almost entirely in the strongly dispersive regime (ε>1). Higher baroclinic modes tend to be more weakly dispersive, with ε<1 almost everywhere for mode 4. For a given baroclinic mode, low-latitude regions tend to be more strongly dispersive while higher latitudes and western boundary currents are more weakly dispersive. Note that (<ref>) is a Schrödinger equation. This parallel is made clear if we write (<ref>) as i∂ϕ/∂t = Hϕ, H = -(ε^2/2)∇^2 - i(ψ, ·) + ζ/2. The operator H is known as the Hamiltonian operator. While the presence of first derivatives in the Hamiltonian may be unfamiliar to some, such terms arise in quantum mechanics when describing a charged particle in a magnetic field. In the YBJ equation, these first derivative terms arise due to advection. This analogy to quantum mechanics was pointed out by <cit.>; we will here exploit it extensively. This operator H is Hermitian and so it has real eigenvalues. Let μ⃗ label the eigenmodes ϕ_μ⃗(x,y) and associated eigenvalues ω_μ⃗ of the operator H, H ϕ_μ⃗ = ω_μ⃗ϕ_μ⃗. We will employ two-component vectors μ⃗ to label the two-dimensional modes. The field ϕ can then be expanded in these horizontal eigenmodes: ϕ(x,y,t) = ∑_μ⃗ a_μ⃗(t)ϕ_μ⃗(x,y), where a_μ⃗(t) is the projection of ϕ onto the eigenmode ϕ_μ⃗. The coefficients a_μ⃗(t) then evolve according to da_μ⃗/dt = -iω_μ⃗a_μ⃗, so a_μ⃗(t)=a_μ⃗(0)e^-iω_μ⃗t. Therefore, the eigenvalue represents the frequency shift of the mode away from f. Furthermore, because the eigenvalues are real, the total kinetic energy of the waves is conserved. §.§ Numerical calculation of eigenvalues and eigenmodes For most choices of the background flow ψ, analytical solutions for the eigenfunctions of H do not exist and numerical solutions are required. Solving the eigenvalue equation numerically requires us to discretise the operator H. The discrete eigenfunction is expressed as a vector, and the problem reduces to finding the eigenvalues of a finite matrix. The operator H is Hermitian and so it is desirable for any discrete representation of H to also be Hermitian. A standard second-order central finite difference scheme for the Laplacian term preserves this property. More care is required for the advection operator, for which we use the enstrophy-conserving scheme from <cit.> to preserve the Hermitian nature of the operator and guarantee that the eigenvalues of the matrix are real. Having real eigenvalues ensures that the conservation of NIW kinetic energy is respected in the discrete system. The exact method of numerically solving the eigenvalue problem is detailed in Appendix <ref>. § THE STRONG-DISPERSION LIMIT The limit where ε^2≫ 1 is known as the strong-dispersion limit. YBJ showed that in this limit, the solution to the YBJ equation becomes proportional to the streamfunction ψ. They additionally showed that frequency shifts away from f are proportional to the domain-averaged kinetic energy of the mesoscale flow. These same results can be derived by considering the eigenvalue problem posed above. 
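To make this concrete, the following is a minimal Python sketch of such a discretisation (our illustration, not the authors' solver: per Appendix B the paper uses fourth-order stencils and Arakawa's Jacobian, whereas here we use second-order differences and enforce Hermiticity by explicit symmetrisation; the resolution N and the choice ε = 2 are illustrative, and we borrow the dipole flow introduced in the next section):

# Sketch (not the paper's code): discretise the YBJ operator
# H = -(eps^2/2) Laplacian - i u . grad + zeta/2 on a doubly periodic grid
# and compute its lowest eigenpairs with a sparse Hermitian eigensolver.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

N, eps = 64, 2.0                      # illustrative resolution and dispersiveness
h = 2 * np.pi / N
x = np.arange(N) * h
X, Y = np.meshgrid(x, x, indexing="ij")

psi = 0.5 * (np.sin(X) - np.sin(Y))   # dipole streamfunction
u = 0.5 * np.cos(Y)                   # u = -d(psi)/dy
v = 0.5 * np.cos(X)                   # v =  d(psi)/dx
zeta = -psi                           # zeta = Laplacian(psi) = -psi here

# 1-D periodic first- and second-derivative matrices (second order)
D1 = sp.lil_matrix((N, N)); D1.setdiag(1, 1); D1.setdiag(-1, -1)
D1[0, N - 1], D1[N - 1, 0] = -1, 1
D1 = D1.tocsr() / (2 * h)
D2 = sp.lil_matrix((N, N)); D2.setdiag(-2.0); D2.setdiag(1, 1); D2.setdiag(1, -1)
D2[0, N - 1], D2[N - 1, 0] = 1, 1
D2 = D2.tocsr() / h**2
I = sp.identity(N, format="csr")

Dx, Dy = sp.kron(D1, I), sp.kron(I, D1)   # x varies along axis 0 ("ij" indexing)
L = sp.kron(D2, I) + sp.kron(I, D2)
A = -1j * (sp.diags(u.ravel()) @ Dx + sp.diags(v.ravel()) @ Dy)
A = 0.5 * (A + A.conj().T)            # symmetrise: enforce exact Hermiticity
H = (-0.5 * eps**2 * L + A + 0.5 * sp.diags(zeta.ravel())).tocsc()

vals, vecs = eigsh(H, k=6, which="SA")        # lowest frequency shifts omega
uniform = np.ones(N * N) / N                  # normalised uniform "forcing"
proj2 = np.abs(vecs.conj().T @ uniform) ** 2  # |projection|^2 onto each mode
print(vals[0], proj2)                 # expect omega_0 near -1/32 for eps = 2

The explicit symmetrisation is a pragmatic substitute for the energy-conserving Arakawa scheme: it guarantees real eigenvalues at the discrete level while remaining second-order accurate.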
In our framework, we can additionally derive information about the next-order perturbations to the NIW field. When ε^2 is large we split the operator H into two parts H = ε^2 H^(0) + H^(1), where H^(0) = -(1/2)∇^2 and H^(1) = ζ/2 - i(ψ,·). Because ε^2≫1, this implies H^(1) is a small correction to ε^2H^(0), and perturbation theory can be used to solve this system. We expand both ϕ_μ⃗ and ω_μ⃗ in powers of ε^-2: ϕ_μ⃗ = ∑_n=0^∞ ε^-2nϕ_μ⃗^(n), ω_μ⃗ = ε^2 ∑_n=0^∞ ε^-2nω_μ⃗^(n). At O(ε^2) the eigenvalue problem is H^(0)ϕ^(0)_μ⃗ = ω^(0)_μ⃗ϕ^(0)_μ⃗, where ϕ_μ⃗^(0) is the eigenfunction of the unperturbed problem with eigenvalue ω^(0)_μ⃗. We assume the domain is doubly periodic and goes from 0 to 2π in x and y. The solution is ϕ_μ⃗^(0) = e^iμ⃗·x⃗, ω_μ⃗^(0) = |μ⃗|^2/2. The components of μ⃗ are integers, and the eigenfunctions are plane waves in x and y. The use of a periodic domain is intended to represent a local view of an ocean that is filled with a random sea of eddies. For many examples in this paper, we will consider a domain that contains a dipole vortex (figure <ref>) given by <cit.> ψ = (1/2)(sin x - sin y). The analysis below is general, however, and can be applied to any doubly periodic background flow. NIWs are forced by atmospheric storms which have a much larger horizontal scale than mesoscale eddies, so the forcing can be idealised as uniform. The projection of a uniform forcing onto a given mode can be found by integrating that mode across the domain. For plane waves, a domain integral will vanish unless μ⃗ = 0, such that a uniform forcing will only project onto the μ⃗ = 0 mode in the unperturbed case. We begin by focusing on that case to obtain expressions for the perturbations to its spatial structure as well as its frequency shift. A small part of the forcing, however, projects onto modes with μ⃗≠ 0, and we will return to these higher modes below. The leading-order solution for μ⃗ = 0 is horizontally uniform and contains no modulation of the waves by the mesoscale eddy field. To obtain this modulation we must go to higher order. At O(ε^0), the eigenvalue problem is H^(0)ϕ^(1)_0 + H^(1)ϕ^(0)_0 = ω^(0)_0ϕ^(1)_0 + ω^(1)_0ϕ^(0)_0. From the O(ε^2) calculation, we know ω_0^(0)=0. Similarly, the advection term in H^(1) vanishes when acting on ϕ_0^(0) because (ψ,ϕ_μ⃗^(0)) = iu⃗·μ⃗ ϕ_μ⃗^(0), and the O(ε^0) equation reduces to -(1/2)∇^2ϕ_0^(1) + ζ/2 = ω_0^(1). The two terms on the left vanish when integrated over the domain, and we conclude that ω_0^(1) = 0. There is, however, a correction to the eigenfunction at this order, determined by ∇^2ϕ_0^(1) = ∇^2ψ. With periodic boundary conditions, the solution to this is ϕ_0^(1) = ψ, where we have assumed that ψ is defined such that it has zero domain average. This recovers the expression for ϕ from YBJ. The structure of the mesoscale eddy field is imprinted onto the waves by the ε^-2ϕ^(1)_0 term. Because the modulation is by the real streamfunction ψ, only the NIW amplitude is modulated by the mesoscale eddies. The NIW field remains in phase across the domain. We now also seek the leading-order correction to the eigenvalue, for which we go up another order. The eigenvalue equation at O(ε^-2) is H^(0)ϕ_0^(2) + H^(1)ϕ_0^(1) = ω^(0)_0ϕ_0^(2) + ω^(1)_0ϕ_0^(1) + ω^(2)_0ϕ_0^(0). With ω_0^(0)=ω_0^(1)=0 and (ψ,ψ)=0, this simplifies to -(1/2)∇^2ϕ_0^(2) + (1/2)ψ∇^2ψ = ω_0^(2). The first term on the left vanishes under domain integration. Integrating the second term on the left by parts yields ω_0^(2) = -(1/2)∫|∇ψ|^2 d^2x⃗/∫d^2x⃗. The leading-order frequency shift is ε^-2ω_0^(2). 
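For the dipole flow introduced above, this domain average is elementary and provides a useful check on the numbers quoted below (a worked step we add for concreteness): with ψ = (1/2)(sin x - sin y), we have |∇ψ|^2 = (1/4)(cos^2 x + cos^2 y), whose domain average is 1/4, so ω_0^(2) = -1/8 and, for ε = 2, the predicted shift is ε^-2ω_0^(2) = -1/32 ≈ -0.031.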
Given that ε^-2≪ 1, the frequency shift away from f is suppressed substantially, even compared to the small frequency shift assumed from the outset. Redimensionalising the expression results in ω_0^(2) = -(1/(2f_0λ^2))∫|∇ψ|^2 d^2x⃗/∫d^2x⃗. This agrees with the YBJ result for the dispersion relation in the strong-dispersion regime, indicating that the frequency shift is proportional to the kinetic energy of the eddy field. We now return to the higher modes with μ⃗≠0. These modes are degenerate to leading order. For example, the modes (1,0), (-1,0), (0,1) and (0,-1) all have ω^(0)_μ⃗=1/2. We outline the procedure for using degenerate perturbation theory to calculate corrections to the eigenvalues and eigenfunctions for μ⃗≠0 <cit.>. We again start from the O(ε^0) equation, which now reads H^(0)ϕ_μ⃗^(1) + H^(1)ϕ_μ⃗^(0) = ω_μ⃗^(0)ϕ_μ⃗^(1) + ω_μ⃗^(1)ϕ_μ⃗^(0). Multiplying this equation by ϕ_ν⃗^(0)*, with both ν⃗ and μ⃗ labelling one of the modes in the degenerate group, and integrating over the domain results in ∫ϕ_ν⃗^(0)*(H^(0)-ω_μ⃗^(0))ϕ_μ⃗^(1) d^2x⃗ = ω_μ⃗^(1)∫ϕ_ν⃗^(0)*ϕ_μ⃗^(0) d^2x⃗ - ∫ϕ_ν⃗^(0)* H^(1) ϕ_μ⃗^(0) d^2x⃗. Using integration by parts, the H^(0) on the left can be swapped for ω_ν⃗^(0). Because the modes are degenerate to this order, the left-hand side vanishes. Furthermore, using orthonormality of the eigenfunctions, the corrections to the eigenvalues are determined by ω_μ⃗^(1)δ_ν⃗μ⃗ = (1/4π^2)∫ϕ_ν⃗^(0)* H^(1) ϕ_μ⃗^(0) d^2x⃗. The left-hand side of this equation is diagonal, which demands that we choose the basis set of the degenerate subspace ϕ_μ⃗^(0) such that it diagonalises the operator H^(1). This can be done by choosing an arbitrary basis, such as the one mentioned above, and then diagonalising the matrix with its elements equal to the right-hand side in (<ref>). The corresponding eigenfunction corrections can be found by solving the screened Poisson equation obtained from the first-order equation (<ref>), (H^(0)-ω_μ⃗^(0))ϕ_μ⃗^(1) = (ω_μ⃗^(1)-H^(1))ϕ_μ⃗^(0), where the ϕ_μ⃗^(0) should be in the basis diagonalising H^(1). If H^(1) identically vanishes in this subspace, the degeneracy must be lifted at the next order, as in the example below. The same procedure applies. We now consider the specific example of the dipole flow (<ref>). Numerical solutions for ε=2 show that a uniform initial condition projects strongly (98.5% of the energy) onto the ϕ_0 mode (figure <ref>). There is a small but negative frequency shift of ω_0 = -0.03104. This agrees excellently with the predicted frequency shift from (<ref>) of ε^-2ω_0^(2) = -1/32 = -0.03125. Additionally, there is weak horizontal structure that aligns with the streamfunction as expected. The root-mean-squared error between the numerical eigenmode and the analytical eigenmode ϕ_0^(0) + ε^-2ϕ_0^(1) = 1 + ε^-2ψ is 1%. The agreement is excellent despite ε not being particularly large. For the dipole flow, the right-hand side of (<ref>) is zero for all combinations of basis functions of the ω_μ⃗^(0) = 1/2 subspace. Therefore, there are no first-order frequency shifts, ω_μ⃗^(1) = 0, and the degeneracy is not lifted at this order. Performing the same procedure that led to (<ref>) on the second-order equation yields ω_μ⃗^(2)δ_ν⃗μ⃗ = (1/4π^2)∫ϕ_ν⃗^(0)* H^(1) ϕ_μ⃗^(1) d^2x⃗. For our trial basis consisting of the four plane waves, we solve the screened Poisson equation (<ref>) for the corresponding ϕ_μ⃗^(1). This is tedious but doable because the right-hand side is just a sum of sines and cosines. 
The equation for the second-order frequency shift can be diagonalised, and this time the eigenvalues are not zero and the degeneracy is lifted. We find for ω_μ⃗^(2) the values -1/96, -7/96, -49/96, and -55/96, only the first of which corresponds to an eigenfunction that the forcing projects onto at this order. The leading-order eigenfunction of that mode is ϕ_μ⃗^(0) = -ψ (figure <ref>). The eigenvalue ε^2ω_μ⃗^(0) + ε^-2ω_μ⃗^(2) = 1.99739 is again in excellent agreement with the numerical eigenvalue of 1.99729. In this regime, horizontal structure in the waves primarily arises due to ϕ_0^(1), which is suppressed by O(ε^-2). There is also horizontal structure due to modes with μ⃗≠0, but these are projected onto weakly; the fraction of the variance accounted for by such a mode is 𝒪(ε^-4) <cit.>. As such, the wave potential energy, which depends on horizontal gradients in the wave field, is also suppressed. <cit.> associated the generation of wave potential energy with a sink of the background eddy kinetic energy in a process known as stimulated generation. Given the weak generation of horizontal structure, stimulated generation is weak in the strong-dispersion regime. § THE WEAK-DISPERSION LIMIT The limit ε^2 ≪ 1 is known as the weak-dispersion limit. Because ε^2 multiplies the highest-order derivative in the eigenvalue equation, the limit ε^2→0 is a singular perturbation. Before addressing the general problem, we build intuition with two simple examples. These examples suggest that there are two classes of modes. One class is characterised by waves that vary slowly along the streamlines of the background flow and more rapidly across streamlines; they are captured by an anisotropic scaling. The other class has even faster variations in both directions and requires an isotropic scaling. We develop a uniformly valid approximation that captures both of these classes. §.§ Parallel shear flow We begin with an example of a parallel shear flow in which the streamfunction ψ is a function of x only. The symmetry in y means the problem reduces to a one-dimensional eigenvalue problem. <cit.> considered this problem for a specific example of a shear flow that can be solved in closed form. <cit.> considered the limits of strong and weak dispersion for the same mean flow. Here, we address how the weak-dispersion limit can be analysed for a general parallel shear flow and apply the procedure to the example flow from <cit.>. We assume that the streamfunction ψ(x) is periodic on the domain [-π,π]. The eigenvalue problem (<ref>) reduces to -(ε^2/2)∇^2ϕ - iv∂ϕ/∂y + (ζ/2)ϕ = ωϕ, where ζ = ∇^2ψ and v = ∂_xψ are both functions of x only, and we have suppressed the label on the eigenmode. The coefficients are independent of y, which motivates the ansatz ϕ = Φ(x)e^imy. Given that the domain has width 2π in y, the wavenumber m must be an integer. With this ansatz, we are left with the one-dimensional eigenvalue problem -(ε^2/2)d^2Φ/dx^2 + (ε^2m^2/2 + mv + ζ/2)Φ = ωΦ. This is the Schrödinger equation of a particle in a one-dimensional potential, with the bracketed term playing the role of the potential <cit.>. As ε is small, WKB analysis can be used to find approximations to the eigenvalues and eigenfunctions <cit.>. In WKB theory, the field Φ is expanded as Φ(x) = exp[(1/δ)∑_j=0^∞ δ^j S_j(x)], where δ≪1 is a scaling parameter that we are yet to determine. Substituting this into (<ref>) yields -(ε^2/2)[(1/δ^2)(∑_j=0^∞ δ^j dS_j/dx)^2 + (1/δ)∑_j=0^∞ δ^j d^2S_j/dx^2] + ε^2m^2/2 + mv + ζ/2 = ω. 
If we assume m∼O(1), both the refraction term and the advection terms are O(1), and they must be balanced by a dispersion term of the same order. Requiring the lowest-order dispersion term to be O(1) implies δ = ε, and the O(1) equation becomes -(1/2)(dS_0/dx)^2 + mv + ζ/2 = ω. By writing ε^-1 S_0'(x) = ik(x), this equation is analogous to the dispersion relation (<ref>) specialised to this parallel shear flow. The function S_0 is found to be S_0(x) = ±√2 i∫^x √(ω - mv(x') - ζ(x')/2) dx' and determines the leading-order phase variations of the solution. One can additionally show <cit.> that the next-order solution is S_1(x) = -(1/4)ln(ω - mv - ζ/2), which determines the leading-order amplitude modulation of the solution. This asymptotic expansion is valid away from regions where the integrand above is zero. These are known as turning points of the problem, and exist if ω < max(mv + ζ/2). The associated eigenfunctions are referred to as bound states. Near turning points, ω - mv - ζ/2 can be approximated by a linear function of x, and solutions to (<ref>) are given by Airy functions. The Airy function solutions must be asymptotically matched to the solutions away from the turning points. This yields an integral constraint from which the eigenvalues ω can be determined. The problem as formulated above is the classic two-turning-point problem and the asymptotic matching procedure is well documented <cit.>. The resulting condition for ω, often referred to as a quantisation condition, is (√2/ε)∫_x_0^x_1 √(ω - mv(x) - ζ(x)/2) dx = π(n+1/2), with n = 0, 1, …, where x_0 and x_1 are the turning points of the integrand above. The projection of a uniform forcing onto these modes can also be calculated asymptotically. The domain integral of a mode is dominated by contributions from the turning points <cit.>. If ω > max(ζ/2 + mv) then there are no turning points. The corresponding eigenmodes are referred to as free states, and the quantisation condition is replaced by (√2/ε)∫_-π^π √(ω - mv(x) - ζ(x)/2) dx = 2πn, with n = 0, 1, … Note the lack of the half-integer shift that, for bound states, arises from the Airy behaviour near turning points. The lack of turning points in the free states also means (<ref>) is valid across the entire domain. Because the eigenfunctions of the free states are oscillatory in the entire domain, a uniform forcing projects only weakly onto them, and we do not discuss them any further. Under this scaling, the WKB modes are anisotropic. We assumed m ∼ O(1), which means that the modes' phase varies in y on a length scale O(1). In contrast, the leading-order phase variations in x come from ε^-1 S_0 and therefore occur on a scale O(ε). The phase varies slowly along streamlines and rapidly across streamlines. This makes refraction and advection come in at the same order as cross-streamline dispersion. <cit.> discussed solutions to the YBJ equation which are aligned with streamlines and for which straining is ineffective. These solutions correspond to our anisotropic modes. An alternative would be to choose the scaling m∼O(ε^-2). Repeating the WKB ansatz requires a choice of δ=ε^2 and ω∼O(ε^-2) in order to end up with an equation of a similar form to (<ref>): -(1/2)(dS_0/dx)^2 + ε^4m^2/2 + ε^2mv = ε^2ω. With the scaling given above, each term is O(1). We can solve for S_0 and the corresponding quantisation condition for bound modes: S_0(x) = ±√2 iε∫^x √(ω - ε^2m^2/2 - mv(x')) dx', (√2/ε)∫_x_0^x_1 √(ω - ε^2m^2/2 - mv) dx = π(n+1/2). These modes are isotropic. 
The phase variations in y occur on a scale O(ε^2), which is the same as in x because phase variations in x now come from ε^-2 S_0. This makes advection and along-streamline dispersion come in at the same order, and it makes refraction negligible. Despite the different characteristics of the two scalings, they lead to similar quantisation conditions that differ only by what terms are included. We can combine them into a uniformly valid quantisation condition: (√2/ε)∫_x_0^x_1 √(ω - ε^2m^2/2 - mv(x) - ζ(x)/2) dx = π(n+1/2). The “potential” governing the wave evolution is therefore V(x) = ε^2m^2/2 + mv(x) + ζ(x)/2. Under the anisotropic scaling m ∼ O(1), the along-streamline dispersion term is suppressed by a factor ε^2, leaving the O(1) refraction and advection terms to dominate. Under the isotropic scaling m ∼ O(ε^-2), the advection and along-streamline dispersion terms are enhanced by a factor ε^-2 and dominate over a now negligible refraction term. In both cases, the general equation is obtained by retaining a term that is of higher order, which is allowed in an asymptotic theory. A uniform forcing only projects onto modes with m = 0, so all of the modes projected onto are of the anisotropic variety. We now consider a specific example of a parallel shear flow that varies sinusoidally in x: ψ = cos x. This shear flow has a region of anticyclonic vorticity at the centre of the domain and cyclonic vorticity centred on x = ±π (figure <ref>a,b). This is a rare example in which the eigenvalue problem (<ref>) can be solved exactly using Mathieu functions <cit.>. The generally applicable WKB theory described above accurately predicts the eigenvalues, even for a modestly small ε = 1/4 (figure <ref>c). We provide the analytical solutions to the WKB integrals in Appendix <ref>. We also note that the symmetry of the problem means that a uniform wind forcing only projects onto modes with even n. For m = 0, the eigenmodes are shaped by the potential V = ζ/2 (figure <ref>). Where ω > V, S_0 is imaginary and the solutions are oscillatory; where ω < V, S_0 is real and the solutions are decaying (figure <ref>). Near the anticyclonic centre of the flow, the potential is at its lowest and all the modes are oscillatory. Moving further out into the cyclonic region, more and more of the modes become evanescent. The proportionality of the potential to the vorticity ζ leads to trapping of NIWs in anticyclones. The trapping arises from the dephasing of the modes that make up the initial condition. This is analogous to the argument in <cit.> regarding the vertical propagation of NIWs due to the β-effect. §.§ Axisymmetric flow We now consider a streamfunction with axial symmetry, such that ψ = ψ(r), where r is the radial distance from the origin. <cit.> studied NIWs with azimuthal wavenumber zero in an axisymmetric vortex and provided asymptotic expressions for the frequency of the lowest radial mode. <cit.> studied a similar case but also considered the impact of NIWs back on the vortex. Using WKB theory, we consider NIWs with an arbitrary azimuthal wavenumber and provide a transcendental equation that can be solved for their frequency as for the parallel shear flows above. We make the ansatz ϕ = A(r)e^imθ, where θ is the azimuthal angle, and again drop the mode label. In polar coordinates, (<ref>) then reduces to -(ε^2/2)(d^2A/dr^2 + (1/r)dA/dr) + (ε^2m^2/(2r^2) + mv/r + ζ/2)A = ωA, where v = ∂_rψ denotes the azimuthal velocity. There are some subtleties involved in applying WKB theory to this equation. 
For modes with m>0, the potential diverges at the origin. This issue has long been noted in the quantum mechanics literature and can be addressed by performing a so-called Langer transform on the equation. For m=0, there is no divergence of the potential, but there is a phase shift at the origin. As pointed out by <cit.>, both cases turn out to give the same quantisation condition: (√2/ε)∫_r_0^r_1 √(ω - V(r)) dr = π(n+1/2), with n = 0, 1, 2, …, where the potential is V(r) = ε^2m^2/(2r^2) + mv/r + ζ/2. If m > 0, the integration bounds r_0 and r_1 are the two zeros of the integrand; if m = 0, r_0 = 0 and r_1 is the one zero of the integrand. As in the case of a parallel shear flow, this expression is uniformly valid in the sense that it works for both m ∼ O(1) and m ∼ O(ε^-2). These again correspond to anisotropic and isotropic modes, respectively, with refraction, advection, and dispersion along and across streamlines playing the same roles as before. The only difference is that the streamlines are now circular. We consider the concrete example of an isolated Gaussian vortex on an infinite domain: ψ(r) = e^-r^2/4. This corresponds to an anticyclone in the centre of the domain that is surrounded by a halo of cyclonic vorticity (figure <ref>a,b). Again the WKB calculation yields eigenvalues that agree extremely well with the exact eigenvalues (figure <ref>c). The structure of the first few modes is shown in figure <ref>. For m=0, the modes are concentrated in the anticyclone. For m>0, there is a repulsion from the very centre of the anticyclone due to the advection and dispersion terms in V(r). This repulsion increases with m, but the modes remain primarily concentrated in the region of anticyclonic vorticity. These modes are anisotropic, with more rapid variation in the radial (cross-streamline) than azimuthal (along-streamline) direction. §.§ General case Based on the intuition gained above, we wish to construct a uniformly valid asymptotic expansion for a general two-dimensional background flow. In analogy with the isotropic scaling, we begin by assuming a solution of the form ϕ(x,y) = exp[(1/ε^2)∑_j=0^∞ ε^2jS_j(x,y)], again dropping the mode label. Substituting this into (<ref>) yields -(1/(2ε^2))|∑_j=0^∞ ε^2j∇S_j|^2 - (1/2)∑_j=0^∞ ε^2j∇^2S_j - (i/ε^2)∑_j=0^∞ ε^2j(ψ,S_j) + ζ/2 = ω. Assuming ω∼O(ε^-2) and collecting leading-order terms, we obtain -(1/2)|∇S_0|^2 - i(ψ,S_0) = ε^2ω. In the simple examples discussed above, we obtained a uniformly valid approximation by retaining the higher-order refraction term in the leading-order equation arising from an isotropic scaling. We do so again here: -(1/2)|∇S_0|^2 - i(ψ,S_0) + ε^2ζ/2 = ε^2ω. We anticipate that the order of these terms again changes for anisotropic modes. If the phase varies slowly along streamlines, the advection term is reduced by a factor O(ε^2), and cross-streamline dispersion, acting on spatial variations on a scale of O(ε) rather than O(ε^2), will attain the same order, whereas along-streamline dispersion becomes negligible. The equation (<ref>) can therefore capture both isotropic and anisotropic modes. We now introduce the wavenumber vector k⃗ by writing ε^-2∂S_0/∂x⃗ = ik⃗. The equation (<ref>) can be solved using the method of characteristics: dx⃗/dτ = ε^2k⃗ + u⃗, dk⃗/dτ = -∂/∂x⃗(u⃗·k⃗ + ζ/2), ω = ε^2|k⃗|^2/2 + u⃗·k⃗ + ζ/2. These are the non-dimensionalised ray-tracing equations of <cit.>. We further elaborate on this connection between YBJ and <cit.> below. 
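Since these characteristics are ordinary differential equations, they are straightforward to integrate; the following brief Python sketch for the dipole flow is our own illustration with an arbitrary initial condition, not a calculation from the paper:

# Sketch: integrate the non-dimensional ray equations
# dx/dtau = eps^2 k + u,  dk/dtau = -grad(u.k + zeta/2)
# for psi = (sin x - sin y)/2, where u = cos(y)/2, v = cos(x)/2, zeta = -psi.
import numpy as np
from scipy.integrate import solve_ivp

eps = 0.25

def rhs(tau, s):
    x, y, k, l = s
    u, v = 0.5 * np.cos(y), 0.5 * np.cos(x)
    dudy, dvdx = -0.5 * np.sin(y), -0.5 * np.sin(x)   # du/dx = dv/dy = 0
    dzdx, dzdy = -0.5 * np.cos(x), 0.5 * np.cos(y)    # gradients of zeta
    return [eps**2 * k + u,
            eps**2 * l + v,
            -(l * dvdx + 0.5 * dzdx),                 # -d(omega)/dx
            -(k * dudy + 0.5 * dzdy)]                 # -d(omega)/dy

sol = solve_ivp(rhs, (0.0, 100.0), [0.5, 0.0, 0.0, 0.0],
                rtol=1e-9, atol=1e-9, dense_output=True)
x, y, k, l = sol.y
omega = (0.5 * eps**2 * (k**2 + l**2) + 0.5 * np.cos(y) * k
         + 0.5 * np.cos(x) * l - 0.25 * (np.sin(x) - np.sin(y)))
print(omega.std())   # omega is conserved along rays up to integration error

The conservation of ω along each ray provides a convenient accuracy check on the integration.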
Numerical solutions for the dipole flow show that the majority of a uniform forcing projects onto anisotropic modes that show little structure along streamlines and vary more rapidly across streamlines (figure <ref>). With ε = 1/4 there is also some projection onto modes that show more characteristics of isotropic phase variations. The variations are more rapid, as emerges from the isotropic scaling discussed above. Finally, we show how approximations to the eigenvalues can be obtained in the weak-dispersion limit when the flow problem is not separable, as it was in the cases of a parallel shear flow or axisymmetric flow. To this end, we utilise results from the quantum mechanics literature. Recall that the YBJ equation is equivalent to the Schrödinger equation, with the YBJ operator H = -(ε^2/2)∇^2 - iu⃗·∇ + ζ/2 playing the role of the Hamiltonian. The weak-dispersion limit corresponds to the classical limit of the equivalent quantum system, and the ray-tracing equations are the analogue of the classical Hamiltonian dynamics. The classical Hamiltonian is obtained from H by making the substitution ∇→ik⃗, yielding the dispersion relation in (<ref>). The Hamiltonian dynamics are then dx⃗/dτ = ∂ω/∂k⃗ and dk⃗/dτ = -∂ω/∂x⃗, the ray-tracing equations stated in (<ref>). The connection with the Schrödinger equation is most easily seen in the Hamilton–Jacobi description of classical mechanics <cit.>. The quantisation conditions derived above for separable problems, from which we obtained good approximations of the frequency shifts ω, can be generalised to some extent to non-separable problems like the dipole flow (figure <ref>). This semi-classical analysis of a quantum system was developed by <cit.>, <cit.>, and <cit.>, extending the Bohr–Sommerfeld quantum theory. The resulting approach is referred to as the EBK method <cit.>. The starting point is that the rays (classical trajectories in the quantum problem), being constrained by the invariant ω (energy in the quantum problem), trace out invariant tori in the phase space spanned by x⃗ and k⃗. A ray starting on such a torus will remain on it forever. The quantisation condition selects invariant tori that correspond to allowed bound states by insisting that phase increments along closed loops on the invariant torus integrate to multiples of 2π. Recalling that ε^-2∇S_0 = ik⃗, so k⃗ is the spatial gradient of the phase, and k⃗·dx⃗ is a phase increment, the quantisation conditions read ∮_𝒞_1 k⃗·dx⃗ = 2π(n+1/2), ∮_𝒞_2 k⃗·dx⃗ = 2πm, where n and m are integers. The contours 𝒞_1 and 𝒞_2 are topologically independent closed curves on the invariant torus (figure <ref>a). In our example, the curve 𝒞_1 passes through the hole of the phase space torus, whereas the curve 𝒞_2 goes around the hole. The two curves are independent in the sense that neither one can be continuously deformed into the other. There is a half-integer phase shift in the quantisation condition arising from the integral along curve 𝒞_1 because this curve passes through two caustics, the generalisation of a turning point, where additional phase shifts are incurred <cit.>. The curve 𝒞_2 encounters no caustics. The integer wavenumbers n and m correspond to the cross- and along-streamline variations, respectively. These EBK quantisation conditions are entirely analogous to the WKB quantisation conditions derived above for the separable parallel shear flow and axisymmetric flow. We apply the EBK quantisation to the dipole flow with ε = 1/4. 
Our procedure closely follows <cit.>: we find the invariant tori satisfying the quantisation condition by writing the Hamiltonian equations in action–angle variables and employing Newton's method. See Appendix <ref> for details. All eigenvalues calculated by this EBK method show excellent agreement with the numerical values (figure <ref>). As foretold by <cit.>, not all modes are accessible by the EBK approach. If the system is non-integrable, trajectories in phase space can become chaotic instead of tracing out an invariant torus (figure <ref>b). States corresponding to such chaotic trajectories are not accessible by the EBK method. This “quantum chaos” has received much attention in the physics literature and has connections to random matrix theory <cit.>. Methods exist to estimate eigenvalues as well as their statistics <cit.>. We do not pursue these issues any further here, in part because a uniform forcing projects most strongly onto the regular modes accessible with the EBK method (figure <ref>). § RELATION TO THE RAY-TRACING EQUATIONS The previous section made clear that the ray-tracing equations of <cit.> are closely related to the YBJ dynamics. In the same way that Hamiltonian dynamics emerge in the classical limit of the Schrödinger equation, the ray equations emerge in the weak-dispersion limit of the YBJ equation. YBJ criticised Kunze's assumption that the waves have a smaller spatial scale than the background flow, insisting that atmospheric forcing produces near-inertial waves at larger—not smaller—scales than mesoscale eddies, calling into question Kunze's ray-theoretical description in general. The analysis above clarifies that the spatial scale of the forcing is irrelevant. Instead, the scale on which dynamical modes vary determines whether WKB analysis can be applied, and this spatial scale is set by how strongly dispersive the waves are. An initially uniform wave field can be thought of as consisting of a superposition of several modes, all varying on a small scale but combining into a uniform field. The distinct frequencies ω of these modes make them dephase over time and the superposition quickly exhibits the small scales of the modes. Our analysis also provides some additional insight into the evolution of weakly dispersive NIWs. The isotropic and anisotropic scalings show that refraction is not always of leading-order importance. The refraction term is significant only for the anisotropic modes. For isotropic modes, the refraction term is asymptotically weak and the dispersion relation is dominated by advection and dispersion. A large-scale forcing, however, will project primarily onto the anisotropic modes, as can be seen in the specific solutions for the dipole case (figure <ref>). More generally, the large values of the along-streamline wavenumber m in the isotropic case produce rapid variations that lead to strong cancellations when calculating the projection of a uniform forcing onto these modes. As such, only a weak projection can remain. To help interpret observations from the NISKINe study, <cit.> performed a simplified ray-tracing calculation which predicted a rapid strain-driven growth in the wavenumber that stood in stark contrast to the data. In this region of the North Atlantic, the waves are weakly dispersive <cit.>, so one may worry that this result contradicts our conclusion that ray-tracing can be deployed gainfully in the weak-dispersion regime. 
<cit.> approximated the full wavevector evolution by assuming a uniform and time-independent vorticity gradient as well as a strain field with strain rate α and its principal axis aligned with the vorticity gradient. In that setup, the wavenumber component k_⊥ that is aligned with the vorticity gradient, i.e., perpendicular to vorticity contours, evolves according to dk_⊥/dτ = -|∇ζ|/2 + αk_⊥, so k_⊥ = -(|∇ζ|/2α)(e^ατ - 1) if k_⊥ = 0 at time τ = 0, approximating large-scale wind forcing. The exponential growth predicted by this equation does not match the data. Our analysis suggests, however, that a large-scale forcing primarily excites modes whose phase is aligned with streamlines. In this configuration, the strain is ineffective, and the initial wavenumber evolution is dominated by refraction: dk_⊥/dτ = -|∇ζ|/2, so k_⊥ = -(|∇ζ|/2)τ. This recovers the <cit.> solution that <cit.> showed roughly matches the data. Our analysis therefore suggests that it was not ray-tracing per se that caused the mismatch with the data but the assumptions that went into the simplified solution. <cit.> considered three-dimensional ray-tracing, which allows for both baroclinic structure in the mean flow and a vertical wavenumber for the NIWs that corresponds to propagation of the waves in the vertical. In this paper we have simplified to a barotropic mean flow and considered the propagation of a single baroclinic mode such that the problem reduces to two-dimensional ray-tracing. Exploring how three-dimensional ray-tracing is related to the full YBJ equation that also allows for baroclinicity in the background flow is left to future work. § NEAR-INERTIAL WIND WORK One may speculate that the frequency shifts in the weak-dispersion limit could impact the energy input into NIWs by the winds. To study this, we need to consider a forced version of the YBJ equation. So far we have focused on the problem with a horizontally uniform initial condition. This was to represent the NIW field excited by the passage of a large-scale atmospheric storm, and we studied the evolution of this NIW field in the absence of any further forcing. Real NIWs, in contrast, are continually forced by the winds, which we now represent by including a horizontally uniform forcing term in the modal YBJ equation: da_t = (-iωa_t + F_te^ift) dt, where a_t denotes the modal amplitude at time t and F_t the wind forcing projected onto the mode under consideration. We suppress the mode index μ⃗ for now, but keep in mind that this equation must be solved for each mode. Note that we have re-dimensionalised the equation here. The factor of e^ift back-rotates the forcing to match the back-rotated description of the NIW evolution by the YBJ equation. To proceed, we describe the wind by an Ornstein–Uhlenbeck process which satisfies dF_t = -cF_t dt + σdW_t, where c^-1 is the decorrelation time scale of the wind forcing, σ is the amplitude of the stochastic excitation, and W_t is a Wiener process. The power spectrum of the process F is S(ω) = (σ^2/π) c/(c^2 + ω^2). For ω ≫ c the power falls off with frequency as ω^-2, i.e., the spectrum is red. We find that this is a good model of the power spectrum of the wind stress from reanalysis, especially over the ocean (see Appendix <ref> for more details). We consider the system spun up from t = -∞, such that it has statistically equilibrated for all t. 
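Before deriving the equilibrium wind work in closed form, we note that this stochastic model is easy to simulate; the following Python sketch (our illustration with arbitrary non-dimensional parameters, a real-valued OU process, f = 1, and Euler-Maruyama stepping) can be used to check the expression obtained below:

# Sketch: simulate the OU forcing dF = -c F dt + sigma dW and the modal
# amplitude da = (-i omega a + F e^{ift}) dt, then estimate the mean wind
# work Gamma = Re(a* e^{ift} F) after the spin-up transient has decayed.
import numpy as np

rng = np.random.default_rng(0)
f, omega, c, sigma = 1.0, -0.05, 0.1, 1.0   # illustrative values only
dt, nsteps, nens = 1e-2, 200_000, 256

F = np.zeros(nens)                    # real-valued OU forcing
a = np.zeros(nens, dtype=complex)     # back-rotated modal amplitude
t, acc, cnt = 0.0, 0.0, 0
for n in range(nsteps):
    a += (-1j * omega * a + F * np.exp(1j * f * t)) * dt
    F += -c * F * dt + sigma * np.sqrt(dt) * rng.standard_normal(nens)
    t += dt
    if n > nsteps // 2:               # discard spin-up (many 1/c times)
        acc += (a.conj() * np.exp(1j * f * t) * F).real.mean()
        cnt += 1

print(acc / cnt)                                   # simulated wind work
print(0.5 * sigma**2 / (c**2 + (f + omega)**2))    # closed form derived below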
This spin-up results in the formal solution for the forcing F_t = σ∫_-∞^t e^-c(t-t') dW_t', and the formal solution for the mode amplitude a is given by a_t = e^-iωt∫_-∞^t e^i(f+ω)t' F_t' dt'. The NIW kinetic-energy equation can be obtained in the usual way by multiplying (<ref>) with a_t^* and adding the complex conjugate. This is allowed because it is the integral of a Wiener process that appears in (<ref>), and not the Wiener process itself. The wind work Γ_t arises as Γ_t = (1/2)(a_t^* e^ift F_t + c.c.). We are interested in the average of Γ_t over an ensemble of many realisations of the wind forcing. Let ⟨·⟩ denote such an ensemble average. Hence, the ensemble-average wind work is ⟨Γ_t⟩ = (1/2)(e^i(f+ω)t∫_-∞^t e^-i(f+ω)t'⟨F^*_t' F_t⟩ dt' + c.c.). The covariance function of the Ornstein–Uhlenbeck process F is ⟨F^*_t' F_t⟩ = (σ^2/2c)e^-c|t-t'|, so the ensemble average of Γ_t reduces to ⟨Γ_t⟩ = (1/2)σ^2/(c^2+(f+ω)^2). As expected, given the initialisation at t = -∞, the power input is independent of time t. From this expression, we can furthermore see that ⟨Γ_t⟩ is smaller for ω > 0 than for ω < 0. This is because the wind forcing has more power at low frequencies. We now define Q as the ratio of the equilibrium wind work in the presence of a mesoscale eddy field to the equivalent wind work in the absence of mesoscale eddies. Without mesoscale eddies, ψ = 0 and hence there is no advection or refraction. Furthermore, if we assume the waves are generated by large-scale storms such that we approximate the forcing as horizontally uniform, there is no process to generate horizontal structure in the waves. The Laplacian of a constant wave field is zero and so the dispersion term also drops out of the YBJ equation. In this case there are no frequency shifts and ω = 0 for the uniform mode excited by the wind. The wind work is simply ⟨Γ_t⟩ = (1/2)σ^2/(c^2+f^2). We calculate Q as a weighted sum of the ratio over individual modes, where the weighting is given by the projection F_μ⃗ of the forcing onto a given mode μ⃗: Q = ∑_μ⃗ |F_μ⃗|^2 (c^2+f^2)/(c^2+(f+ω_μ⃗)^2), where we have restored the subscripts for the modes. This expression depends on the dispersiveness ε through F_μ⃗ and ω_μ⃗. Modulation of the NIW wind work by mesoscale eddies occurs only for ε ≲ 1. Using the dipole flow as an example, we calculate Q from (<ref>) as a function of c and ε (figure <ref>). For large ε, Q quickly approaches unity regardless of the value of c. For small ε, the contours of Q become horizontal and there is little dependence of Q on ε. The dependence is primarily on c, with a lower value of c resulting in a higher value of Q, i.e., a more substantial enhancement of the wind work. Our framework provides physical motivation for why mesoscale eddies can modulate the wind work in the weak-dispersion case. Assuming c ≪ f, which is generally the case for the wind stress over the ocean, we see that the inertial frequency f is in the ω^-2 part of the wind power spectrum. Any process that shifts the frequency of NIWs will modulate the wind power felt by the waves. Because the wind power spectrum falls off like ω^-2, a shift to lower frequencies will raise the wind power felt by the waves, and a shift to higher frequencies will lower it. This is the essence of (<ref>). As we have shown above, frequency shifts are small in the strong-dispersion limit and so the waves should feel similar wind power regardless of the presence of mesoscale eddies. As such, Q is close to unity in the strong-dispersion limit. 
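In code, evaluating Q is a one-line weighted sum over modes; a small Python helper (hypothetical names; omegas and proj2 are meant to be the eigenvalues and squared projection coefficients from the eigensolver sketch in §2.2) might read:

# Sketch: the wind-work ratio Q as a weighted sum over modes.
# omegas: frequency shifts omega_mu; proj2: |F_mu|^2 normalised to sum to 1.
import numpy as np

def wind_work_ratio(omegas, proj2, f=1.0, c=0.1):
    return float(np.sum(proj2 * (c**2 + f**2) / (c**2 + (f + omegas)**2)))

# e.g. Q = wind_work_ratio(vals, proj2) with the outputs of the earlier sketch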
In the weak-dispersion limit, in contrast, there can be significant frequency shifts. A uniform forcing will project onto many modes with a range of frequency shifts. Due to the curvature of the wind power spectrum, going like ω^-2, the fractional increase in power for negative frequency shifts will be greater than the fractional decrease in power for positive frequency shifts of the same magnitude. As a result, there will be a net increase in NIW wind work when summing over all modes (see figure <ref>b for a schematic). The question remains whether this is an appreciable effect in the ocean. We estimate Q from observations. For each location in the ocean, we estimate ε from the deformation radius and satellite altimetry observations of the eddy field (see Appendix <ref>), and we estimate c from atmospheric reanalysis (see Appendix <ref>). We calculate the modes of the dipole flow for a range of ε, which gives us ω_μ⃗ and |F_μ⃗|^2, and we re-dimensionalise ω_μ⃗ using the Rossby number Ro = ζ/f calculated from satellite altimetry. We use the spatial structure of the vortex dipole as a stand-in for the real eddy structure. This provides an (admittedly crude) estimate of the combined effect of an anticyclone and a cyclone. We calculate Q by using (<ref>) and then interpolating onto the correct ε. Our estimate reveals that deviations of Q from unity are weak: at most 5%. This effect is entirely concentrated in the western boundary current regions. This is because the dimensional frequency shift scales with Ro. Over most of the ocean, Ro is far too weak to produce any modulation of the NIW wind work. While this mechanism may be important for individual NIW events, it is clear that on average there is not a significant modulation of the NIW wind work by mesoscale eddies. The maximum modulation of 5% is significantly smaller than current uncertainties in the NIW wind work <cit.>. That being said, our approximation of the wind stress as an Ornstein–Uhlenbeck process is crude. The real forcing is dominated by intermittent atmospheric cyclones. § LIMITATIONS OF THE MODEL The scaling assumptions of the original YBJ equation place limits on ε, which should be kept in mind in the context of the asymptotic expansions performed above. The Rossby number is Ro = U/fL, where U is a scaling for the background flow, and the Burger number is Bu = λ^2/L^2. With these definitions, ε^2 = Bu/Ro. Both Ro and Bu are required to be small, but YBJ go further and make the scaling assumption Bu ∼ Ro, which is equivalent to ε^2 ∼ 1. See <cit.> for a discussion of other dispersion regimes. The asymptotic analyses performed above technically violate the original scaling assumption made by YBJ, but they still offer insight. The parameter ε does not need to be far into either the strong- or weak-dispersion regime for the solutions to exhibit characteristics of the asymptotic solutions. In the strong-dispersion limit, the modes for ε=2 strongly resemble the analytical strong-dispersion solutions. In the weak-dispersion limit, all of our examples show excellent agreement between asymptotic theory and numerical solutions for ε = 1/4. This behaviour is typical of WKB solutions, which often work well beyond where they have any right to be accurate. <cit.> conducted a detailed study of the evolution of NIWs in different scaling regimes. They considered a very weak-dispersion regime where Bu ∼ Ro^2, which is equivalent to ε^2 ∼ Ro. An additional term arises compared to the YBJ equation, but they found the YBJ equation to still work well in simulations. 
They also consider a strong-dispersion regime where Bu ∼ 1. In this regime they find a leading-order uniform NIW solution, but also the excitation of super-inertial frequencies that are not captured by YBJ. The frequency shift of the uniform mode is as predicted by YBJ. We can further assess the validity of YBJ's scaling assumption using our observational estimates of ε. For the first four baroclinic modes, most of the ocean is in the regime of ε∼1 (Fig. <ref>). For higher baroclinic modes, ε will continue to decrease in proportion to λ. For high-enough baroclinic mode number, the ε^2∼1 requirement will therefore be broken. For many regions, however, these high baroclinic modes may not be strongly excited by the wind forcing <cit.>, although this will not be true universally <cit.>. Throughout this paper, we have dealt with the case in which the background flow does not evolve. In the ray-tracing framework, the background flow could be allowed to evolve. The Hamiltonian operator would be time-dependent, but the equations can still be integrated along rays. For our analysis of eigenmodes to be applicable to the time-dependent case, however, the evolution of the background flow should be adiabatic, i.e., it should be slow compared to the wave evolution. The time for eigenmodes to dephase depends on the difference between their frequencies. In the strong-dispersion case, the frequency difference between the leading-order eigenmode and the higher eigenmodes is O(ε^2), meaning that the time to dephase should be small relative to the timescale for evolution of the background flow. In the weak-dispersion limit, the eigenvalues become ever more closely packed, meaning the timescale for dephasing can become large. For the adiabatic assumption to hold, an invariant torus should deform on a timescale much longer than the time it takes a particle to traverse the torus. If the time taken to traverse the torus is given by the advective timescale, then these two timescales are formally of the same order. The adiabatic assumption will only hold if there is a symmetry which causes the torus to persist for a longer timescale. The dipole vortex is an extreme example of this, where the tori never deform yet the advective timescale is finite. In the ocean, eddies often persist as coherent features for times much longer than the advective timescale. As such, we argue that the weak-dispersion results should continue to provide insight even in the time-dependent case. § CONCLUSIONS In the YBJ framework, the evolution of NIWs in the presence of a mesoscale eddy field is governed by the wave dispersiveness ε^2 = fλ^2/Ψ. The limit of ε≫ 1 corresponds to the strong-dispersion limit and ε≪ 1 corresponds to the weak-dispersion limit. Both of these limits are relevant for the ocean as the dispersiveness decreases with vertical mode number and the strength of mesoscale eddies. The YBJ equation is a Schrödinger equation, with the YBJ operator playing the role of the Hamiltonian operator in quantum mechanics. As is conventional in quantum mechanics, the evolution of NIWs can be described using the eigenmodes of the YBJ operator and their eigenvalues, which determine the frequency shift away from the inertial frequency. Perturbation methods from quantum mechanics yield insight into the YBJ dynamics and its relationship to the ray-tracing equations of <cit.>. In the strong-dispersion regime ε≫ 1, perturbation theory yields closed-form expressions for the NIW modes. 
To leading order, a spatially uniform forcing excites a spatially uniform NIW mode. This mode is modulated by an order ε^-2 perturbation proportional to the streamfunction of the eddy field. The frequency shift is also of order ε^-2 and proportional to the average kinetic energy of the eddies. Both of these results recover predictions by YBJ. The same approach also yields expressions for the modes that are not spatially uniform to leading order. The degeneracy of these modes at leading order is lifted at higher order, and the frequency shifts and spatial structures can be determined. Wind patterns associated with sharp atmospheric fronts may excite these modes more strongly than the uniform forcing assumed throughout this work <cit.>. In the weak-dispersion regime ε≪ 1, the YBJ equation is amenable to WKB analysis. In simple (separable) background flow geometries, this allows the straightforward calculation of eigenmodes and their frequency shifts, which are excellent approximations of the exact frequency shifts even for modestly small ε. More generally, the weak-dispersion limit of the YBJ equation corresponds to the classical limit of quantum mechanics. The YBJ equation reduces to the ray equations of <cit.>, the equivalent of the corresponding classical Hamiltonian dynamics. The semi-classical EBK analysis allows the calculation of frequency shifts for non-separable background flows for the regular part of the spectrum, which again are in excellent agreement with the full shifts. The emergence of the ray equations in the classical limit furthermore suggests that they can be applied if dispersion is weak, whether or not the forcing has a large horizontal scale. The spatial-scale separation underlying the ray equations emerges because refraction quickly produces short-scale phase variations, initially unopposed by dispersion. The frequency shift of NIW modes away from the inertial frequency implies that the NIW wind work can be modulated by mesoscale eddies. We quantified this using Q, which measures the ratio of the NIW wind work in the presence of mesoscale eddies to that without mesoscale eddies. This modulation arises due to the curvature of the wind power spectrum, which enhances the power input into modes with a shift to lower frequencies more than it suppresses the power input into modes with a shift to higher frequencies. On average, this effect is weak in the oceans, however, with the modulations always being less than 5%. Acknowledgements The authors gratefully acknowledge support from NASA under grants 80NSSC22K1445 and 80NSSC23K0345, from NSF under grant OCE-1924354, and from the Simons Foundation Pivot Fellowship program. Data Availability Statement Code to numerically solve the 2D eigenvalue problem is available at <https://github.com/joernc/ybjmodes>. The SSH data is available from the E. U.’s Copernicus Marine Service at <https://doi.org/10.48670/moi-00148>. The ERA5 reanalysis data is available from the Copernicus Climate Change Service (C3S) Climate Data Store at <https://doi.org/10.24381/cds.adbb2d47>. The ECCO density data is available from <https://doi.org/10.5067/ECG5D-ODE44>. Declaration of Interests The authors report no conflict of interest. § CALCULATING THE WAVE DISPERSIVENESS Here we describe the calculations used to estimate the wave dispersiveness ε^2 = fλ^2/Ψ from observations. At each location, we estimate the set of deformation radii λ from hydrography and the characteristic strength of the streamfunction Ψ from altimetry. 
Following <cit.>, we calculate λ by solving the baroclinic eigenvalue equation using finite differences. We perform this calculation using the climatology from the Estimating the Circulation and Climate of the Ocean (ECCO) state estimate version 4 release 4 <cit.>. We solve the baroclinic eigenvalue equation at each horizontal grid cell on the ECCO grid to obtain maps of the deformation radii. We display ε for the lowest four baroclinic modes only, for which the numerical approximation has a minimal effect. To calculate Ψ, we use sea surface height (SSH) observations from the Data Unification and Altimeter Combination System's (DUACS) delayed-time (DT) 2018 release <cit.>. The SSH is provided at a (nominal) (1/4)° and daily resolution. We calculate a geostrophic streamfunction using ψ = gη/f, where η is the SSH and f is the (now latitude-dependent) Coriolis parameter. We take observations from 2007 to 2022 and estimate Ψ as the RMS ψ over that period. Again we are assuming that the streamfunction is barotropic. § NUMERICAL SOLUTIONS TO THE EIGENVALUE PROBLEM To numerically solve the eigenvalue problem (<ref>), we discretise the Hamiltonian operator H using finite differences. We use a fourth-order central difference scheme for the Laplacian operator in the dispersion term. For the advection term, we employ the fourth-order enstrophy-conserving scheme of <cit.>, which preserves the Hermitian nature of the operator and translates into energy conservation in this context. In the notation of <cit.>, we employ 2J_1 - J_2, where J_1 = (1/3)(J^++ + J^+× + J^×+) and J_2 = (1/3)(J^×× + J^×+ + J^+×). For the refraction term, we evaluate ζ analytically at each point, although in general it could be calculated from the streamfunction using finite differences as well. We use a spatial resolution of up to 1024 × 1024 points and solve for the lowest eigenvalues using Lanczos iteration. The resolution is chosen by checking the convergence of the eigenvalues. The number of eigenvalues solved for depends on the value of ε, which controls how densely packed the eigenvalues are and thus how many must be computed to find all eigenmodes that a uniform forcing projects onto substantially. We ensure a large enough number of eigenvalues are computed by summing the squares of the projection coefficients. § ANALYTICAL SOLUTIONS TO SHEAR FLOW WKB INTEGRALS Here we provide analytical solutions to the WKB problem with the sinusoidal shear flow. First we rewrite the potential as V(x) = A_m cos(x+δ_m) + ε^2m^2/2, where A_m = √(m^2+1/4) and tan δ_m = -2m. Assuming m>0 and that arctan denotes the principal value, it follows that δ_m = π + arctan(-2m). Since the domain is periodic, we can consider any interval of length 2π. For convenience we choose [-π-arctan(-2m), π-arctan(-2m)]. We can now make the change of variable x' = x + arctan(-2m). The transformed potential is V(x') = ε^2m^2/2 - A_m cos(x'). With the potential in this form, the WKB integral (<ref>) can be evaluated in terms of the elliptic integral of the second kind E(φ|k^2): S_0 = ±2√2 iε√(ω - ε^2m^2/2 + A_m) E(x'/2 | 2A_m/(ω - ε^2m^2/2 + A_m)). We can obtain an equation for the eigenvalues from (<ref>). Letting x'_1 denote the positive turning point, given by x'_1 = π - arccos((ω - ε^2m^2/2)/A_m), we obtain E(φ|k^2) = πε(n+1/2)/(4√(2(ω - ε^2m^2/2 + A_m))), where φ = x'_1/2 and k^2 = 2A_m/(ω - ε^2m^2/2 + A_m). This is a transcendental equation that can be solved numerically for the eigenvalues ω. The eigenvalues can be normalised by requiring ∫_-π^π [ϕ_n(x)]^2 dx = 2π. 
Letting C be the normalisation constant, we obtain 8C^2 F(φ|k^2)/√(ω-ε^2m^2/2+A_m) = 2π, where F(φ|k^2) is the elliptic integral of the first kind <cit.>. The projection of a uniform forcing onto a given mode with even symmetry about the bottom of the potential is a_n = (1/2π)∫_-π^π ϕ_n(x) dx. This integral can be evaluated <cit.> as |a_n|^2 = ε F(φ|k^2)/(√(2) √(A_m-ω+ε^2m^2/2)). § FURTHER DETAILS ABOUT THE EBK METHOD We principally follow <cit.> to find the invariant tori satisfying the quantisation conditions and the associated EBK predictions for the eigenvalues ω. We write the Hamilton equations in the angle variables, which are partial differential equations that describe the invariant torus: ν ∂x/∂θ + μ ∂x/∂φ = ε^2 k + u, ν ∂y/∂θ + μ ∂y/∂φ = ε^2 l + v, ν ∂k/∂θ + μ ∂k/∂φ = -(k ∂u/∂x + l ∂v/∂x + (1/2) ∂ζ/∂x), ν ∂l/∂θ + μ ∂l/∂φ = -(k ∂u/∂y + l ∂v/∂y + (1/2) ∂ζ/∂y). The quantisation conditions can then be written as integrals over the angles θ and φ: ∫(k ∂x/∂θ + l ∂y/∂θ) dθ = 2πm, ∫(k ∂x/∂φ + l ∂y/∂φ) dφ = 2π(2n + 1 + m). The integration along θ passes around the hole of the torus (like contour 𝒞_2 in figure <ref>). The integration along φ passes through the hole twice and also around the hole once, so we double the radial phase increment 2π(n + 1/2) and add the azimuthal phase increment 2πm in the second quantisation condition. We average these numerical integrals over the respective other coordinate to increase the accuracy. We discretise the above equations by dividing the [0, 2π] intervals that the angles θ and φ vary over using 64 points and approximate derivatives using an eighth-order finite difference scheme. We initialise the calculation with ν = -1/2, μ = 1/10, x = 1/2 (1 + cos φ) cos θ, y = 1/2 (1 + cos φ) sin θ, k = -ε^-1 sin φ cos θ, l = -ε^-1 sin φ sin θ for the (n, m) = (0, 0) torus and apply Newton iteration to satisfy the above equations. We determine ω by applying the dispersion relation at each point of the torus and averaging over all grid points. We then change the quantum numbers to other values and start the Newton iteration from the previously found torus, using iterations at intermediate values if needed. § ESTIMATING THE DECORRELATION TIME OF WIND STRESS Here we describe the calculations used to estimate the decorrelation time c^-1 of the wind stress. For the wind forcing, we use the European Centre for Medium-Range Weather Forecasting ERA-5 reanalysis <cit.>. For the calculations below, we use data from 2015 to 2020. At each grid cell, we use the 10 m zonal (u_w) and meridional (v_w) winds with hourly resolution. Following <cit.>, we convert this to a wind stress using a bulk aerodynamic drag formulation. The time series at each location is used to calculate a power spectrum of the wind stress. The decorrelation time scale is obtained by fitting the following model to the estimated spectrum: S(ω) = A/(1+(ω/c)^s), where A and s are additional fitted parameters that we do not use here. Over the ocean, s=2 is a reasonable approximation, which motivates our use of the Ornstein–Uhlenbeck process above.
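To make the fitting step concrete, here is a minimal Python sketch of estimating c^-1 from a periodogram (our own illustration, not the authors' code; the spectrum estimate S_hat, the angular-frequency grid omega, and the initial guesses are assumptions):

import numpy as np
from scipy.optimize import curve_fit

def spectral_model(omega, A, c, s):
    # S(omega) = A / (1 + (omega/c)^s)
    return A / (1.0 + (omega / c) ** s)

def decorrelation_time(omega, S_hat):
    # Fit the logarithm so that all frequency decades carry comparable weight.
    log_model = lambda w, A, c, s: np.log(spectral_model(w, A, c, s))
    p0 = [S_hat[0], 2 * np.pi / (3 * 86400.0), 2.0]  # illustrative guess: ~3-day decorrelation, s = 2
    (A, c, s), _ = curve_fit(log_model, omega, np.log(S_hat), p0=p0)
    return 1.0 / c  # decorrelation time scale c^-1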
http://arxiv.org/abs/2407.02373v1
20240702154205
Emerging supersolidity from a polariton condensate in a photonic crystal waveguide
[ "Dimitrios Trypogeorgos", "Antonio Gianfrate", "Manuele Landini", "Davide Nigro", "Dario Gerace", "Iacopo Carusotto", "Fabrizio Riminucci", "Kirk W. Baldwin", "Loren N. Pfeiffer", "Giovanni I. Martone", "Milena De Giorgi", "Dario Ballarini", "Daniele Sanvitto" ]
cond-mat.mes-hall
[ "cond-mat.mes-hall", "cond-mat.quant-gas" ]
dimitrios.trypogeorgos@nanotec.cnr.it CNR Nanotec, Institute of Nanotechnology, via Monteroni, 73100, Lecce, Italy CNR Nanotec, Institute of Nanotechnology, via Monteroni, 73100, Lecce, Italy Institut für Experimentalphysik und Zentrum für Quantenphysik, Universität Innsbruck, 6020 Innsbruck, Austria Dipartimento di Fisica, Università degli Studi di Pavia, via Bassi 6, 27100 Pavia, Italy Pitaevskii BEC Center, CNR-INO and Dipartimento di Fisica, Università di Trento, I-38123 Trento, Italy Molecular Foundry, Lawrence Berkeley National Laboratory, One Cyclotron Road, Berkeley, California, 94720, USA PRISM, Princeton Institute for the Science and Technology of Materials, Princeton University, Princeton, New Jersey 08540, USA INFN, Sezione di Lecce, 73100 Lecce, Italy CNR Nanotec, Institute of Nanotechnology, via Monteroni, 73100, Lecce, Italy CNR Nanotec, Institute of Nanotechnology, via Monteroni, 73100, Lecce, Italy § ABSTRACT A supersolid is a counter-intuitive phase of matter where its constituent particles are arranged into a crystalline structure, yet they are free to flow without friction. This requires the particles to share a global macroscopic phase while being able to reduce their total energy by spontaneous, spatial self-organisation. This exotic state of matter has been achieved in different systems using Bose-Einstein condensates coupled to cavities, possessing spin-orbit coupling, or dipolar interactions. Here we provide experimental evidence of a new implementation of the supersolid phase in a novel non-equilibrium context based on exciton-polaritons condensed in a topologically non-trivial, bound-in-the-continuum state with exceptionally low losses. We measure the density modulation of the polaritonic state, indicating the breaking of translational symmetry, with a remarkable precision of a few parts in a thousand. Direct access to the phase of the wavefunction allows us to additionally measure the local coherence of the superfluid component. We demonstrate the potential of our synthetic photonic material to host phonon dynamics and a multimode excitation spectrum. Emerging supersolidity from a polariton condensate in a photonic crystal waveguide Daniele Sanvitto July 8, 2024 ================================================================================== The existence of a supersolid phase of matter, combining a crystalline structure with superfluid properties, was speculated more than 50 years ago <cit.>; however, convincing experimental evidence has emerged only recently, mainly from ultracold atomic Bose-Einstein condensates (BECs) coupled to electromagnetic fields. There, various guises of the supersolid were created using atoms coupled to high-finesse cavities <cit.>, atoms with large magnetic dipole moments <cit.>, and spin-orbit-coupled, two-component systems showing stripe phases <cit.>. At the mean field level, supersolidity can be interpreted as two-mode condensation; after condensation in the first mode, a second mode becomes energetically available by tuning of interactions or an external electromagnetic field and can then be dynamically populated. In a single-mode BEC, the U(1) symmetry related to its phase is broken at the condensation phase transition, giving rise to superfluidity in the system. The mean field wavefunction has k=0 momentum, giving a constant density in space. Supersolidity arises, in most cases, as an additional, macroscopic occupation of one (or more) finite-momentum modes of the system, leading to breaking of translational symmetry.
The occupation is preceded by the presence of a roton-like minimum in the excitation spectrum that can be dynamically populated when its energy is tuned to be equal to that of the stationary ground state <cit.>. The new mean field wavefunction therefore acquires density modulations with a given length scale and a macroscopic phase relation, giving rise to an extra U(1) symmetry breaking. A change in the relative phase of the two condensates corresponds to a rigid translation of the density pattern associated with a Goldstone mode of the system <cit.>. A gapped Higgs mode is also present, corresponding to a change in the amplitude of the density modulation <cit.>. However, despite all the progress in understanding the physics of the supersolid phase, most current realisations are limited to ultracold atomic systems. Here we demonstrate a new photonic platform that can give rise to a supersolid phase in a novel driven-dissipative context and is poised to open new directions for the study of these intriguing phases of matter. We use a BEC of polaritons, i.e., strongly coupled light-matter excitations, formed in a sub-wavelength, patterned waveguide that gives rise to a condensate at a saddle point of the dispersion <cit.>. Under some mode-folding conditions and exciton-photon detuning, the photonic crystal waveguide shows two extra modes symmetrically distributed in k with respect to the BEC at k=0, with the same polarisation. This allows for the formation of energy-degenerate, optical parametric oscillation (OPO) between the condensate and the adjacent modes, which can be described in terms of polariton-polariton interactions arising from a χ^(3) nonlinearity. Parametric processes are commonplace in nonlinear photonics and can exhibit pattern formation close to threshold in multimode systems <cit.>. However, differently from our results, they are usually induced resonantly, which imposes the coherence of the pump laser on the wavefunction <cit.>. Recently, the onset of a non-resonantly-driven, energy-degenerate parametric scattering was demonstrated in different systems <cit.>, showing how these scattering processes are ubiquitous; however, no proof of coherence or pattern formation was reported. Here, instead, we used a Γ-point, symmetry protected, bound-in-the-continuum (BiC) state formed at the lower branch of the anticrossing <cit.>, which led to a few crucial advantages: (1) a vanishing linewidth due to the suppression of losses on the photonic component, which reduces both the condensation threshold and the incoherently scattered background, and does not overwhelm the emission from other modes in the system, (2) a negative effective mass, which makes the k=0 component self-localised, and (3) absence of accessible lower-energy extrema in the dispersion where particles can accumulate; as such, the OPO is adequately described by a three-branch iso-energetic model. These striking properties of the BiC render the small density modulation of the BEC clearly visible. § FORMATION OF THE DENSITY MODULATION We prepared a polariton BEC, using off-resonant, pulsed excitation, at the lower branch of an anticrossing formed by folding the propagating modes of the one-dimensional waveguide to k=0 using a sub-wavelength, etched grating. The corresponding wavevector sets the size of the irreducible Brillouin zone, which is much larger than all other characteristic length scales of the system, and lies outside our numerical aperture (see Methods).
Aside from the BiC, which corresponds to the anti-crossing of the left- and right-propagating TE_0 modes, the waveguide also supports TE_1 modes, with the mode spacing determined by the planar waveguide thickness and index contrast, which are responsible for the formation of the modulation. The off-resonant pumping predominantly populates the BiC due to its negative effective mass <cit.>. It is important to note that the TE_1 modes are populated by some incoherent population relaxing from the exciton reservoir; however, they can also become macroscopically populated via linear and in particular nonlinear coupling with the TE_0 (<ref>a). The mode-spacing sets the natural unit of length for our system to be 1/k_r=2.3μm, so we can write the four basis states as |±, mk_r⟩ for m=0,1, where the sign labels the propagation direction of the relevant photonic dispersion branch. The TE_0 modes are coupled with a Rabi frequency of 1.5meV, and the BiC corresponds to the lowest energy adiabatic state |0⟩=(|+,0⟩+|-,0⟩)/√(2) that has zero group velocity, with an energy E = 1.526eV. We will from now on limit the description to the three states |0⟩,|±1⟩≡|±,k_r⟩ to simplify notation. The single-particle dispersion shown in <ref>b is measured below threshold since the formation of the BEC dominates the luminescence above threshold and renders the rest of the dispersion scarcely visible. The BEC is visible in the dispersion at the Γ-point, slightly blueshifted from the bare dispersion due to intrinsic polariton-polariton interactions, <ref>c <cit.>. The excitonic Hopfield coefficient at the BiC energy is estimated to be 0.54 for |0⟩ and 0.4 for |± 1⟩, due to their different mode structure that leads to them coupling to the exciton with different Rabi frequencies. We will assume, in what follows, no linear coupling between TE_0 and TE_1 modes, i.e., the off-diagonal element in the Hamiltonian U_01→ 0 (see SI). When the population of |0⟩ becomes large enough, parametric scattering driven by a four-polariton interaction term of the form a_+1^† a_-1^† a_0 a_0+h.c., where a_m are bosonic operators, results in emission of polariton pairs that populate the |± 1⟩ states at finite momentum according to energy and momentum conservation criteria. The |0⟩ and |± 1⟩ states are commonly referred to as pump, signal, and idler modes in optical parametric processes. Since the interaction term is invariant to the transformation a_m → a_m e^imϕ, with m=0, ±1 <cit.>, it has a U(1) symmetry which is spontaneously broken at the OPO threshold. At the same time the OPO breaks translational symmetry, due to it being energy-degenerate, which leads to the appearance of a density modulation on top of the condensate <cit.>. Thanks to the small radiative coupling of the BiC with the far field, the luminescence of the BEC is strongly suppressed at k=0, showing the characteristic two-lobe structure with a π phase jump between the lobes (<ref>d). This is essentially due to the co- and counter-propagating TE_0 modes being out of phase at k=0 as they Bragg scatter from the grating; the propagation into the far field leads to the characteristic nodal line at the centre of the BEC. The density modulation, indicating the translational symmetry breaking of the supersolid, is visible atop both lobes of the BEC, which are extended in reciprocal space and outside the laser spot in coordinate space. Its amplitude is shown in <ref>e to be A∼2.6%. The modulation pattern extends over 2k_r L ≃ 30 lattice sites, where L=30μm is the effective 1/e^2 waist of the BEC.
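As a quick consistency check (our own bookkeeping, not part of the original text): under a_m → a_m e^imϕ, the creation operators transform as a_m^† → a_m^† e^-imϕ, so the interaction term transforms as a_+1^† a_-1^† a_0 a_0 → e^-iϕ e^+iϕ a_+1^† a_-1^† a_0 a_0 = a_+1^† a_-1^† a_0 a_0, confirming the stated U(1) invariance. The same term makes momentum conservation explicit, (+k_r) + (-k_r) = 0 + 0, while the iso-energetic character of the OPO requires the degeneracy E_+1 = E_-1 = E_0.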
The presence of the modulation on top of the BEC is a clear sign of crystallisation: the observation of a periodic pattern, similar to the formation of crystals, despite the system otherwise still being in the superfluid regime. The density in <ref>d is the integrated emission over approximately 8e6 pulses exciting the sample at a rate of 80MHz; the pulse separation is much longer than the lifetime of the BEC (of the order of a few hundred picoseconds); as such, each measurement is an average of multiple independent BEC realisations. The dynamical behaviour of the optical parametric process near- and above-threshold can be simply described by a multibranch Hamiltonian involving three modes <cit.>. The single-mode solution becomes unstable with increasing power and a three-mode solution becomes the ground state of the system <cit.>. Figure <ref>a shows that condensation takes place at around 0.3mW of pump power. The luminescence and linewidth of the BEC dramatically increase above threshold due to the pulsed excitation. Since our system is trapped, it supports multiple confined modes; the first excited mode enters the gap around 0.8mW and can be clearly seen in <ref>a at lower energies than the main, blueshifted BEC <cit.>. Figure <ref>b shows the growth of the densities n_± 1 versus n_0, with approximately linear growth below the BEC threshold. This suggests that at low excitation power, the emission is dominated by incoherent photoluminescence coming from the exciton reservoir. The |0⟩ mode is populated much more efficiently, and n_± 1 starts to experience sublinear growth above threshold. At roughly 430μW, the efficiency of the scattering process increases, indicating that nonlinear scattering is taking place. In this region the |± 1⟩ states, which are uniformly populated in reciprocal space at low pump powers, become structured, showing conjugate scattered pairs (see insets of <ref>b), and the density modulation becomes even more evident in coordinate space. Our ability to observe the density modulation comes from the small linear coupling present in the system. Since the |±1⟩ states are not empty when the OPO process sets in, but are populated by the exciton reservoir or via Rayleigh scattering, even a small amount of linear coupling contributes towards fixing the phase of the density modulation and as such renders it visible in time-integrated emission. The linear coupling is equivalent to including two-operator exchange terms of the form a_+1^† a_0 + a_-1^† a_0 + h.c. in the Hamiltonian, which make it not invariant under the phase transformation discussed above; formally, the OPO tends to break the U(1) symmetry in a not fully spontaneous manner because of this. Well above the OPO threshold the nonlinear scattering dominates and a degree of phase freedom is recovered. To show this we consider the ratio K = (n_1 + n_-1) / 2n_0, which is effectively the Bragg signal corresponding to a given modulation amplitude and scales as A^2/4, where 1/4 is the structure factor for a sinusoidal modulation. For our observed density modulation, this gives an expected Bragg signal of 0.02%, more than an order of magnitude smaller than the measured 0.5%. Moreover, we note that (1) the phase of the modulation varies in the transverse direction by about a 1.1π rotation, as seen in <ref>d, and (2) the average phase in continuous-wave excitation fluctuates predominantly at the density maxima (see discussion in SI).
These observations indicate underlying phase fluctuations related to the OPO that are averaged out during an integrated measurement and effectively reduce the visibility of the density modulation. Analogous effects exist in atomic supersolids, where the phase of the modulation is fixed by the presence of an external trapping potential, yet small phase variations could still be observed, e.g. in dipolar supersolids <cit.>. We note that the linear coupling term alone does not fix the phase of the modulation, as discussed for spin-orbit-coupled systems <cit.>. Such effects are inherent in finite-size systems; however, they have not prevented the observation of well-behaved collective excitation spectra <cit.>. Importantly, our system can recover full phase freedom in a much simpler manner, i.e. by engineering the dispersion so that U_01→ 0. § LOCAL AND GLOBAL COHERENCE Having shown the behaviour of the populations undergoing parametric scattering, it is interesting to see how the diagonal and off-diagonal phase coherence is established. We use interferometric measurements to fully capture the microscopic, spatial coherence of the system, which serves as a direct proof that the fragile local and long-range order of the solid and superfluid parts is preserved. The strength of using an all-optical system in this regard is the ease of access to the first-order spatial correlator, g^(1)(x-x^'), due to our ability to measure the full wavefunction of the BEC ψ(x,y) = √(n(x,y))exp(-i ϕ(x, y)) using off-axis holography (see Methods). Direct, local measurements of the first-order coherence show the wavefunction to be fully coherent above threshold, with a modulated coherence amplitude locally correlated with the emerging supersolid crystalline structure. The superfluid character of our bulk BEC on the BiC state has already been characterised through the observation of a linear Bogoliubov excitation spectrum <cit.>; as such, here we take the observed increase in the g^(1)(x-x^') coherence length (<ref>a) to be indicative of an increased superfluid fraction. Figure <ref>a shows the increase with power of g^(1)(x-x^') normalised to its maximum. Below threshold, only the zero-time-space auto-correlator g^(1)(0) has appreciable amplitude. At ∼250μW, the onset of polariton condensation, the coherence length starts increasing at a rate of 133μm/mW. The fast spatial oscillations that appear in g^(1)(x-x^') have an amplitude of ∼1.5% and are in phase with the density modulations, showing how coherence locally drops in regions of lower density of the crystalline solid <cit.> (see <ref>a). The emergence of the global, diagonal and off-diagonal long-range order can be quantified by measuring ℱ_x{|ψ(x, y)e^-ik_1x + ψ(-x, -y) e^-ik_2x|^2} at y=0, where ℱ_x{·} is the spatial Fourier transform along x and the quantity in the brackets is the output of the interferometer (see SI). The magnitude of the k=k_r component is a measure of the amplitude of the density modulation arising from the phase coherence between the three BEC components. Similarly, the component corresponding to the wavevector of the interferogram fringes, k=k_2 - k_1, captures the phase coherence of the k=0 component. We denote these quantities as C_mod and C_coh, respectively. The relative increase of C_mod against C_coh above threshold pump power (see <ref>b) is similar to the behaviour of n_± 1 against n_0 shown in <ref>b.
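In practice, C_mod and C_coh can be read off as two peaks of the Fourier transform of the measured interferogram cut; a minimal NumPy sketch of this readout (our own illustration — the interferogram cut I, pixel size dx, and the wavevectors k_r and k_fringe = k_2 - k_1 are assumed to come from the experiment):

import numpy as np

def fourier_peaks(I, dx, k_r, k_fringe):
    # One-sided FFT amplitudes of the interferogram cut I(x) at y = 0.
    k = 2 * np.pi * np.fft.rfftfreq(I.size, d=dx)
    F = np.abs(np.fft.rfft(I))
    C_mod = F[np.argmin(np.abs(k - k_r))]       # density-modulation component at k_r
    C_coh = F[np.argmin(np.abs(k - k_fringe))]  # fringe component at k_2 - k_1
    return C_mod, C_coh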
Below the condensation threshold, C_mod is largely dominated by noise since the crystalline pattern has not formed yet; it then slowly rises up to the OPO threshold, where it begins to increase sharply. In the absence of linear coupling this would indicate a two-step breaking of the superfluid U(1) and translational symmetries; in our case, the increase of the density modulation is a clear signature that the OPO process is established. § NON-RIGIDITY OF THE MODULATION An important aspect of the study of quantum many-body systems is understanding their collective excitations; they dictate the dynamics of the system and its response to external perturbations. In order to be able to observe and manipulate the collective modes, the quantum system needs to be allowed to spontaneously break the relevant symmetry during a phase transition, without external or intrinsic factors that influence this choice. For example, in single-mode, high-finesse cavity realisations of the supersolid phase the wavelength of the modulation is fixed by the wavelength of the relevant cavity mode, implying that the phononic branch corresponding to the Goldstone mode cannot be observed <cit.>. In such systems, excitations at finite k are suppressed due to the infinite range of the coupling. This means that the global translation mode at k=0 exists, but the local compression/dilation motion of the fringes occurring at finite k is not observable; such phonon dynamics can be captured with multimode cavities or in stripe phases of spin-orbit-coupled systems, as well as in dipolar systems, due to the translational invariance of the interactions in those systems <cit.>. In our case, the symmetry breaking results from short-range polariton-polariton interactions; we therefore expect similar behaviour to the latter systems. The parametric scattering process is constrained to occur on the available |± 1⟩ modes where the density of states is different from zero. This fixes the wavenumber of |± 1⟩ to k_r. However, the value of k_r depends on polariton-polariton interactions and their resulting blueshift. As the interactions blueshift the energy of the BEC, and since OPO is an iso-energetic process, the energy of |± 1⟩ has to shift as well, which is done by shifting their wavevector inwards. This shift is nonlinear due to the strong coupling of the photonic modes to the exciton, which leads to an increased curvature at energies closer to the exciton resonance. The energy of |± 1⟩, associated with the breaking of translational symmetry, increases by 0.35 meV as the pump power increases (<ref>a); the linewidth of the TE_1 mode is about 500μeV. For the same range of pump powers, k_r shifts by 0.04μm^-1 (<ref>b, c). As such, the wavevector of the density modulation is not fixed, but depends on both the single-particle and many-body parameters of the system, reminiscent of a non-rigid, roton-like behaviour. § DISCUSSION AND OUTLOOK We demonstrated evidence of a supersolid state of matter in a polaritonic system, which is a novel and flexible platform for the investigation of the physics of supersolid systems. To place our work more generally in the context of supersolids, as observed mainly in ultracold atomic systems, we experimentally addressed its characteristics in coordinate and reciprocal space, its diagonal and off-diagonal long-range order, and its interaction-dependent behaviour. Our supersolid state is formed using an iso-energetic OPO process to spontaneously break the translation symmetry of a non-resonant polaritonic superfluid.
We stress that this is a novel mechanism for the creation of a supersolid, particular to the driven-dissipative context of non-equilibrium polariton systems, and not simply a photonic analogue of mechanisms demonstrated in atomic platforms. The analysis of the parametrically scattered light indicates that the system condenses at a given threshold and the OPO activates at higher powers. Theoretically, we also expect the existence of two distinct transition points. By exploiting the richness of engineering opportunities in these photonic crystal systems, it is possible to eliminate completely the linear coupling between the modes, for example using TE- and TM-polarised modes. This will enable us to observe full phase freedom of the density modulation either in single-shot experiments or in higher-order coherence functions <cit.>. Near and above threshold the statistics of the emitted light changes to bunched, leading to an increase in density-density correlations that are peaked at threshold, similar to a resonantly pumped OPO <cit.>. A spatially dependent g^(2)(x -x') will show signatures of density modulation since it is only sensitive to how polaritons at relative distances are correlated. Since the spacing of the crystalline modulation is not rigid but influenced by short-range polariton interactions, the low-lying excitation spectrum of the supersolid should reveal the two phononic Goldstone branches resulting from the symmetries involved. The observation of such a spectrum is a natural next step in the investigation of the supersolid properties. As a further development of this research, increasing the dimensionality of the system, we expect additional breaking of discrete rotational symmetries to occur, as well as the proliferation and dynamics of stochastic vortices <cit.>. Moving beyond parametric scattering, similar effects have been suggested utilising coupling to indirect excitons to engineer long-range, dipolar interactions that can lead to crystallisation under compression <cit.>, or using Bose-Fermi mixtures where excitons are coupled to an electron gas <cit.>. § METHODS Sample. The sample used in these experiments is an AlGaAs waveguide grown by molecular beam epitaxy. It has twelve 20 nm embedded GaAs quantum wells with a heavy-hole excitonic transition at 1531.1(0.1) meV. The propagating modes are folded within the light cone by inscribing a 50 × 400 μm unidirectional grating within the structure, with a lattice parameter a=242nm corresponding to a wavevector translation of π/a=13μm^-1. The energy-momentum dispersion therefore consists of two counterpropagating TE_0 modes forming an anticrossing at k_x=0, 2.4meV blue-detuned with respect to the excitonic transition, and a couple of symmetric TE_1 modes located at ≈2.3μm^-1 from the TE_0 modes. The filling factor of the grating is 76% in order to spatially overlap the two |0⟩ and ensure the coupling described by the U term in the Supplemental Information, responsible for the opening of the BiC gap. Due to the confinement, the strong-coupling regime is achieved between the excitonic transition and the propagating TE_0 and TE_1 modes, with Rabi frequencies of 5.6 and 4.2 meV, respectively. The fabrication details of the grating etching can be found in <cit.>. Laser excitation scheme. The sample is held in an attoDRY closed-loop helium cryostat at a temperature of ≈4 K in order to avoid exciton ionisation. The waveguide is non-resonantly excited in reflection configuration using an 80MHz repetition rate, 100 fs tunable laser.
The polarisation is chosen to be parallel to the grating corrugation direction, and the energy is set at 1.61 eV. The excitation beam is focused into a 3.90(0.03) μm 1/e^2 spot through a NA = 0.68 aspheric objective, at the centre of the etched structure. Interferometric measurements. The photoluminescence collected by the objective is sent into a Michelson interferometer through the detection line. In one of the two arms the image is centro-symmetrically flipped along x and y using a backreflector, and the coordinate-space photoluminescence distribution is re-focused onto the spectrometer slits through the same optical path. The total magnification of the line, sample-to-camera, is approximately 72.6. The two arms are synchronised and balanced, optimising the visibility of the interference pattern of the excitation laser reflected from the sample surface. Reciprocal space measurements. For the reciprocal-space images, a similar detection line is used, but the interferometer is bypassed. A 4f-system collects the reciprocal plane and focuses it onto the monochromator slits, resulting in a total magnification of the objective back focal plane equal to 1.66. Coherence calculation. The two-point correlator g^(1)(x-x') = ⟨E^*_1(x)E_2(x)⟩ / (⟨|E_1(x)|^2⟩⟨|E_2(x)|^2⟩)^1/2, where E_1,2(x) are the fields from the two interferometer arms, is proportional to the visibility 𝒱(x)=(I_u - I_l) / (I_u + I_l) of the interferometric fringes, where I_u (I_l) is the upper (lower) envelope of the interferogram; we omit the x dependence for brevity. We calculate 𝒱 by first extracting the k=k_r and k=0 components of the interferogram by appropriate filtering in reciprocal space and then using the Hilbert transform to calculate the envelope of the ac component; the visibility is then simply the envelope over the k=0 component. For a slight imbalance or imperfect reflection symmetry between the two images, 𝒱 needs to be rescaled by 2√(I_1I_2)/(I_1 + I_2), where I_1,2 are the intensities in each arm of the interferometer, to give a correctly normalised g^(1)(x-x'). We forgo this normalisation since small misalignments, of the order of 1/k_r, in the overlap of the two images distort the density modulation. This does not significantly alter the form of g^(1)(x-x') since in our case this prefactor is ≃ 1.
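A minimal NumPy/SciPy sketch of this envelope extraction (our own illustration, not the authors' code; the interferogram cut I, pixel size dx, fringe wavevector k_fringe, and filter bandwidth bw are assumptions):

import numpy as np
from scipy.signal import hilbert

def visibility(I, dx, k_fringe, bw):
    # Isolate the fringe (ac) and k = 0 (dc) components in reciprocal space.
    k = 2 * np.pi * np.fft.fftfreq(I.size, d=dx)
    F = np.fft.fft(I)
    ac = np.fft.ifft(np.where(np.abs(np.abs(k) - k_fringe) < bw, F, 0)).real
    dc = np.fft.ifft(np.where(np.abs(k) < bw, F, 0)).real
    # Hilbert envelope of the ac part; V(x) = envelope / (k = 0 component).
    return np.abs(hilbert(ac)) / dc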
This research has been cofunded by the European Union - NextGeneration EU, “Integrated infrastructure initiative in Photonic and Quantum Sciences” - I-PHOQS [IR0000016, ID D2B8D520, CUP B53C22001750006]. IC acknowledges funding from the Provincia Autonoma di Trento, partly via the Q@TN initiative. This research is funded in part by the Gordon and Betty Moore Foundation’s EPiQS Initiative, grant GBMF9615 to L.P., and by the National Science Foundation MRSEC grant DMR 2011750 to Princeton University. Work at the Molecular Foundry is supported by the Office of Science, Office of Basic Energy Sciences, of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. We thank Scott Dhuey for assistance with electron beam lithography and Paolo Cazzato for the technical support. Competing interests. The authors declare no competing interests. Data Availability. The data of this study is available from the corresponding author upon reasonable request. Author contributions. DT and ML conceived the experiment and convinced DS to go ahead with it. AG took the data and together with DT performed the analysis. DT, AG and ML wrote the manuscript. DN, DG, and IC provided theoretical support. FR processed the sample; growth was performed by KB and LP. DS supervised the work, discussed the data, and tirelessly explained to DT how polariton OPOs should behave. DT tried to use the best photonics language he could muster but in the end had to use kets. All authors contributed to discussions and editing of the manuscript.
http://arxiv.org/abs/2407.01848v1
20240701231634
UniFIDES: Universal Fractional Integro-Differential Equation Solvers
[ "Milad Saadat", "Deepak Mangal", "Safa Jamali" ]
cs.LG
[ "cs.LG", "cs.CE" ]
UniFIDES: Universal Fractional Integro-Differential Equation Solvers. Milad Saadat (saadat.m@northeastern.edu), Deepak Mangal (d.mangal@northeastern.edu), Safa Jamali (s.jamali@northeastern.edu). Department of Mechanical and Industrial Engineering, Northeastern University, Boston, 02115, Massachusetts, USA. The development of data-driven approaches for solving differential equations has been followed by a plethora of applications in science and engineering across a multitude of disciplines and remains a central focus of active scientific inquiry. However, a large body of natural phenomena incorporates memory effects that are best described via fractional integro-differential equations (FIDEs), in which the integral or differential operators accept non-integer orders. Addressing the challenges posed by nonlinear FIDEs is a recognized difficulty, necessitating the application of generic methods with immediate practical relevance. This work introduces the Universal Fractional Integro-Differential Equation Solvers (UniFIDES), a comprehensive machine learning platform designed to expeditiously solve a variety of FIDEs in both forward and inverse directions, without the need for ad hoc manipulation of the equations. The effectiveness of UniFIDES is demonstrated through a collection of integer-order and fractional problems in science and engineering. Our results highlight UniFIDES' ability to accurately solve a wide spectrum of integro-differential equations and offer the prospect of using machine learning platforms universally for discovering and describing dynamical and complex systems. July 8, 2024 ================ § INTRODUCTION The mathematical description of natural phenomena generally involves a series of differential equations (DEs). In the most generic form, partial (spatio-temporal) differential equations (PDEs) accept linear and nonlinear combinations of transient, flux, body force, and source terms, which are able to represent a wide array of dynamical systems. This is why the development of algorithms that are geared towards improving general solutions of differential equations can have a significant impact across many fields. However, memory or spatially distributed quantities in general cannot be concisely described through DEs and are usually explained via integral terms. Integro-differential equations (IDEs), therefore, can describe systems with both non-local and pointwise effects. IDEs have vital applications in electrical circuit analysis <cit.>, epidemics and population balances <cit.>, renewable energy <cit.>, potential theory <cit.>, and thermo-fluid sciences <cit.>, among others. Many crucial tasks in signal processing <cit.>, material design <cit.>, and economics <cit.>, nonetheless, are described through fractional integro-differential equations (FIDEs), in which the integral or derivative operators accept any arbitrary real or complex order <cit.>. Therefore, FIDEs can not only arguably capture the broadest and most complex set of phenomena but also often offer the most mathematically concise/compact descriptions. However, solving FIDEs has proven to be far from trivial, and algorithms that can directly solve FIDEs can be transformative in a foundational way. Methods to solve IDEs can be categorized into analytical, semi-analytical, and numerical techniques. Analytical methods, such as Laplace transforms <cit.> and Taylor series <cit.>, often yield precise, closed-form solutions but are limited to relatively simple cases <cit.>.
Semi-analytical techniques, including Adomian decomposition <cit.>, homotopy perturbation <cit.>, and variational iteration methods <cit.>, can tackle a broader range of linear and nonlinear problems, though they may present challenges in problem formulation and convergence. Numerical techniques, such as Chebyshev polynomials <cit.>, Haar wavelet <cit.>, Galerkin <cit.>, and finite difference methods <cit.>, are highly versatile and adaptable for a wide array of problems. However, these methods can be complex to implement, only approximate the solution, and are often computationally intensive. In recent years, machine learning (ML) tools have shown great potential in addressing complex computational problems across various domains <cit.>. By incorporating prior knowledge, such as principled physical laws that dictate the spatio-temporal dynamics of systems, science-aware machine learning is extensively being applied to a wide set of challenges in science and engineering <cit.>. For instance, by leveraging automatic differentiation (AD) to directly solve DEs, Physics-Informed Neural Networks (PINNs) were developed as robust platforms for data-driven science and engineering computations <cit.>. The discovery of governing equations of a system from data was also shown to be feasible using sparse identification of nonlinear dynamics <cit.>. Moreover, mesh-based discretization methods have proven effective in solving forward and inverse PDE problems <cit.>. Neural operators and transformers are also gaining traction for their ability to solve a family of PDEs instead of a single PDE, reducing the overall computation time <cit.>. Despite the successful endeavors in developing ML frameworks for the solution of differential equations, there appears to be a general scarcity of equivalent platforms for addressing integral equations. A few ML tools have been employed to solve IDEs <cit.> and fractional DEs <cit.>. In these examples, the integral (or fractional derivative) operators are approximated using a numerical discretization scheme such as Simpson's rule or Gaussian quadrature. In <cit.>, fractional-PINN (fPINN) was introduced and used to solve fractional DEs in forward and inverse directions. In a recent PINN attempt to solve a wider set of IDEs, the auxiliary-PINN (A-PINN) method was introduced <cit.>, which defines an auxiliary equation to represent the integral term. This auxiliary equation is then converted to an ordinary differential equation (ODE) using AD of the auxiliary output. By employing the A-PINN method, the need for integral approximation is cleverly obviated, demonstrating promise in solving a variety of IDEs. However, A-PINN is restricted to integer-order IDEs, as fractional differentiation is not feasible with AD; see <Ref>. Fractional integrals and differentiations, therefore, inevitably demand some form of numerical approximation. This study enhances the effectiveness of physics-informed neural networks (PINNs) by introducing a Universal Fractional Integro-Differential Equation Solver, UniFIDES. For the first time, a versatile PINN-based framework is introduced, adept at solving nonlinear and multi-dimensional integer-order and fractional integro-differential equations (FIDEs), including systems of FIDEs and partial FIDEs. Streamlined with automatic differentiation for integer-order differentiation, UniFIDES can solve both forward and inverse problems.
Through extensive experiments on various integer-order and fractional Fredholm and Volterra integral and integro-differential equations, UniFIDES demonstrates its robustness and precision. By eliminating the need for any mathematical manipulation, UniFIDES offers out-of-the-box functionality for a diverse set of problems in science and engineering. § RESULTS Unlike other FIDE frameworks that usually demand ad-hoc modifications for each type of problem, UniFIDES operates on a seamless plug-and-play basis; the equations are embedded as-is, and the imposition of boundary and initial conditions is intuitive and straightforward. To manifest the generality of UniFIDES, we begin by recalling the fractional integral of order α∈ℝ^+ in the Riemann-Liouville (RL) sense[In fact, α can be complex, but for most engineering and scientific applications, α remains real.] <cit.>: [^αℐ_0^x]u(x) = 1/Γ(α) ∫_0^x 1/(x-t)^(1-α) u(t) dt, where x ∈ℝ^+, Γ(·) is Euler's continuous gamma function, and u(t) is a non-singular function in general. <Ref> is essentially a fractional Volterra IE of order α whose kernel has a strong singularity at one endpoint of the integration interval. For α=1, <Ref> reduces to a typical integral expression. The RL fractional derivative of order β w.r.t. x, [^β𝒟_x], will then be defined as follows <cit.>: [^β𝒟_x]u(x) = d^m/dx^m [^(m-β)ℐ] u(x) = 1/Γ(-β) ∫_0^x 1/(x-t)^(1+β) u(t) dt = [^-βℐ_0^x]u(x), where m=⌈β⌉ denotes the ceiling function.[The fractional derivative in the Caputo sense is also accessible by swapping the sequence of differentiation and integration in the first equality in <Ref>, which immediately yields a corresponding definition in the RL sense and vice versa; see the second chapter in <cit.>.] The second equality in <Ref> is valid for suitable u(x) functions; see <cit.>. In fact, a fractional derivative of order β equivalently corresponds to a fractional integral of order -β; compare the last equality in <Ref> with <Ref>. In summary, <Ref> serve as the fundamental expressions to calculate integer-order and fractional integrals (α≥ 0), while <Ref> handles the fractional derivatives (0≤β<1) in FIDEs. For integer-order derivatives, AD is adopted. In <Ref>, the numerical methods to approximate these fractional operators are put forth. The architecture of UniFIDES is shown in <Ref>, with more information about the training process and hyperparameters in <Ref>. In the results that follow, four forward integer-order IE and IDE problems are introduced. Then, three more involved fractional IE, IDE, and system of IDEs are solved in the forward direction. Finally, the inverse solution of the system of FIDEs is presented as the most complicated of all test cases, with the objective of recovering the fractional operator orders. Additional cases can be found in <ref>. §.§ Integer-order forward solutions §.§.§ Forward solution of 1D Fredholm IDE Case 1 is that of reference <cit.>, which is an integer-order nonlinear 1D Fredholm IDE with applications in diffusion processes and quantum mechanics: { [^1𝒟_x] u(x) = cos x - x + 1/4 [^1ℐ_-1/4^1/4] xt u^2(t) dt, x ∈ [-π/2, π/2], u(-π/2)=0. } The exact solution reads u(x) = 1 + sin x. This is a forward problem, and the objective is to find u(x). <Ref> is directly implemented in its continuous form with the same PINN hyperparameters and collocation points as in <cit.>; see <Ref> for a summary of the hyperparameters. The integer-order derivative is handled using AD, while the integral term is approximated through the numerical scheme introduced in <Ref>.
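To make the action of these operators concrete, here is a standard textbook example (added for illustration; it is not part of the original text). For a power function u(x) = x^γ with γ > -1, the RL operators give [^αℐ_0^x] x^γ = Γ(γ+1)/Γ(γ+1+α) x^(γ+α) and [^β𝒟_x] x^γ = Γ(γ+1)/Γ(γ+1-β) x^(γ-β). Setting α=1, γ=1 recovers the familiar ∫_0^x t dt = x^2/2, and letting β→1 recovers du/dx = γx^(γ-1), so the fractional operators interpolate smoothly between their integer-order counterparts.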
The UniFIDES prediction is shown in <Ref>. Despite the inevitable truncation error associated with the numerical approximation of the integral term, UniFIDES still managed to yield an accurate prediction compared to A-PINN: the relative L2 error for this case was reported to be 0.048 using A-PINN, while this value is 0.019 with UniFIDES. The mean squared errors (MSEs) for all cases tested are summarized in <Ref> and benchmarked against integer-order cases using A-PINN; see <Ref> for more details on the A-PINN training procedure and its potential limitations. §.§.§ Forward solution of 3D Fredholm IE To further demonstrate UniFIDES' generality to higher-dimensional problems, Case 2 is deemed to be a 3D Fredholm IE reported by reference <cit.>: { u(x,y,z) = x^2y^2z^2 - 1/29400 e^-xyz + 0.01 [^1ℐ_0^1][^1ℐ_0^1][^1ℐ_0^1] e^-xyz t^2 s r^2 u^2(t,s,r) dt ds dr, (x,y,z) ∈ [0, 1], u(0,0,0)=0, } with an exact solution of u(x,y,z)=x^2y^2z^2. Such IEs have applications in electromagnetic theory and non-homogeneous elasticity. As can be seen in <Ref>, the UniFIDES prediction closely mimics the exact solution with an excellent MSE of 1.07e-6. As an additional case, the same IE but for (x,y,z) ∈ [0,2] is solved, and despite the accuracy deterioration often reported when dealing with irregular ranges for PINN-based frameworks, UniFIDES still managed to recover the 3D dynamics with an MSE of 7.85e-3; see <Ref>. §.§.§ Forward solution of 1D Volterra IDE Fredholm IDEs, with fixed integration limits, reflect aggregate system properties, while Volterra IDEs, with a variable upper limit, take into account the system's immediate history up to the current point. This makes Volterra IDEs ideal for applications in population dynamics and epidemic modeling <cit.> and also differentiates their integral modeling; see <Ref>. For instance, the following nonlinear 1D Volterra IDE (Case 3) <cit.> has applications in population growth of species and financial mathematics: { [^1𝒟_x] u(x) = 5/2 x - 1/2 x e^x^2 + [^1ℐ_0^x] xt e^u(t) dt, x ∈ [0, 1], u(0)=0, } with an exact solution of u(x)=x^2. For each point x between 0 and 1, the integral accumulates the effect of the integrand xte^u(t) over the interval from t=0 to x. This indicates that what happens between t=0 and x has a cumulative impact on the outcome at x, meaning that the integral term represents non-local interactions within the system. Again, the integer-order derivative is obtained using AD. The UniFIDES solution of this IDE is shown in <Ref>, with an MSE of 8.78e-6. Due to the hereditary nature of fractional operators, the squared error increases monotonically along the x direction. To ameliorate this inevitable artifact, the number of residual points can be increased, which corresponds to a smaller value of the step size, h; see <Ref>. §.§.§ Forward solution of 2D Volterra IE The following 2D nonlinear Volterra IE is utilized as Case 4 to assess the applicability of UniFIDES to higher-dimension Volterra cases <cit.>: { u(x,y) = f(x,y) + [^1ℐ_0^y][^1ℐ_0^x] (xt^2+cos(s)) u^2(t,s) dt ds, f(x,y) = x sin(y)(1 - x^2 sin^2 y/9) + x^6/10 (sin(2y)/2 - y), (x, y) ∈ [0, 0.5], [0, 1], u(0, 0)=0. } Such IEs are seen in the plane contact of inhomogeneous, ageing viscoelastic materials. The exact solution reads u(x,y)=x sin y. The x and y upper limits are chosen to be 0.5 and 1, respectively, to demonstrate UniFIDES' versatility in handling double integrals with different step sizes. The UniFIDES solution, along with the exact solution, is presented in <Ref>. The solution attained an MSE of 1.50e-5.
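Looking back at Case 3, its consistency is easy to verify by hand (our own check, not from the paper): with u(t)=t^2, the integral evaluates to ∫_0^x xt e^(t^2) dt = (x/2)(e^(x^2)-1), so the right-hand side becomes 5x/2 - (x/2)e^(x^2) + (x/2)e^(x^2) - x/2 = 2x, which is indeed the derivative of the exact solution x^2.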
§.§ Fractional-order forward and inverse solutions So far, the test cases have only included integer-order differentiation and integral operators. The derivatives were handled by AD, while the integrals were calculated using the numerical approximation explained in <Ref>. In the following test cases, the same numerical scheme but for fractional-order operators is utilized. Despite the increasing complexity of FIDEs compared to IDEs, UniFIDES is agnostic to the operator orders and nonlinearity. Additionally, attention is given exclusively to fractional Volterra instances due to their heightened intricacy. §.§.§ Forward solution of 1D Volterra FIE The first fractional problem (Case 5) is a 1D Volterra FIE <cit.>: { u(x) = √(π)(1+x)^-1.5 - 0.02x^3/(1+x) + 0.01x^2.5 [^0.5ℐ_0^x] u(t) dt, x ∈ [0, 4], u(0)=√(π). } The integral operator in this case has a fractional order (α = 0.5), and the exact solution reads u(x) = √(π)(1 + x)^-1.5. Such FIEs are frequently seen in crystal growth and heat conduction. Here, the range of x is also extended. As shown in <Ref>, the UniFIDES prediction closely mimics the reference, with an MSE of 1.13e-6. Again, error accumulation is witnessed as x increases, which can be mitigated by reducing the step size, h; see <Ref>. The error magnitude is nonetheless negligible for most engineering and scientific applications. §.§.§ Forward solution of 2D Volterra partial FIDE Case 6 is a nonlinear 2D Volterra FIDE which involves derivatives of both inputs, therefore covering partial FIDEs <cit.>: { [^β𝒟_y] u(x,y) - ∂^2 u/∂x^2 + [^1ℐ_0^y] x(y-s)u(x,s) ds = f(x,y), f(x,y) = (1-x^2)(y^(1-β)/Γ(2-β) + Γ(1+β)) + 2(y+y^β) + x(1-x^2)(y^3/6 + y^(2+β)/((1+β)(2+β))), (x, y) ∈ [-1, 1], [0, 1], u(-1, y)=u(1, y)=0, u(x, 0)=0. } The exact solution is u(x,y)=(1-x^2)(y+y^β). Such partial FIDEs are encountered in grain growth and reactor dynamics.[By replacing y with t, this partial FIDE can be thought of as a time-fractional IDE. However, t, s, and r are reserved for dummy integral variables in this work.] The UniFIDES solution for β=0.7 is presented in <Ref>. Despite the increased complexity of this case, UniFIDES recovered the exact solution with an MSE of 1.17e-3, which reduces for finer grids. As before, the second-order derivative w.r.t. x is handled using AD, while the integral and fractional derivative terms are approximated using the numerical scheme. Here, attention should be given once partial fractional derivatives are calculated to ensure numerical accuracy; see <Ref>. §.§.§ Forward solution of a system of Volterra FIDEs As the final forward solution problem, the objective in Case 7 is to solve a system of nonlinear Volterra FIDEs: { [^β𝒟_x]u_1(x) - 3βΓ(3β)x^(2β)/Γ(1+2β) - [^1ℐ_0^x](x-t)u_1(t) dt - [^1ℐ_0^x](x-t)u_2(t) dt = 0, [^β𝒟_x]u_2(x) + 2x^(2+3β)/(2+9β+9β^2) + 3βΓ(3β)x^(2β)/Γ(1+2β) - [^1ℐ_0^x](x-t)u_1(t) dt - [^1ℐ_0^x](x-t)u_2(t) dt = 0, x ∈ [0, 1], u_1(0)=u_2(0)=0, } with an exact solution of u_1(x)=x^(3β) and u_2(x)=-x^(3β). Such coupled FIDEs arise in enzyme kinetics and gas-liquid reaction problems. The UniFIDES solution for β=0.5 is presented in <Ref>. The MSE for this case was 7.55e-7, which demonstrates UniFIDES' generality for multi-output cases. §.§.§ Inverse solution of a system of Volterra FIDEs So far, the IEs and IDEs were assumed known, with proper initial and/or boundary conditions. There are, nonetheless, cases in which the exact form or values of these equations are unknown or only partially available, while a [spatio-temporal] measurement of the system is conducted.
PINNs have proven to be robust in dealing with such ill-posed problems. To assess the capability of UniFIDES in addressing inverse problems, the same system of FIDEs described in <Ref> is transformed into an inverse problem. The training data is generated using the exact solution for β=0.5 and x ∈ [0,1], where u_1 and u_2 are functions of x. In order to replicate real-world scenarios, Gaussian noise with a standard deviation of 0.1 is added to both the u_1 and u_2 vectors. The objective is to recover the integral and derivative orders, denoted as α and β, respectively. Compared to the exact orders of α, β = (1, 0.5), UniFIDES converges to 1.025 and 0.488, corresponding to exceptional relative errors of 2.5% and 2.4%, respectively. This establishes UniFIDES as a robust solver for both forward and inverse problems across a range of IE and IDE scenarios. § DISCUSSION This study introduces a resilient and versatile framework for solving a variety of integro-differential equations with both fractional and integer orders, covering generic problems in science and engineering. Despite the inherent difficulties in handling integral equations using PINN-based platforms, our results demonstrate that UniFIDES can tackle such equations in both forward and inverse directions, irrespective of their nonlinearity or operator order. Notably, unlike comparable toolboxes designed for solving FIDEs, which often require case-specific and ad-hoc adjustments, UniFIDES accepts the equations in their original (and continuous) form, allowing for intuitive imposition of boundary or initial conditions. In all fairness, there exist several limitations associated with UniFIDES in its current form. The first, and probably the most notable limitation, lies in the utilization of a numerical scheme to approximate integrals and fractional derivatives. Similar to other numerical approximations, truncation error is inevitable. The final predictions, however, were comparable in terms of accuracy with, for instance, A-PINN, which obviated the need for discretization. The implemented numerical scheme commits an error of O(h^2), with h being the step size. Although other integral approximations with refined accuracy may reduce this error order, they will nonetheless remain a function of h. Moreover, h is assumed constant along each integration (or differentiation) axis. The hereditary nature of fractional operators and the embedded numerical scheme require the utilization of input vectors in ascending order. This requirement arises from the inherent dependence of fractional operators on the function's historical data, wherein the concept of history holds significance only when information is chronologically stored. Therefore, if UniFIDES is tested on non-rectangular geometries, it is vital to pass an ascending vector to <Ref> and evaluate the functional points accordingly. In other words, a fictitious structured mesh is defined and used for fractional operators; other functions may be sampled randomly in space (or time), but they need to be sorted to ensure accurate pointwise calculations. In summary, UniFIDES presents enhanced flexibility in both problem definition and implementation, surpassing its current limitations. Future research endeavors should focus on relaxing the constraint of a constant step size and introducing innovative approaches to evaluate spatio-temporal fractional functions.
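For readers wanting to reproduce the inverse setup described above, a minimal TensorFlow-style sketch of exposing the operator orders as trainable variables (our own illustration, not the authors' code; residual_loss, data_loss, and net are hypothetical placeholders for the residual, data-misfit, and network objects defined in the Methods):

import tensorflow as tf

alpha = tf.Variable(0.8, dtype=tf.float64)  # unknown integral order, initial guess
beta = tf.Variable(0.3, dtype=tf.float64)   # unknown derivative order, initial guess
optimizer = tf.keras.optimizers.Adam(1e-3)

def train_step(net, x, u_noisy):
    with tf.GradientTape() as tape:
        loss = residual_loss(net, x, alpha, beta) + data_loss(net, x, u_noisy)
    variables = net.trainable_variables + [alpha, beta]
    grads = tape.gradient(loss, variables)
    optimizer.apply_gradients(zip(grads, variables))
    return loss

Here the orders enter the discretised fractional operators and are updated jointly with the network weights, which is one standard way of recovering unknown parameters in PINN-type inverse problems.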
§ METHODS Consider the following 1D Volterra FIDE: { [^β𝒟_x] u(x) + ∂^2 u(x)/∂x^2 = f(x) + [^αℐ_0^x] K(t)u(t) dt, x ∈ [0, X], u(0), u(X) = (u^0, u^X), } where f(x) is a known and discrete function and K(t) is the Abel kernel with a [potential] weak singularity. Such FIDEs are prevalent in anomalous diffusion. The objective in a forward problem is to determine u(x), while in an inverse problem, scattered (and potentially noisy) measurements of u(x) are obtained, with the objective of finding an unknown parameter in the entire problem, e.g., a boundary condition, f(x), β, or a combination of them. In A-PINN <cit.>, the integral term is replaced with a function w(x). Thus, by taking a derivative of w(x) w.r.t. x for the specific case of α∈𝕎, the above IDE is converted to a system of ODEs with an auxiliary output w(x): { [^β𝒟_x] u(x) + ∂^2 u(x)/∂x^2 = f(x) + w(x), ∂^α w(x)/∂x^α = K(x)u(x), } with appropriate boundary conditions. Moreover, A-PINN is restricted to β∈𝕎 cases, as AD is infeasible for fractional derivatives, too. β and α in <Ref>, nonetheless, can be any positive real numbers: α, β∈ℝ_+. In UniFIDES, the well-established numerical schemes developed in <cit.> are used to approximate the fractional operators, which are based on the trapezoidal quadrature rule. The integral of u(X) with a fractional order of α in the RL sense between [0, X] on the grid x_n=nh : n=0, 1, 2, …, N, where h=X/N, is therefore discretized as follows: [^αℐ_0^X]u(X) ≈ h^α/Γ(2+α) ∑_j=0^N c_j,N(α) u_j, which commits an error of order O(h^2). The quadrature weights c_j,N are derived from a product trapezoidal rule: c_j,N(α) = (1+α)N^α - N^(1+α) + (N-1)^(1+α) if j = 0; (N-j+1)^(1+α) - 2(N-j)^(1+α) + (N-j-1)^(1+α) if 0 < j < N; 1 if j = N. To obtain the integral between 0 and x_n, N is replaced with n in <Ref>. Therefore, for each point x_n, these equations are called, which take into account the total memory of past states up to x_n. As mentioned in <Ref>, the β-th derivative of u(X) w.r.t. x in the RL sense at X=Nh can be written as follows: [^β𝒟_x] u(X) = [^-βℐ_0^X]u(X). Again, the fractional derivative at x_n is obtainable by formally replacing X with x_n in <Ref>. Therefore, fractional derivatives, unlike their integer-order counterparts, are not pointwise operators and depend on the entire functional history. Fractional derivatives in the RL (and also the Caputo) sense using the above scheme are nonetheless definite only for β<1, as <Ref> raises 0 to a negative power for -β=α≤-1. Higher-order fractional derivatives are nonetheless obtainable, as fractional derivatives, similar to integer-order ones, can be expressed using the chain rule. For instance, the derivative of order β=1.7 can be decomposed into a first-order derivative and a 0.7-th-order fractional derivative. In summary, <Ref> are valid for α > -1, which allows us to calculate fractional derivatives of orders 0≤β<1 and any fractional (or integer-order) integrals. For derivatives in each dimension, one boundary (or initial) condition is needed. For instance, <Ref> requires two boundary values. The neural networks are sub-classed from tf.keras.Model and modified to handle fractional operations. The second-order derivative is obtained through tf.GradientTape, which is a context manager that records operations for AD in TensorFlow. Finally, the fractional terms in <Ref> are directly calculated using <Ref>.
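A compact NumPy transcription of this product-trapezoidal scheme (our own sketch for illustration; the paper's implementation runs inside TensorFlow):

import numpy as np
from math import gamma

def weights(alpha, N):
    # Quadrature weights c_{j,N}(alpha) of the product trapezoidal rule.
    c = np.empty(N + 1)
    j = np.arange(1, N)
    c[0] = (1 + alpha) * N**alpha - N**(1 + alpha) + (N - 1)**(1 + alpha)
    c[1:N] = ((N - j + 1)**(1 + alpha) - 2 * (N - j)**(1 + alpha)
              + (N - j - 1)**(1 + alpha))
    c[N] = 1.0
    return c

def rl_operator(u, h, alpha):
    # RL integral of order alpha > -1 on the grid x_n = n h; for the RL
    # derivative of order 0 <= beta < 1, call rl_operator(u, h, -beta).
    out = np.zeros(len(u))
    for n in range(1, len(u)):
        out[n] = h**alpha / gamma(2 + alpha) * np.dot(weights(alpha, n), u[:n + 1])
    return out

As a quick check, alpha = 1 and N = 1 give c = (1, 1), recovering the ordinary trapezoidal rule h(u_0 + u_1)/2.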
The forward problem defined in <Ref>, therefore, is converted to a loss minimization task: ϕ = ϕ_Res + ϕ_BC, where ϕ is the compound loss function and ϕ_Res is the equation residual: ϕ_Res = 1/N ∑_n=0^N { [ ^−βℐ_0^x_n ]u_p(x_n) + ∂^2 u_p(x_n)/∂x_n^2 − ( f(x_n) + [ ^αℐ_t=0^t=x_n ] K(t)u_p(t) dt ) }^2, where u_p is the UniFIDES prediction of the output. The boundary condition loss (ϕ_BC) is calculated as the MSE between the NN prediction at the boundaries and the ground-truth information on the boundaries. For this particular example, Dirichlet BCs are imposed as follows: ϕ_BC = MSE(u_p^0, u^0) + MSE(u_p^X, u^X) = 1/N_b ∑_m=1^N_b (u_p,m^0 − u^0)^2 + 1/N_b ∑_m=1^N_b (u_p,m^X − u^X)^2. In each iteration, the compound loss is computed, and the optimizer is utilized to update the model parameters, i.e., θ in <Ref>; a minimal code sketch of this compound loss is given below. For more details on the UniFIDES architecture, training process, and hyperparameters, the readers are referred to <Ref>. § CODE AVAILABILITY All the source codes to reproduce the results in this study are available in the GitHub repository at https://github.com/procf/RhINNs. § DECLARATIONS The authors are thankful for insightful discussions with Dr. Deepak Mangal, and also acknowledge the support from the National Science Foundation’s DMREF program through Award #2118962.
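For completeness, a minimal sketch of how the compound loss above can be assembled from the quadrature routine sketched in the Methods section (all names are ours; the arrays are samples on the uniform grid, and in the actual network the second derivative would come from AD rather than being passed in):

```python
import numpy as np

def compound_loss(u_p, d2u_dx2, f, K, h, alpha, beta, u0, uX):
    # phi = phi_Res + phi_BC for the forward Volterra FIDE above.
    # The RL fractional derivative of order beta (< 1) is computed as a
    # fractional integral of order -beta, as described in the Methods.
    frac_deriv = rl_fractional_integral(u_p, h, -beta)    # [^beta D_x] u_p
    frac_int = rl_fractional_integral(K * u_p, h, alpha)  # [^alpha I_0^x] K u_p
    residual = frac_deriv + d2u_dx2 - (f + frac_int)
    phi_res = np.mean(residual**2)
    # Dirichlet boundary loss; a single sample per boundary here, where
    # the paper averages over N_b boundary samples.
    phi_bc = (u_p[0] - u0)**2 + (u_p[-1] - uX)**2
    return phi_res + phi_bc
```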
http://arxiv.org/abs/2407.02940v1
20240703091538
Optical vortex-antivortex crystallization in free space
[ "Haolin Lin", "Yixuan Liao", "Guohua Liu", "Jianbin Ren", "Zhen Li", "Zhenqiang Chen", "Boris A. Malomed", "Shenhe Fu" ]
physics.optics
[ "physics.optics" ]
^1Department of Optoelectronic Engineering, Jinan University, Guangzhou 510632, China ^2Guangdong Provincial Key Laboratory of Optical Fiber Sensing and Communications, Guangzhou 510632, China ^3Guangdong Provincial Engineering Research Center of Crystal and Laser Technology, Guangzhou 510632, China ^4Department of Physical Electronics, Faculty of Engineering, Tel Aviv University, Tel Aviv, 69978, Israel ^5Instituto de Alta Investigación, Universidad de Tarapacá, Casilla 7D, Arica, Chile Optical vortex-antivortex crystallization in free space Haolin Lin^1,2,†, Yixuan Liao^1,2,†, Guohua Liu^1,2, Jianbin Ren^1,2, Zhen Li^1,2,3*, Zhenqiang Chen^1,2,3*, Boris A. Malomed^4,5 and Shenhe Fu^1,2,3* July 8, 2024 Abstract: Stable vortex lattices are basic dynamical patterns which have been demonstrated in physical systems including superconductor physics, Bose-Einstein condensates, hydrodynamics and optics. Vortex-antivortex (VAV) ensembles can be produced, self-organizing into the respective polar lattices. However, these structures are in general highly unstable due to the strong VAV attraction. Here, we demonstrate that multiple optical VAV clusters nested in the propagating coherent field can crystallize into patterns which preserve their lattice structures over distances of up to several Rayleigh lengths. To explain this phenomenon, we present a model for effective interactions between the vortices and antivortices at different lattice sites. The observed VAV crystallization is a consequence of the globally balanced VAV couplings. As the crystallization does not require the presence of nonlinearities and appears in free space, it may find applications in high-capacity optical communications and multiparticle manipulations. Our findings suggest possibilities for constructing VAV complexes through orbit-orbit couplings, which differ from the extensively studied spin-orbit couplings. Introduction. Optical vortices in their basic form are represented by topological solutions of the paraxial Helmholtz equation. They are distinguished by the helical phase factor exp(ilϕ), combined with either the Laguerre-Gaussian or Bessel-Gaussian amplitude profiles, where ϕ is the azimuthal coordinate and integer l is the topological charge (alias winding number). The vortex beam exhibits a phase dislocation at the vortex pivot and carries a well-defined intrinsic orbital angular momentum (OAM) <cit.>, which has found various applications, in the classical and quantum regimes alike <cit.>. For example, by appropriately introducing multiple vortices with different topological charges into a single beam, one can considerably enhance the optical communication capacity and speed <cit.>. Optical vortex-antivortex (VAV) lattices, which carry OAM with positive and negative signs, i.e., opposite topological charges, are a fundamentally important concept with promising applications, including high-capacity optical communications <cit.>, parallelized superresolution <cit.>, multiparticle manipulations <cit.>, and higher-dimensional quantum information processing <cit.>. In the past, significant advancements have been made in the techniques for generating vortex arrays in the linear regime.
For instance, vortex arrays with various designs can be created by coaxially superimposing different fundamental modes with appropriate weighting coefficients, such as Laguerre-Gaussian <cit.>, Hermite-Gaussian <cit.>, Ince-Gaussian <cit.>, Bessel-Gaussian <cit.>, perfect-optical-vortex <cit.> and other vortex fields with curvilinear shapes <cit.>. Large-scale vortex lattices are produced by the far-field interference of planar waves <cit.> or of vortex fields <cit.>. Another simple way is to arrange multiple non-coaxial unit cells, including Laguerre-Gaussian or perfect-optical-vortex modes, on interstitial sites <cit.>. However, the propagation dynamics of these VAV arrays/lattices, induced by couplings between the vortex-antivortex pivots both in linear <cit.> and nonlinear media <cit.>, were not systematically studied. The presence of nonlocal couplings between phase dislocations in the VAV wave front makes them strongly unstable against initial perturbations of the lattice structure <cit.>. Specifically, lattices composed of homopolar vortices perform perturbation-induced rotation in the course of the propagation, as a direct consequence of the straight motion of vortices in the transverse plane <cit.>. Furthermore, VAV pairs in the lattices demonstrate annihilation, repulsion <cit.>, or re-creation (the intrinsic OAM Hall effect) <cit.>. The instability ensuing from diverse coupling processes between vortices and antivortices transforms initially regular lattices into quasi- or totally-disordered speckle patterns <cit.>. The instabilities of the VAV lattices severely limit their potential applications. Nonlinearity may help to stabilize them <cit.>, but it generally requires very high power densities, leading to unpredictable challenges in applications. Moreover, the nonlinearity mainly helps to balance diffraction or dispersion of the waves <cit.>, rather than inhibiting the instability-induced transverse jitter of the vortices. As a result, crystal-like VAV lattices have been created, thus far, in only a few nonlinear optical systems <cit.>, remaining a challenge for further work. Until now, stable VAV crystalline structures have not been reported in the linear propagation regime. In this work, we investigate in detail nonlocal orbit-orbit couplings between the oppositely charged vortices embedded in a propagating coherent light field, and demonstrate, both theoretically and experimentally, an intriguing wave phenomenon of VAV crystallization, supported by the balanced orbit-orbit couplings in the multi-VAV setting. We present a theoretical model analyzing the orbit-orbit couplings in such systems. The model includes a decoupling term, which accounts for the independent transverse motion of individual vortices, and a term which nonlocally couples different vortices, with strength determined by the VAV spacing and the propagation distance. The VAV crystallization stems from the balance of the competing terms, as manifested by a flat phase distribution between adjacent VAV pairs. The VAV interactions considered here originate solely from the OAM-OAM (orbit-orbit) coupling in the light field, which takes place in the linear paraxial-propagation regime and does not require any light-matter interaction. The orbit-orbit coupling for stable lattices is basically distinct from nonlinear light-matter interactions, cf. Refs. <cit.>. It is also different from the extensively studied spin-orbit coupling, which refers to the interaction between the photonic spin and OAM <cit.>.
While the spin-orbit coupling has been used in a broad range of applications <cit.>, the nonlocal orbit-orbit coupling remains almost unexplored. Therefore, our approach and results open possibilities for manipulations with optical vortices and antivortices by engineering appropriate orbit-orbit couplings between them. In particular, the method for the creation of the robust phase-locked VAV structures in free space, reported in this work, can be utilized to enhance the capacity of optical communication and data-processing systems, and to manipulate multi-plane particle clusters, cf. Ref. <cit.>. Results. The model and analytical solution. We start the presentation of the model, which produces phase-locked VAV lattices by appropriately arranging the initial geometric structures and engineering the orbit-orbit couplings. Specifically, we begin by constructing the initial configuration of the complex light field composed of the multiple interactive vortices and antivortices. Mathematically, they can be realized by shaping an ambient field G with two complex-valued polynomial functions <cit.>. The basic configuration is one with the opposite-charge vortices embedded in the Gaussian background E(x,y) of width w. In this scenario, the initial VAV structure is given by G = p(u) · q(u^∗) · E, where u = x + iy ≡ re^iϕ defines the Cartesian and polar (r, ϕ) coordinates, and ∗ stands for the complex conjugate. The complex polynomials, p(u) = ∑_n=0^N a_n u^n and q(u^∗) = ∑_m=0^M b_m u^∗m, represent a cluster composed of N vortices and M antivortices entangled with the background Gaussian field. The resultant VAV configuration is determined by the set of complex coefficients a_n and b_m, which determine the complex-valued roots c_n and d_m^∗ of the polynomials p and q, respectively. In turn, the roots define the position of each vortex pivot. Such initial configurations, built as VAV sets, usually cannot preserve their arrangement in the course of the propagation, due to the interplay between the opposite OAMs. To reveal the orbit-orbit couplings, we investigate the propagation of the VAV configuration along the z coordinate, according to the paraxial Schrödinger wave equation, i∂_z G = −(λ/4π)(∂_x^2 + ∂_y^2)G, with carrier wavelength λ. The evolution initiated by the input configuration (<ref>) can be cast in the analytical form <cit.> G(x,y,z) = E(x,y,z)[F_0(x,y,z) + F_c(x,y,z)], where E(x,y,z) = (w^2|B(z)|/2) exp(−B(z)r^2/2) represents the evolution of the Gaussian background, with B(z) = 2π/[λ(z_R + iz)] and z_R = πw^2/λ being the Rayleigh length. Further, the term F_0(x,y,z) = ∏_n=1^N[AB(z)u − c_n] ∏_m=1^M[AB(z)u^∗ − d_m^∗], with A ≡ w^2/2, indicates that, due to the presence of B(z), the vortices and antivortices move linearly in the transverse plane and stay separated in the course of the propagation. However, this term does not couple vortices and antivortices at different locations. The VAV orbit-orbit coupling is introduced by the term F_c = ∑_k=1^N (2A − 2A^2B(z))^k k! P_N,k Q_M,k (note that F_c vanishes at z = 0, where AB(0) = 1, so that the input field reduces to Eq. (<ref>)), where P_N,k(AB(z)u) and Q_M,k(AB(z)u^∗) are two z-dependent polynomial functions of the variables AB(z)u and AB(z)u^∗. Expressions for the P_N,k(ABu) and Q_M,k(ABu^∗) polynomials are displayed below in the Methods section, see Eqs. (<ref>) and (<ref>). In contrast to F_0, the mixing term F_c represents the interplay between the vortices and antivortices, which, in the framework of the linear propagation, leads to the mutual attraction, annihilation and repulsion between the vortices and antivortices, as well as the OAM Hall effect <cit.>.
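As an illustration, the input field of Eq. (<ref>) and its paraxial propagation can be evaluated numerically in a few lines (a sketch with our own naming, not the authors' code; we take E = exp(−r²/w²) for the Gaussian background, and the propagation step follows the angular-spectrum form of the propagator given in the Methods section):

```python
import numpy as np

def initial_vav_field(x, y, w, vortex_pivots, antivortex_pivots):
    # G(x, y, 0) = p(u) q(u*) E with u = x + iy; each pivot position in
    # vortex_pivots hosts a charge +1 vortex, each in antivortex_pivots a
    # charge -1 antivortex (conj(u) - conj(d) = conj(u - d)).
    X, Y = np.meshgrid(x, y)
    u = X + 1j * Y
    G = np.exp(-(X**2 + Y**2) / w**2).astype(complex)
    for c in vortex_pivots:
        G = G * (u - c)
    for d in antivortex_pivots:
        G = G * np.conj(u - d)
    return G

def propagate(G0, dx, wavelength, z):
    # One-step angular-spectrum solution of i dG/dz = -(lambda/4pi) lap G:
    # multiply the spectrum by exp(-i (kx^2 + ky^2) z / (2 k0)).
    # A square grid of spacing dx is assumed.
    k0 = 2 * np.pi / wavelength
    k = 2 * np.pi * np.fft.fftfreq(G0.shape[0], d=dx)
    KX, KY = np.meshgrid(k, k)
    H = np.exp(-1j * (KX**2 + KY**2) * z / (2 * k0))
    return np.fft.ifft2(np.fft.fft2(G0) * H)
```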
Note that the orbit-orbit coupling, emerging in the freely propagating paraxial light field, does not need the presence of any optical material, which makes this effect completely different from other photonic interactions, such as the above-mentioned spin-orbit coupling. The orbit-orbit coupling is effectively nonlocal, not constrained to the nearest-neighboring VAV pairs. It involves widely separated pairs too, with the long-range coupling strength gradually decaying as the separation increases <cit.>. Note that a recent work has introduced a parallel framework for separately describing the tilt, velocity and trajectory of each individual vortex, which can be applied to the intricate coupling of two oppositely charged vortices <cit.> and is compatible with our theory. As F_c strongly depends on the propagation distance z, the orbit-orbit coupling is propagation-varying. As a consequence, the propagation dynamics of the VAV configurations are sensitive to the initial configurations. In the following, we demonstrate the construction of equilibrium VAV configurations by engineering the nonlocal orbit-orbit couplings. The spatial dependences of the mixing term F_c allow us to appropriately arrange the configurations, and thus to design appropriate orbit-orbit couplings for producing a robust crystalline VAV lattice. As demonstrated in the model of the vortex dipole, the coupling equilibrium of pivots in the course of the propagation can be achieved by adjusting the separation between them <cit.>. To this end, the lattice structure can be designed as an array composed of VAV pairs. We aim to arrange multiple vortex dipoles with co-orthogonal inclinations (horizontal and vertical) on a 2D grid with spacing L, the starting point being the central position, (x,y)=(0,0); a code sketch of this arrangement is given below. The left panel in Fig. 1a illustrates the so constructed VAV crystalline patterns, which resemble the recently proposed 2D ionic square-shaped lattices, such as EuS <cit.>, with the alternating vortices and antivortices playing the roles of positive and negative ions. Coordinates of the lattice sites, i.e., positions of the vortex and antivortex pivots, are given by the roots (c_n, d_m^∗) of the polynomials p and q. The right panel in Fig. 1a illustrates the robust propagation of the resultant 2D lattice configuration along the z coordinate. We confirm this conclusion by showing in Fig. 1c the evolution of the phase structure of the complete field G (see Eq. (<ref>)) in a fragment of size 3× 3 of the VAV lattice introduced in Fig. 1a. The fragment includes five vortices and four antivortices. The vortex polarity of each lattice element can be identified by the phase-gradient field, displayed by patterns of arrows in Fig. 1c. The phase varies rapidly close to the pivots, corresponding to phase singularities with the polarity of each one identified by the rotation of the gradient arrows. On the other hand, the phase becomes flat at boundaries between adjacent VAV pairs. The presence of the flat phase distributions is a manifestation of the VAV crystallization, as confirmed by the propagation-invariant phase-locked lattice structure. Thus, the VAV crystallization is maintained by the local VAV alternation in the lattice. By contrast, other initial lattice structures lead to imbalanced orbit-orbit couplings, resulting in strongly unstable propagation dynamics. An example of an unstable lattice is provided by the square lattice, built as a central antivortex surrounded by alternating vortex and antivortex layers, as shown in Fig. 1b.
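For concreteness, the alternating arrangement of Fig. 1a can be generated by a checkerboard assignment like the following (our naming; it reproduces the five vortices and four antivortices of the 3×3 fragment discussed above):

```python
def ionic_lattice_pivots(n_side, L):
    # Alternating vortex/antivortex pivots on an n_side x n_side grid of
    # spacing L, centred at (x, y) = (0, 0), as in Fig. 1a.
    vortices, antivortices = [], []
    off = (n_side - 1) / 2
    for i in range(n_side):
        for j in range(n_side):
            site = complex((i - off) * L, (j - off) * L)
            (vortices if (i + j) % 2 == 0 else antivortices).append(site)
    return vortices, antivortices

# e.g. ionic_lattice_pivots(3, 1.28 * w) gives 5 vortex and 4 antivortex
# pivots, which feed directly into initial_vav_field above.
```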
Although the lattice spacing is maintained in the course of the propagation of this input, we observe in Fig. 1d that the propagating phase pattern is strongly disturbed, featuring, in particular, annihilation and creation of VAV pairs. Experimental demonstration. Following the theoretical analysis, we have demonstrated the nonlocal orbit-orbit couplings between the vortices and antivortices, and their crystallization, in the experiment. In this context, a key point is to produce the interactive vortices and antivortices nested in the Gaussian envelope, using a computer-generated phase-only hologram. Experimental observation of the orbit-orbit couplings apparently was not clearly demonstrated in previous works, which used conventional techniques for generating multiple vortices and antivortices <cit.>. This objective is a challenging one, as it requires encoding both the amplitude and phase information which represents the coupling term F_c in Eq. (<ref>) for the VAV lattice in the Fourier space. At the initial position z=0, the Fourier transform of the whole field is written as G̃(k_x,k_y)=Ẽ_0(k_x,k_y)·[ Ũ_0(k_x,k_y)+Ũ_c(k_x,k_y)], where Ẽ_0(k_x,k_y) is the Fourier transform of E(x,y) at z=0, with k_x and k_y being the corresponding spatial frequencies in the (x,y) plane. Equation (4) includes two important terms, exhibiting a mathematical form similar to Eq. (2). However, these terms Ũ_0 and Ũ_c are not the Fourier transforms of F_0 and F_c. They can be derived analytically as Ũ_0=∏_n=1^N[ iA(k_x+ik_y)-c_n] ∏_m=1^M[iA(k_x-ik_y)-d_m^∗] and Ũ_c=∑_k=1^N( -2A) ^kk! P̃_N,kQ̃_M,k, respectively. More details about Ũ_0 and Ũ_c are presented below in the Methods section. Similarly, Ũ_0 denotes the spatially decoupled vortices and antivortices in the Fourier space, while Ũ_c couples them nonlocally, leading to the interactive elements. Thus, the correct Fourier transform of the interactive VAV lattice should comprise the non-coupling and coupling terms simultaneously. If the coupling term Ũ_c is not included in the phase mask, the produced vortices and antivortices would propagate independently, without orbit-orbit coupling among them <cit.>. This coupling term is thus essential, allowing us to perform experiments for observing phenomena caused by the orbit-orbit couplings. Based on the theory, we experimentally realized the above predictions by using the setup presented in Fig. 2a. A linearly polarized He-Ne laser beam with wavelength λ =632.8 nm is appropriately expanded and collimated by using a beam expander (BE). The first beam splitter (BS) divides the laser beam into two: a reference beam, and a second beam which is patterned by the phase spatial light modulator (Holoeye LETO II SLM, 1920× 1080). The phase hologram (supplementary Sec. B) creating the interactive vortices and antivortices is realized by using the coding technique proposed by Bolduc et al. <cit.>, as specified in the Methods section. Other efficient encoding techniques, such as binary computer-generated holograms <cit.>, may also be utilized to produce the desired lattice patterns. In the experiment, we used a sufficiently broad Gaussian beam to embed multiple vortices and antivortices. A typical phase hologram that encodes a 3× 3 square lattice comprising five vortices and four antivortices, shown in Fig. 2b, was uploaded to the SLM. The spatially modulated light beam, reflected from the mirror, passes through a focusing lens (with the focal length 500 mm) which performs the Fourier transform.
The first-order diffractive beam of the hologram is selected by using an iris diaphragm, other diffractive beams being blocked. The generated VAV lattices and their interference patterns with the other, reference beam are then imaged by a charge-coupled device (CCD) mounted on an electrically controlled stage movable along the z axis. Figure 2c presents the preliminary experimental result, which amounts to the 3× 3 square lattice composed of nine elements, with the initial width w=250 μm of the Gaussian holding beam and lattice spacing L=1.28w. Our experimental measurements show that the constructed VAV lattice can maintain its geometrical shape unchanged during evolution over a distance of approximately three Rayleigh lengths (Figure 2c displays the intensity patterns of the square lattice at four typical propagation distances). Polarities of individual vortices are identified through the measured phase distribution of the generated VAV lattice in different propagation planes (Fig. 2d). The experimental method for the phase reconstruction is introduced in the Methods section. More details for reconstructing the experimental phase of the 3×3 VAV lattice are given in supplementary section A. The measured intensity and phase distributions confirm the generation and propagation of the expected VAV lattice configuration in Fig. 1. Although the positions of individual vortices slightly vary, the overall configuration keeps its shape in the course of the propagation, suggesting that the VAV lattice realizes a stationary pattern. We compare the experimental results with the theoretical predictions (Figs. 2e, f). Excellent agreement is observed, indicating the effect of the balanced orbit-orbit couplings which connect the lattice elements. We further find that the balance of the couplings strongly depends on the lattice period, L. Indeed, varying L may lead to imbalance between the orbit-orbit couplings, due to their nonlocality. For instance, for settings of L=0.8w or L=2w, the imbalanced couplings lead to annihilation of vortices and antivortices, or separation between them, as shown in the supplementary section C. The simulations reveal the crystallization and robust propagation of the resulting VAV lattice in the interval of 1.1w<L<1.3w. In contrast with that, a lattice structure initially composed of nine vortices with identical polarities (in this case F_c=0 in Eq. (<ref>), indicating the absence of orbit-orbit couplings) starts to rotate and quickly disintegrates at an early stage of the propagation; see the corresponding result in the supplementary section D. Next, we present examples of bigger VAV lattices which also demonstrate robust crystallization. One example is based on a 5× 5 square lattice, in which the elements at the corners are removed. The resulting robust lattice is composed of 9 vortices and 12 antivortices, with lattice spacing L=1.24w, see Figs. 3a-d. The phase-only hologram used for the creation of this lattice is presented in the supplementary section B. The experimentally-measured phase distributions of the produced lattices confirm the initial shape of the lattice, while Figs. 3a-d illustrate the self-maintained crystallized shape at different propagation distances, and the robust propagating phases are exhibited in Figs. 3e-h. However, the crystalline pattern is not completely stable, showing a weak trend toward fusion, starting from the edges of the lattice.
Indeed, elements near the edges are slowly escaping, initiating an eventual transition from the crystalline state towards a turbulent one. This phenomenon is similar to the melting transition in solid-state crystals <cit.>. An example of the crystallization of a still bigger VAV lattice, of size 7× 7, is displayed in Figs. 4a-d, and the corresponding phases are shown in Figs. 4e-h. Its VAV pattern is constructed on the grid from which three pivots are removed at each corner. The resulting lattice includes 21 vortices and 16 antivortices, with spacing L=1.24w. This lattice structure is shown by the respective phase distribution in the supplementary section B. Additional measurements shown in Supplementary Sec. E directly demonstrate the crystallization process which transforms a disordered VAV lattice into a regular one. At a late stage of the evolution, the melting of the VAV crystal starts from its edges, where individual elements tend to escape due to the gradual breakup of the balance of the orbit-orbit couplings, while the integrity of the core of the lattice is still maintained by the balanced competition between the couplings. Actually, the slow melting is caused by the gradual diffraction-driven expansion of the Gaussian background, as manifested by the z-dependent coefficient B(z) in Eq. (<ref>). These observations corroborate the theoretical prediction, as shown in Figs. 3i-p and Figs. 4i-p for the 5× 5 and 7× 7 lattices, respectively. The creation of still larger stable lattices is more challenging, due to the limited width of the Gaussian background. To visualize the crystallization and stable evolution of the VAV lattices considered above, experimentally recorded trajectories of all pivots in the lattices are presented in Figs. 5a-c. Nearly straight-line trajectories are clearly observed for the three robust VAV lattices. Accurate measurements show that individual propagation trajectories remain straight over a propagation distance of up to 2.6z_R. Figures 5d-f display projections of the 3D trajectories onto the transverse plane. These panels clearly indicate that vortex and antivortex pivots at the edge make the overall lattice disordered in the beginning; after that, the outer pivots gradually move onto the designated 2D grid, and then crystallize into a regular lattice structure which persists for a long propagation distance. Moreover, it is also confirmed that in the course of the disintegration, the pivots located closer to the core of the lattice perform much slower motion than those residing at the edges of the lattice. We stress that the intensity and phase distributions alone are not sufficient to quantify the crystallization and stable propagation. We have therefore performed a detailed quantitative analysis of the stable propagation of the lattices by using the Pearson correlation coefficient (PCC) <cit.>. This makes it possible to quantify the VAV crystallization and identify the balanced nonlocal orbit-orbit couplings. PCC is defined as a correlation between the intensity patterns I_0(x,y) and I_z(x,y) (here I ≡ |G|^2), recorded at z=0 and at the current propagation distance: PCC(z) = ∫∫ (I_0−I̅_0)(I_z−I̅_z) dx dy / [ √(∫∫ (I_0−I̅_0)^2 dx dy) × √(∫∫ (I_z−I̅_z)^2 dx dy) ], where I̅ is the average value of I(x,y). The PCC coefficient takes values in the range between 0 and 1, larger ones indicating higher correlation between the two patterns. We adopt PCC=0.8 as the critical value, so that PCC falling below it implies disintegration of the VAV lattice.
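In discrete form, this coefficient can be computed directly from two recorded intensity frames (a sketch; the integrals become pixel sums):

```python
import numpy as np

def pcc(I0, Iz):
    # Pearson correlation coefficient between the intensity pattern at
    # z = 0 and at the current propagation distance, per the definition above.
    a = I0 - I0.mean()
    b = Iz - Iz.mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a**2) * np.sum(b**2)))
```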
Accordingly, we measure the PCCs of the robust lattices as a function of the propagation distance, as shown in Fig. 5g. It is seen that in all these cases the PCC keeps its value near 0.9, in agreement with the observation of the propagation-invariant lattice patterns. In particular, we note that the PCC initially gradually increases, reaching its maximum when the propagation distance changes from z=-1.1z_R to z=0. The slow increase of the PCCs indicates the formation of the VAV lattice. Afterwards, the PCC slowly decreases, which implies that the crystal starts to melt. Thus, the observation of the nearly constant PCC suggests that the balanced orbit-orbit couplings maintain the robust lattice structures in free space. Finally, we demonstrate a counter-example of an unstable VAV lattice, to illustrate the imbalanced orbit-orbit coupling. This is a lattice composed of 7×7 pivots, shown in Fig. 1b, which does not feature the uniform alternation of vortices and antivortices in the horizontal and vertical directions, and has the same spacing as in Fig. 4a. Figure 6a shows the experimentally recorded intensity distributions of the lattice at z=0, showing a regular VAV pattern which is essentially the same as in Fig. 4a. In drastic contrast with the stable lattice, the present, wrongly built one demonstrates no robustness in the course of the propagation. Under the action of imbalanced orbit-orbit couplings between the vortices and antivortices, the lattice undergoes dramatic structural changes in the course of the propagation. Figures 6b, c make it obvious that the pattern quickly transforms into an irregular one, which may be considered a turbulent optical state. Similar outcomes for the same propagation distances are produced by the theoretical solution in Figs. 6d-f. In Fig. 6g, the PCC value for the wrong structure demonstrates fast decay in the course of the propagation. It shows that the lattice structure survives only up to a small distance, z=0.33z_R, at which the PCC falls to the threshold value of 0.8 defined above. Thus, the alternating VAV structure guarantees the balance of the orbit-orbit couplings for each vortex pivot, leading to long distances of robust propagation, in contrast with the wrong lattices. Discussion We have demonstrated that multi-VAV (vortex-antivortex) sets, embedded in the Gaussian host field, can crystallize into robust square-shaped lattices, which resemble ionic lattices in solid-state physics. We have shown that the so constructed lattices preserve their structure in the free-space propagation over a distance essentially exceeding the Rayleigh (diffraction) length. We have presented the analytical model describing the vortex-antivortex crystallization, which results from the globally balanced orbit-orbit coupling acting upon each vortex or antivortex pivot. Eventually, due to the diffraction of the host field, such VAV crystals suffer gradual melting through the escape of individual elements from the edges of the lattice, while the core survives much longer propagation. Unlike the square-shaped VAV lattices, differently built ones suffer quick degradation, due to the action of imbalanced orbit-orbit couplings. It is plausible that the square-shaped lattices may be further stabilized against melting by the inclusion of moderately strong self-focusing nonlinearity. Such a nearly-stable lattice configuration, used as an input, can significantly reduce the light intensity required for the formation of fully stable nonlinear optical crystals <cit.>.
Due to the universal nature of the paraxial wave propagation, robust configurations based on the balanced orbit-orbit couplings and the resulting VAV crystallization may be expected in other physical systems, such as matter waves <cit.>, electron beams <cit.>, acoustics <cit.> and hydrodynamics <cit.>. It is relevant to stress once again that, while manipulating light fields by means of spin-orbit couplings has drawn much interest <cit.>, the orbit-orbit couplings acting between vortices and antivortices, reported here, remained unnoticed. Thus, our results offer a useful coupling scheme for manipulations with vortices and antivortices. In particular, we have presented a reliable optical emulation of 2D ionic-like crystals (note that there are very few real solid-state settings which admit the existence of 2D square-shaped ionic lattices <cit.>). The creation of more sophisticated stable vortex-antivortex lattices can be expected by means of appropriate orbit-orbit couplings, in addition to the fluidity demonstrated in Refs. <cit.>. In this vein, the concept of effective phase diagrams can be put forward for describing phase transitions in structured light <cit.>. The phase diagrams of the VAV structures can therefore emulate different condensed-matter phases, in particular for identifying the general crystallization process (disorder-to-order transitions). Furthermore, kinetics mediated by lattice defects (for instance, in graphene <cit.>) can plausibly also be emulated in optical VAV lattices. In terms of potential applications, the stable VAV lattices are very promising for optical communications and all-optical data processing, as the lattices make it directly possible to enlarge the channel capacity <cit.>. Methods Expressions of the propagating polynomial functions. In this section, we present expressions of the propagating polynomial functions in both the real and Fourier spaces. We start by considering the propagation of the initial vortex-antivortex lattice represented by Eq. (<ref>). A general solution to the paraxial Schrödinger equation can be expressed as follows: G( x,y,z) =ℐℱ𝒯{G̃( k_x,k_y ) exp[ -i/2k_0( k_x^2+k_y^2 ) z] }, where k_0=2π /λ is the free-space wavenumber, ℐℱ𝒯 {·} denotes the inverse Fourier transform operator, and G̃( k_x,k_y) =ℱ𝒯{ G(x,y,z=0)} is the Fourier transform of the input at z=0. Based on the known property of the Fourier transform ℱ𝒯, ℱ𝒯{( x± iy) ^nG( x,y) } =[ i( ∂/∂k_x± i ∂/∂k_y) ] ^nG̃( k_x ,k_y), the Fourier transform of the initial configuration can be written as G̃( k_x,k_y) =∑_n=0^Na_n ( iD) ^N-n×∑_m=0^Mb_m( i D^∗) ^M-m×Ẽ_0(k_x,k_y). Here the complex differential operator is D=∂ /∂ k_x+i∂ /∂k_y, and Ẽ_0=( w^2/2 ) exp[ -w^2( k_x^2+k_y^2) /4] is the Fourier transform of the Gaussian background at the initial position. It implies that G̃(k_x,k_y) is a superposition of many different components of the light field in the Fourier space, written as G̃(k_x,k_y)=∑_n=0^NG̃_n(k_x,k_y). The summation of these Fourier series leads to an explicit form, G̃( k_x,k_y) =Ẽ_0( k_x, k_y) ×[ Ũ_0( k_x,k_y ) +Ũ_c( k_x,k_y) ], where Ũ_0=∏_n=1^N[ iA(k_x+ik_y)-c_n] ∏_m=1^M[iA(k_x-ik_y)-d_m^∗] and Ũ_c=∑_k=1^N( -2A) ^kk!P̃_N,kQ̃_M,k, respectively, with P̃_N,k and Q̃_M,k being P̃_N,k = ∑_l=0^N-ka_lC_k^N-l[ iA( k_x+ik_y) ] ^N-k-l and Q̃_M,k = ∑_l=0^M-kb_lC_k^M-l[ iA( k_x-ik_y) ] ^M-k-l. Accordingly, the propagating light field is represented by the inverse Fourier transform of G̃(k_x,k_y), to which the propagation operator is applied.
It yields G( x,y,z) =E( x,y,z) ×[ F_0( x,y,z) +F_c( x,y,z) ], where E(x,y,z)=(w^2|B|/2)exp( -Br^2/2) accounts for the Gaussian envelope evolution, with B=2π /[λ (z_R+iz)] and z_R=π w^2/λ being the Rayleigh length, as defined above. Here F_0=∏_n=1^N( ABu-c_n) ∏_m=1^M( ABu^∗-d_m^∗), with A=w^2/2, denotes the decoupling term, and F_c represents the orbit-orbit couplings. It is written as F_c=∑_k=1^N( 2A-2A^2B) ^kk!P_N,k Q_M,k, where P_N,k(ABu) and Q_M,k(ABu^∗) are two propagation-dependent polynomial functions of the arguments (ABu) and (ABu^∗), expressed as P_N,k = ∑_l=0^N-ka_lC_k^N-l( ABu ) ^N-k-l and Q_M,k = ∑_l=0^M-kb_lC_k^M-l( AB u^∗) ^M-k-l. It is interesting to find that the Fourier transform of the input field, G̃(k_x, k_y), exhibits a form similar to the solution represented in the real space. The essential terms Ũ_0 (F_0) and Ũ_c (F_c) represent the decoupling and the mutual coupling between the vortices and antivortices in the Fourier (real) space, respectively. However, we should note that Ũ_0 and Ũ_c are not the direct Fourier transforms of F_0 and F_c. The generation of the phase-only hologram. The computer-generated hologram encoding both the phase and amplitude information of the VAV lattice can be generated by means of the phase-only modulation technique. This requires deriving an analytical solution for the optical lattice in the Fourier domain. As mentioned above, at the initial position, the Fourier spectrum of the entire field is given by G̃(k_x,k_y)=Ẽ_0(k_x,k_y)[ Ũ_0(k_x,k_y)+Ũ_c(k_x,k_y)], which can be rewritten as G̃(k_x,k_y)=G̃_0(k_x,k_y)exp[ iΦ̃ (k_x,k_y)], where G̃_0 and Φ̃ represent the amplitude and phase of G̃. Note that Ẽ_0 is a real function and the phase Φ̃ originates from the decoupling term Ũ_0 and the coupling one, Ũ_c. The overall phase and amplitude of G̃(k_x,k_y) are encoded into the phase-only hologram <cit.>, as specified in the following formula: H(k_x,k_y)=M(k_x,k_y)×Mod[ φ (k_x,k_y)+ 2π k_x/Λ, 2π], where M(k_x,k_y)=1+sinc^-1[ G̃_0(k_x,k_y)] /π, and φ (k_x,k_y)=Φ̃(k_x,k_y)-π M(k_x,k_y). Here Mod(·) denotes the modulo operation, and sinc(x)=sin(x)/x. Note that the hologram includes a blazed grating, which is utilized to diffract the target light field into the first-order component of the hologram. In the experiment, the periodicity of the grating in the x direction is Λ =64 μm. One can use other phase-only modulation techniques to generate the optical holograms for the creation of the VAV lattices <cit.>. Experimental details for the observation of the VAV crystallization. First, regarding the generation of the phase-only optical masks, we emphasize that the coupling term F_c in Eq. (<ref>) in the main text should be considered in the framework of the phase-only modulation technique. While a VAV lattice can be generated by implementing a phase-only hologram without encoding the term Ũ_c, the generated vortices and antivortices would not interact via the nonlocal orbit-orbit couplings. Second, since the orbit-orbit coupling is very sensitive to the initial lattice configuration, we have built a recursive algorithm to evaluate the inverse function of sinc(·) for the implementation of the phase-only modulation technique. This is important for generating a high-quality phase-only hologram. Otherwise, the obtained phase-only mask is less accurate for observing the VAV crystallization. Considering the sinc function with values ranging between 0 and 1, we normalize the amplitude expression G̃_0 to match the function sinc^-1(·).
Finally, as the VAV lattice is nested in the Gaussian envelope, in the experiment we had to choose an appropriate beam waist to improve the quality of the interactive VAV lattice. Moreover, the SLM device requires an input plane wave, while the laser is working in its fundamental Gaussian mode. Therefore, the Gaussian envelope was expanded properly to cover the whole SLM screen. The procedures for the phase reconstruction. This method can recover the phase of the experimentally-generated field through a single shot of the interference between the object and plane waves <cit.>. The measured intensity pattern produced by the superposition of the reference wave R(x,y)=R_0 exp(ik_c x) and the object G(x,y)= G_0(x,y)exp[iψ_G(x,y)] (G_0 and ψ_G represent the amplitude and phase, respectively) can be expressed as I( x,y) =| R_0 |^2 + | G_0 |^2 + R_0[Gexp(-ik_cx)+G^*exp(ik_cx)], where k_c denotes the carrier frequency. We note that the third term, representing the interference fringes, is determined by the object and the carrier-wave phases. To extract the phase distribution ψ_G(x,y), the Fourier transform of the interference pattern is performed. Considering the fact that a phase variation in the real space causes a frequency displacement in the Fourier domain, we obtain ℱ𝒯[ I( x,y)] = Ĩ_1( k_x,k_y) + R_0[ G̃( k_x - k_c,k_y) + G̃^*( -k_x - k_c,-k_y)], where Ĩ_1( k_x,k_y) = ℱ𝒯( | R_0|^2 + | G_0 |^2). It is evident that the term with displacement of k_c in the frequency domain corresponds to the object's Fourier transform, while its counterpart with the identical shift to the other side is a conjugate one. A square filter centered at (k_c,0) is applied to identify the Fourier distribution of the object field. Then, the inverse Fourier transform is performed, recovering both the amplitude and phase. As a result, the imaginary part of the logarithm of the recovered field yields the measured phase. Note that the so-produced filtered field is shifted to the origin point in the frequency domain, in order to recover the vortex located at the center. A relevant example is presented in the supplementary section A. Negligible experimental errors result from the diffracted phases of the reference wave and the Fourier lens, slightly distorting the object phase. Data availability All data that support the plots within this paper and other findings of this study are available from the corresponding authors (S. F., Z. L. and Z. C.). Code availability The custom code used in this study is available from the corresponding authors (S. F., Z. L. and Z. C.). References Allen1992 L. Allen, M. W. Beijersbergen, R. J. C. Spreeuw, and J. P. Woerdman, “Optical angular momentum of light and the transformation of Laguerre-Gaussian laser modes,” Phys. Rev. A 45, 8185 (1992). Forbes2021 A. Forbes, M. de Oliveira, and M. R. Dennis, “Structured light,” Nat. Photonics 15, 253-262 (2021). Jia2018 X. Wang, Z. Nie, Y. Liang, J. Wang, T. Li, and B. Jia, “Recent advances on optical vortex generation,” Nanophotonics 7, 1533-1556 (2018). Shen2019 Y. Shen, X. Wang, Z. Xie, C. Min, X. Fu, Q. Liu, M. Gong, and X. Yuan, “Optical vortices 30 years: OAM manipulation from topological charge to multiple singularities,” Light Sci. Appl. 8, 90 (2019). Zhan2022 J. Chen, C. Wan, and Q. Zhan, “Engineering photonic angular momentum with structured light: a review,” Adv. Photonics 3, 064001 (2021). Gu2016 H. Ren, X. Li, Q. Zhang, and M.
Gu, “On-chip noninterference angular momentum multiplexing of broadband light,” Science 352, 805-809 (2016). Qiu2021 J. Ni, C. Huang, L. Zhou, M. Gu, Q. Song, Y. Kivshar, and C. Qiu, “Multidimensional phase singularities in nanophotonics,” Science 374, eabj0039 (2021). Li2021 X. Ouyang, Y. Xu, M. Xian, Z. Feng, L. Zhu, Y. Cao, S. Lan, B. Guan, C. Qiu, M. Gu, and X. Li, “Synthetic helical dichroism for six-dimensional optical orbital angular momentum multiplexing,” Nat. Photonics 15, 901-907 (2021). Diaspro2018 G. Vicidomini, P. Bianchini, and A. Diaspro, “STED super-resolved microscopy,” Nat. Methods 15, 173-182 (2018). Padget2011 M. Padgett, and R. Bowman, “Tweezers with a twist,” Nat. Photonics 5, 343-348 (2011). Denz2013 M. Woerdemann, C. Alpmann, M. Esseling, and C. Denz, “Advanced optical trapping by complex beam shaping,” Laser Photonics Rev. 7, 839-854 (2013). Yang2021 Y. Yang, Y. Ren, M. Chen, Y. Arita, C. Rosales-Guzmán, “Optical trapping with structured light: a review,” Adv. Photonics 3, 034001 (2021). Wang2023 M. Wang, L. Chen, D. Choi, S. Huang, Q. Wang, C. Tu, H. Cheng, J. Tian, Y. Li, S. Chen, and H. Wang, “Characterization of orbital angular momentum quantum states empowered by metasurfaces,” Nano Lett. 23, 3921-3928 (2023). Kong2019 L. Kong, R. Liu, W. Qi, Z. Wang, S. Huang, Q. Wang, C. Tu, Y. Li, and H. Wang, “Manipulation of eight-dimensional Bell-like states,” Sci. Adv. 5, eaat9206 (2019). Tsai2020 L. Li, Z. Liu, X. Ren, S. Wang, V. Su, M. Chen, C. H. Chu, H. Y. Kuo, B. Liu, W. Zang, G. Guo, L. Zhang, Z. Wang, S. Zhu, and D. P. Tsai, “Metalens-array-based high-dimensional and multiphoton quantum source,” Science 368, 1487-1490 (2020). Lin2011 Y. Lin, T. Lu, K. Huang, and Y. Chen, “Generation of optical vortex array with transformation of standing-wave Laguerre-Gaussian mode," Opt. Express 19, 10293-10303 (2011). Huang2016 S. Huang, Z. Miao, C. He, F. Pang, Y. Li, and T. Wang, “Composite vortex beams by coaxial superposition of Laguerre–Gaussian beams," Opt. Laser Eng. 78, 132-139 (2016). Fan2021 H. Fan, H. Zhang, C. Cai, M. Tang, H. Li, J. Tang, and X. Li, “Flower-Shaped Optical Vortex Array," Ann. Phys. 533, 2000575 (2021). Lusk2023 M. Lusk, A. Voitiv, and M. Siemens, “Quantized optical vortex-array eigenstates in a rotating frame," Phys. Rev. A 108, 023509 (2023). Ma2017 H. Ma, X. Li, Y. Tai, H. Li, J. Wang, M. Tang, J. Tang, Y. Wang, and Z. Nie, “Generation of Circular Optical Vortex Array," Ann. Phys. 529, 1700285 (2017). Wang2021 H. Wang, S. Fu, and C. Gao, “Tailoring a complex perfect optical vortex array with multiple selective degrees of freedom," Opt. Express 29, 10811-10824 (2021). Li2018 L. Li, C. Chang, X. Yuan, C. Yuan, S. Feng, S. Nie, and J. Ding, “Generation of optical vortex array along arbitrary curvilinear arrangement," Opt. Express 26, 9798-9812 (2018). Vyas2007 S. Vyas, and P. Senthilkumaran, “Interferometric optical vortex array generator," Appl. Opt. 46, 2893-2898 (2007). Li2014 Z. Li, and C. Cheng, “Generation of second-order vortex arrays with six-pinhole interferometers under plane wave illumination," Appl. Opt. 53, 1629-1635 (2014). Kapoor2016 A. Kapoor, M. Kumar, P. Senthilkumaran, and J. Joseph, “Optical vortex array in spatially varying lattice," Opt. Commun. 365, 99-102 (2016). Zhao2020 Qi Zhao, M. Dong, Y. Bai, and Y. Yang, “Measuring high orbital angular momentum of vortex beams with an improved multipoint interferometer," Photon. Res. 8, 745-749 (2020). Ladavac2004 K Ladavac, and D. G. 
Grier, “Microoptomechanical pumps assembled and driven by holographic optical vortex arrays," Opt. Express 12, 1144-1149 (2004). Wei2009 G. Wei, L. Lu, and C. Guo, “Generation of optical vortex array based on the fractional Talbot effect," Opt. Commun. 282, 2665-2669 (2009). Yu2015 J. Yu, C. Zhou, Y. Lu, J. Wu, L. Zhu, and W. Jia, “Square lattices of quasi-perfect optical vortices generated by two-dimensional encoding continuous-phase gratings," Opt. Lett. 40, 2513-2516 (2015). Deng2016 D. Deng, Y. Li, Y. Han, X. Su, J. Ye, J. Gao, Q. Sun, and S. Qu, “Perfect vortex in three-dimensional multifocal array," Opt. Express 24, 28270-28278 (2016). Wang2020 Y. K. Wang, H. X. Ma, L. H. Zhu, Y. P. Tai, and X. Z. Li, “Orientation-selective elliptic optical vortex array," Appl. Phys. Lett. 116, 011101 (2020). Wang2022 G. Wang, X. Kang, X. Sun, Z. Li, Y. Li, K. Chen, N. Zhang, X. Gao, and S. Zhuang, “Generation of perfect optical vortex arrays by an optical pen," Opt. Express 30, 31959-31970 (2022). Piccardo2022 M. Piccardo, M. de Oliveira, A. Toma, V. Aglieri, A. Forbes, and A. Ambrosio, “Vortex laser arrays with topological charge control and self-healing of defects," Nat. Photonics 16, 359-365 (2022) Roux2004 F. S. Roux, “Coupling of noncanonical optical vortices," J. Opt. Soc. Am. B 21, 664-670 (2004). Voitiv2020 A. A. Voitiv, J. M. Andersen, M. E. Siemens, and M. T. Lusk, “Optical vortex braiding with Bessel beams," Opt. Lett. 45, 1321-1324 (2020). Andersen2021 J. M. Andersen, A. A. Voitiv, M. E. Siemens, and M. T. Lusk, “Hydrodynamics of noncircular vortices in beams of light and other two-dimensional fluids,” Phys. Rev. A 104, 033520 (2021). Ferrando2023 A. Ferrando, A. Popiołek-Masajada, J. Masajada, R. Markevich, and A Khoroshun, “Vortex-antivortex pair control in quadrupole Gaussian beams," Opt. Express 31, 23444-23458 (2023). Dreischuh2002 A. Dreischuh, S. Chervenkov, D. Neshev, G. G. Paulus, and H. Walther, “Generation of lattice structures of optical vortices," J. Opt. Soc. Am. B 19, 550-556 (2002). Lin2022 H. Lin, S. Fu, H. Yin, Z. Li, and Z. Chen, “Intrinsic Vortex-Antivortex Interaction of Light,” Laser Photonics Rev. 16, 2100648 (2022). Indebetouw1993 G. Indebetouw, “Optical vortices and their propagation,” J. Mod. Opt. 40, 73-87 (1993). Lin2019 H. Lin, S. Fu, Z. Deng, H. Zhou, H. Yin, Z. Li, and Z. Chen, “Generation and propagation of optical superoscillatory vortex arrays,” Ann. Phys. 531, 1900240 (2019). Andersen2022 J. M. Andersen, A. A. Voitiv, P. C. Ford and M. E. Siemens, “Amplitude structure of optical vortices determines annihilation dynamics,” J. Opt. Soc. Am. A 40, 223-238 (2022). Ferrando2014 A. Ferrando, “Discrete-Gauss states and the generation of focusing dark beams,” Phys. Rev. A 90, 023844 (2014). Ferrando2016 A. Ferrando and M. A. García-March, “Analytical solution for multi-singular vortex Gaussian beams: the mathematical theory of scattering modes,” J. Opt. 18, 064006 (2016). Alperin2019 S. N. Alperin, A. L. Grotelueschen, and M. E. Siemens, “Quantum Turbulent Structure in Light,” Phys. Rev. Lett. 122, 044301 (2019). Ortega2019 A. B. Ortega, S. Bucio-Pacheco, S. Lopez-Huidobro, L. Perez-Garcia, F. J. Poveda-Cuevas, J. A. Seman, A. V. Arzola, and K. Volke-Sepúlveda, “Creation of optical speckle by randomizing a vortex-lattice," Opt. Express 27, 4105-4115 (2019). Segev2008 F. Lederer, G. I. Stegeman, D. N. Christodoulides, G. Assanto, M. Segev, and Y. Silberberg, “Discrete solitions in optics,” Phys. Rep. 463, 1-126 (2008). Malomed2021 E. Kengne, W. Liu, B. A. 
Malomed, “Spatiotemporal engineering of matter-wave solitons in Bose-Einstein condensates,” Phys. Rep. 899, 1-62 (2021). Michinel2005 M. J. Paz-Alonso, and H. Michinel, “Superfluidlike Motion of Vortices in Light Condensates,” Phys. Rev. Lett. 94, 093901 (2005). Onoda2004 M. Onoda, S. Murakami, and N. Nagaosa, “Hall effect of light,” Phys. Rev. Lett. 93, 083901 (2004). Leyder2007 C. Leyder, M. Romanelli, J. P. Karr, E. Giacobino, T. C. H. Liew, M. M. Glazov, A. V. Kavokin, G. Malpuech, and A. Bramati, “Observation of the optical spin Hall effect,” Nat. Phys. 3, 628-631 (2007). Hosten2008 O. Hosten, and P. Kwiat, “Observation of the spin Hall effect of light via weak measurements,”Science 319, 787-790 (2008). Schumacher2008 S. Schulz, S. Schumacher, and G. Czycholl, “Spin-orbit coupling and crystal-field splitting in the electronic and optical properties of nitride quantum dots with a wurtzite crystal structure", Eur. Phys. J. B 64, 51-60 (2008). Zhou2012 X. Zhou, Z. Xiao, H. Luo, and S. Wen, “Experimental observation of the spin Hall effect of light on a nanometal film via weak measurements,” Phys. Rev. A 85, 043809 (2012). Ling2017 X. Ling, X. Zhou, K. Huang, Y. Liu, C. Qiu, H. Luo, and S. Wen, “Recent advances in the spin Hall effect of light,” Rep. Prog. Phys. 80, 066401 (2017). Bliokh2015 K. Y. Bliokh, F. J. Rodríguez-Fortuño, F. Nori, and A. V. Zayats, “Spin-orbit interactions of light,” Nat. Photonics 9, 796-808 (2015). Lobanov Y. V. Kartashov, B. A. Malomed, V. V. Konotop, V. E. Lobanov, and L. Torner, “Stabilization of spatiotemporal solitons in Kerr media by dispersive coupling," Opt. Lett. 40, 1045-1048 (2015). Flach G. Gligorić, A. Maluckov, Lj. Hadzievski, S. Flach, and B. A. Malomed, “Nonlinear localized flat-band modes with spin-orbit coupling," Phys. Rev. B 94, 144302 (2016). Schumacher2017 O. Lafont, S. M. H. Luk, P. Lewandowski, N. H. Kwong, P. T. Leung, E. Galopin, A. Lemaitre, J. Tignon, S. Schumacher, E. Baudin, and R. Binder, “Controlling the optical spin Hall effect with light,”Appl. Phys. Lett. 110, 061108 (2017). Thawatchai T. Mayteevarunyoo, B. Malomed, and D. Skryabin, “Vortex modes supported by spin-orbit coupling in a laser with saturable absorption," New J. Phys. 20, 113019 (2018). Fu2019 S. Fu, C. Guo, G. Liu, Y. Li, H. Yin, Z. Li, and Z. Chen, “Spin-orbit optical Hall Effect,” Phys. Rev. Lett. 123, 243904 (2019). Luo2012 X. Zhou, X. Ling, H. Luo, and S. Wen, “Identifying graphene layers via spin Hall effect of light,” Appl. Phys. Lett. 101, 251602 (2012). Luo2021 Y. Wang, S. Chen, S. Wen, and H. Luo, “Realization of ultra-small stress birefringence detection with weak-value amplification technique,” Appl. Phys. Lett. 118, 161104 (2021). Hasman2013 N. Shitrit, I. Yulevich, E. Maguid, D. Ozeri, D. Veksler, V. Kleiner, and E. Hasman, “Spin-optical metamaterial route to spin-controlled photonics,” Science 340, 724-726 (2013). Boyd2013 M. Mirhosseini, M. Malik, Z. Shi, and R. W. Boyd, “Efficient separation of the orbital angular momentum eigenstates of light,” Nat. Commun. 4, 2781 (2013). Dholakia2002 V. Garcés-Chávez, D. McGloin, H. Melville, W. Sibbett, and K. Dholakia, “Simultaneous micromanipulation in multiple planes using a self-reconstructing light beams” Nature 419, 145-147 (2002). Dennis2010 M. R. Dennis, R. P. King, B. Jack, K. O'Holleran, and M. J. Padgett, “Isolated optical vortex knots,” Nat. Phys. 6, 118-121 (2010). Gbur2016 M. K. Smith, and G. J. Gbur, “Construction of arbitrary vortex and superoscillatory fields,” Opt. Lett. 41, 4979-4982 (2016). 
Cheng2021 L. R. Cheng, and Z. Z. Lin, “Toward two-dimensional ionic crystals with intrinsic ferromagnetism,” Phys. Lett. A 395, 127229 (2021). Li2023 Y. Tai, H. Fan, X. Ma, Y. Shen, and X. Li, “Multi-dimensionally modulated optical vortex array,” J. Opt. 25, 094001 (2023). Zhang2020 X. Li, and H. Zhang, “Anomalous ring-connected optical vortex array,” Opt. Express 28, 13775-13785 (2020). Bolduc2013 E. Bolduc, N. Bent, E. Santamato, E. Karimi, and R. W. Boyd, “Exact solution to simultaneous intensity and phase encryption with a single phase-only hologram,” Opt. Lett. 38, 3546-3549 (2013). Lee1979 W. Lee, “Binary computer-generated holograms,” Appl. Opt. 18, 3661-3669 (1979). Bahabad2016 Y. Eliezer, and A. Bahabad, “Super-Oscillating Airy Pattern,” ACS Photonics 3, 1053-1059 (2016). Pearson K. Pearson, “Note on regression and inheritance in the case of two parents,” Proc. R. Soc. London 58, 240-242 (1895). Ye2020 Q. Fu, P. Wang, C. Huang, Y. V. Kartashov, L. Torner, V. V. Konotop, and F. Ye, “Optical soliton formation controlled by angle twisting in photonic moiré lattice,” Nat. Photonics 14, 663-668 (2020). Li2019 X. Zhang, X. Xu, Y. Zheng, Z. Chen, B. Liu, C. Huang, B. A. Malomed, and Y. Li, “Semidiscrete Quantum Droplets and Vortices,” Phys. Rev. Lett. 123, 133901 (2019). Verbeeck2010 J. Verbeeck, H. Tian, and P. Schattschneider, “Production and application of electron vortex beams,” Nature 467, 301-304 (2010). Zou2020 Z. Zou, R. Lirette, and L. Zhang, “Orbital Angular Momentum Reversal and Asymmetry in Acoustic Vortex Beam Reflection,” Phys. Rev. Lett. 125, 074301 (2020). Fu2017 S. Fu, J. Zhou, Y. Li, L. Shemer, and A. Arie, “Dispersion Management of Propagating Waveguide Modes on the Water Surface,” Phys. Rev. Lett. 118, 144501 (2017). Leboeuf2010 P. Leboeuf, and S. Moulieras, “Superfluid motion of light,” Phys. Rev. Lett. 105, 163904 (2010). Michel2018 C. Michel, O. Boughdad, M. Albert, P. É. Larré, and M. Bellec, “Superfluid motion and drag-force cancellation in a fluid of light,” Nat. Commun. 9, 2108 (2018). Lusk2008 M. T. Lusk, and L. D. Carr, “Nanoengineering defect structures on graphene,” Phys. Rev. Lett. 100, 175503 (2008). Dong2018 Z. Dong, and Z. Chen, “Advanced Fourier transform analysis method for phase retrieval from a single-shot spatial carrier fringe pattern,” Opt. Lasers Eng. 107, 149-160 (2018). Acknowledgements We acknowledge support from the National Natural Science Foundation of China (nos. 12374306 (to S. F.), 62175091 (to Z. L.)), the Pearl River talent project (no. 2017GC010280 to S.F.), the Key-Area Research and Development Program of Guangdong Province (no. 2020B090922006 to Z.C.), the Guangzhou science and technology project (no. 202201020061 to S.F.), and the Israel Science Foundation (no. 1695/22 to B.M.). Author Contributions S. Fu conceived the concept. He and B. A. Malomed carried out the analytical considerations. S. Fu, H. Lin, Y. Liao, and B. A. Malomed drafted and revised the paper. H. Lin, Y. Liao and J. Ren performed the experiments. H. Lin and Y. Liao performed numerical simulations and designed the phase-only holograms. Z. Chen supervised the project. All authors participated in discussions and contributed to the editing of the article. H. Lin and Y. Liao contributed equally to this work. Corresponding authors: Z. Li (ailz268@126.com); Z. Chen (tzqchen@jnu.edu.cn); S. Fu (fushenhe@jnu.edu.cn). Competing Interests All authors declare no competing interests. Supplementary Supporting information is provided for this work.
http://arxiv.org/abs/2407.01792v1
20240701203751
Optimising robotic operation speed with edge computing over 5G networks: Insights from selective harvesting robots
[ "Usman A. Zahidi", "Arshad Khan", "Tsvetan Zhivkov", "Johann Dichtl", "Dom Li", "Soran Parsa", "Marc Hanheide", "Grzegorz Cielniak", "Elizabeth I. Sklar", "Simon Pearson", "Amir Ghalamzan" ]
cs.RO
[ "cs.RO" ]
§ ABSTRACT Selective harvesting by autonomous robots will be a critical enabling technology for future farming. Increases in inflation and shortages of skilled labour are driving factors that can help encourage user acceptability of robotic harvesting. For example, robotic strawberry harvesting requires real-time high-precision fruit localisation, 3D mapping and path planning for 3-D cluster manipulation. Whilst industry and academia have developed multiple strawberry harvesting robots, none have yet achieved human-cost parity. Achieving this goal requires increased picking speed (perception, control and movement), accuracy and the development of low-cost robotic system designs. We propose the edge-server over 5G for Selective Harvesting (E5SH) system, which is an integration of a high-bandwidth and low-latency Fifth Generation (5G) mobile network into a crop harvesting robotic platform, which we view as an enabler for future robotic harvesting systems. We also consider processing scale and speed in conjunction with system environmental and energy costs. A system architecture is presented and evaluated with support from quantitative results from a series of experiments that compare the performance of the system in response to different architecture choices, including image segmentation models, network infrastructure (5G vs WiFi) and messaging protocols such as Message Queuing Telemetry Transport (MQTT) and Transport Control Protocol Robot Operating System (TCPROS). Our results demonstrate that the E5SH system delivers a step-change peak processing speedup of more than 18-fold over a stand-alone embedded computing Nvidia Jetson Xavier NX (NJXN) system. § INTRODUCTION Robotic systems are seen as a significant opportunity to help secure global food security. They can be deployed to drive environmental sustainability <cit.> as well as economic productivity <cit.>. Whilst many agricultural jobs have now been automated, the industry is still reliant on significant numbers of skilled human workers to hand-harvest fruits and vegetables. These skilled workers perform complex cognitive and high-fidelity manipulation tasks that have so far evaded robotisation <cit.>. Developing robotic systems to automate these tasks is an urgent need in many key production regions around the world (e.g. UK, US, NL, ES, JP), which are now facing severe labour shortages <cit.>. These shortages are driven by the demographics of age, the politics of migration, and challenging socioeconomic working conditions. Whilst robotic systems are being developed to selectively harvest essential fruit and vegetable crops (e.g., strawberries <cit.>, apples <cit.> and <cit.>, tomatoes <cit.>, mushrooms <cit.> and broccoli <cit.>), we are unaware of any commercial robot that can harvest these products at the same operational cost (or speed) as a human. Multi-arm robots are promising alternatives for maximizing harvesting output through arm cooperation. <cit.> formulates the problem of arm cooperation and harvest maximization with multi-arm robots as a Markov decision process, learning multi-agent policies through reinforcement learning. <cit.> approaches the problem of maximizing robotic harvest by formulating it as an instance of the k-colorable sub-graph problem for multi-3DoF harvesting.
Key technical challenges include designing low-cost robot systems with high-speed data processing capacity. Real-time (or near real-time) data processing is a significant challenge for robotic systems operating within complex and biologically diverse environments. In addition, computing must have a low cost in terms of both robot energy use and, ideally, carbon consumption. Here, we explore how 5G and edge computing can advance processing speed and reduce the cost of selective harvesting robotic technologies. Recent developments in robotics, computer vision, machine learning, and communication networks enable innovation in crop-harvesting robotic platforms. For strawberries alone, multiple platforms have been developed, e.g., Dogtooth <cit.>, SagaRobotics <cit.>, Harvest CROO <cit.>, AgroRobot <cit.> and Octinion <cit.>. Metomotion <cit.> develops greenhouse robots with a focus on tomato harvesting. Similarly, the Certhon robots powered by Denso Corporation's technology <cit.> and Four Growers <cit.> also target tomato harvesting. Several harvesting robot projects within the research community have also been proposed for various environments and other types of fruit. Robotic and autonomous systems for orchard fruit picking are also presented in the literature, e.g., apple <cit.>, orange <cit.>, <cit.>, grape <cit.>, kiwi fruit <cit.> and cherry <cit.>. Applications of robotic harvesting to vegetables such as mushrooms and broccoli are reported in <cit.> and <cit.>. This paper presents an integration of our novel strawberry-picking robotic system <cit.> with a private 5G network and an edge-server. Our integrated edge-server and robotic strawberry picking system communicate over our private 5G-SA (Stand-Alone) network to facilitate near-real-time strawberry picking. This enables our strawberry-picking robot to perform at a speed comparable to a human picker. We investigated available perception approaches to identify the one whose precision comes closest to human performance. We implemented semantic segmentation models using both the computationally intensive Mask-RCNN and the embedded-computing-friendly FBNets. These models were then deployed on both the edge-server and the NJXN board, and we present a comparative study of the performance of the proposed system and the NJXN board. We also present a comprehensive analysis of the performance gain by quantifying the latency caused by processing, network transfer, and model prediction. The contributions of this paper include: (1) we integrated a private 5G-SA network and an edge-server into a robotic system to process computer vision components and facilitate perception, enhancing tasks such as object detection, semantic segmentation, obstacle mapping, accurate 3D localisation, and mapping in near real-time; (2) we demonstrated the effectiveness of our proposed approach in the use case of selective harvesting of strawberries. Our results demonstrate that the proposed system enables effective picking actions by a robotic system and illustrates opportunities for deploying commercial robots in fields. We measured system deployment in a computationally intensive task involving a 3D representation of the environment to detect obstacles and enable efficient harvesting. We present a comparative performance analysis of MQTT- and TCPROS-based communication between the robots and the edge-server over WiFi and a private 5G-SA network. We also compared the energy consumption and carbon emissions of our proposed setup and standalone embedded devices.
Our prior work <cit.> has shown that processing sensory information (the perception process) with the computing machines on our selective harvesting robot is one of the key bottlenecks and makes the picking process very slow. This perception process may be executed more than once for each fruit-picking cycle. Computing the perception with our robot computer (an Intel Core i7 CPU with 16 GB RAM, running Ubuntu 20.04.4 LTS (Focal Fossa)) <cit.> takes between 1.5 and 3 seconds. The average picking time was 25 seconds. The robot takes 5 seconds to move from the home configuration to the fruit and 5 seconds to put the picked fruit into a punnet. Motion planning can take between 20 and 1000 milliseconds. Considering that three perception cycles (i.e., generating an action plan for the robot from sensory data) are needed to complete a picking cycle, perception adds up to 12 seconds to the picking process; our proposed E5SH can reduce this to a maximum of 3.1 seconds. This means the E5SH robot is about 5 seconds slower than the 11-second human picking speed we target. In Section <ref>, we provide a relevant literature review, whilst the core system architecture of the proposed setup is presented in Section <ref>. Section <ref> contains details of the experimental setup, followed by results and experimental evaluation presented in Section <ref>. The paper is concluded in Section <ref>, which also discusses the results. § LITERATURE REVIEW A comprehensive review of selective harvesting robotic systems (including hardware, perception, motion planning and motion control) is presented in <cit.>. However, that work does not discuss the telecommunication aspect of selective harvesting robots. This section presents work related to the technical elements of our integrated system (namely the telecommunications infrastructure, edge computing, computer vision and multi-process messaging protocols of E5SH). Processing high-dimensional visual data, i.e., more than two-megapixel RGB and Point Cloud (PC) data, in near real-time presents a very challenging barrier for robotic applications where the robot needs near real-time processing for smooth performance. These processes include strawberry detection/segmentation, quality/weight/size estimation, localisation, obstacle map generation and motion planning <cit.>, <cit.>, <cit.>, <cit.>. A human expert in strawberry picking can pick 70 kg/h and place the fruit into a big tray, whereas the average performance is 40 kg/h <cit.>. Assuming an average strawberry weighs 30 g, this means 1,333 pick actions per hour with two hands, translating into roughly 5.5 seconds per strawberry per hand. This means every millisecond of data-processing latency matters in technology development. The biological latency observed in humans is 70 ms <cit.>. In some cases, the robot may need to process the images of a single strawberry cluster multiple times from different views to deal with occlusion and conduct cluster manipulation for successful picking actions. Existing technologies may need between 10 seconds and more than 1 minute to pick a strawberry, making them commercially unviable. Speed, precision, and reliability are the primary technical challenges that prevent commercialising most robotic fruit-picking systems. Fast and precise detection, segmentation, and localisation of ripe fruit in dense clusters are crucial to building successful robotic picking technologies. On the other hand, robot battery capacity and energy consumption are two factors limiting robot performance. The short sketch below makes the timing arithmetic above, together with the picking-cycle estimates from the Introduction, explicit.
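The following minimal Python sketch reproduces the cycle-time and picker-rate estimates; the variable names and the three-perception-cycles-per-pick assumption are ours, taken from the figures quoted in this paper, and are illustrative rather than part of the deployed system.

```python
# Back-of-envelope picking-cycle estimates (all times in seconds).

PERCEPTION_ONBOARD = 3.0   # worst-case onboard perception (1.5-3 s)
PERCEPTION_E5SH = 3.1 / 3  # E5SH bounds three cycles by 3.1 s in total
CYCLES_PER_PICK = 3        # assumed perception runs per picking cycle
MOVE_TO_FRUIT = 5.0        # home configuration -> fruit
PLACE_IN_PUNNET = 5.0      # fruit -> punnet
PLANNING_MAX = 1.0         # motion planning (20-1000 ms)

def cycle_time(perception: float) -> float:
    """Estimated duration of one picking cycle."""
    return (CYCLES_PER_PICK * perception
            + MOVE_TO_FRUIT + PLACE_IN_PUNNET + PLANNING_MAX)

onboard = cycle_time(PERCEPTION_ONBOARD)  # ~20 s, consistent with the 25 s average
e5sh = cycle_time(PERCEPTION_E5SH)        # ~14.1 s
print(f"onboard: {onboard:.1f} s, E5SH: {e5sh:.1f} s, "
      f"saving: {onboard - e5sh:.1f} s")

# Human picker reference: 40 kg/h at ~30 g per strawberry, two hands.
picks_per_hour = 40_000 / 30              # ~1,333 picks/h
print(f"human: {3600 / (picks_per_hour / 2):.1f} s per strawberry per hand")
```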
Robot energy use increases for complex tasks like strawberry picking <cit.>, as such tasks can involve processing visual sensor data multiple times: e.g., the robot detects/segments and localises ripe strawberries <cit.>, builds an obstacle map <cit.> for motion planning and plans corresponding robot actions for an effective picking movement <cit.>. This paper uses the terms `strawberry picking perception' or `perception' for these processes. Although novel learning-from-demonstration methods aim to reduce computation time by directly mapping sensor data to robot movements <cit.>, such approaches are not yet mature enough to be deployed on our selective harvesting robot. Compared to human workers, limited speed and precision are two significant shortcomings of existing robotic harvesting systems, which must operate in unstructured strawberry-picking workspaces. Effective tactile sensing is essential for achieving precise and dexterous manipulation in robotics. A recent review of advanced tactile sensors tailored for selective harvesting robots is presented in <cit.>. Notably, a novel acoustic-based tactile sensor has been developed specifically for this application <cit.> and demonstrated for strawberry handling <cit.>. This sensor incorporates a deformable membrane with acoustic channels, allowing it to conform to various shapes and surfaces and provide tactile feedback <cit.>. It has demonstrated effectiveness in delicate pick-and-place tasks, such as harvesting strawberries. Moreover, a 2-D version of the sensor has exhibited significantly improved precision in both force localisation and normal force readings compared to prior designs <cit.>. Novel data-driven approaches <cit.> enable a selective harvesting robot to push and manipulate clusters of strawberries using tactile sensors, thereby facilitating the autonomous picking of a ripe target strawberry. In this paper, we consider only the speed challenges faced by selective harvesting robots. Our proposed solution involves an integrated edge computing system deployed over a private 5G-SA network, enabling robotic harvesting systems to operate at speeds closer to those of human pickers. We have also tested different perception approaches to find the most precise existing model for strawberry picking. This includes localising fruits, finding their periphery, and labelling them, which are challenging tasks in semantic segmentation. We explored instance segmentation techniques that output pixel-level classification for targets <cit.>, and bounding box labelling <cit.>. Three-dimensional localisation is also available through multi-modal comparative analysis based on YOLOv4 <cit.>. We also explored Region-based Convolutional Neural Networks (RCNN). RCNN employs Residual Networks (ResNets) and Region Proposal Networks for object localisation. Mask-RCNN <cit.>, a variant of RCNN, predicts segmentation masks along with the bounding boxes. Mask R-CNN with 2D bounding boxes is too slow for detecting and segmenting targets in real time for strawberry picking <cit.>; 3D bounding box segmentation is likewise unsuitable for real-time strawberry picking <cit.>. The accuracy of ResNet-based segmentation is higher than that of counterparts such as YOLOv3 and YOLOv5 <cit.>. However, these models are computationally expensive, limiting the prediction frame rate of segmented images. A lighter variant called D2Go <cit.>, based on FBNetV3, has been proposed using a simple Differentiable Neural Architecture Search.
It also provides device-aware training and quantisation for mobile devices. Although D2Go has faster inference times, it has lower accuracy than Mask-RCNN <cit.>. Image segmentation and 3D occupancy grid mapping are required for planning picking actions in the robotic harvesting system. We use the OctoMap library for this purpose, which is part of the Robot Operating System (ROS) <cit.> packages [<https://wiki.ros.org/octomap>]. The OctoMap library provides the data structures and mapping algorithms required to model arbitrary environments based only on sensory information. The representation models occupied areas as well as free space, and the distinction between free and occupied space is essential for safe robot operation <cit.>. Whilst 5G communication in agriculture shows theoretical promise, due to its emerging nature and capital cost, there have been relatively few published use cases <cit.>. 5G, like other telecommunication networks, is set up using cells of a particular size. Each cell can connect to many devices, with broadband sharing/slicing built into the core technology. We used a private 5G-SA N77 system with a small (micro) cell, which has a medium-range reach (c. 200 m-1 km) with bandwidths of >50 Mbps and latency of <20 ms <cit.>. We evaluated the performance of two different messaging protocols: (i) TCPROS and (ii) MQTT. TCPROS is the transport layer used by ROS, based on the standard TCP protocol <cit.>, for communicating messages. ROS communication is based on the publish-subscribe messaging pattern <cit.>. TCPROS differs from standard TCP in that it has a specific connection header definition used to identify ROS-specific communication parameters, such as subscriber name, topic, and message definition. These necessary fields are defined in the TCPROS header information. TCPROS is otherwise similar to TCP; hence, it does not add overhead or complexity to data transmission. Nonetheless, there are known security flaws within TCPROS, such as weak identity verification between communicating devices and transmission in plain text, as highlighted in <cit.>. Another publish-subscribe message passing protocol is MQTT, a standard messaging protocol whose latest version is MQTT 5.0 <cit.>; it sits on top of the standard TCP transport. MQTT establishes a broker that redirects messages from publishers to the correct subscribers. However, unlike TCPROS, communications can be secured from publisher to subscriber. MQTT is also designed to guarantee message delivery between publishers and subscribers through its quality-of-service (QoS) levels; hence, we use MQTT messaging with the QOS0 and QOS1 levels. Edge computing refers to using a server relatively close to where data collection is performed (for data processing and/or storage). The goal is to reduce latency and increase reliability for tasks that cannot be performed on-site (e.g., on the robot). For instance, Chen et al. <cit.> introduced FogROS, a framework to ease the deployment of tasks (e.g. SLAM computations) on a cloud server. Antevski et al. <cit.> used edge computing for analysing WiFi quality, connectivity, and reliability. Hayat et al. <cit.> used a 5G network to offload to an edge-server the computation necessary for navigating drones; the study showed there are cases where edge computing is significantly beneficial. Huang et al. <cit.> used edge computing in the context of multi-robot collaborative SLAM with RecSLAM, a 2D SLAM algorithm that runs on the edge-server, outperforming state-of-the-art solutions.
However, these works do not deal with robotic manipulation involving physical robot interactions. Such cases may need multiple queries to the perception module to segment/identify and localise a ripe fruit. § SYSTEM ARCHITECTURE The processing pipeline for a robotic harvesting system is shown in Fig. <ref>. This pipeline includes image acquisition, semantic segmentation, OctoMap generation, action planning and manipulation execution. Our prior works <cit.> indicated the challenges in the first three blocks of the robot perception to robot action pipeline (shown in Fig. <ref>); because the E5SH system was tested off-season, action planning and manipulation were not performed during the E5SH field tests. Hence, we field-tested only the first three blocks in Fig. <ref> for estimating the overall speedup. The overall performance of E5SH is estimated by aggregating the results of our field tests with action planning and manipulation statistics from the references above. The system architecture of the demonstration setup is shown in Fig. <ref>. Our system comprises a robot platform, an arm with a gripper, communication devices, a private 5G-SA system, an edge-server, and a robot-mounted laptop for the OctoMap generation. We allocated heavy computing tasks, such as four-class segmentation, to the edge-server. Data transfer from the robot to the edge-server is limited to the camera stream (RGB/Depth images and camera info topics) and a service call (/trigger) to indicate when the robot needs an update of the segmented depth images. On the server, a ROS action client receives the continuous stream of images and synchronises the data. Once a service call is received, the action client sends an action goal containing the synchronised camera data to the action server. The action server runs the Detectron2 (Mask-RCNN) and Detectron2Go (D2Go) models and segments the depth images into four classes, i.e. ripe strawberry, rigid obstacle, soft obstacle (canopy), and background. The result is returned to the action client, which publishes the labelled depth images. The depth images are returned to the robot, which constructs 3D point clouds of the respective obstacle and strawberry segments. These point clouds are then employed to generate the corresponding OctoMaps (Fig. <ref>). Next, the obstacle OctoMaps are fed into the navigation control framework, which performs action planning for the robot. Similarly, the strawberry OctoMap is passed to the manipulation control framework. This ensures that harvesting is performed with accurate localisation so that neighbouring strawberries are not considered obstacles during operation. The whole pipeline is triggered by the trajectory planner on the robot, which relies on the labelled depth images for obstacle detection. §.§ Semantic Segmentation Robots in our setup traverse the polytunnel where strawberries are grown, and images were acquired from July to October 2021. A sample image and its corresponding ground-truth (GT) annotation, the depth image, and bounding box annotations are shown in Fig. <ref>. We trained Mask-RCNN and FBNetv3 by employing Detectron2 and D2Go and measured prediction, accuracy, and latency on the edge and embedded platforms. Moreover, we measured the performance of our system with a bespoke, data-intensive perception task by deploying semantic segmentation to the edge-server for crop-harvesting robotic applications. A minimal training-configuration sketch for these models is given below.
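The following is a minimal sketch of a Detectron2 fine-tuning setup using the training parameters reported later in this paper (Mask R-CNN R101 FPN 3x base, learning rate 0.0025, 90,000 iterations, test threshold 0.5, SGD by default); the registered dataset names are hypothetical placeholders, not identifiers from our codebase.

```python
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultTrainer

cfg = get_cfg()
base = "COCO-InstanceSegmentation/mask_rcnn_R_101_FPN_3x.yaml"
cfg.merge_from_file(model_zoo.get_config_file(base))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(base)  # COCO-pretrained weights

cfg.DATASETS.TRAIN = ("strawberry_train",)  # hypothetical registered dataset
cfg.DATASETS.TEST = ("strawberry_test",)
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 4         # strawberry, canopy, rigid obstacle, background
cfg.SOLVER.BASE_LR = 0.0025                 # learning rate used in this paper
cfg.SOLVER.MAX_ITER = 90_000                # iterations used in this paper
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5 # test threshold used in this paper

trainer = DefaultTrainer(cfg)               # Detectron2's default SGD training loop
trainer.resume_or_load(resume=False)
trainer.train()
```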
The acquired images are segmented according to the following list of classes: (1) Strawberries: both ripe and unripe strawberries are included in this class, while flowers and nascent strawberries are not; (2) Canopy: the strawberry plant is labelled as the canopy; (3) Rigid Obstacle: any object which the robot should avoid, including metal structures, pipes, and humans; (4) Background: the remaining region is marked as background. The harvesting operation requires avoidance of rigid (hard) obstacles. The canopy is a soft obstacle, which means the robot can interact with and push it to some extent; hence, we classified it as a soft obstacle (or canopy). We need to precisely localise the ripe strawberries, as our robots need to reach the stalk of the fruit and grip/cut it <cit.>. We also classified the ripe fruits, whereas the remaining region of the image is classified as background. The semantic segmentation model provides segmentation masks for soft objects, such as fruits, leaves and the canopy, and hard objects, such as poles and other background structures. Depth masks were used to create point clouds and OctoMaps for the segmented masks; this is time-efficient and precise and facilitates unhindered action planning. Instance segmentation detects multiple instances of the same label in an image, going one step beyond semantic segmentation, which only assigns a class label to each pixel of an input image. Since the target picking object was always the strawberry during this study, we opted for semantic segmentation together with contour detection to separate individual strawberries when multiple strawberries appear within an input image. Contour detection on the strawberry-label mask also allows non-target strawberries to be removed from the OctoMap. Removing non-target strawberries from the OctoMap results in a high successful planning rate, because we assume that non-target strawberries are soft objects that can be pushed during the picking motion. §.§ Computational Platform We compare the performance of our system when using either an onboard NJXN (i.e. on the robot) or an edge-server networked to the robot's controller, described in Section <ref>. Our edge-server is an Intel(R) Core(TM) i7-9700K CPU @ 3.60 GHz, an eight-core machine equipped with 64 GB memory and an Nvidia RTX-2080 GPU with 10 GB GDDR memory. It runs Ubuntu 20.04 hosting ROS Noetic and Nvidia CUDA 11.1. The Facebook Detectron 2.0.6 framework is installed with torch 1.10.0 and D2Go 0.0.1 for semantic segmentation. Our onboard processor is an NVIDIA Jetson Xavier NX (NJXN) platform. §.§ Network Infrastructure As outlined in Section <ref>, two different network infrastructures were compared: a private 5G-SA network and standard WiFi. The configuration of each is described below. 5G Network. We use a private 5G-SA (N77) network system, also called the sub-6 GHz band, with the operational parameters listed in Table <ref>. Since the robot has no 5G capabilities, we use a 5G-capable MiFi router. The router can either be directly attached to the robot or placed in a secure location in the field to serve multiple robots as the access point. This router then establishes a 5G connection to the mobile 5G tower, which is connected to a local LAN (Fig. <ref>). Communication between the two endpoints is handled via the ROS mqtt_bridge module, which uses the MQTT protocol for machine-to-machine data transfer. WiFi Network.
Similar to the 5G network topology, the WiFi topology is specially designed for robot communication in the field at the Riseholme campus, University of Lincoln. Our mobile platform (Thorvald II robot, SagaRobotics Ltd) is connected to the WiFi network to establish a connection with the edge-server (Fig. <ref>). The network can be expanded by attaching a WiFi access point in a secure location in the field to serve multiple robots, or a robot can connect directly as it has WiFi capabilities onboard. A WiFi access point must be connected to the central network for communication. The same protocol is used as in the 5G case, namely MQTT for machine-to-machine data transfer. § EXPERIMENTS We conducted experiments to evaluate several different aspects of the performance of our system: (a) comparison of semantic segmentation models (as described in Section <ref>); (b) comparison of the NJXN vs edge-server platforms (described in Section <ref>) for running the trained semantic segmentation models; and (c) comparison of communication network infrastructure (described in Section <ref>) and messaging protocols (MQTT vs TCPROS). Results of these comparative experiments are presented in Section <ref>, following details of our experimental setup, provided below. Our experimental setup is located on the Riseholme Park farm campus, home to the Lincoln Institute for Agri-food Technology (LIAT) at the University of Lincoln, UK. The farm includes a test facility for growing strawberries, consisting of two polytunnels. In 2022, these tunnels produced more than 2,000 kg of fruit. Each tunnel has five rows of tabletop strawberries in three different varieties, e.g., Zara, El-Santa, and Ever-bearing. We performed several field tests and collected the corresponding data and images across different growth stages. These images are taken from an industrial-grade Intel RealSense D435i RGB-D camera mounted on a Thorvald II robot, as shown in Fig. <ref>. A Franka Emika Panda arm with seven degrees of freedom (DOF) and a strawberry-picking end-effector <cit.> are used for picking actions. We performed approximately twelve experiments on seven different days and times under different climate conditions to ensure coverage of the variance in network communication, illumination conditions, and strawberry growth stages. §.§ Image Data Set A preliminary sample of three thousand RGB-D image sequences was acquired and registered. A subset of around 212 images was selected using <cit.> and annotated by third-party professional annotators <cit.> to create the ground truth. Experiments presented in this paper were performed on 142 training images taken at different times of day, seasons, months, and illumination conditions, in the presence of hard and soft shadows, inside polytunnels, and with several different camera orientations. The test set includes 70 images. The dataset contains 92% of images at 1280×720 resolution; the other 8% have VGA resolution. The test dataset is utilised to evaluate the models' performance and distribution. The live network transmission throughput and prediction time delay were logged at thirty frames per second, at a frame resolution of 848×480 for both RGB and depth images. Data set analysis. Once the training and test dataset candidate images were selected, metadata such as instance counts for each class and the distribution of bounding-box and mask areas as a pixel ratio were collected to detect potential data imbalance.
The distribution between the rigid obstacle and strawberry classes is balanced; however, the canopy class has fewer instances. The reason for the lower instance count of the canopy is its larger area per instance. The pixel ratio implies a relatively uniform region occupancy between rigid obstacles, canopy, and background. Strawberries occupy only 2.2% of the pixels. Training Parameters. We train Detectron2 using the base model Mask RCNN R101 FPN 3x. The model is trained for 90,000 iterations with a learning rate of 0.0025, and the test threshold is set to 0.5. We used a Stochastic Gradient Descent optimiser during training. Similarly, we train D2Go using Mask RCNN FBNet v3a C4 with the same learning rate, number of iterations, test threshold, and optimiser. The instance distribution of the dataset is given in Table <ref>. We applied several augmentation methods to the training and test datasets, such as random flip, cropping and brightness, for improved training on our dataset. §.§ Evaluation Metrics For comparing the accuracy of the trained semantic segmentation models, we employ standard evaluation metrics for the experimental comparisons <cit.>. Pixel-wise True Positives (T_P), False Positives (F_P) and False Negatives (F_N) are computed for all images. Precision (p), recall (r) and F1-measure (F1) are defined as p = T_P/(T_P+F_P), r = T_P/(T_P+F_N) and F1 = 2pr/(p+r), and are calculated for each class separately. The Average Precision is AP = (1/n)∑_i=1^n p(τ_i) and the Average Recall is AR = (1/n)∑_i=1^n r(τ_i), where τ is the threshold function. We present the pixel-wise results and discuss them in Sections <ref> and <ref>. To compare the processing speed of the different computational platforms, we consider the number of prediction frames per second (FPS) each configuration can handle. We focus on the speedup obtained on the faster edge-server versus the onboard NJXN processor. Results are presented in Section <ref> for the different semantic segmentation models running on each processor. Additional experiments were conducted to simulate each processor servicing one vs. three robots simultaneously to stress the processor capacity. For comparing the performance of the different communication networks, we measure the latency. Results are presented in Section <ref>. § RESULTS Our results include the speed of perception processing and the cumulative delays in image capturing, transmission, semantic segmentation and reception for both our proposed edge-server over 5G selective harvesting (E5SH) system and the NJXN board. The parameters included in our study are (1) the mode of communication (5G/WiFi), (2) the underlying protocols (MQTT/TCPROS), and (3) the computation throughput of the edge-server and the embedded device during the segmentation task. As the quality of semantic segmentation is essential to the performance of 3D localisation, we present and discuss the performance of the segmentation models, i.e. Detectron2, D2Go 8-bit (D2Go-8) and D2Go 32-bit (D2Go-32). §.§ Scene segmentation A sample of our qualitative results is shown in Fig. <ref>, which shows the visual output for five images taken at different growth stages, illumination conditions, times of day and camera orientations. The segmentation output in Fig. <ref> shows that both the standard and the optimised D2Go miss several strawberries. In contrast, Detectron2 correctly classifies most strawberries (Fig. <ref>).
None of the segmentation models could correctly segment the sharp edges of the rigid obstacles, which are instead represented by similar regions with a smooth periphery. Although the models miss the sharp-edge regions in large canopy areas, Detectron2 achieves improved classification for branches. Qualitatively, both Detectron2 and D2Go-32 perform well in segmenting partially visible strawberries. In contrast, the optimised D2Go-8 classifies a red cap on the robot as a strawberry. §.§ Segmentation Accuracy Our quantitative analysis assesses segmentation quality using the F1-measure, which combines precision and recall. Segmentation performance is evaluated alongside per-class abundance, e.g., in terms of pixel percentage, discussed in Section <ref>. We considered four classes for the model to predict, as mentioned in Sec. <ref>. The distribution of the F1-measure on the test dataset is given in Fig. <ref>. The evaluation is based on the pixel-wise calculation of true positives T_P, false positives F_P, and false negatives F_N with respect to the ground truth. In general, both D2Go models have similar F1-measure distributions, as shown in Fig. <ref>. In this context, the centre of the distribution for the rigid obstacle class is the lowest, with a median of 0.22 for both D2Go-8 and D2Go-32, and the overall distribution is positively skewed. The Inter Quartile Range (IQR) of the rigid obstacle class for these two models lies between 0.11-0.44 and 0.15-0.47, respectively. Our results show that D2Go has the poorest segmentation quality for the rigid obstacle class. In contrast, the Detectron2 model has a median F1-measure of 0.94, a considerable improvement over the D2Go F1-measure of 0.22. Our qualitative analysis (e.g. shown in Fig. <ref>) also agrees with this. The background class yields F1-measure medians of 0.76 and 0.75 for D2Go-8 and D2Go-32, respectively. They also possess similar IQRs, which lie between 0.69 and 0.81, and are negatively skewed. Here, Detectron2 outperforms D2Go-8 and D2Go-32 with an F1 median of 0.82 and an IQR of 0.76-0.87. The F1-measure medians for the canopy class increase to 0.90 for both D2Go models. Furthermore, their distribution remains negatively skewed, with a similar IQR between 0.85 and 0.96. On the other hand, Detectron2 again outperforms the others with an F1-measure median of 0.94 and an IQR within 0.91 and 0.97. The F1-measure distribution for strawberries is similar across the D2Go versions, with medians of 0.81 and 0.82 for D2Go-8 and D2Go-32, respectively. D2Go-32 has a larger IQR (0.69-0.88) than D2Go-8, which ranges between 0.78-0.87. Interestingly, Detectron2 segments the strawberry class better, with a median of 0.90 and a narrower IQR (0.87-0.92). Detectron2 thus obtains much better strawberry segmentation quality, helping with improved 3D localisation, which is very useful for harvesting robots. Our results demonstrate that Detectron2 outperforms D2Go-8 and D2Go-32 in segmentation quality for obstacles and strawberries. D2Go-32 comes after Detectron2 in quality, with a slight edge over D2Go-8. §.§ Computation Speed We compare the computational throughput of the different models on the edge-server and the NJXN board. Fig. <ref> shows the computational speedup in frame prediction for D2Go-8, D2Go-32 and Detectron2 on the edge-server relative to the NJXN board. Our results show that the edge-server enables high-quality segmentation outputs, namely by Detectron2, with a median speedup of 18.7 over NJXN.
We observe that both D2Go models have a median speedup of just above four on the edge-server over NJXN. Robot controllers usually run above 1 kHz, which is called hard real-time; soft real-time control, e.g. an impedance controller for interacting with hard objects, runs at 100 Hz or above <cit.>. The biological latency observed in humans is 70 ms <cit.>. Detectron2 and D2Go latency on the edge-server is almost 80 and 30 ms, respectively. That indicates we can obtain near-human performance in picking strawberries with future high-frequency dexterous manipulation controllers. Subject to the available GPU resources on the edge-server, it can simultaneously provide segmentation service to many robots; our system can handle three robots concurrently. We conclude that Detectron2 is suitable for semantic segmentation due to its enhanced quality, with the trade-off of slower processing on the edge-server. §.§ Network Performance Several E5SH experiments were conducted at the Riseholme campus of the University of Lincoln on different days, at different times of day, and under different climate conditions. To analyse our system's communication efficiency and stability, we performed a comparative study of the performance of communication over 5G and WiFi networks. Below, we present our results in two categories: system latency and throughput speed. Latency is measured as the time for a request message to be sent, processed, and responded to on a network involving robots and an edge-server, denoted the Round-Trip Time (RTT). Reliability and consistency of message-passing performance are as crucial as edge-server performance. As long as the wireless network throughput is not saturated, throughput itself is not critical to communication latency. In the E5SH system, image data transmission is bi-directional (i.e. robot-to-server and server-to-robot). Our results demonstrate that 5G yields lower and more consistent latency across all experiment types; this latency is low enough for video streaming at 60 FPS. The WiFi and 5G latency results are presented in Fig. <ref>. We experienced somewhat poorer WiFi performance in the field, as indicated by significantly larger standard deviations compared to 5G. These results show that the WiFi signal is unsuitable for outdoor environments, as WiFi is primarily designed for indoor use. Multiple factors may contribute to the reduced performance of WiFi relative to indoor environments, such as obstacles, scaffolding and metal structures. Fig. <ref> shows the edge-server's latency and indicates that 5G has a much larger capacity to stream at higher FPS than WiFi. §.§ Communication Factors We studied the throughput of different configurations through a stream of images with an average size of 80.0 kB per image sent from the robot to the server over 5G and WiFi networks. After the edge-server segments the images, it returns a stream of segmented images to the robot with an average size of 16 kB. We use small image sizes to remain below the network throughput saturation threshold for 5G and WiFi. From the robot to the server, images are transmitted at 30 FPS; for instance, the throughput will be approximately 2400 KBps if each image is 80 kB. From the server to the robot, images are transmitted at 50 FPS, resulting in a throughput of approximately 1250 KBps if each image is 25 kB. In both cases, we do not account for the overhead introduced by the transmission protocol. The sketch below makes this arithmetic explicit.
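The throughput figures above follow directly from frame rate times frame size; the following minimal sketch (variable names ours) reproduces them, ignoring protocol overhead as in the text.

```python
def throughput_kbps(fps: float, frame_kb: float) -> float:
    """Application-layer throughput in kB/s, ignoring protocol overhead."""
    return fps * frame_kb

uplink = throughput_kbps(fps=30, frame_kb=80.0)    # robot -> server: 2400 kB/s
downlink = throughput_kbps(fps=50, frame_kb=25.0)  # server -> robot: 1250 kB/s
print(f"uplink: {uplink:.0f} kB/s, downlink: {downlink:.0f} kB/s")
```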
Since we send many packets of data simultaneously, we can analyse the performance of the 5G and WiFi networks under different network protocols and throughputs. Moreover, we studied how they manage the traffic; the only change across the different experiments was the type of wireless network. Hence, we expect similar results for 5G and WiFi, with 5G showing a higher transmission speed. The WiFi network shows lower upload and download throughput and considerably lower performance when using TCPROS (Fig. <ref>). Fig. <ref> shows network throughput data recorded at the edge-server and the robots. It indicates that WiFi has a lower mean throughput, specifically in WiFi TCPROS mode. The average frame transmission rate from the robot to the server (received throughput) is approximately 25 and 24.5 FPS with the 5G and WiFi networks, respectively. The average frame rate from the server to the robot (transmitted throughput) is 125 FPS over the 5G network and 120 FPS over WiFi. In this case, the higher FPS is because of the smaller size of the segmented images. MQTT-based communication over 5G and WiFi outperforms the other configurations (Fig. <ref>). Although 5G only marginally outperforms WiFi in terms of throughput (Fig. <ref>), the latency results in Fig. <ref> show 5G is significantly more stable than WiFi in outdoor environments. MQTT-QOS0 performs marginally better than MQTT-QOS1 in receiving data from the client, whereas the other differences between QOS0 and QOS1 are not statistically significant, as found when performing factor analysis (see below). Hence, we use MQTT-QOS0 in subsequent sections and refer to it as 5G-MQTT or WiFi-MQTT. We performed a 2×3 factor analysis on the communication throughput data to assess the impact of the network infrastructure (5G vs WiFi) in conjunction with the messaging protocol (MQTT-QOS0 vs MQTT-QOS1 vs TCPROS). First, we test for statistical significance within each factor, as shown in the first four subfigures plotted in Figure <ref>. The results show statistically significant differences in performance for 5G vs WiFi at both the server and the client, where 5G is the faster infrastructure and MQTT is the faster protocol. Note that the differences between MQTT-QOS0 and MQTT-QOS1 are not statistically significant. The factor analysis shows that combining the two factors does not change the performance results for each factor taken individually. §.§ Cumulative System Speed-up The segmentation, transmission, and reception delays measured during our tests allow us to calculate the cumulative performance gain of our E5SH system over the embedded device. Figure <ref> shows the FPS performance of the various configurations (i.e. (1) segmentation models on (2) the NJXN board or using (3) MQTT and TCPROS communication protocols over (4) 5G or WiFi networks). We observe the maximum speed of our E5SH system with a setting combining MQTT communication over the 5G network with the Detectron2 model, whose segmentation is approximately 18.7 times faster than the embedded counterpart (Fig. <ref>), while the D2Go-8 and D2Go-32 models are 4.8 and 4.9 times faster. When the cumulative delays are included, Detectron2, D2Go-8 and D2Go-32 on 5G-MQTT are 11.2, 5.3 and 5.2 times faster than the NJXN board, respectively. § PRELIMINARY ANALYSIS OF POTENTIAL FUTURE WORK The centralised server-based segmentation opens several interesting aspects for comparison around the sustainability of our solution.
In this section, we highlight comparative energy consumption and emissions, the scalability of serving multiple robots, and a cost-benefit analysis of the E5SH and NJXN systems. A supplementary comparative sustainability study was performed alongside the core experiments of this work and is presented here. §.§ Energy consumption and emission As energy consumption and CO_2 emissions are increasingly important, we also studied the sustainability aspect of our E5SH system. We extrapolated the energy consumption of twelve robots from the measurements for one and three robots. Here, we report power consumption and CO_2 emission per process, measured with the energyusage package <cit.>. We considered the total emission of the United Kingdom <cit.> as a reference. Fig. <ref> shows the power consumption. On the edge-server, a single segmentation process running Detectron2 consumes 33.6 watts of power and emits 98 mg CO_2. Adding another process to serve a second robot results in 48.3 watts and 128 mg CO_2, while the three-robot case consumes 59.7 watts and emits 155 mg CO_2. For twelve robots, 240 watts and 200 watts are consumed for Detectron2 and D2Go, respectively, equivalent to 620 mg and 500 mg CO_2 emission. On the other hand, the power requirement reaches 110 watts in total when twelve robots each have a standalone NJXN board running Detectron2, and approximately 95 watts for the D2Go models, equivalent to 300 mg and 240 mg CO_2, respectively. The trend shown in Fig. <ref> (a) confirms that the ratio decreases as the number of robots serviced by the edge-server increases. The ratio is above three for all models when one robot is served by one NJXN board versus the server, and it is under 2.4 for five robots. Although NJXN boards are optimised for low-energy onboard computing, the increased robot energy use caused by slower computing on the NJXN board is another factor supporting the use of E5SH. We also conducted a cost-benefit analysis of the computing devices. The edge-server's approximate recent cost is around £2,000, and each GPU unit costs approximately £250, whereas an NJXN board costs around £300. Our multi-robot case study requires four GPUs, but twelve NJXN boards, to serve twelve robots. Fig. <ref> (b) shows that the server setup is cost-efficient compared to standalone NJXN boards when ten or more robots are served. Additionally, maintaining and administering a fleet of robots each with a standalone computing board is not scalable and is a commercially less viable solution. On the other hand, a failure of one NJXN board would affect only a single robot, whereas a server failure would affect all twelve robots. In our proposed setup, we employed the edge-server only for semantic segmentation and performed the OctoMap generation on a robot-mounted laptop, which hosts the MoveIt framework of ROS. The MoveIt framework could also be deployed to the edge-server in the future to reduce the number of robot-mounted computing devices. Our edge-server configuration could be improved, and a GPU such as the Nvidia RTX-4090 could be installed to overcome the Detectron2 segmentation frame-rate bottleneck; this could lead to greater than 20 FPS performance. A multi-server setup could also be tested in the future to evaluate the scaling potential in a real scenario. § DISCUSSION AND CONCLUSION Precision, speed, and computation cost are three factors relevant to a commercially viable selective harvesting robotic solution.
Precise segmentation, 3-D localisation, and a complete scene map are computationally expensive, and more precise segmentation models are usually more computationally demanding. As such, there is no scalable solution for near real-time manipulation actions, and cheaper, more energy-efficient computing boards are widely used as a computation trade-off. We proposed the E5SH (edge-server over 5G selective harvesting) system, enabling near real-time computation of segmentation, 3-D localisation, and motion planning for practical strawberry-picking actions. Our system transmits RGB-D images to the edge-server over a 5G network employing the MQTT and TCPROS communication protocols. After segmentation on the server, labels are returned to the robot for further action planning. Compared to our prior work <cit.>, which picks a strawberry and places it gently into a punnet in 25 seconds on average, E5SH can make the picking process almost 9 seconds faster; the resulting cycle time is only 5 seconds away from our 11-second human picking speed target. We conducted field tests with three different segmentation models and concluded that Detectron2 outperforms D2Go-8 and D2Go-32 in terms of quality. D2Go yields inferior performance when segmenting rigid obstacles. This is a critical element of generating a collision map, as an imprecise collision estimate can damage the robot end-effector. Detectron2, in contrast, performs well in segmenting challenging obstacles. The quality of strawberry segmentation of Detectron2 is also better than that of D2Go; therefore, it is well suited for 3D localisation and collision map generation. Nonetheless, Detectron2 requires heavy computation; hence, it is much slower on NJXN boards and unsuitable for them. Interestingly, Detectron2 on the edge-server segments at 12.2 FPS. The network communication adds an overhead delay, which drops the average frame rate to 8.6, still an 18-fold gain compared to the NJXN board. We also set up a multi-robot experiment to quantify the carbon footprint and analyse the cost-benefit of deploying a server. The ratio of power consumption and respective carbon emission for the edge-server serving up to 12 robots to that of robots with NJXN boards decreases as the number of robots increases. We see a sharp reduction in this ratio from one to four robots. Even though this ratio is about 2 for 12 robots, the hidden energy cost of the robot idling while waiting for the computation to produce an action should be considered in future work. Furthermore, embedded devices are prone to shorter lifetimes since they endure harsher operating conditions (e.g. exposure to weather change, moisture and shocks due to motion). We finally show the cost-benefit of deploying a server solution that may serve up to twelve robots with a PC having four GPUs: if our server serves ten or more robots, the cost of individual NJXN boards surpasses the server's cost. When three robots are served through a single GPU, the overall frame rate of the system drops from 8.6 to 5.7. It is worth noting that we can increase this frame rate by allocating more GPUs to the edge computing machine. The network communication was performed over 5G and WiFi networks. Our network latency tests confirm that the instability of the 5G network is lower than that of its WiFi counterpart, showing that 5G is a stable and well-suited mode of communication for harvesting operations. Two different protocols, MQTT and TCPROS, were tested, using different QOS configurations for MQTT.
The throughput results confirm that MQTT outperforms TCPROS in network throughput from both the server and the client side. QOS0 was the optimal configuration for MQTT communication. Edge computing and cloud computing are emerging technologies with future improvements in cost and speed expected. Our study provides a baseline highlighting the advantages of edge computing over 5G in agri-robotics. To deliver robots at human cost parity, the ultimate goal of crop harvesting robotic systems, we suggest that future robotic crop harvesting systems deployed at scale may require substantial network architecture, investment, and infrastructure. The source code developed during this project can be found at <cit.>. The machine learning models Detectron2 (Mask-RCNN) and the D2Go-32 and D2Go-8 versions, field-test videos and test datasets are shared at <cit.>. § ACKNOWLEDGEMENTS The authors are grateful to United Kingdom Research and Innovation (UKRI) for supporting this research through several grants. These grants are provided under the UKRI Research England Lincoln Agri Robotics programme, the UKRI Research England CERES Agri-tech AI Unleashed programme, Robofruit, the UKRI Innovate UK Fastpick grant no. 99863 and the UKRI EPSRC AgriForwards CDT. We are also thankful to the staff at the University of Lincoln, particularly Joni Appleton for project management and Swati Megha, Luke Mahoney and Jonathan Trotter for facilitation during field tests. IEEEtran § APPENDIX §.§ Normality test of network communication data We compute the Shapiro-Wilk statistic (W) on the network communication data. If W is close to 1.0, the data is likely normal. If p < 0.01, we have 99% confidence in the result, i.e. there is less than a 1% probability that the observed statistic W occurred by chance under normality. Tables <ref>–<ref> show the normality tests for the MQTT and TCPROS protocols with different QoS configurations at the client and server sides, respectively. The entries in the tables below that are in bold do not pass the Shapiro-Wilk test for normality. §.§ Operational parameters of 5G network The 5G configuration has limitations: the current Time Division Duplexing (TDD) pattern and carrier bandwidth, shown in Table <ref>, are fixed, and the 5G frequency range is subject to UK OFCOM licensing[Link - <https://www.ofcom.org.uk/__data/assets/pdf_file/0016/103309/uk-fat-2017.pdf>]. Here, DL and UL stand for downlink and uplink, respectively, and are typically used to denote throughput speed or to refer to modulation.
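As a minimal illustration of the appendix's normality check, the following sketch applies scipy's Shapiro-Wilk test to a vector of throughput samples; the sample values are placeholders, not measurements from this paper.

```python
import numpy as np
from scipy import stats

# Placeholder throughput samples (kB/s); real values would come from the logs.
samples = np.array([2391.0, 2402.5, 2388.7, 2410.2,
                    2399.8, 2395.1, 2407.3, 2392.6])

w_stat, p_value = stats.shapiro(samples)
print(f"W = {w_stat:.4f}, p = {p_value:.4f}")

# W close to 1.0 suggests normality; p < 0.01 rejects normality at the 99% level.
if p_value < 0.01:
    print("Normality rejected (entry would appear in bold in the tables).")
```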
http://arxiv.org/abs/2407.02335v1
20240702150519
CALICO: Confident Active Learning with Integrated Calibration
[ "Lorenzo S. Querol", "Hajime Nagahara", "Hideaki Hayashi" ]
cs.LG
[ "cs.LG", "cs.AI", "cs.CV" ]
CALICO: Confident Active Learning with Integrated Calibration Lorenzo S. Querol1 Hajime Nagahara1,2 Hideaki Hayashi1 1Osaka University, Japan 2Premium Research Institute for Human Metaverse Medicine (WPI-PRIMe) {lorenzoquerol}@is.ids.osaka-u.ac.jp {nagahara,hayashi}@ids.osaka-u.ac.jp July 8, 2024 ============================================================== § ABSTRACT The growing use of deep learning in safety-critical applications, such as medical imaging, has raised concerns about the limited availability of labeled data; this demand is amplified as model complexity increases, posing hurdles for domain experts annotating data. In response, active learning (AL) is used to efficiently train models with limited annotation costs. In the context of deep neural networks (DNNs), AL often uses confidence or probability outputs as a score for selecting the most informative samples. However, modern DNNs exhibit unreliable confidence outputs, making calibration essential. We propose an AL framework that self-calibrates the confidence used for sample selection during the training process, referred to as Confident Active Learning with Integrated CalibratiOn (CALICO). CALICO incorporates the joint training of a classifier and an energy-based model, instead of the standard softmax-based classifier. This approach allows for simultaneous estimation of the input data distribution and the class probabilities during training, improving calibration without needing an additional labeled dataset. Experimental results showcase improved classification performance compared to a softmax-based classifier with fewer labeled samples. Furthermore, the calibration stability of the model is observed to depend on the prior class distribution of the data. § INTRODUCTION The growing complexity of modern deep neural networks (DNNs) poses a challenge by demanding a substantial increase in the labeled data needed to achieve state-of-the-art performance <cit.>. In real-world applications, obtaining labeled data is a logistically expensive process. This challenge is particularly profound in medical imaging, where images are complex and difficult to interpret, requiring domain experts with clinical experience. This implies that a longer turnaround time is needed to finalize ground-truth annotations <cit.>. Given this time-consuming process, the imbalance between labeled and unlabeled data samples becomes aggravated. To tackle this issue, methods have been devised to optimize data efficiency in training models. Active learning (AL), or human-in-the-loop learning <cit.>, is one of the existing methods that aims to reduce the need for extensive labeled data. The intuition behind AL is to involve human knowledge in the learning process by iteratively selecting samples that are considered most informative by some heuristic function <cit.>. A subset of these samples is then given to a domain expert for annotation, thereby maximizing the performance of the model while concurrently minimizing the annotation costs. In AL, the confidence outputs of a DNN are commonly used to select the most informative samples <cit.>. In classification using DNNs, confidence is typically defined by the maximum value of the class posterior probabilities (i.e., max_c p(c | x) for class c given an input sample x) calculated by the softmax function of the final layer.
A lower confidence level indicates greater uncertainty in the model's prediction, making it more beneficial to label such samples. Consequently, these low-confidence samples are prioritized as the most informative for selection. However, the straightforward method of using a softmax-based classifier has been shown to produce uncalibrated outputs <cit.>. The problem arises from the characteristic of the cross-entropy loss function used in DNN training, where the loss decreases as the model's confidence approaches one. This happens when the posterior probability for a particular class approaches one while diminishing to zero for the other classes. As a result, the classifier may erroneously exhibit high confidence even for input samples that are difficult to classify. This phenomenon is commonly referred to as the over-confidence issue. In the context of an AL paradigm, uncalibrated confidence outputs could impede reliable decision-making during the selection of informative samples. Consequently, this may lead to poor performance of the learned model on unseen data. Hence, a concept called confidence calibration in neural networks becomes a crucial aspect of developing modern intelligent systems. Confidence calibration is defined as the agreement of a model's accuracy with its predictive confidence <cit.>. For instance, in scenarios where a classifier provides 100 predictions, each with 95% confidence, it is statistically expected that 95 of those predictions should be correct. In general, the calibration of neural networks is performed in a post-hoc manner, which calibrates the confidence output of the model after training. A common method for this is temperature scaling, which modifies the softmax function in the final layer by incorporating a temperature parameter, thereby calibrating the confidence output. However, post-hoc methods typically require a separate labeled dataset, which is inefficient in the context of AL as it consumes the already limited labeled data. To calibrate the confidence output during the AL loop without relying on validation samples, our key idea involves leveraging the distribution of unlabeled training data. This is achieved through the simultaneous learning of a classifier and a generative model. Fig. <ref> outlines the advantages of this joint learning approach. Training a classifier in isolation often leads to inaccurately high posterior probability near the decision boundary. However, by concurrently estimating the input data distribution with a generative model, the classifier is trained to account for the frequency of data occurrences. This approach naturally lowers the posterior probabilities for ambiguous data points near the decision boundary, leading to an inherent self-calibration of confidence levels. We propose an AL framework designed to self-calibrate confidence during the training process. The method, called CALICO (Confident Active Learning with Integrated CalibratiOn), involves the joint training of a neural network-based classifier with an energy-based model (EBM) <cit.> in a semi-supervised manner. This joint training enhances the model's understanding of the input data distribution, thereby calibrating the confidence outputs. The key idea is to use these calibrated confidence outputs as input for a query strategy, specifically using the least confidence strategy. This approach enhances decision reliability in selecting samples for annotation, with the overall goal of minimizing model miscalibration and improving accuracy with a minimal number of samples. For contrast with this train-time approach, a minimal sketch of post-hoc temperature scaling is given below.
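The following PyTorch-style sketch is our illustrative rendering of the post-hoc temperature scaling described above; it is not part of the CALICO pipeline, and it assumes a held-out labeled validation set providing (logits, labels), which is exactly the requirement CALICO avoids.

```python
import torch

def fit_temperature(logits: torch.Tensor, labels: torch.Tensor) -> float:
    """Tune a single temperature T > 0 on validation logits by minimizing NLL."""
    log_t = torch.zeros(1, requires_grad=True)  # optimize log T to keep T positive
    optimizer = torch.optim.LBFGS([log_t], lr=0.1, max_iter=50)

    def closure():
        optimizer.zero_grad()
        # Calibrated probabilities are softmax(logits / T); NLL = cross-entropy.
        loss = torch.nn.functional.cross_entropy(logits / log_t.exp(), labels)
        loss.backward()
        return loss

    optimizer.step(closure)
    return log_t.exp().item()

# At test time, confidence is max softmax(logits / T) instead of max softmax(logits).
```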
The contributions of this paper are as follows: * We propose an AL framework termed CALICO, which is designed to self-calibrate confidence during the training process. Our method involves joint training of a classifier and an EBM to achieve confidence calibration without separate validation samples, utilizing the calibrated confidence outputs to select the most informative samples. * We demonstrate that the self-calibration approach, which involves simultaneous learning of a classifier and a generative model, is effective for AL in terms of improving accuracy and decreasing calibration error using fewer labeled data than straightforward baseline methods. * We reveal the potential of class distribution balancing to enhance CALICO's performance on datasets with class imbalance. § BACKGROUND & RELATED WORKS §.§ Active Learning According to <cit.>, allowing a learnable algorithm to select the data from which it learns can lead to more effective learning. In other words, AL seeks to maximize neural network performance with a smaller selection of data. As shown in Fig. <ref>, a typical AL scenario involves an active learner (model) continuously seeking new samples from a large pool of unlabeled data and inquiring the oracle (domain expert) for ground-truth annotations. The goal is to achieve a specific target performance, such as accuracy, while minimizing labeling costs. AL has been widely utilized in traditional machine learning tasks <cit.>, but the emerging use of DL, which achieves superior results in various tasks <cit.>, comes with an increasing dependence on the amount of labeled data gathered. Acquiring labeled data in task-specific or real-world problems is laborious and time-consuming; thus, methods such as AL have become more relevant through their use in combination with DL, commonly referred to as deep AL <cit.>. Sample acquisition can be categorized into three main frameworks: membership query synthesis, stream-based sampling, and pool-based sampling. In real-world scenarios, it is common to acquire a large batch of unlabeled data at once, prompting the use of pool-based sampling <cit.>. The pool-based sampling framework assumes that there is a limited amount of labeled data and a more extensive pool of unlabeled data. To guide the active learner in selecting which data points to request labels for, samples are selected greedily via a query strategy <cit.>. The labels for the query are obtained by inquiring the oracle, and the data pools are consequently updated. The model is then retrained iteratively until a certain metric reaches a target value or the unlabeled data pool is exhausted. Various common query strategies, such as maximum entropy, margin, least confidence, and mean standard deviation-based approaches, have been established in AL. Moreover, the rise of frameworks like generative adversarial networks and Bayesian deep learning <cit.> has also contributed to the development of enhanced query strategies in this field. Despite extensive research on query strategies in AL, extending these methods to DL remains challenging, especially due to model uncertainty and inadequate labeled data <cit.>. The softmax response often used to derive class probabilities in DNNs exhibits overconfidence <cit.>, posing possible difficulties in evaluating unlabeled data. This observation has attracted research attention towards deep Bayesian AL <cit.>. Furthermore, recent advances in uncertainty quantification have received interest for their use in AL paradigms <cit.>.
However, these strategies frequently result in increased computation time for model training and inference, and may require altering the model architecture itself. To address the dilemma of limited labeled data in deep learning, one approach mitigates the issue by merging semi-supervised learning with AL <cit.>. This strategy uses unlabeled data to gather more information about the data distribution and improves the model's overall performance. §.§ Energy-Based Models An EBM is a type of generative model that directly models a negative log-probability, also known as the energy function. The model's probability density function is derived by normalizing the energy function, expressed as p_θ(x) = exp(-E_θ(x))/Z_θ. In this formulation, E_θ is the energy function parameterized by θ, and Z_θ is the normalizing constant, computed as Z_θ = ∫_xexp(-E_θ(x)) dx. A key challenge in working with EBMs is that this integral for the normalizing term is typically intractable. To estimate it, advanced sampling techniques are often employed, such as Markov chain Monte Carlo (MCMC) and stochastic gradient Langevin dynamics (SGLD) <cit.>. EBMs are utilized in a vast range of applications such as image generation <cit.>, texture generation <cit.>, and text generation <cit.>. EBMs have also been applied in the context of dropout and pruning within NNs <cit.>. Additionally, EBMs can be utilized for the complex problem of continuous inverse optimal control <cit.>. There have been several approaches to simultaneously training an EBM and a classifier <cit.>, revealing the effectiveness of EBMs for semi-supervised learning, outlier detection, and confidence calibration. §.§ Confidence Calibration Formally, considering a dataset with input x and outcome y∈{1,…, K}, a neural network is considered perfectly calibrated if: p(Y=y|ĉ=c)=c, ∀ c∈[0,1] Here, ĉ represents the probability of a predicted label Y, and y is the ground-truth label. Model calibration is frequently depicted using reliability diagrams <cit.>, where the expected accuracy is plotted as a function of confidence. Perfect calibration corresponds to the identity function, and any deviation from the diagonal signifies miscalibration. The literature reveals that modern, increasingly complex neural networks often exhibit poor calibration <cit.>. Therefore, various methods have been proposed to calibrate confidence. Calibration methods can be classified into two categories: post-hoc calibration and train-time calibration. Post-hoc calibration is performed after training. It has been the primary approach, and many methods have been proposed <cit.>. Among such methods, temperature scaling <cit.> is considered simple and effective. It uses a softmax function with temperature instead of the usual softmax in the final layer of the NN and tunes the temperature parameter to optimize the negative log-likelihood. In contrast, train-time calibration is performed during training, which involves simultaneous learning with generative models <cit.> or the use of soft labels <cit.>. The main difference between post-hoc and train-time methods is whether validation data is used to calibrate the confidence. § CALICO: CONFIDENT ACTIVE LEARNING WITH INTEGRATED CALIBRATION The proposed CALICO achieves efficient learning by utilizing calibrated confidence when selecting samples to be labeled.
Confidence calibration is performed during the AL process by simultaneously estimating the input data distribution with an EBM and the class posterior probabilities with a classifier. §.§ Algorithm of CALICO The details of CALICO are described in Algorithm <ref>. Suppose that we have a limited amount of labeled data 𝒟_l={(x_i,y_i)}^M_i=1 and a more extensive pool of unlabeled data 𝒟_u={x_i}^N_i=1, where M<N, and y_i∈{1,…, K} is the class label of input x_i. The model f_θ is trained over both 𝒟_l and 𝒟_u. The use of unlabeled data is enabled by simultaneous learning with an EBM, and this point differs from traditional AL processes. The query 𝒟_q is a subset of 𝒟_u and is selected based on a predefined query size Q and query strategy α^LC, which is detailed in the next subsection. The labels for 𝒟_q are obtained by querying the oracle g, and 𝒟_q with the obtained labels is denoted as g(𝒟_q). Subsequently, both data pools are updated accordingly, where 𝒟_l←𝒟_l∪ g(𝒟_q) and 𝒟_u←𝒟_u∖𝒟_q. The model f_θ is then retrained iteratively until a certain metric reaches a target value or the unlabeled data pool 𝒟_u is exhausted. §.§ Query Strategy We utilize a least confidence strategy <cit.>. The least confidence query strategy is designed to acquire samples with the smallest probability among the maximum activations. Given an unlabeled data pool 𝒟_u, a trained model f_θ, and the query size Q, the strategy α^LC is defined as follows: * Compute the posterior probability p_θ(y |x) for all x ∈𝒟_u based on (<ref>). * Obtain the corresponding confidence max_y (p_θ(y|x)). * Sort the samples in 𝒟_u in ascending order with respect to the confidence. * The strategy α^LC returns the top Q samples. Based on this strategy, a set of samples is selected from the unlabeled data pool to query the oracle for annotation. §.§ Joint Learning of a Classifier and an Energy-based Model Among various methods proposed for joint learning of a classifier and an EBM <cit.>, we construct our model f_θ with reference to the structure of JEM <cit.>. The model consists of a single neural network with a multi-head output for classification and the EBM. Given an input x∈ℝ^D, the model first outputs a real-valued K-dimensional vector, i.e., f_θ:ℝ^D→ℝ^K. The vector is then converted into the class posterior probability in the classification head and the probability density of the input data in the EBM head. In the classification head, the posterior probability of class y ∈{1, …, K} is calculated through the standard softmax transfer function, as defined below: p_θ(y|x)=exp(f_θ(x)[y])/∑_y'=1^Kexp(f_θ(x)[y']), where f_θ(x)[y] indicates the logit corresponding to the y-th class. In the EBM head, the probability density p(x) is computed as follows: p_θ(x)=∑_y=1^Kexp(f_θ(x)[y])/Z(θ), where Z(θ)=∫_x∑_y=1^Kexp(f_θ(x)[y])dx is the normalizing constant, otherwise known as the partition function. In the training of the hybrid model, we minimize the following loss function over the union of the labeled and unlabeled datasets {(x_i, y_i)}_i=1^M∪{x_i}_i=M+1^M+N: ℒ= -∑_i=1^Mlog p(y_i |x_i) -∑_i=1^M+Nlog p(x_i), where the first term on the right-hand side corresponds to cross-entropy and is optimized via conventional stochastic gradient descent. The second term is the negative log-likelihood for the EBM optimization, whose gradient can be computed using stochastic gradient Langevin dynamics (SGLD). As a distinction from the original JEM training algorithm, we incorporate techniques from recent EBM studies to stabilize and accelerate training; before detailing these techniques, the two heads and the joint objective are sketched below.
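A minimal PyTorch-style sketch of the two heads and the joint objective (the helper names are ours, and the SGLD chain producing `logits_sampled` is omitted for brevity):

```python
import torch
import torch.nn.functional as F

def heads_from_logits(logits):
    """Both heads from the shared logits f_theta(x) of shape (B, K).

    Classification head: log p(y|x) = log_softmax(logits).
    EBM head: log p(x) = logsumexp(logits) - log Z(theta); since Z does not
    depend on x, logsumexp(logits) serves as an unnormalized log-density.
    """
    log_p_y_given_x = F.log_softmax(logits, dim=1)
    unnorm_log_p_x = torch.logsumexp(logits, dim=1)
    return log_p_y_given_x, unnorm_log_p_x

def joint_loss(logits_labeled, labels, logits_real, logits_sampled):
    """Cross-entropy on labeled data plus the EBM negative log-likelihood.

    The intractable NLL gradient is approximated contrastively: the term
    below raises the energy (lowers logsumexp) of SGLD samples and lowers
    the energy of real (labeled + unlabeled) data.
    """
    ce = F.cross_entropy(logits_labeled, labels)
    nll_ebm = torch.logsumexp(logits_sampled, dim=1).mean() \
            - torch.logsumexp(logits_real, dim=1).mean()
    return ce + nll_ebm
```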
First, we adopt the informative initialization <cit.>, which uses samples from a Gaussian mixture distribution estimated from the training dataset instead of random noise samples for initializing the SGLD chain. Second, we employ Proximal-YOPO-SGLD <cit.>, which freezes the gradient with respect to the second and subsequent layers during the sample updates. Third, we exclude data augmentation from the maximum likelihood estimation pipeline to alleviate the adverse effects of data augmentation on image generation quality <cit.>. § EXPERIMENTS §.§ Experimental conditions To verify the validity of CALICO, we conducted experiments using medical image datasets. We used five benchmark medical imaging datasets found in the MedMNIST collection <cit.>, namely Blood, Derma, OrganS, OrganC, and Pneumonia. These medical imaging datasets consist of preprocessed 28×28 two-dimensional images, accompanied by their corresponding class labels. The classification tasks within these datasets range from binary to multi-class, serving as benchmarks for foundational models in the medical imaging domain. The model closely followed the setup of <cit.>. However, we changed the ReLU activation function used by the Wide-ResNet architecture to a Swish activation function for the added stability observed by <cit.>. As the computational training time of JEM++ is also relatively long, we limited the number of queries for all datasets (more details in Appendix). We evaluated the results based on classification accuracy and calibration errors. Miscalibration can be condensed into a convenient scalar metric, often quantified as the expected calibration error (ECE) <cit.>. This metric discretizes probability intervals into a fixed number of bins and the calibration error is calculated as the difference between the fraction of correct predictions (accuracy), and the mean of the probabilities in the bin (confidence). ECE computes a weighted average of this error across bins: ECE=∑_m=1^M_bin|ℬ_m|/N_data|acc(ℬ_m)-conf(ℬ_m)| where ℬ_m represents the subset of samples whose predicted confidence levels lie within the interval I_m = (m-1/M_bin, m/M_bin], N_data is the total number of data points, and acc(ℬ_m) and conf(ℬ_m) denote the accuracy and confidence of ℬ_m, respectively. For this study, we utilize ECE as the primary metric for measuring calibration, as well as reliability diagrams for visualization. We establish the baseline reference by maximizing only the logp(y|x) objective through the utilization of the entire dataset, with a softmax-based classifier (named Baseline), involving the evaluation of calibration error using the probability outputs from the softmax activation function and calculating the ECE. We also used a softmax-based classifier paired with an AL framework using the least confidence query strategy (named Active) as an additional baseline reference. §.§ Performance Comparison Table <ref> shows that CALICO consistently outperformed the baseline accuracy across all evaluated datasets. In parallel with the accuracy improvements, CALICO also showed reduced ECEs, indicating its effectiveness in minimizing miscalibration. We observed that using a softmax-based classifier within an AL paradigm, as opposed to straightforward training methods, resulted in better calibration. However, CALICO demonstrated a more substantial increase in performance when compared to the baseline, suggesting its effectiveness in improving the classifier's performance in an AL paradigm. 
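For reference, the ECE defined above can be computed in a few lines; a minimal NumPy sketch (the default bin count and the names are our own choices, not the authors' code):

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=15):
    """ECE: weighted average of |accuracy - confidence| over confidence bins.

    confidences: (N,) max softmax probability per prediction.
    correct:     (N,) 1 if the prediction was right, else 0.
    """
    confidences = np.asarray(confidences)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece, n = 0.0, len(confidences)
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)  # interval (lo, hi]
        if in_bin.any():
            acc = correct[in_bin].mean()    # acc(B_m)
            conf = confidences[in_bin].mean()  # conf(B_m)
            ece += (in_bin.sum() / n) * abs(acc - conf)
    return ece
```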
While CALICO generally achieved lower ECE values across most datasets, the use of a softmax-based classifier in an AL paradigm demonstrated comparable efficacy to CALICO in some cases, such as Derma and OrganS. §.§ Confidence Calibration The final reliability diagrams are depicted in Fig. <ref>. A perfectly calibrated model would align with the red-diagonal dashed line, effectively creating an identity function between the y-axis (accuracy) and the x-axis (confidence intervals). Any deviation above or below this diagonal indicates underconfidence or overconfidence, respectively, representing miscalibration. It is prominent that there is a marginal difference in the improvement of overconfident intervals between using softmax-based classifiers and CALICO, emphasizing the effectiveness of calibrated confidence outputs for iterative training in an AL paradigm. §.§ Learning Curve Analysis Learning curves play a crucial role in assessing how well a model performs in an AL paradigm, particularly by observing a specific metric with respect to the number of labeled samples. In this study, the ECE is of certain importance. Similar to the commonly used accuracy, the goal is to determine whether miscalibration can be minimized with significantly fewer labeled samples through the utilization of JEMs. As illustrated in Fig. <ref>, the overall calibration trend using CALICO is observed to be lower than that of the softmax-based classifiers. Additionally, it is noteworthy that a lower or comparable ECE can be achieved with fewer samples in comparison to the baseline reference. §.§ Performance Comparison of Test Accuracy and ECE Values with an Equal Class Distribution We explored the impact of class distribution balancing on CALICO's performance. While many confidence calibration studies emphasize calibration error within balanced class distributions, achieving such balance is not guaranteed in AL due to sampling methods' greedy nature. Calibration learning curves (Fig. <ref>) exhibited instability, possibly due to overconfidence from uncertainty-based sampling. Additionally, inherent dataset class imbalances can lead to querying uninformative samples, creating a mode collapse problem and miscalibration. To simulate literature setups, CALICO's performance was evaluated with an equal class distribution (named Equal), enforcing limits based on the dataset's least represented class. Varying labels per class allowed for sufficient iterations to analyze learning curves. Experimental setup details are provided in Table <ref> of Appendix. We observed instances where having an equal class distribution yielded better calibration or more stable learning curves across the evaluated datasets, such as the results on the PneumoniaMNIST dataset. However, Table <ref> also highlights instances where equal class distribution did not result in better calibration compared to the original CALICO. One possible explanation for this disparity is the nature of the datasets; PneumoniaMNIST is an imbalanced binary dataset, while others are relatively balanced multi-class datasets, and balancing the class ratio in sample selection facilitated effective learning of information from the minority class. These findings imply the potential of class distribution balancing to enhance CALICO's performance on imbalanced datasets. § CONCLUSION We proposed an AL method called CALICO, which aimed to use the calibrated confidence outputs as the input for a query strategy in an AL paradigm. 
CALICO incorporates the joint training of a classifier and an EBM, allowing self-calibration of the confidence used for sample selection in AL. Experimental results demonstrated that CALICO outperformed the baseline accuracy and achieved a lower ECE with less labeled data, compared to a softmax-based classifier. One of the limitations of this study is scalability, because training an EBM on high-resolution images requires considerable hyperparameter tuning, and CALICO was only evaluated on small-resolution images. Future research could explore the application of CALICO with other AL methods, harnessing the power of Bayesian approaches for better uncertainty quantification. Additionally, further evaluation on larger datasets in other domains could broaden the scope and applicability of this line of research. § APPENDIX Experimental Setup To ensure consistency across all experiments, the computational runtime constraints necessitated limiting each dataset to 4000 labeled samples, with a query size of 250 for each iteration, resulting in a total of 16 iterations. In the ablation study that focused on an equal class distribution, the experimental setup is detailed in Table <ref>. The number of labeled samples per class for each iteration was determined by the class with the fewest samples, to create enough iterations for analysis. The only exception was the DermaMNIST dataset, whose least represented class contains only 89 samples; it was therefore excluded from the ablation study. Hyperparameter Settings All datasets, except for PneumoniaMNIST, adopted the default hyperparameters from the original literature of JEM++ <cit.>. This included using an SGD optimizer with a learning rate of 0.1. However, for PneumoniaMNIST, an Adam optimizer with a learning rate of 0.0001 was utilized, as it exhibited a more stable calibration performance. §.§.§ Acknowledgments This work was supported by JSPS KAKENHI Grant Number JP24K03010 and the World Premier International Research Center Initiative (WPI), MEXT, Japan.
http://arxiv.org/abs/2407.02217v1
20240702123257
Physics-Informed Model and Hybrid Planning for Efficient Dyna-Style Reinforcement Learning
[ "Zakariae El Asri", "Olivier Sigaud", "Nicolas Thome" ]
cs.LG
[ "cs.LG", "cs.AI" ]
§ ABSTRACT Applying reinforcement learning (RL) to real-world applications requires addressing a trade-off between asymptotic performance, sample efficiency, and inference time. In this work, we demonstrate how to address this triple challenge by leveraging partial physical knowledge about the system dynamics. Our approach involves learning a physics-informed model to boost sample efficiency and generating imaginary trajectories from this model to learn a model-free policy and Q-function. Furthermore, we propose a hybrid planning strategy, combining the learned policy and Q-function with the learned model to enhance time efficiency in planning. Through practical demonstrations, we illustrate that our method improves the compromise between sample efficiency, time efficiency, and performance over state-of-the-art methods. Code is available at <https://github.com/elasriz/PHIHP/> § INTRODUCTION Reinforcement learning (RL) has proven successful in sequential decision-making tasks across diverse artificial domains, ranging from games to robotics <cit.>. However, this success has not yet been evident in real-world applications, where RL is facing many challenges <cit.>, especially in terms of the sample efficiency and inference time needed to reach a satisfactory performance. A limitation of existing research is that most works address these three challenges – sample efficiency, time efficiency, and performance – individually, whereas we posit that addressing them simultaneously can benefit from useful synergies between the leveraged mechanisms. Concretely, on one side Model-Free Reinforcement Learning (MFRL) techniques excel at learning a wide range of control tasks <cit.>, but at a high sample cost. On the other side, Model-Based Reinforcement Learning (MBRL) drastically reduces the need for samples by acquiring a representation of the agent-environment interaction <cit.>, but requires heavy planning strategies to reach competitive performance, at the cost of inference time. A recent line of works focuses on combining MBRL and MFRL to benefit from the best of both worlds <cit.>. In particular, <cit.> combine a learned model and a learned policy in planning; this combination helps improve the asymptotic performance but requires more samples, due to the sample cost of learning a good policy. This paper introduces PhIHP, a Physics-Informed model and Hybrid Planning method in RL. PhIHP improves the compromise between the three main challenges outlined above – sample efficiency, time efficiency, and performance – as illustrated in <ref>. Compared to the state-of-the-art MFRL method TD3 <cit.> and the hybrid method TD-MPC <cit.>, we show that PhIHP provides a much better sample efficiency, reaches higher asymptotic performance, and is much faster than TD-MPC at inference. PhIHP includes a Physics-Informed model and hybrid planning for efficient policy learning in RL.
PhIHP improves the compromise over state-of-the-art methods, model-free TD3 and hybrid TD-MPC, between sample efficiency, time efficiency, and performance. Results averaged over 6 tasks <cit.>. To achieve this goal, PhIHP first learns a physics-informed model of the environment and uses it to learn an MFRL policy in imagination. This policy is used in a hybrid planning scheme. PhIHP leverages three main mechanisms: ∙ Physics-informed model: We leverage an approximate physical model and combine it with a learned data-driven residual to match the true dynamics. This physical prior boosts the sample efficiency of PhIHP and the learned residual improves asymptotic performance. ∙ MFRL in imagination: we preserve the sample efficiency by training a policy in an actor-critic fashion, using TD3 on trajectories generated from the learned model. The reduced bias in the physics-informed model enables to learn an effective policy in imagination, which is challenging with data-driven models, TD-MPC. ∙ Hybrid planning strategy: We incorporate the learned policy and Q-function in planning with the learned model. A better model and policy learned in imagination improve the performance vs inference time trade-off. § RELATED WORK Our work is at the intersection of Model-based RL, physics-informed methods, and hybrid controllers. Model-based RL: Since DYNA architectures <cit.>, model-based RL algorithms are known to be generally more sample-efficient than model-free methods. Planning with inaccurate or biased models can lead to bad performance due to compounding errors, so many works have focused on developing different methods to learn accurate models: PILCO <cit.>, SVG <cit.>, PETS <cit.>, PlaNet <cit.> and Dreamer <cit.>. Despite the high asymptotic performance achieved by model-based planning, these methods require a large inference time. By contrast, by learning a policy used to sample better actions, we can drastically reduce the inference time. Physics-informed methods: Recently, a new line of work attempted to leverage the physical knowledge available from the laws of physics governing dynamics, to speed up learning and enhance sample efficiency in MBRL. <cit.>. However, these methods use the learned model in model predictive control (MPC) and suffer from a large inference time. In this work, we efficiently learn an accurate model by jointly correcting the parameters of a physical prior knowledge and learning a data-driven residual using Neural ODEs. Hybrid controllers: An interesting line of work consists in combining MBRL and MFRL to benefit from the best of both worlds. This combination can be done by using a learned model to generate imaginary samples and augment the training data for a model-free agent <cit.>. However, the improvement in terms of sample efficiency is limited, since the agent remains trained on real data. Recent hybrid methods enhance the planning process by using a policy <cit.>, or a Q-function <cit.> with a learned model. More related to our work, TD-MPC <cit.> combines the last two methods, using a learned policy and a Q-function with a learned data-driven model to evaluate trajectories. TD-MPC jointly trains all components on real samples and learns a latent representation of the world, resulting in improved sample efficiency. However, the need for samples remains significant as they learn a policy from real data. By contrast, we first train a physics-informed model from real samples, and then the policy and the Q-function are trained in imagination. 
In addition, TD-MPC uses an expensive method to optimize sequences of actions, which impacts inference time. By contrast, accurately learning a policy from the physics-informed model reduces the action optimization budget, thereby enhancing time efficiency. § BACKGROUND Our work builds on reinforcement learning and the cross-entropy method. Reinforcement learning: In RL, the problem of solving a given task is formulated as a Markov Decision Process (MDP), that is a tuple (𝒮, 𝒜, 𝒯, ℛ, γ, p(s_0)) where 𝒮 is the state space, 𝒜 the action space, 𝒯=: 𝒮×𝒜→𝒮 the transition function, ℛ: 𝒮×𝒜→ℝ the reward function, γ∈ [0,1] is a discount factor and ρ_0 is the initial state distribution. The objective in RL is to maximize the expected return ∑_t=t_0^∞γ^t-t_0 r_t at each timestep t_0. In model-free RL, an agent learns a policy π_θ: 𝒮→𝒜 that maximizes this expected return. In contrast, in model-based RL, the agent learns a model that represents the transition function 𝒯, then uses this learned model 𝒯̂_̂θ̂ to predict the next state ŝ_t+1 = 𝒯̂_̂θ̂(s_t,a_t). The agent maximizes the expected return by optimizing a trajectory A^* = arg A ∈𝒜max  ∑_t=t_0^∞γ^t-t_0 R(s_t,a_t),    subject to   s_t+1 = 𝒯̂_̂θ̂(s_t,a_t). In practice, this problem is typically solved over a finite horizon H. The agent maximizes the expected return by optimizing a sequence of actions A = {a_t_0, ..., a_t_0+H} over a horizon H: A^* = arg A ∈𝒜^Hmax  ∑_t=t_0^Hγ^t-t_0 R(s_t,a_t),    subject to   s_t+1 = 𝒯̂_̂θ̂(s_t,a_t). Furthermore, using an inaccurate model can degrade solutions due to compounding errors. So, one often solves this optimization problem at each time step, only executes the first action from the sequence, and plans again at the next time step with updated state information. This is known as model predictive control (MPC). Cross Entropy Method (CEM): Since the dynamics and the reward functions are generally nonlinear, it is difficult to analytically calculate the exact minimum of (<ref>). In this work, we use the derivative-free Cross-Entropy Method <cit.> to resolve this optimization problem. In CEM, the agent looks for the best sequence of actions over a finite horizon H. It first generates N candidate sequences of actions from a normal distribution X ∼𝒩(μ, σ^2). Then, it evaluates the resulting trajectories using the learned dynamics model using a reward model and determines the K elite sequences of actions (K < N), that is the sequences that lead to the highest return. Finally, the normal distribution parameters σ and μ are updated to fit the elites. This process is repeated for a fixed number of iterations. The optimal action sequence is calculated as the mean of the K elites after the last iteration. We call CEM budget the size of the population times the number of iterations, this budget being the main factor of inference time in methods that use the CEM. § PHYSICS-INFORMED MODEL FOR HYBRID PLANNING In this section, we describe PhIHP, our proposed Physics-Informed model for Hybrid Planning. PhIHP first learns a physics-informed residual dynamics model (<ref>), then learns a MFRL agent through imagination (<ref>), and uses a hybrid planning strategy at inference (<ref>). PhIHP follows recent hybrid MBRL/MFRL approaches, TD-MPC <cit.>, but the physics-informed model brings important improvements at each stage of the process. It brings a more accurate model, which improves predictive performance and robustness with respect to training data distribution shifts. 
Crucially, it benefits from the continuous neuralODE method (<ref>) to accurately predict trajectories, enabling to learn a powerful model-free agent in imagination (<ref>). Finally, it enables to design a hybrid policy learning (<ref>) optimizing the performance vs time efficiency trade-off. §.§ Learning a physics-informed dynamics model Model-based RL methods aim to learn the transition function 𝒯 of the world a mapping from (s_t, a_t) to s_t+1. However, learning 𝒯 is challenging when s_t and s_t+1 are similar and actions have a low impact on the output, in particular when the time interval between steps decreases. We address this issue by learning a dynamics function 𝒯̂_̂θ̂ to predict the state change Δ s_t over the time step duration Δ t. The next state s_t+1 can be subsequently determined through integration with an Ordinary Differential Equation (ODE) solver. Thus, we describe the dynamics as a system following an ODE of the form: d s_t/d t|_t=t_0 = 𝒯̂_̂θ̂( s_t_0, a_t_0),    and    s_t+1≃ODESolve( s_t, a_t, 𝒯̂_̂θ̂,t,t+Δ t), where s_t and a_t are the state and action vector for a given time t. We assume the common situation where a partial knowledge of the dynamics is available, generally from the underlying physical laws. The dynamics 𝒯̂_̂θ̂ can thus be written as 𝒯̂_̂θ̂ = F_θ_p^p + F_θ_r^r, where F_θ_p^p is the known analytic approximation of the dynamics and F_θ_r^r is a residual part used to reduce the gap between the model prediction and the real world by learning the complex phenomena that cannot be captured analytically. The physical model F_θ_p^p is described by an ODE and the residual part F_θ_r^r as a neural network with respective parameters θ_p and θ_r. We learn the dynamics model in a supervised manner by optimizing the following objective: ℒ_pred(θ) = 1/|𝒟_re|∑_(s_t,a_t,s_t+1)∈𝒟_re‖ s_t+1 - s_t+1‖ ^2_2    subject to d s_t/d t|_t=t' = (F_θ_p^p + F_θ_r^r)( s_t', a_t') , on a dataset 𝒟_re of real transitions (s_t, a_t, s_t+1). As the decomposition 𝒯̂_̂θ̂ = F_θ_p^p + F_θ_r^r is not unique, we apply an ℓ_2 constraint over the residual part with a coefficient λ to enforce the model 𝒯̂_̂θ̂ to mostly rely on the physical prior. The learning objective becomes ℒ_λ(θ) = ℒ_pred(θ) + 1/λ·‖ F_θ_r^r ‖_2. The coefficient λ is initialized with a value λ_0 and updated at each epoch with λ_j+1=λ_j+τ_ph·ℒ_pred(θ), where λ_0 and τ_ph are fixed hyperparameters. §.§ Learning a policy and Q-function through imagination Simply planning with a learned model and CEM is time expensive. MFRL methods are generally more time-efficient during inference time than planning methods, since they use policies that directly map a state to an action. However, learning complex policies requires a large amount of training data which impacts sample efficiency. To maintain sample efficiency, a policy can be learned from synthetic data generated by a model. However, an imperfect model may propagate the bias to the learned policy. In this work, we benefit from the reduced bias in the physics-informed model to generate a sufficiently accurate synthetic dataset 𝒟_im to train a parametric policy π_θ(s_t) and a Q-function Q_θ(s_t,a_t), using the TD3 model-free actor-critic algorithm <cit.>. In TD3, the Q-function is learned by minimizing the following loss function on 𝒟_im: ℒ_Q(θ) = 1/|𝒟_im|∑_(s_t,a_t,r_t,s_t+1)∈𝒟_im‖ y_t - Q(s_t,a_t) ‖ ^2_2    where y_t = r_t + γ· Q(s_t+1,π_θ(s_t+1)), while the policy is trained on 𝒟_im to maximize the expected return 𝒥_π(θ) = 𝔼[ Q(s_t,π_θ(s_t)) ]. 
Actually, TD3 uses two Q-functions and takes the minimum to fight a pervasive over-estimation bias issue in RL algorithms. The training dataset 𝒟_im is initially filled with T' samples generated from the learned model F̂ and random actions from a pure exploratory policy, π_θ and Q_θ are trained by optimizing <ref> and <ref> on batches from 𝒟_im which is continuously filled by samples from the learned model F̂. §.§ Hybrid planning with learned model and policy PhIHP leverages a hybrid planning method that combines a physics-informed model with a learned policy and Q-function. This combination helps overcome the drawbacks associated with each method when used individually. While using a sub-optimal policy in control tasks significantly affects the asymptotic performance, planning with a learned model has a high computational cost: i) the planning horizon must be long enough to capture future rewards and ii) the CEM budget must be sufficiently large to converge. We use the learned policy in PhIHP to guide planning. In practice, a CEM-based planner first samples N_π informative candidates from the learned policy outputs π̂(s_t) and complements them with N_rand exploratory candidates sampled from a uniform distribution X ∼𝒩(μ, σ^2). These informative candidates help reduce the population size and accelerate convergence. The planner estimates the resulting trajectories using the learned model and evaluates each trajectory using the immediate reward function up to the MPC horizon and the Q-value beyond that horizon. By using the Q-value, we can evaluate the trajectories over a considerably reduced planning horizon H and we add the Q-value of the last state to cover the long-term reward. Hence, the optimization problem is written as follows: A^* = arg A ∈𝒜^Hmax( ∑_t=t_0^Hγ^t-t_0 R(s_t,a_t) + α·γ^H-t_0 Q(s_H) ),    subject to   s_t+1 = 𝒯̂_̂θ̂(s_t,a_t), where the discounted sum term represents a local solution to the optimization problem, while the Q-value term encodes the long-term reward and α balances the immediate reward over the planning horizon and the Q-value. § EXPERIMENTS We first compare PhIHP to baselines in terms of performance, sample efficiency, and time efficiency. Then we perform ablations and highlight the generalization capability brought by the physics prior. The robustness of PhIHP to hyper-parameter settings is deferred to Appendix E. §.§ Experimental setup Environments: We evaluate our method on 6 ODE-governed environments from the gymnasium classic control suite. These include the continuous versions of 3 basic environments: Pendulum, Cartpole, and Acrobot. Additionally, we consider their swing-up variants, where the initial state is “hanging down” and the goal is to swing up and balance the pole at the upright position, similarly to <cit.>. We opted for this benchmark for its challenging characteristics, including tasks with sparse rewards and early termination. However, to move closer to methods applicable in a real-world situation, we added to the original environments from the gymnasium suite a friction term which is not present in the analytical model of these environments. Thus, the dynamic of each system is governed by an ODE that can be represented as the combination of two terms: a friction-less component F^p and a friction term F^r. Please refer to Appendix B for additional details. Evaluation metrics. In all experiments, we use three main metrics to compare methods: ∙ Asymptotic performance: we report the episodic cumulated reward on each environment. 
∙ Sample efficiency: we define the sample efficiency of a method as the minimal amount of samples required to achieve 90% of its maximum performance. ∙ Inference time: we report the wall-clock time taken by the agent to select an action at one timestep. Design choice for PhIHP:  We learn the model by combining an approximate ODE describing frictionless motion with a data-driven residual model parameterized as a low-dimension MLP. We use TD3 <cit.> for the model-free component of our method, the policy and Q-function. We found it beneficial to modify the original hyperparameters of TD3 to resolve the friction environments. For planning, we use CEM-based MPC. Please refer to Appendix C for additional details. §.§ Comparison to state of the art: We compare PhIHP to the following state-of-the-art methods: ∙ TD-MPC <cit.>, a state-of-the-art hybrid MBRL/MFRL algorithm shown to outperform strong state-based algorithms whether model-based LOOP <cit.> and model-free SAC <cit.> on diverse continuous control tasks. ∙ TD3 <cit.>, a state-of-the-art model-free algorithm. In addition to its popularity and strong performance on continuous control tasks, TD3 is a backbone algorithm for our method to learn the policy and Q-function. We used the same hyperparameters as in PhIHP. ∙ CEM-oracle: a CEM-based controller with the ground-truth model. In <ref>, <ref> and <ref>, we show that PhIHP outperforms the baselines with a large margin in at least one of the metrics without being worse on the others. Specifically, PhIHP is far more sample efficient than TD3 and it generally shows 5-15 times better sample efficiency than TD-MPC, except on Acrobot where they are comparable. <ref> further illustrates this excellent sample efficiency of PhIHP and how TD3 stacks on sub-optimal performance. This enhanced sample efficiency of PhIHP results from training the model-free policy on imaginary trajectories generated by the learned model, as opposed to using real samples in the baselines. Besides, PhIHP demonstrates superior performance in sparse-reward early-termination environment tasks (Cartpole and Acrobot) compared to TD-MPC, and PhIHP outperforms TD3 with a large margin in Cartpole-swingup, Acrobot, and Acrobot-swingup. Figure 4 in Appendix D.1 shows how TD3 stacks on lower asymptotic performance for the aforementioned tasks. It also shows that TD-MPC performance drops in sparse-reward early-termination environments Cartpole and Acrobot. It also illustrates that, since CEM-oracle uses the reward function to evaluate trajectories within a limited horizon, it manages to solve both tasks with smooth reward functions, and tasks with sparse reward where the goal is to maintain an initial state (Cartpole), but it fails to solve sparse reward problems where the goal is to reach a position out of the planning horizon (Acrobot). Finally, <ref> shows that PhIHP has better performance profiles compared to baselines which indicates better robustness to outliers in PhIHP. <ref> also reports the time needed for planning at each time step, obtained with an Apple M1 CPU with 8 cores. It is noteworthy that PhIHP significantly reduces the inference time when compared to TD-MPC. The inference time is still larger than that of TD3 since the latter is a component of our method, but it meets the real-time requirements of various robotics applications. 
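Before turning to the ablations, a minimal NumPy sketch of the hybrid planning step of Sec. 4 may be useful; the model/reward/policy/Q interfaces, the exploration noise on policy candidates, and all sizes are our own simplifications, not the released implementation:

```python
import numpy as np

def hybrid_plan(s0, model, reward, policy, q_value, horizon=4,
                n_random=20, n_policy=10, n_iters=3, n_elites=5,
                gamma=0.99, alpha=1.0, action_dim=1):
    """One MPC step: sample action sequences, roll them through the learned
    model, and score them with summed rewards over `horizon` plus a
    discounted terminal Q-value; return only the first action."""
    mu = np.zeros((horizon, action_dim))
    std = np.ones((horizon, action_dim))
    for _ in range(n_iters):
        # exploratory candidates from the current CEM distribution
        cand = mu + std * np.random.randn(n_random, horizon, action_dim)
        # informative candidates: roll the learned policy through the model
        seeded = []
        for _ in range(n_policy):
            s, seq = s0, []
            for _ in range(horizon):
                a = policy(s) + 0.05 * np.random.randn(action_dim)
                seq.append(a)
                s = model(s, a)
            seeded.append(np.stack(seq))
        cand = np.concatenate([cand, np.stack(seeded)])
        # score each sequence: local rewards plus long-term value beyond H
        returns = np.empty(len(cand))
        for i, seq in enumerate(cand):
            s, ret = s0, 0.0
            for t in range(horizon):
                ret += gamma**t * reward(s, seq[t])
                s = model(s, seq[t])
            returns[i] = ret + alpha * gamma**horizon * q_value(s)
        elites = cand[np.argsort(returns)[-n_elites:]]
        mu, std = elites.mean(axis=0), elites.std(axis=0) + 1e-6
    return mu[0]  # receding horizon: execute the first action, replan next step
```

The two departures from a vanilla CEM planner are the policy-seeded candidates and the terminal Q-value term, which together allow a short horizon and a small CEM budget.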
§.§ Ablation study In this section, we study the impact of each PhIHP component to illustrate the benefits of using an analytical physics model, imagination learning, and combining CEM with a model-free policy and Q-function for planning. To illustrate this, we compare PhIHP to several methods: ∙ TD-MPC*: our method without physical prior and without imagination. It is similar to TD-MPC since the model is data-driven and it is learned with the policy from real trajectories. But learning the model and the policy are separated. ∙ Ph-TD-MPC*: our method without learning in imagination, thus a physics-informed TD-MPC*. ∙ dd-CEM: our method without physical prior nor policy component, thus a CEM with a data-driven model learned from real trajectories. ∙ Ph-CEM: our method without the policy component, thus a simple CEM with a physics-informed model learned from real trajectories. <ref> shows the impact of the quality of the model on the final performance in MBRL. Precisely, leveraging a physical prior in Ph-CEM and Ph-TD-MPC* shows improvements compared to full data-driven methods, i.e. dd-CEM and TD-MPC*. We also illustrate that planning with a model, a Q-function, and a policy leads to better performance compared to planning only with the model. For instance Ph-TD-MPC* outperforms Ph-CEM and TD-MPC* outperforms dd-CEM. However, this gain in performance comes with a significant cost in samples, because the agent needs a large amount of data to learn a good policy and Q-function. <ref> illustrates the trade-off between asymptotic performance, sample efficiency, and inference time in RL. On one hand, methods that learn a model and directly plan with it (dd-CEM and ph-CEM) do not need many samples to achieve sufficiently good performance, but they are too expensive at inference time. On the other hand, methods that learn to plan with a model, Q-function, and policy plan fast but require many samples to train their policies and Q-functions. PhIHP is the only method that achieves good asymptotic performance with low cost in sample efficiency due to learning in imagination and a good inference time due to hybrid planning. §.§ Generalization benefits of the physics prior In this section, we highlight the key role of incorporating physical knowledge into PhIHP in finding the better compromise between asymptotic performance, sample efficiency, and time efficiency illustrated in <ref>. Actually, learning a policy and Q-function through imagination leads to superior performance only when the model used to generate samples is accurate enough. Figure 5 in Appendix D.3 shows that an agent trained on imaginary trajectories generated with a physics-informed model largely outperforms the same agent using a fully data-driven model and matches the performance of TD3 which is trained on real trajectories. This highlights the capability of the physics-informed model to immediately generalize to unseen data, in contrast to the data-driven model, which poorly predicts trajectories in unseen states. <ref> illustrates this faster generalization capability, showing that the agent with a data-driven model still poorly predicts trajectories even when it meets the asymptotic performance of the agent with the physics-informed model. § CONCLUSION We have introduced PhiHP, a novel approach that leverages physics knowledge of system dynamics to address the trade-off between asymptotic performance, sample efficiency, and time efficiency in RL. 
PhIHP enhances the sample efficiency by learning a physics-informed model that serves to train a model-free agent through imagination and uses a hybrid planning strategy to improve the inference time and the asymptotic performance. In the future, we envision to apply PhIHP to more challenging control tasks where there is a larger discrepancy between the known equations and the real dynamics of the system. rlc § COMPARISON TO EXISTING METHODS In this section, we present a conceptual comparison of PhIHP and existing RL methods. <ref> illustrates the general scheme of existing RL methods and the possible connections between learning and planning. We highlight in <ref> the origin of the well-known drawbacks in RL: i) learning a policy on real data (arrow 1) impacts the sample efficiency, ii) learning a policy from a data-driven learned model (arrow 3) impacts the asymptotic performance due to the bias in the learned model, iii) model-based planning (arrow 4) impacts the inference time. PhIHP benefits from the good sample efficiency of model-based learning methods (arrow 2) and from the physical knowledge to reduce the bias in the learned model. The accurately learned model generates good trajectories to train the policy/value networks (arrow 3). When interacting with the environment, PhIHP uses a hybrid planning strategy (arrows 4 & 5) to improve asymptotic performance and time efficiency. § ENVIRONMENTS In this section, we give a comprehensive description of the environments employed in our work. Across all environments, observations are continuous within [ -S_box , S_box] and actions are continuous and restricted to a [ -a_max , a_max] range. An overview of all tasks is depicted in <ref> and specific parameters are outlined in Table <ref>. Pendulum: A single-linked pendulum is fixed on one end, with an actuator on the joint. The pendulum starts at a random position and the goal is to swing it up and balance it at the upright position. Let θ be the joint angle at time t and θ̇ its velocity, the observation at time t is (θ, θ̇). Pendulum-Swingup: the version of Pendulum where it is started at the "hanging down" position. Cartpole: A pole is attached by an unactuated joint to a cart, which moves along a horizontal track. The pole is started upright on the cart and the goal is to balance the pole by applying forces in the left and right direction on the cart. Cartpole-Swingup: the version of Cartpole where the pole is started at the "hanging down" position. Acrobot: A pendulum with two links connected linearly to form a chain, with one end of the chain fixed. Only the joint between the two links is actuated. The goal is to apply torques on the actuated joint to swing the free end of the linear chain above a given height. Acrobot-Swingup: For the swingup task, we experiment with the fully actuated version of the Acrobot similarly to <cit.>. Initially, both links point downwards at the "hanging down" position. The goal is to swing up the Acrobot and balance it in the upright position. Let θ_1 be the joint angles of the first fixed to a hinge at time t and θ_2 the relative angle between the two links at time t. The observation at time t is (θ_1, θ_2, θ̇_̇1̇, θ̇_̇2̇). §.§ Dynamic functions In this section, we provide details of the dynamic functions. For each task, the dynamic function consists of a frictionless component and a friction term. Pendulum and Pendulum Swingup: Let s_t = (θ, θ̇) be the state and a_t the action at time t. 
The dynamics of the pendulum are described as: F( s_t, a_t)= [ θ̇; θ̈ ] = [ θ̇; C_g· sin(θ) + C_i· a_t + C_Fr·θ̇ ] where C_g is the gravity norm, C_i is the inertia norm and C_Fr is the friction norm. Acrobot and Acrobot Swingup: Let s_t = (θ_1,θ_2, θ̇_1, θ̇_2) be the state and a_t = (a_1, a_2) (a_1 = 0 for the Acrobot environment) the action at time t. The dynamics of the system are similar to <cit.> and read: F( s_t, a_t)= [ θ̇_1; θ̇_2; θ̈_1; θ̈_2 ], with θ̈_1 = -(α_0 + d_2·θ̈_2 + Σ_1)/d_1 and θ̈_2 = (α_1 + (d_2/d_1)·Σ_1 - m_2· l_1· lc_2·θ̇_1^2·sin(θ_2) - Σ_2) / (m_2·lc_2^2 + I_2 - d_2^2/d_1), where: α_0 = a_1 - C_fr1·θ̇_1, where C_fr1 is the friction norm of the first joint, α_1 = a_2 - C_fr2·θ̇_2, where C_fr2 is the friction norm of the second joint, m_1 and m_2 the masses of the first and second links, l_1 and l_2 the lengths of the first and second links, lc_1 and lc_2 the positions of the centers of mass of the first and second links, I_1 and I_2 the moments of inertia of the first and second links, and d_1 = m_1 ·lc_1^2 + m_2 · (l_1^2 + lc_2^2 + 2 · l_1 · lc_2 ·cos(θ_2)) + I_1 + I_2 d_2 = m_2 · (lc_2^2 + l_1 · lc_2 ·cos(θ_2)) + I_2 Σ_2 = m_2 ·lc_2· g ·cos(θ_1 + θ_2 - π/2) Σ_1 = m_2 · l_1 ·lc_2·θ̇_2·sin(θ_2) ·(θ̇_2 - 2 ·θ̇_1) + (m_1 · lc_1 + m_2 · l_1) · g ·cos(θ_1 - π/2) + Σ_2. Cartpole and Cartpole Swingup: Let s_t = ( x, ẋ, θ, θ̇) be the state and a_t the action at time t. The dynamics of the system are based on <cit.> and read: F( s_t, a_t)= [ ẋ; ẍ; θ̇; θ̈ ], with ẍ = Σ - (m_p· l·θ̈·cos(θ))/m_total and θ̈ = (g·sin(θ) - cos(θ)·Σ - (Fr_p·θ̇)/(m_p· l)) / (l·(4/3 - (m_p·cos^2(θ))/m_total)), where: Fr_c is the friction norm of the contact between the cart and the ground, Fr_p is the friction norm of the joint between the cart and the pole, l is the length of the pole, m_total = m_c + m_p, and m_p, m_c are the masses of the pole and the cart, respectively, and Σ = (1/m_total)·(a + m_p· l·θ̇^2·sin(θ) - Fr_c·sgn(ẋ)). §.§ Reward Functions The reward function encodes the desired task. We adopt the original reward functions in the three main environments. For the swingup variants, we choose functions that describe the swingup task: we adopt the same function as Pendulum for Pendulum swingup. For Cartpole swingup, we set the reward function to the negative distance from the goal position s_goal = (x=0, y=1). For Acrobot swingup, we take the height of the pole as the reward function. § IMPLEMENTATION DETAILS In this section, we describe the experimental setup and the implementation details of PhIHP. We first learn a physics-informed residual dynamics model, then learn an MFRL agent through imagination, and use a hybrid planning strategy at inference. To learn the model, we first use a pure exploratory policy during T timesteps to collect the initial samples to fill 𝒟_re, then we perform stochastic gradient descent on the loss function (Eq. 3 in Sec. 4.1) to train F_θ. The learned model F̂ is used with CEM to perform planning and gather new T samples to add to 𝒟_re. To improve the quality of the model, the algorithm iteratively alternates between training and planning for a fixed number of iterations. To train the model-free component of PhIHP, the training dataset 𝒟_im is initially filled with T' samples generated from the learned model F̂ and random actions from a pure exploratory policy; π_θ and Q_θ are trained on batches from 𝒟_im, which is continuously filled by samples from the learned model F̂. We list in <ref> the relevant hyperparameters of PhIHP and baselines,
and we report in <ref> the task-specific hyperparameters for PhIHP. We adopted the original implementation and hyperparameters of TD-MPC. However, we needed to adapt it for early termination environments (Cartpole and Acrobot) to support episodes of variable length, and we found it beneficial for TD-MPC to set the critic learning rate at 1e-4 in these two tasks. Fot TD3, we tuned the original hyperparameters and used the same for the TD3 baseline and the model-free component of PhIHP. § COMPARISON TO STATE OF THE ART We compare PhIHP to baselines on individual tasks, we present both statistical results and a qualitative analysis. §.§ Learning curves We provide learning curves of PhIHP and baselines on individual tasks. PhIHP outperforms baselines by a large margin in terms of sample efficiency. <ref> shows that TD3, even when converging early in Cartpole-swingup, achieves sub-optimal performance and fails to converge within 500k steps in Acrobot-swingup. §.§ Statistical Comparison: PhIHP vs. Baselines To ensure a robust and statistically sound comparison with the results previously reported in Table 1 in Sec. 5.2, we conducted Welch's t-test to statistically compare the performance of PhIHP vs baselines across individual tasks. We set the significance threshold at 0.05, and calculated p-values to determine whether observed differences in performance were statistically significant. <ref> shows that PhIHP is equivalent to all baselines in Pendulum, and it significantly outperforms TD3 on the remaining tasks. Moreover, PhIHP outperforms TD-MPC in sparse-reward early-termination environment tasks (Cartpole and Acrobot), while they demonstrate equivalent performance in Pendulum, Pendulum swingup, and Acrobot swingup. §.§ Imagination learning for model-free TD3 We provide learning curves of TD3 through imagination on individual tasks in <ref>. TD3-im-ph is a component of PhIHP, it is a TD3 agent learned on trajectories from a physics-informed model. It largely outperforms TD3-im-dd, a TD3 learned on trajectories from a data-driven model. we limited the training budget for TD3-re, trained on real trajectories, at 500k real samples in all tasks. §.§ Qualitative comparison In this section, we compare performance metrics on individual classic control tasks. We estimate confidence intervals by using the percentile bootstrap with stratified sampling <cit.>. We show in <ref> a comparison of the median, interquartile median (IQM), mean performance, and optimality gap of PhIHP and baselines. PhIHP matches or outperforms the performance of TD-MPC and TD3 in all tasks except in Cartpole swingup. PhIHP shown to be robust to outliers compared to TD-MPC with shorter confidence intervals. Moreover, <ref> shows the performance profiles of PhIHP and baselines. PhIHP shows better robustness to outliers. § HYPERPARAMETER SENSITIVITY ANALYSIS We investigate the impact of varying controller hyper-parameters on the performance and inference time of PhIHP. We first study the impact of varying planning horizons and receding horizons (from 1 to 8). We note that planning over longer horizons generally leads to better performance, however, the performance slightly drops in Acrobot-swingup for planning horizon H > 4 (<ref>). We explain this by the compounding error effect on complex dynamics. Unsurprisingly, lower receding horizons always improve the performance because the agent benefits from replanning. 
For the impact of the population size, <ref> shows that excluding the policy (policy-population = 0) from planning degrades the performance, and increasing it under 10 does not have a significant impact. Moreover, excluding random actions (random-population = 0) from planning degrades the performance. Unsurprisingly, the inference time increases with an increase in both the planning horizon and the population size. Conversely, it decreases when the receding horizon increases.
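As a companion to Sec. 4.1 and the pendulum dynamics above, a minimal sketch of a physics-informed residual model and its penalized loss; the class and function names, the dimensions, the initial parameter guesses, and the single explicit-Euler step are our own simplifications (the paper integrates the ODE with a solver):

```python
import torch
import torch.nn as nn

class PhysicsInformedDynamics(nn.Module):
    """ds/dt = F_p(s, a; theta_p) + F_r(s, a; theta_r) for the pendulum.

    F_p encodes the frictionless prior with learnable physical parameters
    (C_g, C_i); F_r is a small MLP residual meant to absorb what the prior
    misses, e.g. friction."""
    def __init__(self, state_dim=2, action_dim=1, hidden=32):
        super().__init__()
        self.c_g = nn.Parameter(torch.tensor(10.0))  # illustrative gravity-norm guess
        self.c_i = nn.Parameter(torch.tensor(1.0))   # illustrative inertia-norm guess
        self.residual = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, state_dim))

    def forward(self, s, a):
        theta, theta_dot = s[:, :1], s[:, 1:2]
        f_p = torch.cat([theta_dot,
                         self.c_g * torch.sin(theta) + self.c_i * a], dim=1)
        return f_p + self.residual(torch.cat([s, a], dim=1))

    def step(self, s, a, dt=0.05):   # one explicit-Euler ODE step
        return s + dt * self(s, a)

def model_loss(model, s, a, s_next, lam=1e3):
    """Prediction loss plus the L2 penalty that keeps the residual small,
    forcing the model to rely mostly on the physical prior."""
    pred = model.step(s, a)
    res = model.residual(torch.cat([s, a], dim=1))
    return ((pred - s_next) ** 2).mean() + (1.0 / lam) * res.norm(2)
```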
http://arxiv.org/abs/2407.02093v1
20240702092821
Guided Waves in Static Curved Spacetimes
[ "Javier Blanco-Romero" ]
gr-qc
[ "gr-qc", "physics.optics" ]
§ ABSTRACT This work investigates the propagation of electromagnetic waves in waveguides within static curved spacetimes. We develop a covariant formalism using Hertzian potentials to describe guided electromagnetic modes in spacetimes with metrics that depend on the proper length along the waveguide axis. The Maxwell equations are solved using a Hertzian potential ansatz, resulting in wave equations for TE and TM modes. We analyze the axial and transverse components of the solutions and derive expressions for the cutoff frequency and guide wavelength. The special case of TEM modes is also examined. As an illustrative example, we apply the formalism to radial propagation in Schwarzschild spacetime. This work provides a framework for studying guided electromagnetic waves in curved spacetime geometries, opening up potential applications in precision tests of general relativity and relativistic quantum optics. Keywords: curved spacetime, guided waves, electromagnetism § INTRODUCTION The study of electromagnetic waves in curved spacetime has been a subject of interest since the early days of general relativity. General relativity has a solid experimental basis, with one of its most frequently measured predictions being time dilation due to the presence of a gravitational field. Some of the most representative experiments in this area include <cit.>. Notably, all experiments carried out to date have in common that general relativity effects are observed in classical degrees of freedom <cit.>. On the other hand, Newtonian gravity effects have also been observed in quantum systems. The neutron interference experiments, among which the pioneering work of <cit.> stands out, and the neutron rebound experiments <cit.> constitute proof that Newtonian gravity plays the same role in quantum mechanics as other external fields and is capable of inducing interference effects and state discretization. However, to date, there are no experiments that simultaneously manifest both general relativity and quantum effects <cit.>. This observational gap, combined with the rapid development of technologies that allow working with quantum states over large distances, has made the intersection of classical gravity and quantum mechanics an active area of research. Recent advancements in long-distance optical communications and quantum information transfer have expanded the possibilities for studying fundamental physics over extended scales. Kilometer-scale optical fibers are now routinely used in telecommunications <cit.>, with submarine cables spanning ocean basins to form the backbone of global internet infrastructure. Experimental successes in long-distance quantum communications <cit.> have further pushed the boundaries of quantum state manipulation over significant distances. These developments, coupled with increasing precision in quantum experiments <cit.>, have opened new possibilities for both scientific and technological applications in this field (see review <cit.>). These vast distances suggest the possibility of observing non-local phenomena predicted by general relativity in such systems. Moreover, these extended optical systems are finding novel applications beyond communications.
Recent research has demonstrated that submarine fiber optic cables can be repurposed as distributed sensors for detecting earthquakes and studying ocean dynamics <cit.>. This innovative use of fiber optic infrastructure hints at the potential for applying electromagnetic wave propagation in new and unexpected ways, including the study of gravitational effects over large scales. The propagation of electromagnetic waves in curved spacetime has been studied extensively in the context of astrophysical phenomena <cit.>. However, the application of these principles to guided waves in extended terrestrial systems presents both unique challenges and opportunities. This work explores the implications of curved spacetime on guided electromagnetic waves in extended waveguides, with potential applications ranging from precision measurements to quantum sensing. The interaction between gravity and electromagnetism in waveguides could lead to observable effects in precision measurements. For example, the Sagnac effect in ring lasers has been used to detect Earth's rotation and other inertial effects <cit.>. Extending this concept to gravitational effects could open new avenues for testing general relativity and alternative theories of gravity <cit.>. Furthermore, understanding how spacetime curvature affects the propagation of quantum states of light could advance the fields of relativistic quantum optics and quantum metrology <cit.>, potentially leading to new quantum sensing technologies and improved tests of fundamental physics. These considerations invite us to explore experimental scenarios where effects of general relativity and quantum mechanics appear simultaneously, allowing us to address this observational gap in the coming years. One such proposal in the literature is the interference of quantum clocks in the Earth's gravitational field <cit.>. This work aims to develop a framework for analyzing guided electromagnetic waves in static curved spacetimes using Hertzian potentials. By extending methods developed for flat spacetime <cit.> to curved geometries, we focus on static spacetimes as a first step towards more general scenarios. This approach provides a foundation for future studies of electromagnetic wave propagation in more complex gravitational environments, bridging the gap between theoretical astrophysics and practical terrestrial applications. The rest of this paper is organized as follows: In Section <ref>, we provide an overview of the theoretical framework required for our analysis. This includes a review of electromagnetism in curved spacetimes and the theory of Hertzian potentials. Section <ref> forms the core of our work, focusing on guided waves in static spacetimes. We begin by discussing guided waves in flat spacetime using Hertzian potentials, then introduce our waveguide model and the induced metric. We develop the 3+1 field theory and derive the relevant Maxwell and wave equations. We then apply the Hertzian potential formalism to guided waves in static spacetimes, analyzing TE, TM, and TEM modes. We particularize our results to the Schwarzschild spacetime as an example. The section concludes with a discussion on interferometry and general superposition of waves. Finally, Section <ref> presents the conclusions of our study, summarizing our key findings and discussing potential implications and future directions for this research. §.§ Conventions In this paper, we adopt the following conventions: * We use the (-+++) signature for the metric. 
* Greek letters (α, β, μ, ν, …) denote spacetime indices, while Latin letters (a, b, i, j, k, …) represent spacelike indices. * Four-vectors are denoted by italic letters (e.g., x, y), while spatial three-vectors are represented by boldface letters (e.g., 𝐯, 𝐄). * We employ natural units (c = 1) and Lorentz-Heaviside units (ϵ_0 = μ_0 = 1). § THEORETICAL BACKGROUND Here, we review some of the theoretical groundwork necessary for understanding electromagnetic waves in curved spacetimes. We begin with an overview of electromagnetism in curved geometries, employing both index notation and differential forms to provide a thorough perspective. We then introduce the powerful concept of Hertzian potentials, presenting both vector and covariant formulations and extending their application to curved spacetimes. §.§ Electromagnetism in Curved Spacetimes §.§.§ Coordinate-based Formulation To describe electromagnetism in curved spacetimes, we begin with the following definitions: F_0i = - E_i , F_jk = ϵ_jkiB^i . Here, E_i represents the components of the electric field, and B^i the components of the magnetic field. Using these definitions, we can express the Faraday 2-form in Minkowski spacetime. Its components are given by <cit.>: (F_μν) = ([ 0 -E_x/c -E_y/c -E_z/c; E_x/c 0 B_z -B_y; E_y/c -B_z 0 B_x; E_z/c B_y -B_x 0 ]) . The dual of the Faraday tensor, denoted as ⋆ F^μν, is defined by ⋆ F^μν = 1/2ϵ^μνρσF_ρσ. Its components are: (⋆ F_μν) = ([ 0 B^1 B^2 B^3; -B^1 0 E^3 /c -E^2 /c; -B^2 -E^3 /c 0 E^1 /c; -B^3 E^2 /c -E^1 /c 0 ]) . In curved spacetime, the Maxwell equations can be derived from the following action: S[𝐅, 𝐠] = -1/4∫√(g) d^4 x F_μνF^μν . Here, g represents the determinant of the metric tensor. From this action, we obtain the Maxwell equations in vacuum <cit.>: ∇_[αF_βγ] = 0 , ∇_μ F^μν = 0 , where ∇_μ denotes the covariant derivative. The covariant derivative of the Faraday tensor is given by: ∇_μ F^λν = ∂_μ F^λν + Γ^λ_μα F^αν + Γ^ν_μα F^λα . The homogeneous equations (<ref>) yield the Faraday law and the magnetic Gauss law. These laws are independent of the metric, assuming the spacetime is torsionless. As a result, the Faraday law retains its familiar form: ∂_t 𝐁 = - ∇×𝐄 . On the other hand, the magnetic Gauss law reads: ∇·𝐁 = 0 . The inhomogeneous Maxwell equation (<ref>) can be simplified by considering the symmetries of both the Christoffel symbols and the Faraday tensor. The contraction of the lower symmetric index of the Christoffel symbols with the upper antisymmetric index of the Faraday tensor in the covariant derivative vanishes. This allows us to write: ∇_μ F^μν = ∂_μ F^μν + Γ^μ_μα F^αν . This formulation of Maxwell's equations in curved spacetime provides the foundation for our subsequent analysis of electromagnetic waves in waveguides within static curved spacetimes. §.§.§ Differential Forms Formulation An alternative and more geometrical approach to electromagnetism in curved spacetime involves the use of differential forms. In this formulation, the Faraday 2-form can be expressed as (see for example <cit.> or <cit.>): 𝐅 = 1/2F_αβ dx^α∧ dx^β . In Minkowski spacetime, this Faraday 2-form reduces to: 𝐅 = - E_x dt ∧ dx - E_y dt ∧ dy - E_z dt ∧ dz + B_x dy ∧ dz - B_y dx ∧ dz + B_z dx ∧ dy = - E_x dt ∧ dx - E_y dt ∧ dy - E_z dt ∧ dz - B_x ⋆ dt ∧ dx - B_y ⋆ dt ∧ dy - B_z ⋆ dt ∧ dz , which is equivalent to the matrix form presented in equation (<ref>). In this formalism, the source-free Maxwell equations take the elegant form: d𝐅 = 0 , δ𝐅 = 0 . Here, d represents the exterior derivative and δ is its adjoint.
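The index gymnastics above can be spot-checked symbolically. The following sketch (Python with sympy) verifies the identity Γ^μ_μα = ∂_α ln√(|det g|) that underlies the simplified divergence formula above; the diagonal metric diag(-f(z), 1, 1, 1) is our own illustrative choice, anticipating the static waveguide geometry introduced later.

    import sympy as sp

    t, x, y, z = sp.symbols('t x y z')
    coords = [t, x, y, z]
    f = sp.Function('f', positive=True)(z)
    g = sp.diag(-f, 1, 1, 1)      # illustrative static metric, g = diag(-f(z), 1, 1, 1)
    ginv = g.inv()

    def Gamma(lam, mu, nu):
        # Christoffel symbols: Gamma^lam_{mu nu} = (1/2) g^{lam sig} (d_mu g_{sig nu} + d_nu g_{sig mu} - d_sig g_{mu nu})
        return sp.Rational(1, 2) * sum(
            ginv[lam, sig] * (sp.diff(g[sig, nu], coords[mu])
                              + sp.diff(g[sig, mu], coords[nu])
                              - sp.diff(g[mu, nu], coords[sig]))
            for sig in range(4))

    sqrt_det = sp.sqrt(-g.det())   # sqrt(|det g|) = sqrt(f(z)) here
    for alpha in range(4):
        contracted = sum(Gamma(mu, mu, alpha) for mu in range(4))
        assert sp.simplify(contracted - sp.diff(sp.log(sqrt_det), coords[alpha])) == 0

For this metric the only nonvanishing contraction is Γ^μ_μz = f'(z)/2f(z), which is exactly the factor that multiplies the Faraday tensor in the static-spacetime Maxwell equations derived below.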
The action of the electromagnetic field on a (pseudo-) Riemannian manifold can be written using differential forms as: S = - ∫_ℳ(1/2𝐅∧⋆𝐅 + 𝐀∧⋆𝐉) , where 𝐉 is the current 1-form and ⋆ denotes the Hodge star operator. To demonstrate how this action relates to the previously introduced form (<ref>), we expand the term 1/2 F ∧⋆F in components: 1/2 F ∧⋆F = 1/2( 1/2 F_μν dx^μ∧ dx^ν) ∧( 1/2 F^αβ1/4!ϵ_αβρσ dx^ρ∧ dx^σ) = 1/8·1/4! F_μν F^αβϵ_αβρσ dx^μ∧ dx^ν∧ dx^ρ∧ dx^σ = 1/4·1/4! F_αβ F^αβϵ_μνρσ dx^μ∧ dx^ν∧ dx^ρ∧ dx^σ = 1/4 F_αβ F^αβω , where ω = d^4x is the volume form. To derive the Maxwell equations from the action (<ref>), we perform a variation of the action induced by variations of the fields 𝐅 and 𝐀: δ S = -1/2∫( δ F ∧⋆ F + F ∧⋆δ F ) + ∫δ A ∧⋆ J . Assuming a fixed metric, the variation commutes with the Hodge dual. Using the symmetry of the inner product (see <cit.>) we have that (F, δ F ) = (δ F, F ), so we can simplify this to: δ S = -∫( δ F ∧⋆ F ) + ∫δ A ∧⋆ J . Expressing this in terms of the four-potential: δ S = -∫ dδ A ∧⋆ dA + ∫δ A ∧⋆ J , which, after integration by parts, becomes: δ S = -∫δ A ∧ d ⋆ dA + ∫δ A ∧⋆ J . The principle of least action requires: δ S = 0 ∀ δ A . This, combined with equation (<ref>), implies: d ⋆ dA = ⋆ J , which are the inhomogeneous Maxwell equations in differential form notation. §.§.§ Electromagnetism in Material Media To describe electromagnetic phenomena in material media, we adopt the approach outlined in <cit.> [pg. 369, E.3]. This method involves separating charge and current densities into bounded and external components, providing insight into electromagnetic interactions within materials. We begin by splitting the charge and current densities: ρ = ρ^mat + ρ^ext , j = j^mat + j^ext . Here, the superscripts 'mat' and 'ext' denote the material (bounded) and external components, respectively. Assuming a conservation law for the bounded source: dJ^mat = dj^mat + ∂_t ρ^mat = 0 , we can define a potential that generates the bound sources, analogous to the treatment of free sources: dG^mat = J^mat . In a (3+1) formalism, this leads to the introduction of the polarization 2-form and the magnetization 1-form: D^mat ≡ - P , H^mat ≡ M , such that: G^mat = σ∧ H^mat + D^mat . These forms satisfy the following relations: - dP = ρ^mat , dM + ∂_t P = j^mat . Note that these definitions become unique under the assumption that D^mat = 0 when E = 0 and H^mat = 0 when B = 0. Exploiting the linearity of Maxwell's equations, we can split the excitations as: G = G^ext + G^mat , or: D = ϵ_0 ⋆ E = D^mat + D^ext = D^ext - P[ E, B ] , H = 1/μ_0⋆ B = H^mat + H^ext = H^ext + M[ E, B ] , where the first equalities in both equations assume linear spacetime constitutive relations. Differentiating G^ext and using the Maxwell equations dG = J and (<ref>), we obtain the inhomogeneous Maxwell equations: dG^ext = J^ext , or: dD^ext = ρ^ext , dH^ext - ∂_t D^ext = j^ext . The external excitation 2-form H^ext can be considered as an auxiliary field. For linear media, we consider the following constitutive laws: P = ϵ_0 χ_E ⋆ E , M = 1/μ_0χ_B ⋆ B , where χ_E is the electric susceptibility and χ_B the magnetic susceptibility. This leads to: D^ext = ϵ⋆ E , H^ext = 1/μ⋆ B , with the material constants defined as: ϵ ≡ϵ_0 (1 + χ_E ) , μ ≡μ_0/(1 + χ_B ) . For linear dielectric media, we can express the constitutive relation as <cit.>: 𝐆 = ⋆χ𝐅 , where χ is a map from 2-forms to 2-forms, or in component form: G_μν = ⋆_μν^αβχ_αβ^σρ F_σρ . Finally, the constitutive relation (<ref>) yields the macroscopic Maxwell equations: d𝐅 = 0 , d𝐆 = 𝐉_free .
This formulation provides a description of electromagnetic phenomena in material media, incorporating both microscopic and macroscopic aspects. §.§ Theory of the Hertzian Potential The Hertzian potential method offers an elegant and efficient approach to solving Maxwell's field equations in waveguides, particularly in flat spacetime <cit.>. This powerful technique has been developed and refined over the years, with significant contributions from various researchers. <cit.> and <cit.> laid the groundwork for flat spacetime applications, with the latter introducing a covariant notation. <cit.> provides a great treatment of the theory. <cit.> further advanced the field by classifying Hertzian schemes using bivector potentials as eigenvectors of the Hodge duality operator, simplifying the solution of coupled equations. For those seeking practical applications, <cit.> offer valuable computational insights. Together, these works form a foundation for our exploration of Hertzian potentials in electromagnetic theory. §.§.§ Vector Formulation in Flat Spacetime In this section, we present a vector formulation of electromagnetic theory in flat spacetime using Hertzian potentials. We begin with the Maxwell equations in vacuum, using natural units (c = 1): ∇×𝐄 + ∂_t 𝐁 = 0 , ∇·𝐁 = 0 , ∇×𝐁 - ∂_t 𝐄 = 0 , ∇·𝐄 = 0 , Our goal is to express solutions to these equations in terms of the second derivatives of two bivector fields, Π_e and Π_m, known as the Hertzian potentials. The four-vector potential can be derived from first derivatives of the Hertzian bivector as follows <cit.>: φ = - ∇·Π_e , 𝐀 = ∂_t Π_e + ∇×Π_m . The electric field and magnetic induction can then be expressed in terms of these Hertzian potentials: 𝐄 = ∇(∇·Π_e ) - ∂^2_t Π_e - ∇×∂_t Π_m = - ∇×∂_t Π_m + ∇×( ∇×Π_e) , 𝐁 = ∇×∂_t Π_e + ∇×( ∇×Π_m) = ∇(∇·Π_m ) - ∂^2_t Π_m + ∇×∂_t Π_e . For these expressions to satisfy Maxwell's equations, the Hertzian bivectors must obey the following wave equations: □Π_e = 0 , □Π_m = 0 , where □ = -∂_t^2 + ∇^2 is the D'Alembertian operator. To introduce additional degrees of freedom, we can employ gauge transformations of the third kind <cit.>. These involve two four-vector gauge fields, G = (g, 𝐆) and W = (w, 𝐖): 𝐐_e = ∇×𝐆 , 𝐐_m = - ∂_t𝐆 - ∇ g , and: 𝐑_e = - ∂_t𝐖 - ∇ w , 𝐑_m = - ∇×𝐖 . With these gauge transformations, the wave equations for the Hertzian potentials become: □Π_e = 𝐐_e + 𝐑_e , □Π_m = 𝐐_m +𝐑_m , Consequently, the expressions for the electric field and magnetic induction are modified to: 𝐄 = 𝐑_e + ∇(∇·Π_e ) - ∂^2_t Π_e - ∇×∂_t Π_m = - 𝐐_e - ∇×∂_t Π_m + ∇×( ∇×Π_e) , and: 𝐁 = - 𝐑_m + ∇×∂_t Π_e + ∇×( ∇×Π_m) = 𝐐_m + ∇(∇·Π_m ) - ∂^2_t Π_m + ∇×∂_t Π_e . These gauge-transformed expressions for 𝐄 and 𝐁 continue to satisfy the source-free Maxwell equations, providing a more general representation of electromagnetic fields in terms of Hertzian potentials. §.§.§ Covariant Formulation in Curved Spacetime The covariant formulation of electromagnetism using Hertzian potentials provides a powerful method for solving source-free problems in curved spacetimes. This approach extends the Hertz formalism to all curved spacetimes, offering a remarkable economy in representing the Maxwell field. By expressing the four-potential as the co-derivative of a 2-form Π, we can construct solutions that automatically satisfy Maxwell's equations. The formalism also introduces a new type of gauge freedom, termed by Nisbet as "gauge transformations of the third kind," which adds flexibility to the solution process. 
This method is particularly useful as it reduces the arbitrary source-free Maxwell field to two scalar functions obeying a single separable second-order wave equation, reflecting the two degrees of freedom of a zero-rest-mass field. We begin by writing the four-potential as the co-derivative of a 2-form Π: 𝐀 = δΠ . If we impose the condition: □Π = 0 , then the Faraday 2-form can be expressed as: 𝐅 = dδΠ = - δ dΠ . By construction, this formulation satisfies the Maxwell equations (<ref>). To introduce additional flexibility, we can incorporate gauge degrees of freedom. Let 𝐆 and 𝐖 be arbitrary 1-forms. We define the 2-forms: 𝐐 = d𝐆 , 𝐑 = ⋆ d𝐖 . These can be used as gauge terms by introducing them as sources in the wave equation for the Hertzian potential: □Π = 𝐐 + 𝐑 = d𝐆 + ⋆ d𝐖 . Consequently, the gauge-transformed Faraday 2-form becomes: 𝐅 = dδΠ - d𝐆 = ⋆ d𝐖 - δ dΠ . It can be shown that this gauge-transformed Faraday 2-form still satisfies the source-free Maxwell equations (<ref>). These equations are valid for arbitrary curved spacetimes. In the special case where 𝐆 = 𝐖 = 0, we can use (<ref>) to express the action as: S = - 1/2∫_ℳ( dδΠ∧⋆ dδΠ) = - 1/2∫_ℳ( δΠ∧⋆δ dδΠ) , where the second expression follows by integration by parts. §.§.§ Separation of the Electromagnetic Degrees of Freedom To further simplify the problem, we can separate the electromagnetic degrees of freedom. We begin with the Maxwell equations for the Hertzian 2-form Π: ( dδ + δ d)Π = d𝐆 + ⋆ d𝐖 . We split the Hertzian potential into two parts: Π = Π^+ + Π^- , where Π^+ and Π^- are contained in the subspaces of self-dual and anti-self-dual 2-forms, respectively: ⋆Π^± = ± i Π^± . Given arbitrary bases of these subspaces {θ^±_j } , j=1,…,3, we choose the gauge 1-forms to be: 𝐆 = 𝐆^+ + 𝐆^- , 𝐖 = 𝐖^+ + 𝐖^- , with 𝐆^± = Π_j^±δθ^±_j ∓ iG̃^± , 𝐖^± = Π_j^±⋆θ^±_j + W̃^± . If we further assume G^± = W^±, this gauge leads to the equations: (i ±⋆) [⋆(Π_j^±∧θ^±_j ) - G^±] = 0 . There exists a basis on which: Π_j^±∧θ^±_j = 0 if j=1,2 . Renaming Π_3^± = Π^±, θ^±_3 = θ^±, the equations (<ref>) reduce to: (i ±⋆) [⋆(Π^±∧θ^±) - G^±] = 0 . Using the relations: ⋆ (v∧ω) = i_ω^♭⋆ v , and the Cartan magic formula: d(i_X ω) = ℒ_Xω - i_X dω , the equations (<ref>) become: (i ±⋆) [i_(θ^±)^♭⋆Π^± - (ℒ_(θ^±)^♭⋆Π^± + G^±)] = 0 . By fixing the gauge: G^± = -ℒ_(θ^±)^♭⋆Π^± , the Maxwell equations reduce to the wave equation: d⋆ dΠ^± = 0 →δ dΠ^± = 0 . This formulation separates the Maxwell equations into two scalar equations, significantly simplifying the problem. The main challenge in practical applications is finding the appropriate basis (<ref>). § GUIDED WAVES IN STATIC SPACETIMES This section explores electromagnetic wave propagation in waveguides within static curved spacetimes. We begin with a review of guided waves in flat spacetime using Hertzian potentials, then develop a model for waveguides in curved spacetimes. We derive the Maxwell equations and corresponding wave equations in a 3+1 formalism, and apply the Hertzian potential method to solve for guided waves. We analyze TE, TM, and TEM modes, examine their properties, and consider specific applications to the Schwarzschild spacetime. §.§ Guided Waves in Flat Spacetime with Hertzian Potentials We first summarize the results from <cit.> and then recover them using the covariant formalism. In our analysis, we take the z-axis as the axis of the guide and hence as the propagation direction. We assume harmonic time dependence e^iω t for all field quantities.
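The flat-spacetime formulas recovered in the next subsection lend themselves to a quick symbolic test. The sketch below (Python with sympy; the TE-type potential Π_m = cos(k_x x) e^{i(k_z z - ωt)} is an arbitrary illustrative choice, not a mode of any particular guide) builds 𝐄 and 𝐁 from a z-directed magnetic Hertz potential and checks the source-free Maxwell equations; the Ampère law holds once the wave equation □Π_m = 0, i.e. ω² = k_x² + k_z², is imposed.

    import sympy as sp

    t, x, y, z = sp.symbols('t x y z', real=True)
    kx, kz, w = sp.symbols('k_x k_z omega', positive=True)

    Pm = sp.cos(kx*x) * sp.exp(sp.I*(kz*z - w*t))   # magnetic Hertz potential along e_z, with Pi_e = 0

    # fields from the Hertzian construction (electric-type potential set to zero)
    E = sp.Matrix([-sp.diff(Pm, t, y), sp.diff(Pm, t, x), 0])
    B = sp.Matrix([sp.diff(Pm, x, z), sp.diff(Pm, y, z),
                   -(sp.diff(Pm, x, 2) + sp.diff(Pm, y, 2))])

    def curl(V):
        return sp.Matrix([sp.diff(V[2], y) - sp.diff(V[1], z),
                          sp.diff(V[0], z) - sp.diff(V[2], x),
                          sp.diff(V[1], x) - sp.diff(V[0], y)])

    # Faraday law and both Gauss laws hold identically
    assert sp.simplify(curl(E) + sp.diff(B, t)) == sp.zeros(3, 1)
    assert sp.simplify(sum(sp.diff(E[i], c) for i, c in enumerate([x, y, z]))) == 0
    assert sp.simplify(sum(sp.diff(B[i], c) for i, c in enumerate([x, y, z]))) == 0
    # the Ampere law requires the dispersion relation omega^2 = kx^2 + kz^2
    res = sp.expand(curl(B) - sp.diff(E, t)).subs(w**2, kx**2 + kz**2)
    assert sp.simplify(res) == sp.zeros(3, 1)

The same construction with an electric-type potential Π_e and Π_m = 0 reproduces the TM fields.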
§.§.§ Vector Approach Consider the case where 𝐑_e = 𝐑_m = 𝐐_e = 𝐐_m = 0 and the Hertzian potentials are aligned with the z-axis: Π_e = Π_e 𝐞_z, Π_m = Π_m 𝐞_z. Under these conditions, we can express equations (<ref>) and (<ref>) in Cartesian coordinates as: 𝐄 = (∂_x∂_z Π_e - ∂_t∂_y Π_m ) 𝐞_x + (∂_y∂_z Π_e + ∂_t ∂_x Π_m ) 𝐞_y + ( ∂_z^2 Π_e - ∂_t^2 Π_e) 𝐞_z , and 𝐁 = (∂_x∂_z Π_m + ∂_t∂_y Π_e ) 𝐞_x + (∂_y∂_z Π_m - ∂_t ∂_x Π_e ) 𝐞_y + (∂_z^2 Π_m - ∂_t^2 Π_m ) 𝐞_z = (∂_x∂_z Π_m + ∂_t∂_y Π_e ) 𝐞_x + (∂_y∂_z Π_m - ∂_t ∂_x Π_e ) 𝐞_y - (∂_x^2 Π_m + ∂_y^2 Π_m ) 𝐞_z , In the last equality of equation (<ref>), we have used the wave equation (<ref>) to replace (∂_z^2 - ∂_t^2 )Π_m with - (∂_x^2 + ∂_y^2 )Π_m. Referring to the structure of the Faraday 2-form in Minkowski spacetime (<ref>), we can express the Faraday 2-form corresponding to the fields (<ref>) and (<ref>) as: 𝐅 = (∂_t∂_yΠ_m - ∂_x∂_zΠ_e) t ∧ x - (∂_t∂_xΠ_m + ∂_y∂_zΠ_e) t ∧ y + (∂_t^2Π_e - ∂_z^2Π_e ) t ∧ z -(∂_t∂_yΠ_e + ∂_x∂_z Π_m ) ⋆ t ∧ x + (∂_t∂_xΠ_e - ∂_y∂_zΠ_m ) ⋆ t ∧ y + (∂_x^2Π_m + ∂_y^2Π_m ) ⋆ t ∧ z . §.§.§ Rectangular Waveguides Rectangular waveguides support two types of modes: Transverse Electric (TE) and Transverse Magnetic (TM). We'll examine each type using the Hertzian potential approach. TE Modes TE modes are characterized by the absence of a longitudinal component of the electric field. We can derive the field components for these modes using a magnetic-type Hertzian potential with a single component along the guide's axis: Π_h = Π_h 𝐞_z. This potential satisfies the wave equation (see also <cit.>): □Π_h = ∇_s^2 Π_h + k_0^2Π_h = ∇_s ( ∇_s ·Π_h) - ∇_s ×∇_s ×Π_h + k_0^2Π_h = 0 , where ∇_s^2 is the spatial Laplace-Beltrami operator (or vector Laplacian <cit.>) and k_0^2 = ω^2 μ_0 ϵ_0 = 4π^2/λ_0^2, with λ_0 being the free-space wavelength of electromagnetic waves. We can obtain the electric field and magnetic induction from equations (<ref>) and (<ref>) respectively (see also <cit.>): 𝐄 = -iωμ_0 ∇_s ×Π_h , 𝐇 = k_0^2 Π_h + ∇_s ( ∇·Π_h) = ∇_s ×∇_s ×Π_h . The dominant TE mode in rectangular waveguides is the TE_10 (or H_10) mode <cit.>. TM modes For TM modes, we use an analogous approach but with an electric Hertzian potential Π_e = Π_e 𝐞_z. This potential also satisfies a wave equation: □Π_e = ∇_s^2 Π_e + k_0^2Π_e = ∇_s ( ∇_s ·Π_e) - ∇_s ×∇_s ×Π_e + k_0^2Π_e = 0 , The electric field and magnetic induction for TM modes can be derived as follows (see also <cit.>): 𝐄 = k_0^2 Π_e + ∇_s( ∇_s ·Π_e) = ∇_s ×∇_s ×Π_e , 𝐇 = iωϵ_0 ∇_s ×Π_e . The dominant TM mode in rectangular waveguides is the TM_11 (or E_11) mode <cit.>. §.§.§ Covariant Approach We now demonstrate how to obtain the same results using covariant notation, following the approach in <cit.>. Consider the Minkowski metric: s^2 = η_μν x^μ x^ν = - c^2 t^2 + δ_ij x^i x^j , where i,j = x, y, z. We choose the Hertzian potential to be: Π = Π_e(t,x,y,z) t ∧ z - Π_m(t,x,y,z)⋆ t ∧ z = Π_e(t,x,y,z) t ∧ z + Π_m(t,x,y,z) x ∧ y . Applying the Laplace-de Rham operator: □Π = □Π_e(t,x,y,z) t ∧ z + □Π_m(t,x,y,z) x ∧ y , where, on functions, □ is minus the usual D'Alembertian: □ f(t,x,y,z) = ( ∂_t^2 - ∇_s^2 ) f(t,x,y,z) . In this case, we don't need non-vanishing gauge terms, so we set 𝐆 = 𝐖 = 0. The wave equations for the components of the Hertzian potential then simplify to: □Π_m (t,x,y,z)= 0 , □Π_e (t,x,y,z)= 0 , which coincides with equations (<ref>) and (<ref>). 
Once we have a solution to these wave equations, we use equation (<ref>) to find the Faraday 2-form: 𝐅 = dδΠ - d𝐆 = dδΠ = (∂_t ∂_y Π_m - ∂_x ∂_z Π_e ) dt ∧ dx - (∂_t ∂_x Π_m + ∂_y ∂_z Π_e ) dt ∧ dy + (∂_t^2 Π_e - ∂_z^2 Π_e ) dt ∧ dz + (∂_t ∂_y Π_e + ∂_x ∂_z Π_m ) dy ∧ dz + (∂_t ∂_x Π_e - ∂_y ∂_z Π_m ) dx ∧ dz - (∂_x^2 Π_m + ∂_y^2 Π_m ) dx ∧ dy = (∂_t ∂_y Π_m - ∂_x ∂_z Π_e ) dt ∧ dx - (∂_t ∂_x Π_m + ∂_y ∂_z Π_e ) dt ∧ dy + (∂_t^2 Π_e - ∂_z^2 Π_e ) dt ∧ dz - (∂_t ∂_y Π_e + ∂_x ∂_z Π_m )⋆ dt ∧ dx + (∂_t ∂_x Π_e - ∂_y ∂_z Π_m )⋆ dt ∧ dy + (∂_x^2 Π_m + ∂_y^2 Π_m ) ⋆ dt ∧ dz . This result exactly matches equation (<ref>). We can thus identify the Hertzian 2-form components Π_e and Π_m in both formalisms. From equation (<ref>), we can obtain: * TE modes (equations (<ref>) and (<ref>)) by setting Π_m ≠ 0, Π_e = 0. * TM modes (equations (<ref>) and (<ref>)) by setting Π_m = 0, Π_e ≠ 0. This covariant approach provides a more geometrical understanding of the electromagnetic fields in waveguides and demonstrates the equivalence between the vector and covariant formulations. §.§ Waveguide Model and Induced Metric §.§.§ Waveguide Model Our waveguide model builds upon the abstract differential forms approach developed for flat spacetime <cit.> and extended to curved spacetime <cit.>. Following Burton et al. <cit.>, we consider 'wavetubes' - cavities whose transversal section is small compared to both their length and the scale of gravitational field variation. This allows us to assume a locally flat spacetime throughout the cross-section, with curved spacetime effects primarily affecting the axial dimension. We describe the wavetube as a time-dependent curve Γ(t) within the spacetime, defined on constant time hypersurfaces. This curve represents the waveguide's axis, chosen based on symmetry considerations. For simplicity, we focus on time-independent waveguides in this work. The boundary of the wavetube is defined as a hypersurface (perfectly conducting, in the case of a 'wavetube' as those considered in <cit.>): 𝒮≡{ p ∈ℳ | f(p) = 0, df spacelike} . The topology of the wavetube is hence ℝ^2 ×𝒟, where 𝒟 is the 2-disc submanifold associated with the transversal section of the waveguide. This allows us to write the boundary condition for the Faraday 2-form as: df ∧𝐅 = 0 ∀ p ∈𝒮 . This formulation provides the framework for analyzing electromagnetic wave propagation in curved spacetime waveguides. §.§.§ Metric Derivation: Schwarzschild Example To illustrate our approach, let's consider the Schwarzschild spacetime[The Schwarzschild spacetime is given by <cit.>: ds^2 = - (1 - R_s/r) (c dt)^2 + (1 - R_s/r)^-1 dr^2 + r^2 dΩ^2 , with R_s ≡2GM/c^2 . ]. We can write the metric as: ds^2 = - (1 - R_s/r) (c dt)^2 + γ_ij dx^i dx^j , where we have introduced the spatial metric γ_ij dx^i dx^j, which in spherical coordinates reads: γ_ij dx^i dx^j = (1 - R_s/r)^-1 dr^2 + r^2 dΩ^2 . To parameterize the waveguide Γ in terms of its proper length l, we consider an arbitrary parameterization 𝐱(λ) of the curve. The line element on Γ is then: dl = √(γ_ij(λ) dx^i(λ)/dλ dx^j(λ)/dλ) dλ , and its proper length is given by: l_Γ = ∫_Γ dl . In terms of this length, the line element along the waveguide can be written as: ds^2 = - (1 - R_s/r(l)) (c dt)^2 + dl^2 , or in abstract form: ds^2 = - f(l) (c dt)^2 + dl^2 .
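To get a feeling for the size of the curvature correction in the proper length (and to sanity-check the expansion above), the following numerical sketch compares the exact integral with the weak-field approximation for a 1 km radial guide at the surface of the Earth; all numbers are illustrative.

    import numpy as np
    from scipy.integrate import quad

    R_s = 8.87e-3            # Schwarzschild radius of the Earth, ~9 mm
    r0 = 6.371e6             # radius of the Earth, in meters
    r1 = r0 + 1.0e3          # a 1 km radial waveguide

    exact, _ = quad(lambda r: 1.0 / np.sqrt(1.0 - R_s / r), r0, r1)
    approx = (r1 - r0) + 0.5 * R_s * np.log(r1 / r0)

    print(exact - (r1 - r0))   # curvature excess of proper length, ~0.7 micrometers
    print(exact - approx)      # error of the first-order approximation, far smaller still

The proper length of the guide exceeds its coordinate extent by well under a micrometer, which is tiny in absolute terms but already corresponds to an order-one phase shift (a few radians) at optical frequencies.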
Neglecting the spatial curvature effects on the section of the waveguide, as the transversal section is narrow, we can introduce a pair of space-like coordinates to account for these dimensions, and consider a metric of the form: s^2 = - f(l) (ct)^2 + l ^2 + g^⊥_mn x^m x^n , where g^⊥_mn is the metric for ℝ^2 in some coordinates. Radial Propagation For pure radial propagation, we have Ω = 0, and the proper spatial length is given by: l = ∫^r_r_0 r' 1/√(1 - R_s/r') . This indefinite integral has analytic solution, but in the case in which the waveguide is far from the Schwarzschild radius[The Schwarzschild radius of the Earth is R_s,earth≃ 9 · 10^-3 m, while the radius of the Earth is R_earth≃ 6.371 · 10^3 m. For the Sun, the Schwarzschild radius is approximately R_s,sun≃ 3 · 10^6 m, and its radius R_sun≃ 6.957 · 10^8 m. In both cases, for waveguides placed outside the planetary/stellar object, the approximation (<ref>) holds.], i.e when: r >> R_s ∀ r ∈Γ , we can perform the approximation [If |x|<1: 1/√(1 + x) = 1 - 1/2x + 3/8x^2 - 5/16x^3 + ⋯ . ]: l ≃∫^r_r_0 r' ( 1 + 1/2r_s/r') = r + r_s/2ln(r/r_0) - r_0 . §.§.§ Metric Properties Motivated by the result (<ref>), we study the propagation of the electromagnetic field in the geometry given by the line element: ds^2 = - f(s)dt^2 + ds^2 + du^2 + dv^2 . Here, s represents the axial coordinate (spatial proper length along the waveguide axis), while u and v are the transversal coordinates in a flat space (assuming a small transversal section). The component g_tt (s) = - f(s) of the metric represents the induced geometry along the waveguide. Due to the symmetries of the metric (<ref>), generated by the Killing vectors ∂_t, ∂_u, and ∂_v, we expand the fields in 'eigenstates' of these generators, i.e., in harmonic waves: ϕ(x) ∝ e^-iω te^ik_u ue^ik_v v . For this geometry, the non-vanishing Christoffel symbols are given by (modulo its symmetric counterparts in the lower indices): Γ^s_stt = 1/2 f'(s) , Γ^t_tst = 1/2d/ds{ ln[f(s)] } , where ()' ≡d/ds(). The non-vanishing components of the Ricci tensor are: R_tt = 1/2[f”(s) - 1/2(f'(s))^2/f(s)] , R_ss = 1/4(f(s))^2[(f'(s))^2 - 2f(s)f”(s) ] . These components yield a vacuum constraint on f(s) (imposing the metric to be Ricci flat, i.e., R_μν = 0): f”(s) = 1/2(f'(s))^2/f(s) . Laplace-de Rham Operator on Functions The Laplace-de Rham operator (minus Laplacian) on functions is: □ f(t,x,y,z) = ( 1/f(z)∂_t^2 - ∇_s^2 - 1/2f'(z)/f(z)∂_z)f(t,x,y,z) . Hodge Star Action on Basis p-forms On 0-forms: ⋆ f(x) = f(x) √(f(z)) t ∧ x ∧ y ∧ z . On 1-forms: ⋆ t = -1/√(f(z)) x ∧ y ∧ z ⋆ x = -√(f(z)) t ∧ y ∧ z ⋆ y = √(f(z)) t ∧ x ∧ z ⋆ z = -√(f(z)) t ∧ x ∧ y . On 2-forms: ⋆ t ∧ x = -1/√(f(z)) y ∧ z ⋆ t ∧ y = 1/√(f(z)) x ∧ z ⋆ t ∧ z = -1/√(f(z)) x ∧ y ⋆ x ∧ y = √(f(z)) t ∧ z ⋆ x ∧ z = -√(f(z)) t ∧ y ⋆ y ∧ z = √(f(z)) t ∧ x . On 3-forms: ⋆ t ∧ x ∧ y = -1/√(f(z)) z ⋆ t ∧ x ∧ z = 1/√(f(z)) y ⋆ t ∧ y ∧ z = -1/√(f(z)) x ⋆ x ∧ y ∧ z = -√(f(z)) t . On 4-forms: ⋆ t ∧ x ∧ y ∧ z = -1/√(f(z)) . §.§ 3+1 Field Theory. Maxwell and Wave Equations This section develops the 3+1 formalism for electromagnetic fields in static curved spacetimes. We begin by deriving the Maxwell equations in this context, paying particular attention to the effects of spacetime curvature. We then obtain wave equations for both the electric field and magnetic induction, considering their axial and transverse components separately. 
The analysis ends with a demonstration of the existence of TE and TM modes in arbitrary static spacetimes, generalizing the familiar concepts from flat spacetime electrodynamics. §.§.§ Maxwell Equations We begin by particularizing the inhomogeneous equations (<ref>). Using (<ref>), this reduces to: ∇_μ F^μν = ∂_μ F^μν + Γ^t_tr tF^s ν = ∂_μ F^μν + 1/2(f'(s)/f(s))F^s ν . Lowering the indices of F^μν in this equation, we obtain[The derivation of the Maxwell and wave equations in terms of the electric field 𝐄 and the magnetic induction 𝐁 parallels that in <cit.>]: g^αμ g^βν∂_μ F_αβ + [∂_s(g^νβ) + 1/2(f'(s)/f(s))g^νβ]F_sβ = 0 . Using the definitions (<ref>) and (<ref>), and considering that we are working in a spacetime with a diagonal metric, we obtain: -1/f(s)g^kν∂_tE_k - g^jig^tν∂_iE_j - ϵ_jkig^jlg^kν∂_lB^i - [∂_s(g^ν t) + 1/2( f'(s)/f(s))g^ν t]E_s - [1/2( f'(s)/f(s))g^ν k]ϵ_skiB^i = 0 . From (<ref>), we derive the Gauss law by setting ν = t: ∇·𝐄 - 1/2( f'(s)/f(s)) E_s = 0 . The Maxwell-Ampère law corresponds to ν = s,u,v: -∂_t 𝐄 + f(s) (∇×𝐁) + 1/2(∇ f(s) ×𝐁) = 0 . §.§.§ Wave Equations for Electric Field and Magnetic Induction We derive the wave equation for the electric field by differentiating the Maxwell-Ampère law (<ref>) with respect to time, using the Faraday law (<ref>) to eliminate terms involving the magnetic induction, and replacing the divergences of the electric field using the Gauss law (<ref>). The result is: [-1/f(s)∂_t^2 + ∇^2 ] 𝐄 - 1/2∇[f'(s)/f(s)]E_s + 1/2( f'(s)/f(s))[∂_s 𝐄 - 2 ∇ E_s] = 0 , where ∇^2 = ∑_i ∂_i^2 is the spatial Laplacian operator. The wave equation for the axial component E_s is: [-1/f(s)∂_t^2 + ∇^2 ] E_s - 1/2(f'(s)/f(s))∂_s E_s - 1/2d/ds(f'(s)/f(s)) E_s = 0 . Inserting the ansatz: E_s(x) = ϕ_E(s)e^i 𝐤_⊥·𝐯_⊥e^-iω t , where we have defined 𝐤_⊥ = (k_u, k_v)^T and 𝐯_⊥ = (u, v)^T, we get the wave equation for the axial part ϕ_E(s): ϕ_E”(s) - 1/2d/ds[(f'(s)/f(s))ϕ_E(s)] + (ω^2/f(s) - 𝐤_⊥^2 ) ϕ_E(s) = 0 . Alternatively, making use of the Ricci flat condition for the metric (<ref>) the wave equation takes the form: ϕ_E”(s) - 1/2(f'(s)/f(s))ϕ'_E(s) + [ω^2/f(s) - 𝐤_⊥^2 + 1/4(f'(s)/f(s))^2] ϕ_E(s) = 0 . The wave equation for the magnetic induction appears if we take the temporal derivative of the Faraday law, then we use the Maxwell-Ampère law to get rid of the temporal derivative of the electric field and finally we cancel out the divergences of the magnetic induction due to the (magnetic) Gauss law. The result is: [-1/f(s)∂_t^2 + ∇^2 ] 𝐁 = (f'(s)/f(s))(∇ B^s - 3/2∂_s 𝐁) + 1/2(f”(s)/f(s))(𝐚̂·𝐁 - 𝐁) , where 𝐚̂≡ (1, 0, 0) is an unit axial vector. The axial component satisfies: [-1/f(s)∂_t^2 + ∇^2 ] B^s = -1/2(f'(s)/f(s))∂_s B^s . Writing the magnetic induction as: B_s(x) = ϕ_B(s)e^i 𝐤_⊥·𝐯_⊥e^-iω t , the wave equation for the axial dependence of the axial magnetic induction becomes: ϕ_B”(s) + 1/2(f'(s)/f(s))ϕ_B'(s) + [ω^2/f(s) - 𝐤_⊥^2 ] ϕ_B(s) = 0 . §.§.§ Guided Waves in Static Spacetimes: 3+1 Approach We now demonstrate the split into two fundamental sets of modes of the Maxwell equations inside the guide: the TE and TM modes. We generalize the approach followed in <cit.> for the flat case. Taking the cross product of a unit axial vector 𝐚̂ with the Faraday law: 𝐚̂×( ∂_t 𝐁 + ∇×𝐄) = 0 , we obtain: -∂_s 𝐄_⊥ + ∂_t (𝐚̂×𝐁_⊥) = - ∇_⊥ E_s , where 𝐄_⊥ and 𝐁_⊥ are the transversal parts of the fields and ∇_⊥ the transversal gradient. 
Taking the cross product of the Maxwell-Ampère law with 𝐚̂ twice: 𝐚̂×𝐚̂×[ -∂_t 𝐄 + f(s) (∇×𝐁) + 1/2(∇ f(s) ×𝐁) ] = 0 , yields: -1/f(s)∂_t𝐄_⊥ + ∂_s (𝐚̂×𝐁_⊥) + 1/2(f'(s)/f(s))(𝐚̂×𝐁_⊥) = 𝐚̂×∇_⊥ B_s . Introducing harmonic time dependence, equations (<ref>) and (<ref>) become: -∂_s 𝐄_⊥ - iω (𝐚̂×𝐁_⊥) = - ∇_⊥ E_s , iω/f(s)𝐄_⊥ + ∂_s (𝐚̂×𝐁_⊥) + 1/2(f'(s)/f(s))(𝐚̂×𝐁_⊥) = 𝐚̂×∇_⊥ B_s . Defining the operators: Â = - ∂_s , D̂ = -Â + 1/2(f'(s)/f(s)) , and the functions: B = -iω , C = -B/f(s) , the system of equations (<ref>) and (<ref>) can be written compactly as: Â𝐄_⊥ + B(𝐚̂×𝐁_⊥) = - ∇_⊥ E_s , C𝐄_⊥ + D̂(𝐚̂×𝐁_⊥) = 𝐚̂×∇_⊥ B_s . Finally, we decouple the system into one equation for 𝐚̂×𝐁_⊥ and another for 𝐄_⊥, both in terms of the axial components of the field: [i f'(s)/ωD̂ - ÂD̂/C - B]𝐚̂×𝐁_⊥ = ∇_⊥ E_s + [i f'(s)/ω + Â/C]𝐚̂×∇_⊥ B_s , [C - D̂Â/B]𝐄_⊥ = D̂/B∇_⊥ E_s + 𝐚̂×∇_⊥ B_s . These equations demonstrate the existence of TE and TM modes for arbitrary stationary spacetimes. §.§ Guided Waves in Static Spacetimes with Hertzian Potentials This section extends the Hertzian potential formalism to analyze guided electromagnetic waves in static curved spacetimes. We begin by considering a general static metric and develop the wave equations for Hertzian potentials in this context. Two approaches are presented: an initial attempt and a refined solution using an orthonormal coframe. We then explore the implications for material media and derive the equations for TE and TM modes. Special attention is given to the axial part of the solutions, cutoff frequencies, and guide wavelengths. We also examine TEM modes and their unique properties in curved spacetime. Finally, we apply this formalism to the specific case of the Schwarzschild spacetime, providing insights into radial propagation and TEM modes in strong gravitational fields. Let us consider the propagation in the geometry (<ref>) s^2 = - f(z) t^2 + x^2 + y^2 + z^2 . §.§.§ Initial Formulation in Coordinate Basis Following the approach developed for the flat spacetime problem, we consider the Hertzian potential in the 2-form basis: Π = Π_e(t,x,y,z) t ∧ z - Π_m(t,x,y,z)⋆ t ∧ z = Π_e(t,x,y,z) t ∧ z + 1/√(f(z))Π_m(t,x,y,z) x ∧ y . Using the expression (<ref>), we can write the Laplace-de Rham operator acting on the Hertzian potential in terms of its action on the components of the Hertzian potential 2-form: □Π = {□Π_e + 1/2( f'/f)Π_e' + 1/2∂_z[( f'/f)Π_e ]} t ∧ z - {□Π_m + 1/2( f'/f)Π_m' + 1/2∂_z[( f'/f)Π_m ] }⋆ t ∧ z . If we choose the gauge terms to be zero, 𝐆 = 𝐖 = 0, we obtain from □Π = 0 the same equation of motion for both components Π_m and Π_e. We write this equation in terms of a field ϕ(t, x, y ,z) that represents both components: □ϕ + 1/2( f'/f)∂_zϕ + 1/2∂_z[( f'/f)ϕ] = 0 , which can also be written, using (<ref>), as: [ 1/f(z)∂_t^2 - ∇_s^2 + 1/2∂_z( f'/f) + 1/2( f'/f) ∂_z ]ϕ(t,x,y,z) = 0 , This is the same equation as the wave equation for the axial component of the electric field (<ref>). Analogous to our approach for the electromagnetic field, we insert the ansatz: ϕ(x) = ϕ_a(z)e^i 𝐤_⊥·𝐯_⊥e^-iω t , to obtain the equation for the axial factor ϕ_a(z): { ^2/ z^2 - 1/2(f'(z)/f(z))/ z + [ω^2/f(z) - 𝐤_⊥^2 - 1/2/ z(f'(z)/f(z))] }ϕ_a(z) = 0 , Alternatively, for Ricci flat manifolds, using (<ref>), the wave equation takes the form: { ^2/ z^2 - 1/2(f'(z)/f(z))/ z + [ω^2/f(z) - 𝐤_⊥^2 + 1/4(f'(z)/f(z))^2] }ϕ_a(z) = 0 . 
Finally, using (<ref>), we find the Faraday 2-form: 𝐅 = {[1/√(f(z))∂_t∂_y ]Π_m - [∂_x∂_z - 1/2(f'(z)/f(z))∂_x ]Π_e } dt ∧ dx - {[1/√(f(z))∂_t∂_x ]Π_m + [∂_y ∂_z - 1/2(f'(z)/f(z))∂_y ]Π_e } dt ∧ dy + {[1/f(z)∂_t^2 - ∂_z^2 + 1/2(f'(z)/f(z))∂_z - 1/2(f'(z)/f(z))^2 + 1/2( f”(z)/f(z)) ]Π_e } dt ∧ dz - {[1/√(f(z))∂_t∂_y ]Π_e + [∂_x∂_z - 1/2(f'(z)/f(z))∂_x]Π_m }⋆ dt ∧ dx + {[1/√(f(z))∂_t∂_x ]Π_e - [∂_y∂_z - 1/2(f'(z)/f(z))∂_y]Π_m }⋆ dt ∧ dy + {[∂_x^2 + ∂_y^2 ]Π_m }⋆ dt ∧ dz . Using the wave equation (<ref>), we can rewrite the dt ∧ dz component as: {[1/f(z)∂_t^2 - ∂_z^2 + 1/2(f'(z)/f(z))∂_z - 1/2(f'(z)/f(z))^2 + 1/2( f”(z)/f(z)) ]Π_e } dt ∧ dz = {[∂_x^2 + ∂_y^2 ]Π_e } dt ∧ dz , so that the Faraday 2-form becomes: 𝐅 = {[1/√(f(z))∂_t∂_y ]Π_m - [∂_x∂_z - 1/2(f'(z)/f(z))∂_x ]Π_e } dt ∧ dx - {[1/√(f(z))∂_t∂_x ]Π_m + [∂_y ∂_z - 1/2(f'(z)/f(z))∂_y ]Π_e } dt ∧ dy + {[∂_x^2 + ∂_y^2 ]Π_e } dt ∧ dz - {[1/√(f(z))∂_t∂_y ]Π_e + [∂_x∂_z - 1/2(f'(z)/f(z))∂_x]Π_m }⋆ dt ∧ dx + {[1/√(f(z))∂_t∂_x ]Π_e - [∂_y∂_z - 1/2(f'(z)/f(z))∂_y]Π_m }⋆ dt ∧ dy + {[∂_x^2 + ∂_y^2 ]Π_m }⋆ dt ∧ dz . To simplify our notation, we introduce the following differential operators: 𝒟_ti≡1/√(f(z))∂_t ∂_i , i=x,y , 𝒟_iz≡∂_i∂_z - 1/2(f'(z)/f(z))∂_i , i=x,y . Using these operators, we can express the Faraday 2-form more concisely: 𝐅 = {𝒟_tyΠ_m - 𝒟_xzΠ_e } dt ∧ dx - {𝒟_txΠ_m + 𝒟_yzΠ_e } dt ∧ dy + {∇_s^2Π_e } dt ∧ dz - {𝒟_tyΠ_e + 𝒟_xzΠ_m }⋆ dt ∧ dx + {𝒟_txΠ_e - 𝒟_yzΠ_m }⋆ dt ∧ dy + {∇_s^2Π_m }⋆ dt ∧ dz . Interestingly, we can further simplify the wave equation (<ref>). By applying the change of variable: ϕ→√(f(z))Π , we arrive at a new, more compact wave equation for Π: [1/f∂_t^2 - ∇_s^2 - 1/2(f'/f)∂_z ] Π = 0 . This result suggests that we should reconsider our choice of the Hertzian potential, as we will explore in the next subsection. §.§.§ Refined Approach using Orthonormal Coframe To further refine our analysis, we introduce an orthonormal coframe { e^0, e^1, e^2, e^3 } defined as: e^0 = √(f(l)) dt , e^1 = dx , e^2 = dy , e^3 = dl . Using this coframe, we can express the metric tensor as: 𝐠 = - e^0 ⊗ e^0 + e^1 ⊗ e^1 + e^2 ⊗ e^2 + e^3 ⊗ e^3 . We now rewrite the Hertzian potential in terms of this coframe: Π = Π_e(t,x,y,l) e^0 ∧ e^3 - Π_m(t,x,y,l)⋆ e^0 ∧ e^3 = √(f(l))Π_e(t,x,y,l) dt ∧ dl + Π_m(t,x,y,l) dx ∧ dy . The Laplace-de Rham operator acting on the Hertzian potential becomes: □Π = □Π_e e^0 ∧ e^3 - □Π_m ⋆ e^0 ∧ e^3 , where □Π_e and □Π_m are the Laplace-de Rham operators on functions as defined in (<ref>). Choosing vanishing gauge terms 𝐆 = 𝐖 = 0, the dynamics of the Hertzian potential is given by the wave equations: □Π_e(t,x,y,l) = [ 1/f(l)∂_t^2 - ∇_s^2 - 1/2f'(l)/f(l)∂_l]Π_e(t,x,y,l) = 0 , □Π_m(t,x,y,l) = [ 1/f(l)∂_t^2 - ∇_s^2 - 1/2f'(l)/f(l)∂_l]Π_m(t,x,y,l) = 0 . Using (<ref>), we derive the Faraday 2-form: 𝐅 = {1/√(f(l))∂_tyΠ_m - ∂_xlΠ_e } e^0 ∧ e^1 - {1/√(f(l))∂_txΠ_m + ∂_ylΠ_e } e^0 ∧ e^2 + {[ 1/f(l)∂_t^2 - ∂_l^2 - 1/2f'(l)/f(l)∂_l ]Π_e} e^0 ∧ e^3 - {1/√(f(l))∂_tyΠ_e + ∂_xlΠ_m}⋆ e^0 ∧ e^1 + {1/√(f(l))∂_txΠ_e - ∂_ylΠ_m }⋆ e^0 ∧ e^2 + {(∂_x^2 + ∂_y^2)Π_m }⋆ e^0 ∧ e^3 . Applying the wave equation (<ref>), we can simplify the e^0 ∧ e^3 component: {[ 1/f(l)∂_t^2 - ∂_l^2 - 1/2f'(l)/f(l)∂_l ]Π_e} e^0 ∧ e^3 = [(∂_x^2 + ∂_y^2 )Π_e ] e^0 ∧ e^3 = (∇_s^2 Π_e) e^0 ∧ e^3 . This allows us to rewrite the Faraday 2-form as: 𝐅 = ( 1/√(f(l))∂_tyΠ_m - ∂_xlΠ_e ) e^0 ∧ e^1 - (1/√(f(l))∂_txΠ_m + ∂_ylΠ_e ) e^0 ∧ e^2 + (∇_s^2 Π_e) e^0 ∧ e^3 - (1/√(f(l))∂_tyΠ_e + ∂_xlΠ_m)⋆ e^0 ∧ e^1 + (1/√(f(l))∂_txΠ_e - ∂_ylΠ_m ) ⋆ e^0 ∧ e^2 + (∇_s^2 Π_m) ⋆ e^0 ∧ e^3 .
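The change of variable ϕ→√(f(z))Π claimed above can be verified symbolically. The sketch below (Python with sympy; this is our own check, not part of the companion worksheets) confirms that the axial equation for ϕ maps identically onto the compact equation for Π for an arbitrary profile f(z), and also that f(z) = (az + b)² satisfies the Ricci-flat constraint f” = (f')²/2f used earlier.

    import sympy as sp

    z, w, k, a, b = sp.symbols('z omega k_perp a b', positive=True)
    f = sp.Function('f', positive=True)(z)
    Pi = sp.Function('Pi')(z)
    fp = sp.diff(f, z)
    W = w**2 / f - k**2

    phi = sp.sqrt(f) * Pi
    # axial wave equation for phi: phi'' - (f'/2f) phi' + [W - (1/2)(f'/f)'] phi = 0
    eq_phi = (sp.diff(phi, z, 2) - fp / (2 * f) * sp.diff(phi, z)
              + (W - sp.Rational(1, 2) * sp.diff(fp / f, z)) * phi)
    # compact equation for Pi: Pi'' + (f'/2f) Pi' + W Pi = 0
    eq_Pi = sp.diff(Pi, z, 2) + fp / (2 * f) * sp.diff(Pi, z) + W * Pi
    assert sp.simplify(eq_phi - sp.sqrt(f) * eq_Pi) == 0   # equivalence holds for any f(z)

    # the Ricci-flat constraint f'' = (f')^2 / (2 f) is solved by f = (a z + b)^2
    fr = (a * z + b) ** 2
    assert sp.simplify(sp.diff(fr, z, 2) - sp.diff(fr, z) ** 2 / (2 * fr)) == 0

Notably, the equivalence requires no Ricci-flatness: the factor √(f) absorbs the zeroth-order term exactly, which is what motivates the orthonormal coframe below.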
To further simplify our notation, we define the following operators: _t i≡1/√(f(z))∂_ti i=x,y , _ij≡∂_ij i=x,y , Using these operators, we can express the Faraday 2-form more concisely: 𝐅 = {_tyΠ_m - _xlΠ_e } e^0 ∧ e^1 - {_txΠ_m + _ylΠ_e } e^0 ∧ e^2 + {∇_s^2Π_e } e^0 ∧ e^3 - {_tyΠ_e + _xlΠ_m }⋆ e^0 ∧ e^1 + {_txΠ_e - _ylΠ_m }⋆ e^0 ∧ e^2 + {∇_s^2Π_m }⋆ e^0 ∧ e^3 . It is worth noting that in the limit f(l) → 1, we have _ti→∂_ti and _ij→∂_ij. In this case, our solution reduces to the familiar form for guided waves in flat spacetime, as given in equation (<ref>). §.§.§ TE and TM Modes We begin by rewriting the wave equation (<ref>) in a more convenient form: □Π(t,x,y,l) = [ □_t,l - ∇_⊥^2 ]Π_e(t,x,y,l) = 0 , where □_t,l is the two-dimensional Laplace-de Rham operator acting only on the temporal and axial dimensions. As mentioned in section <ref>, we propose an ansatz of the form: Π(x) = A e^-iω tψ(u,v)ϕ(l) , Here, A is a normalization constant and ψ(u,v) is an eigenstate of the transverse momentum operators ∂_u and ∂_v. The transversal dependence of the field satisfies the scalar Helmholtz equation (as in <cit.>): (∇_⊥^2 + 𝐤_⊥^2) ψ_𝐤_⊥^2(u,v) = 0 , where ∇_⊥^2 ≡∂_u^2 + ∂_v^2 is the transverse Laplacian. This equation is subject to boundary conditions that constrain the set of admissible eigenvalues and eigenstates. With this ansatz, the wave equation (<ref>) for ϕ(l) becomes: {^2/ l^2 + 1/2[f'(l)/f(l)]/ l + [ω^2/f(l) - 𝐤_⊥^2] }ϕ(l) = 0 , This can be written in terms of the Laplace-de Rham operator acting on axial functions: [□_l - (ω^2/f(l) - 𝐤_⊥^2)]ϕ(l) = 0 , or in terms of the Laplace-Beltrami operator, ∇^2, acting on axial functions (minus the Laplace-de Rham operator): - ∂_t^2 = [∇^2 + (ω^2/f(l) - 𝐤_⊥^2)]ϕ(l) = 0 . Alternatively, following <cit.>, we can write the wave equation (<ref>) as: - ∂_t^2 ϕ(x) = -f(l){∇_⊥^2 + ^2/ l^2 + 1/2[f'(l)/f(l)]/ l}ϕ(x) ≡ K ϕ(x) . With our ansatz, the operator K becomes: K = f(l){𝐤_⊥^2 - ^2/ l^2 - 1/2[f'(l)/f(l)]/ l} . The wave equation for the axial part ϕ(l) can then be written: (K - ω^2 )ϕ(l) = 0 . It is natural to take the eigenstates of the operator K as a basis for the solutions: K ϕ_κ_𝐤_⊥^2(l) = κ_𝐤_⊥^2ϕ_κ_𝐤_⊥^2(l) , where both the eigenstates and the eigenvalues are functions of the constant transverse momentum 𝐤_⊥^2 of the wave. The eigenvalue equation (<ref>) can be written as: {^2/ l^2 + 1/2[f'(l)/f(l)]/ l + [κ_𝐤_⊥^2/f(l) - 𝐤_⊥^2] }ϕ_κ_𝐤_⊥^2(l) = 0 . An eigenstate ϕ_κ_𝐤_⊥^2(l) is a solution of the problem (<ref>) if the 'mass-shell condition' is satisfied: κ_𝐤_⊥^2 = ω^2 , In this case, both (<ref>) and (<ref>) are the same equation. Thus, a solution of the problem takes the form: Π (x) = A e^± i ω tψ_𝐤_⊥^2(u,v)ϕ_κ_𝐤_⊥^2(l) , where ψ_𝐤_⊥^2(u,v) is a solution of (<ref>), and ϕ_κ_𝐤_⊥^2(l) is an eigenstate of the operator K (<ref>), with eigenvalue κ_𝐤_⊥^2 satisfying the 'mass-shell condition' (<ref>). Axial Part To analyze the axial part of the solution, we split it into a deformation of a plane wave: ϕ_κ_𝐤_⊥^2(l) = q_κ_𝐤_⊥^2(l)e^± i k_l l , This leads to the following equation for q_κ_𝐤_⊥^2(l): {^2/ l^2 + [ 1/2f'(l)/f(l)± 2i k_l]/ l + [ω^2/f - 𝐤^2 ± i1/2f'(l)/f(l) k_l ]} q_κ_𝐤_⊥^2(l) = 0 . Alternatively, we can express the axial part as: ϕ_κ_𝐤_⊥^2(l) = q_κ_𝐤_⊥^2(l)e^i θ(l) , where we define: k_l(l) ≡θ'(l) , and 𝐤^2 = 𝐤_⊥^2 + k_l^2 . This leads to the following equation: {^2/ l^2 + [ 1/2f'(l)/f(l) + 2i k_l(l)]/ l + [ω^2/f - 𝐤^2(l) + i1/2f'(l)/f(l) k_l(l) + i k_l'(l)]} q_κ_𝐤_⊥^2(l) = 0 . 
For propagating modes, we assume k_l(l) ∈ℝ, and q(l) to be the real amplitude of ϕ(l). Separating equation (<ref>) into real and imaginary parts yields: 0 = ω^2/f(l) - 𝐤_⊥^2 - k_l^2(l) + (q”(l)/q(l)) + 1/2(f'(l)/f(l)) (q'(l)/q(l)) (Real part) , 0 = 1/2(f'(l)/f(l))k_l(l) + k_l'(l) + 2k_l(l) (q'(l)/q(l)) (Imaginary part) . The imaginary part provides an equation relating the phase to the amplitude: - d/dl[ln (k_l) ] = d/dl[ln (√(f)q^2) ] , which implies: k_l(l) = d/dlθ(l) = A/√(f(l))q^2(l) , where A is an integration constant[Note that the integral expression for θ(l) appears to be a relativistic invariant]. Substituting this into the real part of equation (<ref>) yields: q”(l) + 1/2(f'(l)/f(l))q'(l) + [ω^2/f(l) - 𝐤_⊥^2]q(l) = A^2/f(l)q^3(l) , which is a dissipative Ermakov equation. Cutoff Frequency Using equation (<ref>), we can express the transversal momentum in terms of ω and ϕ: 𝐤_⊥^2 = ω^2/f(l) + ϕ”(l)/ϕ(l) + 1/2(f'(l)/f(l)) ϕ'(l)/ϕ(l) . The momentum 𝐤_⊥^2, also called the cutoff wavenumber <cit.>, is determined by boundary conditions. The pair {ω, 𝐤_⊥} determines the solution ϕ(l). Note that the right-hand side cannot equal ω^2 as in the flat case, due to the 1/f(l) factor, which has implications for the cutoff frequency. Alternatively, using (<ref>), we find: 𝐤_⊥^2 = ω^2/f(l) - k_l^2(l) + i/2(f'(l)/f(l))k_l(l) + i k_l'(l) + (q”(l)/q(l)) + [1/2(f'(l)/f(l)) + 2ik_l(l) ](q'(l)/q(l)) . For propagating modes, k_l(l) ∈ℝ. The wave does not propagate if k_l(l) = 0, defining the cutoff frequency as: 𝐤_⊥^2 = ω^2/f(l) + (q”(l)/q(l)) + 1/2(f'(l)/f(l))(q'(l)/q(l)) . The guide wavelength is defined as <cit.>[This is a first-order calculation, as we seek Δ l such that θ(l + Δ l ) = θ(l ) + 2π. Linearizing gives θ (l) + k_l(l) Δ l ≃θ(l ) + 2π, leading to Δ l ≡λ_g ≃2π/k_l(l).]: λ_g ≡2π/k_l(l) . §.§.§ TEM Modes TEM (Transverse Electromagnetic) modes require the transverse momentum to vanish, leading to a massless Helmholtz equation: ∇_⊥^2 ψ(u,v) = 0 . This condition allows us to describe TEM modes using either Π_e or Π_m <cit.>, as the vanishing transverse momentum simplifies the field structure. The wave equation for the Hertzian potential becomes a massless Klein-Gordon equation with conformal symmetry. To exploit this, we rewrite our metric (<ref>) in a conformally flat form: ds^2 = f(l)[-dt^2 + 1/f(l) dl^2] + du^2 + dv^2 . Introducing the coordinate change dr = 1/√(f(l)) dl, we obtain: ds^2 = f(r(l))[-dt^2 + dr^2] + du^2 + dv^2 , which is conformally flat in the temporal and axial dimensions. Given the conformal symmetry and the irrelevance of transverse dependence for TEM modes, we can adapt the Minkowski spacetime solutions: Π(t,r) = e^± i ω te^± i k_l r . Reverting to our original coordinates yields the TEM mode solutions: Π(t,l) = e^± i ω te^± i k_l ∫^l dl'/√(f(l')) . This solution resembles geodesic motion, as seen in radial propagation in Schwarzschild spacetime. §.§.§ Application to Radial Propagation on Schwarzschild Spacetime We now apply our formalism to the Schwarzschild spacetime, focusing on radial propagation. Our goal is to evaluate the axial-dependent coefficient of the wave equation (<ref>). In Schwarzschild spacetime, the differential of proper length is given by: dl = 1/√(1 - R_s/r) dr . Using this, we can derive: d/dl f(r(l)) = ∂_r f(r(l)) dr(l)/dl = ∂_r f(r(l))√(1 - R_s/r(l)) = r_s/r(l)^2√(1 - R_s/r(l)) . Hence: f'/f = r_s/r^21/√(1 - r_s/r) = 1/r_s( r_s/r)^21/√(1 - r_s/r) .
Using the Taylor series (<ref>), we can approximate this term as: f'/f = 1/r_s( r_s/r)^2 [1 + 1/2(r_s/r) + 3/8(r_s/r)^2 + ⋯] ≃1/r_s( r_s/r)^2 . On the other hand: ω^2/f(l) = ω^2 (1 + (r_s/r) + (r_s/r)^2 + ⋯) Combining these results, we can write the wave equation in the form: { ^2/ l^2 + 1/2r_s(r_s/r)^2/ l + [ω^2 - 𝐤_⊥^2 + ω^2(r_s/r) ] }ϕ(l) = 0 . Neglecting terms of order higher than (r_s/r), we obtain: { ^2/ l^2 + [ω^2 - 𝐤_⊥^2 + ω^2(r_s/r) ] }ϕ(l) = 0 . TEM Modes For TEM modes, we previously derived the exact solutions (<ref>): Π(t,l) = e^± i ω te^± i k_l ∫^r(l) l' 1/√(f(l')) . Using (<ref>), the integral in (<ref>) becomes: ∫^l l' 1/√(f(l')) = ∫^l r 1/f(r) = ∫^l r 1/1 - R_s/r' , This integral has exact solution[ ∫ x 1/1 - a/x = a ln (x-a) + x + constant ]: ∫^l r 1/1 - R_s/r' = R_s ln (l-R_s) + l + constant . Consequently, the TEM mode solution (<ref>) in Schwarzschild spacetime can be expressed as: Π(t,l) = e^± i ω te^± i k_l l(l-R_s)^± R_s . This result reveals how the Schwarzschild geometry modifies the propagation of TEM modes, introducing a power-law correction term dependent on the Schwarzschild radius. §.§ Effects of Material Media §.§.§ Validity of Constant Linear Constitutive Relations in Curved Spacetime The application of constant linear constitutive relations in curved spacetime is supported by the principle of local flatness. As noted by <cit.>, experiments probing the interaction of photons with inertia test the covariant formulation of Maxwell's equations under the assumption that local optical properties of a medium are unaffected by acceleration. In electromagnetism, for linear, homogeneous, isotropic materials with instantaneous response to changes in electric field, we have: 𝐃 = ϵ𝐄 where ϵ is a scalar permittivity. The instantaneous nature of this response suggests that it does not depend on the passage of time. Moreover, the constitutive relation is local, implying that locally we have Minkowski spacetime. Consequently, the constitutive relation should maintain the same magnitude at every point, per unit of proper length. This motivation provides the basis for our treatment of macroscopic Maxwell equations in curved spacetime using Hertzian potentials. §.§.§ Solution to the Macroscopic Maxwell Equations Following the approach of <cit.>, let us consider the macroscopic Maxwell equations (<ref>): 𝐅 = 0 𝐆 = 𝐉_free 𝐆 = ⋆χ𝐅 . The form of these equations suggests a general solution of the form: 𝐅 = δΠ_1 - 𝐆_1 𝐆 = δΠ_2 - 𝐆_2 , where we have taken into account that in our case 𝐉_free = 0. While the potentials Π_1 and Π_2 are initially different, we can exploit the gauge freedom to set them equal. This can be achieved, for example, by writing 𝐆_1 = 𝐆_1' + δ(Π_1 -Π_2 ). Thus, we can simplify our equations to: 𝐅 = δΠ - 𝐆_1 𝐆 = δΠ - 𝐆_2 . Applying the constitutive equation, we can write: δΠ - 𝐆_2 = ⋆χ(δΠ - 𝐆_1 ) , which implies that the potential Π satisfies the equation: (1 + ⋆χ) δΠ = 𝐆_2 - ⋆χ𝐆_1 . §.§ Interferometry This section examines the interference phenomena in curved spacetime waveguides using the Hertzian potential formalism. We consider two waveguides with different proper lengths but common origin and endpoint. By superposing the solutions for individual waveguides, we derive the combined Hertzian potential and corresponding Faraday 2-form at the intersection point. This approach allows us to analyze the interference patterns that arise from the interaction of electromagnetic waves propagating through different spacetime paths. 
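As a rough sense of scale for such interference experiments: treating the weak-field axial equation above in a WKB spirit, the local wavenumber of a radial mode can be read off as k(l)^2 ≈ ω^2 - 𝐤_⊥^2 + ω^2 r_s/r, so a radial guide accumulates extra phase relative to an identical guide in flat spacetime. The following numerical sketch (Python; the carrier frequency, guide length, and the identification r ≈ r_0 + l are all illustrative assumptions, not results of this paper) estimates this excess for a 1 km fiber at the surface of the Earth.

    import numpy as np
    from scipy.integrate import quad

    R_s, r0 = 8.87e-3, 6.371e6              # Earth values, in meters
    w = 2 * np.pi * 193.4e12 / 2.998e8      # optical carrier ~193 THz; with c = 1 this is omega in rad/m
    kperp = 0.0                              # TEM-like mode, no transverse momentum

    k_flat = np.sqrt(w**2 - kperp**2)
    # excess phase: integral of (local WKB wavenumber - flat wavenumber) along the guide
    excess, _ = quad(lambda l: np.sqrt(w**2 - kperp**2 + w**2 * R_s / (r0 + l)) - k_flat,
                     0.0, 1.0e3)
    print(excess)                            # ~2.8 radians over 1 km

An order-one phase shift of this kind is what the superposition formalism of the next subsection is designed to track: for two unit-amplitude modes with accumulated phases θ_a and θ_b, the recombined intensity is proportional to 2 + 2cos(θ_a - θ_b).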
§.§.§ General Superposition of Guided Waves Consider two waveguides, labeled a and b, with proper lengths L_a and L_b, respectively. Both waveguides share a common origin at event P_0, corresponding to proper-length coordinates l_a=l_b=0, and extend along different paths to a common endpoint at event P_f, where l_a=L_a and l_b=L_b. We describe the situation using a single coordinate time t and assume that at the origin P_0, the coframe { e^0, e^x, e^y, e^l } is identical for both waveguides. We obtain the field equations as described previously and assume solutions of the form: Π_𝐚 = Π_E,a(t,x_a,y_a,l_a) e^0 ∧ e^l_a - Π_M,a(t,x_a,y_a,l_a)⋆ e^0 ∧ e^l_a Π_𝐛 = Π_E,b(t,x_b,y_b,l_b) e^0 ∧ e^l_b - Π_M,b(t,x_b,y_b,l_b)⋆ e^0 ∧ e^l_b , where Π_a and Π_b represent the field states corresponding to waveguides a and b separately, with different metric factors f_a(l_a) and f_b(l_b) assumed. The components of the Hertzian potentials are the most general linear combinations of TE and TM modes: Π_E,a(t,x_a,y_a,l_a) = ∑_n,m c_E,a; n, mΠ_E, a; n, m(t,x_a,y_a,l_a) Π_M,a(t,x_a,y_a,l_a) = ∑_n,m c_M,a; n, mΠ_M, a; n, m(t,x_a,y_a,l_a) Π_E,b(t,x_b,y_b,l_b) = ∑_n,m c_E,b; n, mΠ_E, b; n, m(t,x_b,y_b,l_b) Π_M,b(t,x_b,y_b,l_b) = ∑_n,m c_M,b; n, mΠ_M, b; n, m(t,x_b,y_b,l_b) . To obtain the interference pattern at P_f, we align the axes e^i_a and e^i_b at that point (performing a suitable rotation if necessary). By exploiting the linearity of Maxwell's equations (which extends to Hertzian potentials), we derive the Hertzian field at P_f as a superposition: Π(t,x,y,L_a = L_b)= [∑_n,m( c_E,a; n, mΠ_E, a; n, m + c_E,b; n, mΠ_E, b; n, m) (e^0 ∧ e^l)]|_(t,x,y,L_a = L_b) - [∑_n,m( c_M,a; n, mΠ_M, a; n, m + c_M,b; n, mΠ_M, b; n, m) (⋆ e^0 ∧ e^l_a)]|_(t,x,y,L_a = L_b) . We can express this more compactly by introducing an index w that sums over waveguides a and b: Π(t,x,y,L_a = L_b)= [∑_w;n,m( c_E,w; n, mΠ_E, w; n, m) (e^0 ∧ e^l)]|_(t,x,y,L_a = L_b) - [∑_w;n,m( c_M,w; n, mΠ_M, w; n, m) (⋆ e^0 ∧ e^l_a)]|_(t,x,y,L_a = L_b) . This Hertzian potential leads to the Faraday 2-form, which is also a superposition of the individual Faraday 2-forms for each waveguide: 𝐅 = ∑_w;n,m( c_M,w; n, m_tyΠ_M, w; n, m - c_E,w; n, m_xlΠ_E, w; n, m) e^0 ∧ e^1 - ∑_w;n,m( c_M,w; n, m_txΠ_M, w; n, m + c_E,w; n, m_ylΠ_E, w; n, m) e^0 ∧ e^2 + ∑_w;n,m( c_E,w; n, m∇_s^2Π_E, w; n, m) e^0 ∧ e^3 - ∑_w;n,m( c_E,w; n, m_tyΠ_E, w; n, m + c_M,w; n, m_xlΠ_M, w; n, m) ⋆ e^0 ∧ e^1 + ∑_w;n,m( c_E,w; n, m_txΠ_E, w; n, m - c_M,w; n, m_ylΠ_M, w; n, m) ⋆ e^0 ∧ e^2 + ∑_w;n,m(c_M,w; n, m∇_s^2Π_M, w; n, m) ⋆ e^0 ∧ e^3 . This formalism for superposing Hertzian potentials from two waveguides results in a combined Faraday 2-form that describes the interference pattern at their intersection. It provides a powerful tool for studying electromagnetic interference in curved spacetime, potentially enabling the detection of subtle gravitational effects on guided waves. Future research could explore specific geometric configurations and their resulting interference patterns, offering new insights into the interplay between gravity and electromagnetism. § CONCLUSIONS In this work we have developed a framework for analyzing guided electromagnetic waves in static curved spacetimes using Hertzian potentials. Our key findings include: * The development of a formalism for studying interference patterns in curved spacetime waveguides, potentially enabling new precision tests of general relativity. 
* The derivation of wave equations for TE and TM modes in curved spacetime, generalizing the flat spacetime results. The analysis of guided waves in static curved spacetimes using Hertzian potentials has yielded several key results: * The wave equation for the axial part of the Hertzian potential in curved spacetime (equation (<ref>)) generalizes the flat spacetime case, adding a dependence on the metric function f(l). * The transverse part of the field satisfies a scalar Helmholtz equation (equation (<ref>)), similar to the flat spacetime case. * The axial part of the solution can be expressed as a deformation of a plane wave (equation (<ref>)), leading to a dissipative Ermakov equation for the amplitude (equation (<ref>)): q”(l) + 1/2(f'(l)/f(l))q'(l) + [ω^2/f(l) - 𝐤_⊥^2]q(l) = A^2/f(l)q^3(l) . * The cutoff wavenumber in curved spacetime (equation (<ref>)) depends on the metric function and its derivatives, generalizing the flat spacetime result. * For TEM modes, the solution (equation (<ref>)) is a simple modification of the flat spacetime case, with the phase corrected by the proper time along the waveguide: Π(t,l) = e^± i ω te^± i k_l ∫^l dl'/√(f(l')) . * In the specific case of radial propagation in Schwarzschild spacetime, the TEM mode solution (equation (<ref>)) includes a power-law correction term dependent on the Schwarzschild radius: Π(t,l) = e^± i ω te^± i k_l l(l-R_s)^± i k_l R_s . These results demonstrate how spacetime curvature modifies the propagation of electromagnetic waves in waveguides, providing a foundation for studying guided waves in more general curved spacetime scenarios. Future work could explore specific applications of this framework, such as precision tests of gravitational effects on electromagnetic waves in long-baseline experiments. Additionally, extending this analysis to dynamic spacetimes and investigating the interplay between gravity and quantum optics in waveguides present exciting avenues for further research. As future research directions and TODO items, we propose: * Complete the theory for propagation in material media, extending our analysis to include the effects of dielectric and magnetic materials in curved spacetime. * Derive applications with observables that are dependent on curved spacetime, beyond the proper time differences explored in previous work <cit.>. In particular, investigate the effects on TE and TM modes, which could provide novel ways to detect spacetime curvature. * Explore observables on quantum states (photons) that are influenced by curved spacetime. This could involve adapting classical setups like the Michelson-Morley experiment to the quantum regime in curved spacetime waveguides. For example, building upon the work of Izmailov et al. <cit.> and Kowalski <cit.>, one could investigate the use of fiber optics or dielectrics in a Michelson interferometer to enhance the gravitational effects on light propagation. These proposed extensions of our work have the potential to bridge the gap between classical and quantum effects in curved spacetime, possibly leading to new experimental tests of general relativity and quantum mechanics in curved spacetime. § CODE AVAILABILITY Companion Maple Worksheets with some of the symbolic calculations in Section <ref> can be found in the GitHub repository <cit.>.
§ ACKNOWLEDGEMENTS This work was conducted during a research internship in the Department of Optics of Universidad Complutense de Madrid between 2017 and 2018, in the context of the Quantum Metrology for General Relativity project (Spanish Ministerio de Educación, Cultura y Deporte (MECD) Collaboration Grant for the 2017-2018 academic year). I would like to thank Professor Alfredo Luis Aina for his valuable support and insightful discussions throughout the internship. I would also like to thank Daniel Sobral-Blanco for his fruitful comments.
http://arxiv.org/abs/2407.03279v1
20240703170444
Finely Stratified Rerandomization Designs
[ "Max Cytrynbaum" ]
econ.EM
[ "econ.EM", "math.ST", "stat.TH" ]
Finely Stratified Rerandomization Designs Max Cytrynbaum ========================================= § ABSTRACT We study estimation and inference on causal parameters under finely stratified rerandomization designs, which use baseline covariates to match units into groups (e.g. matched pairs), then rerandomize within-group treatment assignments until a balance criterion is satisfied. We show that finely stratified rerandomization does partially linear regression adjustment “by design,” providing nonparametric control over the covariates used for stratification, and linear control over the rerandomization covariates. We also introduce novel rerandomization criteria, allowing for nonlinear imbalance metrics and proposing a minimax scheme that optimizes the balance criterion using pilot data or prior information provided by the researcher. While the asymptotic distribution of generalized method of moments (GMM) estimators under stratified rerandomization is generically non-Gaussian, we show how to restore asymptotic normality using optimal ex-post linear adjustment. This allows us to provide simple asymptotically exact inference methods for superpopulation parameters, as well as efficient conservative inference methods for finite population parameters. § INTRODUCTION Stratified randomization is commonly used to increase statistical precision in experimental research.[For example, <cit.> reports a survey of 50 experimental papers in the AER and AEJ from 2018-2023, where 57% used some form of stratified randomization.] Recent theoretical work (e.g. <cit.>) has shown that fine stratification, which randomizes within small groups of units tightly matched on baseline covariate information, makes unadjusted estimators like difference of means semiparametrically efficient.[See <cit.>, <cit.>, and <cit.> for more detailed discussion.] In finite samples, however, the performance of such designs can deteriorate rapidly with the dimension of the stratification variables due to a curse of dimensionality in matching.[Under regularity conditions, the convergence rate of finite sample variance to asymptotic variance is O(n^-2/(d+1)) for dimension d covariates, see <cit.>.] This motivates the search for alternative designs that insist upon nonparametric balance for a few important covariates, but only attempt to balance linear functions of the remaining variables. In this paper, we propose finely stratified rerandomization designs, which first tightly match the units into groups using a small set of important covariates, then rerandomize within groups until a balance criterion on the remaining covariates is satisfied. Our first contribution is to derive the asymptotic distribution of generalized method of moments (GMM) estimators under stratified rerandomization, allowing estimation and inference on generic causal parameters defined by moment equalities. We consider both superpopulation and finite population parameters, the latter of which may be more appropriate for experiments run in a convenience sample (<cit.>). As in previous work on rerandomization (e.g. <cit.>), the asymptotic distribution of GMM estimators is an independent sum of a normal and a truncated normal term. Modulo this residual truncated term, we show that the asymptotic variance of unadjusted estimation under stratified rerandomization is the same as that of semiparametrically adjusted GMM (e.g. <cit.>) under an iid design.
Intuitively, stratified rerandomization implements partially linear regression adjustment “by design.” Our second contribution is to introduce several novel forms of rerandomization based on nonlinear balance criteria. For example, we allow acceptance or rejection based on the difference of covariate density estimates within each treatment arm, attempting to balance nonlinear features of the covariate distribution. Similarly, we propose a design that rerandomizes until a nonlinear estimate of the propensity score is approximately constant, effectively forcing the covariates to have no predictive power for treatment assignments. In both cases, these nonlinear rerandomization schemes are asymptotically equivalent to standard rerandomization based on a difference of covariate means, but with an implicit choice of covariates and acceptance region, which we characterize. Our third contribution is to study optimization of the balance criterion itself. We suggest a novel minimax approach that allows the researcher to specify prior information about the relationship between covariates and outcomes, then rerandomizes until the worst case correlation consistent with this prior information is small. If the prior information set contains the truth, this design provides strong control over the variance of the truncated normal term in the asymptotic distribution. Building on this, we show that if the prior information set is a confidence region estimated from pilot data, then this minimax design bounds the truncated normal variance with high probability. Our fourth contribution is to provide simple t-statistic based inference methods for general causal parameters under stratified rerandomization designs. To do this, we first characterize and provide a feasible implementation of the optimal ex-post linear adjustment for GMM estimation under stratified rerandomization.[This extends recent work on optimal adjustment under pure stratified randomization for ATE estimation, e.g. see <cit.>, <cit.>, or <cit.>.] Crucially, optimal ex-post adjustment makes the asymptotic distribution insensitive to the rerandomization acceptance criterion, removing the truncated normal term from the limiting distribution and restoring asymptotic normality. For superpopulation parameters, our inference methods are asymptotically exact. For finite population parameters, our methods are asymptotically conservative, but still exploit the efficiency gains from both stratified rerandomization and ex-post optimal adjustment. §.§ Related Literature This paper builds on the literature on fine stratification in econometrics as well as the literature on rerandomization in statistics. Stratified randomization has a long history in statistics, see <cit.> for a survey. Recent work on fine stratification in econometrics includes <cit.>, <cit.>, <cit.>, <cit.>, and <cit.>. Some important theoretical contributions to the literature on rerandomization include <cit.> and <cit.>, <cit.>, and <cit.>. We build on both of these literatures, studying the consequence of rerandomizing treatments within data-adaptive fine strata. We show that finely stratified rerandomization does semiparametric (partially linear) regression adjustment “by design,” providing nonparametric control over a few important variables and linear control over the rest. For our main asymptotic theory (Section <ref>), the most closely related previous work is <cit.> and <cit.>. 
<cit.> study estimation of the sample average treatment effect (SATE) under stratified rerandomization, with quadratic imbalance metrics based on the Mahalanobis norm. We study rerandomization within data-adaptive fine strata, providing asymptotic theory for generic superpopulation and finite population causal parameters defined by moment equalities. We also allow for essentially arbitrary rerandomization acceptance criteria, not necessarily based on quadratic forms. <cit.> study estimation of superpopulation parameters defined by moment equalities under pure stratified randomization. We extend these results to stratified rerandomization as well as generic finite population parameters, providing “SATE-like” versions of the parameters in <cit.>.[These parameters can be seen as causal versions of the conditional estimand defined in <cit.>.] In concurrent work, <cit.> study GMM estimation of univariate superpopulation parameters under stratified rerandomization with fixed, discrete strata. We study significantly more general forms of stratification and rerandomization criteria than considered in their work, allowing for both finite and superpopulation parameters of arbitrary fixed dimension. For nonlinear rerandomization (Section <ref>), the closest related results are <cit.> and <cit.>. <cit.> rerandomize based on the p-value of a logistic regression coefficient, while we rerandomize until a general smooth propensity estimate is close to constant. To the best of our knowledge, we present the first asymptotic theory for rerandomization based on the difference of nonlinear (e.g. density) estimates. For acceptance region optimization (Section <ref>), the closest related results are <cit.>, who study the optimal choice of norm for quadratic rerandomization, while <cit.> chooses a specific quadratic rerandomization using a Bayesian criterion, in both cases for rerandomization without stratification. We provide a novel minimax approach that accepts or rejects based on the value of a convex penalty function, tailored to prior information provided by the researcher. Our work on optimal adjustment (Section <ref>) extends recent work on adjustment for stratified designs, e.g. <cit.>, <cit.>, <cit.>, to stratified rerandomization and GMM parameters. Finally our inference methods (Section <ref>) build on previous work by <cit.>, <cit.>, and <cit.>. To the best of our knowledge we provide the first asymptotically exact inference for causal GMM parameters under stratified rerandomization, as well as conservative inference for their finite population analogues.
§ FRAMEWORK AND DESIGNS
Consider data W_i = (R_i, S_i(1), S_i(0)) with (W_i)_i=1^n ∼_iid F. The S_i(d) ∈ ℝ^d_S denote potential outcome vectors for a binary treatment d ∈{0, 1}, while R_i denote other pre-treatment variables, such as covariates. For treatment assignments D_i ∈{0, 1}, the realized outcome S_i = S_i(D_i) = D_i S_i(1) + (1 - D_i) S_i(0). In what follows, for any array (a_i)_i=1^n we denote E_n[a_i] = (1/n)∑_i=1^n a_i, with a̅_1 = E_n[a_i D_i] / E_n[D_i] and a̅_0 = E_n[a_i (1 - D_i)] / E_n[(1 - D_i)]. Next, we define stratified rerandomization designs. Let treatment proportions p = l/k and suppose that n is divisible by k for notational simplicity.
* (Stratification). Partition the experimental units into n/k disjoint groups 𝒢_n with {1, …, n} = ⋃_G ∈ 𝒢_n G disjointly and |G| = k. Let ψ = ψ(R) with ψ ∈ ℝ^d_ψ denote a vector of stratification variables, which may be continuous or discrete.
Suppose the groups satisfy the homogeneity condition[The matching condition in Equation <ref> was introduced by <cit.> for matched pairs randomization (k=2). See <cit.> and <cit.> for generalizations.] (1/n) ∑_G ∈ 𝒢_n ∑_i,j ∈ G |ψ_i - ψ_j|_2^2 = o_p(1). Require that the groups only depend on the stratification variables and data-independent randomness U, so that G = G(ψ_1:n, U) for each G ∈ 𝒢_n.
* (Randomization). Independently for each group G with |G| = k, draw treatment variables (D_i)_i ∈ G by setting D_i = 1 for exactly l out of k units, uniformly at random.
* (Check Balance). For rerandomization covariates h = h(R), consider an imbalance metric T_n = √n(h̅_1 - h̅_0) + o_p(1).[In particular, we require T_n = √n(h̅_1 - h̅_0) + o_p(1) under “pure” stratified randomization, the design in steps (1) and (2) only, studied e.g. in <cit.>. We give several examples below.] For an acceptance region A ⊆ ℝ^d_h, check if the balance criterion T_n ∈ A is satisfied. If so, accept D_1:n. If not, repeat from the beginning of (2).
Intuitively, steps (1) and (2) describe a data-adaptive “matched k-tuples” design, while step (3) rerandomizes within k-tuples until the balance criterion is satisfied; a code sketch of the full design appears at the end of this section. Equation <ref> is a tight-matching condition, requiring that the groups are clustered locally in ψ space. <cit.> provides algorithms to match units into groups that satisfy this condition for any fixed k. [Matched Pairs Rerandomization] For k = 2, the optimal matched pairs in Equation <ref> can be found by <cit.> algorithm. Suppose we have done so, and consider rerandomizing until the imbalance criterion n (X̅_1 - X̅_0)' Σ̂_n^-1 (X̅_1 - X̅_0) ≤ ϵ^2 is satisfied for positive-definite Σ̂_n →_p Σ.[Several recent papers in the statistics literature have considered such criteria. See e.g. <cit.>, <cit.>, <cit.> among others.] Let T_n ≡ √n Σ̂_n^-1/2 (X̅_1 - X̅_0) = √n(h̅_1 - h̅_0) + o_p(1) for modified covariates h = Σ^-1/2 X. Then this quadratic criterion is equivalent to T_n ∈ A for the acceptance region A = {x: |x|_2 ≤ ϵ}. We study the efficiency consequences of different covariates and acceptance regions in detail in Sections <ref> and <ref> below. [Stratification] Stratification without rerandomization can be obtained by setting A = ℝ^d_h in Definition <ref>. Treatment effect estimation under such designs was studied in <cit.>, <cit.>, and <cit.>. Definition <ref> allows for fine stratification (also known as matched k-tuples), with the number of data-dependent groups G = G(ψ_1:n, U) growing with n. It also allows for coarse stratification with fixed strata s ∈{1, …, m} and fixed m, as in <cit.>, which can be obtained in this framework by setting ψ = s and matching units into groups at random within the strata {i: s_i = s}. [Complete Randomization] For p = l/k, we say that D_1:n are completely randomized with probability p if P(D_1:n = d_1:n) = 1 / \binom{n}{np} for all d_1:n with ∑_i d_i = np.[For notational simplicity, we may assume that np ∈ ℕ.] If so, we denote D_1:n ∼ CR(p). <cit.> shows that CR(p) randomization can be obtained by setting ψ = 1 and A = ℝ^d_h in Definition <ref>, matching units into groups at random. Causal Estimands. Next, we introduce a generic family of causal estimands defined by moment equalities. Let g(D, R, S, θ) ∈ ℝ^d_g be a score function for generalized method of moments (GMM) estimation. Recall W = (R, S(1), S(0)) and define the projection ϕ(W, θ) = E[g(D, R, S, θ) | W], so that E[ϕ(W, θ)] = 0 if and only if E[g(D, R, S, θ)] = 0. The superpopulation estimand θ_0 is the unique solution to E[ϕ(W, θ)] = 0. The finite population estimand θ_0,n is the unique solution to E_n[ϕ(W_i, θ)] = 0.
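To make Definition <ref> concrete, the following is a minimal simulation sketch of the design with matched pairs (k = 2, l = 1) and the quadratic balance criterion of the matched pairs example above. The function names, the toy data, and the greedy nearest-neighbor matching are all our own illustrative choices; the theory assumes an optimal non-bipartite matching, for which greedy matching is only a crude stand-in.

```python
import numpy as np

def greedy_pairs(psi):
    """Match units into pairs, greedily, by squared distance in psi-space.
    (A crude stand-in for the optimal non-bipartite matching algorithm.)"""
    unmatched = list(range(len(psi)))
    pairs = []
    while unmatched:
        i = unmatched.pop(0)
        j = min(unmatched, key=lambda m: np.sum((psi[i] - psi[m]) ** 2))
        unmatched.remove(j)
        pairs.append((i, j))
    return pairs

def stratified_rerandomize(psi, h, eps, rng, max_draws=10_000):
    """Steps (1)-(3) of the design: match on psi, randomize within pairs,
    accept when n * (hbar_1 - hbar_0)' Sigma_n^{-1} (hbar_1 - hbar_0) <= eps**2."""
    n = len(h)
    pairs = greedy_pairs(psi)                # step (1): stratification
    Sigma_inv = np.linalg.inv(np.cov(h.T))   # estimated covariance of h
    for _ in range(max_draws):
        D = np.zeros(n, dtype=int)
        for i, j in pairs:                   # step (2): randomize within pairs
            D[i if rng.random() < 0.5 else j] = 1
        diff = h[D == 1].mean(axis=0) - h[D == 0].mean(axis=0)
        if n * diff @ Sigma_inv @ diff <= eps ** 2:  # step (3): balance check
            return D
    raise RuntimeError("acceptance region too small; increase eps")

rng = np.random.default_rng(0)
psi = rng.normal(size=(100, 1))   # stratification variable
h = rng.normal(size=(100, 2))     # rerandomization covariates
D = stratified_rerandomize(psi, h, eps=1.0, rng=rng)
```

Smaller eps tightens the acceptance region A = {x: |x|_2 ≤ ϵ} at the cost of more redraws; on this toy data eps = 1.0 accepts roughly one draw in ten.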
In what follows, we study GMM estimation of both θ_0 and θ_0,n under stratified rerandomization designs, showing an asymptotic equivalence between stratified rerandomization and partially linear covariate adjustment. In particular, this framework allows us to introduce several useful finite population estimands that do not appear to have been considered previously in the literature. Note that GMM estimation of θ_0 under pure stratification was studied in <cit.> for the exactly identified case. Our finite population parameter θ_0,n can be viewed as a causal version of the finite population estimand defined in <cit.>.[See also the related finite population estimands studied under iid sampling and assignment in <cit.> and <cit.>.] [ATE] Define the Horvitz-Thompson weights H = (D - p)/(p - p^2) and let g(D, Y, θ) = HY - θ, so that ϕ(W, θ) = E[HY | W] - θ = Y(1) - Y(0) - θ. Then θ_0 = E[Y(1) - Y(0)] = ATE, the average treatment effect, and θ_0,n = E_n[Y_i(1) - Y_i(0)] = SATE, the sample average treatment effect. For a more interesting example, consider the best parametric predictor of treatment effect heterogeneity in experiments with noncompliance. [LATE Heterogeneity] Let D(z) be potential treatments for a binary instrument z ∈{0, 1}. Let Y(d) be the potential outcomes, with realized outcome Y = Y(D(Z)). Suppose D(1) ≥ D(0), and define the compliance indicator C = 1{D(1) > D(0)}, assuming E[C] > 0. <cit.> define the local average treatment effect LATE = E[Y(1) - Y(0) | C = 1]. Let H = (Z - p)/(p - p^2) and consider the score function g(Z, D, Y, X, θ) = (HY - HD · f(X, θ)) ∇_θ f(X, θ). Using standard LATE manipulations, ϕ(W, θ) = E[g(Z, D, Y, X, θ) | W] = C · (Y(1) - Y(0) - f(X, θ)) ∇_θ f(X, θ). This is the first order condition of a treatment effect prediction problem in the complier population. In particular, for τ ≡ Y(1) - Y(0), the parameter θ_0 is the best parametric predictor of treatment effects for compliers: θ_0 = argmin_θ E[(τ - f(X, θ))^2 | C = 1]. For example, if Y is binary then Y(1) - Y(0) ∈{-1, 0, 1}, so a scaled link function model f(X, θ) = 2L(X'θ) - 1 may be appropriate. We can easily estimate marginal effects by adding m(X_i, θ, β) = β - (∂/∂θ') f(X_i, θ) to the score function. [Finite Population Heterogeneity] Continuing Example <ref>, note that for τ_i = Y_i(1) - Y_i(0) the corresponding finite population parameter is θ_0,n = argmin_θ E_n[(τ_i - f(X_i, θ))^2 | C_i = 1]. We can view θ_0,n as a “SATE-like” version of θ_0, the best parametric predictor of treatment effects in the within-sample complier population. θ_0,n may be a more appropriate target for experiments run in a convenience sample. If f(X, θ) = X'θ is linear, then θ_0,n = argmin_θ E_n[(τ_i - X_i'θ)^2 | C_i = 1] is the within-sample best linear predictor. In the case of perfect compliance C_i = 1 for all i, this is θ_0,n = argmin_θ E_n[(τ_i - X_i'θ)^2], a finite-sample version of the best linear predictor of the conditional average treatment effect (CATE). The case X = 1 recovers θ_0,n = E_n[Y_i(1) - Y_i(0) | C_i = 1], the finite-population LATE, studied e.g. in <cit.>. Our inference methods in Section <ref> produce tighter confidence intervals for these finite population parameters than for θ_0, since we only need to account for the uncertainty due to random assignment, with no sampling uncertainty. GMM Estimation. Let Ω̂_n ∈ ℝ^d_g × d_g be a positive-definite weighting matrix with Ω̂_n →_p Ω ≻ 0. For the sample moment ĝ(θ) ≡ E_n[g(D_i, R_i, S_i, θ)], the GMM estimator[In our examples, we will mainly be concerned with the exactly identified case. However, the theory for the over identified case is almost identical, so we include this as well.] is θ̂ = argmin_θ ∈ Θ ĝ(θ)' Ω̂_n ĝ(θ). In the exactly identified case, θ̂ solves ĝ(θ̂) = 0.
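As a sanity check on the GMM formulation, here is a short sketch for the exactly identified ATE score of the example above, g(D, Y, θ) = HY - θ. Solving the sample moment condition with a generic root-finder recovers the closed form θ̂ = E_n[H_i Y_i]; the data generating process and function names below are ours, not from the paper.

```python
import numpy as np
from scipy.optimize import brentq

def ate_score(theta, D, Y, p):
    """Sample moment g_hat(theta) = E_n[H_i Y_i - theta] for the
    Horvitz-Thompson weight H = (D - p) / (p - p**2)."""
    H = (D - p) / (p - p * p)
    return np.mean(H * Y - theta)

rng = np.random.default_rng(1)
n, p = 200, 0.5
D = rng.binomial(1, p, size=n)
Y = 1.0 * D + rng.normal(size=n)   # constant treatment effect of 1

theta_hat = brentq(ate_score, -10.0, 10.0, args=(D, Y, p))
# Exactly identified case: theta_hat solves g_hat(theta) = 0, i.e. E_n[H_i Y_i].
assert np.isclose(theta_hat, np.mean((D - p) / (p - p * p) * Y))
```

Under the designs of Definition <ref>, where exactly np units are treated so that E_n[D_i] = p, the estimate E_n[H_i Y_i] coincides with the difference of means Y̅_1 - Y̅_0.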
In the next section, we study generalized method of moments (GMM) estimation of the causal parameters θ_0 and θ_0,n under stratified rerandomization.
§ ASYMPTOTICS FOR GMM ESTIMATION
In this section, we characterize the asymptotic distribution of the GMM estimator under stratified rerandomization designs, as in Definition <ref>. We show that the variance under stratified rerandomization is proportional to the residuals of a partially linear regression model, up to a rerandomization imbalance term. In this sense, stratified rerandomization does partially linear regression adjustment “by design.” First, we state some technical regularity conditions that are needed for the following results.
[Acceptance Region] Suppose A ⊆ ℝ^d_h is closed with non-empty interior and Leb(∂A) = 0,[Note that ∂A denotes the boundary of A, the limit points of both A and A^c.] and require E[Var(h | ψ)] ≻ 0 and E[|ψ|_2^2 + |h|_2^2] < ∞.
Next we state the technical conditions needed for GMM estimation. Define the matrix G = E[(∂/∂θ') ϕ(W, θ)]|_θ = θ_0 ∈ ℝ^d_g × d_θ and let g_d(W, θ) = g(d, R, S(d), θ) for d ∈ {0, 1}. Recall that |B|_F^2 = ∑_ij B_ij^2 for any matrix B.
[GMM] The following conditions hold for d ∈{0, 1}:
* We have E[|g_d(W, θ_0)|_2^2] < ∞ and E[sup_θ ∈ Θ |g_d(W, θ)|_2] < ∞. Also θ → g_d(W, θ) is continuous almost surely, and Θ is compact with θ_0 in the interior.
* The matrix G is full rank, and E[ϕ(W, θ)] = 0 iff θ = θ_0.
* There exists a neighborhood θ_0 ∈ U such that G_d(W, θ) ≡ (∂/∂θ') g_d(W, θ) exists and is continuous. Also E[sup_θ ∈ U |(∂/∂θ') g_d(W, θ)|_F] < ∞.
Compactness could likely be relaxed using concavity assumptions or a VC class condition, but we do not pursue this here. In what follows it will be conceptually useful to reparameterize the score function.
Orthogonal Expansion. Recall ϕ(W, θ) = E[g(D, R, S, θ) | W] for W = (R, S(1), S(0)). Define the assignment influence component δ(W, θ) ≡ Var(D)(g_1(W, θ) - g_0(W, θ)). For the Horvitz-Thompson weights H = (D - p)/(p - p^2), a simple calculation shows that we can expand g(D, R, S, θ) = ϕ(W, θ) + H δ(W, θ). Our work below shows that δ(W, θ) parameterizes estimator variance due to assignment, while ϕ(W, θ) parameterizes variance due to random sampling. In what follows, we work directly with this expansion.
[SATE] Continuing Example <ref> above, let Ỹ = (1 - p) Y(1) + p Y(0), a convex combination that summarizes each unit's potential outcome level. Then for the score g(D, Y, θ) = HY - θ, we have δ(W, θ) = Ỹ and E_n[H_i Ỹ_i] = Ỹ̅_1 - Ỹ̅_0 = Cov_n(D_i, Ỹ_i)/Var_n(D_i).
Intuitively, E_n[H_i Ỹ_i] from Equation <ref> isolates the estimator variance due to chance correlations between the assignments D_i and outcome levels Ỹ_i. By contrast, ϕ(W, θ) = Y(1) - Y(0) - θ does not depend on the assignments D_i, and Var(ϕ(W, θ)) = Var(Y(1) - Y(0)) isolates the estimator variance due to random sampling of treatment effects.
§.§ Finite Population Estimand
Our first theorem studies GMM estimation of the finite population estimand θ_0,n solving E_n[ϕ(W_i, θ_0,n)] = 0. We study the superpopulation estimand θ_0 in Corollary <ref> below. To state the theorem, define the GMM linearization matrix Λ = -(G'ΩG)^-1 G'Ω ∈ ℝ^d_θ × d_g, the linearized influence Π(W, θ) ≡ Λ δ(W, θ), and denote σ_D^2 = Var(D) = p - p^2.
Suppose D_1:n is as in Definition <ref>. Require Assumptions <ref>, <ref>. Then √n(θ̂ - θ_0,n) | W_1:n ⇝ N(0, V_1) + R_A, a sum of independent random variables with V_1 = E[Var(Π(W, θ_0) - Γ_0'h | ψ)] = min_Γ ∈ ℝ^d_h × d_θ E[Var(Π(W, θ_0) - Γ'h | ψ)]. The residual term R_A ∼ Γ_0'Z | Z ∈ A for Z ∼ N(0, E[Var(h | ψ)]).
Note that the variance matrix V_1 ∈ ℝ^d_θ × d_θ and the minimum is interpreted in the positive semidefinite[In particular, we say V_1 = min_Γ V(Γ) if V_1 ≼ V(Γ) for all Γ ∈ ℝ^d_h × d_θ.] sense. Theorem <ref> shows that √n(θ̂ - θ_0,n) is asymptotically distributed as an independent sum of a normal N(0, V_1) and a truncated normal R_A.
The normal term N(0, V_1) only depends on the “treatment assignment” component of the influence function, Π(W, θ_0). The variance is attenuated nonparametrically by the stratification variables ψ and linearly by the rerandomization covariates h. The truncated normal term R_A ∼ Γ_0'Z | Z ∈ A, Z ∼ N(0, E[Var(h | ψ)]). Intuitively, this term arises from leftover covariate imbalances due to slackness in the rerandomization acceptance criterion, √n(h̅_1 - h̅_0) ∈ A. We refer to R_A as the rerandomization imbalance. If the acceptance region A is symmetric about zero, i.e. a ∈ A iff -a ∈ A, then E[R_A] = 0, so the asymptotic distribution is centered at 0. In principle, in large samples this term could be made negligible relative to N(0, V_1) by choosing a small enough acceptance region A. For example, if A = B(0, ϵ) then R_B(0, ϵ) ∼ Γ_0'Z | |Z|_2 ≤ ϵ → 0 as ϵ → 0. However, in finite samples and for small enough ϵ, this may be computationally infeasible. We study a minimax style criterion that uses prior information or pilot data to choose an efficient acceptance region in Section <ref> below. To isolate the precision gains due to rerandomization, the following corollary specializes Theorem <ref> to the case of stratification without rerandomization, as well as complete randomization, as defined in Examples <ref> and <ref>.
Suppose D_1:n is as in Definition <ref> with A = ℝ^d_h. Require Assumption <ref>. Then √n(θ̂ - θ_0,n) | W_1:n ⇝ N(0, V) with V = E[Var(Π(W, θ_0) | ψ)]. In particular, if D_1:n ∼ CR(p) then V = Var(Π(W, θ_0)).
Corollary <ref> highlights how fine stratification reduces the variance of GMM estimation to V = E[Var(Π(W, θ_0) | ψ)] ≤ Var(Π(W, θ_0)), a nonparametric improvement. Rerandomization as in Definition <ref> provides a further linear variance reduction to V_1 = min_Γ ∈ ℝ^d_h × d_θ E[Var(Π(W, θ_0) - Γ'h | ψ)], up to the residual imbalance term R_A. Our results above show that √n(θ̂ - θ_0,n) | W_1:n ⇝ N(0, V_1) + R_A, conditional[See Proposition <ref> in the appendix for a formal statement.] on W_1:n = (R_i, S_i(1), S_i(0))_i=1^n. This result is “design-based” in the sense that the variance in the limiting distribution arises solely due to randomness of the treatment assignments D_1:n. However, we impose structure on the sequence of populations ex-ante, assuming each population is drawn from a fixed measure, W_i ∼_iid F. This allows us to provide intuitive, closed form variance expressions and connect our results with the literature on GMM and partially linear adjustment. By contrast, the finite populations model often used in the statistics literature (e.g. <cit.>) begins with an arbitrary sequence of finite populations, imposing the minimal structure needed for certain moments to converge ex-post. It may be possible to extend our results to this setting, but we leave this to future work.
§.§ Superpopulation Estimand
The next result extends Theorem <ref> to the superpopulation estimand θ_0, which uniquely solves E[ϕ(W, θ_0)] = 0.
Suppose D_1:n is as in Definition <ref>. Require Assumptions <ref>, <ref>.
* We have √n(θ̂ - θ_0) ⇝ N(0, V_2) + N(0, V_1) + R_A, a sum of independent random variables with V_2 = Var(Λϕ(W, θ_0)) and V_1, R_A exactly as in Theorem <ref>.
* (Pure Stratification). If A = ℝ^d_h, this is √n(θ̂ - θ_0) ⇝ N(0, V) with V = Var(Λϕ(W, θ_0)) + E[Var(Π(W, θ_0) | ψ)].
Comparing Corollary <ref> with the results above, we see that targeting θ_0 instead of θ_0,n adds an extra independent Gaussian term N(0, V_2) to the asymptotic distribution. Intuitively, V_2 arises due to iid random sampling of ϕ(W, θ_0). Notice that stratification and rerandomization only affect the assignment influence component Π(W, θ_0), while the sampling influence component ϕ(W, θ_0) is irreducible.
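The variance ordering implied by these results (complete randomization, then stratification, then stratification plus rerandomization) is easy to see in simulation. The sketch below uses a toy data generating process of our own, sorts units on ψ and pairs adjacent ranks as a simple tight matching, and reports the Monte Carlo standard deviation of the difference-of-means estimator under each design.

```python
import numpy as np

rng = np.random.default_rng(2)
n, reps, eps = 100, 2000, 1.0

def pair_assign(order, rng):
    """Randomize treatment within consecutive pairs of the psi-sorted order."""
    D = np.zeros(len(order), dtype=int)
    for a, b in order.reshape(-1, 2):
        D[a if rng.random() < 0.5 else b] = 1
    return D

est = {"complete": [], "pairs": [], "pairs + rerand": []}
for _ in range(reps):
    psi = rng.normal(size=n)
    h = psi + rng.normal(size=n)          # rerandomization covariate
    Y0 = psi + h + rng.normal(size=n)     # outcome level depends on psi and h
    Y1 = Y0 + 1.0                         # constant effect, so SATE = 1
    order = np.argsort(psi)               # adjacent ranks form matched pairs

    D = rng.permutation(np.repeat([0, 1], n // 2))   # complete randomization
    est["complete"].append(Y1[D == 1].mean() - Y0[D == 0].mean())

    D = pair_assign(order, rng)                      # pure stratification
    est["pairs"].append(Y1[D == 1].mean() - Y0[D == 0].mean())

    while True:                                      # stratified rerandomization
        D = pair_assign(order, rng)
        diff = h[D == 1].mean() - h[D == 0].mean()
        if n * diff ** 2 / h.var() <= eps ** 2:
            break
    est["pairs + rerand"].append(Y1[D == 1].mean() - Y0[D == 0].mean())

for design, draws in est.items():
    print(f"{design:>14}: sd = {np.std(draws):.3f}")
```

On this data generating process the standard deviation falls at each step, consistent with V = Var(Π(W, θ_0)) ≥ E[Var(Π(W, θ_0) | ψ)] ≥ V_1, up to the residual imbalance term.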
For pure stratification, <cit.> were the first to derive an analogue of part (b) of Corollary <ref> in the exactly identified case, under different GMM regularity conditions than we use here. [SATE] Continuing Example <ref>, we had ϕ(W, θ) = Y(1)-Y(0)-θ, so G = 1 and = 1. As above, (W, θ) = (1-) Y(1) + Y(0) ≡. The GMM estimator = Y̅_1 - Y̅_0 is just difference of means. Then by Theorem <ref> and Corollary <ref>, we have ( - ) | (0, ) + and ( - ) (0, + ) + R_A with = (Y(1)-Y(0)) = min_γ∈^ E[( - γ'h | ψ)]. The term , which only appears when estimating the superpopulation estimand , reflects sampling variance due to treatment effect heterogeneity. The term is the variance due to random assignment, caused by random in-sample correlations between treatments D and outcome levels . Covariate-adaptive randomization and adjustment can be used to reduce , while is an irreducible sampling variance. <cit.> study SATE estimation under stratified rerandomization in the sequence of finite populations framework. Relative to <cit.>, by imposing the tight-matching condition <ref> we are able to derive a simple closed form for the asymptotic variance in terms of the measure W ∼ F, showing an equivalence with partially linear regression adjustment. [Treatment Effect Heterogeneity] Continuing Example <ref>, consider the case with perfect compliance D = Z and f(X, θ) = X'θ. Then we can use the slightly modified score (D, X, Y, θ) = (HY - X'θ)X. For τ = Y(1)-Y(0), the parameters = _θ[(τ_i - X_i'θ)^2], = _θ E[(τ - X'θ)^2]. We have ϕ(W, ) = (τ - X')X, where e = τ - X' are treatment effect prediction errors. For (W, ) = X and = E[XX'], the variance matrices above are = E[XX'] E[e^2 XX'] E[XX'] = min_Γ∈^× d_x E[( X - Γ'h | ψ)]. The expression for shows that if we want to precisely estimate treatment effect heterogeneity, it is important to balance not only the variables that predict outcome levels , but also their interactions with the heterogeneity variable. §.§ Equivalence with Partially Linear Adjustment Example <ref> showed that, up to the rerandomization imbalance , the unadjusted estimator = Y̅_1 - Y̅_0 has asymptotic variance given by the residuals of a partially linear regression of on ψ and h: = min_γ∈^ E[( - γ'h | ψ)] = min_γ∈^ t ∈ L_2(ψ)( - γ'h - t(ψ)). More generally, Theorem <ref> shows that under stratified rerandomization designs, the usual GMM estimator behaves like semiparametrically adjusted GMM. Formally, let (ψ) = L_2^(ψ) be the -fold Cartesian product of L_2(ψ). Then in Theorem <ref> is the variance of the residuals of the influence function Π(W, ) in a partially linear regression on ψ and h: = min_Γ∈^× t ∈(ψ) (Π(W, ) - Γ'h - t(ψ) ). Intuitively, stratified rerandomization does partially linear regression adjustment “by design,” providing nonparametric control over ψ and linear control over h. For a more explicit equivalence statement, define (ψ, h) = 'h + t_0(ψ) to be the partially linear function achieving the optimum in Equation <ref>. Define the oracle semiparametrically adjusted GMM estimator = - [(, )]. For example, for the estimation problem one can show that is just an oracle version of the usual augmented inverse propensity weighting (AIPW) estimator (<cit.>), with partially linear regression models in each arm.[Feasible partially linear adjustment in an iid mean estimation problem with missing data was studied in <cit.>. See also the related semiparametric adjustment for GMM parameters in <cit.>.] Suppose that ∼(). 
The oracle partially linearly adjusted GMM estimator ( - ) |(0, ), with variance as defined in Theorem <ref>. Without stratification or rerandomization, we require ex-post semiparametric adjustment to achieve . Under stratified rerandomization, however, the simple GMM estimator automatically achieves , up to the imbalance term . § NONLINEAR RERANDOMIZATION In this section, we study various nonlinear rerandomization criteria, showing that in many cases they are asymptotically equivalent to linear rerandomization as in Definition <ref>, with an implicit choice of rerandomization covariates and acceptance region. This shows that our asymptotics and inference methods apply to a broad class of asymptotically linear rerandomization schemes. §.§ GMM Rerandomization First, we generalize the imbalance metric introduced in Definition <ref>, allowing rejection of a treatment allocation based on potentially nonlinear features of the in-sample distribution of treatments and covariates (, X_i)_i=1^n. To define the nonlinear imbalance metric, let (X_i, β) be a score function and define the within-arm GMM estimators and by β_1 ∈_β∈^ |[(X_i, β)]|_2^2, β_0 ∈_β∈^ |[(1-) (X_i, β)]|_2^2. Define ^m = ( - ) as above, where (X, β) is a score satisfying Assumption <ref>. Suppose = d_ (exact identification) and let A a symmetric acceptance region. Do the following: (1) form strata as in Definition <ref>. (2) Draw by stratified randomization. (3) If imbalance ^m = ( - ) ∈ A, accept. Otherwise, repeat from (2). Observe that if (X_i, β) = X_i - β, then β_d = X̅_d and =, so linear rerandomization is a special case. Intuitively, the generalization allows us to randomize until possibly nonlinear features of the covariates are balanced between the treatment and control groups. [Density Rerandomization] Let f(X, β) be a parametric density model for the covariates X, which may be misspecified. Consider forming likelihood estimators β_1 ∈_β[log f(X_i, β)] and β_0 ∈_β[(1-) log f(X_i, β)], then rerandomizing until the imbalance measure | - |_2 ≤ϵ. Under suitable regularity conditions, β_d are GMM estimators as in Equation <ref> with score function (X_i, β) = ∇_β f(X_i, β), so this procedure is a GMM rerandomization with acceptance region A = {x: |x|_2 ≤ϵ}. Let be the unique solution to E[(X, )] = 0 and define = E[(∂ / ∂β') (X_i, )]. Our next result shows that GMM rerandomization with acceptance criterion ∈ A is equivalent to linear rerandomization (Definition <ref>) with an implicit choice of rerandomization covariates = m(X_i, ) and linearly transformed acceptance region. Suppose is as in Definition <ref> and Assumption <ref> holds. Then ( - ) | (0, ) + R, independent RV's with = min_η∈^× E[(Π(W, ) - η' (X_i, )| ψ)]. The residual R ∼' | ∈ A for ∼(0, E[(m(X_i, ) | ψ)]). Theorem <ref> shows that by rerandomizing until ( - ) ∈ A, we implicitly balance the influence function for the difference of GMM estimators in Equation <ref>, which depends both on m(X_i, ) and the Jacobian matrix . This suggests an equivalent, but computationally simpler design with only one round of nonlinear estimation. In particular, let β∈_β∈^ |[(X_i, β)]|_2^2 and set rerandomization covariates h_i = (X_i, β), rerandomizing until [ h_i] ∈ A. This design is asymptotically equivalent to Definition <ref>, as shown in the next result. Suppose Assumption <ref>, <ref> hold and let score m(X, β) as in Definition <ref>. Let be rerandomized as in Definition <ref> with h_i = (X_i, β) and acceptance region A. 
Then ( - ) | (0, ) + R, with both variables identical to those in Theorem <ref>. Corollary <ref> shows that we can achieve the same effect as Definition <ref> with a computationally simpler linear rerandomization that balances the estimated covariate h_i = (X_i, β). Next, we introduce a propensity score based approach that also attempts to balance nonlinear covariate features. §.§ Propensity Rerandomization To motivate a propensity based rerandomization procedure, note that under stratified randomization we have E[ | W] = p for all units. In finite samples, however, the realized propensity p(S) = [ | X_i ∈ S] may be significantly different from in certain regions S ^d_X of the covariate space. This implies that covariates are predictive of treatment assignments post-randomization, a form of “in-sample confounding.” To enforce balance, we could, for instance, reject allocations where | p(S) - | > ϵ for some collection of sets S. To make this idea tractable without fully discretizing, consider a parametric propensity model p(X, β) = (X'β) and define the MLE estimator β∈_β∈^[log(X_i'β) + (1-) log (1-(X_i'β))]. We can measure the average gap between the estimated and true propensity score using the imbalance metric = n [( - (X_i'))^2]. Intuitively, if is large, then the covariates X have power to predict treatment status in some parts of the covariate space. To avoid this, we propose rerandomizing until the imbalance metric is below a threshold: Do the following: (1) form strata as in Definition <ref>. (2) Draw and estimate the propensity model in Equation <ref>. (3) If imbalance ≤ϵ, accept. Otherwise, repeat from (2). Before we state the main result, we require some regularity conditions Impose the following conditions. * Let be twice differentiable, with |L'|_∞, |L”|_∞ < ∞. For each ∈ (0, 1), there is a unique c with L(c) =. Also, |L'(c)| > 0. * The score m(, X_i, β) = '(X_i'β) X_i/(X_i'β) - (1-) '(X_i'β) X_i/1-(X_i'β) satisfies condition <ref>. The solution to Equation <ref> exists and is unique. * Covariates X = (1, h) for E[|h|^2_2] < ∞. Also, E[(h | ψ)], E[XX'] are full rank. Condition (a) is satisfied by the logit and probit link functions. Our next result shows that propensity rerandomization as in Definition <ref> is equivalent to a simpler linear rerandomization design, with an implicit choice of ellipsoidal acceptance region. Suppose is as in Definition <ref>. Require Assumptions <ref>, <ref>. Then ( - ) | (0, ) + R. = min_Γ∈^× E[(Π(W, ) - Γ'h | ψ)]. The residual R ∼' | ' (h)≤ϵ for ∼(0, E[(h | ψ)]) and optimal in the equation above. Theorem <ref> shows that for any sufficiently regular link function, propensity rerandomization is asymptotically equivalent to the simpler quadratic rerandomization design in Example <ref>, with acceptance criterion n( - )'_n() ( - ) ≤ϵ. Equivalently, propensity rerandomization behaves like linear rerandomization with = ( - ) and ellipsoidal acceptance region A = (h) B(0, ϵ).[A related result was found by <cit.>, who study rerandomizing until the p-value of a logistic regression coefficient is above a threshold.] Implicit Acceptance Regions. Both nonlinear designs in this section turned out to be equivalent to the standard rerandomization scheme in Definition <ref>, with a specific, implicit choice of rerandomization moments and acceptance region. However, note that the moments and acceptance region chosen by these procedures are entirely determined by the marginal covariate distribution. 
This implicit choice is not likely to be optimal, since the residual term ∼' | ∈ A depends not only on the covariates ∼(0, E[(h | ψ)]), but also on the partially linear coefficient . This coefficient is determined by the joint distribution of the assignment influence function (W, ) and covariates (ψ, h). In the next section, we show how to use prior information about this joint distribution to optimize the acceptance region and bound the variance of . § OPTIMIZING ACCEPTANCE REGIONS In this section, we study efficient choice of the rerandomization acceptance region A ^. For simplicity and intuition, we first restrict to the case of estimating =, generalizing in what follows. Imbalance Decomposition. The difference of means estimator = [Y_i(1) - Y_i(0)] + [] = + [] for = ( - ) / ( - ^2). Intuitively, the scaled errors ( - ) = [] are driven by imbalances in the outcome levels between treatment arms. The previous section showed that if covariates h are predictive of , we can reduce these imbalances by rerandomizing until [] = ( - ) ∈ A. To study the role of the acceptance region A, let (, t_0) be solutions to the partially linear prediction problem in Equation <ref> and consider the expansion[This is without loss of generality. Note that we do not impose well-specification E[e | ψ, h] = 0.] = 'h + t_0(ψ) + e, E[e | ψ] = 0, E[eh] = 0. We use this to decompose the imbalance in outcome levels into imbalances in the covariates ψ and h and residuals e. In particular, we can now write the imbalance decomposition [] = [ t_0()] + '[] + [ e_i] ≡ I_1 + I_2 + I_3. The analysis in Section <ref> showed that for any acceptance region A ^: * The ψ imbalance component I_1 = [ t_0()] 0 due to stratification. * The components I_2 + I_3 = ' [] + [ e_i] + ^-1/2(0, (e)) are asymptotically independent, with (e) not depending on A. In particular, it suffices to choose A to minimize the component I_2 = ' []. This suggests an oracle acceptance criterion, rerandomizing until |' []| ≤ϵ, with acceptance region A = {a: |'a| ≤ϵ}. However, this acceptance region is infeasible since is unknown at design-time. Instead, we take a minimax approach, allowing the researcher to incorporate prior information about . §.§ Minimax Rerandomization Suppose that we know ∈ for some prior information set ^. Fix ϵ > 0 and consider a “minimax” style acceptance criterion, rerandomizing the treatments until sup_γ∈ |γ' []| ≤ϵ. Note that the function (x) = sup_γ∈ |γ' x| is convex, so we can also interpret this as a convex imbalance penalty, rerandomizing until () ≤ϵ for imbalance metric = [], generalizing the quadratic penalty in Example <ref>. Our first result shows that this minimax design is of the form studied in the Section <ref>, characterizing the acceptance region induced by this convex penalty. The following hold: * (Rerandomization). The acceptance criterion sup_γ∈ |γ' []| ≤ϵ [] ∈ A for A = ϵ with = {a : sup_γ∈ |γ'a| ≤ 1}^.[Note that for a set S ^d, we have ϵ S = {ϵ s: s ∈ S}.] * (Acceptance Region). A = ϵ is symmetric and convex. If is bounded, then A is closed and has non-empty interior. If is open, then A is bounded. * (Well-specification). If ∈, then () ≤ϵ^2. Part (a) of Theorem <ref> shows that the rerandomization criterion is of the form studied in Definition <ref>, with acceptance region A = ϵ. Part (b) shows that A is always symmetric and convex. In particular, the asymptotic distribution of is centered at zero. The set is known as the absolute polar of , e.g. see <cit.>. 
Part (c) of the theorem shows that if the prior information set contains the true coefficient , then () ≤ϵ^2. Then by independence, the asymptotic variance is within ϵ^2 of the optimal partially linear variance. If ∉ (misspecification), then possibly () > ϵ^2. However, note that misspecification does not affect our inference methods, which allow for general acceptance regions A. Note that the asymptotic acceptance probability a(ϵ) = P(∈ϵ) has a(ϵ) → 0 as ϵ→ 0 and is monotonically increasing. For bounded, the theorem shows that has non-empty interior. In this case, as ϵ→∞ we have ϵ↑ so a(ϵ) → 1. This shows that, at least in large samples, we can choose ϵ to achieve any desired acceptance probability P(∈ A) ∈ (0, 1). Under well-specification, any such choice of ϵ comes with a variance guarantee provided by the theorem. §.§ Specifying Prior Information Without pilot data, we are left to introspection to choose the prior information set . Recall that is the coefficient from the partially linear projection = 'h + t_0(ψ) + e. Intuitively, parameterizes how much we expect the average outcome level to change given a unit change in h, holding ψ fixed. If the partially linear model happens to be well-specified, then = ∇_b E[| ψ, h=b]. If t_0(ψ) = t'ψ happens to be linear, then = c + 'h + t'ψ + e and is just an OLS coefficient. The following examples provide some reasonable prior information specifications and their associated acceptance regions. These examples rely on a general characterization of acceptance regions in Lemma <ref> below. [Rectangle] One natural way to specify prior information is to assume γ_0j∈ [l_j, u_j] for each 1 ≤ j ≤, equivalent to setting = ∏_j=1^ [l_j, u_j]. This allows sign constraints, e.g. 0 ≤γ_0j≤ m for some j and -m ≤γ_0j≤ 0 for others. Lemma <ref> below shows that if = ∏_j=1^ [l_j, u_j], then A = ϵ = {a: |a'l + a'u| + ∑_j |a_j| u_j - |a_j| l_j ≤ 2 ϵ}, where l = (l_j)_j and u = (u_j)_j. An example is shown in Figure <ref>. Note that the acceptance region A is conservative in directions aligned with the prior information set = [1, 2] × [1, 3/2], guarding against covariate imbalances that are aligned with adverse coefficient values ∈. A is more lenient in directions approximately orthogonal to . [Ellipse] Another natural specification is to guess ≈γ̅, setting = γ̅+ B_2(0, m), for an uncertainty parameter m. By the characterization in Lemma <ref> below, A = ϵ = {a: |a'γ̅| + m|a|_2 ≤ϵ}. More generally, if = γ̅+ Σ B_2(0, 1) for a positive-definite matrix Σ, the lemma shows that A = ϵ = {a: |a'γ̅| + |Σ a|_2 ≤ϵ}. One natural application of this specification is when is a Wald confidence region constructed using pilot data, as discussed below. An example is shown in Figure <ref>. More generally, the following lemma provides a useful characterization of the acceptance region A = ϵ from Theorem <ref> for a large family of prior information set specifications. To state the lemma, recall that |x|_p = (∑_j |x_j|^p)^1/p for p ∈ [1, ∞) and |x|_∞ = max_j |x_j|. For p ∈ [1, ∞], denote (0, 1) = {a: |a|_p ≤ 1}. For p ∈ [1, ∞], let 1/p + 1/q = 1, setting q = 1 if p = ∞ and vice-versa. Suppose = x + Σ B_p(0, 1), for x ∈ and Σ invertible. Then A = ϵ = {a: |a'x| + |Σ' a|_q ≤ϵ}. §.§ Using Pilot Data Next, we discuss an alternative strategy that uses pilot data to specify the set B. Suppose we have access to (, ) of size m. Suppose √() (γ_pilot - ) ≈(0, ) for some estimator γ_pilot, discussed below. 
Consider forming the Wald region = {γ: m (γ_pilot - γ)' (γ_pilot - γ) ≤ c_α} using critical value P(χ^2_≤ c_α) = 1-α for α∈ (0, 1). Equivalently, one can write the Wald region as = γ + c_α^1/2 m^-1/2·^1/2 B_2(0, 1). Using as a prior information set, by Example <ref> we have acceptance region A_pilot = ϵ^∘ = {a: |a'γ_pilot| + m^-1/2 c_α^1/2 |Σ^1/2 a|_2 ≤ϵ}. Note that the acceptance region A_pilot grows with the pilot size m. This reflects smaller uncertainty about the true parameter , and thus less adversarial worst case imbalance sup_γ∈ |γ' []|. Conversely, A_pilot shrinks as the confidence parameter α and the scale of the variance estimate increases, reflecting greater uncertainty and a more conservative approach to covariate balances. Our next result shows that rerandomization with acceptance region A_pilot controls the variance of the imbalance = 'Z | Z ∈ A_pilot with high probability marginally over the realizations of the pilot data. The result is an immediate consequence of Theorem <ref> and Theorem <ref>. Suppose P(∈) ≥ 1-α, for (, ). Let as in Definition <ref> with A = A_pilot = ϵ^∘, then if Assumptions <ref>, <ref> hold, then ( - )| (0, (e)) + and ( | ) ≤ϵ^2 with probability ≥ 1-α. Formally, the pilot estimate of and Wald region could be constructed as in <cit.>. In practice, a simple approach suggested by the theory is to let , be point and variance estimators from the regression Y_T∼ 1 + h + ψ, for the “tyranny of the minority” (<cit.>) outcomes Y_T = (1-)DY / + p(1-D)Y / (1-p), noting that E[Y_T | W] = (1-p)Y(1) + pY(0) =. General Parameters. For completeness, we extend the preceding work to general parameters as in Definition <ref>. Let (W, ) be the assignment influence function. As in Equation <ref>, consider the partially linear decomposition (W, ) = 'h + t_0(ψ) + e, E[e | ψ] = 0, E[eh] = 0. Note that e ∈^ and E[e | ψ] = 0 is interpreted componentwise. Consider prior information sets _j for each ^j with 1 ≤ j ≤, where ^j ∈ is the jth column of . The final result of this section bounds the asymptotic imbalance term if all these prior information sets are well specified. Let as in Definition <ref> with A = ∩_j=1^ϵ_j. Then ( - ) | (0, ) + R_A, as defined in Theorem <ref>. If ^j ∈_j ∀ j, then max_j=1^(()_jj) ≤ϵ^2. Note that by construction the conservative acceptance region A = ∩_j=1^ϵ_j is symmetric and convex. § LINEAR ADJUSTMENT In this section, we study optimal linearly adjusted GMM estimation under stratified rerandomization. We show that this can be used to completely remove the impact of the acceptance region and imbalance term to first order, restoring asymptotic normality. This allows for standard t-statistic and Wald-test based inference on the parameters and , provided in Section <ref> below. Let w denote the covariates used for ex-post adjustment and suppose E[|w|_2^2] < ∞. Suppose that α∈^×. Define the linearly adjusted GMM estimator = - [']. We refer to α as the adjustment coefficient matrix. First, we extend Corollary <ref> to provide asymptotics for the adjusted GMM estimator under pure stratification (A = ^). Suppose as in Definition <ref> with A = ^. Require Assumption <ref>. Then we have ( - ) | (0, V(α)) with V(α) = E[(Π(W, ) - α'w | ψ)] and ( - ) (0, + V(α)). A version of this result was given in <cit.> for the special case =. Motivated by Proposition <ref>, we define the optimal linear adjustment coefficient as the minimizer of the asymptotic variance V(α), in the positive semidefinite sense. Optimal Adjustment Coefficient. 
Define the coefficient ∈_α∈^×E[((W, ) - α'w | ψ)]. Note that if w = h then =, as in Theorem <ref>. If E[(w | ψ)] ≻ 0, then the unique minimizer of Equation <ref> is the partially linear regression coefficient = E[(w | ψ)] E[(w, (W, ) | ψ)]. The main result of this section shows that adjustment by a consistent estimate of restores asymptotic normality. Suppose is as in Definition <ref>. Require Assumption <ref>, <ref>. Let h w and suppose . Then ( - ) | (0, ) and ( - ) N(0, + ). = (ϕ(W, )) = min_α∈^× E[((W, ) - α'w | ψ)]. Two-step Adjustment. For nonlinear models, the coefficient may depend on the unknown parameter . This suggests a two-step adjustment strategy, where we * Use the unadjusted GMM estimator to consistently estimate . * Report the adjusted estimator = - [']. Similarly to two-step efficient GMM, this process could be iterated until convergence to improve finite sample properties. One feasible estimator of the optimal coefficient is given in the following theorem. To state the result, define the within-group partialled covariates = - ∑_j ∈(i) w_j, where group (i) contains unit i in Definition <ref>. Let consistently estimate the linearization matrix and denote the score evaluation ≡(D_i, X_i, S_i, ). Suppose is as in Definition <ref>. Require Assumption <ref>, <ref>. Assume that E[(w | ψ)] ≻ 0. Define = [']['] '. Then = + (1). In many cases, the optimal coefficient may not depend on at all, allowing optimal adjustment to be done in one step. For instance, whenever (W, θ) = u(ψ, θ) + v(W) for some functions u, v, then does not depend on . This happens in Example <ref>, where we have (W, θ) = (1-)Y(1) + Y(0) =. The optimal adjustment coefficient is given by = E[(w | ψ)] E[(w, | ψ)]. Theorem <ref> that ex-post linear adjustment can be used to remove the non-Gaussian component of the asymptotic distribution that arises due to rerandomization. In the next section, we exploit this result to provide standard t-statistic based confidence intervals for and . § INFERENCE In this section, we provide novel methods for inference on general causal parameters under stratified rerandomization designs. We make crucial use of asymptotic normality of the optimally adjusted estimator , shown in Theorem <ref>. For the superpopulation parameter , we provide asymptotically exact inference methods. The asymptotic variance for estimating the finite population parameter is generally not identified. In this case, we provide conservative variance estimation that still reflects the precision gains due to stratification and rerandomization. §.§ Asymptotically Exact Inference To define our variance estimator, we begin with some definitions. Let _n denote the set of groups constructed in Definition <ref>. For each ∈_n define the centroid ψ̅_ = ||∑_i ∈. Let : _n →_n be a bijective matching between groups satisfying () ≠, ^2 =, and the homogeneity condition 1/n∑_∈_n |ψ̅_ - ψ̅_()|_2^2 = (1). In practice, is obtained by simply matching the group centroids ψ̅_ into pairs using the <cit.> non-bipartite matching algorithm. Let _n = {∪(): ∈_n} be the unions of paired groups formed by this matching. Denote a() = ∑_i ∈ and k() = ||. Define the adjusted moment ≡ - H_i ', where ≡(D_i, X_i, Y_i, ). Suppose that and for the optimal adjustment coefficient in Equation <ref>. For instance, we can use the consistent estimator provided by Theorem <ref>. Finally, define the variance estimator components = n ∑_∈_n1/a() - 1∑_i ≠ j ∈' / = n ∑_∈_n1/(k - a)() - 1∑_i ≠ j ∈' (1-)(1-) / (1-) = n ∑_∈_nk/a(k-a)() ∑_i,j ∈' (1-). 
Using these terms, construct the variance estimator V = _n() - ( + - - '). We require a slight strengthening of our GMM assumptions <ref>. There exists ∈ U open s.t. E[sup_θ∈ U |∂ / ∂θ' _d(W, θ)|_F^2] < ∞. Under this condition, we can state our first inference result, showing consistent estimation of the asymptotic variance matrix in Theorem <ref>. Suppose is as in Definition <ref>, and impose Assumptions <ref>, <ref>, <ref>. Then V +. By Theorem <ref>, ( - ) N(0, + ). Then the variance estimation result above allows for joint inference on using e.g. standard Wald-test or t-statistic based confidence regions. §.§ Inference on the Finite Population Parameter In this section, we provide asymptotically conservative inference on linear contrasts of the finite population parameter c'. As noted above, the asymptotic variance in Theorem <ref> for estimating the finite population parameter is generically not identified. This happens because it depends on terms of the form ( | ψ) ∝( | ψ) + ( | ψ) - 2 (, | ψ), with _d = g(d, X, S(d), ). However, S(1) and S(0) are never simultaneously observed (<cit.>), so (, | ψ) is generically not identified. We work with linear contrasts c' since this allows us to tighten our upper bounds on the (non-identified) variance. To do so, let = [/'] - and = [1-/1-'] - using the estimator components above and consider the variance estimator (c) = ([c' c]^1/2 + [c' c]^1/2)^2. By Theorem <ref>, we have (c' - c') | (0, c' c). Our next result shows how to consistently estimate an upper bound on this asymptotic variance. Suppose as in Definition <ref> and impose Assumptions <ref>, <ref>, <ref>. Then (c) (c) ≥ c' c. The variance upper bound (c) ≥ c'( + ) c, so the confidence intervals derived from this approach are always weakly shorter than those using the variance estimator in Equation <ref>. See Section <ref> in the appendix for an explicit comparison. The upper bound (c) incorporates the efficiency gains from stratification, rerandomization, and adjustment. However, this upper bound is generally not sharp (<cit.>). We leave sharp upper bounds on the asymptotic variance matrix to future work. § PROOFS §.§ Rerandomization Asymptotics Before studying rerandomization, we first establish a CLT for pure stratified designs, conditional on the data . Suppose E[|(W)|_2^2] < ∞. Define = σ(, ). Let as in part (1) of Definition <ref>. Then X_n ≡[()] has X_n | (0, V). In particular, for each t ∈^ we have E[e^it' X_n | ] = ϕ(t) + (1) with ϕ(t) = e^-t'V t / 2 and V = E[( | ψ)]. First consider the case = 1. efine = - E[ | ]. By Lemma in <cit.>, since E[^2] < ∞ we have [( - )E[ | ]] = (1). Then it suffices to study [( - )]. To do so, we will use a martingale difference sequence (MDS) CLT. Fix an ordering l = 1, …, n/k of (l) ∈, noting that || ≤ n/k. Define D_(l) = ()_i ∈(l). Define _0,n = and _j, n = σ(, , l ∈ [j]) for j ≥ 1. Define D_l,n = ∑_i ∈(l) ( - ) and S_j,n = ∑_i=1^j D_i, n. (1) We claim that (S_j,n, _j,n)_j ≥ 1 is an MDS. Adaptation is clear from our definitions. E[( - ) (i ∈(j)) | _j-1, n] = E[( - ) (i ∈(j)) | , ()_l=1^j-1] = E[( - ) (i ∈(j)) | ] = E[( - ) | ] (i ∈(j)) = 0. The second equality since ()_l ≠ j |. Then we compute E[ | _j-1,n] = ∑_i ∈(l) E[( - ) | _j-1, n] = 0. This shows the MDS property. (2). Next, we compute the variance process. By the same argument in (1), we have _n ≡∑_j=1^n/k E[^2 | _j-1,n] = n∑_j=1^n/k (∑_s ≠ t ∈(j)(, | ) + ∑_i ∈(j)^2 ( | ) ) By Lemma of <cit.>, we have (, | ) (s, t ∈(l)) = -a(k-a)/k^2(k-1) ≡ c and ( | ) = - ^2. 
Then we may expand _n as c n∑_j=1^n/k∑_s ≠ t ∈(j) + ( - ^2) [u_i^2] ≡ c n∑_j=1^n/k v_j + ( - ^2) [u_i^2] ≡ T_n1 + T_n2. First consider T_n1. Our plan is to apply the WLLN in Lemma of <cit.> to show T_n1 = (1). Define ^ψ = σ(, ) so that ∈. For s ≠ t we have E[u_s u_t | , ] = E[u_s E[u_t | , u_s, ] | , ] = E[u_s E[u_t | ψ_t] | , ] = 0. The second equality follows by applying (A, B) C A C | B with A = u_t, B = ψ_t and C = (ψ_-t, u_s, ). Then E[v_j | ] = 0 for j ∈ [n/k]. Next, observe that for any positive constants (a_k)_k=1^m we have ∑_k a_k (∑_k a_k > c) ≤ m ∑_k a_k (a_k > c / m) and ab (ab > c) ≤ a^2 (a^2 > c) + b^2 (b^2 > c). Then for c_n →∞ with c_n = o() we have |v_j| (|v_j| > c_n) ≤∑_s ≠ t ∈(j) || (∑_s ≠ t ∈(j) || > c_n ) ≤ k^2 ∑_s ≠ t ∈(j) || (|| > c_n / k^2) ≤ 2k^3 ∑_s ∈(j)^2 (^2 > c_n / k^2). Then we have n E[∑_j=1^n/k E[|v_j| (|v_j| > c_n) | ] ≤ 2k^3 [E[^2 (^2 > c_n / k^2) | , ] ] ≡ A_n. Then E[A_n] = 2k^3 E[[E[^2 (^2 > c_n / k^2) | ]]] = 2k^3 E[^2 (^2 > c_n / k^2)] → 0 as n →∞. The first equality is by the conditional independence argument above, the second equality is tower law, and the limit by dominated convergence since E[^2] ≤ E[^2] < ∞ by the contraction property of conditional expectation. Then A_n = (1) by Markov inequality. The conclusion c n∑_j=1^n/k v_j = (1) now follows by Lemma of <cit.>. For T_n2, we have [^2] E[^2] = E[( | ψ)] by vanilla WLLN. Then we have shown _n ( - ^2) E[( | ψ)]. (3) Finally, we show the Lindberg condition ∑_j=1^n/k E[^2 (|| > ϵ) | _0, n] = (1). ^2 (|| > ϵ) = ^2 (^2 > ϵ^2) ≤ n∑_s, t ∈(j) || ( n∑_s, t ∈(j) || > ϵ^2 ) ≤ k^2 n∑_s, t ∈(j) || (|| > n ϵ^2 / k^2 ) ≤ k^3 n∑_s ∈(j)^2 (^2 > n ϵ^2 / k^2 ). Then using the inequality above we compute E[∑_j=1^n/k E[^2 (|| > ϵ) | _0, n] ] ≤ k^3 E[n∑_j=1^n/k∑_s ∈(j) E[^2 (^2 > n ϵ^2 / k^2 ) | ] ] = k^3 E [[ E[^2 (^2 > n ϵ^2 / k^2 ) | ] ] ] = k^3 E[^2 (^2 > n ϵ^2 / k^2 )] = o(1). The first equality by the conditional independence argument above. The second equality by dominated convergence. Then ∑_j=1^n/k E[^2 (|| > ϵ) | _0, n] = (1) by Markov. This finishes the proof of the Lindberg condition. Since _0,n =, by Theorem in <cit.>, we have shown that E[e^it [( - ) ] | ] = ϕ(t) + (1) for ϕ(t) = e^-t^2 V / 2 with V = ( - ^2) E[( | ψ)]. Finally, consider () ≥ 1. Fix t ∈^ and let () = t'() ∈. Then we have X_n(t) ≡ X_n't = [( - )()]'t = [( - )()'t] = [( - )()]. By the previous result E[e^iX_n(t) | ] e^-v(t) / 2 with variance v(t) = E[( | ψ)] = E[(t' | ψ)] = t'E[( | ψ)]t = t'Vt. Then we have shown E[e^it'X_n | ] = e^-t'Vt / 2 + (1) as claimed. Next, we provide asymptotic theory for stratified rerandomization. The following definition generalizes Definition <ref> in Section <ref>. Consider the following: * Suppose = + (1) for = []. Let = + (1) for ∈^d_τ and define sample and population acceptance regions = {x: (x, ) ≤ 0} and = {x: (x, ) ≤ 0} for (z, y) a measurable function. * (Assumptions). Assume P((, ) = 0) = 0 for ∼(0, E[(h | ψ)]). Require P(∈) > 0. Suppose E[||_2^2 + |ϕ|_2^2 + |h|_2^2] < ∞. * (Rerandomization). Let = σ(, ), where and define the rerandomization measure Q(B | ) = P(B | , ∈) and Q(B) = E[Q(B | )] for any event B. Let Definition <ref> hold. Let = [] and = (, h). Fix t ∈^. Let (, ) ∼(0, ) for = E[( | ψ)]. Then E[e^it' (∈) | ] = E[e^it' (∈) ] + (1). (1). Define B_n = (, , ). Fix t = (t_1, t_2, t_3) ∈^ + + and consider the characteristic function ϕ_B_n(t) = E[e^it_1' + it_2' + it_3' | ] = e^it_3' E[e^it_1' + it_2' | ] + (1) = e^it_3' E[e^it_1' + it_2' | ] + (1) = e^it_3' e^-t' t / 2 + (1) = ϕ_B(t) + (1). 
For the second equality, note that e^i t_3' e^i t_3' by continuous mapping. Then R_n = e^it_1' + it_2' (e^i t_3' - e^i t_3' ) = (1). Clearly |R_n| ≤ 2, so E[|R_n| | ] = (1) by Lemma <ref>. The third equality is identical, noting that e^i t_2' e^i t_2' again by continuous mapping. The fourth equality is Theorem <ref> applied to [_i]. The final expression is the characteristic function of B = (, , ) with (, ) ∼(0, ). Then we have shown that B_n | B in the sense of Proposition <ref>. Fix t ∈ and define G(z_1, z_2, x) = e^it'z_1((z_2, x) ≤ 0) and note that G(B_n) = e^it' ((, ) ≤ 0) = e^it' (∈). Define E_G = {w: G(·) not continuous at w}. By Proposition <ref>, if P(B ∈ E_G) = 0 then E[G(B_n)| ] = E[G(B)] + (1) = E[G(, , )] + (1), which is the required claim. To finish the proof, we show that that P(B ∈ E_G) = 0. Write G(z_1, z_2, x) = f(z_1)g(z_2, x) for f(z_1) = e^it'z_1 and g(z_2, x) = ((z_2, x) ≤ 0) and define discontinuity point sets E_f and E_g as for E_G above. By continuity of multiplication for bounded functions, if z_1 ∈ E_f^c and (z_2, x) ∈ E_g^c then (z_1, z_2, x) ∈ E_G^c. By contrapositive, E_G (E_f ×^ + ) ∪ (× E_g). Clearly E_f = ∅, so P(B ∈ E_G) = P((, ) ∈ E_g). Let E_g^1 = {z_h: (z_h, ) ∈ E_g}. We have (, ) ∈^×{}. Then P((, ) ∈ E_g) = P(∈ E_g^1). Since z_h →(z_h, ) is continuous, {z_h: a(z_h, ) > 0} is open. Let z_h ∈{z_h: a(z_h, ) > 0}. Then for small enough r, if z' ∈ B(z_h, r) then a(z', ) > 0 and g(z', ) = 0, so g(z', ) - g(z_h, ) = 0, so z_h is a continuity point. A similar argument applied to z_h ∈{z_h: a(z_h, ) < 0} shows that the discontinuity points E_g^1 {z_h: (z_h, ) = 0}. Let Definition <ref> hold. Suppose that (, ) ∼ E[((, h) | ψ)]. The following hold * We have [(W_i)] | | ∈ = (0, ) + R, independent RV's s.t. = E[((W) - 'h | ψ)] = min_Γ∈^× E[((W) - Γ'h | ψ)]. The residual term R ∼' | ∈. * Let X_n = [ϕ()] + [(W_i)]. Then we have (X_n - E[ϕ(W)]) + | ∈ = (0, ) + (0, ) + R. The RV's are independent with = (ϕ(W)). First, we prove (a). Let = [(W_i)]. Let t ∈^. By definition of Q [e^it' | ] = E[e^it' | ∈, ] = E[e^it' (∈) | ]/P(∈ | )≡/. Define = E[e^it' (∈) ] and = P(∈). By Lemma <ref>, and , with > 0 by assumption in Definition <ref>. Then we have = (1). Then | / - / | may be expanded as | - /| = | ( - ) + ( - )| ≲_P | - | + | - | = (1). The final equality by Lemma <ref>. Then we have shown [e^it A_n | ] = / + (1) = E[e^it' (∈) ]/P(∈) = E[e^it' | ∈] + (1). This proves the first statement. Next, we characterize the law of | ∈ T. Define ϕ(t) ≡ E[e^it' | ∈ T ]. Let ∈^× satisfy the normal equations E[(h | ψ)] = E[(h, | ψ)]. Such a exists and satisfies the stated inequality by Lemma <ref>. Letting = - ', by Lemma <ref> and is Gaussian. Then (, (∈ T)). Recall that A (S,T) A S | T. Using this fact, we have | ∈ T. Then for any t ∈^ ϕ(t) = E[e^it' | ∈ T] = E[e^it' e^it' '| ∈ T] = E[e^it' | ∈ T] E[e^it' '| ∈ T] = E[e^it' ]E[e^it' '| ∈ T]. By Proposition <ref>, we have shown | ∈ T + [' | ∈ T], where the RHS is a sum of independent random variables with the given distributions. Clearly E[] = 0 and () = E[( - 'h | ψ)]. This finishes the proof of (a). Next we prove (b). We may expand (X_n - E[ϕ(W)]) = ([ϕ(W_i)] - E[ϕ(W)]) + ≡ A_n + B_n. We have A_n (0, ) with = (ϕ(W)) by vanilla CLT. Then let t ∈^ and calculate [e^it' X_n] = [e^it' A_n[e^it' B_n | ] ] = ϕ(t) [e^it' A_n] + o(1) = ϕ(t) e^-t' t / 2 + o(1). The first equality since A_n ∈. The second equality since |[e^it' A_n ([e^it' B_n | ] - ϕ(t)) ] | ≤[|[e^it' B_n | ] - ϕ(t)| ] = o(1). 
To see this, note that the integrand is (1) by our work above. It is also bounded so it converges to zero in L_1(Q) by Lemma <ref>. The final equality since A_n ∈ = σ(, ) and the marginal distribution of (, ) is identical under P and Q by definition. Then [e^it' A_n] = E_P[e^it' A_n] = e^-t' t / 2 + o(1) by vanilla CLT. Then we have shown [e^it'X_n] = e^-t'( + )t/2 E[e^it' '| ∈ B] + o(1). This finishes the proof of (b). Suppose Definition <ref> and Assumption <ref> hold. Let = -(G' G) G'. Then ( - ) = [(W_i, )] + (1) and ( - ) = [ϕ(, ) + (W_i, )] + (1). See Section <ref> below for the proof of this lemma. We claim that the conditions of Definition <ref> hold. This will allow us to apply our general rerandomization asymptotics in Theorem <ref> and linearization in Lemma <ref>. To check part (a), define a(x, y) = a(x) = d(x, A) - d(x, A^c), where d(x, A) = inf_s ∈ |x-s|_2. It's well known that x → d(x, S) is continuous for any set S, so a is continuous. The sample and population regions = = {x: a(x) ≤ 0}. If a(x) ≤ 0 then d(x, A) = 0, so x ∈ A ∪∂ A A by closedness. If a(x) > 0 then x ∉A. This shows = A, so {∈} = {∈ A}. Then our criterion is of the form in Definition <ref>. For part (b), P(a() = 0) = P(∈∂ A) = 0 since (∂ A) = 0 and by absolute continuity of relative to Lebesgue measure . We also have P(∈) = P(∈ A) > 0 since is full measure by E[(h | ψ)] ≻ 0 and since A has non-empty interior. This proves the claim. Then by Lemma <ref>, ( - ) = [(W_i, )] + (1). The result now follows immediately by Slutsky and Theorem <ref>(a), letting →. Likewise, Corollary <ref> follows from Theorem <ref>(b), letting ϕ→ϕ. By Theorem <ref>, since A = we have ( - ) | (0, ) + R, independent RV's with = E[(Π(W, ) - 'h | ψ)] and R ∼' for ∼(0, E[(h | ψ)]). Then (0, ) + R ∼(0, V) with V = + (') = E[(Π(W, ) - 'h + 'h | ψ)] - 2 E[(Π(W, ) - 'h, 'h | ψ)] = E[(Π(W, )| ψ)]. The covariance term is zero by Lemma <ref>. The second statement follows by setting ψ = 1. §.§ GMM Linearization This section collects proofs needed for the key linearization result in Lemma <ref>. First, define the following curves and objective functions (θ) = E[ϕ(W_i, θ)], (θ) = [ϕ(, θ)], (θ) = [ϕ(, θ)] + [(, θ)]. (θ) = (θ)''(θ), (θ) = (θ)''(θ), (θ) = (θ)''(θ) Define G(θ) = (∂/∂θ')(θ) and G_n(θ) = (∂/∂θ') (θ) and G_0(θ) = (∂/∂θ')(θ). Define G = G_0(). For each d ∈{0, 1}, define _d(W, θ) = g(d, X, S(d), θ). Require Assumption <ref>. Then we have * (ULLN). - _∞, = (1), - _∞, = (1), and (θ) is continuous. This implies objectives | - |_∞, = (1) and | - |_∞, = (1). * (Consistency). We have - = (1) and - = (1). * There is an open ball U with ∈ U and G_n - G_0_∞, U = (1) and G_n - G_0_∞, U = (1). Also, G_0(θ) is continuous on U for G_0(θ) = ∂ / ∂θ' E[ϕ(W, θ)]. Consider (a). First we show - _∞, = (1). It suffices to prove the statement componentwise. Then without loss assume = 1 and fix ϵ > 0. Note also that ϕ, are linear combinations of _d for d ∈, so ϕ and inherit the properties in Assumption <ref>. We have ( - ^2)( - )(θ) = [(-)(, θ)] ≡[(θ)]. For each θ∈ define = B(θ, m) and (, ) = sup_θ̅∈(θ̅). Then (, ) may be expanded sup_θ̅∈ ( - )(, θ̅) = (1-) sup_θ̅∈(, θ̅) + (1-) sup_θ̅∈ - (, θ̅) = (-^2) (sup_θ̅∈(, θ̅) + sup_θ̅∈ -(, θ̅)) + (-)((1-) sup_θ̅∈(, θ̅) + inf_θ̅∈(, θ̅)) ≡ f_θ m(W_i) + ( - )r_θ m(W_i). In particular, E[(X_i)] = E[f_θ m(W_i)]. Note both expectations exist by the envelope condition in Assumption <ref>. By continuity at θ, f_θ m(W_i) → (-^2) ((, θ) - (, θ)) = 0 as m →∞. Also |f_m θ()| ≲sup_θ̅∈ | (, θ̅) | ≤sup_θ∈ | (, θ) |. 
Then by our envelope assumption sup_m f_θ m(W_i) ∈ L_1(P), so lim_m E[(, )] = lim_m E[f_θ m(W_i)] = 0 by dominated convergence. For each θ, let m(θ) s.t. E[f_θ m(θ)(W_i)] ≤ϵ. Then {U_θ m(θ): θ∈} is an open cover of , so by compactness it admits a finite subcover {U_θ_l, m(θ_l)}_l=1^L(ϵ)≡{U_l}_l=1^L(ϵ). Next, for each (θ, m) we claim [(D_i, W_i)] = E[f_θ m()] + (1). We have [f_θ m(W_i)] = E[f_θ m(W_i)] + (1) by WLLN since E[f_θ m(W_i)] < ∞ as just shown. Similarly, we have |r_θ m(W_i)| = |(1-) sup_θ̅∈(, θ̅) + inf_θ̅∈(, θ̅)| ≤sup_θ̅∈ |(, θ̅)| ∈ L_1(P). Then [(-)r_θ m(W_i)] = (1) by Lemma in <cit.>. This proves the claim. Define f_l(W) and r_l(W) to be the functions above evaluated at (θ_l, m(θ_l)). Putting this all together, we have sup_θ∈[v_i(θ)] ≤max_l=1^L(ϵ)sup_θ∈ U_l[v_i(θ)] ≤max_l=1^L(ϵ)[v_θ_l m(θ_l)(, )] = max_l=1^L(ϵ) (E[f_θ m()] + T_nl) ≤ϵ + max_l=1^L(ϵ) T_nl = ϵ + (1). By symmetry, we also have sup_θ∈ -[v_i(θ)] ≤ϵ + (1). Then sup_θ∈ |[v_i(θ)]| ≤ 2ϵ + (1). Since ϵ > 0 was arbitrary, this finishes the proof of (1). Next we show - _∞, = (1). We have ( - )(θ) = [ϕ(, θ)] - E[ϕ(W, θ)]. Under our assumptions, |[ϕ(, θ)] - E[ϕ(W, θ)]|_∞, = (1) and (θ) = E[ϕ(W, θ)] is continuous by Lemma 2.4 of <cit.>. This proves the second claim. For the statement about objective functions, observe that |(θ) - (θ)| = |(θ)'(θ) - (θ)'(θ)| ≤ |( - )(θ)'(θ)| + |(θ)'( - ) (θ)| + |(θ)' ( - )(θ)| ≤ | - |_2(θ)|||_2 |(θ)|_2 + |(θ)|_2| - |_2 |(θ)|_2 + |(θ)|_2 ||_2 | - |_2(θ) ≲ | - |_∞, |||_2 ||_∞, + ||_∞, | - |_2 ||_∞, + ||_∞, ||_2 | - |_∞, . The first inequality by telescoping, then Cauchy-Schwarz, then using equivalence of finite-dimensional vector space norms and sup_θ a(θ) b(θ) ≤sup_θ a(θ) sup_θ b(θ) for positive a,b. We have ||_∞, , ||_∞, = (1) + ||_∞, = (1) since ||_∞, ≤ E[sup_θ∈ϕ(W, θ)] < ∞. Also ||_2 = (1) and | - |_2 = (1) by continuous mapping. Taking sup_θ∈ on both sides gives the result. The proof that | - |_∞, = (1) is identical. By triangle inequality, this proves the claim. For (2), since () = 0 uniquely and () =, then (θ) is uniquely minimized at . Then and θ by extremum consistency (e.g. Theorem 2.1 in <cit.>), so . Finally consider (3). Let U_1 Ũ an open set ∈ U_1 such that the closed 1/m' enlargement Ũ_1^1/m'Ũ for some m' ≥ 1. Set = Ũ_1^1/m', which is compact. As in the proof of (1), let = B(θ, m) for m ≥ m'. The conclusion now follows from the exact argument in (1), applied to the alternate moment functions g̃_z(, θ) ≡∂ / ∂θ' _z(, θ). In particular, uniform convergence holds on any open set U Ũ. The final statement about G_0(θ) follows by dominated convergence. Since = _θ∈Θ(θ), so ∇_θ() = 0 G()' () = 0. By differentiability in Assumption <ref> and applying Taylor's Theorem componentwise, for each k ∈ [] and some θ̃_k ∈ [, ] we have () = () + ∂_k/∂θ'(θ̃_k)_k=1^( - ). Then we may expand 0 = G()' [() + ∂_k/∂θ'(θ̃_k)_k=1^( - )] - = -( G()' ∂_k/∂θ'(θ̃_k)_k=1^) G()' (). On the event S_n = {∈ U}, θ̃_k ∈ U for each k. Then (S_n)|∂_k/∂θ'(θ̃_k)_k=1^ - ∂_0k/∂θ'(θ̃_k)_k=1^|_F^2 ≤∑_k=1^sup_θ∈ U |∂_k/∂θ'(θ) - ∂_0k/∂θ'(θ)|_2^2 ≤sup_θ∈ U | G(θ) - G_0(θ)|_F^2 = (1) by Lemma <ref>. Similarly, (S_n)| G() - G_0()|_F^2 ≤sup_θ∈ U | G(θ) - G_0(θ)|_F^2 = (1). Moreover, since and θ̃_k ∈ [, ] ∀ k, we have (S_n)|G_0() - G()|_F^2 = (1) and (S_n)|∂_0k/∂θ'(θ̃_k)_k=1^ - G()|_F^2 = (1), using continuous mapping and continuity of θ→ G_0(θ) on U, shown in Lemma <ref>. Since P(S_n) → 1, we have shown | G() - G()|_F^2 = (1) and |∂_k/∂θ'(θ̃_k)_k=1^ - G()|_F^2 = (1). 
Since () = () by Theorem <ref>, by the work above and continuous mapping theorem we have ( - ) = -( G()' ∂_k/∂θ'(θ̃_k)_k=1^) G()' () = -(G' G) G' () + (1) = () + (1). The proof of the second claim is identical, using | - |_2 = (1) and sup_θ∈ U |G_n(θ) - G_0(θ)|_F^2 = (1) by Lemma <ref>. This shows linearization in P-measure. The statement in Q-measure follows from Lemma <ref> and our assumptions in Definition <ref>. G_n() G_0() = G. Let E_n = { G_n()' G_n() ≻ 0 }. Then = -(E_n) ( G_n()' G_n()) G_n()'. Let U be the neighborhood of from Lemma <ref> and set S_n = {∈ U }. Then we have (S_n) | G_n() - G_0()| ≤(S_n) | G_n() - G_0()| + (S_n) |G_0() - G_0()| ≤(S_n) | G_n - G_0|_∞, U + |G_0() - G_0()| = (1). The final equality is by Lemma <ref> and continuous mapping, since G_0 is continuous at . By assumption, () = and (G) = ≤. Then (G' G) =, so (E_n) ( G_n()' G_n()) - (E_n) (G' G) 0 and (E_n)( - ) 0 by continuous mapping. Finally, by continuous mapping λ_min( G_n()' G_n()) λ_min(G' G) > 0, so (E_n) 1. The statement now follows from Lemma <ref>. §.§ Nonlinear Rerandomization We first prove a slightly more general result, allowing for over-identified GMM estimation with positive definite weighting matrix _n. For |x|_2, A^2 = x'Ax, define β_d ∈_β∈^ |[( = d) (X_i, β)]|_2, _n^2. Define ^1(D, X, β) = D (X, β) and ^0(D, X, β) = (1-D) (X, β). Under the expansion in Equation <ref>, we have ϕ^1(X, β) = g^1(1, X, β) = m(X, β) and ^1(X, β) = g^1(1, X, β) = (X, β). Similarly, ϕ^0(X, β) = (1-) g^0(0, X, β) = (1-) m(X, β) and ^0(X, β) = - g^0(0, X, β) = -(X, β). Note that E[^1(D, X, β)] = E[(X, β)] and E[^0(D, X, β)] = (1-) E[(X, β)], so the GMM parameters β_1 = β_0 =, where uniquely solves E[(X, )] = 0. Let = E[(∂ / ∂β') m(X, )], which is full rank by assumption. Then G^1 = E[(∂ / ∂β')^1(D, X, )] = E[(∂ / ∂β')m(X, )] = and ^1 = -((G^1)' G^1) (G^1)' = - (' )' ≡. By symmetry, we have ^0 = (1-). Observe that (^1 ϕ^1 - ^0 ϕ^0)(X, β) = (X, β) - (1-) (1-) (X, β) = 0, (^1 ^1 - ^0 ^0)(X, β) = (X, β) - (1-) (-(X, β)) = (1-) (X, β) + (X, β) = (X, β). Then applying Lemma <ref> to GMM estimation using g^1 and g^0, we have ( - ) = ( - - ( - )) = ^1 [ϕ^1(X_i, ) + ^1(X_i, )] - ^0 [ϕ^0(X_i, ) + ^0(X_i, )] + (1) = [(X, )] + (1). Then Definition <ref> is an example of Definition <ref> with = [] + (1) for = (X_i, ). Then Theorem <ref> holds with = (X_i, ). Consider the exactly identified case, so = - and = -(X_i, ). Then by Theorem <ref>, ( - ) | (0, ) + R_A. Denote = (W, ) and = (X, ). Then the rerandomization coefficient is = E[(h | ψ)] E[(h, | ψ)] = -E[( | ψ)] E[(, | ψ)] = -E[( | ψ) ()'] E[(, | ψ)] = -' E[( | ψ) ] E[(, | ψ)]. Then = E[(Π - ' (-)| ψ)] = E[(Π - η_0' )| ψ)], where η_0 = _η∈^× E[(Π - η' | ψ)]. From above, we have = -' η_0. Then the residual term ∼' | ∈ A ∼ -η_0' | ∈ A ∼ -η_0' | (-) (-) ∈ A ∼η_0' | -∈ A ∼η_0' | ∈ - A. The variable ∼(0, E[(h | ψ)]), so = ∼(0, E[(h | ψ)] ') ∼(0, E[( h | ψ)]) ∼(0, E[( | ψ)]) since h = = (X, ). Summarizing, we have shown = E[(Π - η_0' | ψ)] and ∼η_0' | ∈ A for ∼(0, E[( | ψ)]). For the corollary, consider letting β∈_β∈^ |[(X_i, β)]|_2, _n^2. Relative to the expansion in Equation <ref>, _(X_i, β) = 0 and ϕ_(X_i, β) = (X_i, β), with linearization matrix as above. Then by Lemma <ref> ( - ) = [(X_i, )] + (1) = (1). Consider setting = (X_i, ). By the mean value theorem, (X_i, ) - (X_i, ) = ∂(X_i, β̃_i)/∂β( - ), where the β̃_i ∈ [, ] may change by row. Then we have [(X_i, )] - [(X_i, )] = [ (∂ / ∂β) (X_i, β̃_i)] ( - ). We claim that [(X_i, β̃_i)] = (1). Define v_ijk = ((X_i, β̃_i))_jk. 
Clearly v_ijk∈ = σ(, ) and E[v_ijk | ] = 0. Also note |v_ijk| = |((X_i, β̃_i))_jk| ≤sup_β∈ U |(X_i, β)|_F ∈ L_1 for some open set U by Assumption <ref>. Then f_n(W) = ((X, β̃_i))_jk, where β̃_i = β̃_in implicitly, is uniformly integrable. Then Lemma A.2 of <cit.> implies [ v_ijk] = (1). This proves the claim, showing that = [(X_i, )] = [(X_i, )] + (1). The result now follows from Theorem <ref>. Assume E[|X|^2] < ∞ and |'|_∞, |”|_∞ < ∞ continuous and limits to pm infinity, and X = (1, h) By assumption, β is a GMM estimator for m(, X_i, β) = '(X_i'β) X_i/(X_i'β) - (1-) '(X_i'β) X_i/1-(X_i'β). Let c s.t. (c) =. Then = (c, 0) has E[m(D, X, )] = E['(c) X_i] = 0. Relative to the decomposition in Equation <ref>, we have ϕ(X, β) = '(X_i'β) X_i/(X_i'β) - (1-) '(X_i'β) X_i/1-(X_i'β) and (X, β) = ('(X_i'β) X_i/(X_i'β) + '(X_i'β) X_i/1-(X_i'β)). Since (X_i') = (c) =, apparently we have ϕ(X, ) = 0 and (X, ) = L'(c) X_i. A calculation shows that = E[∂/∂β'ϕ(X, )] = - '(c)^2 E[X_i X_i'], so = - = 1/'(c)^2 E[X_i X_i']. By Lemma <ref>, we have shown ( - ) = [ϕ(X_i, ) + (X_i, )] + (1) = /'(c) E[X_i X_i'] [ X_i] + (1). Consider rerandomizing until = n [( - (X_i'))^2] ≤ϵ^2. Then for s.t. L(x') =, the above quantity is n [((X_i') - (X_i'))^2]. By Taylor's Theorem, (X_i') - (X_i') = '()(X_i' - X_i') = '()X_i'( - ) for some ∈ [X_i', X_i']. Then we have = n ( - )'[X_i X_i' '()^2]( - ). Claim that [X_i X_i' '()^2] = [X_i X_i' '(X_i')^2] + (1). If so, then [X_i X_i' '()^2] = '(c)^2 [X_i X_i'] + = '(c)^2 E[X_i X_i'] +. To see this, note that |'(X_i')^2 - '()^2| = |'(X_i') - '()||'(X_i') + '()| ≤ 2 |'|_∞ |”|_∞ |X_i' -|_2 ≲ |X_i' -X_i'|_2 ≤ |X_i|_2| -|_2. Then we have |[X_i X_i' '()^2] - [X_i X_i' '(X_i')^2]|_2 ≤[|X_i|_2^2 |'(X_i')^2 - '()^2|] ≲[|X_i|_2^3] | -|_2 = (1) The last equality if [|X_i|_2^3] = (n^1/2). Note that [|X_i|_2^3] ≤[|X_i|_2^2] max_i=1^n |X_i|_2 = (1) (n^1/2) since E[|X_i|_2^2] < ∞ by assumption, using Lemma C.8 of <cit.>. Then using the claim, ( - ) =, and the linear expansion of ( - ) above, we have shown = '(c)^2 n ( - )'E[X_i X_i']( - ) +, which is = '(c)^2 ('(c) E[X_i X_i'] [ X_i])' E[X_i X_i']('(c) E[X_i X_i'] [ X_i]) + = [ X_i]' E[X_i X_i'] [ X_i] + . Note [] = (n ) by stratification. Since X = (1, h), [ X_i]' = (0, []') + (). Also, by block inversion (E[X_i X_i'] )_hh = (). For some ξ_n = = (0, []')E[X_i X_i'] (0, []')' + = []'(E[X_i X_i'] )_hh[] + = []'() [] + ξ_n. Define the function a(x, y) = x'(h) x + y - ϵ. Then ≤ϵ a(, ξ_n) ≤ 0 for = [] and ξ_n 0. Clearly, x → a(x, 0) is continuous. Also note E[|h|_2^2] < ∞ by assumption. Finally, for ∼(0, E[(h | ψ)]), have P(a(, 0) = 0) = P('(h) = ϵ^2) = 0 since E[(h | ψ)] is full rank. Then this rerandomization satisfies all the conditions in Definition <ref>. By Lemma <ref>, the GMM estimator ( - ) = [(W_i, )] + under this rerandomization. By Theorem <ref>, have [(W_i)] | (0, ) + R with R ∼' | ∈ T ∼' | ' (h)≤ϵ for acceptance region T = {x: a(x, 0) ≤ 0} = {x: x'(h) x ≤ϵ} and = min_Γ∈^× E[((W) - Γ' h | ψ)]. This finishes the proof. §.§ Covariate Adjustment By Lemma <ref>, ( - ) may be expanded as ( - - [(, )]) = [ ((W_i, ) - (, ))] + (1) ≡[β(W_i, )] + (1). By Theorem <ref>, [β(W_i, )] | (0, V) with V = (β(W, )). Since β(W, ) = Π(W, ) - 'h - t_0(ψ) for (, t_0) solving Equation <ref>, this completes the proof. Since = - ['] for α and [] = () by Theorem <ref>, then = - [α'] + () = [ ((W_i, ) - α')] + (), the final equality by Lemma <ref>. The first statement now follows from Slutsky and Theorem <ref>. 
The second statement follows by the same argument used in the proof of Corollary <ref>. By the same argument in the proof of Proposition <ref>, we have = [ ((W_i, ) - ')] + (). Then by Theorem <ref>, ( - ) | (0, V) + R, independent with V = E[((W) - 'w - β_0'h | ψ)] = min_β∈^× E[((W) - 'w - β'h | ψ)]. The residual term R ∼β_0' | ∈ A. Then it suffices to show that β_0 = 0. Define = (W, ) - 'w. By Lemma <ref>, it further suffices to show = 0 solves E[(h | ψ)] = E[(h, | ψ)], i.e. that E[(h, | ψ)] = 0. To do so, note that E[(h, | ψ)] = E[(h, ( - 'w) | ψ)] = E[(h, | ψ)] - E[(h, w| ψ)]. By assumption, E[(w | ψ)] = E[(w, | ψ)]. Since h w, we have E[(h, w| ψ)] = (E[(w | ψ)])_hw = (E[(w | ψ)] )_h θ = (E[(w, | ψ)])_h θ = E[(h, | ψ)] This shows that [(h, | ψ)] = 0, so = 0 is a solution, proving the claim. This finishes the proof of the statement for . The result for follows trivially, as in Corollary <ref>. By Lemma <ref>, ['] = k (k-1) E[(w | ψ)] + (1). Then if E[(w | ψ)] ≻ 0, we have ['] k(k-1) E[(w | ψ)] by continuous mapping. We have by assumption. Then it suffices to show [ (-) '] = k (k-1) E[(w, | ψ)] + (1). First, claim [ (-) '] = [ (-) ()'] + (1), for (θ) ≡(, X_i, S_i, θ). By Taylor's theorem, |() - ()|_2 ≤ |∂/∂θ'(θ̃_i)|_2 | - |_2, where θ̃_i may change by row. Then |[ (-) (() - ())']|_2 ≤[||_2 |() - ()|_2] ≤ | - |_2 [||_2 |∂/∂θ'(θ̃_i)|_2 ] ≤ | - |_2 ([||_2^2] + [|∂/∂θ'(θ̃_i)|_2^2]) by Young's inequality. We showed [|∂/∂θ'(θ̃_i)|_2^2] = (1) in the proof of Lemma <ref>. Similarly, [||_2^2] ≤[||_2^2] = (1) by the bound in Lemma <ref>. Since | - |_2 = (1) by Theorem <ref>, this proves the claim. Next, claim [ (-) ()'] = [(, )'] + (1). By definition, we have [ (-) ()'] = [ (-)ϕ(, )'] + (D)[( - )^2 (, )'] ≡ A_n + B_n. Expanding ( - )^2, B_n = (D)[[(D) + ( - )(1-2)] (, )'] = [(, )'] + 1-2/(D)[( - ) (, )']. Since ϕ = + (1-) and = (D)( - ), apparently it suffices to show [(-)_d(, )'] = (1) for each d=0,1. Since E[|_d(, )|_2^2] < ∞, this follows from Lemma <ref>. Finally, [(, )'] = k (k-1) E[(, (, ) | )] + (1) since E[|w|_2^2 + |_d|_2^2] < ∞ and by applying Lemma <ref> componentwise. This finishes the proof. Suppose E[w_i^2 + v_i^2] < ∞ with , ∈σ(). Then [( - ) ] = (1) and [( - ) ] = (1). Also [] = k-1/k E[(w, v | ψ)] + (1). First, note ||∑_i ∈^2 = ||∑_i ∈ ( - ||∑_j ∈)^2 = _() ≤ E_[^2] = ||∑_i ∈^2. Then in particular ∑_i ∈^2 ≤∑_i ∈^2 and [^2] ≤[^2]. Write [(-)] = n ∑_ for = ∑_i ∈(-). Let = σ(, ). Then ∈, E[ | ] = 0 and u_' | for ≠' by Lemma and Lemma of <cit.>. By Lemma of <cit.>, it suffices to show n ∑_ E[|| (| | > c_n) | ] = (1) for some c_n = o() with c_n →∞. Note that || ≤∑_i ∈ || ≤∑_i ∈^2 + ∑_i ∈^2 ≤∑_i ∈^2 + ∑_i ∈^2 by Young's inequality and the bound above. Note that for any positive constants (a_k)_k=1^m we have ∑_k a_k (∑_k a_k > c) ≤ m ∑_k a_k (a_k > c / m). Applying this fact and the upper bounds gives n∑_ E[|| (| | > c_n) | ] ≤ n∑_ E[∑_i ∈ (^2 + ^2) (∑_i ∈ (^2 + ^2) > c_n) | ] ≤ 2kn∑_∑_i ∈^2 (^2 > c_n / 2k) + 2kn∑_∑_i ∈^2 (^2 > c_n / 2k) The final quantity is 2k [^2 (^2 > c_n / 2k)] + 2k [^2 (^2 > c_n / 2k)] = (1). This follows by Markov inequality since E[[^2 (^2 > c_n / 2k)]] = E[^2 (^2 > c_n / 2k)] → 0 for any c_n →∞ by dominated convergence. This proves the first statement, and the second statement follows by setting → above. For the final statement, calculate ∑_i ∈ = ∑_i ∈ ( - k∑_j ∈)( - k∑_j ∈) = k (k-1) ∑_i ∈ - k ∑_i ≠j ∈ Clearly n k (k-1) ∑_∑_i ∈ = k (k-1) [] = k (k-1) E[] + (1). Then it suffices to show (kn) ∑_∑_i ≠ j ∈ = k (k-1) E[E[ | ] E[ | ]] + (1). 
If so, [] = k (k-1) (E[] - E[E[ | ] E[ | ]]) + (1) = k (k-1) E[(, | )] + (1) as claimed. The analysis of the term in Lemma of <cit.> shows n ∑_∑_i ≠ j ∈ = n ∑_∑_i ≠ j ∈ E[ | ] E[ | ] + (1) = (k-1) [E[ | ] E[ | ]] + (1) = (k-1) E[E[ | ] E[ | ]] + (1). By above work, this finishes our proof of the claim. §.§ Acceptance Region Optimization First we prove part (a). Define the function f(a) = sup_b ∈ |b'a|. As the sup of linear functions, f is convex (e.g. <cit.>). Then the sublevel set A ≡{a: f(a) ≤ 1} is convex. Note that f(a) = f(-a), so A is symmetric. For the main statement of the theorem, let a_n = []. Clearly, f is positive homogeneous, i.e. f(λ a) = λ f(a) for λ≥ 0. Then note that the LHS event occurs iff f(a_n) ≤ϵ f(a_n / ϵ) ≤ 1 a_n / ϵ∈ A a_n ∈ϵ· A. This proves the main statement. Next, we prove (b). Symmetry and convexity were already shown. Suppose is bounded. Then by Cauchy-Schwarz f(a) ≤ |a|_2 sup_b ∈ |b|_2 < ∞ for any a ∈^. Then f is a proper function, so f is continuous by Corollary 10.1.1. of <cit.>. Then A = f ([0, 1]) is closed. Moreover, the open set f ((1/3, 2/3)) f ([0, 1]) = A, so A has non-empty interior. Suppose that is open. Then contains an open ball B(x, δ) for some x ∈^ and δ > 0. Fix a ∈^ and define b(a) = x + (a'x) δ/2 |a| a. By assumption, b(a) ∈. Then f(a) = sup_b ∈ |b'a| ≥ |b(a)'a| = |a'x + (a'x) (δ / 2) |a|| = |a'x| + (δ / 2)|a| ≥ (δ / 2)|a|. Then f(a) = sup_b ∈ B |a'b| ≥ (δ / 2)|a|, so A B(0, 2/δ). Finally, we prove (c). Note that = 'Z | Z ∈ϵ. By symmetry of ϵ, we have E[Z | Z ∈ϵ] = 0. Denote W = Z / ϵ. Then we calculate ( | Z ∈ϵ) = E[('Z)^2 | Z ∈ϵ] ≤ E[sup_γ∈ |γ'Z|^2 | Z ∈ϵ] = ϵ^2 E[sup_γ∈ |γ'W|^2 | W ∈] ≤ϵ^2 · 1 = ϵ^2. The first inequality is by well-specification The final inequality follows since W ∈ = {a : sup_b ∈ |b'a| ≤ 1}. This finishes the proof. For = x + Σ we compute the upper bound. sup_b ∈ |a'b| = sup_u ∈Σ |a'x + a'u| ≤ |a'x| + sup_u ∈Σ |a' ΣΣ u| = |a'x| + sup_v ∈ |(Σ' a)' v| = |a'x| + |Σ' a|_q. Before proceeding, we claim that for any z ∈^, we have max_v ∈ v'z = max_v ∈ |v'z|. Clearly max_v ∈ v'z ≤max_v ∈ |v'z|. Since is compact and v → v'z continuous, v^* ∈_v ∈ |v'z| exists. Then max_v ∈ |v'z| = |z'v^*| = z'v^* (z'v^*) = z'w for w = v^* (z'v^*) ∈ since v^* ∈. Then max_v ∈ |v'z| = z'w ≤max_w ∈ z'w. This proves the claim. Next, define b(a) = x + (a'x) Σ v(a) with v(a) ∈_v ∈ v' Σ' a, which exists by compactness and continuity. Note b(a) ∈ by construction. We may calculate |a'b(a)| = |a'x + (a'x) a'Σ v(a)|. By the claim, a'Σ v(a) ≥ 0. Then by matching signs, |a'x + (a'x) a'Σ v(a)| = |a'x| + |(a'x) a'Σ v(a)| = |a'x| + |a'Σ v(a)|. By the claim again, this is |a'x| + a'Σ v(a) = |a'x| + max_v ∈ |a'Σ v| = |a'x| + |Σ' a|_q. Combining with the upper bound above, we have shown that sup_b ∈ |a'b| = |a'x| + |Σ' a|_q. §.§ Inference Define = () - H_i ', the population version of . Also define = () - ' / and = () + ' / (1-). We may expand ≡ + (1-) = + (1-) = ϕ(W, ), ≡ ( - ) = (W, ) - '. By Theorem <ref>, we need to estimate V = () + E[( | ψ)] = () + E['] - E[E[ | ψ] E[ | ψ]'] ≡ V_1 - V_2. We expand V_1 = () + E['] as V_1 = ( + (1-) ) + E[( - )( - )'] = E[( + (1-) )( + (1-) )'] + E[( - )( - )'] =(^2 + ) E['] + ((1-)^2 + ) E['] = E['] + (1-) E['] = _n() + (1). The second equality since E[] = 0, and the final equality by Lemma <ref>. By Lemma <ref>, we also have V_2 = E[E[ | ψ] E[ | ψ]'] = (E[E[ | ψ] E[ | ψ]'] + E[E[ | ψ] E[ | ψ]']) - (E[E[ | ψ] E[ | ψ]'] + E[E[ | ψ] E[ | ψ]']) = ( + - - ') + (1). This finishes the proof. 
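The support-function identity established just above, sup_b ∈ |a'b| = |a'x| + |Σ'a|_q for the set = x + Σ (image of a unit ball), can be checked numerically. The sketch below is a minimal Monte Carlo sanity check for the Euclidean case p = q = 2; the test values of a, x and Σ are hypothetical and the snippet is an illustration, not part of the proof.

import numpy as np

rng = np.random.default_rng(0)
d = 3
a, x = rng.normal(size=d), rng.normal(size=d)
Sigma = rng.normal(size=(d, d))

# Closed form for B = {x + Sigma u : |u|_2 <= 1}:
#   sup_{b in B} |a'b| = |a'x| + |Sigma'a|_2
closed_form = abs(a @ x) + np.linalg.norm(Sigma.T @ a)

# Monte Carlo over the unit sphere; the supremum is attained on the boundary
u = rng.normal(size=(200_000, d))
u /= np.linalg.norm(u, axis=1, keepdims=True)
monte_carlo = np.abs((x + u @ Sigma.T) @ a).max()

print(closed_form, monte_carlo)  # monte_carlo approaches closed_form from below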
With notation as in the proof of Theorem <ref>, by Theorem <ref>, c'( - ) (0, (c)) with variance (c) = c'E[( | ψ)] c for = ( - ). Then (c) = c'E[( - | ψ)] c may be expanded as · c'(E[( | ψ)] + E[( | ψ)] - 2 E[(, | ψ)])c. Note that by Cauchy-Schwarz and Jensen we have the bound - 2 c'E[(, | ψ)] c ≤ 2 |E[(c', c' | ψ)]| ≤ 2 E[(c' | ψ)^1/2(c' | ψ)^1/2] ≤ 2 (E[(c' | ψ)] E[(c' | ψ)])^1/2 = 2 (c'E[( | ψ)]c · c'E[(| ψ)] c)^1/2. Then we bound (c) ≤(c) ≡ [(c'E[( | ψ)]c)^1/2 + (c'E[(| ψ)] c)^1/2]^2. Note E[( | ψ)] = E['] - E[E[ | ] E[ | ]'] =[/'] - + (1) by Lemma <ref> and Lemma <ref>. Similarly, E[( | ψ)] = [1-/1-'] - + (1). Then for = [/'] - and = [1-/1-'] - by continuous mapping (c) = ([c' c]^1/2 + [c' c]^1/2)^2 (c) ≥(c). This finishes the proof. Comparison of Variances. The superpopulation variance is V(c) = (c') + · (E[(c' | ψ)] + E[(c' | ψ)] - 2 E[(c', c' | ψ)]) =^2 (c') + (1-p)^2 (c') + · (E[(c' | ψ)] + E[(c' | ψ)]). Then the variance gap V(c) - (c) is ^2 (c') + (1-p)^2 (c') - 2 (E[( | ψ)] · E[(| ψ)])^1/2 = p^2 (E[c' | ]) + (1-p)^2 (E[c' | ]) + (p E[(c' | ψ)]^1/2 - (1-p) E[(c' | ψ)]^1/2)^2 ≥ 0. The following hold: * [/'] = E['] + (1) and [1-/1-'] = E['] + (1). * _n() = E['] + (1-) E['] + (1). For (a), consider the first statement. We may expand this as [( / p) '] = [( / p)( - )'] + [( / p)( - )'] + [( / p) ']. For = (), we have | - |_2 = | - - H_i ( - α)' |_2 ≲ | - |_2 ||_2 + ||_2 | - |_2 + | - |_2 |w_i|_2. Then the first term above has |[( / p)( - )']| ≤ | - |_2 [||_2 ||_2] + ||_2 [||_2 | - |_2] + | - |_2 [||_2 |w_i|_2]. We claim this term is (1). Note that | - |_2 = (1) and | - |_2 = (1) by assumption. Then applying Cauchy-Schwarz, it suffices to show [||_2^2 + ||_2^2 + |w_i|_2^2] = (1) and [ | - |_2^2] = (1). First, note [||_2^2] = (1) since E[|w|_2^2] < ∞. Next, note [||_2^2] = [| - H_i ' |_2^2] ≤ 2 [||_2^2] + 2 [|' |_2^2] ≤ 2 ||_2^2 [||_2^2] + 2 ||_2^2 [||_2^2], so clearly it suffices to show [||_2^2] = (1) to handle this term. We start by showing that [ | - |_2^2] = (1). By the mean value theorem () - () = ∂/∂θ'(θ̃_i)( - ), where θ̃_i ∈ [, ] may change by row. Then we have [|() - ()|_2^2] ≤ | - |_2^2 [|∂/∂θ'(θ̃_i)|_2^2], so it suffices to show [|∂/∂θ'(θ̃_i)|_2^2] = (1). Since (θ) = (θ) + (1-) (θ) for all θ, |∂/∂θ'(θ̃_i)|_2^2 ≤ 2|∂/∂θ'(θ̃_i)|_2^2 + 2|∂/∂θ'(θ̃_i)|_2^2. Define the event S_n = {∈ U}. Then on S_n we have |∂/∂θ'(θ̃_i)|_2^2 + |∂/∂θ'(θ̃_i)|_2^2 ≤ |∂/∂θ'(θ̃_i)|_F^2 + |∂/∂θ'(θ̃_i)|_F^2 = ∑_d=0,1∑_k=1^ |∇_di^k(θ̃_ik)|^2_2 ≤∑_d=0,1∑_k=1^sup_θ∈ U |∇_di^k(θ)|^2_2 ≡U̅_i. Then [|∂/∂θ'(θ̃_i)|_2^2] (S_n) ≤[U̅_i] (S_n) = (1) since E[sup_θ∈ U |∇_di^k(θ)|^2_2] < ∞ by assumption. Then [|∂/∂θ'(θ̃_i)|_2^2] = (1) since P(S_n^c) → 0. This finishes the proof of [ | - |_2^2] = (1). Finally, the claim [||_2^2] = (1) is clear since [||_2^2] ≤ 2 [ | - |_2^2] + 2[||_2^2] = (1) + (1) by the preceding claim. Then we have shown |[( / p)( - )']| = (1) and [( / p)( - )'] = (1) by an identical argument. This shows that [( / p) '] = [( / p) '] + (1). Next, we have [( / p) '] = [( / p) '] = [ '] + (1) = E[ '] + (1). The first equality is by definition of ,. The second equality by Lemma of <cit.> and the third equality by vanilla WLLN, using E[||_2^2] < ∞. This finishes our proof of the first statement of (a), and the second statement follows by symmetry. For (b), note ['] = [/'] + (1-) [1-/1-'] = E[/'] + (1-) E['] + (1) by part (a) of the lemma. Moreover, [] = [ - ' w_i] = [] + (1). Note that [] = () and () - () = () - () + (1) = (1). 
The first equality since | - |_, ∞ = (1) and the second by continuous mapping, using Lemma <ref>. Then _n() = E[() ()'] + (1-) E[() ()'] + (1), finishing the proof. In the statement of Theorem <ref>, E[E[() | ψ] E[() | ψ]'] and E[E[() | ψ] E[() | ψ]'], and E[E[() | ψ] E[() | ψ]']. Let ^o denote the oracle version of , substituting = () - H_i ' for , and similarly for ^o, ^o. In Lemma of <cit.>, set A_i = and B_i =. Applying the lemma componentwise, ^o E[E[() | ψ] E[() | ψ]'], ^o E[E[() | ψ] E[() | ψ]'], and ^o E[E[() | ψ] E[() | ψ]']. Then it suffices to show that - ^o = (1), - ^o = (1), and - ^o = (1). For the first statement, expand - ^o = (n) ∑_∈_n1/a() - 1∑_i ≠ j ∈ (' - ') Expand ' - ' = (' - ') + ( - )' ≡ A_ij + B_ij. Using triangle inequality, a() - 1 ≥ 1 and > 0, we calculate ^o - ≲ n ∑_∈_n∑_i, j ∈ |A_ij|_2 + |B_ij|_2 ≡ A_n + B_n. First consider B_n. Using that |xy'|_2 ≤ |x|_2 |y|_2, we have |B_ij|_2 ≤ | - |_2 ||_2 = | - - H_i ( - α)' |_2 ||_2 ≤ | - |_2 ||_2 ||_2+ ||_2 | - |_2 ||_2+ | - |_2 |w_i|_2 ||_2. Then B_n = n ∑_∈_n∑_i, j ∈| - |_2 ||_2 ||_2+ ||_2 | - |_2 ||_2+ | - |_2 |w_i|_2 ||_2 ≡ B_n1 + B_n2 + B_n3. Consider B_n1. This is B_n1 =| - |_2 · n ∑_∈_n∑_i, j ∈ ||_2 ||_2 ≤ | - |_2 · (2n) ∑_∈_n∑_i, j ∈ ||_2^2 + ||_2^2 ≤ | - |_2 · (2n) ∑_∈_n || ∑_i ∈ ||_2^2 + ||_2^2 ≲ | - |_2 [||_2^2 + ||_2^2]. By an identical argument B_n3≲ | - |_2 [|w_i|_2^2 + ||_2^2]. Then to show B_n1 + B_n3 = (1), suffices to show [|w_i|_2^2 + ||_2^2 + ||_2^2] = (1). That [|w_i|_2^2 + ||_2^2] = (1) was shown in the proof of Lemma <ref>. Note [||_2^2] = [| - H_i ' |_2^2] ≤ 2 [||_2^2] + 2 [|' |_2^2] ≤ 2 ||_2^2 [||_2^2] + 2 ||_2^2 [||_2^2] = (1) since E[||_2^2] < ∞ by assumption. Then B_n1 + B_n3 = (1). Finally, consider B_n2. By the mean value theorem () - () = ∂/∂θ'(θ̃_i)( - ), where θ̃_i ∈ [, ] may change by row. Then we have B_n3 = n ∑_∈_n∑_i, j ∈ ||_2 | - |_2 ||_2 ≤ | - |_2 ||_2 · n ∑_∈_n∑_i, j ∈ |∂/∂θ'(θ̃_i)|_2 ||_2 ≲ | - |_2 ||_2 [|∂/∂θ'(θ̃_i)|_2^2 + ||_2^2] = (1). The final equality follows since [|∂/∂θ'(θ̃_i)|_2^2 = (1), as shown in the proof of Lemma <ref>. Then we have shown B_n = (1), and A_n = (1) is identical. This completes the proof that - ^o = (1), and the proof of - ^o = (1), and - ^o = (1) are identical. §.§ Lemmas Consider probability spaces (Ω_n, _n, P_n) and σ-algebras _n. We say A_n ∈^d has A_n | A if ϕ_n(t) ≡ E[e^it'A_n|] = E[e^it'A|] + (1) for each t ∈^d. If g: ^d →ℂ is bounded, measurable, and P(A ∈{a: g(·) discontinuous at a}) = 0 then we have E[g(A_n) | ] = E[g(A)] + (1). See <cit.> for the proof. The following statements hold * There exists ∈^× solving E[(h | ψ)] = E[(h, | ψ)]. For any solution, we have E[( - 'h | ψ)] ≼ E[( - Γ'h | ψ)] for all Γ∈^×. * Let Z = (, ) a random variable with (Z) = E[((, h) | ψ)] ≡ and define = - '. Then (, ) = 0. In particular, if (, ) are jointly Gaussian, then is Gaussian with . In the notation of (b), it suffices to show =. If () = 0 then = c_h a.s. for constant c_h and = (, ) = 0. Then any Γ∈^× is a solution. Then suppose () = r ≥ 1. Let = U Λ U' be the compact SVD with U ∈^× r and (Λ) = r, and U'U = I_r. We claim = UU' a.s. Calculate ((UU'-I)) = (UU'-I)UΛ U'(UU'-I) = 0. Note that Γ = () Γ = (, ) (UU' ) Γ = (UU' , ) U[(U' ) U' Γ - (U' , )] = 0. Define = U' and note () = U'UΛ U'U = Λ≻ 0. Then let Γ̅= () (, ) so that () Γ̅- (, ) = 0. Then it suffices to find Γ such that U'Γ = Γ̅. Since U': ^→^r is onto, there exists Γ^k with U'Γ^k = Γ̅^k. Then let ^k ∈ [Γ^k + (U')] and set = (^k: k=1, …, ), so that U' = Γ̅. Then = by work above. 
For the optimality statement, calculate E[( - Γ'h | ψ)] = - Γ -Γ' + Γ'Γ = - (Γ - + ) -(Γ - + )' + Γ'Γ = -2 ' - (Γ - )' - (Γ - ) + Γ'Γ∝ - (Γ - )' - ' (Γ - ) + Γ'Γ = - (Γ - )' - ' (Γ - ) + Γ'Γ + (Γ - + )' (Γ - + ) = ' + (Γ - )' (Γ - ). Then E[( - Γ'h | ψ)] - E[( - 'h | ψ)]) = (Γ - )' (Γ - ) and for any a ∈^ we have a'(Γ - )' (Γ - ) a ≥ 0 since ≽ 0. This proves the claim. Finally, we have (, ) = ( - ', ) = - ' = 0. The final statement follows from well-known facts about the normal distribution. Suppose Σ∈^m × m is symmetric PSD with (Σ) = r. Then Σ = UΛ U' for U ∈^m × r with U'U = I_r and Λ diagonal. Since Σ is symmetric PSD, there exists B'B = Σ for (B) = r. Let VAU' be the compact SVD of B, with A diagonal. Then Σ = B'B = UA^2U' ≡ UΛ U' with U'U = I_r. Consider probability spaces (Ω_n, _n, P_n) and σ-algebras _n. Suppose 0 ≤ A_n ≤ B < ∞ and A_n = (1). Then E[A_n | ] = (1). For any ϵ > 0, note that E[A_n | ] = E[A_n (A_n ≤ϵ) | ] + E[A_n (A_n > ϵ) | ] ≤ϵ + B P(A_n > ϵ | ). We have E[P(A_n > ϵ | )] = P(A_n > ϵ) = o(1) by tower law and assumption. Then P(A_n > ϵ | ) = (1) by Markov inequality. Then we have shown E[A_n | ] ≤ϵ + T_n(ϵ) with T_n(ϵ) = (1). Fix δ > 0 and let ϵ = δ / 2. Then P(E[A_n | ] > δ) ≤ P(δ / 2 + T_n(δ / 2) > δ) = P(T_n(δ / 2) > δ / 2) = o(1) since T_n(δ / 2) = (1). Since δ was arbitrary, we have shown that E[A_n | ] = (1). A_n = (1) A_n = (c_n) for every sequence c_n →∞. It suffices to consider A_n ≥ 0. The forward direction is clear. For the backward direction, suppose for contradiction that there exists ϵ > 0 such that sup_n ≥ 1 P(A_n > M) > ϵ for all M. Then find n_k such that P(A_n_k > k) > ϵ for each k ≥ 1. We claim n_k →∞. Suppose not and lim inf_k n_k ≤ N < ∞. Then let k(j) →∞ such that n_k(j)≤ N for all j. Choose M' < ∞ such that P(A_n > M') < ϵ for all n =1, … N. Then for k(j) > M' we have P(A_n_k(j) > k(j)) ≤ P(A_n_k(j) > M') < ϵ, which is a contradiction. Then apparently lim_k n_k = +∞. Define Z_j = {i: i ≥ j}. Regard the sequence n_k as map n: ℕ→ℕ. For m ∈⊷(n), define n (m) = min n (m). It's easy to see that n (m_k) →∞ for {m_k}_k ⊷(n) with m_k →∞. Then write sup_k ≥ j P(A_n_k > k) = sup_m ∈ n(Z_j)sup_a ∈ n(m) P(A_m > a) ≤sup_m ∈ n(Z_j) P(A_m > n(m)) Note A_m_k / n(m_k) = (1) by assumption for any {m_k}_k ⊷(n) with m_k →∞. Then we have lim sup_k P(A_n_k > k) = lim_j sup_k ≥ j P(A_n_k > k) = lim_j sup_m ∈ n(Z_j) P(A_m > n(m)) = o(1). This is a contradiction, which completes the proof. Let (G_n)_n ≥ 1 and (B_n)_n ≥ 1 events and random variables. Suppose that the rerandomization measure Q is as in Definition <ref>. * If an event G_n ∈ then P(G_n) = Q(G_n). In particular, if a random variable B_n ∈ then B_n = (1) / (1) B_n = (1) / (1). * If P(G_n) = o(1) then Q(G_n) = o(1). In particular, if B_n = (1) / (1) then B_n = (1) / (1). The first set of statements since Q = P on by definition. Let c = P(∈, with c > 0 by assumption. Define S_n = {P(∈ | ) ≥ c / 2 }. Then by lemma <ref>, P(∈ | ) P(∈) = c, so P(S_n) → 1. We have the upper bound (S_n) Q(G_n | ) = (S_n) P(G_n | ∈, )= (S_n) P(G_n, ∈ | )/P(∈ | ) ≤ (c/2)(S_n) P(G_n, ∈ | ) ≤ (c/2) P(G_n | ). The first equality by definition of Q. The first inequality by the definition of S_n. The final inequality by additivity of measures. Then for r_n ≡ (1-(S_n)) Q(G_n | ), we have Q(G_n | ) = (S_n) Q(G_n | ) + r_n. Note that |r_n| ≤ 1 and r_n 0, so E_Q[r_n] = o(1) by modes of convergence. Then expand Q(G_n) as [Q(G_n | )] = [(S_n) Q(G_n | )] + [r_n] ≤ (c/2)[P(G_n | )] + o(1) = (c/2) E_P[P(G_n | )] + o(1) = (c/2) P(G_n) + o(1). 
The second equality follows from part (a), and the final equality by the tower law. The (1) results follow by setting G_n = {B_n > ϵ}. The (1) results follow by the (1) statement and Lemma <ref>.
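The projection lemma in the Lemmas subsection above states that any coefficient solving the normal equations minimizes the (conditional) residual variance among all linear coefficients. The following sketch checks the unconditional case ψ ≡ 1 on simulated data; the data-generating process and names are ours and serve only as an illustration.

import numpy as np

rng = np.random.default_rng(1)
n, d = 200_000, 4
h = rng.normal(size=(n, d))
eps = h @ np.array([1.0, -2.0, 0.5, 0.0]) + rng.normal(size=n)

# Normal equations Var(h) gamma0 = Cov(h, eps), the unconditional (psi = 1) case
S = np.cov(h, rowvar=False)
c = np.cov(np.column_stack([h, eps]), rowvar=False)[:d, d]
gamma0 = np.linalg.solve(S, c)

def residual_var(g):
    return np.var(eps - h @ g)

# Any perturbed coefficient gives a (weakly) larger residual variance
perturbed = [residual_var(gamma0 + 0.1 * rng.normal(size=d)) for _ in range(50)]
print(residual_var(gamma0) <= min(perturbed))  # True: gamma0 minimizes the sample criterion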
http://arxiv.org/abs/2407.02002v1
20240702072134
Bases for Washington's cyclotomic units of real cyclotomic fields and totally deployed fields
[ "Rafik Souanef" ]
math.NT
[ "math.NT" ]
§ ABSTRACT We aim to present families of generators with minimal cardinality - we call such families bases - of the free abelian group (K) / Z(K) whenever K is a real cyclotomic field Q(ζ_n)^+ or K is a totally deployed abelian number field. Here, (K) refers to the group of Washington's cyclotomic units of K and Z(K) refers to the group of roots of unity lying in K. § INTRODUCTION Cyclotomic units are of interest because of their link with the theory of Z_p-extensions. For instance, the main conjecture of Iwasawa's theory can be vaguely formulated in this way: a module that one can construct using cyclotomic units has the same characteristic ideal as the standard p-ramified module of Iwasawa's theory, which one can construct with some extensions that satisfy some ramification-related conditions (see <cit.>, proposition 4.5.7). Another reason why cyclotomic units are of interest is to approximate the whole group of units of any abelian number field. There are many different types of cyclotomic units (see <cit.>) but in this article we will deal with two types only: the cyclotomic units of Washington and those of Sinnott. Washington's cyclotomic units are defined through Galois invariants, while Sinnott's circular units are defined by explicit generators that generate a subgroup of the group given by Washington: the drawback of having a smaller group - so that we may expect it to be a worse approximation of the group of units - is balanced by better knowledge of the elements of this group. These two types of cyclotomic units can be constructed through two processes that share a common starting point: define in the same way cyclotomic units of cyclotomic fields, and then deduce a definition of cyclotomic units for all abelian number fields. A crucial article in the study of Sinnott's circular units is <cit.>, in which it is proven that the group of Sinnott's circular units has finite index in the group of units and that this index is somehow linked to a real class number. On the other side, the group of Washington's cyclotomic units remains quite mysterious. There are two articles (<cit.> and <cit.>) in which these last units have been studied under some hypotheses on the considered number fields. These last two articles and the work that we present today aim to give a better understanding of this group by giving explicit systems of generators with no Z-relations modulo roots of unity (we then say "basis"). What is new in our work is that we use different hypotheses. For example, we do not consider real fields only. In our work, we consider totally deployed abelian number fields, that is fields of the form K = K_1 ⋯K_r with K_i ⊂Q(ζ_p_i^e_i) for some prime numbers p_i and some integers e_i. More precisely, let K be a number field. Suppose it is abelian, that is K/Q is Galois and its Galois group is abelian. Then, recall that the Kronecker-Weber theorem states that there is an integer n such that K⊂Q(exp(2i π/n)). Define the conductor of K to be the least integer n that satisfies this last condition. From now on, suppose K is an abelian number field with conductor n. Let ζ_n = exp(2 i π / n). Let Z(K) denote the group of roots of unity of K. Let (K) denote the Galois module of Washington's cyclotomic units and let (K) denote the Galois module of Sinnott's circular units (we will recall their definition later).
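As a concrete instance of the definition of the conductor above, for quadratic fields the conductor coincides with the absolute value of the discriminant (a special case of the conductor-discriminant formula). The short sketch below encodes this classical fact; the function name is ours, and the formula assumes d is squarefree with d ≠ 1.

# Conductor of Q(sqrt(d)) for a squarefree integer d != 1:
# it equals |disc| = |d| if d = 1 (mod 4) and 4|d| otherwise.
def quadratic_conductor(d):
    return abs(d) if d % 4 == 1 else 4 * abs(d)

print(quadratic_conductor(5))   # 5: Q(sqrt(5)) lies in Q(zeta_5)
print(quadratic_conductor(-1))  # 4: Q(i) = Q(zeta_4)
print(quadratic_conductor(2))   # 8: Q(sqrt(2)) lies in Q(zeta_8)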
By abuse of language, we will rather talk about bases of (K) instead of bases of (K) / Z(K). If K = Q(ζ_n) is a cyclotomic field, recall Gold and Kim have given bases of (Q(ζ_n)) = (Q(ζ_n)) (see <cit.>, theorem 2). Based on this work, we state theorems <ref>, <ref> and <ref>, which all describe bases of Was(K) under different hypotheses on K. The first two theorems are easy consequences of proposition <ref>, which is itself, more or less, a consequence of proposition <ref>, and these two theorems give bases assuming K = Q(ζ_n)^+ is a real cyclotomic field. In theorem <ref>, we suppose K is totally deployed, and the main idea of the proof is to show that we have a basis by proving that the group generated by the elements we consider is a direct factor of (Q(ζ_n)). This idea has also been used in <cit.> and <cit.>. Divisibility relations arise from this last basis (see corollary <ref>). § NOTATION AND PRELIMINARIES §.§ On units Let A be a Galois module, that is A is an abelian group with some (K/Q) acting Z-linearly on it (we consider extensions of Q only). Suppose K/Q is an abelian extension, so that complex conjugation is well defined as an element of (K/Q). We let A^+ denote the Galois submodule of A that consists of all the elements of A on which the complex conjugation acts trivially. Later on, we will consider A = 𝒪_K^×, the group of units of the ring of integers of K. If x ∈ A, then u ∈Z[(K/Q)] acts on x and we denote by u x or u(x) or x^u the image of x under u. Let ζ_n = exp(2i π / n) for all n ∈N^*. From now on, let n ⩾ 2 satisfying n ≢ 2 mod 4 (with no loss of generality because Q(ζ_n) = Q(ζ_2n) if n is odd). If p ∈P is a prime number, let e_p denote the p-valuation of n and, more generally, let v_p(k) denote that of any integer k. Let n = ∏_j=1^r p_j ^e_j and let q_j = p_j^e_j. We now recall that if n is not a prime power, then 1-ζ_n is a unit of the ring of integers of Q(ζ_n) (see <cit.> proposition 2.8). Now, if n is a prime power, then 1-ζ_n is no longer a unit but (1-ζ_n^σ)/(1-ζ_n) is a unit for all σ∈(Q(ζ_n) / Q ) (lemma 1.3 of <cit.>). Let K be an abelian number field of conductor n. We say K is totally deployed when (K/Q) is the direct product of its inertia subgroups (see the introduction of <cit.>). This condition is equivalent to writing K = K_1 ⋯K_r with K_j ⊂Q(ζ_q_j). Let E(K) be the group of units of the ring of integers O_K of K. Let C_n be the Galois module generated by the roots of unity of Q(ζ_n) and by the 1-ζ_d for d | n. Let (K) = E(K) ∩C_n and let (K) be the intersection of E(K) with the Galois module generated by the roots of unity lying in K and the N_Q(ζ_d) / K∩Q(ζ_d) ( 1- ζ_d) where d ∈ℕ. One can show (see <cit.>) (K) is generated by: -the roots of unity of K, which form a group that we will denote by Z(K) -the N_Q(ζ_d) / K∩Q(ζ_d) ( 1- ζ_d^σ) with d | n such that d is not a prime power and d ∧ (n/d) = 1 and σ∈(Q(ζ_d)/Q) -the N_Q(ζ_d) / K∩Q(ζ_d) ( 1- ζ_d)^1-σ with d being a prime power dividing n such that d ∧ (n/d) = 1 and with σ∈(Q(ζ_d)/Q). It is known that both (K) and (K) have finite index in E(K), that is they both have maximal rank as Z-submodules of E(K), and that their index is linked to the class number of the maximal real subfield K^+ of K (see <cit.>). When the situation makes it clear, we will omit writing K. For example, we will write instead of writing (K).
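Two of the facts recalled above can be checked numerically: the absolute norm of 1-ζ_n down to Q equals Φ_n(1), which is p when n is a power of the prime p and 1 when n has at least two prime divisors (so that 1-ζ_n is then a unit), and the real cyclotomic unit ξ_q,a defined in the next paragraph is indeed real. A minimal sketch, assuming σ acts by ζ ↦ ζ^g for a primitive root g mod q; the numerical tolerance and sample values are ours.

import numpy as np
from math import gcd

def abs_norm_one_minus_zeta(n):
    # prod over a coprime to n of |1 - zeta_n^a| = |Phi_n(1)|
    z = np.exp(2j * np.pi / n)
    out = 1.0
    for a in range(1, n):
        if gcd(a, n) == 1:
            out *= abs(1 - z ** a)
    return out

print(abs_norm_one_minus_zeta(9))   # ~3.0: prime-power conductor, 1 - zeta_9 is not a unit
print(abs_norm_one_minus_zeta(12))  # ~1.0: two prime divisors, 1 - zeta_12 is a unit

# xi_{q,a} is real: here q = 7 and sigma: zeta -> zeta^3 (3 is a primitive root mod 7)
q, g = 7, 3
z = np.exp(2j * np.pi / q)
for a in range(1, 4):
    b = pow(g, a, q)
    xi = np.exp(1j * np.pi * (1 - b) / q) * (1 - z ** b) / (1 - z)
    print(abs(xi.imag) < 1e-12)  # True: xi = sin(pi b / q) / sin(pi / q)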
We now recall the following relations (see <cit.> lemma 2.1): 1-ζ_n^a = -ζ_n^a (1- ζ_n^-a) N_Q(ζ_n) / Q(ζ_d) ( 1- ζ_n) = (∏_p | n p ∤ d (1-(p,d)^-1) ) (1-ζ_d) where d | n, the integer p is prime and (p,d) denotes the Frobenius of Q(ζ_d) defined by ζ_d ↦ζ_d^p. We will refer to this second relation as the "norm relation". We recall a property of Hasse's index. We have [𝐄: 𝐙𝐄^+] ∈{ 1;2 } . Moreover, if K=Q(ζ_n), this index is 1 if and only if n is a prime power. See <cit.> theorem 4.12 and corollary 4.13. We also recall Dirichlet's unit theorem. The group E(K) is isomorphic to the product of Z(K) and a free abelian group of rank r_1+r_2-1, where r_1 denotes the number of real embeddings of K and r_2 denotes the number of complex embeddings of K up to conjugation. We now introduce the notation we will use to work with bases of (most of this notation comes from <cit.> and <cit.>). Recall n = ∏_j=1^r p_j ^e_j and q_j = p_j^e_j. If p_j is odd, let σ_j be a generator of (Q(ζ_q_j)/Q). If p_j=2, let J_j be the complex conjugation considered as an element of (Q(ζ_q_j)/Q) and let σ_j be so that this last Galois group is generated by σ_j and J_j. Let σ_j^k = {[ σ_j^k 0 ⩽ k < 2^e_j-2; σ_j^k J_j 2^e_j-2⩽ k < 2^e_j-1 ]. See the elements of (Q(ζ_q_j)/Q) as elements of (Q(ζ_n)/Q) by letting them act trivially on Q(ζ_n/q_j). Let J_j be the complex conjugation of (Q(ζ_q_j)/Q) considered as an element of (Q(ζ_n)/Q). Let J= J_1 ⋯ J_r be the complex conjugation considered as an element of (Q(ζ_n)/Q). Let a ∈Z. Define (see <cit.> lemma 8.1) ξ_q_j,a = √(ζ_q_j^1-σ_j^a)(1-ζ_q_j^σ_j^a)/(1-ζ_q_j)∈^+(Q(ζ_q_j)) . We recall the basis of (Q(ζ_n)) Gold and Kim described in <cit.> (theorem 2). For all Ω={ i_1 , … , i_s }⊂ 1 , r, let n_Ω = ∏_j ∈Ω p_j^e_j ζ_Ω = ζ_n_Ω. For all Ω = { j }⊂ 1 , r, let X_Ω^1 = { a ∈N: 1 ⩽ a < φ(q_j)/2 } B_Ω = {ξ_q_j,a: 1 ⩽ a < φ(q_j)/2 } . For all Ω = { i_1,…,i_s }⊂ 1 , r that is neither empty nor a singleton and such that i_1 < … < i_s, and for all 0 ⩽ k ⩽ s, let X_Ω^k be the set of all tuples (a_1,…,a_s) ∈N^s such that {[ 1 ⩽ a_j < φ(q_i_j) j ∈1, k; 1 ⩽ a_k < φ(q_i_k)/2 ; a_j = 0 j ∈ k,s . ]. Let B_Ω^k = {( ∏_j=1^s σ_i_j^a_j) (1-ζ_Ω): (a_1,…,a_s) ∈ X_Ω^k }. For k = 0, let X_Ω^k = { (a_1,…,a_s) ∈N^s: ∀ j ∈ 1 , s , a_j = 0 } and let B_Ω^k be defined in a similar way. Then, let B_Ω = {[ ⋃_k=0^s B_Ω^k 2 | |Ω|; ⋃_k=1^s B_Ω^k 2 ∤ |Ω|. ]. The family ∪_Ω B_Ω - where Ω runs over the set of all non-empty subsets of 1,r - is a basis of (Q(ζ_n)). See <cit.> (theorem 2). We will keep in mind that there is a one-to-one correspondence between all the elements of Gold and Kim's basis and all the tuples (Ω, a_1,…,a_s). We say Ω or n_Ω is the level of the corresponding element x. In the proof of theorem <ref>, we will order those tuples in the following way: (Ω_1, a_1,…,a_s_1) ⩽ (Ω_2, b_1,…,b_s_2) ⟺{[ ( Ω_1 ⊄Ω_2 Ω_2 ⊄Ω_1 |Ω_1| ⩾ | Ω_2| ) ; ( Ω_2 Ω_1 ) ; ( Ω_1 = Ω_2 (a_1,…,a_s_1) ≤_L (b_1,…,b_s_2) ) ]. where ≤_L denotes the inverse lexicographic order associated to the natural order on N - that is, we compare integers starting from the last index and ending with the first index. Here, with s=s_1=s_2, this means either a_s < b_s, or a_s=b_s and a_s-1 < b_s-1, and so on. In particular, note this relation is not a partial order. In what follows, we will need the following notation. Let L_n = Q(ζ_q_1)^+⋯Q(ζ_q_r)^+ . There is a root of unity η∈Q(ζ_n) (see <cit.> 2-ii) such that η_n:= ηN_Q(ζ_n) / Q(ζ_q_1)^+⋯Q(ζ_q_r-1)^+Q(ζ_q_r) (1-ζ_n) ∈Q(ζ_q_1)^+⋯Q(ζ_q_r)^+. For all L⊂L_n of conductor n, let e_L = N_L_n / L (η_n) ∈L.
We also define similar objects by swapping n with any of its divisors. §.§ On the convolution product Throughout this section, we recall - in the needed context only - several facts that are stated in a more general context in <cit.> and <cit.> and that deal with Möbius functions. Let E be a finite set. Define F(E) to be the set of functions [ f: P(E) ⟶ C ] . This set has a law of addition and a convolution product defined in the following way: ∀ f,g ∈F(E), ∀Ω⊂ E, f * g (Ω) = ∑_X ⊂Ω f(X) g(Ω∖ X). One can show (F(E),+,*) is a ring whose identity element is the function that maps ∅ to 1 and every subset Ω≠∅ to 0. Denote by 1 the element of F(E) that maps every Ω⊂ E to 1. One can show 1 is a unit and we let μ denote its inverse. We have (see <cit.> equation 3.3) ∀Ω⊂ E, μ(Ω) = (-1)^|Ω| . In particular, we have the following theorem. Let f,g ∈F(E). We have ( ∀Ω⊂ E, ∑_X ⊂Ω f(X) = g(Ω) )⟺( ∀Ω⊂ E, f(Ω) = ∑_X ⊂Ω (-1)^|Ω| - |X| g(X) ). See <cit.> proposition 2. Later, we will use this convolution product with E = 1 , r. § FROM NONREAL FIELDS TO REAL FIELDS In this section, we aim to give bases of ^+(Q(ζ_n)) (recall we talk about ^+(Q(ζ_n)) instead of talking about the quotient ^+(Q(ζ_n))/Z^+(Q(ζ_n))). More precisely, for any abelian number field K, we give a way to construct a basis of ^+(K) given a basis of (K) (proposition <ref>) and we then apply this method when K is a cyclotomic field (theorems <ref> and <ref>). §.§ Abelian fields Let K be an abelian number field. Let (x_1,… , x_r) be a Z-basis of (K). With no loss of generality, suppose there is r' ∈ 0; r such that x_1, … , x_r' have order 2 in the quotient group 𝐄 / ZE^+ and x_r'+1, … , x_r have order 1. Then, the family (|x_1| | x_1|, … , |x_1| |x_r'|, |x_r'+1 |, …, |x_r|) is a basis of (K^+). First, if x ∈ZE^+, observe we have |x| ∈E^+ and, if we also suppose x ∈, then |x| ∈^+. Indeed, write x=z u ∈ZE^+. Then, we have |x| = ± u and this proves the first statement. Now, suppose we also have x ∈. Then, we have u = z^-1 x ∈∩E^+ and this proves the second statement. Hence, the family (|x_1| | x_1|, … , |x_1| |x_r'|, |x_r'+1 |, …, |x_r|) is made of elements of ^+. Now, let us show these elements are generators of ^+. Let x ∈^+ and write x = ζ∏_i=1^r x_i^a_i for some a_i ∈Z, ζ∈Z. In particular, we have x ∈E^+, so the x_i with i ⩽ r' appear an even number of times in total, that is we have: ∑_i=1^r' a_i ∈ 2 Z . Let A = a_1 - ∑_i = 2^r' a_i ∈ 2 Z. Therefore, we have x = ζ x_1^A( ∏_i=2^r' (x_1x_i)^a_i) (∏_i>r' x_i^a_i) = ζ' |x_1|^A( ∏_i=2^r' |x_1x_i|^a_i) (∏_i>r' |x_i|^a_i) for some root of unity ζ'. As we have x ∈E^+ and |x_i| ∈R, we have ζ' = ± 1, which proves the considered family is a generating family. It is a basis because of Dirichlet's theorem. §.§ Cyclotomic field with odd conductor Suppose n is odd. A basis of ^+(Q(ζ_n)) is given by: -the |ξ_p^e_p,a|'s with 1 ⩽ a < φ(p^e_p) / 2 and p running over the set of all prime divisors of n -the |1-ζ_n^σ_1||x|'s with Ω⊂ 1 ,r, |Ω| ⩾ 2 and x ∈ B_Ω. We will apply proposition <ref> to the basis of (Q(ζ_n)) given in theorem <ref>. We have ξ_p^e_p,a∈Z(Q(ζ_p^e_p)) E^+(Q(ζ_p^e_p)) from proposition <ref>. For the other generators, observe we have, for every divisor d | n and for all a ∈Z 1- ζ_d^a = ζ_2d^a ( ζ_2d^-a - ζ_2d^a) . As d is odd, we have ζ_2d∈Q(ζ_d), so that the previous decomposition takes place in Q(ζ_d). Moreover, we have ( ζ_2d^-a - ζ_2d^a) = ± i |1 -ζ_d^a |. We will keep the following decomposition in mind 1- ζ_d^a = ± i ζ_2d^a |1-ζ_d^a|.
which shows that 1- ζ_d^a has order 2 in the quotient group E(Q(ζ_n))/Z(Q(ζ_n))E^+(Q(ζ_n)) whenever a is prime to d; otherwise we would have i ∈Q(ζ_n), and that is not the case. Through proposition <ref>, we can understand why products of pairs of cyclotomic units occur only when the conductor n is not a prime power. §.§ Cyclotomic field with even conductor Suppose n is even and write n = 2^e_1 p_2^e_2⋯ p_r ^e_r∈N. Let m = p_2^e_2⋯ p_r ^e_r. A basis of ^+(Q(ζ_n)) is given by: -the |ξ_p^e_p,a|'s with 1 ⩽ a < φ(p^e_p)/2 and p being a prime divisor of n (we will say these generators have type 0) -the |x|'s where Ω⊂ 1 , r satisfies |Ω| ⩾ 2 and x ∈ B_Ω has odd level d (those have type 1) -the |1-ζ_n^σ_1| |x|'s where Ω⊂ 1 , r satisfies |Ω| ⩾ 2 and x ∈ B_Ω has even level d, that is v_2(d) = e_1 (those have type 2). We just apply proposition <ref> to the basis of (Q(ζ_n)) given in theorem <ref>. For generators having type 0, see our previous proof. For generators having type 1, such an element x can be written as 1-ζ_d^a for some d | n with d being odd and a ∧ d = 1. Moreover, by equation (<ref>), we have 1-ζ_d^a ∈Z(Q(ζ_n)) E^+(Q(ζ_n)) as desired. For generators having type 2, such an element x can be written as 1-ζ_d^a for some d | n satisfying v_2(d) = e_1 and a ∧ d = 1. The same equation shows we have 1-ζ_d^a ∉Z(Q(ζ_n)) E^+(Q(ζ_n)), otherwise we would have some primitive 2^1+e_1-th root of unity lying in Q(ζ_n), which is not the case. § TOTALLY DEPLOYED FIELDS Through this section, we aim to give a basis of (K) when K is a totally deployed abelian number field (see our introduction). From now on, we suppose K is a totally deployed abelian number field with conductor n, and we write K= K_1 ⋯K_r where K_j ⊂Q(ζ_q_j). To this end, we will consider a family of elements of K that has rg_Z((K)) = r_1+r_2-1 elements and that generates a direct factor of (Q(ζ_n))/Z(Q(ζ_n)). It is not hard to see that this property makes this family generate (K)/Z(K), so that this family is a basis. More precisely, we will construct a basis of (K) that can be completed with Gold and Kim's basis to form a basis of (Q(ζ_n)). This idea has already been used in <cit.>, <cit.>. Actually, in order to prove proposition 2 from <cit.>, the author proves the following fact. Let L be an abelian number field with conductor n. If H is a group such that Z(L) ⊂ H ⊂(L), H is a direct factor of (Q(ζ_n)) and H has the same Z-rank as (L), then we have H = (L). See the proof of proposition 2 from <cit.>. We should also mention that the fact that one can state more results when K is totally deployed appears in <cit.>, <cit.> and <cit.>. We now construct some sets and fix some notation to state our next theorem. To make it easier to understand, we tried to divide it into many definitions. The reader should not view the following items as independent definitions, but rather as a single definition split into pieces for readability. We will refer to this notation as the notation of theorem <ref>. Let d_j denote the degree of K_j/Q. Let d_j = {[ d_j K_j; d_j/2 ]. With no loss of generality, suppose K_1,…,K_t-1 are real and K_t,…,K_r are not. For all Ω = { i_1,…,i_s }⊂ 1 , r that is non-empty, let K_Ω = K_i_1⋯K_i_s. For all Ω = { j }⊂ 1 , r, let B_Ω(K) = {N_Q(ζ_q_j)^+ / K_j^+ (ξ_q_j,a): 1 ⩽ a < d_j } . For all X ⊂N, let 1_X : N→N denote the indicator function of X.
For all Ω = { i_1,…,i_s }⊂ 1 , r with s ⩾ 2, such that i_1 < … < i_s and K_i_s is non-real (that is, K_Ω decomposes with at least one non-real field), let t_Ω be the integer such that K_i_1,…,K_i_t_Ω-1 are real and K_i_t_Ω, …, K_i_s are non-real. For all t_Ω⩽ k < s, let X_Ω^k(K) be the set of all tuples (a_1,…,a_s) ∈N^s such that {[ 1_2 ℤ ( s-k) φ(q_i_j)/2 < a_j < d_i_j + 1_2 ℤ ( s-k) φ(q_i_j)/2 j ∈1 , k; 1_2 ℤ ( s-k) φ(q_i_k)/2 < a_k < d_i_k/2 + 1_2 ℤ ( s-k) φ(q_i_k)/2 ; a_j = d_i_j/2 + 1_2 ℤ ( s-j) φ(q_i_j)/2 j ∈ k , s . ]. If k = s, let X_Ω^k(K) be the set of all tuples (a_1,…,a_s) ∈N^s such that {[ 0 < a_j < d_i_j j ∈1 , s; 0 < a_s < d_i_s/2. ]. If k=0 (which will be useful if t_Ω = 1), let X_Ω^k(K) = { (a_1,…,a_s) ∈N^s: ∀ j ∈ 1 , s , a_j = d_i_j/2 + 1_2 ℤ ( s-j) φ(q_i_j)/2} and if k=t_Ω-1 (assuming t_Ω > 1), let X_Ω^k(K) be the set of all tuples (a_1,…,a_s) ∈N^s such that {[ 1_2 ℤ ( s-k) φ(q_i_j)/2 < a_j < d_i_j + 1_2 ℤ ( s-k) φ(q_i_j)/2 j ∈1 , t_Ω; a_j = d_i_j/2 + 1_2 ℤ ( s-j) φ(q_i_j)/2 j ∈ t_Ω , s . ]. Let Ω_C = Ω∩ t,r and Ω_R = Ω∩ 1,t-1. Under any of the previous assumptions on k, we let B_Ω^k(K) = {( ∏_j=1^s σ_i_j^a_j) N_Q(ζ_Ω) / K_Ω (1-ζ_Ω): (a_1,…,a_s) ∈ X_Ω^k(K) } B_Ω(K) = {[ ⋃_k=0^s B_Ω^k(K) Ω_R = ∅ 2 | |Ω|; ⋃_k=1^s B_Ω^k(K) Ω_R = ∅ 2 ∤ |Ω|; ⋃_k=t_Ω-1^s B_Ω^k(K) Ω_R≠∅ 2 | |Ω_C|; ⋃_k=t_Ω^s B_Ω^k(K) Ω_R≠∅ 2 ∤ |Ω_C|. ]. Recall e_L is defined in our introduction. If K_i_s is real (that is, K_Ω decomposes with real fields only), let B_Ω(K) = {( ∏_j=1^s σ_i_j^a_j) (e_K_Ω): 0 < a_1 < d_i_1, …, 0 < a_s < d_i_s} . The family B(K)=∪_Ω B_Ω(K), where Ω runs over the set of all non-empty subsets of 1,r, is a basis of (K). Moreover, as a group, (K) / Z(K) (which is naturally isomorphic to (K) Z(Q(ζ_n))/ Z(Q(ζ_n))) is a direct factor of (Q(ζ_n)) / Z(Q(ζ_n)). In what follows, to make it simple to read, we will talk about (Q(ζ_n)) instead of (Q(ζ_n)) / Z(Q(ζ_n)). More precisely, the family B(K) can be completed to a basis of (Q(ζ_n)) using Gold and Kim's basis. For every non-empty subset Ω⊂ 1 , r, let f_C (Ω) = 1/2( ∏_i ∈Ω (d_i-1) ) + (-1)^|Ω|/2 f_R (Ω) = ∏_i ∈Ω (d_i-1) g_C (Ω) = 1/2∏_i ∈Ω d_i g_R (Ω) = ∏_i ∈Ω d_i . If Ω = ∅, say each of these functions maps Ω to 1. We have ∑_Ω⊂ 1 , r Ω≠∅ f_C(Ω) = g_C( 1 , r ) -1 ∑_Ω⊂ 1 , r Ω≠∅ f_R(Ω) = g_R( 1 , r ) -1. Let us prove the lemma first. Case 1. We start with equation (<ref>). We have to prove 1 * f_R( 1 , r ) = g_R( 1 , r ); instead, we will show that, ∀Ω⊂ 1 , r , f_R( Ω) = μ *g_R(Ω), from which the desired result follows. We have μ *g_R(Ω) = ∑_X ⊂Ω (-1)^|Ω|-|X| g_R(X) = ∑_X ⊂Ω (-1)^|Ω|-|X|∏_i ∈ X d_i = (-1)^|Ω| + ∑_k=1^|Ω| (-1)^|Ω|-k∑_i_1,…,i_k ∈ Ω i_1 < … < i_k d_i_1⋯ d_i_k. Using Vieta's formulas, we can see that this last expression matches the evaluation of the polynomial (-1)^|Ω|∏_i ∈Ω (X-d_i) at X=1, hence μ *g_R(Ω) = (-1)^|Ω|∏_i ∈Ω (1-d_i) = f_R(Ω).
For all non-empty subset Ω⊂ 1 , r, we denote by f(Ω) the number of elements of B_Ω(K) and we let g(Ω) ={[ 1/2∏_i ∈Ω d_i |Ω_C| ≠ 0; ∏_i ∈Ω d_i ]. Also, say these functions both map Ω = ∅ to 1. Then, we have to show 1*f ( 1 ,r ) = g( 1 ,r ). Again, we rather show ∀Ω⊂ 1 ,r , f(Ω) = μ * g(Ω). If |Ω| ⩽ 1, there is nothing to be proven so we may suppose |Ω| ⩾ 1. We separate three cases. Suppose we have Ω_C = ∅. Then, lemma <ref> gives μ * g (Ω) = ∑_X ⊂Ω (-1)^|Ω| - |X| g(X) = ∑_X ⊂Ω (-1)^|Ω| - |X|∏_i ∈ X d_i = ∏_i ∈Ω (d_i -1) and it remains to observe f(Ω) =∏_i ∈Ω (d_i -1) since we supposed Ω_C = ∅. Now suppose Ω_R = ∅. Again, lemma <ref> gives μ * g (Ω) = f_C(Ω). If |Ω| is odd, we have f(Ω) = ∑_k=1^r | B_Ω^k(K) | = ∑_k=1^r (1/2d_i_k -1)(d_i_k-1-1) ⋯ (d_i_1 -1) and by induction on N ∈ 1,r, one can show ∑_k=1^N (1/2d_i_k -1)(d_i_k-1-1) ⋯ (d_i_1 -1) = 1/2(d_i_N-1) ⋯ (d_i_1-1) -1/2. Taking N =r, we get μ * g (Ω) = f_C(Ω) = f(Ω). In the same way, if |Ω| is even, we have f(Ω) = 1+ ∑_k=1^r | B_Ω^k(K) | = 1+ ∑_k=1^r (1/2d_i_k -1)(d_i_k-1-1) ⋯ (d_i_1 -1) and we get the same conclusion. Now suppose we have Ω_C≠∅ and Ω_R≠∅. We have μ * g (Ω) = ∑_X ⊂Ω (-1)^|Ω| - |X| g(X) = ∑_X_1 ⊂Ω_R X_2 ⊂Ω_C X_2 ≠∅ (-1)^|Ω| - |X_1| - |X_2|1/2∏_i ∈ X_1 ∪ X_2 d_i + ∑_X_1 ⊂Ω_R (-1)^|Ω| - |X_1|∏_i ∈ X_1 d_i = ∑_X_1 ⊂Ω_R X_2 ⊂Ω_C (-1)^|Ω| - |X_1| - |X_2| g_C(X_1 ∪ X_2) - ∑_X_1 ⊂Ω_R (-1)^|Ω_C| + |Ω_R| - |X_1| g_C(X_1) + ∑_X_1 ⊂Ω_R (-1)^|Ω_C| + |Ω_R| - |X_1| g_R(X_1) = f_C(Ω) - (-1)^|Ω_C| f_C(Ω_R) + (-1)^|Ω_C| f_R(Ω_R) = 1/2∏_i ∈Ω (d_i-1) + (-1)^|Ω_C|/2∏_i ∈Ω_R (d_i-1) . Separate cases depending on whether |Ω_C| is even or not and one can show (using a similar induction argument as before) we have f(Ω) = 1/2∏_i ∈Ω (d_i-1) + (-1)^|Ω_C|/2∏_i ∈Ω_R (d_i-1) . This conclude the proof of the fact B(K) has the expected cardinality. We may now show the elements of B(K) generate a direct factor of (Q(ζ_n)). To make it easier to comprehend, we may assume n is odd and, at the end, we will explain what to do if n is even. In order to show what we want, we will actually make every element x of B_Ω(K) correspond to a tuple (a_1,…,a_s) ∈∪_k=0^s X_Ω^k. Most of the time, this tuple will be the the least tuple - ordering tuples with inverse lexicographic order (see our introduction) - such that ( ∏_j=1^s σ_i_j^a_j) (1-ζ_Ω) appears in the decomposition of x in Gold and Kim's basis of (Q(ζ_n)). Moreover, we will show this element appears with exponent ± 1 and also these tuples are pairwise distinct. Later on, we will say we have made correspond the tuple (a_1,…,a_s) to x. All this will conclude because, if we consider the matrix whose colums are the elements of B(K) that are decomposed in Gold and Kim's basis, ordering all these elements correctly make this matrix be triangular with coefficients ± 1 over the diagonal. More precisely, we may order the columns in the following way. Let x ∈ B_Ω_1(K) and y ∈ B_Ω_2(K) such that they correspond to (a_1,…,a_s_1) and (b_1, … , b_s_2) by the method we just described. The column that is associated to x is to the left of the one associated to y if we have (Ω_1, a_1, … , a_s_1) ⩽ (Ω_2, b_1, … , b_s_2) according to what we have defined in our introduction. Note we then have to arbitrarily choose a way to order the subsets Ω's according to this last binary relaton. Then, we add columns to the right - in an arbitrary order - that corresponds to all the elements of Gold and Kim's basis that are not associated to the tuples (Ω,a_1,…,a_s)'s we have just talked about. 
We may also order rows in the following way: starting from the top and ending at the bottom, we order - in an ascending order, with the binary relation we just recalled - all the elements of Gold and Kim's basis that are associated to the tuples (Ω, a_1,…,a_s)'s we just mentionned. Then, we add - at the bottom - all the elements of Gold and Kim's basis that were left by this last ordering according to the same arbitrary order we have chosen when ordering the last columns. We will use the same technics as Gold and Kim use in <cit.> to prove their theorem 1. Again, to make it easier to comprehend, we will consider x ∈ B_ 1 , r (K) but one can argue in the same way for any subset Ω⊂ 1 , r. We will suppose n is not a prime power because this case has already been considered in the proof of proposition 2 from <cit.> and in the proof of theorem 2.1 from <cit.>. Finally, observe all groups (Q(ζ_q_i) / K_i) are finite, cyclic and generated by σ_i^d_i. We now separate cases, just like in the definition of B_ 1 , r (K). Suppose K_1,…,K_r are real fields. This case has already been considered in <cit.> (see proposition 2 and remark 4) but the author proved his result only in a special case (and stated it in the general case), we may prove it now. In this case, we may show that, for any 0 < a_1 < d_1, …, 0 < a_r < d_r, we can make correspond (a_1,…,a_s) to x= ( ∏_j=1^r σ_j^a_j) (e_K). First, observe we have d_j |φ(q_j)/2 since every K_j is real. Modulo roots of unity of Q(ζ_n), we have x = N_L_n / K (η_n^σ_1^a_1⋯σ_r^a_r) = ∏_j_1=0^φ(q_1)/(2d_1)-1 ⋯∏_j_r=0^φ(q_r)/(2d_r)-1 ∏_ε_1,…,ε_r-1∈{ 0;1 } 1-ζ_n^J_1^ε_1σ_1^ a_1 + j_1 d_1⋯ J_r-1^ε_r-1σ_r-1^a_r-1+ j_r-1 d_r-1σ_r^a_r + j_r d_r and this is the decomposition of x in the basis of theorem <ref>. Indeed, we have a_r + j_r d_r ∈ 0 , φ(q_r)/2 and J_i^ε_iσ_i^a_i + j_i d_i = σ_i^ε_i φ(q_i)/2 + a_i + j_i d_i ε_i φ(q_i)/2 + a_i + j_i d_i ∈ 0 ; φ(q_i) . As we wanted, observe we can make correspond (a_1,…,a_r) to x. Suppose K_1, … , K_t-1 are real and K_t ,…, K_r are not, for some t ⩾ 2 and r ⩾ t. Observe we have ∀ i ⩾ t, φ(q_i)/2 = d_i/2 d_i since K_i is not real, so that ∀ i ⩾ t, J_i = σ_i^d_i/2 + kd_i holds for k=φ(q_i)/(2d_i)-1/2. Now, consider x = ( ∏_j=1^r σ_j^a_j) N_Q (ζ_n)/ K (1-ζ_n) with (a_1,…,a_r) ∈ X_ 1 , r ^k(K) for some t ⩽ k ⩽ r. We may show by induction on l ⩾ k we can make correspond the following tuple (a_1-ε_k φ(q_1)/2, …, a_k - ε_k φ(q_k)/2,0,…,0) to the following element x_l=∏_j_1=0^φ(q_1)/d_1 -1⋯∏_j_l=0^φ(q_l)/d_l -1 1- ζ_n^ J_1^ε_lσ_1^a_1+j_1 d_1⋯ J_l^ε_lσ_l^a_l + j_l d_l with ε_l = 1_2 ℤ ( r-l). Base case with l=k: Let A_i = a_i-ε_k φ(q_i)/2∈ 0 ; d_i . Observe we have A_k ∈ 0 ; d_k/2. Modulo roots of unity of Q(ζ_n), we have x_k = ∏_j_1=0^φ(q_1)/d_1 -1⋯∏_j_k-1=0^φ(q_k-1)/d_k-1 -1∏_j_k⩽φ(q_k)/(2d_k) -1/2 1- ζ_n^σ_1^A_1+j_1 d_1⋯σ_k^A_k + j_k d_k ∏_j_1=0^φ(q_1)/d_1 -1⋯∏_j_k-1=0^φ(q_k-1)/d_k-1 -1∏_j_k > φ(q_k)/(2d_k) -1/2 1- ζ_n^σ_1^A_1+j_1 d_1⋯σ_k^A_k + j_k d_k . The first group of terms is already decomposed in the basis of theorem <ref>: all exponants A_i + j_1 d_1 lie in 0 , φ(q_i) whenever i < k and we have A_k + j_k d_k ∈ 0 , φ(q_k)/2. We can see the tuple we are looking for appears with exponant 1. The second group of terms can be treated just like the third one in the following induction step and then we can see the tuple we are looking for does not appear since this second group of terms decomposes with tuples from ∪_j > k X_1 , r ^j and tuples from X_1 , r ^k which have their kth coefficient congruent to a_k+d_k/2 in modulus d_k. 
Induction step: Let l > k and suppose the result holds for previous cases. Let A_i = a_i - ε_l φ(q_i)/2. Observe we have ∀ i ⩽ k , A_i ≠ 0 d_i A_l = d_l/2 . Write x_l = (∏_j_1=0^φ(q_1)/d_1 -1⋯∏_j_l-1=0^φ(q_l-1)/d_l-1 -1∏_j_l < φ(q_l)/(2d_l) -1/2 1- ζ_n^σ_1^A_1+j_1 d_1⋯σ_l^A_l + j_l d_l) (∏_j_1=0^φ(q_1)/d_1 -1⋯∏_j_l-1=0^φ(q_l-1)/d_l-1 -1∏_j_l = φ(q_l)/(2d_l) -1/2 1- ζ_n^σ_1^A_1+j_1 d_1⋯σ_l^A_l + j_l d_l) (∏_j_1=0^φ(q_1)/d_1 -1⋯∏_j_l-1=0^φ(q_l-1)/d_l-1 -1∏_j_l > φ(q_l)/(2d_l) -1/2 1- ζ_n^σ_1^A_1+j_1 d_1⋯σ_l^A_l + j_l d_l). The first group of terms is almost decomposed in the basis of theorem <ref>: whenever i < l, we may reduce A_i + j_i d_i modulo φ(q_i). Then, zeros might appear and we are now going to explain how to decompose these terms in the basis of theorem <ref>. Consider 1-ζ_n^σ_1^b_1⋯σ_l^b_l with b_l ∈ 0 , φ(q_l) / 2 and, for all i < l, suppose b_i ∈ 0 ; φ(q_i). Let i_1,…,i_B be all the indexes i satisfying b_i = 0. Norm relations for Q(ζ_n/q_i_1) allow us to write 1-ζ_n^σ_1^b_1⋯σ_l^b_l with terms that have lower level and terms of the form 1-ζ_n^σ_1^c_1⋯σ_l^c_l with c_i = b_i for all i ≠ i_1 and c_i_1∈ 0 , φ(q_i). By iterating this process, we see we can decompose 1-ζ_n^σ_1^b_1⋯σ_l^b_l with terms that have lower level and terms of the form 1-ζ_n^σ_1^c_1⋯σ_l^c_l with c_l = b_l and c_i ∈ 0 , φ(q_i) for all i < l. Now, come back to x_l. From what preceeds, we can say the tuple we are looking for does not appear in the first group of terms. We may now consider the third group of terms. We may observe these terms satisfy A_l + j_l d_l > φ(q_l)/2. Using norm relations for Q(ζ_n/q_l+1), we can decompose these terms with terms that have lower level and terms of the form 1- ζ_n^σ_1^A_1+j_1 d_1⋯σ_l^A_l + j_l d_lσ_l+1^B_l+1 with B_l+1∈ 0 , φ(q_l+1) and with exponant ± 1. The same considerations as those we used for the first group of terms show that if B_l+1 < φ(q_l+1)/2, then the following term 1- ζ_n^σ_1^A_1+j_1 d_1⋯σ_l^A_l + j_l d_lσ_l+1^B_l+1 decomposes in the basis of theorem <ref> with terms that have lower level and terms from X_ 1 , r ^l+1. Hence, these terms do not involve the tuple we are looking for. Then, we may now consider terms with B_l+1⩾φ(q_l+1)/2. To this aim, we use norm relations for Q(ζ_n/q_l+2)) and this leads to consider terms of the form 1- ζ_n^σ_1^A_1+j_1 d_1⋯σ_l^A_l + j_l d_lσ_l+1^B_l+1σ_l+2^B_l+2 with B_l+2∈ 0 , φ(q_l+2). Again, separate cases depending on whether we have B_l+2 < φ(q_l+2)/2. Now iterate this process and we are now lead to consider terms of the form 1- ζ_n^σ_1^A_1+j_1 d_1⋯σ_l^A_l + j_l d_lσ_l+1^B_l+1⋯σ_r^B_r with φ(q_i)/2 ⩽ B_i < φ(q_i) for all l < i ⩽ r. Equation (<ref>) allows us to transform these terms into terms of the form 1- ζ_n^σ_1^C_1+j_1 d_1⋯σ_l^C_l + j_l d_lσ_l+1^D_l+1⋯σ_r^D_r with 0 ⩽ D_i < φ(q_i)/2 for all i > l and C_i = A_i - φ(q_i)/2 for all i ⩽ l, so that C_l + j_l d_l ∈ d_l/2 , φ(q_l)/2 . Now, we just have to do the same manipulations as those for the first group of terms to see these last terms can be decomposed in the basis of theorem <ref> with tuples from ∪_j ⩾ l X_ 1 , r ^j so that the third group of terms does not involve the tuple we are looking for. We may now consider the second group of terms. This can be done through a similar process just like for the third group of terms. That is we are lead to consider terms of the form 1- ζ_n^σ_1^C_1+j_1 d_1⋯σ_l^C_l + j_l d_lσ_l+1^D_l+1⋯σ_r^D_r with 0 ⩽ D_i < φ(q_i)/2 for all i > l and C_i = A_i - φ(q_i)/2 for all i ⩽ l but in this case we now have C_l + j_l d_l =0 . 
For the same reasons as before, terms such that there is i > l satisfying D_i ≠ 0 can be decomposed in the basis of theorem <ref> with tuples from ∪_j > l X_ 1 , r ^j (so that the tuple we are looking for is not involved). On the other side, if we gather all terms such that D_i = 0 for all i > l, this leads us to consider the following product of terms instead of the second group of terms ( ∏_j_1=0^φ(q_1)/d_1 -1⋯∏_j_l-1=0^φ(q_l-1)/d_l-1 -1 1- ζ_n^J_1 σ_1^A_1 +j_1 d_1⋯ J_l-1σ_l-1^A_l-1 + j_l-1 d_l-1)^± 1 and we can use induction hypothesis to put an end to this induction proof. All this concludes if r-t+1 is odd, that is we have made correspond any element x ∈ B(K) to a tuple (a_1,…, a_s) as wanted. Now, suppose r-t+1 is even. The case k=t-1 is the only case that has not been treated before. Let x = ( ∏_j=1^r σ_j^a_j) N_Q (ζ_n)/ K (1-ζ_n) with (a_1, … ,a_r) ∈ X_ 1 , r ^k(K). The same proof shows we can make correspond (a_1, …,a_t-1, d_t/2,0,…,0) to x (but in this case, this tuple is not the least among all the tuples that appears in the decomposition of x in the basis of theorem <ref>). Indeed, we have ∏_j_1=0^φ(q_1)/d_1-1⋯∏_j_t-1=0^φ(q_t-1)/d_t-1-1 1-ζ_n^σ_1^a_1 + j_1 d_1⋯σ_t-1^a_t-1 + j_t-1 d_t-1 = ∏_j_1=0^φ(q_1)/d_1-1⋯( ∏_j_t-1 < φ(q_t-1)/2d_t-1 1-ζ_n^σ_1^a_1 + j_1 d_1⋯σ_t-1^a_t-1 + j_t-1 d_t-1) ∏_j_1=0^φ(q_1)/d_1-1⋯( ∏_j_t-1⩾φ(q_t-1)/2d_t-1 1-ζ_n^σ_1^a_1 + j_1 d_1⋯σ_t-1^a_t-1 + j_t-1 d_t-1) and we can see the tuple we are looking for is not involed in the first group of terms as this is already decomposed in the basis of theorem <ref>. To show this is also the case for the second one, we may follow the same steps as in the previous induction proof and pay more attention. Using norm relations for Q(ζ_n/q_t), we are lead to consider the following terms only (what we have done during the previous induction proof allows us not to consider the other terms that appear when using norm relations straight forward because they do not involve the tuple we are looking for) ( 1- ζ_n^σ_1^a_1 + j_1 d_1⋯σ_t-1^i_t-1 + j_t-1 d_t-1σ_t^d_t/2)^-1 ( 1- ζ_n^σ_1^a_1 + j_1 d_1⋯σ_t-1^i_t-1 + j_t-1 d_t-1 J_t )^-1. This first term make the tuple we are looking for appear when j_1=…=j_t-1 = 0 only and it appears with exponant -1. The second term can be treated using norm relations again. We are then lead to consider the following terms (with exponant +1) ( 1- ζ_n^σ_1^a_1 + j_1 d_1⋯σ_t-1^i_t-1 + j_t-1 d_t-1 J_t J_t+1)^+1. By iterating this process, we are then lead to consider the following terms ( 1- ζ_n^σ_1^a_1 + j_1 d_1⋯σ_t-1^i_t-1 + j_t-1 d_t-1 J_t ⋯ J_r)^+1 with exponant +1 because r-t+1 is even. Equation (<ref>) transforms these terms to the following ones ( 1- ζ_n^J_1 σ_1^a_1 + j_1 d_1⋯ J_t-1σ_t-1^i_t-1 + j_t-1 d_t-1σ_t^d_t/2)^+1. These terms involve the tuple we are looking for when j_1 = φ(q_1)/(2d_1),…,j_t-1=φ(q_t-1)/(2d_t-1) and it appears with exponant +1. At the end, the tuple we are looking for is not involed in the decomposition of the following element ∏_j_1=0^φ(q_1)/d_1-1⋯∏_j_t-1=0^φ(q_t-1)/d_t-1-1 1-ζ_n^σ_1^a_1 + j_1 d_1⋯σ_t-1^a_t-1 + j_t-1 d_t-1 . Then, we can show the tuple we are looking for appears with exponant +1 in the decomposition of the following term ∏_j_1=0^φ(q_1)/d_1-1⋯∏_j_t=0^φ(q_t)/d_t-1 1-ζ_n^σ_1^a_1 + j_1 d_1⋯σ_t^d_t/2 + j_t d_t . The same induction proof shows we can make correspond (a_1,…,a_t-1,d_t/2,0,…,0) to x. We may observe this tuple is not involved in the decomposition of any other element of B_ 1 , r (K). Suppose K_1,…,K_r are not real. 
What we have done just before allows us to associate the same tuple (a_1-ε_k φ(q_1)/2, …, a_k - ε_k φ(q_k)/2,0,…,0) to x = ( ∏_j=1^r σ_j^a_j) N_Q (ζ_n)/ K (1-ζ_n) for any (a_1,…,a_r) ∈ X_ 1 , r ^k(K) and 0 < k ⩽ r, and we can also go through this same proof if k=0 and r is even. All this concludes the proof of the theorem when n is odd. Now, suppose n is even and let i_0 be such that p_i_0 = 2. The little difference with the case when n is odd comes from the fact that the Galois group of Q(ζ_n)/K and the norm operator are no longer characterized by the d_i's only. More precisely, the Galois group of Q(ζ_q_i_0)/ K_i_0 is no longer characterized by d_i_0. Moreover, we no longer have d_i_0|φ(q_i_0)/2 if and only if K_i_0 is real. Recall σ_i_0 has been constructed with J_i_0 and σ_i_0. We may now explain how to adapt our previous proof. When all the K_j's are real, we can argue just like before and observe the Galois group of Q(ζ_q_i_0)^+/K_i_0 is generated by σ_i_0^d_i_0. Then, we may write N_L_n/K = ∏_j_1=0^φ(q_1)/d_1 -1⋯∏_j_r=0^φ(q_r)/d_r -1σ_1^j_1 d_1⋯σ_i_0^j_i_0d_i_0⋯σ_r^j_r d_r and write the same proof as before. If K_i_0 is real, we may write things differently. Observe J_i_0 does not lie in (Q(ζ_q_i_0)/ K_i_0). We have N_Q(ζ_n)/K = ∏_j_1=0^φ(q_1)/d_1 -1⋯∏_g ∈(Q(ζ_q_i_0)/ K_i_0)⋯∏_j_r=0^φ(q_r)/d_r -1σ_1^j_1 d_1⋯ g ⋯σ_r^j_r d_r and, having this in mind, we can use the same proof again after writing the elements g as "powers" of σ_i_0. In particular, we must be careful with the following case, that is, when x = (∏_j=1^r σ_j^a_j) N_Q(ζ_n)/K(1-ζ_n) with (a_1,…,a_r) ∈ X_ 1 , r ^i_0. Since J_i_0 does not appear in N_Q(ζ_n)/K, we are sure that we can associate the same tuple as before to such an element x. More precisely, the manipulations we have operated before show that, in order for (a_1,…,a_r) to appear, we have two choices: either it appears directly, or (J_1 a_1,…, J_r a_r) appears, but this second case cannot occur as J_i_0 does not appear in N_Q(ζ_n)/K. If K_i_0 is real but there is some j such that K_j is not real, then we may write N_Q(ζ_n)/K as we just did and make the same proof as in the case of an odd integer n by writing the elements g as powers of σ_i_0. Equation (<ref>) from lemma <ref> shows Gold and Kim's basis has the cardinality it should have to be a basis (this has been stated with no proof in <cit.>). Under the hypotheses and notation of theorem <ref>, suppose K_r is not real and K_1,…,K_r-1 are real; then (K) = Z(K) ^+(K). This results from the fact that we have Z(K) ^+(K) ⊂(K) and ^+(K) is a direct factor of (Q(ζ_n)) as ^+(K) = (K^+) = (K_1 K_2 …K_r^+) (see lemma <ref>). Under the hypotheses and notation of theorem <ref>, the quotient group (K) / (K) is isomorphic to (2)^a for some a ∈ 0 ; [K_ 1 , r _R: Q]. Indeed, all generators that are (mentioned in the previous theorem and) associated to some Ω⊂ 1 ,r such that | Ω_C | ⩾ 1 are already elements of (K_Ω), and all other generators that make use of e_K_Ω have order 2 in the quotient group (K_Ω) / (K_Ω) (see <cit.>, equation 11 and corollary 3). Hence, the quotient group (K) / (K) is an elementary 2-group and we have bounded its rank by the following number ∑_Ω⊂ 1 ,t Ω≠∅∏_i ∈Ω (d_i-1) = ( d_1 ⋯ d_t-1 -1 ) by lemma <ref>. Werl Milàn stated and proved in a special case (see <cit.> remark 4) that we have a = [K: Q] -1 if K_1,…,K_r are real. We may also observe that if K_1,…,K_r are not real, we have (K) = (K), so that the previous theorem gives a basis of (K).
A different basis of (K) has been given in <cit.>, but both bases are quite similar. Under the hypotheses and notation of theorem <ref>, we have [E(K): (K) ] = h^+(K) 2^x for some x ∈N. This results from the previous corollary and the formula Sinnott gave for the index of (K) in E(K) (see <cit.> theorem 4.1, proposition 5.1 and theorem 5.4). Under the hypotheses and notation of theorem <ref>, let A_1,…,A_k be disjoint subsets of 1 ,r. We have a canonical injective map ∏_j=1^k E(K_A_j) / (K_A_j) E (K) / (K) . In particular, if we let h_p^+(K) denote the p-part of the class number of K^+, we have for every odd prime p ∏_j=1^k h_p^+(K_A_j) | h_p^+(K). Let x=x_1 ⋯ x_k∈(K) with x_j ∈E(K_A_j). We have to show x_j ∈(K_A_j). There is an integer N such that we have x_j^N ∈(K_A_j). Modulo roots of unity of K, we have x = ∏_u ∈ B(K) u^x_u x_j^N = ∏_u ∈ B(K_A_j ) u^x_j,u that is, the decomposition of x and the x_j^N's in the basis we gave in the previous theorem. This theorem shows the following group is a direct factor of (K) ∏_j=1^k (K_A_j). Now, we may identify the exponents of x^N so that we get ∀ j ∈ 1 , k , ∀ u ∈ B(K_A_j), Nx_u = x_j,u then we have x_j^N = ( ∏_u ∈ B(K_A_j ) u^x_u)^N hence x_j ∈(K_A_j). It turns out we can prove this last result on class numbers through class field theory. We have found no reference related to the proof of this next proposition so far, which is why we give a proof. Suppose K = K_1 ⋯K_r with K_i ⊂Q(ζ_q_i). Let A_1, …, A_k be a partition of 1 ; r. We have ∏_j=1^k h(K_A_j) | h (K) . Class field theory gives the following commutative diagram (see what is before theorem 5 in the appendix of <cit.>):
ℐ_K / 𝒫_K ∼→ (H_K / K)
Norm ↓ ↓ res
∏_i=1^r ℐ_K_i / 𝒫_K_i ∼→ ∏_i=1^r (H_K_i / K_i)
where H denotes the Hilbert class field, ℐ_L denotes the group of all fractional ideals of L, 𝒫_L denotes the group of all principal ideals of L, the map res is given by the restriction maps that arise with H_K_i⊂H_K, and the map Norm is given by the norm maps ℐ_K / 𝒫_K→ℐ_K_i / 𝒫_K_i . We may prove the map Norm is surjective to conclude. To this aim, we will show res is surjective. First, observe the H_K_i's form a free compositum, that is, we have H_K_i∩( ∏_j ≠ iH_K_j) = Q . Indeed, each prime number ramifies in H_K_i if and only if it ramifies in K_i, and Q has no unramified extension (see <cit.> theorem 2.18). Let x_1,x_2 be such that H_K_A_1 = Q(x_1) and ∏_i > 1H_K_A_i = Q(x_2). Let N be the degree of x_2 over Q and observe x_2 has degree N over H_K_A_1. Indeed, its minimal polynomial over H_K_A_1 has coefficients in H_K_A_1 and it also has coefficients in Q(x_2), because the H_K_i's are Galois since the K_i's themselves are Galois. Therefore, the above proves the minimal polynomial of x_2 over H_K_A_1 is the same as over Q. As a result, Galois theory gives a surjective map (by restriction) (∏_i ⩾ 1H_K_i / K) ↠( H_K_A_1 / K_A_1) ×( (∏_i >1H_K_A_i) / (∏_i >1K_A_i) ) and by induction (∏_i ⩾ 1H_K_i / K) ↠∏_i ⩾ 1( H_K_A_i / K_A_i) . Galois theory also shows the following map is surjective (H_K / K) ↠( ∏_i ⩾ 1H_K_i / K) . We can adapt this last proof to get a similar result with h^+ instead of h.
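As an illustrative special case of this proposition (using classical class number values): take r = 2 with q_1 = 23 and q_2 = 5, and set K_1 = Q(√(-23)) ⊂ Q(ζ_23) and K_2 = Q(√5) ⊂ Q(ζ_5). Since h(Q(√(-23))) = 3 and h(Q(√5)) = 1, the proposition applied to the partition A_1 = {1}, A_2 = {2} gives h(K_1) h(K_2) = 3, so that 3 divides h(Q(√(-23), √5)).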
We may use proposition <ref> and theorem <ref> to obtain a basis of (K_1 ⋯K_t-1 (K_t ⋯K_r)^+). Next lemma <ref> allows us to state corollary <ref> with K^+ replacing K. We can also observe ^+(Q(ζ_n)) is not a direct factor of (Q(ζ_n)) because, if it were a direct factor, we would get (Q(ζ_n)) = Z(Q(ζ_n)) ^+(Q(ζ_n)) by lemma <ref>. Let K be an abelian number field. We have [E^+(K): ^+(K)] = [E(K): (K) ] [(K) E^+(K): Z(K) E^+(K)]/[E(K): Z(K) E^+(K)]. As the index is multiplicative, we get [E(K) : ^+(K)] = [E(K) : (K)] [(K) : Z(K) ^+(K)] [Z(K)^+(K) : ^+(K)] = [E(K) : Z(K)E^+(K)] [Z(K)E^+(K) : E^+(K) ] [E^+(K) : ^+(K)]. It remains to see the second isomorphism theorem gives [Z(K)E^+(K) : E^+(K) ] = |Z(K)|/2 = [Z(K)^+(K) : ^+(K)] and (K) E^+(K) / Z(K) E^+(K) ≃(K) / Z(K) ^+(K). Suppose K ramifies at p. The universal norms Galois module _∞^0 is generated, as a Galois module, by the x's such that x ∈ B_Ω(K) with 1 ∈Ω.
http://arxiv.org/abs/2407.02061v1
20240702084654
LiDAR-based HD Map Localization using Semantic Generalized ICP with Road Marking Detection
[ "Yansong Gong", "Xinglian Zhang", "Jingyi Feng", "Xiao He", "Dan Zhang" ]
cs.RO
[ "cs.RO" ]
LiDAR-based HD Map Localization using Semantic Generalized ICP with Road Marking Detection Yansong Gong, Xinglian Zhang, Jingyi Feng, Xiao He and Dan Zhang^* Yansong Gong (yansong.gong@uisee.com), Xinglian Zhang, Jingyi Feng, Xiao He and Dan Zhang (corresponding author, dan.zhang@uisee.com) are with UISEE Technology (Beijing) Co., Ltd. This version of the manuscript has been accepted by IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2024). July 8, 2024 ========================================================================================================================================================================================================================================================================================================================================================================================================== § ABSTRACT In GPS-denied scenarios, a robust environmental perception and localization system becomes crucial for autonomous driving. In this paper, a LiDAR-based online localization system is developed, incorporating road marking detection and registration on a high-definition (HD) map. Within our system, a road marking detection approach is proposed with real-time performance, in which an adaptive segmentation technique is first introduced to isolate high-reflectance points correlated with road markings, enhancing real-time efficiency. Then, a spatio-temporal probabilistic local map is formed by aggregating historical LiDAR scans, providing a dense point cloud. Finally, a LiDAR bird's-eye view (LiBEV) image is generated, and an instance segmentation network is applied to accurately label the road markings. For road marking registration, a semantic generalized iterative closest point (SG-ICP) algorithm is designed. Linear road markings are modeled as 1-manifolds embedded in 2D space, mitigating the influence of constraints along the linear direction, addressing the under-constrained problem and achieving a higher localization accuracy on HD maps than ICP. Extensive experiments are conducted in real-world scenarios, demonstrating the effectiveness and robustness of our system. Localization, autonomous vehicles, road markings, LiDAR, HD map. § INTRODUCTION Accurate localization is a prerequisite for autonomous driving. In unsheltered open-air environments, the global positioning system (GPS) is the predominant technology for accurate localization. However, the GPS-provided poses become unstable when satellite signals are obstructed by ceilings or viaducts. Therefore, localization through environmental perception using observation sensors, such as cameras and light detection and ranging (LiDAR) sensors, becomes necessary for autonomous vehicles, especially in GPS-denied environments. In autonomous vehicle navigation, the detection of road markings stands out as the most widely employed technique for achieving precise and stable environmental perception. Subsequently, the detected road markings can be associated with semantic elements in high-definition (HD) maps to estimate the vehicle's pose. Cameras have been widely used for road marking detection <cit.>, because camera images contain rich texture information about the environment. However, cameras are limited by their susceptibility to illumination variations and by distortions in bird's-eye view (BEV) lane representation, rendering them less robust for certain applications <cit.>.
In contrast, LiDAR sensors exhibit reduced sensitivity to varying illumination conditions and provide a precise 3D representation of the environment. Meanwhile, road markings can be extracted from road surfaces using LiDAR point clouds, leveraging the characteristic of their high reflectance from the retro-reflective materials <cit.>. However, these LiDAR-based methods face challenges in balancing the need for a denser point cloud with the essential requirement for real-time performance. To address these challenges, a real-time LiDAR-based approach is proposed for road marking detection and registration with HD maps, as visualized in Fig. <ref> (a). For road marking detection, an adaptive segmentation technique is first employed to efficiently isolate points correlated with road markings. Then, a spatio-temporal probabilistic local map is established by aggregating segmented points from historical scans, resulting in a dense point cloud representation of road markings. Finally, a LiDAR bird's-eye view (LiBEV) image is generated by partitioning the local map into grid cells, and a proficiently trained instance segmentation network (CenterMask <cit.> is selected in our implementation) is applied to accurately detect 9 different types of road markings, as shown in Fig. <ref> (b). As for road marking registration, a semantic generalized iterative closest point (SG-ICP) algorithm is specifically designed to robustly align the detected road markings with the HD map by leveraging both their semantic and geometric attributes. In the proposed SG-ICP registration, the linear types of road markings are modeled as 1-manifolds embedded in 2D space, so that constraints along the linear direction have minimal influence on the final solution. The contributions of this paper are summarized as follows. * A LiDAR-based road marking detection approach is proposed for online environmental perception, in which point density and real-time performance are balanced by adaptively segmenting high-reflectance points and updating a spatio-temporal probabilistic local map. Finally, a LiBEV image is generated, and 9 different types of road markings can be detected accurately using an instance segmentation network on the LiBEV image. * A novel road marking registration algorithm is proposed for localization of autonomous vehicles on HD maps, in which linear road markings are represented as 1-manifolds embedded in 2D space. This representation can provide a robust and accurate solution for the registration problem with minimal influence on the under-constrained dimensions. Compared with the widely-used ICP, SG-ICP achieves higher localization accuracy. * Comprehensive experiments are conducted in real-world scenarios, demonstrating the real-time performance and localization accuracy of our system. Furthermore, experimental results indicate the approach's adaptability to various types of LiDAR sensors, as well as its robustness under different vehicle speeds and weather conditions. § RELATED WORK In urban autonomous driving scenarios, the detection of road markings stands out as a crucial method for environmental perception. The road markings, typically painted on asphalt roads using retro-reflective materials, play a vital role in guiding autonomous vehicles. Leveraging the near-infrared wavelength of laser pulses, road markings exhibit higher reflectance compared to unpainted road surfaces <cit.>.
As a result, the LiDAR sensor's ability to capture intensity measurements becomes instrumental in detecting these road markings <cit.>. LiDAR-based road marking detection is extensively applied in the generation of HD maps <cit.>. Since the data for HD map generation is processed offline, consecutive scans are aggregated into a point cloud with a significantly high density of points, capturing detailed information about the surroundings <cit.>. However, processing such high-density points is time-consuming, rendering existing methods applied in HD map generation impractical for online environmental perception and localization. In existing studies focused on real-time perception, the detection of road markings is achieved by thresholding the measured intensities within a single LiDAR scan. A lane marking detection approach was developed by Team AnnieWAY for the DARPA Urban Challenge 2007, which detected the painted lane markings from single scans by thresholding the points with high-reflectance gradients <cit.>. Similarly, the approach proposed in <cit.> detected highly reflective lane markings by employing a polar lane detector grid. In <cit.>, a modified Otsu thresholding technique was employed to segment high-reflectance points obtained from a multilayer LiDAR into distinct categories, such as asphalt and road markings. Due to the sparsity of LiDAR measurements, the single-scan-based approaches face challenges detecting complete road markings, making the detection results susceptible to noise and lacking in robustness. The approach proposed in <cit.> accumulated two consecutive frames of segmented road points, and then applied a fixed intensity threshold to isolate lane marking points. In the subsequent works <cit.>, the approach was extended to detect various types of high-reflectance landmarks, such as road signs and guard rail reflectors, to improve the localization accuracy. However, these multi-scan-based methods utilize a fixed intensity threshold to segment road marking points, which is sensitive to changes in environmental conditions and sensor types. Recently, deep learning approaches have been widely used in road marking detection tasks. The global feature correlation (LLDN-GFC) was introduced in <cit.>, which leveraged the spatial characteristics of lane lines within the point cloud, including sparsity, thinness, and elongation across the entirety of the ground points. This method was further improved in <cit.>, resulting in a substantial reduction in computational cost. Nevertheless, LLDN-GFC focuses solely on extracting lane lines, overlooking other types of road markings. This limitation implies that the extracted lane lines can only provide lateral constraints on the vehicle's poses, potentially contributing to a degeneracy problem during localization. § METHODOLOGY In response to the limitations identified in previous research, we propose a LiDAR-based road marking detection system for real-time environmental perception. Additionally, a novel road marking registration algorithm is introduced to enhance the localization accuracy of autonomous vehicles with HD maps. The flowchart of the proposed system is illustrated in Fig. <ref>. §.§ LiDAR-based Real-Time Road Marking Detection Limited by the sparse distribution of LiDAR points, the stable and robust detection of road markings proves challenging when relying solely on an individual frame of data.
To overcome this limitation, successive LiDAR scans are aggregated into a local map, generating a denser point cloud that is conducive to effective road marking detection. In consideration of online requirements and high-reflectance road markings, the aggregation process can selectively extract points with higher intensities from the ground plane. This approach ensures the construction of a local map optimized for road marking detection, striking a balance between computational efficiency and information richness. §.§.§ High-Reflectance Point Segmentation This procedure aims to adaptively identify points with high reflectance, which are often correlated with road markings painted using retro-reflective materials. To ensure adaptability across diverse sensors and scenarios, we introduce an adaptive segmentation approach designed to isolate high-reflectance points. This enhancement contributes to a more robust system overall. For the efficiency of the system, only ground points are considered, which are extracted from the LiDAR scan utilizing the methodology detailed in <cit.>. This approach segments ground points based on height information and subsequently extracts them by partially fitting the ground plane. Then, a segmentation coefficient ρ_k is introduced to distinguish high-reflectance points from the ground points in the k-th scan. Specifically, points with intensities below ρ_k are excluded from the scan. Notably, the segmentation coefficient ρ_k is not predetermined manually. Instead, it is dynamically estimated and continuously updated using a Kalman filter. The state of the Kalman filter evolves according to the state-transition model ρ_k=ρ_k-1+w_k, where w_k∼𝒩(0, Q_k) is the process noise. The measurement model is given by z_k=ρ_k+v_k, where v_k∼𝒩(0, R_k) is the measurement noise. In each LiDAR scan, the mean μ_k and standard deviation σ_k of the intensities of the ground points are calculated. The measurement for the innovation computation is then determined as z_k = μ_k+2σ_k. This adaptive approach relies on two assumptions. Firstly, it presumes that nearby consecutive roads should possess similar segmentation coefficients owing to the consistency in ground materials. Secondly, it assumes that the majority of LiDAR points lie on the common asphalt surface, while road marking points exhibit statistically higher intensities. These two assumptions are satisfied in most urban road environments, ensuring the effectiveness of the approach. Furthermore, segmenting these high-reflectance points is pivotal for optimizing the efficiency, strategically mitigating the computational load by excluding a significant volume of data points unrelated to road markings.
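To make the update concrete, the following minimal sketch implements the scalar Kalman filter described above. The class and variable names are illustrative; Q and R are treated here as fixed noise variances, initialized to the values 0.1 and 2.0 reported in the experimental setup, and σ_k is taken as the standard deviation of the ground-point intensities.

```python
import numpy as np

class AdaptiveThreshold:
    """Scalar Kalman filter tracking the segmentation coefficient rho_k.

    Model: rho_k = rho_{k-1} + w_k with w_k ~ N(0, Q), observed through
    z_k = rho_k + v_k with v_k ~ N(0, R), where z_k = mu_k + 2 * sigma_k
    is computed from the ground-point intensities of the k-th scan.
    """

    def __init__(self, rho0: float, P0: float = 1.0, Q: float = 0.1, R: float = 2.0):
        self.rho, self.P, self.Q, self.R = rho0, P0, Q, R

    def update(self, intensities: np.ndarray) -> float:
        z = intensities.mean() + 2.0 * intensities.std()  # measurement z_k
        P_pred = self.P + self.Q           # predict (identity transition)
        K = P_pred / (P_pred + self.R)     # Kalman gain
        self.rho += K * (z - self.rho)     # correct the threshold estimate
        self.P = (1.0 - K) * P_pred
        return self.rho

# Usage: keep only high-reflectance ground points of the current scan.
# mask = intensities >= filt.update(intensities)
```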
§.§.§ Probabilistic Local Map Update A local map is constructed through the aggregation of spatio-temporally successive LiDAR scans using odometry, incorporating high-reflectance points to generate a dense point cloud for road marking detection. However, with the accumulation of scan data, the volume of information grows substantially, leading to an increasing computational burden. To achieve real-time performance, a novel approach for probabilistically updating the local map is introduced. This approach employs a probabilistic discarding strategy, wherein each point in the map is selectively removed based on a calculated probability value. The probability assigned to the i-th point in the local map, denoted as p_i, is computed by p_i = 1/(1 + (| k - k_i | / η)^2), where k denotes the index of the current frame, and k_i represents the frame from which the i-th point originates. η is a manually-set parameter that determines the probability of discarding old points. As η increases, old points are more likely to be retained, resulting in a higher density of points in the probabilistic local map. As indicated by (<ref>), higher retaining probability values are assigned to points newly observed by the LiDAR sensor. This strategy effectively ensures the spatio-temporal consistency of the local map and alleviates the impact of accumulated errors over time. Moreover, when contrasted with the aggregation method employing scans within a fixed window, the proposed approach ensures a more seamless transition in the local map data, thereby yielding higher-quality LiBEV images.
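A single map-update step under this strategy can be sketched as follows. The function signature is illustrative; it assumes the scans have already been registered into a common frame by the odometry, and the default η matches the empirical value reported in the experimental setup.

```python
import numpy as np

def update_local_map(points, frame_ids, new_points, k, eta=50.0, rng=None):
    """One probabilistic update of the local map.

    Each stored point from frame k_i is retained with probability
    p_i = 1 / (1 + ((k - k_i) / eta)^2), so older points are more likely
    to be discarded; the freshly segmented points of frame k are appended.
    """
    rng = rng or np.random.default_rng()
    p_keep = 1.0 / (1.0 + ((k - frame_ids) / eta) ** 2)  # squaring handles the |.|
    keep = rng.random(len(points)) < p_keep
    points = np.vstack([points[keep], new_points])
    frame_ids = np.concatenate([frame_ids[keep],
                                np.full(len(new_points), k)])
    return points, frame_ids
```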
§.§.§ LiBEV Image Generation The generation of the LiBEV image involves dividing the local map into grid cells on the ground plane, where each cell corresponds to a pixel in the LiBEV image. Within each cell, the RGB value of the corresponding pixel is determined by mapping the maximum intensity value among the enclosed points using a color map. Our implementation leverages a proficient instance segmentation network, specifically CenterMask <cit.>, to accurately segment semantic road markings from the generated LiBEV images. Subsequently, points located within the grid cells corresponding to the segmented pixels are extracted from the local map. The extraction yields a semantic point cloud wherein each point is labeled with a specific road marking category. Notably, our approach is designed to accommodate the segmentation of up to 9 types of road markings, including dashed lanes, solid lanes, stop lines, texts, arrows, diamond signs, triangle signs, curbs and crosswalks, as shown in Fig. <ref> (b). The incorporation of diverse semantic road markings, in contrast to approaches solely focused on lane lines, significantly enhances the robustness of map matching-based pose estimation. In addition, since annotating image semantic segmentation is faster and more convenient than annotating point clouds, the proposed approach converts point clouds into images, which is more conducive to deployment in practical applications. §.§ SG-ICP-based Road Marking Registration with HD Map After road marking detection, the detected road markings can be associated with their corresponding elements in the HD map that share the same semantic label. Finally, road marking registration is employed to estimate the pose of the vehicle in 2D space. In this subsection, the SG-ICP algorithm is introduced for robust registration of detected road markings from LiDAR scans with semantic elements in the HD map. In our proposed SG-ICP, detected road markings are divided into three categories: lines, line segments and others. Solid lanes and curbs exhibit a linear distribution in their point clouds and lack distinct endpoints, and thus they are classified as lines. Dashed lanes, sidewalks, and stop lines also have a linear distribution but possess endpoints, leading to their classification as line segments. Texts, arrows, diamond signs and triangle signs do not have a linear point cloud distribution, and thus they are classified as others. For lines, the lack of endpoints leads to the complete loss of constraints along the linear direction of these markings. For line segments, endpoints can provide constraints along the linear direction. However, due to inaccurate endpoint estimation, registration between endpoints still leads to significant localization errors along the direction of the line segment. Consequently, for linear markings, constraints along their linear direction need to have minimal influence on the pose estimation, mitigating the effect of under-constraint issues in the overall pose estimation process. As for others, their point clouds are not linearly distributed, thus often providing sufficient constraints on the pose estimation. In our algorithm, the registration of the three different categories of markings is organized into a unified representation using the objective function of generalized ICP (GICP). The GICP algorithm incorporates a probabilistic model into the optimization procedure, as defined by T^*= argmin_T( ∑_i=1^n (q_mi-T·q_Li)^T (C_mi+RC_LiR^T)^-1 (q_mi-T·q_Li) ), where q_mi and q_Li represent a pair of corresponding points, belonging respectively to the HD map element and the labeled point cloud. Their correspondences are established through the nearest neighbor search strategy in the ICP algorithm. C_mi and C_Li represent the covariance matrices of points from the map and the labeled point cloud, respectively, which are appropriately constructed in our semantic GICP (SG-ICP) to mitigate the influence of the under-constrained direction. In our SG-ICP, the probabilistic model is specifically designed by exploiting the semantic and geometric attributes inherent in semantic road markings. For the points lying on the i-th detected road marking instance, the covariance matrix C̃_Li is estimated by C̃_Li = (1/(n_i - 1)) ∑_j=1^n_i (p_L(i, j) - p̃_Li)· (p_L(i, j) - p̃_Li)^T, where p_L(i, j) represents the j-th point of the i-th road marking instance and p̃_Li represents the centroid of the points. Then, the singular value decomposition (SVD) is performed on C̃_Li: C̃_Li=U_iΣ̃_iV_i^T, Σ̃_i= diag(σ_1^2, σ_2^2), where σ_1 and σ_2 satisfy σ_1≥σ_2. Then, a matrix Σ_i= diag(1,ϵ) is constructed, with ϵ satisfying ϵ= 1e-6, if the marking is classified as lines; 1e-1, if the marking is classified as line segments; 1, if the marking is classified as others. The three categories of road markings have distinct values of ϵ, representing the different constraints along the line direction. A value of ϵ closer to 1.0 indicates a stronger constraint along the line direction. The final covariance matrix corresponding to the i-th road marking can be calculated by C_Li=U_iΣ_iV_i^T. The i-th semantic element in the HD map is represented as {v_mi, l_mi, P_mi}, where v_mi, l_mi and P_mi = {p_m(i, j), j=1, 2, ⋯, n_mi} denote the main direction, the semantic label and the point set of the map element, respectively. The rotation that rotates the basis vector e_1=[1,0]^T to the direction v_mi can be calculated by R_vi=cosθ·I +(1-cosθ)rr^T +sinθ·[r]_×, where r=[e_1]_×·v_mi, θ=arccos(e_1^Tv_mi). The symbol [r]_× denotes the skew-symmetric matrix associated with the vector r. The covariance matrix corresponding to the i-th semantic element is calculated by C_mi=R_viΣ_iR_vi^T. Finally, associations can be established between the semantic point cloud and the closest points of the map elements sharing the same semantic label. Meanwhile, their corresponding covariance matrices calculated in (<ref>) and (<ref>) are then substituted into the objective function (<ref>) to initiate the optimization and iteration process.
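The covariance construction admits a compact sketch, shown below. Two simplifications relative to the notation above are assumed: since the sample covariance is symmetric, its SVD factors satisfy U_i = V_i, so the symmetric forms U_iΣ_iU_i^T and R_viΣ_iR_vi^T are used; and in 2D the rotation taking e_1 onto a unit direction v = [v_x, v_y]^T reduces to [[v_x, -v_y], [v_y, v_x]], avoiding the 3D Rodrigues form. Function names are illustrative.

```python
import numpy as np

EPS = {"lines": 1e-6, "line_segments": 1e-1, "others": 1.0}

def source_covariance(pts: np.ndarray, category: str) -> np.ndarray:
    """C_Li for one detected instance: SVD of the sample covariance with the
    spectrum replaced by diag(1, eps), so the covariance stays large along
    the dominant (linear) axis and small across it."""
    centered = pts - pts.mean(axis=0)
    C_tilde = centered.T @ centered / (len(pts) - 1)   # 2x2 sample covariance
    U, _, _ = np.linalg.svd(C_tilde)                   # columns ordered s1 >= s2
    return U @ np.diag([1.0, EPS[category]]) @ U.T

def map_covariance(v: np.ndarray, category: str) -> np.ndarray:
    """C_mi for a map element with unit main direction v."""
    R = np.array([[v[0], -v[1]], [v[1], v[0]]])        # rotates e_1 onto v
    return R @ np.diag([1.0, EPS[category]]) @ R.T

def residual_weight(C_m: np.ndarray, C_L: np.ndarray, R_pose: np.ndarray) -> np.ndarray:
    """Per-correspondence information matrix (C_mi + R C_Li R^T)^-1 of the
    GICP objective, with R_pose the current rotation estimate."""
    return np.linalg.inv(C_m + R_pose @ C_L @ R_pose.T)
```

With ε = 1e-6 for lines, the weight along the marking's direction is negligible compared to the across-track weight, which is precisely how the under-constrained direction is prevented from dominating the pose estimate.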
The probabilistic model from SG-ICP characterizes both the semantic and geometric attributes for road marking registration, which improves the accuracy of pose estimation. § EXPERIMENTAL EVALUATION In this section, extensive experiments are conducted using data collected from diverse scenarios and vehicular platforms, demonstrating the accuracy and robustness of the proposed approach across different scenes and types of LiDAR sensors. §.§ Experimental Setup All experiments are conducted on the NVIDIA Jetson AGX Xavier. The acquisition frequency of LiDAR data is set to 10 Hz. The global localization results of vehicles are recorded using Real-Time Kinematic (RTK) and temporally synchronized with the LiDAR data. These RTK results are used as ground truths. The experimental scenarios and the corresponding HD maps are shown in Fig. <ref>. Fangshan1 and Fangshan2 represent two open urban scenarios in Beijing Fangshan, which cover a 0.30 km × 0.25 km area and span a length of 2.0 km, respectively. Jiashan depicts an internal road measuring 0.20 km located in a test field in Zhejiang Jiashan. Airport represents an internal road spanning a length of 4.0 km located within an airport. For the parameters of our approach, the initial variances of the state-transition model and the measurement model were experimentally set to 0.1 and 2.0 in the Kalman filter, respectively. The parameter η, which determines the discarding probability of local map points, was set to 50.0 empirically. §.§ Evaluation on Road Marking Detection In this subsection, an experiment is conducted to assess the performance of our road marking detection approach using precision-recall metrics. To ensure a comprehensive evaluation, 80% of the manually annotated LiBEV data is randomly selected for training, while the remaining 20% is reserved for testing. The manual annotations serve as the ground truths against which we evaluate the precision and recall of our approach in detecting road markings. A true positive sample is identified when the Intersection over Union (IoU) between the detected instance and its corresponding annotated instance exceeds 0.5 and both instances share the same semantic label. Conversely, a false positive sample represents a detection result for which no corresponding instance with the same semantic label and an IoU greater than 0.5 could be found in the ground truths. Meanwhile, a false negative sample indicates that an instance present in the ground truth is not successfully detected by our approach. The precision, recall and F1-score for all types of road markings supported by the approach are presented in TABLE <ref>. The proposed approach successfully detects 9 distinct types of road markings, assigning semantic labels to each point in the LiDAR data, as visually depicted in Fig. <ref> (b). The experimental results demonstrate the effectiveness of our approach in successfully detecting common road elements, achieving high precision and recall rates. Notably, certain elements such as curbs and crosswalks exhibit a slight decrease in precision, attributed to their visual similarity to lane markings in LiBEV images. However, the subsequent HD map registration steps effectively mitigate the impact of these false positives on localization. Moreover, the proposed detection approach is highly efficient, meeting real-time perception requirements for vehicles, which is detailed in Section <ref>.
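The matching protocol just described can be stated compactly as follows; representing instance masks as Python sets of pixel coordinates is an illustrative simplification.

```python
def score_detections(dets, gts, iou_thr=0.5):
    """Greedy matching: a detection is a true positive if an unmatched
    annotation with the same semantic label has IoU above the threshold."""
    matched, tp = set(), 0
    for pix, label in dets:                       # (pixel_set, label) pairs
        best, best_iou = None, iou_thr
        for j, (gt_pix, gt_label) in enumerate(gts):
            if j in matched or gt_label != label:
                continue
            union = len(pix | gt_pix)
            iou = len(pix & gt_pix) / union if union else 0.0
            if iou > best_iou:
                best, best_iou = j, iou
        if best is not None:
            matched.add(best)
            tp += 1
    fp, fn = len(dets) - tp, len(gts) - len(matched)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1
```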
§.§ Evaluation on Localization The SG-ICP algorithm proposed in this paper is assessed based on lateral, longitudinal, and yaw errors. The evaluation encompasses eight experimental sequences, spanning four scenarios and employing seven different LiDAR configurations, demonstrating the flexibility of the proposed approach. The widely-used ICP algorithm is chosen as the baseline for evaluation, and the comparative results are presented in TABLE <ref>. As indicated in the table, the SG-ICP algorithm outperforms the ICP-based approach in most sequences. Notably, SG-ICP shows clear superiority in terms of lateral and yaw accuracy, due to the emphasis placed on the sufficiently constrained directions during the SG-ICP calculation process. Fig. <ref> depicts the trajectories estimated by the SG-ICP-based and ICP-based approaches, respectively, in comparison with the ground truths acquired through RTK. It is worth noting that the substantial localization errors of SG-ICP and ICP are marked with purple and red lines, respectively, where estimated distance errors exceed 2.0 m or yaw errors surpass 5.0°. It is evident from Fig. <ref> that SG-ICP demonstrates significantly fewer occurrences of substantial localization errors compared to ICP across all sequences. In conclusion, the proposed approach achieves centimeter-level lateral localization accuracy in a variety of environmental scenarios and with different types of LiDAR sensors. The tested sensors encompass not only traditional mechanical LiDARs like the VLP-32C, Hesai-Pandar64, and Hesai-XT16 but also solid-state LiDARs such as the HAP. The comprehensive experiments illustrate the robustness and adaptability of the approach across diverse scenarios and sensor types. In addition, it is worth noting, as indicated in TABLE <ref>, that the longitudinal error is slightly larger than the lateral error. In urban road scenarios, the majority of road markings exhibit a linear shape along the longitudinal direction. Consequently, the stronger influence of lateral constraints, compared to longitudinal constraints, contributes to a more accurate and precise lateral localization outcome. Nevertheless, our approach ensures that the longitudinal error remains below 0.20 m, thereby ensuring its effectiveness in autonomous driving applications. §.§ Evaluation on Runtime During the experiments conducted on the eight sequences, the runtime for each sub-step of our approach is detailed in TABLE <ref>. The corresponding box plot depicting the statistical results can be observed in Fig. <ref>. It is worth noting that the runtime of the detection sub-step is divided into CPU time and GPU time. CPU time refers to the time consumed by the steps processed by the CPU, including high-reflectance point segmentation, probabilistic local map update, and LiBEV image generation. GPU time refers to the inference time of the instance segmentation of the LiBEV image. The registration sub-step is processed only by the CPU. It can be seen that, when utilizing the onboard Xavier processor, the average and maximum runtime of the overall approach consistently remain below 50 ms and 200 ms, respectively, across various scenes and types of LiDAR sensors. Consequently, the efficiency of the proposed system proves sufficient for real-time perception and localization in autonomous vehicle applications. Moreover, it is worth highlighting that the runtime on the S1 sequence is only 8.35 ms longer than that on the S2 sequence, despite the fact that the data quantity of S1 is twice that of S2 (as indicated in the LiDAR type column in TABLE <ref>).
This observation demonstrates that the runtime does not exhibit a linear increase with the quantity of the point cloud data, because a substantial reduction in the quantity of the aggregated local map points is achieved through the probabilistic discarding strategy. §.§ Evaluation on Robustness To demonstrate the robustness of our approach, we evaluate the localization errors at different vehicle speeds, as outlined in TABLE <ref>. In particular, the vehicle was driven at speeds of 20 km/h, 40 km/h, and 60 km/h in the Fangshan1 scenario using a Hesai-Pandar64 LiDAR. The obtained results were then compared against the ground truth provided by RTK. As evident from TABLE <ref>, there is a slight increase in the localization error with higher driving speeds. This can be attributed to the fact that, as the driving speed increases, the point cloud data captured by the LiDAR sensors is more prone to motion distortion. Despite the slight increase in localization error with higher driving speeds, the proposed approach consistently maintains a relatively high level of localization accuracy. This demonstrates the robustness of the approach across varying vehicle speeds. Regarding real-time performance, as indicated in TABLE <ref>, the overall system runtime is minimally affected by increases in driving speed. This further highlights the robustness of the system in handling variations in vehicle speed. To illustrate the robustness of our approach under varying weather conditions, experiments were conducted in different settings. As depicted in Fig. <ref>, the intensity distribution of LiDAR point clouds on dry and wet road surfaces (on sunny and rainy days) typically exhibits significant differences. As a result, rainy weather poses considerable challenges to intensity-based road marking extraction, particularly for methods relying on fixed intensity thresholds. The LiBEV images generated under both sunny and rainy weather conditions are depicted in Fig. <ref>. It is evident that the proposed adaptive threshold-based approach consistently provides stable and accurate segmentation results, even in the presence of significantly different intensity distributions caused by varying weather conditions. TABLE <ref> presents a comparison of localization errors under both dry and wet ground conditions in the Fangshan1 scenario, employing a Hesai-Pandar64 LiDAR. Although the additional noise in LiBEV images causes an increase in localization error when driving on wet ground, the approach still keeps the average lateral error within 0.10 m and the longitudinal error within 0.20 m. These results demonstrate the robustness of our approach in addressing challenging weather conditions. § CONCLUSION In this paper, we introduce a LiDAR-based online environmental perception and localization system with high efficiency and robustness. The proposed road marking detection approach employs a novel adaptive segmentation technique to enhance efficiency and utilizes a spatio-temporal probabilistic local map to ensure the density of points. For road marking registration, an SG-ICP algorithm is designed, modeling linear road markings as 1-manifolds embedded in 2D space. Our approach minimizes the influence of constraints along the linear direction of markings to address the under-constrained problem and thus improve the localization accuracy.
Extensive experiments conducted in real-world urban environments demonstrate the effectiveness and robustness of the proposed system, showcasing its potential for reliable online environmental perception and localization. However, our approach cannot be applied to roads without road markings on the ground surface, due to the lack of high-reflectance points. In future work, we will explore the effective utilization of above-ground information to improve the robustness of localization.
http://arxiv.org/abs/2407.02544v1
20240702100258
Hoffman colorings
[ "Thijs van Veluw" ]
math.CO
[ "math.CO" ]
Hoffman colorings Thijs van Veluw Radboud University Nijmegen Master Thesis in Mathematics Supervised by: Aida Abiad (Eindhoven University of Technology) Wieb Bosma (Radboud University Nijmegen) July 1st 2024 ==== § ABSTRACT We study equality in the Hoffman bound for the chromatic number and Hoffman colorings in regular and irregular graphs. We investigate the connection between Hoffman colorability and several graph operations, of which the tensor product is especially interesting in this context. We then introduce the Decomposition Theorem revealing structural properties that Hoffman colorings must obey. Using the Decomposition Theorem we are able to completely classify Hoffman colorability of cone graphs and line graphs. We also prove a partial converse, the Composition Theorem, allowing us to find various new infinite families of Hoffman colorable graphs, many of which are irregular. Lastly we introduce a new parameterization and type system for strongly regular graphs, that show connections between Hoffman colorability, spreadability, pseudo-geometricity and unique vector colorability.
http://arxiv.org/abs/2407.02361v1
20240702152733
GCF: Graph Convolutional Networks for Facial Expression Recognition
[ "Hozaifa Kassab", "Mohamed Bahaa", "Ali Hamdi" ]
cs.CV
[ "cs.CV" ]
GCF: Graph Convolutional Networks for Facial Expression Recognition Hozaifa Kassab, Mohamed Bahaa, Ali Hamdi MSA University Giza, Egypt {hozaifa.fadl, mohamed.bahaa4, ahamdi}@msa.edu.eg July 8, 2024 ================================================================================================================================ § ABSTRACT Facial Expression Recognition (FER) is vital for understanding interpersonal communication. However, existing classification methods often face challenges such as vulnerability to noise, imbalanced datasets, overfitting, and generalization issues. In this paper, we propose GCF, a novel approach that utilizes Graph Convolutional Networks for FER. GCF integrates Convolutional Neural Networks (CNNs) for feature extraction, using either custom architectures or pretrained models. The extracted visual features are then represented on a graph, enhancing local CNN features with global features via a Graph Convolutional Neural Network layer. We evaluate GCF on benchmark datasets including CK+, JAFFE, and FERG. The results show that GCF significantly improves performance over state-of-the-art methods. For example, GCF enhances the accuracy of ResNet18 from 92% to 98% on CK+, from 66% to 89% on JAFFE, and from 94% to 100% on FERG. Similarly, GCF improves the accuracy of VGG16 from 89% to 97% on CK+, from 72% to 92% on JAFFE, and from 96% to 99.49% on FERG. We provide a comprehensive analysis of our approach, demonstrating its effectiveness in capturing nuanced facial expressions. By integrating graph convolutions with CNNs, GCF significantly advances FER, offering improved accuracy and robustness in real-world applications. The code is available at: <https://github.com/4qlaa7/GCF> Graph Convolutional Networks, FER, CNN § INTRODUCTION Facial Expression Recognition (FER) is a key area in computational psychology and human-computer interaction, focusing on interpreting the emotional language conveyed through facial cues. FER involves the automated detection and classification of human emotions, encompassing a range of feelings such as anger, disgust, fear, happiness, sadness, and surprise. This field has significant applications in areas such as social robotics, healthcare <cit.>, and security <cit.>. However, existing FER classification methods face challenges such as noise from environmental factors, imbalanced datasets, and issues with overfitting and generalization <cit.>. Despite these obstacles, efforts to improve FER continue, motivated by the potential to enhance human-machine interaction, support assistive technologies, and deepen our understanding of human behavior and cognition. Deep learning has significantly advanced various domains of computer vision <cit.>, including FER <cit.>. CNNs have become the standard for image processing tasks due to their proficiency in capturing spatial hierarchies of features in images. By automating the feature extraction process, CNNs eliminate the need for manual feature engineering and allow models to learn directly from vast amounts of data, improving scalability and performance. However, CNN-based FER requires extensive labeled datasets and substantial computational power, which can be limiting when resources are constrained or when data is scarce. Hybrid models, integrating CNNs with other machine learning or neural network frameworks, have enhanced the robustness and efficiency of FER.
For example, combining CNNs with Recurrent Neural Networks (RNNs) has proven effective for video-based FER by capturing dynamic changes in expressions over time. Additionally, using Support Vector Machines (SVMs) as classifiers on features extracted by CNNs has improved decision boundaries for better classification accuracy. We present GCF, a pioneering methodology that integrates CNNs with Graph Convolutional Networks (GCNs) to enhance FER. Our approach leverages the robust feature extraction capabilities of CNNs and the relational modeling strengths of GCNs to analyze distinct facial regions. This synergy provides a comprehensive understanding of both local nuances and global context in facial expressions, enabling nuanced analysis of emotional states. By exploiting the structural information in GCNs, our model navigates the complexities of facial expressions, surpassing the limitations of traditional FER methodologies <cit.>. We evaluate our CNN-GCN model using benchmark FER datasets, including CK+, FERG, and JAFFE, which cover diverse expression scenarios. Our results show that the proposed model exceeds state-of-the-art performance, outperforming CNN-only and other hybrid systems. The paper is structured as follows: We first review related works in FER, then describe our framework, followed by dataset descriptions, experiments and results analysis, discussion, and conclusions with implications for future research. § RELATED WORK This section reviews the progression of methodologies employed in FER, highlighting the evolution from CNNs to hybrid models integrating CNNs with other computational techniques, and highlighting the need for graph-based methods. CNNs have been widely employed across various computer vision tasks, including Facial Expression Recognition (FER). Multiple studies demonstrated the robustness of CNNs to changes in face location and scale variations, outperforming Multilayer Perceptrons (MLPs) particularly in scenarios involving unseen face poses <cit.>. Another study <cit.> utilized CNNs to tackle challenges such as subject independence, as well as translation, rotation, and scale invariance in facial expression recognition. Early applications of CNNs in FER often involved training models from scratch. For instance, a custom CNN architecture significantly outperformed traditional machine learning approaches by automatically learning features directly from facial images <cit.>. The advent of pre-trained models brought about a paradigm shift, with researchers leveraging models trained on large, diverse datasets to improve FER accuracy. The work in <cit.> demonstrated that features extracted from deep layers of pre-trained CNNs could be repurposed for various vision tasks, including emotion recognition. This approach not only improved the accuracy but also reduced the training time and resource consumption. A VGG model pre-trained on ImageNet has also been employed for FER, achieving robust performance across multiple datasets. Incorporating Graph Convolutional Neural Networks (GCNs) alongside Convolutional Neural Networks (CNNs) offers a promising avenue for advancing Facial Expression Recognition (FER). By fusing CNNs' robust feature extraction capabilities with GCNs' capacity to model relational data among facial regions, this approach enables a more comprehensive understanding of both local nuances and global context within facial expressions.
The synergy between CNNs and GCNs addresses the subtleties and complexities inherent in facial expressions, leading to improved FER performance compared to traditional methods <cit.>, <cit.>. Recent advancements in Facial Expression Recognition (FER) have significantly benefited from hybrid models that integrate Convolutional Neural Networks (CNNs) with Graph Convolutional Networks (GCNs), effectively leveraging the strengths of both methodologies. The MER-GCN framework enhances micro-expression recognition by modeling relational dependencies among facial action units (AUs) using GCNs, with spatial-temporal features extracted by a 3D ConvNet and processed through a stacked GCN, resulting in notable improvements in recognizing subtle and transient expressions <cit.>. However, the reliance on co-occurrence probabilities to form the adjacency matrix can be limited by the quality and representativeness of the training data, potentially leading to biased or inaccurate relationships being modeled. Another approach constructs an undirected graph from facial images using fixed and random points, where GCNs handle non-Euclidean data structures, capturing complex relationships between facial features to enhance recognition accuracy <cit.>. Despite its innovative approach, the method's effectiveness heavily depends on the initial graph construction, which may not always capture the most relevant features for FER. Additionally, a novel method for video-based FER employs a GCN to represent each video frame as a node, using a dynamic adjacency matrix to model dependencies and refine features through a Bidirectional Long Short-Term Memory (BiLSTM) network, thus capturing both spatial and temporal variations effectively <cit.>. While this approach addresses dynamic expression variations well, it can be computationally intensive, requiring substantial resources for real-time applications. These models demonstrate the powerful synergy of CNNs and GCNs in advancing FER, but they also highlight the challenges in balancing complexity, data dependency, and computational efficiency. § THE PROPOSED FRAMEWORK In this section, we provide a comprehensive overview of our proposed methodology for facial expression recognition using the GCF model, inspired by <cit.> <cit.> <cit.>. §.§ CNN for Feature Extraction Initially, a CNN is employed to extract relevant features from facial images, utilizing either pre-trained models like VGG or ResNet, or a custom-designed architecture. The CNN processes input images X to produce a feature representation F_CNN(X) that captures various facial characteristics, such as edges, textures, and patterns associated with different expressions. The output of the CNN, F_CNN(X), is then sliced into nine vectors: F_CNN(X) = [f_1, f_2, …, f_9] where f_i represents the feature vector corresponding to the i-th facial region. Pre-trained CNNs or other models are favoured for feature extraction in machine learning and computer vision due to their ability to learn rich hierarchical representations. These models, trained on large datasets like ImageNet, allow practitioners to benefit from learned features without needing to train entire models from scratch, significantly reducing computational costs and training time. Fine-tuning these models on task-specific datasets enhances their effectiveness, tailoring representations to specific tasks. This strategy boosts performance, improves generalization, and accelerates the development of state-of-the-art methodologies.
§.§ Graph Convolutional Neural Network (GCN) Each of the nine vectors f_i is fed into a node within a Graph Convolutional Neural Network (GCN). A GCN is a type of neural network tailored for analyzing and processing graph-structured data. Its mechanism revolves around adapting convolutional operations from traditional image-based Convolutional Neural Networks (CNNs) to operate directly on graph structures. Let A be the adjacency matrix representing the graph structure, and H^(0) = [f_1, f_2, …, f_9] be the initial node features. A single GCN layer can be represented as: H^(l+1) = σ( D^-1/2 A D^-1/2 H^(l) W^(l)) where: * H^(l) is the feature matrix at layer l, * W^(l) is the trainable weight matrix at layer l, * D is the degree matrix of A, * σ is an activation function, typically ReLU. This aggregation process enables the network to extract hierarchical representations of the graph, where deeper layers capture increasingly abstract features. §.§ Feature Concatenation and Classification After processing through the GCN, the output features from the nine nodes are aggregated into a single feature vector H_GCN. This vector is then concatenated with the initial output feature vector from the CNN before slicing, denoted F_CNN-initial. The concatenated feature vector is: F_concat = [F_CNN-initial, H_GCN] This concatenation creates a comprehensive feature representation that includes both detailed local features and broader contextual information. The final feature vector F_concat is then fed into a classifier (e.g., a fully connected layer followed by a softmax function) to predict the facial expression: y = softmax(W_fc F_concat + b_fc) where W_fc and b_fc are the weights and biases of the fully connected layer. By leveraging the strengths of both CNN and GCN, our proposed framework efficiently captures intricate facial features and their relationships, enhancing the accuracy and robustness of facial expression recognition systems across various datasets. The effectiveness of our proposed methodology, which combines feature extraction via a CNN with subsequent processing by a Graph Convolutional Network (GCN), represents a significant advancement in facial expression recognition. By first utilizing a CNN, either custom-designed or pre-trained, our approach efficiently captures intricate facial features at multiple scales. These features are then fed into the GCN, which excels in handling structured data, allowing for the modeling of relationships between various facial regions through its graph-based nature. This dual-stage processing not only enhances the representational power of the system but also significantly improves the accuracy and robustness of expression recognition across diverse datasets. Through rigorous experimentation on benchmark datasets such as CK+, JAFFE, and FERG, our proposed model demonstrates superior performance, achieving higher accuracy rates compared to state-of-the-art methods. The results validate the efficacy of integrating CNNs with GCNs, offering a promising direction for future research in the domain of facial expression recognition.
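A compact sketch of this pipeline is given below. The pooling of the backbone feature map to a 3×3 grid of region vectors, the bidirectional grid adjacency with self-loops (so that the degree normalization is well defined), and the flattening of the nine node outputs into H_GCN are illustrative assumptions; module and helper names are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def grid_adjacency(n: int = 3) -> torch.Tensor:
    """Bidirectional 4-neighbour adjacency of an n x n grid, with self-loops."""
    A = torch.eye(n * n)
    for r in range(n):
        for c in range(n):
            i = r * n + c
            for dr, dc in [(0, 1), (1, 0)]:
                if r + dr < n and c + dc < n:
                    j = (r + dr) * n + (c + dc)
                    A[i, j] = A[j, i] = 1.0
    return A

class GCFHead(nn.Module):
    """Slices CNN features into 9 region nodes, applies one GCN layer,
    concatenates with the global CNN feature, and classifies."""

    def __init__(self, d: int, num_classes: int, adj: torch.Tensor):
        super().__init__()
        d_inv = torch.diag(adj.sum(1).pow(-0.5))
        self.register_buffer("A_hat", d_inv @ adj @ d_inv)  # D^-1/2 A D^-1/2
        self.W = nn.Linear(d, d, bias=False)                # GCN weight W^(l)
        self.fc = nn.Linear(9 * d + d, num_classes)         # classifier on F_concat

    def forward(self, feat_map: torch.Tensor) -> torch.Tensor:
        # feat_map: (B, d, H, W) from the CNN backbone.
        nodes = F.adaptive_avg_pool2d(feat_map, 3).flatten(2).transpose(1, 2)
        h = torch.relu(self.A_hat @ self.W(nodes))          # (B, 9, d) GCN layer
        f_global = feat_map.mean(dim=(2, 3))                # F_CNN-initial (pooled)
        f_concat = torch.cat([h.flatten(1), f_global], 1)   # [F_CNN-initial, H_GCN]
        return self.fc(f_concat)  # logits; softmax is applied in the loss
```

With a ResNet18 backbone, for instance, feat_map would be the output of the final convolutional stage (d = 512 for 224×224 inputs).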
§ DATASETS In our study, we conduct a comprehensive experimental analysis of our proposed model using several well-known facial expression recognition datasets: the Extended Cohn-Kanade (CK+) dataset, the Facial Expression Research Group Database (FERG), and the Japanese Female Facial Expression (JAFFE) database. Below is a detailed overview of these databases, which are crucial for evaluating the effectiveness of our approach. Cohn-Kanade The Extended Cohn-Kanade dataset, commonly referred to as CK+, is a widely used public dataset in the realm of action unit and emotion-specified facial expression recognition. This dataset encompasses 593 sequences from 123 subjects, featuring both posed and spontaneous expressions. In most research, including ours, the last frame of these sequences is extracted and utilized for static facial expression recognition, capturing the peak of the emotional expression. Facial Expression Research Group Database (FERG) The FERG database consists of 55,767 annotated images representing six stylized characters designed using MAYA, a 3D animation software. These characters are depicted in a range of seven different expressions, providing a unique challenge in recognizing facial expressions from cartoon-style visuals. This dataset allows us to test the adaptability and performance of our model on non-realistic, stylized facial expressions, which is essential for applications in animated films and video games. Japanese Female Facial Expression (JAFFE) The JAFFE dataset consists of 213 facial images of different facial expressions, sourced from 10 distinct Japanese female subjects. Each participant was instructed to convey seven facial expressions, encompassing six fundamental emotional states along with a neutral expression. These images were meticulously annotated with mean semantic ratings for each emotional expression, as assessed by a panel of 60 annotators <cit.>, <cit.>. § EXPERIMENTAL DESIGN AND RESULTS In this section, we present and discuss the experimental results obtained from the evaluation of the proposed Graph Convolutional Network-based Facial Expression Recognition (GCF) model. The results are compared against several state-of-the-art approaches using the aforementioned benchmark datasets. The methods included in the comparison represent a range of traditional and deep learning-based approaches for facial expression recognition. We split our dataset into 80% for training and validation and 20% for testing. We designed our extensive experimental work as follows: * Enhancing the state-of-the-art CNN pre-trained models using the proposed GCF. * Benchmarking the proposed model against the state-of-the-art FER models. * Varying the GCF design in an ablation study. §.§ Enhancing the State-of-the-art Models using GCF Table <ref> compares the performance of the GCF method against baseline and state-of-the-art convolutional neural network (CNN) models on the JAFFE dataset. The results highlight the effectiveness of our approach in achieving higher accuracy rates across various architectures. In each comparison, the GCF method outperformed the baseline CNN models. For instance, the GCF method achieved an accuracy rate of 95% with VGG16, improving upon the CNN baseline accuracy of 92%, while for VGG19, the GCF method achieved 92% accuracy, significantly outperforming the CNN baseline of 72%. Similarly, with ResNet18, the GCF method reached an accuracy of 89%, a notable improvement from the baseline CNN accuracy of 66%. In more complex models like EfficientNetB0, the GCF method achieved 90% accuracy, up from the CNN baseline of 87%, while with InceptionV3, the GCF method achieved 99.2%, surpassing the CNN baseline of 97%.
The results highlight the effectiveness of utilizing graph-based representations for facial expression recognition, as the GCF approach consistently achieves higher accuracy rates, even in highly optimized models like EfficientNetB0 and complex architectures like InceptionV3 and DenseNet. This consistent enhancement, with improvements ranging from a few percentage points to over 20% in certain cases, underscores the potential of graph convolutional networks in advancing the field of facial expression recognition. The performance evaluation on the CK+ dataset, as presented in Table <ref>, highlights the superior accuracy achieved by the GCF method compared to baseline convolutional neural network (CNN) models. The GCF method consistently outperformed the CNN counterparts across various architectures. For the VGG models, GCF achieved 97% and 98% accuracy for VGG16 and VGG19, respectively, marking significant improvements of 8% and 5% over the baseline. Similarly, with the ResNet models, the GCF method improved accuracy rates to 98%, 96%, and 94% for ResNet18, ResNet34, and ResNet50, respectively, outperforming the baselines by 6%, 3%, and 7%. The GCF method also achieved a perfect accuracy of 100% with EfficientNetB0, surpassing the CNN baseline of 98%. In the case of InceptionV3 and DenseNet121, the GCF method achieved 99.2% and 100%, respectively, slightly improving upon the CNN baselines. Furthermore, with MobileNetV2, the GCF method attained 98%, surpassing the CNN baseline by 3%. These consistent enhancements across diverse architectures emphasize the potential of our graph-based approach for FER, solidifying its effectiveness on the CK+ dataset and showcasing the advantages of graph convolutional networks in this domain. The performance evaluation on the FERG dataset, as presented in Table <ref>, underscores the robust accuracy achieved by the GCF method compared to baseline convolutional neural network (CNN) models. Across different architectures, the GCF method consistently outperformed the CNN counterparts. For the VGG models, GCF achieved 99.6% and 99.4% accuracy for VGG16 and VGG19, respectively, representing improvements of 1.8% and 3.3%. The ResNet models also benefited from the GCF approach, with ResNet18 achieving a perfect 100% accuracy, an improvement of 5.6%, and ResNet50 achieving 99.7%, a slight improvement over the baseline. Even with more efficient architectures, such as EfficientNetB0, the GCF method achieved 99.97%, surpassing the baseline accuracy of 98%. The InceptionV3 and DenseNet architectures also saw improvements with GCF, achieving up to 96% and 98.7% accuracy, respectively, outperforming their CNN baselines. Finally, with MobileNetV2, the GCF method achieved 99%, surpassing the CNN baseline by 3%. These results not only show the impact of GCF across diverse architectures but also highlight the adaptability and superior performance of graph convolutional networks for facial expression recognition on the FERG dataset. §.§ Benchmark with the State-of-the-art Models The comprehensive performance comparison across the CK+, JAFFE, and FERG datasets, as presented in Table <ref>, highlights the robust accuracy achieved by the GCF method compared to several state-of-the-art models. In this benchmark, our GCF method consistently outperformed prominent approaches.
On the CK+ dataset, our GCF method achieved a perfect accuracy rate of 100%, outperforming notable models such as DeepEmotion (98%), Nonlinear Evaluation on SL + SSL Puzzling (98.23%), FAN (99.7%), and ViT + SE (99.8%). Similarly, on the JAFFE dataset, the GCF method also achieved a perfect accuracy rate of 100%, surpassing strong baselines like DeepEmotion (92.8%), ViT (94.83%), ARBEx (96.67%), and TLE (99.52%). The performance trend continued on the FERG dataset, where the GCF method achieved an impressive accuracy rate of 99.98%, outperforming DeepEmotion (99.3%) and ARBEx (98.18%). These results underscore the efficacy and robustness of our GCF method across diverse facial expression recognition datasets, highlighting its superior performance over both traditional and cutting-edge deep learning approaches. The consistently high accuracy rates achieved across different datasets demonstrate the adaptability and effectiveness of the GCF method, making it a promising approach for facial expression recognition tasks. §.§ Ablation Study The focus of this study is the edge configuration of the graph and its significance when the direction of edges in the graph is taken into account. Three model variants are identified: GCF-V1 with bidirectional edges, GCF-V2 with cross-linked edges from left to right, and GCF-V3 with cross-linked edges from right to left. The results summarize the accuracy rates of facial expression recognition models across three key datasets: JAFFE, CK+, and FERG. The study considers pretrained models including VGG16, ResNet18, and InceptionV3 and evaluates each of them under the three versions of the Graph Convolutional Facial Expression (GCF) design. These experiments demonstrate the capacity of the tested architectures to recognize diverse facial patterns across datasets, and the directional GCF variants quantify the effect of different edge arrangements on model performance. § CONCLUSION In this paper, we introduced a novel approach to Facial Expression Recognition (FER) by combining Convolutional Neural Networks (CNNs) and Graph Convolutional Networks (GCNs). The GCF model leverages the feature extraction capabilities of CNNs and the relational modeling strengths of GCNs to capture both local and global features of facial expressions. Our experiments on benchmark datasets, including CK+, JAFFE, and FERG, demonstrated significant improvements in accuracy, achieving up to 100% accuracy on the CK+ and JAFFE datasets and 99.98% on FERG. These results highlight that the GCF framework represents a promising direction for future research in facial expression recognition, providing a deeper understanding of human emotions. § ACKNOWLEDGMENT We extend our heartfelt gratitude to AiTech for Artificial Intelligence & Software Development (<https://aitech.net.au>) for providing the computational resources essential for our experiments. Their support has been crucial to the successful completion of this research.
http://arxiv.org/abs/2407.02706v1
20240702225919
Pushing the Boundary: Specialising Deep Configuration Performance Learning
[ "Jingzhi Gong" ]
cs.SE
[ "cs.SE", "cs.AI" ]
Modular properties of massive scalar partition functions Ankit Aggarwal, Glenn Barnich July 8, 2024 ========================================================
http://arxiv.org/abs/2407.02220v2
20240702123846
Embodied AI in Mobile Robots: Coverage Path Planning with Large Language Models
[ "Xiangrui Kong", "Wenxiao Zhang", "Jin Hong", "Thomas Braunl" ]
cs.RO
[ "cs.RO", "cs.AI" ]
Embodied AI in Mobile Robots: Coverage Path Planning with Large Language Models
Xiangrui Kong, Dept. of Electrical, Electronic and Computer Engineering, The University of Western Australia, Perth, Australia, xiangrui.kong@research.uwa.edu.au
Wenxiao Zhang, Dept. of Computer Science and Software Engineering, The University of Western Australia, Perth, Australia, wenxiao.zhang@research.uwa.edu.au
Jin Hong, Dept. of Computer Science and Software Engineering, The University of Western Australia, Perth, Australia, jin.hong@uwa.edu.au
Thomas Braunl, Dept. of Electrical, Electronic and Computer Engineering, The University of Western Australia, Perth, Australia, thomas.braunl@uwa.edu.au
July 8, 2024
===========
§ ABSTRACT In recent years, Large Language Models (LLMs) have demonstrated remarkable capabilities in understanding and solving mathematical problems, leading to advancements in various fields. We propose an LLM-embodied path planning framework for mobile agents, focusing on high-level coverage path planning and low-level control. The proposed multi-layer architecture uses prompted LLMs in the path planning phase and integrates them with the mobile agents' low-level actuators. To evaluate various LLMs, we propose a coverage-weighted path planning metric for assessing the embodied models. Our experiments show that the proposed framework improves LLMs' spatial inference and 2D plane reasoning abilities, enables them to complete coverage path planning tasks, and significantly enhances the efficiency and accuracy of these tasks by leveraging the natural language understanding and generative capabilities of LLMs. We tested three LLM kernels: gpt-4o, gemini-1.5-flash, and claude-3.5-sonnet. The experimental results show that claude-3.5-sonnet can complete the coverage planning task in different scenarios, and its metrics are better than those of the other models. natural language processing, mobile robots, path planning, indoor navigation § INTRODUCTION The application of Large Language Models (LLMs) has grown exponentially, revolutionizing various fields with their advanced capabilities <cit.>. Modern LLMs have evolved to perform various tasks beyond natural language processing. When integrated into mobile agents, these LLMs can interact with the environment and perform tasks without the need for explicitly coded policies or additional model training. This capability leverages the extensive pre-training of LLMs, enabling them to generalize across tasks and adapt to new situations based on their understanding of natural language instructions and contextual cues. Embodied AI refers to artificial intelligence systems integrated into physical entities, such as mobile robots, that interact with the environment through sensors and actuators <cit.>.
The integration of LLMs with embodied AI in applications such as autonomous driving <cit.> and humanoid robots <cit.> demonstrates their potential. However, the application of LLMs in controlling mobile robots remains challenging due to issues such as end-to-end control gaps, hallucinations, and path planning inefficiencies. LLMs possess the capability to solve mathematical problems, which directly aids in path planning methods <cit.>. Path planning and obstacle avoidance are critical for the effective operation of mobile robots, ensuring safe and efficient navigation in dynamic environments <cit.>. Coverage path planning is a typical method employed in various research areas, such as ocean seabed mapping <cit.>, terrain reconstruction <cit.>, and lawn mowing <cit.>. Traditional path planning methods include algorithms such as A* <cit.>, D* <cit.>, and potential field methods <cit.>. Given a global map, a path-planning method can be framed as a mathematical problem solvable by LLMs. In this context, we simplify some traditional path-planning methods and test LLMs in our mobile robot simulator. LLMs demonstrate their ability to solve mathematical problems collaboratively <cit.>. This paper presents a multi-layer coverage path planner based on existing multimodal large language models. It involves the static low-dimensional deconstruction of unstructured maps, abstracting spatial relationships into mathematical problems for reasoning and solving by prompted LLMs. The reasoning accuracy of the LLM is enhanced through multi-turn dialogues and multimodal interactions. The inferred results from the LLM are combined with the control interface, enabling the mobile agent to control the robot in real time for path planning. Simulation experiments demonstrate that LLMs possess path-planning capabilities in unstructured static maps. § RELATED WORKS §.§ LLMs in mobile robots Currently, LLMs are involved in various aspects of mobile robots, including code writing, model training, action interpretation, and task planning. LLMs can process new commands and autonomously re-compose API calls to generate new policy code by chaining classic logic structures and referencing third-party libraries <cit.>. LLMs have also been used to automatically generate reward algorithms for training robots to learn tasks such as pen spinning <cit.>. PaLM-E, an embodied language model trained on multi-modal sentences combining visual, state estimation, and textual input encodings, demonstrates the versatility and positive transfer across diverse embodied reasoning tasks, observation modalities, and embodiments <cit.>. LLMs have shown promise in processing and analyzing massive datasets, enabling them to uncover patterns, forecast future occurrences, and identify abnormal behaviour in a wide range of fields <cit.>. VELMA is an embodied LLM agent that generates the next action based on a contextual prompt consisting of a verbalized trajectory and visual observations of the environment <cit.>. Sharma et al. propose a method for using natural language sentences to transform cost functions, enabling users to correct goals, update robot motions, and recover from planning errors, demonstrating high success rates in simulated and real-world environments <cit.>. There is also some research applying LLMs in zero-shot path planning. 
The 3P-LLM framework highlights the superiority of the GPT-3.5-turbo chatbot in providing real-time, adaptive, and accurate path-planning algorithms compared to state-of-the-art methods like Rapidly Exploring Random Tree (RRT) and A* in various simulated scenarios <cit.>. Singh et al. describe a programmatic LLM prompt structure that enables the generation of plans functional across different situated environments, robot capabilities, and tasks <cit.>. Luo et al. demonstrate the integration of a sampling-based planner, RRT, with a deep network structured according to the parse of a complex command, enabling robots to learn to follow natural language commands in a continuous configuration space <cit.>. ReAct utilizes LLMs to generate interleaved reasoning traces and task-specific actions <cit.>. These methods typically use LLMs to replace certain components of mobile robots. The development of a hot-swapping path-planning framework centred around LLMs is still in its early stages. §.§ Path planning method Path planning for mobile robots involves determining a path from a starting point to a destination on a known static map <cit.>. Obstacle avoidance acts as a protective mechanism for the robot, enabling interaction with obstacles encountered during movement. Low-level control connects algorithms to different types of system agents, such as UAVs, UGVs, or UUVs <cit.>. In addition to the A* and D* algorithms mentioned in the previous chapter, path planning algorithms include heuristic optimization methods based on pre-trained weights, such as genetic algorithms <cit.>, particle swarm optimization <cit.>, and deep reinforcement learning <cit.>. These pre-trained methods do not directly rely on prior knowledge but utilize data to pre-train weights. The obstacle avoidance problem addresses dynamic obstacles encountered during movement, ensuring the safety of the mobile agent. Mainstream methods include the Artificial Potential Field. The coverage path planning problem is a branch of path planning problems. Compared with point-to-point path planning, a coverage waypoint list needs to cover the given area as completely as possible <cit.>. Classically, a common way to solve this problem, following the divide-and-conquer principle, is to decompose a given map based on topological rules and then apply a repeatable coverage pattern <cit.>. This approach requires a known map to start; alternatively, the Traveling Salesman Problem (TSP), an optimization problem that seeks the shortest possible route for a salesman to visit a given set of cities exactly once and return to the original city, offers a solution on a node graph <cit.>. Figure <ref> showcases a comparison of four distinct path-planning patterns employed in robotic navigation. The first pattern, the standard lawnmower (Figure <ref>), uses a back-and-forth sweeping motion to ensure comprehensive coverage of the area. The second pattern, the square spiral (Figure <ref>), depicts a robot following an inward spiral trajectory, efficiently covering the space in a continuous inward motion. The third pattern, the square move (Figure <ref>), illustrates a robot navigating in a sequential inward square formation, progressively moving towards the centre. Finally, the lawnmower after wall following (Figure <ref>) combines two approaches: initially, the robot adheres to the perimeter of the area while wall following, and subsequently, it adopts a lawnmower pattern to cover the remaining interior space; a minimal sketch of the first of these patterns is given below.
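To make the lawnmower pattern concrete, the following minimal Python sketch generates a boustrophedon waypoint list over a rectangular grid. The (row, column) indexing from the top-left corner is an assumption for illustration, not taken from the paper's code.

    def lawnmower_waypoints(rows, cols):
        # Sweep each row alternately left-to-right and right-to-left so that
        # consecutive rows connect without retracing already-covered cells.
        waypoints = []
        for r in range(rows):
            cells = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
            waypoints.extend((r, c) for c in cells)
        return waypoints

    # Example: a 3x3 grid is fully covered by 9 waypoints
    print(lawnmower_waypoints(3, 3))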
This comparative analysis of path planning strategies highlights the versatility and application-specific advantages of each method in ensuring thorough area coverage in robotic navigation tasks. § METHODOLOGY As depicted in Figure <ref>, our method is divided into three main sections: global planning, waypoint evaluation, and navigation. In the global planning phase, a coverage planning task on a given map is decomposed into a cell map, and the task requirements are expressed in natural language with a simplified response format so that the LLM responses can be parsed. During the waypoint evaluation phase, the LLM responses are further evaluated before execution. The theoretical coverage rate and the theoretical shortest path distance are calculated in this phase. Once the desired path passes the evaluation, the planned waypoint list transitions to the navigation phase. In the navigation phase, the mobile agent simply travels through the waypoints one by one and triggers the safety mechanism if the sensors report that the distance between the robot and an unknown obstacle falls below a threshold. §.§ Global planning We design a waypoint generation prompt that describes 2D grid maps in natural language, like a chessboard, to reduce the inference difficulty for LLMs. During the global planning phase, the prompt contains the size of the grid map, the current location, and the response format. We assume the LLM generates the desired waypoint list in the required format, which is a sequence of local positions separated by bar signs. In order to evaluate the performance and executability of the planned path, the desired waypoint list is visualised and analysed in the waypoint evaluation phase. Considering the robot's kinematic limitations, we prompt a description of the mobile agent including equipped sensors, driving commands, and basic status. We experimented with various settings to describe robot behaviors in conversations with ChatGPT. However, we observed that these changes in description had minimal impact on the output responses. We use the OpenAI GPT-4o service <cit.>, a multimodal efficient model for inference and reasoning. The temperature parameter, which ranges from 0 to 2, is set to 0.6 with our prompt for a consistent planned path. Lower values for temperature result in more consistent outputs, while higher values generate more diverse and creative results. §.§ Waypoint evaluation The response from the LLMs can occasionally be incorrect, leading us to design a waypoint evaluator to mitigate hallucinations. Initially, the desired waypoint list is visualized on a 2D map, providing a clear and precise layout of the proposed route. The shortest path and the number of turns are then calculated mathematically to ensure efficiency and feasibility. Paths that do not meet the required criteria are rejected and not converted into a driving command list. The designed dialogue system initiates as soon as the agent receives the task command and map, continuing until a waypoint list passes the evaluation. This ensures that only optimal routes are considered for execution. Once the mobile agent begins driving, the task cannot be altered, guaranteeing consistency and reliability in task completion. Algorithm <ref> begins by initializing key parameters: the maximum number of iterations N, the evaluation threshold θ, the target position p_t, and the starting position s_0. A prompt 𝒫 is created, containing the task description and current position, which is then used by the LLM to generate waypoints.
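As an illustration of such a prompt, the following sketch assembles the pieces just described — grid size, current location, and the bar-separated response format. The exact wording is an assumption; it is not the verbatim prompt used in the experiments.

    def build_prompt(rows, cols, position):
        # Chessboard-style map description plus the required response format.
        return (
            f"You control a mobile robot on a {rows}x{cols} grid map, "
            f"indexed like a chessboard. The robot is currently at {position}. "
            "Plan a waypoint list that covers every cell of the grid. "
            "Respond only with positions separated by '|', "
            "for example: (0,0)|(0,1)|(0,2)."
        )

    prompt = build_prompt(5, 5, (0, 0))  # e.g., the smallest map size used in the experiments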
The LLM inference function Φ produces a list of waypoints W based on this prompt, taking into account the grid map, current location, and required response format. As the algorithm iterates, it evaluates the generated waypoints using the evaluation function ℰ, which calculates the shortest path r and the number of turns τ. If the calculated path metrics r and τ exceed the predefined threshold θ, the waypoint list is considered feasible and returned. This loop continues until a valid waypoint list is identified or the maximum number of iterations is reached. The algorithm ensures that only optimal routes are considered, thus providing a robust framework for waypoint generation and evaluation. This process incorporates global planning and rigorous waypoint evaluation to leverage LLM capabilities while ensuring safe and reliable path execution for mobile agents. §.§ Waypoint navigation After evaluating the waypoint list, the mobile agent begins to iterate through the waypoints. Due to potential sensor errors and the intricacies of path following, it is essential for the mobile agent to select an appropriate following method. Simple waypoint following methods such as the dog curve and turn-and-drive can be employed to navigate the waypoints with a fixed distance. These methods enable the mobile agent to follow the sequence of waypoints with smooth and accurate navigation along the route. In our approach, we decompose this procedure using a status transform matrix that maps the next driving command based on the current heading, current position, and the next waypoint. This matrix allows for dynamic adjustment and precise control during navigation. Additionally, the designed safety system ensures the execution is safe by preventing collisions with unknown obstacles. This is achieved using a position-sensitive detector and LIDAR beams, which continuously monitor the environment and provide real-time feedback for obstacle avoidance. Algorithm <ref> iterates over each waypoint w_i in the list W. The current position s is updated using odometry data 𝒪, and the next waypoint s' is taken from the waypoint list W. The action command a is determined by the selected path following function Γ(s, s'). The distance Δ between the current position s and the next waypoint s' is calculated. If the distance Δ is less than a predefined threshold d, the algorithm continues to the next waypoint. § EXPERIMENTAL SETUP §.§ Implement details This framework has been implemented on the EyeBot simulator <cit.>. The EyeBot simulator with virtual reality, EyeSim VR, is a multi-robot simulator with VR functionality based on the Unity 3D game engine; it allows experiments with the same unchanged EyeBot programs that run on the real robots. We vary the task map size from 5×5 to 11×11. In each map, the mobile agent starts at a random position and runs the proposed method for 10 episodes, and all performance metrics are averaged. Three large language models are evaluated in the experiment, including gpt-4o, gemini-1.5-flash and claude-3.5-sonnet, with the same system prompt and default temperature shown in Figure <ref>. §.§ Metrics We referenced the metrics from <cit.> and <cit.>, including success rate, average distance, and coverage rate. The success rate indicates whether the paths generated by LLMs can cover the designated area.
Average distance represents the average path length of the mobile robot, while coverage rate is a metric specific to coverage methods, used to assess the completeness of coverage path planning algorithms. In traditional navigation evaluation standards, task termination is determined by the distance between the agent and the target point, which is effective for path planning problems with clearly defined start and end points. However, for coverage path planning algorithms, the generated paths do not have a clear endpoint, and the coverage path is autonomously decided by the LLM. Therefore, we have added a coverage rate metric to the comprehensive evaluation standards referenced from the cited sources. Inspired by Success weighted by Path Length (SPL) from <cit.>, we will refer to the following measure as CPL, short for Coverage weighted by (normalized inverse) Path Length: CPL = 1/N ∑_i=1^N (A_i/Ā_i)(l_i/max(p_i, l_i)) where N denotes the number of test episodes. A_i and Ā_i indicate the area of the coverage path and the area of the mission area, respectively. The ratio of A_i to Ā_i is expressed as the Coverage Rate (CR), which is used to evaluate the completeness of the path. l_i denotes the theoretical shortest path distance from the mobile agent's start point, and p_i is the Path Length (PL) of the path actually travelled by the agent. §.§ Results and analysis The performance and time analysis are shown in Table <ref> and Table <ref>. All three models demonstrate the ability to plan a coverage path in a square space with a random start position. However, as the map size increases, the coverage rate decreases by approximately 5% to 10%, though all models maintain a coverage rate above 65%. As shown in Table <ref>, the model claude-3.5-sonnet exhibits the best performance among the three models in terms of coverage rate and weighted path length. Changes in map size do not significantly affect the coverage rate and weighted path for the model gemini-1.5-flash. Conversely, the model gpt-4o achieves a higher coverage rate with smaller map sizes, but this rate decreases as the map size increases. As the map size grows, the actual path length increases more rapidly than the weighted path length, indicating that the planned paths include repeated visits to the same cells depending on the random start position. The differences in path length are attributed to the coverage rate of the planned path and the mobile agent's hardware capabilities, such as sensors and actuators. Since the evaluation runs locally with a short time cost (less than 300 ms), we sum the inference time and the evaluation time as T_i. T and T_d represent the total time spent and the driving time cost, respectively. Model claude-3.5-sonnet performs best and exhibits the fastest inference time in the experiment, planning full-coverage waypoints in various environments. Model gpt-4o shows stable performance across different map sizes, demonstrating robustness and reliability. However, it is noted that the model's performance declines slightly as the map size increases, which could be attributed to the complexity of managing larger spaces and more waypoints. Model gemini-1.5-flash, on the other hand, maintains consistent performance regardless of map size, although it occasionally introduces extra line break marks in its responses, which could be due to formatting issues within the LLM's output generation process.
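For concreteness, the CPL metric defined above can be computed as in the following minimal sketch; the per-episode field names are illustrative assumptions, not the paper's data format.

    def cpl(episodes):
        # Coverage weighted by (normalized inverse) Path Length. Each episode
        # provides: covered_area (A_i), mission_area (A-bar_i),
        # shortest_len (l_i), and path_len (p_i).
        total = 0.0
        for e in episodes:
            coverage_rate = e["covered_area"] / e["mission_area"]  # CR = A_i / A-bar_i
            total += coverage_rate * e["shortest_len"] / max(e["path_len"], e["shortest_len"])
        return total / len(episodes)

    # Example: full coverage along the theoretical shortest path gives CPL = 1.0
    print(cpl([{"covered_area": 25, "mission_area": 25, "shortest_len": 24, "path_len": 24}]))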
Additionally, the path length differences highlight the varying capabilities of the mobile agents' hardware, such as sensor accuracy and actuator precision, which directly impact the execution of the planned paths. The evaluation process, which includes both inference and validation, ensures that the paths are not only feasible but also optimized for efficiency. Overall, the claude-3.5-sonnet model excels in both performance and speed, making it ideal for scenarios requiring rapid and thorough coverage. The gpt-4o model offers balanced performance with stability across various map sizes, making it a versatile choice. The gemini-1.5-flash model, despite minor formatting issues, proves to be reliable with consistent performance. These insights can guide the selection of appropriate LLM services for specific coverage path planning tasks in mobile robotics. § DISCUSSION We propose a novel embodied framework for mobile agents, incorporating weighted evaluation metrics for the specific task of coverage path planning. A key factor of the framework is the use of zero-shot prompts to simplify LLM inference during the initial phase. This approach leverages the power of LLMs to generate effective waypoints without the need for extensive training data, thus streamlining the path-planning process. During the navigation phase, we introduced a robust safety mechanism for mobile agents to avoid obstacles. This mechanism ensures that the mobile agents can navigate safely and efficiently in dynamic environments. Our experiments demonstrate that current LLMs have the capability to function as an embodied AI brain within mobile agents for specific tasks, such as area coverage, when guided by appropriately designed prompts. The competition among LLM companies has significantly advanced the field, freeing researchers from the traditional labelling-training-validation loop in AI research. This shift allows for more focus on innovative applications and real-world deployment of AI technologies. Future research will focus on evaluating path-planning problems in more realistic scenarios and simulation environments. This includes integrating more complex environmental variables and constraints to further evaluate and enhance the robustness of the proposed framework. Additionally, exploring the scalability of LLMs in diverse and larger-scale applications will be crucial in advancing the practical deployment of embodied AI systems in mobile robotics. § ACKNOWLEDGMENT The authors would like to thank all the Renewable Energy Vehicle Project (REV) sponsors for their support on this project, especially Stockland, Allkem and CD Dodd.
http://arxiv.org/abs/2407.02328v1
20240702145844
Efficient Sparse Attention needs Adaptive Token Release
[ "Chaoran Zhang", "Lixin Zou", "Dan Luo", "Min Tang", "Xiangyang Luo", "Zihao Li", "Chenliang Li" ]
cs.CL
[ "cs.CL" ]
Efficient Sparse Attention needs Adaptive Token Release
Chaoran Zhang, Lixin Zou, Dan Luo, Min Tang, Xiangyang Luo, Zihao Li, Chenliang Li
July 8, 2024
================================================================================================================
§ ABSTRACT In recent years, Large Language Models (LLMs) have demonstrated remarkable capabilities across a wide array of text-centric tasks. However, their `large' scale introduces significant computational and storage challenges, particularly in managing the key-value states of the transformer, which limits their wider applicability. Therefore, we propose to adaptively release resources from caches and rebuild the necessary key-value states. Particularly, we accomplish this by a lightweight controller module to approximate an ideal top-K sparse attention. This module retains the tokens with the highest top-K attention weights and simultaneously rebuilds the discarded but necessary tokens, which may become essential for future decoding. Comprehensive experiments in natural language generation and modeling reveal that our method is not only competitive with full attention in terms of performance but also achieves a significant throughput improvement of up to 221.8%. The code for replication is available at <https://github.com/WHUIR/ADORE>. § INTRODUCTION Having broken through cognitive barriers, large language models (LLMs) are now widely used in many text-rich areas, such as voice assistants <cit.>, search engines <cit.>, and recommendation systems <cit.>. These successes are a testament to the philosophy of scaling up parameters to boost performance, i.e., the scaling law <cit.>. However, in situations demanding rapid or extensive text modeling, the vast size of the model significantly escalates the computational and storage requirements for the key-value (KV) states of self-attention, which, in turn, limits its throughput <cit.>. For example, when using a model with 7 billion parameters, caching the KV states for 1,000 tokens results in a memory requirement that exceeds twice the size of the model parameters, consequently increasing the time costs of attention computation and memory swapping. Recent efforts address this issue from two perspectives: 1) hardware optimization, analogous to `increasing income'; 2) refining algorithms, similar to `reducing expenditure'. The former approach typically optimizes performance by scheduling tasks across multiple GPUs <cit.> or by implementing hierarchical unloading using the CPU and disk <cit.>. These techniques, though efficient, require additional hardware and, if not carefully scheduled, can lead to increased communication latency. This, in turn, may potentially degrade the overall user experience <cit.>. The latter strategy enhances efficiency by limiting the caching size of key-value states, such as sparsely attending to the immediate neighbors <cit.> or compressing prompts <cit.>. Though efficient, it can often lead to a drop in performance. Besides, some methods instantiate the sparse attention by masking attention after the attention weights have been calculated <cit.>. Thus, they fail to enhance inference speed or reduce memory usage. Among existing methods, the dynamic top-K attention <cit.>, which maintains sparse attention by selecting the highest attention contributions, demonstrates performance comparable to, or even better than, full attention models. Such an intuitive solution has only been well explored in encoders, where the model can access the entire context simultaneously.
However, in decoder-based LLMs, the required context shifts dynamically, necessitating multiple calculations of top-K attention throughout the decoding process. This unique characteristic further complicates the challenges: 1) Premature and erroneous releasing of unnecessary KV states may result in inaccuracies of the top-K attention calculation. However, accurately determining the top-K attention requires considering all KV states of past tokens, thereby conflicting with the goal of reducing costs. 2) Tokens released earlier might be required for top-K attention in future decoding due to long-term dependencies in the text, as illustrated in Figure <ref>. Consequently, missing these necessary tokens can result in inaccurate calculations of sparse attention for subsequent tokens. To this end, we introduce ADORE, ADaptive tOken RElease, which maintains a constant cache size by accurately releasing useless past key-value (KV) states and efficiently reconstructing vital past KV states that were previously released. ADORE introduces a lightweight controller module that adaptively releases from the KV cache the tokens with the lowest predicted attention contribution for the current token. This ensures a fixed KV cache overhead, even when processing numerous tokens. Additionally, ADORE rebuilds the KV states of tokens that are likely to contribute higher attention scores but have been previously released. This rebuild mechanism counters the issue when a released token is essential for future decoding. Moreover, ADORE can seamlessly integrate into LLM inference, showing impressive results with only minor fine-tuning and training needed for the lightweight controller module. Extensive experiments on multiple benchmark datasets reveal that ADORE achieves up to a 221.8% improvement in throughput compared to full attention models while maintaining nearly identical text quality. § METHODOLOGY This section first establishes the framework for efficient sparse attention, followed by an initial exploration of adaptive token release in Section <ref>. Subsequently, we rebuild the KV states of important tokens, approximating the ideal dynamic sparse attention, in Section <ref>. Finally, we propose an optimized matrix slicing algorithm to accelerate the implementation of our method in Section <ref>. An overview of our method is illustrated in Figure <ref>. §.§ Efficient Sparse Transformer Let T_n = {t_1, …, t_s, t_s+1, …, t_n} be a set of word tokens, where {t_1, …, t_s} represent user input tokens, and {t_s+1, …, t_n} are tokens generated by a transformer-based model, such as GPT-Neo <cit.> and Llama <cit.>. When generating the next token t_n+1, the current token t_n serves as the query input. The t_n's key-value states are based on the following scaled dot-product attention: a_n,l = softmax(q_n,l×(K^n_l)^⊤/√(d)) ×V^n_l, where a_n,l∈ℝ^d denotes the hidden state at the l^th layer of the transformer; it undergoes a non-linear transformation to become the key and value states associated with the token t_n, and q_n,l denotes the query vector derived from t_n at the l^th layer. The terms K^n_l∈ℝ^n × d and V^n_l∈ℝ^n × d represent the key and value states from the current token set T_n at the same layer. These states are retained in GPU memory to minimize redundant computations. The generation of the token t_n+1 is accomplished through a multi-classification approach, utilizing the hidden state a_n,L∈ℝ^d from the last layer.
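A minimal single-head NumPy sketch of this cached attention step is given below; the real model is multi-head and batched, so treat this as an illustration of the equation rather than the implementation.

    import numpy as np

    def attend_with_cache(q_n, K_cache, V_cache):
        # q_n: (d,) query of the current token t_n at one layer.
        # K_cache, V_cache: (n, d) cached key/value states of all past tokens.
        d = q_n.shape[-1]
        scores = K_cache @ q_n / np.sqrt(d)      # q_n x K^T / sqrt(d), shape (n,)
        weights = np.exp(scores - scores.max())  # numerically stable softmax
        weights /= weights.sum()
        return weights @ V_cache                 # hidden state a_{n,l}, shape (d,)

    # During decoding, k_n and v_n are appended to the caches after each step:
    # K_cache = np.vstack([K_cache, k_n]); V_cache = np.vstack([V_cache, v_n])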
For an efficient sparse transformer, we selectively cache the most relevant KV states, aiming to reduce computational demands while maintaining or even enhancing the model's performance in generating subsequent tokens: a'_n,l = softmax(q_n,l×(K^n_m+1,l)^⊤/√(d)) ×V^n_m+1,l. Here, a'_n,l approximates a_n,l using K^n_m+1,l∈ℝ^(m+1)× d and V^n_m+1,l∈ℝ^(m+1)× d, which correspond to selecting m rows from K^n-1_l and V^n-1_l and concatenating them with k_n,l and v_n,l respectively, with the condition that m << n-1. Here, k_n,l and v_n,l denote the key and value vectors derived from t_n at the l^th layer. This implies that only the significantly smaller K^n_m+1,l and V^n_m+1,l are retained on the GPU for rapid inference and memory savings. From a performance standpoint, achieving the ideal sparsity involves computing the full attention weight w_n = q_n,l×(K^n_l)^⊤∈ℝ^n and then selecting the top-m query-key product weights. These weights then serve as indices for slicing V^n_l. While this method is optimal in performance, it does not confer any computational or memory savings, as the process of computing full attention weights for all query-key pairs and then selecting the top weights is computationally intensive. §.§ Adaptive Token Release The adaptive token release aims at efficient scheduling of the key-value states within the GPU memory. The main idea is to use a lightweight controller module as an alternative to computing the full weights for slicing the full key-value states. To be both efficient and effective, we have implemented several design strategies: * Refine the model with top-K attention. Compared to full attention, top-K attention can mitigate the impact of excluding partial KV states once the pertinent top-K KV states are included within the m cached KV states, which is consistent with the target defined in Eq. (<ref>). Therefore, we initially fine-tune the LLMs with top-K attention, which utilizes only the highest top-K attention weights by masking the weights outside the top-K. Remarkably, this approach yields performance that is on par with full attention models <cit.>. To be efficient, the cache size m is slightly larger than K. As m decreases, the complexity of the scheduling process increases correspondingly. * Adopt a uniform scheduling policy for the retention or exclusion of KV states across various layers. Constructing a layer-specific scheduling strategy would necessitate additional time to model each layer's input. Moreover, the initial layer is more pivotal for integrating value states; as we delve deeper into the layers, the hidden states become increasingly homogeneous <cit.>. Additionally, it is observed that different layers often focus on a similar set of top-K attentions. The effectiveness of the uniform scheduling policy is elaborated in Appendix <ref>. * Update the cached KV states by appending the latest KV state and selectively releasing an older one. An intuitive idea is to store the KV states in the host's main memory as a backup. However, due to bandwidth limitations between the GPU and the host, moving KV states in and out proves to be extremely slow, at times even slower than recalculating the KV states <cit.>. Consequently, when updating the cached KV states, we simply append the most recently computed KV states while removing a nonsignificant older one, thereby maintaining a constant size for the cache. Adhering to the above strategies, we develop a controller module that utilizes the lightweight and efficient GRU <cit.> for scheduling the cached KV states.
Specifically, during the generation of token t_n+1, we establish the probability of caching the KV state of token t_i as: z_i = GRU(x_i, z_i-1), σ_i = Sigmoid(MLP(p_i + z_i)), where x_i∈ℝ^d represents the token embedding from the LLMs. The GRU is a single-layer, unidirectional GRU (its effectiveness is analyzed in Appendix <ref>) that recurrently transforms this token embedding into a context-aware representation z_i∈ℝ^d'. The term p_i∈ℝ^d' denotes the position embedding for the i^th token, which signifies the importance of token position in the scheduling model. During the update of the KV states, we discard those with the lowest σ_i values and append the most recent KV states to the cached states. To fine-tune its parameters, we construct a dataset by collecting the word embeddings of each sequence as input. Then we construct corresponding labels by assigning a value of 1 to the indices of the top-K tokens that most frequently occur within the top-K/2 attention scores across all layers, and a value of 0 to all others. §.§ KV States Rebuild Adaptive token releasing facilitates the selective preservation of the most pertinent tokens, yet previously discarded tokens may become essential for future decoding due to the long-term dependencies in text. To counter this issue, we propose the rebuilding of KV states as a complement. This method entails retrieving the top-R tokens with the highest σ_i values from the set of released tokens. Let X_R∈ℝ^R × d represent the token embeddings of the selected released tokens. We concatenate X_R with x_n, i.e., the embedding of the current token t_n, forming the input X_R+1∈ℝ^(R+1) × d. After (l-1) layers of processing, we obtain the query states Q^n_R+1,l∈ℝ^(R+1) × d, K^n_m+R+1,l∈ℝ^(m+R+1) × d and V^n_m+R+1,l∈ℝ^(m+R+1) × d, where K^n_m+R+1,l/V^n_m+R+1,l is formed by concatenating the cached key/value states with the rebuilt key/value states of the input tokens. With these augmented states, the attention is calculated as: A'_R+1,l = softmax(Q^n_R+1,l×(K^n_m+R+1,l)^⊤/√(d)) ×V^n_m+R+1,l, where A'_R+1,l is the hidden state. To obtain the corresponding value for the currently generated token, we take a'_n,l as the last row of A'_R+1,l. Through the parallel rebuilding of the released KV states, we maximize the utilization of the GPU without incurring excessive time overhead. §.§ Matrix Slicing as Multiplication The scheduling of KV states relies on certain matrix-slicing operators. Traditional slicing operators can lead to significant time overheads <cit.>, particularly when batch operations involve varying slicing indices. To circumvent this, we leverage the GPU's rapid matrix multiplication capabilities. For instance, to remove the j^th row from K^n_m,l, we prepare a slicing matrix S_j = I_(1:j-1,j+1:m),:, where I∈ℝ^m× m is the identity matrix and I_(1:j-1,j+1:m),: selects all rows of I except the j^th row. The resulting K^n_m-1,l = S_j ×K^n_m,l, with S_j precomputed to save time. § EXPERIMENT §.§ Experimental Settings Dataset. To evaluate the effectiveness of various sparse attention mechanisms, we conduct extensive experiments across three distinct tasks: natural language generation, stream generation, and natural language modeling. For the first task, we evaluate on UltraChat <cit.>, EverythingLM[<https://huggingface.co/datasets/totally-not-an-llm/EverythingLM-data>], and Math <cit.>. For the second task, we experiment on StreamEval <cit.> and StreamChat (built upon UltraChat).
For the last task, we evaluate models on CNN Dailymail <cit.> and SAMSum <cit.>. Specifically, UltraChat is a multi-turn dialogue dataset containing approximately 696,600 training samples and covering diverse topics such as questions about the world and creative writing. EverythingLM is an instructional dataset consisting of 1,000 conversations and encompassing a wide array of topics and interactions. The Math dataset is composed of 50,000 problem-solution pairs obtained using GPT-4 across 25 math topics. StreamChat concatenates every 100 samples from UltraChat and feeds them into the model in a streaming fashion to assess the quality of the generated answers. StreamEval is a question-answer dataset building upon LongEval <cit.>. Specifically, it comprises about 2,000 samples, each with 1,000 lines of textual information and 100 retrieval questions. CNN Dailymail is a news summarization dataset containing over 300,000 news articles. SAMSum is a summarization dataset containing about 16,000 messenger-like conversations with summaries. The details of the datasets are reported in Appendix <ref>. Baseline. We compare our method with the following methods: 1) Full Attention encompasses all past KV states across every layer, characterized by a time complexity of O(T^2) and linear growth in cache size. 2) Window Attention <cit.> focuses on the nearest tokens for self-attention at each layer, thus ensuring a constant size for the key-value cache. 3) Strided Attention <cit.> attends to both the nearest and distant tokens by periodically focusing on one with a fixed interval, thus striking a balance between effectiveness and efficiency. 4) KV Compression <cit.> incrementally compresses the intermediate activations of a specified span of tokens into compact ones. 5) StreamingLLM <cit.> extends Window Attention by adding the first four tokens to the cache, aiming to maintain a normal distribution of attention scores and stable inference settings. 6) H_2O <cit.> dynamically evicts from the cache the unimportant tokens that contribute the least to the cumulative attention. 7) H_2O(Rebuilt) selectively rebuilds the KV states of evicted tokens based on H_2O. Experimental Protocols. We employ Llama-2 7B <cit.> as our backbone for evaluation. It has 32 transformer layers and a 4,096-token context length. For our experiments, we employ top-96 attention and set the KV cache size m to 192 with top-8 rebuilt tokens. We randomly selected 1,000 samples from the benchmark dataset for training purposes. These samples were utilized to develop the sparse top-K backbone model using QLoRA <cit.>, along with the controller module. The remaining data were employed for testing the models. The training and inference times for the controller module are detailed in Appendix <ref> and Appendix <ref>, respectively. To evaluate the quality of the generated text, we use metrics including BLEU, ROUGE, BERT-F <cit.>, and Accuracy. To measure the inference speed of different methods, we use Throughput <cit.>, which is defined as the number of tokens generated per second. §.§ Natural Language Generation This subsection evaluates the models' performance in natural language generation. We summarize the quality of the generated text on the UltraChat, EverythingLM and Math benchmarks in Table <ref> and the throughput against different sequence lengths in Figure <ref>. From the results reported, we have the following observations: 1) The proposed ADORE achieves the best performance and consistently outperforms all the baselines on all datasets.
In Table <ref>, our method shows an improvement over full attention on the UltraChat dataset, with increases of 1.2% in BLEU scores and 0.1% in BERT-F scores. On the other hand, Window Attention, Strided Attention, KV Compression, StreamingLLM, H_2O, and H_2O(Rebuilt) show reductions of 8.9%, 7.6%, 14.5%, 11.7%, 9.2% and 8.0% in BLEU scores, respectively. A similar trend is also observed in the learning curve illustrated in Appendix <ref>. 2) Our proposal achieves high efficiency while maintaining performance competitive with full attention. Specifically, it is evident that our method demonstrates a consistent throughput across various generated text lengths, whereas full attention suffers from a significant drop in throughput as the generated text length increases. Notably, our method outperforms full attention by 151.4% and 221.8% when generating text lengths of 768 and 960, respectively. 3) Existing SOTA methods, i.e., StreamingLLM and H_2O, sacrifice inference quality in exchange for improved throughput. Though StreamingLLM and H_2O have slightly higher throughput, their performance on natural language generation suffers considerably. §.§ Stream Generation To show the real-world applicability of our proposal, we emulate the performance of the models on an infinite streaming dialogue, i.e., StreamChat, and on question-answering tasks, i.e., StreamEval. For StreamChat, we chunk the streaming chat with a size of 4,096 to evaluate the quality of the generated answers against different sequence lengths. The experimental results are reported in Table <ref>. For StreamEval, we report the accuracy of the models' responses after multiple queries in Figure <ref>. From Table <ref> and Figure <ref>, we have the following observations: 1) In the table, our method demonstrates consistent performance across different sequence lengths, which justifies its efficacy in streaming dialogue, especially in length extrapolation and capturing high-importance tokens. While full attention exhibits the best performance on the first subset (length in range (0, 4096]), its performance rapidly declines as the streaming sequence length surpasses the pre-training window size, and eventually becomes almost 0. 2) In the figure, our method consistently maintains high accuracy, even when the number of queries exceeds 20, which underscores the superiority of our proposed method. On the other hand, full attention and strided attention display competitive performance for limited numbers of queries. However, they suffer a significant drop in performance due to Out-of-Memory (OOM) issues, which arise as excessive KV states accumulate with the number of queries. This observation justifies the necessity of sparse attention. However, Window Attention and StreamingLLM demonstrate lower accuracy compared to our approach, primarily due to their fixed heuristic policies. KV Compression exhibits suboptimal accuracy due to the information loss in token compression. Benefiting from a dynamic eviction strategy, H_2O and H_2O (Rebuilt) consistently demonstrate competitive performance compared to our method. §.§ Natural Language Modeling We evaluate the performance of various methods in natural language modeling on the CNN Dailymail and SAMSum datasets. We report perplexity (ppl.) as the metric to compare the performance of different methods across different sequence length subsets. Similar to Section <ref>, the length in each subset is in the range of ((i-1)×1024, i×1024] for (i=1, 2, …).
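For reference, the logarithm of perplexity per length bucket can be computed as in this minimal sketch, where log-perplexity is simply the mean token-level negative log-likelihood; the input format is an assumption.

    def bucketed_log_ppl(nll_per_token, bucket=1024):
        # Group token-level negative log-likelihoods into length buckets
        # ((i-1)*1024, i*1024] and return the log-perplexity of each bucket.
        buckets = {}
        for pos, nll in enumerate(nll_per_token, start=1):
            i = (pos + bucket - 1) // bucket      # 1-based bucket index
            buckets.setdefault(i, []).append(nll)
        # log(perplexity) = mean negative log-likelihood within the bucket
        return {i: sum(v) / len(v) for i, v in buckets.items()}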
Figure <ref> illustrates the logarithm of perplexity for different methods across various modeling intervals. It is evident that our method, StreamingLLM, H_2O, and H_2O(Rebuilt) consistently maintain the lowest perplexity. They are effective in preserving the original attention distribution with sparse attention and therefore demonstrate superior performance in length extrapolation. As the sequence length increases, KV Compression compresses more tokens, leading to a gradual increase in perplexity. Although full attention exhibits the best performance in the shortest input length subset ([0, 4096]), its performance quickly degrades when the input length surpasses the size of the pre-training window. §.§ Ablation Study §.§.§ Influence of Attention Sparsity We explore ADORE's performance for different K in adaptive token release. In particular, we configure K in the range of {48, 96, 128, 192} for fine-tuning the model and top-m as {48 × 2, 96 × 2, 128 × 2, 192 × 2} for a fixed cache size. The inference performance and the corresponding training loss are presented in Table <ref> and Figure <ref>, respectively. Figure <ref> shows that when K values are set to 96, 128, and 192, the differences in training loss are minimal. This indicates that retaining the tokens with the highest top-K attention weights is sufficient, and further increasing K does not yield substantial improvements in model performance. From Table <ref>, it can be observed that there is no significant improvement in the quality of the generated text when m increases from 96 × 2 to 192 × 2, which, however, is accompanied by a notable decrease in throughput. Therefore, it is essential to select an appropriate set of K and m that balances throughput and the quality of the generated text. §.§.§ Influence of KV States Rebuild We evaluate the impact of different R in the KV states rebuild. Specifically, we select R in the range of {0, 8, 16, 32} and summarize the inference performance in Table <ref>. The results demonstrate that as R increases, the model's performance first improves. However, the improvement comes at the cost of a reduction in throughput. When the number of rebuilt tokens is further increased from 16 to 32, we observe an improvement of 1.5% in BLEU, 1.0% in ROUGE, and 0.4% in BERT-F. However, this minor improvement is accompanied by a 34.4% decrease in throughput. This indicates that selecting the appropriate number of rebuilt tokens is crucial for maintaining a trade-off between throughput and quality during the inference process. §.§.§ Effectiveness of Controller Module Since we use the controller module to predict top-K attention weights in advance, we next investigate how it affects overall performance. In particular, we adjust the module with the following variants: 1) w/o GRU: directly using the MLP for predicting the keeping/dropping probability of tokens; 2) ADORE_d'=64: setting the hidden size of the controller to 64; 3) ADORE_d'=128: setting the hidden size of the controller to 128. We first report accuracy and F1 scores on the dataset used to fine-tune the controller module, as detailed in Section <ref>. Then, we report BLEU, ROUGE, and BERT-F scores on the UltraChat benchmark, which further illustrate how the performance of the controller module influences the performance of the LLMs. We summarize the results in Table <ref>.
Our observations are as follows: 1) The GRU is crucial for the controller module to serve as an effective alternative to full attention; 2) An improved controller module results in enhanced performance during the inference process, as it offers a more accurate approximation of sparse attention. § RELATED WORK §.§ Sparse Attention Several works have attempted to integrate sparse attention into transformer-based models. This integration reduces the quadratic computational complexity in the sequence length, making it possible to process longer sequences. Some studies adopt fixed-pattern sparse strategies <cit.>, while others focus on sparsification based on the distribution and features of self-attention <cit.>. However, these methods either optimize bi-directional attention encoding, as in BERT, or fail to yield a practical improvement in the inference speed of language models <cit.>. This is because the reduction in the number of tokens does not yield significant benefits on CUDA <cit.>. To address this issue, in the LLM inference process, we propose applying dynamic sparse attention to the storage of the KV cache, thereby fundamentally enhancing the throughput of the LLM. §.§ Efficient Inference for LLMs The efficiency of LLM inference is attracting increasing attention <cit.>. Recent research has primarily focused on two aspects, systems and algorithms, aiming to enhance LLM inference efficiency. In recent years, numerous systems dedicated to LLM inference have emerged, such as FasterTransformer, Hugging Face Accelerate <cit.>, FlexGen <cit.>, and vLLM <cit.>. These systems often emphasize optimization from hardware accelerators and CUDA kernels. On the other hand, algorithms like Early-Exit <cit.>, FlashAttention-2 <cit.>, and Continuous Batching <cit.> attempt to optimize LLM inference performance by reducing computational costs. In this paper, our proposed method is orthogonal to all mainstream LLM inference systems and most algorithmic optimizations, and it can be used in parallel with these methods. §.§ Length Extrapolation for LLM Inference Length extrapolation aims to enable language models to maintain satisfactory performance when applied to super-long sequences as well. Current research primarily focuses on finding improved representations for positional encoding. Rotary Position Embeddings (RoPE) <cit.> attempt to transform absolute positions into relative position encodings for length expansion. Furthermore, ALiBi <cit.> introduces relative positional information by imposing on the attention matrix a penalty bias proportional to the relative distance. However, current approaches still struggle to model extremely long texts effectively. Simultaneously, when dealing with long texts, a major limiting factor lies in GPU memory overflow. In this paper, our approach extends the inference length of LLMs by fixing the attention window size and adaptively releasing tokens, which is designed to maximize the inference length without significantly compromising performance. § CONCLUSION We propose an efficient sparse attention mechanism for the inference process of LLMs. This is achieved by adaptively releasing the KV states of the tokens with the lowest attention contribution in the cache while simultaneously rebuilding the states of tokens with the highest contribution during the step-by-step decoding of each token.
§ CONCLUSION

We propose an efficient sparse attention mechanism for the inference process of LLMs. This is achieved by adaptively releasing the KV states of the tokens with the lowest attention contribution in the cache, while simultaneously rebuilding the states of the tokens with the highest contribution during the step-by-step decoding of each token. Experimental results show that our approach significantly enhances the throughput of model inference without substantially compromising the quality of the generated text.

§ LIMITATIONS

In this paper, the primary limitation lies in the fine-tuning process required to align the model with our inference optimization method. Specifically, during fine-tuning, we still face an O(n^2) time complexity for self-attention, so there is no speed improvement while learning dynamic sparse attention. Furthermore, our method is not immediately applicable during inference; it requires additional computational overhead for fine-tuning and for training the controller to attain enhanced performance during inference.

Acknowledgments

We are grateful for the funding support from the National Natural Science Foundation of China under Grant Numbers 62302345 and U23A20305, the Natural Science Foundation of Hubei Province under Grant Numbers 2023AFB192 and 2023BAB160, the Xiaomi Young Scholar Program, and the Wuhan University Talent Startup Fund.

§ DATASET STATISTICS

In Table <ref>, we present the statistical information of the datasets used in our experiments, including dataset partitioning and sequence-length statistics.

§ IMPLEMENTATION DETAILS

In this section, we illustrate the details of our implementation, primarily covering training data collection for the controller module, fine-tuning with QLoRA, details of the controller module, inference settings, and hardware settings.

Training data collection for the controller module As detailed in Section <ref>, during the fine-tuning of full attention (the baseline) on the training set, we collect the word embeddings and the most frequent top-K indices of each sample as input data and labels for training the controller. Given that full attention models the entire sequence, it consistently yields the lowest loss during fine-tuning, thereby ensuring that the attention distribution it models is reliable and informative for capturing the top-K tokens.

Fine-tuning with QLoRA (1) Hyper-parameters: For all methods, we utilize the Adam optimizer with a learning rate of 3e-5, decayed by a rate of 0.98 every 40 steps. Regarding the QLoRA parameters, we uniformly set the rank r = 16 and the learning-rate scaling factor lora_alpha = 32. (2) Alignment fine-tuning: By collecting the top-K indices, we create attention masks for full attention that block the attention from the current token to the low-contribution tokens. This implements dynamic sparse attention during fine-tuning, resulting in a model aligned with our inference optimization approach. The fine-tuning time is approximately 3 hours per dataset (UltraChat, EverythingLM, and Math) on four GeForce RTX 3090 GPUs.

Details of the controller module (a) Controller network structure: (1) Input layer: a GRU layer with an input size of 4096 and a hidden size of 128; (2) Position layer: a fully connected layer projecting an input of size 1 to 128; (3) Interaction layer: a fully connected layer with a hidden size of 128 and a Tanh activation function; (4) Output layer: each per-token output is obtained through a fully connected layer followed by a sigmoid function, mapping it to [0, 1] for a cross-entropy loss over the sequence length.
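Putting the four layers together, a PyTorch sketch of our reading of this architecture is given below. How the GRU features and position features are combined before the interaction layer is not fully specified above, so the concatenation (and hence the 2 × hidden input width) is an assumption.

```python
import torch
import torch.nn as nn

class Controller(nn.Module):
    """Sketch of the controller module described above (assumptions noted)."""

    def __init__(self, embed_dim: int = 4096, hidden: int = 128):
        super().__init__()
        self.gru = nn.GRU(embed_dim, hidden, batch_first=True)   # input layer
        self.pos = nn.Linear(1, hidden)                          # position layer
        self.interact = nn.Sequential(                           # interaction layer
            nn.Linear(2 * hidden, hidden), nn.Tanh()             # concat: assumed
        )
        self.out = nn.Linear(hidden, 1)                          # output layer

    def forward(self, embeddings, positions):
        # embeddings: (batch, seq, 4096); positions: (batch, seq, 1)
        h, _ = self.gru(embeddings)
        z = self.interact(torch.cat([h, self.pos(positions)], dim=-1))
        return torch.sigmoid(self.out(z)).squeeze(-1)  # keep-probability per token

# Toy forward pass over a 16-token sequence.
model = Controller()
pos = torch.arange(16.0).view(1, 16, 1).expand(2, -1, -1)
p = model(torch.randn(2, 16, 4096), pos)
print(p.shape)  # torch.Size([2, 16])
```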
(b) Training details: We employ the Adam optimizer with a learning rate of 0.005, accompanied by a decay rate of 0.98 every 2000 steps. We split the collected dataset into a training set and a validation set with an 8:2 ratio, and save the model parameters achieving the highest F1 score on the validation set.

Hardware settings We utilize four GeForce RTX 3090 GPUs, with a total runtime exceeding 20 hours.

§ ANALYSIS OF DYNAMIC SPARSE ATTENTION

In this section, we first demonstrate the effectiveness of applying a uniform scheduling policy across different layers. Then we showcase the superior performance of dynamic sparse attention in comparison to other methods and delve into the underlying reasons for its effectiveness. Following the settings in Section <ref>, we select the top-K/2 token sets for each layer and take the top-K tokens that appear most frequently in these sets as the uniform token set. In Figure <ref>, we observe that the uniform token set covers the majority of the top-K/2 token sets at each layer. Additionally, in Figure <ref>, we illustrate the cumulative softmax attention scores from the uniform token set for the current token across different layers, demonstrating that the uniform token set can effectively replace the contributions of the top-K/2 token sets at each layer.

Figure <ref> compares the loss under QLoRA fine-tuning for various methods on UltraChat, EverythingLM, and Math. It is evident that the loss obtained by focusing on the tokens with the top-K highest attention (dynamic sparse attention) remains consistent with the full-attention approach and is notably lower than that of the other methods. The superior performance of dynamic sparse attention can be attributed to the observation that only a small portion of tokens contributes significantly to the attention mechanism when modeling the current token. The softmax scores of self-attention on the CNN/DailyMail dataset are presented in Figure <ref>. The blue curve in Figure <ref> forms a long-tail distribution, i.e., a few tokens account for most of the attention mass while the others can be disregarded.

§ ANALYSIS OF THE UNIDIRECTIONAL GRU

In this section, we explore the impact of the GRU unit for adaptive token release and selection on runtime. Table <ref> reports the time needed to generate 100, 200, and 500 tokens, as well as the time spent in the GRU unit for adaptive token release and selection. It can be seen that the GRU unit's average runtime is merely 2.9% of the total runtime, suggesting that the runtime overhead attributed to the GRU unit is negligible. In addition, we compare the performance of unidirectional and bidirectional GRUs in terms of top-K prediction accuracy, F1-score, and runtime (in seconds, for 500 tokens) to illustrate why we choose the unidirectional GRU as the primary architecture for the controller module. Table <ref> reports the accuracy, F1-score, and runtime. We observe that the bidirectional GRU does not significantly improve performance compared to the unidirectional GRU; instead, it is more computationally expensive in terms of runtime because it requires both forward and backward computations at each time step.
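The runtime gap between the two variants is straightforward to reproduce with a small microbenchmark. The sketch below uses input shapes matching the controller described above, but the sequence length, batch size, and iteration count are our own arbitrary choices, so the absolute numbers are only indicative.

```python
import time
import torch
import torch.nn as nn

def time_gru(bidirectional: bool, steps: int = 50) -> float:
    """Rough wall-clock cost of one forward pass over a 500-token sequence."""
    gru = nn.GRU(4096, 128, batch_first=True, bidirectional=bidirectional)
    x = torch.randn(1, 500, 4096)
    with torch.no_grad():
        gru(x)  # warm-up pass, excluded from timing
        start = time.perf_counter()
        for _ in range(steps):
            gru(x)
    return (time.perf_counter() - start) / steps

print(f"unidirectional: {time_gru(False):.4f} s/forward")
print(f"bidirectional:  {time_gru(True):.4f} s/forward")
```

The bidirectional variant roughly doubles the recurrent computation, consistent with the runtime comparison reported above.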
http://arxiv.org/abs/2407.02957v1
20240703094946
Past, Present, and Future: A Survey of The Evolution of Affective Robotics For Well-being
[ "Micol Spitale", "Minja Axelsson", "Sooyeon Jeong", "Paige Tuttosı", "Caitlin A. Stamatis", "Guy Laban", "Angelica Lim", "Hatice Gune" ]
cs.RO
[ "cs.RO" ]
Past, Present, and Future: A Survey of The Evolution of Affective Robotics For Well-being

Micol Spitale^1,2,  Minja Axelsson^1,  Sooyeon Jeong^3,  Paige Tuttösí^4,  Caitlin A. Stamatis^5,  Guy Laban^1,  Angelica Lim^4,  Hatice Gunes^1  ^1 University of Cambridge, UK; ^2 Politecnico di Milano, Italy; ^3 Purdue University, USA; ^4 Simon Fraser University, Canada; ^5 Northwestern University, USA; contact author email: micol.spitale@polimi.it

§ ABSTRACT

Recent research in affective robots has recognized their potential in supporting human well-being. Due to rapidly developing affective and artificial intelligence technologies, this field of research has undergone explosive expansion and advancement in recent years. In order to develop a deeper understanding of recent advancements, we present a systematic review of the past 10 years of research in affective robotics for well-being. In this review, we identify the domains of well-being that have been studied, the methods used to investigate affective robots for well-being, and how these have evolved over time. We also examine the evolution of this multifaceted research topic through three lenses: technical, design, and ethical. Finally, we discuss future opportunities for research based on the gaps we have identified in our review, proposing pathways to take affective robotics from the past and present to the future. The results of our review are of interest to human-robot interaction and affective computing researchers, as well as clinicians and well-being professionals who may wish to examine and incorporate affective robotics in their practices.

affective robotics, survey, well-being, affective computing, human-robot interaction

§ INTRODUCTION

According to the World Health Organization (WHO), well-being is defined as “a positive state experienced by individuals and societies” that “is a resource for daily life and is determined by social, economic and environmental conditions <cit.>”. Yet, approximately 1 in every 8 people, or 970 million individuals worldwide, were living with a mental disorder in 2022; mental, neurological, and substance use disorders account for 10% of the global burden of disease and 25.1% of the non-fatal disease burden <cit.>; and more than 80% of people with mental health conditions lack access to quality and affordable care <cit.>. This has created a pressing need to support people's well-being. Affective Robotics offers a promising approach to enhance human-robot interaction and improve overall well-being by studying how robots can recognize, interpret, process, and simulate human affect <cit.>. Since the beginning of research in social robotics <cit.> in the early 2000s, the ability to understand human affect and emotion has played a key role in enabling robots to be helpful in various application areas <cit.>, especially for mental health and well-being support <cit.>.
For example, affective robots have been used to lead mindfulness meditations <cit.>, facilitate social bonding <cit.>, and support diet and physical activity <cit.>. In these contexts, being able to recognize and generate affect is extremely important because the goal of the interaction involves tracking or supporting a “positive state" in the user. However, studying, developing, and designing affective robots for well-being is still an open research area due to multiple challenges in this research landscape.

First, affective robotics is an interdisciplinary research topic that combines the fields of Affective Computing and Human-Robot Interaction. This intersectionality brings together multiple disciplines, making it challenging to consider various aspects simultaneously (Challenge 1, C1). To address this complexity, the field of affective robotics must tackle several challenges. It should develop computational models that accurately understand, model, and adapt to human behaviors. These models must be embedded into robotic platforms that can be used in real-world applications, such as well-being (technical challenge). Affective robots must be designed with features that enable smooth and seamless interaction with humans. This includes understanding how to design the robots themselves and how they can effectively interact with humans (design challenge). Affective robotics also has a responsibility to develop fair and ethical computational models, particularly in high-stakes application scenarios like well-being. Additionally, the field must conduct studies that are ethically compliant and ensure the well-being of both the humans and the robots involved in these interactions (ethics challenge).

Second, affective robotics relies on advancements in affective computing, which utilizes cutting-edge artificial intelligence (AI) models to understand human emotional states. However, the field of AI has undergone rapid evolution, particularly in recent years, with unprecedented growth in AI advancements (Challenge 2, C2).

Third, research on affective robots for well-being has primarily focused on a few sub-areas, such as dementia, with limited studies on other conditions like schizophrenia, depression, ADHD, and intellectual disability. Recent efforts involve collaborating with domain experts, such as mental health professionals and well-being experts, and including stakeholders in the design and research processes. However, it remains unclear how affective robotics can support patients and extend clinical care systems across diverse well-being domains, how much these stakeholders are involved in the research process, and how interdisciplinary collaborations are conducted (Challenge 3, C3).

Therefore, we conducted a literature review of the last 10 years of research in affective robotics for well-being from different points of view, namely technical, design, and ethical, to better understand the multi-faceted complexity of this research field (addressing C1, Solution 1). This review focuses on the trajectory of affective robots for well-being. Specifically, we identified existing gaps and areas that need further investigation, and provided insights into future opportunities to improve the state of the art and close the gaps (addressing C2, Solution 2).
In this review, we have also identified: i) the domains of well-being that have been studied in affective robotics research, establishing which areas are still unexplored and warrant further exploration; ii) the current methods used to investigate affective robotics for well-being; and iii) how these research processes and methodologies have changed over time (addressing C3, Solution 3).

§ BACKGROUND AND DEFINITIONS

The field of Affective Robotics (AR) has been increasingly applied in various domains, such as healthcare and well-being applications, where affective robots have demonstrated significant potential <cit.>. With the term well-being, we refer to a comprehensive concept that encompasses what it means to function as a healthy person across multiple domains <cit.>. For example, mental well-being is about achieving a positive state of mind <cit.>, and it involves aspects like the ability to cope with challenges, recognizing personal strengths, and finding purpose in daily life.

Affective Robotics sits at the intersection of the Affective Computing (AC) and Human-Robot Interaction (HRI) fields. AC is an emerging interdisciplinary field that integrates the affective and computational sciences and studies how machines can measure human affective states <cit.>. Analogously, HRI is an interdisciplinary field that encompasses expertise from cognitive science, psychology, robotics, design, and computer science, and it aims at understanding the dynamics of human-robot interactions <cit.>. As a result, Affective Robotics inherits the interdisciplinary nature of both the AC and HRI fields when applied in a well-being context.

From a computational and technical point of view <cit.>, affective robotics research needs to: (i) understand human behaviours while interacting with robots; (ii) model human-inspired behaviours in robots; (iii) adapt to the interaction context to meet the personal needs of the humans interacting with the robot; and (iv) translate the results obtained in controlled settings into real-world scenarios without compromising performance and efficiency. From a design point of view (i.e., how to design robot features and interactions with humans), the affective robotics field lags behind the advances in the HRI field, in which researchers collaborate with domain experts, e.g., teachers and psychologists, to design robots that can be used in real-world scenarios <cit.>. The involvement of stakeholders is not an easy task, but it becomes fundamental when applying technologies such as robots to high-stakes contexts like well-being. From an ethical perspective (i.e., considering moral, value, and legal implications), the potential for both positive and negative outcomes in a well-being context makes this an ethically complex issue, requiring careful ethical consideration to achieve a balance between the positive and negative aspects <cit.>. The AC community has made efforts in this direction by including a mandatory Ethical Statement section in submissions to the International Conference on Affective Computing & Intelligent Interaction (ACII), and by investigating the current ethical issues, positive and negative, which arise from the current state of affective computing <cit.>. However, these efforts have not yet been translated into the area of affective robotics.

Previous efforts have been made to survey the current state of the art of the affective robotics field, e.g., <cit.>.
For example, <cit.> reviewed the last decade of affective robotics work on well-being, focusing only on technical aspects without including any considerations of ethics or design. <cit.> conducted a literature review on affective touch during human-agent and human-robot interactions. <cit.> reviewed past works on robotics for mental health and well-being without focusing on affective aspects, similarly to <cit.>, which focused exclusively on the introduction of robots in health psychology applications (e.g., behavioural change and emotion regulation interventions). None of these previous surveys has provided a comprehensive snapshot of the last decade in affective robotics for well-being together with a future research agenda for the field focusing on the multi-disciplinary aspects that characterise it, namely technical, design, and ethical. Therefore, in this paper, we conducted a survey reviewing the papers from the last decade on affective robotics for well-being, analysing the evolution of this research field from technical, design, and ethical points of view.

§ METHOD

This section describes the methodology used to identify the papers included in this survey by reporting the procedure, the search query, the inclusion criteria, the selection process, the data analysis and extraction, and the terminology used.

§.§ Procedure

To define our systematic literature review approach, we followed the guidelines established by Nightingale <cit.> and expanded upon a previous preliminary survey on this topic undertaken by two authors of this work <cit.>. Our methodology adhered to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) framework <cit.>, widely acknowledged as the gold standard for systematic reviews and meta-analyses. PRISMA ensures the quality and replicability of the review, organizes the final manuscript using standardized headings, and facilitates the assessment of the study's strengths and weaknesses <cit.>. Specifically, the PRISMA framework presents an evidence-based, minimal set of reporting items for systematic reviews, structured around the four-phase flow depicted in Figure <ref>. The initial phase involves identification, wherein all potential manuscripts are gathered. Subsequently, during the screening phase, papers meeting the eligibility criteria are selected based on an assessment of their titles and abstracts. In the selection phase, full texts are scrutinized, and only those meeting the same eligibility criteria are retained. Finally, in the inclusion phase, all selected papers undergo analysis to address the study's research questions.

§.§ Search Query

We gathered papers based on the terms found in their titles, abstracts, and keywords, utilizing the Scopus, ACM Digital Library, and IEEE Xplore databases. Scopus was selected due to its broad coverage across multiple disciplines beyond computer science. The ACM Digital Library and IEEE Xplore were chosen for their extensive coverage of human-computer interaction, computer science, and engineering. The search queries were adjusted slightly to accommodate the requirements of each database. We provide the search query used for Scopus:

We executed the search queries to identify potential papers for review. After that, we removed duplicate entries and stored the references in a Google Sheets file.
§.§ Eligibility Criteria

Papers were included only if:

* they address well-being or health (both physical and mental);
* they model, analyse, design, or discuss the ethics of the affective capabilities of a robot;
* they use or discuss physical robots (e.g., via video-conference or video);
* their title, abstract, and keywords contain at least one keyword describing such technology and one keyword from the search query keywords;
* the robot acts as an agent, not as a tool or medium (e.g., to connect people).

Papers were excluded if:

* they were published before May 2012 or after the day the search query was actually run, i.e., Oct 31, 2022;
* they are not in English;
* they are not in peer-reviewed journals or conference proceedings;
* they are surveys that do not provide any additional contributions (e.g., tools or protocols);
* they are inaccessible to the authors;
* they are of low quality (i.e., they do not report the details necessary to evaluate their eligibility).

§.§ Selection Process

All manuscripts collected from the databases were screened based on their titles and abstracts. Then, a thorough examination of the full texts was conducted to ensure compliance with the eligibility criteria. Prior to the screening process, a random sample of 5 papers was selected from the 2022 Scopus search and evaluated by five reviewers to establish consensus on the inclusion and exclusion criteria. All reviewers were in full agreement on the inclusion or exclusion of the sample papers. The remaining papers were then randomly distributed among the reviewers, with each reviewer screening a subset individually. Screening of titles and abstracts was conducted according to the predefined eligibility criteria. In cases where a reviewer was uncertain about a particular paper, it was flagged for further discussion among all reviewers. The same set of reviewers was subsequently assigned different subsets of manuscripts for full-text screening and data extraction.

In total, 639 papers were collected using the specified search query (refer to Figure <ref>). These manuscripts underwent initial screening based on titles and abstracts, resulting in the selection of 289 papers, whose full texts were then reviewed. This process yielded a final count of 216 manuscripts, as reported in the Supplementary Materials. Among these 216 manuscripts, a total of 239 studies were identified, with some papers containing multiple studies. The comprehensive list of extracted papers is publicly accessible via a spreadsheet stored in the GitHub repository[https://github.com/Cambridge-AFAR/aff-rob-survey.git].

§.§ Data Analysis and Extraction

We defined a set of variables that can help understand the evolution of this field, encompassing various types such as numerical (e.g., the number of participants in each study), categorical (e.g., the type of robot), and qualitative (e.g., design methods), by creating a codebook of variables that can be found in the same GitHub repository. We extracted the relevant information from each paper using an analytical approach to assign a value to each of the defined variables. Numerical variables were summarized with descriptive statistics, while qualitative variables were analyzed using pattern-based methods. Categorical variables were derived through either a top-down or a bottom-up approach, as in a previous work <cit.>. The former involved pre-defining variable categories based on existing literature; the latter entailed deriving categories from the data collected across the selected papers.
These variables are collected in Table <ref>.

§.§ Terms

The following paragraphs describe the terminology used in this survey.

Author disciplines We defined the authors' disciplines using the categories defined by <cit.>. As a result, we identified the following 15 categories of disciplines: art, biology, business, communication, computer science, dentistry, design, education, engineering, humanities, literature, medicine, philosophy, psychology, and social sciences.

Study type With this term, we refer to the type of study that was conducted by the authors; specifically, we categorised the studies into qualitative, quantitative, theoretical, and meta-analysis works.

Study session With this term, we refer to the number of sessions in which the participants of each study interacted with the affective robot.

Psychological target outcome The psychological target outcome is the specific psychological construct that the robot targets or treats <cit.>, for example, anxiety, loneliness, depression, or well-being. Notably, some studies of affective robots focus on well-being, whereas others target specific mental health outcomes (e.g., depression or anxiety). Mental health is typically more narrowly defined as the absence of certain disorder-specific symptoms. Well-being represents a broader term that may encompass factors such as overall life satisfaction, the ability to cope with stress, and resilience.

Application context This refers to the specific conditions and settings in which the study and/or findings are intended to be applied, and we identified the following categories: educational (e.g., the robot teaches or supports the population's learning), therapeutic (which includes both assessment and treatments or interventions <cit.>), mental (i.e., improving psychological well-being, e.g., stress or anxiety, which can be corollary to the main diagnosis), health (i.e., any paper which focuses on public health and general well-being contexts and does not fit into any of the others), psychological (i.e., specific psychological phenomena, e.g., joint attention, theory of mind), home (i.e., robots intended to be used either at home or in residential centers), creative (i.e., improving people's creativity), play (i.e., robots that promote play and game activities), social (i.e., the development of a social relationship with the robot), and others (i.e., if no context is specified). The definitions of these application contexts are in line with previous work by <cit.>.

Experimental setting The experimental setting is the environment where participants interacted with the robot, for example laboratories, schools, or homes.

Study design This is the design of the study, which can be within-subjects, between-subjects, or a randomized controlled trial, among others <cit.>.

Mode of interaction This variable refers to the modality of the interaction between the robot and the human. For example, if participants were asked to evaluate a robot just by watching a video of it, we labelled the mode of interaction as “video", while if participants interacted physically with the robot, we categorised it as “physical".

Robot operation The robot operation refers to how the robot was controlled, and we identified the following categories: wizard-of-oz (if the robot was controlled by the researchers or participants in the study), semi-autonomous (following the definition in <cit.>, e.g., if the robot performed actions autonomously but the decisions were controlled by the researchers), and autonomous (if the robot could perform the whole interaction autonomously).
Affective behaviour generation and perception With these variables, we label studies according to the capabilities of the robot to generate or express affect-based behaviours (e.g., facial expressions), and the capabilities of the robot to perceive affect-based behaviours, following the suggestions in <cit.>.

Aims Motivated by the multi-disciplinary nature of the affective robotics field and the challenges that emerged from the literature (see Section <ref>), we defined three specific aims for the studies surveyed: technical (i.e., studies that examine AR from a technological perspective, for example works that presented a computational model to automatically detect emotions in human-robot interaction), design (i.e., studies that examine AR from a design perspective, for example works that presented a focus group to design the features of an affective robot), and ethical (i.e., studies that examine AR from an ethical perspective, for example works that investigated the ethical implications of introducing robots in public spaces).

Theory grounded This variable refers to studies that used and implemented systems, artifacts, or features backed up by psychological theories (e.g., theory of mind).

Data collection method This variable refers to the methodology adopted for collecting data in the studies, for example questionnaires, interviews, and surveys.

Computational models This variable refers to the type of computational models used in the studies to develop the affective system/artifact/features.

Design approach and stakeholder involvement These variables capture which design methods have been used to design the AR, the interaction, or parts of the robot, and whether stakeholders were involved in the design process of the AR. For example, we categorized the surveyed papers by the following criteria: 1) if the paper claimed to apply participatory or co-design, we assigned it that label; 2) if the paper claimed neither participatory nor co-design but described the involvement of users, we assigned it a user-centred design label; 3) if the paper claimed neither of those, we labelled it as Not Reported.

Ethics on user safety, society, and implications These variables encompass whether the paper discussed implications for user safety (e.g., deception, human contact) or societal impacts (e.g., fairness, bias, and justice). Moreover, notes were collected on the specific aspects and categories of ethical discussion, which were later organised and classified based on the framework in <cit.>.

Ethics approval With this variable, we labelled all the studies that received approval from internal (e.g., university) and/or external (e.g., hospital) entities.

§ EVOLUTION OF AFFECTIVE ROBOTS FOR WELL-BEING

This systematic review aims to better understand the evolution of affective robots for well-being over the last decade from technological, design, and ethical perspectives, tackling the challenges C1 and C2 reported in Section <ref>. This section describes the evolution of the technological, design, and ethical aspects of affective robotics.

§.§ Technical and Computational Advancements Over Time

We found 21 studies with only a technical contribution (i.e., their main contribution focused solely on technical aims) among 81 studies that focused on technical aspects alongside design and ethical aims. This aligns with the inherently multidisciplinary nature of the HRI field, as highlighted by <cit.>.
Technical aspects have been the focus of the field since the rise of HRI <cit.>. Consequently, HRI in recent years incorporates insights from various disciplines, including ethics and design, in addition to technical contributions. We classified the technical studies based on the computational models employed, the perception and generation capabilities of the affective robot explored, and the robotic platform used for such applications.

§.§.§ Computational Models

In terms of the computational models employed in affective robots, interestingly, until 2017, studies employed affective robots using a variety of computational techniques (e.g., control systems, algorithms, state machines, and cognitive architectures), as shown in Figure <ref>. However, between 2017 and 2019, studies started employing statistical models via empirical research, with almost all of the research conducted being empirical. Generally, statistical models have been the most popular computational models used in affective robotics studies (46% of the studies). This is followed by studies deploying AI-based models on affective robots (27%), such as deep learning models, machine learning, reinforcement learning, and natural language processing, which gained popularity in 2020 (more than 50% of the studies), 2021 (40% of the studies), and 2022 (50% of the studies). The increasing use of AI-based models over statistical methods signifies a desire within the field to develop more sophisticated models aiming for automation. These models seek to accurately learn from large datasets to recognize, interpret, and respond to human affective states, thus keeping up with the state of the art in affective computing and machine learning at large.

§.§.§ Affective Capabilities

The affective perception capabilities embedded in robots evolved from single-modality emotion recognition models, with the majority of studies using facial expression recognition systems until 2020, to multi-modal emotion recognition approaches in which authors have begun to explore data-driven approaches drawing on multiple data sources (e.g., in <cit.>, the authors considered facial expressions and touch behaviour to detect emotions). While autonomous affective perception in studies primarily relied on the robot's ability to perceive emotions from facial expressions without human intervention, a different pattern emerged for the affective generation capabilities of robots, which utilized both WoZ (Wizard of Oz) and autonomous data-driven computational models. This is due to the need to control the interaction to ensure that most of the effects come from the generated behaviour of the robot; recently, this trend has been evolving towards more autonomous generation. This shift is due to the current advancements in AI in general, and in generative AI in particular <cit.>, which have exploded in the last few years. However, the generation of human-like behaviours (e.g., facial expressions, speech, gestures) is still a challenge that needs further investigation within affective computing <cit.>. Most studies on behaviour generation in human-robot interaction have focused on linking generation to a specific interaction paradigm, rather than solely investigating computational affect generation models. For example, <cit.> designed a robot that could generate empathic behaviors in order to optimize user engagement. This approach prioritizes creating interactions that foster empathy, as opposed to solely exploring computational models of affect generation.
The development and implementation of computational models to generate affective behaviours can be very challenging, and such models may also be limited in capturing the nuances and contextual factors involved in human emotional expression and perception. Therefore, grounding affect generation in specific interaction paradigms, like human-robot interaction in well-being applications, may yield more insightful results for better understanding the complex nature of affective behaviours.

§.§.§ Robotic Platforms

Lastly, we identified humanoid robots as the most common robotic platforms (52%) within the technical studies. They are followed by machine-like (17%), pet-like (11%), and bio-inspired (2%) robots. The remaining studies did not report the robotic platform used. Over the past decade, humanoid robotic platforms like Nao and Pepper have remained relatively unchanged, while the adoption of computational models has increased significantly. This raises concerns about the appropriateness of these platforms for AI-based interactions, given their limitations in terms of computational capabilities and adaptability. These platforms indeed present several limitations to date, making it challenging to deploy such robots in everyday life and for daily applications. First, the current robotic platforms lack sufficient local computational power, which is crucial for processing complex AI models and handling large amounts of data in real time <cit.>. To date, researchers largely use cloud computing or external service APIs, which leads to latency and reduced responsiveness, making interactions less seamless and less natural <cit.>. Second, the computational models and algorithms used in these robotic platforms are often not designed to handle the complexity and variability of real-world interactions or to be embedded into a robot. This can result in robots that are not able to effectively learn from their interactions with humans or adapt to contextual situations. Last, the robotic platforms are often designed for specific tasks (e.g., the QT robot for interacting with children with autism <cit.>) or applications, which limits their scalability and flexibility. This makes it difficult to re-use them for different tasks in real-world scenarios.

§.§ Design Research Changes Over Time

76 of the surveyed works were classified as having a design focus. In general, our data show that the largest numbers of design-focused works in affective robotics were published in 2018 (13 works) and 2019 (14 works), increasing steadily from 2013 (1 work) to this point, and then decreasing from 2020 (9 works) onward (see Fig. <ref>). As pointed out by Lupetti et al. <cit.>, the first explicit design track was introduced at the HRI conference in 2015 <cit.> and at the Ro-Man conference in 2016 <cit.>. The increasing number of design works in our data aligns with the design tracks established in 2015 and 2016, placing an emphasis on design work published in the following years. In our data analysis, we classified each paper as having either affective aims (i.e., the study had an explicit aim of exploring the affective design, capability, etc. of a robot) or an affective component (i.e., the paper explores affective robots but does not have them as its main focus). Out of 76 design works, 24 had affective aims. The peak in the proportion of design studies with affective aims was in 2017 (57%), to which it rose steadily from 25% in 2015, before falling steadily to 11% in 2020 (see Fig. <ref>). A recent peak in affective aims was in 2020 with 50%.
However, no design works with affective aims were observed in 2022. These trends may indicate a cyclical interest in exploring affective aims in design studies, with no clear increasing or decreasing trend.

§.§.§ Level of User Involvement

We classified the 76 identified design studies into categories based on the level of user involvement: 14 participatory design (or co-design), 57 user-centred, and 5 with none reported. From 2018 onward, the proportion of reported participatory design contributions has been increasing (8% in 2018, reaching 38% in 2021 and 33% in 2022; see Fig. <ref>). According to Bødker, the second wave of HCI, which began around 1999 and continued until 2006, placed an emphasis on user-centred design and participatory design <cit.>. As the design tracks at the HRI and Ro-Man conferences were only established in 2018 and 2019, the HRI design trends may be following HCI, with a delayed increase in participatory design when compared to HCI. Indeed, Lupetti et al. <cit.> identify an increasing trend toward user-centred design and including users in the design process through participatory methods in HRI <cit.>. This aligns with the work of Alves-Oliveira et al. <cit.>, calling for more user-involved design to truly address user needs and problems. However, published design works on affective robots for well-being have decreased from 2019 (14 works) to 2022 (6 works). This decrease may be related to the challenges in designing social robots, e.g., designing for purpose and artificial emotion expression <cit.>. Designing affective robots for well-being specifically has several unique challenges, such as appropriately personalizing verbal expressions while preserving well-being practice <cit.>, and incorporating stakeholders such as carers into the robot design <cit.>. Additional challenges emerge when researchers attempt to design affective robots intended to be deployed in healthcare systems for direct patient care. Designs must consider how an affective robot could fit into an already-complex clinical workflow, how to ensure appropriate patient privacy and risk management (e.g., in the case of expressed suicidal thoughts), and how to prioritize stakeholder engagement with a multidisciplinary treatment team. Such factors may make the (co-)design of affective robots for well-being more challenging.

§.§.§ Who Is Involved in Design Studies

In the PD studies, 6 out of the 14 studies reported children or teenagers as co-designers. The WHO has recommended early detection and intervention as one of the key strategies for improving young people's mental health and resilience <cit.>. Using PD strategies with this group in particular may reflect one of the key aims of PD: empowering the users of the designed technology <cit.>. Three of the PD studies described involving mental health professionals (e.g., psychologists or mental health coaches), and two mentioned involving familial caregivers. In the 57 user-centred studies, 22 involved clinicians (e.g., healthcare staff or disability service workers), and 5 involved family members or informal caregivers in the design process (four of these studies focused on older adults and their unique challenges, e.g., dementia, and one on children's kinesiology). 29 of the studies explicitly collaborated with (prospective) end-users. These figures reflect that most of the studies focused on collaborating with only one of the stakeholder groups (i.e., end-users or other stakeholders).
Only 9 of the studies focused on multiple stakeholder groups (e.g., both patients and clinicians). The tendency to focus on a single stakeholder group is understandable from a pragmatic perspective but may limit external validity given the highly multidisciplinary nature of applied patient care. Given that many affective robots are intended to be deployed in healthcare settings, an important future direction for user-centred design studies will be to involve patients, caregivers, and a range of different provider types, ideally together as part of a collaborative design process.

§.§.§ Ethics in Design Research

The proportion of design studies that discuss ethical implications has generally been increasing, climbing from 29% in 2017 to 83% in 2022 (see Fig. <ref>). In recent years, works have called for the integration of ethics into the design process <cit.>, as well as striving toward equitable robot design <cit.>. These works emphasize the importance of diverse user and stakeholder involvement <cit.>. Overall in our dataset (from 2013-2022), a higher percentage of design studies with vulnerable user groups (e.g., neurodivergent children, people with PTSD or other mental health issues) discussed ethical implications (43%), compared to studies with non-vulnerable user groups (31%). This is both unsurprising and encouraging, as it is often suggested that vulnerable populations can be disproportionately impacted by unethical design choices <cit.>.

§.§ Ethical Considerations and Guidelines

Overall, very few papers were written with a dedicated critique of the ethical concerns related to their work, e.g., discussing the safety, fairness, or privacy of their application. Only 36 papers were identified as having a primarily ethical theme (i.e., the primary contribution is a discussion of the ethics of robots for well-being), and only 41% of papers had any mention of ethical implications. As such, it is difficult to complete a quantitative assessment of ethical implications. We will instead perform a more qualitative assessment, classifying the discussions and commenting on trends.

§.§.§ Discussions of Ethical Implications

For each paper flagged as discussing ethics in some form, a freehand note was made with the keywords and main topics of concern brought forth in the paper. Multiple frameworks exist to guide ethical discussions of affective systems, including <cit.>, <cit.>; however, these frameworks have a primarily AI focus. Only 40% of our papers used autonomous capabilities <cit.>, with 14% having affect generation, less than 1% having affect perception <cit.>, and 16% having both affect generation and perception capabilities. However, as part of our inclusion criteria, all papers were required to include a social robot. As such, to guide the conversation on ethical implications, we refer to “A Code of Ethics for the Human-Robot Interaction Profession" <cit.>. Here the authors suggest four primary categories of design principles to consider for human interaction: human dignity considerations, design considerations, legal considerations, and social considerations. Each noted discussion topic was organized into one of these four categories, with one additional category, “societal considerations", which includes topics pertaining to infrastructure, employment, and similar themes. In Fig. <ref> we see that dignity and design implications are discussed most often.
Examples of dignity considerations that were seen include emotional needs, such as autonomy, personhood, and preference; rights to privacy, such as monitoring, data privacy, and policing; and respect for humans' emotional and physical frailty, such as negative reactions, emotional exploitation, embarrassment, infantilization, powerlessness, infection, and physical safety. For design considerations, we saw commentary on topics such as trust, reliability, equity, fairness, ableism, control, transparency, accessibility, and maleficence. Legal considerations included topics such as informed consent, liability, and accountability, while social considerations included attachment, deception, and coercion. Although we see a vast array of ethical topics discussed, we note that rather than many papers each commenting on a few ethical considerations impacting their study, we instead saw a smaller set of papers (69) that had more thorough discussions of several ethical implications.

Regarding the sub-considerations provided by Riek and Howard <cit.>, all of the human dignity considerations are covered in the ethical discussions of our sample; however, the other three categories are missing several applicable discussion points. We instead see the authors opting to consistently refer to the same topics presented in the previous paragraph. For design considerations, there is a lack of consideration of opt-outs and kill switches, real-time status indicators, and predictability of behaviour. Legal considerations, in general, are not thoroughly discussed. Yet, with the advent of the new EU Artificial Intelligence Act <cit.>, these discussions will become exceedingly important. For social considerations, two important points were lacking: reducing the use of WoZ to avoid Turing deceptions <cit.> and limiting humanoid morphology. Both of these techniques are regularly employed in social robotics for well-being, with 40% of papers using WoZ techniques and 52% using humanoid robots. However, we do not see an increase in the discussion of social considerations in WoZ studies (Fig. <ref>). Roboticists will often employ WoZ to pilot ideas and reduce the burden of programming as well as the possibility of mistakes; however, these techniques involve deceit, can be particularly problematic for vulnerable users, and may result in increased expectations of robot behaviour that contradict the design consideration of predictability.

§.§.§ Disclosure of Ethics Approval

With the recent focus on Equity, Diversity and Inclusion (EDI), the number of institutions and granting agencies requiring a disclosure of ethics approval appears to be increasing. External approval shows no trends, with only 10 papers in total (5%) disclosing external approval. However, we do see the expected trend in internal ethics approval disclosure. Overall, 101 (47%) of the papers disclosed internal approval; yet, as can be seen in Fig. <ref> (left), prior to 2017 the proportion of ethics approval disclosures was quite low. Although there has been an increase, we still only see disclosure of ethics approval in about half of the papers. These results tell us that most institutions are handling ethics themselves.
Although it appears that only half of the research institutions require a disclosure of ethics approval, it is nevertheless difficult to comment on the number of institutions requiring ethics approval for these studies, as this number is likely higher than the number of disclosures: researchers themselves may have chosen not to disclose the approval. Geographically, we see that Eastern, Western, and Southern Europe, as well as Western Asia, stand out as having lower ethics disclosure proportions (Fig. <ref>, right). These discrepancies may be a result of funding and institutional regulations, where the institution or funder requires the disclosure of ethics approval, or possibly of cultural norms <cit.>.

§.§.§ Application Scenarios and Populations

Observing the proportions, interestingly, the highest proportion of papers containing an ethical discussion is in applications within the home (Fig. <ref>), followed by those employed to support children's creativity, education, and health. Most of these scenarios likely include interaction with children, which may play a role in the need to carefully consider ethical implications. Interestingly, psychological use cases have a lower proportion of ethics discussions, despite the sensitive nature of these applications. To delve further into this research question, we explored whether research where the target population was primarily vulnerable (e.g., neurodivergent children, people with PTSD or other mental health issues) more often discussed ethical considerations <cit.>. Vulnerable target populations included, but were not limited to, children, the elderly, and people with intellectual and physical disabilities. Yet, we do not see a noticeable difference in the proportion of papers discussing ethics for vulnerable populations (Fig. <ref>). Unlike the disclosure of ethics approval, we do not see a trend in the discussion of ethical implications over time (Fig. <ref>). The sole outlier here is the most recent year, 2022, where the number of papers discussing ethical implications was equal to the number that did not. However, it is not possible to speculate whether this was unique to this year of publication or whether it will continue into the future.

§ 10 YEARS OF AFFECTIVE ROBOTICS

This section reports the most relevant results on research methodology, topic, aims, outcomes, and robot operations collected from our systematic review, analysing the last decade of affective robotics for well-being and addressing C3. Note that a comprehensive report of the results can be found in the Supplementary Material, and that this section is only meant to give the reader an overview of the affective robotics for well-being field to better understand the evolution of the technological, design, and ethical aspects of the field over the last decade, as described in Section <ref>.

§.§ Research Methodology

Our observations indicate high variability in the research methodology of affective robotics studies, marked by variations in data collection methods, participant recruitment, and the duration of studies.

§.§.§ Data Collection Methods and Types of Studies

We observed interesting trends in data collection methods and their evolution over the years, as shown in Figure <ref>. Quantitative studies have predominantly shaped the field of affective robotics, accounting for 39% of the studies published and showcasing a methodological preference for measurable, data-driven insights.
Although quantitative studies predominate in the field, qualitative and mixed-methods studies are also prevalent: qualitative studies account for 28.9% of the studies, and mixed-methods for 13.4%. Over time, we have observed a notable increase in the number of qualitative and mixed-methods studies, indicating a gradual shift towards a more holistic understanding of affective interactions with social robots. Specifically, there was a marked increase in qualitative studies in 2022 (75% of the studies). This could be due to restrictions on conducting behavioural experiments during the COVID-19 pandemic, leading many researchers to work with smaller samples and rely on qualitative methods. This evolving blend of methodologies highlights a dynamic, multifaceted research landscape that adapts to the complex nature of affective interactions in robotics.

Another point of interest is the low proportion of system studies (4% of the total number of studies), with most published between 2013 (13% of the studies) and 2015 (17%). It is presumed that during this period the field of affective robotics was in a more developmental stage, focusing on building and assessing the foundational systems crucial for establishing baselines and understanding the capabilities of these affective agents. Post-2015, the focus might have shifted towards refining these systems based on initial findings and integrating them into more complex user studies to explore the broader implications of affective robotics in various contexts and settings. This shift could be an indicator of technological maturity, where incremental improvements and applications of existing systems became more relevant for advancing the field. For example, as affect detection models were advancing in the broader affective computing community, affective robotics researchers gradually applied these advancements empirically to their own systems rather than developing dedicated systems from scratch <cit.>. Another possibility is that system-related research in affective robotics is now broader, encompassing disciplines like social robotics, affective computing, machine learning, natural language processing, and computer vision <cit.>, with these systems then applied in affective robotics research.

§.§.§ Number of Participants

We observed growth in the average number of participants per study over the years, indicating a trend towards larger sample sizes to enhance the robustness of findings, as shown in Figure <ref>. This increase, from approximately 30 participants per study on average in 2013 to almost 100 by 2022, underscores efforts to ensure findings are robust and widely applicable, which is essential for technologies that closely interact with human emotions and behaviours. It should be noted that some of these studies are neither empirical nor quantitative in nature, and thus the extent of their contribution and methodological rigour is not assessed by the size of their sample.

§.§.§ Study Sessions

We found that single-session studies have emerged as the predominant focus in affective robotics (72% of the total number of studies), with a notable peak in long-term studies (lasting up to 10 sessions) published in 2021 (39% of the studies), as depicted in Figure <ref>. However, the overall trend for long-term studies is characterized by sporadic spikes rather than a consistent increase or decrease.
This preference for single-session studies may be attributed to researchers prioritizing the assessment of technological developments and the introduction of new affective features. Such studies allow for rapid validation of innovations, keeping pace with the swift technological advancements in the field <cit.>. This approach underscores a dynamic and evolving research ecosystem, where the drive for innovation often outweighs the desire for long-term deployment insights. The fast-paced nature of technological progress in affective robotics may also explain this preference for single-session studies: long-term studies could be perceived by researchers as less relevant, as the technology could become outdated by the time a study concludes <cit.>. Additionally, logistical and financial constraints, along with the challenges of maintaining participant engagement over extended periods, may discourage longer study durations <cit.>.

§.§ Aims and Applications

Affective robotics is a relatively wide area of research, encompassing studies with various aims, disciplinary affiliations, and application scenarios.

§.§.§ Aims

Three main aims were identified in the literature published between 2013 and 2022: technological, ethical, and design, as shown in Figure <ref>. Regarding technological aims, we observed a steady positive trend in the proportion of studies including technological aims over the years, reaching a peak of over 60% of studies published with technological aims in 2021. In terms of ethical aims, our data show that published studies rarely include these aims, with a maximum of slightly more than 20% of the studies including ethical aims (in 2014, 2019, and 2022). Regarding design aims, most studies published in affective robotics tend to include such aims, ranging between 70% and 95% of the studies published per year, with the exception of 45% in 2021. Our results show that only three papers (1%) cover all three aspects, namely technological, design, and ethical, while 14% focused on technical and design, 2% on technical and ethical, and 11% on ethical and design aspects. The majority of the papers (72%) focused on only one aspect. The limited integration of technological, design, and ethical aspects in research could potentially be attributed to the complexities and resource constraints of addressing multiple aims simultaneously, which is a common challenge in human-centred engineering and computer science research <cit.>. Specialization in one aspect is often required by the need for depth and clarity in research, especially in the early stages, alongside academic and publication pressures that prioritize quicker, focused studies. As the field evolves, interdisciplinary collaborations may facilitate more holistic approaches with varying aims.

§.§.§ Discipline Affiliations

The research diversity in affective robotics is also evidenced by the presence of single-discipline and multidisciplinary teams, and their disciplinary affiliations. Over the years, we can observe that more studies were published by single-discipline teams than by multidisciplinary teams, with the exception of 2018, in which 61.9% of the published studies were by multidisciplinary teams. Nonetheless, in most years, between 27% (in 2019) and 44% (in 2015 and 2017) of the studies published each year were by multidisciplinary teams (with the exception of 6% of the studies in 2016 and 61.9% in 2018). This seems like a relatively high ratio of multidisciplinary efforts.
Single-discipline studies have seen a surge of papers from authors with a computer science background, followed by papers from authors with an engineering background, and then a media background. Surprisingly, single-discipline studies from psychology account for at most 30% of the papers in any given year (in 2018). Given that ‘Affect' at its core is a psychological construct <cit.>, and given the broader connection of the field to ‘Psychological Well-being' <cit.>, it is expected that researchers with a background in psychology would be more prominently involved, similar to related fields such as social robotics and HRI <cit.>. Studies published by multidisciplinary teams show similar trends, with most of the first authors in these teams having a computer science background. Accordingly, we can assume that despite the multidisciplinary nature and demands of affective robotics research, the field is still primarily driven by technical questions and aims. §.§.§ Application Contexts We observed that a substantial proportion of the studies published in affective robotics are concerned with health application scenarios (28% of all studies), ranging between 20% (in 2019) and 58% (in 2015) of the studies published in a year (except for 10% in 2017). This is followed by studies concerned with social application scenarios (21% of all studies), ranging between 6% (in 2016) and 31% (in 2014) of the studies published in a year (except for 0% in 2013). This is an important trend in affective robotics, stressing that despite the technical aims that dominate the field (both in explicit research aims and in disciplinary affiliations), many of the studies are intended to be applied in typical social contexts that are customary in affective computing (i.e., health and social settings). Following these two application scenarios (i.e., health and social settings), the third most prominent application scenario is mental health, constituting 16.3% of the papers. Hence, while socially assistive robots have been studied in a variety of health-related settings, including physical rehabilitation and primary care <cit.>, research into affective robotics considers the role of social robots in mental health settings and other applied care scenarios to be critical within the field. This approach is also evidenced by the growing proportion of studies focused on well-being target outcomes, increasing from 17% in 2014 to 36% in 2022, with a peak of 55% of all psychological target outcomes in affective robotics research observed in 2020. However, it is important to consider the generality and breadth of the field. This is evidenced by the proportion of studies with multiple outcomes, ranging from 65% of the psychological target outcomes of affective robotics papers in 2013 to 80% of the affective robotics papers published in 2016. As noted previously, there has been a shift towards more studies investigating well-being outcomes, a term that is in itself vague and broad. Nonetheless, starting in 2017, we began to observe a diversification of outcomes studied, with many (15 identified) unique psychological target outcomes related to well-being, such as eating disorders, eldercare, sleeping habits, stress-related issues, and psychopathologies, among others. This diversification suggests that beyond the growing interest in the health and well-being applications of affective robots, the field is maturing.
Researchers are increasingly seeking to assess the applicability of this technology for addressing specific outcomes (e.g., behavioural changes related to eating and sleep, or emotional support for loneliness and stress), rather than attempting to capture multiple outcomes in single studies <cit.>. § FUTURE OPPORTUNITIES IN AFFECTIVE ROBOTICS FOR WELL-BEING Our survey aims at better understanding the evolution of affective robotics for well-being over the last decade. Our results show the past and present of this research field, and this section explores the future opportunities in affective robotics distilled for well-being across technical, design and ethical aspects as collected in Table <ref>. Affective robotics is a multi-disciplinary field that includes expertise from computer scientists, psychologists, social scientists and roboticists. As such, this field would benefit from the cross-fertilization of affective computing and robotics <cit.>. We observed that in the earlier stages of affective computing research, there was more emphasis on empirical, data-driven studies to establish the foundational understanding of affective phenomena <cit.>. More recently, we note a shift in the research landscape, potentially driven by the very recent technical advancement in the field. This shift in the research approach, from more empirical to potentially more technical, could indicate that the field of affective computing research is maturing and transitioning towards more sophisticated, application-oriented developments. This technological shift in affective robotics for well-being can manifest in different forms, such as advancements in affective sensing and generation techniques, integration of affective AI models into robotic platforms, exploration of real-world applications of these technologies in well-being contexts, and increased interdisciplinary collaboration between technical and social/behavioral researchers. As we have already mentioned, the field of affective robotics is increasingly shifting towards data-driven models, and we will observe an even more evident shift in the future driven in part by the emergence of large language models (LLMs) that are revolutionising various domains. This trend is not limited to affective robotics but is part of a broader movement encompassing human-computer interaction (HCI) <cit.>, computational linguistics <cit.>, and affective computing <cit.>. It is essential to go beyond solely utilising LLM APIs and consider how these models can be tailored to specific use cases, focusing on quality-centric data rather than quantity, especially for high-stake domains like well-being. Improving relationships with the affective computing community and enhancing the benchmarking of models intended for implementation in interactions are crucial steps for advancing the affective robotics field. Therefore, we encourage researchers to collaborate, leverage current advancements, and explore their practical application in real-life social interactions like well-being with robots. Future investigations in affective robotics for well-being are likely to focus on the technical challenges and opportunities associated with integrating multimodal affective data into LLMs for robotic applications <cit.>. This could involve exploring novel architectures, training methodologies, and evaluation metrics tailored to the unique requirements of multimodal language understanding in well-being contexts. 
We hope that future researchers can build a new generation of intelligent and emotional robotic systems that can seamlessly process and respond to various sensory inputs, paving the way for enhanced human-robot interaction in well-being. From a design point of view, co-design of affective robots for well-being is underexplored, and is a future research opportunity. Especially in clinical contexts, designing with multiple stakeholders (i.e. clinicians, other healthcare workers, and patients) in the same room could be useful for establishing a dialogue between them, and empowering patients in how future robotic technologies could contribute to their care. This approach is consistent with patient-centered, value-based care <cit.>. Given the high barriers to clinician participation in co-design sessions (e.g., scheduling demands; long working hours), brief (1 hour or shorter) web-based participatory design sessions may be preferable to longer in-person sessions <cit.>. Such online participatory methods have been proposed, e.g. the Hybrid Robotic Design Model, where design teams work in person at specific points of the design process, and other phases are conducted online <cit.>. Regarding ethical considerations, all papers with human participants should include ethics approval disclosures and discussion. We are seeing a trend towards this, but there is still much room for improvement. Additionally, researchers should familiarize themselves with ethical guidelines. Unfortunately, as of yet, there is no single overarching and agreed-upon set of guidelines for affective, social robotic, and well-being applications. Instead, researchers must assess their own work and use the applicable frameworks, such as <cit.> for social robotics, <cit.> for perception in affective computing, and <cit.> for AI in clinical applications, to guide their research and discussions. Ethical implications must be considered at all stages of the study process, and stakeholders and their individual ethical responsibilities must be defined prior to conducting research <cit.>. We saw a focus on ethics on a personal level, i.e. dignity, and a lack of consideration for social and societal implications. As affective technology and robotics become increasingly prevalent, these considerations are crucial for harmonious, safe, and fair integration of these systems <cit.>. Moreover, AI has come under scrutiny and much-needed regulations are being put in place <cit.>. Although these laws regulate industry rather than research applications, researchers should not take this as an opportunity to skirt these rules. Likewise, with the move towards open-source research and collaboration, any publicly available models or technologies that can be available for use by enterprises will be required to adhere to the regulations. Emphasizing reproducibility, open-source practices, community collaboration, and the development of federated models is crucial and will enhance adherence to several agreed-upon principles of ethical guidelines, including transparency, justice and fairness, non-maleficence, responsibility, and privacy <cit.>, <cit.>. Practices such as open science and pre-registration are vital for ensuring ethical and methodological rigor, especially given that our research aims to support human well-being, including individuals with clinical diagnoses. Adhering to the highest standards of empirical research in the field is paramount to uphold the integrity and impact of our work.
§ CONCLUSION This survey presents the evolution of the field of affective robotics for well-being over the last decade. By highlighting the past trends, present challenges, and future opportunities in the field of affective robotics for well-being, this survey aims to guide future researchers in tailoring their work based on the lessons learned and the envisioned trajectory of the field. We encourage researchers to consider the various implications of their work, including technical, design, and ethical considerations, to drive the development of affective robotics towards enhancing human well-being. § ACKNOWLEDGMENT M.S. is supported by PNRR-PE-AI FAIR project funded by the NextGeneration EU program. G.L. and H.G. are supported by the EPSRC project ARoEQ under grant ref. EP/R030782/1. M.A. is supported by the Finnish Cultural Foundation. A.L and P.T. are supported by the Rajan Scholar Research Fund, the France Canada Research Fund, NSERC Discovery Grant 06908-2019 and Mitacs Globalink ref. FR103868. IEEEtran Micol Spitale Micol Spitale is an Assistant Professor at the Department of Electronics, Information and Bioengineering at the Politecnico di Milano (Polimi), as well as a Visiting Affiliated Researcher at the University of Cambridge. In recent years, her research has been focused on the field of Social Robotics, Human-Robot Interaction, and Affective Computing, exploring ways to develop robots that are socio-emotionally adaptive and provide ‘coaching’ to promote wellbeing. Minja Axelsson Minja Axelsson is a PhD Student at the Department of Computer Science & Technology at the University of Cambridge. Her research is focused on the design and ethics of social robots for well-being, as well as users' perceptions of them. Sooyeon Jeong Sooyeon Jeong is an Assistant Professor in the Department of Computer Science at Purdue University. Dr. Jeong designs interactive AI agents to improve people's lives and deploys these agents in-the-wild to evaluate how they can enhance people's wellbeing, health, and learning. Paige Tuttösí Paige Tuttösí is a PhD candidate at Simon Fraser University and a visiting researcher at l’Institut FEMTO-ST, Université Bourgogne-Franche-Comté. Her research is focused on the improvement of robotic voices and ethical implications of human robot interaction. Caitlin A. Stamatis Caitlin A. Stamatis, PhD, is the Chief Clinical Officer at Bruin Health. A clinical psychologist by training, Dr. Stamatis's research focuses on using technology-enabled mental healthcare. Guy Laban Guy Laban is a Postdoctoral Research Associate in the Department of Computer Science & Technology at the University of Cambridge. His research is aimed at exploring how people communicate their emotions to social robots, and how accordingly these interactions enhance and support emotional well-being. Angelica Lim Dr. Angelica Lim is the Director of the Rosie Lab (www.rosielab.ca), and an Assistant Professor in the School of Computing Science at Simon Fraser University (SFU). Previously, she led the Emotion and Expressivity teams for the Pepper humanoid robot at SoftBank Robotics. She received her B.Sc. in Computing Science with Artificial Intelligence Specialization from SFU and a Ph.D. and M.Sc. in Computer Science (Intelligence Science) from Kyoto University, Japan. Her research interests include multimodal machine learning, affective computing, and human-robot interaction. 
Hatice Gunes Hatice Gunes is a Full Professor of Affective Intelligence and Robotics in the Department of Computer Science and Technology, University of Cambridge, and the Director of the https://cambridge-afar.github.io/ Cambridge AFAR Lab. She is a former President of the Association for the Advancement of Affective Computing, a former Faculty Fellow of the Alan Turing Institute ­- the UK's national institute for data science and artificial intelligence, and is currently a Fellow of the Engineering and Physical Sciences Research Council UK (EPSRC) and Staff Fellow of Trinity Hall.
http://arxiv.org/abs/2407.02618v1
20240702191043
Functions of bounded variation from ideal perspective
[ "Jacek Gulgowski", "Adam Kwela", "Jacek Tryba" ]
math.FA
[ "math.FA" ]
§ ABSTRACT We present a unified approach to two classes of Banach spaces defined with the aid of variations: Waterman spaces and Chanturia classes. Our method is based on some ideas coming from the theory of ideals on the set of natural numbers. § INTRODUCTION The concept of the variation of a function was introduced in 1881 by Camille Jordan and has found plenty of applications and generalizations since that time. When we look closely at the definition of the variation of a function, we can see its tight relation to the question of (un)boundedness of series. This becomes even more evident when we look at some of the generalizations of Jordan's definition, namely the Λ-variation introduced by Waterman in 1972 (see <cit.>) and the Chanturia classes introduced in 1974 (see <cit.>). The question of (un)boundedness of series of real numbers also appears naturally in the study of ideals on the set of natural numbers, with the paper <cit.> as a very recent example of this perspective, and with the concept of a summable ideal (introduced below) as one of the basic notions in the theory of ideals on the set of natural numbers. Looking at these two separate threads in the realm of mathematics, it appeared very appealing to us to join them: to look at different spaces of functions of bounded variation from the perspective of the theory of ideals defined on the set of natural numbers. Additional inspiration came from the recent paper <cit.> by Borodulin-Nadzieja and Farkas, who showed that the concept of an ideal introduced by an l.s.c. submeasure (see definitions below) naturally defines certain Banach sequence spaces. On the other hand, these sequence spaces may be used as a natural foundation for the definition of spaces of functions of some type of bounded variation, with a concept of variation generalizing many different approaches (especially the Waterman variation and Chanturia classes). In this paper we define the concept of the variation of a function on a compact real interval originating from an l.s.c. submeasure ϕ (this is laid out in Section 3). In Section 4 we study the inclusions between different spaces and the corresponding relations between ideals generated by the submeasures, in a general setting. Then, in Section 5 we show that for simple density ideals we recreate the Chanturia classes, while in Section 6 we show that summable ideals correspond directly to the concept of the Waterman Λ-variation. It appears that in these two special cases the inclusions between the spaces of functions of bounded variation may be nicely described by the relation between the corresponding ideals (in terms of inclusion or the Katětov order). One of these results also leads us to a new characterization of the Katětov order between summable ideals. § PRELIMINARIES §.§ Basics about sequence spaces By ^ we will denote the family of all real-valued sequences. We will refer to several standard Banach sequence spaces. Here we present the notation and basic properties that will be used in the sequel. First of all, in the examples listed below we fix a sequence x=(x_n)_n∈∈^ of real numbers. * ℓ_∞ denotes the space of all bounded sequences equipped with the supremum norm x_∞ = sup_n∈|x_n|; * c_0 denotes the subspace of ℓ_∞ consisting of all sequences such that lim_n→+∞ x_n = 0; * ℓ_1 denotes the space of all sequences such that x_ℓ_1=∑_n∈|x_n|<+∞.
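For orientation, the following standard witnesses (our addition, not part of the original text) show that these inclusions are strict, i.e., that ℓ_1⊊ c_0⊊ℓ_∞: the sequence x=(1/n)_n∈ satisfies lim_n→+∞1/n=0 but ∑_n=1^∞1/n=+∞, so x∈ c_0∖ℓ_1, while the constant sequence y=(1,1,1,…) satisfies y_∞=1 and lim_n→+∞y_n=1≠0, so y∈ℓ_∞∖ c_0.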
§.§ Basics about spaces Let us assume that A=(a_n)_n∈ is a nonincreasing sequence of positive real numbers such that ∑_n=1^∞ a_n = +∞. We call such a sequence a Waterman sequence. If additionally lim_n→ +∞a_n = 0, we say that the sequence A is a proper Waterman sequence. In many sources the Waterman sequence is defined in the form (1/λ_n)_n∈ where (λ_n)_n∈ is nondecreasing and such that ∑_n=1^∞1/λ_n = +∞. Then if lim_n→+∞λ_n = +∞ we have a proper Waterman sequence. Of course, by putting a_n = 1/λ_n we can see that the two definitions are essentially identical. Let us denote the unit interval by I=[0,1]. Moreover, by 𝒫_I we denote the set of all sequences of nonoverlapping, closed subintervals {I_1, I_2, …, I_N, …} of I. The intervals may be degenerate, i.e. it may happen that I_n consists of only one point. Let A = (a_n)_n∈ℕ be a Waterman sequence and let x I→ℝ. We say that x is of bounded A-variation if there exists a positive constant M such that for any sequence of nonoverlapping subintervals {I_1, I_2, I_3, …}∈𝒫_I, the following inequality holds: ∑_n=1^+∞ a_n|x(I_n)|≤ M, where I_n = [s_n,t_n] and |x(I_n)| = |x(t_n)-x(s_n)|. The supremum of the above sums, taken over the family 𝒫_I of all sequences of nonoverlapping subintervals of I, is called the A-variation of x and it is denoted by _A(x). The special case of a sequence constantly equal to 1 corresponds to the classical Jordan variation of a function x, which will later be denoted by (x). This concept was introduced by Waterman in <cit.>. Since then, functions of bounded A-variation have been intensively studied by many authors – for an overview we refer to <cit.>. It is worth mentioning that there are many equivalent ways to express that the function x I→ is of bounded A-variation (cf. <cit.> and <cit.>), but we will not go into the details here. The space of all functions defined on the interval I and of bounded A-variation, endowed with the norm x↦|x(0)| + _A(x), forms a Banach space (I) (see <cit.>). The spaces (I) are proper subspaces of the space B(I) of all bounded functions x I→. The space B(I) is equipped with the standard supremum norm x_∞ = sup_t∈ I |x(t)|. §.§ Basics about ideals A family ⊆() is called an ideal if * ∉, * if F⊆ is finite, then F∈, * if C∈ and D⊆ C, then D∈, * if C,D∈, then C∪ D∈. An ideal is called a summable ideal if it is of the form _A={C⊆: ∑_n∈ Ca_n<∞}, for some sequence of positive real numbers A=(a_n)_n∈ such that ∑_n=1^∞ a_n=∞. Note that in the definition of summable ideals we do not require that A is nonincreasing. However, in our paper we will only consider summable ideals given by nonincreasing sequences. By we denote the smallest ideal, i.e., the one consisting of all finite subsets of . Note that is a summable ideal (given by the sequence (a_n) constantly equal to 1). If and are ideals, then we say that is below in the Katětov order and write ≤_K whenever there is a function f:→ such that f^-1[C]∈ for every C∈. Note that actually, despite its name, the Katětov order is only a pre-order, not a partial order (it is not antisymmetric). The Katětov order was introduced in the 1970s in the papers <cit.> and <cit.> by M. Katětov. We say that an ideal is tall if for every infinite C⊆ there is an infinite D⊆ C such that D∈. It is easy to see that is not tall if and only if ≤_K. Consequently, all non-tall ideals are ≤_K-equivalent (i.e., ≤_K and ≤_K for any two non-tall ideals and ). If A=(a_n)_n∈ is a sequence of positive real numbers such that ∑_n=1^∞ a_n=∞, then _A is tall if and only if lim_n→∞a_n=0.
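To see the tallness criterion at work, consider the harmonic sequence A=(1/n)_n∈ (a concrete example of ours, not taken from the paper). The set of perfect squares S={n^2: n∈} satisfies ∑_n∈ S1/n=∑_k=1^∞1/k^2=π^2/6<∞, so S belongs to the summable ideal given by A; since 1/n→0, this ideal is tall, whereas the Jordan case a_n≡1 yields the non-tall ideal of all finite sets.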
An ideal is a P-ideal if for every sequence (A_n)_n∈ of elements of there is A∈ such that A_n∖ A is finite for all n∈. It is easy to verify that all summable ideals are P-ideals. § SUBMEASURES AND OBJECTS INDUCED BY THEM §.§ Submeasures A function ϕ:𝒫()→[0,∞] is called a submeasure if ϕ(∅)=0, ϕ({n})<∞ for every n∈, and ϕ(C)≤ϕ(C∪ D)≤ϕ(C)+ϕ(D) for all C,D⊆. A submeasure ϕ is lower semicontinuous (lsc, in short) if ϕ(C)=lim_n→∞ϕ(C∩{1,2,…,n}) for each C⊆. An lsc submeasure ϕ is non-pathological, if ϕ(C)=sup{μ(C): μ is a measure such that μ≤ϕ}, for all C⊆. Not every lsc submeasure is non-pathological – see <cit.>, <cit.>, <cit.>, <cit.> or <cit.> for such examples. Given an lsc submeasure ϕ, let ℳ_ϕ be the family of all measures μ on such that μ≤ϕ. Then, by definition, ϕ is non-pathological if and only if ϕ(C)=sup{μ(C): μ∈ℳ_ϕ} for all C⊆. §.§ Ideals induced by submeasures By identifying subsets of with their characteristic functions, we can treat ideals as subsets of the Cantor space {0,1}^. Mazur in <cit.> proved that an ideal is F_σ if and only if it is of the form: (ϕ)={C⊆: ϕ(C)<∞} for some lower semicontinuous submeasure ϕ such that ∉(ϕ) (see also <cit.>). Solecki in <cit.> showed that an ideal is an analytic P-ideal if and only if it is of the form: (ϕ)={C⊆: lim_n→∞ϕ(C∖{1,2,…,n})=0} for some lower semicontinuous submeasure ϕ such that ∉(ϕ) (see also <cit.>). It is easy to see that (ϕ)⊆(ϕ) for every lsc submeasure ϕ. Moreover, for every lsc submeasure ϕ we can find an lsc submeasure ϕ' such that (ϕ)=(ϕ'), (ϕ)=(ϕ') and additionally ϕ'({k})>0 for all k∈ (it suffices, for instance, to put ϕ'(C)=ϕ(C)+∑_n∈ C1/2^n for all C⊆). Note that every summable ideal is of the form (ϕ_A) as well as of the form (ϕ_A), where ϕ_A(C)=∑_n∈ Ca_n. For more examples of ideals induced by submeasures see <cit.>. §.§ Banach spaces of real sequences Let ϕ be a non-pathological lsc submeasure. Define a function ϕ̂:^→[0,∞] by: ϕ̂(x)=sup{∑_n∈μ({n})|x_n|: μ∈ℳ_ϕ} for all x=(x_n)_n∈∈^. Define also: (ϕ)={x∈^: ϕ̂(x)<∞}; (ϕ)={x∈^: lim_n→∞ϕ̂(x·χ_{n,n+1,…})=0}. Moreover, let (ϕ)=(ϕ)∩ and (ϕ)=(ϕ)∩, where ={(x_n)_n∈∈^: |x_n+1|≤|x_n| for all n∈}. Spaces (ϕ) and (ϕ) were introduced in <cit.>. Note that ϕ̂(x)=lim_n→∞ϕ̂(x·χ_{1,2,…,n}) for every x∈^ (see <cit.>). [<cit.>] * Consider the submeasure given by: ϕ(C)= 1, if C≠∅, 0, if C=∅, for every C⊆. Then (ϕ)=ℓ_∞ and (ϕ)=c_0. * Consider the submeasure given by ϕ(C)=|C| for every C⊆. Then (ϕ)=(ϕ)=ℓ_1. Notice that for every x=(x_n)_n∈∈^ and k∈ we have ϕ̂(x·χ_{k})=ϕ({k})· |x_k|. Therefore, for any A⊆ we have sup_k∈ A( ϕ({k})· |x_k|) ≤ϕ̂(x·χ_A)≤sup_k∈ A |x_k| ·∑_k∈ Aϕ({k}) Observe that C∈(ϕ) if and only if χ_C∈(ϕ), where χ_C denotes the characteristic function of C. Similar equivalence holds for (ϕ) and (ϕ). <cit.> Suppose that ϕ is a non-pathological lsc submeasure. Then (ϕ) and (ϕ) are Banach spaces normed by ϕ̂. Moreover, (ϕ)⊆(ϕ). Let ϕ be a non-pathological lsc submeasure such that ϕ({k})>0 for all k∈. Then (ϕ) is a closed subspace of (ϕ) and (ϕ) is a closed subspace of (ϕ). In particular, (ϕ) and (ϕ) are Banach spaces normed by ϕ̂. Moreover, (ϕ)⊆(ϕ). The inclusion (ϕ)⊆(ϕ) follows from Proposition <ref>. Actually, it suffices to show that is closed in (ϕ) and in (ϕ). Let x=(x_n)∈^∖. Then there is n∈ such that |x_n+1|>|x_n|. Define r=|x_n+1|-|x_n|/3min{ϕ̂(e_n),ϕ̂(e_n+1)}, where e_n∈^ is the sequence given by: (e_n)_i= 1, if i=n, 0, otherwise. Note that r>0 since ϕ̂(e_k)=ϕ({k})>0 for all k∈. 
We claim that if y∈^ is such that ϕ̂(x-y)<r, then y∉ (which shows that is closed in (ϕ) and in (ϕ)). Indeed, if ϕ̂(x-y)<r, then: |x_n-y_n|ϕ̂(e_n)≤ϕ̂(x-y)<r≤|x_n+1|-|x_n|/3ϕ̂(e_n), so |x_n-y_n|<|x_n+1|-|x_n|/3. Similarly, |x_n+1-y_n+1|<|x_n+1|-|x_n|/3. Hence, |y_n+1|-|y_n|>|x_n+1|-|x_n|/3 and we get that |y_n+1|>|y_n|, which shows y∉. <cit.> The following are equivalent for any non-pathological lsc submeasure ϕ: * (ϕ)=(ϕ); * (ϕ)=(ϕ); * (ϕ) is separable. Obviously, if (ϕ)=(ϕ), then also (ϕ)=(ϕ). §.§ Variations Let ϕ be a non-pathological lsc submeasure. For J=(J_n)∈𝒫_I denote by x(J) the sequence (|x(J_n)|). Define: (ϕ)={x∈ B(I): sup_J∈𝒫_Iϕ̂(x(J))<∞}. The requirement x∈ B(I) may be removed. Indeed, assume that there exists a sequence (t_n)⊆ I such that |x(t_n)|→+∞. Fix any k∈ such that ϕ({k})>0. Then if we take for each n∈ any J^n=(J^n_i)∈𝒫_I such that J^n_k = [0,t_n] we have sup_J∈𝒫_Iϕ̂(x(J)) ≥ |x(t_n)-x(0)|ϕ({k}) → +∞, which means that the condition sup_J∈𝒫_Iϕ̂(x(J))<∞ will not be satisfied anyway. Let ϕ be a non-pathological lsc submeasure. Then (ϕ) is a Banach space normed by x_ϕ=|x(0)|+sup_J∈𝒫_Iϕ̂(x(J)), for all x∈(ϕ). Let us assume that x∈ B(I) is such that x≠ 0 and x(0)=0. Then there exists t∈ I such that x(t)≠ 0. Let us assume that ϕ({k})>0 for some k∈ and let J∈𝒫_I be a sequence of intervals such that J_k=[0,t]. Then ϕ̂(|x(J)|) ≥ |x(t)-x(0)|ϕ({k}) > 0. Let c∈ be any constant. The condition ϕ̂(|c· x(J)|) = |c|ϕ̂(|x(J)|) is obvious for any J∈𝒫_I. Similarly, the triangle inequality ϕ̂(|(x+y)(J)|) ≤ϕ̂(|x(J)|) + ϕ̂(|y(J)|) holds for any functions x,y I→. Passing to the supremum over J∈𝒫_I preserves these conditions. Now it remains to prove that the space is complete. The proof is a standard one. Let us take a Cauchy sequence (x_n)⊆(ϕ). Let us fix ε>0 and take n,m≥ N such that |x_n(0)-x_m(0)| + sup_J∈𝒫_Iϕ̂((x_n-x_m)(J)) ≤ε. First of all, let us observe that this means that the sequence (x_n(0)) is a real Cauchy sequence, so it converges to some real number x(0). We can also observe that the sequence x_n(t) converges to x(t) for any t∈ I. Indeed, fix any t∈ I and, as before, let us assume that ϕ({k})>0 for some k∈. Let J∈𝒫_I be a sequence of intervals such that J_k=[0,t]. Then |(x_n-x_m)(J_k)|ϕ({k}) ≤ε and |x_n(t)-x_m(t)| ≤1/ϕ({k})ε + |x_n(0) - x_m(0)|, which eventually proves that (x_n(t)) is a Cauchy sequence, so it converges to some x(t). Now, as we have the pointwise limit x(t), we are going to show that x∈ BV(ϕ) and that x_n-x_ϕ→ 0 as n→ +∞. Let us take any μ∈ℳ_ϕ, any natural number K∈ and any sequence of intervals J∈𝒫_I. Then we have ∑_k=1^K |(x_n-x_m)(J_k)|μ({k}) ≤ε and we may pass to the limit with m→+∞ (as we know the sequence (x_m(t)) converges pointwise to x(t)). This gives us ∑_k=1^K |(x_n-x)(J_k)|μ({k}) ≤ε. Since the inequality holds for all K, μ and J, we can see that sup_J∈𝒫_Iϕ̂((x_n-x)(J)) ≤ε. To show that x∈(ϕ) it is enough to take any fixed x_n∈(ϕ) such that sup_J∈𝒫_Iϕ̂((x_n-x)(J)) ≤ 1 and refer to seminorm properties to see that sup_J∈𝒫_Iϕ̂(x(J)) - sup_J∈𝒫_Iϕ̂(x_n(J)) ≤sup_J∈𝒫_Iϕ̂((x_n-x)(J)) ≤ 1. Let ϕ be a non-pathological lsc submeasure. (a) If the sequence ϕ({k}) is unbounded, then BV(ϕ) reduces to the space of constant functions. (b) If the sequence ϕ({k}) is bounded, then the space (ϕ) contains the space of functions of bounded classical Jordan variation as a subset. (a): If x∈ B(I) is such that |x(s)-x(t)|=a>0 for some interval [s,t]⊆ I, then taking for each k∈ a J^k=(J^k_n)_n∈∈𝒫_I such that J^k_k = [s,t] we get ϕ̂(x(J^k)) ≥ aϕ({k}). Thus the variation sup_J∈𝒫_Iϕ̂(|x(J)|) is unbounded.
(b): Let us take any x∈(I) and any J∈𝒫_I. Then ϕ̂(|x(J)|) ≤∑_k∈ϕ({k})|x(J_k)|) ≤ M (x). The last inequality actually shows that the space (I) is continuously embedded in (ϕ). Let us observe that if ϕ()<+∞, then BV(ϕ)=B(I). To see this let us take any bounded function x∈ B(I) such that |x(t)|≤ M for all t∈ I. Then for any interval J⊂ I we get |x(J)|≤ 2M and for any μ∈ℳ_ϕ and any (J_k)∈𝒫_I we have ∑_k∈ N |x(J_k)|μ{k}≤ 2M μ() leading to ϕ̂(x) ≤ 2M ϕ()<+∞. Let ϕ be a non-pathological lsc submeasure such that ϕ()=+∞. Then the space BV(ϕ) is a subset of the space of all bounded regulated functions (i.e. bounded functions having finite left and right limit in every point of their domain). Assume, contrary to our claim, that there exist monotone sequences (t_n)⊆ I and (s_n)⊆ I converging to a∈ I from the same side and such that x(s_n) - x(t_m)≥δ > 0 for all n,m∈. Then, taking subsequences if necessary, we have the sequence of nonoverlapping intervals I_n=[s_n,t_n]⊂ I such that |x(I_n)|≥δ. Then for any μ∈ℳ_ϕ we have ∑_n∈ |x(I_n)|μ({n}) ≥δμ() so ϕ̂(|x(I_n)|) ≥δϕ() = +∞. For any non-pathological lsc submeasure ϕ and any permutation π→, the function ψ:𝒫()→[0,∞] given by ψ(C) = ϕ(π[C]), for all C⊆, also is a non-pathological lsc submeasure. What is more, we can easily see that (ϕ) = (ψ). Let ϕ and ψ be two non-pathological submeasures. If there exists M>0 such that for every A⊆ we have |ϕ(A)-ψ(A)|≤ M then (ϕ)=(ψ). Suppose to the contrary that there exists x∈(ψ)∖(ϕ). Then x is bounded by some N> 0 and there exists M_1>0 such that for every J∈_I we have ψ̂(x(J))≤ M_1. On the other hand, there exists a measure μ≤ϕ, J∈_I and k∈ such that ∑_n≤ kμ({n}) |x(J_n)|> 2NM+2N+M_1. Meanwhile, there exists a measure ν≤ψ such that ∑_n≤ kν({n})=ψ({1,…,k}). Clearly, we have ∑_n≤ kν({n}) |x(J_n)|≤ M_1. Therefore, we obtain ∑_n≤ k (μ({n})-ν({n})) |x(J_n)|> 2NM+2N +M_1-M_1= 2N(M+1). Since |x(J_n)|≤ 2N for every n, it follows that ∑_n≤ k (μ({n})-ν({n})) ≥∑_n≤ k (μ({n})-ν({n})) |x(J_n)|/2N≥ M+1. Therefore, ϕ({1,…,k}) -ψ({1,…,k}) ≥∑_n≤ k (μ({n})-ν({n}))≥ M+1>M, a contradiction. For every non-pathological submeasure ϕ there exists a non-pathological submeasure ψ such that (ϕ)=(ψ) and the sequence (ψ({n})) does not tend to zero. Define the function δ:()→[0,∞] by δ(A)= 0, if A=∅, 1, if A≠∅. Clearly, δ is a non-pathological submeasure. Define the function ψ:()→[0,∞] by ψ(A)=max{ϕ(A),δ(A)} for every A⊆. Then ψ is non-pathological submeasure as a maximum of two non-pathological submeasures. It is also clear that (ψ({n})) does not tend to zero because ψ({n})≥δ({n})=1 for every n∈. Moreover, since |ϕ(A)-ψ(A)|≤ 1 for every A⊆, by Proposition <ref> we obtain (ϕ)=(ψ). §.§ Basic results For any non-pathological lsc submeasure ϕ the following are equivalent : (a) ϕ()<∞; (b) (ϕ)=𝒫(); (c) (ϕ)⊇ℓ_∞; (d) (ϕ)=; (e) (ϕ)⊈c_0; (f) every bounded function x:[0,1]→ℝ belongs to (ϕ). (a)(c): If x∈ℓ_∞, then there is M>0 such that |x_n|≤ M for all n∈. Hence, if 1̂∈ℝ^ is the infinite sequence constantly equal to 1, then ϕ̂(x)≤ϕ̂(M·1̂)=Mϕ̂(1̂)=Mϕ()<∞. Thus x∈(ϕ). (c)(d): If (ϕ)⊇ℓ_∞, then ⊇(ϕ)=(ϕ)∩⊇ℓ_∞∩=. (d)(e): The sequence constantly equal to one belongs to =(ϕ), but not to c_0. (e)(b): Since (ϕ)⊈c_0, there is some x∈(ϕ)∖ c_0. Since (ϕ)⊆, |x| converges to some l>0 and |x_n|≥ l for all n∈. Thus, if 1̂ denotes the infinite sequence constantly equal to 1, then ϕ()=ϕ̂(1̂)=1/lϕ̂(l·1̂)≤1/lϕ̂(x)<∞, so ∈(ϕ) and consequently (ϕ)=𝒫(). (b)(f): If x:[0,1]→ℝ is bounded by some M>0, then |x(J)| is bounded by 2M for every interval J⊆ I. 
Hence, since ∈(ϕ), we have sup_J∈𝒫_Iϕ̂(|x(J)|)≤ 2 Mϕ()<∞. (f)(a): Consider the Dirichlet function x_D:[0,1]→ℝ given by: x_D(t)= 1, if t∈ℚ∩[0,1]; 0, otherwise. Then x_D is bounded, so it belongs to (ϕ). Note also that there is J∈𝒫_I such that x_D(J) is constantly equal to 1. Thus, ϕ()=ϕ̂(x_D(J))<∞. The following are equivalent for any non-pathological lsc submeasure ϕ: (a) lim_nϕ({n,n+1,…})=0; (b) (ϕ)=𝒫(); (c) (ϕ)⊇ℓ_∞; (d) (ϕ)=; (e) (ϕ)⊈c_0. (a)(c): If x∈ℓ_∞, then there is M>0 such that |x_n|≤ M for all n∈. Hence, if 1̂∈ℝ^ is the infinite sequence constantly equal to 1, then ϕ̂(x·χ_{n,n+1,…})≤ϕ̂(M·1̂·χ_{n,n+1,…})=Mϕ({n,n+1,…})→ 0. Thus x∈(ϕ). (c)(d): If (ϕ)⊇ℓ_∞, then ⊇(ϕ)=(ϕ)∩⊇ℓ_∞∩=. (d)(e): The sequence constantly equal to one is in =(ϕ), but not in c_0. (e)(b): Since (ϕ)⊈c_0, there is some x∈(ϕ)∖ c_0. Since (ϕ)⊆, x converges to some l>0 and x_n≥ l for all n∈. Thus, if 1̂ denotes the infinite sequence constantly equal to 1, then ϕ(∖{1,2,…,n-1})=ϕ({n,n+1,…})=ϕ̂(1̂·χ_{n,n+1,…})=1/lϕ̂(l·1̂·χ_{n,n+1,…})≤1/lϕ̂(x·χ_{n,n+1,…})→ 0, so ∈(ϕ) and consequently (ϕ)=𝒫(). (b)(a): Since (ϕ)=𝒫(), ∈(ϕ), which means that lim_nϕ({n,n+1,…})=0. § INCLUSIONS IN THE GENERAL CASE §.§ Two orders Actually, ϕ̂ is a function defined on infinite sequences of reals. However, for simplicity, we will sometimes write ϕ̂(x) even for finite sequences x – in such cases we mean ϕ̂(x^⌢ 0), where x^⌢ 0 is the infnite sequence starting with x and followed by zeros. Let ϕ_1 and ϕ_2 be two non-pathological lsc submeasures. We write: * ϕ_2≼ϕ_1 if there is M>0 such that ϕ̂_2(x)≤ Mϕ̂_1(x) for every finite sequence x∈⋃_n∈^n; * ϕ_2≼_mϕ_1 if there is M>0 such that ϕ̂_2(x)≤ Mϕ̂_1(x) for every non-increasing finite sequence x∈⋃_n∈^n. §.§ Ideals Let ϕ_1 and ϕ_2 be two non-pathological lsc submeasures. (a) If ϕ_2≼ϕ_1 then (ϕ_1)⊆(ϕ_2) and (ϕ_1)⊆(ϕ_2) (b) If either (ϕ_1)⊆(ϕ_2) or (ϕ_1)⊆(ϕ_2), then (ϕ_1)⊆(ϕ_2). (c) (ϕ_1)⊆(ϕ_2) if and only if ∃_M>0 ∀_F∈ ϕ_1(F)>1/M or ϕ_2(F)<M. (a): This is clear with the use of Remark <ref>. (b): This follows from the inclusions (ϕ_1)⊆(ϕ_1) and (ϕ_2)⊆(ϕ_2) (see Proposition <ref>). (c): Suppose first that there is M>0 such that for all F∈ either ϕ_1(F)>1/M or ϕ_2(F)<M. Let C∈(ϕ_1). Then there is k∈ such that ϕ_1(C∖{1,2,…,k})<1/M. Hence, if F⊆ C∖{1,2,…,k} is finite, then ϕ_1(F)<1/M and ϕ_2(F)<M (by our assumption). Thus, since ϕ_2 is lsc, we have: ϕ_2(C) ≤ϕ_2(C∩{1,2,…,k})+ϕ_2(C∖{1,2,…,k}) ≤(∑_i≤ kϕ_2({i}))+M<∞. Suppose now that for each n∈ there is F_n∈ such that ϕ_1(F_n)≤1/2^n and ϕ_2(F_n)≥ 2^n. Then clearly C=⋃_n∈F_n∉(ϕ_2). On the other hand, we will show that C∈(ϕ_1). Given any ε>0 there is n_0∈ such that 1/2^n_0<ε. Find k_0∈ such that F_1∪…∪ F_n_0⊆[1,k_0] and observe that for each k>k_0 we have: ϕ_1(C∖{1,2,…,k}) ≤ϕ_1(C∖{1,2,…,k_0}) ≤∑_i=n_0+1^∞ϕ_1(F_i)≤∑_i=n_0+1^∞1/2^i=1/2^n_0<ε. Observe that there are non-pathological lsc submeasures ϕ_1 and ϕ_2 such that (ϕ_1)⊆(ϕ_2), but (ϕ_1)⊈(ϕ_2). Indeed, this is true for ϕ_1(C)= 1, if C≠∅, 0, if C=∅, and ϕ_2(C)=∑_i∈ C1/i, since in this case (ϕ_1)=⊆_1/n=(ϕ_2) and (ϕ_1)=𝒫()⊈_1/n=(ϕ_2). On the other hand, there are also non-pathological lsc submeasures ψ_1 and ψ_2 such that (ψ_1)⊈(ψ_2), but (ψ_1)⊆(ψ_2). This is the case for ψ_1=ϕ_2 and ψ_2=ϕ_1 (where ϕ_1 and ϕ_2 are as in the previous paragraph). §.§ Spaces of real sequences Let ϕ_1 and ϕ_2 be two non-pathological lsc submeasures and assume that ϕ_1({i})>0 and ϕ_2({i})>0 for every i∈. Then for every m∈ there is L>0 such that ϕ̂_2(y)≤ Lϕ̂_1(y) for every y∈^m. Define: L=∑_i≤ mϕ_2({i})/min_i≤ mϕ_1({i}). Then L>0. Fix any y∈^m. 
If y is constantly equal to zero, then ϕ̂_2(y)=0≤ Lϕ̂_1(y) and we are done. Otherwise, find r=max_i≤ m|y_i| and let j≤ m be such that r=|y_j|. Note that: ϕ̂_2(y/r) ≤∑_i≤ mϕ_2({i})|y_i|/r≤∑_i≤ mϕ_2({i})= Lmin_i≤ mϕ_1({i}) ≤ L|y_j|/rϕ_1({j})≤ Lϕ̂_1(y/r). Hence, after multiplying by r we get that ϕ̂_2(y)≤ Lϕ̂_1(y). Let ϕ_1 and ϕ_2 be two non-pathological lsc submeasures and assume that ϕ_1({i})>0 and ϕ_2({i})>0 for every i∈. The following are equivalent: (a) ϕ_2≼ϕ_1; (b) (ϕ_1)⊆(ϕ_2); (c) (ϕ_1)⊆(ϕ_2); (d) (ϕ_1)⊆(ϕ_2); (a)(b) and (a)(c) are clear, since ϕ̂(x)=lim_n→∞ϕ̂(x·χ_{1,2,…,n}) for every x∈^. (b)(d) and (c)(d) follow from (ϕ_1)⊆(ϕ_1) and (ϕ_2)⊆(ϕ_2) (see Proposition <ref>). (d)(a): Suppose that ϕ_2⋠ϕ_1, i.e., for every n∈ there is a finite real sequence z such that ϕ̂_2(z)>2^2n+1ϕ̂_1(z). We will recursively construct sequences (n_k),(m_k),(L_k)⊆ and (x_k)⊆⋃_n∈^n such that for each k∈: (i) n_1=1 and n_k+1>n_k; (ii) 2^2n_k+1>L_k; (iii) ϕ̂_2(y)≤ L_kϕ̂_1(y) for every y∈^m_k; (iv) m_k is the length of x_k; (v) ϕ̂_1(x_kχ_{m_k-1+1,m_k-1+2,…,m_k})=1/2^n_k; (vi) ϕ̂_2(x_kχ_{m_k-1+1,m_k-1+2,…,m_k})>2^n_k. Start with n_1=1 (so that item (i) is met; item (ii) is empty in this case), using our assumption we find some finite real sequence z_1 such that ϕ̂_2(z_1)>2^2n_1+1ϕ̂_1(z_1) and put x_1=z_1/2^n_1ϕ̂_1(z_1). Let m_1 be the length of x_1. Note that items (v) and (vi) are satisfied, since: ϕ̂_2(x_1)=ϕ̂_2(z_1)/2^n_1ϕ̂_1(z_1)>2^2n_1+1ϕ̂_1(z_1)/2^n_1ϕ̂_1(z_1)=2^n_1+1>2^n_1. Using Lemma <ref>, we can find L_1∈ satisfying item (iii). If n_i,m_i,L_i and x_i for all i≤ k are already defined, we can find n_k+1 such that items (i) and (ii) are met. By our assumption, there is some finite real sequence z_k+1 such that ϕ̂_2(z_k+1)>2^2n_k+1+1ϕ̂_1(z_k+1). Let m_k+1 be the length of z_k+1. Observe that m_k+1>m_k (by items (ii) and (iii)). Moreover, ϕ̂_2(z_k+1·χ_∖{1,2,…,m_k})>(2^2n_k+1+1-L_k)ϕ̂_1(z_k+1·χ_∖{1,2,…,m_k})>2^2n_k+1ϕ̂_1(z_k+1·χ_∖{1,2,…,m_k}). Indeed, the second inequality follows from item (ii) and if the first one would be false, using item (iii) we should get: ϕ̂_2(z_k+1) ≤ϕ̂_2(z_k+1·χ_{1,2,…,m_k})+ϕ̂_2(z_k+1·χ_∖{1,2,…,m_k}) ≤ L_kϕ̂_1(z_k+1·χ_{1,2,…,m_k})+(2^2n_k+1+1-L_k)ϕ̂_1(z_k+1·χ_∖{1,2,…,m_k}) ≤(L_k+(2^2n_k+1+1-L_k))ϕ̂_1(z_k+1)=2^2n_k+1+1ϕ̂_1(z_k+1), which contradicts the choice of z_k+1. Put: (x_k+1)_i= (x_k)_i, if i≤ m_k, (z_k+1)_i/2^n_k+1ϕ̂_1(z_k+1·χ_∖{1,2,…,m_k}), if m_k<i≤ m_k+1. Then we have: ϕ̂_2(x_k+1χ_{m_k+1,m_k+2,…,m_k+1})=ϕ̂_2(z_k+1χ_{m_k+1,m_k+2,…,m_k+1})/2^n_k+1ϕ̂_1(z_k+1·χ_∖{1,2,…,m_k})>2^n_k+1. Using Lemma <ref>, find L_k+1 satisfying item (iii) and observe that all conditions are met. Once the construction is completed, define x=⋃_k∈x_k. We need to show that x∈(ϕ_1)∖(ϕ_2). The fact that x∉(ϕ_2) follows from the observation that for each k∈ we have ϕ̂_2(x)≥ϕ̂_2(x_kχ_{m_k-1+1,m_k-1+2,…,m_k})>2^n_k+1 (by item (vi)). On the other hand, x∈(ϕ_1) follows from: ϕ̂_1(x·χ_∖{1,2,…,m_k})≤∑_i>kϕ̂_1(x_iχ_{m_i-1+1,m_i-1+2,…,m_i})=∑_i>k1/2^n_i≤1/2^k. §.§ Variations Let ϕ_1 and ϕ_2 be two non-pathological lsc submeasures and assume that ϕ_1({i})>0 and ϕ_2({i})>0 for every i∈. The following are equivalent: (a) ϕ_2≼_mϕ_1; (b) (ϕ_1)⊆(ϕ_2); (c) (ϕ_1)⊆(ϕ_2); (d) (ϕ_1)⊆(ϕ_2). Moreover, assuming that ϕ_1 and ϕ_2 satisfy: ∀_j=1,2∀_x,y∈ℝ^((∀_n∈∑_i=1^n |x_i|≤∑_i=1^n |y_i|)ϕ̂_j(x)≤ϕ̂_j(y)), the above conditions are also equivalent to the following one: (e) (ϕ_1)⊆(ϕ_2). Firstly, we will show equivalence of items (a), (c) and (d). Secondly, we will show the implications (b)(d) and (a)(b). 
Lastly, we will deal with item (e) by showing (a)(e) and (e)(c). (a)(c): Straightforward, since ϕ̂(x)=lim_n→∞ϕ̂(x·χ_{1,2,…,n}) for every x∈^. (c)(d): This follows from the inclusion (ϕ_1)⊆(ϕ_1) (see Proposition <ref>). (d)(a): Suppose that ϕ_2⋠_mϕ_1, i.e., for every n∈ there is a finite nonincreasing real sequence z such that ϕ̂_2(z)>2^2n+1ϕ̂_1(z). We will recursively construct sequences (n_k),(m_k),(L_k)⊆ and (x_k)⊆⋃_n∈^ such that for each k∈: (i) n_1=1 and n_k+1>n_k; (ii) 2^2n_k+1>L_k; (iii) ϕ̂_2(y)≤ L_kϕ̂_1(y) for every y∈^m_k; (iv) m_k is the length of x_k; (v) ϕ̂_1(x_kχ_{m_k-1+1,m_k-1+2,…,m_k})=1/2^n_k; (vi) ϕ̂_2(x_kχ_{m_k-1+1,m_k-1+2,…,m_k})>2^n_k; (vii) 1/2^n_k+1<ϕ̂_1(e_m_k+1x_k(m_k)) (here e_i∈ℝ^ is the sequence having 1 on ith coordinate and zeros on all other coordinates); (viii) x_k∈. Note that items (i)-(vi) are exactly the same as in the proof of the implication (d)(a) in Theorem <ref>. Hence, we will omit some details. Start with n_1=1 (note that item (vii) is empty in the case of k=1), find some finite nonincreasing real sequence z_1 such that ϕ̂_2(z_1)>2^2n_1+1ϕ̂_1(z_1) and put x_1=z_1/2^n_1ϕ̂_1(z_1). Note that item (viii) is satisfied, since z_1 is nonincreasing. Let m_1 be the length of x_1. Then items (v) and (vi) are satisfied for the same reason as in the proof of Theorem <ref>. Moreover, using Lemma <ref>, we find L_1∈ as in (iii). If n_i,m_i,L_i and x_i for all i≤ k are already defined, we can find n_k+1 such that items (i), (ii) and (vii) are met. There is some finite nonincreasing real sequence z_k+1 such that ϕ̂_2(z_k+1)>2^2n_k+1+1ϕ̂_1(z_k+1). Let m_k+1 be the length of z_k+1. Similarly as in the proof of Theorem <ref>, we have m_k+1>m_k and ϕ̂_2(z_k+1·χ_∖{1,2,…,m_k})>2^2n_k+1ϕ̂_1(z_k+1·χ_∖{1,2,…,m_k}). Put: (x_k+1)_i= (x_k)_i, if i≤ m_k, (z_k+1)_i/2^n_k+1ϕ̂_1(z_k+1·χ_∖{1,2,…,m_k}), if m_k<i≤ m_k+1. Then items (iv), (v) and (vi) are met. We will show that (viii) is satisfied. Observe that x_k+1{1,…,m_k} is nonincreasing by (viii) applied to k and x_k+1{m_k+1,m_k+2,…} is nonincreasing, since z_k+1 is. Thus, it suffices to check that |x_k+1(m_k)|≥|x_k+1(m_k+1)| and the latter follows from: |x_k+1(m_k+1)|ϕ̂_1(e_m_k+1) =ϕ̂_1(|x_k+1(m_k+1)|e_m_k+1) ≤ϕ̂_1(x_k+1χ_{m_k+1,m_k+2,…,m_k+1})=1/2^n_k+1 <ϕ̂_1(e_m_k+1x_k(m_k))=|x_k+1(m_k)|ϕ̂_1(e_m_k+1). To end the recursion step, use Lemma <ref> to find L_k+1 as in (iii). Once the construction is completed, define x=⋃_k∈x_k. Then x∈ follows from (viii) and x∈(ϕ_1)∖(ϕ_2) can be shown in the same way as in the proof of Theorem <ref>. (b)(d): This follows from the inclusion (ϕ_2)⊆(ϕ_2) (see Proposition <ref>). (a)(b): Assume that (ϕ_1)⊈(ϕ_2) and take any x∈(ϕ_1)∖(ϕ_2). Then x∈∖(ϕ_2), thus (ϕ_2)⊆ c_0 (by Proposition <ref>). By the fact that (ϕ_1)⊆(ϕ_1) (Proposition <ref>), we have ϕ̂_̂1̂(x)<∞, thus, without losing generality, we may assume that x is non-negative, non-increasing and ϕ̂_̂1̂(x)=1. There are two possible cases: either x∉(ϕ_2) or x∈(ϕ_2). In the case that x∉(ϕ_2), we have ϕ̂_̂2̂(x)=∞ and ϕ̂_̂1̂(x)=1. Since ϕ̂_̂2̂(x)=lim_n→∞ϕ̂_̂2̂(x·χ_{1,2,…,n}) and ϕ̂_̂1̂(x·χ_{1,2,…,n})≤ϕ̂_̂1̂(x)=1 for all n, we obtain: lim_n→∞ϕ̂_̂2̂(x·χ_{1,2,…,n})/ϕ̂_̂1̂(x·χ_{1,2,…,n})≥lim_n→∞ϕ̂_̂2̂(x·χ_{1,2,…,n})=∞, hence ϕ_2⋠_mϕ_1. In the case that x∈(ϕ_2), we know that x∈ c_0, as (ϕ_2)⊆ c_0. Take α>0 such that lim_n→∞ϕ̂_̂2̂(x·χ_{n,n+1,…})=α (such α exists, as ϕ̂_̂2̂(x·χ_{n,n+1,…}) is a non-increasing sequence in [0,ϕ̂_̂2̂(x)] and x∉(ϕ_2)). 
Define recursively the sequence (n_k) in such way that for all k∈ we get ϕ̂_̂1̂(x·χ_{n_k+1,n_k+2,…})≤ 1/k and x(n_k+1)/x(n_k)≤1/k. This is possible, because lim_n→∞x(n)=0, x is non-increasing and lim_n→∞ϕ̂_̂1̂(x·χ_{n,n+1,…})=0. Now, for any k∈, we can define the sequence y_k by y_k(n)=x(n_k+1) for n≤ n_k+1 and y_k(n)=x(n) otherwise. Clearly, y_k is non-increasing as x is non-increasing. Moreover, taking m_k such that ϕ̂_̂2̂(x·χ_{n_k+1+1,n_k+1+2,…,m_k})≥α/2, we may notice that: ϕ̂_̂2̂(y_k·χ_{1,2,…,m_k}) /ϕ̂_̂1̂(y_k·χ_{1,2,…,m_k}) ≥ϕ̂_̂2̂(y_k·χ_{n_k+1+1,n_k+1+2,…,m_k}) /ϕ̂_̂1̂(y_k·χ_{1,2…,n_k }) +ϕ̂_̂1̂(y_k·χ_{n_k+1,n_k+2,…,m_k })≥ ≥ϕ̂_̂2̂(x·χ_{n_k+1+1,n_k+1+2,…,m_k}) /x(n_k+1)/x(n_k)ϕ̂_̂1̂(x·χ_{1,2…,n_k }) +ϕ̂_̂1̂(x·χ_{n_k+1,n_k+2,…,m_k })≥α/2/1/k+1/k=kα/4, hence ϕ_2⋠_mϕ_1. (a)(e): Assume that ϕ_2≼_mϕ_1, i.e. there is M>0 such that ϕ_2(y)≤ Mϕ_1(y)for every non-increasing finite sequence y. Let x∈(ϕ_1) and fix any Ĵ=(Ĵ_n)∈𝒫_I. If ϕ_1()<+∞, then BV(ϕ_1)=BV(ϕ_2)=B(I) (by Proposition <ref>), so we may assume that ϕ_1()=+∞. Then we have x(Ĵ_n)∈ c_0 and we can apply the procedure described below. Let π:→ be such that |x(Ĵ_π(1))|=sup{|x(Ĵ_n)|:n∈} and |x(Ĵ_π(k+1))|=sup{|x(Ĵ_n)|:n∈∖{π(1),…,π(k)}} for all k∈. Then J^⋆=(Ĵ_π(n))∈𝒫_I and x(J^⋆)∈. Hence, by the condition imposed on ϕ_2, we have: ϕ̂_2(x(Ĵ))≤ϕ̂_2(x(J^⋆))≤ Mϕ̂_1(x(J^⋆))≤ Msup_J∈𝒫_Iϕ̂_1(x(J)). Since Ĵ was arbitrary, we get that sup_J∈𝒫_Iϕ̂_2(x(J))≤ Msup_J∈𝒥_Iϕ̂_1(x(J)), so x∈(ϕ_2). (e)(c): We will show that the condition (ϕ_1)⊈(ϕ_2) implies (ϕ_1)⊈(ϕ_2). Assume first that (ϕ_1)=. Since (ϕ_1)⊈(ϕ_2), (ϕ_2)≠. Then Proposition <ref> gives us (ϕ_1)=B(I)⊈(ϕ_2). Assume now that (ϕ_1)≠. Fix x=(x_n)∈(ϕ_1)∖(ϕ_2). Let f:[0,1]→ℝ be a piecewise linear function such that f(1)=0, f(0)=∑_n=1^∞ (-1)^n+1|x_n| and f(1/2^k)=∑_n=1^k (-1)^n+1|x_n| for all k∈. This function is well-defined as x∈ c_0 (by Proposition <ref>).. Observe that f∉(ϕ_2), since for the sequence of intervals Ĵ=(Ĵ_n)∈𝒥_I given by Ĵ_1=[1/2,1] and Ĵ_k+1=[1/2^k+1,1/2^k] for all k∈, we have |f(Ĵ_k)|=|x_k|. Thus sup_J∈𝒥_Iϕ̂_2(f(J))≥ϕ̂_2(f(Ĵ))=ϕ̂_2(x)=∞. On the other hand, for each J∈𝒥_I we have: ∀_n∈∑_i=1^n |f(J_i)|≤∑_i=1^n |f(Ĵ_i)|. This is actually a simple observation, which may either be proved directly or deduced from the general observation (see <cit.>) stating that when selecting the ends of the intervals we should select points of varying monotonicity to get higher value of the sum. Hence, by the assumption imposed on ϕ_1, we have sup_J∈𝒥_Iϕ̂_1(f(J))=ϕ̂_1(f(Ĵ))=ϕ̂_1(x)<∞. § PARTICULAR CASE I: SIMPLE DENSITY IDEALS In this Section we are interested in lsc submeasures of the form ϕ_g(C)=sup_n∈|C∩{1,2,…,n}|/g(n), where g:→ satisfies the following conditions: (a) (g(n)) is nondecreasing; (b) lim_n g(n)=∞; (c) n/g(n) does not tend to zero; (d) n/g(n) is nondecresing. Ideals of the form (ϕ_g) (for functions g satisfying all the above requirements except the last one) have been extensively studied in <cit.>, <cit.> and <cit.>. For that reason, we decided to write the last item separately despite the fact that (c) follows from (d). We will denote by 𝒢 the set of all functions g satisfying conditions (a)-(d). Note that in this case ϕ̂_g(x)=sup_n∈∑_i=1^n|x_i|/g(n). In 1974 Chanturia introduced the concept of the modulus of variation of the bounded function (see <cit.>), which for x I→ is given as a sequence v(x,n) = sup_𝒫_n∑_k=1^n |x(I_k)|, where 𝒫_n denotes the set of all n-element families of nonoverlapping intervals of I. 
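As a quick sanity check of this definition (our observation, not part of <cit.>), note that monotone functions have a constant modulus of variation: if x is nondecreasing, then for any nonoverlapping intervals I_k=[s_k,t_k] we have ∑_k=1^n |x(I_k)|=∑_k=1^n (x(t_k)-x(s_k))≤ x(1)-x(0), with equality already for the single interval [0,1], hence v(x,n)=x(1)-x(0) for all n∈. In particular, monotone functions satisfy v(x,n)=O(g(n)) for every admissible g, and the constant sequence v(x,n) is indeed nondecreasing and concave, in accordance with the characterization recalled below.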
In the aforementioned paper <cit.> the author introduced the set of functions V[g], for a given sequence g→, as the family of those functions for which v(x,n) = O(g(n)). These classes are now called Chanturia (or Chanturiya) classes in the literature. One of the statements (see Theorem 1 in the mentioned paper) was that the necessary and sufficient condition for a sequence to be v(x,n) for some function x is that it is nondecreasing and concave. These classes have since been studied in many papers, mainly in relation to the convergence of Fourier series and to other families of functions of bounded variation (see especially the relation between Chanturia classes and Waterman spaces given by Avdispahić in <cit.>). The sequences defining Chanturia classes are defined as real-valued sequences but, as we can see, we may replace any real-valued sequence h(n) with the sequence g(n) = ⌈ h(n)⌉, which takes values in the natural numbers and has the same asymptotics as n→+∞. One more important observation for nondecreasing and concave sequences h(n), as considered in the context of Chanturia classes, is that for them n/h(n) is nondecreasing (as required by the definition of the family 𝒢 above). Without loss of generality we may assume that h(0)=0, and then concavity gives that for each k∈{1,...,n-1} we have h(k) = h(k/nn + (1-k/n)0) ≥k/n h(n) + (1-k/n)h(0) = k/nh(n), which gives h(k)/k≥h(n)/n, as desired. The following are equivalent for every g,h∈𝒢: (a) g(n) = O(h(n)), i.e., there is η>0 such that g(n)/h(n)≤η for all n∈; (b) ϕ_h≼ϕ_g; (c) ϕ_h≼_mϕ_g; (d) (ϕ_g)⊆(ϕ_h); (e) (ϕ_g)⊆(ϕ_h); (f) (ϕ_g)⊆(ϕ_h); (g) (ϕ_g)⊆(ϕ_h); (h) (ϕ_g)⊆(ϕ_h); (i) (ϕ_g)⊆(ϕ_h); (j) (ϕ_g)⊆(ϕ_h); (k) (ϕ_g)⊆(ϕ_h); (l) (ϕ_g)⊆(ϕ_h); (m) (ϕ_g)⊆(ϕ_h). (a)(b): We claim that ϕ_h(x)≤ηϕ_g(x) for every x∈ℝ^. Indeed, for every n∈ we have: ∑_i=1^n |x_i|/h(n)≤η∑_i=1^n |x_i|/g(n). Items (b), (g), (h) and (i) are equivalent thanks to Theorem <ref>. The implication (b)(c) is obvious. Items (c), (j), (k), (l) and (m) are equivalent thanks to Theorem <ref>, since for any g∈𝒢 and any x,y∈ℝ^ such that ∑_i=1^n |x_i|≤∑_i=1^n |y_i| for all n∈, it is easy to see that ϕ_g(x)≤ϕ_g(y). By Proposition <ref>, (b)(d), (b)(e), (d)(f) and (e)(f). Therefore, we only need to show (f)(a) and (c)(a). (c)(a): Assume that (a) does not hold. We need to show that (c) does not hold, i.e., for every M>0 there is a finite nonincreasing sequence x such that ϕ_h(x)>Mϕ_g(x). Fix M>0. Since (a) does not hold, there is n∈ such that g(n)/h(n)>M. Define x=(x_i)∈ℝ^ by: x_i=g(n)/n, if i≤ n, 0, otherwise. Then x is nonincreasing and ϕ_h(x)≥∑_i=1^n |x_i|/h(n)=g(n)/h(n)>M. Now we will show that ϕ_g(x)≤ 1, which will finish the proof. Fix m∈. There are three possibilities: * If m=n, then ∑_i=1^n |x_i|/g(m)=1. * If m>n, then ∑_i=1^n |x_i|/g(m)=g(n)/g(m)≤ 1, since g is nondecreasing. * If m<n, then ∑_i=1^m |x_i|/g(m)=g(n)/n/g(m)/m≤ 1, since n/g(n) is nondecreasing. (f)(a): Assume that (a) does not hold. We will use Proposition <ref>(c) to show that (f) does not hold. Fix any k∈. We are looking for F∈ such that ϕ_g(F)≤1/2^k and ϕ_h(F)≥ 2^k. Let δ>1 be such that 1/g(1)≥1/δ. Note that i/g(i)≥1/δ for all i∈ (by the fact that i/g(i) is nondecreasing). Since (a) does not hold, there is n∈ such that g(n)/h(n)>2^k 2^k+1δ. Actually, there are infinitely many such n (given one such n we can always find n'∈ with g(n')/h(n')>g(n)/h(n)>2^k 2^k+1δ). Thus, without loss of generality we may assume that n is big enough to guarantee that 1/g(n)<1/2^kδ<1/2^k (since lim_n g(n)=∞).
Find j∈ such that 2^j+1δ≥ g(n)>2^jδ and note that j≥ k (as 1/g(n)<1/2^kδ). Moreover, since n/g(n)≥1/δ, we have n≥g(n)/δ>2^j. Hence, we can find a≤ n such that n-a=2^j-k. Define F={n-a+1,n-a+2,…,n}. Then ϕ_h(F)≥n-a/h(n)>n-a/g(n)2^k 2^k+1δ>2^j-k/2^j+1δ2^k 2^k+1δ=2^k. In order to finish the proof, we need to show that ϕ_g(F)=sup_m∈|F∩{1,…,m}|/g(m)≤1/2^k. Fix m∈. There are four possibilities: * If m=n, then |F∩{1,…,m}|/g(m)=n-a/g(n)<2^j-k/2^jδ<1/2^k. * If m>n, then |F∩{1,…,m}|/g(m)=n-a/g(m)≤n-a/g(n)<1/2^k, since g is nondecreasing. * If m≤ a, then |F∩{1,…,m}|/g(m)=0. * If a<m<n, then |F∩{1,…,m}|/g(m)=m-a/g(m)=m/g(m)-a/g(m)≤n/g(n)-a/g(n)=n-a/g(n)<1/2^k, since n/g(n) and g are nondecresing. § PARTICULAR CASE II: SUMMABLE IDEALS AND SPACES In this Section we are interested in lsc submeasures of the form ϕ_A(C)=∑_n∈ Ca_n, where A=(a_n) is a Waterman sequence (i.e., A is a nonincreasing sequence of positive real numbers such that ∑_n∈a_n=∞). Note that in this case ϕ is actually a measure and ϕ̂_A(x)=∑_n∈a_i|x_i|. In such case: * (ϕ_A)=(ϕ_A) is the summable ideal _A given by the sequence A (see Definition <ref> and <cit.>); * (ϕ_A)=(ϕ_A) and (ϕ_A)=(ϕ_A) (by the previous item, Proposition <ref> and Remark <ref>); * (ϕ_A) is the space of functions of bounded A-variation. We will need the following result. Assume we have two Waterman sequences A=(a_n)_n∈ and B=(b_n)_n∈. The following statements are equivalent: (a) _A≤_K_B; (b) There exist positive numbers m,M and a partition of into consecutive nonempty finite intervals (I_n)_n∈ such that ma_n≤∑_i∈ I_nb_i ≤ Ma_n for every n∈; (c) There exists M>0 such that ∑_i=1^k b_i≥ M∑_i=1^l a_i ⇒ b_k≤ Ma_l for all k,l∈. Assume we have two Waterman sequences A=(a_n)_n∈ and B=(b_n)_n∈. The following statements are equivalent: (a) ∑_i=1^n b_i = O(∑_i=1^n a_i), i.e., there is η>0 such that ∑_i=1^n b_i/∑_i=1^n a_i≤η for all n∈ N; (b) _A≤_K_B; (c) ϕ_B≼_mϕ_A; (d) ⊆; (e) (ϕ_A)⊆(ϕ_B). (f) (ϕ_A)⊆(ϕ_B); (g) (ϕ_A)⊆(ϕ_B); The equivalence of items (c)-(g) follows from Theorem <ref>, since given any Waterman sequence A=(a_n)_n∈ and any x,y∈ℝ^ such that ∑_i=1^n |x_i|≤∑_i=1^n |y_i| for all n∈, using the fact that (a_n) is nonincreasing we have: ∑_k=1^n a_k|x_k|=∑_k=1^n-1(∑_i=1^k |x_i|)(a_k-a_k+1)+a_n∑_i=1^n|x_i|≤ ≤∑_k=1^n-1(∑_i=1^k |y_i|)(a_k-a_k+1)+a_n∑_i=1^n|y_i|=∑_k=1^n a_k|y_k|. Hence, ϕ_A(x)≤ϕ_A(y). The equivalence of items (a) and (d) is proved in <cit.>, so it remains to show that (a) and (b) are equivalent. (b)⇒ (a): By the previous theorem, we can find M>0 and a partition of into consecutive nonempty intervals (I_n)_n∈ such that for all n∈ we have ∑_i∈ I_nb_i/a_n≤ M , thus ∑_i=1^max I_n b_i/∑_i=1^n a_i≤ M. Let us put η=M and take n∈. Then there exists j∈ such that n∈ I_j. It follows that n≥ j, thus ∑_i=1^n b_i/∑_i=1^n a_i≤∑_i=1^max I_j b_i/∑_i=1^j a_i≤ M=η. (a)⇒ (b): Suppose that _A≰_K_B and fix η>0. Let M=2η. By the previous theorem, we can find k,l∈ such that ∑_i=1^k b_i≥ M∑_i=1^l a_i and b_k> M a_l. We have two cases. If k≤ l then ∑_i=1^k b_i/∑_i=1^k a_i≥∑_i=1^k b_i/∑_i=1^l a_i≥ M> η. If k>l then ∑_i=l+1^k a_i≤∑_i=l+1^k a_l< ∑_i=l+1^k b_k/M≤∑_i=l+1^k b_i/M≤∑_i=1^kb_i/M, thus ∑_i=1^k a_i=∑_i=1^l a_i+∑_i=l+1^k a_i<∑_i=1^k b_i/M+∑_i=1^kb_i/M=2∑_i=1^kb_i/M, hence ∑_i=1^k b_i/∑_i=1^k a_i > M/2=η. Therefore, in both cases condition (a) does not hold. 
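To see condition (a) in action, compare the Jordan sequence with the harmonic one (a standard example of ours, not taken from the paper): for a_n≡1 and b_n=1/n we have ∑_i=1^n b_i≤1+ln n≤ n=∑_i=1^n a_i, so (a) holds with η=1 and the theorem recovers the classical inclusion of the Jordan space of functions of bounded variation into the harmonic Waterman space. Consistently, the ideal given by a_n≡1 is the ideal of all finite sets, which lies below any ideal in the Katětov order.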
§ COMPARISON OF TWO PARTICULAR CASES The next theorem is an interesting comment on the paper <cit.>, which gives various inclusions between Chanturia classes and Waterman spaces: it is a general proof that many of the inclusions given there are strict (those for which the assumption on the monotonicity of (g(n+1)-g(n)) is satisfied). Let g∈𝒢 be such that (g(n+1)-g(n)) monotonically tends to 0. Then (ϕ_g) is not equal to any . Let A=(a_n) be a Waterman sequence. We will obtain the claim by showing that (ϕ_g)≠(ϕ_A) and applying Theorem <ref>. Define the sequence x=(x_n) by the formula ∑_i=1^n x_i= g(n). Clearly, x is nonincreasing, tends to 0 and belongs to (ϕ_g) as ϕ̂_g(x)=1. We have two cases. In the first case we assume that ∑_n=1^∞a_n x_n=∞. Then x∈(ϕ_g)∖(ϕ_A). In the second case we assume that ∑_n=1^∞a_n x_n<∞. Notice that then for every k∈ we have ∑_n=1^∞a_n(k· x_n)=k·∑_n=1^∞a_n x_n<∞. We will show that (ϕ_A)∖(ϕ_g)≠∅. We will now define recursively sequences (n_i) and (m_i) of natural numbers. We put as n_1 the smallest N such that 2·∑_n>N a_n x_n<1/4. Next, we put as m_1 the smallest N> n_1 such that 2 x_N≤ x_n_1. We can find such an N because lim_n→∞ x_n=0. Now, suppose we have already defined n_i and m_i for some i∈. Then we define n_i+1 as the smallest N>m_i such that (i+2) ∑_n>N a_n x_n<1/2^i+2 and (i+1)∑_n=m_i^N x_n≥i+1/2g(N). We can find such an N, since lim_j→∞∑_n>j a_n x_n=0 and (i+1)∑_n=1^j x_n=(i+1) g(j), which tends to infinity, thus lim_j→∞(i+1) ∑_n=m_i^j x_n/g(j)=i+1. Next, we define m_i+1 as the smallest N>n_i+1 such that (i+2)x_N≤ (i+1) x_n_i+1. We can now proceed to defining the sequence y=(y_n) such that y∈(ϕ_A)∖(ϕ_g), which will end the proof. We put y_n= x_n, if n≤ n_1, (i+1) x_m_i, if n_i<n<m_i for some i, (i+1)x_n, if m_i≤ n≤ n_i+1 for some i. First, notice that y is nonincreasing as (x_n) is nonincreasing and (i+1)x_m_i≤ i· x_n_i for every i∈. Next, observe that y∉(ϕ_g) as ϕ̂_g(y)≥∑_n=m_i^n_i+1 (i+1)x_n/g(n_i+1)≥i+1/2 for every i∈. Finally, we obtain y∈(ϕ_A) by the fact that ∑_n=1^∞ a_n y_n ≤∑_n≤ n_1a_n x_n + ∑_i=1^∞∑_n=n_i+1^n_i+1 a_n(i+1)x_n≤∑_n≤ n_1a_n x_n + ∑_i=1^∞∑_n>n_i(i+1)a_n x_n < ∑_n≤ n_1a_n x_n + ∑_i=1^∞1/2^i+1 = ∑_n≤ n_1a_n x_n +1/2<∞. The sequence g(n)=√(n) belongs to 𝒢 and is such that g(n+1)-g(n) monotonically tends to 0. Therefore, (ϕ_g) is not equal to any .
http://arxiv.org/abs/2407.02126v1
20240702101713
AI-driven Alternative Medicine: A Novel Approach to Drug Discovery and Repurposing
[ "Oleksandr Bilokon", "Nataliya Bilokon", "Paul Bilokon" ]
q-bio.BM
[ "q-bio.BM" ]
§ ABSTRACT AIAltMed is a cutting-edge platform designed for drug discovery and repurposing. It utilizes Tanimoto similarity to identify non-medicinal compounds that are structurally similar to known medicinal ones. This preprint introduces AIAltMed, discusses the concept of `AI-driven alternative medicine,' evaluates Tanimoto similarity's advantages and limitations, and details the system's architecture. Furthermore, it explores the benefits of extending the system to include PubChem and outlines a corresponding implementation strategy. § INTRODUCTION Drug discovery and repurposing are critical areas of pharmaceutical research. Traditional methods are often time-consuming and costly. AIAltMed (<http://aialtmed.com/>) presents a novel solution by utilizing AI and Tanimoto similarity (see <cit.>), also known as the Jaccard index (see <cit.>), to accelerate these processes. This platform not only aids in identifying potential new uses for existing drugs but also explores non-medicinal compounds with therapeutic potential, thus pioneering the field of AI-driven alternative medicine. § AI-DRIVEN ALTERNATIVE MEDICINE AI-driven alternative medicine is a novel approach that leverages artificial intelligence to identify non-medicinal compounds with structural similarities to medicinal compounds. By using Tanimoto similarity, AIAltMed can find compounds in databases like DrugBank (see <cit.>) and FooDB (see <cit.>) that may exhibit therapeutic effects. This method opens new avenues for discovering alternative treatments and broadening the scope of drug repurposing. This approach has been inspired by <cit.>, where an in silico drug repurposing pipeline was developed to identify drugs with the potential to inhibit SARS-CoV-2 replication, based on structural similarity to drugs already in clinical trials for COVID-19, leading to the identification of two candidate drugs for repurposing (triamcinolone and gallopamil). The subsequent analysis proposed ibid., based on affected pathways, is a natural next step after the similarity search. This approach has similarities to that discussed in <cit.>. A review of similarity-based approaches to molecular structure can be found in <cit.>. § TANIMOTO SIMILARITY AND SIMILARITY-BASED MOLECULE SEARCH Tanimoto similarity is a measure of the structural similarity between two molecules, calculated as the ratio of the intersection to the union of their molecular fingerprints. §.§ Advantages * Efficiently identifies structurally similar compounds. * Facilitates rapid screening of large compound libraries. * Helps in predicting biological activity based on structural similarity. §.§ Disadvantages * May overlook compounds with different structures but similar biological activity. * Dependent on the quality and completeness of the molecular fingerprint database. * Potential for high false positive rates if not carefully validated. Further merits and demerits of Tanimoto similarity are discussed in <cit.>.
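Since the text does not specify which molecular fingerprints AIAltMed employs, the following minimal sketch (ours, using RDKit Morgan fingerprints as an assumption) illustrates how such a Tanimoto screen of candidate compounds against a reference drug might look in practice:

# Hypothetical sketch of a Tanimoto screen; RDKit Morgan fingerprints are
# our assumption, not a documented AIAltMed implementation detail.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def fingerprint(smiles):
    # Parse a SMILES string and compute a 2048-bit Morgan fingerprint.
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError("cannot parse SMILES: " + smiles)
    return AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)

def screen(reference_smiles, candidates):
    # Rank candidate compounds by Tanimoto similarity to the reference.
    ref = fingerprint(reference_smiles)
    scores = [(name, DataStructs.TanimotoSimilarity(ref, fingerprint(smi)))
              for name, smi in candidates.items()]
    return sorted(scores, key=lambda pair: pair[1], reverse=True)

# Example: two food-derived compounds screened against aspirin.
print(screen("CC(=O)Oc1ccccc1C(=O)O",  # aspirin
             {"salicylic acid": "O=C(O)c1ccccc1O",
              "caffeine": "Cn1cnc2c1c(=O)n(C)c(=O)n2C"}))

In a production setting the candidate fingerprints would of course be precomputed once and stored, so that each query only pays for a single fingerprint computation.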
§ SYSTEM ARCHITECTURE The AIAltMed system is built using Django (see <cit.>), a high-level Python (see <cit.>) web framework, and caches the similarity table in memory for efficient retrieval and processing. §.§ Django Framework Django provides a robust and scalable framework for developing the AIAltMed platform, offering advantages such as rapid development, security features, and a comprehensive set of tools. Django is a high-level Python web framework that is well-suited for the development of the AIAltMed application due to its rapid development capabilities and robust feature set. One of the primary advantages of Django is its "batteries-included" philosophy, which provides a comprehensive set of tools and libraries out-of-the-box. This allows developers to focus on building the core functionality of AIAltMed without needing to spend extensive time on setting up and configuring ancillary components. Django's built-in admin interface is particularly useful for managing the application's data and user authentication, streamlining the process of creating and maintaining the backend. Additionally, Django's ORM (Object-Relational Mapping) system simplifies database interactions, enabling efficient handling of complex queries and ensuring data integrity. Another significant advantage of Django is its emphasis on security. Django includes numerous built-in security features, such as protection against SQL injection, cross-site scripting (XSS), cross-site request forgery (CSRF), and clickjacking. These features are essential for ensuring that the AIAltMed application can securely handle sensitive data, including proprietary chemical information and user credentials. Django's scalability is also noteworthy; it supports the development of applications ranging from simple websites to complex, data-intensive platforms like AIAltMed. The framework's ability to handle high traffic and large datasets ensures that AIAltMed can grow and scale as its user base and data volume expand. Furthermore, Django's extensive documentation and active developer community provide valuable resources and support, facilitating continuous improvement and troubleshooting. §.§ Fuzzy Search Implementation The Django/Python implementation of the AIAltMed system leverages fuzzy search functionality provided by the thefuzz library, a popular Python package available on GitHub. thefuzz (formerly known as fuzzywuzzy) is an effective tool for approximate string matching. It is widely used in applications where exact string matches are not always possible or practical. By employing the Levenshtein distance algorithm, thefuzz can calculate the similarity between two strings, allowing the AIAltMed system to find and rank potential matches based on their resemblance. This capability is crucial for searching large databases where slight variations in data entry or terminology might otherwise lead to missed results. Integrating thefuzz into the AIAltMed system enhances its ability to accurately retrieve relevant compounds, even when the search terms are not an exact match to the database entries. This is particularly beneficial in the context of drug discovery and repurposing, where the chemical names and descriptions might have minor inconsistencies. The use of fuzzy search ensures that researchers can identify all pertinent compounds without being hindered by minor discrepancies in naming conventions.
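As an illustration, a minimal fuzzy-search sketch using thefuzz might look as follows; the compound names, query, and score threshold are illustrative assumptions rather than the platform's actual configuration.

# Minimal fuzzy-search sketch with thefuzz (illustrative configuration).
from thefuzz import fuzz, process

compound_names = [
    "acetylsalicylic acid",
    "acetaminophen",
    "epigallocatechin gallate",
]

# Rank database entries by similarity to a possibly misspelled query.
query = "acetylsalicilic acid"
matches = process.extract(query, compound_names, scorer=fuzz.ratio, limit=3)
for name, score in matches:
    if score >= 80:  # illustrative cut-off for acceptable matches
        print(name, score)

Here process.extract returns the highest-scoring entries, so minor spelling variations in user queries still resolve to the intended compounds.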
Overall, thefuzz significantly improves the robustness and reliability of the AIAltMed system's search capabilities, making it a more effective tool for identifying potential therapeutic compounds. §.§ In-memory Caching Caching the similarity table in memory (see <cit.>) allows for fast access and processing, significantly improving the performance of similarity searches. In-memory caching is a powerful technique that significantly enhances the performance and efficiency of web applications by storing frequently accessed data in memory rather than on disk. This approach reduces the latency associated with retrieving data from traditional storage systems, enabling faster response times and improving the overall user experience. For the AIAltMed application, in-memory caching plays a crucial role in managing the similarity table used for compound searches. By keeping this data readily available in memory, the system can quickly perform complex similarity calculations and retrieve relevant results without the overhead of repeated database queries. This is particularly important given the potentially large size of the similarity table and the need for rapid, real-time access during user interactions. The use of in-memory caching also contributes to the scalability and reliability of the AIAltMed system. As the number of users and the volume of data grow, the demand on the backend infrastructure increases. In-memory caching helps to mitigate this by offloading a significant portion of the read operations from the database to the cache, thereby reducing the load on the database and improving its overall performance. Additionally, in-memory caching solutions, such as Redis or Memcached, offer features like data persistence, replication, and failover, which enhance the system's fault tolerance and data durability. These features ensure that the AIAltMed application remains responsive and available even under high load conditions or in the event of hardware failures. By leveraging in-memory caching, AIAltMed can maintain high performance and reliability, supporting its mission to provide efficient and effective drug discovery and repurposing solutions. § EXTENDING TO PUBCHEM Extending AIAltMed to include all of PubChem (see <cit.>) would vastly increase the potential for discovering novel therapeutic compounds. PubChem is one of the largest repositories of chemical information, and integrating it into AIAltMed could enhance the system's ability to identify relevant compounds. §.§ Advantages * Access to a more extensive and diverse chemical library. * Increased likelihood of identifying novel compounds with therapeutic potential. * Enhanced validation and cross-referencing capabilities. §.§ Implementation Strategy * Update the similarity search algorithm to handle larger datasets efficiently. * Implement scalable storage solutions to manage the increased data volume. * Optimize the in-memory caching mechanism to ensure performance is maintained. * Collaborate with PubChem to ensure data integration and consistency. § SCALING UP Scaling up AIAltMed, given its current architecture, involves several strategic enhancements across various components of the system. Firstly, to handle increased user demand and data volume, the backend infrastructure must be robust and scalable. This can be achieved by leveraging cloud computing platforms such as AWS, Azure, or Google Cloud, which offer scalable compute and storage resources.
Using these platforms, AIAltMed can dynamically allocate resources based on demand, ensuring that the application remains responsive even during peak usage. Additionally, implementing containerization with Docker and orchestration with Kubernetes can streamline the deployment process, allowing the system to scale horizontally by adding more instances of the Django application as needed. Incorporating advanced database management techniques is another critical step in scaling AIAltMed. The current use of in-memory caching with Redis or Memcached significantly enhances performance by reducing latency in data retrieval. To further improve scalability, the database architecture could adopt a distributed database system such as Apache Cassandra or Google Bigtable, which can handle large-scale data across multiple nodes, ensuring high availability and fault tolerance. Furthermore, optimizing the existing database queries and indexing strategies can minimize the load on the database and speed up data access times, making the system more efficient. Finally, integrating machine learning models like Graph Neural Networks (GNNs) can enhance the system’s capability to process and analyze large datasets effectively. GNNs can provide deeper insights into molecular interactions and predict the therapeutic potential of compounds more accurately. To support the computational requirements of these models, AIAltMed can utilize GPU-accelerated computing or specialized hardware like TPUs (Tensor Processing Units) available on cloud platforms. Additionally, implementing a microservices architecture can decouple different functionalities of the application, allowing independent scaling of each service. For instance, the fuzzy search service using thefuzz can be scaled independently from the main Django application, ensuring that each component operates efficiently under increased load. These strategies collectively enable AIAltMed to scale effectively, accommodating growth in user base, data volume, and computational complexity. § CONCLUSION AIAltMed marks a substantial leap in drug discovery and repurposing. Utilizing AI and Tanimoto similarity, it efficiently identifies both medicinal and non-medicinal compounds with therapeutic potential. Expanding the system to integrate PubChem will significantly enhance its capabilities, offering an invaluable resource for researchers and clinicians in developing new treatments. This innovative approach underscores the potential of AI-driven alternative medicine to revolutionize pharmaceutical research and personalized healthcare. Future development of AIAltMed could explore the integration of graph neural networks (GNNs) <cit.> to enhance the system's ability to identify and predict the therapeutic potential of compounds. GNNs, which are designed to process data structured as graphs, can be particularly effective for analyzing molecular structures, which are naturally represented as graphs with atoms as nodes and chemical bonds as edges. By leveraging GNNs, AIAltMed can more accurately model the complex relationships and interactions within molecular structures, potentially uncovering new insights into how different compounds might interact with biological targets. This advanced modeling could improve the system's ability to predict biological activity and identify novel drug candidates, thus extending the platform's utility in drug discovery and repurposing. 
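As a sketch of the graph-based direction outlined above, the following shows how a molecule could be encoded as a graph and passed through a small graph network. The use of PyTorch Geometric, the featurization, and the model architecture are illustrative assumptions; they are not part of the current AIAltMed implementation.

# Sketch: a molecule as a graph (atoms = nodes, bonds = edges) fed to a GNN.
# Illustrative only; assumes PyTorch Geometric is installed.
import torch
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv, global_mean_pool

# Ethanol (SMILES: CCO): one-hot node features over [C, O]; undirected bonds.
x = torch.tensor([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # C, C, O
edge_index = torch.tensor([[0, 1, 1, 2], [1, 0, 2, 1]])  # C-C, C-O (both directions)
mol = Data(x=x, edge_index=edge_index)

class MolGNN(torch.nn.Module):
    def __init__(self, in_dim=2, hidden=32):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.out = torch.nn.Linear(hidden, 1)  # e.g. a predicted activity score

    def forward(self, data):
        h = self.conv1(data.x, data.edge_index).relu()
        h = self.conv2(h, data.edge_index).relu()
        # Pool node embeddings into a single molecule-level embedding.
        batch = torch.zeros(data.num_nodes, dtype=torch.long)
        return self.out(global_mean_pool(h, batch))

score = MolGNN()(mol)  # one scalar per molecule

Such a model would have to be trained on labelled activity data before its predictions could complement the Tanimoto-based search.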
Additionally, AIAltMed could benefit from expanding its dataset to include more diverse and comprehensive chemical libraries. Incorporating data from additional databases beyond DrugBank, FooDB, and PubChem, such as ChEMBL <cit.> or ZINC <cit.>, would provide a richer set of molecular structures for analysis. Coupled with enhanced computational techniques like GNNs, this expanded dataset could enable the system to identify more subtle patterns and relationships within the data, leading to more robust predictions and discoveries. Furthermore, AIAltMed could explore the use of federated learning to collaborate with other research institutions and pharmaceutical companies. This approach would allow the platform to leverage vast amounts of distributed data while maintaining data privacy and security, fostering innovation and accelerating the discovery of new therapeutic compounds. § ACKNOWLEDGEMENTS This work was performed at Thalesians Marine Ltd, a private limited company registered in England and Wales with company number 12147626 with registered office address 3rd Floor, 120 Baker Street, London, England, W1U 6TU and trading address Level39, One Canada Square, Canary Wharf, London, England, E14 5AB. Level39, located in the heart of London's Canary Wharf, is one of the world's most connected technology hubs, fostering innovation and growth for startups and scaleups in the fields of finance, cybersecurity, retail, and smart cities. As a member of the Level39 community, Thalesians Marine Ltd benefits from access to a rich ecosystem of industry experts, investors, and mentors who provide invaluable support and guidance. The authors would like to thank Thalesians Marine Ltd and Level39 for creating a working environment conducive to innovative research and development work. § DISCLAIMER The information provided in this work is for general informational purposes only. It is not intended as a substitute for professional medical advice, diagnosis, or treatment. Always seek the advice of your physician or other qualified healthcare provider with any questions you may have regarding a medical condition. Never disregard professional medical advice or delay in seeking it because of something you have read in this work. Reliance on any information provided in this work is solely at your own risk. The authors of this work do not assume any responsibility or liability for any injury, damage, or other adverse consequences resulting from the use or misuse of the information provided herein. Furthermore, the information in this work may not be applicable to your particular medical condition or health concern. It is important to consult with a qualified healthcare professional before making any decisions about your health or medical treatment. The inclusion of any links to preprints, research papers, and third-party websites does not imply endorsement or recommendation by the authors. We are not responsible for the content or accuracy of any preprints, research papers, and third-party websites referenced from this work. Please consult with your healthcare provider if you have any questions or concerns about your health or medical condition. [Bajusz et al.(2015)Bajusz, Rácz, and Héberger]Bajusz2015 Dávid Bajusz, Anita Rácz, and Károly Héberger. Why is the Tanimoto index an appropriate choice for fingerprint-based similarity calculations? Journal of Cheminformatics, 7(1), May 2015. ISSN 1758-2946. doi: 10.1186/s13321-015-0069-3.
[Bero et al.(2017)Bero, Muda, Choo, Muda, and Pratama]Bero2017 S A Bero, A K Muda, Y H Choo, N A Muda, and S F Pratama. Similarity measure for molecular structure: A brief review. Journal of Physics: Conference Series, 892: 012015, September 2017. ISSN 1742-6596. doi: 10.1088/1742-6596/892/1/012015. [Foundation(2005)]django Django Software Foundation. Django: The web framework for perfectionists with deadlines. <https://www.djangoproject.com/>, 2005. Accessed: 2024-07-02. [Foundation(1991)]python Python Software Foundation. Python programming language. <https://www.python.org/>, 1991. Accessed: 2024-07-02. [Gaulton et al.(2012)Gaulton, Bellis, Bento, Chambers, Davies, Hersey, Light, McGlinchey, Michalovich, Al-Lazikani, and Overington]gaulton2012chembl Anna Gaulton, Louisa J Bellis, A Patricia Bento, Jon Chambers, Mark Davies, Anne Hersey, Yvonne Light, Sean McGlinchey, David Michalovich, Bissan Al-Lazikani, and John P Overington. ChEMBL: a large-scale bioactivity database for drug discovery. Nucleic Acids Research, 40(D1): D1100–D1107, 2012. [Irwin et al.(2012)Irwin, Sterling, Mysinger, Bolstad, and Coleman]irwin2012zinc John J Irwin, Teague Sterling, Michael M Mysinger, E Lilly Bolstad, and Tara P Coleman. ZINC 15 – ligand discovery for everyone. Journal of Chemical Information and Modeling, 52(7): 1757–1768, 2012. [Jaccard(1901)]jaccard1901distribution Paul Jaccard. Étude comparative de la distribution florale dans une portion des Alpes et du Jura. Bulletin de la Société Vaudoise des Sciences Naturelles, 37: 547–579, 1901. [Kim et al.(2022)Kim, Chen, Cheng, Gindulyte, He, He, Li, Shoemaker, Thiessen, Yu, Zaslavsky, Zhang, and Bolton]Kim2022 Sunghwan Kim, Jie Chen, Tiejun Cheng, Asta Gindulyte, Jia He, Siqian He, Qingliang Li, Benjamin A Shoemaker, Paul A Thiessen, Bo Yu, Leonid Zaslavsky, Jian Zhang, and Evan E Bolton. PubChem 2023 update. Nucleic Acids Research, 51(D1): D1373–D1380, October 2022. ISSN 1362-4962. doi: 10.1093/nar/gkac956. [Knox et al.(2023)Knox, Wilson, Klinger, Franklin, Oler, Wilson, Pon, Cox, Chin, Strawbridge, Garcia-Patino, Kruger, Sivakumaran, Sanford, Doshi, Khetarpal, Fatokun, Doucet, Zubkowski, Rayat, Jackson, Harford, Anjum, Zakir, Wang, Tian, Lee, Liigand, Peters, Wang, Nguyen, So, Sharp, da Silva, Gabriel, Scantlebury, Jasinski, Ackerman, Jewison, Sajed, Gautam, and Wishart]Knox2023 Craig Knox, Mike Wilson, Christen M Klinger, Mark Franklin, Eponine Oler, Alex Wilson, Allison Pon, Jordan Cox, Na Eun (Lucy) Chin, Seth A Strawbridge, Marysol Garcia-Patino, Ray Kruger, Aadhavya Sivakumaran, Selena Sanford, Rahil Doshi, Nitya Khetarpal, Omolola Fatokun, Daphnee Doucet, Ashley Zubkowski, Dorsa Yahya Rayat, Hayley Jackson, Karxena Harford, Afia Anjum, Mahi Zakir, Fei Wang, Siyang Tian, Brian Lee, Jaanus Liigand, Harrison Peters, Ruo Qi (Rachel) Wang, Tue Nguyen, Denise So, Matthew Sharp, Rodolfo da Silva, Cyrella Gabriel, Joshua Scantlebury, Marissa Jasinski, David Ackerman, Timothy Jewison, Tanvir Sajed, Vasuk Gautam, and David S Wishart. DrugBank 6.0: the DrugBank knowledgebase for 2024. Nucleic Acids Research, 52(D1): D1265–D1275, November 2023. ISSN 1362-4962. doi: 10.1093/nar/gkad976. [MacMahon et al.(2023)MacMahon, Hwang, Yim, MacMahon, Abraham, Barton, Tharmakulasingam, Bilokon, Gaddi, and Han]MacMahon2023 Méabh MacMahon, Woochang Hwang, Soorin Yim, Eoghan MacMahon, Alexandre Abraham, Justin Barton, Mukunthan Tharmakulasingam, Paul Bilokon, Vasanthi Priyadarshini Gaddi, and Namshik Han.
An in silico drug repurposing pipeline to identify drugs with the potential to inhibit SARS-CoV-2 replication. Informatics in Medicine Unlocked, 43: 101387, 2023. ISSN 2352-9148. doi: 10.1016/j.imu.2023.101387. [Metzger & Leymann(2011)Metzger and Leymann]metzger2011in Andreas Metzger and Frank Leymann. In-Memory Data Management: Technology and Applications. Springer, 2011. [Szilágyi et al.(2021)Szilágyi, Flachner, Hajdú, Szaszkó, Dobi, Lőrincz, Cseh, and Dormán]Szilagyi2021 Katalin Szilágyi, Beáta Flachner, István Hajdú, Mária Szaszkó, Krisztina Dobi, Zsolt Lőrincz, Sándor Cseh, and György Dormán. Rapid identification of potential drug candidates from multi-million compounds' repositories: combination of 2D similarity search with 3D ligand/structure-based methods and in vitro screening. Molecules, 26(18): 5593, September 2021. ISSN 1420-3049. doi: 10.3390/molecules26185593. [Tanimoto(1958)]tanimoto1958elementary T.T. Tanimoto. An elementary mathematical theory of classification and prediction. International Business Machines Corporation, New York, pp. 1–16, 1958. [Wishart et al.(2018)Wishart, Guo, Oler, Wang, Anjum, Peters, Liang, Vázquez-Fresno, Sajed, Johnson, Karu, Sayeeda, Lo, Gautam, Torres-Calzada, Hameed, LeVatte, Forsythe, and Salek]FooDB David S. Wishart, An Chi Guo, Eponine Oler, Fan Wang, Amina Anjum, Justin Peters, Kai Liang, Rosa Vázquez-Fresno, Tanvir Sajed, Daniel Johnson, Naila Karu, Zeinab Sayeeda, Emmeline Lo, Vineet Gautam, Cristhian Torres-Calzada, Imran Hameed, Michael LeVatte, Ian Forsythe, and Reza M. Salek. FooDB: The food database. <http://foodb.ca>, 2018. Accessed: 2024-07-02. [Zhou et al.(2020)Zhou, Cui, Hu, Zhang, Yang, Liu, and Sun]zhou2020graph Jie Zhou, Guodong Cui, Shengding Hu, Zhiyuan Zhang, Cheng Yang, Zhiyuan Liu, and Maosong Sun. Graph neural networks: A review of methods and applications. AI Open, 1: 57–81, 2020.
http://arxiv.org/abs/2407.03277v1
20240703170417
Evaluating Automatic Metrics with Incremental Machine Translation Systems
[ "Guojun Wu", "Shay B. Cohen", "Rico Sennrich" ]
cs.CL
[ "cs.CL" ]
Evaluating Automatic Metrics with Incremental Machine Translation Systems Guojun Wu, Shay B. Cohen, Rico Sennrich July 8, 2024 =================== § ABSTRACT We introduce a dataset comprising commercial machine translations, gathered weekly over six years across 12 translation directions. Since human A/B testing is commonly used, we assume commercial systems improve over time, which enables us to evaluate machine translation (MT) metrics based on their preference for more recent translations. Our study confirms several previous findings in MT metrics research and demonstrates the dataset's value as a testbed for metric evaluation. We release our code.[<https://github.com/gjwubyron/Evo>] § INTRODUCTION Automatic metrics for machine translation (MT) are typically assessed by measuring their correlation with or accuracy with respect to human judgments <cit.>. However, human evaluation is resource-intensive and time-consuming, and the number of translation systems included in a meta-evaluation tends to be relatively small. In this study, we explore the use of commercial machine translations, collected weekly over a period of 6 years for 12 translation directions, for the evaluation of MT metrics. Given the common use of human A/B testing <cit.>, our base assumption is that commercial systems show real improvements over time and that we can therefore assess metrics by whether they prefer more recent MT outputs. Using our dataset, we revisit a number of recent findings in MT metrics research, and find that our dataset supports these. <cit.> revealed that neural metrics exhibit significantly higher correlation with human judgments compared to non-neural ones. In our experiments, we analyze metric scores over time and evaluate metrics' ability to accurately rank MT systems. Our findings demonstrate that neural metrics show a more consistent upward trend, and achieve higher accuracy than non-neural metrics. <cit.> demonstrated that the correlation between metrics and human judgments significantly decreased when considering only the top-performing systems. However, the limited number of MT systems (typically 10–15 MT systems per language pair) made it difficult to fully confirm this trend <cit.>. We revisit this finding using a larger sample and observe that the correlation tends to decrease for many language pairs as the quality of evaluated systems improves. High-quality synthetic references were found to produce a stronger correlation between human evaluations and metrics compared to human-generated references <cit.>. We reexamine the effect of using synthetic references with three language pairs and find that synthetic references can result in comparable correlation. § BACKGROUND AND RELATED WORK Designed to directly predict human judgments, trained metrics <cit.> correlate with those judgments substantially better than non-neural metrics like BLEU <cit.>. Recent research <cit.> reveals that these trained metrics can also generalize to new domains and challenge sets. <cit.> assessed the stability of metrics across top-N MT systems, and noticed that the correlation between metric and human scores diminished as N decreased. A subsequent investigation <cit.> suggested that the decrease might be due to instability of small samples. They employed a rolling window of N systems, moving from the worst to the best systems, and found that the correlation is unstable for small samples. Besides, due to the limited number of MT systems, they could not determine if metric reliability decreases as the quality of MT systems improves.
In the WMT23 Metrics shared task <cit.>, human translations received unexpectedly low ratings, which prompted an investigation into using synthetic references as a potential alternative. It was found that high-quality synthetic references led to a stronger correlation between human judgments and metrics than human references. Instead of evaluating metrics through comparison with human judgment, <cit.> explored a complementary approach by correlating metrics with the outcome of downstream tasks. Similarly, our study does not use human judgment directly; instead, we evaluate metrics based on their preference for newer MT outputs. § METHODS §.§ Data The original corpus contains sentences in English from Abstract Meaning Representation (AMR) Annotation Release 2.0 <cit.>, along with their German, Italian, Spanish, and Chinese translations developed by <cit.>. This corpus contains 1371 sentences per language. The source sentences were mainly drawn from content gathered in the news domain. Translations[Due to the origin of the translations, the data we use is to be licensed by the Linguistic Data Consortium. If you would like to use this dataset for your research, please contact the authors. The collection of the data continues.] are gathered weekly from May 2018 to March 2024 using Google Translate from each of the five languages to the other four languages. Early experiments revealed that for English→Spanish, there was a substantial similarity between professional translations and those generated by the earliest systems (details in Appendix <ref>). Consequently, Spanish was removed from further investigation, reducing the number of language pairs to 12. As minimal variation was observed between consecutive weeks, we subsample for the following analysis, with consecutive systems being approximately one month apart. After removing duplicates (systems receiving identical scores across all metrics), we retained 56–63 systems per language pair. §.§ Metrics §.§.§ Surface-level overlap BLEU <cit.> measures n-gram overlap between the translation and its reference. We use corpus_bleu in SacreBLEU <cit.>. chrF <cit.> assesses the overlap between the characters of the translation and the reference. We use corpus_chrf in SacreBLEU. §.§.§ Embedding based BERTScore <cit.> derives contextual embeddings from BERT <cit.> models and computes cosine similarity between embeddings of the translation and the reference. We use the F1 score without TF-IDF weighting. §.§.§ Trained with human judgments COMET-20 <cit.> is trained on top of XLM-R <cit.> using Direct Assessments (DA) from WMT17 to WMT19. We utilize wmt20-comet-da. UniTE <cit.> is capable of evaluating translation outputs in source-only, reference-only, and source-reference-combined assessment scenarios. We use unite-mup. COMET-22 <cit.> is the current default model in COMET and trained on DA from WMT17 to WMT20. We use wmt22-comet-da. COMET-Kiwi <cit.> is a reference-free metric trained using DA from WMT17 to WMT20, and DA from the MLQE-PE corpus. We use wmt22-cometkiwi-da. MS-COMET-QE-22 <cit.> is a reference-free metric, extending COMET by Microsoft Research with proprietary data. § RESULTS §.§ How do metric scores change over time? While it is reasonable to expect that systems improve over time, how metric scores will reflect these improvements remains unclear. To investigate this, we visualize how metric scores vary over time for individual language pairs in Appendix <ref>. In general, upward trends are evident for the metrics across the language pairs.
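As context for these results, the following is a sketch of how one system's outputs might be scored with the metrics above; the example segments are illustrative, while the sacrebleu functions and the COMET checkpoint follow the descriptions in this section.

# Sketch: scoring one system's outputs (illustrative example data; assumes
# the sacrebleu and unbabel-comet packages).
import sacrebleu
from comet import download_model, load_from_checkpoint

sources = ["Der Bericht wurde gestern veröffentlicht."]
hypotheses = ["The report was published yesterday."]
references = ["The report was released yesterday."]

bleu = sacrebleu.corpus_bleu(hypotheses, [references])
chrf = sacrebleu.corpus_chrf(hypotheses, [references])

# COMET-22 (wmt22-comet-da), as used in the paper.
model = load_from_checkpoint(download_model("Unbabel/wmt22-comet-da"))
data = [{"src": s, "mt": h, "ref": r}
        for s, h, r in zip(sources, hypotheses, references)]
comet = model.predict(data, batch_size=8, gpus=0)

print(bleu.score, chrf.score, comet.system_score)

Repeating this for each weekly system yields the per-metric score trajectories visualized in the appendix.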
We use Spearman correlation to measure whether the upward trends are consistent. Metrics with higher correlation are deemed more reliable, as they better reflect the overall ranking of the systems. As illustrated in Figure <ref>, COMET-22, UniTE, COMET-20, and COMET-Kiwi consistently demonstrate high correlation across the language pairs. Among the remaining four metrics, we notice low correlations in specific language pairs, like BLEU and chrF in English→German or MS-COMET-QE-22 in Italian→English. §.§ How accurately can the metrics rank incremental systems? In this section, we evaluate metrics in a common scenario <cit.>: ranking a pair of systems. As we assume newer systems are better than old ones, accuracy <cit.> is adopted as follows. For each system pair, we calculate the difference of the metric scores (metricΔ) and the difference in time (timeΔ). Accuracy for a specific metric is calculated as the ratio of rank agreements between metric and time deltas to the total number of comparisons: Accuracy = |{sign(metricΔ) = sign(timeΔ)}| / |all system pairs| Since the systems span from 2018 to 2024, those separated by a substantial time interval might exhibit considerable quality gaps, potentially resulting in an overestimate of metric reliability <cit.>. Consequently, we only pair systems with a gap of less than a year. Even within such a timeframe, substantial improvements in quality are possible <cit.>. Table <ref> shows that trained metrics generally outperform non-trained metrics. For all system pairs, COMET-22 achieves the highest accuracy, followed by COMET-Kiwi. In contrast, MS-COMET-QE-22 struggles to attain high accuracy except for translation into Chinese. Among surface-level metrics, chrF outperforms BLEU, reflecting results in previous studies <cit.>, and achieves the highest accuracy for translation into English. We also examine performance for individual language pairs. Trained metrics exhibit high accuracy, yet no single metric excels across all pairs. More details in Appendix <ref>. §.§ Does the reliability of metrics depend on the quality of the systems evaluated? As mentioned in Section <ref>, metric reliability may decline as the quality of evaluated systems improves <cit.>. However, the limited number of MT systems made it difficult to fully confirm this trend <cit.>. We revisit this issue using a larger sample of MT systems. Following the approach of <cit.>, we implement a rolling window of N systems, transitioning from the earliest to the most recent ones. Using accuracy as explained in Section <ref>, we conduct tests with N varying from 24 to 40. Figure <ref> illustrates the results for N = 36, representing the identified scenarios. Different metrics display varying trends. For instance, in English→German, trained metrics show an upward trend, while surface-level metrics show a downward trend. A downward trend is most common, with each metric showing a clear decline across 7 or more language pairs. However, we also observe upward or relatively flat trends in the remaining language pairs. §.§ How will synthetic references impact the metrics' judgment? We generate synthetic references for three language pairs using another commercial MT system, DeepL, and examine their impact on metric evaluation. As depicted in Figure <ref>, we observe that for English→German, all metrics achieve a higher accuracy, while for the remaining language pairs, there are some drops. Overall, synthetic references lead to a comparable accuracy for the three language pairs we investigate.
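To make the evaluation procedure concrete, the following sketch computes both the Spearman correlation and the pairwise ranking accuracy defined above; the (date, score) tuples are illustrative stand-ins for one language pair's subsampled systems.

# Sketch: Spearman correlation and pairwise ranking accuracy (illustrative data).
from datetime import date
from itertools import combinations
from scipy.stats import spearmanr

systems = [  # (collection date, metric score), sorted chronologically
    (date(2018, 5, 1), 0.62), (date(2018, 11, 1), 0.66),
    (date(2019, 4, 1), 0.65), (date(2019, 9, 1), 0.70),
]

def sign(x):
    return (x > 0) - (x < 0)

# Only compare systems less than a year apart, as in the paper.
pairs = [(a, b) for a, b in combinations(systems, 2)
         if (b[0] - a[0]).days < 365]
agree = sum(sign(b[1] - a[1]) == sign((b[0] - a[0]).days) for a, b in pairs)
accuracy = agree / len(pairs)

# Spearman correlation between chronological rank and metric scores.
rho, _ = spearmanr(range(len(systems)), [score for _, score in systems])
print(f"accuracy = {accuracy:.2f}, spearman = {rho:.2f}")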
§ CONCLUSION We evaluated metrics based on their preference for newer translations, confirming many prior findings on MT metrics. Our dataset, covering 12 language pairs with at least 56 systems each, surpasses previous datasets that typically included only 3 pairs with around 15 systems each, providing larger-scale evidence for debated questions such as the relationship between MT quality and metric reliability. Additionally, the systems are incremental (a baseline compared to improvements developed by the same group), reflecting the most common use case of the metrics. We encourage the use of our dataset for future investigations into MT metrics or the development of MT quality over time. § LIMITATIONS Our study rests on the assumption that newer systems of Google Translate outperform older ones due to the quality assurance measures, including human testing, taken before deployment. Although this is a reasonable belief, it might not always be true. Recently, LLM-based evaluators have demonstrated great performance in evaluating MT systems. However, we have not included any LLM-based evaluators in this study because it would be costly to experiment with our extensive dataset. § METRIC SCORES FOR ENGLISH → SPANISH TRANSLATIONS Figure <ref> displays the scores of four different metrics for English→Spanish translations in our early experiments. Early systems achieved nearly perfect metric scores, whereas later systems displayed markedly lower scores. Upon closer examination of the human translations, we noticed that roughly 25% of them are identical to those of the early systems. This indicates the use of Google Translate in the professional translations. § METRIC SCORES OVER TIME Figure <ref> illustrates the findings regarding the change of metric scores over time. Generally, upward trends are evident for the metrics across language pairs. Furthermore, these trends sometimes appear as step-like progressions. Based on a visual inspection of the results, we have some interesting findings as follows: * Although there have been concerns that MT systems were optimized for BLEU, given its longstanding status as the primary evaluation metric, our findings suggest that the upward trends of BLEU are less consistent compared to other metrics. This observation might provide implicit evidence that BLEU is not solely used during system development. * The trajectories of BLEU and chrF exhibit a high degree of similarity, as do the trajectories of COMET-20, COMET-22, COMET-Kiwi, and UniTE. In contrast, BERTScore and MS-COMET-QE-22 follow distinct trajectories of their own. These similarities and discrepancies reflect the inherent properties of these metrics. BLEU and chrF both rely on measuring surface-level overlap, while BERTScore is unique in its reliance on contextual embeddings. As for the trained metrics, although they are all trained in a similar manner, MS-COMET-QE-22 was trained using entirely different data. * In certain language pairs, the trajectories of certain metrics may experience a downturn. For instance, noticeable troughs are observed for BLEU and chrF in English→German, Italian→German, and English→Italian; for BERTScore in English→German, German→Italian, and English→Italian; and for MS-COMET-QE-22 in Italian→English, Italian→German, and Chinese→English. On the other hand, the trajectories of the remaining metrics may occasionally exhibit bumps but do not show clear troughs. § ACCURACY ACROSS THE LANGUAGE PAIRS
http://arxiv.org/abs/2407.02850v1
20240703070614
Ok's relative converse theorem for square integrable representations
[ "Nadir Matringe" ]
math.RT
[ "math.RT", "math.NT", "22E50, 11F70" ]
Ok's relative converse theorem for square integrable representations Nadir Matringe. Institute of Mathematical Sciences, NYU Shanghai, 3663 Zhongshan Road North, Shanghai, 200062, China and Institut de Mathématiques de Jussieu-Paris Rive Gauche, Université Paris Cité, 75205, Paris, France. nrm6864@nyu.edu and matringe@imj-prg.fr N. Matringe July 8, 2024 ================ § ABSTRACT Let E/F be a quadratic extension of non Archimedean local fields and ψ:E/F→ℂ^× a non trivial character. Using harmonic analysis on Harish-Chandra Schwartz spaces, we prove that if n is an integer at least equal to two and π is a square integrable representation of _n(E) such that γ(1/2,π,π',ψ)=1 for any _n-1(F)-distinguished tempered representation π' of _n-1(E), then π is _n(F)-distinguished. § INTRODUCTION We recall that a complex smooth irreducible representation of _n(E) is called distinguished if it has a nonzero _n(F)-invariant linear form on its space, in which case this linear form lives in a one dimensional space by <cit.>. The result in the abstract, which we refer to as the (n,n-1) relative converse theorem for the pair (_n(E),_n(F)), was first obtained in the pioneering work <cit.> for _2(E). Hakim actually proves that the theorem holds for any generic unitary representation π of _2(E), and its statement was extended to all generic representations in <cit.>. For _3(E), it is a consequence of <cit.> that the (3,2) relative converse theorem holds for any generic unitary representation. However as soon as n≥ 4, it follows from <cit.> that one has to impose some important restrictions on π for such a theorem to hold, and actually that the best one can hope for such a statement to be valid is that π should be square integrable. It was actually proved by Ok in <cit.> that the (n,n-1) relative converse theorem holds when π is cuspidal, by taking generic unitary _n-1(F)-distinguished twists. One fundamental input of Ok's proof is Bernstein's abstract Plancherel formula (<cit.>) for _c(_n-1(F)\_n-1(E)). In <cit.>, a strategy was then proposed, using the classification of discrete series in terms of cuspidal representations, to reduce the (n,n-1) and actually the (n,n-2) relative converse theorems from the case of discrete series to Ok's case. Namely <cit.> proves that the (n,n-1) relative converse theorem holds for discrete series of _n(E) if one assumes that the discrete series representation π is conjugate selfdual. Following <cit.>, we proved with Offen in <cit.> that one can indeed reduce to the conjugate selfdual case for square integrable representations, except in the exceptional case where π is a generalized Steinberg representation of the form _2(ρ), where ρ is a cuspidal representation of some _r(E) for r≥ 2. In other words we obtained the result of the abstract as well as its (n,n-2) variant, except for the exceptional discrete series representations _2(ρ). This is due to the fact that the method of <cit.> relies on a trick coming from explicit properties of L functions and gamma factors, which does not work in this special case if one restricts to twists by generic distinguished representations of _n-1(E). In fact the proof of the (n,n) relative converse theorem is complete by <cit.>. Here we directly extend Ok's proof, using Bernstein's abstract Plancherel formula for the Harish-Chandra Schwartz space (_n-1(F)\_n-1(E)). This extension is not immediate and poses several technical difficulties. Let us set G_n:=_n(E), denote by Z_n its center, and by θ the Galois involution of E/F.
Let us also denote by U_n the unipotent radical of the standard parabolic subgroup of G_n of type (n-1,1). Fix a non trivial character ψ:E→ℂ^× trivial on F, and denote by the same letter the non degenerate character of the group of unipotent upper triangular matrices of G_n obtained from ψ in the usual way. The basic idea is to express Whittaker functions in the Whittaker model of a square integrable representation π of G_n as Fourier coefficients of matrix coefficients thanks to <cit.>. Then the main task of this paper is to prove that if f is a matrix coefficient of π, the function P_f:g ∈ G_n-1→∫_Z_n-1^θ∫_U_n f(uz(g,1))ψ^-1(u)dudz makes sense, and actually belongs to the Harish-Chandra Schwartz space (Z_n-1^θ\ G_n-1). Though quite delicate, this result is facilitated by the fact that the inner integral over U_n actually stabilizes. The Whittaker average of P_f is then of the form ∫_Z_n-1^θ W((zg,1))dz for W∈(π,ψ), and any function of the form ∫_Z_n-1^θ W((zg,1))dz can be obtained by this procedure. Once this is proved, we need to verify the absolute convergence of a certain integral considered in Section <ref>, to guarantee that Ok's argument extends to discrete series. To make our argument quicker, we use that the Plancherel measure of L^2(H_n-1\ G_n-1) is supported on tempered representations as proved in <cit.>, but this could have been avoided at the cost of more effort (see Remark <ref>). Maybe it is worth observing that whenever π is a square integrable representation of G_n and π' is a tempered representation of G_n-1, one has γ(1/2,π,π',ψ)=ϵ(1/2,π,π',ψ) so that the relative converse theorem can just as well be expressed in terms of epsilon factors. Actually this is probably more natural as it is known by <cit.> that ϵ(1/2,π,π',ψ)=1 whenever π and π' are irreducible distinguished, whereas γ(1/2,π,π',ψ) could be different from one even for π distinguished square integrable and π' distinguished generic, according to <cit.>. Though our result is optimal in terms of the class of π to which it applies (even if restated in terms of epsilon factors, see <cit.> again), one can imagine that the (n,n-2) version of it holds with a similar proof (see <cit.> and <cit.>), and more optimistically its (n,⌊ n/2 ⌋) version (see <cit.> for finite fields). Finally we mention that there is a similar converse theorem for linear models in place of Galois models. We intend to write the details of its closely similar proof later. The paper is organized as follows. In Section 2 we introduce the material needed on reductive groups, their symmetric pairs, their Harish-Chandra Schwartz spaces and their representations. In Section 3 we recall the abstract Fourier inversion formula for the Harish-Chandra Schwartz space of a Gelfand symmetric pair, following Bernstein. Section 4 is where we prove the main technical result on the map P_f explained above in the introduction. Finally we prove our main result Theorem <ref> in Section 5. §.§ Acknowledgement. We are grateful to Alberto Minguez for inviting us to Vienna University in June 2024, where the elaboration of this project started. § PRELIMINARIES §.§ Locally profinite groups The letter G always denotes a group, and Z or Z(G) its center. We write K≤ G for "K is a subgroup of G". We set G(k):={x^k, x∈ G}. If G is locally compact and totally disconnected, we denote by δ_G its modulus character, and by (G) the space of complex valued locally constant functions on G.
We denote by R and L the actions of G on (G), given respectively by right and left translation: R(g)f(x)=f(xg) and L(g)f(x)=f(g^-1x). If 𝒮 is a subspace of (G), we set 𝒮^K={f∈𝒮, ∀ k ∈ K, R(k)f=f} and 𝒮_K={f∈𝒮, ∀ (k,k')∈ K^2, R(k)L(k')f=f}. For H≤ G, we also set (H\ G) to be the set of locally constant functions on H\ G, and denote by _c(H\ G) its subspace consisting of functions with compact support in H\ G. When H\ G has a right G-invariant measure, we denote by L^2(H\ G) the corresponding L^2 space. Throughout this paper we do not insist on the choice of invariant measures on homogeneous spaces, but they are made in a coherent enough way for all usual identities that we use to hold. When H\ G has a right G-invariant measure, the space L^2(H\ G) is a Hilbert space equipped with its usual scalar product ⟨ f,f' ⟩_L^2(H\ G)=∫_H\ G f(g)f'(g)dg. §.§ Non Archimedean reductive groups Now let F be a non Archimedean local field with normalized absolute value | |_F, and suppose that G is (the group of F-points of) a connected reductive group defined over F. We denote by G^1 the intersection of the kernels of all unramified characters of G, and refer to <cit.> for a less tautological definition. For example, when G=_n(F) we have _n(F)^1={g∈_n(F), |det(g)|_F=1}. We fix K_0 a maximal compact (open) subgroup of G with the properties described in <cit.>. We fix σ:G→ℝ_≥0 the logarithm of a norm function as defined in <cit.>. It is actually a non negative map in (G)_K_0, which is invariant under g→ g^-1. For d∈ℕ, we then put N_d(g)=(1+σ(g))^d. We also need the spherical coefficient Ξ:G→ℝ_>0 defined in <cit.>. It is a positive function in (G)_K_0 and it is also invariant under g→ g^-1. Let X be a set and Y a subset of X. If f is a map on X, we denote by _Y(f) its restriction to Y. If f_1, f_2 are maps from X to ℝ_≥0, we write f_1≺ f_2 if there exists a c∈ℝ_>0 such that f_1≤ c f_2. By definition, the Harish-Chandra Schwartz space (G) of G is defined as (G)=⋃_K {f∈(G)_K, ∀ d∈ℕ, f≺Ξ N_-d}, where the union is over all compact open subgroups K of G. Note that we could replace ℕ by any unbounded subset of ℝ_≥0 in the above definition. We define the Harish-Chandra Schwartz space (G^1) by replacing G by G^1 in the Equation (<ref>). §.§ Some properties of symmetric spaces In this section F is a non Archimedean local field of characteristic different from two, G is an F-reductive group, and θ is an F-rational involution of G. We let H be a symmetric subgroup of G contained in G^θ, as in <cit.>. Following <cit.> but with different notations, we define the following functions on H\ G: σ^H\ G(Hg)=σ(θ(g^-1)g) and Ξ^H\ G(Hg)=Ξ(θ(g^-1)g). We then set N_d^H\ G(Hg)=(1+σ^H\ G(Hg))^d. We recall that a torus A of G is called θ-split if it is F-split, and satisfies that θ(a)=a^-1 for all a∈ A. A parabolic subgroup P of G is θ-split if θ(P)=P^-, where P^- is the parabolic subgroup of G opposite to P. For P a minimal θ-split parabolic subgroup of G, we denote by A_P,θ the maximal θ-split torus in the center of the Levi subgroup M:=P∩θ(P). If moreover Δ_P,θ is the set of simple roots (see <cit.>) of A_P,θ acting on the Lie algebra of P, we set A_P,θ^+={a∈ A_P,θ, ∀α∈Δ_P,θ, |α(a)|_F≤ 1 }. Then by <cit.>, there exists a finite set 𝒫 of minimal θ-split parabolic subgroups of G, and a compact subset Ω of G, such that G=⋃_P∈𝒫 HA_P,θ^+Ω. This is called the Cartan decomposition of H\ G. §.§ Notations for the Galois pair of GLn Let E/F be a quadratic extension and denote by θ the corresponding Galois involution. For n≥ 1, we set G_n=Res_E/F _n(F)=_n(E).
Seeing G_n as a Weil restriction of scalars allows us to consider Z_n^θ\ G_n as an F-reductive group with anisotropic center. We denote by N_n the subgroup of G_n consisting of upper triangular unipotent matrices, and by T_n its diagonal torus. The group T_n can be parametrized by (E^×)^n via simple roots: t:(z_1,…,z_n)→(z_1… z_n,z_1… z_n-1,…, z_1) is a group isomorphism between (E^×)^n and T_n. We put B_n=N_nT_n, and denote by B_n^- its image under transpose. If ψ:E→ℂ^× is a non trivial character, it defines the non degenerate character still denoted ψ:N_n→ℂ^× given by ψ(x)=ψ(∑_i=1^n-1 x_i,i+1). We set K_n=_n(O_E), where O_E is the ring of integers of E. We write δ_0,n for the modulus character of B_n. For g∈ G_n, we put ν_E(g)=|det(g)|_E. We denote by P_a_1,…,a_r=M_a_1,…,a_rN_a_1,…,a_r the standard Levi decomposition of the standard parabolic subgroup of G_n attached to the composition (a_1,…,a_r) of n. We denote by P_n the mirabolic subgroup of G_n, which consists of matrices in P_n-1,1 with lower right entry equal to one, and we set U_n:=N_n-1,1. For t∈ E^×, we set z_n-1(t)=(tI_n-1,1)∈(Z_n-1,1)≤ G_n, and for x∈ E^n-1, we set u(x)=[ I_n-1 x; 0 1 ]∈ U_n. §.§ The Harish-Chandra Schwartz space of a symmetric space Here F, G and H are as in Section <ref>. The Harish-Chandra Schwartz space (H\ G) is defined in <cit.> as (H\ G)=⋃_K {f∈(H\ G)^K, ∀ d∈ℕ, f≺Ξ^H\ G N_-d^H\ G}, where the union is again over all compact open subgroups K of G. Again we can replace ℕ by any unbounded subset of ℝ_≥0 in its definition. The space (H\ G) is equipped with an LF-space structure by its very definition: the Fréchet topology on (H\ G)_K is given by the family of semi-norms ν_d(f)=sup_H\ G f× (Ξ^H\ G)^-1× N_d^H\ G for d∈ℕ. We observe that (H\ G)⊆ L^2(H\ G) thanks to <cit.>. On the other hand _c(H\ G)⊆(H\ G). It is known that both inclusions are dense with respect to the topology of the bigger space. Because later we will use the results of <cit.>, we note that it is explained in <cit.> that the definition of the Harish-Chandra Schwartz space of H\ G that we use here coincides with that given in <cit.>. We mention that the space H\ G has polynomial growth in the sense of Bernstein as shown in <cit.>. We note that Bernstein assumes the two properties (i) and (ii) in <cit.> to claim polynomial growth, but they have since been verified thanks to <cit.> and the Cartan decomposition of <cit.>. §.§ Very strongly discrete symmetric pairs Here F, G and H are as in Section <ref> again. Following <cit.>, we say that the symmetric pair (G,H) is very strongly discrete if the integral ∫_Z_H\ H f(h)dh is absolutely convergent for all f∈(Z\ G). For example it is proved in <cit.> that whenever E/F is a quadratic extension, H is an F-reductive group, and G is the F-points of the Weil restriction of scalars from E to F of H, then the Galois symmetric pair (G,H) is very strongly discrete. Now we set Z_H:=Z∩ H and assume that Z_H\ Z is compact. In this situation the integral ∫_Z_H\ H f(h)dh is absolutely convergent for all f∈(Z_H\ G). In particular we have a projection map p_H: (Z_H\ G)→(H\ G) defined by p_H(f)(g)=∫_Z_H\ H f(hg)dh. The following fact follows from <cit.> applied to the very strongly discrete symmetric pair (Z_H\ G,Z_H\ H). Let (G,H) be a very strongly discrete symmetric pair such that Z_H\ Z is compact, then the projection p_H sends the Harish-Chandra Schwartz space (Z_H\ G) to (H\ G)=(H/Z_H\G/Z_H). The above proposition applies to (G_n, H_n).
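Let us sketch why, writing H_n for the fixed points _n(F) of θ in G_n (a notational assumption on our part, as H_n is only implicit above):

\[ Z_n = E^{\times} I_n, \qquad Z_{H_n} = Z_n \cap H_n = F^{\times} I_n, \qquad Z_{H_n} \backslash Z_n \simeq F^{\times} \backslash E^{\times} . \]

The quotient F^×\ E^× is compact, since v_E(F^×) has finite index in v_E(E^×)=ℤ and O_E^× is compact, while very strong discreteness of (G_n,H_n) is the Galois case recalled above.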
§.§ The Whittaker Harish-Chandra Schwartz space Here F is a non Archimedean local field and G an F-reductive group. We fix U_0 to be the unipotent radical of a minimal parabolic subgroup P_0 of G. We fix a Levi subgroup M_0 of P_0. We denote by δ_0 the modulus character of P_0. We denote by 𝔞_0 the tensor product over ℤ of ℝ with the lattice of algebraic characters of M_0, and we endow it with a Euclidean norm | | invariant under the action of the Weyl group of G with respect to M_0. We denote by H_0:M_0→𝔞_0 the function defined in <cit.>. Finally we fix ψ:U_0→ℂ^× a non-degenerate character as in <cit.>. In <cit.>, Delorme defines the Whittaker Harish-Chandra Schwartz space (U_0\ G,ψ) as the subspace of the smooth induced representation Ind_U_0^G(ψ) of functions satisfying f≺δ_0^1/2(1+|H_0|)^-d for any positive integer d. The following lemma is an immediate consequence of <cit.>, <cit.> and the Iwasawa decomposition. For f∈(G), the integral ∫_U_0 f(u)ψ^-1(u)du is absolutely convergent. Moreover the map W_f:g↦∫_U_0 f(ug)ψ^-1(u)du belongs to (U_0\ G,ψ). §.§ Representations Let F be a non Archimedean local field, and G be an F-reductive group. We only consider smooth complex representations of G and its closed subgroups. If (π,V_π) is a representation of G and K≤ G, we denote by V_π^K the space of K-invariant vectors in V_π. We denote by (G) the class of irreducible representations of G, by (G) its subclass consisting of unitary representations, by (G) the subclass of (G) consisting of tempered representations, and by (G) the subclass of (G) consisting of square-integrable representations. When π∈(G), there exists up to homothecy a unique G-invariant scalar product on π which we denote ⟨ , ⟩_π, and we denote by π̄ the Hilbert completion of π. If H is a subgroup of G, we say that a representation π∈(G) is H-distinguished if Hom_H(V_π,ℂ) is not reduced to zero. We say that the pair (G,H) is a Gelfand pair if Hom_H(V_π,ℂ) has dimension at most one for all π∈(G). If H is a symmetric subgroup of G, we say that (G,H) is a symmetric Gelfand pair if it is a Gelfand pair. We shall later specialize to general linear groups, when considering generic representations. We say that π∈(G_n) is generic if Hom_N_n(V_π,ψ) is not reduced to zero, and this notion is independent of the non trivial character ψ:E→ℂ^×. It is known since <cit.> that Hom_N_n(V_π,ψ) can have dimension at most one, and when this dimension is one, we denote by (π,ψ) the Whittaker model of π with respect to ψ. We denote by (G_n) the class of generic representations of G_n. We will use the notational rule ⋆(G)∩∙(G)=⋆,∙(G). For π a representation of G, we say that f:G→ℂ is a matrix coefficient of π if there are v∈ V_π and v^∨ in the space V_π^∨ of the contragredient representation π^∨ of π, such that for all g∈ G we have f(g)=⟨π(g)v, v^∨⟩. We denote by _2(G) the subspace of (G) generated by matrix coefficients of (irreducible) square integrable representations of G. We recall the following inclusion from <cit.>. Let G be a reductive group defined over F, then _G^1(_2(G))⊆(G^1). In view of <cit.>, we obtain the following consequence. Let f∈_2(G_n), then for any 1≤ k ≤ n, the map g∈ G_k^1→ f((g,I_n-k)) belongs to (G_k^1). We recall from <cit.> (see <cit.> for more details) that if π∈(G), the natural action of _c(G) on π extends to (G), and for f∈(G) and v∈ V_π, the vector π(f)v is characterized by the fact that ⟨π(f)v,v^∨⟩=∫_G f(g)⟨π(g) v, v^∨⟩ dg for all v^∨∈ V_π^∨, where the integrals are absolutely convergent.
This easily implies, by considering a compact open subgroup K such that f∈(G)_K, that λ(π(f)v)=∫_G f(g)λ(π(g)v)dg for all v∈ V_π and all λ in the algebraic dual of V_π. Further, by convolving f on the left by an appropriate function of _c(G), which is permissible according to <cit.>, we obtain for all x∈ G: λ(π(x^-1)π(f)v)=∫_G f(xg)λ(π(g)v)dg. We shall use this fact. When P=MU is a parabolic subgroup of an F-reductive group G with Levi subgroup M and unipotent radical U, we denote by A_M the central split component of M. For π a finite length representation of G, we denote by (V_π)_P its normalized Jacquet module with respect to P, and by V_π(U) the kernel of the projection V_π→ (V_π)_P. We denote by (A_M ,(V_π)_P) the set of exponents of A_M in (V_π)_P, i.e. the central characters of the irreducible subquotients of (V_π)_P restricted to A_M. For χ:E^×→ℂ^× a character, we call its real part Re(χ) the unique real number r such that |χ(z)|=|z|_E^r for all z∈ E^×. We say that χ is positive if r>0. If π∈(G_n), it follows from Casselman's criterion <cit.> as explicated in <cit.> that for 1≤ k≤ n and χ∈(Z_M_k,n-k,(V_π)_P_k,n-k), the character z_k∈ E^×→χ((z_k,I_n-k)) is positive. § THE ABSTRACT FOURIER INVERSION FORMULA FOR SYMMETRIC SPACES Here F is a non Archimedean local field of characteristic different from two. We state a consequence of the main result of <cit.> for Gelfand symmetric pairs. Let (G,H) be a Gelfand symmetric pair and let μ belong to the class of Plancherel measures of the right regular representation of G on L^2(H\ G). Then μ is supported on a subspace (G) of (G). Moreover for each representation π∈(G), there is a generator λ_π of Hom_H(π,ℂ) such that: * if f∈(H\ G) and v∈π, the integral ⟨ f , v⟩_λ_π:=∫_H\ G f(g)λ_π(π(g)v)dg is absolutely convergent, * if K is a compact open subgroup of G fixing f on the right, then for any choice of orthonormal basis B_π^K of V_π^K, one has: f(eH)=∫_(G)∑_v∈ B_π^K⟨ f , v⟩_λ_πλ_π(v) dμ(π). According to <cit.>, and as further detailed in <cit.>, there exists a pointwise defined (see <cit.>) map α:_c(H\ G)→∫_(G)^⊕π̄ dμ(π) with some extra properties that we now explicate. First, we can suppose that for each π, the map α_π sends _c(H\ G) to the smooth part π of π̄. Denote by β_π: π→(H\ G) the Hermitian adjoint of α_π defined by the relation ⟨α_π(f), v⟩_π=∫_H\ G f(g)β_π(v)(g) dg. By Frobenius reciprocity, there exists λ_π∈Hom_H(π,ℂ) such that for any v∈π, and any g∈ G, one has β_π(v)(gH)=λ_π(π(g)v), as explained in <cit.>. In particular, the measure μ is actually supported on (G). Then <cit.> asserts that μ is actually supported on H-tempered representations, i.e. the set (G) of representations π such that each α_π, defined on the dense subspace _c(H\ G) of (H\ G), extends (necessarily uniquely) to a continuous map (H\ G)→π̄, and we still denote by α_π this extension. Suppose that π is H-tempered; it follows from <cit.> that for any f∈(H\ G) and v∈π, the integral ∫_H\ G f(g)β_π(v)(g) dg is absolutely convergent and from <cit.> that Equality (<ref>) still holds. The main result of <cit.> then states that for any f∈(H\ G), we have the equality f=∫_(G)β_π(α_π(f))dμ(π) in L^2(H\ G). By definition, this means that for any ϕ in L^2(H\ G) one has ⟨ f, ϕ⟩_L^2(H\ G) =∫_(G)⟨β_π(α_π(f)), ϕ⟩_L^2(H\ G) dμ(π). Now consider f∈(H\ G)^K as in the statement, so that in particular each β_π(α_π(f)) is also right K-invariant, and take ϕ an appropriate multiple of the characteristic function of H\ HK; we obtain f(eH)=∫_(G)β_π(α_π(f))(eH)dμ(π).
Finally take any orthonormal basis B_π^K of the finite dimensional Hilbert space π^K; we can write α_π(f)=∑_v∈ B_π^K⟨α_π(f),v ⟩_π v, and the statement of the theorem follows from Equation (<ref>). § PARTIAL FOURIER COEFFICIENT OF SQUARE INTEGRABLE MATRIX COEFFICIENTS From now on, we focus on the group G_n. §.§ Stable integrals of smooth functions on unipotent groups We will use results and methods from both <cit.> and <cit.> to study stable integrals of some matrix coefficients over unipotent groups. First we define the coefficients in question: recall from <cit.> that if π is a generic unitary representation of G_n, then the map b_π:(W,W^∨)→∫_N_n\ P_n W(p)W^∨(p)dp is a non degenerate G_n-invariant bilinear form on (π,ψ)×(π^∨,ψ^-1) given by convergent integrals. In particular for (W,W^∨)∈(π,ψ)×(π^∨,ψ^-1), the map f_W,W^∨:g→ b_π(R(g)W,W^∨) is a matrix coefficient of π. Moreover by non degeneracy of b_π, for any compact open subgroup K_0 of G_n, the map f_W,W^∨ is bi-K_0-invariant if and only if both W and W^∨ are right K_0-invariant. We sketch a proof of the split analogue of <cit.>, the proof of which adapts verbatim to our context. Fix K_0 a compact open subgroup of G_n. There exists a compact subgroup U(K_0) of U_n, such that for any compact open subgroup U of U_n containing U(K_0) and any generic unitary representation π of G_n, one has ∫_Uf_W,W^∨(u)ψ^-1(u)du=∫_U(K_0)f_W,W^∨(u)ψ^-1(u)du for all right K_0-invariant W and W^∨ in (π,ψ) and (π^∨,ψ^-1). Like Ok, we actually prove a more precise result. Take W and W^∨ both right K_0-invariant in (π,ψ) and (π^∨,ψ^-1) for π as in the statement. For g∈ G_n, we use the integration formula f_W,W^∨(g)=∫_B_n-1^-W((b^-,1)g)W^∨((b^-,1))db, where we recall that this integral is absolutely convergent. In particular by Fubini's theorem the following integral is absolutely convergent as well: ∫_B_n-2^-W((b^-,I_2)g)W^∨((b^-,I_2))ν_E(b)^-1db. Then, following <cit.> word for word (this is a lengthy unfolding computation which we prefer to skip), we obtain the existence of a compact open subgroup U(K_0) of U_n such that for all compact open subgroups U of U_n containing U(K_0): ∫_Uf_W,W^∨(u)ψ^-1(u)du=∫_B_n-2^-W((b^-,I_2))W^∨((b^-,I_2))ν_E(b)^-1db. Now, by applying the Fourier formula for smooth functions with compact support in G_n, we obtain as in <cit.> the following corollary. Let K_0 be a compact open subgroup of G_n, then there exists a compact open subgroup U(K_0) of U_n such that for all f∈(G_n)_K_0, we have ∫_Uf(u)ψ^-1(u)du=∫_U(K_0)f(u)ψ^-1(u)du whenever U is a compact open subgroup of U_n containing U(K_0). We denote by ∫_U_n^*f(u)ψ^-1(u)du the common value of all ∫_Uf(u)ψ^-1(u)du for f as in the above proposition. This then allows one to define the partial Fourier coefficient f_ψ(g):=∫_U_n^*f(ug)ψ^-1(u)du for any f∈(G_n), and we observe that f_ψ∈(G_n). We shall soon average _G_n-1(f_ψ) on Z_n-1^θ for f∈_2(G_n). In order to understand the asymptotic properties of this average, we will use the following results, the proof of which closely follows that of <cit.> (see <cit.> for the corrected version). Let f belong to (G_n), then there exists an integer b such that for any t∈ E^× with |t|_E≥ q_E^b and any p∈(P_n-1,1), one has f_ψ(z_n-1(t)p)=0. We observe that for any t∈ E^×, any x∈ E^n-1, and any p∈ P_n-1, one has f_ψ(z_n-1(t)(p,1)u(x))=f_ψ(u(tpx)z_n-1(t)(p,1))=ψ(tx_n-1)f_ψ(z_n-1(t)(p,1)).
Let f belong to (G_n)(U_n) (for the right action R of G_n), then there exists an integer a such that for any t∈ E^× with |t|_E≤ q_E^a and any p∈(P_n-1,1), one has f_ψ(z_n-1(t)p)=0. See the beginning of <cit.>. Now we observe that the map f→ f_ψ is a R(G_n)-module endomorphism of (G_n), hence if the G_n-module V:=⟨ R(G_n)f ⟩ generated by f has finite length, so does V_ψ:=⟨ R(G_n)f_ψ⟩. Let us set A_n-1,1:=A_M_n-1,1=Z(M_n-1,1). We recall from <cit.> for example, that the A_n-1,1-submodule ⟨R(A_n-1,1)f_ψ⟩ generated by the image f_ψ of f_ψ in (V_ψ)_P_n-1,1 is finite dimensional. A consequence of Lemma <ref> is the following asymptotic expansion: Let f belong to (G_n) and suppose that the G_n-module V:=⟨ R(G_n)f ⟩ generated by f has finite length. Choose f_1,…,f_r in V such that the projections of the f_i,ψ in (V_ψ)_P_n-1,1 form a basis of ⟨R(A_n-1,1)f_ψ⟩. Then for each χ∈(A_n-1,1,V_P_n,n-1), there exist polynomials P_χ,1,…,P_χ,r in [X], and there exists an integer a such that for any t∈ E^× with |t|_E≤ q_E^a and any p∈(P_n-1,1), one has f_ψ(z_n-1(t)p)=∑_χ∈(A_n-1,1,V_P_n,n-1)∑_i=1^r χ(z_n-1(t))P_χ,i(v_E(t))f_i,ψ(p). By <cit.>, for each χ∈(A_n-1,1,(V_ψ)_P_n-1,1), there exist polynomials P_χ,1,…,P_χ,r in [X] such that for any t∈ E^×: R(z_n-1(t))f_ψ=∑_χ∈(A_n-1,1,(V_ψ)_P_n-1,1)∑_i=1^r χ(z_n-1(t))P_χ,i(v_E(t))f_i,ψ. Now we observe that because V_ψ is a G_n-quotient of V, we have the inclusion of exponent sets (A_n-1,1,(V_ψ)_P_n-1,1)⊆(A_n-1,1,V_P_n-1,1). Hence we can as well sum over (A_n-1,1,V_P_n-1,1) in Equation (<ref>) by taking the polynomials P_χ,i to be zero if χ∉(A_n-1,1,(V_ψ)_P_n-1,1) (which actually does not happen, but we won't discuss this detail). The statement now follows from Lemma <ref>. We will need the following result. Suppose that f∈_2(G_n), then the integral ∫_Z_n-1^θ f_ψ(z)dz is absolutely convergent, and moreover the map P_f:Z_n-1^θ\ G_n-1→ defined by P_f(g)=∫_Z_n-1^θ f_ψ(z(g,1))dz belongs to (Z_n-1^θ\ G_n-1). If f∈_2(G_n), then all the exponents in (A_n-1,1,V_P_n-1,1) are positive. Now take a as in Lemma <ref> and b≥ a as in Lemma <ref>. Then ∫_Z_n-1^θ f_ψ(z)dz= ∫_t∈ F^×, |t|_E≤ q_E^a f_ψ(z_n-1(t))dt+ ∫_t∈ F^×, q_E^a< |t|_E≤ q_E^b f_ψ(z_n-1(t))dt. The first summand converges absolutely thanks to Lemma <ref>, because the exponent characters χ(z_n-1(t)) for χ∈(A_n-1,1) are positive, and the second summand is actually a finite sum. Then we observe that if K is a compact open subgroup of G_n and f∈_2(G_n)_K, then P_f∈(Z_n-1^θ\ G_n-1)_K∩ G_n-1 by a straightforward change of variables. Now to prove that P_f belongs to (Z_n-1^θ\ G_n-1), it is enough to obtain the Harish-Chandra Schwartz majorizations on t(E^×,…,E^×,1)≤ Z_n-1^θ\ G_n-1: this is a consequence of the Cartan decomposition of Z_n-1^θ\ G_n-1. By Cartan decomposition again, this time for G_n-2, it is thus enough to obtain those majorizations on (G_n-2^1,1), and we claim that it is actually enough to obtain them on (G_n-2^1,1): indeed (G_n-2^1,1)Z_n-1 is the inverse image of F^×(n-1) by the determinant map inside the group (G_n-2,1)Z_n-1, hence of finite index inside it. Now take a≤ b for f as in Lemmata <ref> and <ref>; then for any g∈ G_n-2 we have ∫_Z_n-1^θ f_ψ(zg)dz= ∑_i(∫_t∈ F^×, |t|_E≤ q_E^a∑_χχ(z_n-1(t))P_χ,i(v_E(t))dt)f_i,ψ(g) + ∫_t∈ F^×, q_E^a< |t|_E≤ q_E^b f_ψ(z_n-1(t)g)dt, where the second integral is a finite sum of terms of the form ∑_k f_ψ(z_n-1(t_k)g) with t_k independent of g∈ G_n-2. 
But then by Corollary <ref>, we deduce that _(G_n-2,1)(P_f) can be expressed as a finite sum of right translates of _(G_n-2,I_2)(f_i) and _(G_n-2,I_2)(f), and the result now follows from Corollary <ref>. § THE RELATIVE CONVERSE THEOREM We now follow Ok's arguments closely, but we extend them to the setting of Harish-Chandra Schwartz spaces. First we need a technical result which is not needed in Ok's work, as he deals with compactly supported functions. We fix ψ:E→^× a nontrivial additive character trivial on F. §.§ An absolutely convergent integral Let f∈(Z_n^θ\ G_n). Proposition <cit.> shows that ∫_N_n |f(ut)|du≺δ_0,n^1/2(t)(1+σ(t))^-d on Z_n^θ\ T_n for all d∈ℕ. This has the following consequences. Suppose that f∈(Z_n^θ\ G_n); then the double integral ∫_Z_n^θ N_n^θ\ H_n∫_N_n|f(uh)|dudh is absolutely convergent and equal to ∫_N_n/N_n^θ∫_Z_n^θ\ H_n|f(uh)|dudh. We write ∫_Z_n^θ N_n^θ\ H_n∫_N_n|f(uh)|dudh=∫_Z_n^θ\ T_n∫_K_n^θ∫_N_n|f(utk)|duδ_0,n^-1/2(t)dkdt, which by smoothness of f reduces the problem to proving the convergence of ∫_Z_n^θ\ T_n^θ∫_K_n^θ∫_N_n|f(ut)|duδ_0,n^-1/2(t) dt. This integral is majorized by a positive multiple of ∫_Z_n^θ\ T_n^θ(1+σ(t))^-ddt for any positive d, by the observation before the lemma. However this latter integral is convergent for d large enough by <cit.>. The equality with the second integral in the statement easily follows from the first equality in the proof of the Lemma. A consequence of Lemma <ref> is the following proposition. Let f∈(Z_n^θ\ G_n), the integral I_ψ(f)=∫_N_n/N_n^θ∫_Z_n^θ\ H_nf(uh)ψ^-1(u)dhdu is absolutely convergent. Let (X_n^k)_k≥ 0 be an increasing family of compact open subsets which exhaust N_n/N_n^θ, then I_X_n^k,ψ(f)=∫_X_n^k∫_Z_n^θ\ H_nf(uh)ψ^-1(u)dudh converges to I_ψ(f). Moreover the function J_X_n^k,ψf:g→∫_X_n^k∫_Z_n^θ\ H_nf(uhg)ψ(u)dudh belongs to the Harish-Chandra Schwartz space (H_n\ G_n). In view of Proposition <ref>, the first assertion follows from the dominated convergence theorem. The second follows from Proposition <ref>. §.§ The orthogonal of tempered distinguished Whittaker functions We recall from Lemma <ref> that for f∈(Z_n-1^θ\ G_n-1), we defined W_f∈(N_n-1Z_n-1^θ\ G_n-1,ψ). In particular, for any π∈(Z_n-1^θ\ G_n-1) and any W∈(π,ψ^-1), the integral ∫_N_n-1Z_n-1^θ\ G_n-1 W_f(g)W(g)dg is absolutely convergent by <cit.>. The following lemma is a consequence of <cit.> (see <cit.>), Theorem <ref> together with an extra piece of information on the Plancherel measure of L^2(H_n-1\ G_n-1) given by <cit.>. It is a generalization of <cit.>. Let f∈(Z_n-1^θ\ G_n-1). Suppose that for any π∈(G_n-1), and any W∈(π,ψ^-1), we have ∫_N_n-1Z_n-1^θ\ G_n-1 W_f(g)W(g)dg=0. Then ∫_Z_n-1^θ\ H_n-1 W_f(h)dh=0. Take K a compact open subgroup of G_n-1 such that f∈(Z_n-1^θ\ G_n-1)_K. For each π∈(G_n-1/Z_n-1^θ), we fix λ_π in _H_n-1((π,ψ^-1),) such that Theorem <ref> applies with this family to the pair (G_n-1/Z_n-1^θ,H_n-1/Z_n-1^θ). A consequence of <cit.> is the existence of an increasing exhaustive family (X_n-1^k)_k≥ 0 of compact open subsets of N_n-1/ N_n-1^θ such that for any π∈(G_n-1) and any W∈(π,ψ^-1)^K, there exists c(λ_π)∈^× such that ∫_X_n-1^kλ_π(π(u^-1)W)ψ^-1(u)du=c(λ_π)W(I_n-1) for all k≥ 0. We then recall from Corollary <ref> that we defined a map J_X_n-1^k,ψf ∈(H_n-1/Z_n-1^θ\G_n-1/Z_n-1^θ)^K. 
Let us apply the Fourier inversion formula of Theorem <ref> to J_X_n-1^k,ψf, but observe that according to <cit.>, the Plancherel measure of L^2(H_n-1/Z_n-1^θ\G_n-1/Z_n-1^θ) is supported on (Z_n-1^θ\ G_n-1), so that we can integrate only on the tempered members of (G_n-1/Z_n-1^θ) in the Fourier inversion formula of Theorem <ref>. In such a situation, by Fubini's theorem, Equations (<ref>) and (<ref>), and simple integration in stages and change of variables, we obtain for any tempered π∈(G_n-1/Z_n-1^θ) and any W∈(π,ψ^-1)^K the following equalities on the right hand side of the inversion formula: ⟨ J_X_n-1^k,ψf, W ⟩_λ_π = ∫_Z_n-1^θ\ G_n-1(∫_X_n-1^kf(ug)ψ^-1(u)du)λ_π(π(g)W)dg = ∫_X_n-1^k∫_Z_n-1^θ\ G_n-1f(ug)λ_π(π(g)W)dgψ^-1(u)du =∫_X_n-1^k∫_Z_n-1^θ\ G_n-1f(g)λ_π(π(u^-1)π(g)W)dgψ^-1(u)du = ∫_X_n-1^kλ_π(π(u^-1)π(f)W)ψ^-1(u)du=c(λ_π)(π(f)W)(I_n-1) =∫_Z_n-1^θ\ G_n-1f(g)W(g)dg=∫_Z_n-1^θ N_n-1^θ\ G_n-1W_f(g)W(g)dg=0. Considering the left hand side of the Fourier inversion formula, this gives for all k≥ 0: 0=J_X_n-1^k,ψf(I_n-1)=I_X_n-1^k,ψ(f). By Corollary <ref> and Lemma <ref>, this finally gives 0=I_ψ(f)=∫_Z_n-1^θ\ H_n-1∫_N_n-1/N_n-1^θf(uh)ψ^-1(u)dudh= ∫_Z_n-1^θ N_n-1^θ\ H_n-1∫_N_n-1f(uh)ψ^-1(u)dudh=∫_Z_n-1^θ N_n-1^θ\ H_n-1W_f(h)dh. §.§ The proof of the main result Before moving on to the relative converse theorem, we observe that its statement is non-empty thanks to <cit.>, which states that γ(π,π',ψ)=1 whenever π∈(G_n) and π'∈(G_n-1). In view of Lemma <ref>, the relative converse theorem easily follows from the (n,n-1) Rankin-Selberg functional equation of <cit.>. We denote by w_n the antidiagonal matrix of G_n with ones on the antidiagonal, and for W∈_N_n^G_n(ψ), we set W̃(g)=W(w_n^tg^-1). We recall the consequence of the functional equation that we need. Let π∈(G_n), and π'∈(G_n-1), then for any (W,W')∈(π,ψ)×(π',ψ^-1) we have ∫_N_n-1\ G_n-1W((g,1))W'(g)dg= γ(1/2,π,π',ψ)∫_N_n-1\ G_n-1 W̃((g,1))W̃'(g)dg. Here both sides of the functional equation are absolutely convergent. The absolute convergence of both sides follows for example from the asymptotics of Whittaker functions given in <cit.> and <cit.>. We will also use the following fact from <cit.> and the discussion after it. Let π∈(G_n). For any matrix coefficient Φ of π, the function W(Φ):g∈ G_n→∫_N_nΦ(ug)ψ^-1(u)du is defined by absolutely convergent integrals. Moreover it belongs to (π,ψ), and any Whittaker function in (π,ψ) is of this form. Here is our main result, which is optimal in the sense explained after its proof. Let π∈(G_n) such that γ(π,π',ψ)=1 for any π'∈(G_n-1), then π is _n(F)-distinguished. We fix W∈(π,ψ) and write it in the form W(Φ) thanks to Lemma <ref>. For g∈ G_n we then put f(g):=Φ(g)-Φ(w_n(w_n-1,1)g), so that f∈_2(G_n). Using the notations of Lemma <ref> and Proposition <ref>, we observe that W_P_f(g)=∫_Z_n-1^θ(W((zg,1))-W(w_n(w_n-1zg,1)))dz for all g∈ G_n-1. Rewriting the functional equation of Proposition <ref>, we obtain: 0=∫_N_n-1\ G_n-1 (W((g,1))-W(w_n(w_n-1g,1))) W'(g)dg =∫_N_n-1Z_n-1^θ\ G_n-1W_P_f(g) W'(g)dg. By Lemma <ref>, this implies that ∫_N_n-1^θ Z_n-1^θ\ H_n-1W_P_f(h)dh=0, which we rewrite as ∫_N_n-1^θ\ H_n-1 W((h,1))dh= ∫_N_n-1^θ\ H_n-1W̃((h,1))dh. Hence we have two linear forms on (π,ψ): λ_π:W→∫_N_n-1^θ\ H_n-1 W((h,1))dh and μ_π:W→∫_N_n-1^θ\ H_n-1W̃((h,1))dh, which coincide; the first is P_n^θ-invariant, whereas the second is ^t P_n^θ-invariant. Hence λ_π is H_n-invariant because H_n is generated by P_n^θ and its transpose. It is moreover nonzero by the theory of Bernstein-Zelevinsky derivatives (<cit.>). 
We want to mention that the use of the explicit Plancherel formula of <cit.>, which is a deep and sophisticated result, could have been avoided. Actually we only use a small part of <cit.>, namely that the Plancherel measure of L^2(H_n-1\ G_n-1) is supported on tempered representations, and what we say is that we could do without this preliminary knowledge. Indeed we claim that one can prove Lemma <ref> for enough Whittaker functions in (Z_n-1^θ\ G_n-1) by using Bernstein's principle of meromorphic continuation as in <cit.> or <cit.>, for the conclusion of the converse theorem to hold. We actually intend to use such a strategy to prove the relative converse theorem for linear periods.
http://arxiv.org/abs/2407.01758v1
20240701194514
Quantifying cascading power outages during climate extremes considering renewable energy integration
[ "Luo Xu", "Ning Lin", "H. Vincent Poor", "Dazhi Xi", "A. T. D. Perera" ]
eess.SY
[ "eess.SY", "cs.SY" ]
Luo Xu^a,d (corresponding author: luoxu@princeton.edu), Ning Lin^a,d, H. Vincent Poor^b, Dazhi Xi^a, A.T.D. Perera^c
[a] Department of Civil and Environmental Engineering, Princeton University, Princeton, NJ, USA
[b] Department of Electrical and Computer Engineering, Princeton University, Princeton, NJ, USA
[c] Andlinger Center for Energy and the Environment, Princeton University, Princeton, NJ, USA
[d] Center for Policy Research on Energy and the Environment, Princeton University, Princeton, NJ, USA
Quantifying cascading power outages during climate extremes considering renewable energy integration
§ ABSTRACT Climate extremes, such as hurricanes, combined with large-scale integration of environment-sensitive renewables, could exacerbate the risk of widespread power outages. We introduce a coupled climate-energy model for cascading power outages, which comprehensively captures the impacts of evolving climate extremes on renewable generation, and transmission and distribution networks. The model is validated by the 2022 Puerto Rico catastrophic blackout during Hurricane Fiona — the first-ever system-wide blackout event with complete weather-induced outage records. The model reveals a novel resilience pattern that was not captured by present state-of-the-art models, showing that early failure of certain critical components surprisingly enhances overall system resilience. Sensitivity analysis of various behind-the-meter solar integration scenarios demonstrates that lower integration levels (below 45%, including the current level) exhibit minimal impact on system resilience in this event. However, surpassing this critical level without additional flexibility resources can exacerbate the failure probability due to substantially enlarged energy imbalances. Climate extremes, especially tropical cyclones — commonly known as hurricanes or typhoons — have threatened energy infrastructure over decades, leading to numerous widespread catastrophic blackouts globally<cit.>. Despite the growing emphasis on enhancing the resilience of power systems, which are fundamental to socio-economic functioning, weather-associated power outages in the U.S. have escalated by 78% during this decade compared to the last decade <cit.>. Hurricane Maria (Category 5) in 2017 and Hurricane Fiona (Category 1) in 2022 plunged the entire island of Puerto Rico and its 1.5 million electricity customers into darkness<cit.>, resulting in an estimated cost of US $113.3 billion<cit.>. Meanwhile, in the context of long-term decarbonization roadmaps, various ambitious renewable energy integration targets have been set for power grids<cit.>. For instance, Puerto Rico has committed to achieving a 100% renewable power grid by 2050<cit.>. The large-scale integration of variable renewable energy such as solar photovoltaic (PV) and wind energy notably increases the uncertainty in power system operations and decreases the grid inertia<cit.>. In particular, the integration of behind-the-meter (BTM) solar PV systems with unregulated, individually optimized storage can further challenge grid operations due to unexpected demand fluctuations<cit.>. Moreover, these environment-dependent renewable sources are particularly sensitive and vulnerable to extreme weather events. Solar panels have been observed to exhibit greater fragility in storms than their design requirements suggest<cit.>. 
Additionally, hurricanes associated with large cumulonimbi substantially reduce solar generation even prior to their landfalls<cit.>, which can further enlarge imbalances between electricity supply and demand. These challenges highlight the potential amplified risk of catastrophic blackouts due to climate-energy interactions and underscore the importance of developing a coupled climate-energy model to quantify the effects of climate extremes on the resilience of renewable power systems<cit.>. Catastrophic blackouts triggered by initial common-cause disturbances have been extensively investigated. For instance, physics analyses reveal that small initial common-cause failures of critical lines can propagate through the transmission network, triggering catastrophic network collapse<cit.>. A network cascade model demonstrates more severe failures when incorporating dynamics of power systems in contrast to the findings from static power flow analysis<cit.>. Significant progress in understanding energy resilience has been widely achieved through efforts to bridge climate extremes and energy systems. Long-term coupled climate-energy planning for Sweden<cit.>, various EU cities<cit.> and Puerto Rico<cit.> reveals the potential for catastrophic socio-economic consequences arising from extreme weather conditions. Towards the operational resilience of bulk power grids under specific events, the effectiveness of critical line hardening considering network cascades has been validated on a synthetic Texas transmission grid under hurricanes<cit.>. The hydrological-power cascade effect has been demonstrated in the California transmission grid during extreme events characterized by high-temperature stress<cit.>. Despite the substantial advances made in understanding the common-cause network cascade mechanism<cit.> and the operational resilience<cit.> of transmission grids under extreme events, two critical obstacles remain in quantifying spatiotemporal power outages during extreme weather events such as hurricanes: (i) the lack of high-resolution spatiotemporal power outage models covering the comprehensive effect of evolving climate weather hazards on grid operations, transmission and distribution networks, and renewable generation; and (ii) a limited understanding of the effects of environment-dependent renewable penetration on cascading power outages. In practical blackout events under climate extremes, it is not only the failures of transmission networks but also those of more vulnerable and extensive distribution networks<cit.>, as well as affected environment-sensitive renewable generation capacity<cit.>, that collectively contribute to these outages. However, due to the limited availability of comprehensive energy system data and high-resolution outage information of real-world events, existing research efforts on catastrophic blackouts have predominantly focused on cascading failures within the topology of transmission networks. To address these challenges, we develop a coupled climate-energy model to capture and quantify the cascading power outages in renewable power systems under climate extremes. This model accounts for the comprehensive effects of evolving extreme weather conditions on renewable generation, and transmission and distribution networks. Our model is validated by a retrospective analysis of the 2022 Puerto Rico catastrophic blackout during Hurricane Fiona, a milestone event documented with the first-ever high-resolution spatiotemporal outage data on a weather-induced system-wide blackout. 
Using thousands of realizations that account for the uncertainty in grid infrastructure resilience, we investigate the resilience and vulnerability patterns of the grid under an evolving hazard. To further explore the role of distributed solar-dominated renewable integration in this catastrophic event, we then conduct a sensitivity analysis across a wide range of renewable integration levels. Beyond this specific event, our methodology offers a broadly useful tool for assessing the risks associated with different generation portfolios of a regional power system in response to projected climate extremes. § RESULTS §.§ Modeling climate-induced power outages with renewable integration We propose a Climate-induced Renewable Energy System Cascading Event (CRESCENT) model. This model bridges the dynamics of renewable energy systems with evolving extreme weather events by integrating a spatiotemporal hazard model, a renewable energy system vulnerability model, and a multi-scale spatiotemporal cascading power outage model (see Fig. 1 and Methods). The meteorological component takes tropical cyclones as the representative climate extreme for the Puerto Rico energy system and employs a physics-based tropical cyclone wind field model to generate high-resolution spatiotemporal wind hazards. The renewable energy system vulnerability model is used to simulate the effects of climate extremes on spatiotemporal infrastructure damage and renewable generation reduction. We assess the spatiotemporal infrastructure damage to both transmission and distribution networks through hazard resistance risk analysis (see Methods). The damage to distribution feeders within distribution networks results in the spatiotemporal loss of demand at the transmission network level, affecting grid operations. We consider the shutdown of wind turbines under extreme wind conditions and evaluate the climatic sensitivity of solar PV generation. This evaluation includes the analysis of generation decline in both utility-scale and distributed rooftop solar PV systems, attributed to wind-induced panel damage and reduced solar irradiance due to extensive cumulonimbus clouds. Informed by the vulnerability of renewable energy systems during evolving climate extremes, we propose a multi-scale spatiotemporal cascade model to quantify cascading power outages, featuring a temporal resolution compatible with the real-time operations of the decision-making system. Expanding existing network cascade models that consider overloaded line tripping and power flow redistribution induced by common-cause initial failures<cit.>, the proposed method further accounts for the interaction between system resilience and evolving hazards. Under extreme weather conditions, system resilience/reliability is fundamentally determined by two key factors influencing power balance across different time scales: grid inertia for alleviating transient imbalances and system flexibility for eliminating sustained imbalances<cit.>. Our cascade model considers power imbalances associated with renewable energy integration by taking these two factors into account. In the context of large-scale renewable integration, reduced grid inertia against transient power imbalances is considered in the network stability analysis. System flexibility considering reduced renewable generation and availability of flexible resources during climate extremes is embedded in real-time operation optimization (see Methods). 
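To make the coupled structure concrete, the following sketch chains, in heavily simplified form, the three components of one realization at a 10-minute resolution: an evolving hazard, a vulnerability step with pre-sampled component resistances, and a placeholder for the cascade step. Every name and numerical value here is a hypothetical stand-in, not the paper's calibrated implementation.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Hypothetical miniature system: five components with lognormally
# distributed wind resistance (m/s). All values are illustrative
# stand-ins, not the paper's calibrated Puerto Rico fragility data.
N_COMPONENTS = 5
resistances = rng.lognormal(mean=np.log(50.0), sigma=0.2, size=N_COMPONENTS)

def wind_at(t_step):
    """Toy evolving hazard: wind intensity (m/s) ramping up, peaking
    near t_step = 70, then decaying."""
    return 60.0 * np.exp(-((t_step - 70) / 30.0) ** 2)

failed = np.zeros(N_COMPONENTS, dtype=bool)
performance = []
for t in range(144):  # 24 h at 10-minute resolution
    # Vulnerability step: a component fails once the local wind exceeds
    # its pre-sampled, time-invariant resistance.
    failed |= wind_at(t) > resistances
    # Placeholder for the cascade step of the full model (topology
    # analysis, power flow redistribution, RoCoF screening, re-dispatch).
    performance.append(1.0 - failed.mean())

print(f"minimum system performance: {min(performance):.2f}")
```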
§.§ Catastrophic blackout of Puerto Rico in 2022 On September 18th, 2022, Hurricane Fiona plunged the island of Puerto Rico into complete darkness again after a similar situation caused by Hurricane Maria in 2017. During this system-wide blackout event, LUMA Energy, the local system operator that took over the grid in 2021, captured high-resolution spatiotemporal outage data<cit.> in 10-minute intervals (i.e., at the real-time system dispatch scale). This marked the first recorded instance of such detailed data for a system-wide blackout caused by extreme weather events. Compared to regional-level power outage events, system-wide blackouts provide unique opportunities for exploring the weather-induced cascading failures and stability mechanisms of complex dynamic systems. This time-series outage data reveals a catastrophic cascading failure in the Puerto Rico power grid, with system outages escalating from below 50% to 100% within 10 minutes at 18:00 UTC, prior to the landfall of a mere Category 1 hurricane (Figs. 2a-2b). Consistently, the US official report<cit.> indicates that escalating damages to distribution and transmission infrastructure led to a system-wide imbalance between electricity supply and demand. This imbalance triggered the off-grid protection of generation units, resulting in system instability. We used the CRESCENT model to perform spatiotemporal simulations of the Puerto Rico power grid during Hurricane Fiona, at a 10-minute temporal resolution compatible with the power outage data and real-time operation. The energy-related models are configured based on the contemporary Puerto Rico power grid in 2022 with its utility-scale generation, distributed renewable generation, and transmission and distribution networks (Figs. 2c-2f), corresponding to the conditions during Hurricane Fiona that caused the latest catastrophic blackout. Considering the uncertainty in infrastructure resistance, we generated 1000 time-series power outage realizations, spanning from 0:00 to 23:00 UTC on September 18, 2022, in order to obtain meaningful simulation results. As shown in Fig. 3a, in the early stage (before 10:00 UTC), characterized by a lower intensity of the hazard and higher functional integrity of the system, failures predominantly occur within the distribution network, leading to a gradual degradation of system performance (characterized by the percentage of customers with electricity). However, as the hurricane approaches with increased hazard intensity, the distribution network sustains more severe damage and solar-based renewable generation declines significantly. For instance, by 16:00 UTC (12:00 local time), distributed solar generation across the island declines to only 26.8% of generation under clear sky conditions, a reduction of over 125 MW of generation (see Fig. S14). These factors can result in substantial supply-demand imbalances within a short timeframe (e.g., a 10-minute control cycle of system real-time operations). Under these conditions, system operation under limited flexibility may be unable to completely eliminate power imbalances, thus necessitating proactive load shedding and leading to a precipitous decline in system performance. Moreover, the grid faces a higher failure risk of transmission towers and lines under the intensifying hazard. Transmission line tripping, which transfers power flow to remaining lines, can lead to line overloads and potentially trigger cascading failures. 
In the worst-case scenario, insufficient grid inertia against substantial transient imbalances can destabilize the grid, resulting in a catastrophic blackout. Among all the realizations, 60% result in catastrophic blackouts with complete system-wide outages. The occurrence of these catastrophic blackouts peaks at 18:00 UTC, the same time as for the real-world case. Additionally, we identified the largest failure (the greatest decline in system performance) of each realization (Fig. 3b). In the majority of the realizations, the largest failures occur between 17:00 UTC and 19:00 UTC under 40–55% system performance. During this period, the hurricane was closest to the island and generated the highest winds (Figs. S4-S10). The system had been progressively weakened by cumulative damages, including less robust topological connectivity and decreased system flexibility and grid inertia. Subsequent failures could then directly devastate the power grid. These findings closely align with the real-world catastrophic blackout incident, highlighting a high-risk state for a weakened system. The second peak of the largest failures is at a relatively early stage, between 15:00 UTC and 16:00 UTC. The early timing of the largest failures in the system indicates that critical components exist, and these failures correspond to scenarios in which critical components were assigned lower resistance on the hazard resistance distribution of components. Failures of critical components could contribute to severe degradation of the system, even at the early stage when the hazard is less severe and the system is not yet significantly weakened. §.§ Resilience patterns A complete power outage (catastrophic blackout) is far more severe and demanding than an extensive power outage with a functional grid, as it requires substantial effort to restore system-level balance and synchronization. To further explore system resilience against this extreme event, therefore, we categorized all the realizations into resilient (without complete power outages) and vulnerable (with complete power outages) sets (Figs. 4a-4b). It is noted that the resilient cases experience their largest failures significantly earlier than the vulnerable cases. This result suggests that systems that experience early severe failures but maintain adequate functionality (e.g., above 50% system performance) can be more robust to ride through subsequent damages under increasing hazards. This finding indicates that passive early degradation might enhance system resilience. This resilience pattern, previously unidentified, shares a similar philosophy with proactive de-energization measures such as Public Safety Power Shutoffs<cit.> used in California against wildfires by reducing the grid’s burden and alleviating the magnitude of energy imbalances. The disparity in resilience and vulnerability patterns observed from a system-level perspective further motivates our considerations of how the failure timing of individual components, especially those with the most impact, affects system resilience. To identify these critical components, we define the critical index of a transmission line as the proportion of instances where its failures directly contribute to catastrophic blackouts among all realizations. The top four critical lines (Fig. 4c),
all connected to Costa Sur — the largest power plant complex in Puerto Rico, play a vital role in the connectivity of the transmission network topology, such as delivering electricity to the north region and establishing connections with the second-largest power plant complex (Aguirre). Regardless of this specific weather event, a steady-state topology analysis based on current-flow betweenness centrality also suggests Costa Sur has the highest importance for the grid (see details in Supplementary Note 3). The resilience patterns of these critical transmission lines (Fig. 4d) suggest that earlier failures of these critical lines (when the system maintains relatively adequate functionality) tend to enhance overall system resilience, consistent with the resilience patterns at the system level (Fig. 4a-4b). In a later stage during the extreme event, the system is compromised in network connectivity, and the disruptions of critical lines followed by the large-scale power flow redistribution can overload the remaining lines, which may result in further overloads and incur cascading failures. Moreover, in a weakly connected network, the removal of these critical lines that connect to the highest-centrality node could lead to network segmentation, causing substantial power imbalances in the sub-networks and potentially leading to instability. To further validate this finding, we preset the hazard resistance of the most critical component (Costa Sur – Manati) to be either the weakest or the strongest among all components. When this critical component ranks within the weakest 1% and 10%, the probability of catastrophic blackouts under this extreme event is reduced by 17% and 9%, respectively. Conversely, scenarios featuring the critical component with higher wind hazard resistance exhibit increased failure probability (see Fig. S13). §.§ Sensitivity analysis of increasing renewable integration To explore the effects of increasing renewable integration, especially unregulated BTM distributed solar PV systems, on the risk of catastrophic blackouts, we conduct a sensitivity analysis using the proposed CRESCENT model under the same hurricane event. Here, we generated 1000 realizations considering the uncertainty in infrastructure resilience for each of the renewable integration levels ranging from 10% to 80%, where the renewable integration level is defined as the proportion of demand met by solar PV systems. Despite some BTM solar systems being equipped with energy storage for consumer self-sufficiency, such systems are not well regulated and dispatched by the grid operators and can even undermine grid resilience due to their individual optimal strategies<cit.>. Therefore, to simplify this sensitivity analysis, they are configured without additional storage and set to operate in the widely adopted maximum power point tracking (MPPT) mode<cit.>. To eliminate regional variations, we proportionally adjusted the renewable integrations according to the demand profiles of distribution feeders, ensuring a consistent level across the entire island. By comparing these realizations, we observe a nonlinear effect of increasing environment-sensitive renewables on potential catastrophic blackouts (Fig. 5). Below a renewable integration level of approximately 45%, including the current level of 16.1%, realizations exhibited similar probabilities of catastrophic blackouts and nearly identical failure patterns. 
This phenomenon suggests that below this level, the risk introduced by the growth of solar generation has a minimal impact on system resilience. This can be attributed to the real-time energy balance between supply and demand still being dominated by adequate conventional generation, which is resilient to extreme weather impacts. However, as the renewable integration level in the system further increases, the effects of renewables begin to emerge, with the failure probability exhibiting super-linear growth (Fig. 5a). Under a high renewable integration level, the extreme weather event significantly diminishes solar-dominated generation due to wind damage and cumulonimbus cloud cover, which enlarges energy imbalances between supply and demand and further challenges grid inertia and flexibility for maintaining system stability. Therefore, the system becomes even more likely to experience a catastrophic blackout at an earlier stage (Fig. 5b). The temporal distribution of catastrophic blackouts under various renewable integration levels (Fig. 5c) more clearly demonstrates that in a renewable-dominated scenario with over 70% renewable integration, grids face significant challenges in managing even lower-intensity hazards at early stages, despite having less cumulative damage and relatively higher system functionality (above 65%). Conversely, under the same hazard, grids with lower renewable integration (below 50%) exhibit higher resilience during the initial phases. This difference is primarily attributable to the substantial reduction in power generation of the solar-dominated power system caused by the indirect effect (cloud cover) of hurricanes on such environment-sensitive renewables (see Fig. S15), which exacerbates energy imbalances. § DISCUSSION In the context of extreme weather events like hurricanes that cause extensive damage to electric power systems, our study presents the CRESCENT model to analyze cascading failures in climate-induced power outages. This model distinguishes itself from the existing cascading outage models at the transmission network level<cit.> by accounting for the spatiotemporal climate-energy dynamics of renewable power systems. It comprehensively incorporates the effects of evolving climate extremes on utility-scale and distributed generations, and transmission and distribution networks. This model also stands out by using real-world grid datasets, validated by a weather-induced catastrophic blackout with the first-ever high-resolution outage data records (2022 Puerto Rico blackout during Hurricane Fiona). The CRESCENT model is adaptable to various regions and can be further employed to analyze the resilience of energy transitions in future climates, by incorporating storms projected in climate change scenarios<cit.>. Also, our study exemplifies hurricanes as a primary climate threat to Puerto Rico; in the future CRESCENT can be extended to model the impact of other climate extremes (e.g., flooding and heatwaves) to assess the overall climate resilience of renewable energy systems. In analyzing thousands of model-generated realizations for the Puerto Rico power system during Hurricane Fiona, we identified distinct resilience and vulnerability patterns. These patterns reveal that early degradation of the system or early failure of critical components can counterintuitively enhance system resilience, helping the grid ride through subsequent hurricane-induced damages. 
This paradoxical effect can be explained by the mechanism whereby early failures reduce the energy burden on the grid and may even separate the grid into multiple sub-grids in advance. Such early degradations can also alleviate the transient power imbalances and prevent the propagation of cascading failures in a later phase. Although proactive grid regulation strategies such as proactive power shutoff<cit.> and organized microgrid operation<cit.> are recognized to be effective, our analysis reveals that even passive failures occurring early can also enhance grid resilience against evolving climate extremes. Moreover, our model can be designed to support these proactive climate-resilient strategies, offering assistance in their implementation. To explore the role of renewable integration in this catastrophic blackout, we perform a sensitivity analysis across various solar-dominated distributed integration levels ranging from 10% to 80% within the context of the same event. Our results show the nonlinear impact of renewable energy integration on system resilience for Puerto Rico during the same event. Below an approximately 45% renewable integration level, including the current level of 16.1%, the integration of distributed solar systems is well-accommodated without compromising system stability. In contrast, surpassing this level towards a 100% renewable grid may significantly heighten the risk of climate-induced cascading power outages, primarily due to enlarged energy imbalances resulting from substantial reductions in renewable generation that further challenge the grid inertia and system flexibility. We note that this sensitivity analysis does not account for energy storage associated with behind-the-meter solar installations due to the current lack of a unified dispatch mechanism for such individually optimized distributed systems, which are even recognized to potentially compromise grid resilience<cit.>. However, the coordinated aggregation of energy storage systems, providing virtual inertia and additional flexibility, presents a promising solution to mitigate the risks associated with the large-scale integration of renewable energy. While the quantitative results of the sensitivity analysis are particularly derived for this event, our methodology can serve as a broadly useful tool for assessing the risks associated with different generation portfolios of a regional power system in response to forecasted or projected extreme weather events. This enables stakeholders and grid operators to make informed decisions on optimizing generation portfolios and operation strategies for enhancing climate resilience, thereby ensuring a more sustainable energy transition. In future work, this methodology can be extended to investigate the potential heterogeneity in the impact of renewable energy integration on catastrophic blackouts across projected extreme events in a changing climate. § METHODS §.§ Energy system configurations The configuration of the realistic Puerto Rico electric power system in 2022 (the period contemporary with Hurricane Fiona) is comprehensively defined by high-resolution datasets across four parts: utility-scale and distributed generation resources, transmission and distribution networks, and demand profiles. Data on the capacity and detailed location information of all utility-scale generation units, including utility-scale solar PV systems, wind turbines, hydropower, and other traditional power plants as of September 2022 were obtained from U.S.
Energy Information Administration (EIA) Form EIA-860 Preliminary Monthly Electric Generator Inventory <cit.>. Operation availability for all Puerto Rico power plants during Hurricane Fiona was obtained from the Daily Generation Availability Report (September 18th, 2022, the same day as Hurricane Fiona's landfall) of the local power utility (LUMA Energy, also the grid operator) <cit.>. Generation unit parameters (e.g., ramp up/down rates in MW/minute) were obtained from the Puerto Rico Integrated Resource Plan 2018-2019 for Puerto Rico Electric Power Authority (PREPA) <cit.>. The installed capacity of distributed generation, specifically rooftop solar PV systems, of each distribution feeder in September 2022 was collected from the local power utility <cit.>. Geographic information system data for the Puerto Rico transmission and distribution networks, as recorded in September 2022, were also obtained from the local power utility <cit.> (see visualizations in Figs. 2c-2f). This dataset includes segment types and parameters of all 115 kV and 230 kV transmission lines of the transmission network, which are subsequently used in calculating network cascades and steady-state power flow. The distribution network part of this dataset includes the detailed geospatial topology of 973 operational distribution feeders with their connections to substations (see Fig. 2f and Fig. S2). The demand profile of each distribution feeder is also included in this dataset. Other detailed configurations of the Puerto Rico power system, including generator parameters and illustrations of the transmission network, the distribution networks, and the distributed feeders, are shown in Table S1 and Figs. S1-S2. §.§ Hurricane hazard model The track of Hurricane Fiona was sourced from the International Best Track Archive for Climate Stewardship (IBTrACS) at the National Center for Environmental Information of the National Oceanic and Atmospheric Administration (NOAA) <cit.>, which includes time-series data of hurricane’s center locations, maximum sustained wind speed, and radius of maximum wind. To align with the temporal scale of real-time power system operations, we interpolated the track data from a 3-hour to a 10-minute interval. A physics-based wind profile model that accounts for both the tropical cyclone inner core and outer radii dynamics <cit.> is used to generate the hurricane wind profile according to the track information. By integrating this boundary-layer hurricane wind profile with environmental background flow derived from the storm’s translation speed <cit.>, the asymmetric spatiotemporal wind fields for the hurricane are obtained. The accuracy of this wind field model has been validated for simulating tropical cyclone hazards such as coastal winds, rainfall, and storm surges <cit.>. To further consider the land roughness impact on the surface-layer wind speed, we convert the wind speeds based on the land cover classes using the logarithmic vertical wind speed profile <cit.>. Data of Puerto Rico’s land cover classes, e.g., residential, forests, crops, and wetlands, were obtained from the National Land Cover Database (NLCD) of the United States Geological Survey (USGS) <cit.>. For detailed information on the land roughness of Puerto Rico, see Fig. S11. §.§ Power outage data Spatiotemporal power outage data on the 2022 Puerto Rico blackout during Hurricane Fiona (as shown in Fig. 2a and the observed curve in Fig. 3a) were sourced from the US power outage datasets <cit.>. 
The data for this specific event, detailing the percentage of customers without electricity in seven regions (Fig. S3) of Puerto Rico, were recorded by the Puerto Rico grid operator, LUMA Energy <cit.>, at 10-minute intervals, spanning from pre-event to post-event periods. The island’s power grid experienced a system-wide catastrophic blackout between 17:50 and 18:00 UTC, September 18th, 2022. The data recording was terminated after the catastrophic blackout and was not recovered until September 20th, 2022. These outage data represent the first high-resolution spatiotemporal record of a system-wide blackout induced by a climate extreme event, as the grid’s modernization and digitalization have been improved by LUMA Energy since it became the grid operator in 2021. §.§ Renewable power system vulnerability model The vulnerability model is used to generate spatiotemporal disturbances in the renewable power system for each realization. It accounts for infrastructure damage caused by hurricane winds and the decline in generation from environment-sensitive renewable energy resources. Infrastructure damage depends on the fragility of grid components, which include renewable energy structures (primarily solar panels), transmission towers, transmission lines, and distribution feeders. The damage estimates of these components are based on a series of functions known as fragility curves <cit.>, which describe the relationship between component failure probability and wind intensity. The selected fragility curves were particularly designed and calibrated for Puerto Rico, including utility-scale and distributed rooftop solar panels <cit.>, transmission lines <cit.>, transmission towers <cit.>, and distribution feeders <cit.>. Applying existing fragility curves, which are mostly independent of hazard duration, directly in spatiotemporal risk analysis can lead to an overestimation of infrastructure damage due to repeated sampling <cit.>. Therefore, we define resistance functions by taking the inverse of these fragility functions, thereby characterizing the distribution of wind resistance for each type of component. Rather than sampling fragility curves for time-varying failure probabilities of components, we assign time-invariant hazard resistances to components by sampling their resistance distributions in each realization. During the spatiotemporal risk analysis, if the wind intensity at a grid component exceeds its assigned resistance value, which is sampled at the onset of each realization, the component is considered to have failed. The spatiotemporal vulnerability model for generating disturbances is configured with a 10-minute time interval, aligning with the time scale of real-time operations. The decline in renewable energy resources accounts for the spatiotemporal impact of hurricane-induced cumulonimbus cloud cover on solar irradiance, leading to reduced generation output from both utility-scale and distributed rooftop solar PV systems. The solar generation reduction rate is derived from a solar irradiance decay model <cit.>, validated by large-scale historical global horizontal irradiance data and Atlantic hurricane activity from the Atlantic hurricane database from 2001 to 2017 (see details in Supplementary Note 7). Also, as the wind intensity (above 33 m/s) of a hurricane exceeds the typical cut-out speed (25 m/s) of existing wind turbines, wind turbines are considered to shut down during the hurricane period. 
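To illustrate this resistance-sampling scheme, the sketch below inverts an assumed lognormal fragility curve (a common parametrization; the Puerto Rico-specific curves cited above are not reproduced here, so the median and dispersion values are placeholders) to draw a time-invariant wind resistance for each component.

```python
import numpy as np
from scipy.stats import lognorm

rng = np.random.default_rng(seed=7)

# Assumed lognormal fragility curve P_fail(v); median and dispersion are
# placeholders, not the cited Puerto Rico-specific calibration.
median_v, beta = 55.0, 0.25            # m/s, lognormal dispersion
fragility = lognorm(s=beta, scale=median_v)

# Resistance function = inverse of the fragility CDF. Sampling it once
# per realization gives each component a time-invariant wind resistance,
# avoiding the damage overestimation caused by repeatedly sampling
# failure probabilities at every time step.
n_feeders = 973                        # operational distribution feeders
resistance = fragility.ppf(rng.uniform(size=n_feeders))

def update_failures(failed, local_wind):
    """A component fails when the local wind intensity exceeds its
    pre-assigned resistance; failures are permanent within a realization."""
    return failed | (local_wind > resistance)

failed = np.zeros(n_feeders, dtype=bool)
failed = update_failures(failed, local_wind=rng.normal(45.0, 8.0, n_feeders))
print(f"failed feeders: {failed.sum()} / {n_feeders}")
```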
§.§ Multi-Scale Spatiotemporal Cascade Model Informed by the renewable power system vulnerability model under evolving climate extremes, a multi-scale spatiotemporal cascade model is developed to simulate network dynamics. We expand state-of-the-art network cascade models designed for common-cause initial disturbances, to account for the decision-making system’s network dynamic evolution influenced by multi-scale system resilience factors — grid inertia and system flexibility — under climate extremes. The system’s dynamic evolution is discretized based on the real-time operations of the power system in a 10-minute time interval with a temporal discretization set denoted by 𝒯. At each time step t ∈𝒯, given the damage state estimated from the hazard and vulnerability models, we conduct a topology analysis of the transmission network to identify a set 𝒢(t) of all connected subgraphs 𝒢_n(t), that is, functional sub-grids with active generators and demand nodes. For each functional sub-grid, the network cascading dynamics are built upon the widely adopted ORNL-PSERC-Alaska (OPA) cascade model<cit.>. This model simulates the process of line overloads and subsequent cascading line tripping, triggered by initial failures due to infrastructure damage and subsequent power flow redistribution. The network cascading failures could reform the network topology with an updated set 𝒢^'(t). Within a functional sub-grid 𝒢_n^'(t) ∈𝒢^'(t), the power imbalance Δ P_imb^(n)(t) of this sub-grid is determined by the deviation of the total generation ∑_g∈𝒫_n P_g,t (accounting for the decline in renewable energy resources during the extreme event) from the total demand ∑_d∈𝒟_n L_d,t within the network, i.e., Δ P_imb^(n)(t)=∑_g∈𝒫_n P_g,t-∑_d∈𝒟_n L_d,t, t∈𝒯, where 𝒟_n and 𝒫_n denote the sets of demand and generation units within the sub-grid. Grid inertia quantifies the system’s capability to mitigate the effect of the power imbalance. We use the rate of change of frequency (RoCoF) <cit.>, jointly determined by the power imbalance and the grid inertia provided by synchronous generators, as a system stability constraint. A high RoCoF indicates that the system may reach a frequency nadir or zenith exceeding the system’s tolerance, thereby triggering the off-grid protection of generation units. We select the maximum RoCoF threshold as ±2 Hz/s following the recommendations by the National Renewable Energy Laboratory (NREL) for the Puerto Rico power grid <cit.>. For each sub-grid at time step t∈𝒯, we calculate the maximum RoCoF (Hz/s) by RoCoF_max^(n)=f^0Δ P_imb^(n)(t)/∑_g∈𝒫_n^SG2H_SG,gP_SG,g^nom, where f^0 is the rated 60 Hz frequency of the grid, 𝒫_n^SG denotes the set of synchronous generators, and H_SG,g and P_SG,g^nom represent the inertia constant and nominal generation capacity of a synchronous generator, respectively. The sub-grid is removed from the functional network set if its maximum RoCoF exceeds the ±2 Hz/s threshold. For a detailed derivation of the maximum RoCoF, see Supplementary Note 5. For surviving sub-grids, we further embed the decision-making process of the system’s real-time operations to eliminate power imbalances, accounting for system flexibility. The progressive decline in climate-sensitive renewable generation under evolving climate extremes requires more flexible resources, e.g., dispatchable generation units, to meet the power balance constraint of the network. 
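Before the dispatch step described next, each surviving sub-grid is screened against this RoCoF constraint. The minimal sketch below evaluates the expression above for a single sub-grid; all generator and load numbers are hypothetical and chosen purely for illustration.

```python
F0 = 60.0           # rated grid frequency (Hz)
ROCOF_LIMIT = 2.0   # +/- Hz/s threshold recommended by NREL for Puerto Rico

def max_rocof(p_gen_mw, p_load_mw, sync_units):
    """Maximum RoCoF of a sub-grid: f0 * imbalance / sum(2 * H * P_nom),
    where sync_units is a list of (inertia_constant_s, nominal_capacity_mw)
    pairs for the online synchronous generators."""
    imbalance = sum(p_gen_mw) - sum(p_load_mw)                  # MW
    kinetic = sum(2.0 * h * p_nom for h, p_nom in sync_units)   # MW*s
    return F0 * imbalance / kinetic

# Illustrative sub-grid (hypothetical numbers, not actual grid data):
rocof = max_rocof(p_gen_mw=[300.0, 150.0], p_load_mw=[520.0],
                  sync_units=[(4.0, 400.0), (3.5, 200.0)])
survives = abs(rocof) <= ROCOF_LIMIT
print(f"RoCoF = {rocof:+.2f} Hz/s -> {'survives' if survives else 'trips off-grid'}")
```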
The decision-making operations of the power system during climate extreme events are formulated by solving a mixed-integer linear programming (MILP)-based grid operation model, which is adapted from unit commitment (normal operation) and optimal load shedding (remedial control) models <cit.>. The objective function of the grid operation model is designed to minimize the losses of load shedding and the curtailment of generation units by using flexible dispatchable resources within the network. The detailed model setup and programming solver for the grid operation model are provided in Supplementary Note 6. §.§ Data and Code Availability The data and code will be made publicly available upon publication. §.§ Acknowledgments L.X., N.L. and D.X. were supported by US National Science Foundation grant number 2103754 (as part of the Megalopolitan Coastal Transformation Hub) and Princeton University Metropolis Project. H.V.P. was supported in part by a grant from the C3.ai Digital Transformation Institute. §.§ Author contributions L.X. contributed to conceptualization, methodology, writing of the initial draft. N.L contributed to the conceptualization, writing, review, editing, supervision, and guidance. H.V.P. contributed to writing, review, editing, supervision, and guidance. D.X. contributed to methodology and editing. A.T.D.P contributed to writing, review and editing. §.§ Competing interests The authors declare no competing interests.
http://arxiv.org/abs/2407.01787v1
20240701202804
Plasma Induced Variation of Electron Capture and Bound-State β Decays
[ "Bharat Mishra", "Angelo Pidatella", "Simone Taioli", "Stefano Simonucci", "David Mascali" ]
physics.plasm-ph
[ "physics.plasm-ph", "nucl-th", "physics.atom-ph" ]
^1Istituto Nazionale di Fisica Nucleare - Laboratori Nazionali del Sud, Catania, Italy ^2Dipartimento di Fisica e Astronomia - Università degli studi di Catania, Catania, Italy ^3European Centre for Theoretical Studies in Nuclear Physics and Related Areas (ECT*), Bruno Kessler Foundation, Trento, Italy ^4Trento Institute for Fundamental Physics and Applications (INFN-TIFPA), Trento, Italy ^5School of Science and Technology, University of Camerino, Camerino, Italy ^6Istituto Nazionale di Fisica Nucleare - Sezione di Perugia, Perugia, Italy § ABSTRACT The slow neutron capture (s-process) synthesises ∼50% of all elements in the universe heavier than iron, whose abundances are determined by the competition between neutron capture and nuclear β-decay rates. The latter are expected to vary inside hot and dense plasmas such as those found in s-process nucleosynthesis sites. Here, we present a new and general theoretical study of the effect of local and non-local thermodynamic equilibrium ((N)LTE) plasmas on β-decays, using orbital electron capture (EC) decays in ^7Be as a model case. We begin from the model of Takahashi and Yokoi to calculate the lepton phase volume of ^7Be as a function of its ionisation state and excitation level, and consequently, the configuration-dependent EC decay rate. We then calculate the in-plasma ion charge state distribution (CSD) and level population distribution (LPD) for a grid of plasma density and temperatures, using the population kinetics code FLYCHK. By combining the configuration-dependent EC rate with the CSD and LPD, we calculate the in-plasma orbital EC rate in ^7Be. The results show a strong correlation between the half-life and thermodynamic conditions of the plasma, underlining the importance of measuring decay rates in laboratory plasmas and the relevance of high precision atomic configuration models. The model discussed in this work is capable of calculating EC and bound state β decay (BSBD) rates in both low-density NLTE and high-density LTE plasmas. We conclude by validating our model with state-of-the-art data in literature on isotopes of Pr and Dy, and by proposing future extension of the model to laboratory magnetoplasmas and stellar interiors aimed at improving nucleosynthesis models. § INTRODUCTION Understanding the cosmic origin of elements has been a fascinating topic of study for decades. Since the seminal publications by Burbidge, Burbidge, Fowler and Hoyle <cit.> and Cameron <cit.>, the field of nuclear astrophysics has made great strides in identifying the various processes constituting nucleosynthesis, and in quantifying the abundance of the elements. Among these reactions, the s-process occurs in asymptotic giant branch (AGB) and massive stars, synthesising ∼50% of all elements between the iron peak and ^208Pb. The neutron density in these astrophysical sites is N_n∼10^6 cm^-3, leading to neutron captures on the same time scale as β-decays (days to a few years). Consequently, the nucleosynthesis pathway proceeds along the valley of β-stability on the nuclear chart <cit.>. Stellar nucleosynthesis models developed for calculating s-process branching of heavy nuclei are thus sensitive to both neutron capture cross sections σ(n,γ) and nuclear β-decay rates λ <cit.>. 
Uncertainties in either quantity can strongly affect the balance between the competing processes, leading to mismatch in observed and predicted elemental abundances <cit.>. While there are experimental facilities already operational for measuring and/or updating relevant σ(n,γ) to high precision <cit.>, nucleosynthesis models still rely on decay half-lives t_1/2 measured in neutral atoms, which may be significantly different from their stellar plasma counterparts. Modification of λ due to changes in the atomic environment of radioactive nuclei has been investigated for decades <cit.>. The first studies on the subject involved measuring the rates in extreme physical environments, such as inside steel-encased cordite bombs <cit.> or at temperatures of liquid hydrogen <cit.>. However, only negligible changes (<0.05%) were observed. Further research found relatively larger changes on modifying the chemistry of the material containing the decaying nuclei, such as using a fluoride or oxide of the isotope instead of the metal <cit.>, or implanting the nucleus into different compounds <cit.>. This effect was attributed to perturbation of the electron density on the nuclear surface which affected electron capture rates, in line with the hypotheses of Daudel and Segre <cit.>. The same phenomenon was also observed in internal conversion rates of isomers such as ^99mTc and ^90mNb <cit.>. A much stronger effect of the atomic environment on λ was observed in highly charged ions (HCIs) circulating in storage rings. The first such experiment was performed by Jung, Bosch and coworkers <cit.> who investigated the decay of fully-stripped ^163Dy^66+ ions. They discovered that the isotope became suddenly unstable (neutral ^163Dy is stable) and rapidly transmuted into ^163Ho through bound state β decay (BSBD), making it the first observation of this phenomenon. Soon after, Bosch, Faestermann and coworkers discovered BSBD in fully-stripped ^187Re <cit.> and measured a dramatic reduction in t_1/2 from 42 Gyr in neutral ^187Re to 32.9±2 yr in ^187Re^75+. Since the ^187Re/^187Os pair is used as a cosmochronometer due to its long t_1/2, a nine-order-of-magnitude increase in λ meant re-evaluation of the galactic age based on its actual t_1/2 in the stellar interior. A recent review by Litvinov and Chen <cit.> lists all the HCIs whose t_1/2 has been measured in storage rings, and offers valuable insights into BSBD and electron captures (EC) in single charge-state species. Various aspects of the phenomenon have been studied theoretically over the years, and consequently, several formulations have been developed to calculate changes in t_1/2 in different materials <cit.> and in fully-stripped ions <cit.>. The interior of stars where nucleosynthesis occurs is a naturally occurring dense plasma which contains isotopes in various ionisation stages surrounded by an energetic electron cloud. It can thus be expected that decay rates will vary strongly, depending on the properties of the plasma. Palmerini <cit.> and Busso <cit.> speculated on possible plasma-induced variation of decay rates in isotopes of Cs, Lu, Nb, Zr and Kr - among others - to quantify their impact on modelling the abundance of s-process elements. Their work was complemented by Taioli and Simonucci who developed ab-initio Dirac-Hartree-Fock (DHF) models to predict t_1/2 of ^134Cs and ^135Cs as a function of plasma temperature and density <cit.>. 
In addition to the aforementioned radio-isotopes, the in-plasma decay rate of ^7Be has attracted a great deal of attention from researchers. Bahcall, Iben and Gruzinov worked extensively on EC decays in ^7Be <cit.>, analysing in detail their relative contribution compared to continuum capture rates in the interior of the sun. The topic has seen renewed interest in recent years, with new predictions about the t_1/2 of ^7Be calculated using updated ab-initio models and its impact on the solar neutrino production discussed <cit.>. Coincidentally, many of the experiments investigating chemically-induced changes in λ mentioned earlier also involved ^7Be, underlining the historical role played by this isotope in this field. Studying ^7Be is also important for the cosmological lithium problem, where the abundance of ^7Li is calculated from a series of interconnected reactions, one of which is ^7Be(e^-,ν_e)^7Li <cit.>. The first comprehensive theories on β-decay in stellar plasma were proposed by Bahcall <cit.>, and covered continuum decays and captures in great detail. A full theoretical analysis of all in-plasma β-decay processes was made by Takahashi and Yokoi <cit.> (hereafter TY83), who put forward elegant expressions for calculating the lepton phase volume associated with each decay channel and showed how the stellar plasma may cause enhancement/suppression of some transitions. Their formulations were also used to predict BSBD rates in fully-ionised nuclei by Gupta <cit.> and Chang <cit.> and their coworkers, reproducing the observations from storage rings <cit.> in the process. In order to benchmark TY83 and study plasma-induced changes in β-decay rates experimentally, a new facility named PANDORA (Plasmas for Astrophysics, Nuclear Decay Observations and Radiation for Archaeometry) is under construction at INFN-LNS in Catania, Italy <cit.>. The facility aims to use an electron cyclotron resonance ion source to trap a compact magnetoplasma that can emulate certain aspects of the stellar interior, like the presence of a charge state distribution (CSD) in ions and a cloud of energetic electrons forming a continuum. By injecting radioisotopes into the plasma and measuring decay rates as a function of density and temperature, it would be possible to estimate stellar t_1/2 with greater accuracy than through experiments in storage rings or electron beam ion traps <cit.> that cannot reproduce plasma effects <cit.>. In its original formalism, the TY83 model predicts β-decay rates in spatially homogeneous plasmas under local thermodynamic equilibrium (LTE), wherein the ion CSD, level population distribution (LPD), and electron properties are respectively defined by the Saha, Boltzmann and Maxwell distributions through a single unified temperature. In this work, to generalise TY83 for any kind of plasma - including low density non-LTE (NLTE) laboratory magnetoplasmas - we have split the model into two components: * An atomic component, which calculates the decay rate of the radio-isotope as a function of its charge state i and excitation level j (hereafter referred to as the configuration-dependent decay rate λ^*(ij)) * A plasma component, which calculates the CSD and LPD associated with the radio-isotope as a function of plasma density and temperature (hereafter denoted by the probability factor p_ij) The in-plasma decay rate can then be obtained by combining p_ij with λ^*(ij), as sketched below.
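Schematically, this combination is a probability-weighted sum over ionic configurations. A minimal Python sketch of the bookkeeping is given below; the configuration labels, probabilities, and rates are illustrative placeholders, not FLYCHK output or results of this work.

```python
import numpy as np

# Placeholder configuration-dependent EC rates lambda*(ij) in 1/s
# (illustrative numbers only; real values come from the atomic module).
rate = {("0+", "04g02"): 1.5e-7, ("1+", "li2s"): 1.6e-7,
        ("2+", "he1s"): 9.0e-8, ("3+", "hy1"): 5.0e-8}

# Placeholder occupation probabilities p_ij from the plasma module
# (e.g., FLYCHK); they must sum to unity over all configurations.
prob = {("0+", "04g02"): 0.10, ("1+", "li2s"): 0.35,
        ("2+", "he1s"): 0.40, ("3+", "hy1"): 0.15}

def in_plasma_rate(rate, prob):
    """In-plasma decay rate: lambda* = sum_ij p_ij * lambda*(ij)."""
    return sum(prob[cfg] * rate[cfg] for cfg in rate)

lam = in_plasma_rate(rate, prob)
print(f"lambda* = {lam:.3e} 1/s, t_1/2 = {np.log(2) / lam / 86400:.1f} d")
```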
By disentangling the two components, it is possible to analyse the fundamental coupling of atomic and nuclear properties independent of the plasma, and then incorporate the effects of the latter through the p_ij factors. Differently from LTE plasmas, p_ij in NLTE plasmas are obtained using population kinetics codes which generate a full reaction rate matrix coupling all charge states and excitation levels through a collision-radiative (CR) model, and iteratively solve it until a plasma steady state is reached. In this work, we investigate the variation of β-decay rates in a generic plasma using the coupled atomic and plasma components. We discuss our approach through the example of orbital EC decay in ^7Be, but the formalism is identical for BSBD as well. The choice of this isotope is motivated by its simple atomic nature which facilitates analysis, but it also has fundamental importance in several astrophysical models, such as in calculating the solar neutrino flux mentioned earlier. § METHODS AND RESULTS The methodology used to investigate the plasma effects on the β-decay rate consists of multiple steps: first, we calculate λ^*(ij) following the technique outlined in TY83 and then use the population kinetics code FLYCHK <cit.> to calculate p_ij under different plasma conditions. We then calculate in-plasma orbital EC decay rates as a function of plasma electron density (n_e) and temperature (kT_e), in both low-density NLTE (corresponding to laboratory conditions) and high-density LTE systems (corresponding to stellar interiors). We benchmark our results with independent, ab-initio calculations, as well as with experimental data. Finally, we conclude by highlighting the versatility of the model, and with perspectives on how to adapt it for the PANDORA facility. §.§ Configuration-Dependent EC Decay Rate of ^7Be ^7Be decays into ^7Li through the electron capture process ^7Be+e^-→^7Li+ν_e where the captured e^- may belong to either the continuum or an atomic orbital. In the case of the former, the decay is classified as continuum capture, while the latter is termed orbital capture. Under neutral conditions, the decay is spontaneous with Q-value Q_0=861.815 keV for the ground state 3/2^-→3/2^- transition and Q_1=Q_0-477.6=384.215 keV for the excited 3/2^-→1/2^- transition. These values represent the difference between the atomic mass energies of the parent and daughter systems, and therefore already include a small contribution from the binding energy of the electrons. Both are allowed transitions, with the former being mixed Fermi and Gamow-Teller and the latter being pure Gamow-Teller. They are characterised by comparative half-lives logf_0t=3.324 and logf_1t=3.556, respectively <cit.>. The comparative half-lives are simply a product of the lepton phase volume f and the half-life t_1/2, and depend only on the nuclear matrix element fixed by the levels of the parent and daughter nuclei. The transitions are henceforth indexed by m for brevity, where m=0 and m=1 represent the decay to the ground and excited states of ^7Li, respectively. The lepton phase volume f describes the phase space accessible to the electron and neutrino during the decay. It varies with the atomic configuration (ij) of the radio-isotope ion and the type of transition m.
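Since the comparative half-life satisfies ft = f × t_1/2, a computed phase volume f_m and the tabulated logf_mt fix the partial rate of each transition through λ_m = ln2 · f_m/(f_mt). The short sketch below illustrates this conversion; the phase volumes used are assumed, purely illustrative numbers, not the actual ^7Be values computed in this work.

```python
import numpy as np

LOG_F0T, LOG_F1T = 3.324, 3.556   # comparative half-lives, log10 of ft in s

def ec_rate(f0, f1):
    """Total EC rate from the two transitions:
    lambda = ln2 * (f0 / (f0t) + f1 / (f1t))."""
    return np.log(2) * (f0 / 10**LOG_F0T + f1 / 10**LOG_F1T)

# Assumed, illustrative phase volumes for the m=0 and m=1 transitions
f0, f1 = 6.0e-4, 4.0e-5
lam = ec_rate(f0, f1)
br_excited = np.log(2) * (f1 / 10**LOG_F1T) / lam   # branching to excited 7Li
print(f"t_1/2 = {np.log(2) / lam / 86400:.1f} d, BR(excited) = {100 * br_excited:.1f} %")
```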
According to the formulation of TY83, the configuration-dependent phase volumes f_m^*(ij) of orbital EC and BSBD can both be calculated as f_m^*(ij)=∑_xσ_x(π/2)[g_x or f_x]^2(Q_m(ij)/m_ec^2)^2S_(m)x where x represents the atomic orbital from where the electron is captured (in case of EC) or into which it is emitted (in case of BSBD), Q_m(ij) is the decay Q-value which varies with the ion configuration, m_ec^2 is the electron rest mass energy, [g_x or f_x]^2 is the larger of the two electron radial wavefunctions evaluated at the nuclear radius, and S_(m)x is the shape factor. The quantity σ_x represents the occupancy of the orbital x. The configuration-dependent EC decay rate in ^7Be can be calculated as λ^*(ij)=ln2(f^*_0(ij)/f_0t+f^*_1(ij)/f_1t) where f_0t, f_1t are the aforementioned ft values. The configuration-dependent Q_m(ij) can be evaluated as Q_m(ij)=Q_m+[B^*_^7Be^i-B^*_^7Li^i']+[e^*_^7Be^i,j-e^*_^7Li^i',j'] where Q_m=Q_0 or Q_1, B^*_^7Be^i-B^*_^7Li^i' is the difference in electronic binding energies when the atoms are in the charge states i and i' respectively, and e^*_^7Be^i,j-e^*_^7Li^i',j' is the difference in energy of excitation levels j and j'. The bracketed terms denote the contribution of atomic ionisation and excitation to the decay Q-value, and when (i,i')→0 and (j,j')→0, Q_m(ij)→ Q_m. This formalism is an extended version of the one used by Gupta and Chang <cit.>, applicable to any ij-configuration, and not just fully-stripped atoms. In its present form, Eq. <ref> allows calculating the modification of Q_m due to a single coupling between parent configuration ij and daughter configuration i'j'. However, given a configuration ij of ^7Be, any i'j'-configuration of ^7Li may be synthesised, as long as the effective Q_m(ij→ i'j')>0. Naturally, the ij-configuration should also have electrons in the relevant shell to allow capture. Consequently, Q_m(ij) should be calculated by fixing the configuration ij of ^7Be consistent with the type of capture, coupling it with all configurations of ^7Li, and then averaging the results. In this way, Eq. <ref> can be re-written as Q_m(ij)=Q_m+Δ̅ϵ^ij where Δ̅ϵ^ij now represents the total modification in Q_m on account of the ij-configuration coupling. The term is calculated as Δ̅ϵ^ij=ϵ^*_^7Be^i,j-1/N∑_i^'j^'ϵ^*_^7Li^i^',j^' where the summation ∑_i^'j^' is over all daughter configurations resulting in a positive energy difference and N represents the total number of those levels. Here ϵ^*_^7Be^i,j and ϵ^*_^7Li^i',j' represent the total energy of their respective configurations, obtained by summing the ionisation potentials of all preceding charge states i/i'=0,1,...,i-1 and the energy of level j/j' relative to the ground state. This formalism is equivalent to that of Eq. <ref>, with the exception that the binding energies are replaced by the ionisation potentials. For the sake of simplicity, it is assumed that the decay does not result in a change in the charge state of ^7Li such that i=i^' and only bound levels within the same ionisation state are coupled (no internal conversion). We use the atomic level descriptions in the population kinetics code FLYCHK <cit.> to generate a configuration-coupling scheme capable of calculating Δ̅ϵ^ij as a function of (ij). A detailed description of the nomenclature, spectroscopic notation and electronic configuration of FLYCHK levels is provided in the Appendix. An example of the level coupling scheme and the aforementioned averaging procedure is shown in Fig. <ref> using K-shell capture in ^7Be^0+ and L-shell capture in ^7Be^1+.
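The averaging over daughter configurations is straightforward to implement once the total configuration energies ϵ^* are tabulated. The sketch below uses a handful of illustrative energies (assumed numbers in eV, not FLYCHK level data) to show the procedure.

```python
# Illustrative total configuration energies in eV: summed ionisation
# potentials of preceding charge states plus the excitation energy.
eps_be = {("1+", "li2s"): 9.3, ("1+", "li2p"): 13.3}    # parent 7Be (assumed)
eps_li = {("1+", "he1s"): 5.4, ("1+", "he2ss"): 64.0}   # daughter 7Li (assumed)

Q0 = 861.815e3  # ground-state Q-value in eV

def q_eff(parent_cfg, q_m=Q0):
    """Average Q_m(ij) over all daughter configurations with positive
    effective Q-value: Q_m(ij) = Q_m + eps*_parent - <eps*_daughter>."""
    ep = eps_be[parent_cfg]
    allowed = [ed for ed in eps_li.values() if q_m + ep - ed > 0.0]
    if not allowed:
        return None  # decay energetically forbidden from this configuration
    return q_m + ep - sum(allowed) / len(allowed)

for cfg in eps_be:
    print(cfg, f"Q0(ij) - Q0 = {q_eff(cfg) - Q0:+.1f} eV")
```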
It can be seen from this coupling scheme that in some cases, only certain configurations of the parent nucleus can contribute to the capture - e.g., L-capture requires the presence of at least one electron in the L-shell and thus only the first two levels in ^7Be^1+ can contribute to the decay. Fig. <ref> shows the variation in Q_0(ij) for various configurations of ^7Be, which is essentially the energy of the neutrino ν_e. The X-axes in the plots list a few level names, while the Y-axes show the difference between Q_0(ij) and Q_0. Since Q_0=861.815 keV, none of the configurations prohibit the decay because their Q-variations are on the eV scale, but there are fluctuations nonetheless. Deviations on the eV scale are anyway acceptable because the neutrino created in the decay process carries away the energy and balances the energetics of the reaction. The overlap between the K- and L-capture plots shows that in neutral ^7Be, all levels participate in both K- and L-capture but only certain levels can do the same from i=1^+ onward. These points are also detailed in the plots. The shape factor S_(m)x is defined by the type of transition and the orbital from which the electron is captured <cit.>. For allowed transitions such as in ^7Be, S_(m)x=1 when capturing electrons from x=1s_1/2,2s_1/2,2p_1/2 and S_(m)x=0 for x=2p_3/2. Electrons beyond the L-shell have low capture probability and are consequently neglected. The shape factors arise from the selection rules and the conservation of spin-parity between nucleon and lepton wavefunctions, and are consistent with the explanation provided by Folan and Tsifrinovich <cit.>. Here we use the spectroscopic notation x=nl_j=l+s to denote the atomic orbitals. The quantities f_x and g_x describe the overlap between the electronic orbital and the nuclear surface, assuming that the capture takes place within a thin, shell-like region at the nuclear boundary. In principle, the capture may take place inside the entire nucleus, but a rigorous analysis of the overlap between nucleons and electrons in the nuclear interior by Morresi and coworkers <cit.> produced no significant difference with respect to pure surface capture alone, and thus the assumption still holds. f_x and g_x are calculated through the expressions f_x=f_κ(r)|_r=R=f_κ(R) g_x=g_κ(r)|_r=R=g_κ(R) where f_κ(r), g_κ(r) are radial wavefunctions associated with the orbital x, the parameter κ is related to the spin and angular momentum of the same orbital <cit.>, and R is the phenomenological nuclear radius R=R_0A^1/3 expressed in atomic units, with R_0=1.2 fm. The radial wavefunctions can be calculated as f_κ(r) = (ħ/(m_eca_0))^3/2 P_κ(r)/r g_κ(r) = (ħ/(m_eca_0))^3/2 Q_κ(r)/r where a_0 is the Bohr radius and P_κ(r), Q_κ(r) are the eigenfunctions of the radial component of the Dirac equations in a central Coulomb field of potential V dP_κ(r)/dr = -(κ/r)P_κ(r) - (2c'+(V-ϵ)/c')Q_κ(r) dQ_κ(r)/dr = (κ/r)Q_κ(r) + ((V-ϵ)/c')P_κ(r) where c' is the speed of light in atomic units and ϵ is the energy of the orbital. The coefficient ħ/(m_eca_0) in Eqs. <ref> and <ref> converts P_κ(r), Q_κ(r) from atomic to natural units. Being eigenfunctions of the radial component alone, P_κ(r), Q_κ(r) obey the normalisation condition ∫_0^∞[P_κ(r)^2+Q_κ(r)^2]dr=1 where [P_κ(r)^2+Q_κ(r)^2]dr represents the radial charge probability, i.e. the probability of finding an electron at a certain distance r from the nucleus. On the other hand, f_κ(r) and g_κ(r) obey the normalisation condition ∫_0^∞[f_κ(r)^2+g_κ(r)^2]r^2dr=1 because f_κ(r)^2+g_κ(r)^2 represents the charge probability density.
This means that while certain orbitals may have non-zero probability density at the centre of the nucleus (f_κ(r)^2+g_κ(r)^2≠ 0 for r→ 0), the probability of electrons being there is zero because [f_κ(r)^2+g_κ(r)^2]r^2→ 0. We use the analytical solutions to Eqs. <ref> and <ref> as derived in Ref. <cit.> to calculate f_x and g_x. Fig. <ref>(a) shows the radial charge probability as a function of r for different orbitals in ^7Be. The plots corresponding to x=2p_1/2 and 2p_3/2 overlap with each other on account of their similarity. It can be seen that all orbitals have probability maxima, but in addition to this, the x=2s_1/2 orbital also has a probability minimum, which denotes the node of the wavefunction. The larger of f_κ^2 or g_κ^2 as a function of r is shown in Fig. <ref>(b). The dotted vertical line in the same plot denotes the nuclear surface, and the intersection of the two represents f_x^2 or g_x^2. It can be easily seen that compared to x=1s_1/2 and x=2s_1/2, outer orbitals penetrate extremely weakly into the nucleus. K-captures are the strongest processes, followed by L-captures, which are almost entirely carried out by electrons in the x=2s_1/2 orbital. The occupancy σ_x is a number between 0 and 1 which describes the completeness of an orbital, with the former meaning completely empty and the latter meaning completely full. It is calculated according to the usual Pauli exclusion principle for atomic shell filling. To use Eq. <ref> identically for both EC and BSBD, σ_x can be generalised as (σ_x)_EC=1-(σ_x)_BSBD We calculate the occupancy of the K- and L-shell orbitals approximately using the level descriptions of FLYCHK, whose spectroscopic notations are often a direct representation of the electron distribution in various orbitals. By combining all the above quantities with Eqs. <ref> and <ref>, we obtain the configuration-dependent EC rate λ^*(ij) in ^7Be. To demonstrate the effect of the ionic configuration better, we calculate the percentage change in the EC decay rate δλ^*(ij) as δλ^*(ij)=[(λ^*(ij)-λ(0))/λ(0)]×100% where λ(0) is the decay rate of neutral, ground state ^7Be. The results are shown in Fig. <ref>. It is evident that the ^7Be EC decay rate varies for all excitation levels, but the variation is strongest in doubly and triply-ionised states. For the neutral and i=1^+ ionisation states, δλ^*(ij) is driven by Q_m(ij), as can be corroborated by Fig. <ref>. The small spike in capture rate at li2s is a model artifact, arising from the fact that σ_2s=0.5 for this level, whereas the same for levels in neutral ^7Be is smaller and subject to greater uncertainty. This is due to the lack of precision in the electronic configuration of neutral atoms in FLYCHK data, as described in more detail in the Appendix. The deviations are quite small, however, and may be neglected. For i=2^+ and i=3^+, the major factor is the occupancy of the K- and L-shells. Since K-shell captures carry more weight compared to their L-shell counterparts, a rapid drop from σ_1s=100% in he1s to σ_1s=50% in he2st (Fig. <ref>(b)) results in an almost equal drop in decay rates. The rate decreases again going from he2ss to he2pt when L-captures switch from 2s to 2p_1/2 orbitals. The final sharp drop by 50% in ^7Be^3+ when going from hy1 to hy2 is explained by the excitation of the one attached electron from the K- to the L-shell. The λ^*(ij) calculated here are validated against independent results from the relativistic Dirac-Hartree-Fock model <cit.>, and the match between the two rates can also be appreciated in Fig. <ref>.
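For a low-Z ion such as ^7Be, the large components g_x at the nuclear surface are closely approximated by nonrelativistic hydrogenic radial wavefunctions. The sketch below adopts this approximation - an assumption of ours for illustration, which ignores screening by the other electrons and relativistic corrections - to compare the relative K- and L-shell capture strengths; it is not the analytical relativistic solution used in the text.

```python
import numpy as np

A0_FM = 5.29177e4          # Bohr radius in fm

def g_1s(Z, r):            # hydrogenic R_1s(r), atomic units
    return 2.0 * Z**1.5 * np.exp(-Z * r)

def g_2s(Z, r):            # hydrogenic R_2s(r), atomic units
    return (Z**1.5 / (2.0 * np.sqrt(2.0))) * (2.0 - Z * r) * np.exp(-Z * r / 2.0)

Z, A = 4, 7
R = 1.2 * A**(1.0 / 3.0) / A0_FM    # nuclear radius R = R0 * A^(1/3), in a_0

gk, gl = g_1s(Z, R), g_2s(Z, R)
# Capture strength scales as |g_x|^2; since Z*R << 1, the ratio is close to
# the analytic hydrogenic limit |R_1s(0)/R_2s(0)|^2 = 8.
print(f"|g_1s(R)|^2 / |g_2s(R)|^2 = {gk**2 / gl**2:.1f}")
```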
The small mismatch at li2s is due to the same overestimated capture rate predicted by our model as explained above, but the difference is quite small. When applied to neutral ^7Be in the ground state (denoted by level index 04g02), the model predicts t_1/2=53.44 days and a branching ratio of 10.45%, which are well within the error limits reported in the ENSDF database <cit.> (t_1/2=53.22(6) d and BR=10.44%). It should be mentioned that the logft values had to be slightly altered to get this degree of match - logf_0t=3.371 and logf_1t=3.650 were used - but the differences are only 1.4% and 2.6% respectively with respect to ENSDF data, and are well within the accepted uncertainty on these parameters. §.§ In-Plasma EC Decay Rate of ^7Be The configuration-dependent decay rates λ^*(ij) can be converted into the in-plasma decay rate through the equation λ^*=∑_ijp_ijλ^*(ij) where p_ij indicates the probability of finding the ion in the (ij)-configuration. The purpose of the plasma component of our model is to customise the formalism to evaluate p_ij according to the geometry and nature of the plasma. For uniform plasmas (homogeneous and isotropic), we use FLYCHK to obtain p_ij in a large range of n_e and kT_e values. The calculations can be run in both LTE and NLTE modes, which allows us to explore plasma ion dynamics under a variety of situations and study their effect on λ^*. Fig. <ref> shows the mean charge ⟨ Z⟩ of ^7Be as a function of kT_e for two different densities - n_e=10^12 cm^-3 and n_e=10^25 cm^-3 - under both LTE and NLTE conditions. The lower n_e is on the same order of magnitude as laboratory magnetoplasmas such as PANDORA, whereas the higher n_e describes stellar interiors. It can be noticed that at low densities, LTE and NLTE plasmas show strong differences in behaviour which vary according to the plasma temperature. At low kT_e, the predicted ⟨ Z⟩ are close, and they converge again at high kT_e. This convergence can be attributed to increased inter-level reaction rates, signalling the onset of equilibrium. At low densities, NLTE predictions can be believed to be more reliable because the plasma is not collisional enough to establish an equilibrium. High collisionality and reaction rates are present in high n_e plasmas at all temperatures, which explains the perfectly superposed LTE and NLTE ⟨ Z⟩ curves at n_e=10^25 cm^-3. These trends highlight the fact that LTE-based models like TY83 cannot be directly validated in low density magnetoplasmas, and therefore justify the need to generalise the same to NLTE systems. We use the p_ij predicted by FLYCHK for n_e=10^12 cm^-3 and a set of kT_e∈[2,12,22,32,42] eV to calculate λ^* according to Eq. <ref>. Fig. <ref>(a) shows the in-plasma orbital EC decay rate of ^7Be as a function of kT_e, in LTE and NLTE conditions. The effect of ⟨ Z⟩ on λ^* is clear - at kT_e=2 eV, our model predicts the same λ^* for LTE and NLTE, which have similar ⟨ Z⟩ (Fig. <ref>). The deviation between LTE and NLTE ⟨ Z⟩ on increasing kT_e→ 42 eV spills over into λ^*, which decreases by orders of magnitude under LTE conditions but shows a much lower suppression in NLTE plasmas. The decay rates can be expected to converge again at kT_e>60 eV, according to the trends in Fig. <ref>. We also benchmark our LTE results against independent DHF calculations <cit.>, finding solid agreement. The first-principles simulations of the LTE plasma using the DHF approach were performed with the in-house relativistic code DIRECT <cit.>. Fig.
<ref>(b) shows the variation of t_1/2 of ^7Be in the NLTE plasma for the low density n_e=10^12 cm^-3. It can be observed that t_1/2 increases with kT_e because hotter plasmas generate more 2^+ and 3^+ ions which strongly suppress capture rates (as seen in Figs. <ref> and <ref>). At kT_e=42 eV, there is already a five-fold increase in t_1/2, and it may be expected to rise even more until all ions are pushed into the i=4^+ state. At this point, orbital captures will stop completely and only continuum captures will contribute to the decay of ^7Be. The precise balance between orbital and continuum captures depends strongly on the density and temperature of the plasma, and has been a topic of intense study over the past few years <cit.>. Preliminary DHF calculations indicate that for n_e∼10^12-10^13 cm^-3, continuum captures in ^7Be will be suppressed on account of the low electron density at the nuclear surface, and thus the radio-isotope may become completely stable. This will have a strong bearing on the planned experiments with ^7Be in PANDORA, and may have some influence on primordial nucleosynthesis models as well. The configuration-dependent EC decay rate of ^7Be as predicted by our model can also provide theoretical support for the experimental results obtained so far at the ERNA facility <cit.>, and for upcoming measurements. § DISCUSSION To investigate the validity of our model in more detail, we analyse some aspects of the atomic and plasma components separately. Fig. <ref> shows the CSD p_i of ^7Be under solar conditions, as predicted by FLYCHK. The plasma parameters are taken as n_e∼3.8×10^25 cm^-3 and kT_e∼1.38 keV, corresponding to mass density ρ=150 g/cm^3 and temperature T=16×10^6 K <cit.>. The temperature is the same for both electrons and ions because p_i is calculated under LTE conditions, which are identical to NLTE at high n_e (Fig. <ref>). It can be observed that ∼20% of ^7Be is expected to be in i=3^+, which means that in the sun, there is a small but non-zero contribution of orbital EC to the total decay rate. This is precisely in line with the established literature on the subject <cit.>, following the predictions by Iben, Bahcall, Moeller and Gruzinov <cit.>, and reaffirmed by Simonucci and Taioli <cit.>. While our model still needs to be updated to calculate the total capture rate on ^7Be in the sun through the addition of continuum capture and the inclusion of screening effects on the electron wavefunctions, the results of Fig. <ref> reinforce the validity of the plasma component of our model because they reproduce the presence of bound electrons in the plasma. In addition, the plasma module also reproduces the expected physics of EC decay in ^7Be - as can be seen in Fig. <ref>(b), t_1/2→ 53.2 d as kT_e→ 0, meaning that the model correctly recovers the terrestrial decay rate of ^7Be when the plasma disappears. The atomic component of the model has been validated against EC and BSBD rates of certain radio-isotopes in select (ij) configurations. As already noted before, the t_1/2 of neutral ^7Be in the ground state matches the ENSDF data perfectly well. In addition, we also analyse the situation for ^163Dy and ^140Pr, whose configuration-dependent decays have been experimentally studied in storage rings. ^163Dy is stable under neutral conditions, but when fully-ionised, it undergoes BSBD with a measured t_1/2=47^+5_-4 d <cit.>. On applying Eq.
<ref> to ^163Dy^66+ and using a single transition marked by logft=4.99, we calculate t_1/2=49.77 d, which is not only within the experimental error limits, but also close to that calculated by Chang and coworkers <cit.> (t_1/2=49.52 d). On the other hand, ^140Pr is an unstable element which decays through EC and β^+, with an accepted t_1/2=3.39(1) min under neutral conditions <cit.>. Experiments in storage rings have measured λ^*(ij) of ^140Pr in fully-ionised, one-electron attached (hydrogen-like) and two-electron attached (helium-like) configurations <cit.>. We calculate the EC decay rate for helium-like ^140Pr^57+ and obtain λ^*=0.00145 s^-1, in perfect agreement with the measured rate of 0.00147(7) s^-1. The authors in Ref. <cit.> also note that β^+ decay rates are fairly invariant over different atomic configurations, and on adding their measured λ^*_β^+ to our calculated λ^*_EC for neutral ^140Pr, we obtain t_1/2=3.61 min, which is also in full agreement with the aforementioned half-life of ^140Pr^0+. A list of all isotopes analysed and results obtained is presented in Table <ref>. It should be noted that the t_1/2 reported for ^140Pr^57+ corresponds to EC decays alone, which explains the factor two difference compared to the neutral case. In summary, we have developed a general model of in-plasma β decay which can be identically applied across a vast range of plasma parameter space. The strength of the model lies in its separation of atomic and plasma components, which allows generating more observables to validate the model against, while also maintaining versatility when dealing with diverse environments. The model has been tested successfully on ^7Be, ^163Dy and ^140Pr, and will be applied to many more radio-isotopes soon. Currently, we are introducing several upgrades to our model to improve its applicability. First and foremost is extending the formalism to predict in-plasma continuum captures and continuum β^+/- decays, complementary to the work by Gupta and coworkers <cit.>. This extension also involves including excited nuclear states in the parent and daughter atoms and assessing their impact on the modification of t_1/2. Since these additional nuclear couplings lack literature data on their comparative half-lives, we are looking into shell model and ab-initio calculations to obtain better precision on logft values, in line with the suggestions by Chang and coworkers <cit.>. We are also working on incorporating the effect of ionisation potential depression (IPD) on level energies, which will modify Q_m(ij) as a function of n_e and kT_e. With regard to EC decays, Litvinov and coworkers <cit.> have already identified the presence of the hyperfine splitting effect in hydrogen-like ^140Pr^58+, which potentially enhances the capture rate <cit.>. We are working on probing its effect on EC decay rates in ^140Pr^58+ and ^7Be^3+. Finally, to fulfill the objective of benchmarking TY83 in laboratory magnetoplasmas, we intend to couple our model with a Particle-in-Cell Monte Carlo (PIC-MC) code to simulate p_ij in an electron cyclotron resonance (ECR) plasma trap so as to predict 3D, space-resolved in-plasma decay rates for experimental verification by PANDORA. B2FH1957 Burbidge E.M., Burbidge G.R., Fowler W.A. and Hoyle F., Synthesis of Elements in Stars. Reviews of Modern Physics 29, 4 (1957) Cameron1957 Cameron A.G.W., Nuclear Reactions in Stars and Nucleogenesis. Publications of the Astronomical Society of the Pacific 69, 408 (1957) Lugaro2023 Lugaro M., Pignatari M., Reifarth R.
& Wiescher M., The s-Process and Beyond. Annual Review of Nuclear and Particle Science 73 (2023) Arcones2022 Arcones A. & Thielemann F., Origin of the Elements. Astronomy and Astrophysics Review 31, 1 (2022) Palmerini2021 Palmerini S., Busso M., Vescovi D., Naselli E., Pidatella A., Mucciola R., Cristallo S., Mascali D., Mengoni A., Simonucci S. & Taioli S., Presolar grain isotopic ratios as constraints to nuclear and stellar parameters of asymptotic giant branch star nucleosynthesis. The Astrophysical Journal 921 (2021) Busso2022 Busso M.M., Kratz K.-L., Palmerini S., Akram W., and Antonuccio-Delogu V., Production of solar abundances for nuclei beyond Sr: The s- and r-process perspectives. Frontiers in Astronomy and Space Science 9 (2022) Lisowski1990 Lisowski P.W., Bowman C.D., Russell G.J., and Wender S.A., The Los Alamos National Laboratory Spallation Neutron Sources. Nuclear Science and Engineering 106, 2 (1990) Abbondanno2003 Abbondanno U. et al, CERN n_TOF facility: Performance report (2003) Guerrero2013 Guerrero C. et al, Performance of the neutron time-of-flight facility n_TOF at CERN. The European Physical Journal A 49, 1 (2013) decayrev1972 Emery G.T., Perturbation of Nuclear Decay Rates, Annual Review of Nuclear Science 22, 165 (1972) Rutherford1907 Rutherford E. & Petavel J.E., The Effect of High Temperature on the Activity of the Products of Radium, The Collected Papers of Lord Rutherford of Nelson 2, pp. 456-457 (1907) Curie1913 Curie M.P. & Kammerlingh-Onnes M., The radiation of radium at the temperature of liquid hydrogen, KNAW Proceedings 15 II, pp. 1430-1441 (1913) Bouchez1949 Bouchez R., Daudel P., Daudel R., Muxart R. & Rogozinski A., Mise en évidence de l'influence du degré de l'ionisation de l'atome sur la période du nuclide ^7Be. J. Phys. Radium 10, 201 (1949) Leininger1949 Leininger R.F., Segre E. & Wiegand C., Experiments on the effect of atomic electrons on the decay constant of ^7Be. Physical Review 76, 897 (1949) Ray1999 Ray A., Das P., Sahu S., Das S., Sethi B., Mookerjee A., Chaudhuri C.B. & Pari G., Observation of large change of ^7Be decay rate in Au and Al_2O_3 and its implications. Physics Letters B 455, 69 (1999) Norman2001 Norman E., Rech G., Browne E., Larimer R.-M., Dragowsky M.R., Chan Y.D., Isaac M.C.P., McDonald R.J. & Smith A., Influence of physical and chemical environments on the decay rates of ^7Be and ^40K. Physics Letters B 519, 15 (2001) Ohtsuki2007 Ohtsuki T., Ohno K., Morisato T., Mitsugashira T., Hirose K., Yuki H. & Kasagi J., Radioactive Decay Speedup at T=5 K: Electron-Capture Decay Rate of ^7Be Encapsulated in C_60. Physical Review Letters 98, 1 (2007) Daudel1947 R. Daudel, Alteration of radioactive periods of the elements with the aid of chemical methods. Rev. Sci. 85 (1947) Segre1947 E. Segre, Possibility of altering the decay rate of a radioactive substance. Physical Review 71 (1947) decaymodTc1953 Bainbridge K.T., Goldhaber M. & Wilson E., Influence of the chemical state on the lifetime of a nuclear isomer, Tc^99m. Physical Review 90, 430 (1953) decaymodNb1965 Cooper J.A., Hollander J.M. & Rasmussen J.O., Effect of the chemical state on the lifetime of the 24-second isomer of Nb^90*. Physical Review Letters 15, 680 (1965) Jung1992 Jung M. et al, First Observation of Bound-State β-Decay. Physical Review Letters 69, 2164 (1992) Bosch1996 F. Bosch et al, Observation of Bound-State β-Decay of Fully Ionized ^187Re: ^187Re-^187Os Cosmochronometry. Physical Review Letters 77, 5190 (1996) Litvinov2023 Litvinov Y.
and Chen R.J., Radioactive decays of stored highly charged ions. The European Physical Journal A 59, 102 (2023) DaudelBDecay1947 Daudel R., Jean M. & Lecoin M., On the possible existence of a particular type of e^- creating radioactive phenomenon. J. Phys. Radium 8, 238 (1947) Morisato2008 Morisato T., Ohno K., Ohtsuki T., Hirose K., Sluiter M. & Kawazoe Y., Electron-capture decay rate of ^7BeC_60 by first-principles calculations based on density functional theory. Physical Review B 78, 1 (2008) Lee2008 Lee K. & Steinle-Neumann G., Ab-initio study of the effects of pressure and chemistry on the electron-capture radioactive decay constants of ^7Be, ^22Na and ^40K, Earth and Planetary Science Letters 267, 628 (2008) Gupta2019 Gupta A., Lahiri C., and Sarkar S., Bound and continuum state β-decay of bare atoms: Enhancement of decay rate and changes in β-decay branching, Physical Review C 100, 1 (2019) Liu2021 Liu S., Gao C., and Xu C., Investigation of bound state β-decay half-lives of bare atoms. Physical Review C 104, 1 (2021) Gupta2023 Gupta A., Lahiri C., and Sarkar S., Allowed β^- decay of bare atoms with A≈60-80 in stellar environments, Physical Review C 108, 015805 (2023) Taioli2022 Taioli S., Vescovi D., Busso M., Palmerini S., Cristallo S., Mengoni A. & Simonucci S., Theoretical Estimate of the Half-Life for the Radioactive ^134Cs and ^135Cs in Astrophysical Scenarios, The Astrophysical Journal 933, 148 (2022) BahcallEC1962 J. Bahcall, Electron Capture and Nuclear Matrix Elements of Be^7. Physical Review 128, 1297 (1962) Iben1967 Iben Jr I., Kalata K. & Schwartz J., The effect of Be^7 K-Capture on the solar neutrino flux. The Astrophysical Journal 150, 1001 (1967) Bahcall1969 Bahcall J. & C. Moeller, The ^7Be Electron-Capture rate. The Astrophysical Journal 155, 511 (1969) Gruzinov1997 Gruzinov A.V. & Bahcall J., The ^7Be electron capture rate in the sun. The Astrophysical Journal 490, 437 (1997) Johnson1992 Johnson C.W., Kolbe E., Koonin S.E. & Langanke K., The Fate of ^7Be in the Sun. Astrophysical Journal 392 (1992) Adelberger2011 Adelberger E.G. et al, Solar fusion cross sections. II. The pp chains and CNO cycles. Reviews of Modern Physics 83, 1 (2011) Shaviv2003 Shaviv N. & Shaviv G., The state of ^7Be in the core of the Sun and the solar neutrino flux. Monthly Notices of the Royal Astronomical Society 341, 119 (2003) Quarati2008 Quarati P. & Scarfone A., Nuclear electron capture rate in stellar interiors and the case of ^7Be. Journal of Physics G: Nuclear and Particle Physics 36, 1 (2008) Sawyer2011 Sawyer R., Electron capture rates in a plasma. Physical Review C 83, 1 (2011) Simonucci2013 Simonucci S., Taioli S., Palmerini S. & Busso M., Theoretical estimates of stellar e^- captures. I. The half life of ^7Be in evolved stars. The Astrophysical Journal 764, 1 (2013) Vescovi2019 Vescovi D., Piersanti L., Cristallo S., Busso M., Vissani F., Palmerini S., Simonucci S. & Taioli S., Effects of a revised ^7Be e^--capture rate on solar neutrino fluxes. Astronomy and Astrophysics 623, A126 (2019) Broggini2012 Broggini C., Canton L., Fiorentini G. & Villante F.L., The cosmological ^7Li problem from a nuclear physics perspective. Journal of Cosmology and Astroparticle Physics 2012 (2012) Bahcall1961 Bahcall J., Theory of Bound-State Beta Decay. Physical Review 124, 495 (1961) Bahcall1962 Bahcall J., Beta Decay in Stellar Interiors. Physical Review 126, 1143 (1962) BahcallEC1964 Bahcall J., Electron Capture in Stellar Interiors.
The Astrophysical Journal 139, 318 (1964) TakahashiYokoi1983 Takahashi K. and Yokoi K., Nuclear β-Decays of Highly Ionised Heavy Atoms in Stellar Interiors. Nucl. Phys. A 404, 578 (1983) Mascali2017 Mascali D., Musumarra A., Leone F., Romano F.P., Galatá A., Gammino S. and Massimi C., PANDORA, a new facility for interdisciplinary in-plasma physics. The European Physical Journal A 53, 145 (2017) Leach2017 Leach K.G., Dillmann I., Klawitter R., Leistenschneider E., Lennarz A., Brunner T., Frekers D., and Andreoiu C., Electroweak Decay Studies of Highly Charged Radioactive Ions with TITAN at TRIUMF. Atoms 5, 1 (2017) Musumarra2023 Musumarra A., Massimi C., Pellegriti M.G., and Leone F., Ion Traps for Nuclear Decay Studies: a design for a handheld Electron Beam Ion Trap (EBIT). HNPS Advances in Nuclear Physics 29 (2023) Mascali2022 Mascali D., Santonocito D. et al, A Novel Approach to β-Decay: PANDORA, a New Experimental Setup for Future In-Plasma Measurements. Universe 8, 1 (2022) FLYCHK2005 Chung H.-K., Chen M., Morgan W., Ralchenko Y. & Lee R., FLYCHK: Generalized population kinetics and spectral model for rapid spectroscopic analysis for all elements. High Energy Density Phys. 1, 3 (2005) Tilley2002 Tilley D., Cheves C., Godwin J., Hale G., Hofmann H., Kelley J., Sheu C. & Weller H., Energy levels of light nuclei A=5,6,7. Nuclear Physics A 708, 3 (2002) FLY1996 Lee R.W. and Larsen J.T., A time-dependent model for plasma spectroscopy of K-shell emitters, Journal of Quantitative Spectroscopy and Radiative Transfer 56, 4 (1996) HULLAC2001 Bar-Shalom A. and Oreg J., HULLAC, an integrated computer package for atomic processes in plasmas, Journal of Quantitative Spectroscopy and Radiative Transfer 71, 2-6 (2001) Folan1995 Folan L. & Tsifrinovich V., Effects of the Hyperfine Interaction on Orbital Electron Capture. Physical Review Letters 74, 499 (1995) Morresi2018 Morresi T., Taioli S. & Simonucci S., Relativistic Theory and Ab Initio Simulations of Electroweak Decay Spectra in Medium-Heavy Nuclei and of Atomic and Molecular Electronic Structure. Advanced Theory and Simulations: Nuclear Beta Decay 1, 11 (2018) Burke1967 Burke V. & Grant I., The effect of relativity on atomic wave functions. Proceedings of the Physical Society 90, 297 (1967) Taioli2021 Taioli S. & Simonucci S., Relativistic quantum theory and algorithms: a toolbox for many-fermion systems in different scenarios. Annual Reports in Computational Chemistry 17 (2021) Santonostaso2021 Santonostaso C., Buompane R., Di Leva A., Morales-Gallegos L., Itaco N., Landi G., Leitzert H.C., Rapagnani D. & Gialanella L., Change in the ^7Be half-life in different environments. Il Nuovo Cimento 44 C, 75 (2021) Nica2018 Nica N., Nuclear Data Sheets for A=140, Nuclear Data Sheets 154 (2018) Litvinov2007 Litvinov Y., Measurement of the β^+ and Orbital Electron-Capture Decay Rates in Fully Ionized, Hydrogenlike, and Heliumlike ^140Pr Ions, Physical Review Letters 99, 262501 (2007) § ACKNOWLEDGEMENTS The authors gratefully acknowledge the support of the 3^rd National Committee of INFN and funding from project PANDORA_Gr3. A.P. would like to acknowledge the financial support from the MUR-PNRR Project PE0000023-NQSTI, financed by the European Union (NextGeneration EU). S.T. would like to acknowledge the European Union under grant agreement no. 101046651 for this action.
The authors are grateful to Chang Xu and Shuo Liu at the School of Physics, Nanjing University, China, for clarifying the meaning of the equations used to calculate the radial wavefunctions on the nuclear surface. § APPENDIX - FLYCHK DATA Data on the excitation levels in ions, their energies and the electronic configurations used in this model are taken from the atomic physics database employed by FLYCHK. FLYCHK is a population kinetics code which can calculate the CSD and LPD of ions for different kinds of plasma in various configurations <cit.>. The code is a successor to the FLY suite of codes <cit.> and is capable of generating detailed spectra of plasma in a broad wavelength range. In order to do so, FLYCHK uses an extensive repository of atomic data for elements from Z=1-71, which includes valuable information such as level electronic configurations, energies, oscillator strengths and statistical weights. The nomenclature of atomic levels in FLYCHK is a combination of existing names from the FLY module <cit.>, the HULLAC database <cit.> and superconfiguration states. A detailed explanation of each is provided in Ref. <cit.>, but a brief summary follows here. When performing computations for H-, He- and Li-like ions (meaning one, two and three electrons attached, respectively), FLYCHK uses the FLY database for Z≤26, and the HULLAC database for others. Consequently, levels in ^7Be^1/2/3+ and ^7Li^0/1/2+ follow the FLY nomenclature while those in neutral ^7Be^0+ follow FLYCHK's own system of nomenclature instead. The atomic levels are broadly categorised as bound and autoionising, depending on their energy - the former are levels with energy less than the ionisation energy of the atom/ion, whereas the latter are free states that autoionise into the next ionisation state. The plasma decay model described in this work only considers bound levels. As may be seen from Figs. <ref> and <ref>(a), levels in ^7Be^0+ are named as 04g[n] where [n] ranges from 02-10. The first index 04 represents the 4 electrons present in the system, while g represents bound states. [n] denotes the principal quantum number of the valence electron, but no detailed information on the subshell configuration is available. This is the reason behind the imprecise estimation of σ_2s in EC rates in neutral ^7Be, which leads to a relative overestimation of the same at li2s. Levels in ^7Be^1+ and ^7Li^0+ are generally designated as li[nl] where [nl] represents the principal and azimuthal quantum numbers, respectively. This detailed notation holds up to n=5, after which only n is shown. This implies that subshell configuration details from li6 onward do not exist. In the case of ^7Be^2+ and ^7Li^1+, the level notations follow the general structure he[nls] where [nls] stands for the principal and azimuthal quantum numbers of the valence electron, and the total spin state, respectively. This level of detail holds up to n=2, beyond which the indices collapse to the simpler form he[nps]. The levels of ^7Be^3+ and ^7Li^2+ are simply represented as the hydrogenic hy[n] where n is the principal quantum number of the one attached electron. A short summary of the level indices and their characteristics is provided in Table <ref>.
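As a practical illustration of this nomenclature, the sketch below parses a few representative level names into (iso-electronic sequence, number of bound electrons, charge state) for Be. The parsing rules are our own reading of the naming scheme summarised above, not an official FLYCHK utility.

```python
import re

Z_BE = 4  # nuclear charge of beryllium

def parse_level(name, z=Z_BE):
    """Map a FLYCHK-style level name to (sequence, n_electrons, charge)."""
    if m := re.match(r"^(\d{2})g(\d{2})$", name):  # e.g. 04g02: 4 e-, bound, n=2
        n_el = int(m.group(1))
        return "neutral-like", n_el, z - n_el
    for prefix, n_el in (("li", 3), ("he", 2), ("hy", 1)):
        if name.startswith(prefix):
            return f"{prefix}-like", n_el, z - n_el
    raise ValueError(f"unrecognised level name: {name}")

for lvl in ["04g02", "li2s", "he2st", "hy1"]:
    seq, ne, q = parse_level(lvl)
    print(f"{lvl}: {seq}, {ne} e-, charge {q}+")
```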
Radiation MHD Simulations of Soft X-ray Emitting Regions in Changing Look AGN
Taichi Igarashi, Hiroyuki R. Takahashi, Tomohisa Kawashima, Ken Ohsuga, Yosuke Matsumoto, Ryoji Matsumoto
July 2, 2024
==============================================================================
Division of Science, National Astronomical Observatory of Japan, 2-21-1 Osawa, Mitaka, Tokyo 181-8588, Japan Department of Physics, Rikkyo University, 3-34-1 Nishiikebukuro, Toshima-Ku, Tokyo 171-8501, Japan Department of Natural Science, Faculty of Arts and Science, Komazawa University, 1-23-1 Komazawa, Setagaya-Ku, Tokyo 154-8525, Japan Institute for Cosmic Ray Research, The University of Tokyo, 5-1-5, Kashiwanoha, Kashiwa, Chiba 277-8582, Japan Center for Computational Sciences, University of Tsukuba, Tennodai, 1-1-1, Tsukuba, Ibaraki 305-8577, Japan Institute for Advanced Academic Research, Chiba University, 1-33 Yayoi-Cho, Inage-Ku, Chiba 263-8522, Japan Digital Transformation Enhancement Council, Chiba University, 1-33 Yayoi-Cho, Inage-Ku, Chiba 263-8522, Japan § ABSTRACT Strong soft X-ray emission called the soft X-ray excess is often observed in luminous active galactic nuclei (AGN). It has been suggested that the soft X-rays are emitted from a warm (T=10^6-10^7 K) region that is optically thick to Thomson scattering (warm Comptonization region). Motivated by the recent observations that the soft X-ray excess appears in changing look AGN (CLAGN) during the state transition from a dim state without broad emission lines to a bright state with broad emission lines, we performed global three-dimensional radiation magnetohydrodynamic simulations assuming that the mass accretion rate increases and becomes around 10% of the Eddington accretion rate. The simulation successfully reproduces a warm, Thomson-thick region outside the hot radiatively inefficient accretion flow near the black hole. The warm region is formed by efficient radiative cooling due to inverse Compton scattering. The calculated luminosity, 0.01L_Edd-0.08L_Edd, is consistent with the luminosity of CLAGN. We also found that the warm Comptonization region is well described by the steady model of magnetized disks supported by azimuthal magnetic fields. When the anti-parallel azimuthal magnetic fields supporting the radiatively cooled region reconnect around the equatorial plane of the disk, the temperature of the region rises as the magnetic energy transported to the region is released. § INTRODUCTION Luminous active galactic nuclei (AGN), such as Seyfert galaxies and quasars, exhibit soft X-ray emission. The origin of the soft X-ray emission in AGN is puzzling since the temperature of the standard accretion disk around a supermassive black hole is around 10^5 K <cit.>. Two types of models have been proposed to explain the soft X-ray emission. One is the ionized reflection model <cit.>. In this model, the soft X-rays are emitted by an optically thick disk that is irradiated by the hard X-ray emission from the vicinity of the central black hole. The other model invokes emission from a Thomson-thick warm corona with temperature T=10^6-10^7 K via inverse Compton scattering <cit.>. <cit.> showed that the intensity of the soft X-ray emission increases with increasing mass accretion rate. However, the geometry of the warm Comptonization region is still unknown, and it is not clear whether it is a region separated from the cool optically thick disk or associated with the cool disk. Recent monitoring observations have shown that some Seyfert galaxies exhibit spectral state transitions <cit.>. They show transitions between a state without broad emission lines in the optical range (type 2 AGN) and a state with broad emission lines (type 1 AGN) <cit.>.
During the state transitions, the UV to X-ray spectrum also shows transitions <cit.>. AGN that show state transitions between type 1 and type 2 are called changing look AGN (CLAGN) <cit.>. <cit.> and <cit.> showed that UV and soft X-ray emission is dominant in the bright phase of CLAGN, while hard X-ray emission is dominant in the dark phase. During the state transitions, the soft X-ray intensity changes drastically. Therefore, CLAGN may provide clues to the origin of the soft X-ray emission in AGN. The spectral state transitions in CLAGN in the UV to X-ray range are similar to the hard-to-soft/soft-to-hard state transitions in black hole X-ray binaries (BHXB) observed during an outburst <cit.>. During the hard-to-soft state transitions, BHXB remain in a bright but hard X-ray dominant state for ∼100 days <cit.>. This state is called the bright-hard state. In this state, the cut-off energy of the X-ray spectrum is anti-correlated with the luminosity. It indicates that the electron temperature decreases as the accretion rate increases <cit.>. To investigate the origin of the bright-hard state in BHXB, <cit.> conducted three-dimensional global magnetohydrodynamic (MHD) simulations of BH accretion flows by including optically thin radiative cooling. The initial state is a hot magnetized radiatively inefficient accretion flow (RIAF) <cit.>. When the disk surface density exceeds an upper limit, cooling dominates over heating, so that the disk shrinks vertically by radiative cooling. They found that the disk does not transit to the standard disk because the disk is supported by magnetic pressure, which prevents further vertical contraction of the disk. <cit.> carried out simulations of sub-Eddington accretion flows in BHXB incorporating general relativity and radiative transfer and confirmed that the disk becomes supported by magnetic pressure. Motivated by the numerical results of <cit.>, <cit.> obtained steady solutions of BH accretion flows supported by azimuthal magnetic fields. They showed that the bright-hard state of BHXB can be explained by the magnetically supported disk. <cit.> and <cit.> performed radiation MHD simulations of sub-Eddington accretion flows by numerically solving the MHD equations coupled with the radiation transfer equations. Although the accretion rate in their simulations is higher than that of <cit.>, resulting in the presence of optically thick regions, the accretion flow is still supported by the magnetic pressure <cit.>. In this paper, we present the results of three-dimensional global radiation MHD simulations of accretion flows onto a 10^7M_⊙ black hole when the accretion rate is around 10% of the Eddington accretion rate. The numerical methods and initial conditions are presented in section 2. In this study, we extend the previous work on radiation MHD simulations of AGN accretion flows, as outlined by <cit.>, through the inclusion of Compton scattering. The numerical results are presented in section 3. In section 4, we compare the numerical results with steady-state solutions of magnetically supported disks. Summary and conclusions are given in section 5. The derivation of thermal equilibrium curves of the magnetically supported disks is summarized in the Appendix. § BASIC EQUATIONS AND NUMERICAL SETUP We solve the radiation MHD equations numerically.
The basic equations in the MHD part are ∂ρ/∂ t + ∇·(ρv) = 0, ∂ρv/∂ t + ∇·(ρvv + p_tI- BB/4π) =-ρ∇ϕ_PN -S, ∂E_t/∂ t + ∇·[(E_t+p_t)v - B(v· B)/4π] = -∇·(1/cηj×B)-ρv·∇ϕ_PN - c S_0, ∂B/∂ t + ∇·(vB-Bv+ψI) = -∇×(4π/cηj), ∂ψ/∂ t + c^2_h∇·B = -c^2_h/c^2_pψ, where ρ, v, B, and j = c∇×B/4π are the mass density, velocity, magnetic field, and current density, respectively. In addition, p_t=p_gas+B^2/8π and E_t = ρ v^2/2 + p_gas/(γ - 1) + B^2/8π, where p_gas is the gas pressure and γ=5/3 is the specific heat ratio. In Equations (<ref>) and (<ref>), ψ is introduced so that the divergence-free condition on the magnetic field is maintained with minimal error during time integration, and c_h and c_p are constants <cit.>. For the gravitational potential, we adopt the pseudo-Newtonian potential to mimic general relativistic effects <cit.>: ϕ_PN = -GM_BH/(R-r_s). Here G, R=√(r^2+z^2), and r_s are the gravitational constant, the distance to the black hole, and the Schwarzschild radius, respectively. Here, we use cylindrical coordinates (r,φ,z). We assume M_BH=10^7M_⊙ in all simulations. We adopt the so-called anomalous resistivity, η = η_0 min[1,(v_d/v_c-1)^2] for v_d ≥ v_c and η = 0 for v_d < v_c, where η_0 = 0.01cr_s, v_c=0.9c, and v_d=jm_p/(ρ e) are the upper limit of the resistivity, the critical velocity, and the drift velocity, respectively. Here m_p is the proton mass, and e is the elementary charge. The resistivity η becomes large when the drift velocity v_d exceeds the critical velocity v_c <cit.>. Equations (1)-(5) are solved with the high-order-accuracy (5th order in space) code CANS+ <cit.>. Note that in simulation codes based on finite volume methods, such as CANS+, numerical magnetic diffusion is unavoidable and can be larger than the anomalous resistivity. In equations (<ref>) and (<ref>), 𝐒 and S_0 are the radiation momentum and the radiation energy source terms, respectively, and are derived by solving the frequency-integrated 0th and 1st moments of the radiation transfer equations expressed in the following forms <cit.>: ∂ E_r/∂ t + ∇·F_r = cS_0, 1/c^2∂F_r/∂ t + ∇·P_r = S, where cS_0 = ρκ_ffc(a_r T^4 - E_r) + ρ(κ_ff-κ_es)(v/c)·[F_r - (vE_r + v·𝐏_r)] + Γ_c, 𝐒 = ρκ_ff(v/c)(a_rT^4 - E_r) - ρ(κ_ff+κ_es)(1/c)[F_r - (vE_r + v· P_r)]. Here E_r, F_r, P_r and a_r are the radiation energy density, radiative flux, radiation stress tensor, and radiation constant, respectively. In Equations (10) and (11), κ_ff = 1.7×10^-25 m_p^-2 ρ T^-7/2 cm^2/g is the free-free opacity and κ_es = 0.4 cm^2/g is the electron scattering opacity. We solve equations (8) and (9) by operator splitting. First, we solve the equations without the source terms on the right-hand side by an explicit scheme. Then, the right-hand side (source terms) is incorporated by an implicit method <cit.>. This code was applied to radiation MHD simulations of accretion flows in CLAGN <cit.> and super-Eddington accretion flows around stellar-mass black holes <cit.>. We include the cooling rate Γ_c due to inverse Compton scattering, Γ_c = ρκ_es c E_r0 [4k_B(T_e - T_r)/(m_ec^2)], where E_r0, T_e=min(T,10^9 K), T_r=(E_r0/a_r)^1/4, k_B, and m_e are the radiation energy density in the co-moving frame, the electron temperature, the radiation temperature, the Boltzmann constant, and the electron mass, respectively <cit.>. For simplicity, we solve the equations for a single-temperature plasma while partially accounting for the effects of a two-temperature plasma (i.e., a plasma in which the electron temperature differs from the ion temperature).
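As a concrete illustration, the Compton source term Γ_c can be evaluated cell by cell from the local gas density, temperature, and co-moving radiation energy density. The CGS sketch below is our own illustration of the formula above, not the actual CANS+ implementation.

```python
import numpy as np

# Physical constants in CGS units
K_B    = 1.380649e-16     # Boltzmann constant, erg/K
M_E_C2 = 8.18710565e-7    # electron rest-mass energy, erg
C      = 2.99792458e10    # speed of light, cm/s
A_RAD  = 7.5657e-15       # radiation constant, erg cm^-3 K^-4
KAPPA_ES = 0.4            # electron-scattering opacity, cm^2/g

def compton_rate(rho, T_gas, E_r0):
    """Compton cooling (>0) / heating (<0) rate, erg cm^-3 s^-1:
    Gamma_c = rho * kappa_es * c * E_r0 * 4 k_B (T_e - T_r) / (m_e c^2),
    with T_e capped at 1e9 K and T_r = (E_r0 / a_r)^(1/4)."""
    T_e = np.minimum(T_gas, 1.0e9)
    T_r = (E_r0 / A_RAD) ** 0.25
    return rho * KAPPA_ES * C * E_r0 * 4.0 * K_B * (T_e - T_r) / M_E_C2

# Example with illustrative warm-disk-like values
print(f"{compton_rate(1.0e-11, 1.0e7, 1.0e2):.3e} erg cm^-3 s^-1")
```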
Two-temperature plasmas appear in the corona and in accretion flows with accretion rates well below the Eddington accretion rate. Since the ion cooling rate through collisions with electrons is low in nearly collisionless hot plasmas, ions remain hot while electrons can be cooled by radiation. Two-temperature models of RIAFs have been extensively studied <cit.>. The electron temperature in general relativistic radiation MHD simulations of two-temperature accretion flows is T_e=10^9-10^10 K in RIAFs <cit.> and T_e∼10^9 K when the accretion rate exceeds 10^-3Ṁ_Edd <cit.>. For studies of hot accretion flows with relatively high mass accretion rates, the single-temperature plasma approximation will lead to overcooling of the ions <cit.>. To avoid the overcooling of ions, we introduce an upper limit for the electron temperature, because inverse Compton scattering in hot accretion flows cools the electrons down to 10^9 K. In the low-temperature region, a floor value for the gas temperature, T=T_floor=5×10^5 K, is introduced to avoid negative gas pressure when the gas pressure is derived from the total energy, which includes the thermal energy, the magnetic energy, and the kinetic energy. Note that the total energy is used as a conserved variable. In the simulations presented in this paper, we first perform a non-radiative simulation starting from a rotating equilibrium torus embedded in a weak poloidal magnetic field with initial plasma β=10 until a quasi-steady BH accretion flow is formed. To describe a pure poloidal initial magnetic field, we set a φ component of the vector potential which is proportional to the density (A_φ∝ρ) of the initial torus <cit.>. The computational domain of our simulation is 0 ≤ r < 2000r_s, 0 ≤φ < 2π, and |z| < 2000r_s, and the number of grid points is (n_r,n_φ,n_z) = (464,32,464). The grid spacing is 0.1r_s in the radial and vertical directions when r < 20r_s and |z| < 5r_s and increases outside this region. An absorbing boundary condition is imposed at R=2r_s, and the outer boundaries are free boundaries through which waves can propagate out of the domain. Figure <ref> shows the azimuthally averaged distribution of the density ⟨ρ⟩ (left) and temperature ⟨ T ⟩ (right) at t/(10^4t_0)=1.05. Here, t_0=r_s/c=100 (M_BH/10^7M_⊙) s and ρ_0 is the maximum density of the initial torus. In this paper, ⟨ A ⟩ denotes the azimuthal average calculated as ⟨ A ⟩ = ∫_0^2π A dφ/∫_0^2π dφ, ⟨ A ⟩_ρ denotes the density-weighted azimuthal average calculated as ⟨ A ⟩_ρ = ∫_0^2π Aρ dφ/∫_0^2πρ dφ, and ⟨ A ⟩_V denotes the volume average calculated as ⟨ A ⟩_V = ∭ A dr rdφ dz/∭ dr rdφ dz. Since the radiative cooling term is not included until t/(10^4t_0)=1.05, the accretion flow is hot (T>10^8 K) throughout the region. The radiative cooling term is turned on at this stage. The structure of the non-radiative accretion flow produced by this simulation is described in <cit.>. The left panel of Figure <ref> shows the radial distribution of the azimuthally averaged surface density Σ = ∫^100r_s_-100r_sρ dz averaged before including the radiation term (0.85<t/(10^4t_0)<1.05). The surface density is highest around r=40r_s. The density enhancement is the remnant of the initial torus. The right panel of Figure <ref> shows the net mass accretion rate computed by Ṁ = -∫^100r_s_-100r_s∫^2π_0 ρ v_r rdφ dz, where v_r is the radial velocity in cylindrical coordinates. The net mass accretion rate is constant up to ∼20r_s.
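The averaging and accretion-rate diagnostics defined above reduce to simple array operations on the (r, φ, z) grid. The numpy sketch below shows the discretized forms; it assumes an illustrative uniform grid with random stand-in data, unlike the actual non-uniform CANS+ mesh.

```python
import numpy as np

# Illustrative uniform cylindrical grid (the production run uses a
# non-uniform 464 x 32 x 464 mesh)
nr, nphi, nz = 64, 32, 64
r   = np.linspace(1.0, 100.0, nr)           # radius in units of r_s
phi = np.linspace(0.0, 2 * np.pi, nphi, endpoint=False)
z   = np.linspace(-100.0, 100.0, nz)
rho = np.random.rand(nr, nphi, nz)          # stand-in for simulation data
v_r = -0.01 * np.random.rand(nr, nphi, nz)  # inflow corresponds to v_r < 0

dphi, dz = phi[1] - phi[0], z[1] - z[0]

def azimuthal_avg(a):
    """<A> = integral A dphi / integral dphi."""
    return a.mean(axis=1)

def density_weighted_avg(a, rho):
    """<A>_rho = integral A rho dphi / integral rho dphi."""
    return (a * rho).sum(axis=1) / rho.sum(axis=1)

def mdot(rho, v_r, r):
    """Net accretion rate Mdot(r) = -int int rho v_r r dphi dz."""
    return -(rho * v_r).sum(axis=(1, 2)) * r * dphi * dz

print(azimuthal_avg(rho).shape, density_weighted_avg(v_r, rho).shape,
      mdot(rho, v_r, r).shape)
```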
The unit density ρ_0 is set so that the net accretion rate is equal to the accretion rate specified as a parameter. In this paper, numerical results for model M01 (Ṁ∼0.1Ṁ_Edd, ρ_0=1.0×10^-11 g/cm^3) and model M03 (Ṁ∼0.3Ṁ_Edd, ρ_0=3.0×10^-11 g/cm^3) are presented. Here, Ṁ_Edd is the Eddington accretion rate defined by Ṁ_Edd=L_Edd/c^2, where L_Edd is the Eddington luminosity. Table <ref> summarizes the normalizations and parameters used in this paper. The horizontal lines in the left panel of Figure <ref> show the surface density corresponding to τ_es(0)=(τ_es(+0)+τ_es(-0))/2=10 for models M01 and M03, where τ_es(z) is the electron scattering optical depth calculated by τ_es(z) = ∫^100r_s_zρκ_es dz for z>0 and τ_es(z) = ∫^z_-100r_sρκ_es dz for z<0. For model M01, τ_es(0) exceeds 10 in the region 20r_s<r<80r_s. In model M03, τ_es(0) exceeds 10 in 3r_s<r<80r_s. § NUMERICAL RESULTS §.§ Formation of the Soft X-ray Emitting, Warm Comptonization Region Figure <ref> shows the time evolution of the azimuthally averaged surface density ⟨Σ⟩ (upper panels) and the density-weighted azimuthally averaged equatorial temperature ⟨ T ⟩_ρ (lower panels) for models M01 (left panels) and M03 (right panels). After the inclusion of the radiative cooling term at t/(10^4t_0)=1.05, the accretion flow quickly cools; a warm disk with temperature T=10^6-10^7.5 K is formed at r>35r_s in M01 and a cool dense disk with temperature T=10^5-10^6 K is formed at r>40r_s in M03. Figure <ref> shows the azimuthally averaged density ⟨ρ⟩ (top), temperature ⟨ T ⟩ (middle), and radiation energy density ⟨ E_r⟩ (bottom) distributions when the warm disk is formed. The left panels are for model M01 averaged over 1.5<t/(10^4t_0)<1.75, and the middle panels are for model M03 averaged over 1.55<t/(10^4t_0)<1.8. The right panels show results without Compton cooling/heating reported by <cit.> (model NC, Ṁ∼0.1Ṁ_Edd, ρ_0=2.0×10^-11 g/cm^3). The contours in the top panels of Figure <ref> show isocontours of τ_es(z). In the models with Comptonization, the disk thickness and temperature decrease due to Compton cooling. In the outer region (r>20r_s), the gas temperature decreases significantly to ∼10^7 K in model M01 and ∼10^6 K in model M03, and a warm, Thomson-thick region is formed. In models M01 and M03, the radiation energy density in the warm region is large. Radiation is trapped in the region where τ_es(z)>10, and this region is radially wider than in the model without Comptonization. The radiation energy density in model M01 is comparable to that in model NC. On the other hand, model M03 has a higher radiation energy density than the other models due to the larger optical depth in the warm region. The white streamlines in the bottom panels show the direction of the radiative flux in the poloidal plane. The direction of the radiative flux is almost vertical in the region where τ_es<1. The bolometric luminosity L=∫ F_z(r,φ,z=40r_s) dr rdφ of model M01 is L∼0.01L_Edd, consistent with the X-ray intensity at the onset of the changing look state transition. The luminosity increases with increasing mass accretion rate, reaching L∼0.08L_Edd in model M03. This luminosity is consistent with that in the UV and soft X-ray dominant state of CLAGN. To identify the soft X-ray emitting region, we calculate the Compton y-parameter defined by y = [4k_BT_e/(m_ec^2)] max(τ_es(0), τ_es(0)^2). When T>10^9 K, T_e is set equal to 10^9 K, as discussed in the previous section.
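In post-processing, the y-parameter map follows from a vertical integration of ρκ_es and the capped electron temperature. A schematic numpy version of this diagnostic (our own sketch, assuming azimuthally averaged data on a uniform vertical grid) is given below.

```python
import numpy as np

K_B, M_E_C2, KAPPA_ES = 1.380649e-16, 8.18710565e-7, 0.4  # CGS units

def compton_y(rho, T, z):
    """y = 4 k_B T_e / (m_e c^2) * max(tau, tau^2), T_e = min(T, 1e9 K).

    rho, T are (nr, nz) azimuthally averaged slices; z is the vertical
    grid in cm. tau is the midplane electron-scattering depth, averaged
    over the two hemispheres as tau(0) = (tau(+0) + tau(-0)) / 2."""
    dz = z[1] - z[0]                     # uniform spacing assumed here
    upper = z >= 0.0
    tau_up = (rho[:, upper] * KAPPA_ES).sum(axis=1) * dz
    tau_dn = (rho[:, ~upper] * KAPPA_ES).sum(axis=1) * dz
    tau0 = 0.5 * (tau_up + tau_dn)
    T_e = np.minimum(T[:, np.argmin(np.abs(z))], 1.0e9)  # midplane T_e
    return 4.0 * K_B * T_e / M_E_C2 * np.maximum(tau0, tau0**2)

# Illustrative warm slab on a small (nr, nz) grid
nr, nz = 32, 64
z = np.linspace(-1.0e14, 1.0e14, nz)
rho = np.full((nr, nz), 1.0e-13)
T = np.full((nr, nz), 5.0e6)
print(compton_y(rho, T, z)[:3])
```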
As also mentioned in Section 2, we set the lower limit of the temperature to 5×10^5 K to avoid negative gas pressure in the simulation. This can lead to an overestimation of the Compton y-parameter. To avoid this, we replace the electron temperature T_e with the radiation temperature T_r in the region where T<10^6 K when we compute the Compton y-parameter. Since T_r can be less than 5×10^5 K, our estimate gives a lower limit of the Compton y-parameter. Figure <ref> shows the radial distribution of the density-weighted azimuthally averaged T_e (black dotted curves) and the Compton y-parameter (blue solid curves). In model M01, the warm region with a temperature T=10^6-10^7 K lies outside ∼40r_s. The Compton y-parameter in this region is 0.3-0.4. The CLAGN observation suggests that the Compton y-parameter of the soft X-ray emitting region is ∼0.4 and less than 1 <cit.>. This is consistent with our simulation results. The Compton y-parameter for M01 is 0.1-0.2 in the hot inner region (r<30r_s) where T_e∼10^9 K. Hard X-rays can be emitted from this region by inverse Compton scattering of soft photons. In M03, the warm region (T=10^6-10^7 K) moves inward and is located around 30r_s. The Compton y-parameter in this warm region is 0.3-0.4, so we expect soft X-ray emission from this region. In the outer region (r>40r_s), the electron temperature drops to 10^5 K, so we expect UV emission from there. Table <ref> summarizes the observational features of our simulations. In model M01, the soft X-ray emitting region appears around r=40r_s, but no UV emitting region appears because the temperature exceeds 10^6 K. In model M03, the temperature at r>50r_s decreases to 10^5 K (see Figure <ref>), and this region emits UV radiation. In model NC, the Compton y-parameter is larger, but the temperature exceeds 10^7 K because Compton cooling is not included <cit.>. Figure <ref> shows the distribution of ⟨p_gas+p_rad⟩/⟨p_mag⟩ for models M01 (left) and M03 (right), where p_rad=E_r/3 is the radiation pressure. In M01, the magnetic pressure is dominant in the warm region in the upper hemisphere and in the hot inner torus. In model M03, the magnetic pressure dominates in the inner torus and in the warm region (20r_s<r<30r_s). The radiation pressure dominates in the outer region (r>30r_s). For M01, the accretion rate is around 10% of the Eddington accretion rate, and the radiation pressure is lower than that in model NC (without Comptonization) reported in <cit.>. This is because the disk temperature in M01 is lower than that in NC due to Compton cooling. In model M01, the magnetic-pressure-dominated warm region appears only above the equatorial plane because the azimuthal magnetic fields below the equatorial plane reconnect with those above the equatorial plane. Figure <ref> shows the butterfly diagram of the azimuthally averaged azimuthal magnetic field at r=40r_s. During the vertical contraction of the disk due to radiative cooling, azimuthal magnetic fields anti-symmetric to the equatorial plane reconnect, and in model M01 only positive (red) magnetic fields remain in the upper hemisphere. This is the reason why the magnetic-pressure-dominated region in Figure <ref> appears only in the upper hemisphere around r=40r_s. In the later stage (t/(10^4t_0)>2.0), the azimuthal magnetic fields are again amplified by magnetorotational instability, and their polarity reverses. In model M03, azimuthal magnetic fields remain in both the lower and upper hemispheres at t/(10^4t_0)=1.75.
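The Compton y-parameter with the temperature cap and the floor-value replacement described above can be sketched as follows; the constants are standard physical values, and the function name is ours.

    import numpy as np

    K_B = 1.380649e-16   # Boltzmann constant [erg/K]
    M_E_C2 = 8.187e-7    # electron rest energy m_e c^2 [erg]

    def compton_y(T, T_rad, tau_es0):
        """y = (4 k_B T_e / m_e c^2) * max(tau_es(0), tau_es(0)^2),
        with T_e capped at 1e9 K and, where T < 1e6 K, replaced by the
        radiation temperature so the result is a lower limit there."""
        T_e = np.minimum(T, 1e9)             # inverse-Compton cap
        T_e = np.where(T < 1e6, T_rad, T_e)  # avoid temperature-floor bias
        return 4.0 * K_B * T_e / M_E_C2 * np.maximum(tau_es0, tau_es0**2)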
§.§ Heating by Magnetic Reconnection around the Equatorial Plane Figure <ref> shows the time variation of the azimuthally averaged, vertically integrated magnetic pressure (blue solid line) and the density-weighted azimuthally averaged equatorial temperature (black dashed line) for model M01 at r=40r_s. Around t/(10^4t_0)∼1.3 and t/(10^4t_0)∼1.9, the magnetic pressure decreases and the equatorial temperature increases, indicating that magnetic energy is converted into thermal energy. Figure <ref> shows the azimuthally averaged azimuthal magnetic field and poloidal magnetic field lines before and after t/(10^4t_0)∼1.3 (left panels) and t/(10^4t_0)∼1.9 (right panels). In the left panels, azimuthal magnetic flux tubes with opposite poloidal and azimuthal magnetic fields are moving outward. These helical flux tubes are formed in the inner region (r∼20r_s) at around t/(10^4t_0)=1.2. They collide at the equatorial plane and merge. This event is similar to the merging of two spheromaks with opposite current helicity j·B. The merging of the counter-helicity flux tubes converts magnetic energy into thermal energy and heats the plasma <cit.>. The right panels of Figure <ref> show that the azimuthal magnetic field in the inner hot flow (r<20r_s) is reversed from that in the left panels, and magnetic reconnection with the field in the outer warm disk takes place. The outward motion of the helical flux tubes produces outward motion of the plasma in the equatorial region. Figure <ref> shows the space-time plot of the accretion rate at |z|>3r_s (left panel), |z|<3r_s (middle panel), and the net mass accretion rate (right panel) for M01. Before the radiative cooling is included (t/(10^4t_0)<1.05), accretion proceeds in the equatorial region of the disk, but as the disk cools, accretion takes place mainly in the surface region of the disk, and outward motion becomes prominent in the equatorial region. This indicates that meridional circulation is taking place in the radiatively cooled warm disk at r>30r_s. The surface accretion flow is driven by the angular momentum loss of the plasma around the disk surface through large-scale poloidal magnetic fields threading the disk. The surface accretion flow was called 'avalanche flow' in <cit.>. Figure <ref> shows the distribution of the Poynting flux F_poy = -(v×B)×B/4π at t/(10^4t_0)=1.25 (left) and t/(10^4t_0)=1.5 (right). In the warm region at r>35r_s, the radially outward Poynting flux is comparable to the radiative flux, indicating that the magnetic energy transported into this region contributes to the heating of the plasma. Heating of the accretion flow by the Poynting flux from the inner region has been studied with general relativistic radiation MHD simulations <cit.>. Our numerical results indicate that the Poynting flux contributes to the heating of the warm Compton region of the disk even when the central black hole is not rotating. § COMPARISON WITH MODELS OF MAGNETIZED DISKS We showed by radiation MHD simulations including Compton cooling/heating that a warm Comptonization region supported by azimuthal magnetic fields is formed outside the radiatively inefficient accretion flow near the black hole. In this region, the disk shrinks in the vertical direction due to radiative cooling. In this section, we compare the numerical results with steady models of magnetized black hole accretion flows.
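The Poynting flux diagnostic above is a simple vector expression; a minimal sketch, assuming velocity and magnetic-field arrays in Gaussian units with the vector component along the last axis:

    import numpy as np

    def poynting_flux(v, B):
        """F_poy = -(v x B) x B / (4 pi), using the ideal-MHD electric
        field c*E = -(v x B). v, B: arrays of shape (..., 3); the radial
        component of the result gives the outward energy transport."""
        cE = -np.cross(v, B)
        return np.cross(cE, B) / (4.0 * np.pi)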
The basic equations of steady accretion disks partially supported by the magnetic pressure of the azimuthal magnetic field were derived by <cit.>, assuming that the heating term in the energy equation is computed as Q^+=-αW_tot r dΩ/dr, where Ω is the angular speed of the rotating disk and W_tot is the vertically integrated total pressure p_tot=p_gas+p_rad+p_mag. Here we update their model by including additional magnetic heating due to magnetic reconnection and assume that Q^+=-αW_tot r dΩ/dr+(3/2)α'W_totΩ. The derivation of the thermal equilibrium curves for this model is given in the Appendix. In this section, we compare the numerical results with the magnetized disk model. We first examine the angular momentum transport rate α, calculated as α = ⟨-B_rB_φ/4π⟩_V/⟨p_tot⟩_V. Figure <ref> shows the time evolution of α averaged in 35r_s<r<45r_s. The dashed curve shows α in the equatorial region where τ_es>1, and the solid curve shows α in the surface region where 0.1<τ_es<1. The α value calculated in the surface region (solid curve) is larger than that in the disk region. The smaller α in the equatorial region is due to the inhibition of accretion through the radially outward motion of helical flux tubes. In the following, we adopt α=0.1. Figure <ref> overplots the numerical results and the thermal equilibrium curve for α=0.1, α'=0, Σ_0=20, Φ_0=2×10^16, and ζ=0.5. Here, Φ_0 is the azimuthal magnetic flux, and ζ is a parameter that relates the magnetic flux and surface density as Φ=Φ_0(Σ/Σ_0)^ζ (see the Appendix). The yellow rectangles and red diamonds show the surface density and the vertically integrated total pressure for models M01 and M03 at r=40r_s. The blue circles are for model NC at r=30r_s. Our simulation results are located between the optically thin branch and the optically thick branch. This is similar to the results for stellar-mass black holes <cit.>. In model NC (blue circles), W_tot is larger than the equilibrium solution because Compton cooling is neglected in this model, and the bremsstrahlung cooling time scale is much longer than the dynamical time scale. In model M01 (yellow rectangles), the total pressure decreases towards the magnetic-pressure-supported disk solution. The disk contraction enhances the magnetic pressure in the cool region and maintains the disk at a luminosity of around 0.01L_Edd. In model M03 (red diamonds in Figure <ref>), the large optical depth for Thomson scattering enhances the radiation pressure, so that the vertically integrated total pressure exceeds that of the magnetic-pressure-dominated disk. The right panel of Figure <ref> shows the thermal equilibrium curves in the surface density-temperature plane. The yellow rectangles and red diamonds show the azimuthally averaged surface density and equatorial (z=0) temperature at r=40r_s for models M01 and M03, respectively, and the blue circles are for model NC at r=30r_s. Our numerical results indicate that the temperature is 10^6-10^7 K at Σ=20-100 g/cm^2. This is consistent with the observation of soft X-ray emission in CLAGN. However, the temperature is higher than that of the equilibrium solutions. The higher temperature can be explained by additional heating due to the dissipation of magnetic energy caused by the merging of the antiparallel azimuthal magnetic fields near the equatorial plane.
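The α measurement just described, together with the radial scaling of the extra heating coefficient derived next, can be sketched as follows; the function names and the use of cell volumes as averaging weights are our assumptions.

    import numpy as np

    def alpha_parameter(B_r, B_phi, p_tot, weights):
        """alpha = <-B_r B_phi / 4 pi>_V / <p_tot>_V, with <.>_V a volume
        average over the region selected by `weights` (e.g., cell volumes
        masked to 0.1 < tau_es < 1 for the surface region)."""
        maxwell = -(B_r * B_phi) / (4.0 * np.pi)
        return np.average(maxwell, weights=weights) / np.average(p_tot, weights=weights)

    def alpha_prime(alpha, r, r_in):
        """alpha' = (r / r_in)^3 * alpha, from scaling the extra heating
        Q' = (3/2) alpha W_tot(r_in) Omega(r_in) to radius r."""
        return (r / r_in) ** 3 * alpha

    # e.g., alpha_prime(0.1, 40.0, 22.0) -> (40/22)^3 * 0.1 ~ 0.6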
The additional heating through the injection of the Poynting flux from the inner region can be evaluated as Q'=(3/2)αW_tot(r_in)Ω(r_in), where r_in is the radius where the outward-moving helical flux tubes are formed. Since ṀΩ(r_in)=2παW_tot(r_in), Q' is proportional to Ω(r_in)^2∝r_in^-3 when Ṁ is fixed. Thus, at r=40r_s, Q'=(3/2)(r/r_in)^3αW_tot(r)Ω(r). When we denote Q'=(3/2)α'W_tot(r)Ω(r), α'=(r/r_in)^3α. The heating rate is Q^+=-αW_tot r dΩ/dr + (3/2)α'W_totΩ=(3/2)(α+α')W_totΩ. When we adopt r_in=22r_s, as suggested by Figure <ref>, α'=(40/22)^3α∼6α=0.6. Figure <ref> shows the thermal equilibrium curves in surface density versus vertically integrated total pressure (left panel) and temperature (right panel) for α=0.1 and α+α'=0.7. The yellow rectangles, red diamonds, and blue circles show the numerical results. In the left panel of Figure <ref>, the magnetic-pressure-supported equilibrium solution does not change with α', because the gas pressure does not contribute to the total pressure in this branch (plasma β∼0.01 in this regime). On the other hand, since the gas temperature increases with the additional heating (3/2)α'ΩW_tot, the thermal equilibrium curve for the surface density and temperature approaches the numerical results when α+α'=0.7. Figure <ref> schematically shows the energy transport in the warm region suggested by the numerical results. Magnetic field amplification through the surface accretion converts gravitational energy into magnetic energy around the interface between the radiatively cooled disk and the inner hot RIAF and forms helical azimuthal magnetic flux tubes. The helical flux tubes are expelled from this region by the Lorentz force in the positive radial direction and transport the magnetic energy. The magnetic energy transported to the warm disk is converted into thermal energy through magnetic reconnection. It should be noted that the magnetic energy has been accumulated in the disk by mass accretion, which releases gravitational energy. Therefore, the time-averaged heating rate should be determined by the time-averaged accretion rate. However, there can be a time lag between the magnetic energy accumulation and its dissipation, so a transient state can exist in which the energy dissipation rate is larger than that expected from the accretion rate in that state. When the magnetic energy release ceases, cooling will dominate over heating, so that the disk will shrink in the vertical direction, which enhances the vertically integrated magnetic energy again. § SUMMARY In this paper, we have shown the results of global three-dimensional radiation MHD simulations with sub-Eddington accretion rates. The simulations successfully reproduced the soft X-ray emitting, Thomson-thick warm Comptonization region with T=10^6-10^7 K. When the accretion rate is 0.1Ṁ_Edd (model M01), a warm, Thomson-thick region with an average temperature of 10^6-10^7 K is formed outside 40r_s. In this state, the magnetic pressure is dominant in the warm region. The bolometric luminosity L∼0.01L_Edd is close to that of the dark phase of CLAGN. As the mass accretion rate increases (model M03), the warm region forms around 30r_s<r<40r_s, and the temperature of the cool outer region decreases to 10^5 K. In this state, the cool region (r>40r_s) is mainly dominated by radiation pressure, because the optical depth is larger than that in model M01. The luminosity is ∼0.08L_Edd, which is close to that of the luminous phase of CLAGN.
The temperature and the Thomson optical depth of the warm region are consistent with the observational properties of the warm Comptonization region in CLAGN <cit.>. As the radiation pressure increases further, the radiative cooling rate increases, and a UV-emitting cold region forms, which is also consistent with the increased UV emission in the luminous state of CLAGN <cit.>. We should note that the warm region is optically thin for the effective optical depth τ_eff=√(τ_ff(τ_ff + τ_es)) in model M01. Therefore, soft X-ray emission from reflection off cold disks is ruled out, at least in the low-luminosity state. Soft X-rays are emitted from the warm Comptonization region itself by Comptonizing photons emitted by bremsstrahlung or synchrotron radiation in the same region, or by upscattering photons emitted from the outer cool region by inverse Compton scattering. In contrast, in model M03 the effective optical depth is close to unity in the 10^5 K region. In this region, soft X-rays can be emitted from the surface region <cit.> as well as from the warm Comptonization region near the equatorial plane around r=30r_s. We have also obtained the thermal equilibrium solutions of magnetized disks with an azimuthal magnetic field for 10^7M_⊙ black holes. When the magnetic flux at r=40r_s exceeds 2×10^16 Mx/cm, equilibrium solutions with temperature T=10^6-10^7 K appear when Σ=10-100 g/cm^2. Numerical results show that the accretion flow approaches this state in the Σ-W_tot plane, but the temperature is an order of magnitude larger than the equilibrium solution unless additional heating is considered. The origin of the additional heating is the magnetic energy dissipated through the reconnection of reversed magnetic fields transported from the inner region. This work is supported by JSPS Kakenhi 20H01941, 24K00672 (PI R.M.), JP23K03448 (PI T.K.), 20K11851, 20H01941, and 20H00156 (H.R.T). We are grateful to the NAOJ for allowing us to use the XC50 systems operated by CfCA to run the simulations in this study. We thank Dr. H. Noda, A. Kubota, Y. Kato, S. Yamada, and M. Machida for valuable discussions. This Appendix describes the thermal equilibrium solutions for axisymmetric, steady magnetized disks. Here, we assume the polytropic relation p_tot=Kρ^(1+1/N), where N=3. Vertical hydrostatic balance yields ρ(r,z) = ρ_0(r)(1-z^2/H^2)^N and p_tot(r,z) = p_tot,0(r)(1-z^2/H^2)^(N+1), where the subscript 0 denotes the value at the equatorial plane and H=[2(N+1)/Ω_K0^2 · p_tot,0/ρ_0]^(1/2) is the disk half-thickness; here p_tot=p_gas+p_rad+p_mag and Ω_K are the total pressure and the Keplerian angular velocity, respectively. Integrating the equations in the vertical direction, the surface density Σ and the vertically integrated total pressure W_tot can be written as Σ = ∫_-H^+H ρ dz = 2ρ_0 I_N H and W_tot = ∫_-H^+H p_tot dz = 2p_tot,0 I_N+1 H, where I_N = (2^N N!)^2/(2N+1)!. Rewriting ρ_0 and p_tot,0 using Σ and W_tot, we obtain H = [(2N+3)/Ω_K0^2 · W_tot/Σ]^(1/2). Now, assuming axisymmetry and integrating the azimuthally averaged equations in the vertical direction, we get: Ṁ = -2πrΣv_r = const., Ṁ(ℓ-ℓ_in) = 2πr^2αW_tot, (Ṁ/2πr^2)·[(W_rad+W_gas)/Σ]·ξ = Q^+ - Q^-, where ℓ is the specific angular momentum and ℓ_in is the specific angular momentum carried into the black hole. The left-hand side of equation (<ref>) is the advective cooling rate Q_adv. We assume ξ=1. <cit.> assumed that the magnetic flux advection rate Φv_r, where Φ=∫B_φ dz, is determined by Ṁ.
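The vertical-structure relations above reduce to a few closed-form expressions. A small helper sketch, directly transcribing the formulas:

    from math import factorial

    def I_N(N):
        """I_N = (2^N N!)^2 / (2N+1)! for integer N."""
        return (2**N * factorial(N))**2 / factorial(2 * N + 1)

    def half_thickness(W_tot, Sigma, Omega_K0, N=3):
        """H = [ (2N+3) / Omega_K0^2 * W_tot / Sigma ]^(1/2)."""
        return ((2 * N + 3) / Omega_K0**2 * W_tot / Sigma) ** 0.5

    def midplane_values(W_tot, Sigma, Omega_K0, N=3):
        """Invert Sigma = 2 rho_0 I_N H and W_tot = 2 p_0 I_{N+1} H."""
        H = half_thickness(W_tot, Sigma, Omega_K0, N)
        rho_0 = Sigma / (2.0 * I_N(N) * H)
        p_0 = W_tot / (2.0 * I_N(N + 1) * H)
        return rho_0, p_0, H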
Here we modify the formulation by introducing a parameter ζ and assume that Φ = Φ_0(Σ/Σ_0)^ζ. When ζ>0, the magnetic flux is stored in the region where the mass accumulates. Note that the magnetic flux Φ is integrated in the vertical direction, so contributions cancel when the azimuthal magnetic field is anti-symmetric with respect to the equatorial plane. We therefore use |B_φ| to avoid this cancellation of the azimuthal magnetic flux. Figure <ref> shows the relation between the absolute magnetic flux Φ_+=∫|B_φ|dz and Σ at r=40r_s obtained from the radiation MHD simulations. The magnetic flux is larger in the early stage. In the later stage, the magnetic flux decreases, because the anti-symmetric azimuthal magnetic field dissipates around the equatorial plane. The azimuthal magnetic flux Φ_0 and the parameter ζ can be estimated to be Φ_0=2×10^16 Mx/cm and ζ=0.5, respectively. We further assume that the disk heating rate is written as Q^+ = -αW_tot r dΩ/dr + (3/2)α'W_totΩ. Here, the first term on the right-hand side is the heating by the vertically integrated rφ component of the stress tensor, W_rφ=-αW_tot, and the second term is the enhanced heating by magnetic reconnection, where α' is proportional to the reconnection rate. For radiative cooling, we consider thermal bremsstrahlung emission. The cooling rate in the optically thin limit is written as Q^-_thin = 6.2×10^20 [I_2N+1/2/(2I_N^2)] (Σ^2/H) T_0^(1/2), and the cooling rate in the optically thick limit is written as Q^-_thick = 16σI_N T_0^4/(3τ/2), where σ and τ=τ_abs+τ_es are the Stefan-Boltzmann constant and the total optical depth, respectively. Here τ_abs is the absorption optical depth, and τ_es=0.5κ_esΣ is the optical depth for Thomson scattering. For the intermediate case, we use the approximate form of the radiative cooling <cit.>, Q^- = 16σI_N T_0^4/(3τ/2 + √3 + τ_abs^-1), where τ_abs = [8I_N^2/(3I_N+1 τ)] (Q^-_thin/Q^-_thick). We also consider cooling by inverse Compton scattering, whose rate can be written as Q^-_Comp = κ_esΣ Q^- (4k_B/m_e c^2)[(I_N+1/I_N)T_0 - T_r], where T_r=[(3τ/2)Q^-/(4a_r c I_N)]^(1/4) is the radiation temperature. For simplicity, we assume that the ion and electron temperatures are the same. Strictly, the temperature difference between ions and electrons should be considered in hot accretion flows where T_i>10^9 K, but it can be neglected in the warm Comptonization region where T_i<10^8 K. To obtain the thermal equilibrium curves of black hole accretion flows taking into account the azimuthal magnetic field, we need to solve the equation of state and the balance between heating, radiative cooling, and advective cooling. Defining f_1 = W_tot - W_rad - W_gas - W_mag and f_2 = Q^+ - Q^-_rad - Q^-_adv, we obtain f_1 = W_tot - (1/4c)·[16σI_N T_0^4/(3τ/2 + √3 + τ_abs^-1)]·(I_N+1/I_N) H (τ + 2/√3) - (I_N+1/I_N)(R/μ)ΣT_0 - [Φ_0^2/(8πH)](Σ/Σ_0)^2ζ, and f_2 = Q^+ - 16σI_N T_0^4/(3τ/2 + √3 + τ_abs^-1) - κ_esΣ Q^- (4k_B/m_e c^2)[(I_N+1/I_N)T_0 - T_r] - (Ṁ/r^2)·[(W_gas + W_rad)/(κ_esΣ)]·ξ. The thermal equilibrium curve is obtained by solving f_1 = f_2 = 0 for a given radius and mass accretion rate. Figure <ref> shows the result for a supermassive black hole with mass M=10^7M_⊙ at r=40r_s when ζ=0.5, α=0.1, α'=0, and Σ_0=30 g/cm^2. The upper left panel of Figure <ref> shows the solution for the vertically integrated total pressure and surface density.
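Numerically, each point on such an equilibrium curve follows from a two-dimensional root solve. A minimal sketch, assuming f1 and f2 have been implemented from the expressions above (they are not spelled out here), before turning to the figure discussion:

    from scipy.optimize import fsolve

    def equilibrium_point(Sigma, r, mdot, guess=(1e8, 1e6)):
        """Solve f1(W_tot, T0) = f2(W_tot, T0) = 0 for fixed surface
        density, radius, and accretion rate; the initial guess selects
        the equilibrium branch."""
        def residuals(x):
            W_tot, T0 = x
            return (f1(W_tot, T0, Sigma, r),
                    f2(W_tot, T0, Sigma, r, mdot))
        return fsolve(residuals, guess)

    # Sweeping Sigma and collecting (Sigma, W_tot) and (Sigma, T0)
    # pairs traces the thermal equilibrium curves shown in the figures.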
When the surface density exceeds the upper limit for a RIAF, the vertically integrated total pressure decreases because the radiative cooling overcomes the disk heating. However, since the magnetic pressure increases as the disk shrinks in the vertical direction, the enhanced heating balances the radiative cooling, so that an intermediate state between the optically thin branch and the optically thick standard disk appears, as shown in <cit.> for a stellar-mass black hole. The upper right panel of Figure <ref> shows the thermal equilibrium curve in the surface density-temperature plane. When the magnetic flux exceeds 2×10^16 Mx/cm, a warm (T=10^6-10^7 K) region appears when Σ=1-100 g/cm^2. The temperature and Thomson optical depth in this region are comparable to those of the warm Comptonization region and can explain the soft X-ray emission. Note that the temperature of this region is lower than that for stellar-mass black holes, where T=10^8 K <cit.>. The lower left panel of Figure <ref> shows the plasma β defined as β=(W_gas+W_rad)/W_mag. In the intermediate state, the magnetic pressure dominates the gas and radiation pressure. Note that the plasma β is lower for the lower magnetic flux model. This is because the disk heating rate can only balance the radiative cooling when the magnetic pressure is much higher than the gas pressure. The bottom right panel shows that the warm Comptonization region is optically thin for the effective optical depth.
http://arxiv.org/abs/2407.02287v1
20240702142031
Do CAA, CT, and DANE Interlink in Certificate Deployments? A Web PKI Measurement Study
[ "Pouyan Fotouhi Tehrani", "Raphael Hiesgen", "Teresa Lübeck", "Thomas C. Schmidt", "Matthias Wählisch" ]
cs.CR
[ "cs.CR", "cs.NI" ]
Do CAA, CT, and DANE Interlink in Certificate Deployments? A Web PKI Measurement Study Pouyan Fotouhi Tehrani1, Raphael Hiesgen2, Teresa Lübeck2, Thomas C. Schmidt2 and Matthias Wählisch1 1TU Dresden, Dresden, Germany Email: {pouyan.tehrani,m.waehlisch}@tu-dresden.de 2HAW Hamburg, Hamburg, Germany Email: {raphael.hiesgen,teresa.luebeck,t.schmidt}@haw-hamburg.de July 8, 2024 ===================================================================================================================================================================================================================================================================================================================================== If you cite this paper, please use the TMA reference: Pouyan Fotouhi Tehrani, Raphael Hiesgen, Teresa Lübeck, Thomas C. Schmidt and Matthias Wählisch. 2024. Do CAA, CT, and DANE Interlink in Certificate Deployments? A Web PKI Measurement Study. In Proceedings of the Network Traffic Measurement and Analysis Conference (TMA '24). IEEE, Piscataway, NJ, USA, 11 pages. https://doi.org/10.23919/TMA62044.2024.10559089 § ABSTRACT Integrity and trust on the web build on X.509 certificates. Misuse or misissuance of these certificates threaten the Web PKI security model, which led to the development of several guarding techniques. In this paper, we study the DNS/DNSSEC records CAA and TLSA as well as CT logs from the perspective of the certificates in use. Our measurements comprise 4 million popular domains, for which we explore the existence and consistency of the different extensions. Our findings indicate that CAA is almost exclusively deployed in the absence of DNSSEC, while DNSSEC-protected service names tend not to use the DNS for guarding certificates. Even though mainly deployed in a formally correct way, CAA CA-strings tend not to selectively separate CAs, and numerous domains hold certificates beyond the CAA semantic. TLSA records are often poorly maintained and occasionally occur without DNSSEC. DNS, DNSSEC, CAA, PKI, TLS, CT Logs § INTRODUCTION Secure and authenticated transport is essential to the modern web. Trust in the Web PKI is built on X.509 certificates and derives from accepted root certification authorities (CA). Any CA that is part of a valid trust chain can issue a trustworthy certificate for any service. Millions of devices rely on this ecosystem for browsing, shopping, online banking, and more. Given the immense reach of each CA, much effort has gone into securing the Web PKI (<ref>)—most prominently DNS-based Authentication of Named Entities (DANE) <cit.>, Certification Authority Authorization (CAA) <cit.>, and Certificate Transparency (CT) logs <cit.>. DANE and CAA build on the Domain Name System (DNS), which allows domain owners to publish accessible information. DNSSEC adds an intrinsic chain of trust along the DNS hierarchy. Different from the certificate chain of trust, only the entity that controls a domain can sign its records. Using DANE TLSA records, the DNS can provide additional information to verify X.509 certificates and even establish trust without relying on a CA. In contrast, CAA records allow domain owners to restrict which CAs are allowed to issue certificates for their domains. Independent of the DNS, CT logs are append-only databases that collect published certificates and make misissued certificates visible to the public.
All three standards—DANE, CAA, and CT logs—concern X.509 service certificates but use different methods and publication channels, and are targeted at different audiences. In this work, we want to learn about the use (and misuse) of these concurrent approaches to harden the Web PKI ecosystem. In a large measurement study, we collect data from 4M domains based on the Tranco top list, spanning DNS records, X.509 certificates, and CT log entries. We compare and analyze the different records in use with respect to their existence, content and intended semantics, correctness, consistency, and coherence. Our major findings read: * Nearly 9% (357k) of all domains deploy at least one of DNSSEC, DANE, or CAA; overlapping deployment is much smaller. * CAA records are largely deployed correctly and consistently with certificate issuers (>90%). Nevertheless, CA strings are not precisely defined; several strings match more than a single CA, and one matches 21 CAs. * CAA records are mainly deployed without DNSSEC; even some TLSA records lack DNSSEC protection. * TLSA records are rare and often poorly maintained, some of which correspond to certificates that have long since expired or been revoked. The remainder of this paper is structured as follows. <ref> introduces the Web PKI along with the relevant technologies for securing certificate deployments and related work. Our measurement method, data collection, and processing are explained in <ref>. <ref> reports on the deployment of X.509 certificates and the DNS extension records and examines their correctness with respect to the actual certificates in use. <ref> combines information from DNS records, certificates, and logs to systematically explore consistency and coherence of the security information set. We conclude in <ref> with an outlook on future research. § BACKGROUND AND RELATED WORK The Web PKI ecosystem is the foundation for authentication on the Web, with X.509 certificates at its core <cit.>. Since its inception, various extensions have been introduced to enhance its functionality or address its shortcomings. In this section, we briefly introduce the Web PKI and discuss how DNS CAA, DANE, and CT logs address one of its biggest challenges: the unconstrained and global authority of certification authorities. Furthermore, we present prior work. §.§ Background A fundamental security challenge on the Internet is establishing trust in the cryptographic keys used for authentication. In a controlled environment, keys can be manually attributed to specific entities, but such an approach is less applicable in large-scale distributed systems such as the Internet. We now discuss how the Web PKI addresses this challenge, which shortcomings still exist, and which remedies have been proposed. Web PKI The Web PKI introduces Certification Authorities (CA) to bind public keys to domain names (among other attributes) to form a certificate. The authenticity of a certificate can be verified using its cryptographic signature. In public key cryptography, a signature is generated by a private key (which is kept secret) and can be validated by the corresponding public key (which is published openly). A relying party (RP), a piece of software that decides whether a certificate is valid or not, would then trust a certificate that is signed by or can be traced back to a trusted CA. On the Web, an RP is typically a browser, which maintains its own set of trusted CAs or Trust Anchors (TA) in a local trust store. Issuing a certificate correctly is the most important task of a CA.
Unfortunately, CAs are not restricted when issuing certificates: they can create certificates for any name. A compromised or malicious CA then poses security risks for all entities that rely on the security assurances provided by the Web PKI. Such incidents occur in practice, most notably the DigiNotar compromise <cit.>. To counter this threat, various solutions have been introduced, which we summarize in <ref> and discuss in the following. Certification Authority Authorization (CAA) A CAA record <cit.> gives a domain name owner the ability to restrict issuance of certificates by defining which CAs are allowed to issue certificates for its name. Such a constraint is stored in the DNS using dedicated CAA resource records to describe restrictions for wildcard or fully qualified domain names (FQDN) for the namespace under control of the name owner. A CAA record is composed of a flag, a tag, and a value. Issuance constraints are defined by issue and issuewild tags. The latter only constrains wildcard certificates, while issue records concern both wildcards and FQDNs but are superseded by an issuewild record. These two tags have a well-defined syntax. When the syntax is violated, a certificate should not be issued. To forbid issuance explicitly, an empty value (";") can be used. CAA records allow learning details about the CA itself. DigiCert, for example, accepts digicert.com as well as several other identifiers according to its Certification Practices Statement (CPS) <cit.>. CAA also enables CAs to report policy violations to the name owner based on information configured in the iodef tag. The value of this tag can be an email address or a URL. An example of a policy violation is a certification request submitted to a CA that does not satisfy the issuance constraints. DNS-Based Authentication of Named Entities DANE <cit.> allows the attestation that a public key is a valid key for a domain name. To enable this attestation, a domain name owner stores the public key in specific records of the name under attestation. For TLS-based services, DNS TLSA records signal RPs which (type of) certificate to expect from the server. This can be an end entity (EE) certificate (a leaf certificate) or a TA certificate (from a CA). A TLSA record can reference a certificate or its subject public key information (SPKI). The reference is either the full raw data (hex-formatted certificate) or its digest (SHA-256 hash). DANE makes use of the DNS Security Extensions (DNSSEC), which bring authentication and integrity assurance to the DNS <cit.> and mitigate common DNS attacks such as cache poisoning. DNS records signed under DNSSEC have limited validity and must be signed again after expiration. Without DNSSEC, DNS records are susceptible to spoofing and manipulation, thus defeating the purpose of DANE. Certificate Transparency Logs CT logs <cit.> were introduced to enable public monitoring and auditing of issued certificates. Although logging is not mandatory <cit.>, major browsers such as Chrome and Safari only accept certificates that are logged in at least two compliant CT logs. Due to their market shares, this forces CAs to comply or lose out on large shares of customers. Target Audiences Each technology follows a different core idea and is targeted at different parties, see <ref>. Whereas CAA records are meant for CAs before a certificate is issued, CAs feed CT logs after issuance to make certificates visible to domain owners and thus allow verification.
DANE is deployed by domain owners as well, but is aimed at certificate consumers, i.e., relying parties, signaling which (type of) certificate to expect or accept. §.§ Related Work Efforts to secure the Web PKI have been ongoing for more than 10 years. CAA records were standardized in 2013 <cit.> and updated in 2019 <cit.>. DANE became a standard in 2012 <cit.>, and CT logs were standardized in 2013 <cit.> and updated to version 2 in 2021 <cit.>. Since 2017, the CA/B Forum has required that CAs validate CAA records. To the best of our knowledge, this paper is the first study that comprehensively analyzes all these technologies together to better understand configurations in real deployments, including inconsistencies. CAA In 2018, Scheitle <cit.> presented the first analysis of the CAA ecosystem, actively measuring the behavior of selected CAs as well as auditing the ecosystem. They found that 3k of 95k domains in the Alexa Top 1M list deployed CAA records. At that time, most domain owners (89%) configured a single CAA string, mainly letsencrypt.org (64%), and did not allow issuing certificates for arbitrary names of a domain or subdomain (59%). CA strings that occurred infrequently included many invalid strings due to misspellings or owners using their own domain as the CA string—indicating a lack of automation and understanding of how to configure CAA. In a study focusing on nonfederal governments in the United States, Gebhard <cit.> revealed that CAA is the least deployed and grows slowly compared to the adoption of other protective DNS records (SPF, DMARC). Certificate Transparency In 2018, the same year Chrome made CT mandatory, Scheitle <cit.> measured the adoption of CT and found an exponential increase in CT log entries. They further identified two additional use cases of CT logs: CT logs can be used to identify phishing domains, and malicious actors monitor CT logs to search for new targets. The latter was examined in a broader study in 2021 <cit.>, which confirmed the continued use of CT logs for target discovery. In a longitudinal study of TLS certificates gathered from active scanning and CT logs, Farhan <cit.> found an improvement in the share of valid certificates and in key strengths, but also observed a centralization of the Web PKI: 80% of valid certificates are now signed by only 10 keys. DANE Three years after DANE was published, Zhu <cit.> found in 2015 that fewer than 1000 domains used DANE. By 2020, DANE had still not gained widespread adoption in browsers. While the email ecosystem saw a comparatively higher adoption rate, mismatches between TLSA records and certificates and incorrect DNSSEC were still frequent <cit.>. Lee <cit.> observed that 94% of SMTP servers still rely on certificates issued by CAs when they deploy DANE. All the prior studies provide an in-depth understanding of CAA, CT, or DANE deployments. They focus, however, on each protection mechanism separately. In this paper, we first provide an update on recent deployments and then close a gap by taking a comprehensive view on the deployments of all three technologies together. § METHOD AND DATA CORPUS We collect a data corpus of 4M domains to examine the Web PKI ecosystem[Source code, raw data, and our analysis are available under <https://doi.org/10.5281/zenodo.11081271>.] using the processing pipeline in <ref>. §.§ Building a Target List Query DNS (T.1) As input, we take the Tranco top list <cit.>, comprising over 4.1M domain names ranked by popularity.
We try to resolve each name to an A record and only keep names with valid records. ≈533k entries did not resolve to an IP address, while 1150 timed out, likely due to trimmed domain names or dynamic DNS changes. Check Ports (T.2) & Transport (T.3) Next, we check ports 80 and 443 (TCP) and establish an HTTP or HTTPS connection over open ports with our own tool. We follow HTTP and HTML redirects. 145k domains allow neither HTTP nor HTTPS connections. The remaining names resolve to an IP address and host a web server. Browser (T.4) We then feed the collected names into Puppeteer, a headless Chrome browser, and follow further redirects (including JavaScript). Our target list now contains 4M unique domain names (including intermediates). §.§ Collecting DNS Records & Certificates CAA, DANE, and DNSSEC from DNS (M.1) We collect the following DNS resource records (RR) for each domain: SOA, A, CAA, TLSA, as well as contactemail and contactphone TXT records, see Appendix <ref>. Additionally, we query CAA records for all parents of a given name by recursively removing the leftmost label up to the TLD. Queries set the DNSSEC OK (DO) flag to request DNSSEC records (if any) and have DNSSEC validated by the resolver. For DANE, we only query TLSA records that are associated with TCP services on port 443 (HTTPS). We use Google's recursive resolvers for all DNS queries due to their availability, reliability, and provision of a JSON API. X.509 Certificates via TLS (M.2) For each domain name, we establish a TLS connection with the first retrieved IP address in our list, setting the domain name as the Server Name Indication (SNI). At this point, we do not validate certificates, and we suppress TLS errors due to insecure cipher suites (OpenSSL security level 0). By disregarding failures in TLS deployments, we can collect all available certificates to better understand shortcomings. Leaf Certs from CT Logs (M.3) Domain owners might have applied for more certificates than the one deployed on their web server. For example, Cloudflare issues backup certificates for its customers to be able to swiftly replace keys in case of compromise <cit.>. We collect certificates appended to CT logs for the subset of domain names that have either CAA or TLSA records, using the open database provided by Sectigo under crt.sh. §.§ Preparing for the Analysis Parsing and Validating CAA (D.1) We parse CAA issue and issuewild values according to the Augmented Backus-Naur Form (ABNF) and consider malformed entries as empty (semantically equivalent to ";"). Malformed entries forbid the issuance of a certificate <cit.>. For records with iodef tags, we verify that the value is a URL with a correct scheme (mailto:, http:, or https:). For records with the issue and issuewild tags, we devise an algorithm that matches a set of given CAA RRs to certification authorities, visualized in Appendix <ref>. It uses the following classifications: * No CAA: No applicable CAA RR was found. * Implicit Match: No CAA RR constrains issuance. * Issuer Match: At least one CAA RR matches the cert issuer. * Issuer Mismatch: No CAA RR matches the issuer. * Malf. Mismatch: All CAA RRs are malformed. * Empty Mismatch: Only empty (";") CAA RRs. This algorithm relies on a mapping from CAA issuer domain names (values in issue and issuewild records) to CA certificate properties. Our mapping is based on the “List of CAA Identifiers” in the Common CA Database <cit.>.
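The core of this classification can be sketched as follows; parse_issue_value (outlined in a later sketch) stands in for the full ABNF parser, and the function names are ours, not the paper's released code.

    def classify_caa(caa_records, issuer_ids, wildcard=False):
        """caa_records: (tag, value) pairs of the closest relevant RR set;
        issuer_ids: CAA identifier strings accepted for the cert issuer."""
        if not caa_records:
            return "no_caa"
        relevant = [v for t, v in caa_records if t == "issue"]
        if wildcard:
            # issuewild supersedes issue for wildcard names
            wild = [v for t, v in caa_records if t == "issuewild"]
            relevant = wild or relevant
        if not relevant:
            return "implicit_match"      # no RR constrains this issuance
        parsed = [parse_issue_value(v) for v in relevant]
        if all(p is None for p in parsed):
            return "malformed_mismatch"  # malformed entries forbid issuance
        if all(p in (None, "") for p in parsed):
            return "empty_mismatch"      # only ";" (and malformed) entries
        if any(p in issuer_ids for p in parsed if p):
            return "issuer_match"
        return "issuer_mismatch"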
We enrich this list manually with identifiers documented in Certification Practice Statements (CPS) and with our own observations of undocumented string identifiers. The CPS is usually linked in the certificate. If this link is not valid, we manually identify the respective CA and browse its website to find the statement. In contrast to X.509 certificates or DNSSEC signatures, CAA RRs do not carry a validity timestamp and are in practice only validated by the CA at the time of issuance. This can cause a discrepancy between our observations at the time of measurement and what CAs observed when they issued a certificate, and can thus lead to misclassifications. To verify whether our measurement setup introduces such misclassifications, we calculate the difference between our probe time and the timestamp carried in certificates for domains with CAA records, see <ref>. Regardless of CAA matching state, 75% of our measurements occurred within three months of the issuance. Mismatches, however, occur much more frequently for certificates that are older. This indicates that our setup is not prone to misclassifying actually valid configurations as mismatches. Parsing and Validating DANE (D.1) We validate that TLSA records are RFC-conformant using an open-source library <cit.> but leave the verification of the DNSSEC integrity to the recursive resolver. Parsing and Validating X.509 Certificates (D.2) We parse X.509 certificates with ZCrypto <cit.> and check three things. First, the subject (alternative) names in the certificate should match the queried domain name, i.e., the name is included as a SAN or covered by a wildcard SAN. Second, the certificate chain should be valid; we select the Mozilla Common CA Database <cit.> as our trust store. Third, attached SCTs (if any) should be valid, i.e., each SCT corresponds to the respective certificate and is signed by a trustworthy log. We use Google's library <cit.> to verify signatures with keys extracted from the list of all complying <cit.> logs <cit.>. Here, we assume that log operators behave correctly and do not check the logs directly. § CONFIGURATION OF X.509 CERTIFICATES, DNS CAA, AND DANE RECORDS In this section, we focus on the deployment of X.509 certificates alongside CAA and DANE records within the DNS to better understand to what extent name owners care about the correct configuration of each security extension. In total, we observe 357k unique domains (8.85% of all names in the Tranco list) that deploy at least one of CAA, DANE, or DNSSEC. We visualize the overlap of support for different technologies per name as an UpSet plot <cit.> in <ref>. The bar plot on the left shows how many domains fall into each category, while the bar plot on top shows the size of exclusive intersections between the categories marked in the matrix below. It is clearly visible that domain name owners do not focus on comprehensive security support. §.§ X.509 Certificates Roughly 97% of collected certificates are valid (based on our trust store, see <ref>). Validation failures stem from age (47k), untrusted signees (≈15k), and malformed certificates (60k). All valid certificates were submitted to either 2 (72%), 3 (23%), or 4 (5%) CT logs. With 73%, most certificates were submitted to logs operated by Google; two thirds were submitted to logs operated by Cloudflare, and about half to logs operated by DigiCert. Two out of every three certificates were issued by one of four major CAs: Let's Encrypt with a total share of 52%, followed by Google Trust Services (16%), DigiCert (4.5%), and Sectigo (4.5%).
This does not include resellers, e.g., ZeroSSL using Sectigo infrastructure. In our observation, 7.2% of unique domains point to hosts that provide certificates with mismatching subjects, i.e., not matching the original domain name. These are in part default (self-signed) server certificates or service provider certificates for parked domains. §.§ CAA Deployment 5.2% of the 4M domains we scanned support CAA records. Most (4.55%) domains only specify constraints for CAs for fully qualified domain names or wildcard names (issue or issuewild records, respectively). Around 0.01% only provide information on reporting policy violations (iodef records). 0.63% of domains deploy both. Reporting Policy Violations—iodef Records Several configured iodef records prevent contact because of misconfiguration. About 3.79% of entries are invalid due to an invalid schema <cit.>, as neither mailto: nor http(s): is present. 17 entries with unknown schemas still contain a colon; eight of these use a non-existent schema, six are typos with missing or switched letters, and three contain characters that break the formatting. Among the remaining invalid entries, 924 are likely email addresses and 58 are HTTP endpoints. One record only contains a sequence of 27 digits, which is unlikely to be a phone number because it is too long. Among valid records, nearly all contain email addresses (>99%), 42 in total contain an HTTPS URL, and 16 domains have both. We contacted domain owners with invalid iodef records. Since many email addresses were associated with multiple domains, the 924 entries could be reduced to 504 distinct addresses. Nearly 16% of the mails could not be delivered, mostly because the mailboxes no longer exist. While most did not respond, we also got kind responses and learned that at least one case was the result of incorrect documentation by a hosting provider. Others mentioned that their DNS settings were applied automatically by the service provider. Granting Authorization to Issue (Wildcard) Domain Names—issue and issuewild Records Among ≈210k domains with issuance constraints, 97% have records with an issue tag and 57% have records with an issuewild tag. The overlap is 54%, which leaves 43% that only have issue and 3% that only have issuewild. 38 domains have malformed entries. Our dataset shows that, compared to six years ago, name owners are more liberal when it comes to allowing multiple CAs to issue certificates. In 2018, 89% of domains only allowed a single CA to issue certificates <cit.>. Now, in 2024, 28.39% of domain owners allow four different CAs to issue certificates for their names, 16.35% three CAs, 16.26% five CAs, and 10.04% two CAs. We even observe one domain with 59 CAA records and another one with 46 records. Another 59 domains, all sharing the same suffix, each have 17 CAA records. <ref> shows the most commonly used CAA strings as well as the number of unique CAs (by Subject Organization) that match each string in our dataset. Clearly, a domain owner grants permission to the operator of the CA infrastructure rather than to a specific CA. For example, ZeroSSL, an Austrian CA, accepts Sectigo's CAA string, because it uses Sectigo infrastructure to issue certificates under its own brand. At the same time, a CA might accept different CAA strings; ZeroSSL also accepts the string of another Sectigo brand. Records with Non-Standard Tags We find 353 CAA records with tags not defined in the RFC <cit.>. These tags can be categorized into three types: 252× unrecognized tags as defined in the CA/B Baseline Requirements <cit.>, 50× misspellings such as an extra letter, and 51× malformed formats such as extra quotes.
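The syntax checks described in this subsection amount to a scheme test for iodef values and an ABNF check for issue/issuewild values. The following simplified sketch illustrates both and provides the parse_issue_value helper referenced in the earlier classification sketch; the regular expression only approximates the RFC 8659 grammar.

    import re
    from urllib.parse import urlparse

    # simplified issuer-domain grammar of RFC 8659: an optional domain,
    # optionally followed by ";"-separated parameters
    ISSUE_RE = re.compile(
        r"^\s*([A-Za-z0-9]+(?:-[A-Za-z0-9]+)*"
        r"(?:\.[A-Za-z0-9]+(?:-[A-Za-z0-9]+)*)*)?\s*(;.*)?$")

    def iodef_valid(value: str) -> bool:
        """An iodef value must be a URL with a mailto:, http:, or https:
        scheme; bare e-mail addresses or typoed schemes are invalid."""
        return urlparse(value).scheme.lower() in ("mailto", "http", "https")

    def parse_issue_value(value: str):
        """Return the issuer domain, '' for an empty (';') value,
        or None for a malformed value."""
        m = ISSUE_RE.match(value)
        if m is None:
            return None
        return (m.group(1) or "").lower()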
§.§ DANE Deployments About 0.1% of all unique names that we queried deploy standard-compliant TLSA records. 98% of these 3678 names provide a valid certificate. The majority (87%) define only end-entity constraints (DANE-EE or PKIX-EE), 10% only trust anchor constraints, and the rest both. Although DNSSEC is a requirement for DANE, one out of every three TLSA record sets (1126 sets) is delivered over insecure DNS. <ref> summarizes our findings. It is noteworthy that only 70% of names match their TLSA records, as described below. The case of mismatching TLSA records is discussed in depth in <ref> using data from CT logs. Matching TLSA with Invalid Certificates About 1.5% (53 names) define matching DANE-TA or DANE-EE constraints with invalid certificates (see <ref>). More than half are expired and issued by Let's Encrypt; the rest are self-signed certificates or leaf certificates from miscellaneous (partly unaccredited) CAs. More than 71% of the TLSA record sets here are secured by DNSSEC, and 73% have no associated CAA records. Matching TLSA with Valid Certificates 2553 entries provide TLSA records that match the provided certificate, and the certificate is valid. However, not all are secured by DNSSEC, and only about 46% of entries provide CAA records. Only 190 names in this set have TLSA records that impose a limit on Web PKI certificates, while the rest are DANE-EE and DANE-TA constraints. § INFORMATION CONSISTENCY §.§ CAA Records vs X.509 Certificates Based on the results of our CAA matching algorithm, described in <ref>, we examine the consistency between CAA records and the issuers of the deployed TLS certificates. While the RFC <cit.> mentions the role of an auditor (“Certificate Evaluators”), CAA records provide information for CAs at the time of certificate issuance and are not required to stay consistent. CAA Validation Overview <ref> shows the result of our classification for two datasets based on CAA records with the issue tag. The upper bar contains all 4M domains in our dataset, while the lower bar only considers the subset with CAA records. 3.8M domains (94.8%) do not have CAA records. 200k domains (4.96%) have certificates consistent with their CAA records (“Issuer Match”). These are 95.34% of the nearly 210k domains with a CAA record. Another 0.17% (3.22%) is classified as “Implicit Issuer Match”. These domains mostly (93%) have CAA records with the issuewild tag but not with an issue tag while containing a FQDN in their certificate, i.e., the domain has the relevant resource records but only restricts issuance for wildcard certificates. 6% only deploy iodef records. <1% only have records with unknown tags. Nearly 91% of domains with CAA records deploy the relevant RR, the CAA record(s), themselves. For roughly 9%, the direct parent domain has the relevant RR. For about 200 domains (≤1%), the DNS hierarchy needs to be traversed further, up to 4 times, which occurred only once. Only 0.06% of all domains (1.07% of those with CAA records) have an “Issuer Mismatch”, i.e., the string in their CAA records does not match the issuer of their certificate. We found two domains that deployed only malformed CAA records. Relevant CAA Records For most domains (92.4%), a CAA record with the issue tag was the deciding record, i.e., the CAA record that fits the domain we requested from the web server. In 95% of these cases the issuer matches, 3.5% have an implicit match, and the remaining 1.5% mismatch. In cases where the CAA record with the issuewild tag was relevant (7.6%), the share of domains with a matching issuer is even higher at 99.4%.
0.6% mismatch, and we do not observe any implicit matches. Wildcard Certificates for Subdomains Next, we examine domains with CAA records and a wildcard name for their subdomains in the certificate. This is the case for 21% of domains with CAA records in our dataset (44k). For 99.5% of these domains, the issuer of the certificate is consistent with their CAA records. 216 (0.49%) have a mismatch, i.e., their certificates should not have been issued in this configuration. 17 (0.04%) forbid issuance of a wildcard certificate with an empty CA string (but have an active wildcard certificate). Partial CAA Matches An issue record covers a FQDN and a wildcard domain in the absence of an issuewild CAA record. However, an issuewild record without an issue record would only restrict wildcard issuance. If a certificate lists both a FQDN and a wildcard domain, CAs should check both issue and issuewild records. A partial match occurs when the issuer of a certificate with a FQDN and a wildcard matches either the issue or the issuewild record, but not both. For 60 domains, we can match the certificate issuer to the issue record but observe a mismatch for the wildcard in the same certificate. For the reverse—the certificate issuer matches the issuewild record, but not the issue record—we find 151 occurrences. In no case is issuance restricted by an empty (";") record. It seems unlikely that domains that change their CAA records between re-issuances change only parts of their records and do not choose to set an empty (";") record. As such, these are likely falsely issued certificates. §.§ CAA Records vs CT Logs In <ref> we match the issuer of the certificates from web servers against their respective CAA records. For all domain names with at least one CAA record, we query CT logs to fetch all other certificates bound to those names that are valid at the time of measurement. CT logs provide 1M certificates for about 191k unique domain names—reduced to 766k certificates after removing duplicates (cases where a CA logs both the precertificate and the leaf). To compare the results, we focus on domains with CAA records or mismatching server certificates. In 98% of cases the CAA matching state is consistent, i.e., the certificates we retrieved from the web server and the certificates we retrieved from the CT logs have the same consistency with the domain's CAA records. Among inconsistent cases, ≈55% of domains have servers that return a CAA-matching certificate while CT logs contain at least one other valid certificate that does not match the CAA constraints. 30% present a certificate that does not match the domain name (see <ref>) while we find a logged certificate that fulfills the CAA constraints. <ref> summarizes the findings. §.§ TLSA Records vs CT Logs DANE sets constraints on the certificates provided by a service endpoint. We take advantage of CT logs to find certificates that are not deployed but match either TLSA records or the domain name of DANE-enabled services. We explore the reasons for the existence of TLSA records that do not match the server certificate, the relationship between valid but not-deployed certificates and TLSA records, and the compatibility of CAA records with certificates referenced by DANE. TLSA Records that do not Match the Server Certificate We observe 1171 names with TLSA records that do not match the provided certificate. Thus, DANE-enabled clients would consider these certificates invalid. We query the CT database for certificates that match these records and find 1494 certificates (note that a single domain can have multiple TLSA records).
<ref> depicts how these are distributed with respect to their validity (<ref>) and relative expiration time. The majority of mismatching TLSA records reference certificates (leaf or CA) that are expired or have been removed from current trust stores—an indication that TLSA records have not been updated to reflect changes of the CA or leaf certificate. We also see TLSA records that reference existing, valid certificates which are not deployed. Undeployed Certs that Match TLSA Records For 945 domain names with TLSA records that fit their respective web server certificate, we find 1002 more matching certificates in CT logs that are not included in the certificate chain of the respective web servers. The majority (784) are leaf certificates, and the rest (218) are from intermediate or root CAs. All leaf certificates are either renewed or expired versions (same public key) of the certificate provided by the server. Certificates Matching DANE-secured Domains For all domain names with TLSA records, we query CT logs for matching certificates (subject or SAN) and keep only valid ones (relative to the measurement time). We only fetch leaf certificates and not the complete chain due to limitations in the database API. Thus, we can only authenticate TLSA end-entity (PKIX-EE or DANE-EE) records against logged certificates. A total of 10358 certificates match 3320 unique domain names. <ref> summarizes how many certificates can be authenticated by the TLSA records. Notable are cases where a TLSA record matches the server cert but not the CT cert, and vice versa (rows #3 to 6). The majority of server-match/CT-mismatches (row #3) are TLSA records that specify a certificate by its fingerprint (and not its SPKI), so that a renewed certificate does not match even if it reuses the same key. We also observe Cloudflare backup certificates (row #4), where multiple certificates are valid at the same time. §.§ TLSA Records vs CAA Records Most DANE-secured certificates collected from CT logs have the same CAA matching status as their corresponding server certificates. The only notable exception is a set of domain names with matching CAA records and TLSA records referencing a valid certificate that does not match CAA (19 in total). We also observe 19 cases where the server returned a certificate that does not match its domain name, but TLSA records reference certificates that match CAA constraints (9 already expired).
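The record-to-certificate comparison underlying these sections can be sketched with the Python cryptography library; the usage field (which decides whether a leaf or a chain certificate is tested) is handled by the caller and omitted here, and the function name is ours.

    import hashlib
    from cryptography import x509
    from cryptography.hazmat.primitives import serialization

    def tlsa_matches(cert: x509.Certificate, selector: int,
                     matching_type: int, assoc_data: bytes) -> bool:
        """Check one TLSA record per RFC 6698: selector 0 = full
        certificate, 1 = SPKI; matching type 0 = exact bytes,
        1 = SHA-256 digest, 2 = SHA-512 digest."""
        if selector == 0:
            data = cert.public_bytes(serialization.Encoding.DER)
        else:
            data = cert.public_key().public_bytes(
                serialization.Encoding.DER,
                serialization.PublicFormat.SubjectPublicKeyInfo)
        if matching_type == 0:
            return data == assoc_data
        digest = hashlib.sha256 if matching_type == 1 else hashlib.sha512
        return digest(data).digest() == assoc_data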
http://arxiv.org/abs/2407.02887v2
20240703080356
Explicitly Guided Information Interaction Network for Cross-modal Point Cloud Completion
[ "Hang Xu", "Chen Long", "Wenxiao Zhang", "Yuan Liu", "Zhen Cao", "Zhen Dong", "Bisheng Yang" ]
cs.CV
[ "cs.CV" ]
LISMARS, Wuhan University University of Science and Technology of China The University of Hong Kong {190107xh, chenlong107, zhen.cao, dongzhenwhu, bshyang}@whu.edu.cn wenxxiao.zhang@gmail.com, yuanly@connect.hku.hk Explicitly Guided Information Interaction Network for Cross-modal Point Cloud Completion Hang Xu1* Chen Long1* Wenxiao Zhang2† Yuan Liu3 Zhen Cao1 Zhen Dong1 Bisheng Yang1 July 8, 2024 ======================================================================================== § ABSTRACT ^* Equal contribution ^† Corresponding author In this paper, we explore a novel framework, EGIInet (Explicitly Guided Information Interaction Network), a model for the View-guided Point cloud Completion (ViPC) task, which aims to restore a complete point cloud from a partial one with the help of a single-view image. In comparison with previous methods that relied on the global semantics of input images, EGIInet efficiently combines the information from the two modalities by leveraging the geometric nature of the completion task. Specifically, we propose an explicitly guided information interaction strategy supported by modal alignment for point cloud completion. First, in contrast to previous methods which simply use 2D and 3D backbones to encode features respectively, we unify the encoding process to promote modal alignment. Second, we propose a novel explicitly guided information interaction strategy that helps the network identify critical information within images, thus achieving better guidance for completion. Extensive experiments demonstrate the effectiveness of our framework, and we achieve a new state of the art (+16% CD over XMFnet) on benchmark datasets despite using fewer parameters than previous methods. The pre-trained model and code are available at <https://github.com/WHU-USI3DV/EGIInet>. § INTRODUCTION The extensive application scenarios and significant research value of 3D computer vision have garnered increasing attention. Point clouds <cit.>, serving as a discrete representation of stereoscopic space, play a crucial role in various areas such as 3D reconstruction <cit.>, scene understanding <cit.>, and autonomous driving <cit.>. However, due to inherent constraints imposed by scanning sensors, reflections, and occlusions, the raw point clouds obtained from 3D scanners are often sparse, noisy, and occluded <cit.>. Hence, it is necessary to conduct point cloud completion on this raw data before applying it to downstream tasks like point cloud segmentation <cit.> and reconstruction <cit.>. To achieve this, point cloud completion emerges as a cost-effective and desirable way to restore the complete shape of the underlying surface. Traditional point cloud completion methods <cit.> aim to restore the complete shape from given incomplete point clouds. However, due to the inherent sparsity and unstructured nature of point clouds, learning the mapping from incomplete shapes to complete shapes solely based on point cloud data is extraordinarily challenging. As a more pragmatic option, <cit.> introduced the task of view-guided point cloud completion, wherein a partial point cloud is supplemented with an additional single-view image to facilitate a more coherent completion. Unfortunately, while the image provides rich texture and structure information to guide the completion procedure, the inputs from different modalities also bring significant challenges for the design and training of models.
To address this issue, ViPC <cit.> and CSDN <cit.> leverage ideas from single-view reconstruction methods for result-level fusion with partial point clouds. However, estimating 3D coordinates from images is an ill-posed problem <cit.>. Inspired by recent multi-modal fusion approaches, the most recent work XMFnet <cit.> proposed a fusion strategy based on latent space operations that incorporates a cross-attention mechanism to conduct information fusion among multi-modal features. Nevertheless, XMFnet <cit.> overlooks the inherent domain differences between the inputs, and the indiscriminate stacking of cross-attention layers lacks explicit guidance for the process of information fusion. As shown in Fig. <ref> (c), we visualize the attention map of image features within the cross-attention layers. It can be observed that XMFnet <cit.> tends to extract an abstract global feature from the images, aiming to estimate global semantics while neglecting the inherent geometric structural characteristics of point cloud completion tasks, thus leading to sub-optimal completion outcomes. In order to solve this problem, we rethink the fundamental nature of the view-guided point cloud completion task, and consider the most important question: how do we find the critical information contained in a corresponding image and fuse it into the completion process? To answer this question, we propose a novel completion framework named EGIInet, which identifies the critical information within images by explicitly guiding the information interaction, thus enhancing the effectiveness of single-view images in guiding the completion process. Specifically, we divide the completion process into two steps: modal alignment and information fusion. Fig. <ref> illustrates the whole pipeline of EGIInet. Firstly, diverging from existing methods that use different backbone networks to extract features, we devise a unified multi-modal feature extractor aimed at mitigating modal disparities and reducing the difficulty of subsequent information interaction. Tokenization techniques are adopted to map data of different modalities into a unified representation, and a shared encoder structure is used to unify the learning process, thus ensuring features from different modalities are compatible in latent space. Each token feature contains the local geometry, and the arrangement of the token sequence encapsulates the global structure. Through this unified encoder structure, the modal disparities between image features and point cloud features can be effectively reduced, thus promoting and simplifying the subsequent information interaction and feature fusion. Secondly, instead of fusing the two modality features directly for completion like previous methods, we expect the network to perceive the corresponding relation between the point cloud tokens and image tokens, which helps the network figure out which image tokens are helpful for point cloud completion. To achieve this, we propose a separated information interaction process with explicit structural guidance, which is achieved by an indirect interaction network supervised by a dual-designed loss function. Through this interaction process, the structural information in the image and the point cloud can be transferred to each other. Finally, we fuse these two "transferred" features with only one simple cross-attention layer for the final completion. Fig.
<ref> (b) visualizes the weight map of image features within our cross-attention layer, demonstrating that our network can find the structures that are important for completion by performing explicitly guided interaction between the token features of images and point clouds, thus achieving better completion. We conduct a comprehensive experimental evaluation of our approach on the benchmark dataset, where we achieve a 16% improvement over the SOTA method XMFnet <cit.> in terms of the CD metric, despite utilizing fewer parameters (9.03M < 9.57M). Our contribution can be summarized as follows: * We analyze the limitations of mainstream methods and propose a novel point cloud completion framework called EGIInet. It consists of a unified encoder and a novel token feature structure transfer loss that provide an explicitly guided information interaction, which helps achieve more reliable and better performance on the completion task. * We assess the performance of our method against other SOTA methods on several challenging simulated and real datasets. Extensive experiments show the effectiveness of our method; it achieves superior performance, reaching the state of the art. § RELATED WORK §.§ Point cloud completion The pioneering work of point cloud completion is PCN <cit.>, which proposed a coarse-to-fine approach that is widely referenced in subsequent studies <cit.>. Though there are differences in feature extraction and utilization, the basic idea of these studies is to reconstruct the skeleton of the complete shape first and then refine it. PoinTr <cit.> does not follow the coarse-to-fine manner but only generates the missing part of the partial point cloud. The idea of only generating the missing part also appears in <cit.>. In <cit.>, a generative adversarial network is used for point cloud completion. PMP-Net <cit.> and PMP-Net++ <cit.> treat point cloud completion as a kind of deformation and complete the point cloud by moving points to the right positions. P2C <cit.> introduces additional losses to supervise the latent expressions. Other unsupervised point cloud completion works <cit.> achieve higher robustness through special training strategies. Limited by their inputs, these models must learn information about the complete shape from the occluded shape alone, which may turn the task into a translation process from the occluded shape to the complete shape without meticulous analysis and design of the model. §.§ View-guided point cloud completion The purpose of the view-guided completion process is to introduce the missing geometric information from images to obtain better completion results. The pioneering work on view-guided point cloud completion is ViPC <cit.>, which designed a multi-modal architecture for image and point cloud and built the ShapeNet-ViPC dataset. The ViPC <cit.> model first uses a modality transformer to convert the image directly to a skeleton point cloud and concatenates it with the occluded point cloud, then refines the result with concatenated image features and point cloud features. Rather than concatenation of results, CSDN <cit.> leverages IPAdaIN <cit.> to let image features affect the process of deforming point cloud features into coarse point clouds, and then uses pixel-wise aligned local features to perform dual-refinement. XMFnet <cit.> is the most recent baseline network; it applies stacked cross-attention and self-attention layers to fuse the image feature with the point cloud feature and completes the point cloud in an end-to-end way using only the fused feature.
The recent work CDPNet <cit.> introduces a two-phase strategy which leverages global information from images to predict the rough shape. These works show that the multi-modal information fusion strategy plays a critical role in view-guided point cloud completion. § METHOD The task of view-guided point cloud completion is to use an input occluded point cloud P∈ℝ^N×3 and a single view image I∈ℝ^H× W× C to predict the complete shape. The purpose of our design is to achieve better information fusion by performing modal alignment and information interaction, thus leading to a better prediction of the complete shape. To this end, we study i) a unified encoder for multi-modal input; and ii) a novel token feature structure transfer loss that guides the modal alignment and information interaction. Fig. <ref> shows an overview of our proposed architecture EGIInet. §.§ Unified Encoder The main difficulty in designing feature fusion in multi-modal models is overcoming the domain gap between different modalities <cit.>. Our design reduces modal differences in both format and latent space by utilizing tokenization techniques and a shared structure. The proposed unified encoder consists of tokenizers and a shared feature extractor (SFE), which are detailed in the following. §.§.§ Tokenizers The gap between the image and the point cloud lies in the differences in data organization, so the first step of modal alignment is to give a unified way of describing the image and the point cloud. Tokenization is a common technique to convert data into a sequence of tokens, similar to the sentences of a natural language. Therefore, tokens are ideal for a uniform representation of images and point clouds, since both of them can be described in natural language. By unifying the description of the image and the point cloud, the subsequent alignment in latent space can be simplified and information fusion can be conducted in a more explicit way. The function of the tokenizers is to transfer point clouds P and images I into a unified format, that is, features F∈ℝ^N'× C' consisting of N' tokens T∈ℝ^1× C'. The point cloud feature F_pc and image feature F_img consist of the same number (N') of tokens. In order to explicitly guide the interaction of structural information, we need to first extract features that can represent both global structure and local geometry. For images, we can take advantage of the grid property of the image to represent the global structure using the organizational pattern among tokens. For point clouds, additional positional embeddings are added to the token features to reduce the impact of the irregular nature of point clouds. Therefore, the image and point cloud tokenizers are designed to divide the image and the point cloud into several parts for mapping. In this way, the global structural information is contained in the organizational pattern among tokens and each token represents a certain local geometry. As shown in Fig. <ref> (a), for tokenizing images, we use a convolution layer with large kernel size and stride to divide the image into several parts, each described by one token. In this way, we can learn a simple projection from image to tokens. As shown in Fig. <ref> (b), for tokenizing point clouds, we adopt t steps of FPS (farthest point sampling) <cit.> downsampling while aggregating features in each step using ball-query clustering; a sketch of both tokenizers is given below.
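A minimal PyTorch sketch of the two tokenizers follows. The patch size and token counts are taken from the implementation details in the appendix (a 224×224 image split into a 16×16 grid of tokens; the point cloud downsampled to 256 centers), but the layer details are illustrative rather than the authors' exact implementation; the ball-query grouping and per-cluster MLP are only indicated in comments.

```python
# Sketch of the image and point cloud tokenizers (shapes per appendix).
import torch
import torch.nn as nn

class ImageTokenizer(nn.Module):
    def __init__(self, dim=192, patch=14):      # 224 / 14 -> 16x16 grid
        super().__init__()
        self.proj = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)

    def forward(self, img):                     # img: (B, 3, 224, 224)
        return self.proj(img).flatten(2).transpose(1, 2)   # (B, 256, dim)

def fps(xyz, n=256):
    """Farthest point sampling; returns indices of n cluster centers."""
    B, N, _ = xyz.shape
    idx = torch.zeros(B, n, dtype=torch.long, device=xyz.device)
    dist = torch.full((B, N), 1e10, device=xyz.device)
    farthest = torch.randint(N, (B,), device=xyz.device)
    for i in range(n):
        idx[:, i] = farthest
        centre = xyz[torch.arange(B), farthest][:, None]    # (B, 1, 3)
        dist = torch.minimum(dist, ((xyz - centre) ** 2).sum(-1))
        farthest = dist.argmax(-1)
    return idx

# Each center is then turned into one token by ball-query grouping of its
# neighbours and a small shared MLP (PointNet-style max pooling), with a
# per-point positional embedding added to the resulting tokens.
```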
By aggregating the features of each cluster, each point in the downsampled point cloud P_center can be matched to one token, and each token describes the geometry of a specific area of the point cloud. In order to reduce the impact caused by the irregular nature of the point cloud, we extract the per-point features of the downsampled point cloud as the position embedding and add it to the tokens. §.§.§ Shared Feature Extractor (SFE) Another reason why multi-modal features are difficult to fuse is the difference between 2D backbones and 3D backbones. The features extracted by different network architectures differ in latent distribution and semantic structure, which makes it difficult to merge information directly. To solve this problem, we use a unified shared architecture to learn the token sequences from different modalities, so that the features of the image and the point cloud are mapped to adjacent regions of the latent space. Meanwhile, in order to make our model focus on the structural information that is essential for completion, we use self-attention-based ViT blocks <cit.> as the backbone of the SFE. The SFE takes token sequences F_pc,F_img∈ℝ^N'× C' as input, and outputs processed features F_pc^stc,F_img^stc∈ℝ^N'× C'. The process of the SFE is described by Formula <ref> and Formula <ref>. F_pc^stc=SFE(F_pc) F_img^stc=SFE(F_img) §.§ Information Fusion Intuitively, the most critical information contained in the image from a different viewpoint is that representing the missing part of the point cloud. However, the traditional latent fusion strategy cannot always focus on this critical information due to the lack of structural guidance, leading to a sub-optimal solution for point cloud completion. To effectively fuse the critical information into the inference of the missing part, we introduce a dual-designed loss function to explicitly guide an information interaction process that is separated from the encoding stage. In the following sections we introduce the Shared Feature Transfer Network (SFTnet), which provides the information interaction process, and the Feature Transfer Loss (FT-Loss), which explicitly guides the information interaction. §.§.§ Shared Feature Transfer Network (SFTnet) Separating the information interaction process from the encoding process gives the network specific learning objectives at specific stages, thereby reducing the overall optimization difficulty. Meanwhile, we observe that directly fusing the point cloud and image features in latent space like previous methods leads to an ambiguous feature interaction, as there is no explicit guidance to decide which part of the image contains critical information. Also, direct feature fusion changes the organizational pattern of the features, requiring extra learning of new latent expressions. Therefore, SFTnet is designed to provide an independent interaction process without direct contact between features. In this way, point cloud features and image features can interact with each other in an explicitly guided manner while maintaining their respective information organization patterns. This transfer process is supervised by the Feature Transfer Loss (FT-Loss), detailed in the next section. SFTnet consists of ViT-based blocks <cit.> similar to the SFE in implementation.
The reason for using a design similar to the SFE is that a unified shared design helps to maintain the information organization pattern of the features, so that the information interaction can be conducted without destroying the original structure of the features. The process of the Shared Feature Transfer Network is described by Formula <ref> and Formula <ref>. F_pc'=SFTnet(F_pc^stc) F_img'=SFTnet(F_img^stc) §.§.§ Feature Transfer Loss (FT-Loss) We implement explicit guidance for feature interaction in the form of loss supervision; in this way we can directly determine the information that the features need to exchange. We explicitly guide the information transfer between image features and point cloud features to identify the critical information within the image and to transfer that critical information from the image features to the point cloud features, so that the critical information in the image ultimately contributes to the point cloud completion. The proposed FT-Loss ℒ_transfer consists of the Informational Loss ℒ_infor and the Structural Loss ℒ_stc. The function of ℒ_infor is to exchange the critical structural information between the image features and the point cloud features, while the function of ℒ_stc is to maintain the information structure of the point cloud features. We leverage the Gram matrix of the features as the basis for the Informational Loss, since the Gram matrix provides a way to describe the structural criticality of features. The Gram matrix can be considered an uncentered covariance matrix of the features and is calculated through Formula <ref>. For each feature, each element of its Gram matrix corresponds to a channel-wise global structural criticality. G(F)=F^T F The purpose of the Informational Loss ℒ_infor is to make the features of one modality perceive the structural information present in the features of the other modality. To achieve this, we adopt a dual-designed loss to pass information between the transfer processes of images and point clouds. The Informational Loss ℒ_infor is defined in Formula <ref>. By supervising the similarity of the Gram matrices of the features, we can indirectly align the structural criticality of the features, thus achieving structural information transfer. Through the alignment of structural criticality, the missing relationship contained in the point cloud features can be transferred to the image features, and the structure of the missing part contained in the image features can be transferred to the point cloud features. ℒ_infor=[(G(F_img^stc)-G(F_pc'))^2+(G(F_pc^stc)-G(F_img'))^2]/(N× C) where F_img^stc, F_pc^stc are inputs of SFTnet and F_img', F_pc' are outputs of SFTnet. As mentioned before, 2D features from images have difficulty predicting 3D coordinates directly, so the fused feature used to reconstruct the missing part should be based on the 3D features from the point clouds. The purpose of the Structural Loss is to maintain the information structure of the 3D point cloud features through the transfer process. The Structural Loss ℒ_stc is defined in Formula <ref>. ℒ_stc=(F_pc^stc-F_pc')^2 By supervising these two losses simultaneously, we provide an explicitly guided information interaction method to transfer information between the two modalities, thus improving the fusion efficiency. The FT-Loss ℒ_transfer is defined in Formula <ref>. ℒ_transfer=ℒ_infor+ℒ_stc
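A compact sketch of the FT-Loss for token features of shape (B, N', C') is shown below. The Gram construction and the N×C normalisation follow the formulas above; the reductions over the batch (and the mean in ℒ_stc) are our assumptions, since the paper writes the per-sample expressions only.

```python
# Sketch of FT-Loss: f_img, f_pc are SFE outputs (inputs to SFTnet);
# f_img_t, f_pc_t are the corresponding SFTnet outputs.
import torch

def gram(f):                        # f: (B, N, C)
    return f.transpose(1, 2) @ f    # (B, C, C), uncentered covariance

def ft_loss(f_img, f_pc, f_img_t, f_pc_t):
    B, N, C = f_pc.shape
    l_infor = (((gram(f_img) - gram(f_pc_t)) ** 2).sum()
               + ((gram(f_pc) - gram(f_img_t)) ** 2).sum()) / (N * C * B)
    l_stc = ((f_pc - f_pc_t) ** 2).mean()    # keep 3D feature structure
    return l_infor + l_stc
```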
Chamfer Distance (CD) <cit.> is widely used in reconstruction tasks. The calculation of the l_1-CD is shown in Formula <ref>, where P={p∈ℝ^3} is the ground truth point cloud and P̂={p̂∈ℝ^3} is the completed point cloud output by our model. ℒ_l_1-CD(P,P̂)=1/2N∑_p∈Pmin_p̂∈P̂||p-p̂||_2+1/2N∑_p̂∈P̂min_p∈P||p-p̂||_2 Together with the Chamfer Distance, the total loss of our architecture is defined in Formula <ref>, where α is a hyperparameter. In our implementation, α is fixed to 0.01 since ℒ_transfer is large compared with ℒ_l_1-CD. ℒ_total=α×ℒ_transfer+ℒ_l_1-CD §.§.§ Feature Fusion To aggregate the features, we adopt a simple cross-attention layer to fuse the image feature and the point cloud feature, since these two features have already fully interacted during the previous process. §.§ Completion Decoder In order to decode the acquired fused features into the complete point cloud, we need a decoder that is flexible and has a certain learning ability. To this end, we use a decoder architecture similar to XMFnet <cit.> to accept similar fused features and learn their implicit expressions to predict 3D coordinates. § EXPERIMENTAL RESULTS In this section, we first introduce the dataset and evaluation metrics in section <ref>. Quantitative and qualitative comparisons are shown in section <ref>. Ablation studies are conducted in section <ref>. We also report the generalization ability of our method in section <ref>. Finally, the model complexity is shown in section <ref>. §.§ Experimental Settings §.§.§ Dataset In this work we train and test our model on the ShapeNet-ViPC dataset <cit.>. The dataset contains 38,328 objects from 13 categories. For each object, ViPC <cit.> generates 24 incomplete point clouds under 24 viewpoints. In this paper, we follow the same dataset settings as ViPC <cit.>. §.§.§ Evaluation metrics We use the l_2-CD <cit.> and F-score <cit.> to evaluate our model, the same as previous works do. The l_2-CD of point clouds X and Y is calculated as shown in Formula <ref>, where N_X and N_Y denote the number of points in X and Y. Since CDPNet <cit.> was not open sourced during our study, we did not compare with it. CD(X,Y)=1/N_X∑_x∈Xmin_y∈Y||x-y||_2^2+1/N_Y∑_y∈Ymin_x∈X||x-y||_2^2 The F-score <cit.> is defined in Formula <ref>, where the threshold d equals 0.001, the same as in previous works. F(X,Y)=2X(d)Y(d)/(X(d)+Y(d)), where X(d) and Y(d) denote the fraction of points whose squared distance to the other point cloud is less than the threshold d. The squared distances are calculated in the same way as in the CD.
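For reference, a brute-force implementation of these metrics (and the ℓ1 training variant of the CD) might look as follows; chunked or KD-tree versions would be preferred for large point clouds.

```python
# Sketch of the CD and F-score metrics for point clouds x, y of shape
# (N, 3); torch.cdist gives the full pairwise distance matrix.
import torch

def chamfer_l1(x, y):
    d = torch.cdist(x, y)
    return 0.5 * (d.min(1).values.mean() + d.min(0).values.mean())

def chamfer_l2(x, y):
    d = torch.cdist(x, y) ** 2
    return d.min(1).values.mean() + d.min(0).values.mean()

def f_score(x, y, d=0.001):
    dist = torch.cdist(x, y) ** 2
    prec = (dist.min(1).values < d).float().mean()  # prediction -> GT
    rec = (dist.min(0).values < d).float().mean()   # GT -> prediction
    return 2 * prec * rec / (prec + rec + 1e-8)
```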
§.§ Results on ShapeNet-ViPC In this section we report our quantitative comparison with the existing works <cit.> that use the same training data on the ShapeNet-ViPC dataset <cit.>, and with other SOTA point cloud completion methods <cit.> taken from <cit.>; results are reported in Table <ref> for the CD and Table <ref> for the F-score. We conduct a qualitative comparison with CSDN <cit.> and XMFnet <cit.> in Fig. <ref>. It is shown that, while keeping the decoding structure the same, our model is able to improve the CD by 16% compared to XMFnet <cit.>. This means that our design of better feature extraction and fusion can lead to better completion results. Specifically, we achieve a large improvement on lamps due to the accurately extracted information and the well-designed information interaction. §.§ Ablation Studies We first report ablations on our model components; the objects of ablation are the shared structure, the FT-Loss, and SFTnet. Then we analyze the efficiency of the image information. §.§.§ Ablation on FT-Loss To verify the effectiveness of the FT-Loss, we do not calculate and supervise this loss during training. Results are presented in Table <ref>. It can be seen that removing the FT-Loss degrades the model's performance, and the missing relationship cannot be transferred to the image feature without supervising the FT-Loss, as shown in Fig. <ref>. §.§.§ Ablation on shared structure To verify the effectiveness of the shared structure, we replicate the shared ViT <cit.> blocks, including the SFE and SFTnet, and pass the image and point cloud tokens through separate networks. Results are presented in Table <ref>. The CD metric decreased significantly without the shared structure, indicating that the modal alignment achieved by the shared structure promotes subsequent interaction and fusion. Due to the limited parameters, the full model performs slightly worse than the model without shared encoders on some complex classes (fewer valid pixels). However, for most of the common categories, the shared structure can effectively align the different modalities with fewer parameters. §.§.§ Ablation on SFTnet In order to verify the necessity of separating the information interaction process from the encoding process, we designed an ablation experiment on SFTnet. In this experiment, we remove SFTnet and calculate only the direct information loss of the structural features, shown as Formula <ref>. As shown in Table <ref>, it is difficult to complete the feature extraction and information interaction by relying only on the SFE and the simplified loss ℒ_transfer', especially on some complex categories (lamp, watercraft, etc.). The reason behind this is that SFTnet provides a more effective interaction for completion. ℒ_transfer'=(G(F_img^stc)-G(F_pc^stc))^2/(N× C) §.§.§ Ablation of Input Modality To verify the effectiveness of the input images, we use only the point cloud as input, thus verifying that our design makes the information provided by the image beneficial. Results are presented in Table <ref>. §.§.§ Efficiency of Image Information The projection of the cross-attention weight map plotted in Fig. <ref> shows that the image features of different views are able to focus on the geometry related to the missing part of the point cloud, thus proving the effectiveness of the information interaction. §.§ Generalization Ability Evaluation §.§.§ Results on Unknown categories of ShapeNet-ViPC To verify the utility of our method, we conducted a zero-shot test on unknown categories of the ShapeNet-ViPC dataset. We used 8 known categories, including airplane, cabinet, car, chair, lamp, sofa, table, and watercraft, in the training stage, and tested on 4 unknown categories, including bench, monitor, speaker, and cellphone. We compare the CD and F-score performance on the 4 categories with other methods <cit.> taken from <cit.>, and we train XMFnet <cit.> on the 8 categories as well. Quantitative comparisons are shown in Table <ref> and qualitative comparisons are shown in Fig. <ref>. §.§.§ Results on Real Scenes We also report qualitative results on KITTI <cit.> cars extracted by <cit.>. Qualitative results are shown in Fig. <ref>. Though there is a synthetic-to-real gap, our method is still able to give a reasonable prediction. §.§ Comparisons of Model Sizes We compare the size of our model with existing view-guided completion models <cit.> in Table <ref>. The comparison results show that our model achieves the best results (+16% CD) with the smallest number of parameters.
§ CONCLUSIONS In this paper, we propose an explicitly guided information interaction strategy supported by modal alignment for view-guided point cloud completion. This explicit guidance encourages the network to learn structural relationships for completion, thus leading to better utilization of the information provided by the image. Our proposed method achieves new SOTA results on the ShapeNet-ViPC dataset <cit.>. In future work, we will continue to study this information fusion approach, which has the potential to extend to other data modalities and tasks and to become a new multi-modal learning paradigm. § ACKNOWLEDGMENT This work was supported by the National Key Research and Development Program of China under Grant 2022YFB3904102. § A. IMPLEMENTATION DETAILS The input partial point cloud contains 2048 points, and the input single-view image is a 224×224×3 RGB image. The output complete point cloud contains 2048 points and is evaluated against a ground truth point cloud of 2048 points. The train-test split follows the list provided by ViPC <cit.>. For each object, ShapeNet-ViPC <cit.> provides 24 different views corresponding to different missing situations. During training and testing, pairs of point clouds and images are randomly fed to the model. This means that a partial point cloud may be assisted by different images in different training epochs. We use the Adam optimizer with an initial learning rate of 0.001 to train our model. We train our model for 160 epochs. The learning rate is decreased by 70% every 16 epochs. During tokenization, the point cloud is downsampled to 256 points and the image is divided by a 16×16 grid. Therefore the numbers of point cloud tokens and image tokens are both 256. Each token in the token feature has 192 channels. The token features keep the size of 256×192 when passing through the SFE and SFTnet. The fused feature is also of size 256×192, and we use this fused feature to infer the missing part of the point cloud. § B. COMPARISON WITH SINGLE VIEW RECONSTRUCTION METHOD Since the emergence of ViPC <cit.> in 2021, single-view reconstruction has made great breakthroughs. To address doubts about why we still need to study view-guided point cloud completion instead of directly reconstructing shapes from images, we made a qualitative comparison with the latest single-view reconstruction model TripoSR <cit.>, as fairly as possible. TripoSR is a 3D reconstruction model based on LRM (Large Reconstruction Model) <cit.> that can produce a 3D mesh from a single-view image. Due to the need for a strong prior and a smooth distribution of the latent space for robust single-view reconstruction, it can only restore a roughly reasonable shape and struggles to achieve fine surface reconstruction. In addition, single-view reconstruction methods rely heavily on the quality of the image. On the other hand, our method can achieve fine reconstruction of the missing parts based on the point cloud prior, supplemented by finely extracted image structure information. We uniformly sample 2048 points on the mesh surface and normalize the sampled point cloud into a unit sphere. Qualitative comparisons are shown in Fig. <ref>. It can be seen that TripoSR <cit.> tends to generate a shape with the outline of the image but lacking accurate depth for some categories of the ShapeNet-ViPC dataset <cit.>. This indicates that it is unreliable to predict the complete shape based on the image alone.
The point cloud completion task can use the image to infer the complete shape more reliably on the basis of the partial point cloud. Furthermore, relying on images alone to infer possible shapes is much more costly than point cloud completion. View-guided point cloud completion <cit.> is a relatively cost-effective and reliable option. § C. ADDITIONAL QUALITATIVE RESULTS We provide additional completion results of our model in Fig. <ref>, Fig. <ref>, Fig. <ref> and Fig. <ref>. All the point clouds in the figures are artificially rotated to an axis-aligned angle for easy observation. It can be seen that our method can give reasonable predictions in a variety of missing cases. Our model performs well on categories with large individual differences, such as lamps, chairs, etc. This is due to a deeper understanding of structural information, rather than learning the average shape distribution of the dataset.
http://arxiv.org/abs/2407.01661v1
20240701180000
The infall region as a complementary probe to cluster abundance
[ "Charlie T. Mpetha", "James E. Taylor", "Yuba Amoura", "Roan Haggar" ]
astro-ph.CO
[ "astro-ph.CO" ]
The infall region as a complementary probe to cluster abundance Charlie T. Mpetha, James E. Taylor, Yuba Amoura, Roan Haggar ====================================================================== § ABSTRACT Galaxy cluster abundance measurements provide a classic test of cosmology. They are most sensitive to the evolved amplitude of fluctuations, usually expressed as S_8 = σ_8√(Ω_m/0.3). Thus, abundance constraints exhibit a strong degeneracy between σ_8 and Ω_ m, as do other similar low-redshift tests such as cosmic shear. The mass distribution in the infall region around galaxy clusters, where material is being accreted from the surrounding field, also exhibits a cosmological dependence, but in this case it is nearly orthogonal to the S_8 direction in the Ω_m–σ_8 plane, making it highly complementary to halo abundance or cosmic shear studies. We explore how weak lensing measurements of the infall region might be used to complement abundance studies, considering three different tests. The splashback radius is a prominent feature of the infall region; we show that detection of this feature in lensing data from the Euclid survey could independently constrain Ω_ m and σ_8 to ± 0.05. Another feature, the depletion radius where the bias reaches a minimum, also shows cosmological dependence, though it is challenging to observe in practice. The strongest constraints come from direct measurements of the shear profile in the infall region at 2–4 r_200 c. Combining the latter with abundance constraints such as those reported from SRG/eROSITA should reduce the area of the error contours by an estimated factor of 1.2 using a sample of clusters observed by the UNIONS survey, or a factor of 3 using clusters observed by the Euclid Wide survey over a broader range of redshift. gravitational lensing: weak – methods: observational – galaxies: clusters: general – galaxies: groups: general – galaxies: haloes – cosmological parameters § INTRODUCTION Galaxy clusters provide an excellent test-bed for astrophysics and cosmology. Their overall abundance depends sensitively on the evolved amplitude of fluctuations, often expressed as S_8 = σ_8√(Ω_m/0.3). Because S_8 combines the effects of early power, characterised by the linearly extrapolated amplitude of fluctuations on 8 h^-1 Mpc scales, σ_8, and late-time growth, characterised by the matter density parameter Ω_m, abundance measurements have a strong degeneracy in the Ω_m–σ_8 plane <cit.>, as do other, similar low-redshift probes of cosmological structure, such as cosmic shear <cit.>. The assembly rate of clusters and their underlying dark matter halos has a different dependence on Ω_ m and σ_8 <cit.>, and could provide an alternative cosmological test if it could be quantified through observations of cluster structure, an idea first proposed in the early days of cluster cosmology <cit.>. A number of previous studies have investigated the link between cluster structure, assembly history, and cosmology. The density profile of a dark matter halo within its virialised region has been investigated as a cosmological probe, via measurements of concentration <cit.> or halo sparsity <cit.>. Predictions in the inner parts of observed clusters remain uncertain, however, because of poorly constrained baryonic effects <cit.>. Furthermore, the constraints on Ω_ m and σ_8 from these methods have a similar degeneracy to abundance measurements <cit.>, and so combining either of them with abundance constraints may not lead to significant improvement.
In contrast, the infall region outside the virial radius, where matter is currently being accreted onto the halo, can reveal information about its recent growth history <cit.>. The current growth rate has in turn been shown to be a useful cosmological probe <cit.>, with a degeneracy direction almost orthogonal to S_8 over a range of mass and redshift. Its independent measurement, in the same data used for abundance and cosmic shear studies, could break the Ω_ m–σ_8 degeneracy, significantly improving the precision of abundance constraints. The infall region includes a number of distinct features. The splashback radius r_ sp, the apocenter of the first orbit of material after it has been accreted into a dark matter halo <cit.>, is the most widely studied feature and is sometimes proposed as a more physical boundary than the virial radius, as it mitigates the impact of pseudo-evolution <cit.>. The so-called `depletion zone' <cit.> is the region around a halo from which material has been accreted, and hence is depleted with respect to the expected background. The `depletion radius', the radius where the bias (defined below) reaches a minimum, is another possible definition for the halo boundary <cit.>. Another feature of the depletion zone, the `inner depletion radius', has been shown to match up well with the `optimal halo exclusion radius' defined in <cit.>. Both the splashback radius and the depletion radius are sensitive to the mass accretion history of a halo <cit.>. There is then an interplay between the formation time, the recent accretion history, and these features of the density profile, from which we may be able to derive cosmological information. The splashback feature has been detected significantly using galaxy number density profiles <cit.> and weak lensing <cit.>, which has also been used to observe the depletion zone <cit.>. Measuring the splashback radius through weak lensing is advantageous as it does not require knowledge of the galaxy bias, it allows the stacking of many dark matter halos, and it provides mass calibration for the observed sample. As this feature can be measured with good precision, using it as a cosmological test seems feasible. <cit.> made an initial study of the variation of the splashback radius with cosmology, testing WMAP and Planck cosmologies, and models with scale-free power spectra. <cit.>, henceforth H24, recently made a more detailed study of the cosmological dependence, showing that on cluster scales the variation of r_ sp is close to orthogonal to the S_8 direction in the Ω_ m–σ_8 plane. There has also been work exploring how measurements of the splashback radius can be used to constrain alternative gravity models <cit.>. To our knowledge there has been no systematic investigation of how the depletion radius, or the general form of the density profile in the infall zone, depends on cosmology. In this work, we use dark-matter-only simulations to determine how features in the infall region of cluster-mass halos vary with cosmology. We then investigate how accurately these features could be measured in present and forthcoming weak lensing surveys. Our goal is to determine the best feature(s) to measure, and also the best cluster mass and redshift range to use for cosmological tests. To illustrate the complementarity with abundance studies, we will focus on the recent constraints derived from clusters detected in the SRG/eROSITA All-Sky Survey <cit.>.
Beyond these results, there are exciting prospects for further improvements in abundance measurements from several present and forthcoming surveys, including the UNIONS <cit.>, Euclid-Wide <cit.>, and surveys by the Nancy Grace Roman Space Telescope <cit.> and the Vera C. Rubin Observatory <cit.>. The outline of the paper is as follows. In Section <ref>, we describe the cosmological simulations used in this work. Section <ref> discusses the mean density profile of the infall region, its main features, and their cosmological dependence. We also describe our profile fitting process in detail. Section <ref> explains how the 3D profiles are projected to produce the 2D mass profiles measured by lensing, and how these projected profiles are then fitted. Section <ref> presents forecasts for the constraining power of the infall region, and in Section <ref> we consider some anticipated challenges in applying this test to observations. We conclude in Section <ref>. § SIMULATIONS We use a suite of 21 dark-matter-only simulations described in <cit.> (see also Amoura et al. 2024, in preparation). The simulated cosmologies range over the combinations of Ω_ m and σ_8 shown in Table <ref>. Other cosmological parameters are fixed, with a Hubble parameter H_0 = 100h kms^-1Mpc^-1 = 70kms^-1Mpc^-1, a baryon density Ω_ b=0.0482, and a spectral tilt n_s=0.965. Each simulation was performed using Gadget 4 <cit.> with 1024^3 particles in a 500Mpc/h box, giving a particle mass of Ω_ m× (3.23×10^10 M_⊙), and a softening length of 2.5kpc. Dark matter halos were identified using the Amiga Halo Finder (AHF) <cit.>, down to a lower mass limit of ∼10^12 M_⊙ h^-1, and their evolution was traced from z∼30 to z=0 in 119 snapshots (except for two initial runs, denoted by an o in Table <ref>, which only had output saved at z=0). In what follows, comoving distances are written cMpch^-1. Halo masses M_200 c and radii r_200 c are defined as the mass enclosed within, and the radius of, a spherical region with mean density 200 times the critical density ρ_c, respectively. As discussed in H24, for cluster-mass halos, baryonic effects are more or less negligible in the infall region, relative to the cosmological signal we consider. This conclusion is also supported by other works comparing dark-matter-only simulations with hydrodynamical simulations including baryons <cit.>. Thus, we have ignored baryonic effects in our analysis. These would be more significant on smaller mass scales, as discussed below, limiting the accuracy of the method on these scales. § FEATURES OF THE INFALL REGION §.§ Calculating the mean density profile Given a halo catalogue for each simulation and output redshift, we determine the mean density profile in a given mass range as follows. The density profile of each individual halo is found by counting the number of particles in 400 logarithmically spaced shells from r=0.01Mpc/h to r=20Mpch^-1, and dividing their total mass by the volume of the shell. Shells are centred on the most bound particle in the halo; the potential impact of mis-centering is discussed in Section <ref>. To overcome the shot noise present in individual profiles, and to match the quantities usually measured in lensing observations, we average them, calculating the mean density profile for all halos in a given mass range. Only halos with more than 200 particles are included, and any halo flagged as a subhalo by AHF is removed from the sample. Errors on the mean profile are computed using 500 bootstrap realisations.
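A sketch of this measurement, assuming the individual halo profiles have already been binned into the 400 shells, is given below; the array names are ours.

```python
# Sketch of the stacked density profile with bootstrap errors.
import numpy as np

edges = np.logspace(np.log10(0.01), np.log10(20.0), 401)     # Mpc/h
shell_vol = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)

def stacked_profile(profiles, n_boot=500, rng=None):
    """profiles: (N_halo, 400) array of individual density profiles."""
    rng = rng or np.random.default_rng()
    mean = profiles.mean(axis=0)
    boots = np.empty((n_boot, profiles.shape[1]))
    for i in range(n_boot):
        pick = rng.integers(0, len(profiles), len(profiles))
        boots[i] = profiles[pick].mean(axis=0)   # resample halos
    return mean, boots.std(axis=0)               # mean and 1-sigma error
```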
§.§ Fitting the density profile To fit the density profile in the infall region, we adopt the model of <cit.>. This model is designed to fit both the inner and outer regions of a dark matter halo in a way that reflects the dynamics of the material. It includes orbiting and infalling density terms: ρ(r) = ρ_ orbit(r) + ρ_ infall(r) ρ_ orbit(r) = ρ_s exp(-2/α[ (r/r_s)^α-1]-1/β[(r/r_t)^β- (r_s/r_t)^β]) ρ_ infall(r) = A (1+ δ_1/√((δ_1/δ_ max)^2+(r/r_ pivot)^2s)) The model has 9 free parameters, ρ_s, α, β, r_s, r_t, A, δ_1, δ_ max, and s, with r_ pivot being a fixed pivot scale that is set to r_ pivot(z) = (3 M/4π× 200 ρ_ m,P(z))^1/3 where ρ_ m,P(z) = (1+z)^3ρ_ m,P(z=0) = Ω_ m^ Planckρ_c,0 (1+z)^3 for a Planck 2018 cosmology <cit.>, and Ω_ m^ Planck=0.316. When fitting stacked halos with a range of masses, M is taken to be the minimum mass of this range. When fitting comoving density profiles, the (1+z)^3 term is dropped. In this model, the inner part of the profile is described by a modified Einasto profile <cit.> with an additional exponentially decaying truncation term, controlled by the truncation radius r_t. For the outer profile, δ_1 and δ_ max are the overdensity at r_ pivot and in the centre respectively, while s is the power-law slope of the infalling term. Note that the matter density ρ_ m in the original expression (Eq. (15) in <cit.>) has been replaced with an amplitude parameter A, as the value of ρ_ m is not known a priori. Fits are performed over the range r=0.06-20Mpch^-1 when fitting the intrinsic 3D profiles, and r=0.1-10 r_ pivot when fitting mock observed 2D profiles, to avoid the impact of mis-centering (see Section <ref>). The python package lmfit <cit.> is used as a convenience wrapper incorporating the least-squares fitting function in scipy <cit.> with the Trust Region Reflective method. The prior ranges for the fitted values are given in Table 1 of <cit.> and we follow the fitting procedure described in Appendix A1 of that work, which is designed to avoid becoming trapped in a local minimum of χ^2. Examples of mean 3D density profiles for z=0 halos, and the corresponding fits, can be seen in the left panel of Fig. <ref>. §.§ Location of the splashback radius Given our fits to the mean density profiles, we can proceed to test the cosmological dependence of the splashback radius and other features. The splashback radius is identified by finding the minimum of the gradient of the logarithmic density profile, outlined in the middle panel of Fig. <ref>. It occurs outside the virial radius r_200 c, and is dependent on the halo accretion rate <cit.>, which in turn is sensitive to cosmology <cit.>. H24 demonstrate that for cluster-mass halos, the splashback radius varies in a direction nearly orthogonal to S_8 in the Ω_ m–σ_8 plane. The relative simplicity of identifying r_ sp makes it a promising avenue to test cosmology in the infall region. The splashback feature is also easiest to observe in clusters, where the drop in density is sharper compared to lower mass halos. §.§ Location of the depletion radius The depletion radius is a more complicated feature of the infall region. The bias profile for a halo is defined by b(r) = ξ_ hm(r)/ξ_ mm(r) = ⟨δ(r)⟩/ξ_ mm(r) , and is related to the density profile through ρ(r) = ρ_ m[b(r) ξ_ mm(r)+1] . Clearly, knowledge of the matter-matter correlation function is required to calculate the bias profile from the density profile, and this function is itself cosmology-dependent; a sketch of how both features are extracted from a fitted profile is given below.
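To make the two feature definitions concrete, the following sketch evaluates the profile model above, locates r_sp from the minimum logarithmic slope, and locates the depletion radius from the bias minimum, using colossus for ξ_mm. The colossus calls are real, but the unit bookkeeping (colossus returns densities in M_⊙ h²/kpc³) is only indicated in comments, and in practice it is the smooth fitted model, not the raw binned profile, that is differentiated.

```python
# Sketch of the profile model and the two characteristic radii.
import numpy as np
from colossus.cosmology import cosmology

cosmo = cosmology.setCosmology('planck18')   # fiducial, as in the text

def rho_model(r, rho_s, alpha, beta, r_s, r_t, A, d1, dmax, s, r_pivot):
    orbit = rho_s * np.exp(-2.0 / alpha * ((r / r_s)**alpha - 1.0)
                           - 1.0 / beta * ((r / r_t)**beta
                                           - (r_s / r_t)**beta))
    infall = A * (1.0 + d1 / np.sqrt((d1 / dmax)**2
                                     + (r / r_pivot)**(2.0 * s)))
    return orbit + infall

def splashback_radius(r, rho):
    slope = np.gradient(np.log(rho), np.log(r))   # dln(rho)/dln(r)
    return r[np.argmin(slope)]

def depletion_radius(r, rho, z=0.0):
    xi = cosmo.correlationFunction(r, z)          # matter-matter xi(r)
    rho_m = cosmo.rho_m(z) * 1e9                  # Msun h^2/kpc^3 -> Mpc^-3
    bias = (rho / rho_m - 1.0) / xi               # b(r), as defined above
    return r[np.argmin(bias)]
```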
The impact of uncertainty in the matter-matter correlation function on the calculation of the bias profile is discussed further in Section <ref>. The characteristic depletion radius is given by the minimum of the bias profile, seen in the right-hand panel of Fig. <ref>. For the cluster-mass halos shown in the figure, the bias minimum is more of a kink than a deep trough, as discussed in <cit.>. This flattening of the bias profile at high masses means the depletion radius is easier to measure in lower mass halos, contrary to the splashback radius. To avoid this problem, the minimum value of the bias profile itself could be used as the cosmological test, instead of the radius at which it reaches a minimum. However, the minimum bias is highly degenerate with the unknown value of the matter density and the poorly constrained shape of the matter-matter correlation function. For these reasons, we focus on the depletion radius, though a more sophisticated analysis that leaves ρ_ m and ξ_ mm free in the fitting could consider the minimum bias value. We also note that another feature of the depletion zone is the inner depletion radius, r_ id, which is defined as the radius of maximum mass inflow rate <cit.>. To determine this radius observationally, peculiar velocities of galaxies would be required. Since we are considering an analysis based on weak lensing shear profiles alone, we do not consider this scale further. §.§ Cosmological dependence Fig. <ref> demonstrates how the splashback and depletion radii vary with both mass and cosmology. Halos are binned by mass, with a bin width of 0.5dex, and the mean mass of the bin is plotted on the x-axis. The depletion radius is given by solid green lines, and the splashback radius by dashed blue lines. In both cases, darker colours correspond to larger values of S_8. The simulation results are shown at z=0, though in comoving units both radii are fairly constant with redshift, varying by less than 20% between z=0 and z=1 (see Appendix <ref>). Both radii vary considerably with cosmology, but the cosmological dependence also changes with halo mass. At low mass, both the splashback and depletion radii vary smoothly with S_8, larger S_8 values corresponding to smaller radii. For larger mass halos, however, the relationship between these radii and S_8 is not as clear. Fig. 2 of H24 shows that in fact, in the cluster mass range, the splashback radius varies nearly orthogonally to S_8. This mass dependence can be traced back to the halo formation time. According to hierarchical structure formation, galaxy clusters are the most recent bound structures to form in the universe, and their mean formation time varies significantly with the background cosmology. For low-Ω_ m and high-σ_8 cosmologies, galaxy clusters typically form at earlier times <cit.>. Material currently reaching the apocentre of its orbit at the splashback radius passed through the centre of the cluster potential some time in the past, after a delay we refer to as the "infall time", equal to 1/2 of a radial orbital period <cit.>. Earlier-forming clusters will have had a larger mass in place one infall time ago, compared to later-forming clusters. A larger gravitational potential one infall time ago produces a larger splashback radius in these systems. For the depletion radius, earlier-forming clusters have had more time to draw in material from their environment, pushing the minimum of the bias profile out to larger radii.
These results hint at how profile measurements in the infall region can provide constraints on cosmological parameters. In Fig. <ref>, we also plot the examples to date of directly measured splashback radii. Masses have been converted to M_200 c using the measured concentration when it is provided, or the concentration-mass relation of <cit.> when a measured value is not given, and radii have been converted to comoving units using the mean halo redshift. Current observations appear consistent with our simulations, except perhaps at the smallest mass scale. § SIMULATED LENSING PROFILES §.§ Projected mass density profiles In weak lensing measurements of mass distributions, a key observable is the reduced shear of background `source' galaxies, g, g = γ/(1-κ) , where γ is the shear and κ is the convergence. It is often assumed in the weak lensing limit that g ≈γ <cit.>, although γ can also be inferred from g through an iterative reconstruction scheme <cit.>. The tangential component of the shear, γ_t(θ), is related to the excess surface mass density ΔΣ through ΔΣ(r) = γ_t(θ) Σ_ crit , where angular separations θ on the sky are converted to radial separations using the angular diameter distance to the lens redshift d_A(z), r = d_Aθ. The critical surface mass density is given by Σ_ crit(z_ l, z_ s) = c^2/4π G χ_ s (1+z_ l)/χ_ l(χ_ s-χ_ l) . The comoving distance is χ(z), where the subscript l corresponds to a foreground lens (a galaxy or a galaxy group/cluster), and the subscript s to a source galaxy whose shape is being measured. Σ_ crit can be found using knowledge of the source catalogue's redshift distribution. Note that both d_A and Σ_ crit depend on cosmology. In our analysis, we will assume a fiducial cosmology in calculating these quantities, neglecting the cosmological variation. The impact of this approximation on cosmological constraints will be discussed in more detail, and shown to be minimal, in Section <ref>. Fig. <ref> illustrates the steps involved in going from the density profiles measured in the simulations to a predicted γ profile. We show results for two cosmologies, one with Ω_ m=0.4, σ_8=0.7 (thin green line), and another with Ω_ m=0.2, σ_8=1.0 (thick magenta line). These correspond roughly to the ends of the `banana', the typical uncertainty contour derived in previous cluster abundance or cosmic shear studies. The stacked density profile is converted into a 2D projected surface mass density profile using an Abel transform, Σ(R) = 2∫_R^∞ρ(r) r/√(r^2 - R^2) dr , where R is the projected distance from the centre. Given the surface mass density distribution, the excess surface mass density around a dark matter halo, ΔΣ(R), is defined as ΔΣ(R) = Σ̄(R)-Σ(R) , with the mean surface density within R given by Σ̄(R) = 2/R^2∫_0^R R' Σ(R') dR' . This quantity is plotted in the left panel of Fig. <ref>. Next, using an assumed lens redshift and the source redshift distribution for a given weak lensing survey (in this case UNIONS), the ΔΣ(r) profile is converted to a shear profile, and radial separations are converted to angular separations on the sky. This gives the γ_t(θ) profile in the middle panel. Finally, realistic uncertainties on the γ profile are calculated using the cluster-lensing-cov package[<https://github.com/hywu/cluster-lensing-cov>] <cit.> and Gaussian scatter of the data points is added to obtain the mock observations in the right-hand panel.
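The projection steps can be sketched as follows. The substitution u² = r² − R² turns the Abel integral into a smooth line-of-sight integral, avoiding the integrable singularity at r = R; the line-of-sight truncation and quadrature settings are illustrative.

```python
# Sketch of the projection: Sigma(R), Delta-Sigma(R) and gamma_t.
import numpy as np
from scipy.integrate import quad

def sigma(R, rho, los_max=40.0):
    # equivalent to 2 * int_R^inf rho(r) r / sqrt(r^2 - R^2) dr
    return 2.0 * quad(lambda u: rho(np.sqrt(R * R + u * u)),
                      0.0, los_max)[0]

def delta_sigma(R, rho):
    mean_inside = (2.0 / R**2) * quad(
        lambda Rp: Rp * sigma(Rp, rho), 0.0, R)[0]   # mean Sigma within R
    return mean_inside - sigma(R, rho)

def gamma_t(R, rho, sigma_crit):
    return delta_sigma(R, rho) / sigma_crit          # weak-lensing limit
```

The nested quadrature is slow but transparent; in practice the profiles would be tabulated once and integrated on a grid.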
We see a clear difference between the predicted shear profiles for the two cosmologies, and the amplitude of the difference greatly exceeds the noise, particularly in the infall region. Thus, measurements of the infall region with a lensing survey such as UNIONS should easily distinguish between the two `ends of the banana'. §.§ Fitting an observed shear profile To pick out features in the projected, 2D density profile, we fit the observed shear profile γ_t(θ) using a similar approach to our 3D fitting. Starting with a model 3D profile given by Eq. (<ref>), we project and convert this as described above to generate the corresponding γ_t(θ) profile. We then compare this to the mean profile measured in the simulation using least squares, and iterate over this process to determine the best choice of 3D parameters. Uncertainties in the fit are propagated along with simulated measurement uncertainties when generating the results in Section <ref>. Physical units are used when stacking the 2D profiles. 20 uniformly spaced data points are assumed in the fitting range, r=0.1-10 r_ pivot. When fitting the density profiles directly to infer the 3D r_ sp and r_ cd from the simulated halos, A in Eq. (<ref>) is left free, giving the most freedom to minimise the residual. Allowing A to be free introduces strong correlation between parameters, however, so when noisy stacked lensing profiles are being fitted its value is fixed to ρ_ m^ Planck to avoid large parameter uncertainties caused by strong degeneracies, at the cost of larger residuals. § COSMOLOGICAL CONSTRAINTS FROM THE INFALL REGION As demonstrated in H24 and Figs <ref>–<ref>, the density profile in the infall region of clusters shows clear systematic variations with cosmology. We will now investigate the prospects of using this dependence to constrain Ω_ m and σ_8, given lensing data from current and forthcoming surveys. For each specific survey considered, the uncertainty on shear profiles depends on the properties of an assumed lens sample (number of halos, mass and redshift distributions) and source sample (shape noise, surface density, and redshift distribution). Our assumed source and lens properties are outlined in Section <ref>. In Sections <ref> and <ref> respectively, we then consider constraints obtained by measuring the depletion radius or the splashback radius. Finally, in Section <ref> we estimate constraints based on fitting the entire infall region. Throughout this section, covariances are calculated using the cluster-lensing-cov package <cit.>. Uncertainties on the profile are used to generate Gaussian scatter in the data points. To calculate our final results (either fitting the profile to extract the splashback/depletion radius, or comparing the shear profiles directly), we repeat each analysis 50 times with random realisations of the errors, and average these multiple realisations to give the final quoted value. §.§ Source and lens catalogue assumptions We focus on two weak lensing surveys, UNIONS and Euclid Wide. The mass and redshift bins for the cluster sample assumed in each survey are summarised in Table <ref>. §.§.§ UNIONS UNIONS <cit.> is a 5-band ground-based imaging survey. The weak lensing component of the survey will cover an area of 4800deg^2, with a source number density of 10arcmin^-2 and shape noise of 0.34, as described in <cit.>. The source redshift distribution n(z) from that work is also assumed here. For the lens catalogue, we use the sample described in <cit.> as a reference case.
This was based on halos in the UNIONS footprint taken from the catalogue created by <cit.>, where a halo finder was run on SDSS galaxies. However, <cit.> considered only 2000deg^2 of UNIONS, as the weak lensing survey was not complete at that time. We have assumed that the mean number density of lenses is the same across the full 4800deg^2 of the survey, and therefore lens numbers in that work are scaled up by a factor of 2.4. Our three lens mass bins also correspond to those defined in <cit.>. §.§.§ Euclid The Euclid Wide survey, planned for the Euclid telescope, is a space-based survey covering 36% of the sky, providing a total area of 15000deg^2, with a source number density of 30arcmin^-2 and shape noise of 0.3 <cit.>. The mean redshift of the source distribution is ⟨ z ⟩=0.95. As lenses, we consider cluster-mass halos detected in the Euclid photometric survey. The numbers in Bins 1-5 of Table <ref> are derived from Fig. 3 of <cit.>, assuming a detection threshold of clusters with N_500 c,field/σ_ field=5. The masses are based on the corresponding selection function in Fig. 2 of that work, converted to M_⊙ h^-1 assuming h=0.7. We focus on the lower redshift bins of the survey, to avoid incompleteness which might bias cluster properties and the inferred shape of the infall region. We also consider a group-scale sample in Bin 0, assuming a complete halo sample in the mass range 10^13≤ M [M_⊙ h^-1] ≤ 10^13.5 with a total number calculated from the halo mass function of <cit.> for a Planck cosmology. The resulting sample contains ∼ 10^5 halos, and represents the best we could hope to do in this mass range. §.§ Constraints from the depletion radius The depletion radius is defined as the point where the bias profile reaches its minimum value. To measure this, we fit the stacked ΔΣ(R) profiles with the density profile model. Errors on ΔΣ(R) are found using the cluster-lensing-cov package, as described in the previous section. These errors are propagated through the fitting procedure. The result is a set of best-fit parameters for the density profile model, and their covariance. The covariance is used to generate a set of correlated samples of the model parameters, assuming Gaussian uncertainties. Then a simple Monte Carlo operation is performed, drawing from this correlated set of samples repeatedly, and finding the new depletion radius of the density profile created on each random draw. Once a density profile is fitted from a measured γ profile, there are still two unknowns in Eq. (<ref>): the matter density ρ_ m, and the matter-matter correlation function ξ_ mm(r), which is calculated using colossus <cit.>. We assume a Planck cosmology for both of these, as in real data we will not know the true value of ρ_ m, and while ξ_ mm(r) can be determined from the observations, it requires knowledge of the galaxy bias. The net effect of this approximation will be discussed further in Section <ref>. For the highest mass halos in our sample, there is no actual dip in the bias profile around the depletion radius, as expected from <cit.>. These halos form in high-density environments, where the bias profile remains positive at all radii. To test the use of the depletion radius as a cosmological probe, we therefore consider a sample of less massive halos (`Bin 0', described above). At present, the depletion radius for halos in this mass range has been measured through weak lensing by <cit.>, to a precision of Δ r_ cd = 0.14. The top-left and bottom-left panels of Fig.
<ref> show predicted uncertainties on the depletion radius in a stacked sample of group-mass halos, with profiles observed using UNIONS and Euclid respectively. With Euclid data, our predicted uncertainties lie in the range 0.1-0.15, making them consistent with the only existing measurement in the literature <cit.>. The predicted uncertainty in this quantity is larger than the cosmological variation over the range of Ω_ m, σ_8 considered, as shown in the left-hand columns of Fig. <ref>. We conclude that the depletion radius is not a good candidate for deriving cosmological constraints. The assumptions made when finding the bias profile also lead to a mismatch between the true and inferred bias minimum in some cases. This is discussed in more detail in Section <ref>, and is a serious limitation to using the location of the depletion radius as a cosmological test. §.§ Constraints from the splashback radius To determine the splashback radius, we use a similar approach to that in Section <ref>. We focus on cluster-mass halos for this measurement, as they will provide the largest SNR. We note that <cit.> find that the splashback radius needs to be measured to an accuracy of ∼5% if it is to be a competitive independent probe of Ω_ m and σ_8. Other works have shown that it should be observable to relatively high precision in future data; for example <cit.> find an uncertainty on r_ sp of ∼0.05 (3-6% depending on the value) from forthcoming weak lensing surveys such as Euclid and LSST <cit.>. The top-middle and bottom-middle panels of Fig. <ref> summarise our results for profiles of stacked cluster-mass samples, measured using lensing data from UNIONS and Euclid respectively. For each simulation, the value of the splashback radius and its associated uncertainty is shown. For UNIONS, the errorbars are too large to constrain deviations from a Planck-like cosmology (indicated by the magenta band, whose width shows the measurement error for that cosmology). The reported values from Planck are Ω_ m=0.316 and σ_8=0.811 <cit.>. For Euclid, on the other hand, depending on the cosmology, we see that significant deviations from the expected Planck value can be observed. Overall, we find that deviations from the fiducial Planck values of Ω_ m and σ_8 can be constrained to ±0.05, using splashback radius measurements for the Euclid cluster sample. These results can be further improved by measuring r_ sp in stacked clusters from the four other redshift bins in Euclid in Table <ref>. The cosmological dependence tends to be similar or larger in higher redshift bins, and so for each simulation, stacking the SNR from all five bins improves the constraining power by at least a factor of two, even reaching a factor 4–6 improvement for cosmologies in close proximity to Planck in Ω_ m–σ_8. We note that this does require complete samples and accurate mass estimates for clusters at redshifts ≥ 0.3. Our results mirror those of <cit.>: the projection effects involved in measuring a ΔΣ profile with weak lensing weaken constraints on the splashback radius. The quoted uncertainties on r_ sp in that work are smaller than here, likely due to the different number of halos assumed (25 000 in their case, compared to 3 500 in ours). Previous work has also demonstrated a bias on the inferred splashback radius in optically selected cluster catalogues <cit.>. These selection effects would need to be modelled with more realistic simulations, depending on the cluster sample assumed.
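Both characteristic radii are extracted in the same way, so the Monte Carlo propagation of the fit covariance described above can be summarised in a short sketch. This is a minimal illustration only, assuming numpy; the density profile model, its fitted parameter covariance, and the function names are placeholders to be supplied by the user:

```python
import numpy as np

def radius_uncertainty(best_fit, cov, radius_of, n_draws=1000, seed=0):
    # Draw correlated samples of the profile parameters from the fit
    # covariance (assuming Gaussian uncertainties), and map each draw
    # to a characteristic radius.
    rng = np.random.default_rng(seed)
    samples = rng.multivariate_normal(best_fit, cov, size=n_draws)
    radii = np.array([radius_of(p) for p in samples])
    return radii.mean(), radii.std()

def splashback_radius(params, density_model, r=np.logspace(-1, 1, 512)):
    # Example radius definition: the splashback radius as the point of
    # steepest logarithmic slope of a user-supplied density_model(r, params)
    log_slope = np.gradient(np.log(density_model(r, params)), np.log(r))
    return r[np.argmin(log_slope)]
```

For the depletion radius, radius_of would instead return the location of the minimum of the bias profile computed from the fitted density profile and an assumed ξ_ mm(r).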
§.§ Constraints from the full profile Fitting a model of the density profile to observations in order to identify characteristic radii introduces large uncertainties. To avoid this, we can consider comparing an observed shear profile directly to simulated profiles, without assuming any particular analytic form. To evaluate the prospects for this method, we use the signal-to-noise ratio (SNR) of the difference between the average shear profiles in the infall region for two different cosmologies. Given cosmologies A and B, SNR = |⟨γ_t( infall) ⟩_A - ⟨γ_t( infall) ⟩_B |/√(σ_A^2 + σ_B^2) , σ = 1/N√(∑σ_i^2) , where σ_i is the uncertainty on the i^ th data point and N is the number of data points in the annulus. The average is computed in an angular annulus encompassing the infall region, described in the next section. Before performing this test, we first need to identify the angular scale where the SNR of the difference is greatest. §.§.§ Scale dependence of the SNR To test how the SNR of the cosmological variation of the profile depends on distance from the halo centre, we calculate mean profiles for two cosmologies, `A' with Ω_ m=0.25, σ_8=0.85 and `B' with Ω_ m=0.35, σ_8=0.75, and take their difference. These two cases were chosen as they bracket the Planck values in the Ω_ m–σ_8 plane along the S_8 degeneracy direction, the direction in which the profile of the infall region is expected to vary the most. The angular dependence of the SNR of this difference, normalised by its maximum value, is shown in Fig. <ref>. The SNR depends on the source and lens redshift distributions, and is shown for two examples, the UNIONS bin with 0.16≤ z_ l<0.26 (thin blue line) and the Euclid bin with 0.2 ≤ z_ l < 0.3 (thick purple line). Solid and dashed-dot lines correspond to group-mass halos and cluster-mass halos, respectively. The SNR peaks in the infall region, indicating that this is the best place to look for differences caused by the cosmological parameters. The smooth peak at ∼2–4 θ_200 c persists when comparing differences between other cosmologies and considering different mass bins, though it does vary slightly with mass, and typically has a width of ∼0.15-0.25dex. To determine the best angular scale to use for a given sample and survey, we measure the location of the peak in SNR for all combinations of cosmologies and over a large mass range, and determine the mean θ_ peak–mass relation. It is given by log(θ_ peak / deg) = a log(M / M_⊙ h^-1) + b , where for UNIONS a=0.21, b=-3.6, and for Euclid a=0.36, b=-5.9. The final annulus over which we integrate has a width of ±0.2dex around the peak position. For any mass bin, the adopted annulus always encompasses the peak of SNR, and it is not cosmology dependent. This annulus is used to produce the results in the top-right and bottom-right panels of Fig. <ref>, for UNIONS and Euclid respectively. §.§.§ Dependence of the profile variation on mass and redshift Given the mass-dependent radial annulus described in the previous section, we calculate the average γ_t within that annulus for the two different cosmologies, and determine the SNR of the difference between them. Fig. <ref> shows this SNR as a function of mass and redshift, assuming a fixed 1000 halos per mass and redshift bin, and Euclid weak lensing data. To get a rough sense of variation of the SNR with mass and redshift, we make a simple bi-linear fit to the values in Fig. <ref>: f( log(M/M_⊙ h^-1), z) = 8.93 log(M/M_⊙ h^-1) + 74.7 z - 5.79 log(M/M_⊙ h^-1) z -120 . 
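Given this fit, the sample-size requirements discussed below follow from inverting the SNR ∝√(N) scaling. A minimal sketch, illustrative only: the in-text estimate quoted later in this section is based on the measured SNR values rather than this approximate fit, so the numbers differ somewhat:

```python
def snr_fit(log_m, z):
    # Bi-linear fit above: SNR of the profile difference between
    # cosmologies A and B, normalised to 1000 halos per bin
    return 8.93 * log_m + 74.7 * z - 5.79 * log_m * z - 120.0

def required_clusters(log_m, z, target_snr=5.0, n_ref=1000):
    # Invert the SNR ∝ sqrt(N) scaling to estimate the needed sample size
    return n_ref * (target_snr / snr_fit(log_m, z)) ** 2

# For log(M) = 14 and z = 0.2 the fit gives SNR ≈ 3.7 per 1000 halos,
# implying ~1.8e3 clusters for 5σ (the same order as the in-text ~1500)
print(round(required_clusters(14.0, 0.2)))
```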
An inverse weighting of 1/SNR is used in the fit to prioritise the high-SNR regime. More complicated functions could better fit both high and low SNRs, but here we opt for this form for simplicity. The fit and residuals are shown in Fig. <ref>. Ignoring systematic errors and biases, and assuming the SNR scales as SNR∝√(N), we can use these results to set requirements on the sample size needed to distinguish cosmological models. For example, to distinguish between the two cosmologies considered so far (A with Ω_ m=0.25, σ_8=0.85 and B with Ω_ m=0.35, σ_8=0.75) at 5σ significance, assuming clusters with a mean mass of 10^14 M_⊙ h^-1 at ⟨ z_ l⟩∼0.2, we would need a sample size of ∼1500 clusters. §.§ Summary Fig. <ref> summarises the expected accuracy of cosmological constraints derived using the three methods (depletion radius measurements, splashback radius measurements, or full profile measurements), for the UNIONS survey (top panels) and the Euclid Wide survey (bottom panels). The vertical ordering of cosmologies in each panel is based on the fitting function in Fig. 6 of <cit.>. To derive these results, we have considered only the lenses in the highest-mass bin of Table <ref> for each survey. Using additional bins would improve these constraints further. Fig. <ref> plots the results of the top-right and bottom-right panels of Fig. <ref> in the Ω_ m–σ_8 plane for UNIONS and Euclid respectively, this time finding the SNR of the difference of each cosmology relative to the fiducial combination Ω_ m=0.3, σ_8=0.85. The SNR contours indicate how well a cosmology in this plane could be constrained from the fiducial case. Also overlaid are recent results from the SRG/eROSITA all-sky survey <cit.>, demonstrating the complementary degeneracy direction of constraints using the infall region. We can also consider each of the five (three) bins for Euclid (UNIONS) in Table <ref> as independent tests, and sum their resulting SNRs in quadrature. For UNIONS, the improvement is marginal as the lower mass bins are noisier, and the cosmological dependence of their infall region is no longer orthogonal to S_8. On the other hand, for Euclid, the improvement in the SNR leads to a reduction in the contour area by a further factor of ∼2. The exact change in area is slightly uncertain, due to the interpolation between limited numbers of simulations (with parameters indicated by the open circles in Fig. <ref>). Taken at face value, however, this measurement should reduce the area of the contour reported by eROSITA by a factor of more than 3. § PRACTICAL CHALLENGES §.§ Use of a fiducial cosmology in the lensing calculations Converting the observed shear profile γ_t(θ) into a projected ΔΣ(r) profile involves assuming a fiducial cosmology to convert angles and redshifts into distances. An incorrect choice of fiducial cosmology when converting angles to transverse distances causes an overall shift in scale; an incorrect choice of cosmology when calculating Σ_ crit causes an overall shift in amplitude. Over the range of Ω_ m considered, the variation in the angular diameter distance with cosmology causes a ± 1.5% shift in scale relative to Planck. Similarly, the variation of Σ_ crit causes a ± 2.5% change in the amplitude of the ΔΣ profile recovered, relative to Planck. Fortunately, these are competing effects. Assuming too high a value of Ω_ m when converting from γ_t(θ) to ΔΣ(r) would shift the profile to smaller r and larger amplitude compared to the truth.
Profiles from all simulations would be shifted in the same way, by different amounts depending on the value of Ω_ m relative to the fiducial case. Considering that the intrinsic variation in the profiles caused by cosmology is in a direction perpendicular to this shift, these two effects in combination largely cancel each other out, leading to a ≤ 1% impact on the resulting ΔΣ profile caused by a large mis-estimate of the true Ω_ m. Thus, we do not include this correction in the main results. §.§ Measuring the correlation function ξ_ mm To derive the bias profile corresponding to a given density profile, the matter-matter correlation function is required in Eq. (<ref>). This function is itself cosmology-dependent. It can be inferred from observational measurements of the galaxy correlation function, but then a model for the galaxy bias, connecting the distribution of galaxies to the distribution of matter, is needed. The galaxy bias is itself uncertain, and therefore including it introduces large uncertainties on the recovered cluster bias profile. Alternatively, instead of assuming a fixed ξ_ mm(r), a cosmology-dependent ξ_ mm found using a halo model (or directly from simulations) could be included in the cosmological inference. This removes the need to estimate it from observation, but introduces extra freedom in the fitting procedure. Another consideration is that applying the same ξ_ mm to the data and the simulation during the cosmological inference to generate a bias profile means we are essentially comparing the fitted density profile to a simulated density profile. This is similar to the method proposed in Section <ref>, but includes the extra step of fitting the measured shear to get a density profile. In the results presented in Section <ref>, we naively assume the correlation function ξ_ mm(r) for a Planck-like cosmology when deriving the bias profile, and as a result, the true location of the bias minimum is not always recovered. Variations in ξ_ mm are ±10% in the infall region as σ_8 is varied by ±0.05. The variation is not constant with radius, so it changes the shape of the bias profile in the infall region, impacting the recovered depletion radius. Changing Ω_ m has a smaller but non-negligible impact. From this, we conclude that using the depletion radius (or minimum of the bias) as a cosmological probe is significantly hampered by uncertainty in the matter-matter correlation function, and thus probably not competitive with splashback or shear profile methods. §.§ Cluster mass calibration Mass calibration of galaxy clusters is a well-known challenge. Accurate determinations of the mean mass and full mass distribution of the stacked halo sample are crucial in order to define equivalent samples of halos to compare to in simulations. Previous studies have shown there is a bias when using optically selected clusters: observed splashback radii are smaller than the values predicted by simulations <cit.>. Euclid will take advantage of the weak lensing and spectroscopic data available for its mass calibration, as well as cross-correlating with data from other surveys <cit.>. Furthermore, since we only consider the most massive clusters at low redshift, we can expect fairly complete samples. Nonetheless, Fig. 14 of <cit.> shows that, in the redshift range of interest, average weak lensing cluster mass estimates may be biased low by 5-10%.
We can investigate the impact of this bias by recomputing mean density and shear profiles for halos with M ≥ 0.9×10^14.3 and M ≥ 0.95×10^14.3 M_⊙ h^-1, and comparing to the previous results obtained with M ≥ 10^14.3 M_⊙ h^-1, thus measuring how an uncorrected bias of 5-10% would affect our final constraints. We find that the difference in the splashback radius in the stacked halo sample is typically ≪ 1σ in the case where all halo masses are biased low by 5%, where σ is the uncertainty on the inferred value of r_ sp from weak lensing profiles. When all halos are biased low by 10%, the difference is larger but still remains below the 1σ level. When comparing the amplitude of the stacked shear profile to simulations, errors are much smaller and the requirements on the mass calibration are more stringent. A downward mass bias of 10% (5%) for all halos reduces the lensing signal in Euclid clusters by 7% (3%), a difference that is significantly above the noise. While a mass bias of this amplitude is probably pessimistic, these results demonstrate the importance of accurate mass calibration. Furthermore, the mass uncertainty also has a significant impact. Halos with masses just below an imposed mass limit can be scattered into the sample, and those just above scattered out of the sample. The overall effect is a sample that contains less massive halos than expected. To test the impact, we generate 5 realisations of Euclid Bin 1 and UNIONS Bin 1 in Table <ref> with masses scattered from their true values assuming a mass uncertainty of 0.2dex <cit.>. Results from these realisations are then compared to the original sample. As expected, the mean masses of the scattered realisations are each biased low by ∼0.1dex, and the measured splashback radii are biased low by ∼1σ. The shear profile amplitude is reduced by 10-15% in the infall region. These results further highlight the importance of accurate mass calibration, especially when directly comparing the shear profiles to simulations. An improved method would include realistic mass scatter in the simulated halos when comparing to observations, so that the main bias is accounted for. More sophisticated analysis methods, such as including abundance information by stacking a sample of the N most massive clusters within a survey volume, or only selecting relaxed clusters <cit.>, might also help reduce or avoid mass calibration problems. We will consider this possibility in future work. §.§ Mis-centering To assess the potential impact of stacking mis-centered halos on our study, we adopt the framework of <cit.>. The authors use a probability distribution for the size of the offset given by P(R_ off) = (R_ off/σ_ off^2) exp(-R_ off^2/(2σ_ off^2)) . Using a sample of galaxy groups detected in XMM-Newton <cit.> and Chandra <cit.> observations, with ΔΣ profiles determined from COSMOS weak lensing measurements <cit.>, they find a best fit for the width of the offset distribution of σ_ off=50kpc. If we assume offsets are randomly drawn from this distribution for each individual system in a stacked halo sample, compute the modified ΔΣ(R) profile, and compare it to the case with no offsets, this will give an idea of the impact of mis-centering. We assume that every halo has an offset drawn from this distribution, a pessimistic choice corresponding to the worst-case scenario. For example, in the optically selected redMaPPer cluster sample, only ∼25% of clusters were mis-centered <cit.>.
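As written, P(R_ off) is a Rayleigh distribution with scale σ_ off, so offsets for a stacked sample can be drawn directly. A minimal sketch, assuming numpy; the recomputation of each halo's ΔΣ(R) about its displaced centre is omitted:

```python
import numpy as np

def draw_offsets(n_halos, sigma_off=50.0, seed=0):
    # P(R_off) above is a Rayleigh distribution with scale sigma_off
    # (here in kpc, matching the best-fit value quoted in the text)
    rng = np.random.default_rng(seed)
    return rng.rayleigh(scale=sigma_off, size=n_halos)

# Each halo's Sigma(R) would then be recomputed about a centre displaced
# by its own offset before stacking; that projection step is omitted here.
offsets = draw_offsets(10_000)
```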
Using σ_ off=50kpc, the difference in the amplitude of the shear profile can be up to 1–2% in the infall region, and the impact on the recovered splashback radius can be a shift of ∼0.05Mpc/h. Both of these results are within the 1σ errors for Euclid shear measurements, and are sub-dominant for UNIONS-measured profiles. Assuming σ_ off can be estimated from observations, as in <cit.>, and only a fraction of clusters are strongly mis-centered, then we do not expect mis-centering to impact our conclusions. These results corroborate the findings of <cit.>, where the authors evaluated the impact of mis-centering on the location of the 3D splashback radius, finding it to increase errors only slightly, causing shifts of the inferred location well within uncertainties. § CONCLUSIONS The amplitude and shape of the mean shear profile around galaxy clusters should depend on cosmology. In universes with low Ω_ m and high σ_8, clusters `form' earlier, assembling their mass into a single large progenitor at higher redshift relative to universes with high Ω_ m and low σ_8 <cit.>. Material accreted over the past few Gyr is accelerated by a deeper potential well, and reaches a larger splashback radius. Thus, the outer density profiles of clusters in low Ω_ m/high σ_8 cosmologies will be more extended at the present day, and the shear signal in these regions will be larger. The dependence of this effect on Ω_ m and σ_8 is close to orthogonal to the S_8 degeneracy present in cosmic shear and cluster abundance studies <cit.>. Combining lensing measurements of the infall region around galaxy clusters with cluster number counts can therefore significantly improve constraints on these parameters, using only low-redshift cluster properties. Given realistic assumptions about the cluster samples and lensing data expected from Euclid, we find that measurements of the splashback radius alone may constrain deviations from fiducial Planck values of Ω_ m and σ_8 of ±0.05 or greater. Measuring the `depletion radius', where the bias reaches a minimum, proves not to be a promising cosmological probe. This is due to the lack of a pronounced negative dip in the bias at this radius on cluster scales, and also the need to determine the matter-matter correlation function in order to calculate the bias. A similar test could in principle be applied to lower-mass groups or even galaxy halos. In particular, the depletion radius is a more prominent feature in low-mass dark matter halos, and thus easier to measure. The density profiles of these lower-mass systems are more likely to be impacted by baryonic feedback effects, however, given their lower energy scale, as demonstrated by small-scale lensing or clustering studies <cit.>. Using the minimum bias value, instead of the radius at which the bias reaches a minimum, might improve the prospects of the depletion radius as a cosmological probe for galaxy clusters, although it requires a good knowledge of the matter density and of the matter-matter correlation function. Furthermore, we find that variation of infall region features with cosmology in these halos is no longer orthogonal to S_8 as with galaxy clusters, but has a similar degeneracy direction to S_8. This limits their utility in improving abundance or lensing-based constraints. Based on these arguments and our exploration of the SNR of differences in shear profiles, we can conclude that the best halos to use for a cosmological analysis are high-mass clusters at low redshift.
Fitting analytic forms to observed shear profiles to infer r_ sp or r_ cd is also a slightly noisy process, reducing the overall SNR of these tests. By applying a simple angular filter to the mean shear profiles in the infall region for our simulated clusters, and studying the intrinsic variation of the mean shear with cosmology, we demonstrated that direct measurements of the profile in weak lensing surveys such as UNIONS or Euclid can better constrain Ω_ m and σ_8, reaching a final precision comparable to that of cluster abundance studies, but with a different degeneracy direction in the Ω_ m–σ_8 plane. These constraints could be further improved by using an optimal weighted filter to extract the largest SNR from the infall region <cit.>. The real power comes from combining constraints based on measurements of the density profile with traditional results based on cluster abundance. We estimate that combining the two could reduce the area of the contours reported by the SRG/eROSITA All-Sky Survey by a factor of 1.2, using one mass and redshift bin from the UNIONS survey, or a factor of 3, using five redshift bins from the Euclid survey. Overall, using the full range of cluster mass and redshift available, lensing measurements of the infall zone should provide a competitive, independent cosmological test that is highly complementary to other low-redshift tests of cosmology, and requires only the data already collected for abundance and cosmic shear studies. In a follow-up work, we will attempt the first such analysis using clusters in the UNIONS footprint. § ACKNOWLEDGEMENTS We thank the members of the UNIONS collaboration and D. Rana for helpful feedback in the preparation of this manuscript. C. T. M. is funded by a Leverhulme Trust Study Abroad Scholarship. J. E. T. acknowledges support from the Natural Sciences and Engineering Research Council of Canada (NSERC), through a Discovery Grant. This research was enabled in part by support provided by Compute Ontario (www.computeontario.ca) and the Digital Research Alliance of Canada (alliancecan.ca). The python packages numpy, scipy, matplotlib, lmfit, colossus and cluster-lensing-cov have been used in this work. For the purpose of open access, the authors have applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising from this submission. § DATA AVAILABILITY The simulations used in this article will be shared on reasonable request to the corresponding author. § VARIATION OF THE CHARACTERISTIC RADII WITH REDSHIFT Although it is tangential to our main argument, it is also interesting to study how the cosmology dependence of the characteristic radii varies with redshift. Fig. <ref> shows the redshift dependence of the splashback and depletion radii for a sub-sample of cosmologies. We see that in comoving units, there is relatively little variation (≤ 20%) between z =0 and z=1. This agrees with other works <cit.>, where the authors find that any variation of the splashback radius with redshift is caused by the variation of Ω_ m(z), meaning that little variation is expected in comoving units. § DEPENDENCE OF ERRORS ON COSMOLOGY Due to the presence of the matter power spectrum in the covariance calculation, the predicted errors are in principle cosmology dependent. In our analysis, we make the approximation that the errors are equal to those of a fiducial Planck-like cosmology with Ω_ m=0.316 and σ_8=0.811 <cit.>. Fig.
<ref> shows how the actual uncertainties depend on cosmology, for Euclid Bin 1 (solid lines) and UNIONS Bin 2 (dashed lines) defined in Table <ref>. The uncertainty is plotted as a function of Ω_ m and σ_8, with darker line colours corresponding to larger values of S_8. The variation is seen to be relatively small in the infall region, so we have neglected this correction in our calculations. In practice there are ways of dealing with this cosmology dependence in a likelihood analysis <cit.>, thereby slightly improving the accuracy of the SNR calculations.
http://arxiv.org/abs/2407.01697v1
20240701180817
NLPGuard: A Framework for Mitigating the Use of Protected Attributes by NLP Classifiers
[ "Salvatore Greco", "Ke Zhou", "Licia Capra", "Tania Cerquitelli", "Daniele Quercia" ]
cs.CL
[ "cs.CL", "cs.AI", "cs.HC" ]
A Framework for Mitigating the Use of Protected Attributes]NLPGuard: A Framework for Mitigating the Use of Protected Attributes by NLP Classifiers Work done at Nokia Bell Labs. salvatore_greco@polito.it 0000-0001-7239-9602 Politecnico di Torino Turin Italy ke.zhou@nokia-bell-labs.com 0000-0001-7177-9152 Nokia Bell Labs Cambridge UK l.capra@ucl.ac.uk 0000-0003-1425-3837 University College London London UK tania.cerquitelli@polito.it 0000-0002-9039-6226 Politecnico di Torino Turin Italy daniele.quercia@nokia-bell-labs.com 0000-0001-9461-5804 Nokia Bell Labs Cambridge UK § ABSTRACT AI regulations are expected to prohibit machine learning models from using sensitive attributes during training. However, the latest Natural Language Processing (NLP) classifiers, which rely on deep learning, operate as black-box systems, complicating the detection and remediation of such misuse. Traditional bias mitigation methods in NLP aim for comparable performance across different groups based on attributes like gender or race but fail to address the underlying issue of reliance on protected attributes. To partly fix that, we introduce NLPGuard, a framework for mitigating the reliance on protected attributes in NLP classifiers. NLPGuard takes an unlabeled dataset, an existing NLP classifier, and its training data as input, producing a modified training dataset that significantly reduces dependence on protected attributes without compromising accuracy. NLPGuard is applied to three classification tasks: identifying toxic language, sentiment analysis, and occupation classification. Our evaluation shows that current NLP classifiers heavily depend on protected attributes, with up to 23% of the most predictive words associated with these attributes. However, NLPGuard effectively reduces this reliance by up to 79%, while slightly improving accuracy. Disclaimer: This paper contains examples of language that some people may find offensive. [ Daniele Quercia July 8, 2024 =================== § INTRODUCTION In recent years, the adoption of deep learning-based NLP models has exponentially increased. Transformer-based models, such as BERT <cit.>, T5 <cit.>, and GPT <cit.>, have achieved unthinkable levels of performance on several natural language tasks. However, despite being increasingly accurate, these models remain black-boxes <cit.>. For an NLP classification task, models predict a class label from an input text without providing any information on the complex internal decision-making mechanism, making it challenging to identify and mitigate potential bias and/or unfair behavior in such models. Upcoming privacy laws regulating the use of AI will soon demand that learning shall not be done on protected attributes such as race, gender, or sexual orientation, as already identified by the General Data Protection Regulation (GDPR), the UK Government, and the anti-discrimination legislation in the United States <cit.>. Ensuring AI models avoid using protected attributes in decision-making is termed `fairness through unawareness' <cit.>, and it is crucial in many real-world scenarios. For instance, NLP-based systems often assess job applicants' resumes. Following the Civil Rights Act in the US, discrimination based on race, sex, nationality, or other protected attributes is forbidden. 
Hence, these NLP systems must omit words linked to protected attributes to prevent discriminatory practices against candidates, such as the “sexist” Amazon Recruitment tool,[<https://www.bbc.com/news/technology-45809919>] a system that learned to downgrade resumes containing the word `women'. Content moderation is another example, where all users should be treated equitably, without having their contributions censored or suppressed because of, for example, their demographic characteristics. However, as we will demonstrate in our analysis, state-of-the-art models often base their predictions on protected attributes, and accurate ones are frequently black boxes, posing challenges in identifying such misuse. Consider, for example, the task of determining whether a sentence contains toxic language or not in a dataset we will analyze. In Figure <ref>, we report four example sentences, together with the outcome of a toxicity classifier P(T); in Figure <ref>, we highlight in red the important words used by the classifier to make these predictions. As shown, the presence of words such as `black', `gay', or `homosexual' is used to distinguish between toxic or non-toxic texts. Yet, these words are protected attributes and should not be used in such classifications at all. As discussed in <ref>, prior studies on bias in NLP primarily focused on two challenges: ensuring fair performance across different groups and rectifying unfairness in word representations. However, these solutions only target specific biases and fail to eliminate the reliance of models on protected attributes for predictions. Therefore, we propose methods to reduce this bias in black-box NLP classifiers, removing most protected attributes from their decision-making process while maintaining accuracy, and making these approaches applicable across various datasets and tasks. In so doing, we make four main contributions: * We introduce NLPGuard (<ref>), a framework with three components: (1) an Explainer that finds the most important words for predictions; (2) an Identifier that checks if these words are about protected attributes; and (3) a Moderator that adjusts the training data to re-train the NLP model to reduce learning from such protected attributes. * We evaluate each part of our framework and use it to mitigate toxicity detection in Wikipedia comments with BERT (<ref>). BERT depends on protected attributes for toxicity predictions (23% of the most predictive words), but our approach cuts this down by 60% and even increases prediction accuracy by 0.8%. * We then evaluate whether our framework generalizes to different types of data and tasks, not just toxicity detection (<ref>). We found that our framework reduces the use of protected attributes by 79% when applied to out-of-distribution data. Also, it reduced reliance on protected attributes without compromising accuracy in tasks like sentiment analysis and occupation classification. * We make NLPGuard publicly accessible,[The code repository of our framework is available at <https://github.com/grecosalvatore/nlpguard>] and discuss how to incorporate it into existing NLP systems, its impact, and its limitations (<ref>). § RELATED WORK §.§ AI Regulations and Laws The growth of AI systems has raised privacy and discrimination concerns, leading to the introduction of numerous regulations and laws governing their use. 
In the European Union (EU), in May 2018, the GDPR <cit.> was introduced, which demands organizations ensure that personal data is processed lawfully, fairly, and transparently. It prohibits processing sensitive personal attributes such as race, ethnicity, religion, and political opinions, unless legitimately justified. The EU proposed the AI Act <cit.>, which defines rules and obligations depending on the level of risk of AI systems (e.g., transparency, documentation, human oversight) <cit.>. In the United Kingdom (UK), the UK Equality Act 2010 <cit.> established that it is unlawful to discriminate based on nine protected characteristics: age, disability, gender reassignment, marriage and civil partnership, pregnancy and maternity, race, religion or belief, sex, and sexual orientation. Compliance with the act is enforced by the Equality and Human Rights Commission (EHRC). In the United States (US), the Anti-discrimination Act <cit.> safeguards individuals from unfair treatment based on protected attributes. In late 2022, a blueprint of the AI Bill of Rights was passed <cit.>, declaring that algorithms that discriminate or perform unjustified different treatment based on protected attributes violate legal protections. AI regulations will continue to evolve in the coming years <cit.>, driven by a common goal: to minimize discriminatory outputs based on protected characteristics <cit.>. §.§ Bias mitigation for NLP Bias in NLP decision-making has manifested itself in several ways, including dialogue generation <cit.>, text classification <cit.>, and machine translation <cit.>. It usually arises from training data <cit.>. For instance, pre-trained models and word embedding can inherit biases and stereotypes present in the large training corpora <cit.>. When quantifying bias, existing works generally highlight disparities between demographic groups, with differences in performance or selection bias on protected attributes such as race, gender, religion, and sexual orientation <cit.>. To address biases in NLP, techniques can be developed that act at the three main stages of the NLP pipeline <cit.>: pre-processing (modifying training data), in-processing (imposing fairness constraints during model training), and post-processing (adjusting classifier predictions based on fairness metrics). Most existing works focus on the first two stages, exploiting data augmentation and modified model training techniques <cit.>. Furthermore, most of those studies focus on one protected category at a time. For example, <cit.> proposed the identification of protected attributes, such as gender, by creating a manual list of words, measuring the skewed occurrence of words across classes or predicted class probability distribution of words. <cit.> introduced gender swapping to equalize the number of male and female entities in the training data. <cit.> proposed dataset augmentation strategies that generate new sentences using templates or replace protected attributes with generic tags, such as part-of-speech or named-entity tags. <cit.> proposed mitigating biases in the training data by assuming a non-discrimination distribution and then reconstructing the distribution using instance weighting. <cit.> proposed removing information from neural representations concerning gender or race for debiasing word embedding for NLP classification. 
These past works try to mitigate unintended bias and performance imbalance between subgroups by (1) removing implicit bias from word embeddings, (2) performing data augmentation on the training set (data-based), or (3) intervening directly in the model architecture or objective function (model-based). However, a gap remains in evaluating (and tackling) the extent to which NLP classifiers depend on protected attributes for their predictions. In this paper, we aim to fill that gap. We consider the following definition of fairness through unawareness: “an algorithm is fair as long as any protected attributes are not explicitly used in the decision-making process” <cit.>. Textual data is unstructured; hence, protected attributes are not explicitly delineated as input features, such as columns used in structured datasets. Consequently, we refine this definition in the context of NLP applications to ensure words associated with protected characteristics are not utilized in decision-making unless necessary. Our approach aims to reduce the use of protected attributes in the decision-making process of NLP models, thereby better aligning them with legal regulations. Compared to prior work, our approach not only has a different objective, but it also overcomes two of their main limitations: (1) their focus on a subset of protected attributes at a time (usually race and gender); (2) their manual and static identification of protected attributes via pre-defined dictionaries, lists of identity terms, or additional annotations. The only technique addressing these limitations is Entropy-based Attention Regulation (EAR) <cit.>. EAR introduces a regularization term to discourage overfitting to training-specific potentially biased terms. However, those terms are automatically identified during training, leaving no flexibility for users to select which categories to mitigate. Unlike previous techniques, our approach: (1) identifies and mitigates multiple protected categories simultaneously; (2) can be fully automated, allowing for a dynamic update of the dictionary of protected attributes; and (3) allows for the selection of the categories to mitigate. § OUR MITIGATION FRAMEWORK Our Mitigation Framework, namely NLPGuard, has been designed to be generally applicable to any supervised machine learning-based NLP classification model applied on an unlabelled corpus. As illustrated in Figure <ref>, NLPGuard takes in input an unlabeled corpus and a pre-trained NLP classifier (together with its training dataset) to produce a mitigated training dataset in output. Ground truth class labels for the unlabelled corpus are not required; rather, the classifier is used to generate them, both for in-distribution (i.e., data that comes from the same distribution as the original training dataset) and for out-of-distribution data where labels are unavailable. Because of the black-box nature of NLP classifiers based on deep learning models, labels might be predicted using protected attributes. To mitigate that, our framework comprises the following three components: A. Explainer. This component uses Explainable Artificial Intelligence (XAI) techniques to identify the most important words used by the model for its predictions. The XAI field has made great strides in making black-box models more transparent, and several techniques exist to explain NLP classifiers <cit.>. 
The best one for our purpose should have two qualities: (1) quantify the importance of each feature word (feature-based), and (2) be applicable to explain the model's predictions after training (post-hoc). Many techniques meet these requirements, and most of them measure the importance of each word for the prediction within an individual sentence (local-explanations) <cit.>. Our Explainer component first identifies the words important for prediction within all individual sentences, exploiting any of those techniques. Each word is, as such, associated with multiple scores, one for each occurrence in each sentence. Second, it determines the most important predictive words for the model as a whole (global-explanations) following the idea of some of these techniques, which aggregate the words' importance over many sentences to compute the overall importance <cit.>. Specifically, for each word, it sums all their individual scores and divides them by their frequency to compute the word's classification score. The normalization step is required to also identify rare but important words. The output of the Explainer component is the ordered list of the most important words for the model's predictions. B. Identifier. This component aims to annotate which of the previously detected important words refer to protected attributes. We consider the nine protected categories defined by the Equality Act: age, disability, gender reassignment, marriage and civil partnership, pregnancy and maternity, race, religion or belief, sex, and sexual orientation. Our framework allows for annotation using human-in-the-loop (B1) and machine-in-the-loop (B2) approaches. B1. Human-in-the-loop Annotation. Crowdsourcing platforms, such as Amazon Mechanical Turk (MTurk) and Prolific <cit.>, have been extensively used by the research community to recruit crowdworkers for data labeling purposes. Crowdworkers <cit.> are anonymous people usually paid for completing simple tasks. This component leverages crowdsourcing to perform the protected attribute annotation of the most important words. In our work, we exploited MTurk, where for each important word {word}, participants were asked to answer the following question: * Question: Is the word {word} referring to: * Possible Answers: 1. Age, 2. Disability, 3. Gender reassignment, 4. Marriage and civil partnership, 5. Pregnancy and maternity, 6. Race, 7. Religion or belief, 8. Sex, 9. Sexual orientation, 10. None of the above; To ensure data quality, we adopt a trap mechanism to detect random responses from participants and reject them (details of this mechanism are presented in <ref>). B2. Machine-in-the-loop Annotation. As a cost-effective and scalable alternative to human-in-the-loop annotations via crowdsourcing, we also implemented a protected attributes annotation process that uses Large Language Models (LLMs). We did so inspired by a recent study <cit.> that found LLMs, including ChatGPT, to outperform crowdworkers in text-based annotation tasks. It also has been shown that LLMs are effective in solving many NLP tasks <cit.>. Specifically, we implemented an annotation process that interacts with ChatGPT as follows: two prompts provide the protected categories, their definitions, and links with additional information. Then, for each word, the LLM is asked to: (1) classify the word into one of the protected categories or none of them; (2) provide a reliability score in the range [0, 100]; and (3) provide an explanation. 
For example, the word `homosexual' would be classified with the protected category sexual orientation and a score of 100/100 by GPT-3.5-Turbo (see Figures <ref> – <ref> in Appendix A for further details). C. Moderator. This component produces a new mitigated training dataset that can be used to train a new classifier that uses fewer protected attributes previously identified. It takes the original pre-trained classifier, the original training dataset, and the list of the most important words enriched with the protected attribute label as input. It produces a new mitigated training dataset by adjusting the training dataset based on the identified protected attributes that can then be used to train a new mitigated classifier. We designed and tested five mitigation strategies (MS). (MS1) Sentence-level removal. Previous works have shown that subsampling can be an easy but effective technique for data balancing <cit.>. This mitigation strategy eliminates all sentences containing protected attributes from the training set. As a result, it reduces the overall number of training examples. For example, if a word W_i is identified as a protected attribute, all sentences in the training set that include that word are removed. The idea behind this strategy is that the imbalance in the number of training examples containing protected attributes for a particular class may have led the model to learn that these protected attributes are crucial for classifying that class. (MS2) Word-level removal. This strategy removes only the protected attribute words from the sentences in the training set while preserving the number of examples. The process involves removing the identified protected attribute words from all sentences in the training set, thereby removing their influence on the model's learning process. The idea is that the model should be able to classify sentences without relying solely on the protected words, and rather use other words in the text too. (MS3) Word-level replacement with a random synonym. This strategy replaces every instance of a protected attribute in the training set with one of its synonyms. It first uses embedding similarity techniques to identify the k-nearest neighbors for each protected attribute. Then, it randomly selects one of the k-most similar words to replace each instance of the protected attribute in the training set. This has been shown to mitigate bias in <cit.>, and it is believed that it may also help mitigate the use of protected attributes in classification, as the model may learn to rely on other words for classifying the classes rather than solely relying on protected attributes. This approach maintains the same number of examples in the training set but increases the diversity of words. (MS4) Word-level replacement with K random synonyms. This strategy expands the training set by generating new sentences using synonyms of protected attributes. Instead of replacing the protected attribute in-place with one similar word as in MS3, it creates k new sentences by replacing the protected attribute with each of its k-nearest neighbors. For example, given a sentence containing a protected attribute W_i, k new sentences are created by replacing W_i with each of its k most similar words. This increases the size of the training set and diversifies the words used in the sentences. (MS5) Word-level replacement with hypernym. 
This strategy replaces instances of protected attributes in the training set with higher-level words, called hypernyms, which provide a more general representation of the category to which the protected attribute belongs. For example, the hypernym of `dog' could be `animal'. By using hypernyms instead of the specific protected attributes, the model is less likely to discriminate based on these attributes in its classifications. This technique has been shown to be effective in mitigating accuracy imbalance between subgroups <cit.>. § FRAMEWORK EVALUATION: EFFECTIVENESS AND SENSITIVITY We evaluate the effectiveness of our framework and the sensitivity of the Explainer (<ref>), Identifier (<ref>), and Moderator (<ref>) components in mitigating a toxicity classifier applied to in-distribution data (i.e., the test set). §.§ Evaluation task We choose toxicity prediction as the main evaluation task in line with previous research. Toxicity classifiers are used in different contexts <cit.>, such as Reddit, Twitter, and 4chan, with competitive performance. However, they suffer from different types of biases <cit.>. In Wikipedia, for example, any comment containing words associated with insults, offense, or profanity, regardless of the tone, the intent, and the context, would be classified as toxic; however, toxic language was more likely to be predicted for comments from minority communities, as found in <cit.>, thus suggesting the use of protected attributes by such models. In our experiments, we used the “original model” in the widely used detoxify <cit.> library[<https://github.com/unitaryai/detoxify>] as a pre-trained toxicity classifier. This is a BERT-base and uncased model <cit.> trained on a dataset of publicly available Wikipedia comments.[<https://www.kaggle.com/competitions/jigsaw-toxic-comment-classification-challenge/data>] It was fine-tuned for predicting 6 labels related to toxicity: toxicity, severe toxicity, obscene, threat, insult, and identity attack, achieving an average Area Under the ROC Curve (AUC) score of 98.6%. For the toxicity label only, the classifier achieved 0.82 macro and 0.93 weighted F1 scores (the dataset is imbalanced). This classifier is applied to the original test set comprising 153,164 texts, and predicted the toxicity label for 36,148 texts (23.6% of the test set).[The ground truth labels are available only for a subset of the test set (42%).] §.§ Component evaluation: Explainer The Explainer aims to identify the most crucial words utilized by the classifier for predictions (as described in <ref>-A). The employed XAI technique can influence the words recognized as significant, thereby impacting the identified protected attributes on which the model relies to make predictions. §.§.§ Evaluation metrics To evaluate the effectiveness of the Explainer component, we first measure the impact on the model's predictive performance (F1 score) by removing the most important words identified by each XAI technique. An effective and precise explainer should result in a noticeable decrease in the predictive performance when these words are removed. Secondly, to assess the sensitivity of the Explainer, we measure the overlap of the most important words identified by different XAI techniques. A substantial overlap indicates consistent outputs across XAI techniques. Lastly, we measure the computation time for generating explanations to assess the efficiency of the Explainer based on the XAI technique, which is a crucial aspect when dealing with large datasets.
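For concreteness, the Explainer's score aggregation and the word-removal evaluation just described can be sketched as follows. This is a simplified illustration only: the per-sentence attribution scores, the classifier wrapper, and the F1 metric are assumed to be supplied, and the released NLPGuard implementation may differ in detail:

```python
from collections import defaultdict

def global_importance(local_explanations):
    # local_explanations: one list of (word, score) pairs per sentence,
    # as produced by a local XAI method such as IG or SHAP
    totals, counts = defaultdict(float), defaultdict(int)
    for sentence in local_explanations:
        for word, score in sentence:
            totals[word] += score
            counts[word] += 1
    # Sum of scores divided by frequency, so rare but decisive words
    # are not drowned out by common ones
    return sorted(totals, key=lambda w: totals[w] / counts[w], reverse=True)

def f1_after_removal(texts, labels, ranked_words, k, classify, f1_score):
    # Deletion test: strip the top-k words and re-score the classifier;
    # `classify` and `f1_score` are assumed user-supplied callables
    banned = set(ranked_words[:k])
    stripped = [" ".join(w for w in t.split() if w.lower() not in banned)
                for t in texts]
    return f1_score(labels, classify(stripped))
```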
§.§.§ Explainer setup There are two main categories of XAI techniques to compute explanations within a sentence: permutation-based and gradient-based <cit.>. For this comparison, we instantiated the Explainer component with SHapley Additive exPlanations (SHAP) <cit.> as a representative of the permutation-based techniques and with Integrated Gradients (IG) <cit.> as a representative of the gradient-based ones. Both techniques have demonstrated competitive performance in prior studies <cit.>. Specifically, for SHAP, we used the text permutation explainer with 3,000 as the maximum evaluation step parameter. For Integrated Gradients, we exploited the implementation provided by the Ferret <cit.> library. §.§.§ Results We produced the explanations within each sentence over the toxic texts in the test set using both techniques (SHAP and IG); we then aggregated individual scores and extracted an ordered list of the most toxic words as previously described in <ref>-A. Figure <ref> shows the decrease in the F1 score when removing between 50 and 700 of the most important words, in steps of 50. As expected, removing the most important words from the test set causes a marked decrease in predictive performance, especially for the top 250 words. IG exhibits higher precision, leading to a more substantial decrease initially. However, the decrease levels off beyond the top 400 words for both techniques. This shows that both techniques effectively extract the words used by the classifier for its predictions. We selected the top 400 most toxic words (approximately 10%); removing additional words caused a smaller decrease in predictive performance. We then measured the overlap between the 400 most toxic words identified with IG and SHAP. We found that 307 out of the 400 words were identical (77%), indicating a substantial agreement between the two techniques. The 23% disagreement may be attributable to the varying precision levels inherent in the two methods. Finally, we compared the execution times required to generate explanations using both techniques. We observed that IG significantly outperformed SHAP, completing the explanation process more than two orders of magnitude faster. On average, the execution time in seconds to obtain an explanation is approximately 0.2 for IG and 30 for SHAP, using a single Nvidia RTX A6000 GPU. In summary, Integrated Gradients proves to be the most effective technique for the Explainer component, as it exhibits higher precision in identifying the most crucial words and executes significantly faster. As a result, we adopt Integrated Gradients in the Explainer for all subsequent experiments. Nevertheless, the framework allows the use of alternative XAI techniques. §.§ Component evaluation: Identifier The Identifier aims to determine which of the most important words are actually protected attributes (<ref>-B). To evaluate its effectiveness, we compare the protected attributes identified by the component instantiated with human-in-the-loop and machine-in-the-loop approaches against the annotations provided by two expert annotators, who possess a greater depth of knowledge of the definitions of protected categories by AI regulators than participants engaged in the human study.[Expert annotators are people within our team with a background in human-computer interaction and trustworthy and responsible AI. They are located in two different Western countries, with different ethnicities and ages.
They carefully read the UK Equality Act 2010 and unanimously agreed on the annotation to be performed, which was done independently.] We also add for comparison a pre-defined dictionary of 51 protected attributes from previous works <cit.>. §.§.§ Evaluation metrics We measure Cohen's kappa inter-annotator agreement <cit.> to evaluate the accuracy and reliability of the protected attributes identified by the different approaches. It ranges between [-1,1]; the higher the score is, the higher the agreement. §.§.§ Protected attributes identifier setup We selected the 400 most toxic words extracted with the Explainer component instantiated with Integrated Gradients as the candidate set to identify protected attributes. Then, we configured the Identifier as follows. Human-in-the-loop setup. We set up an MTurk study where we asked annotators to label words with the protected category they refer to (if any). As anticipated in <ref>-B1, we also used a trap mechanism to detect poor-quality responses; specifically, we gave annotators the following definition of toxicity: “Toxic language is a way of communicating that harms other people”. Then, for each word, we also asked participants to answer the following additional trap question: * Trap Question: Does the word {word} suggest toxic language? * Possible Answers: 1. Not at all, 2. Very little, 3. Somewhat, 4. To a great extent, 5. Definitely. To the list of 400 most toxic words, we added 15 trap words that can be easily classified as toxic (e.g., “asshole”) or non-toxic (e.g., “friendly”). The full list of trap words can be found in Table <ref> in Appendix B. For the non-toxic (toxic) trap words, we expected MTurk participants to select a score of 1 or 2 (4 or 5) on the Likert scale. Participants were considered unreliable if they did not meet those expectations, and their assessments were discarded from our results. We ended up with 246 reliable participants, evenly split between males and females. The majority were educated (74% finished college), mostly located in the United States, with a median age group of 26-39. In terms of racial demographics, most were White (52%), followed by Asian (27%), African (7%), and Hispanic (5%).[Crowdworkers have been paid for their valuable contributions and time devoted to this research.] We collected five annotations per word on average. A word was labeled as a protected attribute if the sum of the votes across the nine categories exceeded the votes for None of the above (majority voting). Machine-in-the-loop setup. We also annotated the same set of words by prompting GPT-3.5-Turbo, as introduced in <ref>-B2. The temperature parameter was set to 0.3 to limit creativity in generating the responses. We also experimented with other temperature values in the range [0.3, 0.7], observing no major differences. For each candidate word, we prompted GPT-3.5-Turbo, asking for a possible protected category, a reliability score, and an explanation for the classification. If a word is classified with any protected category, it is labeled as a protected attribute. §.§.§ Results The two expert annotators (A1, A2) identified 72/400 (18%) and 66/400 (17%) protected attributes, respectively. The human-in-the-loop (MTurk) approach identified 108/400 (27%). Instead, the machine-in-the-loop (ChatGPT) approach labeled 93/400 (23%) words as protected attributes. These findings indicate that the original classifier heavily relies on protected attributes for toxicity predictions.
Interestingly, the ChatGPT Identifier is also able to annotate proxy words for the protected categories. For example, the word `headscarf' is annotated as related to religion and belief (see Figure <ref> in Appendix A). If we use the pre-defined dictionary <cit.> instead of the Identifier component, only 9 out of 400 words (2%) would have been labeled as protected attributes. This suggests that pre-defined dictionaries may consist of a limited subset of protected attributes and not encompass the entire range of relevant attributes. Figure <ref> shows the pairwise Cohen's kappa inter-annotator agreement between the annotation performed by the two experts (A1, A2), ChatGPT (GPT), MTurk (MT), and the pre-defined dictionary (D). A higher score indicates a greater level of agreement. The score between the two expert annotators is 0.81, corresponding to an almost perfect agreement according to Landis and Koch's scale <cit.>. The ChatGPT annotations demonstrate substantial (0.67) and moderate (0.56) agreement with the expert annotators. In contrast, the MTurk annotations show a moderate agreement of 0.48 and 0.44 with the expert annotators, respectively. The pre-defined dictionary exhibits low agreement with all other annotations, as it only covers a small subset of the important words. This evaluation demonstrates that the machine-in-the-loop approach outperforms the human-in-the-loop one in identifying protected attributes, while also enabling the full automation of the framework. For the next experiments, we thus adopt the LLM-based approach for the Identifier. §.§ Component evaluation: Moderator The Moderator aims to create a mitigated training corpus to train a new classifier with reduced reliance on protected attributes and similar predictive performance. To evaluate the effectiveness of each mitigation strategy, we trained and evaluated a distinct mitigated model for each strategy. §.§.§ Evaluation metrics For each mitigation strategy, we examine two key aspects of the mitigated classifiers: fairness and predictive performance. Our fairness criterion is fairness through unawareness, whereby “an algorithm is fair as long as any protected attributes are not explicitly used in the decision-making process” <cit.>. This is quantified by measuring the number of protected attributes each mitigated model relies on when making predictions, as determined by the Explainer and Identifier components. A lower number indicates a reduced dependence on protected attributes, which signifies progress towards a fairer and less biased classifier. To evaluate the predictive performance, we measure the F1 score specifically for the toxicity label, providing insight into the model's accuracy in identifying toxic texts. Additionally, we evaluate the Area Under the Curve (AUC) score for all toxicity-related labels. This metric provides an overall measure of the model's performance in identifying various aspects of toxicity. By considering both fairness and predictive performance, we can ascertain the effectiveness of the mitigated models in achieving a balance between reducing reliance on protected attributes and maintaining similar predictive capabilities. §.§.§ Moderator setup For the mitigation strategies outlined in <ref>-C, we used the following setup: for the removal-based strategies (MS1, MS2), we removed a sentence or a word if, after tokenization, a protected attribute appeared among its tokens, as sketched below.
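A minimal sketch of these two removal-based strategies, illustrative only: `tokenize` stands for the classifier's tokenizer, `protected` for the word set returned by the Identifier, and the released NLPGuard code may differ in detail:

```python
def removal_moderation(dataset, protected, tokenize, strategy="word"):
    # dataset: iterable of (text, label) pairs; protected: words flagged
    # by the Identifier; tokenize: the classifier's tokenizer (assumed
    # word-level here; subword tokenizers need extra care)
    protected = {w.lower() for w in protected}
    mitigated = []
    for text, label in dataset:
        tokens = tokenize(text)
        contains = any(t.lower() in protected for t in tokens)
        if strategy == "sentence":        # MS1: drop the whole example
            if not contains:
                mitigated.append((text, label))
        else:                             # MS2: drop only the flagged words
            kept = [t for t in tokens if t.lower() not in protected]
            mitigated.append((" ".join(kept), label))
    return mitigated
```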
In the case of the mitigation strategies based on the k-neighbours (MS3, MS4), we set the value of k to 5, meaning that for each protected attribute, the five closest words were identified. To identify these nearest neighbors, we computed the cosine similarity between each word in the vocabulary and the protected attribute using the 300-dimensional GloVe <cit.> word embedding, as suggested in <cit.>. For the hypernyms-based strategy (MS5), we utilized the WordNet lexical database <cit.> provided in NLTK,[<https://www.nltk.org/howto/wordnet.html>] as suggested in <cit.>. We replaced each protected attribute with its first-level hypernym extracted from its synset of synonyms. §.§.§ Training the mitigated models We applied each mitigation strategy to the original Wikipedia comments training dataset. Each mitigation strategy produced a modified version of the training dataset (containing 159,571 examples), whose differences are shown in the third column of Table <ref>. The sentence-removal mitigation strategy (MS1) resulted in a decrease of 6k examples. Instead, the strategy that added k new sentences for each protected attribute (MS4) increased the training dataset by 108k new sentences. The other mitigation techniques did not change the number of training examples. All mitigated models (M_1^*-M_5^*) were trained by fine-tuning the original pre-trained weights of BERT[<https://huggingface.co/bert-base-uncased>] for 3 epochs, with a batch size of 16, and Adam <cit.> as optimizer. To evaluate the mitigated models, we first classified all texts in the test set with each mitigated model. Then, we applied the Explainer to extract the most important predictive words used by each mitigated model for the toxicity predictions in those texts. Finally, we exploited the Identifier to determine if the new important words of the mitigated models were protected attributes. §.§.§ Results Fairness. The last two columns in Table <ref> show the percentage and number of the most toxic words labeled as protected attributes for all the mitigated models (M_1^*-M_5^*). The number of those already present among the protected attributes of the original model (M_o) is indicated in curly brackets. All the mitigation strategies reduced the number of protected attributes the model relied upon. However, the mitigated models trained with removal-based strategies (MS1, MS2) achieved much better results. Only 9% and 10% of their most toxic words were labeled as protected attributes (37 and 40 words out of 400), representing a decrease of 61% from the original model (93 words out of 400). One possible reason for the lower performance of replacement-based mitigation strategies (MS3-MS5) is that they can introduce new protected attributes when replacing words. For a qualitative evaluation of fairness improvement, please refer to Figure <ref> and Figure <ref> in Appendix C. Predictive performance. In Table <ref>, columns 4 and 5 report the macro and weighted F1 scores on the toxicity label for the original and mitigated models. Column 6 also presents the mean AUC scores across all the toxicity-related labels. The results show that all the mitigated models present similar F1 scores compared to the original model, except for the one trained with MS4, which exhibits a greater decrease. Interestingly, the mitigated models trained on the removal-based mitigation strategies (MS1, MS2) achieve better F1 scores than the original model. The word-removal (MS2) increases the macro and weighted F1 scores by 1.3% and 1.2%. 
Indeed, we observed that the removal-based mitigation strategies reduced the number of false positives in the toxicity predictions (i.e., non-toxic texts wrongly predicted as toxic). All the mitigated models exhibit slightly lower AUC scores compared to the original model. However, the decrease is minor and acceptable (around 0.5% and 0.3%) in light of the reduced reliance on protected attributes. Summary. We conclude that our framework effectively reduces the model's reliance on protected attributes without compromising its predictive performance. Indeed, all mitigated models are fairer in that they significantly reduce the use of protected attributes and exhibit similar predictive performance to the original model. Interestingly, the removal-based strategies (MS1, MS2) even increased the models' predictive performance after the mitigation. § FRAMEWORK EVALUATION: GENERALIZABILITY We finally evaluate the generalizability of our framework first to toxicity prediction on out-of-distribution data (<ref>), and second on different tasks, i.e., sentiment analysis (<ref>) and occupation classification (<ref>). §.§ Framework and evaluation settings For this evaluation, we instantiated the Explainer with Integrated Gradients, the Identifier with ChatGPT, and the Moderator with the removal-based mitigation strategies. As shown in <ref>, this turned out to be the optimal framework configuration. We perform a similar evaluation of the mitigated models by measuring their fairness and predictive performance. Fairness is evaluated by quantifying the number of protected attributes each mitigated model relies on (fairness through unawareness). Predictive performance is evaluated using quantitative metrics on the test set. For the toxicity classifier, we measure the F1 score for the toxicity label, which allows us to gauge the model's accuracy in detecting instances of toxicity, and the Area Under the Curve (AUC) score for all toxicity-related labels, providing an overall assessment of the model's performance in identifying different aspects of toxicity. For the sentiment and the occupation classifiers, we solely measure the F1 score, as it provides a comprehensive assessment of the model's accuracy in these tasks. §.§ Mitigating toxicity prediction on out-of-distribution data This experiment aims to assess the applicability of our framework in mitigating the toxicity model when applied to out-of-distribution data, specifically company reviews. This is crucial as classifiers are normally applied to datasets from other domains with different word distributions from training. §.§.§ Company reviews data We collected data from a popular online platform where current and former employees write reviews about companies. Reviewers comment on various aspects such as personal experience with the company or managers, salary information, workplace culture, and typical job interviews. The platform fosters a constructive approach among its users by manually and automatically moderating the content of reviews. However, reviews are published anonymously. On the one hand, this promotes user privacy. On the other hand, it can also cause some users to write public insults and offenses toward companies or people. Specifically, we collected a dataset of 439,163 reviews from U.S.-based companies across all 51 U.S. states written from 2008 to 2020.[To preserve the privacy of individuals, Personally Identifiable Information (PII) was removed.] 
Each review contains a pros part (positive comments in the review) and a cons part (negative comments). We applied the same toxicity classifier introduced in <ref> to identify toxic company reviews. §.§.§ Toxicity in company reviews The initial expectation was not to have many toxic reviews in the dataset due to the highly curated nature of the platform. However, if we consider a post to be toxic when at least one of the cons or pros fields contains inappropriate content, we found 1.6% of reviews (7,224) to be toxic. The number of reviews classified as toxic by using the pros and cons texts as input is 853 for pros (0.2%) and 6,495 for cons (1.5%) over 439,163. As expected, we found that most of the toxic texts are present in cons. Interestingly, some people tend to be so angry and frustrated by the work experience that they let off steam even in the pros field. §.§.§ Identify protected attributes in toxicity predictions on company reviews All pros and cons reviews predicted as toxic were analyzed by the Explainer component to extract the most important words used by the model in predicting toxic reviews. Then, we selected the 400 most toxic words extracted, and we annotated those words with GPT-3.5-Turbo. Among the 400 most important words used by the model in predicting toxic reviews, 76 are protected attributes (19%), as shown in the last two columns of the first row in Table <ref> (original model M_o). We can conclude that the original classifier exhibits a significant reliance on protected attributes for toxicity predictions, even when applied to different out-of-distribution data. §.§.§ Training the mitigated models We applied the removal-based mitigation strategies (MS1, MS2) to the original Wikipedia comments training dataset based on the protected attributes identified in the toxic company reviews. Table <ref> shows the differences in the number of training examples after each strategy in the third column. The original training dataset contained 159,571 examples. MS1 resulted in a decrease of 6k examples, while MS2 did not change it. All mitigated models were fine-tuned for 3 epochs, with a batch size of 16, and Adam as optimizer. To evaluate the mitigated models, all pros and cons reviews were classified by each mitigated model. Then, we applied the Explainer component to extract the most important 400 predictive words used by each mitigated model for the toxicity predictions on company reviews. Finally, we exploited the Identifier to determine if the new important words of the mitigated models were protected attributes. §.§.§ Results Fairness. The last two columns in Table <ref> show the percentage and number of the most toxic words labeled as protected attributes. The number of those already present among the protected attributes used by the original model (M_o) is also indicated in curly brackets. The results confirm that removal-based mitigation strategies reduce the number of protected attributes the model relied upon. MS1 and MS2 reduce the percentage of protected attributes from 19% to 4% and 5% (16 and 19 out of 400 words), respectively. This corresponds to a decrease of 79% and 75%. They also reduce the protected attributes the original classifier relies on from 76 to 8 and 11, respectively. Predictive performance. Columns 4 and 5 in Table <ref> show the macro and weighted F1 scores achieved by the original and mitigated models on the test set. The mitigated models exhibit higher predictive performance in terms of F1 scores than the original model. 
The increment is around 1% for both scores and mitigated models. Finally, column 6 shows the AUC for all the toxicity-related labels. The mitigated model produced by MS1 achieves 0.979 on the AUC score, with a decrease of 0.007 from the original model. Instead, with MS2, the decrease in performance is only 0.003.
Summary. The experimental results obtained from the out-of-distribution data demonstrate the capability of our framework to effectively mitigate a model's reliance on protected attributes when applied to non-training data, where ground truth labels are unavailable. It showcases its adaptability and robustness in real-world scenarios where labeled data may not be readily accessible.
§.§ Mitigating sentiment analysis
This evaluation aims to assess the versatility and effectiveness of our framework across different classification tasks. For this experiment, we chose sentiment classification to test our framework in mitigating the use of protected attributes in tasks where reliance on them might be lower.
§.§.§ Training the original sentiment classifier
We selected a dataset of 163K tweets and 37K Reddit comments in English, expressing people's opinions towards the general elections held in India in 2019.[<https://www.kaggle.com/datasets/cosmos98/twitter-and-reddit-sentimental-analysis-dataset>] The task consists of a multi-class sentiment classification problem with 3 classes: negative, neutral, and positive. We split the dataset with 80% for training (160k) and 20% for testing (40k). We fine-tuned the BERT model for 3 epochs, achieving a 0.96 F1 score on the test set.
§.§.§ Identifying protected attributes in sentiment predictions
We used the fine-tuned model to predict the sentiment label over the entire test set. Then, we analyzed, separately, all the negative and positive texts with the Explainer component instantiated with Integrated Gradients. The neutral texts do not contain specific patterns that the model should learn and are not of interest for mitigation. Then, we annotated with GPT-3.5-Turbo the top 5% most important words for the negative and positive texts separately, resulting in the top 200 negative and 200 positive words. We found that 16 (8%) of 200 negative words and 11 (6%) of 200 positive words were labeled as protected attributes by the Identifier, suggesting a moderate reliance on protected attributes.
§.§.§ Training the mitigated models
We applied the two removal-based mitigation strategies (MS1, MS2). In this case, the mitigation is performed separately per class label (e.g., the protected attributes in the most negative words are mitigated only on the negative training examples). MS1 decreases the training set by 5k negative and 8k positive training examples, as shown in the third and fourth columns in Table <ref>. Also in this case, the models were fine-tuned for 3 epochs. We used the mitigated models to predict the sentiment label over the test set. Then, we extracted the most important 200 words from the negative and positive texts separately with the Explainer, and we annotated those words with the Identifier.
§.§.§ Results
Fairness. The last four columns in Table <ref> show the percentage and the number of protected attributes of the original (M_o) and mitigated (M_1^* and M_2^*) sentiment classifiers for the negative and positive classes, separately. MS1 produces a mitigated model that relies on half of the protected attributes of the original classifier (4% for the negative and 3% for the positive classes).
Interestingly, the number of protected attributes the original classifier relied on is almost completely mitigated, except for 2 for the negative and 1 for the positive classes. MS2 behaves similarly in this respect. However, many new protected attributes emerge as new important words. In the end, the total number of protected attributes remains the same, even though the protected attributes of the original model have almost all been mitigated. Therefore, MS1 is the most effective in this case.
Predictive performance. The fifth column in Table <ref> shows the F1 score obtained by the original sentiment classifier and the mitigated models on the test set. The mitigated models achieve the same F1 score, thus showing the same predictive capabilities.
Summary. These results show that our framework can be effective not only on the toxicity model, which heavily relies on protected attributes, but also on the sentiment model, which is moderately impacted by protected attributes, confirming its general effectiveness.
§.§ Mitigating occupation classification
This evaluation serves three primary objectives: (1) to further assess the adaptability and effectiveness of our framework across various classification tasks, (2) to mitigate the use of protected attributes in scenarios where the final prediction has tangible consequences for individuals, and (3) to compare our framework with two mitigation techniques that act on the model rather than data. To achieve these goals, we selected the task of predicting occupations from online biographies, using a dataset of biographies annotated by gender and occupation from previous works <cit.>. We used the field `cleaned_input_text' as input text, where sentences that directly reveal the occupation were removed (e.g., “he is a journalist”), and we removed first names. We compare our framework with two model-based mitigation techniques: Iterative Null-space Projection (INLP) <cit.> and Entropy-based Attention Regulation (EAR) <cit.>. INLP requires an additional annotation for the mitigated category, which is only present for gender in this dataset. Therefore, we conduct two distinct analyses: (1) focusing solely on gender-related protected attributes, and (2) examining all protected categories together. For the gender-related protected attributes, we compare both baselines with our word-removal (MS2). Instead, we use only EAR as a baseline to mitigate all the protected categories. We chose MS2 because it has shown similar mitigation effectiveness while maintaining competitive performance, but it has higher flexibility across datasets than sentence-removal (MS1) (see <ref>).
§.§.§ Training the original occupation classifiers
The dataset contains 393,423 biographies for 28 occupations split into 255,710, 39,369, and 98,344 train, dev, and test examples. We fine-tuned a BERT-base uncased model for each occupation in one-vs-all settings (i.e., a binary model that predicts the occupation or not for each label). Due to the high imbalance of the dataset, we performed the experiments for the five most frequent classes (i.e., nurse, attorney, journalist, physician, and professor). The models were fine-tuned for 3 epochs.[We utilized inversely proportional class weights in the loss function due to the highly imbalanced training dataset.] The second column in Table <ref> shows the macro F1 score on the test set for each original model trained for four occupations.[Results for the professor occupation are not reported since the model does not rely on gender-related protected attributes.]
Those models are highly effective in classifying occupations, achieving a macro F1 score higher than 0.89. Still, such high performance could be achieved by heavily using protected attributes.
§.§.§ Identifying protected attributes in occupation classification
We used each original model to predict each occupation label over the entire test set and analyzed those texts using the Explainer to extract the most important words in predicting each occupation. Then, for each occupation, we annotated with GPT-3.5-Turbo the top 400 words to identify protected attributes. We measured the models' reliance on protected attributes related to (1) gender only, and (2) all categories (third and fourth columns in Table <ref>). All the models moderately rely on protected attributes. The nurse occupation is the most influenced, also by gender-related words, such as pronouns (e.g., `she', `her').
§.§.§ Training the mitigated models
With our framework, we applied the word-removal mitigation strategy (MS2) for each occupation on (1) gender-related protected attributes only, and (2) all the protected categories simultaneously. Therefore, we trained two different mitigated models for each occupation. We trained one mitigated model for each occupation with the INLP methodology <cit.>. INLP can mitigate the gender-related protected attributes but is not applicable to all the other categories, since the dataset contains the additional annotation only for gender. Specifically, we used the original pre-trained BERT weights as the encoder. Then, we multiplied the embedding representation of the [CLS] token from the last hidden layer for each input text by the projection matrix produced by the INLP technique (to ensure that the embedding representation does not encode information about gender). Finally, we added a classification layer on top. We fine-tuned only the classification layer while freezing the BERT encoder and the projection matrix, as suggested in <cit.>. For EAR <cit.>, we used the same BERT architecture, and we added its entropy-based regularization term to the loss function, with 0.001 as regularization strength. EAR does not allow the selection of which protected categories to mitigate, but it identifies by itself which words have a high attention entropy. Thus, we trained a single mitigated model, and we evaluated its reliance on gender and all protected categories separately. We used the mitigated models to predict the occupation labels over the entire test set, we extracted the most important 400 words for each occupation separately with the Explainer, and we annotated those words with the Identifier to evaluate if they exhibit a reduced reliance on protected attributes.
§.§.§ Results
Fairness. Columns 7 and 9 in Table <ref> show the number and percentage of protected attributes on which the mitigated models rely for gender only and all categories separately. The number of these words already present among the ones used by the original model is also indicated in curly brackets. The objective of each mitigated model is to reduce the reliance on protected attributes (columns 7 and 9) compared to the respective original model for each occupation (columns 3 and 4). Concerning gender (column 7), our framework is as effective as, or more effective than, the other techniques in mitigating the use of such protected attributes. For the nurse occupation, whose model shows the greatest dependence on gender-related protected attributes, our framework reduced the number of the most significant gender-related words from 11 to 2.
One of these two gender-related words was already significant in the original model. INLP obtained a similar mitigation effect, while EAR was less effective, with 4 gender-related protected attributes still important for the mitigated model. For the attorney and physician occupations, the original model exhibited a lower reliance on gender-related protected attributes, and our framework was able to fully mitigate the use of those words. On average, our framework reduces reliance on gender-related protected attributes by 79%. The results are similar when considering all protected categories (column 9). Our framework is always more effective than EAR in mitigating the use of protected attributes, except for the nurse occupation, where both techniques achieve the same mitigation effect. On average, our framework successfully reduces reliance on all categories of protected attributes by 43%.
Predictive performance. Columns 6 and 8 in Table <ref> show the macro F1 score achieved on the test set by each mitigated model. The objective of each mitigated model is to achieve similar predictive performance (columns 6 and 8) compared to the respective original ones (column 2). The mitigated models produced by our framework and the EAR technique achieve similar or sometimes even better performance than the original model (e.g., for the journalist and physician occupations). Therefore, they are able to mitigate the use of protected attributes without sacrificing predictive performance. Instead, the INLP technique is able to produce models with mitigated bias at the cost of significantly reducing their performance. Indeed, all the mitigated models produced by INLP experienced an average loss in predictive performance of 10%. This tendency to achieve fairness by making every advantaged group worse off, or by bringing better-performing groups down to the level of the worst off, is a common undesirable behavior of bias mitigation techniques <cit.>.
Summary. These results confirm the effectiveness of our framework in a different task where the protected attributes are strictly related to individuals. They show that our framework is more effective in achieving such an objective than previous bias mitigation techniques, while also providing the flexibility to select which protected category to mitigate. This flexibility enables the mitigation of only a subset of protected categories when some are required for the task at hand.
§ DISCUSSION
Our results show how the proposed framework could be exploited to train a new classifier that mitigates the use of protected attributes while maintaining competitive performance. We evaluated its sensitivity to each component and its effectiveness in mitigating a toxicity classifier. We also demonstrated its generalizability on models applied to out-of-distribution data (i.e., toxicity on company reviews) and two other tasks (i.e., sentiment analysis and occupation classification). Removal-based strategies (MS1, MS2) have been shown to be the most effective mitigation techniques. We also show that the LLM-based Identifier outperforms the crowdsourcing one, allowing the automation of the framework and a dynamic update of the dictionary of protected attributes.
§.§ Framework integration, versatility and complexity
Integration. Our framework can be integrated into existing NLP pipelines for two main purposes. (1) The Explainer and Identifier can be used to measure and evaluate existing NLP classifiers' reliance on protected attributes.
As a result, models can be quantitatively compared not only through predictive performance and traditional fairness metrics (e.g., conditional demographic disparity <cit.>) but also through the use of protected attributes in predictions. (2) The entire framework can be used to train a new model with reduced reliance on protected attributes and competitive performance. Our framework can also be integrated to complement other bias mitigation techniques acting in both the model and data spaces that require pre-defined dictionaries or lists of protected attributes or identity terms. The Explainer can improve existing techniques by pinpointing the specific words the model mostly uses for the classification rather than looking at all possible words in the corpus. The Identifier can annotate protected attributes covering a broader range of categories, as many protected attributes, such as disability or religious belief, were rarely covered by prior studies.
Versatility. Our framework is designed to achieve fairness through unawareness by mitigating the model's reliance on protected attributes in predictions. It addresses multiple categories simultaneously. Therefore, it can potentially address intersectional bias, i.e., bias that involves multiple sensitive attributes at once <cit.>. Moreover, it provides users with the flexibility to choose which categories to mitigate, thanks to the fine-grained annotation performed by the Identifier. Such flexibility is particularly useful when some categories are indispensable for, or central to, the prediction task. Through this, our framework can address domain-specific bias related to the classification task (e.g., gender-related protected attributes in occupation classification). Consequently, in scenarios where the inclusion of certain protected attributes is necessary for accuracy, our framework can still be utilized to effectively mitigate all other protected attributes that are not essential for the task.
Complexity. The execution time to produce the mitigated training dataset depends on many factors. Given a fixed model complexity, the execution times increase linearly with: (1) the unlabeled corpus size, (2) the number of most important words annotated, and (3) the training dataset size. Increasing the model's complexity results in a slight increase in the execution time of all components. Our framework yields a mitigated training corpus, necessitating extra training to produce the mitigated model. The (re-)training time varies based on model complexity and original data size. Techniques such as MS2, MS3, and MS5 maintain dataset dimensionality, resulting in comparable training times for both original and mitigated models. Conversely, MS1 reduces or MS4 increases dataset size, affecting training time accordingly. An example of execution time is reported in Appendix D.
§.§ Implications
Our research significantly contributes to the CSCW community by exploring human-AI collaboration, especially in decision-making contexts. For example, our tool could assist humans in comprehending the hiring decisions made by NLP classifiers and address biases in the hiring process <cit.>. Our work extends into content moderation, empowering the development of robust systems capable of effectively identifying and mitigating toxic content while ensuring fairness.
This aids humans in understanding crucial moderation aspects, encompassing significant words and considerations around protected attributes, to foster collaboration with machines and collectively arrive at fair decisions in content moderation. Our work has three main implications:
Fully-automated framework for compliant NLP classifiers. We release an open-source framework.[The code repository of our framework is available at <https://github.com/grecosalvatore/nlpguard>] By leveraging LLM annotations, it operates in a fully automated manner. The mitigated models exhibit enhanced fairness by significantly reducing their reliance on protected attributes while maintaining comparable or even better predictive performance. Other researchers can utilize our framework to address the compliance standards set by regulators, whether by mitigating already-trained models or by incorporating it into the development of future models. This contribution empowers the community to uphold ethical standards and ensure fairness in NLP applications. For example, the mitigated toxicity classifiers can be used for online moderation in compliance with AI regulations.
Protected attributes annotation. We advance the literature on protected attribute identification in NLP, traditionally done with static, manually pre-defined dictionaries covering only a subset of categories. Such dictionaries are difficult to keep up-to-date, especially with the emergence of ever-evolving language trends and slang. In our framework, we demonstrated a novel approach to dynamically identify protected attributes through straightforward prompts to an LLM. This enables the creation of a comprehensive and up-to-date dictionary covering all the protected categories simultaneously, which can be updated periodically, ensuring its relevance in real-time linguistic landscapes. In our research, we annotated 15,000 words, of which 540 were labeled as protected attributes. We release our dictionary in the GitHub repo. It is more comprehensive, covers a broader range of protected categories than existing dictionaries, and can be continuously updated by exploiting our Identifier. Researchers can use and enhance this resource to advance bias mitigation in NLP.
Humans vs. LLM annotations. Building upon a recent finding <cit.>, our study demonstrates that LLM annotation can outperform human-in-the-loop crowdsourcing annotations. Within our framework, we establish that LLM-based annotation of protected attributes is more cost-effective and scalable, and aligns closely with expert annotations. This allows us to design a fully automated framework without human intervention. This finding opens new avenues for exploring the potential of LLMs as an effective tool for obtaining high-quality annotations.
Machine-in-the-loop is more scalable but complicates prompt engineering, potentially leading to misunderstandings and noisy responses <cit.>. We conducted a preliminary experiment assessing context-aware annotation's impact on identifying protected attributes with an LLM. Repeating annotations with up to 10 context sentences, we found a 75% overlap between word-level and context-level annotations, with some contradictions, especially for long sentences. However, future research should explore this further across datasets.
Potential bias introduced by the Identifier. The annotation of protected attributes is a subjective task. Therefore, the Identifier can potentially introduce further sources of bias. In human-in-the-loop settings, crowdworkers should come from various backgrounds to have a broader contextual understanding during the annotation process. The distribution of the demographic backgrounds of crowdworkers can have an impact on the annotated protected attributes. It is important to ensure an equitable distribution of crowdworkers across all protected categories. However, this can often be challenging in practice. In the machine-in-the-loop setting, instead, the Identifier can introduce potential biases inherent in the LLM system adopted. The LLM can associate certain words with protected attributes based on stereotypes prevalent in the training data. However, addressing bias inherent in LLMs is an active area of research expected to resolve numerous current limitations, significantly enhancing the effectiveness of our framework. Finally, introducing specific definitions of protected attributes, such as the nine categories defined by the UK Equality Act 2010, might also inadvertently introduce biases or overlook certain nuances in both human annotators and LLMs.
Reliance on XAI techniques. Our framework relies on XAI techniques to identify the most important words for the model. Nevertheless, it is important to acknowledge that XAI methods have inherent limitations <cit.>, including challenges like effective aggregation and normalization methods, and the contextual variability of words across different explanations. These limitations may hinder the accurate extraction of the most important words used by the model, affecting the Identifier in the identification of protected attributes that the model relies on to make predictions. This issue can extend to the Moderator, impacting the mitigated protected attributes. In the future, improvements in the XAI field could make our framework even more effective. This is because our framework is flexible and can use any feature importance explainability method that can be applied to a pre-trained classifier (e.g., Integrated Gradients or SHAP), as explained in <ref>-A.
Defined protected categories. Our framework annotates protected attributes based on the nine categories outlined in the Equality Act 2010. These categories represent a significant step toward addressing discrimination and promoting equality. However, they might not cover all aspects of human diversity or potential discrimination. Having been formalized in 2010, the Act leaves some characteristics unaddressed. More than a decade later, initiatives are underway to broaden these categories for aspects like socio-economic status, health status, genetic heritage, and physical appearance <cit.>. Future extensions will further enhance the comprehensiveness of our framework in encompassing a broader range of protected categories.
Notably, for the LLM-based annotation, incorporating new categories is a straightforward process that involves modifying the prompt.
Mitigation with small training datasets or common protected attributes. In scenarios with small or imbalanced training data, or when protected attributes are common in most input texts (e.g., `he' and `she' in biographies), sentence removal (MS1) may be less effective due to the consequences of removing sentences containing protected attributes from already limited datasets. The frequent presence of common protected attributes in inputs may exclude most sentences, reducing the available training data significantly. This reduction can cause a significant and unacceptable decrease in model accuracy. Hence, alternative strategies, like word-removal (MS2), should be considered. MS2 has similar effectiveness in mitigating protected attributes while maintaining predictive performance, offering flexibility across datasets without suffering from these issues.
Fairness-privacy tradeoff. Our approach neither protects the privacy of individuals nor considers words or sentences to be private. Instead, it focuses on constraining the classification so that it does not rely on protected attributes. Just as loans cannot be granted by an automatic system that relies on racial background, natural language classification should not rely on protected attributes. The tradeoff between fairness and privacy becomes more pronounced when considering human- and machine-in-the-loop identifiers. Annotating protected attributes requires exposing sensitive textual information to individuals who are not necessarily trusted, or to LLMs. This creates a risk of privacy violations and potential harm to the individuals whose sensitive information is being used <cit.>. The proposed methodology therefore requires careful consideration of the privacy-fairness tradeoff. In future work, we plan to develop a context-aware framework that would allow us to identify and mitigate protected attributes based on their context by extracting the words and context information from the dataset, identifying protected attributes within each context, and applying mitigation strategies only to those sentences that contain protected attributes in similar contexts.
§ APPENDIX
The Appendix sections are organized as follows. Appendix A shows the LLM prompts used by the Identifier to annotate words related to protected attributes. Appendix B shows the list of trap words used in the MTurk study. Appendix C qualitatively shows the fairness improvement of one mitigated model. Appendix D discusses the framework's running time.
§.§ A. LLM prompts
This appendix reports the LLM prompts used by the Identifier component to annotate words related to protected attributes (as described in <ref>-B2). A first prompt (Figure <ref>) provides the protected categories and their definitions. A second prompt (Figure <ref>) suggests some links that provide more information about the protected categories. Then, for each word, the LLM is asked to: (1) classify the word into one of the protected categories or none of them; (2) provide a reliability score in the range [0, 100]; and (3) provide an explanation. Figure <ref> shows the response provided by GPT-3.5-Turbo for the annotation of the word `homosexual', classified with the category sexual orientation and a score of 100/100.
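The per-word query can be sketched as follows. This is a simplified sketch assuming the legacy (pre-1.0) OpenAI chat-completions interface; the actual prompts are those shown in the figures above, and client details may differ across library versions:

import openai  # legacy (<1.0) client interface assumed

def annotate_word(word, category_definitions):
    """Ask GPT-3.5-Turbo for a protected category, a 0-100 reliability
    score, and an explanation for a single candidate word (sketch)."""
    messages = [
        {"role": "system", "content": category_definitions},
        {"role": "user", "content":
            f"Classify the word '{word}' into one of the protected "
            "categories above, or 'None of the above'. Reply with: "
            "category, reliability score in [0, 100], explanation."},
    ]
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo", messages=messages,
        temperature=0.3)  # 0.3 limits creativity, as in our setup
    return response["choices"][0]["message"]["content"]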
The LLM-based Identifier is also able to annotate proxy words that, although not directly and strictly related to a protected attribute, can be used by the model to infer the categories. An example is shown in Figure <ref>, where the word `headscarf' is annotated as related to religion and belief.
§.§ B. Trap words
Table <ref> shows the list of trap words used in the MTurk study for the Identifier in the human-in-the-loop setup (<ref>). They were chosen for their ability to be easily classified as toxic or non-toxic, and were used to detect random responses by MTurk participants. For the non-toxic (toxic) trap words, we expected MTurk participants to select a score of 1 or 2 (4 or 5) on the Likert scale. Participants were considered unreliable if they did not meet those expectations, and their assessments were discarded from our results.
§.§ C. Fairness improvement of a mitigated model: a qualitative analysis
Figure <ref> and Figure <ref> show the fairness improvement of a mitigated classifier for toxicity predictions in the in-distribution experiment (discussed in <ref>). They show the predictions on the same texts discussed before (see Figure <ref> and Figure <ref> in <ref>) made by one mitigated model (M_2^*). The first three sentences (misclassified by the original model) are no longer predicted as toxic, as the model no longer relies extensively on the words `black', `homosexual', and `gay' for toxicity predictions. The fourth sentence is still correctly predicted as toxic. However, the prediction is influenced by words such as `hate', `fucking', and `shitty', and not by `black' anymore. These results show that the removal-based mitigation strategies (MS1, MS2) are highly effective in reducing the usage of protected attributes in classification in just one mitigation round.
§.§ D. Framework's running time analysis
The execution time to produce the mitigated training dataset depends on many factors. Given a fixed model complexity, the execution times increase linearly with: (1) the unlabeled corpus size, (2) the number of most important words annotated, and (3) the training dataset size. Increasing the model's complexity results in a slight increase in the execution time of all components. We report the execution time for the mitigation of the BERT model for the nurse occupation classification (discussed in <ref>) with Integrated Gradients as the Explainer and GPT-3.5-Turbo as the Identifier, using a single Nvidia RTX A6000 GPU. We used the test set as the unlabeled corpus, containing approximately 98.3K sentences. First, the Explainer runs the original classifier on each input text of the unlabeled corpus. With a batch size of 512, it takes 846 seconds (0.01 seconds per text). Then, the Explainer generates the explanations within each sentence for all the texts predicted with the nurse occupation, in this case 4,071, and aggregates the scores to compile the overall list of the most important words. This process is completed in 725 seconds (0.18 seconds per text). Next, the Identifier annotates the most important 400 words, running in 534 seconds (1.3 seconds per word). Finally, producing the mitigated training dataset with the word-removal mitigation strategy (MS2) of the Moderator on the 255.7k examples in the training set requires 29 seconds. The total execution time is 2,134 seconds (35 minutes). Our framework produces a mitigated training corpus, requiring an additional training phase to generate the mitigated model.
The (re-)training time depends on the model's complexity and the original training data size. Some mitigation techniques (MS2, MS3, MS5) maintain the training dataset's dimensionality, resulting in equivalent training times for the mitigated and original models (1.5 hours in the previous example). In contrast, techniques like MS1 decrease or MS4 increase the dataset size, leading to corresponding changes in training time.
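As a quick sanity check, the stage timings reported above add up as expected (numbers copied from Appendix D; variable names are ours):

# Sanity check of the Appendix D timings.
classify = 846   # s, ~0.01 s/text over the 98.3K-sentence unlabeled corpus
explain  = 725   # s, ~0.18 s/text over the 4,071 texts predicted as nurse
identify = 534   # s, ~1.3 s/word over the 400 annotated words
moderate = 29    # s, MS2 over the 255.7k training examples
total = classify + explain + identify + moderate
print(total, total / 60)  # 2134 s, i.e., about 35.6 minutes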
Adaptive Autopilot: Constrained DRL for Diverse Driving Behaviors
Dinesh Cyril Selvaraj, Christian Vitale, Tania Panayiotou, Panayiotis Kolios, Carla Fabiana Chiasserini and Georgios Ellinas
===========================================================================================
§ ABSTRACT
In pursuit of autonomous vehicles, achieving human-like driving behavior is vital. This study introduces adaptive autopilot (AA), a unique framework utilizing constrained-deep reinforcement learning (C-DRL). AA aims to safely emulate human driving to reduce the necessity for driver intervention. Focusing on the car-following scenario, the process involves (i) extracting data from the highD natural driving study and categorizing it into three driving styles using a rule-based classifier; (ii) employing deep neural network (DNN) regressors to predict human-like acceleration across styles; and (iii) using C-DRL, specifically the soft actor-critic Lagrangian technique, to learn human-like safe driving policies. Results indicate effectiveness in each step, with the rule-based classifier distinguishing driving styles, the regressor model accurately predicting acceleration, outperforming traditional car-following models, and C-DRL agents learning optimal policies for human-like driving across styles.
§ INTRODUCTION
In recent years, the automotive industry has experienced a digital transformation, enhancing vehicles with sensing devices, electronic control units, and advanced driver assistance algorithms, including features like blind-spot detection and adaptive cruise control (ACC) <cit.>. This evolution aims to improve safety, traffic efficiency, and the overall travel experience <cit.>. However, consumer adoption relies on trust in automated systems and considerations on legal issues <cit.>. The acceptance of these systems is also influenced by their ability to emulate human-like driving styles <cit.>. Toward this, distinct driver categories, like aggressive drivers prioritizing smaller gaps with abrupt maneuvers and conservative drivers favoring larger gaps with smoother behavior <cit.>, require tailored controllers. Current car-following controllers <cit.>, despite attempts to differentiate among driving styles, depend on predefined parameters, lacking real-world adaptability. The remedy lies in data-driven controllers utilizing real-world data, emulating diverse driving styles, and, theoretically, having the potential to reduce disengagement rates <cit.>. In this context, machine learning (ML) plays a pivotal role in developing models capable of making informed decisions by analyzing complex and multi-variate data. Among ML paradigms, reinforcement learning (RL) is well-suited for this intricate task <cit.>, as RL agents learn by interacting with the environment through a trial-and-error mechanism, aiming to maximize cumulative rewards. However, traditional RL agents often overlook safety constraints critical for real-world applications like autonomous driving. C-DRL addresses this limitation as, unlike traditional RL, it incorporates constraints through cost functions, ensuring safe driving by minimizing them during the learning process <cit.>. Building on C-DRL, our work introduces the adaptive autopilot (AA) framework. This framework employs a C-DRL approach to effectively accommodate diverse driving styles by integrating rewards based on a human-like acceleration predictor, alongside constraints to enforce a minimum headway among vehicles.
The three main steps of the framework are: (i) categorizing real-world driving data from the highD dataset <cit.> into aggressive, normal, and conservative styles, (ii) training deep neural network-based regressors to predict human-like vehicle acceleration tailored to each driving behavior, and (iii) implementing the C-DRL framework to take human-like safe actions. The trained agents, corresponding to each driving style and based on the soft actor-critic Lagrangian algorithm <cit.>, are validated using real-world driving data from the highD dataset. Results demonstrate the framework's ability to drive the vehicle in line with corresponding human drivers under different styles, with the headway trend highlighting the prioritization of safety constraints. To summarize, the main contributions of this work are as follows: (i) Real-world driving data is classified into aggressive, normal, and conservative styles using a rule-based approach. Separate neural network-based regressors are then trained for each style to predict human-like vehicle accelerations. (ii) A novel C-DRL framework is introduced, adapting vehicle acceleration to different driving styles by (safely) mimicking human drivers. This is achieved through minimizing the difference between C-DRL and predicted human actions at each step. Further, a headway-based safety constraint is imposed during training, where multiple real-world driving traces are used to enhance generalization. (iii) Performance results demonstrate the proposed framework's ability to adapt to diverse driving styles while adhering to safety constraints. In the remainder of the paper, Sec. <ref> describes related research, Sec. <ref> introduces the AA framework, Sec. <ref> discusses the obtained results, and Sec. <ref> presents concluding remarks. § RELATED WORK While commercial ACC systems were introduced in the early 2000s to enhance safety and driving experience <cit.>, they still offer limited customization options with few user-defined parameters like desired gap and velocity. The rigidity of these systems hampers their ability to accurately replicate human driving styles, leading to reduced trust and increased driver intervention, thereby affecting safety benefits. Various research directions <cit.> have been explored to address these limitations and enhance ACC systems. One research direction involves car-following (CF) models to provide optimal control actions in response to lead vehicle movements. Relevant models include the Gipps model <cit.>, which prioritizes a safe inter-vehicle distance, incorporating human factors like reaction time and comfort. The intelligent driver model (IDM) <cit.> considers desired velocity and inter-vehicle distance, using different parameter values for various driving styles <cit.>. However, these CF models struggle to accurately represent real-world driving behavior due to oversimplification, and their parameters are calibrated for traffic scenarios and safety rather than human-like driving behavior <cit.>. Our framework, compared to IDM, employs a data-driven approach demonstrating safe and human-like acceleration behavior across different styles. Another research direction explores data-driven models, optimizing vehicle control using real-world mobility traces. For example, <cit.> employs particle swarm optimization with bi-directional long short-term memory (PSO–Bi–LSTM) to enhance IDM model parameters and predict human driving behavior. 
However, once learned, IDM's parameters remain fixed, limiting its ability to accurately model driving behavior. Other works use traditional and recurrent neural networks (NNs) for acceleration/velocity predictions <cit.>. Such NNs face challenges in modeling personalized driver behavior due to their dependence on the training data. Similarly, DRL has been utilized <cit.> for improved car-following behavior, emphasizing safety, traffic efficiency, and comfort. Nevertheless, these DRL approaches focus solely on generic driving behaviors, lacking a human component in their training process, which is needed to achieve human-like driving behavior. Finally, the offline human-in-the-loop RL paradigm has gained popularity for enhancing RL frameworks' adaptability by incorporating the human factor <cit.>. This approach does not require real-time human intervention but leverages human experience to shape reward functions. For instance, <cit.> uses the Shanghai naturalistic driving study data to mimic human-like driving behavior by designing reward functions to reduce errors between simulated and empirical values in spacing and velocity. It outperforms traditional NN models in capturing driver behavior, although safety concerns arise as aggressive human behaviors are replicated without safeguards. Additionally, <cit.> employs behavior cloning, an imitation learning method, to achieve human-like driving behavior. A major drawback of imitation learning is the accumulation of errors over time, leading to adverse control actions. To the best of our knowledge, our work is the first to present a comprehensive human-in-the-loop C-DRL framework designed to adapt vehicle driving behavior across diverse driving styles while enforcing safety constraints.
§ ADAPTIVE AUTOPILOT FRAMEWORK
Our framework addresses three interconnected problems: (i) identifying and classifying the driver's style using a rule-based approach based on headway, lead vehicle relative velocity, and acceleration (Sec. <ref>); (ii) training a neural network-based regressor to predict human-like control actions, particularly acceleration rates, of the same driving style (Sec. <ref>); (iii) implementing a C-DRL framework for the controller, considering vehicle states as input and ensuring safety while minimizing the difference between the control action and the human-like acceleration predicted by the regressor (Sec. <ref>).
§.§ Rule-based Classifier
Inspired by <cit.>, we categorize driving styles into aggressive, normal, and conservative. Such classification typically utilizes indicators related to longitudinal movements (speed, acceleration, headway, relative velocity), as well as steering input and lateral acceleration <cit.>. Nevertheless, given the focus on car-following scenarios, only longitudinal control-related indicators are employed in this work. Designing a model to accurately classify a driver's entire data trace into a unique driving style is challenging due to potential variations within a driver's behavior. For instance, aggressive drivers may exhibit normal or conservative driving at times. In this work, each control action of the driver is tagged with a specific driving behavior. Subsequently, the ratio of each tagged behavior across the entire trace is calculated to categorize drivers as aggressive, normal, or conservative. Driver actions are tagged with a specific driving behavior based on a rule-based approach, which utilizes headway trends as a key factor in differentiating driving styles.
As suggested by <cit.>, aggressive drivers aim to maintain a headway of 1 second or below, normal drivers aim for headways of around 1.5 seconds, while conservative drivers aim for a headway of 1.8 seconds and above. Based on longitudinal indicators, the classifier's objective is to tag the driver's intention, analyzing how the driver's action will change the headway and toward which of the three goal headways it will lead in the long term. Considering these aspects, Fig. <ref> outlines the hierarchical rule-based classification approach employed in this work, where the leaf nodes represent the assigned driving style. Note that only a partial classification is presented in Fig. <ref> (the case for headway less than or equal to 1 sec). Similar classifications have also been created for headway between 1-1.5 sec and for headway greater than 1.5 sec but are not presented here due to lack of space. Specifically, the classifier considers information related to both lead and ego vehicles at a given time t: X(t) ={ϑ(t), ν(t), ẍ_ego(t), ẍ_lead(t)}, representing headway, relative velocity, ego vehicle acceleration, and lead vehicle acceleration, respectively, to classify the behavior y(t) ∈{Aggressive, Normal, Conservative}. Headway and relative velocity are formulated as:
ϑ(t) = Δ x_lead(t)/ẋ_ego(t), ν(t) = ẋ_lead(t)-ẋ_ego(t),
where Δ x_lead(t)=x_lead(t)-x_ego(t) is the relative distance between lead and ego vehicles, and ẋ_lead(t) and ẋ_ego(t) represent the lead and ego vehicle's speed, respectively. Among the input features, headway serves as the primary criterion for splitting the data into three leaf nodes. Subsequently, for each leaf node, relative velocity becomes a crucial factor in anticipating potential headway changes in subsequent time steps, acting as a secondary criterion for data segmentation. Considering the current headways and relative velocities, the driver's action, specifically the applied acceleration, is categorized into one of the three driving styles. Acceleration differences between the lead and ego vehicles enable an understanding of future changes in relative velocity before they manifest in the data points. This ability facilitates the identification of future achieved headways. Matching future achieved headways with desired headways (as per <cit.>) provides a straightforward means to categorize driving actions.
§.§ Human-like Action Predictor
The human-like action predictor operates as a regressor model, using relevant input data to forecast the vehicle's acceleration. Its purpose is to learn a non-linear function that approximates the relationship between the input data and the vehicle's next acceleration. To circumvent relying on past human actions, which might be unavailable in autopilot scenarios such as the one considered in the AA framework, the regressor incorporates both historical and current data related to the lead vehicle, while utilizing only the present ego vehicle data as input. The input dataset consists of the following set of observations: X(t)={ẍ_lead(t-2Δ t), ẍ_lead(t-Δ t), ẍ_lead(t), ẋ_lead(t-2Δ t), ẋ_lead(t-Δ t), ẋ_lead(t), ẋ_ego(t), ϑ(t)}, to obtain the prediction ŷ(t)=ẍ_preg(t) corresponding to the ego vehicle acceleration y(t)=ẍ_ego(t), where t, t-Δ t, and t-2Δ t represent the present and two past time instants, respectively. In this work, a DNN-based regressor is utilized to predict human-like acceleration values; a minimal sketch of its structure is given below.
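The sketch maps the eight observations in X(t) to a single acceleration value; the layer widths and learning rate are illustrative assumptions, not the values used in the paper:

import torch
import torch.nn as nn

class AccelRegressor(nn.Module):
    """Maps the 8 observations in X(t) (lead accelerations and speeds at
    t-2Δt, t-Δt, t; ego speed; headway) to a predicted ego acceleration."""
    def __init__(self, in_dim=8, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, x):
        return self.net(x).squeeze(-1)

# One model is trained per driving style, with the MAE objective below.
model = AccelRegressor()
loss_fn = nn.L1Loss()  # mean absolute error
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)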
The highD dataset <cit.> serves as the training dataset, following the segmentation into the three driving styles mentioned above, achieved through the rule-based classifier outlined in Sec. <ref>. Hence, a separate model is obtained for each driving style. Throughout the training process, the model is optimized to minimize the mean absolute error (MAE) loss function: L_mae=1/N∑_i=1^N|y_i - ŷ_i|, where N is the number of observations used for MAE loss minimization, ŷ_i is the predicted value of the i^th observation, and y_i is the actual value of the i^th observation.
§.§ C-DRL Framework
We now introduce our C-DRL framework, inspired by a previously proposed algorithm <cit.>. The C-DRL framework utilizes pertinent vehicle data as input to decide the vehicle's longitudinal control action, focusing specifically on vehicle acceleration. The applied control action guides the vehicle, earning rewards based on its ability to emulate the desired human-like driving behavior. Moreover, the framework integrates safety constraints to guarantee that the applied control actions maintain a safe distance between vehicles. Figure <ref> provides an overview of the proposed AA framework.
Background: Constrained RL extends traditional RL by introducing constraints on the actions taken by the agent. C-RL is formalized as a constrained Markov decision process (CMDP) <cit.>, an extension of the standard MDP framework. In this form, C-RL is characterized by the tuple (𝒮, 𝒜, 𝒫, ℛ, 𝒞, b, γ), representing the state space, action space, transition probabilities, reward, cost function, safety threshold, and discount factor, respectively. The goal of C-RL is to solve the CMDP by learning an optimal policy π:𝒮→𝒜 that maximizes the expected cumulative discounted reward while satisfying the constraints. The problem addressed by C-RL is:
max_π 𝔼_(s(t), a(t)) ∼ρ_π[∑_t γ^t ℛ(s(t), a(t))]
subject to: 𝔼[∑_t γ^t 𝒞(s(t), a(t))] ≤ b,
where ρ_π denotes the trajectory distribution following policy π, and ℛ(s(t), a(t)) and 𝒞(s(t), a(t)) represent (resp.) the reward and cost functions associated with state s(t) and action a(t) at a specific time step t. As modeling the state transition probabilities for intricate problems can be challenging, in C-RL a model-free approach is typically used, with the relationship between action and reward/cost implicitly learned by interacting with the environment. In constrained optimization problems, such as Eq. (<ref>), C-RL can utilize an equivalent formulation with Lagrangian multipliers (λ) for optimization. The Lagrangian's saddle point is determined through iterative gradient ascent steps for the policy function π and gradient descent on the Lagrangian multipliers λ <cit.>. Notably, the gradient step related to λ emphasizes the loss function associated with the constraint. If the constraint is violated, the gradient update increases the multiplier's value, prioritizing the constraint over the reward function, and vice versa. C-DRL advances upon C-RL by incorporating deep neural network-based function approximators to model the policy function π(a|s; θ), where θ denotes the neural network parameters. This augmentation significantly improves the C-RL framework's capability to navigate intricate, high-dimensional real-world environments. In this study, we specifically adopt the soft actor-critic Lagrangian (SAC-Lagrangian) technique <cit.> as the C-DRL methodology to achieve the desired outcome.
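To make the primal-dual mechanics concrete, the dual (multiplier) update used alongside the SAC policy step can be sketched as follows; this is a conceptual sketch in which step sizes and names are illustrative, and the SAC actor-critic updates are assumed to exist separately:

import torch

# Dual variable, kept non-negative by optimizing its log.
log_lam = torch.zeros(1, requires_grad=True)
lam_optim = torch.optim.Adam([log_lam], lr=3e-4)

def dual_step(episode_cost, budget_b):
    """Gradient descent on the multiplier's loss: λ grows when the
    accumulated cost exceeds the budget b, prioritizing the constraint,
    and shrinks otherwise."""
    lam = log_lam.exp()
    lam_loss = -lam * (episode_cost - budget_b)  # gradient pushes λ up if cost > b
    lam_optim.zero_grad()
    lam_loss.backward()
    lam_optim.step()
    return log_lam.exp().detach()  # λ then weights the cost in the policy loss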
States and Action Space: The C-DRL state, 𝐬(t) ∈𝒮, represents the vehicle state at any time t and is given by: 𝐬(t)={ẍ_lead(t), ϑ(t), ẋ_ego(t), ν(t), a(t-Δ t), ψ(t)}, where a(t-Δ t) denotes the control action taken by the C-DRL framework at time t-Δ t and ψ(t) represents the value obtained from the indicator cost function associated with the safety constraint. The cost function is formulated as: ψ(t)=I(ϑ(t)<ω), which equals one whenever the headway falls below the safety threshold ω for the distance between the vehicles. As mentioned earlier, the action space, a(t) ∈𝒜, corresponds to the vehicle's acceleration (a continuous variable bounded within the range [-4, 4] ms^-2). Additionally, consecutive acceleration values are restricted to vary by no more than ±0.24 ms^-2 <cit.>. Reward Components: The reward signal is a scalar value provided by the environment after each action, offering insights into the agent's performance concerning the framework objectives. Our reward function comprises two key components: (i) human similarity reward, which assesses the disparity between C-DRL control actions and human-like actions predicted by the regressor model; and (ii) comfort, ensuring smooth acceleration changes between time steps. The trends of these reward components are illustrated in Fig. <ref>. Formally, the reward is expressed as: r(𝐬(t),a(t)) = r_h(𝐬(t),a(t)) + r_c(𝐬(t),a(t)), where r_h(𝐬(t),a(t)) and r_c(𝐬(t),a(t)) represent the human similarity and comfort rewards at time step t, respectively. The reward components are further detailed below. Human Similarity Reward Component: This reward component assesses the similarity between the driving behavior of the vehicle and that of a human. It quantifies the disparity between the acceleration values predicted by the DNN regressor and the ones applied by C-DRL, encouraging the agent to minimize this difference. Specifically, the function offers a reward that is maximum (+1) for zero error and decreases significantly as the difference between predictions increases. The reward formulation incorporates a tanh function for this purpose: r_h(𝐬(t),a(t)) = 2 · F_h+ 1, F_h = tanh(-2 ·ξ(t)), ξ(t) = |a(t) - ẍ_preg(t)|. Comfort Reward Component: Sudden acceleration changes can lead to passenger discomfort. To address this, the comfort reward component considers the rate of change of acceleration with time, known as jerk (j(t)). The reward function is designed to decrease gradually as the absolute jerk value increases, ranging from a maximum reward value of 0 to a minimum of -1. This desired reward trend is crafted using a curve-fitting function, specifically a four-parameter logistic (4PL) model, as illustrated in Fig. <ref> (right), depicting the comfort reward trend. Simulation Environment: A straightforward car-following simulation environment is created to replicate vehicle movements, enabling the C-DRL agent to learn the desired behavior. Utilizing the highD dataset, the simulation environment incorporates movements for the lead vehicle, while simulating the ego vehicle's motions using a linear motion model. The C-DRL agent's predicted acceleration serves as the control action to drive the vehicle, with a set sampling interval of Δ t=80 ms. The ego vehicle movements follow: ẋ_ego(t+Δ t)=ẋ_ego(t)+ a(t) Δ t, x_ego(t+Δ t)=x_ego(t) + ẋ_ego(t)Δ t + 0.5 a(t) Δ t^2.
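The fully specified pieces of the environment, namely the human-similarity reward, the 4PL-shaped comfort reward, and the linear motion model, can be sketched as follows. The 4PL midpoint and slope below are illustrative guesses, since the fitted coefficients are only shown graphically in Fig. <ref>.

```python
import numpy as np

DT = 0.08  # sampling interval used by the simulator (80 ms)

def human_similarity_reward(a_cdrl, a_preg):
    """r_h = 2*tanh(-2*|a - a_preg|) + 1: equals +1 at zero error and
    decays quickly as the action departs from the regressor prediction."""
    return 2.0 * np.tanh(-2.0 * abs(a_cdrl - a_preg)) + 1.0

def comfort_reward(jerk, midpoint=1.0, slope=2.0):
    """4PL-shaped penalty on |jerk|, running from 0 (no jerk) down to -1
    for large jerk; midpoint/slope are assumptions, not the fitted values."""
    return -1.0 + 1.0 / (1.0 + (abs(jerk) / midpoint) ** slope)

def step_ego(x, v, a, dt=DT):
    """Linear motion model of the simulation environment."""
    return x + v * dt + 0.5 * a * dt ** 2, v + a * dt
```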
Learning Process: In C-DRL, the exploration-exploitation process is crucial, involving a balance between trying new actions and exploiting high-reward actions. Initial training stages necessitate thorough exploration of the action space to discover the actions maximizing cumulative rewards. However, if action values are restricted, hindering exploration, the agent may fail to identify actions leading to higher rewards. To mitigate this, we adopt a curriculum learning approach <cit.>, gradually increasing difficulty during training. Initial episodes allow unrestricted changes in subsequent actions, with limits introduced once the agent learns the desired behavior. Training utilizes multiple driver traces from the highD dataset for each driving style to ensure generalization, and it continues until satisfactory and stable rewards are achieved. § PERFORMANCE EVALUATION In this section, we introduce the dataset used for training and evaluating our proposed framework, and present the performance of our solution. §.§ Dataset This work utilizes the highD dataset, comprising vehicle trajectories recorded via a drone on German highways at six locations, each covering 420 meters <cit.>. The dataset consists of 110,500 vehicle trajectories across 60 recordings, with an average recording length of 17 minutes, encompassing free-driving, car-following, and lane-changing events. To focus on the car-following scenario, vehicle traces were filtered based on criteria including duration (minimum 10 s of data), absence of lane changes, consistent lead vehicle, minimum speed (6 ms^-1), and vehicle type classification. The sampling interval used in this work is Δ t=80 ms. Among the 60 recordings, 32 were selected for pre-processing, resulting in approximately 2.6 million rows of data, balancing accuracy in training regressor models with computational efficiency. §.§ Performance Results: Rule-based Classifier Here we showcase the performance of the rule-based classifier on the highD dataset. Based on the rules presented in Sec. <ref>, the data points are classified into three categories: Aggressive, Normal, and Conservative. Figure <ref> depicts the key characteristics, in terms of longitudinal acceleration and time headway, of each category using this rule-based setup. Overall, aggressive, normal, and conservative driving behaviors comprise 924k, 1.4M, and 863k data points, respectively, with some data double-tagged because certain behaviors coincide with more than one driving style. The performance results of the classifier are consistent with expectations, showing large differences between driving styles. Looking at the probability density function (PDF) (Fig. <ref> (top)) of the applied longitudinal acceleration, conservative drivers tend to brake to increase the distance from the lead vehicle. Specifically, the PDF mode is at -0.2 ms^-2 and 85% of the conservative actions represent a braking action. On the contrary, aggressive drivers aim to close the gap with the lead vehicle as much as possible (the PDF mode is at 0.2 ms^-2, and 73% of the aggressive actions represent acceleration). Normal driving follows a hybrid pattern, with the PDF mode at around 0 ms^-2. Further, although classification happens based on the driver's intention to change the headway, the headway PDF plots (Fig. <ref> (bottom)) show that the mode behavior of a specific driving style corresponds to the ones envisioned in <cit.> (i.e., the PDF mode of aggressive drivers just below the 1-s mark, of normal drivers between 1 and 1.5 s, and of conservative drivers just before the 2-s mark).
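For concreteness, the following sketch condenses the classification logic of Sec. <ref> into code. It is a deliberate simplification of the full decision tree of Fig. <ref>: the projection horizon and deadband are hypothetical values, and only the headway targets (1 s, 1.5 s, 1.8 s) come from the cited literature.

```python
AGG_HW, MID_HW, CONS_HW = 1.0, 1.5, 1.8  # goal headways from the literature (s)

def classify_action(headway, rel_vel, acc_ego, acc_lead,
                    horizon=3.0, deadband=0.5):
    """Tag a driving action by projecting where the gap is heading,
    from the relative velocity and its trend (lead minus ego acceleration)."""
    gap_trend = rel_vel * horizon + 0.5 * (acc_lead - acc_ego) * horizon ** 2
    if headway <= AGG_HW:
        # Already at an aggressive headway: holding or closing the gap is
        # aggressive; clearly opening it points to a calmer style.
        return "Aggressive" if gap_trend <= deadband else "Normal"
    if headway < CONS_HW:
        if gap_trend < -deadband:
            return "Aggressive"      # closing toward the sub-1-s regime
        if gap_trend > deadband:
            return "Conservative"    # opening toward the 1.8-s regime
        return "Normal"
    # Large headway: only a clearly closing action is non-conservative.
    return "Normal" if gap_trend < -deadband else "Conservative"
```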
§.§ Performance Results: Acceleration Prediction This section discusses the performance of the regressor models corresponding to the three driving behaviors. As mentioned earlier, we employed a traditional DNN regressor to train the models for predicting vehicle acceleration based on the input data. Using the categorized data obtained from the rule-based classifier, each driver behavior dataset was divided into training (65%), validation (15%), and testing (20%) sets. Given the varied nature of the input, we standardized the input features to zero mean and unit standard deviation to facilitate training. Also, to mitigate overfitting, an early stopping technique was employed to halt training if the error improvement on the validation set was less than 0.001 for five epochs. Note that this section presents the best configuration for each model after extensive hyperparameter trials (Tab. <ref>). MAE was used to compare the results obtained during inference. In Tab. <ref>, the results obtained by the proposed DNN regressor model are compared with the well-known car-following algorithm, the Intelligent Driver Model (IDM), with parameters suggested in <cit.>. Additionally, the IDM model was enhanced to match the highD dataset as closely as possible: the fixed parameters used in <cit.> were modified to obtain the best possible fit with the dataset's data points, i.e., they were selected so that the MAE was minimized (IDM-MAE). The results obtained demonstrate that the proposed regressor models outperform the car-following algorithms for all driving styles. Further, analyzing the cumulative distribution function (CDF) of the MAE (not presented in the paper due to space limitations), the strong performance of the DNN predictor is also showcased by the fact that the absolute prediction error, i.e., |ẍ_ego(t)-ẍ_preg(t)|, is less than 0.21 ms^-2 in 80% of the data points for all driving styles. To assess the predictor's performance on individual driver traces, Fig. <ref> displays three traces representing aggressive (top), normal (middle), and conservative (bottom) driving behaviors. To evaluate their long-term performance, for both the DNN predictor and the benchmarks, the vehicles in the traces are moved by applying the predicted accelerations using the motion model in Eq. <ref>. That is, while the lead vehicle traces correspond to those in the highD dataset, the ego vehicle trace disregards the driver's applied accelerations and instead incorporates the acceleration predicted by the DNN and the benchmarks. Despite some deviations, the DNN-based predicted acceleration closely matches the actual driver behavior and outperforms existing benchmarks. The slight discrepancies observed can be attributed to the DNN predictor leveraging data from thousands of drivers to learn how to predict acceleration in specific situations, while individual driver styles may vary slightly even within the same driving behavior. Although space constraints prevent us from presenting it, applying the wrong driving style's DNN regressor in Fig. <ref> would yield significantly different results, with vehicle headways consistently diverging from the true values experienced by the drivers. §.§ Performance Results: Human-like Driving This section discusses the results of the C-DRL models, which aim to mimic human behavior safely. One model was trained for each driving style, with hyperparameter values similar to those in SAC-Lagrangian <cit.> (except for the differences listed in Tab. <ref>).
Diverse traces were used for each driving style to ensure that generalized behavior is learned. For each episode, a driving trace was randomly selected from those chosen for training. The safety objective is to maintain a minimum headway of ω=1 s between the two vehicles. Hence, the final accelerations decided by the C-DRL agent must ensure that the two vehicles are never closer than this minimum headway. To achieve this objective, during training, the agent primarily focuses on finding a policy that minimizes the cost function representing the safety constraint. After ensuring the safety constraint is met, the agent tries to maximize the reward function, aiming to find a comfortable acceleration profile that mimics human-like driving behavior. Figure <ref> presents the evolution during training of the rewards and the weights assigned to the cost (λ) and reward (1-λ) functions for the three agents. Specifically, the first row shows the reward trend of the evaluation episodes during training (executed every 100 training episodes), which is used to assess the training progress. As depicted, the reward grows and stabilizes as training progresses. The normal and conservative agents achieve higher rewards than the aggressive agent because the aggressive agent prioritizes the safety-constraint cost function before maximizing rewards (Fig. <ref> (second row)). The conservative and normal driving behavior agents give more importance to the rewards, as the corresponding agents would not breach the safety constraint (according to their driving style). However, for aggressive drivers, who would naturally drive the headway below the 1-s mark, the optimal cost function weight λ is not equal to zero. This indicates that reward maximization alone would fail to respect the safety constraint, which is undesirable. Hence, the final agent trades reward maximization for enhanced safety. To test each agent's performance, we selected the best-performing model from the evaluation episodes for the inference phase. During inference, the agents are tested on driving traces that were not used during training. Figures <ref>-<ref> illustrate the agents' performance across the three driving styles, each for a specific trace (with similar results obtained for all tested traces). Figure <ref> demonstrates that the aggressive agent could safely drive the vehicle, maintaining the headway around the 1-s mark without violating the safety threshold while also imitating the acceleration predicted by the regressor model whenever possible. When the headway drops toward the safety threshold, the agent starts braking smoothly to maintain a safe distance from the lead vehicle. Once the agent has successfully satisfied the safety constraint, its focus shifts to mimicking the driver's behavior, as depicted by the acceleration trend (Fig. <ref> (top right)), which closely follows the DNN regressor model predictions. This is also confirmed by the human similarity reward trend (Fig. <ref> (bottom left)). To emphasize the importance of the C-DRL approach, we compared the proposed framework with a non-constrained DRL technique (referred to as Ego_DRL), where we excluded the cost indicator function during training, confirming that without the safety constraint, the aggressive driving style could lead to unsafe headways, potentially resulting in dangerous situations.
In the normal and conservative driving styles, where the safety constraint's role is not crucial, the agents effectively mimicked human driving behavior by following the DNN regressor-predicted accelerations (Figs. <ref> and <ref>). To quantitatively evaluate the results, we calculated the root mean square error between the regressor-predicted human-like acceleration and the C-DRL predicted acceleration, resulting in error values of 0.282, 0.043, and 0.013 ms^-2 for aggressive, normal, and conservative driving behavior, respectively. As anticipated, aggressive driving behavior yields a higher error due to the safety constraints, while errors for normal and conservative driving behaviors remain minimal. Additionally, it is noteworthy that, although slight discrepancies exist between the DNN regressor and the actual human-applied acceleration, the overall headway profiles generated by the C-DRL agents consistently align closely with those observed in the dataset. § CONCLUSION We presented an adaptive autopilot framework utilizing C-DRL to drive vehicles similarly to human drivers, adapting to diverse driving styles. The adaptive autopilot framework tackles three interconnected sub-problems: identifying driving styles using real-world data through a rule-based approach, predicting human-like acceleration across different driving styles using a DNN regressor model, and proposing a C-DRL approach to drive vehicles while considering safety constraints and mimicking human-like behavior. Results indicate that the proposed framework can follow human-like driving behavior as closely and as safely as possible, with the regressor model outperforming state-of-the-art IDM models in predicting acceleration. Hence, the comfortable experience provided by the proposed adaptive autopilot framework has the potential to enhance the satisfaction of human drivers, leading to a reduced disengagement rate of the autopilot driving system. Future work includes leveraging semi-supervised learning for enhanced driving style categorization and extending the framework to realistic environments that include complex scenarios like cut-ins and lane changes. yurtsever_survey_2020 E. Yurtsever et al., “A survey of autonomous driving: Common practices and emerging technologies,” IEEE Access, vol. 8, 2020. yu_researches_2022 L. Yu and R. Wang, “Researches on adaptive cruise control system: A state of the art review,” Proc. of the Inst. of Mech. Eng., Part D: Journal of Automobile Eng., vol. 236, no. 2-3, pp. 211–240, 2022. kyriakidis_public_2015 M. Kyriakidis et al., “Public opinion on automated driving: Results of an international questionnaire among 5000 respondents,” Transp. Res. Part F Traffic Psych. Behav., vol. 32, pp. 127–140, 2015. ma_drivers_2021 Z. Ma and Y. Zhang, “Drivers trust, acceptance, and takeover behaviors in fully automated vehicles: Effects of automated driving styles and driver’s driving styles,” Accid. Anal. Prev., vol. 159, 2021. ma_investigating_2020 ——, “Investigating the effects of automated driving styles and driver’s driving styles on driver trust, acceptance, and take over behaviors,” in Proc. Hum. Factors Ergon. Soc. Annu. Meet., no. 1, 2020. sagberg_review_2015 F. Sagberg et al., “A review of research on driving styles and road safety,” Human Factors: The Journal of the Human Factors and Ergonomics Society, vol. 57, no. 7, pp. 1248–1275, 2015. kesting_agents_2008 A. Kesting et al., “Agents for traffic simulation,” arXiv:0805.0300 [physics.soc-ph], 2008. Treiber2013 M. Treiber and A.
Kesting, Modeling human aspects of driving behavior. Springer Berlin Heidelberg, 2013, pp. 205–224. etde_627062 S. Krauss, “Microscopic modeling of traffic flow: investigation of collision free vehicle dynamics,” PhD Thesis, Univ. of Cologne, 1998. gipps_behavioural_1981 P. Gipps, “A behavioural car-following model for computer simulation,” Transp. Res. Part B Method., vol. 15, no. 2, 1981. kiran_deep_2022 B. Kiran et al., “Deep reinforcement learning for autonomous driving: A survey,” IEEE Trans. Intell. Transp. Syst., vol. 23, no. 6, 2022. selvaraj_ml-aided_2023 D. C. Selvaraj et al., “An ML-aided reinforcement learning approach for challenging vehicle maneuvers,” IEEE Trans. Intell. Veh., vol. 8, no. 2, pp. 1686–1698, 2023. altman_constrained_2021 E. Altman, Constrained Markov Decision Processes: Stochastic Modeling. Routledge, 2021. krajewski_highd_2018 R. Krajewski et al., “The highD dataset: A drone dataset of naturalistic vehicle trajectories on German highways for validation of highly automated driving systems,” in Proc. IEEE ITSC, 2018. roy_direct_2021 J. Roy et al., “Direct behavior specification via constrained reinforcement learning,” arXiv:2112.12228 [cs.LG], 2021. watanabe_development_1995 T. Watanabe et al., “Development of an intelligent cruise control system,” in Steps Forward. Intell. Transp. Syst. World Congr., 1995. zhang_car-following_2023 T. Zhang et al., “Car-following models: A multidisciplinary review,” arXiv:2304.07143 [eess.SY], 2023. papathanasopoulou_towards_2015 V. Papathanasopoulou and C. Antoniou, “Towards data-driven car-following models,” Transp. Res. Part C Emerg. Techno., vol. 55, 2015. khodayari_modified_2012 A. Khodayari et al., “A modified car-following model based on a neural network model of the human driver effects,” IEEE Trans. Syst. Man. Cybern. - Part A: Syst. Hum., vol. 42, no. 6, 2012. wang_capturing_2018 X. Wang et al., “Capturing car-following behaviors by deep learning,” IEEE Trans. Intell. Transp. Syst., vol. 19, no. 3, pp. 910–920, 2018. zhu_safe_2020 M. Zhu et al., “Safe, efficient, and comfortable velocity control based on reinforcement learning for autonomous driving,” Transp. Res. Part C Emerg. Techno., vol. 117, p. 102662, 2020. liang_human—loop_2017 H. Liang et al., “Human-in-the-loop reinforcement learning,” in Proc. IEEE CAC, 2017, pp. 4511–4518. zhu_human-like_2018 M. Zhu et al., “Human-like autonomous car-following model with deep reinforcement learning,” Transp. Res. Part C Emerg. Techno., vol. 97, 2018. tian_learning_2022 Y. Tian et al., “Learning to drive like human beings: A method based on deep reinforcement learning,” IEEE Trans. Intell. Transp. Syst., vol. 23, no. 7, pp. 6357–6367, 2022. ha_learning_2020 S. Ha et al., “Learning to walk in the real world with minimal human effort,” arXiv:2002.08550 [cs.RO], 2020. yang_wcsac:_2021 Q. Yang et al., “WCSAC: Worst-case soft actor critic for safety-constrained reinforcement learning,” in Proc. AAAI Conf. Artif. Intell., no. 12, 2021, pp. 10639–10646. salles_extending_2022 D. Salles et al., “Extending the intelligent driver model in SUMO and verifying the drive off trajectories with aerial measurements,” in SUMO Conf. Proc., 2022, pp. 1–25. hacohen_power_2019 G. Hacohen and D. Weinshall, “On the power of curriculum learning in training deep networks,” arXiv:1904.03626 [cs.LG], 2019. markudova_recoco:_2023 D. Markudova et al., “ReCoCo: Reinforcement learning-based congestion control for real-time applications,” in Proc. IEEE HPSR, 2023.
http://arxiv.org/abs/2407.01884v1
20240702021115
EIT-1M: One Million EEG-Image-Text Pairs for Human Visual-textual Recognition and More
[ "Xu Zheng", "Ling Wang", "Kanghao Chen", "Yuanhuiyi Lyu", "Jiazhou Zhou", "Lin Wang" ]
cs.CV
[ "cs.CV", "cs.HC" ]
§ ABSTRACT Recently, electroencephalography (EEG) signals have been actively incorporated to decode brain activity in response to visual or textual stimuli and achieve object recognition in multi-modal AI. Accordingly, endeavors have been focused on building EEG-based datasets from visual or textual single-modal stimuli. However, these datasets offer limited EEG epochs per category, and the complex semantics of the stimuli presented to participants compromise their quality and fidelity in capturing precise brain activity. Studies in neuroscience reveal that the relationship between visual and textual stimuli in EEG recordings provides valuable insights into the brain's ability to process and integrate multi-modal information simultaneously. Inspired by this, we propose a novel large-scale multi-modal dataset, named EIT-1M, with over 1 million EEG-image-text pairs. Our dataset is superior in its capacity to reflect brain activity during the simultaneous processing of multi-modal information. To achieve this, we collected data pairs while participants viewed alternating sequences of visual-textual stimuli from 60K natural images and category-specific texts. Common semantic categories are also included to elicit stronger responses from participants' brains. Meanwhile, response-based stimulus timing and repetition across blocks and sessions are included to ensure data diversity. To verify the effectiveness of EIT-1M, we provide an in-depth analysis of EEG data captured from multi-modal stimuli across different categories and participants, along with data quality scores for transparency. We demonstrate its validity on two tasks: 1) EEG recognition from visual or textual stimuli or both and 2) EEG-to-visual generation. § INTRODUCTION Electroencephalography (EEG) is a widely applied neuroimaging modality in cognitive neuroscience. It is known for its ability to decipher intricate brain activity patterns during various cognitive processes <cit.>. In the early days, research focused on constructing EEG datasets for medical purposes, such as detecting and predicting seizures <cit.>.
Recently, EEG signals have been broadly incorporated to decode brain activity in response to visual or textual stimuli and achieve object recognition in multi-modal artificial intelligence (AI) <cit.>. This enriches the data landscape, allowing for more nuanced and accurate models of brain activity and cognitive processes. Accordingly, research endeavors have been focused on building EEG-based datasets <cit.>, as summarized in Tab. <ref>. For instance, ZuCo 1.0 <cit.> is a pioneering EEG-Text dataset that records the neural processes underlying reading and language comprehension during reading tasks. On the other hand, Brain2Image <cit.> is a representative EEG-image dataset that includes evoked responses to visual stimuli from 40 classes. However, these datasets have two distinct shortcomings: 1) They offer limited EEG epochs per category, and the complex semantics of the stimuli presented to participants compromise their quality and fidelity in capturing precise brain activity. 2) They only encompass EEG signals recorded from single-modal stimuli, either visual or textual. This makes them less suitable for training high-performance multi-modal AI models. Studies in neuroscience show that EEG recordings reveal a significant relationship between visual and textual stimuli, offering valuable insights into the brain's capacity to integrate multi-modal information simultaneously <cit.>. This integration is crucial for understanding how the brain processes complex, real-world scenarios where multiple types of sensory input are encountered simultaneously. Inspired by this, we introduce a novel large-scale multi-modal EEG dataset, EIT-1M, comprising paired EEG, visual, and textual data for the benefit of research communities. The key insight of our dataset is to record human brain activity while the brain simultaneously processes multi-modal information. To achieve this, data was collected from five participants exposed to random sequences of 60K natural images and their corresponding category descriptions. To date, we have gathered over 1 million epochs of brain responses using a 64-channel EEG headset (actiCHamp Plus[<https://brainvision.com/products/actichamp-plus/>]). Specifically, we utilize the 10-category dataset CIFAR-10 <cit.> to construct the visual and textual stimuli. This dataset harnesses an image resolution of 32×32 pixels without excessive details. Empirically, as shown in Fig. <ref>, we find that low-resolution visual stimuli elicit more stable neural responses, suggesting they are appropriate and manageable within a brief viewing period. We present visual and textual stimuli sequentially to maintain continuous engagement with the objects and concepts, as shown in Fig. <ref>. Moreover, our dataset features response-based stimulus timing, repetition across blocks and sessions, and diverse visual and textual classes. To verify its effectiveness, we provide an in-depth analysis of EEG data captured from multi-modal stimuli across different categories and participants. The data analysis includes EEG topographic maps, corresponding signal analysis, and ERP analysis. These analyses highlight the distinct ERP characteristics elicited by visual and textual stimuli, providing insights into the brain's multi-modal information processing. For transparency, we include data quality scores (See Tab. <ref>). To benchmark our EIT-1M, we demonstrate its validity on two tasks: 1) EEG recognition from visual or textual stimuli or both (See Sec. <ref>) and 2) EEG-to-visual generation (See Sec. <ref>).
We expect our dataset to be a benchmark contributor for advancing research in multi-modal AI <cit.> and potentially in cognitive neuroscience. § RELATED WORK EEG Datasets with Visual Stimuli. They capture EEG waveforms while participants view visual stimuli, facilitating studies of brain activity, as shown in Tab. <ref>. A representative dataset is Brain2Image <cit.>, which includes evoked responses to visual stimuli from 40 classes, each with 50 images, totaling 2K images. However, this dataset is impeded by its lack of train-test separation during recording, block-specific stimuli patterns, and inconsistency across frequency bands <cit.>. In contrast, the THINGS-EEG1 <cit.> and THINGS-EEG2 <cit.> datasets address these issues by incorporating both main and validation sessions to ensure data quality and consistency. These two datasets contain human EEG responses from 50 subjects to 22,248 images in the THINGS stimulus set. Regarding the diversity of stimuli, while studies like Brain2Image and <cit.> involve 40 classes, other studies focus on only 10 different image classes <cit.>. This limited representation allows for more controlled studies but fails to capture the continuous and diverse nature of naturalistic stimuli due to the limited samples from each category. Other datasets like MindBigData <cit.> and <cit.> capture a wide range of images but are derived from a single individual, limiting their potential for training image reconstruction models that generalize to other individuals. Recently, to address these limitations, Alljoined1 <cit.> includes 10K images per participant from object categories in MS-COCO <cit.>, thereby accounting for the diversity and continuity of real-world images. EEG Datasets with Textual Stimuli. They are primarily developed for brain signal decoding. Notable examples include ZuCo 1.0 <cit.> and ZuCo 2.0 <cit.>, captured with 128-channel EEG devices. These datasets provide insights into the neural processes underlying reading and language comprehension by recording EEG signals during reading tasks. EEG2Text <cit.> focuses on translating brain signals into textual descriptions, supporting the development of AI models for decoding and generating text from EEG signals. Despite these advancements, there remains a need for datasets that integrate both visual and textual stimuli to capture the complex interplay between different modalities in the brain. In a nutshell, all these datasets primarily focus on single-modal stimuli, limiting their fidelity for training multi-modal AI models. Our EIT-1M dataset addresses this gap by providing paired EEG, visual, and textual data, enabling comprehensive multi-modal analysis. Thus, our dataset is superior in its capacity to reflect brain activity during the simultaneous processing of multi-modal information. § DATASET COLLECTION METHODS Tab. <ref> provides an overview of one experiment involving five participants, aged 20-30 years, with a gender distribution of one female and four males. Each participant underwent two 300-minute sessions, during which 1,200,000 events were recorded, including 600K visual and 600K textual stimuli. The stimuli were drawn from ten CIFAR-10 categories for visuals and ten textual categories. EEG recordings were made using a 64-channel headset at a 1000 Hz sampling rate. The dataset ensures high quality with an average signal-to-noise ratio as in Tab. <ref>, maintaining impedance levels at or below 20 kΩ.
Each session featured an average of 10K events, with each event lasting 50 ms and an inter-event interval of 1 second. Preprocessing involved 1-40 Hz band-pass filtering and epoching from -20 to 30 ms relative to stimulus onset, with baseline correction at -20 ms. This dataset aims to support research in EEG analysis and multi-modal recognition. §.§ Experimental Settings Participants. Five adults (mean age 24.83 years; 1 female, 4 male) participated in this study, all with normal or corrected-to-normal vision, and none of them has suffered or is suffering from neurological or psychiatric problems such as ADHD or epilepsy. Each participant provided informed written consent and received monetary reimbursement for their involvement. The study procedures were approved by the ethical committee. It is important to acknowledge the potential limitations of this study, such as the gender imbalance among participants and the narrow age range. Stimuli. All images used in this study as visual stimuli are sourced from the CIFAR-10 dataset <cit.>. This dataset is a well-known benchmark in machine learning and computer vision, comprising 60K color images across 10 different classes, with 6K images per class. These classes represent a variety of everyday objects and animals, including airplanes, automobiles, birds, cats, deer, dogs, frogs, horses, ships, and trucks. This dataset is selected for our study due to its diversity and balanced categories, which provide a robust set of stimuli for examining neural responses across different visual contexts. Utilizing this dataset allows for an in-depth exploration of how the brain processes various types of visual information and supports the development of multi-modal models that can generalize across different categories of visual stimuli. Each image in this dataset has a resolution of 32x32 pixels, making it well-suited for eliciting brain responses to visual stimuli from participants. The textual stimuli are derived from the category names within the CIFAR-10 dataset: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, and truck. Hardware Setup. We recorded data using a 64-electrode actiCHamp Plus system, digitized at a rate of 1024 Hz with 24-bit A/D conversion. The montage was arranged according to the international 10-20 System, and the electrode offset was kept below 40 mV. A 22-inch Dell monitor with a resolution of 1080p at 60 Hz was used to display the visual and textual stimuli. As shown in Fig. <ref>, the monitor was centrally positioned at a distance of 80 cm from the participant, so that the stimuli subtended a visual angle of 3.5 degrees. We ensured that the angle remained small to minimize the occurrence of gaze drift. §.§ Data Collection Procedure Before the stimuli were viewed, conductive gel was injected into each electrode to keep the impedance below 20 kΩ, facilitating better signal capture. Participants were then shown images and text over the course of four sessions, each four hours long. Each session comprised multiple blocks, with each block containing images from the same class and the corresponding category name text. The visual and textual stimuli were arranged in an alternating visual-textual-visual-textual order within each block. Different blocks contained stimuli from different classes. Within each block, 1,000 visual stimulus images and 1,000 text stimulus category names from CIFAR-10 were presented. Within each trial, an image was presented for 50 ms, followed by 50 ms of a black screen. The corresponding category name of the image was also presented for 50 ms, followed by 50 ms of a black screen. A white fixation cross was visible on the screen throughout the entire trial. To ensure focus, participants were prompted to press the space bar after completing two consecutive blocks. Additionally, five to ten-minute breaks were provided between blocks based on participants' needs for better data recording. Fig. <ref> (a) shows a schematic overview of the structure of trials, blocks, categories, and sessions, which follows the rapid serial visual presentation (RSVP) paradigm <cit.>. Each of the 5K block-specific CIFAR-10 training images and label texts is presented once within each block, and each of the 10 category-specific blocks is presented once within each session. Each participant performed two sessions on different days. Each of the 2 sessions thus consists of 50,000 images and texts within and across blocks. Fig. <ref> (b) illustrates that Sessions 3 and 4 consist of 10,000 images and labels from the CIFAR-10 testing set.
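As a rough illustration, the preprocessing pipeline described above could be reproduced with MNE-Python as follows; the file name and condition label are placeholders, and the epoch window follows the -20 to 30 ms setting reported earlier.

```python
import mne

# Hedged sketch of the preprocessing pipeline; "sub-01_ses-01.vhdr" and the
# condition label below are hypothetical placeholders.
raw = mne.io.read_raw_brainvision("sub-01_ses-01.vhdr", preload=True)
raw.filter(l_freq=1.0, h_freq=40.0)            # 1-40 Hz band-pass

events, event_id = mne.events_from_annotations(raw)
epochs = mne.Epochs(
    raw, events, event_id,
    tmin=-0.020, tmax=0.030,                   # -20 ms to 30 ms around onset
    baseline=(None, 0.0),                      # baseline-correct from -20 ms
    preload=True,
)
evoked = epochs["airplane/visual"].average()   # hypothetical condition label
```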
§ DATA ANALYSIS §.§ EEG Topographic Maps and Corresponding Signals Analysis Fig. <ref> presents the comparison of EEG signals by showcasing topographic maps and corresponding signals averaged across 63 electrodes (channel FCz as reference) for different stimulus conditions, i.e., visual and textual stimuli with the airplane and frog categories. Each column of the figure represents a different stimulus type: visual stimuli (left column) and textual stimuli (right column). The visual stimuli include images from the CIFAR-10 dataset, and the textual stimuli comprise category names from the same dataset. Each row represents a different category, specifically airplane and frog. Visual Stimuli (Left Column of Fig. <ref>) The topographic maps show the distribution of brain activity across the scalp at various time points (-0.050 s, -0.025 s, 0.000 s, 0.025 s, and 0.050 s) relative to stimulus onset. The maps reveal distinct patterns of neural activation, indicating how the brain processes visual stimuli over time. For instance, the airplane category (1st row) shows significant activation in the occipital and parietal regions, which are known to be involved in visual processing <cit.>. The corresponding ERP signals show the average response over time for all electrodes. The signals depict the dynamic changes in brain activity, with notable peaks and troughs corresponding to different cognitive processes. For the visual stimuli, there are clear ERP components around 20 ms and 40 ms, which might correspond to early visual processing and higher-level cognitive processing, respectively. Textual Stimuli (Right Column of Fig. <ref>) Similar to the visual stimuli, the topographic maps for textual stimuli show brain activity at time points 50 ms later than those of the visual stimuli; note that the visual and textual stimuli are presented with a gap of 50 ms. There are noticeable differences in the activation patterns compared to visual stimuli, highlighting the distinct neural processes involved when participants read and understand text. For the airplane text (1st row), there is significant activation in the temporal and frontal regions, areas associated with language processing <cit.>. The ERP signals for textual stimuli also display characteristic peaks, though the patterns differ from those elicited by visual stimuli. The airplane text category shows a strong response between 20 ms and 40 ms, likely reflecting early semantic processing <cit.>.
According to these visualizations, we have the following findings: (I) Individual and Common Patterns: Fig. <ref> highlights both individual and common brain activity patterns associated with image and text presentation. This indicates that while there are distinct neural processes for visual and textual stimuli, there are also commonalities in how the brain responds to different types of information. (II) Temporal Dynamics: The temporal dynamics of the ERP signals provide insights into the timing of cognitive processes. Earlier components are typically associated with sensory processing, while later components are linked to cognitive and semantic processing. (III) Gap Influence: The 50 ms gap between visual and textual stimulus presentations allowed us to observe the sequential processing of different modalities, showing how the brain transitions between visual and textual information processing. (IV) ERP Characteristics: The ERP characteristics, such as the peaks around 20 ms and 40 ms for visual stimuli and between 0 ms and 20 ms for textual stimuli, provide valuable cues for understanding the stages of information processing in the brain. Unlike previous EEG datasets, such as the THINGS-EEG dataset <cit.>, which use high-resolution images as visual stimuli and introduce a vast number of object concepts (1,854), our dataset addresses the following limitations of previous ones. Fig. <ref> illustrates the differences in EEG signal responses between high-resolution and lower-resolution visual stimuli. The more stable and less variable neural responses to the lower-resolution images suggest their suitability for creating robust EEG datasets. High-resolution images, on the other hand, require more time for participants to process content and details, making them less suitable for effectively capturing quick neural responses at the millisecond level. §.§ ERP Analysis Fig. <ref> presents the event-related potentials (ERPs) averaged over occipital and parietal electrodes for a participant viewing visual images (right panel) and category text (left panel). Both plots display ERP data from -200 ms to 600 ms relative to stimulus onset (0 ms), with the average occipito-parietal ERPs fluctuating between approximately -0.5 and 1.5 microvolts for both visual and text stimuli. Each trace, in a distinct color, represents a specific category, including airplane, automobile, bird, cat, deer, dog, frog, horse, ship, and truck. Regarding the text stimuli (left panel), a significant initial deflection is noticed around 0 ms, showing the brain's quick response to text stimuli. Early components, like peaks and troughs, are seen around 100 ms and 200 ms post-stimulus, typical of early ERP components such as the P1 and N1, which are linked to sensory processing. Additional peaks around 300 ms and beyond likely indicate higher-order cognitive processing. The shaded area around the grand-average ERP line signifies the standard deviation, reflecting variability across different trials and categories. This variability is higher at certain peaks, suggesting differences in how the brain processes various text categories. Concerning the visual stimuli (right panel), a comparable initial deflection is observed around 0 ms. Distinct peaks are evident at approximately 100 ms and 200 ms, corresponding to the P1 and N1 components, which are more pronounced and consistent across different visual categories compared to textual stimuli.
Significant peaks around 300 ms and later may denote the P3 component, indicating cognitive processing associated with visual categorization. The standard deviation shading around the grand average indicates less variability compared to text stimuli, suggesting more consistent brain responses to visual stimuli across various categories. In comparison, visual stimuli evoke more consistent ERPs across categories than text stimuli, as indicated by the smaller standard deviation areas. Both types of stimuli elicit similar amplitude ranges in the ERP responses, reflecting comparable levels of neural activity. The timing of early and late ERP components is similar for both text and visual stimuli, suggesting that initial sensory processing and subsequent cognitive processing occur within similar time frames for both types of stimuli. In conclusion, the ERPs for both textual and visual stimuli exhibit characteristic early and late components, indicative of sensory and cognitive processing stages. Visual stimuli elicit more consistent responses across categories, whereas text stimuli exhibit greater variability. This analysis provides insights into the sensory and cognitive functions associated with different types of stimuli. § EXPERIMENTS WITH EIT-1M DATASET Implementation Details. For preprocessing, a band-pass filter is applied to retain frequencies between 1 and 40 Hz within the raw EEG data. Subsequently, the continuous data is segmented into epochs, each commencing 50 ms prior to the stimulus onset and concluding 50 ms following each event. To train and evaluate the recognition models, the EEG data from one participant (Tab. <ref>) and two participants (Tab. <ref>) are divided using an 80/20 split to create training and evaluation sets, respectively. The models are trained using the Adam optimizer, coupled with a step learning rate schedule, across 500 epochs. The default settings for the learning rate, weight decay, and batch size are 1 × 10^-3, 1 × 10^-5, and 2048, respectively. We apply three widely used metrics to evaluate the recognition performance on EIT-1M: accuracy, recall, and F1 score.
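The training recipe above translates into a few lines of PyTorch. The model and data below are stand-ins so the sketch runs; the step-schedule parameters (step size, gamma) are assumptions, as the text only states that a step schedule is used.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Stand-in model and data; in practice these are the benchmarked networks
# (EEGNet, MobileNet-v2, ResNet18/34/50) and the epoched EEG tensors.
model = nn.Sequential(nn.Flatten(), nn.Linear(64 * 100, 10))
data = TensorDataset(torch.randn(4096, 64, 100), torch.randint(0, 10, (4096,)))
train_loader = DataLoader(data, batch_size=2048, shuffle=True)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-5)
# Step learning rate schedule; step_size and gamma are assumed values.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=100, gamma=0.5)
criterion = nn.CrossEntropyLoss()

for epoch in range(500):
    for eeg, label in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(eeg), label)
        loss.backward()
        optimizer.step()
    scheduler.step()
```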
§.§ Recognition The results of experiments conducted within one session of a single participant are shown in Tab. <ref>, illustrating the effectiveness of our dataset in the individual collection procedure. The results in Tab. <ref> include the performance across various models with EEG signals captured from visual and textual stimuli. Note that Image & Text refers to the combined EEG signals from both visual and textual stimuli for recognition. The evaluated models include EEGNet <cit.>, MobileNet-v2 <cit.>, ResNet18 <cit.>, ResNet34 <cit.>, and ResNet50 <cit.>. Combining EEG signals from image and text stimuli generally enhances performance metrics across all models, suggesting that multi-modal data provides richer information, leading to better classification accuracy and robustness. The consistent performance improvements observed from MobileNet-v2 to ResNet architectures indicate that our EIT-1M dataset is well-suited for various deep-learning models. ResNet models, in particular, show significant improvements, highlighting the dataset's capacity to support complex neural networks. Similar performance metrics for image and text stimuli alone indicate that the dataset offers a balanced representation of both modalities. This balance is crucial for training models to generalize well across different types of stimuli. Additionally, the high F1 scores, especially for the ResNet models, reflect good data quality, ensuring that the recorded EEG signals are reliable and effective for training AI models. Tab. <ref> summarizes benchmark experiments across different sessions of two participants. The results consistently show that combining EEG signals from both visual and textual stimuli improves performance across all models compared to using either visual or textual stimuli alone. For both visual and textual stimuli, ResNet models maintain consistently high performance, indicating the robustness of ResNet architectures in processing and learning from EEG data. The analysis of Tab. <ref> and Tab. <ref> supports the soundness of our EIT-1M dataset. By providing high-quality, balanced, and scalable data, our dataset proves to be an excellent resource for advancing research in multi-modal AI and cognitive neuroscience. The observed improvements with combined image and text stimuli further highlight the importance of multi-modal datasets in capturing the intricate interplay between different types of information. §.§ Generation We follow the classic EEG-to-Image generation task proposed by ThoughtVis <cit.>, which reconstructs images from EEG signals. As shown in the generation results in Fig. <ref>, our proposed EIT-1M dataset demonstrates the capability to support the EEG-to-Image generation task. § CONCLUSION, LIMITATIONS, AND FUTURE WORK In this paper, we presented EIT-1M, a large-scale multi-modal dataset comprising 1 million EEG-image-text pairs. We collected the data pairs while participants viewed alternating sequences of visual-textual stimuli from 60K natural images and corresponding label texts. Our EIT-1M is superior in its capacity to record brain activity during the simultaneous processing of multi-modal information, i.e., images and text. It features response-based stimulus timing and repetition across blocks and sessions. To verify the effectiveness of EIT-1M, we provided an in-depth analysis of the EEG signals in EIT-1M across different categories and sessions and conducted experiments on two tasks. Limitations. Despite the robustness of our dataset, there are areas for enhancement. Our current dataset includes data from multiple participants and sessions, but increasing the number of participants and sessions could yield a more comprehensive understanding of neural responses and improve the generalizability of the models trained on this data. Additionally, while we used a well-defined set of visual and textual stimuli, expanding the variety of stimuli, especially the textual stimuli, could further enhance the dataset's fidelity for studying more diverse and complex neural processes. Future work. A promising direction is to integrate additional modalities, such as audio or tactile feedback, to create an even richer multi-modal dataset. This integration could provide deeper insights into the interplay between different sensory inputs and brain activity, advancing research in multi-modal AI and neuroscience. By addressing these limitations and expanding the dataset's scope, we can significantly contribute to the understanding and development of multi-modal AI models. Broader Impact. EIT-1M advances neuroscience and AI by enabling deeper insights into cognitive processes and sensory integration. It improves brain-computer interfaces and personalized learning. Ethical considerations regarding neural data privacy are crucial for responsible applications.
http://arxiv.org/abs/2407.02844v1
20240703064026
Multi-Attention Integrated Deep Learning Frameworks for Enhanced Breast Cancer Segmentation and Identification
[ "Pandiyaraju V", "Shravan Venkatraman", "Pavan Kumar S", "Santhosh Malarvannan", "Kannan A" ]
eess.IV
[ "eess.IV", "cs.CV", "cs.LG", "F.2.2, I.2.7" ]
§ ABSTRACT Breast cancer poses a profound threat to lives globally, claiming numerous lives each year. Therefore, timely detection is crucial for early intervention and improved chances of survival. Accurately diagnosing and classifying breast tumors using ultrasound images is a persistent challenge in medicine, demanding cutting-edge solutions for improved treatment strategies. This research introduces multi-attention-enhanced deep learning (DL) frameworks designed for the classification and segmentation of breast cancer tumors from ultrasound images. A spatial channel attention mechanism is proposed for segmenting tumors from ultrasound images, utilizing a novel LinkNet DL framework with an InceptionResNet backbone. Following this, the paper proposes a deep convolutional neural network with an integrated multi-attention framework (DCNNIMAF) to classify the segmented tumor as benign, malignant, or normal. From experimental results, it is observed that the segmentation model has recorded an accuracy of 98.1%, with a minimal loss of 0.6%. It has also achieved high Intersection over Union (IoU) and Dice Coefficient scores of 96.9% and 97.2%, respectively. Similarly, the classification model has attained an accuracy of 99.2%, with a low loss of 0.31%. Furthermore, the classification framework has achieved outstanding F1-Score, precision, and recall values of 99.1%, 99.3%, and 99.1%, respectively. By offering a robust framework for early detection and accurate classification of breast cancer, this proposed work significantly advances the field of medical image analysis, potentially improving diagnostic precision and patient outcomes. § INTRODUCTION Breast cancer is one of the most common cancers among women worldwide, resulting in approximately 570,000 deaths in 2015 alone. Annually, over 1.5 million women, accounting for 25% of all female cancer diagnoses, are diagnosed with breast cancer globally <cit.><cit.>. Breast tumors often originate as ductal hyperproliferation and can progress to benign tumors or metastatic carcinomas when stimulated by various carcinogenic agents. The tumor microenvironment, including stromal effects and macrophages, plays a crucial role in the development and progression of breast cancer <cit.>. Early detection of breast carcinoma significantly increases the chances of successful treatment. Therefore, implementing effective procedures for identifying early signs of breast cancer is crucial <cit.>. Mammography, ultrasound, and thermography are the primary imaging techniques used for screening and diagnosing breast cancer <cit.><cit.>. With over 75% of tumors responding to hormones, breast cancer is primarily a postmenopausal illness. Incidence rates rise steeply up to the ages of 35-39 and plateau after 80 years, with age and female sex being significant risk factors. This hormone dependency interacts with environmental and genetic factors to determine the incidence and progression of the disease <cit.>. Precise segmentation and classification of breast cancer are essential for effective treatment planning and positive patient outcomes. Traditional methods heavily depend on manual interpretation, which is both time-consuming and prone to errors. Advancements in technology have transformed the provision of healthcare.
High processing power, primarily from GPUs, enables the creation of deep neural networks with multiple layers, allowing for the extraction of formerly unachievable features. Convolutional Neural Networks (CNNs) have made a profound impact on image processing and understanding, especially in the areas of segmentation, classification, and analysis <cit.><cit.>. Deep learning models can process vast amounts of medical imaging data and detect subtle abnormalities that might elude human observers. Accurate tumor segmentation and classification enhance oncologists' capacity to decide whether a tumor is malignant or not. Typically, these methods require professional annotation and pathology reports to make this assessment <cit.>, which demands considerable human effort. DL provides an efficient and promising solution for automating these procedures. DL models can learn complicated patterns and features from ultrasound and mammography images, with the potential to improve classification accuracy and efficiency. This paper proposes the Spatial-Channel Attention LinkNet Framework with InceptionResNet Backbone for breast cancer segmentation, and the DCNNIMAF Framework for breast cancer classification. The segmentation framework is a novel and effective attention-enhanced mechanism that uses a pre-trained CNN model architecture for the encoder backbone. This enhances the capability of feature extraction, while effectively enhancing segmentation using a coupled spatial and channel attention mechanism in the decoder. The proposed classification framework - Deep CNN with an Integrated Multi-Attention Framework (DCNNIMAF) - is a unique and novel architecture with a hybrid of integrated self- and spatial-attention mechanisms. The segmentation results were evaluated using metrics such as the Dice coefficient, IoU score, and a combination of focal loss and Jaccard loss, while the classification evaluation metrics include recall, F1-score, precision, and accuracy. The organization of this paper is as follows: Section 2 reviews the literature on breast cancer segmentation and classification; Section 3 describes the proposed approach; Section 4 presents experimental results; Section 5 concludes and outlines future research directions. § RELATED WORKS Osareh et al. <cit.> utilized the K-nearest neighbors (KNN), Support Vector Machine (SVM), and Probabilistic Neural Network (PNN) classification models to perform the classification of tumor regions. The methodology was employed on two different publicly available datasets, where one dataset was composed of Fine Needle Aspirates of the Breast Lumps (FNAB) with 457 negative samples and 235 positive samples, while the other was composed of 295 gene microarrays with 115 good-prognosis class and 180 poor-prognosis class data. To support the classifier, feature extraction and selection methodologies were utilized. Feature extraction techniques like Principal Component Analysis (PCA), optimized with auto-covariance coefficients of feature vectors, were employed to reduce high-dimensional features into low-dimensional ones. Feature selection involved two different approaches: the Relief algorithm, a filter approach in which features are selected in a pre-processing step without considering any bias of the induction algorithms, and a wrapper approach, namely the proposed sequential forward selection (SFS) technique, through which a feature set composed of 15 sonographic features is obtained.
The results underwent ranking using a feature ranking method that employed the Signal-to-Noise Ratio (SNR) to identify crucial features. The evaluation involved wrapper approach estimates assessed through a leave-one-out cross-validation procedure, focusing on overall accuracy, sensitivity, specificity, and the Matthews Correlation Coefficient (MCC). Li et al. <cit.> introduced a novel patch screening method that included the extraction of multi-size and discriminative patches from histology images involving tissue-level and cell-level features. Firstly, patches of dimensions 512x512 and 128x128 are generated from the input data. This is followed by the utilization of two ResNet50s, where one model is fed with patches of dimensions 512x512 to extract tissue-level features, while the other inputs patches of dimensions 128x128 to extract cell-level features. A fine-tuning approach is adopted to train the ResNet50 models. This is followed by a screening of patches, aggregating them into different clusters based on their phenotype. To speed up the process, the patch size is reduced to obtain 1024 features, followed by PCA to reduce the number of features to 200. This is followed by the k-means clustering process. A ResNet50 fine-tuned with 128x128 patches is employed to select the clusters. Subsequently, the P-norm pooling feature method is applied to extract the final features of the image, followed by the use of a Support Vector Machine to classify input images into four distinct classes: Normal, Benign, In situ carcinoma, or Invasive carcinoma. Zheng et al. <cit.> introduced a DL-assisted Efficient AdaBoost Algorithm (DLA-EABA), where the Convolutional Neural Network is trained with extensive data so that high precision can be achieved. A stacked autoencoder is utilized for generating a deep convolutional neural network, and the encoder and decoder sections contain multiple non-linear transformations taken from the combined depictions of the actual input data. An efficient AdaBoost algorithm is utilized to train the classifiers, which estimate the positive value for threshold and parity by reviewing all the potential combinations of both values. The deep CNN contains Long Short-Term Memory (LSTM) units with logistic activation functions in place of conventional artificial neurons. This is followed by Softmax Regression to classify the images with the help of the extracted features. Lotter et al. <cit.> introduced a robust breast tumor classification model for mammography images which utilizes bounding box annotations and is extended to digital breast tomosynthesis images to be able to identify the tumor region in the image. The CNN first trains to classify whether lesions are present in the cropped image patches. Subsequently, using the entire image as input, the CNN initializes the backbone of the detection-based model. This model outputs the entire image with a bounding box, providing a classification score. The model's performance is then evaluated by comparing its ability to identify the tumor region with Breast Imaging Reporting and Data System (BI-RADS) scores, with scores of 1 and 2 considered negative interpretations, against index and pre-index cancer exams. Saber et al. <cit.> employed a transfer learning methodology on five different models: ResNet50, VGG19, Inception V3, Inception-V2, and VGG16. Feature extraction involved freezing the trained parameters from the source task except for the last three layers, which were then transferred to the target task.
The images were preprocessed using methods such as median filtering, histogram equalization, morphological analysis, segmentation, and image resizing. The dataset is split in an 80-20 ratio, and augmentation is applied to the training set by rotating and flipping the images. The newly trained layers are combined with the existing pre-trained layers, and features are extracted using these models. Classification is performed by feeding the extracted features from the transfer learning models into Support Vector Machine and softmax classifiers fine-tuned using Stochastic Gradient Descent with momentum (SGDM); the momentum term damps oscillations along high-variance gradient directions and helps the optimizer move past saddle points. Cho et al. <cit.> proposed a Breast Tumor Ensemble Classification Network (BTEC-Net) that utilizes an improved DenseNet121 and ResNet101 as base classifiers, in which each of the four blocks is connected to a Squeeze-and-Excitation block and a global average pooling layer. Next, the feature-map sizes are aligned using a fully connected layer and integrated along the channel dimension. The combined feature map is then fed into a feature-level fusion module to perform binary classification. Once classification is done, segmentation is carried out using the proposed Residual Feature Selection UNet (RFS-UNet), an encoder-decoder network whose layers of matching feature-map size are connected by skip connections. The encoder part is composed of five encoders, each comprising a convolutional layer, an RFS module, a residual convolutional block, and a max-pooling layer. Similarly, the decoder part is composed of five decoders, each comprising a convolutional layer, an RFS module, a transpose convolutional layer, and a residual block. The skip connections contain a spatial attention module whose inputs are the output of the transposed convolution and the output of the corresponding encoder RFS module; its output is concatenated with the output of the same transposed convolution layer. The segmentation process ends with a sigmoid activation function that returns the segmented tumor region. Dayong Wang et al. <cit.> introduced a method for automatically detecting metastatic breast cancer in whole-slide images of sentinel lymph node biopsies, achieving first place in the International Symposium on Biomedical Imaging (ISBI) grand challenge. Their system delivered impressive results, with an AUC of 0.925 for whole-slide image classification and a tumor localization score of 0.7051, surpassing an independent pathologist's review. Integrating the DL system's predictions with pathologist diagnoses achieved a notable reduction in the error rate, showcasing the profound impact of DL on enhancing the accuracy of pathological diagnoses for breast cancer metastases. Abdelrahman Sayed Sayed et al. <cit.> developed a new, economical design for a 3-RRR Planar Parallel Manipulator (PPM), aiming to overcome the challenge of deriving kinematic constraint equations for manipulators with complex nonlinear behavior. Utilizing screw theory, they computed the direct and inverse kinematics and then developed a Neuro-Fuzzy Inference System (NFIS) model, optimized with Particle Swarm Optimization (PSO) and a Genetic Algorithm (GA), to predict the position of the end-effector.
The proposed PPM structure was investigated through the development of its kinematic model, testing of a prototype in ADAMS, and subsequent fabrication for validation. Results showed that PSO outperformed GA in tuning the NFIS model, aligning closely with actual PPM data and indicating promise for enhanced robot capabilities and performance through further optimization and control strategies. Luuk Balkenende et al. <cit.> presented a comprehensive review of the integration of deep learning techniques in breast cancer imaging. Their work highlights the wide-ranging applications of DL across modalities such as digital mammography, ultrasound, and magnetic resonance imaging (MRI), with a focus on tasks including lesion classification, segmentation, and therapy-response prediction. Additionally, they discuss research on diagnosing breast cancer metastasis using CNNs on whole-body scintigraphy scans, and on aiding clinicians in diagnosing axillary lymph node metastasis with a 3D CNN model on PET/CT images. They emphasize the necessity of conducting large-scale trials and addressing ethical considerations to fully harness the potential of deep learning in clinical breast cancer imaging. Shen et al. <cit.> proposed a pioneering DL-based approach for detecting breast cancer on screening mammograms. Their "end-to-end" algorithm efficiently utilizes training datasets with varying levels of annotation, achieving exceptional performance compared to previous methods. On independent test sets from diverse mammography platforms, the proposed method achieves per-image AUCs ranging from 0.88 to 0.98, with sensitivities between 86.1% and 86.7%. Notably, the algorithm's transferability across different mammography platforms is demonstrated, requiring minimal additional data for fine-tuning. These results emphasize the potential of deep learning to revolutionize breast cancer screening, offering more accurate and efficient diagnostic tools for clinical applications. Han et al. <cit.> introduced a novel method for breast cancer diagnosis and prognosis. Their Class Structure-based Deep CNN (CSDCNN) achieves impressive accuracy (93.2% on average) by addressing the challenges of automated multi-class classification from histopathological images. Combining hierarchical feature representation with distance constraints in feature space, their methodology offers a unique solution to the subtle differences among breast cancer classes. Comparative experiments highlight the superior performance of the CSDCNN over existing methods, positioning it as a valuable tool for clinical decision-making in breast cancer management. Their work represents a significant advancement in automated breast cancer classification, providing clinicians with a reliable diagnostic aid. Wang et al. <cit.> introduced DeepGrade, a deep learning-based histological grading model aimed at improving prognostic stratification for NHG 2 tumors. Developed and validated on large-scale datasets of digital whole-slide histopathology images, DeepGrade offers a novel approach to classifying NHG 1 and NHG 3 morphological patterns. By re-stratifying NHG 2 tumors into DG2-high and DG2-low groups, DeepGrade provides independent prognostic information beyond traditional risk factors. Its performance was validated internally and externally, showcasing its ability to predict recurrence risk accurately.
The ensemble approach, employing 20 deep convolutional neural network models, ensures robustness and reliability in classification tasks. DeepGrade shows promise as a cost-effective alternative to molecular profiling, supported by high area under the receiver operating characteristic curve values. This innovative methodology heralds a significant advancement in histological grading for breast cancer, promising improved clinical decision-making and personalized treatment strategies. Further research should focus on validating DeepGrade across diverse patient populations and integrating it into routine clinical practice. Sizilio et al. <cit.> introduced a fuzzy logic-based approach for pre-diagnosing breast cancer from Fine Needle Aspirate (FNA) analysis. Addressing the global burden of breast cancer and the variability in FNA diagnostic accuracy (65% to 98%), this method enhances reliability through computational intelligence. The research employed the Wisconsin Diagnostic Breast Cancer Data (WDBC) and proceeded through four stages: fuzzification, rule base establishment, inference processing, and defuzzification. Validation included cross-validation and expert reviews. The method achieved a sensitivity of 98.59% and a specificity of 85.43%, demonstrating high reliability in detecting malignancies but highlighting the need for improvement in identifying benign cases. This approach shows significant potential for enhancing breast cancer diagnostic accuracy. Sarkar et al. <cit.> explored the use of the K-Nearest Neighbors (KNN) algorithm for diagnosing breast cancer with the Wisconsin-Madison Breast Cancer dataset. Recognized for its straightforward and efficient implementation, KNN served as a non-parametric classifier in this study. The research showed that KNN improved classification performance by 1.17% over the best-known result for the dataset. Advantages of KNN include its simplicity, effectiveness with small training sets, and no need for retraining when new data is incorporated. However, the algorithm also has significant limitations, such as substantial storage requirements for large datasets and extensive computational demands for distance calculations between test and training data. The study noted the existence of faster KNN variants, such as those using k-d trees, which have been successful in tasks like script and speech recognition. The findings highlight KNN's potential for various diagnostic applications, even though no single algorithm is optimal for all diagnostic problems. This research emphasizes KNN's promise in enhancing diagnostic accuracy while acknowledging its challenges with storage and computational efficiency. Song et al. <cit.> introduced an ML technique aimed at accurately annotating noncoding RNAs (ncRNAs) by searching genomes to find ncRNA genes characterized by known secondary structures. Their method involves aligning sequences optimally with a structure model, a critical step for identifying ncRNAs within genomes. Acknowledging the limitations of using a single structure model, they developed an approach that processes genome sequence segments to extract feature vectors. These vectors are then classified to differentiate between ncRNA family members and other sequences. The results showed that this method captures essential features of ncRNA families more effectively and enhances the accuracy of genome annotation compared to traditional tools. 
This work underscores the significant role of ML in bioinformatics, particularly in improving the precision of ncRNA gene identification. Foster et al. <cit.> offered a critical commentary on the integration of ML in biomedical engineering, particularly focusing on the application of support vector machines (SVMs) beyond mere statistical tools. Their analysis highlighted the inherent challenges in developing clinically validated diagnostic techniques using SVMs, emphasizing concerns such as overfitting and the imperative for robust validation procedures. Unlike studies focused on specific diseases, their research aimed to evaluate and enhance existing ML models for broader biomedical applications. The commentary serves as a cautionary perspective for researchers, reviewers, and readers, stressing the complexities and potential pitfalls in classifier development. It advocates for an integrated approach in which classifier validation forms an integral part of the experimental process. This work underscores the critical need to establish the clinical validity of diagnostic tools developed through ML in biomedical research. Wei et al. <cit.> proposed an innovative method for improving microcalcification classification in breast cancer diagnosis using content-based image retrieval (CBIR) combined with ML. Their approach integrates CBIR to retrieve similar mammogram cases, enhancing the performance of a support vector machine (SVM) classifier. By incorporating local proximity information from retrieved cases, the adaptive SVM achieved a notable increase in classification accuracy from 78% to 82%, as measured by the area under the ROC curve. This method aims to provide radiologists with enhanced diagnostic support, serving as a valuable "second opinion" tool. Despite these advancements, the study acknowledges limitations in dataset size, which may affect generalizability. These findings underscore the potential of CBIR-assisted classification in improving the precision of breast cancer diagnostics, and they emphasize the need for validation on larger clinical datasets to confirm efficacy and applicability in real-world settings.

§ PROPOSED WORK

§.§ Methodology

The ultrasound images are first augmented to handle class imbalance. Following augmentation, the images are preprocessed using a sequence of steps – gamma correction, Gaussian filtering, image resizing, and normalization. Pixel values in ultrasound images can exhibit non-linearities, especially in high- or low-intensity regions; gamma correction compensates for these non-linearities, leading to more accurate and visually clearer images. Gaussian filtering is then applied as an effective technique that reduces noise while preserving edge details. To preserve consistency throughout the dataset and facilitate batch processing, resizing ensures that all images fed into the proposed DL model have the same dimensions. Finally, normalizing the pixel values to a specific range improves the ability of the activation functions to capture the non-linearities in the data; here, the images have been scaled to the range (0, 1). The preprocessed images are then fed to the proposed Spatial-Channel Attention LinkNet Framework with InceptionResNet backbone for segmenting the tumor region.
The segmented tumor maps are then fed to the proposed DCNNIMAF classifier to classify the segmented mass as benign, normal, or malignant. The overall workflow of this proposed work is presented in Figure 1.

§.§ Dataset Exploration

The data utilized in this work was obtained from the Breast Ultrasound Images Dataset <cit.> made available by Arya Shah on Kaggle. It contains a total of 780 ultrasound images along with their corresponding segmented ground-truth masks, split into three categories – benign, malignant, and normal. Figure 2 showcases a sample of ultrasound images from the dataset overlaid with their corresponding segmentation maps. The dataset exhibits a significant class imbalance: benign samples contribute 56.5% of the data, while malignant and normal samples cover only 26.7% and 16.9%, respectively. The distribution of ultrasound images exhibiting this class imbalance is represented graphically in Figure 3. To mitigate this imbalance and avoid bias during the training of the segmentation and classification models, augmentation techniques are utilized. Specifically, random crop, random rotation, random zoom, random shear, and random exposure were applied to augment the images belonging to the 'normal' and 'malignant' classes. The rationale behind this augmentation is to raise the data counts of the 'normal' and 'malignant' classes, aligning them more closely with the larger 'benign' class. By increasing the training data for the 'normal' and 'malignant' classes through augmentation, the effects of class imbalance are mitigated, enabling the models to learn effectively from all classes. This ensures that the segmentation and classification models are trained on a more balanced dataset, improving their ability to accurately segment, identify, and characterize breast tumors across different classes. The augmentation resulted in a well-balanced distribution of each category, represented in Figure 4.

§.§ Preprocessing

Following augmentation, the images were preprocessed using a four-stage pipeline – gamma correction, Gaussian filtering, resizing, and image normalization. The output images obtained after each preprocessing step are shown in Figure 5, and the overall preprocessing algorithm is presented in Algorithm 1.

§.§.§ Gamma Correction

Gamma correction serves as the initial preprocessing step tailored specifically for breast ultrasound images. It plays a pivotal role in enhancing the visibility of crucial anatomical structures and subtle details within the images. By adjusting the image's brightness and contrast, gamma correction improves the delineation of tumor boundaries and enhances the visibility of tumor features. This step is particularly critical in breast cancer tumor segmentation, where accurate visualization of tumor margins is essential for precise delineation and subsequent analysis. Gamma correction can be represented mathematically as

I_out = I_in^γ,

where I_in and I_out are the input and output pixel intensities and γ is the correction factor.

§.§.§ Gaussian Filtering

Following gamma correction, Gaussian filtering is employed to mitigate speckle noise, a common artifact in ultrasound images that can obscure tumor boundaries and hinder accurate segmentation. By selectively smoothing out noise while preserving essential details, Gaussian filtering improves the clarity of tumor features and enhances the accuracy of segmentation algorithms.
This step is crucial in breast cancer tumor segmentation and classification, as it reduces noise artifacts and improves the fidelity of tumor delineation, leading to more accurate and reliable segmentation results. Gaussian filtering ensures that the images are cleaner and more conducive to subsequent segmentation and classification tasks, facilitating the accurate identification and characterization of breast tumors. Gaussian filtering is given by

I_out(x, y) = ∑_i=-N/2^N/2 ∑_j=-N/2^N/2 I_in(x+i, y+j) · G(i, j),

where I_in and I_out are the input and output images, G is the Gaussian kernel, and N is the kernel size.

§.§.§ Ultrasound Image Resizing

Once the images have undergone gamma correction and Gaussian filtering, resizing is performed to standardize image dimensions, facilitating compatibility with segmentation and classification algorithms. Standardized image dimensions are essential for ensuring consistency and comparability across different datasets and analysis pipelines. Resizing enables researchers to create a uniform framework for analysis, simplifying the processing pipeline and reducing computational complexity. Resizing of breast cancer images can be represented mathematically as

I_out(x', y') = I_in(x/r_x, y/r_y),

where r_x and r_y are the scaling factors along the two axes.

§.§.§ Pixel Normalization

The last preprocessing stage, normalization, scales the pixel values of images to a standardized range, typically from 0 to 1. This normalization process is crucial for ensuring consistency in pixel intensity across different images, which is essential for training machine learning models and neural networks. Normalization enhances the comparability of images and improves the convergence speed of machine learning algorithms during training. By eliminating variations in intensity that may arise due to differences in acquisition parameters or imaging conditions, normalization ensures that segmentation and classification algorithms can learn effectively from the data, leading to more accurate and reliable analysis results. The normalization process is denoted as

I_out = (I_in - min(I_in)) / (max(I_in) - min(I_in)).
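To make the pipeline concrete, the four stages can be condensed into the short OpenCV/NumPy sketch below, complementing Algorithm 1; the gamma value, kernel size, and target resolution shown are illustrative defaults rather than prescribed settings.

```python
import cv2
import numpy as np

def preprocess(image, gamma=0.8, ksize=(5, 5), target=(256, 256)):
    """Gamma correction -> Gaussian filtering -> resizing -> normalization."""
    img = image.astype(np.float32) / 255.0
    img = np.power(img, gamma)              # gamma correction: I_out = I_in^gamma
    img = cv2.GaussianBlur(img, ksize, 0)   # suppress speckle noise
    img = cv2.resize(img, target, interpolation=cv2.INTER_AREA)
    # min-max normalization to the range (0, 1)
    img = (img - img.min()) / (img.max() - img.min() + 1e-8)
    return img
```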
§.§ Dual Attention and CNN Backbone Enhanced LinkNet Framework for Breast Cancer Segmentation

This section presents the proposed framework for breast cancer tumor segmentation, a LinkNet framework with an InceptionResNet backbone employing a dual spatial-channel attention mechanism. The framework takes preprocessed breast ultrasound images and their corresponding ground-truth masks as input to the segmentation model and provides the predicted segmentation map as output. The LinkNet architecture <cit.> is a deep learning model designed for semantic segmentation tasks, particularly in the context of biomedical imaging. The encoder of the proposed framework is built using the InceptionResNet CNN model <cit.>, which is designed to capture contextual information from the entire input image. The decoder is a series of transpose convolution layers with dual spatial-channel attention mechanisms incorporated within the decoder blocks. The proposed model incorporates a series of essential blocks engineered to extract and refine features from input breast ultrasound images, facilitating the segmentation task. These blocks encompass InceptionResNet blocks, reduction blocks, a stem block, and decoder blocks; the decoder block further contains a spatial-channel attention block. Each block plays a pivotal role in the feature extraction and segmentation process. The layer architecture of the proposed segmentation framework is presented in Figure 6, and its workflow is given in Algorithm 2.

§.§.§ Encoder Section

The encoder section is designed around an InceptionResNet CNN backbone and thus consists of a stem block, three types of InceptionResNet blocks, and two types of reduction blocks. The stem block begins with three convolution layers, followed by a max-pooling layer and a convolution layer executed in parallel. A filter concatenation layer follows, after which the flow splits into two parallel paths: one contains two convolutional layers, while the other is composed of four. Both paths are combined by filter concatenation, followed again by a parallel convolution and max pooling and a further filter concatenation. The convolution operation is

Conv(I_(i,j), F) = ∑_m=0^M-1 ∑_n=0^N-1 I_(i+m,j+n) · F_(m,n) + b,

where I is the input feature map, F the filter of size M x N, and b the bias.

The InceptionResNet blocks come in three types, named A, B, and C. Block A is composed of three parallel paths and a residual connection. The first path consists of a single convolution, while the second and third paths consist of three and two convolutions, respectively. The three paths are combined through another convolution and then merged with the residual connection. Blocks B and C are similar to each other; the major difference is the size of their feature maps, since an average pooling operation downsamples the data between block B and block C. Each is composed of two parallel paths, one with three convolutions and the other with one; the paths are combined by a further convolution, whose result is merged with the residual connection through yet another convolution. The concatenation and activation operations are

Concatenation(A, B)_(i,j,k) = A_(i,j,k) if 1 ≤ k ≤ depth(A), and B_(i,j,k-depth(A)) if depth(A) < k ≤ depth(A) + depth(B),

ReLU(x) = x if x > 0, and 0 otherwise.

The reduction blocks come in two variants, A and B. Block A begins with a filter concatenation that splits into three paths: the first and third consist of a max-pooling and a convolution operation, respectively, and the second is composed of three convolution layers. The three paths are then recombined by filter concatenation. Block B likewise contains a parallel max-pooling operation and three convolutions, but is composed of four parallel paths: the first two as just described, and the other two consisting of two convolutions each; all four paths are combined by filter concatenation. The pooling and concatenation operations are

MaxPooling(I)_(i,j) = max_p=0^k-1 max_q=0^k-1 I_(i·s+p, j·s+q),

FilterConcat(F_1, F_2)(X) = Concatenation(Conv(X, F_1), Conv(X, F_2)),

where k is the pooling window size and s the stride.
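In practice, this encoder need not be assembled layer by layer: a pre-trained implementation can serve as the backbone directly. The TensorFlow/Keras sketch below shows one way to expose multi-scale encoder outputs for the decoder's skip connections; the listed layer names are placeholders to be read off the backbone summary rather than verified values.

```python
import tensorflow as tf

# Pre-trained InceptionResNetV2 as the encoder backbone.
backbone = tf.keras.applications.InceptionResNetV2(
    include_top=False, weights="imagenet", input_shape=(256, 256, 3))

# Tap intermediate activations at decreasing spatial resolution; the names
# below are placeholder assumptions, to be chosen from backbone.summary().
skip_names = ["activation", "activation_3", "activation_74"]
skips = [backbone.get_layer(name).output for name in skip_names]

# Encoder exposing skip features plus the final, most compressed feature map.
encoder = tf.keras.Model(inputs=backbone.input, outputs=skips + [backbone.output])
```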
§.§.§ Decoder Section

The decoder section is composed of decoder blocks, spatial-channel attention blocks, convolution and transpose convolution layers, and a softmax activation function. The decoder block begins with a convolution and a batch normalization operation, followed by a transpose convolution with batch normalization, and then another convolution with batch normalization:

BN(x) = γ (x - μ) / √(σ² + ε) + β,

where μ and σ² are the batch mean and variance, ε is a small constant, and γ, β are learnable parameters;

TransposeConv(X, K)_(i,j,d) = ∑_p=0^F-1 ∑_q=0^F-1 ∑_c=0^C-1 X_(i+s·p, j+s·q, c) · K_(p,q,c,d),

where X is the input feature map, K the filter kernel of size F with C input channels, and s the stride.

The spatial-channel attention block begins with two double-convolution operations applied in parallel, followed by the addition of the two resulting feature maps. Non-linearity is then introduced with a ReLU activation, followed by another convolution and a sigmoid activation function that restricts the values to the range (0, 1):

AveragePooling(X)_(i,j) = (1/k²) ∑_p=0^k-1 ∑_q=0^k-1 X_(i·s+p, j·s+q),

Sigmoid(x) = 1 / (1 + e^(-x)),

Addition(A, B)_(i,j) = A_(i,j) + B_(i,j).

This is followed by channel attention. The channel attention block applies two pooling operations (max pooling and average pooling) in parallel, and the resulting feature maps are passed to a shared multi-layer perceptron (MLP) composed of a flatten layer, a Gaussian Error Linear Unit (GELU) activation, and dropout layers; flatten and dropout are then applied once more. The decoder ends with a transpose convolution, two convolutions, and a softmax activation function that yields the segmented output:

h = GELU(W · x + b),

GELU(x) = x · Φ(x),

where Φ is the cumulative distribution function of the standard Gaussian;

O_(i,j) = I_(i,j) / (1 - rate),

which rescales the surviving activations under dropout with the given dropout rate;

softmax(z)_i = e^(z_i) / ∑_j e^(z_j).
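A compact sketch of this dual attention block, written with TensorFlow/Keras layers (TF ≥ 2.4 for the GELU activation), is given below. It illustrates the mechanism described above rather than reproducing the exact implementation: the filter count is a free parameter, the incoming feature map is assumed to already have that many channels, and the global pooling layers yield flat vectors, making the MLP's flatten stage implicit.

```python
import tensorflow as tf
from tensorflow.keras import layers

def spatial_channel_attention(x, filters):
    # Spatial attention: two parallel double-convolution paths, added and gated.
    a = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    a = layers.Conv2D(filters, 3, padding="same")(a)
    b = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    b = layers.Conv2D(filters, 3, padding="same")(b)
    s = layers.Activation("relu")(layers.Add()([a, b]))
    spatial_mask = layers.Conv2D(1, 1, activation="sigmoid")(s)   # values in (0, 1)

    # Channel attention: parallel global pooling into a shared MLP with GELU/dropout.
    shared = layers.Dense(filters, activation=tf.nn.gelu)
    gap = layers.Dropout(0.2)(shared(layers.GlobalAveragePooling2D()(x)))
    gmp = layers.Dropout(0.2)(shared(layers.GlobalMaxPooling2D()(x)))
    channel_mask = layers.Dense(filters, activation="sigmoid")(layers.Add()([gap, gmp]))
    channel_mask = layers.Reshape((1, 1, filters))(channel_mask)

    # Re-weight the input by both masks (broadcast over positions / channels).
    out = layers.Multiply()([x, spatial_mask])
    return layers.Multiply()([out, channel_mask])
```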
§.§.§ Workflow and Execution

Initially, the image is processed through a stem block, which captures low-level features such as edges and textures. These edges outline the boundaries of potential tumors, while textures reveal the internal structure of the masses, which often differs significantly between healthy tissue and malignancies. Following the stem block, the image progresses through five InceptionResNet-A blocks. Breast abnormalities manifest at different scales - microcalcifications require finer resolution, whereas architectural distortions span larger areas - so multi-scale feature capture is crucial for analyzing ultrasound images of breast tissue. The residual connections facilitate deeper networks by mitigating the vanishing gradient problem while capturing a broad spectrum of features at various scales. Subsequently, the image encounters a reduction block, which reduces the spatial dimensions of the feature maps. This reduction allows the model to focus on higher-level features and significantly reduces computational complexity, facilitating more efficient processing; it is particularly useful for identifying broader patterns indicative of cancer, such as the overall shape and orientation of a mass. The image then navigates through two InceptionResNet-B blocks and a further stem block. This refines the detection of mid-level features, such as more nuanced textural patterns and subtle edge variations, while the repeated stem block extracts additional low-level features that complement the more complex features identified in the intermediate stages, ensuring the model has a comprehensive grasp of the ultrasound image's content. Following this, the image passes through another reduction block, which again reduces the spatial dimensions of the feature maps and lets the model concentrate on the features essential for the precise delineation of tumor boundaries. The image then enters five InceptionResNet-C blocks and finally an average pooling layer. These operations are optimized for extracting the high-level semantic features needed to differentiate between the various tissue types present in the image. The compressed feature map is then fed to the LinkNet decoder, which transforms the abstracted feature map into a spatially coherent segmentation map. In the decoder, upsampling refines the segmentation map generated by the encoder, and the attention mechanism focuses specifically on the tumor region, enhancing its emphasis. By integrating spatial and channel attention mechanisms, the model can enhance feature maps by emphasizing informative spatial locations and channels. This comprehensive approach improves the model's capability to understand intricate tumor patterns and structures, thereby enhancing segmentation performance. Initially, the feature map is fed to two convolutional blocks followed by a spatial-channel attention block, a pairing repeated three times. These perform a preliminary enhancement of the map, sharpening details and adjusting contrast to make the underlying structures more prominent, and ensure that the feature map contains clear, distinguishable elements corresponding to the anatomical structures within the breast ultrasound images. It is then passed to the first decoder block, which is designed to capture the high-level semantic features essential for segmenting larger structures within breast ultrasound images and facilitates the reconstruction of the spatial relationships and contextual information abstracted away during encoding. The spatial-channel attention block that follows this decoder block scrutinizes the feature map to identify and accentuate the regions most likely to contain tumor structures, assigning higher weights to spatial locations that exhibit characteristics typical of tumors, such as irregular shapes and unusual textural patterns. The channel attention mechanism analyzes the feature map across channels to determine which ones carry the most relevant information for segmentation; by amplifying the signals from these informative channels, the model can better discern the unique features that differentiate tumor tissue from the surrounding healthy tissue. Finer textures and structures within the tumors are captured as the feature map moves up to the second decoder block. Here the spatial-channel attention block adjusts the feature map's weights to emphasize the spatial locations where these detailed features are most prominent, resulting in more precise segmentation of smaller tumor components, while channel attention identifies the feature-map channels most relevant to the texture and shape of the tumors.
In the third decoder block, the feature map captures even more detailed features, including intricate patterns and structures within the tumors. The attention mechanism in this block focuses on the boundaries of the tumor region, which helps the model improve the quality of the produced segmentation map, making the output more accurate and minimizing extraneous markings. The final decoder block is responsible for capturing the most detailed features, including the specific patterns and structures unique to each tumor. Its attention mechanism allows the model to distinguish between benign and malignant tumor types and to identify subtle variations within a single tumor type. The output from this decoder block is transpose-convolved to ensure a consistent output shape for the segmentation map, followed by convolutions to correct the number of output channels, finally transforming the abstracted feature map into a detailed and accurate segmentation map. The model was trained by backpropagating a custom loss function (21), the sum of the focal loss (19) and the Dice (Jaccard) loss (20) computed after each training epoch:

loss_focal(p_t) = -(1 - p_t)^γ log(p_t),

where p_t is the predicted probability of the true class and γ is the focusing parameter;

loss_Jaccard = 1 - |V_p ∩ V_g| / |V_p ∪ V_g|,

where V_p and V_g denote the predicted and ground-truth segmentation regions;

loss_total = loss_focal + loss_Jaccard.

The model specifications and parameters of the proposed Spatial-Channel Attention LinkNet Framework with InceptionResNet Backbone are shown in Table 1.
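The combined objective in (19)-(21) can be sketched for a binary mask as follows; the focusing parameter γ = 2.0 and the smoothing constants are illustrative choices added for numerical stability.

```python
import tensorflow as tf

def focal_loss(y_true, y_pred, gamma=2.0, eps=1e-7):
    """loss_focal = -(1 - p_t)^gamma * log(p_t), averaged over pixels."""
    y_pred = tf.clip_by_value(y_pred, eps, 1.0 - eps)
    p_t = y_true * y_pred + (1.0 - y_true) * (1.0 - y_pred)
    return -tf.reduce_mean(tf.pow(1.0 - p_t, gamma) * tf.math.log(p_t))

def jaccard_loss(y_true, y_pred, eps=1e-7):
    """loss_Jaccard = 1 - |V_p intersect V_g| / |V_p union V_g|."""
    inter = tf.reduce_sum(y_true * y_pred)
    union = tf.reduce_sum(y_true) + tf.reduce_sum(y_pred) - inter
    return 1.0 - (inter + eps) / (union + eps)

def total_loss(y_true, y_pred):
    return focal_loss(y_true, y_pred) + jaccard_loss(y_true, y_pred)

# Usage: model.compile(optimizer="adam", loss=total_loss)
```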
§.§ Multi-Attention Integrated Deep CNN Framework for Breast Cancer Classification

This section presents the proposed breast cancer deep learning classification model, coined Deep CNN with an Integrated Multi-Attention Framework (DCNNIMAF). Utilizing multiple attention modules integrated within its architecture, the proposed approach is designed to effectively classify breast ultrasound images into malignant, benign, or normal categories. The model takes preprocessed breast ultrasound images as input and outputs the predicted class. The DCNNIMAF architecture integrates several pivotal blocks designed to extract pertinent features from the input images: convolutional blocks, double convolutional blocks, self-attention blocks, and fully connected layers. Each block plays a crucial role in feature extraction and classification. The layer architecture of the proposed DCNNIMAF model is shown in Figure 7.

§.§.§ Convolutional Block

The convolutional block within DCNNIMAF consists of a convolutional layer, followed by a batch normalization layer and an activation layer; the activation function alternates between Leaky ReLU and SiLU in different convolutional blocks. The operations performed by the block on the input feature map are

O_(i,j) = ∑_m=0^M-1 ∑_n=0^N-1 I_(i+m,j+n) · F_(m,n) + b,

BN(x) = γ (x - μ) / √(σ² + ε) + β,

LeakyReLU(x) = x if x > 0, and αx otherwise,

SiLU(x) = x / (1 + e^(-x)),

where F is the filter, b the bias, α the leaky slope, and γ, β the learnable normalization parameters.

§.§.§ Double Convolutional Block

The double convolutional block comprises two consecutive convolutional layers with 256 filters, a kernel size of 3, and a padding of 1. Mathematically, the operation of this block can be represented as

O = conv_2(conv_1(I)).

§.§.§ Self-Attention Block

The self-attention block in DCNNIMAF computes attention weights between every pair of positions within the feature map and uses them to re-weight the values:

Attention(Q, K, V) = softmax(Q K^T / √(d_k)) V,

where Q, K, and V are the query, key, and value matrices and d_k is the dimensionality of the key vectors.
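The sketch below illustrates this operation on a convolutional feature map with a fixed input size, flattening the spatial grid into a sequence of positions; the projection width d_k = 64 is an arbitrary illustrative choice.

```python
import tensorflow as tf
from tensorflow.keras import layers

def self_attention(x, d_k=64):
    """Scaled dot-product self-attention over spatial positions of x (B, H, W, C)."""
    h, w, c = x.shape[1], x.shape[2], x.shape[3]
    seq = tf.reshape(x, (-1, h * w, c))        # positions as a sequence
    q = layers.Dense(d_k)(seq)                 # queries
    k = layers.Dense(d_k)(seq)                 # keys
    v = layers.Dense(c)(seq)                   # values
    scores = tf.matmul(q, k, transpose_b=True) / tf.sqrt(float(d_k))
    alpha = tf.nn.softmax(scores, axis=-1)     # attention weights
    return tf.reshape(tf.matmul(alpha, v), (-1, h, w, c))
```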
§.§.§ Workflow and Execution

The flow of information through DCNNIMAF begins with an input layer of shape (256, 256, 3). Initially, the segmentation map undergoes a convolutional block with 512 filters, a padding of 2, and a kernel size of 3, which extracts low-level features such as textures and edges from the input. The output of the first convolutional block is then passed through another convolutional block with 256 filters, the same kernel size and padding, but with SiLU activation. The SiLU activation strengthens the non-linearity available for higher-level feature extraction, which helps distinguish between the different breast tissue characteristics indicative of cancerous growth. Subsequently, a double convolutional block is applied to further refine feature extraction: its consecutive convolutional layers, with 256 filters each, extract deeper and more abstract features from the input. Following this, a convolutional block with 128 filters, a kernel size of 4, and padding of 2 is employed, accompanied by a Leaky ReLU activation. This operation distills the extracted features into more compact and discriminative representations, facilitating the model's capability to detect and interpret complex patterns within the tumor's structure, from textural anomalies to irregular shapes. Continuing the feature refinement process, another convolutional block with 128 filters, a padding of 1, and a kernel size of 3 is applied, this time utilizing SiLU activation. Two further convolutional blocks follow - the first with 128 filters, a kernel size of 4, and a padding of 2, and the second with 64 filters, a padding of 1, and a kernel size of 3. These features are then fed to a spatial attention mechanism, enhancing the model's capacity to adjust to subtle differences between the tissue characteristics associated with malignant and benign tumors. The resulting feature map is then concatenated with the output of a convolution and batch normalization layer with 64 filters, a padding of 2, and a kernel size of 3. By concatenating high-level and low-level features from different layers, the model forms a more comprehensive representation of the input image, allowing it to learn about the presence of microcalcifications and the density of the tumor tissue, which are most indicative of malignancy. This concatenated output undergoes further processing through convolutional and activation layers before being upsampled and concatenated again with intricate feature attention results. This iterative refinement ensures that the model can effectively leverage both global and local contextual details present in the input segmentation map. The result is then fed through additional convolutional blocks and pooling layers before being passed through a self-attention block. Incorporating self-attention allows the model to assign more weight to cues such as the distribution of cells or the presence of necrosis, filtering out less relevant information and potential artifacts that could obscure diagnosis. Ultimately, the output of the self-attention block is flattened and subjected to dropout regularization to mitigate overfitting. Dropout prevents the model from relying on specific features or patterns within the training data that may not generalize well to unseen samples, improving its robustness and generalization performance. The feature map is then directed into a fully connected layer containing 128 neurons and proceeds to an output layer with three neurons and softmax activation for classification into malignant, benign, or normal categories. This final step consolidates the extracted features into a compact representation suitable for classification, enabling the model to make accurate predictions concerning the existence and severity of breast tumors based on the input ultrasound image. The model's training parameters were updated after each epoch via backpropagation using the categorical cross-entropy loss criterion (29):

loss_CE = -log( e^(x_p) / ∑_j=1^N e^(x_j) ),

where x_p is the score of the true class and N is the number of classes. The working algorithm for classifying breast cancer as benign, malignant, or normal is demonstrated in Algorithm 3. The model specifications and parameters of the proposed DCNNIMAF classifier are shown in Table 2.

§ EXPERIMENTAL SETUP AND RESULTS

This section outlines the findings obtained from training the proposed models and discusses them. The experiments were conducted on a system with the following specifications: CPU - AMD Ryzen 7 4800H with Radeon Graphics, x86_64 architecture, running at 3 GHz with 8 cores; GPU - NVIDIA GeForce RTX 3050 (PCI Bus 1); and 32 GB of RAM. These details are summarized in Table 3.

§.§ Segmentation Evaluation Metrics

The proposed segmentation framework's performance was evaluated during the training and validation phases using the following segmentation metrics.

§.§.§ Accuracy

Accuracy measures the proportion of pixels in the segmentation map that are classified correctly with respect to the ground truth:

accuracy = (number of correctly classified pixels) / (total number of pixels in the image).

§.§.§ IoU Score

The IoU score, often termed the Jaccard index, is the intersection of the ground-truth mask with the predicted segmentation mask divided by their union. It represents the share of the total tumor region (ground truth) that is correctly segmented:

IoU = |Area_segmentation ∩ Area_groundTruth| / |Area_segmentation ∪ Area_groundTruth|.

§.§.§ Dice Coefficient

The Dice coefficient, often recognized as the Dice similarity index, assesses the overlap between the ground truth and the predicted segmentation mask:

Dice = 2 |Area_segmentation ∩ Area_groundTruth| / (|Area_segmentation| + |Area_groundTruth|).
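Both overlap metrics can be computed directly from binary masks, as in the following sketch; the smoothing constant is an illustrative guard against division by zero for empty masks.

```python
import numpy as np

def dice_iou(pred, gt, eps=1e-7):
    """Dice coefficient and IoU score for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    dice = (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)
    iou = (inter + eps) / (union + eps)
    return dice, iou
```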
Figures 8 and 9 depict the training and validation curves for accuracy and total loss, respectively, obtained while training the proposed segmentation framework. From the graphs, it is evident that the model achieved a high accuracy of 98.1%, with a minimal loss of 0.06 at the end of 100 epochs. The model also achieved an impressive Dice coefficient of 97.2% and an IoU score of 96.9%; the training and validation curves of these metrics are shown in Figures 10 and 11, respectively.

§.§.§ Performance Evaluation and Discussion

From the segmentation results, it can be inferred that the model demonstrates impressive performance. The high IoU, Dice coefficient, and accuracy scores, along with the minimal total loss, imply that the InceptionResNet backbone successfully extracted important characteristics from the preprocessed input images, and that the dual-attention mechanism in the decoder blocks helped fine-tune the segmentation maps. Grad-CAM (Gradient-weighted Class Activation Mapping) is a deep learning method used to visualize the regions of an input image that guide the model's decision-making process <cit.>. It is particularly useful for understanding how Convolutional Neural Networks (CNNs) arrive at their predictions, especially in tasks like medical image segmentation, where it is necessary to verify that the attention mechanism operates as intended. The Grad-CAM visualizations of the attention block at the topmost decoder block, provided in Figure 12, show how the attention mechanism focuses on specific regions of the feature map, highlighting the importance of these regions for the segmentation task. This visualization helps in understanding how the attention mechanism contributes to segmentation performance by emphasizing the most relevant features and their spatial locations. From the Grad-CAMs, it can be observed that the attention mechanism progressively shifts its focus towards the tumor region, with localization accuracy improving as the number of training epochs increases. The U-Net <cit.> model attains a Dice coefficient of 82.52% and an IoU score of 69.76%. These scores reflect a foundational capability in segmenting tumors from breast ultrasound images, but they also highlight the model's limitations in capturing the full extent of tumor boundaries and internal structures, particularly the nuanced textures and densities often found in breast tissue. Res U-Net <cit.> enhances the original U-Net, with a Dice coefficient of 88% and an IoU score of 80%, demonstrating improved performance through the incorporation of residual connections; however, further refinements in its network architecture and feature extraction are necessary to achieve optimal segmentation accuracy, especially for the variable echo intensities and shadowing effects commonly encountered in breast ultrasound imaging. By integrating a DenseNet backbone, the U-Net with DenseNet Backbone <cit.> reaches a Dice coefficient of 89.8% and an IoU score of 79.1%, showcasing the benefits of dense connectivity for segmentation outcomes. However, additional strategies may be required to fully leverage the complex patterns inherent in breast ultrasound images, such as the differentiation between cystic and solid components of tumors, which is critical for accurate diagnosis. The Multi-scale Fusion U-Net <cit.> achieves a Dice coefficient of 95.35% and an IoU score of 91.12%, marking a significant improvement over earlier models, but it performs suboptimally when handling the heterogeneity of breast tissues and the dynamic nature of tumor growth observed in ultrasound sequences.
The proposed Spatial-Channel Attention LinkNet Framework with InceptionResNet Backbone stands out with a Dice coefficient of 97.20% and an IoU score of 96.91%. This performance is attributed to the integration of spatial-channel attention mechanisms and the robust InceptionResNet backbone, which together enable precise localization and delineation of tumors, including the ability to distinguish between different types of breast lesions based on their texture, shape, and boundary characteristics.

§.§ Classification Evaluation Metrics

The outcomes of the proposed DCNNIMAF model for breast cancer classification are evaluated during the training and validation phases using the following classification metrics.

§.§.§ Accuracy

Accuracy is a fundamental metric that evaluates the overall performance of a model across all classes. It measures the proportion of true classifications (both true positives and true negatives) among all classified images, providing a comprehensive view of the model's effectiveness in correctly classifying instances.

Accuracy = ∑_k=1^n (TP_k + TN_k) / ∑_k=1^n (TP_k + TN_k + FP_k + FN_k)

Where:
* TP denotes the number of true positives.
* TN denotes the number of true negatives.
* FP denotes the number of false positives.
* FN denotes the number of false negatives.
* n denotes the total number of classes.

§.§.§ Precision

Precision focuses on the proportion of true positive predictions among all positive predictions made by the classifier. It is particularly important when false positives are costly, as it helps minimize their impact on the overall performance of the model.

Precision = ∑_k=1^n TP_k / ∑_k=1^n (TP_k + FP_k)

§.§.§ Recall

Recall, also known as sensitivity, measures the ability of the classifier to identify all relevant instances within a specific class. It is crucial when missing a positive instance (a false negative) is more detrimental than flagging a negative instance as positive (a false positive). Recall helps ensure that the model does not overlook relevant instances.

Recall = ∑_k=1^n TP_k / ∑_k=1^n (TP_k + FN_k)

§.§.§ F1-Score

The F1-score combines precision and recall into a single measure, providing a balanced view of the model's performance. It is useful when false positives and false negatives are equally important and a balance between the two metrics is desired.

F1 Score = 2 ∑_k=1^n TP_k / ∑_k=1^n (2TP_k + FP_k + FN_k)
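These quantities follow directly from a confusion matrix. The sketch below computes per-class precision, recall, and F1 and reports their macro averages; summing the counts before dividing instead yields the aggregated forms given above.

```python
import numpy as np

def classification_metrics(cm, eps=1e-12):
    """Metrics from a confusion matrix cm[actual, predicted]."""
    tp = np.diag(cm).astype(float)
    fp = cm.sum(axis=0) - tp          # predicted as class k but actually another class
    fn = cm.sum(axis=1) - tp          # actually class k but predicted as another class
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    f1 = 2 * precision * recall / (precision + recall + eps)
    accuracy = tp.sum() / cm.sum()
    return accuracy, precision.mean(), recall.mean(), f1.mean()
```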
The proposed DCNNIMAF classifier was trained for 100 epochs, and the evaluation metrics were recorded after each epoch. The training and validation curves obtained for accuracy and categorical cross-entropy loss are depicted in Figures 13 and 14, respectively. From the plots, it can be seen that the classification model obtained a high accuracy of 99.2% at a minimal loss of 0.03. Figures 15, 16, and 17 display the training and validation precision, recall, and F1-score curves, respectively. It can be inferred from the graphs that the proposed model has minimized false positives and false negatives, achieving a remarkable precision of 99.3% and a recall of 99.1%; these high values contribute to the high F1-score of 99.1%.

§.§.§ Performance Evaluation and Discussion

The normalized confusion matrix obtained on the validation data using the trained DCNNIMAF classification model is presented in Figure 18. A normalized confusion matrix is one whose values are normalized to show proportions or percentages; because the values lie between 0 and 1, it is easy to interpret and useful for comparing classification performance across classes. In Figure 18, the normalized confusion matrix depicts the proposed model's classification performance across the three breast cancer classes: "benign," "normal," and "malignant." Each row corresponds to the actual class, and each column represents the predicted class. The matrix's values show the proportion of true-class cases that were successfully classified (along the diagonal) or misclassified (off-diagonal). From the matrix, it can be observed that the model attained remarkable accuracy: with most values along the diagonal close to one, the majority of samples were categorized correctly. For the "benign" class, the model had a true positive rate of 0.99, indicating that 99% of benign tumors were properly categorized. In the "normal" class, the true positive rate was 0.98, implying that 98% of normal cases were correctly identified. Similarly, in the "malignant" class, the true positive rate was 0.99, indicating that 99% of malignant tumors were correctly identified. Misclassification errors were minor, with extremely low false positive and false negative rates. Table 5 compares the proposed classifier's performance with existing models. The proposed DCNNIMAF model is compared with other pretrained CNNs, including EfficientNetV2 <cit.>, MobileNetV2 <cit.>, DenseNet121 <cit.>, NASNetMobile <cit.>, Xception <cit.>, InceptionV3 <cit.>, InceptionResNetV2 <cit.>, MobileNet <cit.>, VGG16 <cit.>, and ResNet50 <cit.>. This comparison aims to provide an overall assessment of the proposed model relative to baseline CNNs widely utilized for breast cancer classification. All models, including the proposed one, were trained on the same dataset, and the outcomes are presented in Table 5. The performance of these models is evaluated using Accuracy (Acc), Precision (Prec), Recall (Rec), and F1-Score (F1). From Table 5, it is evident that the proposed DCNNIMAF model outperforms all baseline CNN models on these metrics. EfficientNetV2 overfits the data owing to difficulty in generalizing nuanced features of breast cancer, such as the irregular margins of malignant lesions or the varying degrees of echogenicity observed in ultrasound images. MobileNetV2's lightweight architecture struggles with the detailed analysis required to detect early signs of breast cancer, such as subtle changes in echotexture or the presence of microcalcifications within lesions. While DenseNet121 benefits from dense connectivity for feature reuse, its performance in identifying specific breast cancer markers, such as the orientation and distribution of calcifications or the assessment of lesion vascularity, is compromised. NASNetMobile, designed for mobile applications, lacks the precision needed to capture the complex interplay of features indicative of breast cancer, such as the irregular shapes of masses or variations in posterior acoustic shadowing. Xception does not fully exploit the spatial dependencies crucial for identifying specific indicators of breast cancer, such as the pattern of calcifications or the echogenicity of surrounding tissue.
InceptionV3's design compromises for computational efficiency limit its capacity to analyze the multidimensional data characteristic of breast cancer ultrasound images, particularly in detecting subtle architectural distortions or changes in tissue echotexture. Despite its sophisticated architecture, InceptionResNetV2 does not optimally align with the need to identify specific, disease-related features such as the texture and margin irregularities of masses or the presence of ductal abnormalities. MobileNet's focus on efficiency limits the depth necessary for detailed feature extraction from breast cancer ultrasound images. VGG16, comparatively simple and shallow, struggles with the detailed analysis required to detect and classify features such as posterior acoustic enhancement, leading to lower accuracy in validation tests. Features such as the assessment of lesion margins may not be adequately learned given ResNet50's depth and focus. The proposed DCNNIMAF model distinguishes itself by effectively integrating multiple spatial and self-attention mechanisms, enabling precise identification of critical features such as calcifications, architectural distortions, and mass margins. These enhancements allow the model to capture the complex, heterogeneous pathology of breast cancer evident in ultrasound imagery. From the results presented in Table 6, it is apparent that the DCNNIMAF model proposed in this research outperforms all other models in existing research. The ensemble of fine-tuned VGG16 and VGG19 <cit.> achieves moderate performance, with accuracy and F1-scores around 95%; this comparatively low performance indicates potential limitations in its ability to fully capture the complexity of breast cancer pathology. The CNN-based ensemble learner with an MLP meta-classifier <cit.> shows high performance, with an accuracy of 98%, but struggles to identify subtle changes in the irregular shapes of masses. BCCNN <cit.> shows promising results, with metrics around 98%; however, the slight drop in F1-score compared to the highest performers suggests challenges in maintaining the balance between precision and recall that is essential for minimizing errors in breast cancer diagnosis. The ResNet50 hybrid with SVM <cit.> presents strong recall but a lower precision score; this discrepancy indicates that while the model identifies many positive cases, it struggles to accurately distinguish between benign and malignant lesions, leading to potential false positives. The precision score of the Deep CNN with Fuzzy Merging <cit.> drops significantly, highlighting a critical issue in its ability to classify breast cancer cases precisely; while the model captures broad patterns effectively, it overlooks the finer details necessary for accurate diagnosis. Xception combined with SVM R <cit.> shows balanced performance around 96% but extracts features less efficiently than the other models, limiting its suitability for real-world use. The grid-based deep feature generator with a DNN classifier <cit.> demonstrates a high precision score, but minor discrepancies in recall and F1-score indicate potential inefficiencies in capturing all relevant pathological features, affecting its overall efficacy. InceptionV3 with residual connections <cit.> achieves high recall but significantly lower precision, indicating a marked imbalance in its diagnostic capabilities.
This suggests challenges in accurately discriminating between similar-looking benign and malignant cases, which is crucial for reducing false positives. EDLCDS-BCDC <cit.> presents moderate performance across metrics, around 95% to 97%, highlighting potential shortcomings in identifying subtle differences. The AlexNet, ResNet50, and MobileNetV2 hybrid feature extractor with mRMR and SVM <cit.> shows solid performance, with accuracy and F1-scores around 95%; however, its limitations suggest shortcomings in fully adapting to the complex and varied nature of breast cancer pathology, indicating areas for potential enhancement. The proposed DCNNIMAF model demonstrates remarkable performance across all evaluated metrics, surpassing all other models in this comparison. This can be attributed to its meticulously designed architecture, which incorporates advanced feature extraction techniques and multiple attention mechanisms, allowing for the precise and effective identification of the nuanced pathological features associated with breast cancer. This specialized approach ensures not only high accuracy but also excellent precision and recall, showcasing its robustness and reliability in clinical applications for breast cancer classification.

§ CONCLUSION AND FUTURE DIRECTION

The primary objective of this research is to develop an accurate and efficient system that detects and segments tumor regions within breast ultrasound images and subsequently categorizes them as benign, malignant, or normal, with the aim of improving diagnosis and treatment outcomes for patients. The proposed segmentation model utilizes an InceptionResNet-based LinkNet framework with an intelligent dual-attention mechanism to precisely segment the tumor region. Leveraging spatial and self-attention mechanisms across multiple layers, the DCNNIMAF classification framework enables accurate classification of breast cancer types or the absence of cancerous conditions. The proposed models outperform existing works: in segmentation, they achieve exceptional accuracy, IoU, and Dice coefficient scores, while the classification metrics reveal impressive accuracy, precision, F1-score, and recall. Future work could extend the framework's utility to other medical imaging modalities, facilitating the detection and classification of abnormalities beyond breast ultrasound images.